
How-To Tutorials - Web Development

1802 Articles

Non-default Magento Themes

Packt
07 Oct 2009
4 min read
Uses of non-default themes

Magento's flexibility in themes gives a lot of scope for possible uses of non-default themes. Along with the ability to have seasonal themes on our Magento store, non-default themes have a range of uses:

- A/B testing
- Easily rolled-back themes
- Changing the look and feel of specific pages, such as for a particular product within your store
- Creating brand-specific stores within your store, distinguishing your store's products further, if you sell a variety of the same products from different brands

A/B testing

A/B testing allows you to compare two different aspects of your store. You can test different designs on different weeks, and can then compare which design attracted more sales. Magento's support for non-default themes allows you to do this relatively easily. Bear in mind that the results of such a test may not represent what actually drives your customers to buy your store's products, for a number of reasons. True A/B testing on websites is performed by presenting the different designs to your visitors at random. However, performing it this way may give you an insight into what your customers prefer.

Easily rolled-back themes

If you want to make changes to your store's existing theme, you can make use of a non-default theme to overwrite certain aspects of your store's look and feel without editing your original theme. This means that if your customers don't like a change, or a change causes problems in a particular browser, you can simply roll back the changes by changing your store's settings to display the original theme.

Non-default themes

A default theme is the default look and feel of your Magento store. That is, if no other styling or presentational logic is specified, the default theme is the one that your store's visitors will see. Non-default themes are very similar to default themes in Magento. Like default themes, Magento's non-default themes can consist of one or more of the following elements:

- Skins: images and CSS
- Templates: the logic that inserts each block's content or feature (for example, the shopping cart) into the page
- Layout: XML files that define where content is displayed
- Locale: translations of your store

The major difference between a default and a non-default theme in Magento is that a default theme must have all of the layout and template files required for Magento to run. A non-default theme, on the other hand, does not need all of these to function, as it relies on your store's default theme to some extent.

Locales in Magento: many themes are already partially or fully translated into a huge variety of languages. Locales can be downloaded from the Magento Commerce website at http://www.magentocommerce.com/langs.

Magento theme hierarchy

In its current releases, Magento supports two themes: a default theme and a non-default theme. The non-default theme takes priority when Magento is deciding what it needs to display. Any elements not found in the non-default theme are then looked up in the default theme specified. Future versions of Magento should allow more than one default theme to be used at a time, as well as more detailed control over the hierarchy of themes in your store.

Magento theme directory structure

Every theme in Magento must maintain the same directory structure for its files. The skin, templates, and layout are stored in their own directories.
Templates

Templates are located in the app/design/frontend/interface/theme/template directory of your Magento store's installation, where interface is your store's interface (or package) name (usually default), and theme is the name of your theme (for example, cheese). Templates are further organized in subdirectories by module. So, templates related to the catalog module are stored in the app/design/frontend/interface/theme/template/catalog directory, whereas templates for the checkout module are stored in the app/design/frontend/interface/theme/template/checkout directory.

Layout

Layout files are stored in app/design/frontend/interface/theme/layout. The name of each layout file refers to a particular module. For example, catalog.xml contains layout information for the catalog module, whereas checkout.xml contains layout information for the checkout module.
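To visualize these paths, here is a rough sketch of the resulting layout for a store using the default interface and the article's example theme name, cheese. Only the template and layout directories discussed above are shown:

app/design/frontend/
    default/                  (interface, or package, name)
        cheese/               (theme name)
            template/
                catalog/      (templates for the catalog module)
                checkout/     (templates for the checkout module)
            layout/
                catalog.xml   (layout for the catalog module)
                checkout.xml  (layout for the checkout module)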

HTML5 Canvas

Packt
11 Sep 2013
5 min read
(For more resources related to this topic, see here.)

Setting up your HTML5 canvas (Should know)

This recipe will show you how to first of all set up your own HTML5 canvas. With the canvas set up, we can then move on to look at some of the basic elements the canvas has to offer and how we would go about implementing them. For this task we will be creating a series of primitives such as circles and rectangles. Modern video games make use of these types of primitives in many different forms. For example, both circles and rectangles are commonly used within collision-detection algorithms such as bounding circles or bounding boxes.

How to do it...

As previously mentioned, we will begin by creating our own HTML5 canvas. We will start by creating a blank HTML file. To do this, you will need some form of text editor such as Microsoft Notepad (available for Windows) or the TextEdit application (available on Mac OS). Once you have a basic webpage set up, all that is left to do in order to create a canvas is to place the following between both body tags:

<canvas id="canvas" width="800" height="600"></canvas>

As previously mentioned, we will be implementing a number of basic elements within the canvas. In order to do this we must first link the JavaScript file to our webpage. This file will be responsible for the initialization, loading, and drawing of objects to the canvas. In order for our scripts to have any effect on our canvas we must create a separate file called canvas example. Create this new file within your text editor and then insert the following code declarations:

var canvas = document.getElementById("canvas"),
    context = canvas.getContext("2d");

These declarations are responsible for retrieving both the canvas element and its context. Using the canvas context, we can begin to draw primitives, text, and load textures into our canvas. We will begin by drawing a rectangle in the top-left corner of our canvas. In order to do this, place the following code below our previous JavaScript declarations:

context.fillStyle = "#FF00FF";
context.fillRect(15, 15, 150, 75);

If you were to now view the original webpage we created, you would see the rectangle being drawn in the top-left corner at position X: 15, Y: 15. Now that we have a rectangle, we can look at how we would go about drawing a circle onto our canvas. This can be achieved by means of the following code:

context.beginPath();
context.arc(350, 150, 40, 0, 2 * Math.PI);
context.stroke();

How it works...

The first code extract represents the basic framework required to produce a blank webpage and is necessary for a browser to read and display the webpage in question. With a basic webpage created, we then declare a new HTML5 canvas. This is done by assigning an id attribute, which we use to refer to the canvas within our scripts. The canvas declaration then takes width and height attributes, both of which are also necessary to specify the size of the canvas, that is, the number of pixels wide and pixels high. Before any objects can be drawn to the canvas, we first need to get the canvas element. This is done by means of the getElementById method that you can see in our canvas example. When retrieving the canvas element, we are also required to specify the canvas context by calling a built-in HTML5 method known as getContext. This object gives access to many different properties and methods for drawing edges, circles, rectangles, external images, and so on. This can be seen when we draw a rectangle to our canvas.
This was done using the fillStyle property, which takes in a hexadecimal value and in return specifies the color of an element. Our next line makes use of the fillRect method, which requires a minimum of four values to be passed to it. These values include the X and Y position of the rectangle, as well as the width and height of the rectangle. As a result, a rectangle is drawn to the canvas with the color, position, width, and height specified.

We then move on to drawing a circle to the canvas, which is done by first calling a built-in HTML canvas method known as beginPath. This method is used to either begin a new path or reset the current path. With a new path set up, we then take advantage of a method known as arc that allows for the creation of arcs or curves, which can be used to create circles. This method requires that we pass an X and Y position, a radius, and a starting angle measured in radians. This angle is between 0 and 2 * Pi, where both 0 and 2 * Pi are located at the 3 o'clock position of the arc's circle. We also must pass an ending angle, which is also measured in radians. A helpful figure illustrating these angles appears in the W3C HTML canvas reference, which you can find at the following link: http://bit.ly/UCVPY1. (A complete page assembling these snippets appears at the end of this article.)

Summary

In this article we saw how to set up our own HTML5 canvas. With the canvas set up, we then looked at some of the basic elements the canvas has to offer and how we would go about implementing them.

Resources for Article:

Further resources on this subject:

Building HTML5 Pages from Scratch [Article]
HTML5 Presentations - creating our initial presentation [Article]
HTML5: Generic Containers [Article]
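As promised, here is one way the recipe's snippets might be assembled into a single self-contained page for quick testing. Placing the script inline is a convenience for this sketch; the recipe itself keeps the JavaScript in a separate linked file:

<!DOCTYPE html>
<html>
<head>
  <title>HTML5 Canvas Example</title>
</head>
<body>
  <!-- The canvas element, 800 pixels wide by 600 pixels high -->
  <canvas id="canvas" width="800" height="600"></canvas>
  <script>
    // Retrieve the canvas element and its 2D drawing context
    var canvas = document.getElementById("canvas"),
        context = canvas.getContext("2d");

    // Draw a filled rectangle at X: 15, Y: 15, 150 wide and 75 high
    context.fillStyle = "#FF00FF";
    context.fillRect(15, 15, 150, 75);

    // Draw a circle centered at (350, 150) with a 40-pixel radius,
    // sweeping from 0 to 2 * Pi radians (a full circle)
    context.beginPath();
    context.arc(350, 150, 40, 0, 2 * Math.PI);
    context.stroke();
  </script>
</body>
</html>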

Safely Manage Different Versions of Content with Plone

Packt
08 Oct 2009
2 min read
(For more resources on Plone, see here.)

Introducing versioning

Now that you have learned how to add various types of content, from pages to events to news items, we're ready to introduce a feature of Plone called versioning, which is an important part of content management. The content items you work with in your Plone site may go through many changes over time. Plone provides versioning information to help you manage your content from the time it was initially created through to the current version. By default, Plone provides versioning for the following content types:

- Pages
- News Items
- Events
- Links

Other content types can be configured to provide versioning through the Plone Control Panel under Types. Although you may enable the File type to use versioning, the only changes that are tracked are those items actually describing the File (for example, Title, Description, and so on). Changes to the contents of the File itself are not tracked.

Creating a new version

Versions are created each time you save your content. Note that there is a Change note field at the bottom of the Edit page for content items with versioning enabled. The information entered in the Change note field will be stored along with other versioning information, which you are able to view through the History tab.

Viewing the version history

You can view the list of all of the versions of a content item by clicking on the History tab for that content item. In the History view, the most recent version is listed first. Clicking on any of the column headers will re-sort the listing based on that column heading. The most current version is always labeled Working Copy in the History view.

Previewing previous versions

To preview a specific version, simply click the preview link of the desired revision. For example, if revision 3 is identified, it will be displayed when this link is clicked. On the subsequent page, you may either click on the jump down link to go to the point of the content preview, or scroll down the page to see the actual preview.

Installing Alfresco Software Development Kit (SDK)

Packt
28 Oct 2009
6 min read
Obtaining the SDK

If you are running the Enterprise network, it is likely that the SDK has been provided to you as a binary. Alternatively, you can check out the Enterprise source code and build it yourself. In the Enterprise SVN repository, specific releases are tagged. So if you wanted 2.2.0, for example, you'd check out V2.2.0-ENTERPRISE-FINAL. The Enterprise SVN repository is password-protected. Consult your Alfresco representative for the URL, port, and credentials that are needed to obtain the Enterprise source code. Labs network users can either download the SDK as a binary from SourceForge (https://sourceforge.net/project/showfiles.php?group_id=143373&package_id=189441) or check out the Labs source code and build it. The SVN URL for the Labs source code is svn://svn.alfresco.com. In the Labs repository, nothing is tagged. You must check out HEAD.

Step-by-Step: Building Alfresco from Source

Regardless of whether you are using Enterprise or Labs, if you've decided to build from the source it is very easy to do. At a high level, you simply check out the source and then run Ant. If you've opted to use the pre-compiled binaries, skip to the next section. Otherwise, let's use Ant to create the same ZIP/TAR file that is available on the download page. To do that, follow these steps:

1. Check out the source from the appropriate SVN repository, as mentioned earlier.
2. Set the TOMCAT_HOME environment variable to the root of your Apache Tomcat install directory.
3. Navigate to the root of the source directory, then run the default Ant target:

ant

4. It will take a few minutes to build everything. When it is done, run the distribute task like this:

ant -f continuous.xml distribute

Again, it may take several minutes for this to run. When it is done, you should see several archives in the build/dist directory. For example, running this Ant task for Alfresco 3.0 Labs produces several archives. The subset relevant to this article includes:

- alfresco-labs-sdk-*.tar.gz
- alfresco-labs-sdk-*.zip
- alfresco-labs-tomcat-*.tar.gz
- alfresco-labs-tomcat-*.zip
- alfresco-labs-war-*.tar.gz
- alfresco-labs-war-*.zip
- alfresco-labs-wcm-*.tar.gz
- alfresco-labs-wcm-*.zip

You should extract the SDK archive somewhere handy. The next step will be to import the SDK into Eclipse.

Setting up the SDK in Eclipse

Nothing about Alfresco requires you to use Eclipse or any other IDE. But Eclipse is very widely used, and the Alfresco SDK distribution includes Eclipse projects that can easily be imported into Eclipse, so that's what these instructions will cover. In addition to the Alfresco JARs, dependent JARs, Javadocs, and source code, the SDK bundle has several Eclipse projects. Most of the Eclipse projects are sample projects showing how to write code for a particular area of Alfresco. Two are special, however. The SDK AlfrescoEmbedded project and the SDK AlfrescoRemote project reference all of the JARs needed for the Java API and the Web Services API, respectively. The easiest way to make sure your own Eclipse project has everything it needs to compile is to import the projects bundled with the SDK into your Eclipse workspace, and then add the appropriate SDK projects to your project's build path.

Step-by-Step: Importing the SDK into Eclipse

Every developer has his or her own favorite way of configuring tools. If you are going to work with multiple versions of Alfresco, you should use version-specific Eclipse workspaces.
For example, you might want to have a workspace-alfresco-2.2 workspace as well as a workspace-alfresco-3.0 workspace, each with the corresponding Alfresco SDK projects imported. Then, if you need to test customizations against a different version of the Alfresco SDK, all you have to do is switch your workspace, import your customization project if it isn't in the workspace already, and build it. Let's go ahead and set this up. Follow these steps:

1. In Eclipse, select File|Switch Workspace and specify a new workspace location. This will be your workspace for a specific version of the Alfresco SDK, so use a name such as workspace-alfresco-3.0. Eclipse will restart with an empty workspace.
2. Make sure the Java compiler compliance level preference is set to 5.0 (Window|Preferences|Java|Compiler). If you forget to do that, Eclipse won't be able to build the projects after they are imported.
3. Select File|Import|Existing Projects into Workspace. For the root directory, specify the directory where the SDK was uncompressed. You want the root SDK directory, not the Samples directory.
4. Select all of the projects that are listed and click Import.

After the import, Eclipse should be able to build all projects cleanly. If not, double-check the compiler compliance level. If that is set but there are still errors, make sure you imported all SDK projects including SDK AlfrescoEmbedded and SDK AlfrescoRemote.

Now that the files are in the workspace, take a look at the Embedded project. That's quite a list of dependent JAR files! The Alfresco-specific JARs all start with alfresco-. It depends on what you are doing, of course, but the JAR that is referenced most often is likely to be alfresco-repository.jar because that's where the bulk of the API resides.

The SDK comes with zipped source code and Javadocs, which are both useful references (although the Javadocs are pretty sparse). It's a good idea to tell Eclipse where those files are, so you can drill into the Alfresco source when debugging. To do that, right-click on an Alfresco JAR, and then select Properties. You'll see panels for Java Source Attachment and Javadoc Location that you can use to associate the JAR with the appropriate source and Javadoc archives. Source and Javadoc are provided for each of the Alfresco JARs, as shown in the following table. Note that source and Javadoc for everything is available; this is open source software, after all, just not all of it is bundled with the SDK:

Alfresco JAR               Source archive           Javadoc archive
alfresco-core.jar          src/core-src.zip         doc/api/core-doc.zip
alfresco-remote-api.jar    src/remote-api-src.zip   doc/api/remote-api-doc.zip
alfresco-web-client.jar    src/web-client-src.zip   doc/api/web-client-doc.zip
alfresco-repository.jar    src/repository-src.zip   doc/api/repository-doc.zip

Working with the Report Builder in Microsoft SQL Server 2008: Part 2

Packt
22 Oct 2009
3 min read
Enabling and reviewing My Reports

As described in Part 1, the My Reports folder needs to be enabled in order to use the folder or display it in the Open Report dialog. The RC0 version had a documentation bug which has since been rectified (https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=366413).

Getting ready

In order to enable the My Reports folder you need to carry out a few tasks. This will require authentication and working with SQL Server Management Studio. These tasks are listed here:

1. Make sure the Report Server has started.
2. Make sure you have adequate permissions to access the servers.
3. Open Microsoft SQL Server Management Studio as described previously.
4. Connect to the Reporting Services after making sure you have started the Reporting Services.
5. Right-click the Report Server node and choose Properties.

The Server Properties window is displayed with a navigation list on the left consisting of the following: General, Execution, History, Logging, Security, and Advanced.

In the General page, the name, version, edition, authentication mode, and URL of the Reporting Service are displayed. Download of an ActiveX Client Print control is enabled by default. In order to work with Report Builder effectively and provide a My Reports folder for each user, you need to place a check mark in the checkbox Enable a My Reports folder for each user; the My Reports feature is then turned on. In the Execution page there is a choice for report execution timeout, with the default set such that report execution expires after 1800 seconds. In the History page there is a choice between keeping an unlimited number of snapshots in the report history (the default) or limiting the copies, allowing you to specify how many are kept. In the Logging page, report execution logging is enabled and log entries older than 60 days are removed by default; this can be changed if desired. In the Security page, both Windows integrated security for report data sources and ad hoc report executions are enabled by default. The Advanced page shows several more items, including the ones described thus far.

To enable and review the feature:

1. In the General page, enable the My Reports feature by placing a check mark.
2. Click on the Advanced list item on the left. The Advanced page is displayed.
3. Now expand the Security node of Reporting Services and you will see that the My Reports role is present in the list of roles. This is also added to the ReportServer database. The description of everything that a user with the My Reports role assignment can do is as follows: may publish reports and linked reports, and manage folders, reports, and resources in a user's My Reports folder.
4. Now bring up Report Builder 2.0 by clicking Start | All Programs | Microsoft SQL Server 2008 Report Builder | Report Builder 2.0. Report Builder 2.0 is displayed.
5. Click on Office Button | Open. The Open Report dialog appears. When the Report Server is offline, the default location is My Documents, as in Microsoft products such as Excel and Access.
6. Choose Recent Sites and Servers. The Report Server that is active should get displayed here.
7. Highlight the server URL and click Open. All the folders and files on the server become accessible.
8. Open the Report Manager by providing its URL address. Verify that a My Reports folder has been created for the current user.
There could be slight differences in the look of the interface depending on whether you are using the RC0 or the final RTM version of SQL Server 2008 Enterprise edition.

Customize backend Component in Joomla! 1.5

Packt
04 Jun 2010
13 min read
(Read more interesting articles on Joomla! 1.5 here.)

Itemized data

Most components handle and display itemized data. Itemized data is data having many instances; most commonly this reflects rows in a database table. When dealing with itemized data there are three areas of functionality that users generally expect:

- Pagination
- Ordering
- Filtering and searching

In this section we will discuss each of these areas of functionality and how to implement them in the backend of a component.

Pagination

To make large amounts of itemized data easier to understand, we can split the data across multiple pages. Joomla! provides us with the JPagination class to help us handle pagination in our extensions. There are four important attributes associated with the JPagination class:

- limitstart: This is the item with which we begin a page; for example, the first page will always begin with item 0.
- limit: This is the maximum number of items to display on a page.
- total: This is the total number of items across all the pages.
- _viewall: This is the option to ignore pagination and display all items.

Before we dive into piles of code, let's take the time to examine the listFooter, the footer that is used at the bottom of pagination lists. The box to the far left describes the maximum number of items to display per page (limit). The remaining buttons are used to navigate between pages. The final text defines the current page out of the total number of pages. The great thing about this footer is we don't have to work very hard to create it! We can use a JPagination object to build it. This not only means that it is easy to implement, but that the pagination footers are consistent throughout Joomla!. JPagination is used extensively by components in the backend when displaying lists of items.

In order to add pagination to our revues list we must make some modifications to our backend revues model. Our current model consists of one private property $_revues and two methods: getRevues() and delete(). We need to add two additional private properties for pagination purposes. Let's place them immediately following the existing $_revues property:

/** @var array of revue objects */
var $_revues = null;
/** @var int total number of revues */
var $_total = null;
/** @var JPagination object */
var $_pagination = null;

Next we must add a class constructor, as we will need to retrieve and initialize the global pagination variables $limit and $limitstart. JModel objects store a state object in order to record the state of the model. It is common to use the state variables limit and limitstart to record the number of items per page and the starting item for the page. We set the state variables in the constructor:

/**
 * Constructor
 */
function __construct()
{
    global $mainframe;
    parent::__construct();
    // Get the pagination request variables
    $limit = $mainframe->getUserStateFromRequest(
        'global.list.limit', 'limit',
        $mainframe->getCfg('list_limit'));
    $limitstart = $mainframe->getUserStateFromRequest(
        $option.'limitstart', 'limitstart', 0);
    // Set the state pagination variables
    $this->setState('limit', $limit);
    $this->setState('limitstart', $limitstart);
}

Remember that $mainframe references the global JApplication object. We use the getUserStateFromRequest() method to get the limit and limitstart variables. We use the user state variable, global.list.limit, to determine the limit. This variable is used throughout Joomla! to determine the length of lists.
For example, if we were to view the Article Manager and select a limit of five items per page, then move to a different list, that list will also be limited to five items. If a value is set in the request value limit (part of the listFooter), we use that value. Alternatively we use the previous value, and if that is not set we use the default value defined in the application configuration.

The limitstart variable is retrieved from the user state value $option, plus .limitstart. The $option value holds the component name, for example com_content. If we build a component that has multiple lists we should add an extra level to this, which is normally named after the entity. If a value is set in the request value limitstart (part of the listFooter) we use that value. Alternatively we use the previous value, and if that is not set we use the default value 0, which will lead us to the first page. The reason we retrieve these values in the constructor and not in another method is that in addition to using these values for the JPagination object, we will also need them when getting data from the database.

In our existing component model we have a single method for retrieving data from the database, getRevues(). For reasons that will become apparent shortly, we need to create a private method that will build the query string, and modify our getRevues() method to use it:

/**
 * Builds a query to get data from #__boxoffice_revues
 * @return string SQL query
 */
function _buildQuery()
{
    $db =& $this->getDBO();
    $rtable = $db->nameQuote('#__boxoffice_revues');
    $ctable = $db->nameQuote('#__categories');
    $query = ' SELECT r.*, cc.title AS cat_title'
           . ' FROM ' . $rtable . ' AS r'
           . ' LEFT JOIN ' . $ctable . ' AS cc ON cc.id=r.catid';
    return $query;
}

We now must modify our getRevues() method:

/**
 * Get a list of revues
 *
 * @access public
 * @return array of objects
 */
function getRevues()
{
    // Get the database connection
    $db =& $this->_db;
    if (empty($this->_revues)) {
        // Build query and get the limits from current state
        $query = $this->_buildQuery();
        $limitstart = $this->getState('limitstart');
        $limit = $this->getState('limit');
        $this->_revues = $this->_getList($query, $limitstart, $limit);
    }
    // Return the list of revues
    return $this->_revues;
}

We retrieve the object state variables limit and limitstart and pass them to the private JModel method _getList(). The _getList() method is used to get an array of objects from the database based on a query and, optionally, limit and limitstart. The last two parameters will modify the first parameter, a query, in such a way that we only return the desired results. For example, if we requested page 1 and were displaying a maximum of five items per page, the following would be appended to the query: LIMIT 0, 5.

To handle pagination we need to add a method called getPagination() to our model. This method will handle the items we are trying to paginate using a JPagination object. Here is our code for the getPagination() method:

/**
 * Get a pagination object
 *
 * @access public
 * @return pagination object
 */
function getPagination()
{
    if (empty($this->_pagination)) {
        // Import the pagination library
        jimport('joomla.html.pagination');
        // Prepare the pagination values
        $total = $this->getTotal();
        $limitstart = $this->getState('limitstart');
        $limit = $this->getState('limit');
        // Create the pagination object
        $this->_pagination = new JPagination($total, $limitstart, $limit);
    }
    return $this->_pagination;
}

There are three important aspects to this method.
We use the private property $_pagination to cache the object, we use the getTotal() method to determine the total number of items, and we use the getState() method to determine the number of results to display. The getTotal() method is a method that we must define ourselves; we don't have to use this name or this mechanism to determine the total number of items. Here is one way of implementing the getTotal() method:

/**
 * Get number of items
 *
 * @access public
 * @return integer
 */
function getTotal()
{
    if (empty($this->_total)) {
        $query = $this->_buildQuery();
        $this->_total = $this->_getListCount($query);
    }
    return $this->_total;
}

This method calls our model's private method _buildQuery() to build the query, the same query that we use to retrieve our list of revues. We then use the private JModel method _getListCount() to count the number of results that will be returned from the query. We now have all we need to add pagination to our revues list, except for actually adding it to our list page. We need to add a few lines of code to our revues/view.html.php file. We will need access to global user state variables, so we must add a reference to the global application object as the first line in our display method:

global $mainframe;

Next we need to create and populate an array that will contain user state information. We will add this code immediately after the code that builds the toolbar:

// Prepare list array
$lists = array();
// Get the user state
$filter_order = $mainframe->getUserStateFromRequest(
    $option.'filter_order', 'filter_order', 'published');
$filter_order_Dir = $mainframe->getUserStateFromRequest(
    $option.'filter_order_Dir', 'filter_order_Dir', 'ASC');
// Build the list array for use in the layout
$lists['order'] = $filter_order;
$lists['order_Dir'] = $filter_order_Dir;
// Get revues and pagination from the model
$model =& $this->getModel('revues');
$revues =& $model->getRevues();
$page =& $model->getPagination();
// Assign references for the layout to use
$this->assignRef('lists', $lists);
$this->assignRef('revues', $revues);
$this->assignRef('page', $page);

After we create and populate the $lists array, we add a variable $page that receives a reference to a JPagination object by calling our model's getPagination() method. And finally we assign references to the $lists and $page variables so that our layout can access them.

Within our layout default.php file we must make some minor changes toward the end of the existing code. Between the closing </tbody> tag and the </table> tag we must add the following:

<tfoot>
    <tr>
        <td colspan="10">
            <?php echo $this->page->getListFooter(); ?>
        </td>
    </tr>
</tfoot>

This creates the pagination footer using the JPagination method getListFooter(). The final change we need to make is to add two hidden fields to the form. Under the existing hidden fields we add the following code:

<input type="hidden" name="filter_order" value="<?php echo $this->lists['order']; ?>" />
<input type="hidden" name="filter_order_Dir" value="" />

The most important thing to notice is that we leave the value of the filter_order_Dir field empty. This is because the listFooter deals with this for us. That is it! We have now added pagination to our page.

Ordering

Another enhancement that we can add is the ability to sort or order our data by column, which we can accomplish easily using the JHTML grid.sort type. And, as an added bonus, we have already completed a significant amount of the necessary code when we added pagination.
Most of the changes to revues/view.html.php that we made for pagination are used for implementing column ordering; we don't have to make a single change. We also added two hidden fields, filter_order and filter_order_Dir, to our layout form, default.php. The first defines the column to order our data by, and the latter defines the direction, ascending or descending. Most of the column headings for our existing layout are currently composed of simple text wrapped in table heading tags (<th>Title</th>, for example). We need to replace the text with the output of the grid.sort function for those columns that we wish to be orderable. Here is our new code:

<thead>
    <tr>
        <th width="20" nowrap="nowrap">
            <?php echo JHTML::_('grid.sort', JText::_('ID'), 'id',
                $this->lists['order_Dir'], $this->lists['order']); ?>
        </th>
        <th width="20" nowrap="nowrap">
            <input type="checkbox" name="toggle" value=""
                onclick="checkAll(<?php echo count($this->revues); ?>);" />
        </th>
        <th width="40%">
            <?php echo JHTML::_('grid.sort', JText::_('TITLE'), 'title',
                $this->lists['order_Dir'], $this->lists['order']); ?>
        </th>
        <th width="20%">
            <?php echo JHTML::_('grid.sort', JText::_('REVUER'), 'revuer',
                $this->lists['order_Dir'], $this->lists['order']); ?>
        </th>
        <th width="80" nowrap="nowrap">
            <?php echo JHTML::_('grid.sort', JText::_('REVUED'), 'revued',
                $this->lists['order_Dir'], $this->lists['order']); ?>
        </th>
        <th width="80" nowrap="nowrap" align="center">
            <?php echo JHTML::_('grid.sort', 'ORDER', 'ordering',
                $this->lists['order_Dir'], $this->lists['order']); ?>
        </th>
        <th width="10" nowrap="nowrap">
            <?php if ($ordering) echo JHTML::_('grid.order', $this->revues); ?>
        </th>
        <th width="50" nowrap="nowrap">
            <?php echo JText::_('HITS'); ?>
        </th>
        <th width="100" nowrap="nowrap" align="center">
            <?php echo JHTML::_('grid.sort', JText::_('CATEGORY'), 'category',
                $this->lists['order_Dir'], $this->lists['order']); ?>
        </th>
        <th width="60" nowrap="nowrap" align="center">
            <?php echo JHTML::_('grid.sort', JText::_('PUBLISHED'), 'published',
                $this->lists['order_Dir'], $this->lists['order']); ?>
        </th>
    </tr>
</thead>

Let's look at the last column, Published, and dissect the call to grid.sort. Following grid.sort we have the name of the column, filtered through JText::_(), passing it a key to our translation file. The next parameters are the sort value, the current order direction, and the current column by which the data is ordered. In order for us to be able to use these headings to order our data we must make a few additional modifications to our JModel class. We created the _buildQuery() method earlier when we were adding pagination. We now need to make a change to that method to handle ordering:

/**
 * Builds a query to get data from #__boxoffice_revues
 * @return string SQL query
 */
function _buildQuery()
{
    $db =& $this->getDBO();
    $rtable = $db->nameQuote('#__boxoffice_revues');
    $ctable = $db->nameQuote('#__categories');
    $query = ' SELECT r.*, cc.title AS cat_title'
           . ' FROM ' . $rtable . ' AS r'
           . ' LEFT JOIN ' . $ctable . ' AS cc ON cc.id=r.catid' .
             $this->_buildQueryOrderBy();
    return $query;
}

Our method now calls a method named _buildQueryOrderBy() that builds the ORDER BY clause for the query:

/**
 * Build the ORDER part of a query
 *
 * @return string part of an SQL query
 */
function _buildQueryOrderBy()
{
    global $mainframe, $option;
    // Array of allowable order fields
    $orders = array('title', 'revuer', 'revued', 'category',
                    'published', 'ordering', 'id');
    // Get the order field and direction; the default order field
    // is 'ordering', the default direction is ascending
    $filter_order = $mainframe->getUserStateFromRequest(
        $option.'filter_order', 'filter_order', 'ordering');
    $filter_order_Dir = strtoupper(
        $mainframe->getUserStateFromRequest(
            $option.'filter_order_Dir', 'filter_order_Dir', 'ASC'));
    // Validate the order direction, must be ASC or DESC
    if ($filter_order_Dir != 'ASC' && $filter_order_Dir != 'DESC') {
        $filter_order_Dir = 'ASC';
    }
    // If the order column is unknown, use the default
    if (!in_array($filter_order, $orders)) {
        $filter_order = 'ordering';
    }
    $orderby = ' ORDER BY ' . $filter_order . ' ' . $filter_order_Dir;
    if ($filter_order != 'ordering') {
        $orderby .= ' , ordering ';
    }
    // Return the ORDER BY clause
    return $orderby;
}

As with the view, we retrieve the order column name and direction using the application getUserStateFromRequest() method. Since this data is going to be used to interact with the database, we perform some sanity checks to ensure that the data is safe to use with the database. Now that we have done this, we can use the table headings to order the itemized data. When the current ordering is, say, title descending, this is denoted by a small arrow to the right of Title.

Development of Windows Mobile Applications (Part 2)

Packt
26 Oct 2009
3 min read
Now let us see how to deploy it on a Windows Mobile device. For deploying the application on a device you need to have ActiveSync installed. There are two ways in which the application can be deployed to the device.

The first option is to connect the device to the development machine via USB. ActiveSync will automatically detect it, and you can deploy from the top bar, this time selecting the option "Windows Mobile 6 Professional Device". This approach is useful when you want to test, deploy, and use the application yourself. What if you want to distribute it to others? In that case you need to create an installation program for your Windows Mobile application. The installation file in the Windows Mobile world is distributed in the form of a CAB file. So once we are done with the application, we should opt for the second option of creating a CAB file (a CAB file is a library of compressed files stored as a single file).

Creating the CAB file

Creating a CAB file is itself a new project. To create the CAB project, right-click on the solution and select the option New Project. Clicking on the New Project option will open the Add New Project wizard. Select the option Setup and Deployment under Other Project Types in the left-hand menu. Then select the option Smart Device CAB Project on the right-hand side. We have named this project MyFirstAppCAB. Click OK and the MyFirstAppCAB project is created under the solution.

Now, to add files to the CAB, right-click on the Application Folder under File System on Target Machine and select the option Add -> Project Output. On selecting the Project Output option, a dialog will pop up. Depending upon the requirement, select what needs to be compressed in the CAB file. For this example we require only the output, hence we will select the option Primary output. Now right-click on the CAB project MyFirstAppCAB and select the option Build. A CAB file with the name MyFirstAppCAB will be created at the location MyFirstApp\MyFirstAppCAB\Debug.

Now let us see how we can deploy this CAB file on the emulator and run the application. Click on Tools on the top bar and select the option Device Emulator Manager. This will open the Device Emulator Manager. Select the option Windows Mobile 6 Classic Emulator, right-click, and select the option Connect. The Windows Mobile Emulator will start, and in the Device Emulator Manager you will see a connection marker to the left of Windows Mobile 6 Classic Emulator.

The next step is to cradle. If you are attempting to cradle for the first time, you need to set up ActiveSync. To set up ActiveSync, double-click its icon at the bottom right of the taskbar. Microsoft ActiveSync will open up; select the option File -> Connection Settings. The Connection Settings window will open. Check the option Allow Connections to one of the following, and then from the drop-down select the option DMA. After connecting the Windows Mobile 6 Classic Emulator using the Device Emulator Manager, right-click again and select the option Cradle. Cradling will start ActiveSync and make the emulator work as a device connected using ActiveSync. On successful connection you will see a marker to the left of the option Windows Mobile 6 Classic Emulator in the Device Emulator Manager.

NodeJS: Building a Maintainable Codebase

Benjamin Reed
06 May 2015
8 min read
NodeJS has become the most anticipated web development technology since Ruby on Rails. This is not an introduction to Node. First, you must realize that NodeJS is not a direct competitor to Rails or Django. Instead, Node is a collection of libraries that allow JavaScript to run on the v8 runtime. Node powers many tools, and some of those tools have nothing to do with scaling a web application. For instance, GitHub's Atom editor is built on top of Node. It is Node's web application frameworks, like Express, that are the competitors. This article can apply to all environments using Node.

Second, Node is designed under the asynchronous ideology. Not all of the operations in Node are asynchronous. Many libraries offer synchronous and asynchronous options; a Node developer must decipher the best operation for his or her needs. Third, you should have a solid understanding of the concept of a callback in Node.

Over the course of two weeks, a team attempted to refactor a Rails app to be an Express application. We loved the concepts behind Node, and we truly believed that all we needed was a barebones framework. We transferred our controller logic over to Express routes in a weekend. As a beginning team, I will analyze some of the pitfalls that we came across. Hopefully, this will help you identify strategies to tackle Node with your team.

First, attempt to structure callbacks and avoid anonymous functions. As we added more and more logic, we added more and more callbacks. Everything was beautifully asynchronous, and our code would successfully run. However, we soon found ourselves debugging an anonymous function nested inside of other anonymous functions. In other words, the codebase was incredibly difficult to follow. Anyone starting out with Node could potentially notice the novice "spaghetti code." Here's a simple example of nested callbacks:

router.put('/:id', function(req, res) {
  console.log("attempt to update bathroom");
  models.User.find({ where: {id: req.param('id')} }).success(function (user) {
    var raw_cell = req.param('cell') ? req.param('cell') : user.cell;
    var raw_email = req.param('email') ? req.param('email') : user.email;
    var raw_username = req.param('username') ? req.param('username') : user.username;
    var raw_digest = req.param('digest') ? req.param('digest') : user.digest;

    user.cell = raw_cell;
    user.email = raw_email;
    user.username = raw_username;
    user.digest = raw_digest;
    user.updated_on = new Date();

    user.save().success(function () {
      res.json(user);
    }).error(function () {
      res.json({"status": "error"});
    });
  }).error(function() {
    res.json({"status": "error"});
  });
});

Notice that there are many success and error callbacks. Locating a specific callback is not difficult if the whitespace is perfect or the developer can count closing brackets back up to the destination. However, this is pretty nasty to any newcomer. And this illegibility will only increase as the application becomes more complex. A developer may get this response:

{"status": "error"}

Where did this response come from? Did the ORM fail to update the object? Did it fail to find the object in the first place? A developer could add descriptions to the JSON in the chained error callbacks, but there has to be a better way. Let's extract some of the callbacks into separate methods:

router.put('/:id', function(req, res) {
  var id = req.param('id');
  var query = { where: {id: id} };

  // search for user
  models.User.find(query).success(function (user) {
    // parse req parameters
    var raw_cell = req.param('cell') ?
req.param('cell') : user.cell;
    var raw_email = req.param('email') ? req.param('email') : user.email;
    var raw_username = req.param('username') ? req.param('username') : user.username;

    // set user attributes
    user.cell = raw_cell;
    user.email = raw_email;
    user.username = raw_username;
    user.updated_on = new Date();

    // attempt to save user
    user.save()
      .success(SuccessHandler.userSaved(res, user))
      .error(ErrorHandler.userNotSaved(res, id));
  }).error(ErrorHandler.userNotFound(res, id));
});

var ErrorHandler = {
  userNotFound: function(res, user_id) {
    res.json({"status": "error", "description": "The user with the specified id could not be found.", "user_id": user_id});
  },
  userNotSaved: function(res, user_id) {
    res.json({"status": "error", "description": "The update to the user with the specified id could not be completed.", "user_id": user_id});
  }
};

var SuccessHandler = {
  userSaved: function(res, user) {
    res.json(user);
  }
};

This seemed to help clean up our minimal sample. There is now only one anonymous function. The code seems to be a lot more readable and independent. However, our code is still cluttered by chaining success and error callbacks. One could make these global mutable variables, or perhaps we can consider another approach. Futures, also known as promises, are becoming more prominent. Twitter has adopted them in Scala. It is definitely something to consider.

Next, do what makes your team comfortable and productive. At the same time, do not compromise the integrity of the project. There are numerous posts that encourage certain styles over others. There are also extensive posts on the subject of CoffeeScript. If you aren't aware, CoffeeScript is a language with some added syntactic flavor that compiles to JavaScript. Our team was primarily Ruby developers, and it definitely appealed to us. When we migrated some of the project over to CoffeeScript, we found that our code was a lot shorter and appeared more legible. GitHub uses CoffeeScript for the Atom text editor to this day, and the Rails community has openly embraced it. The majority of Node module documentation will use JavaScript, so CoffeeScript developers will have to become acquainted with translation. There are some problems with CoffeeScript being ES6 ready, and there are some modules that are clearly not meant to be utilized in CoffeeScript. CoffeeScript is an open source project, but it appears to have a good backbone and a stable community. If your developers are more comfortable with it, utilize it.

When it comes to open source projects, everyone tends to trust them. In the purest form, open source projects are absolutely beautiful. They make the lives of all developers better. Nobody has to re-implement the wheel unless they choose to. Obviously, both Node and CoffeeScript are open source. However, the community is very new, and it is dangerous to assume that any package you find on NPM is stable. For us, the problem occurred when we searched for an ORM. We truly missed ActiveRecord, and we assumed that other projects would work similarly. We tried several solutions, and none of them interacted the way we wanted. Besides expressing our entire schema in a JavaScript format, we found relations to be a bit of a hack. Settling on one, we ran our server. And our database cleared out. That's fine in development, but we struggled to find a way to get it into production. We needed more documentation. Also, the module was not designed with CoffeeScript in mind. We practically needed to revert to JavaScript.
In contrast, the Node community has openly embraced some NoSQL databases, such as MongoDB. They are definitely worth considering. Either way, make sure that your team's dependencies are very well documented. There should be written documentation for each exposed object, function, and so on.

To sum everything up, this article comes down to two fundamental things learned in any computer science class: write modular code and document everything. Do your research on Node and find a style that is legible for your team and any newcomers. A NodeJS project can only be maintained if the developers utilizing the framework recognize the importance of the project in the future. If your code is messy now, it will only become messier. If you cannot find necessary information in a module's documentation, you will probably miss other information when there is a problem in production. Don't take shortcuts. A Node application can only be as good as its developers and dependencies.

About the Author

Benjamin Reed began Computer Science classes at a nearby university in Nashville during his sophomore year in high school. Since then, he has become an advocate for open source. He is now pursuing degrees in Computer Science and Mathematics full time. The Ruby community has intrigued him, and he openly expresses support for the Rails framework. When asked, he says that studying Rails has led him to some of the best practices and, ultimately, has made him a better programmer. iOS development is one of his hobbies, and he enjoys scouting out new projects on GitHub. On GitHub, he's appropriately named @codeblooded. On Twitter, he's @benreedDev.

Components

Packt
25 Nov 2014
14 min read
This article by Timothy Moran, author of Mastering KnockoutJS, teaches you how to use the new Knockout components feature. (For more resources related to this topic, see here.)

In Version 3.2, Knockout added components, which use the combination of a template (view) with a viewmodel to create reusable, behavior-driven DOM objects. Knockout components are inspired by web components, a new (and experimental, at the time of writing this) set of standards that allow developers to define custom HTML elements paired with JavaScript that create packaged controls. Like web components, Knockout allows the developer to use custom HTML tags to represent these components in the DOM. Knockout also allows components to be instantiated with a binding handler on standard HTML elements. Knockout binds components by injecting an HTML template, which is bound to its own viewmodel. This is probably the single largest feature Knockout has ever added to the core library. The reason we started with RequireJS is that components can optionally be loaded and defined with module loaders, including their HTML templates! This means that our entire application (even the HTML) can be defined in independent modules, instead of as a single hierarchy, and loaded asynchronously.

The basic component registration

Unlike extenders and binding handlers, which are created by just adding an object to Knockout, components are created by calling the ko.components.register function:

ko.components.register('contact-list', {
    viewModel: function(params) { },
    template: //template string or object
});

This will create a new component named contact-list, which uses the object returned by the viewModel function as a binding context, and the template as its view. It is recommended that you use lowercase, dash-separated names for components so that they can easily be used as custom elements in your HTML. To use this newly created component, you can use a custom element or the component binding. All the following three tags produce equivalent results:

<contact-list params="data: contacts"></contact-list>

<div data-bind="component: { name: 'contact-list', params: { data: contacts } }"></div>

<!-- ko component: { name: 'contact-list', params: { data: contacts } } --><!-- /ko -->

Obviously, the custom element syntax is much cleaner and easier to read. It is important to note that custom elements cannot be self-closing tags. This is a restriction of the HTML parser and cannot be controlled by Knockout. There is one advantage to using the component binding: the name of the component can be an observable. If the name of the component changes, the previous component will be disposed of (just like it would be if a control flow binding removed it) and the new component will be initialized.

The params attribute of custom elements works in a manner that is similar to the data-bind attribute. Comma-separated key/value pairs are parsed to create a property bag, which is given to the component. The values can contain JavaScript literals, observable properties, or expressions. It is also possible to register a component without a viewmodel, in which case the object created by params is directly used as the binding context.
To see this, we'll convert the list of contacts into a component:

<contact-list params="contacts: displayContacts, edit: editContact, delete: deleteContact">
</contact-list>

The HTML code for the list is replaced with a custom element with parameters for the list as well as callbacks for the two buttons, edit and delete:

ko.components.register('contact-list', {
    template:
    '<ul class="list-unstyled" data-bind="foreach: contacts">'
        +'<li>'
            +'<h3>'
                +'<span data-bind="text: displayName"></span> '
                +'<small data-bind="text: phoneNumber"></small> '
                +'<button class="btn btn-sm btn-default" data-bind="click: $parent.edit">Edit</button> '
                +'<button class="btn btn-sm btn-danger" data-bind="click: $parent.delete">Delete</button>'
            +'</h3>'
        +'</li>'
    +'</ul>'
});

This component registration uses an inline template. Everything still looks and works the same, but the resulting HTML now includes our custom element.

Custom elements in IE 8 and higher

IE 9 and later versions, as well as all other major browsers, have no issue with seeing custom elements in the DOM before they have been registered. However, older versions of IE will remove the element if it hasn't been registered. The registration can be done either with Knockout, using ko.components.register('component-name'), or with the standard document.createElement('component-name') expression statement. One of these must come before the custom element, either by the script containing them being first in the DOM, or by the custom element being added at runtime. When using RequireJS, being first in the DOM won't help, as the loading is asynchronous. If you need to support older IE versions, it is recommended that you include a separate script to register the custom element names at the top of the body tag or in the head tag:

<!DOCTYPE html>
<html>
<body>
    <script>
        document.createElement('my-custom-element');
    </script>
    <script src='require.js' data-main='app/startup'></script>
    <my-custom-element></my-custom-element>
</body>
</html>

Once this has been done, components will work in IE 6 and higher, even with custom elements.

Template registration

The template property of the configuration sent to register can take any of the following formats:

ko.components.register('component-name', {
    template: [OPTION]
});

The element ID

Consider the following code statement:

template: { element: 'component-template' }

If you specify the ID of an element in the DOM, the contents of that element will be used as the template for the component. Although it isn't supported in IE yet, the template element is a good candidate, as browsers do not visually render the contents of template elements.

The element instance

Consider the following code statement:

template: { element: instance }

You can pass a real DOM element to the template to be used. This might be useful in a scenario where the template was constructed programmatically.
Like the element ID method, only the contents of the element will be used as the template:

var template = document.getElementById('contact-list-template');
ko.components.register('contact-list', {
  template: { element: template }
});

An array of DOM nodes

Consider the following code statement:

template: [nodes]

If you pass an array of DOM nodes to the template configuration, then the entire array will be used as a template and not just the descendants:

var template = document.getElementById('contact-list-template'),
    nodes = Array.prototype.slice.call(template.content.childNodes);
ko.components.register('contact-list', {
  template: nodes
});

Document fragments

Consider the following code statement:

template: documentFragmentInstance

If you pass a document fragment, the entire fragment will be used as a template instead of just the descendants:

var template = document.getElementById('contact-list-template');
ko.components.register('contact-list', {
  template: template.content
});

This example works because template elements wrap their contents in a document fragment in order to stop the normal rendering. Using the content is the same method that Knockout uses internally when a template element is supplied.

HTML strings

We already saw an example of an HTML string in the previous section. While using the value inline is probably uncommon, supplying a string would be an easy thing to do if your build system provided it for you.

Registering templates using the AMD module

Consider the following code statement:

template: { require: 'module/path' }

If a require property is passed to the configuration object of a template, the default module loader will load the module and use it as the template. The module can return any of the preceding formats. This is especially useful for the RequireJS text plugin:

ko.components.register('contact-list', {
  template: { require: 'text!contact-list.html' }
});

Using this method, we can extract the HTML template into its own file, drastically improving its organization. By itself, this is a huge benefit to development.

The viewmodel registration

Like template registration, viewmodels can be registered using several different formats. To demonstrate this, we'll use a simple viewmodel for our contact list component:

function ListViewmodel(params) {
  this.contacts = params.contacts;
  this.edit = params.edit;
  this.delete = function(contact) {
    console.log('Mock Deleting Contact', ko.toJS(contact));
  };
}

To verify that things are getting wired up properly, you'll want something interactive; hence, we use the fake delete function.

The constructor function

Consider the following code statement:

viewModel: Constructor

If you supply a function to the viewModel property, it will be treated as a constructor. When the component is instantiated, new will be called on the function, with the params object as its first parameter:

ko.components.register('contact-list', {
  template: { require: 'text!contact-list.html' },
  viewModel: ListViewmodel //Defined above
});

A singleton object

Consider the following code statement:

viewModel: { instance: singleton }

If you want all your component instances to be backed by a shared object (though this is not recommended), you can pass it as the instance property of a configuration object. Because the object is shared, parameters cannot be passed to the viewmodel using this method.
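To illustrate the shared-object option, here is a minimal sketch; the component name, markup, and counter object are assumptions for illustration:

// Every instance of this component binds against the same object,
// so clicking the button in one instance updates all of them.
var sharedCounter = {
    count: ko.observable(0),
    increment: function() {
        this.count(this.count() + 1);
    }
};

ko.components.register('hit-counter', {
    template: '<button data-bind="click: increment, text: count"></button>',
    viewModel: { instance: sharedCounter }
});

Since the object is shared, no params are accepted, and its state lives for the lifetime of the page rather than the lifetime of any one component instance.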
The factory function

Consider the following code statement:

viewModel: { createViewModel: function(params, componentInfo) {} }

This method is useful because it supplies the component's container element on the second parameter, as componentInfo.element. It also provides you with the opportunity to perform any other setup, such as modifying or extending the constructor parameters. The createViewModel function should return an instance of a viewmodel component:

ko.components.register('contact-list', {
  template: { require: 'text!contact-list.html' },
  viewModel: {
    createViewModel: function(params, componentInfo) {
      console.log('Initializing component for', componentInfo.element);
      return new ListViewmodel(params);
    }
  }
});

Registering viewmodels using an AMD module

Consider the following code statement:

viewModel: { require: 'module-path' }

Just like templates, viewmodels can be registered with an AMD module that returns any of the preceding formats.

Registering the entire component with AMD

In addition to registering the template and the viewmodel as AMD modules individually, you can register the entire component with a require call:

ko.components.register('contact-list', { require: 'contact-list' });

The AMD module will return the entire component configuration:

define(['knockout', 'text!contact-list.html'], function(ko, templateString) {
  function ListViewmodel(params) {
    this.contacts = params.contacts;
    this.edit = params.edit;
    this.delete = function(contact) {
      console.log('Mock Deleting Contact', ko.toJS(contact));
    };
  }
  return { template: templateString, viewModel: ListViewmodel };
});

As the Knockout documentation points out, this method has several benefits:

The registration call is just a require path, which is easy to manage.
The component is composed of two parts: a JavaScript module and an HTML module. This provides both simple organization and clean separation.
The RequireJS optimizer, r.js, can use the text dependency on the HTML module to bundle the HTML code with the bundled output. This means your entire application, including the HTML templates, can be a single file in production (or a collection of bundles if you want to take advantage of lazy loading).

Observing changes in component parameters

Component parameters will be passed via the params object to the component's viewmodel in one of the following three ways:

No observable expression evaluation needs to occur, and the value is passed literally:

<component params="name: 'Timothy Moran'"></component>
<component params="name: nonObservableProperty"></component>
<component params="name: observableProperty"></component>
<component params="name: viewModel.observableSubProperty"></component>

In all of these cases, the value is passed directly to the component on the params object. This means that changes to these values will change the property on the instantiating viewmodel, except for the first case (literal values). Observable values can be subscribed to normally.

An observable expression needs to be evaluated, so it is wrapped in a computed observable:

<component params="name: name() + '!'"></component>

In this case, params.name is not the original property. Calling params.name() will evaluate the computed wrapper. Trying to modify the value will fail, as the computed value is not writable. The value can be subscribed to normally.

An observable expression evaluates an observable instance, so it is wrapped in an observable that unwraps the result of the expression:

<component params="name: isFormal() ? firstName : lastName"></component>

In this example, firstName and lastName are both observable properties. If calling params.name() returned the observable, you would need to call params.name()() to get the actual value, which is rather ugly. Instead, Knockout automatically unwraps the expression so that calling params.name() returns the actual value of either firstName or lastName. If you need to access the actual observable instances to, for example, write a value to them, trying to write to params.name will fail, as it is a computed observable. To get the unwrapped value, you can use the params.$raw object, which provides the unwrapped values. In this case, you can update the name by calling params.$raw.name('New'). In general, this case should be avoided by removing the logic from the binding expression and placing it in a computed observable in the viewmodel.

The component's life cycle

When a component binding is applied, Knockout takes the following steps:

1. The component loader asynchronously creates the viewmodel factory and template. This result is cached so that it is only performed once per component.
2. The template is cloned and injected into the container (either the custom element or the element with the component binding).
3. If the component has a viewmodel, it is instantiated. This is done synchronously.
4. The component is bound to either the viewmodel or the params object.
5. The component is left active until it is disposed.
6. The component is disposed. If the viewmodel has a dispose method, it is called, and then the template is removed from the DOM.

The component's disposal

If the component is removed from the DOM by Knockout, either because the name bound by the component binding changed or because a control flow binding (for example, if or foreach) removed it, the component will be disposed. If the component's viewmodel has a dispose function, it will be called. Normal Knockout bindings in the component's view will be automatically disposed, just as they would be in a normal control flow situation. However, anything set up by the viewmodel needs to be manually cleaned up. Some examples of viewmodel cleanup include the following:

setInterval callbacks can be removed with clearInterval.
Computed observables can be removed by calling their dispose method. Pure computed observables don't need to be disposed. Computed observables that are only used by bindings or other viewmodel properties also do not need to be disposed, as garbage collection will catch them.
Observable subscriptions can be disposed by calling their dispose method.
Event handlers can be created by components that are not part of a normal Knockout binding.

Combining components with data bindings

There is only one restriction on data-bind attributes that are used on custom elements with the component binding: the binding handlers cannot use controlsDescendantBindings. This isn't a new restriction; two bindings that control descendants cannot be on a single element, and since components control their descendant bindings, they cannot be combined with a binding handler that also controls descendants. It is worth remembering, though, as you might be inclined to place an if or foreach binding on a component; doing this will cause an error. Instead, wrap the component with an element or a containerless binding:

<ul data-bind='foreach: allProducts'>
  <product-details params='product: $data'></product-details>
</ul>

It's also worth noting that bindings such as text and html will replace the contents of the element they are on. When used with components, this can result in the component being lost, so it's not a good idea.
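To tie the disposal rules above together, here is a sketch of a component that sets up a timer and releases it on teardown; the component name and markup are illustrative assumptions:

function TickerViewmodel() {
    this.time = ko.observable(new Date().toLocaleTimeString());
    // The interval is created outside of Knockout's knowledge, so the
    // viewmodel is responsible for clearing it when it is disposed.
    this.handle = setInterval(function() {
        this.time(new Date().toLocaleTimeString());
    }.bind(this), 1000);
}

// Knockout calls dispose automatically when the component is removed.
TickerViewmodel.prototype.dispose = function() {
    clearInterval(this.handle);
};

ko.components.register('clock-ticker', {
    template: '<span data-bind="text: time"></span>',
    viewModel: TickerViewmodel
});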
Summary

In this article, we learned that the Knockout components feature gives you a powerful tool that will help you create reusable, behavior-driven DOM elements.

Resources for Article:

Further resources on this subject:
Deploying a Vert.x application [Article]
The Dialog Widget [Article]
Top features of KnockoutJS [Article]
Plone 4 Development: Creating a Custom Workflow

Packt
30 Aug 2011
7 min read
Professional Plone 4 Development: build robust, content-centric web applications with Plone 4.

Keeping control with workflow

As we have alluded to before, managing permissions directly anywhere other than the site root is usually a bad idea. Every content object in a Plone site is subject to security, and will in most cases inherit permission settings from its parent. If we start making special settings in particular folders, we will quickly lose control. However, if settings are always acquired, how can we restrict access to particular folders, or prevent authors from editing published content whilst still giving them rights to work on items in a draft state? The answer to both of these problems is workflow.

Workflows are managed by the portal_workflow tool. This controls a mapping of content types to workflow definitions, and sets a default workflow for types not explicitly mapped. The workflow tool allows a workflow chain of multiple workflows to be assigned to a content type. Each workflow is given its own state variable. Multiple workflows can manage permissions concurrently. Plone's user interface does not explicitly support more than one workflow, but can be used in combination with custom user interface elements to address complex security and workflow requirements.

The workflow definitions themselves are objects found inside the portal_workflow tool, under the Contents tab. Each definition consists of states, such as private or published, and transitions between them. Transitions can be protected by permissions or restricted to particular roles.

Although it is fairly common to protect workflow transitions by role, this is not actually a very good use of the security system. It would be much more sensible to use an appropriate permission. The exception is when custom roles are used solely for the purpose of defining roles in a workflow.

Some transitions are automatic, which means that they will be invoked as soon as an object enters a state that has this transition as a possible exit (that is, provided the relevant guard conditions are met). More commonly, transitions are invoked following some user action, normally through the State drop-down menu in Plone's user interface. It is possible to execute code immediately before or after a transition is executed.

States may be used simply for information purposes. For example, it is useful to be able to mark a content object as "published" and be able to search for all published content. More commonly, states are also used to control content item security. When an object enters a particular state (either its initial state, when it is first created, or a state that is the result of a workflow transition), the workflow tool can set a number of permissions according to a predefined permissions map associated with the target state.

The permissions that are managed by a particular workflow are listed under the Permissions tab on the workflow definition. These permissions are used in a permission map for each state.

If you change workflow security settings, your changes will not take effect immediately, since permissions are only modified upon workflow transitions. To synchronize permissions with the current workflow definitions, use the Update security settings button at the bottom of the Workflows tab of the portal_workflow tool. Note that this can take a long time on large sites, because it needs to find all content items using the old settings and update their permissions.
If you use the Types control panel in Plone's Site Setup to change workflows, this reindexing happens automatically.

Workflows can also be used to manage role-to-group assignments in the same way they can be used to manage role-to-permission assignments. This feature is rarely used in Plone, however.

All workflows manage a number of workflow variables, whose values can change with transitions and be queried through the workflow tool. These are rarely changed, however, and Plone relies on a number of the default ones. These include the previous transition (action), the user ID of the person who performed that transition (actor), any associated comments (comments), the date/time of the last transition (time), and the full transition history (review_history).

Finally, workflows can define work lists, which are used by Plone's Review list portlet to show pending tasks for the current user. A work list in effect performs a catalog search using the workflow's state variable. In Plone, the state variable is always called review_state.

The workflow system is very powerful, and can be used to solve many kinds of problems where objects of the same type need to be in different states. Learning to use it effectively can pay off greatly in the long run.

Interacting with workflow in code

Interacting with workflow from our own code is usually straightforward. To get the workflow state of a particular object, we can do:

from Products.CMFCore.utils import getToolByName
wftool = getToolByName(context, 'portal_workflow')
review_state = wftool.getInfoFor(context, 'review_state')

However, if we are doing a search using the portal_catalog tool, the results it returns have the review state as metadata already:

from Products.CMFCore.utils import getToolByName
catalog = getToolByName(context, 'portal_catalog')
for result in catalog(dict(
        portal_type=('Document', 'News Item',),
        review_state=('published', 'public', 'visible',),
        )):
    review_state = result.review_state
    # do something with the review_state

To change the workflow state of an object, we can use the following line of code:

wftool.doActionFor(context, action='publish')

The action here is the name of a transition, which must be available to the current user from the current state of context. There is no (easy) way to directly specify the target state. This is by design: recall that transitions form the paths between states, and may involve additional security restrictions or the triggering of scripts.

Again, the Doc tab for the portal_workflow tool and its sub-objects (the workflow definitions and their states and transitions) should be your first port of call if you need more detail. The workflow code can be found in Products.CMFCore.WorkflowTool and Products.DCWorkflow.

Installing a custom workflow

It is fairly common to create custom workflows when building a Plone website. Plone ships with several useful workflows, but security and approvals processes tend to differ from site to site, so we will often find ourselves creating our own workflows. Workflows are a form of customization. We should ensure they are installable using GenericSetup. However, the workflow XML syntax is quite verbose, so it is often easier to start from the ZMI and export the workflow definition to the filesystem.

Designing a workflow for Optilux Cinemas

It is important to get the design of a workflow policy right, considering the different roles that need to interact with the objects, and the permissions they should have in the various states.
Draft content should be visible to cinema staff, but not customers, and should go through review before being published. The following diagram illustrates this workflow: This workflow will be made the default, and should therefore apply to most content. However, we will keep the standard Plone policy of omitting workflow for the File and Image types. This means that permissions for content items of these types will be acquired from the Folder in which they are contained, making them simpler to manage. In particular, this means it is not necessary to separately publish linked files and embedded images when publishing a Page. Because we need to distinguish between logged-in customers and staff members, we will introduce a new role called StaffMember. This role will be granted View permission by default for all items in the site, much like a Manager or Site Administrator user is by default (although workflow may override this). We will let the Site Administrator role represent site administrators, and the Reviewer role represent content reviewers, as they do in a default Plone installation. We will also create a new group, Staff, which is given the StaffMember role. Among other things, this will allow us to easily grant the Reader, Editor and Contributor role in particular folders to all staff from the Sharing screen. The preceding workflow is designed for content production and review. This is probably the most common use for workflow in Plone, but it is by no means the only use case. For example, the author once used workflows to control the payment status on an Invoice content type. As you become more proficient with the workflow engine, you will find that it is useful in a number of scenarios.
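To round off the earlier doActionFor discussion, here is a minimal sketch of a guarded transition. The helper function is a hypothetical convenience, but WorkflowException is the real exception the tool raises when a transition is unavailable:

from Products.CMFCore.utils import getToolByName
from Products.CMFCore.WorkflowCore import WorkflowException

def try_publish(context):
    # Hypothetical helper: attempt the 'publish' transition and report
    # failure instead of raising, for cases where the transition is not
    # available from the current state or the user lacks permission.
    wftool = getToolByName(context, 'portal_workflow')
    try:
        wftool.doActionFor(context, action='publish')
    except WorkflowException:
        return False
    return True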
Creating and Modifying Filters in Moodle 1.9

Packt
06 May 2010
6 min read
Moodle filters modify content from the database as it is output to the screen, thus adding function to the display. An example of this is the multimedia filter, which can detect references to video and audio files, and can replace them with a "mini-player" embedded in the content.

How a filter works

Before trying to build a filter, it would help to understand how it works. To begin with, any text written to the screen in Moodle should be processed through the format_text function. The purpose of this function is to process the text such that it is always safe to be displayed. This means making sure there are no security issues and that any HTML used contains only allowed tags. Additionally, the output is run through the filter_text function, and this is the function we are interested in. This function takes the text destined for the screen, and applies all enabled filters to it. The resulting text will be the result of all of these filters.

filter_text applies each enabled filter to the text in the order defined in the filter configuration screen (shown in the following screenshot). The order is important; each filter will be fed the output of the previous filter's text. So it is always possible that one filter may change the text in a way that impacts the next filter.

Building a filter

Now it's time to build our own filter. To begin with, let's come up with a requirement. Let's assume that our organization, called "Learning is Fun", has a main website at http://2fun2learn.org. Now, we need any instance of the phrase learning is fun to be hyperlinked to the website URL every time it appears on the screen, as in the forum post shown in the following screenshots:

We can do this by implementing a policy with our content creators that forces them to create hyperlink tags around the phrase every time they write it. However, this will be difficult to enforce and will be fraught with errors. Instead, wouldn't it be easier if the system itself could recognize the phrase and create the hyperlink for us? That's what our filter will do.

Getting started

We need a name for our filter. It is the name that will be used for the directory the filter will reside in. We want a name that will describe what our filter does and will be unlikely to conflict with any other filter name. Let's call it "learningisfunlink".

To start with, create a new subdirectory in the /filter directory and call it learningisfunlink. Next, create a new file called filter.php. This is the only file required for a filter. Open the new filter.php file in your development environment. The filter only requires one function, which is named after the filter name and suffixed with _filter. Add the PHP open and close tags (<?php and ?>), and an empty function called learningisfunlink_filter that takes two arguments: a course ID and the text to filter. When completed, you should have a file that looks like this:

<?php
function learningisfunlink_filter($courseid, $text) {
    return $text;
}
?>

We now have the bare minimum required for the filter to be recognized by the system. It doesn't do what we want yet, but it will be present.

Creating the language file

Log in to the site (that makes use of your new filter) as an administrator. On the main page of your site, look for the Modules | Filters folder in the Site Administration block. Click on the Manage filters link.
If you have the default filter setup, you will see your new filter near the bottom of the list, called Learningisfunlink, as shown in the following screenshot:

Now, even though the name is reasonably descriptive, it would be better if it were a phrase similar to the others in the list; something like Main website link. To do this, we need to create a new directory in our /filter/learningisfunlink directory called lang/en_utf8/ (the en_utf8 part is language-specific; English in this case). In this directory, we create a new file called filter_learningisfunlink.php. This name is the concatenation of the prefix filter_ and the name of our filter. In this file, we need to add the following line:

$string['filtername'] = "Main website link";

This language string defines the text that will be displayed as the name of our filter, replacing the phrase Learningisfunlink that we saw earlier with Main website link. This file will contain any other strings that we may output to the screen, specifically for this filter. Once we have created this file, returning to the Manage filters page should now show our filter with the name that we provided for it in our language file.

Creating the filter code

We now have a filter that is recognized by the system and that displays the name we want it to. However, we haven't made it do anything. Let's create the code to add some functionality. Remember, what we want this filter to do is to search the text and add a hyperlink pointing to our website for all occurrences of the phrase "learning is fun". We could simply perform a search and replace on the text and return it, and that would be perfectly valid. However, for the sake of learning more about the Moodle API, we'll use some functions that are set up specifically for filters. To that end, we'll look at two code constructs: the filterobject class and the filter_phrases function, both of which are contained in the /lib/filterlib.php file.

The filterobject class defines an object that contains all of the information required by the filter_phrases function to change the text to the way the filter wants it to be. It contains the phrase to be filtered, the tag to start the replacement with, the tag to end the replacement with, whether to match case, whether a full match is required, and any replacement text for the match. An array of filterobjects is sent to the filter_phrases function, along with the text to search in. It's intended to be used when you have a number of phrases and replacements to apply at one time, but we'll use it anyway.

Let's initialize our filter strings:

$searchphrase = "learning is fun";
$starttag = '<a href="http://2fun2learn.org">';
$endtag = "</a>";

Now, let's create our filterobject:

$filterobjects = array();
$filterobjects[] = new filterobject($searchphrase, $starttag, $endtag);

Lastly, let's pass the structure to the filter_phrases function, along with the text to be filtered:

$text = filter_phrases($text, $filterobjects);

Our function now has the code to change any occurrence of the phrase "learning is fun" to a hyperlinked phrase. Let's go test it.
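Assembled from the snippets above, the complete filter.php would look something like the following sketch; the only assumption is pulling the three statements into the function body we created earlier:

<?php
function learningisfunlink_filter($courseid, $text) {
    // Phrase to look for, and the hyperlink tags to wrap around it.
    $searchphrase = "learning is fun";
    $starttag = '<a href="http://2fun2learn.org">';
    $endtag = '</a>';

    // filterobject and filter_phrases both live in /lib/filterlib.php.
    $filterobjects = array();
    $filterobjects[] = new filterobject($searchphrase, $starttag, $endtag);

    // Return the text with every occurrence of the phrase hyperlinked.
    return filter_phrases($text, $filterobjects);
}
?>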
CMS Made Simple 1.6: Getting Started with an E-commerce Website

Packt
19 Mar 2010
10 min read
With the Products module, you can manage:

Products
Product attributes that will have an impact on the price (like size or color)
Categories
Product hierarchy
Custom fields

This module is the basis for all other modules that you will use later on. After all, you cannot start a shop if you do not have any place to store the products. Install the Products module with the Module Manager. Pay attention to the Dependencies section of the module before installing it. There are two dependencies (the CGExtensions and CGSimpleSmarty modules) that provide convenience APIs, reusable forms, and Smarty tags for use in other modules. If you are not a programmer, you probably will not need to do anything with these modules besides adjusting some preferences if you ever need them. In the workshop described here, you just need to install them.

Time for action – adding the first product

After the Products module is installed, we will display it on the page Shop and add the first product to it as follows:

Create a new content page Shop (Content | Pages | Add New Content).
Add the Smarty tag {Products} into the field Content of the page. If you view the page in a browser, it will not show anything at this time, as you have not added any product to the shop so far.
In the admin console of CMS Made Simple, click on Content | Product Manager.
Under the Products tab, click on the Add A Product link and add your product as shown in the following screenshot:
Click on Submit and see the Shop page on your website.

What just happened?

You have added the first product to the Products module. This product is displayed on the page with the Price and Weight (we can delete this field later on). Click the product link to see the detailed view of the product. The template looks very technical, but with some HTML, CSS, and Smarty knowledge you can change its look and feel later on. Let's concentrate on the functionality of the module first and not on the design. Add some more products in the Product Manager and see the list of products on the Shop page. Note that the detailed view of every product is displayed in the same way.

In the Products module, some fields, like Price and Weight, are already defined. But you will need to add your own fields.

Creating custom fields

Usually one or more pictures of the product can be found in an online shop. However, there is no special image field where you can upload the product picture. Luckily, you can add as many custom fields as you need. In the next step, you will create a custom field where the image of the product can be uploaded.

In the admin area of the Product Manager, choose the tab Field Definitions and click on the link Add A Field.
Name is a kind of technical field that is not displayed to your visitors. You should not use any special characters or spaces in the name of the field. Use only letters and numbers; no dashes, slashes, or anything else non-alphanumeric.
The Prompt field is the label of the field that you will see in the admin area of the Product Manager while adding or editing products. You can use any characters in this field.
The Type of the field should be Image. By selecting this type you ensure that the field is displayed as a field for file uploads in the admin area. This field will also be validated, so that only images can be uploaded here. Additionally, thumbnails for the uploaded images (small preview versions) will be created automatically after upload.
Let the field be public by selecting the checkbox for Is this a public field?
It means that the content of the field (the image itself) will be shown to the visitors of your shop. If you make it private, only the administrator of the website can see the field in the admin area of the module. Save the field.

This field is automatically added to the detail template on the page and the editing view of the product in the admin area of the Product Manager. To test the field, open any product for editing in the admin area, find the field Product image (Prompt), and upload an image for the product using this field. Check the display of the field in the detailed view of the product on the website. The small preview version of the product is added to the section Custom Fields of the detailed view. We are still not concerned with how it looks, but with how it works. We will change the detailed view of the product when we are ready with all the custom fields and the product hierarchy.

Image already exists

When you try to upload the same image twice, you will get an error saying that the image has already been uploaded. To check what images are already saved for the product and delete them, open Content | File Manager in the admin console. Find the folder Products and then the folder named product_[ID]. The ID of the product is displayed in the list of products in the admin area. Click on this folder and remove the images already uploaded. Now, you can upload the same image in the admin area of the Product Manager.

Define your own fields

Create as many custom fields as you need to display and manage the product. With the Type of the field, you decide how the field is displayed in the admin area. The output of the field on the page can be fully customized and does not depend on the type. If you need a Product Number field, create a new custom field (Text Input) with a maximum length of 12 characters and make the field public. Then edit each of your products and enter a number in this field. You can adjust the order of the fields under the Field Definitions tab. Again, this order only applies to how the admin area for product management looks; the output on the page can be completely different.

Creating a product hierarchy

Next, let us create a product hierarchy. In the official shop that I am trying to reproduce here, there are four hierarchy items:

Shirts (short)
Shirts (long)
Home & Office
Mugs

You should understand the difference between product categories and product items in the Product Manager module. Product categories are a kind of tag for the products. It is not possible to arrange them in a hierarchy. However, you can assign one product to multiple categories if you like. In contrast to categories, a product can belong to only one hierarchy item. That means the structure above should be implemented as a hierarchy and not as categories. One product cannot be a shirt and a mug at the same time. We will use categories later on to mark the products as:

New
Popular
Discounted

Categories will allow you to make one product both new and discounted at the same time. A hierarchy would not, as multiple assignment is not possible.

In the admin area of Product Manager, click on the Product Hierarchy tab and create the four hierarchy items displayed in the first list. It is your choice whether you want to add any description or image to the hierarchy or leave it empty. Once the hierarchy is created, go through your already created products and assign them to the proper hierarchy item. The hierarchy can now be displayed in the sidebar navigation on the page.
Open the main page template (Layout | Templates) and find the section with sidebar navigation. Add the Smarty tag for the product hierarchy shown as follows:

{Products action="hierarchy"}

Customizing product templates

The display of the product hierarchy template is very technical. Let's customize all the templates for the module. There are three of them:

Summary Templates
Detail Templates
Hierarchy Report Templates

Let's start with the Hierarchy Report Templates. This template defines how the hierarchy in the sidebar is displayed. In the admin area of the Product Manager, click on the Hierarchy Report Templates tab and find the list of existing templates for the hierarchy. The template Sample is the one that is used by default. You can customize this template or create your own by clicking on the New Template link. I choose the second way. It is generally advisable not to change sample templates, but to create your own and set them as default. This way you can delete anything from the custom template and use the Sample template for reference if you need parts removed from the custom template. For the template name, I chose My Shop. However, you can use any name you wish. In the Template Text field, the sample template code is already suggested. Leave this code as it is and submit the new template. Now you see two templates in the list. Make the template My Shop the default one by clicking on the red cross in the Default column.

Let's see what we have in the template and what we actually need. Open the new template for editing. The following Smarty variables are available:

{$hierarchy_item.name}: the name of the hierarchy item.
{$hierarchy_item.id}: the ID of the hierarchy item.
{$upurl}: the URL to the parent hierarchy item. Only applicable if there is more than one hierarchy level.
{$mod}: the object containing all the information about the Products module. In the template, the object is used to get translations: {$mod->Lang('parent')} returns the translation for the key parent from the translation file. You can replace this variable with your custom text if your website is monolingual and the language of the website will never be changed.
{$parent}: this array is supposed to hold the information about the parent item. However, it is not assigned in the current version of the module and cannot be used.
{$child_nodes}: the array that contains information about all child hierarchy items. The information from this array:
  {$child_nodes.id}: ID of the hierarchy item
  {$child_nodes.name}: name of the hierarchy item
  {$child_nodes.description}: description of the hierarchy item
  {$child_nodes.parent_id}: ID of the parent hierarchy item
  {$child_nodes.image}: the name of the image file for the hierarchy item
  {$child_nodes.thumbnail}: the name of the thumbnail file for the hierarchy item
  {$child_nodes.long_name}: the full name of the hierarchy item (including the names of all parents)
  {$child_nodes.extra1}: the value saved in the Extra Field 1
  {$child_nodes.extra2}: the value saved in the Extra Field 2
  {$child_nodes.downurl}: the URL for this hierarchy item
{$hierarchy_image_location}: the path to the folder where images for the product are saved.
{$hierarchy_item}: an array that contains the ID of the actual hierarchy item.
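To see how these variables fit together, a stripped-down Hierarchy Report Template could look like the following sketch; the markup and class name are assumptions rather than the module's sample code:

<ul class="product-hierarchy">
{foreach from=$child_nodes item='node'}
  {* Each child item links to its own hierarchy page via downurl *}
  <li><a href="{$node.downurl}">{$node.name}</a></li>
{/foreach}
</ul>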
Components - Reusing Rules, Conditions, and Actions

Packt
03 Jan 2013
4 min read
(For more resources related to this topic, see here.)

Getting ready

Enable the Rules and Rules UI modules on your site.

How to do it...

Go to Configuration | Workflow | Rules | Components.
Add a new component and set the plugin to Condition set (AND).
Enter a name for the component and add a parameter Entity | Node.
Add a Condition, Data comparison, set the value to the author of the node, set OPERATOR to equals, enter 1 in the Data value field, and tick Negate.
Add an OR group by clicking on Add or, as shown in the following screenshot:
Add a Condition, Node | Content is of type, and set it to Article.
Add a Condition, Entity | Entity has field, set Entity to node, and select the field, field_image, as shown in the following screenshot:
Organize the Conditions so that the last two Conditions are in the OR group we created before.
Create a new rule configuration and set the Event to Comment | After saving a new comment.
Add a new Condition and select the component that we created. An example is shown in the following screenshot:
Select comment:node as the parameter.
Add a new Action, System | Show a message on the site, and configure the message.

How it works...

Components require parameters to be specified, which will be used as placeholders for the objects we want to execute a rule configuration on. Depending on what our goal is, we can select from the core Rules data types, entities, or lists. In this example, we've added a Node parameter to the component, because we wanted to see who the node's author is, whether it's an article, or whether it has an image field. Then in our Condition, we've provided the actual object on which we've evaluated the Condition. If you're familiar with programming, then you'll see that components are just like functions; they expect parameters and can be re-used in other scenarios.

There's more...

The main benefit of using Rules components is that we can re-use complex Conditions, Actions, and other rule configurations. That means that we don't have to configure the same settings over and over again. Instead we can create components and use them in our rule configurations. Other benefits include exportability: components can be exported individually, which is a very useful addition when using configuration management, such as Features. Components can also be executed from the UI, which is very useful for debugging and can also save a lot of development time.

Other component types

Apart from Condition sets, there are a few other component types we can use. They are as follows:

Action set

As the name suggests, this is a set of Actions, executed one after the other. It can be useful when we have a certain chain of Actions that we want to execute in various scenarios.

Rule

We can also create a rule configuration as a component to be used in other rule configurations. Think about a scenario where you want to perform an action on a list of node references (which would require a looped Action) but only if those nodes were created before 2012. While it is not possible to create a Condition within an Action, we can create a Rule component, so we can add a Condition and an Action within the component itself and then use it as the Action of the other rule configuration.

Rule set

Rule sets are a set of Rules, executed one after the other. They can be useful when we want to execute a chain of Rules when an event occurs.

Parameters and provided variables

Condition sets require parameters, which are input data for the component.
These are the variables that need to be specified so that the Condition can evaluate to FALSE or TRUE. Action sets, Rules, and Rule sets can provide variables. That means they can return data after the action is executed.

Summary

This article explained the benefits of using Rules components by creating a Condition that can be re-used in other rule configurations.

Resources for Article:

Further resources on this subject:
Drupal 7 Preview [Article]
Creating Content in Drupal 7 [Article]
Drupal FAQs [Article]
Your First Application

Packt
15 Jan 2014
7 min read
(For more resources related to this topic, see here.)

Sketching out the application

We are going to build a browsable database of cat profiles. Visitors will be able to create pages for their cats and fill in basic information such as the name, date of birth, and breed for each cat. This application will support the default Create-Retrieve-Update-Delete operations (CRUD). We will also create an overview page with the option to filter cats by breed. All of the security, authentication, and permission features are intentionally left out since they will be covered later.

Entities, relationships, and attributes

Firstly, we need to define the entities of our application. In broad terms, an entity is a thing (person, place, or object) about which the application should store data. From the requirements, we can extract the following entities and attributes:

Cats, which have a numeric identifier, a name, a date of birth, and a breed
Breeds, which only have an identifier and a name

This information will help us when defining the database schema that will store the entities, relationships, and attributes, as well as the models, which are the PHP classes that represent the objects in our database.

The map of our application

We now need to think about the URL structure of our application. Having clean and expressive URLs has many benefits. On a usability level, the application will be easier to navigate and look less intimidating to the user. For frequent users, individual pages will be easier to remember or bookmark and, if they contain relevant keywords, they will often rank higher in search engine results.

To fulfill the initial set of requirements, we are going to need the following routes in our application:

Method  Route                 Description
GET     /                     Index
GET     /cats                 Overview page
GET     /cats/breeds/:name    Overview page for specific breed
GET     /cats/:id             Individual cat page
GET     /cats/create          Form to create a new cat page
POST    /cats                 Handle creation of new cat page
GET     /cats/:id/edit        Form to edit existing cat page
PUT     /cats/:id             Handle updates to cat page
GET     /cats/:id/delete      Form to confirm deletion of page
DELETE  /cats/:id             Handle deletion of cat page

We will shortly learn how Laravel helps us to turn this routing sketch into actual code. If you have written PHP applications without a framework, you can briefly reflect on how you would have implemented such a routing structure. To add some perspective, this is what the second to last URL could have looked like with a traditional PHP script (without URL rewriting): /index.php?p=cats&id=1&_action=delete&confirm=true.

The preceding table can be prepared using a pen and paper, in a spreadsheet editor, or even in your favorite code editor using ASCII characters. In the initial development phases, this table of routes is an important prototyping tool that forces you to think about URLs first and helps you define and refine the structure of your application iteratively.

If you have worked with REST APIs, this kind of routing structure will look familiar to you. In RESTful terms, we have a cats resource that responds to the different HTTP verbs and provides an additional set of routes to display the necessary forms. If, on the other hand, you have not worked with RESTful sites, the use of the PUT and DELETE HTTP methods might be new to you. Even though web browsers do not support these methods for standard HTTP requests, Laravel uses a technique that other frameworks such as Rails use, and emulates those methods by adding a _method input field to the forms.
This way, they can be sent over a standard POST request and are then delegated to the correct route or controller method in the application. Note also that none of the form submission endpoints are handled with a GET method. This is primarily because they have side effects; a user could trigger the same action multiple times accidentally when using the browser history. Therefore, when they are called, these routes never display anything to the users. Instead, they redirect them after completing the action (for instance, DELETE /cats/:id will redirect the user to GET /cats).

Starting the application

Now that we have the blueprints for the application, let's roll up our sleeves and start writing some code. Start by opening a new terminal window and create a new project with Composer, as follows:

$ composer create-project laravel/laravel cats --prefer-dist
$ cd cats

Once Composer finishes downloading Laravel and resolving its dependencies, you will have a directory structure identical to the one presented previously.

Using the built-in development server

To start the application, unless you are running an older version of PHP (5.3.*), you will not need a local server such as WAMP on Windows or MAMP on Mac OS, since Laravel uses the built-in development server that is bundled with PHP 5.4 or later. To start the development server, we use the following artisan command:

$ php artisan serve

Artisan is the command-line utility that ships with Laravel, and its features will be covered in more detail later. Next, open your web browser and visit http://localhost:8000; you will be greeted with Laravel's welcome message.

If you get an error telling you that the php command does not exist or cannot be found, make sure that it is present in your PATH variable. If the command fails because you are running PHP 5.3 and you have no upgrade possibility, simply use your local development server (MAMP/WAMP) and set Apache's DocumentRoot to point to cats-app/public/.

Writing the first routes

Let's start by writing the first two routes of our application inside app/routes.php. This file already contains some comments as well as a sample route. You can keep the comments but you must remove the existing route before adding the following routes:

Route::get('/', function() {
  return "All cats";
});

Route::get('cats/{id}', function($id) {
  return "Cat #$id";
});

The first parameter of the get method is the URI pattern. When a pattern is matched, the closure function in the second parameter is executed with any parameters that were extracted from the pattern. Note that the slash prefix in the pattern is optional; however, you should not have any trailing slashes. You can make sure that your routes work by opening your web browser and visiting http://localhost:8000/cats/123. If you are not using PHP's built-in development server and are getting a 404 error at this stage, make sure that Apache's mod_rewrite configuration is enabled and works correctly.

Restricting the route parameters

In the pattern of the second route, {id} currently matches any string or number. To restrict it so that it only matches numbers, we can chain a where method to our route as follows:

Route::get('cats/{id}', function($id) {
  return "Cat #$id";
})->where('id', '[0-9]+');

The where method takes two arguments: the first one is the name of the parameter and the second one is the regular expression pattern that it needs to match.
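If the same constraint is needed on many routes, Laravel 4 also offers global pattern registration; a small sketch, assuming the same routes file:

// app/routes.php
// Register the constraint once; every route parameter named 'id'
// will then only match one or more digits.
Route::pattern('id', '[0-9]+');

Route::get('cats/{id}', function($id) {
    return "Cat #$id";
});

This avoids repeating ->where('id', '[0-9]+') on each route that uses the {id} parameter.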
We learned how to define routes, prepare the models of the application, and interact with them. Moreover, we have had a glimpse at the many powerful features of Eloquent, Blade, as well as the other convenient helpers in Laravel to create forms and input fields: all of this in under 200 lines of code! Resources for Article: Further resources on this subject: Laravel 4 - Creating a Simple CRUD Application in Hours [Article] Creating and Using Composer Packages [Article] Building a To-do List with Ajax [Article]
Using Mongoid

Packt
09 Dec 2013
12 min read
(For more resources related to this topic, see here.)

Introducing Origin

Origin is a gem that provides the DSL for Mongoid queries. At first glance, a question may arise as to why we need a DSL for Mongoid queries: if we are finally going to convert the query to a MongoDB-compliant hash, then why do we need a DSL? Origin was extracted from the Mongoid gem and put into a new gem so that there is a standard for querying. It has no dependency on any other gem and is a standalone, pure query builder. The idea was that this could be a generic DSL that can be used even without Mongoid! So, now we have a very generic and standard querying pattern.

For example, in Mongoid 2.x we had the criteria any_in and any_of, and no direct support for the and, or, and nor operations. In Mongoid 2.x, the only way we could fire a $or or a $and query was like this:

Author.where("$or" => {'language' => 'English', 'address.city' => 'London'})

And now in Mongoid 3, we have a cleaner approach:

Author.or(:language => 'English', 'address.city' => 'London')

Origin also provides good selectors directly in our models. So, this is now much more readable:

Book.gte(published_at: Date.parse('2012/11/11'))

Memory maps, delayed sync, and journals

As we have seen earlier, MongoDB stores data in memory-mapped files of at most 2 GB each. After the data is loaded for the first time into the memory-mapped files, we get almost memory-like speeds for access instead of disk I/O, which is much slower. These memory-mapped files are preallocated to ensure that there is no delay in file generation while saving data. However, to ensure that the data is not lost, it needs to be persisted to the disk. This is achieved by journaling. With journaling, every database operation is written to the oplog collection, and that is flushed to disk every 100 ms. Journaling is turned on by default in the MongoDB configuration. This is not the actual data but the operation itself. This helps in better recovery (in case of any crash) and also ensures the consistency of writes. The data that is written to the various collections is flushed to the disk every 60 seconds. This ensures that the data is persisted periodically and also ensures that the speed of data access is almost as fast as memory.

MongoDB relies on the operating system for the memory management of its memory-mapped files. This has the advantage of getting inherent OS benefits as the OS is improved, and the disadvantage of a lack of control over how memory is managed by MongoDB.

However, what happens if something goes wrong (the server crashes, the database stops, or the disk is corrupted)? To ensure durability, whenever data is saved in files, the action is logged to a file in chronological order. This is the journal entry, which is also a memory-mapped file but is synced with the disk every 100 ms. Using the journal, the database can be easily recovered in case of any crash. So, in the worst case scenario, we could potentially lose 100 ms of information. This is a fair price to pay for the benefits of using MongoDB.

MongoDB journaling makes it a very robust and durable database. However, it also helps us decide when to use MongoDB and when not to use it. 100 ms is a long time for some services, such as financial core banking or stock price updates, and MongoDB is not recommended for such applications. For most use cases that do not involve heavy multi-table transactions of the kind found in financial applications, MongoDB can be suitable.

All these things are handled seamlessly, and we don't usually need to change anything. We can control this behavior via the MongoDB configuration, but usually that's not recommended. Let's now see how we save data using Mongoid.

Updating documents and attributes

As with the ActiveModel specification, save will update the changed attributes and return the updated object; otherwise it will return false on failure. The save! function will raise an exception on error. In both cases, if we pass validate: false as a parameter to save, it will bypass the validations.

A lesser-known persistence option is the upsert action. An upsert creates a new document if it does not find it, and overwrites the object if it finds it. A good reason to use upsert is in the find_and_modify action. For example, suppose we want to reserve a book in our Sodibee system, and we want to ensure that at any one point there can be only one reservation for a book. In a traditional scenario:

t1: Request-1 searches for a book which is not reserved, and finds it
t2: Now, it saves the book with the reservation information
t3: Request-2 searches for a reservation for the same book and finds that the book is reserved
t4: Request-2 handles the situation with either an error or by waiting for the reservation to be freed

So far so good! However, in a concurrent model, especially for web applications, this creates problems:

t1: Request-1 searches for a book which is not reserved, and finds it
t2: Request-2 searches for a reservation for the same book and also gets back that book, since it's not yet reserved
t3: Request-1 saves the book with its reservation information
t4: Request-2 now overwrites the previous update and saves the book with its reservation information

Now we have a situation where two requests think that their reservation of the book was successful, and that is against our expectations. This is a typical problem that plagues most web applications. The various ways in which we can solve this are discussed in the subsequent sections.
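Before getting there, here is a sketch of one remedy, using the find_and_modify action mentioned above; the model, fields, and title are assumptions for illustration:

# Find an unreserved copy and mark it reserved in one atomic,
# server-side operation, so a concurrent request cannot claim the
# same book between the read and the write.
book = Book.where(title: 'The Hobbit', reserved: false)
           .find_and_modify({ '$set' => { reserved: true } }, new: true)

if book
  puts "Reserved: #{book.title}"
else
  puts "Another request got there first"
end

Because the query and the update happen as a single server-side operation, the window between read and write in the scenario above disappears.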
Write concern

MongoDB helps us ensure write consistency. This means that when we write something to MongoDB, it now guarantees the success of the write operation. Interestingly, this is a configurable option and is set to acknowledged by default. This means that the write is guaranteed because it waits for an acknowledgement before returning success. In earlier versions of Mongoid, safe: true was turned off by default. This meant that the success of the write operation was not guaranteed. The write concern is configured in Mongoid.yml as follows:

development:
  sessions:
    default:
      hosts:
        - localhost:27017
      options:
        write:
          w: 1

The default write concern in Mongoid is configured with w: 1, which means that the success of a write operation is guaranteed. Let's see an example:

class Author
  include Mongoid::Document

  field :name, type: String

  index({name: 1}, {unique: true, background: true})
end

Indexing blocks read and write operations. Hence, it's recommended to configure indexing in the background in a Rails application.

We shall now start a Rails console and see how this reacts to a duplicate key index by creating two Author objects with the same name.
irb> Author.create(name: "Gautam")
=> #<Author _id: 5143678345db7ca255000001, name: "Gautam">
irb> Author.create(name: "Gautam")
Moped::Errors::OperationFailure: The operation: #<Moped::Protocol::Command
  @length=83 @request_id=3 @response_to=0 @op_code=2004 @flags=[]
  @full_collection_name="sodibee_development.$cmd" @skip=0 @limit=-1
  @selector={:getlasterror=>1, :w=>1} @fields=nil>
failed with error 11000: "E11000 duplicate key error index: sodibee_development.authors.$name_1 dup key: { : "Gautam" }"

As we can see, it has raised a duplicate key error and the document is not saved. Now, let's have some fun. Let's change the write concern to unacknowledged:

development:
  sessions:
    default:
      hosts:
        - localhost:27017
      options:
        write:
          w: 0

The write concern is now set to unacknowledged writes. That means we do not wait for the MongoDB write to eventually succeed, but assume that it will. Now let's see what happens with the same command that had failed earlier:

irb> Author.where(name: "Gautam").count
=> 1
irb> Author.create(name: "Gautam")
=> #<Author _id: 5287cba54761755624000000, name: "Gautam">
irb> Author.where(name: "Gautam").count
=> 1

There seems to be a discrepancy here. Though Mongoid create returned successfully, the data was not saved to the database. Since we specified background: true for the name index, the document creation seemed to succeed, as MongoDB had not indexed it yet and we did not wait for acknowledgement of the write operation. So, when MongoDB tried to index the data in the background, it realized that the index criterion was not met (since the index is unique), and it deleted the document from the database. Since that happened in the background, there is no way to figure this out on the console or in our Rails application. This leads to an inconsistent result.

So, how can we solve this problem? There are various ways:

We leave the Mongoid default write concern configuration alone. By default, it is w: 1 and it will raise an exception. This is the recommended approach, as prevention is better than cure!
Do not specify the background: true option. This will create indexes in the foreground. However, this approach is not recommended, as it can cause a drop in performance because index creation blocks read and write access.
Add drop_dups: true. This deletes data, so you have to be really careful when using this option.

Other options to the index command create different types of indexes, as shown in the following list:

sparse: index({twitter_name: 1}, {sparse: true}). This creates a sparse index; that is, only the documents containing the indexed fields are indexed. Use this with care, as you can get incomplete results.
2d, 2dsphere: index({:location => "2dsphere"}). This creates a two-dimensional spherical index.

Text index

MongoDB 2.4 introduced text indexes, which are as close to free-text search indexes as it gets. However, they do only basic text indexing; that is, they support stop words and stemming. They also assign a relevance score to each search. Text indexes are still an experimental feature in MongoDB, and they are not recommended for extensive use. Use ElasticSearch, Solr (Sunspot), or ThinkingSphinx instead.

The following code snippet shows how we can specify a text index with weightage:

index({
  "name" => 'text',
  "last_name" => 'text'
},
{
  weights: {
    'name' => 10,
    'last_name' => 5,
  },
  name: 'author_text_index'
})

There is no direct search support in Mongoid (as yet).
So, if you want to invoke a text search, you need to hack around a little:

irb> srch = Mongoid::Contextual::TextSearch.new(Author.collection, Author.all, 'john')
=> #<Mongoid::Contextual::TextSearch
  selector: {}
  class: Author
  search: john
  filter: {}
  project: N/A
  limit: N/A
  language: default>
irb> srch.execute
=> {"queryDebugString"=>"john||||||", "language"=>"english", "results"=>[{"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058345db7c843f00030b'), "name"=>"Bettye Johns"}}, {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058345db7c843f00046d'), "name"=>"John Pagac"}}, {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058345db7c843f000578'), "name"=>"Jeanie Johns"}}, {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058445db7c843f0007e7')... {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058a45db7c843f0025f1'), "name"=>"Alford Johns"}}], "stats"=>{"nscanned"=>25, "nscannedObjects"=>0, "n"=>25, "nfound"=>25, "timeMicros"=>31103}, "ok"=>1.0}

By default, text search is disabled in the MongoDB configuration. We need to turn it on by adding setParameter = textSearchEnabled=true in the MongoDB configuration file, typically /usr/local/mongo.conf.

This returns a result with statistical data as well as the documents and their relevance score. Interestingly, it also specifies the language. There are a few more things we can do with the search result. For example, we can see the statistical information as follows:

irb> srch.stats
=> {"nscanned"=>25, "nscannedObjects"=>0, "n"=>25, "nfound"=>25, "timeMicros"=>31103}

We can also convert the data into our Mongoid model objects by using project, as shown in the following command:

irb> srch.project(:name).to_a
=> [#<Author _id: 51fc058345db7c843f00030b, name: "Bettye Johns", last_name: nil, password: nil>, #<Author _id: 51fc058345db7c843f00046d, name: "John Pagac", last_name: nil, password: nil>, #<Author _id: 51fc058345db7c843f000578, name: "Jeanie Johns", last_name: nil, password: nil> ...

Some of the important things to remember are as follows:

Text indexes can be very heavy in memory. They can return the documents, so the result can be large.
We can use multiple keys (or filters) along with a text search. For example, the index with index({'state': 1, name: 'text'}) will mandate the use of the state for every text search that is specified.
A search for "john doe" will result in a search for "john" or "doe" or both.
A search for "john" and "doe" will search for all "john" and "doe" in a random order.
A search for "\"john doe\"", that is, with escaped quotes, will search for documents containing the exact phrase "john doe".

A lot more data can be found at http://docs.mongodb.org/manual/tutorial/search-for-text/

Summary

This article provides an excellent reference for using Mongoid. The article has examples with code samples and explanations that help in understanding the various features of Mongoid.

Resources for Article:

Further resources on this subject:
Installing phpMyAdmin [Article]
Integrating phpList 2 with Drupal [Article]
Documentation with phpDocumentor: Part 1 [Article]