
How-To Tutorials - Web Development

1802 Articles

Dynamic Theming in Drupal 6 - Part 1

Packt
24 Oct 2009
9 min read

Using Multiple Templates

Most advanced sites built today employ multiple page templates. In this section, we will look at the most common scenarios and how to address them with a PHPTemplate theme. While there are many good reasons for running multiple page templates, you should not create additional templates solely for the purpose of disabling regions to hide blocks. While that approach will work, it results in a performance hit for the site, as the system still produces the blocks, only to wind up not displaying them. The better practice is to control your block visibility.

Using a Separate Admin Theme

With the arrival of Drupal 5, one of the most common Drupal user requests was satisfied: the ability to easily designate a separate admin theme. In Drupal, designating a separate theme for your admin interface remains a simple matter that you can handle directly from within the admin system. To designate a separate theme for your admin section, follow these steps:

1. Log in and access your site's admin system.
2. Go to Administer | Site configuration | Administration theme.
3. Select the theme you desire from the drop-down box listing all the installed themes.
4. Click Save configuration, and your selected theme should appear immediately.

Multiple Page or Section Templates

In contrast to the complete ease of setting up a separate administration theme is the comparative difficulty of setting up multiple templates for different pages or sections. The bad news is that there is no admin system shortcut; you must manually create the various templates and customize them to suit your needs. The good news is that creating and implementing additional templates is not difficult, and it is possible to attain a high degree of granularity with the techniques described below. Indeed, should you be so inclined, you could literally define a distinct template for each individual page of your site.

Drupal employs an order of precedence based on a naming convention (or "suggestions", as they are now called on the Drupal site). You can unlock the granularity of the system through proper application of the naming convention. It is possible, for example, to associate templates with every element on the path, or with specific users, or with a particular functionality, all through the simple process of creating a new template and naming it appropriately. The system searches for alternative templates, preferring the specific to the general, and failing to find a more specific template, applies the default page.tpl.php.

Consider the following example of the order of precedence and the naming convention in action, using the templates page-node.tpl.php, page-node-1.tpl.php, and page-node-edit.tpl.php. These custom templates could be used to override the default page.tpl.php and theme either any node page (page-node.tpl.php), only the node with an ID of 1 (page-node-1.tpl.php), or the node in edit mode (page-node-edit.tpl.php), depending on the name given to the template. In this example, the page-node templates are applied to the node in full page view. Should you instead wish to theme the node content itself, you would need to intercept and override the default node.tpl.php. The fundamental methodology of the system is to use the first template file it finds and ignore other, more general templates (if any). This basic principle, combined with proper naming of the templates, gives you control over the template used in various situations.
The default suggestions provided by the Drupal system should be sufficient for the vast majority of theme developers. However, if you find that you need additional suggestions beyond those provided by the system, it is possible to extend your site and add new suggestions. See http://drupal.org/node/223440 for a discussion of this advanced Drupal theming technique.

Let's take a series of four examples to show how this feature can be used to provide solutions to common problems:

1. Create a unique homepage template.
2. Use a different template for a group of pages.
3. Assign a specific template to a specific page.
4. Designate a specific template for a specific user.

Create a Unique Homepage Template

Let's assume that you wish to set up a unique template for the homepage of a site. Employing separate templates for the homepage and the interior pages is one of the most common requests web developers hear. With Drupal, you can achieve some variety within a theme, without having to create a new template, by controlling the visibility of blocks on the homepage. If that simple technique does not give you enough flexibility, you will need to consider using a dedicated template that is purpose-built for your homepage content.

The easiest way to set up a distinct front page template is to copy the existing page.tpl.php file, rename it, and make your changes to the new file. Alternatively, you can create a new file from scratch. In either situation, your front-page-specific template must be named page-front.tpl.php. The system will automatically display your new file for the site's homepage and use the default page.tpl.php for the rest of the site. Note that page-front.tpl.php applies to whatever page you specify as the site's front page via the site configuration settings. To override the default homepage setting, visit Administer | Site configuration | Site information, then enter the URL you desire into the field labeled Default home page.

Use a Different Template for a Group of Pages

Next, let's associate a template with a group of pages. You can provide a template to be used by any distinct group of pages, using as your guide the path for the pages. For example, to theme all the user pages you would create the template page-user.tpl.php. To theme according to the type of content, you can associate your page template with a specific node type; for example, all blog entry pages can be controlled by the file page-blog.tpl.php. The list below presents suggestions you can employ to theme various pages associated with the default functionalities in the Drupal system.

Suggestion: Affected Page
- page-user.tpl.php: user pages
- page-blog.tpl.php: blog pages (but not the individual node pages)
- page-forum.tpl.php: forum pages (but not the individual node pages)
- page-book.tpl.php: book pages (but not the individual node pages)
- page-contact.tpl.php: contact form (but not the form content)

Assign a Specific Template to a Specific Page

Taking this to its extreme, you can associate a specific template with a specific page. By way of example, assume we wish to provide a unique template for a specific content item. Let's assume our example page is located at http://www.demosite.com/node/2/edit. The path of this specific page gives you a number of options. We could theme this page with any of the following templates (in addition to the default page.tpl.php):

- page-node.tpl.php
- page-node-2.tpl.php
- page-node-edit.tpl.php

A Note on Templates and URLs

Drupal bases the template order of precedence on the default path generated by the system.
If the site is using a module like pathauto, which alters the path that appears to site visitors, remember that your templates will still be selected based on the original system paths. The exception is page-front.tpl.php, which is applied to whatever page you specify as the site's front page via the site configuration settings (Administer | Site configuration | Site information).

Designate a Specific Template for a Specific User

Assume that you want to add a personalized theme for the user with the ID of 1 (the Drupal equivalent of a Super Administrator). To do this, copy the existing page.tpl.php file, rename it to reflect its association with the specific user, and make any changes to the new file. To associate the new template file with the user, name the file page-user-1.tpl.php. Now, when user 1 logs into the site, they will be presented with this template. Only user 1 will see this template, and only when they are logged in and visiting the account page.

The official Drupal site includes a collection of snippets relating to the creation of custom templates for user profile pages. The discussion is instructive and worth reviewing, though you should always be a bit cautious with user-submitted code snippets, as they are not official releases from the Drupal Association. See http://drupal.org/node/35728.

Dynamically Theming Page Elements

In addition to being able to style particular pages or groups of pages, Drupal and PHPTemplate make it possible to provide specific styling for different page elements.

Associating Elements with the Front Page

Drupal provides $is_front as a means of determining whether the page currently displayed is the front page. $is_front is set to true if Drupal is rendering the front page; otherwise it is set to false. We can use $is_front in our page.tpl.php file to help toggle the display of items we want to associate with the front page. To display an element on only the front page, make it conditional on the state of $is_front. For example, to display the site mission on only the front page of the site, wrap $mission (in your page.tpl.php file) as follows:

```php
<?php if ($is_front): ?>
  <div id="mission"><?php print $mission; ?></div>
<?php endif; ?>
```

To set up an alternative condition, so that one element will appear on the front page but a different element will appear on other pages, modify the statement like this:

```php
<?php if ($is_front): ?>
  // whatever you want to display on the front page
<?php else: ?>
  // what is displayed when not on the front page
<?php endif; ?>
```

$is_front is one of the default baseline variables available to all templates.


MooTools: Understanding the Foundational Basics

Packt
01 Aug 2011
9 min read

MooTools 1.3 Cookbook: over 110 highly effective recipes to turbo-charge the user interface of any web-enabled Internet application and web page.

MooTroduction

MooTools was conceived by Valerio Proietti and released under the MIT License in 2006. We send a great round of roaring applause to Valerio for creating the Moo.FX (My Object Oriented Effects) plugin for Prototype, a JavaScript abstraction library. That work gave life to an arguably more effects-oriented (and highly extensible) abstraction layer of its own: MooTools (My Object Oriented Tools).

Knowing our MooTools version

This recipe is an introduction to the different MooTools versions and how to be sure we are coding in the right version.

Getting ready

Not all MooTools versions are equal, nor are they backwards compatible! The biggest switch in compatibility came between MooTools 1.1 and MooTools 1.2. This minor version change caused clamor in the community, given the rather major changes included. In our experience, 1.2 and 1.3 MooTools scripts play well together, while 1.0 and 1.1 scripts tend to be agreeable as well. However, Moo's popularity spiked with version 1.1, and well-used scripts written with 1.0, like MooTabs, were upgraded to 1.1 when it was released. The exact note in Google Libraries on the version difference between 1.1 and 1.2 reads: "Since 1.1 versions are not compatible with 1.2 versions, specifying version "1" will map to the latest 1.1 version (currently 1.1.2)."

MooTools 1.1.1 has inline comments, which cause the uncompressed version to be about 180% larger than version 1.2.5 and 130% larger than the 1.3.0 release. When compressed with YUI compression, 1.1 and 1.2 weigh in at about 65K, while 1.3.0 with the CSS3 selectors is a modest 85K. In the code snippets, the compressed versions are denoted with a c.js file ending. Two great additions in 1.3.0 that account for most of the difference in size from 1.2.5 are Slick.Parser and Slick.Finder. We may not need CSS3 parsing, so we can download the MooTools Core with only the particular class or classes we need. Browse http://mootools.net/core/ and pick and choose the classes needed for the project. We should note that the best practice is to download all modules during development and pare down to what is needed when taking an application into production.

When we are more concerned with functionality than with performance and have routines that require backwards compatibility with MooTools 1.1, we can download the 1.2.5 version with the 1.1 classes from the MooTools download page at http://mootools.net/download. The latest MooTools version as of this writing is 1.3.0. All scripts within this cookbook are built and tested using MooTools version 1.3.0 as hosted by Google Libraries.

How to do it...

This is the basic HTML framework within which all recipes will be launched:

```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <title>MooTools Recipes</title>
  <meta http-equiv="content-type" content="text/html;charset=utf-8"/>
```

Note that the portion above is necessary but is not included in the other recipes to save space. Please do always include a DOCTYPE, and opening HTML, HEAD, TITLE, and META tags for the HTTP-EQUIV and CONTENT.
```html
  <script type="text/javascript" src="mootools-1.3.0.js"></script>
</head>
<body>
  <noscript>Your Browser has JavaScript Disabled. Please use industry best
  practices for coding in JavaScript; letting users know they are missing
  out is crucial!</noscript>
  <script type="text/javascript">
    // best practice: ALWAYS include a NOSCRIPT tag!
    var mooversion = MooTools.version;
    var msg = 'version: ' + mooversion;
    document.write(msg);
    // just for fun:
    var question = 'Use MooTools version ' + msg + '?';
    var yes = 'It is as you have requested!';
    var no = "Please change the mootools source attribute in HTML->head->script.";
    // give 'em ham
    alert((confirm(question) ? yes : no));
  </script>
</body>
</html>
```

How it works...

Inclusion of external libraries like MooTools is usually handled within the HEAD element of the HTML document. The NOSCRIPT tag will only be read by browsers that have JavaScript disabled. The SCRIPT tag may be placed directly within the layout of the page.

There's more...

Using the XHTML doctype (or any doctype, for that matter) allows your HTML to validate, helps browsers parse your pages faster, and helps the Document Object Model (DOM) behave consistently. When our HTML does not validate, our JavaScript errors will be more random and difficult to solve. Many seasoned developers have settled upon a favorite doctype. This allows them to become familiar with the long list of cross-browser oddities associated with that particular doctype. To further delve into doctypes, quirks mode, and other HTML specification esoterica, the heavily trafficked http://www.quirksmode.org/css/quirksmode.html provides an easy-to-follow and complete discourse.
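Tying the recipe together, here is a small, hypothetical guard (not from the cookbook) that uses the MooTools.version property shown above to warn when the loaded core is not the version the page was written against; the expected version string is an assumption you would adjust to match your own include:

```javascript
// Hypothetical version guard; assumes a MooTools core <script> tag has already run.
var expectedVersion = '1.3.0'; // adjust to the core file you actually ship

if (typeof MooTools === 'undefined') {
  alert('MooTools is not loaded; check the src attribute of your <script> tag.');
} else if (MooTools.version !== expectedVersion) {
  alert('Expected MooTools ' + expectedVersion + ' but found ' + MooTools.version + '.');
}
```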
Finding MooTools documentation both new and old

Browsing http://mootools.net/docs/core will afford us the opportunity to use the version of our choice. The 1.2/1.3 demonstrations at the time of writing are expanding nicely. Tabs in the demonstrations at http://mootools.net/demos display each of the important elements of the demonstration. MooTools had a major split at the minor revision number of 1.1. If working on a legacy project that still implements the deprecated MooTools version 1.1, take a shortcut to http://docs111.mootools.net. Copying the demonstrations line for line, without studying them to see how they work, may open our project up to malicious code.

Using Google Library's MooTools scripts

Let Google maintain the core files and provide the bandwidth to serve them.

Getting ready

Google is leading the way in helping MooTools developers save time in the arenas of development, maintenance, and hosting by working together with the MooTools developers to host and deliver compressed and uncompressed versions of MooTools to our website visitors. Hosting on their servers eliminates the resources required to host, the bandwidth required to deliver, and the developer time required to maintain a fully patched, up-to-date version. Usually we link to a minor version of a library to prevent major version changes that could cause unexpected behavior in our production code. The Google API key required to use Google Library can be obtained easily and quickly at http://code.google.com/apis/libraries/devguide.html#sign_up_for_an_api_key.

How to do it...

Once you have the API key, use the script tag method to include MooTools. For more information on loading the JavaScript API, see http://code.google.com/apis/libraries/devguide.html#load_the_javascript_api_and_ajax_search_module.

```html
<!--script type="text/javascript" src="mootools-1.3.0.js"></script-->
<!-- we've got ours commented out so that we can use google's here: -->
<script src="https://www.google.com/jsapi?key=OUR-KEY-HERE"
        type="text/javascript"></script>
<!-- the full src path is truncated for display here -->
<script src="https://ajax.googleapis.com/.../mootools-yui-compressed.js"
        type="text/javascript"></script>
</head>
<body>
<noscript>JavaScript is disabled.</noscript>
<script type="text/javascript">
  var mooversion = MooTools.version;
  var msg = 'MooTools version: ' + mooversion + ' from Google';
  // show the msg in two different ways (just because)
  document.write(msg);
  alert(msg);
</script>
```

Using google.load(), which is available to us when we include the Google Library API, we can make the inclusion code a bit more readable. See the line below that includes the string jsapi?key=. We replace OUR-KEY-HERE with our API key, which is tied to our domain name so Google can contact us if they detect a problem:

```html
<!--script type="text/javascript" src="mootools-1.3.0.js"></script-->
<!-- we've got ours commented out so that we can use google's here: -->
<script src="https://www.google.com/jsapi?key=OUR-KEY-HERE"
        type="text/javascript"></script>
<script type="text/javascript">
  google.load("mootools", "1.2.5");
</script>
</head>
<body>
<noscript>JavaScript is disabled.</noscript>
<script type="text/javascript">
  var mooversion = MooTools.version;
  var msg = 'MooTools version: ' + mooversion + ' from Google';
  // show the msg in two different ways (just because)
  document.write(msg);
  alert(msg);
</script>
```

How it works...

There are several competing factors that go into the decision to use a direct load or a dynamic load via google.load():

- Are we loading more than one library?
- Are our visitors using other sites that include this dynamic load?
- Can our page benefit from parallel loading?
- Do we need to provide a secure environment?

There's more...

If we are only loading one library, a direct load or local load will almost assuredly benchmark faster than a dynamic load. However, this can be untrue when browser accelerator techniques, most specifically browser caching, come into play. If our web server is sending no-cache headers, then a dynamic load, or even a direct load, as opposed to a local load, will allow the browser to cache the Google code and reduce our page load time. If our page is making a number of requests to our web server, it may be possible to have the browser waiting on a response from the server. In this instance, parallel loading from another website can allow those requests that the browser can handle in parallel to continue during such a delay.

We also need to take a look at how secure websites function with non-secure, external includes. Many of us are familiar with the errors that can occur when a secure website loads an external (or internal) resource that is not provided via HTTPS. The browser can pop up an alert message that can be very concerning and lose the confidence of our visitors. Also, it is common to have some sort of negative indicator in the address bar or in the status bar that alerts visitors that not all resources on the page are secure. Avoid mixing http and https resources; if using a secure site, opt for a local load of MooTools or use Google Library over HTTPS.
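As a closing illustration, here is a brief, hypothetical sketch of the dynamic-load approach described above. It assumes the Google Loader script tag (https://www.google.com/jsapi?key=...) is already on the page; google.load() requests MooTools, and the loader's google.setOnLoadCallback() hook reports the running version once loading completes:

```javascript
// Assumes the Google Loader (jsapi) <script> tag with our API key is already included.
google.load("mootools", "1.3.0"); // any version hosted by Google, e.g. "1.2.5"

// Fires once everything requested through google.load() has finished loading.
google.setOnLoadCallback(function () {
  alert('MooTools ' + MooTools.version + ' loaded from Google');
});
```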


Marshalling Data Services with Ext.Direct

Packt
14 Oct 2010
9 min read

What is Direct?

Part of the power of any client-side library is its ability to tap nearly any server-side technology. That said, with so many server-side options available, there were many different implementations being written for accessing the data. Direct is a means of marshalling those server-side connections, creating a 'one-stop-shop' for handling your basic Create, Read, Update, and Delete actions against that remote data. Through some basic configuration, we can now easily create entire server-side APIs that we may programmatically expose to our Ext JS applications. In the process, we end up with one set of consistent, predefined methods for managing that data access.

Building server-side stacks

There are several examples of server-side stacks already available for Ext JS, directly from their site's Direct information. These are examples, showing you how you might use Direct with a particular server-side technology, but Ext provides us with a specification so that we might write our own. Current stack examples are available for:

- PHP
- .NET
- Java
- ColdFusion
- Ruby
- Perl

These are examples written directly by the Ext team, as guides, as to what we can do. Each of us writes applications differently, so it may be that our application requires a different way of handling things at the server level. The Direct specification, along with the examples, gives us the guideposts we need for writing our own stacks when necessary. We will deconstruct one such example here to help illustrate this point. Each server-side stack is made up of three basic components:

- Configuration: denoting which components/classes are available to Ext JS
- API: client-side descriptors of our configuration
- Router: a means to 'route' our requests to their proper API counterparts

To illustrate each of these pieces of the server-side stack, we will deconstruct one of the example stacks provided by the Ext JS team. I have chosen the ColdFusion stack because:

- It is a good example of using a metadata configuration
- DirectCFM (the ColdFusion stack example) was written by Aaron Conran, who is the Senior Software Architect and Ext Services Team Leader for Ext, LLC

Each of the following sections will contain a "Stack Deconstruction" section to illustrate each of the concepts. These show how the concepts might be written in a server-side language, but you are welcome to move on if you feel you have a good grasp of the material.

Configuration

Ultimately, the configuration must define the classes/objects being accessed, the functions of those objects that can be called, and the length (number) of arguments that each method expects. Different servers will allow us to define our configuration in different ways. The method we choose will sometimes depend upon the capabilities or deficiencies of the platform we're coding for. Some platforms provide the ability to introspect components/classes at runtime to build configurations, while others require a far more manual approach. You can also include an optional formHandler attribute in your method definitions if the method can take form submissions directly. There are four basic ways to write a configuration.

Programmatic

A programmatic configuration may be achieved by creating a simple API object of key/value pairs in the native language. A key/value pair object is known by many different names, depending upon the platform we're writing for: HashMap, Structure, Object, Dictionary, or an Associative Array.
For example, in PHP you might write something like this:

```php
$API = array(
    'Authors' => array(
        'methods' => array(
            'GetAll' => array('len' => 0),
            'add'    => array('len' => 1),
            'update' => array('len' => 1)
        )
    )
);
```

Look familiar? It should, in some way, as it's very similar to a JavaScript object. The same basic structure holds for our next two methods of configuration as well.

JSON and XML

For this configuration, we can pass in a basic JSON configuration of our API:

```js
{
    Authors: {
        methods: {
            GetAll: { len: 0 },
            add:    { len: 1 },
            update: { len: 1 }
        }
    }
}
```

Or we could return an XML configuration object:

```xml
<Authors>
    <methods>
        <method name="GetAll" len="0" />
        <method name="add" len="1" />
        <method name="update" len="1" />
    </methods>
</Authors>
```

All of these forms give us the same basic outcome, providing a basic definition of the server-side classes/objects to be exposed for use with our Ext applications. But each of these methods requires us to build the configuration essentially by hand. Some server-side options make it a little easier.

Metadata

There are a few server-side technologies that allow us to add additional metadata to class and function definitions, which we can then introspect at runtime to create our configurations. The following example demonstrates this by adding additional metadata to a ColdFusion component (CFC):

```cfm
<cfcomponent name="Authors" ExtDirect="true">
    <cffunction name="GetAll" ExtDirect="true">
        <cfreturn true />
    </cffunction>
    <cffunction name="add" ExtDirect="true">
        <cfargument name="author" />
        <cfreturn true />
    </cffunction>
    <cffunction name="update" ExtDirect="true">
        <cfargument name="author" />
        <cfreturn true />
    </cffunction>
</cfcomponent>
```

This is a very powerful method for creating our configuration, as it means adding a single name/value attribute (ExtDirect="true") to any object and function we want to make available to our Ext application. The ColdFusion server is able to introspect this metadata at runtime, passing the configuration object back to our Ext application for use.

Stack deconstruction—configuration

The example ColdFusion component provided with the DirectCFM stack is pretty basic, so we'll write one slightly more detailed to illustrate the configuration. ColdFusion has a facility for attaching additional metadata to classes and methods, so we'll use the fourth configuration method for this example, Metadata. We'll start off by creating the Authors.cfc class:

```cfm
<cfcomponent name="Authors" ExtDirect="true">
</cfcomponent>
```

Next we'll create our GetAll method for returning all the authors in the database:

```cfm
<cffunction name="GetAll" ExtDirect="true">
    <cfset var q = "" />
    <cfquery name="q" datasource="cfbookclub">
        SELECT AuthorID, FirstName, LastName
        FROM Authors
        ORDER BY LastName
    </cfquery>
    <cfreturn q />
</cffunction>
```

We're leaving out basic error handling, but these are the basics behind it. The classes and methods we want to make available will all contain the additional metadata.

Building your API

So now that we've explored how to create a configuration at the server, we need to take the next step by passing that configuration to our Ext application. We do this by writing a server-side template that will output our JavaScript configuration.
Yes, we'll actually dynamically produce a JavaScript include, calling the server-side template directly from within our <script> tag:

```html
<script src="Api.cfm"></script>
```

How we write our server-side file really depends on the platform, but ultimately we just want it to return a block of JavaScript (just like calling a .js file) containing our API configuration description. The configuration will appear as part of the actions attribute, but we must also pass the url of our Router, the type of connection, and our namespace. That API return might look something like this:

```js
Ext.ns("com.cc");
com.cc.APIDesc = {
    "url": "/remote/Router.cfm",
    "type": "remoting",
    "namespace": "com.cc",
    "actions": {
        "Authors": [{
            "name": "GetAll",
            "len": 0
        }, {
            "name": "add",
            "len": 1
        }, {
            "name": "update",
            "len": 1
        }]
    }
};
```

This now exposes our server-side configuration to our Ext application.

Stack deconstruction—API

The purpose here is to create a JavaScript document, dynamically, from your configuration. Earlier we defined the configuration via metadata. The DirectCFM API now has to convert that metadata into JavaScript. The first step is including Api.cfm in a <script> tag on the page, but we need to know what's going on "under the hood."

Api.cfm:

```cfm
<!--- Configure API Namespace and Description variable names --->
<cfset args = StructNew() />
<cfset args['ns'] = "com.cc" />
<cfset args['desc'] = "APIDesc" />
<cfinvoke component="Direct" method="getAPIScript"
          argumentcollection="#args#" returnVariable="apiScript" />
<cfcontent reset="true" />
<cfoutput>#apiScript#</cfoutput>
```

Here we set a few variables that will then be used in a method call. The getAPIScript method of the Direct.cfc class constructs our API from metadata.

Direct.cfc getAPIScript() method:
```cfm
<cffunction name="getAPIScript">
    <cfargument name="ns" />
    <cfargument name="desc" />
    <cfset var totalCFCs = '' />
    <cfset var cfcName = '' />
    <cfset var CFCApi = '' />
    <cfset var fnLen = '' />
    <cfset var Fn = '' />
    <cfset var currFn = '' />
    <cfset var newCfComponentMeta = '' />
    <cfset var script = '' />
    <cfset var jsonPacket = StructNew() />
    <cfset jsonPacket['url'] = variables.routerUrl />
    <cfset jsonPacket['type'] = variables.remotingType />
    <cfset jsonPacket['namespace'] = ARGUMENTS.ns />
    <cfset jsonPacket['actions'] = StructNew() />
    <cfdirectory action="list" directory="#expandPath('.')#"
                 name="totalCFCs" filter="*.cfc" recurse="false" />
    <cfloop query="totalCFCs">
        <cfset cfcName = ListFirst(totalCFCs.name, '.') />
        <cfset newCfComponentMeta = GetComponentMetaData(cfcName) />
        <cfif StructKeyExists(newCfComponentMeta, "ExtDirect")>
            <cfset CFCApi = ArrayNew(1) />
            <cfset fnLen = ArrayLen(newCfComponentMeta.Functions) />
            <cfloop from="1" to="#fnLen#" index="i">
                <cfset currFn = newCfComponentMeta.Functions[i] />
                <cfif StructKeyExists(currFn, "ExtDirect")>
                    <cfset Fn = StructNew() />
                    <cfset Fn['name'] = currFn.Name />
                    <cfset Fn['len'] = ArrayLen(currFn.Parameters) />
                    <cfif StructKeyExists(currFn, "ExtFormHandler")>
                        <cfset Fn['formHandler'] = true />
                    </cfif>
                    <cfset ArrayAppend(CFCApi, Fn) />
                </cfif>
            </cfloop>
            <cfset jsonPacket['actions'][cfcName] = CFCApi />
        </cfif>
    </cfloop>
    <cfoutput><cfsavecontent variable="script">Ext.ns('#arguments.ns#');#arguments.ns#.#desc# = #SerializeJson(jsonPacket)#;</cfsavecontent></cfoutput>
    <cfreturn script />
</cffunction>
```

The getAPIScript method sets a few variables (including the 'actions' structure), pulls a listing of all ColdFusion Components from the directory, loops over that listing, and finds any components containing "ExtDirect" in their root metadata. For every component that does contain that metadata, it then loops over each method, finds methods with "ExtDirect" in the function metadata, and creates a structure with the function name and number of arguments, which is then added to an array of methods. When all methods have been introspected, the array of methods is added to the 'actions' structure. Once all ColdFusion Components have been introspected, the entire packet is serialized into JSON and returned to Api.cfm for output.

One item to note is that the script, when introspecting method metadata, also looks for an "ExtFormHandler" attribute. If it finds the attribute, it includes that in the method struct prior to placing the struct in the 'actions' array.
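The excerpt stops at the server side; for context, here is a brief, hypothetical sketch (not part of the DirectCFM example) of how an Ext JS 3.x page might consume the descriptor produced above. It assumes the Api.cfm include has already defined com.cc.APIDesc and uses Ext.Direct.addProvider(), the Ext JS 3 call for registering a remoting provider:

```javascript
// Assumes <script src="Api.cfm"></script> has defined com.cc.APIDesc
// and that the Ext JS 3.x library (ext-all.js) is loaded.
Ext.onReady(function () {
    // Register the remoting provider described by our server-side stack.
    Ext.Direct.addProvider(com.cc.APIDesc);

    // The configured classes are now callable as ordinary JavaScript methods.
    com.cc.Authors.GetAll(function (result, e) {
        // 'result' is whatever the server-side GetAll method returned;
        // 'e' describes the Ext.Direct round trip.
        console.log('GetAll returned:', result);
    });
});
```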


Using Content Type Effectively with Plone Intranet

Packt
04 Aug 2010
4 min read

Designing our intranet information architecture

No one uses a knowledge system (such as our intranet) if the information stored in it is hard to find or consume. We will have to put special emphasis on designing not just a good navigation schema for our intranet, but a successful one. The definition of success is different for every interested group, organization, enterprise, or any kind of entity our intranet will serve. There are many navigation schemas we might want to implement, but it is our task to find out which will be most suitable for our organization. To achieve this, we will have to use both hierarchy and metadata taxonomy wisely. Obviously, the use of folders and collections will help in this endeavor.

The first-level folders or sections are very important, and we will have to keep an eye on them when designing our intranet. We should not forget the next levels of folders either, because they play a key role in a successful navigation schema. The use of metadata, and specifically the categorization of content, will also play an important role in our intranet. Continuous content cataloging is crucial to achieving good content search, and users should be made aware of it. An intranet where searching for content is inefficient and difficult is an unsuccessful intranet, and with time, users will abandon it.

At this point, we should analyze the navigation needs of our intranet. Think about how people will use it, how they will contribute content to it, and how they will find things stored in it. In this analysis, it is very important to think about security. Navigation and security are closely related, because we will most probably define security by containers. There are some standard schemas: by organization structure, by process, by product, and so on. By organization is the most usual case. Everybody has a very clear idea of the organizational schema of an enterprise or organization, and this makes it easier to implement this type of schema. In this kind of schema, the first-level sections are divided into departments, teams, or main groups of interest. If our intranet is small and dedicated to one or a few points of interest, then these should take precedence over the first-level section folders. Keep the following things in mind:

- Our intranet will be more usable if we can keep our intranet sections clean and clear
- Push back against those people who believe that their department is more important than others and want to take over our intranet sections
- Let them know that maintaining a good intranet structure will be more useful and will help contribute to its success

Second levels are also very important. They should be durable over time, interesting to users of all sections, and they should divide information and contents clearly. Two subsections shouldn't contain elements of the same subject or kind. For example, this might be a typical second level:

- Documentation
- Meetings
- Events
- News
- Forums, a tracker, or some application specific to the current section

All of these are very commonly seen in an intranet. It is a good practice to create these second-level sections in advance, so that people can adapt to them. Teach people to categorize content. This will help intranet searches enormously and will help create collections and manage contents more effectively. If needed, make a well-known set of categories publicly available for people to use.
This prevents the duplication of categories and encourages their rational use. Notice that there can be several types of categories:

- Subject: Terms that describe the subject of the content
- Process: Terms that identify the content with the organizational process
- Flags: Flags such as Strongly Recommended
- Products: Terms from the products, standards, and technology names that describe the subject matter of the resource
- Labels: Terms used to ensure that the resource is listed under the appropriate label
- Keywords: Terms used to describe the resource
- Events: Terms used to identify recurring events associated with the content

There are other metadata that also influence the navigation and search abilities of the intranet, such as:

- Title
- Description
- URL, the ID of each content item

Don't forget to teach your users about content contribution best practices before deploying the intranet. We and our intranet users will appreciate it a lot. Once we have settled on the practices that best suit our information architecture, we should learn how to use some interesting Plone features that will help us build navigation and sort the information on our intranet.


Simple Item Selector Using jQuery

Packt
09 Nov 2009
4 min read

Adding jQuery to your page

You can download the latest version of jQuery from the jQuery site (http://jquery.com/) and add it as a reference to your web pages. You can reference a local copy of jQuery using a <script> tag in the page, or you can directly reference a remote copy from jQuery.com or the Google Ajax API (http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js).

Prerequisite Knowledge

In order to understand the code, you should have a basic knowledge of HTML, CSS, JavaScript, and jQuery.

Ingredients Used

- HTML
- CSS
- jQuery
- Photoshop (used for designing the image buttons and backgrounds)

Preview / Download

If you would like to see the working example, visit http://www.developersnippets.com/snippets/jquery/item_selector/item_selector.html. If you would like to download the snippet, it is available at http://www.developersnippets.com/snippets/jquery/item_selector/item_selector.zip.

Figure 1: Snapshot of "Simple Item Selector using jQuery"
Figure 2: Overview of div containers and image buttons used

Successfully tested

The application has been successfully tested on various browsers: IE 6.0, IE 7, IE 8, Mozilla Firefox (latest version), Google Chrome, and Safari (4.0.2).

HTML Code

Below is the HTML code, with comments to help you understand it better:

```html
<!-- Container -->
<div id="container">
  <!-- From Container -->
  <div class="from_container">
    <select id="fromSelectBox" multiple="multiple">
      <option value="1">Adobe</option>
      <option value="2">Oracle</option>
      <option value="3">Google</option>
      <option value="4">Microsoft</option>
      <option value="5">Google Talk</option>
      <option value="6">Google Wave</option>
      <option value="7">Microsoft Silver Light</option>
      <option value="8">Adobe Flex Professional</option>
      <option value="9">Oracle DataBase</option>
      <option value="10">Microsoft Bing</option>
    </select><br />
    <input type="image" src="images/selectall.jpg" class="selectall"
           onclick="selectAll('fromSelectBox')" /><input type="image"
           src="images/deselectall.jpg" class="deselectall"
           onclick="clearAll('fromSelectBox')" />
  </div>
  <!-- From Container [Close] -->
  <!-- Buttons Container -->
  <div class="buttons_container">
    <input type="image" src="images/topmost.jpg" id="topmost" /><br />
    <input type="image" src="images/moveup.jpg" id="moveup" /><br />
    <input type="image" src="images/moveright.jpg" id="moveright" /><br />
    <input type="image" src="images/moveleft.jpg" id="moveleft" /><br />
    <input type="image" src="images/movedown.jpg" id="movedown" /><br />
    <input type="image" src="images/bottommost.jpg" id="bottommost" /><br />
  </div>
  <!-- Buttons Container [Close] -->
  <!-- To Container -->
  <div class="to_container">
    <select id="toSelectBox" multiple="multiple"></select><br />
    <input type="image" src="images/selectall.jpg" class="selectall"
           onclick="selectAll('toSelectBox')" /><input type="image"
           src="images/deselectall.jpg" class="deselectall"
           onclick="clearAll('toSelectBox')" />
  </div>
  <!-- To Container [Close] -->
  <!-- Ascending/Descending Container -->
  <div class="ascdes_container">
    <input type="image" src="images/ascending.jpg" id="ascendingorder"
           style="margin:1px 0px 2px 0px;" onclick="ascOrderFunction()" /><br />
    <input type="image" src="images/descending.jpg" id="descendingorder"
           onclick="desOrderFunction()" />
  </div>
  <!-- Ascending/Descending Container [Close] -->
  <div style="clear:both"></div>
</div>
<!-- Container [Close] -->
```
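The snippet's JavaScript is not reproduced in this excerpt; the markup above only references it through onclick attributes and button IDs. As a rough idea of what sits behind them, here is a hypothetical sketch (not the original author's code) of the selectAll() and clearAll() helpers and one possible handler for the "move right" button:

```javascript
// Hypothetical helpers matching the onclick="selectAll('fromSelectBox')" and
// onclick="clearAll('fromSelectBox')" attributes used in the markup above.
function selectAll(id) {
  $('#' + id + ' option').each(function () {
    this.selected = true; // highlight every option in the given list box
  });
}

function clearAll(id) {
  $('#' + id + ' option').each(function () {
    this.selected = false; // clear the selection in the given list box
  });
}

// One possible handler for the "move right" image button: move the
// highlighted options from the left list box into the right one.
$(function () {
  $('#moveright').click(function () {
    $('#fromSelectBox option:selected').appendTo('#toSelectBox');
  });
});
```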


Understanding TDD

Packt
03 Sep 2015
31 min read

In this article by Viktor Farcic and Alex Garcia, the authors of the book Test-Driven Java Development, we will go through TDD: a simple procedure of writing tests before the actual implementation. It is an inversion of the traditional approach, where testing is performed after the code is written.

Red-green-refactor

Test-driven development is a process that relies on the repetition of a very short development cycle. It is based on the test-first concept of extreme programming (XP) that encourages simple design with a high level of confidence. The procedure that drives this cycle is called red-green-refactor. The procedure itself is simple and it consists of a few steps that are repeated over and over again:

1. Write a test.
2. Run all tests.
3. Write the implementation code.
4. Run all tests.
5. Refactor.
6. Run all tests.

Since a test is written before the actual implementation, it is supposed to fail. If it doesn't, the test is wrong. It describes something that already exists or it was written incorrectly. Being in the green state while writing tests is a sign of a false positive. Tests like these should be removed or refactored. While writing tests, we are in the red state. When the implementation of a test is finished, all tests should pass and then we will be in the green state.

If the last test failed, the implementation is wrong and should be corrected. Either the test we just finished is incorrect or the implementation of that test did not meet the specification we had set. If any but the last test failed, we broke something and changes should be reverted. When this happens, the natural reaction is to spend as much time as needed to fix the code so that all tests are passing. However, this is wrong. If a fix is not done in a matter of minutes, the best thing to do is to revert the changes. After all, everything worked not long ago. An implementation that broke something is obviously wrong, so why not go back to where we started and think again about the correct way to implement the test? That way, we waste only minutes on a wrong implementation instead of much more time correcting something that was not done right in the first place. Existing test coverage (excluding the implementation of the last test) should be sacred. We change the existing code through intentional refactoring, not as a way to fix recently written code.

Do not make the implementation of the last test final, but provide just enough code for this test to pass. Write the code in any way you want, but do it fast. Once everything is green, we have confidence that there is a safety net in the form of tests. From this moment on, we can proceed to refactor the code. This means that we are making the code better and more optimal without introducing new features. While refactoring is in place, all tests should be passing all the time. If, while refactoring, one of the tests fails, the refactor broke existing functionality and, as before, changes should be reverted. Not only are we not changing any features at this stage, we are also not introducing any new tests. All we're doing is making the code better while continuously running all tests to make sure that nothing got broken. At the same time, we're proving code correctness and cutting down on future maintenance costs. Once refactoring is finished, the process is repeated. It's an endless loop of a very short cycle.

Speed is the key

Imagine a game of ping pong (or table tennis).
The game is very fast; sometimes it is hard to even follow the ball when professionals play the game. TDD is very similar. TDD veterans tend not to spend more than a minute on either side of the table (test and implementation). Write a short test and run all tests (ping), write the implementation and run all tests (pong), write another test (ping), write the implementation of that test (pong), refactor and confirm that all tests are passing (score), and then repeat: ping, pong, ping, pong, ping, pong, score, serve again. Do not try to make the perfect code. Instead, try to keep the ball rolling until you think that the time is right to score (refactor). Time between switching from tests to implementation (and vice versa) should be measured in minutes (if not seconds).

It's not about testing

The T in TDD is often misunderstood. Test-driven development is the way we approach the design. It is the way to force us to think about the implementation and about what the code needs to do before writing it. It is the way to focus on requirements and the implementation of just one thing at a time: organize your thoughts and better structure the code. This does not mean that tests resulting from TDD are useless; far from it. They are very useful and they allow us to develop with great speed without being afraid that something will be broken. This is especially true when refactoring takes place. Being able to reorganize the code while having the confidence that no functionality is broken is a huge boost to quality. The main objective of test-driven development is testable code design, with tests as a very useful side product.

Testing

Even though the main objective of test-driven development is the approach to code design, tests are still a very important aspect of TDD and we should have a clear understanding of two major groups of techniques, as follows:

- Black-box testing
- White-box testing

The black-box testing

Black-box testing (also known as functional testing) treats the software under test as a black box, without knowing its internals. Tests use software interfaces and try to ensure that they work as expected. As long as the functionality of the interfaces remains unchanged, tests should pass even if the internals are changed. The tester is aware of what the program should do, but does not have the knowledge of how it does it. Black-box testing is the most commonly used type of testing in traditional organizations that have testers as a separate department, especially when they are not proficient in coding and have difficulties understanding it. This technique provides an external perspective on the software under test.

Some of the advantages of black-box testing are as follows:

- Efficient for large segments of code
- Code access, understanding the code, and the ability to code are not required
- Separation between the user's and the developer's perspectives

Some of the disadvantages of black-box testing are as follows:

- Limited coverage, since only a fraction of test scenarios is performed
- Inefficient testing, due to the tester's lack of knowledge about software internals
- Blind coverage, since the tester has limited knowledge about the application

If tests are driving the development, they are often done in the form of acceptance criteria that is later used as a definition of what should be developed. Automated black-box testing relies on some form of automation such as behavior-driven development (BDD).
The white-box testing

White-box testing (also known as clear-box testing, glass-box testing, transparent-box testing, and structural testing) looks inside the software that is being tested and uses that knowledge as part of the testing process. If, for example, an exception should be thrown under certain conditions, a test might want to reproduce those conditions. White-box testing requires internal knowledge of the system and programming skills. It provides an internal perspective on the software under test.

Some of the advantages of white-box testing are as follows:

- Efficient in finding errors and problems
- The required knowledge of the internals of the software under test is beneficial for thorough testing
- Allows finding hidden errors
- Programmer introspection
- Helps optimize the code
- Due to the required internal knowledge of the software, maximum coverage is obtained

Some of the disadvantages of white-box testing are as follows:

- It might not find unimplemented or missing features
- Requires high-level knowledge of the internals of the software under test
- Requires code access
- Tests are often tightly coupled to the implementation details of the production code, causing unwanted test failures when the code is refactored

White-box testing is almost always automated and, in most cases, takes the form of unit tests. When white-box testing is done before the implementation, it takes the form of TDD.

The difference between quality checking and quality assurance

The approach to testing can also be distinguished by looking at the objectives it is trying to accomplish. Those objectives are often split between quality checking (QC) and quality assurance (QA). While quality checking is focused on defect identification, quality assurance tries to prevent defects. QC is product-oriented and intends to make sure that results are as expected. On the other hand, QA is more focused on processes that assure that quality is built in. It tries to make sure that correct things are done in the correct way. While quality checking had a more important role in the past, with the emergence of TDD, acceptance test-driven development (ATDD), and later on behavior-driven development (BDD), the focus has been shifting towards quality assurance.

Better tests

No matter whether one is using black-box, white-box, or both types of testing, the order in which they are written is very important. Requirements (specifications and user stories) are written before the code that implements them. They come first, so they define the code, not the other way around. The same can be said for tests. If they are written after the code is done, in a certain way, that code (and the functionalities it implements) is defining the tests. Tests that are defined by an already existing application are biased. They have a tendency to confirm what the code does, and not to test whether the client's expectations are met, or that the code is behaving as expected. With manual testing, that is less the case, since it is often done by a siloed QC department (even though it's often called QA). They tend to work on test definitions in isolation from developers. That in itself leads to bigger problems caused by inevitably poor communication and the police syndrome, where testers are not trying to help the team write applications with quality built in, but to find faults at the end of the process. The sooner we find problems, the cheaper it is to fix them.
Tests written in the TDD fashion (including its flavors, such as ATDD and BDD) are an attempt to develop applications with quality built in from the very start. It's an attempt to avoid having problems in the first place.

Mocking

In order for tests to run fast and provide constant feedback, code needs to be organized in such a way that the methods, functions, and classes can be easily replaced with mocks and stubs. A common word for this type of replacement of the actual code is test double. The speed of the execution can be severely affected by external dependencies; for example, our code might need to communicate with the database. By mocking external dependencies, we are able to increase that speed drastically. Whole unit test suite execution should be measured in minutes, if not seconds. Designing the code in a way that it can be easily mocked and stubbed forces us to better structure that code by applying separation of concerns.

More important than speed is the benefit of the removal of external factors. Setting up databases, web servers, external APIs, and other dependencies that our code might need is both time consuming and unreliable. In many cases, those dependencies might not even be available. For example, we might need to create code that communicates with a database and have someone else create the schema. Without mocks, we would need to wait until that schema is set. With or without mocks, the code should be written in a way that we can easily replace one dependency with another.

Executable documentation

Another very useful aspect of TDD (and well-structured tests in general) is documentation. In most cases, it is much easier to find out what the code does by looking at tests than at the implementation itself. What is the purpose of some method? Look at the tests associated with it. What is the desired functionality of some part of the application UI? Look at the tests associated with it. Documentation written in the form of tests is one of the pillars of TDD and deserves further explanation.

The main problem with (traditional) software documentation is that it is not up to date most of the time. As soon as some part of the code changes, the documentation stops reflecting the actual situation. This statement applies to almost any type of documentation, with requirements and test cases being the most affected. The necessity to document code is often a sign that the code itself is not well written. Moreover, no matter how hard we try, documentation inevitably gets outdated. Developers shouldn't rely on system documentation because it is almost never up to date. Besides, no documentation can provide as detailed and up-to-date a description of the code as the code itself.

Using code as documentation does not exclude other types of documents. The key is to avoid duplication. If details of the system can be obtained by reading the code, other types of documentation can provide quick guidelines and a high-level overview. Non-code documentation should answer questions such as what the general purpose of the system is and what technologies are used by the system. In many cases, a simple README is enough to provide the quick start that developers need. Sections such as project description, environment setup, installation, and build and packaging instructions are very helpful for newcomers. From there on, code is the bible. Implementation code provides all the needed details while test code acts as the description of the intent behind the production code.
Tests are executable documentation, with TDD being the most common way to create and maintain it. Assuming that some form of Continuous Integration (CI) is in use, if some part of the test-documentation is incorrect, it will fail and be fixed soon afterwards. CI solves the problem of incorrect test-documentation, but it does not ensure that all functionality is documented. For this reason (among many others), test-documentation should be created in the TDD fashion. If all functionality is defined as tests before the implementation code is written and the execution of all tests is successful, then tests act as complete and up-to-date information that can be used by developers.

What should we do with the rest of the team? Testers, customers, managers, and other non-coders might not be able to obtain the necessary information from the production and test code. As we saw earlier, the two most common types of testing are black-box and white-box testing. This division is important since it also divides testers into those who do know how to write or at least read code (white-box testing) and those who don't (black-box testing). In some cases, testers can do both types. However, more often than not, they do not know how to code, so documentation that is usable for developers is not usable for them. If documentation needs to be decoupled from the code, unit tests are not a good match. That is one of the reasons why BDD came into being. BDD can provide documentation necessary for non-coders, while still maintaining the advantages of TDD and automation.

Customers need to be able to define new functionality of the system, as well as to be able to get information about all the important aspects of the current system. That documentation should not be too technical (code is not an option), but it still must always be up to date. BDD narratives and scenarios are one of the best ways to provide this type of documentation. The ability to act as acceptance criteria (written before the code), be executed frequently (preferably on every commit), and be written in natural language makes BDD stories not only always up to date, but usable by those who do not want to inspect the code.

Documentation is an integral part of the software. As with any other part of the code, it needs to be tested often so that we're sure that it is accurate and up to date. The only cost-effective way to have accurate and up-to-date information is to have executable documentation that can be integrated into your continuous integration system. TDD as a methodology is a good way to move in this direction. On a low level, unit tests are the best fit. On the other hand, BDD provides a good way to work on a functional level while maintaining understanding accomplished using natural language.

No debugging

We (the authors of this article) almost never debug the applications we're working on! This statement might sound pompous, but it's true. We almost never debug because there is rarely a reason to debug an application. When tests are written before the code and the code coverage is high, we can have high confidence that the application works as expected. This does not mean that applications written using TDD do not have bugs; they do. All applications do. However, when that happens, it is easy to isolate them by simply looking for the code that is not covered with tests. Tests themselves might not include some cases. In that situation, the action is to write additional tests.
With high code coverage, finding the cause of some bug is much faster through tests than spending time debugging line by line until the culprit is found. With all this in mind, let's go through the TDD best practices.

Best practices

Coding best practices are a set of informal rules that the software development community has learned over time, which can help improve the quality of software. While each application needs a level of creativity and originality (after all, we're trying to build something new or better), coding practices help us avoid some of the problems others faced before us. If you're just starting with TDD, it is a good idea to apply some (if not all) of the best practices generated by others.

For easier classification of test-driven development best practices, we divided them into four categories:

Naming conventions
Processes
Development practices
Tools

As you'll see, not all of them are exclusive to TDD. Since a big part of test-driven development consists of writing tests, many of the best practices presented in the following sections apply to testing in general, while others are related to general coding best practices. No matter the origin, all of them are useful when practicing TDD. Take the advice with a certain dose of skepticism. Being a great programmer is not only about knowing how to code, but also about being able to decide which practice, framework, or style best suits the project and the team. Being agile is not about following someone else's rules, but about knowing how to adapt to circumstances and choose the best tools and practices that suit the team and the project.

Naming conventions

Naming conventions help to organize tests better, so that it is easier for developers to find what they're looking for. Another benefit is that many tools expect that those conventions are followed. There are many naming conventions in use, and those presented here are just a drop in the ocean. The logic is that any naming convention is better than none. Most important is that everyone on the team knows what conventions are being used and is comfortable with them. Choosing more popular conventions has the advantage that newcomers to the team can get up to speed fast, since they can leverage existing knowledge to find their way around.

Separate the implementation from the test code

Benefits: It avoids accidentally packaging tests together with production binaries; many build tools expect tests to be in a certain source directory.

Common practice is to have at least two source directories. Implementation code should be located in src/main/java and test code in src/test/java. In bigger projects, the number of source directories can increase, but the separation between implementation and tests should remain as is. Build tools such as Gradle and Maven expect this separation of source directories as well as the naming conventions. You might have noticed that the build.gradle files that we used throughout this article did not explicitly specify what to test nor which classes to use to create a .jar file. Gradle assumes that tests are in src/test/java and that the implementation code that should be packaged into a jar file is in src/main/java.

Place test classes in the same package as implementation

Benefits: Knowing that tests are in the same package as the code helps find code faster.

As stated in the previous practice, even though packages are the same, classes are in separate source directories. All exercises throughout this article followed this convention.
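As an illustration of the two practices above, here is a minimal sketch of how a test class might be laid out. The package name and file paths are hypothetical, and the single specification reuses the StringCalculator example that appears later in this article.

    // File: src/test/java/com/packtpub/tdd/StringCalculatorSpec.java   (test source tree)
    // The class it tests would live in src/main/java/com/packtpub/tdd/StringCalculator.java.
    package com.packtpub.tdd; // same package as the implementation, different source directory

    import org.junit.Assert;
    import org.junit.Test;

    public class StringCalculatorSpec {

        @Test
        public final void whenOneNumberIsUsedThenReturnValueIsThatSameNumber() {
            Assert.assertEquals(3, StringCalculator.add("3"));
        }
    }

With this layout, Gradle and Maven pick up the test automatically, and nothing under src/test/java ends up in the packaged .jar file.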
Name test classes in a similar fashion to the classes they test

Benefits: Knowing that tests have a similar name to the classes they are testing helps in finding those classes faster.

One commonly used practice is to name tests the same as the implementation classes, with the suffix Test. If, for example, the implementation class is TickTackToe, the test class should be TickTackToeTest. However, in all cases, with the exception of those we used throughout the refactoring exercises, we prefer the suffix Spec. It helps to make a clear distinction that test methods are primarily created as a way to specify what will be developed. Testing is a great by-product of those specifications.

Use descriptive names for test methods

Benefits: It helps in understanding the objective of tests.

Using method names that describe tests is beneficial when trying to figure out why some tests failed or when the coverage should be increased with more tests. It should be clear what conditions are set before the test, what actions are performed, and what the expected outcome is. There are many different ways to name test methods, and our preferred method is to name them using the Given/When/Then syntax used in BDD scenarios. Given describes (pre)conditions, When describes actions, and Then describes the expected outcome. If some test does not have preconditions (usually set using @Before and @BeforeClass annotations), Given can be skipped. Let's take a look at one of the specifications we created for our TickTackToe application:

    @Test
    public void whenPlayAndWholeHorizontalLineThenWinner() {
        ticTacToe.play(1, 1); // X
        ticTacToe.play(1, 2); // O
        ticTacToe.play(2, 1); // X
        ticTacToe.play(2, 2); // O
        String actual = ticTacToe.play(3, 1); // X
        assertEquals("X is the winner", actual);
    }

Just by reading the name of the method, we can understand what it is about. When we play and the whole horizontal line is populated, then we have a winner. Do not rely only on comments to provide information about the test objective. Comments do not appear when tests are executed from your favorite IDE, nor do they appear in reports generated by CI or build tools.

Processes

TDD processes are the core set of practices. Successful implementation of TDD depends on the practices described in this section.

Write a test before writing the implementation code

Benefits: It ensures that testable code is written; ensures that every line of code gets tests written for it.

By writing or modifying the test first, the developer is focused on requirements before starting to work on the implementation code. This is the main difference compared to writing tests after the implementation is done. The additional benefit is that with the tests written first, we are avoiding the danger that the tests work as quality checking instead of quality assurance. We're trying to ensure that quality is built in, as opposed to checking later whether we met quality objectives.

Only write new code when the test is failing

Benefits: It confirms that the test does not work without the implementation.

If tests are passing without the need to write or modify the implementation code, then either the functionality is already implemented or the test is defective. If new functionality is indeed missing, then the test always passes and is therefore useless. Tests should fail for the expected reason. Even though there are no guarantees that the test is verifying the right thing, with fail first and for the expected reason, confidence that verification is correct should be high.
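To make the two process practices above concrete, here is a hypothetical red-green sketch that extends the StringCalculatorSpec sketched earlier. The new specification is written first (red), and only then is the implementation extended just enough to make it pass (green); the two classes are shown in one listing for brevity, although in practice they would live in src/test/java and src/main/java respectively.

    package com.packtpub.tdd;

    import org.junit.Assert;
    import org.junit.Test;

    public class StringCalculatorSpec {

        @Test
        public final void whenOneNumberIsUsedThenReturnValueIsThatSameNumber() {
            Assert.assertEquals(3, StringCalculator.add("3"));
        }

        // Red: this new specification is written first and fails against the old implementation.
        @Test
        public final void whenTwoNumbersAreUsedThenReturnValueIsTheirSum() {
            Assert.assertEquals(3 + 6, StringCalculator.add("3,6"));
        }
    }

    // Green: the simplest change to the implementation that makes the new test pass
    // while keeping the old one green.
    class StringCalculator {

        public static int add(String numbers) {
            int sum = 0;
            for (String number : numbers.split(",")) {
                sum += Integer.parseInt(number.trim());
            }
            return sum;
        }
    }

Once both tests pass, the refactor step can clean up the implementation, with the existing tests acting as a safety net.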
Rerun all tests every time the implementation code changes

Benefits: It ensures that there is no unexpected side effect caused by code changes.

Every time any part of the implementation code changes, all tests should be run. Ideally, tests are fast to execute and can be run by the developer locally. Once code is submitted to version control, all tests should be run again to ensure that there was no problem due to code merges. This is especially important when more than one developer is working on the code. Continuous integration tools such as Jenkins (http://jenkins-ci.org/), Hudson (http://hudson-ci.org/), Travis (https://travis-ci.org/), and Bamboo (https://www.atlassian.com/software/bamboo) should be used to pull the code from the repository, compile it, and run tests.

All tests should pass before a new test is written

Benefits: The focus is maintained on a small unit of work; implementation code is (almost) always in working condition.

It is sometimes tempting to write multiple tests before the actual implementation. In other cases, developers ignore problems detected by existing tests and move towards new features. This should be avoided whenever possible. In most cases, breaking this rule will only introduce technical debt that will need to be paid with interest. One of the goals of TDD is that the implementation code is (almost) always working as expected. Some projects, due to pressures to reach the delivery date or maintain the budget, break this rule and dedicate time to new features, leaving the task of fixing the code associated with failed tests for later. These projects usually end up postponing the inevitable.

Refactor only after all tests are passing

Benefits: This type of refactoring is safe.

If all implementation code that can be affected has tests and they are all passing, it is relatively safe to refactor. In most cases, there is no need for new tests. Small modifications to existing tests should be enough. The expected outcome of refactoring is to have all tests passing both before and after the code is modified.

Development practices

Practices listed in this section are focused on the best way to write tests.

Write the simplest code to pass the test

Benefits: It ensures cleaner and clearer design; avoids unnecessary features.

The idea is that the simpler the implementation, the better and easier it is to maintain the product. The idea adheres to the keep it simple, stupid (KISS) principle. This states that most systems work best if they are kept simple rather than made complex; therefore, simplicity should be a key goal in design, and unnecessary complexity should be avoided.

Write assertions first, act later

Benefits: This clarifies the purpose of the requirements and tests early.

Once the assertion is written, the purpose of the test is clear and the developer can concentrate on the code that will accomplish that assertion and, later on, on the actual implementation.

Minimize assertions in each test

Benefits: This avoids assertion roulette; allows execution of more asserts.

If multiple assertions are used within one test method, it might be hard to tell which of them caused a test failure. This is especially common when tests are executed as part of the continuous integration process. If the problem cannot be reproduced on a developer's machine (as may be the case if the problem is caused by environmental issues), fixing the problem may be difficult and time consuming. When one assert fails, execution of that test method stops.
If there are other asserts in that method, they will not be run, and information that can be used in debugging is lost. Last but not least, having multiple asserts creates confusion about the objective of the test. This practice does not mean that there should always be only one assert per test method. If there are other asserts that test the same logical condition or unit of functionality, they can be used within the same method. Let's go through a few examples:

    @Test
    public final void whenOneNumberIsUsedThenReturnValueIsThatSameNumber() {
        Assert.assertEquals(3, StringCalculator.add("3"));
    }

    @Test
    public final void whenTwoNumbersAreUsedThenReturnValueIsTheirSum() {
        Assert.assertEquals(3 + 6, StringCalculator.add("3,6"));
    }

The preceding code contains two specifications that clearly define what the objective of those tests is. By reading the method names and looking at the assert, there should be clarity on what is being tested. Consider the following for example:

    @Test
    public final void whenNegativeNumbersAreUsedThenRuntimeExceptionIsThrown() {
        RuntimeException exception = null;
        try {
            StringCalculator.add("3,-6,15,-18,46,33");
        } catch (RuntimeException e) {
            exception = e;
        }
        Assert.assertNotNull("Exception was not thrown", exception);
        Assert.assertEquals("Negatives not allowed: [-6, -18]", exception.getMessage());
    }

This specification has more than one assert, but they are testing the same logical unit of functionality. The first assert is confirming that the exception exists, and the second that its message is correct. When multiple asserts are used in one test method, they should all contain messages that explain the failure. This way, debugging the failed assert is easier. In the case of one assert per test method, messages are welcome but not necessary, since it should be clear from the method name what the objective of the test is.

    @Test
    public final void whenAddIsUsedThenItWorks() {
        Assert.assertEquals(0, StringCalculator.add(""));
        Assert.assertEquals(3, StringCalculator.add("3"));
        Assert.assertEquals(3+6, StringCalculator.add("3,6"));
        Assert.assertEquals(3+6+15+18+46+33, StringCalculator.add("3,6,15,18,46,33"));
        Assert.assertEquals(3+6+15, StringCalculator.add("3,6\n15"));
        Assert.assertEquals(3+6+15, StringCalculator.add("//;\n3;6;15"));
        Assert.assertEquals(3+1000+6, StringCalculator.add("3,1000,1001,6,1234"));
    }

This test has many asserts. It is unclear what the functionality is, and if one of them fails, it is unknown whether the rest would work or not. It might be hard to understand the failure when this test is executed through some of the CI tools.

Do not introduce dependencies between tests

Benefits: The tests work in any order independently, whether all or only a subset is run.

Each test should be independent from the others. Developers should be able to execute any individual test, a set of tests, or all of them. Often, due to the test runner's design, there is no guarantee that tests will be executed in any particular order. If there are dependencies between tests, they might easily be broken with the introduction of new ones.

Tests should run fast

Benefits: These tests are used often.

If it takes a lot of time to run tests, developers will stop using them or run only a small subset related to the changes they are making. The benefit of fast tests, besides fostering their usage, is quick feedback. The sooner the problem is detected, the easier it is to fix it. Knowledge about the code that produced the problem is still fresh.
If the developer has already started working on the next feature while waiting for the completion of the test run, he might decide to postpone fixing the problem until that new feature is developed. On the other hand, if he drops his current work to fix the bug, time is lost in context switching. Tests should be so quick that developers can run all of them after each change without getting bored or frustrated.

Use test doubles

Benefits: This reduces code dependency and test execution will be faster.

Mocks are prerequisites for fast execution of tests and the ability to concentrate on a single unit of functionality. By mocking dependencies external to the method that is being tested, the developer is able to focus on the task at hand without spending time setting them up. In the case of bigger teams, those dependencies might not even be developed yet. Also, the execution of tests without mocks tends to be slow. Good candidates for mocks are databases, other products, services, and so on.

Use set-up and tear-down methods

Benefits: This allows set-up and tear-down code to be executed before and after the class or each method.

In many cases, some code needs to be executed before the test class or before each method in a class. For this purpose, JUnit has @BeforeClass and @Before annotations that should be used as the setup phase. @BeforeClass executes the associated method before the class is loaded (before the first test method is run). @Before executes the associated method before each test is run. Both should be used when there are certain preconditions required by tests. The most common example is setting up test data in the (hopefully in-memory) database. At the opposite end are the @After and @AfterClass annotations, which should be used as the tear-down phase. Their main purpose is to destroy data or a state created during the setup phase or by the tests themselves. As stated in one of the previous practices, each test should be independent from the others. Moreover, no test should be affected by the others. The tear-down phase helps to maintain the system as if no test was previously executed (a minimal sketch combining this practice with test doubles appears at the end of this section).

Do not use base classes in tests

Benefits: It provides test clarity.

Developers often approach test code in the same way as implementation. One of the common mistakes is to create base classes that are extended by tests. This practice avoids code duplication at the expense of test clarity. When possible, base classes used for testing should be avoided or limited. Having to navigate from the test class to its parent, the parent of the parent, and so on in order to understand the logic behind tests often introduces unnecessary confusion. Clarity in tests should be more important than avoiding code duplication.

Tools

TDD, and coding and testing in general, are heavily dependent on other tools and processes. Some of the most important ones are as follows. Each of them is too big a topic to be explored in this article, so they will be described only briefly.

Code coverage and Continuous integration (CI)

Benefits: It gives assurance that everything is tested.

Code coverage practice and tools are very valuable in determining that all code, branches, and levels of complexity are tested. Some of the tools are JaCoCo (http://www.eclemma.org/jacoco/), Clover (https://www.atlassian.com/software/clover/overview), and Cobertura (http://cobertura.github.io/cobertura/). Continuous Integration (CI) tools are a must for all except the most trivial projects.
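Before looking at specific CI tools, here is the sketch promised above, combining the test doubles and set-up/tear-down practices. It is a minimal, hypothetical example that uses JUnit 4 together with Mockito as one possible mocking library; the GreetingService and UserRepository types exist only to keep the sketch self-contained and are not part of the original book.

    package com.packtpub.tdd;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    public class GreetingServiceSpec {

        // Hypothetical collaborator that would normally hit a database.
        interface UserRepository {
            String findNameById(long id);
        }

        // Hypothetical class under test, shown here only to keep the sketch self-contained.
        static class GreetingService {
            private final UserRepository repository;
            GreetingService(UserRepository repository) { this.repository = repository; }
            String greet(long id) { return "Hello, " + repository.findNameById(id) + "!"; }
        }

        private UserRepository repository;
        private GreetingService service;

        @Before
        public void setUp() {
            // Set-up phase: a test double replaces the real repository, so no database is needed.
            repository = mock(UserRepository.class);
            service = new GreetingService(repository);
        }

        @After
        public void tearDown() {
            // Tear-down phase: nothing external was touched, so there is nothing real to clean up.
            repository = null;
            service = null;
        }

        @Test
        public void whenGreetThenNameFromRepositoryIsUsed() {
            when(repository.findNameById(1L)).thenReturn("Samira");
            assertEquals("Hello, Samira!", service.greet(1L));
        }
    }

Because the repository is a test double, such a test runs in milliseconds, which is exactly what makes running the whole suite on every commit in a CI tool practical.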
Some of the most used tools are Jenkins (http://jenkins-ci.org/), Hudson (http://hudson-ci.org/), Travis (https://travis-ci.org/), and Bamboo (https://www.atlassian.com/software/bamboo).

Use TDD together with BDD

Benefits: Both developer unit tests and functional, customer-facing tests are covered.

While TDD with unit tests is a great practice, in many cases it does not provide all the testing that projects need. TDD is fast to develop, helps the design process, and gives confidence through fast feedback. On the other hand, BDD is more suitable for integration and functional testing, provides a better process for requirement gathering through narratives, and is a better way of communicating with clients through scenarios. Both should be used, and together they provide a full process that involves all stakeholders and team members. TDD (based on unit tests) and BDD should be driving the development process. Our recommendation is to use TDD for high code coverage and fast feedback, and BDD as automated acceptance tests. While TDD is mostly oriented towards white-box testing, BDD often aims at black-box testing. Both TDD and BDD try to focus on quality assurance instead of quality checking.

Summary

You learned that TDD is a way to design the code through a short and repeatable cycle called red-green-refactor. Failure is an expected state that should not only be embraced, but enforced throughout the TDD process. This cycle is so short that we move from one phase to another with great speed. While code design is the main objective, tests created throughout the TDD process are a valuable asset that should be utilized, and they strongly affect our view of traditional testing practices. We went through the most common of those practices, such as white-box and black-box testing, tried to put them into the TDD perspective, and showed the benefits that they can bring to each other. You discovered that mocks are a very important tool that is often a must when writing tests. Finally, we discussed how tests can and should be utilized as executable documentation and how TDD can make debugging much less necessary. Now that we are armed with theoretical knowledge, it is time to set up the development environment and get an overview and comparison of different testing frameworks and tools. Next we will walk you through all the TDD best practices in detail and refresh the knowledge and experience you gained throughout this article.

Resources for Article:

Further resources on this subject:
RESTful Services JAX-RS 2.0 [article]
Java Refactoring in NetBeans [article]
Developing a JavaFX Application for iOS [article]

Authentication and Authorization in MODx

Packt
20 Oct 2009
1 min read
It is vital to keep this distinction in mind to be able to understand the complexities explained in this article. You will also learn how MODx allows grouping of documents, users, and permissions.

Create web users

Let us start by creating a web user. Web users are users who can access restricted document groups in the website frontend; they do not have Manager access. Web users can identify themselves at login by using login forms. They are allowed to log in from the user page, but they cannot log in using the Manager interface. To create a web user, perform the following steps:

1. Click on the Web Users menu item in the Security menu.
2. Click on New Web User.
3. Fill in the fields with the following information:

    Field Name       Value
    Username         samira
    Password         samira123
    Email Address    xyz@configurelater.com

Linux4afrika: An Interview with the Founder

Packt
22 Oct 2009
5 min read
  "Linux4afrika has the objective of bridging the digital divide between developed and disadvantaged countries, especially in Africa, by supporting  access to information technology. This is done through the collection of used computers in Germany, the terminal server project and Ubuntu software which is open source, and by providing support to the involved schools and institutions." In this interview with the founder Hans-Peter Merkel, Packt's Kushal Sharma explores the idea, support, and the future of this movement. Kushal Sharma: Who are the chief promoters of this movement? Hans-Peter Merkel: FreiOSS (established in 2004) with currently about 300 members started the Linux4afrika project in 2006. The input was provided by some African trainees doing their internship at St. Ursula High school in Freiburg where we currently have 2 Terminal servers running. The asked FreiOSS to run similar projects in Africa. KS: What initiated this vision to bridge the IT gap between the developed and the under developed nations? During 2002 to 2005 we conducted IT trainings on Open Source products during 3 InWEnt trainings “Information Technology in African business” (see http://www.it-ab.org) with 60 African trainees (20 each year). This made FreiOSS to move OSS out of the local area and include other countries, especially those countries we had participants from. KS: Can you briefly recount the history of this movement, from the time it started to its popularity today? HPM: As mentioned before, the Linux4afrika project has its roots with FreiOSS and St. Ursula High school. There itself the idea was born. I conduct Open Source trainings and security trainings in several African countries (see http://www.hpmerkel.com/events). During a training in Dar es Salaam I demonstrated the Terminal Server solution to participants in a security training. One of the participants informed a Minister of Tanzanian Parliament who immediately came to get more information on this idea. He asked whether Linux4afrika could collect about 100 Computers and ship them to Tanzania. Tanzania would cover the shipping costs. After retuning to Germany I informed FreiOSS regarding this, and the collection activity started. We found out more information about the container costs and found that a container would fit about 200 computers for the same price. Therefore we decided to change the number from 100 to 200. One Terminalserver (AMD 64 Dual Core with 2 GB Memory) can run about 20 Thin Clients. This would serve about 10 schools in Tanzania. The Ubuntu Community Germany heard about our project and invited us to Linuxtag in Berlin (2007). This was a breakthrough for us; many organizations donated hardware. 3SAT TV also added greatly to our popularity by sending a 5 minute broadcast about our project (see http://www.linux4afrika.de). In June we met Markus Feilner from German Linux Magazin who contacted us and also published serveral online reports. In September Linux4afrika was invited to the German Parliament to join a meeting about education strategies for the under developed countries. In October Linux4afrika will start collection for a second container which will be shipped end of the year. In early 2008 about 5 members of FreiOSS will fly to Dar es Salaam on their own costs to conduct a one week training where teachers will be trained. This will be an addon to the service from Agumba Computers Ltd. (see http://www.agumba.biz). Agumba offers support in Tanzania to keep the network running. 
During the InWEnt trainings from 2002-2005, three employees from Agumba were part of that training. Currently, 2 other people from Agumba are here for a three-month internship to become familiar with our solution and make the project sustainable.

KS: Who are the major contributors?

HPM: Currently FreiOSS in Germany and Agumba Computers in Tanzania are the major contributors.

KS: Do you get any internal support in Tanzania and Mozambique? Do their Governments support open source?

HPM: Yes, we do. In Tanzania, it's Agumba Computers, and in Mozambique we have some support from CENFOSS. In Tanzania, all the trainings I conducted on Security and Forensics had a 70 percent Open Source component. Currently, the governmental agencies are implementing those technologies mainly on servers.

KS: Do you have any individuals working full-time for this project? If so, how do the full-time individuals support themselves financially?

HPM: All supporters are helping us without any financial support. They all come after work to our meetings, which take place about once a month. After some initial problems, the group is now able to configure and test about 50 thin clients per evening meeting.

KS: Tell us something more about the training program: what topics do you cover, how many participants do you have so far, etc.?

HPM: Tanzania shows a big interest in Security trainings. Agumba Computers offers those trainings for about 4-6 weeks a year. Participants come from the Tanzania Revenue Authority, the Police, the President's office, banks, water/electricity companies, and others. Currently, the Tanzania Revenue Authority has sent 5 participants to a 3-month Forensics training in Germany. In Tanzania, about 120 participants have joined the trainings so far. Sessions for next year will start in January 2007.

KS: Packt supported the project by sending some copies of our OpenVPN book. How will these be used and what do you hope to gain from them?

HPM: Markus Feilner (the author of the OpenVPN book) is currently in Tanzania. He will conduct a one-and-a-half-day training on OpenVPN in Dar es Salaam. The participants in Germany who received the books will receive practical training on IPCop and OpenVPN for Microsoft and Linux clients. This will help them establish secure wireless networks in their country.

KS: What does the future hold for Linux4afrika?

HPM: Our current plans include the second container, the visit to Dar in early 2008, and Linuxtag 2008. Further actions will be discussed thereafter. We already have a few requests to expand the Terminal server solution to other underdeveloped countries. Also, we currently have a request to support Martinique after a hurricane destroyed huge parts of the island.

KS: Thanks very much, Hans-Peter, for taking the time for us, and all the very best for your plans.

Updating Software in Koha

Packt
16 Nov 2010
4 min read
Koha 3 Library Management System

Install, configure, and maintain your Koha installation with this easy-to-follow guide
A self-sufficient guide to installing and configuring Koha
Take control of your libraries with the Koha library management system
Get a clear understanding of software maintenance, customization, and other advanced topics such as LDAP and Internationalization
Written in a style that applies to all Linux flavors and Koha versions

Orientation to updating software

Before we can update the Koha software, let us learn about Koha's software versions and how to choose the version to upgrade to. In this section we also learn about the components of a software update, and how to install each component of the update properly.

Understanding Koha's software versions

To choose which new version to upgrade to, let us first understand how the Koha software is organized.

Branches

At any given point Koha has at least two main software branches:

Stable: This branch is older and is considered stable or bug free for the most part. Only bug fixes are allowed on this branch.
Development: This branch is where new features are developed. This branch is ahead of the stable branch, meaning it has all the features of the stable branch and the new features in development.

Heads

Both branches, stable and development, have heads. A head is the tip of the branch, pointing to the latest change made in that branch. At the time of writing of this article, there are two heads available in Koha's Git repository:

3.0.x: This is the tip of the stable branch
master: This is the tip of the development branch

Tags

Both branches have multiple tags. Tags point to specific points in a branch's change history. For instance, we see these tags related to the stable branch:

v3.00.06: This is the latest tag on the stable branch
v3.00.05: An earlier version of the 3.0.x branch
v3.00.04: An earlier version of the 3.0.x branch
v3.00.03: An earlier version of the 3.0.x branch

And these tags are available for the development branch:

v3.02.00-beta: This is the 3.02 branch in the beta testing stage
v3.03.00-alpha: This is the 3.02 branch when released for alpha testing

Choosing a version to update to

We can choose to move to the head of the stable branch, the head of the development branch, or any tag in one of these branches. Here are some pointers to help you decide:

On production servers, we upgrade to the latest stable tag in the stable branch
To take an early look at new features being developed, switch to the alpha or beta tag in the development branch, if available
If you want to take a look at the very latest version of the software, switch to the head of the development branch

Understanding components of software updates

When bugs are fixed or new features are added in Koha, different types of files and programs can change, such as these:

Perl, JavaScript, HTML, CSS, and other types of files in the kohaclone folder
Tables, columns, constraints, indexes, system preferences, and other types of changes in Koha's database
Indexes and properties in Zebra configuration files
Directives in Koha's Apache2 configuration files

An overview of the installation process

To ensure that software updates are installed properly, we need to follow these steps:

1. Download software updates: We can download updates using Git. Git automatically detects our current version and downloads updates from Koha's online repository.
2. Switch to a specific software version: Depending on our purposes, we will choose a version that we want to upgrade to.
3. Install Perl module prerequisites: The new version of the software may depend on new Perl modules; we will need to install these.
4. Install the new version of Koha: We will install the new Koha version using the make utility; this process is similar to that of a fresh Koha install.
5. Configure Apache2: The new version of the software may have an updated Apache2 configuration file. We will need to configure this new file.
6. Upgrade the database: We will use Koha's web installer to upgrade the database to the new version.
7. Rebuild Zebra indexes: The new software version may contain updates to Zebra configuration files. To have these changes reflected in search results, we will need to do a full rebuild of Zebra's indexes.
8. Restart Zebra server: To load the new Zebra configurations we will have to restart zebrasrv.

Moodle 2.0 FAQs

Packt
14 Oct 2010
8 min read
Moodle 2.0 First Look Discover what's new in Moodle 2.0, how the new features work, and how it will impact you Get an insight into the new features of Moodle 2.0 Discover the benefits of brand new additions such as Comments and Conditional Activities Master the changes in administration with Moodle 2.0 The first and only book that covers all of the fantastic new features of Moodle 2.0         Read more about this book       (For more resources on Moodle, see here.)   Question: What are the basic requirements for Moodle 2.0 to function? Answer: It's important that either you (if you're doing this yourself) or your Moodle admin or webhost are aware of the requirements for Moodle 2.0. It needs: PHP must be 5.2.8 or later One of the following databases: MySQL 5.0.25 or later (InnoDB storage engine highly recommended) PostgreSQL 8.3 or later Oracle 10.2 or later MS SQL 2005 or later One of the following browsers: Firefox 3 or later Safari 3 or later Google Chrome 4 or later Opera 9 or later MS Internet Explorer 7 or later   Question: How can I upgrade to Moodle 2.0? Answer: If you already have an installation of Moodle, you will find instructions for upgrading in the docs on the main Moodle site here http://docs.moodle.org/en/Upgrading_to_Moodle_2.0. If you are upgrading from an earlier version of Moodle (such as 1.8) then you should upgrade to Moodle 1.9 first before going to 2.0. You must update incrementally; shortcuts – for example. updating from 1.7 directly to 2.0 -- are simply not possible. Read the docs carefully if you are planning on upgrading from very early versions such as 1.5 or 1.6.   Question: What are the potential problems with upgrading? Answer: There are a few challenges that one may come across while upgrading from Moodle 1.9 to 2.0 which are listed below: Themes: The way themes work has changed completely. While this allows for more flexible coding and templating, it does mean that if you had a customized theme it will not transfer over to Moodle 2 without some redesigning beforehand. Third party add-ons and custom code: The same applies to third party add-ons and custom code: it is highly unlikely they will work without significant alterations. Backup and Restore: Making courses from 1.9 or earlier restore into Moodle 2. 0 has proved very problematic and is still not entirely achievable. Although this is a priority for the Moodle developers, there is at the time of writing only a workaround involving restoring your course to a 1.9 site and then upgrading it to 2.0.   Question: How can teachers and students manage their learning? Answer: The two new features of Moodle 2.0 help teacher and students manage their learning: Conditional activities: A way to organize a course so that tasks are only available dependent on certain grades being obtained or criteria being met beforehand. Completion tracking: A way for students to have checkboxes next to their tasks that are either automatically marked as complete or which students themselves can manually mark if they feel they've finished the exercise – or alternatively a way for whole courses to be checked off as finished.   Question: What are the changes in the Themes structure for Moodle 2.0? Answer: The themes structure has been completely rewritten for Moodle 2.0. Themes that worked in 1.9 needed to be updated to work in 2.0. There is a wide variety of attractive new themes available. 
If you need to update your own theme or would like information on Moodle 2.0 theming, you will find the documentation at http://docs.moodle.org/en/Development:Themes_2.0 helpful. New to Moodle 2.0 are the following: Designer Mode: Turn this on so you're not served cached versions of themes, if you are designing themes or developing code. Allow theme changes in the URL: Enabling this will let users alter their theme via their Moodle URL using the syntax Allow blocks to use the dock: Enabling this will allow users to dock blocks if the theme supports it.   Question: Can we customize the MyMoodle page in Moodle 2.0? Answer: Yes, we can customize the default MyMoodle page. It's worth noting that on the MyMoodle page we can add blocks to the middle as well as the sides. With editing turned on, we're given the option to move a block to a central location.   Question: Can we Comment on the Moodle blog? Answer: Commenting on the Moodle blog is a bit of a workaround really; the Moodle blog doesn't really have a built-in commenting facility like, say WordPress. Rather, Moodle is making use of the new Comments feature which ordinarily appears as a block anywhere you want to add it.   Question: What are the improvements in the Blog option in Moodle 2.0 as compared to the previous version? Answer: There has always been a blogging option in a standard Moodle install. However, some users have found it unsatisfactory because of the following reasons: The blog is attached to the user profile so you can only have one blog There is no way to attach a blog or blog entry to a particular course There is no way for other people to comment on your blog For this reason, alternative blog systems (such as the contributed OU blog module) have become popular as they give users a wider range of options. The standard blog in Moodle 2.0 has changed, and now: A blog entry can optionally be associated with a course It is possible to comment on a blog entry Blog entries from outside of Moodle can be copied in It is now possible to search blog entries   Question: How to enable/disable the docking facility in Moodle 2.0? Answer: The docking facility can be managed in Moodle 2.0 as follows: The "docking" facility may be enabled or disabled for themes in Site Administration | Appearance | Themes | Theme settings. If we click the icon shown in the following screenshot, we also have the option of "docking" this over to the far left as a narrow tab.   Question: Has the HTML editor been replaced by some other editing tool? What is its advantage? Answer: In Moodle 2.0, the HTML editor has been replaced with a version known as Tiny MCE, a very popular Open Source editor you might have encountered in content management systems or blogging software such as WordPress. Along with Internet Explorer and Firefox, it will work with web browsers such as Safari, Chrome, and Opera, unlike Moodle's previous HTML editor. The following screenshot shows the new editor (on the bottom) with the original editor (on the top): There are many more options available to us when adding descriptions of our materials or summaries of our courses. However, one of the most powerful new features is the ability to add and embed media directly from within this new HTML editor.   Question: What have been the improvements related to Moodle Quiz? 
Answer: The following are the improvements to Moodle Quiz: The set up page has been simplified Creating questions has been simplified It's possible to flag questions for later referral Questions can be accessed with one click in the post-quiz review and correct/ incorrect questions are color-coded in an easy-to access navigation block   Question: What are Cohorts? Answer: Cohort is Moodle 2.0's take on the long wished for site-wide groups. When we click on the link we're taken to the following screen where we click on Add to enter details of the cohort we want to create:   Question: Has there been any modification in the Filters menu as compared to the previous versions On/Off options? Answer: The Manage Filters in Moodle 2.0 equates to the Filters menu in Moodle 1.9. The Manage Filters screen looks like the following screenshot (note—the screenshot only displays the first three filters): Previously, filters were either On or Off. Now we have three choices: Disabled: Nobody, in any course, can enable a filter. On: A filter is enabled by default and teachers can disable if they wish to. Off but available: A filter is off but teachers can enable it in their own courses.   Question: What are the changes in Site Administration? Answer: Perhaps the simplest way to explore this is to look at how this menu has altered since Moodle 1.9: Notifications/Registrations: A small but important change in Moodle 1.9, the Notifications screen contained a button you could click to register your site with http://moodle.org/. The page this took you to now has its own billing in Moodle 2.0, as the Registration link. Community hubs: The main Moodle community hub is known as MOOCH and you register with it here. You can also register your site with other community hubs. If you register with hubs, then teachers can add a Community block in their courses where users can search for a suitable course to enroll in or download. Summary In this article we took a look at the queries regarding what Moodle 2.0 has to offer with the exciting new modules and enhanced features, and the major overhauls in the file uploading and navigation system. Further resources on this subject: Moodle 1.9 Math [Book] Moodle Administration [Book] Moodle 1.9 for Teaching Special Education Children (5-10): Beginner's Guide [Book] Moodle 2.0: What's New in Add a Resource [Article] What's New in Moodle 2.0 [Article]

Introduction to Moodle Modules

Packt
22 Nov 2010
8 min read
  Moodle 1.9 Top Extensions Cookbook Over 60 simple and incredibly effective recipes for harnessing the power of the best Moodle modules to create effective online learning sites Packed with recipes to help you get the most out of Moodle modules Improve education outcomes by situating learning in a real-world context using Moodle Organize your content and customize your courses Reviews of the best Moodle modules—out of the available 600 modules Installation and configuration guides Written in a conversational and easy-to-follow manner       Introduction Moodle is an open source Learning Management System (LMS). Image source: http://moodle.org/ The word Moodle is actually an acronym. The 'M' in Moodle stands for Modular and the modularity of Moodle has been one of the key aspects of its success. Being modular means you can: Add modules to your Moodle instance Selectively use the modules you need M.O.O.D.L.E. The acronym Moodle stands for Modular Object-Oriented Dynamic Learning Environment. It is modular because you can add and remove modules. The programming paradigm used to create Moodle code is Object-Oriented. It is dynamic because it can be used for information delivery and interactivity, in a changeable and flexible way. It is a learning environment designed for teaching at many levels. Because Moodle is modular and open source, many people have created modules for Moodle, and many of those modules are available freely for you to use. At time of writing, there are over 600 modules that you can download from the Moodle Modules and plugins database. Some of these are popular, well designed, and well maintained modules. Others are ideas that didn't seem to get off the ground. Some are contributed and maintained by large institutions, but most are contributed by individuals, often teachers themselves, who want to share what they have created. If you have an idea for something you would like to do with Moodle, it's possible that someone has had that idea before and has created and shared a module you can use. This article will show you how to download and test contributed Moodle modules, to see if they suit your needs. Origins of Moodle Moodle began in 1999 as postgraduate work of Martin Dougiamas, "out of frustration with the existing commercial software at the time". Considering the widespread use of Moodle around the world (over 40,000 registered sites in over 200 countries), Martin is a very humble man. If you ever make it to a MoodleMoot and Martin is in attendance, be sure to introduce yourself. A test server If you only want to test modules, consider setting up your own basic web server, such as XAMPP (http://www.apachefriends.org/en/xampp.html) and installing Moodle from the Moodle Downloads page (http://download.moodle.org/). If you are a Windows or Mac user, you can even download and install Moodle packages where these two ingredients are already combined and ready to go. Once installed, add a course or two. Create some dummy students to see how modules work within a course. Have a play around with the modules available—Moodle is quite hard to break—don't be afraid to experiment. Getting modules you can trust The Moodle Modules and plugins database is filled with modules great and small. This article will help you to know how you can find modules yourself. Getting ready You may have an idea in mind, or you may just want to see what's out there. You'll need a web browser and an active Internet connection. How to do it... 
Point your browser to the Moodle Modules and plugins database. Refer http://moodle.org/mod/data/view.php?id=6009: Image source: http://moodle.org/mod/data/view.php?id=6009 As you scroll down you will see list of modules that can be downloaded. At the bottom of the page is a Search facility: Image source: http://moodle.org/mod/data/view.php?id=6009 You can also try an advanced search to get more specific about the following: What type of module you want What version of Moodle you have A number of other features The following is a search result for the term 'progress': Image source: http://moodle.org/mod/data/view.php?id=6009 Each entry has a type, the version of Moodle that it is compatible with, and a brief description. Clicking on the name of the module will take you to a page with details about the module. This is the module's 'entry': Image source: http://moodle.org/mod/data/view.php?d=13&rid=2524&filter=1 On each entry page there is a wealth of information about the module. The following is a list of questions you will want to answer when determining if the module is worth testing. Will it work with your version of Moodle? Is documentation provided? When was the module released and has there been activity (postings on the page below) since then? Is the module author active in the discussion about the module? Is the discussion positive (don't be too discouraged by bug reports if the author is involved and reporting that bugs have been fixed)? From discussion, can you tell if the module is widely used with a community of users behind it? What is the rating of the module? If you are happy with your answers to these questions, then you may have found a useful module. Be wary of modules that do what you want, but are not supported; you may be wasting your time and putting the security of your system and the integrity your teaching at risk. There's more... Here is some additional information that may help you on a module hunt. Types of modules In order to get a sense of how modules will work, you need to have an understanding of the distinction between different module types. The following table describes common module types. Amid the array of modules available, the majority are blocks and activity modules. Activity moduleActivity modules deliver information or facilitate interactivity within a course. Links to activity modules are added on a course main page and the activity module itself appears on a new page when clicked. Examples in the core installation are 'Forums' and 'Quizzes'.Assignment typeAssignment types are a specific type of activity module that focus on assessable work. They are all based on a common assignment framework and appear under 'Assignments' in the activities list. Examples in the core installation are 'Advanced upload of files' and 'Online text' assignments.BlockBlocks usually appear down each side of a course main page. They are usually passive, presenting specific information, and links to more information and activities. A block is a simpler type of module. Because they are easy to create, there are a large number of these in the Modules and Plugins database. 
Examples in the core installation are the 'Calendar' and 'Online Users' blocks.Course formatA course format allows the structure of a course main page to be changed to reflect the nature of the delivery of the course, for example, by schedule or by topic.FilterFilters allow targeted text appearing around a Moodle site to be replaced with other content, for example, equations, videos, or audio clips.IntegrationAn integration module allows Moodle to make use of systems outside the Moodle instance itself.Question typeWithin a quiz, question types can be added to enable different forms of questions to be asked. Checking your version If you are setting up your own Moodle instance for teaching or just for testing, take note of the version you are installing. If you have access to the Site Administration interface (the Moodle site root page when logged in as an administrator), clicking on Notifi cations will show you the version number near the bottom, for example Moodle 1.9.8 (Build: 20100325). The first part of this is the Moodle version; this is what you need when searching through modules on the Modules and plugins database. The second part, labeled "Build" shows the date when the installed version was released in YYYYMMDD format. This version information reflects what is stored in the /version.php file. If you are not the administrator of your system, consult the person who is. They should usually be able to tell you the version without looking it up. Moodle 2.0 The next version of Moodle to follow version 1.9 has been "on the cards" for some time. The process of installing modules will not change in the new version, so most of the information in this book will still be valid. You will need to look for versions of modules ready for Moodle 2.0 as earlier versions will not work without adjustment. As modules are usually contributed by volunteers, there may be some waiting before this happens; the best way to encourage this re-development is to suggest an improvement for the module on the Moodle bug tracker system at http://tracker.moodle.org/. See also Adding modules to Moodle

Introduction to Citrix XenDesktop

Packt
29 May 2013
23 min read
(For more resources related to this topic, see here.) Configuring the XenDesktop policies Now that the XenDesktop infrastructure has been configured, it's time to activate and populate the VDI policies. This is an extremely important part of the implementation process, because with these policies you will regulate the resource use and assignments, and you will also improve the general virtual desktops performance. Getting ready All the policies will be applied to the deployed virtual desktop instances and the assigned users, so you need an already existing XenDesktop infrastructure on which you will enable and use the configuration rules. How to do it... In this recipe we will explain the configuration for the user and machine policies offered by Citrix XenDesktop. Perform the following steps: Connect to the XenDesktop Director machine with domain administrative credentials, then navigate to Start | All Programs | Citrix and run the Desktop Studio. On the left-hand side menu expand the HDX Policy section and select the Machines link. Click on the New button to create a new policy container, or select the default unfiltered policies and click on Edit to modify them. In the first case, you have to assign a descriptive name to the created policy. In the Categories menu, click on the following sections and configure the values for the policies that will be applied to the clients, in terms of network flow optimization and resource usage monitoring: The ICA section ICA listener connection timeout: Insert a value in milliseconds; default is 12000. ICA listener port number: This is the TCP/IP port number on which the ICA protocol will try to establish the connection. The default value is 1494. The Auto Client Reconnect subsection Auto client reconnect: (Values Allowed or Prohibited) Specify whether or not to automatically reconnect in case of a broken connection from a client. Auto client reconnect authentication: (Values Do not require authentication or Require authentication) Decide whether to let the Citrix infrastructure ask you for the credentials each time you have to reperform the login operation. Auto client reconnect logging: (Values Do Not Log auto-reconnect events or Log auto-reconnect events) This policy enables or disables the logging activities in the system log for the reconnection process. In case of active autoclient reconnect, you should also activate its logging. End User Monitoring subsection ICA round trip calculation: (Values Enabled or Disabled) This decides whether or not to enable the calculation of the ICA network traffic time. ICA round trip calculation interval: Insert the time interval in seconds for the period of the round trip calculation. ICA round trip calculations for idle connections: (Values Enabled or Disabled) Decide whether to enable the round trip calculation for connections that are not performing traffic. Enable this policy only if necessary. The Graphics subsection Display memory limit: Configure the maximum value in KB to assign it to the video buffer for a session. Display mode degrade preference: (Values Degrade color depth first or Degrade resolution first) Configure a parameter to lower the resolution or the color quality in case of graphic memory overflow. Dynamic Windows Preview: (Values Enabled or Disabled) With this policy you have the ability to turn on or turn off the high-level preview of the windows open on the screen. 
Image caching: (Values Enabled or Disabled) With this parameter you can cache images on the client to obtain a faster response. Notify user when display mode is degraded: (Values Enabled or Disabled) In case of degraded connections you can display a pop up to send a notification to the involved users. Queueing and tossing: (Values Enabled or Disabled) By enabling this policy you can stop the processing of the images that are replaced by other pictures. In presence of slow or WAN network connections, you should create a separate policy group which will reduce the display memory size, configure the degrade color depth policy, activate the image caching, and remove the advanced Windows graphical features. The Keep Alive subsection ICA keep alive timeout: Insert a value in seconds to configure the keep alive timeout for the ICA connections. ICA keep alives: (Values Do not send ICA keep alive messages or Send ICA keep alive messages) Configure whether or not to send keep-alive signals for the running sessions. The Multimedia subsection Windows Media Redirection: (Values Allowed or Prohibited) Decide whether or not to redirect the multimedia execution on the Citrix server(s) and then stream it to the clients. Windows Media Redirection Buffer Size: Insert a value in seconds for the buffer used to deliver multimedia contents to the clients. Windows Media Redirection Buffer Size Use: (Values Enabled or Disabled) This policy decides whether or not to let you use the previously configured media buffer size. The Multi-Stream Connections subsection Audio UDP Port Range: Specify a port range for the UDP connections used to stream audio data. The default range is 16500 to 16509. Multi-Port Policy: This policy configures the traffic shaping to implement the quality of service (QoS). You have to specify from two to four ports and assign them a priority level. Multi-Stream: (Values Enabled or Disabled) Decide whether or not to activate the previously configured multistream ports. You have to enable this policy to activate the port configuration in the Multi-Port Policy. The Session Reliability subsection Session reliability connections: (Values Allowed or Prohibited) By enabling this policy you allow the sessions to remain active in case of network problems. Session reliability port number: Specify the port used by ICA to check the reliability of incoming connections. The default port is 2598. Session reliability timeout: Specify a value in seconds used by the session reliability manager component to wait for a client reconnection. You cannot enable the ICA keep alives policy if the policies under the Session Reliability subsection have been activated. The Virtual Desktop Agent Settings section Controller Registration Port: Specify the port used by Virtual Desktop Agent on the client to register with the Desktop Controller. The default value is 80. Changing this port number will require you to also modify the port on the controller machine by running the following command: <BrokerInstallationPath>BrokerService.exe / VdaPort <newPort> Controller SIDs: Specify a single controller SID or a list of them used by Virtual Desktop Agent for registration procedures. Controllers: Specify a single or a set of Desktop Controllers in the form of FQDN, used by Virtual Desktop Agent for registration procedures. Site GUID: Specify the XenDesktop unique site identifier used by Virtual Desktop Agent for registration procedures. 
In presence of more than one Desktop Controller, you should create multiple VDA policies with different controllers for a load-balanced infrastructure.   The CPU Usage Monitoring subsection Enable Monitoring: (Values Allowed or Prohibited) With this policy you can enable or disable the monitoring for the CPU usage. Monitoring Period: Insert a value in seconds to configure the time period to run the CPU usage recalculation. Threshold: Configure a percentage value to activate the high CPU usage alert. The default value is 95 percent. Enable the CPU Usage Monitoring policies in order to better troubleshoot machine load issues. After configuring, click on the OK button to save the modifications. On the left-hand side menu, click on the Users policy link in the HDX Policy section. Click on the New button to create a new policy container, or select the default unfiltered policies and click on Edit to modify them. In the first case, you have to assign a descriptive name to the created policy. In the Categories menu click on the following sections and configure the associated values: The ICA section Client clipboard redirection: (Values Allowed or Prohibited) This policy permits you to decide whether or not to enable the use of the client clipboard in the XenDesktop session, and to perform copy and paste operations from the physical device to the remote Citrix session. The active clipboard redirection could be a security issue; be sure about its activation! The Flash Redirection subsection Flash acceleration: (Values Enabled or Disabled) This policy permits you to redirect the Flash rendering activities to the client. This is possible only with the legacy mode. Enable this policy to have a better user experience for the Flash contents. Flash backwards compatibility: (Values Enabled or Disabled) With this policy you can decide whether or not to activate the compatibility of older versions of Citrix Receiver with the most recent Citrix Flash policies and features. Flash default behavior: (Values Enable Flash acceleration, Disable Flash acceleration, or Block Flash player) This policy regulates the use of the Adobe Flash technology, respectively enabling the most recent Citrix for Flash features (including the client-side processing), permitting only server-side processed contents, or blocking any Flash content. Flash event logging: (Values Enabled or Disabled) Decide whether or not to create system logs for the Adobe Flash events. Flash intelligent fallback: (Values Enabled or Disabled) This policy, if enabled, is able to activate the server-side Flash content processing when the client side is not required. The Flash Redirection features have been strongly improved starting from XenDesktop Version 5.5. The Audio subsection Audio over UDP Real-time transport: (Values Enabled or Disabled) With this policy you can decide which protocols to transmit the audio packets, RTP/UDP (policy enabled) or TCP (policy disabled). The choice depends on the kind of audio traffic to transmit. UDP is better in terms of performance and bandwidth consumption. Audio quality: (Values Low, Medium, or High) This parameter depends on a comparison between the quality of the network connections and the audio level, and they respectively cover the low-speed connections, optimized for speech and high-definition audio cases. Client audio redirection: (Values Allowed or Prohibited) Allowing or prohibiting this policy permits applications to use the audio device on the client's machine(s). 
Client microphone redirection: (Values Allowed or Prohibited ) This policy permits you to map client microphone devices to use within a desktop session. Try to reduce the network and load impact of the multimedia components and devices where the high user experience is not required. The Bandwidth subsection Audio redirection bandwidth limit: Insert a value in kilobits per second (Kbps) to set the maximum bandwidth assigned to the playing and recording audio activities. Audio redirection bandwidth limit percent: Insert a maximum percentage value to play and record audio. Client USB device redirection bandwidth limit: Insert a value in Kbps to set the maximum bandwidth assigned to USB devices redirection. Client USB device redirection bandwidth limit percent: Insert a maximum percentage value for USB devices redirection. Clipboard redirection bandwidth limit: Insert a value in Kbps to set the maximum bandwidth assigned to the clipboard traffic from the local client to the remote session. Clipboard redirection bandwidth limit percent: Insert a maximum percentage value for the clipboard traffic from the local client to the remote session. COM port redirection bandwidth limit: Insert a value in Kbps to set the maximum bandwidth assigned to the client COM port redirected traffic. COM port redirection bandwidth limit percent: Insert a maximum percentage value for the client COM port redirected traffic. File redirection bandwidth limit: Insert a value in Kbps to set the maximum bandwidth assigned to client drives redirection. File redirection bandwidth limit percent: Insert a maximum percentage value for client drives redirection. HDX MediaStream Multimedia Acceleration bandwidth limit: Insert a value in Kbps to set the maximum bandwidth assigned to the multimedia content redirected through the HDX MediaStream acceleration. HDX MediaStream Multimedia Acceleration bandwidth limit percent: Insert a maximum percentage value for the multimedia content redirected through the HDX MediaStream acceleration. Overall session bandwidth limit: Specify a value in Kbps for the total bandwidth assigned to the client sessions. In presence of both bandwidth limit and bandwidth limit percent enabled policies, the most restrictive value will be used. The Desktop UI subsection Aero Redirection: (Values Enabled or Disabled) This policy decides whether or not to activate the redirection of the Windows Aero graphical feature to the client device. If Aero has been disabled, this policy has no value. Aero Redirection Graphics Quality: (Values High, Medium, Low, and Lossless) If Aero has been enabled, you can configure its graphics level. Desktop wallpaper: (Values Allowed or Prohibited) Through this policy you can decide whether or not to permit the users having the desktop wallpaper in your session. Disable this policy if you want to standardize your desktop deployment. Menu animation: (Values Allowed or Prohibited) This policy permits you to decide whether or not to have the animated menu of the Microsoft operating systems. The choice depends on what kind of performances you need for your desktops. View window contents while dragging: (Values Allowed or Prohibited) This policy gives you the ability to see the entire window contents during the drag-and-drop activities between windows, if enabled. Otherwise you'll see only the window's border. Enabling the Aero redirection will have impact only on the LAN-based connection; on WAN, Aero will not be redirected by default. 
The File Redirection subsection Auto connect client drives: (Values Enabled or Disabled) With this policy the local drives of your client will or will not be automatically connected at logon time. Client drive redirection: (Values Allowed or Prohibited) The drive redirection policy allows you to decide whether it is permitted or not to save files locally on the client machine drives. Client fixed drives: (Values Allowed or Prohibited) This policy decides whether or not to permit you to read data from and save information to the fixed drives of your client machine. Client floppy drives: (Values Allowed or Prohibited) This policy decides whether or not to permit you to read data from and save information to the floppy drives of your client machine. This should be allowed only in presence of an existing floppy drive, otherwise it has no value to your infrastructure. Client network drives: (Values Allowed or Prohibited) With this policy you have the capability of mapping the remote network drives from your client. Client optical drives: (Values Allowed or Prohibited) With this policy you can enable or disable the access to the optical client drives, such as CD-ROM or DVD-ROM. Client removable drives: (Values Allowed or Prohibited) This policy allows or prohibits you to map, read, and save removable drives from your client, such as USB keys. Preserve client drive letters: (Values Enabled or Disabled) Enabling this policy offers you the possibility of maintaining the client drive letters when mapping them in the remote session, whenever possible. Read-only client drive access: (Values Enabled or Disabled) Enabling this policy will not permit you to access the mapped client drivers in write mode. By default, this policy is disabled to permit the full drive access. To reduce the impact on the client security, you should enable it. You can always modify it when necessary. These are powerful policies for regulating the access to the physical storage resources. You should configure them to be consistent with your company security policies. The Multi-Stream connections subsection Multi-Stream: (Values Enabled or Disabled) As seen earlier for the machine section, this policy enables or disables the multistreamed traffic for specific users. The Port Redirection subsection Auto connect client COM ports: (Values Enabled or Disabled) If enabled, this policy automatically maps the client COM ports. Auto connect client LPT ports: (Values Enabled or Disabled) This policy, if enabled, autoconnects the client LPT ports. Client COM port redirection: (Values Allowed or Prohibited) This policy configures the COM port redirection between the client and the remote session. Client LPT port redirection: (Values Allowed or Prohibited) This policy configures the LPT port redirection between the client and the remote session. You have to enable only the necessary ports, so disable the policies for the missing COM or LPT ports. The Session Limits subsection Disconnected session timer: (Values Enabled or Disabled) This policy enables or disables the counter used to migrate from a locked workstation to a logged off session. For security reasons, you should enable the automatic logoff of the idle sessions. Disconnected session timer interval: Insert a value in minutes, which will be used as a counter reference value to log off locked workstations. Set this parameter based on a real inactivity time for your company employees. 
Session connection to timer: (Values Enabled or Disabled) This policy will or will not use a timer to measure the duration of active connections from clients to the remote sessions. The Time Zone Control subsection Use local time of client: (Values Use server time zone or Use client time zone) With this policy you can decide whether to use the time settings from your client or from the server. XenDesktop uses the user session's time zone. The USB Devices subsection Client USB device redirection: (Values Allowed or Prohibited) With this important policy you can permit or prohibit USB drives redirection. Client USB device redirection rules: Through this policy you can generate rules for specific USB devices and vendors, in order to filter or not; and if yes, what types of external devices mapping. The Visual Display subsection Max Frame Per Second: Insert a value, in terms of frames per second, which will define the number of frames sent from the virtual desktop to the user client. This parameter could dramatically impact the network performance, so be careful about it and your network connection. The Server Session Settings section Single Sign-On: (Values Enabled or Disabled) This policy decides whether to turn on or turn off the SSO for the user sessions. Single Sign-On central store: Specify the SSO store server to which the user will connect for the logon operations, in the form of a UNC path. The Virtual Desktop Agent Settings section The HDX3DPro subsection EnableLossLess: (Values Allowed or Prohibited) This policy permits or prohibits the use of a lossless codec. HDX3DPro Quality Settings: Specify two values, Minimum Quality and Maximum Quality (from 0 to 100), as HDX 3D Pro quality levels. In the absence of a valid HDX 3D Pro license, this policy has no effect. The ICA Latency Monitoring subsection Enable Monitoring: (Values Allowed or Prohibited) This rule will or will not monitor the ICA latency problems. Monitoring Period: Define a value in seconds to run the ICA latency monitor. Threshold: Insert a threshold value in milliseconds to check if the ICA latency has arrived to the highest level. The Profile Load Time Monitoring subsection Enable Monitoring: (Values Allowed or Prohibited) With this policy you can monitor the time required to load a user profile. Threshold: Specify a value in seconds to activate the trigger for the high profile loading time event. These are important policies to troubleshoot performance issues in the profile loading activities, especially referred to the centralized profiles. After configuring click on the OK button to save the modifications. For both the edited policy categories (Machines and Users), click on the Edit button, select the Filters tab, and add one or more of the following filters: Access Control: (Mode: Allow or Deny, Connection Type: With Access Gateway or Without Access Gateway) Insert the parameters for the type of connection to which you are applying the policies, using or not using Citrix Access Gateway. Branch Repeater: (Values Connections with Branch Repeater or Connections without Branch Repeater) This policy decides whether or not to apply the policies to the connection that passes or doesn't pass through a configured Citrix Branch Repeater. Client IP Address: (Mode: Allow or Deny) Specify a client IP address to which you are allowing or denying the policy application. Client Name: (Mode: Allow or Deny) Specify a client name to which you are allowing or denying the policy application. 
Desktop Group: (Mode: Allow or Deny) Select from the drop-down list an existing desktop or application group to which the configured policies will or will not be applied. Desktop Type: (Mode: Allow or Deny) This filter allows or denies the policy application for the deployed resource types (Private Desktop or Shared Desktop, Private Applications or Shared Applications). Organizational Unit: (Mode: Allow or Deny) Browse for an existing domain OU to which the configured policies will or will not be applied. Tag: (Mode: Allow or Deny) This filter allows or denies the application of the policies to desktops carrying specific tags. User or Group: (Mode: Allow or Deny) Browse for existing domain users and groups to which the configured policies will or will not be applied. For the machine section, only the desktop group, desktop type, organizational unit, and tag filter categories are available. After completing this, click on the OK button to save the changed filters.

How it works...
The Citrix XenDesktop policies work at two different levels of components, machines and users, and for each of them you can apply a set of filters that decide when and where the policies are applied. These configurations should be strongly oriented towards performance and security optimization, so the best practice is to generate different sets of policies and apply each of them to a specific kind of virtual desktop, client, or user. The following is an explanation of the previously applied configurations: Machines policy level: These policies apply at the machine level and regulate and optimize session management and the redirection of multimedia resources. With this group of settings you can configure the standard ICA listening port and the related connection timeouts. It is also possible to decide whether or not to automatically reconnect a client after a broken connection. Enabling the Auto client reconnect policy can be the right choice in some cases, especially when an important working session has been interrupted; on the other hand, it can lead to an unplanned waste of resources, because the Citrix broker may start a new session if there are issues with the session cookies. With the ICA round trip policies, you can monitor and measure the response times experienced by users during their operations. This data helps you understand the responsiveness of your Citrix infrastructure and, where needed, apply remediation to the configuration, especially for the policies that involve graphics components: you can size the display memory and the image caching area, or turn specific advanced Windows graphical features, such as the Dynamic Windows Preview (DWP), on or off. With the queuing and tossing policy active, you could experience lost frames when playing animations. The Windows media redirection policy optimizes the reproduction of multimedia objects; by sizing its buffer correctly you should obtain evident improvements in streaming and playback operations. Consider disabling this policy, and leaving audio and video processing to the clients, only when you see no particular benefit from it.
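A quick note on the bandwidth limits listed in the Users section of this recipe: each traffic type can carry both an absolute cap in Kbps and a cap expressed as a percentage of the overall session bandwidth, and when both are enabled the most restrictive value wins. The following minimal sketch only illustrates that selection logic; it is not Citrix code, and the function name is purely hypothetical:

function effectiveLimitKbps(sessionLimitKbps, absoluteLimitKbps, limitPercent) {
    // Convert the percentage cap into Kbps against the overall session limit.
    var percentLimitKbps = sessionLimitKbps * (limitPercent / 100);
    // The most restrictive (lowest) of the two values is the one that applies.
    return Math.min(absoluteLimitKbps, percentLimitKbps);
}

// Example: a 2000 Kbps session with audio capped at 300 Kbps or 10 percent.
// 10 percent of 2000 Kbps is 200 Kbps, which is lower than 300, so 200 Kbps applies.
var audioCapKbps = effectiveLimitKbps(2000, 300, 10); // 200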
Another important feature offered by these policies is the QoS implementation; you can enable the multistream connection configuration and assign traffic priority levels to the streams, giving precedence and more bandwidth to the traffic that is considered more critical than the rest. The Multi-Stream policy for QoS can be considered a less powerful alternative to Citrix Branch Repeater. As the last part of this section, the Virtual Desktop Agent Settings section permits you to restrict access to pre-configured resources only, such as specific Desktop Controllers. Users policy level: Combined with the machine policies we have the user policies. These policies apply settings from a user session perspective, so you can configure, for instance, how Adobe Flash content is processed, deciding whether or not to activate compatibility with older versions of this software, and whether the Flash multimedia objects are processed on the users' clients or on the Citrix servers. Moreover, you can configure the audio settings, such as audio and microphone client redirection (that is, the use of the local device resources), the desktop settings (Aero parameters, desktop wallpapers, and so on), or the HDX protocol quality settings. Be careful when applying policies for the desktop graphical settings. To optimize the information transmitted to the desktops, the bandwidth policies are extremely important; with them you can assign, in the form of a maximum Kbps value or a percentage, the limits for traffic types such as audio, USB, clipboard, COM and LPT ports, and file redirection. These configurations require a good analysis of the traffic levels and their priorities within your organization. The last great configuration area is the redirection of the client drives to the remote Citrix sessions; you can activate the mounting (automatic or not) and the user rights (read-only or read/write) on the client drives, removable or not, such as CD-ROM or DVD-ROM drives, removable USB devices, and fixed drives such as the client device's operating system root. This option gives you the flexibility to transfer information from the local device to the XenDesktop instance through a properly configured Virtual Desktop Agent. This last device policy could also make your infrastructure more secure, thanks to the USB device redirection rules; through them you can permit only the USB keys approved by your company, prohibiting any other non-policy-compliant device. The granularity of the policy application is provided by the configuration of the filters; by using these you can apply the policies to specific clients, desktop or application groups, or domain users and groups. In this way you can create different policies with different configurations, and apply them to specific areas of your company, without generalizing and overriding settings.

There's more...
To verify the effective running of the policies applied to your VDI infrastructure, there is a tool called Citrix Group Policy Modeling Wizard inside the HDX Policy section, which performs this task. This tool runs a simulation of the policy application and returns a report based on the current configuration. It is similar to the Microsoft Windows domain Group Policy Results tool. The simulations apply to one or all of the domain controllers configured within your domain, and can test the application for a specific user or computer object, including the organizational units containing them.
Moreover, you can apply filters based on the client IP address, the client name, the type of machine (private or shared desktop, private or shared application), or you can apply the simulation to a specific desktop group. In the Advanced Options section you can simulate slow network connections and/or loopback processing (basically, a policy application based only on the computer object locations, instead of both the user and computer object positions) for a configured XenDesktop site. After running the policy application test, you can check the results by right-clicking on the generated report name and selecting the View Report option. This tool is extremely powerful when you have to investigate unexpected behaviors of your desktop instances or user rights caused by the application of incorrect policies.

Summary
In this article we discussed the configuration of the XenDesktop infrastructural policies.

Resources for Article: Further resources on this subject: Linux Thin Client: Considering the Network [Article] Designing a XenApp 6 Farm [Article] Getting Started with XenApp 6 [Article]
Web CMS

Packt
23 Oct 2009
17 min read
Let's get started. Do you want a CMS or a portal? We are evaluating a CMS for our Yoga Site. But you may want to build something else. Take a look again at the requirements. Do you need a lot of dynamic modules such as an event calendar, shopping cart, collaboration module, file downloads, social networking, and so on? Or do you need modules for publishing and organizing content such as news, information, articles, and so on? Today's top-of-the-line Web CMSs can easily work as a portal. They either have a lot of built-in functionality or a wide range of plug-ins that extend their core features. Yet, there are solutions specifically made for web portals. You should evaluate them along with CMS software if your needs are more like a portal. On the other hand, if you want a simple corporate or personal web site, with some basic needs, you don't require a mammoth CMS. You can use a simple CMS that will not only fulfill your needs, but will also be easier to learn and maintain. Joomla! is a solid CMS. But it requires some experience to get used to it. For this article, let's first evaluate a simpler CMS. How do we know which CMS is simple? I think we can't go wrong with a CMS that's named "CMS Made Simple".

Evaluating CMS Made Simple
As the name suggests, CMS Made Simple (http://www.cmsmadesimple.org/) is an easy-to-learn and easy-to-maintain CMS. Here's an excerpt from its home page: If you are an experienced web developer, and know how to do the things you need to do, to get a site up with CMS Made Simple is just that, simple. For those with more advanced ambitions there are plenty of addons to download. And there is an excellent community always at your service. It's very easy to add content and addons wherever you want them to appear on the site. Design your website in whatever way or style you want and just load it into CMSMS to get it in the air. Easy as that! That makes things very clear. CMSMS seems to be simple for first-time users, and extensible for developers. Let's take CMSMS for a test drive.

Time for action-managing content with CMS Made Simple
Download and install CMS Made Simple. Alternatively, go to the demo at http://www.opensourcecms.com/. Log in to the administration section. Click on Content | Image Manager. Using the Upload File option, upload the Yoga Site logo. Click on the Content | Pages option from the menu. You will see a hierarchical listing of current pages on the site. The list is easy to understand. Let's add a new page by clicking on the Add New Content link above the list. The content addition screen is similar to a lot of other CMSs we have seen so far. There are options to enter page title, category, and so on. You can add page content using a large WYSIWYG editor. Notice that we can select a template for the page. We can also select a parent page. Since we want this page to appear at the root level, keep the Parent as none. Add some Yoga background information text. Format it using the editor as you see fit. There are two new options on this editor, which are indicated by the orange palm tree icons. These are two special options that CMSMS has added: first, to insert a menu; and second, to add a link to another page on the site. This is excellent. It saves us the hassle of remembering, or copying, links. Select a portion of text in the editor. Click on the orange palm icon with the link symbol on it. Select any page from the fly-out menu. For now, we will link to the Home page. Click on the Insert/edit Image icon.
Then click on the Browse icon next to the Image URL field in the new window that appears. Select the logo we uploaded and insert it into the content. Click on Submit to save the page. The Current Pages listing now shows our Background page. Let's bring it higher in the menu hierarchy. Click on the up arrow in the Move column on our page to push it higher. Do this until it is at the second position—just after Home. That's all. We can click on the magnifying glass icon at the main menu bar's right side to preview our site. Here's how it looks.

What just happened? We set up CMSMS and added some content to it. We wanted to use an image in our content page. To make things simpler, we first uploaded an image. Then we went to the current pages listing. CMSMS shows all pages in the site in a hierarchical display. It's a simple feature that makes a content administrator's life very easy. From there, we went on to create a new page. CMSMS has a WYSIWYG editor, like so many other CMSs we have seen till now. The content addition process is almost the same in most CMSs. Enter the page title and related information, type in content, and you can easily format it using a WYSIWYG editor. We inserted the logo image uploaded earlier using this editor. CMSMS features extensions to the default WYSIWYG editor. These features demonstrate all of the thinking that's gone into making this software. The orange palm tree icon appearing on the WYSIWYG editor toolbar allowed us to insert a link to another page with a simple click. We could also insert a dynamic menu from within the editor if needed. Saving and previewing our site was equally easy. Notice how intuitive it is to add and manage content. CMS Made Simple lives up to its name in this process. It uses simple terms and workflow to accomplish the tasks at hand. Check out the content administration process while you evaluate a CMS. After all, it's going to be your most commonly used feature!

Hierarchies: How deep do you need them? What level of content hierarchies do you need? Are you happy with two levels? Do you like Joomla!'s categories -> sections -> content flow? Or do you need to go even deeper? Most users will find two levels sufficient. But if you need more, find out if the CMS supports it. (Spoiler: Joomla! is only two levels deep by default.)

Now that we have learned about the content management aspect of CMSMS, let's see how easily we can customize it. It has some interesting features we can use.
Paste the tag just after the {* End relational links *} text. The tag is something like this. Save the template. Now preview the site. Our content block shows up after the main page content as we wanted. Job done!

What just happened? We used the global content block feature of CMSMS to insert a product promotion throughout our site. In the process, we learned about templates and also how we could modify them. Creating a global content block was similar to adding a new content page. We used the WYSIWYG editor to enter the content block text. This gave us a special tag. If you know about PHP templates, you will have guessed that CMSMS uses Smarty templates and the tag was simply a custom tag in Smarty.

Smarty Template Engine
Smarty (http://www.smarty.net/) is the most popular template engine for the PHP programming language. Smarty allows keeping core PHP code and presentation/HTML code separate. Special tags are inserted in template files as placeholders for dynamic content. Visit http://www.smarty.net/crashcourse.php and http://www.packtpub.com/smarty/book for more.

Next, we found the template our site was using. We could tell it by name, since the template shows up in a drop down in the add new pages screen as well. We opened the template and reviewed it. It was simple to understand—much like HTML. We inserted our product content block tag after the main content display. Then we saved it and previewed our site. Just as expected, the product promotion content showed up after the main content of all pages. This shows how easy it is to add global content using CMSMS. But we also learned that global content blocks can help us manage promotions or commonly used content. Even if you don't go for CMS Made Simple, you can find a similar feature in the CMS of your choice.

Simple features can make life easier
CMS Made Simple's Global Content Block feature made it easy to run product promotions throughout a site. A simple feature like that can make the content administrator's life easier. Look out for such simple things that could make your job faster and easier in the CMS you evaluate.

It's a good time now to dive deeper into CMSMS. Go ahead and see whether it's the right choice for you.

Have a go hero-is it right for you? CMS Made Simple (CMSMS) looks very promising. If we wanted to build a standard website with a photo gallery, newsletter, and so on, it is a perfect fit. Its code structure is understandable, and extending its functionality is not too difficult. The default templates could be more appealing, but you can always create your own. The gentle learning curve of CMSMS is very impressive. The hierarchical display of pages, easy reordering, and the simplistic content management approach are excellent. It's simple to figure out how things work. Yet CMSMS is a powerful system—remember how easily we could add a global content block? Doing something like that may need writing a plug-in or hacking source code in most other systems. It's the right time for you to see how it fits your needs. Take a while and evaluate the following: Does it meet your feature requirements? Does it have enough modules and extensions for your future needs? What does its web site say? Does it align with your vision and philosophy? Does it look good enough? Check out the forums and support structure. Do you see an active community? What are its system requirements? Do you have it all taken care of? If you are going to need customizations, do you (or your team) comfortably understand the code? We are done evaluating a simple CMS.
Let us now look at the top two heavyweights in the Web CMS world—Drupal and Joomla!.

Diving into Drupal
Drupal (http://www.drupal.org) is a top open source Web CMS. Drupal has been around for years and has excellent architecture, code quality, and community support. The Drupal terminology can take time to sink in. But it can serve the most complicated content management needs. FastCompany and AOL's corporate site run on Drupal: Here is the About Drupal section on the Drupal web site. As you can see, Drupal can be used for almost all types of content management needs. The goal is to allow easy publishing and management of a wide variety of content. Let's try out Drupal. Let's understand how steep the learning curve really is, and why so many people swear by Drupal.

Time for action-putting Drupal to the test
Download and install Drupal. Installing Drupal involves downloading the latest stable release, extracting and uploading files to your server, setting up a database, and then following the instructions in a web installer. Refer to http://drupal.org/getting-started/ if you need help. Log in as the administrator. As you log in, you see a link to Create Content. This tells you that you can either create a page (simple content page) or a story (content with comments). We want to create a simple content page without any comments. So click on Page. In Drupal, viewing a page and editing a page are almost the same. You log in to Drupal and see site content in a preview mode. Depending on your rights, you will see links to edit content and manage other options. This shows the Create Page screen. There is a title but no WYSIWYG editor. Yes, Drupal does not come with a WYSIWYG text editor by default. You have to install an extension module for this. Let's go ahead and do that first. Go to the Drupal web site. Search for WYSIWYG in downloads. Find TinyMCE in the list. TinyMCE is the WYSIWYG editor we have seen in most other CMSs. Download the latest TinyMCE module for Drupal—compatible with your version of Drupal. The download does not include the actual TinyMCE editor. It only includes hooks to make the editor work with Drupal. Go to the TinyMCE web site http://tinymce.moxiecode.com/download.php. Download the latest version. Create a new folder called modules in the sites/all/ folder of Drupal. This is the place to store all custom modules. Extract the TinyMCE Drupal module here. It should create a folder named tinymce within the modules folder. Extract the TinyMCE editor within this folder. This creates a subfolder called tinymce within sites/all/modules/tinymce. Make sure the files are in the correct folders. Here's how your structure will look: Log in to Drupal if you are not already logged in. Go to Administer | Site building | Modules. If all went well so far, at the end of the list of modules, you will find TinyMCE. Check the box next to it and click on Save Configuration to enable it. We need to perform two more steps before we can test this. Go to Administer | Site configuration | TinyMCE. It will prompt you that you don't have any profiles created. Create a new profile. Keep it enabled by default. Go to Administer | User management | Permissions. You will get this link from the TinyMCE configuration page too. Allow authenticated users to access tinymce. Then save permissions. We are now ready to test. Go to the Create Content | Page link. Super! The shiny WYSIWYG editor is now functional! It shows editing controls below the text area (all the other CMSs we saw so far show the controls above).
Go ahead and add some content. Make sure to check Full HTML in Input Format. Save the page. You will see the content we entered right after you save it. Congratulations!

What just happened? We deserve congratulations. After installing Drupal, we spotted that it did not come with a WYSIWYG editor. That's a bit of a setback. Drupal claims to be lightweight, but it should come with a nice editor, right? There are reasons for not including an editor by default. Drupal can be used for a variety of needs, and different WYSIWYG editors provide different features. The reason for not including any editor is to allow you to use the one that you feel is the best. Drupal is about a strong core and flexibility. At the same time, not getting a WYSIWYG editor by default was an opportunity. It was our opportunity to see how easy it was to add a plug-in to Drupal. We went to the Drupal site and found the TinyMCE module. The description of the module mentioned that the module is only a hook to TinyMCE. We need to download TinyMCE separately. We did that too. Hooks are another strength of Drupal. They are an easy way to develop extensions for Drupal. An additional point with modules is to ensure that we download a version compatible with our Drupal version. Mismatched Drupal and module versions create problems. We created a new directory within sites/all. This is the directory in which all custom modules/extensions should be stored. We extracted the module and TinyMCE ZIP files. We then logged on to the Drupal administration panel. Drupal had detected the module. We enabled it and configured it. The configuration process was multistep. Drupal has a very good access privilege system, but that made the configuration process longer. We not only had to enable the module, but also enable it for users. We also configured how it should show up, and in which sections. These are superb features for power users. Once all this was done, we could see a WYSIWYG editor in the content creation page. We used it and created a new page in Drupal. Here are the lessons we learned: Don't assume a feature in the CMS. Verify if that CMS has what you need. Drupal's module installation and configuration process is multistep and may require some looking around. Read the installation instructions of the plug-in. You will make fewer mistakes that way. Drupal is lightweight and is packed with a lot of power. But it has a learning curve of its own. With those important lessons in mind, let's look around Drupal and figure out our way.

Have a go hero-figure out your way with Drupal
We just saw what it takes to get a WYSIWYG editor working with Drupal. This was obviously not a simple plug-and-play setup! Drupal has its way of doing things. If you are planning to use Drupal, it's a good time to go deeper and figure your way out with Drupal. Try out the following: Create a book with three chapters. Create a mailing list and send out one newsletter. Configure permissions and users according to your requirements. What if you wanted to customize the homepage? How easily can you do this? (Warning: It's not a simple operation with most CMSs.)

Choosing a CMS is very confusing!
Evaluating and choosing a CMS can be very confusing. Don't worry if you feel lost and confused among all the CMSs and their features. The guiding factors should always be your requirements, not the CMS's features. Figure out who's going to use the CMS—developers or end users. Find out all you need: Do you need to allow customizing the homepage?
Know your technology platform. Check the code quality of the CMS—bad code can gag you. Does your site need so many features? Is the CMS only good looking, or is it beauty with brains? Consider all this in your evaluation.

Drupal code quality
Drupal's code is very well-structured. It's easy to understand and to extend via the hooks mechanism. The Drupal team takes extreme care in producing good code. Take a look at the sample code here. If you like looking around code, go ahead and peek into Drupal. Even if you don't use Drupal as a CMS, you can learn more about programming best practices.

Now let's do a quick review and see some interesting Joomla! features.

Yearly Holiday List Calendar Developed using jQuery, AJAX, XML and CSS3

Packt
01 Feb 2011
5 min read
  PHP jQuery Cookbook Over 60 simple but highly effective recipes to create interactive web applications using PHP with jQuery Create rich and interactive web applications with PHP and jQuery Debug and execute jQuery code on a live site Design interactive forms and menus Another title in the Packt Cookbook range, which will help you get to grips with PHP as well as jQuery About Holiday List Calendar This widget will help you in knowing the list of holidays in various countries. Here in this example, I have listed holidays pertaining to only two counties, namely India and US. You can make use of this widget on your websites or blogs to tell your readers about holidays and their importance if necessary. Adding jQuery to your page Download the latest version of jQuery from the jQuery site. This site can be added as a reference to your web pages accordingly. You can reference a local copy of jQuery after downloading <script> tag in the page. You can also directly reference the remote copy from jQuery or Google Ajax API. Pre-requisite Knowledge In order to understand the code, one should have some knowledge of AJAX concepts and XML Structure, basic knowledge of HTML, advance knowledge of CSS3 and lastly and mostly important one should know advance level of jQuery coding. Ingredients Used jQuery [Advance Level] CSS3 HTML XML Photoshop [Used for coming up with UI Design] HTML Code <div id="calendar-container"> <div class="nav-container"> <span>Year<br/><select id="selectYear"></select></span> <span class="last">Month<br /><a href="#" id="prevBtn" class="button gray"><</a> <select id="selectMonth"></select> <a href="#" id="nextBtn" class="button gray">></a></span> </div> <div class="data-container"></div> </div> XML Structure <?xml version="1.0" encoding="ISO-8859-1"?> <calendar> <year whichyear="2010" id="1"> <month name="January" id="1"> <country name="India"> <holidayList date="Jan 01st" day="Friday"><![CDATA[Sample Data]]></holidayList> <holidayList date="Jan 14th" day="Friday"><![CDATA[Sample Data]]></holidayList> <holidayList date="Jan 26th" day="Wednesday"><![CDATA[Sample Data]]></holidayList> </country> <country name="US"> <holidayList date="Jan 01st" day="Saturday"><![CDATA[Sample Data]]></holidayList> <holidayList date="Jan 17th" day="Monday"><![CDATA[Sample Data]]></holidayList> </country> </month> <month name="January" id="1"> --------------------- --------------------- --------------------- </month> </year> </calendar> CSS Code body{ margin: 0; padding: 0; font-family: "Lucida Grande", "Lucida Sans", sans-serif; font-size: 100%; background: #333333; } #calendar-container{ width:370px; padding:5px; border:1px solid #bcbcbc; margin:0 auto; background-color:#cccccc; -webkit-border-radius: .5em; -moz-border-radius: .5em; border-radius: .5em; -webkit-box-shadow: 0 1px 4px rgba(0,0,0,.2); -moz-box-shadow: 0 1px 4px rgba(0,0,0,.2); box-shadow: 0 1px 4px rgba(0,0,0,.2); } .nav-container{ padding:5px; } .nav-container span{display:inline-block; text-align::left; padding-right:15px; border-right:1px solid #828282; margin-right:12px; text-shadow: 1px 1px 1px #ffffff; font-weight:bold;} .nav-container span.last{padding-right:0px; border-right:none; margin-right:0px;} .data-container{ font-family:Arial, Helvetica, sans-serif; font-size:14px; } #selectMonth{width:120px;} .data-container ul{margin:0px; padding:0px;} .data-container ul li{ list-style:none; padding:5px;} .data-container ul li.list-header{border-bottom:1px solid #bebebe; border-right:1px solid #bebebe; background-color:#eae9e9; 
-webkit-border-radius: .2em .2em 0 0; -moz-border-radius: .2em .2em 0 0; border-radius: .3em .3em 0 0; background:-moz-linear-gradient(center top , #eae9e9, #d0d0d0) repeat scroll 0 0 transparent; margin-top:5px; text-shadow: 1px 1px 1px #ffffff;} .data-container ul li.padding-left-10px {background-color:#EEEEEE; border-bottom:1px solid #BEBEBE; border-right:1px solid #BEBEBE; font-size:12px;} /* button ---------------------------------------------- */ .button { font-size: 25px; font-weight: 700; display: inline-block; zoom: 1; /* zoom and *display = ie7 hack for display:inline-block */ *display: inline; vertical-align: bottom; margin: 0 2px; outline: none; cursor: pointer; text-align: center; text-decoration: none; text-shadow: 1px 1px 1px #555555; padding: 0px 10px 3px 10px; -webkit-border-radius: .2em; -moz-border-radius: .2em; border-radius: .2em; -webkit-box-shadow: 0 1px 2px rgba(0,0,0,.2); -moz-box-shadow: 0 1px 2px rgba(0,0,0,.2); box-shadow: 0 1px 2px rgba(0,0,0,.2); } .button:hover { text-decoration: none; } .button:active { position: relative; top: 1px; } select{ -webkit-border-radius: .2em .2em .2em .2em; -moz-border-radius: .2em .2em .2em .2em; border-radius: .2em; -webkit-box-shadow: 0 1px 2px rgba(0,0,0,.2); -moz-box-shadow: 0 1px 2px rgba(0,0,0,.2); box-shadow: 0 1px 2px rgba(0,0,0,.2); padding:5px; font-size:16px; border:1px solid #4b4b4b; } /* color styles ---------------------------------------------- */ .gray { color: #e9e9e9; border: solid 1px #555; background: #6e6e6e; background: -webkit-gradient(linear, left top, left bottom, from(#888), to(#575757)); background: -moz-linear-gradient(top, #888, #575757); filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#888888', endColorstr='#575757'); } .gray:hover { background: #616161; background: -webkit-gradient(linear, left top, left bottom, from(#757575), to(#4b4b4b)); background: -moz-linear-gradient(top, #757575, #4b4b4b); filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#757575', endColorstr='#4b4b4b'); } .gray:active { color: #afafaf; background: -webkit-gradient(linear, left top, left bottom, from(#575757), to(#888)); background: -moz-linear-gradient(top, #575757, #888); filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#575757', endColorstr='#888888'); } .grayDis{ color: #999999; background: -webkit-gradient(linear, left top, left bottom, from(#575757), to(#888)); background: -moz-linear-gradient(top, #575757, #888); filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#575757', endColorstr='#888888'); } h2{ color:#ffffff; text-align:center; margin:10px 0px;} #header{ text-align:center; font-size: 1em; font-family: "Helvetica Neue", Helvetica, sans-serif; padding:1px; margin:10px 0px 80px 0px; background-color:#575757; } .ad{ width: 728px; height: 90px; margin: 50px auto 10px; } #footer{ width: 340px; margin: 0 auto; } #footer p{ color: #ffffff; font-size: .70em; margin: 0; } #footer a{ color: #15ADD1; }  
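The listing above covers the markup, the XML data, and the CSS, but the jQuery/AJAX wiring itself is not part of this excerpt. The following sketch is one possible way to tie the pieces together, based only on the IDs, class names, and XML attributes shown above; it is not the author's original script, and the calendar.xml file name is an assumption:

$(document).ready(function () {
    var $year = $('#selectYear'),
        $month = $('#selectMonth'),
        $data = $('.data-container'),
        calendarXml = null;

    // Render the holiday list of the selected year/month into .data-container.
    function render() {
        var $m = $(calendarXml).find('year[whichyear="' + $year.val() + '"]')
                               .find('month[id="' + $month.val() + '"]'),
            html = '';
        $m.find('country').each(function () {
            html += '<ul><li class="list-header">Holidays in ' + $(this).attr('name') + '</li>';
            $(this).find('holidayList').each(function () {
                html += '<li class="padding-left-10px">' + $(this).attr('date') + ', ' +
                        $(this).attr('day') + ' - ' + $(this).text() + '</li>';
            });
            html += '</ul>';
        });
        $data.html(html || '<ul><li>No holidays found for this selection.</li></ul>');
    }

    // Fetch the XML once, fill the drop-downs from its attributes, then render.
    $.ajax({
        type: 'GET',
        url: 'calendar.xml', // assumed file name for the XML document shown above
        dataType: 'xml',
        success: function (xml) {
            calendarXml = xml;
            $(xml).find('year').each(function () {
                var y = $(this).attr('whichyear');
                $year.append('<option value="' + y + '">' + y + '</option>');
            });
            // Assumes every year lists the same months, so the first year is enough.
            $(xml).find('year').first().find('month').each(function () {
                $month.append('<option value="' + $(this).attr('id') + '">' +
                              $(this).attr('name') + '</option>');
            });
            render();
        }
    });

    // Changing either drop-down re-renders from the cached XML (no new request).
    $year.add($month).change(render);

    // The previous/next buttons simply step through the month drop-down.
    $('#prevBtn, #nextBtn').click(function (e) {
        e.preventDefault();
        var step = ($(this).attr('id') === 'nextBtn') ? 1 : -1,
            idx = $month[0].selectedIndex + step;
        if (idx >= 0 && idx < $month[0].options.length) {
            $month[0].selectedIndex = idx;
            render();
        }
    });
});

Because the whole document is fetched once, switching the year or month only re-renders from the cached XML instead of issuing a new request each time.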

Making Progress with Menus and Toolbars using Ext JS 3.0: Part 1

Packt
19 Nov 2009
7 min read
Placing buttons in a toolbar You can embed different types of components in a toolbar. This topic teaches you how to build a toolbar that contains image-only, text-only and image/text buttons, a toggle button, and a combo box. How to do it Create the styles for the toolbar items: #tbar{ width:600px;}.icon-data{ background:url(img/data.png) 0 no-repeat !important;}.icon-chart{ background:url(img/pie-chart.png) 0 no-repeat !important;}.icon-table{ background:url(img/table.png) 0 no-repeat !important;} Define a data store for the combo box: Ext.onReady(function() { Ext.QuickTips.init(); var makesStore = new Ext.data.ArrayStore({ fields: ['make'], data: makes // from cars.js }); Create a toolbar and define the buttons and combo box inline: var tb = new Ext.Toolbar({ renderTo: 'tbar', items: [{ iconCls: 'icon-data', tooltip: 'Icon only button', handler:clickHandler }, '-', { text: 'Text Button' }, '-', { text: 'Image/Text Button', iconCls: 'icon-chart' }, '-', { text: 'Toggle Button', iconCls: 'icon-table', enableToggle: true, toggleHandler: toggleHandler, pressed: true }, '->', 'Make: ', { xtype: 'combo', store: makesStore, displayField: 'make', typeAhead: true, mode: 'local', triggerAction: 'all', emptyText: 'Select a make...', selectOnFocus: true, width: 135 }]}); Finally, create handlers for the push button and the toggle button: function clickHandler(btn) { Ext.Msg.alert('clickHandler', 'button pressed');}function toggleHandler(item, pressed) { Ext.Msg.alert('toggleHandler', 'toggle pressed');} How it works The buttons and the combo box are declared inline. While the standard button uses a click handler through the handler config option, the toggle button requires the toggleHandler config option.The button icons are set with the iconCls option, using the classes declared in the first step of the topic. As an example, note the use of the Toolbar.Separator instances in this fragment: }, '-', { text: 'Text Button'}, '-', { text: 'Image/Text Button', iconCls: 'icon-chart'}, '-', { Using '-' to declare a Toolbar.Separator is equivalent to using xtype: 'tbseparator'. Similarly, using '->' to declare Toolbar.Fill is equivalent to using xtype:'tbfill'. See also... The next recipe, Working with the new ButtonGroup component, explains how to use the ButtonGroup class to organize a series of related buttons Working with the new ButtonGroup component A welcome addition to Ext JS is the ability to organize buttons in groups. 
Here's how to create a panel with a toolbar that contains two button groups: How to do it Create the styles for the buttons: #tbar{ width:600px;}.icon-data{ background:url(img/data.png) 0 no-repeat !important;}.icon-chart{ background:url(img/pie-chart.png) 0 no-repeat !important;}.icon-table{ background:url(img/table.png) 0 no-repeat !important;}.icon-sort-asc{ background:url(img/sort-asc.png) 0 no-repeat !important;}.icon-sort-desc{ background:url(img/sort-desc.png) 0 no-repeat !important;}.icon-filter{ background:url(img/funnel.png) 0 no-repeat !important;} Define a panel that will host the toolbar: Ext.onReady(function() { var pnl = new Ext.Panel({ title: 'My Application', renderTo:'pnl-div', height: 300, width: 500, bodyStyle: 'padding:10px', autoScroll: true, Define a toolbar inline and create two button groups: tbar: [{ xtype: 'buttongroup', title: 'Data Connections', columns: 1, defaults: { scale: 'small' }, items: [{ xtype:'button', text: 'Data Sources', iconCls:'icon-data' }, { xtype: 'button', text: 'Tables', iconCls: 'icon-table' }, { xtype: 'button', text: 'Reports', iconCls: 'icon-chart' }]}, { xtype: 'buttongroup', title: 'Sort & Filter', columns: 1, defaults: { scale: 'small' }, items: [{ xtype: 'button', text: 'Sort Ascending', iconCls: 'icon-sort-asc' }, { xtype: 'button', text: 'Sort Descending', iconCls: 'icon-sort-desc' }, { xtype: 'button', text: 'Filter', iconCls: 'icon-filter' }]}] How it works Using a button group consists of adding a step to the process of adding buttons, or other items, to a toolbar. Instead of adding the items directly to the toolbar, you need to firstly define the group and then add the items to the group: tbar: [{ xtype: 'buttongroup', title: 'Data Connections', columns: 1, defaults: { scale: 'small' }, items: [{ xtype:'button', text: 'Data Sources', iconCls:'icon-data' }, { xtype: 'button', text: 'Tables', iconCls: 'icon-table' }, { xtype: 'button', text: 'Reports', iconCls: 'icon-chart' }]} See also... The next recipe, Placing buttons in a toolbar, illustrates how you can embed different types of components in a toolbar Placing menus in a toolbar In this topic, you will see how simple it is to use menus inside a toolbar. 
The panel's toolbar that we will build, contains a standard button and a split button, both with menus: How to do it Create the styles for the buttons: #tbar{ width:600px;}.icon-data{ background:url(img/data.png) 0 no-repeat !important;}.icon-chart{ background:url(img/pie-chart.png) 0 no-repeat !important;}.icon-table{ background:url(img/table.png) 0 no-repeat !important;} Create a click handler for the menus: Ext.onReady(function() { Ext.QuickTips.init(); var clickHandler = function(action) { alert('Menu clicked: "' + action + '"');}; Create a window to host the toolbar: var wnd = new Ext.Window({ title: 'Toolbar with menus', closable: false, height: 300, width: 500, bodyStyle: 'padding:10px', autoScroll: true, Define the window's toolbar inline, and add the buttons and their respective menus: tbar: [{ text: 'Button with menu', iconCls: 'icon-table', menu: [ { text: 'Menu 1', handler:clickHandler.createCallback('Menu 1'), iconCls: 'icon-data' }, { text: 'Menu 1', handler: clickHandler.createCallback('Menu 2'), iconCls: 'icon-data'}]}, '-',{ xtype: 'splitbutton', text: 'Split button with menu', iconCls: 'icon-chart', handler: clickHandler.createCallback('Split button with menu'), menu: [ { text: 'Menu 3', handler: clickHandler.createCallback('Menu 3'), iconCls: 'icon-data' }, { text: 'Menu 4', handler: clickHandler.createCallback('Menu 4'), iconCls: 'icon-data'}] }]}); Finally, show the window: wnd.show(); How it works This is a simple procedure. Note how the split button is declared with the xtype: 'splitbutton' config option. Also, observe how the createCallback() function is used to invoke the clickHandler() function with the correct arguments for each button. See also... The next recipe, Commonly used menu items, shows the different items that can be used in a menu Commonly used menu items To show you the different items that can be used in a menu, we will build a menu that contains radio items, a checkbox menu, a date menu, and a color menu.This is how the radio options and checkbox menu will look: The Pick a Date menu item will display a date picker, as shown in the next screenshot: The Pick a Color menu item displays a color picker, as seen here: How to do it Create a handler for the checkbox menu: Ext.onReady(function() { Ext.QuickTips.init(); var onCheckHandler = function(item, checked) { Ext.Msg.alert('Menu checked', item.text + ', checked: ' + (checked ? 'checked' : 'unchecked')); }; Define a date menu: var dateMenu = new Ext.menu.DateMenu({ handler: function(dp, date) { Ext.Msg.alert('Date picker', date); }}); Define a color menu: var colorMenu = new Ext.menu.ColorMenu({ handler: function(cm, color) { Ext.Msg.alert('Color picker', String.format('You picked {0}.', color)); }}); Create a main menu. 
Now add the date and color menus, as well as a few inline menus: var menu = new Ext.menu.Menu({ id: 'mainMenu', items: [{ text: 'Radio Options', menu: { items: [ '<b>Choose a Theme</b>', { text: 'Aero Glass', checked: true, group: 'theme', checkHandler: onCheckHandler }, { text: 'Vista Black', checked: false, group: 'theme', checkHandler: onCheckHandler }, { text: 'Gray Theme', checked: false, group: 'theme', checkHandler: onCheckHandler }, { text: 'Default Theme', checked: false, group: 'theme', checkHandler: onCheckHandler } ] } }, { text: 'Pick a Date', iconCls: 'calendar', menu: dateMenu }, { text: 'Pick a Color', menu: colorMenu }, { text: 'The last menu', checked: true, checkHandler: onCheckHandler }]}); Create a toolbar and add the main menu: var tb = new Ext.Toolbar({ renderTo: 'tbar', items: [{ text: 'Menu Items', menu: menu }]}); How it works After defining the date and color pickers, the main menu is built. This menu contains the pickers, as well as a few more items that are defined inline. To display checked items (see the checked: true config option) with a radio button instead of a checkbox, the menu items need to be defined using the group config option. This is how the theme selector menu is built: menu: { items: [ '<b>Choose a Theme</b>', { text: 'Aero Glass', checked: true, group: 'theme', checkHandler: onCheckHandler }, { text: 'Vista Black', checked: false, group: 'theme', checkHandler: onCheckHandler See also... The Placing buttons in a toolbar recipe (covered earlier in this article) illustrates how you can embed different types of components in a toolbar >> Continue Reading Making Progress with Menus and Toolbars using Ext JS 3.0: Part 2 [ 1 | 2 ]   If you have read this article you may be interested to view : Making Progress with Menus and Toolbars using Ext JS 3.0: Part 2 Load, Validate, and Submit Forms using Ext JS 3.0: Part 1 Load, Validate, and Submit Forms using Ext JS 3.0: Part 2 Load, Validate, and Submit Forms using Ext JS 3.0: Part 3
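Returning to the createCallback() calls used in the recipes above: in Ext JS 3, createCallback() returns a new function that, when invoked, calls the original function with exactly the arguments you froze in, ignoring whatever the event system passes at call time. A short sketch of the idea, reusing the clickHandler shown earlier:

// clickHandler expects the menu label as its argument (as in the recipe above).
var clickHandler = function (action) {
    Ext.Msg.alert('clickHandler', 'Menu clicked: "' + action + '"');
};

// handler must be given a function; createCallback() builds one that calls
// clickHandler('Menu 3') only when the menu item is actually clicked.
var viaCreateCallback = {
    text: 'Menu 3',
    handler: clickHandler.createCallback('Menu 3')
};

// A hand-rolled equivalent, for comparison:
var handRolled = {
    text: 'Menu 3',
    handler: function () {
        clickHandler('Menu 3');
    }
};

If you also need to control the scope (the value of this) inside the handler, the related createDelegate() method covers that case.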