
How-To Tutorials - Web Development

1802 Articles

Keeping Extensions Secure with Joomla! 1.5: Part 1

Packt
18 Nov 2009
7 min read
Introduction

There's no such thing as a completely secure system. No matter how many precautions we take and how many times we verify our design and implementation, we will never be able to guarantee that we have created a truly secure Joomla! extension. Why not? Because it is not possible to be prepared for every potential vulnerability.

Common Weakness Enumeration (CWE) is a project dedicated to generating a formal categorization and identification system of security vulnerabilities. CWE published a list of the top 25 security weaknesses, selected on the basis of their frequency and consequences. This list includes some of the most publicized security weaknesses, such as code injection (CWE-94) and XSS (CWE-79). When considering the security of our extensions, this list can prove useful. For information about common programming mistakes that lead to security vulnerabilities, refer to http://cwe.mitre.org/top25/.

This article includes references to CWE weaknesses in the form of CWE IDs, that is, CWE-n. For information about a weakness, simply Search By ID on the CWE web site. These references are intended to help us better understand the weaknesses, the risks associated with them, how those risks can be reduced using the Joomla! framework, and the suggested CWE mitigations.

Something we should consider is the ramifications of security flaws. Whichever way we look at it, the answer always involves financial loss. This is true even of non-profit organizations. If a web site is attacked and the attacker manages to completely obliterate all the data held on that site, it will cost the owner time to restore a backup or replace the data. That may not seem like a financial loss because the organization is non-profit, but if the web site owner spends two hours restoring his or her data, those two hours could have been used elsewhere.

For commercial web sites, the potential for financial loss is far more obvious. If we use a bank as an example, a security flaw could enable an attacker to transfer money from the bank to his or her own (probably untraceable) account. In 2004, the Internet bank Cahoot suffered a security flaw that enabled any existing customer to access other customers' accounts. Cahoot did not suffer any obvious financial loss from the flaw, and they claimed there was no risk of financial loss. However, customers' confidence in Cahoot was inevitably shaken. This loss of confidence and the resulting bad press will certainly have affected Cahoot in some way; for example, some potential customers may have decided to open an account with a rival bank because of concerns over how secure their savings would, or would not, be with Cahoot. For more information, refer to http://news.bbc.co.uk/1/hi/business/3984845.stm.

From the perspective of an extension developer, we should reflect on our moral duty and our liability. Disclaimers, especially for commercial software, do not relinquish us of legal responsibility. We should always try to avoid any form of litigation, and I'm not suggesting that we run to Mexico or our closest safe haven. We should take a holistic approach to security: we need a complete view of how the system works and of the various elements that need to be secure. Security should be built into our extension from the requirements-gathering stage through to ongoing system maintenance. How we do that depends on how we are managing our project and what the security implications of our extension are.
For example, a shopping cart component with credit card processing facilities will require far greater attention to security than a content plugin that converts occurrences of :) to smiley face images. Irrespective of the way we choose to manage the risks of weaknesses, we should always document how we are circumventing security threats. Doing so makes it easier to maintain our extension without introducing vulnerabilities. Documentation also provides us with proof of prudent risk management, which can be useful should we ever be accused of failing to adequately manage the security risks associated with our software.

This is all starting to sound like a lot of work! That brings us back to the ramifications of vulnerabilities. If, on the one hand, it costs us one extra month of development time to produce a piece of near-secure software, and on the other it costs us two months to patch a non-secure piece of software plus an incalculable amount of damage to our reputation, it is clear which route we should favor! Packt Publishing offers a book that deals specifically with Joomla! security. For more information, refer to http://www.packtpub.com/joomla-web-security-guide/.

Writing SQL-safe queries

SQL injection is probably the most high profile of all malicious web attacks. The effects of an SQL injection attack can be devastating and wide-ranging. Whereas some of the more strategic attacks may simply be aimed at gaining access, others may intend to bring about total disruption or even destruction. Some of the most prestigious organizations in the world have found themselves dealing with the effects of SQL injection attacks; for example, in August 2007 the United Nations web site was defaced as a result of an SQL injection vulnerability. More information can be found at http://news.bbc.co.uk/1/hi/technology/6943385.stm.

Dealing with the effects of an SQL injection attack is one thing, but preventing them is quite another. This recipe explains how we can ensure that our queries are safe from attack by utilizing the Joomla! framework. For more information about SQL injection, refer to CWE-89.

Getting ready

The first thing we need is the database handler. There is nothing special here, just the usual Joomla! code:

```php
$db =& JFactory::getDBO();
```

How to do it...

There are two aspects of a query that require special attention:

- Identifiers and names
- Literal values

The JDatabase::nameQuote() method is used to safely represent identifiers and names. We will start with an easy example, a name that consists of a single identifier:

```php
$name = $db->nameQuote('columnIdentifier');
```

We must take care when dealing with multiple-part names (that is, names that include more than one identifier separated by a period). If we attempt to do the same thing with the name tableIdentifier.columnIdentifier, we won't get the expected result! Instead, we would have to do the following:

```php
// prepare identifiers
$tableIdentifier  = $db->nameQuote('tableIdentifier');
$columnIdentifier = $db->nameQuote('columnIdentifier');
// create name
$name = "$tableIdentifier.$columnIdentifier";
```

Avoid hardcoding encapsulation. Instead of using the JDatabase::nameQuote() method, it can be tempting to do this:

```php
$sql = 'SELECT * FROM `#__foobar_groups` AS `group`';
```

This works, but the query is now tightly coupled with the database system, making it difficult to employ an alternative database system.

Now we will take a look at how to deal with literal values. Let's start with strings.
In MySQL, strings are encapsulated in double or single quotes, which makes the process of dealing with them seem extremely simple. Unfortunately, that would be an oversight: strings can contain any character, including the type of quotes we use to encapsulate them. Therefore, it is also necessary to escape strings. We do all of this using the JDatabase::Quote() method as follows:

```php
$tableIdentifier  = $db->nameQuote('tableIdentifier');
$columnIdentifier = $db->nameQuote('columnIdentifier');
$sql = "SELECT * FROM $tableIdentifier "
     . "WHERE $columnIdentifier "
     . ' = '
     . $db->Quote("How's the recipebook going?");
```

The following shows what the JDatabase::Quote() method essentially does. The exact output will depend on the database handler; however, most databases escape and encapsulate strings in pretty much the same way.

| Original | Quoted |
| --- | --- |
| How's the recipebook going? | 'How\'s the recipebook going?' |
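To see both methods working together, here is a minimal sketch of a complete query that safely handles a user-supplied search value. The #__shoutbox table, the message column, and the search parameter are hypothetical, invented for illustration; only the JFactory, JRequest, and JDatabase calls are standard Joomla! 1.5 API:

```php
// Hypothetical example: find rows whose message column matches user input.
// Identifiers go through nameQuote() and the literal through Quote(), so all
// quoting and escaping is handled by the framework rather than by hand.
$db =& JFactory::getDBO();

// Never concatenate raw request data into SQL
$search = JRequest::getString('search');

$table  = $db->nameQuote('#__shoutbox');   // hypothetical table
$column = $db->nameQuote('message');       // hypothetical column

// getEscaped() with the second parameter set to true also escapes the
// LIKE wildcards % and _; we then tell Quote() not to escape a second time.
$value = $db->Quote('%' . $db->getEscaped($search, true) . '%', false);

$db->setQuery("SELECT * FROM $table WHERE $column LIKE $value");
$rows = $db->loadObjectList();
```

Because the escaped input is routed through Quote() rather than pasted into the string, a value such as '; DROP TABLE ends up as harmless literal text inside the query.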

NetBeans Platform 6.9: Advanced Aspects of Window System

Packt
17 Aug 2010
5 min read
(For more resources on NetBeans, see here.)

Creating custom modes

You can get quite far with the standard modes provided by the NetBeans Platform. Still, sometimes you may need to provide a custom mode in order to provide a new position for the TopComponents within the application. A custom mode is created declaratively in XML files, rather than programmatically in Java code. In the following example, you create two new modes that are positioned side by side in the lower part of the application, using a specific location relative to each other.

1. Create a new module named CustomModes, with Code Name Base com.netbeansrcp.custommodes, within the existing WindowSystemExamples application.
2. Right-click the module project and choose New | Other to open the New File dialog. Then choose Other | Empty File, as shown in the following screenshot:
3. Type mode1.wsmode as the new filename and file extension, as shown in the following screenshot. Click Finish.
4. Define the content of the new mode1.wsmode as follows:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mode PUBLIC
          "-//NetBeans//DTD Mode Properties 2.3//EN"
          "http://www.netbeans.org/dtds/mode-properties2_3.dtd">
<mode version="2.3">
    <name unique="mode1" />
    <kind type="view" />
    <state type="joined" />
    <constraints>
        <path orientation="vertical" number="20" weight="0.2"/>
        <path orientation="horizontal" number="20" weight="0.5"/>
    </constraints>
</mode>
```

5. Create another file to define the second mode and name it mode2.wsmode. Add this content to the new file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mode PUBLIC
          "-//NetBeans//DTD Mode Properties 2.3//EN"
          "http://www.netbeans.org/dtds/mode-properties2_3.dtd">
<mode version="2.3">
    <name unique="mode2" />
    <kind type="view" />
    <state type="joined" />
    <constraints>
        <path orientation="vertical" number="20" weight="0.2"/>
        <path orientation="horizontal" number="40" weight="0.5"/>
    </constraints>
</mode>
```

Via the two wsmode files described above, you have defined two custom modes: the first has the unique name mode1 and the second mode2. Both are created for normal TopComponents (view instead of editor) that are integrated into the main window rather than being undocked by default (joined instead of separated). The constraints elements in the files are comparable to GridBagLayout constraints, with a relative horizontal and vertical position, as well as a relative horizontal and vertical weight. You place mode1 at position 20/20 with a weighting of 0.5/0.2, while mode2 is placed at position 20/40 with the same weighting. If all the other defined modes have TopComponents opened within them, the TopComponents in the two new modes should lie side by side, right above the status bar, taking up 20% of the available vertical space, with the horizontal space shared between them.

Let us now create two new TopComponents and register them in the layer.xml file so that they will be displayed in your new modes. Do this by using the New Window wizard twice in the CustomModes module, first creating a window with Class Name Prefix Red and then a window with Class Name Prefix Blue.

What should I set the window position to? In the wizard, in both cases, it does not matter what you set as the window position, as you are going to change that setting manually afterwards.

Let both of them open automatically when the application starts. In the Design mode of both TopComponents, add a JPanel to each TopComponent. Change the background property of the panel in the RedTopComponent to red and in the BlueTopComponent to blue.
Now edit the layer.xml of the CustomModes module, registering the two .wsmode files and ensuring that the two new TopComponents open in the new modes:

```xml
<folder name="Windows2">
    <folder name="Components">
        <file name="BlueTopComponent.settings" url="BlueTopComponentSettings.xml"/>
        <file name="RedTopComponent.settings" url="RedTopComponentSettings.xml"/>
    </folder>
    <folder name="Modes">
        <file name="mode1.wsmode" url="mode1.wsmode"/>
        <file name="mode2.wsmode" url="mode2.wsmode"/>
        <folder name="mode1">
            <file name="RedTopComponent.wstcref" url="RedTopComponentWstcref.xml"/>
        </folder>
        <folder name="mode2">
            <file name="BlueTopComponent.wstcref" url="BlueTopComponentWstcref.xml"/>
        </folder>
    </folder>
</folder>
```

As before, perform a Clean and Build on the application project node and then start the application again. It should look as shown in the following screenshot:

In summary, you defined two new modes in XML files and registered them in the module's layer.xml file. To confirm that the modes work correctly, you used the layer.xml file to register two new TopComponents so that they open by default into the new modes. As a result, you now know how to extend the default layout of a NetBeans Platform application with new modes.

The Deployment Feature of Alfresco 3

Packt
29 Sep 2010
3 min read
(For more resources on Alfresco 3, see here.)

Alfresco WCM staging has an autodeploy option in its default workflow, allowing end users, at the time of submission, to enforce automatic deployment of approved changes directly to the live website without having to manually initiate deployment. The Submit Items window has an Auto Deploy checkbox, as shown in the following screenshot:

Upon approval, if the auto deploy option is on, the workflow will perform a deployment to those live servers that have the Include In Auto Deploy option enabled. For more details about enabling this option, refer to the Configuring a web project to use FSR section in the previous article.

Deploying to a test server

The Test Server Deployment functionality provides in-context preview by allowing a contributor to deploy their content to an external target (either an ASR or FSR), from which it can be rendered by any web application technology that can either read from a filesystem or access an ASR via HTTP (which includes all of the major web application technologies in use today: Java, .NET, PHP, Ruby, Python, CGI, and so on).

Once a test server has been deployed to, it is allocated to the user or workflow that performed the deployment. Once the user or workflow has finished with the test server, it is released and returned to the pool of test servers. This happens automatically in the case of a workflow sandbox, and manually via a UI action for user sandboxes. The following process has to be followed to use a test server:

1. Set up a test server pool.
2. Deploy to a test server.
3. Preview the content.
4. Release the test server.

Setting up a test server pool

The following are the steps to configure a web project to use an FSR:

1. Navigate to Company Home | Web Projects | <web project name>.
2. Select Edit Web Project Settings from the Action menu.
3. Click on Next to reach the Configure Deployment Servers window.
4. Click on the Add Deployment Receiver link, as shown in the following screenshot:
5. For Type, select Test Server; specify the Display Name, Host name, and Target Name. Click on the Add button.
6. Similarly, configure another test server, say with "cignex-test2" as the target.
7. Ensure that the FSR is running on the test server. The targets "cignex-test1" and "cignex-test2" are configured in the FSR.

Deploying to a test server

Let's say that you, as a content manager, would like to deploy your User Sandbox to the test server for testing purposes:

1. Go to your User Sandbox and, from the More Actions menu, choose Deploy, as shown in the following screenshot:
2. The Deploy Sandbox window displays, listing all of the unallocated test servers as shown in the next screenshot. Select a test server to use (only one test server can be allocated to a sandbox at a time), and click on OK.
3. The Monitor Deployment information displays once the deployment completes. If an error occurs, the reason for the error is shown under the Deployment Failed message:

Drupal 6 Performance Optimization Using DB Maintenance and Boost: Part 2

Packt
23 Mar 2010
5 min read
Testing your Boost configuration

Now we're going to test our Boost configuration and make sure everything is working with our initial basic settings and the .htaccess configuration that we're running. Log out of your website in your current browser, or open another web browser, so that you can browse around your site as an anonymous user. The main thing we want to check is that our static HTML-type files (our Drupal pages or nodes) are being cached and stored in the cache directory we specified in the module configuration. If we chose to use GZIP compression, we will want to check that the zipped files are being generated and stored. Also, run your Status report and view your log entries to check whether any errors related to the module configuration are being thrown.

You should start noticing a performance boost on your site immediately as you browse around. Start clicking around and opening different nodes on your site and admire the faster performance!

If we check the cache directory on our site, we should notice that the Boost module has started writing HTML files to it. In the directory you should now see the following folder:

```
/cache/normal/variantcube.com/fire/node
```

Boost has automatically created a new folder called /node where it stores the cached HTML versions of the Drupal pages it loads. For example, if we look into our /node directory, we should see a bunch of HTML files that have been cached while we've browsed anonymously on the site. You can almost see this happen in real time if you browse to a page and then immediately refresh your remote server/site window in your FTP client (while in the /node folder). I see the following files, corresponding to their Drupal nodes:

```
201_.html
202_.html
203_.html
206_.html
208_.html
```

These correspond to node/201, node/202, node/203, node/206, and node/208. Also, at the root of our /fire directory, we should see any non-node pages (for example, pages created using the Drupal Views module). In our case, our main photo gallery View page has been cached as photo_gallery.html, corresponding to our photo_gallery View page. You can immediately see the power and flexibility of this module by inspecting your cache directory. You should notice a performance increase on all of these cached pages, because the pages being served are now your Boost-generated HTML pages. Repeatedly clicking on one Drupal node should demonstrate how quickly your pages now load.

The module has created another folder in your /fire/cache directory called perm. The /perm folder contains your CSS and JS files as they are cached. If you look in this folder, you'll see paths to the following folders:

```
/cache/perm/variantcube.com/fire/files/css
/cache/perm/variantcube.com/fire/files/js
```

If you look in your /css directory, you should see cached versions of your CSS files, and if you look in your /js directory, you should see cached versions of your JavaScript. Another way of checking that the module is working correctly is to view the source of your pages in your web browser and see if the following code is being added to your HTML output:

```
<!-- Page cached by Boost @ 2009-10-23 13:56:03, expires @ 2009-10-23 14:56:03 -->
```

So the actual HTML source in the web browser will tell you that you are viewing a cached version of the page rather than a dynamically generated version.
It also tells you when this cached page version will expire: based on our configuration, roughly one hour after it was generated, depending on our Boost module settings. Everything appears to be working fine with our initial Boost installation and configuration. Sit back and behold the power of Boost!

Boost and Poormanscron

Checking our Status report shows that we're running an incompatible version of Poormanscron. Boost is optimized to work with the latest dev or 2.0 branch of Poormanscron, so let's install the latest version so that our cron runs will work correctly with Boost. Visit the Poormanscron project page, download the 6.x-2.0-beta1 release, and extract and upload it to the /sites/all/modules directory. Then run your Status report again to check that the Boost warning has disappeared. You may need to run your update.php script, as this module update makes changes to your database schema. Run update.php and then refresh your Status report. In your Status report, you should now see the Boost row state: Boost Installed correctly, should be working if properly configured.

Configuring Poormanscron

The updated 2.x-beta1 version of Poormanscron is the precursor to the eventual Drupal 7 core cron functionality; in Drupal 7, the functionality of the Poormanscron module will be part of the default core processes. For this reason the beta1 version does not give you a module configuration page. It just runs cron automatically, based on a setting on your Site information page, found under Site configuration | Site information. There you have an automatic cron-run interval that you can select; we'll use the default 1 hour cron run. This is a nice preview of some of the new built-in functionality of Drupal 7 core.
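If you would rather script that check than view source by hand, a small standalone PHP script along the following lines can fetch a page anonymously and look for Boost's signature comment. This is a minimal sketch, not part of the Boost module; the node URL is a placeholder for a page on your own site, and it assumes allow_url_fopen is enabled in your PHP configuration:

```php
<?php
// Quick check: does a page carry Boost's "Page cached by Boost" comment?
// Run from the command line (php check_boost.php) so the request is anonymous.
$url = 'http://www.example.com/node/201'; // placeholder: a node on your site

$html = file_get_contents($url);
if ($html === false) {
    die("Could not fetch $url\n");
}

// Match the comment Boost appends to cached pages
if (preg_match('/<!-- Page cached by Boost @ ([^,]+), expires @ ([^ ]+ [^ ]+) -->/', $html, $m)) {
    echo "Boost cache hit: cached at {$m[1]}, expires at {$m[2]}\n";
} else {
    echo "No Boost signature found - page was generated dynamically.\n";
}
```

Running it twice in a row against the same node is a quick way to confirm the first (uncached) request primes the cache for the second.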

Introduction to Developing Facebook Applications

Packt
20 Dec 2010
6 min read
So let's get on with it...

What's so great about Facebook?

It seems like everyone's on Facebook these days: people are on it to socialize, and businesses are on it to try to attract those people's attention. But the same was true of older social networks such as LinkedIn, Friendster, and MySpace. Facebook's reach goes far beyond these; my small town's high street car park proudly displays a "Like Us On Facebook" sign. More and more Flash games and Rich Internet Applications (RIAs) are allowing users to log in using their Facebook account; it's a safe assumption that most users will have one. Companies are asking freelancers for deeper Facebook integration in their projects. It's practically a buzzword. But why the big fuss?

It's popular

Facebook benefits from the snowball effect: it's big, so it gets bigger. People sign up because most of their friends are already on it, which is generally not the case for, say, Twitter. Businesses sign up because they can reach so many people. It's a virtuous circle. There's a low barrier to entry, too; it's not just for techies, or even people who are "pretty good with computers"; even old people and luddites use Facebook.

In February 2010, the technology blog ReadWriteWeb published an article called "Facebook Wants to Be Your One True Login," about Facebook's attempts to become the de facto login system throughout the Web. Within minutes, the comments filled up with posts from confused Facebook users. (Source: http://www.readwriteweb.com/archives/facebook_wants_to_be_your_one_true_login.php.)

Evidently, the ReadWriteWeb article had temporarily become the top search result for "Facebook login", leading hundreds of Facebook users, equating Google or Bing with the Internet, to believe that this blog post was actually a redesigned Facebook.com. The comment form, fittingly, had a Sign in with Facebook button that could be used instead of manually typing in a name and e-mail address to sign a comment, and of course the Facebook users misinterpreted this as the new Log in button. And yet all of those people manage to use Facebook, keenly enough to throw a fit when it apparently became impossible to use. It's not just a site for geeks and students; it has serious mass-market appeal. Even "The Social Network", a movie based on the creation of Facebook, held this level of appeal: it opened at #1 and remained there for its second weekend.

Numbers

According to Facebook's statistics page (http://www.facebook.com/press/info.php?statistics), over 500 million people log in to Facebook in any given month (as of November 2010). For perspective, the population of the entire world is just under 7,000 million. Twitter is estimated to have 95 million monthly active users (according to the eMarketer.com September 2010 report), as is MySpace. FarmVille, the biggest game built on the Facebook platform, has over 50 million: more than half the population of either competing social network.
FarmVille has been reported to be hugely profitable, with some outsider reports claiming that its parent company, Zynga, has generated twice as much profit as Facebook itself (though take this with a grain of salt). Now, of course, not every Facebook game or application can be that successful, and FarmVille benefits from the same snowball effect as Facebook itself, making it hard to compete with. But that almost doesn't matter; these numbers validate Facebook as a platform on which a money-making business can be built.

It's everywhere

As the aforementioned ReadWriteWeb article explained, Facebook has become a standard login across many websites. Why add yet another username/password combination to your browser's list (or your memory) if you can replace them all with one Facebook login? This isn't restricted to posting blog comments. UK TV broadcaster Channel 4 allows viewers to access their entire TV lineup on demand, with no need to sign up for a specific Channel 4 account:

Again, Facebook benefits from that snowball effect: as more sites enable a Facebook login, it becomes more of a standard, and yet more sites decide to add a Facebook login in order to keep up with everyone else. Besides login capabilities, many sites also allow users to share their content via Facebook. Another UK TV broadcaster, the BBC, lets users post links for their recommended TV programs straight to Facebook:

Blogs, and indeed many websites with articles, allow readers to Like a post, publishing this fact on Facebook and on the site itself:

So half a billion people use the Facebook website every month, and at the same time Facebook spreads further and further across the Internet, and even beyond. "Facebook Messages" stores users' entire conversational histories, across e-mail, SMS, chat, and Facebook itself; "Facebook Places" lets users check into a physical location, letting friends know that they're there. No other network has this reach.

It's interesting to develop for

With all this expansion, it's difficult for a developer to keep up with the Facebook platform. And sometimes there are bugs, undocumented areas, and periods of downtime, all of which can make development harder still. But the underlying system, the Graph API introduced in April 2010, is fascinating. The previous API had become bloated and cumbersome over its four years; the Graph API feels well designed, with plenty of room for expansion.

Have a go hero – get on Facebook

If you're not on Facebook already, sign up now (for free) at http://facebook.com. You'll need an account in order to develop applications that use it. Spend some time getting used to it:

- Set up a personal profile.
- Post messages to your friends on their Walls.
- See what all the FarmVille fuss is about at http://apps.facebook.com/onthefarm.
- Check in to a location using Facebook Places.
- Log in to some blogs using your Facebook account.
- Share some YouTube videos on your own Wall from the YouTube website.
- "Like" something.

Go native!

Shipping Modules in Magento: Part 2

Packt
29 Jan 2010
8 min read
Appearing in the administration

Once this has been done, the shipping method should appear in Shipping Methods under System->Configuration:

Now we will look at the most useful shipping module fields that are used when putting a shipping module together. These are fields with predefined names and types whose output Magento processes automatically. Therefore, they require no additional coding in the adaptor module to take them on board; Magento handles them straight out of the box.

Free shipping

If we want to enable an automatic price-based threshold for free shipping with our method, we can add a field called free_shipping_enable and combine it with another field by the name of free_shipping_subtotal. When free_shipping_enable is set to Enabled by the Magento administrator, Magento will automatically take free_shipping_subtotal into account and offer free shipping if the order total is above the value of free_shipping_subtotal. If this field is disabled, Magento will simply process using the default shipping calculation behavior of the module.

The fields are set up as follows, with sort_order and show_in_ values varying:

```xml
<free_shipping_enable translate="label">
    <label>Free shipping with minimum order amount</label>
    <frontend_type>select</frontend_type>
    <source_model>adminhtml/system_config_source_enabledisable</source_model>
    <sort_order>21</sort_order>
    <show_in_default>1</show_in_default>
    <show_in_website>1</show_in_website>
    <show_in_store>1</show_in_store>
</free_shipping_enable>
<free_shipping_subtotal translate="label">
    <label>Minimum order amount for free shipping</label>
    <frontend_type>text</frontend_type>
    <sort_order>22</sort_order>
    <show_in_default>1</show_in_default>
    <show_in_website>1</show_in_website>
    <show_in_store>1</show_in_store>
</free_shipping_subtotal>
```

Handling

Handling charges sometimes come into the equation and need to be added onto the overall transaction. Magento enables us to do this using the following source models:

```xml
<handling_type translate="label">
    <label>Calculate Handling Fee</label>
    <frontend_type>select</frontend_type>
    <source_model>shipping/source_handlingType</source_model>
    <sort_order>10</sort_order>
    <show_in_default>1</show_in_default>
    <show_in_website>1</show_in_website>
    <show_in_store>0</show_in_store>
</handling_type>
<handling_action translate="label">
    <label>Handling Applied</label>
    <frontend_type>select</frontend_type>
    <source_model>shipping/source_handlingAction</source_model>
    <sort_order>11</sort_order>
    <show_in_default>1</show_in_default>
    <show_in_website>1</show_in_website>
    <show_in_store>0</show_in_store>
</handling_action>
<handling_fee translate="label">
    <label>Handling fee</label>
    <frontend_type>text</frontend_type>
    <sort_order>12</sort_order>
    <show_in_default>1</show_in_default>
    <show_in_website>1</show_in_website>
    <show_in_store>1</show_in_store>
</handling_fee>
```

Restricting a shipping method to certain countries

This allows us to present the administrator with the option of making the shipping method accessible only to certain countries. In practice, this means that if we wanted to offer one type of delivery only to the United Kingdom, we could do so simply by selecting United Kingdom from the multi-select field created by the following declaration. The Magento administrator can choose the specific countries from the multiple-select list, and only orders from those countries will be processed by the shipping module.
This enables them to choose any number of countries to restrict the shipping method to:

```xml
<sallowspecific translate="label">
    <label>Ship to applicable countries</label>
    <frontend_type>select</frontend_type>
    <sort_order>90</sort_order>
    <frontend_class>shipping-applicable-country</frontend_class>
    <source_model>adminhtml/system_config_source_shipping_allspecificcountries</source_model>
    <show_in_default>1</show_in_default>
    <show_in_website>1</show_in_website>
    <show_in_store>1</show_in_store>
</sallowspecific>
<specificcountry translate="label">
    <label>Ship to Specific countries</label>
    <frontend_type>multiselect</frontend_type>
    <sort_order>91</sort_order>
    <source_model>adminhtml/system_config_source_country</source_model>
    <show_in_default>1</show_in_default>
    <show_in_website>1</show_in_website>
    <show_in_store>1</show_in_store>
</specificcountry>
<showmethod translate="label">
    <label>Show method if not applicable</label>
    <frontend_type>select</frontend_type>
    <sort_order>92</sort_order>
    <source_model>adminhtml/system_config_source_yesno</source_model>
    <show_in_default>1</show_in_default>
    <show_in_website>1</show_in_website>
    <show_in_store>1</show_in_store>
</showmethod>
```

Using our template to create a shipping method

Now that we have our bare-bones shipping module, we continue with the creation of something that we can see an outcome from. From this, we should be able to start putting together our own shipping module, tailor-made for future needs. The module we are going to build is very simple and meets the following parameters:

- It has a handling fee, either per product or for the entire order
- It can be limited to specific countries
- It sets one simple flat-rate shipping cost if 10 products or more are being ordered
- It sets another simple flat-rate shipping cost if fewer than 10 products are being ordered
- All of the above can be configured via the Magento administration

Before progressing, we delete the previous shipping module from our installation to make sure that it does not interfere with what we'll be building. To do this, we go back to the Magento Downloader and select Uninstall from the module's supporting dropdown before committing the changes.

The configuration files

This time, we'll go with the directory MagentoBook and the module name FullShippingModule. For this, our /app/code/local/MagentoBook/FullShippingModule/etc/config.xml file will look like:

```xml
<?xml version="1.0"?>
<config>
    <modules>
        <MagentoBook_FullShippingModule>
            <version>0.1.0</version>
            <depends>
                <Mage_Shipping />
            </depends>
        </MagentoBook_FullShippingModule>
    </modules>
    <global>
        <models>
            <FullShippingModule>
                <class>MagentoBook_FullShippingModule_Model</class>
            </FullShippingModule>
        </models>
        <resources>
            <fullshippingmodule_setup>
                <setup>
                    <module>MagentoBook_FullShippingModule</module>
                </setup>
                <connection>
                    <use>core_setup</use>
                </connection>
            </fullshippingmodule_setup>
        </resources>
    </global>
</config>
```

Next, we turn on FullShippingModule and allow it to be turned off/on from within the administration. We create /app/etc/modules/MagentoBook_FullShippingModule.xml and place the following in it:

```xml
<?xml version="1.0"?>
<config>
    <modules>
        <MagentoBook_FullShippingModule>
            <active>true</active>
            <codePool>local</codePool>
        </MagentoBook_FullShippingModule>
    </modules>
</config>
```

Our adaptor

For those interested in cutting down on code, unnecessary comments (which were included in the previous adaptor in this article) have been removed.
We place the following code in /app/code/local/MagentoBook/FullShippingModule/Model/Carrier/FullBoneMethod.php:

```php
<?php
class MagentoBook_FullShippingModule_Model_Carrier_FullBoneMethod
    extends Mage_Shipping_Model_Carrier_Abstract
{
    protected $_code = 'fullshippingmodule';

    public function collectRates(Mage_Shipping_Model_Rate_Request $request)
    {
        if (!$this->getConfigData('active')) {
            Mage::log('The '.$this->_code.' shipping method is not active.');
            return false;
        }

        $handling = $this->getConfigData('handling');
        $result   = Mage::getModel('shipping/rate_result');
        $method   = Mage::getModel('shipping/rate_result_method');
        $items    = Mage::getModel('checkout/session')->getQuote()->getAllItems();

        // Pick the over- or under-limit rate based on the number of items in the cart
        if (count($items) >= $this->getConfigData('minimum_item_limit')) {
            $code  = $this->getConfigData('over_minimum_code');
            $title = $this->getConfigData('over_minimum_title');
            $price = $this->getConfigData('over_minimum_price');
        } else {
            $code  = $this->getConfigData('under_minimum_code');
            $title = $this->getConfigData('under_minimum_title');
            $price = $this->getConfigData('under_minimum_price');
        }

        $method->setCarrier($this->_code);
        $method->setCarrierTitle($this->getConfigData('title'));
        $method->setMethod($code);
        $method->setMethodTitle($title);
        $method->setPrice($price + $handling);
        $result->append($method);

        return $result;
    }
}
```

In short, this checks whether there are at least as many items in the cart as the pre-configured value of minimum_item_limit, applies one rate if the order is over the limit, and applies another rate if it is under.

Routes and model binding (Intermediate)

Packt
01 Oct 2013
6 min read
(For more resources related to this topic, see here.)

Getting ready

This section builds on the previous section and assumes you have the TodoNancy and TodoNancyTests projects all set up.

How to do it...

The following steps will help you handle the other HTTP verbs and work with dynamic routes:

1. Open the TodoNancy Visual Studio solution.
2. Add a new class to the NancyTodoTests project, call it TodosModuleTests, and fill in this test code for a GET and a POST route:

```csharp
public class TodosModuleTests
{
    private Browser sut;
    private Todo aTodo;
    private Todo anEditedTodo;

    public TodosModuleTests()
    {
        TodosModule.store.Clear();
        sut = new Browser(new DefaultNancyBootstrapper());
        aTodo = new Todo { title = "task 1", order = 0, completed = false };
        anEditedTodo = new Todo() { id = 42, title = "edited name", order = 0, completed = false };
    }

    [Fact]
    public void Should_return_empty_list_on_get_when_no_todos_have_been_posted()
    {
        var actual = sut.Get("/todos/");
        Assert.Equal(HttpStatusCode.OK, actual.StatusCode);
        Assert.Empty(actual.Body.DeserializeJson<Todo[]>());
    }

    [Fact]
    public void Should_return_201_create_when_a_todo_is_posted()
    {
        var actual = sut.Post("/todos/", with => with.JsonBody(aTodo));
        Assert.Equal(HttpStatusCode.Created, actual.StatusCode);
    }

    [Fact]
    public void Should_not_accept_posting_to_with_duplicate_id()
    {
        var actual = sut.Post("/todos/", with => with.JsonBody(anEditedTodo))
                        .Then
                        .Post("/todos/", with => with.JsonBody(anEditedTodo));
        Assert.Equal(HttpStatusCode.NotAcceptable, actual.StatusCode);
    }

    [Fact]
    public void Should_be_able_to_get_posted_todo()
    {
        var actual = sut.Post("/todos/", with => with.JsonBody(aTodo))
                        .Then
                        .Get("/todos/");
        var actualBody = actual.Body.DeserializeJson<Todo[]>();
        Assert.Equal(1, actualBody.Length);
        AssertAreSame(aTodo, actualBody[0]);
    }

    private void AssertAreSame(Todo expected, Todo actual)
    {
        Assert.Equal(expected.title, actual.title);
        Assert.Equal(expected.order, actual.order);
        Assert.Equal(expected.completed, actual.completed);
    }
}
```

The main new thing to notice in these tests is the use of actual.Body.DeserializeJson<Todo[]>(), which takes the Body property of the BrowserResponse type, assumes it contains JSON-formatted text, and deserializes that string into an array of Todo objects.

3. At the moment, these tests will not compile. To fix this, add this Todo class to the TodoNancy project:

```csharp
public class Todo
{
    public long id { get; set; }
    public string title { get; set; }
    public int order { get; set; }
    public bool completed { get; set; }
}
```

4. Then go to the TodoNancy project, add a new C# file, call it TodosModule, and add the following field to the body of the new class:

```csharp
public static Dictionary<long, Todo> store = new Dictionary<long, Todo>();
```

5. Run the tests and watch them fail. Then add the following code to TodosModule:

```csharp
public TodosModule() : base("todos")
{
    Get["/"] = _ => Response.AsJson(store.Values);

    Post["/"] = _ =>
    {
        var newTodo = this.Bind<Todo>();
        if (newTodo.id == 0)
            newTodo.id = store.Count + 1;
        if (store.ContainsKey(newTodo.id))
            return HttpStatusCode.NotAcceptable;
        store.Add(newTodo.id, newTodo);
        return Response.AsJson(newTodo)
                       .WithStatusCode(HttpStatusCode.Created);
    };
}
```

The previous code adds two new handlers to our application: one for the GET "/todos/" request and one for the POST "/todos/" request. The GET handler returns the list of todo items as a JSON array, and the POST handler allows for creating new todos.

6. Re-run the tests and watch them succeed.

Now let's take a closer look at the code.
Firstly, note how adding a handler for POST is similar to adding handlers for GET; this consistency extends to the other HTTP verbs too. Secondly, note that we pass the "todos" string to the base constructor; this tells Nancy that all routes in this module are relative to /todos. Thirdly, notice the this.Bind<Todo>() call, which is Nancy's data binding in action: it deserializes the body of the POST request into a Todo object.

Now go back to the TodosModuleTests class and add these tests for the PUT and DELETE verbs:

```csharp
[Fact]
public void Should_be_able_to_edit_todo_with_put()
{
    var actual = sut.Post("/todos/", with => with.JsonBody(aTodo))
                    .Then
                    .Put("/todos/1", with => with.JsonBody(anEditedTodo))
                    .Then
                    .Get("/todos/");
    var actualBody = actual.Body.DeserializeJson<Todo[]>();
    Assert.Equal(1, actualBody.Length);
    AssertAreSame(anEditedTodo, actualBody[0]);
}

[Fact]
public void Should_be_able_to_delete_todo_with_delete()
{
    var actual = sut.Post("/todos/", with => with.Body(aTodo.ToJSON()))
                    .Then
                    .Delete("/todos/1")
                    .Then
                    .Get("/todos/");
    Assert.Equal(HttpStatusCode.OK, actual.StatusCode);
    Assert.Empty(actual.Body.DeserializeJson<Todo[]>());
}
```

After watching these tests fail, make them pass by adding this code to the constructor of TodosModule:

```csharp
Put["/{id}"] = p =>
{
    if (!store.ContainsKey(p.id))
        return HttpStatusCode.NotFound;
    var updatedTodo = this.Bind<Todo>();
    store[p.id] = updatedTodo;
    return Response.AsJson(updatedTodo);
};

Delete["/{id}"] = p =>
{
    if (!store.ContainsKey(p.id))
        return HttpStatusCode.NotFound;
    store.Remove(p.id);
    return HttpStatusCode.OK;
};
```

All tests should now pass. Take a look at the routes for the new PUT and DELETE handlers. Both are defined as "/{id}". This will match any route that starts with /todos/ followed by something more after the trailing /, such as /todos/42, in which case the {id} part of the route is 42. Notice that both of these new handlers use their p argument to pull the ID out of the route via the p.id expression.

Nancy lets you define very flexible routes: you can use any regular expression to define a route, and all named parts of such expressions are put into the argument for the handler. The type of this argument is DynamicDictionary, a special Nancy type that lets you look up parts either via indexers (for example, p["id"]), like a dictionary, or via dot notation (for example, p.id), like other dynamic C# objects.

There's more...

In addition to the handlers for GET, POST, PUT, and DELETE that we added in this recipe, we can add handlers for PATCH and OPTIONS by following exactly the same pattern. Out of the box, Nancy automatically supports HEAD and OPTIONS for you: to handle a HEAD request, Nancy runs the corresponding GET handler but returns only the headers, and to handle OPTIONS, Nancy inspects which routes you have defined and responds accordingly.

Summary

In this article we saw how to handle HTTP verbs other than GET, how to work with dynamic routes, how to work with JSON data, and how to do model binding.

Resources for Article:

Further resources on this subject:

- Displaying MySQL data on an ASP.NET Web Page [Article]
- Layout with Ext.NET [Article]
- ASP.Net Site Performance: Speeding up Database Access [Article]

Developing an Application in Symfony 1.3 (Part 2)

Packt
10 Nov 2009
7 min read
Building the database

The last step is to create the database and then create all of the tables. I have created my database, called milkshake, on the CLI using the following command:

```
$/home/timmy/workspace/milkshake> mysqladmin create milkshake -u root -p
```

Now that we have created the database, we need to generate the SQL that will create our tables. Again, we are going to use a Symfony task for this. Just like creating the ORM layer, the task builds the SQL based on the schema.xml file. From the CLI, execute the following task:

```
$/home/timmy/workspace/milkshake> symfony propel:build-sql
```

This generates a SQL file containing all of the SQL statements needed to build the tables in our database. The file is located in the data/sql folder within the project folder; look at the generated lib.model.schema.sql file there to view the SQL. Next, we need to insert the SQL into the database, again using a Symfony task. Execute the following on the CLI:

```
$/home/timmy/workspace/milkshake> symfony propel:insert-sql
```

During the execution of the task, you will be prompted to enter y or N as to whether you want to delete the existing data. As this command will delete your existing tables and then create new ones, enter y. During development, the confirmation can become tiring. To get around this, you can append the --no-confirmation switch, as shown here:

```
> symfony propel:insert-sql --no-confirmation
```

Afterwards, check your database and you should see all of the tables created, as shown in the following screenshot:

I have shown you how to execute each of the tasks in order to build everything, but there is a simpler way: yet another Symfony task that executes all of the above tasks in one go:

```
$/home/timmy/workspace/milkshake> symfony propel:build-all
```

or

```
$/home/timmy/workspace/milkshake> symfony propel:build-all --no-confirmation
```

Our application is now all set up with a database and the ORM layer configured. Next, we can start on the application logic and produce a wireframe.

Creating the application modules

In Symfony, all requests are initially handled by a front controller before being passed to an action. The actions then implement the application logic before returning the presentation template that will be rendered. Our application will initially contain four areas: home, location, menu, and vacancies. These areas will essentially form modules within our frontend application. A module is similar to an application in that it is the place to group related application logic and is self-contained. Let's now create the modules on the CLI by executing the following tasks:

```
$/home/timmy/workspace/milkshake> symfony generate:module frontend home
$/home/timmy/workspace/milkshake> symfony generate:module frontend location
$/home/timmy/workspace/milkshake> symfony generate:module frontend menu
$/home/timmy/workspace/milkshake> symfony generate:module frontend vacancies
```

Executing these tasks creates each module's folder structure, along with default actions, templates, and tests, in our frontend application. You will see the following output when running the first task:

Let's examine the folder structure of a module:

| Folder | Description |
| --- | --- |
| actions | This folder contains the actions class and components class for a module |
| templates | All of the module's templates are stored in this folder |

Now browse to http://milkshake/frontend_dev.php/menu and you will see Symfony's default page for our menu module. Notice that this page also provides useful information on what to do next.
This information, of course, tells us to render our own template rather than have Symfony forward the request.

Handling the routing

We have just tested our menu module, and Symfony was able to handle the request without us having to configure anything. This is because the URL was interpreted as http://milkshake/module/action/:params. If the action is missing, Symfony automatically appends index and executes the index action, if one exists in the module. Looking at the URL for our menu module, we can use either http://milkshake/frontend_dev.php/menu or http://milkshake/frontend_dev.php/menu/index for the moment. Also, if we want to pass variables through the URL, we can just add them to the end. For example, if we wanted to pass page=1 to the menu module, the URL would be http://milkshake/frontend_dev.php/menu/index/page/1. The problem here is that we must also specify the name of the action, which doesn't leave much room for customizing a URL.

Mapping the URL to the application logic is called routing. In the earlier example, we browsed to http://milkshake/frontend_dev.php/menu and Symfony was able to route that to our menu module without us having to configure anything. Let's take a look at the routing file located at apps/frontend/config/routing.yml:

```yaml
# default rules
homepage:
  url:   /
  param: { module: default, action: index }

default_index:
  url:   /:module
  param: { action: index }

default:
  url:   /:module/:action/*
```

This is the default routing file that was generated for us. Using the homepage routing rule as an example, a route is broken down into three parts:

- A unique label: homepage
- A URL pattern: url: /
- An array of request parameters: param: { module: default, action: index }

We refer to each rule within the routing file by its unique label. The URL pattern is what Symfony uses to map a URL to a rule, and the array of parameters is what maps the request to the module and the action. By using a routing file, Symfony caters for complicated URLs, which can restrict parameter types and request methods, and can associate parameters with our Propel ORM layer. In fact, Symfony includes an entire framework that handles the routing for us.

The application logic

As we have seen, Symfony routes all requests to an action within a module. So let's open the actions class for our menu module, located at apps/frontend/modules/menu/actions/actions.class.php:

```php
class menuActions extends sfActions
{
  /**
   * Executes index action
   *
   * @param sfRequest $request A request object
   */
  public function executeIndex(sfWebRequest $request)
  {
    $this->forward('default', 'module');
  }
}
```

This menuActions class contains all of the menu actions and, as you can see, it extends the sfActions base class. It was generated for us along with a default index action (method). The default index action simply forwards the request to Symfony's default module, which in turn generates the default page we were presented with.

All actions follow the same naming convention: the action name must begin with the word execute, followed by the action name starting with a capital letter. The request object is passed to the action, containing all of the parameters in the request. Let's begin by modifying the default behavior of the menu module to display our own template. Here we need the application logic to return the name of the template to be rendered.
To do this, we simply replace the call to the forward() method with a return statement that names the view:

```php
public function executeIndex(sfWebRequest $request)
{
  return sfView::SUCCESS;
}
```

A default index template was also generated for us in the templates folder, at apps/frontend/modules/menu/templates/indexSuccess.php. Returning the sfView::SUCCESS constant renders this template for us. The template rendered depends on the string returned from the action, and all templates must follow the naming convention actionNameReturnString.php. Therefore, our index action returning the sfView constant SUCCESS means that the indexSuccess.php template must be present within the templates folder of our menu module. We can return other strings, such as these:

- return sfView::ERROR: looks for the indexError.php template
- return 'myTemplate': looks for the indexmyTemplate.php template
- return sfView::NONE: does not return a template and, therefore, bypasses the view layer; this could be used for an AJAX request, as in the sketch after this list

However, simply removing the $this->forward('default', 'module') call will also return indexSuccess.php by default; the explicit return value is worth adding for readability. Now that we have rendered the menu template, go ahead and do the same for the home, location, and vacancies modules.
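As a concrete illustration of the sfView::NONE case, the following is a minimal sketch of what an AJAX-style action might look like. The executeMenuData action and its JSON payload are hypothetical, invented for this example; they are not part of the milkshake application:

```php
public function executeMenuData(sfWebRequest $request)
{
  // Hypothetical action returning JSON for an AJAX call.
  // Build the response by hand and bypass the view layer entirely.
  $payload = array('page' => $request->getParameter('page', 1));

  $this->getResponse()->setContentType('application/json');
  $this->getResponse()->setContent(json_encode($payload));

  return sfView::NONE;
}
```

Because the action returns sfView::NONE, Symfony does not look for a menuDataSuccess.php template; the content set on the response object is sent back as-is.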

Managing Images and Videos in Joomla! 1.5: Part 2

Packt
19 Nov 2009
7 min read
Using video files

Video files are generally large due to the amount of content they contain and their length. It's beyond the scope of this article to describe them in detail but, in basic terms, a video is a linear sequence of still images played together to create the appearance of movement, usually accompanied by an audio track. The original footage is compressed using a codec to produce a compressed video file; the various codecs produce different results for file size, quality, and export. Video files play in the browser by downloading the data through the Internet, progressively streaming it so that the movie begins to play before the whole file has downloaded. Audio files work in a similar way, but are often not as large. The final quality of a video also depends on the method used to capture it and how it's stored: the better the quality of the camera, the better the result. If you want to learn more about video, Wikipedia has a page at http://en.wikipedia.org/wiki/Video_formats.

Just like anything else, there are pros and cons to adding videos to your website. YouTube alone has proven there is a strong market for a more visual medium; however, there are still many people who prefer text-based content. Consider whether adding a video to your site will enhance your users' experience. Is the material promotional or instructional? Is the content better demonstrated than explained? Video material can broaden your target audience, as many people prefer watching a video online to reading lengthy bodies of text. On the other hand, videos aren't great for search engine optimization; consider adding a transcript to the page as well, in order to keep the content searchable.

Choosing the best video file format

Video played through the Internet requires a media player, which acts as an interface between the video file and the browser. These days most Internet users have one embedded within their browser. Popular players include:

- QuickTime, a player created by Apple
- Windows Media Player
- WINAMP
- Real Player, developed by Real Networks
- Adobe Flash Player

The following are some of the video file types that can be played through your website using third-party media players:

- .wmv files are a popular format developed by Microsoft, which comes bundled with the Internet Explorer software package and is, therefore, preinstalled on Windows PCs. This format is good for movies with a lot of movement in them. It works with Windows Media Player, RealPlayer, and another player called VLC, but it isn't very compatible with Mac or Linux computers.
- .mov files are a QuickTime video format that also plays back on the Windows operating system. The Apple QuickTime player software can be easily downloaded from Apple at http://www.apple.com/quicktime/download/. While not every browser has the QuickTime media player installed, this format does provide very high quality video. You can always provide a link to the URL for downloading the software needed to play the video.
- .avi files are often the format of videos with smaller dimensions played back through a website. They are a container for audio and video files (hence the name!). They can sometimes be quite large in file size, depending on the codec used to compress the video footage. They are a mainstream format.
- .swf and .flv videos are excellent for web video streaming and can also include interactive features. Most Macs and PCs have the Flash Shockwave Player installed; it can also be downloaded from http://get.adobe.com/flashplayer/.
Take note of the requirements for your individual operating system and browser preferences. Keep the following in mind when considering a video for your website:

- Ensure the video is succinct and the file size as small as possible. Even with a high-speed connection, time is still required to download the complete file. Keeping the video between one and three minutes long and the file size under five megabytes is a good rule of thumb.
- The more movement in a video, the larger the file size. Consider whether the video really enhances the message. Viewers are only interested in material that is useful to them and will resent consuming their download resources on a video that holds no value for them.
- The larger the file size, the longer it takes to download. Consider your audience's data rate: do they have high-speed connections, or are some still on dial-up?

As noted earlier, video files stream their data, so playback begins before the file has fully downloaded into the browser. Once uploaded, videos require a special plugin to play them through an article on your site. Alternatively, you can embed a link from the popular YouTube site (http://www.youtube.com/). We'll look at how to do both in relation to the Party People website.

Uploading a video

We'll upload a new video, much the same way we would upload an image, to a new subfolder called videos within the Party People website. The steps are as follows:

1. Navigate to the Media Manager.
2. Select the stories folder and type videos into the Files input box.
3. Click Create Folder.
4. Select the new videos folder icon, then click the Browse button to choose the video from your desktop computer.
5. Click Start Upload.

Now we have a video file ready to be inserted into an article. The Party People website has the popular AllVideos plugin installed to do this.

Updating videos - the AllVideos plugin

This is another neat plugin that works in much the same way as the Simple Image Gallery, a stablemate from the same team of developers. If you don't have it installed and you would like to present videos on your site, ask your developer to install it for you, or refer to the developers' website at http://www.joomlaworks.gr/content/view/35/41/ for instructions.

Our Party People website has a .mov video on the Products and Services page, which we will update. To update the video display:

1. Navigate to the Article Manager through the top menu and open the Our Services Include... article.
2. Change the name of the video file between the { } and {/ } tags within the text editor to the new filename. Depending on the format of the video being presented, the code should look like this: {mov}promoVideo{/mov} This code displays a QuickTime movie within the article.
3. Save the changes.

The following screenshot shows how the video will look in context. Note that you do not need to include the format extension at the end of the filename, as the tag surrounding the name indicates it.
Changing to a different video file and format

The AllVideos plugin supports a number of video file formats, and the developer's website lists them all at http://www.joomlaworks.gr/content/view/35/41/. We'll change the video we just linked to a different one in the .wmv format. The steps are as follows:

1. Navigate to the article containing the video presentation.
2. Change the tag between the { } braces to reflect the new file type, taking care not to delete any of the symbols. For example: {wmv}updatedServicesVideo{/wmv}
3. Save the changes to your article.

Take care not to rearrange any of the formatting within the code, as this will prevent the movie from playing. That is, don't add any extra spaces, colons, commas, and so on.
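As a quick reference, the tag pattern is the same for every format the plugin supports: the tag name simply matches the file type. The filenames below are placeholders for this illustration, and the exact set of supported tags depends on your AllVideos version, so check the developer's documentation:

{mov}promoVideo{/mov}
{wmv}updatedServicesVideo{/wmv}
{flv}partyHighlights{/flv}

Versions of the plugin also offer tags for third-party services, for example {youtube}VIDEO_ID{/youtube}; again, confirm the syntax against the documentation for the version you have installed.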


Lesson Solutions Using Moodle 1.9: Part 2

Packt
21 Jan 2010
8 min read
Controlling the flow through a lesson

If your lesson questions all have true/false or yes/no answers, you will probably set Maximum number of answers/branches to 2. If you use more than two answers per question, consider whether you want to create a jump page for each answer. If you create a unique jump page for every answer on the question pages, and you use three answers per question, how many pages will each flash card need? The answer is three: the card itself, plus two jump pages for remedial information.

We don't want to spend all day creating a short lesson, but we still want to show remedial information when a student selects a wrong answer. Consider phrasing your questions, answers, and remedial pages so that one remedial page can cover all of the incorrect responses. The illustration shows this kind of flow. Note that we've reduced the number of remedial pages that have to be created.

If you must give different feedback for each answer to a question, consider using a quiz instead of a lesson. While a remedial page in a lesson can consist of anything that you can put on a web page, feedback can only consist of text. However, quizzes are usually easier to create than lessons. If a quiz with feedback will suffice, you can probably create it faster than the kind of branching lesson shown in the figure. But if your feedback must be feature-rich, there's nothing better than Moodle's Lesson module.

Use a lesson to create a deck of flash cards

Flash cards are a classic teaching strategy. In addition to being a learning experience, flash cards also make a good self-assessment tool for students. You can use a lesson as if it were an online deck of flash cards. One advantage of an online system is that the log files tell you whether a student completed the flash card activity, and how well the student did.

Keep it moving

Students are accustomed to a flash card activity moving quickly. Showing a remedial page after each incorrect response will slow down the activity, so consider using only question feedback, without remedial pages in between cards.

In a flash card lesson, every page is a question page. A question page in a lesson can have any content that you can put on a normal web page, so each page in your flash card lesson can consist of a fully featured web page, with a question at the bottom and some text-only feedback for each answer.

When setting the jumps for each answer on the question page (on the card), make sure that a correct answer takes the student to the next page and an incorrect answer keeps them on the same page. Again, this duplicates our physical experience with flash cards: when we get the correct answer, we move on to the next card; when we get a wrong answer, we try again until we've got it right.

Lesson settings that help create a flash card experience

For a flash card lesson, you will probably set Practice lesson to Yes so that the grade for this lesson will not show up in the Gradebook. Setting Maximum grade to 0 will also prevent the activity from showing up in the Gradebook; however, it will prevent a student from seeing his/her score on the activity. If you want the student to see how well he/she did on the lesson, set Practice lesson to Yes and use a maximum grade that makes sense, such as one point per correct answer.

Allow student review enables a student to go backwards in a lesson and retry questions that he/she got wrong. In a flash card activity, this is usually set to No.
Instead, we usually set Action after correct answer to Show an unanswered page. That means that after a student answers a flash card question incorrectly, Moodle might display that card again during the same session; if the student answers the question correctly, the card is not shown again during that session. This is how most of us are accustomed to using physical flash cards.

Number of pages (cards) to show determines how many pages are shown. You usually want a flash card session to be short. If the lesson contains more than this number of pages, the lesson ends after reaching the number set here; if it contains fewer, the lesson ends after every card has been shown. For a flash card lesson, set this to less than the total number of cards.

You can use the Slide Show setting to display the lesson in a separate window and make that window the size of a flash card, which helps create the effect of a deck of cards. When a student uses a physical deck of flash cards, he/she can see approximately how far into the deck he/she is; the Progress bar setting can help to create this effect with your online deck of flash cards.

Use an ungraded lesson to step through instructions

Briefly, precorrection is anticipating mistakes that students might make and providing instruction to help them avoid those mistakes. Suppose you give a complex assignment to students. You know that even if you supply them with written instructions, they are likely to make mistakes even while following the instructions. You might also give the students a video demo and a Frequently Made Mistakes document. You could even host a chat before the assignment to answer any questions they have about how to complete it. If you focus these items on the parts of the assignment that are most likely to cause trouble, they become examples of precorrection.

You can use a lesson to give students precorrection for difficult instructions. Place directions that should be read in a specific order on a series of lesson pages, and see to it that the students step through those pages. This has several advantages over placing all of the instructions on one page:

- Moodle will log the students' views of the lesson pages, so you can confirm they have read the instructions.
- While the length of a lesson page is unlimited, the tendency when creating them is to keep them short. This encourages you to break the directions into smaller chunks, which are easier for students to understand.
- You can insert a question page after each step to confirm the user's understanding of that step. Question feedback and remedial pages can correct the students' understanding before they move to the next step.

If you use this technique, the lesson should probably be a Practice lesson so that the students' grade doesn't affect their final grade for the course.

A workaround

Lessons are designed primarily as a teaching tool, and only secondarily as an assessment tool. However, if you decide that you prefer to use a lesson for assessment, you can work around this limitation. The workaround enables you to determine whether a student answered incorrectly on an initial question or on a remedial question. A low score on remedial questions should prompt action on the teacher's part, such as contacting the student and offering additional help.

You have seen how a lesson usually consists of an instructional page followed by a question page, and that when a student answers a question incorrectly, the lesson can display a remedial page.
After the remedial page, you can present another question on the same topic. Now imagine a lesson that covers three items. Each item has its own instructional page followed by a question page, and a remedial page followed by another question page. So, not counting the entry and exit pages, there would be:

- Three topic pages
- Three question pages
- Three remedial topic pages
- Three remedial question pages

If you were looking at the Gradebook for this lesson and a student's grade indicated that he/she got two questions wrong, you could not tell whether it was because he/she gave:

- One incorrect response on each of two items
- Two incorrect responses for the same item

If the student answered incorrectly on both the first and the remedial questions for the same item, it could indicate that the student is having trouble with that item. But the Gradebook won't tell you that; you will need to drill down from the Gradebook into the lesson to see that student's score for each question.

From the Gradebook, you would select the category in which the lesson is placed. In this example, the activities are not categorized. After selecting the category (or just Uncategorised), a table of grades for each student/activity is displayed, as shown in the following screenshot.

You may see that Student2 did not score well on the lesson, so select the student's score to drill down into the lesson. Select the Reports tab, then the Overview subtab, and then the grade that you want to investigate.

Finally, you may expect to see a detailed report telling you which questions this student got right or wrong, so that you could determine which concepts or facts he/she had trouble with and help the student with those items. But instead, you see the following screenshot:

Apache MyFaces Trinidad 1.2 Web Application Groundwork: Part 2

Packt
30 Nov 2009
3 min read
Deployment

Deployment is very easy because with Seam-gen we also inherit the deployment mechanism (already run during the project setup, earlier) provided by the Ant build process of Seam-gen. However, a few notes regarding the specific deployment of a Trinidad and Facelets web application, in contrast with a plain Seam-gen one, are discussed in more detail in the upcoming topics. The following screenshot shows the referenced libraries within the Eclipse IDE:

Trinidad-specific and Facelet-related changes to the project files

First of all, the lib directory lacks the Trinidad JAR files and the Facelets JAR:

- jsf-facelets-1.1.14.jar
- trinidad-api-1.2.9.jar
- trinidad-impl-1.2.9.jar

These JAR files must be added to the lib directory, while others, such as the RichFaces JARs, should be removed, as we want to achieve a clean setup with a single component library. A mix-up should be avoided to keep away from integration problems. The following screenshot shows the contents of the lib directory inside Eclipse (part I only shows files not referenced by the Eclipse project):

Most importantly, we need to update the file deployed-jars.list, as it is read by the build process to provide the application server with the required JAR files. So we reduce this list file to a more minimal, Trinidad-specific version:

antlr-runtime.jar
commons-beanutils.jar
commons-digester.jar
core.jar
drools-compiler.jar
drools-core.jar
janino.jar
jboss-el.jar
jboss-seam.jar
jboss-seam-*.jar
jbpm-jpdl.jar
jsf-facelets-1.1.14.jar
mvel14.jar
trinidad-api-1.2.10.jar
trinidad-impl-1.2.10.jar

The following screenshot shows the contents of the lib directory inside Eclipse (part II only shows files not referenced by the Eclipse project):

Next, in the resources directory we must add a provider for Seam's conversation mechanism, to support Seam conversations in Trinidad dialogs. Its file name must follow the Trinidad naming convention for this provider type, and it must be located below resources in META-INF/services (a sketch of this layout appears at the end of this section):

- File name: org.apache.myfaces.trinidad.PageFlowScopeProvider
- Contents: the name and package path of the provider class, for example, trinidad.SeamPageFlowScopeProviderImpl

This class is created as an implementation of Trinidad's abstract class PageFlowScopeProvider, which can easily be done with Eclipse's class creation wizard.

There is a further simplification: the org.jboss.seam.ui.richfaces package in the resources directory is required for Seam's support of RichFaces, but is obsolete for us and should thus be deleted.

In the WEB-INF directory, we can carry out the following activities:

- Add a folder for Facelets composition components.
- Simplify the components.xml file by removing persistence declarations such as persistence:managed-persistence-context and persistence:entity-manager-factory, as well as the drools:rule-base declarations (the only security element we leave is the one for the identity object).
- Adapt the faces-config to suit the Trinidad renderer, as described earlier.
- Simplify pages.xml even further, as we practically avoid it altogether by using the dialog framework described in the navigation section earlier.
- Modify the web.xml to suit Trinidad's requirements.
- Add three additional files, namely trinidad-config.xml, trinidad-skins.xml, and a taglib.xml to declare the Facelets composition components.
- Empty the *-dev-ds.xml and *-prod-ds.xml files of any specific data, because no database backing is used in our test project.
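To make the provider registration easier to picture, here is a minimal sketch of the layout described above. The class and package names come from the example in the text; adjust them to match your own project:

resources/
  META-INF/
    services/
      org.apache.myfaces.trinidad.PageFlowScopeProvider

The file itself contains a single line naming the implementation class:

trinidad.SeamPageFlowScopeProviderImpl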


Managing Records in Alfresco 3

Packt
25 Jan 2011
12 min read
Alfresco 3 Records Management

Comply with regulations and secure your organization's records with Alfresco Records Management:

- Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance
- The first and only book to focus exclusively on Alfresco Records Management
- Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements
- Learn in detail about the software internals to get a jump-start on performing customizations

Record Details

Much of the description in this article focuses on record features found on the Record Details page. An abbreviated set of metadata and available actions for a record is shown on the row for the record in the File Plan, but the Details page for a record is a composite screen that contains a complete listing of all information for the record, including links to all possible actions and operations that can be performed on it. We can get to the Details page for a record by clicking on the link to it from the File Plan page:

The Record Details page provides a summary of all available information known about a record and has links to all possible actions that can be taken on it. This is the central screen from which a record is managed.

The Details screen is divided into three main columns. The first column provides a preview of the content of the record, the middle column lists the record metadata, and the right-most column shows a list of actions that can be taken on the record. There are other areas lower down on the page with additional functionality, including a way for the user to manually trigger events in the steps of the disposition, to get URL links to the file content of the record, and to create relationship links to other records in the File Plan:

Alfresco Flash previewer

The web preview component in the left column of the Record Details page defines a region in which the content of the record can be visually previewed. It is a bit of an exaggeration to call the preview component a universal viewer, but it does come close. The viewer is capable of displaying a number of common file formats and can be extended to support additional ones. Natively, the viewer handles Flash SWF files and image formats like JPEG, PNG, and GIF. Microsoft Office, OpenOffice, and PDF files are also configured out of the box to be previewed, by first converting the files to PDF and then to Flash.

The use of an embedded viewer in Share means that client machines don't need a viewing application installed to see the file contents of a record. For example, a client machine running an older version of Microsoft Word may not be able to open a record saved in the newer Word DOCX format, but within Share, using the viewer, that client can preview and read the contents of the DOCX file.

The top of the viewer has a header area that displays the icon of a record alongside the name of the record being viewed. Below that is a toolbar with controls for viewing the file:

At the left of the toolbar are controls to change the zoom level. Small zoom increments are made by clicking on the "+" and "-" buttons. The zoom setting can also be controlled by the slider, or by specifying a zoom percentage or a display factor like Fit Width from the drop-down menu.
For multi-page documents, there are controls to go to the next or previous page and to jump to a specific page. The Fullscreen button enlarges the view to use the entire screen; Maximize enlarges the view to fill the browser window. Image panning and positioning within the viewer can be done using the scrollbar or by left-clicking and dragging the image with the mouse. A print option is available from the right-click menu.

Record metadata

The centre column of the Record Details page displays the metadata for the record. A lot of metadata properties are stored with each record; to make it easier to locate specific properties, the metadata is grouped, and each group has a label.

The first metadata group is Identification and Status. It contains the Name, Title, and Description of the record. It shows the unique record identifier for the record and the unique identifier for the record Category to which the record belongs. Additional metadata items track whether the record has been declared, when it was declared, and who declared it.

The General group tracks the Mimetype and Size of the file content, as well as who created or last modified the record. Additional metadata for the record is listed under groups like Record, Security, Vital Record Information, and Disposition. The Record group contains the metadata fields Location, Media Type, and Format, all of which are especially useful for managing non-electronic records.

Record actions

In the right-most column of the Record Details page, there is a list of actions that are available to perform on the record. The list displayed is dynamic and changes based on the state of the record. For example, options like Declare as Record or Undo Cutoff are only displayed when the record is in a state where that action is possible.

Download action

The Download action does just that: clicking on it causes the file content for the record to be downloaded to the user's desktop.

Edit Metadata

This action displays the edit form matching the content type of the record. For example, if the record has a content type of cm:content, the edit form associated with the type cm:content is displayed to allow editing of the metadata. Items marked with asterisks are required fields. Certain fields contain data that is not meant to change; these are grayed out and non-selectable.

Copy record

Clicking on the Copy to action pops up a repository directory browser that allows a copy of the record to be filed to any Folder within the File Plan. The name of the new record starts with the words "Copy of" followed by the name of the record being copied. Only a single copy of a record can be placed in a Folder without first changing the name of the first copy; it isn't possible to have two records with the same name in the same Folder.

Move record

Clicking on the Move to action pops up a dialog to browse to the Folder to which the record will be moved. The record is removed from the original location and moved to the new one.

File record

Clicking on the File to action pops up a dialog to identify a new Folder in which the record will be filed. A reference to the record is placed in the new Folder. After this operation, the record is effectively in two locations, and deleting the record from either location removes it from both.
After filing the record, a clip status icon is displayed at the upper-left, next to the checkbox for selection. The status indicates that the record is filed in multiple Folders of the File Plan.

Delete record

Clicking on the Delete action permanently removes the item from the File Plan. Note that this action differs from Destroy, which removes only the file content from a record as part of the final step of a disposition schedule.

Audit log

At any point in the lifecycle of a record, an audit log is available that shows a detailed history of all activities for the record. The record audit log can help to answer questions such as which users have been involved with the record and when specific lifecycle events occurred. The audit log also provides information that can confirm whether activities in the records system are both effective and compliant with record policies.

The View Audit Log action creates and pops up a dialog containing a detailed historical report for the record. The report includes granular information about every change that has ever been made to the record. Each entry in the audit log includes a timestamp for when the change was made, the user that made the change, and the type of change or event that occurred. If the event involved a change to any metadata, the original and changed values are noted in the report.

By clicking on the File as Record button on the dialog, the audit report for the record itself can be captured as a record that can then be filed within the File Plan. The report is saved in HTML format. Clicking on the Export button at the top of the dialog enables the audit report to be downloaded in HTML format.

The audit log discussed here provides very granular information about changes to a specific record. Alfresco also provides a tool in the Records Management Console, also called Audit, which can create a detailed report showing all activities and actions that have occurred throughout the records system.

Links

Below the Actions component is a panel containing the Share component. This is a standard component that is also used in the Share Document Library. The component lists three URL links in fields that can be easily copied from and pasted to; the URLs allow record content and metadata to be easily shared with others:

- The first link is the Download File URL. Referencing this link causes the content for the record to be downloaded as a file.
- The second link is the Document URL. It is similar to the first, but if the browser is capable of viewing the file format, the content is displayed in the browser; otherwise it is downloaded as a file.
- The third link is the This Page URL: the URL of the Record Details page itself.

Accessing any of these three URLs requires the user to authenticate before any content is served.

Events

Below the Flash preview panel on the Record Details page is an area that displays any events that are currently available to be manually triggered for this record. Remember that each step of a disposition schedule becomes actionable either after the expiration of a time deadline or by the manual triggering of an event. Events are triggered manually by a user clicking a button to indicate that the event has occurred.
The location of the event trigger buttons differs depending on the level at which the disposition in the record Category was applied. If the disposition was applied at the Folder level, the manual event trigger buttons are available on the Details page for the Folder; if it was applied at the record level, they are available on the Record Details page. The buttons that we see on this page come from a disposition applied at the record level.

The event buttons that apply to a particular state are grouped together based on whether or not the event has been marked as completed. After clicking on completion, the event is moved to the Completed group. If there are multiple possible events, a single one of them completing is enough to make the action available. Some actions, like cutoff, are executed by the system; other actions, like destruction, require a user to intervene, but will become available from the Share user interface.

References

Often it is useful to create references or relationships between records. A reference is a link that relates one record to another; clicking on the link retrieves and displays the related record. In the lower right of the Details page, there is a component for tracking references from this record to other records in the File Plan. It is especially useful for tracking, for instance, reference links to superseded or obsolete versions of the current record.

To attach references, click on the Manage button on the References component. Then, from the next screen, select New Reference. A screen containing a standard Alfresco form is then displayed. From this screen, it is possible to name the reference, pick another record to reference, and mark the type of reference. Available reference types include:

- SupersededBy / Supersedes
- ObsoletedBy / Obsoletes
- Supporting Documentation / Supported Documentation
- VersionedBy / Versions
- Rendition
- Cross-Reference

After creating the reference, you will see the new reference show up in the list.

How does it work?

We've now looked at the functionality of the Details page for records and the Series, Category, and Folder containers. In this "How does it work?" section, we'll investigate in greater detail how some of the internals of the Record Details page work.


Extending MooTools

Packt
25 Jul 2011
8 min read
MooTools 1.3 Cookbook
Over 100 highly effective recipes to turbo-charge the user interface of any web-enabled Internet application and web page

(For more resources on this topic, see here.)

The reader can benefit from the previous article on MooTools: Extending and Implementing Elements.

Making a Corvette out of a car - extending the base class

The "base class" is a function, a method, that allows extension. Just what does extending a class entail? Buckle up and let us take a drive.

Getting ready

Just to show the output of our work, create a DIV that will be our canvas.

<div id="mycanvas"></div>

How to do it...

Creating a class from the base class is as rudimentary as this: var Car = new Class();. That is not very instructive, so at the least, we add the constructor method to call at the time of instantiation: initialize.

<script type="text/javascript">
var Car = new Class({
    initialize: function(owner) {
        this.owner = owner;
    }
});

The constructor method takes the form of a property named initialize and must be a function; however, it does not have to be the first property declared in the class.

How it works...

So far in our recipe, we have created an instance of the base class and assigned it to the variable Car. We like things to be sporty, of course. Let's mutate the Car into a Corvette using Extends, passing it the name of the class to copy and extend into a new class.

var Corvette = new Class({
    Extends: Car,
    mfg: 'Chevrolet',
    model: 'Corvette',
    setColor: function(color) {
        this.color = color;
    }
});

Our Corvette is ready for purchase. An instantiation of the extended class will provide some new-owner happiness for 5 years or 50,000 miles, whichever comes first. Make the author's red, please.

var little_red = new Corvette('Jay Johnston');
little_red.setColor('red');
$('mycanvas').set('text', little_red.owner + "'s little " + little_red.color + ' ' + little_red.model + ' made by ' + little_red.mfg);
</script>

There's more...

This entire example will work identically if Corvette Implements rather than Extends Car.

Whether to Extend or to Implement

Extending a class changes the prototype, creating a copy in which the this.parent property allows the overridden parent class method to be referenced within the extended class's current method. To derive a mutation that takes class properties from multiple classes, we use Implements. Be sure to place the Extends or Implements property first, before all other methods and properties; if both extending and implementing, the Implements property follows the Extends property.

See also

See how Moo can muster so much class: http://mootools.net/docs/core/Class/Class#Class.

Giving a Corvette a supercharger - Implements versus Extends

Be ready to watch for several things in this recipe. Firstly, note how the extended Corvette's methods can use this.parent. Secondly, note how the implemented Corvette, the ZR1, can implement multiple classes.

Getting ready

Create a canvas to display some output.

<h1>Speed Indexes:</h1>
<div id="mycanvas"></div>

How to do it...

Here we create a class to represent a car. This car does not have an engine until it goes through further steps of manufacturing, so if we ask what its speed is, the output is zero. Next, we create a class to represent a sporty engine, which has an arbitrary speed index of 10.

// create two classes from the base Class
var Car = new Class({
    showSpeed: function() {
        return 0;
    }
});
var SportyEngine = new Class({
    speed: 10
});

Now we get to work.
First, we begin by manufacturing Corvettes, a process which is an extension of Car. They are faster than an empty chassis, of course, so we have them report their speed as an index rating one more than the parent class.

// Extend one, Implement the other
var Corvette = new Class({
    Extends: Car,
    showSpeed: function() {
        // this.parent calls the overridden class
        return this.parent() + 1;
    }
});

Secondly, we implement both Car and SportyEngine simultaneously as ZR1. We cannot use this.parent, so we return the speed property if asked. Of course, the ZR1 would not have a speed if it were only a mutation of Car, but since it is also a mutation of SportyEngine, it has the speed index of that class.

var ZR1 = new Class({
    // multiple classes may be implemented
    Implements: [Car, SportyEngine], // yep
    showSpeed: function() {
        // this.parent is not available
        //return this.parent()+1; // nope
        return this.speed;
    }
});

How it works...

When an instantiation of Corvette is created and its showSpeed() method called, it reports the speed of the parent class, Car, adding 1 to it. This is thanks to the magic Extends provides via this.parent().

var corvette = new Corvette();
var zr1 = new ZR1();
$('mycanvas').set('html',
    '<table>'+
    '<tr><th>Corvette:</th>'+
    '<td>'+corvette.showSpeed()+'</td></tr>'+
    '<tr><th>ZR1:</th>'+
    '<td>'+zr1.showSpeed()+'</td></tr>'+
    '</table>');

And so, the output of this would be:

Corvette: 1
ZR1: 10

An instantiation of ZR1 has the properties of all classes passed to Implements. When showSpeed() is called, the value conjured by this.speed comes from the property defined within SportyEngine.

Upgrading some Corvettes - Extends versus Implements

Now that we have reviewed some of the reasons to extend versus implement, we are ready to examine more closely how the inheritance provided by Extends can be useful in our scripting.

Getting ready

Create a display area for the output of our manufacturing plant.

<h1>Speeds Before</h1>
<div id="before"></div>
<h1>Speeds After</h1>
<div id="after"></div>

How to do it...

Create two classes: one that represents a car chassis with no engine, and one that represents a fast engine that can be ordered as an upgrade. This section is nearly identical to the last recipe; if necessary, review it once more before continuing, as the gist will be to alter our instantiations to show how the inheritance patterns affect them.

// create two classes from the base Class
var Car = new Class({
    showSpeed: function() {
        return 0;
    }
});
var SportyEngine = new Class({
    speed: 10
});
// Extend one, Implement the other
var Corvette = new Class({
    Extends: Car,
    speed: 1,
    showSpeed: function() {
        // this.parent calls the overridden class
        return this.parent() + 1;
    }
});
var ZR1 = new Class({
    // multiple classes may be implemented
    Implements: [Car, SportyEngine], // yep
    showSpeed: function() {
        // this.parent is not available
        //return this.parent()+1; // nope
        return this.speed;
    }
});

Note that the output before mutation is identical to the end of the previous recipe.

var corvette = new Corvette();
var zr1 = new ZR1();
$('before').set('html',
    '<table>'+
    '<tr><th>Corvette:</th>'+
    '<td>'+corvette.showSpeed()+'</td></tr>'+
    '<tr><th>ZR1</th>'+
    '<td>'+zr1.showSpeed()+'</td></tr>'+
    '</table>');

Here is what happens when the manufacturing plant decides to start putting engines in the base car chassis, giving them a speed where they did not have one previously. Mutate the base class by having it return an index of five rather than zero.
// the mfg changes base Car speed to be +5
var fasterCar = Car.implement({
    showSpeed: function() {
        return 5;
    }
});
// but SportyEngine doesn't use the parent method
$('after').set('html',
    '<table>'+
    '<tr><th>New Corvette:</th>'+
    '<td>'+corvette.showSpeed()+'</td></tr>'+
    '<tr><th>New ZR1</th>'+
    '<td>'+zr1.showSpeed()+'</td></tr>'+
    '</table>');

How it works...

The zr1 instantiation did not mutate; the corvette instantiation did. Since zr1 used Implements, there is no inheritance that lets it call the parent method.

In our example, this makes perfect sense. The base chassis now comes with an engine rated with a speed of five. The ZR1 model, during manufacturing/instantiation, is given a completely different engine (a completely different property), so any change or recall of the original chassis would not be applicable to that model. For the naysayer, the next recipe shows how to effect a manufacturer recall that will alter all Corvettes, even the ZR1s. A short sketch of this recall behavior follows the "See also" link below.

There's more...

There is an interesting syntax used to mutate the new version of Car: Class.implement(). That same syntax is not available to extend elements.

See also

Here is a link to the MooTools documentation for Class.implement(): http://mootools.net/docs/core/Class/Class#Class:implement.
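To make the recall behavior concrete, here is a minimal sketch (invented for this article; the variable names are its own) showing why an Extends-based instance picks up a change made to its parent class with Class.implement(): under the MooTools semantics described in this recipe, this.parent() resolves against the parent's prototype at call time, not at definition time.

var Car = new Class({
    showSpeed: function() { return 0; }
});
var Corvette = new Class({
    Extends: Car,
    showSpeed: function() { return this.parent() + 1; }
});

var vette = new Corvette();
console.log(vette.showSpeed()); // 1 (0 from Car, plus 1)

// the recall: mutate Car itself after vette already exists
Car.implement({
    showSpeed: function() { return 5; }
});

console.log(vette.showSpeed()); // 6: this.parent() now finds the new Car method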

Testing Students' Knowledge using Moodle Modules

Packt
03 Jun 2010
11 min read
(For more resources on Moodle, see here.)

There are a number of tools in Moodle to find out the students' depth of understanding, some formal and some informal. There are also some tools that allow students to assist themselves in their own knowledge consolidation. We will now investigate these tools in more detail.

Implementing a glossary

The glossary is a well-known and commonly used module in Moodle, but it may not always be used as the effective teaching and learning tool that it can be. Given the jargon-rich nature of design and technology, it is also an important glue for the whole subject area.

Checking the settings

The key options here relate to the functioning of the glossary and how you use it as a teaching and learning tool. The first thing that needs to be done is to check the overall settings for the module in the administration section of the site. The settings are divided into three key areas:

- Default settings for the glossary module
- Settings for the entries in the glossary
- Format of the display styles

These settings can be found on the administration panel in the Site Administration | Modules | Activities | Glossary section.

Default settings

The first section determines which functions of the glossary are generally available to the users of the module on the site. In the previous screenshot, the main options relate to the linking functions and comments on the entries themselves. For example, do you want to allow students to comment on the glossary entries? For a subject such as DT, you might wish to allow students to add their own understanding of terms. In some cases, there may be regional terms for some things which differ from the "formal" definition of a term, such as the terms used for certain woodworking tools. As woodworking is an ancient craft, there are many different local terms to describe some of the equipment used. The glossary would then reinforce the social nature of design as part of a teaching method.

Preventing duplicate entries

As you can see in the previous screenshot, the default setting here is to not allow duplicate entries in the glossary, although duplicates can be enabled, perhaps to show the nuances in a term or to allow the term to be defined in another language if your school has a partner school in another country. In some instances, you might have a definition that is the accepted definition, but also one that students might come across, such as a building term. Likewise, you might have a definition relating to a usage as opposed to how a manufacturer might describe an item. Allowing duplication gives students a sense of how terms change depending on how they are used and by whom.

Allowing comments

Allowing comments on the entries is useful for students to build up an internal dialog, whereby they may be able to add examples of best practice to the definitions in the glossary relating to their on-the-job work experience. This is useful because students who have previously worked in a company and gained some experience of the job can leave a historical record to help the students who have just started the course or work.

Automatically linking entries

The linking of the glossary terms is useful if you use a consistent approach throughout your course design; in particular, if you use the Compose a web page function under Add a resource, as discussed earlier, rather than uploading proprietary word-processed files, Moodle will be able to link the terms through the database.
However, please bear in mind that the linking function will add some load to your server and may not be appropriate in all cases, such as when using a server on a shared system. Linking definitions throughout the site allows students to gain a better understanding of all the elements as they work through them, and when they forget some key terms. However, you need to remember that this links across an entire site, and some definitions may clash across curriculum subjects; hence the need for duplicate entries. The ability to create a glossary for all the users on a site is only available to site administrators.

Entry-level default settings

The following screenshot shows the options that are available for the entries that are added to the glossaries. If enabled, these options will automatically be applied when entries are made. Users still have the choice to disable options such as the automatic linking, shown as follows:

Again, you can link terms in the course to the glossary definitions and, if necessary, make the terms case sensitive. The case-sensitive option, as well as the matching of whole words, allows some fine-tuning of the glossary. For example, you may make an entry for a law, which is HAM. If you enable the case-sensitive option on this entry, then a link will be created in the database when the specific term HAM is entered on a page in the course, and not for every instance of hammer. This may be more useful with younger students, when it is important for them to learn the key terms, but perhaps not so with older students.

You can now save the settings you have chosen. If the changes have been applied for you by your site administrator, you can move directly to your course to begin using a glossary.

Creating a glossary

Once you or the site administrator have set up the module in the way that is most appropriate for your institution, teachers can begin to apply it to their courses. We are assuming here that you have other subject areas on your Moodle site; therefore, we will focus on the course-level entries, but the principles are the same.

Enabling the glossary

For this example, we will add a glossary of terms to our construction-based course. Younger students may be unfamiliar with this subject in many cases and will, therefore, have a greater need for some way of understanding the wealth of terms. The same is true for Food Technology or Resistant Materials, but it is more likely that students will at least have encountered food-based products or materials in their lives. It is less likely that they have been involved in the construction of their environment or have working knowledge of its key terms.

As with all modules, the first action is to enable editing on the course to activate the activities drop-down menu. This requires clicking the button that follows:

This will then show the activities menu, from which you can select Glossary from one of the many Add an activity drop-down menus, as shown in the following screenshot, to create a new glossary activity module.

Editing the glossary

Once enabled, added, or created, you can then name the glossary and determine some of the functionality you want to be available in it. Like all the other modules, these settings relate to time and display elements. The following screenshot shows some of the key settings, such as the type of glossary and the display format.
Most of these settings were determined at the site level, such as allowing duplicate entries or comments, but they can be changed by staff as required. The key point here is that we make one glossary, a main glossary, for the course. We make all the subsequent glossaries secondary, which means that we can export terms into the main one; it is the overall glossary for the course. We might have secondary glossaries for the human aspects of construction or health and safety, as opposed to the material elements, for example.

As this is a very graphical subject, we have enabled the display to be like an encyclopedia. This will allow staff and students to add images and video files to better explain the terms they are defining.

Rating entries

If you are going to use the glossary in a more formal way, it is useful to allow the students to rate the entries so that they can peer-assess each other's terms. If you set each student a number of terms to define as a homework exercise, you can have the students research and populate the glossary, and have the other students award marks. This makes the terms far more dynamic and real for the students. You might also invite your contacts from local companies to rate the students' definitions and give them feedback to help develop their understanding at a deeper level. This is enabled through the grading option, as shown in the following screenshot:

The glossary can now be saved.

Adding categories

When we are adding entries, the first real requirement is to create some categories in which to organize the terms. A category in this instance is a group of terms, such as tools or techniques. If we had created one Moodle course to cover all the DT subjects, we might have a main glossary for DT and secondary glossaries for Food and Construction. This makes the glossary more organized as well as making it easier to search for items.

In this example, we are creating some glossary items relating to the term 'carpentry', so we need to create an overall category for this area. This is achieved by first clicking on the Browse by category tab and then on Edit categories, as shown in the following screenshot:

This will open the dialog window to create a new category for this glossary, as shown in the following screenshot:

If you select to link the category, this whole sub-section will be revealed by a hyperlink to this term, which could be useful for newer students.

Adding entries

If the course you are managing incorporates all of the DT subjects, then you may wish to create a main glossary for DT and secondary glossaries for Food or Resistant Materials and so on. In this example, we have a course for Construction and the Built Environment, which has the main glossary, and we are creating categories to group the terms in the glossary for areas such as carpentry or electrical work.

With the categories set, it is now possible to add and organize the definitions for your course. This can be managed entirely by the staff or can be an exercise that allows student participation, such as the homework exercise mentioned earlier. In this example, we are building up a definition for a particular woodworking joint. Once we have set the name and basic details, we can then categorize the entry and add some keywords for searches, as shown in the following screenshot:

The entry can now be viewed by students as well as rated and commented upon.
In the following instance, the student has not only rated the entry, but also added a comment with a link to a website they have found, which further illustrates the particular woodworking joint. Students could also embed a video stream from a video site, which would help to explain the process even more clearly. This level of collaborative learning is immensely powerful with this type of kinesthetic information.

Students can now add their own entries as part of a homework routine or teaching strategy. As shown in the following screenshot, items added to the glossary are linked to other pages through the database. The terms are highlighted by Moodle, and clicking on them will take users to the glossary page which defines them. In this example, the word 'wood' is highlighted in green. When you click on this link, it will open the corresponding entry for wood in the glossary, as shown in the following screenshot.

Mapping their minds

Many aspects of design require students to sketch out their ideas in a graphical form in order to get to grips with the complexity and the various components. These sketches could be scanned and uploaded as formal assignments, but they could also be incorporated into the Moodle site through the use of an add-on called Mindmap. This is a third-party module that allows students to map out their ideas using a basic interface and permits them to link and label the items on a screen. Students can save maps in their own area, and the maps can be viewed by staff for guidance and support. The Mindmap module can be found at: http://moodle.org/mod/data/view.php?d=13&rid=1628&filter=1

Once the module has been installed, it is added to a course in the same way as any other module, by turning the editing on. After choosing the drop-down activities menu, you can then add the Mindmap module, as shown in the following screenshot:

This will activate the dialog to set up the name and settings for the Mindmap module for the course.


REST – What You Didn't Know

Packt
24 Mar 2015
15 min read
Nowadays, topics such as cloud computing and mobile device service feeds, and other data sources powered by cutting-edge, scalable, stateless, and modern technologies such as RESTful web services, leave the impression that REST was invented recently. Well, to be honest, it was definitely not! In fact, REST was defined at the end of the 20th century.

This article by Valentin Bojinov, author of the book RESTful Web API Design with Node.js, will walk you through REST's history and will teach you how REST couples with the HTTP protocol. You will look at the five key principles that need to be considered while turning an HTTP application into a RESTful-service-enabled application. You will also look at the differences between RESTful and SOAP-based services. Finally, you will learn how to utilize already existing infrastructure for your benefit. In this article, we will cover the following topics:

- A brief history of REST
- REST with HTTP
- RESTful versus SOAP-based services
- Taking advantage of existing infrastructure

(For more resources related to this topic, see here.)

A brief history of REST

Let's look at a time when the madness around REST made everybody talk restlessly about it! This happened back in 1999, when a request for comments was submitted to the Internet Engineering Task Force (IETF: http://www.ietf.org/) via RFC 2616: "Hypertext Transfer Protocol - HTTP/1.1". One of its authors, Roy Fielding, later defined a set of principles built around the HTTP and URI standards. This gave birth to REST as we know it today.

Let's look at the key principles around the HTTP and URI standards; sticking to them will make your HTTP application a RESTful-service-enabled application:

1. Everything is a resource
2. Each resource is identifiable by a unique identifier (URI)
3. Use the standard HTTP methods
4. Resources can have multiple representations
5. Communicate statelessly

Principle 1 – everything is a resource

To understand this principle, one must conceive the idea of representing data by a specific format and not by a physical file. Each piece of data available on the Internet has a format that can be described by a content type. For example, JPEG images; MPEG videos; HTML, XML, and text documents; and binary data are all resources with the following content types: image/jpeg, video/mpeg, text/html, text/xml, and application/octet-stream.

Principle 2 – each resource is identifiable by a unique identifier

Since the Internet contains so many different resources, they should all be accessible via URIs and should be identified uniquely. Furthermore, the URIs can be in a human-readable format (frankly, I do believe they should be), despite the fact that their consumers are more likely to be software programs rather than ordinary humans. The URI keeps the data self-descriptive and eases further development on it. In addition, using human-readable URIs helps you to reduce the risk of logical errors in your programs to a minimum. Here are a few sample examples of such URIs:

http://www.mydatastore.com/images/vacation/2014/summer
http://www.mydatastore.com/videos/vacation/2014/winter
http://www.mydatastore.com/data/documents/balance?format=xml
http://www.mydatastore.com/data/archives/2014

These human-readable URIs expose different types of resources in a straightforward manner.
In the example, it is quite clear that the resource types are as follows:

- Images
- Videos
- XML documents
- Some kinds of binary archive documents

Principle 3 – use the standard HTTP methods

The native HTTP protocol (RFC 2616) defines eight actions, also known as verbs:

- GET
- POST
- PUT
- DELETE
- HEAD
- OPTIONS
- TRACE
- CONNECT

The first four of them feel natural in the context of resources, especially when defining actions for resource data manipulation. Let's make a parallel with relational SQL databases, where the native language for data manipulation is CRUD (short for Create, Read, Update, and Delete), originating from the different types of SQL statements: INSERT, SELECT, UPDATE, and DELETE respectively. In the same manner, if you apply the REST principles correctly, the HTTP verbs should be used as shown here:

HTTP verb | Action | Response status code
GET | Request an existing resource | "200 OK" if the resource exists, "404 Not Found" if it does not exist, and "500 Internal Server Error" for other errors
PUT | Create or update a resource | "201 Created" if a new resource is created, "200 OK" if updated, and "500 Internal Server Error" for other errors
POST | Update an existing resource | "200 OK" if the resource has been updated successfully, "404 Not Found" if the resource to be updated does not exist, and "500 Internal Server Error" for other errors
DELETE | Delete a resource | "200 OK" if the resource has been deleted successfully, "404 Not Found" if the resource to be deleted does not exist, and "500 Internal Server Error" for other errors

There is an exception in the usage of the verbs, however; which verb creates a resource depends on who chooses the URI. When a resource has to be created under a specific URI known to the client, PUT is the appropriate request:

PUT /data/documents/balance/22082014 HTTP/1.1
Content-Type: text/xml
Host: www.mydatastore.com

<?xml version="1.0" encoding="utf-8"?>
<balance date="22082014">
  <Item>Sample item</Item>
  <price currency="EUR">100</price>
</balance>

HTTP/1.1 201 Created
Content-Type: text/xml
Location: /data/documents/balance/22082014

However, in your application you may want to leave it up to the server REST application to decide where to place the newly created resource, and thus create it under an appropriate but still unknown or non-existing location. For instance, in our example, we might want the server to create the date part of the URI based on the current date. In such cases, it is perfectly fine to use the POST verb on the main resource URI and let the server respond with the location of the newly created resource:

POST /data/documents/balance HTTP/1.1
Content-Type: text/xml
Host: www.mydatastore.com

<?xml version="1.0" encoding="utf-8"?>
<balance date="22082014">
  <Item>Sample item</Item>
  <price currency="EUR">100</price>
</balance>

HTTP/1.1 201 Created
Content-Type: text/xml
Location: /data/documents/balance
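Before moving on to representations, here is a minimal sketch of how these verb-to-status-code mappings might look in Node.js, using only the core http module. Everything here (the port, the in-memory balances object, the way the URL is used as the resource key) is invented for this illustration and is not the book's implementation:

var http = require('http');

// in-memory store standing in for real persistence; invented for this sketch
var balances = {};

http.createServer(function (req, res) {
  var id = req.url; // treat the full path as the resource identifier

  if (req.method === 'GET') {
    if (balances[id]) {
      res.writeHead(200, { 'Content-Type': 'text/xml' });
      res.end(balances[id]);
    } else {
      res.writeHead(404); // resource does not exist
      res.end();
    }
  } else if (req.method === 'PUT') {
    var body = '';
    req.on('data', function (chunk) { body += chunk; });
    req.on('end', function () {
      var created = !(id in balances);
      balances[id] = body;
      // 201 when a new resource is created, 200 on update
      res.writeHead(created ? 201 : 200, { 'Location': id });
      res.end();
    });
  } else if (req.method === 'DELETE') {
    if (balances[id]) {
      delete balances[id];
      res.writeHead(200);
    } else {
      res.writeHead(404);
    }
    res.end();
  } else {
    res.writeHead(405); // verbs this sketch does not implement
    res.end();
  }
}).listen(8080);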
In the preceding example, we posted an XML representation of a balance, but if the server supported the JSON format, the following request would have been valid as well:

POST /data/documents/balance HTTP/1.1
Content-Type: application/json
Host: www.mydatastore.com

{
  "balance": {
    "date": "22082014",
    "Item": "Sample item",
    "price": {
      "-currency": "EUR",
      "#text": "100"
    }
  }
}

HTTP/1.1 201 Created
Content-Type: application/json
Location: /data/documents/balance

Principle 5 – communicate statelessly

Resource manipulation operations through HTTP requests should always be considered atomic. All modifications of a resource should be carried out within an HTTP request in isolation. After the request execution, the resource is left in a final state, which implicitly means that partial resource updates are not supported. You should always send the complete state of the resource. Back to the balance example, updating the price field of a given balance would mean posting a complete JSON document that contains all of the balance data, including the updated price field. Posting only the updated price is not stateless, as it implies that the application is aware that the resource has a price field, that is, it knows its state.

Another reason why your RESTful application needs to be stateless is that, once deployed in a production environment, incoming requests are likely to be served by a load balancer, ensuring scalability and high availability. Once exposed via a load balancer, the idea of keeping your application state at the server side gets compromised. This doesn't mean that you are not allowed to keep the state of your application; it just means that you should keep it in a RESTful way, for example, by keeping a part of the state within the URI. The statelessness of your RESTful API isolates the caller against changes at the server side: the caller is not expected to communicate with the same server in consecutive requests. This allows easy application of changes within the server infrastructure, such as adding or removing nodes. Remember that it is your responsibility to keep your RESTful APIs stateless, as the consumers of the API will expect them to be.

Now that you know that REST is around 15 years old, a sensible question would be, "why has it become so popular just recently?" My answer is that we as humans usually reject simple, straightforward approaches, and most of the time, we prefer spending more time turning complex solutions into even more complex and sophisticated ones.

Take classical SOAP web services, for example. Their WS-* specifications are so many, and sometimes so loosely defined, that a further specification, WS-Basic Profile, had to be introduced to define extra interoperability rules and ensure that solutions from different vendors could work together. On top of that, SOAP-based web services transported over HTTP provide various means of transporting binary data, described in yet other sets of specifications, such as SOAP with Attachment References (SwaRef) and Message Transmission Optimization Mechanism (MTOM), mainly because the initial idea of a web service was to execute business logic and return its response remotely, not to transport large amounts of data. Well, I personally think that when it comes to data transfer, things should not be that complex. This is where REST comes into place, by introducing the concept of resources and standard means to manipulate them.
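Before moving on, here is a small sketch that grounds principle 5. It keeps navigation state in the URI rather than in a server-side session, so any node behind a load balancer can serve the next request. The archive URI and the page scheme are hypothetical, and Express is again assumed:

const express = require('express');
const app = express();

const items = Array.from({ length: 95 }, (_, i) => `item-${i + 1}`);
const PAGE_SIZE = 10;

// The page being viewed is part of the URI, so no server-side session
// cursor is needed and any node behind a load balancer can answer.
app.get('/data/archives/2014/pages/:page', (req, res) => {
  const page = Number(req.params.page);
  const slice = items.slice((page - 1) * PAGE_SIZE, page * PAGE_SIZE);
  if (!Number.isInteger(page) || page < 1 || slice.length === 0) {
    return res.status(404).end();
  }
  res.json({
    items: slice,
    // The navigation state travels with the representation, not the server.
    next: page * PAGE_SIZE < items.length
      ? `/data/archives/2014/pages/${page + 1}`
      : null
  });
});

app.listen(8080);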
The REST goals

Now that we've covered the main REST principles, let's dive deeper into what can be achieved when they are followed:

Separation of the representation and the resource
Visibility
Reliability
Scalability
Performance

Separation of the representation and the resource

A resource is just a set of information and, as defined by principle 4, it can have multiple representations; the state of the resource, however, is atomic. It is up to the caller to specify the desired media type in the HTTP request headers, and then up to the server application to handle the representation accordingly and return the appropriate HTTP status code:

HTTP 200 OK in the case of success
HTTP 400 Bad Request if an unsupported content type is requested, or for any other invalid request
HTTP 500 Internal Server Error when something unexpected happens during the request processing

For instance, let's assume that, at the server side, we have balance resources stored in an XML file. We can have an API that allows a consumer to request the resource in various formats, such as application/json, application/zip, application/octet-stream, and so on. It would be up to the API itself to load the requested resource, transform it into the requested type (for example, JSON or XML), and either compress it with zip or flush it directly to the HTTP response output.

It is the Accept HTTP header that specifies the expected representation of the response data. So, if we want to request the balance data inserted in the previous section in XML format, the following request should be executed:

GET /data/balance/22082014 HTTP/1.1
Host: my-computer-hostname
Accept: text/xml

HTTP/1.1 200 OK
Content-Type: text/xml
Content-Length: 140

<?xml version="1.0" encoding="utf-8"?>
<balance date="22082014">
  <Item>Sample item</Item>
  <price currency="EUR">100</price>
</balance>

To request the same balance in JSON format, the Accept header needs to be set to application/json:

GET /data/balance/22082014 HTTP/1.1
Host: my-computer-hostname
Accept: application/json

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 120

{
  "balance": {
    "date": "22082014",
    "Item": "Sample item",
    "price": {
      "-currency": "EUR",
      "#text": "100"
    }
  }
}

Visibility

REST is designed to be visible and simple. Visibility of the service means that every aspect of it should be self-descriptive and should follow the natural HTTP language, according to principles 3, 4, and 5. Visibility in the context of the outer world means that monitoring applications need to be interested only in the HTTP communication between the REST service and the caller. Since the requests and responses are stateless and atomic, nothing more is needed to follow the behavior of the application and to understand whether anything has gone wrong. Remember that caching reduces the visibility of your RESTful applications and should, in general, be avoided.
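A sketch of such content negotiation in Node.js follows. It assumes Express, whose res.format helper dispatches on the request's Accept header; the in-memory balance object is hypothetical:

const express = require('express');
const app = express();

// Hypothetical in-memory resource; a real API would load it from storage.
const balances = {
  '22082014': {
    date: '22082014',
    Item: 'Sample item',
    price: { currency: 'EUR', value: '100' }
  }
};

app.get('/data/balance/:date', (req, res) => {
  const b = balances[req.params.date];
  if (!b) return res.status(404).end();
  // res.format picks the branch matching the request's Accept header.
  res.format({
    'application/json': () => res.json({ balance: b }),
    'text/xml': () => res.type('text/xml').send(
      '<?xml version="1.0" encoding="utf-8"?>' +
      `<balance date="${b.date}"><Item>${b.Item}</Item>` +
      `<price currency="${b.price.currency}">${b.price.value}</price></balance>`
    ),
    // The goals list above suggests 400 for an unsupported representation;
    // 406 Not Acceptable is the other common choice.
    default: () => res.status(400).end()
  });
});

app.listen(8080);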
Reliability

Before talking about reliability, we need to define which HTTP methods are safe and which are idempotent in the REST context:

An HTTP method is considered safe if, when requested, it does not modify or cause any side effects on the state of the resource
An HTTP method is considered idempotent if its response stays the same no matter how many times it is requested

The following table shows which HTTP methods are safe and which are idempotent:

HTTP method    Safe    Idempotent
GET            Yes     Yes
POST           No      No
PUT            No      Yes
DELETE         No      Yes

Scalability and performance

So far, I have often stressed the importance of stateless implementation and stateless behavior for a RESTful web application. The World Wide Web is an enormous universe, containing a huge amount of data and many times more users eager to get at that data. Its evolution has brought about the requirement that applications should scale easily as their load increases. Applications that have a state are hard to scale, especially when zero or close-to-zero downtime is needed. That's why being stateless is crucial for any application that needs to scale: in the best-case scenario, scaling your application would require nothing more than putting another machine behind the load balancer. There would be no need for the different nodes to sync with each other, as they do not care about state at all.

Scalability is all about serving all of your clients in an acceptable amount of time. Its main idea is to keep your application running and to prevent Denial of Service (DoS) caused by a huge amount of incoming requests. Scalability should not be confused with the performance of an application. Performance is measured by the time needed for a single request to be processed, not by the total number of requests that the application can handle. The asynchronous, non-blocking architecture and event-driven design of Node.js make it a logical choice for implementing an application that both scales and performs well.
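Because a stateless service ties no per-client data to a particular process, scaling out is mechanical. Here is a minimal sketch using Node's built-in cluster module; the JSON payload is illustrative only:

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) { // `cluster.isMaster` on Node versions before 16
  // One worker per CPU; no state needs to be shared between them.
  os.cpus().forEach(() => cluster.fork());
} else {
  http.createServer((req, res) => {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ servedBy: process.pid })); // illustrative payload
  }).listen(8080);
}

The same reasoning applies one level up: adding whole machines behind a load balancer works only because no node holds state that another node would miss.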
Working with WADL

If you are familiar with SOAP web services, you may have heard of the Web Services Description Language (WSDL). It is an XML description of the interface of the service, and it is mandatory for a SOAP web service to be described by such a WSDL definition. Similarly, RESTful services can be described by a language named WADL, which stands for Web Application Description Language. Unlike WSDL for SOAP web services, a WADL description of a RESTful service is optional; that is, consuming the service has nothing to do with its description. Here is a sample part of a WADL file that describes the GET operation of our balance service:

<application>
  <grammars>
    <include href="balance.xsd"/>
    <include href="error.xsd"/>
  </grammars>
  <resources base="http://localhost:8080/data/balance/">
    <resource path="{date}">
      <method name="GET">
        <request>
          <param name="date" type="xsd:string" style="template"/>
        </request>
        <response status="200">
          <representation mediaType="application/xml" element="service:balance"/>
          <representation mediaType="application/json"/>
        </response>
        <response status="404">
          <representation mediaType="application/xml" element="service:balance"/>
        </response>
      </method>
    </resource>
  </resources>
</application>

This extract of a WADL file shows how an application exposing resources is described. Each resource must be part of an application. The resources element provides the base URI where the resources are located, and each resource describes its supported HTTP methods in method elements. Additionally, an optional doc element can be used at the resource and application levels to provide additional documentation about the service and its operations.

Though WADL is optional, it significantly reduces the effort of discovering RESTful services.

Taking advantage of the existing infrastructure

The best part of developing and distributing RESTful applications is that the infrastructure needed is already out there, waiting restlessly for you. As RESTful applications use the existing web space heavily, you need to do nothing more than follow the REST principles when developing. In addition, there are plenty of libraries available for any platform, and I do mean any given platform. This eases the development of RESTful applications, so you just need to choose your preferred platform and start developing.

Summary

In this article, you learned about the history of REST, and we made a brief comparison between RESTful services and classical SOAP-based web services. We looked at the five key principles that turn a web application into a REST-enabled application, and finally took a look at how RESTful services are described and how we can simplify the discovery of the services we develop. Now that you know the REST basics, we are ready to dive into the Node.js way of implementing RESTful services.