
How-To Tutorials - Web Development

Building a CRUD Application with the ZK Framework

Packt
21 Oct 2009
5 min read
An Online Media Library

There are several traditional applications that could be used to introduce a framework. One condition for our selection is that the application should be a CRUD (Create, Read, Update, Delete) application. An 'Online Media Library', which exercises all four operations, fits well. We start with a description of requirements, which is how most IT projects begin. The application will have the following features:

- Add new media
- Update existing media
- Delete media
- Search for media (and show the results)
- User roles (an administrator for maintaining the media, and user accounts for browsing it)

In the first implementation round the application will have only basic functionality, which we will extend step by step. A media item should have the following attributes:

- A title
- A type (Song or Movie)
- An ID, which can be defined by the user
- A description
- An image

The most important thing at the start of a project is to name it. We will call our project ZK-Medialib.

Setting up Eclipse to Develop with ZK

We use version 3.3 of Eclipse, also known as the Europa release. You can download the IDE from http://www.eclipse.org/downloads/. We recommend the "Eclipse IDE for Java EE Developers" package. First we have to create a file association for .zul files. Open the Preferences dialog with Window | Preferences, then follow these steps:

1. Type Content Types into the search field.
2. Select Content Types in the tree.
3. Select XML in the tree.
4. Click Add and type *.zul.
5. See the result.

The steps are illustrated in the picture below. With these steps we have syntax highlighting for our files. However, to get content assist we have to take care when creating new files. The easiest way is to set up Eclipse to work with zul.xsd. Open the Preferences dialog with Window | Preferences again, type XML Catalog into the search field, and select XML Catalog in the tree.
Press Add and fill out the dialog (see the second dialog below), then check the result. Now we can easily create new ZUL files with the following steps:

1. Select File | New | Other, and choose XML.
2. Type in the name of the file (for example hello.zul) and press Next.
3. Choose Create XML file from an XML schema file and press Next.
4. Select Select XML Catalog entry, then select zul.xsd.
5. Select the root element of the page (e.g. window).
6. Select Finish.

Now you have a new ZUL file with content assist. Go into the generated attribute element and press Alt+Space.

Setting up a New Project

The first thing we need for the project is the framework itself. You can download the ZK framework from http://www.zkoss.org. At the time of writing, the latest version of ZK is 2.3.0. After downloading and unzipping the ZK framework, we should define a project structure. A good choice is the directory layout of the Maven project (http://maven.apache.org/), shown in the figure below. The lib directory contains the libraries of the ZK framework. At first it is wise to copy all JAR files from the ZK framework distribution. If you unzip the 2.3.0 distribution, the structure should look like the figure below, which shows the layout of the ZK distribution; here you can find the files you need for your own application. For our example, copy all JAR files from lib, ext, and zkforge to the WEB-INF/lib directory of your application. It is important that the libraries from ext and zkforge are copied directly to WEB-INF/lib. Additionally, copy the directories tld and xsd to the WEB-INF directory of your application. After the copy process, we have to create the deployment descriptor (web.xml) for the web application. Here you can reuse the web.xml from the demo application provided with the ZK framework. For our first steps we need no zk.xml (that configuration file is optional in a ZK application).
The application itself must run inside a JEE (Java Enterprise Edition) web container. For our example we use the Tomcat container from the Apache project (http://tomcat.apache.org). However, you can run the application in any JEE container that follows the Java Servlet Specification 2.4 (or higher) and runs under a Java Virtual Machine 1.4 (or higher). For Tomcat, we create a zk-media.xml context file, which is placed in conf/Catalina/localhost of the Tomcat directory:

<Context path="/zk-media"
         docBase="D:/Development/workspaces/workspace-zk-medialib/ZK-Medialib/src/main/webapp"
         debug="0" privileged="true" reloadable="true" crossContext="false">
  <Logger className="org.apache.catalina.logger.FileLogger"
          directory="D:/Development/workspaces/workspace-zk-medialib/logs/ZK-Medialib"
          prefix="zkmedia-" suffix=".txt" timestamp="true"/>
</Context>

With the help of this context file we can see our development changes immediately, since we set the root of the web application to the development directory.
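The CRUD requirements listed at the start of this article can be sketched in a few lines of code. The sketch below is in Python purely for illustration; the ZK-Medialib application itself is written in Java, and the class and field names here are assumptions of mine, not part of the ZK framework:

```python
from dataclasses import dataclass

# Hypothetical model of a media item with the five attributes from the
# requirements: title, type (Song or Movie), a user-defined ID, a
# description, and an image.
@dataclass
class MediaItem:
    media_id: str          # an ID which could be defined by the user
    title: str
    media_type: str        # "Song" or "Movie"
    description: str = ""
    image_path: str = ""   # path to an image file

# A tiny in-memory store supporting the four CRUD operations plus search.
class MediaLibrary:
    def __init__(self):
        self._items = {}

    def create(self, item):
        self._items[item.media_id] = item

    def read(self, media_id):
        return self._items.get(media_id)

    def update(self, item):
        self._items[item.media_id] = item

    def delete(self, media_id):
        self._items.pop(media_id, None)

    def search(self, text):
        # Case-insensitive title search, returning matching items.
        return [i for i in self._items.values()
                if text.lower() in i.title.lower()]

lib = MediaLibrary()
lib.create(MediaItem("m1", "Blade Runner", "Movie", "Sci-fi classic"))
```

In the real application, the dictionary would be replaced by persistent storage and the operations would be wired to ZUL pages, but the shape of the model stays the same.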

Facebook Application Development with Ruby on Rails

Packt
21 Oct 2009
4 min read
Technologies needed for this article

RFacebook

RFacebook (http://rfacebook.rubyforge.org/index.html) is a Ruby interface to the Facebook APIs. There are two parts to RFacebook: the gem and the plug-in. The plug-in is a stub that calls the RFacebook on Rails library packaged in the gem. The RFacebook on Rails library extends the default Rails controller, model, and view. RFacebook also provides a simple interface, through an RFacebook session, to call any Facebook API; it uses meta-programming idioms in Ruby to dispatch these calls.

Indeed

Indeed is a job search engine that allows users to search for jobs based on keywords and location. It includes job listings from major job boards, newspapers, and even company career pages.

Acquiring candidates through Facebook

We will be creating a Facebook application and displaying it through Facebook. This application, once added to the list of a user's applications, allows the user to search for jobs using information in his or her Facebook profile. Facebook applications, though displayed within the Facebook interface, are actually hosted and processed somewhere else. To display an application within Facebook, you need to host it on a publicly available website and then register it. We will go through these steps in creating the Job Board Facebook application.

Creating a Rails application

Next, create a Facebook application. To do this, you will first need to add a special application to your Facebook account: the Developer application. Go to http://www.facebook.com/developers and you will be asked to allow Developer to be installed in your Facebook account. Add the Developer application and agree to everything in the permissions list. You will not have any applications yet, so click on the create one link to create a new application. Next you will be asked for the name of the application you want to create.
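The meta-programming idiom RFacebook relies on can be illustrated outside Ruby as well. The sketch below is in Python purely for illustration; the `ApiSession` class and `fake_transport` function are hypothetical, not part of RFacebook. It shows the general idea of a session object that turns any method call it does not recognize into a named API request:

```python
class ApiSession:
    """Sketch of a dynamic API proxy in the spirit of RFacebook's
    meta-programming: unknown method calls are captured and turned
    into named API requests instead of raising AttributeError."""

    def __init__(self, transport):
        # transport is any callable taking (method_name, params).
        self.transport = transport

    def __getattr__(self, method_name):
        # Called only for attributes that don't exist on the object,
        # i.e. for API method names like users_getInfo.
        def api_call(**params):
            return self.transport(method_name, params)
        return api_call

# A fake transport that just records what would be sent over the wire.
def fake_transport(method_name, params):
    return {"method": method_name, "params": params}

session = ApiSession(fake_transport)
request = session.users_getInfo(uids=[42], fields=["current_location"])
```

The benefit of this design is that the client library never needs an exhaustive list of API methods; new server-side methods work without a library update.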
Enter a suitable name; in our case, enter 'Job Board'. You will be redirected to the Developer application main page, where you are shown your newly created application with its API key and secret. You will need the API key and secret in a while.

Installing and configuring RFacebook

RFacebook consists of two components: the gem and the plug-in. The gem contains the libraries needed to communicate with Facebook, while the plug-in enables your Rails application to integrate with Facebook. As mentioned earlier, the plug-in is basically a stub to the gem. The gem is installed like any other Ruby gem:

$ gem install rfacebook

To install the plug-in, go to your RAILS_ROOT folder and type:

$ ./script/plugin install svn://rubyforge.org/var/svn/rfacebook/trunk/rfacebook/plugins/rfacebook

Next, after the gem and plug-in are installed, run the setup rake script to create the configuration file:

$ rake facebook:setup

This creates a facebook.yml configuration file in the RAILS_ROOT/config folder. The facebook.yml file contains three environments that mirror the Rails startup environments. Open it up to configure the necessary environment with the API key and secret that you were given when you created the application in the section above:

development:
  key: YOUR_API_KEY_HERE
  secret: YOUR_API_SECRET_HERE
  canvas_path: /yourAppName/
  callback_path: /path/to/your/callback/
  tunnel:
    username: yourLoginName
    host: www.yourexternaldomain.com
    port: 1234
    local_port: 5678

For now, just fill in the API key and secret. In a later section, when we configure the rest of the Facebook application, we will revisit this configuration.

Extracting the Facebook user profile

Next we want to extract the user's Facebook profile and display it on the Facebook application. We do this to let the user confirm that this is the information he or she wants to send as search parameters.
To do this, create a controller named search_controller.rb in the RAILS_ROOT/app/controllers folder:

class SearchController < ApplicationController
  before_filter :require_facebook_install
  layout 'main'

  def index
    render :action => :view
  end

  def view
    if fbsession.is_valid?
      response = fbsession.users_getInfo(
        :uids   => [fbsession.session_user_id],
        :fields => ["current_location", "education_history", "work_history"])
      @work_history      = response.work_history
      @education_history = response.education_history
      @current_location  = response.current_location
    end
  end
end

Getting to Grips with the Facebook Platform

Packt
21 Oct 2009
6 min read
The Purpose of the Facebook Platform

As you develop your Facebook applications, you'll find that the Facebook Platform is essential; in fact, you won't really be able to do anything without it. So what does it do? Well, before answering that, let's look at a typical web-based application.

The Standard Web Application Model

If you've ever designed and built a web application before, then you'd have done it in a fairly standard way: your application and any associated data are placed on a web server, and your users access it from their web browsers via the Internet. The Facebook model is slightly different.

The Facebook Web Application Model

As far as your application users are concerned, they just access Facebook.com and your application, using a web browser and the Internet. But that's not where the application lives: it's actually on your own server. Once you've looked at the Facebook web application model and realized that your application actually resides on your own server, the purpose of the Facebook Platform becomes obvious: to provide an interface between your application and Facebook itself. There is an important matter to consider here. If the application resides on your server, and your application becomes very successful (according to Facebook there are currently 25 million active users), will your server be able to cope with that number of hits? Don't be too alarmed. This doesn't mean that your server will be accessed every time someone looks at his or her profile; Facebook employs a cache to stop that happening. Of course, at this stage, you're probably more concerned with just getting the application working, so let's continue looking at the Platform, but bear that point in mind.
Different components of the Facebook Platform

There are three elements to the Facebook Platform:

- The Facebook API (Application Programming Interface)
- FBML—Facebook Markup Language
- FQL—Facebook Query Language

We'll now spend some time with each of these elements, and you'll see how you can use them individually and in combination to make powerful yet simple applications. The great thing is that if you haven't got your web server set up yet, don't worry: Facebook supplies all of the tools you need to do a test run with each of the elements.

The Facebook API

If you've already done some programming, then you'll probably know what an API (Application Programming Interface) is. It's a set of software libraries that enable you to work with an application (in this case, Facebook) without knowing anything about its internal workings. All you have to do is obtain the libraries and start making use of them in your own application. Before you start downloading files, you can learn more about their functionality by making use of the Facebook API Test Console.

The Facebook API Test Console

If you want to make use of the Facebook API Test Console, you'll first need to access the Facebook developers' section; you'll find a link to this at the bottom of every Facebook page. Alternatively, you can use the URL http://developers.facebook.com to go there directly in your browser. When you get to this page, you'll find a link to the Tools page. Or, again, you can go directly to http://developers.facebook.com/tools.php, where you'll find the API Test Console. The API Test Console has a number of fields:

- User ID—A read-only field which (when you're logged on to Facebook) unsurprisingly displays your user ID number.
- Response Format—With this, you can select the type of response that you want: XML, JSON, or Facebook PHP Client.
- Callback—If you are using XML or JSON, then you can encapsulate the response in a function.
- Method—The actual Facebook method that you want to test.

Once you've logged in, you'll see that your User ID is displayed and that all the drop-downs are enabled. You will also notice that a new link, documentation, appears on the screen, which is very useful. All you have to do is select a method from the drop-down list, and then click on documentation. Once you've done that, you'll see:

- A description of the method
- The parameters used by the method
- An example return XML
- A description of the expected response
- The FQL equivalent (we will discuss this later in the chapter)
- Error codes

For now, let's just change the Response Format to Facebook PHP Client, and then click on Call Method to see what happens. In this case, you can see that the method returns an array of user IDs—each one is the ID of one of the friends of the currently logged-in user (that is, your list of friends, because you're the person logged in). You could, of course, go on to use this array in PHP as part of your application, but don't worry about that at the moment. For the time being, we'll just concentrate on prototyping in the Test Console. Before we move on, it's worth noting that you can obtain the array of friends only for the currently logged-in user. You can't obtain the list of friends of any other user. So, for example, you would not be able to use friends.get on ID 286601116 or 705175505. In fact, you wouldn't be able to use friends.get for 614902533 (as shown in the example), because that's my ID and not yours. On the other hand, having obtained a list of valid IDs, we can now do something more interesting with them.
For example, we can use the users.getInfo method to obtain the first name and birthday of particular users. As you can see, a multidimensional array is returned to your PHP code (if you were actually using this in an application). For example, if you were to load the array into a variable $birthdays, then $birthdays[0][birthday] would contain January 27, 1960. Of course, in the above example, the most important piece of information is the first birthday in the array—record that in your diary for future reference. And, if you're thinking that I'm old enough to be your father, well, in some cases this is actually true. Now that you've come to grips with the API Test Console, we can turn our attention to FBML and the FBML Test Console.
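To make the shape of that response concrete, here is a small sketch in Python rather than PHP. The sample records are invented for illustration (the real API returned many more fields), but the nesting mirrors what a users.getInfo-style call hands back: one record per requested user ID:

```python
# Invented sample data mimicking the nested structure of a
# users.getInfo response: a list with one record per requested user.
response = [
    {"uid": 614902533, "first_name": "Mark", "birthday": "January 27, 1960"},
    {"uid": 705175505, "first_name": "Anne", "birthday": "March 3, 1982"},
]

# The PHP expression $birthdays[0][birthday] corresponds to indexing
# the outer list first, then the field name:
birthdays = response
first_birthday = birthdays[0]["birthday"]
```

The outer index selects which user's record you want; the inner key selects the field, which is exactly what the PHP multidimensional-array access does.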

SELinux - Highly Secured Web Hosting for Python-based Web Applications

Packt
21 Oct 2009
10 min read
When contemplating the security of a web application, there are several attack vectors to consider. An outsider may attack the operating system by planting a remote exploit, exercising insecure operating system settings, or using some other method of privilege escalation. Or the outsider may attack other sites hosted on the same server without escalating privileges. (Note that this particular discussion does not touch on attacks that steal data from a single site. Instead, I'm focusing on the ability to attack different applications on the same server.) With hosts providing space for large numbers of PHP-based sites, security can be difficult, as the httpd daemon traditionally runs under the same Unix user for all sites. To prevent these kinds of attacks, you need to concentrate on two areas:

- Preventing a site from reading or modifying the data of another site, and
- Preventing a site from escalating privileges to tamper with the operating system and bypass user-based restrictions.

There are two toolboxes you use to accomplish this. In the first case, you need a way to run all of your sites under different Linux users. This allows the traditional Linux filesystem security model to protect against a hacked site attacking other sites on the same server. In the second case, you need a way to prevent privilege escalation in the first place and, barring that, to prevent damage to the operating system should an escalation occur. Let's first look at a method to run different sites under different users. The Python web framework provides several versatile methods by which applications can run. There are three common ones: first, using Python's built-in HTTP server; second, running the script as a CGI application; and third, using mod_python under Apache (similar to what mod_perl and mod_php do).
These methods have various disadvantages: respectively, a lack of scalability, performance issues due to CGI application loading, and the aforementioned "all sites under one user" problem. To provide a scalable, secure, high-performance framework, you can turn to a relatively new delivery method: mod_wsgi. This Apache module, created by Graham Dumpleton, provides several modes in which you can run Python applications. Here we'll focus on the "daemon" mode of mod_wsgi. Much like mod_python, the daemon mode of mod_wsgi embeds a Python interpreter (and the requisite script) into an httpd instance. Much like with mod_python, you can configure mod_wsgi-based sites to appear at various locations in the virtual directory tree and under different virtual servers. You can also configure the number and behavior of child daemons on a per-site basis. However, there is one important difference: with mod_wsgi, you can configure each httpd instance to run as a different Linux user. During operation, the main httpd instance dispatches requests to the already-running mod_wsgi children, producing performance that rivals mod_python. Most importantly, since each httpd instance runs under a different Linux user, you can apply Linux security mechanisms to the different sites running on one server. Once you have your sites running on a per-user basis, turn your attention to preventing privilege escalation and protecting the operating system. By default, the Targeted mode of SELinux provided by RedHat Enterprise Linux 5 (and its free cousins such as CentOS) provides strong protection against intrusions from httpd-based applications. Because of this, you will need to configure SELinux to allow access to resources such as databases and files that reside outside of the normal httpd directories. To illustrate these concepts, I'll guide you through installing a Trac instance under mod_wsgi. The platform is CentOS 5.
As a side note, it's highly recommended that you perform the installation and SELinux debugging in a Xen instance, so that your environment contains only the software that is needed. The sidebar explains how to easily install the environment that was originally used for this exercise, and I will assume that is your primary environment. A few steps require a C compiler (namely, the installation of Trac), and I'll guide you through migrating these packages to your Xen-based test environment.

Installing Trac

In this example, you'll use a standard installation of Trac. Following the instructions provided at the URL in the Resources section, begin by installing Trac 0.10.4 with ClearSilver 0.10.5 and SilverCity 0.9.7. (Note that with many Python web applications such as Trac and Django, "installing" the application means installing the libraries necessary for Python to run it. You still need to run a script to create the actual site.) Next, create a PostgreSQL user and database on a different machine. If you are using Xen for your development machine, you can use a PostgreSQL database running in your main dom0 instance; all that matters is that the PostgreSQL instance is accessed over the network from a different machine. (Note that MySQL will also work in this example, but SQLite will not: we need a database engine that is accessed over the network, not as a disk file.) After that's done, you'll need to create an actual Trac site. Create a directory under /opt, such as /opt/trac. Next, run the trac-admin command and enter the information prompted:

trac-admin /opt/trac initenv

Installing mod_wsgi

You can find mod_wsgi at the source listed in the Resources. After you make sure the httpd-devel package is installed, installing mod_wsgi is as simple as extracting the tarball and issuing the normal ./configure and make install commands.
Running Trac under mod_wsgi

If you look under /opt/trac, you'll notice two directories: one labeled apache, and one with the label of the project that you assigned when you installed this instance of Trac. Start by creating an application script in the apache directory, as shown in Listing 1.

Listing 1: /opt/trac/apache/trac.wsgi

#!/usr/bin/python
import sys
sys.stdout = sys.stderr
import os
os.environ['TRAC_ENV'] = '/opt/trac/test_proj'
import trac.web.main
application = trac.web.main.dispatch_request

(Note the sys.stdout = sys.stderr line. This is necessary because of the way WSGI handles communication between the Python script and the httpd instance: if any code in the script prints to STDOUT, such as debug messages, the httpd instance can crash.) After creating the application script, modify httpd.conf to load the wsgi module and set up the Trac application. After the LoadModule lines, insert a line for mod_wsgi:

LoadModule wsgi_module modules/mod_wsgi.so

Next, go to the bottom of httpd.conf and insert the text in Listing 2. This text configures the wsgi module for one particular site; it can be used under the default httpd configuration as well as under VirtualHost directives.

Listing 2: Excerpt from httpd.conf

WSGIDaemonProcess trac user=trac_user group=trac_user threads=25
WSGIScriptAlias /trac /opt/trac/apache/trac.wsgi
WSGIProcessGroup trac
WSGISocketPrefix run/wsgi
<Directory /opt/trac/apache>
    WSGIApplicationGroup %{GLOBAL}
    Order deny,allow
    Allow from all
</Directory>

Note the WSGIScriptAlias directive. The /trac keyword (first parameter) specifies where in the directory tree the application will exist. With this configuration, if you go to your server's root address, you'll see the default CentOS splash page; if you add /trac after the address, you'll hit your Trac instance. Save the httpd.conf file. Finally, add a Linux user called trac_user.
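For readers new to WSGI, the `application` object that the Trac script exports is simply a callable following the WSGI convention: mod_wsgi calls it once per request. A minimal, self-contained sketch (not part of Trac; the handler below is purely illustrative) shows the protocol, exercised here without a web server via the standard library's wsgiref utilities:

```python
def application(environ, start_response):
    """A minimal WSGI application.

    environ is a dict of CGI-style request variables; start_response
    is a callable used to send the status line and response headers.
    The return value is an iterable of byte strings forming the body.
    """
    body = b"Hello from WSGI\n"
    headers = [("Content-Type", "text/plain"),
               ("Content-Length", str(len(body)))]
    start_response("200 OK", headers)
    return [body]

# Exercise the application without a web server.
from wsgiref.util import setup_testing_defaults

collected = {}
def fake_start_response(status, headers):
    # Record what the application would have sent to the client.
    collected["status"] = status
    collected["headers"] = headers

environ = {}
setup_testing_defaults(environ)   # fills in a plausible request dict
result = b"".join(application(environ, fake_start_response))
```

Trac's dispatch_request has this same signature, which is why assigning it to the name `application` is all the glue the script needs.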
It is important that this user not have login privileges. When the root httpd instance runs and encounters the WSGIDaemonProcess directive noted above, it will fork itself as the user specified in the directive; the fork will then load Python and the indicated script.

Securing Your Site

In this section, I'll focus on the two areas noted in the introduction: user-based security and SELinux. I will touch briefly on the theory of SELinux and explain the nuts and bolts of this particular implementation in more depth. I highly recommend that you read the RedHat Enterprise Linux Deployment Guide for the particulars of how RedHat implements SELinux. As with all activities involving some risk, if you plan to implement these methods, you should retain the services of a qualified security consultant to advise you about your particular situation. Setting up the user-based security is not difficult. Because the httpd instance containing Python and the Trac instance will run as trac_user, you can safely set everything under /opt/trac/test_proj to read (and, for directories, execute) for the user and nothing for group/other. By doing this, you isolate the site from other sites and users on the system. Now, let's configure SELinux. First, verify that your system is running the proper policy and mode. On your development system, you'll be using the Targeted policy in its Permissive mode. If you move your Python applications to a production machine, you would run the Targeted policy in Enforcing mode. The Targeted policy is limited to protecting the most popular network services without making the system so complex as to prevent user-level work from being done. It is the only policy that ships with RedHat 5 and, by extension, CentOS 5. In Permissive mode, SELinux policy violations are trapped and sent to the audit log, but the behavior is allowed. In Enforcing mode, the violation is trapped and the behavior is not allowed.
To verify the mode, run the Security Level Configuration tool from the Administration menu. The SELinux tab, shown in Figure 1, allows you to adjust the mode. After you have verified that SELinux is running in Permissive mode, you need to do two things: first, change the type of the files under /opt/trac; second, allow Trac to connect to the PostgreSQL database that you configured when you installed Trac. First, tweak the SELinux file types attached to the files in your Trac instance. These file types dictate which processes are allowed to access them. For example, /etc/shadow has a very restrictive shadow type that only allows a few applications to read and write it. By default, SELinux expects web-based applications (indeed, anything using Apache) to reside under /var/www. Files created under this directory have the SELinux type httpd_sys_content_t. When you created the Trac instance under /opt/trac, the files were created with type usr_t. Figure 2 shows the difference between these labels. To properly label the files under /opt, issue the following commands as root:

cd /opt
chcon -R -t httpd_user_content_t trac/

After the file types are configured, there is one final step: allow Trac to connect to PostgreSQL. In its default state, SELinux disallows outbound network connections for the httpd type. To allow database connections, issue the following command:

setsebool -P httpd_can_network_connect_db=1

Here the -P option makes the setting persistent; if you omit it, the setting will be reset to its default upon the next reboot. After the setsebool command has been run, start httpd:

/sbin/service httpd start

If you visit the URL http://127.0.0.1/trac, you should see a Trac screen such as that in Figure 3.

Theming Modules in Drupal 6

Packt
21 Oct 2009
5 min read
Our Target Module: What We Want

Before we begin developing a module, here's a brief overview of what we want to accomplish. The module we will write in this article is the Philosophy Quotes module (philquotes will be its machine-readable name). The goal of this module is to create a block that displays pithy philosophical quotes. We will implement the following features:

- Quotes are stored along with other basic content, making it possible to add, modify, and delete them in exactly the same way that we manage other articles.
- Since our existing themes aren't aware of this quotes module, it must provide some default styling.

We will progress through the creation of this module by first generating a new "quote" content type, and then building a theme-aware module.

Creating a Custom Content Type

As Drupal evolved, it incorporated an increasingly sophisticated method for defining content. Central to this system is the idea of the content type. A content type is a definition, stored in Drupal's database, of how a particular class of content should be displayed and what functionality it ought to support. Out of the box, Drupal has two defined content types: Page and Story. Pages are intended to contain content that is static, like an "About Us" or "Contact Us" page. Stories, on the other hand, are intended to contain more transient content: news items, blog postings, and so on. Creating new pages or stories is as simple as clicking on the Create Content link in the default menu. Obviously, not all content can be classified as either a page or a story, and many sites will need specialized content types to adequately represent a specific class of content. Descriptions of events, products, components, and so on might all be better handled with specialized content types. Our module is going to display brief quotes, and these quotes shouldn't be treated like either articles or pages.
For example, we wouldn't want a new quote to be displayed along with site news in the center column of our front page. Thus, our quotes module needs a custom content type. This content type will be very simple, with two parts: the text of the quote and the origin of the quote. For example, here's a famous quote: "The life of man [is] solitary, poor, nasty, brutish, and short."—Thomas Hobbes. The text of this quote is "The life of man [is] solitary, poor, nasty, brutish, and short", and the origin in this example is Thomas Hobbes. We could have been more specific and included the title of the work (Leviathan) or even the exact page reference, edition, and so on; but in our simple example, all of this information is treated as the quote's origin. Given the simplicity of our content type, we can simply use Drupal's built-in content type tool to create the new type. To generate more sophisticated content types, we could install the CCK (Content Construction Kit) module, and perhaps some of the CCK extension modules. CCK provides a robust set of tools for defining custom fields, data types, and features. But here our needs are simple, so we won't need any additional modules or even any custom code to create this new content type.

Using the Administration Interface to Create a Content Type

The process of creating our custom content type is as simple as logging into Drupal and filling out a form. The content type tool is found at Administer | Content management | Content types, where there are a couple of tabs at the top of the page. Clicking the Add content type tab loads the form used to create our new content type. On this form, we need to complete the Name and Type fields: the first with a human-friendly name, and the second with a computer-readable name. A Description is often helpful. In addition to these fields, there are a few other form fields under Submission form settings and Workflow settings that we need to change.
In the Submission form settings section, we will change the labels to match the terminology we have been using. Instead of Title and Body, our fields will be labeled Origin and Text. Changing labels is a superficial change. While it changes the text that is displayed to users, the underlying data model will still refer to these fields as title and body. We will see this later in the article.

In the Workflow settings section, we need to make sure that only Published is checked. By default, Promoted to front page is also selected. That should be disabled unless you want new quotes to show up as content in the main section of the front page.

Once the form is complete, pressing the Save content type button will create the new content type. That's all there is to it. The Create content menu should now have the option of creating a new quote.

As we continue, we will create a module that displays content of type quote in a block. Before moving on, we want a few pieces of content; otherwise, our module would have no data to display. Here's the list of quotes (as displayed on Administer | Content management | Content) that will constitute our pool of quotations for our module.
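Because only the labels changed, quote nodes live in Drupal's standard node tables, still keyed by the generic title and body columns. A purely illustrative query sketch along those lines (table and column names follow the core node schema of this Drupal generation; the exact query our module uses appears later in the article) that pulls one published quote at random:

```sql
-- Illustrative only: "quote" nodes still use the generic title/body columns.
-- node holds the Origin (as title); node_revisions holds the Text (as body).
SELECT n.title AS origin, r.body AS text
FROM node n
INNER JOIN node_revisions r ON n.vid = r.vid
WHERE n.type = 'quote' AND n.status = 1
ORDER BY RAND()
LIMIT 1;
```

The join on vid is needed because the body is stored per revision, while publication status and type are stored on the node itself.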
Installing Drupal Themes

Packt
21 Oct 2009
5 min read
The large and active community of developers that has formed around Drupal guarantees a steady flow of themes for this popular CMS. The diversity of that community also assures that there will be a wide variety of themes produced. Add into the equation the existence of a growing number of commercial and open source web designs, and you can be certain that somewhere out there is a design that is close to what you want. The issue becomes identifying the sources of themes and designs, and determining how much work you want to do yourself.

You can find both design ideas and complete themes on the Web. You need to decide whether you want to work with an existing theme, convert a design into a theme, or start from scratch, unburdened by any preliminary constraints or alien code. For purposes of this article, we will be dealing with finding, installing, and then uninstalling an existing and current Drupal theme. This article assumes you have a working Drupal installation, and that you have access to the files on your server.

Finding Additional Themes

There are several factors to consider when determining the suitability of an existing theme. The first issue is compatibility. Due to changes made to Drupal in the 5.x series, older themes will not work properly with Drupal 5.x. Accordingly, your first step is to determine which version of Drupal you are running. To find the version information for your installation, go to Administer | Logs | Status Report. The first line of the Status Report tabular data will show your version number. If you do not see the Status Report option, then you are probably using a Drupal version earlier than 5.x. We suggest you upgrade, as this book is for Drupal 5.x. Once you know your Drupal version, you can confirm whether the theme you are considering is usable on your system.
If the theme you are looking at doesn't provide versioning information, assume the worst, and make sure you back up your site before you install the questionable theme.

Once you're past the compatibility hurdle, your next concern is system requirements: does the theme require any additional extensions to work properly? Some themes are ready to run with no additional extensions required. Many themes, however, require that your Drupal installation include a particular templating engine. The most commonly required templating engine is PHPTemplate. If you are running a recent instance of Drupal, you will find that the PHPTemplate engine is installed by default. You can also download a variety of other popular templating engines, including Smarty and PHPTal, from http://drupal.org/project/Theme+engines. Check carefully whether the theme you've chosen requires you to download and install other extensions. If so, track down the additional extensions and install them first, before you install your theme.

A good place to start looking for a complete Drupal theme is, perhaps not surprisingly, the official Drupal site. At Drupal.org, you can find a variety of downloads, including both themes and template engines. Go to http://drupal.org/project/Themes to find a listing of the current collection of themes. All the themes state very clearly their version compatibility and whether there are any prerequisites to run the theme.

In addition to the resources on the official Drupal site, there is an assortment of fan sites providing themes. Some sites are open source, others commercial, and a fair number operate under unusual licenses (most frequently asking that footers be left intact with links back to their sites). Some of the themes available are great; most are average. If your firm is brand sensitive, or your design idiosyncratic, you will probably find yourself working from scratch. Regardless of your particular needs, the theme repositories are a good place to start gathering ideas.
Even if you cannot find exactly what you need, you can sometimes find something with which you can work. An existing set of properly formed theme files can jump-start your efforts and save you a ton of time. If you wish to use an existing theme, pay attention to the terms of usage. You can save yourself (or your clients) major headaches by catching any unusual licensing provisions early in the process. There's nothing worse than spending hours on a theme only to discover its use is somehow restricted.

One source for designs with livable usage policies is the Open Source Web Design site, http://www.oswd.org, which includes a repository of designs, all governed by open source licensing terms. The downside of this resource is that all you get is the design—not the code, and not a ready-made theme. You will need to convert the design into a usable theme.

For this article, let's search out a completed theme and, for the sake of simplicity, let's take one from the official Drupal site. I am going to download the Gagarin theme from Drupal.org. I'll refer to this theme as a working example for some of the steps below. You can either grab a copy of the same theme or use another—the principles are the same regardless. Gagarin is an elegant little theme from Garamond of the Russian Drupal community. Gagarin is set up for a two-column site (though it can be run in three columns) and works particularly well for a blog site.
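Once downloaded, a contributed theme is typically unpacked into the sites/all/themes directory of the Drupal installation (a common convention for contributed themes in Drupal 5.x). A PHPTemplate theme such as Gagarin generally contains files along these lines (the exact file list is illustrative and varies by theme):

```
sites/all/themes/gagarin/
    page.tpl.php     (overall page markup)
    node.tpl.php     (markup for individual content items)
    block.tpl.php    (markup for blocks)
    style.css        (the theme's stylesheet)
    logo.png         (default logo image)
```

After the files are in place, the theme appears in Administer | Site building | Themes, where it can be enabled and set as the default.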
Article Marketing on CMS, Blogs, and Online Magazines to Improve Sales

Packt
21 Oct 2009
9 min read
Dynamic Content: What and Why?

What do we actually mean when we call content "dynamic"? Does it mean the content itself changes while the customer is browsing through the website? Here, we mean that the content of the whole website changes as time goes by, and more and more content is added. At this stage, it doesn't really matter who or what adds the content—whether it's the Administrator of the osCommerce website, the website's visitors or customers, or whether the content is downloaded from another source (another website).

Why is dynamic content important for osCommerce? It's not only important for osCommerce, but for any website that is supposed to generate traffic and aims to attract more visitors. The more professional, detailed, and interesting the content of the site is, the more likely visitors are to find it useful to visit the website again and read newly posted materials. If the content of the website changes often, the flow of visitors will keep growing.

For online stores, including osCommerce stores, dynamic content is especially important. First of all, it attracts more visitors who are interested in reading new and updated content. It attracts more target-audience visitors, who are likely to become customers, and existing customers, who are likely to become return customers. And all this without any advertising expenses involved! Then, well-prepared content increases the visitor's confidence and helps to improve "visitor to customer" conversion rates. Finally, keyword-rich content is well indexed by search engine crawlers, and search engines are more likely to rank a website that is constantly updated higher in search results than one that isn't.

So publishing dynamic content on an osCommerce site may increase the number of visitors, make the website more noticeable in search engines, and also increase the number of sales.

How Can We Make Users Participate?
By inviting not only readers but also writers, it is possible to publish even more articles, news, reviews, and so on. Such an approach may also help a lot in attracting users and converting them into customers, as they will come to know your osCommerce online store as a place on the Internet where they can both buy goods online and read what other customers have to say, express their thoughts, participate in discussions, and so on.

Here are several ways to invite users and customers of the website to participate in creating dynamic content:

First and most important, make customers and website users aware of the possibility to participate and post content, or just comments on other posts.

Give them the technical means to participate.

Ask them to describe their own experience of, or opinion on, a product or service.

Ask them how you could improve your services and product range.

Even though it's extremely important to get feedback and content from users and customers of an online store, the Administrator of such a site should be ready to fight spam, fraudulent posts and content, copyright violations, and content that harasses, abuses, or threatens other users or customers.

Different Types of Dynamic Content in osCommerce

There can be multiple types of dynamic content present in osCommerce. There are osCommerce contributions, and website authoring and building techniques, that allow dynamic content to be added to an osCommerce online store. In this article, we will specifically focus on articles, and how they can be used for marketing on various blogs and online magazines.

CMS, Blogs, and Online Magazines

Article marketing means that you write articles about your field of interest and distribute them for publication on other websites, blogs, and ezines with a link back to your site. Each time your article is published on a website, you get a one-way link to your site.
As with most good things, this method has been pounced upon by Internet marketers, and the net is flooded with a lot of low-quality articles. However, if you produce meaningful articles, you can still get a lot of benefit by distributing them. By publishing articles in your own online store, you not only get better indexed by search engines but, more importantly, provide useful information to customers who may be seeking it.

What can the articles be about?

Articles can be about products or series of products available for purchase online.

Articles can be about specific features of products, helping customers choose the product they want based on a detailed explanation of what this or that feature actually means. Such articles are called "buying guides". In these articles, authors can also compare the features of several products and provide comprehensive information on the best choice for particular needs.

Articles can contain information on best practices, tips, and tricks for using certain products advertised in the online store. Such articles help customers imagine how they would use this or that product, which increases the chances of customers buying the product online.

Articles can be about interesting facts related to the products sold online (similar to how Starbucks writes in its brochures about water supplies in the dry regions where its coffee grows).

Articles can be about tendencies in certain industries, related to the products available in the online store.

Actually, blog posts are not that different from articles, except that an article is supposed to be more detailed and informative than a blog post. Also, in blog posts the author can often ask the readers to comment on certain questions or topics, to make the discussion lively and initiate an exchange of opinions. When running a business blog, it makes sense to create several categories into which the articles can be sorted.
This way, customers will be able to find previous posts more easily, and the blog author(s) will have certain goals, such as writing to each category regularly. Online magazines contain articles grouped by a certain theme, and are issued or updated regularly. With both articles and blog posts, a facility to leave a comment is important to make customers participate in the discussion. For search engine optimization purposes, and for better user (reader)-to-customer conversion, links to related products advertised in the online store can be added to each article or blog post, or be published alongside it.

Besides a facility to publish articles on other websites and in online magazines, there are a number of content management solutions that can be integrated into the online store. But first, we will consider a very effective and free way to publish content on the Internet. The open encyclopedia Wikipedia.org allows for publishing content. Of course, the content needs to be properly prepared, and be very detailed and to the point. But if you have anything to say about the products advertised in the online store, the technology that stands behind those products, or the e-commerce approach that stands behind the online store, that qualifies to become an article in the electronic encyclopedia—a post in Wikipedia would be one of the most effective ways to promote your online store, its technology, and products.

Existing Content Management Solutions

There are a number of open-source solutions available to an osCommerce store owner that can be used to publish content directly on the website. We will consider several of the most popular ones, and also general integration practices with osCommerce.

osCommerce Information System and News Desk

osCommerce Information System is a publicly available contribution that can be downloaded from the osCommerce Contributions website at http://addons.oscommerce.com/info/1709.
It allows managing the content of pages like Terms and Conditions, Privacy Policy, and so on. The osCommerce Newsdesk contribution allows for creating multi-level categories of articles, and for posting articles into those categories. It has support for multiple languages, and each article can be posted in several languages. It also has built-in support for a so-called WYSIWYG HTML editor, so that the Administrator of the online store can create nice-looking HTML content directly in the Administration panel of the osCommerce online store, with no need for any additional HTML authoring tools.

Posted articles can be added to the front end of the online store, either in an information box in one of the side columns, or displayed on one page, grouped by categories. Posting an article is really easy. In the Administration panel, one has to fill in the article name, abstract, and content, and may also want to fill in the page title and keywords for the sake of SEO. Articles can be posted in different topics (categories), and the system allows a multi-level tree of categories to be built in the Administration panel of the site. When posting an article, one can also select the author from a drop-down list. Authors are managed separately.

It is also possible to add a Tell a Friend box on the Article page; this is configured in the options in the Administration panel. Reviews can be submitted by readers and, once a review gets approved by the Administrator, the website displays it on the Article page, along with the article text.

Yet another very important feature is a facility to assign certain products to articles. Once a product gets assigned to an article, a link to the product information page appears underneath the text of the article. This feature works really well for SEO, and it also helps customers who are interested in products described in the associated article to find them easily.
This solution, which is available for free and can be downloaded from the osCommerce website (http://addons.oscommerce.com/info/1026), can address the needs of an online shop by allowing articles to be posted to the website, running online magazines, creating buying guides, and so on.
Creating efficient reports with Visual Studio

Packt
21 Oct 2009
5 min read
Reporting Services, Analysis Services, and Integration Services are the three pillars of Business Intelligence in Microsoft's vision, which continues to evolve. Reporting is a basic, albeit one of the most important, activities of an organization, because it provides a specialized and customized view of the data of various forms (relational, text, XML, etc.) that live in data stores. Reports are useful in making business decisions, scheduling business campaigns, or assessing the competition. The report itself may be required in hard copy, in several document formats such as DOC, HTML, PDF, etc. Many times it is also required to be retrieved in an interactive form from the data store and viewed on a suitable interface, including a web browser.

Microsoft SQL Server 2005 Reporting Services, popularly known by its acronym SSRS, provides all that is necessary to create and manage reports, and to deploy them on a report server with output available in several document formats. The reader will benefit greatly from reading the several articles detailed in the author's Hodentek Blog. The content for those articles was developed using VS 2003, VS 2005, SQL 2000, and SQL 2005. The content for the present tutorial uses Visual Studio 2008 Professional and a Microsoft SQL Server Compact 3.5 embeddable database for its data. In Visual Studio, a Report Design Wizard guides you through fashioning a report from your choices.

Create a Windows Project in VS2008

Create a new project from File | New | Project. Provide a name instead of the default name (WindowsApplication1); this is changed to ReportDesign for this tutorial, as shown in the next figure. VS 2008 supports multi-version targeting. In the top right of the New Project window, you can see that this project is targeted at the .NET 2.0 Framework version and can be published to a .NET 2.0 website. Slightly enlarge Form1.
Drag and drop the Microsoft ReportViewer control, shown in the next figure, onto the form from the Toolbox. This has the same functionality as the ReportViewer control in VS 2005. The control will be housed on the form as shown in the next figure. You can display the tasks needed to configure the ReportViewer by clicking on the Smart Task, as shown in the same figure. The report will have all the usual functionality: printing, saving to different formats, navigating through pages, and so on.

Working with the Report Wizard

Now click on the Design a new report task. This opens the Report Wizard window, as shown in the figure. Read the instructions on this page carefully, then click on the Next button. This displays the Data Source Configuration Wizard, shown in the next figure.

Choosing a Data Source

The application can obtain data from several different resources. Click on the Database icon and then click on the Next button. This displays the window where you need to select a connection to the data source. If there are existing connections, you should be able to see them in the drop-down list box.

Making a Connection to Get Data

Click on the New Connection button. This brings up the Add Connection window, showing a default connection to a Microsoft SQL Server Compact 3.5 .NET Framework Data Provider. It also shows the location to be My Computer. This source can be changed by clicking on the Change... button. This brings up the Change Data Source window, where you can choose among the following options (the exact product versions are not explicitly stated):

Microsoft SQL Server: lets you connect to SQL Server 2000 or 2005 using the .NET Framework Data Provider for SQL Server.

Microsoft SQL Server Compact 3.5: lets you connect to a database file.

Microsoft SQL Server Database File: lets you connect to a local Microsoft SQL Server instance, including SQL Server Express.
For this tutorial, Compact 3.5 will be used (which also uses the .NET Framework Data Provider for Compact 3.5). Click on the OK button in the Change Data Source window. The VS 2008 installation also installs a sample database file for SQL Server Compact 3.5 on the computer. Click on the Browse button (you could also create a new database file if you like; here we browse for the existing one). This brings up the Select SQL Server Compact 3.5 Database File window, with the default location where the database file is parked, as shown in the next figure.

Click on the Northwind icon in the window and click on the Open button. This updates the Add Connection window with this information, as shown in the next figure. You may test the connection by hitting the Test Connection button, which should display a successful outcome, as shown in the next figure. There is no need for a password, as you are the owner. Click OK twice, and this will take you back to the Data Source Configuration Wizard, updating the connection information, which you may review as shown in the next figure. Click on the Next button. This brings up a Microsoft Visual Studio message window giving you the option to bring this data source file into your project.
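For reference, the connection string produced for a SQL Server Compact 3.5 database file is essentially just a file path. A sketch of what it might look like (the path below assumes the default sample location of Northwind.sdf on an English VS 2008 install; yours may differ):

```
Data Source=C:\Program Files\Microsoft SQL Server Compact Edition\v3.5\Samples\Northwind.sdf
```

This is the string Visual Studio stores in the application settings when you accept the wizard's offer to save the connection.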
Implementing Document Management in Alfresco 3 - Part 1

Packt
21 Oct 2009
5 min read
Managing spaces

A space in Alfresco is nothing but a folder that contains content as well as sub-spaces. The space users are the users invited to a space to perform specific actions, such as editing content, adding content, discussing a particular document, and so on. The exact capability that a given user has within a space is a function of their role, or rights.

Let's consider the capability of creating a sub-space. By default, in order to create a sub-space, one of the following must apply:

The user is the administrator of the system.

The user has been granted the Contributor role.

The user has been granted the Coordinator role.

The user has been granted the Collaborator role.

Similarly, to edit space properties, a user will need to be the administrator, or be granted a role that gives them rights to edit the space. These roles include Editor, Collaborator, and Coordinator.

Space is a smart folder

A space is a folder with additional features, such as security, business rules, workflow, notifications, local search capabilities, and special views. The additional features, which make the space a smart folder, are explained as follows:

Space security: You can define security at the space level. You can designate a user or a group of users who can perform certain actions on the content in a space. For example, on the Marketing Communications space in the Intranet, you can specify that only users in the marketing group can add content, and other users can only see the content.

Space business rules: Business rules, such as transforming content from Microsoft Word to Adobe PDF, or sending notifications when content arrives in a space, can be defined at the space level.

Space workflow: You can define and manage the content workflow on a space. Typically, you will create a space for content that needs to be reviewed, and a space for content that has been approved.
You will create various spaces for dealing with the different stages that the work flows through, and Alfresco will manage the movement of the content between those spaces.

Space events: Alfresco triggers events when content moves into a space, when content moves out of a space, or when content is modified within a space. You can capture such events at the space level and trigger certain actions, such as sending email notifications to certain users.

Space aspects: Aspects are additional properties and behavior that can be added to content, based on the space in which it resides. For example, you can define a business rule to add customer details to all of the customer contract documents in your intranet's Sales space.

Space search: Alfresco's search functions can be limited to a space. For example, if you create a space called Marketing, then you can limit the search to documents within the Marketing space, instead of searching the entire site.

Space syndication: Content in a space can be syndicated by applying RSS feed scripts to a space. You can apply RSS feeds to your News space, so that other applications and websites can subscribe to this feed for news updates.

Space content: Content in a space can be versioned, locked, checked in and checked out, and managed. You can specify that certain documents in a space are to be versioned, and others not.

Space network folder: A space can be mapped to a network drive on your local machine, enabling you to work with the content locally. For example, by using the CIFS interface, a space can be mapped to a Windows network folder.

Space dashboard view: Content in a space can be aggregated and presented using special dashboard views. For example, the Company Policies space can list all of the latest policy documents that have been updated in the past month or so. You can create different views for the Sales, Marketing, and Finance departmental spaces.
Why space hierarchy is important

Like regular folders, a space can have child spaces (called sub-spaces). These sub-spaces can have further sub-spaces of their own. There is no limitation on the number of hierarchical levels. However, the space hierarchy is very important for all of the reasons specified in the previous section: any business rules and security defined for a space are applicable to all of the content and sub-spaces within that space. Your space hierarchy should look similar to the following screenshot.

A space in Alfresco enables you to define various business rules, a dashboard view, properties, workflow, and security for the content belonging to each department. You can decentralize the management of your content by providing access to departments at the individual space level. The example Intranet space should contain sub-spaces, as shown in the preceding screenshot. You can create spaces by logging in as the administrator. It is also very important to set the security appropriately, by inviting groups of users to these spaces.
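As a plain-text illustration of such a departmental hierarchy (space names taken from the examples in this article; your own structure will differ), the Intranet space might be laid out like this:

```
Intranet/
    Company Policies/
    Marketing Communications/
    News/
    Sales/
    Finance/
```

Rules and permissions set on Intranet apply to everything beneath it, while each department space can add its own rules, views, and invited user groups on top.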
Apache OFBiz Service Engine: Part 1

Packt
21 Oct 2009
6 min read
Defining a Service

We first need to define a service. Our first service will be named learningFirstService. In the folder ${component:learning}, create a new folder called servicedef. In that folder, create a new file called services.xml and enter into it this:

<?xml version="1.0" encoding="UTF-8" ?>
<services xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:noNamespaceSchemaLocation="http://www.ofbiz.org/dtds/services.xsd">
    <description>Learning Component Services</description>
    <service name="learningFirstService" engine="java"
             location="org.ofbiz.learning.learning.LearningServices"
             invoke="learningFirstService">
        <description>Our First Service</description>
        <attribute name="firstName" type="String" mode="IN" optional="true"/>
        <attribute name="lastName" type="String" mode="IN" optional="true"/>
    </service>
</services>

In the file ${component:learning}/ofbiz-component.xml, add this after the last <entity-resource> element:

<service-resource type="model" loader="main" location="servicedef/services.xml"/>

That tells our component learning to look for service definitions in the file ${component:learning}/servicedef/services.xml. It is important to note that all service definitions are loaded at startup; therefore, any changes to any of the service definition files will require a restart!

Creating the Java Code for the Service

In the package org.ofbiz.learning.learning, create a new class called LearningServices with one static method learningFirstService:

package org.ofbiz.learning.learning;

import java.util.Map;

import org.ofbiz.service.DispatchContext;
import org.ofbiz.service.ServiceUtil;

public class LearningServices {

    public static final String module = LearningServices.class.getName();

    public static Map learningFirstService(DispatchContext dctx, Map context) {
        Map resultMap = ServiceUtil.returnSuccess(
            "You have called on service 'learningFirstService' successfully!");
        return resultMap;
    }
}

Services must return a map, and this map must contain at least one entry.
This entry must have the key responseMessage (see org.ofbiz.service.ModelService.RESPONSE_MESSAGE), with a value of one of the following:

success (or ModelService.RESPOND_SUCCESS)

error (or ModelService.RESPOND_ERROR)

fail (or ModelService.RESPOND_FAIL)

By using ServiceUtil.returnSuccess() to construct the minimal return map, we do not need to bother adding the responseMessage key and value pair. Another entry that is often used is the one with the key successMessage (ModelService.SUCCESS_MESSAGE). By doing ServiceUtil.returnSuccess("Some message"), we will get a return map with the entry successMessage set to "Some message". Again, ServiceUtil insulates us from having to learn the conventions in key names.

Testing Our First Service

Stop OFBiz, recompile our learning component, and restart OFBiz so that the modified ofbiz-component.xml and the new services.xml can be loaded. In ${component:learning}/widget/learning/LearningScreens.xml, insert a new Screen Widget:

<screen name="TestFirstService">
    <section>
        <widgets>
            <section>
                <condition><if-empty field-name="formTarget"/></condition>
                <actions>
                    <set field="formTarget" value="TestFirstService"/>
                    <set field="title" value="Testing Our First Service"/>
                </actions>
                <widgets/>
            </section>
            <decorator-screen name="main-decorator" location="${parameters.mainDecoratorLocation}">
                <decorator-section name="body">
                    <include-form name="TestingServices" location="component://learning/widget/learning/LearningForms.xml"/>
                    <label text="Full Name: ${parameters.fullName}"/>
                </decorator-section>
            </decorator-screen>
        </widgets>
    </section>
</screen>

In the file ${component:learning}/widget/learning/LearningForms.xml, insert a new Form Widget:

<form name="TestingServices" type="single" target="${formTarget}">
    <field name="firstName"><text/></field>
    <field name="lastName"><text/></field>
    <field name="planetId"><text/></field>
    <field name="submit"><submit/></field>
</form>

Notice how the formTarget field is being set in the screen and used in the form.
For now, don't worry about the Full Name label we are setting from the screen; our service will eventually set that. In the file ${webapp:learning}/WEB-INF/controller.xml, insert a new request map:

<request-map uri="TestFirstService">
    <event type="service" invoke="learningFirstService"/>
    <response name="success" type="view" value="TestFirstService"/>
</request-map>

The control servlet currently has no way of knowing how to handle an event of type service, so in controller.xml we must add two new handler elements immediately below the other <handler> elements:

<handler name="service" type="request" class="org.ofbiz.webapp.event.ServiceEventHandler"/>
<handler name="service-multi" type="request" class="org.ofbiz.webapp.event.ServiceMultiEventHandler"/>

We will cover service-multi services later. Finally, add a new view map:

<view-map name="TestFirstService" type="screen" page="component://learning/widget/learning/LearningScreens.xml#TestFirstService"/>

Fire an HTTP OFBiz request TestFirstService at the webapp learning, and see that we have successfully invoked our first service.

Service Parameters

Just like Java methods, OFBiz services can have input and output parameters, and just like Java methods, the parameter types must be declared.

Input Parameters (IN)

Our first service is defined with two parameters:

<attribute name="firstName" type="String" mode="IN" optional="true"/>
<attribute name="lastName" type="String" mode="IN" optional="true"/>

Any parameters sent to the service by the end user as form parameters, but not in the service's list of declared input parameters, will be dropped. The remaining parameters are converted to a Map by the framework and passed into our static method as the second argument. Add a new method handleParameters to our LearningServices class:
public static Map handleParameters(DispatchContext dctx, Map context){
    String firstName = (String) context.get("firstName");
    String lastName = (String) context.get("lastName");
    String planetId = (String) context.get("planetId");
    String message = "firstName: " + firstName + "<br/>";
    message = message + "lastName: " + lastName + "<br/>";
    message = message + "planetId: " + planetId;
    Map resultMap = ServiceUtil.returnSuccess(message);
    return resultMap;
}

We can now make our service definition invoke this method instead of the learningFirstService method by opening our services.xml file and replacing:

<service name="learningFirstService" engine="java" location="org.ofbiz.learning.learning.LearningServices" invoke="learningFirstService">

with:

<service name="learningFirstService" engine="java" location="org.ofbiz.learning.learning.LearningServices" invoke="handleParameters">

Once again shut down, recompile, and restart OFBiz. Enter the values Some, Name, and Earth into the First Name, Last Name, and Planet Id fields, respectively. Submit and notice that only the first two parameters went through to the service. The planetId parameter was dropped silently, as it was not declared in the service definition. Modify the service learningFirstService in the file ${component:learning}/servicedef/services.xml, and add below the second parameter a third one like this:

<attribute name="planetId" type="String" mode="IN" optional="true"/>

Restart OFBiz and submit the same values for the three form fields, and see all three parameters go through to the service.
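The silent dropping of the undeclared planetId parameter can be pictured as the framework building the service context only from the declared IN attributes. Here is a minimal, self-contained sketch of that idea in plain Java; the class and method names are ours for illustration, not the actual framework code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class ParameterFilterSketch {
    // Keep only the entries whose keys are declared as IN attributes,
    // mirroring how undeclared form parameters never reach the service.
    static Map<String, Object> makeValid(Map<String, Object> incoming, Set<String> declaredIn) {
        Map<String, Object> context = new HashMap<>();
        for (Map.Entry<String, Object> entry : incoming.entrySet()) {
            if (declaredIn.contains(entry.getKey())) {
                context.put(entry.getKey(), entry.getValue());
            }
        }
        return context;
    }

    public static void main(String[] args) {
        Map<String, Object> form = new HashMap<>();
        form.put("firstName", "Some");
        form.put("lastName", "Name");
        form.put("planetId", "Earth"); // not declared, so it is dropped
        Map<String, Object> ctx = makeValid(form, Set.of("firstName", "lastName"));
        System.out.println(ctx.containsKey("planetId")); // prints false
    }
}
```

In OFBiz itself this filtering is driven by the <attribute> elements in services.xml, which is why declaring the third attribute is what lets planetId through.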
Multimedia and Assessments with Moodle 1.9 (part 1)

Packt
21 Oct 2009
7 min read
Adding multimedia to multiple choice answers in Moodle quizzes and lessons Sometimes it can be useful to insert multimedia elements into the answers of a multiple choice question in a Moodle lesson or quiz. This can apply to situations where students are required to: Recognize audio excerpts corresponding to text, images, or videos (for example, in music or language courses students have to identify a melody from a music sheet excerpt, or the correct pronunciation of a given text) Recognize video scenes (for example, corresponding to a certain dialogue, gestural conversation, and so on) Adding multimedia to the question body is fairly easy because we can use the HTML editor and just link to a multimedia file, and the Moodle filter will do the rest. But adding questions for which the answer choices are multimedia files is a different story, as there is no HTML editor, just a simple text form. However, this is not complicated with the help of the correct HTML code. For example, in the course, Module 1 - Music Evolves—students have to post excerpts of songs from different moments of a musical genre to a forum topic as attachments. In the same module, we will create a quiz (Mini-quiz – history of music) that will use the excerpts posted by our students in its questions, as an incentive for other students to have a look at their colleagues' work. So, after creating a new quiz and adding a new multiple choice question to it (for example, "Which of the following excerpts refers to medieval music?") we can add links to the MP3 files submitted by students as choices. We can get these links by right-clicking (CTRL+click for Mac users) on the linked MP3 file in the forum post and then clicking on the Copy Link Location option, as shown in the following screenshot: Next, while editing the multiple choice question, we can paste the link location in the answer form, for example, for Choice 1. 
This is the easy way, as Moodle, with its multimedia filter, will do the rest: As a result, we'll get something like this: However, note that the entire link to the file shows up, which is not very aesthetically-pleasing (and can give clues to the correct answer to students in the filename). We can solve this by using a simple HREF HTML tag in the answer forms, so that we obtain something cleaner, such as this: In this case, we can use the following code: <a href="pasted link location"> link text </a> with a SPACE in the link text: The same concept applies for videos and music from online services (TeacherTube, YouTube, Imeem, and so on) as we can paste the embed code in the answer form. In this case, there is no need to use any extra HTML code. So adding the embed code in the following manner: will result in the following screenshot: When using this process, we should keep in mind a couple of things: The multimedia files that are linked in the choice options MUST be available to the students in the course. If we copy the link location from the files area but the file is not available to the students, we'll have problems. The same applies to attachments in forum posts, with separate groups. Consider the situation where the files linked to in the answer options are those of an attachment in a forum post on the course. Suppose the question is shared and the quiz is restored in another course or exported to another Moodle installation. In this case, there will be problems with the file access as the hyperlink will point to the original source in a particular course, which is not currently available. In the case of videos or audio from online services, embedding may be disabled by request, so these can later become unavailable in the course. Too many links to MP3 files on the same quiz page, and/or MP3 files of considerable size can slow down the page loading. 
As a possible solution to the first three issues, we can have the multimedia files in a public folder on our server. In this way, files can be accessed from different courses and domains. We could, for example, download a YouTube video and make it available on our server, if this service is blocked in our school or institution. Another option is to upload these files to the course files area (but in this case, the files must be made available to students in the course, by using the Display a directory resource, or they will not have permissions to listen to or see them). There is a trick that can be used to make content available in a course without showing it in the course topics. To do this, we can go to the course settings and add an extra topic, creating the resources and activities that we don't want to show to our students (however, everything for now should be visible, so no "eyes closed" icons!). After we're done, we should go again to the course settings and remove the extra topic. In this way, the content is there, is "visible" from a permissions point of view, but at the same time doesn't show in the course. This can be a way of having the files available for quizzes and other activities. As a possible solution to the last problem, we can use page breaks, or have one question per page in the quiz, so that students can only load one question at a time. Another solution is to reduce the file sizes, either by slicing or encoding the files in other formats. In the case of an MP3, reducing the bitrate could be an option.
And don't forget that if we want to receive multimedia assignments, we should set this activity to allow students' file uploads.

Creating exercises with Hot Potatoes

Hot Potatoes (http://hotpot.uvic.ca) from Half Baked Software allows us to create interactive web games and puzzles in a simple way. One of the advantages of Hot Potatoes over Moodle's quiz engine is that Hot Potatoes makes it easier to create exercises, and some of these are very different from the ones available in Moodle, for example, crosswords and finding pairs via drag and drop. The license for this software is a peculiar one, as it allows free use by individuals working for state-funded educational institutions that are non-profit making, on the condition that the material produced using the application is freely available to anyone via the Web (this means that a Moodle course without guest access, or protected by an enrolment key, probably wouldn't qualify). Other uses require a license, so we should keep this in mind. We need to register the software at http://hotpot.uvic.ca/reg/register.htm. A key will be sent to our email inbox and we can then register it by going to Help | Register and filling in the details. There are six different types of exercises that we can create with this software:

JQuiz - question-based exercises
JCloze - fill in the gaps exercises
JMatch - matching exercises
JMix - jumble exercises
JCross - crosswords
The Masher - linked exercises of the different types mentioned above

We will only take a look at the JCross and JMix exercises, as the other formats can be achieved with the question types that Moodle provides in quizzes and lessons. However, you should try them and see for yourself how easy it can be!

Buttons, Menus, and Toolbars in Ext JS

Packt
20 Oct 2009
5 min read
The unsung heroes of every application are the simple things like buttons, menus, and toolbars. In this article by Shea Frederick, Steve 'Cutter' Blades, and Colin Ramsay, we will cover how to add these items to our applications. Our example will contain a few different types of buttons, both with and without menus. A button can simply be an icon, or text, or both. Toolbars also have some mechanical elements such as spacers and dividers that can help to organize the buttons and other toolbar items. We will also cover how to make these elements react to user interaction.

A toolbar for every occasion

Just about every Ext component—panels, windows, grids—can accept a toolbar on either the top or the bottom. The option is also available to render the toolbar standalone into any DOM element in our document. The toolbar is an extremely flexible and useful component that will no doubt be used in every application.

Ext.Toolbar: The main container for the buttons
Ext.Button: The primary handler for button creation and interaction
Ext.menu: A menu

Toolbars

Our first toolbar is going to be rendered standalone in the body of our document. We will add one of each of the main button types, so we can experiment with each:

Button—tbbutton: This is the standard button that we are all familiar with.
Split Button—tbsplit: A split button is where you have a default button action and an optional menu. These are used in cases where you need to have many options in the same category as your button, of which there is a most commonly used default option.
Menu—tbbutton+menu: A menu is just a button with the menu config filled in with options.
Ext.onReady(function(){
    new Ext.Toolbar({
        renderTo: document.body,
        items: [{
            xtype: 'tbbutton',
            text: 'Button'
        },{
            xtype: 'tbbutton',
            text: 'Menu Button',
            menu: [{
                text: 'Better'
            },{
                text: 'Good'
            },{
                text: 'Best'
            }]
        },{
            xtype: 'tbsplit',
            text: 'Split Button',
            menu: [{
                text: 'Item One'
            },{
                text: 'Item Two'
            },{
                text: 'Item Three'
            }]
        }]
    });
});

As usual, everything is inside our onReady event handler. The items config holds all of our toolbar's elements—I say elements and not buttons because the toolbar can accept many different types of Ext components, including form fields—which we will be implementing later on in this article. The default xtype for each element in the items config is tbbutton. We can leave out the xtype config element if tbbutton is the type we want, but I like to include it just to help me keep track.

The button

Creating a button is fairly straightforward; the main config option is the text that is displayed on the button. We can also add an icon to be used alongside the text if we want to. Here is a stripped-down button:

{
    xtype: 'tbbutton',
    text: 'Button'
}

Menu

A menu is just a button with the menu config populated—it's that simple. The menu items work along the same principles as the buttons. They can have icons, classes, and handlers assigned to them. The menu items could also be grouped together to form a set of option buttons, but first let's create a standard menu. This is the config for a typical menu:

{
    xtype: 'tbbutton',
    text: 'Button',
    menu: [{
        text: 'Better'
    },{
        text: 'Good'
    },{
        text: 'Best'
    }]
}

As we can see, once the menu array config is populated, the menu comes to life.
To group these menu items together, we would need to set the group config and the boolean checked value for each item:

menu: [{
    text: 'Better',
    checked: true,
    group: 'quality'
},{
    text: 'Good',
    checked: false,
    group: 'quality'
},{
    text: 'Best',
    checked: false,
    group: 'quality'
}]

Split button

The split button sounds like a complex component, but it's just like a button and a menu combined, with a slight twist. By using this type of button, you get to use the functionality of a button while adding the option to select an item from the attached menu. Clicking the left portion of the button that contains the text triggers the button action. However, clicking the right side of the button, which contains a small down arrow, triggers the menu.

{
    xtype: 'tbsplit',
    text: 'Split Button',
    menu: [{
        text: 'Item One'
    },{
        text: 'Item Two'
    },{
        text: 'Item Three'
    }]
}

Toolbar item alignment, dividers, and spacers

By default, every toolbar aligns elements to the leftmost side. There is no alignment config for a toolbar, so if we want to align all of the toolbar buttons to the rightmost side, we need to add a fill as the first item in the toolbar. If we want to have items split up between both the left and right sides, we can also use a fill:

{
    xtype: 'tbfill'
}

Pop this little guy in a toolbar wherever you want to add space and he will push items on either side of the fill to the ends of the toolbar, as shown below: We also have elements that can add space or vertical dividers, like the one used between the Menu Button and the Split Button. The spacer adds a few pixels of empty space that can be used to space out buttons, or move elements away from the edge of the toolbar:

{
    xtype: 'tbspacer'
}

A divider can be added in the same way:

{
    xtype: 'tbseparator'
}

Shortcuts

Ext has many shortcuts that can be used to make coding faster. Shortcuts are a character or two that can be used in place of a configuration object.
For example, consider the standard toolbar filler configuration: { xtype: 'tbfill'} The shortcut for a toolbar filler is a hyphen and a greater than symbol: '->' Not all of these shortcuts are documented. So be adventurous, poke around the source code, and see what you can find. Here is a list of the commonly-used shortcuts:

Apache OFBiz Service Engine: Part 2

Packt
20 Oct 2009
16 min read
Calling Services from Java Code

So far, we have explored services invoked as events from the controller (for example, <event type="service" invoke="learningFirstService"/>). We now look at calling services explicitly from code. To invoke services from code, we use the dispatcher object, which is an object of type org.ofbiz.service.LocalDispatcher. Since this is obtainable from the DispatchContext, we can invoke services from other services. To demonstrate this, we are going to create one simple service that calls another. In our services.xml file in ${component:learning}/servicedef, add two new service definitions:

<service name="learningCallingServiceOne" engine="java" location="org.ofbiz.learning.learning.LearningServices" invoke="callingServiceOne">
    <description>First Service Called From The Controller</description>
    <attribute name="firstName" type="String" mode="IN" optional="false"/>
    <attribute name="lastName" type="String" mode="IN" optional="false"/>
    <attribute name="planetId" type="String" mode="IN" optional="false"/>
    <attribute name="fullName" type="String" mode="OUT" optional="true"/>
</service>

<service name="learningCallingServiceTwo" engine="java" location="org.ofbiz.learning.learning.LearningServices" invoke="callingServiceTwo">
    <description>Second Service Called From Service One</description>
    <attribute name="planetId" type="String" mode="IN" optional="false"/>
</service>

In this simple example it is going to be the job of learningCallingServiceOne to prepare the parameter map and pass in the planetId parameter to learningCallingServiceTwo. The second service will determine if the input is EARTH, and return an error if not.
In the class org.ofbiz.learning.learning.LearningServices, add the static method that is invoked by learningCallingServiceOne:

public static Map callingServiceOne(DispatchContext dctx, Map context){
    LocalDispatcher dispatcher = dctx.getDispatcher();
    Map resultMap = null;
    String firstName = (String) context.get("firstName");
    String lastName = (String) context.get("lastName");
    String planetId = (String) context.get("planetId");
    GenericValue userLogin = (GenericValue) context.get("userLogin");
    Locale locale = (Locale) context.get("locale");
    Map serviceTwoCtx = UtilMisc.toMap("planetId", planetId, "userLogin", userLogin, "locale", locale);
    try{
        resultMap = dispatcher.runSync("learningCallingServiceTwo", serviceTwoCtx);
    }catch(GenericServiceException e){
        Debug.logError(e, module);
        // return an error here rather than risk a NullPointerException below
        return ServiceUtil.returnError("Error calling learningCallingServiceTwo");
    }
    resultMap.put("fullName", firstName + " " + lastName);
    return resultMap;
}

and also the method invoked by learningCallingServiceTwo:

public static Map callingServiceTwo(DispatchContext dctx, Map context){
    String planetId = (String) context.get("planetId");
    Map resultMap = null;
    if(planetId.equals("EARTH")){
        resultMap = ServiceUtil.returnSuccess("This planet is Earth");
    }else{
        resultMap = ServiceUtil.returnError("This planet is NOT Earth");
    }
    return resultMap;
}

To LearningScreens.xml add:

<screen name="TestCallingServices">
    <section>
        <actions><set field="formTarget" value="TestCallingServices"/></actions>
        <widgets>
            <include-screen name="TestFirstService"/>
        </widgets>
    </section>
</screen>

Finally, add the request-map to the controller.xml file:

<request-map uri="TestCallingServices">
    <security auth="false" https="false"/>
    <event type="service" invoke="learningCallingServiceOne"/>
    <response name="success" type="view" value="TestCallingServices"/>
    <response name="error" type="view" value="TestCallingServices"/>
</request-map>

and also the view-map:

<view-map name="TestCallingServices" type="screen" page="component://learning/widget/learning/LearningScreens.xml#TestCallingServices"/>

Stop, rebuild, and restart,
then fire an OFBiz HTTP request TestCallingServices to the learning webapp. Do not be alarmed if straight away you see error messages informing us that the required parameters are missing. By sending this request we have effectively called our service with none of our compulsory parameters present. Enter your name, and in the Planet Id field, enter EARTH. You should see: Try entering MARS as the Planet Id. Notice how in the Java code for the static method callingServiceOne the line resultMap = dispatcher.runSync("learningCallingServiceTwo", serviceTwoCtx); is wrapped in a try/catch block. Similar to how the methods on the GenericDelegator object that accessed the database threw a GenericEntityException, methods on our dispatcher object throw a GenericServiceException which must be handled. There are three main ways of invoking a service:

runSync—which runs a service synchronously and returns the result as a map.
runSyncIgnore—which runs a service synchronously and ignores the result. Nothing is passed back.
runAsync—which runs a service asynchronously. Again, nothing is passed back.

The difference between synchronously and asynchronously run services is discussed in more detail in the section called Synchronous and Asynchronous Services.

Implementing Interfaces

Open up the services.xml file in ${component:learning}/servicedef and take a look at the service definitions for both learningFirstService and learningCallingServiceOne. Do you notice that the <attribute> elements (parameters) are the same? To cut down on the duplication of XML code, services with similar parameters can implement an interface.
As the first service element in this file enter the following:

<service name="learningInterface" engine="interface">
    <description>Interface to describe base parameters for Learning Services</description>
    <attribute name="firstName" type="String" mode="IN" optional="false"/>
    <attribute name="lastName" type="String" mode="IN" optional="false"/>
    <attribute name="planetId" type="String" mode="IN" optional="false"/>
    <attribute name="fullName" type="String" mode="OUT" optional="true"/>
</service>

Notice that the engine attribute is set to interface. Replace all of the <attribute> elements in the learningFirstService and learningCallingServiceOne service definitions with:

<implements service="learningInterface"/>

So the service definition for learningCallingServiceOne becomes:

<service name="learningCallingServiceOne" engine="java" location="org.ofbiz.learning.learning.LearningServices" invoke="callingServiceOne">
    <description>First Service Called From The Controller</description>
    <implements service="learningInterface"/>
</service>

Restart OFBiz and then fire an OFBiz HTTP request TestCallingServices to the learning webapp. Nothing should have changed—the services should run exactly as before; however, our code is now somewhat tidier.

Overriding Implemented Attributes

It may be the case that the interface specifies an attribute as optional="false", however, our service does not need this parameter. We can simply override the interface and add the <attribute> element with whatever settings we wish.
For example, if we wish to make the planetId optional in the above example, the <implements> element could remain, but a new <attribute> element would be added like this:

<service name="learningCallingServiceOne" engine="java" location="org.ofbiz.learning.learning.LearningServices" invoke="callingServiceOne">
    <description>First Service Called From The Controller</description>
    <implements service="learningInterface"/>
    <attribute name="planetId" type="String" mode="IN" optional="true"/>
</service>

Synchronous and Asynchronous Services

The service engine allows us to invoke services synchronously or asynchronously. A synchronous service will be invoked in the same thread, and the thread will "wait" for the invoked service to complete before continuing. The calling service can obtain information from the synchronously run service, meaning its OUT parameters are accessible. Asynchronous services run in a separate thread and the current thread will continue without waiting. The invoked service will effectively start to run in parallel to the service or event from which it was called. The current thread can therefore gain no information from a service that is run asynchronously. An error that occurs in an asynchronous service will not cause a failure or error in the service or event from which it is called. A good example of an asynchronously called service is the sendOrderConfirmation service that creates and sends an order confirmation email. Once a customer has placed an order, there is no need to wait while the mail service is called and the mail sent. The mail server may be down, or busy, which may result in an error that would otherwise stop our customer from placing the order. It is much preferable to allow the customer to continue to the Order Confirmation page and have our business receive the valuable order.
By calling this service asynchronously, there is no delay to the customer in the checkout process, and while we log and fix any errors with the mail server, we still take the order. Behind the scenes, an asynchronous service is actually added to the Job Scheduler. It is the Job Scheduler's task to invoke services that are waiting in the queue.

Using the Job Scheduler

Asynchronous services are added to the Job Scheduler automatically. However, we can see which services are waiting to run and which have already been invoked through the Webtools console. We can even schedule services to run once only or recur as often as we like. Open up the Webtools console at https://localhost:8443/webtools/control/main and take a look under the Service Engine Tools heading. Select Job List to view a full list of jobs. Jobs without a Start Date/Time have not started yet. Those with an End Date/Time have completed. The Run Time is the time they are scheduled to run. All of the outstanding jobs in this list were added to the JobSandbox Entity when the initial seed data load was performed, along with the RecurrenceRule (also an Entity) information specifying how often they should be run. They are all maintenance jobs that are performed "offline". The Pool these jobs are run from by default is set to pool. In an architecture where there may be multiple OFBiz instances connecting to the same database, this can be important. One OFBiz instance can be dedicated to performing certain jobs, and even though job schedulers may be running on each instance, this setting can be changed so we know only one of our instances will run this job. The Service Engine settings can be configured in framework/service/config/serviceengine.xml. By changing both the send-to-pool attribute and the name attribute on the <run-from-pool> element, we can ensure that only jobs created on an OFBiz instance are run by this OFBiz instance.
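The contrast between runSync and runAsync described above can be pictured with plain Java concurrency primitives. This is only an analogy under our own simplified assumptions, not the Job Scheduler's actual implementation:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DispatchSketch {
    // Like runSync: execute in the calling thread and hand the result back.
    static String runSync(Callable<String> service) {
        try {
            return service.call();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Like runAsync: queue the service on another thread; the caller does not
    // wait, and an error in the background task does not stop the caller.
    static Future<String> runAsync(ExecutorService scheduler, Callable<String> service) {
        return scheduler.submit(service);
    }

    public static void main(String[] args) {
        System.out.println(runSync(() -> "This planet is Earth"));
        ExecutorService scheduler = Executors.newSingleThreadExecutor();
        runAsync(scheduler, () -> "order confirmation email sent in the background");
        System.out.println("caller continues without waiting");
        scheduler.shutdown();
    }
}
```

In OFBiz the background "scheduler" is the Job Scheduler itself, and the queued work is persisted to the JobSandbox entity rather than held in memory.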
Click on the Schedule Job button and in the Service field enter learningCallingServiceOne, leave the Pool as pool and enter today's date/time by selecting the calendar icon and clicking on today's date. We will need to add 5 minutes onto this once it appears in the box. In the below example the Date appeared as 2008-06-18 14:11:24.265. This job is only going to be scheduled to run once, although we could specify any recurrence information we wish. Select Submit and notice that the scheduler is already aware of the parameters that can (or must, in this case) be entered. This information has been taken from the service definition in our services.xml file. Press Submit to schedule the job and find the entry in the list. This list is ordered by Run Time so it may not be the first. Recurring maintenance jobs are imported in the seed data and are scheduled to run overnight. These will more than likely be above the job we have just scheduled since their run-time is further in the future. The entered parameters are converted to a map and then serialized to the database. They are then fed to the service at run time.

Quickly Running a Service

Using the Webtools console it is also possible to run a service synchronously. This is quicker than going through the scheduler should you need to test a service or debug through a service. Select the Run Service button from the menu and enter the same service name, submit, then enter the same parameters again. This time the service is run straight away and the OUT parameters and messages are passed back to the screen:

Naming a Service and the Service Reference

Service names must be unique throughout the entire application. Because we do not need to specify a location when we invoke a service, if service names were duplicated we cannot guarantee that the service we want to invoke is the one that is actually invoked.
OFBiz comes complete with a full service reference, which is in fact a dictionary of services that we can use to check if a service exists with the name we are about to choose, or even if there is a service already written that we are about to duplicate. From https://localhost:8443/webtools/control/main select the Service Reference and select "l" for learning. Here we can see all of our learning services, what engine they use and what method they invoke. By selecting the service learningCallingServiceOne, we can obtain complete information about this service as was defined in the service definition file services.xml. It even includes information about the parameters that are passed in and out automatically. Careful selection of intuitive service names and use of the description tags in the service definition files are good practice, since this allows other developers to reuse services that already exist, rather than duplicate work unnecessarily.

Event Condition Actions (ECA)

ECA refers to the structure of rules of a process. The Event is the trigger or the reason why the rule is being invoked. The condition is a check to see if we should continue and invoke the action, and the action is the final resulting change or modification. A real-life example of an ECA could be "If you are leaving the house, check to see if it is raining. If so, fetch an umbrella". In this case the event is "leaving the house". The condition is "if it is raining" and the action is "fetch an umbrella". There are two types of ECA rules in OFBiz: Service Event Condition Actions (SECAs) and Entity Event Condition Actions (EECAs).

Service Event Condition Actions (SECAs)

For SECAs the trigger (Event) is a service being invoked. A condition could be if a parameter equalled something (conditions are optional), and the action is to invoke another service. SECAs are defined in the same directory as service definitions (servicedef).
They live inside files named secas.xml. Take a look at the existing SECAs in applications/order/servicedef/secas.xml and we can see a simple ECA:

<eca service="changeOrderStatus" event="commit" run-on-error="false">
    <condition field-name="statusId" operator="equals" value="ORDER_CANCELLED"/>
    <action service="releaseOrderPayments" mode="sync"/>
</eca>

When the changeOrderStatus transaction is just about to be committed, a lookup is performed by the framework to see if there are any ECAs for this event. If there are, and the parameter statusId is ORDER_CANCELLED, then the releaseOrderPayments service is run synchronously. Most commonly, SECAs are triggered on commit or return; however, it is possible for the event to be in any of the following stages in the service's lifecycle:

auth—Before authentication
in-validate—Before IN parameter validation
out-validate—Before OUT parameter validation
invoke—Before service invocation
commit—Just before the transaction is committed
return—Before the service returns
global-commit
global-rollback
Before using SECAs in a component, the component must be informed of the location of the ECA service-resources: <service-resource type="eca" loader="main" location="servicedef/secas.xml"/> This line must be added under the existing <service-resource> elements in the component's ofbiz-component.xml file. Entity Event Condition Actions (EECAs) For EECAs, the event is an operation on an entity and the action is a service being invoked. EECAs are defined in the same directory as entity definitions (entitydef): inside files named eecas.xml. They are used when it may not necessarily be a service that has initiated an operation on the entity, or you may wish that no matter what service operates on this entity, a certain course of action to be taken. Open the eecas.xml file in the applicationsproductentitydef directory and take a look at the first <eca> element: <eca entity="Product" operation="create-store" event="return"> <condition field-name="autoCreateKeywords" operator="not-equals" value="N"/> <action service="indexProductKeywords" mode="sync" value-attr="productInstance"/></eca> This ECA ensures that once any creation or update operation on a Product record has been committed, so long as the autoCreateKeywords field of this record is not N, then the indexProductKeywords service will be automatically invoked synchronously. The operation can be any of the following self-explanatory operations: create store remove find create-store (create or store/update) create-remove store-remove create-store-remove any The return event is by far the most commonly used event in an EECA. But there are also validate, run, cache-check, cache-put, and cache-clear events. There is also the run-on-error attribute. 
Before using EECAs in a component, the component must be informed of the location of the ECA entity resources:

<entity-resource type="eca" loader="main" location="entitydef/eecas.xml"/>

This line must be added under the existing <entity-resource> elements in the component's ofbiz-component.xml file.

ECAs can often catch people out! Since there is no apparent flow from the trigger to the service in the code, they can be difficult to debug. When debugging, always keep an eye on the logs: when an ECA is triggered, an entry is placed in the logs to inform us of the trigger and the action.

Summary

This brings us to the end of our investigation into the OFBiz Service Engine. We have discovered how useful the Service Oriented Architecture in OFBiz can be, and we have learnt how some of the built-in Service Engine tools, like the Service Reference, can help us when we are creating new services.

In this article we have looked at:

Calling services from code (using a dispatcher)
IN/OUT parameter mismatch when calling services
Sending feedback: the standard return codes success, error, and fail
Implementing service interfaces
Synchronous and asynchronous services
Using the Service Engine tools
ECAs: Event Condition Actions
Multimedia and Assessments with Moodle 1.9 (part 2)
Packt
20 Oct 2009
5 min read
JClic (http://clic.xtec.net/en) is a free (GPL-licensed) software application released by the Ministry of Education of the Government of Catalunya. It is written in Java, and allows us to create the following seven types of interactive activities:

Association games – to identify the relationship between two groups of data
Memory games – to discover hidden pairs of elements
Exploring, identifying, and information games – to start with initial information and choose paths to the answer
Puzzles – to order graphics, text, and audio, or to combine graphics and audio
Written answers – to write text, a word, or a sentence
Text activities – to solve exercises based on words, sentences, letters, and paragraphs (these can be completed, corrected, or ordered)
Wordsearches and crosswords – to find hidden words or solve crossword puzzles

JClic exercises can be more visually appealing than Hot Potatoes, as we will see, and can be particularly useful for younger students. However, as they require Java, this should be checked with the ICT coordinator, since Java must be installed on the school's PCs.

In the software download area (http://clic.edu365.cat/en/jclic/download.htm), we can download JClic author, the application that allows us to create these activities. It uses Java Web Start and runs from a single file named jclic.jnlp. When we run it for the first time, in Microsoft Vista at least, we will need to give the application permission to run (selecting the Always trust content from this publisher option will avoid having to perform this step every time we start JClic). Then JClic will start loading.

The interface of JClic author is as shown in the following screenshot. As can be seen, there are four available tabs:

Project – the default tab, which allows us to define some details of the project
Media library – where pictures and other multimedia are managed
Activities – where the project activities are created or modified
This tab further contains four tabs.

Sequences – where we can sequence several activities in the same project

The options inside these tabs will be available only after we create a new project.

Start a new project

The first step in building interactive JClic activities is to start a new project (via the menu option File | New project). We should then define:

The name of the project
The name of the file in which the project will be saved (having the double extension .jclic.zip)

The default folder for saving the files is:
C:/Program Files/JClic/projects/name of project (in Windows)
$home/JClic/projects/name of project (in other OSs)

We can change this, and if we are using multimedia files, we should again keep everything organized inside this folder.

Creating a puzzle activity

We are now ready to start creating our first activity: a puzzle. In Module 2 – A world of music – we can pick some of the pictures of instruments that our students gathered in the Instrument Mappers activities and create a jigsaw puzzle as part of a final game for the module. We will perform the following steps:

Provide details of the project in the Project tab.
Import a picture into the Media library.
Add an activity called Exchangeable puzzle.
Create a sequence.

Note that we are starting from the tab on the left and moving to the right as we configure the activity. As an example, I created a project called Instruments. Next, I added a description of the activity and specified myself as the author by clicking on the plus (+) button. We can specify more details, but for now this much information is enough as an example.

Now, let's import a picture into our Media library by clicking on the icon on the far left of the toolbar. If we pick a picture from any folder on our computer, JClic will recommend that it be copied to the project folder (we should accept this recommendation, especially if we want to upload our activity to Moodle).
Note that the Media library accepts different kinds of multimedia files, from MP3 to Flash and video. This can be useful in other types of activities. We now have a picture of a lamelaphone that will make a difficult jigsaw for our students.

Lamelaphone image source: Weeks, Alex (2006). Mbira dzavadzimu 1.jpg. Retrieved October 12, 2008, from http://commons.wikimedia.org/wiki/File:Mbira_dzavadzimu_1.jpg

The next step is to add the puzzle activity, on the Activities tab, by clicking on the icon on the far left of the toolbar. A dialog box is displayed; in this menu we should select the Exchange puzzle option, entering a name for our puzzle in the input field at the bottom of the dialog box. We can then add a description of the activity and, if needed, define a timer countdown (in the Counters section), among other options.

Reports are mentioned in this dialog box. JClic provides a way to gather students' responses, but due to the complexity of this functionality, we will not deal with it in this book.

In the Window tab, on the Activities tab, we can also define some color options, as shown in the screenshot below. In the Messages tab, we can add an initial message, which, for example, gives the context of the activity, and a final message, as feedback for the exercise, by clicking on the dark gray areas.

Finally, in the Panel tab, we should insert the lamelaphone picture from our Media library and define the kind of jigsaw that we want. In the following screenshot, I have done the following three things:

Selected a jigsaw with curved unions.
Defined 5x5 pieces.
Selected the image from the Media library.

Our puzzle activity is now ready, and we can now add a finding-pairs activity to the same project, in a sequence.

Integrating ZK with Other Frameworks
Packt
20 Oct 2009
7 min read
Integration with the Spring Framework

Spring is one of the most complete lightweight containers, providing centralized, automated configuration and wiring of your application objects. It improves your application's testability and scalability by allowing software components to first be developed and tested in isolation, then scaled up for deployment in any environment. This approach is called the POJO (Plain Old Java Object) approach and is gaining popularity because of its flexibility. So, with all these advantages, it is no wonder that Spring is one of the most widely used frameworks.

Spring provides many nice features; however, it works mainly in the back end. Here ZK can provide support in the view layer. The benefit of this pairing is the flexibility and maturity of Spring together with the ease and speed of ZK. Specify a Java class in the use attribute of a window in a ZUL page, and the world of Spring will be yours. A sample ZUL looks like:

<?xml version="1.0" encoding="UTF-8"?>
<p:window xsi:schemaLocation="http://www.zkoss.org/2005/zul http://www.zkoss.org/2005/zul"
          border="normal" title="Hello!"
          use="com.myfoo.myapp.HelloController">
    Thank you for using our Hello World Application.
</p:window>

The use attribute points directly to the HelloController Java class, where you can use Spring features easily. Normally, if a Java controller is used for a ZUL page, it becomes necessary sooner or later to call a Spring bean. Usually in Spring you would use the applicationContext like:

ctx = new ClassPathXmlApplicationContext("applicationContext.xml");
UserDAO userDAO = (UserDAO) ctx.getBean("userDAO");

Then the userDAO is usable for any further access. In ZK there is a helper class, SpringUtil. It wraps the applicationContext and simplifies the code to:

UserDAO userDAO = (UserDAO) SpringUtil.getBean("userDAO");

Pretty easy, isn't it? Let us examine an example. Assume we have a small web application that gets flight data from a flight table.
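The idea behind SpringUtil is simply to hold the application context in one place and expose a static lookup, so controllers need no context plumbing of their own. The toy sketch below illustrates that pattern without any Spring or ZK dependency; ToyContext, ToySpringUtil, and all names in it are illustrative stand-ins, not ZK's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a Spring bean factory: just a name -> object registry.
class ToyContext {
    private final Map<String, Object> beans = new HashMap<>();
    void register(String name, Object bean) { beans.put(name, bean); }
    Object getBean(String name) { return beans.get(name); }
}

// Sketch of the SpringUtil idea: capture the context once, then offer static lookup.
final class ToySpringUtil {
    private static ToyContext ctx;
    static void init(ToyContext c) { ctx = c; }
    static Object getBean(String name) { return ctx.getBean(name); }
}

public class Demo {
    public static void main(String[] args) {
        ToyContext ctx = new ToyContext();
        ctx.register("userDAO", "a-user-dao-instance");
        ToySpringUtil.init(ctx);
        // Callers no longer need a context reference, mirroring
        // SpringUtil.getBean("userDAO") inside a ZK controller.
        System.out.println(ToySpringUtil.getBean("userDAO"));
    }
}
```

The design trade-off is the usual one for static helpers: lookup becomes a one-liner anywhere in the view layer, at the cost of a hidden dependency on the initialized context.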
The web.xml file looks like:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.4"
         xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee
                             http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
  <display-name>IRT-FLIGHTSAMPLE</display-name>
  <filter>
    <filter-name>hibernateFilter</filter-name>
    <filter-class>
      org.springframework.orm.hibernate3.support.OpenSessionInViewFilter
    </filter-class>
  </filter>
  <filter-mapping>
    <filter-name>hibernateFilter</filter-name>
    <url-pattern>/*</url-pattern>
  </filter-mapping>
  <context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>classpath:applicationContext-jdbc.xml
      ,classpath:applicationContext-dao.xml
      ,classpath:applicationContext-service.xml
      ,classpath:applicationContext.xml
    </param-value>
  </context-param>
  <listener>
    <listener-class>
      org.springframework.web.context.ContextLoaderListener
    </listener-class>
  </listener>
  <session-config>
    <!-- Default to 30 minute session timeouts -->
    <session-timeout>30</session-timeout>
  </session-config>
  <mime-mapping>
    <extension>xsd</extension>
    <mime-type>text/xml</mime-type>
  </mime-mapping>
  <servlet>
    <description><![CDATA[The servlet loads the DSP pages.]]></description>
    <servlet-name>dspLoader</servlet-name>
    <servlet-class>
      org.zkoss.web.servlet.dsp.InterpreterServlet
    </servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>dspLoader</servlet-name>
    <url-pattern>*.dsp</url-pattern>
  </servlet-mapping>
  <!-- ZK -->
  <listener>
    <description>Used to cleanup when a session is destroyed</description>
    <display-name>ZK Session Cleaner</display-name>
    <listener-class>
      org.zkoss.zk.ui.http.HttpSessionListener
    </listener-class>
  </listener>
  <servlet>
    <description>ZK loader for ZUML pages</description>
    <servlet-name>zkLoader</servlet-name>
    <servlet-class>
      org.zkoss.zk.ui.http.DHtmlLayoutServlet
    </servlet-class>
    <!-- Must. Specifies URI of the update engine (DHtmlUpdateServlet).
         It must be the same as <url-pattern> for the update engine.
    -->
    <init-param>
      <param-name>update-uri</param-name>
      <param-value>/zkau</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>zkLoader</servlet-name>
    <url-pattern>*.zul</url-pattern>
  </servlet-mapping>
  <servlet-mapping>
    <servlet-name>zkLoader</servlet-name>
    <url-pattern>*.zhtml</url-pattern>
  </servlet-mapping>
  <servlet>
    <description>The asynchronous update engine for ZK</description>
    <servlet-name>auEngine</servlet-name>
    <servlet-class>
      org.zkoss.zk.au.http.DHtmlUpdateServlet
    </servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>auEngine</servlet-name>
    <url-pattern>/zkau/*</url-pattern>
  </servlet-mapping>
  <welcome-file-list id="WelcomeFileList">
    <welcome-file>index.zul</welcome-file>
  </welcome-file-list>
</web-app>

Furthermore, let's have a small ZUL page that provides the interface to retrieve and show flight data:

<?xml version="1.0" encoding="UTF-8"?>
<p:window xsi:schemaLocation="http://www.zkoss.org/2005/zul http://www.zkoss.org/2005/zul"
          id="query" use="com.myfoo.controller.SampleController">
  <p:grid>
    <p:rows>
      <p:row>
        Airline Code: <p:textbox id="airlinecode"/>
      </p:row>
      <p:row>
        Flightnumber: <p:textbox id="flightnumber"/>
      </p:row>
      <p:row>
        Flightdate: <p:datebox id="flightdate"/>
      </p:row>
      <p:row>
        <p:button label="Search" id="search"/>
        <p:separator width="5px"/>
      </p:row>
    </p:rows>
  </p:grid>
  <p:listbox width="100%" id="resultlist" mold="paging" rows="21"
             style="font-size: x-small;">
    <p:listhead sizable="true">
      <p:listheader label="Airline Code" sort="auto" style="font-size: x-small;"/>
      <p:listheader label="Flightnumber" sort="auto" style="font-size: x-small;"/>
      <p:listheader label="Flightdate" sort="auto" style="font-size: x-small;"/>
      <p:listheader label="Destination" sort="auto" style="font-size: x-small;"/>
    </p:listhead>
  </p:listbox>
</p:window>

As you can see, the use attribute of the ZUL page is the link to the SampleController. The SampleController handles and controls the objects.
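A controller attaches behavior to these ZUL components through event listeners: it registers a callback, and the framework invokes it when the event fires. The following framework-free toy sketch shows that observer-style wiring; ToyButton, ToyEventListener, and the rest are illustrative stand-ins, not ZK's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for ZK's EventListener interface.
interface ToyEventListener {
    void onEvent(String event) throws Exception;
}

// A toy component that, like a ZK Button, notifies its listeners on click.
class ToyButton {
    private final List<ToyEventListener> listeners = new ArrayList<>();

    void addEventListener(ToyEventListener l) {
        listeners.add(l);
    }

    // Simulates the user clicking: fires "onClick" at every registered listener.
    void click() throws Exception {
        for (ToyEventListener l : listeners) {
            l.onEvent("onClick");
        }
    }
}

public class ListenerDemo {
    public static void main(String[] args) throws Exception {
        ToyButton search = new ToyButton();
        // Same shape as the anonymous EventListener registered in a ZK controller.
        search.addEventListener(event -> System.out.println("search triggered by " + event));
        search.click();
    }
}
```

The controller we look at next uses exactly this shape: it registers an anonymous EventListener on the search button and runs the query from the callback.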
Let's have a short look at the SampleController sample code:

public class SampleController extends Window {

    private Listbox resultlist;
    private Textbox airlinecode;
    private Textbox flightnumber;
    private Datebox flightdate;
    private Button search;

    /**
     * Initialize the page
     */
    public void onCreate() {
        // Components
        resultlist = (Listbox) this.getPage().getFellow("query").getFellow("resultlist");
        airlinecode = (Textbox) this.getPage().getFellow("query").getFellow("airlinecode");
        flightnumber = (Textbox) this.getPage().getFellow("query").getFellow("flightnumber");
        flightdate = (Datebox) this.getPage().getFellow("query").getFellow("flightdate");
        search = (Button) this.getPage().getFellow("query").getFellow("search");

        search.addEventListener("onClick", new EventListener() {
            public void onEvent(Event event) throws Exception {
                performSearch();
            }
        });
    }

    /**
     * Execute the search and fill the list
     */
    private void performSearch() {
        // (1) shows the integration of the Spring bean
        List<Flight> flightlist = ((FlightService) SpringUtil.getBean("flightService"))
                .getFlightBySearch(airlinecode.getValue(), flightnumber.getValue(),
                        flightdate.getValue(), "");
        resultlist.getItems().clear();
        for (Flight aFlightlist : flightlist) {
            // add flights to list
        }
    }
}

Just for completion, the context file for Spring is listed here with the bean that is called:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN"
    "http://www.springframework.org/dtd/spring-beans.dtd">
<beans>
  <bean id="txProxyTemplate" abstract="true"
        class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
    <property name="transactionManager">
      <ref bean="transactionManager"/>
    </property>
    <property name="transactionAttributes">
      <props>
        <prop key="save*">PROPAGATION_REQUIRED</prop>
        <prop key="add*">PROPAGATION_REQUIRED</prop>
        <prop key="remove*">PROPAGATION_REQUIRED</prop>
      </props>
    </property>
  </bean>
  <bean id="flightService" parent="txProxyTemplate">
    <property name="target">
      <bean class="com.myfoo.services.impl.FlightServiceImpl">
        <property name="flightDAO">
          <ref bean="flightDao"/>
        </property>
      </bean>
    </property>
  </bean>
</beans>

In short, we have learned how to use Spring with ZK and about the necessary configuration. We have seen that the integration is quite smooth and also powerful.