
How-To Tutorials - Web Development

1802 Articles

Adapting to User Devices Using Mobile Web Technology

Packt
23 Oct 2009
10 min read
Luigi's Pizza On The Run mobile shop is working well now, and he wants to adapt it to different mobile devices. Let's look at the following:

- Understanding the Lowest Common Denominator method
- Finding and comparing features of different mobile devices
- Deciding whether or not to adapt
- Adapting and progressively enhancing the POTR application using the Wireless Abstraction Library
- Detecting device capabilities
- Evaluating tools that can aid in adaptation
- Moving your blog to the mobile web

By the end of this article, you will have a strong foundation in adapting to different devices.

What is Adaptation?

Adaptation, sometimes called multiserving, means delivering content according to each user device's capabilities. If the visiting device is an old phone supporting only WML, you will show a WML page with Wireless Bitmap (wbmp) images. If it is a newer XHTML MP-compliant device, you will deliver an XHTML MP version, customized according to the screen size of the device. If the user is on iMode in Japan, you will show a Compact HTML (cHTML) version that's more forgiving than XHTML. This way, users get the best experience possible on their device.

Do I Need Adaptation?

I am sure most of you are wondering why you would want to create so many different versions of your mobile site. Isn't following the XHTML MP standard enough? On the Web, you could make sure that you followed XHTML and the site would work in all browsers. The browser-specific quirks are limited and fixes are easy. In the mobile world, however, you have thousands of devices using hundreds of different browsers. You need adaptation precisely for that reason! If you want to serve all users well, you need to worry about adaptation. WML devices will give up if they encounter a <b> tag within an <a> tag. Some XHTML MP browsers will not be able to process a form if it is within a table, but a table within a form will work just fine. If your target audience is limited, and you know that they are going to use a limited range of browsers, you can live without adaptation.

Can't I just Use Common Capabilities and Ignore the Rest?

You can. By finding the Lowest Common Denominator (LCD) of the capabilities of target devices, you can design a site that will work reasonably well on all of them. Devices with better capabilities than the LCD will see a version that may not be very beautiful, but things will work just as well.

How to Determine the LCD?

If you are looking for something more than the W3C DDC guidelines, you may be interested in finding out the capabilities of different devices to decide on your own what features you want to use in your application. There is a nice tool that allows you to search on device capabilities and compare them side by side. Take a look at the following screenshot showing mDevInf (http://mdevinf.sourceforge.net/) in action, showing image formats supported on a generic iMode device. You can search for devices and compare them, and then come to a conclusion about the features you want to use.

This is all good, but when you want to cater to a wider mobile audience, you must consider adaptation. You don't want to fight with browser quirks and silly compatibility issues; you want to focus on delivering a good solution. Adaptation can help you there.

OK, So How do I Adapt?

You have three options to adapt:

- Design alternative CSS: this will control the display of elements and images. This is the easiest method. You can detect the device and link an appropriate CSS file.
- Create multiple versions of pages: redirect the user to a device-specific version. This is called "alteration". This way you get the most control over what is shown to each device.
- Automatic adaptation: create content in one format and use a tool to generate device-specific versions. This is the most elegant method.

Let us rebuild the pizza selection page on POTR to learn how we can detect the device and implement automatic adaptation.

Fancy Pizza Selection

Luigi has been asking to put up photographs of his delicious pizzas on the mobile site, but we didn't do that so far to save bandwidth for users. Let us now go ahead and add images to the pizza selection page. We want to show larger images to devices that can support them. Review the code shown below. It's an abridged version of the actual code.

```php
<?php include_once("wall_prepend.php"); ?>
<wall:document><wall:xmlpidtd />
<wall:head>
  <wall:title>Pizza On The Run</wall:title>
  <link href="assets/mobile.css" type="text/css" rel="stylesheet" />
</wall:head>
<wall:body>
<?php
echo '<wall:h2>Customize Your Pizza #'.$currentPizza.':</wall:h2>
  <wall:form enable_wml="false" action="index.php" method="POST">
  <fieldset>
  <wall:input type="hidden" name="action" value="order" />';

// If we did not get the total number of pizzas to order,
// let the user select
if ($_REQUEST["numPizza"] == -1) {
  echo 'Pizzas to Order: <wall:select name="numPizza">';
  for($i=1; $i<=9; $i++) {
    echo '<wall:option value="'.$i.'">'.$i.'</wall:option>';
  }
  echo '</wall:select><wall:br/>';
} else {
  echo '<wall:input type="hidden" name="numPizza" value="'.$_REQUEST["numPizza"].'" />';
}

echo '<wall:h3>Select the pizza</wall:h3>';

// Select the pizza
$checked = 'checked="checked"';
foreach($products as $product) {
  // Show a product image based on the device size
  echo '<wall:img src="assets/pizza_'.$product["id"].'_120x80.jpg" alt="'.$product["name"].'">
    <wall:alternate_img src="assets/pizza_'.$product["id"].'_300x200.jpg"
      test="'.($wall->getCapa('resolution_width') >= 200).'" />
    <wall:alternate_img nopicture="true" test="'.(!$wall->getCapa('jpg')).'" />
    </wall:img><wall:br />';
  echo '<wall:input type="radio" name="pizza['.$currentPizza.']" value="'.$product["id"].'" '.$checked.'/>';
  echo '<strong>'.$product["name"].' ($'.$product["price"].')</strong> - ';
  echo $product["description"].'<wall:br/>';
  $checked = '';
}

echo '<wall:input type="submit" class="button" name="option" value="Next" />
  </fieldset></wall:form>';
?>
<p><wall:a href="?action=home">Home</wall:a> -
<wall:caller tel="+18007687669">+1-800-POTRNOW</wall:caller></p>
</wall:body>
</wall:document>
```

What are Those <wall:*> Tags?

All those <wall:*> tags are at the heart of adaptation. Wireless Abstraction Library (WALL) is an open-source tag library that transforms the WALL tags into WML, XHTML, or cHTML code. For example, iMode devices use the <br> tag and simply ignore <br />. WALL will ensure that cHTML devices get a <br> tag and XHTML devices get a <br /> tag. You can find a very good tutorial and extensive reference material on WALL at http://wurfl.sourceforge.net/java/wall.php, and you can download WALL and many other tools from that site as well. WALL4PHP, a PHP port of WALL, is available from http://wall.laacz.lv/. That's what we are using for POTR.

Let's Make Sense of This Code!

What are the critical elements of this code? Most of it is very similar to standard XHTML MP. The biggest difference is that tags have a "wall:" prefix. Let us look at some important pieces:

- The wall_prepend.php file at the beginning loads the WALL class, detects the user's browser, and loads its capabilities. You can use the $wall object in your code later to check device capabilities and so on.
- <wall:document> tells the WALL parser to start the document code.
- <wall:xmlpidtd /> will insert the XHTML/WML/cHTML prolog as required. This solves part of the headache in adaptation.
- The next few lines define the page title and meta tags. Code that is not in <wall:*> tags is sent to the browser as is.
- The heading tag will render as bold text on a WML device. You can use many standard tags with WALL; just prefix them with "wall:".
- We do not want to enable WML support in the form. It requires a few more changes in the document structure, and we don't want it to get complex for this example! If you want to support forms on WML devices, you can enable it in the <wall:form> tag.
- The img and alternate_img tags are a cool feature of WALL. You can specify the default image in the img tag, and then specify alternative images based on any condition you like. One of these images will be picked up at run time. WALL can even skip displaying the image altogether if the nopicture test evaluates to true. In our code, we show a 120x80 pixel image by default, and show a larger image if the device resolution is at least 200 pixels wide. As the image is a JPG, we skip showing the image if the device cannot support JPG images. The alternate_img tag also supports showing some icons available natively on the phone. You can refer to the WALL reference for more on this.
- Adapting the phone call link is dead simple. Just use the <wall:caller> tag. Specify the number to call in the tel attribute, and you are done. You can also specify what to display if the phone does not support phone links in the alt attribute.

When you load the URL in your browser, WALL will do all the heavy lifting and show a mouth-watering pizza, or a larger mouth-watering pizza if you have a large screen!

Can I Use All XHTML Tags?

WALL supports many XHTML tags. It has some additional tags to ease menu display and invoke phone calls. You can use <wall:block> instead of <p> or <div> tags because it will degrade well, and yet allow you to specify a CSS class and id. WALL does not have tags for tables, though it can use tables to generate menus. Here's a list of WALL tags you can use: a, alternate_img, b, block, body, br, caller, cell, cool_menu, cool_menu_css, document, font, form, h1, h2, h3, h4, h5, h6, head, hr, i, img, input, load_capabilities, marquee, menu, menu_css, option, select, title, wurfl_device_id, xmlpidtd. Complete listings of the attributes available with each tag, and their meanings, are available from http://wurfl.sourceforge.net/java/refguide.php.

Will This Work Well for WML?

WALL can generate WML. WML itself has limited capabilities, so you will be restricted in the markup that you can use. You have to enclose content in <wall:block> tags and test rigorously to ensure full WML support. WML handles user input in a different way, and we can't use radio buttons or checkboxes in forms. One workaround is to change radio buttons to a menu and pass values using the GET method; another is to convert them to a select drop down. We are not building WML capability into POTR yet. WALL is still useful for us, as it can support cHTML devices and will automatically take care of XHTML implementation variations in different browsers. It can even generate some cool menus for us! Take a look at the following screenshot.
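To show the kind of capability check you can do directly in your own controller code, here is a minimal sketch. It assumes the $wall object created by wall_prepend.php and the WURFL capability names used in the listing above (resolution_width, jpg); the $imageSuffix and $showImages variables are just illustrative helpers, not part of WALL itself.

```php
<?php
// Sketch only: assumes wall_prepend.php has created the $wall object and
// that the WURFL capabilities 'resolution_width' and 'jpg' are available.
include_once("wall_prepend.php");

// Pick an image size class based on the reported screen width.
$width = $wall->getCapa('resolution_width');      // e.g. 128, 176, 240 ...
$imageSuffix = ($width >= 200) ? '_300x200' : '_120x80';

// Fall back to a text-only label if the device cannot render JPG images.
$showImages = (bool) $wall->getCapa('jpg');

echo $showImages
    ? '<wall:img src="assets/pizza_1'.$imageSuffix.'.jpg" alt="Margherita" />'
    : 'Margherita (no image on this device)';
?>
```

This is the same decision the alternate_img tags make declaratively; doing it by hand is only worthwhile when the branching logic goes beyond what the test attribute can express.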


Creating a Custom WCM Workflow for a Group using Alfresco 3

Packt
27 Sep 2010
13 min read
(For more resources on Alfresco 3, see here.)

You can define and deploy your own task-oriented workflows in the Alfresco repository. However, you need to follow a particular format to define your workflow and a particular process to deploy it in Alfresco. Workflows can be deployed manually (which requires a restart of the server) or dynamically (without restarting the server). For now we will deploy the workflow manually. These customizations are typically deployed via the alfresco/extension folder and require the Alfresco server to be restarted to take effect. In later examples, we will deploy using the dynamic approach.

As an example, we will configure one workflow. The use case scenario is as follows. There is a Blogs and News section on the Cignex website, which needs to be updated monthly. The blog has to be published regularly, and in order to publish, one needs to follow a process that can be defined in a workflow. The blog has to be reviewed by three different groups, each with a different role. Groups approve the blog one at a time and in order. When the blog is submitted, it goes to the first group. All the users belonging to that group receive a notification via a task in the My Pooled Tasks dashlet. Any one of the users can take ownership and approve or reject the task. If rejected, it goes back to the initiator. On approval it goes to the next group, and the process continues for all three groups. Once the process is complete, a notification is sent to the initiator and the blog is submitted to the Staging box.

For this, create Jennifer Bruce, Kristie Dawid, LeRoy Fuess, Michael Alison, and Jessica Tucker as users. Create three groups: Technical Reviewer, Editorial, and Publisher. Add Jennifer Bruce and Kristie Dawid to Technical Reviewer, add LeRoy Fuess to Editorial, and add Michael Alison and Jessica Tucker to Publisher. Invite Technical Reviewer, Editorial, and Publisher as Reviewer on the Cignex web project. The custom workflow process is shown in the following diagram:

Defining the workflow process

For any workflow to be deployed you should have the following files:

- Task Model: provides a description for each of the tasks in the workflow. Each task description consists of Name, Title, Properties, Mandatory Aspects, and Associations.
- Process Definition: describes the states (steps) and transitions (choices) of a workflow.
- Resource Bundle (optional): provides all the human-readable messages displayed in the user interface for managing the workflow. Messages include task titles, task property names, task choices, and so on.
- web-client-config-custom.xml: Web Client configuration that specifies the presentation of tasks and properties to the user in the Alfresco Explorer.
- custom-model-context.xml: the custom model Spring context file that instructs Spring how to bootstrap or load the Task Model definition file, Process Definition file, and Resource Bundle.
- web-client-config-wcm.xml: Web Client configuration that specifies the availability of the workflow to the web project in the Alfresco Explorer.

Follow these steps to create a custom workflow.

Step 1: Create a Task Model

For each task in the Process Definition (as defined by <task> elements), it is possible to associate a task description. The description specifies information that may be attached to a task, that is, properties (name and data type), associations (name and type of associated object), and mandatory aspects. A user may view and edit this information in the Task dialog within the Alfresco Explorer. The Task Model is expressed as a Content Model, as supported by the Data Dictionary. To create a Task Model, create a new Content Model file for the Process Definition with the .xml extension, then:

- Define a Content Model name
- Create a Type for each task
- Define Properties
- Define and add Aspects to a Type

Define a Content Model name

Create a new Content Model for the Process Definition and define the namespace of the model. XML namespaces provide a method for avoiding element name conflicts. If you want to use another model's task, aspect, or association, you can do so by importing its namespace, which makes Task Models reusable.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<model name="bookwcmwf:workflowmodel">
  <imports>
    <import uri="http://www.alfresco.org/model/wcmworkflow/1.0" prefix="wcmwf" />
    <import uri="http://www.alfresco.org/model/bpm/1.0" prefix="bpm" />
  </imports>
  <namespaces>
    <namespace uri="http://book.com" prefix="bookwcmwf" />
  </namespaces>
</model>
```

Create a Type for each task

For each task we have to define a Content Type. The Type can also be extended as follows:

```xml
<types>
  <type name="bookwcmwf:submitReviewTask">
    <parent>wcmwf:startTask</parent>
  </type>
</types>
```

Define Properties

Within each Type, describe the Properties and Associations (information) required for that task. Properties can also be inherited from other task definitions. Using the previous example, all the properties of wcmwf:startTask will be added to this Type.

```xml
<type name="bookwcmwf:submitReviewTask">
  <parent>wcmwf:startTask</parent>
  <properties>
    <property name="wcmwf:submitReviewType">
      <title>Serial or Parallel Review</title>
      <type>d:text</type>
    </property>
  </properties>
  <associations>
    <association name="wcmwf:webproject">
      <source>
        <mandatory>false</mandatory>
        <many>false</many>
      </source>
      <target>
        <class>wca:webfolder</class>
        <mandatory>true</mandatory>
        <many>false</many>
      </target>
    </association>
  </associations>
</type>
```

Define Aspects

You can also introduce custom properties by defining an Aspect. An Aspect can be applied to any Content Type; once applied, the properties are added to that Content Type. You cannot define a dependency on other Aspects, and Aspects cannot be extended.

```xml
<type name="bookwcmwf:verifyBrokenLinksTask">
  <parent>wcmwf:workflowTask</parent>
  <mandatory-aspects>
    <aspect>bookwcmwf:reviewInfo</aspect>
    <aspect>bpm:assignee</aspect>
  </mandatory-aspects>
</type>

<aspects>
  <aspect name="bookwcmwf:reviewInfo">
    <properties>
      <property name="bookwcmwf:reviewerCnt">
        <title>Reviewer Count</title>
        <type>d:int</type>
        <mandatory>true</mandatory>
      </property>
    </properties>
  </aspect>
</aspects>
```

The following are the advantages of having a custom Aspect over custom content:

- Flexibility: a custom Aspect gives you the flexibility to add an additional set of properties to the documents in specific spaces.
- Efficiency: since these properties are applied selectively to certain documents only in certain spaces, you use limited storage in the relational database for these properties.

The following are the disadvantages of having a custom Aspect over custom content:

- High maintenance: if the custom Aspect (additional properties) is added to documents based on business rules, you need to define it at every space wherever required.
- Dependency: you cannot define a dependency on other Aspects. For example, if you need the effectivity aspect to always be associated with the custom aspect, you need to make sure you attach both Aspects to the documents.

Now that we are familiar with the code, let's develop a complete model file to deploy our case study. Any customization files have to be developed in the extension folder of <install-alfresco>. Create a file book-serial-group-workflow-wcmModel.xml in the location <install-alfresco>/tomcat/shared/classes/alfresco/extension and copy the downloaded content into the file. For reference, go to http://wiki.alfresco.com/wiki/Data_Dictionary_Guide#Content_Types.

Step 2: Create the Process Definition

A Process Definition represents a formal specification of a business process and is based on a directed graph. The graph is composed of nodes and transitions. Every node in the graph is of a specific Type, and the Type of the node defines the runtime behavior. A Process Definition has exactly one Start-state and one End-state.

The following are some of the key terms used in a Process Definition:

- Swimlane: used to define a role for a user.
- Transition: transitions have a source node and a destination node. The source node is represented by the property from and the destination node is represented by the property to. Transitions are used to connect nodes. A Transition can optionally have a name, which is represented in the UI with a button.
- Task: tasks are associated with a Swimlane. These tasks are defined in the workflow model files, and the properties displayed are based on these tasks.
- Actions: pieces of Java code that are executed upon events in the process execution. These actions are performed on the basis of the tasks, as defined in the Process Definition.
- Events: the jBPM engine fires Events during graph execution. Events specify moments in the execution of the process; an Event can be task-create, node-enter, task-end, process-end, and so on. When the jBPM engine fires an event, the list of Actions is executed.
- Scripts: a Script is executed within an Action. Some of the variables available in a Script are node, task, execution context, and so on.
- Nodes: each Node has a specific type. The Node Type determines what happens when an execution arrives in the Node at runtime.

The following are the Node Types available in jBPM out of the box:

- Task Node: represents one or more tasks that have to be performed by users.
- Start-state: there can be only one Start-state in the Process Definition, which logs the start of the workflow.
- decision: when a decision between multiple paths has to be taken, a decision node is used.
- fork: splits one path of execution into multiple concurrent paths of execution.
- join: joins multiple paths into a single path. A join will end every token that enters the join.
- node: serves the situation where you want to write your own code in a node.
- End-state: there can be only one End-state in the Process Definition, which logs the end of the workflow.

There are two ways of building the Process Definition: by hand, that is, creating a jPDL XML document, or with a designer, that is, using a tool to generate the jPDL XML document. To create a Process Definition, create a new Process Definition file with the extension .xml.

Define a Process Definition name

The Process Definition name is important.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<process-definition name="bookwcmwf:bookworkflow">
```

In the previous code we have used bookwcmwf:bookworkflow, where bookwcmwf is the namespace of the workflow model file defined earlier, which we are going to use in this Process Definition, and bookworkflow can be any name.

Define a Swimlane

Swimlanes are used to declare workflow "roles". Tasks are associated with a Swimlane. Here, initiator is the user who is starting the workflow. Likewise, we have some other roles defined, for example, bpm_assignee (one user to whom the workflow is assigned), bpm_assignees (one or more users), bpm_groupAssignee (a single group), and bpm_groupAssignees (one or more groups).

```xml
<swimlane name="initiator"/>

<swimlane name="approver">
  <assignment class="org.alfresco.repo.workflow.jbpm.AlfrescoAssignment">
    <pooledactors>#{bpm_groupAssignee}</pooledactors>
  </assignment>
</swimlane>

<swimlane name="assignee">
  <assignment class="org.alfresco.repo.workflow.jbpm.AlfrescoAssignment">
    <actor>#{bpm_assignee}</actor>
  </assignment>
</swimlane>
```

Associate a task

We have already defined tasks in the Content Model files. On the basis of these tasks, the properties are displayed. The next step is to add these tasks to the workflow process. To start with, add a task to the start node. The Start Task is assigned to the initiator of the workflow and is used to collect the information (that is, the workflow parameters) required for the workflow to proceed.

```xml
<start-state name="start">
  <task name="bookwcmwf:submitReviewTask" swimlane="initiator"/>
  <transition name="" to="initialise"/>
</start-state>

<swimlane name="assignee">
  <assignment class="org.alfresco.repo.workflow.jbpm.AlfrescoAssignment">
    <actor>#{bpm_assignee}</actor>
  </assignment>
</swimlane>

<task-node name="initialise">
  <task name="bookwcmwf:verifyBrokenLinksTask" swimlane="assignee" />
  <transition name="abort" to="end">
    <action class="org.alfresco.repo.workflow.jbpm.AlfrescoJavaScript">
      <script>
        var mail = actions.create("mail");
        mail.parameters.to = initiator.properties["cm:email"];
        mail.parameters.subject = "Adhoc Task " + bpm_workflowDescription;
        mail.parameters.from = bpm_assignee.properties["cm:email"];
        mail.parameters.text = "It's done";
        mail.execute(bpm_package);
      </script>
    </action>
  </transition>
</task-node>

<end-state name="end"/>
```

During runtime, all the properties of the task bookwcmwf:submitReviewTask are visible to the user who is initiating the workflow. Once the properties are filled in, the initiator assigns a task to another user or group; in this case, it is assigned to a user. The task then appears in the dashlet of the assigned user. The assignee fills in the properties of the task bookwcmwf:verifyBrokenLinksTask and clicks on the abort button. The abort transition calls Alfresco JavaScript that sends an e-mail, and the end-state event logs the end of the workflow.

We are now ready to create a Process Definition file and use the workflow model we developed earlier for our case study. Create a file book-serial-group-processdefinition.xml in the location <install-alfresco>/tomcat/shared/classes/alfresco/extension and copy the downloaded content into the file. For reference, go to http://wiki.alfresco.com/wiki/WorkflowAdministration.

Step 3: Create the workflow Resource Bundles

For localized workflow interaction it is necessary to provide Resource Bundles containing UI labels for each piece of text that is exposed to the user. With the appropriate Resource Bundles, a single workflow instance may spawn tasks where the user interface for each task is rendered in a different language, based on the locale of the user. A specific structure has to be followed in order to define labels for the UI in a Resource Bundle:

```
<model_prefix>_<model_name>.[title|description]
<model_prefix>_<model_name>.<model_element>.<element_prefix>_<element_name>.[title|description]
```

Add all the properties that relate to this Process Definition and model:

```properties
bookwcmwf_bookworkflow.workflow.title=Book Workflow
bookwcmwf_bookworkflow.node.verifybrokenlinks.transition.abort.title=Abort Submission
bookwcmwf_workflowmodel.type.bookwcmwf_reviewTask.description=Review Documents to approve or reject them
```

Create a file book-serial-group-messages.properties in the location <install-alfresco>/tomcat/shared/classes/alfresco/extension and copy the downloaded content into the file.
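The file list at the start of this section also names custom-model-context.xml, which is not shown above. As a rough sketch only, assuming the standard workflowDeployer parent bean that Alfresco 3 provides for bootstrapping jBPM definitions (the bean id here is illustrative), it might look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC '-//SPRING//DTD BEAN//EN' 'http://www.springframework.org/dtd/spring-beans.dtd'>
<beans>
  <!-- Illustrative sketch: bootstraps the Process Definition, Task Model, and Resource Bundle -->
  <bean id="book.workflowBootstrap" parent="workflowDeployer">
    <property name="workflowDefinitions">
      <list>
        <props>
          <prop key="engineId">jbpm</prop>
          <prop key="location">alfresco/extension/book-serial-group-processdefinition.xml</prop>
          <prop key="mimetype">text/xml</prop>
          <prop key="redeploy">false</prop>
        </props>
      </list>
    </property>
    <property name="models">
      <list>
        <value>alfresco/extension/book-serial-group-workflow-wcmModel.xml</value>
      </list>
    </property>
    <property name="labels">
      <list>
        <value>alfresco.extension.book-serial-group-messages</value>
      </list>
    </property>
  </bean>
</beans>
```

Placing this file in the same extension folder and restarting the server corresponds to the manual deployment approach described at the beginning of this article.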
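Similarly, web-client-config-custom.xml (listed earlier but not shown in this excerpt) controls how task properties appear in the Explorer. A minimal, hypothetical sketch for the verify-broken-links task might look like the following; the property and association names simply echo the model above, and the exact set you expose is up to you:

```xml
<alfresco-config>
  <!-- Illustrative sketch: show selected task properties in the Explorer task dialog -->
  <config evaluator="node-type" condition="bookwcmwf:verifyBrokenLinksTask" replace="true">
    <property-sheet>
      <show-property name="bpm:description" component-generator="TextAreaGenerator" />
      <show-property name="bookwcmwf:reviewerCnt" />
      <show-association name="bpm:assignee" />
      <show-property name="bpm:comment" component-generator="TextAreaGenerator" />
    </property-sheet>
  </config>
</alfresco-config>
```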


Using Javascript Effects to enhance your Joomla! website for Visitors

Packt
06 Jul 2010
5 min read
(Read more interesting articles on Joomla! 1.5 here.)

Introduction

Joomla! is a feature-rich content management system, but there are some things it can't do out of the box. This is where JavaScript can become useful in improving the experience of your website for its visitors.

Including a JavaScript file in your Joomla! template

One of the most basic aspects of using JavaScript with your Joomla! template is including it within the page. There are two ways to do this: within the <head> element of your template, or within the <body> element of your template (best placed just above the closing </body> tag). We'll make use of the method that uses the <head> element. (The reasons for doing so are covered in another recipe of this article.)

Getting ready

Open your template's index.php file, located in your template's subdirectory within your Joomla! installation's templates directory.

How to do it...

Locate the <head> element of your Joomla! template in the index.php file and insert a <script> element that references the JavaScript file(s) that you wish to use:

```php
<!-- some HTML omitted for brevity -->
<script type="text/javascript" src="<?php echo $this->baseurl ?>/templates/rhuk_milkyway/js/javascript-file.js"></script>
</head>
```

Note that the base directory of your Joomla! installation is inserted automatically to help prevent any problems with changing directory paths to the JavaScript file, should you change your website's location. You will need to change the template's path if you choose to rename your template. For valid XHTML, you need to specify the type attribute, as shown:

```php
<script type="text/javascript" src="<?php echo $this->baseurl ?>/templates/rhuk_milkyway/js/javascript-file.js"></script>
```

How it works...

When a browser encounters a <script> element in your page, it loads the required behavior included in the JavaScript file, assuming that the browser has JavaScript enabled.

See also

- Including a JavaScript file in your Joomla! template
- Maximizing backward compatibility with JavaScript
- Installing Google Analytics
- Integrating AddThis social bookmarking tool with your Joomla! template

Tips and tricks for minimizing page load time when using JavaScript

While JavaScript can be used to enhance your visitors' experience of your Joomla! website, it can have a negative impact in terms of both real and perceived page loading times. JavaScript slows down the loading of a page because it is single-threaded: nothing else can occur while JavaScript is being loaded or evaluated. As such, a single, slow-loading JavaScript file can prevent a whole website from loading quickly!

How to do it...

The most obvious thing that you can do to make your Joomla! template load more efficiently is to compress any JavaScript you do use. There are a number of online compression tools that you can use, including the JavaScript Compressor (http://javascriptcompressor.com). Once you've inserted your JavaScript into the Paste your code input area, click on Compress. In this example, we've used a function that creates a slideshow with the use of the jQuery library:

```javascript
// Requires jQuery to run
$(document).ready(function() {
  $('#slideshow-wrapper').cycle({
    fx: 'fade',
    speed: 'normal',
    timeout: 0,
    next: '#next',
    prev: '#prev'
  });
  $('#slidie').cycle({
    fx: 'fade'
  });
});
```

As you can see, the compression is noticeable even with a relatively compact piece of JavaScript; the result is around half the size of the original in terms of the memory required to store it (and thus the resources required to load it for a visitor to your website):

```javascript
$(document).ready(function(){$('#slideshow-wrapper').cycle({fx:'fade',speed:'normal',timeout:0,next:'#next',prev:'#prev'});$('#slidie').cycle({fx:'fade'})});
```

Many JavaScript libraries (such as jQuery and MooTools) and features are available in a compressed format already. There is also a Joomla! extension called JCH Optimize that you can use. You can download it from http://extensions.joomla.org/extensions/site-management/site-performance/12088.

How it works...

As you've seen, JavaScript is single-threaded, so one technique to minimize its impact on your Joomla! website's loading times is to make the JavaScript the last possible item in the page to load. Additionally, compressing JavaScript can greatly reduce page-loading times with larger JavaScript files. It's also worth keeping a backup of the uncompressed JavaScript file, as this makes it easier to change and recompress in the future.

There's more...

Another way to minimize your Joomla! template's page load time is to reference JavaScript and other template files from different hostnames. Browsers including Internet Explorer and Firefox have been known to limit the number of simultaneous connections to a hostname, slowing down the loading of the page.

Moving <script> tags to the bottom of the page

The other major factor that can slow down page load times for visitors to your Joomla! website is that their browser has to stop while it deals with any JavaScript included in your page. You can overcome this by reordering the HTML elements in your Joomla! template's index.php file to move the <script> elements to the bottom of the page, where they appear inline.

See also

- Maximizing backward compatibility with JavaScript
- Installing Google Analytics
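As a rough sketch of the "scripts at the bottom" approach described above, the end of your template's index.php might look like this. The template path and filename are just the placeholders from the earlier example; adjust them to your own template.

```php
<!-- ...rest of the template markup above... -->

    <!-- Scripts moved here so the page content renders before they are fetched.
         Path and filename are illustrative; adjust to your own template. -->
    <script type="text/javascript"
            src="<?php echo $this->baseurl ?>/templates/rhuk_milkyway/js/javascript-file.js"></script>
  </body>
</html>
```

The trade-off is that any behavior the script attaches (such as the slideshow above) only becomes active once the script has loaded, so keep anything the page needs immediately in the <head>.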


Plug-ins and Dynamic Actions with Oracle Application Express and Ext JS

Packt
22 Mar 2011
7 min read
Oracle Application Express 4.0 with Ext JS: Deliver rich desktop-styled Oracle APEX applications using the powerful Ext JS JavaScript library.

Plug-ins and dynamic actions are the two most exciting new features for developers in APEX 4.0, and combining them with Ext JS components is a recipe for success. For the first time we now have the ability to add custom "widgets" directly into APEX that can be used declaratively in the same way as native APEX components. Plug-ins and dynamic actions are supported with back-end integration, allowing developers to make use of APEX-provided PL/SQL APIs to simplify component development.

Plug-ins give developers a supported mechanism to enhance the existing built-in functionality by writing custom PL/SQL components for item types, regions, and processes. Dynamic actions provide developers with a way to define client-side behavior declaratively without needing to know JavaScript. Using a simple wizard, developers can select a page item and a condition, enter a value, and select an action (for example, Show, Hide, Enable, and Show Item Row).

Most APEX developers come from a database development background, so they are much more comfortable coding with PL/SQL than JavaScript. The sooner work is focused on PL/SQL development, the more productive APEX developers become. The ability to create plug-ins that can be used declaratively means developers don't have to write page-specific JavaScript for items on a page, or use messy "hacks" to attach additional JavaScript functionality to standard APEX items. Ext JS provides a rich library of sophisticated JavaScript components just waiting to be integrated into APEX using plug-ins.

A home for your plug-ins and dynamic actions

APEX allows you to create plug-ins for item, region, dynamic action, and process types. Like templates and themes, plug-ins are designed to be shared, so they can be easily exported and imported from one workspace application to another. Plug-ins can also be subscribed to, providing a way to easily share and update common attributes between plug-ins.

Building a better Number Field

APEX 4.0 introduced the Number Field as a new item type, allowing you to configure number-range checks by optionally specifying minimum and maximum value attributes. It also automatically checks that the entered value is a number, and performs NOT NULL validation as well. You can also specify a format mask for the number, presumably to enforce decimal places. This all sounds great, and it does work as described, but only after you have submitted the page for processing on the server.

The following screenshot shows the APEX and Ext versions of the Number Field, both set up with a valid number range of 1 to 10,000. The APEX version allows you to enter any characters you want, including letters. The Ext version automatically filters the keys pressed to accept only numbers, conditionally the decimal separator, and the negative sign. It also highlights invalid values when you go outside the valid number range.

We are going to build a better Number Field using APEX plug-ins to provide better functionality on the client side, while still maintaining the same level of server-side validation. The Number Field is quite a simple example, allowing us to be introduced to how APEX plug-ins work without getting bogged down in the details.

The process of building a plug-in requires the following:

- Creating a plug-in in your application workspace
- Creating a test page containing your plug-in and the necessary extras to test it
- Running your application to test functionality
- Repeating the build/test cycle, progressively adding more features until you are satisfied with the result

Creating a plug-in item

For our Number Field, we will be creating an item plug-in. So, navigate to the plug-ins page in Application Builder, found under Application Builder | Application xxx | Shared Components | Plug-ins, and press Create to open the Plug-in Create/Edit page. Start filling in the following fields:

- Name: Ext.form.NumberField. Here, I'm using the Ext naming for the widget, but that's just for convenience.
- Internal Name: Oracle's recommendation here is to use your organization's domain name as a prefix. So, for example, a company domain of mycompany.com and a plug-in named Slider would result in an internal name of COM.MYCOMPANY.SLIDER.
- File Prefix: as we are referencing the Ext JavaScript libraries in our page template, we can skip this field completely. For specialized plug-ins that are only used on a handful of specific pages, you would attach a JavaScript file here.
- Source: for the PL/SQL source code required to implement our plug-in, you can either enter it as a PL/SQL anonymous block that contains functions for rendering, validating, and AJAX callbacks, or refer to code in a PL/SQL package in the database. You get better performance using PL/SQL packages in the database, so that's the smart way to go. Simply include your package function names in the Callbacks section, as shown in the following screenshot, noting that no AJAX function name is specified, as no AJAX functionality is required for this plug-in. The package doesn't need to exist at this time; the form will submit successfully anyway.
- Standard attributes: in addition to being able to create up to ten custom attributes, the APEX team has made the standard attributes available for plug-ins to use. This is really useful, because the standard attributes comprise elements that are useful for most components, such as width and height. They also include items such as List of Values (LOV), which have more complicated validation rules already built in, checking that SQL queries used for the LOV source are valid queries, and so on. For the Number Field, only a few attributes have been checked:
  - Is Visible Widget: indicates the element will be displayed
  - Session State Changeable: so that APEX knows to store the value of the item in session state
  - Has Read Only Attribute: so that conditional logic can be used to make the item read only
  - Has Width Attributes: allows the width of the item to be set

Notice that some of the attributes are disabled in the following screenshot. This is because the APEX Builder conditionally enables the checkboxes that are dependent on another attribute. So, in this screenshot the List of Values Required, Has LOV Display Null Attributes, and Has Cascading LOV Attributes checkboxes are disabled, because they are dependent on the Has List of Values checkbox. Once it is checked, the other checkboxes are enabled.

- Custom attributes: defining the custom attributes for our Number Field is largely an exercise in reviewing the configuration options available in the documentation for Ext.form.NumberField, and deciding which options are most useful to include. You also need to take into account that some configuration options are already covered when you define an APEX item using your plug-in. For example, Item Label is included with an APEX item, but is rendered separately from the item, so it doesn't need to be included. Likewise, Value Required is a validation rule, and is also separate when rendering the APEX item. The following screenshot shows some Ext.form.NumberField configuration options, overlaid with the custom attributes defined in APEX for the plug-in. I've chosen not to include the allowBlank config option as a custom attribute, simply because this can be determined by the APEX Value Required property. Other configuration options, such as allowDecimals and allowNegative, are included as custom attributes as Yes/No types. The custom attributes created are listed in the following table:

- Custom events: custom events are used to define JavaScript events that can be exposed to dynamic actions. For this simple plug-in, there is no need to define any custom events.

At this point, we are done defining the Number Field in APEX; it's time to turn our attention to building the database package to execute the Callbacks to render and validate the plug-in.
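To make the mapping concrete, here is a rough client-side sketch of how the custom attributes discussed above correspond to Ext.form.NumberField configuration options in Ext JS 3. This is not the plug-in's actual render output; the item name P1_QUANTITY and the Min Value/Max Value attribute names are placeholders for whatever your plug-in defines.

```javascript
// Illustrative sketch only: shows how the plug-in's custom attributes
// (minimum value, maximum value, Allow Decimals, Allow Negative) map onto
// Ext.form.NumberField config options. P1_QUANTITY is a placeholder item name.
new Ext.form.NumberField({
  applyTo: 'P1_QUANTITY',   // attach to the existing APEX input element
  minValue: 1,              // minimum of the 1 to 10,000 range used in this example
  maxValue: 10000,          // maximum of the range
  allowDecimals: false,     // "Allow Decimals" custom attribute = No
  allowNegative: false,     // "Allow Negative" custom attribute = No
  allowBlank: true          // left to the APEX "Value Required" property instead
});
```

The server-side render callback would emit an initialization along these lines using the attribute values entered by the developer, while the validate callback repeats the range check in PL/SQL so the client-side behavior never becomes the only line of defense.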


Oracle Enterprise Manager Grid Control

Packt
28 Jun 2010
9 min read
- Evolution of IT systems: as architectural patterns moved from monolithic systems to distributed systems, not all IT systems were moved to the newest patterns. Some new systems were built with new technologies and patterns, whereas existing systems that were performing well enough continued on earlier technologies.
- Best of breed approach: with multi-tiered architectures, enterprises had the choice of building each tier using the best of breed technology for that tier. For example, one system could be built using a J2EE container from vendor A, but a database from vendor B.
- Avoiding single vendors and technologies: enterprises wanted to avoid dependence on any single vendor and technology. This led to systems being built using different technologies, for example, an order-booking system built using .NET technologies on Windows servers, but an order shipment system built using the J2EE platform on Linux servers.
- Acquisitions and mergers: through acquisitions and mergers, enterprises have inherited IT systems that were built using different technologies. Frequently, new systems were added to integrate the systems of two enterprises, but the new systems were totally different from the existing systems, for example, using BPEL Process Manager to integrate a CRM system with a transportation management system.

We see that each factor for diversity in the data center has some business or strategic value. At the same time, such diversity makes management of the data center more complex. To manage such data centers we need a product like Oracle's Enterprise Manager Grid Control that can provide a unified and centralized management solution for the wide array of products.

In any given data center, there are lots of repetitive operations that need to be executed on multiple servers (like applying security patches on all Oracle Databases). As data centers move away from high-end servers to a grid of inexpensive servers, the number of IT resources in the data center increases, and so does the cost of executing repetitive operations on the grid. Enterprise Manager Grid Control provides solutions to reduce the cost of any grid by automating repetitive operations that can be simultaneously executed on multiple servers. Enterprise Manager Grid Control works as a force multiplier by providing support for executing the same operations on multiple servers at the cost of one operation.

As organizations put more emphasis on business and IT alignment, a view of IT resources overlaid with business processes and applications is required. Enterprise Manager Grid Control provides such a view and improves the visibility of IT and business processes in a given data center. By using Enterprise Manager Grid Control, administrators can see IT issues in the context of business processes, and they can understand how business processes are affected by IT performance.

In this article, we will get to know more about Oracle's Enterprise Manager Grid Control by covering the following aspects:

- Key features of Enterprise Manager Grid Control: comprehensive view of the data center, unmanned monitoring, historical data analysis, configuration management, managing multiple entities as one, service level management, scheduling, automating provisioning, information publishing, synthetic transactions, and manage from anywhere
- The Enterprise Manager product family
- The range of products managed by Enterprise Manager: range of products and EM extensibility
- Enterprise Manager Grid Control architecture: multi-tier architecture, major components, and high availability
- Summary of learning

Key features of Enterprise Manager Grid Control

Typical applications in today's world are built with a multi-tiered architecture; to manage such applications a system administrator has to navigate through the multiple management tools and consoles that come along with each product. Some of the tools have a browser interface, some a thick client interface, or even a command line interface. Navigating through multiple management tools often involves doing some actions from a browser, running some scripts, or launching a thick client from the command line. For example, to find bottlenecks in a J2EE application in the production environment, an administrator has to navigate through the management console for the HTTP server, the management console for the J2EE container, and the management console for the database.

Enterprise Manager Grid Control is a systems management product for the monitoring and management of all of the products in the data center. For the scenario explained above, Enterprise Manager provides a common management interface to manage an HTTP server, a J2EE server, and a database. Enterprise Manager provides this unified solution for all products in a data center. In addition to basic monitoring, Enterprise Manager provides a unified interface for many other administration tasks like patching, configuration compliance, backup-recovery, and so on. Some key features of Enterprise Manager are explained here.

Comprehensive view of the data center

Enterprise Manager provides a comprehensive view of the data center, where an administrator can see all of the applications, servers, databases, network devices, storage devices, and so on, along with performance and configuration data. As the number of all such resources is very high, Enterprise Manager highlights the resources that need immediate attention or that may need attention in the near future. For example, a critical security patch is available that needs to be applied on some Fusion Middleware servers, or a server has 90% CPU utilization. The following figure shows one such view of a data center, where users can see all entities that are monitored, that are up, that are down, that have performance alerts, that have configuration violations, and so on. The user can drill down to fine-grained views from this top-level view. The data in the top-level view and the fine-grained drill-down views can be broadly summarized in the following categories.

Performance data

Data that shows how an IT resource is performing, including the current status, overall availability over a period of time, and other performance indicators that are specific to the resource, like the average response time for a J2EE server. Any violation of acceptable performance thresholds is highlighted in this view.

Configuration data

Configuration data is the configuration parameters or configuration files captured from an IT resource. Besides the current configuration, changes in configuration are also tracked and available from Enterprise Manager. Any violation of configuration conformance is also available. For example, if a data center policy mandates that only port 80 should be open on all servers, Enterprise Manager captures any violation of that policy.

Status of scheduled operations

In any data center there are scheduled operations. These could be system administration tasks, such as taking a backup of a database server, or batch processes that move data across systems, for example, moving orders from fulfillment to shipping. Enterprise Manager provides a consolidated view of the status of all such scheduled operations.

Inventory

Enterprise Manager provides a listing of all hardware and software resources with details like version numbers. All of these resources are also categorized in different buckets; for example, Oracle Application Server, WebLogic Application Server, and WebSphere Application Server are all categorized in the middleware bucket. This categorization helps the user to find resources of the same or similar type. Enterprise Manager also captures the finer details of software resources, like patches applied. The following figure shows one such view where the user can see all middleware entities like Oracle WebLogic Server, IBM WebSphere Server, Oracle Application Server, and so on.

Unmanned monitoring

Enterprise Manager monitors IT resources around the clock and gathers all performance indicators at a fixed interval. Whenever a performance indicator goes beyond the defined acceptable limit, Enterprise Manager records that occurrence. For example, if the acceptable limit of CPU utilization for a server is 70%, then whenever CPU utilization of the server goes above 70%, that occurrence is recorded. Enterprise Manager can also send notification of any such occurrence through common notification mechanisms like email, pager, SNMP trap, and so on.

Historical data analysis

All of the performance indicators captured by Enterprise Manager are saved in the repository. Enterprise Manager provides useful views of the data so that the system administrator can analyze it over a period of time. Besides the fine-grained data that is collected at every fixed interval, it also provides coarse views by rolling up the data every hour and every 24 hours.

Configuration management

Enterprise Manager gathers configuration data for IT resources at regular intervals and checks for any configuration compliance violation. Any such violation is captured and can be sent out as a notification. Enterprise Manager comes with many out-of-the-box configuration compliance rules that represent best practices; in addition, system administrators can configure their own rules. All of the configuration data is also saved in the Enterprise Manager repository. Using this data, the system administrator can compare the configuration of two similar IT resources, or compare the configuration of the same IT resource at two different points in time. The system administrator can also see the configuration change history.

Managing multiple entities as one

Most of the more recent applications are built with a multi-tiered architecture, and each tier may run on different IT resources. For example, an order booking application can have all of its presentation and business logic running on a J2EE server, all business data persisted in a database, all authentication and authorization performed through an LDAP server, and all of the traffic to the application routed through an HTTP server. To monitor such applications, all of the underlying resources need to be monitored. Enterprise Manager provides support for grouping such related IT resources. Using this support, the system administrator can monitor all related resources as one entity, and all performance indicators for all related entities can be monitored from one interface.

Service level management

Enterprise Manager provides the necessary constructs and interfaces for managing service level agreements that are based on the performance of IT resources. Using these constructs, the user can define indicators to measure service levels and expected service levels. For example, a service representing a web application can have the average JSP response time as a service indicator; the expected service level for this service is to keep the indicator below three seconds for 90% of the time during business hours. Enterprise Manager keeps track of all such indicators and violations in the context of a service, and at any time the user can see the status of such service level agreements over a defined time period.


The Model-View-Controller pattern and Configuring Web Scripts with Alfresco

Packt
30 Aug 2010
13 min read
(For more resources on Alfresco, see here.) One way of looking at the Web Scripts framework is as a platform for implementing RESTful Web Services. Although, as we have seen, your service won't actually be RESTful unless you follow the relevant guiding principles, Web Scripts technology alone does not make your services RESTful as if by magic. Another way of looking at it is as an implementation of the Model-View-Controller, or MVC pattern. Model-View-Controller is a long-established pattern in Computer Science, often used when designing user-facing, data-oriented applications. MVC stipulates that users of the application send commands to it by invoking the controller component, which acts on some sort of data model, then selects an appropriate view for presenting the model to the users. While the applicability of MVC to Web application has often been debated, it is still a useful framework for partitioning concerns and responsibilities and for describing the roles of the various components of the Alfresco Web Scripts framework. In the latter, the role of controller is carried out by the scripting component. It should be stressed that, in the MVC pattern, the controller's role is purely that of governing the user interaction by selecting an appropriate model and a corresponding view for presentation, possibly determining which user actions are applicable given the present state of the application. It is not the controller's role to carry out any kind of business logic or to operate on the model directly. Rather, the controller should always delegate the execution of data manipulation operations and queries to a suitable business logic or persistence layer. In the context of Alfresco Web Scripts, this means that your controller script should avoid doing too many things by using the repository APIs directly. It is the responsibility of the controller to: Validate user inputs Possibly convert them to data types suitable for the underlying logic layers Delegate operations to those layers Take data returned by them Use the data to prepare a model for the view to display and nothing more All complex operations and direct manipulations of repository objects should ideally be carried out by code that resides somewhere else, like in Java Beans or JavaScript libraries, which are included by the present script. In practice, many Web Scripts tend to be quite small and simple, so this strict separation of concerns is not always diligently applied. However, as the size and complexity of controller scripts grows, it is considered as a good practice to modularize an application's logic in order to make it easier to follow and to maintain. Some people also have a preference for the relative safety of a static language like Java, compared to JavaScript, and for the use of modern Java IDEs. Therefore, it is frequent to see Web Scripts applications that place the very minimum of logic in controller scripts that use Java Beans to carry out more complex tasks. Coming to the view, which in Alfresco Web Scripts is implemented as FreeMarker templates, it should be noted that in a departure from the "pure" MVC pattern, the freedom accorded to the controller itself of choice between different possible views is rather limited as which view to use is determined exclusively by selecting a template for the specific output format requested by the user through the format specification in the request URL. The model that the view can access is also only partially the responsibility of the controller. 
Whereas the latter can add more objects to the model available to the view, it cannot reduce the visibility of the predefined, root-scoped objects. It is therefore possible for the view to perform quite a bit of logic without even having a controller to do it. This is why Web Scripts without a controller are acceptable. Whether this is a good practice or not is open to debate. The following diagram illustrates the steps that are involved when a Web Script is executed: The diagram can be explained as follows: An HTTP request, specifying a method and a URI is received. The dispatcher uses the HTTP method and the URI to select a Web Script to execute and executes the controller script. The controller script accesses the repository services by means of the Alfresco JavaScript API. The model is populated and passed to the FreeMarker template engine for rendering. FreeMarker renders a response using the appropriate template. The response is returned to the client. URL matching We've already seen how the dispatcher selects a particular Web Script by matching the URL of the HTTP request against the value of the url element in the descriptors of the registered Web Scripts. There is actually a bit more to this process than simple, exact matching, as we are going to see. First, let's have a look at the structure of a Web Script's request URL: http[s]://<host>:<port>/[<contextPath>/]/<servicePath>[/ <scriptPath>][?<scriptArgs>] The meaning of host and port should be obvious. contextPath is the name of the web application context, that is, where your application is deployed in your application server or Servlet container. It will often be alfresco, but could be share, as the Share application is able to host Web Scripts. It could be missing, if the application is deployed in the root context, or it could really be anything you want. The value of servicePath will usually be either service or wcservice. Using the former, if the Web Script requires authentication, this is performed using the HTTP Basic method. This means that the browser will pop up a username/password dialog box. When the latter is used, authentication is performed by the Alfresco Explorer (also known as Web Client). This means that no further authentication is required if you are already logged into the Explorer, otherwise you will be redirected to the Explorer login page. scriptPath is the part of the URL that is matched against what is specified in the descriptor. Arguments can optionally be passed to the script by specifying them after the question mark, as with any URL. With this in mind, let's look at the value of the <url> element in the descriptor. This must be a valid URI template, according to the JSR-311 specification. Basically, a URI template is a (possibly relative) URI, parts of which are tokens enclosed between curly braces, such as: /one/two/three /api/login/ticket/{ticket} /api/login?u={username}&pw={password?} Tokens stand for variable portions of the URI and match any value for a path element or a parameter. So, the first template in the previous list only matches /one/two/three exactly, or more precisely: http[s]://<host>:<port>/[<contextPath>/]/<servicePath>/one/two/three The second template here matches any URI that begins with /api/login/ticket/, whereas the third matches the /api/login URI when there is a u parameter present and possibly a pw parameter as well. The ? symbol at the end of a token indicates that the parameter or path element in question is not mandatory. 
Actually, the mandatory character of a parameter is not enforced by Alfresco, but using the question mark is still valuable for documentation purposes to describe what the Web Script expects to receive. We can now precisely describe the operation of the dispatcher as follows: When the dispatcher needs to select a Web Script to execute, it will select the one matching the specific HTTP method used by the request and whose URI template more specifically matches the script path and arguments contained in the request URL. A Web Script descriptor can also have more than one URI template specified in its descriptor, simply by having more than one <url> element. All of them are consulted for matching. The actual values of the path elements specified as tokens are available to the script as entries in the url.templateArgs map variable. For instance, when the /x/foo URL is matched by the /x/{token} template, the value of the expression url. templateArgs["token"] will be equal to foo. Values of request arguments are accessible from the script or template as properties of the args object, such as args.u and args.pw for the third example here. The format requested, which can be specified in the URL by means of the filename extension or of a format argument, need not be specified in the URI template. Authentication In the last version of our Web Script, we specified a value of user for the <authentication> element. When you use this value, users are required to authenticate when they invoke the Web Script's URL, but they can use any valid credentials. When a value of none is present, the Web Script will not require any authentication and will effectively run anonymously. This tends to not be very useful, as all operations using repository objects will require authentication anyway. If you require no authentication, but try to access the repository anyway, the script will throw an exception. A value of guest requires authentication as the guest user. This can be used for scripts accessed from the Explorer, where users are automatically logged in as guest, unless they log in with a different profile. A value of admin requires authentication as a user with the administrator role, typically admin. Run as Scripts can be run as if they were invoked by a user, other than the one who actually provided the authentication credentials. In order to do this, you need to add a runAs attribute to the <authentication> element: <authentication runAs="admin">user</authentication> This can be used, as in the previous example, to perform operations which require administrator privileges without actually logging in as an admin. As this can be a security risk, only scripts loaded from the Java classpath, and not those loaded from the Data Dictionary, can use this feature. The Login service Web Scripts that render as HTML and are therefore intended to be used by humans directly can either use HTTP Basic authentication or Alfresco Explorer authentication, as it is assumed that some person will fill in a login dialog with his username and password. When a script is meant to implement some form of Web Service that is intended for consumption by another application, HTTP Basic or form-based authentication is not always convenient. 
For this reason, Alfresco provides the login service, which can be invoked using the following URL:

http[s]://<host>:<port>/[<contextPath>/]/service/api/login?u={username}&pw={password?}

If authentication is successful, the script returns an XML document with the following type of content:

<ticket>TICKET_024d0fd815fe5a2762e40350596a5041ec73742a</ticket>

Applications can use the value of the ticket element in subsequent requests in order to avoid having to provide user credentials with each request, simply by adding an alf_ticket=TICKET_024d0fd815fe5a2762e40350596a5041ec73742a argument to the URL. As the username and the password are included, unencrypted, in the request URL, it is recommended that any invocations of the login service be carried out over HTTPS.

Transactions

Possible values of the transaction element are: none, required, and requiresnew.

When none is used, scripts are executed without any transactional support. Since most repository operations require a transaction to be active, using none will result in an error whenever the script tries to call repository APIs. required causes the execution of the script to be wrapped in a transaction, which is the normal thing to do. If a transaction is already active when the script is invoked, no new transaction is started. requiresnew, on the other hand, always initiates a new transaction, even if one is already active.

Requesting a specific format

The format element in the Web Script descriptor indicates how clients are expected to specify which rendering format they require. This is what the element looks like:

<format [default="default format"]>extension|argument</format>

A value of extension indicates that clients should specify the format as a filename extension appended to the Web Script's path. For example:

http://localhost:8080/alfresco/service/myscript.html
http://localhost:8080/alfresco/service/myscript.xml

A value of argument indicates that the format specification is to be sent as the value of the format URL parameter:

http://localhost:8080/alfresco/service/myscript?format=html
http://localhost:8080/alfresco/service/myscript?format=xml

When the client fails to specify a particular format, the value of the default attribute is taken as the format to use. Once a format has been determined, the corresponding template is selected for rendering the model, after the script finishes its execution, on the basis of the template's filename, which must be of the form: <basename>.<method>.<format>.ftl

Status

Sometimes it can be necessary for the Web Script to respond to a request with an HTTP status code other than the usual 200 OK status, which indicates that the request has been processed successfully. There might also be a requirement that the response body be different depending on the status code, like, for instance, when you want to display a specific message to indicate that some resource could not be found, together with a status code of 404 Not Found. You can easily do this by manipulating the status object:

if (document == null) {
    status.code = 404;
    status.message = "No such file found.";
    status.redirect = true;
}

You need to set the value of status.redirect to true in order for Alfresco to use an alternative error handling template. When you do this, the Web Scripts framework goes looking for a template with a file name of <basename>.<method>.<format>.<code>.ftl, like, for instance, myscript.get.html.404.ftl, and uses it, if found, instead of the usual myscript.get.html.ftl.
In this template, you can access the following properties of the status variable to customize your output:

status.code: Numeric value of the HTTP status code (for example, 404)
status.codeName: String value of the status code (for example, Not Found)
status.message: Possibly set by the script
status.exception: The exception that caused this status

If your script sets a status code between 300 and 399, which usually means a redirection, you can set the value of status.location to control the value of the location HTTP header:

status.code = 301; // Moved permanently
status.location = 'http://some.where/else';
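To tie these pieces together, here is a minimal, hypothetical controller script that illustrates the division of responsibilities discussed above: it validates its input, delegates the lookup to the repository layer, populates the model, and falls back to a 404 status when nothing is found. The script name, the URI template, and the person lookup are assumptions made for the sake of the example, not part of any particular Alfresco distribution.

// person.get.js -- hypothetical controller for a descriptor with <url>/sample/person/{userName}</url>
var userName = url.templateArgs["userName"];   // path token captured by the URI template

// Validate the input and delegate the lookup; no business logic lives here.
var person = (userName != null) ? people.getPerson(userName) : null;

if (person == null) {
    // Triggers the <basename>.get.html.404.ftl error template described above.
    status.code = 404;
    status.message = "No such user: " + userName;
    status.redirect = true;
}
else {
    // Expose only what the FreeMarker view needs.
    model.person = person;
}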

TYPO3 for Connecting External APIs: Flickr and YouTube

Packt
01 Feb 2010
10 min read
Getting recent Flickr photos

The Flickr API is very powerful and gives access to just about everything a user can do manually. You can write scripts to automatically download the latest pictures from a photostream, download photos or videos tagged with a certain keyword, or post comments on photos. In this recipe, we will make use of the phpFlickr library to perform some basic listing functions for photos in Flickr.

Getting ready

Before you start, you should sign up for a free Flickr account, or use an existing one. Once you have the account, you need to sign up for an API key. You can go to Your Account, and select the Extending Flickr tab. After filling in a short form, you should be given two keys: an API key and a secret key. We will use these in all Flickr operations. We will not go through the steps required for integration into extensions, and will leave this exercise to the reader. The code we present can be used in both frontend plugins and backend modules. As was previously mentioned, we will be using the phpFlickr library. Go to http://phpflickr.com/ to download the latest version of the library and read the complete documentation.

How to do it...

Include phpFlickr, and instantiate the object (modify the path to the library, and replace api-key with your key):

require_once("phpFlickr.php");
$flickrService = new phpFlickr('api-key');

Get a list of photos for a specific user:

$photos = $flickrService->people_getPublicPhotos('7542705@N08');

If the operation succeeds, $photos will contain an array of 100 (by default) photos from the user. You could loop over the array, and print a thumbnail with a link to the full image by:

foreach ($photos['photos']['photo'] as $photo) {
    $imgURL = $flickrService->buildPhotoURL($photo, 'thumbnail');
    print '<a href="http://www.flickr.com/photos/' . $photo['owner'] . '/' . $photo['id'] . '">' .
        '<img src="' . $imgURL . '" /></a><br />';
}

How it works...

The Flickr API is exposed as a set of REST services, which we can issue calls to. The tough work of signing the requests and parsing the results is encapsulated by phpFlickr, so we don't have to worry about it. Our job is to gather the parameters, issue the request, and process the response. In the example above, we got a list of public photos from the user 7542705@N08. You may not know the user ID of the person you want to get photos for, but the Flickr API offers several methods for finding the ID:

$userID = $flickrService->people_findByEmail($email);
$userID = $flickrService->people_findByUsername($username);

If you have the user ID, but want to get more information about the user, you can do it with the following calls:

// Get more info about the user:
$flickrService->people_getInfo($userID);
// Find which public groups the user belongs to:
$flickrService->people_getPublicGroups($userID);
// Get user's public photos:
$flickrService->people_getPublicPhotos($userID);

We utilize the people_getPublicPhotos method to get the user's photostream. The returned array has the following structure:

Array
(
    [photos] => Array
        (
            [page] => 1
            [pages] => 8
            [perpage] => 100
            [total] => 770
            [photo] => Array
                (
                    [0] => Array
                        (
                            [id] => 3960430648
                            [owner] => 7542705@N08
                            [secret] => 9c4087aae3
                            [server] => 3423
                            [farm] => 4
                            [title] => One Cold Morning
                            [ispublic] => 1
                            [isfriend] => 0
                            [isfamily] => 0
                        )
                    [...]
                )
        )
)

We loop over the $photos['photos']['photo'] array, and for each image, we build a URL for the thumbnail using the buildPhotoURL method, and a link to the image page on Flickr.

There's more...
There are lots of other things we can do, but we will only cover a few basic operations.

Error reporting and debugging

Occasionally, you might encounter an output you do not expect. It's possible that the Flickr API returned an error, but by default, it's not shown to the user. You need to call the following functions to get more information about the error:

$errorCode = $flickrService->getErrorCode();
$errorMessage = $flickrService->getErrorMsg();

Downloading a list of recent photos

You can get a list of the most recent photos uploaded to Flickr using the following call:

$recentPhotos = $flickrService->photos_getRecent();

See also

Uploading files to Flickr
Uploading DAM files to Flickr

Uploading files to Flickr

In this recipe, we will take a look at how to upload files to Flickr, as well as how to access other authenticated operations. Although many operations don't require authentication, any interactive functions do. Once you have successfully authenticated with Flickr, you can upload files, leave comments, and make other changes to the data stored in Flickr that you wouldn't be allowed to do without authentication.

Getting ready

If you followed the previous example, you should have everything ready to go. We'll assume you have the $flickrService object instantiated and ready.

How to do it...

Before calling any operations that require elevated permissions, the service needs to be authenticated. Add the following code to perform the authentication:

$frob = t3lib_div::_GET('frob');
if (empty($frob)) {
    $flickrService->auth('write', false);
} else {
    $flickrService->auth_getToken($frob);
}

Call the function to upload the file:

$flickrService->sync_upload($filePath);

Once the file is uploaded, it will appear in the user's photostream.

How it works...

Flickr applications can access any user's data if the user authorizes them. For security reasons, users are redirected to Yahoo! to log into their account and confirm access for your application. Once your application is authorized by a user, a token is stored in Flickr, and can be retrieved at any other time.

$flickrService->auth() requests permissions for the application. If the application is not yet authorized by the user, he/she will be redirected to Flickr. After giving the requested permissions, Flickr will redirect the user to the URL defined in the API key settings. The redirected URL will contain a parameter called frob. If present, $flickrService->auth_getToken($frob); is executed to get the token and store it in the session. Future calls within the session lifetime will not require further calls to Flickr. If the session has expired, the token will be requested from the Flickr service, transparently to the end user.

There's more...

Successful authentication allows you to access other operations that you would not be able to access using regular authentication.

Gaining permissions

There are different levels of permissions that the service can request. You should not request more permissions than your application will use:

$flickrService->auth('read', false): permissions to read users' files, sets, collections, groups, and more.
$flickrService->auth('write', false): permissions to write (upload, create new, and so on).
$flickrService->auth('delete', false): permissions to delete files, groups, associations, and so on.
Choosing between synchronous and asynchronous upload There are two functions that perform a file upload: $flickrService->sync_upload($filePath);$flickrService->async_upload($filePath); The first function continues execution only after the file has been accepted and processed by Flickr. The second function returns after the file has been submitted, but not necessarily processed. Why would you use the asynchronous method? Flickr service may have a large queue of uploaded files waiting to be processed, and your application might timeout while it's waiting. If you don't need to access the uploaded file right after it was uploaded, you should use the asynchronous method. See also Getting recent Flickr photos Uploading DAM files to Flickr Uploading DAM files to Flickr In this recipe, we will make use of our knowledge of the Flickr API and the phpFlickr interface to build a Flickr upload service into DAM. We will create a new action class, which will add our functionality into a DAM file list and context menus. Getting ready For simplicity, we will skip the process of creating the extension. You can download the extension dam_flickr_upload and view the source code. We will examine it in more detail in the How it works... section. How to do it... Sign up for Flickr, and request an API key if you haven't already done so. After you receive your key, click Edit key details. Fill in the application title and description as you see fit. Under the call back URL, enter the web path to the dam_flickr_upload/mod1/index.php file. For example, if your domain is http://domain.com/, TYPO3 is installed in the root of the domain, and you installed dam_flickr_upload in the default local location under typo3conf, then enter http://domain.com/typo3conf/ext/dam_flickr_upload/mod1/index.php You're likely to experience trouble with the callback URL if you're doing it on a local installation with no public URI. Install dam_flickr_upload. In the Extension Manager, under the extension settings, enter the Flickr API key and the secret key you have received. Go to the Media | File module, and click on the control button next to a file. Alternatively, select Send to Flickr in the context menu, which appears if you click on the file icon, as seen in the following screenshot: A new window will open, and redirect you to Flickr, asking you to authorize the application for accessing your account. Confirm the authorization by clicking the OK, I'LL AUTHORIZE IT button. The file will be uploaded, and placed into your photostream on Flickr. Subsequent uploads will no longer need explicit authorization. A window will come up, and disappear after the file has been successfully uploaded. How it works... Let's examine in detail how the extension works. First, examine the file tree. The root contains the now familiar ext_tables.php and ext_conf_template.txt files.The Res directory contains icons used in the DAM. The Lib directory contains the phpFlickr library. The Mod1 directory contains the module for uploading. ext_conf_template.txt This file contains the global extension configuration variables. The two variables defined in this file are the Flickr API key and the Flickr secret key. Both of these are required to upload files. ext_tables.php As was mentioned previously, ext_tables.php is a configuration file that is loaded when the TYPO3 framework is initializing. 
tx_dam::register_action ('tx_dam_action_flickrUpload', 'EXT:dam_flickr_upload/class.tx_dam_flickr_upload_action.php:&tx_dam_flickr_upload_action_flickrUpload'); This line registers a new action in DAM. Actions are provided by classes extending the tx_dam_actionbase class, and define operations that can be performed on files and directories. Examples of actions include view, cut, copy, rename, delete, and more. The second parameter of the function defines where the action class is located. $GLOBALS['TYPO3_CONF_VARS']['EXTCONF']['dam_flickr_upload']['allowedExtensions'] = array('avi', 'wmv', 'mov', 'mpg', 'mpeg','3gp', 'jpg', 'jpeg', 'tiff', 'gif', 'png'); We define an array of file types that can be uploaded to Flickr. This is not hardcoded in the extension, but stored in ext_tables.php, so that it can be overwritten by extensions wanting to limit or expand the functionality to other file types.
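As a sketch of that last point, another extension loaded after dam_flickr_upload could narrow the list down to image formats only by reassigning the same configuration entry in its own ext_tables.php. The exact behavior depends on the load order of your extensions, so treat this as an illustration rather than a drop-in snippet:

$GLOBALS['TYPO3_CONF_VARS']['EXTCONF']['dam_flickr_upload']['allowedExtensions'] = array(
    // only allow still images to be sent to Flickr
    'jpg', 'jpeg', 'tiff', 'gif', 'png'
);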


Graphic Design - Working with Clip Art and Making Your Own

Packt
09 Nov 2012
9 min read
(For more resources related to this topic, see here.) Making symbols from Character Palette into clip art—where to find clip art for iWork Clip art is the collective name for predrawn images, pictures, and symbols that can be quickly added to documents. In standalone products, a separate Clip art folder is often added to the package. iWork doesn't have one — this has been the subject of numerous complaints on the Internet forums. However, even though there is no clip art folder as such in iWork, there are hundreds of clip-art images on Mac computers that come as part of our computers. Unlike MS Office or Open Office that are separate Universes on your machine, iWork (even though we buy it separately) is an integral part of the Mac. It complements and works with applications that are already there, such as iLife (iPhoto), Mail, Preview, Address Book, Dictionaries, and Spotlight. Getting ready So, where is the clip art for iWork? First, elements of the Pages templates can be used as clip art—just copy and paste them. Look at this wrought iron fence post from the Collector Newsletter template. It is used there as a column divider. Select and copy-paste it into your project, set the image placement to Floating, and move it in between the columns or text boxes. The Collector Newsletter template also has a paper clip, a price tag, and several images of slightly rumpled and yellowed sheets of paper that can be used as backgrounds. Images with little grey houses and house keys from the Real Estate Newsletter template are good to use with any project related to property. The index card image from the Back Page of the Green Grocery Newsletter template can be used for designing a cooking recipe, and the background image of a yellowing piece of paper from the Musical Concert poster would make a good background for an article on history. Clip art in many templates is editable and easy to resize or modify. Some of the images are locked or grouped. Under the Arrange menu, select the Unlock and Ungroup options, to use those images as separate graphic elements. Many of the clip-art images are easy to recreate with iWork tools. Bear in mind, however, that some of the images have low resolution and should only be used with small dimensions. You will find various clip-art images in the following locations: A dozen or so attractive clip-art images are in Image Bullets, under the Bullets drop-down menu: Open the Text Inspector, click on the List tab, and choose Image Bullets from the Bullets & Numbering drop-down menu. There, you will find checkboxes and other images. Silver and gold pearls look very attractive, but any of your own original images can also be made into bullets. In Bullets, choose Custom Image | Choose and import your own image. Note that images with shadows may distort the surrounding text. Use them with care or avoid applying shadows. Navigate to Macintosh HD | Library | Desktop Pictures: Double-click on the hard disk icon on your desktop and go to Library | Desktop Pictures. There are several dozen images including the dew drop and the lady bug. These are large files, good enough for using as background images. They are not, strictly speaking, clip art but are worth keeping in mind. Navigate to Home| Pictures | iChat Icons (or HD | Library | Application Support | Apple | iChat Icons): The Home folder icon (a little house) is available in the side panel of any folder on your Mac. This is where documents associated with your account are stored on your computer. 
It has a Pictures folder with a dozen very small images sitting in the folder called iChat Icons. National flags are stored here as button-like images. The apple image can be found in the Fruit folder. The gems icons, such as the ruby heart, from this folder look attractive as bullets. Navigate to HD | Library | User Pictures: You can find animals, flowers, nature, sports, and other clip-art images in this folder. These are small TIFF files that can be used as icons when a personal account is set up on a Mac. But of course, they can be used as clip art. The Sports folder has a selection of balls, but not a cricket ball, even though cricket may have the biggest following in the world (Britain, South Africa, Australia, India, Pakistan, Bangladesh, Sri Lanka, and many Caribbean countries). But, a free image of the cricket ball from Wikipedia/Wikimedia can easily be made into clip art. There may be several Libraries on your Mac. The main Library is on your hard drive; don't move or rename any folders here. Duplicate images from this folder and use copies. Your personal library (it is created for each account on your machine) is in the Home folder. This may sound a bit confusing, but you don't have to wade through endless folders to find what you want, just use Spotlight to find relevant images on your computer, in the same way that you would use Google to search on the Internet. Character Palette has hundreds of very useful clip-art-like characters and symbols. You can find the Character Palette via Edit | Special Characters. Alternatively, open System Preferences | International | Input Menu. Check the Character Palette and Show input menu in menu bar boxes: Now, you will be able to open the Character Palette from the screen-top menu. Character Palette can also be accessed through Font Panel. Open it with the Command + T keyboard shortcut. Click on the action wheel at the bottom of the panel and choose Characters... to open the Character Palette. Check the Character Palette box to find what you need. Images here range from the familiar Command symbol on the Mac keyboard to zodiac symbols, to chess pieces and cards icons, to mathematical and musical signs, various daily life shapes, including icons of telephones, pens and pencils, scissors, airplanes, and so on. And there are Greek letters that can be used in scientific papers (for instance, the letter ∏). To import the Character Palette symbols into an iWork document, just click-and-drag them into your project. The beauty of the Character Palette characters is that they behave like letters. You can change the color and font size in the Format bar and add shadows and other effects in the Graphics Inspector or via the Font Panel. To use the Character Palette characters as clip art, we need to turn them into images in PDF, JPEG, or some other format. How to do it... Let's see how a character can be turned into a piece of clip art. This applies to both letters and symbols from the Character Palette. Open Character Palette | Symbols | Miscellaneous Symbols. In this folder, we have a selection of scissors that can be used to show, with a dotted line, where to cut out coupons or forms from brochures, flyers, posters, and other marketing material. Click on the scissors symbol with a snapped off blade and drag it into an iWork document. Select the symbol in the same way as you would select a letter, and enlarge it substantially. 
To enlarge, click on the Font Size drop-down menu in the Format bar and select a bigger size, or use the shortcut key Command + plus sign (hit the plus key several times). Next, turn the scissors into an image. Make a screenshot (Command + Shift + 4) or use the Print dialog to make a PDF or a JPEG. You can crop the image in iPhoto or Preview before using it in iWork, or you can import it straight into your iWork project and remove the white background with the Alpha tool. If Alpha is not in your toolbar, you can find it under Format |Instant Alpha. Move the scissors onto the dotted line of your coupon. Now, the blade that is snapped in half appears to be cutting through the dotted line. Remember that you can rotate the clip art image to put scissors either on the horizontal or on the vertical sides of the coupon. Use other scissors symbols from the Character Palette, if they are more suitable for your project. Store the "scissors" clip art in iPhoto or another folder for future use if you are likely to need it again. There's more... There are other easily accessible sources of clip art. MS Office clip art is compatible If you have kept your old copy of MS Office, nothing is simpler than copy-pasting or draggingand- dropping clip art from the Office folder right into your iWork project. When using clip art, it's worth remembering that some predrawn images quickly become dated. For example, if you put a clip art image of an incandescent lamp in your marketing documents for electric works, it may give an impression that you are not familiar with more modern and economic lighting technologies. Likewise, a clip art image of an old-fashioned computer with a CRT display put on your promotional literature for computer services can send the wrong message, because modern machines use flat-screen displays. Wikipedia/Wikimedia Look on Wikipedia for free generic images. Search for articles about tools, domestic appliances, furniture, houses, and various other objects. Most articles have downloadable images with no copyright restrictions for re-use. They can easily be made into clip art. This image of a hammer from Wikipedia can be used for any articles about DIY (do-it-yourself) projects. Create your own clip art Above all, it is fun to create your own clip art in iWork. For example, take a few snapshots with your digital camera or cell phone, put them in one of iWork's shapes, and get an original piece of clip art. It could be a nice way to involve children in your project.


User Interface in Production

Packt
01 Dec 2010
10 min read
(For more resources on this subject, see here.) This section will introduce how to add workflow capabilities to any custom assets in plugins. Knowledge Base articles will be used as an example of custom assets. In brief, the Knowledge Base plugin enables companies to consolidate and manage the most accurate information in one place. For example, a user manual is always updated; users are able to rate, to add workflow and to provide feedback on these Knowledge Base articles. Preparing a plugin—Knowledge Base First of all, let's prepare a plugin with workflow capabilities, called Knowledge Base. Note that the plugin Knowledge Base here is used as an example only. You can have your own plugin as well. What's Knowledge Base? The plugin Knowledge Base allows authoring articles and organize them in a hierarchy of navigable categories. It leverages Web Content articles, structures, and templates; allows rating on articles; allows commenting on articles; allows adding hierarchy of categories; allows adding tags to articles; exports articles to PDF and other formats; supports workflow; allows adding custom attributes (called custom fields); supports indexing and advanced search; allows using the rule engine and so on. Most importantly the plugin Knowledge Base supports import of a semantic markup language for technical documentation called DocBook. DocBook enables its users to create document content in a presentation-neutral form that captures the logical structure of the content; that content can then be published in a variety of formats, such as HTML, XHTML, EPUB, and PDF, without requiring users to make any changes to the source. Refer to http://www.docbook.org/. In general, the plugin Knowledge Base provides four portlets inside: Knowledge Base Admin (managing knowledge base articles and templates), Knowledge Base Aggregator (publishing knowledge base articles), Knowledge Base Display (displaying knowledge base articles) and Knowledge Base Search (the ability to search knowledge base articles). Structure The plugin Knowledge Base has following folder structure under the folder $PLUGIN_SDK_HOME/knowledge-base-portlet. admin: View JSP files for portlet Admin aggregator: View JSP files for portlet Aggregator display: View JSP files for portlet Display icons: Icon images files js: JavaScript files META-INF: Contains context.xml search: View JSP files for portlet Search WEB-INF: Web info specification; includes subfolders classes, client, lib, service, SQL, src, and tld As you can see, JSP files such as init.jsp and css_init.jsp are located in the folder $PLUGIN_SDK_HOME/knowledge-base-portlet. Services and models As you may have noticed, the plugin Knowledge Base has specified services and models with the package named com.liferay.knowledgebase. You would be able to find details at $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/service.xml.
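If you have not worked with Service Builder before, the following trimmed-down sketch shows roughly what an entity definition in service.xml looks like. It is not the plugin's actual file, which declares considerably more columns, finders, and entities:

<entity name="Article" local-service="true" remote-service="true">
    <!-- Primary key -->
    <column name="articleId" type="long" primary="true" />

    <!-- Audit and ownership columns -->
    <column name="groupId" type="long" />
    <column name="companyId" type="long" />
    <column name="userId" type="long" />

    <!-- Content columns -->
    <column name="title" type="String" />
    <column name="content" type="String" />
</entity>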
Service Builder in the Plugins SDK will automatically generate services and models against service.xml, plus XML files such as portlet-hbm.xml, portlet-model-hints.xml, portlet-orm.xml, portlet-spring.xml, base-spring.xml, cluster-spring.xml, dynamic-data-source-spring.xml, hibernate-spring.xml, infrastructure-spring.xml, messaging-spring.xml, and shard-data-source-spring.xml under the folder $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/src/META-INF.

The service.xml specifies the Knowledge Base entities: Article, Comment, and Template. The entity Article includes the columns articleId as primary key, resourcePrimKey, groupId, companyId, userId, userName, createDate, modifiedDate, parentResourcePrimKey, version, title, content, description, and priority; the entity Template includes the columns templateId as primary key, groupId, companyId, userId, userName, createDate, modifiedDate, title, content, and description; while the entity Comment includes the columns commentId as primary key, groupId, companyId, userId, userName, createDate, modifiedDate, classNameId, classPK, content, and helpful. As you can see, the entity Comment can be applied to either core assets or custom assets like Article and Template by using classNameId and classPK. By the way, the custom SQL scripts are provided at $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/src/custom-sql/default.xml. In addition, resource actions, that is, the permission actions specification, are provided at $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/src/resource-actions/default.xml.

Of course, you can use the Ant target build-wsdd to generate the WSDD server configuration file $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/server-config.wsdd, and the Ant target build-client plus namespace-mapping.properties to generate a web service client JAR file like $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/client/knowledge-base-portlet-client.jar. In brief, based on your own custom models and services specified in service.xml, you can easily build the service, WSDD, and web service client in plugins of Liferay 6 or above.

Adding workflow instance link

First, you have to add a workflow instance link and its related columns and finder in $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/service.xml as follows.

<column name="status" type="int" />
<column name="statusByUserId" type="long" />
<column name="statusByUserName" type="String" />
<column name="statusDate" type="Date" />

<!-- ignore details -->

<finder name="R_S" return-type="Collection">
    <finder-column name="resourcePrimKey" />
    <finder-column name="status" />
</finder>

<!-- ignore details -->

<reference package-path="com.liferay.portal" entity="WorkflowInstanceLink" />

As shown in the above code, the column element represents a column in the database; the four columns status, statusByUserId, statusByUserName, and statusDate are required for the Knowledge Base workflow, storing the workflow status and the related user info. The finder element represents a generated finder method; the finder R_S is defined with Collection as its return type and two finder columns, resourcePrimKey and status. The reference element allows you to inject services from another service.xml within the same class loader.
For example, if you inject the Resource (that is, WorkflowInstanceLink) entity, then you'll be able to reference the Resource services from your service implementation via the methods getResourceLocalService and getResourceService. You'll also be able to reference the Resource services via the variables resourceLocalService and resourceService. Then, you need to run the Ant target build-service to rebuild the service based on the newly added workflow instance link.

Adding workflow handler

Liferay 6 provides pluggable workflow implementations, where developers can register their own workflow handler implementation for any entity they build. It will appear automatically in the workflow admin portlet so users can associate workflow entities with available permissions. To make it happen, we need to add the workflow handler in $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/liferay-portlet.xml of the plugin as follows.

<workflow-handler>com.liferay.knowledgebase.admin.workflow.ArticleWorkflowHandler</workflow-handler>

As shown in the above code, the workflow-handler value must be a class that extends com.liferay.portal.kernel.workflow.BaseWorkflowHandler and is called when the workflow is run. Of course, you need to specify ArticleWorkflowHandler under the package com.liferay.knowledgebase.admin.workflow. The following is an example code snippet.

public class ArticleWorkflowHandler extends BaseWorkflowHandler {

    public String getClassName() {
        /* return the target class name */
    }

    public String getType(Locale locale) {
        /* return the target entity type, that is, Knowledge Base article */
    }

    public Article updateStatus(
            int status, Map<String, Serializable> workflowContext)
        throws PortalException, SystemException {

        /* userId, resourcePrimKey, and serviceContext are derived from workflowContext */
        return ArticleLocalServiceUtil.updateStatus(
            userId, resourcePrimKey, status, serviceContext);
    }

    protected String getIconPath(ThemeDisplay themeDisplay) {
        /* find the icon path */
    }
}

As you can see, ArticleWorkflowHandler extends BaseWorkflowHandler and overrides the methods getClassName, getType, updateStatus, and getIconPath. That's it.

Updating workflow status

As mentioned in the previous section, you added the method updateStatus in ArticleWorkflowHandler. Now you should provide the implementation of the method updateStatus in com.liferay.knowledgebase.service.impl.ArticleLocalServiceImpl.java. The following is some sample code:

public Article updateStatus(
        long userId, long resourcePrimKey, int status,
        ServiceContext serviceContext)
    throws PortalException, SystemException {

    /* ignore details */

    // Article

    Article article = getLatestArticle(
        resourcePrimKey, WorkflowConstants.STATUS_ANY);

    articlePersistence.update(article, false);

    if (status != WorkflowConstants.STATUS_APPROVED) {
        return article;
    }

    // Articles
    // Asset
    // Social
    // Indexer
    // Attachments
    // Subscriptions
}

As shown in the above code, it first gets the latest article by resourcePrimKey and WorkflowConstants.STATUS_ANY. Then, it updates the article based on the workflow status. Moreover, it updates the articles' display order, asset tags and categories, social activities, the indexer, attachments, subscriptions, and so on. After adding the new method in com.liferay.knowledgebase.service.impl.ArticleLocalServiceImpl.java, you need to run the Ant target build-service to build the services.

Adding workflow-related UI tags

Now it is time to add workflow-related UI tags (AUI tags are used as an example) at $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/admin/edit_article.jsp.
First of all, add the AUI input workflow action with value WorkflowConstants.ACTION_SAVE_DRAFT as follows. <aui:input name="workflowAction" type="hidden" value=" <%= WorkflowConstants.ACTION_SAVE_DRAFT %>" /> As shown in above code, the default value of AUI input workflowAction was set as SAVE DRAFT with type hidden. That is, this AUI input is invisible to end users. Afterwards, it would be better to add workflow messages by UI tag liferay-ui:message, like a-new-version-will-be-created-automatically-if-this-content-is-modified for WorkflowConstants.STATUS_APPROVED, and there-is-a-publication-workflow-in-process for WorkflowConstants.STATUS_PENDING. <% int status = BeanParamUtil.getInteger(article, request, "status", WorkflowConstants.STATUS_DRAFT); %> <c:choose> <c:when test="<%= status == WorkflowConstants.STATUS_APPROVED %>"> <div class="portlet-msg-info"> <liferay-ui:message key="a-new-version-will-be-created- automatically-if-this-content-is-modified" /> </div> </c:when> <c:when test="<%= status == WorkflowConstants.STATUS_PENDING %>"> <div class="portlet-msg-info"> <liferay-ui:message key="there-is-a-publication-workflow-in-process" /> </div></c:when> </c:choose> And then add AUI workflow status tag aui:workflow-status at $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/admin/edit_article.jsp. <c:if test="<%= article != null %>"> <aui:workflow-status id="<%= String.valueOf(resourcePrimKey) %>" status="<%= status %>" version="<%= GetterUtil.getDouble(String. valueOf(version)) %>" /> </c:if> As you can see, aui:workflow-status is used to represent workflow status. Similarly you can find other AUI tags like a button, button_row, column, field_wrapper, fieldset, form, input, layout, legend, option, panel, script, and select. Finally you should add the JavaScript to implement the function publishArticle() as follows. function <portlet:namespace />publishArticle() { document.<portlet:namespace />fm.<portlet:namespace />workflowAction.value = "<%= WorkflowConstants.ACTION_PUBLISH %>"; <portlet:namespace />updateArticle(); } As you can see, the workflow action value is set as WorkflowConstants.ACTION_PUBLISH. You have added workflow capabilities on Knowledge Base articles in plugins. From now on, you will be able to apply workflow on Knowledge Base articles through Control Panel.
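Once the R_S finder from service.xml has been built, the generated persistence layer exposes a corresponding findByR_S method, which a display layer can use to show only articles that have completed the workflow. The class below is a hypothetical sketch; the generated method names and signatures follow the usual Service Builder conventions, so double-check them against the code that build-service produces for you.

import com.liferay.knowledgebase.model.Article;
import com.liferay.knowledgebase.service.persistence.ArticleUtil;
import com.liferay.portal.kernel.workflow.WorkflowConstants;

import java.util.List;

public class ApprovedArticleFetcher {

    // Returns only the articles whose workflow ended in an "approved" status.
    public List<Article> fetchApproved(long resourcePrimKey) throws Exception {
        return ArticleUtil.findByR_S(
            resourcePrimKey, WorkflowConstants.STATUS_APPROVED);
    }
}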


Android application testing: TDD and the temperature converter

Packt
27 Jun 2011
7 min read
Getting started with TDD Briefly, Test Driven Development is the strategy of writing tests along the development process. These test cases are written in advance of the code that is supposed to satisfy them. A single test is added, then the code needed to satisfy the compilation of this test and finally the full set of test cases is run to verify their results. This contrasts with other approaches to the development process where the tests are written at the end when all the coding has been done. Writing the tests in advance of the code that satisfies them has several advantages. First, is that the tests are written in one way or another, while if the tests are left till the end it is highly probable that they are never written. Second, developers take more responsibility for the quality of their work. Design decisions are taken in single steps and finally the code satisfying the tests is improved by refactoring it. This UML activity diagram depicts the Test Driven Development to help us understand the process: The following sections explain the individual activities depicted in this activity diagram. Writing a test case We start our development process with writing a test case. This apparently simple process will put some machinery to work inside our heads. After all, it is not possible to write some code, test it or not, if we don't have a clear understanding of the problem domain and its details. Usually, this step will get you face to face with the aspects of the problem you don't understand, and you need to grasp if you want to model and write the code. Running all tests Once the test is written the obvious following step is to run it, altogether with other tests we have written so far. Here, the importance of an IDE with built-in support of the testing environment is perhaps more evident than in other situations and this could cut the development time by a good fraction. It is expected that firstly, our test fails as we still haven't written any code! To be able to complete our test, we usually write additional code and take design decisions. The additional code written is the minimum possible to get our test to compile. Consider here that not compiling is failing. When we get the test to compile and run, if the test fails then we try to write the minimum amount of code necessary to make the test succeed. This may sound awkward at this point but the following code example in this article will help you understand the process. Optionally, instead of running all tests again you can just run the newly added test first to save some time as sometimes running the tests on the emulator could be rather slow. Then run the whole test suite to verify that everything is still working properly. We don't want to add a new feature by breaking an existing one. Refactoring the code When the test succeeds, we refactor the code added to keep it tidy, clean, and minimal. We run all the tests again, to verify that our refactoring has not broken anything and if the tests are again satisfied, and no more refactoring is needed we finish our task. Running the tests after refactoring is an incredible safety net which has been put in place by this methodology. If we made a mistake refactoring an algorithm, extracting variables, introducing parameters, changing signatures or whatever your refactoring is composed of, this testing infrastructure will detect the problem. 
Furthermore, if some refactoring or optimization could not be valid for every possible case we can verify it for every case used by the application and expressed as a test case. What is the advantage? Personally, the main advantage I've seen so far is that you focus your destination quickly and is much difficult to divert implementing options in your software that will never be used. This implementation of unneeded features is a wasting of your precious development time and effort. And as you may already know, judiciously administering these resources may be the difference between successfully reaching the end of the project or not. Probably, Test Driven Development could not be indiscriminately applied to any project. I think that, as well as any other technique, you should use your judgment and expertise to recognize where it can be applied and where not. But keep this in mind: there are no silver bullets. The other advantage is that you always have a safety net for your changes. Every time you change a piece of code, you can be absolutely sure that other parts of the system are not affected as long as there are tests verifying that the conditions haven't changed. Understanding the testing requirements To be able to write a test about any subject, we should first understand the Subject under test. We also mentioned that one of the advantages is that you focus your destination quickly instead of revolving around the requirements. Translating requirements into tests and cross-referencing them is perhaps the best way to understand the requirements, and be sure that there is always an implementation and verification for all of them. Also, when the requirements change (something that is very frequent in software development projects), we can change the tests verifying these requirements and then change the implementation to be sure that everything was correctly understood and mapped to code. Creating a sample project—the Temperature Converter Our examples will revolve around an extremely simple Android sample project. It doesn't try to show all the fancy Android features but focuses on testing and gradually building the application from the test, applying the concepts learned before. Let's pretend that we have received a list of requirements to develop an Android temperature converter application. Though oversimplified, we will be following the steps you normally would to develop such an application. However, in this case we will introduce the Test Driven Development techniques in the process. The list of requirements Most usual than not, the list of requirements is very vague and there is a high number of details not fully covered. 
As an example, let's pretend that we receive this list from the project owner: The application converts temperatures from Celsius to Fahrenheit and vice-versa The user interface presents two fields to enter the temperatures, one for Celsius other for Fahrenheit When one temperature is entered in one field the other one is automatically updated with the conversion If there are errors, they should be displayed to the user, possibly using the same fields Some space in the user interface should be reserved for the on screen keyboard to ease the application operation when several conversions are entered Entry fields should start empty Values entered are decimal values with two digits after the point Digits are right aligned Last entered values should be retained even after the application is paused User interface concept design Let's assume that we receive this conceptual user interface design from the User Interface Design team: Creating the projects Our first step is to create the project. As we mentioned earlier, we are creating a main and a test project. The following screenshot shows the creation of the TemperatureConverter project (all values are typical Android project values): When you are ready to continue you should press the Next > button in order to create the related test project. The creation of the test project is displayed in this screenshot. All values will be selected for you based on your previous entries:
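To give a flavor of where this is heading, the first test case produced by the process described above might look like the following sketch. The TemperatureConverter class and its celsiusToFahrenheit() method are deliberately assumed here and do not exist yet; writing the test first is exactly what will force us to create them in the smallest possible steps.

import junit.framework.TestCase;

public class TemperatureConverterTest extends TestCase {

    public void testCelsiusToFahrenheitConversion() {
        // 100.00 degrees Celsius must convert to 212.00 degrees Fahrenheit.
        // The delta accounts for the two-decimal precision in the requirements.
        assertEquals(212.00, TemperatureConverter.celsiusToFahrenheit(100.00), 0.005);
    }
}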

Programming littleBits circuits with JavaScript Part 2

Anna Gerber
14 Dec 2015
5 min read
In this two-part series, we're programing littleBits circuits using the Johnny-Five JavaScript Robotics Framework. Be sure to read over Part 1 before continuing here. Let's create a circuit to play with, using all of the modules from the littleBits Arduino Coding Kit. Attach a button to the Arduino connector labelled d0. Attach a dimmer to the connector marked a0 and a second dimmer to a1. Turn the dimmers all the way to the right (max) to start with. Attach a power module to the single connector on the left-hand side of the fork module, and the three output connectors of the fork module to all of the input modules. The bargraph should be connected to d5, and the servo to d9, and both set to PWM output mode using the switches on board of the Arduino. The servo module has two modes: turn and swing. Swing mode makes the servo sweep betwen maximum and minimum. Set it to swing mode using the onboard switch. Reading input values We'll create an instance of the Johnny-Five Button class to respond to button press events. Our button is connected to the connector labelled d0 (i.e. digital "pin" 0) on our Arduino, so we'll need to specify the pin as an argument when we create the button. var five = require("johnny-five"); var board = new five.Board(); board.on("ready", function() { var button = new five.Button(0); }); Our dimmers are connected to analog pins (A0 and A1), so we'll specify these as strings when we create Sensor objects to read their values. We can also provide options for reading the values; for example, we'll set the frequency to 250 milliseconds, so we'll be receiving 4 readings per second for both dimmers. var dimmer1 = new five.Sensor({ pin: "A0", freq: 250 }); var dimmer2 = new five.Sensor({ pin: "A1", freq: 250 }); We can attach a function that will be run any time the value changes (on "change") or anytime we get a reading (on "data"): dimmer1.on("change", function() { // raw value (between 0 and 1023) console.log("dimmer 1 is " + this.raw); }); Run the code and try turning dimmer 1. You should see the value printed to the console whenever the dimmer value changes. Triggering behavior Now we can use code to hook our input components up to our output components. To use, for example, the dimmer to control the brightness of the bargraph, change the code in the event handler: var led = new five.Led(5); dimmer1.on("change", function() { // set bargraph brightness to one quarter // of the raw value from dimmer led.brightness(Math.floor(this.raw / 4)); }); You'll see the bargraph brightness fade as you turn the dimmer. We can use the JavaScript Math library and operators to manipulate the brightness value before we send it to the bargraph. Writing code gives us more control over the mapping between input values and output behaviors than if we'd snapped our littleBits modules directly together without going via the Arduino. We set our d5 output to PWM mode, so all of the LEDs should fade in and out at the same time. If we set the output to analog mode instead, we'd see the behavior change to light up more or fewer LEDs depending on value of the brightness. Let's use the button to trigger the servo stop and start functions. Add a button press handler to your code, and a variable to keep track of whether the servo is running or not. We'll toggle this variable between true and false using JavaScript's boolean not operator (!). We can determine whether to stop or start the servo each time the button is pressed via a conditional statement based on the value of this variable. 
var servo = new five.Motor(9); servo.start(); var button = new five.Button(0); var toggle = false; var speed = 255; button.on("press", function(value){ toggle = !toggle; if (toggle) { servo.start(speed); } else { servo.stop(); } }); The other dimmer can be used to change the servo speed: dimmer2.on("change", function(){ speed = Math.floor(this.raw / 4); if (toggle) { servo.start(speed); } }); There are many input and output modules available within the littleBits system for you to experiment with. You can use the Sensor class with input modules, and check out the Johnny-Five API docs to see examples of types of outputs supported by the API. You can always fall back to using the Pin class to program any littleBits module. Using the REPL Johnny-Five includes a Read-Eval-Print-Loop (REPL) so you can interactively write code to control components instantly - no waiting for code to compile and upload! Any of the JavaScript objects from your program that you want to access from the REPL need to be "injected". The following code, for example, injects our servo and led objects. this.repl.inject({ led: led, servo: servo }); After running the program using Node.js, you'll see a >> prompt in your terminal. Try some of the following functions (hit Enter after each to see it take effect): servo.stop(): stop the servo servo.start(50): start servo moving at slow speed servo.start(255): start servo moving at max speed led.on(): turn LED on led.off(): turn LED off led.pulse(): slowly fade LED in and out led.stop(): stop pulsing LED led.brightness(100): set brightness of LED - the parameter should be between 0 and 255 LittleBits are fantastic for prototyping, and pairing the littleBits Arduino with JavaScript makes prototyping interactive electronic projects even faster and easier. About the author Anna Gerber is a full-stack developer with 15 years of experience in the university sector. Specializing in Digital Humanities, she was a Technical Project Manager at the University of Queensland’s eResearch centre, and she has worked at Brisbane’s Distributed System Technology Centre as a Research Scientist. Anna is a JavaScript robotics enthusiast who enjoys tinkering with soft circuits and 3D printers.


Understanding the Datastore

Packt
14 Sep 2015
41 min read
 In this article by Mohsin Hijazee, the author of the book Mastering Google App Engine, we will go through learning, but unlearning something is even harder. The main reason why learning something is hard is not because it is hard in and of itself, but for the fact that most of the times, you have to unlearn a lot in order to learn a little. This is quite true for a datastore. Basically, it is built to scale the so-called Google scale. That's why, in order to be proficient with it, you will have to unlearn some of the things that you know. Your learning as a computer science student or a programmer has been deeply enriched by the relational model so much so that it is natural to you. Anything else may seem quite hard to grasp, and this is the reason why learning Google datastore is quite hard. However, if this were the only glitch in all that, things would have been way simpler because you could ask yourself to forget the relational world and consider the new paradigm afresh. Things have been complicated due to Google's own official documentation, where it presents a datastore in a manner where it seems closer to something such as Django's ORM, Rails ActiveRecord, or SQLAlchemy. However, all of a sudden, it starts to enlist its limitations with a very brief mention or, at times, no mention of why the limitations exist. Since you only know the limitations but not why the limitations are there in the first place, a lack of reason may result to you being unable to work around those limitations or mold your problem space into the new solution space, which is Google datastore. We will try to fix this. Hence, the following will be our goals in this article: To understand BigTable and its data model To have a look at the physical data storage in BigTable and the operations that are available in it To understand how BigTable scales To understand datastore and the way it models data on top of BigTable So, there's a lot more to learn. Let's get started on our journey of exploring datastore. The BigTable If you decided to fetch every web page hosted on the planet, download and store a copy of it, and later process every page to extract data from it, you'll find out that your own laptop or desktop is not good enough to accomplish this task. It has barely enough storage to store every page. Usually, laptops come with 1 TB hard disk drives, and this seems to be quite enough for a person who is not much into video content such as movies. Assuming that there are 2 billion websites, each with an average of 50 pages and each page weighing around 250 KB, it sums up to around 23,000+ TB (or roughly 22 petabytes), which would need 23,000 such laptops to store all the web pages with a 1 TB hard drive in each. Assuming the same statistics, if you are able to download at a whopping speed of 100 MBps, it would take you about seven years to download the whole content to one such gigantic hard drive if you had one in your laptop. Let's suppose that you downloaded the content in whatever time it took and stored it. Now, you need to analyze and process it too. If processing takes about 50 milliseconds per page, it would take about two months to process the entire data that you downloaded. The world would have changed a lot by then already, leaving your data and processed results obsolete. This is the Kind of scale for which BigTable is built. Every Google product that you see—Search Analytics, Finance, Gmail, Docs, Drive, and Google Maps—is built on top of BigTable. 
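For readers who want to sanity-check those back-of-the-envelope figures, a few lines of Python reproduce them from the same rough assumptions (2 billion sites, 50 pages each, 250 KB per page, and a 100 MBps link):

pages = 2 * 10**9 * 50              # 2 billion sites x 50 pages each
total_kb = pages * 250              # 250 KB per page
total_tb = total_kb / 1024.0**3     # KB -> TB
print(round(total_tb))              # ~23283 TB, roughly 22 petabytes

seconds = total_kb / (100 * 1024.0)             # downloading at 100 MBps
print(round(seconds / (3600 * 24 * 365), 1))    # ~7.7 years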
If you want to read more about BigTable, you can go through the academic paper from Google Research, which is available at http://static.googleusercontent.com/media/research.google.com/en//archive/bigtable-osdi06.pdf. The data model Let's examine the data model of BigTable at a logical level. BigTable is basically a key-value store. So, everything that you store falls under a unique key, just like PHP' arrays, Ruby's hash, or Python's dict: # PHP $person['name'] = 'Mohsin'; # Ruby or Python person['name'] = 'Mohsin' However, this is a partial picture. We will learn the details gradually in a while. So, let's understand this step by step. A BigTable installation can have multiple tables, just like a MySQL database can have multiple tables. The difference here is that a MySQL installation might have multiple databases, which in turn might have multiple tables. However, in the case of BigTable, the first major storage unit is a table. Each table can have hundreds of columns, which can be divided into groups called column families. You can define column families at the time of creating a table. They cannot be altered later, but each column family might have hundreds of columns that you can define even after the creation of the table. The notation that is used to address a column and its column families is like job:title, where job is a column family and title is the column. So here, you have a job column family that stores all the information about the job of the user, and title is supposed to store the job title. However, one of the important facts about these columns is that there's no concept of datatypes in BigTable as you'd encounter in other relational database systems. Everything is just an uninterpreted sequence of bytes, which means nothing to BigTable. What they really mean is just up to you. It might be a very long integer, a string, or a JSON-encoded data. Now, let's turn our attention to the rows. There are two major characteristics of the rows that we are concerned about. First, each row has a key, which must be unique. The contents of the key again consist of an uninterpreted string of bytes that is up to 64 KB in length. A key can be anything that you want it to be. All that's required is that it must be unique within the table, and in case it is not, you will have to overwrite the contents of the row with the same content. Which key should you use for a row in your table? That's the question that requires some consideration. To answer this, you need to understand how the data is actually stored. Till then, you can assume that each key has to be a unique string of bytes within the scope of a table and should be up to 64 KB in length. Now that we know about tables, column families, columns, rows, and row keys, let's look at an example of BigTable that stores 'employees' information. Let's pretend that we are creating something similar to LinkedIn here. So, here's the table: Personal Professional Key(name) personal:lastname personal:age professinal:company professional:designation Mohsin Hijazee 29 Sony Senior Designer Peter Smith 34 Panasonic General Manager Kim Yong 32 Sony Director Ricky Martin 45 Panasonic CTO Paul Jefferson 39 LG Sales Head So, 'this is a sample BigTable. The first column is the name, and we have chosen it as a key. It is of course not a good key, because the first name cannot necessarily be unique, even in small groups, let alone in millions of records. However, for the sake of this example, we will assume that the name is unique. 
Another reason behind assuming the name's uniqueness is that we want to increase our understanding gradually. So, the key point here is that we picked the first name as the row's key for now, but we will improve on this as we learn more. Next, we have two column groups. The personal column group holds all the personal attributes of the employees, and the other column family named professional has all the other attributes pertaining to the professional aspects. When referring to a column within a family, the notation is family:column. So, personal:age contains the age of the employees. If you look at professinal:designation and personal:age, it seems that the first one's contents are strings, while the second one stores integers. That's false. No column stores anything but just plain bytes without any distinction of what they mean. The meaning and interpretation of these bytes is up to the user of the data. From the point of view of BigTable', each column just contains plain old bytes. Another thing that is drastically different from RDBMS is such as MySQL is that each row need not have the same number of columns. Each row can adopt the layout that they want. So, the second row's personal column family can have two more columns that store gender and nationality. For this particular example, the data is in no particular order, and I wrote it down as it came to my mind. Hence, there's no order of any sort in the data at all. To summarize, BigTable is a key-value storage where keys should be unique and have a length that is less than or equal to 64 KB. The columns are divided into column families, which can be created at the time of defining the table, but each column family might have hundreds of columns created as and when needed. Also, contents have no data type and comprise just plain old bytes. There's one minor detail left, which is not important as regards our purpose. However, for the sake of the completeness of the BigTable's data model, I will mention it now. Each value of the column is stored with a timestamp that is accurate to the microseconds, and in this way, multiple versions of a column value are available. The number of last versions that should be kept is something that is configurable at the table level, but since we are not going to deal with BigTable directly, this detail is not important to us. How data is stored? Now that we know about row keys, column families, and columns, we will gradually move towards examining this data model in detail and understand how the data is actually stored. We will examine the logical storage and then dive into the actual structure, as it ends up on the disk. The data that we presented in the earlier table had no order and were listed as they came to my mind. However, while storing, the data is always sorted by the row key. So now, the data will actually be stored like this: personal professional Key(name) personal:lastname personal:age professinal:company professional:designation Kim Yong 32 Sony Director Mohsin Hijazee 29 Sony Senior Designer Paul Jefferson 39 LG Sales Head Peter Smith 34 Panasonic General Manager Ricky Martin 45 Panasonic CTO OK, so what happened here? The name column indicates the key of the table and now, the whole table is sorted by the key. That's exactly how it is stored on the disk as well. 'An important thing about sorting is lexicographic sorting and not semantic sorting. By lexicographic, we mean that they are sorted by the byte value and not by the textness or the semantic sort. 
This matters because even within the Latin character set, different languages have different sort orders for letters, such as letters in English versus German and French. However, all of this and the Unicode collation order isn't valid here. It is just sorted by byte values. In our instance, since K has a smaller byte value (because K has a lower ASCII/Unicode value) than letter M, it comes first. Now, suppose that some European language considers and sorts M before K. That's not how the data would be laid out here, because it is a plain, blind, and simple sort. The data is sorted by the byte value, with no regard for the semantic value. In fact, for BigTable, this is not even text. It's just a plain string of bytes. Just a hint. This order of keys is something that we will exploit when modeling data. How? We'll see later. The Physical storage Now that we understand the logical data model and how it is organized, it's time to take a closer look at how this data is actually stored on the disk. On a physical disk, the stored data is sorted by the key. So, key 1 is followed by its respective value, key 2 is followed by its respective value, and so on. At the end of the file, there's a sorted list of just the keys and their offset in the file from the start, which is something like the block to the right: Ignore the block on your left that is labeled Index. We will come back to it in a while. This particular format actually has a name SSTable (String Storage Table) because it has strings (the keys), and they are sorted. It is of course tabular data, and hence the name. Whenever your data is sorted, you have certain advantages, with the first and foremost advantage being that when you look up for an item or a range of items, 'your dataset is sorted. We will discuss this in detail later in this article. Now, if we start from the beginning of the file and read sequentially, noting down every key and then its offset in a format such as key:offset, we effectively create an index of the whole file in a single scan. That's where the first block to your left in the preceding diagram comes from. Since the keys are sorted in the file, we simply read it sequentially till the end of the file, hence effectively creating an index of the data. Furthermore, since this index only contains keys and their offsets in the file, it is much smaller in terms of the space it occupies. Now, assuming that SSTable has a table that is, say, 500 MB in size, we only need to load the index from the end of the file into the memory, and whenever we are asked for a key or a range of keys, we just search within a memory index (thus not touching the disk at all). If we find the data, only then do we seek the disk at the given offset because we know the offset of that particular key from the index that we loaded in the memory. Some limitations Pretty smart, neat, and elegant, you would say! Yes it is. However, there's a catch. If you want to create a new row, key must come in a sorted order, and even if you are sure about where exactly this key should be placed in the file to avoid the need to sort the data, you still need to rewrite the whole file in a new, sorted order along with the index. Hence, large amounts of I/O are required for just a single row insertion. The same goes for deleting a row because now, the file should be sorted and rewritten again. Updates are OK as long as the key itself is not altered because, in that case, it is sort of having a new key altogether. 
This is so because a modified key would have a different place in the sorted order, depending on what the key actually is. Hence, the whole file would be rewritten. Just for an example, say you have a row with the key as all-boys, and then you change the key of that row to x-rays-of-zebra. Now, you will see that after the new modification, the row will end up at nearly the end of the file, whereas previously, it was probably at the beginning of the file because all-boys comes before x-rays-of-zebra when sorted. This seems pretty limiting, and it looks like inserting or removing a key is quite expensive. However, this is not the case, as we will see later. Random writes and deletion There's one last thing that's worth a mention before we examine the operations that are available on a BigTable. We'd like to examine how random writes and the deletion of rows are handled because that seems quite expensive, as we just examined in the preceding section. The idea is very simple. All the read, writes, and removals don't go straight to the disk. Instead, an in-memory SSTable is created along with its index, both of which are empty when created. We'll call it MemTable from this point onwards for the sake of simplicity. Every read checks the index of this table, and if a record is found from here, it's well and good. If it is not, then the index of the SSTable on the disk is checked and the desired row is returned. When a new row has to be read, we don't look at anything and simply enter the row in the MemTable along with its record in the index of this MemTable. To delete a key, we simply mark it deleted in the memory, regardless of whether it is in MemTable or in the on disk table. As shown here the allocation of block into Mem Table: Now, when the size of the MemTable grows up to a certain size, it is written to the disk as a new SSTable. Since this only depends on the size of the MemTable and of course happens much infrequently, it is much faster. Each time the MemTable grows beyond a configured size, it is flushed to the disk as a new SSTable. However, the index of each flushed SSTable is still kept in the memory so that we can quickly check the incoming read requests and locate it in any table without touching the disk. Finally, when the number of SSTables reaches a certain count, the SSTables are merged and collapsed into a single SSTable. Since each SSTable is just a sorted set of keys, a merge sort is applied. This merging process is quite fast. Congratulations! You've just learned the most atomic storage unit in BigData solutions such as BigTable, Hbase, Hypertable, Cassandara, and LevelDB. That's how they actually store and process the data. Now that we know how a big table is actually stored on the disk and how the read and writes are handled, it's time to take a closer look at the available operations. Operations on BigTable Until this point, we know that a BigTable table is a collection of rows that have unique keys up to 64 KB in length and the data is stored according to the lexicographic sort order of the keys. We also examined how it is laid out on the disk and how read, writes, and removals are handled. Now, the question is, which operations are available on this data? The following are the operations that are available to us: Fetching a row by using its key Inserting a new key Deleting a row Updating a row Reading a range of rows from the starting row key to the ending row key Reading Now, the first operation is pretty simple. You have a key, and you want the associated row. 
Since the whole data set is sorted by the key, all we need to do is perform a binary search on it, and you'll be able to locate your desired row within a few lookups, even within a set of a million rows. In practice, the index at the end of the SSTable is loaded in the memory, and the binary search is actually performed on it. If we take a closer look at this operation in light of what we know from the previous section, the index is already in the memory of the MemTable that we saw in the previous section. In case there are multiple SSTables because MemTable was flushed many times to the disk as it grew too large, all the indexes of all the SSTables are present in the memory, and a quick binary search is performed on them. Writing The second operation that is available to us is the ability to insert a new row. So, we have a key and the values that we want to insert in the table. According to our new knowledge about physical storage and SSTables, we can understand this very well. The write directly happens on the in-memory MemTable and its index is updated, which is also in the memory. Since no disk access is required to write the row as we are writing in memory, the whole file doesn't have to be rewritten on disk, because yet again, all of it is in the memory. This operation is very fast and almost instantaneous. However, if the MemTable grows in size, it will be flushed to the disk as a new SSTable along with the index while retaining a copy of its index in the memory. Finally, we also saw that when the number of SSTables reaches a certain number, they are merged and collapsed to form a new, bigger table. Deleting It seems that since all the keys are in a sorted order on the disk and deleting a key would mean disrupting the sort order, a rewrite of the whole file would be a big I/O overhead. However, it is not, as it can be handled smartly. Since all the indexes, including the MemTable and the tables that were the result of flushing a larger MemTable to the disk, are already in the memory, deleting a row only requires us to find the required key in the in-memory indexes and mark it as deleted. Now, whenever someone tries to read the row, the in-memory indexes will be checked, and although an entry will be there, it will be marked as deleted and won't be returned. When MemTable is being flushed to the disk or multiple tables are being collapsed, this key and the associated row will be excluded in the write process. Hence, they are totally gone from the storage. Updating Updating a row is no different, but it has two cases. The first case is in which not only the values, but also the key is modified. In this case, it is like removing the row with an old key and inserting a row with a new key. We already have seen both of these cases in detail. So, the operation should be obvious. However, the case where only the values are modified is even simpler. We only have to locate the row from the indexes, load it in the memory if it is not already there, and modify. That's all. Scanning a range This last operation is quite interesting. You can scan a range of keys from a starting key to an ending key. For instance, you can return all the rows that have a key greater than or equal to key1 and less than or equal to key2, effectively forming a range. Since the looking up of a single key is a fast operation, we only have to locate the first key of the range. 
Then, we start reading the consecutive keys one after the other till we encounter a key that is greater than key2, at which point, we will stop the scanning, and the keys that we scanned so far are our query's result. This is how it looks like: Name Department Company Chris Harris Research & Development Google Christopher Graham Research & Development LG Debra Lee Accounting Sony Ernest Morrison Accounting Apple Fred Black Research & Development Sony Janice Young Research & Development Google Jennifer Sims Research & Development Panasonic Joyce Garrett Human Resources Apple Joyce Robinson Research & Development Apple Judy Bishop Human Resources Google Kathryn Crawford Human Resources Google Kelly Bailey Research & Development LG Lori Tucker Human Resources Sony Nancy Campbell Accounting Sony Nicole Martinez Research & Development LG Norma Miller Human Resources Sony Patrick Ward Research & Development Sony Paula Harvey Research & Development LG Stephanie Chavez Accounting Sony Stephanie Mccoy Human Resources Panasonic In the preceding table, we said that the starting key will be greater than or equal to Ernest and ending key will be less than or equal to Kathryn. So, we locate the first key that is greater than or equal to Ernest, which happens to be Ernest Morrison. Then, we start scanning further, picking and returning each key as long as it is less than or equal to Kathryn. When we reach Judy, it is less than or equal to Kathryn, but Kathryn isn't. So, this row is not returned. However, the rows before this are returned. This is the last operation that is available to us on BigTable. Selecting a key Now that we have examined the data model and the storage layout, we are in a better position to talk about the key selection for a table. As we know that the stored data is sorted by the key, it does not impact the writing, deleting, and updating to fetch a single row. However, the operation that is impacted by the key is that of scanning a range. Let's think about the previous table again and assume that this table is a part of some system that processes payrolls for companies, and the companies pay us for the task of processing their payroll. Now, let's suppose that Sony asks us to process their data and generate a payroll for them. Right now, we cannot do anything of this kind. We can just make our program scan the whole table, and hence all the records (which might be in millions), and only pick the records where job:company has the value of Sony. This would be inefficient. Instead, what we can do is put this sorted nature of row keys to our service. Select the company name as the key and concatenate the designation and name along with it. 
So, the new table will look like this: Key Name Department Company Apple-Accounting-Ernest Morrison Ernest Morrison Accounting Apple Apple-Human Resources-Joyce Garrett Joyce Garrett Human Resources Apple Apple-Research & Development-Joyce Robinson Joyce Robinson Research & Development Apple Google-Human Resources-Judy Bishop Chris Harris Research & Development Google Google-Human Resources-Kathryn Crawford Janice Young Research & Development Google Google-Research & Development-Chris Harris Judy Bishop Human Resources Google Google-Research & Development-Janice Young Kathryn Crawford Human Resources Google LG-Research & Development-Christopher Graham Christopher Graham Research & Development LG LG-Research & Development-Kelly Bailey Kelly Bailey Research & Development LG LG-Research & Development-Nicole Martinez Nicole Martinez Research & Development LG LG-Research & Development-Paula Harvey Paula Harvey Research & Development LG Panasonic-Human Resources-Stephanie Mccoy Jennifer Sims Research & Development Panasonic Panasonic-Research & Development-Jennifer Sims Stephanie Mccoy Human Resources Panasonic Sony-Accounting-Debra Lee Debra Lee Accounting Sony Sony-Accounting-Nancy Campbell Fred Black Research & Development Sony Sony-Accounting-Stephanie Chavez Lori Tucker Human Resources Sony Sony-Human Resources-Lori Tucker Nancy Campbell Accounting Sony Sony-Human Resources-Norma Miller Norma Miller Human Resources Sony Sony-Research & Development-Fred Black Patrick Ward Research & Development Sony Sony-Research & Development-Patrick Ward Stephanie Chavez Accounting Sony So, this is a new format. We just welded the company, department, and name as the key and as the table will always be sorted by the key, that's what it looks like, as shown in the preceding table. Now, suppose that we receive a request from Google to process their data. All we have to do is perform a scan, starting from the key greater than or equal to Google and less then L because that's the next letter. This scan is highlighted in the previous table. Now, the next request is more specific. Sony asks us to process their data, but only for their accounting department. How do we do that? Quite simple! In this case, our starting key will be greater than or equal to Sony-Accounting, and the ending key can be Sony-Accountinga, where a is appended to indicate the end key in the range. The scanned range and the returned rows are highlighted in the previous table. BigTable – a hands-on approach Okay, enough of the theory. It is now time to take a break and perform some hands-on experimentation. By now, we know that about 80 percent of the BigTable and the other 20 percent of the complexity is scaling it to more than one machine. Our current discussion only assumed and focused on a single machine environment, and we assumed that the BigTable table is on our laptop and that's about it. You might really want to experiment with what you learned. Fortunately, given that you have the latest version of Google Chrome or Mozilla Firefox, that's easy. You have BigTable right there! How? Let me explain. Basically, from the ideas that we looked at pertaining to the stored key value, the sorted layout, the indexes of the sorted files, and all the operations that were performed on them, including scanning, we extracted a separate component called LevelDB. Meanwhile, as HTML was evolving towards HTML5, a need was felt to store data locally. Initially, SQLite3 was embedded in browsers, and there was a querying interface for you to play with. 
So all in all, you had an SQL database in the browser, which yielded a lot of possibilities. However, in recent years, W3C deprecated this specification and urged browser vendors to not implement it. Instead of web databases that were based on SQLite3, they now have databases based on LevelDB that are actually key-value stores, where storage is always sorted by key. Hence, besides looking up for a key, you can scan across a range of keys. Covering the IndexedDB API here would be beyond the scope of this book, but if you want to understand it and find out what the theory that we talked about looks like in practice, you can try using IndexedDB in your browser by visiting http://code.tutsplus.com/tutorials/working-with-indexeddb--net-34673. The concepts of keys and the scanning of key ranges are exactly like those that we examined here as regards BigTable, and those about indexes are mainly from the concepts that we will examine in a later section about datastores. Scaling BigTable to BigData By now, you have probably understood the data model of BigTable, how it is laid out on the disk, and the advantages it offers. To recap once again, the BigTable installation may have many tables, each table may have many column families that are defined at the time of creating the table, and each column family may have many columns, as required. Rows are identified by keys, which have a maximum length of 64 KB, and the stored data is sorted by the key. We can receive, update, and delete a single row. We can also scan a range of rows from a starting key to an ending key. So now, the question comes, how does this scale? We will provide a very high-level overview, neglecting the micro details to keep things simple and build a mental model that is useful to us as the consumers of BigTable, as we're not supposed to clone BigTable's implementation after all. As we saw earlier, the basic storage unit in BigTable is a file format called SSTable that stores key-value pairs, which are sorted by the key, and has an index at its end. We also examined how the read, write, and delete work on an in-memory copy of the table and merged periodically with the table that is present on the disk. Lastly, we also mentioned that when the in memory is flushed as SSTables on the disk when reach a certain configurable count, they are merged into a bigger table. The view so far presents the data model, its physical layout, and how operations work on it in cases where the data resides on a single machine, such as a situation where your laptop has a telephone directory of the entire Europe. However, how does that work at larger scales? Neglecting the minor implementation details and complexities that arise in distributed systems, the overall architecture and working principles are simple. In case of a single machine, there's only one SSTable (or a few in case they are not merged into one) file that has to be taken care of, and all the operations have to be performed on it. However, in case this file does not fit on a single machine, we will of course have to add another machine, and half of the SSTable will reside on one machine, while the other half will be on the another machine. This split would of course mean that each machine would have a range of keys. For instance, if we have 1 million keys (that look like key1, key2, key3, and so on), then the keys from key1 to key500000 might be on one machine, while the keys from key500001 to key1000000 will be on the second machine. 
So, we can say that each machine has a different key range for the same table. Now, although the data resides on two different machines, it is of course a single table that sprawls over two machines. These partitions or separate parts are called tablets. Let's see the Key allocation on two machines: We will keep this system to only two machines and 1 million rows for the sake of discussion, but there may be cases where there are about 20 billion keys sprawling over some 12,000 machines, with each machine having a different range of keys. However, let's continue with this small cluster consisting of only two nodes. Now, the problem is that as an external user who has no knowledge of which machine has which portion of the SSTable (and eventually, the key ranges on each machine), how can a key, say, key489087 be located? For this, we will have to add something like a telephone directory, where I look up the table name and my desired key and I get to know the machine that I should contact to get the data associated with the key. So, we are going to add another node, which will be called the master. This master will again contain simple, plain SSTable, which is familiar to us. However, the key-value pair would be a very interesting one. Since this table would contain data about the other BigTable tables, let's call it the METADATA table. In the METADATA table, we will adopt the following format for the keys: tablename_ending-row-key Since we have only two machines and each machine has two tablets, the METADATA table will look like this: Key Value employees_key500000 192.168.0.2 employees_key1000000 192.168.0.3 The master stores the location of each tablet server with the row key that is the encoding of the table name and the ending row of the tablet. So, the tablet has to be scanned. The master assigns tablets to different machines when required. Each tablet is about 100 MB to 200 MB in size. So, if we want to fetch a key, all we need to know is the following: Location of the master server Table in which we are looking for the key The key itself Now, we will concatenate the table name with the key and perform a scan on the METADATA table on the master node. Let's suppose that we are looking for key600000 in employees table. So, we would first be actually looking for the employees_key600000 key in the table on master machine. As you are familiar with the scan operation on SSTable (and METADATA is just an SSTable), we are looking for a key that is greater than or equal to employees_key600000, which happens to be employees_key1000000. From this lookup, the key that we get is employees_key1000000 against which, IP address 192.168.0.3 is listed. This means that this is the machine that we should connect to fetch our data. We used the word keys and not the key because it is a range scan operation. This will be clearer with another example. Let's suppose that we want to process rows with keys starting from key400000 to key800000. Now, if you look at the distribution of data across the machine, you'll know that half of the required range is on one machine, while the other half is on the other. Now, in this case, when we consult the METADATA table, two rows will be returned to us because key400000 is less then key500000 (which is the ending row key for data on the first machine) and key800000 is less then key1000000, which is the ending row for the data on the second machine. So, with these two rows returned, we have two locations to fetch our data from. This leads to an interesting side-effect. 
As the data resides on two different machines, this can be read or processed in parallel, which leads to an improved system performance. This is one reason why even with larger datasets, the performance of BigTable won't deteriorate as badly as it would have if it were a single, large machine with all the data on it. The datastore thyself So until now, everything that we talked about was about BigTable, and we did not mention datastore at all. Now is the time to look at datastore in detail because we understand BigTable quite well now. Datastore is an effectively solution that was built on top of BigTable as a persistent NoSQL layer for Google App Engine. As we know that BigTable might have different tables, data for all the applications is stored in six separate tables, where each table stores a different aspect or information about the data. Don't worry about memorizing things about data modeling and how to use it for now, as this is something that we are going to look into in greater detail later. The fundamental unit of storage in datastore is called a property. You can think of a property as a column. So, a property has a name and type. You can group multiple properties into a Kind, which effectively is a Python class and analogous to a table in the RDBMS world. Here's a pseudo code sample: # 1. Define our Kind and how it looks like. class Person(object): name = StringProperty() age = IntegerProperty() # 2. Create an entity of kind person ali = Person(name='Ali', age='24) bob = Person(name='Bob', age='34) david = Person(name='David', age='44) zain = Person(name='Zain', age='54) # 3. Save it ali.put() bob.put() david.put() zain.put() This looks a lot like an ORM such as Django's ORM, SQLAlchemy, or Rails ActiveRecord. So, Person class is called a Kind in App Engine's terminology. The StringProperty and IntegerProperty property classes are used to indicate the type of the data that is supposed to be stored. We created an instance of the Person class as mohsin. This instance is called an entity in App Engine's terminology. Each entity, when stored, has a key that is not only unique throughout your application, but also combined with your application ID. It becomes unique throughout all the applications that are hosted over Google App Engine. All entities of all kinds for all apps are stored in a single BigTable, and they are stored in a way where all the property values are serialized and stored in a single BigTable column. Hence, no separate columns are defined for each property. This is interesting and required as well because if we are Google App Engine's architects, we do not know the Kind of data that people are going to store or the number and types of properties that they would define so that it makes sense to serialize the whole thing as one and store them in one column. So, this is how it looks like: Key Kind Data agtkZXZ-bWdhZS0wMXIQTXIGUGVyc29uIgNBbGkM Person {name: 'Ali', age: 24} agtkZXZ-bWdhZS0wMXIPCxNTVVyc29uIgNBbGkM Person {name: 'Bob', age: 34} agtkZXZ-bWdhZS0wMXIPCxIGUGVyc29uIgNBbBQM Person {name: 'David', age: 44} agtkZXZ-bWdhZS0wMXIPCxIGUGVyc29uIRJ3bGkM Person {name: 'Zain', age: 54} The key appears to be random, but it is not. A key is formed by concatenating your application ID, your Kind name (Person here), and either a unique identifier that is auto generated by Google App Engine, or a string that is supplied by you. The key seems cryptic, but it is not safe to pass it around in public, as someone might decode it and take advantage of it. 
Basically, it is just base 64 encoded and can easily be decoded to know the entity's Kind name and ID. A better way would be to encrypt it using a secret key and then pass it around in public. On the other hand, to receive it, you will have to decrypt it using the same key. A gist of this is available on GitHub that can serve the purpose. To view this, visit https://gist.github.com/mohsinhijazee/07cdfc2826a565b50a68. However, for it to work, you need to edit your app.yaml file so that it includes the following: libraries: - name: pycrypto version: latest Then, you can call the encrypt() method on the key while passing around and decrypt it back using the decrypt() method, as follows: person = Person(name='peter', age=10) key = person.put() url_safe_key = key.urlsafe() safe_to_pass_around = encrypt(SECRET_KEY, url_safe_key) Now, when you have a key from the outside, you should first decrypt it and then use it, as follows: key_from_outside = request.params.get('key') url_safe_key = decrypt(SECRET_KEY, key_from_outside) key = ndb.Key(urlsafe=url_safe_key) person = key.get() The key object is now good for use. To summarize, just get the URL safe key by calling the ndb.Key.urlsafe() method and encrypt it so that it can be passed around. On return, just do the reverse. If you really want to see how the encrypt and decrypt operations are implemented, they are reproduced as follows without any documentation/comments, as cryptography is not our main subject: import os import base64 from Crypto.Cipher import AES BLOCK_SIZE = 32 PADDING='#' def _pad(data, pad_with=PADDING): return data + (BLOCK_SIZE - len(data) % BLOCK_SIZE) * PADDING def encrypt(secret_key, data): cipher = AES.new(_pad(secret_key, '@')[:32]) return base64.b64encode(cipher.encrypt(_pad(data))) def decrypt(secret_key, encrypted_data): cipher = AES.new(_pad(secret_key, '@')[:32]) return cipher.decrypt(base64.b64decode (encrypted_data)).rstrip(PADDING) KEY='your-key-super-duper-secret-key-here-only-first-32-characters-are-used' decrypted = encrypt(KEY, 'Hello, world!') print decrypted print decrypt(KEY, decrypted) More explanation on how this works is given at https://gist.github.com/mohsinhijazee/07cdfc2826a565b50a68. Now, let's come back to our main subject, datastore. As you can see, all the data is stored in a single column, and if we want to query something, for instance, people who are older than 25, we have no way to do this. So, how will this work? Let's examine this next. Supporting queries Now, what if we want to get information pertaining to all the people who are older than, say, 30? In the current scheme of things, this does not seem to be something that is doable, because the data is serialized and dumped, as shown in the previous table. Datastore solves this problem by putting the sorted values to be queried upon as keys. So here, we want to query by age. Datastore will create a record in another table called the Index table. This index table is nothing but just a plain BigTable, where the row keys are actually the property value that you want to query. Hence, a scan and a quick lookup is possible. 
Here's how it would look like: Key Entity key Myapp-person-age-24 agtkZXZ-bWdhZS0wMXIQTXIGUGVyc29uIgNBbGkM Myapp-person-age-34 agtkZXZ-bWdhZS0wMXIPCxNTVVyc29uIgNBbGkM Myapp-person-age-44 agtkZXZ-bWdhZS0wMXIPCxIGUGVyc29uIgNBbBQM Myapp-person-age-54 agtkZXZ-bWdhZS0wMXIPCxIGUGVyc29uIRJ3bGkM Implementation details So, all in all, Datastore actually builds a NoSQL solution on top of BigTable by using the following six tables: A table to store entities A table to store entities by kind A table to store indexes for the property values in the ascending order A table to store indexes for the property values in the descending order A table to store indexes for multiple properties together A table to keep a track of the next unique ID for Kind Let us look at each table in turn. The first table is used to store entities for all the applications. We have examined this in an example. The second table just stores the Kind names. Nothing fancy here. It's just some metadata that datastore maintains for itself. Think of this—you want to get all the entities that are of the Person Kind. How will you do this? If you look at the entities table alone and the operations that are available to us on a BigTable table, you will know that there's no such way for us to fetch all the entities of a certain Kind. This table does exactly this. It looks like this: Key Entity key Myapp-Person-agtkZXZ-bWdhZS0wMXIQTXIGUGVyc29uIgNBbGkM AgtkZXZ-bWdhZS0wMXIQTXIGUGVyc29uIgNBbGkM Myapp-Person-agtkZXZ-bWdhZS0wMXIQTXIGUGVyc29uIgNBb854 agtkZXZ-bWdhZS0wMXIQTXIGUGVyc29uIgNBb854 Myapp-Person-agtkZXZ-bWdhZS0wMXIQTXIGUGVy748IgNBbGkM agtkZXZ-agtkZXZ-bWdhZS0wMXIQTXIGUGVy748IgNBbGkM So, as you can see, this is just a simple BigTable table where the keys are of the [app ID]-[Kind name]-[entity key] pattern. The tables 3, 4, and 5 from the six tables that were mentioned in the preceding list are similar to the table that we examined in the Supporting queries section labeled Data as stored in BigTable. This leaves us with the last table. As you know that while storing entities, it is important to have a unique key for each row. Since all the entities from all the apps are stored in a single table, they should be unique across the whole table. When datastore generates a key for an entity that has to be stored, it combines your application ID and the Kind name of the entity. Now, this much part of the key only makes it unique across all the other entities in the table, but not within the set of your own entities. To do this, you need a number that should be appended to this. This is exactly similar to how AUTO INCREMENT works in the RDBMS world, where the value of a column is automatically incremented to ensure that it is unique. So, that's exactly what the last table is for. It keeps a track of the last ID that was used by each Kind of each application, and it looks like this: Key Next ID Myapp-Person 65 So, in this table, the key is of the [application ID]-[Kind name] format, and the value is the next value, which is 65 in this particular case. When a new entity of kind Person is created, it will be assigned 65 as the ID, and the row will have a new value of 66. Our application has only one Kind defined, which is Person. Therefore, there's only one row in this table because we are only keeping track for the next ID for this Kind. If we had another Kind, say, Group, it will have its own row in this table. Summary We started this article with the problem of storing huge amounts of data, processing it in bulk, and randomly accessing it. 
This arose from the fact that we were ambitious to store every single web page on earth and process it to extract some results from it. We introduced a solution called BigTable and examined its data model. We saw that in BigTable, we can define multiple tables, with each table having multiple column families, which are defined at the time of creating the table. We learned that column families are logical groupings of columns, and new columns can be defined in a column family, as needed. We also learned that the data store in BigTable has no meaning on its own, and it stores them just as plain bytes; its interpretation and meanings depend on the user of data. We also learned that each row in BigTable has a unique row key, which has a length of 64 KB. Lastly, we turned our attention to datastore, a NoSQL storage solution built on top of BigTable for Google App Engine. We briefly mentioned some datastore terminology such as properties (columns), entities (rows), and kinds (tables). We learned that all data is stored across six different BigTable tables. This captured a different aspect of data. Most importantly, we learned that all the entities of all the apps hosted on Google App Engine are stored in a single BigTable and all properties go to a single BigTable column. We also learned how querying is supported by additional tables that are keyed by the property values that list the corresponding keys. This concludes our discussion on Google App Engine's datastore and its underlying technology, workings, and related concepts. Next, we will learn how to model our data on top of datastore. What we learned in this chapter will help us enormously in understanding how to better model our data to take full advantage of the underlying mechanisms. Resources for Article: Further resources on this subject: Google Guice[article] The EventBus Class[article] Integrating Google Play Services [article]
Read more
  • 0
  • 0
  • 2744

Packt
20 Jan 2010
8 min read
Save for later

SOA Management—OSB (aka ALSB) Management

Packt
20 Jan 2010
8 min read
Introducing Oracle Service Bus (OSB) In any distributed system, components that run on a distributed platform need to communicate or exchange messages with each other. In SOA based systems, services need to interact or exchange messages with other services also. Oracle Service Bus is a product that provides a platform for interaction and message exchange between services. Integration of disparate services can be a challenge. There could be a difference in messaging models—some services may support the synchronous model, whereas other services may support the asynchronous model. Some services may support HTTP protocol, whereas other services may support JMS protocol. Oracle Service Bus provides helps in solving many of these challenges by providing the following features: Support for different protocols, such as HTTP(s), FTP, JMS, E-mail, Tuxedo, and so on Support for different messaging models, such as point-to-point model, publish-subscribe model Support for different message formats, such as SOAP, E-mail, JMS, XML, and so on Support for different content types such as XML, binary, and so on Data transformation using XLST and Xquery—data mapping from one format to another format through declarative constructs Besides the integration support, OSB provides features that help in managing runtime for the integration of services. Some features are: Load balancing: Load balancing is very important when traffic volume between services is very high. Load balancing also helps to achieve high availability. Content-based routing: Routing to appropriate service based on content is very valuable in the changing business environment. OSB provides loosely-coupled bus architecture, where all services and consumers of services can be plugged into, and OSB becomes a central place for defining mediation rules between services and consumers, as shown next: At implementation level, OSB is a set of J2EE applications that run on top of WebLogic J2EE container. It uses various J2EE constructs like Enterprise Java Bean, data source, connectors, and so on. OSB uses a database for storing the reporting data. OSB can be deployed in a single server model or clustered server model. Generally, in a production environment a clustered model is used, where multiple WebLogic Servers are used for load balancing and high-availability. In such setups, multiple WebLogic Servers are front-ended by a load balancer. The following figure depicts one such clustered deployment. OSB constructs There are some constructs specific to OSB—let's learn about those constructs. Proxy service OSB provides mediation between consumer and provider services. This mediation is loosely coupled where a consumer service makes a call to intermediate service, and the intermediate service performs some transformation and routes it to producer service. This intermediate service is hosted by OSB and called Proxy service. Business service Business service is a representation of actual producer service. Business service controls the call to business service, it knows about the business service endpoint. In case of failure in calling producer service, it can retry the operation. In case there are multiple producer services, it knows about the endpoint of each producer service and provides load balancing across those endpoints. Message flow Message flow is a set of steps that are executed by the proxy service before it routes to the business service. It includes steps for data transformation, validation, reporting, and so on. 
Supported versions Enterprise Manager provides OSB management from release 10.2.0.5 onwards. The following is the support matrix for EM against different versions of OSB management: EM Version OSB 2.6 OSB 2.6.1 OSB 3.0 OSB 10gR3 10.2.0.5 Yes Yes Yes Yes To manage or monitor OSB from Enterprise Manager 10.2.0.5, some patches are needed on OSB servers. The following table lists the patch number for each supported OSB release: OSB version Patch ID 2.6, 2.6.1 B8SZ 3.0 SWGQ 10gR3 7NPS The SmartUpdate patching tool that comes with WebLogic installation can be used to apply these patches. Before downloading or applying these patches, check Enterprise Manager documentation for any updates on patch Ids. Discovery of Oracle Service Bus Oracle Service Bus gets discovered as part of WebLogic domain, managed server discovery.We know how Enterprise Manager Agent discovers WebLogic domain and managed servers. Along with domain, cluster and managed server targets, OSB targets are also created and persisted in the Enterprise Manager repository. Just like WebLogic domain and server targets, OSB can also be discovered and monitored, either by remote agent or local agent. In case you want to use OSB service's provisioning feature you will need to discover/monitor OSB in the local agent mode. In the local agent mode, the agent is installed where the domain admin server is running. In the remote agent mode, the agent can be on any other host on the same network. After discovery you will see the Oracle Service Bus target on the Middleware tab of the EM homepage. The following screenshot shows the OSB target on the Middleware tab. Monitoring OSB and OSB services Under the BPEL monitoring section, we learnt about the BPEL eco system that included BPEL PM, dehydration store, database listener, host, and so on, and how Enterprise Manager provides support for modeling of BPEL eco system as system. Enterprise Manager provides similar support for managing and monitoring OSB eco system. Monitoring OSB For monitoring OSB, you can go to the OSB homepage, where you can see the status and availability of OSB. It shows some other details like the host, name of the WebLogic Server Domain where OSB is installed. It also shows some coarse-grained historical view of traffic metrics—where it shows message/error/security violation rates for all the services. On the Home page there is an option to create an infrastructure system and service. The steps for creating an infrastructure system service is exactly the same as BPEL, so we will not repeat those steps and will leave it as an exercise. The following screenshot shows one such homepage after the infrastructure system services are created. OSB uses Java Message Service (JMS) queues for receiving, scheduling, dispatching of messages, it's very important to monitor JMS queues for OSB. From the OSB homepage, click on JMS Performance, and you will see the performance of JMS queues used by OSB. The following screenshot shows one such page. You can see the metrics for message inflow and messages pending. Monitoring OSB services OSB service monitoring can be divided into two parts—proxy service that receives the requests and business service that dispatches the requests. Proxy service metrics also include the metrics in message flow, where message flow processes the request. Monitoring proxy services Performance of proxy services represents the performance seen by consumers of OSB. Besides that, proxy services are hosted by OSB servers. 
Enterprise Manager collects some very useful metrics for proxy services. These metrics are: Status of proxy service Throughput metrics at proxy service and endpoint level Performance metrics at proxy service level and endpoint level Error metrics at service level and end point level Security violation at proxy service level To monitor the performance of message flow, there are some fine-grained metrics available, these metrics include throughput, error rate, and performance at each step in the message flow. Let's go through the console screens for OSB service monitoring. Go to the OSB Services tab from OSB homepage, on that page you will see a listing of all the services that include proxy as well as business services. You will see every proxy and business services is part of some project. OSB provides a construct project under which different services can be created; project is just a means to categorize different services. The following screenshot shows such a page where you can see the list of all the projects and services. For each service you see metrics related to throughput, errors, violations, and performance. Once you click on one of the services, you will see a page where the metrics and other details of proxy service are listed. On this page, you can see the throughput and performance chart for the proxy service. You will also see a section for the EM service model that represents a proxy service. This model is similar to the model that we had for the BPEL process. This mode has three entities: Infrastructure service: This service represents all of the components in the OSB eco system. Availability service: This service represents all of the availability tests for an OSB proxy service endpoint, it could include SOAP tests, Web transactions etc. Aggregate service: This service is just an aggregate of the previous two services. Using this service, you can monitor all of the required metrics in one place. You can see the performance of your IT infrastructure as well as the performance of partner links in one place.
Read more
  • 0
  • 0
  • 2743
article-image-joomla-16-managing-site-users-access-control
Packt
17 Mar 2011
9 min read
Save for later

Joomla! 1.6: Managing Site Users with Access Control

Packt
17 Mar 2011
9 min read
Joomla! 1.6 First Look A concise guide to everything that's new in Joomla! 1.6.     What's new about the Access Control Levels system? In Joomla! 1.5, a fixed set of user groups was available, ranging from "Public" users (anyone with access to the frontend of the site) to "Super Administrators", allowed to log in to the backend and do anything. The ACL system in Joomla! 1.6 is much more flexible: Instead of fixed user groups with fixed sets of permissions, you can create as many groups as you want and grant the people in those groups any combination of permissions. ACL enables you to control anything users can do on the site: log in, create, edit, delete, publish, unpublish, trash, archive, manage, or administer things. Users are no longer limited to only one group: a user can belong to different groups at the same time. This allows you to give particular users both the set of permissions for one group and another group without having to create a third, combined set of permissions from the ground up. Permissions no longer apply to the whole site as they did in Joomla! 1.5. You can now set permissions for specific parts of the site. Permissions apply to either the whole site, or to specific components, categories, or items (such as a single article). What are the default user groups and permissions? The flexibility of the new ACL system has a downside: it can also get quite complex. The power to create as many user groups as you like, each with very fine-grained sets of permissions assigned to them, means you can easily get entangled in a web of user groups, Joomla! Objects, and permissions. You should carefully plan the combinations of permissions you need to assign to different user groups. Before you change anything in Joomla!, sketch an outline or use mind mapping tools (such as http://bubbl.us) to get an overview of what you want to accomplish through Joomla! ACL: who (which users) should be able to see or do what in which parts of the site? In many cases, you might not need to go beyond the default setup and just use the default users groups and permissions that are already present when you install Joomla! 1.6. So, before we go and find out how you can craft custom user groups and their distinctive sets of permissions, let's have a look at the default Joomla! 1.6 ACL setup. The default site-wide settings In broad terms, the default groups and permissions present in Joomla! 1.6 are much like the ACL system that was available in Joomla! 1.5. To view the default groups and their permissions, go to Site | Global Configuration | Permissions. The Permission Settings screen is displayed, showing a list of User Groups. A user group is a collection of users sharing the same permissions, such as Public, Manager, or Administrator. By default the permission settings of the Public user group are shown; clicking any of the other user group names reveals the settings for that particular group. On the right-hand side of the Permission Settings screen, the generic (site-wide) action permissions for this group are displayed: Site Login, Admin Login, and so on. Actions are the things users are allowed do on the site. For the sample user groups, these action permissions have already been set. Default user groups Let's find out what these default user groups are about. We'll discuss the user groups from the most basic level (Public) to the most powerful (Super Users). Public – the guest group This is the most basic level; anyone visiting your site is considered part of the Public group. 
Members of the Public group can view the frontend of the site, but they don't have any special permissions. Registered – the user group that can log in Registered users are regular site visitors, except for the fact that they are allowed to log in to the frontend of the site. After they have logged in with their account details, they can view content that may be hidden from ordinary site visitors because the Access level of that content has been set to Registered. This way, Registered users can be presented all kinds of content ordinary (Public) users can't see. Registered users, however, can't contribute content. They're part of the user community, not web team. Author, Editor, Publisher – the frontend content team Authors, Editors, and Publishers are allowed to log in to the frontend, to edit or add articles. There are three types of frontend content contributors, each with their specific permission levels: Authors can create new content for approval by a Publisher or someone higher in rank. They can edit their own articles, but can't edit existing articles created by others. Editors can create new articles and edit existing articles. A Publisher or higher must approve their submissions. Publishers can create, edit, and publish, unpublish, or trash articles in the frontend. They cannot delete content. Manager, Administrator, Super User – the backend administrators Managers, Administrators and Super Users are allowed to log in to the backend to add and manage content and to perform administrative tasks. Managers can do all that Publishers can, but they are also allowed to log in to the backend of the site to create, edit, or delete articles. They can also create and manage categories. They have limited access to administration functions. Administrators can do all that Managers can and have access to more administration functions. They can manage users, edit, or configure extensions and change the site template. They can use manager screens (User Manager, Article Manager, and so on) and can create, delete, edit, and change the state of users, articles, and so on. Super Users can do everything possible in the backend. (In Joomla! 1.5, this user group type was called Super Administrator). When Joomla! is installed, there's always one Super User account created. That's usually the person who builds and customizes the website. In the current example website, you're the Super User. Shop Suppliers and Customers – two sample user groups You'll notice two groups in the Permission Settings screen that we haven't covered yet: Shop Suppliers and Customer. These are added when you install the Joomla! 1.6 sample data. These aren't default user groups; they are used in the sample Fruit Shop site to show how you can create customized groups. Are there also sample users available? As there are user groups present in the sample data, you might expect there are also sample users. This is not the case. There are no (sample) users assigned to the sample user groups. There's just one user available after you've installed Joomla!— you. You can view your details by navigating to Users | User Manager. You're taken to the User Manager: Users screen: Here you can see that your name is Super User, your user name is admin (unless you've changed this yourself when setting up your account), and you're part of the user group called Super Users. 
There's also a shortcut available to take you to your own basic user settings: click on Site | My Profile or, even faster, click on the Edit Profile shortcut in the Control Panel. However, you can't manage user permissions here; the purpose of the My Profile screen is only to manage basic user settings.

Action Permissions: what users can do
We've now seen what types of users are present in the default setup of Joomla! 1.6. The action permissions that you can grant these user groups (the things they can do on the site) are shown per user group in the Site | Global Configuration | Permissions screen. Click on any of the user group names to see the permission settings for that group.

You'll also find these permissions (such as Site Login, Create, Delete, Edit) in other places in the Joomla! interface: after all, you don't just apply permissions on a site-wide basis (as you could in previous versions of Joomla!), but also on the level of components, categories, or individual items. To allow or deny users to do things, each of the available actions can be set to Allowed or Denied for a specific user group. If the permission for an action isn't explicitly allowed or denied, it is Not Set.

Permissions are inherited
You don't have to set each and every permission on every level manually: permissions are inherited between groups. That is, a child user group automatically gets the permissions set for its parent. Wait a minute: parents, children, inheritance ... how does that work?

To understand these relationships, let's have a look at the overview of user groups in the Permission Settings screen. This shows all available user groups (I've edited this screen image a little to be able to show all the user groups in one column). You'll notice that all user group names are displayed indented, apart from Public. This indicates the permissions hierarchy: Public is the parent group, Manager (indented one position) is a child of Public, and Administrator (indented two positions) is a child of Manager.

Permissions for a parent group are automatically inherited by all child groups (unless these permissions are explicitly set to Allowed or Denied to "break" the inheritance relationship). In other words: a child group can do anything a parent group can do, and more, as it also has its own specific permissions set on top of what it inherits. For example, as Authors are children of the Registered group, they inherit the permissions of the Registered group (that is, the permission to log in to the frontend of the site). Apart from that, Authors have their own specific permissions added to the permissions of the Registered group.

Setting an action to Denied is very powerful: you can't allow an action for a lower level in the permission hierarchy if it is set to Denied higher up in the hierarchy. So, if an action is set to Denied for a higher group, this denial will be inherited all the way down the permissions "tree" and the action will always be denied for all lower levels, even if you explicitly set the lower level to Allowed.
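To make these inheritance rules concrete, here is a minimal sketch in Python (not Joomla!'s actual PHP implementation) that models a few of the groups and actions mentioned above. The group tree follows the hierarchy described in this section; the explicit permission values in the settings dictionary are invented purely for illustration.

```python
# Minimal sketch (not Joomla! code) of the ACL inheritance rules described above:
# a child group inherits its parent's settings, and an explicit Denied anywhere
# up the tree always wins, even over an explicit Allowed further down.

ALLOWED, DENIED, NOT_SET = "Allowed", "Denied", "Not Set"

# Parent relationships taken from the hierarchy described above
# (other default groups are omitted for brevity).
parents = {
    "Public": None,
    "Registered": "Public",
    "Author": "Registered",
    "Manager": "Public",
    "Administrator": "Manager",
}

# Explicit permission settings per (group, action); anything missing is Not Set.
# These values are hypothetical, chosen only to illustrate the rules.
settings = {
    ("Registered", "Site Login"): ALLOWED,
    ("Manager", "Admin Login"): ALLOWED,
    ("Manager", "Delete"): DENIED,
    ("Administrator", "Delete"): ALLOWED,  # has no effect: Denied on Manager wins
}

def check(group, action):
    """Resolve an action for a group by walking up its parent chain."""
    denied, allowed = False, False
    while group is not None:
        value = settings.get((group, action), NOT_SET)
        denied = denied or value == DENIED
        allowed = allowed or value == ALLOWED
        group = parents[group]
    if denied:
        return DENIED       # an explicit Denied anywhere in the chain wins
    if allowed:
        return ALLOWED      # otherwise an own or inherited Allowed applies
    return DENIED           # Not Set everywhere means the action is not allowed

print(check("Author", "Site Login"))      # Allowed (inherited from Registered)
print(check("Administrator", "Delete"))   # Denied (Denied on Manager wins)
```

Joomla! stores and evaluates these settings differently under the hood, but the resolution order sketched here (an explicit Denied first, then an own or inherited Allowed, then the default of not allowed) follows the behaviour described above.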

Making Content Findable in Drupal 6

Packt
20 Oct 2009
5 min read
What you will learn
In this article, you will learn about:
Using Taxonomy to link descriptive terms to Node Content
Tag clouds
Path aliases

What you will do
In this article, you will:
Create a Taxonomy
Enable the use of tags with Node Content
Define a custom URL
Activate site searching
Perform a search

Understanding Taxonomy
One way to find content on a site is by using a search function, but this can be a hit-or-miss approach. Searching for an article on 'canines' won't return an article about dogs unless it contains the word 'canines'. Navigation certainly provides a way to move around the site, but unless your site has only a small amount of content, the navigation can only be general in nature. Too much navigation is annoying: there are far too many sites with two or three sets of top navigation, plus left and bottom navigation. It's just too much to take in and still feel relaxed. Site maps offer additional navigation assistance, but they're usually not fun to read, and are more like a Table of Contents, where you have to know what you're looking for. So, what's the answer? Tags!

A Tag is simply a word or a phrase that is used as a descriptive link to content. In Drupal, a collective set of terms, from which terms or tags are associated with content, is called a Vocabulary. One or more Vocabularies comprise a Taxonomy. This is a good place to begin, so let's create a Vocabulary.

Activity 1: Creating a Taxonomy Vocabulary
In this activity, we will be adding two terms to our Vocabulary. We shall also learn how to assign a Taxonomy to Node Content that has been created.

We begin in the Content management area of the admin menu. There, you should find the Taxonomy option listed, as shown in the following screenshot. Click on this option.

Taxonomy isn't listed in my admin menu
The Taxonomy module is not enabled by default. Check on the Modules page (Admin | Site building | Modules) and make sure that the module is enabled. For the most part, modules can be thought of as options that can be added to your Drupal site, although some of them are considered essential. Some modules come pre-installed with Drupal. Among them, some are automatically activated, and some need to be activated manually. There are many modules that are not included with Drupal, but are available freely from the Drupal web site.

The next page gives us a lengthy description of the use of a taxonomy. At the top of the page are two options, List and Add vocabulary. We'll choose the latter.

On the Add vocabulary page, we need to provide a Vocabulary name. We can create several vocabularies, each for a different use. For example, with this site, we could have a vocabulary for 'Music' and another for 'Meditation'. For now, we'll just create one vocabulary and name it Tags, as suggested below, in the Vocabulary name box. In the Description box, we'll type This vocabulary contains Tag terms. In the Help text box, we'll type Enter one or more descriptive terms separated by commas.

Next is the [Node] Content types section. This lists the types of Node Content that are currently defined. Each has a checkbox alongside it. Selecting the checkbox indicates that the associated Node Content type can have Tags from this vocabulary assigned to it. Ultimately, it means that if a site visitor searches using a Tag, then this type of Node Content might be offered as a match. We will be selecting all of the checkboxes. If a new Node Content type is created that will use tags, then edit the vocabulary and select its checkbox.
The Settings section defines how we will use this vocabulary. In this case, we want to use it with tags, so we will select the Tags checkbox. The following screenshot shows the completed page. We'll then click on the Save button.

At this point, we have a vocabulary, as shown in the screenshot, but it doesn't contain anything. We need to add something to it, so that we can use it. Let's click on the add terms link.

On the Add term page, we're going to add two terms, one at a time. First we'll type healing music into the Term name box. We'll purposely make the terms lower case, as it will look better in the display that we'll be creating soon. We'll click on the Save button, and then repeat the procedure for another term named meditation.

The method we used for adding terms is acceptable when creating new terms that have not been applied to anything yet. If the term does apply to existing Node Content, then a better way to add it is by editing that content. We'll edit the Page we created, entitled Soul Reading. Now that the site has the Taxonomy module enabled, a new field named Tags appears below Title. We're going to type soul reading into it. This is an AJAX field. If we start typing a tag that already exists, then it will offer to complete the term.

AJAX (Asynchronous JavaScript + XML) is a method of using existing technologies to retrieve data in the background. What it means to a web site visitor is that data can be retrieved and presented on the page being viewed, without having to reload the page.

Now, we can Save our Node Content and return to the vocabulary that we created earlier. Click on the List tab at the top of the page. Our terms are listed, as shown in the following screenshot.
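Conceptually, what we have built is a vocabulary of terms plus a link from each tagged node to the terms that describe it. The short Python sketch below is purely illustrative (it is not Drupal's PHP API); it models that relationship, shows how a reverse index of term-to-nodes makes tagged content findable, and derives a crude tag cloud weight. The "Healing Sounds" node and its tags are hypothetical examples added for the sketch.

```python
# Illustrative model (not Drupal's API) of a vocabulary, tagged nodes,
# and the reverse index that makes content findable by tag.
from collections import defaultdict

# A vocabulary is a named collection of terms.
vocabulary = {"name": "Tags",
              "terms": {"healing music", "meditation", "soul reading"}}

# Terms assigned to each node, keyed by node title. "Soul Reading" comes from
# the activity above; "Healing Sounds" is a hypothetical extra node.
node_terms = {
    "Soul Reading": {"soul reading"},
    "Healing Sounds": {"healing music", "meditation"},
}

# Reverse index: term -> set of node titles tagged with that term.
nodes_by_term = defaultdict(set)
for title, terms in node_terms.items():
    for term in terms & vocabulary["terms"]:
        nodes_by_term[term].add(title)

print(sorted(nodes_by_term["meditation"]))   # ['Healing Sounds']

# A crude tag cloud: weight each term by how many nodes use it.
tag_cloud = {term: len(titles) for term, titles in nodes_by_term.items()}
print(tag_cloud)   # e.g. {'soul reading': 1, 'healing music': 1, 'meditation': 1}
```

In Drupal itself this mapping lives in the database and is maintained by the Taxonomy module, but the idea is the same: tags act as a many-to-many link between descriptive terms and Node Content.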