How-To Tutorials - Web Development

1802 Articles
Customizing the Default Expression Engine (v1.6.7) website template

Packt
22 Oct 2009
5 min read
Introduction

The blogging revolution has changed the nature of the internet. As more and more individuals and organizations choose the blog format as their preferred web solution (that is, a content-driven website that is regularly updated by one or more users), the question of which blogging application to use becomes an increasingly important one. I have tested all the popular blogging solutions, and the one I have found most impressive for designers with a little CSS and XHTML know-how is Expression Engine.

After spending a little time on the Expression Engine support forums, I have noticed that a high number of the support questions relate to solving CSS problems within EE. This suggests that many designers who choose EE are used to working within Adobe Dreamweaver (or other WYSIWYG applications) and, although there are no compatibility issues between the two systems, it is clear that they are making the transition from graphics-based web design to CSS-based web design.

When you install Expression Engine 1.6.7 for the first time, you are asked to choose a theme from a drop-down menu, which you might expect to offer a satisfactory selection of pre-installed themes. Sadly, this is not the case in Expression Engine (EE) version 1.6.7: unless you have downloaded a theme and manually saved it inside the right folder within the EE file structure, you will be given only one theme to choose from, the dreaded "default weblog theme". Once you complete the installation, no other themes can be imported or installed, so you can either restart the installation, select the default weblog theme, or start from a blank canvas (sorry about that).

The good news is that the next release of EE (v2.0) ships with a very nice default template; the bad news is that you will have to wait a few months longer to get a copy, and you will need to renew your license to upgrade, for a fee. Even then, you will probably want to modify the new and improved default template in EE 2.0, or you may, depending on your needs, choose another EE template altogether so that your website does not look like a thousand other websites based on the same template (not good).

This article demonstrates how to use Cascading Style Sheets (CSS) to improve the layout of the default EE website/blog template, how to keep the structure (XHTML) and the presentation (CSS) entirely separate, which is best practice in web development, and how CSS can be used to take control of the appearance of any EE template. It is intended as a practical guide for intermediate-level web designers who want to work with CSS more effectively within Expression Engine and create better websites that adhere to current W3C web standards. It will not attempt to teach you the fundamentals of CSS and XHTML, or how to install, use, or develop a website with EE.

If you are new to EE, it is recommended that you consult the book "Building a Website with Expression Engine 1.6.7", published by Packt Publishing, and visit www.expressionengine.com to consult the EE user guide. If you get stuck at any time when using Expression Engine, you can visit the EE support forums via the main EE site to get help with EE, XHTML, and CSS issues; for regular updates, the EllisLab EE feed within EE is an excellent source of news from the EE community.

The trouble with templates

Let's open up EE's default "weblog" template in Firefox.
I have the very useful Web Developer toolbar add-on installed; its many options are lined up across the bottom of the Firefox window. When you select Outline | Outline Current Element, Firefox renders an outline around the block of the selected element and, by default, displays the element's name. This add-on offers many other time-saving functions, ranging from DOM tools for JavaScript development to nifty layout tools such as a ruler displayed inside the Firefox window.

The default template is a useful guide to some basic EE tags embedded in the XHTML, but the CSS should render a cleaner, simpler-to-customize design, so let's make some much-needed changes. I will not be looking into the EE tags themselves; they are very powerful, but they are beyond the scope of this article.

Inside the Templates module, create a new template group and call it "site_extended". Template groups in EE organize your templates into virtual folders. We will make a copy of the existing template group and templates so that all the changes we make are non-destructive. Choose not to duplicate any existing template group, but select "Make the index template in this group your site's home page?" and press Submit. Easy.

Next, create a new template, call it "site_extended_css", and duplicate the site/site_css template. This powerful feature instructs EE to clone an existing template with a new name and location. Now let's create a copy of the default site weblog and call it "index_extended": select "duplicate an existing template" and choose "site/index" from the drop-down list. The first part of the location, "site", is the template group, and "index" is the actual template. Your template management tab should now reflect these changes; notice that the index template has a red star next to it.

Managing Posts with WordPress Plugin

Packt
14 Oct 2009
8 min read
Programming the Manage panel

The Manage Posts screen can be changed to show extra columns, or to remove unwanted columns from the listing. Let's say that we want to show the post type: Normal, Photo, or Link. Remember the custom field post-type that we added to our posts? We can use it now to differentiate post types.

Time for action – Add post type column in the Manage panel

We want to add a new column to the Manage panel, and we will call it Type. The value of the column will represent the post type: Normal, Photo, or Link.

Expand the admin_menu() function to load the function that handles the Manage page hooks:

    add_submenu_page('post-new.php',
        __('Add URL', $this->plugin_domain),
        __('URL', $this->plugin_domain),
        1, 'add-url',
        array(&$this, 'display_form'));
    // handle Manage page hooks
    add_action('load-edit.php', array(&$this, 'handle_load_edit'));
    }

Add the hooks to the columns on the Manage screen:

    // Manage page hooks
    function handle_load_edit() {
        // handle Manage screen functions
        add_filter('manage_posts_columns', array(&$this, 'handle_posts_columns'));
        add_action('manage_posts_custom_column', array(&$this, 'handle_posts_custom_column'), 10, 2);
    }

Then implement the function that adds the new column (we will remove the author column and change the date format later):

    // Handle column header
    function handle_posts_columns($columns) {
        // add 'type' column
        $columns['type'] = __('Type', $this->plugin_domain);
        return $columns;
    }

For the date key replacement, we need an extra function:

    function array_change_key_name($orig, $new, &$array) {
        foreach ($array as $k => $v)
            $return[($k === $orig) ? $new : $k] = $v;
        return (array) $return;
    }

And finally, insert a function to handle the display of information in that column:

    // Handle Type column display
    function handle_posts_custom_column($column_name, $id) {
        // 'type' column handling based on post type
        if ($column_name == 'type') {
            $type = get_post_meta($id, 'post-type', true);
            echo $type ? $type : __('Normal', $this->plugin_domain);
        }
    }

Don't forget to add the Manage page to the list of localized pages:

    // pages where our plugin needs translation
    $local_pages = array('plugins.php', 'post-new.php', 'edit.php');
    if (in_array($pagenow, $local_pages))

As a result, we now have a new column that displays the post type using information from a post custom field.

What just happened?

We used the load-edit.php action to specify that we want our hooks to be assigned only on the Manage Posts page (edit.php). This is similar to the optimization we did when we loaded the localization files.

handle_posts_columns() is a filter that accepts the columns as a parameter and allows you to insert a new column:

    function handle_posts_columns($columns) {
        $columns['type'] = __('Type', $this->plugin_domain);
        return $columns;
    }

You are also able to remove a column. This example would remove the Author column:

    unset($columns['author']);

To handle the display of information in that column, we use the handle_posts_custom_column action. The action is called for each entry (post) whenever an unknown column is encountered. WordPress passes the name of the column and the current post ID as parameters, which allows us to extract the post type from a custom field:

    function handle_posts_custom_column($column_name, $id) {
        if ($column_name == 'type') {
            $type = get_post_meta($id, 'post-type', true);

It also allows us to print it out:

            echo $type ? $type : __('Normal', $this->plugin_domain);
        }
    }

Modifying an existing column

We can also modify an existing column. Let's say we want to change the way Date is displayed.
Here are the changes we would make to the code:

    // Handle column header
    function handle_posts_columns($columns) {
        // add 'type' column
        $columns['type'] = __('Type', $this->plugin_domain);
        // remove 'author' column
        //unset($columns['author']);
        // change 'date' column
        $columns = $this->array_change_key_name('date', 'date_new', $columns);
        return $columns;
    }

    // Handle Type column display
    function handle_posts_custom_column($column_name, $id) {
        // 'type' column handling based on post type
        if ($column_name == 'type') {
            $type = get_post_meta($id, 'post-type', true);
            echo $type ? $type : __('Normal', $this->plugin_domain);
        }
        // new date column handling
        if ($column_name == 'date_new') {
            the_time('Y-m-d <br /> g:i:s a');
        }
    }

    function array_change_key_name($orig, $new, &$array) {
        foreach ($array as $k => $v)
            $return[($k === $orig) ? $new : $k] = $v;
        return (array) $return;
    }

The example replaces the date column with our own date_new column and uses it to display the date with our preferred formatting.

Manage screen search filter

WordPress allows us to show all the posts by date and category, but what if we want to show all the posts of a given post type? No problem! We can add a new filter select box straight to the Manage panel.

Time for action – Add a search filter box

Let's start by adding two more hooks to the handle_load_edit() function. The restrict_manage_posts action draws the search box, and the posts_where filter alters the database query to select only the posts of the type we want to show:

    // Manage page hooks
    function handle_load_edit() {
        // handle Manage screen functions
        add_filter('manage_posts_columns', array(&$this, 'handle_posts_columns'));
        add_action('manage_posts_custom_column', array(&$this, 'handle_posts_custom_column'), 10, 2);
        // handle search box filter
        add_filter('posts_where', array(&$this, 'handle_posts_where'));
        add_action('restrict_manage_posts', array(&$this, 'handle_restrict_manage_posts'));
    }

Let's write the corresponding function to draw the select box:

    // Handle select box for Manage page
    function handle_restrict_manage_posts() {
        ?>
        <select name="post_type" id="post_type" class="postform">
            <option value="0">View all types</option>
            <option value="normal" <?php if ($_GET['post_type'] == 'normal') echo 'selected="selected"' ?>><?php _e('Normal', $this->plugin_domain); ?></option>
            <option value="photo" <?php if ($_GET['post_type'] == 'photo') echo 'selected="selected"' ?>><?php _e('Photo', $this->plugin_domain); ?></option>
            <option value="link" <?php if ($_GET['post_type'] == 'link') echo 'selected="selected"' ?>><?php _e('Link', $this->plugin_domain); ?></option>
        </select>
        <?php
    }

And finally, we need a function that changes the query to retrieve only the posts of the selected type:

    // Handle query for Manage page
    function handle_posts_where($where) {
        global $wpdb;
        if ($_GET['post_type'] == 'photo') {
            $where .= " AND ID IN (SELECT post_id FROM {$wpdb->postmeta}
                WHERE meta_key='post-type'
                AND meta_value='" . __('Photo', $this->plugin_domain) . "')";
        } else if ($_GET['post_type'] == 'link') {
            $where .= " AND ID IN (SELECT post_id FROM {$wpdb->postmeta}
                WHERE meta_key='post-type'
                AND meta_value='" . __('Link', $this->plugin_domain) . "')";
        } else if ($_GET['post_type'] == 'normal') {
            $where .= " AND ID NOT IN (SELECT post_id FROM {$wpdb->postmeta}
                WHERE meta_key='post-type')";
        }
        return $where;
    }

What just happened?

We added a new select box to the header of the Manage panel. It allows us to filter the post types we want to show.
We added the box using the restrict_manage_posts action, which is triggered at the end of the Manage panel header and allows us to insert HTML code; we used it to draw a select box. To actually perform the filtering, we use the posts_where filter, which runs when a query is made to fetch the posts from the database (a more defensive variant is sketched after the quick reference below):

    if ($_GET['post_type'] == 'photo') {
        $where .= " AND ID IN (SELECT post_id FROM {$wpdb->postmeta}
            WHERE meta_key='post-type'
            AND meta_value='" . __('Photo', $this->plugin_domain) . "')";

If Photo is selected, we inspect the WordPress postmeta table and select the posts that have the post-type key with the value Photo.

At this point, we have a functional plugin. What we can do to improve it further is add user permission checks, so that only those users allowed to write posts and upload files can use it.

Quick reference

manage_posts_columns($columns): A filter for adding or removing columns in the Manage Posts panel. For the Manage Pages panel, use manage_pages_columns.

manage_posts_custom_column($column, $post_id): An action to display information for the given column and post. For the Manage Pages panel, use manage_pages_custom_column.

posts_where($where): A filter for the WHERE clause in the query that retrieves the posts.

restrict_manage_posts: An action that runs at the end of the Manage panel header and allows you to insert HTML.
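One caveat worth noting: handle_posts_where() above splices a translated string into SQL by hand, and it reads $_GET['post_type'] without checking whether it is set. The following is a minimal, more defensive sketch of the same method (not the book's code), using WordPress's standard $wpdb->prepare() to escape the value; only the photo branch is shown, and the other branches would be adapted the same way.

    // Sketch: a hardened handle_posts_where(), assumed to live in the same
    // plugin class as the methods above.
    function handle_posts_where($where) {
        global $wpdb;
        // Avoid PHP notices when no filter has been selected.
        $post_type = isset($_GET['post_type']) ? $_GET['post_type'] : '';
        if ($post_type == 'photo') {
            // prepare() escapes the meta value instead of raw interpolation.
            $where .= $wpdb->prepare(
                " AND ID IN (SELECT post_id FROM {$wpdb->postmeta}
                   WHERE meta_key = 'post-type' AND meta_value = %s)",
                __('Photo', $this->plugin_domain)
            );
        }
        return $where;
    }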

Taking Control of Reactivity, Inputs, and Outputs

Packt
23 Oct 2013
7 min read
Showing and hiding elements of the UI

We'll start easy with a simple function that you are certainly going to need if you build even a moderately complex application. Those of you who have been doing the extra-credit exercises and/or experimenting with your own applications will probably already have wished for this or, indeed, already found it. conditionalPanel() allows you to show and hide UI elements based on other selections within the UI. The function takes a condition (in JavaScript, but the form and syntax will be familiar from many languages) and a UI element, and displays the UI element only when the condition is true.

This is used a couple of times in the advanced GA application, and indeed in all the applications of even moderate complexity I have ever written. The following is a simpler example (from ui.R, of course, in the first section, within sidebarPanel()), which allows users who request a smoothing line to decide what type they want:

    conditionalPanel(condition = "input.smoother == true",
      selectInput("linearModel", "Linear or smoothed",
                  list("lm", "loess")))

As you can see, the condition appears very R/Shiny-like, except with the "." operator familiar to JavaScript users in place of "$", and with "true" in lower case. This is a very simple but powerful way of making sure that your UI is not cluttered with irrelevant material.

Giving names to tabPanel elements

In order to further streamline the UI, we're going to hide the hour selector when the monthly graph is displayed, and the date selector when the hourly graph is displayed (in the original screenshot, the hourly figures UI appears on the left-hand side and the monthly figures UI on the right-hand side). To do this, we first have to give the tabs of the tabbed output names:

    tabsetPanel(id = "theTabs",
      tabPanel("Summary", textOutput("textDisplay"),
               value = "summary"),
      tabPanel("Monthly figures", plotOutput("monthGraph"),
               value = "monthly"),
      tabPanel("Hourly figures", plotOutput("hourGraph"),
               value = "hourly"))

As you can see, the whole panel is given an ID (theTabs), and each tabPanel is also given a value (summary, monthly, and hourly). The selected tab is referred to in the server.R file very simply as input$theTabs.

Let's have a quick look at a chunk of code in server.R that references the tab names; this code makes sure that we subset by date only when the date selector is actually visible, and by hour only when the hour selector is actually visible. Our function to calculate and pass data now looks like the following:

    passData <- reactive({
      if(input$theTabs != "hourly"){
        analytics <- analytics[analytics$Date %in%
          seq.Date(input$dateRange[1], input$dateRange[2],
                   by = "days"),]
      }
      if(input$theTabs != "monthly"){
        analytics <- analytics[analytics$Hour %in%
          as.numeric(input$minimumTime):as.numeric(input$maximumTime),]
      }
      analytics <- analytics[analytics$Domain %in%
        unlist(input$domainShow),]
      analytics
    })

As you can see, subsetting by date is carried out only when the date selector is visible (that is, when the hourly tab is not shown), and vice versa.
Finally, we can make our changes to ui.R to remove parts of the UI based on the tab selection:

    conditionalPanel(condition = "input.theTabs != 'hourly'",
      dateRangeInput(inputId = "dateRange",
                     label = "Date range",
                     start = "2013-04-01",
                     max = Sys.Date())),
    conditionalPanel(condition = "input.theTabs != 'monthly'",
      sliderInput(inputId = "minimumTime",
                  label = "Hours of interest- minimum",
                  min = 0, max = 23, value = 0, step = 1),
      sliderInput(inputId = "maximumTime",
                  label = "Hours of interest- maximum",
                  min = 0, max = 23, value = 23, step = 1))

Note the use in the latter example of two UI elements within the same conditionalPanel() call; this helps to keep your code clean and easy to debug.

Reactive user interfaces

Another trick you will definitely want up your sleeve at some point is a reactive user interface. This enables you to change your UI (for example, the number or content of radio buttons) based on reactive functions. For example, consider an application I wrote related to survey responses across a broad range of health services in different areas. The services are related to each other in quite a complex hierarchy, and over time different areas and services respond (or cease to exist, or merge, or change their names), which means that for each time period the user might be interested in, there is a totally different set of areas and services. The only sensible solution to this problem is to have the user tell you which area and date range they are interested in, and then give them back the correct list of services that have survey responses within that area and date range.

The example we're going to look at is a little simpler than this, just to keep from getting bogged down in too much detail, but the principle is exactly the same and you should not find it difficult to adapt to your own UI. We are going to imagine that your users are interested in the individual domains from which people access the site, rather than having them lumped together as the NHS domain and all others. To this end, we will have a combo box listing each individual domain. This combo box is likely to contain a very large number of domains across the whole time range, so we will let users constrain the data by date and return only the domains that feature in that range. Not the most realistic example, but it illustrates the principle for our purposes.

Reactive user interface example – server.R

The big difference is that instead of writing your UI definition in your ui.R file, you place it in server.R and wrap it in renderUI(). Then all you do is point to it from your ui.R file. Let's have a look at the relevant bit of the server.R file:

    output$reacDomains <- renderUI({
      domainList = unique(as.character(passData()$networkDomain))
      selectInput("subDomains", "Choose subdomain", domainList)
    })

The first line takes the reactive dataset that contains only the data between the dates selected by the user and extracts all the unique values of domains within it. The second line is a widget type we have not used yet: it generates a combo box. The usual id and label arguments are given, followed by the values that the combo box can take, which are taken from the variable defined in the first line.
Reactive user interface example – ui.R

The ui.R file merely needs to point to the reactive definition, as shown in the following line of code (just add it to the list of widgets within sidebarPanel()):

    uiOutput("reacDomains")

You can now point to the value of the widget in the usual way, as input$subDomains. Note that you do not use the name defined in the call to renderUI(), that is, reacDomains, but rather the name defined within it, that is, subDomains.

Summary

It's a relatively small but powerful toolbox with which you can build a vast array of useful and intuitive applications with comparatively little effort. This article looked at fine-tuning the UI using conditionalPanel() and observe(), and at changing the UI reactively.

Implementing OpenCart Modules

Packt
24 Oct 2013
6 min read
OpenCart is an e-commerce cart application built with its own in-house framework, which uses the Model-View-Controller (MVC) pattern, so each module in OpenCart also follows the MVC pattern: the controller implements the logic, gathers data from the model, and passes it to the view for display. OpenCart modules have admin/ and catalog/ folders; the files in the admin/ folder control the module's settings, while the files in the catalog/ folder handle the presentation layer (frontend).

Learning how to clone and write code for OpenCart modules

We assume that you already know PHP, have installed OpenCart, are familiar with the OpenCart backend and frontend, and have some PHP coding experience. You are going to create a Hello World module which has just one input box in the admin settings and shows the same content at the frontend.

The first step in module creation is choosing a unique name, so that there will be no conflict with other modules. The same unique name is used to create the file names and the class names that extend the controller and model. There are generally six to eight files that need to be created for each module, and they follow a similar structure; if the module interacts with database tables, we have to create two extra models. The file structure is divided into two sections, admin and catalog: the admin folders and files deal with the module's settings and data handling, while the catalog folders and files handle the frontend.

Let's start with an easy way to make the module. We are going to duplicate OpenCart's default google_talk module and change it into the Hello World module. We are using Dreamweaver to work with the files.

Changes made in the admin folder

Go to admin/controller/module/, copy google_talk.php, paste it into the same folder, rename it helloworld.php, and open it in your favorite text editor. Find the following line:

    class ControllerModuleGoogleTalk extends Controller {

Change the class name as follows:

    class ControllerModuleHelloworld extends Controller {

Now search for google_talk and replace all occurrences with helloworld, then save the file.

Go to admin/language/english/module/, copy google_talk.php, paste it into the same folder, rename it helloworld.php, and open it. Look for the following line of code:

    $_['entry_code'] = 'Google Talk Code:<br /> <span class="help">Goto <a href="http://www.google.com/talk/service/badge/New" target="_blank"><u>Create a Google Talk chatback badge</u></a> and copy &amp; paste the generated code into the text box.</span>';

Replace it with the following line of code:

    $_['entry_code'] = 'Hello World Content';

Then find Google Talk and replace all occurrences with Hello World, and save the file.

Go to admin/view/template/module/, copy the google_talk.tpl file, paste it into the same folder, rename it helloworld.tpl, open it, find google_talk, replace it with helloworld, and save.
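After these renames, the admin language file reduces to something like the following minimal sketch. Only entry_code comes from the steps above; the heading_title key is an assumption based on OpenCart's usual module language conventions, and your file may contain further keys.

    <?php
    // admin/language/english/module/helloworld.php
    // Minimal sketch of the renamed language file (heading_title assumed).
    $_['heading_title'] = 'Hello World';
    $_['entry_code']    = 'Hello World Content';
    ?>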
Changes made in the catalog folder

Go to catalog/controller/module/, copy the google_talk.php file, paste it into the same folder, rename it helloworld.php, and open it. Look for the following line:

    class ControllerModuleGoogleTalk extends Controller {

Change the class name as follows:

    class ControllerModuleHelloworld extends Controller {

Now search for google_talk and replace all occurrences with helloworld, and save the file.

Go to catalog/language/english/module/, copy the google_talk.php file, paste it into the same folder, rename it helloworld.php, open it, look for Live Chat, replace it with Hello World, and save.

Go to catalog/view/theme/default/template/module/, copy the google_talk.tpl file, paste it into the same folder, and rename it helloworld.tpl.

With the preceding file and code changes, our Hello World module is ready to be installed. Log in to the admin section, go to Extensions | Module, look for Hello World, click on [Install], and then click on [Edit]. Type the content that you would like to show at the frontend into the Hello World Content field, click on the Add Module button, adjust the settings as needed, and click on Save.

Understanding the global library methods used in OpenCart

OpenCart has many predefined methods which can be called wherever they are needed: in a controller, in a model, or in a view template file. You can find the system-level library files in system/library/. For example (see the usage sketch at the end of this article):

    $this->customer->login($email, $password, $override = false)

This logs a customer in. It checks the customer's username and password if $override is passed as false; otherwise it checks only the current logged-in status and the e-mail. If it finds a matching entry, the cart and wishlist entries are retrieved, and the customer ID, first name, last name, e-mail, telephone, fax, newsletter subscription status, customer group ID, and address ID become globally accessible for the customer. It also records the IP address from which the customer logs in.

Developing and customizing modules, pages, order totals, shipping, and payment extensions in OpenCart

We describe OpenCart's featured module, create a feedback module and a tips module, and show how the code works and is managed. We help you learn how to make pages and modules in OpenCart, and to visualize the database structure: how data is saved per language and per store, which helps OpenCart programmers understand and follow the patterns of OpenCart's coding style. We describe how forms work, how to list data from the database, how editing works in a module, and how data is saved. We also show how to code shipping, order total, and payment modules in OpenCart, and we outline how templates, models, and controllers work for extensions.

Summary

In this way we have learned how to clone and write the code for an OpenCart module, covering the changes made in the admin and catalog folders, the global library methods used in OpenCart, and the ways to code OpenCart extensions.
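As promised above, here is a rough illustration of the customer login call described earlier. This is a sketch only: login() comes from the text above, while isLogged() and getFirstName() are assumptions based on OpenCart's customer library conventions, not verbatim core code.

    <?php
    // Sketch: using the global customer library from catalog-side code.
    if (!$this->customer->isLogged()) {
        // Validates the e-mail/password pair because $override defaults to false.
        $this->customer->login($email, $password);
    }

    if ($this->customer->isLogged()) {
        // After a successful login, customer details are globally accessible.
        $greeting = 'Hello, ' . $this->customer->getFirstName();
    }
    ?>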

An Introduction to Risk Analysis

Packt
18 Feb 2013
21 min read
Risk analysis

First, we must understand what risk is and how it is calculated, and then implement a solution to mitigate or reduce the calculated risk. At this point in the process of developing agile security architecture, we have already defined our data. The following sections assume that we know what the data is, just not the true impact to the enterprise if a threat is realized.

What is risk analysis?

Simply stated, risk analysis is the process of assessing the components of risk: threats, impact, and probability as they relate to an asset, in our case enterprise data. To ascertain risk, the probability of impact to enterprise data must first be calculated. A simple risk analysis output may be the decision to spend capital to protect an asset, based on the value of the asset and the scope of impact if the risk is not mitigated. This is the most general form of risk analysis, and there are several methods that can be applied to produce a meaningful output.

Risk analysis is directly affected by the maturity of the organization, in terms of being able to show value to the enterprise as a whole and understanding the applied risk methodology. If the enterprise does not have a formal risk analysis capability, it will be difficult for the security team to use this method to properly implement security architecture for enterprise initiatives. Without this capability, the enterprise will either spend on the products with the best marketing, or not spend at all. Let's take a closer look at the risk analysis components and figure out where useful analysis data can be obtained.

Assessing threats

First, we must define what a threat is in order to identify probable threats. It may be difficult to determine threats to enterprise data if this analysis has never been completed. A threat is anything that can act negatively towards enterprise assets: a person, a virus, malware, or a natural disaster. Because of this broad scope, threat actions may be purposeful or unintentional in nature, adding to the unpredictability of impact. Once a threat is defined, its attributes must be identified and documented. The documentation of threats should include the type of threat, identified threat groupings, motivations if any, and methods of action.

To understand the threats pertinent to the enterprise, researching past events may be helpful. Historically, it has been challenging to get realistic breach data, but better reporting of post-breach findings continues to reduce the uncertainty of analysis. Another way to get data is to leverage the security technologies already implemented, to build a realistic perspective of threats. The following are a few sample questions to guide you in the discovery of threats:

Has anything been detected by the existing infrastructure?
What are others in the same industry observing?
What post-breach data is available in the same industry vertical?
Who would want access to this data?
What would motivate a person to attempt unauthorized access to the data?
Typical motivations include:

Data theft
Destruction
Notoriety
Hacktivism
Retaliation

A sample table of data type, threat, and motivation is shown as follows:

Data                                         Threat                  Motivation
Credit card numbers                          Hacker                  Theft, Cybercrime
Trade secrets                                Competitor              Competitive advantage
Personally Identifiable Information (PII)    Disgruntled employee    Retaliation, Destruction
Company confidential documents               Accidental leak         None
Client list                                  Natural disaster        None

This should be developed with as much detail as possible to form a realistic view of threats to the enterprise. There may also be several variations of threats and motivations for threat action on enterprise data. For example, trade secrets may be accessed by a competitor for competitive advantage, or a hacker may act as part of hacktivism to bring negative press to the enterprise. The more you can elaborate on the possible threats and motivations, the better you will be able to reduce the list to probable threats by challenging the data you have gathered. It is important to continually challenge the logic used, in order to maintain the most realistic perspective.

Assessing impact

Now that the probable threats have been identified, what kind of damage can be done, or what negative impact can be enacted upon the enterprise and its data? Impact is the outcome of threats acting against the enterprise. This could be a denial-of-service state, where the agent, a hacker, uses a tool to starve the enterprise's Internet web servers of resources, denying service to legitimate users. Another impact could be the loss of customer credit cards, resulting in online fraud, reputation loss, and countless dollars in cleanup and remediation efforts.

There are immediate impacts and residual impacts. Immediate impacts are rather easy to determine because, typically, this is what we see in the news if the issue is big enough. Hopefully, the impact data does not come from first-hand experience; but if it does, executives should take action and learn from their mistakes. If there is no real-life experience of the impact, researching breach data will help, using Internet sites such as DATALOSS db (http://datalossdb.org). Also, understanding the value of the data to the enterprise and its customers will aid in impact calculation. I think the latter impact analysis is more useful, but if the enterprise is unsure, then relying on breach data may be the only option. The following are a few sample discovery questions for business impact analysis:

How is the enterprise affected by threat actions?
Will we go out of business?
Will we lose market share?
If the data is deleted or manipulated, can it be recovered or restored?
If the building is destroyed, do we have disaster recovery and business continuity capabilities?

To get a more accurate assessment of the probable impact or total cost to the enterprise, map out what data is most desirable to steal, destroy, or manipulate. Align the identified threats to the identified data, and apply an impact level indicating whether the enterprise would suffer critical to minor loss. These should be as accurate as possible; work the scenarios out on paper and base the impact analysis on the outcome of the exercises. The following is a sample table presenting the identification and assessment of impact based on threat, for a retailer. This is generally called a business impact analysis.
Data                              Threat                  Impact
Credit card numbers               Hacker                  Critical
Trade secrets                     Competitor              Medium
PII                               Disgruntled employee    High
Company confidential documents    Accidental leak         Low
Client list                       Natural disaster        Medium

The enterprise's industry vertical may affect the impact analysis. For instance, a retailer may suffer greater impact if credit card numbers are stolen than if its client list is stolen. Both scenarios have impact, but one may warrant greater protection and more restricted access to limit the scope of impact and reduce immediate and residual loss. Business impact should be measured by how the threat actions affect the business overall: is it an annoyance, or does it mean the business can no longer function? Natural disasters should also be accounted for when assessing enterprise risk.

Assessing probability

Now that all conceivable threats have been identified, along with the business impact of each scenario, how do we really determine risk? Shouldn't risk be based on how likely the threat is to take action, succeed, and cause an impact? Yes! The threat can be the most perilous thing imaginable, but if threat actions may only occur once in three thousand years, investment in protecting against the threat may not be warranted, at least in the near term. Probability data is as difficult, if not more difficult, to find than threat data. However, this calculation has the most influence on the derived risk. If the identified impact is expected to happen twice a year and the business impact is critical, perhaps security budget should be allocated to security mechanisms that mitigate or reduce the impact. The risk in the latter scenario is higher because it is more probable: not merely possible, but probable. Anything is possible.

I have heard an analogy that makes the point. In the game of Russian roulette, a semi-automatic pistol either has a bullet in the chamber or it does not; that is possibility. With a revolver and a quick spin of the cylinder, you now have a 1 in 6 chance that a bullet will be fired when the firing pin strikes forward. This is oversimplified to illustrate possibility versus probability: there are several variables in the example that could affect the outcome, such as a misfire, or the safety catch being enabled, stopping the gun's ability to fire, and these would be factored in to form an accurate risk value. Make sense? This is how we need to approach probability.

Technically, it is a semi-accurate estimation, because there is simply not enough detailed information on breaches and attacks to draw absolute conclusions. One approach is to research what is happening in the same industry, using online resources and peer groups, and then make intelligent estimates as to whether the enterprise could be affected too. Generally, there are outlier scenarios that require the utmost attention regardless; start here if these have not been identified as probable risk scenarios for the enterprise. The following are a few sample probability estimation questions:

Has this event occurred before at the enterprise?
Is there data to suggest it is happening now?
Are there documented instances at similar enterprises?
Do we know anything with regard to occurrence?
Is the identified threat and impact really probable?
The following table continues the risk analysis for our fictional retailer:

Data                              Threat                  Impact      Probability
Credit card numbers               Hacker                  Critical    High
Trade secrets                     Competitor              Medium      Low
PII                               Disgruntled employee    High        Medium
Company confidential documents    Accidental leak         Low         Low
Client list                       Natural disaster        Medium      High

Based on the outcome of the probability exercises for the identified threats and impacts, risk can be calculated and the appropriate course(s) of action developed and implemented.

Assessing risk

Now that the enterprise has agreed on what data has value, identified the threats to the data, rated the impact to the enterprise, and estimated the probability of the impact occurring, the next logical step is to calculate the risk of the scenarios. Essentially, there are two methods of analyzing and presenting risk: qualitative and quantitative. The decision to use one over the other should be based on the maturity of the enterprise's risk office. In general, a quantitative risk analysis will use descriptive labels like a qualitative method does, but with more financial and mathematical analysis behind them.

Qualitative risk analysis

Qualitative risk analysis provides a perspective of risk in levels, with labels such as Critical, High, Medium, and Low. The enterprise must still define what each level means in a general financial perspective. For instance, a Low risk level may equate to a monetary loss of $1,000 to $100,000. The dollar ranges associated with each risk level will vary by enterprise and must be agreed on by the entire enterprise, so that when risk is discussed, everyone knows what each label means financially. Do not confuse this estimated financial loss with the more detailed quantitative risk analysis approach; it is a simple valuation metric for deciding how much investment should be made based on probable monetary loss. The following section is an example qualitative risk analysis, presenting the type of input required. Notice that this is not a deep analysis of each input; it is designed to provide a relatively accurate perspective of the risk associated with the scenario being analyzed.

Qualitative risk analysis exercise

Scenario: Hacker attacks the website to steal credit card numbers located in a backend database.
Threat: External hacker.
Threat capability: Novice to pro.
Threat capability logic: There are several script-kiddie level tools available to wage SQL injection attacks. SQL injection is also well documented, and professional hackers can use advanced techniques in conjunction with the automated tools.
Vulnerability: 85 percent (how effective the threat would be against the current mitigating mechanisms).
Estimated impact: High, Medium, or Low, as indicated in the following table.

Risk      Estimated loss ($)
High      > 1,000,000
Medium    500,000 to 900,000
Low       < 500,000

Quantitative risk analysis

Quantitative risk analysis is an in-depth assessment of what the monetary loss to the enterprise would be if the identified risk were realized. To facilitate this analysis, the enterprise must have a good understanding of its processes, in order to determine a relatively accurate dollar amount for items such as systems, data restoration services, and the man-hour breakdown for recovery or remediation of an impacting event.
Typically, enterprises with a mature risk office will undertake this type of analysis to drive priority budget items, or to find areas where insurance should be increased, effectively transferring business risk. It also allows for accurate communication to the board and enterprise executives, so that the amount of risk the enterprise has assumed is known at any given time. With the quantitative approach, a more accurate assessment of the threat types, threat capabilities, vulnerability, threat action frequency, and expected loss per threat action is required, and it must be as accurate as possible. As with qualitative risk analysis, the output of this analysis has to be compared to the cost of mitigating the identified threat. Ideally, the cost to mitigate will be less than the loss expectancy over a determined period of time; this is a simple return on investment (ROI) calculation. Let's look again at the scenario used in the qualitative analysis and run it through a quantitative analysis. We will then compare the result against the price of a security product that would mitigate the risk, to see if it is worth the capital expense.

Before we begin the quantitative risk analysis, there are a couple of terms that need to be explained:

Annual loss expectancy (ALE): The ALE is the calculation of the financial loss to the enterprise if the threat event were to occur over a single year. This is directly related to threat frequency. In our scenario the event is expected once every three years, so dividing the single loss expectancy by the annual rate of occurrence provides the ALE.

Cost of protection (COP): The COP is the capital expense associated with the purchase or implementation of a security mechanism to mitigate or reduce the risk scenario. An example would be a firewall that costs $150,000, or $50,000 per year of protection over the loss expectancy period. If the cost of protection over the same period is lower than the loss, this is a good indication that the capital expense is financially worthwhile.

Quantitative risk analysis exercise

Scenario: Hacker attacks the website to steal credit card numbers located in a backend database.
Threat: External hacker.
Threat capability: Novice to pro.
Threat capability logic: There are several script-kiddie level tools available to wage SQL injection attacks. SQL injection is also well documented, and professional hackers can use advanced techniques in conjunction with the automated tools.
Vulnerability: 85 percent (how effective the threat would be against the current mitigating mechanisms).
Single loss expectancy: $250,000.
Threat frequency: 3 (roughly once every three years).
ALE: $83,000.
COP: $150,000 (over 3 years).

We divide both the total loss and the cost of protection over three years because capital expenses are typically depreciated over three to four years, and the loss is expected once every three years. This gives us the ALE and the COP for the cost-benefit equation. This is a simplified example, but the math works out as follows:

    $83,000 (ALE) - $50,000 (COP) = $33,000 (cost benefit)

The annual loss is $33,000 more than the annual cost of protecting against the threat. The assumption in our example is that the $250,000 figure is 85% of the total asset value; because we retain 15% protection capability, the total exposure is approximately $294,000. This step can be cut out of the equation if the ALE and rate of occurrence are already known.
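Written out as formulas, this is simply a restatement of the numbers above, with the annualized rate of occurrence (ARO) equal to 1/3 because the event is expected once every three years:

    \begin{aligned}
    \text{ALE} &= \text{SLE} \times \text{ARO} = \$250{,}000 \times \tfrac{1}{3} \approx \$83{,}000 \\
    \text{COP}_{\text{annual}} &= \$150{,}000 / 3\ \text{years} = \$50{,}000 \\
    \text{Cost benefit} &= \text{ALE} - \text{COP}_{\text{annual}} = \$33{,}000
    \end{aligned}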
When trying to figure out threat capability, be realistic about the threat first. This will help you better assess vulnerability, because you will have a more accurate perspective on how realistic the threat is to the enterprise. For instance, if your scenario requires cracking advanced encryption and extensive system experience, the threat capability would be expert-level, indicating that the current security controls may be acceptable against the majority of threat agents, reducing the probability and the calculated risk. We tend to exaggerate in security to justify a purchase. We need to stop this trend and focus on the best areas in which to spend precious budget dollars. The ultimate goal of a quantitative risk analysis is to ensure that spending on protection does not far exceed the threat the enterprise is protecting against. This is beneficial for the security team in justifying the expense of security budget line items. When the analysis is complete, there should still be a qualitative risk label associated with the risk. Using the above scenario, an annualized risk of $50,000 indicates that this scenario is extremely low risk based on the risk levels defined in the qualitative exercise, even if the SLE is used. Does this analysis accurately represent acceptable loss? After an assessment is complete, it is good practice to ensure all assumptions still hold true, especially the risk labels and the associated monetary amounts.

Applying risk analysis to trust models

Now we can apply our risk methodology to our trust models to decide whether we can continue with our implementation as is, or whether we need to change our approach based on risk. Our trust models, which are essentially use cases, rely on completing the risk analysis, which in turn decides the trust level and the security mechanisms required to reduce the enterprise risk to an acceptable level. It would be foolish to think that we can shove all requests for similar access directly into one of these buckets without further analysis to determine the real risk associated with the request. After completing one of the risk analysis types we just covered, risk guidance can be provided for the scenario (and I stress guidance). For the sake of simplicity, a single implementation path may be chosen, but it will lead to compromises in the overall security of the enterprise, and this is cautioned against.

I have re-presented the table for one scenario, the external application user. This is a better representation of how a trust model should look, with risk and security enforcement established for the scenario. If an enterprise is aware of how it conducts business, then a focused effort in this area should produce a realistic list of interactions with data: by whom, with what level of trust, and, based on risk, what controls need to be present and enforced by policy and standards.

User type                       External
Allowed access                  Tier 1 DMZ only, Least privilege
Trust level                     1 - Not trusted
Risk                            Medium
Policy                          Acceptable use, Monitoring, Access restrictions
Required security mechanisms    FW, IPS, Web application firewall

The user is assumed to have access to log in to the web application and to have further possible interaction with the backend database(s). This should be a focal point for testing, because it is the biggest area of risk in this scenario. Threats such as SQL injection, which can be waged against a web application with little to no experience, are commonplace. Enterprises that have e-commerce websites typically do not restrict who can create an account. This should feed into the trust decision and, ultimately, the security architecture applied.
Deciding on a risk analysis methodology

We have covered the two general types of risk analysis, qualitative and quantitative, but which is best? It depends on several factors: the risk awareness of the enterprise, the risk analysts' capabilities, the available risk analysis data, and the influence of risk in the enterprise. If the idea of risk analysis, or IT risk analysis, is new to the enterprise, then a slow approach with qualitative analysis is recommended, to get everyone thinking about risk and what it means to the business. It will be imperative to get enterprise-wide agreement on the risk labels. Using the less involved method does not mean you will not be questioned on the data used in the analysis, so be prepared to defend the data and explain the estimation methods leveraged.

If it is decided to use a quantitative risk analysis method, a considerable amount of effort is required, along with meticulous loss figures and knowledge of the environment. This method is considered the most effective, requiring risk expertise, resources, and an enterprise-wide commitment to risk analysis. It is more accurate, though it can be argued that since both methods require some level of estimation, the accuracy lies in accurate estimation skills. I use the Douglas Hubbard school of thought on estimating with 90 percent accuracy; you will find his work at his website, http://www.hubbardresearch.com/, and I highly recommend his title How to Measure Anything: Finding the Value of "Intangibles" in Business, Tantor Media, for learning estimation skills. It may be beneficial to have an external firm perform the analysis if the engagement is significant in size. The benefit of both methods should be that the enterprise is able to make risk-aware decisions on how to implement IT solutions securely. Both should be presented with common risk levels such as High, Medium, and Low: essentially a common language everyone can speak, knowing the range of financial risk without all the intimate details of how it was arrived at.

Other thoughts on risk and new enterprise endeavors

Now that you have been presented with the types of risk analysis, they should be applied as tools to best approach the new technologies being implemented in the networks of our enterprises. Unfortunately, broad brush strokes of trusted and untrusted approaches are being applied that may or may not be accurate without risk analysis as a decision input. Two examples where this can be very costly are the new BYOD and cloud initiatives. At first glance, these are the two riskiest business maneuvers an enterprise can attempt from an information security perspective. Deciding whether this really is the case requires an analysis based on trust models and data-centric security architecture. If the proper security mechanisms are implemented, and security is applied from users to data, the risk can be reduced to a tolerable level. The BYOD business model has many positive benefits for the enterprise, especially capital expense reduction. However, implementing a BYOD or cloud solution without further analysis can introduce risk that significantly outweighs the benefit of the initiative. Do not be quick to spread fear in order to avoid facing the changing landscape we have worked so hard to build and secure. It is different; but at one time, what we know today as the norm was new too. Be cautious but creative, or IT security will be discredited for what will be perceived as a difficult interaction. This is not the desired perception for IT security.
Strive to understand the business case and the risk to business assets (data, systems, people, processes, and so on), and then apply sound security architecture as we have discussed so far. Begin evangelizing the new approach to security in the enterprise by developing trust models that everyone can understand. Use this as the introduction to agile security architecture, and get input to create models based on risk. By providing a risk-based perspective on emerging technologies and other radical requests, a methodical approach can bring better adoption and overall increased security in the enterprise.

Summary

In this article, we took a look at analyzing risk, presenting quantitative and qualitative methods, including exercises to understand each approach. The overall goal of security is to be integrated into business processes, so that it is truly a part of the business and not an expensive afterthought simply there to patch a security problem.

The Session and the User with Joomla! 1.5: Part 1

Packt
16 Nov 2009
9 min read
Introduction

When a user starts browsing a Joomla! web site, a PHP session is created. Hidden away in the session is user information; this information represents either a known registered user or a guest. We can interact with the session using the session handler, a JSession object. When we work with the session in Joomla!, we must not use the global PHP $_SESSION variable or any of the PHP session functions.

Getting the session handler

To interact with the session, we use the session handler; this is a JSession object that is globally available via the static JFactory interface. It is imperative that we only use the global JSession object to interact with the PHP session. Directly using $_SESSION or any of the PHP session functions could have unintended consequences.

How to do it...

To retrieve the JSession object, we use JFactory. As JFactory returns an object, we must use =& when assigning the object to a variable. If we do not, and our server is running a PHP version prior to PHP 5, we will inadvertently create a copy of the global JSession object.

    $session =& JFactory::getSession();

How it works...

If we look at the JSession class, we will notice that there is a getInstance() method. It is tempting to think of this as synonymous with the JFactory::getSession() method. There is, however, an important difference: the JSession::getInstance() method requires configuration parameters, whereas the JFactory::getSession() method accepts configuration parameters but does not require them. The first time the JFactory::getSession() method is executed, it is done by the JApplication object (often referred to as mainframe); this creates the session handler, and it is the application and JFactory that deal with the configuration of the session. When JFactory::getSession() is subsequently executed, the session handler already exists, so the internal _createSession() method is not executed and the existing object is simply returned. (The original recipe illustrates this with a sequence diagram; additional complexity is outside the scope of this recipe.)

See also

For information about setting and retrieving values in the session, refer to the next two recipes, Adding data to the session and Getting session data.

Adding data to the session

Data that is set in the session is maintained between client requests. For example, we could display announcements at the top of all our pages and include an option to hide them. Once a user opts to hide the announcements, by setting a value in the session we would be able to 'remember' this choice throughout the user's visit. To put this into context, we would set a session value hideAnnouncements to true when a user opts to hide announcements. In subsequent requests, we would be able to retrieve the value of hideAnnouncements from the session, and its state would remain the same.

In Joomla!, session data is maintained using a JSession object, which we can retrieve using JFactory. This recipe explains how to set data in the session using this object instead of the global PHP $_SESSION variable.

Getting ready

We must get the session handler, a JSession object. For more information, refer to the first recipe in this article, Getting the session handler.

    $session =& JFactory::getSession();

How to do it...
Adding data to the session

Data that is set in the session is maintained between client requests. For example, we could display announcements at the top of all our pages and include an option to hide the announcements. Once a user opts to hide the announcements, by setting a value in the session, we would be able to 'remember' this throughout the user's visit. To put this into context, we would set a session value hideAnnouncements to true when a user opts to hide announcements. In subsequent requests, we will be able to retrieve the value of hideAnnouncements from the session, and its state will remain the same. In Joomla!, session data is maintained using a JSession object, which we can retrieve using JFactory. This recipe explains how to set data in the session using this object instead of using the global PHP $_SESSION variable.

Getting ready

We must get the session handler, a JSession object. For more information, refer to the first recipe in this article, Getting the session handler.

$session =& JFactory::getSession();

How to do it...

The JSession::set() method is used to set a value in the session. The first parameter is the name of the value we want to set; the second is the value itself.

$session->set('hideAnnouncements', $value);

The JSession::set() method returns the previous value. If no value was previously set, the return value will be null.

// set the new value and retrieve the old
$oldValue = $session->set('hideAnnouncements', $value);
echo 'Hide Announcement was ' . ($oldValue ? 'true' : 'false');
echo 'Hide Announcement is now ' . ($value ? 'true' : 'false');

Lastly, we can remove data from the session by setting the value to null.

// remove something
$session->set('hideAnnouncements', null);

How it works...

The session contains a namespace-style data structure. Namespaces are required by JSession, and by default all values are set in the default namespace. To set a value in a different namespace, we use the optional third JSession::set() parameter.

$session->set('something', $value, 'mynamespace');

Sessions aren't restricted to storing basic values such as strings and integers. The JUser object is a case in point—every session includes an instance of JUser that represents the user the session belongs to. If we add objects to the session, we must be careful: all session data is serialized. To successfully unserialize an object, the class must already be known when the session is restored. For example, the JObject class is safe to serialize because it is loaded prior to restoring the session.

$value = new JObject();
$session->set('aJObject', $value);

If we attempt to do this with a class that is not loaded when the session is restored, we will end up with an object of type __PHP_Incomplete_Class. To overcome this, we can serialize the object ourselves.

// serialize the object in the session
$session->set('anObject', serialize($anObject));

To retrieve this, we must unserialize the object after we have loaded the class. If we do not, we will end up with a string that looks something like this: O:7:"MyClass":1:{s:1:"x";s:10:"some value";}.

// load the class
include_once(JPATH_COMPONENT . DS . 'myclass.php');
// unserialize the object from the session
$value = unserialize($session->get('anObject'));

There's more...

There is an alternative way of setting data in the session. User state data is also part of the session, but this data allows us to save session data using more complex hierarchical namespaces, for example com_myextension.foo.bar.baz. To access this session data, we use the application object instead of the session handler.

// get the application
$app =& JFactory::getApplication();
// set some data
$app->setUserState('com_myextension.foo.bar.baz', $value);

An advantage of using user state data is that we can combine it with request data. For more information refer to the next recipe, Getting session data. The JApplication::setUserState() method is documented as returning the old value. However, a bug prevents this from working; instead, the new value is returned.

See also

For information about retrieving values from the session, refer to the next recipe, Getting session data.
Getting session data

Data that is set in the session is maintained between client requests. For example, if during one request we set the session value of hideAnnouncements to true, as described in the previous recipe, in subsequent requests we will be able to retrieve the value of hideAnnouncements and its state will remain the same. In Joomla!, session data is maintained using the global JSession object. This recipe explains how to get data from the session using this object instead of from the global PHP $_SESSION variable.

Getting ready

We must get the session handler, a JSession object. For more information, refer to the first recipe in this article, Getting the session handler.

$session =& JFactory::getSession();

How to do it...

We use the JSession::get() method to retrieve data from the session.

$value = $session->get('hideAnnouncements');

If the value we attempt to retrieve is not set in the session, the value null is returned. It is possible to specify a default value, which will be returned in instances where the value is not currently set in the session.

$defaultValue = false;
$value = $session->get('hideAnnouncements', $defaultValue);

How it works...

The session contains a namespace-style data structure. Namespaces are required by JSession, and by default all values are retrieved from the default namespace. To get a value from a different namespace, we use the optional third JSession::get() parameter.

$value = $session->get('hideAnnouncements', $defaultValue, 'mynamespace');

It is possible to store objects in the session. However, these require special attention when we extract them from the session. For more information about storing objects in the session, refer to the previous recipe, Adding data to the session.

There's more...

There is an alternative way of getting data from the session. User state data is also part of the session. The user state data allows us to store session data using more complex hierarchical namespaces, for example com_myextension.foo.bar.baz. To access user state data, we use the application object instead of the session handler.

// get the application (this is the same as $mainframe)
$app =& JFactory::getApplication();
// get some user state data
$value = $app->getUserState('com_myextension.foo.bar.baz');

User state data is usually combined with request data. For example, if we know the request may include a value that we want to use to update the user state data, we use the JApplication::getUserStateFromRequest() method.

// get some user state data and update from request
$value = $app->getUserStateFromRequest(
    'com_myextension.foo.bar.baz', 'inputName', $defaultValue, 'INTEGER');

The four parameters we provide this method with are the path to the value in the state data, the name of the request input from which we want to update the value, the default value (which is used if there is no value in the request), and the type of value. This method is used extensively for dealing with display state data, such as pagination.

// get global default pagination limit
$defaultListLimit = $app->getCfg('list_limit');
// get limit based on user state data / request data
$limit = $app->getUserStateFromRequest(
    'global.list.limit', 'limit', $defaultListLimit, 'INTEGER');

See also

For information about setting values in the session, refer to the previous recipe, Adding data to the session.
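Pulling the recipes together, a toggle like the announcements example could be handled as follows. This is a minimal sketch; the surrounding task and markup are illustrative, and only the documented JSession::get() and JSession::set() methods are used:

// a hypothetical task that flips the announcements flag for this visit
$session =& JFactory::getSession();
$hidden = (bool) $session->get('hideAnnouncements', false);
$session->set('hideAnnouncements', !$hidden);

// later, in the layout, read the flag before rendering the block
if (!$session->get('hideAnnouncements', false)) {
    echo '<div id="announcements">Latest announcements...</div>';
}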

Developing a Web Service with CXF

Packt
07 Jan 2010
9 min read
In this article we will study the sample Order Processing Application and discuss the following points:

Developing a service
Developing a client

The Order Processing Application

The objective of the Order Processing Application is to process a customer order. The order process functionality will generate the customer order, thereby making the order valid and approved. A typical scenario is a customer making an order request to buy a particular item. The purchase department receives the order request from the customer and prepares a formal purchase order. The purchase order holds the details of the customer, the name of the item to be purchased, the quantity, and the price. Once the order is prepared, it is sent to the Order Processing department for the necessary approval. If the order is valid and approved, the department generates a unique order ID and sends it back to the Purchase department, which communicates the order ID back to the customer. For simplicity, we will look at the following use cases:

Prepare an order
Process the order

The client application will prepare an order and send it to the server application through a business method call. The server application contains a web service that processes the order and generates a unique order ID. The generation of the unique order ID signifies order approval. In real-world applications a unique order ID is usually accompanied by the date the order was approved. However, in this example we keep it simple by generating only the order ID.

Developing a service

Let's look specifically at how to create an Order Processing Web Service and then register it as a Spring bean using a JAX-WS frontend. The Sun-based JAX-WS specification can be found at the following URL: http://jcp.org/aboutJava/communityprocess/final/jsr224/index.html

The JAX-WS frontend offers two ways of developing a web service—Code-first and Contract-first. We will use the Code-first approach; that is, we will first create a Java class and convert it into a web service component. The first set of tasks will be to create server-side components. In web service terminology, Code-first is termed the Bottom Up approach, and Contract-first is referred to as the Top Down approach. To achieve this, we typically perform the following steps:

Create a Service Endpoint Interface (SEI) and define a business method to be used with the web service.
Create the implementation class and annotate it as a web service.
Create beans.xml and define the service class as a Spring bean using a JAX-WS frontend.

Creating a Service Endpoint Interface (SEI)

Let's first create the SEI for our Order Processing Application. We will name our SEI OrderProcess. The following code illustrates the OrderProcess SEI:

package demo.order;

import javax.jws.WebMethod;
import javax.jws.WebService;

@WebService
public interface OrderProcess {
    @WebMethod
    String processOrder(Order order);
}

As you can see from the preceding code, we created a Service Endpoint Interface named OrderProcess. The SEI is just like any other Java interface. It defines an abstract business method, processOrder. The method takes an Order bean as a parameter and returns an order ID String value. The goal of the processOrder method is to process the order placed by the customer and return the unique order ID. One significant thing to observe is the @WebService annotation, placed right above the interface definition.
It signifies that this interface is not an ordinary interface but a web service interface. This interface is known as a Service Endpoint Interface and has a business method exposed as a service method to be invoked by the client. The @WebService annotation is part of the JAX-WS annotation library. JAX-WS provides a library of annotations to turn Plain Old Java classes into web services and specifies a detailed mapping from a service defined in WSDL to the Java classes that will implement that service. The javax.jws.WebService annotation also comes with attributes that completely define a web service. For the moment we will ignore these attributes and proceed with our development.

The javax.jws.WebMethod annotation is optional and is used for customizing the web service operation. The @WebMethod annotation provides the operation name and the action elements, which are used to customize the name attribute of the operation and the SOAP action element in the WSDL document. The following code shows the Order class:

package demo.order;

import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "Order")
public class Order {
    private String customerID;
    private String itemID;
    private int qty;
    private double price;

    // Constructor
    public Order() {
    }

    public String getCustomerID() {
        return customerID;
    }

    public void setCustomerID(String customerID) {
        this.customerID = customerID;
    }

    public String getItemID() {
        return itemID;
    }

    public void setItemID(String itemID) {
        this.itemID = itemID;
    }

    public int getQty() {
        return qty;
    }

    public void setQty(int qty) {
        this.qty = qty;
    }

    public double getPrice() {
        return price;
    }

    public void setPrice(double price) {
        this.price = price;
    }
}

As you can see, we have added an @XmlRootElement annotation to the Order class. @XmlRootElement is part of the Java Architecture for XML Binding (JAXB) annotation library. JAXB provides data-binding capabilities with a convenient way to map an XML schema to a representation in Java code. JAXB shields developers from the conversion of XML schema messages in SOAP messages to Java code, without requiring them to know about XML and SOAP parsing. CXF uses JAXB as the default data-binding component.

The @XmlRootElement annotation associated with the Order class maps the Order class to the XML root element. The attributes contained within the Order object are by default mapped to @XmlElement. @XmlElement annotations are used to define elements within the XML. The @XmlRootElement and @XmlElement annotations allow you to customize the namespace and name of the XML element. If no customizations are provided, the JAXB runtime by default uses the same name of the attribute for the XML element. CXF handles this mapping of Java objects to XML.

Developing a service implementation class

We will now develop the implementation class that will realize our OrderProcess SEI. We will name this implementation class OrderProcessImpl.
The following code illustrates the service implementation class OrderProcessImpl:

@WebService
public class OrderProcessImpl implements OrderProcess {

    public String processOrder(Order order) {
        String orderID = validate(order);
        return orderID;
    }

    /**
     * Validates the order and returns the order ID
     */
    private String validate(Order order) {
        String custID = order.getCustomerID();
        String itemID = order.getItemID();
        int qty = order.getQty();
        double price = order.getPrice();
        if (custID != null && itemID != null && !custID.equals("")
                && !itemID.equals("") && qty > 0 && price > 0.0) {
            return "ORD1234";
        }
        return null;
    }
}

As we can see from the preceding code, our implementation class OrderProcessImpl is pretty straightforward. It also has the @WebService annotation defined above the class declaration. The class OrderProcessImpl implements the OrderProcess SEI. The class implements the processOrder method, which checks the validity of the order by invoking the validate method. The validate method checks whether the Order bean has all the relevant properties valid and not null. It is recommended that developers explicitly implement the OrderProcess SEI, though it may not be necessary; this can minimize coding errors by ensuring that the methods are implemented as defined. Next we will look at how to publish the OrderProcess JAX-WS web service using Spring configuration.

Spring-based server bean

What makes CXF the obvious choice as a web service framework is its use of Spring-based configuration files to publish web service endpoints. It is the use of such configuration files that makes the development of web services convenient and easy with CXF. Spring provides a lightweight container which works on the concept of Inversion of Control (IoC), or Dependency Injection (DI), architecture; it does so through a configuration file that defines Java beans and their dependencies. By using Spring you can abstract and wire all the class dependencies in a single configuration file. The configuration file is often referred to as an Application Context or Bean Context file. We will create a server-side Spring-based configuration file and name it beans.xml. The following code illustrates the beans.xml configuration file:

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jaxws="http://cxf.apache.org/jaxws"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://cxf.apache.org/jaxws
        http://cxf.apache.org/schemas/jaxws.xsd">

    <import resource="classpath:META-INF/cxf/cxf.xml" />
    <import resource="classpath:META-INF/cxf/cxf-extension-soap.xml" />
    <import resource="classpath:META-INF/cxf/cxf-servlet.xml" />

    <jaxws:endpoint id="orderProcess"
        implementor="demo.order.OrderProcessImpl"
        address="/OrderProcess" />
</beans>

Let's examine the previous code and understand what it really means. It first defines the necessary namespaces. It then defines a series of <import> statements: it imports cxf.xml, cxf-extension-soap.xml, and cxf-servlet.xml. These are Spring-based configuration files that define core components of CXF. They are used to kick-start the CXF runtime and load the necessary infrastructure objects, such as the WSDL manager, the conduit manager, the destination factory manager, and so on. The <jaxws:endpoint> element in the beans.xml file specifies the OrderProcess web service as a JAX-WS endpoint. The element is defined with the following three attributes:

id—specifies a unique identifier for a bean. In this case, jaxws:endpoint is a bean, and the id name is orderProcess.
implementor—specifies the actual web service implementation class. In this case, our implementor class is OrderProcessImpl.
address—specifies the URL address where the endpoint is to be published. The URL address must be relative to the web context. For our example, the endpoint will be published using the relative path /OrderProcess.

The <jaxws:endpoint> element signifies that CXF internally uses the JAX-WS frontend to publish the web service. This element definition provides a short and convenient way to publish a web service; a developer does not have to write any Java code to publish the endpoint.
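With the endpoint published, the article's other objective, developing a client, can be met from any SOAP-capable language, since the service is fully described by its WSDL. As a sketch only, here is how the service might be consumed from PHP with the built-in SoapClient; the host, port, and web context in the URL are assumptions about your deployment, and the arg0/return wrapper names are the JAX-WS defaults used when no explicit @WebParam/@WebResult names are given (check the generated WSDL for the real names):

<?php
// hypothetical endpoint; adjust to wherever beans.xml publishes /OrderProcess
$client = new SoapClient('http://localhost:8080/orderapp/OrderProcess?wsdl');

// the Order bean's properties map to element names via JAXB
$order = array(
    'customerID' => 'C001',
    'itemID'     => 'I042',
    'qty'        => 2,
    'price'      => 12.99,
);

// invoke the processOrder operation declared in the SEI
$response = $client->processOrder(array('arg0' => $order));
echo 'Order ID: ' . $response->return;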

Enhancing the User Experience with PHP 5 Ecommerce: Part 1

Packt
29 Jan 2010
7 min read
Juniper Theatricals

Juniper Theatricals want to have a lot of products in their online store, and as a result they fear that some products may get lost within the website, or not be as obvious to their customers. To help prevent this problem, we will integrate product searching to make products easy to find, and we will add filters to product lists, allowing customers to see products that match what they are looking for (for example, ones within their price range). As some products could still be lost, they want to be able to recommend related products to customers when they view particular products. If a customer wants a product and it happens to be out of stock, they want to prevent the customer from purchasing it elsewhere; so we will look at stock notifications too.

The importance of user experience

Our customers' experience on the stores powered by our framework is very important. A good user experience will leave them feeling wanted and valued, whereas a poor user experience will leave them feeling unwanted and unvalued, and may leave a bad taste in their mouths.

Search

The ability for customers to search, find, and filter products is vital; if they cannot find what they are looking for, they will be frustrated by our site and go somewhere they can find it much more easily. There are two methods that can make it much easier for customers to find what they are looking for:

Keyword search: This method allows customers to search the product catalog based on a series of keywords.
Filtering: This method allows customers to filter down lists of products based on attributes, refining larger lists of products into ones that better match their requirements.

Finding products

The simplest way for us to implement a search feature is to search the product name and product description fields. To make the results more relevant, we can place different priorities on where matches were found; for instance, if a word or phrase is found in both the name and description, then that would be of the highest importance; next would be products with the word or phrase in the name; and finally, we would have products that just have the word or phrase contained within the product description itself. So, what is involved in adding search features to our framework? We need the following:

Search box: We need a search box for our customers to type in words or phrases.
Search feature in the controller: We need to add some code to search the products database for matching products.
Search results: Finally, we need to display the matching products to the customer.

Search box

We need a search box where our customers can type in words or phrases to search our stores. This should be a simple POST form pointing to the path products/search with a search field of product_search. The best place for this would be in our website's header, so customers can perform their search from anywhere on the site or store.

<div id="search">
<form action="products/search" method="post">
<label for="product_search">Search for a product</label>
<input type="text" id="product_search" name="product_search" />
<input type="submit" id="search" name="search" value="Search" />
</form>
</div>

We now have a search box on the store:

Controlling searches with the products controller

A simple modification to our products controller will allow customers to search products. We need to make a small change to the constructor, to ensure that it knows when to deal with search requests.
Then we need to create a search function to search products, store the results, and display them in a view.

Constructor changes

A simple switch statement can be used to detect whether we are viewing a product, performing a search, or viewing all of the products in the database as a list.

$urlBits = $this->registry->getURLBits();
if( !isset( $urlBits[1] ) )
{
    $this->listProducts();
}
else
{
    switch( $urlBits[1] )
    {
        case 'view':
            $this->viewProduct();
            break;
        case 'search':
            $this->searchProducts();
            break;
        default:
            $this->listProducts();
            break;
    }
}

This works by breaking down the URL and, depending on certain aspects of the URL, calling different methods from within the controller.

Search function

We now need a function to actually search our products database, such as the following:

private function searchProducts()
{
    // check to see if the user has actually submitted the search form
    if( isset( $_POST['product_search'] ) && $_POST['product_search'] != '' )
    {

Assuming the customer has actually entered something to search, we need to clean the search phrase so it is suitable to run in our database query, and then we perform the query. The phrase is checked against the name and description of the product, with the name taking priority within the results. The highlighted code illustrates the query with prioritization.

// clean up the search phrase
$searchPhrase = $this->registry->getObject('db')->sanitizeData( $_POST['product_search'] );
$this->registry->getObject('template')->getPage()->addTag( 'query', $_POST['product_search'] );
// perform the search, and cache the results, ready for the results template
$sql = "SELECT v.name, c.path,
    IF(v.name LIKE '%{$searchPhrase}%', 0, 1) AS priority,
    IF(v.content LIKE '%{$searchPhrase}%', 0, 1) AS priorityb
    FROM content c, content_versions v, content_types t
    WHERE v.ID=c.current_revision AND c.type=t.ID
    AND t.reference='product' AND c.active=1
    AND ( v.name LIKE '%{$searchPhrase}%' OR v.content LIKE '%{$searchPhrase}%' )
    ORDER BY priority, priorityb";
$cache = $this->registry->getObject('db')->cacheQuery( $sql );
if( $this->registry->getObject('db')->numRowsFromCache( $cache ) == 0 )
{
    // no results from the cached query, display the no results template
}

If there are some products matching the search, then we display the results to the customer.

else
{
    // some results were found, display them on the results page
    // IMPROVEMENT: paginated results
    $this->registry->getObject('template')->getPage()->addTag( 'results', array( 'SQL', $cache ) );
    $this->registry->getObject('template')->buildFromTemplates('header.tpl.php', 'products-searchresults.tpl.php', 'footer.tpl.php');
}
}
else
{
    // search form not submitted, so just display the search box page
    $this->registry->getObject('template')->buildFromTemplates('header.tpl.php', 'products-searchform.tpl.php', 'footer.tpl.php');
}
}

As the results from the query are stored in a cache, we can simply assign this cache to a template tag variable, and the results will be displayed. Of course, as we need to account for the fact that there may be no results, we must check to ensure there are some results, and if there are none, we must display the relevant template.
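The IMPROVEMENT comment above flags pagination as an obvious next step. A minimal sketch of one way to do it, reusing only the cacheQuery() call shown above; the page parameter and the ten-per-page figure are illustrative assumptions:

// hypothetical pagination: ten results per page, page number from the URL
$perPage = 10;
$page = isset( $_GET['page'] ) ? max( 1, (int) $_GET['page'] ) : 1;
$offset = ( $page - 1 ) * $perPage;
// append a LIMIT clause to the prioritized search query built earlier
$sql .= " LIMIT {$offset}, {$perPage}";
$cache = $this->registry->getObject('db')->cacheQuery( $sql );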
Search results

Finally, we need a results page to display these results on.

<h2>Products found...</h2>
<p>The following products were found, matching your search for {query}.</p>
<ul>
<!-- START results -->
<li><a href="products/view/{path}">{name}</a></li>
<!-- END results -->
</ul>

Our search results page looks like this:

Improving searches

We could improve this search function by making it applicable to all types of content managed by the framework. Obviously, if we were going to do this, it would need to be taken out of the products controller, perhaps either as a controller itself, as a core registry function, or as part of the main content/pages controller. The results could either be entirely in a main list, with a note of their type of content, or tabbed, with each type of content displayed in a different tab. The following diagrams represent these potential Search Results pages. And, of course, the tab-separated search results.
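As a sketch of the site-wide variant suggested above, the product-only restriction can simply be dropped and the content type returned with each row, so the template can group or tab the results. This assumes the same schema used by the product search, with t.reference naming each content type:

$sql = "SELECT v.name, c.path, t.reference AS content_type,
    IF(v.name LIKE '%{$searchPhrase}%', 0, 1) AS priority
    FROM content c, content_versions v, content_types t
    WHERE v.ID=c.current_revision AND c.type=t.ID AND c.active=1
    AND ( v.name LIKE '%{$searchPhrase}%'
          OR v.content LIKE '%{$searchPhrase}%' )
    ORDER BY content_type, priority";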

FAQ on Web Services and Apache Axis2

Packt
28 Feb 2011
12 min read
Apache Axis2 Web Services, 2nd Edition

Create secure, reliable, and easy-to-use web services using Apache Axis2.

Extensive and detailed coverage of the enterprise-ready Apache Axis2 Web Services / SOAP / WSDL engine.
Attain a more flexible and extensible framework with the world-class Axis2 architecture.
Learn all about AXIOM - the complete XML processing framework, which you can also use outside Axis2.
Covers advanced topics like security, messaging, REST, and asynchronous web services.
Written by Deepal Jayasinghe, a key architect and developer of the Apache Axis2 Web Service project, and Afkham Azeez, an elected ASF and PMC member.

Q: How did SOA change the world view?

A: The era of isolated computers is over. Now "connected we stand, isolated we fall" is becoming the motto of computing. Networking and communication facilities have connected the world as never before. The world has hardware that can support systems connecting thousands of computers, and these systems have the capacity to wield power that was once only dreamed of. Yet computer science lacked the technologies and abstractions to utilize the established communication networks. The goal of distributed computing is to provide such abstractions. RPC, RMI, IIOP, and CORBA are a few proposals that provide abstractions over the network for developers to build upon. These proposals fail to consider one critical aspect of the problem: systems are a composition of numerous heterogeneous subsystems, but these proposals require all the participants to share a programming language or a few languages.

Service Oriented Architecture (SOA) provides the answer by defining a set of concepts and patterns to integrate homogeneous and heterogeneous components together. SOA provides a better way to achieve loosely coupled systems, and hence more extensibility and flexibility. In addition, similar to object-oriented programming (OOP), SOA enables a high degree of reusability. There are three main ways one can enable SOA capabilities in systems and applications:

Existing messaging systems: for example, JMS, IBM MQSeries, Tibco, and so on
Plain Old XML (POX): for example, REST, XML/HTTP, and so on
Web services: for example, SOAP, WSDL, WS-*

Q: What are the shortcomings of Java Messaging Service (JMS)?

A: Among the commonly used messaging systems, Java Messaging Service (JMS) plays a major role in the industry and has become a common API for messaging systems. We can find a number of different JMS message types, such as Text, Bytes, Name-Value pair, Stream, and Object. One of the main disadvantages of these messaging systems is that they do not have a single wire format (serialization format). As a result, interoperability is a big issue: if two applications are using JMS to communicate, then they must be on the same implementation. Sonic, Tibco, and IBM are the leaders in the commercial markets, and JBoss, Manta, and ActiveMQ are commonly used open source implementations.

Q: What is POX and how does it serve the web?

A: Plain Old XML, or POX, is another way of exposing functionality and enabling SOA in a system. With the widespread use of the Web, the POX approach has become more popular. Most web applications expose XML APIs, against which we can develop components and communicate. Google Maps, Auto complete, and Amazon services are a few examples of applications that heavily use XML APIs to expose their functionality.
In most cases, POX is used in combination with REST (Representational State Transfer). REST is a model of the underlying architecture of the Web, and it is based on the concept that every URL identifies a resource. GET, PUT, POST, and DELETE are the verbs used in the REST architecture. REST is often associated with theoretical standpoints, and for this reason REST is generally not used for complex interactions.

Q: What are web services?

A: The fundamental concept behind web services is SOA, where an application is no longer a large monolithic program, but is divided into smaller, loosely coupled programs. The provided services are loosely coupled together with standardized and well-defined interfaces. These loosely coupled programs make the architecture very extensible, due to the possibility of adding or removing services at limited cost. Therefore, new services can be created by combining existing services. To understand loose coupling clearly, it is better to understand its opposite, tight coupling, and its problems:

Errors, delays, and downtime spread through the system
The resilience of the whole system is based on the weakest part
The cost of upgrading or migrating spreads
It's hard to evaluate the useful parts from the dead weight

The benefits a web service provides are listed below:

Increased interoperability, resulting in lower maintenance costs
Increased reusability and composability (for example, use publicly available services and reuse them or integrate them to provide new services)
Increased competition among vendors, resulting in lower product costs
Easy transition from one product to another, resulting in lower training costs
Greater degree of adoption and longevity for a standard; a large degree of usage from vendors and users leads to a higher degree of acceptance

Q: What contributes to the popularity of web services?

A: Among the three commonly used methods to enable SOA, web services can be considered the most standard and flexible way. Web services extend the idea of POX and add additional standards to make the communication more organized and standardized. There are several reasons behind web services being the most popular SOA-enabling mechanism:

Web services are described using WSDL, and WSDL can capture any complex application and the required quality of services.
Web services use SOAP as the message transmission mechanism; as SOAP is a special type of XML, it gains all the extensibility features of XML.
There are a number of standards bodies to create and enforce the standards for web services.
There are multiple open source and commercial web service implementations.

By using standards and procedures, web services provide an application- and programming language-independent mechanism to integrate and communicate. Different programming languages may define different implementations for web services, yet they interoperate because they all agree on the format of the information they share.

Q: What are the standard bodies for web services?

A: In web services, there are three main standards bodies that have helped to improve interoperability, quality of service, and the base standards:

WS-I
OASIS
W3C

Q: How do organizations move into web services?

A: There are three ways an organization can use to move into web services:

Create a new web service from scratch. The developer creates the functionality of the service as well as its description (that is, the WSDL).
Expose existing functionality through a web service. Here the functionality of the service already exists; only the service description needs to be implemented.
Integrate web services from other vendors or business partners. There are occasions when using a service implemented by another party is more cost effective than building it from scratch. On these occasions, the organization will need to integrate others' or even business partners' web services.

The real usage of web service concepts lies in the second and third methods, which enable other web services and applications to use existing applications. Web services describe a new model for using the web; the model allows publication of business functions to the Web and provides universal access to those business functions. Both developers and end users benefit from web services. The web service model simplifies business application development and interoperation.

Q: What does the web services model look like?

A: The web service model consists of a set of basic functionalities such as describe, publish, discover, bind, invoke, update, and unpublish. The model also consists of three actors—service provider, service broker, and service requester. Both the functionalities and the actors are shown in the next figure.

Service provider is the individual (organization) that provides the service. The service provider's job is to create, publish, maintain, and unpublish their services. From a business point of view, a service provider is the owner of the service. From an architectural view, a service provider is the platform that holds the implementation of the service. Google API, Yahoo! Financial services, Amazon Services, and Weather services are some examples of service providers.

Service broker provides a repository of service descriptions (WSDL). These descriptions are published by the service provider. Service requesters search the repository to identify a required service and obtain the binding information for it. A service broker can be either public, where the services are universally accessible, or private, where only a specified set of service requesters are able to access the service.

Service requester is the party that is looking for a service to fulfill its requirements. A requester could be a human accessing the service or an application program (a program could also be a service). From a business view, this is the business that wants to fulfill a particular service. From an architectural view, this is the application that is looking for and invoking a service.

Q: What are web services standards?

A: So far we have discussed SOA, the standards bodies of web services, and the web service model. Here, we are going to discuss the standards that make web services more usable and flexible. In the past few years, there has been significant growth in the usage of web services as an application integration mechanism. As mentioned earlier, a web service is different from the other SOA-exposing mechanisms because it consists of various standards to address issues encountered in the other two mechanisms. The growing collection of WS-* standards (for example, Web Service security, Web Service reliable messaging, Web Service addressing, and others), supervised by the web services governing bodies, defines the web service protocol stack shown in the following figure. Here we will look at the standards that have been specified in the most basic layers: messaging, description, and discovery.
The messaging standards are intended to give the framework for exchanging information in a distributed environment. These standards have to be reliable, so that a message will be sent only once and only the intended receiver will receive it. This is one of the primary areas where research is being conducted, as everything depends on the messaging ability.

Q: Describe the web services standards XML-RPC and SOAP.

A: The web services standards XML-RPC and SOAP are described below.

XML-RPC: The XML-RPC standard was created by Dave Winer in 1998 with Microsoft. At that time the existing RPC systems were very bulky. Therefore, to create a lightweight system, the developer simplified it by specifying only the essentials, defining only a handful of data types and commands. This protocol uses XML to encode its calls and HTTP as a transport mechanism. The message is sent as a POST request in which the body of the request is in XML. A procedure is executed on the server, and the value it returns is also formatted into XML. The parameters can be scalars, numbers, strings, and dates, as well as complex record and list structures. As new functionality was introduced, XML-RPC evolved into what is now known as SOAP, which is discussed next. Still, some people prefer using XML-RPC because of its simplicity, minimalism, and ease of use.

SOAP: The concept of SOAP is a stateless, one-way message exchange. However, applications can create more complex interaction patterns—such as request-response, request-multiple responses, and so on—by combining such one-way exchanges with features provided by an underlying protocol and application-specific information. SOAP is silent on the semantics of any application-specific data it conveys, as it is on issues such as routing of SOAP messages, reliable data transfer, firewall traversal, and so on. However, SOAP provides the framework by which application-specific information may be conveyed in an extensible manner. The developers chose XML as the standard message format because of its widespread use by major organizations and open source initiatives. Also, there is a wide variety of freely available tools that ease the transition to a SOAP-based implementation.

Q: Define the scope of Web Services Addressing (WS-Addressing).

A: The standard provides transport-independent mechanisms to address messages and identify web services, corresponding to the concepts of address and message correlation described in the web services architecture. The standard defines XML elements to identify web service endpoints and to secure end-to-end endpoint identification in messages. This enables messaging systems to support message transmission through networks that include processing nodes such as endpoint managers, firewalls, and gateways in a transport-neutral manner. Thus, WS-Addressing enables organizations to build reliable and interoperable web service applications by defining a standard mechanism for identifying and exchanging Web Services messages between multiple endpoints.

Q: What is Web Services Description Language (WSDL)?

A: WSDL, developed by IBM, Ariba, and Microsoft, is an XML-based language that provides a model for describing web services. The standard defines services as network endpoints, or ports. WSDL is normally used in combination with SOAP and XML Schema to provide web services over networks. A service requester who connects to a web service can read the WSDL to determine what functions are available in the web service.
Special data types are embedded in the WSDL file in the form of XML Schema. The client can then use SOAP to call the functions listed in the WSDL. The standard enables one to separate the description of the abstract functionality offered by a service from the concrete details of a service description, such as how and where that functionality is offered. This specification defines a language for describing the abstract functionality of a service, as well as a framework for describing the concrete details of a service description. The abstract definition of ports and messages is separated from their concrete use, allowing the reuse of these definitions.
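To see the requester's side of this in practice, the sketch below uses PHP's built-in SoapClient in WSDL mode to discover and invoke a service's operations. The WSDL URL and the echoString operation are placeholders for a real service; __getFunctions() and __soapCall() are standard SoapClient methods:

<?php
// hypothetical WSDL location; any WSDL-described SOAP service will do
$client = new SoapClient('http://localhost:8080/axis2/services/MyService?wsdl');

// the client parses the WSDL, so we can list the operations it describes
print_r($client->__getFunctions());

// invoke an operation by name; echoString is a placeholder operation
$result = $client->__soapCall('echoString', array(array('value' => 'hello')));
var_dump($result);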

Building a Custom Version of jQuery

Packt
04 Apr 2013
9 min read
(For more resources related to this topic, see here.)

Why Is It Awesome?

While it's fairly common for someone to say that they use jQuery in every site they build (this is usually the case for me), I would expect it to be much rarer for someone to say that they use the exact same jQuery methods in every project, or that they use a very large selection of the available methods and functionality on offer. The need to reduce file size as aggressively as possible to cater for the mobile space, and the rise of micro-frameworks such as Zepto, which delivers a lot of jQuery functionality at a much-reduced size, have pushed jQuery to provide a way of slimming down. As of jQuery 1.8, we can use the official jQuery build tool to build our own custom version of the library, allowing us to minimize its size by choosing only the functionality we require. For more information on Zepto, see http://zeptojs.com/.

Your Hotshot Objectives

To successfully conclude this project we'll need to complete the following tasks:

Installing Git and Make
Installing Node.js
Installing Grunt.js
Configuring the environment
Building a custom jQuery
Running unit tests with QUnit

Mission Checklist

We'll be using Node.js to run the build tool, so you should download a copy of this now. The Node website (http://nodejs.org/download/) has an installer for both 64- and 32-bit versions of Windows, as well as a Mac OS X installer. It also features binaries for Mac OS X, Linux, and SunOS. Download and install the appropriate version for your operating system.

The official build tool for jQuery (although it can do much more besides build jQuery) is Grunt.js, written by Ben Alman. We don't need to download this, as it's installed via the Node Package Manager (NPM). We'll look at this process in detail later in the project. For more information on Grunt.js, visit the official site at http://gruntjs.com.

First of all we need to set up a local working area. We can create a folder in our root project folder called jquery-source. This is where we'll store the jQuery source when we clone the jQuery Github repository, and also where Grunt will build the final version of jQuery.

Installing Git and Make

The first thing we need to install is Git, which we'll need in order to clone the jQuery source from the Github repository to our own computer so that we can work with the source files. We also need something called Make, but we only need to actually install this on Mac platforms, because on Windows it gets installed automatically when Git is installed. As the file we'll create will be for our own use only, and we don't want to contribute to jQuery by pushing code back to the repository, we don't need to worry about having an account set up on Github.

Prepare for Lift Off

First we'll need to download the relevant installers for both Git and Make. Different applications are required depending on whether you are developing on Mac or Windows platforms.

Mac developers

Mac users can visit http://git-scm.com/download/mac for Git. Next we can install Make. Mac developers can get this by installing XCode, which can be downloaded from https://developer.apple.com/xcode/.

Windows developers

Windows users can install msysgit, which can be obtained by visiting https://code.google.com/p/msysgit/downloads/detail?name=msysGit-fullinstall-1.8.0-preview20121022.exe.

Engage Thrusters

Once the installers have downloaded, run them to install the applications.
The defaults selected by the installers should be fine for the purposes of this mission. First we should install Git (or msysgit on Windows).

Mac developers

Mac developers simply need to run the installer for Git to install it to the system. Once this is complete we can then install XCode. All we need to do is run the installer, and Make, along with some other tools, will be installed and ready.

Windows developers

Once the full installer for msysgit has finished, you should be left with a Command Line Interface (CLI) window (entitled MINGW32) indicating that everything is ready for you to hack. However, before we can hack, we need to compile Git. To do this we need to run a file called initialize.sh. In the MINGW32 window, cd into the msysgit directory. If you allowed this to install to the default location, you can use the following command:

cd C:\msysgit\msysgit\share\msysGit

Once we are in the correct directory, we can then run initialize.sh in the CLI. Like the installation, this process can take some time, so be patient and wait for the CLI to return a flashing cursor at the $ character. An Internet connection is required to compile Git in this way.

Windows developers will need to ensure that the Git.exe and MINGW resources can be reached via the system's PATH variable. This can be updated by going to Control Panel | System | Advanced system settings | Environment variables. In the bottom section of the dialog box, double-click on Path and add the following two paths to the git.exe file in the bin folder, which is itself in a directory inside the msysgit folder, wherever you chose to install it:

;C:\msysgit\msysgit\bin;
C:\msysgit\msysgit\mingw\bin;

Update the path with caution! You must ensure that the path to Git.exe is separated from the rest of the Path variables with a semicolon. If the path does not end with a semicolon before adding the path to Git.exe, make sure you add one. Incorrectly updating your path variables can result in system instability and/or loss of data. I have shown a semicolon at the start of the previous code sample to illustrate this.

Once the path has been updated, we should then be able to use a regular command prompt to run Git commands.

Post-installation tasks

In a terminal or Windows Command Prompt (I'll refer to both simply as the CLI from this point on for conciseness) window, we should first cd into the jquery-source folder we created at the start of the project. Depending on where your local development folder is, this command will look something like the following:

cd c:\\jquery-hotshots\\jquery-source

To clone the jQuery repository, enter the following command in the CLI:

git clone git://github.com/jquery/jquery.git

Again, we should see some activity on the CLI before it returns to a flashing cursor to indicate that the process is complete. Depending on the platform you are developing on, you should see something like the following screenshot:

Objective Complete — Mini Debriefing

We installed Git and then used it to clone the jQuery Github repository into this directory in order to get a fresh version of the jQuery source. If you're used to SVN, cloning a repository is conceptually the same as checking out a repository. Again, the syntax of these commands is very similar on Mac and Windows systems, but notice how we need to escape the backslashes in the path when using Windows. Once this is complete, we should end up with a new directory inside our jquery-source directory called jquery.
If we go into this directory, there are some more directories, including:

build: This directory is used by the build tool to build jQuery
speed: This directory contains benchmarking tests
src: This directory contains all of the individual source files that are compiled together to make jQuery
test: This directory contains all of the unit tests for jQuery

It also has a range of various files, including:

Licensing and documentation, including jQuery's authors and a guide to contributing to the project
Git-specific files such as .gitignore and .gitmodules
Grunt-specific files such as Gruntfile.js
JSHint for testing and code-quality purposes

Make is not something we need to use directly, but Grunt will use it when we build the jQuery source, so it needs to be present on our system.

Installing Node.js

Node.js is a platform for running server-side applications built with JavaScript. It is trivial to create a web-server instance, for example, that receives and responds to HTTP requests using callback functions. Server-side JS isn't exactly the same as its more familiar client-side counterpart, but you'll find a lot of similarities in the same comfortable syntax that you know and love. We won't actually be writing any server-side JavaScript in this project – all we need Node for is to run the Grunt.js build tool.

Prepare for Lift Off

To get the appropriate installer for your platform, visit the Node.js website at http://nodejs.org and hit the download button. The correct installer for your platform, if supported, should be auto-detected.

Engage Thrusters

Installing Node is a straightforward procedure on either the Windows or Mac platforms, as there are installers for both. This task will include running the installer, which is obviously simple, and testing the installation using a CLI. On Windows or Mac platforms, run the installer and it will guide you through the installation process. I have found that the default options are fine in most cases. As before, we also need to update the Path variable to include Node and Node's package manager, NPM. The paths to these directories will differ between platforms.

Mac

Mac developers should check that the $PATH variable contains a reference to /usr/local/bin. I found that this was already in my $PATH, but if you do find that it's not present, you should add it. For more information on updating your $PATH variable, see http://www.tech-recipes.com/rx/2621/os_x_change_path_environment_variable/.

Windows

Windows developers will need to update the Path variable, in the same way as before, with the following paths:

C:\Program Files\nodejs;
C:\Users\Desktop\AppData\Roaming\npm;

Windows developers may find that the Path variable already contains an entry for Node, so may just need to add the path to NPM.

Objective Complete — Mini Debriefing

Once Node is installed, we will need to use a CLI to interact with it. To verify Node has installed correctly, type the following command into the CLI:

node -v

The CLI should report the version in use, as follows:

We can test NPM in the same way by running the following command:

npm -v

Using third-party plugins (non-native plugins)

Packt
30 Aug 2013
4 min read
(For more resources related to this topic, see here.)

We want to focus on a particular case here, because we have already seen how to add a new property, and for some components we can easily add the plugins or features property and then add the plugin configuration. But the components that have native plugins supported by the API do not allow us to do so—for instance, the grid panel from Ext JS. We can only use the plugins and features that are available within Sencha Architect. What if we want to use a third-party plugin or feature such as the Filter Plugin? It is possible, but we need to use an advanced feature of Sencha Architect: creating overrides.

A disclaimer about overrides: they are to be avoided. Whenever you can use a set method to change a property, use it. Overrides should be your last resort, and they should be used very carefully; if you do not use them carefully, you can change the behavior of a component and something may stop working. But we will demonstrate how to do it in a safe way!

We will use the BooksGrid as an example in this topic. Let's say we need to use the Filter Plugin on it, so we need to create an override first. To do so, select the BooksGrid from the project inspector, open the code editor, and click on the Create Override button (Step 1). Sencha Architect will display a warning (Step 2). We can click on Yes to continue.

The code editor will open (Step 3) the override class so we can enter our code. In this case, we have complete freedom to do whatever we need to in this file. So let's add the features() function with the declaration of the plugin, and also the initComponent() function, as shown in the following screenshot (Step 4):

One thing that is very important is that we must call the callParent() function (callOverridden() is already deprecated in Ext JS 4.1 and later versions) to make sure we continue to have all the original behavior of the component (in this case the BooksGrid class). The only thing we want to do is add a new feature to it. And we are done with the override! To go back to the original class we can use the navigator, as shown in the following screenshot:

Notice that requires was added to the class Packt.view.override.BooksGrid, which is the class we just wrote. The next step is to add the plugin to the class requires. To do so, we need to select the BooksGrid, go to the config panel, and add the requires with the name of the plugin (Ext.ux.grid.FiltersFeature):

Some developers like to add the plugin file directly as a JavaScript file in app.html/index.html. Sencha provides the dynamic loading feature, so let's take advantage of it and use it! First, we cannot forget to add the ux folder with the plugin in the project root folder, as shown in the following screenshot:

Next, we need to set the application loader. Select the Application from the project inspector (Step 5), then go to the config panel, locate the Loader Config property, click on the + icon (Step 6), then click on the arrow icon (Step 7). The details of the loader will be available on the config panel. Locate the paths property and click on it (Step 8). The code editor will be opened with the loader path's default value, which is {"Ext": "."} (Step 9). Do not remove it; simply add the path of the Ext.ux namespace, which is the ux folder (Step 10):

And we are almost done!
We need to add the filterable option to each column where we want to allow the user to filter values (Step 11): we can use the config panel to add a new property (we need to select the desired column from the project inspector first—always remember to do this). Then we can choose what type of property we want to add (Step 12 and Step 14). For example, we can add filterable: true (Step 13) for the id column and filterable: {type: 'string'} (Step 15 and Step 16) for the Name column, as shown in the following screenshot. And the plugin is ready to be used!

Summary

In this article we learned some useful tricks that can help in our everyday tasks while working on Sencha projects using Sencha Architect. We also covered advanced topics, such as creating overrides to use third-party plugins and features, and implementing multilingual apps.

Resources for Article:

Further resources on this subject:

Sencha Touch: Layouts Revisited [Article]
Sencha Touch: Catering Form Related Needs [Article]
Creating a Simple Application in Sencha Touch [Article]

PHP Magic Features

Packt
12 Oct 2009
5 min read
In this article by Jani Hartikainen, we'll look at PHP's "magic" features:

Magic methods, which are class methods with specific names, are used to perform various specialized tasks. They are grouped into two: overloading methods and non-overloading methods. Overloading magic methods are used when your code attempts to access a method or a property which does not exist. Non-overloading methods perform other tasks.
Magic functions, which are similar to magic methods, but are just plain functions outside any class. Currently there is only one magic function in PHP.
Magic constants, which are similar to constants in notation, but act more like "dynamic" constants: their value depends on where you use them.

We'll also look at some practical examples of using some of these, and lastly we'll check out what new features PHP 5.3 is going to add.

Magic methods

For starters, let's take a look at the magic methods PHP provides. We will first go over the non-overloading methods.

__construct and __destruct

class SomeClass {
    public function __construct() {
    }

    public function __destruct() {
    }
}

The most common magic method in PHP is __construct. In fact, you might not even have thought of it as a magic method at all, as it's so common. __construct is the class constructor method, which gets called when you instantiate a new object using the new keyword, and any parameters used will get passed to __construct.

$obj = new SomeClass();

__destruct is __construct's pair. It is a class destructor, which is rarely used in PHP, but it is still good to know of its existence. It gets called when your object falls out of scope or is garbage collected.

function someFunc() {
    $obj = new SomeClass();
    // when the function ends, $obj falls out of scope
    // and SomeClass's __destruct is called
}

someFunc();

If you make the constructor private or protected, it means that the class cannot be instantiated, except inside a method of the same class. You can use this to your advantage, for example to create a singleton.

__clone

class SomeClass {
    public $someValue;

    public function __clone() {
        $clone = new SomeClass();
        $clone->someValue = $this->someValue;
        return $clone;
    }
}

The __clone method is called when you use PHP's clone keyword, and is used to create a clone of the object. The purpose is that by implementing __clone, you can define a way to copy objects.

$obj1 = new SomeClass();
$obj1->someValue = 1;
$obj2 = clone $obj1;
echo $obj2->someValue;
// echoes 1

Important: __clone is not the same as =. If you use = to assign an object to another variable, the other variable will still refer to the same object as the first one! If you use the clone keyword, the purpose is to return a new object with a similar state as the original. Consider the following:

$obj1 = new SomeClass();
$obj1->someValue = 1;
$obj2 = $obj1;
$obj3 = clone $obj1;
$obj1->someValue = 2;

What are the values of the someValue property in $obj2 and $obj3 now? As we have used the assignment operator to create $obj2, it refers to the same object as $obj1, thus $obj2->someValue is 2. When creating $obj3, we used the clone keyword, so the __clone method was called. As __clone creates a new instance, $obj3->someValue is still the same as it was when we cloned $obj1: 1.

If you want to disable cloning, you can make __clone private or protected.
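As a sketch of the singleton idea just mentioned, combining a private constructor with a private __clone (the class name and the getInstance() helper are illustrative, not PHP built-ins):

class Database {
    private static $instance;

    // a private constructor prevents instantiation with "new" from outside
    private function __construct() {
    }

    // a private __clone prevents the single instance from being copied
    private function __clone() {
    }

    public static function getInstance() {
        if (self::$instance === null) {
            self::$instance = new Database();
        }
        return self::$instance;
    }
}

$db = Database::getInstance(); // always returns the same instance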
__toString

```php
class SomeClass {
    public function __toString() {
        return 'someclass';
    }
}
```

The __toString method is called when PHP needs to convert a class instance into a string, for example when echoing:

```php
$obj = new SomeClass();
echo $obj;
// will output 'someclass'
```

This can be useful for identifying objects or when creating lists. If we have a user object, we could define a __toString method that outputs the user's first and last names, and when we want to create a list of users, we could simply echo the objects themselves.

__sleep and __wakeup

```php
class SomeClass {
    private $_someVar;

    public function __sleep() {
        return array('_someVar');
    }

    public function __wakeup() {
    }
}
```

These two methods are used with PHP's serializer: __sleep is called by serialize() and __wakeup by unserialize(). Note that you need to return an array of the class variables you want to save from __sleep. That's why the example class returns an array with _someVar in it: without that, the variable would not get serialized.

```php
$obj = new SomeClass();
$serialized = serialize($obj);
// __sleep was called
unserialize($serialized);
// __wakeup was called
```

You typically won't need to implement __sleep and __wakeup, as the default implementation serializes classes correctly. However, in some special cases they can be useful. For example, if your class stores a reference to a PDO object, you will need to implement __sleep, as PDO objects cannot be serialized.

As with most other magic methods, you can make __sleep private or protected to stop serialization. Alternatively, you can throw an exception, which may be a better idea as you can provide a more meaningful error message.

An alternative to __sleep and __wakeup is the Serializable interface. However, as its behavior is different from these two methods, the interface is outside the scope of this article. You can find information on it in the PHP manual.

__set_state

```php
class SomeClass {
    public $someVar;

    public static function __set_state($state) {
        $obj = new SomeClass();
        $obj->someVar = $state['someVar'];
        return $obj;
    }
}
```

This method is called in code created by var_export. It gets an array as its parameter, which contains a key and value for each of the class variables, and it must return an instance of the class.

```php
$obj = new SomeClass();
$obj->someVar = 'my value';
var_export($obj);
```

This code will output something along the lines of:

```php
SomeClass::__set_state(array('someVar' => 'my value'));
```

Note that var_export will also export private and protected variables of the class, so they too will be in the array.
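As a quick illustration of where __set_state fires, here is a sketch that captures var_export's output as a string (its second argument makes it return the code instead of printing it) and evaluates it to rebuild the object. Only ever eval trusted, generated code:

```php
$obj = new SomeClass();
$obj->someVar = 'my value';

// With true as the second argument, var_export returns the
// generated PHP code as a string.
$code = var_export($obj, true);

// Evaluating that code calls SomeClass::__set_state, which
// builds and returns a new instance.
$copy = eval('return ' . $code . ';');

echo $copy->someVar;
// outputs 'my value'
```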


Product Cross-selling and Layout using Panels with Drupal and Ubercart 2.x

Packt
25 Mar 2010
7 min read
Product cross-selling

Product cross-selling is a very powerful strategy that you might already be familiar with. Amazon.com, for instance, was one of the early adopters of sophisticated recommendation systems, and they boosted its online sales by hundreds of millions of dollars. A more algorithmically complex example is Netflix, an online movie rental service whose recommendation system, and the hype that surrounds it, sits at the core of its business. Finally, the most recent and simplest to implement is Last.fm, with a very elegant and efficient recommendation algorithm.

By adopting Drupal and Ubercart, things become pretty straightforward: good modules exist that encapsulate the complexity of recommendation algorithms, require little configuration, and let you provide your end customers with a great consumer experience. In addition, do not forget the powerful and robust taxonomy mechanism that Drupal implements in its core and the site-wide content tagging it provides, so that related items, and items that can be used together, can be categorized. So now, without further delay, we will go through all the interesting possibilities that our Drupal online shop can offer.

Using taxonomies

As we have already mentioned, taxonomies are at the core of the Drupal system, and grasping their high-level implementation can save us a lot of trouble most of the time. Taxonomies often help us create references between our Drupal system nodes, differentiate between them, and create easy-to-use, intuitive, and searchable views on our content. In our example, the basic idea is to create a taxonomy not only for products that can be sold as groups (we already have the Ubercart product kit for that), but rather to let the products administrator tag all related products, in the way that high-revenue electronics shops like ExpanSys and PixMania have adopted. To achieve this we do not need to install any new module; the plain old Drupal taxonomy system is enough. We will make two taxonomies: one for product managers, which they can edit while they add new products, and another in which users can freely tag your content. These free-tagging vocabularies are also referred to as folksonomies. Furthermore, everyday practice has shown that relevant taxonomy blocks can really boost your site traffic, page views, and eventually the conversions that translate into purchases.

The vocabularies that we will alter are the following:

- Community Tagging. We need this free-tagging vocabulary to allow our end users to tag the products of our site, in order to surface non-intuitive connections.
- Product Types. This is an internal tagging vocabulary with the predefined terms offer, best price, and new product, and it will help create the corresponding views used to promote products in your online electronics shop.

To create the new vocabularies and change the existing one, we take the following steps:

1. Navigate to Administer | Content management | Taxonomy.
2. Click on Add vocabulary.
3. Fill in the name as "Community Tagging" and provide a short description.
4. Choose Product and Product Kit as the Content types that will be associated for tagging.
5. In the Settings pane choose Tags to allow free tagging, and Multiple select.
6. Finally, choose Not in Sitemap for XML Sitemap.
7. To add another taxonomy for product visibility options and positioning, go back to the taxonomy page and again click on Add vocabulary.
Add the name "Product Type" along with a short description and click on Product and Product Kit in the Content types section. Finally add a priority 1.0 to the XML sitemap element and click on the Save button. Navigate to your newly created vocabulary terms and add the terms "offer", "best price", and "new product". This is an example of a user-defined term-tagging procedure on one of our products. Use Taxonomies for Navigation and MenusYou can also use Drupal's system pages using the taxonomy view module for category listings. The end of the URL should look like this: taxonomy/term/1 or taxonomy/term/2.Note that taxonomy URLs always contain one or more Term IDs at the end of the URL. These numbers, 1 and 2 above, tell the Drupal engine which categories to display. Now combine the Term IDs above in one URL using a comma as a delimiter: taxonomy/term/1, 2. The resulting listing represents the boolean AND operation. It includes all nodes tagged with both terms. To get a listing of nodes using either taxonomy term 1 OR 2, use a plus sign as the operator: taxonomy/term/1+2 Using recommendation systems Recommendation systems have existed a long time and make a crucial contribution in some of the most successful online shops. In this section we will focus on examples of implicit data collection of the customer's activities that include the following: Observing the items that a user views in an online store. Analyzing item/user viewing time. Keeping a record of the items that a user purchases online. Obtaining a list of items that a user has listened to or watched on his or her computer. Analyzing the user's social network and discovering similar likes and dislikes. Having these data and customer behavior in our account, it is then easy to find the optimal item suggestions that fit people's profiles. We can then provide sections like "customers who bought this book also bought" on Amazon.com suggestions. Further to our discussion we will install recommendation API and two other modules that depend on it. The Ubercart-oriented module is the Ubercart Recommender module. This module collects data through the Drupal Core Statistics module about user purchases and provides suggestions about other products that could be relevant to the returning customer. All recommendation systems assign special weights in their recommendation algorithm to purchased products since this generates returned value and we have a fully converted customer. In order to handle suggestions to users that have not made any purchases yet from our online shop we will also use the Browsing History Recommender and Relevant Content modules. You can find more information about the algorithms and the recommendation procedure implemented in the Drupal Recommender API at http://mrzhou.cms.si.umich.edu/recommender. Next we provide a synopsis of the added value and the functionality of each module: Browsing History Recommender: This module adds two blocks in your site "Users who browsed this node also browsed" and "Recommended for you". To calculate the recommendations, this module uses Drupal statistics and in particular the history data and keeps track of 30 days of users' node browsing activity. The "Recommended for you" block provides personalized node recommendations based on a user's node browsing history. Relevant Content: This module provides two ways of referencing content relevant to the node in sight. 
  Both of these methods provide configuration to filter for specific content types and vocabularies, limit the maximum size of the result, and provide some header text. The result in both cases is a list of nodes that the module considers most relevant, based on the categorization of the current page. You can configure multiple blocks with individual settings for node type, vocabulary, maximum result size, and optional header text.
- Ubercart Products Recommender: This module adds two extra blocks to our blocks section: one named "Customers who ordered this product also ordered", which performs a cross-check between the orders of customers that bought the product in sight, and another named "Recommended for you", which provides personalized product recommendations based on a user's purchasing history.

To configure your online shop to provide content-related recommendations, we need to perform the following administration steps:

1. Download Recommender from Drupal.org by navigating here: http://mrzhou.cms.si.umich.edu/recommender
2. Unzip the file in your site's modules directory.
3. Navigate to the modules administration screen and activate this module.
4. Follow the preceding procedure for the following modules also:
   http://drupal.org/project/relevant_content
   http://drupal.org/project/history_rec and
   http://drupal.org/project/uc_rec

After you have uploaded and installed all the modules, you will be able to see the following blocks on your blocks page (Administer | Site building | Blocks). Although these modules have configuration screens, they do not need extra configuration or property changes, and can be assigned to your theme regions as they are from the blocks section. You can find a very thorough discussion about all recommendation modules on Drupal at http://groups.drupal.org/node/12347.

The Internet of Peas? Gardening with JavaScript Part 1

Anna Gerber
23 Nov 2015
6 min read
Who wouldn't want an army of robots to help out around the home and garden? It's not science fiction: robots are devices that sense and respond to the world around us, so with some off-the-shelf hardware and the power of the Johnny-Five JavaScript Robotics framework, we can build and program simple "robots" to automate everyday tasks. In this two-part article series, we'll build an internet-connected device for monitoring plants.

Bill of materials

You'll need these parts to build this project:

- Particle Core (or Photon): Particle
- 3xAA battery holder: e.g. with micro USB connector, from DF Robot
- Jumper wires: any electronics supplier, e.g. Sparkfun
- Solderless breadboard: any electronics supplier, e.g. Sparkfun
- Photo resistor: any electronics supplier, e.g. Sparkfun
- 1K resistor: any electronics supplier, e.g. Sparkfun
- Soil moisture sensor: e.g. Sparkfun
- Plants

Particle (formerly known as Spark) is a platform for developing devices for the Internet of Things. The Particle Core was their first-generation Wifi development board, and has since been superseded by the Photon. Johnny-Five supports both of these boards, as well as Arduino, BeagleBone Black, Raspberry Pi, Edison, Galileo, Electric Imp, Tessel, and many other device platforms, so you can use the framework with your device of choice. The Platform Support page lists the features currently supported on each device. Any device with analog read support is suitable for this project.

Setting up the Particle board

Make sure you have a recent version of Node.js installed. We're using npm (Node Package Manager) to install the tools and libraries required for this project. Install the Particle command line tools with npm (via the Terminal on Mac, or Command Prompt on Windows):

```
npm install -g particle-cli
```

Particle boards need to be registered with the Particle Cloud service, and you must also configure your device to connect to your wireless network. So the first thing you'll need to do is connect it to your computer via USB and run the setup program. See the Particle Setup docs. The LED on the Particle Core should be blinking blue when you plug it in for the first time (if not, press and hold the mode button). Sign up for a Particle account and then follow the prompts to set up your device via the Particle website, or, if you prefer, run the setup program from the command line. You'll be prompted to sign in and then to enter your Wifi SSID and password:

```
particle setup
```

After setup is complete, the Particle Core can be disconnected from your computer and powered by batteries or a separate USB power supply; we will connect to the board wirelessly from now on.

Flashing the board

We also need to flash the board with the Voodoospark firmware. Use the CLI tool to sign in to the Particle Cloud and list your devices to find out the ID of your board:

```
particle cloud login
particle list
```

Download the voodoospark.cpp firmware file and use the flash command to write the Voodoospark firmware to your device:

```
particle cloud flash <Your Device ID> voodoospark.cpp
```

See the Voodoospark Getting Started page for more details. You should see the following message:

```
Flash device OK: Update started
```

The LED on the board will flash magenta. This will take about a minute, and will change back to green when the board is ready to use.

Creating a Johnny-Five project

We'll be installing a few dependencies from npm, so to help manage these, we'll set up our project as an npm package. Run the init command, filling in the project details at the prompts:
```
npm init
```

After init has completed, you'll have a package.json file with the metadata that you entered about your project. Dependencies for the project can also be saved to this file. We'll use the --save command line argument to npm when installing packages to persist dependencies to our package.json file. We'll need the johnny-five npm module as well as the particle-io IO plugin for Johnny-Five:

```
npm install johnny-five --save
npm install particle-io --save
```

Johnny-Five uses the Firmata protocol to communicate with Arduino-based devices. IO plugins provide Firmata-compatible interfaces that allow Johnny-Five to communicate with non-Arduino-based devices. The Particle-IO plugin allows you to run Node.js applications on your computer that communicate with the Particle board over Wifi, so that you can read from sensors or control components that are connected to the board.

When you connect to your board, you'll need to specify your Device ID and your Particle API access token. You can look up your access token under Settings in the Particle IDE. It's a good idea to copy these to environment variables rather than hardcoding them into your programs. If you are on Mac or Linux, you can create a file called .particlerc and then run source .particlerc:

```
export PARTICLE_TOKEN=<Your Token Here>
export PARTICLE_DEVICE_ID=<Your Device ID Here>
```

Reading from a sensor

Now we're ready to get our hands dirty! Let's confirm that we can communicate with our Particle board using Johnny-Five by taking a reading from our soil moisture sensor. Using jumper wires, connect one pin on the soil sensor to pin A0 (analog pin 0) and the other to GND (ground). The probes go into the soil in your plant pot.

Create a JavaScript file named sensor.js using your preferred text editor or IDE. We use require statements to include the Johnny-Five module and the Particle-IO plugin. We create an instance of the Particle IO plugin (with our token and deviceId read from our environment variables) and provide it as the io config option when creating our Board object:

```javascript
var five = require("johnny-five");
var Particle = require("particle-io");

var board = new five.Board({
  io: new Particle({
    token: process.env.PARTICLE_TOKEN,
    deviceId: process.env.PARTICLE_DEVICE_ID
  })
});

board.on("ready", function() {
  console.log("CONNECTED");
  var soilSensor = new five.Sensor("A0");
  soilSensor.on("change", function() {
    console.log(this.value);
  });
});
```

After the board is ready, we create a Sensor object to monitor changes on pin A0, and then print the value from the sensor to the Node.js console whenever it changes. Run the program using Node.js:

```
node sensor.js
```

Try pulling the sensor out of the soil or watering your plant to make the sensor reading change. See the Sensor API for more methods that you can use with Sensors. You can hit Ctrl-C to end the program. In the next installment we'll connect our light sensor and extend our Node.js application to monitor our plant's environment. Continue reading now!

About the author

Anna Gerber is a full-stack developer with 15 years' experience in the university sector, formerly a Technical Project Manager at The University of Queensland ITEE eResearch specializing in Digital Humanities, and a Research Scientist at the Distributed System Technology Centre (DSTC). Anna is a JavaScript robotics enthusiast and maker who enjoys tinkering with soft circuits and 3D printers.


Moodle: Developing an Interactive Timeline Widget

Packt
21 Oct 2011
5 min read
Introducing the SIMILE timeline widget

The Massachusetts Institute of Technology (MIT) has developed various visualization and data manipulation tools as part of the SIMILE project. One of these is a free/open source timeline JavaScript widget, which takes time-based data as input and creates an interactive timeline that scrolls from left to right and contains popup panes and links. The project site includes a timeline of the life and career of Monet, among other examples; you can view them at http://simile-widgets.org/timeline/.

In order to use the timeline widget we need these components:

- The Moodle timeline filter, containing the SIMILE timeline JavaScript libraries
- A timeline data file, in XML or JSON (JavaScript Object Notation) format
- Photographs to show in the popup panes
- A web page to host the timeline

We will deal with installing the filter later, but first we must decide on the subject for our timeline. If you visit the home of SIMILE, http://simile-widgets.org/timeline/, you will be able to explore timelines for the assassination of John F. Kennedy, the life of Claude Monet, and other examples. Timelines can be granular down to the minute or hour, as in the case of the assassination, or they can be spread over centuries or millennia, which is currently the limit for the widget.

A suitable subject for our young audience would be significant or important inventions. This can encompass subjects as diverse as printing, paper, penicillin, steam, and the computer, and these inventions originate in different parts of the world, which adds an extra dimension to the subject. Now that we have our subject, a search on Google reveals some useful links, including a list of the top 10 inventions, http://listverse.com/2007/09/13/top-10-greatestinventions/. We may or may not agree with a list like this, but it acts as a useful starting point and aide-memoire. There are pictures here, and more information and images on other websites, including Wikipedia.

To progress from ideas to our timeline, we need to create an XML data file. One of the easier tools to help us produce the XML is a syntax-highlighting text editor, which we will install now. If you already have a text editor on your computer that you think is suitable, skip this section.

Installing a text editor

We have our subject, so the next step is to install some editing software to help us create our XML timeline file. Though most operating systems, including Windows, contain simple text editors, it will be helpful to install an editor that features syntax highlighting for various computer languages, including XML. Some general-purpose text editors are:

- Notepad2 for Windows, a simple open-source editor distributed under a BSD license (http://opensource.org/licenses/bsd-license.php). It is a small download, less than 300 kilobytes, from http://flos-freeware.ch/notepad2.html.
- Notepad++ for Windows, distributed under a GPL license. Download it from http://notepad-plus-plus.org.
- TextWrangler for Mac OS X, distributed under a freeware license (not open-source), http://barebones.com/products/textwrangler/.
- Most flavors of Linux and Unix include the vi or vim text editor. Or use Emacs or your favorite editor.

We will carry on and install Notepad2 for the examples in this article.

Time for action – installing Notepad2

To install Notepad2 on Windows, visit the website http://flos-freeware.ch/notepad2.html in your browser and follow these steps:

1. Under the Downloads section, find the link to the notepad2.zip file.
2. Download it to your computer.
3. Open the file notepad2.zip with the unzip program available on your computer (the one built into Windows XP or later, or WinZip, 7-Zip, or similar).
4. Extract all the files in notepad2.zip to a directory on your hard drive, for example, C:\apps2\Notepad2.
5. In your new directory, click on the file Notepad2.exe with your right mouse button.
6. Choose Create Shortcut from the context menu that appears.
7. Name the shortcut Notepad2, and copy it to the clipboard.
8. Now in Windows Explorer on Windows 7, go to the directory C:\Users\{USER}\SendTo (or on Windows XP, go to the directory C:\Documents and Settings\{USER}\SendTo). In each case {USER} is your username.
9. Paste a copy of the shortcut in the directory. (Note: you will probably see a shortcut for Notepad, the basic editor included in Windows, in this directory. You may want to delete it to avoid confusion.)
10. Go to your Desktop and paste the shortcut there as well.

Whew! Notepad2 is a little fiddly to set up, but don't worry, it's easy to use.

What just happened?

We downloaded and installed a text editor for Windows called Notepad2. This has a feature called syntax highlighting, which, as we will see, helps us to write the XML file for our timeline widget. Note that the shortcuts we created are useful in different contexts. The SendTo shortcut is useful when we wish to edit a file: simply select it in Windows Explorer, right-click, and choose SendTo | Notepad2 in the context menu. The Desktop shortcut is useful for creating new files. We also found that there are many alternatives for Windows, Mac OS X, and other operating systems, including Notepad++ for Windows, TextWrangler for Mac OS X, and vi/vim and Emacs for Linux. We have added a shortcut for the editor to the SendTo menu in Windows.

Now that we have a text editor, we can create the timeline XML; a first sketch of the data file follows below.
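To give a feel for what we will be typing into Notepad2, here is a minimal sketch of a SIMILE timeline data file for our inventions subject. The data and event element names and their attributes follow the SIMILE timeline documentation; the events, dates, and image path are illustrative placeholders only:

```xml
<data>
  <!-- Each event needs a start date and a title; the element body
       becomes the text of the popup pane. Dates here are approximate
       and purely for illustration. -->
  <event start="Jan 01 1440 00:00:00 GMT" title="Printing press"
         image="images/printing-press.jpg">
    Johannes Gutenberg develops movable-type printing in Mainz.
  </event>
  <event start="Sep 03 1928 00:00:00 GMT" title="Penicillin">
    Alexander Fleming notices that mould has killed the bacteria
    in one of his culture dishes.
  </event>
</data>
```

The syntax highlighting in our new editor makes it much easier to spot a missing quote or an unclosed event element while building a file like this.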