
How-To Tutorials - Front-End Web Development

341 Articles

Working with Events

Packt
08 Jan 2016
7 min read
In this article by Troy Miles, author of the book jQuery Essentials, we will learn that an event is the occurrence of anything that the system considers significant. It can originate in the browser, the form, the keyboard, or any other subsystem, and it can also be generated by the application via a trigger. An event can be as simple as a key press or as complex as the completion of an Ajax request.

While there are a myriad of potential events, events only matter when the application listens for them. This is also known as hooking an event. By hooking an event, you tell the browser that this occurrence is important to you and to let you know when it happens. When the event occurs, the browser calls your event handling code, passing the event object to it. The event object holds important event data, including which page element triggered it. Let's take a look at the first learned and possibly most important event: the ready event.

The ready event

The first event that programmers new to jQuery usually learn about is the ready event, sometimes referred to as the document ready event. This event signifies that the DOM is fully loaded and that jQuery is open for business. The ready event is similar to the document load event, except that it doesn't wait for all of the page's images and other assets to load; it only waits for the DOM to be ready. Also, unlike most events, if the ready event fires before it is hooked, the handler code is still called.

The .ready() event can only be attached to the document element. When you think about it, this makes sense, because it fires when the DOM (the Document Object Model) is fully loaded.

The .ready() event has a few different hooking styles. All of the styles do the same thing: hook the event. Which one you use is up to you. In its most basic form, the hooking code looks similar to the following:

```js
$(document).ready(handler);
```

As it can only be attached to the document element, the selector can be omitted, in which case the event hook looks as follows:

```js
$().ready(handler);
```

However, the jQuery documentation does not recommend using the preceding form. There is a still terser version of this event's hook. This version omits nearly everything, passing only an event handler to the jQuery function. It looks similar to the following:

```js
$(handler);
```

While all of the different styles work, I only recommend the first form because it is the clearest. While the other forms work and save a few bytes' worth of characters, they do so at the expense of code clarity. If you are worried about the number of bytes an expression uses, you should use a JavaScript minifier instead; it will do a much more thorough job of shrinking code than you could ever do by hand.

The ready event can be hooked as many times as you'd like. When the event is triggered, the handlers are called in the order in which they were hooked. Let's take a look at an example in code:

```js
// ready event style no# 1
$(document).ready(function () {
    console.log("document ready event handler style no# 1");
    // we're in the event handler so the event has already fired.
    // let's hook it again and see what happens
    $(document).ready(function () {
        console.log("We get this handler even though the ready event has already fired");
    });
});

// ready event style no# 2
$().ready(function () {
    console.log("document ready event handler style no# 2");
});

// ready event style no# 3
$(function () {
    console.log("document ready event handler style no# 3");
});
```

In the preceding code, we hook the ready event three times, each time using a different hooking style. The handlers are called in the same order in which they are hooked. In the first event handler, we hook the event again. As the event has already been triggered, we might expect that the handler will never be called, but we would be wrong. jQuery treats the ready event differently than other events: its handler is always called, even if the event has already been triggered. This makes the ready event a great place for initialization and other code that must always run.

Hooking events

The ready event is different from all of the other events: its handler is called only once, and it is hooked differently. All other events are hooked by chaining the .on() method to the set of elements that you wish to use to trigger the event. The first parameter passed to the hook is the name of the event, followed by the handling function, which can be either an anonymous function or the name of a function. This is the basic pattern for event hooking:

```js
$(selector).on('event name', handling function);
```

The .on() method and its companion, the .off() method, were first added in version 1.7 of jQuery. For older versions of jQuery, the method used to hook an event is .bind(). Neither the .bind() method nor its companion, the .unbind() method, is deprecated, but .on() and .off() are preferred over them. If you are switching from .bind(), the call to .on() is identical at its simplest level. The .on() method has capabilities beyond those of the .bind() method, which require different sets of parameters to be passed to it.

If you would like more than one event to share the same handler, simply place the name of the next event after the previous one, separated by a space:

```js
$("#clickA").on("mouseenter mouseleave", eventHandler);
```

Unhooking events

The main method used to unhook an event handler is .off(). Calling it is simple; it looks similar to the following:

```js
$(elements).off('event name', handling function);
```

Both the event name and the handling function are optional. If the event name is omitted, all events attached to the elements are removed. If the event name is included, all handlers for the specified event are removed. This can create problems. Think about the following scenario: you write a click event handler for a button. A bit later in the app's life cycle, someone else also needs to know when the button is clicked. Not wanting to interfere with already working code, they add a second handler. When their code is complete, they remove the handler as follows:

```js
$('#myButton').off('click');
```

Because .off() was called with only the event name, it removed not only the handler that they added but also all of the other handlers for the click event. This is not what was wanted.
Don't despair, however; there are two fixes for this problem:

```js
function clickBHandler(event) {
    console.log('Button B has been clicked, external');
}

$('#clickB').on('click', clickBHandler);
$('#clickB').on('click', function (event) {
    console.log('Button B has been clicked, anonymous');
    // turn off the 1st handler without turning off the 2nd
    $('#clickB').off('click', clickBHandler);
});
```

The first fix is to pass the event handler to the .off() method. In the preceding code, we placed two click event handlers on the button named clickB. The first event handler is installed using a function declaration, and the second is installed using an anonymous function. When the button is clicked, both of the event handlers are called. The second one turns off the first one by calling the .off() method and passing its event handler as a parameter. By passing the event handler, the .off() method is able to match the signature of the handler that you'd like to turn off. If you are not using anonymous functions, this fix works well. But what if you want to pass an anonymous function as the event handler? Is there a way to turn off one handler without turning off the other? Yes, there is: the second fix is to use event namespacing, sketched below.
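The article stops short of showing namespaced events, so here is a minimal sketch of the idea (the namespace name myPlugin is invented for illustration): appending a namespace to the event name lets you later remove just your own handler, even an anonymous one, without disturbing anyone else's.

```js
// hook an anonymous handler under our own namespace
$('#clickB').on('click.myPlugin', function (event) {
    console.log('namespaced click handler');
});

// later: remove only the handlers in the myPlugin namespace;
// other click handlers attached to #clickB are left untouched
$('#clickB').off('click.myPlugin');
```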
Summary

In this article, we learned a lot about one of the most important constructs in modern web programming: events. They are the things that make a site interactive.


Meet Yii

Packt
07 Dec 2012
7 min read
Easy

To run a Yii version 1.x-powered web application, all you need are the core framework files and a web server supporting PHP 5.1.0 or higher. To develop with Yii, you only need to know PHP and object-oriented programming. You are not required to learn any new configuration or templating language. Building a Yii application mainly involves writing and maintaining your own custom PHP classes, some of which will extend from the core Yii framework component classes.

Yii incorporates many of the great ideas and work from other well-known web programming frameworks and applications, so if you are coming to Yii from other web development frameworks, it is likely that you will find it familiar and easy to navigate.

Yii also embraces a convention-over-configuration philosophy, which contributes to its ease of use. This means that Yii has sensible defaults for almost all the aspects that are used for configuring your application. By following the prescribed conventions, you can write less code and spend less time developing your application. However, Yii does not force your hand; it allows you to customize all of its defaults and makes it easy to override all of these conventions.

Efficient

Yii is a high-performance, component-based framework that can be used for developing web applications on any scale. It encourages maximum code reuse in web programming and can significantly accelerate the development process. As mentioned previously, if you stick with Yii's built-in conventions, you can get your application up and running with little or no manual configuration.

Yii is also designed to help you with DRY development. DRY stands for Don't Repeat Yourself, a key concept of agile application development. All Yii applications are built using the Model-View-Controller (MVC) architecture. Yii enforces this development pattern by providing a place to keep each piece of your MVC code. This minimizes duplication and helps promote code reuse and ease of maintainability. The less code you need to write, the less time it takes to get your application to market. The easier it is to maintain your application, the longer it will stay on the market.

Of course, the framework is not just efficient to use; it is remarkably fast and performance optimized. Yii has been developed with performance optimization in mind from the very beginning, and the result is one of the most efficient PHP frameworks around. So any additional overhead that Yii adds to applications written on top of it is extremely negligible.

Extensible

Yii has been carefully designed to allow nearly every piece of its code to be extended and customized to meet any project requirement. In fact, it is difficult not to take advantage of Yii's ease of extensibility, since a primary activity when developing a Yii application is extending the core framework classes. And if you want to turn your extended code into useful tools for other developers, Yii provides easy-to-follow steps and guidelines to help you create such third-party extensions. This allows you to contribute to Yii's ever-growing list of features and actively participate in extending Yii itself.

Remarkably, this ease of use, superior performance, and depth of extensibility do not come at the cost of sacrificing features. Yii is packed with features to help you meet the high demands placed on today's web applications.
AJAX-enabled widgets, RESTful and SOAP web services integration, enforcement of an MVC architecture, DAO and relational ActiveRecord database layers, sophisticated caching, hierarchical role-based access control, theming, internationalization (I18N), and localization (L10N) are just the tip of the Yii iceberg. As of version 1.1, the core framework is packaged with an official extension library called Zii. These extensions are developed and maintained by the core framework team members, and they continue to extend Yii's core feature set. And with a deep community of users who are also contributing by writing Yii extensions, the overall feature set available to a Yii-powered application is growing daily. A list of available, user-contributed extensions on the Yii framework website can be found at http://www.yiiframework.com/extensions. There is also an unofficial extension repository of great extensions at http://yiiext.github.com/, which really demonstrates the strength of the community and the extensibility of this framework.

MVC architecture

As mentioned earlier, Yii is an MVC framework and provides an explicit directory structure for each piece of model, view, and controller code. Before we get started with building our first Yii application, we need to define a few key terms and look at how Yii implements and enforces this MVC architecture.

Model

Typically in an MVC architecture, the model is responsible for maintaining state and should encapsulate the business rules that apply to the data that defines this state. A model in Yii is any instance of the framework class CModel or its child class. A model class is typically composed of data attributes that can have separate labels (something user-friendly for the purpose of display) and can be validated against a set of rules defined in the model. The data that makes up the attributes in the model class could come from a row of a database table or from the fields in a user input form.

Yii implements two kinds of models, namely the form model (the CFormModel class) and active record (the CActiveRecord class). They both extend from the same base class, CModel. The class CFormModel represents a data model that collects HTML form inputs. It encapsulates all the logic for form field validation, and any other business logic that may need to be applied to the form field data. It can then store this data in memory or, with the help of an active record model, store it in a database.

Active Record (AR) is a design pattern used to abstract database access in an object-oriented fashion. Each AR object in Yii is an instance of CActiveRecord or its child class. It wraps a single row in a database table or view, encapsulates all the logic and details around database access, and houses much of the business logic that is required to be applied to that data. The data field values for each column in the table row are represented as properties of the active record object.

View

Typically, the view is responsible for rendering the user interface, often based on the data in the model. A view in Yii is a PHP script that contains user interface-related elements, often built using HTML, but it can also contain PHP statements. Usually, any PHP statements within the view are very simple conditional or looping statements, or refer to other Yii UI-related elements such as HTML helper class methods or prebuilt widgets.
More sophisticated logic should be separated from the view and placed appropriately in either the model, if dealing directly with the data, or the controller, for more general business logic.

Controller

The controller is our main director of a routed request and is responsible for taking user input, interacting with the model, and instructing the view to update and display appropriately. A controller in Yii is an instance of CController or a child class thereof. When a controller runs, it performs the requested action, which then interacts with the necessary models and renders an appropriate view. An action, in its simplest form, is a controller class method whose name starts with the word action.


Google Web Toolkit 2: Creating Page Layout

Packt
24 Nov 2010
7 min read
Google Web Toolkit 2 Application Development Cookbook: over 70 simple but incredibly effective practical recipes to develop web applications using GWT with JPA, MySQL, and iReport.

- Create impressive, complex browser-based web applications with GWT 2
- Learn the most effective ways to create reports with parameters, variables, and subreports using iReport
- Create Swing-like web-based GUIs using the Ext GWT class library
- Develop applications using browser quirks, JavaScript, and HTML scriptlets from scratch
- Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

The layout will be as shown in the diagram below:

Creating the home page layout class

This recipe creates a panel to place the menu bar, banner, sidebars, footer, and the main application layout. Ext GWT provides several options to define the top-level layout of the application. We will use the BorderLayout function. We will add the actual widgets after the layout is fully defined. The other recipes add the menu bar, banner, sidebars, and footer, one by one.

Getting ready

Open the Sales Processing System project.

How to do it...

Let's list the steps required to complete the task:

1. Go to File | New File.
2. Select Java from Categories, and Java Class from File Types. Click on Next.
3. Enter HomePage as the Class Name, and com.packtpub.client as the Package. Click on Finish.
4. Inherit the class ContentPanel. Press Ctrl + Shift + I to import the package automatically. Add a default constructor:

```java
package com.packtpub.client;

import com.extjs.gxt.ui.client.widget.ContentPanel;

public class HomePage extends ContentPanel {

    public HomePage() {
    }
}
```

Write the code of the following steps in this constructor.

5. Set the size in pixels for the content panel:

```java
setSize(980, 630);
```

6. Hide the header:

```java
setHeaderVisible(false);
```

7. Create a BorderLayout instance and set it for the content panel:

```java
BorderLayout layout = new BorderLayout();
setLayout(layout);
```

8. Create a BorderLayoutData instance and configure it to be used for the menu bar and toolbar:

```java
BorderLayoutData menuBarToolBarLayoutData =
    new BorderLayoutData(LayoutRegion.NORTH, 55);
menuBarToolBarLayoutData.setMargins(new Margins(5));
```

9. Create a BorderLayoutData instance and configure it to be used for the left-hand sidebar:

```java
BorderLayoutData leftSidebarLayoutData =
    new BorderLayoutData(LayoutRegion.WEST, 150);
leftSidebarLayoutData.setSplit(true);
leftSidebarLayoutData.setCollapsible(true);
leftSidebarLayoutData.setMargins(new Margins(0, 5, 0, 5));
```

10. Create a BorderLayoutData instance and configure it to be used for the main contents, at the center:

```java
BorderLayoutData mainContentsLayoutData =
    new BorderLayoutData(LayoutRegion.CENTER);
mainContentsLayoutData.setMargins(new Margins(0));
```

11. Create a BorderLayoutData instance and configure it to be used for the right-hand sidebar:

```java
BorderLayoutData rightSidebarLayoutData =
    new BorderLayoutData(LayoutRegion.EAST, 150);
rightSidebarLayoutData.setSplit(true);
rightSidebarLayoutData.setCollapsible(true);
rightSidebarLayoutData.setMargins(new Margins(0, 5, 0, 5));
```

12. Create a BorderLayoutData instance and configure it to be used for the footer:

```java
BorderLayoutData footerLayoutData =
    new BorderLayoutData(LayoutRegion.SOUTH, 20);
footerLayoutData.setMargins(new Margins(5));
```

How it works...

Let's now learn how these steps allow us to complete the task of designing the home page layout. The full page (home page) is actually a "content panel" that covers the entire area of the host page.
The content panel is a container having top and bottom components, along with separate header, footer, and body sections. Therefore, the content panel is a perfect building block for application-oriented user interfaces. In this example, we will place the banner at the top of the content panel. The body section of the content panel is further subdivided into five regions in order to place these: the menu bar and toolbar at the top, two sidebars on each side, a footer at the bottom, and a large area at the center to place contents like forms, reports, and so on.

A BorderLayout instance lays out the container into five regions, namely north, south, east, west, and center. By using BorderLayout as the layout of the content panel, we get five places to add five components. BorderLayoutData is used to specify the layout parameters of each region of a container that has BorderLayout as its layout. We have created five instances of BorderLayoutData, to be used in the five regions of the container.

There's more...

Now, let's talk about some general information that is relevant to this recipe.

Setting the size of the panel

The setSize method is used to set the size of a panel. Either of the two overloaded setSize methods can be used: one takes two int parameters, namely width and height; the other takes the same arguments as strings.

Showing or hiding the header in the content panel

Each content panel has a built-in header, which is visible by default. To hide the header, we can invoke the setHeaderVisible method, giving false as the argument, as shown in the preceding example.

BorderLayoutData

BorderLayoutData is used to set layout parameters, such as margin, size, maximum size, minimum size, collapsibility, floatability, split bar, and so on, for a region in a border panel. Consider the following line of code in the example we just saw:

```java
BorderLayoutData leftSidebarLayoutData =
    new BorderLayoutData(LayoutRegion.WEST, 150);
```

It creates a variable leftSidebarLayoutData, where the size is 150 pixels and the region is the west of the border panel. rightSidebarLayoutData.setSplit(true) sets a split bar between this region and its neighbors; the split bar allows the user to resize the region. leftSidebarLayoutData.setCollapsible(true) makes the component collapsible, that is, the user will be able to collapse and expand the region. leftSidebarLayoutData.setMargins(new Margins(0, 5, 0, 5)) sets a margin where 0, 5, 0, and 5 are the top, right, bottom, and left margins, respectively.

Classes and packages

In the preceding example, some classes are used from the Ext GWT library, as shown in the following table:

  Class             Package
  BorderLayout      com.extjs.gxt.ui.client.widget.layout
  BorderLayoutData  com.extjs.gxt.ui.client.widget.layout
  ContentPanel      com.extjs.gxt.ui.client.widget
  Margins           com.extjs.gxt.ui.client.util
  Style             com.extjs.gxt.ui.client

See also

- The Adding the banner recipe
- The Adding menus recipe
- The Creating the left-hand sidebar recipe
- The Creating the right-hand sidebar recipe
- The Creating main content panel recipe
- The Creating the footer recipe
- The Using HomePage instance in EntryPoint recipe

Adding the banner

This recipe will create a method that we will use to add a banner to the content panel.

Getting ready

Place the banner image banner.png at the location web/resources/images. You can use your own image or get it from the code sample provided on the Packt Publishing website (www.packtpub.com).

How to do it...
1. Create the method getBanner:

```java
public ContentPanel getBanner() {
    ContentPanel bannerPanel = new ContentPanel();
    bannerPanel.setHeaderVisible(false);
    bannerPanel.add(new Image("resources/images/banner.png"));
    return bannerPanel;
}
```

2. Call the setTopComponent method of the ContentPanel class in the HomePage constructor:

```java
setTopComponent(getBanner());
```

How it works...

The method getBanner() creates an instance, bannerPanel, of type ContentPanel. The bannerPanel will just show the image from the location resources/images/banner.png; that's why the header is made invisible by invoking setHeaderVisible(false). An instance of the com.google.gwt.user.client.ui.Image class, which represents the banner image, is added to the bannerPanel. In the default constructor of the HomePage class, the method setTopComponent(getBanner()) is called to set the image as the top component of the content panel.

See also

- The Creating the home page layout class recipe
- The Adding menus recipe
- The Creating the left-hand sidebar recipe
- The Creating the right-hand sidebar recipe
- The Creating main content panel recipe
- The Creating the footer recipe
- The Using HomePage instance in EntryPoint recipe


Web Application Testing

Packt
14 Nov 2014
15 min read
This article is written by Roberto Messora, the author of the Web App Testing Using Knockout.JS book. This article will give you an overview of various design patterns used in web application testing. It will also teach you web development using jQuery.

Presentation design patterns in web application testing

The Web has changed a lot since HTML5 made its appearance. We are witnessing a gradual shift from classical, fully server-side web development to a new architectural asset that moves much of the application logic to the client side. The general objective is to deliver rich internet applications (commonly known as RIAs) with a desktop-like user experience. Think about web applications such as Gmail or Facebook: if you maximize your browser, they look like complete desktop applications in terms of usability, UI effects, responsiveness, and richness.

Once we have established that testing is a pillar of our solutions, we need to understand the best way to proceed in terms of software architecture and development. In this regard, it's very important to determine the basic design principles that allow a proper approach to unit testing. In fact, even though HTML5 is a recent achievement, HTML in general and JavaScript are technologies that have been in use for quite some time. The problem here is that many developers tend to approach modern web development in the same old way. This is a grave mistake because, back then, client-side JavaScript development was underrated and mostly confined to simple UI graphic management.

Client-side development is historically driven by libraries such as Prototype, jQuery, and Dojo, whose primary feature is DOM (HTML Document Object Model, in other words, HTML markup) management. They work as-is in small web applications, but as soon as these grow in complexity, the code base starts to become unmanageable and unmaintainable. We can't really think that we can continue to develop JavaScript the same way we did 10 years ago. In those days, we only had to dynamically apply some UI transformations; today we have to deliver complete working applications. We need a better design, but most of all we need to reconsider client-side JavaScript development and apply advanced design patterns and principles.

jQuery web application development

JavaScript is the programming language of the web, but its native DOM API is rudimentary: we have to write a lot of code to manage and transform HTML markup to bring the UI to life with some dynamic user interaction. Incomplete standardization also means that the same code can work differently (or not work at all) in different browsers. Over the past years, developers decided to resolve this situation, and JavaScript libraries such as Prototype, jQuery, and Dojo came to light.

jQuery is one of the best-known open source JavaScript libraries, first published in 2006. Its huge success is mainly due to:

- A simple and detailed API that allows you to manage HTML DOM elements
- Cross-browser support
- Simple and effective extensibility

Since its appearance, it's been used by thousands of developers as a foundation library. A large amount of JavaScript code all around the world has been built with jQuery in mind. The jQuery ecosystem grew up very quickly, and nowadays there are plenty of jQuery plugins that implement virtually everything related to web development.
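To make the testability problem concrete before it is discussed below, consider a hypothetical fragment in the classic jQuery style (the element IDs and prices are invented for illustration). All of the logic lives inside a callback and depends on a live DOM:

```js
// classic jQuery style: validation and business logic trapped in a
// callback and tied to concrete page elements
$('#addToCart').on('click', function () {
    var qty = parseInt($('#quantity').val(), 10);
    if (isNaN(qty) || qty < 1) {
        $('#error').text('Please enter a valid quantity').show();
        return;
    }
    // price calculation mixed in with UI updates
    $('#total').text('$' + (qty * 9.99).toFixed(2));
});
```

There is no way to exercise the quantity validation or the price calculation without a running page that contains those exact elements.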
Despite its simplicity, a typical jQuery web application is virtually untestable. There are two main reasons:

- User interface items are tightly coupled with the user interface logic
- User interface logic spans event handler callback functions

The real problem is that everything passes through a jQuery reference, a jQuery("something") call. This means that we will always need a live reference to the HTML page, otherwise these calls will fail, and this is also true in a unit test case. We can't think about testing a piece of user interface logic by running an entire web application! Large jQuery applications tend to be monolithic because jQuery itself allows callback function nesting too easily and doesn't really promote any particular design strategy; the result is often spaghetti code. jQuery is a good option if you want to develop a specific custom plugin, and we will continue to use this library for pure user interface effects and animations, but we need something different to maintain a large web application's logic.

Presentation design patterns

To move a step forward, we need to decide what's the best option in terms of testable code. The main topic here is application design; in other words, how we can build our code base following a general guideline with testability in mind. In software engineering there's nothing better than not reinventing the wheel, so we can rely on a safe and reliable resource: design patterns. Wikipedia provides a good definition for the term design pattern (http://en.wikipedia.org/wiki/Software_design_pattern):

"In software engineering, a design pattern is a general reusable solution to a commonly occurring problem within a given context in software design. A design pattern is not a finished design that can be transformed directly into source or machine code. It is a description or template for how to solve a problem that can be used in many different situations. Patterns are formalized best practices that the programmer can use to solve common problems when designing an application or system."

There are tens of specific design patterns, but we need something related to the presentation layer, because this is where a JavaScript web application belongs. The most important aspect in terms of design and maintainability of a JavaScript web application is a clear separation between the user interface (basically, the HTML markup) and the presentation logic (the JavaScript code that turns a web page dynamic and responsive to user interaction). This is what we learned digging into a typical jQuery web application.

At this point, we need to identify an effective implementation of a presentation design pattern and use it in our web applications. In this regard, I have to admit that the JavaScript community has done an extraordinary job in the last two years: up to the present time, there are literally tens of frameworks and libraries that implement a particular presentation design pattern. We only have to choose the framework that fits our needs; for example, we can start by taking a look at the TodoMVC website (http://todomvc.com/), an open source project that shows you how to build the same web application using a different library each time. Most of these libraries implement a so-called MV* design pattern (Knockout.JS does too). MV* means that every such design pattern belongs to a broader family with a common root: Model-View-Controller.
The MVC pattern is one of the oldest and most enduring architectural design patterns: originally designed by Trygve Reenskaug, working on Smalltalk-80 back in 1979, it has been heavily refactored since then. Basically, the MVC pattern enforces the isolation of business data (Models) from user interfaces (Views), with a third component (Controllers) that manages the logic and user input. It can be described as follows (Addy Osmani, Learning JavaScript Design Patterns, http://addyosmani.com/resources/essentialjsdesignpatterns/book/#detailmvc):

- A Model represented domain-specific data and was ignorant of the user interface (Views and Controllers). When a model changed, it would inform its observers.
- A View represented the current state of a Model. The Observer pattern was used for letting the View know whenever the Model was updated or modified.
- Presentation was taken care of by the View, but there wasn't just a single View and Controller: a View-Controller pair was required for each section or element being displayed on the screen.
- The Controller's role in this pair was handling user interaction (such as key presses and actions, for example, clicks) and making decisions for the View.

This general definition has slightly changed over the years, not only to adapt its implementation to different technologies and programming languages, but also because changes have been made to the Controller part. Model-View-Presenter and Model-View-ViewModel are the best-known alternatives to the MVC pattern.

MV* presentation design patterns are a valid answer to our need: an architectural design guideline that promotes separation of concerns and isolation, the two most important factors needed for software testing. In this way, we can separately test models, views, and the third actor, whatever it is (a Controller, Presenter, ViewModel, and so on). On the other hand, adopting a presentation design pattern doesn't mean at all that we cease to use jQuery. jQuery is a great library; we will continue to add its reference to our pages, but we will also integrate its use wisely into a better design context.

Knockout.JS and Model-View-ViewModel

Knockout.JS is one of the most popular JavaScript presentation libraries; it implements the Model-View-ViewModel design pattern. The most important concepts featured in Knockout.JS are:

- An HTML fragment (or an entire page) is considered a View. A View is always associated with a JavaScript object called a ViewModel: a code representation of the View that contains the data (model) to be shown (in the form of properties) and the commands that handle View events triggered by the user (in the form of methods).
- The association between View and ViewModel is built around the concept of data-binding, a mechanism that provides automatic bidirectional synchronization. In the View, it's declared by placing data-bind attributes on DOM elements; the attributes' values must follow a specific syntax that specifies the nature of the association and the target ViewModel property/method. In the ViewModel, methods are considered commands, and properties are defined as special objects called observables, whose main feature is the capability to notify every state modification.

A ViewModel is a pure-code representation of the View: it contains data to show and commands that handle events triggered by the user.
It's important to remember that a ViewModel shouldn't have any knowledge of the View and the UI: pure-code representation means that a ViewModel shouldn't contain any reference to HTML markup elements (buttons, textboxes, and so on), only pure JavaScript properties and methods.

Model-View-ViewModel's objective is to promote a clear separation between View and ViewModel; this principle is called Separation of Concerns. Why is this so important? The answer is quite easy: because this way a developer can achieve a real separation of responsibilities: the View is only responsible for presenting data to the user and reacting to her/his inputs; the ViewModel is only responsible for holding the data and providing the presentation logic. The following diagram from Microsoft MSDN depicts the existing relationships between the three pattern actors very well (http://msdn.microsoft.com/en-us/library/ff798384.aspx).

Thinking about a web application in these terms leads to ViewModel development without any reference to DOM element IDs or any other markup-related code, as in the classic jQuery style. The two main reasons behind this are:

- As the web application becomes more complex, the number of DOM elements increases, and it is not uncommon to reach a point where it becomes very difficult to manage all those IDs with the typical jQuery fluent-interface style: the JavaScript code base turns into a spaghetti code nightmare very soon.
- A clear separation between View and ViewModel allows a new way of working: JavaScript developers can concentrate on the presentation logic, while UX experts can provide HTML markup that focuses on user interaction and how the web application will look. The two groups can work quite independently and agree on the basic contact points using the data-bind tag attributes.

The key feature of a ViewModel is the observable object: a special object that is capable of notifying its state modifications to any subscribers. There are three types of observable objects:

- The basic observable, which is based on JavaScript data types (string, number, and so on)
- The computed observable, which is dependent on other observables or computed observables
- The observable array, which is a standard JavaScript array with a built-in change notification mechanism

On the View side, we talk about declarative data-binding because we need to place data-bind attributes inside HTML tags and specify what kind of binding is associated with a ViewModel property/command.

MVVM and unit testing

Why is a clear separation between the user interface and the presentation logic a real benefit? There are several possible answers, but, if we want to remain in the unit testing context, we can assert that we can apply proper unit testing specifications to the presentation logic, independently from the concrete user interface. In Model-View-ViewModel, the ViewModel is a pure-code representation of the View. The View itself must remain a thin and simple layer whose job is to present data and receive user interaction. This is a great scenario for unit testing: all the logic in the presentation layer is located in the ViewModel, and this is a JavaScript object. We can definitely test almost everything that takes place in the presentation layer, as the sketch below illustrates.
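As a minimal illustration (the ViewModel and its property names are invented for this sketch, not taken from the book), a ViewModel built from Knockout observables can be instantiated and exercised directly, with no DOM and no running page:

```js
// a pure-code ViewModel: no references to HTML markup anywhere
// (assumes the Knockout library, the ko global, is loaded)
function CartViewModel() {
    var self = this;
    self.items = ko.observableArray([]);      // observable array
    self.discount = ko.observable(0);         // basic observable
    self.total = ko.computed(function () {    // computed observable
        var sum = 0;
        self.items().forEach(function (item) { sum += item.price; });
        return sum * (1 - self.discount());
    });
    self.addItem = function (item) {          // a command
        self.items.push(item);
    };
}

// exercised directly, exactly as a unit test would do
var vm = new CartViewModel();
vm.addItem({ price: 10 });
vm.discount(0.5);
console.log(vm.total()); // 5
```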
Ensuring a real separation between View and ViewModel means that we need to follow a particular development procedure:

- Think about a web application page as a composition of sub-views: we need to embrace the divide et impera principle when we build our user interface. The more specific and simple the sub-views are, the more easily we can test them. Knockout.JS supports this kind of scenario very well.
- Write a class for every View and a corresponding class for its ViewModel: the View class is the starting point to instantiate the ViewModel and apply bindings; after all, the user interface (the HTML markup) is what the browser loads initially.
- Keep each View class as simple as possible, so simple that it might not even need to be tested. It should be just a container for:
    - Its ViewModel instance
    - Sub-view instances, in case of a bigger View that is a composition of smaller ones
    - Pure user interface code, in case of particular UI JavaScript plugins that cannot take place in the ViewModel and simply provide graphical effects/enrichments (in other words, they don't change the logical functioning)

If we look carefully at a typical ViewModel class implementation, we can see that there are no HTML markup references: no tag names, no tag identifiers, nothing. All of these references are present in the View class implementation. In fact, if we were to test a ViewModel that holds a direct reference to a UI item, we would also need a live instance of the UI, otherwise accessing that item reference would cause a null reference runtime error during the test. This is not what we want, because it is very difficult to test presentation logic while having to deal with a live instance of the user interface: there are many reasons, from the need for a web server that delivers the page, to the need for a separate instance of a web browser to load the page. This is not very different from debugging a live page with Mozilla Firebug or Google Chrome Developer Tools. Our objective is test automation, but we also want to run the tests easily and quickly in isolation: we don't want to run the page in any way!

An important application asset is the event bus: a global object that works as an event/message broker for all the actors involved in the web page (Views and ViewModels). The event bus is one of the alternative forms of the Event Collaboration design pattern (http://martinfowler.com/eaaDev/EventCollaboration.html):

"Multiple components work together by communicating with each other by sending events when their internal state changes" (Martin Fowler)

The main aspect of an event bus is that:

"The sender is just broadcasting the event, the sender does not need to know who is interested and who will respond, this loose coupling means that the sender does not have to care about responses, allowing us to add behaviour by plugging new components" (Martin Fowler)

In this way, we can keep all the different components of a web page completely separated: every View/ViewModel couple sends and receives events, but they don't know anything about all the other couples. Again, every ViewModel is completely decoupled from its View (remember that the View holds a reference to the ViewModel, but not the other way around), and in this case it can trigger events in order to communicate something to the View. Concerning unit testing, loose coupling means that we can test our presentation logic one component at a time, simply ensuring that events are broadcast when they need to be.
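As a minimal, hand-rolled publish/subscribe sketch of such an event bus (real projects often use a dedicated library for this; the event name below is invented):

```js
// a minimal publish/subscribe event bus
var EventBus = {
    handlers: {},
    subscribe: function (eventName, handler) {
        (this.handlers[eventName] = this.handlers[eventName] || []).push(handler);
    },
    publish: function (eventName, payload) {
        (this.handlers[eventName] || []).forEach(function (handler) {
            handler(payload);
        });
    }
};

// one ViewModel reacts to an event, fully decoupled from the sender
EventBus.subscribe('user:selected', function (data) {
    console.log('load details for user', data.id);
});

// another ViewModel broadcasts; it doesn't know who is listening
EventBus.publish('user:selected', { id: 42 });
```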
Event buses can also be mocked, so we don't need to rely on a concrete implementation.

In real-world development, the production process is an iterative task. Usually, we need to:

- Define a View markup skeleton, without any data-bind attributes
- Start developing classes for the View and the ViewModel, which are empty at the beginning
- Start developing the presentation logic, adding observables to the ViewModel and their respective data bindings in the View
- Start writing test specifications

This process is repeated, adding more presentation logic at every iteration, until we reach the final result.

Summary

In this article, you learned about web development using jQuery, presentation design patterns, and unit testing using MVVM.


Working with Data Components

Packt
22 Nov 2013
15 min read
Introducing the DataList component

The DataList component displays a collection of data in a list layout with several display types, and it supports AJAX pagination. The DataList component iterates through a collection of data and renders its child components for each item.

Let us see how to use <p:dataList> to display a list of tag names as an unordered list:

```xml
<p:dataList value="#{tagController.tags}" var="tag"
            type="unordered" itemType="disc">
    #{tag.label}
</p:dataList>
```

The preceding <p:dataList> component displays tag names as an unordered list of elements marked with disc-type bullets. The valid type options are unordered, ordered, definition, and none. We can use type="unordered" to display items as an unordered collection along with various itemType options such as disc, circle, and square. By default, type is set to unordered and itemType is set to disc. We can set type="ordered" to display items as an ordered list with various itemType options such as decimal, A, a, and i, representing numbers, uppercase letters, lowercase letters, and Roman numbers, respectively.

Time for action – displaying unordered and ordered data using DataList

Let us see how to display tag names as unordered and ordered lists with various itemType options.

1. Create <p:dataList> components to display items as unordered and ordered lists using the following code:

```xml
<h:form>
    <p:panel header="Unordered DataList">
        <h:panelGrid columns="3">
            <h:outputText value="Disc"/>
            <h:outputText value="Circle" />
            <h:outputText value="Square" />
            <p:dataList value="#{tagController.tags}" var="tag" itemType="disc">
                #{tag.label}
            </p:dataList>
            <p:dataList value="#{tagController.tags}" var="tag" itemType="circle">
                #{tag.label}
            </p:dataList>
            <p:dataList value="#{tagController.tags}" var="tag" itemType="square">
                #{tag.label}
            </p:dataList>
        </h:panelGrid>
    </p:panel>
    <p:panel header="Ordered DataList">
        <h:panelGrid columns="4">
            <h:outputText value="Number"/>
            <h:outputText value="Uppercase Letter" />
            <h:outputText value="Lowercase Letter" />
            <h:outputText value="Roman Letter" />
            <p:dataList value="#{tagController.tags}" var="tag" type="ordered">
                #{tag.label}
            </p:dataList>
            <p:dataList value="#{tagController.tags}" var="tag" type="ordered" itemType="A">
                #{tag.label}
            </p:dataList>
            <p:dataList value="#{tagController.tags}" var="tag" type="ordered" itemType="a">
                #{tag.label}
            </p:dataList>
            <p:dataList value="#{tagController.tags}" var="tag" type="ordered" itemType="i">
                #{tag.label}
            </p:dataList>
        </h:panelGrid>
    </p:panel>
</h:form>
```

2. Implement the TagController.getTags() method to return a collection of tag objects:

```java
public class TagController {
    private List<Tag> tags = null;

    public TagController() {
        tags = loadTagsFromDB();
    }

    public List<Tag> getTags() {
        return tags;
    }
}
```

What just happened?

We have created DataList components to display tag names as an unordered list using type="unordered" and as an ordered list using type="ordered", with the various supported itemType values. This is shown in the following screenshot:

Using DataList with pagination support

DataList has built-in pagination support that can be enabled by setting paginator="true". By enabling pagination, the various page navigation options will be displayed using the default paginator template. We can customize the paginator template to display only the desired options.
The paginator can be customized using the paginatorTemplate option, which accepts the following keys of UI controls:

- FirstPageLink
- LastPageLink
- PreviousPageLink
- NextPageLink
- PageLinks
- CurrentPageReport
- RowsPerPageDropdown

Note that {RowsPerPageDropdown} has its own template, and the options to display are provided via the rowsPerPageTemplate attribute (for example, rowsPerPageTemplate="5,10,15"). Also, {CurrentPageReport} has its own template, defined with the currentPageReportTemplate option. You can use the {currentPage}, {totalPages}, {totalRecords}, {startRecord}, and {endRecord} keywords within the currentPageReport template. The default is "{currentPage} of {totalPages}".

The default paginator template is "{FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink}". We can customize it to display only the desired options, for example:

{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}

The paginator can be positioned using the paginatorPosition attribute in three different locations: top, bottom, or both (the default).

The DataList component provides the following attributes for customization:

- rows: The number of rows to be displayed per page.
- first: The index of the first row to be displayed. The default is 0.
- paginator: Enables pagination. The default is false.
- paginatorTemplate: The template of the paginator.
- rowsPerPageTemplate: The template of the rowsPerPage dropdown.
- currentPageReportTemplate: The template of the currentPageReport UI.
- pageLinks: The maximum number of page links to display. The default value is 10.
- paginatorAlwaysVisible: Defines whether the paginator should be hidden when the total data count is less than the number of rows per page. The default is true.
- rowIndexVar: The name of the iterator to refer to each row index.
- varStatus: The name of the exported request-scoped variable representing the state of the iteration, the same as the varStatus attribute of <ui:repeat>.

Time for action – using DataList with pagination

Let us see how we can use the DataList component's pagination support to display five tags per page.

1. Create a DataList component with pagination support along with a custom paginatorTemplate:

```xml
<p:panel header="DataList Pagination">
    <p:dataList value="#{tagController.tags}" var="tag" id="tags"
                type="none" paginator="true" rows="5"
                paginatorTemplate="{CurrentPageReport} {FirstPageLink}
                    {PreviousPageLink} {PageLinks} {NextPageLink}
                    {LastPageLink} {RowsPerPageDropdown}"
                rowsPerPageTemplate="5,10,15">
        <f:facet name="header">
            Tags
        </f:facet>
        <h:outputText value="#{tag.id} - #{tag.label}" style="margin-left:10px" />
        <br/>
    </p:dataList>
</p:panel>
```

What just happened?

We have created a DataList component with pagination support by setting paginator="true". We have customized the paginator template to display additional information such as CurrentPageReport and RowsPerPageDropdown. Also, we have used the rowsPerPageTemplate attribute to specify the values for RowsPerPageDropdown.
The following screenshot displays the result:

Displaying tabular data using the DataTable component

DataTable is an enhanced version of the standard JSF data table, providing various additional features such as:

- Pagination
- Lazy loading
- Sorting
- Filtering
- Row selection
- Inline row/cell editing
- Conditional styling
- Expandable rows
- Grouping and SubTable

and many more. In our TechBuzz application, the administrator can view a list of users and enable/disable user accounts. First, let us see how we can display the list of users using a basic DataTable, as follows:

```xml
<p:dataTable id="usersTbl" var="user" value="#{adminController.users}">
    <f:facet name="header">
        List of Users
    </f:facet>
    <p:column headerText="Id">
        <h:outputText value="#{user.id}" />
    </p:column>
    <p:column headerText="Email">
        <h:outputText value="#{user.emailId}" />
    </p:column>
    <p:column headerText="FirstName">
        <h:outputText value="#{user.firstName}" />
    </p:column>
    <p:column headerText="Disabled">
        <h:outputText value="#{user.disabled}" />
    </p:column>
    <f:facet name="footer">
        Total no. of Users: #{fn:length(adminController.users)}.
    </f:facet>
</p:dataTable>
```

The following screenshot shows us the result:

PrimeFaces 4.0 introduced the Sticky component and provides out-of-the-box support for keeping a DataTable header fixed while scrolling, using the stickyHeader attribute:

```xml
<p:dataTable var="user" value="#{adminController.users}" stickyHeader="true">
    ...
</p:dataTable>
```

Using pagination support

If there are a large number of users, we may want to display users in a page-by-page style. DataTable has built-in support for pagination.

Time for action – using DataTable with pagination

Let us see how we can display five users per page using pagination.

1. Create a DataTable component using pagination to display five records per page, using the following code:

```xml
<p:dataTable id="usersTbl" var="user" value="#{adminController.users}"
             paginator="true" rows="5"
             paginatorTemplate="{CurrentPageReport} {FirstPageLink}
                 {PreviousPageLink} {PageLinks} {NextPageLink}
                 {LastPageLink} {RowsPerPageDropdown}"
             currentPageReportTemplate="({startRecord} - {endRecord}) of {totalRecords} Records."
             rowsPerPageTemplate="5,10,15">
    <p:column headerText="Id">
        <h:outputText value="#{user.id}" />
    </p:column>
    <p:column headerText="Email">
        <h:outputText value="#{user.emailId}" />
    </p:column>
    <p:column headerText="FirstName">
        <h:outputText value="#{user.firstName}" />
    </p:column>
    <p:column headerText="Disabled">
        <h:outputText value="#{user.disabled}" />
    </p:column>
</p:dataTable>
```

What just happened?

We have created a DataTable component with the pagination feature to display five rows per page. Also, we have customized the paginator template and provided an option to change the page size dynamically using the rowsPerPageTemplate attribute.

Using column sorting support

DataTable comes with built-in support for sorting on a single column or multiple columns.
You can define a column as sortable using the sortBy attribute as follows:

```xml
<p:column headerText="FirstName" sortBy="#{user.firstName}">
    <h:outputText value="#{user.firstName}" />
</p:column>
```

You can specify the default sort column and sort order using the sortBy and sortOrder attributes on the <p:dataTable> element:

```xml
<p:dataTable id="usersTbl2" var="user" value="#{adminController.users}"
             sortBy="#{user.firstName}" sortOrder="descending">
</p:dataTable>
```

The <p:dataTable> component's default sorting algorithm uses a Java comparator; you can use your own customized sort method as well:

```xml
<p:column headerText="FirstName" sortBy="#{user.firstName}"
          sortFunction="#{adminController.sortByFirstName}">
    <h:outputText value="#{user.firstName}" />
</p:column>
```

```java
public int sortByFirstName(Object firstName1, Object firstName2) {
    // return -1, 0, or 1 if firstName1 is less than, equal to,
    // or greater than firstName2, respectively
    return ((String) firstName1).compareToIgnoreCase((String) firstName2);
}
```

By default, DataTable's sortMode is set to single; to enable sorting on multiple columns, set sortMode to multiple. In multiple-columns sort mode, you can click on a column while holding the meta key (Ctrl or command) to add the column to the sort order group:

```xml
<p:dataTable id="usersTbl" var="user" value="#{adminController.users}" sortMode="multiple">
</p:dataTable>
```

Using column filtering support

DataTable provides support for column-level filtering as well as global filtering (on all columns), and it provides an option to hold the list of filtered records. In addition to the default match mode startsWith, we can use various other match modes such as endsWith, exact, and contains.

Time for action – using DataTable with filtering

Let us see how we can use filters with the users' DataTable.

1. Create a DataTable component and apply column-level filters as well as a global filter on all columns:

```xml
<p:dataTable widgetVar="userTable" var="user" value="#{adminController.users}"
             filteredValue="#{adminController.filteredUsers}"
             emptyMessage="No Users found for the given Filters">
    <f:facet name="header">
        <p:outputPanel>
            <h:outputText value="Search all Columns:" />
            <p:inputText id="globalFilter" onkeyup="userTable.filter()" style="width:150px" />
        </p:outputPanel>
    </f:facet>
    <p:column headerText="Id">
        <h:outputText value="#{user.id}" />
    </p:column>
    <p:column headerText="Email" filterBy="#{user.emailId}"
              footerText="contains" filterMatchMode="contains">
        <h:outputText value="#{user.emailId}" />
    </p:column>
    <p:column headerText="FirstName" filterBy="#{user.firstName}"
              footerText="startsWith">
        <h:outputText value="#{user.firstName}" />
    </p:column>
    <p:column headerText="LastName" filterBy="#{user.lastName}"
              filterMatchMode="endsWith" footerText="endsWith">
        <h:outputText value="#{user.lastName}" />
    </p:column>
    <p:column headerText="Disabled" filterBy="#{user.disabled}"
              filterOptions="#{adminController.userStatusOptions}"
              filterMatchMode="exact" footerText="exact">
        <h:outputText value="#{user.disabled}" />
    </p:column>
</p:dataTable>
```

2. Initialize userStatusOptions in the AdminController managed bean:
```java
@ManagedBean
@ViewScoped
public class AdminController {
    private List<User> users = null;
    private List<User> filteredUsers = null;
    private SelectItem[] userStatusOptions;

    public AdminController() {
        users = loadAllUsersFromDB();
        this.userStatusOptions = new SelectItem[3];
        this.userStatusOptions[0] = new SelectItem("", "Select");
        this.userStatusOptions[1] = new SelectItem("true", "True");
        this.userStatusOptions[2] = new SelectItem("false", "False");
    }

    // setters and getters
}
```

What just happened?

We have used various filterMatchMode instances, such as startsWith, endsWith, and contains, while applying column-level filters. We have used the filterOptions attribute to specify predefined filter values, which are displayed as a select drop-down list. As we have specified filteredValue="#{adminController.filteredUsers}", once the filters are applied, the filtered users list will be populated into the filteredUsers property. The following is the resultant screenshot:

Since PrimeFaces version 4.0, we can specify the sortBy and filterBy properties as sortBy="emailId" and filterBy="emailId" instead of sortBy="#{user.emailId}" and filterBy="#{user.emailId}".

A couple of important tips

It is suggested to use a scope longer than request, such as the view scope, to keep the filteredValue attribute, so that the filtered list is still accessible after filtering. The filter located at the header is a global one applying to all fields; it is implemented by calling the client-side API method called filter(). The important part is to specify the ID of the input text as globalFilter, which is a reserved identifier for DataTable.

Selecting DataTable rows

Selecting one or more rows from a table and performing operations such as editing or deleting them is a very common requirement. The DataTable component provides several ways to select a row or rows.

Selecting a single row

We can use a PrimeFaces Command component, such as commandButton or commandLink, and bind the selected row to a server-side property using <f:setPropertyActionListener>, shown as follows:

```xml
<p:dataTable id="usersTbl" var="user" value="#{adminController.users}">
    <!-- Column definitions -->
    <p:column style="width:20px;">
        <p:commandButton id="selectButton" update=":form:userDetails"
                         icon="ui-icon-search" title="View">
            <f:setPropertyActionListener value="#{user}"
                target="#{adminController.selectedUser}" />
        </p:commandButton>
    </p:column>
</p:dataTable>

<h:panelGrid id="userDetails" columns="2">
    <h:outputText value="Id:" />
    <h:outputText value="#{adminController.selectedUser.id}"/>
    <h:outputText value="Email:" />
    <h:outputText value="#{adminController.selectedUser.emailId}"/>
</h:panelGrid>
```

Selecting rows using a row click

Instead of having a separate button to trigger binding of the selected row to a server-side property, PrimeFaces provides another, simpler way to bind the selected row by using the selectionMode, selection, and rowKey attributes.
Also, we can use the rowSelect and rowUnselect events to update other components based on the selected row, shown as follows:

<p:dataTable var="user" value="#{adminController.users}"
    selectionMode="single" selection="#{adminController.selectedUser}"
    rowKey="#{user.id}">
    <p:ajax event="rowSelect" listener="#{adminController.onRowSelect}"
        update=":form:userDetails"/>
    <p:ajax event="rowUnselect" listener="#{adminController.onRowUnselect}"
        update=":form:userDetails"/>
    <!-- Column definitions -->
</p:dataTable>

<h:panelGrid id="userDetails" columns="2">
    <h:outputText value="Id:" />
    <h:outputText value="#{adminController.selectedUser.id}"/>
    <h:outputText value="Email:" />
    <h:outputText value="#{adminController.selectedUser.emailId}"/>
</h:panelGrid>

Similarly, we can select multiple rows using selectionMode="multiple" and bind the selection attribute to an array or list of user objects:

<p:dataTable var="user" value="#{adminController.users}"
    selectionMode="multiple" selection="#{adminController.selectedUsers}"
    rowKey="#{user.id}">
    <!-- Column definitions -->
</p:dataTable>

rowKey should be a unique identifier from your data model; it is used by DataTable to find the selected rows. You can either define this key by using the rowKey attribute or by binding a data model that implements org.primefaces.model.SelectableDataModel. When the multiple selection mode is enabled, we need to hold the Ctrl or Command key and click on the rows to select multiple rows. If we don't hold the Ctrl or Command key and click on a row, the previous selection will be cleared and only the last clicked row will be selected. We can customize this behavior using the rowSelectMode attribute. If you set rowSelectMode="add", when you click on a row, it will keep the previous selection and add the currently selected row even if you don't hold the Ctrl or Command key. The default rowSelectMode value is new. We can disable the row selection feature by setting disabledSelection="true".

Selecting rows using a radio button / checkbox

Another very common scenario is having a radio button or checkbox for each row, so that the user can select one or more rows and then perform actions such as edit or delete. The DataTable component provides radio-button-based single row selection using a nested <p:column> element with selectionMode="single":

<p:dataTable var="user" value="#{adminController.users}"
    selection="#{adminController.selectedUser}" rowKey="#{user.id}">
    <p:column selectionMode="single"/>
    <!-- Column definitions -->
</p:dataTable>

The DataTable component also provides checkbox-based multiple row selection using a nested <p:column> element with selectionMode="multiple":

<p:dataTable var="user" value="#{adminController.users}"
    selection="#{adminController.selectedUsers}" rowKey="#{user.id}">
    <p:column selectionMode="multiple"/>
    <!-- Column definitions -->
</p:dataTable>

In our TechBuzz application, the administrator would like to have a facility to select multiple users and disable them in one go. Let us see how we can implement this using checkbox-based multiple row selection; a sketch follows below.
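The following is only a minimal sketch of how such a bulk action could be wired up; the disableSelectedUsers method, the "Disable Selected" button, and the persistence comment are illustrative assumptions rather than part of the original listing:

<p:dataTable id="usersTbl" var="user" value="#{adminController.users}"
    selection="#{adminController.selectedUsers}" rowKey="#{user.id}">
    <p:column selectionMode="multiple"/>
    <!-- Column definitions -->
</p:dataTable>
<p:commandButton value="Disable Selected" update="usersTbl"
    action="#{adminController.disableSelectedUsers}"/>

// In AdminController: iterate over the bound selection and mark each user disabled
public void disableSelectedUsers() {
    for (User user : selectedUsers) {
        user.setDisabled(true);
        // persist the change here, for example through a service or DAO
    }
}

Binding selection to a List<User> property named selectedUsers keeps the server-side state in sync with the checkboxes, so the action method only has to loop over it.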


HTML5 Canvas

Packt
11 Sep 2013
5 min read
Setting up your HTML5 canvas (Should know)

This recipe will show you how to first of all set up your own HTML5 canvas. With the canvas set up, we can then move on to look at some of the basic elements the canvas has to offer and how we would go about implementing them. For this task we will be creating a series of primitives such as circles and rectangles. Modern video games make use of these types of primitives in many different forms. For example, both circles and rectangles are commonly used within collision-detection algorithms such as bounding circles or bounding boxes.

How to do it...

As previously mentioned, we will begin by creating our own HTML5 canvas. We will start by creating a blank HTML file. To do this, you will need some form of text editor such as Microsoft Notepad (available for Windows) or the TextEdit application (available on Mac OS). Once you have a basic webpage set up, all that is left to do in order to create a canvas is to place the following between both body tags:

<canvas id="canvas" width="800" height="600"></canvas>

As previously mentioned, we will be implementing a number of basic elements within the canvas. In order to do this we must first link the JavaScript file to our webpage. This file will be responsible for the initialization, loading, and drawing of objects to the canvas. In order for our scripts to have any effect on our canvas, we must create a separate JavaScript file for our canvas example. Create this new file within your text editor and then insert the following code declarations:

var canvas = document.getElementById("canvas"),
    context = canvas.getContext("2d");

These declarations are responsible for retrieving both the canvas element and its context. Using the canvas context, we can begin to draw primitives, text, and load textures into our canvas. We will begin by drawing a rectangle in the top-left corner of our canvas. In order to do this, place the following code below our previous JavaScript declarations:

context.fillStyle = "#FF00FF";
context.fillRect(15, 15, 150, 75);

If you were to now view the original webpage we created, you would see the rectangle being drawn in the top-left corner at position X: 15, Y: 15. Now that we have a rectangle, we can look at how we would go about drawing a circle onto our canvas. This can be achieved by means of the following code:

context.beginPath();
context.arc(350, 150, 40, 0, 2 * Math.PI);
context.stroke();

How it works...

The first code extract represents the basic framework required to produce a blank webpage and is necessary for a browser to read and display the webpage in question. With a basic webpage created, we then declare a new HTML5 canvas. This is done by assigning an id attribute, which we use to refer to the canvas within our scripts. The canvas declaration then takes a width and height attribute, both of which are also necessary to specify the size of the canvas, that is, the number of pixels wide and pixels high. Before any objects can be drawn to the canvas, we first need to get the canvas element. This is done by means of the getElementById method that you can see in our canvas example. When retrieving the canvas element, we are also required to specify the canvas context by calling a built-in HTML5 method known as getContext. This object gives access to many different properties and methods for drawing edges, circles, rectangles, external images, and so on. This can be seen when we draw a rectangle to our canvas.
This was done using the fillStyle property, which takes in a hexadecimal value and in return specifies the color of an element. Our next line makes use of the fillRect method, which requires a minimum of four values to be passed to it. These values include the X and Y position of the rectangle, as well as the width and height of the rectangle. As a result, a rectangle is drawn to the canvas with the color, position, width, and height specified.

We then move on to drawing a circle to the canvas, which is done by first calling a built-in HTML canvas method known as beginPath. This method is used to either begin a new path or to reset a current path. With a new path set up, we then take advantage of a method known as arc that allows for the creation of arcs or curves, which can be used to create circles. This method requires that we pass both an X and Y position, a radius, and a starting angle measured in radians. This angle is between 0 and 2 * Pi, where both 0 and 2 * Pi are located at the 3 o'clock position of the arc's circle. We also must pass an ending angle, which is also measured in radians. The following figure is taken directly from the W3C HTML canvas reference, which you can find at the following link: http://bit.ly/UCVPY1.

Summary

In this article we saw how to set up our own HTML5 canvas, looked at some of the basic elements the canvas has to offer, and saw how we would go about implementing them.

Resources for Article:

Further resources on this subject:

Building HTML5 Pages from Scratch [Article]
HTML5 Presentations - creating our initial presentation [Article]
HTML5: Generic Containers [Article]

Components

Packt
25 Nov 2014
14 min read
This article by Timothy Moran, author of Mastering KnockoutJS, teaches you how to use the new Knockout components feature.

In Version 3.2, Knockout added components, using the combination of a template (view) with a viewmodel to create reusable, behavior-driven DOM objects. Knockout components are inspired by web components, a new (and experimental, at the time of writing this) set of standards that allow developers to define custom HTML elements paired with JavaScript that create packaged controls. Like web components, Knockout allows the developer to use custom HTML tags to represent these components in the DOM. Knockout also allows components to be instantiated with a binding handler on standard HTML elements. Knockout binds components by injecting an HTML template, which is bound to its own viewmodel. This is probably the single largest feature Knockout has ever added to the core library. The reason we started with RequireJS is that components can optionally be loaded and defined with module loaders, including their HTML templates! This means that our entire application (even the HTML) can be defined in independent modules, instead of as a single hierarchy, and loaded asynchronously.

The basic component registration

Unlike extenders and binding handlers, which are created by just adding an object to Knockout, components are created by calling the ko.components.register function:

ko.components.register('contact-list', {
    viewModel: function(params) { },
    template: // template string or object
});

This will create a new component named contact-list, which uses the object returned by the viewModel function as a binding context, and the template as its view. It is recommended that you use lowercase, dash-separated names for components so that they can easily be used as custom elements in your HTML. To use this newly created component, you can use a custom element or the component binding. All of the following three tags produce equivalent results:

<contact-list params="data: contacts"></contact-list>

<div data-bind="component: { name: 'contact-list', params: { data: contacts } }"></div>

<!-- ko component: { name: 'contact-list', params: { data: contacts } } --><!-- /ko -->

Obviously, the custom element syntax is much cleaner and easier to read. It is important to note that custom elements cannot be self-closing tags. This is a restriction of the HTML parser and cannot be controlled by Knockout. There is one advantage of using the component binding: the name of the component can be an observable. If the name of the component changes, the previous component will be disposed (just like it would if a control flow binding removed it) and the new component will be initialized. The params attribute of custom elements works in a manner that is similar to the data-bind attribute. Comma-separated key/value pairs are parsed to create a property bag, which is given to the component. The values can contain JavaScript literals, observable properties, or expressions. It is also possible to register a component without a viewmodel, in which case the object created by params is directly used as the binding context.
To see this, we'll convert the list of contacts into a component:

<contact-list params="contacts: displayContacts, edit: editContact, delete: deleteContact">
</contact-list>

The HTML code for the list is replaced with a custom element with parameters for the list as well as callbacks for the two buttons, edit and delete:

ko.components.register('contact-list', {
    template:
        '<ul class="list-unstyled" data-bind="foreach: contacts">' +
            '<li>' +
                '<h3>' +
                    '<span data-bind="text: displayName"></span> ' +
                    '<small data-bind="text: phoneNumber"></small> ' +
                    '<button class="btn btn-sm btn-default" data-bind="click: $parent.edit">Edit</button> ' +
                    '<button class="btn btn-sm btn-danger" data-bind="click: $parent.delete">Delete</button>' +
                '</h3>' +
            '</li>' +
        '</ul>'
});

This component registration uses an inline template. Everything still looks and works the same, but the resulting HTML now includes our custom element.

Custom elements in IE 8 and higher

IE 9 and later versions, as well as all other major browsers, have no issue with seeing custom elements in the DOM before they have been registered. However, older versions of IE will remove the element if it hasn't been registered. The registration can be done either with Knockout, with ko.components.register('component-name'), or with the standard document.createElement('component-name') expression statement. One of these must come before the custom element, either by the script containing them being first in the DOM, or by the custom element being added at runtime. When using RequireJS, being first in the DOM won't help, as the loading is asynchronous. If you need to support older IE versions, it is recommended that you include a separate script to register the custom element names at the top of the body tag or in the head tag:

<!DOCTYPE html>
<html>
<body>
    <script>
        document.createElement('my-custom-element');
    </script>
    <script src='require.js' data-main='app/startup'></script>
    <my-custom-element></my-custom-element>
</body>
</html>

Once this has been done, components will work in IE 6 and higher, even with custom elements.

Template registration

The template property of the configuration sent to register can take any of the following formats:

ko.components.register('component-name', {
    template: [OPTION]
});

The element ID

Consider the following code statement:

template: { element: 'component-template' }

If you specify the ID of an element in the DOM, the contents of that element will be used as the template for the component. Although it isn't supported in IE yet, the template element is a good candidate, as browsers do not visually render the contents of template elements.

The element instance

Consider the following code statement:

template: { element: instance }

You can pass a real DOM element to the template to be used. This might be useful in a scenario where the template was constructed programmatically.
Like the element ID method, only the contents of the element will be used as the template:

var template = document.getElementById('contact-list-template');
ko.components.register('contact-list', {
    template: { element: template }
});

An array of DOM nodes

Consider the following code statement:

template: [nodes]

If you pass an array of DOM nodes to the template configuration, then the entire array will be used as a template and not just the descendants:

var template = document.getElementById('contact-list-template'),
    nodes = Array.prototype.slice.call(template.content.childNodes);
ko.components.register('contact-list', {
    template: nodes
});

Document fragments

Consider the following code statement:

template: documentFragmentInstance

If you pass a document fragment, the entire fragment will be used as a template instead of just the descendants:

var template = document.getElementById('contact-list-template');
ko.components.register('contact-list', {
    template: template.content
});

This example works because template elements wrap their contents in a document fragment in order to stop the normal rendering. Using the content is the same method that Knockout uses internally when a template element is supplied.

HTML strings

We already saw an example of an HTML string in the previous section. While using the value inline is probably uncommon, supplying a string would be an easy thing to do if your build system provided it for you.

Registering templates using the AMD module

Consider the following code statement:

template: { require: 'module/path' }

If a require property is passed to the configuration object of a template, the default module loader will load the module and use it as the template. The module can return any of the preceding formats. This is especially useful for the RequireJS text plugin:

ko.components.register('contact-list', {
    template: { require: 'text!contact-list.html' }
});

Using this method, we can extract the HTML template into its own file, drastically improving its organization. By itself, this is a huge benefit to development.

The viewmodel registration

Like template registration, viewmodels can be registered using several different formats. To demonstrate this, we'll use a simple viewmodel for our contact-list component:

function ListViewmodel(params) {
    this.contacts = params.contacts;
    this.edit = params.edit;
    this.delete = function(contact) {
        console.log('Mock Deleting Contact', ko.toJS(contact));
    };
}

To verify that things are getting wired up properly, you'll want something interactive; hence, we use the fake delete function.

The constructor function

Consider the following code statement:

viewModel: Constructor

If you supply a function to the viewModel property, it will be treated as a constructor. When the component is instantiated, new will be called on the function, with the params object as its first parameter:

ko.components.register('contact-list', {
    template: { require: 'text!contact-list.html' },
    viewModel: ListViewmodel // defined above
});

A singleton object

Consider the following code statement:

viewModel: { instance: singleton }

If you want all your component instances to be backed by a shared object—though this is not recommended—you can pass it as the instance property of a configuration object. Because the object is shared, parameters cannot be passed to the viewmodel using this method.
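As a minimal sketch of that form (the click-counter component, its shared clickCount observable, and the increment helper are hypothetical, used only to show that state is shared across every instance):

// Every <click-counter> on the page is backed by this one object,
// so clicking any instance updates all of them.
var shared = {
    clickCount: ko.observable(0),
    increment: function () {
        this.clickCount(this.clickCount() + 1);
    }
};

ko.components.register('click-counter', {
    template: '<button data-bind="click: increment, text: clickCount"></button>',
    viewModel: { instance: shared }
});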
The factory function

Consider the following code statement:

viewModel: { createViewModel: function(params, componentInfo) {} }

This method is useful because it supplies the container element of the component in the second parameter, as componentInfo.element. It also provides you with the opportunity to perform any other setup, such as modifying or extending the constructor parameters. The createViewModel function should return an instance of a viewmodel component:

ko.components.register('contact-list', {
    template: { require: 'text!contact-list.html' },
    viewModel: {
        createViewModel: function(params, componentInfo) {
            console.log('Initializing component for', componentInfo.element);
            return new ListViewmodel(params);
        }
    }
});

Registering viewmodels using an AMD module

Consider the following code statement:

viewModel: { require: 'module-path' }

Just like templates, viewmodels can be registered with an AMD module that returns any of the preceding formats.

Registering with AMD

In addition to registering the template and the viewmodel as AMD modules individually, you can register the entire component with a require call:

ko.components.register('contact-list', {
    require: 'contact-list'
});

The AMD module will return the entire component configuration:

define(['knockout', 'text!contact-list.html'], function(ko, templateString) {

    function ListViewmodel(params) {
        this.contacts = params.contacts;
        this.edit = params.edit;
        this.delete = function(contact) {
            console.log('Mock Deleting Contact', ko.toJS(contact));
        };
    }

    return {
        template: templateString,
        viewModel: ListViewmodel
    };
});

As the Knockout documentation points out, this method has several benefits:

The registration call is just a require path, which is easy to manage.
The component is composed of two parts: a JavaScript module and an HTML module. This provides both simple organization and clean separation.
The RequireJS optimizer, r.js, can use the text dependency on the HTML module to bundle the HTML code with the bundled output. This means your entire application, including the HTML templates, can be a single file in production (or a collection of bundles if you want to take advantage of lazy loading).

Observing changes in component parameters

Component parameters will be passed via the params object to the component's viewmodel in one of the following three ways:

No observable expression evaluation needs to occur, and the value is passed literally:

<component params="name: 'Timothy Moran'"></component>
<component params="name: nonObservableProperty"></component>
<component params="name: observableProperty"></component>
<component params="name: viewModel.observableSubProperty"></component>

In all of these cases, the value is passed directly to the component on the params object. This means that changes to these values will change the property on the instantiating viewmodel, except for the first case (literal values). Observable values can be subscribed to normally.

An observable expression needs to be evaluated, so it is wrapped in a computed observable:

<component params="name: name() + '!'"></component>

In this case, params.name is not the original property. Calling params.name() will evaluate the computed wrapper. Trying to modify the value will fail, as the computed value is not writable. The value can be subscribed to normally.

An observable expression evaluates an observable instance, so it is wrapped in an observable that unwraps the result of the expression:

<component params="name: isFormal() ? firstName : lastName"></component>
In this example, firstName and lastName are both observable properties. If calling params.name() returned the observable, you would need to call params.name()() to get the actual value, which is rather ugly. Instead, Knockout automatically unwraps the expression so that calling params.name() returns the actual value of either firstName or lastName. If you need to access the actual observable instances, for example to write a value to them, trying to write to params.name will fail, as it is a computed observable. To get the unwrapped value, you can use the params.$raw object, which provides the unwrapped values. In this case, you can update the name by calling params.$raw.name('New'). In general, this case should be avoided by removing the logic from the binding expression and placing it in a computed observable in the viewmodel.

The component's life cycle

When a component binding is applied, Knockout takes the following steps:

The component loader asynchronously creates the viewmodel factory and template. This result is cached so that it is only performed once per component.
The template is cloned and injected into the container (either the custom element or the element with the component binding).
If the component has a viewmodel, it is instantiated. This is done synchronously.
The component is bound to either the viewmodel or the params object.
The component is left active until it is disposed.
The component is disposed. If the viewmodel has a dispose method, it is called, and then the template is removed from the DOM.

The component's disposal

If the component is removed from the DOM by Knockout, either because the name in the component binding changed or because a control flow binding (for example, if or foreach) removed it, the component will be disposed. If the component's viewmodel has a dispose function, it will be called. Normal Knockout bindings in the component's view will be automatically disposed, just as they would in a normal control flow situation. However, anything set up by the viewmodel needs to be manually cleaned up. Some examples of viewmodel cleanup include the following:

setInterval callbacks can be removed with clearInterval.
Computed observables can be removed by calling their dispose method. Pure computed observables don't need to be disposed. Computed observables that are only used by bindings or other viewmodel properties also do not need to be disposed, as garbage collection will catch them.
Observable subscriptions can be disposed by calling their dispose method.
Event handlers created by the component that are not part of a normal Knockout binding need to be removed.

Combining components with data bindings

There is only one restriction on data-bind attributes used on custom elements or on elements with the component binding: the binding handlers cannot use controlsDescendantBindings. This isn't a new restriction; two bindings that control descendants cannot be on a single element, and since components control descendant bindings, they cannot be combined with a binding handler that also controls descendants. It is worth remembering, though, as you might be inclined to place an if or foreach binding on a component; doing this will cause an error. Instead, wrap the component with an element or a containerless binding:

<ul data-bind='foreach: allProducts'>
    <product-details params='product: $data'></product-details>
</ul>

It's also worth noting that bindings such as text and html will replace the contents of the element they are on.
When used with components, this will potentially result in the component being lost, so it's not a good idea.

Summary

In this article, we learned that the Knockout components feature gives you a powerful tool that will help you create reusable, behavior-driven DOM elements.

Resources for Article:

Further resources on this subject:

Deploying a Vert.x application [Article]
The Dialog Widget [Article]
Top features of KnockoutJS [Article]


Plone 4 Development: Creating a Custom Workflow

Packt
30 Aug 2011
7 min read
Professional Plone 4 Development: Build robust, content-centric web applications with Plone 4.

Keeping control with workflow

As we have alluded to before, managing permissions directly anywhere other than the site root is usually a bad idea. Every content object in a Plone site is subject to security, and will in most cases inherit permission settings from its parent. If we start making special settings in particular folders, we will quickly lose control. However, if settings are always acquired, how can we restrict access to particular folders or prevent authors from editing published content whilst still giving them rights to work on items in a draft state?

The answer to both of these problems is workflow. Workflows are managed by the portal_workflow tool. This controls a mapping of content types to workflow definitions, and sets a default workflow for types not explicitly mapped. The workflow tool allows a workflow chain of multiple workflows to be assigned to a content type. Each workflow is given its own state variable. Multiple workflows can manage permissions concurrently. Plone's user interface does not explicitly support more than one workflow, but can be used in combination with custom user interface elements to address complex security and workflow requirements.

The workflow definitions themselves are objects found inside the portal_workflow tool, under the Contents tab. Each definition consists of states, such as private or published, and transitions between them. Transitions can be protected by permissions or restricted to particular roles.

Although it is fairly common to protect workflow transitions by role, this is not actually a very good use of the security system. It would be much more sensible to use an appropriate permission. The exception is when custom roles are used solely for the purpose of defining roles in a workflow.

Some transitions are automatic, which means that they will be invoked as soon as an object enters a state that has this transition as a possible exit (that is, provided the relevant guard conditions are met). More commonly, transitions are invoked following some user action, normally through the State drop-down menu in Plone's user interface. It is possible to execute code immediately before or after a transition is executed.

States may be used simply for information purposes. For example, it is useful to be able to mark a content object as "published" and be able to search for all published content. More commonly, states are also used to control content item security. When an object enters a particular state, either its initial state, when it is first created, or a state that is the result of a workflow transition, the workflow tool can set a number of permissions according to a predefined permissions map associated with the target state. The permissions that are managed by a particular workflow are listed under the Permissions tab on the workflow definition. These permissions are used in a permission map for each state.

If you change workflow security settings, your changes will not take effect immediately, since permissions are only modified upon workflow transitions. To synchronize permissions with the current workflow definitions, use the Update security settings button at the bottom of the Workflows tab of the portal_workflow tool. Note that this can take a long time on large sites, because it needs to find all content items using the old settings and update their permissions.
If you use the Types control panel in Plone's Site Setup to change workflows, this reindexing happens automatically. Workflows can also be used to manage role-to-group assignments in the same way they can be used to manage role-to-permission assignments. This feature is rarely used in Plone, however.

All workflows manage a number of workflow variables, whose values can change with transitions and be queried through the workflow tool. These are rarely changed, however, and Plone relies on a number of the default ones. These include the previous transition (action), the user ID of the person who performed that transition (actor), any associated comments (comments), the date/time of the last transition (time), and the full transition history (review_history).

Finally, workflows can define work lists, which are used by Plone's Review list portlet to show pending tasks for the current user. A work list in effect performs a catalog search using the workflow's state variable. In Plone, the state variable is always called review_state.

The workflow system is very powerful, and can be used to solve many kinds of problems where objects of the same type need to be in different states. Learning to use it effectively can pay off greatly in the long run.

Interacting with workflow in code

Interacting with workflow from our own code is usually straightforward. To get the workflow state of a particular object, we can do:

from Products.CMFCore.utils import getToolByName

wftool = getToolByName(context, 'portal_workflow')
review_state = wftool.getInfoFor(context, 'review_state')

However, if we are doing a search using the portal_catalog tool, the results it returns have the review state as metadata already:

from Products.CMFCore.utils import getToolByName

catalog = getToolByName(context, 'portal_catalog')
for result in catalog(dict(
        portal_type=('Document', 'News Item',),
        review_state=('published', 'public', 'visible',),
        )):
    review_state = result.review_state
    # do something with the review_state

To change the workflow state of an object, we can use the following line of code:

wftool.doActionFor(context, action='publish')

The action here is the name of a transition, which must be available to the current user from the current state of context. There is no (easy) way to directly specify the target state. This is by design: recall that transitions form the paths between states, and may involve additional security restrictions or the triggering of scripts. Again, the Doc tab for the portal_workflow tool and its sub-objects (the workflow definitions and their states and transitions) should be your first point of call if you need more detail. The workflow code can be found in Products.CMFCore.WorkflowTool and Products.DCWorkflow.

Installing a custom workflow

It is fairly common to create custom workflows when building a Plone website. Plone ships with several useful workflows, but security and approvals processes tend to differ from site to site, so we will often find ourselves creating our own workflows. Workflows are a form of customization. We should ensure they are installable using GenericSetup. However, the workflow XML syntax is quite verbose, so it is often easier to start from the ZMI and export the workflow definition to the filesystem.

Designing a workflow for Optilux Cinemas

It is important to get the design of a workflow policy right, considering the different roles that need to interact with the objects, and the permissions they should have in the various states.
Draft content should be visible to cinema staff, but not customers, and should go through review before being published. The following diagram illustrates this workflow.

This workflow will be made the default, and should therefore apply to most content. However, we will keep the standard Plone policy of omitting workflow for the File and Image types. This means that permissions for content items of these types will be acquired from the Folder in which they are contained, making them simpler to manage. In particular, this means it is not necessary to separately publish linked files and embedded images when publishing a Page.

Because we need to distinguish between logged-in customers and staff members, we will introduce a new role called StaffMember. This role will be granted the View permission by default for all items in the site, much like a Manager or Site Administrator user is by default (although workflow may override this). We will let the Site Administrator role represent site administrators, and the Reviewer role represent content reviewers, as they do in a default Plone installation. We will also create a new group, Staff, which is given the StaffMember role. Among other things, this will allow us to easily grant the Reader, Editor, and Contributor roles in particular folders to all staff from the Sharing screen.

The preceding workflow is designed for content production and review. This is probably the most common use for workflow in Plone, but it is by no means the only use case. For example, the author once used workflows to control the payment status on an Invoice content type. As you become more proficient with the workflow engine, you will find that it is useful in a number of scenarios.
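As a minimal sketch of how application code might drive this custom workflow once it is installed (the draft state and submit transition names follow the design above and are assumptions, not identifiers taken from the original text):

from Products.CMFCore.utils import getToolByName

def submit_for_review(context):
    # Move a staff-authored draft into the review queue so a
    # Reviewer can later publish it.
    wftool = getToolByName(context, 'portal_workflow')
    if wftool.getInfoFor(context, 'review_state') == 'draft':
        wftool.doActionFor(context, action='submit')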


Your First Application

Packt
15 Jan 2014
7 min read
Sketching out the application

We are going to build a browsable database of cat profiles. Visitors will be able to create pages for their cats and fill in basic information such as the name, date of birth, and breed for each cat. This application will support the default Create-Retrieve-Update-Delete operations (CRUD). We will also create an overview page with the option to filter cats by breed. All of the security, authentication, and permission features are intentionally left out, since they will be covered later.

Entities, relationships, and attributes

Firstly, we need to define the entities of our application. In broad terms, an entity is a thing (person, place, or object) about which the application should store data. From the requirements, we can extract the following entities and attributes:

Cats, which have a numeric identifier, a name, a date of birth, and a breed
Breeds, which only have an identifier and a name

This information will help us when defining the database schema that will store the entities, relationships, and attributes, as well as the models, which are the PHP classes that represent the objects in our database.

The map of our application

We now need to think about the URL structure of our application. Having clean and expressive URLs has many benefits. On a usability level, the application will be easier to navigate and look less intimidating to the user. For frequent users, individual pages will be easier to remember or bookmark and, if they contain relevant keywords, they will often rank higher in search engine results. To fulfill the initial set of requirements, we are going to need the following routes in our application:

Method   Route                Description
GET      /                    Index
GET      /cats                Overview page
GET      /cats/breeds/:name   Overview page for specific breed
GET      /cats/:id            Individual cat page
GET      /cats/create         Form to create a new cat page
POST     /cats                Handle creation of new cat page
GET      /cats/:id/edit       Form to edit existing cat page
PUT      /cats/:id            Handle updates to cat page
GET      /cats/:id/delete     Form to confirm deletion of page
DELETE   /cats/:id            Handle deletion of cat page

We will shortly learn how Laravel helps us to turn this routing sketch into actual code. If you have written PHP applications without a framework, you can briefly reflect on how you would have implemented such a routing structure. To add some perspective, this is what the second-to-last URL could have looked like with a traditional PHP script (without URL rewriting): /index.php?p=cats&id=1&_action=delete&confirm=true.

The preceding table can be prepared using a pen and paper, in a spreadsheet editor, or even in your favorite code editor using ASCII characters. In the initial development phases, this table of routes is an important prototyping tool that forces you to think about URLs first and helps you define and refine the structure of your application iteratively.

If you have worked with REST APIs, this kind of routing structure will look familiar to you. In RESTful terms, we have a cats resource that responds to the different HTTP verbs and provides an additional set of routes to display the necessary forms. If, on the other hand, you have not worked with RESTful sites, the use of the PUT and DELETE HTTP methods might be new to you. Even though web browsers do not support these methods for standard HTTP requests, Laravel uses a technique that other frameworks such as Rails use, and emulates those methods by adding a _method input field to the forms.
This way, they can be sent over a standard POST request and are then delegated to the correct route or controller method in the application. Note also that none of the form submission endpoints are handled with a GET method. This is primarily because they have side effects; a user could trigger the same action multiple times accidentally when using the browser history. Therefore, when they are called, these routes never display anything to the users. Instead, they redirect them after completing the action (for instance, DELETE /cats/:id will redirect the user to GET /cats).

Starting the application

Now that we have the blueprints for the application, let's roll up our sleeves and start writing some code. Start by opening a new terminal window and create a new project with Composer, as follows:

$ composer create-project laravel/laravel cats --prefer-dist
$ cd cats

Once Composer finishes downloading Laravel and resolving its dependencies, you will have a directory structure identical to the one presented previously.

Using the built-in development server

To start the application, unless you are running an older version of PHP (5.3.*), you will not need a local server such as WAMP on Windows or MAMP on Mac OS, since Laravel uses the built-in development server that is bundled with PHP 5.4 or later. To start the development server, we use the following artisan command:

$ php artisan serve

Artisan is the command-line utility that ships with Laravel, and its features will be covered in more detail later. Next, open your web browser and visit http://localhost:8000; you will be greeted with Laravel's welcome message. If you get an error telling you that the php command does not exist or cannot be found, make sure that it is present in your PATH variable. If the command fails because you are running PHP 5.3 and you have no upgrade possibility, simply use your local development server (MAMP/WAMP) and set Apache's DocumentRoot to point to cats-app/public/.

Writing the first routes

Let's start by writing the first two routes of our application inside app/routes.php. This file already contains some comments as well as a sample route. You can keep the comments, but you must remove the existing route before adding the following routes:

Route::get('/', function() {
    return "All cats";
});

Route::get('cats/{id}', function($id) {
    return "Cat #$id";
});

The first parameter of the get method is the URI pattern. When a pattern is matched, the closure function in the second parameter is executed with any parameters that were extracted from the pattern. Note that the slash prefix in the pattern is optional; however, you should not have any trailing slashes. You can make sure that your routes work by opening your web browser and visiting http://localhost:8000/cats/123. If you are not using PHP's built-in development server and are getting a 404 error at this stage, make sure that Apache's mod_rewrite configuration is enabled and works correctly.

Restricting the route parameters

In the pattern of the second route, {id} currently matches any string or number. To restrict it so that it only matches numbers, we can chain a where method to our route as follows:

Route::get('cats/{id}', function($id) {
    return "Cat #$id";
})->where('id', '[0-9]+');

The where method takes two arguments: the first one is the name of the parameter and the second one is the regular expression pattern that it needs to match.

Summary

We have covered a lot in this article.
We learned how to define routes, prepare the models of the application, and interact with them. Moreover, we had a glimpse of the many powerful features of Eloquent and Blade, as well as the other convenient helpers in Laravel for creating forms and input fields: all of this in under 200 lines of code!

Resources for Article:

Further resources on this subject:

Laravel 4 - Creating a Simple CRUD Application in Hours [Article]
Creating and Using Composer Packages [Article]
Building a To-do List with Ajax [Article]


Using Mongoid

Packt
09 Dec 2013
12 min read
Introducing Origin

Origin is a gem that provides the DSL for Mongoid queries. At first glance, a question may arise as to why we need a DSL for Mongoid queries: if we are finally going to convert the query to a MongoDB-compliant hash, then why do we need a DSL? Origin was extracted from the Mongoid gem and put into a new gem, so that there is a standard for querying. It has no dependency on any other gem and is a standalone, pure query builder. The idea was that this could be a generic DSL that can be used even without Mongoid! So, now we have a very generic and standard querying pattern.

For example, in Mongoid 2.x we had the criteria any_in and any_of, and no direct support for the and, or, and nor operations. In Mongoid 2.x, the only way we could fire a $or or a $and query was like this:

Author.where("$or" => {'language' => 'English', 'address.city' => 'London'})

And now in Mongoid 3, we have a cleaner approach:

Author.or(:language => 'English', 'address.city' => 'London')

Origin also provides good selectors directly in our models. So, this is now much more readable:

Book.gte(published_at: Date.parse('2012/11/11'))

Memory maps, delayed sync, and journals

As we have seen earlier, MongoDB stores data in memory-mapped files of at most 2 GB each. After the data is loaded for the first time into the memory-mapped files, we now get almost memory-like speeds for access instead of disk I/O, which is much slower. These memory-mapped files are preallocated to ensure that there is no delay in file generation while saving data. However, to ensure that the data is not lost, it needs to be persisted to disk. This is achieved by journaling. With journaling, every database operation is written to the oplog collection and that is flushed to disk every 100 ms. Journaling is turned on by default in the MongoDB configuration. This is not the actual data but the operation itself. This helps in better recovery (in case of any crash) and also ensures the consistency of writes. The data that is written to the various collections is flushed to disk every 60 seconds. This ensures that the data is persisted periodically and also ensures that the speed of data access is almost as fast as memory.

MongoDB relies on the operating system for the memory management of its memory-mapped files. This has the advantage of getting inherent OS benefits as the OS is improved. It also has the disadvantage of a lack of control over how memory is managed by MongoDB.

However, what happens if something goes wrong (the server crashes, the database stops, or the disk is corrupted)? To ensure durability, whenever data is saved in files, the action is logged to a file in chronological order. This is the journal entry, which is also a memory-mapped file but is synced with the disk every 100 ms. Using the journal, the database can be easily recovered in case of any crash. So, in the worst-case scenario, we could potentially lose 100 ms of information. This is a fair price to pay for the benefits of using MongoDB.

MongoDB journaling makes it a very robust and durable database. However, it also helps us decide when to use MongoDB and when not to use it. 100 ms is a long time for some services, such as financial core banking or stock price updates, and in such applications MongoDB is not recommended. For most use cases that do not involve the kind of heavy multi-table transactions found in financial applications, MongoDB can be suitable.
All these things are handled seamlessly, and we don't usually need to change anything. We can control this behavior via the configuration of MongoDB, but usually it's not recommended. Let's now see how we save data using Mongoid.

Updating documents and attributes

As with ActiveModel specifications, save will update the changed attributes and return the updated object; otherwise, it will return false on failure. The save! method will raise an exception on error. In both cases, if we pass validate: false as a parameter to save, it will bypass the validations. A lesser-known persistence option is the upsert action. An upsert action creates a new document if it does not find it and overwrites the object if it finds it. A good reason to use upsert is in the find_and_modify action. For example, suppose we want to reserve a book in our Sodibee system, and we want to ensure that at any one point, there can be only one reservation for a book. In a traditional scenario:

t1: Request-1 searches for a book which is not reserved and finds it
t2: Now, it saves the book with the reservation information
t3: Request-2 searches for a reservation for the same book and finds that the book is reserved
t4: Request-2 handles the situation with either an error or waits for the reservation to be freed

So far so good! However, in a concurrent model, especially for web applications, it creates problems:

t1: Request-1 searches for a book which is not reserved and finds it
t2: Request-2 searches for a reservation for the same book and also gets back that book, since it's not yet reserved
t3: Request-1 saves the book with its reservation information
t4: Request-2 now overwrites the previous update and saves the book with its reservation information

Now we have a situation where two requests think that the reservation for the book was successful, and that is against our expectations. This is a typical problem that plagues most web applications. The various ways in which we can solve this are discussed in the subsequent sections.

Write concern

MongoDB helps us ensure write consistency. This means that when we write something to MongoDB, it now guarantees the success of the write operation. Interestingly, this is a configurable option and is set to acknowledged by default. This means that the write is guaranteed because it waits for an acknowledgement before returning success. In earlier versions of Mongoid, safe: true was turned off by default. This meant that the success of the write operation was not guaranteed. The write concern is configured in Mongoid.yml as follows:

development:
  sessions:
    default:
      hosts:
        - localhost:27017
      options:
        write:
          w: 1

The default write concern in Mongoid is configured with w: 1, which means that the success of a write operation is guaranteed. Let's see an example:

class Author
  include Mongoid::Document
  field :name, type: String
  index({name: 1}, {unique: true, background: true})
end

Indexing blocks read and write operations. Hence, it's recommended to configure indexing in the background in a Rails application. We shall now start a Rails console and see how this reacts to a duplicate key index by creating two Author objects with the same name.
irb> Author.create(name: "Gautam")
=> #<Author _id: 5143678345db7ca255000001, name: "Gautam">
irb> Author.create(name: "Gautam")
Moped::Errors::OperationFailure: The operation: #<Moped::Protocol::Command
  @length=83 @request_id=3 @response_to=0 @op_code=2004 @flags=[]
  @full_collection_name="sodibee_development.$cmd"
  @skip=0 @limit=-1 @selector={:getlasterror=>1, :w=>1} @fields=nil>
failed with error 11000: "E11000 duplicate key error index: sodibee_development.authors.$name_1 dup key: { : "Gautam" }"

As we can see, it has raised a duplicate key error and the document is not saved. Now, let's have some fun. Let's change the write concern to unacknowledged:

development:
  sessions:
    default:
      hosts:
        - localhost:27017
      options:
        write:
          w: 0

The write concern is now set to unacknowledged writes. That means we do not wait for the MongoDB write to eventually succeed, but assume that it will. Now let's see what happens with the same command that had failed earlier:

irb> Author.where(name: "Gautam").count
=> 1
irb> Author.create(name: "Gautam")
=> #<Author _id: 5287cba54761755624000000, name: "Gautam">
irb> Author.where(name: "Gautam").count
=> 1

There seems to be a discrepancy here. Though Mongoid create returned successfully, the data was not saved to the database. Since we specified background: true for the name index, the document creation seemed to succeed, as MongoDB had not indexed it yet and we did not wait for acknowledgement of the success of the write operation. So, when MongoDB tries to index the data in the background, it realizes that the index criterion is not met (since the index is unique), and it deletes the document from the database. Now, since that happened in the background, there is no way to figure this out on the console or in our Rails application. This leads to an inconsistent result. So, how can we solve this problem? There are various ways:

Leave the Mongoid default write concern configuration alone. By default, it is w: 1 and it will raise an exception. This is the recommended approach, as prevention is better than cure!
Do not specify the background: true option. This will create indexes in the foreground. However, this approach is not recommended, as it can cause a drop in performance because index creation blocks read and write access.
Add drop_dups: true. This deletes data, so you have to be really careful when using this option.

Other options to the index command create different types of indexes, as shown below:

sparse: index({twitter_name: 1}, {sparse: true}) creates sparse indexes, that is, only the documents containing the indexed fields are indexed. Use this with care, as you can get incomplete results.
2d / 2dsphere: index({:location => "2dsphere"}) creates a two-dimensional spherical index.

Text index

MongoDB 2.4 introduced text indexes that are as close to free-text search indexes as it gets. However, they do only basic text indexing—that is, they support stop words and stemming. They also assign a relevance score to each search. Text indexes are still an experimental feature in MongoDB, and they are not recommended for extensive use. Use ElasticSearch, Solr (Sunspot), or ThinkingSphinx instead. The following code snippet shows how we can specify a text index with weightage:

index({
  "name" => 'text',
  "last_name" => 'text'
}, {
  weights: {
    'name' => 10,
    'last_name' => 5,
  },
  name: 'author_text_index'
})

There is no direct search support in Mongoid (as yet).
So, if you want to invoke a text search, you need to hack around a little:

irb> srch = Mongoid::Contextual::TextSearch.new(Author.collection, Author.all, 'john')
=> #<Mongoid::Contextual::TextSearch
  selector: {}
  class: Author
  search: john
  filter: {}
  project: N/A
  limit: N/A
  language: default>
irb> srch.execute
=> {"queryDebugString"=>"john||||||", "language"=>"english",
 "results"=>[
   {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058345db7c843f00030b'), "name"=>"Bettye Johns"}},
   {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058345db7c843f00046d'), "name"=>"John Pagac"}},
   {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058345db7c843f000578'), "name"=>"Jeanie Johns"}},
   {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058445db7c843f0007e7')...
   {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058a45db7c843f0025f1'), "name"=>"Alford Johns"}}],
 "stats"=>{"nscanned"=>25, "nscannedObjects"=>0, "n"=>25, "nfound"=>25, "timeMicros"=>31103},
 "ok"=>1.0}

By default, text search is disabled in the MongoDB configuration. We need to turn it on by adding setParameter = textSearchEnabled=true in the MongoDB configuration file, typically /usr/local/mongo.conf.

This returns a result with statistical data as well as the documents and their relevance scores. Interestingly, it also specifies the language. There are a few more things we can do with the search result. For example, we can see the statistical information as follows:

irb> srch.stats
=> {"nscanned"=>25, "nscannedObjects"=>0, "n"=>25, "nfound"=>25, "timeMicros"=>31103}

We can also convert the data into our Mongoid model objects by using project, as shown in the following command:

irb> srch.project(:name).to_a
=> [#<Author _id: 51fc058345db7c843f00030b, name: "Bettye Johns", last_name: nil, password: nil>,
    #<Author _id: 51fc058345db7c843f00046d, name: "John Pagac", last_name: nil, password: nil>,
    #<Author _id: 51fc058345db7c843f000578, name: "Jeanie Johns", last_name: nil, password: nil> ...

Some of the important things to remember are as follows:

Text indexes can be very heavy in memory. They can return the documents, so the result can be large.
We can use multiple keys (or filters) along with a text search. For example, an index defined with index({'state': 1, name: 'text'}) will mandate the use of state for every text search that is specified.
A search for "john doe" will result in a search for "john" or "doe" or both.
A search for "john" and "doe" will search for all "john" and "doe" in a random order.
A search for "\"john doe\"", that is, with escaped quotes, will search for documents containing the exact phrase "john doe".

A lot more data can be found at http://docs.mongodb.org/manual/tutorial/search-for-text/

Summary

This article provides an excellent reference for using Mongoid. The article has examples with code samples and explanations that help in understanding the various features of Mongoid.

Resources for Article:

Further resources on this subject:

Installing phpMyAdmin [Article]
Integrating phpList 2 with Drupal [Article]
Documentation with phpDocumentor: Part 1 [Article]

Seam Data Validation

Packt
09 Oct 2009
8 min read
Data validation

In order to perform consistent data validation, we would ideally want to perform all data validation within our data model. We want to perform data validation in our data model so that we can keep all of the validation code in one place, which should then make it easier to keep it up to date if we ever change our minds about allowable data values. Seam makes extensive use of the Hibernate validation tools to perform validation of our domain model. The Hibernate validation tools grew from the Hibernate project (http://www.hibernate.org) to allow the validation of entities before they are persisted to the database. To use the Hibernate validation tools in an application, we need to add hibernate-validator.jar into the application's class path, after which we can use annotations to define the validation that we want to use for our data model. Let's look at a few validations that we can add to our sample Seam Calculator application.

In order to implement data validation with Seam, we need to apply annotations either to the member variables in a class or to the getters of the member variables. It's good practice to always apply these annotations to the same place in a class. Hence, throughout this article, we will always apply our annotations to the getter methods within classes. In our sample application, we are allowing numeric values to be entered via edit boxes on a JSF form. To perform data validation against these inputs, there are a few annotations that can help us:

@Min: The @Min annotation allows a minimum value for a numeric variable to be specified. An error message to be displayed if the variable's value is less than the specified minimum can also be specified. The message parameter is optional. If it is not specified, then a sensible error message will be generated (similar to "must be greater than or equal to ...").

@Min(value=0, message="...")

@Max: The @Max annotation allows a maximum value for a numeric variable to be specified. An error message to be displayed if the variable's value is greater than the specified maximum can also be specified. The message parameter is optional. If it is not specified, then a sensible error message will be generated (similar to "must be less than or equal to ...").

@Max(value=100, message="...")

@Range: The @Range annotation allows a numeric range, that is, both minimum and maximum values, to be specified for a variable. An error message to be displayed if the variable's value is outside the specified range can also be specified. The message parameter is optional. If it is not specified, then a sensible error message will be generated (similar to "must be between ... and ...").

@Range(min=0, max=10, message="...")

At this point, you may be wondering why we need to have an @Range validator when, by combining the @Min and @Max validators, we can get a similar effect. If you want a different error message to be displayed when a variable is set above its maximum value as compared to the error message that is displayed when it is set below its minimum value, then the @Min and @Max annotations should be used. If you are happy with the same error message being displayed when a variable is set outside its minimum or maximum values, then the @Range validator should be used. Effectively, the @Min and @Max validators provide a finer level of error message control than the @Range validator.
The following code sample shows how these annotations can be applied to add basic data validation to our user inputs:

package com.davidsalter.seamcalculator;

import java.io.Serializable;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.faces.FacesMessages;
import org.hibernate.validator.Max;
import org.hibernate.validator.Min;
import org.hibernate.validator.Range;

@Name("calculator")
public class Calculator implements Serializable {

    private double value1;
    private double value2;
    private double answer;

    @Min(value=0)
    @Max(value=100)
    public double getValue1() {
        return value1;
    }

    public void setValue1(double value1) {
        this.value1 = value1;
    }

    @Range(min=0, max=100)
    public double getValue2() {
        return value2;
    }

    public void setValue2(double value2) {
        this.value2 = value2;
    }

    public double getAnswer() {
        return answer;
    }
    ...
}

Displaying errors to the user

In the previous section, we saw how to add data validation to our source code to stop invalid data from being entered into our domain model. Now that we have this level of data validation, we need to provide feedback to inform the user of any invalid data that they have entered.

JSF applications have the concept of messages that can be associated with different components. For example, if we have a form asking for a date of birth, we could display a message next to the entry edit box if an invalid date were entered. JSF maintains a collection of these error messages, and the simplest way of providing feedback to the user is to display a list of all the error messages generated by the previous operation.

In order to obtain error messages within the JSF page, we need to tell JSF which components we want validated against the domain model. This is achieved by using the <s:validate/> or <s:validateAll/> tags. These are Seam-specific tags and are not a part of the standard JSF runtime. In order to use these tags, we need to add the following taglib reference to the top of the JSF page:

<%@ taglib uri="http://jboss.com/products/seam/taglib" prefix="s" %>

In order to use this tag library, we also need to add a few additional JAR files to the WEB-INF/lib directory of our web application, namely:

- jboss-el.jar
- jboss-seam-ui.jar
- jsf-api.jar
- jsf-impl.jar

This tag library allows us to validate all of the components within a block of JSF code (<s:validateAll/>), or individual components (<s:validate/>) within a JSF page. To validate all components within a particular scope, wrap them all with the <s:validateAll/> tag as shown here:

<h:form>
    <s:validateAll>
        <h:inputText value="..." />
        <h:inputText value="..." />
    </s:validateAll>
</h:form>

To validate individual components, embed the <s:validate/> tag within the component, as shown in the following code fragment:

<h:form>
    <h:inputText value="..." >
        <s:validate/>
    </h:inputText>
    <h:inputText value="..." >
        <s:validate/>
    </h:inputText>
</h:form>

After specifying which controls we wish to validate, we can display error messages to the user. JSF maintains a collection of errors on a page, which can be displayed in its entirety via the <h:messages/> tag. It can sometimes be useful to show a list of all the errors on a page, but it isn't very useful to the user, as it is impossible to tell which error relates to which control on the form.
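For completeness, a page-level list of every queued message can be rendered with a single tag. The following is a minimal sketch; the errorList style class name is our own illustrative choice, not from the original text:

<h:form>
    <h:messages styleClass="errorList" />
    <h:inputText value="#{calculator.value1}">
        <s:validate/>
    </h:inputText>
</h:form>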
Seam provides some additional support at this point, allowing us to specify the formatting of a control to indicate error or warning messages to users. Seam provides three different JSF facets (<f:facet/>) that allow HTML to be placed before and after the offending input, along with a CSS style for the HTML. Within these facets, the <s:message/> tag can be used to output the message itself. This tag can be applied either before or after the input box, as required.

- beforeInvalidField: This facet allows HTML to be displayed before the input that is in error. The HTML can contain either text or images to notify the user that an error has occurred.
  <f:facet name="beforeInvalidField"> ... </f:facet>

- afterInvalidField: This facet allows HTML to be displayed after the input that is in error. The HTML can contain either text or images to notify the user that an error has occurred.
  <f:facet name="afterInvalidField"> ... </f:facet>

- aroundInvalidField: This facet allows the CSS style of the text surrounding the input that is in error to be specified.
  <f:facet name="aroundInvalidField"> ... </f:facet>

In order to apply these facets to a particular field, the field must be wrapped in an <s:decorate/> tag:

<s:decorate>
    <f:facet name="aroundInvalidField">
        <s:span styleClass="invalidInput"/>
    </f:facet>
    <f:facet name="beforeInvalidField">
        <f:verbatim>**</f:verbatim>
    </f:facet>
    <f:facet name="afterInvalidField">
        <s:message/>
    </f:facet>
    <h:inputText value="#{calculator.value1}" required="true" >
        <s:validate/>
    </h:inputText>
</s:decorate>

In the preceding code snippet, a CSS style called invalidInput is applied to any error or warning text displayed for the <h:inputText/> field. An erroneous input field is adorned with a double asterisk (**) preceding the edit box, and the error message specific to that field is displayed after it.
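The invalidInput style class itself is not defined in the article. As a minimal sketch, a stylesheet entry along the following lines would highlight the offending text; the red colouring is our assumption:

.invalidInput {
    color: red;
    font-weight: bold;
}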


Gamified Websites: The Framework

Packt
07 Oct 2013
15 min read
Business objectives

Before we can go too far down the road on any journey, we first have to be clear about where we are trying to go. This is where business objectives come into the picture. Although games are about fun, and gamification is about generating positive emotion without losing sight of the business objectives, gamification is a serious business.

Organizations spend millions of dollars every year on information technology. Consistent and steady investment in information technology is expected to bring a return on that investment in the way of improved business process flow; it's meant to help the organization run smoother and easier. Gamification is all about improving business processes. Organizations try to improve the process itself wherever possible, whereas technology only facilitates the process. Therefore, gamification efforts will be scrutinized under a similar microscope, and against similar success metrics, as any other information technology effort. The fact that customers, employees, or stakeholders are having more fun with the organization's offering is not enough; it will have to meet a business objective.

The place to start with defining business objectives is the business process that the organization is looking to improve. In our case, the process we are planning to improve is e-learning: specifically, the process of K-12 aged persons learning "thinking".

Image source: http://www.moddb.com/groups/critical-thinkers-of-moddb/images/critical-thinking-skills-explained

In a full-blown e-learning situation, we would be looking to gamify as much of this process as possible. For our purposes, we will focus on the areas of negotiation and cooperation. According to the Negotiate and Cooperate phase of the Critical Thinking Process, learners consider different perspectives and engage in discussions with others. This gives us a clear picture of what some of our objectives might be. Among others, they might be:

- Increasing engagement in discussion with others
- Increasing the level of consideration of different perspectives

Note that these objectives are measurable. We will be able to test whether the increases we are looking for are actually happening over time. With a set of measurable objectives, we can turn our attention to the next step in our Gamification Design Framework: target behaviors.

Target behaviors

Now that we are clear about what we are trying to accomplish with our system, we will focus on the actions we are hoping to incentivize: our target behaviors. One of the big questions around gamification efforts is whether they can really cause behavioral change. Will employees, customers, and stakeholders simply go back to doing things the way they are used to once the game is over? Will they figure out a way to "cheat" the system? The only way to meet long-term organizational objectives in a systematic way is for the application to cause not only change for the moment, but lasting change over time.

Many gamification applications fail at long-term behavior change, and here's why. Psychologists have studied the behavior change life cycle at length, and their studies reveal that people go through five distinct phases when changing a behavior, with each phase presenting a different set of challenges.
The five phases of the behavior change life cycle are as follows:

- Awareness: Before a person will take any action to change a behavior, he/she must first be aware of their current behavior and how it might need to change.
- Buy-in: After a person becomes aware that they need to change, they must agree that they actually need to change and make the necessary commitment to do so.
- Learn: What exactly does a person need to do to change? It cannot be assumed that he/she knows how to change; they must learn the new behavior.
- Adopt: Now that he/she has learned the necessary skills, they have to actually implement them. They need to take the new action.
- Maintain: Finally, after adopting a new behavior, it can only become a lasting change with constant practice.

Image source: http://www.accenture.com/us-en/blogs/technology-labs-blog/archive/2012/03/28/gamification-and-the-behavior-change-lifecycle.aspx

How can we use this understanding to establish our target behaviors? Keep in mind that our objectives are to increase interaction through discussion and to increase consideration for other perspectives. Given our understanding of behavior change around these objectives, we need our users to:

- Become aware of their discussion frequency with other users
- Become aware that other perspectives exist
- Commit to more discussions with other users
- Commit to considering other users' perspectives
- Learn how to have more discussions with other users
- Learn about other users' perspectives
- Have more discussions with other users
- Actually consider other users' perspectives
- Continue to have more discussions with other users on a consistent basis
- Continue to consider other users' perspectives over time

This outlines the list of activities that need to happen for our system to meet its objectives. Some of our target behaviors will be clear; in other cases, it will require some creativity on our part to get users to take these actions. So what are some possible actions that we can have our users take to move them along the behavior change life cycle?

- Check their discussion thread count
- Review the Differing Points of View section
- Set a target discussion amount for a particular time period
- Set a target number of Differing Points of View to review
- Watch a video (or some instructional material) on how to use the discussion area
- Watch a video (or some instructional material) on the value of viewing other perspectives
- Participate in the discussion groups
- Read through other users' discussion posts
- Participate in the discussion groups over time
- Read through other users' perspectives over time

Some of these target behaviors are relatively straightforward to implement; others will require more thought. More importantly, we have now identified the target behaviors we want our users to take. This will guide the rest of our development efforts.

Players

Although the last few sections have been about the serious side of things, such as objectives and target behaviors, we still have gamification as the focal point. Hence, from this point on, we will refer to our users as players. We must keep in mind that although we have defined the actions that we want our players to take, the strategies to motivate them to take those actions vary from player to player. Gamification is definitely not a one-size-fits-all process. We will have to look at each of our target behaviors from the perspective of our players and take their motivations into consideration; otherwise, our choice of mechanics is little more than trial and error.
We will need an approach that's a little more structured. According to Bartle's theory of player motivations, the players of any game system fall into one of four categories:

- Killers: These people are motivated to participate in a gaming scenario with the primary purpose of winning the game by "acting on" other players. This might include killing, beating, or directly competing with other players in the game.
- Achievers: These players, on the other hand, are motivated by taking clear actions against the system itself to win. They are less motivated by beating an opponent than by achieving things.
- Socializers: These players have very different motivations for participating in a game. They are motivated more by interacting and engaging with other players.
- Explorers: Like socializers, explorers enjoy interaction and engagement, but less with other players than with the system itself.

The following diagram outlines each player motivation type and the game mechanics that might best keep them engaged.

Image source: http://frankcaron.com/Flogger/?p=1732

As we define our activity loops, we need to make sure that we include each of the four types of players and their motivations.

Activity loops

Gamified systems, like other systems, are simply a series of actions: the player acts on the system, and the system responds. We refer to how the user interacts with the system as activity loops. We will talk about two types of activity loops, engagement loops and progression loops, to describe our player interactions.

Engagement loops describe how a player engages the system. They outline what a player does and how the system responds. Activity will differ between players depending on their motivations, so we must also take into consideration why the player is taking the action he is taking.

A progression loop describes how the player engages the system as a whole. It outlines how he/she might progress through the game itself. Whereas engagement loops describe what the player does at a detailed level, progression loops outline the movement of the player through the system. For example, when a person drives a car, he/she is interacting with the car almost constantly. This interaction is a set of engagement loops. All the while, the car is going somewhere; where the car is going describes its progression loops.

Activity loops tend to follow the Motivation, Action, Feedback pattern. The players are sufficiently motivated to take an action, and when they do, they get feedback from the system. The feedback hopefully motivates the players enough to take another action, and so on. In a perfect world, this cycle would continue indefinitely and the players would never stop playing our gamified system. Our goal is to get as close to this continuous activity loop as we possibly can.

Progression loops

We have spent the last few pages looking at the detailed interactions that a player will have with the system in our engagement loops. Now it's time to turn our attention to the other type of activity loop, the progression loop. Progression loops look at the system at a macro level. They describe the player's journey through the system. We usually think about levels, badges, and/or modes when we think about progression loops. We answer questions such as: where have you been, where are you now, and where are you going? This can all be summed up as codifying the player's mastery level.
In our application, we will look at the journey from the vantage points of a novice, an expert, and a master. Upon joining the game, players begin at the novice level. At this level we focus on:

- Welcome
- On-boarding and getting the user acclimated to using the system
- Achievable goals

In the Welcome stage, we simply introduce the user to the game and encourage him/her to try it out. During on-boarding, we need to make the process as easy as possible and give positive feedback as soon as possible. Once the user is on board, we outline the easiest way to get involved and begin the journey.

At the expert level, the player is engaging regularly in the game, but other players would not yet consider this player a leader. Our goal at this level is to present more difficult challenges. When the player reaches a challenge that appears too difficult, we can include surprise alternatives along the way to keep him/her motivated until they can break through the expert barrier to the master level.

Masters are recognized by the game and by other players. They should be prominently displayed within the game and will tend to want to help others at the novice and expert levels. These options should become available at later stages in the game.

Fun

After we have done the work of identifying our objectives, defining target behaviors, scoping our players, and laying out the activities of our system, we can finally think about the area of the system where many novice game designers start: the fun. Other gamification practitioners will avoid, or at least disguise, the fun aspect of the gamification design process. It is important that we neither over- nor under-emphasize the fun in the process. For example, chefs prepare an entire meal with spices, but they don't throw all the spices in together; they use them in balanced amounts to bring flavor to their dishes. Think of fun as an array of spices that we can apply to our activity loops.

Marc LeBlanc has categorized fun into eight distinct categories. We will attempt to sprinkle just enough of each, where appropriate, to accomplish the desired amount of fun. Keep in mind that what one player experiences as fun will not be the same for another; one size definitely does not fit all in this case.

- Sensation: A pleasurable experience
- Narrative: An unfolding story
- Challenge: An obstacle course
- Fantasy: Make believe
- Fellowship: A social framework
- Discovery: Exploring uncharted territory
- Expression: The player is given a platform
- Submission: Mindless activity

So how can we sparingly introduce the above dimensions of fun into our system?

- Check their discussion thread count: Challenge
- Review the Differing Points of View section: Discovery
- Set a target discussion amount for a particular time period: Challenge
- Set a target number of Differing Points of View to review: Challenge
- Watch a video (or some instructional material) on how to use the discussion area: Challenge
- Watch a video (or some instructional material) on the value of viewing other perspectives: Challenge
- Participate in the discussion groups: Fellowship, Expression
- Read through other users' discussion posts: Discovery
- Participate in the discussion groups over time: Fellowship, Expression
- Read through other users' perspectives over time: Discovery

Tools

We are finally at the stage where we can begin implementation. At this point, we can look at the various game elements (tools) with which to implement our gamified system.
If we have followed the framework up to this point, the mechanics and elements should become apparent. We are not simply adding leaderboards or a point system for the sake of it; we can tie every tool we use back to our previous work. This will result in a Gamification Design Matrix for our application. But before we go there, let's stop and take a look at some of the tools at our disposal.

There are a myriad of tools, mechanics, and strategies available, and new ones are being designed every day. Here are a few of the most common mechanics that we will encounter when designing our gamified system:

- Achievements: Specific objectives that a player meets.
- Avatars: Visual representations of a player's role, persona, or character in a game.
- Badges: Visual elements used to recognize a particular accomplishment. They give players a sense of pride that they can show off to others.
- Boss fight: An exceptionally difficult challenge, usually at the end of a level, that demonstrates enough skill to move up to the next level.
- Leaderboards: Public rankings of players. They recognize an accomplishment like a badge, but are visible for all to see. We see this almost every day, from sports team rankings to sales reps' monthly results.
- Points: Rather straightforward; players accumulate points by taking various actions in the system.
- Quests/Missions: Specialized challenges in a game scenario, characterized by a narrative and an objective.
- Rewards: Anything used to extrinsically motivate the user to take a particular action.
- Teams: Groups of players playing as a single unit.
- Virtual assets: Elements in the game that have some value and can be acquired, or used to acquire other assets, whether tangible or virtual.

Now it's time to take off our gamification design hat and put on our developer hat. Let's start by developing some initial mockups of what our final site might look like using the design we have outlined previously. Many people develop mockups using graphics tools such as Photoshop or Gimp. At this stage, we will be less detailed and simply use pencil sketches or a mockup tool such as Balsamiq.

Login screen

This is a mockup of the basic login screen in our application. Players are accustomed to the basic login and password scenario we provide here.

Account creation screen

First-time players will have to create an account. This is the mockup of our signup page.

Main Player Screen

This captures the main elements of our system when a player is fully engaged with it.

Main Player Post Response Screen

We have outlined the key functionality of our gamified system via mockups. Mockups are a means of visually communicating to our team what we are building and why we are building it. Visual mockups also give us an opportunity to uncover issues in our design early in the process.

Summary

Most gamified applications fail due to a poorly designed system. Hence, we have introduced a Gamification Design Framework to guide our development process.
We know that our chances of developing a successful system increase tremendously if we:

- Define clear business objectives
- Establish target behaviors
- Understand our players
- Work through the activity loops
- Remember the fun
- Optimize the tools

article-image-seam-and-ajax
Packt
22 Oct 2009
11 min read
Save for later

Seam and AJAX

Packt
22 Oct 2009
11 min read
What is AJAX?

AJAX (Asynchronous JavaScript and XML) is a technique, rather than a new technology, for developing highly interactive web applications. Traditionally, hand-written JavaScript uses the browser's XMLHttpRequest API to make asynchronous calls to a server-side component, for example a servlet. The server-side component generates a resulting XML package and returns it to the client browser, which can then update the page without having to re-render it entirely. The result of using AJAX technologies (many different technologies can be used to develop AJAX functionality, for example PHP, Microsoft .NET, servlets, and Seam) is to give web applications an appearance similar to that of desktop applications.

AJAX and the Seam Framework

The Seam Framework provides built-in support for AJAX via its direct integration with libraries such as RichFaces and AJAX4JSF. Discussing the AJAX support of RichFaces and AJAX4JSF could fill an entire book, if not two, so we'll only touch on these technologies towards the end of this article, where we'll give an overview of how they can be used in a Seam application. However, Seam provides a separate technology called Seam Remoting that we'll discuss in detail here. Seam Remoting allows methods on Seam components to be executed directly from JavaScript code running within a browser, letting us easily build AJAX-style applications. Seam Remoting uses annotations and is conversation-aware, so we still get all of the benefits of writing conversationally-aware components, except that we can now access them via JavaScript as well as through other view technologies, such as Facelets. Seam Remoting provides a ready-to-use framework that makes AJAX applications easier to develop; for example, it provides debugging and logging facilities similar to the ones we use every day when writing Java components.

Configuring Seam applications for Seam Remoting

To use Seam Remoting, we need to configure the Seam web application to support JavaScript code making asynchronous calls to the server back end. In a traditional servlet-based system, this would require writing complex servlets that could read, parse, and return XML as part of an HTTP GET or POST request. With Seam Remoting, we don't need to worry about managing XML data and its transport mechanism. We don't even need to write servlets to handle the communication; all of this is part of the framework. To configure a web application to use Seam Remoting, all we need to do is declare the Seam Resource servlet within our application's WEB-INF/web.xml file, as follows:

<servlet>
    <servlet-name>Seam Resource Servlet</servlet-name>
    <servlet-class>
        org.jboss.seam.servlet.SeamResourceServlet
    </servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>Seam Resource Servlet</servlet-name>
    <url-pattern>/seam/resource/*</url-pattern>
</servlet-mapping>

That's all we need to do to make a Seam web application work with Seam Remoting. To make things even easier, this configuration is done automatically for applications created with SeamGen, so you only have to worry about it in non-SeamGen projects.

Configuring Seam Remoting server side

To declare that a Seam component can be used via Seam Remoting, the methods that are to be exposed need to be annotated with the @WebRemote annotation.
For simple POJO components, this annotation is applied directly to the POJO itself, as shown in the following code snippet:

@Name("helloWorld")
public class HelloWorld implements HelloWorldAction {

    @WebRemote
    public String sayHello() {
        return "Hello world !!";
    }
}

For Session Beans, the annotation must be applied to the Session Bean's business interface rather than to the implementation class itself. A Session Bean interface would be declared as follows:

import javax.ejb.Local;
import org.jboss.seam.annotations.remoting.WebRemote;

@Local
public interface HelloWorldAction {

    @WebRemote
    public String sayHello();

    @WebRemote
    public String sayHelloWithArgs(String name);
}

The implementation class is defined as follows:

import javax.ejb.Stateless;
import org.jboss.seam.annotations.Name;

@Stateless
@Name("helloWorld")
public class HelloWorld implements HelloWorldAction {

    public String sayHello() {
        return "Hello world !!";
    }

    public String sayHelloWithArgs(String name) {
        return "Hello " + name;
    }
}

Note that, to make a method available to Seam Remoting, all we need to do is annotate it with @WebRemote and import the relevant class. As we can see in the preceding code, it doesn't matter how many parameters our methods take.

Configuring Seam Remoting client side

In the previous sections, we saw that minimal configuration is required to enable Seam Remoting and to declare Seam components as Remoting-aware. Similarly, in this section we'll see that minimal work is required within a Facelets file to enable Remoting. The Seam Framework provides built-in JavaScript to enable Seam Remoting. To use this JavaScript, we first need to reference it within a Facelets file in the following way:

<script type="text/javascript" src="/HelloWorld/seam/resource/remoting/resource/remote.js"></script>
<script type="text/javascript" src="/HelloWorld/seam/resource/remoting/interface.js?helloWorld"></script>

To include the relevant JavaScript in a Facelets page, we need to import the /seam/resource/remoting/resource/remote.js and /seam/resource/remoting/interface.js JavaScript files. These files are served via the Seam Resource servlet that we defined earlier in this article. You can see that interface.js takes an argument defining the name of the Seam component that we will be accessing (this is the name of the component for which we have defined methods with the @WebRemote annotation). If we wish to use two or more different Seam components from a Remoting interface, we specify their names as parameters to interface.js, separated by an "&", for example:

<script type="text/javascript" src="/HelloWorld/seam/resource/remoting/interface.js?helloWorld&mySecondComponent&myThirdComponent"></script>

Specifying the Seam components to use from the web tier this way is straightforward, but the Seam tag library makes it even easier. Instead of writing the JavaScript includes shown in the preceding examples, we can simply insert the <s:remote /> tag into the Facelets page, passing the name of the Seam component within the include parameter:

<ui:composition template="layout/template.xhtml">
    <ui:define name="body">
        <h1>Hello World</h1>
        <s:remote include="helloWorld"/>

To use the <s:remote /> tag, we need to import the Seam tag library, as shown in this example. When the web page is rendered, Seam will automatically generate the relevant JavaScript.
If we are using the <s:remote /> tag and we want to invoke methods on multiple Seam components, we place the component names as comma-separated values within the include parameter of the tag, for example:

<s:remote include="helloWorld, mySecondComponent, myThirdComponent" />

Invoking Seam components via Remoting

Now that we have configured our web application, defined the services to be exposed from the server, and imported the JavaScript that performs the AJAX calls, we can execute our remote methods. To get an instance of a Seam component within JavaScript, we use the Seam.Component.getInstance() method. This method takes one parameter specifying the name of the Seam component that we wish to interact with:

Seam.Component.getInstance("helloWorld")

This method returns a Seam Remoting JavaScript reference through which our exposed @WebRemote methods can be invoked. When invoking a method via JavaScript, we must specify any arguments to the method (possibly there will be none) and a callback function. The callback function is invoked asynchronously when the server component's method has finished executing. Within the callback function, we can perform any JavaScript processing (such as DOM manipulation) to provide the required AJAX-style functionality. For example, to execute a simple Hello World client, passing no parameters to the server, we could define the following code within a Facelets file:

<ui:define name="body">
    <h1>Hello World</h1>
    <s:remote include="helloWorld"/>
    <p>
        <button onclick="javascript:sayHello()">Say Hello</button>
    </p>
    <p>
        <div id="helloResult"></div>
    </p>
    <script type="text/javascript">
        function sayHello() {
            var callback = function(result) {
                document.getElementById("helloResult").innerHTML = result;
            };
            Seam.Component.getInstance("helloWorld").sayHello(callback);
        }
    </script>
</ui:define>

Let's take a look at this code, one piece at a time, to see exactly what is happening.

<s:remote include="helloWorld"/>
<p>
    <button onclick="javascript:sayHello()">Say Hello</button>
</p>

In this part of the code, we have specified that we want to invoke methods on the helloWorld Seam component by using the <s:remote /> tag. We've then declared a button and specified that the sayHello() JavaScript function will be invoked when the button is clicked.

<div id="helloResult"></div>

Next, we've defined an empty <div /> called helloResult. This <div /> will be populated, via the JavaScript DOM API, with the results of our server-side method invocation.

<script type="text/javascript">
    function sayHello() {
        var callback = function(result) {
            document.getElementById("helloResult").innerHTML = result;
        };
        Seam.Component.getInstance("helloWorld").sayHello(callback);
    }
</script>

Next, we've defined our JavaScript function sayHello(), which is invoked when the button is clicked. This function declares a callback function that takes one parameter and uses the JavaScript DOM API to set the contents of the helloResult <div /> defined earlier. So far, everything we've done has been plain JavaScript without any Seam APIs. Finally, we invoke the Seam component using the Seam.Component.getInstance().sayHello() method, passing the callback function as the final parameter.

When we open the page, the following flow of events occurs:

1. The page is displayed, with the appropriate JavaScript created via the <s:remote /> tag.
2. The user clicks on the button.
3. The Seam JavaScript is invoked, which causes the sayHello() method on the helloWorld component to be invoked.
4. The server-side component completes execution, causing the JavaScript callback function to be invoked.
5. The JavaScript DOM API uses the result of the server method to change the contents of the <div /> in the browser, without causing the entire page to be refreshed.

This process shows how we've developed AJAX functionality by writing a minimal amount of JavaScript and, more importantly, without dealing with XML or the XMLHttpRequest class directly. The preceding code shows how we can easily invoke server-side methods without passing any parameters. It can easily be expanded to pass parameters, as shown in the following code snippet:

<s:remote include="helloWorld"/>
<p>
    <button onclick="javascript:sayHelloWithArgs()">
        Say Hello with Args
    </button>
</p>
<p>
    <div id="helloResult"></div>
</p>
<script type="text/javascript">
    function sayHelloWithArgs() {
        var name = "David";
        var callback = function(result) {
            document.getElementById("helloResult").innerHTML = result;
        };
        Seam.Component.getInstance("helloWorld").sayHelloWithArgs(name, callback);
    }
</script>

The preceding code shows that the process for invoking remote methods with parameters is similar to that for invoking remote methods with no parameters. The important point to note is that the callback function is specified as the last parameter. Clicking on either of the buttons on the page causes our AJAX code to be executed and the text of the <div /> component to be changed.

If we want a server-side method invoked via Seam Remoting to execute as part of a Seam conversation, we can use the Seam.Remoting.getContext().setConversationId() method to set the conversation ID. This ID will then be used by the Seam Framework to ensure that the AJAX request is part of the appropriate conversation:

Seam.Remoting.getContext().setConversationId(#{conversation.id});
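Putting the pieces together, a remote call that joins an existing long-running conversation might look like the following minimal sketch; the function name and the reuse of the helloResult <div /> are our own illustrative choices:

<s:remote include="helloWorld"/>
<script type="text/javascript">
    function sayHelloInConversation() {
        // Join the conversation whose ID was rendered into the page.
        Seam.Remoting.getContext().setConversationId(#{conversation.id});
        var callback = function(result) {
            document.getElementById("helloResult").innerHTML = result;
        };
        Seam.Component.getInstance("helloWorld").sayHello(callback);
    }
</script>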

Building a Facebook Application: Part 2

Packt
16 Oct 2009
11 min read
Mock AJAX and your Facebook profile

I'm sure that you've heard of AJAX (Asynchronous JavaScript and XML), with which you can build interactive web pages. Well, Facebook has Mock AJAX, and with this you can create interactive elements within a profile page. Mock AJAX has three attributes that you need to be aware of:

- clickrewriteform: The form to be used to process any data.
- clickrewriteid: The id of a component to be used to display our data.
- clickrewriteurl: The URL of the application that will process the data.

When using Mock AJAX, our application must do two things:

- Return the output of any processed data (we can do that by using either echo or print).
- Define a form with which we'll enter any data, and a div to receive the processed data.

Using a form on your profile

Since we want to make our application more interactive, one simple way is to add a form. So, for our first example we can add a function (or in this case a set of functions) to appinclude.php that will create a form containing a simple combo-box:

function country_combo () {
    /* You use this function to display a combo-box containing a list of
       countries. It's in its own function so that we can use it in other
       forms without having to add any extra code */
    $country_combo = <<<EndOfText
<select name=sel_country>
<option>England</option>
<option>India</option>
</select>
EndOfText;
    return $country_combo;
}

function country_form () {
    /* Like country_combo, we can use this form wherever needed because
       we've encapsulated it in its own function */
    global $appcallbackurl;
    $country_form = "<form>";
    $country_form .= country_combo ();
    $country_form .= <<<EndOfText
<input type="submit" clickrewriteurl="$appcallbackurl" clickrewriteid="info_display" value="View Country"/>
<div id="info_display" style="border-style: solid; border-color: black; border-width: 1px; padding: 5px;">No country selected</div>
</form>
EndOfText;
    return $country_form;
}

function display_simple_form () {
    /* This function displays the country form with a nice subtitle
       (on the Profile page) */
    global $facebook, $_REQUEST, $user;

    # Return any processed data
    if (isset($_REQUEST['sel_country'])) {
        echo $_REQUEST['sel_country'] . " selected";
        exit;
    }

    # Define the form and the div
    $fbml_text = <<<EndOfText
<fb:subtitle>
<fb:name useyou=false uid=$user firstnameonly=true possessive=true></fb:name> Suspect List
</fb:subtitle>
EndOfText;
    $fbml_text .= country_form ();
    $facebook->api_client->profile_setFBML($fbml_text, $user);
    echo $fbml_text;
}

And, of course, you'll need to edit index.php:

display_simple_form ();

You'll notice from the code that we need to create a div with the id info_display, and that this is what we use for the clickrewriteid of the submit button. You'll also notice that we're using $appcallbackurl for the clickrewriteurl ($appcallbackurl is defined in appinclude.php). Now, it's just a matter of viewing the new FBML (by clicking on the application URL in the left-navigation panel). If you select a country and then click on View Country, the result is written into the info_display div.

I'm sure that you can see where we're going with this. The next stage is to incorporate this form into our Suspect Tracker application. And the great thing now is that, because of the functions we've already added to appinclude.php, this is a very easy job:

function first_suspect_tracker () {
    global $facebook, $_REQUEST, $user;
    if (isset($_REQUEST['sel_country'])) {
        $friend_details = get_friends_details_by_country ($_REQUEST['sel_country']);
        foreach ($friend_details as $friend) {
            $div_text .= "<fb:name uid=" . $friend['uid'] .
                " firstnameonly=false></fb:name>, ";
        }
        echo $div_text;
        exit;
    }
    $fbml_text .= country_form ();
    $facebook->api_client->profile_setFBML($fbml_text, $user);
    $facebook->redirect($facebook->get_facebook_url() . '/profile.php');
}

You may also want to change the country_form function so that the submit button reads View Suspects. And, of course, we'll also need to update index.php, just to call our new function:

<?php
require_once 'appinclude.php';
first_suspect_tracker ();
?>

This time, we'll see the list of friends in the selected country. OK, I know what you're thinking: this is fine if all of your friends are in England and India, but what if they're not? You don't want to enter the list of countries manually, do you? And what happens if someone from a country not in the list becomes your friend? Obviously, the answer to all of these questions is to create the combo-box dynamically.

Creating a dynamic combo-box

I'm sure that, from what we've done so far, you can work out how to extract a list of countries from Facebook:

function country_list_sql () {
    /* We're going to be using this piece of SQL quite often, so it
       deserves its own function */
    global $user;
    $country_list_sql = <<<EndSQL
SELECT hometown_location.country
FROM user
WHERE uid IN (
    SELECT uid1
    FROM friend
    WHERE uid2=$user
)
EndSQL;
    return $country_list_sql;
}

function full_country_list () {
    /* With the SQL in a separate function, this one is very short and
       simple */
    global $facebook;
    $sql = country_list_sql ();
    $full_country_list = $facebook->api_client->fql_query($sql);
    print_r ($full_country_list);
}

However, from the output, you can see that there's a problem with the data: if you look through the contents of the array, you'll notice that some of the countries are listed more than once. You can see this even more clearly if we simulate building the combo-box:

function options_country_list () {
    global $facebook;
    $sql = country_list_sql ();
    $country_list = $facebook->api_client->fql_query($sql);
    foreach ($country_list as $country) {
        echo "option:" . $country['hometown_location']['country'] . "<br>";
    }
}

The duplicated options are obviously not what we want in the combo-box. Fortunately, we can solve the problem by making use of the array_unique function, and we can also order the list by using the sort function:

function filtered_country_list () {
    global $facebook;
    $sql = country_list_sql ();
    $country_list = $facebook->api_client->fql_query($sql);
    $combo_full = array();
    foreach ($country_list as $country) {
        array_push($combo_full, $country['hometown_location']['country']);
    }
    $combo_list = array_unique($combo_full);
    sort($combo_list);
    foreach ($combo_list as $combo) {
        echo "option:" . $combo . "<br>";
    }
}

And now we can produce a usable combo-box. Once we've added our code to include the dynamic combo-box, we've got the workings of a complete application, and all we have to do is update the country_combo function:

function country_combo () {
    /* The function now produces a combo-box derived from the friends'
       countries */
    global $facebook;
    $country_combo = "<select name=sel_country>";
    $sql = country_list_sql ();
    $country_list = $facebook->api_client->fql_query($sql);
    $combo_full = array();
    foreach ($country_list as $country) {
        array_push($combo_full, $country['hometown_location']['country']);
    }
    $combo_list = array_unique($combo_full);
    sort($combo_list);
    foreach ($combo_list as $combo) {
        $country_combo .= "<option>" .
$combo ."</option>";}$country_combo .= "</select>";return $country_combo;} Of course, you'll need to reload the application via the left-hand navigation panel for the result: Limiting access to the form You may have spotted a little fly in the ointment at this point. Anyone who can view your profile will also be able to access your form and you may not want that (if they want a form of their own they should install the application!). However, FBML has a number of if (then) else statements, and one of them is <fb:if-is-own-profile>: <?phprequire_once 'appinclude.php';$fbml_text = <<<EndOfText<fb:if-is-own-profile>Hi <fb:name useyou=false uid=$user firstnameonly=true></fb:name>, welcome to your Facebook Profile page.<fb:else>Sorry, but this is not your Facebook Profile page - it belongs to <fb:name useyou=false uid=$user firstnameonly=false> </fb:name>,</fb:else></fb:if-is-own-profile>EndOfText;$facebook->api_client->profile_setFBML($fbml_text, $user);echo "Profile updated";?> So, in this example, if you were logged on to Facebook, you'd see the following on your profile page: But anyone else viewing your profile page would see: And remember that the FBML is cached when you run: $facebook->api_client->profile_setFBML($fbml_text, $user); Also, don't forget, it is not dynamic that is it's not run every time that you view your profile page. You couldn't, for example, produce the following for a user called Fred Bloggs: Sorry Fred, but this is not Your Facebook Profile page - it belongs to Mark Bain That said, you are now able to alter what's seen on the screen, according to who is logged on. Storing data—keeping files on your server From what we've looked at so far, you already know that you not only have, but need, files stored on your server (the API libraries and your application files). However, there are other instances when it is useful to store files there. Storing FBML on your server In all of the examples that we've worked on so far, you've seen how to use FBML mixed into your code. However, you may be wondering if it's possible to separate the two. After all, much of the FBML is static—the only reason that we include it in the code is so that we can produce an output. As well as there may be times when you want to change the FBML, but you don't want to have to change your code every time you do that (working on the principle that the more times you edit the code the more opportunity there is to mess it up). And, of course, there is a simple solution. Let's look at a typical form: <form><div id="info_display" style="border-style: solid; border-color: black; border-width: 1px; padding: 5px;"></div><input name=input_text><input type="submit" clickrewriteurl="http://213.123.183.16/f8/penguin_pi/" clickrewriteid="info_display" value="Write Result"></form> Rather than enclosing this in $fbml_text = <<<EndOfText ... EndOfText; as we have done before, you can save the FBML into a file on your server, in a subdirectory of your application. For example /www/htdocs/f8/penguin_pi/fbml/form_input_text.fbml. "Aha" I hear your say, "won't this invalidate the caching of FBML, and cause Facebook to access my server more often than it needs?" Well, no, it won't. It's just that we need to tell Facebook to update the cache from our FBML file. 
So, first we need to inform FBML that some external text needs to be included, by making use of the <fb:ref> tag, and then we need to tell Facebook to update the cache by using the fbml_refreshRefUrl method:

function form_from_server () {
    global $facebook, $_REQUEST, $appcallbackurl, $user;
    $fbml_file = $appcallbackurl . "fbml/form_input_text.fbml";
    if (isset($_REQUEST['input_text'])) {
        echo $_REQUEST['input_text'];
        exit;
    }
    $fbml_text .= "<fb:ref url='" . $fbml_file . "' />";
    $facebook->api_client->profile_setFBML($fbml_text, $user);
    $facebook->api_client->fbml_refreshRefUrl($fbml_file);
    echo $fbml_text;
}

As far as your users are concerned, there is no difference; they'll just see another form on their profile page. Even if your users don't appreciate this leap forward, it will make a big difference to your coding: you're now able to isolate any static FBML from your PHP (if you want). And now we can turn our attention to one of the key advantages of having your own server: your data.

Storing data on your server

So far, we've concentrated on how to extract data from Facebook and display it on the profile page. You've seen, for example, how to list all of your friends from a given country. However, that's not how Pygoscelis' list would work in reality. In reality, you should be able to select one of your friends and add them to your suspect list. We will, therefore, spend a little time looking at creating and using our own data. We're going to be saving our data in files, and so your first job must be to create a directory in which to save those files. Your new directory needs to be a subdirectory of the one containing your application. For example, on my Linux server I would do:

cd /www/htdocs/f8/penguin_pi    # Move to the application directory
mkdir data                      # Create a new directory
chgrp www-data data             # Change the group of the directory
chmod g+w data                  # Ensure that the group can write to data
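The article stops at creating the directory. As an illustration of where this is heading, a minimal sketch for persisting a suspect list to that directory might look like the following; the file name, function names, and one-uid-per-line format are our own assumptions:

function save_suspects ($suspects) {
    // Persist the array of suspect uids, one uid per line.
    $file = dirname(__FILE__) . "/data/suspects.txt";
    file_put_contents($file, implode("\n", $suspects));
}

function load_suspects () {
    // Return the previously saved uids, or an empty array.
    $file = dirname(__FILE__) . "/data/suspects.txt";
    if (!file_exists($file)) {
        return array();
    }
    return explode("\n", file_get_contents($file));
}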


WordPress: Avoiding the Black Hat Techniques

Packt
26 Apr 2011
10 min read
Typical black hat techniques

There is a wide range of black hat techniques available to all webmasters. Some techniques can improve rankings in the short term, but generally not to the extent that legitimate web development would if pursued with the same effort. The risk of black hat techniques is that they are routinely detected and punished. Black hat is never the way to go for a legitimate business: pursuing black hat techniques can get your site (or sites) permanently banned, which would require you to build an entirely new website with an entirely new domain name. We will examine a few black hat techniques to help you avoid them.

Hidden text on web pages

Hidden text is text that, through either coding or coloring, does not appear to users but does appear to search engines. Hidden text is a commonly-used technique and would be better described as gray hat; it tends not to be severely punished when detected. One technique relies on the coloring of elements: when the color of a text element is set to the same color as the background (either through CSS or HTML coding), the text disappears for human readers while remaining visible to search spiders. Unfortunately for webmasters employing this technique, it is entirely detectable by Google. More easily detectable still is the use of the CSS property display: none, which directs browsers not to display the text defined by that element; this technique is easily detected by search engines. There is an obvious alternative to employing hidden text: simply use your desired keywords in the text of your content and display that text to both users and search spiders.

Spider detection, cloaking, redirection, and doorway pages

Cloaking and spider detection are related techniques. Cloaking is a black hat SEO technique whereby the content presented to search engine spiders (via spider detection) differs from the content presented to users. Who would employ such a technique? Cloaking is employed principally by sellers of products typically promoted by spam, such as pharmaceuticals, adult sites, and gambling sites. Since legitimate search traffic is difficult to obtain in these niches, the purveyors of these products employ cloaking to gain visitors.

Traditional cloaking relies upon spider detection. When a search spider visits a website, the headers accompanying a page view request identify the spider by a name such as Googlebot (Google's spider) or Slurp (Inktomi's spider). Conversely, an ordinary web browser (presumably with a human operator) will identify itself as Mozilla, Internet Explorer, or Safari, as the case may be. With simple JavaScript or with server configuration, it is quite easy to identify the requesting browser and deliver one version of a page to search spiders and another version to human browsers. All you really need are the names of the spiders, which are publicly known.

A variation of cloaking is a doorway page. A doorway page is a page through which human visitors are quickly redirected (through a meta refresh or JavaScript) to a destination page. Search spiders, however, index the doorway page, not the destination page. Although the technique differs in execution, the effect is the same: human visitors see one page, and the search engines see another.
The potential harm from cloaking goes beyond search engine manipulation. More often than not, the true destination pages in a cloaking scheme are used for the transmission of malware, viruses, and trojans. Because the search engines aren't necessarily reading the true destination pages, the malicious code isn't detected. Any type of cloaking, when reported or detected, is almost certain to result in a severe Google penalty, such as removal of the site from the search engine indexes.

Linking to bad neighborhoods and link farms

A bad neighborhood is a website or network of websites that either earns inbound links through illegitimate means or employs other black hat on-page techniques such as cloaking and redirects. A link farm is a website that offers almost no content and serves solely to list links, which in turn exist to increase the rankings of other sites.

A wide range of black hat techniques can get a website labeled as a bad neighborhood. A quick test you can employ to determine if a site is a bad neighborhood is to enter the domain name as part of the specialized Google search query "site:the-website-domain.com" and see if Google displays any pages of that website in its index. If Google returns no results, the website is either brand new or has been removed from Google's index - a possible indicator that it has been labeled a bad neighborhood. Another quick test is to check the site's PageRank and compare the figure to the number of inbound links pointing to the site. If a site has a large number of backlinks but a PageRank of zero, that tends to indicate that its PageRank has been manually adjusted downwards due to a violation of Google's Webmaster Guidelines.

If both of the previous tests are positive or inconclusive, you would still be wise to give the site a "smell test". Here are some questions to ask when determining whether a site might be deemed a bad neighborhood:

- Does the site offer meaningful content?
- Did you detect any redirection while visiting the site?
- Did you get any virus warning while visiting the site?
- Is the site little more than lists of links, or text polluted with high numbers of links?
- Check the website's backlink profile. Are the links solely low-value inbound links?

If it isn't a site you would engage with when visiting, don't link to it.

Google Webmaster Guidelines

The Google Webmaster Guidelines are a set of written rules and prohibitions that outline recommended and forbidden website practices. You can find these guidelines at http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=35769, though you'll find it easier to search for "Google Webmaster Guidelines" and click on the top search result. You should read through the Google Webmaster Guidelines and refer to them occasionally. The guidelines are divided into design and content guidelines, technical guidelines, and quality guidelines.

Google Webmaster Guidelines in a nutshell

At their core, the Google Webmaster Guidelines aim for quality in the technology underlying websites in the index and high-quality content, and they discourage manipulation of search results through deceptive techniques. All search engines have webmaster guidelines, but if you follow Google's dictates, you will not run afoul of any of the other search engines, so we'll discuss only Google's rules here.
Google's design and content guidelines instruct that your site should have a clear navigational hierarchy with text links rather than image links. The guidelines specifically note that each page "should be reachable from at least one static text link". Because WordPress builds text-based, hierarchical navigation naturally, your site will meet that rule naturally as well. The guidelines continue by instructing that your site should load quickly and display consistently among different browsers.

The warnings come in Google's quality guidelines. In this section, Google warns against a wide range of black hat techniques, such as the following:

- Using hidden text or hidden links: elements that, through coloring, font size, or CSS display properties, are shown to search engines but not to users.
- The use of cloaking or "sneaky redirects". Cloaking means a script that detects search engine spiders and displays one version of a website to users while displaying an alternate version to the search engines.
- The use of repetitive, automated queries to Google. Some unscrupulous software vendors (Google mentions one by name, WebPosition Gold, which is still in the market, luring unsuspecting webmasters) sell software and services that repeatedly query Google to determine website rankings. Google does allow such queries in some instances through its AJAX Search API key, but you need to apply for one and abide by the terms of its use.
- The creation of multiple sites or pages that consist solely of duplicate content that appears on other web properties.
- The posting or installation of scripts that behave maliciously towards users, such as viruses, trojans, browser interceptors, or other badware.
- Participation in link schemes. Google is quite public that it values inbound links as a measure of site quality, so it is ever vigilant to detect and punish illegitimate link programs.
- Linking to bad neighborhoods. A bad neighborhood means a website that uses illegitimate, forbidden techniques to earn inbound links or traffic.
- Stuffing keywords onto pages in order to fool search spiders. Keyword stuffing is "the oldest trick in the book". It's not only forbidden, but also highly ineffective at influencing search results and highly annoying to visitors.

When Google detects violations of its guidelines

Google, which is a nearly entirely automated system, is surprisingly capable of detecting violations of its guidelines. Google encourages user reporting of spam websites, cloaked pages, and hidden text (through the page at https://www.google.com/webmasters/tools/spamreport). It maintains an active anti-spam department fully engaged in ongoing improvement of both manual punishments for offending sites and algorithmic detection of violations.

When paid link abuses are detected, Google will nearly always punish the linking site, not necessarily the site receiving the link - even though the receiving site is the one earning a ranking benefit. At first glance, this may seem counter-intuitive, but there is a reason: if Google punished the site receiving a forbidden paid link, then any site owner could knock out a competitor's website by buying a forbidden link pointing to the competitor and then reporting the link as spam. When an on-page black hat or gray hat element is detected, the penalty is imposed upon the offending site. The penalties range from a ranking adjustment to an outright ban from search engine results.
Generally, the penalty matches the crime; the more egregious penalties flow from the more egregious violations. We need to draw a distinction, however, between a Google ban, a penalty, and algorithmic filtering. Algorithmic filtering is simply an adjustment to the rankings or indexing of a site. If you publish content that is a duplicate of other content on the Web, and Google doesn't rank or index that page, that's not a penalty; it's simply the search engine algorithm operating properly. If all of your pages are removed from the search index, that is most likely a ban. If the highest ranking you can achieve is position 40 for any search phrase, that could potentially be a penalty called a "-40 penalty". All search engines can impose discipline upon websites, but Google is the strictest and imposes far more penalties than the other search engines, so we will largely discuss Google here.

Filtering is not a penalty; it is an adjustment that can be remedied by undoing the condition that led to it. Filtering can occur for a variety of reasons, but is often imposed following over-optimization. For example, if 80% of the links in your backlink profile use the same anchor text, you might trigger a filter. The effect of a penalty or filter is the same: decreased rankings and traffic. In the following section, we'll look at a wide variety of known Google filters and penalties, and learn how to address them.