
How-To Tutorials - Web Development


Working with ASP.NET Web API

Packt
13 Dec 2013
5 min read
The ASP.NET Web API is a framework you can use to build web services that use HTTP as the protocol. It can return data in the format the client requests, that is, it can return JSON or XML.

Layers of an application

The ASP.NET Framework runs on top of the managed environment of the .NET Framework. The Model-View-Controller (MVC) architectural pattern is used to separate the concerns of an application to facilitate testing, ease maintenance of the application's code, and provide better support for change. The model represents the application's data and business objects; the view is the presentation-layer component; and the controller binds the model and the view together. The following figure illustrates the components of the MVC architecture:

(Figure: The MVC architecture)

The ASP.NET Web API architecture

The ASP.NET Web API is a lightweight web-based architecture that uses HTTP as the application protocol. Routing in the ASP.NET Web API works a bit differently from routing in ASP.NET MVC. The basic difference is that the Web API uses the HTTP method, not the URI path, to select the action. The Web API framework uses a routing table to determine which action is to be invoked for a particular request. You specify the routing parameters in the WebApiConfig.cs file that resides in the App_Start directory. Here's an example that shows how routing is configured:

routes.MapHttpRoute(
    name: "Packt API Default",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

The following code snippet illustrates how routing is configured by action names:

routes.MapHttpRoute(
    name: "PacktActionApi",
    routeTemplate: "api/{controller}/{action}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

The ASP.NET Web API generates structured data such as JSON and XML as responses. It can route incoming requests to actions based on HTTP verbs, not only action names. Also, the ASP.NET Web API can be hosted outside of the ASP.NET runtime environment and the IIS web server context.

Routing in ASP.NET Web API

Routing in the ASP.NET Web API is very much the same as in ASP.NET MVC. The ASP.NET Web API routes URLs to a controller; control is then handed over to the action that corresponds to the HTTP verb of the request message. Note that the default route template for an ASP.NET Web API project is {controller}/{id}, where the {id} parameter is optional. ASP.NET Web API route templates may also include an optional {action} parameter. Unlike ASP.NET MVC, URLs in the ASP.NET Web API cannot contain complex types; complex types must be placed in the HTTP message body, and there can be one, and only one, complex type in the HTTP message body. Note that ASP.NET MVC and the ASP.NET Web API are two distinctly separate frameworks that adhere to some common architectural patterns.

In the ASP.NET Web API framework, the controller handles all HTTP requests. The controller comprises a collection of action methods; when a request comes in to the Web API framework, the request is routed to the appropriate action. The framework uses a routing table to determine the action method to be invoked when a request is received. Here is an example:

routes.MapHttpRoute(
    name: "Packt Web API",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);
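As a quick orientation, the following is a minimal sketch of how such a route registration typically sits in the App_Start/WebApiConfig.cs file of a Web API project. The class and route names here are illustrative, not taken from the article.

using System.Web.Http;

public static class WebApiConfig
{
    // Called once at application start-up to register the Web API routes.
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}

With a template like this in place, a request such as GET api/users/3 is mapped to the users controller, and the HTTP verb then decides which action runs.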
Refer to the following UserController class:

public class UserController<UserAuthentication> : BaseApiController<UserAuthentication>
{
    public void GetAllUsers() { }
    public IEnumerable<User> GetUserById(int id) { }
    public HttpResponseMessage DeleteUser(int id) { }
}

The following table illustrates the HTTP methods and the corresponding URIs, actions, and parameters:

HTTP Method | URI         | Action      | Parameter
GET         | api/users   | GetAllUsers | None
GET         | api/users/1 | GetUserById | 1
POST        | api/users   |             |
DELETE      | api/users/3 | DeleteUser  | 3

The Web API framework matches the segments in the URI path to the route template. The following steps are performed:

The URI is matched to a route template.
The respective controller is selected.
The respective action is selected.

The IHttpControllerSelector.SelectController method selects the controller; it takes an HttpRequestMessage instance and returns an HttpControllerDescriptor. After the controller has been selected, the Web API framework selects the action by invoking the IHttpActionSelector.SelectAction method. This method in turn accepts an HttpControllerContext and returns an HttpActionDescriptor.

You can also explicitly specify the HTTP method for an action by decorating the action method with the HttpGet, HttpPut, HttpPost, or HttpDelete attributes. Here is an example:

public class UsersController : ApiController
{
    [HttpGet]
    public User FindUser(int id) { }
}

You can also use the AcceptVerbs attribute to enable HTTP methods other than GET, PUT, POST, and DELETE. Here is an example:

public class UsersController : ApiController
{
    [AcceptVerbs("GET", "HEAD")]
    public User FindUser(int id) { }
}

You can also define a route by action name. Here is an example:

routes.MapHttpRoute(
    name: "PacktActionApi",
    routeTemplate: "api/{controller}/{action}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

You can also override the action name by using the ActionName attribute. The following code snippet illustrates two actions: one that supports GET and the other that supports POST:

public class UsersController : ApiController
{
    [HttpGet]
    [ActionName("Token")]
    public HttpResponseMessage GetToken(int userId);

    [HttpPost]
    [ActionName("Token")]
    public void AddNewToken(int userId);
}
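To see the verb-based dispatch from the client side, here is a minimal, hedged sketch that calls the users API from the table above with HttpClient; the base address and port are assumptions for illustration only.

using System;
using System.Net.Http;

class ApiClientDemo
{
    static void Main()
    {
        using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:8080/") })
        {
            // GET api/users/1 is dispatched to GetUserById (per the table above),
            // because the verb is GET and the {id} segment is present.
            HttpResponseMessage response = client.GetAsync("api/users/1").Result;
            Console.WriteLine((int)response.StatusCode);
            Console.WriteLine(response.Content.ReadAsStringAsync().Result);
        }
    }
}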


Clojure for Domain-specific Languages - Design Concepts with Clojure

Packt
13 Dec 2013
3 min read
Every function is a little program

When I first started getting deep into Clojure development, my friend Tom Marble taught me a very good lesson with a single sentence. I'm not sure if he's the originator of this idea, but he told me to think of writing functions as though "every function is a small program". I'm not really sure what I thought about functions before I heard this, but it all made sense the very moment he told me.

Why write a function as if it were its own program? Because both a function and a program are created to handle a specific set of problems, and this way of thinking allows us to break our problems down into a simpler group of problems. Each set of problems might only need a very limited collection of functions to solve it, so making a function that fits only a single problem isn't really any different from writing a small program to get the very same result. Some might even call this the Unix philosophy, in the sense that you're trying to build small, extendable, simple, and modular code.

A pure function

What are the benefits of a program-like function? There are many benefits to this approach to development, but the two clear advantages are that the debugging process can be simplified by the decoupling of tasks, and that our code becomes more modular. This approach also allows us to build better pure functions. A pure function isn't dependent on any variable outside the function: anything other than the arguments passed to the function can't be accessed by a pure function. Because our program will cause side effects as a result of execution, not all of our functions can be truly pure. This doesn't mean we should forget about trying to develop program-like functions. Our code inherently becomes more modular because pure functions can survive on their own. This is key when building flexible, extendable, and reusable code components.

Floor-to-roof development

Also known as bottom-up development, this is the concept of building basic low-level pieces of a program and then combining them to build the whole program. This approach leads to more reusable code that can be more easily tested, because each part of the program acts as an individual building block and doesn't require a large portion of the program to be completed in order to run a test.

Each function only does one thing

When a function is written to perform a specific task, that function shouldn't do anything unrelated to the original problem it's needed to solve. For example, if you were to write a function named parse-xml, the function should act as a program that can only parse XML data. If the example function does anything other than parse lines of XML input, it is probably badly designed and will cause confusion when trying to debug errors in our programs. This practice helps us keep our functions to a more reasonable size and can also simplify the debugging process.
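As a minimal sketch of these ideas (the function and var names are hypothetical, not from the article), the first function below is impure because it reaches outside its argument list, while the second is a pure, single-purpose "little program":

;; impure: depends on a var defined outside the function
(def default-greeting "Hello")

(defn impure-greet [name]
  (str default-greeting ", " name))

;; pure: everything it needs arrives as arguments, so it is trivial to test and reuse
(defn pure-greet [greeting name]
  (str greeting ", " name))

(pure-greet "Hello" "Tom") ;=> "Hello, Tom"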


Joomla! Template System

Packt
12 Dec 2013
9 min read
Every website has content, and websites provide all kinds of information: not just text, but pictures, animations, and video clips, anything that communicates a site's body of knowledge. Visual design, on the other hand, is the appearance of the site. A good visual design is one that is high quality, appropriate, and relevant to the audience and the message it supports. As a large number of companies feel the need to redesign their site every few years, they need someone who can stand back and figure out what all that content should communicate. This could be you.

The basic principle of Joomla! (and other content management systems) is to separate the content from its visual form. Although this separation is not absolute, it is distinct enough to facilitate quick and efficient customization and deployment of websites. Changing the appearance of web pages built on a CMS comes down to installing and configuring a new template.

A template is a set of files that determine the look and feel of your Joomla!-powered website. Templates include information about the general layout of the site and other content, such as graphics, colors, background images, headers, logos, typography, and footers. Each template is different, offering many choices for site owners to almost instantly change the look of their website. You can see the result of this separation of content from presentation by changing the default template (preinstalled in Joomla!).

For web designers, learning how to develop templates for content management systems such as Joomla! opens up lots of opportunities. Joomla! gives you great scope to build websites. Taking into account the evolution of web browsers, you are limited only by your imagination and skill set, thanks to a powerful and flexible CMS infrastructure. The ability to change or modify the content and appearance of web pages is important in today's online landscape.

What is a Joomla! template?

As in the case of traditional HTML templates, a Joomla! template is a collection of files (PHP, CSS, and JavaScript) that define the visual appearance of the site. Each template has variations on these files, and each template's files are different, but they share a common purpose: they control the placement of the elements on the screen and affect both the presentation of the contents and the usability of the functionality. In general, a template does not contain any content, but it can include logo and background images.

The Joomla! template controls the way all information is shown on each page of the website. A template contains the stylesheets, locations, and layout information for the web content being displayed. Each installed component can also have its own template to present content, which can override the default template's CSS styles. A template alone cannot be called a website. People generally think of the template as the appearance of their site, but a template is only a structure (usually painted and colored) with active fields. It determines the appearance of individual elements (for example, font size, color, backgrounds, style, and spacing) and the arrangement of individual elements (including modules). In Joomla!, a single page view is generated by the HTML output of one component, the selected modules, and the template.
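To make the idea of "a set of files that determine the look and feel" more concrete, here is a heavily trimmed sketch of the kind of index.php a Joomla! template ships with; the module position name is an assumption, since every template defines its own positions.

<?php defined('_JEXEC') or die; // block direct access outside Joomla! ?>
<!DOCTYPE html>
<html lang="<?php echo $this->language; ?>">
<head>
    <jdoc:include type="head" />
</head>
<body>
    <!-- modules published at the (assumed) "position-7" position render here -->
    <jdoc:include type="modules" name="position-7" style="xhtml" />
    <!-- the HTML output of the single component for this page view -->
    <jdoc:include type="component" />
</body>
</html>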
Unlike typical websites, where different components of the template are duplicated throughout the website's pages, in the case of Joomla! there is just one assigned template responsible for displaying content for the entire site. Most CMSs, Joomla! included, have a modular structure that allows easy improvement of the site's appearance and functionality by installing and publishing modules in appropriate areas. Search engines don't care about design, but people do. How well a template is designed and implemented is therefore largely responsible for the first impression made by a website, which later translates into the perception that people have of the entire website.

Joomla! released version 3.0.0 on September 27, 2012, with significant updates and major developments. With the adoption of the Twitter Bootstrap framework, Joomla! became the first major CMS to be mobile ready in both the visitor and administrator areas. Bootstrap (http://twitter.github.com/bootstrap) is an open source framework developed by the team at Twitter. It is a combination of HTML, CSS, and JavaScript code designed to help build user interface components, and it was also built to support both HTML5 and CSS3. As a result, page layout uses a 1152 px * 1132 px * 1116 px * 1104 px grid, whereas previous versions of Joomla! templates used a 940 px wide layout. Default template stylesheets in Joomla! 3.x are written with LESS and are then compiled to generate the CSS files. Because of the use of Bootstrap, Joomla! 3.x will slowly begin to migrate toward jQuery in the core (instead of MooTools); MooTools is no longer the primary JavaScript library. Joomla! 3.x templates are not compatible with previous versions of Joomla! and have been developed as a separate product.

Templates – download for free, buy, or build your own

I also want to show you the sites where you can download templates for free or buy them; after all, this book is supposed to teach you how to create your own sites. There are a number of reasons for this. First, you might not have the time or the ability to design a template or create it from scratch for a customer. You can set up your website within minutes, because all you have to do is install or upload your template and begin adding content. By swapping the header image and changing the background color or image, you can transform a template with very little additional work. Second, as you read the book you will get acquainted with the basic principles of modifying templates, and thus you will learn how to adapt ready-made solutions to the specific needs of your project. In general, you don't need to know much about PHP to use or tweak prebuilt templates. Templates can be customized by anyone with basic HTML/CSS knowledge. You can customize template elements to suit your needs or those of your client using a simple CSS editor, and your template can be configured through template parameters. Third, you can learn from other template developers. Follow every move of your competitors. When they release an interesting, functional, and popular template, follow (but do not copy) them. We can all learn from others; projects by other people are probably one of the most obvious sources of inspiration. The following screenshot presents a few commercial templates for Joomla! 3.x built in 2013 by popular developers; bear in mind, however, that the line between inspiration and plagiarism is often very thin.

Free templates

Premade free templates are a great solution for those on a limited budget. It is a good experience to use the work of different developers, and it is also a great way to test a new web concept without investing much apart from your time. There are some decent free templates out there that may even be suitable for a small or medium production website. If you don't like a certain template after using it for a bit, ditching it doesn't mean any loss of investment.

Unfortunately, there are also some disadvantages to using free templates. These templates are not unique: several thousand web designers from around the world may have already downloaded and used the template you have chosen, so if you don't change the colors or layout a bit, your site will look like a clone, which would be quite unprofessional. Generally, free Joomla! templates don't have important or useful features such as color variants, Google fonts, advanced typography, CSS compression options, or even a responsive layout. On the downside of free templates, you also have the obvious quality issues: the majority of free templates are very basic and sometimes even buggy. Support for free templates is almost always lacking. While there are a few free templates that are supported by their creators, they are under no obligation to provide full support if you need help adjusting the layout or fixing a problem due to an error. Realize that developers often use free templates to advertise their cost structures, expanded versions, or club subscriptions. That's why some developers require you to leave a link to their website at the bottom of your page if you use their free templates. What was surprising to me was that not all free templates for Joomla! 3.x are mobile friendly, despite the fact that even the CMS's built-in templates use Responsive Web Design (RWD). In most cases, this was presumably intended by the creators, as with JoomlaShine or Globbersthemes.

The following is a list of resources from where you can download different kinds of free templates:

www.joomla24.com
www.joomlaos.de
www.siteground.com/joomla-templates.htm
www.bestofjoomla.com

Quite often, popular developers publish free templates on their websites; in this way they promote their brand and other products, such as modules or commercial versions of templates. Those templates usually have better quality and features than others. I suggest that you download free templates only from reliable sources. You should approach templates shared on discussion forums or blogs with a great deal of care, because there is a high probability that the template code has been deliberately modified. A huge proportion of templates available for free are in fact packaged with malicious code.

Summary

Hopefully, after reading this article you will have a better understanding of the features of the Joomla! Template Manager and the types of problems it is able to solve.

Resources for Article:

Further resources on this subject:
Installing and Configuring Joomla! on Local and Remote Servers [Article]
Joomla! 1.5: Installing, Creating, and Managing Modules [Article]
Tips and Tricks for Joomla! Multimedia [Article]


Building Queries

Packt
12 Dec 2013
10 min read
Understanding DQL

DQL is the acronym for Doctrine Query Language. It's a domain-specific language that is very similar to SQL, but is not SQL. Instead of querying the database tables and rows, DQL is designed to query the object model's entities and mapped properties. DQL is inspired by and similar to HQL, the query language of Hibernate, a popular ORM for Java. For more details you can visit http://www.hibernate.org/. Learn more about domain-specific languages at http://en.wikipedia.org/wiki/Domain-specific_language.

To better understand what this means, let's run our first DQL query. Doctrine command-line tools are as versatile as a Swiss Army knife. They include a command called orm:run-dql that runs a DQL query and displays its result. Use it to retrieve the title and all the comments of the post with 1 as its identifier:

php vendor/bin/doctrine.php orm:run-dql "SELECT p.title, c.body FROM Blog\Entity\Post p JOIN p.comments c WHERE p.id = 1"

It looks like a SQL query, but it's definitely not a SQL query. Examine the FROM and the JOIN clauses; they contain the following aspects:

A fully qualified entity class name is used in the FROM clause as the root of the query.
All the Comment entities associated with the selected Post entities are joined, thanks to the presence of the comments property of the Post entity class in the JOIN clause.

As you can see, data from the entities associated with the main entity can be requested in an object-oriented way. Properties holding the associations (on the owning or the inverse side) can be used in the JOIN clause. Despite some limitations (especially in the field of subqueries), DQL is a powerful and flexible language for retrieving object graphs. Internally, Doctrine parses the DQL queries, generates and executes the corresponding SQL queries through the Database Abstraction Layer (DBAL), and hydrates the data structures with the results.

Until now, we have only used Doctrine to retrieve PHP objects. Doctrine is able to hydrate other types of data structures, especially arrays and basic types. It's also possible to write custom hydrators to populate any data structure. If you look closely at the return of the previous call of orm:run-dql, you'll see that it's an array, and not an object graph, that has been hydrated. As with all the topics covered in this book, more information about built-in hydration modes and custom hydrators is available in the Doctrine documentation at http://docs.doctrine-project.org/en/latest/reference/dql-doctrine-query-language.html#hydration-modes.

Using the entity repositories

Entity repositories are classes responsible for accessing and managing entities. Just as entities are related to database rows, entity repositories are related to database tables. All the DQL queries should be written in the entity repository related to the entity type they retrieve. This hides the ORM from other components of the application and makes it easier to reuse, refactor, and optimize the queries. Doctrine entity repositories are an implementation of the Table Data Gateway design pattern. For more details, visit http://martinfowler.com/eaaCatalog/tableDataGateway.html.

A base repository, available for every entity, provides useful methods for managing entities in the following manner:

find($id): It returns the entity with $id as an identifier, or null. It is used internally by the find() method of the Entity Manager.
findAll(): It retrieves an array that contains all the entities in this repository.
findBy(['property1' => 'value', 'property2' => 1], ['property3' => 'DESC', 'property4' => 'ASC']): It retrieves an array that contains the entities matching all the criteria passed in the first parameter, ordered by the second parameter.
findOneBy(['property1' => 'value', 'property2' => 1]): It is similar to findBy(), but retrieves only the first entity, or null if none of the entities match the criteria.

Entity repositories also provide shortcut methods that allow a single property to filter entities. They follow the patterns findBy*() and findOneBy*(). For instance, calling findByTitle('My title') is equivalent to calling findBy(['title' => 'My title']). This feature uses the magic __call() PHP method. For more details visit http://php.net/manual/en/language.oop5.overloading.php#object.call.

In our blog app, we want to display comments in the detailed post view, but it is not necessary to fetch them for the list of posts. Eager loading through the fetch attribute is not a good choice for the list, and lazy loading slows down the detailed view. A solution is to create a custom repository with extra methods for executing our own queries. We will write a custom method that collates comments in the detailed view.

Creating custom entity repositories

Custom entity repositories are classes extending the base entity repository class provided by Doctrine. They are designed to receive custom methods that run the DQL queries. As usual, we will use the mapping information to tell Doctrine to use a custom repository class. This is the role of the repositoryClass attribute of the @Entity annotation. Perform the following steps to create a custom entity repository:

Reopen the Post.php file at the src/Blog/Entity/ location and add a repositoryClass attribute to the existing @Entity annotation, as in the following line of code:

@Entity(repositoryClass="PostRepository")

Doctrine command-line tools also provide an entity repository generator. Type the following command to use it:

php vendor/bin/doctrine.php orm:generate:repositories src/

Open this new, empty custom repository, which we just generated in the PostRepository.php file at the src/Blog/Entity/ location. Add the following method for retrieving the posts and comments:

/**
 * Finds a post with its comments
 *
 * @param int $id
 * @return Post
 */
public function findWithComments($id)
{
    return $this
        ->createQueryBuilder('p')
        ->addSelect('c')
        ->leftJoin('p.comments', 'c')
        ->where('p.id = :id')
        ->orderBy('c.publicationDate', 'ASC')
        ->setParameter('id', $id)
        ->getQuery()
        ->getOneOrNullResult()
    ;
}

Our custom repository extends the default entity repository provided by Doctrine. The standard methods, described earlier in the article, are still available.

Getting started with Query Builder

QueryBuilder is an object designed to help build DQL queries through a PHP API with a fluent interface. It allows us to retrieve the generated DQL query through the getDQL() method (useful for debugging) or to directly use the Query object (provided by Doctrine). To improve performance, QueryBuilder caches the generated DQL queries and manages an internal state.
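As a short illustration of that fluent interface (a sketch only; it simply rebuilds the same query as findWithComments() outside the repository), the generated DQL can be inspected before the query is run:

$queryBuilder = $entityManager
    ->getRepository('Blog\Entity\Post')
    ->createQueryBuilder('p')
    ->addSelect('c')
    ->leftJoin('p.comments', 'c')
    ->where('p.id = :id')
    ->orderBy('c.publicationDate', 'ASC')
    ->setParameter('id', 1);

// Useful for debugging: print the DQL that QueryBuilder has generated...
echo $queryBuilder->getDQL();

// ...then execute it through the Query object.
$post = $queryBuilder->getQuery()->getOneOrNullResult();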
The full API and states of the DQL query are documented at http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/query-builder.html.

We will now give an in-depth explanation of the findWithComments() method that we created in the PostRepository class.

Firstly, a QueryBuilder instance is created with the createQueryBuilder() method inherited from the base entity repository. The QueryBuilder instance takes a string as a parameter. This string will be used as an alias for the main entity class. By default, all the fields of the main entity class are selected and no clauses other than SELECT and FROM are populated.

The leftJoin() call creates a JOIN clause that retrieves the comments associated with the posts. Its first argument is the property to join and its second is the alias; these will be used in the query for the joined entity class (here, the letter c will be used as an alias for the Comment class). Unless the SQL JOIN clause is used, the DQL query automatically fetches the entities associated with the main entity. There is no need for keywords like ON or USING: Doctrine automatically knows whether a join table or a foreign-key column must be used.

The addSelect() call appends the comment data to the SELECT clause. The alias of the entity class is used to retrieve all the fields (this is similar to the * operator in SQL). As in the first DQL query of this article, specific fields can be retrieved with the notation alias.propertyName.

You guessed it, the call to the where() method sets the WHERE part of the query. Under the hood, Doctrine uses prepared SQL statements, which are more efficient than standard SQL queries. The id parameter will be populated by the value set by the call to setParameter(). Thanks again to prepared statements and this setParameter() method, SQL injection attacks are automatically avoided.

SQL injection attacks are a way to execute malicious SQL queries using user input that has not been escaped. Let's take the following example of a bad DQL query used to check whether a user has a specific role:

$query = $entityManager->createQuery('SELECT ur FROM UserRole ur WHERE ur.username = "' . $username . '" AND ur.role = "' . $role . '"');
$hasRole = count($query->getResult());

This DQL query will be translated into SQL by Doctrine. If someone types the following username:

" OR "a"="a

the SQL code contained in the string will be injected and the query will always return some results. The attacker has now gained access to a private area. The proper way is to use the following code:

$query = $entityManager->createQuery("SELECT ur FROM UserRole ur WHERE ur.username = :username AND ur.role = :role");
$query->setParameters([
    'username' => $username,
    'role' => $role
]);
$hasRole = count($query->getResult());

Thanks to prepared statements, special characters (like quotes) contained in the username are not dangerous, and this snippet will work as expected.

The orderBy() call generates an ORDER BY clause that orders the results by the publication date of the comments, oldest first. Most SQL instructions also have an object-oriented equivalent in DQL. The most common join types can be made using DQL; they generally have the same name.

The getQuery() call tells the Query Builder to generate the DQL query (if needed; it will get the query from its cache if possible), to instantiate a Doctrine Query object, and to populate it with the generated DQL query.
This generated DQL query will be as follows:

SELECT p, c FROM Blog\Entity\Post p LEFT JOIN p.comments c WHERE p.id = :id ORDER BY c.publicationDate ASC

The Query object exposes another useful method for debugging purposes: getSQL(). As its name implies, getSQL() returns the SQL query corresponding to the DQL query, which Doctrine will run on the DBMS. For our DQL query, the underlying SQL query is as follows:

SELECT p0_.id AS id0, p0_.title AS title1, p0_.body AS body2, p0_.publicationDate AS publicationDate3, c1_.id AS id4, c1_.body AS body5, c1_.publicationDate AS publicationDate6, c1_.post_id AS post_id7 FROM Post p0_ LEFT JOIN Comment c1_ ON p0_.id = c1_.post_id WHERE p0_.id = ? ORDER BY c1_.publicationDate ASC

The getOneOrNullResult() method executes the query, retrieves the first result, and returns it as a Post entity instance (this method returns null if no result is found). Like the QueryBuilder object, the Query object manages an internal state to generate the underlying SQL query only when necessary.

Performance is something to be very careful about while using Doctrine. When set in production mode, the ORM is able to cache the generated queries (DQL through the QueryBuilder objects, SQL through the Query objects) and the results of the queries. The ORM must be configured to use one of the blazing-fast supported systems (APC, Memcache, XCache, or Redis), as shown at http://docs.doctrine-project.org/en/latest/reference/caching.html.

We still need to update the view layer to take care of our new findWithComments() method. Open the view-post.php file at the web/ location, where you will find the following code snippet:

$post = $entityManager->getRepository('Blog\Entity\Post')->find($_GET['id']);

Replace the preceding line of code with the following code snippet:

$post = $entityManager->getRepository('Blog\Entity\Post')->findWithComments($_GET['id']);
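As a side note on the caching mentioned above, configuring it amounts to a few lines in the ORM setup. This is a hedged sketch that assumes APC is available and that $config is the Doctrine\ORM\Configuration object created during bootstrap:

use Doctrine\Common\Cache\ApcCache;

$cache = new ApcCache();
$config->setMetadataCacheImpl($cache); // parsed mapping information
$config->setQueryCacheImpl($cache);    // DQL-to-SQL translation results
$config->setResultCacheImpl($cache);   // optional: cached query results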


Logging Capabilities

Packt
10 Dec 2013
6 min read
Posting messages to the log

TestComplete allows posting various types of messages to the log: ordinary messages, warnings, errors, and so on. In this section, we will look at examples of how to use these messages.

Getting ready

Create a file with the name myfile.txt in the root directory of C:.

How to do it...

In order to see examples of all the message types in the log, create and launch the following function:

function testMessages()
{
  Log.Event("An event", "Event additional Info");
  Log.Message("A message", "Message additional Info");
  Log.Warning("A warning", "Warning additional Info");
  Log.Error("An error", "Error additional Info");
  Log.File("C:\\somefile.txt", "A file posted to the log");
  Log.Link("C:\\somefile.txt", "A link to a file");
  Log.Link("http://smartbear.com/", "HTTP link");
  Log.Link("ftp://smartbear.com/", "FTP link");
}

As a result, we will get the log shown in the following screenshot.

How it works...

In the given example, we have used four different types of messages. They are as follows:

Log.Event: This message is an event, which occurs when TestComplete interacts with a tested application. Usually, messages of this type are placed into the log at the point of text input or mouse clicks; however, we can also place custom events into the log.
Log.Message: This is an ordinary message, usually used to inform the user about the current actions being executed by the script (usually of a higher level than that of the events; for example, creation of a user, searching for a record, and so on).
Log.Warning: This message is a non-critical error. It is used when the results of a check are different from those expected, but execution of the script can carry on.
Log.Error: This message is a critical error, used when an error is serious enough to make any further execution of the test futile.

These four types of messages take several parameters. The first is the string that we see in the log itself; the second contains additional information, which can be seen in the Additional Info tab when the message is clicked on. The second parameter is optional and can be omitted, as can all the other parameters.

There are two more types of messages:

Log.File: This message copies the assigned file into the folder with the log and places a reference to it. TestComplete renames the file to avoid naming conflicts, leaving only the original extension intact.
Log.Link: This message places a link to a web page or a file without making a copy of the file itself in the folder with the log. On clicking the link, the file opens with the associated program, or the link opens in the browser.

These two types of messages accept the link as the first parameter, followed by the message parameter and the additional-information parameter (as with the previous four). Only the first parameter is mandatory.
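As a small, hypothetical sketch (the helper name and values are not from the article), the message types can be combined in a simple verification routine so that the log severity reflects how serious a mismatch is:

function checkValue(expected, actual, critical)
{
  if (expected === actual)
    Log.Message("Value matches", "Value: " + actual);
  else if (critical)
    Log.Error("Value mismatch", "Expected " + expected + " but got " + actual);
  else
    Log.Warning("Value mismatch", "Expected " + expected + " but got " + actual);
}

// Example calls:
checkValue(5, 5, true);   // ordinary message
checkValue(5, 7, false);  // warning, execution continues
checkValue(5, 7, true);   // error, the check is critical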
Posting screenshots to the log

Sometimes it is necessary to place an image into the log; often it may be a window screenshot, an image of a control, or even one of the whole screen. To this end, we use the Log.Picture method. In this section we will consider different ways to place an image into the log.

How to do it...

The following steps should be performed to place an image into the log:

First of all, we create two image objects, one for the active window and one for the whole screen:

var picWindow = Sys.Desktop.ActiveWindow().Picture();
var picDesktop = Sys.Desktop.Picture();

The image of the active window, now stored in the picWindow variable, is placed into the log unchanged:

Log.Picture(picWindow, "Active window");

The image of the desktop is reduced to a quarter of its original area (each dimension is halved) via the Stretch method, and then saved to a file with the SaveToFile method:

picDesktop.Stretch(picDesktop.Size.Width/2, picDesktop.Size.Height/2);
picDesktop.SaveToFile("c:\\desktop.png");

Now we create a new variable of the Picture type, load the image into it from the previously saved file, and then place it into the log:

var pic = Utils.Picture;
pic.LoadFromFile("c:\\desktop.png");
Log.Picture(pic, "Resized Desktop");

As a result of the function's execution, the log will contain the two images: that of the window active at the moment of test execution, and that of the reduced desktop copy.

How it works...

The Log.Picture method has one mandatory parameter, the image itself; the other parameters are optional. Images of any on-screen object (a window, a single control, the desktop) can be obtained via the Picture method. In our example, we used this method to get the image of the desktop and that of the active window. Instead of the active window, we could use any variable that corresponds to a window or a control.

Any image can be saved to disk with the SaveToFile method. The format of the saved image is determined by its extension (in our case, PNG). If we need to obtain a variable containing an image from a file, we create an empty placeholder with the Utils.Picture property and then load the image into it with the LoadFromFile method. From then on, the image can be handled like any other image obtained with the Picture method.

Large images can be shrunk with the Stretch method. The Stretch method takes two parameters: the new width and height of the image. With the help of the Size.Width and Size.Height properties, we can zoom in or out on the image relative to its original size without setting the dimensions explicitly.

There's more...

With the help of the Picture method, we can obtain not only the image of a whole window or control, but also just a part of it. For example, the following code gets an image of the upper-left square of the desktop measuring 50 x 50 pixels:

var picDesktop = Sys.Desktop.Picture(0, 0, 50, 50);

The parameter values are as follows: the coordinates of the top-left corner, followed by the width and height. There is one important project setting that allows images to be posted automatically in case of an error. To enable this option, right-click on the project name, navigate to Edit | Properties, click on the Playback item in the list of options, and enable the Post image on error checkbox.

Apart from changing the dimensions of an image, TestComplete allows for several quite complicated image manipulations: for example, the comparison of two images (the Compare method), searching for one image inside another (the Find method), and so on.
Click on the following link to get to know more about these possibilities: http://support.smartbear.com/viewarticle/32131/
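The same Log.Picture approach works for any window, not just the active one. The following is a hedged sketch that assumes Notepad is running; the process and window names are illustrative and would differ for your own tested application:

function postNotepadScreenshots()
{
  // assumption: a Notepad instance is running with a matching caption
  var wnd = Sys.Process("notepad").Window("Notepad", "*");
  Log.Picture(wnd.Picture(), "Whole Notepad window");
  // only the top-left 100 x 100 px region of the same window
  Log.Picture(wnd.Picture(0, 0, 100, 100), "Top-left corner of Notepad");
}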


Using Mongoid

Packt
09 Dec 2013
12 min read
Introducing Origin

Origin is a gem that provides the DSL for Mongoid queries. At first glance, a question may arise as to why we need a DSL for Mongoid queries: if we are finally going to convert the query to a MongoDB-compliant hash, then why do we need a DSL?

Origin was extracted from the Mongoid gem and put into a new gem so that there is a standard for querying. It has no dependency on any other gem and is a standalone, pure query builder. The idea was that this could be a generic DSL that can be used even without Mongoid! So, now we have a very generic and standard querying pattern.

For example, in Mongoid 2.x we had the criteria any_in and any_of, and no direct support for the and, or, and nor operations. In Mongoid 2.x, the only way we could fire a $or or a $and query was like this:

Author.where("$or" => {'language' => 'English', 'address.city' => 'London'})

And now in Mongoid 3, we have a cleaner approach:

Author.or(:language => 'English', 'address.city' => 'London')

Origin also provides good selectors directly in our models. So, this is now much more readable:

Book.gte(published_at: Date.parse('2012/11/11'))
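Because each selector returns a criteria object, they can be chained freely before anything touches the database. The following is a small sketch (it assumes the Author and Book models used in this article, with an address.city field), and the queries only run when the criteria are iterated:

# chain Origin selectors; nothing is sent to MongoDB yet
authors = Author.where(language: 'English')
                .or({ 'address.city' => 'London' }, { 'address.city' => 'Paris' })
                .asc(:name)

recent_books = Book.gte(published_at: Date.parse('2012/11/11')).limit(10)

# the queries execute here, on iteration
authors.each { |author| puts author.name }
puts recent_books.count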
Memory maps, delayed sync, and journals

As we have seen earlier, MongoDB stores data in memory-mapped files of at most 2 GB each. After the data is loaded for the first time into the memory-mapped files, we get almost memory-like access speeds instead of disk I/O, which is much slower. These memory-mapped files are preallocated to ensure that there is no delay in file generation while saving data.

However, to ensure that the data is not lost, it needs to be persisted to disk. This is achieved by journaling. With journaling, every database operation is written to the oplog collection, which is flushed to disk every 100 ms. Journaling is turned on by default in the MongoDB configuration. What is journaled is not the actual data but the operation itself. This helps in better recovery (in case of a crash) and also ensures the consistency of writes. The data written to the various collections is flushed to disk every 60 seconds. This ensures that the data is persisted periodically, while the speed of data access remains almost as fast as memory.

MongoDB relies on the operating system for the memory management of its memory-mapped files. This has the advantage of inheriting OS improvements as the OS evolves, and the disadvantage of a lack of control over how memory is managed by MongoDB.

However, what happens if something goes wrong (the server crashes, the database stops, or the disk is corrupted)? To ensure durability, whenever data is saved in files, the action is logged to a file in chronological order. This is the journal entry, which is also a memory-mapped file but is synced to disk every 100 ms. Using the journal, the database can easily be recovered in case of a crash. So, in the worst-case scenario, we could potentially lose 100 ms of information. This is a fair price to pay for the benefits of using MongoDB.

MongoDB journaling makes it a very robust and durable database. However, it also helps us decide when to use MongoDB and when not to use it. 100 ms is a long time for some services, such as core banking or stock price updates, so MongoDB is not recommended for such applications. For most cases that do not involve heavy multi-table transactions, as most financial applications do, MongoDB can be suitable.

All these things are handled seamlessly, and we don't usually need to change anything. We can control this behavior via the MongoDB configuration, but usually that's not recommended. Let's now see how we save data using Mongoid.

Updating documents and attributes

As with the ActiveModel specification, save will update the changed attributes and return the updated object; otherwise it will return false on failure. The save! method will raise an exception on error. In both cases, if we pass validate: false as a parameter to save, it will bypass the validations.

A lesser-known persistence option is the upsert action. An upsert creates a new document if it does not find it, and overwrites the object if it finds it. A good reason to use upsert is in the find_and_modify action. For example, suppose we want to reserve a book in our Sodibee system, and we want to ensure that at any one point there can be only one reservation for a book. In a traditional scenario:

t1: Request-1 searches for a book which is not reserved and finds it
t2: Now, it saves the book with the reservation information
t3: Request-2 searches for a reservation for the same book and finds that the book is reserved
t4: Request-2 handles the situation with an error or waits for the reservation to be freed

So far so good! However, a concurrent model, especially in web applications, creates problems:

t1: Request-1 searches for a book which is not reserved and finds it
t2: Request-2 searches for a reservation for the same book and also gets the book back, since it's not yet reserved
t3: Request-1 saves the book with its reservation information
t4: Request-2 now overwrites the previous update and saves the book with its reservation information

Now we have a situation where two requests think that the reservation for the book was successful, which is against our expectations. This is a typical problem that plagues most web applications. The various ways in which we can solve this are discussed in the subsequent sections.
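One way to close this window, sketched below with hypothetical reserved/reserved_by fields, is to let MongoDB perform the match and the update as a single atomic step using the find_and_modify action mentioned earlier; only one of the two competing requests will find an unreserved book:

# atomically flip the flag; returns the updated document for the winner, nil for the loser
book = Book.where(_id: book_id, reserved: false)
           .find_and_modify({ '$set' => { reserved: true, reserved_by: user_id } }, new: true)

if book
  puts "Reserved for #{user_id}"
else
  puts "Another request reserved the book first; handle the conflict"
end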
Write concern

MongoDB helps us ensure write consistency. This means that when we write something to MongoDB, it guarantees the success of the write operation. Interestingly, this is a configurable option and is set to acknowledged by default. This means that the write is guaranteed because it waits for an acknowledgement before returning success. In earlier versions of Mongoid, safe: true was turned off by default, which meant that the success of the write operation was not guaranteed. The write concern is configured in mongoid.yml as follows:

development:
  sessions:
    default:
      hosts:
        - localhost:27017
      options:
        write:
          w: 1

The default write concern in Mongoid is configured with w: 1, which means that the success of a write operation is guaranteed. Let's see an example:

class Author
  include Mongoid::Document
  field :name, type: String
  index({name: 1}, {unique: true, background: true})
end

Indexing blocks read and write operations. Hence, it's recommended to configure indexing in the background in a Rails application. We shall now start a Rails console and see how this reacts to a duplicate key index by creating two Author objects with the same name:

irb> Author.create(name: "Gautam")
=> #<Author _id: 5143678345db7ca255000001, name: "Gautam">
irb> Author.create(name: "Gautam")
Moped::Errors::OperationFailure: The operation: #<Moped::Protocol::Command
  @length=83 @request_id=3 @response_to=0 @op_code=2004 @flags=[]
  @full_collection_name="sodibee_development.$cmd"
  @skip=0 @limit=-1 @selector={:getlasterror=>1, :w=>1} @fields=nil>
failed with error 11000: "E11000 duplicate key error index: sodibee_development.authors.$name_1 dup key: { : "Gautam" }"

As we can see, it has raised a duplicate key error and the document is not saved. Now, let's have some fun. Let's change the write concern to unacknowledged:

development:
  sessions:
    default:
      hosts:
        - localhost:27017
      options:
        write:
          w: 0

The write concern is now set to unacknowledged writes. That means we do not wait for the MongoDB write to eventually succeed, but assume that it will. Now let's see what happens with the same command that had failed earlier:

irb> Author.where(name: "Gautam").count
=> 1
irb> Author.create(name: "Gautam")
=> #<Author _id: 5287cba54761755624000000, name: "Gautam">
irb> Author.where(name: "Gautam").count
=> 1

There seems to be a discrepancy here. Though the Mongoid create returned successfully, the data was not saved to the database. Since we specified background: true for the name index, the document creation seemed to succeed because MongoDB had not indexed it yet, and we did not wait for acknowledgement of the success of the write operation. So, when MongoDB tries to index the data in the background, it realizes that the index criterion is not met (since the index is unique), and it deletes the document from the database. Since that happens in the background, there is no way to detect this on the console or in our Rails application. This leads to an inconsistent result.

So, how can we solve this problem? There are various ways:

We leave the Mongoid default write concern configuration alone. By default, it is w: 1 and it will raise an exception. This is the recommended approach, as prevention is better than cure!
Do not specify the background: true option. This will create indexes in the foreground. However, this approach is not recommended, as it can cause a drop in performance because index creation blocks read and write access.
Add drop_dups: true. This deletes data, so you have to be really careful when using this option.

Other options to the index command create different types of indexes, as shown in the following table:

Index Type    | Example                                   | Description
sparse        | index({twitter_name: 1}, { sparse: true}) | This creates a sparse index, that is, only the documents containing the indexed fields are indexed. Use this with care, as you can get incomplete results.
2d, 2dsphere  | index({:location => "2dsphere"})          | This creates a two-dimensional spherical index.

Text index

MongoDB 2.4 introduced text indexes, which are as close to free-text search indexes as it gets. However, they do only basic text indexing, that is, they support stop words and stemming. A relevance score is also assigned to each search. Text indexes are still an experimental feature in MongoDB, and they are not recommended for extensive use. Use ElasticSearch, Solr (Sunspot), or ThinkingSphinx instead. The following code snippet shows how we can specify a text index with weightage:

index({
  "name" => 'text',
  "last_name" => 'text'
},
{
  weights: {
    'name' => 10,
    'last_name' => 5,
  },
  name: 'author_text_index'
})

There is no direct search support in Mongoid (as yet).
So, if you want to invoke a text search, you need to hack around a little:

irb> srch = Mongoid::Contextual::TextSearch.new(Author.collection, Author.all, 'john')
=> #<Mongoid::Contextual::TextSearch
  selector: {}
  class:    Author
  search:   john
  filter:   {}
  project:  N/A
  limit:    N/A
  language: default>
irb> srch.execute
=> {"queryDebugString"=>"john||||||", "language"=>"english", "results"=>[{"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058345db7c843f00030b'), "name"=>"Bettye Johns"}}, {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058345db7c843f00046d'), "name"=>"John Pagac"}}, {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058345db7c843f000578'), "name"=>"Jeanie Johns"}}, {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058445db7c843f0007e7')... {"score"=>7.5, "obj"=>{"_id"=>BSON::ObjectId('51fc058a45db7c843f0025f1'), "name"=>"Alford Johns"}}], "stats"=>{"nscanned"=>25, "nscannedObjects"=>0, "n"=>25, "nfound"=>25, "timeMicros"=>31103}, "ok"=>1.0}

By default, text search is disabled in the MongoDB configuration. We need to turn it on by adding setParameter = textSearchEnabled=true in the MongoDB configuration file, typically /usr/local/mongo.conf.

This returns a result with statistical data as well as the documents and their relevance scores. Interestingly, it also specifies the language. There are a few more things we can do with the search result. For example, we can see the statistical information as follows:

irb> a.stats
=> {"nscanned"=>25, "nscannedObjects"=>0, "n"=>25, "nfound"=>25, "timeMicros"=>31103}

We can also convert the data into our Mongoid model objects by using project, as shown in the following command:

irb> a.project(:name).to_a
=> [#<Author _id: 51fc058345db7c843f00030b, name: "Bettye Johns", last_name: nil, password: nil>, #<Author _id: 51fc058345db7c843f00046d, name: "John Pagac", last_name: nil, password: nil>, #<Author _id: 51fc058345db7c843f000578, name: "Jeanie Johns", last_name: nil, password: nil> ...

Some of the important things to remember are as follows:

Text indexes can be very heavy in memory. They can return the documents, so the result can be large.
We can use multiple keys (or filters) along with a text search. For example, an index defined with index({ 'state': 1, name: 'text'}) will mandate the use of the state field for every text search.
A search for "john doe" will result in a search for "john" or "doe" or both.
A search for "john" and "doe" will search for all "john" and "doe" in a random order.
A search for "\"john doe\"", that is, with escaped quotes, will search for documents containing the exact phrase "john doe".

A lot more data can be found at http://docs.mongodb.org/manual/tutorial/search-for-text/.

Summary

This article provides an excellent reference for using Mongoid. The article has examples with code samples and explanations that help in understanding the various features of Mongoid.

Resources for Article:

Further resources on this subject:
Installing phpMyAdmin [Article]
Integrating phpList 2 with Drupal [Article]
Documentation with phpDocumentor: Part 1 [Article]

Touch Events

Packt
26 Nov 2013
7 min read
The why and when of touch events

Touch events come in a few different flavors. There are taps, swipes, rotations, pinches, and more complicated gestures available for you to make your users' experience more enjoyable. Each type of touch event has an established convention, which is important to know so that your users have minimal friction when using your website for the first time.

The tap is analogous to the mouse click in a traditional desktop environment, whereas the rotate, pinch, and related gestures have no equivalent. This presents an interesting problem: what can those events be used for? Pinch events are typically used to scale views (have a look at the following screenshot of Google Maps accessed from an iPad, which uses pinch events to control the map's zoom level), while rotate events generally rotate elements on screen. Swipes can be used to move content from left to right or top to bottom. Multifinger gestures can differ according to your website's use cases, but it's still worthwhile examining whether a similar interaction, and thus an established convention, already exists.

Web applications can benefit greatly from touch events. For example, imagine a paint-style web application. It could use pinches to control zoom, rotations to rotate the canvas, and swipes to move between color options. There is almost always an intuitive way of using touch events to enhance a web application.

In order to get comfortable with touch events, it's best to get out there and start using them practically. To this end, we will be building a paint tool with a difference: instead of using the traditional mouse pointer as the selection and drawing tool, we will exclusively use touch events. Together, we can learn just how awesome touch events are and the pleasure they can afford your users. Touch events are the way of the future, and adding them to your developer tool belt will mean that you can stay at the cutting edge. While MC Hammer may not agree with these events, the rest of the world will appreciate touching your applications.
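As a minimal sketch of what such handlers look like in code (the element ID is hypothetical), a tap can be approximated by listening for the low-level touch events that all the gestures above are built from:

var canvas = document.getElementById('paint-canvas');

canvas.addEventListener('touchstart', function (e) {
  // prevent the emulated mouse events and page scrolling
  e.preventDefault();
  var touch = e.touches[0];
  console.log('touch started at', touch.pageX, touch.pageY);
});

canvas.addEventListener('touchend', function (e) {
  console.log('touch ended, remaining touches:', e.touches.length);
});

These are the same events that Chrome's Emulate touch events setting, described in the next section, will fire for you on a non-touch machine.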
Testing while keeping your sanity (Simple)

Touch events are awesome on touch-enabled devices. Unfortunately, your development environment is generally not touch-enabled, which means testing these events can be trickier than normal testing. Thankfully, there are a bunch of methods for testing these events on non-touch-enabled devices, so you don't have to constantly switch between environments. That said, it's still important to do the final testing on actual devices, as there can be subtle differences.

How to do it...

Learn how to do initial testing with Google Chrome.
Debug remotely with Mobile Safari and Safari.
Debug further with simulators and emulators.
Use general tools to debug your web application.

How it works...

As mentioned earlier, testing touch events generally isn't as straightforward as the normal update-reload-test flow in web development. Luckily, the traditional desktop browser can still get you some of the way when it comes to testing these events. While you are generally targeting Mobile Safari on iOS, and Chrome or the stock Browser on Android, it's best to do development and initial testing in Chrome on the desktop. Chrome has a handy feature in its debugging tools called Emulate touch events, which allows a lot of your mouse-related interactions to trigger their touch equivalents.

You can find this setting by opening the Web Inspector, clicking on the Settings cog, switching to the Overrides tab, and enabling Emulate touch events (if the option is grayed out, simply click on Enable at the top of the screen). Using this emulation, we are able to test many touch interactions; however, there are certain patterns (such as scaling and rotating) that aren't really possible using only emulated events and the mouse pointer. To correctly test these, we need to find the middle ground between testing in a desktop browser and testing on actual devices.

There's more...

Remember the following about testing with simulators and browsers.

Device simulators

While testing with Chrome will get you some of the way, eventually you will want to use a simulator or emulator for the target device, be it a phone, tablet, or other touch-enabled device.

Testing in Mobile Safari

The iOS Simulator, for testing Mobile Safari on iPhone, iPod, and iPad, is available only on OS X. You must first download Xcode from the App Store. Once you have Xcode, go to Xcode | Preferences | Downloads | Components, where you can download the latest iOS Simulator.

You can then open the iOS Simulator from Xcode by accessing Xcode | Open Developer Tool | iOS Simulator. Alternatively, you can make a direct alias: access Xcode's package contents via the Xcode icon's context menu in the Applications folder and select Show Package Contents. You can then find the simulator at Xcode.app/Contents/Applications, where you can create an alias and drag it anywhere using Finder.

Once you have the simulator (or an actual device), you can attach Safari's Web Inspector to it, which makes debugging very easy. First, open Safari and then launch Mobile Safari in either the iOS Simulator or on your actual device (ensure it's plugged in via its USB cable). Ensure that Web Inspector is enabled in Mobile Safari's settings via Settings | Safari | Advanced | Web Inspector. Then, in Safari, go to Develop | <your device name> | <tab name>. You can now use the Web Inspector to inspect your remote Mobile Safari instance.

Testing in Mobile Safari in the iOS Simulator allows you to use the mouse to trigger all the relevant touch events. In addition, you can hold down the Alt key to test pinch gestures.

Testing on Android's Chrome

Testing Android on the desktop is a little more complicated, due to the nature of the Android emulator. Being an emulator and not a simulator, its performance suffers (in the name of accuracy), and it isn't so straightforward to attach a debugger. You can download the Android SDK from http://developer.android.com/sdk. Inside the tools directory, you can launch the Android Emulator by setting up a new Android Virtual Device (AVD). You can then launch the browser to test the touch events. Keep in mind to use 10.0.2.2 instead of localhost to access locally running servers.

To debug with a real device and Google Chrome, the official documentation can be followed at https://developers.google.com/chrome-developer-tools/docs/remote-debugging.

When there isn't a built-in debugger or inspector, you can use generic tools such as Web Inspector Remote (Weinre). This tool is platform-agnostic, so you can use it for Mobile Safari, Android's Browser, Chrome for Android, Windows Phone, BlackBerry, and so on. It can be downloaded from http://people.apache.org/~pmuellr/weinre.
To start using Weinre, you need to include a JavaScript file on the page you want to test, and then run the testing server. The URL of this JavaScript file is given to you when you run the weinre command from its downloaded directory. Once Weinre is running on your desktop, you can use its Web Inspector-like interface to debug and inspect your touch events remotely: you can send messages to console.log() and use a familiar interface for other debugging tasks. Summary In this article, the author discussed touch events and highlighted how awkward they can be to test when a touch-enabled environment is not available. He also covered methods for testing these events on non-touch-enabled devices. Resources for Article: Further resources on this subject: Web Design Principles in Inkscape [Article] Top Features You Need to Know About – Responsive Web Design [Article] Sencha Touch: Layouts Revisited [Article]

CodeIgniter MVC – The Power of Simplicity!

Packt
26 Nov 2013
6 min read
(For more resources related to this topic, see here.) "Simplicity wins big!" Back in the 80s, there was a programming language called Ada that many contracts required to be used. Ada was complex and hard to maintain compared to C/C++, and today it fades away much like Pascal. C/C++ is the simplicity winner in the real-time systems arena. In telecom, there were two standards for network device management protocols in the 90s: CMIP (Common Management Information Protocol) and SNMP (Simple Network Management Protocol). Initially, all telecom requirement papers demanded CMIP support. Eventually, after several years, research found that developing and maintaining the same system took roughly ten times the effort with CMIP compared to SNMP. SNMP is the simplicity winner in the network management arena! In VoIP, or media over IP, H.323 and SIP (Session Initiation Protocol) were competing protocols in the early 2000s. H.323 encoded its messages in a cryptic binary format, whereas SIP made everything textual and easy to understand in a text editor. Today, almost all endpoint devices are powered by SIP, while H.323 has become a niche protocol for the VoIP backbone. SIP is the simplicity winner in the VoIP arena! Back in 2010, I was looking for a good PHP platform on which to develop the web application for my startup's first product, Logodial Zappix (http://zappix.com). I was recommended Drupal for this. I tried the platform and found it very heavy to manipulate and change to match the exact user interaction flow and experience I had in mind. Many times I had to compromise, and the overhead of the platform was horrible: make a Hello World app and tons of irrelevant code gets pulled into the project. Try to write free-form JavaScript and you find yourself struggling with the platform, which shuts you off from the creativity of client-side JavaScript and its add-ons. I decided to look for a better platform for my needs. Later on, I heard about Zend Framework, an MVC (Model-View-Controller) framework. I tried to work with it, as it is MVC-based with a lot of OOP usage, but I found it heavy as well. The documentation seems great at first sight, but the more I used it, looking for vivid examples and explanations, the more I found myself in endless closed loops of links. It lacked clear explanations and vivid examples. The feeling was that for every matchbox-moving task I needed a semi-trailer of declarations and calls to get the job done, though I greatly liked the fact that it was MVC-based. Continuing my search, I was looking for a simple but powerful MVC framework based on PHP, my favorite server-side language. One day in early 2011, I got a note from a friend that there was a light and cool platform named CodeIgniter (CI in brief). I checked the documentation at http://ellislab.com/codeigniter/user-guide/ and was amazed by the very clean, simple, well-organized, and well-explained browsing experience. Examples? Yes, lots of clear examples, with a great community. It was so great and simple. I felt that the platform designers had made every effort to produce the simplest and most vivid code, reusable and in a clean OOP fashion from the infrastructure down to the last function. I tried making a trial web app, loading helpers and libraries and using them, and greatly loved the experience. Fast forward to today: I see a matured CodeIgniter as a Lego-like playground that I know well. I have written tons of models, helpers, libraries, controllers, and views.
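To give a flavor of that Lego-like simplicity, here is a minimal, illustrative sketch of a CodeIgniter controller, model, and view working together. The class, table, and file names (Buzz, buzz, buzz_view) are hypothetical and the snippet assumes a standard CodeIgniter 2.x project with the database library autoloaded; it is a sketch of the pattern, not code from the book.

<?php
// application/controllers/buzz.php
class Buzz extends CI_Controller {

    public function index() {
        // Load a model and a helper, then hand the data over to a view
        $this->load->model('buzz_model');
        $this->load->helper('url');

        $data['items'] = $this->buzz_model->get_latest(10);
        $this->load->view('buzz_view', $data);
    }
}

// application/models/buzz_model.php
class Buzz_model extends CI_Model {

    public function get_latest($limit) {
        // Query Builder against a hypothetical 'buzz' table
        $query = $this->db->order_by('id', 'desc')->limit($limit)->get('buzz');
        return $query->result();
    }
}

<!-- application/views/buzz_view.php -->
<ul>
<?php foreach ($items as $item): ?>
    <li><?php echo $item->title; ?></li>
<?php endforeach; ?>
</ul>

Each piece stays small and does one job, which is exactly the quality the rest of this article keeps coming back to.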
CodeIgniter's simplicity enables me to do things fast while keeping them clear, well maintained, and expandable. Over time, I have gathered the most useful helpers and libraries, Ajax solutions for both the server and the browser side, and good links to useful add-ons such as the free grid plugin for CI at http://www.grocerycrud.com/, which keeps improving day by day. Today, I see CodeIgniter as a matured, scalable (see the AT&T and Sprint call center web apps built on CI) champion of reusability and simplicity. The following is the high-level architecture of the CodeIgniter MVC, with the controller(s) as the hub of the application session. The main use cases of a CI controller are: handling requests from the web browser as HTTP URI calls, either with submitted parameters (for example, submitting a login with credentials) or without parameters (for example, home page navigation); handling asynchronous Ajax requests from the web client, mostly as JSON over HTTP POST requests and responses; and serving CRON job requests that create HTTP URI calls to controller methods, similar to browser navigation, silently from a CRON PHP module. The main features of CI views: they are rendered by a controller, optionally with a set of parameters (scalars, arrays, objects); they have full access to all the helpers, libraries, and models that their rendering controller has; and they are free to integrate any JavaScript or third-party client-side plugins. The main features of CI helpers: they are flat sets of functions protected from duplication risks; they can be loaded for use by any controller and accessed by any rendered view; and they can access any CI resource or library via the &get_instance() service. The main features of CI libraries: they are OOP classes that can extend other third-party classes (see, for example, the Google Map wrapper in the new book); they can access the CI resources of other libraries and built-in services via &get_instance(); and they can be used by the project's controllers and all their rendered views. The main features of CI models: they are similar to libraries, but have access to the default database, which can be expanded to multiple databases, and to any other CI resource via &get_instance(); like libraries, they are OOP classes that can extend third-party classes. CodeIgniter keeps increasing in popularity because it has a simple yet high-quality OOP core that enables creativity, reusability, and clear naming conventions, and because it is easy to extend (a user class extends a CI class) with third-party application plugins (packages of views, models, libraries, and/or helpers). I found CodeIgniter flexible, a great enabler of reusability, light in infrastructure, supportive of developer creativity, and powered by an active global community. For day-to-day work, CI offers code clarity, high performance, and a minimal, controllable footprint (you decide which helpers, libraries, and models to load for each controller). Above all, CI is blessed with a very fast learning curve for PHP developers, and with many blogs and community sites to share knowledge and to raise and resolve issues. CodeIgniter is the simplicity winner I have found for server-side MVC web apps. Summary This article introduced the CodeIgniter framework and the first steps of getting started with web-based applications built on it.
Resources for Article: Further resources on this subject: Database Interaction with Codeigniter 1.7 [Article] User Authentication with Codeigniter 1.7 using Facebook Connect [Article] CodeIgniter 1.7 and Objects [Article]

Creating Blog Content in WordPress

Packt
25 Nov 2013
18 min read
(For more resources related to this topic, see here.) Posting on your blog The central activity you'll be doing with your blog is adding posts. A post is like an article in a magazine; it's got a title, content, and an author (in this case, you, though WordPress allows multiple authors to contribute to a blog). If a blog is like an online diary, every post is an entry in that diary. A blog post also has a lot of other information attached to it, such as a date, excerpt, tags, and categories. In this section, you will learn how to create a new post and what kind of information to attach to it. Adding a simple post Let's review the process of adding a simple post to your blog. Whenever you want to add content or carry out a maintenance process on your WordPress website, you have to start by logging in to the WP Admin (WordPress Administration panel) of your site. To get to the admin panel, just point your web browser to http://yoursite.com/wp-admin. Remember that if you have installed WordPress in a subfolder (for example, blog), your URL has to include the subfolder (that is, http://yoursite.com/blog/wp-admin). When you first log in to the WP Admin, you'll be at the Dashboard. The Dashboard has a lot of information on it so don't worry about that right now. The quickest way to get to the Add New Post page at any time is to click on + New and then the Post link at the top of the page in the top bar. This is the Add New Post page: To add a new post to your site quickly, all you have to do is: Type in a title into the text field under Add New Post (for example, Making Lasagne). Type the text of your post in the content box. Note that the default view is Visual, but you actually have a choice of the Text view as well. Click on the Publish button, which is at the far right. Note that you can choose to save a draft or preview your post as well. Once you click on the Publish button, you have to wait while WordPress performs its magic. You'll see yourself still on the Edit Post screen, but now the following message would have appeared telling you that your post was published, and giving you a link View post: If you view the front page of your site, you'll see that your new post has been added at the top (newest posts are always at the top). Common post options Now that we've reviewed the basics of adding a post, let's investigate some of the other options on the Add New Post and Edit Post pages. In this section we'll look at the most commonly used options, and in the next section we'll look at the more advanced options. Categories and tags Categories and tags are two types of information that you can add to a blog post. We use them to organize the information in your blog by topic and content (rather than just by, say, date), and to help visitors find what they are looking for on your blog. Categories are primarily used for structural organizing. They can be hierarchical, meaning a category can be a parent of another category. A relatively busy blog will probably have at least 10 categories, but probably not more than 15 or 20. Each post in such a blog is likely to have from one up to, maybe four categories assigned to it. For example, a blog about food and cooking might have these categories: Cooking Adventures, In The Media, Ingredients, Opinion, Recipes Found, Recipes Invented, and Restaurants. Of course, the numbers mentioned are just suggestions; you can create and assign as many categories as you like. The way you structure your categories is entirely up to you as well. 
There are no true rules regarding this in the WordPress world, just guidelines like these. Tags are primarily used as shorthand for describing the topics covered in a particular blog post. A relatively busy blog will have anywhere from 15 to even 100 tags in use. Each post in this blog is likely to have 3 to 10 tags assigned to it. For example, a post on the food blog about a recipe for butternut squash soup may have these tags: soup, vegetarian, autumn, hot, and easy. Again, you can create and assign as many tags as you like. Let's add a new post to the blog. After you give it a title and content, let's add tags and categories. While adding tags, just type your list of tags into the Tags box on the right, separated by commas: Then click on the Add button. The tags you just typed in will appear below the text field with little x buttons next to them. You can click on an x button to delete a tag. Once you've used some tags in your blog, you'll be able to click on the Choose from the most used tags link in this box so that you can easily re-use tags. Categories work a bit differently than tags. Once you get your blog going, you'll usually just check the boxes next to existing categories in the Categories box. In this case, as we don't have any existing categories, we'll have to add one or two. In the Categories box on the right, click on the + Add New Category link. Type your category into the text field, and click on the Add New Category button. Your new category will show up in the list, already checked. Look at the following screenshot: If in the future you want to add a category that needs a parent category, select — Parent Category — from the pull-down menu before clicking on the Add New Category button. If you want to manage more details about your categories, move them around, rename them, assign parent categories, and assign descriptive text. You can do so on the Categories page. Click on the Publish button, and you're done (you can instead choose to schedule a post; we'll explore that in detail in a few pages). When you look at the front page of your site, you'll see your new post on the top, your new category in the sidebar, and the tags and category (that you chose for your post) listed under the post itself. Images in your posts Almost every good blog post needs an image! An image will give the reader an instant idea of what the post is about, and the image will draw people's attention as well. WordPress makes it easy to add an image to your post, control default image sizes, make minor edits to that image, and designate a featured image for your post. Adding an image to a post Luckily, WordPress makes adding images to your content very easy. Let's add an image to the post we just created. You can click on Edit underneath your post on the front page of your site to get there quickly. Alternatively, go back to the WP Admin, open Posts in the main menu, and then click on the post's title. To add an image to a post, first you'll need to have that image on your computer, or know the exact URL pointing to the image if it's already online. Before you get ready to upload an image, make sure that your image is optimized for the Web. Huge files will be uploaded slowly and slow down the process of viewing your site. Just to give you a good example here, I'm using a photo of my own so I don't have to worry about any copyright issues (always make sure to use only the images that you have the right to use, copyright infringement online is a serious problem, to say the least). 
I know it's on the desktop of my computer. Once you have a picture on your computer and know where it is, carry out the following steps to add the photo to your blog post: Click on the Add Media button, which is right above the content box and below the title box: The box that appears allows you to do a number of different things regarding the media you want to include in your post. The most user-friendly feature here, however, is the drag-and-drop support. Just drag the image from your desktop and drop it into the center area of the page labeled as Drop files anywhere to upload. Immediately after dropping the image, the uploader bar will show the progress of the operation, and when it's done, you'll be able to do some final tuning up. The fields that are important right now are Title, Alt Text, Alignment, Link To, and Size. Title is a description for the image, Alt Text is a phrase that's going to appear instead of the image in case the file goes missing or any other problems present themselves, Alignment will tell the image whether to have text wrap around it and whether it should be right, left, or center, Link To instructs WordPress whether or not to link the image to anything (a common solution is to select None), and Size is the size of the image. Once you have all of the above filled out click on Insert into post. This box will disappear, and your image will show up in the post—right where your cursor was prior to clicking on the Add Media button—on the edit page itself (in the visual editor, that is. If you're using the text editor, the HTML code of the image will be displayed instead). Now, click on the Update button, and go and look at the front page of your site again. There's your image! Controlling default image sizes You may be wondering about those image sizes. What if you want bigger or smaller thumbnails? Whenever you upload an image, WordPress creates three versions of that image for you. You can set the pixel dimensions of those three versions by opening Settings in the main menu, and then clicking on Media. This takes you to the Media Settings page. Here you can specify the size of the uploaded images for: Thumbnail size Medium size Large size If you change the dimensions on this page, and click on the Save Changes button, only images you upload in the future will be affected. Images you've already uploaded to the site will have had their thumbnail, medium, and large versions created already using the old settings. It's a good idea to decide what you want your three media sizes to be early on in your site, so you can set them and have them applied to all images, right from the start. Another thing about uploading images is the whole craze with HiDPI displays, also called Retina displays. Currently, WordPress is in a kind of a transitional phase with images and being in tune with the modern display technology; the Retina Ready functionality was introduced quite recently in WordPress 3.5. In short, if you want to make your images Retina-compatible (meaning that they look good on iPads and other devices with HiDPI screens), you should upload the images at twice the dimensions you plan to display them in. For example, if you want your image to be presented as 800 pixel wide and 600 pixel high, upload it as 1,600 pixel wide and 1,200 pixel high. WordPress will manage to display it properly anyway, and whoever visits your site from a modern device will see a high-definition version of the image. 
In future versions, WordPress will surely provide a more managed way of handling Retina-compatible images. Editing an uploaded image As of WordPress 2.9, you can now make minor edits on images you've uploaded. In fact, every image that has been previously uploaded to WordPress can be edited. In order to do this, go to Media Library by clicking on the Media button in the main sidebar. What you'll see is a standard WordPress listing (similar to the one we saw while working with posts) presenting all media files and allowing you to edit each one. When you click on the Edit link and then the Edit Image button on the subsequent screen, you'll enter the Edit Media section. Here, you can perform a number of operations to make your image just perfect. As it turns out, WordPress does a good enough job with simple image tuning so you don't really need expensive software such as Photoshop for this. Among the possibilities you'll find cropping, rotating, and flipping vertically and horizontally. For example, you can use your mouse to draw a box as I have done in the preceding image. On the right, in the box marked Image Crop, you'll see the pixel dimensions of your selection. Click on the Crop icon (top left), then the Thumbnail radio button (on the right), and then Save (just below your photo). You now have a new thumbnail! Of course, you can adjust any other version of your image just by making a different selection prior to hitting the Save button. Play around a little and you can become familiar with the details. Designating a featured image As of WordPress 2.9, you can designate a single image that represents your post. This is referred to as the featured image. Some themes will make use of this, and some will not. The default theme, the one we've been using, is named Twenty Thirteen, and it uses the featured image right above the post on the front page. Depending on the theme you're using, its behavior with featured images can vary, but in general, every modern theme supports them in one way or the other. In order to set a featured image, go to the Edit Post screen. In the sidebar you'll see a box labeled Featured Image. Just click on the Set featured image link. After doing so, you'll see a pop-up window, very similar to the one we used while uploading images. Here, you can either upload a completely new image or select an existing image by clicking on it. All you have to do now is click on the Set featured image button in the bottom right corner. After completing the operation, you can finally see what your new image looks like on the front page. Also, keep in mind that WordPress uses featured images in multiple places not only the front page. And as mentioned above, much of this behavior depends on your current theme. Using the visual editor versus text editor WordPress comes with a visual editor, otherwise known as a WYSIWYG editor (pronounced wissy-wig, and stands for What You See Is What You Get). This is the default editor for typing and editing your posts. If you're comfortable with HTML, you may prefer to write and edit your posts using the text editor—particularly useful if you want to add special content or styling. To switch from the rich text editor to the text editor, click on the Text tab next to the Visual tab at the top of the content box: You'll see your post in all its raw HTML glory, and you'll get a new set of buttons that lets you quickly bold and italicize text, as well as add link code, image code, and so on. 
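As a rough illustration (the actual markup depends entirely on your content and theme), the Text tab for a short post like the lasagne example might show something similar to the following; the image filename, the lasagne-photo CSS class, and the link target are made up for the sake of the example:

Making lasagne starts with a good ragù and plenty of patience.

<img class="alignleft size-medium lasagne-photo" src="http://yoursite.com/wp-content/uploads/2013/11/lasagne-300x225.jpg" alt="A tray of freshly baked lasagne" width="300" height="225" />

You can read about the <a href="http://yoursite.com/ingredients/" title="Ingredients">ingredients</a> I used in an earlier post.

Note that the paragraphs have no <p> tags around them; the next paragraph explains why.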
You can make changes and swap back and forth between the tabs to see the result. Even though the text editor allows you to use some HTML elements, it's not a fully fledged HTML support. For instance, using the <p> tags is not necessary in the text editor, as they will be stripped by default. In order to create a new paragraph in the text editor, all you have to do is press the Enter key twice. That being said, at the same time, the text editor is currently the only way to use HTML tables in WordPress (within posts and pages). You can easily place your table content inside the <table><tr><td> tags and WordPress won't alter it in any way, effectively allowing you to create the exact table you want. Another thing the text editor is most commonly used for is introducing custom HTML parameters in the <img /> and <a> tags and also custom CSS classes in other popular tags. Some content creators actually prefer working with the text editor rather than the visual editor because it gives them much more control and more certainty regarding the way their content is going to be presented on the frontend. Lead and body One of many interesting publishing features WordPress has to offer is the concept of the lead and the body of the post. This may sound like a strange thing, but it's actually quite simple. When you're publishing a new post, you don't necessarily want to display its whole contents right away on the front page. A much more user-friendly approach is to display only the lead, and then display the complete post under its individual URL. Achieving this in WordPress is very simple. All you have to do is use the Insert More Tag button available in the visual editor (or the more button in the text editor). Simply place your cursor exactly where you want to break your post (the text before the cursor will become the lead) and then click on the Insert More Tag button: An alternative way of using this tag is to switch to the text editor and input the tag manually, which is <!--more-->. Both approaches produce the same result. Clicking on the main Update button will save the changes. On the front page, most WordPress themes display such posts by presenting the lead along with a Continue reading link, and then the whole post (both the lead and the rest of the post) is displayed under the post's individual URL. Drafts, pending articles, timestamps, and managing posts There are four additional, simple but common, items I'd like to cover in this section: drafts, pending articles, timestamps, and managing posts. Drafts WordPress gives you the option to save a draft of your post so that you don't have to publish it right away but can still save your work. If you've started writing a post and want to save a draft, just click on the Save Draft button at the right (in the Publish box), instead of the Publish button. Even if you don't click on the Save Draft button, WordPress will attempt to save a draft of your post for you, about once a minute. You'll see this in the area just below the content box. The text will say Saving Draft... and then show the time of the last draft saved: At this point, after a manual save or an autosave, you can leave the Edit Post page and do other things. You'll be able to access all of your draft posts from Dashboard or from the Edit Posts page. In essence, drafts are meant to hold your "work in progress" which means all the articles that haven't been finished yet, or haven't even been started yet, and obviously everything in between. 
Pending articles Pending articles is a functionality that's going to be a lot more helpful to people working with multi-author blogs, rather than single-author blogs. The thing is that in a bigger publishing structure, there are individuals responsible for different areas of the publishing process. WordPress, being a quality tool, supports such a structure by providing a way to save articles as Pending Review. In an editor-author relationship, if an editor sees a post marked as Pending Review, they know that they should have a look at it and prepare it for the final publication. That's it for the theory, and now how to do it. While creating a new post, click on the Edit link right next to the Status: Draft label: Right after doing so, you'll be presented with a new drop-down menu from which you can select Pending Review and then click on the OK button. Now just click on the Save as Pending button that will appear in place of the old Save Draft button, and you have a shiny new article that's pending review. Timestamps WordPress will also let you alter the timestamp of your post. This is useful if you are writing a post today that you wish you'd published yesterday, or if you're writing a post in advance and don't want it to show up until the right day. By default, the timestamp will be set to the moment you publish your post. To change it, just find the Publish box, and click on the Edit link (next to the calendar icon and Publish immediately), and fields will show up with the current date and time for you to change: Change the details, click on the OK button, and then click on Publish to publish your post (or save a draft). Managing posts If you want to see a list of your posts so that you can easily skim and manage them, you just need to go to the Edit Post page in the WP Admin and navigate to Posts in the main menu. Once you do so, there are many things you can do on this page as with every management page in the WP Admin.

Exploring streams

Packt
22 Nov 2013
15 min read
(For more resources related to this topic, see here.) According to Bjarne Stoustrup in his book The C++ Programming Language, Third Edition: Designing and implementing a general input/output facility for a programming language is notoriously difficult... An I/O facility should be easy, convenient, and safe to use; efficient and flexible; and, above all, complete. It shouldn't surprise anyone that a design team, focused on providing efficient and easy I/O, has delivered such a facility through Node. Through a symmetrical and simple interface, which handles data buffers and stream events so that the implementer does not have to, Node's Stream module is the preferred way to manage asynchronous data streams for both internal modules and, hopefully, for the modules developers will create. A stream in Node is simply a sequence of bytes. At any time, a stream contains a buffer of bytes, and this buffer has a zero or greater length: Because each character in a stream is well defined, and because every type of digital data can be expressed in bytes, any part of a stream can be redirected, or "piped", to any other stream, different chunks of the stream can be sent do different handlers, and so on. In this way stream input and output interfaces are both flexible and predictable and can be easily coupled. Digital streams are well described using the analogy of fluids, where individual bytes (drops of water) are being pushed through a pipe. In Node, streams are objects representing data flows that can be written to and read from asynchronously. The Node philosophy is a non-blocking flow, I/O is handled via streams, and so the design of the Stream API naturally duplicates this general philosophy. In fact, there is no other way of interacting with streams except in an asynchronous, evented manner—you are prevented, by design, from blocking I/O. Five distinct base classes are exposed via the abstract Stream interface: Readable, Writable, Duplex, Transform, and PassThrough. Each base class inherits from EventEmitter, which we know of as an interface to which event listeners and emitters can be bound. As we will learn, and here will emphasize, the Stream interface is an abstract interface. An abstract interface functions as a kind of blueprint or definition, describing the features that must be built into each constructed instance of a Stream object. For example, a readable stream implementation is required to implement a public read method which delegates to the interface's internal _read method. In general, all stream implementations should follow these guidelines: As long as data exists to send, write to a stream until that operation returns false, at which point the implementation should wait for a drain event, indicating that the buffered stream data has emptied Continue to call read until a null value is received, at which point wait for a readable event prior to resuming reads Several Node I/O modules are implemented as streams. Network sockets, file readers and writers, stdin and stdout, zlib, and so on. Similarly, when implementing a readable data source, or data reader, one should implement that interface as a Stream interface. It is important to note that as of Node 0.10.0 the Stream interface changed in some fundamental ways. The Node team has done its best to implement backwards-compatible interfaces, such that (most) older programs will continue to function without modification. 
In this article we will not spend any time discussing the specific features of this older API, focusing on the current (and future) design. The reader is encouraged to consult Node's online documentation for information on migrating older programs. Implementing readable streams Streams producing data that another process may have an interest in are normally implemented using a Readable stream. A Readable stream saves the implementer all the work of managing the read queue, handling the emitting of data events, and so on. To create a Readable stream: var stream = require('stream'); var readable = new stream.Readable({ encoding : "utf8", highWaterMark : 16000, objectMode: true }); As previously mentioned, Readable is exposed as a base class, which can be initialized through three options: encoding: Decode buffers into the specified encoding, defaulting to UTF-8. highWaterMark: Number of bytes to keep in the internal buffer before ceasing to read from the data source. The default is 16 KB. objectMode: Tell the stream to behave as a stream of objects instead of a stream of bytes, such as a stream of JSON objects instead of the bytes in a file. Default false. In the following example we create a mock Feed object whose instances will inherit the Readable stream interface. Our implementation need only implement the abstract _read method of Readable, which will push data to a consumer until there is nothing more to push, at which point it triggers the Readable stream to emit an "end" event by pushing a null value: var Feed = function(channel) { var readable = new stream.Readable({ encoding : "utf8" }); var news = [ "Big Win!", "Stocks Down!", "Actor Sad!" ]; readable._read = function() { if(news.length) { return readable.push(news.shift() + "n"); } readable.push(null); }; return readable; } Now that we have an implementation, a consumer might want to instantiate the stream and listen for stream events. Two key events are readable and end. The readable event is emitted as long as data is being pushed to the stream. It alerts the consumer to check for new data via the read method of Readable. Note again how the Readable implementation must provide a private _read method, which services the public read method exposed to the consumer API. The end event will be emitted whenever a null value is passed to the push method of our Readable implementation. Here we see a consumer using these methods to display new stream data, providing a notification when the stream has stopped sending data: var feed = new Feed(); feed.on("readable", function() { var data = feed.read(); data && process.stdout.write(data); }); feed.on("end", function() { console.log("No more news"); }); Similarly, we could implement a stream of objects through the use of the objectMode option: var readable = new stream.Readable({ objectMode : true }); var prices = [ { price : 1 }, { price : 2 } ]; ... readable.push(prices.shift()); // } { prices : 1 } // } { prices : 2 } Here we see that each read event is receiving an object, rather than a buffer or string. Finally, the read method of a Readable stream can be passed a single argument indicating the number of bytes to be read from the stream's internal buffer. For example, if it was desired that a file should be read one byte at a time, one might implement a consumer using a routine similar to: readable.push("Sequence of bytes"); ... feed.on("readable", function() { var character; while(character = feed.read(1)) { console.log(character); }; }); // } S // } e // } q // } ... 
Here it should be clear that the Readable stream's buffer was filled with a number of bytes all at once, but was read from discretely. Pushing and pulling We have seen how a Readable implementation will use push to populate the stream buffer for reading. When designing these implementations it is important to consider how volume is managed, at either end of the stream. Pushing more data into a stream than can be read can lead to complications around exceeding available space (memory). At the consumer end it is important to maintain awareness of termination events, and how to deal with pauses in the data stream. One might compare the behavior of data streams running through a network with that of water running through a hose. As with water through a hose, if a greater volume of data is being pushed into the read stream than can be efficiently drained out of the stream at the consumer end through read, a great deal of back pressure builds, causing a data backlog to begin accumulating in the stream object's buffer. Because we are dealing with strict mathematical limitations, read simply cannot be compelled to release this pressure by reading more quickly—there may be a hard limit on available memory space, or other limitation. As such, memory usage can grow dangerously high, buffers can overflow, and so forth. A stream implementation should therefore be aware of, and respond to, the response from a push operation. If the operation returns false this indicates that the implementation should cease reading from its source (and cease pushing) until the next _read request is made. In conjunction with the above, if there is no more data to push but more is expected in the future the implementation should push an empty string (""), which adds no data to the queue but does ensure a future readable event. While the most common treatment of a stream buffer is to push to it (queuing data in a line), there are occasions where one might want to place data on the front of the buffer (jumping the line). Node provides an unshift operation for these cases, which behavior is identical to push, outside of the aforementioned difference in buffer placement. Writable streams A Writable stream is responsible for accepting some value (a stream of bytes, a string) and writing that data to a destination. Streaming data into a file container is a common use case. To create a Writable stream: var stream = require('stream'); var readable = new stream.Writable({ highWaterMark : 16000, decodeStrings: true }); The Writable streams constructor can be instantiated with two options: highWaterMark: The maximum number of bytes the stream's buffer will accept prior to returning false on writes. Default is 16 KB decodeStrings: Whether to convert strings into buffers before writing. Default is true. As with Readable streams, custom Writable stream implementations must implement a _write handler, which will be passed the arguments sent to the write method of instances. One should think of a Writable stream as a data target, such as for a file you are uploading. Conceptually this is not unlike the implementation of push in a Readable stream, where one pushes data until the data source is exhausted, passing null to terminate reading. 
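Before moving on to the Writable example, here is a minimal, illustrative sketch of the backpressure handling described in the Pushing and pulling section above: a Readable implementation that stops pushing as soon as push returns false and resumes only when _read is called again. The counting data source is invented purely for the example.

var stream = require('stream');

var counter = new stream.Readable({ encoding: "utf8", highWaterMark: 16 });
var current = 0;

counter._read = function() {
  var keepPushing = true;
  // Push until the source runs out, or until push() returns false, which
  // signals that the internal buffer is full; Node will call _read again
  // once the consumer has drained it.
  while (keepPushing && current < 1000) {
    keepPushing = counter.push(current + "\n");
  }
  if (current >= 1000) {
    counter.push(null); // no more data, triggers the end event
  }
};

counter.pipe(process.stdout);

With that pattern in mind, let's return to Writable streams and their _write handler.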
For example, here we write 100 bytes to stdout: var stream = require('stream'); var writable = new stream.Writable({ decodeStrings: false }); writable._write = function(chunk, encoding, callback) { console.log(chunk); callback(); } var w = writable.write(new Buffer(100)); writable.end(); console.log(w); // Will be `true` There are two key things to note here. First, our _write implementation fires the callback function immediately after writing, a callback that is always present, regardless of whether the instance write method is passed a callback directly. This call is important for indicating the status of the write attempt, whether a failure (error) or a success. Second, the call to write returned true. This indicates that the internal buffer of the Writable implementation has been emptied after executing the requested write. What if we sent a very large amount of data, enough to exceed the default size of the internal buffer? Modifying the above example, the following would return false: var w = writable.write(new Buffer(16384)); console.log(w); // Will be 'false' The reason this write returns false is that it has reached the highWaterMark option—default value of 16 KB (16 * 1024). If we changed this value to 16383, write would again return true (or one could simply increase its value). What to do when write returns false? One should certainly not continue to send data! Returning to our metaphor of water in a hose: when the stream is full, one should wait for it to drain prior to sending more data. Node's Stream implementation will emit a drain event whenever it is safe to write again. When write returns false listen for the drain event before sending more data. Putting together what we have learned, let's create a Writable stream with a highWaterMark value of 10 bytes. We will send a buffer containing more than 10 bytes (composed of A characters) to this stream, triggering a drain event, at which point we write a single Z character. It should be clear from this example that Node's Stream implementation is managing the buffer overflow of our original payload, warning the original write method of this overflow, performing a controlled depletion of the internal buffer, and notifying us when it is safe to write again: var stream = require('stream'); var writable = new stream.Writable({ highWaterMark: 10 }); writable._write = function(chunk, encoding, callback) { process.stdout.write(chunk); callback(); } writable.on("drain", function() { writable.write("Zn"); }); var buf = new Buffer(20, "utf8"); buf.fill("A"); console.log(writable.write(buf.toString())); // false The result should be a string of 20 A characters, followed by false, then followed by the character Z. The fluid data in a Readable stream can be easily redirected to a Writable stream. For example, the following code will take any data sent by a terminal (stdin is a Readable stream) and pass it to the destination Writable stream, stdout: process.stdin.pipe(process.stdout); Whenever a Writable stream is passed to a Readable stream's pipe method, a pipe event will fire. Similarly, when a Writable stream is removed as a destination for a Readable stream, the unpipe event fires. To remove a pipe, use the following: unpipe(destination stream)   Duplex streams A duplex stream is both readable and writeable. 
For instance, a TCP server created in Node exposes a socket that can be both read from and written to: var stream = require("stream"); var net = require("net"); net .createServer(function(socket) { socket.write("Go ahead and type something!"); socket.on("readable", function() { process.stdout.write(this.read()) }); }) .listen(8080); When executed, this code will create a TCP server that can be connected to via Telnet: telnet 127.0.0.1 8080 Upon connection, the connecting terminal will print out Go ahead and type something! —writing to the socket. Any text entered in the connecting terminal will be echoed to the stdout of the terminal running the TCP server (reading from the socket). This implementation of a bi-directional (duplex) communication protocol demonstrates clearly how independent processes can form the nodes of a complex and responsive application, whether communicating across a network or within the scope of a single process. The options sent when constructing a Duplex instance merge those sent to Readable and Writable streams, with no additional parameters. Indeed, this stream type simply assumes both roles, and the rules for interacting with it follow the rules for the interactive mode being used. As a Duplex stream assumes both read and write roles, any implementation is required to implement both _write and _read methods, again following the standard implementation details given for the relevant stream type. Transforming streams On occasion stream data needs to be processed, often in cases where one is writing some sort of binary protocol or other "on the fly" data transformation. A Transform stream is designed for this purpose, functioning as a Duplex stream that sits between a Readable stream and a Writable stream. A Transform stream is initialized using the same options used to initialize a typical Duplex stream. Where Transform differs from a normal Duplex stream is in its requirement that the custom implementation merely provide a _transform method, excluding the _write and _read method requirement. The _transform method will receive three arguments, first the sent buffer, an optional encoding argument, and finally a callback which _transform is expected to call when the transformation is complete: _transform = function(buffer, encoding, cb) { var transformation = "..."; this.push(transformation) cb(); } Let's imagine a program that wishes to convert ASCII (American Standard Code for Information Interchange) codes into ASCII characters, receiving input from stdin. We would simply pipe our input to a Transform stream, then piping its output to stdout: var stream = require('stream'); var converter = new stream.Transform(); converter._transform = function(num, encoding, cb) { this.push(String.fromCharCode(new Number(num)) + "n") cb(); } process.stdin.pipe(converter).pipe(process.stdout); Interacting with this program might produce an output resembling the following: 65 A 66 B 256 A 257 a   Using PassThrough streams This sort of stream is a trivial implementation of a Transform stream, which simply passes received input bytes through to an output stream. This is useful if one doesn't require any transformation of the input data, and simply wants to easily pipe a Readable stream to a Writable stream. PassThrough streams have benefits similar to JavaScript's anonymous functions, making it easy to assert minimal functionality without too much fuss. For example, it is not necessary to implement an abstract base class, as one does with for the _read method of a Readable stream. 
Consider the following use of a PassThrough stream as an event spy: var fs = require('fs'); var stream = require('stream'); var spy = new stream.PassThrough(); spy.on('end', function() { console.log("All data has been sent"); }); fs.createReadStream("./passthrough.js").pipe(spy).pipe(process.stdout); Summary As we have learned, Node's designers have succeeded in creating a simple, predictable, and convenient solution to the very difficult problem of enabling efficient I/O between disparate sources and targets. Its abstract Stream interface facilitates the instantiation of consistent readable and writable interfaces, and the extension of this interface into HTTP requests and responses, the filesystem, child processes, and other data channels makes stream programming with Node a pleasant experience. Resources for Article: Further resources on this subject: So, what is Node.js? [Article] Getting Started with Zombie.js [Article] So, what is KineticJS? [Article]

Working with Data Components

Packt
22 Nov 2013
15 min read
(For more resources related to this topic, see here.) Introducing the DataList component The DataList component displays a collection of data in the list layout with several display types and supports AJAX pagination. The DataList component iterates through a collection of data and renders its child components for each item. Let us see how to use <p:dataList>to display a list of tag names as an unordered list: <p:dataList value="#{tagController.tags}" var="tag" type="unordered" itemType="disc"> #{tag.label} </p:dataList> The preceding <p:dataList> component displays tag names as an unordered list of elements marked with disc type bullets. The valid type options are unordered, ordered, definition, and none. We can use type="unordered" to display items as an unordered collection along with various itemType options such as disc, circle, and square. By default, type is set to unordered and itemType is set to disc. We can set type="ordered" to display items as an ordered list with various itemType options such as decimal, A, a, and i representing numbers, uppercase letters, lowercase letters, and roman numbers respectively. Time for action – displaying unordered and ordered data using DataList Let us see how to display tag names as unordered and ordered lists with various itemType options. Create <p:dataList> components to display items as unordered and ordered lists using the following code: <h:form> <p:panel header="Unordered DataList"> <h:panelGrid columns="3"> <h:outputText value="Disc"/> <h:outputText value="Circle" /> <h:outputText value="Square" /> <p:dataList value="#{tagController.tags}" var="tag" itemType="disc"> #{tag.label} </p:dataList> <p:dataList value="#{tagController.tags}" var="tag" itemType="circle"> #{tag.label} </p:dataList> <p:dataList value="#{tagController.tags}" var="tag" itemType="square"> #{tag.label} </p:dataList> </h:panelGrid> </p:panel> <p:panel header="Ordered DataList"> <h:panelGrid columns="4"> <h:outputText value="Number"/> <h:outputText value="Uppercase Letter" /> <h:outputText value="Lowercase Letter" /> <h:outputText value="Roman Letter" /> <p:dataList value="#{tagController.tags}" var="tag" type="ordered"> #{tag.label} </p:dataList> <p:dataList value="#{tagController.tags}" var="tag" type="ordered" itemType="A"> #{tag.label} </p:dataList> <p:dataList value="#{tagController.tags}" var="tag" type="ordered" itemType="a"> #{tag.label} </p:dataList> <p:dataList value="#{tagController.tags}" var="tag" type="ordered" itemType="i"> #{tag.label} </p:dataList> </h:panelGrid> </p:panel> </h:form> Implement the TagController.getTags() method to return a collection of tag objects: public class TagController { private List<Tag> tags = null; public TagController() { tags = loadTagsFromDB(); } public List<Tag> getTags() { return tags; } } What just happened? We have created DataList components to display tag names as an unordered list using type="unordered" and as an ordered list using type="ordered" with various supported itemTypes values. This is shown in the following screenshot: Using DataList with pagination support DataList has built-in pagination support that can be enabled by setting paginator="true". By enabling pagination, the various page navigation options will be displayed using the default paginator template. We can customize the paginator template to display only the desired options. 
The paginator can be customized using the paginatorTemplate option that accepts the following keys of UI controls: FirstPageLink LastPageLink PreviousPageLink NextPageLink PageLinks CurrentPageReport RowsPerPageDropdown Note that {RowsPerPageDropdown} has its own template, and options to display is provided via the rowsPerPageTemplate attribute (for example, rowsPerPageTemplate="5,10,15"). Also, {CurrentPageReport} has its own template defined with the currentPageReportTemplate option. You can use the {currentPage}, {totalPages}, {totalRecords}, {startRecord}, and {endRecord} keywords within the currentPageReport template. The default is "{currentPage} of {totalPages}". The default paginator template is "{FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink}". We can customize the paginator template to display only the desired options. For example: {CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown} The paginator can be positioned using the paginatorPosition attribute in three different locations: top, bottom, or both(default). The DataList component provides the following attributes for customization: rows: This is the number of rows to be displayed per page. first: This specifies the index of the first row to be displayed. The default is 0. paginator: This enables pagination. The default is false. paginatorTemplate: This is the template of the paginator. rowsPerPageTemplate: This is the template of the rowsPerPage dropdown. currentPageReportTemplate: This is the template of the currentPageReport UI. pageLinks: This specifies the maximum number of page links to display. The default value is 10. paginatorAlwaysVisible: This defines if paginator should be hidden when the total data count is less than the number of rows per page. The default is true. rowIndexVar: This specifies the name of the iterator to refer to for each row index. varStatus: This specifies the name of the exported request scoped variable to represent the state of the iteration same as in <ui:repeat> attribute varStatus. Time for action – using DataList with pagination Let us see how we can use the DataList component's pagination support to display five tags per page. Create a DataList component with pagination support along with custom paginatorTemplate: <p:panel header="DataList Pagination"> <p:dataList value="#{tagController.tags}" var="tag" id="tags" type="none" paginator="true" rows="5" paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}" rowsPerPageTemplate="5,10,15"> <f:facet name="header"> Tags </f:facet> <h:outputText value="#{tag.id} - #{tag.label}" style="margin-left:10px" /> <br/> </p:dataList> </p:panel> What just happened? We have created a DataList component along with pagination support by setting paginator="true". We have customized the paginator template to display additional information such as CurrentPageReport and RowsPerPageDropdown. Also, we have used the rowsPerPageTemplate attribute to specify the values for RowsPerPageDropdown. 
The following screenshot displays the result: Displaying tabular data using the DataTable component DataTable is an enhanced version of the standard DataTable that provides various additional features such as: Pagination Lazy loading Sorting Filtering Row selection Inline row/cell editing Conditional styling Expandable rows Grouping and SubTable and many more In our TechBuzz application, the administrator can view a list of users and enable/disable user accounts. First, let us see how we can display list of users using basic DataTable as follows: <p:dataTable id="usersTbl" var="user" value="#{adminController.users}"> <f:facet name="header"> List of Users </f:facet> <p:column headerText="Id"> <h:outputText value="#{user.id}" /> </p:column> <p:column headerText="Email"> <h:outputText value="#{user.emailId}" /> </p:column> <p:column headerText="FirstName"> <h:outputText value="#{user.firstName}" /> </p:column> <p:column headerText="Disabled"> <h:outputText value="#{user.disabled}" /> </p:column> <f:facet name="footer"> Total no. of Users: #{fn:length(adminController.users)}. </f:facet> </p:dataTable> The following screenshot shows us the result: PrimeFaces 4.0 introduced the Sticky component and provides out-of-the-box support for DataTable to make the header as sticky while scrolling using the stickyHeader attribute: <p:dataTable var="user" value="#{adminController.users}" stickyHeader="true"> ... </p:dataTable> Using pagination support If there are a large number of users, we may want to display users in a page-by-page style. DataTable has in-built support for pagination. Time for action – using DataTable with pagination Let us see how we can display five users per page using pagination. Create a DataTable component using pagination to display five records per page, using the following code: <p:dataTable id="usersTbl" var="user" value="#{adminController.users}" paginator="true" rows="5" paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}" currentPageReportTemplate="( {startRecord} - {endRecord}) of {totalRecords} Records." rowsPerPageTemplate="5,10,15"> <p:column headerText="Id"> <h:outputText value="#{user.id}" /> </p:column> <p:column headerText="Email"> <h:outputText value="#{user.emailId}" /> </p:column> <p:column headerText="FirstName"> <h:outputText value="#{user.firstName}" /> </p:column> <p:column headerText="Disabled"> <h:outputText value="#{user.disabled}" /> </p:column> </p:dataTable> What just happened? We have created a DataTable component with the pagination feature to display five rows per page. Also, we have customized the paginator template and provided an option to change the page size dynamically using the rowsPerPageTemplate attribute. Using columns sorting support DataTable comes with built-in support for sorting on a single column or multiple columns. 
You can define a column as sortable using the sortBy attribute as follows: <p:column headerText="FirstName" sortBy="#{user.firstName}"> <h:outputText value="#{user.firstName}" /> </p:column> You can specify the default sort column and sort order using the sortBy and sortOrder attributes on the <p:dataTable> element: <p:dataTable id="usersTbl2" var="user" value="#{adminController.users}" sortBy="#{user.firstName}" sortOrder="descending"> </p:dataTable> The <p:dataTable> component's default sorting algorithm uses a Java comparator, you can use your own customized sort method as well: <p:column headerText="FirstName" sortBy="#{user.firstName}" sortFunction="#{adminController.sortByFirstName}"> <h:outputText value="#{user.firstName}" /> </p:column> public int sortByFirstName(Object firstName1, Object firstName2) { //return -1, 0 , 1 if firstName1 is less than, equal to or greater than firstName2 respectively return ((String)firstName1).compareToIgnoreCase(((String)firstName2)); } By default, DataTable's sortMode is set to single, to enable sorting on multiple columns set sortMode to multiple. In multicolumns' sort mode, you can click on a column while the metakey (Ctrl or command) adds the column to the order group: <p:dataTable id="usersTbl" var="user" value="#{adminController.users}" sortMode="multiple"> </p:dataTable> Using column filtering support DataTable provides support for column-level filtering as well as global filtering (on all columns) and provides an option to hold the list of filtered records. In addition to the default match mode startsWith, we can use various other match modes such as endsWith, exact, and contains. Time for action – using DataTable with filtering Let us see how we can use filters with users' DataTable. Create a DataTable component and apply column-level filters and a global filter to apply filter on all columns: <p:dataTable widgetVar="userTable" var="user" value="#{adminController.users}" filteredValue="#{adminController.filteredUsers}" emptyMessage="No Users found for the given Filters"> <f:facet name="header"> <p:outputPanel> <h:outputText value="Search all Columns:" /> <p:inputText id="globalFilter" onkeyup="userTable.filter()" style="width:150px" /> </p:outputPanel> </f:facet> <p:column headerText="Id"> <h:outputText value="#{user.id}" /> </p:column> <p:column headerText="Email" filterBy="#{user.emailId}" footerText="contains" filterMatchMode="contains"> <h:outputText value="#{user.emailId}" /> </p:column> <p:column headerText="FirstName" filterBy="#{user.firstName}" footerText="startsWith"> <h:outputText value="#{user.firstName}" /> </p:column> <p:column headerText="LastName" filterBy="#{user.lastName}" filterMatchMode="endsWith" footerText="endsWith"> <h:outputText value="#{user.lastName}" /> </p:column> <p:column headerText="Disabled" filterBy="#{user.disabled}" filterOptions="#{adminController.userStatusOptions}" filterMatchMode="exact" footerText="exact"> <h:outputText value="#{user.disabled}" /> </p:column> </p:dataTable> Initialize userStatusOptions in AdminController ManagedBean. 
@ManagedBean
@ViewScoped
public class AdminController {

  private List<User> users = null;
  private List<User> filteredUsers = null;
  private SelectItem[] userStatusOptions;

  public AdminController() {
    users = loadAllUsersFromDB();
    this.userStatusOptions = new SelectItem[3];
    this.userStatusOptions[0] = new SelectItem("", "Select");
    this.userStatusOptions[1] = new SelectItem("true", "True");
    this.userStatusOptions[2] = new SelectItem("false", "False");
  }

  // setters and getters
}

What just happened?

We have used various filterMatchMode instances, such as startsWith, endsWith, and contains, while applying column-level filters. We have used the filterOptions attribute to specify the predefined filter values, which are displayed as a select drop-down list. As we have specified filteredValue="#{adminController.filteredUsers}", once the filters are applied, the filtered users list will be populated into the filteredUsers property. The following is the resultant screenshot:

Since PrimeFaces Version 4.0, we can specify the sortBy and filterBy properties as sortBy="emailId" and filterBy="emailId" instead of sortBy="#{user.emailId}" and filterBy="#{user.emailId}".

A couple of important tips

It is suggested to use a scope longer than the request, such as the view scope, to hold the filteredValue attribute so that the filtered list is still accessible after filtering. The filter located in the header is a global one that applies to all fields; it is implemented by calling the client-side API method filter(). The important part is to specify the ID of the input text as globalFilter, which is a reserved identifier for DataTable.

Selecting DataTable rows

Selecting one or more rows from a table and performing operations such as editing or deleting them is a very common requirement. The DataTable component provides several ways to select rows.

Selecting a single row

We can use a PrimeFaces Command component, such as commandButton or commandLink, and bind the selected row to a server-side property using <f:setPropertyActionListener>, shown as follows:

<p:dataTable id="usersTbl" var="user" value="#{adminController.users}">
  <!-- Column definitions -->
  <p:column style="width:20px;">
    <p:commandButton id="selectButton" update=":form:userDetails" icon="ui-icon-search" title="View">
      <f:setPropertyActionListener value="#{user}" target="#{adminController.selectedUser}" />
    </p:commandButton>
  </p:column>
</p:dataTable>

<h:panelGrid id="userDetails" columns="2">
  <h:outputText value="Id:" />
  <h:outputText value="#{adminController.selectedUser.id}" />
  <h:outputText value="Email:" />
  <h:outputText value="#{adminController.selectedUser.emailId}" />
</h:panelGrid>

Selecting rows using a row click

Instead of having a separate button to trigger binding of the selected row to a server-side property, PrimeFaces provides another, simpler way to bind the selected row by using the selectionMode, selection, and rowKey attributes.
Also, we can use the rowSelect and rowUnselect events to update other components based on the selected row, shown as follows:

<p:dataTable var="user" value="#{adminController.users}" selectionMode="single"
  selection="#{adminController.selectedUser}" rowKey="#{user.id}">
  <p:ajax event="rowSelect" listener="#{adminController.onRowSelect}" update=":form:userDetails"/>
  <p:ajax event="rowUnselect" listener="#{adminController.onRowUnselect}" update=":form:userDetails"/>
  <!-- Column definitions -->
</p:dataTable>

<h:panelGrid id="userDetails" columns="2">
  <h:outputText value="Id:" />
  <h:outputText value="#{adminController.selectedUser.id}" />
  <h:outputText value="Email:" />
  <h:outputText value="#{adminController.selectedUser.emailId}" />
</h:panelGrid>

Similarly, we can select multiple rows using selectionMode="multiple" and bind the selection attribute to an array or list of user objects:

<p:dataTable var="user" value="#{adminController.users}" selectionMode="multiple"
  selection="#{adminController.selectedUsers}" rowKey="#{user.id}">
  <!-- Column definitions -->
</p:dataTable>

rowKey should be a unique identifier from your data model; it is used by DataTable to find the selected rows. You can either define this key by using the rowKey attribute or by binding a data model that implements org.primefaces.model.SelectableDataModel.

When the multiple selection mode is enabled, we need to hold the Ctrl or command key and click on the rows to select multiple rows. If we click on a row without holding the Ctrl or command key, the previous selection will be cleared and only the last clicked row will be selected. We can customize this behavior using the rowSelectMode attribute. If you set rowSelectMode="add", clicking on a row will keep the previous selection and add the currently clicked row, even if you don't hold the Ctrl or command key. The default rowSelectMode value is new. We can disable the row selection feature by setting disabledSelection="true".

Selecting rows using a radio button / checkbox

Another very common scenario is having a radio button or checkbox for each row, so that the user can select one or more rows and then perform actions such as edit or delete. The DataTable component provides radio-button-based single row selection using a nested <p:column> element with selectionMode="single":

<p:dataTable var="user" value="#{adminController.users}" selection="#{adminController.selectedUser}" rowKey="#{user.id}">
  <p:column selectionMode="single"/>
  <!-- Column definitions -->
</p:dataTable>

The DataTable component also provides checkbox-based multiple row selection using a nested <p:column> element with selectionMode="multiple":

<p:dataTable var="user" value="#{adminController.users}" selection="#{adminController.selectedUsers}" rowKey="#{user.id}">
  <p:column selectionMode="multiple"/>
  <!-- Column definitions -->
</p:dataTable>

In our TechBuzz application, the administrator would like a facility to select multiple users and disable them in one go. Let us see how we can implement this using checkbox-based multiple row selection.

Platform as a Service

Packt
21 Nov 2013
5 min read
(For more resources related to this topic, see here.)

Platform as a Service is a very interesting take on the traditional cloud computing models. While there are many (often conflicting) definitions of a PaaS, for all practical purposes, PaaS provides a complete platform and environment to build and host applications or services. The emphasis is clearly on providing an end-to-end, precreated environment to develop and deploy the application that automatically scales as required. PaaS packs together all the necessary components such as an operating system, database, programming language, libraries, web or application container, and a storage or hosting option. PaaS offerings vary, and their chargebacks depend on what is utilized by the end user. There are excellent public offerings of PaaS such as Google App Engine, Heroku, Microsoft Azure, and Amazon Elastic Beanstalk. In a private cloud offering for an enterprise, it is possible to implement a similar PaaS environment.

Out of the various possibilities, we will focus on building a Database as a Service (DBaaS) infrastructure using Oracle Enterprise Manager. DBaaS is sometimes seen as a mix of PaaS and SaaS, depending on the kind of service it provides. A DBaaS that provides services such as a database leans more towards its PaaS legacy, but if it provides a service such as Business Intelligence, it takes more of a SaaS form. Oracle Enterprise Manager enables self-service provisioning of virtualized database instances out of a common shared database instance or cluster. Oracle Database is built to be clustered, and this makes it an easy fit for a robust DBaaS platform.

Setting up the PaaS infrastructure

Before we go about implementing a DBaaS, we will need to make sure our common platform is up and working. We will now check how we can create a PaaS Zone.

Creating a PaaS Zone

Enterprise Manager groups host or Oracle VM Manager Zones into PaaS Infrastructure Zones. You will need to have at least one PaaS Zone before you can add more features into the setup. To create a PaaS Zone, make sure that you have the following:

- The EM_CLOUD_ADMINISTRATOR, EM_SSA_ADMINISTRATOR, and EM_SSA_USER roles created
- A software library

To set up a PaaS Infrastructure Zone, perform the following steps:

1. Navigate to Setup | Cloud | PaaS Infrastructure Zone.
2. Click on Create in the PaaS Infrastructure Zone main page.
3. Enter the necessary details for the PaaS Infrastructure Zone, such as Name and Description.
4. Based on the type of members you want to add to this zone, select either of the following member types:
   - Host: This option will only allow host targets to be part of this zone. Also, make sure you provide the necessary details for the placement policy constraints defined per host. These values are used to prevent overutilization of hosts that are already being heavily used. You can set a percentage threshold for Maximum CPU Utilization and Maximum Memory Allocation. Any host exceeding this threshold will not be used for provisioning.
   - OVM Zone: This option will allow you to add Oracle Virtual Manager Zone targets.

If you select Host at this stage, you will see the following page:

Click on the + button to add named credentials, and make sure you click on the Test Credentials button to verify each credential. These named credentials must be global and available on all the hosts in this zone. Click on the Add button to add target hosts to this zone.
If you selected OVM Zone in the previous screen (step 1 of 4), you will be presented with the following screen:

Click on the Add button to add roles that can access this PaaS Infrastructure Zone.

Once you have created a PaaS Infrastructure Zone, you can proceed with setting up the necessary pieces for a DBaaS. However, time and again you might want to edit or review your PaaS Infrastructure Zone. To view and manage your PaaS Infrastructure Zones, navigate to Enterprise Menu | Cloud | Middleware and Database Cloud | PaaS Infrastructure Zones. From this page you can create, edit, delete, or view more details for a PaaS Infrastructure Zone. Clicking on the PaaS Infrastructure Zone link will display a detailed drill-down page with quite a few details related to that zone. The page is shown as follows:

This page shows a lot of very useful details about the zone. Some of them are listed as follows:

- General: This section shows stats for this zone, with details such as the total number of software pools, Oracle VM zones, member types (hosts or Oracle VM Zones), and other related details.
- CPU and Memory: This section gives an overview of CPU and memory utilization across all servers in the zone.
- Issues: This section shows incidents and problems for the target. This is a handy summary to check whether there are any issues that need attention.
- Request Summary: This section shows the status of requests currently being processed.
- Software Pool Summary: This section shows the name and type of each software pool in the zone.
- Unallocated Servers: This section shows a list of servers that are not associated with any software pool.
- Members: This section shows the members of the zone and their member types.
- Service Template Summary: This section shows the service templates associated with the zone.

Summary

We saw in this article how PaaS plays a vital role in the structure of a DBaaS architecture.

Resources for Article: Further resources on this subject: What is Oracle Public Cloud? [Article] Features of CloudFlare [Article] Oracle Tools and Products [Article]

Our First Machine Learning Method – Linear Classification

Packt
21 Nov 2013
10 min read
(For more resources related to this topic, see here.)

To get a grip on the problem of machine learning in scikit-learn, we will start with a very simple machine learning problem: we will try to predict the Iris flower species using only two attributes: sepal width and sepal length. This is an instance of a classification problem, where we want to assign a label (a value taken from a discrete set) to an item according to its features.

Let's first build our training dataset: a subset of the original sample, represented by the two attributes we selected and their respective target values. After importing the dataset, we will randomly select about 75 percent of the instances and reserve the remaining ones (the evaluation dataset) for evaluation purposes (we will see later why we should always do that):

>>> from sklearn.cross_validation import train_test_split
>>> from sklearn import preprocessing
>>> # Get dataset with only the first two attributes
>>> X, y = X_iris[:, :2], y_iris
>>> # Split the dataset into a training and a testing set
>>> # Test set will be the 25% taken randomly
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=33)
>>> print X_train.shape, y_train.shape
(112, 2) (112,)
>>> # Standardize the features
>>> scaler = preprocessing.StandardScaler().fit(X_train)
>>> X_train = scaler.transform(X_train)
>>> X_test = scaler.transform(X_test)

The train_test_split function automatically builds the training and evaluation datasets, randomly selecting the samples. Why not just select the first 112 examples? Because the instance ordering within the sample could matter: the first instances could be different from the last ones. In fact, if you look at the Iris dataset, the instances are ordered by their target class, which implies that the proportion of the 0 and 1 classes would be higher in such a training set compared with that of the original dataset. We always want our training data to be a representative sample of the population it represents.

The last three lines of the previous code modify the training set in a process usually called feature scaling. For each feature, we calculate the average, subtract the mean value from the feature value, and divide the result by the standard deviation. After scaling, each feature will have a zero average and a standard deviation of one. This standardization of values (which does not change their distribution, as you could verify by plotting the X values before and after scaling) is a common requirement of machine learning methods; it prevents features with large values from weighing too much in the final results.

Now, let's take a look at how our training instances are distributed in the two-dimensional space generated by the learning features. pyplot, from the matplotlib library, will help us with this:

>>> import matplotlib.pyplot as plt
>>> colors = ['red', 'greenyellow', 'blue']
>>> for i in xrange(len(colors)):
>>>     xs = X_train[:, 0][y_train == i]
>>>     ys = X_train[:, 1][y_train == i]
>>>     plt.scatter(xs, ys, c=colors[i])
>>> plt.legend(iris.target_names)
>>> plt.xlabel('Sepal length')
>>> plt.ylabel('Sepal width')

The scatter function simply plots the first feature value (sepal length) of each instance versus its second feature value (sepal width) and uses the target class values to assign a different color to each class. This way, we can get a pretty good idea of how these attributes contribute to determining the target class.
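As a quick sanity check (this snippet is not part of the original article; it assumes the scaled X_train array from above and NumPy imported as np), we can confirm that each standardized feature now has a mean close to zero and a standard deviation close to one:

>>> import numpy as np
>>> # Each column is one feature; after StandardScaler the per-feature statistics should be ~0 and ~1
>>> print(X_train.mean(axis=0))   # expected: values very close to [0., 0.]
>>> print(X_train.std(axis=0))    # expected: values very close to [1., 1.]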
The following screenshot shows the resulting scatter plot:

Looking at the preceding screenshot, we can see that the separation between the red dots (corresponding to Iris setosa) and the green and blue dots (corresponding to the two other Iris species) is quite clear, while separating the green dots from the blue dots seems a very difficult task given the two features available. This is a very common scenario: one of the first questions we want to answer in a machine learning task is whether the feature set we are using is actually useful for the task we are solving, or whether we need to add new attributes or change our method.

Given the available data, let's, for a moment, redefine our learning task: suppose we aim, given an Iris flower instance, to predict whether it is a setosa or not. We have converted our problem into a binary classification task (that is, we only have two possible target classes). If we look at the picture, it seems that we could draw a straight line that correctly separates both sets (perhaps with the exception of one or two dots, which could lie on the incorrect side of the line). This is exactly what our first classification method, linear classification models, tries to do: build a line (or, more generally, a hyperplane in the feature space) that best separates both target classes, and use it as a decision boundary (that is, the class membership depends on which side of the hyperplane the instance lies).

To implement linear classification, we will use the SGDClassifier from scikit-learn. SGD stands for Stochastic Gradient Descent, a very popular numerical procedure to find the local minimum of a function (in this case, the loss function, which measures how far every instance is from our boundary). The algorithm will learn the coefficients of the hyperplane by minimizing the loss function.

To use any method in scikit-learn, we must first create the corresponding classifier object, initialize its parameters, and train the model that better fits the training data. You will see as you advance that this procedure is pretty much the same for what initially seem to be very different tasks.

>>> from sklearn.linear_model import SGDClassifier
>>> clf = SGDClassifier()
>>> clf.fit(X_train, y_train)

The SGDClassifier initialization function accepts several parameters. For the moment, we will use the default values, but keep in mind that these parameters could be very important, especially when you face more real-world tasks, where the number of instances (or even the number of attributes) could be very large.

The fit function is probably the most important one in scikit-learn. It receives the training data and the training classes, and builds the classifier. Every supervised learning method in scikit-learn implements this function.

What does the classifier look like in our linear model method? As we have already said, every future classification decision depends just on a hyperplane. That hyperplane is, then, our model. The coef_ attribute of the clf object (consider, for the moment, only the first row of the matrices) now holds the coefficients of the linear boundary, and the intercept_ attribute holds the point of intersection of the line with the y axis.
Let's print them:

>>> print clf.coef_
[[-28.53692691  15.05517618]
 [ -8.93789454  -8.13185613]
 [ 14.02830747 -12.80739966]]
>>> print clf.intercept_
[-17.62477802  -2.35658325  -9.7570213 ]

Indeed, in the real plane, with these three values (the first intercept and the first row of coefficients), we can draw a line represented by the following equation:

-17.62477802 - 28.53692691 * x1 + 15.05517618 * x2 = 0

Now, given x1 and x2 (our real-valued features), we just have to compute the value of the left-hand side of the equation: if the value is greater than zero, the point is above the decision boundary (the red side); otherwise, it will be beneath the line (the green or blue side). Our prediction algorithm will simply check this and predict the corresponding class for any new Iris flower.

But why does our coefficient matrix have three rows? Because we did not tell the method that we have changed our problem definition (how could we have done this?), and it is facing a three-class problem, not a binary decision problem. In this case, the classifier does the same thing we did: it converts the problem into three binary classification problems in a one-versus-all setting (it proposes three lines, each separating one class from the rest). The following code draws the three decision boundaries and lets us know if they worked as expected:

>>> import numpy as np
>>> x_min, x_max = X_train[:, 0].min() - .5, X_train[:, 0].max() + .5
>>> y_min, y_max = X_train[:, 1].min() - .5, X_train[:, 1].max() + .5
>>> xs = np.arange(x_min, x_max, 0.5)
>>> fig, axes = plt.subplots(1, 3)
>>> fig.set_size_inches(10, 6)
>>> for i in [0, 1, 2]:
>>>     axes[i].set_aspect('equal')
>>>     axes[i].set_title('Class ' + str(i) + ' versus the rest')
>>>     axes[i].set_xlabel('Sepal length')
>>>     axes[i].set_ylabel('Sepal width')
>>>     axes[i].set_xlim(x_min, x_max)
>>>     axes[i].set_ylim(y_min, y_max)
>>>     plt.sca(axes[i])
>>>     plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=plt.cm.prism)
>>>     ys = (-clf.intercept_[i] - xs * clf.coef_[i, 0]) / clf.coef_[i, 1]
>>>     plt.plot(xs, ys, hold=True)

The first plot shows the model built for our original binary problem. It looks like the line separates the Iris setosa from the rest quite well. For the other two tasks, as we expected, there are several points that lie on the wrong side of the hyperplane.

Now, the end of the story: suppose that we have a new flower with a sepal length of 4.7 and a sepal width of 3.1, and we want to predict its class. We just have to apply our brand new classifier to it (after normalizing!). The predict method takes an array of instances (in this case, with just one element) and returns a list of predicted classes:

>>> print clf.predict(scaler.transform([[4.7, 3.1]]))
[0]

If our classifier is right, this Iris flower is a setosa. You have probably noticed that we are predicting a class from the three possible classes, but linear models are essentially binary: something is missing. You are right. Our prediction procedure combines the results of the three binary classifiers and selects the class in which it is most confident. In this case, we select the class whose boundary line gives the largest signed distance to the instance. We can check that using the classifier's decision_function method:

>>> print clf.decision_function(scaler.transform([[4.7, 3.1]]))
[[ 19.73905808   8.13288449 -28.63499119]]
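To make the one-versus-all selection concrete, here is a minimal sketch (not part of the original article; it assumes the clf and scaler objects trained above and NumPy imported as np) that reproduces what predict does internally: it evaluates the three binary classifiers with decision_function and picks the class with the largest score:

>>> new_flower = scaler.transform([[4.7, 3.1]])
>>> scores = clf.decision_function(new_flower)   # one score per binary classifier
>>> print(np.argmax(scores, axis=1))             # index of the most confident classifier; should print [0]
>>> print(clf.predict(new_flower))               # should agree with the argmax above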
Summary

In this article, we included a very simple example of classification, trying to show the main steps for learning.

Resources for Article: Further resources on this subject: Python Testing: Installing the Robot Framework [Article] Inheritance in Python [Article] Python 3: Object-Oriented Design [Article]

Zurb Foundation – an Overview

Packt
21 Nov 2013
7 min read
(For more resources related to this topic, see here.)

Most importantly, you can apply your creativity to make the design your own. Foundation gives you the tools you need for this; then it gets out of the way and your site becomes your own. Especially when you advance to using Foundation's SASS variables, functions, and mixins, you have the ability to make your site your own unique creation.

Foundation's grid system

The foundation (pun intended) of Zurb Foundation is its grid system of rows and columns, much like a spreadsheet, a blank sheet of graph paper, or the tables we used to use for HTML layout. Think of it as the canvas upon which you design your website. Each cell is a content area that can be merged with other cells, beside or below it, to make larger content areas. A default installation of Foundation is based on twelve cells in a row. A column is made up of one or more individual cells.

Lay out a website

Let's put Foundation's grid system to work in an example. We'll build a basic website with a two-part header, a two-part content area, a sidebar, and a three-part footer area. With the simple techniques we demonstrate here, you can craft almost any layout you want.

Here is the mobile view

Foundation works best when you design for small devices first, so here is what we want our small device (mobile) view to look like:

This is the layout we want on mobile or small devices. But we've labeled the content areas with titles that describe where we want them on a regular desktop. By doing this, we are thinking ahead and creating a view ready for the desktop as well.

Here is the desktop view

Since a desktop display is typically wider than a mobile display, we have more horizontal space, and things that had to be presented vertically in the mobile view can be displayed horizontally in the desktop view. Here is how we want a regular desktop or laptop to display the same content areas:

These are not necessarily drawn to scale; it is the layout we are interested in. The two-part header went from being one above the other in the mobile view to being side by side in the desktop view. The header on the top went left and the bottom header went right. All of this makes perfect sense. However, the sidebar shifted from being above the content area in the mobile view to the right of it in the desktop view. That's not natural when rendering HTML. Something must have happened! The content areas, left and right, stayed the same in both views. And that's exactly what we wanted. The three-part footer got rearranged: the center footer appears to have slid down between the left and right footers. That makes sense from a design perspective, but it isn't natural from an HTML rendering perspective. Foundation provides the classes to easily make all this magic happen.

Here is the code

Unlike the early days of mobile design, where a separate website was built for mobile devices, with Foundation you build your site once and use classes to specify how it should look on both mobile and regular displays.
Here is the HTML code that generates the two layouts:

<header class="row">
  <div class="large-6 column">Header Left</div>
  <div class="large-6 column">Header Right</div>
</header>
<main class="row">
  <aside class="large-3 push-9 column">Sidebar Right</aside>
  <section class="large-9 pull-3 columns">
    <article class="row">
      <div class="small-9 column">Content Left</div>
      <div class="small-3 column">Content Right</div>
    </article>
  </section>
</main>
<footer class="row">
  <div class="small-6 small-centered large-4 large-uncentered push-4 column">Footer Center</div>
  <div class="small-6 large-4 pull-4 column">Footer Left</div>
  <div class="small-6 large-4 column">Footer Right</div>
</footer>

That's all there is to it. Replace the text we used for labels with real content and you have a design that displays on mobile and regular displays in the layouts we've shown in this article.

Toss in some widgets

What we've shown above is just the core of the Foundation framework. As a toolkit, it also includes numerous CSS components and JavaScript plugins. Foundation includes styles for labels, lists, and data tables. It has several navigation components, including Breadcrumbs, Pagination, Side Nav, and Sub Nav. You can add regular buttons, drop-down buttons, and button groups. You can make unique content areas with Block Grids, a special variation of the underlying grid. You can add images as thumbnails, put content into panels, present your video feed using the Flex Video component, easily add pricing tables, and represent progress bars. All these components only require CSS and are the easiest to integrate.

By tossing in Foundation's JavaScript plugins, you have even more capabilities. Plugins include things like Alerts, Tooltips, and Dropdowns; these can be used to pop up messages in various ways. The Section plugin is very powerful when you want to organize your content into horizontal or vertical tabs, or when you want horizontal or vertical navigation. Like most components and plugins, it understands the mobile and regular desktop views and adapts accordingly.

The Top Bar plugin is a favorite for many developers. It is a multilevel fly-out menu plugin: build your menu in HTML the way Top Bar expects, set it up with the appropriate classes, and it just works. Magellan and Joyride are two plugins that you can put to work to show your viewers where they are on a page or to help them navigate to various sections of a page.

Orbit is Foundation's slide presentation plugin; you often see sliders on the home pages of websites these days. Clearing is similar to Orbit except that it displays thumbnails of the images in a presentation below the main display window; a viewer clicks on a thumbnail to display the full image. Reveal is a plugin that allows you to put a link anywhere on your page; when the viewer clicks on it, a box pops up and extra content, which could even be an Orbit slider, is revealed. Interchange is one of the most recent additions to Foundation's plugin factory. With it you can selectively load images depending on the target environment, which lets you optimize bandwidth between your web server and your viewer's browser. Foundation also provides a great Forms plugin. On its own it is capable; with the additional Abide plugin you have a great deal of control over form layout and validation.

Summary

As you can see, Foundation is very capable of laying out web pages for mobile devices and regular displays. One set of code, two very different looks. And that's just the beginning.
Foundation's CSS components and JavaScript plugins can be placed on a web page in almost any content area. With these widgets you can have much more interaction with your viewers than you otherwise would. Put Foundation to work in your website today! Resources for Article: Further resources on this subject: Quick start – using Foundation 4 components for your first website [Article] Introduction to RWD frameworks [Article] Nesting, Extend, Placeholders, and Mixins [Article]

Foundations

Packt
20 Nov 2013
6 min read
(For more resources related to this topic, see here.)

Installation

If you do not have node installed, visit: http://nodejs.org/download/. There is also an installation guide on the node GitHub repository wiki if you prefer not to or cannot use an installer: https://github.com/joyent/node/wiki/Installation.

Let's install Express globally:

npm install -g express

If you have downloaded the source code, install its dependencies by running this command:

npm install

Testing Express with Mocha and SuperTest

Now that we have Express installed and our package.json file in place, we can begin to drive out our application with a test-first approach. We will now install two modules to assist us: mocha and supertest.

Mocha is a testing framework for node; it's flexible, has good async support, and allows you to run tests in both a TDD and a BDD style. It can also be used on both the client and server side. Let's install Mocha with the following command:

npm install -g mocha --save-dev

SuperTest is an integration testing framework that will allow us to easily write tests against a RESTful HTTP server. Let's install SuperTest:

npm install supertest --save-dev

Continuous testing with Mocha

One of the great things about working with a dynamic language, and one of the things that has drawn me to node, is the ability to easily do Test-Driven Development and continuous testing. Simply run Mocha with the -w watch switch and Mocha will respond when changes to our codebase are made, automatically rerunning the tests:

mocha -w

Extracting routes

Express supports multiple options for application structure. Extracting elements of an Express application into separate files is one option; a good candidate for this is routes. Let's extract our heartbeat route into ./lib/routes/heartbeat.js; the following listing simply exports the route as a function called index:

exports.index = function(req, res){
  res.json(200, 'OK');
};

Let's make a change to our Express server: we remove the anonymous function we pass to app.get for our route and replace it with a call to the function in the following listing. We import the heartbeat route and pass in a callback function, heartbeat.index:

var express = require('express')
  , http = require('http')
  , config = require('../configuration')
  , heartbeat = require('../routes/heartbeat')
  , app = express();

app.set('port', config.get('express:port'));
app.get('/heartbeat', heartbeat.index);
http.createServer(app).listen(app.get('port'));
module.exports = app;

404 handling middleware

In order to handle a 404 Not Found response, let's add a 404 Not Found middleware. Let's write a test, ./test/heartbeat.js; the content type returned should be JSON and the status code expected should be 404 Not Found:

describe('vision heartbeat api', function(){
  describe('when requesting resource /missing', function(){
    it('should respond with 404', function(done){
      request(app)
        .get('/missing')
        .expect('Content-Type', /json/)
        .expect(404, done);
    })
  });
});

Now, add the following middleware to ./lib/middleware/notFound.js. Here we export a function called index and call res.json, which returns a 404 status code and the message Not Found. The next parameter is not called, as our 404 middleware ends the request by returning a response.
Calling next would invoke the next middleware in our Express stack; we do not have any more middleware after this one. It's customary to add error middleware and 404 middleware as the last middleware in your server:

exports.index = function(req, res, next){
  res.json(404, 'Not Found.');
};

Now add the 404 Not Found middleware to ./lib/express/index.js:

var express = require('express')
  , http = require('http')
  , config = require('../configuration')
  , heartbeat = require('../routes/heartbeat')
  , notFound = require('../middleware/notFound')
  , app = express();

app.set('port', config.get('express:port'));
app.get('/heartbeat', heartbeat.index);
app.use(notFound.index);
http.createServer(app).listen(app.get('port'));
module.exports = app;

Logging middleware

Express comes with a logger middleware via Connect; it's very useful for debugging an Express application. Let's add it to our Express server ./lib/express/index.js:

var express = require('express')
  , http = require('http')
  , config = require('../configuration')
  , heartbeat = require('../routes/heartbeat')
  , notFound = require('../middleware/notFound')
  , app = express();

app.set('port', config.get('express:port'));
app.use(express.logger({ immediate: true, format: 'dev' }));
app.get('/heartbeat', heartbeat.index);
app.use(notFound.index);
http.createServer(app).listen(app.get('port'));
module.exports = app;

The immediate option will write a log line on request instead of on response. The dev option provides concise output colored by the response status. The logger middleware is placed high in the Express stack in order to log all requests.

Logging with Winston

We will now add logging to our application using Winston; let's install Winston:

npm install winston --save

The 404 middleware will need to log 404 Not Found, so let's create a simple logger module, ./lib/logger/index.js; the details of our logger will be configured with Nconf. We import Winston and the configuration modules. We define our Logger function, which constructs and returns a file logger (winston.transports.File) that we configure using values from our config. We default the logger's maximum size to 1 MB, with a maximum of three rotating files. We instantiate the Logger function, returning it as a singleton.

var winston = require('winston')
  , config = require('../configuration');

function Logger(){
  return winston.add(winston.transports.File, {
    filename: config.get('logger:filename'),
    maxsize: 1048576,
    maxFiles: 3,
    level: config.get('logger:level')
  });
}

module.exports = new Logger();

Let's add the logger configuration details to our config files ./config/development.json and ./config/test.json:

{
  "express": {
    "port": 3000
  },
  "logger" : {
    "filename": "logs/run.log",
    "level": "silly"
  }
}

Let's alter the ./lib/middleware/notFound.js middleware to log errors. We import our logger and log an error message via the logger when a 404 Not Found response is thrown:

var logger = require("../logger");

exports.index = function(req, res, next){
  logger.error('Not Found');
  res.json(404, 'Not Found');
};

Summary

This article has shown in detail, with all the commands, how Node.js and Express are installed. Testing Express with Mocha and SuperTest was covered in detail, and logging in our application was set up with middleware and Winston.
Resources for Article: Further resources on this subject: Spring Roo 1.1: Working with Roo-generated Web Applications [Article] Building tiny Web-applications in Ruby using Sinatra [Article] Develop PHP Web Applications with NetBeans, VirtualBox and Turnkey LAMP Appliance [Article]