How-To Tutorials - Web Development

Alfresco Web Scripts

Packt
06 Nov 2014
15 min read
In this article by Ramesh Chauhan, the author of Learning Alfresco Web Scripts, we will cover the following topics: Reasons to use web scripts Executing a web script from standalone Java program Invoking a web script from Alfresco Share DeclarativeWebScript versus AbstractWebScript (For more resources related to this topic, see here.) Reasons to use web scripts It's now time to discover the answer to the next question—why web scripts? There are various alternate approaches available to interact with the Alfresco repository, such as CMIS, SOAP-based web services, and web scripts. Generally, web scripts are always chosen as a preferred option among developers and architects when it comes to interacting with the Alfresco repository from an external application. Let's take a look at the various reasons behind choosing a web script as an option instead of CMIS and SOAP-based web services. In comparison with CMIS, web scripts are explained as follows: In general, CMIS is a generic implementation, and it basically provides a common set of services to interact with any content repository. It does not attempt to incorporate the services that expose all features of each and every content repository. It basically tries to cover a basic common set of functionalities for interacting with any content repository and provide the services to access such functionalities. Alfresco provides an implementation of CMIS for interacting with the Alfresco repository. Having a common set of repository functionalities exposed using CMIS implementation, it may be possible that sometimes CMIS will not do everything that you are aiming to do when working with the Alfresco repository. While with web scripts, it will be possible to do the things you are planning to implement and access the Alfresco repository as required. Hence, one of the best alternatives is to use Alfresco web scripts in this case and develop custom APIs as required, using the Alfresco web scripts. Another important thing to note is, with the transaction support of web scripts, it is possible to perform a set of operations together in a web script, whereas in CMIS, there is a limitation for the transaction usage. It is possible to execute each operation individually, but it is not possible to execute a set of operations together in a single transaction as possible in web scripts. 
SOAP-based web services are not preferable for the following reasons: they take a long time to develop, they depend on SOAP, they place heavier requirements on the client side, they need to maintain the resource directory, scalability is a challenge, and they only support XML. In comparison, web scripts have the following properties: there are no complex specifications, there is no dependency on SOAP, there is no need to maintain the resource directory, they are more scalable as there is no need to maintain session state, they are a lightweight implementation, they are simple and easy to develop, and they support multiple formats. From a developer's point of view, they can be easily developed using any text editor, no compilation is required when using a scripting language, no server restarts are needed when using a scripting language, and no complex installations are required. In essence, web scripts are a REST-based and powerful option to interact with the Alfresco repository in comparison to the traditional SOAP-based web services and CMIS alternatives. They provide RESTful access to the content residing in the Alfresco repository and uniform access to a wide range of client applications. They are easy to develop and provide some of the most useful features, such as no server restarts, no compilation, no complex installations, and no need for a specific tool to develop them. All these points make web scripts the most preferred choice among developers and architects when it comes to interacting with the Alfresco repository. Executing a web script from standalone Java program There are different options to invoke a web script from a Java program. Here, we will take a detailed walkthrough of the Apache commons HttpClient API with code snippets to understand how a web script can be executed from a Java program, and will briefly mention some other alternatives that can also be used to invoke web scripts from Java programs. HttpClient One way of executing a web script is to invoke it using the org.apache.commons.httpclient.HttpClient API. This class is available in commons-httpclient-3.1.jar. Executing a web script with the HttpClient API also requires commons-logging-*.jar and commons-codec-*.jar as supporting JARs. These JARs are available at the tomcat/webapps/alfresco/WEB-INF/lib location inside your Alfresco installation directory. You will need to include them in the build path for your project. We will try to execute the hello world web script using HttpClient from a standalone Java program. While using HttpClient, here are the general steps you need to follow: Create a new instance of HttpClient. The next step is to create an instance of the method (we will use GetMethod). The URL needs to be passed in the constructor of the method. Set any arguments if required. Provide the authentication details if required. Ask HttpClient to execute the method. Read the response status code and response. Finally, release the connection. Understanding how to invoke a web script using HttpClient Let's take a look at the following code snippet, considering the previously mentioned steps. In order to test this, you can create a standalone Java program with a main method, put the following code snippet in the Java program, and then modify the web script URLs/credentials as required. Comments are provided in the following code snippet for you to easily correlate the previous steps with the code: // Create a new instance of HttpClient HttpClient objHttpClient = new HttpClient(); // Create a new method instance as required. Here it is GetMethod.
GetMethod objGetMethod = new GetMethod("http://localhost:8080/alfresco/service/helloworld"); // Set querystring parameters if required. objGetMethod.setQueryString(new NameValuePair[] { new NameValuePair("name", "Ramesh")}); // Set the credentials if authentication is required. Credentials defaultcreds = new UsernamePasswordCredentials("admin","admin"); objHttpClient.getState().setCredentials(new AuthScope("localhost", 8080, AuthScope.ANY_REALM), defaultcreds); try { // Now, execute the method using HttpClient. int statusCode = objHttpClient.executeMethod(objGetMethod); if (statusCode != HttpStatus.SC_OK) { System.err.println("Method invocation failed: " + objGetMethod.getStatusLine()); } // Read the response body. byte[] responseBody = objGetMethod.getResponseBody(); // Print the response body. System.out.println(new String(responseBody)); } catch (HttpException e) { System.err.println("Http exception: " + e.getMessage()); e.printStackTrace(); } catch (IOException e) { System.err.println("IO exception transport error: " + e.getMessage()); e.printStackTrace(); } finally { // Release the method connection. objGetMethod.releaseConnection(); } Note that the Apache commons HttpClient is a legacy project now and is not being developed anymore. It has been replaced by the Apache HttpComponents project and its HttpClient and HttpCore modules. We have used HttpClient from the Apache commons client here to get an overall understanding. Some of the other options that you can use to invoke web scripts from a Java program are mentioned in the subsequent sections. URLConnection One option to execute a web script from a Java program is to use java.net.URLConnection. For more details, you can refer to http://docs.oracle.com/javase/tutorial/networking/urls/readingWriting.html. Apache HTTP components Another option to execute a web script from a Java program is to use the Apache HTTP components, which are the latest available APIs for HTTP communication. These components offer better performance and more flexibility, and are available in httpclient-*.jar and httpcore-*.jar. These JARs are available at the tomcat/webapps/alfresco/WEB-INF/lib location inside your Alfresco installation directory. For more details, refer to https://hc.apache.org/httpcomponents-client-4.3.x/quickstart.html to get an understanding of how to execute HTTP calls from a Java program. RestTemplate Another alternative would be to use org.springframework.web.client.RestTemplate, available in org.springframework.web-*.jar located at tomcat/webapps/alfresco/WEB-INF/lib inside your Alfresco installation directory. If you are using Alfresco Community 5, the RestTemplate class is available in spring-web-*.jar. Generally, RestTemplate is used in Spring-based services to invoke HTTP communication. Calling web scripts from Spring-based services If you need to invoke an Alfresco web script from Spring-based services, then you need to use RestTemplate to invoke HTTP calls. This is the most commonly used technique to execute HTTP calls from Spring-based classes. In order to do this, the following are the steps to be performed.
The code snippets are also provided: Define RestTemplate in your Spring context file: <bean id="restTemplate" class="org.springframework.web.client.RestTemplate" /> In the Spring context file, inject restTemplate into your Spring class as shown in the following example: <bean id="httpCommService" class="com.test.HTTPCallService"> <property name="restTemplate" ref="restTemplate" /> </bean> In the Java class, define the setter method for restTemplate as follows: private RestTemplate restTemplate; public void setRestTemplate(RestTemplate restTemplate) {    this.restTemplate = restTemplate; } In order to invoke a web script that has an authentication level set as user authentication, you can use RestTemplate in your Java class as shown in the following code snippet. The following code snippet is an example of invoking the hello world web script using RestTemplate from a Spring-based service: // setup authentication String plainCredentials = "admin:admin"; byte[] plainCredBytes = plainCredentials.getBytes(); byte[] base64CredBytes = Base64.encodeBase64(plainCredBytes); String base64Credentials = new String(base64CredBytes); // setup request headers HttpHeaders reqHeaders = new HttpHeaders(); reqHeaders.add("Authorization", "Basic " + base64Credentials); HttpEntity<String> requestEntity = new HttpEntity<String>(reqHeaders); // Execute method ResponseEntity<String> responseEntity = restTemplate.exchange("http://localhost:8080/alfresco/service/helloworld?name=Ramesh", HttpMethod.GET, requestEntity, String.class); System.out.println("Response:"+responseEntity.getBody()); Invoking a web script from Alfresco Share When working on customizing Alfresco Share, you will need to make calls to Alfresco repository web scripts. In Alfresco Share, you can invoke repository web scripts from two places. One is at the component level, from the presentation web scripts, and the other is from client-side JavaScript. Calling a web script from presentation web script JavaScript controller Alfresco Share renders the user interface using the presentation web scripts. These presentation web scripts make a call to the repository web script to render the repository data. The repository web script is called before the component rendering file (for example, get.html.ftl) loads. In an out-of-the-box Alfresco installation, you should be able to see the components' presentation web scripts available under tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts. When developing a custom component, you will be required to write a presentation web script. A presentation web script will make a call to the repository web script. You can make a call to the repository web script as follows: var response = remote.call("url of web script as defined in description document"); var obj = eval('(' + response + ')'); In the preceding code snippet, we have used the out-of-the-box available remote object to make a repository web script call. The important thing to notice is that we have to provide the URL of the web script as defined in the description document. There is no need to provide the initial part, such as the host or port name, application name, and service path, the way we do while calling a web script from a web browser. Once the response is received, the web script response can be parsed with the eval function. In the out-of-the-box code of Alfresco Share, you can find the presentation web scripts invoking the repository web scripts, as we have seen in the previous code snippet.
For example, take a look at the main() method in the site-members.get.js file, which is available at the tomcat/webapps/share/components/site-members location inside your Alfresco installation directory. You can take a look at the other JavaScript controller implementations for the out-of-the-box presentation web scripts available at tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts, which make repository web script calls using the previously mentioned technique. When specifying paths that reference the out-of-the-box web scripts, they are given starting with tomcat/webapps; this location is inside your Alfresco installation directory. Invoking a web script from client-side JavaScript The client-side JavaScript control file can be associated with components in Alfresco Share. If you need to make a repository web script call, you can do this from the client-side JavaScript control files generally located at tomcat/webapps/share/components. There are different ways you can make a repository web script call using a YUI-based client-side JavaScript file. The following are some of the ways to invoke a web script from client-side JavaScript files. References are also provided along with each of the ways, pointing to the Alfresco out-of-the-box implementation so that you can understand their usage practically: Alfresco.util.Ajax.request: Take a look at tomcat/webapps/share/components/console/groups.js and refer to the _removeUser function. Alfresco.util.Ajax.jsonRequest: Take a look at tomcat/webapps/share/components/documentlibrary/documentlist.js and refer to the onOptionSelect function. Alfresco.util.Ajax.jsonGet: To directly make a call to a GET web script, take a look at tomcat/webapps/share/components/console/groups.js and refer to the getParentGroups function. YAHOO.util.Connect.asyncRequest: Take a look at tomcat/webapps/share/components/documentlibrary/tree.js and refer to the _sortNodeChildren function. In alfresco.js, located at tomcat/webapps/share/js, a wrapper implementation of YAHOO.util.Connect.asyncRequest is provided, and the various available methods such as the ones we saw in the preceding list, Alfresco.util.Ajax.request, Alfresco.util.Ajax.jsonRequest, and Alfresco.util.Ajax.jsonGet, can be found in alfresco.js. Hence, the first three options in the previous list internally make a call using YAHOO.util.Connect.asyncRequest (the last option in the previous list) only. A short sketch of the first option appears at the end of this section, after the command-line example. Calling a web script from the command line Sometimes while working on your project, you might need to invoke a web script from a Linux machine, or create a shell script that invokes a web script. It is possible to invoke a web script from the command line using cURL, which is a valuable tool to use while working on web scripts. You can install cURL on Linux, Mac, or Windows and execute a web script from the command line. Refer to http://curl.haxx.se/ for more details on cURL. You will be required to install cURL first. On Linux, you can install cURL using apt-get; on Mac, you should be able to install it through MacPorts; and on Windows, you can install it using Cygwin. Once cURL is installed, you can invoke a web script from the command line as follows: curl -u admin:admin "http://localhost:8080/alfresco/service/helloworld?name=Ramesh" This will display the web script response.
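As promised earlier, here is a minimal sketch of the first client-side option, Alfresco.util.Ajax.request. It is an illustration rather than code from the book's sample: the web script name and the name parameter simply reuse the hello world example from earlier, and the exact configuration option names can vary slightly between Share versions, so treat them as assumptions to verify against your release.

// Hedged sketch: calling the hello world repository web script through the Share proxy.
Alfresco.util.Ajax.request({
   // PROXY_URI routes the call to the repository tier using the current user's session.
   url: Alfresco.constants.PROXY_URI + "helloworld?name=Ramesh",
   method: Alfresco.util.Ajax.GET,
   successCallback: {
      fn: function(response) {
         // The raw web script response is available on the wrapped server response.
         alert(response.serverResponse.responseText);
      },
      scope: this
   },
   failureMessage: "Could not invoke the hello world web script"
});

The same configuration object shape is what the jsonRequest and jsonGet convenience methods build on internally, which is why studying the out-of-the-box files listed above is a practical way to learn the variations.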
DeclarativeWebScript versus AbstractWebScript The web script framework in Alfresco provides two different helper classes from which the Java-backed controller can be derived. It's important to understand the difference between them. The first helper class is the one we used while developing the web script in this article, org.springframework.extensions.webscripts.DeclarativeWebScript. The second one is org.springframework.extensions.webscripts.AbstractWebScript. DeclarativeWebScript in turn extends the AbstractWebScript class. If the Java-backed controller is derived from DeclarativeWebScript, then execution assistance is provided by the DeclarativeWebScript class. This helper class basically encapsulates the execution of the web script and checks whether any controller written in JavaScript is associated with the web script. If any JavaScript controller is found for the web script, then this helper class will execute it. This class will locate the associated response template of the web script for the requested format and will pass the populated model object to the response template. For a controller extending DeclarativeWebScript, the controller logic for a web script should be provided in the Map<String, Object> executeImpl(WebScriptRequest req, Status status, Cache cache) method. Most of the time while developing a Java-backed web script, the controller will extend DeclarativeWebScript only. AbstractWebScript does not provide execution assistance in the way DeclarativeWebScript does. It gives full control over the entire execution process to the derived class and allows the extending class to decide how the output is to be rendered. One good example of this is the DeclarativeWebScript class itself: it extends the AbstractWebScript class and provides a mechanism to render the response using FTL templates. In a scenario like streaming content, there won't be any need for a response template; instead, the content itself needs to be rendered directly. In this case, the Java-backed controller class can extend AbstractWebScript. If a web script has both a JavaScript-based controller and a Java-backed controller, then: If the Java-backed controller is derived from DeclarativeWebScript, then the Java-backed controller will get executed first and control will then be passed to the JavaScript controller prior to returning the model object to the response template. If the Java-backed controller is derived from AbstractWebScript, then only the Java-backed controller will be executed; the JavaScript controller will not get executed. Summary In this article, we took a look at the reasons for using web scripts. Then we executed a web script from a standalone Java program and moved on to invoking a web script from Alfresco Share. Lastly, we saw the difference between DeclarativeWebScript and AbstractWebScript. Resources for Article: Further resources on this subject: Alfresco 3 Business Solutions: Types of E-mail Integration [article] Alfresco 3: Writing and Executing Scripts [article] Overview of REST Concepts and Developing your First Web Script using Alfresco [article]


concrete5 – Creating Blocks

Packt
30 Oct 2014
7 min read
In this article by Sufyan bin Uzayr, author of the book concrete5 for Developers, you will be introduced to concrete5. Basically, we will be talking about the creation of concrete5 blocks. (For more resources related to this topic, see here.) Creating a new block Creating a new block in concrete5 can be a daunting task for beginners, but once you get the hang of it, the process is pretty simple. For the sake of clarity, we will focus on the creation of a new block from scratch. If you already have some experience with block building in concrete5, you can skip the initial steps of this section. The steps to create a new block are as follows: First, create a new folder within your project's blocks folder. Ideally, the name of the folder should bear relevance to the actual purpose of the block. Thus, a slideshow block can be slide. Assuming that we are building a contact form block, let's name our block's folder contact. Next, you need to add a controller class to your block. Again, if you have some level of expertise with concrete5 development, you will already be aware of the meaning and purpose of the controller class. Basically, a controller is used to control the flow of an application, say, it can accept requests from the user, process them, and then prepare the data to present it in the result, and so on. For now, we need to create a file named controller.php in our block's folder. For the contact form block, this is how it is going to look (don't forget the PHP tags): class ContactBlockController extends BlockController {protected $btTable = 'btContact';/*** Used for internationalization (i18n).*/public function getBlockTypeDescription() {return t('Display a contact form.');}public function getBlockTypeName() {return t('Contact');}public function view() {// If the block is rendered}public function add() {// If the block is added to a page}public function edit() {// If the block instance is edited}} The preceding code is pretty simple and seems to have become the industry norm when it comes to block creation in concrete5. Basically, our class extends BlockController, which is responsible for installing the block, saving the data, and rendering templates. The name of the class should be the Camel Case version of the block handle, followed by BlockController. We also need to specify the name of the database table in which the block's data will be saved. More importantly, as you must have noticed, we have three separate functions: view(), add(), and edit(). The roles of these functions have been described earlier. Next, create three files within the block's folder: view.php, add.php, and edit.php (yes, the same names as the functions in our code). The names are self-explanatory: add.php will be used when a new block is added to a given page, edit.php will be used when an existing block is edited, and view.php jumps into action when users view blocks live on the page. Often, it becomes necessary to have more than one template file within a block. If so, you need to dynamically render templates in order to decide which one to use in a given situation. As discussed in the previous table, the BlockController class has a render($view) method that accepts a single parameter in the form of the template's filename. To do this from controller.php, we can use the code as follows: public function view() {if ($this->isPost()) {$this->render('block_pb_view');}} In the preceding example, the file named block_pb_view.php will be rendered instead of view.php. 
To reiterate, we should note that the render($view) method does not require the .php extension with its parameters. Now, it is time to display the contact form. The file in question is view.php, where we can put virtually any HTML or PHP code that suits our needs. For example, in order to display our contact form, we can hardcode the HTML markup or make use of Form Helper to display the HTML markup. Thus, a hardcoded version of our contact form might look as follows: <?php defined('C5_EXECUTE') or die("Access Denied.");global $c; ?><form method="post" action="<?php echo $this->action('contact_submit');?>"><label for="txtContactTitle">SampleLabel</label><input type="text" name="txtContactTitle" /><br /><br /><label for="taContactMessage"></label><textarea name="taContactMessage"></textarea><br /><br /><input type="submit" name="btnContactSubmit" /></form> Each time the block is displayed, the view() function from controller.php will be called. The action() method in the previous code generates URLs and verifies the submitted values each time a user inputs content in our contact form. Much like any other contact form, we now need to handle contact requests. The procedure is pretty simple and almost the same as what we will use in any other development environment. We need to verify that the request in question is a POST request and accordingly, call the $post variable. If not, we need to discard the entire request. We can also use the mail helper to send an e-mail to the website owner or administrator. Before our block can be fully functional, we need to add a database table because concrete5, much like most other CMSs in its league, tends to work with a database system. In order to add a database table, create a file named db.xml within the concerned block's folder. Thereafter, concrete5 will automatically parse this file and create a relevant table in the database for your block. For our previous contact form block, and for other basic block building purposes, this is how the db.xml file should look: <?xml version="1.0"?><schema version="0.3"><table name="btContact"><field name="bID" type="I"><key /><unsigned /></field></table></schema> You can make relevant changes in the preceding schema definitions to suit your needs. For instance, this is how the default YouTube block's db.xml file will look: <?xml version="1.0"?><schema version="0.3"><table name="btYouTube"><field name="bID" type="I"><key /><unsigned /></field><field name="title" type="C" size="255"></field><field name="videoURL" type="C" size="255"></field></table></schema> The preceding steps enumerate the process of creating your first block in concrete5. However, while you are now aware of the steps involved in the creation of blocks and can easily work with concrete5 blocks for the most part, there are certain additional details that you should be aware of if you are to utilize the block's functionality in concrete5 to its fullest abilities. The first and probably the most useful of such detail is validation of user inputs within blocks and forms. Summary In this article, we learned how to create our very first block in concrete5. Resources for Article: Further resources on this subject: Alfresco 3: Writing and Executing Scripts [Article] Integrating Moodle 2.0 with Alfresco to Manage Content for Business [Article] Alfresco 3 Business Solutions: Types of E-mail Integration [Article]


Media Queries with Less

Packt
21 Oct 2014
9 min read
In this article by Alex Libby, author of Learning Less.js, we'll see how Less can make creating media queries a cinch; we will cover the following topics: How media queries work What's wrong with CSS? Creating a simple example (For more resources related to this topic, see here.) Introducing media queries If you've ever spent time creating content for sites, particularly for display on a mobile platform, then you might have come across media queries. For those of you who are new to the concept, media queries are a means of tailoring the content that is displayed on screen when the viewport is resized to a smaller size. Historically, websites were always built at a static size—with more and more people viewing content on smartphones and tablets, this means viewing them became harder, as scrolling around a page can be a tiresome process! Thankfully, this became less of an issue with the advent of media queries—they help us with what should or should not be displayed when viewing content on a particular device. Almost all modern browsers offer native support for media queries—the only exception being IE Version 8 or below, where it is not supported natively: Media queries always begin with @media and consist of two parts: The first part, only screen, determines the media type where a rule should apply—in this case, it will only show the rule if we're viewing content on screen; content viewed when printed can easily be different. The second part, or media feature, (min-width: 530px) and (max-width: 949px), means the rule will only apply between a screen size set at a minimum of 530px and a maximum of 949px. This will rule out any smartphones and will apply to larger tablets, laptops, or PCs. There are literally dozens of combinations of media queries to suit a variety of needs—for some good examples, visit http://cssmediaqueries.com/overview.html, where you can see an extensive list, along with an indication whether it is supported in the browser you normally use. Media queries are perfect to dynamically adjust your site to work in multiple browsers—indeed, they are an essential part of a responsive web design. While browsers support media queries, there are some limitations we need to consider; let's take a look at these now. The limitations of CSS If we spend any time working with media queries, there are some limitations we need to consider; these apply equally if we were writing using Less or plain CSS: Not every browser supports media features uniformly; to see the differences, visit http://cssmediaqueries.com/overview.html using different browsers. Current thinking is that a range of breakpoints has to be provided; this can result in a lot of duplication and a constant battle to keep up with numerous different screen sizes! The @media keyword is not supported in IE8 or below; you will need to use JavaScript or jQuery to achieve the same result, or a library such as Modernizr to provide a graceful fallback option. Writing media queries will tie your design to a specific display size; this increases the risk of duplication as you might want the same element to appear in multiple breakpoints, but have to write individual rules to cover each breakpoint. Breakpoints are points where your design will break if it is resized larger or smaller than a particular set of given dimensions. The traditional thinking is that we have to provide different style rules for different breakpoints within our style sheets. While this is valid, ironically it is something we should not follow! 
The reason for this is the potential proliferation of breakpoint rules that you might need to add, just to manage a site. With care and planning and a design-based breakpoints mindset, we can often get away with a fewer number of rules. There is only one breakpoint given, but it works in a range of sizes without the need for more breakpoints. The key to the process is to start small, then increase the size of your display. As soon as it breaks your design (this is where your first breakpoint is) add a query to fix it, and then, keep doing it until you get to your maximum size. Okay, so we've seen what media queries are; let's change tack and look at what you need to consider when working with clients, before getting down to writing the queries in code. Creating a simple example The best way to see how media queries work is in the form of a simple demo. In this instance, we have a simple set of requirements, in terms of what should be displayed at each size: We need to cater for four different sizes of content The small version must be shown to the authors as plain text e-mail links, with no decoration For medium-sized screens, we will add an icon before the link On large screens, we will add an e-mail address after the e-mail links On extra-large screens, we will combine the medium and large breakpoints together, so both icons and e-mail addresses are displayed In all instances, we will have a simple container in which there will be some dummy text and a list of editors. The media queries we create will control the appearance of the editor list, depending on the window size of the browser being used to display the content. Next, add the following code to a new document. We'll go through it section by section, starting with the variables created for our media queries: @small: ~"(max-width: 699px) and (min-width: 520px)"; @medium: ~"(max-width: 1000px) and (min-width: 700px)"; @large: ~"(min-width: 1001px)"; @xlarge: ~"(min-width: 1151px)"; Next comes some basic styles to define margins, font sizes, and styles: * { margin: 0; padding: 0; } body { font: 14px Georgia, serif; } h3 { margin: 0 0 8px 0; } p { margin: 0 25px } We need to set sizes for each area within our demo, so go ahead and add the following styles: #fluid-wrap { width: 70%; margin: 60px auto; padding: 20px; background: #eee; overflow: hidden; } #main-content { width: 65%; float: right; }  #sidebar { width: 35%; float: left; ul { list-style: none; } ul li a { color: #900; text-decoration: none; padding: 3px 0; display: block; } } Now that the basic styles are set, we can add our media queries—beginning with the query catering for small screens, where we simply display an e-mail logo: @media @small { #sidebar ul li a { padding-left: 21px; background: url(../img/email.png) left center no-repeat; } } The medium query comes next; here, we add the word Email before the e-mail address instead: @media @medium { #sidebar ul li a:before { content: "Email: "; font-style: italic; color: #666; } } In the large media query, we switch to showing the name first, followed by the e-mail (the latter extracted from the data-email attribute): @media @large { #sidebar ul li a:after { content: " (" attr(data-email) ")"; font-size: 11px; font-style: italic; color: #666; } } We finish with the extra-large query, where we use the e-mail address format shown in the large media query, but add an e-mail logo to it: @media @xlarge { #sidebar ul li a { padding-left: 21px; background: url(../img/email.png) left center no-repeat; } } Save the file as 
simple.less. Now that our files are prepared, let's preview the results in a browser. For this, I recommend that you use Responsive Design View within Firefox (activated by pressing Ctrl + Shift + M). Once activated, resize the view to 416 x 735; here we can see that only the name is displayed as an e-mail link: Increasing the size to 544 x 735 adds an e-mail logo, while still keeping the same name/e-mail format as before: If we increase it further to 716 x 735, the e-mail logo changes to the word Email, as seen in the following screenshot: Let's increase the size even further to 735 x 1029; the format changes again, to a name/e-mail link, followed by an e-mail address in parentheses: In our final change, increase the size to 735 x 1182. Here, we can see the previous style being used, but with the addition of an e-mail logo: These screenshots illustrate perfectly how you can resize your screen and still maintain a suitable layout for each device you decide to support; let's take a moment to consider how the code works. The normal accepted practice for developers is to work on the basis of "mobile first", or create the smallest view so it is perfect, then increase the size of the screen and adjust the content until the maximum size is reached. This works perfectly well for new sites, but the principle might have to be reversed if a mobile view is being retrofitted to an existing site. In our instance, we've produced the content for a full-size screen first. From a Less perspective, there is nothing here that isn't new—we've used nesting for the #sidebar div, but otherwise the rest of this part of the code is standard CSS. The magic happens in two parts—immediately at the top of the file, we've set a number of Less variables, which encapsulate the media definition strings we use in the queries. Here, we've created four definitions, ranging from @small (for devices between 520px to 699px), right through to @xlarge for widths of 1151px or more. We then take each of the variables and use them within each query as appropriate, for example, the @small query is set as shown in the following code: @media @small { #sidebar ul li a { padding-left: 21px; background: url(../img/email.png) left center no-repeat; } } In the preceding code, we have standard CSS style rules to display an e-mail logo before the name/e-mail link. Each of the other queries follows exactly the same principle; they will each compile as valid CSS rules when running through Less. Summary Media queries have rapidly become a de facto part of responsive web design. We started our journey through media queries with a brief introduction, with a review of some of the limitations that we must work around and considerations that need to be considered when working with clients. We then covered how to create a simple media query. Resources for Article: Further resources on this subject: Creating Blog Content in WordPress [Article] Customizing WordPress Settings for SEO [Article] Introduction to a WordPress application's frontend [Article]


Introduction to TypeScript

Packt
20 Oct 2014
16 min read
One of the primary benefits of compiled languages is that they provide a more plain syntax for the developer to work with before the code is eventually converted to machine code. TypeScript is able to bring this advantage to JavaScript development by wrapping several different patterns into language constructs that allow us to write better code. Every explicit type annotation that is provided is simply syntactic sugar that will be removed during compilation, but not before their constraints are analyzed and any errors are caught. In this article by Christopher Nance, the author of TypeScript Essentials, we will explore this type system in depth. We will also discuss the different language structures that TypeScript introduces. We will look at how these structures are emitted by the compiler into plain JavaScript. This article will contain a detailed at into each of these concepts: (For more resources related to this topic, see here.) Types Functions Interfaces Classes Types These type annotations put a specific set of constraints on the variables being created. These constraints allow the compiler and development tools to better assist in the proper use of the object. This includes a list of functions, variables, and properties available on the object. If a variable is created and no type is provided for it, TypeScript will attempt to infer the type from the context in which it is used. For instance, in the following code, we do not explicitly declare the variable hello as string; however, since it is created with an initial value, TypeScript is able to infer that it should always be treated as a string: var hello = "Hello There"; The ability of TypeScript to do this contextual typing provides development tools with the ability to enhance the development experience in a variety of ways. The type information allows our IDE to warn us of potential errors in our code, or provide intelligent code completion and suggestion. As you can see from the following screenshot, Visual Studio is able to provide a list of methods and properties associated with string objects as well as their type information: When an object’s type is not given and cannot be inferred from its initialization then it will be treated as an Any type. The Any type is the base type for all other types in TypeScript. It can represent any JavaScript value and the minimum amount of type checking is performed on objects of type Any. Every other type that exists in TypeScript falls into one of three categories: primitive types, object types, or type parameters. TypeScript's primitive types closely mirror those of JavaScript. The TypeScript primitive types are as follows: Number: var myNum: number = 2; Boolean: var myBool: boolean = true; String: var myString: string = "Hello"; Void: function(): void { var x = 2; } Null: if (x != null) { alert(x); } Undefined: if (x != undefined) { alert(x); } All of these types correspond directly to JavaScript's primitive types except for Void. The Void type is meant to represent the absence of a value. A function that returns no value has a return type of void. Object types are the most common types you will see in TypeScript and they are made up of references to classes, interfaces, and anonymous object types. Object types are made up of a complex set of members. These members fall into one of four categories: properties, call signatures, constructor signatures, or index signatures. Type parameters are used when referencing generic types or calling generic functions. 
Type parameters are used to keep code generic enough to be used on a multitude of objects while limiting those objects to a specific set of constraints. An early example of generics that we can cover is arrays. Arrays exist just like they do in JavaScript and they have an extra set of type constraints placed upon them. The array object itself has certain type constraints and methods that are created as being an object of the Array type, the second piece of information that comes from the array declaration is the type of the objects contained in the array. There are two ways to explicitly type an array; otherwise, the contextual typing system will attempt to infer the type information: var array1: string[] = []; var array2: Array<string> = []; Both of these examples are completely legal ways of declaring an array. They both generate the same JavaScript output and they both provide the same type information. The first example is a shorthand type literal using the [ and ] characters to create arrays. The resulting JavaScript for each of these arrays is shown as follows: var array1 = []; var array2 = []; Despite all of the type annotations and compile-time checking, TypeScript compiles to plain JavaScript and therefore adds absolutely no overhead to the run time speed of your applications. All of the type annotations are removed from the final code, providing us with both a much richer development experience and a clean finished product. Functions If you are at all familiar with JavaScript you will be very familiar with the concept of functions. TypeScript has added type annotations to the parameter list as well as the return type. Due to the new constraints being placed on the parameter list, the concept of function overloads was also included in the language specification. TypeScript also takes advantage of JavaScript's arguments object and provides syntax for rest parameters. Let's take a look at a function declaration in TypeScript: function add(x: number, y: number): number {    return x + y; } As you can see, we have created a function called add. It takes two parameters that are both of the type number, one of the primitive types, and it returns a number. This function is useful in its current form but it is a little limited in overall functionality. What if we want to add a third number to the first two? Then we have to call our function multiple times. TypeScript provides a way to provide optional parameters to functions. So now we can modify our function to take a third parameter, z, that will get added to the first two numbers, as shown in the following code: function add(x: number, y: number, z?: number) {    if (z !== undefined) {        return x + y + z;    }    return x + y; } As you can see, we have a third named parameter now but this one is followed by ?. This tells the compiler that this parameter is not required for the function to be called. Optional parameters tell the compiler not to generate an error if the parameter is not provided when the function is called. In JavaScript, this compile-time checking is not performed, meaning an exception could occur at runtime because each missing parameter will have a value of undefined. It is the responsibility of the developer to write code that verifies a value exists before attempting to use it. So now we can add three numbers together and we haven't broken any of our previous code that relied on the add method only taking two parameters. 
This has added a little bit more functionality but I think it would be nice to extend this code to operate on multiple types. We know that strings can be added together just the same as numbers can, so why not use the same method? In its current form, though, passing strings to the add function will result in compilation errors. We will modify the function's definition to take not only numbers but strings as well, as shown in the following code: function add(x: string, y: string): string; function add(x: number, y: number): number; function add(x: any, y: any): any {    return x + y; } As you can see, we now have two declarations of the add function: one for strings, one for numbers, and then we have the final implementation using the any type. The signature of the actual function implementation is not included in the function’s type definition, though. Attempting to call our add method with anything other than a number or string will fail at compile time, however, the overloads have no effect on the generated JavaScript. All of the type annotations are stripped out, as well as the overloads, and all we are left with is a very simple JavaScript method: function add(x, y) {  return x + y; } Great, so now we have a multipurpose add function that can take two values and combine them together for either strings or numbers. This still feels a little limited in overall functionality though. What if we wanted to add an indeterminate number of values together? We would have to call our add method over and over again until we eventually had only one value. Thankfully, TypeScript includes rest parameters, which is essentially an unbounded list of optional parameters. The following code shows how to modify our add functions to include a rest parameter: function add(arg1: string, ...args: string[]): string; function add(arg1: number, ...args: number[]): number; function add(arg1: any, ...args: any[]): any {    var total = arg1;    for (var i = 0; i < args.length; i++) {        total += args[i];    }    return total; } A rest parameter can only be the final parameter in a function's declaration. The TypeScript compiler recognizes the syntax of this final parameter and generates an extra bit of JavaScript to generate a shifted array from the JavaScript arguments object that is available to code inside of a function. The resulting JavaScript code shows the loop that the compiler has added to create the array that represents our indeterminate list of parameters: function add(arg1) {    var args = [];    for (var _i = 0; _i < (arguments.length - 1); _i++) {        args[_i] = arguments[_i + 1];    }    var total = arg1;    for (var i = 0; i < args.length; i++) {        total += args[i];    }    return total; } Now adding numbers and strings together is very simple and is completely type-safe. If you attempt to mix the different parameter types, a compile error will occur. The first two of the following statements are legal calls to our Add function; however, the third is not because the objects being passed in are not of the same type: alert(add("Hello ", "World!")); alert(add(3, 5, 9, 120, 42)); //Error alert(add(3, "World!")); We are still very early into our exploration of TypeScript but the benefits are already very apparent. There are still a few features of functions that we haven't covered yet but we need to learn more about the language first. Next, we will discuss the interface construct and the benefits it provides with absolutely no cost. 
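Before moving on, one more note on the type parameters mentioned in the Types section: they were only shown on arrays there, but they apply to functions as well. The following is a minimal sketch (not taken from the book's examples) of a generic function, showing how a single type parameter keeps the function reusable while preserving compile-time checking:

// T is a type parameter; it is inferred from the argument at each call site.
function firstElement<T>(items: T[]): T {
    return items[0];
}

var names: string[] = ["Alice", "Bob"];
var firstName = firstElement(names);   // inferred as string

var scores = [10, 20, 30];
var firstScore = firstElement(scores); // inferred as number

// The next line would fail at compile time: string is not assignable to number.
// var wrong: number = firstElement(names);

As with the other annotations we have seen, the type parameter is erased during compilation, so the emitted JavaScript is just a plain function.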
Interfaces Interfaces are a key piece of creating large-scale software applications. They are a way of representing complex types about any object. Despite their usefulness they have absolutely no runtime consequences because JavaScript does not include any sort of runtime type checking. Interfaces are analyzed at compile time and then omitted from the resulting JavaScript. Interfaces create a contract for developers to use when developing new objects or writing methods to interact with existing ones. Interfaces are named types that contain a list of members. Let's look at an example of an interface: interface IPoint {    x: number;    y: number; } As you can see we use the interface keyword to start the interface declaration. Then we give the interface a name that we can easily reference from our code. Interfaces can be named anything, for example, foo or bar, however, a simple naming convention will improve the readability of the code. Interfaces will be given the format I<name> and object types will just use <name>, for example, IFoo and Foo. The interfaces' declaration body contains just a list of members and functions and their types. Interface members can only be instance members of an object. Using the static keyword in an interface declaration will result in a compile error. Interfaces have the ability to inherit from base types. This interface inheritance allows us to extend existing interfaces into a more enhanced version as well as merge separate interfaces together. To create an inheritance chain, interfaces use the extends clause. The extends clause is followed by a comma-separated list of types that the interface will merge with. interface IAdder {    add(arg1: number, ...args: number[]): number; } interface ISubtractor {    subtract(arg1: number, ...args: number[]): number; } interface ICalculator extends IAdder, ISubtractor {    multiply(arg1: number, ...args: number[]): number;    divide(arg1: number, arg2: number): number; } Here, we see three interfaces: IAdder, which defines a type that must implement the add method that we wrote earlier ISubtractor, which defines a new method called subtract that any object typed with ISubtractor must define ICalculator, which extends both IAdder and ISubtractor as well as defining two new methods that perform operations a calculator would be responsible for, which an adder or subtractor wouldn't perform These interfaces can now be referenced in our code as type parameters or type declarations. Interfaces cannot be directly instantiated and attempting to reference the members of an interface by using its type name directly will result in an error. In the following function declaration the ICalculator interface is used to restrict the object type that can be passed to the function. The compiler can now examine the function body and infer all of the type information associated with the calculator parameter and warn us if the object used does not implement this interface. function performCalculations(calculator: ICalculator, num1, num2) {    calculator.add(num1, num2);    calculator.subtract(num1, num2);    calculator.multiply(num1, num2);    calculator.divide(num1, num2);    return true; } The last thing that you need to know about interface definitions is that their declarations are open-ended and will implicitly merge together if they have the same type name. Our ICalculator interface could have been split into two separate declarations with each one adding its own list of base types and its own list of members. 
The resulting type definition from the following declaration is equivalent to the declaration we saw previously: interface ICalculator extends IAdder {    multiply(arg1: number, ...args: number[]): number; } interface ICalculator extends ISubtractor {    divide(arg1: number, arg2: number): number; } Creating large scale applications requires code that is flexible and reusable. Interfaces are a key component of keeping TypeScript as flexible as plain JavaScript, yet allow us to take advantage of the type checking provided at compile time. Your code doesn't have to be dependent on existing object types and will be ready for any new object types that might be introduced. The TypeScript compiler also implements a duck typing system that allows us to create objects on the fly while keeping type safety. The following example shows how we can pass objects that don't explicitly implement an interface but contain all of the required members to a function: function addPoints(p1: IPoint, p2: IPoint): IPoint {    var x = p1.x + p2.x;    var y = p1.y + p2.y;    return { x: x, y: y } } //Valid var newPoint = addPoints({ x: 3, y: 4 }, { x: 5, y: 1 }); //Error var newPoint2 = addPoints({ x: 1 }, { x: 4, y: 3 }); Classes In the next version of JavaScript, ECMAScript 6, a standard has been proposed for the definition of classes. TypeScript brings this concept to the current versions of JavaScript. Classes consist of a variety of different properties and members. These members can be either public or private and static or instance members. Definitions Creating classes in TypeScript is essentially the same as creating interfaces. Let's create a very simple Point class that keeps track of an x and a y position for us: class Point {    public x: number;    public y: number;    constructor(x: number, y = 0) {        this.x = x;        this.y = y;    } } As you can see, defining a class is very simple. Use the keyword class and then provide a name for the new type. Then you create a constructor for the object with any parameters you wish to provide upon creation. Our Point class requires two values that represent a location on a plane. The constructor is completely optional. If a constructor implementation is not provided, the compiler will automatically generate one that takes no parameters and initializes any instance members. We provided a default value for the property y. This default value tells the compiler to generate an extra JavaScript statement than if we had only given it a type. This also allows TypeScript to treat parameters with default values as optional parameters. If the parameter is not provided then the parameter's value is assigned to the default value you provide. This provides a simple method for ensuring that you are always operating on instantiated objects. The best part is that default values are available for all functions, not just constructors. Now let's examine the JavaScript output for the Point class: var Point = (function () {    function Point(x, y) {        if (typeof y === "undefined") { y = 0; }        this.x = x;        this.y = y;    }    return Point; })(); As you can see, a new object is created and assigned to an anonymous function that initializes the definition of the Point class. As we will see later, any public methods or static members will be added to the inner Point function's prototype. JavaScript closures are a very important concept in understanding TypeScript. Classes, modules, and enums in TypeScript all compile into JavaScript closures. 
Closures are actually a construct of the JavaScript language that provide a way of creating a private state for a specific segment of code. When a closure is created it contains two things: a function, and the state of the environment when the function was created. The function is returned to the caller of the closure and the state is used when the function is called. For more information about JavaScript closures and the module pattern visit http://www.adequatelygood.com/JavaScript-Module-Pattern-In-Depth.html. The optional parameter was accounted for by checking its type and initializing it if a value is not available. You can also see that both x and y properties were added to the new instance and assigned to the values that were passed into the constructor. Summary This article has thoroughly discussed the different language constructs in TypeScript. Resources for Article: Further resources on this subject: Setting Up The Rig [Article] Making Your Code Better [Article] Working with Flexible Content Elements in TYPO3 Templates [Article]


Routing

Packt
16 Oct 2014
17 min read
In this article by Mitchel Kelonye, author of Mastering Ember.js, we will learn URL-based state management in Ember.js, which constitutes routing. Routing enables us to translate different states in our applications into URLs and vice-versa. It is a key concept in Ember.js that enables developers to easily separate application logic. It also enables users to link back to content in the application via the usual HTTP URLs. (For more resources related to this topic, see here.) We all know that in traditional web development, every request is linked by a URL that enables the server make a decision on the incoming request. Typical actions include sending back a resource file or JSON payload, redirecting the request to a different resource, or sending back an error response such as in the case of unauthorized access. Ember.js strives to preserve these ideas in the browser environment by enabling association between these URLs and state of the application. The main component that manages these states is the application router. It is responsible for restoring an application to a state matching the given URL. It also enables the user to navigate between the application's history as expected. The router is automatically created on application initialization and can be referenced as MyApplicationNamespace.Router. Before we proceed, we will be using the bundled sample to better understand this extremely convenient component. The sample is a simple implementation of the Contacts OS X application as shown in the following screenshot: It enables users to add new contacts as well as edit and delete existing ones. For simplicity, we won't support avatars but that could be an implementation exercise for the reader. We already mentioned some of the states in which this application can transition into. These states have to be registered in the same way server-side frameworks have URL dispatchers that backend programmers use to map URL patters to views. The article sample already illustrates how these possible states are defined:  // app.jsvar App = Ember.Application.create();App.Router.map(function() {this.resource('contacts', function(){this.route('new');this.resource('contact', {path: '/:contact_id'}, function(){this.route('edit');});});this.route('about');}); Notice that the already instantiated router was referenced as App.Router. Calling its map method gives the application an opportunity to register its possible states. In addition, two other methods are used to classify these states into routes and resources. Mapping URLs to routes When defining routes and resources, we are essentially mapping URLs to possible states in our application. As shown in the first code snippet, the router's map function takes a function as its only argument. Inside this function, we may define a resource using the corresponding method, which takes the following signature: this.resource(resourceName, options, function); The first argument specifies the name of the resource and coincidentally, the path to match the request URL. The next argument is optional and holds configurations that we may need to specify as we shall see later. The last one is a function that is used to define the routes of that particular resource. For example, the first defined resource in the samples says, let the contacts resource handle any requests whose URL start with /contacts. It also specifies one route, new, that is used to handle creation of new contacts. Routes on the other hand accept the same arguments for the function argument. 
You must be asking yourself, "So how are routes different from resources?" The two are essentially the same, other than the former offers a way to categorize states (routes) that perform actions on a specific entity. We can think of an Ember.js application as tree, composed of a trunk (the router), branches (resources), and leaves (routes). For example, the contact state (a resource) caters for a specific contact. This resource can be displayed in two modes: read and write; hence, the index and edit routes respectively, as shown: this.resource('contact', {path: '/:contact_id'}, function(){this.route('index'); // auto definedthis.route('edit');}); Because Ember.js encourages convention, there are two components of routes and resources that are always autodefined: A default application resource: This is the master resource into which all other resources are defined. We therefore did not need to define it in the router. It's not mandatory to define resources on every state. For example, our about state is a route because it only needs to display static content to the user. It can however be thought to be a route of the already autodefined application resource. A default index route on every resource: Again, every resource has a default index route. It's autodefined because an application cannot settle on a resource state. The application therefore uses this route if no other route within this same resource was intended to be used. Nesting resources Resources can be nested depending on the architecture of the application. In our case, we need to load contacts in the sidebar before displaying any of them to the user. Therefore, we need to define the contact resource inside the contacts. On the other hand, in an application such as Twitter, it won't make sense to define a tweet resource embedded inside a tweets resource because an extra overhead will be incurred when a user just wants to view a single tweet linked from an external application. Understanding the state transition cycle A request is handled in the same way water travels from the roots (the application), up the trunk, and is eventually lost off leaves. This request we are referring to is a change in the browser location that can be triggered in a number of ways. Before we proceed into finer details about routes, let's discuss what happened when the application was first loaded. On boot, a few things happened as outlined: The application first transitioned into the application state, then the index state. Next, the application index route redirected the request to the contacts resource. Our application uses the browsers local storage to store the contacts and so for demoing purposes, the contacts resource populated this store with fixtures (located at fixtures.js). The application then transitioned into the corresponding contacts resource index route, contacts.index. Again, here we made a few decisions based on whether our store contained any data in it. Since we indeed have data, we redirected the application into the contact resource, passing the ID of the first contact along. Just as in the two preceding resources, the application transitioned from this last resource into the corresponding index route, contact.index. 
The following figure gives a good view of the preceding state change:

Configuring the router

The router can be customized in the following ways:

Logging state transitions
Specifying the root app URL
Changing the browser location lookup method

During development, it may be necessary to track the states into which the application transitions. Enabling these logs is as simple as:

var App = Ember.Application.create({
LOG_TRANSITIONS: true
});

As illustrated, we enable the LOG_TRANSITIONS flag when creating the application. If an application is not served at the root of the website domain, then it may be necessary to specify the path name used, as in the following example:

App.Router.reopen({
rootURL: '/contacts/'
});

One other modification we may need to make revolves around the technique Ember.js uses to subscribe to the browser's location changes. This makes it possible for the router to do its job of transitioning the app into the matched URL state. Two of these methods are as follows:

Subscribing to the hashchange event
Using the history.pushState API

The default technique is provided by the HashLocation class documented at http://emberjs.com/api/classes/Ember.HashLocation.html. This means that URL paths are usually prefixed with the hash symbol, for example, /#/contacts/1/edit. The other one is provided by the HistoryLocation class located at http://emberjs.com/api/classes/Ember.HistoryLocation.html. This does not distinguish URLs from traditional ones and can be enabled as:

App.Router.reopen({
location: 'history'
});

We can also let Ember.js pick whichever method is best suited for our app with the following code:

App.Router.reopen({
location: 'auto'
});

If we don't need any of these techniques, for example when performing tests, we can disable location management entirely:

App.Router.reopen({
location: 'none'
});

Specifying a route's path

We now know that when defining a route or resource, the resource name used also serves as the path the router uses to match request URLs. Sometimes, it may be necessary to specify a different path to use to match states. There are two common reasons to do this, the first of which is delegating route handling to another route. Although we have not yet covered route handlers, we already mentioned that our application transitions from the application index route into the contacts.index state. We may, however, specify that the contacts route handler should manage this path as:

this.resource('contacts', {path: '/'}, function(){});

Therefore, to specify an alternative path for a route, simply pass the desired path in a hash as the second argument during resource definition. This also applies when defining routes. The second reason is when a resource contains dynamic segments. For example, our contact resource handles contacts that should obviously have different URLs linking back to them. Ember.js uses the URL pattern matching techniques used by other open source projects such as Ruby on Rails, Sinatra, and Express.js. Therefore, our contact resource should be defined as:

this.resource('contact', {path: '/:contact_id'}, function(){});

In the preceding snippet, /:contact_id is the dynamic segment that will be replaced by the actual contact's ID. One thing to note is that nested resources prefix their paths with those of parent resources. Therefore, the contact resource's full path would be /contacts/:contact_id. It's also worth noting that the name of the dynamic segment is not mandated, and so we could have named the dynamic segment /:id.
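As a quick sketch of that renaming (these names are illustrative and differ from the sample, which uses :contact_id), nesting the contact resource under contacts with an /:id segment would look like this:

this.resource('contacts', function(){
this.resource('contact', {path: '/:id'}, function(){
this.route('edit');
});
});

With this mapping, the contact resource matches /contacts/:id and the contact.edit route matches /contacts/:id/edit, because nested resources prefix their paths with those of their parents.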
Defining route and resource handlers

Now that we have defined all the possible states that our application can transition into, we need to define handlers for these states. From this point onwards, we will use the terms route and resource handlers interchangeably. A route handler performs the following major functions:

Providing data (model) to be used by the current state
Specifying the view and/or template to use to render the provided data to the user
Redirecting the application away into another state

Before we move into discussing these roles, we need to know that a route handler is defined from the Ember.Route class as:

App.RouteHandlerNameRoute = Ember.Route.extend();

This class is used to define handlers for both resources and routes, and therefore the naming should not be a concern. Just as routes and resources are associated with paths and handlers, they are also associated with controllers, views, and templates using the Ember.js naming conventions. For example, when the application initializes, it enters into the application state and therefore the following objects are sought:

The application route
The application controller
The application view
The application template

In the spirit of doing more with reduced boilerplate code, Ember.js autogenerates these objects unless they are explicitly defined in order to override the default implementations. As another example, if we examine our application, we notice that the contact.edit route has a corresponding App.ContactEditController controller and contact/edit template. We did not need to define its route handler or view. Having seen this example, when referring to routes, we normally separate the resource name from the route name by a period, as in the following:

resourceName.routeName

In the case of templates, we may use a period or a forward slash:

resourceName/routeName

The other objects are usually camelized and suffixed by the class name:

ResourcenameRoutenameClassname

For example, the following table shows all the objects used. As mentioned earlier, some are autogenerated.

Route name       Controller                Route handler        View                 Template
application      ApplicationController     ApplicationRoute     ApplicationView      application
index            IndexController           IndexRoute           IndexView            index
about            AboutController           AboutRoute           AboutView            about
contacts         ContactsController        ContactsRoute        ContactsView         contacts
contacts.index   ContactsIndexController   ContactsIndexRoute   ContactsIndexView    contacts/index
contacts.new     ContactsNewController     ContactsNewRoute     ContactsNewView      contacts/new
contact          ContactController         ContactRoute         ContactView          contact
contact.index    ContactIndexController    ContactIndexRoute    ContactIndexView     contact/index
contact.edit     ContactEditController     ContactEditRoute     ContactEditView      contact/edit

One thing to note is that objects associated with the intermediary application state do not need to carry the suffix; hence, just index or about.

Specifying a route's model

We mentioned that route handlers provide controllers with the data needed to be displayed by templates. These handlers have a model hook that can be used to provide this data in the following format:

AppNamespace.RouteHandlerName = Ember.Route.extend({
model: function(){}
});

For instance, the contacts route handler in the sample loads any saved contacts from local storage as:

model: function(){
return App.Contact.find();
}

We have abstracted this logic into our App.Contact model. Notice how we reopen the class in order to define this static method.
A static method can only be called by the class of that method and not its instances: App.Contact.reopenClass({find: function(id){return (!!id)? App.Contact.findOne(id): App.Contact.findAll();},…}) If no arguments are passed to the method, it goes ahead and calls the findAll method, which uses the local storage helper to retrieve the contacts: findAll: function(){var contacts = store('contacts') || [];return contacts.map(function(contact){return App.Contact.create(contact);});} Because we want to deal with contact objects, we iteratively convert the contents of the loaded contact list. If we examine the corresponding template, contacts, we notice that we were able to populate the sidebar as shown in the following code: <ul class="nav nav-pills nav-stacked">{{#each model}}<li>{{#link-to "contact.index" this}}{{name}}{{/link-to}}</li>{{/each}}</ul> Do not worry about the template syntax at this point if you're new to Ember.js. The important thing to note is that the model was accessed via the model variable. Of course, before that, we check to see if the model has any content in: {{#if model.length}}...{{else}}<h1>Create contact</h1>{{/if}} As we shall see later, if the list was empty, the application would be forced to transition into the contacts.new state, in order for the user to add the first contact as shown in the following screenshot: The contact handler is a different case. Remember we mentioned that its path has a dynamic segment that would be passed to the handler. This information is passed to the model hook in an options hash as: App.ContactRoute = Ember.Route.extend({model: function(params){return App.Contact.find(params.contact_id);},...}); Notice that we are able to access the contact's ID via the contact_id attribute of the hash. This time, the find method calls the findOne static method of the contact's class, which performs a search for the contact matching the provided ID, as shown in the following code: findOne: function(id){var contacts = store('contacts') || [];var contact = contacts.find(function(contact){return contact.id == id;});if (!contact) return;return App.Contact.create(contact);} Serializing resources We've mentioned that Ember.js supports content to be linked back externally. Internally, Ember.js simplifies creating these links in templates. In our sample application, when the user selects a contact, the application transitions into the contact.index state, passing his/her ID along. This is possible through the use of the link-to handlebars expression: {{#link-to "contact.index" this}}{{name}}{{/link-to}} The important thing to note is that this expression enables us to construct a link that points to the said resource by passing the resource name and the affected model. The destination resource or route handler is responsible for yielding this path constituting serialization. To serialize a resource, we need to override the matching serialize hook as in the contact handler case shown in the following code: App.ContactRoute = Ember.Route.extend({...serialize: function(model, params){var data = {}data[params[0]] = Ember.get(model, 'id');return data;}}); Serialization means that the hook is supposed to return the values of all the specified segments. It receives two arguments, the first of which is the affected resource and the second is an array of all the specified segments during the resource definition. 
In our case, we only had one and so we returned the required hash that resembled the following code: {contact_id: 1} If we, for example, defined a resource with multiple segments like the following code: this.resource('book',{path: '/name/:name/:publish_year'},function(){}); The serialization hook would need to return something close to: {name: 'jon+doe',publish_year: '1990'} Asynchronous routing In actual apps, we would often need to load the model data in an asynchronous fashion. There are various approaches that can be used to deliver this kind of data. The most robust way to load asynchronous data is through use of promises. Promises are objects whose unknown value can be set at a later point in time. It is very easy to create promises in Ember.js. For example, if our contacts were located in a remote resource, we could use jQuery to load them as: App.ContactsRoute = Ember.Route.extend({model: function(params){return Ember.$.getJSON('/contacts');}}); jQuery's HTTP utilities also return promises that Ember.js can consume. As a by the way, jQuery can also be referenced as Ember.$ in an Ember.js application. In the preceding snippet, once data is loaded, Ember.js would set it as the model of the resource. However, one thing is missing. We require that the loaded data be converted to the defined contact model as shown in the following little modification: App.ContactsRoute = Ember.Route.extend({model: function(params){var promise = Ember.Object.createWithMixins(Ember.DeferredMixin);Ember.$.getJSON('/contacts').then(reject, resolve);function resolve(contacts){contacts = contacts.map(function(contact){return App.Contact.create(contact);});promise.resolve(contacts)}function reject(res){var err = new Error(res.responseText);promise.reject(err);}return promise;}}); We first create the promise, kick off the XHR request, and then return the promise while the request is still being processed. Ember.js will resume routing once this promise is rejected or resolved. The XHR call also creates a promise; so, we need to attach to it, the then method which essentially says, invoke the passed resolve or reject function on successful or failed load respectively. The resolve function converts the loaded data and resolves the promise passing the data along thereby resumes routing. If the promise was rejected, the transition fails with an error. We will see how to handle this error in a moment. Note that there are two other flavors we can use to create promises in Ember.js as shown in the following examples: var promise = Ember.Deferred.create();Ember.$.getJSON('/contacts').then(success, fail);function success(){contacts = contacts.map(function(contact){return App.Contact.create(contact);});promise.resolve(contacts)}function fail(res){var err = new Error(res.responseText);promise.reject(err);}return promise; The second example is as follows: return new Ember.RSVP.Promise(function(resolve, reject){Ember.$.getJSON('/contacts').then(success, fail);function success(){contacts = contacts.map(function(contact){return App.Contact.create(contact);});resolve(contacts)}function fail(res){var err = new Error(res.responseText);reject(err);}}); Summary This article detailed how a browser's location-based state management is accomplished in Ember.js apps. Also, we accomplished how to create a router, define resources and routes, define a route's model, and perform a redirect. 
Introduction to Custom Template Filters and Tags

Packt
13 Oct 2014
25 min read
This article is written by Aidas Bendoratis, the author of Web Development with Django Cookbook. In this article, we will cover the following recipes: Following conventions for your own template filters and tags Creating a template filter to show how many days have passed Creating a template filter to extract the first media object Creating a template filter to humanize URLs Creating a template tag to include a template if it exists Creating a template tag to load a QuerySet in a template Creating a template tag to parse content as a template Creating a template tag to modify request query parameters As you know, Django has quite an extensive template system, with features such as template inheritance, filters for changing the representation of values, and tags for presentational logic. Moreover, Django allows you to add your own template filters and tags in your apps. Custom filters or tags should be located in a template-tag library file under the templatetags Python package in your app. Your template-tag library can then be loaded in any template with a {% load %} template tag. In this article, we will create several useful filters and tags that give more control to the template editors. Following conventions for your own template filters and tags Custom template filters and tags can become a total mess if you don't have persistent guidelines to follow. Template filters and tags should serve template editors as much as possible. They should be both handy and flexible. In this recipe, we will look at some conventions that should be used when enhancing the functionality of the Django template system. How to do it... Follow these conventions when extending the Django template system: Don't create or use custom template filters or tags when the logic for the page fits better in the view, context processors, or in model methods. When your page is context-specific, such as a list of objects or an object-detail view, load the object in the view. If you need to show some content on every page, create a context processor. Use custom methods of the model instead of template filters when you need to get some properties of an object not related to the context of the template. Name the template-tag library with the _tags suffix. When your app is named differently than your template-tag library, you can avoid ambiguous package importing problems. In the newly created library, separate filters from tags, for example, by using comments such as the following: # -*- coding: UTF-8 -*-from django import templateregister = template.Library()### FILTERS #### .. your filters go here..### TAGS #### .. your tags go here.. Create template tags that are easy to remember by including the following constructs: for [app_name.model_name]: Include this construct to use a specific model using [template_name]: Include this construct to use a template for the output of the template tag limit [count]: Include this construct to limit the results to a specific amount as [context_variable]: Include this construct to save the results to a context variable that can be reused many times later Try to avoid multiple values defined positionally in template tags unless they are self-explanatory. Otherwise, this will likely confuse the template developers. Make as many arguments resolvable as possible. Strings without quotes should be treated as context variables that need to be resolved or as short words that remind you of the structure of the template tag components. 
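To see these constructs in action, here is how a tag built with them reads in a template. This usage comes from the {% get_objects %} template tag that we will create later in this article; people.Person and latest_published are example names:

{% load utility_tags %}
{% get_objects latest_published from people.Person limit 3 as people %}

Reading the tag aloud almost describes what it does, which is exactly what these conventions aim for.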
See also The Creating a template filter to show how many days have passed recipe The Creating a template filter to extract the first media object recipe The Creating a template filter to humanize URLs recipe The Creating a template tag to include a template if it exists recipe The Creating a template tag to load a QuerySet in a template recipe The Creating a template tag to parse content as a template recipe The Creating a template tag to modify request query parameters recipe Creating a template filter to show how many days have passed Not all people keep track of the date, and when talking about creation or modification dates of cutting-edge information, for many of us, it is more convenient to read the time difference, for example, the blog entry was posted three days ago, the news article was published today, and the user last logged in yesterday. In this recipe, we will create a template filter named days_since that converts dates to humanized time differences. Getting ready Create the utils app and put it under INSTALLED_APPS in the settings, if you haven't done that yet. Then, create a Python package named templatetags inside this app (Python packages are directories with an empty __init__.py file). How to do it... Create a utility_tags.py file with this content: #utils/templatetags/utility_tags.py# -*- coding: UTF-8 -*-from datetime import datetimefrom django import templatefrom django.utils.translation import ugettext_lazy as _from django.utils.timezone import now as tz_nowregister = template.Library()### FILTERS ###@register.filterdef days_since(value):""" Returns number of days between today and value."""today = tz_now().date()if isinstance(value, datetime.datetime):value = value.date()diff = today - valueif diff.days > 1:return _("%s days ago") % diff.dayselif diff.days == 1:return _("yesterday")elif diff.days == 0:return _("today")else:# Date is in the future; return formatted date.return value.strftime("%B %d, %Y") How it works... If you use this filter in a template like the following, it will render something like yesterday or 5 days ago: {% load utility_tags %}{{ object.created|days_since }} You can apply this filter to the values of the date and datetime types. Each template-tag library has a register where filters and tags are collected. Django filters are functions registered by the register.filter decorator. By default, the filter in the template system will be named the same as the function or the other callable object. If you want, you can set a different name for the filter by passing name to the decorator, as follows: @register.filter(name="humanized_days_since")def days_since(value):... The filter itself is quite self-explanatory. At first, the current date is read. If the given value of the filter is of the datetime type, the date is extracted. Then, the difference between today and the extracted value is calculated. Depending on the number of days, different string results are returned. There's more... This filter is easy to extend to also show the difference in time, such as just now, 7 minutes ago, or 3 hours ago. Just operate the datetime values instead of the date values. See also The Creating a template filter to extract the first media object recipe The Creating a template filter to humanize URLs recipe Creating a template filter to extract the first media object Imagine that you are developing a blog overview page, and for each post, you want to show images, music, or videos in that page taken from the content. 
In such a case, you need to extract the <img>, <object>, and <embed> tags out of the HTML content of the post. In this recipe, we will see how to do this using regular expressions in the get_first_media filter. Getting ready We will start with the utils app that should be set in INSTALLED_APPS in the settings and the templatetags package inside this app. How to do it... In the utility_tags.py file, add the following content: #utils/templatetags/utility_tags.py# -*- coding: UTF-8 -*-import refrom django import templatefrom django.utils.safestring import mark_saferegister = template.Library()### FILTERS ###media_file_regex = re.compile(r"<object .+?</object>|"r"<(img|embed) [^>]+>") )@register.filterdef get_first_media(content):""" Returns the first image or flash file from the htmlcontent """m = media_file_regex.search(content)media_tag = ""if m:media_tag = m.group()return mark_safe(media_tag) How it works... While the HTML content in the database is valid, when you put the following code in the template, it will retrieve the <object>, <img>, or <embed> tags from the content field of the object, or an empty string if no media is found there: {% load utility_tags %} {{ object.content|get_first_media }} At first, we define the compiled regular expression as media_file_regex, then in the filter, we perform a search for that regular expression pattern. By default, the result will show the <, >, and & symbols escaped as &lt;, &gt;, and &amp; entities. But we use the mark_safe function that marks the result as safe HTML ready to be shown in the template without escaping. There's more... It is very easy to extend this filter to also extract the <iframe> tags (which are more recently being used by Vimeo and YouTube for embedded videos) or the HTML5 <audio> and <video> tags. Just modify the regular expression like this: media_file_regex = re.compile(r"<iframe .+?</iframe>|"r"<audio .+?</ audio>|<video .+?</video>|"r"<object .+?</object>|<(img|embed) [^>]+>") See also The Creating a template filter to show how many days have passed recipe The Creating a template filter to humanize URLs recipe Creating a template filter to humanize URLs Usually, common web users enter URLs into address fields without protocol and trailing slashes. In this recipe, we will create a humanize_url filter used to present URLs to the user in a shorter format, truncating very long addresses, just like what Twitter does with the links in tweets. Getting ready As in the previous recipes, we will start with the utils app that should be set in INSTALLED_APPS in the settings, and should contain the templatetags package. How to do it... In the FILTERS section of the utility_tags.py template library in the utils app, let's add a filter named humanize_url and register it: #utils/templatetags/utility_tags.py# -*- coding: UTF-8 -*-import refrom django import templateregister = template.Library()### FILTERS ###@register.filterdef humanize_url(url, letter_count):""" Returns a shortened human-readable URL """letter_count = int(letter_count)re_start = re.compile(r"^https?://")re_end = re.compile(r"/$")url = re_end.sub("", re_start.sub("", url))if len(url) > letter_count:url = u"%s…" % url[:letter_count - 1]return url How it works... 
We can use the humanize_url filter in any template like this: {% load utility_tags %}<a href="{{ object.website }}" target="_blank">{{ object.website|humanize_url:30 }}</a> The filter uses regular expressions to remove the leading protocol and the trailing slash, and then shortens the URL to the given amount of letters, adding an ellipsis to the end if the URL doesn't fit into the specified letter count. See also The Creating a template filter to show how many days have passed recipe The Creating a template filter to extract the first media object recipe The Creating a template tag to include a template if it exists recipe Creating a template tag to include a template if it exists Django has the {% include %} template tag that renders and includes another template. However, in some particular situations, there is a problem that an error is raised if the template does not exist. In this recipe, we will show you how to create a {% try_to_include %} template tag that includes another template, but fails silently if there is no such template. Getting ready We will start again with the utils app that should be installed and is ready for custom template tags. How to do it... Template tags consist of two things: the function parsing the arguments of the template tag and the node class that is responsible for the logic of the template tag as well as for the output. Perform the following steps: First, let's create the function parsing the template-tag arguments: #utils/templatetags/utility_tags.py# -*- coding: UTF-8 -*-from django import templatefrom django.template.loader import get_templateregister = template.Library()### TAGS ###@register.tagdef try_to_include(parser, token):"""Usage: {% try_to_include "sometemplate.html" %}This will fail silently if the template doesn't exist.If it does, it will be rendered with the current context."""try:tag_name, template_name = token.split_contents()except ValueError:raise template.TemplateSyntaxError, "%r tag requires a single argument" % token.contents.split()[0]return IncludeNode(template_name) Then, we need the node class in the same file, as follows: class IncludeNode(template.Node):def __init__(self, template_name):self.template_name = template_namedef render(self, context):try:# Loading the template and rendering ittemplate_name = template.resolve_variable(self. template_name, context)included_template = get_template(template_name).render(context)except template.TemplateDoesNotExist:included_template = ""return included_template How it works... The {% try_to_include %} template tag expects one argument, that is, template_name. So, in the try_to_include function, we are trying to assign the split contents of the token only to the tag_name variable (which is "try_to_include") and the template_name variable. If this doesn't work, the template syntax error is raised. The function returns the IncludeNode object, which gets the template_name field for later usage. In the render method of IncludeNode, we resolve the template_name variable. If a context variable was passed to the template tag, then its value will be used here for template_name. If a quoted string was passed to the template tag, then the content within quotes will be used for template_name. Lastly, we try to load the template and render it with the current template context. If that doesn't work, an empty string is returned. 
There are at least two situations where we could use this template tag: When including a template whose path is defined in a model, as follows: {% load utility_tags %}{% try_to_include object.template_path %} When including a template whose path is defined with the {% with %} template tag somewhere high in the template context variable's scope. This is especially useful when you need to create custom layouts for plugins in the placeholder of a template in Django CMS: #templates/cms/start_page.html{% with editorial_content_template_path="cms/plugins/editorial_content/start_page.html" %}{% placeholder "main_content" %}{% endwith %}#templates/cms/plugins/editorial_content.html{% load utility_tags %}{% if editorial_content_template_path %}{% try_to_include editorial_content_template_path %}{% else %}<div><!-- Some default presentation ofeditorial content plugin --></div>{% endif % There's more... You can use the {% try_to_include %} tag as well as the default {% include %} tag to include templates that extend other templates. This has a beneficial use for large-scale portals where you have different kinds of lists in which complex items share the same structure as widgets but have a different source of data. For example, in the artist list template, you can include the artist item template as follows: {% load utility_tags %}{% for object in object_list %}{% try_to_include "artists/includes/artist_item.html" %}{% endfor %} This template will extend from the item base as follows: {# templates/artists/includes/artist_item.html #}{% extends "utils/includes/item_base.html" %}  {% block item_title %}{{ object.first_name }} {{ object.last_name }}{% endblock %} The item base defines the markup for any item and also includes a Like widget, as follows: {# templates/utils/includes/item_base.html #}{% load likes_tags %}<h3>{% block item_title %}{% endblock %}</h3>{% if request.user.is_authenticated %}{% like_widget for object %}{% endif %} See also  The Creating a template tag to load a QuerySet in a template recipe The Creating a template tag to parse content as a template recipe The Creating a template tag to modify request query parameters recipe Creating a template tag to load a QuerySet in a template Most often, the content that should be shown in a web page will have to be defined in the view. If this is the content to show on every page, it is logical to create a context processor. Another situation is when you need to show additional content such as the latest news or a random quote on some specific pages, for example, the start page or the details page of an object. In this case, you can load the necessary content with the {% get_objects %} template tag, which we will implement in this recipe. Getting ready Once again, we will start with the utils app that should be installed and ready for custom template tags. How to do it... Template tags consist of function parsing arguments passed to the tag and a node class that renders the output of the tag or modifies the template context. 
Perform the following steps: First, let's create the function parsing the template-tag arguments, as follows: #utils/templatetags/utility_tags.py# -*- coding: UTF-8 -*-from django.db import modelsfrom django import templateregister = template.Library()### TAGS ###@register.tagdef get_objects(parser, token):"""Gets a queryset of objects of the model specified by appandmodel namesUsage:{% get_objects [<manager>.]<method> from<app_name>.<model_name> [limit <amount>] as<var_name> %}Example:{% get_objects latest_published from people.Personlimit 3 as people %}{% get_objects site_objects.all from news.Articlelimit 3 as articles %}{% get_objects site_objects.all from news.Articleas articles %}"""amount = Nonetry:tag_name, manager_method, str_from, appmodel,str_limit,amount, str_as, var_name = token.split_contents()except ValueError:try:tag_name, manager_method, str_from, appmodel, str_as,var_name = token.split_contents()except ValueError:raise template.TemplateSyntaxError, "get_objects tag requires a following syntax: ""{% get_objects [<manager>.]<method> from ""<app_ name>.<model_name>"" [limit <amount>] as <var_name> %}"try:app_name, model_name = appmodel.split(".")except ValueError:raise template.TemplateSyntaxError, "get_objects tag requires application name and ""model name separated by a dot"model = models.get_model(app_name, model_name)return ObjectsNode(model, manager_method, amount, var_name) Then, we create the node class in the same file, as follows: class ObjectsNode(template.Node):def __init__(self, model, manager_method, amount, var_name):self.model = modelself.manager_method = manager_methodself.amount = amountself.var_name = var_namedef render(self, context):if "." in self.manager_method:manager, method = self.manager_method.split(".")else:manager = "_default_manager"method = self.manager_methodqs = getattr(getattr(self.model, manager),method,self.model._default_manager.none,)()if self.amount:amount = template.resolve_variable(self.amount,context)context[self.var_name] = qs[:amount]else:context[self.var_name] = qsreturn "" How it works... The {% get_objects %} template tag loads a QuerySet defined by the manager method from a specified app and model, limits the result to the specified amount, and saves the result to a context variable. This is the simplest example of how to use the template tag that we have just created. It will load five news articles in any template using the following snippet: {% load utility_tags %}{% get_objects all from news.Article limit 5 as latest_articles %}{% for article in latest_articles %}<a href="{{ article.get_url_path }}">{{ article.title }}</a>{% endfor %} This is using the all method of the default objects manager of the Article model, and will sort the articles by the ordering attribute defined in the Meta class. A more advanced example would be required to create a custom manager with a custom method to query objects from the database. A manager is an interface that provides database query operations to models. Each model has at least one manager called objects by default. 
As an example, let's create the Artist model, which has a draft or published status, and a new manager, custom_manager, which allows you to select random published artists: #artists/models.py# -*- coding: UTF-8 -*-from django.db import modelsfrom django.utils.translation import ugettext_lazy as _STATUS_CHOICES = (('draft', _("Draft"),('published', _("Published"),)class ArtistManager(models.Manager):def random_published(self):return self.filter(status="published").order_by('?')class Artist(models.Model):# ...status = models.CharField(_("Status"), max_length=20,choices=STATUS_CHOICES)custom_manager = ArtistManager() To load a random published artist, you add the following snippet to any template: {% load utility_tags %}{% get_objects custom_manager.random_published from artists.Artistlimit 1 as random_artists %}{% for artist in random_artists %}{{ artist.first_name }} {{ artist.last_name }}{% endfor %} Let's look at the code of the template tag. In the parsing function, there is one of two formats expected: with the limit and without it. The string is parsed, the model is recognized, and then the components of the template tag are passed to the ObjectNode class. In the render method of the node class, we check the manager's name and its method's name. If this is not defined, _default_manager will be used, which is, in most cases, the same as objects. After that, we call the manager method and fall back to empty the QuerySet if the method doesn't exist. If the limit is defined, we resolve the value of it and limit the QuerySet. Lastly, we save the QuerySet to the context variable. See also The Creating a template tag to include a template if it exists recipe The Creating a template tag to parse content as a template recipe The Creating a template tag to modify request query parameters recipe Creating a template tag to parse content as a template In this recipe, we will create a template tag named {% parse %}, which allows you to put template snippets into the database. This is valuable when you want to provide different content for authenticated and non-authenticated users, when you want to include a personalized salutation, or when you don't want to hardcode media paths in the database. Getting ready No surprise, we will start with the utils app that should be installed and ready for custom template tags. How to do it... Template tags consist of two things: the function parsing the arguments of the template tag and the node class that is responsible for the logic of the template tag as well as for the output. 
Perform the following steps: First, let's create the function parsing the template-tag arguments, as follows: #utils/templatetags/utility_tags.py# -*- coding: UTF-8 -*-from django import templateregister = template.Library()### TAGS ###@register.tagdef parse(parser, token):"""Parses the value as a template and prints it or saves to avariableUsage:{% parse <template_value> [as <variable>] %}Examples:{% parse object.description %}{% parse header as header %}{% parse "{{ MEDIA_URL }}js/" as js_url %}"""bits = token.split_contents()tag_name = bits.pop(0)try:template_value = bits.pop(0)var_name = Noneif len(bits) == 2:bits.pop(0) # remove the word "as"var_name = bits.pop(0)except ValueError:raise template.TemplateSyntaxError, "parse tag requires a following syntax: ""{% parse <template_value> [as <variable>] %}"return ParseNode(template_value, var_name) Then, we create the node class in the same file, as follows: class ParseNode(template.Node):def __init__(self, template_value, var_name):self.template_value = template_valueself.var_name = var_namedef render(self, context):template_value = template.resolve_variable(self.template_value, context)t = template.Template(template_value)context_vars = {}for d in list(context):for var, val in d.items():context_vars[var] = valresult = t.render(template.RequestContext(context['request'], context_vars))if self.var_name:context[self.var_name] = resultreturn ""return result How it works... The {% parse %} template tag allows you to parse a value as a template and to render it immediately or to save it as a context variable. If we have an object with a description field, which can contain template variables or logic, then we can parse it and render it using the following code: {% load utility_tags %}{% parse object.description %} It is also possible to define a value to parse using a quoted string like this: {% load utility_tags %}{% parse "{{ STATIC_URL }}site/img/" as img_path %}<img src="{{ img_path }}someimage.png" alt="" /> Let's have a look at the code of the template tag. The parsing function checks the arguments of the template tag bit by bit. At first, we expect the name parse, then the template value, then optionally the word as, and lastly the context variable name. The template value and the variable name are passed to the ParseNode class. The render method of that class at first resolves the value of the template variable and creates a template object out of it. Then, it renders the template with all the context variables. If the variable name is defined, the result is saved to it; otherwise, the result is shown immediately. See also The Creating a template tag to include a template if it exists recipe The Creating a template tag to load a QuerySet in a template recipe The Creating a template tag to modify request query parameters recipe Creating a template tag to modify request query parameters Django has a convenient and flexible system to create canonical, clean URLs just by adding regular expression rules in the URL configuration files. But there is a lack of built-in mechanisms to manage query parameters. Views such as search or filterable object lists need to accept query parameters to drill down through filtered results using another parameter or to go to another page. In this recipe, we will create a template tag named {% append_to_query %}, which lets you add, change, or remove parameters of the current query. 
Getting ready Once again, we start with the utils app that should be set in INSTALLED_APPS and should contain the templatetags package. Also, make sure that you have the request context processor set for the TEMPLATE_CONTEXT_PROCESSORS setting, as follows: #settings.pyTEMPLATE_CONTEXT_PROCESSORS = ("django.contrib.auth.context_processors.auth","django.core.context_processors.debug","django.core.context_processors.i18n","django.core.context_processors.media","django.core.context_processors.static","django.core.context_processors.tz","django.contrib.messages.context_processors.messages","django.core.context_processors.request",) How to do it... For this template tag, we will be using the simple_tag decorator that parses the components and requires you to define just the rendering function, as follows: #utils/templatetags/utility_tags.py# -*- coding: UTF-8 -*-import urllibfrom django import templatefrom django.utils.encoding import force_strregister = template.Library()### TAGS ###@register.simple_tag(takes_context=True)def append_to_query(context, **kwargs):""" Renders a link with modified current query parameters """query_params = context['request'].GET.copy()for key, value in kwargs.items():query_params[key] = valuequery_string = u""if len(query_params):query_string += u"?%s" % urllib.urlencode([(key, force_str(value)) for (key, value) inquery_params. iteritems() if value]).replace('&', '&amp;')return query_string How it works... The {% append_to_query %} template tag reads the current query parameters from the request.GET dictionary-like QueryDict object to a new dictionary named query_params, and loops through the keyword parameters passed to the template tag updating the values. Then, the new query string is formed, all spaces and special characters are URL-encoded, and ampersands connecting query parameters are escaped. This new query string is returned to the template. To read more about QueryDict objects, refer to the official Django documentation: https://docs.djangoproject.com/en/1.6/ref/request-response/#querydict-objects Let's have a look at an example of how the {% append_to_query %} template tag can be used. If the current URL is http://127.0.0.1:8000/artists/?category=fine-art&page=1, we can use the following template tag to render a link that goes to the next page: {% load utility_tags %}<a href="{% append_to_query page=2 %}">2</a> The following is the output rendered, using the preceding template tag: <a href="?category=fine-art&amp;page=2">2</a> Or we can use the following template tag to render a link that resets pagination and goes to another category: {% load utility_tags i18n %} <a href="{% append_to_query category="sculpture" page="" %}">{% trans "Sculpture" %}</a> The following is the output rendered, using the preceding template tag: <a href="?category=sculpture">Sculpture</a> See also The Creating a template tag to include a template if it exists recipe The Creating a template tag to load a QuerySet in a template recipe The Creating a template tag to parse content as a template recipe Summary In this article showed you how to create and use your own template filters and tags, as the default Django template system is quite extensive, and there are more things to add for different cases. Resources for Article: Further resources on this subject: Adding a developer with Django forms [Article] So, what is Django? [Article] Django JavaScript Integration: jQuery In-place Editing Using Ajax [Article]
How to Create a Flappy Bird Clone with MelonJS

Ellison Leao
26 Sep 2014
18 min read
How to create a Flappy Bird clone using MelonJS Web game frameworks such as MelonJS are becoming more popular every day. In this post I will show you how easy it is to create a Flappy Bird clone game using the MelonJS bag of tricks. I will assume that you have some experience with JavaScript and that you have visited the melonJS official page. All of the code shown in this post is available on this GitHub repository. Step 1 - Organization A MelonJS game can be divided into three basic objects: Scene objects: Define all of the game scenes (Play, Menus, Game Over, High Score, and so on) Game entities: Add all of the stuff that interacts on the game (Players, enemies, collectables, and so on) Hud entities: All of the HUD objects to be inserted on the scenes (Life, Score, Pause buttons, and so on) For our Flappy Bird game, first create a directory, flappybirdgame, on your machine. Then create the following structure: flabbybirdgame | |--js |--|--entities |--|--screens |--|--game.js |--|--resources.js |--data |--|--img |--|--bgm |--|--sfx |--lib |--index.html Just a quick explanation about the folders: The js contains all of the game source. The entities folder will handle the HUD and the Game entities. In the screen folder, we will create all of the scene files. The game.js is the main game file. It will initialize all of the game resources, which is created in the resources.js file, the input, and the loading of the first scene. The data folder is where all of the assets, sounds, and game themes are inserted. I divided the folders into img for images (backgrounds, player atlas, menus, and so on), bgm for background music files (we need to provide a .ogg and .mp3 file for each sound if we want full compatibility with all browsers) and sfx for sound effects. In the lib folder we will add the current 1.0.2 version of MelonJS. Lastly, an index.html file is used to build the canvas. Step 2 - Implementation First we will build the game.js file: var game = { data: { score : 0, steps: 0, start: false, newHiScore: false, muted: false }, "onload": function() { if (!me.video.init("screen", 900, 600, true, 'auto')) { alert("Your browser does not support HTML5 canvas."); return; } me.audio.init("mp3,ogg"); me.loader.onload = this.loaded.bind(this); me.loader.preload(game.resources); me.state.change(me.state.LOADING); }, "loaded": function() { me.state.set(me.state.MENU, new game.TitleScreen()); me.state.set(me.state.PLAY, new game.PlayScreen()); me.state.set(me.state.GAME_OVER, new game.GameOverScreen()); me.input.bindKey(me.input.KEY.SPACE, "fly", true); me.input.bindKey(me.input.KEY.M, "mute", true); me.input.bindPointer(me.input.KEY.SPACE); me.pool.register("clumsy", BirdEntity); me.pool.register("pipe", PipeEntity, true); me.pool.register("hit", HitEntity, true); // in melonJS 1.0.0, viewport size is set to Infinity by default me.game.viewport.setBounds(0, 0, 900, 600); me.state.change(me.state.MENU); } }; The game.js is divided into: data object: This global object will handle all of the global variables that will be used on the game. For our game we will use score to record the player score, and steps to record how far the bird goes. The other variables are flags that we are using to control some game states. onload method: This method preloads the resources and initializes the canvas screen and then calls the loaded method when it's done. loaded method: This method first creates and puts into the state stack the screens that we will use on the game. 
We will use the implementation for these screens later on. It enables all of the input keys to handle the game. For our game we will be using the space and left mouse keys to control the bird and the M key to mute sound. It also adds the game entities BirdEntity, PipeEntity and the HitEntity in the game poll. I will explain the entities later. Then you need to create the resource.js file: game.resources = [ {name: "bg", type:"image", src: "data/img/bg.png"}, {name: "clumsy", type:"image", src: "data/img/clumsy.png"}, {name: "pipe", type:"image", src: "data/img/pipe.png"}, {name: "logo", type:"image", src: "data/img/logo.png"}, {name: "ground", type:"image", src: "data/img/ground.png"}, {name: "gameover", type:"image", src: "data/img/gameover.png"}, {name: "gameoverbg", type:"image", src: "data/img/gameoverbg.png"}, {name: "hit", type:"image", src: "data/img/hit.png"}, {name: "getready", type:"image", src: "data/img/getready.png"}, {name: "new", type:"image", src: "data/img/new.png"}, {name: "share", type:"image", src: "data/img/share.png"}, {name: "tweet", type:"image", src: "data/img/tweet.png"}, {name: "leader", type:"image", src: "data/img/leader.png"}, {name: "theme", type: "audio", src: "data/bgm/"}, {name: "hit", type: "audio", src: "data/sfx/"}, {name: "lose", type: "audio", src: "data/sfx/"}, {name: "wing", type: "audio", src: "data/sfx/"}, ]; Now let's create the game entities. First the HUD elements: create a HUD.js file in the entities folder. In this file you will create: A score entity A background layer entity The share buttons entities (Facebook, Twitter, and so on) game.HUD = game.HUD || {}; game.HUD.Container = me.ObjectContainer.extend({ init: function() { // call the constructor this.parent(); // persistent across level change this.isPersistent = true; // non collidable this.collidable = false; // make sure our object is always draw first this.z = Infinity; // give a name this.name = "HUD"; // add our child score object at the top left corner this.addChild(new game.HUD.ScoreItem(5, 5)); } }); game.HUD.ScoreItem = me.Renderable.extend({ init: function(x, y) { // call the parent constructor // (size does not matter here) this.parent(new me.Vector2d(x, y), 10, 10); // local copy of the global score this.stepsFont = new me.Font('gamefont', 80, '#000', 'center'); // make sure we use screen coordinates this.floating = true; }, update: function() { return true; }, draw: function (context) { if (game.data.start && me.state.isCurrent(me.state.PLAY)) this.stepsFont.draw(context, game.data.steps, me.video.getWidth()/2, 10); } }); var BackgroundLayer = me.ImageLayer.extend({ init: function(image, z, speed) { name = image; width = 900; height = 600; ratio = 1; // call parent constructor this.parent(name, width, height, image, z, ratio); }, update: function() { if (me.input.isKeyPressed('mute')) { game.data.muted = !game.data.muted; if (game.data.muted){ me.audio.disable(); }else{ me.audio.enable(); } } return true; } }); var Share = me.GUI_Object.extend({ init: function(x, y) { var settings = {}; settings.image = "share"; settings.spritewidth = 150; settings.spriteheight = 75; this.parent(x, y, settings); }, onClick: function(event) { var shareText = 'Just made ' + game.data.steps + ' steps on Clumsy Bird! Can you beat me? 
Try online here!'; var url = 'http://ellisonleao.github.io/clumsy-bird/'; FB.ui( { method: 'feed', name: 'My Clumsy Bird Score!', caption: "Share to your friends", description: ( shareText ), link: url, picture: 'http://ellisonleao.github.io/clumsy-bird/data/img/clumsy.png' } ); return false; } }); var Tweet = me.GUI_Object.extend({ init: function(x, y) { var settings = {}; settings.image = "tweet"; settings.spritewidth = 152; settings.spriteheight = 75; this.parent(x, y, settings); }, onClick: function(event) { var shareText = 'Just made ' + game.data.steps + ' steps on Clumsy Bird! Can you beat me? Try online here!'; var url = 'http://ellisonleao.github.io/clumsy-bird/'; var hashtags = 'clumsybird,melonjs' window.open('https://twitter.com/intent/tweet?text=' + shareText + '&hashtags=' + hashtags + '&count=' + url + '&url=' + url, 'Tweet!', 'height=300,width=400') return false; } }); You should notice that there are different me classes for different types of entities. The ScoreItem is a Renderable object that is created under an ObjectContainer and it will render the game steps on the play screen that we will create later. The share and Tweet buttons are created with the GUI_Object class. This class implements the onClick event that handles click events used to create the share events. The BackgroundLayer is a particular object created using the ImageLayer class. This class controls some generic image layers that can be used in the game. In our particular case we are just using a single fixed image, with fixed ratio and no scrolling. Now to the game entities. For this game we will need: BirdEntity: The bird and its behavior PipeEntity: The pipe object HitEntity: A invisible entity just to get the steps counting PipeGenerator: Will handle the PipeEntity creation Ground: A entity for the ground TheGround: The animated ground Container Add an entities.js file into the entities folder: var BirdEntity = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('clumsy'); settings.width = 85; settings.height = 60; settings.spritewidth = 85; settings.spriteheight= 60; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 0.2; this.gravityForce = 0.01; this.maxAngleRotation = Number.prototype.degToRad(30); this.maxAngleRotationDown = Number.prototype.degToRad(90); this.renderable.addAnimation("flying", [0, 1, 2]); this.renderable.addAnimation("idle", [0]); this.renderable.setCurrentAnimation("flying"); this.animationController = 0; // manually add a rectangular collision shape this.addShape(new me.Rect(new me.Vector2d(5, 5), 70, 50)); // a tween object for the flying physic effect this.flyTween = new me.Tween(this.pos); this.flyTween.easing(me.Tween.Easing.Exponential.InOut); }, update: function(dt) { // mechanics if (game.data.start) { if (me.input.isKeyPressed('fly')) { me.audio.play('wing'); this.gravityForce = 0.01; var currentPos = this.pos.y; // stop the previous one this.flyTween.stop() this.flyTween.to({y: currentPos - 72}, 100); this.flyTween.start(); this.renderable.angle = -this.maxAngleRotation; } else { this.gravityForce += 0.2; this.pos.y += me.timer.tick * this.gravityForce; this.renderable.angle += Number.prototype.degToRad(3) * me.timer.tick; if (this.renderable.angle > this.maxAngleRotationDown) this.renderable.angle = this.maxAngleRotationDown; } } var res = me.game.world.collide(this); if (res) { if (res.obj.type != 'hit') { me.device.vibrate(500); me.state.change(me.state.GAME_OVER); return false; } // remove the 
hit box me.game.world.removeChildNow(res.obj); // the give dt parameter to the update function // give the time in ms since last frame // use it instead ? game.data.steps++; me.audio.play('hit'); } else { var hitGround = me.game.viewport.height - (96 + 60); var hitSky = -80; // bird height + 20px if (this.pos.y >= hitGround || this.pos.y <= hitSky) { me.state.change(me.state.GAME_OVER); return false; } } return this.parent(dt); }, }); var PipeEntity = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('pipe'); settings.width = 148; settings.height= 1664; settings.spritewidth = 148; settings.spriteheight= 1664; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 5; this.updateTime = false; }, update: function(dt) { // mechanics this.pos.add(new me.Vector2d(-this.gravity * me.timer.tick, 0)); if (this.pos.x < -148) { me.game.world.removeChild(this); } return true; }, }); var PipeGenerator = me.Renderable.extend({ init: function() { this.parent(new me.Vector2d(), me.game.viewport.width, me.game.viewport.height); this.alwaysUpdate = true; this.generate = 0; this.pipeFrequency = 92; this.pipeHoleSize = 1240; this.posX = me.game.viewport.width; }, update: function(dt) { if (this.generate++ % this.pipeFrequency == 0) { var posY = Number.prototype.random( me.video.getHeight() - 100, 200 ); var posY2 = posY - me.video.getHeight() - this.pipeHoleSize; var pipe1 = new me.pool.pull("pipe", this.posX, posY); var pipe2 = new me.pool.pull("pipe", this.posX, posY2); var hitPos = posY - 100; var hit = new me.pool.pull("hit", this.posX, hitPos); pipe1.renderable.flipY(); me.game.world.addChild(pipe1, 10); me.game.world.addChild(pipe2, 10); me.game.world.addChild(hit, 11); } return true; }, }); var HitEntity = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('hit'); settings.width = 148; settings.height= 60; settings.spritewidth = 148; settings.spriteheight= 60; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 5; this.updateTime = false; this.type = 'hit'; this.renderable.alpha = 0; this.ac = new me.Vector2d(-this.gravity, 0); }, update: function() { // mechanics this.pos.add(this.ac); if (this.pos.x < -148) { me.game.world.removeChild(this); } return true; }, }); var Ground = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('ground'); settings.width = 900; settings.height= 96; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 0; this.updateTime = false; this.accel = new me.Vector2d(-4, 0); }, update: function() { // mechanics this.pos.add(this.accel); if (this.pos.x < -this.renderable.width) { this.pos.x = me.video.getWidth() - 10; } return true; }, }); var TheGround = Object.extend({ init: function() { this.ground1 = new Ground(0, me.video.getHeight() - 96); this.ground2 = new Ground(me.video.getWidth(), me.video.getHeight() - 96); me.game.world.addChild(this.ground1, 11); me.game.world.addChild(this.ground2, 11); }, update: function () { return true; } }) Note that every game entity inherits from the me.ObjectEntity class. We need to pass the settings of the entity on the init method, telling it which image we will use from the resources along with the image measure. We also implement the update method for each Entity, telling it how it will behave during game time. Now we need to create our scenes. 
The game is divided into: TitleScreen PlayScreen GameOverScreen We will separate the scenes into js files. First create a title.js file in the screens folder: game.TitleScreen = me.ScreenObject.extend({ init: function(){ this.font = null; }, onResetEvent: function() { me.audio.stop("theme"); game.data.newHiScore = false; me.game.world.addChild(new BackgroundLayer('bg', 1)); me.input.bindKey(me.input.KEY.ENTER, "enter", true); me.input.bindKey(me.input.KEY.SPACE, "enter", true); me.input.bindPointer(me.input.mouse.LEFT, me.input.KEY.ENTER); this.handler = me.event.subscribe(me.event.KEYDOWN, function (action, keyCode, edge) { if (action === "enter") { me.state.change(me.state.PLAY); } }); //logo var logoImg = me.loader.getImage('logo'); var logo = new me.SpriteObject ( me.game.viewport.width/2 - 170, -logoImg, logoImg ); me.game.world.addChild(logo, 10); var logoTween = new me.Tween(logo.pos).to({y: me.game.viewport.height/2 - 100}, 1000).easing(me.Tween.Easing.Exponential.InOut).start(); this.ground = new TheGround(); me.game.world.addChild(this.ground, 11); me.game.world.addChild(new (me.Renderable.extend ({ // constructor init: function() { // size does not matter, it's just to avoid having a zero size // renderable this.parent(new me.Vector2d(), 100, 100); //this.font = new me.Font('Arial Black', 20, 'black', 'left'); this.text = me.device.touch ? 'Tap to start' : 'PRESS SPACE OR CLICK LEFT MOUSE BUTTON TO START ntttttttttttPRESS "M" TO MUTE SOUND'; this.font = new me.Font('gamefont', 20, '#000'); }, update: function () { return true; }, draw: function (context) { var measure = this.font.measureText(context, this.text); this.font.draw(context, this.text, me.game.viewport.width/2 - measure.width/2, me.game.viewport.height/2 + 50); } })), 12); }, onDestroyEvent: function() { // unregister the event me.event.unsubscribe(this.handler); me.input.unbindKey(me.input.KEY.ENTER); me.input.unbindKey(me.input.KEY.SPACE); me.input.unbindPointer(me.input.mouse.LEFT); me.game.world.removeChild(this.ground); } }); Then, create a play.js file on the same folder: game.PlayScreen = me.ScreenObject.extend({ init: function() { me.audio.play("theme", true); // lower audio volume on firefox browser var vol = me.device.ua.contains("Firefox") ? 
0.3 : 0.5; me.audio.setVolume(vol); this.parent(this); }, onResetEvent: function() { me.audio.stop("theme"); if (!game.data.muted){ me.audio.play("theme", true); } me.input.bindKey(me.input.KEY.SPACE, "fly", true); game.data.score = 0; game.data.steps = 0; game.data.start = false; game.data.newHiscore = false; me.game.world.addChild(new BackgroundLayer('bg', 1)); this.ground = new TheGround(); me.game.world.addChild(this.ground, 11); this.HUD = new game.HUD.Container(); me.game.world.addChild(this.HUD); this.bird = me.pool.pull("clumsy", 60, me.game.viewport.height/2 - 100); me.game.world.addChild(this.bird, 10); //inputs me.input.bindPointer(me.input.mouse.LEFT, me.input.KEY.SPACE); this.getReady = new me.SpriteObject( me.video.getWidth()/2 - 200, me.video.getHeight()/2 - 100, me.loader.getImage('getready') ); me.game.world.addChild(this.getReady, 11); var fadeOut = new me.Tween(this.getReady).to({alpha: 0}, 2000) .easing(me.Tween.Easing.Linear.None) .onComplete(function() { game.data.start = true; me.game.world.addChild(new PipeGenerator(), 0); }).start(); }, onDestroyEvent: function() { me.audio.stopTrack('theme'); // free the stored instance this.HUD = null; this.bird = null; me.input.unbindKey(me.input.KEY.SPACE); me.input.unbindPointer(me.input.mouse.LEFT); } }); Finally, the gameover.js screen: game.GameOverScreen = me.ScreenObject.extend({ init: function() { this.savedData = null; this.handler = null; }, onResetEvent: function() { me.audio.play("lose"); //save section this.savedData = { score: game.data.score, steps: game.data.steps }; me.save.add(this.savedData); // clay.io if (game.data.score > 0) { me.plugin.clay.leaderboard('clumsy'); } if (!me.save.topSteps) me.save.add({topSteps: game.data.steps}); if (game.data.steps > me.save.topSteps) { me.save.topSteps = game.data.steps; game.data.newHiScore = true; } me.input.bindKey(me.input.KEY.ENTER, "enter", true); me.input.bindKey(me.input.KEY.SPACE, "enter", false) me.input.bindPointer(me.input.mouse.LEFT, me.input.KEY.ENTER); this.handler = me.event.subscribe(me.event.KEYDOWN, function (action, keyCode, edge) { if (action === "enter") { me.state.change(me.state.MENU); } }); var gImage = me.loader.getImage('gameover'); me.game.world.addChild(new me.SpriteObject( me.video.getWidth()/2 - gImage.width/2, me.video.getHeight()/2 - gImage.height/2 - 100, gImage ), 12); var gImageBoard = me.loader.getImage('gameoverbg'); me.game.world.addChild(new me.SpriteObject( me.video.getWidth()/2 - gImageBoard.width/2, me.video.getHeight()/2 - gImageBoard.height/2, gImageBoard ), 10); me.game.world.addChild(new BackgroundLayer('bg', 1)); this.ground = new TheGround(); me.game.world.addChild(this.ground, 11); // share button var buttonsHeight = me.video.getHeight() / 2 + 200; this.share = new Share(me.video.getWidth()/3 - 100, buttonsHeight); me.game.world.addChild(this.share, 12); //tweet button this.tweet = new Tweet(this.share.pos.x + 170, buttonsHeight); me.game.world.addChild(this.tweet, 12); //leaderboard button this.leader = new Leader(this.tweet.pos.x + 170, buttonsHeight); me.game.world.addChild(this.leader, 12); // add the dialog witht he game information if (game.data.newHiScore) { var newRect = new me.SpriteObject( 235, 355, me.loader.getImage('new') ); me.game.world.addChild(newRect, 12); } this.dialog = new (me.Renderable.extend({ // constructor init: function() { // size does not matter, it's just to avoid having a zero size // renderable this.parent(new me.Vector2d(), 100, 100); this.font = new me.Font('gamefont', 40, 'black', 
'left'); this.steps = 'Steps: ' + game.data.steps.toString(); this.topSteps= 'Higher Step: ' + me.save.topSteps.toString(); }, update: function () { return true; }, draw: function (context) { var stepsText = this.font.measureText(context, this.steps); var topStepsText = this.font.measureText(context, this.topSteps); var scoreText = this.font.measureText(context, this.score); //steps this.font.draw( context, this.steps, me.game.viewport.width/2 - stepsText.width/2 - 60, me.game.viewport.height/2 ); //top score this.font.draw( context, this.topSteps, me.game.viewport.width/2 - stepsText.width/2 - 60, me.game.viewport.height/2 + 50 ); } })); me.game.world.addChild(this.dialog, 12); }, onDestroyEvent: function() { // unregister the event me.event.unsubscribe(this.handler); me.input.unbindKey(me.input.KEY.ENTER); me.input.unbindKey(me.input.KEY.SPACE); me.input.unbindPointer(me.input.mouse.LEFT); me.game.world.removeChild(this.ground); this.font = null; me.audio.stop("theme"); } });  Here is how the ScreenObjects works: First it calls the init constructor method for any variable initialization. onResetEvent is called next. This method will be called every time the scene is called. In our case the onResetEvent will add some objects to the game world stack. The onDestroyEvent acts like a garbage collector and unregisters bind events and removes some elements on the draw calls. Now, let's put it all together in the index.html file: <!DOCTYPE HTML> <html lang="en"> <head> <title>Clumsy Bird</title> </head> <body> <!-- the facebook init for the share button --> <div id="fb-root"></div> <script> window.fbAsyncInit = function() { FB.init({ appId : '213642148840283', status : true, xfbml : true }); }; (function(d, s, id){ var js, fjs = d.getElementsByTagName(s)[0]; if (d.getElementById(id)) {return;} js = d.createElement(s); js.id = id; js.src = "//connect.facebook.net/pt_BR/all.js"; fjs.parentNode.insertBefore(js, fjs); }(document, 'script', 'facebook-jssdk')); </script> <!-- Canvas placeholder --> <div id="screen"></div> <!-- melonJS Library --> <script type="text/javascript" src="lib/melonJS-1.0.2.js" ></script> <script type="text/javascript" src="js/entities/HUD.js" ></script> <script type="text/javascript" src="js/entities/entities.js" ></script> <script type="text/javascript" src="js/screens/title.js" ></script> <script type="text/javascript" src="js/screens/play.js" ></script> <script type="text/javascript" src="js/screens/gameover.js" ></script> </body> </html> Step 3 - Flying! To run our game we will need a web server of your choice. If you have Python installed, you can simply type the following in your shell: $python -m SimpleHTTPServer Then you can open your browser at http://localhost:8000. If all went well, you will see the title screen after it loads, like in the following image: I hope you enjoyed this post!  About this author Ellison Leão (@ellisonleao) is a passionate software engineer with more than 6 years of experience in web projects and is a contributor to the MelonJS framework and other open source projects. When he is not writing games, he loves to play drums.

Interfacing React Components with Angular Applications

Patrick Marabeas
26 Sep 2014
10 min read
There's been talk lately of using React as the view within Angular's MVC architecture. Angular, as we all know, uses dirty checking. As I'll touch on later, it accepts the fact of (minor) performance loss to gain the great two-way data binding it has. React, on the other hand, uses a virtual DOM and only renders the difference. This results in very fast performance. So, how do we leverage React's performance from our Angular application? Can we retain two-way data flow? And just how significant is the performance increase? The nrg module and demo code can be found over on my GitHub. The application To demonstrate communication between the two frameworks, let's build a reusable Angular module (nrg[Angular(ng) + React(r) = energy(nrg)!]) which will render (and re-render) a React component when our model changes. The React component will be composed of aninputandpelement that will display our model and will also update the model on change. To show this, we'll add aninputandpto our view bound to the model. In essence, changes to either input should result in all elements being kept in sync. We'll also add a button to our component that will demonstrate component unmounting on scope destruction. ;( ;(function(window, document, angular, undefined) { 'use strict'; angular.module('app', ['nrg']) .controller('MainController', ['$scope', function($scope) { $scope.text = 'This is set in Angular'; $scope.destroy = function() { $scope.$destroy(); } }]); })(window, document, angular); data-component specifies the React component we want to mount.data-ctrl (optional) specifies the controller we want to inject into the directive—this will allow specific components to be accessible onscope itself rather than scope.$parent.data-ng-model is the model we are going to pass between our Angular controller and our React view. <div data-ng-controller="MainController"> <!-- React component --> <div data-component="reactComponent" data-ctrl="" data-ng-model="text"> <!-- <input /> --> <!-- <button></button> --> <!-- <p></p> --> </div> <!-- Angular view --> <input type="text" data-ng-model="text" /> <p>{{text}}</p> </div> As you can see, the view has meaning when using Angular to render React components.<div data-component="reactComponent" data-ctrl="" data-ng-model="text"></div> has meaning when compared to<div id="reactComponent"></div>,which requires referencing a script file to see what component (and settings) will be mounted on that element. The Angular module - nrg.js The main functions of this reusable Angular module will be to: Specify the DOM element that the component should be mounted onto. Render the React component when changes have been made to the model. Pass the scope and element attributes to the component. Unmount the React component when the Angular scope is destroyed. The skeleton of our module looks like this: ;(function(window, document, angular, React, undefined) { 'use strict'; angular.module('nrg', []) To keep our code modular and extensible, we'll create a factory that will house our component functions, which are currently justrender and unmount . .factory('ComponentFactory', [function() { return { render: function() { }, unmount: function() { } } }]) This will be injected into our directive. .directive('component', ['$controller', 'ComponentFactory', function($controller, ComponentFactory) { return { restrict: 'EA', If a controller has been specified on the elements viadata-ctrl , then inject the$controller service. 
As mentioned earlier, this will allow scope variables and functions to be used within the React component to be accessible directly onscope , rather thanscope.$parent (the controller also doesn't need to be declared in the view withng-controller ). controller: function($scope, $element, $attrs){ return ($attrs.ctrl) ? $controller($attrs.ctrl, {$scope:$scope, $element:$element, $attrs:$attrs}) : null; }, Here’s an isolated scope with two-way-binding ondata-ng-model . scope: { ngModel: '=' }, link: function(scope, element, attrs) { // Calling ComponentFactory.render() & watching ng-model } } }]); })(window, document, angular, React); ComponentFactory Fleshing out theComponentFactory , we'll need to know how to render and unmount components. React.renderComponent( ReactComponent component, DOMElement container, [function callback] ) As such, we'll need to pass the component we wish to mount (component), the container we want to mount it in (element) and any properties (attrsandscope) we wish to pass to the component. This render function will be called every time the model is updated, so the updated scope will be pushed through each time. According to the React documentation, "If the React component was previously rendered into container, this (React.renderComponent) will perform an update on it and only mutate the DOM as necessary to reflect the latest React component." .factory('ComponentFactory', [function() { return { render: function(component, element, scope, attrs) { // If you have name-spaced your components, you'll want to specify that here - or pass it in via an attribute etc React.renderComponent(window[component]({ scope: scope, attrs: attrs }), element[0]); }, unmount: function(element) { React.unmountComponentAtNode(element[0]); } } }]) Component directive Back in our directive, we can now set up when we are going to call these two functions. link: function(scope, element, attrs) { // Collect the elements attrs in a nice usable object var attributes = {}; angular.forEach(element[0].attributes, function(a) { attributes[a.name.replace('data-','')] = a.value; }); // Render the component when the directive loads ComponentFactory.render(attrs.component, element, scope, attributes); // Watch the model and re-render the component scope.$watch('ngModel', function() { ComponentFactory.render(attrs.component, element, scope, attributes); }, true); // Unmount the component when the scope is destroyed scope.$on('$destroy', function () { ComponentFactory.unmount(element); }); } This implements dirty checking to see if the model has been updated. I haven't played around too much to see if there's a notable difference in performance between this and using a broadcast/listener. That said, to get a listener working as expected, you will need to wrap the render call in a $timeout to push it to the bottom of the stack to ensure scope is updated. scope.$on('renderMe', function() { $timeout(function() { ComponentFactory.render(attrs.component, element, scope, attributes); }); }); The React component We can now build our React component, which will use the model we defined as well as inform Angular of any updates it performs. /** @jsx React.DOM */ ;(function(window, document, React, undefined) { 'use strict'; window.reactComponent = React.createClass({ This is the content that will be rendered into the container. The properties that we passed to the component ({ scope: scope, attrs: attrs }) when we called React.renderComponent back in our component directive are now accessible via this.props. 
render: function(){ return ( <div> <input type='text' value={this.props.scope.ngModel} onChange={this.handleChange} /> <button onClick={this.deleteScope}>Destroy Scope</button> <p>{this.props.scope.ngModel}</p> </div> ) }, Via the on Change   event, we can call for Angular to run a digest, just as we normally would, but accessing scope via this.props : handleChange: function(event) { var _this = this; this.props.scope.$apply(function() { _this.props.scope.ngModel = event.target.value; }); }, Here we deal with the click event deleteScope  . The controller is accessible via scope.$parent  . If we had injected a controller into the component directive, its contents would be accessible directly on scope  , just as ngModel is.     deleteScope: function() { this.props.scope.$parent.destroy(); } }); })(window, document, React); The result Putting this code together (you can view the completed code on GitHub, or see it in action) we end up with: Two input elements, both of which update the model. Any changes in either our Angular application or our React view will be reflected in both. A React component button that calls a function in our MainController, destroying the scope and also resulting in the unmounting of the component. Pretty cool. But where is my perf increase!? This is obviously too small an application for anything to be gained by throwing your view over to React. To demonstrate just how much faster applications can be (by using React as the view), we'll throw a kitchen sink worth of randomly generated data at it. 5000 bits to be precise. Now, it should be stated that you probably have a pretty questionable UI if you have this much data binding going on. Misko Hevery has a great response regarding Angular's performance on StackOverflow. In summary: Humans are: Slow: Anything faster than 50ms is imperceptible to humans and thus can be considered as "instant". Limited: You can't really show more than about 2000 pieces of information to a human on a single page. Anything more than that is really bad UI, and humans can't process this anyway. Basically, know Angular's limits and your user's limits! That said, the following performance test was certainly accentuated on mobile devices. Though, on the flip side, UI should be simpler on mobile. Brute force performance demonstration ;(function(window, document, angular, undefined) { 'use strict'; angular.module('app') .controller('NumberController', ['$scope', function($scope) { $scope.numbers = []; ($scope.numGen = function(){ for(var i = 0; i < 5000; i++) { $scope.numbers[i] = Math.floor(Math.random() * (999999999999999 - 1)) + 1; } })(); }]); })(window, document, angular); Angular ng-repeat <div data-ng-controller="NumberController"> <button ng-click="numGen()">Refresh Data</button> <table> <tr ng-repeat="number in numbers"> <td>{{number}}</td> </tr> </table> </div> There was definitely lag felt as the numbers were loaded in and refreshed. From start to finish, this took around 1.5 seconds. React component <div data-ng-controller="NumberController"> <button ng-click="numGen()">Refresh Data</button> <div data-component="numberComponent" data-ng-model="numbers"></div> </div> ;(function(window, document, React, undefined) { window.numberComponent = React.createClass({ render: function() { var rows = this.props.scope.ngModel.map(function(number) { return ( <tr> <td>{number}</td> </tr> ); }); return ( <table>{rows}</table> ); } }); })(window, document, React); So that just happened. 270 milliseconds start to finish. Around 80% faster! 
Conclusion So, should you go rewrite all those Angular modules as React components? Probably not. It really comes down to the application you are developing and how dependent you are on OSS. It's definitely possible that a handful of complex modules could put your application in the realm of “feeling a tad sluggish”, but it should be remembered that perceived performance is all that matters to the user. Altering the manner in which content is loaded could end up being a better investment of time. Users will definitely feel performance increases on mobile websites sooner, however, and is certainly something to keep in mind. The nrg module and demo code can be found over on my GitHub. Visit our JavaScript page for more JavaScript content and tutorials!  About the author A guest post by Patrick Marabeas, a freelance frontend developer who loves learning and working with cutting edge web technologies. He spends much of his free time developing Angular modules, such as ng-FitText, ng-Slider, ng-YouTubeAPI, and ng-ScrollSpy. You can follow him on Twitter: @patrickmarabeas.

Building a Content Management System

Packt
25 Sep 2014
25 min read
In this article by Charles R. Portwood II, the author of Yii Project Blueprints, we will look at how to create a feature-complete content management system and blogging platform. (For more resources related to this topic, see here.) Describing the project Our CMS can be broken down into several different components: Users who will be responsible for viewing and managing the content Content to be managed Categories for our content to be placed into Metadata to help us further define our content and users Search engine optimizations Users The first component of our application is the users who will perform all the tasks in our application. For this application, we're going to largely reuse the user database and authentication system. In this article, we'll enhance this functionality by allowing social authentication. Our CMS will allow users to register new accounts from the data provided by Twitter; after they have registered, the CMS will allow them to sign-in to our application by signing in to Twitter. To enable us to know if a user is a socially authenticated user, we have to make several changes to both our database and our authentication scheme. First, we're going to need a way to indicate whether a user is a socially authenticated user. Rather than hardcoding a isAuthenticatedViaTwitter column in our database, we'll create a new database table called user_metadata, which will be a simple table that contains the user's ID, a unique key, and a value. This will allow us to store additional information about our users without having to explicitly change our user's database table every time we want to make a change: ID INTEGER PRIMARY KEYuser_id INTEGERkey STRINGvalue STRINGcreated INTEGERupdated INTEGER We'll also need to modify our UserIdentity class to allow socially authenticated users to sign in. To do this, we'll be expanding upon this class to create a RemoteUserIdentity class that will work off the OAuth codes that Twitter (or any other third-party source that works with HybridAuth) provide to us rather than authenticating against a username and password. Content At the core of our CMS is our content that we'll manage. For this project, we'll manage simple blog posts that can have additional metadata associated with them. Each post will have a title, a body, an author, a category, a unique URI or slug, and an indication whether it has been published or not. Our database structure for this table will look as follows: ID INTEGER PRIMARY KEYtitle STRINGbody TEXTpublished INTEGERauthor_id INTEGERcategory_id INTEGERslug STRINGcreated INTEGERupdated INTEGER Each post will also have one or many metadata columns that will further describe the posts we'll be creating. We can use this table (we’ll call it content_metadata) to have our system store information about each post automatically for us, or add information to our posts ourselves, thereby eliminating the need to constantly migrate our database every time we want to add a new attribute to our content: ID INTEGER PRIMARY KEYcontent_id INTEGERkey STRINGvalue STRINGcreated INTEGERupdated INTEGER Categories Each post will be associated with a category in our system. These categories will help us further refine our posts. As with our content, each category will have its own slug. Before either a post or a category is saved, we'll need to verify that the slug is not already in use. 
Our table structure will look as follows: ID INTEGER PRIMARY KEYname STRINGdescription TEXTslug STRINGcreated INTEGERupdated INTEGER Search engine optimizations The last core component of our application is optimization for search engines so that our content can be indexed quickly. SEO is important because it increases our discoverability and availability both on search engines and on other marketing materials. In our application, there are a couple of things we'll perform to improve our SEO: The first SEO enhancement we'll add is a sitemap.xml file, which we can submit to popular search engines to index. Rather than crawl our content, search engines can very quickly index our sitemap.xml file, which means that our content will show up in search engines faster. The second enhancement we'll be adding is the slugs that we discussed earlier. Slugs allow us to indicate what a particular post is about directly from a URL. So rather than have a URL that looks like http://chapter6.example.com/content/post/id/5, we can have URL's that look like: http://chapter6.example.com/my-awesome-article. These types of URLs allow search engines and our users to know what our content is about without even looking at the content itself, such as when a user is browsing through their bookmarks or browsing a search engine. Initializing the project To provide us with a common starting ground, a skeleton project has been included with the project resources for this article. Included with this skeleton project are the necessary migrations, data files, controllers, and views to get us started with developing. Also included in this skeleton project are the user authentication classes. Copy this skeleton project to your web server, configure it so that it responds to chapter6.example.com as outlined at the beginning of the article, and then perform the following steps to make sure everything is set up: Adjust the permissions on the assets and protected/runtime folders so that they are writable by your web server. In this article, we'll once again use the latest version of MySQL (at the time of writing MySQL 5.6). Make sure that your MySQL server is set up and running on your server. Then, create a username, password, and database for our project to use, and update your protected/config/main.php file accordingly. For simplicity, you can use ch6_cms for each value. Install our Composer dependencies: Composer install Run the migrate command and install our mock data: php protected/yiic.php migrate up --interactive=0psql ch6_cms -f protected/data/postgres.sql Finally, add your SendGrid credentials to your protected/config/params.php file: 'username' => '<username>','password' => '<password>','from' => 'noreply@ch6.home.erianna.net') If everything is loaded correctly, you should see a 404 page similar to the following: Exploring the skeleton project There are actually a lot of different things going on in the background to make this work even if this is just a 404 error. Before we start doing any development, let's take a look at a few of the classes that have been provided in our skeleton project in the protected/components folder. Extending models from a common class The first class that has been provided to us is an ActiveRecord extension called CMSActiveRecord that all of our models will stem from. This class allows us to reduce the amount of code that we have to write in each class. 
For now, we'll simply add CTimestampBehavior and the afterFind() method to store the old attributes for the time the need arises to compare the changed attributes with the new attributes: class CMSActiveRecordCMSActiveRecord extends CActiveRecord{public $_oldAttributes = array();public function behaviors(){return array('CTimestampBehavior' => array('class' => 'zii.behaviors.CTimestampBehavior','createAttribute' => 'created','updateAttribute' => 'updated','setUpdateOnCreate' => true));}public function afterFind(){if ($this !== NULL)$this->_oldAttributes = $this->attributes;return parent::afterFind();}} Creating a custom validator for slugs Since both Content and Category classes have slugs, we'll need to add a custom validator to each class that will enable us to ensure that the slug is not already in use by either a post or a category. To do this, we have another class called CMSSlugActiveRecord that extends CMSActiveRecord with a validateSlug() method that we'll implement as follows: class CMSSLugActiveRecord extends CMSActiveRecord{public function validateSlug($attributes, $params){// Fetch any records that have that slug$content = Content::model()->findByAttributes(array('slug' =>$this->slug));$category = Category::model()->findByAttributes(array('slug' =>$this->slug));$class = strtolower(get_class($this));if ($content == NULL && $category == NULL)return true;else if (($content == NULL && $category != NULL) || ($content !=NULL && $category == NULL)){$this->addError('slug', 'That slug is already in use');return false;}else{if ($this->id == $$class->id)return true;}$this->addError('slug', 'That slug is already in use');return false;}} This implementation simply checks the database for any item that has that slug. If nothing is found, or if the current item is the item that is being modified, then the validator will return true. Otherwise, it will add an error to the slug attribute and return false. Both our Content model and Category model will extend from this class. View management with themes One of the largest challenges of working with larger applications is changing their appearance without locking functionality into our views. One way to further separate our business logic from our presentation logic is to use themes. Using themes in Yii, we can dynamically change the presentation layer of our application simply utilizing the Yii::app()->setTheme('themename) method. Once this method is called, Yii will look for view files in themes/themename/views rather than protected/views. Throughout the rest of the article, we'll be adding views to a custom theme called main, which is located in the themes folder. To set this theme globally, we'll be creating a custom class called CMSController, which all of our controllers will extend from. For now, our theme name will be hardcoded within our application. This value could easily be retrieved from a database though, allowing us to dynamically change themes from a cached or database value rather than changing it in our controller. Have a look at the following lines of code: class CMSController extends CController{public function beforeAction($action){Yii::app()->setTheme('main');return parent::beforeAction($action);}} Truly dynamic routing In our previous applications, we had long, boring URL's that had lots of IDs and parameters in them. These URLs provided a terrible user experience and prevented search engines and users from knowing what the content was about at a glance, which in turn would hurt our SEO rankings on many search engines. 
To get around this, we're going to heavily modify our UrlManager class to allow truly dynamic routing, which means that, every time we create or update a post or a category, our URL rules will be updated. Telling Yii to use our custom UrlManager Before we can start working on our controllers, we need to create a custom UrlManager to handle routing of our content so that we can access our content by its slug. The steps are as follows: The first change we need to make to allow for this routing is to update the components section of our protected/config/main.php file. This will tell Yii what class to use for the UrlManager component: 'urlManager' => array('class' => 'application.components.CMSUrlManager','urlFormat' => 'path','showScriptName' => false) Next, within our protected/components folder, we need to create CMSUrlManager.php: class CMSUrlManager extends CUrlManager {} CUrlManager works by populating a rules array. When Yii is bootstrapped, it will trigger the processRules() method to determine which route should be executed. We can overload this method to inject our own rules, which will ensure that the action that we want to be executed is executed. To get started, let's first define a set of default routes that we want loaded. The routes defined in the following code snippet will allow for pagination on our search and home page, enable a static path for our sitemap.xml file, and provide a route for HybridAuth to use for social authentication: public $defaultRules = array('/sitemap.xml' => '/content/sitemap','/search/<page:d+>' => '/content/search','/search' => '/content/search','/blog/<page:d+>' => '/content/index','/blog' => '/content/index','/' => '/content/index','/hybrid/<provider:w+>' => '/hybrid/index',); Then, we'll implement our processRules() method: protected function processRules() {} CUrlManager already has a public property that we can interface to modify the rules, so we'll inject our own rules into this. The rules property is the same property that can be accessed from within our config file. Since processRules() gets called on every page load, we'll also utilize caching so that our rules don't have to be generated every time. We'll start by trying to load any of our pregenerated rules from our cache, depending upon whether we are in debug mode or not: $this->rules = !YII_DEBUG ? Yii::app()->cache->get('Routes') : array(); If the rules we get back are already set up, we'll simple return them; otherwise, we'll generate the rules, put them into our cache, and then append our basic URL rules: if ($this->rules == false || empty($this->rules)) { $this->rules = array(); $this->rules = $this->generateClientRules(); $this->rules = CMap::mergearray($this->addRssRules(), $this- >rules); Yii::app()->cache->set('Routes', $this->rules); } $this->rules['<controller:w+>/<action:w+>/<id:w+>'] = '/'; $this->rules['<controller:w+>/<action:w+>'] = '/'; return parent::processRules(); For abstraction purposes, within our processRules() method, we've utilized two methods we'll need to create: generateClientRules, which will generate the rules for content and categories, and addRSSRules, which will generate the RSS routes for each category. 
The first method, generateClientRules(), simply loads our default rules that we defined earlier with the rules generated from our content and categories, which are populated by the generateRules() method: private function generateClientRules() { $rules = CMap::mergeArray($this->defaultRules, $this->rules); return CMap::mergeArray($this->generateRules(), $rules); } private function generateRules() { return CMap::mergeArray($this->generateContentRules(), $this- >generateCategoryRules()); } The generateRules() method, that we just defined, actually calls the methods that build our routes. Each route is a key-value pair that will take the following form: array( '<slug>' => '<controller>/<action>/id/<id>' ) Content rules will consist of an entry that is published. Have a look at the following code: private function generateContentRules(){$rules = array();$criteria = new CDbCriteria;$criteria->addCondition('published = 1');$content = Content::model()->findAll($criteria);foreach ($content as $el){if ($el->slug == NULL)continue;$pageRule = $el->slug.'/<page:d+>';$rule = $el->slug;if ($el->slug == '/')$pageRule = $rule = '';$pageRule = $el->slug . '/<page:d+>';$rule = $el->slug;$rules[$pageRule] = "content/view/id/{$el->id}";$rules[$rule] = "content/view/id/{$el->id}";}return $rules;} Our category rules will consist of all categories in our database. Have a look at the following code: private function generateCategoryRules() { $rules = array(); $categories = Category::model()->findAll(); foreach ($categories as $el) { if ($el->slug == NULL) continue; $pageRule = $el->slug.'/<page:d+>'; $rule = $el->slug; if ($el->slug == '/') $pageRule = $rule = ''; $pageRule = $el->slug . '/<page:d+>'; $rule = $el->slug; $rules[$pageRule] = "category/index/id/{$el->id}"; $rules[$rule] = "category/index/id/{$el->id}"; } return $rules; } Finally, we'll add our RSS rules that will allow RSS readers to read all content for the entire site or for a particular category, as follows: private function addRSSRules() { $categories = Category::model()->findAll(); foreach ($categories as $category) $routes[$category->slug.'.rss'] = "category/rss/id/ {$category->id}"; $routes['blog.rss'] = '/category/rss'; return $routes; } Displaying and managing content Now that Yii knows how to route our content, we can begin work on displaying and managing it. Begin by creating a new controller called ContentController in protected/controllers that extends CMSController. Have a look at the following line of code: class ContentController extends CMSController {} To start with, we'll define our accessRules() method and the default layout that we're going to use. Here's how: public $layout = 'default';public function filters(){return array('accessControl',);}public function accessRules(){return array(array('allow','actions' => array('index', 'view', 'search'),'users' => array('*')),array('allow','actions' => array('admin', 'save', 'delete'),'users'=>array('@'),'expression' => 'Yii::app()->user->role==2'),array('deny', // deny all users'users'=>array('*'),),);} Rendering the sitemap The first method we'll be implementing is our sitemap action. In our ContentController, create a new method called actionSitemap(): public function actionSitemap() {} The steps to be performed are as follows: Since sitemaps come in XML formatting, we'll start by disabling WebLogRoute defined in our protected/config/main.php file. 
This will ensure that our XML validates when search engines attempt to index it: Yii::app()->log->routes[0]->enabled = false; We'll then send the appropriate XML headers, disable the rendering of the layout, and flush any content that may have been queued to be sent to the browser: ob_end_clean();header('Content-type: text/xml; charset=utf-8');$this->layout = false; Then, we'll load all the published entries and categories and send them to our sitemap view: $content = Content::model()->findAllByAttributes(array('published'=> 1));$categories = Category::model()->findAll();$this->renderPartial('sitemap', array('content' => $content,'categories' => $categories,'url' => 'http://'.Yii::app()->request->serverName .Yii::app()->baseUrl)) Finally, we have two options to render this view. We can either make it a part of our theme in themes/main/views/content/sitemap.php, or we can place it in protected/views/content/sitemap.php. Since a sitemap's structure is unlikely to change, let's put it in the protected/views folder: <?php echo '<?xml version="1.0" encoding="UTF-8"?>'; ?><urlset ><?php foreach ($content as $v): ?><url><loc><?php echo $url .'/'. htmlspecialchars(str_replace('/', '', $v['slug']), ENT_QUOTES, "utf-8"); ?></loc><lastmod><?php echo date('c',strtotime($v['updated']));?></lastmod><changefreq>weekly</changefreq><priority>1</priority></url><?php endforeach; ?><?php foreach ($categories as $v): ?><url><loc><?php echo $url .'/'. htmlspecialchars(str_replace('/', '', $v['slug']), ENT_QUOTES, "utf-8"); ?></loc><lastmod><?php echo date('c',strtotime($v['updated']));?></lastmod><changefreq>weekly</changefreq><priority>0.7</priority></url><?php endforeach; ?></urlset> You can now load http://chapter6.example.com/sitemap.xml in your browser to see the sitemap. Before you make your site live, be sure to submit this file to search engines for them to index. Displaying a list view of content Next, we'll implement the actions necessary to display all of our content and a particular post. We'll start by providing a paginated view of our posts. Since CListView and the Content model's search() method already provide this functionality, we can utilize those classes to generate and display this data: To begin with, open protected/models/Content.php and modify the return value of the search() method as follows. This will ensure that Yii's pagination uses the correct variable in our CListView, and tells Yii how many results to load per page. return new CActiveDataProvider($this, array('criteria' =>$criteria,'pagination' => array('pageSize' => 5,'pageVar' =>'page'))); Next, implement the actionIndex() method with the $page parameter. 
We've already told our UrlManager how to handle this, which means that we'll get pretty URI's for pagination (for example, /blog, /blog/2, /blog/3, and so on): public function actionIndex($page=1){// Model Search without $_GET params$model = new Content('search');$model->unsetAttributes();$model->published = 1;$this->render('//content/all', array('dataprovider' => $model->search()));} Then we'll create a view in themes/main/views/content/all.php, that will display the data within our dataProvider: <?php $this->widget('zii.widgets.CListView', array('dataProvider'=>$dataprovider,'itemView'=>'//content/list','summaryText' => '','pager' => array('htmlOptions' => array('class' => 'pager'),'header' => '','firstPageCssClass'=>'hide','lastPageCssClass'=>'hide','maxButtonCount' => 0))); Finally, copy themes/main/views/content/all.php from the project resources folder so that our views can render. Since our database has already been populated with some sample data, you can start playing around with the results right away, as shown in the following screenshot: Displaying content by ID Since our routing rules are already set up, displaying our content is extremely simple. All that we have to do is search for a published model with the ID passed to the view action and render it: public function actionView($id=NULL){// Retrieve the data$content = Content::model()->findByPk($id);// beforeViewAction should catch thisif ($content == NULL || !$content->published)throw new CHttpException(404, 'The article you specified doesnot exist.');$this->render('view', array('id' => $id,'post' => $content));} After copying themes/main/views/content/view.php from the project resources folder into your project, you'll be able to click into a particular post from the home page. In its actions present form, this action has introduced an interesting side effect that could negatively impact our SEO rankings on search engines—the same entry can now be accessed from two URI's. For example, http://chapter6.example.com/content/view/id/1 and http://chapter6.example.com/quis-condimentum-tortor now bring up the same post. Fortunately, correcting this bug is fairly easy. Since the goal of our slugs is to provide more descriptive URI's, we'll simply block access to the view if a user tries to access it from the non-slugged URI. We'll do this by creating a new method called beforeViewAction() that takes the entry ID as a parameter and gets called right after the actionView() method is called. This private method will simply check the URI from CHttpRequest to determine how actionView was accessed and return a 404 if it's not through our beautiful slugs: private function beforeViewAction($id=NULL){// If we do not have an ID, consider it to be null, and throw a 404errorif ($id == NULL)throw new CHttpException(404,'The specified post cannot befound.');// Retrieve the HTTP Request$r = new CHttpRequest();// Retrieve what the actual URI$requestUri = str_replace($r->baseUrl, '', $r->requestUri);// Retrieve the route$route = '/' . $this->getRoute() . '/' . 
$id;$requestUri = preg_replace('/?(.*)/','',$requestUri);// If the route and the uri are the same, then a direct accessattempt was made, and we need to block access to the controllerif ($requestUri == $route)throw new CHttpException(404, 'The requested post cannot befound.');return str_replace($r->baseUrl, '', $r->requestUri);} Then right after our actionView starts, we can simultaneously set the correct return URL and block access to the content if it wasn't accessed through the slug as follows: Yii::app()->user->setReturnUrl($this->beforeViewAction($id)); Adding comments to our CMS with Disqus Presently, our content is only informative in nature—we have no way for our users to communicate with us what they thought about our entry. To encourage engagement, we can add a commenting system to our CMS to further engage with our readers. Rather than writing our own commenting system, we can leverage comment through Disqus, a free, third-party commenting system. Even through Disqus, comments are implemented in JavaScript and we can create a custom widget wrapper for it to display comments on our site. The steps are as follows: To begin with, log in to the Disqus account you created at the beginning of this article as outlined in the prerequisites section. Then, navigate to http://disqus.com/admin/create/ and fill out the form fields as prompted and as shown in the following screenshot: Then, add a disqus section to your protected/config/params.php file with your site shortname: 'disqus' => array('shortname' => 'ch6disqusexample',) Next, create a new widget in protected/components called DisqusWidget.php. This widget will be loaded within our view and will be populated by our Content model: class DisqusWidget extends CWidget {} Begin by specifying the public properties that our view will be able to inject into as follows: public $shortname = NULL; public $identifier = NULL; public $url = NULL; public $title = NULL; Then, overload the init() method to load the Disqus JavaScript callback and to populate the JavaScript variables with those populated to the widget as follows:public function init() public function init(){parent::init();if ($this->shortname == NULL)throw new CHttpException(500, 'Disqus shortname isrequired');echo "<div id='disqus_thread'></div>";Yii::app()->clientScript->registerScript('disqus', "var disqus_shortname = '{$this->shortname}';var disqus_identifier = '{$this->identifier}';var disqus_url = '{$this->url}';var disqus_title = '{$this->title}';/* * * DON'T EDIT BELOW THIS LINE * * */(function() {var dsq = document.createElement('script'); dsq.type ='text/javascript'; dsq.async = true;dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';(document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);})();");} Finally, within our themes/main/views/content/view.php file, load the widget as follows: <?php $this->widget('DisqusWidget', array('shortname' => Yii::app()->params['includes']['disqus']['shortname'],'url' => $this->createAbsoluteUrl('/'.$post->slug),'title' => $post->title,'identifier' => $post->id)); ?> Now, when you load any given post, Disqus comments will also be loaded with that post. Go ahead and give it a try! Searching for content Next, we'll implement a search method so that our users can search for posts. 
To do this, we'll implement an instance of CActiveDataProvider and pass that data to our themes/main/views/content/all.php view to be rendered and paginated: public function actionSearch(){$param = Yii::app()->request->getParam('q');$criteria = new CDbCriteria;$criteria->addSearchCondition('title',$param,'OR');$criteria->addSearchCondition('body',$param,'OR');$dataprovider = new CActiveDataProvider('Content', array('criteria'=>$criteria,'pagination' => array('pageSize' => 5,'pageVar'=>'page')));$this->render('//content/all', array('dataprovider' => $dataprovider));} Since our view file already exists, we can now search for content in our CMS. Managing content Next, we'll implement a basic set of management tools that will allow us to create, update, and delete entries: We'll start by defining our loadModel() method and the actionDelete() method: private function loadModel($id=NULL){if ($id == NULL)throw new CHttpException(404, 'No category with that IDexists');$model = Content::model()->findByPk($id);if ($model == NULL)throw new CHttpException(404, 'No category with that IDexists');return $model;}public function actionDelete($id){$this->loadModel($id)->delete();$this->redirect($this->createUrl('content/admin'));} Next, we can implement our admin view, which will allow us to view all the content in our system and to create new entries. Be sure to copy the themes/main/views/content/admin.php file from the project resources folder into your project before using this view: public function actionAdmin(){$model = new Content('search');$model->unsetAttributes();if (isset($_GET['Content']))$model->attributes = $_GET;$this->render('admin', array('model' => $model));} Finally, we'll implement a save view to create and update entries. Saving content will simply pass it through our content model's validation rules. The only override we'll be adding is ensuring that the author is assigned to the user editing the entry. Before using this view, be sure to copy the themes/main/views/content/save.php file from the project resources folder into your project: public function actionSave($id=NULL){if ($id == NULL)$model = new Content;else$model = $this->loadModel($id);if (isset($_POST['Content'])){$model->attributes = $_POST['Content'];$model->author_id = Yii::app()->user->id;if ($model->save()){Yii::app()->user->setFlash('info', 'The articles wassaved');$this->redirect($this->createUrl('content/admin'));}}$this->render('save', array('model' => $model));} At this point, you can now log in to the system using the credentials provided in the following table and start managing entries: Username Password user1@example.com test user2@example.com test Summary In this article, we dug deeper into Yii framework by manipulating our CUrlManager class to generate completely dynamic and clean URIs. We also covered the use of Yii's built-in theming to dynamically change the frontend appearance of our site by simply changing a configuration value. Resources for Article: Further resources on this subject: Creating an Extension in Yii 2 [Article] Yii 1.1: Using Zii Components [Article] Agile with Yii 1.1 and PHP5: The TrackStar Application [Article]

Creating an Extension in Yii 2

Packt
24 Sep 2014
22 min read
In this article by Mark Safronov, co-author of the book Web Application Development with Yii 2 and PHP, we we'll learn to create our own extension using a simple way of installation. There is a process we have to follow, though some preparation will be needed to wire up your classes to the Yii application. The whole article will be devoted to this process. (For more resources related to this topic, see here.) Extension idea So, how are we going to extend the Yii 2 framework as an example for this article? Let's become vile this time and make a malicious extension, which will provide a sort of phishing backdoor for us. Never do exactly the thing we'll describe in this article! It'll not give you instant access to the attacked website anyway, but a skilled black hat hacker can easily get enough information to achieve total control over your application. The idea is this: our extension will provide a special route (a controller with a single action inside), which will dump the complete application configuration to the web page. Let's say it'll be reachable from the route /app-info/configuration. We cannot, however, just get the contents of the configuration file itself and that too reliably. At the point where we can attach ourselves to the application instance, the original configuration array is inaccessible, and even if it were accessible, we can't be sure about where it came from anyway. So, we'll inspect the runtime status of the application and return the most important pieces of information we can fetch at the stage of the controller action resolution. That's the exact payload we want to introduce. public function actionConfiguration()    {        $app = Yii::$app;        $config = [            'components' => $app->components,            'basePath' => $app->basePath,            'params' => $app->params,            'aliases' => Yii::$aliases        ];        return yiihelpersJson::encode($config);    } The preceding code is the core of the extension and is assumed in the following sections. In fact, if you know the value of the basePath setting of the application, a list of its aliases, settings for the components (among which the DB connection may reside), and all custom parameters that developers set manually, you can map the target application quite reliably. Given that you know all the credentials this way, you have an enormous amount of highly valuable information about the application now. All you need to do now is make the user install this extension. Creating the extension contents Our plan is as follows: We will develop our extension in a folder, which is different from our example CRM application. This extension will be named yii2-malicious, to be consistent with the naming of other Yii 2 extensions. Given the kind of payload we saw earlier, our extension will consist of a single controller and some special wiring code (which we haven't learned about yet) to automatically attach this controller to the application. Finally, to consider this subproject a true Yii 2 extension and not just some random library, we want it to be installable in the same way as other Yii 2 extensions. Preparing the boilerplate code for the extension Let's make a separate directory, initialize the Git repository there, and add the AppInfoController to it. 
In the bash command line, this can be achieved by the following commands:

$ mkdir yii2-malicious && cd $_
$ git init
$ > AppInfoController.php

Inside the AppInfoController.php file, we'll write the usual boilerplate code for a Yii 2 controller as follows:

namespace malicious;

use yii\web\Controller;

class AppInfoController extends Controller
{
    // Action here
}

Put the action defined in the preceding code snippet inside this controller and we're done with it. Note the namespace: it is not the same as the folder this controller is in, and this is not according to our usual auto-loading rules. We will see later in this article that this is not an issue, because of how Yii 2 treats the auto-loading of classes from extensions. Now this controller needs to be wired to the application somehow. We already know that the application has a special property called controllerMap, in which we can manually attach controller classes. However, how do we do this automatically, better yet, right at the application startup time? Yii 2 has a special feature called bootstrapping to support exactly this: to attach some activity at the beginning of the application lifetime, not at the very beginning, but certainly before the request is handled. This feature is tightly related to the extensions concept in Yii 2, so it's a perfect time to explain it.

FEATURE – bootstrapping

To explain the bootstrapping concept in short, you can declare some components of the application in the yii\base\Application::$bootstrap property. They'll be properly instantiated at the start of the application. If any of these components implements the BootstrapInterface interface, its bootstrap() method will be called, so you'll get the application initialization enhancement for free. Let's elaborate on this. The yii\base\Application::$bootstrap property holds an array of generic values that you tell the framework to initialize beforehand. It's basically an improvement over the preload concept from Yii 1.x. You can specify four kinds of values to initialize:

The ID of an application component
The ID of some module
A class name
A configuration array

If it's the ID of a component, this component is fully initialized. If it's the ID of a module, this module is fully initialized. This matters greatly because Yii 2 employs lazy loading for the components and modules system, and they are usually initialized only when explicitly referenced. Being bootstrapped means that their initialization, regardless of whether it's slow or resource-consuming, always happens, and always happens at the start of the application. If you have a component and a module with identical IDs, then the component will be initialized and the module will not be initialized! If the value mentioned in the bootstrap property is a class name or a configuration array, then an instance of the class in question is created using the yii\BaseYii::createObject() facility. The instance created will be thrown away immediately if it doesn't implement the yii\base\BootstrapInterface interface. If it does, its bootstrap() method will be called. Then, the object will be thrown away. So, what's the effect of this bootstrapping feature? We already used this feature while installing the debug extension. We had to bootstrap the debug module using its ID, for it to be able to attach the event handler so that we would get the debug toolbar at the bottom of each page of our web application.
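To make these four kinds of bootstrap values more concrete, here is a hedged sketch of how they might be declared in a Yii 2 application configuration. The 'log' component, the 'debug' module, and the app\bootstrap\StartupNotifier class are illustrative placeholders only and are not part of this article's code base:

// Hypothetical fragment of config/web.php; IDs and class names are for illustration only
return [
    'bootstrap' => [
        'log',                            // ID of an application component: fully initialized at startup
        'debug',                          // ID of a module: fully initialized at startup
        'app\bootstrap\StartupNotifier',  // class name: instantiated via Yii::createObject();
                                          // bootstrap() runs only if it implements yii\base\BootstrapInterface
    ],
    'modules' => [
        'debug' => ['class' => 'yii\debug\Module'],
    ],
    'components' => [
        'log' => [
            'targets' => [
                ['class' => 'yii\log\FileTarget', 'levels' => ['error', 'warning']],
            ],
        ],
    ],
    // ... the rest of the application configuration
];

The entries are processed in the order listed, and, as described above, an ID is resolved as a component first and only falls back to a module if no component with that ID exists.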
This feature is indispensable if you need to be sure that some activity will always take place at the start of the application lifetime. The BootstrapInterface interface is basically an incarnation of the command pattern. By implementing this interface, we gain the ability to attach any activity, not necessarily bound to a component or module, to the application initialization.

FEATURE – extension registering

The bootstrapping feature is repeated in the handling of the yii\base\Application::$extensions property. This property is the only place where the concept of an extension can be seen in the Yii framework. Extensions in this property are described as a list of arrays, and each of them should have the following fields:

name: the name of the extension.
version: the extension's version (nothing will really check it, so it's only for reference).
bootstrap: the data for this extension's bootstrap. This field is filled with the same kinds of elements as Yii::$app->bootstrap described previously and has the same semantics.
alias: the mapping from Yii 2 path aliases to real directory paths.

When the application registers the extension, it does two things in the following order:

It registers the aliases from the extension, using the Yii::setAlias() method.
It initializes whatever is mentioned in the bootstrap of the extension, in exactly the same way we described in the previous section.

Note that the extensions' bootstraps are processed before the application's bootstraps. Registering aliases is crucial to the whole concept of an extension in Yii 2, because of the Yii 2 PSR-4 compatible autoloader. Here is a quote from the documentation block for the yii\BaseYii::autoload() method:

If the class is namespaced (e.g. yii\base\Component), it will attempt to include the file associated with the corresponding path alias (e.g. @yii/base/Component.php). This autoloader allows loading classes that follow the PSR-4 standard and have its top-level namespace or sub-namespaces defined as path aliases.

The PSR-4 standard is available online at http://www.php-fig.org/psr/psr-4/. Given that behavior, the alias setting of the extension is basically a way to tell the autoloader the name of the top-level namespace of the classes in your extension code base. Let's say you have the following value of the alias setting of your extension:

"alias" => ["@companyname/extensionname" => "/some/absolute/path"]

If you have the /some/absolute/path/subdirectory/ClassName.php file, and, according to PSR-4 rules, it contains the class whose fully qualified name is companyname\extensionname\subdirectory\ClassName, Yii 2 will be able to autoload this class without problems.

Making the bootstrap for our extension – hideous attachment of a controller

We have a controller already prepared in our extension. Now we want this controller to be automatically attached to the application under attack when the extension is processed. This is achievable using the bootstrapping feature we just learned.
Let's create the malicious\Bootstrap class for this cause inside the code base of our extension, with the following boilerplate code:

<?php

namespace malicious;

use yii\base\BootstrapInterface;

class Bootstrap implements BootstrapInterface
{
    /** @param \yii\web\Application $app */
    public function bootstrap($app)
    {
        // Controller addition will be here.
    }
}

With this preparation, the bootstrap() method will be called at the start of the application, provided we wire everything up correctly. But first, we should consider how we manipulate the application to make use of our controller. This is easy, really, because there's the yii\web\Application::$controllerMap property (don't forget that it's inherited from yii\base\Module, though). We'll just do the following inside the bootstrap() method:

$app->controllerMap['app-info'] = 'malicious\AppInfoController';

We will rely on the composer and Yii 2 autoloaders to actually find malicious\AppInfoController. Just imagine that you can do anything inside the bootstrap. For example, you can open a CURL connection with some botnet and send the accumulated application information there. Never believe random extensions on the Web. This actually concludes what we need to do to complete our extension. All that's left now is to make our extension installable in the same way as the other Yii 2 extensions we were using up until now. If you need to attach this malicious extension to your application manually, and you have a folder that holds the code base of the extension at the path /some/filesystem/path, then all you need to do is write the following code inside the application configuration:

'extensions' => array_merge(
    (require __DIR__ . '/../vendor/yiisoft/extensions.php'),
    [
        'malicious/app-info' => [
            'name' => 'Application Information Dumper',
            'version' => '1.0.0',
            'bootstrap' => 'malicious\Bootstrap',
            'alias' => ['@malicious' => '/some/filesystem/path'] // that's the path to the extension
        ]
    ]
)

Please note the exact way of specifying the extensions setting. We're merging the contents of the extensions.php file supplied by the Yii 2 distribution from composer and our own manual definition of the extension. This extensions.php file is what allows Yiisoft to distribute the extensions in such a way that you are able to install them by a simple, single invocation of a require composer command. Let's learn now what we need to do to repeat this feature.

Making the extension installable as... erm, extension

First, to make it clear, we are talking here only about the situation when Yii 2 is installed by composer, and we want our extension to be installable through composer as well. This gives us the baseline for all of our assumptions. Let's see the extensions that we need to install:

Gii the code generator
The Twitter Bootstrap extension
The Debug extension
The SwiftMailer extension

We can install all of these extensions using composer. We introduce the extensions.php file reference when we install the Gii extension. Have a look at the following code:

'extensions' => (require __DIR__ . '/../vendor/yiisoft/extensions.php')

If we open the vendor/yiisoft/extensions.php file (given that all the extensions from the preceding list were installed) and look at its contents, we'll see the following code (note that in your installation, it can be different):

<?php

$vendorDir = dirname(__DIR__);

return array (
    'yiisoft/yii2-bootstrap' => array (
        'name' => 'yiisoft/yii2-bootstrap',
        'version' => '9999999-dev',
        'alias' => array (
            '@yii/bootstrap' => $vendorDir . '/yiisoft/yii2-bootstrap',
        ),
    ),
    'yiisoft/yii2-swiftmailer' => array (
        'name' => 'yiisoft/yii2-swiftmailer',
        'version' => '9999999-dev',
        'alias' => array (
            '@yii/swiftmailer' => $vendorDir . '/yiisoft/yii2-swiftmailer',
        ),
    ),
    'yiisoft/yii2-debug' => array (
        'name' => 'yiisoft/yii2-debug',
        'version' => '9999999-dev',
        'alias' => array (
            '@yii/debug' => $vendorDir . '/yiisoft/yii2-debug',
        ),
    ),
    'yiisoft/yii2-gii' => array (
        'name' => 'yiisoft/yii2-gii',
        'version' => '9999999-dev',
        'alias' => array (
            '@yii/gii' => $vendorDir . '/yiisoft/yii2-gii',
        ),
    ),
);

(In the original article, one of these entries was highlighted to stand out from the others.) So, what does all this mean to us?

First, it means that Yii 2 somehow generates the required configuration snippet automatically when you install the extension's composer package.
Second, it means that each extension provided by the Yii 2 framework distribution will ultimately be registered in the extensions setting of the application.
Third, all the classes in the extensions are made available in the main application code base by the carefully crafted alias settings inside the extension configuration.
Fourth, ultimately, the easy installation of Yii 2 extensions is made possible by some integration between the Yii framework and the composer distribution system.

The magic is hidden inside the composer.json manifest of the extensions built into Yii 2. The details about the structure of this manifest are written in the documentation of composer, which is available at https://getcomposer.org/doc/04-schema.md. We'll need only one field, though, and that is type. Yii 2 employs a special type of composer package, named yii2-extension. If you check the manifests of yii2-debug, yii2-swiftmailer, and other extensions, you'll see that they all have the following line inside:

"type": "yii2-extension",

Normally composer will not understand that this type of package is to be installed. But the main yii2 package, containing the framework itself, depends on the special auxiliary yii2-composer package:

"require": {
    … other requirements ...
    "yiisoft/yii2-composer": "*",
Normally, users will install the extension after the framework is already in place, but in the case of the extension already being listed in the require section of composer.json, among other things, we cannot be sure about the exact ordering of the require statements, so it's better (and easier) to just declare dependency explicitly as follows: "require": {"yiisoft/yii2": "*"}, Then, we must provide the type as follows: "type": "yii2-extension", After this, for the Yii 2 extension installer, we have to provide two additional blocks; autoload will be used to correctly fill the alias section of the extension configuration. Have a look at the following code: "autoload": {"psr-4": {"malicious\": ""}}, What we basically mean is that our classes are laid out according to PSR-4 rules in such a way that the classes in the malicious namespace are placed right inside the root folder. The second block is extra, in which we tell the installer that we want to declare a bootstrap section for the extension configuration: "extra": {"bootstrap": "malicious\Bootstrap"}, Our manifest file is complete now. Commit everything to the version control system: $ git commit -a -m "Added the Composer manifest file to repo" Now, we'll add the tag at last, corresponding to the version we declared as follows: $ git tag 1.0.0 We already mentioned earlier the purpose for which we're doing this. All that's left is to tell the composer from where to fetch the extension contents. Configuring the repositories We need to configure some kind of repository for the extension now so that it is installable. The easiest way is to use the Packagist service, available at https://packagist.org/, which has seamless integration with composer. It has the following pro and con: Pro: You don't need to declare anything additional in the composer.json file of the application you want to attach the extension to Con: You must have a public VCS repository (either Git, SVN, or Mercurial) where your extension is published In our case, where we are just in fact learning about how to install things using composer, we certainly do not want to make our extension public. Do not use Packagist for the extension example we are building in this article. Let's recall our goal. Our goal is to be able to install our extension by calling the following command at the root of the code base of some Yii 2 application: $ php composer.phar require "malicious/app-info:*" After that, we should see something like the following screenshot after requesting the /app-info/configuration route: This corresponds to the following structure (the screenshot is from the http://jsonviewer.stack.hu/ web service): Put the extension to some public repository, for example, GitHub, and register a package at Packagist. This command will then work without any preparation in the composer.json manifest file of the target application. But in our case, we will not make this extension public, and so we have two options left for us. The first option, which is perfectly suited to our learning cause, is to use the archived package directly. 
For this, you have to add the repositories section to composer.json in the code base of the application you want to add the extension to: "repositories": [// definitions of repositories for the packages required by thisapplication] To specify the repository for the package that should be installed from the ZIP archive, you have to grab the entire contents of the composer.json manifest file of this package (in our case, our malicious/app-info extension) and put them as an element of the repositories section, verbatim. This is the most complex way to set up the composer package requirement, but this way, you can depend on absolutely any folder with files (packaged into an archive). Of course, the contents of composer.json of the extension do not specify the actual location of the extension's files. You have to add this to repositories manually. In the end, you should have the following additional section inside the composer.json manifest file of the target application: "repositories": [{"type": "package","package": {// … skipping whatever were copied verbatim from the composer.jsonof extension..."dist": {"url": "/home/vagrant/malicious.zip", // example filelocation"type": "zip"}}}] This way, we specify the location of the package in the filesystem of the same machine and tell the composer that this package is a ZIP archive. Now, you should just zip the contents of the yii2-malicious folder we have created for the extension, put them somewhere at the target machine, and provide the correct URL. Please note that it's necessary to archive only the contents of the extension and not the folder itself. After this, you run composer on the machine that really has this URL accessible (you can use http:// type of URLs, of course, too), and then you get the following response from composer: To check that Yii 2 really installed the extension, you can open the file vendor/yiisoft/extensions.php and check whether it contains the following block now: 'malicious/app-info' =>array ('name' => 'malicious/app-info','version' => '1.0.0.0','alias' =>array ('@malicious' => $vendorDir . '/malicious/app-info',),'bootstrap' => 'malicious\Bootstrap',), (The indentation was preserved as is from the actual file.) If this block is indeed there, then all you need to do is open the /app-info/configuration route and see whether it reports JSON to you. It should. The pros and cons of the file-based installation are as follows: Pros Cons You can specify any file as long as it is reachable by some URL. The ZIP archive management capabilities exist on virtually any kind of platform today. There is too much work in the composer.json manifest file of the target application. The requirement to copy the entire manifest to the repositories section is overwhelming and leads to code duplication. You don't need to set up any version control system repository. It's of dubious benefit though. The manifest from the extension package will not be processed at all. This means that you cannot just strip the entry in repositories, leaving only the dist and name sections there, because the Yii 2 installer will not be able to get to the autoloader and extra sections. The last method is to use the local version control system repository. We already have everything committed to the Git repository, and we have the correct tag placed here, corresponding to the version we declared in the manifest. This is everything we need to prepare inside the extension itself. 
Now, we need to modify the target application's manifest to add the repositories section in the same way we did previously, but this time we will introduce a lot less code there: "repositories": [{"type": "git","url": "/home/vagrant/yii2-malicious/" // put your own URLhere}] All that's needed from you is to specify the correct URL to the Git repository of the extension we were preparing at the beginning of this article. After you specify this repository in the target application's composer manifest, you can just issue the desired command: $ php composer.phar require "malicious/app-info:1.0.0" Everything will be installed as usual. Confirm the successful installation again by having a look at the contents of vendor/yiisoft/extensions.php and by accessing the /app-info/configuration route in the application. The pros and con of the repository-based installation are as follows: Pro: Relatively little code to write in the application's manifest. Pro: You don't need to really publish your extension (or the package in general). In some settings, it's really useful, for closed-source software, for example. Con: You still have to meddle with the manifest of the application itself, which can be out of your control and in this case, you'll have to guide your users about how to install your extension, which is not good for PR. In short, the following pieces inside the composer.json manifest turn the arbitrary composer package into the Yii 2 extension: First, we tell composer to use the special Yii 2 installer for packages as follows: "type": "yii2-extension" Then, we tell the Yii 2 extension installer where the bootstrap for the extension (if any) is as follows: "extra": {"bootstrap": "<Fully qualified name>"} Next, we tell the Yii 2 extension installer how to prepare aliases for your extension so that classes can be autoloaded as follows: "autoloader": {"psr-4": { "namespace": "<folder path>"}} Finally, we add the explicit requirement of the Yii 2 framework itself in the following code, so we'll be sure that the Yii 2 extension installer will be installed at all: "require": {"yiisoft/yii2": "*"} Everything else is the details of the installation of any other composer package, which you can read in the official composer documentation. Summary In this article, we looked at how Yii 2 implements its extensions so that they're easily installable by a single composer invocation and can be automatically attached to the application afterwards. We learned that this required some level of integration between these two systems, Yii 2 and composer, and in turn this requires some additional preparation from you as a developer of the extension. We used a really silly, even a bit dangerous, example for extension. It was for three reasons: The extension was fun to make (we hope) We showed that using bootstrap mechanics, we can basically automatically wire up the pieces of the extension to the target application without any need for elaborate manual installation instructions We showed the potential danger in installing random extensions from the Web, as an extension can run absolutely arbitrary code right at the application initialization and more than that, at each request made to the application We have discussed three methods of distribution of composer packages, which also apply to the Yii 2 extensions. The general rule of thumb is this: if you want your extension to be publicly available, just use the Packagist service. 
In any other case, use the local repositories, as you can use both local filesystem paths and web URLs. We looked at the option to attach the extension completely manually, not using the composer installation at all. Resources for Article: Further resources on this subject: Yii: Adding Users and User Management to Your Site [Article] Meet Yii [Article] Yii 1.1: Using Zii Components [Article]

Using Socket.IO and Express together

Packt
23 Sep 2014
16 min read
In this article by Joshua Johanan, the author of the book Building Scalable Apps with Redis and Node.js, tells us that Express application is just the foundation. We are going to add features until it is a fully usable app. We currently can serve web pages and respond to HTTP, but now we want to add real-time communication. It's very fortunate that we just spent most of this article learning about Socket.IO; it does just that! Let's see how we are going to integrate Socket.IO with an Express application. (For more resources related to this topic, see here.) We are going to use Express and Socket.IO side by side. Socket.IO does not use HTTP like a web application. It is event based, not request based. This means that Socket.IO will not interfere with Express routes that we have set up, and that's a good thing. The bad thing is that we will not have access to all the middleware that we set up for Express in Socket.IO. There are some frameworks that combine these two, but it still has to convert the request from Express into something that Socket.IO can use. I am not trying to knock down these frameworks. They simplify a complex problem and most importantly, they do it well (Sails is a great example of this). Our app, though, is going to keep Socket.IO and Express separated as much as possible with the least number of dependencies. We know that Socket.IO does not need Express, as all our examples have not used Express in any way. This has an added benefit in that we can break off our Socket.IO module and run it as its own application at a future point in time. The other great benefit is that we learn how to do it ourselves. We need to go into the directory where our Express application is. Make sure that our pacakage.json has all the additional packages for this article and run npm.install. The first thing we need to do is add our configuration settings. Adding Socket.IO to the config We will use the same config file that we created for our Express app. Open up config.js and change the file to what I have done in the following code: var config = {port: 3000,secret: 'secret',redisPort: 6379,redisHost: 'localhost',routes: {   login: '/account/login',   logout: '/account/logout'}};module.exports = config; We are adding two new attributes, redisPort and redisHost. This is because of how the redis package configures its clients. We also are removing the redisUrl attribute. We can configure all our clients with just these two Redis config options. Next, create a directory under the root of our project named socket.io. Then, create a file called index.js. This will be where we initialize Socket.IO and wire up all our event listeners and emitters. We are just going to use one namespace for our application. If we were to add multiple namespaces, I would just add them as files underneath the socket.io directory. Open up app.js and change the following lines in it: //variable declarations at the topVar io = require('./socket.io');//after all the middleware and routesvar server = app.listen(config.port);io.startIo(server); We will define the startIo function shortly, but let's talk about our app.listen change. Previously, we had the app.listen execute, and we did not capture it in a variable; now we are. Socket.IO listens using Node's http.createServer. It does this automatically if you pass in a number into its listen function. When Express executes app.listen, it returns an instance of the HTTP server. We capture that, and now we can pass the http server to Socket.IO's listen function. 
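As a side note, here is a minimal sketch of what app.listen() essentially does under the hood; this is a simplification for illustration, not code from the book's project, but it shows why capturing the return value gives us something Socket.IO can attach to:

var express = require('express');
var http = require('http');

var app = express();

// Roughly equivalent to app.listen(3000): wrap the Express app in a Node HTTP server
// and start listening, keeping the server instance around instead of discarding it.
var server = http.createServer(app);
server.listen(3000);

// Because we hold the http.Server instance, it can later be handed to Socket.IO,
// rather than letting Socket.IO create a second, separate server from a bare port number.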
Let's create that startIo function. Open up index.js present in the socket.io location and add the following lines of code to it: var io = require('socket.io');var config = require('../config');var socketConnection = function socketConnection(socket){socket.emit('message', {message: 'Hey!'});};exports.startIo = function startIo(server){io = io.listen(server);var packtchat = io.of('/packtchat');packtchat.on('connection', socketConnection);return io;}; We are exporting the startIo function that expects a server object that goes right into Socket.IO's listen function. This should start Socket.IO serving. Next, we get a reference to our namespace and listen on the connection event, sending a message event back to the client. We also are loading our configuration settings. Let's add some code to the layout and see whether our application has real-time communication. We will need the Socket.IO client library, so link to it from node_modules like you have been doing, and put it in our static directory under a newly created js directory. Open layout.ejs present in the packtchatviews location and add the following lines to it: <!-- put these right before the body end tag --><script type="text/javascript" src="/js/socket.io.js"></script><script>var socket = io.connect("http://localhost:3000/packtchat");socket.on('message', function(d){console.log(d);});</script> We just listen for a message event and log it to the console. Fire up the node and load your application, http://localhost:3000. Check to see whether you get a message in your console. You should see your message logged to the console, as seen in the following screenshot: Success! Our application now has real-time communication. We are not done though. We still have to wire up all the events for our app. Who are you? There is one glaring issue. How do we know who is making the requests? Express has middleware that parses the session to see if someone has logged in. Socket.IO does not even know about a session. Socket.IO lets anyone connect that knows the URL. We do not want anonymous connections that can listen to all our events and send events to the server. We only want authenticated users to be able to create a WebSocket. We need to get Socket.IO access to our sessions. Authorization in Socket.IO We haven't discussed it yet, but Socket.IO has middleware. Before the connection event gets fired, we can execute a function and either allow the connection or deny it. This is exactly what we need. Using the authorization handler Authorization can happen at two places, on the default namespace or on a named namespace connection. Both authorizations happen through the handshake. The function's signature is the same either way. It will pass in the socket server, which has some stuff we need such as the connection's headers, for example. For now, we will add a simple authorization function to see how it works with Socket.IO. 
Open up index.js, present at the packtchatsocket.io location, and add a new function that will sit next to the socketConnection function, as seen in the following code: var io = require('socket.io');var socketAuth = function socketAuth(socket, next){return next();return next(new Error('Nothing Defined'));};var socketConnection = function socketConnection(socket){socket.emit('message', {message: 'Hey!'});};exports.startIo = function startIo(server){io = io.listen(server);var packtchat = io.of('/packtchat');packtchat.use(socketAuth);packtchat.on('connection', socketConnection);return io;}; I know that there are two returns in this function. We are going to comment one out, load the site, and then switch the lines that are commented out. The socket server that is passed in will have a reference to the handshake data that we will use shortly. The next function works just like it does in Express. If we execute it without anything, the middleware chain will continue. If it is executed with an error, it will stop the chain. Let's load up our site and test both by switching which return gets executed. We can allow or deny connections as we please now, but how do we know who is trying to connect? Cookies and sessions We will do it the same way Express does. We will look at the cookies that are passed and see if there is a session. If there is a session, then we will load it up and see what is in it. At this point, we should have the same knowledge about the Socket.IO connection that Express does about a request. The first thing we need to do is get a cookie parser. We will use a very aptly named package called cookie. This should already be installed if you updated your package.json and installed all the packages. Add a reference to this at the top of index.js present in the packtchatsocket.io location with all the other variable declarations: Var cookie = require('cookie'); And now we can parse our cookies. Socket.IO passes in the cookie with the socket object in our middleware. Here is how we parse it. Add the following code in the socketAuth function: var handshakeData = socket.request;var parsedCookie = cookie.parse(handshakeData.headers.cookie); At this point, we will have an object that has our connect.sid in it. Remember that this is a signed value. We cannot use it as it is right now to get the session ID. We will need to parse this signed cookie. This is where cookie-parser comes in. We will now create a reference to it, as follows: var cookieParser = require('cookie-parser'); We can now parse the signed connect.sid cookie to get our session ID. Add the following code right after our parsing code: var sid = cookieParser.signedCookie (parsedCookie['connect.sid'], config.secret); This will take the value from our parsedCookie and using our secret passphrase, will return the unsigned value. We will do a quick check to make sure this was a valid signed cookie by comparing the unsigned value to the original. We will do this in the following way: if (parsedCookie['connect.sid'] === sid)   return next(new Error('Not Authenticated')); This check will make sure we are only using valid signed session IDs. The following screenshot will show you the values of an example Socket.IO authorization with a cookie: Getting the session We now have a session ID so we can query Redis and get the session out. The default session store object of Express is extended by connect-redis. To use connect-redis, we use the same session package as we did with Express, express-session. 
The following code is used to create all this in index.js, present at packtchatsocket.io: //at the top with the other variable declarationsvar expressSession = require('express-session');var ConnectRedis = require('connect-redis')(expressSession);var redisSession = new ConnectRedis({host: config.redisHost, port: config.redisPort}); The final line is creating the object that will connect to Redis and get our session. This is the same command used with Express when setting the store option for the session. We can now get the session from Redis and see what's inside of it. What follows is the entire socketAuth function along with all our variable declarations: var io = require('socket.io'),connect = require('connect'),cookie = require('cookie'),expressSession = require('express-session'),ConnectRedis = require('connect-redis')(expressSession),redis = require('redis'),config = require('../config'),redisSession = new ConnectRedis({host: config.redisHost, port: config.redisPort});var socketAuth = function socketAuth(socket, next){var handshakeData = socket.request;var parsedCookie = cookie.parse(handshakeData.headers.cookie);var sid = connect.utils.parseSignedCookie(parsedCookie['connect.sid'], config.secret);if (parsedCookie['connect.sid'] === sid) return next(new Error('Not Authenticated'));redisSession.get(sid, function(err, session){   if (session.isAuthenticated)   {     socket.user = session.user;     socket.sid = sid;     return next();   }   else     return next(new Error('Not Authenticated'));});}; We can use redisSession and sid to get the session out of Redis and check its attributes. As far as our packages are concerned, we are just another Express app getting session data. Once we have the session data, we check the isAuthenticated attribute. If it's true, we know the user is logged in. If not, we do not let them connect yet. We are adding properties to the socket object to store information from the session. Later on, after a connection is made, we can get this information. As an example, we are going to change our socketConnection function to send the user object to the client. The following should be our socketConnection function: var socketConnection = function socketConnection(socket){socket.emit('message', {message: 'Hey!'});socket.emit('message', socket.user);}; Now, let's load up our browser and go to http://localhost:3000. Log in and then check the browser's console. The following screenshot will show that the client is receiving the messages: Adding application-specific events The next thing to do is to build out all the real-time events that Socket.IO is going to listen for and respond to. We are just going to create the skeleton for each of these listeners. Open up index.js, present in packtchatsocket.io, and change the entire socketConnection function to the following code: var socketConnection = function socketConnection(socket){socket.on('GetMe', function(){});socket.on('GetUser', function(room){});socket.on('GetChat', function(data){});socket.on('AddChat', function(chat){});socket.on('GetRoom', function(){});socket.on('AddRoom', function(r){});socket.on('disconnect', function(){});}; Most of our emit events will happen in response to a listener. Using Redis as the store for Socket.IO The final thing we are going to add is to switch Socket.IO's internal store to Redis. By default, Socket.IO uses a memory store to save any data you attach to a socket. As we know now, we cannot have an application state that is stored only on one server. We need to store it in Redis. 
Therefore, we add it to index.js, present in packtchatsocket.io. Add the following code to the variable declarations: Var redisAdapter = require('socket.io-redis'); An application state is a flexible idea. We can store the application state locally. This is done when the state does not need to be shared. A simple example is keeping the path to a local temp file. When the data will be needed by multiple connections, then it must be put into a shared space. Anything with a user's session will need to be shared, for example. The next thing we need to do is add some code to our startIo function. The following code is what our startIo function should look like: exports.startIo = function startIo(server){io = io.listen(server);io.adapter(redisAdapter({host: config.redisHost, port: config.redisPort}));var packtchat = io.of('/packtchat');packtchat.use(socketAuth);packtchat.on('connection', socketConnection);return io;}; The first thing is to start the server listening. Next, we will call io.set, which allows us to set configuration options. We create a new redisStore and set all the Redis attributes (redisPub, redisSub, and redisClient) to a new Redis client connection. The Redis client takes a port and the hostname. Socket.IO inner workings We are not going to completely dive into everything that Socket.IO does, but we will discuss a few topics. WebSockets This is what makes Socket.IO work. All web servers serve HTTP, that is, what makes them web servers. This works great when all you want to do is serve pages. These pages are served based on requests. The browser must ask for information before receiving it. If you want to have real-time connections, though, it is difficult and requires some workaround. HTTP was not designed to have the server initiate the request. This is where WebSockets come in. WebSockets allow the server and client to create a connection and keep it open. Inside of this connection, either side can send messages back and forth. This is what Socket.IO (technically, Engine.io) leverages to create real-time communication. Socket.IO even has fallbacks if you are using a browser that does not support WebSockets. The browsers that do support WebSockets at the time of writing include the latest versions of Chrome, Firefox, Safari, Safari on iOS, Opera, and IE 11. This means the browsers that do not support WebSockets are all the older versions of IE. Socket.IO will use different techniques to simulate a WebSocket connection. This involves creating an Ajax request and keeping the connection open for a long time. If data needs to be sent, it will send it in an Ajax request. Eventually, that request will close and the client will immediately create another request. Socket.IO even has an Adobe Flash implementation if you have to support really old browsers (IE 6, for example). It is not enabled by default. WebSockets also are a little different when scaling our application. Because each WebSocket creates a persistent connection, we may need more servers to handle Socket.IO traffic then regular HTTP. For example, when someone connects and chats for an hour, there will have only been one or two HTTP requests. In contrast, a WebSocket will have to be open for the entire hour. The way our code base is written, we can easily scale up more Socket.IO servers by themselves. Ideas to take away from this article The first takeaway is that for every emit, there needs to be an on. This is true whether the sender is the server or the client. 
It is always best to sit down and map out each event and which direction it is going. The next idea is that of note, which entails building our app out of loosely coupled modules. Our app.js kicks everything that deals with Express off. Then, it fires the startIo function. While it does pass over an object, we could easily create one and use that. Socket.IO just wants a basic HTTP server. In fact, you can just pass the port, which is what we used in our first couple of Socket.IO applications (Ping-Pong). If we wanted to create an application layer of Socket.IO servers, we could refactor this code out and have all the Socket.IO servers run on separate servers other than Express. Summary At this point, we should feel comfortable about using real-time events in Socket.IO. We should also know how to namespace our io server and create groups of users. We also learned how to authorize socket connections to only allow logged-in users to connect. Resources for Article: Further resources on this subject: Exploring streams [article] Working with Data Access and File Formats Using Node.js [article] So, what is Node.js? [article]

Adding Real-time Functionality Using Socket.io

Packt
22 Sep 2014
18 min read
In this article by Amos Q. Haviv, the author of MEAN Web Development, decribes how Socket.io enables Node.js developers to support real-time communication using WebSockets in modern browsers and legacy fallback protocols in older browsers. (For more resources related to this topic, see here.) Introducing WebSockets Modern web applications such as Facebook, Twitter, or Gmail are incorporating real-time capabilities, which enable the application to continuously present the user with recently updated information. Unlike traditional applications, in real-time applications the common roles of browser and server can be reversed since the server needs to update the browser with new data, regardless of the browser request state. This means that unlike the common HTTP behavior, the server won't wait for the browser's requests. Instead, it will send new data to the browser whenever this data becomes available. This reverse approach is often called Comet, a term coined by a web developer named Alex Russel back in 2006 (the term was a word play on the AJAX term; both Comet and AJAX are common household cleaners in the US). In the past, there were several ways to implement a Comet functionality using the HTTP protocol. The first and easiest way is XHR polling. In XHR polling, the browser makes periodic requests to the server. The server then returns an empty response unless it has new data to send back. Upon a new event, the server will return the new event data to the next polling request. While this works quite well for most browsers, this method has two problems. The most obvious one is that using this method generates a large number of requests that hit the server with no particular reason, since a lot of requests are returning empty. The second problem is that the update time depends on the request period. This means that new data will only get pushed to the browser on the next request, causing delays in updating the client state. To solve these issues, a better approach was introduced: XHR long polling. In XHR long polling, the browser makes an XHR request to the server, but a response is not sent back unless the server has a new data. Upon an event, the server responds with the event data and the browser makes a new long polling request. This cycle enables a better management of requests, since there is only a single request per session. Furthermore, the server can update the browser immediately with new information, without having to wait for the browser's next request. Because of its stability and usability, XHR long polling has become the standard approach for real-time applications and was implemented in various ways, including Forever iFrame, multipart XHR, JSONP long polling using script tags (for cross-domain, real-time support), and the common long-living XHR. However, all these approaches were actually hacks using the HTTP and XHR protocols in a way they were not meant to be used. With the rapid development of modern browsers and the increased adoption of the new HTML5 specifications, a new protocol emerged for implementing real-time communication: the full duplex WebSockets. In browsers that support the WebSockets protocol, the initial connection between the server and browser is made over HTTP and is called an HTTP handshake. Once the initial connection is made, the browser and server open a single ongoing communication channel over a TCP socket. Once the socket connection is established, it enables bidirectional communication between the browser and server. 
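To make the long polling cycle described above concrete, here is a minimal browser-side sketch; the /events endpoint and the handleEvent() function are illustrative placeholders, not part of this article's application:

function handleEvent(data) {
  console.log('new event from server:', data);
}

function longPoll() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/events');                    // hypothetical endpoint that holds the request open until data is ready
  xhr.onload = function () {
    if (xhr.status === 200 && xhr.responseText) {
      handleEvent(JSON.parse(xhr.responseText)); // process the event data pushed by the server
    }
    longPoll();                                  // immediately open the next long-lived request
  };
  xhr.onerror = function () {
    setTimeout(longPoll, 1000);                  // back off briefly on errors, then resume polling
  };
  xhr.send();
}

longPoll();

Libraries such as Socket.io implement a far more robust version of this loop for you, which is exactly the motivation for the next section.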
This enables both parties to send and retrieve messages over a single communication channel. This also helps to lower server load, decrease message latency, and unify PUSH communication using a standalone connection. However, WebSockets still suffer from two major problems. First and foremost is browser compatibility. The WebSockets specification is fairly new, so older browsers don't support it, and though most modern browsers now implement the protocol, a large group of users are still using these older browsers. The second problem is HTTP proxies, firewalls, and hosting providers. Since WebSockets use a different communication protocol than HTTP, a lot of these intermediaries don't support it yet and block any socket communication. As it has always been with the Web, developers are left with a fragmentation problem, which can only be solved using an abstraction library that optimizes usability by switching between protocols according to the available resources. Fortunately, a popular library called Socket.io was already developed for this purpose, and it is freely available for the Node.js developer community. Introducing Socket.io Created in 2010 by JavaScript developer, Guillermo Rauch, Socket.io aimed to abstract Node.js' real-time application development. Since then, it has evolved dramatically, released in nine major versions before being broken in its latest version into two different modules: Engine.io and Socket.io. Previous versions of Socket.io were criticized for being unstable, since they first tried to establish the most advanced connection mechanisms and then fallback to more primitive protocols. This caused serious issues with using Socket.io in production environments and posed a threat to the adoption of Socket.io as a real-time library. To solve this, the Socket.io team redesigned it and wrapped the core functionality in a base module called Engine.io. The idea behind Engine.io was to create a more stable real-time module, which first opens a long-polling XHR communication and then tries to upgrade the connection to a WebSockets channel. The new version of Socket.io uses the Engine.io module and provides the developer with various features such as events, rooms, and automatic connection recovery, which you would otherwise implement by yourself. In this article's examples, we will use the new Socket.io 1.0, which is the first version to use the Engine.io module. Older versions of Socket.io prior to Version 1.0 are not using the new Engine.io module and therefore are much less stable in production environments. When you include the Socket.io module, it provides you with two objects: a socket server object that is responsible for the server functionality and a socket client object that handles the browser's functionality. We'll begin by examining the server object. The Socket.io server object The Socket.io server object is where it all begins. You start by requiring the Socket.io module, and then use it to create a new Socket.io server instance that will interact with socket clients. The server object supports both a standalone implementation and the ability to use it in conjunction with the Express framework. The server instance then exposes a set of methods that allow you to manage the Socket.io server operations. Once the server object is initialized, it will also be responsible for serving the socket client JavaScript file for the browser. 
A simple implementation of the standalone Socket.io server will look as follows: var io = require('socket.io')();io.on('connection', function(socket){ /* ... */ });io.listen(3000); This will open a Socket.io over the 3000 port and serve the socket client file at the URL http://localhost:3000/socket.io/socket.io.js. Implementing the Socket.io server in conjunction with an Express application will be a bit different: var app = require('express')();var server = require('http').Server(app);var io = require('socket.io')(server);io.on('connection', function(socket){ /* ... */ });server.listen(3000); This time, you first use the http module of Node.js to create a server and wrap the Express application. The server object is then passed to the Socket.io module and serves both the Express application and the Socket.io server. Once the server is running, it will be available for socket clients to connect. A client trying to establish a connection with the Socket.io server will start by initiating the handshaking process. Socket.io handshaking When a client wants to connect the Socket.io server, it will first send a handshake HTTP request. The server will then analyze the request to gather the necessary information for ongoing communication. It will then look for configuration middleware that is registered with the server and execute it before firing the connection event. When the client is successfully connected to the server, the connection event listener is executed, exposing a new socket instance. Once the handshaking process is over, the client is connected to the server and all communication with it is handled through the socket instance object. For example, handling a client's disconnection event will be as follows: var app = require('express')();var server = require('http').Server(app);var io = require('socket.io')(server);io.on('connection', function(socket){socket.on('disconnect', function() {   console.log('user has disconnected');});});server.listen(3000); Notice how the socket.on() method adds an event handler to the disconnection event. Although the disconnection event is a predefined event, this approach works the same for custom events as well, as you will see in the following sections. While the handshake mechanism is fully automatic, Socket.io does provide you with a way to intercept the handshake process using a configuration middleware. The Socket.io configuration middleware Although the Socket.io configuration middleware existed in previous versions, in the new version it is even simpler and allows you to manipulate socket communication before the handshake actually occurs. To create a configuration middleware, you will need to use the server's use() method, which is very similar to the Express application's use() method: var app = require('express')();var server = require('http').Server(app);var io = require('socket.io')(server);io.use(function(socket, next) {/* ... */next(null, true);});io.on('connection', function(socket){socket.on('disconnect', function() {   console.log('user has disconnected');});});server.listen(3000); As you can see, the io.use() method callback accepts two arguments: the socket object and a next callback. The socket object is the same socket object that will be used for the connection and it holds some connection properties. One important property is the socket.request property, which represents the handshake HTTP request. In the following sections, you will use the handshake request to incorporate the Passport session with the Socket.io connection. 
The next argument is a callback method that accepts two arguments: an error object and Boolean value. The next callback tells Socket.io whether or not to proceed with the handshake process, so if you pass an error object or a false value to the next method, Socket.io will not initiate the socket connection. Now that you have a basic understanding of how handshaking works, it is time to discuss the Socket.io client object. The Socket.io client object The Socket.io client object is responsible for the implementation of the browser socket communication with the Socket.io server. You start by including the Socket.io client JavaScript file, which is served by the Socket.io server. The Socket.io JavaScript file exposes an io() method that connects to the Socket.io server and creates the client socket object. A simple implementation of the socket client will be as follows: <script src="/socket.io/socket.io.js"></script><script>var socket = io();socket.on('connect', function() {   /* ... */});</script> Notice the default URL for the Socket.io client object. Although this can be altered, you can usually leave it like this and just include the file from the default Socket.io path. Another thing you should notice is that the io() method will automatically try to connect to the default base path when executed with no arguments; however, you can also pass a different server URL as an argument. As you can see, the socket client is much easier to implement, so we can move on to discuss how Socket.io handles real-time communication using events. Socket.io events To handle the communication between the client and the server, Socket.io uses a structure that mimics the WebSockets protocol and fires events messages across the server and client objects. There are two types of events: system events, which indicate the socket connection status, and custom events, which you'll use to implement your business logic. The system events on the socket server are as follows: io.on('connection', ...): This is emitted when a new socket is connected socket.on('message', ...): This is emitted when a message is sent using the socket.send() method socket.on('disconnect', ...): This is emitted when the socket is disconnected The system events on the client are as follows: socket.io.on('open', ...): This is emitted when the socket client opens a connection with the server socket.io.on('connect', ...): This is emitted when the socket client is connected to the server socket.io.on('connect_timeout', ...): This is emitted when the socket client connection with the server is timed out socket.io.on('connect_error', ...): This is emitted when the socket client fails to connect with the server socket.io.on('reconnect_attempt', ...): This is emitted when the socket client tries to reconnect with the server socket.io.on('reconnect', ...): This is emitted when the socket client is reconnected to the server socket.io.on('reconnect_error', ...): This is emitted when the socket client fails to reconnect with the server socket.io.on('reconnect_failed', ...): This is emitted when the socket client fails to reconnect with the server socket.io.on('close', ...): This is emitted when the socket client closes the connection with the server Handling events While system events are helping us with connection management, the real magic of Socket.io relies on using custom events. In order to do so, Socket.io exposes two methods, both on the client and server objects. 
The first method is the on() method, which binds event handlers with events and the second method is the emit() method, which is used to fire events between the server and client objects. An implementation of the on() method on the socket server is very simple: var app = require('express')();var server = require('http').Server(app);var io = require('socket.io')(server);io.on('connection', function(socket){socket.on('customEvent', function(customEventData) {   /* ... */});});server.listen(3000); In the preceding code, you bound an event listener to the customEvent event. The event handler is being called when the socket client object emits the customEvent event. Notice how the event handler accepts the customEventData argument that is passed to the event handler from the socket client object. An implementation of the on() method on the socket client is also straightforward: <script src="/socket.io/socket.io.js"></script><script>var socket = io();socket.on('customEvent', function(customEventData) {   /* ... */});</script> This time the event handler is being called when the socket server emits the customEvent event that sends customEventData to the socket client event handler. Once you set your event handlers, you can use the emit() method to send events from the socket server to the socket client and vice versa. Emitting events On the socket server, the emit() method is used to send events to a single socket client or a group of connected socket clients. The emit() method can be called from the connected socket object, which will send the event to a single socket client, as follows: io.on('connection', function(socket){socket.emit('customEvent', customEventData);}); The emit() method can also be called from the io object, which will send the event to all connected socket clients, as follows: io.on('connection', function(socket){io.emit('customEvent', customEventData);}); Another option is to send the event to all connected socket clients except from the sender using the broadcast property, as shown in the following lines of code: io.on('connection', function(socket){socket.broadcast.emit('customEvent', customEventData);}); On the socket client, things are much simpler. Since the socket client is only connected to the socket server, the emit() method will only send the event to the socket server: var socket = io();socket.emit('customEvent', customEventData); Although these methods allow you to switch between personal and global events, they still lack the ability to send events to a group of connected socket clients. Socket.io offers two options to group sockets together: namespaces and rooms. Socket.io namespaces In order to easily control socket management, Socket.io allow developers to split socket connections according to their purpose using namespaces. So instead of creating different socket servers for different connections, you can just use the same server to create different connection endpoints. This means that socket communication can be divided into groups, which will then be handled separately. Socket.io server namespaces To create a socket server namespace, you will need to use the socket server of() method that returns a socket namespace. Once you retain the socket namespace, you can just use it the same way you use the socket server object: var app = require('express')();var server = require('http').Server(app);var io = require('socket.io')(server);io.of('/someNamespace').on('connection', function(socket){socket.on('customEvent', function(customEventData) {   /* ... 
*/});});io.of('/someOtherNamespace').on('connection', function(socket){socket.on('customEvent', function(customEventData) {   /* ... */});});server.listen(3000); In fact, when you use the io object, Socket.io actually uses a default empty namespace as follows: io.on('connection', function(socket){/* ... */}); The preceding lines of code are actually equivalent to this: io.of('').on('connection', function(socket){/* ... */}); Socket.io client namespaces On the socket client, the implementation is a little different: <script src="/socket.io/socket.io.js"></script><script>var someSocket = io('/someNamespace');someSocket.on('customEvent', function(customEventData) {   /* ... */});var someOtherSocket = io('/someOtherNamespace');someOtherSocket.on('customEvent', function(customEventData) {   /* ... */});</script> As you can see, you can use multiple namespaces on the same application without much effort. However, once sockets are connected to different namespaces, you will not be able to send an event to all these namespaces at once. This means that namespaces are not very good for a more dynamic grouping logic. For this purpose, Socket.io offers a different feature called rooms. Socket.io rooms Socket.io rooms allow you to partition connected sockets into different groups in a dynamic way. Connected sockets can join and leave rooms, and Socket.io provides you with a clean interface to manage rooms and emit events to the subset of sockets in a room. The rooms functionality is handled solely on the socket server but can easily be exposed to the socket client. Joining and leaving rooms Joining a room is handled using the socket join() method, while leaving a room is handled using the leave() method. So, a simple subscription mechanism can be implemented as follows: io.on('connection', function(socket) {   socket.on('join', function(roomData) {       socket.join(roomData.roomName);   })   socket.on('leave', function(roomData) {       socket.leave(roomData.roomName);   })}); Notice that the join() and leave() methods both take the room name as the first argument. Emitting events to rooms To emit events to all the sockets in a room, you will need to use the in() method. So, emitting an event to all socket clients who joined a room is quite simple and can be achieved with the help of the following code snippets: io.on('connection', function(socket){   io.in('someRoom').emit('customEvent', customEventData);}); Another option is to send the event to all connected socket clients in a room except the sender by using the broadcast property and the to() method: io.on('connection', function(socket){   socket.broadcast.to('someRoom').emit('customEvent', customEventData);}); This pretty much covers the simple yet powerful room functionality of Socket.io. In the next section, you will learn how implement Socket.io in your MEAN application, and more importantly, how to use the Passport session to identify users in the Socket.io session. While we covered most of Socket.io features, you can learn more about Socket.io by visiting the official project page at https://socket.io. Summary In this article, you learned how the Socket.io module works. You went over the key features of Socket.io and learned how the server and client communicate. You configured your Socket.io server and learned how to integrate it with your Express application. You also used the Socket.io handshake configuration to integrate the Passport session. 
In the end, you built a fully functional chat example and learned how to wrap the Socket.io client with an AngularJS service. Resources for Article: Further resources on this subject: Creating a RESTful API [article] Angular Zen [article] Digging into the Architecture [article]
Improving Code Quality

Packt
22 Sep 2014
18 min read
In this article by Alexandru Vlăduţu, author of Mastering Web Application Development with Express, we are going to see how to test Express applications and how to improve the quality of our code by leveraging existing NPM modules. (For more resources related to this topic, see here.)

Creating and testing an Express file-sharing application

Now, it's time to see how to develop and test an Express application with what we have learned previously. We will create a file-sharing application that allows users to upload files and password-protect them if they choose to. After uploading the files to the server, we will create a unique ID for that file, store the metadata along with the content (as a separate JSON file), and redirect the user to the file's information page. When trying to access a password-protected file, an HTTP basic authentication pop-up will appear, and the user will only have to enter the password (no username in this case). The package.json file, so far, will contain the following code: { "name": "file-uploading-service", "version": "0.0.1", "private": true, "scripts": { "start": "node ./bin/www" }, "dependencies": { "express": "~4.2.0", "static-favicon": "~1.0.0", "morgan": "~1.0.0", "cookie-parser": "~1.0.1", "body-parser": "~1.0.0", "debug": "~0.7.4", "ejs": "~0.8.5", "connect-multiparty": "~1.0.5", "cuid": "~1.2.4", "bcrypt": "~0.7.8", "basic-auth-connect": "~1.0.0", "errto": "~0.2.1", "custom-err": "0.0.2", "lodash": "~2.4.1", "csurf": "~1.2.2", "cookie-session": "~1.0.2", "secure-filters": "~1.0.5", "supertest": "~0.13.0", "async": "~0.9.0" }, "devDependencies": { } } When bootstrapping an Express application using the CLI, a /bin/www file will be automatically created for you. The following is the version we have adopted to extract the name of the application from the package.json file. This way, in case we decide to change it, we won't have to alter our debugging code because it will automatically adapt to the new name, as shown in the following code: #!/usr/bin/env node var pkg = require('../package.json'); var debug = require('debug')(pkg.name + ':main'); var app = require('../app'); app.set('port', process.env.PORT || 3000); var server = app.listen(app.get('port'), function() { debug('Express server listening on port ' + server.address().port); }); The application configurations will be stored inside config.json: { "filesDir": "files", "maxSize": 5 } The properties listed in the preceding code refer to the files folder (where the uploaded files will be stored), a path relative to the project root, and to the maximum allowed file size, expressed in megabytes. The main file of the application is named app.js and lives in the root. We need the connect-multiparty module to support file uploads, and the csurf and cookie-session modules for CSRF protection. The rest of the dependencies are standard and we have used them before.
The full code for the app.js file is as follows: var express = require('express'); var path = require('path'); var favicon = require('static-favicon'); var logger = require('morgan'); var cookieParser = require('cookie-parser'); var session = require('cookie-session'); var bodyParser = require('body-parser'); var multiparty = require('connect-multiparty'); var Err = require('custom-err'); var csrf = require('csurf'); var ejs = require('secure-filters').configure(require('ejs')); var csrfHelper = require('./lib/middleware/csrf-helper'); var homeRouter = require('./routes/index'); var filesRouter = require('./routes/files'); var config = require('./config.json'); var app = express(); var ENV = app.get('env'); // view engine setup app.engine('html', ejs.renderFile); app.set('views', path.join(__dirname, 'views')); app.set('view engine', 'html'); app.use(favicon()); app.use(bodyParser.json()); app.use(bodyParser.urlencoded()); // Limit uploads to X Mb app.use(multiparty({ maxFilesSize: 1024 * 1024 * config.maxSize })); app.use(cookieParser()); app.use(session({ keys: ['rQo2#0s!qkE', 'Q.ZpeR49@9!szAe'] })); app.use(csrf()); // add CSRF helper app.use(csrfHelper); app.use('/', homeRouter); app.use('/files', filesRouter); app.use(express.static(path.join(__dirname, 'public'))); /// catch 404 and forward to error handler app.use(function(req, res, next) { next(Err('Not Found', { status: 404 })); }); /// error handlers // development error handler // will print stacktrace if (ENV === 'development') { app.use(function(err, req, res, next) { res.status(err.status || 500); res.render('error', { message: err.message, error: err }); }); } // production error handler // no stacktraces leaked to user app.use(function(err, req, res, next) { res.status(err.status || 500); res.render('error', { message: err.message, error: {} }); }); module.exports = app; Instead of directly binding the application to a port, we are exporting it, which makes our lives easier when testing with supertest. We won't need to care about things such as the default port availability or specifying a different port environment variable when testing. To avoid having to create the whole input when including the CSRF token, we have created a helper for that inside lib/middleware/csrf-helper.js: module.exports = function(req, res, next) { res.locals.csrf = function() { return "<input type='hidden' name='_csrf' value='" + req.csrfToken() + "' />"; } next(); }; For the password–protection functionality, we will use the bcrypt module and create a separate file inside lib/hash.js for the hash generation and password–compare functionality: var bcrypt = require('bcrypt'); var errTo = require('errto'); var Hash = {}; Hash.generate = function(password, cb) { bcrypt.genSalt(10, errTo(cb, function(salt) { bcrypt.hash(password, salt, errTo(cb, function(hash) { cb(null, hash); })); })); }; Hash.compare = function(password, hash, cb) { bcrypt.compare(password, hash, cb); }; module.exports = Hash; The biggest file of our application will be the file model, because that's where most of the functionality will reside. We will use the cuid() module to create unique IDs for files, and the native fs module to interact with the filesystem. 
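Both the hash module above and the file model that follows lean heavily on the errto helper. If you have not used it before, errTo(cb, fn) simply wraps a node-style callback so that any error is forwarded to cb and the remaining arguments are handed to fn. A rough, illustrative sketch of what it does (the real implementation lives in the errto package on npm) is:

// Simplified sketch of the errto pattern, for illustration only
function errTo(cb, fn) {
   return function(err) {
       if (err) { return cb(err); }
       // pass every argument except the error along to the success handler
       fn.apply(null, Array.prototype.slice.call(arguments, 1));
   };
}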
The following code snippet contains the most important methods for models/file.js: function File(options, id) { this.id = id || cuid(); this.meta = _.pick(options, ['name', 'type', 'size', 'hash', 'uploadedAt']); this.meta.uploadedAt = this.meta.uploadedAt || new Date(); }; File.prototype.save = function(path, password, cb) { var _this = this; this.move(path, errTo(cb, function() { if (!password) { return _this.saveMeta(cb); } hash.generate(password, errTo(cb, function(hashedPassword) { _this.meta.hash = hashedPassword; _this.saveMeta(cb); })); })); }; File.prototype.move = function(path, cb) { fs.rename(path, this.path, cb); }; For the full source code of the file, browse the code bundle. Next, we will create the routes for the file (routes/files.js), which will export an Express router. As mentioned before, the authentication mechanism for password-protected files will be the basic HTTP one, so we will need the basic-auth-connect module. At the beginning of the file, we will include the dependencies and create the router: var express = require('express'); var basicAuth = require('basic-auth-connect'); var errTo = require('errto'); var pkg = require('../package.json'); var File = require('../models/file'); var debug = require('debug')(pkg.name + ':filesRoute'); var router = express.Router(); We will have to create two routes that will include the id parameter in the URL, one for displaying the file information and another one for downloading the file. In both of these cases, we will need to check if the file exists and require user authentication in case it's password-protected. This is an ideal use case for the router.param() function because these actions will be performed each time there is an id parameter in the URL. The code is as follows: router.param('id', function(req, res, next, id) { File.find(id, errTo(next, function(file) { debug('file', file); // populate req.file, will need it later req.file = file; if (file.isPasswordProtected()) { // Password – protected file, check for password using HTTP basic auth basicAuth(function(user, pwd, fn) { if (!pwd) { return fn(); } // ignore user file.authenticate(pwd, errTo(next, function(match) { if (match) { return fn(null, file.id); } fn(); })); })(req, res, next); } else { // Not password – protected, proceed normally next(); } })); }); The rest of the routes are fairly straightforward, using response.download() to send the file to the client, or using response.redirect() after uploading the file: router.get('/', function(req, res, next) { res.render('files/new', { title: 'Upload file' }); }); router.get('/:id.html', function(req, res, next) { res.render('files/show', { id: req.params.id, meta: req.file.meta, isPasswordProtected: req.file.isPasswordProtected(), hash: hash, title: 'Download file ' + req.file.meta.name }); }); router.get('/download/:id', function(req, res, next) { res.download(req.file.path, req.file.meta.name); }); router.post('/', function(req, res, next) { var tempFile = req.files.file; if (!tempFile.size) { return res.redirect('/files'); } var file = new File(tempFile); file.save(tempFile.path, req.body.password, errTo(next, function() { res.redirect('/files/' + file.id + '.html'); })); }); module.exports = router; The view for uploading a file contains a multipart form with a CSRF token inside (views/files/new.html): <%- include ../layout/header.html %> <form action="/files" method="POST" enctype="multipart/form-data"> <div class="form-group"> <label>Choose file:</label> <input type="file" name="file" /> </div> <div 
class="form-group"> <label>Password protect (leave blank otherwise):</label> <input type="password" name="password" /> </div> <div class="form-group"> <%- csrf() %> <input type="submit" /> </div> </form> <%- include ../layout/footer.html %> To display the file's details, we will create another view (views/files/show.html). Besides showing the basic file information, we will display a special message in case the file is password-protected, so that the client is notified that a password should also be shared along with the link: <%- include ../layout/header.html %> <p> <table> <tr> <th>Name</th> <td><%= meta.name %></td> </tr> <th>Type</th> <td><%= meta.type %></td> </tr> <th>Size</th> <td><%= meta.size %> bytes</td> </tr> <th>Uploaded at</th> <td><%= meta.uploadedAt %></td> </tr> </table> </p> <p> <a href="/files/download/<%- id %>">Download file</a> | <a href="/files">Upload new file</a> </p> <p> To share this file with your friends use the <a href="/files/<%- id %>">current link</a>. <% if (isPasswordProtected) { %> <br /> Don't forget to tell them the file password as well! <% } %> </p> <%- include ../layout/footer.html %> Running the application To run the application, we need to install the dependencies and run the start script: $ npm i $ npm start The default port for the application is 3000, so if we visit http://localhost:3000/files, we should see the following page: After uploading the file, we should be redirected to the file's page, where its details will be displayed: Unit tests Unit testing allows us to test individual parts of our code in isolation and verify their correctness. By making our tests focused on these small components, we decrease the complexity of the setup, and most likely, our tests should execute faster. Using the following command, we'll install a few modules to help us in our quest: $ npm i mocha should sinon––save-dev We are going to write unit tests for our file model, but there's nothing stopping us from doing the same thing for our routes or other files from /lib. The dependencies will be listed at the top of the file (test/unit/file-model.js): var should = require('should'); var path = require('path'); var config = require('../../config.json'); var sinon = require('sinon'); We will also need to require the native fs module and the hash module, because these modules will be stubbed later on. 
Apart from these, we will create an empty callback function and reuse it, as shown in the following code: // will be stubbing methods on these modules later on var fs = require('fs'); var hash = require('../../lib/hash'); var noop = function() {}; The tests for the instance methods will be created first: describe('models', function() { describe('File', function() { var File = require('../../models/file'); it('should have default properties', function() { var file = new File(); file.id.should.be.a.String; file.meta.uploadedAt.should.be.a.Date; }); it('should return the path based on the root and the file id', function() { var file = new File({}, '1'); file.path.should.eql(File.dir + '/1'); }); it('should move a file', function() { var stub = sinon.stub(fs, 'rename'); var file = new File({}, '1'); file.move('/from/path', noop); stub.calledOnce.should.be.true; stub.calledWith('/from/path', File.dir + '/1', noop).should.be.true; stub.restore(); }); it('should save the metadata', function() { var stub = sinon.stub(fs, 'writeFile'); var file = new File({}, '1'); file.meta = { a: 1, b: 2 }; file.saveMeta(noop); stub.calledOnce.should.be.true; stub.calledWith(File.dir + '/1.json', JSON.stringify(file.meta), noop).should.be.true; stub.restore(); }); it('should check if file is password protected', function() { var file = new File({}, '1'); file.meta.hash = 'y'; file.isPasswordProtected().should.be.true; file.meta.hash = null; file.isPasswordProtected().should.be.false; }); it('should allow access if matched file password', function() { var stub = sinon.stub(hash, 'compare'); var file = new File({}, '1'); file.meta.hash = 'hashedPwd'; file.authenticate('password', noop); stub.calledOnce.should.be.true; stub.calledWith('password', 'hashedPwd', noop).should.be.true; stub.restore(); }); We are stubbing the functionalities of the fs and hash modules because we want to test our code in isolation. Once we are done with the tests, we restore the original functionality of the methods. Now that we're done testing the instance methods, we will go on to test the static ones (assigned directly onto the File object): describe('.dir', function() { it('should return the root of the files folder', function() { path.resolve(__dirname + '/../../' + config.filesDir).should.eql(File.dir); }); }); describe('.exists', function() { var stub; beforeEach(function() { stub = sinon.stub(fs, 'exists'); }); afterEach(function() { stub.restore(); }); it('should callback with an error when the file does not exist', function(done) { File.exists('unknown', function(err) { err.should.be.an.instanceOf(Error).and.have.property('status', 404); done(); }); // call the function passed as argument[1] with the parameter `false` stub.callArgWith(1, false); }); it('should callback with no arguments when the file exists', function(done) { File.exists('existing-file', function(err) { (typeof err === 'undefined').should.be.true; done(); }); // call the function passed as argument[1] with the parameter `true` stub.callArgWith(1, true); }); }); }); }); To stub asynchronous functions and execute their callback, we use the stub.callArgWith() function provided by sinon, which executes the callback provided by the argument with the index <<number>> of the stub with the subsequent arguments. For more information, check out the official documentation at http://sinonjs.org/docs/#stubs. When running tests, Node developers expect the npm test command to be the command that triggers the test suite, so we need to add that script to our package.json file. 
However, since we are going to have different tests to be run, it would be even better to add a unit-tests script and make npm test run that for now. The scripts property should look like the following code: "scripts": { "start": "node ./bin/www", "unit-tests": "mocha --reporter=spec test/unit", "test": "npm run unit-tests" }, Now, if we run the tests, we should see the following output in the terminal: Functional tests So far, we have tested each method to check whether it works fine on its own, but now, it's time to check whether our application works according to the specifications when wiring all the things together. Besides the existing modules, we will need to install and use the following ones: supertest: This is used to test the routes in an expressive manner cheerio: This is used to extract the CSRF token out of the form and pass it along when uploading the file rimraf: This is used to clean up our files folder once we're done with the testing We will create a new file called test/functional/files-routes.js for the functional tests. As usual, we will list our dependencies first: var fs = require('fs'); var request = require('supertest'); var should = require('should'); var async = require('async'); var cheerio = require('cheerio'); var rimraf = require('rimraf'); var app = require('../../app'); There will be a couple of scenarios to test when uploading a file, such as: Checking whether a file that is uploaded without a password can be publicly accessible Checking that a password-protected file can only be accessed with the correct password We will create a function called uploadFile that we can reuse across different tests. This function will use the same supertest agent when making requests so it can persist the cookies, and will also take care of extracting and sending the CSRF token back to the server when making the post request. In case a password argument is provided, it will send that along with the file. The function will assert that the status code for the upload page is 200 and that the user is redirected to the file page after the upload. 
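If supertest is new to you, the core pattern is to wrap the Express app (or an agent created from it) and chain expectations onto a request. A minimal, stand-alone example, separate from the suite we are about to write, looks like this:

var request = require('supertest');
var app = require('../../app');

describe('GET /files', function() {
   it('responds with the upload form', function(done) {
       request(app)
           .get('/files')
           .expect(200, done);
   });
});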
The full code of the function is listed as follows: function uploadFile(agent, password, done) { agent .get('/files') .expect(200) .end(function(err, res) { (err == null).should.be.true; var $ = cheerio.load(res.text); var csrfToken = $('form input[name=_csrf]').val(); csrfToken.should.not.be.empty; var req = agent .post('/files') .field('_csrf', csrfToken) .attach('file', __filename); if (password) { req = req.field('password', password); } req .expect(302) .expect('Location', /files\/(.*)\.html/) .end(function(err, res) { (err == null).should.be.true; var fileUid = res.headers['location'].match(/files\/(.*)\.html/)[1]; done(null, fileUid); }); }); } Note that we will use rimraf in an after function to clean up the files folder, but it would be best to have a separate path for uploading files while testing (other than the one used for development and production): describe('Files-Routes', function(done) { after(function() { var filesDir = __dirname + '/../../files'; rimraf.sync(filesDir); fs.mkdirSync(filesDir); }); When testing the file uploads, we want to make sure that without providing the correct password, access will not be granted to the file pages: describe("Uploading a file", function() { it("should upload a file without password protecting it", function(done) { var agent = request.agent(app); uploadFile(agent, null, done); }); it("should upload a file and password protect it", function(done) { var agent = request.agent(app); var pwd = 'sample-password'; uploadFile(agent, pwd, function(err, filename) { async.parallel([ function getWithoutPwd(next) { agent .get('/files/' + filename + '.html') .expect(401) .end(function(err, res) { (err == null).should.be.true; next(); }); }, function getWithPwd(next) { agent .get('/files/' + filename + '.html') .set('Authorization', 'Basic ' + new Buffer(':' + pwd).toString('base64')) .expect(200) .end(function(err, res) { (err == null).should.be.true; next(); }); } ], function(err) { (err == null).should.be.true; done(); }); }); }); }); }); It's time to do the same thing we did for the unit tests: make a script so we can run them with npm by using npm run functional-tests. At the same time, we should update the npm test script to include both our unit tests and our functional tests: "scripts": { "start": "node ./bin/www", "unit-tests": "mocha --reporter=spec test/unit", "functional-tests": "mocha --reporter=spec --timeout=10000 --slow=2000 test/functional", "test": "npm run unit-tests && npm run functional-tests" } If we run the tests, we should see the following output:

Running tests before committing in Git

It's a good practice to run the test suite before committing to git and only allow the commit to pass if the tests have been executed successfully. The same applies for other version control systems. To achieve this, we should add the .git/hooks/pre-commit file, which should take care of running the tests and exiting with an error in case they failed. Luckily, this is a repetitive task (which can be applied to all Node applications), so there is an NPM module that creates this hook file for us. All we need to do is install the pre-commit module (https://www.npmjs.org/package/pre-commit) as a development dependency using the following command: $ npm i pre-commit --save-dev This should automatically create the pre-commit hook file so that all the tests are run before committing (using the npm test command). The pre-commit module also supports running custom scripts specified in the package.json file.
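If you want the hook to run more than the test suite (a lint step, for example), you can list the scripts to execute under a pre-commit key. A possible configuration, in which the lint script is hypothetical and not part of our project, could look like this:

{
  "scripts": {
    "lint": "jshint .",
    "unit-tests": "mocha --reporter=spec test/unit",
    "functional-tests": "mocha --reporter=spec --timeout=10000 --slow=2000 test/functional",
    "test": "npm run unit-tests && npm run functional-tests"
  },
  "pre-commit": ["lint", "test"]
}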
For more details on how to achieve that, read the module documentation at https://www.npmjs.org/package/pre-commit. Summary In this article, we have learned about writing tests for Express applications and in the process, explored a variety of helpful modules. Resources for Article: Further resources on this subject: Web Services Testing and soapUI [article] ExtGWT Rich Internet Application: Crafting UI Real Estate [article] Rendering web pages to PDF using Railo Open Source [article]
Handling Long-running Requests in Play

Packt
22 Sep 2014
18 min read
In this article by Julien Richard-Foy, author of Play Framework Essentials, we will dive in the framework internals and explain how to leverage its reactive programming model to manipulate data streams. (For more resources related to this topic, see here.) Firstly, I would like to mention that the code called by controllers must be thread-safe. We also noticed that the result of calling an action has type Future[Result] rather than just Result. This article explains these subtleties and gives answers to questions such as "How are concurrent requests processed by Play applications?" More precisely, this article presents the challenges of stream processing and the way the Play framework solves them. You will learn how to consume, produce, and transform data streams in a non-blocking way using the Iteratee library. Then, you will leverage these skills to stream results and push real-time notifications to your clients. By the end of the article, you will be able to do the following: Produce, consume, and transform streams of data Process a large request body chunk by chunk Serve HTTP chunked responses Push real-time notifications using WebSockets or server-sent events Manage the execution context of your code Play application's execution model The streaming programming model provided by Play has been influenced by the execution model of Play applications, which itself has been influenced by the nature of the work a web application performs. So, let's start from the beginning: what does a web application do? For now, our example application does the following: the HTTP layer invokes some business logic via the service layer, and the service layer does some computations by itself and also calls the database layer. It is worth noting that in our configuration, the database system runs on the same machine as the web application but this is, however, not a requirement. In fact, there are chances that in real-world projects, your database system is decoupled from your HTTP layer and that both run on different machines. It means that while a query is executed on the database, the web layer does nothing but wait for the response. Actually, the HTTP layer is often waiting for some response coming from another system; it could, for example, retrieve some data from an external web service, or the business layer itself could be located on a remote machine. Decoupling the HTTP layer from the business layer or the persistence layer gives a finer control on how to scale the system (more details about that are given further in this article). Anyway, the point is that the HTTP layer may essentially spend time waiting. With that in mind, consider the following diagram showing how concurrent requests could be executed by a web application using a threaded execution model. That is, a model where each request is processed in its own thread.  Threaded execution model Several clients (shown on the left-hand side in the preceding diagram) perform queries that are processed by the application's controller. On the right-hand side of the controller, the figure shows an execution thread corresponding to each action's execution. The filled rectangles represent the time spent performing computations within a thread (for example, for processing data or computing a result), and the lines represent the time waiting for some remote data. Each action's execution is distinguished by a particular color. 
In this fictive example, the action handling the first request may execute a query to a remote database, hence the line (illustrating that the thread waits for the database result) between the two pink rectangles (illustrating that the action performs some computation before querying the database and after getting the database result). The action handling the third request may perform a call to a distant web service and then a second one, after the response of the first one has been received; hence, the two lines between the green rectangles. And the action handling the last request may perform a call to a distant web service that streams a response of an infinite size, hence, the multiple lines between the purple rectangles. The problem with this execution model is that each request requires the creation of a new thread. Threads have an overhead at creation, because they consume memory (essentially because each thread has its own stack), and during execution, when the scheduler switches contexts. However, we can see that these threads spend a lot of time just waiting. If we could use the same thread to process another request while the current action is waiting for something, we could avoid the creation of threads, and thus save resources. This is exactly what the execution model used by Play—the evented execution model—does, as depicted in the following diagram: Evented execution model Here, the computation fragments are executed on two threads only. Note that the same action can have its computation fragments run by different threads (for example, the pink action). Also note that several threads are still in use, that's why the code must be thread-safe. The time spent waiting between computing things is the same as before, and you can see that the time required to completely process a request is about the same as with the threaded model (for instance, the second pink rectangle ends at the same position as in the earlier figure, same for the third green rectangle, and so on). A comparison between the threaded and evented models can be found in the master's thesis of Benjamin Erb, Concurrent Programming for Scalable Web Architectures, 2012. An online version is available at http://berb.github.io/diploma-thesis/. An attentive reader may think that I have cheated; the rectangles in the second figure are often thinner than their equivalent in the first figure. That's because, in the first model, there is an overhead for scheduling threads and, above all, even if you have a lot of threads, your machine still has a limited number of cores effectively executing the code of your threads. More precisely, if you have more threads than your number of cores, you necessarily have threads in an idle state (that is, waiting). This means, if we suppose that the machine executing the application has only two cores, in the first figure, there is even time spent waiting in the rectangles! Scaling up your server The previous section raises the question of how to handle a higher number of concurrent requests, as depicted in the following diagram: A server under an increasing load The previous section explained how to avoid wasting resources to leverage the computing power of your server. But actually, there is no magic; if you want to compute even more things per unit of time, you need more computing power, as depicted in the following diagram: Scaling using more powerful hardware One solution could be to have a more powerful server. 
But you could be smarter than that and avoid buying expensive hardware by studying the shape of the workload and make appropriate decisions at the software-level. Indeed, there are chances that your workload varies a lot over time, with peaks and holes of activity. This information suggests that if you wanted to buy more powerful hardware, its performance characteristics would be drawn by your highest activity peak, even if it occurs very occasionally. Obviously, this solution is not optimal because you would buy expensive hardware even if you actually needed it only one percent of the time (and more powerful hardware often also means more power-consuming hardware). A better way to handle the workload elasticity consists of adding or removing server instances according to the activity level, as depicted in the following diagram: Scaling using several server instances This architecture design allows you to finely (and dynamically) tune your server capacity according to your workload. That's actually the cloud computing model. Nevertheless, this architecture has a major implication on your code; you cannot assume that subsequent requests issued by the same client will be handled by the same server instance. In practice, it means that you must treat each request independently of each other; you cannot for instance, store a counter on a server instance to count the number of requests issued by a client (your server would miss some requests if one is routed to another server instance). In a nutshell, your server has to be stateless. Fortunately, Play is stateless, so as long as you don't explicitly have a mutable state in your code, your application is stateless. Note that the first implementation I gave of the shop was not stateless; indeed the state of the application was stored in the server's memory. Embracing non-blocking APIs In the first section of this article, I claimed the superiority of the evented execution model over the threaded execution model, in the context of web servers. That being said, to be fair, the threaded model has an advantage over the evented model: it is simpler to program with. Indeed, in such a case, the framework is responsible for creating the threads and the JVM is responsible for scheduling the threads, so that you don't even have to think about this at all, yet your code is concurrently executed. On the other hand, with the evented model, concurrency control is explicit and you should care about it. Indeed, the fact that the same execution thread is used to run several concurrent actions has an important implication on your code: it should not block the thread. Indeed, while the code of an action is executed, no other action code can be concurrently executed on the same thread. What does blocking mean? It means holding a thread for too long a duration. It typically happens when you perform a heavy computation or wait for a remote response. However, we saw that these cases, especially waiting for remote responses, are very common in web servers, so how should you handle them? You have to wait in a non-blocking way or implement your heavy computations as incremental computations. In all the cases, you have to break down your code into computation fragments, where the execution is managed by the execution context. In the diagram illustrating the evented execution model, computation fragments are materialized by the rectangles. 
You can see that rectangles of different colors are interleaved; you can find rectangles of another color between two rectangles of the same color. However, by default, the code you write forms a single block of execution instead of several computation fragments. It means that, by default, your code is executed sequentially; the rectangles are not interleaved! This is depicted in the following diagram: Evented execution model running blocking code The previous figure still shows both the execution threads. The second one handles the blue action and then the purple infinite action, so that all the other actions can only be handled by the first execution context. This figure illustrates the fact that while the evented model can potentially be more efficient than the threaded model, it can also have negative consequences on the performances of your application: infinite actions block an execution thread forever and the sequential execution of actions can lead to much longer response times. So, how can you break down your code into blocks that can be managed by an execution context? In Scala, you can do so by wrapping your code in a Future block: Future { // This is a computation fragment} The Future API comes from the standard Scala library. For Java users, Play provides a convenient wrapper named play.libs.F.Promise: Promise.promise(() -> {// This is a computation fragment}); Such a block is a value of type Future[A] or, in Java, Promise<A> (where A is the type of the value computed by the block). We say that these blocks are asynchronous because they break the execution flow; you have no guarantee that the block will be sequentially executed before the following statement. When the block is effectively evaluated depends on the execution context implementation that manages it. The role of an execution context is to schedule the execution of computation fragments. In the figure showing the evented model, the execution context consists of a thread pool containing two threads (represented by the two lines under the rectangles). Actually, each time you create an asynchronous value, you have to supply the execution context that will manage its evaluation. In Scala, this is usually achieved using an implicit parameter of type ExecutionContext. You can, for instance, use an execution context provided by Play that consists, by default, of a thread pool with one thread per processor: import play.api.libs.concurrent.Execution.Implicits.defaultContext In Java, this execution context is automatically used by default, but you can explicitly supply another one: Promise.promise(() -> { ... }, myExecutionContext); Now that you know how to create asynchronous values, you need to know how to manipulate them. For instance, a sequence of several Future blocks is concurrently executed; how do we define an asynchronous computation depending on another one? 
You can eventually schedule a computation after an asynchronous value has been resolved using the foreach method: val futureX = Future { 42 }futureX.foreach(x => println(x)) In Java, you can perform the same operation using the onRedeem method: Promise<Integer> futureX = Promise.promise(() -> 42);futureX.onRedeem((x) -> System.out.println(x)); More interestingly, you can eventually transform an asynchronous value using the map method: val futureIsEven = futureX.map(x => x % 2 == 0) The map method exists in Java too: Promise<Boolean> futureIsEven = futureX.map((x) -> x % 2 == 0); If the function you use to transform an asynchronous value returned an asynchronous value too, you would end up with an inconvenient Future[Future[A]] value (or a Promise<Promise<A>> value, in Java). So, use the flatMap method in that case: val futureIsEven = futureX.flatMap(x => Future { x % 2 == 0 }) The flatMap method is also available in Java: Promise<Boolean> futureIsEven = futureX.flatMap((x) -> {Promise.promise(() -> x % 2 == 0)}); The foreach, map, and flatMap functions (or their Java equivalent) all have in common to set a dependency between two asynchronous values; the computation they take as the parameter is always evaluated after the asynchronous computation they are applied to. Another method that is worth mentioning is zip: val futureXY: Future[(Int, Int)] = futureX.zip(futureY) The zip method is also available in Java: Promise<Tuple<Integer, Integer>> futureXY = futureX.zip(futureY); The zip method returns an asynchronous value eventually resolved to a tuple containing the two resolved asynchronous values. It can be thought of as a way to join two asynchronous values without specifying any execution order between them. If you want to join more than two asynchronous values, you can use the zip method several times (for example, futureX.zip(futureY).zip(futureZ).zip(…)), but an alternative is to use the Future.sequence function: val futureXs: Future[Seq[Int]] =Future.sequence(Seq(futureX, futureY, futureZ, …)) This function transforms a sequence of future values into a future sequence value. In Java, this function is named Promise.sequence. In the preceding descriptions, I always used the word eventually, and it has a reason. Indeed, if we use an asynchronous value to manipulate a result sent by a remote machine (such as a database system or a web service), the communication may eventually fail due to some technical issue (for example, if the network is down). For this reason, asynchronous values have error recovery methods; for example, the recover method: futureX.recover { case NonFatal(e) => y } The recover method is also available in Java: futureX.recover((throwable) -> y); The previous code resolves futureX to the value of y in the case of an error. Libraries performing remote calls (such as an HTTP client or a database client) return such asynchronous values when they are implemented in a non-blocking way. You should always be careful whether the libraries you use are blocking or not and keep in mind that, by default, Play is tuned to be efficient with non-blocking APIs. It is worth noting that JDBC is blocking. It means that the majority of Java-based libraries for database communication are blocking. Obviously, once you get a value of type Future[A] (or Promise<A>, in Java), there is no way to get the A value unless you wait (and block) for the value to be resolved. 
We saw that the map and flatMap methods make it possible to manipulate the future A value, but you still end up with a Future[SomethingElse] value (or a Promise<SomethingElse>, in Java). It means that if your action's code calls an asynchronous API, it will end up with a Future[Result] value rather than a Result value. In that case, you have to use Action.async instead of Action, as illustrated in this typical code example: val asynchronousAction = Action.async { implicit request =>  service.asynchronousComputation().map(result => Ok(result))} In Java, there is nothing special to do; simply make your method return a Promise<Result> object: public static Promise<Result> asynchronousAction() { service.asynchronousComputation().map((result) -> ok(result));} Managing execution contexts Because Play uses explicit concurrency control, controllers are also responsible for using the right execution context to run their action's code. Generally, as long as your actions do not invoke heavy computations or blocking APIs, the default execution context should work fine. However, if your code is blocking, it is recommended to use a distinct execution context to run it. An application with two execution contexts (represented by the black and grey arrows). You can specify in which execution context each action should be executed, as explained in this section Unfortunately, there is no non-blocking standard API for relational database communication (JDBC is blocking). It means that all our actions that invoke code executing database queries should be run in a distinct execution context so that the default execution context is not blocked. This distinct execution context has to be configured according to your needs. In the case of JDBC communication, your execution context should be a thread pool with as many threads as your maximum number of connections. The following diagram illustrates such a configuration: This preceding diagram shows two execution contexts, each with two threads. The execution context at the top of the figure runs database code, while the default execution context (on the bottom) handles the remaining (non-blocking) actions. In practice, it is convenient to use Akka to define your execution contexts as they are easily configurable. Akka is a library used for building concurrent, distributed, and resilient event-driven applications. This article assumes that you have some knowledge of Akka; if that is not the case, do some research on it. Play integrates Akka and manages an actor system that follows your application's life cycle (that is, it is started and shut down with the application). For more information on Akka, visit http://akka.io. Here is how you can create an execution context with a thread pool of 10 threads, in your application.conf file: jdbc-execution-context {thread-pool-executor {   core-pool-size-factor = 10.0   core-pool-size-max = 10}} You can use it as follows in your code: import play.api.libs.concurrent.Akkaimport play.api.Play.currentimplicit val jdbc =  Akka.system.dispatchers.lookup("jdbc-execution-context") The Akka.system expression retrieves the actor system managed by Play. Then, the execution context is retrieved using Akka's API. 
The equivalent Java code is the following: import play.libs.Akka; import akka.dispatch.MessageDispatcher; import play.core.j.HttpExecutionContext; MessageDispatcher jdbc = Akka.system().dispatchers().lookup("jdbc-execution-context"); Note that controllers retrieve the current request's information from a thread-local static variable, so you have to attach it to the execution context's thread before using it from a controller's action: play.core.j.HttpExecutionContext.fromThread(jdbc) Finally, forcing the use of a specific execution context for a given action can be achieved as follows (provided that my.execution.context is an implicit execution context):
import my.execution.context
val myAction = Action.async { Future { … } }
The Java equivalent code is as follows: public static Promise<Result> myAction() { return Promise.promise( () -> { … }, HttpExecutionContext.fromThread(myExecutionContext)); } Does this feel like clumsy code? Buy the book to learn how to reduce the boilerplate!

Summary

This article detailed a lot of things about the internals of the framework. You now know that Play uses an evented execution model to process requests and serve responses and that it implies that your code should not block the execution thread. You know how to use future blocks and promises to define computation fragments that can be concurrently managed by Play's execution context and how to define your own execution context with a different threading policy, for example, if you are constrained to use a blocking API. Resources for Article: Further resources on this subject: Play! Framework 2 – Dealing with Content [article] So, what is Play? [article] Play Framework: Introduction to Writing Modules [article]
Creating a RESTful API

Packt
19 Sep 2014
24 min read
In this article by Jason Krol, the author of Web Development with MongoDB and NodeJS, we will review the following topics: (For more resources related to this topic, see here.)
Introducing RESTful APIs
Installing a few basic tools
Creating a basic API server and sample JSON data
Responding to GET requests
Updating data with POST and PUT
Removing data with DELETE
Consuming external APIs from Node

What is an API?

An Application Programming Interface (API) is a set of tools that a computer system makes available so that unrelated systems or software can interact with each other. Typically, a developer uses an API when writing software that will interact with a closed, external, software system. The external software system provides an API as a standard set of tools that all developers can use. Many popular social networking sites provide developers access to APIs to build tools to support those sites. The most obvious examples are Facebook and Twitter. Both have a robust API that provides developers with the ability to build plugins and work with data directly, without them being granted full access as a general security precaution. As you will see with this article, providing your own API is not only fairly simple, but it also empowers you to provide your users with access to your data. You also have the added peace of mind knowing that you are in complete control over what level of access you can grant, what sets of data you can make read-only, as well as what data can be inserted and updated.

What is a RESTful API?

Representational State Transfer (REST) is a fancy way of saying CRUD over HTTP. What this means is that when you use a REST API, you have a uniform means to create, read, update, and delete data using simple HTTP URLs with a standard set of HTTP verbs. The most basic form of a REST API will accept one of the HTTP verbs at a URL and return some kind of data as a response. Typically, a REST API GET request will always return some kind of data such as JSON, XML, HTML, or plain text. A POST or PUT request to a RESTful API URL will accept data to create or update. The URL for a RESTful API is known as an endpoint, and while working with these endpoints, it is typically said that you are consuming them. The standard HTTP verbs used while interfacing with REST APIs include:
GET: This retrieves data
POST: This submits data for a new record
PUT: This submits data to update an existing record
PATCH: This submits data to update only specific parts of an existing record
DELETE: This deletes a specific record
Typically, RESTful API endpoint URLs are defined in a way that they mimic the data models and have semantic URLs that are somewhat representative of the data models. What this means is that to request a list of models, for example, you would access an API endpoint of /models. Likewise, to retrieve a specific model by its ID, you would include that in the endpoint URL via /models/:Id.
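To make this concrete, here is a rough sketch of how these verb and URL pairs map onto Express route handlers; the handler bodies are placeholders, and we will build a real set of routes along these lines shortly:

var express = require('express');
var router = new express.Router();

// Each route maps an HTTP verb plus a semantic URL to a handler
router.get('/models', function(req, res) { /* return the full collection */ });
router.get('/models/:id', function(req, res) { /* return a single record by id */ });
router.post('/models', function(req, res) { /* create a new record */ });
router.put('/models/:id', function(req, res) { /* update the record with this id */ });
router.delete('/models/:id', function(req, res) { /* delete the record with this id */ });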
Some sample RESTful API endpoint URLs are as follows: GET http://myapi.com/v1/accounts: This returns a list of accounts GET http://myapi.com/v1/accounts/1: This returns a single account by Id: 1 POST http://myapi.com/v1/accounts: This creates a new account (data submitted as a part of the request) PUT http://myapi.com/v1/accounts/1: This updates an existing account by Id: 1 (data submitted as part of the request) GET http://myapi.com/v1/accounts/1/orders: This returns a list of orders for account Id: 1 GET http://myapi.com/v1/accounts/1/orders/21345: This returns the details for a single order by Order Id: 21345 for account Id: 1 It's not a requirement that the URL endpoints match this pattern; it's just common convention. Introducing Postman REST Client Before we get started, there are a few tools that will make life much easier when you're working directly with APIs. The first of these tools is called Postman REST Client, and it's a Google Chrome application that can run right in your browser or as a standalone-packaged application. Using this tool, you can easily make any kind of request to any endpoint you want. The tool provides many useful and powerful features that are very easy to use and, best of all, free! Installation instructions Postman REST Client can be installed in two different ways, but both require Google Chrome to be installed and running on your system. The easiest way to install the application is by visiting the Chrome Web Store at https://chrome.google.com/webstore/category/apps. Perform a search for Postman REST Client and multiple results will be returned. There is the regular Postman REST Client that runs as an application built into your browser, and then separate Postman REST Client (packaged app) that runs as a standalone application on your system in its own dedicated window. Go ahead and install your preference. If you install the application as the standalone packaged app, an icon to launch it will be added to your dock or taskbar. If you installed it as a regular browser app, you can launch it by opening a new tab in Google Chrome and going to Apps and finding the Postman REST Client icon. After you've installed and launched the app, you should be presented with an output similar to the following screenshot: A quick tour of Postman REST Client Using Postman REST Client, we're able to submit REST API calls to any endpoint we want as well as modify the type of request. Then, we can have complete access to the data that's returned from the API as well as any errors that might have occurred. To test an API call, enter the URL to your favorite website in the Enter request URL here field and leave the dropdown next to it as GET. This will mimic a standard GET request that your browser performs anytime you visit a website. Click on the blue Send button. The request is made and the response is displayed at the bottom half of the screen. In the following screenshot, I sent a simple GET request to http://kroltech.com and the HTML is returned as follows: If we change this URL to that of the RSS feed URL for my website, you can see the XML returned: The XML view has a few more features as it exposes the sidebar to the right that gives you a handy outline to glimpse the tree structure of the XML data. Not only that, you can now see a history of the requests we've made so far along the left sidebar. This is great when we're doing more advanced POST or PUT requests and don't want to repeat the data setup for each request while testing an endpoint. 
Here is a sample API endpoint I submitted a GET request to that returns the JSON data in its response: A really nice thing about making API calls to endpoints that return JSON using Postman Client is that it parses and displays the JSON in a very nicely formatted way, and each node in the data is expandable and collapsible. The app is very intuitive so make sure you spend some time playing around and experimenting with different types of calls to different URLs. Using the JSONView Chrome extension There is one other tool I want to let you know about (while extremely minor) that is actually a really big deal. The JSONView Chrome extension is a very small plugin that will instantly convert any JSON you view directly via the browser into a more usable JSON tree (exactly like Postman Client). Here is an example of pointing to a URL that returns JSON from Chrome before JSONView is installed: And here is that same URL after JSONView has been installed: You should install the JSONView Google Chrome extension the same way you installed Postman REST Client—access the Chrome Web Store and perform a search for JSONView. Now that you have the tools to be able to easily work with and test API endpoints, let's take a look at writing your own and handling the different request types. Creating a Basic API server Let's create a super basic Node.js server using Express that we'll use to create our own API. Then, we can send tests to the API using Postman REST Client to see how it all works. In a new project workspace, first install the npm modules that we're going to need in order to get our server up and running: $ npm init $ npm install --save express body-parser underscore Now that the package.json file for this project has been initialized and the modules installed, let's create a basic server file to bootstrap up an Express server. Create a file named server.js and insert the following block of code: var express = require('express'),    bodyParser = require('body-parser'),    _ = require('underscore'), json = require('./movies.json'),    app = express();   app.set('port', process.env.PORT || 3500);   app.use(bodyParser.urlencoded()); app.use(bodyParser.json());   var router = new express.Router(); // TO DO: Setup endpoints ... app.use('/', router);   var server = app.listen(app.get('port'), function() {    console.log('Server up: http://localhost:' + app.get('port')); }); Most of this should look familiar to you. In the server.js file, we are requiring the express, body-parser, and underscore modules. We're also requiring a file named movies.json, which we'll create next. After our modules are required, we set up the standard configuration for an Express server with the minimum amount of configuration needed to support an API server. Notice that we didn't set up Handlebars as a view-rendering engine because we aren't going to be rendering any HTML with this server, just pure JSON responses. 
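One thing worth pointing out about the two body-parser middlewares registered above: they are what populate req.body for the POST and PUT handlers we will add shortly. As a rough illustration, if you temporarily added a hypothetical route like the following to server.js, any JSON or form-encoded payload sent to it would already be parsed by the time the handler runs:

router.post('/echo', function(req, res) {
   // body-parser has already turned the raw request body into an object
   res.json({ received: req.body });
});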
Creating sample JSON data
Let's create the sample movies.json file that will act as our temporary data store (even though the API we build for the purposes of demonstration won't actually persist data beyond the app's life cycle):

[{
    "Id": "1",
    "Title": "Aliens",
    "Director": "James Cameron",
    "Year": "1986",
    "Rating": "8.5"
}, {
    "Id": "2",
    "Title": "Big Trouble in Little China",
    "Director": "John Carpenter",
    "Year": "1986",
    "Rating": "7.3"
}, {
    "Id": "3",
    "Title": "Killer Klowns from Outer Space",
    "Director": "Stephen Chiodo",
    "Year": "1988",
    "Rating": "6.0"
}, {
    "Id": "4",
    "Title": "Heat",
    "Director": "Michael Mann",
    "Year": "1995",
    "Rating": "8.3"
}, {
    "Id": "5",
    "Title": "The Raid: Redemption",
    "Director": "Gareth Evans",
    "Year": "2011",
    "Rating": "7.6"
}]

This is just a really simple JSON list of a few of my favorite movies. Feel free to populate it with whatever you like. Boot up the server to make sure you aren't getting any errors (note that we haven't set up any routes yet, so it won't actually do anything if you try to load it via a browser):

$ node server.js
Server up: http://localhost:3500

Responding to GET requests
Adding support for a simple GET request is fairly straightforward, and you've seen this before already in the app we built. Here is some sample code that responds to a GET request and returns a simple JavaScript object as JSON. Insert the following code in the routes section where we left the // TO DO: Setup endpoints ... placeholder comment:

router.get('/test', function(req, res) {
    var data = {
        name: 'Jason Krol',
        website: 'http://kroltech.com'
    };

    res.json(data);
});

Let's tweak the function a little bit and change it so that it responds to a GET request against the root URL (that is, /) route and returns the JSON data from our movies file. Add this new route after the /test route added previously:

router.get('/', function(req, res) {
    res.json(json);
});

The res (response) object in Express has a few different methods to send data back to the browser. Each of these ultimately falls back on the base send method, which includes header information, statusCodes, and so on. res.json and res.jsonp will automatically format JavaScript objects into JSON and then send them using res.send. res.render will render a template view as a string and then send it using res.send as well. With that code in place, if we launch the server.js file, the server will be listening for a GET request to the / URL route and will respond with the JSON data of our movies collection. Let's first test it out using the Postman REST Client tool:

GET requests are nice because we could have just as easily pulled that same URL via our browser and received the same result:

However, we're going to use Postman for the remainder of our endpoint testing, as it's a little more difficult to send POST and PUT requests using a browser.

Receiving data – POST and PUT requests
When we want to allow the users of our API to insert or update data, we need to accept a request with a different HTTP verb. When inserting new data, POST is the preferred verb for accepting data and knowing it's for an insert. Let's take a look at code that accepts a POST request along with its data, inserts a record into our collection, and returns the updated JSON. (Before the route itself, the short sketch that follows shows roughly what the submitted form data will look like once body-parser has turned it into req.body.)
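This is only an illustration, and the movie values are made up: when Postman submits the fields as x-www-form-urlencoded data, the bodyParser.urlencoded() middleware parses them into a plain JavaScript object, so inside the route handler req.body looks roughly like this:

// A hypothetical parsed request body for a new movie (values invented):
var exampleBody = {
    Id: '6',
    Title: 'Predator',
    Director: 'John McTiernan',
    Year: '1987',
    Rating: '7.8'
};

// Every value arrives as a string; the POST route below simply pushes an
// object of this shape onto the movies collection after checking that the
// required fields are present.
console.log(exampleBody.Title + ' (' + exampleBody.Year + ')');

With that shape in mind, here is the route itself.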
Insert the following block of code after the route you added previously for GET:

router.post('/', function(req, res) {
    // insert the new item into the collection (validate first)
    if(req.body.Id && req.body.Title && req.body.Director && req.body.Year && req.body.Rating) {
        json.push(req.body);
        res.json(json);
    } else {
        res.json(500, { error: 'There was an error!' });
    }
});

You can see that the first thing we do in the POST function is check to make sure the required fields were submitted along with the actual request. Assuming our data checks out and all the required fields are accounted for (in our case, every field), we insert the entire req.body object into the array as is, using the array's push function. If any of the required fields aren't submitted with the request, we return a 500 error message instead. Let's submit a POST request this time to the same endpoint using the Postman REST Client. (Don't forget to make sure your API server is running with node server.js.)

First, we submitted a POST request with no data, so you can clearly see the 500 error response that was returned. Next, we provided the actual data using the x-www-form-urlencoded option in Postman and supplied each of the name/value pairs with some new custom data. You can see from the results that the STATUS was 200, which is a success, and the updated JSON data was returned as a result. Reloading the main GET endpoint in a browser yields our original movies collection with the new one added.

PUT requests work in almost exactly the same way, except that traditionally the Id property of the data is handled a little differently. In our example, we are going to require the Id attribute as a part of the URL and not accept it as a parameter in the data that's submitted (since it's not common for an update function to change the actual Id of the object it's updating). Insert the following code for the PUT route after the existing POST route you added earlier:

router.put('/:id', function(req, res) {
    // update the item in the collection
    if(req.params.id && req.body.Title && req.body.Director && req.body.Year && req.body.Rating) {
        _.each(json, function(elem, index) {
            // find and update:
            if (elem.Id === req.params.id) {
                elem.Title = req.body.Title;
                elem.Director = req.body.Director;
                elem.Year = req.body.Year;
                elem.Rating = req.body.Rating;
            }
        });

        res.json(json);
    } else {
        res.json(500, { error: 'There was an error!' });
    }
});

This code again validates that the required fields are included with the data that was submitted along with the request. Then, it performs an _.each loop (using the underscore module) to look through the collection of movies and find the one whose Id parameter matches the Id included in the URL parameter. Assuming there's a match, the individual fields for that matched object are updated with the new values that were sent with the request. Once the loop is complete, the updated JSON data is sent back as the response. As in the POST route, if any of the required fields are missing, a simple 500 error message is returned. The following screenshot demonstrates a successful PUT request updating an existing record.
The response from Postman, after including the value 1 in the URL as the Id parameter, providing the individual fields to update as x-www-form-urlencoded values, and finally sending the request as PUT, shows that the first item in our movies collection is now Alien (the original film, not its sequel Aliens that we started with).

Removing data – DELETE
The final stop on our whirlwind tour of the different REST API HTTP verbs is DELETE. It should be no surprise that sending a DELETE request should do exactly what it sounds like. Let's add another route that accepts DELETE requests and will delete an item from our movies collection. Here is the code that takes care of DELETE requests; it should be placed after the existing block of code from the previous PUT route:

router.delete('/:id', function(req, res) {
    var indexToDel = -1;
    _.each(json, function(elem, index) {
        if (elem.Id === req.params.id) {
            indexToDel = index;
        }
    });
    if (~indexToDel) {
        // ~(-1) === 0 (falsy), so the splice only runs when a match was found
        json.splice(indexToDel, 1);
    }
    res.json(json);
});

This code will loop through the collection of movies and find a matching item by comparing the values of Id. If a match is found, the array index for the matched item is held until the loop is finished. Using the array.splice function, we can remove an array item at a specific index. Once the data has been updated by removing the requested item, the JSON data is returned. Notice in the following screenshot that the updated JSON that's returned no longer contains the second item we deleted.

Note that ~ in there! That's a little bit of JavaScript black magic! The tilde (~) in JavaScript will bit flip a value. In other words, it takes a value and returns the negative of that value incremented by one, that is, ~n === -(n+1). Typically, the tilde is used with functions that return -1 as a false response. By using ~ on -1, you are converting it to 0. If you were to perform a Boolean check on -1 in JavaScript, it would return true. You will see that ~ is used primarily with the indexOf function and jQuery's $.inArray(); both return -1 as a false response.

All of the endpoints defined in this article are extremely rudimentary, and most of them should never see the light of day in a production environment! Whenever you have an API that accepts anything other than GET requests, you need to be sure to enforce extremely strict validation and authentication rules. After all, you are basically giving your users direct access to your data.

Consuming external APIs from Node.js
There will undoubtedly be a time when you want to consume an API directly from within your Node.js code. Perhaps your own API endpoint needs to first fetch data from some other unrelated third-party API before sending a response. Whatever the reason, the act of sending a request to an external API endpoint and receiving a response can be done fairly easily using a popular and well-known npm module called Request. Request was written by Mikeal Rogers and is currently the third most popular (and most relied upon) npm module, after async and underscore. Request is basically a super simple HTTP client, so everything you've been doing with Postman REST Client so far is basically what Request can do, only the resulting data is available to you in your Node code, along with the response status codes and/or errors, if any.

Consuming an API endpoint using Request
Let's do a neat trick and actually consume our own endpoint as if it was some third-party external API.
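Before wiring Request into our Express server, here is a minimal standalone sketch of the module's callback style, just to show its basic shape. It assumes the module is already installed (we'll install it properly in a moment) and that the movies API server from earlier is running on localhost:3500; the file name is made up:

// standalone.js - a tiny sketch of the Request module's callback style
var request = require('request');

request({
    method: 'GET',
    uri: 'http://localhost:3500/'   // our own movies endpoint from earlier
}, function(error, response, body) {
    if (error) { throw error; }

    // body arrives as a string, so parse it before treating it as JSON
    var movies = JSON.parse(body);
    console.log('Status code:', response.statusCode);
    console.log('First movie title:', movies[0].Title);
});

Running node standalone.js while the API server is up should print the status code and the first movie's title. Now let's wire the same idea into our server.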
First, we need to ensure we have Request installed and can include it in our app:

$ npm install --save request

Next, edit server.js and make sure you include Request as a required module at the start of the file:

var express = require('express'),
    bodyParser = require('body-parser'),
    _ = require('underscore'),
    json = require('./movies.json'),
    app = express(),
    request = require('request');

Now let's add a new endpoint after our existing routes, which will be accessible in our server via a GET request to /external-api. This endpoint, however, will actually consume another endpoint on another server; for the purposes of this example, that other server is actually the same server we're currently running! The Request module accepts an options object with a number of different parameters and settings, but for this particular example, we only care about a few. We're going to pass an object that has a setting for the method (GET, POST, PUT, and so on) and the URL of the endpoint we want to consume. After the request is made and a response is received, we want an inline callback function to execute. Place the following block of code after your existing list of routes in server.js:

router.get('/external-api', function(req, res) {
    request({
            method: 'GET',
            uri: 'http://localhost:' + (process.env.PORT || 3500),
        }, function(error, response, body) {
            if (error) { throw error; }

            var movies = [];
            _.each(JSON.parse(body), function(elem, index) {
                movies.push({
                    Title: elem.Title,
                    Rating: elem.Rating
                });
            });
            res.json(_.sortBy(movies, 'Rating').reverse());
        });
});

The callback function accepts three parameters: error, response, and body. The response object is like any other response that Express handles and has all of the various parameters as such. The third parameter, body, is what we're really interested in. It contains the actual result of the request to the endpoint that we called; in this case, the JSON data from the main GET route we defined earlier that returns our own list of movies. It's important to note that the data returned from the request comes back as a string. We need to use JSON.parse to convert that string into actual usable JSON data. Using the data that came back from the request, we transform it a little bit to suit our needs. In this example, we take the master list of movies and return a new collection that consists of only the title and rating of each movie, sorted with the top scores first. Load this new endpoint by pointing your browser to http://localhost:3500/external-api, and you can see the new transformed JSON output to the screen.

Let's take a look at another example that's a little more real world. Let's say that we want to display a list of similar movies for each one in our collection, but we want to look up that data somewhere such as www.imdb.com. Here is the sample code that will send a GET request to IMDB's JSON API, specifically for the word aliens, and return a list of related movies by title and year.
Go ahead and place this block of code after the previous route for external-api:

router.get('/imdb', function(req, res) {
    request({
            method: 'GET',
            uri: 'http://sg.media-imdb.com/suggests/a/aliens.json',
        }, function(err, response, body) {
            var data = body.substring(body.indexOf('(')+1);
            data = JSON.parse(data.substring(0, data.length-1));
            var related = [];
            _.each(data.d, function(movie, index) {
                related.push({
                    Title: movie.l,
                    Year: movie.y,
                    Poster: movie.i ? movie.i[0] : ''
                });
            });

            res.json(related);
        });
});

If we take a look at this new endpoint in a browser, we can see that the JSON data returned from our /imdb endpoint is actually itself retrieved from some other API endpoint:

Note that the JSON endpoint I'm using for IMDB isn't actually from their API, but rather what they use on their homepage when you type in the main search box. This would not really be the most appropriate way to use their data; it's more of a hack to show this example. In reality, to use their API (like most other APIs), you would need to register and get an API key so that they can properly track how much data you are requesting on a daily or an hourly basis. Most APIs will require you to use a private key with them for this same reason.

Summary
In this article, we took a brief look at how APIs work in general and at the RESTful API approach to semantic URL paths and arguments, and we created a bare bones API. We used Postman REST Client to interact with the API by consuming endpoints and testing the different types of request methods (GET, POST, PUT, and so on). You also learned how to consume an external API endpoint by using the third-party node module Request.

Resources for Article:
Further resources on this subject:
RESTful Services JAX-RS 2.0 [Article]
REST – Where It Begins [Article]
RESTful Web Services – Server-Sent Events (SSE) [Article]