How-To Tutorials - Web Development

Fronting an external API with Ruby on Rails: Part 1

Mike Ball
09 Feb 2015
6 min read
Historically, a conventional Ruby on Rails application leverages server-side business logic, a relational database, and a RESTful architecture to serve dynamically generated HTML. JavaScript-intensive applications and the widespread use of external web APIs, however, somewhat challenge this architecture. In many cases, Rails is tasked with performing as an orchestration layer, collecting data from various backend services and serving re-formatted JSON or XML to clients. In such instances, how is Rails' model-view-controller architecture still relevant?

In this two-part post series, we'll create a simple Rails backend that makes requests to an external XML-based web service and serves JSON. We'll use RSpec for tests and Jbuilder for view rendering.

What are we building?

We'll create Noterizer, a simple Rails application that requests XML from externally hosted endpoints and re-renders the XML data as JSON at a single URL. To assist in this post, I've created NotesXmlService, a basic web application that serves two XML-based endpoints:

- http://NotesXmlService.herokuapp.com/note-one
- http://NotesXmlService.herokuapp.com/note-two

Why is this necessary in a real-world scenario? Fronting external endpoints with an application like Noterizer opens up a few opportunities:

- Noterizer's endpoint could serve JavaScript clients who can't perform HTTP requests across domain names to the original, external API.
- Noterizer's endpoint could reformat the externally hosted data to better serve its own clients' data formatting preferences.
- Noterizer's endpoint is a single interface to the data; multiple requests are abstracted away by its backend.
- Noterizer provides caching opportunities. While it's beyond the scope of this series, Rails can cache external request data, thus offloading traffic to the external API and avoiding any terms of service or rate limit violations imposed by the external service.

Setup

For this series, I'm using Mac OS 10.9.4, Ruby 2.1.2, and Rails 4.1.4. I'm assuming some basic familiarity with Git and the command line.

Clone and set up the repo

I've created a basic Rails 4 Noterizer app. Clone its repo, enter the project directory, and check out its tutorial branch:

```
$ git clone http://github.com/mdb/noterizer && cd noterizer && git checkout tutorial
```

Install its dependencies:

```
$ bundle install
```

Set up the test framework

Let's install RSpec for testing. Add the following to the project's Gemfile:

```ruby
gem 'rspec-rails', '3.0.1'
```

Install rspec-rails:

```
$ bundle install
```

There's now an rspec generator available for the rails command. Let's generate a basic RSpec installation:

```
$ rails generate rspec:install
```

This creates a few new files in a spec directory:

```
├── spec
│   ├── rails_helper.rb
│   └── spec_helper.rb
```

We're going to make a few adjustments to our RSpec installation. First, because Noterizer does not use a relational database, delete the following ActiveRecord reference in spec/rails_helper.rb:

```ruby
# Checks for pending migrations before tests are run.
# If you are not using ActiveRecord, you can remove this line.
ActiveRecord::Migration.maintain_test_schema!
```

Next, configure RSpec to be less verbose in its warning output; such verbose warnings are beyond the scope of this series. Remove the following line from .rspec:

```
--warnings
```

The RSpec installation also provides a spec rake task. Test this by running the following:

```
$ rake spec
```

You should see the following output, as there aren't yet any RSpec tests:

```
No examples found.

Finished in 0.00021 seconds (files took 0.0422 seconds to load)
0 examples, 0 failures
```

Note that a default Rails installation assumes tests live in a test directory, while RSpec uses a spec directory. For clarity's sake, you're free to delete the test directory from Noterizer.

Building a basic route and controller

Currently, Noterizer does not have any URLs; we'll create a single /notes URL route.

Creating the controller

First, generate a controller:

```
$ rails g controller notes
```

Note that this created quite a few files, including JavaScript files, stylesheet files, and a helpers module. These are not relevant to our NotesController, so let's undo our controller generation by removing all untracked files from the project. Note that you'll want to commit any changes you do want to preserve.

```
$ git clean -f
```

Now, open config/application.rb and add the following generator configuration:

```ruby
config.generators do |g|
  g.helper false
  g.assets false
end
```

Re-running the generate command will now create only the desired files:

```
$ rails g controller notes
```

Testing the controller

Let's add a basic NotesController#index test to spec/controllers/notes_spec.rb. The test looks like this:

```ruby
require 'rails_helper'

describe NotesController, :type => :controller do
  describe '#index' do
    before :each do
      get :index
    end

    it 'successfully responds to requests' do
      expect(response).to be_success
    end
  end
end
```

This test currently fails when running rake spec, as we haven't yet created a corresponding route. Add the following route to config/routes.rb:

```ruby
get 'notes' => 'notes#index'
```

The test still fails when running rake spec, because there isn't a proper #index controller action. Create an empty index method in app/controllers/notes_controller.rb:

```ruby
class NotesController < ApplicationController
  def index
  end
end
```

rake spec still yields failing tests, this time because we haven't yet created a corresponding view. Let's create a view:

```
$ touch app/views/notes/index.json.jbuilder
```

To use this view, we'll need to tweak the NotesController a bit. Let's ensure that requests to the /notes route always return JSON via a before_filter run before each controller action:

```ruby
class NotesController < ApplicationController
  before_filter :force_json

  def index
  end

  private

  def force_json
    request.format = :json
  end
end
```

Now, rake spec yields passing tests:

```
$ rake spec
.

Finished in 0.0107 seconds (files took 1.09 seconds to load)
1 example, 0 failures
```

Let's write one more test, asserting that the response returns the correct content type. Add the following to spec/controllers/notes_controller_spec.rb:

```ruby
it 'returns JSON' do
  expect(response.content_type).to eq 'application/json'
end
```

Assuming rake spec confirms that the second test passes, you can also run the Rails server via the rails server command and visit the currently empty Noterizer http://localhost:3000/notes URL in your web browser.

Conclusion

In this first part of the series, we have created the basic route and controller for Noterizer, a basic example of a Rails application that fronts an external API. In the next blog post (Part 2), you will learn how to build out the backend, test the model, build up and test the controller, and also test the app with Jbuilder.

About this Author

Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media, where he helps build web-based TV and video consumption applications.
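The excerpt ends before the external XML requests are implemented. Purely as a hedged preview of the kind of code Part 2 works toward — the class name, XPath expressions, and Jbuilder view below are illustrative assumptions rather than the series' actual code, and the sketch would require adding the nokogiri gem to the Gemfile — the fetch-and-re-render flow might look roughly like this:

```ruby
# app/models/note.rb -- hypothetical plain-Ruby model (Noterizer has no ActiveRecord)
require 'net/http'
require 'nokogiri'

class Note
  attr_reader :title, :body

  def initialize(title, body)
    @title = title
    @body  = body
  end

  # Fetch one of the NotesXmlService endpoints and parse its XML into a Note.
  def self.fetch(url)
    xml = Nokogiri::XML(Net::HTTP.get(URI(url)))
    new(xml.at_xpath('//title').text, xml.at_xpath('//body').text)
  end
end
```

```ruby
# app/views/notes/index.json.jbuilder -- renders @notes (set in NotesController#index) as JSON
json.array! @notes do |note|
  json.title note.title
  json.body  note.body
end
```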

NSB and Security

Packt
06 Feb 2015
14 min read
This article by Rich Helton, the author of Learning NServiceBus Sagas, delves into the details of NSB and its security. In this article, we will cover the following: Introducing web security Cloud vendors Using .NET 4 Adding NServiceBus Benefits of NSB (For more resources related to this topic, see here.) Introducing web security According to the Top 10 list of 2013 by the Open Web Application Security Project (OWASP), found at https://www.owasp.org/index.php/Top10#OWASP_Top_10_for_2013, injection flaws still remain at the top among the ways to penetrate a web site. This is shown in the following screenshot: An injection flaw is a means of being able to access information or the site by injecting data into the input fields. This is normally used to bypass proper authentication and authorization. Normally, this is the data that the website has not seen in the testing efforts or considered during development. For references, I will consider some slides found at http://www.slideshare.net/rhelton_1/cweb-sec-oct27-2010-final. An instance of an injection flaw is to put SQL commands in form fields and even URL fields to try to get SQL errors and returns with further information. If the error is not generic, and a SQL exception occurs, it will sometimes return with table names. It may deny authorization for sa under the password table in SQL Server 2008. Knowing this gives a person knowledge of the SQL Server version, the sa user is being used, and the existence of a password table. There are many tools and websites for people on the Internet to practice their web security testing skills, rather than them literally being in IT security as a professional or amateur. Many of these websites are well-known and posted at places such as https://www.owasp.org/index.php/Phoenix/Tools. General disclaimer I do not endorse or encourage others to practice on websites without written permission from the website owner. Some of the live sites are as follows, and most are used to test web scanners: http://zero.webappsecurity.com/: This is developed by SPI Dynamics (now HP Security) for Web Inspect. It is an ASP site. http://crackme.cenzic.com/Kelev/view/home.php: This PHP site is from Cenzic. http://demo.testfire.net/: This is developed by WatchFire (now IBM Rational AppScan). It is an ASP site. http://testaspnet.vulnweb.com/: This is developed by Acunetix. It is a PHP site. http://webscantest.com/: This is developed by NT OBJECTives NTOSpider. It is a PHP site. There are many more sites and tools, and one would have to research them themselves. There are tools that will only look for SQL Injection. Hacking professionals who are very gifted and spend their days looking for only SQL injection would find these useful. We will start with SQL injection, as it is one of the most popular ways to enter a website. But before we start an analysis report on a website hack, we will document the website. Our target site will be http://zero.webappsecurity.com/. 
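Before moving on to footprinting the target, here is a minimal, self-contained illustration of the injection flaw described above and of the parameterized-query style that closes it. The query, table, and column names are invented for illustration and are not taken from the article:

```csharp
using System.Data.SqlClient;

public static class LoginCheck
{
    // Vulnerable: user input is concatenated into the SQL text, so input such as
    //   ' OR '1'='1' --
    // changes the meaning of the query and can bypass authentication.
    public static string BuildUnsafeQuery(string user, string password)
    {
        return "SELECT * FROM Users WHERE Name = '" + user +
               "' AND Password = '" + password + "'";
    }

    // Safer: a parameterized query treats the input strictly as data, never as SQL.
    public static bool IsValidUser(string connectionString, string user, string password)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM Users WHERE Name = @name AND Password = @password",
            connection))
        {
            command.Parameters.AddWithValue("@name", user);
            command.Parameters.AddWithValue("@password", password);
            connection.Open();
            return (int)command.ExecuteScalar() > 0;
        }
    }
}
```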
We will start with the EC-Council's Certified Ethical Hacker program, where they divide footprinting and scanning into seven basic steps: Information gathering Determining the network range Identifying active machines Finding open ports and access points OS fingerprinting Fingerprinting services Mapping the network We could also follow the OWASP Web Testing checklist, which includes: Information gathering Configuration testing Identity management testing Authentication testing Session management testing Data validation testing Error handling Cryptography Business logic testing Client-side testing The idea is to gather as much information on the website as possible before launching an attack, as there is no information gathered so far. To gather information on the website, you don't actually have to scan the website yourself at the start. There are many scanners that scan the website before you start. There are Google Bots gathering search information about the site, the Netcraft search engine gathering statistics about the site, as well as many domain search engines with contact information. If another person has hacked the site, there are sites and blogs where hackers talk about hacking a specific site, including what tools they used. They may even post security scans on the Internet, which could be found by googling. There is even a site (https://archive.org/) that is called the WayBack Machine as it keeps previous versions of websites that it scans for in archive. These are just some basic pieces, and any person who has studied for their Certified Ethical Hacker's exam should have all of this on their fingertips. We will discuss some of the benefits that Microsoft and Particular.net have taken into consideration to assist those who develop solutions in C#. We can search at http://web.archive.org/web/ or http://zero.webappsecurity.com/ for changes from the WayBack Machine, and we will see something like this: From this search engine, we look at what the screens looked like 2003, and walk through various changes to the present 2014. Actually, there were errors on archive copying the site in 2003, so this machine directed us to the first best copy on May 11, 2006, as shown in the following screenshot: Looking with Netcraft, we can see that it was first started in 2004, last rebooted in 2014, and is running Ubuntu, as shown in this screenshot: Next, we can try to see what Google tells us. There are many Google Hacking Databases that keep track of keywords in the Google Search Engine API. These keywords are expressions such as file: passwd to search for password files in Ubuntu, and many more. This is not a hacking book, and this site is well-known, so we will just search for webappsecurity.com file:passwd. This gives me more information than needed. On the first item, I get a sample web scan report of the available vulnerabilities in the site from 2008, as shown in the following screenshot: We can also see which links Google has already found by running http://zero.webappsecurity.com/, as shown in this screenshot: In these few steps, I have enough information to bring a targeted website attack to check whether these vulnerabilities are still active or not. I know the operating system of the website and have details of the history of the website. This is before I have even considered running tools to approach the website. To scan the website, for which permission is always needed ahead of time, there are multiple web scanners available. 
For a list of web scanners, one website is http://sectools.org/tag/web-scanners/. One of the favorites is built by the famed Googler Michal Zalewski, and is called skipfish. Skipfish is an open source tool written in the C language, and it can be used in Windows by compiling it in Cygwin libraries, which are Linux virtual libraries and tools for Windows. Skipfish has its own man pages at http://dev.man-online.org/man1/skipfish/, and it can be downloaded from https://code.google.com/p/skipfish/. Skipfish performs web crawling, fuzzing, and tests for many issues such as XSS and SQL Injection. In Skipfish's case, its fussing uses dictionaries to add more paths to websites, extensions, and keywords that are normally found as attack vectors through the experience of hackers, to apply to the website being scanned. For instance, it may not be apparent from the pages being scanned that there is an admin/index.html page available, but the dictionary will try to check whether the page is available. Skipfish results will appear as follows: The issue with Skipfish is that it is noisy, because of its fuzzer. Skipfish will try many scans and checks for links that might not exist, which will take some time and can be a little noisy out of the box. There are many configurations, and there is throttling of the scanning to try to hide the noise. An associated scan in HP's WebInspect scanner will appear like this: These are just automated means to inspect a website. These steps are common, and much of this material is known in web security. After an initial inspection of a website, a person may start making decisions on how to check their information further. Manually checking websites An experienced web security person may now start proceeding through more manual checks and less automated checking of websites after taking an initial look at the website. For instance, type Admin as the user ID and password, or type Guest instead of Admin, and the list progresses based on experience. Then try the Admin and password combination, then the Admin and password123 combination, and so on. A person inspecting a website might have a lot of time to try to perform penetration testing, and might try hundreds of scenarios. There are many tools and scripts to automate the process. As security analysts, we find many sites that give admin access just by using Admin and Admin as the user ID and password, respectively. To enhance personal skills, there are many tutorials to walk through. One thing to do is to pull down a live website that you can set up for practice, such as WebGoat, and go through the steps outlined in the tutorials from sites such as http://webappsecmovies.sourceforge.net/webgoat/. These sites will show a person how to perform SQL Injection testing through the WebGoat site. As part of these tutorials, there are plugins of Firefox to test security scripts, HTML, debug pieces and tamper with the website through the browser, as shown in this screenshot: Using .NET 4 can help Every page that is deployed to the Internet (and in many cases, the Intranet as well), constantly gets probed and prodded by scans, viruses, and network noise. There are so many pokes, probes, and prods on networks these days that most of them are seen as noise. By default, .NET 4 offers some validation and out-of-the-box support for Web requests. Using .NET 4, you may discover that some input types such as double quotes, single quotes, and even < are blocked in some form fields. 
You will get an error like what is shown in the following screenshot when trying to pass some of the values. This is very basic validation, and it resides in the .NET version 4 framework's pooling pieces of Internet Information Services (IIS) for Windows.

To further offer security following Microsoft's best enterprise practices, we may also consider using Model-View-Controller (MVC) and Entity Framework (EF). For more on this, we can review the Microsoft Application Architecture Guide at http://msdn.microsoft.com/en-us/library/ff650706.aspx. The MVC design pattern is the most commonly used pattern in software and is designed as follows:

This is a very common design pattern, so why is this important in security? What is helpful is that we can validate data requests and responses through the controllers, as well as provide data annotations for each data element for more validation. A common attack that has appeared through viruses over the years is the buffer overflow, which sends an excessive amount of data to the data elements. Validation can check the length of incoming data to counteract a buffer overflow.

EF is a Microsoft framework that provides an object-relational mapper. Not only can it easily generate objects to and from SQL Server through Visual Studio, but it can also use objects instead of SQL scripting. Since it does not use raw SQL, SQL injection — an attack involving injecting SQL commands through input fields — can be counteracted.

Even though some of these techniques will help mitigate many attack vectors, the gateway to backend processes is usually through the website, and there are many more injection attack vectors. If stored procedures are used for SQL Server, a scan can be tried to access any stored procedures that the website may be calling, as well as any default stored procedures that may be lingering from default installations of SQL Server. So how do we add further validation and decouple the backend processes in an organization from the website?

NServiceBus to the rescue

NServiceBus is the most popular C# platform framework used to implement an Enterprise Service Bus (ESB) for service-oriented architecture (SOA). Basically, NSB hosts Windows services through its NServiceBus.Host.exe program, and interfaces these services through different message queuing components.

A C# MVC-EF program can call web services directly, and when the web service returns an error, the website receives that error directly in the MVC program. This creates a coupling of the web service and the website, where changes in the website can affect the web services and actions in the web services can affect the website. Because of this coupling, websites may show a "Please do not refresh the page until the process is finished" warning. Normally, it is wise to step away from the phone, tablet, or computer until the website is loaded. Even if you don't touch the website, another process running on the machine may: a virus scanner, an update, or any number of other processes running on the device could cause a glitch while anything on the device is refreshing. With all the scans that could be hitting a website from elsewhere on the Internet, it seems quite odd that a page would say "Please don't touch me, I am busy."

In order to decouple the website from the web services, a service needs to be deployed between the website and web service. It helps if that service has a lot of out-of-the-box security features as well, to help protect the interaction between the website and web service. For this reason, a product such as NServiceBus is most helpful, where others have already laid the groundwork for advanced security features in services tested across the industry by their use. Being the most common C# ESB platform has its advantages, as developers and architects ensure the integrity of the framework well before a new design starts using it.

Benefits of NSB

NSB provides many components needed for automation that are only found in ESBs. ESBs provide the following:

- Separation of duties: There is separation of duties from the frontend to the backend, allowing the frontend to fire a message to a service and continue with its processing, not worrying about the results until it needs an update. Also, separation of workflow responsibility exists through separating out NSB services. One service could be used to send payments to a bank, and another service could be used to provide feedback of the current payment status to the MVC-EF database so that a user may see their payment status.
- Message durability: Messages are saved in queues between services so that if services are stopped, they can start from the messages in the queues when they restart, and the messages will persist until told otherwise.
- Workflow retries: Messages, or endpoints, can be told to retry a number of times until they completely fail and send an error. The error is automatically returned to an error queue. For instance, a web service message can be sent to a bank, and it can be set to retry the web service every 5 minutes for 20 minutes before giving up completely. This is useful during any network or server issues.
- Monitoring: NSB's ServicePulse can keep a heartbeat on its services. Other monitoring can easily be done on the NSB queues to report on the number of messages.
- Encryption: Messages between services and endpoints can be easily encrypted.
- High availability: Multiple services or subscribers could be processing the same or similar messages from various services living on different servers. When one server or service goes down, others can be made available to take over the work.

Summary

If any website is on the Internet, it is being scanned by a multitude of means, from other websites and elsewhere. It is wise to decouple external websites from backend processes through a means such as NServiceBus. Websites that are not decoupled from the backend can be affected by the external processes they depend on, such as a web service to validate a credit card; these are the websites that say "Do not refresh this page." Other conditions beyond your reach might also affect that interaction while the page refreshes. The best solution is to decouple the website from these processes through NServiceBus.
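To make the decoupling concrete, here is a rough sketch of the pattern described above, assuming the NServiceBus 5-style IBus API that was current when this article was written; the message, controller, and handler names are invented for illustration and are not taken from the book:

```csharp
using System;
using System.Web.Mvc;
using NServiceBus;

// The command the website fires at the bus.
public class ValidateCreditCard : ICommand
{
    public Guid OrderId { get; set; }
    public string CardToken { get; set; }
}

// MVC controller: sends the message and returns immediately, so the page never
// blocks on the backend web service and never needs a "do not refresh" warning.
public class PaymentController : Controller
{
    private readonly IBus _bus;

    public PaymentController(IBus bus)
    {
        _bus = bus;
    }

    [HttpPost]
    public ActionResult Submit(Guid orderId, string cardToken)
    {
        _bus.Send(new ValidateCreditCard { OrderId = orderId, CardToken = cardToken });
        return RedirectToAction("Pending", new { id = orderId });
    }
}

// NSB-hosted endpoint: calls the external service, with retries and the error
// queue handled by the bus instead of by the website.
public class ValidateCreditCardHandler : IHandleMessages<ValidateCreditCard>
{
    public void Handle(ValidateCreditCard message)
    {
        // Call the bank/payment web service here and record the payment status.
    }
}
```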

Transformations Using Map/Reduce

Packt
05 Feb 2015
19 min read
In this article written by Adam Boduch, author of the book Lo-Dash Essentials, we'll be looking at all the interesting things we can do with Lo-Dash and the map/reduce programming model. We'll start off with the basics, getting our feet wet with some basic mappings and basic reductions. As we progress through the article, we'll start introducing more advanced techniques to think in terms of map/reduce with Lo-Dash. The goal, once you've reached the end of this article, is to have a solid understanding of the Lo-Dash functions available that aid in mapping and reducing collections. Additionally, you'll start to notice how disparate Lo-Dash functions work together in the map/reduce domain. Ready? (For more resources related to this topic, see here.) Plucking values Consider that as your informal introduction to mapping because that's essentially what it's doing. It's taking an input collection and mapping it to a new collection, plucking only the properties we're interested in. This is shown in the following example: var collection = [ { name: 'Virginia', age: 45 }, { name: 'Debra', age: 34 }, { name: 'Jerry', age: 55 }, { name: 'Earl', age: 29 } ]; _.pluck(collection, 'age'); // → [ 45, 34, 55, 29 ] This is about as simple a mapping operation as you'll find. In fact, you can do the same thing with map(): var collection = [ { name: 'Michele', age: 58 }, { name: 'Lynda', age: 23 }, { name: 'William', age: 35 }, { name: 'Thomas', age: 41 } ]; _.map(collection, 'name'); // → // [ // "Michele", // "Lynda", // "William", // "Thomas" // ] As you'd expect, the output here is exactly the same as it would be with pluck(). In fact, pluck() is actually using the map() function under the hood. The callback passed to map() is constructed using property(), which just returns the specified property value. The map() function falls back to this plucking behavior when a string instead of a function is passed to it. With that brief introduction to the nature of mapping, let's dig a little deeper and see what's possible in mapping collections. Mapping collections In this section, we'll explore mapping collections. Mapping one collection to another ranges from composing really simple—as we saw in the preceding section—to sophisticated callbacks. These callbacks that map each item in the collection can include or exclude properties and can calculate new values. Besides, we can apply functions to these items. We'll also address the issue of filtering collections and how this can be done in conjunction with mapping. Including and excluding properties When applied to an object, the pick() function generates a new object containing only the specified properties. The opposite of this function, omit(), generates an object with every property except those specified. Since these functions work fine for individual object instances, why not use them in a collection? You can use both of these functions to shed properties from collections by mapping them to new ones, as shown in the following code: var collection = [ { first: 'Ryan', last: 'Coleman', age: 23 }, { first: 'Ann', last: 'Sutton', age: 31 }, { first: 'Van', last: 'Holloway', age: 44 }, { first: 'Francis', last: 'Higgins', age: 38 } ]; _.map(collection, function(item) { return _.pick(item, [ 'first', 'last' ]); }); // → // [ // { first: "Ryan", last: "Coleman" }, // { first: "Ann", last: "Sutton" }, // { first: "Van", last: "Holloway" }, // { first: "Francis", last: "Higgins" } // ] Here, we're creating a new collection using the map() function. 
The callback function supplied to map() is applied to each item in the collection. The item argument is the original item from the collection. The callback is expected to return the mapped version of that item and this version could be anything, including the original item itself. Be careful when manipulating the original item in map() callbacks. If the item is an object and it's referenced elsewhere in your application, it could have unintended consequences. We're returning a new object as the mapped item in the preceding code. This is done using the pick() function. We only care about the first and the last properties. Our newly mapped collection looks identical to the original, except that no item has an age property. This newly mapped collection is seen in the following code: var collection = [ { first: 'Clinton', last: 'Park', age: 19 }, { first: 'Dana', last: 'Hines', age: 36 }, { first: 'Pete', last: 'Ross', age: 31 }, { first: 'Annie', last: 'Cross', age: 48 } ]; _.map(collection, function(item) { return _.omit(item, 'first'); }); // → // [ // { last: "Park", age: 19 }, // { last: "Hines", age: 36 }, // { last: "Ross", age: 31 }, // { last: "Cross", age: 48 } // ] The preceding code follows the same approach as the pick() code. The only difference is that we're excluding the first property from the newly created collection. You'll also notice that we're passing a string containing a single property name instead of an array of property names. In addition to passing strings or arrays as the argument to pick() or omit(), we can pass in a function callback. This is suitable when it's not very clear which objects in a collection should have which properties. Using a callback like this inside a map() callback lets us perform detailed comparisons and transformations on collections while using very little code: function invalidAge(value, key) { return key === 'age' && value < 40; } var collection = [ { first: 'Kim', last: 'Lawson', age: 40 }, { first: 'Marcia', last: 'Butler', age: 31 }, { first: 'Shawna', last: 'Hamilton', age: 39 }, { first: 'Leon', last: 'Johnston', age: 67 } ]; _.map(collection, function(item) { return _.omit(item, invalidAge); }); // → // [ // { first: "Kim", last: "Lawson", age: 40 }, // { first: "Marcia", last: "Butler" }, // { first: "Shawna", last: "Hamilton" }, // { first: "Leon", last: "Johnston", age: 67 } // ] The new collection generated by this code excludes the age property for items where the age value is less than 40. The callback supplied to omit() is applied to each key-value pair in the object. This code is a good illustration of the conciseness achievable with Lo-Dash. There's a lot of iterative code running here and there is no for or while statement in sight. Performing calculations It's time now to turn our attention to performing calculations in our map() callbacks. This entails looking at the item and, based on its current state, computing a new value that will be ultimately mapped to the new collection. This could mean extending the original item's properties or replacing one with a newly computed value. Whichever the case, it's a lot easier to map these computations than to write your own logic that applies these functions to every item in your collection. 
This is explained using the following example: var collection = [ { name: 'Valerie', jqueryYears: 4, cssYears: 3 }, { name: 'Alonzo', jqueryYears: 1, cssYears: 5 }, { name: 'Claire', jqueryYears: 3, cssYears: 1 }, { name: 'Duane', jqueryYears: 2, cssYears: 0 } ]; _.map(collection, function(item) { return _.extend({ experience: item.jqueryYears + item.cssYears, specialty: item.jqueryYears >= item.cssYears ? 'jQuery' : 'CSS' }, item); }); // → // [ // { // experience": 7, // specialty": "jQuery", // name": "Valerie", // jqueryYears": 4, // cssYears: 3 // }, // { // experience: 6, // specialty: "CSS", // name: "Alonzo", // jqueryYears: 1, // cssYears: 5 // }, // { // experience: 4, // specialty: "jQuery", // name: "Claire", // jqueryYears: 3, // cssYears: 1 // }, // { // experience: 2, // specialty: "jQuery", // name: "Duane", // jqueryYears: 2, // cssYears: 0 // } // ] Here, we're mapping each item in the original collection to an extended version of it. Particularly, we're computing two new values for each item—experience and speciality. The experience property is simply the sum of the jqueryYears and cssYears properties. The speciality property is computed based on the larger value of the jqueryYears and cssYears properties. Earlier, I mentioned the need to be careful when modifying items in map() callbacks. In general, it's a bad idea. It's helpful to try and remember that map() is used to generate new collections, not to modify existing collections. Here's an illustration of the horrific consequences of not being careful: var app = {}, collection = [ { name: 'Cameron', supervisor: false }, { name: 'Lindsey', supervisor: true }, { name: 'Kenneth', supervisor: false }, { name: 'Caroline', supervisor: true } ]; app.supervisor = _.find(collection, { supervisor: true }); _.map(collection, function(item) { return _.extend(item, { supervisor: false }); }); console.log(app.supervisor); // → { name: "Lindsey", supervisor: false } The destructive nature of this callback is not obvious at all and next to impossible for programmers to track down and diagnose. Its nature is essentially resetting the supervisor attribute for each item. If these items are used anywhere else in the application, the supervisor property value will be clobbered whenever this map job is executed. If you need to reset values like this, ensure that the change is mapped to the new value and not made to the original. Mapping also works with primitive values as the item. Often, we'll have an array of primitive values that we'd like transformed into an alternative representation. For example, let's say you have an array of sizes, expressed in bytes. You can map those arrays to a new collection with those sizes expressed as human-readable values, using the following code: function bytes(b) { var units = [ 'B', 'K', 'M', 'G', 'T', 'P' ], target = 0; while (b >= 1024) { b = b / 1024; target++; } return (b % 1 === 0 ? b : b.toFixed(1)) + units[target] + (target === 0 ? '' : 'B'); } var collection = [ 1024, 1048576, 345198, 120120120 ]; _.map(collection, bytes); // → [ "1KB", "1MB", "337.1KB", "114.6MB" ] The bytes() function takes a numerical argument, which is the number of bytes to be formatted. This is the starting unit. We just keep incrementing the target unit until we have something that is less than 1024. For example, the last item in our collection maps to '114.6MB'. The bytes() function can be passed directly to map() since it's expecting values in our collection as they are. 
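As a small bridge into the next section — which is about handing ready-made Lo-Dash functions to map() — recall that the string shorthand from the plucking section is itself just a property() callback under the hood. A quick sketch (the collection here is invented for illustration):

```javascript
var collection = [
  { name: 'Virginia', age: 45 },
  { name: 'Debra', age: 34 }
];

// All three calls produce the same result; the string form simply builds
// a property() callback for us.
_.pluck(collection, 'age');            // → [ 45, 34 ]
_.map(collection, 'age');              // → [ 45, 34 ]
_.map(collection, _.property('age'));  // → [ 45, 34 ]
```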
Calling functions We don't always have to write our own callback functions for map(). Wherever it makes sense, we're free to leverage Lo-Dash functions to map our collection items. For example, let's say we have a collection and we'd like to know the size of each item. There's a size() Lo-Dash function we can use as our map() callback, as follows: var collection = [ [ 1, 2 ], [ 1, 2, 3 ], { first: 1, second: 2 }, { first: 1, second: 2, third: 3 } ]; _.map(collection, _.size); // → [ 2, 3, 2, 3 ] This code has the added benefit that the size() function returns consistent results, no matter what kind of argument is passed to it. In fact, any function that takes a single argument and returns a new value based on that argument is a valid candidate for a map() callback. For instance, we could also map the minimum and maximum value of each item: var source = _.range(1000), collection = [ _.sample(source, 50), _.sample(source, 100), _.sample(source, 150) ]; _.map(collection, _.min); // → [ 20, 21, 1 ] _.map(collection, _.max); // → [ 931, 985, 991 ] What if we want to map each item of our collection to a sorted version? Since we do not sort the collection itself, we don't care about the item positions within the collection, but the items themselves, if they're arrays, for instance. Let's see what happens with the following code: var collection = [ [ 'Evan', 'Veronica', 'Dana' ], [ 'Lila', 'Ronald', 'Dwayne' ], [ 'Ivan', 'Alfred', 'Doug' ], [ 'Penny', 'Lynne', 'Andy' ] ]; _.map(collection, _.compose(_.first, function(item) { return _.sortBy(item); })); // → [ "Dana", "Dwayne", "Alfred", "Andy" ] This code uses the compose() function to construct a map() callback. The first function returns the sorted version of the item by passing it to sortBy(). The first() item of this sorted list is then returned as the mapped item. The end result is a new collection containing the alphabetically first item from each array in our collection, with three lines of code. This is not bad. Filtering and mapping Filtering and mapping are two closely related collection operations. Filtering extracts only those collection items that are of particular interest in a given context. Mapping transforms collections to produce new collections. But what if you only want to map a certain subset of your collection? Then it would make sense to chain together the filtering and mapping operations, right? Here's an example of what that might look like: var collection = [ { name: 'Karl', enabled: true }, { name: 'Sophie', enabled: true }, { name: 'Jerald', enabled: false }, { name: 'Angie', enabled: false } ]; _.compose( _.partialRight(_.map, 'name'), _.partialRight(_.filter, 'enabled') )(collection); // → [ "Karl", "Sophie" ] This map is executed using compose() to build a function that is called right away, with our collection as the argument. The function is composed of two partials. We're using partialRight() on both arguments because we want the collection supplied as the leftmost argument in both cases. The first partial function is filter(). We're partially applying the enabled argument. So this function will filter our collection before it's passed to map(). This brings us to our next partial in the function composition. The result of filtering the collection is passed to map(), which has the name argument partially applied. The end result is a collection with enabled name strings. The important thing to note about the preceding code is that the filtering operation takes place before the map() function is run. 
We could have stored the filtered collection in an intermediate variable instead of streamlining with compose(). Regardless of flavor, it's important that the items in your mapped collection correspond to the items in the source collection. It's conceivable to filter out the items in the map() callback by not returning anything, but this is ill-advised as it doesn't map well, both figuratively and literally. Mapping objects The previous section focused on collections and how to map them. But wait, objects are collections too, right? That is indeed correct, but it's worth differentiating between the more traditional collections, arrays, and plain objects. The main reason is that there are implications with ordering and keys when performing map/reduce. At the end of the day, arrays and objects serve different use cases with map/reduce, and this article tries to acknowledge these differences. Now we'll start looking at some techniques Lo-Dash programmers employ when working with objects and mapping them to collections. There are a number of factors to consider such as the keys within an object and calling methods on objects. We'll take a look at the relationship between key-value pairs and how they can be used in a mapping context. Working with keys We can use the keys of a given object in interesting ways to map the object to a new collection. For example, we can use the keys() function to extract the keys of an object and map them to values other than the property value, as shown in the following example: var object = { first: 'Ronald', last: 'Walters', employer: 'Packt' }; _.map(_.sortBy(_.keys(object)), function(item) { return object[item]; }); // → [ "Packt", "Ronald", "Walters" ] The preceding code builds an array of property values from object. It does so using map(), which is actually mapping the keys() array of object. These keys are sorted using sortBy(). So Packt is the first element of the resulting array because employer is alphabetically first in the object keys. Sometimes, it's desirable to perform lookups in other objects and map those values to a target object. For example, not all APIs return everything you need for a given page, packaged in a neat little object. You have to do joins and build the data you need. This is shown in the following code: var users = {}, preferences = {}; _.each(_.range(100), function() { var id = _.uniqueId('user-'); users[id] = { type: 'user' }; preferences[id] = { emailme: !!(_.random()) }; }); _.map(users, function(value, key) { return _.extend({ id: key }, preferences[key]); }); // → // [ // { id: "user-1", emailme: true }, // { id: "user-2", emailme: false }, // ... // ] This example builds two objects, users and preferences. In the case of each object, the keys are user identifiers that we're generating with uniqueId(). The user objects just have some dummy attribute in them, while the preferences objects have an emailme attribute, set to a random Boolean value. Now let's say we need quick access to this preference for all users in the users object. As you can see, it's straightforward to implement using map() on the users object. The callback function returns a new object with the user ID. We extend this object with the preference for that particular user by looking at them by key. Calling methods Objects aren't limited to storing primitive strings and numbers. Properties can store functions as their values, or methods, as they're commonly referred. 
However, depending on the context where you're using your object, methods aren't always callable, especially if you have little or no control over the context where your objects are used. One technique that's helpful in situations such as these is mapping the result of calling these methods and using this result in the context in question. Let's see how this can be done with the following code: var object = { first: 'Roxanne', last: 'Elliot', name: function() { return this.first + ' ' + this.last; }, age: 38, retirement: 65, working: function() { return this.retirement - this.age; } }; _.map(object, function(value, key) { var item = {}; item[key] = _.isFunction(value) ? object[key]() : value return item; }); // → // [ // { first: "Roxanne" }, // { last: "Elliot" }, // { name: "Roxanne Elliot" }, // { age: 38 }, // { retirement: 65 }, // { working: 27 } // ] _.map(object, function(value, key) { var item = {}; item[key] = _.result(object, key); return item; }); // → // [ // { first: "Roxanne" }, // { last: "Elliot" }, // { name: "Roxanne Elliot" }, // { age: 38 }, // { retirement: 65 }, // { working: 27 } // ] Here, we have an object with both primitive property values and methods that use these properties. Now we'd like to map the results of calling those methods and we will experiment with two different approaches. The first approach uses the isFunction() function to determine whether the property value is callable or not. If it is, we call it and return that value. The second approach is a little easier to implement and achieves the same outcome. The result() function is applied to the object using the current key. This tests whether we're working with a function or not, so our code doesn't have to. In the first approach to mapping method invocations, you might have noticed that we're calling the method using object[key]() instead of value(). The former retains the context as the object variable, but the latter loses the context, since it is invoked as a plain function without any object. So when you're writing mapping callbacks that call methods and not getting the expected results, make sure the method's context is intact. Perhaps, you have an object but you're not sure which properties are methods. You can use functions() to figure this out and then map the results of calling each method to an array, as shown in the following code: var object = { firstName: 'Fredrick', lastName: 'Townsend', first: function() { return this.firstName; }, last: function() { return this.lastName; } }; var methods = _.map(_.functions(object), function(item) { return [ _.bindKey(object, item) ]; }); _.invoke(methods, 0); // → [ "Fredrick", "Townsend" ] The object variable has two methods, first() and last(). Assuming we didn't know about these methods, we can find them using functions(). Here, we're building a methods array using map(). The input is an array containing the names of all the methods of the given object. The value we're returning is interesting. It's a single-value array; you'll see why in a moment. The value of this array is a function built by passing the object and the name of the method to bindKey(). This function, when invoked, will always use object as its context. Lastly, we use invoke() to invoke each method in our methods array, building a new result array. Recall that our map() callback returned an array. This was a simple hack to make invoke() work, since it's a convenient way to call methods. 
It generally expects a key as the second argument, but a numerical index works just as well, since they're both looked up as same. Mapping key-value pairs Just because you're working with an object doesn't mean it's ideal, or even necessary. That's what map() is for—mapping what you're given to what you need. For instance, the property values are sometimes all that matter for what you're doing, and you can dispense with the keys entirely. For that, we have the values() function and we feed the values to map(): var object = { first: 'Lindsay', last: 'Castillo', age: 51 }; _.map(_.filter(_.values(object), _.isString), function(item) { return '<strong>' + item + '</strong>'; }); // → [ "<strong>Lindsay</strong>", "<strong>Castillo</strong>" ] All we want from the object variable here is a list of property values, which are strings, so that we can format them. In other words, the fact that the keys are first, last, and age is irrelevant. So first, we call values() to build an array of values. Next, we pass that array to filter(), removing anything that's not a string. We then pass the output of this to map, where we're able to map the string using <strong/> tags. The opposite might also be true—the value is completely meaningless without its key. If that's the case, it may be fitting to map key-value pairs to a new collection, as shown in the following example: function capitalize(s) { return s.charAt(0).toUpperCase() + s.slice(1); } function format(label, value) { return '<label>' + capitalize(label) + ':</label>' + '<strong>' + value + '</strong>'; } var object = { first: 'Julian', last: 'Ramos', age: 43 }; _.map(_.pairs(object), function(pair) { return format.apply(undefined, pair); }); // → // [ // "<label>First:</label><strong>Julian</strong>", // "<label>Last:</label><strong>Ramos</strong>", // "<label>Age:</label><strong>43</strong>" // ] We're passing the result of running our object through the pairs() function to map(). The argument passed to our map callback function is an array, the first element being the key and the second being the value. It so happens that the format() function expects a key and a value to format the given string, so we're able to use format.apply() to call the function, passing it the pair array. This approach is just a matter of taste. There's no need to call pairs() before map(). We could just as easily have called format directly. But sometimes, this approach is preferred, and the reasons, not least of which is the style of the programmer, are wide and varied. Summary This article introduced you to the map/reduce programming model and how Lo-Dash tools help realize it in your application. First, we examined mapping collections, including how to choose which properties get included and how to perform calculations. We then moved on to mapping objects. Keys can have an important role in how objects get mapped to new objects and collections. There are also methods and functions to consider when mapping. Resources for Article: Further resources on this subject: The First Step [article] Recursive directives [article] AngularJS Project [article]
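One gap worth noting: the reduce half of map/reduce does not appear in this excerpt. As a brief, illustrative sketch (the data is invented), a Lo-Dash reduction pairs naturally with the mappings shown above:

```javascript
var collection = [
  { name: 'Karl', hours: 6 },
  { name: 'Sophie', hours: 9 },
  { name: 'Jerald', hours: 4 }
];

// map() extracts the numbers, then reduce() folds them into one value;
// the final argument (0) is the initial accumulator.
var totalHours = _.reduce(_.map(collection, 'hours'), function(total, hours) {
  return total + hours;
}, 0);
// → 19
```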

Building the next generation Web with Meteor

Packt
05 Feb 2015
9 min read
This article by Fabian Vogelsteller, the author of Building Single-page Web Apps with Meteor, explores the full-stack framework of Meteor. Meteor is not just a JavaScript library such as jQuery or AngularJS. It's a full-stack solution that contains frontend libraries, a Node.js-based server, and a command-line tool. All this together lets us write large-scale web applications in JavaScript, on both the server and client, using a consistent API. (For more resources related to this topic, see here.) Even with Meteor being quite young, already a few companies such as https://lookback.io, https://respond.ly and https://madeye.io use Meteor already in their production environment. If you want to see for yourself what's made with Meteor, take a look at http://madewith.meteor.com. Meteor makes it easy for us to build web applications quickly and takes care of the boring processes such as file linking, minifying, and concatenating of files. Here are a few highlights of what is possible with Meteor: We can build complex web applications amazingly fast using templates that automatically update themselves when data changes We can push new code to all clients on the fly while they are using our app Meteor core packages come with a complete account solution, allowing a seamless integration with Facebook, Twitter, and more Data will automatically be synced across clients, keeping every client in the same state in almost real time Latency compensation will make our interface appear super fast while the server response happens in the background With Meteor, we never have to link files with the <script> tags in HTML. Meteor's command-line tool automatically collects JavaScript or CSS files in our application's folder and links them in the index.html file, which is served to clients on initial page load. This makes structuring our code in separate files as easy as creating them. Meteor's command-line tool also watches all files inside our application's folder for changes and rebuilds them on the fly when they change. Additionally, it starts a Meteor server that serves the app's files to the clients. When a file changes, Meteor reloads the site of every client while preserving its state. This is called a hot code reload. In production, the build process also concatenates and minifies our CSS and JavaScript files. By simply adding the less and coffee core packages, we can even write all styles in LESS and code in CoffeeScript with no extra effort. The command-line tool is also the tool for deploying and bundling our app so that we can run it on a remote server. Sounds awesome? Let's take a look at what's needed to use Meteor Adding basic packages Packages in Meteor are libraries that can be added to our projects. The nice thing about Meteor packages is that they are self-contained units, which run out of the box. They mostly add either some templating functionality or provide extra objects in the global namespace of our project. Packages can also add features to Meteor's build process like the stylus package, which lets us write our app's style files with the stylus pre-processor syntax. Writing templates in Meteor Normally when we build websites, we build the complete HTML on the server side. This was quite straightforward; every page is built on the server, then it is sent to the client, and at last JavaScript added some additional animation or dynamic behavior to it. This is not so in single-page apps, where each page needs to be already in the client's browser so that it can be shown at will. 
Meteor solves that problem by providing templates that exists in JavaScript and can be placed in the DOM at some point. These templates can have nested templates, allowing for and easy way to reuse and structure an app's HTML layout. Since Meteor is so flexible in terms of folder and file structure, any *.html page can contain a template and will be parsed during Meteor's build process. This allows us to put all templates in the my-meteor-blog/client/templates folder. This folder structure is chosen as it helps us organizing templates while our app grows. Meteor template engine is called Spacebars, which is a derivative of the handlebars template engine. Spacebars is built on top of Blaze, which is Meteor's reactive DOM update engine. Meteor and databases Meteor currently uses MongoDB by default to store data on the server, although there are drivers planned for relational databases, too. If you are adventurous, you can try one of the community-built SQL drivers, such as the numtel:mysql package from https://atmospherejs.com/numtel/mysql. MongoDB is a NoSQL database. This means it is based on a flat document structure instead of a relational table structure. Its document approach makes it ideal for JavaScript as documents are written in BJSON, which is very similar to the JSON format. Meteor has a database everywhere approach, which means we have the same API to query the database on the client as well as on the server. Yet, when we query the database on the client, we are only able to access data that we published to a client. MongoDB uses a datastructure called a collection, which is the equivalent of a table in an SQL database. Collections contain documents, where each document has its own unique ID. These documents are JSON-like structures and can contain properties with values, even with multiple dimensions: { "_id": "W7sBzpBbov48rR7jW", "myName": "My Document Name", "someProperty": 123456, "aNestedProperty": { "anotherOne": "With another string" } } These collections are used to store data in the servers MongoDB as well as the client-sides minimongo collections, which is an in-memory database mimicking the behavior of the real MongoDB. The MongoDB API let us use a simple JSON-based query language to get documents from a collection. We can pass additional options to only ask for specific fields or sort the returned documents. These are very powerful features, especially on the client side, to display data in various ways. Data everywhere In Meteor, we can use the browser console to update data, which means we update the database from the client. This works because Meteor automatically syncs these changes to the server and updates the database accordingly. This is happening because we have the autopublish and insecure core packages added to our project by default. The autopublish package publishes automatically all documents to every client, whereas the insecure package allows every client to update database records by its _id field. Obviously, this works well for prototyping but is infeasible for production, as every client could manipulate our database. If we remove the insecure package, we would need to add the "allow and deny" rules to determine what a client is allowed to update and what not; otherwise all updates will get denied. Differences between client and server collections Meteor has a database everywhere approach. This means it provides the same API on the client as on the server. The data flow is controlled using a publication subscription model. 
On the server sits the real MongoDB database, which stores data persistently. On the client Meteor has a package called minimongo, which is a pure in-memory database mimicking most of MongoDB's query and update functions. Every time a client connects to its Meteor server, Meteor downloads the documents the client subscribed to and stores them in its local minimongo database. From here, they can be displayed in a template or processed by functions. When the client updates a document, Meteor syncs it back to the server, where it is passed through any allow/deny functions before being persistently stored in the database. This works also in the other way, when a document in the server-side database changes, it will get automatically sync to every client that is subscribed to it, keeping every connected client up to date. Syncing data – the current Web versus the new Web In the current Web, most pages are either static files hosted on a server or dynamically generated by a server on a request. This is true for most server-side-rendered websites, for example, those written with PHP, Rails, or Django. Both of these techniques required no effort besides being displayed by the clients; therefore, they are called thin clients. In modern web applications, the idea of the browser has moved from thin clients to fat clients. This means most of the website's logic resides on the client and the client asks for the data it needs. Currently, this is mostly done via calls to an API server. This API server then returns data, commonly in JSON form, giving the client an easy way to handle it and use it appropriately. Most modern websites are a mixture of thin and fat clients. Normal pages are server-side-rendered, where only some functionality, such as a chat box or news feed, is updated using API calls. Meteor, however, is built on the idea that it's better to use the calculation power of all clients instead of one single server. A pure fat client or a single-page app contains the entire logic of a website's frontend, which is send down on the initial page load. The server then merely acts as a data source, sending only the data to the clients. This can happen by connecting to an API and utilizing AJAX calls, or as with Meteor, using a model called publication/subscription. In this model, the server offers a range of publications and each client decides which dataset it wants to subscribe to. Compared with AJAX calls, the developer doesn't have to take care of any downloading or uploading logic. The Meteor client syncs all of the data automatically in the background as soon as it subscribes to a specific dataset. When data on the server changes, the server sends the updated documents to the clients and vice versa, as shown in the following diagram: Summary Meteor comes with more great ways of building pure JavaScript applications such as simple routing and simple ways to make components, which can be packaged for others to use. Meteor's reactivity model, which allows you to rerun any function and template helpers at will, allows for great consistent interfaces and simple dependency tracking, which is a key for large-scale JavaScript applications. If you want to dig deeper, buy the book and read How to build your own blog as single-page web application in a simple step-by-step fashion by using Meteor, the next generation web! Resources for Article: Further resources on this subject: Quick start - creating your first application [article] Meteor.js JavaScript Framework: Why Meteor Rocks! 
[article] Marionette View Types and Their Use [article]
Google App Engine

Packt
05 Feb 2015
11 min read
In this article by Massimiliano Pippi, author of the book Python for Google App Engine, you will learn how to write a web application and see the platform in action. Web applications commonly provide a set of features such as user authentication and data storage. App Engine provides the services and tools needed to implement such features. (For more resources related to this topic, see here.)

In this article, we will see:

Details of the webapp2 framework
How to authenticate users
Storing data on Google Cloud Datastore
Building HTML pages using templates

Experimenting on the Notes application

To better explore App Engine and Cloud Platform capabilities, we need a real-world application to experiment on; something that's not trivial to write, with a reasonable list of requirements. A good candidate is a note-taking application; we will name it Notes. Notes enables users to add, remove, and modify a list of notes; a note has a title and a body of text. Users can only see their personal notes, so they must authenticate before using the application. The main page of the application will show the list of notes for logged-in users and a form to add new ones. The code from the helloworld example is a good starting point. We can simply change the name of the root folder and the application field in the app.yaml file to match the new name we chose for the application, or we can start a new project from scratch named notes.

Authenticating users

The first requirement for our Notes application is showing the home page only to users who are logged in and redirecting others to the login form; the users service provided by App Engine is exactly what we need, and adding it to our MainHandler class is quite simple:

import webapp2
from google.appengine.api import users

class MainHandler(webapp2.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if user is not None:
            self.response.write('Hello Notes!')
        else:
            login_url = users.create_login_url(self.request.uri)
            self.redirect(login_url)

app = webapp2.WSGIApplication([
    ('/', MainHandler)
], debug=True)

The users package we import on the second line of the previous code provides access to the users service functionality. Inside the get() method of the MainHandler class, we first check whether the user visiting the page has logged in or not. If they have, the get_current_user() method returns an instance of the User class provided by App Engine, representing an authenticated user; otherwise, it returns None. If the user is valid, we provide the response as we did before; otherwise, we redirect them to the Google login form. The URL of the login form is returned by the create_login_url() method, and we call it passing as a parameter the URL we want to redirect users to after a successful authentication. In this case, we want to redirect users to the same URL they are visiting, provided by webapp2 in the self.request.uri property. The webapp2 framework also provides handlers with a redirect() method we can use to conveniently set the right status and location properties of the response object so that client browsers will be redirected to the login page.

HTML templates with Jinja2

Web applications provide rich and complex HTML user interfaces, and Notes is no exception; so far, however, response objects in our applications have contained just small pieces of text.
We could include HTML tags as strings in our Python modules and write them in the response body but we can imagine how easily it could become messy and hard to maintain the code. We need to completely separate the Python code from HTML pages and that's exactly what a template engine does. A template is a piece of HTML code living in its own file and possibly containing additional, special tags; with the help of a template engine, from the Python script, we can load this file, properly parse special tags, if any, and return valid HTML code in the response body. App Engine includes in the Python runtime a well-known template engine: the Jinja2 library. To make the Jinja2 library available to our application, we need to add this code to the app.yaml file under the libraries section: libraries: - name: webapp2 version: "2.5.2" - name: jinja2 version: latest We can put the HTML code for the main page in a file called main.html inside the application root. We start with a very simple page: <!DOCTYPE html> <html> <head lang="en"> <meta charset="UTF-8"> <title>Notes</title> </head> <body> <div class="container"> <h1>Welcome to Notes!</h1> <p> Hello, <b>{{user}}</b> - <a href="{{logout_url}}">Logout</a> </p> </div> </body> </html> Most of the content is static, which means that it will be rendered as standard HTML as we see it but there is a part that is dynamic and whose content depend on which data will be passed at runtime to the rendering process. This data is commonly referred to as template context. What has to be dynamic is the username of the current user and the link used to log out from the application. The HTML code contains two special elements written in the Jinja2 template syntax, {{user}} and {{logout_url}}, that will be substituted before the final output occurs. Back to the Python script; we need to add the code to initialize the template engine before the MainHandler class definition: import os import jinja2 jinja_env = jinja2.Environment( loader=jinja2.FileSystemLoader(os.path.dirname(__file__))) The environment instance stores engine configuration and global objects, and it's used to load templates instances; in our case, instances are loaded from HTML files on the filesystem in the same directory as the Python script. To load and render our template, we add the following code to the MainHandler.get() method: class MainHandler(webapp2.RequestHandler): def get(self): user = users.get_current_user() if user is not None: logout_url = users.create_logout_url(self.request.uri) template_context = { 'user': user.nickname(), 'logout_url': logout_url, } template = jinja_env.get_template('main.html') self.response.out.write( template.render(template_context)) else: login_url = users.create_login_url(self.request.uri) self.redirect(login_url) Similar to how we get the login URL, the create_logout_url() method provided by the user service returns the absolute URI to the logout procedure that we assign to the logout_url variable. We then create the template_context dictionary that contains the context values we want to pass to the template engine for the rendering process. We assign the nickname of the current user to the user key in the dictionary and the logout URL string to the logout_url key. The get_template() method from the jinja_env instance takes the name of the file that contains the HTML code and returns a Jinja2 template object. 
To obtain the final output, we call the render() method on the template object passing in the template_context dictionary whose values will be accessed, specifying their respective keys in the HTML file with the template syntax elements {{user}} and {{logout_url}}. Handling forms The main page of the application is supposed to list all the notes that belong to the current user but there isn't any way to create such notes at the moment. We need to display a web form on the main page so that users can submit details and create a note. To display a form to collect data and create notes, we put the following HTML code right below the username and the logout link in the main.html template file: {% if note_title %} <p>Title: {{note_title}}</p> <p>Content: {{note_content}}</p> {% endif %} <h4>Add a new note</h4> <form action="" method="post"> <div class="form-group"> <label for="title">Title:</label> <input type="text" id="title" name="title" /> </div> <div class="form-group"> <label for="content">Content:</label> <textarea id="content" name="content"></textarea> </div> <div class="form-group"> <button type="submit">Save note</button> </div> </form> Before showing the form, a message is displayed only when the template context contains a variable named note_title. To do this, we use an if statement, executed between the {% if note_title %} and {% endif %} delimiters; similar delimiters are used to perform for loops or assign values inside a template. The action property of the form tag is empty; this means that upon form submission, the browser will perform a POST request to the same URL, which in this case is the home page URL. As our WSGI application maps the home page to the MainHandler class, we need to add a method to this class so that it can handle POST requests: class MainHandler(webapp2.RequestHandler): def get(self): user = users.get_current_user() if user is not None: logout_url = users.create_logout_url(self.request.uri) template_context = { 'user': user.nickname(), 'logout_url': logout_url, } template = jinja_env.get_template('main.html') self.response.out.write( template.render(template_context)) else: login_url = users.create_login_url(self.request.uri) self.redirect(login_url) def post(self): user = users.get_current_user() if user is None: self.error(401) logout_url = users.create_logout_url(self.request.uri) template_context = { 'user': user.nickname(), 'logout_url': logout_url, 'note_title': self.request.get('title'), 'note_content': self.request.get('content'), } template = jinja_env.get_template('main.html') self.response.out.write( template.render(template_context)) When the form is submitted, the handler is invoked and the post() method is called. We first check whether a valid user is logged in; if not, we raise an HTTP 401: Unauthorized error without serving any content in the response body. Since the HTML template is the same served by the get() method, we still need to add the logout URL and the user name to the context. In this case, we also store the data coming from the HTML form in the context. To access the form data, we call the get() method on the self.request object. The last three lines are boilerplate code to load and render the home page template. 
We can move this code in a separate method to avoid duplication: def _render_template(self, template_name, context=None): if context is None: context = {} template = jinja_env.get_template(template_name) return template.render(context) In the handler class, we will then use something like this to output the template rendering result: self.response.out.write( self._render_template('main.html', template_context)) We can try to submit the form and check whether the note title and content are actually displayed above the form. Summary Thanks to App Engine, we have already implemented a rich set of features with a relatively small effort so far. We have discovered some more details about the webapp2 framework and its capabilities, implementing a nontrivial request handler. We have learned how to use the App Engine users service to provide users authentication. We have delved into some fundamental details of Datastore and now we know how to structure data in grouped entities and how to effectively retrieve data with ancestor queries. In addition, we have created an HTML user interface with the help of the Jinja2 template library, learning how to serve static content such as CSS files. Resources for Article: Further resources on this subject: Machine Learning in IPython with scikit-learn [Article] Introspecting Maya, Python, and PyMEL [Article] Driving Visual Analyses with Automobile Data (Python) [Article]
The First Step

Packt
04 Feb 2015
16 min read
This article by Tim Chaplin, author of the book AngularJS Test-driven Development, provides an initial introductory walk-through of how to use TDD to build an AngularJS application with a controller, model, and scope. You will be able to begin the TDD journey and see the fundamentals in action. Now, we will switch gears and dive into TDD with AngularJS. This article will be the first step of TDD. It will focus on the creation of social media comments, as well as on the testing associated with controllers and the use of Angular mocks to provide AngularJS components in a test. (For more resources related to this topic, see here.)

Preparing the application's specification

Create an application to enter comments. The specification of the application is as follows:

Given I am posting a new comment, when I click on the submit button, the comment should be added to the to-do list
Given a comment, when I click on the like button, the number of likes for the comment should be increased

Now that we have the specification of the application, we can create our development to-do list. It won't be easy to create an entire to-do list for the whole application. Based on the user specifications, we have an idea of what needs to be developed. Here is a rough sketch of the UI:

Hold yourself back from jumping into the implementation and thinking about how you will use a controller with a service, ng-repeat, and so on. Resist, resist, resist! Although you can think of how this will be developed in the future, it is never clear until you delve into the code, and that is where you start getting into trouble. TDD and its principles are here to help you get your mind and focus in the right place.

Setting up the project

I will provide a list in the following section of the initial actions needed to get the project set up.

Setting up the directory

The following instructions are specific to setting up the project directory:

Create a new project directory.
Get Angular into the project using Bower: bower install angular
Get angular-mocks for testing using Bower: bower install angular-mocks
Initialize the application's source directory: mkdir app
Initialize the test directory: mkdir spec
Initialize the unit test directory: mkdir spec/unit
Initialize the end-to-end test directory: mkdir spec/e2e

Once the initialization is complete, your folder structure should look as follows:

Setting up Protractor

In this article, we will just discuss the steps at a higher level:

Install Protractor in the project: $ npm install protractor
Update Selenium WebDriver: $ ./node_modules/protractor/bin/webdriver-manager update
Make sure that Selenium has been installed.
Copy the example chromeOnly configuration into the root of the project: $ cp ./node_modules/protractor/example/chromeOnlyConf.js .
Configure the Protractor configuration using the following steps:
Open the Protractor configuration.
Edit the Selenium WebDriver location to reflect the relative directory to chromeDriver: chromeDriver: './node_modules/protractor/selenium/chromedriver',
Edit the files section to reflect the test directory: specs: ['spec/e2e/**/*.js'],
Set the default base URL: baseUrl: 'http://localhost:8080/',

Excellent! Protractor should now be installed and set up.
Here is the complete configuration: exports.config = { chromeOnly: true, chromeDriver: './node_modules/protractor/selenium/chromedriver', capabilities: { 'browserName': 'chrome' }, baseUrl: 'http://localhost:8080/', specs: ['spec/e2e/**/*.js'], }; Setting up Karma Here is a brief summary of the steps required to install and get your new project set up: Install Karma using the following command: npm install karma -g Initialize the Karma configuration: karma init Update the Karma configuration: files: [ 'bower_components/angular/angular.js', 'bower_components/angular-mocks/angular-mocks.js', 'spec/unit/**/*.js' ], Now that we have set up the project directory and initialized Protractor and Karma, we can dive into the code. Here is the complete karma.conf.js file: module.exports = function(config) { config.set({ basePath: '', frameworks: ['jasmine'], files: [ 'bower_components/angular/angular.js', 'bower_components/angular-mocks/angular-mocks.js', 'spec/unit/**/*.js' ], reporters: ['progress'], port: 9876, autoWatch: true, browsers: ['Chrome'], singleRun: false }); }; Setting up http-server A web server will be used to host the application. As this will just be for local development only, you can use http-server. The http-server module is a simple HTTP server that serves static content. It is available as an npm module. To install http-server in your project, type the following command: $ npm install http-server Once http-server is installed, you can run the server by providing it with the root directory of the web page. Here is an example: $ ./node_modules/http-server/bin/http-server Now that you have http-server installed, you can move on to the next step. Top-down or bottom-up approach From our development perspective, we have to determine where to start. The approaches that we will discuss in this article are as follows: The bottom-up approach: With this approach, we think about the different components we will need (controller, service, module, and so on) and then pick the most logical one and start coding. The top-down approach: With this approach, we work from the user scenario and UI. We then create the application around the components in the application. There are merits to both types of approaches and the choice can be based on your team, existing components, requirements, and so on. In most cases, it is best for you to make the choice based on the least resistance. In this article, the approach of specification is top-down, everything is laid out for us from the user scenario and will allow you to organically build the application around the UI. Testing a controller Before getting into the specification, and the mind-set of the feature being delivered, it is important to see the fundamentals of testing a controller. An AngularJS controller is a key component used in most applications. A simple controller test setup When testing a controller, tests are centered on the controller's scope. The tests confirm either the objects or methods in the scope. Angular mocks provide inject, which finds a particular reference and returns it for you to use. When inject is used for the controller, the controllers scope can be assigned to an outer reference for the entire test to use. 
Here is an example of what this would look like: describe('',function(){ var scope = {}; beforeEach(function(){ module('anyModule'); inject(function($controller){ $controller('AnyController',{$scope:scope}); }); }); }); In the preceding case, the test's scope object is assigned to the actual scope of the controller within the inject function. The scope object can now be used throughout the test, and is also reinitialized before each test. Initializing the scope In the preceding example, scope is initialized to an object {}. This is not the best approach; just like a page, a controller might be nested within another controller. This will cause inheritance of a parent scope as follows: <body ng-app='anyModule'> <div ng-controller='ParentController'> <div ng-controller='ChildController'> </div> </div> </body> As seen in the preceding code, we have this hierarchy of scopes that the ChildController function has access to. In order to test this, we have to initialize the scope object properly in the inject function. Here is how the preceding scope hierarchy can be recreated: inject(function($controller,$rootScope){ var parentScope = $rootScope.$new(); $controller('ParentController',{$scope:parentScope}); var childScope = parentScope.$new(); $controller('AnyController',{$scope: childScope}); }); There are two main things that the preceding code does: The $rootScope scope is injected into the test. The $rootScope scope is the highest level of scope that exists. Each level of scope is created with the $new() method. This method creates the child scope. In this article, we will use the simplified version and initialize the scope to an empty object; however, it is important to understand how to create the scope when required. Bring on the comments Now that the setup and approach have been decided, we can start our first test. From a testing point of view, as we will be using a top-down approach, we will write our Protractor tests first and then build the application. We will follow the same TDD life cycle we have already reviewed, that is, test first, make it run, and make it better. Test first The scenario given is in a well-specified format already and fits our Protractor testing template: describe('',function(){ beforeEach(function(){ }); it('',function(){ }); }); Placing the scenario in the template, we get the following code: describe('Given I am posting a new comment',function(){ describe('When I push the submit button',function(){ beforeEach(function(){ }); it('Should then add the comment',function(){ }); }); }); Following the 3 A's (Assemble, Act, Assert), we will fit the user scenario in the template. Assemble The browser will need to point to the first page of the application. As the base URL has already been defined, we can add the following to the test: beforeEach(function(){ browser.get('/'); }); Now that the test is prepared, we can move on to the next step, Act. Act The next thing we need to do, based on the user specification, is add an actual comment. The easiest thing is to just put some text into an input box. The test for this, again without knowing what the element will be called or what it will do, is to write it based on what it should be. Here is the code to add the comment section for the application: beforeEach(function(){ ... var commentInput = $('input'); commentInput.sendKeys('a comment'); }); The last assemble component, as part of the test, is to push the Submit button. This can be easily achieved in Protractor using the click function. 
Even though we don't have a page yet, or any attributes, we can still name the button that will be created: beforeEach(function(){ ... var submitButton = element.all(by.buttonText('Submit')).click(); }); Finally, we will hit the crux of the test and assert the users' expectations. Assert The user expectation is that once the Submit button is clicked, the comment is added. This is a little ambiguous, but we can determine that somehow the user needs to get notified that the comment was added. The simplest approach is to display all comments on the page. In AngularJS, the easiest way to do this is to add an ng-repeat object that displays all comments. To test this, we will add the following: it('Should then add the comment',function(){ var comments = element(by.repeater('comment in comments')).first(); expect(comment.getText()).toBe('a comment'); }); Now, the test has been constructed and meets the user specifications. It is small and concise. Here is the completed test: describe('Given I am posting a new comment',function(){ describe('When I push the submit button',function(){ beforeEach(function(){ //Assemble browser.get('/'); var commentInput = $('input'); commentInput.sendKeys('a comment'); //Act //Act var submitButton = element.all(by.buttonText('Submit')). click(); }); //Assert it('Should then add the comment',function(){ var comments = element(by.repeater('comment in comments')).first(); expect(comment.getText()).toBe('a comment'); }); }); }); Make it run Based on the errors and output of the test, we will build our application as we go. The first step to make the code run is to identify the errors. Before starting off the site, let's create a bare bones index.html page: <!DOCTYPE html> <html> <head> <title></title> </head> <body> </body> </html> Already anticipating the first error, add AngularJS as a dependency in the page: <script type='text/javascript' src='bower_components/angular/angular.js'></script> </body> Now, starting the web server using the following command: $ ./node_modules/http-server/bin/http-server -p 8080 Run Protractor to see the first error: $ ./node_modules/.bin/protractor chromeOnlyConf.js Our first error states that AngularJS could not be found: Error: Angular could not be found on the page http://localhost:8080/ : angular never provided resumeBootstrap This is because we need to add ng-app to the page. Let's create a module and add it to the page. The complete HTML page now looks as follows: <!DOCTYPE html> <html> <head> <title></title> </head> <body> <script src="bower_components/angular/angular.js"></script> </body> </html> Adding the module The first component that you need to define is an ng-app attribute in the index.html page. Use the following steps to add the module: Add ng-app as an attribute to the body tag: <body ng-app='comments'> Now, we can go ahead and create a simple comments module and add it to a file named comments.js: angular.module('comments',[]); Add this new file to index.html: <script src='app/commentController.js'></script> Rerun the Protractor test to get the next error: $ Error: No element found using locator: By.cssSelector('input') The test couldn't find our input locator. You need to add the input to the page. 
Adding the input Here are the steps you need to follow to add the input to the page: All we have to do is add a simple input tag to the page: <input type='text' /> Run the test and see what the new output is: $ Error: No element found using locator: by.buttonText('Submit') Just like the previous error, we need to add a button with the appropriate text: <button type='button'>Submit</button> Run the test again and the next error is as follows: $ Error: No element found using locator: by.repeater('comment in comments') This appears to be from our expectation that a submitted comment will be available on the page through ng-repeat. To add this to the page, we will use a controller to provide the data for the repeater. Controller As we mentioned in the preceding section, the error is because there is no comments object. In order to add the comments object, we will use a controller that has an array of comments in its scope. Use the following steps to add a comments object in the scope: Create a new file in the app directory named commentController.js: angular.module('comments') .controller('CommentController',['$scope', function($scope){ $scope.comments = []; }]) Add it to the web page after the AngularJS script: <script src='app/commentController.js'></script> Now, we can add commentController to the page: <div ng-controller='CommentController'> Then, add a repeater for the comments as follows: <ul ng-repeat='comment in comments'> <li>{{comment}}</li> </ul> Run the Protractor test and let's see where we are: $ Error: No element found using locator: by.repeater('comment in comments') Hmmm! We get the same error. Let's look at the actual page that gets rendered and see what's going on. In Chrome, go to http://localhost:8080 and open the console to see the page source (Ctrl + Shift + J). You should see something like what's shown in the following screenshot: Notice that the repeater and controller are both there; however, the repeater is commented out. Since Protractor is only looking at visible elements, it won't find the repeater. Great! Now we know why the repeater isn't visible, but we have to fix it. In order for a comment to show up, it has to exist on the controller's comments scope. The smallest change is to add something to the array to initialize it as shown in the following code snippet: .controller('CommentController',['$scope',function($scope){ $scope.comments = ['anything']; }]); Now run the test and we get the following: $ Expected 'anything' to be 'a comment'. Wow! We finally tackled all the errors and reached the expectation. Here is what the HTML code looks like so far: <!DOCTYPE html> <html> <head> <title></title> </head> <body ng-app='comments'> <div ng-controller='CommentController'> <input type='text' /> <ul> <li ng-repeat='comment in comments'> {{comment.value}} </li> </ul> </div> <script src='bower_components/angular/angular.js'></script> <script src='app/comments.js'></script> <script src='app/commentController.js'></script> </body> </html> The comments.js module looks as follows: angular.module('comments',[]); Here is commentController.js: angular.module('comments') .controller('CommentController',['$scope', function($scope){ $scope.comments = []; }]) Make it pass With TDD, you want to add the smallest possible component to make the test pass. Since we have hardcoded, for the moment, the comments to be initialized to anything, change anything to a comment; this should make the test pass. 
Here is the code to make the test pass: angular.module('comments') .controller('CommentController',['$scope', function($scope){ $scope.comments = ['a comment']; }]); … Run the test, and bam! We get a passing test: $ 1 test, 1 assertion, 0 failures Wait a second! We still have some work to do. Although we got the test to pass, it is not done. We added some hacks just to get the test passing. The two things that stand out are: Clicking on the Submit button, which really doesn't have any functionality Hardcoded initialization of the expected value for a comment The preceding changes are critical steps we need to perform before we move forward. They will be tackled in the next phase of the TDD life cycle, that is, make it better (refactor). Summary In this article, we walked through the TDD techniques of using Protractor and Karma together. As the application was developed, you were able to see where, why, and how to apply the TDD testing tools and techniques. With the bottom-up approach, the specifications are used to build unit tests and then build the UI layer on top of that. In this article, a top-down approach was shown to focus on the user's behavior. The top-down approach tests the UI and then filters the development through the other layers. Resources for Article: Further resources on this subject: AngularJS Project [Article] Role of AngularJS [Article] Creating Our First Animation in AngularJS [Article]
ServiceStack applications

Packt
21 Jan 2015
9 min read
In this article by Kyle Hodgson and Darren Reid, authors of the book ServiceStack 4 Cookbook, we'll learn about unit testing ServiceStack applications. (For more resources related to this topic, see here.) Unit testing ServiceStack applications In this recipe, we'll focus on simple techniques to test individual units of code within a ServiceStack application. We will use the ServiceStack testing helper BasicAppHost as an application container, as it provides us with some useful helpers to inject a test double for our database. Our goal is small; fast tests that test one unit of code within our application. Getting ready We are going to need some services to test, so we are going to use the PlacesToVisit application. How to do it… Create a new testing project. It's a common convention to name the testing project <ProjectName>.Tests—so in our case, we'll call it PlacesToVisit.Tests. Create a class within this project to contain the tests we'll write—let's name it PlacesServiceTests as the tests within it will focus on the PlacesService class. Annotate this class with the [TestFixture] attribute, as follows: [TestFixture]public class PlaceServiceTests{ We'll want one method that runs whenever this set of tests begins to set up the environment and another one that runs afterwards to tear the environment down. These will be annotated with the NUnit attributes of TestFixtureSetUp and TextFixtureTearDown, respectively. Let's name them FixtureInit and FixtureTearDown. In the FixtureInit method, we will use BasicAppHost to initialize our appHost test container. We'll make it a field so that we can easily access it in each test, as follows: ServiceStackHost appHost; [TestFixtureSetUp]public void FixtureInit(){appHost = new BasicAppHost(typeof(PlaceService).Assembly){   ConfigureContainer = container =>   {     container.Register<IDbConnectionFactory>(c =>       new OrmLiteConnectionFactory(         ":memory:", SqliteDialect.Provider));     container.RegisterAutoWiredAs<PlacesToVisitRepository,       IPlacesToVisitRepository>();   }}.Init();} The ConfigureContainer property on BasicAppHost allows us to pass in a function that we want AppHost to run inside of the Configure method. In this case, you can see that we're registering OrmLiteConnectionFactory with an in-memory SQLite instance. This allows us to test code that uses a database without that database actually running. This useful technique could be considered a classic unit testing approach—the mockist approach might have been to mock the database instead. The FixtureTearDown method will dispose of appHost as you might imagine. This is how the code will look: [TestFixtureTearDown]public void FixtureTearDown(){appHost.Dispose();} We haven't created any data in our in memory database yet. We'll want to ensure the data is the same prior to each test, so our TestInit method is a good place to do that—it will be run once before each and every test run as we'll annotate it with the [SetUp] attribute, as follows: [SetUp]public void TestInit(){using (var db = appHost.Container     .Resolve<IDbConnectionFactory>().Open()){   db.DropAndCreateTable<Place>();   db.InsertAll(PlaceSeedData.GetSeedPlaces());}} As our tests all focus on PlaceService, we'll make sure to create Place data. Next, we'll begin writing tests. Let's start with one that asserts that we can create new places. 
The first step is to create the new method, name it appropriately, and annotate it with the [Test] attribute, as follows: [Test]public void ShouldAddNewPlaces(){ Next, we'll create an instance of PlaceService that we can test against. We'll use the Funq IoC TryResolve method for this: var placeService = appHost.TryResolve<PlaceService>(); We'll want to create a new place, then query the database later to see whether the new one was added. So, it's useful to start by getting a count of how many places there are based on just the seed data. Here's how you can get the count based on the seed data: var startingCount = placeService               .Get(new AllPlacesToVisitRequest())               .Places               .Count; Since we're testing the ability to handle a CreatePlaceToVisit request, we'll need a test object that we can send the service to. Let's create one and then go ahead and post it: var melbourne = new CreatePlaceToVisit{   Name = "Melbourne",   Description = "A nice city to holiday"}; placeService.Post(melbourne); Having done that, we can get the updated count and then assert that there is one more item in the database than there were before: var newCount = placeService               .Get(new AllPlacesToVisitRequest())               .Places              .Count;Assert.That(newCount == startingCount + 1); Next, let's fetch the new record that was created and make an assertion that it's the one we want: var newPlace = placeService.Get(new PlaceToVisitRequest{   Id = startingCount + 1});Assert.That(newPlace.Place.Name == melbourne.Name);} With this in place, if we run the test, we'll expect it to pass both assertions. This proves that we can add new places via PlaceService registered with Funq, and that when we do that we can go and retrieve them later as expected. We can also build a similar test that asserts that on our ability to update an existing place. Adding the code is simple, following the pattern we set out previously. We'll start with the arrange section of the test, creating the variables and objects we'll need: [Test]public void ShouldUpdateExistingPlaces(){var placeService = appHost.TryResolve<PlaceService>();var startingPlaces = placeService     .Get(new AllPlacesToVisitRequest())     .Places;var startingCount = startingPlaces.Count;  var canberra = startingPlaces     .First(c => c.Name.Equals("Canberra")); const string canberrasNewName = "Canberra, ACT";canberra.Name = canberrasNewName; Once they're in place, we'll act. In this case, the Put method on placeService has the responsibility for update operations: placeService.Put(canberra.ConvertTo<UpdatePlaceToVisit>()); Think of the ConvertTo helper method from ServiceStack as an auto-mapper, which converts our Place object for us. Now that we've updated the record for Canberra, we'll proceed to the assert section of the test, as follows: var updatedPlaces = placeService     .Get(new AllPlacesToVisitRequest())     .Places;var updatedCanberra = updatedPlaces     .First(p => p.Id.Equals(canberra.Id));var updatedCount = updatedPlaces.Count; Assert.That(updatedCanberra.Name == canberrasNewName);Assert.That(updatedCount == startingCount);} How it works… These unit tests are using a few different patterns that help us write concise tests, including the development of our own test helpers, and with helpers from the ServiceStack.Testing namespace, for instance BasicAppHost allows us to set up an application host instance without actually hosting a web service. 
It also lets us provide a custom ConfigureContainer action to mock any of our dependencies for our services and seed our testing data, as follows: appHost = new BasicAppHost(typeof(PlaceService).Assembly){ConfigureContainer = container =>{   container.Register<IDbConnectionFactory>(c =>     new OrmLiteConnectionFactory(     ":memory:", SqliteDialect.Provider));    container.RegisterAutoWiredAs<PlacesToVisitRepository,     IPlacesToVisitRepository>();}}.Init(); To test any ServiceStack service, you can resolve it through the application host via TryResolve<ServiceType>().This will have the IoC container instantiate an object of the type requested. This gives us the ability to test the Get method independent of other aspects of our web service, such as validation. This is shown in the following code: var placeService = appHost.TryResolve<PlaceService>(); In this example, we are using an in-memory SQLite instance to mock our use of OrmLite for data access, which IPlacesToVisitRepository will also use as well as seeding our test data in our ConfigureContainer hook of BasicAppHost. The use of both in-memory SQLite and BasicAppHost provide fast unit tests to very quickly iterate our application services while ensuring we are not breaking any functionality specifically associated with this component. In the example provided, we are running three tests in less than 100 milliseconds. If you are using the full version of Visual Studio, extensions such as NCrunch can allow you to regularly run your unit tests while you make changes to your code. The performance of ServiceStack components and the use of these extensions results in a smooth developer experience with productivity and quality of code. There's more… In the examples in this article, we wrote out tests that would pass, ran them, and saw that they passed (no surprise). While this makes explaining things a bit simpler, it's not really a best practice. You generally want to make sure your tests fail when presented with wrong data at some point. The authors have seen many cases where subtle bugs in test code were causing a test to pass that should not have passed. One best practice is to write tests so that they fail first and then make them pass—this guarantees that the test can actually detect the defect you're guarding against. This is commonly referred to as the red/green/refactor pattern. Summary In this article, we covered some techniques to unit test ServiceStack applications. Resources for Article: Further resources on this subject: Building a Web Application with PHP and MariaDB – Introduction to caching [article] Web API and Client Integration [article] WebSockets in Wildfly [article]
Creating a Photo-sharing Application

Packt
16 Jan 2015
34 min read
In this article by Rob Foster, the author of CodeIgniter Web Application Blueprints, we will create a photo-sharing application. There are quite a few image-sharing websites around at the moment. They all share roughly the same structure: the user uploads an image and that image can be shared, allowing others to view that image. Perhaps limits or constraints are placed on the viewing of an image, perhaps the image only remains viewable for a set period of time, or within set dates, but the general structure is the same. And I'm happy to announce that this project is exactly the same. We'll create an application allowing users to share pictures; these pictures are accessible from a unique URL. To make this app, we will create two controllers: one to process image uploading and one to process the viewing and displaying of images stored. We'll create a language file to store the text, allowing you to have support for multiple languages should it be needed. We'll create all the necessary view files and a model to interface with the database. In this article, we will cover: Design and wireframes Creating the database Creating the models Creating the views Creating the controllers Putting it all together So without further ado, let's get on with it. (For more resources related to this topic, see here.) Design and wireframes As always, before we start building, we should take a look at what we plan to build. First, a brief description of our intent: we plan to build an app to allow the user to upload an image. That image will be stored in a folder with a unique name. A URL will also be generated containing a unique code, and the URL and code will be assigned to that image. The image can be accessed via that URL. The idea of using a unique URL to access that image is so that we can control access to that image, such as allowing an image to be viewed only a set number of times, or for a certain period of time only. Anyway, to get a better idea of what's happening, let's take a look at the following site map: So that's the site map. The first thing to notice is how simple the site is. There are only three main areas to this project. Let's go over each item and get a brief idea of what they do: create: Imagine this as the start point. The user will be shown a simple form allowing them to upload an image. Once the user presses the Upload button, they are directed to do_upload. do_upload: The uploaded image is validated for size and file type. If it passes, then a unique eight-character string is generated. This string is then used as the name of a folder we will make. This folder is present in the main upload folder and the uploaded image is saved in it. The image details (image name, folder name, and so on) are then passed to the database model, where another unique code is generated for the image URL. This unique code, image name, and folder name are then saved to the database. The user is then presented with a message informing them that their image has been uploaded and that a URL has been created. The user is also presented with the image they have uploaded. go: This will take a URL provided by someone typing into a browser's address bar, or an img src tag, or some other method. The go item will look at the unique code in the URL, query the database to see if that code exists, and if so, fetch the folder name and image name and deliver the image back to the method that called it. 
Now that we have a fairly good idea of the structure and form of the site, let's take a look at the wireframes of each page. The create item The following screenshot shows a wireframe for the create item discussed in the previous section. The user is shown a simple form allowing them to upload an image. Image2 The do_upload item The following screenshot shows a wireframe from the do_upload item discussed in the previous section. The user is shown the image they have uploaded and the URL that will direct other users to that image. The go item The following screenshot shows a wireframe from the go item described in the previous section. The go controller takes the unique code in a URL, attempts to find it in the database table images, and if found, supplies the image associated with it. Only the image is supplied, not the actual HTML markup. File overview This is a relatively small project, and all in all we're only going to create seven files, which are as follows: /path/to/codeigniter/application/models/image_model.php: This provides read/write access to the images database table. This model also takes the upload information and unique folder name (which we store the uploaded image in) from the create controller and stores this to the database. /path/to/codeigniter/application/views/create/create.php: This provides us with an interface to display a form allowing the user to upload a file. This also displays any error messages to the user, such as wrong file type, file size too big, and so on. /path/to/codeigniter/application/views/create/result.php: This displays the image to the user after it has been successfully uploaded, as well as the URL required to view that image. /path/to/codeigniter/application/views/nav/top_nav.php: This provides a navigation bar at the top of the page. /path/to/codeigniter/application/controllers/create.php: This performs validation checks on the image uploaded by the user, creates a uniquely named folder to store the uploaded image, and passes this information to the model. /path/to/codeigniter/application/controllers/go.php: This performs validation checks on the URL input by the user, looks for the unique code in the URL and attempts to find this record in the database. If it is found, then it will display the image stored on disk. /path/to/codeigniter/application/language/english/en_admin_lang.php: This provides language support for the application. The file structure of the preceding seven files is as follows: application/ ├── controllers/ │   ├── create.php │   ├── go.php ├── models/ │   ├── image_model.php ├── views/create/ │   ├── create.php │   ├── result.php ├── views/nav/ │   ├── top_nav.php ├── language/english/ │   ├── en_admin_lang.php Creating the database First, we'll build the database. Copy the following MySQL code into your database: CREATE DATABASE `imagesdb`; USE `imagesdb`;   DROP TABLE IF EXISTS `images`; CREATE TABLE `images` ( `img_id` int(11) NOT NULL AUTO_INCREMENT, `img_url_code` varchar(10) NOT NULL, `img_url_created_at` timestamp NOT NULL DEFAULT     CURRENT_TIMESTAMP, `img_image_name` varchar(255) NOT NULL, `img_dir_name` varchar(8) NOT NULL, PRIMARY KEY (`img_id`) ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8; Right, let's take a look at each item in every table and see what they mean: Table: images Element Description img_id This is the primary key. img_url_code This stores the unique code that we use to identify the image in the database. img_url_created_at This is the MySQL timestamp for the record. 
img_image_name This is the filename provided by the CodeIgniter upload functionality. img_dir_name This is the name of the directory we store the image in. We'll also need to make amends to the config/database.php file, namely setting the database access details, username, password, and so on. Open the config/database.php file and find the following lines: $db['default']['hostname'] = 'localhost'; $db['default']['username'] = 'your username'; $db['default']['password'] = 'your password'; $db['default']['database'] = 'imagesdb'; Edit the values in the preceding code ensuring you substitute those values for the ones more specific to your setup and situation—so enter your username, password, and so on. Adjusting the config.php and autoload.php files We don't actually need to adjust the config.php file in this project as we're not really using sessions or anything like that. So we don't need an encryption key or database information. So just ensure that you are not autoloading the session in the config/autoload.php file or you will get an error, as we've not set any session variables in the config/config.php file. Adjusting the routes.php file We want to redirect the user to the create controller rather than the default CodeIgniter welcome controller. To do this, we will need to amend the default controller settings in the routes.php file to reflect this. The steps are as follows: Open the config/routes.php file for editing and find the following lines (near the bottom of the file): $route['default_controller'] = "welcome"; $route['404_override'] = ''; First, we need to change the default controller. Initially, in a CodeIgniter application, the default controller is set to welcome. However, we don't need that, instead we want the default controller to be create, so find the following line: $route['default_controller'] = "welcome"; Replace it with the following lines: $route['default_controller'] = "create"; $route['404_override'] = ''; Then we need to add some rules to govern how we handle URLs coming in and form submissions. Leave a few blank lines underneath the preceding two lines of code (default controller and 404 override) and add the following three lines of code: $route['create'] = "create/index"; $route['(:any)'] = "go/index"; $route['create/do_upload'] = "create/do_upload"; Creating the model There is only one model in this project, image_model.php. It contains functions specific to creating and resetting passwords. Create the /path/to/codeigniter/application/models/image_model.php file and add the following code to it: <?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');   class Image_model extends CI_Model { function __construct() {    parent::__construct(); }   function save_image($data) {    do {       $img_url_code = random_string('alnum', 8);        $this->db->where('img_url_code = ', $img_url_code);      $this->db->from('images');      $num = $this->db->count_all_results();    } while ($num >= 1);      $query = "INSERT INTO `images` (`img_url_code`,       `img_image_name`, `img_dir_name`) VALUES (?,?,?) ";    $result = $this->db->query($query, array($img_url_code,       $data['image_name'], $data['img_dir_name']));      if ($result) {      return $img_url_code;    } else {      return flase;    } }   function fetch_image($img_url_code) {    $query = "SELECT * FROM `images` WHERE `img_url_code` = ? 
";    $result = $this->db->query($query, array($img_url_code));      if ($result) {      return $result;    } else {      return false;    } } } There are two main functions in this model, which are as follows: save_image(): This generates a unique code that is associated with the uploaded image and saves it, with the image name and folder name, to the database. fetch_image(): This fetches an image's details from the database according to the unique code provided. Okay, let's take save_image() first. The save_image() function accepts an array from the create controller containing image_name (from the upload process) and img_dir_name (this is the folder that the image is stored in). A unique code is generated using a do…while loop as shown here: $img_url_code = random_string('alnum', 8); First a string is created, eight characters in length, containing alpha-numeric characters. The do…while loop checks to see if this code already exists in the database, generating a new code if it is already present. If it does not already exist, this code is used: do { $img_url_code = random_string('alnum', 8);   $this->db->where('img_url_code = ', $img_url_code); $this->db->from('images'); $num = $this->db->count_all_results(); } while ($num >= 1); This code and the contents of the $data array are then saved to the database using the following code: $query = "INSERT INTO `images` (`img_url_code`, `img_image_name`,   `img_dir_name`) VALUES (?,?,?) "; $result = $this->db->query($query, array($img_url_code,   $data['image_name'], $data['img_dir_name'])); The $img_url_code is returned if the INSERT operation was successful, and false if it failed. The code to achieve this is as follows: if ($result) { return $img_url_code; } else { return false; } Creating the views There are only three views in this project, which are as follows: /path/to/codeigniter/application/views/create/create.php: This displays a form to the user allowing them to upload an image. /path/to/codeigniter/application/views/create/result.php: This displays a link that the user can use to forward other people to the image, as well as the image itself. /path/to/codeigniter/application/views/nav/top_nav.php: This displays the top-level menu. In this project it's very simple, containing a project name and a link to go to the create controller. So those are our views, as I said, there are only three of them as it's a simple project. Now, let's create each view file. 
Create the /path/to/codeigniter/application/views/create/create.php file and add the following code to it: <div class="page-header"> <h1><?php echo $this->lang->line('system_system_name');     ?></h1> </div>   <p><?php echo $this->lang->line('encode_instruction_1');   ?></p>   <?php echo validation_errors(); ?>   <?php if (isset($success) && $success == true) : ?> <div class="alert alert-success">    <strong><?php echo $this->lang->line('     common_form_elements_success_notifty'); ?></strong>     <?php echo $this->lang->     line('encode_encode_now_success'); ?> </div> <?php endif ; ?>   <?php if (isset($fail) && $fail == true) : ?> <div class="alert alert-danger">    <strong><?php echo $this->lang->line('     common_form_elements_error_notifty'); ?> </strong>     <?php echo $this->lang->line('encode_encode_now_error     '); ?>    <?php echo $fail ; ?> </div> <?php endif ; ?>   <?php echo form_open_multipart('create/do_upload');?> <input type="file" name="userfile" size="20" /> <br /> <input type="submit" value="upload" /> <?php echo form_close() ; ?> <br /> <?php if (isset($result) && $result == true) : ?> <div class="alert alert-info">    <strong><?php echo $this->lang->line('     encode_upload_url'); ?> </strong>    <?php echo anchor($result, $result) ; ?> </div> <?php endif ; ?> This view file can be thought of as the main view file; it is here that the user can upload their image. Error messages are displayed here too. Create the /path/to/codeigniter/application/views/create/result.php file and add the following code to it: <div class="page-header"> <h1><?php echo $this->lang->line('system_system_name');     ?></h1> </div>   <?php if (isset($result) && $result == true) : ?>    <strong><?php echo $this->lang->line('     encode_encoded_url'); ?> </strong>    <?php echo anchor($result, $result) ; ?>    <br />    <img src="<?php echo base_url() . 'upload/' .       $img_dir_name . '/' . $file_name ;?>" /> <?php endif ; ?> This view will display the encoded image resource URL to the user (so they can copy and share it) and the actual image itself. Create the /path/to/codeigniter/application/views/nav/top_nav.php file and add the following code to it: <!-- Fixed navbar --> <div class="navbar navbar-inverse navbar-fixed-top"   role="navigation"> <div class="container">    <div class="navbar-header">      <button type="button" class="navbar-toggle" data- toggle="collapse" data-target=".navbar-collapse">        <span class="sr-only">Toggle navigation</span>        <span class="icon-bar"></span>        <span class="icon-bar"></span>        <span class="icon-bar"></span>      </button>      <a class="navbar-brand" href="#"><?php echo $this-       >lang->line('system_system_name'); ?></a>  </div>    <div class="navbar-collapse collapse">      <ul class="nav navbar-nav">        <li class="active"><?php echo anchor('create',           'Create') ; ?></li>      </ul>    </div><!--/.nav-collapse --> </div> </div>   <div class="container theme-showcase" role="main"> This view is quite basic but still serves an important role. It displays an option to return to the index() function of the create controller. Creating the controllers We're going to create two controllers in this project, which are as follows: /path/to/codeigniter/application/controllers/create.php: This handles the creation of unique folders to store images and performs the upload of a file. /path/to/codeigniter/application/controllers/go.php: This fetches the unique code from the database, and returns any image associated with that code. 
These are two of our controllers for this project, let's now go ahead and create them. Create the /path/to/codeigniter/application/controllers/create.php file and add the following code to it: <?php if (!defined('BASEPATH')) exit('No direct script access   allowed');   class Create extends MY_Controller { function __construct() {    parent::__construct();      $this->load->helper(array('string'));      $this->load->library('form_validation');      $this->load->library('image_lib');      $this->load->model('Image_model');      $this->form_validation->set_error_delimiters('<div         class="alert alert-danger">', '</div>');    }   public function index() {    $page_data = array('fail' => false,                        'success' => false);    $this->load->view('common/header');    $this->load->view('nav/top_nav');    $this->load->view('create/create', $page_data);    $this->load->view('common/footer'); }   public function do_upload() {    $upload_dir = '/filesystem/path/to/upload/folder/';    do {      // Make code      $code = random_string('alnum', 8);        // Scan upload dir for subdir with same name      // name as the code      $dirs = scandir($upload_dir);        // Look to see if there is already a      // directory with the name which we      // store in $code      if (in_array($code, $dirs)) { // Yes there is        $img_dir_name = false; // Set to false to begin again      } else { // No there isn't        $img_dir_name = $code; // This is a new name      }      } while ($img_dir_name == false);      if (!mkdir($upload_dir.$img_dir_name)) {      $page_data = array('fail' => $this->lang->       line('encode_upload_mkdir_error'),                          'success' => false);      $this->load->view('common/header');      $this->load->view('nav/top_nav');      $this->load->view('create/create', $page_data);      $this->load->view('common/footer');    }      $config['upload_path'] = $upload_dir.$img_dir_name;    $config['allowed_types'] = 'gif|jpg|jpeg|png';    $config['max_size'] = '10000';    $config['max_width'] = '1024';    $config['max_height'] = '768';      $this->load->library('upload', $config);      if ( ! $this->upload->do_upload()) {      $page_data = array('fail' => $this->upload->       display_errors(),                          'success' => false);      $this->load->view('common/header');      $this->load->view('nav/top_nav');      $this->load->view('create/create', $page_data);       $this->load->view('common/footer');    } else {      $image_data = $this->upload->data();      $page_data['result'] = $this->Image_model->save_image(       array('image_name' => $image_data['file_name'],         'img_dir_name' => $img_dir_name));    $page_data['file_name'] = $image_data['file_name'];      $page_data['img_dir_name'] = $img_dir_name;        if ($page_data['result'] == false) {        // success - display image and link        $page_data = array('fail' => $this->lang->         line('encode_upload_general_error'));        $this->load->view('common/header');        $this->load->view('nav/top_nav');        $this->load->view('create/create', $page_data);        $this->load->view('common/footer');      } else {        // success - display image and link        $this->load->view('common/header');        $this->load->view('nav/top_nav');        $this->load->view('create/result', $page_data);        $this->load->view('common/footer');      }    } } } Let's start with the index() function. The index() function sets the fail and success elements of the $page_data array to false. 
This will suppress any initial messages from being displayed to the user. The views are then loaded, specifically the create/create.php view, which contains the image upload form's HTML markup. Once the user submits the form in create/create.php, the form is submitted to the do_upload() function of the create controller. It is this function that performs the task of uploading the image to the server. First off, do_upload() defines an initial location for the upload folder. This is stored in the $upload_dir variable. Next, we move into a do…while structure. It looks something like this:

do {
  // something
} while ('…a condition is not met');

That is, keep doing something for as long as a condition is not met. Now with that in mind, think about our problem: we have to save the image being uploaded in a folder, and that folder must have a unique name. So what we will do is generate a random string of eight alphanumeric characters and then look to see if a folder already exists with that name. Keeping that in mind, let's look at the code in detail:

do {
  // Make code
  $code = random_string('alnum', 8);

  // Scan upload dir for a subdir with the same
  // name as the code
  $dirs = scandir($upload_dir);

  // Look to see if there is already a
  // directory with the name which we
  // store in $code
  if (in_array($code, $dirs)) { // Yes there is
    $img_dir_name = false; // Set to false to begin again
  } else { // No there isn't
    $img_dir_name = $code; // This is a new name
  }
} while ($img_dir_name == false);

So we make a string of eight characters, containing only alphanumeric characters, using the following line of code:

$code = random_string('alnum', 8);

We then use the PHP function scandir() to look in $upload_dir. This stores all directory names in the $dirs variable, as follows:

$dirs = scandir($upload_dir);

We then use the PHP function in_array() to look for the value of $code in the list of directories returned by scandir(). If we don't find a match, then the value in $code must not be taken, so we'll go with that. If the value is found, then we set $img_dir_name to false, which is picked up by the final line of the do…while loop:

...
} while ($img_dir_name == false);

Anyway, now that we have our unique folder name, we'll attempt to create it. We use the PHP function mkdir(), passing to it $upload_dir concatenated with $img_dir_name. If mkdir() returns false, the form is displayed again along with the encode_upload_mkdir_error message set in the language file, as shown here:

if (!mkdir($upload_dir.$img_dir_name)) {
  $page_data = array('fail' => $this->lang->line('encode_upload_mkdir_error'),
                     'success' => false);
  $this->load->view('common/header');
  $this->load->view('nav/top_nav');
  $this->load->view('create/create', $page_data);
  $this->load->view('common/footer');
}

Once the folder has been made, we then set the configuration variables for the upload process, as follows:

$config['upload_path'] = $upload_dir.$img_dir_name;
$config['allowed_types'] = 'gif|jpg|jpeg|png';
$config['max_size'] = '10000';
$config['max_width'] = '1024';
$config['max_height'] = '768';

Here we are specifying that we only want to upload .gif, .jpg, .jpeg, and .png files. We also specify that an image cannot be above 10,000 KB in size (although you can set this to any value you wish; remember to adjust the upload_max_filesize and post_max_size PHP settings in your php.ini file if you want to allow a really big file). We also set the maximum dimensions that an image can have.
As with the file size, you can adjust these as you wish. We then load the upload library, passing to it the configuration settings, as shown here:

$this->load->library('upload', $config);

Next we attempt to do the upload. If unsuccessful, the CodeIgniter function $this->upload->do_upload() will return false. We look for this and reload the upload page if it does return false. We also pass the specific error as the reason why it failed. This error is stored in the fail item of the $page_data array. This can be done as follows:

if ( ! $this->upload->do_upload()) {
  $page_data = array('fail' => $this->upload->display_errors(),
                     'success' => false);
  $this->load->view('common/header');
  $this->load->view('nav/top_nav');
  $this->load->view('create/create', $page_data);
  $this->load->view('common/footer');
} else {
...

If, however, it did not fail, we grab the information generated by CodeIgniter from the upload. We store this in the $image_data array, as follows:

$image_data = $this->upload->data();

Then we try to store a record of the upload in the database. We call the save_image() function of Image_model, passing to it file_name from the $image_data array, as well as $img_dir_name, as shown here:

$page_data['result'] = $this->Image_model->save_image(array('image_name' => $image_data['file_name'],
                                                            'img_dir_name' => $img_dir_name));

We then test the return value of the save_image() function: if it is successful, Image_model returns the unique URL code generated in the model; if it is unsuccessful, Image_model returns the Boolean false. If false is returned, the form is loaded with a general error. If successful, the create/result.php view file is loaded. We pass to it the unique URL code (for the link the user needs), plus the folder name and image name necessary to display the image correctly.

Create the /path/to/codeigniter/application/controllers/go.php file and add the following code to it:

<?php if (!defined('BASEPATH')) exit('No direct script access allowed');

class Go extends MY_Controller {

  function __construct() {
    parent::__construct();
    $this->load->helper('string');
  }

  public function index() {
    if (!$this->uri->segment(1)) {
      redirect (base_url());
    } else {
      $image_code = $this->uri->segment(1);
      $this->load->model('Image_model');
      $query = $this->Image_model->fetch_image($image_code);

      if ($query->num_rows() == 1) {
        foreach ($query->result() as $row) {
          $img_image_name = $row->img_image_name;
          $img_dir_name = $row->img_dir_name;
        }

        $url_address = base_url() . 'upload/' . $img_dir_name . '/' . $img_image_name;
        redirect (prep_url($url_address));
      } else {
        redirect('create');
      }
    }
  }
}

The go controller has only one main function, index(). It is called when a user clicks on a URL or a URL is requested in some other way (perhaps as the src value of an HTML img tag). Here we grab the unique code that was generated and assigned to an image when it was uploaded in the create controller. This code is in the first segment of the URI. Usually it would occupy the third segment, with the first and second segments normally used to specify the controller and the controller function respectively. However, we have changed this behavior using CodeIgniter routing. This is explained fully in the Adjusting the routes.php file section of this article.
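For reference, the two routing rules this behavior relies on (both are quoted again in the Putting it all together section later) would sit in /path/to/codeigniter/application/config/routes.php roughly as follows. This is only a sketch; any other entries your routes.php may contain (such as default_controller) are left out here:

$route['create'] = "create/index";
// Catch-all rule: anything that is not 'create' is treated as an image code
// and passed to the go controller's index() function
$route['(:any)'] = "go/index";

Note that the order matters: the more specific create rule must appear before the (:any) catch-all, otherwise every request would be routed to the go controller.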
Once we have the unique code, we pass it to the fetch_image() function of Image_model:

$image_code = $this->uri->segment(1);
$this->load->model('Image_model');
$query = $this->Image_model->fetch_image($image_code);

We then test what is returned: we check whether the number of rows returned equals exactly 1. If not, we redirect to the create controller. You may not want this behavior; perhaps you would rather do nothing if the number of rows returned does not equal 1. For example, if the requested image is referenced in an HTML img tag and the image is not found, a redirect would send the visitor away from the site they are viewing to the upload page of this project, which you might not want to happen. If you want to remove this functionality, remove the else branch (the block containing redirect('create')) from the following code excerpt:

....
          $img_dir_name = $row->img_dir_name;
        }

        $url_address = base_url() . 'upload/' . $img_dir_name . '/' . $img_image_name;
        redirect (prep_url($url_address));
      } else {
        redirect('create');
      }
    }
  }
}
....

Anyway, if the returned value is exactly 1, then we loop over the returned database object and find img_image_name and img_dir_name, which we need to locate the image in the upload folder on the disk. This can be done as follows:

foreach ($query->result() as $row) {
  $img_image_name = $row->img_image_name;
  $img_dir_name = $row->img_dir_name;
}

We then build the address of the image file and redirect the browser to it, as follows:

$url_address = base_url() . 'upload/' . $img_dir_name . '/' . $img_image_name;
redirect (prep_url($url_address));

Creating the language file

We make use of the language file to serve text to users. In this way, you can enable multiple region/multiple language support. Create the /path/to/codeigniter/application/language/english/en_admin_lang.php file and add the following code to it:

<?php if (!defined('BASEPATH')) exit('No direct script access allowed');

// General
$lang['system_system_name'] = "Image Share";

// Upload
$lang['encode_instruction_1'] = "Upload your image to share it";
$lang['encode_upload_now'] = "Share Now";
$lang['encode_upload_now_success'] = "Your image was uploaded, you can share it with this URL";
$lang['encode_upload_url'] = "Hey look at this, here's your image:";
$lang['encode_upload_mkdir_error'] = "Cannot make temp folder";
$lang['encode_upload_general_error'] = "The Image cannot be saved at this time";

Putting it all together

Let's look at how the user uploads an image. The following is the sequence of events:

CodeIgniter looks in the routes.php config file and finds the following line:
$route['create'] = "create/index";
It directs the request to the create controller's index() function.
The index() function loads the create/create.php view file, which displays the upload form to the user.
The user clicks on the Choose file button, navigates to the image file they wish to upload, and selects it.
The user presses the Upload button and the form is submitted to the create controller's do_upload() function.
The do_upload() function creates a folder in the main upload directory to store the image in, then does the actual upload.
On a successful upload, do_upload() sends the details of the upload (the new folder name and image name) to the save_image() model function.
The save_image() function also creates a unique code and saves it in the images table along with the folder name and image name passed to it by the create controller.
The unique code generated during the database insert is then returned to the controller and passed to the result view, where it will form part of a success message to the user. Now, let's see how an image is viewed (or fetched). The following is the sequence of events: A URL with the syntax www.domain.com/226KgfYH comes into the application—either when someone clicks on a link or some other call (<img src="">). CodeIgniter looks in the routes.php config file and finds the following line: $route['(:any)'] = "go/index"; As the incoming request does not match the other two routes, the preceding route is the one CodeIgniter applies to this request. The go controller is called and the code of 226KgfYH is passed to it as the 1st segment of uri. The go controller passes this to the fetch_image() function of the Image_model.php file. The fetch_image() function will attempt to find a matching record in the database. If found, it returns the folder name marking the saved location of the image, and its filename. This is returned and the path to that image is built. CodeIgniter then redirects the user to that image, that is, supplies that image resource to the user that requested it. Summary So here we have a basic image sharing application. It is capable of accepting a variety of images and assigning them to records in a database and unique folders in the filesystem. This is interesting as it leaves things open to you to improve on. For example, you can do the following: You can add limits on views. As the image record is stored in the database, you could adapt the database. Adding two columns called img_count and img_count_limit, you could allow a user to set a limit for the number of views per image and stop providing that image when that limit is met. You can limit views by date. Similar to the preceding point, but you could limit image views to set dates. You can have different URLs for different dimensions. You could add functionality to make several dimensions of image based on the initial upload, offering several different URLs for different image dimensions. You can report abuse. You could add an option allowing viewers of images to report unsavory images that might be uploaded. You can have terms of service. If you are planning on offering this type of application as an actual web service that members of the public could use, then I strongly recommend you add a terms of service document, perhaps even require that people agree to terms before they upload an image. In those terms, you'll want to mention that in order for someone to use the service, they first have to agree that they do not upload and share any images that could be considered illegal. You should also mention that you'll cooperate with any court if information is requested of you. You really don't want to get into trouble for owning or running a web service that stores unpleasant images; as much as possible you want to make your limits of liability clear and emphasize that it is the uploader who has provided the images. Resources for Article: Further resources on this subject: UCodeIgniter MVC – The Power of Simplicity! [article] Navigating Your Site using CodeIgniter 1.7: Part 1 [article] Navigating Your Site using CodeIgniter 1.7: Part 2 [article]

WebSockets in Wildfly

Packt
30 Dec 2014
22 min read
In this article by the authors, Michał Ćmil and Michał Matłoka, of Java EE 7 Development with WildFly, we will cover WebSockets and how they are one of the biggest additions in Java EE 7. In this article, we will explore the new possibilities that they provide to a developer. In our ticket booking applications, we already used a wide variety of approaches to inform the clients about events occurring on the server side. These include the following: JSF polling Java Messaging Service (JMS) messages REST requests Remote EJB requests All of them, besides JMS, were based on the assumption that the client will be responsible for asking the server about the state of the application. In some cases, such as checking if someone else has not booked a ticket during our interaction with the application, this is a wasteful strategy; the server is in the position to inform clients when it is needed. What's more, it feels like the developer must hack the HTTP protocol to get a notification from a server to the client. This is a requirement that has to be implemented in most nontrivial web applications, and therefore, deserves a standardized solution that can be applied by the developers in multiple projects without much effort. WebSockets are changing the game for developers. They replace the request-response paradigm in which the client always initiates the communication with a two-point bidirectional messaging system. After the initial connection, both sides can send independent messages to each other as long as the session is alive. This means that we can easily create web applications that will automatically refresh their state with up-to-date data from the server. You probably have already seen this kind of behavior in Google Docs or live broadcasts on news sites. Now we can achieve the same effect in a simpler and more efficient way than in earlier versions of Java Enterprise Edition. In this article, we will try to leverage these new, exciting features that come with WebSockets in Java EE 7 thanks to JSR 356 (https://jcp.org/en/jsr/detail?id=356) and HTML5. In this article, you will learn the following topics: How WebSockets work How to create a WebSocket endpoint in Java EE 7 How to create an HTML5/AngularJS client that will accept push notifications from an application deployed on WildFly (For more resources related to this topic, see here.) An overview of WebSockets A WebSocket session between the client and server is built upon a standard TCP connection. Although the WebSocket protocol has its own control frames (mainly to create and sustain the connection) coded by the Internet Engineering Task Force in the RFC 6455 (http://tools.ietf.org/html/rfc6455), whose peers are not obliged to use any specific format to exchange application data. You may use plaintext, XML, JSON, or anything else to transmit your data. As you probably remember, this is quite different from SOAP-based WebServices, which had bloated specifications of the exchange protocol. The same goes for RESTful architectures; we no longer have the predefined verb methods from HTTP (GET, PUT, POST, and DELETE), status codes, and the whole semantics of an HTTP request. This liberty means that WebSockets are pretty low level compared to the technologies that we used up to this point, but thanks to this, the communication overhead is minimal. The protocol is less verbose than SOAP or RESTful HTTP, which allows us to achieve higher performance. This, however, comes with a price. 
We usually like to use the features of higher-level protocols (such as horizontal scaling and rich URL semantics), and with WebSockets, we would need to write them by hand. For standard CRUD-like operations, it would be easier to use a REST endpoint than create everything from scratch. What do we get from WebSockets compared to the standard HTTP communication? First of all, a direct connection between two peers. Normally, when you connect to a web server (which can, for instance, handle a REST endpoint), every subsequent call is a new TCP connection, and your machine is treated like it is a different one every time you make a request. You can, of course, simulate a stateful behavior (so that the server would recognize your machine between different requests) using cookies and increase the performance by reusing the same connection in a short period of time for a specific client, but basically, it is a workaround to overcome the limitations of the HTTP protocol. Once you establish a WebSocket connection between a server and client, you can use the same session (and underlying TCP connection) during the whole communication. Both sides are aware of it, and can send data independently in a full-duplex manner (both sides can send and receive data simultaneously). Using plain HTTP, there is no way for the server to spontaneously start sending data to the client without any request from its side. What's more, the server is aware of all of its WebSocket clients connected, and can even send data between them! The current solution that includes trying to simulate real-time data delivery using HTTP protocol can put a lot of stress on the web server. Polling (asking the server about updates), long polling (delaying the completion of a request to the moment when an update is ready), and streaming (a Comet-based solution with a constantly open HTTP response) are all ways to hack the protocol to do things that it wasn't designed for and have their own limitations. Thanks to the elimination of unnecessary checks, WebSockets can heavily reduce the number of HTTP requests that have to be handled by the web server. The updates are delivered to the user with a smaller latency because we only need one round-trip through the network to get the desired information (it is pushed by the server immediately). All of these features make WebSockets a great addition to the Java EE platform, which fills the gaps needed to easily finish specific tasks, such as sending updates, notifications, and orchestrating multiple client interactions. Despite these advantages, WebSockets are not intended to replace REST or SOAP WebServices. They do not scale so well horizontally (they are hard to distribute because of their stateful nature), and they lack most of the features that are utilized in web applications. URL semantics, complex security, compression, and many other features are still better realized using other technologies. How does WebSockets work To initiate a WebSocket session, the client must send an HTTP request with an upgraded, WebSocket header field. This informs the server that the peer client has asked the server to switch to the WebSocket protocol. You may notice that the same happens in WildFly for Remote EJBs; the initial connection is made using an HTTP request, and is later switched to the remote protocol thanks to the Upgrade mechanism. The standard Upgrade header field can be used to handle any protocol, other than HTTP, which is accepted by both sides (the client and server). 
In WildFly, this allows to reuse the HTTP port (80/8080) for other protocols and therefore, minimise the number of required ports that should be configured. If the server can understand the WebSocket protocol, the client and server then proceed with the handshaking phase. They negotiate the version of the protocol, exchange security keys, and if everything goes well, the peers can go to the data transfer phase. From now on, the communication is only done using the WebSocket protocol. It is not possible to exchange any HTTP frames using the current connection. The whole life cycle of a connection can be summarized in the following diagram: A sample HTTP request from a JavaScript application to a WildFly server would look similar to this: GET /ticket-agency-websockets/tickets HTTP/1.1 Upgrade: websocket Connection: Upgrade Host: localhost:8080 Origin: http://localhost:8080 Pragma: no-cache Cache-Control: no-cache Sec-WebSocket-Key: TrjgyVjzLK4Lt5s8GzlFhA== Sec-WebSocket-Version: 13 Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits, x-webkit-deflate-frame User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36 Cookie: [45 bytes were stripped] We can see that the client requests an upgrade connection with WebSocket as the target protocol on the URL /ticket-agency-websockets/tickets. It additionally passes information about the requested version and key. If the server supports the request protocol and all the required data is passed by the client, then it would respond with the following frame: HTTP/1.1 101 Switching Protocols X-Powered-By: Undertow 1 Server: Wildfly 8 Origin: http://localhost:8080 Upgrade: WebSocket Sec-WebSocket-Accept: ZEAab1TcSQCmv8RsLHg4RL/TpHw= Date: Sun, 13 Apr 2014 17:04:00 GMT Connection: Upgrade Sec-WebSocket-Location: ws://localhost:8080/ticket-agency-websockets/tickets Content-Length: 0 The status code of the response is 101 (switching protocols) and we can see that the server is now going to start using the WebSocket protocol. The TCP connection initially used for the HTTP request is now the base of the WebSocket session and can be used for transmissions. If the client tries to access a URL, which is only handled by another protocol, then the server can ask the client to do an upgrade request. The server uses the 426 (upgrade required) status code in such cases. The initial connection creation has some overhead (because of the HTTP frames that are exchanged between the peers), but after it is completed, new messages have only 2 bytes of additional headers. This means that when we have a large number of small messages, WebSocket will be an order of magnitude faster than REST protocols simply because there is less data to transmit! If you are wondering about the browser support of WebSockets, you can look it up at http://caniuse.com/websockets. All new versions of major browsers currently support WebSockets; the total coverage is estimated (at the time of writing) at 74 percent. You can see it in the following screenshot: After this theoretical introduction, we are ready to jump into action. We can now create our first WebSocket endpoint! 
Creating our first endpoint

Let's start with a simple example:

package com.packtpub.wflydevelopment.chapter8.boundary;

import javax.websocket.EndpointConfig;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;

@ServerEndpoint("/hello")
public class HelloEndpoint {

   @OnOpen
   public void open(Session session, EndpointConfig conf) throws IOException {
       session.getBasicRemote().sendText("Hi!");
   }
}

The Java EE 7 specification has taken developer friendliness into account, which can be clearly seen in the given example. In order to define your WebSocket endpoint, you just need a few annotations on a Plain Old Java Object (POJO). The first annotation, @ServerEndpoint("/hello"), defines the path to your endpoint. It's a good time to discuss the endpoint's full address. We placed this sample in the application named ticket-agency-websockets. During deployment of the application, you can spot information in the WildFly log about endpoint creation, as shown in the following command line:

02:21:35,182 INFO [io.undertow.websockets.jsr] (MSC service thread 1-7)UT026003: Adding annotated server endpoint class com.packtpub.wflydevelopment.chapter8.boundary.FirstEndpoint for path /hello
02:21:35,401 INFO [org.jboss.resteasy.spi.ResteasyDeployment](MSC service thread 1-7) Deploying javax.ws.rs.core.Application: classcom.packtpub.wflydevelopment.chapter8.webservice.JaxRsActivator$Proxy$_$$_WeldClientProxy
02:21:35,437 INFO [org.wildfly.extension.undertow](MSC service thread 1-7) JBAS017534: Registered web context:/ticket-agency-websockets

The full URL of the endpoint is ws://localhost:8080/ticket-agency-websockets/hello, which is just a concatenation of the server and application address with the endpoint path on the appropriate protocol. The second annotation used, @OnOpen, defines the endpoint behavior when the connection from the client is opened. It's not the only behavior-related annotation of a WebSocket endpoint; the full set is as follows:

@OnOpen: The connection is open. With this annotation, we can use the Session and EndpointConfig parameters. The first parameter represents the connection to the user and allows further communication. The second one provides some client-related information.
@OnMessage: This annotation is executed when a message from the client is received. In such a method, you can have a Session and, for example, a String parameter, where the String parameter represents the received message.
@OnError: There are bad times when some errors occur. With this annotation, you can retrieve a Throwable object apart from the standard Session.
@OnClose: When the connection is closed, it is possible to get some data concerning this event in the form of a CloseReason type object.

There is one more interesting line in our HelloEndpoint. Using the Session object, it is possible to communicate with the client. This clearly shows that in WebSockets, two-directional communication is easily possible. In this example, we decided to respond to a connected user synchronously (getBasicRemote()) with just a text message Hi! (sendText(String)). Of course, it's also possible to communicate asynchronously and to send, for example, binary messages using your own bandwidth-saving binary protocol. We will present some of these processes in the next example.

Expanding our client application

It's time to show how you can leverage the WebSocket features in real life.
We created the ticket booking application based on the REST API and AngularJS framework. It was clearly missing one important feature; the application did not show information concerning ticket purchases of other users. This is a perfect use case for WebSockets! Since we're just adding a feature to our previous app, we will describe the changes we will introduce to it. In this example, we would like to be able to inform all current users about other purchases. This means that we have to store information about active sessions. Let's start with the registry type object, which will serve this purpose. We can use a Singleton session bean for this task, as shown in the following code: @Singleton public class SessionRegistry {    private final Set<Session> sessions = new HashSet<>();    @Lock(LockType.READ)    public Set<Session> getAll() {        return Collections.unmodifiableSet(sessions);    }    @Lock(LockType.WRITE)    public void add(Session session) {        sessions.add(session);    }    @Lock(LockType.WRITE)    public void remove(Session session) {        sessions.remove(session);    } } We could use Collections.synchronizedSet from standard Java libraries but it's a great chance to remember what we described earlier about container-based concurrency. In SessionRegistry, we defined some basic methods to add, get, and remove sessions. For the sake of collection thread safety during retrieval, we return an unmodifiable view. We defined the registry, so now we can move to the endpoint definition. We will need a POJO, which will use our newly defined registry as shown: @ServerEndpoint("/tickets") public class TicketEndpoint {    @Inject    private SessionRegistry sessionRegistry;    @OnOpen    public void open(Session session, EndpointConfig conf) {        sessionRegistry.add(session);    }    @OnClose    public void close(Session session, CloseReason reason) {        sessionRegistry.remove(session);    }    public void send(@Observes Seat seat) {        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendText(toJson(seat)));    }    private String toJson(Seat seat) {        final JsonObject jsonObject = Json.createObjectBuilder()                .add("id", seat.getId())                .add("booked", seat.isBooked())                .build();        return jsonObject.toString();    } } Our endpoint is defined in the /tickets address. We injected a SessionRepository to our endpoint. During @OnOpen, we add Sessions to the registry, and during @OnClose, we just remove them. Message sending is performed on the CDI event (the @Observers annotation), which is already fired in our code during TheatreBox.buyTicket(int). In our send method, we retrieve all sessions from SessionRepository, and for each of them, we asynchronously send information about booked seats. We don't really need information about all the Seat fields to realize this feature. That's the reason why we don't use the automatic JSON serialization here. Instead, we decided to use a minimalistic JSON object, which provides only the required data. To do this, we used the new Java API for JSON Processing (JSR-353). Using a fluent-like API, we're able to create a JSON object and add two fields to it. Then, we just convert JSON to the String, which is sent in a text message. Because in our example we send messages in response to a CDI event, we don't have (in the event handler) an out-of-the-box reference to any of the sessions. We have to use our sessionRegistry object to access the active ones. 
However, if we would like to do the same thing but, for example, in the @OnMessage method, then it is possible to get all active sessions just by executing the session.getOpenSessions() method. These are all the changes required to perform on the backend side. Now, we have to modify our AngularJS frontend to leverage the added feature. The good news is that JavaScript already includes classes that can be used to perform WebSocket communication! There are a few lines of code we have to add inside the module defined in the seat.js file, which are as follows: var ws = new WebSocket("ws://localhost:8080/ticket-agency-websockets/tickets"); ws.onmessage = function (message) {    var receivedData = message.data;    var bookedSeat = JSON.parse(receivedData);    $scope.$apply(function () {        for (var i = 0; i < $scope.seats.length; i++) {           if ($scope.seats[i].id === bookedSeat.id) {                $scope.seats[i].booked = bookedSeat.booked;                break;            }        }    }); }; The code is very simple. We just create the WebSocket object using the URL to our endpoint, and then we define the onmessage function in that object. During the function execution, the received message is automatically parsed from the JSON to JavaScript object. Then, in $scope.$apply, we just iterate through our seats, and if the ID matches, we update the booked state. We have to use $scope.$apply because we are touching an Angular object from outside the Angular world (the onmessage function). Modifications performed on $scope.seats are automatically visible on the website. With this, we can just open our ticket booking website in two browser sessions, and see that when one user buys a ticket, the second users sees almost instantly that the seat state is changed to booked. We can enhance our application a little to inform users if the WebSocket connection is really working. Let's just define onopen and onclose functions for this purpose: ws.onopen = function (event) {    $scope.$apply(function () {        $scope.alerts.push({            type: 'info',            msg: 'Push connection from server is working'        });    }); }; ws.onclose = function (event) {    $scope.$apply(function () {        $scope.alerts.push({            type: 'warning',            msg: 'Error on push connection from server '        });    }); }; To inform users about a connection's state, we push different types of alerts. Of course, again we're touching the Angular world from the outside, so we have to perform all operations on Angular from the $scope.$apply function. Running the described code results in the notification, which is visible in the following screenshot: However, if the server fails after opening the website, you might get an error as shown in the following screenshot: Transforming POJOs to JSON In our current example, we transformed our Seat object to JSON manually. Normally, we don't want to do it this way; there are many libraries that will do the transformation for us. One of them is GSON from Google. Additionally, we can register an encoder/decoder class for a WebSocket endpoint that will do the transformation automatically. Let's look at how we can refactor our current solution to use an encoder. First of all, we must add GSON to our classpath. 
The required Maven dependency is as follows:

<dependency>
   <groupId>com.google.code.gson</groupId>
   <artifactId>gson</artifactId>
   <version>2.3</version>
</dependency>

Next, we need to provide an implementation of the javax.websocket.Encoder.Text interface. There are also versions of the javax.websocket.Encoder interface for binary and streamed data (in both binary and text formats). A corresponding hierarchy of interfaces is also available for decoders (javax.websocket.Decoder). Our implementation is rather simple. This is shown in the following code snippet:

public class JSONEncoder implements Encoder.Text<Object> {

   private Gson gson;

   @Override
   public void init(EndpointConfig config) {
       gson = new Gson(); [1]
   }

   @Override
   public void destroy() {
       // do nothing
   }

   @Override
   public String encode(Object object) throws EncodeException {
       return gson.toJson(object); [2]
   }
}

First, we create an instance of Gson in the init method; this action is executed when the endpoint is created. Next, in the encode method, which is called every time we send an object through the endpoint, we use Gson to create JSON from the object. This is quite concise when we think how reusable this little class is. If you want more control over the JSON generation process, you can use the GsonBuilder class to configure the Gson object before it is created. We have the encoder in place. Now it's time to alter our endpoint:

@ServerEndpoint(value = "/tickets", encoders={JSONEncoder.class})[1]
public class TicketEndpoint {

   @Inject
   private SessionRegistry sessionRegistry;

   @OnOpen
   public void open(Session session, EndpointConfig conf) {
       sessionRegistry.add(session);
   }

   @OnClose
   public void close(Session session, CloseReason reason) {
       sessionRegistry.remove(session);
   }

   public void send(@Observes Seat seat) {
       sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendObject(seat)); [2]
   }
}

The first change is done on the @ServerEndpoint annotation. We have to define a list of supported encoders; we simply pass our JSONEncoder.class wrapped in an array. Additionally, we have to pass the endpoint name using the value attribute. Earlier, we used the sendText method to pass a string containing manually created JSON. Now, we want to send an object and let the encoder handle the JSON generation; therefore, we'll use the getAsyncRemote().sendObject() method. That's all! Our endpoint is ready to be used. It will work the same as the earlier version, but now our objects will be fully serialized to JSON, so they will contain every field, not only id and booked. After deploying the server, you can connect to the WebSocket endpoint using one of the Chrome extensions, for instance, the Dark WebSocket terminal from the Chrome store (use the ws://localhost:8080/ticket-agency-websockets/tickets address). When you book tickets using the web application, the WebSocket terminal should show something similar to the output shown in the following screenshot:

Of course, it is possible to use formats other than JSON. If you want to achieve better performance (when it comes to serialization time and payload size), you may want to try out binary serializers such as Kryo (https://github.com/EsotericSoftware/kryo). They may not be supported by JavaScript, but may come in handy if you would like to use WebSockets for other clients too.
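The decoder side of the API is only mentioned in passing above, so here is a minimal sketch of what a matching text decoder could look like. It is not part of the book's example project; it assumes the same Seat class and Gson dependency used earlier and simply mirrors JSONEncoder:

import javax.websocket.DecodeException;
import javax.websocket.Decoder;
import javax.websocket.EndpointConfig;

import com.google.gson.Gson;
import com.google.gson.JsonSyntaxException;

public class JSONDecoder implements Decoder.Text<Seat> {

   private Gson gson;

   @Override
   public void init(EndpointConfig config) {
       gson = new Gson(); // created once, when the endpoint is deployed
   }

   @Override
   public void destroy() {
       // nothing to clean up
   }

   @Override
   public Seat decode(String s) throws DecodeException {
       // Seat is assumed to expose fields matching the JSON payload (id, booked)
       return gson.fromJson(s, Seat.class);
   }

   @Override
   public boolean willDecode(String s) {
       // decode() is only invoked for messages that pass this check
       try {
           gson.fromJson(s, Seat.class);
           return true;
       } catch (JsonSyntaxException e) {
           return false;
       }
   }
}

Such a class would be registered alongside the encoder, for example with @ServerEndpoint(value = "/tickets", encoders = {JSONEncoder.class}, decoders = {JSONDecoder.class}), after which an @OnMessage method could accept a Seat parameter directly instead of a raw String.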
Tyrus (https://tyrus.java.net/) is a reference implementation of the WebSocket standard for Java; you can use it in your standalone desktop applications. In that case, besides the encoder (which is used to send messages), you would also need to create a decoder, which can automatically transform incoming messages. An alternative to WebSockets The example we presented in this article is possible to be implemented using an older, lesser-known technology named Server-Sent Events (SSE). SSE allows for one-way communication from the server to client over HTTP. It is much simpler than WebSockets but has a built-in support for things such as automatic reconnection and event identifiers. WebSockets are definitely more powerful, but are not the only way to pass events, so when you need to implement some notifications from the server side, remember about SSE. Another option is to explore the mechanisms oriented around the Comet techniques. Multiple implementations are available and most of them use different methods of transportation to achieve their goals. A comprehensive comparison is available at http://cometdaily.com/maturity.html. Summary In this article, we managed to introduce the new low-level type of communication. We presented how it works underneath and compares to SOAP and REST introduced earlier. We also discussed how the new approach changes the development of web applications. Our ticket booking application was further enhanced to show users the changing state of the seats using push-like notifications. The new additions required very little code changes in our existing project when we take into account how much we are able to achieve with them. The fluent integration of WebSockets from Java EE 7 with the AngularJS application is another great showcase of flexibility, which comes with the new version of the Java EE platform. Resources for Article: Further resources on this subject: Various subsystem configurations [Article] Running our first web application [Article] Creating Java EE Applications [Article]

A ride through world's best ETL tool – Informatica PowerCenter

Packt
30 Dec 2014
25 min read
In this article, by Rahul Malewar, author of the book, Learning Informatica PowerCenter 9.x, we will go through the basics of Informatica PowerCenter. Informatica Corporation (Informatica), a multi-million dollar company incorporated in February 1993, is an independent provider of enterprise data integration and data quality software and services. The company enables a variety of complex enterprise data integration products, which include PowerCenter, Power Exchange, enterprise data integration, data quality, master data management, business to business (B2B) data exchange, application information lifecycle management, complex event processing, ultra messaging, and cloud data integration. Informatica PowerCenter is the most widely used tool of Informatica across the globe for various data integration processes. Informatica PowerCenter tool helps integration of data from almost any business system in almost any format. This flexibility of PowerCenter to handle almost any data makes it most widely used tool in the data integration world. (For more resources related to this topic, see here.) Informatica PowerCenter architecture PowerCenter has a service-oriented architecture that provides the ability to scale services and share resources across multiple machines. This lets you access the single licensed software installed on a remote machine via multiple machines. High availability functionality helps minimize service downtime due to unexpected failures or scheduled maintenance in the PowerCenter environment. Informatica architecture is divided into two sections: server and client. Server is the basic administrative unit of Informatica where we configure all services, create users, and assign authentication. Repository, nodes, Integration Service, and code page are some of the important services we configure while we work on the server side of Informatica PowerCenter. Client is the graphical interface provided to the users. Client includes PowerCenter Designer, PowerCenter Workflow Manager, PowerCenter Workflow Monitor, and PowerCenter Repository Manager. The best place to download the Informatica software for training purpose is from EDelivery (www.edelivery.com) website of Oracle. Once you download the files, start the extraction of the zipped files. After you finish extraction, install the server first and later client part of PowerCenter. For installation of Informatica PowerCenter, the minimum requirement is to have a database installed in your machine. Because Informatica uses the space from the Oracle database to store the system-related information and the metadata of the code, which you develop in client tool. Informatica PowerCenter client tools Informatica PowerCenter Designer client tool talks about working of the source files and source tables and similarly talks about working on targets. Designer tool allows import/create flat files and relational databases tables. Informatica PowerCenter allows you to work on both types of flat files, that is, delimited and fixed width files. In delimited files, the values are separated from each other by a delimiter. Any character or number can be used as delimiter but usually for better interpretation we use special characters as delimiter. In delimited files, the width of each field is not a mandatory option as each value gets separated by other using a delimiter. In fixed width files, the width of each field is fixed. The values are separated by each other by the fixed size of the column defined. 
There can be issues in extracting the data if the size of each column is not maintained properly. PowerCenter Designer tool allows you to create mappings using sources, targets, and transformations. Mappings contain source, target, and transformations linked to each other through links. The group of transformations which can be reused is called as mapplets. Mapplets are another important aspect of Informatica tool. The transformations are most important aspect of Informatica, which allows you to manipulate the data based on your requirements. There are various types of transformations available in Informatica PowerCenter. Every transformation performs specific functionality. Various transformations in Informatica PowerCenter The following are the various transformations in Informatica PowerCenter: Expression transformation is used for row-wise manipulation. For any type of manipulation you wish to do on an individual record, use Expression transformation. Expression transformation accepts the row-wise data, manipulates it, and passes to the target. The transformation receives the data from input port and it sends the data out from output ports. Use the Expression transformation for any row-wise calculation, like if you want to concatenate the names, get total salary, and convert in upper case. Aggregator transformation is used for calculations using aggregate functions on a column as against in the Expression transformation, which is used for row-wise manipulation. You can use aggregate functions, such as SUM, AVG, MAX, MIN, and so on in Aggregator transformation. When you use Aggregator transformation, Integration Services stores the data temporarily in cache memory. Cache memory is created because the data flows in row-wise manner in Informatica and the calculations required in Aggregator transformation is column wise. Unless we store the data temporarily in cache, we cannot perform the aggregate calculations to get the result. Using Group By option in Aggregator transformation, you can get the result of the Aggregate function based on group. Also it is always recommended that we pass sorted input to Aggregator transformation as this will enhance the performance. When you pass the sorted input to Aggregator transformation, Integration Services enhances the performance by storing less data into cache. When you pass unsorted data, Aggregator transformation stores all the data into cache which takes more time. When you pass the sorted data to Aggregator transformation, Aggregator transformation stores comparatively lesser data in the cache. Aggregator passes the result of each group as soon the data for particular group is received. Note that Aggregator transformation does not sort the data. If you have unsorted data, use Sorter transformation to sort the data and then pass the sorted data to Aggregator transformation. Sorter transformation is used to sort the data in ascending or descending order based on single or multiple key. Apart from ordering the data in ascending or descending order, you can also use Sorter transformation to remove duplicates from the data using the distinct option in the properties. Sorter can remove duplicates only if complete record is duplicate and not only particular column. Filter transformation is used to remove unwanted records from the mapping. You define the Filter condition in the Filter transformation. Based on filter condition, the records will be rejected or passed further in mapping. The default condition in Filter transformation is TRUE. 
Based on the condition defined, if the record returns True, the Filter transformation allows the record to pass. For each record which returns False, the Filter transformation drops those records. It is always recommended to use Filter transformation as early as possible in the mapping for better performance. Router transformation is single input group multiple output group transformation. Router can be used in place of multiple Filter transformations. Router transformation accepts the data once through input group and based on the output groups you define, it sends the data to multiple output ports. You need to define the filter condition in each output group. It is always recommended to use Router in place of multiple filters in the mapping to enhance the performance. Rank transformation is used to get top or bottom specific number of records based on the key. When you create a Rank transformation, a default output port RANKINDEX comes with the transformation. It is not mandatory to use the RANKINDEX port. Sequence Generator transformation is used to generate sequence of unique numbers. Based on the property defined in the Sequence Generator transformation, the unique values are generated. You need to define the start value, the increment by value, and the end value in the properties. Sequence Generator transformation has only two ports: NEXTVAL and CURRVAL. Both the ports are output port. Sequence Generator does not have any input port. You cannot add or delete any port in Sequence Generator. It is recommended that you should always use the NEXTVAL port first. If the NEXTVAL port is utilized, use the CURRVAL port. You can define the value of CURRVAL in the properties of Sequence Generator transformation. Joiner transformation is used to join two heterogeneous sources. You can join data from same source type also. The basic criteria for joining the data are a matching column in both the source. Joiner transformation has two pipelines, one is called mater and other is called as detail. We do not have left or right join as we have in SQL database. It is always recommended to make table with lesser number of record as master and other one as details. This is because Integration Service picks the data from master source and scans the corresponding record in details table. So if we have lesser number of records in master table, lesser number of times the scanning will happen. This enhances the performance. Joiner transformation has four types of joins: normal join, full outer join, master outer join, details outer join. Union transformation is used the merge the data from multiple sources. Union is multiple input single output transformation. This is opposite of Router transformation, which we discussed earlier. The basic criterion for using Union transformation is that you should have data with matching data type. If you do not have data with matching data type coming from multiple sources, Union transformation will not work. Union transformation merges the data coming from multiple sources and do not remove duplicates, that is, it acts as UNION ALL of SQL statements. As mentioned earlier, Union requires data coming from multiple sources. Union reads the data concurrently from multiple sources and processes the data. You can use heterogeneous sources to merge the data using Union transformation. Source Qualifier transformation acts as virtual source in Informatica. When you drag relational table or flat file in Mapping Designer, Source Qualifier transformation comes along. 
Source Qualifier is the point where actually Informatica processing starts. The extraction process starts from the Source Qualifier. Lookup transformation is used to lookup of source, Source Qualifier, or target to get the relevant data. You can look up on flat file or relational tables. Lookup transformation works on the similar lines as Joiner with few differences like Lookup does not require two source. Lookup transformations can be connected and unconnected. Lookup transformation extracts the data from the lookup table or file based on the lookup condition. When you create the Lookup transformation you can configure the Lookup transformation to cache the data. Caching the data makes the processing faster since the data is stored internally after cache is created. Once you select to cache the data, Lookup transformation caches the data from the file or table once and then based on the condition defined, lookup sends the output value. Since the data gets stored internally, the processing becomes faster as it does not require checking the lookup condition in file or database. Integration Services queries the cache memory as against checking the file or table for fetching the required data. The cache is created automatically and also it is deleted automatically once the processing is complete. Lookup transformation has four different types of ports. Input ports (I) receive the data from other transformation. This port will be used in Lookup condition. You need to have at least one input port. Output port (O) passes the data out of the Lookup transformation to other transformations. Lookup port (L) is the port for which you wish to bring the data in mapping. Each column is assigned as lookup and output port when you create the Lookup transformation. If you delete the lookup port from the flat file lookup source, the session will fail. If you delete the lookup port from relational lookup table, Integration Services extracts the data only with Lookup port. This helps in reducing the data extracted from the lookup source. Return port (R) is only used in case of unconnected Lookup transformation. This port indicates which data you wish to return in the Lookup transformation. You can define only one port as return port. Return port is not used in case on connected Lookup transformation. Cache is the temporary memory, which is created when you execute the process. Cache is created automatically when the process starts and is deleted automatically once the process is complete. The amount of cache memory is decided based on the property you define in the transformation level or session level. You usually set the property as default, so as required it can increase the size of the cache. If the size required for caching the data is more than the cache size defined, the process fails with the overflow error. There are different types of caches available for lookup transformation. You can define the session property to create the cache either sequentially or concurrently. When you select to create the cache sequentially, Integration Service caches the data in row-wise manner as the records enters the Lookup transformation. When the first record enters the Lookup transformation, lookup cache gets created and stores the matching record from the lookup table or file in the cache. This way the cache stores only matching data. It helps in saving the cache space by not storing the unnecessary data. 
When you select to create cache concurrently, Integration Service does not wait for the data to flow from the source, but it first caches complete data. Once the caching is complete, it allows the data to flow from the source. When you select concurrent cache, the performance enhances as compared to sequential cache, since the scanning happens internally using the data stored in cache. You can configure the cache to permanently save the data. By default, the cache is created as non-persistent, that is, the cache will be deleted once the session run is complete. If the lookup table or file does not change across the session runs, you can use the existing persistent cache. A cache is said to be static if it does not change with the changes happening in the lookup table. The static cache is not synchronized with the lookup table. By default Integration Service creates a static cache. Lookup cache is created as soon as the first record enters the Lookup transformation. Integration Service does not update the cache while it is processing the data. A cache is said to be dynamic if it changes with the changes happening in the lookup table. The static cache is synchronized with the lookup table. You can choose from the Lookup transformation properties to make the cache as dynamic. Lookup cache is created as soon as the first record enters the lookup transformation. Integration Service keeps on updating the cache while it is processing the data. The Integration Service marks the record as insert for new row inserted in dynamic cache. For the record which is updated, it marks the record as update in the cache. For every record which no change, the Integration Service marks it as unchanged. Update Strategy transformation is used to INSERT, UPDATE, DELETE, or REJECT record based on defined condition in the mapping. Update Strategy transformation is mostly used when you design mappings for SCD. When you implement SCD, you actually decide how you wish to maintain historical data with the current data. When you wish to maintain no history, complete history, or partial history, you can either use property defined in the session task or you use Update Strategy transformation. When you use Session task, you instruct the Integration Service to treat all records in the same way, that is, either insert, update or delete. When you use Update Strategy transformation in the mapping, the control is no more with the session task. Update Strategy transformation allows you to insert, update, delete or reject record based on the requirement. When you use Update Strategy transformation, the control is no more with session task. You need to define the following functions to perform the corresponding operation: DD_INSERT: This can be used when you wish to insert the records. It is also represented by numeric 0. DD_UPDATE: This can be used when you wish to update the records. It is also represented by numeric 1. DD_DELETE: This can be used when you wish to delete the records. It is also represented by numeric 2. DD_REJECT: This can be used when you wish to reject the records. It is also represented by numeric 3. Normalizer transformation is used in place of Source Qualifier transformation when you wish to read the data from Cobol Copybook source. Also, the Normalizer transformation is used to convert column-wise data to row-wise data. This is similar to transpose feature of MS Excel. You can use this feature if your source is Cobol Copybook file or relational database tables. 
The Normalizer transformation converts columns to rows and also generates an index for each converted row.

A stored procedure is a database component, and Informatica uses stored procedures in a way similar to database tables. Stored procedures are sets of SQL instructions that accept certain input values and, in return, give back output values. Just as you either import or create database tables, you can import or create a stored procedure in the mapping. To use a Stored Procedure transformation in a mapping, the stored procedure must exist in the database. Similar to the Lookup transformation, a stored procedure can also be a connected or unconnected transformation in Informatica. When you use a connected stored procedure, you pass values to it through links; when you use an unconnected stored procedure, you pass values using the :SP function.

Transaction Control transformation allows you to commit or roll back individual records based on a condition. By default, the Integration Service commits the data based on the properties you define at the session task level. Using the commit interval property, the Integration Service commits or rolls back the data into the target; if you define a commit interval of 10,000, the Integration Service commits the data after every 10,000 records. When you use a Transaction Control transformation, you get control over each record and can commit or roll back at that level. To do so, you define the condition in the expression editor of the Transaction Control transformation. When you run the process, the data enters the Transaction Control transformation row by row; the transformation evaluates each row and, based on the condition, commits or rolls back the data.
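The conditions for both the Update Strategy and the Transaction Control transformations are written in the expression editor. The following is only an illustrative sketch; the port names (EMP_KEY_LOOKUP and DEPT_CHANGE_FLAG) are hypothetical and would come from your own mapping.

An Update Strategy expression that inserts new records and updates existing ones:

IIF(ISNULL(EMP_KEY_LOOKUP), DD_INSERT, DD_UPDATE)

A Transaction Control expression that commits before the current row whenever a change flag is set, and otherwise continues the open transaction:

IIF(DEPT_CHANGE_FLAG = 1, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)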
Classification of Transformations

The transformations that we discussed are classified into two categories: active/passive and connected/unconnected.

The active/passive classification is based on the number of records at the input and output ports of the transformation. If a transformation does not change the number of records between its input and output ports, it is said to be a passive transformation. If it changes the number of records, it is an active transformation. Also, if a transformation changes the sequence of the records passing through it, it is an active transformation, as in the case of the Union transformation.

A transformation is said to be connected if it is connected to a source, a target, or any other transformation by at least one link. If a transformation is not connected by any link, it is classed as unconnected. Only the Lookup and Stored Procedure transformations can be connected or unconnected; all other transformations are connected.

Advanced Features of the Designer screen

Talking about the advanced features of the PowerCenter Designer tool, the debugger helps you debug mappings to find errors in your code. Informatica PowerCenter provides this utility so that you can easily find issues in the mapping you created; using the debugger, you can see the flow of every record across the transformations. Another feature is the target load plan, a functionality that allows you to load data into multiple targets in the same mapping while maintaining their constraints. Reusable transformations allow you to reuse a transformation across multiple mappings; just as sources and targets are reusable components, transformations can also be reused.

When you work on any technology, it is always advisable to keep your code dynamic. This means you should use hard-coded values as little as possible and instead use parameters or variables, so that you can pass values in without frequently changing the code. In Informatica, this functionality is achieved with a parameter file. The value of a variable can change between session runs, whereas the value of a parameter remains constant across session runs. The difference is subtle, so define a parameter or a variable properly as per your requirements. A minimal sample parameter file is sketched at the end of this section.

Informatica PowerCenter allows you to compare objects present within a repository. You can compare sources, targets, transformations, mapplets, and mappings in PowerCenter Designer under Source Analyzer, Target Designer, Transformation Developer, Mapplet Designer, and Mapping Designer respectively. You can compare objects in the same repository or across repositories.

The tracing level in Informatica defines the amount of data you wish to write to the session log when you execute the workflow. The tracing level is a very important aspect, as it helps in analyzing errors and finding bugs in the process. You can define the tracing level in every transformation; the option is present in each transformation's properties window. There are four types of tracing level available:
Normal: Informatica stores status information, information about errors, and information about skipped rows. You get detailed information, but not at the individual row level.
Terse: Informatica stores error information and information about rejected records. Terse tracing occupies less space than normal tracing.
Verbose initialization: In addition to what normal tracing stores, this level stores details related to startup, details about the index and data files created, and more details of the transformation process. It takes more space than normal and terse tracing.
Verbose data: This is the most detailed tracing level. It occupies more space and takes longer than the other three, as it stores row-level data in the session log. It writes truncation information whenever data is truncated, and it also writes data to the error log if you enable row error logging.
The default tracing level is normal. You can change the tracing level to terse to enhance performance. The tracing level can be defined at the individual transformation level, or you can override it by defining it at the session level.
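As promised earlier, the following is a minimal sketch of what a parameter file might look like; the folder, workflow, session, and parameter names are purely illustrative and depend on your own repository objects:

[MyFolder.WF:wf_load_customers.ST:s_m_load_customers]
$$LoadDate=2014-12-26
$$CountryFilter=IN
$InputFile_Customers=/data/input/customers.csv

Mapping parameters and variables are prefixed with $$, while session parameters such as source file names use a single $.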
Informatica PowerCenter Workflow Manager

The Workflow Manager screen is the second and last phase of our development work. In the Workflow Manager, session tasks and workflows are created, which are used to execute mappings. The Workflow Manager screen also allows you to work with various connections, such as relational, FTP, and so on. Basically, the Workflow Manager contains a set of instructions that we define as a workflow; the basic building blocks of a workflow are tasks. Just as we have multiple transformations on the Designer screen, we have multiple tasks on the Workflow Manager screen. When you create a workflow, you add tasks to it as per your requirement and execute the workflow to see its status in the monitor.

A workflow is a combination of multiple tasks connected with links that trigger in the proper sequence to execute a process. Every workflow contains a Start task along with other tasks. When you execute the workflow, you actually trigger the Start task, which in turn triggers the other tasks connected in the flow. Every task performs a specific function, and you need to use the task that matches the functionality you want to achieve.

Various tasks in Workflow Manager

The following are the tasks available in the Workflow Manager:
Session task: Used to execute a mapping. Each session task can execute a single mapping. You need to define the path/connection of the source and target used in the mapping, so the session can extract the data from the defined path and send it to the mapping for processing.
Email task: Used to send success or failure email notifications. You can configure your Outlook or mailbox with the email task to send the notification directly.
Command task: Used to execute Unix scripts/commands or Windows commands.
Timer task: Used to add a time gap or delay between two tasks. The timer task has properties related to absolute time and relative time.
Assignment task: Used to assign a value to a workflow variable.
Control task: Used to control the flow of the workflow by stopping or aborting it in case of an error. You can control the flow of the complete workflow using the control task.
Decision task: Used to check the status of multiple tasks and hence control the execution of the workflow. A link, as against the decision task, can only check the status of the previous task.
Event wait task: Used to wait for a particular event to occur. It is usually used as a file watcher: using the event wait task, we can keep looking for a particular file and then trigger the next task.
Event raise task: Used to trigger a particular event defined in the workflow.

Advanced Workflow Manager

The Workflow Manager screen has some very important features, called scheduling and incremental aggregation, which allow easier and more convenient processing of data. Scheduling allows you to schedule a workflow at a specified time so that it runs at the desired moment; you need not run the workflow manually every time, as the schedule does the needful. Incremental aggregation and partitioning are advanced features that allow you to process the data faster.

When you run the workflow, the Integration Service extracts the data row by row from the source path/connection you defined in the session task and makes it flow through the mapping. The data reaches the target through the transformations you defined in the mapping. The data always flows in a row-wise manner in Informatica, no matter what your calculation or manipulation is. So, if you have 10 records in the source, there will be 10 source-to-target flows while the process is executed.

Informatica PowerCenter Workflow Monitor

The Workflow Monitor screen allows the monitoring of the workflows executed in the Workflow Manager. The Workflow Monitor screen lets you check the status and log files for a workflow; using the generated logs, you can easily find and rectify errors. The Workflow Monitor also shows statistics for the number of records extracted from the source and the number of records loaded into the target, along with statistics for error records and bad records.

Informatica PowerCenter Repository Manager

The Repository Manager screen is the fourth client screen, which is basically used for migration (deployment) purposes.
This screen is also used for some administration-related activities, such as configuring the server with the client and creating users.

Performance Tuning in Informatica PowerCenter

Performance tuning covers the optimization of the various components of the Informatica PowerCenter tool, such as sources, targets, mappings, sessions, and systems. At a high level, performance tuning involves two stages: finding the issues, called bottlenecks, and resolving them. Informatica PowerCenter has features such as pushdown optimization and partitioning for better performance. With well-defined steps and by using best coding practices, performance can be enhanced drastically.

Slowly Changing Dimensions

Using all this understanding of the different client tools, you can implement the data warehousing concept called SCD, slowly changing dimensions. Informatica PowerCenter provides wizards that allow you to easily create the different types of SCDs, that is, SCD1, SCD2, and SCD3.
Type 1 Dimension mapping (SCD1): Keeps only current data and does not maintain historical data.
Type 2 Dimension/Version Number mapping (SCD2): Keeps current as well as historical data in the table. It allows you to insert new and changed records using a new column (PM_VERSION_NUMBER) that maintains a version number in the table to track the changes. A new column, PM_PRIMARYKEY, is used to maintain the history.
Type 2 Dimension/Flag mapping: Keeps current as well as historical data in the table. It allows you to insert new and changed records using a new column (PM_CURRENT_FLAG) that maintains a flag in the table to track the changes. A new column, PRIMARY_KEY, is used to maintain the history.
Type 2 Dimension/Effective Date Range mapping: Keeps current as well as historical data in the table. It allows you to insert new and changed records using two new columns (PM_BEGIN_DATE and PM_END_DATE) that maintain a date range in the table to track the changes. A new column, PRIMARY_KEY, is used to maintain the history.
Type 3 Dimension mapping: Keeps current as well as historical data in the table, but maintains only partial history by adding a new column.

Summary

With this, we have discussed the complete PowerCenter tool in brief. PowerCenter is a good fit for any size and any type of data you wish to handle, and it provides compatibility with a wide range of files and databases for processing. The transformations available allow you to manipulate any type of data in any form you wish, and the advanced features make your work simpler by providing convenient options. The PowerCenter tool can make your life easy and can offer you a great career path if you learn it properly, as Informatica PowerCenter has huge demand in the job market and is one of the highly paid technologies in the IT market. Just grab a book and start walking the path; the end will be a great career. We are always available for help. For any help with installation or any issues related to PowerCenter, you can reach me at info@dw-learnwell.com.

Resources for Article:

Further resources on this subject: Building Mobile Apps [article] Adding a Geolocation Trigger to the Salesforce Account Object [article] Introducing SproutCore [article]
Using PhpStorm in a Team

In this article by Mukund Chaudhary and Ankur Kumar, authors of the book PhpStorm Cookbook, we will cover the following recipes:
Getting a VCS server
Creating a VCS repository
Connecting PhpStorm to a VCS repository
Storing a PhpStorm project in a VCS repository
(For more resources related to this topic, see here.)

Getting a VCS server

The first action that you have to undertake is to decide which VCS you are going to use. There are a number of systems available, such as Git and Subversion (commonly known as SVN). Subversion is free and open source software that you can download and install on your development server. There is another, older system named Concurrent Versions System (CVS). Both are meant to provide a code versioning service to you; SVN is newer and supposedly faster than CVS. Since SVN is the newer system, and in order to give you information on the latest matters, this text will concentrate on the features of Subversion only.

Getting ready

So, that moment has finally arrived when you start working in a team by getting a VCS system for you and your team. The installation of SVN on the development system can be done in two ways: easy and difficult. The difficult way can be skipped without consideration, because it is meant for developers who want to contribute to the Subversion system itself. Since you are dealing with PhpStorm, you need to remember the easier way, because you have a lot more to do.

How to do it...

The installation step is very easy. There is the aptitude utility available on Debian-based systems, and there is the Yum utility available on Red Hat-based systems. Perform the following steps:
You just need to issue the command apt-get install subversion. The operating system's package manager will do the remaining work for you. In a very short time, after flooding the command-line console with messages, you will have the Subversion system installed.
To check whether the installation was successful, issue the command whereis svn. If there is a message, it means that you have installed Subversion successfully.
If you do not want to bear the load of installing Subversion on your development system, you can use commercial third-party servers. But that is more of a layman's approach to solving problems, and no PhpStorm cookbook author will recommend that you do that. You are a software engineer; you should not give up so easily.

How it works...

When you install the version control system, you actually install a server that provides the version control service to a version control client. The Subversion service listens for incoming connections from remote clients on port number 3690 by default.

There's more...

If you want to install the older companion, CVS, you can do that in a similar way, as shown in the following steps:
You need to download the archive for the CVS server software.
You need to unpack it from the archive using your favorite unpacking software.
You can move it to another convenient location, since you will not need to disturb this folder in the future.
You then need to move into the directory, where your compilation process will start. You need to run ./configure to create the make targets.
Having made the targets, you need to enter make install to complete the installation procedure.
Due to it being older software, you might have to compile it from the source code as the only alternative.
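As a quick recap of the Subversion installation commands mentioned above (package names can vary between distributions, so treat these as a guideline rather than a definitive list):

$ sudo apt-get install subversion    # Debian-based systems
$ sudo yum install subversion        # Red Hat-based systems
$ whereis svn                        # verify that Subversion is on the system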
Creating a VCS repository

More often than not, a PHP programmer is expected to know some system concepts, because it is often required to change settings for the PHP interpreter. The changes could be in the form of, say, changing the execution time or adding/removing modules, and so on. In order to start working in a team, you are going to get your hands dirty with system actions.

Getting ready

You will have to create a new repository on the development server so that PhpStorm can act as a client and get connected. Here, it is important to note the difference between an SVN client and an SVN server: an SVN client can be either a standalone client or an embedded client such as an IDE, whereas the SVN server is a single item, a continuously running process on a server of your choice.

How to do it...

You need to be careful while performing this activity, as a single mistake can ruin your efforts. Perform the following steps:
There is a command, svnadmin, that you need to know. Using this command, you can create a new directory on the server that will contain the code base. You should be careful when selecting this directory, as it will appear in your SVN URL from then on. The command should be executed as:
svnadmin create /path/to/your/repo/
Having created a new repository on the server, you need to make certain settings for the server. This is just a normal phenomenon, because every server requires a configuration. The SVN server configuration is located under /path/to/your/repo/conf/ with the name svnserve.conf. Inside the file, you need to add these three lines at the bottom:
anon-access = none
auth-access = write
password-db = passwd
There has to be a password file to authorize the list of users who will be allowed to use the repository. The password file in this case will be named passwd (the default filename). The contents of the file will be a number of lines, each containing a username and the corresponding password in the form username = password. Since these files are parsed by the server according to a particular format, you don't have the freedom to leave deliberate spaces in the file—error messages will be displayed in those cases.
Having made the appropriate settings, you can now start the SVN service so that an SVN client can access it. You need to issue the command svnserve -d to do that.
It is always good practice to keep checking whether what you do is correct. To validate a proper installation, you need to issue the command svn ls svn://user@host/path/to/subversion/repo/. The output will be as shown in the following screenshot:

How it works...

The svnadmin command is used to perform admin tasks on the Subversion server. The create option creates a new folder on the server that acts as the repository for access from Subversion clients. The configuration file is created by default when the repository is created. The directives added to the file control the behavior of the Subversion server: the settings mentioned prevent anonymous access and restrict write operations to certain users, whose access details are mentioned in a file. The command svnserve again needs to be run on the server side, and it starts an instance of the server. The -d switch indicates that the server should run as a daemon (system process). This also means that your server will continue running until you manually stop it or the entire system goes down. Again, you can skip this section if you have opted for a third-party version control service provider.
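If you are managing the server yourself, putting the settings from this recipe together, a minimal svnserve.conf and passwd pair might look like the following; the usernames and passwords are placeholders only:

# /path/to/your/repo/conf/svnserve.conf
[general]
anon-access = none
auth-access = write
password-db = passwd

# /path/to/your/repo/conf/passwd
[users]
mukund = secretpassword
ankur = anothersecret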
Connecting PhpStorm to a VCS repository

The real utility of software comes when you use it. So, having installed the version control system, you need to be prepared to use it.

Getting ready

SVN being client-server software, having installed the server, you now need a client. You will not have to search far for a good SVN client: it has been factory-provided to you inside PhpStorm. The PhpStorm SVN client provides features that accelerate your development work by giving you detailed information about the changes made to the code. So, go ahead and connect PhpStorm to the Subversion repository you created.

How to do it...

In order to connect PhpStorm to the Subversion repository, you need to activate the Subversion view. It is available at View | Tool Windows | Svn Repositories. Then perform the following steps:
Having activated the Subversion view, you now need to add the repository location to PhpStorm. To do that, use the + symbol in the top-left corner of the view you have opened, as shown in the following screenshot:
Upon selecting the Add option, PhpStorm asks for the location of the repository. You need to provide the full location.
Once you provide the location, you will be able to see the repository in the same Subversion view in which you pressed the Add button.
Here, you should always keep in mind the correct protocol to use. This depends on the way you installed the Subversion system on the development machine. If you used the default installation by installing from the installer utility (apt-get or aptitude), you need to specify svn://. If you have configured SVN to be accessible via SSH, you need to specify svn+ssh://. If you have explicitly configured SVN to be used with the Apache web server, you need to specify http://. If you configured SVN with Apache over the secure protocol, you need to specify https://.

Storing a PhpStorm project in a VCS repository

Here comes the actual start of the teamwork. Even if you and your other team members have connected to the repository, what purpose does that serve by itself? The actual thing is the code that you work on; it is the code that earns you your bread.

Getting ready

You should now store a project in the Subversion repository so that the other team members can work on it and add more features to your code. It is time to add a project to version control. You do not need to start a new project from scratch to add it to the repository: any project, any work that you have done and wish to have the team work on, can be added to the repository. Since the most relevant project in the current context is the cooking project, you can try adding that. There you go.

How to do it...

In order to add a project to the repository, perform the following steps:
You need to use the menu item provided at VCS | Import into version control | Share project (subversion). PhpStorm will ask you a question, as shown in the following screenshot:
Select the correct hierarchy to define the share target—the correct location where your project will be saved.
If you wish to create the tags and branches in the code base, you need to select the checkbox for the same. It is good practice to provide comments to the commits that you make. The reason behind this is apparent when you sit down to create a release document. It also makes the change more understandable for the other team members. PhpStorm then asks you the format you want the working copy to be in. This is related to the version of the version control software. You just need to smile and select the latest version number and proceed, as shown in the following screenshot:   Having done that, PhpStorm will now ask you to enter your credentials. You need to enter the same credentials that you saved in the configuration file (see the Creating a VCS repository recipe) or the credentials that your service provider gave you. You can ask PhpStorm to save the credentials for you, as shown in the following screenshot:   How it works... Here it is worth understanding what is going on behind the curtains. When you do any Subversion related task in PhpStorm, there is an inbuilt SVN client that executes the commands for you. Thus, when you add a project to version control, the code is given a version number. This makes the version system remember the state of the code base. In other words, when you add the code base to version control, you add a checkpoint that you can revisit at any point in future for the time the code base is under the same version control system. Interesting phenomenon, isn't it? There's more... If you have installed the version control software yourself and if you did not make the setting to store the password in encrypted text, PhpStorm will provide you a warning about it, as shown in the following screenshot: Summary We got to know about version control systems, step-by-step process to create a VCS repository, and connecting PhpStorm to a VCS repository. Resources for Article:  Further resources on this subject: FuelPHP [article] A look into the high-level programming operations for the PHP language [article] PHP Web 2.0 Mashup Projects: Your Own Video Jukebox: Part 1 [article]
Using Frameworks

In this article by Alex Libby, author of the book Responsive Media in HTML5, we will cover the following topics: Adding responsive media to a CMS Implementing responsive media in frameworks such as Twitter Bootstrap Using the Less CSS preprocessor to create CSS media queries Ready? Let's make a start! (For more resources related to this topic, see here.) Introducing our three examples Throughout this article, we've covered a number of simple, practical techniques to make media responsive within our sites—these are good, but nothing beats seeing these principles used in a real-world context, right? Absolutely! To prove this, we're going to look at three examples throughout this article, using technologies that you are likely to be familiar with: WordPress, Bootstrap, and Less CSS. Each demo will assume a certain level of prior knowledge, so it may be worth reading up a little first. In all three cases, we should see that with little effort, we can easily add responsive media to each one of these technologies. Let's kick off with a look at working with WordPress. Adding responsive media to a CMS We will begin the first of our three examples with a look at using the ever popular WordPress system. Created back in 2003, WordPress has been used to host sites by small independent traders all the way up to Fortune 500 companies—this includes some of the biggest names in business such as eBay, UPS, and Ford. WordPress comes in two flavors; the one we're interested in is the self-install version available at http://www.wordpress.org. This example assumes you have a local installation of WordPress installed and working; if not, then head over to http://codex.wordpress.org/Installing_WordPress and follow the tutorial to get started. We will also need a DOM inspector such as Firebug installed if you don't already have it. It can be downloaded from http://www.getfirebug.com if you need to install it. If you only have access to WordPress.com (the other flavor of WordPress), then some of the tips in this section may not work, due to limitations in that version of WordPress. Okay, assuming we have WordPress set up and running, let's make a start on making uploaded media responsive. Adding responsive media manually It's at this point that you're probably thinking we have to do something complex when working in WordPress, right? Wrong! As long as you use the Twenty Fourteen core theme, the work has already been done for you. For this exercise, and the following sections, I will assume you have installed and/or activated WordPress' Twenty Fourteen theme. Don't believe me? It's easy to verify: try uploading an image to a post or page in WordPress. Resize the browser—you should see the image shrink or grow in size as the browser window changes size. If we take a look at the code elsewhere using Firebug, we can also see the height: auto set against a number of the img tags; this is frequently done for responsive images to ensure they maintain the correct proportions. The responsive style seems to work well in the Twenty Fourteen theme; if you are using an older theme, we can easily apply the same style rule to images stored in WordPress when using that theme. Fixing a responsive issue So far, so good. Now, we have the Twenty Fourteen theme in place, we've uploaded images of various sizes, and we try resizing the browser window ... only to find that the images don't seem to grow in size above a certain point. At least not well—what gives? 
Well, it's a classic trap: we've talked about using percentage values to dynamically resize images, only to find that we've shot ourselves in the foot (proverbially speaking, of course!). The reason? Let's dive in and find out using the following steps: Browse to your WordPress installation and activate Firebug using F12. Switch to the HTML tab and select your preferred image. In Firebug, look for the <header class="entry-header"> line, then look for the following line in the rendered styles on the right-hand side of the window: .site-content .entry-header, .site-content .entry-content,   .site-content .entry-summary, .site-content .entry-meta,   .page-content {    margin: 0 auto; max-width: 474px; } The keen-eyed amongst you should hopefully spot the issue straightaway—we're using percentages to make the sizes dynamic for each image, yet we're constraining its parent container! To fix this, change the highlighted line as indicated: .site-content .entry-header, .site-content .entry-content,   .site-content .entry-summary, .site-content .entry-meta,   .page-content {    margin: 0 auto; max-width: 100%; } To balance the content, we need to make the same change to the comments area. So go ahead and change max-width to 100% as follows: .comments-area { margin: 48px auto; max-width: 100%;   padding: 0 10px; } If we try resizing the browser window now, we should see the image size adjust automatically. At this stage, the change is not permanent. To fix this, we would log in to WordPress' admin area, go to Appearance | Editor and add the adjusted styles at the foot of the Stylesheet (style.css) file. Let's move on. Did anyone notice two rather critical issues with the approach used here? Hopefully, you must have spotted that if a large image is used and then resized to a smaller size, we're still working with large files. The alteration we're making has a big impact on the theme, even though it is only a small change. Even though it proves that we can make images truly responsive, it is the kind of change that we would not necessarily want to make without careful consideration and plenty of testing. We can improve on this. However, making changes directly to the CSS style sheet is not ideal; they could be lost when upgrading to a newer version of the theme. We can improve on this by either using a custom CSS plugin to manage these changes or (better) using a plugin that tells WordPress to swap an existing image for a small one automatically if we resize the window to a smaller size. Using plugins to add responsive images A drawback though, of using a theme such as Twenty Fourteen, is the resizing of images. While we can grow or shrink an image when resizing the browser window, we are still technically altering the size of what could potentially be an unnecessarily large image! This is considered bad practice (and also bad manners!)—browsing on a desktop with a fast Internet connection as it might not have too much of an impact; the same cannot be said for mobile devices, where we have less choice. To overcome this, we need to take a different approach—get WordPress to automatically swap in smaller images when we reach a particular size or breakpoint. Instead of doing this manually using code, we can take advantage of one of the many plugins available that offer responsive capabilities in some format. I feel a demo coming on. Now's a good time to take a look at one such plugin in action: Let's start by downloading our plugin. 
For this exercise, we'll use the PictureFill.WP plugin by Kyle Ricks, which is available at https://wordpress.org/plugins/picturefillwp/. We're going to use the version that uses Picturefill.js version 2. This is available to download from https://github.com/kylereicks/picturefill.js.wp/tree/master. Click on Download ZIP to get the latest version. Log in to the admin area of your WordPress installation and click on Settings and then Media. Make sure your image settings for Thumbnail, Medium, and Large sizes are set to values that work with useful breakpoints in your design. We then need to install the plugin. In the admin area, go to Plugins | Add New to install the plugin and activate it in the normal manner. At this point, we will have installed responsive capabilities in WordPress—everything is managed automatically by the plugin; there is no need to change any settings (except maybe the image sizes we talked about in step 2). Switch back to your WordPress frontend and try resizing the screen to a smaller size. Press F12 to activate Firebug and switch to the HTML tab. Press Ctrl + Shift + C (or Cmd + Shift + C for Mac users) to toggle the element inspector; move your mouse over your resized image. If we've set the right image sizes in WordPress' admin area and the window is resized correctly, we can expect to see something like the following screenshot: To confirm we are indeed using a smaller image, right-click on the image and select View Image Info; it will display something akin to the following screenshot: We should now have a fully functioning plugin within our WordPress installation. A good tip is to test this thoroughly, if only to ensure we've set the right sizes for our breakpoints in WordPress! What happens if WordPress doesn't refresh my thumbnail images properly? This can happen. If you find this happening, get hold of and install the Regenerate Thumbnails plugin to resolve this issue; it's available at https://wordpress.org/plugins/regenerate-thumbnails/. Adding responsive videos using plugins Now that we can add responsive images to WordPress, let's turn our attention to videos. The process of adding them is a little more complex; we need to use code to achieve the best effect. Let's examine our options. If you are hosting your own videos, the simplest way is to add some additional CSS style rules. Although this removes any reliance on JavaScript or jQuery using this method, the result isn't perfect and will need additional styles to handle the repositioning of the play button overlay. Although we are working locally, we should remember the note from earlier in this article; changes to the CSS style sheet may be lost when upgrading. A custom CSS plugin should be used, if possible, to retain any changes. To use a CSS-only solution, it only requires a couple of steps: Browse to your WordPress theme folder and open a copy of styles.css in your text editor of choice. Add the following lines at the end of the file and save it: video { width: 100%; height: 100%; max-width: 100%; } .wp-video { width: 100% !important; } .wp-video-shortcode {width: 100% !important; } Close the file. You now have the basics in place for responsive videos. At this stage, you're probably thinking, "great, my videos are now responsive. I can handle the repositioning of the play button overlay myself, no problem"; sounds about right? Thought so and therein lies the main drawback of this method! Repositioning the overlay shouldn't be too difficult. 
The real problem is in the high costs of hardware and bandwidth that is needed to host videos of any reasonable quality and that even if we were to spend time repositioning the overlay, the high costs would outweigh any benefit of using a CSS-only solution. A far better option is to let a service such as YouTube do all the hard work for you and to simply embed your chosen video directly from YouTube into your pages. The main benefit of this is that YouTube's servers do all the hard work for you. You can take advantage of an increased audience and YouTube will automatically optimize the video for the best resolution possible for the Internet connections being used by your visitors. Although aimed at beginners, wpbeginner.com has a useful article located at http://www.wpbeginner.com/beginners-guide/why-you-should-never-upload-a-video-to-wordpress/, on the pros and cons of why self-hosting videos isn't recommended and that using an external service is preferable. Using plugins to embed videos Embedding videos from an external service into WordPress is ironically far simpler than using the CSS method. There are dozens of plugins available to achieve this, but one of the simplest to use (and my personal favorite) is FluidVids, by Todd Motto, available at http://github.com/toddmotto/fluidvids/. To get it working in WordPress, we need to follow these steps using a video from YouTube as the basis for our example: Browse to your WordPress' theme folder and open a copy of functions.php in your usual text editor. At the bottom, add the following lines: add_action ( 'wp_enqueue_scripts', 'add_fluidvid' );   function add_fluidvid() { wp_enqueue_script( 'fluidvids',     get_stylesheet_directory_uri() .     '/lib/js/fluidvids.js', array(), false, true ); } Save the file, then log in to the admin area of your WordPress installation. Navigate to Posts | Add New to add a post and switch to the Text tab of your Post Editor, then add http://www.youtube.com/watch?v=Vpg9yizPP_g&hd=1 to the editor on the page. Click on Update to save your post, then click on View post to see the video in action. There is no need to further configure WordPress—any video added from services such as YouTube or Vimeo will be automatically set as responsive by the FluidVids plugin. At this point, try resizing the browser window. If all is well, we should see the video shrink or grow in size, depending on how the browser window has been resized: To prove that the code is working, we can take a peek at the compiled results within Firebug. We will see something akin to the following screenshot: For those of us who are not feeling quite so brave (!), there is fortunately a WordPress plugin available that will achieve the same results, without configuration. It's available at https://wordpress.org/plugins/fluidvids/ and can be downloaded and installed using the normal process for WordPress plugins. Let's change track and move onto our next demo. I feel a need to get stuck in some coding, so let's take a look at how we can implement responsive images in frameworks such as Bootstrap. Implementing responsive media in Bootstrap A question—as developers, hands up if you have not heard of Bootstrap? Good—not too many hands going down Why have I asked this question, I hear you say? Easy—it's to illustrate that in popular frameworks (such as Bootstrap), it is easy to add basic responsive capabilities to media, such as images or video. The exact process may differ from framework to framework, but the result is likely to be very similar. 
To see what I mean, let's take a look at using Bootstrap for our second demo, where we'll see just how easy it is to add images and video to our Bootstrap-enabled site. If you would like to explore using some of the free Bootstrap templates that are available, then http://www.startbootstrap.com/ is well worth a visit! Using Bootstrap's CSS classes Making images and videos responsive in Bootstrap uses a slightly different approach to what we've examined so far; this is only because we don't have to define each style property explicitly, but instead simply add the appropriate class to the media HTML for it to render responsively. For the purposes of this demo, we'll use an edited version of the Blog Page example, available at http://www.getbootstrap.com/getting-started/#examples; a copy of the edited version is available on the code download that accompanies this article. Before we begin, go ahead and download a copy of the Bootstrap Example folder that is in the code download. Inside, you'll find the CSS, image and JavaScript files needed, along with our HTML markup file. Now that we have our files, the following is a screenshot of what we're going to achieve over the course of our demo: Let's make a start on our example using the following steps: Open up bootstrap.html and look for the following lines (around lines 34 to 35):    <p class="blog-post-meta">January 1, 2014 by <a href="#">Mark</a></p>      <p>This blog post shows a few different types of content that's supported and styled with Bootstrap.         Basic typography, images, and code are all         supported.</p> Immediately below, add the following code—this contains markup for our embedded video, using Bootstrap's responsive CSS styling: <div class="bs-example"> <div class="embed-responsive embed-responsive-16by9">    <iframe allowfullscreen="" src="http://www.youtube.com/embed/zpOULjyy-n8?rel=0" class="embed-responsive-item"></iframe> </div> </div> With the video now styled, let's go ahead and add in an image—this will go in the About section on the right. Look for these lines, on or around lines 74 and 75:    <h4>About</h4>      <p>Etiam porta <em>sem malesuada magna</em> mollis euismod. Cras mattis consectetur purus sit amet       fermentum. Aenean lacinia bibendum nulla sed       consectetur.</p> Immediately below, add in the following markup for our image: <a href="#" class="thumbnail"> <img src="http://placehold.it/350x150" class="img-responsive"> </a> Save the file and preview the results in a browser. If all is well, we can see our video and image appear, as shown at the start of our demo. At this point, try resizing the browser—you should see the video and placeholder image shrink or grow as the window is resized. However, the great thing about Bootstrap is that the right styles have already been set for each class. All we need to do is apply the correct class to the appropriate media file—.embed-responsive embed-responsive-16by9 for videos or .img-responsive for images—for that image or video to behave responsively within our site. In this example, we used Bootstrap's .img-responsive class in the code; if we have a lot of images, we could consider using img { max-width: 100%; height: auto; } instead. So far, we've worked with two popular examples of frameworks in the form of WordPress and Bootstrap. This is great, but it can mean getting stuck into a lot of CSS styling, particularly if we're working with media queries, as we saw earlier in the article! Can we do anything about this? Absolutely! 
It's time for a brief look at CSS preprocessing and how this can help with adding responsive media to our pages. Using Less CSS to create responsive content Working with frameworks often means getting stuck into a lot of CSS styling; this can become awkward to manage if we're not careful! To help with this, and for our third scenario, we're going back to basics to work on an alternative way of rendering CSS using the Less CSS preprocessing language. Why? Well, as a superset (or extension) of CSS, Less allows us to write our styles more efficiently; it then compiles them into valid CSS. The aim of this example is to show that if you're already using Less, then we can still apply the same principles that we've covered throughout this article, to make our content responsive. It should be noted that this exercise does assume a certain level of prior experience using Less; if this is the first time, you may like to peruse my article, Learning Less, by Packt Publishing. There will be a few steps involved in making the changes, so the following screenshot gives a heads-up on what it will look like, once we've finished: You would be right. If we play our cards right, there should indeed be no change in appearance; working with Less is all about writing CSS more efficiently. Let's see what is involved: We'll start by extracting copies of the Less CSS example from the code download that accompanies this article—inside it, we'll find our HTML markup, reset style sheet, images, and video needed for our demo. Save the folder locally to your PC. Next, add the following styles in a new file, saving it as responsive.less in the css subfolder—we'll start with some of the styling for the base elements, such as the video and banner: #wrapper {width: 96%; max-width: 45rem; margin: auto;   padding: 2%} #main { width: 60%; margin-right: 5%; float: left } #video-wrapper video { max-width: 100%; } #banner { background-image: url('../img/abstract-banner- large.jpg'); height: 15.31rem; width: 45.5rem; max-width:   100%; float: left; margin-bottom: 15px; } #skipTo { display: none; li { background: #197a8a }; }   p { font-family: "Droid Sans",sans-serif; } aside { width: 35%; float: right; } footer { border-top: 1px solid #ccc; clear: both; height:   30px; padding-top: 5px; } We need to add some basic formatting styles for images and links, so go ahead and add the following, immediately below the #skipTo rule: a { text-decoration: none; text-transform: uppercase } a, img { border: medium none; color: #000; font-weight: bold; outline: medium none; } Next up comes the navigation for our page. These styles control the main navigation and the Skip To… link that appears when viewed on smaller devices. Go ahead and add these style rules immediately below the rules for a and img: header { font-family: 'Droid Sans', sans-serif; h1 { height: 70px; float: left; display: block; fontweight: 700; font-size: 2rem; } nav { float: right; margin-top: 40px; height: 22px; borderradius: 4px; li { display: inline; margin-left: 15px; } ul { font-weight: 400; font-size: 1.1rem; } a { padding: 5px 5px 5px 5px; &:hover { background-color: #27a7bd; color: #fff; borderradius: 4px; } } } } We need to add the media query that controls the display for smaller devices, so go ahead and add the following to a new file and save it as media.less in the css subfolder. 
We'll start with setting the screen size for our media query: @smallscreen: ~"screen and (max-width: 30rem)";   @media @smallscreen { p { font-family: "Droid Sans", sans-serif; }      #main, aside { margin: 0 0 10px; width: 100%; }    #banner { margin-top: 150px; height: 4.85rem; max-width: 100%; background-image: url('../img/abstract-     banner-medium.jpg'); width: 45.5rem; } Next up comes the media query rule that will handle the Skip To… link at the top of our resized window:    #skipTo {      display: block; height: 18px;      a {         display: block; text-align: center; color: #fff; font-size: 0.8rem;        &:hover { background-color: #27a7bd; border-radius: 0; height: 20px }      }    } We can't forget the main navigation, so go ahead and add the following line of code immediately below the block for #skipTo:    header {      h1 { margin-top: 20px }      nav {        float: left; clear: left; margin: 0 0 10px; width:100%;        li { margin: 0; background: #efefef; display:block; margin-bottom: 3px; height: 40px; }        a {          display: block; padding: 10px; text-align:center; color: #000;          &:hover {background-color: #27a7bd; border-radius: 0; padding: 10px; height: 20px; }        }     }    } } At this point, we should then compile the Less style sheet before previewing the results of our work. If we launch responsive.html in a browser, we'll see our mocked up portfolio page appear as we saw at the beginning of the exercise. If we resize the screen to its minimum width, its responsive design kicks in to reorder and resize elements on screen, as we would expect to see. Okay, so we now have a responsive page that uses Less CSS for styling; it still seems like a lot of code, right? Working through the code in detail Although this seems like a lot of code for a simple page, the principles we've used are in fact very simple and are the ones we already used earlier in the article. Not convinced? Well, let's look at it in more detail—the focus of this article is on responsive images and video, so we'll start with video. Open the responsive.css style sheet and look for the #video-wrapper video class: #video-wrapper video { max-width: 100%; } Notice how it's set to a max-width value of 100%? Granted, we don't want to resize a large video to a really small size—we would use a media query to replace it with a smaller version. But, for most purposes, max-width should be sufficient. Now, for the image, this is a little more complicated. Let's start with the code from responsive.less: #banner { background-image: url('../img/abstract-banner- large.jpg'); height: 15.31rem; width: 45.5rem; max-width: 100%; float: left; margin-bottom: 15px; } Here, we used the max-width value again. In both instances, we can style the element directly, unlike videos where we have to add a container in order to style it. The theme continues in the media query setup in media.less: @smallscreen: ~"screen and (max-width: 30rem)"; @media @smallscreen { ... #banner { margin-top: 150px; background-image: url('../img/abstract-banner-medium.jpg'); height: 4.85rem;     width: 45.5rem; max-width: 100%; } ... } In this instance, we're styling the element to cover the width of the viewport. A small point of note; you might ask why we are using the rem values instead of the percentage values when styling our image? This is a good question—the key to it is that when using pixel values, these do not scale well in responsive designs. 
However, the rem values do scale beautifully; we could use percentage values if we're so inclined, although they are best suited to instances where we need to fill a container that only covers part of the screen (as we did with the video for this demo). An interesting article extolling the virtues of why we should use rem units is available at http://techtime.getharvest.com/blog/in-defense-of-rem-units - it's worth a read. Of particular note is a known bug with using rem values in Mobile Safari, which should be considered when developing for mobile platforms; with all of the iPhones available, its usage could be said to be higher than Firefox! For more details, head over to http://wtfhtmlcss.com/#rems-mobile-safari. Transferring to production use Throughout this exercise, we used Less to compile our styles on the fly each time. This is okay for development purposes, but is not recommended for production use. Once we've worked out the requisite styles needed for our site, we should always look to precompile them into valid CSS before uploading the results into our site. There are a number of options available for this purpose; two of my personal favorites are Crunch! available at http://www.crunchapp.net and the Less2CSS plugin for Sublime Text available at https://github.com/timdouglas/sublime-less2css. You can learn more about precompiling Less code from my new article, Learning Less.js, by Packt Publishing. Summary Wow! We've certainly covered a lot; it shows that adding basic responsive capabilities to media need not be difficult. Let's take a moment to recap on what you learned. We kicked off this article with an introduction to three real-word scenarios that we would then cover. Our first scenario looked at using WordPress. We covered how although we can add simple CSS styling to make images and videos responsive, the preferred method is to use one of the several plugins available to achieve the same result. Our next scenario visited the all too familiar framework known as Twitter Bootstrap. In comparison, we saw that this is a much easier framework to work with, in that styles have been predefined and that all we needed to do was add the right class to the right selector. Our third and final scenario went completely the opposite way, with a look at using the Less CSS preprocessor to handle the styles that we would otherwise have manually created. We saw how easy it was to rework the styles we originally created earlier in the article to produce a more concise and efficient version that compiled into valid CSS with no apparent change in design. Well, we've now reached the end of the book; all good things must come to an end at some point! Nonetheless, I hope you've enjoyed reading the book as much as I have writing it. Hopefully, I've shown that adding responsive media to your sites need not be as complicated as it might first look and that it gives you a good grounding to develop something more complex using responsive media. Resources for Article: Further resources on this subject: Styling the Forms [article] CSS3 Animation [article] Responsive image sliders [article]
API with MongoDB and Node.js

In this article by Fernando Monteiro, author of the book Learning Single-page Web Application Development, we will see how to build a solid foundation for our API. Our main aim is to discuss the techniques to build rich web applications with the SPA approach. We will be covering the following topics in this article:
The working of an API
Boilerplates and generators
The speakers API concept
Creating the package.json file
The Node server with server.js
The model with the Mongoose schema
Defining the API routes
Using MongoDB in the cloud
Inserting data with the Postman Chrome extension
(For more resources related to this topic, see here.)

The working of an API

An API works through communication between different pieces of code, defining a specific behavior for certain objects on an interface. That is, an API exposes several functions of one website (such as search, images, news, authentication, and so on) so that they can be used in other applications.

Operating systems also have APIs, and they serve the same purpose. Windows, for example, has APIs such as the Win16 API, the Win32 API, and the Telephony API, in all its versions. When you run a program that involves some process of the operating system, it is likely that it connects with one or more Windows APIs.

To clarify the concept of an API, let's go through some examples of how it works. On Windows, an application that needs a clock does not implement that functionality itself; it uses the Time/Clock API from Windows to associate the system clock's behavior with its own display. Another example is when you use the Android SDK to build mobile applications: when you use the device GPS, you are interacting with the location API (android.location) and then displaying the user's location on a map through another API, in this case the Google Maps API. The following is the API example:

When it comes to web APIs, the functionality can be even greater. There are many services that provide their code so that it can be used on other websites. Perhaps the best example is the Facebook API: several other websites use this service within their pages, for instance a Like button, sharing, or even authentication.

An API is a set of programming patterns and instructions to access a software application based on the Web. So, when you access the page of a beer store in your town and log in with your Facebook account, this is accomplished through the API. Using it, software developers and web programmers can create beautiful programs and pages filled with content for their users.

Boilerplates and generators

In a MEAN stack environment, our ecosystem is infinitely diverse, and we can find excellent alternatives to start the construction of our API. At hand, we have everything from simple boilerplates to complex code generators that can be used with other tools in an integrated way, or even alone.

Boilerplates are usually a group of tested code that provides the basic structure for the main goal, that is, to create a foundation for a web project. Besides saving us from common tasks such as assembling the basic structure of the code and organizing the files, boilerplates already include a number of scripts to make life easier on the frontend.

Let's describe some alternatives that we consider to be good starting points for the development of APIs with the Express framework, the MongoDB database, the Node server, and AngularJS for the frontend.
Some more accentuated knowledge of JavaScript might be necessary for the complete understanding of the concepts covered here; so we will try to make them as clearly as possible. It is important to note that everything is still very new when we talk about Node and all its ecosystems, and factors such as scalability, performance, and maintenance are still major risk factors. Bearing in mind also that languages such as Ruby on Rails, Scala, and the Play framework have a higher reputation in building large and maintainable web applications, but without a doubt, Node and JavaScript will conquer your space very soon. That being said, we present some alternatives for the initial kickoff with MEAN, but remember that our main focus is on SPA and not directly on MEAN stack. Hackathon starter Hackathon is highly recommended for a quick start to develop with Node. This is because the boilerplate has the main necessary characteristics to develop applications with the Express framework to build RESTful APIs, as it has no MVC/MVVM frontend framework as a standard but just the Bootstrap UI framework. Thus, you are free to choose the framework of your choice, as you will not need to refactor it to meet your needs. Other important characteristics are the use of the latest version of the Express framework, heavy use of Jade templates and some middleware such as Passport - a Node module to manage authentication with various social network sites such as Twitter, Facebook, APIs for LinkedIn, Github, Last.fm, Foursquare, and many more. They provide the necessary boilerplate code to start your projects very fast, and as we said before, it is very simple to install; just clone the Git open source repository: git clone --depth=1 https://github.com/sahat/hackathon-starter.git myproject Run the NPM install command inside the project folder: npm install Then, start the Node server: node app.js Remember, it is very important to have your local database up and running, in this case MongoDB, otherwise the command node app.js will return the error: Error connecting to database: failed to connect to [localhost: 27017] MEAN.io or MEAN.JS This is perhaps the most popular and currently available boilerplate. MEAN.JS is a fork of the original project MEAN.io; both are open source, with a very peculiar similarity, both have the same author. You can check for more details at http://meanjs.org/. However, there are some differences. We consider MEAN.JS to be a more complete and robust environment. It has a structure of directories, better organized, subdivided modules, and better scalability by adopting a vertical modules development. To install it, follow the same steps as previously: Clone the repository to your machine: git clone https://github.com/meanjs/mean.git Go to the installation directory and type on your terminal: npm install Finally, execute the application; this time with the Grunt.js command: grunt If you are on Windows, type the following command: grunt.cmd Now, you have your app up and running on your localhost. The most common problem when we need to scale a SPA is undoubtedly the structure of directories and how we manage all of the frontend JavaScript files and HTML templates using MVC/MVVM. Later, we will see an alternative to deal with this on a large-scale application; for now, let's see the module structure adopted by MEAN.JS: Note that MEAN.JS leaves more flexibility to the AngularJS framework to deal with the MVC approach for the frontend application, as we can see inside the public folder. 
Also, note the modules approach; each module has its own structure, keeping some conventions for controllers, services, views, config, and tests. This is very useful for team development, so keep all the structure well organized. It is a complete solution that makes use of additional modules such as passport, swig, mongoose, karma, among others. The Passport module Some things about the Passport module must be said; it can be defined as a simple, unobtrusive authentication module. It is a powerful middleware to use with Node; it is very flexible and also modular. It can also adapt easily within applications that use the Express. It has more than 140 alternative authentications and support session persistence; it is very lightweight and extremely simple to be implemented. It provides us with all the necessary structure for authentication, redirects, and validations, and hence it is possible to use the username and password of social networks such as Facebook, Twitter, and others. The following is a simple example of how to use local authentication: var passport = require('passport'), LocalStrategy = require('passport-local').Strategy, User = require('mongoose').model('User');   module.exports = function() { // Use local strategy passport.use(new LocalStrategy({ usernameField: 'username', passwordField: 'password' }, function(username, password, done) { User.findOne({    username: username }, function(err, user) { if (err) { return done(err); } if (!user) {    return done(null, false, {    message: 'Unknown user'    }); } if (!user.authenticate(password)) {    return done(null, false, {    message: 'Invalid password'    }); } return done(null, user); }); } )); }; Here's a sample screenshot of the login page using the MEAN.JS boilerplate with the Passport module: Back to the boilerplates topic; most boilerplates and generators already have the Passport module installed and ready to be configured. Moreover, it has a code generator so that it can be used with Yeoman, which is another essential frontend tool to be added to your tool belt. Yeoman is the most popular code generator for scaffold for modern web applications; it's easy to use and it has a lot of generators such as Backbone, Angular, Karma, and Ember to mention a few. More information can be found at http://yeoman.io/. Generators Generators are for the frontend as gem is for Ruby on Rails. We can create the foundation for any type of application, using available generators. Here's a console output from a Yeoman generator: It is important to bear in mind that we can solve almost all our problems using existing generators in our community. However, if you cannot find the generator you need, you can create your own and make it available to the entire community, such as what has been done with RubyGems by the Rails community. RubyGem, or simply gem, is a library of reusable Ruby files, labeled with a name and a version (a file called gemspec). Keep in mind the Don't Repeat Yourself (DRY) concept; always try to reuse an existing block of code. Don't reinvent the wheel. One of the great advantages of using a code generator structure is that many of the generators that we have currently, have plenty of options for the installation process. With them, you can choose whether or not to use many alternatives/frameworks that usually accompany the generator. The Express generator Another good option is the Express generator, which can be found at https://github.com/expressjs/generator. 
In all versions up to Express Version 4, the generator was already pre-installed and served as a scaffold to begin development. However, in the current version, it was removed and now must be installed as a supplement. They provide us with the express command directly in terminal and are quite useful to start the basic settings for utilization of the framework, as we can see in the following commands: create : . create : ./package.json create : ./app.js create : ./public create : ./public/javascripts create : ./public/images create : ./public/stylesheets create : ./public/stylesheets/style.css create : ./routes create : ./routes/index.js create : ./routes/users.js create : ./views create : ./views/index.jade create : ./views/layout.jade create : ./views/error.jade create : ./bin create : ./bin/www   install dependencies:    $ cd . && npm install   run the app:    $ DEBUG=express-generator ./bin/www Very similar to the Rails scaffold, we can observe the creation of the directory and files, including the public, routes, and views folders that are the basis of any application using Express. Note the npm install command; it installs all dependencies provided with the package.json file, created as follows: { "name": "express-generator", "version": "0.0.1", "private": true, "scripts": {    "start": "node ./bin/www" }, "dependencies": {    "express": "~4.2.0",    "static-favicon": "~1.0.0",    "morgan": "~1.0.0",    "cookie-parser": "~1.0.1",    "body-parser": "~1.0.0",    "debug": "~0.7.4",    "jade": "~1.3.0" } } This has a simple and effective package.json file to build web applications with the Express framework. The speakers API concept Let's go directly to build the example API. To be more realistic, let's write a user story similar to a backlog list in agile methodologies. Let's understand what problem we need to solve by the API. The user history We need a web application to manage speakers on a conference event. The main task is to store the following speaker information on an API: Name Company Track title Description A speaker picture Schedule presentation For now, we need to add, edit, and delete speakers. It is a simple CRUD function using exclusively the API with JSON format files. Creating the package.json file Although not necessarily required at this time, we recommend that you install the Webstorm IDE, as we'll use it throughout the article. Note that we are using the Webstorm IDE with an integrated environment with terminal, Github version control, and Grunt to ease our development. However, you are absolutely free to choose your own environment. From now on, when we mention terminal, we are referring to terminal Integrated WebStorm, but you can access it directly by the chosen independent editor, terminal for Mac and Linux and Command Prompt for Windows. Webstorm is very useful when you are using a Windows environment, because Windows Command Prompt does not have the facility to copy and paste like Mac OS X on the terminal window. Initiating the JSON file Follow the steps to initiate the JSON file: Create a blank folder and name it as conference-api, open your terminal, and place the command: npm init This command will walk you through creating a package.json file with the baseline configuration for our application. Also, this file is the heart of our application; we can control all the dependencies' versions and other important things like author, Github repositories, development dependencies, type of license, testing commands, and much more. 
Almost all commands are questions that guide you to the final process, so when we are done, we'll have a package.json file very similar to this: { "name": "conference-api", "version": "0.0.1", "description": "Sample Conference Web Application", "main": "server.js", "scripts": {    "test": "test" }, "keywords": [    "api" ], "author": "your name here", "license": "MIT" } Now, we need to add the necessary dependencies, such as Node modules, which we will use in our process. You can do this in two ways, either directly via terminal as we did here, or by editing the package.json file. Let's see how it works on the terminal first; let's start with the Express framework. Open your terminal in the api folder and type the following command: npm install express@4.0.0 –-save This command installs the Express module, in this case, Express Version 4, and updates the package.json file and also creates dependencies automatically, as we can see: { "name": "conference-api", "version": "0.0.1", "description": "Sample Conference Web Application", "main": "server.js", "scripts": {    "test": "test" }, "keywords": [    "api" ], "author": "your name here", "license": "MIT", "dependencies": {    "express": "^4.0.0" } } Now, let's add more dependencies directly in the package.json file. Open the file in your editor and add the following lines: { "name": "conference-api", "version": "0.0.1", "description": "Sample Conference Web Application", "main": "server.js", "scripts": {    "test": "test" }, "keywords": [    "api" ], "author": "your name here", "license": "MIT", "engines": {        "node": "0.8.4",        "npm": "1.1.49" }, "dependencies": {    "body-parser": "^1.0.1",    "express": "^4.0.0",    "method-override": "^1.0.0",    "mongoose": "^3.6.13",    "morgan": "^1.0.0",    "nodemon": "^1.2.0" }, } It's very important when you deploy your application using some services such as Travis Cl or Heroku hosting company. It's always good to set up the Node environment. Open the terminal again and type the command: npm install You can actually install the dependencies in two different ways, either directly into the directory of your application or globally with the -g command. This way, you will have the modules installed to use them in any application. When using this option, make sure that you are the administrator of the user machine, as this command requires special permissions to write to the root directory of the user. At the end of the process, we'll have all Node modules that we need for this project; we just need one more action. Let's place our code over a version control, in our case Git. More information about the Git can be found at http://git-scm.com however, you can use any version control as subversion or another. We recommend using Git, as we will need it later to deploy our application in the cloud, more specificly, on Heroku cloud hosting. At this time, our project folder must have the same structure as that of the example shown here: We must point out the utilization of an important module called the Nodemon module. Whenever a file changes it restarts the server automatically; otherwise, you will have to restart the server manually every time you make a change to a file, especially in a development environment that is extremely useful, as it constantly updates our files. Node server with server.js With this structure formed, we will start the creation of the server itself, which is the creation of a main JavaScript file. 
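As a small aside before we write that file: Nodemon only helps if it is actually used to launch the application, so one optional convenience is to expose it through the scripts section of package.json. This is not part of the book's listings — the dev script name below is just a common convention:

{
  "scripts": {
    "start": "node server.js",
    "dev": "nodemon server.js"
  }
}

With this in place, npm run dev starts the server with automatic restarts on every file change, while npm start remains the plain node launch.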
The most common name used is server.js, but it is also very common to use the app.js name, especially in older versions. Let's add this file to the root folder of the project and we will start with the basic server settings. There are many ways to configure our server, and probably you'll find the best one for yourself. As we are still in the initial process, we keep only the basics. Open your editor and type in the following code: // Import the Modules installed to our server var express   = require('express'); var bodyParser = require('body-parser');   // Start the Express web framework var app       = express();   // configure app app.use(bodyParser());   // where the application will run var port     = process.env.PORT || 8080;   // Import Mongoose var mongoose   = require('mongoose');   // connect to our database // you can use your own MongoDB installation at: mongodb://127.0.0.1/databasename mongoose.connect('mongodb://username:password@kahana.mongohq.com:10073/node-api');   // Start the Node Server app.listen(port); console.log('Magic happens on port ' + port); Realize that the line-making connection with MongoDB on our localhost is commented, because we are using an instance of MongoDB in the cloud. In our case, we use MongoHQ, a MongoDB-hosting service. Later on, will see how to connect with MongoHQ. Model with the Mongoose schema Now, let's create our model, using the Mongoose schema to map our speakers on MongoDB. // Import the Mongoose module. var mongoose     = require('mongoose'); var Schema       = mongoose.Schema;   // Set the data types, properties and default values to our Schema. var SpeakerSchema   = new Schema({    name:           { type: String, default: '' },    company:       { type: String, default: '' },    title:         { type: String, default: '' },    description:   { type: String, default: '' },    picture:       { type: String, default: '' },    schedule:       { type: String, default: '' },    createdOn:     { type: Date,   default: Date.now} }); module.exports = mongoose.model('Speaker', SpeakerSchema); Note that on the first line, we added the Mongoose module using the require() function. Our schema is pretty simple; on the left-hand side, we have the property name and on the right-hand side, the data type. We also we set the default value to nothing, but if you want, you can set a different value. The next step is to save this file to our project folder. For this, let's create a new directory named server; then inside this, create another folder called models and save the file as speaker.js. At this point, our folder looks like this: The README.md file is used for Github; as we are using the Git version control, we host our files on Github. Defining the API routes One of the most important aspects of our API are routes that we take to create, read, update, and delete our speakers. 
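A brief step back to server.js before the routes: because the MongoDB connection string is the piece most likely to be mistyped, it can help to listen for Mongoose connection events so that failures are reported clearly on the console. This is not in the book's listing, just an illustrative sketch using the standard Mongoose connection API:

// mongoose is already required at the top of server.js
mongoose.connection.on('connected', function () {
  console.log('Mongoose connected to MongoDB');
});
mongoose.connection.on('error', function (err) {
  console.error('Mongoose connection error: ' + err);
});
mongoose.connection.on('disconnected', function () {
  console.log('Mongoose disconnected');
});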
Our routes are based on the HTTP verb used to access our API, as shown in the following examples: To create record, use the POST verb To read record, use the GET verb To update record, use the PUT verb To delete records, use the DELETE verb So, our routes will be as follows: Routes Verb and Action /api/speakers GET retrieves speaker's records /api/speakers/ POST inserts speakers' record /api/speakers/:speaker_id GET retrieves a single record /api/speakers/:speaker_id PUT updates a single record /api/speakers/:speaker_id DELETE deletes a single record Configuring the API routes: Let's start defining the route and a common message for all requests: var Speaker     = require('./server/models/speaker');   // Defining the Routes for our API   // Start the Router var router = express.Router();   // A simple middleware to use for all Routes and Requests router.use(function(req, res, next) { // Give some message on the console console.log('An action was performed by the server.'); // Is very important using the next() function, without this the Route stops here. next(); });   // Default message when access the API folder through the browser router.get('/', function(req, res) { // Give some Hello there message res.json({ message: 'Hello SPA, the API is working!' }); }); Now, let's add the route to insert the speakers when the HTTP verb is POST: // When accessing the speakers Routes router.route('/speakers')   // create a speaker when the method passed is POST .post(function(req, res) {   // create a new instance of the Speaker model var speaker = new Speaker();   // set the speakers properties (comes from the request) speaker.name = req.body.name; speaker.company = req.body.company; speaker.title = req.body.title; speaker.description = req.body.description; speaker.picture = req.body.picture; speaker.schedule = req.body.schedule;   // save the data received speaker.save(function(err) {    if (err)      res.send(err);   // give some success message res.json({ message: 'speaker successfully created!' }); }); }) For the HTTP GET method, we need this: // get all the speakers when a method passed is GET .get(function(req, res) { Speaker.find(function(err, speakers) {    if (err)      res.send(err);      res.json(speakers); }); }); Note that in the res.json() function, we send all the object speakers as an answer. 
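Once these routes are registered with the Express app and the server is running (both steps follow shortly), the collection route can be smoke-tested without a browser. Here is a minimal sketch using Node's built-in http module; it assumes only the default port 8080 configured in server.js and the /api prefix we are about to register:

// check-speakers.js - request the speakers collection and print the response
var http = require('http');

http.get('http://localhost:8080/api/speakers', function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    console.log('Status: ' + res.statusCode);
    console.log('Body: ' + body); // [] while the database is still empty
  });
}).on('error', function (err) {
  console.error('Request failed: ' + err.message);
});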
Now, we will see the use of different routes in the following steps: To retrieve a single record, we need to pass speaker_id, as shown in our previous table, so let's build this function: // on accessing speaker Route by id router.route('/speakers/:speaker_id')   // get the speaker by id .get(function(req, res) { Speaker.findById(req.params.speaker_id, function(err,    speaker) {    if (err)      res.send(err);      res.json(speaker);    }); }) To update a specific record, we use the PUT HTTP verb and then insert the function: // update the speaker by id .put(function(req, res) { Speaker.findById(req.params.speaker_id, function(err,     speaker) {      if (err)      res.send(err);   // set the speakers properties (comes from the request) speaker.name = req.body.name; speaker.company = req.body.company; speaker.title = req.body.title; speaker.description = req.body.description; speaker.picture = req.body.picture; speaker.schedule = req.body.schedule;   // save the data received speaker.save(function(err) {    if (err)      res.send(err);      // give some success message      res.json({ message: 'speaker successfully       updated!'}); });   }); }) To delete a specific record by its id: // delete the speaker by id .delete(function(req, res) { Speaker.remove({    _id: req.params.speaker_id }, function(err, speaker) {    if (err)      res.send(err);   // give some success message res.json({ message: 'speaker successfully deleted!' }); }); }); Finally, register the Routes on our server.js file: // register the route app.use('/api', router); All necessary work to configure the basic CRUD routes has been done, and we are ready to run our server and begin creating and updating our database. Open a small parenthesis here, for a quick step-by-step process to introduce another tool to create a database using MongoDB in the cloud. There are many companies that provide this type of service but we will not go into individual merits here; you can choose your preference. We chose Compose (formerly MongoHQ) that has a free sandbox for development, which is sufficient for our examples. Using MongoDB in the cloud Today, we have many options to work with MongoDB, from in-house services to hosting companies that provide Platform as a Service (PaaS) and Software as a Service (SaaS). We will present a solution called Database as a Service (DbaaS) that provides database services for highly scalable web applications. Here's a simple step-by-step process to start using a MongoDB instance with a cloud service: Go to https://www.compose.io/. Create your free account. On your dashboard panel, click on add Database. On the right-hand side, choose Sandbox Database. Name your database as node-api. Add a user to your database. Go back to your database title, click on admin. Copy the connection string. The string connection looks like this: mongodb://<user>:<password>@kahana.mongohq.com:10073/node-api. Let's edit the server.js file using the following steps: Place your own connection string to the Mongoose.connect() function. Open your terminal and input the command: nodemon server.js Open your browser and place http://localhost:8080/api. You will see a message like this in the browser: { Hello SPA, the API is working! } Remember the api folder was defined on the server.js file when we registered the routes: app.use('/api', router); But, if you try to access http://localhost:8080/api/speakers, you must have something like this: [] This is an empty array, because we haven't input any data into MongoDB. 
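One practical note on that connection string: hardcoding the Compose credentials in server.js means they also end up in Git. As a sketch of a common alternative — the MONGO_URL variable name is our own choice here, not something the book or Compose requires — the string can be read from an environment variable with a local fallback:

// in server.js, replacing the hardcoded mongoose.connect() call
var connectionString = process.env.MONGO_URL || 'mongodb://127.0.0.1/node-api';
mongoose.connect(connectionString);

The variable can then be set per environment, for example MONGO_URL=mongodb://username:password@kahana.mongohq.com:10073/node-api nodemon server.js, keeping the credentials out of version control.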
We use an extension for the Chrome browser called JSONView. This way, we can view the formatted and readable JSON files. You can install this for free from the Chrome Web Store. Inserting data with Postman To solve our empty database and before we create our frontend interface, let's add some data with the Chrome extension Postman. By the way, it's a very useful browser interface to work with RESTful APIs. As we already know that our database is empty, our first task is to insert a record. To do so, perform the following steps: Open Postman and enter http://localhost:8080/api/speakers. Select the x-www-form-urlencoded option and add the properties of our model: var SpeakerSchema   = new Schema({ name:           { type: String, default: '' }, company:       { type: String, default: '' }, title:         { type: String, default: '' }, description:   { type: String, default: '' }, picture:       { type: String, default: '' }, schedule:       { type: String, default: '' }, createdOn:     { type: Date,   default: Date.now} }); Now, click on the blue button at the end to send the request. With everything going as expected, you should see message: speaker successfully created! at the bottom of the screen, as shown in the following screenshot: Now, let's try http://localhost:8080/api/speakers in the browser again. Now, we have a JSON file like this, instead of an empty array: { "_id": "53a38ffd2cd34a7904000007", "__v": 0, "createdOn": "2014-06-20T02:20:31.384Z", "schedule": "10:20", "picture": "fernando.jpg", "description": "Lorem ipsum dolor sit amet, consectetur     adipisicing elit, sed do eiusmod...", "title": "MongoDB", "company": "Newaeonweb", "name": "Fernando Monteiro" } When performing the same action on Postman, we see the same result, as shown in the following screenshot: Go back to Postman, copy _id from the preceding JSON file and add to the end of the http://localhost:8080/api/speakers/53a38ffd2cd34a7904000005 URL and click on Send. You will see the same object on the screen. Now, let's test the method to update the object. In this case, change the method to PUT on Postman and click on Send. The output is shown in the following screenshot: Note that on the left-hand side, we have three methods under History; now, let's perform the last operation and delete the record. This is very simple to perform; just keep the same URL, change the method on Postman to DELETE, and click on Send. Finally, we have the last method executed successfully, as shown in the following screenshot: Take a look at your terminal, you can see four messages that are the same: An action was performed by the server. We configured this message in the server.js file when we were dealing with all routes of our API. router.use(function(req, res, next) { // Give some message on the console console.log('An action was performed by the server.'); // Is very important using the next() function, without this the Route stops here. next(); }); This way, we can monitor all interactions that take place at our API. Now that we have our API properly tested and working, we can start the development of the interface that will handle all this data. Summary In this article, we have covered almost all modules of the Node ecosystem to develop the RESTful API. Resources for Article: Further resources on this subject: Web Application Testing [article] A look into responsive design frameworks [article] Top Features You Need to Know About – Responsive Web Design [article]
Recursive directives

Packt
22 Dec 2014
13 min read
In this article by Matt Frisbie, the author of AngularJS Web Application Development Cookbook, we will see recursive directives. The power of directives can also be effectively applied when consuming data in a more unwieldy format. Consider the case in which you have a JavaScript object that exists in some sort of recursive tree structure. The view that you will generate for this object will also reflect its recursive nature and will have nested HTML elements that match the underlying data structure. (For more resources related to this topic, see here.) Getting ready Suppose you had a recursive data object in your controller as follows: (app.js)   angular.module('myApp', []) .controller('MainCtrl', function($scope) { $scope.data = {    text: 'Primates',    items: [      {        text: 'Anthropoidea',        items: [          {            text: 'New World Anthropoids'          },          {            text: 'Old World Anthropoids',            items: [              {                text: 'Apes',                items: [                  {                    text: 'Lesser Apes'                  },                  {                    text: 'Greater Apes'                  }                ]              },              {                text: 'Monkeys'              }            ]          }        ]      },      {        text: 'Prosimii'      }    ] }; }); How to do it… As you might imagine, iteratively constructing a view, or only partially using directives to accomplish this, will become extremely messy very quickly. Instead, it would be better if you were able to create a directive that would seamlessly break apart the data recursively, and define and render the sub-HTML fragments cleanly. By cleverly using directives and the $compile service, this exact directive functionality is possible. The ideal directive in this scenario will be able to handle the recursive object without any additional parameters or outside assistance in parsing and rendering the object. So, in the main view, your directive will look something like this: <recursive value="nestedObject"></recursive> The directive accepts an isolate scope '=' binding to the parent scope object, which will remain structurally identical as the directive descends through the recursive object. The $compile service You will need to inject the $compile service in order to make the recursive directive work. The reason for this is that each level of the directive can instantiate directives inside it and convert them from an uncompiled template to real DOM material. The angular.element() method The angular.element() method can be thought of as the jQuery $() equivalent. It accepts a string template or DOM fragment and returns a jqLite object that can be modified, inserted, or compiled for your purposes. If the jQuery library is present when the application is initialized, AngularJS will use that instead of jqLite.
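To make the relationship between angular.element() and $compile concrete before applying it to the tree, here is a stripped-down sketch of the same pattern the recipe relies on: turn a string into an element, compile it against a scope, and swap it into the DOM. The module and directive names are purely illustrative:

angular.module('demo', []).directive('greeting', function ($compile) {
  return {
    restrict: 'E',
    link: function (scope, el) {
      scope.name = 'AngularJS';
      // wrap an HTML string, link it to the scope, and replace <greeting>
      var template = angular.element('<p>Hello, {{ name }}!</p>');
      el.replaceWith($compile(template)(scope));
    }
  };
});

Dropping a <greeting></greeting> element into a page bootstrapped with ng-app="demo" renders "Hello, AngularJS!"; the recursive directive below does exactly this, only with a cached template and a scope that points at one node of the tree.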
If you use the AngularJS template cache, retrieved templates will already exist as if you had called the angular.element() method on the template text. The $templateCache Inside a directive, it's possible to create a template using angular.element() and a string of HTML similar to an underscore.js template. However, it's completely unnecessary and quite unwieldy to use compared to AngularJS templates. When you declare a template and register it with AngularJS, it can be accessed through the injected $templateCache, which acts as a key-value store for your templates. The recursive template is as follows: <script type="text/ng-template" id="recursive.html"> <span>{{ val.text }}</span> <button ng-click="delSubtree()">delete</button> <ul ng-if="isParent" style="margin-left:30px">    <li ng-repeat="item in val.items">      <tree val="item" parent-data="val.items"></tree>    </li> </ul> </script> The <span> and <button> elements are present at each instance of a node, and they present the data at that node as well as an interface to the click event (which we will define in a moment) that will destroy it and all its children. Following these, the conditional <ul> element renders only if the isParent flag is set in the scope, and it repeats through the items array, recursing the child data and creating new instances of the directive. Here, you can see the full template definition of the directive: <tree val="item" parent-data="val.items"></tree> Not only does the directive take a val attribute for the local node data, but you can also see its parent-data attribute, which is the point of scope indirection that allows the tree structure. To make more sense of this, examine the following directive code: (app.js)   .directive('tree', function($compile, $templateCache) { return {    restrict: 'E',    scope: {      val: '=',      parentData: '='    },    link: function(scope, el, attrs) {      scope.isParent = angular.isArray(scope.val.items)      scope.delSubtree = function() {        if(scope.parentData) {            scope.parentData.splice(            scope.parentData.indexOf(scope.val),            1          );        }        scope.val={};      }        el.replaceWith(        $compile(          $templateCache.get('recursive.html')        )(scope)      );      } }; }); With all of this, if you provide the recursive directive with the data object provided at the beginning of this article, it will result in the following (presented here without the auto-added AngularJS comments and directives): (index.html – uncompiled)   <div ng-app="myApp"> <div ng-controller="MainCtrl">    <tree val="data"></tree> </div>    <script type="text/ng-template" id="recursive.html">    <span>{{ val.text }}</span>    <button ng-click="deleteSubtree()">delete</button>    <ul ng-if="isParent" style="margin-left:30px">      <li ng-repeat="item in val.items">        <tree val="item" parent-data="val.items"></tree>      </li>    </ul> </script> </div> The recursive nature of the directive templates enables nesting, and when compiled using the recursive data object located in the wrapping controller, it will compile into the following HTML: (index.html - compiled)   <div ng-controller="MainController"> <span>Primates</span> <button ng-click="delSubtree()">delete</button> <ul ng-if="isParent" style="margin-left:30px">    <li ng-repeat="item in val.items">      <span>Anthropoidea</span>      <button ng-click="delSubtree()">delete</button>      <ul ng-if="isParent" style="margin-left:30px">        <li ng-repeat="item in val.items"> 
         <span>New World Anthropoids</span>          <button ng-click="delSubtree()">delete</button>        </li>        <li ng-repeat="item in val.items">          <span>Old World Anthropoids</span>          <button ng-click="delSubtree()">delete</button>          <ul ng-if="isParent" style="margin-left:30px">            <li ng-repeat="item in val.items">              <span>Apes</span>              <button ng-click="delSubtree()">delete</button>              <ul ng-if="isParent" style="margin-left:30px">                <li ng-repeat="item in val.items">                  <span>Lesser Apes</span>                  <button ng-click="delSubtree()">delete</button>                </li>                <li ng-repeat="item in val.items">                  <span>Greater Apes</span>                  <button ng-click="delSubtree()">delete</button>                </li>              </ul>            </li>            <li ng-repeat="item in val.items">              <span>Monkeys</span>              <button ng-click="delSubtree()">delete</button>            </li>          </ul>         </li>      </ul>    </li>    <li ng-repeat="item in val.items">      <span>Prosimii</span>      <button ng-click="delSubtree()">delete</button>    </li> </ul> </div> JSFiddle: http://jsfiddle.net/msfrisbie/ka46yx4u/ How it works… The definition of the isolate scope through the nested directives described in the previous section allows all or part of the recursive objects to be bound through parentData to the appropriate directive instance, all the while maintaining the nested connectedness afforded by the directive hierarchy. When a parent node is deleted, the lower directives are still bound to the data object and the removal propagates through cleanly. The meatiest and most important part of this directive is, of course, the link function. Here, the link function determines whether the node has any children (which simply checks for the existence of an array in the local data node) and declares the deleting method, which simply removes the relevant portion from the recursive object and cleans up the local node. Up until this point, there haven't been any recursive calls, and there shouldn't need to be. If your directive is constructed correctly, AngularJS data binding and inherent template management will take care of the template cleanup for you. This, of course, leads into the final line of the link function, which is broken up here for readability: el.replaceWith( $compile(    $templateCache.get('recursive.html') )(scope) ); Recall that in a link function, the second parameter is the jqLite-wrapped DOM object that the directive is linking—here, the <tree> element. This exposes to you a subset of jQuery object methods, including replaceWith(), which you will use here. The top-level instance of the directive will be replaced by the recursively-defined template, and this will carry down through the tree. At this point, you should have an idea of how the recursive structure is coming together. The element parameter needs to be replaced with a recursively-compiled template, and for this, you will employ the $compile service. This service accepts a template as a parameter and returns a function that you will invoke with the current scope inside the directive's link function. The template is retrieved from $templateCache by the recursive.html key, and then it's compiled. When the compiler reaches the nested <tree> directive, the recursive directive is realized all the way down through the data in the recursive object. 
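Because every level of the tree stays bound to the underlying data object, growing the tree needs no extra directive code either; mutating the recursive object from the controller is enough. A small illustrative addition to MainCtrl (the addSubtree name and default text are our own, not part of the recipe) might look like this:

// push a new child into a node that already renders children;
// the nested <tree> directives pick the change up via data binding
$scope.addSubtree = function (node) {
  if (angular.isArray(node.items)) {
    node.items.push({ text: 'New node' });
  }
};

Two caveats apply: the recipe's link function computes isParent only once, so turning a leaf into a parent this way would also require re-evaluating that flag (or watching val.items in the template instead), and calling the function from inside recursive.html would mean passing it through the directive's isolate scope, since the template cannot see the controller scope directly.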
Summary This article demonstrates the power of constructing a directive to convert a complex data object into a large DOM object. Relevant portions can be broken into individual templates, handled with distributed directive logic, and combined together in an elegant fashion to maximize modularity and reusability. Resources for Article:  Further resources on this subject: Working with Live Data and AngularJS [article] Angular Zen [article] AngularJS Project [article]

Adding WebSockets

Packt
22 Dec 2014
22 min read
In this article, Michal Cmil, Michal Matloka and Francesco Marchioni, authors for the book Java EE 7 Development with WildFly we will explore the new possibilities that they provide to a developer. In our ticket booking applications, we already used a wide variety of approaches to inform the clients about events occurring on the server side. These include the following: JSF polling Java Messaging Service (JMS) messages REST requests Remote EJB requests All of them, besides JMS, were based on the assumption that the client will be responsible for asking the server about the state of the application. In some cases, such as checking whether someone else has not booked a ticket during our interaction with the application, this is a wasteful strategy; the server is in the position to inform clients when it is needed. What's more, it feels like the developer must hack the HTTP protocol to get a notification from a server to the client. This is a requirement that has to be implemented in most web applications, and therefore, deserves a standardized solution that can be applied by the developers in multiple projects without much effort. WebSockets are changing the game for developers. They replace the request-response paradigm in which the client always initiates the communication with a two-point bidirectional messaging system. After the initial connection, both sides can send independent messages to each other as long as the session is alive. This means that we can easily create web applications that will automatically refresh their state with up-to-date data from the server. You probably have already seen this kind of behavior in Google Docs or live broadcasts on news sites. Now we can achieve the same effect in a simpler and more efficient way than in earlier versions of Java Enterprise Edition. In this article, we will try to leverage these new, exciting features that come with WebSockets in Java EE 7 thanks to JSR 356 (https://jcp.org/en/jsr/detail?id=356) and HTML5. In this article, you will learn the following topics: How WebSockets work How to create a WebSocket endpoint in Java EE 7 How to create an HTML5/AngularJS client that will accept push notifications from an application deployed on WildFly (For more resources related to this topic, see here.) An overview of WebSockets A WebSocket session between the client and server is built upon a standard TCP connection. Although the WebSocket protocol has its own control frames (mainly to create and sustain the connection) coded by the Internet Engineering Task Force in the RFC 6455 (http://tools.ietf.org/html/rfc6455), the peers are not obliged to use any specific format to exchange application data. You may use plaintext, XML, JSON, or anything else to transmit your data. As you probably remember, this is quite different from SOAP-based WebServices, which had bloated specifications of the exchange protocol. The same goes for RESTful architectures; we no longer have the predefined verb methods from HTTP (GET, PUT, POST, and DELETE), status codes, and the whole semantics of an HTTP request. This liberty means that WebSockets are pretty low level compared to the technologies that we have used up to this point, but thanks to this, the communication overhead is minimal. The protocol is less verbose than SOAP or RESTful HTTP, which allows us to achieve higher performance. This, however, comes with a price. 
We usually like to use the features of higher-level protocols (such as horizontal scaling and rich URL semantics), and with WebSockets, we would need to write them by hand. For standard CRUD-like operations, it would be easier to use a REST endpoint than create everything from scratch. What do we get from WebSockets compared to the standard HTTP communication? First of all, a direct connection between two peers. Normally, when you connect to a web server (which can, for instance, handle a REST endpoint), every subsequent call is a new TCP connection, and your machine is treated like it is a different one every time you make a request. You can, of course, simulate a stateful behavior (so that the server will recognize your machine between different requests) using cookies and increase the performance by reusing the same connection in a short period of time for a specific client, but basically, it is a workaround to overcome the limitations of the HTTP protocol. Once you establish a WebSocket connection between a server and client, you can use the same session (and underlying TCP connection) during the whole communication. Both sides are aware of it and can send data independently in a full-duplex manner (both sides can send and receive data simultaneously). Using plain HTTP, there is no way for the server to spontaneously start sending data to the client without any request from its side. What's more, the server is aware of all of its connected WebSocket clients, and can even send data between them! The current solution that includes trying to simulate real-time data delivery using HTTP protocol can put a lot of stress on the web server. Polling (asking the server about updates), long polling (delaying the completion of a request to the moment when an update is ready), and streaming (a Comet-based solution with a constantly open HTTP response) are all ways to hack the protocol to do things that it wasn't designed for and have their own limitations. Thanks to the elimination of unnecessary checks, WebSockets can heavily reduce the number of HTTP requests that have to be handled by the web server. The updates are delivered to the user with a smaller latency because we only need one round-trip through the network to get the desired information (it is pushed by the server immediately). All of these features make WebSockets a great addition to the Java EE platform, which fills the gaps needed to easily finish specific tasks, such as sending updates, notifications, and orchestrating multiple client interactions. Despite these advantages, WebSockets are not intended to replace REST or SOAP WebServices. They do not scale so well horizontally (they are hard to distribute because of their stateful nature), and they lack most of the features that are utilized in web applications. URL semantics, complex security, compression, and many other features are still better realized using other technologies. How does WebSockets work To initiate a WebSocket session, the client must send an HTTP request with an Upgrade: websocket header field. This informs the server that the peer client has asked the server to switch to the WebSocket protocol. You may notice that the same happens in WildFly for Remote EJBs; the initial connection is made using an HTTP request, and is later switched to the remote protocol thanks to the Upgrade mechanism. The standard Upgrade header field can be used to handle any protocol, other than HTTP, which is accepted by both sides (the client and server). 
In WildFly, this allows you to reuse the HTTP port (80/8080) for other protocols and therefore minimise the number of required ports that should be configured. If the server can "understand" the WebSocket protocol, the client and server then proceed with the handshaking phase. They negotiate the version of the protocol, exchange security keys, and if everything goes well, the peers can go to the data transfer phase. From now on, the communication is only done using the WebSocket protocol. It is not possible to exchange any HTTP frames using the current connection. The whole life cycle of a connection can be summarized in the following diagram: A sample HTTP request from a JavaScript application to a WildFly server would look similar to this: GET /ticket-agency-websockets/tickets HTTP/1.1 Upgrade: websocket Connection: Upgrade Host: localhost:8080 Origin: http://localhost:8080Pragma: no-cache Cache-Control: no-cache Sec-WebSocket-Key: TrjgyVjzLK4Lt5s8GzlFhA== Sec-WebSocket-Version: 13 Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits, x-webkit-deflate-frame User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36 Cookie: [45 bytes were stripped] We can see that the client requests an upgrade connection with WebSocket as the target protocol on the URL /ticket-agency-websockets/tickets. It additionally passes information about the requested version and key. If the server supports the request protocol and all the required data is passed by the client, then it would respond with the following frame: HTTP/1.1 101 Switching Protocols X-Powered-By: Undertow 1 Server: Wildfly 8 Origin: http://localhost:8080 Upgrade: WebSocket Sec-WebSocket-Accept: ZEAab1TcSQCmv8RsLHg4RL/TpHw= Date: Sun, 13 Apr 2014 17:04:00 GMT Connection: Upgrade Sec-WebSocket-Location: ws://localhost:8080/ticket-agency-websockets/tickets Content-Length: 0 The status code of the response is 101 (switching protocols) and we can see that the server is now going to start using the WebSocket protocol. The TCP connection initially used for the HTTP request is now the base of the WebSocket session and can be used for transmissions. If the client tries to access a URL, which is only handled by another protocol, then the server can ask the client to do an upgrade request. The server uses the 426 (upgrade required) status code in such cases. The initial connection creation has some overhead (because of the HTTP frames that are exchanged between the peers), but after it is completed, new messages have only 2 bytes of additional headers. This means that when we have a large number of small messages, WebSocket will be an order of magnitude faster than REST protocols simply because there is less data to transmit! If you are wondering about the browser support of WebSockets, you can look it up at http://caniuse.com/websockets. All new versions of major browsers currently support WebSockets; the total coverage is estimated (at the time of writing) at 74 percent. You can see this in the following screenshot: After this theoretical introduction, we are ready to jump into action. We can now create our first WebSocket endpoint! 
Creating our first endpoint Let's start with a simple example: package com.packtpub.wflydevelopment.chapter8.boundary; import javax.websocket.EndpointConfig; import javax.websocket.OnOpen; import javax.websocket.Session; import javax.websocket.server.ServerEndpoint; import java.io.IOException; @ServerEndpoint("/hello") public class HelloEndpoint {    @OnOpen    public void open(Session session, EndpointConfig conf) throws IOException {        session.getBasicRemote().sendText("Hi!");    } } Java EE 7 specification has taken into account developer friendliness, which can be clearly seen in the given example. In order to define your WebSocket endpoint, you just need a few annotations on a Plain Old Java Object (POJO). The first annotation @ServerEndpoint("/hello") defines a path to your endpoint. It's a good time to discuss the endpoint's full address. We placed this sample in the application named ticket-agency-websockets. During the deployment of application, you can spot information in the WildFly log about endpoints creation, as shown in the following command line: 02:21:35,182 INFO [io.undertow.websockets.jsr] (MSC service thread 1-7) UT026003: Adding annotated server endpoint class com.packtpub.wflydevelopment.chapter8.boundary.FirstEndpoint for path /hello 02:21:35,401 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-7) Deploying javax.ws.rs.core.Application: class com.packtpub.wflydevelopment.chapter8.webservice.JaxRsActivator$Proxy$_$$_WeldClientProxy 02:21:35,437 INFO [org.wildfly.extension.undertow] (MSC service thread 1-7) JBAS017534: Registered web context: /ticket-agency-websockets The full URL of the endpoint is ws://localhost:8080/ticket-agency-websockets/hello, which is just a concatenation of the server and application address with an endpoint path on an appropriate protocol. The second used annotation @OnOpen defines the endpoint behavior when the connection from the client is opened. It's not the only behavior-related annotation of the WebSocket endpoint. Let's look to the following table: Annotation Description @OnOpen The connection is open. With this annotation, we can use the Session and EndpointConfig parameters. The first parameter represents the connection to the user and allows further communication. The second one provides some client-related information. @OnMessage This annotation is executed when a message from the client is being received. In such a method, you can just have Session and for example, the String parameter, where the String parameter represents the received message. @OnError There are bad times when an error occurs. With this annotation, you can retrieve a Throwable object apart from standard Session. @OnClose When the connection is closed, it is possible to get some data concerning this event in the form of the CloseReason type object.  There is one more interesting line in our HelloEndpoint. Using the Session object, it is possible to communicate with the client. This clearly shows that in WebSockets, two-directional communication is easily possible. In this example, we decided to respond to a connected user synchronously (getBasicRemote()) with just a text message Hi! (sendText (String)). Of course, it's also possible to communicate asynchronously and send, for example, sending binary messages using your own binary bandwidth saving protocol. We will present some of these processes in the next example. Expanding our client application It's time to show how you can leverage the WebSocket features in real life. 
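Before expanding the booking application itself, the /hello endpoint above is easy to smoke-test straight from a browser's JavaScript console; the sketch below assumes only the local WildFly address used throughout this article:

// open a session against HelloEndpoint and log the greeting it pushes
var ws = new WebSocket('ws://localhost:8080/ticket-agency-websockets/hello');
ws.onopen = function () {
  console.log('Connected, waiting for the greeting...');
};
ws.onmessage = function (event) {
  console.log('Server said: ' + event.data); // should print "Hi!"
  ws.close();
};
ws.onerror = function () {
  console.log('Could not reach the endpoint - is the application deployed?');
};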
Since we're just adding a feature to our previous app, we will only describe the changes we will introduce to it. In this example, we would like to be able to inform all current users about other purchases. This means that we have to store information about active sessions. Let's start with the registry type object, which will serve this purpose. We can use a Singleton session bean for this task, as shown in the following code: @Singleton public class SessionRegistry {    private final Set<Session> sessions = new HashSet<>();    @Lock(LockType.READ)    public Set<Session> getAll() {        return Collections.unmodifiableSet(sessions);    }    @Lock(LockType.WRITE)    public void add(Session session) {        sessions.add(session);    }    @Lock(LockType.WRITE)    public void remove(Session session) {        sessions.remove(session);    } } We could use Collections.synchronizedSet from standard Java libraries about container-based concurrency. In SessionRegistry, we defined some basic methods to add, get, and remove sessions. For the sake of collection thread safety during retrieval, we return an unmodifiable view. We defined the registry, so now we can move to the endpoint definition. We will need a POJO, which will use our newly defined registry as shown: @ServerEndpoint("/tickets")public class TicketEndpoint {   @Inject   private SessionRegistry sessionRegistry;   @OnOpen   public void open(Session session, EndpointConfig conf) {       sessionRegistry.add(session);   }   @OnClose   public void close(Session session, CloseReason reason) {       sessionRegistry.remove(session);   }   public void send(@Observes Seat seat) {       sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendText(toJson(seat)));   }   private String toJson(Seat seat) {       final JsonObject jsonObject = Json.createObjectBuilder()               .add("id", seat.getId())             .add("booked", seat.isBooked())               .build();       return jsonObject.toString();   } } Our endpoint is defined in the /tickets address. We injected a SessionRepository to our endpoint. During @OnOpen, we add Sessions to the registry, and during @OnClose, we just remove them. Message sending is performed on the CDI event (the @Observers annotation), which is already fired in our code during TheatreBox.buyTicket(int). In our send method, we retrieve all sessions from SessionRepository, and for each of them, we asynchronously send information about booked seats. We don't really need information about all the Seat fields to realize this feature. Instead, we decided to use a minimalistic JSON object, which provides only the required data. To do this, we used the new Java API for JSON Processing (JSR-353). Using a fluent-like API, we're able to create a JSON object and add two fields to it. Then, we just convert JSON to the string, which is sent in a text message. Because in our example we send messages in response to a CDI event, we don't have (in the event handler) an out-of-the-box reference to any of the sessions. We have to use our sessionRegistry object to access the active ones. However, if we would like to do the same thing but, for example, in the @OnMessage method, then it is possible to get all active sessions just by executing the session.getOpenSessions() method. These are all the changes required to perform on the backend side. Now, we have to modify our AngularJS frontend to leverage the added feature. 
The good news is that JavaScript already includes classes that can be used to perform WebSocket communication! There are a few lines of code we have to add inside the module defined in the seat.js file, which are as follows: var ws = new WebSocket("ws://localhost:8080/ticket-agency-websockets/tickets"); ws.onmessage = function (message) {    var receivedData = message.data;    var bookedSeat = JSON.parse(receivedData);    $scope.$apply(function () {        for (var i = 0; i < $scope.seats.length; i++) {            if ($scope.seats[i].id === bookedSeat.id) {                $scope.seats[i].booked = bookedSeat.booked;                break;            }        }    }); }; The code is very simple. We just create the WebSocket object using the URL to our endpoint, and then we define the onmessage function in that object. During the function execution, the received message is automatically parsed from the JSON to JavaScript object. Then, in $scope.$apply, we just iterate through our seats, and if the ID matches, we update the booked state. We have to use $scope.$apply because we are touching an Angular object from outside the Angular world (the onmessage function). Modifications performed on $scope.seats are automatically visible on the website. With this, we can just open our ticket booking website in two browser sessions, and see that when one user buys a ticket, the second users sees almost instantly that the seat state is changed to booked. We can enhance our application a little to inform users if the WebSocket connection is really working. Let's just define onopen and onclose functions for this purpose: ws.onopen = function (event) {    $scope.$apply(function () {        $scope.alerts.push({            type: 'info',            msg: 'Push connection from server is working'        });    }); }; ws.onclose = function (event) {    $scope.$apply(function () {        $scope.alerts.push({            type: 'warning',            msg: 'Error on push connection from server '        });    }); }; To inform users about a connection's state, we push different types of alerts. Of course, again we're touching the Angular world from the outside, so we have to perform all operations on Angular from the $scope.$apply function. Running the described code results in the notification, which is visible in the following screenshot:   However, if the server fails after opening the website, you might get an error as shown in the following screenshot:   Transforming POJOs to JSON In our current example, we transformed our Seat object to JSON manually. Normally, we don't want to do it this way; there are many libraries that will do the transformation for us. One of them is GSON from Google. Additionally, we can register an encoder/decoder class for a WebSocket endpoint that will do the transformation automatically. Let's look at how we can refactor our current solution to use an encoder. First of all, we must add GSON to our classpath. The required Maven dependency is as follows: <dependency>    <groupId>com.google.code.gson</groupId>    <artifactId>gson</artifactId>    <version>2.3</version> </dependency> Next, we need to provide an implementation of the javax.websocket.Encoder.Text interface. There are also versions of the javax.websocket.Encoder.Text interface for binary and streamed data (for both binary and text formats). A corresponding hierarchy of interfaces is also available for decoders (javax.websocket.Decoder). Our implementation is rather simple. 
Transforming POJOs to JSON

In our current example, we transformed our Seat object to JSON manually. Normally, we don't want to do it this way; there are many libraries that will do the transformation for us. One of them is GSON from Google. Additionally, we can register an encoder/decoder class for a WebSocket endpoint that will do the transformation automatically. Let's look at how we can refactor our current solution to use an encoder. First of all, we must add GSON to our classpath. The required Maven dependency is as follows:

<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.3</version>
</dependency>

Next, we need to provide an implementation of the javax.websocket.Encoder.Text interface. There are also variants of the javax.websocket.Encoder interface for binary data and for streamed data (in both binary and text forms), and a corresponding hierarchy of interfaces is available for decoders (javax.websocket.Decoder). Our implementation is rather simple and is shown in the following code snippet:

public class JSONEncoder implements Encoder.Text<Object> {

    private Gson gson;

    @Override
    public void init(EndpointConfig config) {
        gson = new Gson();
    }

    @Override
    public void destroy() {
        // do nothing
    }

    @Override
    public String encode(Object object) throws EncodeException {
        return gson.toJson(object);
    }
}

First, we create an instance of Gson in the init method; this action is executed once, when the endpoint is created. Next, in the encode method, which is called every time we send an object through the endpoint, we use Gson's toJson method to create JSON from the object. This is quite concise when we consider how reusable this little class is. If you want more control over the JSON generation process, you can use the GsonBuilder class to configure the Gson object before creating it. We have the encoder in place; now it's time to alter our endpoint:

@ServerEndpoint(value = "/tickets", encoders = {JSONEncoder.class})
public class TicketEndpoint {

    @Inject
    private SessionRegistry sessionRegistry;

    @OnOpen
    public void open(Session session, EndpointConfig conf) {
        sessionRegistry.add(session);
    }

    @OnClose
    public void close(Session session, CloseReason reason) {
        sessionRegistry.remove(session);
    }

    public void send(@Observes Seat seat) {
        sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendObject(seat));
    }
}

The first change is on the @ServerEndpoint annotation: we have to define the list of supported encoders, so we simply pass our JSONEncoder.class wrapped in an array. Additionally, because we now use more than one annotation attribute, we pass the endpoint path through the value attribute. Earlier, we used the sendText method to pass a string containing manually created JSON. Now we want to send an object and let the encoder handle the JSON generation, so we use the getAsyncRemote().sendObject() method. And that's all. Our endpoint is ready to be used. It works the same as the earlier version, but now our objects are fully serialized to JSON, so they contain every field, not only id and booked.

After deploying the server, you can connect to the WebSocket endpoint using one of the Chrome extensions, for instance, the Dark WebSocket terminal from the Chrome store (use the ws://localhost:8080/ticket-agency-websockets/tickets address). When you book tickets using the web application, the WebSocket terminal should show something similar to the output in the following screenshot:

Of course, it is possible to use formats other than JSON. If you want to achieve better performance (in terms of serialization time and payload size), you may want to try binary serializers such as Kryo (https://github.com/EsotericSoftware/kryo). They may not be supported by JavaScript, but they can come in handy if you would like to use WebSockets with other kinds of clients too. Tyrus (https://tyrus.java.net/) is the reference implementation of the WebSocket standard for Java; you can use it in your standalone desktop applications. In that case, besides the encoder (which is used to send messages), you would also need to create a decoder, which can automatically transform incoming messages.
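Such a decoder can mirror our encoder. The following is a minimal Gson-based sketch (the class name is ours, and it assumes the Seat class is available on the client side with fields matching the serialized JSON); it could be registered on a client endpoint with @ClientEndpoint(decoders = {JSONDecoder.class}):

import javax.websocket.DecodeException;
import javax.websocket.Decoder;
import javax.websocket.EndpointConfig;

import com.google.gson.Gson;
import com.google.gson.JsonSyntaxException;

public class JSONDecoder implements Decoder.Text<Seat> {

    private Gson gson;

    @Override
    public void init(EndpointConfig config) {
        gson = new Gson();
    }

    @Override
    public void destroy() {
        // nothing to clean up
    }

    @Override
    public boolean willDecode(String s) {
        // cheap sanity check; a stricter implementation could validate the JSON structure
        return s != null && s.trim().startsWith("{");
    }

    @Override
    public Seat decode(String s) throws DecodeException {
        try {
            return gson.fromJson(s, Seat.class);
        } catch (JsonSyntaxException e) {
            throw new DecodeException(s, "Invalid seat payload", e);
        }
    }
}

With a decoder registered, an @OnMessage method on the receiving side can declare a Seat parameter directly instead of parsing raw text.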
An alternative to WebSockets

The example we presented in this article could also be implemented using an older, lesser-known technology named Server-Sent Events (SSE). SSE allows one-way communication from the server to the client over HTTP. It is much simpler than WebSockets, but it has built-in support for features such as automatic reconnection and event identifiers. WebSockets are definitely more powerful, but they are not the only way to push events, so when you need to implement notifications from the server side, keep SSE in mind.

Another option is to explore the mechanisms built around the Comet techniques. Multiple implementations are available, and most of them use different transport methods to achieve their goals. A comprehensive comparison is available at http://cometdaily.com/maturity.html.

Summary

In this article, we introduced a new, low-level type of communication. We presented how it works underneath and how it compares to SOAP and REST, and we discussed how this new approach changes the development of web applications.

Our ticket booking application was further enhanced to show users the changing state of the seats using push-like notifications. The new additions required very few code changes in our existing project, considering how much we were able to achieve with them. The smooth integration of Java EE 7 WebSockets with the AngularJS application is another good showcase of the flexibility that comes with the new version of the Java EE platform.

Resources for Article:

Further resources on this subject:
Using the WebRTC Data API [Article]
Implementing Stacks using JavaScript [Article]
Applying WebRTC for Education and E-learning [Article]