Model-View-ViewModel

Packt
02 Mar 2015
24 min read
In this article, by Einar Ingebrigtsen, author of the book, SignalR Blueprints, we will focus on a different programming model for client development: Model-View-ViewModel (MVVM). It will reiterate what you have already learned about SignalR, but you will also start to see a recurring theme in how you should architect decoupled software that adheres to the SOLID principles. It will also show the benefit of thinking in single page application terms (often referred to as Single Page Application (SPA)), and how SignalR really fits well with this idea. (For more resources related to this topic, see here.) The goal – an imagined dashboard A counterpart to any application is often a part of monitoring its health. Is it running? and are there any failures?. Getting this information in real time when the failure occurs is important and also getting some statistics from it is interesting. From a SignalR perspective, we will still use the hub abstraction to do pretty much what we have been doing, but the goal is to give ideas of how and what we can use SignalR for. Another goal is to dive into the architectural patterns, making it ready for larger applications. MVVM allows better separation and is very applicable for client development in general. A question that you might ask yourself is why KnockoutJS instead of something like AngularJS? It boils down to the personal preference to a certain degree. AngularJS is described as a MVW where W stands for Whatever. I find AngularJS less focused on the same things I focus on and I also find it very verbose to get it up and running. I'm not in any way an expert in AngularJS, but I have used it on a project and I found myself writing a lot to make it work the way I wanted it to in terms of MVVM. However, I don't think it's fair to compare the two. KnockoutJS is very focused in what it's trying to solve, which is just a little piece of the puzzle, while AngularJS is a full client end-to-end framework. On this note, let's just jump straight to it. Decoupling it all MVVM is a pattern for client development that became very popular in the XAML stack, enabled by Microsoft based on Martin Fowlers presentation model. Its principle is that you have a ViewModel that holds the state and exposes behavior that can be utilized from a view. The view observes any changes of the state the ViewModel exposes, making the ViewModel totally unaware that there is a view. The ViewModel is decoupled and can be put in isolation and is perfect for automated testing. As part of the state that the ViewModel typically holds is the model part, which is something it usually gets from the server, and a SignalR hub is the perfect transport to get this. It boils down to recognizing the different concerns that make up the frontend and separating it all. This gives us the following diagram: Back to basics This time we will go back in time, going down what might be considered a more purist path; use the browser elements (HTML, JavaScript, and CSS) and don't rely on any server-side rendering. Clients today are powerful and very capable and offloading the composition of what the user sees onto the client frees up server resources. You can also rely on the infrastructure of the Web for caching with static HTML files not rendered by the server. In fact, you could actually put these resources on a content delivery network, making the files available as close as possible to the end user. This would result in better load times for the user. 
You might have other reasons to perform server-side rendering and not just plain HTML. Leveraging existing infrastructure or third-party party tools could be those reasons. It boils down to what's right for you. But this particular sample will focus on things that the client can do. Anyways, let's get started. Open Visual Studio and create a new project by navigating to FILE | New | Project. The following dialog box will show up: From the left-hand side menu, select Web and then ASP.NET Web Application. Enter Chapter4 in the Name textbox and select your location. Select the Empty template from the template selector and make sure you deselect the Host in the cloud option. Then, click on OK, as shown in the following screenshot: Setting up the packages First, we want Twitter bootstrap. To get this, follow these steps: Add a NuGet package reference. Right-click on References in Solution Explorer and select Manage NuGet Packages and type Bootstrap in the search dialog box. Select it and then click on Install. We want a slightly different look, so we'll download one of the many bootstrap themes out here. Add a NuGet package reference called metro-bootstrap. As jQuery is still a part of this, let's add a NuGet package reference to it as well. For the MVVM part, we will use something called KnockoutJS; add it through NuGet as well. Add a NuGet package reference, as in the previous steps, but this time, type SignalR in the search dialog box. Find the package called Microsoft ASP.NET SignalR. Making any SignalR hubs available for the client Add a file called Startup.cs file to the root of the project. Add a Configuration method that will expose any SignalR hubs, as follows: public void Configuration(IAppBuilder app) { app.MapSignalR(); } At the top of the Startup.cs file, above the namespace declaration, but right below the using statements, add the following code:  [assembly: OwinStartupAttribute(typeof(Chapter4.Startup))] Knocking it out of the park KnockoutJS is a framework that implements a lot of the principles found in MVVM and makes it easier to apply. We're going to use the following two features of KnockoutJS, and it's therefore important to understand what they are and what significance they have: Observables: In order for a view to be able to know when state change in a ViewModel occurs, KnockoutJS has something called an observable for single objects or values and observable array for arrays. BindingHandlers: In the view, the counterparts that are able to recognize the observables and know how to deal with its content are known as BindingHandlers. We create binding expression in the view that instructs the view to get its content from the properties found in the binding context. The default binding context will be the ViewModel, but there are more advanced scenarios where this changes. In fact, there is a BindingHandler that enables you to specify the context at any given time called with. Our single page Whether one should strive towards having an SPA is widely discussed on the Web these days. My opinion on the subject, in the interest of the user, is that we should really try to push things in this direction. Having not to post back and cause a full reload of the page and all its resources and getting into the correct state gives the user a better experience. Some of the arguments to perform post-backs every now and then go in the direction of fixing potential memory leaks happening in the browser. 
Although, the technique is sound and the result is right, it really just camouflages a problem one has in the system. However, as with everything, it really depends on the situation. At the core of an SPA is a single page (pun intended), which is usually the index.html file sitting at the root of the project. Add the new index.html file and edit it as follows: Add a new HTML file (index.html) at the root of the project by right- clicking on the Chapter4 project in Solution Explorer. Navigate to Add | New Item | Web from the left-hand side menu, and then select HTML Page and name it index.html. Finally, click on Add. Let's put in the things we've added dependencies to, starting with the style sheets. In the index.html file, you'll find the <head> tag; add the following code snippet under the <title></title> tag: <link href="Content/bootstrap.min.css" rel="stylesheet" /> <link href="Content/metro-bootstrap.min.css" rel="stylesheet" /> Next, add the following code snippet right beneath the preceding code: <script type="text/javascript" src="Scripts/jquery- 1.9.0.min.js"></script> <script type="text/javascript" src="Scripts/jquery.signalR- 2.1.1.js"></script> <script type="text/javascript" src="signalr/hubs"></script> <script type="text/javascript" src="Scripts/knockout- 3.2.0.js"></script> Another thing we will need in this is something that helps us visualize things; Google has a free, open source charting library that we will use. We will take a dependency to the JavaScript APIs from Google. To do this, add the following script tag after the others: <script type="text/javascript" src="https://www.google.com/jsapi"></script> Now, we can start filling in the view part. Inside the <body> tag, we start by putting in a header, as shown here: <div class="navbar navbar-default navbar-static-top bsnavbar">     <div class="container">         <div class="navbar-header">             <h1>My Dashboard</h1>         </div>     </div> </div> The server side of things In this little dashboard thing, we will look at web requests, both successful and failed. We will perform some minor things for us to be able to do this in a very naive way, without having to flesh out a full mechanism to deal with error situations. Let's start by enabling all requests even static resources, such as HTML files, to run through all HTTP modules. A word of warning: there are performance implications of putting all requests through the managed pipeline, so normally, you wouldn't necessarily want to do this on a production system, but for this sample, it will be fine to show the concepts. Open Web.config in the project and add the following code snippet within the <configuration> tag: <system.webServer>   <modules runAllManagedModulesForAllRequests="true" /> </system.webServer> The hub In this sample, we will only have one hub, the one that will be responsible for dealing with reporting requests and failed requests. Let's add a new class called RequestStatisticsHub. Right-click on the project in Solution Explorer, select Class from Add, name it RequestStatisticsHub.cs, and then click on Add. The new class should inherit from the hub. Add the following using statement at the top: using Microsoft.AspNet.SignalR; We're going to keep a track of the count of requests and failed requests per time with a resolution of not more than every 30 seconds in the memory on the server. 
Obviously, if one wants to scale across multiple servers, this is way too naive and one should choose an out-of-process shared key-value store that spans servers. However, for our purpose, this will be fine. Let's add a using statement at the top, as shown here:

using System.Collections.Generic;

At the top of the class, add the two dictionaries that we will use to hold this information:

static Dictionary<string, int> _requestsLog = new Dictionary<string, int>();
static Dictionary<string, int> _failedRequestsLog = new Dictionary<string, int>();

In our client, we want to access these logs at startup, so let's add two methods to do so:

public Dictionary<string, int> GetRequests()
{
    return _requestsLog;
}

public Dictionary<string, int> GetFailedRequests()
{
    return _failedRequestsLog;
}

Remember the resolution of only keeping track of the number of requests per 30 seconds at a time. There is no default mechanism in the .NET Framework to do this, so we need to add a few helper methods to deal with rounding of time. Let's add a class called DateTimeRounding at the root of the project. Mark the class as a public static class and put the following extension methods in the class:

public static DateTime RoundUp(this DateTime dt, TimeSpan d)
{
    var delta = (d.Ticks - (dt.Ticks % d.Ticks)) % d.Ticks;
    return new DateTime(dt.Ticks + delta);
}

public static DateTime RoundDown(this DateTime dt, TimeSpan d)
{
    var delta = dt.Ticks % d.Ticks;
    return new DateTime(dt.Ticks - delta);
}

public static DateTime RoundToNearest(this DateTime dt, TimeSpan d)
{
    var delta = dt.Ticks % d.Ticks;
    bool roundUp = delta > d.Ticks / 2;
    return roundUp ? dt.RoundUp(d) : dt.RoundDown(d);
}

Let's go back to the RequestStatisticsHub class and add some more functionality so that we can deal with rounding of time:

static void Register(Dictionary<string, int> log, Action<dynamic, string, int> hubCallback)
{
    var now = DateTime.Now.RoundToNearest(TimeSpan.FromSeconds(30));
    var key = now.ToString("HH:mm");

    if (log.ContainsKey(key))
        log[key] = log[key] + 1;
    else
        log[key] = 1;

    var hub = GlobalHost.ConnectionManager.GetHubContext<RequestStatisticsHub>();
    hubCallback(hub.Clients.All, key, log[key]);
}

public static void Request()
{
    Register(_requestsLog, (hub, key, value) => hub.requestCountChanged(key, value));
}

public static void FailedRequest()
{
    Register(_failedRequestsLog, (hub, key, value) => hub.failedRequestCountChanged(key, value));
}

This gives us a place to call in order to report requests, and these get published back to any clients connected to this particular hub. Note that FailedRequest registers into _failedRequestsLog, so the two series stay separate. Also note the usage of GlobalHost and its ConnectionManager property. When we want to get a hub instance and we are not in the hub context of a method being called from a client, we use ConnectionManager to get it. It gives us a proxy for the hub and enables us to call methods on any connected client.

Naively dealing with requests

With all this in place, we will be able to easily and naively deal with what we consider correct and failed requests. Let's add a Global.asax file by right-clicking on the project in Solution Explorer and selecting Add | New Item. Navigate to Web, find Global Application Class, and then click on Add.
In the new file, we want to replace the BindingHandlers method with the following code snippet: protected void Application_AuthenticateRequest(object sender, EventArgs e) {     var path = HttpContext.Current.Request.Path;     if (path == "/") path = "index.html";       if (path.ToLowerInvariant().IndexOf(".html") < 0) return;       var physicalPath = HttpContext.Current.Request.MapPath(path);     if (File.Exists(physicalPath))     {         RequestStatisticsHub.Request();     }     else     {         RequestStatisticsHub.FailedRequest();     } } Basically, with this, we are only measuring requests with .html in its path, and if it's only "/", we assume it's "index.html". Any file that does not exist, accordingly, is considered an error; typically a 404 error and we register it as a failed request. Bringing it all back to the client With the server taken care of, we can start consuming all this in the client. We will now be heading down the path of creating a ViewModel and hook everything up. ViewModel Let's start by adding a JavaScript file sitting next to our index.html file at the root level of the project, call it index.js. This file will represent our ViewModel. Also, this scenario will be responsible to set up KnockoutJS, so that the ViewModel is in fact activated and applied to the page. As we only have this one page for this sample, this will be fine. Let's start by hooking up the jQuery document that is ready: $(function() { }); Inside the function created here, we will enter our viewModel definition, which will start off being an empty one: var viewModel = function() { }; KnockoutJS has a function to apply a viewModel to the document, meaning that the document or body will be associated with the viewModel instance given. Right under the definition of viewModel, add the following line: ko.applyBindings(new viewModel()); Compiling this and running it should at the very least not give you any errors but nothing more than a header saying My Dashboard. So, we need to lighten this up a bit. Inside the viewModel function definition, add the following code snippet: var self = this; this.requests = ko.observableArray(); this.failedRequests = ko.observableArray(); We enter a reference to this as a variant called self. This will help us with scoping issues later on. The arrays we added are now KnockoutJS's observable arrays that allows the view or any BindingHandler to observe the changes that are coming in. The ko.observableArray() and ko.observable() arrays both return a new function. So, if you want to access any values in it, you must unwrap it by calling it something that might seem counterintuitive at first. You might consider your variable as just another property. However, for the observableArray(), KnockoutJS adds most of the functions found in the array type in JavaScript and they can be used directly on the function without unwrapping. If you look at a variable that is an observableArray in the console of the browser, you'll see that it looks as if it actually is just any array. This is not really true though; to get to the values, you will have to unwrap it by adding () after accessing the variable. However, all the functions you're used to having on an array are here. Let's add a function that will know how to handle an entry into the viewModel function. 
An entry coming in is either an existing one or a new one; the key of the entry is the giveaway to decide:

function handleEntry(log, key, value) {
    // some() stops at the first match and tells us whether an entry with this key already exists
    var exists = log().some(function (entry) {
        if (entry[0] == key) {
            entry[1](value);
            return true;
        }
        return false;
    });

    if (!exists) {
        log.push([key, ko.observable(value)]);
    }
};

Let's set up the hub and add the following code to the viewModel function:

var hub = $.connection.requestStatisticsHub;
var initializedCount = 0;

hub.client.requestCountChanged = function (key, value) {
    if (initializedCount < 2) return;
    handleEntry(self.requests, key, value);
}

hub.client.failedRequestCountChanged = function (key, value) {
    if (initializedCount < 2) return;
    handleEntry(self.failedRequests, key, value);
}

You might notice the initializedCount variable. Its purpose is to ignore incoming changes until the ViewModel is completely initialized, which comes next. Add the following code snippet to the viewModel function:

$.connection.hub.start().done(function () {
    hub.server.getRequests().done(function (requests) {
        for (var property in requests) {
            handleEntry(self.requests, property, requests[property]);
        }

        initializedCount++;
    });
    hub.server.getFailedRequests().done(function (requests) {
        for (var property in requests) {
            handleEntry(self.failedRequests, property, requests[property]);
        }

        initializedCount++;
    });
});

We should now have enough logic in our viewModel function to actually be able to get any requests already sitting there and also respond to new ones coming in.

BindingHandler

The key element of KnockoutJS is its BindingHandler mechanism. In KnockoutJS, everything starts with a data-bind="" attribute on an element in the HTML view. Inside the attribute, one puts binding expressions, and the BindingHandlers are the key to this. Every expression starts with the name of the handler. For instance, if you have an <input> tag and you want to get the value from the input into a property on the ViewModel, you would use the value BindingHandler. There are a few BindingHandlers out of the box to deal with the common scenarios (text, value, foreach, and more). All of the BindingHandlers are very well documented on the KnockoutJS site. For this sample, we will actually create our own BindingHandler. KnockoutJS is highly extensible and allows you to do just this, amongst other extensibility points. Let's add a JavaScript file called googleCharts.js at the root of the project. Inside it, add the following code: google.load('visualization', '1.0', { 'packages': ['corechart'] }); This will tell the Google API to enable the charting package. The next thing we want to do is to define the BindingHandler. Any handler has the option of setting up an init function and an update function. The init function should only occur once, when it's first initialized. Actually, it's when the binding context is set; if the parent binding context of the element changes, it will be called again. The update function will be called whenever there is a change in one or more observables that the binding expression refers to. For our sample, we will use the init function only and respond to changes manually, because we have a more involved scenario than what the default mechanism would provide us with. The short sketch that follows illustrates the init/update contract in isolation before we build the real chart handler.
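The handler below is minimal and hypothetical: it is not part of the dashboard, the name upperText and the stationName property are made up purely for illustration, and it relies only on ko.bindingHandlers and ko.unwrap, both of which we also use in the real handler later:

ko.bindingHandlers.upperText = {
    init: function (element, valueAccessor) {
        // Runs only once per element, when the binding context is set
        $(element).attr("title", "Bound with upperText");
    },
    update: function (element, valueAccessor) {
        // Runs when the binding is first applied and again whenever an
        // observable read inside it changes; reading the value through
        // valueAccessor() is what registers that dependency
        var value = ko.unwrap(valueAccessor());
        $(element).text(String(value).toUpperCase());
    }
};

In the view, you would write <span data-bind="upperText: stationName"></span>, and any update to the stationName observable on the ViewModel would immediately re-run the update function and refresh the text.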
The update function that you can add to a BindingHandler has the exact same signature as the init function; hence, it is called an update. Let's add the following code underneath the load call: ko.bindingHandlers.lineChart = {     init: function (element, valueAccessor, allValueAccessors, viewModel, bindingContext) {     } }; This is the core structure of a BindingHandler. As you can see, we've named the BindingHandler as lineChart. This is the name we will use in our view later on. The signature of init and update are the same. The first parameter represents the element that holds the binding expression, whereas the second valueAccessor parameter holds a function that enables us to access the value, which is a result of the expression. KnockoutJS deals with the expression internally and parses any expression and figures out how to expand any values, and so on. Add the following code into the init function: optionsInput = valueAccessor();   var options = {     title: optionsInput.title,     width: optionsInput.width || 300,     height: optionsInput.height || 300,     backgroundColor: 'transparent',     animation: {         duration: 1000,         easing: 'out'     } };   var dataHash = {};   var chart = new google.visualization.LineChart(element); var data = new google.visualization.DataTable(); data.addColumn('string', 'x'); data.addColumn('number', 'y');   function addRow(row, rowIndex) {     var value = row[1];     if (ko.isObservable(value)) {         value.subscribe(function (newValue) {             data.setValue(rowIndex, 1, newValue);             chart.draw(data, options);         });     }       var actualValue = ko.unwrap(value);     data.addRow([row[0], actualValue]);       dataHash[row[0]] = actualValue; };   optionsInput.data().forEach(addRow);   optionsInput.data.subscribe(function (newValue) {     newValue.forEach(function(row, rowIndex) {         if( !dataHash.hasOwnProperty(row[0])) {             addRow(row,rowIndex);         }     });       chart.draw(data, options); });         chart.draw(data, options); As you can see, observables has a function called subscribe(), which is the same for both an observable array and a regular observable. The code adds a subscription to the array itself; if there is any change to the array, we will find the change and add any new row to the chart. In addition, when we create a new row, we subscribe to any change in its value so that we can update the chart. In the ViewModel, the values were converted into observable values to accommodate this. View Go back to the index.html file; we need the UI for the two charts we're going to have. Plus, we need to get both the new BindingHandler loaded and also the ViewModel. Add the following script references after the last script reference already present, as shown here: <script type="text/javascript" src="googleCharts.js"></script> <script type="text/javascript" src="index.js"></script> Inside the <body> tag below the header, we want to add a bootstrap container and a row to hold two metro styled tiles and utilize our new BindingHandler. 
Also, we want a footer sitting at the bottom, as shown in the following code:

<div class="container">
    <div class="row">
        <div class="col-sm-6 col-md-4">
            <div class="thumbnail tile tile-green-sea tile-large">
                <div data-bind="lineChart: { title: 'Web Requests', width: 300, height: 300, data: requests }"></div>
            </div>
        </div>

        <div class="col-sm-6 col-md-4">
            <div class="thumbnail tile tile-pomegranate tile-large">
                <div data-bind="lineChart: { title: 'Failed Web Requests', width: 300, height: 300, data: failedRequests }"></div>
            </div>
        </div>
    </div>

    <hr />
    <footer class="bs-footer" role="contentinfo">
        <div class="container">
            The Dashboard
        </div>
    </footer>
</div>

Note that data: requests and data: failedRequests are part of the binding expressions. These will be handled and resolved by KnockoutJS internally and pointed to the observable arrays on the ViewModel. The other properties are options that go into the BindingHandler, which it forwards to the Google Charting APIs.

Trying it all out

Running the preceding code (Ctrl + F5) should yield the following result: If you open a second browser and go to the same URL, you will see the chart change in real time. Waiting for approximately 30 seconds and refreshing the browser should add a second point automatically and also animate the chart accordingly. Typing a URL for a file that does not exist should have the same effect on the failed requests chart.

Summary

In this article, we had a brief encounter with MVVM as a pattern, with the sole purpose of establishing good practices for your client code. We added this to a single page application setting, sprinkling SignalR on top to communicate from the server to any connected client. Resources for Article: Further resources on this subject: Using R for Statistics Research and Graphics? [article] Aspects Data Manipulation in R [article] Learning Data Analytics R and Hadoop [article]


Controlling DC motors using a shield

Packt
27 Feb 2015
4 min read
 In this article by Richard Grimmett, author of the book Intel Galileo Essentials,let's graduate from a simple DC motor to a wheeled platform. There are several simple, two-wheeled robotics platforms. In this example, you'll use one that is available on several online electronics stores. It is called the Magician Chassis, sourced by SparkFun. The following image shows this: (For more resources related to this topic, see here.) To make this wheeled robotic platform work, you're going to control the two DC motors connected directly to the two wheels. You'll want to control both the direction and the speed of the two wheels to control the direction of the robot. You'll do this with an Arduino shield designed for this purpose. The Galileo is designed to accommodate many of these shields. The following image shows the shield: Specifically, you'll be interested in the connections on the front corner of the shield, which is where you will connect the two DC motors. Here is a close-up of that part of the board: It is these three connections that you will use in this example. First, however, place the board on top of the Galileo. Then mount the two boards to the top of your two-wheeled robotic platform, like this: In this case, I used a large cable tie to mount the boards to the platform, using the foam that came with the motor shield between the Galileo and plastic platform. This particular platform comes with a 4 AA battery holder, so you'll need to connect this power source, or whatever power source you are going to use, to the motor shield. The positive and negative terminals are inserted into the motor shield by loosening the screws, inserting the wires, and then tightening the screws, like this: The final step is to connect the motor wires to the motor controller shield. There are two sets of connections, one for each motor like this: Insert some batteries, and then connect the Galileo to the computer via the USB cable, and you are now ready to start programming in order to control the motors. Galileo code for the DC motor shield Now that the Hardware is in place, bring up the IDE, make sure that the proper port and device are selected, and enter the following code: The code is straightforward. It consists of the following three blocks: The declaration of the six variables that connect to the proper Galileo pins: int pwmA = 3; int pwmB = 11; int brakeA = 9; int brakeB = 8; int directionA = 12; int directionB = 13; The setup() function, which sets the directionA, directionB, brakeA, and brakeB digital output pins: pinMode(directionA, OUTPUT); pinMode(brakeA, OUTPUT); pinMode(directionB, OUTPUT); pinMode(brakeB, OUTPUT); The loop() function. This is an example of how to make the wheeled robot go forward, then turn to the right. 
At each of these steps, you use the brake to stop the robot:

// Move Forward
digitalWrite(directionA, HIGH);  //Establishes forward direction of Channel A
digitalWrite(brakeA, LOW);       //Disengage the Brake for Channel A
analogWrite(pwmA, 255);          //Spins the motor on Channel A at full speed
digitalWrite(directionB, HIGH);  //Establishes forward direction of Channel B
digitalWrite(brakeB, LOW);       //Disengage the Brake for Channel B
analogWrite(pwmB, 255);          //Spins the motor on Channel B at full speed
delay(2000);
digitalWrite(brakeA, HIGH);      //Engage the Brake for Channel A
digitalWrite(brakeB, HIGH);      //Engage the Brake for Channel B
delay(1000);

//Turn Right
digitalWrite(directionA, LOW);   //Establishes backward direction of Channel A
digitalWrite(brakeA, LOW);       //Disengage the Brake for Channel A
analogWrite(pwmA, 128);          //Spins the motor on Channel A at half speed
digitalWrite(directionB, HIGH);  //Establishes forward direction of Channel B
digitalWrite(brakeB, LOW);       //Disengage the Brake for Channel B
analogWrite(pwmB, 128);          //Spins the motor on Channel B at half speed
delay(2000);
digitalWrite(brakeA, HIGH);
digitalWrite(brakeB, HIGH);
delay(1000);

Once you have uploaded the code, the program should run in a loop. If you want to run your robot without connecting to the computer, you'll need to add a battery to power the Galileo. The Galileo will need at least 2 Amps, but you might want to consider providing 3 Amps or more based on your project. To supply this from a battery, you can use one of several different choices. My personal favorite is to use an emergency cell phone charging battery, like this: If you are going to use this, you'll need a USB-to-2.1 mm DC plug cable, available at most online stores. Once you have uploaded the code, you can disconnect the computer, then press the reset button. Your robot can move all by itself!

Summary

By now, you should be feeling a bit more comfortable with configuring hardware and writing code for the Galileo. This example is fun, and provides you with a moving platform. Resources for Article: Further resources on this subject: The Raspberry Pi And Raspbian? [article] Raspberry Pi Gaming Operating Systems [article] Clusters Parallel Computing And Raspberry Pi- Brief Background [article]


Putting It All Together – Community Radio

Packt
27 Feb 2015
25 min read
In this article by Andy Matthews, author of the book Creating Mobile Apps with jQuery Mobile, Second Edition, we will see a website where listeners will be greeted with music from local, independent bands across several genres and geographic regions. Building this will take many of the skills, and we'll pepper in some new techniques that can be used in this new service. Let's see what technology and techniques we could bring to bear on this venture. In this article, we will cover: A taste of Balsamiq Organizing your code An introduction to the Web Audio API Prompting the user to install your app New device-level hardware access To app or not to app Three good reasons for compiling an app (For more resources related to this topic, see here.) A taste of Balsamiq Balsamiq (http://www.balsamiq.com/) is a very popular User Experience (UX) tool for rapid prototyping. It is perfect for creating and sharing interactive mockups: When I say very popular, I mean lots of major names that you're used to seeing. Over 80,000 companies create their software with the help of Balsamiq Mockups. So, let's take a look at what the creators of a community radio station might have in mind. They might start with a screen which looks like this; a pretty standard implementation. It features an icon toolbar at the bottom and a listview element in the content: Ideally, we'd like to keep this particular implementation as pure HTML/JavaScript/CSS. That way, we could compile it into a native app at some point, using PhoneGap. However, we'd like to stay true to the Don't Repeat Yourself (DRY) principle. That means, that we're going to want to inject this footer onto every page without using a server-side process. To that end, let's set up a hidden part of our app to contain all the global elements that we may want: <div id="globalComponents">   <div data-role="navbar" class="bottomNavBar">       <ul>           <li><a data-icon="music" href="#stations_by_region" data-transition="slideup">stations</a></li>         <li><a data-icon="search" href="#search_by_artist" data-transition="slideup">discover</a></li>           <li><a data-icon="calendar" href="#events_by_location" data-transition="slideup">events</a></li>           <li><a data-icon="gear" href="#settings" data-transition="slideup">settings</a></li>       </ul>   </div></div> We'll keep this code at the bottom of the page and hide it with a simple CSS rule in the stylesheet, #globalComponents{display:none;}. Now, we'll insert this global footer into each page, just before they are created. Using the clone() method (shown in the next code snippet) ensures that not only are we pulling over a copy of the footer, but also any data attached with it. In this way, each page is built with the exact same footer, just like it is in a server-side include. When the page goes through its normal initialization process, the footer will receive the same markup treatment as the rest of the page: /************************* The App************************/var radioApp = {universalPageBeforeCreate:function(){   var $page = $(this);   if($page.find(".bottomNavBar").length == 0){     $page.append($("#globalComponents .bottomNavBar").clone());   }}}/************************* The Events************************///Interface Events$(document).on("pagebeforecreate", "[data- "role="page"]",radioApp.universalPageBeforeCreate); Look at what we've done here in this piece of JavaScript code. We're actually organizing our code a little more effectively. 
Organizing your code I believe in a very pragmatic approach to coding, which leads me to use more simple structures and a bare minimum of libraries. However, there are values and lessons to be learned out there. MVC, MVVM, MV* For the last couple of years, serious JavaScript developers have been bringing backend development structures to the web, as the size and scope of their project demanded a more regimented approach. For highly ambitious, long-lasting, in-browser apps, this kind of structured approach can help. This is even truer if you're on a larger team. MVC stands for Model-View-Controller ( see http://en.wikipedia.org/wiki/Model–view–controller), MVVM is for Model View ViewModel (see http://en.wikipedia.org/wiki/Model_View_ViewModel), and MV* is shorthand for Model View Whatever and is the general term used to sum up this entire movement of bringing these kinds of structures to the frontend. Some of the more popular libraries include: Backbone.JS (http://backbonejs.org/): An adapter and sample of how to make Backbone play nicely with jQuery Mobile can be found at http://demos.jquerymobile.com/1.4.5/backbone-requirejs. Ember (http://emberjs.com/): An example for Ember can be found at https://github.com/LuisSala/emberjs-jqm. AngularJS (https://angularjs.org/): Angular also has adapters for jQM in progress. There are several examples at https://github.com/tigbro/jquery-mobile-angular-adapter. Knockout: (http://knockoutjs.com/). A very nice comparison of these, and more, is at http://readwrite.com/2014/02/06/angular-backbone-ember-best-javascript-framework-for-you. MV* and jQuery Mobile Yes, you can do it!! You can add any one of these MV* frameworks to jQuery Mobile and make as complex an app as you like. Of them all, I lean toward the Ember platform for desktop and Angular for jQuery Mobile. However, I'd like to propose another alternative. I'm not going to go in-depth into the concepts behind MVC frameworks. Ember, Angular, and Backbone, all help you to separate the concerns of your application into manageable pieces, offering small building blocks with which to create your application. But, we don't need yet another library/framework to do this. It is simple enough to write code in a more organized fashion. Let's create a structure similar to what I've started before: //JavaScript Document/******************** The Application*******************//******************** The Events*******************//******************** The Model*******************/ The application Under the application section, let's fill in some of our app code and give it a namespace. Essentially, namespacing is taking your application-specific code and putting it into its own named object, so that the functions and variables won't collide with other potential global variables and functions. It keeps you from polluting the global space and helps preserve your code from those who are ignorant regarding your work. Granted, this is JavaScript and people can override anything they wish. However, this also makes it a whole lot more intentional to override something like the radioApp.getStarted function, than simply creating your own function called getStarted. Nobody is going to accidentally override a namespaced function. 
/******************** The application*******************/var radioApp = {settings:{   initialized:false,   geolocation:{     latitude:null,     longitude:null,   },   regionalChoice:null,   lastStation:null},getStarted:function(){   location.replace("#initialize");},fireCustomEvent:function(){   var $clicked = $(this);   var eventValue = $clicked.attr("data-appEventValue");   var event = new jQuery.Event($(this).attr("data-appEvent"));   if(eventValue){ event.val = eventValue; }   $(window).trigger(event);},otherMethodsBlahBlahBlah:function(){}} Pay attention, in particular, to the fireCustomEvent. function With that, we can now set up an event management system. At its core, the idea is pretty simple. We'd like to be able to simply put tag attributes on our clickable objects and have them fire events, such as all the MV* systems. This fits the bill perfectly. It would be quite common to set up a click event handler on a link, or something, to catch the activity. This is far simpler. Just an attribute here and there and you're wired in. The HTML code becomes more readable too. It's easy to see how declarative this makes your code: <a href="javascript://" data-appEvent="playStation" data- appEventValue="country">Country</a> The events Now, instead of watching for clicks, we're listening for events. You can have as many parts of your app as you like registering themselves to listen for the event, and then execute appropriately. As we fill out more of our application, we'll start collecting a lot of events. Instead of letting them get scattered throughout multiple nested callbacks and such, we'll be keeping them all in one handy spot. In most JavaScript MV* frameworks, this part of the code is referred to as the Router. Hooked to each event, you will see nothing but namespaced application calls: /******************** The events*******************///Interface events$(document).on("click", "[data-appEvent]",radioApp.fireCustomEvent);"$(document).on("pagecontainerbeforeshow","[data-role="page"]",radioApp.universalPageBeforeShow);"$(document).on("pagebeforecreate","[data-role="page"]",radioApp.universalPageBeforeCreate);"$(document).on("pagecontainershow", "#initialize",radioApp.getLocation);"$(document).on("pagecontainerbeforeshow", "#welcome",radioApp.initialize);//Application events$(window).on("getStarted",radioApp.getStarted);$(window).on("setHomeLocation",radioApp.setHomeLocation);$(window).on("setNotHomeLocation",radioApp.setNotHomeLocation);$(window).on("playStation",radioApp.playStation); Notice the separation of concerns into interface events and application events. We're using this as a point of distinction between events that are fired as a result of natural jQuery Mobile events (interface events), and events that we have thrown (application events). This may be an arbitrary distinction, but for someone who comes along later to maintain your code, this could come in handy. The model The model section contains the data for your application. This is typically the kind of data that is pulled in from your backend APIs. It's probably not as important here, but it never hurts to namespace what's yours. Here, we have labeled our data as the modelData label. 
Any information we pull in from the APIs can be dumped right into this object, like we've done here with the station data: /******************** The Model*******************/var modelData = {station:{   genres:[       {       display:"Seattle Grunge",       genreId:12,       genreParentId:1       }   ],   metroIds[14,33,22,31],   audioIds[55,43,26,23,11]}} Pair this style of programming with client-side templating, and you'll be looking at some highly maintainable, well-structured code. However, there are some features that are still missing. Typically, these frameworks will also provide bindings for your templates. This means that you only have to render the templates once. After that, simply updating your model object will be enough to cause the UI to update itself. The problem with these bound templates, is that they update the HTML in a way that would be perfect for a desktop application. But remember, jQuery Mobile does a lot of DOM manipulation to make things happen. In jQuery Mobile, a listview element starts like this: <ul data-role="listview" data-inset="true"><li><a href="#stations">Local Stations</a></li></ul> After the normal DOM manipulation, you get this: <ul data-role="listview" data-inset="true" data-theme="b" style="margin-top:0" class="ui-listview ui-listview-inset ui- corner-all ui-shadow"> <li data-corners="false" data-shadow="false" data- iconshadow="true" data-wrapperels="div" data-icon="arrow-r" data- iconpos="right" data-theme="b" class="ui-btn ui-btn-icon-right ui- li-has-arrow ui-li ui-corner-top ui-btn-up-b">       <div class="ui-btn-inner ui-li ui-corner-top">           <div class="ui-btn-text">               <a href="#stations" class="ui-link-inherit">Local Stations</a>           </div><span class="ui-icon ui-icon-arrow-r ui-icon- shadow">&nbsp;</span>       </div>   </li></ul> And that's just a single list item. You really don't want to include all that junk in your templates; so what you need to do, is just add your usual items to the listview element and then call the .listview("refresh") function. Even if you're using one of the MV* systems, you'll still have to either find, or write, an adapter that will refresh the listviews when something is added or deleted. With any luck, these kinds of things will be solved at the platform level soon. Until then, using a real MV* system with jQM will be a pain in the posterior. Introduction to the Web Audio API The Web Audio API is a fairly new development and, at the time of writing this, only existed within the mobile space on Mobile Safari and Chrome for Android (http://caniuse.com/#feat=audio-api). The Web Audio API is available on the latest versions of desktop Chrome, Safari, and Firefox, so you can still do your initial test coding there. It's only a matter of time before this is built into other major platforms. Most of the code for this part of the project, and the full explanation of the API, can be found at http://tinyurl.com/webaudioapi2014. Let's use feature detection to branch our capabilities: function init() {if("webkitAudioContext" in window) {   context = new webkitAudioContext();   // an analyzer is used for the spectrum   analyzer = context.createAnalyzer();   analyzer.smoothingTimeConstant = 0.85;   analyzer.connect(context.destination);   fetchNextSong();} else {   //do the old stuff}} The original code for this page was designed to kick off simultaneous downloads for every song in the queue. With a fat connection, this would probably be OK. Not so much on mobile. 
Because of the limited connectivity and bandwidth, it would be better to just chain the downloads to ensure a better experience and a more respectful use of bandwidth:

function fetchNextSong() {
    var request;
    var nextSong = songs.pop();
    if (nextSong) {
        request = new XMLHttpRequest();
        // the underscore prefix is a common naming convention
        // to remind us that the variable is developer-supplied
        request._soundName = nextSong;
        request.open("GET", PATH + request._soundName + ".mp3", true);
        request.responseType = "arraybuffer";
        request.addEventListener("load", bufferSound, false);
        request.send();
    }
}

Now, the bufferSound function just needs to call the fetchNextSong function after buffering, as shown in the following code snippet:

function bufferSound(event) {
    var request = event.target;
    context.decodeAudioData(request.response, function onSuccess(decodedBuffer) {
        myBuffers.push(decodedBuffer);
        fetchNextSong();
    }, function onFailure() {
        alert("Decoding the audio buffer failed");
    });
}

One last thing we need to change from the original is telling the buffer to pull the songs in the order that they were inserted:

function playSound() {
    // create a new AudioBufferSourceNode
    var source = context.createBufferSource();
    source.buffer = myBuffers.shift();
    source.loop = false;
    source = routeSound(source);
    // play right now (0 seconds from now)
    // can also pass context.currentTime
    source.start(0);
    mySpectrum = setInterval(drawSpectrum, 30);
    mySource = source;
}

For anyone on iOS, this solution is pretty nice. There is a lot more to this API for those who want to dig in. With this out-of-the-box example, you get a nice canvas-based audio analyzer that gives you a very professional look, as the audio levels bounce to the music. Slider controls are used to change the volume, the left-right balance, and the high-pass filter. If you don't know what a high-pass filter is, don't worry; I think that filter's usefulness went the way of the cassette deck. Regardless, it's fun to play with: The Web Audio API is a very serious piece of business. This example was adapted from the example on Apple's site. It only plays one sound. However, the Web Audio API was designed with the idea of making it possible to play multiple sounds, alter them in multiple ways, and even dynamically generate sounds using JavaScript. In the meantime, if you want to see this proof of concept in jQuery Mobile, you will find it in the example source in the webaudioapi.html file. For an even deeper look at what is coming, you can check the docs at https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html.

Prompting the user to install your app

Now, let's take a look at how we can prompt our users to download the Community Radio app to their home screens. It is very likely that you've seen it before; it's the little bubble that pops up and instructs the user with the steps to install the app. There are many different projects out there, but the best one that I have seen is a derivative of the one started by Google. Much thanks and respect to Okamototk on GitHub (https://github.com/okamototk) for taking and improving it. Okamototk evolved the bubble to include several versions of Android, legacy iOS, and even BlackBerry. You can find his original work at https://github.com/okamototk/jqm-mobile-bookmark-bubble; however, it won't do you much good unless you can read Japanese or enjoy translating. Don't worry about annoying your customers too much.
With this version, if they dismiss the bookmarking bubble three times, they won't see it again. The count is stored in HTML5 LocalStorage; so, if they clear out the storage, they'll see the bubble again. Thankfully, most people out there don't even know that can be done, so it won't happen very often. Usually, it's geeks like us that clear things like LocalStorage and cookies, and we know what we're getting into when we do it. In my edition of the code, I've combined all the JavaScript into a single file meant to be placed between your import of jQuery and jQuery Mobile. At the top, the first non-commented line is: page_popup_bubble="#welcome"; This is what you would change to be your own first page, or where you want the bubble to pop up. In my version, I have hardcoded the font color and text shadow properties into the bubble. This was needed, because in jQM the font color and text shadow color vary, based on the theme you're using. Consequently, in jQuery Mobile's original default A theme (white text on a black background), the font was showing up as white with a dark shadow on top of a white bubble. With my modified version, for older jQuery Mobile versions, it will always look right. We just need to be sure we've set up our page with the proper links in the head, and that our images are in place: <link rel="apple-touch-icon-precomposed" sizes="144x144" href="images/album144.png"><link rel="apple-touch-icon-precomposed" sizes="114x114" href="images/album114.png"><link rel="apple-touch-icon-precomposed" sizes="72x72" href="images/album72.png"><link rel="apple-touch-icon-precomposed" href="images/album57.png"><link rel="shortcut icon" href="img/images/album144.png"> Note the Community Radio logo here. The logo is pulled from our link tags marked with rel="apple-touch-icon-precomposed" and injected into the bubble. So, really, the only thing in the jqm_bookmark_bubble.js file that you would need to alter is the page_popup_bubble function. New device-level hardware access New kinds of hardware-level access are coming to our mobile browsers every year. Here is a look at some of what you can start doing now, and what's on the horizon. Not all of these are applicable to every project, but if you think creatively, you can probably find innovative ways to use them. Accelerometers Accelerometers are the little doo-dads inside your phone that measure the phone's orientation in space. To geek out on this, read http://en.wikipedia.org/wiki/Accelerometer. This goes beyond the simple orientation we've been using. This is true access to the accelerometers, in detail. Think about the user being able to shake their device, or tilting it as a method of interaction with your app. Maybe, Community Radio is playing something they don't like and we can give them a fun way to rage against the song. Something such as, shake a song to never hear it again. Here is a simple marble rolling game somebody made as a proof of concept. See http://menscher.com/teaching/woaa/examples/html5_accelerometer.html. Camera Apple's iOS 8 and Android's Lollipop can both access photos on their filesystems as well as the cameras. Granted, these are the latest and greatest versions of these two platforms. 
If you intend to support the many woefully out of date Android devices (2.3, 2.4) that are still being sold off the shelves as if brand new, then you're going to want to go with a native compilation such as PhoneGap or Apache Cordova to get that capability: <input type="file" accept="image/*"><input type="file" accept="video/*"> The following screenshot has iOS to the left and Android to the right: APIs on the horizon Mozilla is doing a lot to push the mobile web API envelope. You can check out what's on the horizon here: https://wiki.mozilla.org/WebAPI. To app or not to app, that is the question Should you or should you not compile your project into a native app? Here are some things to consider. Raining on the parade (take this seriously) When you compile your first project into an app, there is a certain thrill that you get. You did it! You made a real app! It is at this point that we need to remember the words of Dr. Ian Malcolm from the movie Jurassic Park (Go watch it again. I'll wait): "You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, you patented it, and packaged it, and slapped it on a plastic lunchbox, and now [bangs on the table] you're selling it, you wanna sell it. Well... your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."                                                                                                  – Dr. Ian Malcolm These words are very close to prophetic for us. In the end, their own creation ate most of the guests for lunch. According to this report from August 2012 http://www.webpronews.com/over-two-thirds-of-the-app-store-has-never-been-downloaded-2012-08 (and several others like it that I've seen before), over two-thirds of all apps on the app stores have never been downloaded. Not even once! So, realistically, app stores are where most projects go to die. Even if your app is discovered, the likelihood that anyone will use it for any significant period of time is astonishingly small. According to this article in Forbes (http://tech.fortune.cnn.com/2009/02/20/the-half-life-of-an-iphone-app/), most apps are abandoned in the space of minutes and never opened again. Paid apps last about twice as long, before either being forgotten or removed. Games have some staying power, but let's be honest, jQuery Mobile isn't exactly a compelling gaming platform, is it?? The Android world is in terrible shape. Devices can still be purchased running ancient versions of the OS, and carriers and hardware partners are not providing updates to them in anything even resembling a timely fashion. If you want to monitor the trouble you could be bringing upon yourself by embracing a native strategy, look here: http://developer.android.com/about/dashboards/index.html: You can see how fractured the Android landscape is, as well as how many older versions you'll probably have to support. On the flip side, if you're publishing strictly to the web, then every time your users visit your site, they'll be on the latest edition using the latest APIs, and you'll never have to worry about somebody using some out-of-date version. Do you have a security patch you need to apply? You can do it in seconds. If you're on the Apple app store, this patch could take days or even weeks. Three good reasons for compiling an app Yes, I know I just finished telling you about your slim chances of success and the fire and brimstone you will face for supporting apps. 
However, here are a few good reasons to make a real app. In fact, in my opinion, they're the only acceptable reasons. The project itself is the product This is the first and only sure sign that you need to package your project as an app. I'm not talking about selling things through your project. I'm talking about the project itself. It should be made into an app. May the force be with you. Access to native only hardware capabilities GPS and camera are reliably available for the two major platforms in their latest editions. iOS even supports accelerometers. However, if you're looking for more than this, you'll need to compile down to an app to get access to these APIs. Push notifications Do you like them? I don't know about you, but I get way too many push notifications; any app that gets too pushy either gets uninstalled or its notifications are completely turned off. I'm not alone in this. However, if you simply must have push notifications and can't wait for the web-based implementation, you'll have to compile an app. Supporting current customers Ok, this one is a stretch, but if you work in corporate America, you're going to hear it. The idea is that you're an established business and you want to give mobile support to your clients. You or someone above you has read a few whitepapers and/or case studies that show that almost 50 percent of people search in the app stores first. Even if that were true (which I'm still not sold on), you're talking to a businessperson. They understand money, expenses, and escalated maintenance. Once you explain to them the cost, complexity, and potential ongoing headaches of building and testing for all the platforms and their OS versions in the wild, it becomes a very appealing alternative to simply put out a marketing push to your current customers that you're now supporting mobile, and all they have to do is go to your site on their mobile device. Marketing folks are always looking for reasons to toot their horns at customers anyway. Marketing might still prefer to have the company icon on the customer's device to reinforce brand loyalty, but this is simply a matter of educating them that it can be done without an app. You still may not be able to convince all the right people that apps are the wrong way to go when it comes to customer support. If you can't do it on your own, slap them on their heads with a little Jakob Nielson. If they won't listen to you, maybe they'll listen to him. I would defy anyone who says that the Nielsen Norman Group doesn't know what they're saying. See http://www.nngroup.com/articles/mobile-sites-vs-apps-strategy-shift/ for the following quote: "Summary: Mobile apps currently have better usability than mobile sites, but forthcoming changes will eventually make a mobile site the superior strategy." So the $64,000 question becomes: are we making something for right now or for the future? If we're making it for right now, what are the criteria that should mark the retirement of the native strategy? Or do we intend to stay locked on it forever? Don't go into that war without an exit strategy. Summary I don't know about you, but I'm exhausted. I really don't think there's any more that can be said about jQuery Mobile, or its supporting technologies at this time. You've got examples on how to build things for a whole host of industries, and ways to deploy it through the web. At this point, you should be quoting Bob the Builder. Can we build it? Yes, we can! 
I hope this article has assisted and/or inspired you to go and make something great. I hope you change the world and get filthy stinking rich doing it. I'd love to hear your success stories as you move forward. To let me know how you're doing, to report any errata, or even just to ask a question, please don't hesitate to email me directly at andy@commadelimited.com. Now, go be awesome! Resources for Article: Further resources on this subject: Tips and Tricks for Working with jQuery and WordPress [article] Building a Custom Version of jQuery [article] jQuery UI 1.8: The Accordion Widget [article]

Postgres Add-on

Packt
27 Feb 2015
7 min read
In this article by Patrick Espake, author of the book Learning Heroku Postgres, you will learn how to install and set up PostgreSQL and how to create an app using Postgres. (For more resources related to this topic, see here.)

Local setup

You need to install PostgreSQL on your computer; this is recommended because some commands of the Postgres add-on require a local PostgreSQL installation. Besides that, it's a good idea for your development database to be similar to your production database; this avoids problems between the two environments. Next, you will learn how to set up PostgreSQL on Mac OS X, Windows, and Linux. You will also install pgAdmin, the most popular and feature-rich administration and development platform for PostgreSQL. The versions recommended for installation are PostgreSQL 9.4.0 and pgAdmin 1.20.0, or the latest available versions.

Setting up PostgreSQL on Mac OS X

The Postgres.app application is the simplest way to get started with PostgreSQL on Mac OS X; it contains many features in a single installation package:

- PostgreSQL 9.4.0
- PostGIS 2.1.4
- Procedural languages: PL/pgSQL, PL/Perl, PL/Python, and PLV8 (JavaScript)
- Popular extensions such as hstore, uuid-ossp, and others
- Many command-line tools for managing PostgreSQL and convenient tools for GIS

The following screenshot displays the postgresapp website. For installation, visit http://postgresapp.com/, download the application, drag it to the Applications directory, and then double-click to open it. The other alternatives for installing PostgreSQL are the default graphical installer, Fink, MacPorts, and Homebrew; all of them are available at http://www.postgresql.org/download/macosx. To install pgAdmin, visit http://www.pgadmin.org/download/macosx.php, download the latest available version, and follow the installer instructions.

Setting up PostgreSQL on Windows

PostgreSQL on Windows is provided through a graphical installer that includes the PostgreSQL server, pgAdmin, and the package manager used to download and install additional applications and drivers for PostgreSQL. To install PostgreSQL, visit http://www.postgresql.org/download/windows, click on the download link, and select the appropriate Windows version: 32-bit or 64-bit. Follow the instructions provided by the installer.

After installing PostgreSQL on Windows, you need to set the PATH environment variable so that the psql, pg_dump, and pg_restore commands work from the Command Prompt. Perform the following steps:

1. Open My Computer.
2. Right-click on My Computer and select Properties.
3. Click on Advanced System Settings.
4. Click on the Environment Variables button.
5. From the System variables box, select the Path variable.
6. Click on Edit.
7. At the end of the line, add the PostgreSQL bin and lib directories: C:\Program Files\PostgreSQL\9.4\bin;C:\Program Files\PostgreSQL\9.4\lib.
8. Click on the OK button to save.

The directories follow the pattern C:\Program Files\PostgreSQL\VERSION\..., so check your PostgreSQL version.

Setting up PostgreSQL on Linux

The great majority of Linux distributions already have PostgreSQL in their package manager. You can search for the appropriate package for your distribution and install it.
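If you are not sure what the package is called, you can usually ask the package manager before installing anything. The two queries below are a minimal sketch for apt-based and yum-based systems; the exact package names they return vary between distributions and releases:

# Debian/Ubuntu: list the PostgreSQL-related packages known to apt
$ apt-cache search postgresql | grep ^postgresql

# Fedora/RHEL/CentOS: the equivalent query for yum
$ yum search postgresql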
If your distribution is Debian or Ubuntu, you can install it with the following command:

$ sudo apt-get install postgresql

If your Linux distribution is Fedora, Red Hat, CentOS, Scientific Linux, or Oracle Enterprise Linux, you can use the YUM package manager to install PostgreSQL:

$ sudo yum install postgresql94-server
$ sudo service postgresql-9.4 initdb
$ sudo chkconfig postgresql-9.4 on
$ sudo service postgresql-9.4 start

If your Linux distribution doesn't have PostgreSQL in its package manager, you can install it using the Linux installer. Just visit http://www.postgresql.org/download/linux, choose the appropriate installer, 32-bit or 64-bit, and follow the installation instructions.

You can install pgAdmin through the package manager of your Linux distribution; for Debian or Ubuntu, you can use the following command:

$ sudo apt-get install pgadmin3

For Linux distributions that use the YUM package manager, you can install it with the following command:

$ sudo yum install pgadmin3

If your Linux distribution doesn't have pgAdmin in its package manager, you can download and install it by following the instructions provided at http://www.pgadmin.org/download/.

Creating a local database

For the examples in this article, you will need a local database. You will create a new database called my_local_database through pgAdmin. To create the new database, perform the following steps:

1. Open pgAdmin.
2. Connect to the database server using the access credentials that you chose in the installation process.
3. Click on the Databases item in the tree view.
4. Click on the menu Edit -> New Object -> New database.
5. Type the name my_local_database for the database.
6. Click on the OK button to save.

Creating a new local database called my_local_database

Creating a new app

Many features in Heroku can be implemented in two different ways: the first is via the Heroku client, which is installed through the Heroku Toolbelt, and the other is through the web-based Heroku dashboard. In this section, you will learn how to use both of them.

Via the Heroku dashboard

Access https://dashboard.heroku.com and log in. After that, click on the plus sign at the top of the dashboard to create a new app; the following screen will be shown:

Creating an app

In this step, you should provide the name of your application. In the preceding example, it's learning-heroku-postgres-app; you can choose any name you prefer. Heroku doesn't allow duplicate application names; each application name is global and, once it has been used, it is not available to anyone else. If the name you choose is already taken, simply pick another one. Then select the region you want to host it in; two options are available: United States or Europe. It is usually recommended that you select the region closest to you to decrease server response time. Click on the Create App button.

Heroku will then provide some information for performing the first deploy of your application. The website URL and Git repository are created using the following addresses: http://your-app-name.herokuapp.com and git@heroku.com:your-app-name.git.

learning-heroku-postgres-app created

Next, you will create a directory on your computer and link it with Heroku to perform future deployments of your source code.
Open your terminal and type the following commands:

$ mkdir your-app-name
$ cd your-app-name
$ git init
$ heroku git:remote -a your-app-name
Git remote heroku added

Finally, you are able to deploy your source code at any time through these commands:

$ git add .
$ git commit -am "My updates"
$ git push heroku master

Via the Heroku client

Creating a new application via the Heroku client is very simple. The first step is to create the application directory on your computer. For that, open the Terminal and type the following commands:

$ mkdir your-app-name
$ cd your-app-name
$ git init

After that, you need to create a new Heroku application with this command:

$ heroku apps:create your-app-name
Creating your-app-name... done, stack is cedar-14
https://your-app-name.herokuapp.com/ | https://git.heroku.com/your-app-name.git
Git remote heroku added

Finally, you are able to deploy your source code at any time through these commands:

$ git add .
$ git commit -am "My updates"
$ git push heroku master

Another very common case is when you already have a Git repository on your computer with the application's source code and you want to deploy it on Heroku. In this case, you must run the heroku apps:create your-app-name command inside the application directory and the link with Heroku will be created.

Summary

In this article, you learned how to configure your local environment to work with PostgreSQL and pgAdmin, and how to install Heroku Postgres in your application. You also saw that the first database is created automatically when the Heroku Postgres add-on is added to your application, and that an application can have several PostgreSQL databases. Finally, you learned that the great majority of tasks can be performed in two ways: via the Heroku client and via the Heroku dashboard. Resources for Article: Further resources on this subject: Building Mobile Apps [article] Managing Heroku from the Command Line [article] Securing the WAL Stream [article]

Getting Up and Running with Cassandra

Packt
27 Feb 2015
20 min read
As an application developer, you have almost certainly worked with databases extensively. You must have built products using relational databases like MySQL and PostgreSQL, and perhaps experimented with a document store like MongoDB or a key-value database like Redis. While each of these tools has its strengths, you will now consider whether a distributed database like Cassandra might be the best choice for the task at hand. In this article by Mat Brown, author of the book Learning Apache Cassandra, we'll talk about the major reasons to choose Cassandra from among the many database options available to you. Having established that Cassandra is a great choice, we'll go through the nuts and bolts of getting a local Cassandra installation up and running. By the end of this article, you'll know: When and why Cassandra is a good choice for your application How to install Cassandra on your development machine How to interact with Cassandra using cqlsh How to create a keyspace (For more resources related to this topic, see here.) What Cassandra offers, and what it doesn't Cassandra is a fully distributed, masterless database, offering superior scalability and fault tolerance to traditional single master databases. Compared with other popular distributed databases like Riak, HBase, and Voldemort, Cassandra offers a uniquely robust and expressive interface for modeling and querying data. What follows is an overview of several desirable database capabilities, with accompanying discussions of what Cassandra has to offer in each category. Horizontal scalability Horizontal scalability refers to the ability to expand the storage and processing capacity of a database by adding more servers to a database cluster. A traditional single-master database's storage capacity is limited by the capacity of the server that hosts the master instance. If the data set outgrows this capacity, and a more powerful server isn't available, the data set must be sharded among multiple independent database instances that know nothing of each other. Your application bears responsibility for knowing to which instance a given piece of data belongs. Cassandra, on the other hand, is deployed as a cluster of instances that are all aware of each other. From the client application's standpoint, the cluster is a single entity; the application need not know, nor care, which machine a piece of data belongs to. Instead, data can be read or written to any instance in the cluster, referred to as a node; this node will forward the request to the instance where the data actually belongs. The result is that Cassandra deployments have an almost limitless capacity to store and process data; when additional capacity is required, more machines can simply be added to the cluster. When new machines join the cluster, Cassandra takes care of rebalancing the existing data so that each node in the expanded cluster has a roughly equal share. Cassandra is one of the several popular distributed databases inspired by the Dynamo architecture, originally published in a paper by Amazon. Other widely used implementations of Dynamo include Riak and Voldemort. You can read the original paper at http://s3.amazonaws.com/AllThingsDistributed/sosp/amazon-dynamo-sosp2007.pdf. High availability The simplest database deployments are run as a single instance on a single server. 
This sort of configuration is highly vulnerable to interruption: if the server is affected by a hardware failure or network connection outage, the application's ability to read and write data is completely lost until the server is restored. If the failure is catastrophic, the data on that server might be lost completely. A master-follower architecture improves this picture a bit. The master instance receives all write operations, and then these operations are replicated to follower instances. The application can read data from the master or any of the follower instances, so a single host becoming unavailable will not prevent the application from continuing to read data. A failure of the master, however, will still prevent the application from performing any write operations, so while this configuration provides high read availability, it doesn't completely provide high availability. Cassandra, on the other hand, has no single point of failure for reading or writing data. Each piece of data is replicated to multiple nodes, but none of these nodes holds the authoritative master copy. If a machine becomes unavailable, Cassandra will continue writing data to the other nodes that share data with that machine, and will queue the operations and update the failed node when it rejoins the cluster. This means in a typical configuration, two nodes must fail simultaneously for there to be any application-visible interruption in Cassandra's availability. How many copies?When you create a keyspace - Cassandra's version of a database - you specify how many copies of each piece of data should be stored; this is called the replication factor. A replication factor of 3 is a common and good choice for many use cases. Write optimization Traditional relational and document databases are optimized for read performance. Writing data to a relational database will typically involve making in-place updates to complicated data structures on disk, in order to maintain a data structure that can be read efficiently and flexibly. Updating these data structures is a very expensive operation from a standpoint of disk I/O, which is often the limiting factor for database performance. Since writes are more expensive than reads, you'll typically avoid any unnecessary updates to a relational database, even at the expense of extra read operations. Cassandra, on the other hand, is highly optimized for write throughput, and in fact never modifies data on disk; it only appends to existing files or creates new ones. This is much easier on disk I/O and means that Cassandra can provide astonishingly high write throughput. Since both writing data to Cassandra, and storing data in Cassandra, are inexpensive, denormalization carries little cost and is a good way to ensure that data can be efficiently read in various access scenarios. Because Cassandra is optimized for write volume, you shouldn't shy away from writing data to the database. In fact, it's most efficient to write without reading whenever possible, even if doing so might result in redundant updates. Just because Cassandra is optimized for writes doesn't make it bad at reads; in fact, a well-designed Cassandra database can handle very heavy read loads with no problem. Structured records The first three database features we looked at are commonly found in distributed data stores. However, databases like Riak and Voldemort are purely key-value stores; these databases have no knowledge of the internal structure of a record that's stored at a particular key. 
This means useful functions like updating only part of a record, reading only certain fields from a record, or retrieving records that contain a particular value in a given field are not possible. Relational databases like PostgreSQL, document stores like MongoDB, and, to a limited extent, newer key-value stores like Redis do have a concept of the internal structure of their records, and most application developers are accustomed to taking advantage of the possibilities this allows. None of these databases, however, offer the advantages of a masterless distributed architecture. In Cassandra, records are structured much in the same way as they are in a relational database—using tables, rows, and columns. Thus, applications using Cassandra can enjoy all the benefits of masterless distributed storage while also getting all the advanced data modeling and access features associated with structured records. Secondary indexes A secondary index, commonly referred to as an index in the context of a relational database, is a structure allowing efficient lookup of records by some attribute other than their primary key. This is a widely useful capability: for instance, when developing a blog application, you would want to be able to easily retrieve all of the posts written by a particular author. Cassandra supports secondary indexes; while Cassandra's version is not as versatile as indexes in a typical relational database, it's a powerful feature in the right circumstances. Efficient result ordering It's quite common to want to retrieve a record set ordered by a particular field; for instance, a photo sharing service will want to retrieve the most recent photographs in descending order of creation. Since sorting data on the fly is a fundamentally expensive operation, databases must keep information about record ordering persisted on disk in order to efficiently return results in order. In a relational database, this is one of the jobs of a secondary index. In Cassandra, secondary indexes can't be used for result ordering, but tables can be structured such that rows are always kept sorted by a given column or columns, called clustering columns. Sorting by arbitrary columns at read time is not possible, but the capacity to efficiently order records in any way, and to retrieve ranges of records based on this ordering, is an unusually powerful capability for a distributed database. Immediate consistency When we write a piece of data to a database, it is our hope that that data is immediately available to any other process that may wish to read it. From another point of view, when we read some data from a database, we would like to be guaranteed that the data we retrieve is the most recently updated version. This guarantee is called immediate consistency, and it's a property of most common single-master databases like MySQL and PostgreSQL. Distributed systems like Cassandra typically do not provide an immediate consistency guarantee. Instead, developers must be willing to accept eventual consistency, which means when data is updated, the system will reflect that update at some point in the future. Developers are willing to give up immediate consistency precisely because it is a direct tradeoff with high availability. In the case of Cassandra, that tradeoff is made explicit through tunable consistency. Each time you design a write or read path for data, you have the option of immediate consistency with less resilient availability, or eventual consistency with extremely resilient availability. 
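As a small illustration of what "tunable" means in practice, the cqlsh shell introduced later in this article lets you pick the consistency level applied to the statements you run. Requiring only ONE replica to acknowledge favors availability and speed, while QUORUM requires a majority of replicas and gives stronger guarantees. This is only a sketch; the right level depends entirely on your replication factor and availability requirements:

cqlsh> CONSISTENCY ONE
cqlsh> CONSISTENCY QUORUM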
Discretely writable collections While it's useful for records to be internally structured into discrete fields, a given property of a record isn't always a single value like a string or an integer. One simple way to handle fields that contain collections of values is to serialize them using a format like JSON, and then save the serialized collection into a text field. However, in order to update collections stored in this way, the serialized data must be read from the database, decoded, modified, and then written back to the database in its entirety. If two clients try to perform this kind of modification to the same record concurrently, one of the updates will be overwritten by the other. For this reason, many databases offer built-in collection structures that can be discretely updated: values can be added to, and removed from collections, without reading and rewriting the entire collection. Cassandra is no exception, offering list, set, and map collections, and supporting operations like "append the number 3 to the end of this list". Neither the client nor Cassandra itself needs to read the current state of the collection in order to update it, meaning collection updates are also blazingly efficient. Relational joins In real-world applications, different pieces of data relate to each other in a variety of ways. Relational databases allow us to perform queries that make these relationships explicit, for instance, to retrieve a set of events whose location is in the state of New York (this is assuming events and locations are different record types). Cassandra, however, is not a relational database, and does not support anything like joins. Instead, applications using Cassandra typically denormalize data and make clever use of clustering in order to perform the sorts of data access that would use a join in a relational database. For data sets that aren't already denormalized, applications can also perform client-side joins, which mimic the behavior of a relational database by performing multiple queries and joining the results at the application level. Client-side joins are less efficient than reading data that has been denormalized in advance, but offer more flexibility. MapReduce MapReduce is a technique for performing aggregate processing on large amounts of data in parallel; it's a particularly common technique in data analytics applications. Cassandra does not offer built-in MapReduce capabilities, but it can be integrated with Hadoop in order to perform MapReduce operations across Cassandra data sets, or Spark for real-time data analysis. The DataStax Enterprise product provides integration with both of these tools out-of-the-box. Comparing Cassandra to the alternatives Now that you've got an in-depth understanding of the feature set that Cassandra offers, it's time to figure out which features are most important to you, and which database is the best fit. 
The following table lists a handful of commonly used databases, and key features that they do or don't have: Feature Cassandra PostgreSQL MongoDB Redis Riak Structured records Yes Yes Yes Limited No Secondary indexes Yes Yes Yes No Yes Discretely writable collections Yes Yes Yes Yes No Relational joins No Yes No No No Built-in MapReduce No No Yes No Yes Fast result ordering Yes Yes Yes Yes No Immediate consistency Configurable at query level Yes Yes Yes Configurable at cluster level Transparent sharding Yes No Yes No Yes No single point of failure Yes No No No Yes High throughput writes Yes No No Yes Yes As you can see, Cassandra offers a unique combination of scalability, availability, and a rich set of features for modeling and accessing data. Installing Cassandra Now that you're acquainted with Cassandra's substantial powers, you're no doubt chomping at the bit to try it out. Happily, Cassandra is free, open source, and very easy to get running. Installing on Mac OS X First, we need to make sure that we have an up-to-date installation of the Java Runtime Environment. Open the Terminal application, and type the following into the command prompt: $ java -version You will see an output that looks something like the following: java version "1.8.0_25"Java(TM) SE Runtime Environment (build 1.8.0_25-b17)Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode) Pay particular attention to the java version listed: if it's lower than 1.7.0_25, you'll need to install a new version. If you have an older version of Java or if Java isn't installed at all, head to https://www.java.com/en/download/mac_download.jsp and follow the download instructions on the page. You'll need to set up your environment so that Cassandra knows where to find the latest version of Java. To do this, set up your JAVA_HOME environment variable to the install location, and your PATH to include the executable in your new Java installation as follows: $ export JAVA_HOME="/Library/Internet Plug- Ins/JavaAppletPlugin.plugin/Contents/Home"$ export PATH="$JAVA_HOME/bin":$PATH You should put these two lines at the bottom of your .bashrc file to ensure that things still work when you open a new terminal. The installation instructions given earlier assume that you're using the latest version of Mac OS X (at the time of writing this, 10.10 Yosemite). If you're running a different version of OS X, installing Java might require different steps. Check out https://www.java.com/en/download/faq/java_mac.xml for detailed installation information. Once you've got the right version of Java, you're ready to install Cassandra. It's very easy to install Cassandra using Homebrew; simply type the following: $ brew install cassandra$ pip install cassandra-driver cql$ cassandra Here's what we just did: Installed Cassandra using the Homebrew package manager Installed the CQL shell and its dependency, the Python Cassandra driver Started the Cassandra server Installing on Ubuntu First, we need to make sure that we have an up-to-date installation of the Java Runtime Environment. Open the Terminal application, and type the following into the command prompt: $ java -version You will see an output that looks similar to the following: java version "1.7.0_65"OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3- 0ubuntu0.14.04.1)OpenJDK 64-bit Server VM (build 24.65-b04, mixed mode) Pay particular attention to the java version listed: it should start with 1.7. 
If you have an older version of Java, or if Java isn't installed at all, you can install the correct version using the following command: $ sudo apt-get install openjdk-7-jre-headless Once you've got the right version of Java, you're ready to install Cassandra. First, you need to add Apache's Debian repositories to your sources list. Add the following lines to the file /etc/apt/sources.list: deb http://www.apache.org/dist/cassandra/debian 21x maindeb-src http://www.apache.org/dist/cassandra/debian 21x main In the Terminal application, type the following into the command prompt: $ gpg --keyserver pgp.mit.edu --recv-keys F758CE318D77295D$ gpg --export --armor F758CE318D77295D | sudo apt-key add -$ gpg --keyserver pgp.mit.edu --recv-keys 2B5C1B00$ gpg --export --armor 2B5C1B00 | sudo apt-key add -$ gpg --keyserver pgp.mit.edu --recv-keys 0353B12C$ gpg --export --armor 0353B12C | sudo apt-key add -$ sudo apt-get update$ sudo apt-get install cassandra$ cassandra Here's what we just did: Added the Apache repositories for Cassandra 2.1 to our sources list Added the public keys for the Apache repo to our system and updated our repository cache Installed Cassandra Started the Cassandra server Installing on Windows The easiest way to install Cassandra on Windows is to use the DataStax Community Edition. DataStax is a company that provides enterprise-level support for Cassandra; they also release Cassandra packages at both free and paid tiers. DataStax Community Edition is free, and does not differ from the Apache package in any meaningful way. DataStax offers a graphical installer for Cassandra on Windows, which is available for download at planetcassandra.org/cassandra. On this page, locate Windows Server 2008/Windows 7 or Later (32-Bit) from the Operating System menu (you might also want to look for 64-bit if you run a 64-bit version of Windows), and choose MSI Installer (2.x) from the version columns. Download and run the MSI file, and follow the instructions, accepting the defaults: Once the installer completes this task, you should have an installation of Cassandra running on your machine. Bootstrapping the project We will build an application called MyStatus, which allows users to post status updates for their friends to read. CQL – the Cassandra Query Language Since this is about Cassandra and not targeted to users of any particular programming language or application framework, we will focus entirely on the database interactions that MyStatus will require. Code examples will be in Cassandra Query Language (CQL). Specifically, we'll use version 3.1.1 of CQL, which is available in Cassandra 2.0.6 and later versions. As the name implies, CQL is heavily inspired by SQL; in fact, many CQL statements are equally valid SQL statements. However, CQL and SQL are not interchangeable. CQL lacks a grammar for relational features such as JOIN statements, which are not possible in Cassandra. Conversely, CQL is not a subset of SQL; constructs for retrieving the update time of a given column, or performing an update in a lightweight transaction, which are available in CQL, do not have an SQL equivalent. Interacting with Cassandra Most common programming languages have drivers for interacting with Cassandra. When selecting a driver, you should look for libraries that support the CQL binary protocol, which is the latest and most efficient way to communicate with Cassandra. The CQL binary protocol is a relatively new introduction; older versions of Cassandra used the Thrift protocol as a transport layer. 
Although Cassandra continues to support Thrift, avoid Thrift-based drivers, as they are less performant than the binary protocol. Here are CQL binary drivers available for some popular programming languages: Language Driver Available at Java DataStax Java Driver github.com/datastax/java-driver Python DataStax Python Driver github.com/datastax/python-driver Ruby DataStax Ruby Driver github.com/datastax/ruby-driver C++ DataStax C++ Driver github.com/datastax/cpp-driver C# DataStax C# Driver github.com/datastax/csharp-driver JavaScript (Node.js) node-cassandra-cql github.com/jorgebay/node-cassandra-cql PHP phpbinarycql github.com/rmcfrazier/phpbinarycql While you will likely use one of these drivers in your applications, you can simply use the cqlsh tool, which is a command-line interface for executing CQL queries and viewing the results. To start cqlsh on OS X or Linux, simply type cqlsh into your command line; you should see something like this: $ cqlshConnected to Test Cluster at localhost:9160.[cqlsh 4.1.1 | Cassandra 2.0.7 | CQL spec 3.1.1 | Thrift protocol 19.39.0]Use HELP for help.cqlsh> On Windows, you can start cqlsh by finding the Cassandra CQL Shell application in the DataStax Community Edition group in your applications. Once you open it, you should see the same output we just saw. Creating a keyspace A keyspace is a collection of related tables, equivalent to a database in a relational system. To create the keyspace for our MyStatus application, issue the following statement in the CQL shell:    CREATE KEYSPACE "my_status"   WITH REPLICATION = {      'class': 'SimpleStrategy', 'replication_factor': 1    }; Here we created a keyspace called my_status. When we create a keyspace, we have to specify replication options. Cassandra provides several strategies for managing replication of data; SimpleStrategy is the best strategy as long as your Cassandra deployment does not span multiple data centers. The replication_factor value tells Cassandra how many copies of each piece of data are to be kept in the cluster; since we are only running a single instance of Cassandra, there is no point in keeping more than one copy of the data. In a production deployment, you would certainly want a higher replication factor; 3 is a good place to start. A few things at this point are worth noting about CQL's syntax: It's syntactically very similar to SQL; as we further explore CQL, the impression of similarity will not diminish. Double quotes are used for identifiers such as keyspace, table, and column names. As in SQL, quoting identifier names is usually optional, unless the identifier is a keyword or contains a space or another character that will trip up the parser. Single quotes are used for string literals; the key-value structure we use for replication is a map literal, which is syntactically similar to an object literal in JSON. As in SQL, CQL statements in the CQL shell must terminate with a semicolon. Selecting a keyspace Once you've created a keyspace, you would want to use it. In order to do this, employ the USE command: USE "my_status"; This tells Cassandra that all future commands will implicitly refer to tables inside the my_status keyspace. If you close the CQL shell and reopen it, you'll need to reissue this command. Summary In this article, you explored the reasons to choose Cassandra from among the many databases available, and having determined that Cassandra is a great choice, you installed it on your development machine. 
You had your first taste of the Cassandra Query Language when you issued your first command via the CQL shell in order to create a keyspace. You're now poised to begin working with Cassandra in earnest. Resources for Article: Further resources on this subject: Getting Started with Apache Cassandra [article] Basic Concepts and Architecture of Cassandra [article] About Cassandra [article]

Booting the System

Packt
27 Feb 2015
12 min read
In this article by William Confer and William Roberts, author of the book, Exploring SE for Android, we will learn once we have an SE for Android system, we need to see how we can make use of it, and get it into a usable state. In this article, we will: Modify the log level to gain more details while debugging Follow the boot process relative to the policy loader Investigate SELinux APIs and SELinuxFS Correct issues with the maximum policy version number Apply patches to load and verify an NSA policy (For more resources related to this topic, see here.) You might have noticed some disturbing error messages in dmesg. To refresh your memory, here are some of them: # dmesg | grep –i selinux <6>SELinux: Initializing. <7>SELinux: Starting in permissive mode <7>SELinux: Registering netfilter hooks <3>SELinux: policydb version 26 does not match my version range 15-23 ... It would appear that even though SELinux is enabled, we don't quite have an error-free system. At this point, we need to understand what causes this error, and what we can do to rectify it. At the end of this article, we should be able to identify the boot process of an SE for Android device with respect to policy loading, and how that policy is loaded into the kernel. We will then address the policy version error. Policy load An Android device follows a boot sequence similar to that of the *NIX booting sequence. The boot loader boots the kernel, and the kernel finally executes the init process. The init process is responsible for managing the boot process of the device through init scripts and some hard coded logic in the daemon. Like all processes, init has an entry point at the main function. This is where the first userspace process begins. The code can be found by navigating to system/core/init/init.c. When the init process enters main (refer to the following code excerpt), it processes cmdline, mounts some tmpfs filesystems such as /dev, and some pseudo-filesystems such as procfs. For SE for Android devices, init was modified to load the policy into the kernel as early in the boot process as possible. The policy in an SELinux system is not built into the kernel; it resides in a separate file. In Android, the only filesystem mounted in early boot is the root filesystem, a ramdisk built into boot.img. The policy can be found in this root filesystem at /sepolicy on the UDOO or target device. At this point, the init process calls a function to load the policy from the disk and sends it to the kernel, as follows: int main(int argc, char *argv[]) { ...   process_kernel_cmdline();   unionselinux_callback cb;   cb.func_log = klog_write;   selinux_set_callback(SELINUX_CB_LOG, cb);     cb.func_audit = audit_callback;   selinux_set_callback(SELINUX_CB_AUDIT, cb);     INFO(“loading selinux policyn”);   if (selinux_enabled) {     if (selinux_android_load_policy() < 0) {       selinux_enabled = 0;       INFO(“SELinux: Disabled due to failed policy loadn”);     } else {       selinux_init_all_handles();     }   } else {     INFO(“SELinux:  Disabled by command line optionn”);   } … In the preceding code, you will notice the very nice log message, SELinux: Disabled due to failed policy load, and wonder why we didn't see this when we ran dmesg before. This code executes before setlevel in init.rc is executed. The default init log level is set by the definition of KLOG_DEFAULT_LEVEL in system/core/include/cutils/klog.h. If we really wanted to, we could change that, rebuild, and actually see that message. 
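For reference, this is roughly what that change would look like. The values below are taken from the AOSP cutils header of about this era, but treat them as assumptions and check them against your own tree before editing:

/* system/core/include/cutils/klog.h (sketch) */
#define KLOG_ERROR_LEVEL   3
#define KLOG_WARNING_LEVEL 4
#define KLOG_NOTICE_LEVEL  5
#define KLOG_INFO_LEVEL    6
#define KLOG_DEBUG_LEVEL   7

/* The stock default is 3 (errors only); raising it to 6 lets the INFO()
   message about a failed policy load reach the kernel log. */
#define KLOG_DEFAULT_LEVEL 6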
Now that we have identified the initial path of the policy load, let's follow it on its course through the system. The selinux_android_load_policy() function can be found in the Android fork of libselinux, which is in the UDOO Android source tree. The library can be found at external/libselinux, and all of the Android modifications can be found in src/android.c. The function starts by mounting a pseudo-filesystem called SELinuxFS. In systems that do not have sysfs mounted, the mount point is /selinux; on systems that have sysfs mounted, the mount point is /sys/fs/selinux. You can check mountpoints on a running system using the following command: # mount | grep selinuxfs selinuxfs /sys/fs/selinux selinuxfs rw,relatime 0 0 SELinuxFS is an important filesystem as it provides the interface between the kernel and userspace for controlling and manipulating SELinux. As such, it has to be mounted for the policy load to work. The policy load uses the filesystem to send the policy file bytes to the kernel. This happens in the selinux_android_load_policy() function: int selinux_android_load_policy(void) {   char *mnt = SELINUXMNT;   int rc;   rc = mount(SELINUXFS, mnt, SELINUXFS, 0, NULL);   if (rc < 0) {     if (errno == ENODEV) {       /* SELinux not enabled in kernel */       return -1;     }     if (errno == ENOENT) {       /* Fall back to legacy mountpoint. */       mnt = OLDSELINUXMNT;       rc = mkdir(mnt, 0755);       if (rc == -1 && errno != EEXIST) {         selinux_log(SELINUX_ERROR,”SELinux:           Could not mkdir:  %sn”,         strerror(errno));         return -1;       }       rc = mount(SELINUXFS, mnt, SELINUXFS, 0, NULL);     }   }   if (rc < 0) {     selinux_log(SELINUX_ERROR,”SELinux:  Could not mount selinuxfs:  %sn”,     strerror(errno));     return -1;   }   set_selinuxmnt(mnt);     return selinux_android_reload_policy(); } The set_selinuxmnt(car *mnt) function changes a global variable in libselinux so that other routines can find the location of this vital interface. From there it calls another helper function, selinux_android_reload_policy(), which is located in the same libselinux android.c file. It loops through an array of possible policy locations in priority order. This array is defined as follows: Static const char *const sepolicy_file[] = {   “/data/security/current/sepolicy”,   “/sepolicy”,   0 }; Since only the root filesystem is mounted, it chooses /sepolicy at this time. The other path is for dynamic runtime reloads of policy. After acquiring a valid file descriptor to the policy file, the system is memory mapped into its address space, and calls security_load_policy(map, size) to load it to the kernel. This function is defined in load_policy.c. 
Here, the map parameter is the pointer to the beginning of the policy file, and the size parameter is the size of the file in bytes: int selinux_android_reload_policy(void) {   int fd = -1, rc;   struct stat sb;   void *map = NULL;   int i = 0;     while (fd < 0 && sepolicy_file[i]) {     fd = open(sepolicy_file[i], O_RDONLY | O_NOFOLLOW);     i++;   }   if (fd < 0) {     selinux_log(SELINUX_ERROR, “SELinux:  Could not open sepolicy:  %sn”,     strerror(errno));     return -1;   }   if (fstat(fd, &sb) < 0) {     selinux_log(SELINUX_ERROR, “SELinux:  Could not stat %s:  %sn”,     sepolicy_file[i], strerror(errno));     close(fd);     return -1;   }   map = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);   if (map == MAP_FAILED) {     selinux_log(SELINUX_ERROR, “SELinux:  Could not map %s:  %sn”,     sepolicy_file[i], strerror(errno));     close(fd);     return -1;   }     rc = security_load_policy(map, sb.st_size);   if (rc < 0) {     selinux_log(SELINUX_ERROR, “SELinux:  Could not load policy:  %sn”,     strerror(errno));     munmap(map, sb.st_size);     close(fd);     return -1;   }     munmap(map, sb.st_size);   close(fd);   selinux_log(SELINUX_INFO, “SELinux: Loaded policy from %sn”, sepolicy_file[i]);     return 0; } The security load policy opens the <selinuxmnt>/load file, which in our case is /sys/fs/selinux/load. At this point, the policy is written to the kernel via this pseudo file: int security_load_policy(void *data, size_t len) {   char path[PATH_MAX];   int fd, ret;     if (!selinux_mnt) {     errno = ENOENT;     return -1;   }     snprintf(path, sizeof path, “%s/load”, selinux_mnt);   fd = open(path, O_RDWR);   if (fd < 0)   return -1;     ret = write(fd, data, len);   close(fd);   if (ret < 0)   return -1;   return 0; } Fixing the policy version At this point, we have a clear idea of how the policy is loaded into the kernel. This is very important. SELinux integration with Android began in Android 4.0, so when porting to various forks and fragments, this breaks, and code is often missing. Understanding all parts of the system, however cursory, will help us to correct issues as they appear in the wild and develop. This information is also useful to understand the system as a whole, so when modifications need to be made, you'll know where to look and how things work. At this point, we're ready to correct the policy versions. The logs and kernel config are clear; only policy versions up to 23 are supported, and we're trying to load policy version 26. This will probably be a common problem with Android considering kernels are often out of date. There is also an issue with the 4.3 sepolicy shipped by Google. Some changes by Google made it a bit more difficult to configure devices as they tailored the policy to meet their release goals. Essentially, the policy allows nearly everything and therefore generates very few denial logs. Some domains in the policy are completely permissive via a per-domain permissive statement, and those domains also have rules to allow everything so denial logs do not get generated. To correct this, we can use a more complete policy from the NSA. Replace external/sepolicy with the download from https://bitbucket.org/seandroid/external-sepolicy/get/seandroid-4.3.tar.bz2. After we extract the NSA's policy, we need to correct the policy version. The policy is located in external/sepolicy and is compiled with a tool called check_policy. The Android.mk file for sepolicy will have to pass this version number to the compiler, so we can adjust this here. 
On the top of the file, we find the culprit: ... # Must be <= /selinux/policyvers reported by the Android kernel. # Must be within the compatibility range reported by checkpolicy -V. POLICYVERS ?= 26 ... Since the variable is overridable by the ?= assignment. We can override this in BoardConfig.mk. Edit device/fsl/imx6/BoardConfigCommon.mk, adding the following POLICYVERS line to the bottom of the file: ... BOARD_FLASH_BLOCK_SIZE := 4096 TARGET_RECOVERY_UI_LIB := librecovery_ui_imx # SELinux Settings POLICYVERS := 23 -include device/google/gapps/gapps_config.mk Since the policy is on the boot.img image, build the policy and bootimage: $ mmm -B external/sepolicy/ $ make –j4 bootimage 2>&1 | tee logz !!!!!!!!! WARNING !!!!!!!!! VERIFY BLOCK DEVICE !!!!!!!!! $ sudo chmod 666 /dev/sdd1 $ dd if=$OUT/boot.img of=/dev/sdd1 bs=8192 conv=fsync Eject the SD card, place it into the UDOO, and boot. The first of the preceding commands should produce the following log output: out/host/linux-x86/bin/checkpolicy: writing binary representation (version 23) to out/target/product/udoo/obj/ETC/sepolicy_intermediates/sepolicy At this point, by checking the SELinux logs using dmesg, we can see the following: # dmesg | grep –i selinux <6>init: loading selinux policy <7>SELinux: 128 avtab hash slots, 490 rules. <7>SELinux: 128 avtab hash slots, 490 rules. <7>SELinux: 1 users, 2 roles, 274 types, 0 bools, 1 sens, 1024 cats <7>SELinux: 84 classes, 490 rules <7>SELinux: Completing initialization. Another command we need to run is getenforce. The getenforce command gets the SELinux enforcing status. It can be in one of three states: Disabled: No policy is loaded or there is no kernel support Permissive: Policy is loaded and the device logs denials (but is not in enforcing mode) Enforcing: This state is similar to the permissive state except that policy violations result in EACCESS being returned to userspace One of the goals while booting an SELinux system is to get to the enforcing state. Permissive is used for debugging, as follows: # getenforce Permissive Summary In this article, we covered the important policy load flow through the init process. We also changed the policy version to suit our development efforts and kernel version. From there, we were able to load the NSA policy and verify that the system loaded it. This article additionally showcased some of the SELinux APIs and their interactions with SELinuxFS. Resources for Article: Further resources on this subject: Android And Udoo Home Automation? [article] Sound Recorder For Android [article] Android Virtual Device Manager [article]

YARN and Hadoop

Packt
27 Feb 2015
8 min read
In this article, by the authors, Amol Fasale and Nirmal Kumar, of the book, YARN Essentials, you will learn about what YARN is and how it's implemented with Hadoop. YARN. YARN stands for Yet Another Resource Negotiator. YARN is a generic resource platform to manage resources in a typical cluster. YARN was introduced with Hadoop 2.0, which is an open source distributed processing framework from the Apache Software Foundation. In 2012, YARN became one of the subprojects of the larger Apache Hadoop project. YARN is also coined by the name of MapReduce 2.0. This is since Apache Hadoop MapReduce has been re-architectured from the ground up to Apache Hadoop YARN. Think of YARN as a generic computing fabric to support MapReduce and other application paradigms within the same Hadoop cluster; earlier, this was limited to batch processing using MapReduce. This really changed the game to recast Apache Hadoop as a much more powerful data processing system. With the advent of YARN, Hadoop now looks very different compared to the way it was only a year ago. YARN enables multiple applications to run simultaneously on the same shared cluster and allows applications to negotiate resources based on need. Therefore, resource allocation/management is central to YARN. YARN has been thoroughly tested at Yahoo! since September 2012. It has been in production across 30,000 nodes and 325 PB of data since January 2013. Recently, Apache Hadoop YARN won the Best Paper Award at ACM Symposium on Cloud Computing (SoCC) in 2013! (For more resources related to this topic, see here.) The redesign idea Initially, Hadoop was written solely as a MapReduce engine. Since it runs on a cluster, its cluster management components were also tightly coupled with the MapReduce programming paradigm. The concepts of MapReduce and its programming paradigm were so deeply ingrained in Hadoop that one could not use it for anything else except MapReduce. MapReduce therefore became the base for Hadoop, and as a result, the only thing that could be run on Hadoop was a MapReduce job, batch processing. In Hadoop 1.x, there was a single JobTracker service that was overloaded with many things such as cluster resource management, scheduling jobs, managing computational resources, restarting failed tasks, monitoring TaskTrackers, and so on. There was definitely a need to separate the MapReduce (specific programming model) part and the resource management infrastructure in Hadoop. YARN was the first attempt to perform this separation. Limitations of the classical MapReduce or Hadoop 1.x The main limitations of Hadoop 1.x can be categorized into the following areas: Limited scalability: Large Hadoop clusters reported some serious limitations on scalability. This is caused mainly by a single JobTracker service, which ultimately results in a serious deterioration of the overall cluster performance because of attempts to re-replicate data and overload live nodes, thus causing a network flood. According to Yahoo!, the practical limits of such a design are reached with a cluster of ~5,000 nodes and 40,000 tasks running concurrently. Therefore, it is recommended that you create smaller and less powerful clusters for such a design. Low cluster resource utilization: The resources in Hadoop 1.x on each slave node (data node), are divided in terms of a fixed number of map and reduce slots. Consider the scenario where a MapReduce job has already taken up all the available map slots and now wants more new map tasks to run. 
In this case, it cannot run new map tasks, even though all the reduce slots are still empty. This notion of a fixed number of slots has a serious drawback and results in poor cluster utilization. Lack of support for alternative frameworks/paradigms: The main focus of Hadoop right from the beginning was to perform computation on large datasets using parallel processing. Therefore, the only programming model it supported was MapReduce. With the current industry needs in terms of new use cases in the world of big data, many new and alternative programming models (such Apache Giraph, Apache Spark, Storm, Tez, and so on) are coming into the picture each day. There is definitely an increasing demand to support multiple programming paradigms besides MapReduce, to support the varied use cases that the big data world is facing. YARN as the modern operating system of Hadoop The MapReduce programming model is, no doubt, great for many applications, but not for everything in the world of computation. There are use cases that are best suited for MapReduce, but not all. MapReduce is essentially batch-oriented, but support for real-time and near real-time processing are the emerging requirements in the field of big data. YARN took cluster resource management capabilities from the MapReduce system so that new engines could use these generic cluster resource management capabilities. This lightened up the MapReduce system to focus on the data processing part, which it is good at and will ideally continue to be so. YARN therefore turns into a data operating system for Hadoop 2.0, as it enables multiple applications to coexist in the same shared cluster. Refer to the following figure: YARN as a modern OS for Hadoop What are the design goals for YARN This section talks about the core design goals of YARN: Scalability: Scalability is a key requirement for big data. Hadoop was primarily meant to work on a cluster of thousands of nodes with commodity hardware. Also, the cost of hardware is reducing year-on-year. YARN is therefore designed to perform efficiently on this network of a myriad of nodes. High cluster utilization: In Hadoop 1.x, the cluster resources were divided in terms of fixed size slots for both map and reduce tasks. This means that there could be a scenario where map slots might be full while reduce slots are empty, or vice versa. This was definitely not an optimal utilization of resources, and it needed further optimization. YARN fine-grained resources in terms of RAM, CPU, and disk (containers), leading to an optimal utilization of the available resources. Locality awareness: This is a key requirement for YARN when dealing with big data; moving computation is cheaper than moving data. This helps to minimize network congestion and increase the overall throughput of the system. Multitenancy: With the core development of Hadoop at Yahoo, primarily to support large-scale computation, HDFS also acquired a permission model, quotas, and other features to improve its multitenant operation. YARN was therefore designed to support multitenancy in its core architecture. Since cluster resource allocation/management is at the heart of YARN, sharing processing and storage capacity across clusters was central to the design. YARN has the notion of pluggable schedulers and the Capacity Scheduler with YARN has been enhanced to provide a flexible resource model, elastic computing, application limits, and other necessary features that enable multiple tenants to securely share the cluster in an optimized way. 
Support for programming model: The MapReduce programming model is no doubt great for many applications, but not for everything in the world of computation. As the world of big data is still in its inception phase, organizations are heavily investing in R&D to develop new and evolving frameworks to solve a variety of problems that big data brings. A flexible resource model: Besides mismatch with the emerging frameworks’ requirements, the fixed number of slots for resources had serious problems. It was straightforward for YARN to come up with a flexible and generic resource management model. A secure and auditable operation: As Hadoop continued to grow to manage more tenants with a myriad of use cases across different industries, the requirements for isolation became more demanding. Also, the authorization model lacked strong and scalable authentication. This is because Hadoop was designed with parallel processing in mind, with no comprehensive security. Security was an afterthought. YARN understands this and adds security-related requirements into its design. Reliability/availability: Although fault tolerance is in the core design, in reality maintaining a large Hadoop cluster is a tedious task. All issues related to high availability, failures, failures on restart, and reliability were therefore a core requirement for YARN. Backward compatibility: Hadoop 1.x has been in the picture for a while, with many successful production deployments across many industries. This massive installation base of MapReduce applications and the ecosystem of related projects, such as Hive, Pig, and so on, would not tolerate a radical redesign. Therefore, the new architecture reused as much code from the existing framework as possible, and no major surgery was conducted on it. This made MRv2 able to ensure satisfactory compatibility with MRv1 applications. Summary In this article, you learned what YARN is and how it has turned out to be the modern operating system for Hadoop, making it a multiapplication platform. Resources for Article: Further resources on this subject: Sizing and Configuring your Hadoop Cluster [article] Hive in Hadoop [article] Evolution of Hadoop [article]

Applications of WebRTC

Packt
27 Feb 2015
20 min read
This article is by Andrii Sergiienko, the author of the book WebRTC Cookbook. WebRTC is a relatively new and revolutionary technology that opens new horizons in the area of interactive applications and services. Most of the popular web browsers support it natively (such as Chrome and Firefox) or via extensions (such as Safari). Mobile platforms such as Android and iOS allow you to develop native WebRTC applications. In this article, we will cover the following recipes: Creating a multiuser conference using WebRTCO Taking a screenshot using WebRTC Compiling and running a demo for Android (For more resources related to this topic, see here.) Creating a multiuser conference using WebRTCO In this recipe, we will create a simple application that supports a multiuser videoconference. We will do it using WebRTCO—an open source JavaScript framework for developing WebRTC applications. Getting ready For this recipe, you should have a web server installed and configured. The application we will create can work while running on the local filesystem, but it is more convenient to use it via the web server. To create the application, we will use the signaling server located on the framework's homepage. The framework is open source, so you can download the signaling server from GitHub and install it locally on your machine. GitHub's page for the project can be found at https://github.com/Oslikas/WebRTCO. How to do it… The following recipe is built on the framework's infrastructure. We will use the framework's signaling server. What we need to do is include the framework's code and do some initialization procedure: Create an HTML file and add common HTML heads: <!DOCTYPE html> <html lang="en"> <head>     <meta charset="utf-8"> Add some style definitions to make the web page looking nicer:     <style type="text/css">         video {             width: 384px;             height: 288px;             border: 1px solid black;             text-align: center;         }         .container {             width: 780px;             margin: 0 auto;         }     </style> Include the framework in your project: <script type="text/javascript" src ="https://cdn.oslikas.com/js/WebRTCO-1.0.0-beta-min.js"charset="utf-8"></script></head> Define the onLoad function—it will be called after the web page is loaded. In this function, we will make some preliminary initializing work: <body onload="onLoad();"> Define HTML containers where the local video will be placed: <div class="container">     <video id="localVideo"></video> </div> Define a place where the remote video will be added. Note that we don't create HTML video objects, and we just define a separate div. 
Further, video objects will be created and added to the page by the framework automatically: <div class="container" id="remoteVideos"></div> <div class="container"> Create the controls for the chat area: <div id="chat_area" style="width:100%; height:250px;overflow: auto; margin:0 auto 0 auto; border:1px solidrgb(200,200,200); background: rgb(250,250,250);"></div></div><div class="container" id="div_chat_input"><input type="text" class="search-query"placeholder="chat here" name="msgline" id="chat_input"><input type="submit" class="btn" id="chat_submit_btn"onclick="sendChatTxt();"/></div> Initialize a few variables: <script type="text/javascript">     var videoCount = 0;     var webrtco = null;     var parent = document.getElementById('remoteVideos');     var chatArea = document.getElementById("chat_area");     var chatColorLocal = "#468847";     var chatColorRemote = "#3a87ad"; Define a function that will be called by the framework when a new remote peer is connected. This function creates a new video object and puts it on the page:     function getRemoteVideo(remPid) {         var video = document.createElement('video');         var id = 'remoteVideo_' + remPid;         video.setAttribute('id',id);         parent.appendChild(video);         return video;     } Create the onLoad function. It initializes some variables and resizes the controls on the web page. Note that this is not mandatory, and we do it just to make the demo page look nicer:     function onLoad() {         var divChatInput =         document.getElementById("div_chat_input");         var divChatInputWidth = divChatInput.offsetWidth;         var chatSubmitButton =         document.getElementById("chat_submit_btn");         var chatSubmitButtonWidth =         chatSubmitButton.offsetWidth;         var chatInput =         document.getElementById("chat_input");         var chatInputWidth = divChatInputWidth -         chatSubmitButtonWidth - 40;         chatInput.setAttribute("style","width:" +         chatInputWidth + "px");         chatInput.style.width = chatInputWidth + 'px';         var lv = document.getElementById("localVideo"); Create a new WebRTCO object and start the application. After this point, the framework will start signaling connection, get access to the user's media, and will be ready for income connections from remote peers: webrtco = new WebRTCO('wss://www.webrtcexample.com/signalling',lv, OnRoomReceived, onChatMsgReceived, getRemoteVideo, OnBye);}; Here, the first parameter of the function is the URL of the signaling server. In this example, we used the signaling server provided by the framework. However, you can install your own signaling server and use an appropriate URL. The second parameter is the local video object ID. Then, we will supply functions to process messages of received room, received message, and received remote video stream. The last parameter is the function that will be called when some of the remote peers have been disconnected. The following function will be called when the remote peer has closed the connection. It will remove video objects that became outdated:     function OnBye(pid) {         var video = document.getElementById("remoteVideo_"         + pid);         if (null !== video) video.remove();     }; We also need a function that will create a URL to share with other peers in order to make them able to connect to the virtual room. 
The following piece of code represents such a function: function OnRoomReceived(room) {addChatTxt("Now, if somebody wants to join you,should use this link: <ahref=""+window.location.href+"?room="+room+"">"+window.location.href+"?room="+room+"</a>",chatColorRemote);}; The following function prints some text in the chat area. We will also use it to print the URL to share with remote peers:     function addChatTxt(msg, msgColor) {         var txt = "<font color=" + msgColor + ">" +         getTime() + msg + "</font><br/>";         chatArea.innerHTML = chatArea.innerHTML + txt;         chatArea.scrollTop = chatArea.scrollHeight;     }; The next function is a callback that is called by the framework when a peer has sent us a message. This function will print the message in the chat area:     function onChatMsgReceived(msg) {         addChatTxt(msg, chatColorRemote);     }; To send messages to remote peers, we will create another function, which is represented in the following code:     function sendChatTxt() {         var msgline =         document.getElementById("chat_input");         var msg = msgline.value;         addChatTxt(msg, chatColorLocal);         msgline.value = '';         webrtco.API_sendPutChatMsg(msg);     }; We also want to print the time while printing messages; so we have a special function that formats time data appropriately:     function getTime() {         var d = new Date();         var c_h = d.getHours();         var c_m = d.getMinutes();         var c_s = d.getSeconds();           if (c_h < 10) { c_h = "0" + c_h; }         if (c_m < 10) { c_m = "0" + c_m; }         if (c_s < 10) { c_s = "0" + c_s; }         return c_h + ":" + c_m + ":" + c_s + ": ";     }; We have some helper code to make our life easier. We will use it while removing obsolete video objects after remote peers are disconnected:     Element.prototype.remove = function() {         this.parentElement.removeChild(this);     }     NodeList.prototype.remove =     HTMLCollection.prototype.remove = function() {         for(var i = 0, len = this.length; i < len; i++) {             if(this[i] && this[i].parentElement) {                 this[i].parentElement.removeChild(this[i]);             }         }     } </script> </body> </html> Now, save the file and put it on the web server, where it could be accessible from web browser. How it works… Open a web browser and navigate to the place where the file is located on the web server. You will see an image from the web camera and a chat area beneath it. At this stage, the application has created the WebRTCO object and initiated the signaling connection. If everything is good, you will see an URL in the chat area. Open this URL in a new browser window or on another machine—the framework will create a new video object for every new peer and will add it to the web page. The number of peers is not limited by the application. In the following screenshot, I have used three peers: two web browser windows on the same machine and a notebook as the third peer: Taking a screenshot using WebRTC Sometimes, it can be useful to be able to take screenshots from a video during videoconferencing. In this recipe, we will implement such a feature. Getting ready No specific preparation is necessary for this recipe. You can take any basic WebRTC videoconferencing application. We will add some code to the HTML and JavaScript parts of the application. How to do it… Follow these steps: First of all, add image and canvas objects to the web page of the application. 
We will use these objects to take screenshots and display them on the page: <img id="localScreenshot" src=""> <canvas style="display:none;" id="localCanvas"></canvas> Next, you have to add a button to the web page. After clicking on this button, the appropriate function will be called to take the screenshot from the local stream video: <button onclick="btn_screenshot()" id="btn_screenshot">Make a screenshot</button> Finally, we need to implement the screenshot taking function: function btn_screenshot() { var v = document.getElementById("localVideo"); var s = document.getElementById("localScreenshot"); var c = document.getElementById("localCanvas"); var ctx = c.getContext("2d"); Draw an image on the canvas object—the image will be taken from the video object: ctx.drawImage(v,0,0); Now, take reference of the canvas, convert it to the DataURL object, and insert the value into the src option of the image object. As a result, the image object will show us the taken screenshot: s.src = c.toDataURL('image/png'); } That is it. Save the file and open the application in a web browser. Now, when you click on the Make a screenshot button, you will see the screenshot in the appropriate image object on the web page. You can save the screenshot to the disk using right-click and the pop-up menu. How it works… We use the canvas object to take a frame of the video object. Then, we will convert the canvas' data to DataURL and assign this value to the src parameter of the image object. After that, an image object is referred to the video frame, which is stored in the canvas. Compiling and running a demo for Android Here, you will learn how to build a native demo WebRTC application for Android. Unfortunately, the supplied demo application from Google doesn't contain any IDE-specific project files, so you will have to deal with console scripts and commands during all the building process. Getting ready We will need to check whether we have all the necessary libraries and packages installed on the work machine. For this recipe, I used a Linux box—Ubuntu 14.04.1 x64. So all the commands that might be specific for OS will be relevant to Ubuntu. Nevertheless, using Linux is not mandatory and you can take Windows or Mac OS X. If you're using Linux, it should be 64-bit based. Otherwise, you most likely won't be able to compile Android code. Preparing the system First of all, you need to install the necessary system packages: sudo apt-get install git git-svn subversion g++ pkg-config gtk+-2.0libnss3-dev libudev-dev ant gcc-multilib lib32z1 lib32stdc++6 Installing Oracle JDK By default, Ubuntu is supplied with OpenJDK, but it is highly recommended that you install an Oracle JDK. Otherwise, you can face issues while building WebRTC applications for Android. One another thing that you should keep in mind is that you should probably use Oracle JDK version 1.6—other versions (in particular, 1.7 and 1.8) might not be compatible with the WebRTC code base. This will probably be fixed in the future, but in my case, only Oracle JDK 1.6 was able to build the demo successfully. Download the Oracle JDK from its home page at http://www.oracle.com/technetwork/java/javase/downloads/index.html. In case there is no download link on such an old JDK, you can try another URL: http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-javase6-419409.html. Oracle will probably ask you to sign in or register first. You will be able to download anything from their archive. 
Install the downloaded JDK:
sudo mkdir -p /usr/lib/jvm
cd /usr/lib/jvm && sudo /bin/sh ~/jdk-6u45-linux-x64.bin --noregister
Here, I assume that you downloaded the JDK package into the home directory. Register the JDK in the system:
sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.6.0_45/bin/javac 50000
sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.6.0_45/bin/java 50000
sudo update-alternatives --config javac
sudo update-alternatives --config java
cd /usr/lib
sudo ln -s /usr/lib/jvm/jdk1.6.0_45 java-6-sun
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45/
Test the Java version:
java -version
You should see something like Java HotSpot on the screen; it means that the correct JVM is installed.
Getting the WebRTC source code
Perform the following steps to get the WebRTC source code:
Download and prepare the Google developer tools:
mkdir -p ~/dev && cd ~/dev
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH=`pwd`/depot_tools:"$PATH"
Download the WebRTC source code:
gclient config http://webrtc.googlecode.com/svn/trunk
echo "target_os = ['android', 'unix']" >> .gclient
gclient sync
The last command can take a couple of minutes (actually, it depends on your Internet connection speed), as you will be downloading several gigabytes of source code.
Installing Android Developer Tools
To develop Android applications, you should have Android Developer Tools (ADT) installed. This SDK contains Android-specific libraries and tools that are necessary to build and develop native software for Android. Perform the following steps to install ADT:
Download ADT from its home page http://developer.android.com/sdk/index.html#download.
Unpack ADT to a folder:
cd ~/dev
unzip ~/adt-bundle-linux-x86_64-20140702.zip
Set up the ANDROID_HOME environment variable:
export ANDROID_HOME=`pwd`/adt-bundle-linux-x86_64-20140702/sdk
How to do it…
After you've prepared the environment and installed the necessary system components and packages, you can continue to build the demo application:
Prepare Android-specific build dependencies:
cd ~/dev/trunk
source ./build/android/envsetup.sh
Configure the build scripts:
export GYP_DEFINES="$GYP_DEFINES build_with_libjingle=1 build_with_chromium=0 libjingle_java=1 OS=android"
gclient runhooks
Build the WebRTC code with the demo application:
ninja -C out/Debug -j 5 AppRTCDemo
After the last command, you can find the compiled Android package with the demo application at ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk.
Running on the Android simulator
Follow these steps to run an application on the Android simulator:
Run the Android SDK manager and install the necessary Android components:
$ANDROID_HOME/tools/android sdk
Choose at least Android 4.x; lower versions don't have WebRTC support. In the following screenshot, I've chosen Android SDK 4.4 and 4.2:
Create an Android virtual device:
cd $ANDROID_HOME/tools
./android avd &
The last command executes the Android SDK tool to create and maintain virtual devices. Create a new virtual device using this tool.
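If you prefer to script this step rather than use the graphical AVD manager, the older SDK tools also let you create a virtual device from the command line. This is only a sketch; the target ID used below is an assumption, so list the targets installed on your machine first and substitute the correct one:
# List the Android platform targets installed via the SDK manager
$ANDROID_HOME/tools/android list targets
# Create a virtual device named emu1 against one of the listed targets (android-19 is just an example)
$ANDROID_HOME/tools/android create avd -n emu1 -t android-19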
You can see an example in the following screenshot: Start the emulator using just the created virtual device: ./emulator –avd emu1 & This can take a couple of seconds (or even minutes), after that you should see a typical Android device home screen, like in the following screenshot: Check whether the virtual device is simulated and running: cd $ANDROID_HOME/platform-tools ./adb devices You should see something like the following: List of devices attached emulator-5554   device This means that your just created virtual device is OK and running; so we can use it to test our demo application. Install the demo application on the virtual device: ./adb install ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk You should see something like the following: 636 KB/s (2507985 bytes in 3.848s) pkg: /data/local/tmp/AppRTCDemo-debug.apk Success This means that the application is transferred to the virtual device and is ready to be started. Switch to the simulator window; you should see the demo application's icon. Execute it like it is a real Android device. In the following screenshot, you can see the installed demo application AppRTC: While trying to launch the application, you might see an error message with a Java runtime exception referring to GLSurfaceView. In this case, you probably need to switch to the Use Host GPU option while creating the virtual device with Android Virtual Device (AVD) tool. Fixing a bug with GLSurfaceView Sometimes if you're using an Android simulator with a virtual device on the ARM architecture, you can be faced with an issue when the application says No config chosen, throws an exception, and exits. This is a known defect in the Android WebRTC code and its status can be tracked at https://code.google.com/p/android/issues/detail?id=43209. The following steps can help you fix this bug in the original demo application: Go to the ~/dev/trunk/talk/examples/android/src/org/appspot/apprtc folder and edit the AppRTCDemoActivity.java file. Look for the following line of code: vsv = new AppRTCGLView(this, displaySize); Right after this line, add the following line of code: vsv.setEGLConfigChooser(8,8,8,8,16,16); You will need to recompile the application: cd ~/dev/trunk ninja -C out/Debug AppRTCDemo  Now you can deploy your application and the issue will not appear anymore. Running on a physical Android device For deploying applications on an Android device, you don't need to have any developer certificates (like in the case of iOS devices). So if you have an Android physical device, it probably would be easier to debug and run the demo application on the device rather than on the simulator. Connect the Android device to the machine using a USB cable. On the Android device, switch the USB debug mode on. Check whether your machine sees your device: cd $ANDROID_HOME/platform-tools ./adb devices If device is connected and the machine sees it, you should see the device's name in the result print of the preceding command: List of devices attached QO4721C35410   device Deploy the application onto the device: cd $ANDROID_HOME/platform-tools ./adb -d install ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk You will get the following output: 3016 KB/s (2508031 bytes in 0.812s) pkg: /data/local/tmp/AppRTCDemo-debug.apk Success After that you should see the AppRTC demo application's icon on the device: After you have started the application, you should see a prompt to enter a room number. 
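If the application misbehaves on the device or emulator (for example, the GLSurfaceView exception mentioned above), it can help to watch the Android log from another terminal while you reproduce the problem. This is just standard adb usage, not something specific to the demo:
cd $ANDROID_HOME/platform-tools
./adb logcat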
At this stage, go to http://apprtc.webrtc.org in your web browser on another machine; you will see an image from your camera. Copy the room number from the URL string and enter it in the demo application on the Android device. Your Android device and another machine will try to establish a peer-to-peer connection, and might take some time. In the following screenshot, you can see the image on the desktop after the connection with Android smartphone has been established: Here, the big image represents what is translated from the frontal camera of the Android smartphone; the small image depicts the image from the notebook's web camera. So both the devices have established direct connection and translate audio and video to each other. The following screenshot represents what was seen on the Android device: There's more… The original demo doesn't contain any ready-to-use IDE project files; so you have to deal with console commands and scripts during all the development process. You can make your life a bit easier if you use some third-party tools that simplify the building process. Such tools can be found at http://tech.pristine.io/build-android-apprtc. Summary In this article, we have learned to create a multiuser conference using WebRTCO, take a screenshot using WebRTC, and compile and run a demo for Android. Resources for Article: Further resources on this subject: Webrtc with Sip and Ims? [article] Using the Webrtc Data Api [article] Applying Webrtc for Education and E Learning [article]

Packt
26 Feb 2015
15 min read
Save for later

RESTful Web Service Mocking and Testing with SoapUI, RAML, and a JSON Slurper Script Assertion

In this article written by Rupert Anderson, the author of SoapUI Cookbook, we will cover the following topics: Installing the SoapUI RAML plugin Generating a SoapUI REST project and mock service using the RAML plugin Testing response values using JSON Slurper As you might already know, despite being called SoapUI, the product actually has an excellent RESTful web service mock and testing functionality. Also, SoapUI is very open and extensible with a great plugin framework. This makes it relatively easy to use and develop plugins to support APIs defined by other technologies, for example RAML (http://raml.org/) and Swagger (http://swagger.io/). If you haven't seen it before, RESTful API Modeling Language or RAML is a modern way to describe RESTful web services that use YAML and JSON standards. As a brief demonstration, this article uses the excellent SoapUI RAML plugin to: Generate a SoapUI REST service definition automatically from the RAML definition Generate a SoapUI REST mock with an example response automatically from the RAML definition Create a SoapUI TestSuite, TestCase, and TestStep to call the mock Assert that the response contains the values we expect using a Script Assertion and JSON Slurper to parse and inspect the JSON content This article assumes that you have used SoapUI before, but not RAML or the RAML plugin. If you haven't used SoapUI before, then you can still give it a shot, but it might help to first take a look at Chapters 3 and 4 of the SoapUI Cookbook or the Getting Started, REST, and REST Mocking sections at http://www.soapui.org/. (For more resources related to this topic, see here.) Installing the SoapUI RAML plugin This recipe shows how to get the SoapUI RAML plugin installed and checked. Getting ready We'll use SoapUI open source version 5.0.0 here. You can download it from http://www.soapui.org/ if you need it. We'll also need the RAML plugin. You can download the latest version (0.4) from http://sourceforge.net/projects/soapui-plugins/files/soapui-raml-plugin/. How to do it... First, we'll download the plugin, install it, restart SoapUI, and check whether it's available: Download the RAML plugin zip file from sourceforge using the preceding link. It should be called soapui-raml-plugin-0.4-plugin-dist.zip. To install the plugin, if you're happy to, you can simply unzip the plugin zip file to <SoapUI Home>/java/app/bin; this will have the same effect as manually performing the following steps:     Create a plugins folder if one doesn't already exist in <SoapUI Home>/java/app/bin/plugins.     Copy the soapui-raml-plugin-0.4-plugin.jar file from the plugins folder of the expanded zip file into the plugins folder.     Copy the raml-parser-0.9-SNAPSHOT.jar and snakeyaml-1.13.jar files from the ext folder of the expanded zip file into the <SoapUI Home>/java/app/bin/ext folder.     The resulting folder structure under <SoapUI Home>/java/app/bin should now be something like: If SoapUI is running, we need to restart it so that the plugin and dependencies are added to its classpath and can be used, and check whether it's available. 
When SoapUI has started/restarted, we can confirm whether the plugin and dependencies have loaded successfully by checking the SoapUI tab log: INFO:Adding [/Applications/SoapUI-5.0.0.app/Contents/Resources/app/bin/ext/raml-parser-0.9-SNAPSHOT.jar] to extensions classpath INFO:Adding [/Applications/SoapUI-5.0.0.app/Contents/Resources/app/bin/ext/snakeyaml-1.13.jar] to extensions classpath INFO:Adding plugin from [/Applications/SoapUI-5.0.0.app/Contents/java/app/bin/plugins/soapui-raml-plugin-0.4-plugin.jar] To check whether the new RAML Action is available in SoapUI, we'll need a workspace and a project:     Create a new SoapUI Workspace if you don't already have one, and call it RESTWithRAML.     In the new Workspace, create New Generic Project; just enter Project Name of Invoice and click on OK. Finally, if you right-click on the created Invoice project, you should see a menu option of Import RAML Definition as shown in the following screenshot: How it works... In order to concentrate on using RAML with SoapUI, we won't go into how the plugin actually works. In very simple terms, the SoapUI plugin framework uses the plugin jar file (soapui-raml-plugin-0.4-plugin.jar) to load the standard Java RAML Parser (raml-parser-0.9-SNAPSHOT.jar) and the Snake YAML Parser (snakeyaml-1.13.jar) onto SoapUI's classpath and run them from a custom menu Action. If you have Java skills and understand the basics of SoapUI extensions, then many plugins are quite straightforward to understand and develop. If you would like to understand how SoapUI plugins work and how to develop them, then please refer to Chapters 10 and 11 of the SoapUI Cookbook. You can also take a look at Ole Lensmar's (plugin creator and co-founder of SoapUI) RAML Plugin blog and plugin source code from the following links. There's more... If you read more on SoapUI plugins, one thing to be aware of is that open source plugins are now termed "old-style" plugins in the SoapUI online documentation. This is because the commercial SoapUI Pro and newer SoapUI NG versions of SoapUI feature an enhanced plugin framework with Plugin Manger to install plugins from a plugin repository (see http://www.soapui.org/extension-plugins/plugin-manager.html). The new Plugin Manager plugins are not compatible with open source SoapUI, and open source or "old-style" plugins will not load using the Plugin Manager. See also More information on using and understanding various SoapUI plugins can be found in Chapter 10, Using Plugins of the SoapUI Cookbook More information on how to develop SoapUI extensions and plugins can be found in Chapter 11, Taking SoapUI Further of the SoapUI Cookbook The other open source SoapUI plugins can also be found in Ole Lensmar's blog, http://olensmar.blogspot.se/p/soapui-plugins.html Generating a SoapUI REST service definition and mock service using the RAML plugin In this section, we'll use the SoapUI RAML plugin to set up a SoapUI REST service definition, mock service, and example response using an RAML definition file. This recipe assumes you've followed the previous one to install the RAML plugin and create the SoapUI Workspace and Project. Getting ready For this recipe, we'll use the following simple invoice RAML definition: #%RAML 0.8title: Invoice APIbaseUri: http://localhost:8080/{version}version: v1.0/invoice:   /{id}:     get:       description: Retrieves an invoice by id.       
responses:         200:           body:             application/json:             example: |                 {                   "invoice": {                     "id": "12345",                     "companyName": "Test Company",                     "amount": 100.0                   }                 } This RAML definition describes the following RESTful service endpoint and resource:http://localhost:8080/v1.0/invoice/{id}. The definition also provides example JSON invoice content (shown highlighted in the preceding code). How to do it... First, we'll use the SoapUI RAML plugin to generate the REST service, resource, and mock for the Invoice project created in the previous recipe. Then, we'll get the mock running and fire a test REST request to make sure it returns the example JSON invoice content provided by the preceding RAML definition: To generate the REST service, resource, and mock using the preceding RAML definition:     Right-click on the Invoice project created in the previous recipe and select Import RAML Definition.     Create a file that contains the preceding RAML definition, for example, invoicev1.raml.     Set RAML Definition to the location of invoicev1.raml.     Check the Generate MockService checkbox.     The Add RAML Definition window should look something like the following screenshot; when satisfied, click on OK to generate. If everything goes well, you should see the following log messages in the SoapUI tab: Importing RAML from [file:/work/RAML/invoicev1.raml] CWD:/Applications/SoapUI-5.0.0.app/Contents/java/app/bin Also, the Invoice Project should now contain the following:     An Invoice API service definition.     An invoice resource.     A sample GET request (Request 1).     An Invoice API MockService with a sample response (Response 200) that contains the JSON invoice content from the RAML definition. See the following screenshot: Before using the mock, we need to tweak Resource Path to remove the {id}, which is not a placeholder and will cause the mock to only respond to requests to /v1.0/invoice/{id} rather than /v1.0/invoice/12345 and so on. To do this:     Double-click on Invoice API MockService.     Double-click on the GET /v1.0/invoice/{id} action and edit the Resource Path as shown in the following screenshot: Start the mock. It should now publish the endpoint at http://localhost:8080/v1.0/invoice/and respond to the HTTP GET requests. Now, let's set up Request 1 and fire it at the mock to give it a quick test:     Double-click on Request 1 to edit it.     Under the Request tab, set Value of the id parameter to 12345.     Click on the green arrow to dispatch the request to the mock.     All being well, you should see a successful response, and under the JSON or RAW tab, you should see the JSON invoice content from the RAML definition, as shown in the following screenshot: How it works... Assuming that you're familiar with the basics of SoapUI projects, mocks, and test requests, the interesting part will probably be the RAML Plugin itself. To understand exactly what's going on in the RAML plugin, we really need to take a look at the source code on GitHub. A very simplified explanation of the plugin functionality is: The plugin contains a custom SoapUI Action ImportRamlAction.java. SoapUI Actions define menu-driven functionality so that when Import RAML Definition is clicked, the custom Action invokes the main RAML import functionality in RamlImporter.groovy. 
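As an extra sanity check outside SoapUI, you can also call the running mock directly from the command line. This is only an illustration and assumes the mock is listening on localhost:8080 with the resource path configured as above:
curl -i http://localhost:8080/v1.0/invoice/12345
The response body should contain the example JSON invoice defined in the RAML file.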
The RamlImporter.groovy class: Loads the RAML definition and uses the Java RAML Parser (see previous recipe) to build a Java representation of the definition. This Java representation is then traversed and used to create and configure the SoapUI framework objects (or Model items), that is, service definition, resource, mock, and its sample response. Once the plugin has done its work, everything else is standard SoapUI functionality! There's more... As you might have noticed under the REST service's menu or in the plugin source code, there are two other RAML menu options or Actions: Export RAML: This generates an RAML file from an existing SoapUI REST service; for example, you can design your RESTful API manually in a SoapUI REST project and then export an RAML definition for it. Update from RAML definition: Similar to the standard SOAP service's Update Definition, you could use this to update SoapUI project artifacts automatically using a newer version of the service's RAML definition. To understand more about developing custom SoapUI Actions, Model items, and plugins, Chapters 10 and 11 of the SoapUI Cookbook should hopefully help, as also the SoapUI online docs: Custom Actions: http://www.soapui.org/extension-plugins/old-style-extensions/developing-old-style-extensions.html Model Items: http://www.soapui.org/scripting---properties/the-soapui-object-model.html See also The blog article of the creator of the RAML plugin and the cofounder of SoapUI can be found at http://olensmar.blogspot.se/2013/12/a-raml-apihub-plugin-for-soapui.html The RAML plugin source code can be found at https://github.com/olensmar/soapui-raml-plugin There are many excellent RAML tools available for download at http://raml.org/projects.html For more information on SoapUI Mocks, see Chapter 3, Developing and Deploying Dynamic REST and SOAP Mocks of the SoapUI Cookbook Testing response values using JsonSlurper Now that we have a JSON invoice response, let's look at some options of testing it: XPath: Because of the way SoapUI stores the response, it is possible to use XPath Assertions, for example, to check whether the company is "Test Company": We could certainly use this approach, but to me, it doesn't seem completely appropriate to test JSON values! JSONPath Assertions (SoapUI Pro/NG only): The commercial versions of SoapUI have several types of JSONPath Assertion. This is a nice option if you've got it, but we'll only use open source SoapUI for this article. Script Assertion: When you want to assert something in a way that isn't available, there's always the (Groovy) Script Assertion option! In this recipe, we'll use option 3 and create a Script Assertion using JsonSlurper (http://groovy.codehaus.org/gapi/groovy/json/JsonSlurper.html) to parse and assert the invoice values that we expect to see in the response. How to do it.. First, we'll need to add a SoapUI TestSuite, TestCase, and REST Test Request TestStep to our invoice project. Then, we'll add a new Script Assertion to TestStep to parse and assert that the invoice response values are according to what we expect. Right-click on the Invoice API service and select New TestSuite and enter a name, for example, TestSuite. Right-click on the TestSuite and select New TestCase and enter a name, for example, TestCase. Right-click on TestCase:     Select Add TestStep | REST Test Request.     Enter a name, for example, Get Invoice.     When prompted to select REST method to invoke for request, choose Invoice API -> /{id} -> get -> Request 1.     
This should create a preconfigured REST Test Request TestStep, like the one in the following screenshot. Open the Assertions tab:     Right-click on the Assertions tab.     Select Add Assertion | Script Assertion.     Paste the following Groovy script: import groovy.json.JsonSlurper def responseJson = messageExchange.response.contentAsString def slurper = new JsonSlurper().parseText(responseJson) assert slurper.invoice.id=='12345' assert slurper.invoice.companyName=='Test Company' assert slurper.invoice.amount==100.0 Start Invoice API MockService if it isn't running. Now, if you run REST Test Request TestStep, the Script Assertion should pass, and you should see something like this: Optionally, to make sure the Script Assertion is checking the values, do the following:     Edit Response 200 in Invoice API MockService.     Change the id value from 12345 to 54321.     Rerun the Get Invoice TestStep, and you should now see an Assertion failure like this: com.eviware.soapui.impl.wsdl.teststeps.assertions.basic.GroovyScriptAssertion@6fe0c5f3assert slurper.invoice.id=='12345'       |       |       | |       |       |       | false       |       |       54321       |       [amount:100.0, id:54321, companyName:Test Company]       [invoice:[amount:100.0, id:54321, companyName:Test Company]] How it works JsonSlurper is packaged as a part of the standard groovy-all-x.x.x,jar that SoapUI uses, so there is no need to add any additional libraries to use it in Groovy scripts. However, we do need an import statement for the JsonSlurper class to use it in our Script Assertion, as it is not part of the Groovy language itself. Similar to Groovy TestSteps, Script Assertions have access to special variables that SoapUI populates and passes in when Assertion is executed. In this case, we use the messageExchange variable to obtain the response content as a JSON String. Then, JsonSlurper is used to parse the JSON String into a convenient object model for us to query and use in standard Groovy assert statements. There's more Another very similar option would have been to create a Script Assertion to use JsonPath (https://code.google.com/p/json-path/). However, since JsonPath is not a standard Groovy library, we would have needed to add its JAR files to the <SoapUI home>/java/app/bin/ext folder, like the RAML Parser in the previous recipe. See also You may also want to check the response for JSON schema compliance. For an example of how to do this, see the Testing response compliance using JSON schemas recipe in Chapter 4, Web Service Test Scenarios of the SoapUI Cookbook. To understand more about SoapUI Groovy scripting, the SoapUI Cookbook has numerous examples explained throughout its chapters, and explores many common use-cases when scripting the SoapUI framework. Some interesting SoapUI Groovy scripting examples can also be found at http://www.soapui.org/scripting---properties/tips---tricks.html. Summary In this article, you learned about installing the SoapUI RAML plugin, generating a SoapUI REST project and mock service using the RAML plugin, and testing response values using JSON Slurper. Resources for Article: Further resources on this subject: SOAP and PHP 5 [article] Web Services Testing and soapUI [article] ADempiere 3.6: Building and Configuring Web Services [article]

Packt
25 Feb 2015
14 min read
Save for later

The Raspberry Pi and Raspbian

In this article by William Harrington, author of the book Learning Raspbian, we will learn about the Raspberry Pi, the Raspberry Pi Foundation and Raspbian, the official Linux-based operating system of Raspberry Pi. In this article, we will cover: (For more resources related to this topic, see here.) The Raspberry Pi History of the Raspberry Pi The Raspberry Pi hardware The Raspbian operating system Raspbian components The Raspberry Pi Despite first impressions, the Raspberry Pi is not a tasty snack. The Raspberry Pi is a small, powerful, and inexpensive single board computer developed over several years by the Raspberry Pi Foundation. If you are a looking for a low cost, small, easy-to-use computer for your next project, or are interested in learning how computers work, then the Raspberry Pi is for you. The Raspberry Pi was designed as an educational device and was inspired by the success of the BBC Micro for teaching computer programming to a generation. The Raspberry Pi Foundation set out to do the same in today's world, where you don't need to know how to write software to use a computer. At the time of printing, the Raspberry Pi Foundation had shipped over 2.5 million units, and it is safe to say that they have exceeded their expectations! The Raspberry Pi Foundation The Raspberry Pi Foundation is a not-for-profit charity and was founded in 2006 by Eben Upton, Rob Mullins, Jack Lang, and Alan Mycroft. The aim of this charity is to promote the study of computer science to a generation that didn't grow up with the BBC Micro or the Commodore 64. They became concerned about the lack of devices that a hobbyist could use to learn and experiment with. The home computer was often ruled out, as it was so expensive, leaving the hobbyist and children with nothing to develop their skills with. History of the Raspberry Pi Any new product goes through many iterations before mass production. In the case of the Raspberry Pi, it all began in 2006 when several concept versions of the Raspberry Pi based on the Atmel 8-bit ATMega664 microcontroller were developed. Another concept based on a USB memory stick with an ARM processor (similar to what is used in the current Raspberry Pi) was created after that. It took six years of hardware development to create the Raspberry Pi that we know and love today! The official logo of the Raspberry Pi is shown in the following screenshot: It wasn't until August 2011 when 50 boards of the Alpha version of the Raspberry Pi were built. These boards were slightly larger than the current version to allow the Raspberry Pi Foundation, to debug the device and confirm that it would all work as expected. Twenty-five beta versions of the Raspberry Pi were assembled in December 2011 and auctioned to raise money for the Raspberry Pi Foundation. Only a single small error with these was found and corrected for the first production run. The first production run consisted of 10,000 boards of Raspberry Pi manufactured overseas in China and Taiwan. Unfortunately, there was a problem with the Ethernet jack on the Raspberry Pi being incorrectly substituted with an incompatible part. This led to some minor shipping delays, but all the Raspberry Pi boards were delivered within weeks of their due date. As a bonus, the foundation was able to upgrade the Model A Raspberry Pi to 256 MB of RAM instead of the 128 MB that was planned. This upgrade in memory size allowed the Raspberry Pi to perform even more amazing tasks, such as real-time image processing. 
The Raspberry Pi is now manufactured in the United Kingdom, leading to the creation of many new jobs. The release of the Raspberry Pi was met with great fanfare, and the two original retailers of the Raspberry Pi, Premier Farnell and RS Components, sold out of the first batch within minutes.
The Raspberry Pi hardware
At the heart of the Raspberry Pi is the powerful Broadcom BCM2835 "system on a chip". The BCM2835 is similar to the chip at the heart of almost every smartphone and set top box in the world that uses the ARM architecture. The BCM2835 CPU on the Raspberry Pi runs at 700 MHz and its performance is roughly equivalent to a 300 MHz Pentium II computer that was available back in 1999. To put this in perspective, the guidance computer used in the Apollo missions was less powerful than a pocket calculator!
Block diagram of the Raspberry Pi
The Raspberry Pi comes with either 256 MB or 512 MB of RAM, depending on which model you buy. Hopefully, this will increase in future versions!
Graphic capabilities
Graphics in the Raspberry Pi are provided by a VideoCore 4 GPU. The graphic performance of the graphics processing unit (GPU) is roughly equivalent to that of the original Xbox, launched in 2001, which cost many hundreds of dollars. These might seem like very low specifications, but they are enough to play Quake 3 at 1080p and full HD movies. There are two ways to connect a display to the Raspberry Pi. The first is using a composite video cable and the second is using HDMI. The composite output is useful as you are able to use any old TV as a monitor. The HDMI output is recommended, however, as it provides superior video quality. A VGA connection is not provided on the Raspberry Pi as it would be cost prohibitive. However, it is possible to use an HDMI to VGA/DVI converter for users who have VGA or DVI monitors. The Raspberry Pi also supports an LCD touchscreen. An official version has not been released yet, although many unofficial ones are available. The Raspberry Pi Foundation says that they expect to release one this year.
The Raspberry Pi model
The Raspberry Pi has several different variants: the Model A and the Model B. The Model A is a low-cost version and unfortunately omits the USB hub chip. This chip also functions as a USB to Ethernet converter. The Raspberry Pi Foundation has also just released the Raspberry Pi Model B+, which has extra USB ports and resolves many of the power issues surrounding the Model B and the Model B USB ports.
Parameters        Model A         Model B         Model B+
CPU               BCM2835         BCM2835         BCM2835
RAM               256 MB          512 MB          512 MB
USB Ports         1               2               4
Ethernet Ports    0               1               1
Price (USD)       ~$25            ~$35            ~$35
Available since   February 2012   February 2012   July 2014
Differences between the Raspberry Pi models
Did you know that the Raspberry Pi is so popular that if you search for raspberry pie in Google, they will actually show you results for the Raspberry Pi!
Accessories
The success of the Raspberry Pi has encouraged many other groups to design accessories for the Raspberry Pi, and users to use them. These accessories range from a camera to a controller for an automatic CNC machine. Some of these accessories include:
Accessories           Links
Raspberry Pi camera   http://www.raspberrypi.org/tag/camera-board/
VGA board             http://www.suptronics.com/RPI.html
CNC Controller        http://code.google.com/p/picnc/
Autopilot             http://www.emlid.com/
Case                  http://shortcrust.net/
Raspbian
No matter how good the hardware of the Raspberry Pi is, without an operating system it is just a piece of silicon, fiberglass, and a few other materials.
There are several different operating systems for the Raspberry Pi, including RISC OS, Pidora, Arch Linux, and Raspbian. Currently, Raspbian is the most popular Linux-based operating system for the Raspberry Pi. Raspbian is an open source operating system based on Debian, which has been modified specifically for the Raspberry Pi (thus the name Raspbian). Raspbian includes customizations that are designed to make the Raspberry Pi easier to use and includes many different software packages out of the box. Raspbian is designed to be easy to use and is the recommended operating system for beginners to start off with their Raspberry Pi. Debian The Debian operating system was created in August 1993 by Ian Murdock and is one of the original distributions of Linux. As Raspbian is based on the Debian operating system, it shares almost all the features of Debian, including its large repository of software packages. There are over 35,000 free software packages available for your Raspberry Pi, and they are available for use right now! An excellent resource for more information on Debian, and therefore Raspbian, is the Debian administrator's handbook. The handbook is available at http://debian-handbook.info. Open source software The majority of the software that makes up Raspbian on the Raspberry Pi is open source. Open source software is a software whose source code is available for modification or enhancement by anyone. The Linux kernel and most of the other software that makes up Raspbian is licensed under the GPLv2 License. This means that the software is made available to you at no cost, and that the source code that makes up the software is available for you to do what you want to. The GPLV2 license also removes any claim or warranty. The following extract from the GPLV2 license preamble gives you a good idea of the spirit of free software: "The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users…. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things." Raspbian components There are many components that make up a modern Linux distribution. These components work together to provide you with all the modern features you expect in a computer. There are several key components that Raspbian is built from. These components are: The Raspberry Pi bootloader The Linux kernel Daemons The shell Shell utilities The X.Org graphical server The desktop environment The Raspberry Pi bootloader When your Raspberry Pi is powered on, lot of things happen behind the scene. The role of the bootloader is to initialize the hardware in the Raspberry Pi to a known state, and then to start loading the Linux kernel. In the case of the Raspberry Pi, this is done by the first and second stage bootloaders. The first stage bootloader is programmed into the ROM of the Raspberry Pi during manufacture and cannot be modified. The second and third stage bootloaders are stored on the SD card and are automatically run by the previous stage bootloader. 
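On a running Raspbian system you can actually see the bootloader stages, because the second and third stage files live on the FAT partition that is mounted at /boot. The exact file list varies between releases, but you should see something like the following:
ls /boot
# bootcode.bin   - second stage bootloader loaded by the on-chip ROM
# start.elf      - GPU firmware that acts as the next stage
# config.txt     - firmware and boot configuration
# cmdline.txt    - kernel command line
# kernel.img     - the Linux kernel image that is started next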
The Linux kernel The Linux kernel is one of the most fundamental parts of Raspbian. It manages every part of the operation of your Raspberry Pi, from displaying text on the screen to receiving keystrokes when you type on your keyboard. The Linux kernel was created by Linus Torvalds, who started working on the kernel in April 1991. Since then, groups of volunteers and organizations have worked together to continue the development of the kernel and make it what it is today. Did you know that the cost to rewrite the Linux kernel to where it was in 2011 would be over $3 billion USD? If you want to use a hardware device by connecting it to your Raspberry Pi, the kernel needs to know what it is and how to use it. The vast majority of devices on the market are supported by the Linux kernel, with more being added all the time. A good example of this is when you plug a USB drive into your Raspberry Pi. In this case, the kernel automatically detects the USB drive and notifies a daemon that automatically makes the files available to you. When the kernel has finished loading, it automatically runs a program called init. This program is designed to finish the initialization of the Raspberry Pi, and then to load the rest of the operating system. This program starts by loading all the daemons into the background, followed by the graphical user interface. Daemons A daemon is a piece of software that runs behind the scenes to provide the operating system with different features. Some examples of a daemon include the Apache web server, Cron, a job scheduler that is used to run programs automatically at different times, and Autofs, a daemon that automatically mounts removable storage devices such as USB drives. A distribution such as Raspbian needs more than just the kernel to work. It also needs other software that allows the user to interact with the kernel, and to manage the rest of the operating system. The core operating system consists of a collection of programs and scripts that make this happen. The shell After all the daemons have loaded, init launches a shell. A shell is an interface to your Raspberry Pi that allows you to monitor and control it using commands typed in using a keyboard. Don't be fooled by this interface, despite the fact that it looks exactly like what was used in computers 30 years ago. The shell is one of the most powerful parts of Raspbian. There are several shells available in Linux. Raspbian uses the Bourne again shell (bash) This shell is by far the most common shell used in Linux. Bash is an extremely powerful piece of software. One of bash's most powerful features is its ability to run scripts. A script is simply a collection of commands stored in a file that can do things, such as run a program, read keys from the keyboard, and many other things. Later on in this book, you will see how to use bash to make the most from your Raspberry Pi! Shell utilities A command interpreter is not much of use without any commands to run. While bash provides some very basic commands, all the other commands are shell utilities. These shell utilities together form one of the important parts of Raspbian (essential as without the utilities, the system would crash). They provide many features that range from copying files, creating directories, to the Advanced Packaging Tool (APT) – a package manager application that allows you to install and remove software from your Raspberry Pi. You will learn more about APT later in this book. 
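As a small example of the shell and the shell utilities working together, the following commands use APT from a Raspbian terminal to refresh the package lists and install an extra utility (htop is just an example package; any of the thousands of packages in the repository would do):
sudo apt-get update
sudo apt-get install htop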
The X.Org graphical server After the shell and daemons are loaded, by default the X.Org graphical server is automatically started. The role of X.Org is to provide you with a common platform from which to build a graphical user interface. X.Org handles everything from moving your mouse pointer, listening, and responding to your key presses to actually drawing the applications you are running onto the screen. The desktop environment It is difficult to use any computer without a desktop environment. A desktop environment lets you interact with your computer using more than just your keyboard, surf the Internet, view pictures and movies, and many other things. A GUI normally uses Windows, menus, and a mouse to do this. Raspbian includes a graphical user interface called Lightweight X11 Desktop Environment or LXDE. LXDE is used in Raspbian as it was specifically designed to run on devices such as the Raspberry Pi, which only have limited resources. Later in this book, you will learn how to customize and use LXDE to make the most of your Raspberry Pi.                 A screenshot of the LXDE desktop environment Summary This article introduces the Raspberry Pi and Raspbian. It discusses the history, hardware and components of the Raspberry Pi. This article is a great resource for understanding the Raspberry Pi. Resources for Article: Further resources on this subject: Raspberry Pi Gaming Operating Systems? [article] Learning Nservicebus Preparing Failure [article] Clusters Parallel Computing and Raspberry Pi Brief Background [article]
Packt
25 Feb 2015
18 min read
Save for later

Agile data modeling with Neo4j

In this article by Sumit Gupta, author of the book Neo4j Essentials, we will discuss data modeling in Neo4j, which is evolving and flexible enough to adapt to changing business requirements. It captures the new data sources, entities, and their relationships as they naturally occur, allowing the database to easily adapt to the changes, which in turn results in an extremely agile development and provides quick responsiveness to changing business requirements. (For more resources related to this topic, see here.) Data modeling is a multistep process and involves the following steps: Define requirements or goals in the form of questions that need to be executed and answered by the domain-specific model. Once we have our goals ready, we can dig deep into the data and identify entities and their associations/relationships. Now, as we have our graph structure ready, the next step is to form patterns from our initial questions/goals and execute them against the graph model. This whole process is applied in an iterative and incremental manner, similar to what we do in agile, and has to be repeated again whenever we change our goals or add new goals/questions, which need to be answered by your graph model. Let's see in detail how data is organized/structured and implemented in Neo4j to bring in the agility of graph models. Based on the principles of graph data structure available at http://en.wikipedia.org/wiki/Graph_(abstract_data_type), Neo4j implements the property graph data model at storage level, which is efficient, flexible, adaptive, and capable of effectively storing/representing any kind of data in the form of nodes, properties, and relationships. Neo4j not only implements the property graph model, but has also evolved the traditional model and added the feature of tagging nodes with labels, which is now referred to as the labeled property graph. Essentially, in Neo4j, everything needs to be defined in either of the following forms: Nodes: A node is the fundamental unit of a graph, which can also be viewed as an entity, but based on a domain model, it can be something else too. Relationships: These defines the connection between two nodes. Relationships also have types, which further differentiate relationships from one another. Properties: Properties are viewed as attributes and do not have their own existence. They are related either to nodes or to relationships. Nodes and relationships can both have their own set of properties. Labels: Labels are used to construct a group of nodes into sets. Nodes that are labeled with the same label belong to the same set, which can be further used to create indexes for faster retrieval, mark temporary states of certain nodes, and there could be many more, based on the domain model. Let's see how all of these four forms are related to each other and represented within Neo4j. A graph essentially consists of nodes, which can also have properties. Nodes are linked to other nodes. The link between two nodes is known as a relationship, which also can have properties. Nodes can also have labels, which are used for grouping the nodes. Let's take up a use case to understand data modeling in Neo4j. John is a male and his age is 24. He is married to a female named Mary whose age is 20. John and Mary got married in 2012. Now, let's develop the data model for the preceding use case in Neo4j: John and Mary are two different nodes. Marriage is the relationship between John and Mary. Age of John, age of Mary, and the year of their marriage become the properties. 
Male and Female become the labels. Easy, simple, flexible, and natural… isn't it? The data structure in Neo4j is adaptive and effectively can model everything that is not fixed and evolves over a period of time. The next step in data modeling is fetching the data from the data model, which is done through traversals. Traversals are another important aspect of graphs, where you need to follow paths within the graph starting from a given node and then following its relationships with other nodes. Neo4j provides two kinds of traversals: breadth first available at http://en.wikipedia.org/wiki/Breadth-first_search and depth first available at http://en.wikipedia.org/wiki/Depth-first_search. If you are from the RDBMS world, then you must now be wondering, "What about the schema?" and you will be surprised to know that Neo4j is a schemaless or schema-optional graph database. We do not have to define the schema unless we are at a stage where we want to provide some structure to our data for performance gains. Once performance becomes a focus area, then you can define a schema and create indexes/constraints/rules over data. Unlike the traditional models where we freeze requirements and then draw our models, Neo4j embraces data modeling in an agile way so that it can be evolved over a period of time and is highly responsive to the dynamic and changing business requirements. Read-only Cypher queries In this section, we will discuss one of the most important aspects of Neo4j, that is, read-only Cypher queries. Read-only Cypher queries are not only the core component of Cypher but also help us in exploring and leveraging various patterns and pattern matching constructs. It either begins with MATCH, OPTIONAL MATCH, or START, which can be used in conjunction with the WHERE clause and further followed by WITH and ends with RETURN. Constructs such as ORDER BY, SKIP, and LIMIT can also be used with WITH and RETURN. We will discuss in detail about read-only constructs, but before that, let's create a sample dataset and then we will discuss constructs/syntax of read-only Cypher queries with illustration. Creating a sample dataset – movie dataset Let's perform the following steps to clean up our Neo4j database and insert some data which will help us in exploring various constructs of Cypher queries: Open your Command Prompt or Linux shell and open the Neo4j shell by typing <$NEO4J_HOME>/bin/neo4j-shell. Execute the following commands on your Neo4j shell for cleaning all the previous data: //Delete all relationships between Nodes MATCH ()-[r]-() delete r; //Delete all Nodes MATCH (n) delete n; Now we will create a sample dataset, which will contain movies, artists, directors, and their associations. 
Execute the following set of Cypher queries in your Neo4j shell to create the list of movies:

CREATE (:Movie {Title : 'Rocky', Year : '1976'});
CREATE (:Movie {Title : 'Rocky II', Year : '1979'});
CREATE (:Movie {Title : 'Rocky III', Year : '1982'});
CREATE (:Movie {Title : 'Rocky IV', Year : '1985'});
CREATE (:Movie {Title : 'Rocky V', Year : '1990'});
CREATE (:Movie {Title : 'The Expendables', Year : '2010'});
CREATE (:Movie {Title : 'The Expendables II', Year : '2012'});
CREATE (:Movie {Title : 'The Karate Kid', Year : '1984'});
CREATE (:Movie {Title : 'The Karate Kid II', Year : '1986'});

Execute the following set of Cypher queries in your Neo4j shell to create the list of artists:

CREATE (:Artist {Name : 'Sylvester Stallone', WorkedAs : ["Actor", "Director"]});
CREATE (:Artist {Name : 'John G. Avildsen', WorkedAs : ["Director"]});
CREATE (:Artist {Name : 'Ralph Macchio', WorkedAs : ["Actor"]});
CREATE (:Artist {Name : 'Simon West', WorkedAs : ["Director"]});

Execute the following set of Cypher queries in your Neo4j shell to create the relationships between artists and movies:

Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky"}) CREATE (artist)-[:ACTED_IN {Role : "Rocky Balboa"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky II"}) CREATE (artist)-[:ACTED_IN {Role : "Rocky Balboa"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky III"}) CREATE (artist)-[:ACTED_IN {Role : "Rocky Balboa"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky IV"}) CREATE (artist)-[:ACTED_IN {Role : "Rocky Balboa"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky V"}) CREATE (artist)-[:ACTED_IN {Role : "Rocky Balboa"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "The Expendables"}) CREATE (artist)-[:ACTED_IN {Role : "Barney Ross"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "The Expendables II"}) CREATE (artist)-[:ACTED_IN {Role : "Barney Ross"}]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky II"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky III"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "Rocky IV"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "Sylvester Stallone"}), (movie:Movie {Title: "The Expendables"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "John G. Avildsen"}), (movie:Movie {Title: "Rocky"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "John G. Avildsen"}), (movie:Movie {Title: "Rocky V"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "John G. Avildsen"}), (movie:Movie {Title: "The Karate Kid"}) CREATE (artist)-[:DIRECTED]->(movie);
Match (artist:Artist {Name : "John G. Avildsen"}), (movie:Movie {Title: "The Karate Kid II"}) CREATE (artist)-[:DIRECTED]->(movie);
Avildsen"}), (movie:Movie {Title: "The Karate Kid II"}) CREATE (artist)-[:DIRECTED]->(movie); Match (artist:Artist {Name : "Ralph Macchio"}), (movie:Movie {Title: "The Karate Kid"}) CREATE (artist)-[:ACTED_IN {Role:"Daniel LaRusso"}]->(movie); Match (artist:Artist {Name : "Ralph Macchio"}), (movie:Movie {Title: "The Karate Kid II"}) CREATE (artist)-[:ACTED_IN {Role:"Daniel LaRusso"}]->(movie); Match (artist:Artist {Name : "Simon West"}), (movie:Movie {Title: "The Expendables II"}) CREATE (artist)-[:DIRECTED]->(movie); Next, browse your data through the Neo4j browser. Click on Get some data from the left navigation pane and then execute the query by clicking on the right arrow sign that will appear on the extreme right corner just below the browser navigation bar, and it should look something like this: Now, let's understand the different pieces of read-only queries and execute those against our movie dataset. Working with the MATCH clause MATCH is the most important clause used to fetch data from the database. It accepts a pattern, which defines "What to search?" and "From where to search?". If the latter is not provided, then Cypher will scan the whole tree and use indexes (if defined) in order to make searching faster and performance more efficient. Working with nodes Let's start asking questions from our movie dataset and then form Cypher queries, execute them on <$NEO4J_HOME>/bin/neo4j-shell against the movie dataset and get the results that will produce answers to our questions: How do we get all nodes and their properties? Answer: MATCH (n) RETURN n; Explanation: We are instructing Cypher to scan the complete database and capture all nodes in a variable n and then return the results, which in our case will be printed on our Neo4j shell. How do we get nodes with specific properties or labels? Answer: Match with label, MATCH (n:Artist) RETURN n; or MATCH (n:Movies) RETURN n; Explanation: We are instructing Cypher to scan the complete database and capture all nodes, which contain the value of label as Artist or Movies. Answer: Match with a specific property MATCH (n:Artist {WorkedAs:["Actor"]}) RETURN n; Explanation: We are instructing Cypher to scan the complete database and capture all nodes that contain the value of label as Artist and the value of property WorkedAs is ["Actor"]. Since we have defined the WorkedAs collection, we need to use square brackets, but in all other cases, we should not use square brackets. We can also return specific columns (similar to SQL). For example, the preceding statement can also be formed as MATCH (n:Artist {WorkedAs:["Actor"]}) RETURN n.name as Name;. Working with relationships Let's understand the process of defining relationships in the form of Cypher queries in the same way as you did in the previous section while working with nodes: How do we get nodes that are associated or have relationships with other nodes? Answer: MATCH (n)-[r]-(n1) RETURN n,r,n1; Explanation: We are instructing Cypher to scan the complete database and capture all nodes, their relationships, and nodes with which they have relationships in variables n, r, and n1 and then further return/print the results on your Neo4j shell. Also, in the preceding query, we have used - and not -> as we do not care about the direction of relationships that we retrieve. How do we get nodes, their associated properties that have some specific type of relationship, or the specific property of a relationship? 
Answer: MATCH (n)-[r:ACTED_IN {Role : "Rocky Balboa"}]->(n1) RETURN n,r,n1; Explanation: We are instructing Cypher to scan the complete database and capture all nodes, their relationships, and nodes, which have a relationship as ACTED_IN and with the property of Role as Rocky Balboa. Also, in the preceding query, we do care about the direction (incoming/outgoing) of a relationship, so we are using ->. For matching multiple relations replace [r:ACTED_IN] with [r:ACTED_IN | DIRECTED] and use single quotes or escape characters wherever there are special characters in the name of relationships. How do we get a coartist? Answer: MATCH (n {Name : "Sylvester Stallone"})-[r]->(x)<-[r1]-(n1) return n.Name as Artist,type(r),x.Title as Movie, type(r1), n1.Name as Artist2; Explanation: We are trying to find out all artists that are related to Sylvester Stallone in some manner or the other. Once you run the preceding query, you will see something like the following image, which should be self-explanatory. Also, see the usage of as and type. as is similar to the SQL construct and is used to define a meaningful name to the column presenting the results, and type is a special keyword that gives the type of relationship between two nodes. How do we get the path and number of hops between two nodes? Answer: MATCH p = (:Movie{Title:"The Karate Kid"})-[:DIRECTED*0..4]-(:Movie{Title:"Rocky V"}) return p; Explanation: Paths are the distance between two nodes. In the preceding statement, we are trying to find out all paths between two nodes, which are between 0 (minimum) and 4 (maximum) hops away from each other and are only connected through the relationship DIRECTED. You can also find the path, originating only from a particular node and re-write your query as MATCH p = (:Movie{Title:"The Karate Kid"})-[:DIRECTED*0..4]-() return p;. Integration of the BI tool – QlikView In this section, we will talk about the integration of the BI tool—QlikView with Neo4j. QlikView is available only on the Windows platform, so this section is only applicable for Windows users. Neo4j as an open source database exposes its core APIs for developers to write plugins and extends its intrinsic capabilities. Neo4j JDBC is one such plugin that enables the integration of Neo4j with various BI / visualization and ETL tools such as QlikView, Jaspersoft, Talend, Hive, Hbase, and many more. Let's perform the following steps for integrating Neo4j with QlikView on Windows: Download, install, and configure the following required software: Download the Neo4j JDBC driver directly from https://github.com/neo4j-contrib/neo4j-jdbc as the source code. You need to compile and create a JAR file or you can also directly download the compiled sources from http://dist.neo4j.org/neo4j-jdbc/neo4j-jdbc-2.0.1-SNAPSHOT-jar-with-dependencies.jar. Depending upon your Windows platform (32 bit or 64 bit), download the QlikView Desktop Personal Edition. In this article, we will be using QlikView Desktop Personal Edition 11.20.12577.0 SR8. Install QlikView and follow the instructions as they appear on your screen. After installation, you will see the QlikView icon in your Windows start menu. QlikView leverages QlikView JDBC Connector for integration with JDBC data sources, so our next step would be to install QlikView JDBC Connector. 
Let's perform the following steps to install and configure the QlikView JDBC Connector: Download QlikView JDBC Connector either from http://www.tiq-solutions.de/display/enghome/ENJDBC or from https://s3-eu-west-1.amazonaws.com/tiq-solutions/JDBCConnector/JDBCConnector_Setup.zip. Open the downloaded JDBCConnector_Setup.zip file and install the provided connector. Once the installation is complete, open JDBC Connector from your Windows Start menu and click on Active for a 30-day trial (if you haven't already done so during installation). Create a new profile of the name Qlikview-Neo4j and make it your default profile. Open the JVM VM Options tab and provide the location of jvm.dll, which can be located at <$JAVA_HOME>/jre/bin/client/jvm.dll. Click on Open Log-Folder to check the logs related to the Database connections. You can configure the Logging Level and also define the JVM runtime options such as -Xmx and -Xms in the textbox provided for Option in the preceding screenshot. Browse through the JDBC Driver tab, click on Add Library, and provide the location of your <$NEO4J-JDBC driver>.jar, and add the dependent JAR files. Instead of adding individual libraries, we can also add a folder containing the same list of libraries by clicking on the Add Folder option. We can also use non JDBC-4 compliant drivers by mentioning the name of the driver class in the Advanced Settings tab. There is no need to do that, however, if you are setting up a configuration profile that uses a JDBC-4 compliant driver. Open the License Registration tab and request Online Trial license, which will be valid for 30 days. Assuming that you are connected to the Internet, the trial license will be applied immediately. Save your settings and close QlikView JDBC Connector configuration. Open <$NEO4J_HOME>binneo4jshell and execute the following set of Cypher statements one by one to create sample data in your Neo4j database; then, in further steps, we will visualize this data in QlikView: CREATE (movies1:Movies {Title : 'Rocky', Year : '1976'}); CREATE (movies2:Movies {Title : 'Rocky II', Year : '1979'}); CREATE (movies3:Movies {Title : 'Rocky III', Year : '1982'}); CREATE (movies4:Movies {Title : 'Rocky IV', Year : '1985'}); CREATE (movies5:Movies {Title : 'Rocky V', Year : '1990'}); CREATE (movies6:Movies {Title : 'The Expendables', Year : '2010'}); CREATE (movies7:Movies {Title : 'The Expendables II', Year : '2012'}); CREATE (movies8:Movies {Title : 'The Karate Kid', Year : '1984'}); CREATE (movies9:Movies {Title : 'The Karate Kid II', Year : '1986'}); Open the QlikView Desktop Personal Edition and create a new view by navigating to File | New. The Getting Started wizard may appear as we are creating a new view. Close this wizard. Navigate to File | EditScript and change your database to JDBCConnector.dll (32). In the same window, click on Connect and enter "jdbc:neo4j://localhost:7474/" in the "url" box. Leave the username and password as empty and click on OK. You will see that a CUSTOM CONNECT TO statement is added in the box provided. Next insert the highlighted Cypher statements in the provided window just below the CUSTOM CONNECT TO statement. Save the script and close the EditScript window. Now, on your Qlikviewsheet, execute the script by pressing Ctrl + R on our keyboard. Next, add a new TableObject on your Qlikviewsheet, select "MovieTitle" from the provided fields and click on OK. And we are done!!!! You will see the data appearing in the listbox in the newly created Table Object. 
The data is fetched from the Neo4j database and QlikView is used to render this data. The same process is used for connecting to other JDBC-compliant BI / visualization / ETL tools such as Jasper, Talend, Hive, HBase, and so on; we just need to define the appropriate JDBC Type-4 drivers in JDBC Connector. We can also use the ODBC-JDBC Bridge provided by EasySoft at http://www.easysoft.com/products/data_access/odbc_jdbc_gateway/index.html. EasySoft provides the ODBC-JDBC Gateway, which facilitates ODBC access from applications such as MS Access, MS Excel, Delphi, and C++ to Java databases. It is a fully functional ODBC 3.5 driver that allows you to access any JDBC data source from any ODBC-compatible application. Summary In this article, you have learned the basic concepts of data modeling in Neo4j, explored read-only Cypher queries, and walked through the process of BI integration with Neo4j. Resources for Article: Further resources on this subject: Working Neo4j Embedded Database [article] Introducing Kafka [article] First Steps With R [article]

Christmas Light Sequencer

In this article, Sai Yamanoor and Srihari Yamanoor, authors of the book Raspberry Pi Mechatronics Projects Hotshot, have picked a Christmas-themed project to demonstrate controlling appliances connected to a local network using Raspberry Pi. We will design automation and control of Christmas lights in our homes. We decorate our homes with lights for festive occasions, and this article enables us to build fantastic projects around those decorations. We will build a local server to control the devices, and we will use the web.py framework to design the web server. We'd like to dedicate this article to the memory of Aaron Swartz, who was the founder of the web.py framework.

Mission briefing

In this article, we will set up local web server-based control of the GPIO pins on the Raspberry Pi, and we will use this web server framework to control the appliances via a web page. The Raspberry Pi on top of the tree is just an ornament for decoration.

Why is it awesome?

We celebrate festive occasions by decorating our homes. The decorations reflect our spirit, and they can be enhanced using Raspberry Pi. This article involves interfacing AC-powered devices to Raspberry Pi. You should exercise extreme caution while interfacing the devices, and it is strongly recommended that you stick to the recommended devices.

Your objectives

In this article, we will work on the following aspects:

Interface of the Christmas tree lights and other decorative equipment to the Raspberry Pi
Set up the digitally-addressable RGB matrix
Interface of an audio device
Setting up the web server
Interfacing devices to the web server

Mission checklist

This article is based on a broad concept. You are free to choose decorative items of your own interest. We chose the following items for demonstration:

Christmas tree x 1: 30 USD
Outdoor decoration (optional): 30 USD
Santa Claus figurine x 1: 20 USD
Digitally addressable strip x 1: 30 USD approximately
Power Switch Tail 2 from Adafruit Industries (http://www.adafruit.com/product/268): 25 USD approximately
Arduino Uno (any variant): 20 to 30 USD approximately

Interface the devices to the Raspberry Pi

It is important to exercise caution while connecting electrical appliances to the Raspberry Pi. If you don't know what you are doing, please skip this section. Adult supervision is required while connecting appliances.

In this task, we will look into interfacing decorative appliances (operated with an AC power supply) such as the Christmas tree. It is important to interface AC appliances to the Raspberry Pi in accordance with safety practices. It is possible to connect AC appliances to the Raspberry Pi using solid state relays; however, if the prototype boards aren't connected properly, this is a potential hazard. Hence, we use the Power Switch Tail II sold by Adafruit Industries. The Power Switch Tail II is rated for 110V. According to the specifications provided on the Adafruit website, the Power Switch Tail's relay can switch up to 15A resistive loads, and it can be controlled by providing a 3-12V DC signal. We will look into controlling the lights on a Christmas tree in this task. Power Switch Tail II – Photo courtesy: Adafruit.com

Prepare for lift off

We have to connect the Power Switch Tail II to the Raspberry Pi to test it. The following Fritzing schematic shows the connection of the switch to the Raspberry Pi using Pi Cobbler.
Pin 25 is connected to in+, while the in- pin is connected to the Ground pin of the Raspberry Pi. The Pi Cobbler breakout board is connected to the Raspberry Pi as shown in the following image: The Raspberry Pi connection to the Power Switch Tail II using Pi Cobbler

Engage thrusters

In order to test the device, there are two options to control the GPIO pins of the Raspberry Pi: the quick2wire GPIO library or the RPi.GPIO library. The main difference between the two is that the former does not require the Python script to be run with root user privileges (for those who are not familiar with root privileges, this means the Python script needs to be run using sudo). In the case of the RPi.GPIO library, it is possible to set the ownership of the pins to avoid executing the script as root. Once the installation is complete, let's turn the lights on the tree on and off with a three-second interval. The code for it is given as follows:

# Import the RPi.GPIO module.
import RPi.GPIO as GPIO
# Import the sleep function to introduce delays.
from time import sleep

# Set to BCM GPIO numbering.
GPIO.setmode(GPIO.BCM)
# BCM pin 25 is the output.
GPIO.setup(25, GPIO.OUT)
# Initialise pin 25 to low (False) so that the Christmas tree lights are switched off.
GPIO.output(25, False)

while 1:
    GPIO.output(25, False)
    sleep(3)
    GPIO.output(25, True)
    sleep(3)

In the preceding code, we get started by importing the RPi.GPIO module and the time module to introduce a delay between turning the lights on and off:

import RPi.GPIO as GPIO
# Import delay module
from time import sleep

We need to set the mode in which the GPIO pins are being used. There are two modes, namely the board's GPIO mode and the BCM GPIO mode (more information is available at http://sourceforge.net/p/raspberry-gpio-python/wiki/). The former refers to the pin numbers on the Raspberry Pi board, while the latter refers to the pin numbers found on the Broadcom chipset. In this example, we adopt the BCM chipset's pin description. We set pin 25 to be an output pin and set it to False so that the Christmas tree lights are switched off at the start of the program:

GPIO.setup(25, GPIO.OUT)
GPIO.output(25, False)

In the loop, we toggle the lights on and off with a three-second interval:

while 1:
    GPIO.output(25, True)
    sleep(3)
    GPIO.output(25, False)
    sleep(3)

When pin 25 is set to high, the device is turned on, and it is turned off when the pin is set to low, with a three-second interval between the two states.

Connecting multiple appliances to the Raspberry Pi

Let's consider a scenario where we have to control multiple appliances using the Raspberry Pi. It is possible to connect a maximum of 15 devices to the GPIO interface of the Raspberry Pi. (There are 17 GPIO pins on the Raspberry Pi Model B, but two of those pins, namely GPIO14 and 15, are set to be UART in the default state. This can be changed after startup. It is also possible to connect a GPIO expander to connect more devices to the Raspberry Pi.) In the case of appliances that need to be connected to the 110V AC mains, it is recommended that you use multiple Power Switch Tails to adhere to safety practices. In the case of decorative lights that operate using a battery (for example, a two-foot Christmas tree) or appliances that operate at low voltage levels of 12V DC, a simple transistor circuit and a relay can be used to connect the devices; the hardware side is shown after the short software sketch below.
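On the software side, driving several appliances is just a matter of repeating the single-pin pattern shown above for each GPIO pin in use. The following is a minimal sketch only; the pin numbers and appliance names are illustrative assumptions rather than wiring prescribed by this article:

import RPi.GPIO as GPIO

# Illustrative mapping of appliance names to BCM pin numbers (assumed, not prescribed).
APPLIANCES = {'tree': 25, 'santa': 24, 'outdoor': 23}

GPIO.setmode(GPIO.BCM)
for pin in APPLIANCES.values():
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, False)  # start with everything switched off

def switch(name, state):
    # Turn the named appliance on (True) or off (False).
    GPIO.output(APPLIANCES[name], state)

switch('tree', True)  # for example, light only the tree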
A sample circuit is shown in the figure that follows: A transistor switching circuit In the preceding circuit, since the GPIO pins operate at 3.3V levels, we will connect the GPIO pin to the base of the NPN transistor. The collector pin of the transistor is connected to one end of the relay. The transistor acts as a switch and when the GPIO pin is set to high, the collector is connected to the emitter (which in turn is connected to the ground) and hence, energizes the relay. Relays usually have three terminals, namely, the common terminal, Normally Open Terminal, and Normally Closed Terminal. When the relay is not energized, the common terminal is connected to the Normally Closed Terminal. Upon energization, the Normally Open Terminal is connected to the common terminal, thus turning on the appliance. The freewheeling diode across the relay is used to protect the circuit from any reverse current from the switching of the relays. The transistor switching circuit aids in operating an appliance that operates at 12V DC using the Raspberry Pi's GPIO pins (the GPIO pins of the Raspberry Pi operate at 3.3V levels). The relay and the transistor switching circuit enables controlling high current devices using the Raspberry Pi. It is possible to use an array of relays (as shown in the following image) and control an array of decorative lighting arrangements. It would be cool to control lighting arrangements according to the music that is being played on the Raspberry Pi (a project idea for the holidays!). The relay board (shown in the following image) operates at 5V DC and comes with the circuitry described earlier in this section. We can make use of the board by powering up the board using a 5V power supply and connecting the GPIO pins to the pins highlighted in red. As explained earlier, the relay can be energized by setting the GPIO pin to high. A relay board Objective complete – mini debriefing In this section, we discussed controlling decorative lights and other holiday appliances by running a Python script on the Raspberry Pi. Let's move on to the next section to set up the digitally addressable RGB LED strip! Setting up the digitally addressable RGB matrix In this task, we will talk about setting up options available for LED lighting. We will discuss two types of LED strips, namely analog RGB LED strips and digitally-addressable RGB LED strips. A sample of the digitally addressable RGB LED strip is shown in the image that follows: A digitally addressable RGB LED Strip Prepare for lift off As the name explains, digitally-addressable RGB LED strips are those where the colour of each RGB LED can be individually controlled (in the case of the analog strip, the colors cannot be individually controlled). Where can I buy them? There are different models of the digitally addressable RGB LED strips based on different chips such as LPD6803, LPD8806, and WS2811. The strips are sold in a reel of a maximum length of 5 meters. Some sources to buy the LED strips include Adafruit (http://www.adafruit.com/product/306) and Banggood (http://www.banggood.com/5M-5050-RGB-Dream-Color-6803-IC-LED-Strip-Light-Waterproof-IP67-12V-DC-p-931386.html) and they cost about 50 USD for a reel. Some vendors (including Adafruit) sell them in strips of one meter as well. Engage thrusters Let's review how to control and use these digitally-addressable RGB LED strips. How does it work? Most digitally addressable RGB strips come with terminals to powering the LEDs, a clock pin, and a data pin. 
The LEDs are serially connected to each other and are controlled through SPI (Serial Peripheral Interface). The RGB LEDs on the strip are controlled by a chip that latches data from the microcontroller/Raspberry Pi onto the LEDs with reference to the clock cycles received on the clock pin. In the case of the LPD8806 strip, each chip controls 2 LEDs, and it controls each channel of the RGB LED using a seven-bit PWM channel. More information on the workings of the RGB LED strip is available at https://learn.adafruit.com/digital-led-strip. It is possible to break the LED strip into individual segments. Each segment contains 2 LEDs, and Adafruit Industries has provided an excellent tutorial on separating the individual segments of the LED strip (https://learn.adafruit.com/digital-led-strip/advanced-separating-strips).

Lighting up the RGB LED strip

There are two ways of connecting the RGB LED strip: it can either be connected to an Arduino that is controlled by the Raspberry Pi, or it can be controlled by the Raspberry Pi directly.

An Arduino-based control

It is assumed that you are familiar with programming microcontrollers, especially those on the Arduino platform. An Arduino connection to the digitally addressable interface: in the preceding figure, the LED strip is powered by an external power supply (the tiny green adapter represents the external power supply; the recommended power supply for the RGB LED strip is 5V/2A per meter of LEDs, and while writing this article, we used an old computer power supply to power up the LEDs). The Clock pin (CI) and the Data pin (DI) of the first segment of the RGB strip are connected to pins D2 and D3 respectively. (We are doing this since we will test the example from Adafruit Industries, available at https://github.com/adafruit/LPD8806/tree/master/examples.) Since the RGB strip consists of multiple segments that are serially connected, the Clock Out (CO) and Data Out (DO) pins of the first segment are connected to the Clock In (CI) and Data In (DI) pins of the second segment, and so on.

Let's review the example, strandtest.pde, to test the RGB LED strip. The example makes use of software SPI (bit banging the clock and data pins for the lighting effects); it is also possible to use the hardware SPI interface of the Arduino platform. In the example, we need to set the number of LEDs used for the test. For example, we need to set the number of LEDs on the strip to 64 for a two-meter strip. Here is how to do this; the following line needs to be changed:

int nLEDs = 64;

Once the code is uploaded, the RGB matrix should light up, as shown in this image: 8 x 8 RGB matrix lit up

Let's quickly review the Arduino sketch from Adafruit. We get started by setting up an LPD8806 object as follows:

// nLEDs refers to the number of LEDs in the strip. This cannot exceed 160 LEDs (5m) due to current draw.
LPD8806 strip = LPD8806(nLEDs, dataPin, clockPin);

In the setup() section of the Arduino sketch, we initialize the RGB strip as follows:

// Start up the LED strip
strip.begin();
// Update the strip; to start, they are all 'off'
strip.show();

As soon as we enter the main loop, functions such as colorChase and rainbow are executed. We can make use of this Arduino sketch to implement serial port commands so that the lighting scripts can be controlled from the Raspberry Pi. This task merely provides some ideas for connecting and lighting up the RGB LED strip; you should familiarize yourself with the working principles of the RGB LED strip.
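Because the strip is clocked like an SPI device, it can also be driven without the Arduino, straight from the Raspberry Pi's hardware SPI pins (SCLK to CI, MOSI to DI). The following is only a rough sketch under stated assumptions: it assumes an LPD8806-based strip, the py-spidev module, and the common GRB byte order; the latch length and colour order vary between strip versions, so treat it as a starting point rather than a reference implementation:

import spidev

N_LEDS = 64
spi = spidev.SpiDev()
spi.open(0, 0)              # SPI bus 0, device 0 (CE0)
spi.max_speed_hz = 2000000

def show(pixels):
    # pixels is a list of (r, g, b) tuples with values from 0 to 127.
    data = []
    for r, g, b in pixels:
        # The LPD8806 expects 7-bit colour values with the high bit set, in GRB order.
        data += [0x80 | (g & 0x7F), 0x80 | (r & 0x7F), 0x80 | (b & 0x7F)]
    # Trailing zero bytes latch the data; roughly one byte per 32 LEDs is needed.
    data += [0x00] * ((N_LEDS + 31) // 32)
    spi.xfer2(data)

show([(127, 0, 0)] * N_LEDS)  # light the whole strip red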
The Raspberry Pi has an SPI port, and hence, it is possible to control the RGB strip directly from the Raspberry Pi. Objective complete – mini debriefing In this task, we reviewed options for decorative lighting and controlling them using the Raspberry Pi and Arduino. Interface of an audio device In this task, we will work on installing MP3 and WAV file audio player tools on the Raspbian operating system. Prepare for lift off The Raspberry Pi is equipped with a 3.5mm audio jack and the speakers can be connected to that output. In order to get started, we install the ALSA utilities package and a command-line mp3 player: sudo apt-get install alsa-utils sudo apt-get install mpg321 Engage thrusters In order to use the alsa-utils or mpg321 players, we have to activate the BCM2835's sound drivers and this can be done using the modprobe command: sudo modprobe snd_bcm2835 After activating the drivers, it is possible to play the WAV files using the aplay command (aplay is a command-line player available as part of the alsa-utils package): aplay testfile.wav An MP3 file can be played using the mpg321 command (a command-line MP3 player): mpg321 testfile.mp3 In the preceding examples, the commands were executed in the directory where the WAV file or the MP3 file was located. In the Linux environment, it is possible to stop playing a file by pressing CTRL + C. Objective complete – mini debriefing We were able to install sound utilities in this task. Later, we will use the installed utilities to play audio from a web page. It is possible to play the sound files on the Raspberry Pi using the module available in Python. Some examples include: Snack sound tool kit, Pygame, and so on. Installing the web server In this section, we will install a local web server on Raspberry Pi. There are different web server frameworks that can be installed on the Raspberry Pi. They include Apache v2.0, Boost, the REST framework, and so on. Prepare for lift off As mentioned earlier, we will build a web server based on the web.py framework. This section is entirely referenced from web.py tutorials (http://webpy.github.io/). In order to install web.py, a Python module installer such as pip or easy_install is required. We will install it using the following command: sudo apt-get install python-setuptools Engage thrusters The web.py framework can be installed using the easy_install tool: sudo easy_install web.py Once the installation is complete, it is time to test it with a Hello World! example. We will open a new file using a text editor available with Python IDLE and get started with a Hello World! example for the web.py framework using the following steps: The first step is to import the web.py framework: import web The next step is defining the class that will handle the landing page. In this case, it is index: urls = ('/','index') We need to define what needs to be done when one tries to access the URL. "We will like to return the Hello world!text: class index: def GET(self): return "Hello world!" The next step is to ensure that a web page is set up using the web.py framework when the Python script is launched: if __name__ == '__main__': app = web.application(urls, globals()) app.run() When everything is put together, the following code is what we'll see: import web urls = ('/','index') class index: def GET(self): return "Hello world!" 
if __name__ == '__main__':
    app = web.application(urls, globals())
    app.run()

We should be able to start the web page by executing the Python script:

python helloworld.py

We should then be able to launch the website from the IP address of the Raspberry Pi. For example, if the IP address is 10.0.0.10, the web page can be accessed at http://10.0.0.10:8080 and it displays the text Hello world. Yay! A Hello world! example using the web.py framework

Objective complete – mission debriefing

We built a simple web page to display the Hello world text. In the next task, we will be interfacing the Christmas tree and other decorative appliances to our web page so that we can control them from anywhere on the local network. It is possible to change the default port number for web page access by launching the Python script as follows:

python helloworld.py 1234

Now, the web page can be accessed at http://<IP_Address_of_the_Pi>:1234.

Interfacing the web server

In this task, we will learn to interface one decorative appliance and a speaker. We will create a form and buttons on an HTML page to control the devices.

Prepare for lift off

In this task, we will review the code (available along with this article) required to interface decorative appliances and lighting arrangements to a web page so that they can be controlled over a local network. Let's get started by opening the file using a text editing tool (Python IDLE's text editor or any other text editor).

Engage thrusters

We will import the following modules to get started with the program:

import web
from web import form
import RPi.GPIO as GPIO
import os

The GPIO module is initialized, the pin numbering scheme is set, all appliances are turned off by setting the GPIO pins to low (False), and the global variables are declared:

#Set board
GPIO.setmode(GPIO.BCM)
#Initialize the pins that have to be controlled
GPIO.setup(25, GPIO.OUT)
GPIO.output(25, False)
# Global flag that tracks whether the tree lights are currently on.
state = False

This is followed by defining the URLs and the template location:

urls = ('/', 'index')
render = web.template.render('templates')

The buttons used in the web page are also defined:

appliances_form = form.Form(
    form.Button("appbtn", value="tree", class_="btntree"),
    form.Button("appbtn", value="Santa", class_="btnSanta"),
    form.Button("appbtn", value="audio", class_="btnaudio")
)

In this example, three buttons are used; they share the name appbtn, and a value and a CSS class are assigned to each button. The value determines the desired action when a button is clicked. For example, when the Christmas tree button is clicked, the lights need to be turned on, and this action can be executed based on the value that is returned during the button press. The home page is defined in the index class. The GET method is used to render the web page and POST for the button click actions:

class index:
    def GET(self):
        form = appliances_form()
        return render.index(form, "Raspberry Pi Christmas lights controller")

    def POST(self):
        global state
        userData = web.input()
        if userData.appbtn == "tree":
            state = not state
        elif userData.appbtn == "Santa":
            pass  # do something here for another appliance
        elif userData.appbtn == "audio":
            os.system("mpg321 /home/pi/test.mp3")
        GPIO.output(25, state)
        raise web.seeother('/')

In the POST method, we monitor the button clicks and perform an action accordingly. For example, when the button with the tree value is returned, we flip the Boolean value state. This in turn switches the state of the GPIO pin 25.
Earlier, we connected the power tail switch to pin 25. The index page file that contains the form and buttons is as follows:

$def with (form, title)
<html>
<head>
<title>$title</title>
<link rel="stylesheet" type="text/css" href="/static/styles.css">
</head>
<body>
<p><center><H1>Christmas Lights Controller</H1></center>
<br />
<br />
<form class="form" method="post">
$:form.render()
</form>
</body>
</html>

The styles of the buttons used on the web page are described as follows in styles.css:

form .btntree {
    margin-left : 200px;
    margin-right : auto;
    background: transparent url("images/topic_button.png") no-repeat top left;
    width : 186px;
    height: 240px;
    padding : 0px;
    position : absolute;
}

form .btnSanta {
    margin-left : 600px;
    margin-right : auto;
    background: transparent url("images/Santa-png.png") no-repeat top left;
    width : 240px;
    height: 240px;
    padding : 40px;
    position : absolute;
}

body {
    background-image: url('bg-snowflakes-3.gif');
}

The web page looks like what is shown in the following figure: Yay! We have a Christmas lights controller interface.

Objective complete – mini debriefing

We have written a simple web page that interfaces a Christmas tree and an RGB tree and plays MP3 files. This is a great project for a holiday weekend. It is also possible to view this web page from anywhere on the Internet and turn these appliances on/off (fans of the TV show The Big Bang Theory might like this idea; step-by-step instructions on setting it up are available at http://www.everydaylinuxuser.com/2013/06/connecting-to-raspberry-pi-from-outside.html).

Summary

In this article, we have accomplished the following:

Interfacing the RGB matrix
Interfacing AC appliances to Raspberry Pi
Design of a web page
Interfacing devices to the web page

Resources for Article: Further resources on this subject: Raspberry Pi Gaming Operating Systems [article] Calling your fellow agents [article] Our First Project – A Basic Thermometer [article]

Static Data Management

In this article by Loiane Groner, author of the book Mastering Ext JS, Second Edition, we will start implementing the application's core features, starting with static data management. What exactly is this? Every application has information that is not directly related to the core business, but this information is used by the core business logic somehow. There are two types of data in every application: static data and dynamic data. For example, the types of categories, languages, cities, and countries can exist independently of the core business and can be used by the core business information as well; this is what we call static data because it does not change very often. And there is the dynamic data, which is the information that changes in the application, what we call core business data. Clients, orders, and sales would be examples of dynamic or core business data. We can treat this static information as though they are independent MySQL tables (since we are using MySQL as the database server), and we can perform all the actions we can do on a MySQL table. (For more resources related to this topic, see here.) Creating a Model As usual, we are going to start by creating the Models. First, let's list the tables we will be working with and their columns: Actor: actor_id, first_name, last_name, last_update Category: category_id, name, last_update Language: language_id, name, last_update City: city_id, city, country_id, last_update Country: country_id, country, last_update We could create one Model for each of these entities with no problem at all; however, we want to reuse as much code as possible. Take another look at the list of tables and their columns. Notice that all tables have one column in common—the last_update column. All the previous tables have the last_update column in common. That being said, we can create a super model that contains this field. When we implement the actor and category models, we can extend the super Model, in which case we do not need to declare the column. Don't you think? Abstract Model In OOP, there is a concept called inheritance, which is a way to reuse the code of existing objects. Ext JS uses an OOP approach, so we can apply the same concept in Ext JS applications. If you take a look back at the code we already implemented, you will notice that we are already applying inheritance in most of our classes (with the exception of the util package), but we are creating classes that inherit from Ext JS classes. Now, we will start creating our own super classes. As all the models that we will be working with have the last_update column in common (if you take a look, all the Sakila tables have this column), we can create a super Model with this field. So, we will create a new file under app/model/staticData named Base.js: Ext.define('Packt.model.staticData.Base', {     extend: 'Packt.model.Base', //#1       fields: [         {             name: 'last_update',             type: 'date',             dateFormat: 'Y-m-j H:i:s'         }     ] }); This Model has only one column, that is, last_update. On the tables, the last_update column has the type timestamp, so the type of the field needs to be date, and we will also apply date format: 'Y-m-j H:i:s', which is years, months, days, hours, minutes, and seconds, following the same format as we have in the database (2006-02-15 04:34:33). When we can create each Model representing the tables, we will not need to declare the last_update field again. Look again at the code at line #1. 
We are not extending the default Ext.data.Model class, but another Base class (security.Base). Adapting the Base Model schema Create a file named Base.js inside the app/model folder with the following content in it: Ext.define('Packt.model.Base', {     extend: 'Ext.data.Model',       requires: [         'Packt.util.Util'     ],       schema: {         namespace: 'Packt.model', //#1         urlPrefix: 'php',         proxy: {             type: 'ajax',             api :{                 read : '{prefix}/{entityName:lowercase}/list.php',                 create:                     '{prefix}/{entityName:lowercase}/create.php',                 update:                     '{prefix}/{entityName:lowercase}/update.php',                 destroy:                     '{prefix}/{entityName:lowercase}/destroy.php'             },             reader: {                 type: 'json',                 rootProperty: 'data'             },             writer: {                 type: 'json',                 writeAllFields: true,                 encode: true,                 rootProperty: 'data',                 allowSingle: false             },             listeners: {                 exception: function(proxy, response, operation){               Packt.util.Util.showErrorMsg(response.responseText);                 }             }         }     } }); Instead of using Packt.model.security, we are going to use only Packt.model. The Packt.model.security.Base class will look simpler now as follows: Ext.define('Packt.model.security.Base', {     extend: 'Packt.model.Base',       idProperty: 'id',       fields: [         { name: 'id', type: 'int' }     ] }); It is very similar to the staticData.Base Model we are creating for this article. The difference is in the field that is common for the staticData package (last_update) and security package (id). Having a single schema for the application now means entityName of the Models will be created based on their name after 'Packt.model'. This means that the User and Group models we created will have entityName security.User, and security.Group respectively. However, we do not want to break the code we have implemented already, and for this reason we want the User and Group Model classes to have the entity name as User and Group. We can do this by adding entityName: 'User' to the User Model and entityName: 'Group' to the Group Model. We will do the same for the specific models we will be creating next. Having a super Base Model for all models within the application means our models will follow a pattern. The proxy template is also common for all models, and this means our server-side code will also follow a pattern. This is good to organize the application and for future maintenance. Specific models Now we can create all the models representing each table. Let's start with the Actor Model. We will create a new class named Packt.model.staticData.Actor; therefore, we need to create a new file name Actor.js under app/model/staticData, as follows: Ext.define('Packt.model.staticData.Actor', {     extend: 'Packt.model.staticData.Base', //#1       entityName: 'Actor', //#2       idProperty: 'actor_id', //#3       fields: [         { name: 'actor_id' },         { name: 'first_name'},         { name: 'last_name'}     ] }); There are three important things we need to note in the preceding code: This Model is extending (#1) from the Packt.model.staticData.Base class, which extends from the Packt.model.Base class, which in turn extends from the Ext.data.Model class. 
This means this Model inherits all the attributes and behavior from the classes Packt.model.staticData.Base, Packt.model.Base, and Ext.data.Model. As we created a super Model with the schema Packt.model, the default entityName created for this Model would be staticData.Actor. We are using entityName to help the proxy compile the URL template, so to make our life easier we are going to overwrite entityName as well (#2). The third point is idProperty (#3). By default, idProperty has the value "id". This means that when we declare a Model with a field named "id", Ext JS already knows that this is the unique field of this Model. When it is different from "id", we need to specify it using the idProperty configuration. As none of the Sakila tables have a unique field called "id" (it is always the name of the entity + "_id"), we will need to declare this configuration in all models. Now we can do the same for the other models. We need to create four more classes:

Packt.model.staticData.Category
Packt.model.staticData.Language
Packt.model.staticData.City
Packt.model.staticData.Country

At the end, we will have six Model classes (one super Model and five specific models) created inside the app/model/staticData package. If we create a UML class diagram for the Model classes, we will have the following diagram: The Actor, Category, Language, City, and Country Models extend the Packt.model.staticData Base Model, which extends from Packt.model.Base, which in turn extends the Ext.data.Model class.

Creating a Store

The next step is to create the stores for each Model. As we did with the Model, we will try to create a generic Store as well (in this article, we will create generic code for all screens, so creating a super Model, Store, and View is part of that capability). Although the common configurations are not in the Store, but in the Proxy (which we declared inside the schema in the Packt.model.Base class), having a super Store class can help us to listen to events that are common to all the static data stores. We will create a super Store named Packt.store.staticData.Base. As we need a Store for each Model, we will create the following stores:

Packt.store.staticData.Actors
Packt.store.staticData.Categories
Packt.store.staticData.Languages
Packt.store.staticData.Cities
Packt.store.staticData.Countries

At the end of this topic, we will have created all the previous classes. If we create a UML diagram for them, we will have something like the following diagram: All the Store classes extend from the Base Store. Now that we know what we need to create, let's get our hands dirty!

Abstract Store

The first class we need to create is the Packt.store.staticData.Base class. Inside this class, we will only declare autoLoad as true so that all the subclasses of this Store are loaded when the application launches:

Ext.define('Packt.store.staticData.Base', {
    extend: 'Ext.data.Store',

    autoLoad: true
});

All the specific stores that we will create will extend this Store. Creating a super Store like this can feel pointless; however, we do not know whether, during future maintenance, we will need to add some common Store configuration. As we will use MVC for this module, another reason is that inside the Controller, we can also listen to Store events (available since Ext JS 4.2). If we want to listen to the same event on a set of stores and execute exactly the same method, having a super Store will save us some lines of code.
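To illustrate that last point, a controller can subscribe to an event of several of these stores with a single handler. The following is only a sketch, not code from the book's application: the store ids and the handler name are assumptions, and it relies on the listen method's store event domain available to Ext.app.Controller classes:

Ext.define('Packt.controller.StaticData', {
    extend: 'Ext.app.Controller',

    init: function() {
        // One handler for the load event of all static data stores.
        // The '#...' selectors match each store's storeId (assumed values here).
        this.listen({
            store: {
                '#staticData.Actors': { load: this.onStaticStoreLoad },
                '#staticData.Categories': { load: this.onStaticStoreLoad }
            }
        });
    },

    onStaticStoreLoad: function(store, records) {
        console.log(store.getStoreId() + ' loaded ' + records.length + ' records');
    }
});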
Specific Store Our next step is to implement the Actors, Categories, Languages, Cities, and Countries stores. So let's start with the Actors Store: Ext.define('Packt.store.staticData.Actors', {     extend: 'Packt.store.staticData.Base', //#1       model: 'Packt.model.staticData.Actor' //#2 }); After the definition of the Store, we need to extend from the Ext JS Store class. As we are using a super Store, we can extend directly from the super Store (#1), which means extending from the Packt.store.staticData.Base class. Next, we need to declare the fields or the model that this Store is going to represent. In our case, we always declare the Model (#2). Using a model inside the Store is good for reuse purposes. The fields configuration is recommended just in case we need to create a very specific Store with specific data that we are not planning to reuse throughout the application, as in a chart or a report. For the other stores, the only thing that is going to be different is the name of the Store and the Model. However, if you need the code to compare with yours or simply want to get the complete source code, you can download the code bundle from this book or get it at https://github.com/loiane/masteringextjs. Creating an abstract GridPanel for reuse Now is the time to implement the views. We have to implement five views: one to perform the CRUD operations for Actor, one for Category, one for Language, one for City, and one for Country. The following screenshot represents the final result we want to achieve after implementing the Actors screen: And the following screenshot represents the final result we want to achieve after implementing the Categories screen: Did you notice anything similar between these two screens? Let's take a look again: The top toolbar is the same (1); there is a Live Search capability (2); there is a filter plugin (4), and the Last Update and widget columns are also common (3). Going a little bit further, both GridPanels can be edited using a cell editor(similar to MS Excel capabilities, where you can edit a single cell by clicking on it). The only things different between these two screens are the columns that are specific to each screen (5). Does this mean we can reuse a good part of the code if we use inheritance by creating a super GridPanel with all these common capabilities? Yes! So this is what we are going to do. So let's create a new class named Packt.view.staticData.BaseGrid, as follows: Ext.define('Packt.view.staticData.BaseGrid', {     extend: 'Ext.ux.LiveSearchGridPanel', //#1     xtype: 'staticdatagrid',       requires: [         'Packt.util.Glyphs' //#2     ],       columnLines: true,    //#3     viewConfig: {         stripeRows: true //#4     },       //more code here });    We will extend the Ext.ux.LiveSearchGridPanel class instead of Ext.grid.Panel. The Ext.ux.LiveSearchGridPanel class already extends the Ext.grid.Panel class and also adds the Live Search toolbar (2). The LiveSearchGridPanel class is a plugin that is distributed with the Ext JS SDK. So, we do not need to worry about adding it manually to our project (you will learn how to add third-party plugins to the project later in this book). As we will also add a toolbar with the Add, Save Changes, Cancel Changes buttons, we need to require the util.Glyphs class we created (#2). The configurations #3 and #4 show the border of each cell of the grid and to alternate between a white background and a light gray background. 
Likewise, any other component that is responsible for displaying information in Ext JS, such as the "Panel" piece is only the shell. The View is responsible for displaying the columns in a GridPanel. We can customize it using the viewConfig (#4). The next step is to create an initComponent method. To initComponent or not? While browsing other developers' code, we might see some using the initComponent when declaring an Ext JS class and some who do not (as we have done until now). So what is the difference between using it and not using it? When declaring an Ext JS class, we usually configure it according to the application needs. They might become either a parent class for other classes or not. If they become a parent class, some of the configurations will be overridden, while some will not. Usually, we declare the ones that we expect to override in the class as configurations. We declare inside the initComponent method the ones we do not want to be overridden. As there are a few configurations we do not want to be overridden, we will declare them inside the initComponent, as follows: initComponent: function() {     var me = this;       me.selModel = {         selType: 'cellmodel' //#5     };       me.plugins = [         {             ptype: 'cellediting',  //#6             clicksToEdit: 1,             pluginId: 'cellplugin'         },         {             ptype: 'gridfilters'  //#7         }     ];       //docked items       //columns       me.callParent(arguments); //#8 } We can define how the user can select information from the GridPanel: the default configuration is the Selection RowModel class. As we want the user to be able to edit cell by cell, we will use the Selection CellModel class (#5) and also the CellEditing plugin (#6), which is part of the Ext JS SDK. For the CellEditing plugin, we configure the cell to be available to edit when the user clicks on the cell (if we need the user to double-click, we can change to clicksToEdit: 2). To help us later in the Controller, we also assign an ID to this plugin. To be able to filter the information (the Live Search will only highlight the matching records), we will use the Filters plugin (#7). The Filters plugin is also part of the Ext JS SDK. The callParent method (#8) will call initConfig from the superclass Ext.ux.LiveSearchGridPanel passing the arguments we defined. It is a common mistake to forget to include the callParent call when overriding the initComponent method. If the component does not work, make sure you are calling the callParent method! Next, we are going to declare dockedItems. 
As all GridPanels will have the same toolbar, we can declare dockedItems in the super class we are creating, as follows: me.dockedItems = [     {         xtype: 'toolbar',         dock: 'top',         itemId: 'topToolbar', //#9         items: [             {                 xtype: 'button',                 itemId: 'add', //#10                 text: 'Add',                 glyph: Packt.util.Glyphs.getGlyph('add')             },             {                 xtype: 'tbseparator'             },             {                 xtype: 'button',                 itemId: 'save',                 text: 'Save Changes',                 glyph: Packt.util.Glyphs.getGlyph('saveAll')             },             {                 xtype: 'button',                 itemId: 'cancel',                 text: 'Cancel Changes',                 glyph: Packt.util.Glyphs.getGlyph('cancel')             },             {                 xtype: 'tbseparator'             },             {                 xtype: 'button',                 itemId: 'clearFilter',                 text: 'Clear Filters',                 glyph: Packt.util.Glyphs.getGlyph('clearFilter')             }         ]     } ]; We will have Add, Save Changes, Cancel Changes, and Clear Filters buttons. Note that the toolbar (#9) and each of the buttons (#10) has itemId declared. As we are going to use the MVC approach in this example, we will declare a Controller. The itemId configuration has a responsibility similar to the reference that we declare when working with a ViewController. We will discuss the importance of itemId more when we declare the Controller. When declaring buttons inside a toolbar, we can omit the xtype: 'button' configuration since the button is the default component for toolbars. Inside the Glyphs class, we need to add the following attributes inside its config: saveAll: 'xf0c7', clearFilter: 'xf0b0' And finally, we will add the two columns that are common for all the screens (Last Update column and Widget Column delete (#13)) along with the columns already declared in each specific GridPanel: me.columns = Ext.Array.merge( //#11     me.columns,               //#12     [{         xtype    : 'datecolumn',         text     : 'Last Update',         width    : 150,         dataIndex: 'last_update',         format: 'Y-m-j H:i:s',         filter: true     },     {         xtype: 'widgetcolumn', //#13         width: 45,         sortable: false,       //#14         menuDisabled: true,    //#15         itemId: 'delete',         widget: {             xtype: 'button',   //#16             glyph: Packt.util.Glyphs.getGlyph('destroy'),             tooltip: 'Delete',             scope: me,                //#17             handler: function(btn) {  //#18                 me.fireEvent('widgetclick', me, btn);             }         }     }] ); In the preceding code we merge (#11) me.columns (#12) with two other columns and assign this value to me.columns again. We want all child grids to have these two columns plus the specific columns for each child grid. If the columns configuration from the BaseGrid class were outside initConfig, then when a child class declared its own columns configuration the value would be overridden. If we declare the columns configuration inside initComponent, a child class would not be able to add its own columns configuration, so we need to merge these two configurations (the columns from the child class #12 with the two columns we want each child class to have). 
For the delete button, we are going to use a Widget Column (#13), introduced in Ext JS 5. Until Ext JS 4, the only way to have a button inside a Grid Column was using an Action Column. Here we use a button (#16) as the widget of the Widget Column. Because it is a Widget Column, there is no reason to make this column sortable (#14), and we can also disable its menu (#15).

Specific GridPanels for each table

Our last stop before we implement the Controller is the specific GridPanels. We have already created the super GridPanel that contains most of the capabilities that we need. Now we just need to declare the specific configurations for each GridPanel. We will create five GridPanels that will extend from the Packt.view.staticData.BaseGrid class, as follows:

Packt.view.staticData.Actors
Packt.view.staticData.Categories
Packt.view.staticData.Languages
Packt.view.staticData.Cities
Packt.view.staticData.Countries

Let's start with the Actors GridPanel, as follows:

Ext.define('Packt.view.staticData.Actors', {
    extend: 'Packt.view.staticData.BaseGrid',
    xtype: 'actorsgrid',        //#1

    store: 'staticData.Actors', //#2

    columns: [
        {
            text: 'Actor Id',
            width: 100,
            dataIndex: 'actor_id',
            filter: {
                type: 'numeric'   //#3
            }
        },
        {
            text: 'First Name',
            flex: 1,
            dataIndex: 'first_name',
            editor: {
                allowBlank: false, //#4
                maxLength: 45      //#5
            },
            filter: {
                type: 'string'     //#6
            }
        },
        {
            text: 'Last Name',
            width: 200,
            dataIndex: 'last_name',
            editor: {
                allowBlank: false, //#7
                maxLength: 45      //#8
            },
            filter: {
                type: 'string'     //#9
            }
        }
    ]
});

Each specific class has its own xtype (#1). We also need to execute an UPDATE query in the database to update the menu table with the new xtypes we are creating:

UPDATE `sakila`.`menu` SET `className`='actorsgrid' WHERE `id`='5';
UPDATE `sakila`.`menu` SET `className`='categoriesgrid' WHERE `id`='6';
UPDATE `sakila`.`menu` SET `className`='languagesgrid' WHERE `id`='7';
UPDATE `sakila`.`menu` SET `className`='citiesgrid' WHERE `id`='8';
UPDATE `sakila`.`menu` SET `className`='countriesgrid' WHERE `id`='9';

The first declaration that is specific to the Actors GridPanel is the Store (#2). We are going to use the Actors Store. Because the Actors Store is inside the staticData folder (store/staticData), we also need to pass the name of the subfolder; otherwise, Ext JS will look for this Store file inside the app/store folder, where it does not exist.

Then we need to declare the columns specific to the Actors GridPanel (we do not need to declare the Last Update column and the delete Widget Column because they are already in the super GridPanel). What you need to pay attention to now are the editor and filter configurations for each column. The editor configuration is for editing (the cellediting plugin); we only apply it to the columns the user should be able to edit. The filter configuration (the filters plugin) is applied to the columns the user should be able to filter information from.
For example, for the id column, we do not want the user to be able to edit it, as its value is a sequence generated by the MySQL auto increment, so we will not apply the editor configuration to it. However, the user can filter the information based on the ID, so we will apply the filter configuration (#3). We want the user to be able to edit the other two columns, first_name and last_name, so we will add the editor configuration. We can also perform client-side validations, just as we can on a form field. For example, we want both fields to be mandatory (#4 and #7), and the maximum number of characters the user can enter is 45 (#5 and #8). Finally, as both columns render text values (string), we will also apply the string filter (#6 and #9). For other filter types, please refer to the Ext JS documentation, which provides examples and more configuration options that we can use.

And that is it! The super GridPanel will provide all the other capabilities.

Summary

In this article, we covered how to implement screens that look very similar to the MySQL Table Editor. The most important concept we covered is implementing abstract classes, using inheritance from OOP. We are used to applying these concepts in server-side languages such as PHP, Java, and .NET. This article demonstrated that they are just as important on the Ext JS side; this way, we can reuse a lot of code and implement generic code that provides the same capability for more than one screen. We created a base Model and Store, used a Live Search GridPanel along with the filters plugin, and learned how to perform CRUD operations using the Store capabilities.

Resources for Article:

Further resources on this subject:
So, What Is EXT JS? [article]
The Login Page Using EXT JS [article]
Improving Code Quality [article]
Introducing Web Application Development in Rails

Packt
25 Feb 2015
8 min read
In this article by Syed Fazle Rahman, author of the book Bootstrap for Rails, we will learn how to present your application in the best possible way, something that has been important to every web developer for ages. In this mobile-first generation, we are forced to go with the wind and make our applications compatible with mobiles, tablets, PCs, and every possible display on Earth. Bootstrap is the one-stop solution for all the woes developers have been facing. It creates beautiful responsive designs without any extra effort and without any advanced CSS knowledge. It is a true boon for every developer.

We will be focusing on how to beautify our Rails applications with the help of Bootstrap. We will create a basic Todo application with Rails. We will explore the folder structure of a Rails application and analyze which folders are important for templating a Rails application. This will be helpful if you want to quickly revisit Rails concepts. We will also see how to create views, link them, and style them. The styling in this article will be done traditionally, through the application's default CSS files. Finally, we will discuss how we can speed up the designing process using Bootstrap. In short, we will cover the following topics:

Why Bootstrap with Rails?
Setting up a Todo application in Rails
Analyzing the folder structure of a Rails application
Creating views
Styling views using CSS
Challenges in traditionally styling a Rails application

(For more resources related to this topic, see here.)

Why Bootstrap with Rails?

Rails is one of the most popular Ruby frameworks, currently at its peak both in terms of demand and technology trends. With more than 3,100 members contributing to its development, and tens of thousands of applications already built using it, Rails has set a standard for every other framework on the Web today. Rails was initially developed by David Heinemeier Hansson in 2003 to ease his own development process in Ruby. Later, he was generous enough to release Rails to the open source community. Today, it is popularly known as Ruby on Rails. Rails shortens the development life cycle by moving the focus from reinventing the wheel to innovating new features. It is based on the convention over configuration principle, which means that if you follow the Rails conventions, you end up writing much less code than you otherwise would.

Bootstrap, on the other hand, is one of the most popular frontend development frameworks. It was initially developed at Twitter for some of its internal projects. It makes the life of a novice web developer easier by providing most of the reusable components already built and ready to use. Bootstrap can be easily integrated with a Rails development environment through various methods. We can directly use the .css files provided by the framework, or we can extend it through its Sass version and let Rails compile it. Sass is a CSS preprocessor that brings logic and functionality into CSS. It includes features like variables, functions, mixins, and others. Using the Sass version of Bootstrap is the recommended method in Rails; it gives various options to customize Bootstrap's default styles easily (a minimal sketch of this setup is shown at the end of this introduction). Bootstrap also provides various JavaScript components that can be used by those who don't have any real JavaScript knowledge. These components are required in almost every modern website being built today. Bootstrap with Rails is a deadly combination.
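For readers who want to see what the Sass route looks like in practice, here is a minimal sketch; it is not part of the original article, and the version constraints are only examples, but the gem and import names follow the bootstrap-sass documentation:

# Gemfile -- pull in Bootstrap's official Sass port
gem 'sass-rails', '>= 3.2'
gem 'bootstrap-sass', '~> 3.3'

After running bundle install, you would typically rename app/assets/stylesheets/application.css to application.scss, remove its *= require lines, and add @import "bootstrap-sprockets"; followed by @import "bootstrap"; so that Bootstrap's variables and mixins become available to your own styles.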
You can build applications faster and invest more time to think about functionality, rather than rewrite codes. Setting up a Todo application in Rails I assume that you already have basic knowledge of Rails development. You should also have Rails and Ruby installed in your machine to start with. Let's first understand what this Todo application will do. Our application will allow us to create, update, and delete items from the Todo list. We will first analyze the folders that are created while scaffolding this application and which of them are necessary for templating the application. So, let's dip our feet into the water: First, we need to select our workspace, which can be any folder inside your system. Let's create a folder named Bootstrap_Rails_Project. Now, open the terminal and navigate to this folder. It's time to create our Todo application. Write the following command to create a Rails application named TODO: rails new TODO This command will execute a series of various other commands that are necessary to create a Rails application. So, just wait for sometime before it stops executing all the codes. If you are using a newer version of Rails, then this command will also execute bundle install command at the end. Bundle install command is used to install other dependencies. The output for the preceding command is as follows: Now, you should have a new folder inside Bootstrap_Rails_Project named TODO, which was created by the preceding code. Here is the output: Analyzing folder structure of a Rails application Let's navigate to the TODO folder to check how our application's folder structure looks like: Let me explain to you some of the important folders here: The first one is the app folder. The assets folder inside the app folder is the location to store all the static files like JavaScript, CSS, and Images. You can take a sneak peek inside them to look at the various files. The controllers folder handles various requests and responses of the browser. The helpers folder contains various helper methods both for the views and controllers. The next folder mailers, contains all the necessary files to send an e-mail. The models folder contains files that interact with the database. Finally, we have the views folder, which contains all the .erb files that will be complied to HTML files. So, let's start the Rails server and check out our application on the browser: Navigate to the TODO folder in the terminal and then type the following command to start a server: rails server You can also use the following command: rails s You will see that the server is deployed under the port 3000. So, type the following URL to view the application: http://localhost:3000. You can also use the following URL: http://0.0.0.0:3000. If your application is properly set up, you should see the default page of Rails in the browser: Creating views We will be using Rails' scaffold method to create models, views, and other necessary files that Rails needs to make our application live. Here's the set of tasks that our application should perform: It should list out the pending items Every task should be clickable, and the details related to that item should be seen in a new view We can edit that item's description and some other details We can delete that item The task looks pretty lengthy, but any Rails developer would know how easy it is to do. We don't actually have to do anything to achieve it. We just have to pass a single scaffold command, and the rest will be taken care of. 
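One part of "the rest will be taken care of" is routing: the scaffold generator adds a single resources entry to config/routes.rb, which maps the standard RESTful URLs (such as /todos and /todos/1) to the generated controller actions. The original article does not show this file, so the following is only an illustrative sketch of what the generated entry looks like:

# config/routes.rb
Rails.application.routes.draw do
  # Added by the scaffold generator; wires the index, show, new, create,
  # edit, update, and destroy actions of TodosController to RESTful URLs.
  resources :todos
end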
Close the Rails server using Ctrl + C keys and then proceed as follows: First, navigate to the project folder in the terminal. Then, pass the following command: rails g scaffold todo title:string description:text completed:boolean This will create a new model called todo that has various fields like title, description, and completed. Each field has a type associated with it. Since we have created a new model, it has to be reflected in the database. So, let's migrate it: rake db:create db:migrate The preceding code will create a new table inside a new database with the associated fields. Let's analyze what we have done. The scaffold command has created many HTML pages or views that are needed for managing the todo model. So, let's check out our application. We need to start our server again: rails s Go to the localhost page http://localhost:3000 at port number 3000. You should still see the Rails' default page. Now, type the URL: http://localhost:3000/todos. You should now see the application, as shown in the following screenshot: Click on New Todo, you will be taken to a form which allows you to fill out various fields that we had earlier created. Let's create our first todo and click on submit. It will be shown on the listing page: It was easy, wasn't it? We haven't done anything yet. That's the power of Rails, which people are crazy about. Summary This article was to brief you on how to develop and design a simple Rails application without the help of any CSS frontend frameworks. We manually styled the application by creating an external CSS file styles.css and importing it into the application using another CSS file application.css. We also discussed the complexities that a novice web designer might face on directly styling the application. Resources for Article: Further resources on this subject: Deep Customization of Bootstrap [article] The Bootstrap grid system [article] Getting Started with Bootstrap [article]
URL Routing and Template Rendering

Packt
24 Feb 2015
11 min read
In this article by Ryan Baldwin, the author of Clojure Web Development Essentials, we will start building our application, creating actual endpoints that process HTTP requests and return something we can look at. We will:

Learn what the Compojure routing library is and how it works
Build our own Compojure routes to handle an incoming request

What this article won't cover, however, is making any of our HTML pretty, client-side frameworks, or JavaScript. Our goal is to understand the server-side/Clojure components and get up and running as quickly as possible. As a result, our templates are going to look pretty basic, if not downright embarrassing.

(For more resources related to this topic, see here.)

What is Compojure?

Compojure is a small, simple library that allows us to create specific request handlers for specific URLs and HTTP methods. In other words, "HTTP method A requesting URL B will execute Clojure function C." By allowing us to do this, we can create our application in a sane way (URL-driven), and thus architect our code in some meaningful way. For the studious among us, the Compojure docs can be found at https://github.com/weavejester/compojure/wiki.

Creating a Compojure route

Let's do an example that will allow the awful-sounding tech jargon to make sense. We will create an extremely basic route, which will simply print out the original request map to the screen. Let's perform the following steps:

1. Open the home.clj file.
2. Alter the home-routes defroutes so that it looks like this:

(defroutes home-routes
  (GET "/" [] (home-page))
  (GET "/about" [] (about-page))
  (ANY "/req" request (str request)))

3. Start the Ring Server if it's not already started.
4. Navigate to http://localhost:3000/req.

It's possible that your Ring Server will be serving off a port other than 3000. Check the output of lein ring server for the serving port if you're unable to connect to the URL listed in step 4. You should see the raw request map printed to the screen.

Using defroutes

Before we dive too much into the anatomy of the routes, we should speak briefly about what defroutes is. The defroutes macro packages up all of the routes and creates one big Ring handler out of them. Of course, you don't need to define all the routes for an application under a single defroutes macro. You can, and should, spread them out across various namespaces and then incorporate them into the app in Luminus' handler namespace. Before we start making a bunch of example routes, let's move the route we've already created to its own namespace:

1. Create a new namespace hipstr.routes.test-routes (/hipstr/routes/test_routes.clj).
2. Ensure that the namespace makes use of the Compojure library:

(ns hipstr.routes.test-routes
  (:require [compojure.core :refer :all]))

3. Use the defroutes macro to create a new set of routes, and move the /req route we created in the hipstr.routes.home namespace under it:

(defroutes test-routes
  (ANY "/req" request (str request)))

4. Incorporate the new test-routes route into our application handler. In hipstr.handler, add a requirement for the hipstr.routes.test-routes namespace:

(:require [compojure.core :refer [defroutes]]
  [hipstr.routes.home :refer [home-routes]]
  [hipstr.routes.test-routes :refer [test-routes]]
  …)

5. Finally, add the test-routes route to the list of routes in the call to app-handler:

(def app (app-handler
  ;; add your application routes here
  [home-routes test-routes base-routes]

We've now created a new routing namespace.
It's with this namespace where we will create the rest of the routing examples. Anatomy of a route So what exactly did we just create? We created a Compojure route, which responds to any HTTP method at /req and returns the result of a called function, in our case a string representation of the original request map. Defining the method The first argument of the route defines which HTTP method the route will respond to; our route uses the ANY macro, which means our route will respond to any HTTP method. Alternatively, we could have restricted which HTTP methods the route responds to by specifying a method-specific macro. The compojure.core namespace provides macros for GET, POST, PUT, DELETE, HEAD, OPTIONS, and PATCH. Let's change our route to respond only to requests made using the GET method: (GET "/req" request (str request)) When you refresh your browser, the entire request map is printed to the screen, as we'd expect. However, if the URL and the method used to make the request don't match those defined in our route, the not-found route in hipstr.handler/base-routes is used. We can see this in action by changing our route to listen only to the POST methods: (POST "/req" request (str request)) If you try and refresh the browser again, you'll notice we don't get anything back. In fact, an "HTTP 404: Page Not Found" response is returned to the client. If we POST to the URL from the terminal using curl, we'll get the following expected response: # curl -d {} http://localhost:3000/req {:ssl-client-cert nil, :go-bowling? "YES! NOW!", :cookies {}, :remote-addr "0:0:0:0:0:0:0:1", :params {}, :flash nil, :route-params {}, :headers {"user-agent" "curl/7.37.1", "content-type" "application/x-www-form-urlencoded", "content-length" "2", "accept" "*/*", "host" "localhost:3000"}, :server-port 3000, :content-length 2, :form-params {}, :session/key nil, :query-params {}, :content-type "application/x-www-form-urlencoded", :character-encoding nil, :uri "/req", :server-name "localhost", :query-string nil, :body #<HttpInput org.eclipse.jetty.server.HttpInput@38dea1>, :multipart-params {}, :scheme :http, :request-method :post, :session {}} Defining the URL The second component of the route is the URL on which the route is served. This can be anything we want and as long as the request to the URL matches exactly, the route will be invoked. There are, however, two caveats we need to be aware of: Routes are tested in order of their declaration, so order matters. The trailing slash isn't handled well. Compojure will always strip the trailing slash from the incoming request but won't redirect the user to the URL without the trailing slash. As a result an HTTP 404: Page Not Found response is returned. So never base anything off a trailing slash, lest ye peril in an ocean of confusion. Parameter destructuring In our previous example we directly refer to the implicit incoming request and pass that request to the function constructing the response. This works, but it's nasty. Nobody ever said, I love passing around requests and maintaining meaningless code and not leveraging URLs, and if anybody ever did, we don't want to work with them. Thankfully, Compojure has a rather elegant destructuring syntax that's easier to read than Clojure's native destructuring syntax. Let's create a second route that allows us to define a request map key in the URL, then simply prints that value in the response: (GET "/req/:val" [val] (str val)) Compojure's destructuring syntax binds HTTP request parameters to variables of the same name. 
In the previous syntax, the key :val will be in the request's :params map. Compojure will automatically map the value of {:params {:val...}} to the symbol val in [val]. In the end, you'll get the following output for the URL http://localhost:3000/req/holy-moly-molly: That's pretty slick but what if there is a query string? For example, http://localhost:3000/req/holy-moly-molly!?more=ThatsAHotTomalle. We can simply add the query parameter more to the vector, and Compojure will automatically bring it in: (GET "/req/:val" [val more] (str val "<br>" more)) Destructuring the request What happens if we still need access to the entire request? It's natural to think we could do this: (GET "/req/:val" [val request] (str val "<br>" request)) However, request will always be nil because it doesn't map back to a parameter key of the same name. In Compojure, we can use the magical :as key: (GET "/req/:val" [val :as request] (str val "<br>" request)) This will now result in request being assigned the entire request map, as shown in the following screenshot: Destructuring unbound parameters Finally, we can bind any remaining unbound parameters into another map using &. Take a look at the following example code: (GET "/req/:val/:another-val/:and-another"   [val & remainders] (str val "<br>" remainders)) Saving the file and navigating to http://localhost:3000/req/holy-moly-molly!/what-about/susie-q will render both val and the map with the remaining unbound keys :another-val and :and-another, as seen in the following screenshot: Constructing the response The last argument in the route is the construction of the response. Whatever the third argument resolves to will be the body of our response. For example, in the following route: (GET "/req/:val" [val] (str val)) The third argument, (str val), will echo whatever the value passed in on the URL is. So far, we've simply been making calls to Clojure's str but we can just as easily call one of our own functions. Let's add another route to our hipstr.routes.test-routes, and write the following function to construct its response: (defn render-request-val [request-map & [request-key]]   "Simply returns the value of request-key in request-map,   if request-key is provided; Otherwise return the request-map.   If request-key is provided, but not found in the request-map,   a message indicating as such will be returned." (str (if request-key         (if-let [result ((keyword request-key) request-map)]           result           (str request-key " is not a valid key."))         request-map))) (defroutes test-routes   (POST "/req" request (render-request-val request))   ;no access to the full request map   (GET "/req/:val" [val] (str val))   ;use :as to get access to full request map   (GET "/req/:val" [val :as full-req] (str val "<br>" full-req))   ;use :as to get access to the remainder of unbound symbols   (GET "/req/:val/:another-val/:and-another" [val & remainders]     (str val "<br>" remainders))   ;use & to get access to unbound params, and call our route   ;handler function   (GET "/req/:key" [key :as request]     (render-request-val request key))) Now when we navigate to http://localhost:3000/req/server-port, we'll see the value of the :server-port key in the request map… or wait… we should… what's wrong? If this doesn't seem right, it's because it isn't. Why is our /req/:val route getting executed? As stated earlier, the order of routes is important. 
Because /req/:val with the GET method is declared earlier, it's the first route to match our request, regardless of whether or not :val is in the HTTP request map's parameters. Routes are matched on URL structure, not on parameters keys. As it stands right now, our /req/:key will never get matched. We'll have to change it as follows: ;use & to get access to unbound params, and call our route handler function (GET "/req/:val/:another-val/:and-another" [val & remainders]   (str val "<br>" remainders))   ;giving the route a different URL from /req/:val will ensure its   execution   (GET "/req/key/:key" [key :as request] (render-request-val   request key))) Now that our /req/key/:key route is logically unique, it will be matched appropriately and render the server-port value to screen. Let's try and navigate to http://localhost:3000/req/key/server-port again: Generating complex responses What if we want to create more complex responses? How might we go about doing that? The last thing we want to do is hardcode a whole bunch of HTML into a function, it's not 1995 anymore, after all. This is where the Selmer library comes to the rescue. Summary In this article we have learnt what Compojure is, what a Compojure routing library is and how it works. You have also learnt to build your own Compojure routes to handle an incoming request, within which you learnt how to use defroutes, the anatomy of a route, destructuring parameter and how to define the URL. Resources for Article: Further resources on this subject: Vmware Vcenter Operations Manager Essentials - Introduction To Vcenter Operations Manager [article] Websockets In Wildfly [article] Clojure For Domain-Specific Languages - Design Concepts With Clojure [article]