
How-To Tutorials - Web Development


Packaged Elegance

Packt
03 Mar 2015
24 min read
In this article by John Farrar, author of the book KnockoutJS Web Development, we will see how templates drove us to a more dynamic, creative platform. The next advancement in web development was custom HTML components. KnockoutJS allows us to jump right in with some game-changing elegance for designers and developers. In this article, we will focus on:

- An introduction to components
- Bring Your Own Tags (BYOT)
- Enhancing attribute handling
- Making your own libraries
- Asynchronous Module Definition (AMD): on-demand resource loading

This entire article is about packaging your code for reuse. Using these techniques, you can make your code more approachable and elegant.

Introduction to components

The best explanation of a component is a packaged template with an isolated ViewModel. Here is the syntax we would use to declare a like component on the page:

    <div data-bind="component: 'like-widget'"></div>

If you are passing no parameters through to the component, this is the correct syntax. If you wish to pass parameters through, you would use a JSON-style structure as follows:

    <div data-bind="component: { name: 'like-widget', params: { approve: like } }"></div>

This would allow us to pass named parameters through to our custom component. In this case, we are passing a parameter named approve, which means we need a bound ViewModel variable by the name of like. Look at how this would be coded. Create a page called components.html using the _base.html file to speed things up, as we have done in all our other articles. In your script section, create the following ViewModel:

    <script>
    ViewModel = function() {
        var self = this;
        self.like = ko.observable(true);
    };
    // insert custom component here
    vm = new ViewModel();
    ko.applyBindings(vm);
    </script>

Now, we will create our custom component. Here is the basic component we will use for this first example. Place the code where the comment is, as we want to make sure it is added before our applyBindings method is executed:

    ko.components.register('like-widget', {
        viewModel: function(params) {
            this.approve = params.approve;
            // Behaviors:
            this.toggle = function() {
                this.approve(!this.approve());
            }.bind(this);
        },
        template:
            '<div class="approve">' +
                '<button data-bind="click: toggle">' +
                    '<span data-bind="visible: approve" class="glyphicon glyphicon-thumbs-up"></span>' +
                    '<span data-bind="visible: !approve()" class="glyphicon glyphicon-thumbs-down"></span>' +
                '</button>' +
            '</div>'
    });

There are two sections to our component: the viewModel and template sections. In this article, we will be keeping the template details inline inside the component. The standard Knockout component passes variables to the component using the params structure. We can either use this structure or optionally use the self = this approach if desired. In addition to setting up the variable structure, it is also possible to create behaviors for our components. If we look in the template code, we can see that we have data-bound the click event to toggle the approve setting in our component. Then, inside the button, by binding to the visible trait of the span elements, either the thumbs up or the thumbs down image will be shown to the user. Yes, we are using a Bootstrap icon element rather than a graphic here. Initially, the widget shows the thumbs up state; when we click on the thumb image, it toggles between the thumbs up and the thumbs down versions.
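As an aside, the component above uses this together with .bind(this); as noted, the same viewModel could instead use the self = this convention. Here is a minimal sketch of that variant (the alternative registration name and the trimmed-down template are ours for illustration, not the article's code):

    // A sketch of the same component written with the self = this convention.
    // Capturing this in a local variable avoids the .bind(this) call on the behavior.
    ko.components.register('like-widget-alt', {
        viewModel: function(params) {
            var self = this;                  // capture the component instance once
            self.approve = params.approve;
            self.toggle = function() {        // no .bind(this) needed now
                self.approve(!self.approve());
            };
        },
        template: '<button data-bind="click: toggle, text: approve"></button>'
    });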
Since we also passed in an external variable that is bound to the page ViewModel, the value in the matched span text will also toggle. Here is the markup we would add to the page to produce these results in the View section of our code:

    <div data-bind="component: { name: 'like-widget', params: { approve: like } }"></div>
    <span data-bind="text: like"></span>

You could build this type of functionality with a jQuery plugin as well, but it would likely take a bit more code to do two-way binding and match the tight functionality we have achieved here. This doesn't mean jQuery plugins are bad, as Knockout is itself a jQuery-related technology. What it does mean is that we have ways to do some things even better. It is this author's opinion that features like this would still make great additions to the core jQuery library. Yet, I am not holding my breath waiting for them to adopt a Knockout-type project into the wonderful collection of projects they have at this point, and I do not feel we should hold that against them. Keeping focused on what they do best is one of the reasons libraries like Knockout can provide a wider array of options. It seems the decisions are working on our behalf, even if they are taking a different approach than I expected.

Dynamic component selection

You may have noticed that we selected the component using a quoted, hardcoded name. While at first this may seem constricting, remember that the binding is actually a power feature: by using a variable instead of a hardcoded value, you can dynamically select the component you would like to be inserted. Here is the markup code:

    <div data-bind="component: { name: widgetName, params: widgetParams }"></div>
    <span data-bind="text: widgetParams.approve"></span>

Notice that we are passing in both widgetName and widgetParams. Because we are binding the structure differently, we also need to show the bound value differently in our span. Here is the script part of our code that needs to be added to our ViewModel code:

    self.widgetName = ko.observable("like-widget");
    self.widgetParams = {
        approve: ko.observable(true)
    };

We get the same visible results, but notice that each of the like buttons acts independently of the others. What would happen if we put more than one of the same element on the page? Knockout components act independently of each other most of the time; if we bound them to the same variable, they would not be independent. In your ViewModel declaration code, add another variable called like2 as follows:

    self.like2 = ko.observable(false);

Now, we will add another like button to the page by copying our first like View code. This time, change the value from like to like2 as follows:

    <like-widget params="approve: like2"></like-widget>
    <span data-bind="text: like2"></span>

This time when the page loads, the other likes display with a thumbs up, but this like displays with a thumbs down. The text also shows the false value stored in the bound variable. The like buttons act independently because each of them is bound to a unique value.

Bring Your Own Tags (BYOT)

What is an element? Basically, an element is a component that you reach using the tag syntax. This is the way it is expressed in the official documentation at this point, and it is likely to stay that way. It is still a component under the hood. Depending on the crowd you are in, this distinction will be more or less important.
Mostly, just be aware of the distinction in case someone feels it is important, as that will let you be on the same page in discussions. Custom tags are part of the forthcoming HTML feature called Web Components; Knockout allows you to start using them today. Here is the View code:

    <like-widget params="approve: like3"></like-widget>
    <span data-bind="text: like3"></span>

You may want to code some tags as a single tag rather than as an opening and closing tag pair. At this time, there are challenges getting every browser to see custom element tags declared as a single tag, so custom tags, or elements, need to be declared as opening and closing tags for now. We will also need to create our like3 bound variable for the ViewModel with the following code:

    self.like3 = ko.observable(true);

Running the code gives us the same wonderful functionality as our data-bind approach, but now we are creating our own HTML tags. Has there ever been a time you wanted a special HTML tag that just didn't exist? There is a chance you could create it now using Knockout component element-style coding.

Enhancing attribute handling

Now, while custom tags are awesome, there is just something different about passing everything in with a single params attribute. The reason for this is that the process matches how our tags work when we are using the data-bind approach to coding. In the following example, we will look at passing things in via individual attributes. This is not meant to work with the data-bind approach; it is focused completely on the custom tag element component. The first thing you want to do is make sure this enhancement doesn't cause any issues with the normal elements. We did this by checking the custom elements for a standard prefix. You do not need to work through this code, as it is a bit more advanced. The easiest thing to do is to include our Knockout components script with the following script tag:

    <script src="/share/js/knockout.komponents.js"></script>

This script contains the code segment that converts tags starting with kom- to tags that use individual attributes rather than a JSON translation of the attributes inside params. Feel free to borrow the code to create libraries of your own. We are going to be creating a standard set of libraries on GitHub for these component tags. Since the HTML tags are Knockout components, we are calling these libraries "KOmponents". The resource can be found at https://github.com/sosensible/komponents.

Now, with that library included, we will use our View code to connect to the new tag. Here is the code to use in the View:

    <kom-like approve="tagLike"></kom-like>
    <span data-bind="text: tagLike"></span>

Notice that in our HTML markup, the tag starts with the library prefix. This also requires the ViewModel to have a binding to pass into this tag, as follows:

    self.tagLike = ko.observable(true);

The following is the code for the actual attribute-aware version of the Knockout component.
Do not place this in your code, as it is already included in the library in the shared directory:

    // <kom-like /> tag
    ko.components.register('kom-like', {
        viewModel: function(params) {
            // Data: value must be true to approve
            this.approve = params.approve;
            // Behaviors:
            this.toggle = function() {
                this.approve(!this.approve());
            }.bind(this);
        },
        template:
            '<div class="approve">' +
                '<button data-bind="click: toggle">' +
                    '<span data-bind="visible: approve" class="glyphicon glyphicon-thumbs-up"></span>' +
                    '<span data-bind="visible: !approve()" class="glyphicon glyphicon-thumbs-down"></span>' +
                '</button>' +
            '</div>'
    });

The tag in the View changed because we passed the information in via named attributes and not as a JSON structure inside a params attribute. We also made sure to manage these tags by using a prefix. The reason for this is that we did not want our fancy tags to break the standard method of passing params commonly practiced with regular Knockout components. As we can see, we again have a functional component, with the added advantage of being able to pass the values in a style more familiar to those used to coding with HTML tags.

Building your own libraries

Again, we are calling our custom components KOmponents. We will be creating a number of library solutions over time and welcome others to join in. Tags will not do everything for us, as there are some limitations yet to be conquered. That doesn't mean we should wait for all the features before using the ones we can today. In this article, we will also show some tags from our Bootstrap KOmponents library. First, we need to include the Bootstrap KOmponents library:

    <script src="/share/js/knockout.komponents.bs.js"></script>

Above the ViewModel in our script, we need to add a function to make this section of code simpler. At times, when passing items into observables, we can pass in richer bound data using a function like this. Again, create this function above the ViewModel declaration in the script, as shown here:

    var listItem = function(display, students) {
        this.display = ko.observable(display);
        this.students = ko.observable(students);
        this.type = ko.computed(function() {
            switch (Math.ceil(this.students() / 5)) {
                case 1:
                case 2:
                    return 'danger';
                case 3:
                    return 'warning';
                case 4:
                    return 'info';
                default:
                    return 'success';
            }
        }, this);
    };

Now, inside the ViewModel, we will declare a set of data to pass to a Bootstrap-style list group as follows:

    self.listData = ko.observableArray([
        new listItem("HTML5", 12),
        new listItem("CSS", 8),
        new listItem("JavaScript", 19),
        new listItem("jQuery", 48),
        new listItem("Knockout", 33)
    ]);

Each item in our array has display, students, and type variables. We are using a number of Bootstrap features here, but packaging them all up inside our Bootstrap smart tag. This tag starts to go beyond the bare basics. It is still very implementable, but we don't want to throw too much at you at one time, so we will not go into the detailed code for this tag. What we do want to show is how much power can be wrapped into custom Knockout tags. Here is the markup we will use to call this tag and bind the correct part of the ViewModel for display:

    <kom-listgroup data="listData" badgeField="'students'" typeField="'type'"></kom-listgroup>

That is it. You should take note of a couple of special details. The data is passed in as a bound Knockout ViewModel property.
The badge field is passed in as a string naming the field on the data collection from which the badge count will be pulled. The same string approach is used for the type field. The type sets the colors as per the standard Bootstrap contextual types. The theme here is that if there are not enough students to hold a class, the list group custom tag shows the danger color. Since the type is computed as Math.ceil(students / 5), classes with 1 to 10 students show danger, 11 to 15 show warning, 16 to 20 show info, and 21 or more show success.

While this is neat, let's jump into our browser tools console and change the value of one of the items. Let's say there was a class on some cool web technology called jQuery. What if people had not heard of it, didn't know what it was, and you really wanted to take the class? It would be nice to encourage a few others to check it out. How would you know whether the class was at a danger level or not? We could simply use the badge and the numbers, but how awesome is it to also use the color-coding hint? Type the following code into the console and see what it returns:

    vm.listData()[3].display()

Because JavaScript starts counting with zero for the first item, this returns the jQuery item. Now that we know we have the right item, let's set the student count to nine using the following code in the browser console:

    vm.listData()[3].students(9)

Notice the change in the jQuery class: both the badge and the type value have updated (nine students gives Math.ceil(9 / 5) = 2, which maps to danger). This shows how much power we can wield with very little manual coding.

We should also take a moment to see how the type was managed. Using the functional assignment, we were able to use a Knockout computed observable for that value. Here is the code for that part again:

    this.type = ko.computed(function() {
        switch (Math.ceil(this.students() / 5)) {
            case 1:
            case 2:
                return 'danger';
            case 3:
                return 'warning';
            case 4:
                return 'info';
            default:
                return 'success';
        }
    }, this);

While this code sits outside the ViewModel declaration, it is still able to bind properly and make our code run, even inside a custom tag created with Knockout's component binding.

Bootstrap component example

Here is another example of binding with Bootstrap. The general best practice for using modal display boxes is to place them high in the markup, perhaps right under the body tag, to make sure there are no conflicts with the rest of the code. Place this tag right below the body tag, as shown in the following code:

    <kom-modal id="'komModal'" title="komModal.title()" body="komModal.body()"></kom-modal>

Again, we will need to make some declarations inside the ViewModel for this to work right. Enter this code into the declarations of the ViewModel:

    self.komModal = {
        title: ko.observable('Modal KOmponent'),
        body: ko.observable('This is the body of the <strong>modal KOmponent</strong>.')
    };

We will also create a button on the page to open our modal. The button uses binding that is part of Bootstrap: the data-toggle and data-target attributes are not Knockout binding features, but Knockout works side by side with them wonderfully. Another point of interest is the standard ID attribute, which tells Bootstrap items, like this button, how to interact with the modal box. This is another reason it may be beneficial to use KOmponents or a library like it.
Here is the markup code:

    <button type="button" data-toggle="modal" data-target="#komModal">Open Modal KOmponent</button>

When we click on the button, the modal dialog appears. Now, to understand the power of Knockout working with this dialog, head back over to your browser tools console and enter the following command into the prompt:

    vm.komModal.body("Wow, live data binding!")

The body of the modal updates immediately. Who knows what type of creative modal boxes we can build using this type of technology? It brings us closer to creating what we can imagine, and perhaps even some of the wild things our customers imagine. While that may not be your main motivation for using Knockout, it is nice to have a few less roadblocks when we want to be creative. It is also nice to be able to package and reuse these solutions across a site without copying and pasting, and without searching back through the code when the client requests a change. Again, feel free to look at the library file to see how we made these components work. They are not extremely complicated once you get the basics of using Knockout and its components, and if you are looking to build components of your own, they will give you some insight into how to do things as you move your skills to the next level.

Understanding the AMD approach

We are going to look into the concept of what makes an AMD-style website. The point of this approach is to pull content on demand. The content, or modules as they are called here, does not need to be loaded in a particular order; if some pieces depend on other pieces, those dependencies are, of course, managed. We will be using the RequireJS library for this part of our code. We will create four files in this example, as follows:

- amd.html
- amd.config.js
- pick.js
- pick.html

In our AMD page, we are going to create a configuration file for our RequireJS functionality. That will be the amd.config.js file mentioned in the list. We will start by creating this file with the following code:

    // require.js settings
    var require = {
        baseUrl: ".",
        paths: {
            "bootstrap": "/share/js/bootstrap.min",
            "jquery":    "/share/js/jquery.min",
            "knockout":  "/share/js/knockout",
            "text":      "/share/js/text"
        },
        shim: {
            "bootstrap": { deps: ["jquery"] },
            "knockout":  { deps: ["jquery"] }
        }
    };

Here, we are creating alias names and setting the paths these names point to for this page. The file could, of course, serve more than one page, but in this case it has been created specifically for a single page. As you may have noticed, RequireJS configuration does not need the .js extension on file names.

Now, we will look at our amd.html page, where we pull things together. We are again using the standard page we have used throughout this article, which you will notice if you preview the finished example of the code. There are a couple of differences, though, because the JavaScript files do not all need to be loaded at the start; RequireJS handles this for us. We are not saying this is the standard practice for AMD, but it serves as an introduction to the concepts.
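Before wiring up the page, it may help to see what this configuration buys us. The following snippet is illustrative only and is not one of the four files above: with the paths and shim settings shown, requiring the bootstrap alias should make RequireJS fetch jQuery first, because the shim declares it as a dependency of the non-AMD Bootstrap script.

    // Hypothetical usage sketch. With the amd.config.js settings above, the
    // "bootstrap" alias maps to /share/js/bootstrap.min.js, and the shim entry
    // tells RequireJS to load "jquery" (/share/js/jquery.min.js) before it,
    // since the minified Bootstrap build is not an AMD module and expects a
    // global jQuery to already exist.
    require(["bootstrap"], function () {
        // By the time this callback runs, both jQuery and Bootstrap are loaded.
        console.log("Bootstrap ready, jQuery version:", jQuery.fn.jquery);
    });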
We will need to include the following three script files in this example:

    <script src="/share/js/knockout.js"></script>
    <script src="amd.config.js"></script>
    <script src="/share/js/require.js"></script>

Notice that the configuration settings need to be set before loading the require.js library. With that set, we can create the code to wire up the Knockout binding on the page. This goes in a script at the bottom of the amd.html page:

    <script>
    ko.components.register('pick', {
        viewModel: { require: 'pick' },
        template: { require: 'text!pick.html' }
    });
    viewModel = function() {
        this.choice = ko.observable();
    };
    vm = new viewModel();
    ko.applyBindings(vm);
    </script>

Most of this code should look very familiar. The difference is that external files are being used to supply the viewModel and template of the pick component. The require setting smartly knows to include the pick.js file for the pick setting; it does need to be passed as a string, of course. When we include the template, you will see that we use text! in front of the file we are including, and we also declare the extension on the file name in this case. The text plugin needs to know exactly where the text is coming from, and you will see that in our amd.config.js file we created an alias for the text plugin.

Now, we will create the pick.js file and place it in the same directory as the amd.html file. It could live in another directory; you would just set that path in the component declaration along with the file name. Here is the code for this part of our AMD component:

    define(['knockout'], function(ko) {
        function LikeWidgetViewModel(params) {
            this.chosenValue = params.value;
            this.land = Math.round(Math.random()) ? 'heads' : 'tails';
        }
        LikeWidgetViewModel.prototype.heads = function() {
            this.chosenValue('heads');
        };
        LikeWidgetViewModel.prototype.tails = function() {
            this.chosenValue('tails');
        };
        return LikeWidgetViewModel;
    });

Notice that our code starts with the define method. This is our AMD functionality in play: it says that before we try to execute this section of code, we need to make sure the Knockout library is loaded. This allows us to do on-demand loading of code as needed. The code inside the module is the same as in the other examples we have looked at, with one exception: we return the ViewModel constructor at the end, as you can see in the preceding code. We used the prototype shorthand to set the heads and tails handlers in this example.

Now, we will look at our template file, pick.html. This is the code we will have in this file:

    <div class="like-or-dislike" data-bind="visible: !chosenValue()">
        <button data-bind="click: heads">Heads</button>
        <button data-bind="click: tails">Tails</button>
    </div>
    <div class="result" data-bind="visible: chosenValue">
        You picked <strong data-bind="text: chosenValue"></strong>.
        The correct value was <strong data-bind="text: land"></strong>.
    </div>

There is nothing special here beyond the code needed to make this example work. The goal is to allow a custom tag to offer heads-or-tails options on the page. We also pass in a bound variable from the page ViewModel, and we will be passing it into three identical tags. The tags are actually going to load their content instantly in this example. The goal is to get familiar with how the code works; we will take it to full practice at the end of the article.
Right now, we will put this code in the View segment of our amd.html page:

    <h2>One Choice</h2>
    <pick params="value: choice"></pick><br>
    <pick params="value: choice"></pick><br>
    <pick params="value: choice"></pick>

Notice that we have included the pick tag three times. While we are passing in the bound choice item from the ViewModel, each tag will randomly choose heads or tails as its own answer. Since we passed the same bound item into each of the three tags, when we click on Heads or Tails in any set, that value immediately passes out to the ViewModel, which in turn immediately passes the value back into the other two tag sets. They are all wired together through the ViewModel binding being the same variable. Each tag then reports whether your pick matched its own randomly chosen value, and the results change pretty much every time we refresh the page. Now, we are ready to do something extra special by combining our AMD approach with Knockout modules.

Summary

This article has shown the awesome power of templates working together with ViewModels within Knockout components. You should now have a solid foundation to do more with less code than ever before, and you should know how to mingle your jQuery code with Knockout code side by side. To review: in this article, we learned what Knockout components are. We learned how to use components to create custom HTML elements that are interactive and powerful. We learned how to enhance custom elements so that variables can be managed using the more common attributes approach. And we learned how to use an AMD-style approach to coding with Knockout, loading resources on demand and integrating jQuery to enhance Knockout-based solutions. What's next? That is up to you. One thing is for sure: the possibilities are broader using Knockout than they were before. Happy coding, and congratulations on completing your study of KnockoutJS!


Model-View-ViewModel

Packt
02 Mar 2015
24 min read
In this article, by Einar Ingebrigtsen, author of the book SignalR Blueprints, we will focus on a different programming model for client development: Model-View-ViewModel (MVVM). It will reiterate what you have already learned about SignalR, but you will also start to see a recurring theme in how you should architect decoupled software that adheres to the SOLID principles. It will also show the benefit of thinking in Single Page Application (SPA) terms, and how SignalR fits well with this idea.

The goal - an imagined dashboard

A counterpart to any application is often monitoring its health: is it running, and are there any failures? Getting this information in real time when a failure occurs is important, and gathering statistics from it is interesting as well. From a SignalR perspective, we will still use the hub abstraction to do pretty much what we have been doing, but the goal is to give you ideas of what we can use SignalR for and how. Another goal is to dive into the architectural patterns, making your code ready for larger applications. MVVM allows better separation and is very applicable for client development in general.

A question you might ask yourself is: why KnockoutJS instead of something like AngularJS? To a certain degree, it boils down to personal preference. AngularJS is described as MVW, where W stands for Whatever. I find AngularJS less focused on the things I focus on, and I also find it very verbose to get up and running. I'm not in any way an expert in AngularJS, but I have used it on a project, and I found myself writing a lot of code to make it work the way I wanted in terms of MVVM. However, I don't think it's fair to compare the two: KnockoutJS is very focused on what it's trying to solve, which is just one piece of the puzzle, while AngularJS is a full client end-to-end framework. On this note, let's jump straight to it.

Decoupling it all

MVVM is a pattern for client development that became very popular in the XAML stack, enabled by Microsoft and based on Martin Fowler's Presentation Model. Its principle is that you have a ViewModel that holds the state and exposes the behavior that can be utilized from a view. The view observes any changes to the state the ViewModel exposes, while the ViewModel remains totally unaware that there is a view. The ViewModel is decoupled, can be put in isolation, and is perfect for automated testing. Part of the state that the ViewModel typically holds is the model part, which is something it usually gets from the server, and a SignalR hub is the perfect transport for this. It boils down to recognizing the different concerns that make up the frontend and separating them all.

Back to basics

This time, we will go back in time, going down what might be considered a more purist path: using the browser elements (HTML, JavaScript, and CSS) and not relying on any server-side rendering. Clients today are powerful and very capable, and offloading the composition of what the user sees onto the client frees up server resources. You can also rely on the infrastructure of the Web for caching, with static HTML files not rendered by the server. In fact, you could put these resources on a content delivery network, making the files available as close as possible to the end user. This would result in better load times for the user.
You might have other reasons to perform server-side rendering rather than serving plain HTML; leveraging existing infrastructure or third-party tools could be among them. It boils down to what's right for you, but this particular sample will focus on things the client can do. Anyway, let's get started. Open Visual Studio and create a new project by navigating to FILE | New | Project. From the left-hand side menu of the dialog box that shows up, select Web and then ASP.NET Web Application. Enter Chapter4 in the Name textbox and select your location. Select the Empty template from the template selector, make sure you deselect the Host in the cloud option, and then click on OK.

Setting up the packages

First, we want Twitter Bootstrap. To get this, add a NuGet package reference: right-click on References in Solution Explorer, select Manage NuGet Packages, and type Bootstrap in the search dialog box. Select it and then click on Install. We want a slightly different look, so we'll use one of the many Bootstrap themes out there: add a NuGet package reference called metro-bootstrap. As jQuery is still a part of this, add a NuGet package reference to it as well. For the MVVM part, we will use something called KnockoutJS; add it through NuGet as well. Finally, add a NuGet package reference as in the previous steps, but this time type SignalR in the search dialog box and find the package called Microsoft ASP.NET SignalR.

Making any SignalR hubs available for the client

Add a file called Startup.cs to the root of the project, with a Configuration method that will expose any SignalR hubs, as follows:

    public void Configuration(IAppBuilder app)
    {
        app.MapSignalR();
    }

At the top of the Startup.cs file, above the namespace declaration but right below the using statements, add the following code:

    [assembly: OwinStartupAttribute(typeof(Chapter4.Startup))]

Knocking it out of the park

KnockoutJS is a framework that implements many of the principles found in MVVM and makes them easier to apply. We're going to use the following two features of KnockoutJS, so it's important to understand what they are and what significance they have:

- Observables: In order for a view to know when a state change occurs in a ViewModel, KnockoutJS has something called an observable for single objects or values, and an observable array for arrays.
- BindingHandlers: In the view, the counterparts that are able to recognize the observables and know how to deal with their content are known as BindingHandlers. We create binding expressions in the view that instruct the view to get its content from the properties found in the binding context. The default binding context will be the ViewModel, but there are more advanced scenarios where this changes. In fact, there is a BindingHandler called with that enables you to specify the context at any given time.

Our single page

Whether one should strive towards having an SPA is widely discussed on the Web these days. My opinion on the subject, in the interest of the user, is that we should really try to push things in this direction. Not having to post back and cause a full reload of the page and all its resources, then get back into the correct state, gives the user a better experience. Some of the arguments for performing post-backs every now and then involve fixing potential memory leaks happening in the browser.
Although that technique is sound and the result is right, it really just camouflages a problem in the system. However, as with everything, it depends on the situation. At the core of an SPA is a single page (pun intended), which is usually the index.html file sitting at the root of the project. Add a new HTML file at the root of the project by right-clicking on the Chapter4 project in Solution Explorer, navigating to Add | New Item | Web from the left-hand side menu, selecting HTML Page, and naming it index.html. Finally, click on Add.

Let's put in the things we've added dependencies to, starting with the style sheets. In the index.html file, you'll find the <head> tag; add the following code snippet under the <title></title> tag:

    <link href="Content/bootstrap.min.css" rel="stylesheet" />
    <link href="Content/metro-bootstrap.min.css" rel="stylesheet" />

Next, add the following code snippet right beneath the preceding code:

    <script type="text/javascript" src="Scripts/jquery-1.9.0.min.js"></script>
    <script type="text/javascript" src="Scripts/jquery.signalR-2.1.1.js"></script>
    <script type="text/javascript" src="signalr/hubs"></script>
    <script type="text/javascript" src="Scripts/knockout-3.2.0.js"></script>

We will also need something that helps us visualize things; Google has a free, open source charting library that we will use. We take a dependency on the JavaScript APIs from Google by adding the following script tag after the others:

    <script type="text/javascript" src="https://www.google.com/jsapi"></script>

Now, we can start filling in the view part. Inside the <body> tag, we start by putting in a header, as shown here:

    <div class="navbar navbar-default navbar-static-top bsnavbar">
        <div class="container">
            <div class="navbar-header">
                <h1>My Dashboard</h1>
            </div>
        </div>
    </div>

The server side of things

In this little dashboard, we will look at web requests, both successful and failed. We will handle this in a very naive way, without fleshing out a full mechanism for dealing with error situations. Let's start by enabling all requests, even those for static resources such as HTML files, to run through all HTTP modules. A word of warning: there are performance implications of putting all requests through the managed pipeline, so you wouldn't normally want to do this on a production system, but for this sample it will be fine to show the concepts. Open Web.config in the project and add the following code snippet within the <configuration> tag:

    <system.webServer>
        <modules runAllManagedModulesForAllRequests="true" />
    </system.webServer>

The hub

In this sample, we will have only one hub: the one responsible for reporting requests and failed requests. Add a new class called RequestStatisticsHub: right-click on the project in Solution Explorer, select Class from Add, name it RequestStatisticsHub.cs, and then click on Add. The new class should inherit from Hub. Add the following using statement at the top:

    using Microsoft.AspNet.SignalR;

We're going to keep track of the number of requests and failed requests per time slot, with a resolution of 30 seconds, in memory on the server.
Obviously, if one wants to scale across multiple servers, this is far too naive, and one should choose an out-of-process, shared key-value store that works across servers. However, for our purpose, this will be fine. Let's add a using statement at the top, as shown here:

    using System.Collections.Generic;

At the top of the class, add the two dictionaries that we will use to hold this information:

    static Dictionary<string, int> _requestsLog = new Dictionary<string, int>();
    static Dictionary<string, int> _failedRequestsLog = new Dictionary<string, int>();

In our client, we want to access these logs at startup, so let's add two methods to do so:

    public Dictionary<string, int> GetRequests()
    {
        return _requestsLog;
    }

    public Dictionary<string, int> GetFailedRequests()
    {
        return _failedRequestsLog;
    }

Remember the resolution of keeping track of the number of requests only per 30 seconds at a time. There is no default mechanism in the .NET Framework to do this, so we need to add a few helper methods to deal with the rounding of time. Add a class called DateTimeRounding at the root of the project, mark the class as public static, and put the following extension methods in it:

    public static DateTime RoundUp(this DateTime dt, TimeSpan d)
    {
        var delta = (d.Ticks - (dt.Ticks % d.Ticks)) % d.Ticks;
        return new DateTime(dt.Ticks + delta);
    }

    public static DateTime RoundDown(this DateTime dt, TimeSpan d)
    {
        var delta = dt.Ticks % d.Ticks;
        return new DateTime(dt.Ticks - delta);
    }

    public static DateTime RoundToNearest(this DateTime dt, TimeSpan d)
    {
        var delta = dt.Ticks % d.Ticks;
        bool roundUp = delta > d.Ticks / 2;
        return roundUp ? dt.RoundUp(d) : dt.RoundDown(d);
    }

Let's go back to the RequestStatisticsHub class and add some more functionality, now that we can deal with the rounding of time:

    static void Register(Dictionary<string, int> log, Action<dynamic, string, int> hubCallback)
    {
        var now = DateTime.Now.RoundToNearest(TimeSpan.FromSeconds(30));
        var key = now.ToString("HH:mm");

        if (log.ContainsKey(key))
            log[key] = log[key] + 1;
        else
            log[key] = 1;

        var hub = GlobalHost.ConnectionManager.GetHubContext<RequestStatisticsHub>();
        hubCallback(hub.Clients.All, key, log[key]);
    }

    public static void Request()
    {
        Register(_requestsLog, (hub, key, value) => hub.requestCountChanged(key, value));
    }

    public static void FailedRequest()
    {
        // Note: this must log to _failedRequestsLog, not _requestsLog.
        Register(_failedRequestsLog, (hub, key, value) => hub.failedRequestCountChanged(key, value));
    }

This gives us a place to call in order to report requests, and the counts get published back to any clients connected to this particular hub. Note the usage of GlobalHost and its ConnectionManager property: when we want to get a hub instance and we are not in the context of a method being called from a client, we use ConnectionManager to get it. It gives us a proxy for the hub and enables us to call methods on any connected client.

Naively dealing with requests

With all this in place, we can easily, if naively, deal with what we consider correct and failed requests. Let's add a Global.asax file: right-click on the project in Solution Explorer, select New item from Add, navigate to Web, find Global Application Class, and then click on Add.
In the new file, we want the Application_AuthenticateRequest handler to contain the following code snippet (replace the generated empty handler if one exists):

    protected void Application_AuthenticateRequest(object sender, EventArgs e)
    {
        var path = HttpContext.Current.Request.Path;
        if (path == "/") path = "index.html";

        if (path.ToLowerInvariant().IndexOf(".html") < 0) return;

        var physicalPath = HttpContext.Current.Request.MapPath(path);
        if (File.Exists(physicalPath))
        {
            RequestStatisticsHub.Request();
        }
        else
        {
            RequestStatisticsHub.FailedRequest();
        }
    }

You will also need a using System.IO; statement at the top of the file for File.Exists to resolve. Basically, with this, we are only measuring requests with .html in their path, and if the path is just "/", we assume index.html. Any file that does not exist is considered an error, typically a 404, and we register it as a failed request.

Bringing it all back to the client

With the server taken care of, we can start consuming all this in the client. We will now head down the path of creating a ViewModel and hooking everything up.

ViewModel

Let's start by adding a JavaScript file sitting next to our index.html file at the root level of the project; call it index.js. This file will represent our ViewModel. It will also be responsible for setting up KnockoutJS, so that the ViewModel is in fact activated and applied to the page. As we only have this one page in this sample, this will be fine. Let's start by hooking up the jQuery document-ready function:

    $(function() {
    });

Inside the function created here, we will enter our ViewModel definition, which starts off empty:

    var viewModel = function() {
    };

KnockoutJS has a function to apply a ViewModel to the document, meaning that the document or body will be associated with the given ViewModel instance. Right under the definition of viewModel, add the following line:

    ko.applyBindings(new viewModel());

Compiling this and running it should at the very least not give you any errors, but nothing more than a header saying My Dashboard either. So, we need to liven this up a bit. Inside the viewModel function definition, add the following code snippet:

    var self = this;
    this.requests = ko.observableArray();
    this.failedRequests = ko.observableArray();

We keep a reference to this in a variable called self; this will help us with scoping issues later on. The arrays we added are KnockoutJS observable arrays, which allow the view or any BindingHandler to observe the changes coming in. Both ko.observableArray() and ko.observable() return a new function. So, if you want to access any value held by an observable, you must unwrap it by calling it, something that might seem counterintuitive at first, because you might consider your variable to be just another property. For observableArray(), KnockoutJS adds most of the functions found on the JavaScript array type, and these can be used directly on the observable without unwrapping. If you look at a variable that is an observableArray in the console of the browser, it looks as if it is just any array. This is not really true, though; to get to the values, you have to unwrap it by adding () after accessing the variable. However, all the functions you're used to having on an array are there.

Let's add a function that knows how to handle an entry into the ViewModel; see the handleEntry function shown next.
An entry coming in is either an existing one or a new one; the key of the entry is the giveaway that decides which:

    function handleEntry(log, key, value) {
        // Note: some() is used so that returning true stops the search and is
        // reflected in result; forEach() would ignore the callback's return value.
        var result = log().some(function (entry) {
            if (entry[0] == key) {
                entry[1](value);
                return true;
            }
        });

        if (result !== true) {
            log.push([key, ko.observable(value)]);
        }
    }

Let's set up the hub and add the following code to the viewModel function:

    var hub = $.connection.requestStatisticsHub;
    var initializedCount = 0;

    hub.client.requestCountChanged = function (key, value) {
        if (initializedCount < 2) return;
        handleEntry(self.requests, key, value);
    }

    hub.client.failedRequestCountChanged = function (key, value) {
        if (initializedCount < 2) return;
        handleEntry(self.failedRequests, key, value);
    }

You might notice the initializedCount variable. Its purpose is to ignore incoming notifications until the ViewModel is completely initialized, which comes next. Add the following code snippet to the viewModel function:

    $.connection.hub.start().done(function () {
        hub.server.getRequests().done(function (requests) {
            for (var property in requests) {
                handleEntry(self.requests, property, requests[property]);
            }
            initializedCount++;
        });
        hub.server.getFailedRequests().done(function (requests) {
            for (var property in requests) {
                handleEntry(self.failedRequests, property, requests[property]);
            }
            initializedCount++;
        });
    });

We should now have enough logic in our viewModel function to get any requests already recorded and to respond to new ones coming in.

BindingHandler

The key element of KnockoutJS is its BindingHandler mechanism. In KnockoutJS, everything starts with a data-bind="" attribute on an element in the HTML view. Inside the attribute, one puts binding expressions, and BindingHandlers are the key to these. Every expression starts with the name of the handler. For instance, if you have an <input> tag and you want to get its value into a property on the ViewModel, you would use the BindingHandler called value, as in <input data-bind="value: name" />. There are a number of BindingHandlers out of the box to deal with the common scenarios (text, value, foreach, and more), and all of them are very well documented on the KnockoutJS site. For this sample, we will create our own BindingHandler; KnockoutJS is highly extensible and, among other extensibility points, allows you to do just this.

Let's add a JavaScript file called googleCharts.js at the root of the project. Inside it, add the following code:

    google.load('visualization', '1.0', { 'packages': ['corechart'] });

This tells the Google API to load the charting package. The next thing we want to do is define the BindingHandler. Any handler has the option of setting up an init function and an update function. The init function should only occur once, when the binding context is first set; if the parent binding context of the element changes, it will be called again. The update function will be called whenever there is a change in one or more of the observables that the binding expression refers to. For our sample, we will use the init function only and respond to changes manually, because we have a more involved scenario than what the default mechanism provides.
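Before building the chart handler, it may help to see the init/update contract on a toy example. This is a hedged sketch of a hypothetical handler, not part of this dashboard's code: update re-runs whenever an observable it reads changes.

    // Hypothetical "highlight" handler, for illustration only. init runs when
    // the binding is first applied; update re-runs whenever the bound
    // observable changes, because ko.unwrap() reads (and thereby subscribes
    // to) it inside the update function.
    ko.bindingHandlers.highlight = {
        init: function (element) {
            element.style.transition = "background-color 0.3s";
        },
        update: function (element, valueAccessor) {
            var on = ko.unwrap(valueAccessor());
            element.style.backgroundColor = on ? "yellow" : "";
        }
    };
    // Usage in a view: <span data-bind="highlight: isImportant">...</span>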
The update function that you can add to a BindingHandler has the exact same signature as the init function. Let's add the following code underneath the google.load call:

    ko.bindingHandlers.lineChart = {
        init: function (element, valueAccessor, allValueAccessors, viewModel, bindingContext) {
        }
    };

This is the core structure of a BindingHandler. As you can see, we've named the BindingHandler lineChart. This is the name we will use in our view later on. The first parameter represents the element that holds the binding expression, whereas the second, valueAccessor, holds a function that enables us to access the value resulting from the expression. KnockoutJS deals with the expression internally, parses it, and figures out how to expand any values, and so on. Add the following code into the init function:

    var optionsInput = valueAccessor();

    var options = {
        title: optionsInput.title,
        width: optionsInput.width || 300,
        height: optionsInput.height || 300,
        backgroundColor: 'transparent',
        animation: {
            duration: 1000,
            easing: 'out'
        }
    };

    var dataHash = {};

    var chart = new google.visualization.LineChart(element);
    var data = new google.visualization.DataTable();
    data.addColumn('string', 'x');
    data.addColumn('number', 'y');

    function addRow(row, rowIndex) {
        var value = row[1];
        if (ko.isObservable(value)) {
            value.subscribe(function (newValue) {
                data.setValue(rowIndex, 1, newValue);
                chart.draw(data, options);
            });
        }

        var actualValue = ko.unwrap(value);
        data.addRow([row[0], actualValue]);

        dataHash[row[0]] = actualValue;
    }

    optionsInput.data().forEach(addRow);

    optionsInput.data.subscribe(function (newValue) {
        newValue.forEach(function (row, rowIndex) {
            if (!dataHash.hasOwnProperty(row[0])) {
                addRow(row, rowIndex);
            }
        });

        chart.draw(data, options);
    });

    chart.draw(data, options);

As you can see, observables have a function called subscribe(), which is the same for both an observable array and a regular observable. The code adds a subscription to the array itself; if there is any change to the array, we find the change and add any new row to the chart. In addition, when we add a new row, we subscribe to any change in its value so that we can update the chart. In the ViewModel, the values were converted into observable values to accommodate this.

View

Go back to the index.html file; we need the UI for the two charts we're going to have, and we need to get both the new BindingHandler and the ViewModel loaded. Add the following script references after the last script reference already present, as shown here:

    <script type="text/javascript" src="googleCharts.js"></script>
    <script type="text/javascript" src="index.js"></script>

Inside the <body> tag, below the header, we want to add a Bootstrap container and a row to hold two metro-styled tiles that utilize our new BindingHandler.
Also, we want a footer sitting at the bottom, as shown in the following code:

    <div class="container">
        <div class="row">
            <div class="col-sm-6 col-md-4">
                <div class="thumbnail tile tile-green-sea tile-large">
                    <div data-bind="lineChart: { title: 'Web Requests', width: 300, height: 300, data: requests }"></div>
                </div>
            </div>

            <div class="col-sm-6 col-md-4">
                <div class="thumbnail tile tile-pomegranate tile-large">
                    <div data-bind="lineChart: { title: 'Failed Web Requests', width: 300, height: 300, data: failedRequests }"></div>
                </div>
            </div>
        </div>

        <hr />
        <footer class="bs-footer" role="contentinfo">
            <div class="container">
                The Dashboard
            </div>
        </footer>
    </div>

Note that data: requests and data: failedRequests are parts of the binding expressions. These will be handled and resolved by KnockoutJS internally and pointed to the observable arrays on the ViewModel. The other properties are options that go into the BindingHandler, which it forwards to the Google Charting API.

Trying it all out

Run the code (Ctrl + F5) and the two chart tiles should render. If you open a second browser and go to the same URL, you will see the charts change in real time. Waiting approximately 30 seconds and refreshing the browser should add a second point automatically and animate the chart accordingly. Typing a URL for an .html file that does not exist should have the same effect on the failed requests chart.

Summary

In this article, we had a brief encounter with MVVM as a pattern, with the sole purpose of establishing good practices for your client code. We applied it in a single page application setting, sprinkling SignalR on top to communicate from the server to any connected client.


Starting Small and Growing in a Modular Way

Packt
02 Mar 2015
27 min read
This article, written by Carlo Russo, author of the book KnockoutJS Blueprints, describes how RequireJS gives us a simplified format to require many dependencies and to avoid parameter mismatches, using the CommonJS require format. For example, another way to write the previous code is:

    define(function(require) {
        var $ = require("jquery"),
            ko = require("knockout"),
            viewModel = {};
        $(function() {
            ko.applyBindings(viewModel);
        });
    });

In this way, we skip the dependencies definition, and RequireJS adds every require('xxx') found in the function to the dependency list. This second way is better because it is cleaner and you cannot mismatch dependency names with named function arguments. For example, imagine you have a long list of dependencies; you add one or remove one, and you miss removing the matching function parameter. You now have a hard-to-find bug. In case you think that the r.js optimizer behaves differently, rest assured that it does not; you can use both forms without any concern about optimization. Just as a reminder, you cannot use this form if you want to load scripts dynamically or depending on a variable value; for example, this code will not work:

    var mod = require(someCondition ? "a" : "b");
    if (someCondition) {
        var a = require('a');
    } else {
        var a = require('a1');
    }

You can learn more about this compatibility problem at http://www.requirejs.org/docs/whyamd.html#commonjscompat, and more about this sugar syntax at http://www.requirejs.org/docs/whyamd.html#sugar. Now that you know the basic way to use RequireJS, let's look at the next concept.

Component binding handler

The component binding handler is one of the new features introduced in version 3.2 of KnockoutJS. Inside the documentation of KnockoutJS, we find the following explanation: "Components are a powerful, clean way of organizing your UI code into self-contained, reusable chunks. They can represent individual controls/widgets, or entire sections of your application." The main idea behind their inclusion was to create full-featured, reusable components with one or more points of extensibility. A component is a combination of HTML and JavaScript; there are cases where you can use just one of them, but normally you'll use both. You can find a first simple example here: http://knockoutjs.com/documentation/component-binding.html. The best way to create self-contained components is with the use of an AMD module loader, such as RequireJS; put the View Model and the template of the component inside two different files, and then you can use the component from your code really easily.

Creating the bare bones of a custom module

Writing a custom component for KnockoutJS with RequireJS is a 4-step process:

1. Creating the JavaScript file for the View Model.
2. Creating the HTML file for the template of the View.
3. Registering the component with KnockoutJS.
4. Using it inside another View.

We are going to build the bases for the Search Form component, just to move forward with our project; in any case, this is the starting code we should use for each component that we write from scratch. Let's cover all of these steps.

Creating the JavaScript file for the View Model

We start with the View Model of this component.
Create a new empty file with the name BookingOnline/app/components/search.js and put this code inside it:

    define(function(require) {
        var ko = require("knockout"),
            template = require("text!./search.html");

        function Search() {
        }

        return {
            viewModel: Search,
            template: template
        };
    });

Here, we are creating a constructor called Search that we will fill in later. We are also using the text plugin for RequireJS to load the template search.html from the current folder into the template argument. Then, we return an object with the constructor and the template, in the format KnockoutJS needs to use it as a component.

Creating the HTML file for the template of the View

In the View Model, we required a View called search.html from the same folder. At the moment, there is no boilerplate code we need to put inside the template of the View, but we must create the file; otherwise, RequireJS will break with an error. Create a new file called BookingOnline/app/components/search.html with the following content:

    <div>Hello Search</div>

Registering the component with KnockoutJS

When you use components, there are two different ways to let KnockoutJS find your component:

- Using the function ko.components.register
- Implementing a custom component loader

The first way is the easiest one: using the default component loader of KnockoutJS. To use it with our component, you should just put the following row inside the BookingOnline/app/index.js file, just before the row $(function () {:

    ko.components.register("search", { require: "components/search" });

Here, we are registering a component called search, and we are telling KnockoutJS that it will find all the information it needs using an AMD require for the path components/search (so it will load the file BookingOnline/app/components/search.js). You can find more information and a really good example of a custom component loader at: http://knockoutjs.com/documentation/component-loaders.html#example-1-a-component-loader-that-sets-up-naming-conventions.

Using it inside another View

Now, we can simply use the new component inside our View; put the following code inside our Index View (BookingOnline/index.html), before the script tag:

    <div data-bind="component: 'search'"></div>

Here, we are using the component binding handler to use the component; another commonly used way is with custom elements. We can replace the previous row with the following one:

    <search></search>

KnockoutJS will use our search component, but with WebComponent-like markup. If you want to support IE6-8, you should register the WebComponents you are going to use before the HTML parser can find them. Normally, this job is done inside the ko.components.register function call, but if you are putting your script tag at the end of the body, as we have done until now, your WebComponent will be discarded. Follow the guidelines mentioned here when you want to support IE6-8: http://knockoutjs.com/documentation/component-custom-elements.html#note-custom-elements-and-internet-explorer-6-to-8

Now, you can open your web application and you should see the text, Hello Search. We put that markup in only to check whether everything was working, so you can remove it now.

Writing the Search Form component

Now that we know how to create a component, and we have put in the base of our Search Form component, we can look at the requirements for this component. A designer will review the View later, so we need to keep it simple to avoid the need for multiple changes later.
From our analysis, we find that our competitors use these components:

Autocomplete field for the city
Calendar fields for check-in and check-out
Selection field for the number of rooms, number of adults and number of children, and age of children

This is a wireframe of what we should build (we got inspired by Trivago):

We could do everything by ourselves, but the easiest way to realize this component is with the help of a few external plugins; we are already using jQuery, so the most obvious idea is to use jQuery UI to get the Autocomplete Widget, the Date Picker Widget, and maybe even the Button Widget.

Adding the AMD version of jQuery UI to the project

Let's start by downloading the current version of jQuery UI (1.11.1); the best thing about this version is that it is one of the first versions that supports AMD natively.

After reading the documentation of jQuery UI for AMD (URL: http://learn.jquery.com/jquery-ui/environments/amd/) you may think that you can get the AMD version using the download link from the home page. However, if you try that you will get just a package with only the concatenated source; for this reason, if you want the AMD source files, you will have to go directly to GitHub or use Bower.

Download the package from https://github.com/jquery/jquery-ui/archive/1.11.1.zip and extract it.

Every time you use an external library, remember to check its compatibility support. In jQuery UI 1.11.1, as you can see in the release notes, they removed the support for IE7; so we must decide whether we want to support IE6 and 7 by adding specific workarounds inside our code, or we want to drop the support for those two browsers.

For our project, we need to copy the following folders into these destinations:

jquery-ui-1.11.1/ui -> BookingOnline/app/ui
jquery-ui-1.11.1/themes/base -> BookingOnline/css/ui

We are going to apply the widgets via JavaScript, so the only remaining step to integrate jQuery UI is the insertion of the style sheets into our application. We do this by adding the following rows to the top of our custom style sheet file (BookingOnline/css/styles.css):

@import url("ui/core.css");
@import url("ui/menu.css");
@import url("ui/autocomplete.css");
@import url("ui/button.css");
@import url("ui/datepicker.css");
@import url("ui/theme.css");

Now, we are ready to add the widgets to our web application.

You can find more information about jQuery UI and AMD at: http://learn.jquery.com/jquery-ui/environments/amd/

Making the skeleton from the wireframe

We want to give the user a really nice user experience, but as the first step we can use the wireframe we put before to create a skeleton of the Search Form.

Replace the entire content of the file BookingOnline/app/components/search.html with a form:

<form data-bind="submit: execute"></form>

Then, we add the blocks inside the form, step by step, to realize the entire wireframe:

<div>
    <input type="text" placeholder="Enter a destination" />
    <label> Check In: <input type="text" /> </label>
    <label> Check Out: <input type="text" /> </label>
    <input type="submit" data-bind="enable: isValid" />
</div>

Here, we built the first row of the wireframe; we will bind data to each field later. We bound the execute function to the submit event (submit: execute), and a validity check to the button (enable: isValid); for now we will create them empty.
Update the View Model (search.js) by adding this code inside the constructor:

this.isValid = ko.computed(function() {
    return true;
}, this);

And add this function to the Search prototype:

Search.prototype.execute = function() { };

This is because the validity of the form will depend on the status of the destination field and of the check-in and check-out dates; we will update both of them later, in the next paragraphs.

Now, we can continue with the wireframe, with the second block. Here, we should have a field to select the number of rooms, and a block for each room. Add the following markup inside the form, after the previous one, for the second row of the View (search.html):

<div>
    <fieldset>
        <legend>Rooms</legend>
        <label>
            Number of Rooms
            <select data-bind="options: rangeOfRooms,
                               value: numberOfRooms">
            </select>
        </label>
        <!-- ko foreach: rooms -->
            <fieldset>
                <legend>
                    Room <span data-bind="text: roomNumber"></span>
                </legend>
            </fieldset>
        <!-- /ko -->
    </fieldset>
</div>

In this markup we are asking the user to choose between the values found inside the array rangeOfRooms, saving the selection inside a property called numberOfRooms, and showing a frame for each room of the array rooms, with its room number, roomNumber.

When developing, the easiest way to check the status of the system is with a simple element inside a View bound to the JSON of a View Model. Put the following code inside the View (search.html):

<pre data-bind="text: ko.toJSON($data, null, 2)"></pre>

With this code, you can check the status of the system after any change, directly in the printed JSON.

You can find more information about ko.toJSON at http://knockoutjs.com/documentation/json-data.html

Update the View Model (search.js) by adding this code inside the constructor:

this.rooms = ko.observableArray([]);
this.numberOfRooms = ko.computed({
    read: function() {
        return this.rooms().length;
    },
    write: function(value) {
        var previousValue = this.rooms().length;
        if (value > previousValue) {
            for (var i = previousValue; i < value; i++) {
                this.rooms.push(new Room(i + 1));
            }
        } else {
            this.rooms().splice(value);
            this.rooms.valueHasMutated();
        }
    },
    owner: this
});

Here, we are creating the array of rooms, and a writable computed property to update the array properly. If the new value is bigger than the previous one, it adds the missing items to the array using the Room constructor; otherwise, it removes the exceeding items from the array.

To get this code working we have to create a module, Room, and require it here; update the require block in this way:

   var ko = require("knockout"),
       template = require("text!./search.html"),
       Room = require("room");

Also, add this property to the Search prototype:

Search.prototype.rangeOfRooms = ko.utils.range(1, 10);

Here, we are asking KnockoutJS for an array with the values from the given range.

ko.utils.range is a useful method to get an array of integers. Internally, it simply makes an array going from the first parameter to the second one; but if you use it inside a computed field and the parameters are observable, it re-evaluates and updates the returning array.

Now, we have to create the View Model of the Room module.
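Before doing that, here is a quick sketch of how the writable computed and ko.utils.range behave. This is meant only as a console experiment; it assumes you temporarily expose the constructor during development (for example, window.Search = Search inside the module), and the values are arbitrary:

// console experiment; assumes Search and Room modules are loaded and exposed
var search = new Search();
console.log(search.numberOfRooms()); // 0, the rooms array starts empty
search.numberOfRooms(3);             // the write callback pushes three Room objects
console.log(search.rooms().length);  // 3
search.numberOfRooms(1);             // the write callback drops the exceeding rooms
console.log(search.rooms().length);  // 1
console.log(ko.utils.range(1, 4));   // [1, 2, 3, 4], both ends included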
Create a new file BookingOnline/app/room.js with the following starting code:

define(function(require) {
    var ko = require("knockout");

    function Room(roomNumber) {
        this.roomNumber = roomNumber;
    }

    return Room;
});

Now, our web application should appear like so:

As you can see, we now have a fieldset for each room, so we can work on the template of the single room. Here, you can also see in action the previous tip about the pre element with the JSON data.

With KnockoutJS 3.2 it is harder to decide when it's better to use a normal template or a component. The rule of thumb is to identify the degree of encapsulation you want to manage: use a component when you want a self-enclosed black box, or a template if you want to manage the View Model directly.

What we want to show for each room is:

Room number
Number of adults
Number of children
Age of each child

We can update the Room View Model (room.js) by adding this code into the constructor:

this.numberOfAdults = ko.observable(2);
this.ageOfChildren = ko.observableArray([]);
this.numberOfChildren = ko.computed({
    read: function() {
        return this.ageOfChildren().length;
    },
    write: function(value) {
        var previousValue = this.ageOfChildren().length;
        if (value > previousValue) {
            for (var i = previousValue; i < value; i++) {
                this.ageOfChildren.push(ko.observable(0));
            }
        } else {
            this.ageOfChildren().splice(value);
            this.ageOfChildren.valueHasMutated();
        }
    },
    owner: this
});
this.hasChildren = ko.computed(function() {
    return this.numberOfChildren() > 0;
}, this);

We used the same logic we used before for the mapping between the rooms array and its count property, this time to maintain the array with the ages of the children. We also created a hasChildren property to know whether we have to show the box for the age of children inside the View.

We have to add, as we have done before for the Search View Model, a few properties to the Room prototype:

Room.prototype.rangeOfAdults = ko.utils.range(1, 10);
Room.prototype.rangeOfChildren = ko.utils.range(0, 10);
Room.prototype.rangeOfAge = ko.utils.range(0, 17);

These are the ranges we show inside the related select elements.

Now, as the last step, we have to put the template for the room in search.html; add this code inside the fieldset tag, after the legend tag (shown here with the external markup):

     <fieldset>
       <legend>
         Room <span data-bind="text: roomNumber"></span>
       </legend>
       <label> Number of adults
         <select data-bind="options: rangeOfAdults,
                            value: numberOfAdults"></select>
       </label>
       <label> Number of children
         <select data-bind="options: rangeOfChildren,
                            value: numberOfChildren"></select>
       </label>
       <fieldset data-bind="visible: hasChildren">
         <legend>Age of children</legend>
         <!-- ko foreach: ageOfChildren -->
           <select data-bind="options: $parent.rangeOfAge,
                              value: $rawData"></select>
         <!-- /ko -->
       </fieldset>
     </fieldset>
     <!-- /ko -->

Here, we are using the properties we have just defined. We are using rangeOfAge from $parent because inside foreach we changed context, and the property rangeOfAge belongs to the Room context.

Why did I use $rawData to bind the value of the age of the children instead of $data? The reason is that ageOfChildren is an array of observables without any container.
If you use $data, KnockoutJS will unwrap the observable, making it one-way bound; but if you use $rawData, you will skip the unwrapping and get the two-way data binding we need here. In fact, if we used the one-way data binding here, our model would not get updated at all.

If you really don't like that the fieldset for children goes to the next row when it appears, you can change the fieldset by adding a class, like this:

<fieldset class="inline" data-bind="visible: hasChildren">

Now, your application should appear as follows:

Now that we have a really nice starting form, we can update the three main fields to use the jQuery UI widgets.

Realizing an Autocomplete field for the destination

As soon as we start to write the code for this field, we face the first problem: how can we get the data from the backend? Our team told us that we don't have to care about the backend, so we speak to the backend team to find out how to get the data. After ten minutes, we get three files with the code for all the calls to the backend; all we have to do is download these files (we already got them with the Starting Package, to avoid another download), and use the function getDestinationByTerm inside the module services/rest.

Before writing the code for the field, let's think about which behavior we want for it:

When you type three or more letters, it will ask the server for the list of items
Each occurrence of the typed text inside each suggested item should be bold
When you select an item, a new button should appear to clear the selection
If the currently selected item and the text inside the field are different when the focus leaves the field, the field should be cleared
The data should be taken using the function getDestinationByTerm, inside the module services/rest

The documentation of KnockoutJS also explains how to create custom binding handlers in the context of RequireJS.

The what and why about binding handlers

All the bindings we use inside our Views are based on the default KnockoutJS binding handlers. The idea behind a binding handler is that you should put all the code that manages the DOM inside a component separate from the View Model. Other than this, a binding handler should be realized with reusability in mind, so it's always better not to hard-code application logic inside it.

The KnockoutJS documentation about the standard bindings is already really good, and you can find many explanations about their inner workings in the Appendix, Binding Handler.

When you make a custom binding handler, it is important to remember that: it is your job to clean up afterwards; you should register event handlers inside the init function; and you should use the update function to update the DOM depending on changes to the observables.
This is the standard boilerplate code when you use RequireJS:

define(function(require) {
    var ko = require("knockout"),
        $ = require("jquery");

    ko.bindingHandlers.customBindingHandler = {
        init: function(element, valueAccessor,
                       allBindingsAccessor, data, context) {
            /* Code for the initialization… */
            ko.utils.domNodeDisposal.addDisposeCallback(element,
                function() { /* Cleaning code … */ });
        },
        update: function(element, valueAccessor) {
            /* Code for the update of the DOM… */
        }
    };
});

And inside the View Model module you should require this module, as follows:

require('binding-handlers/customBindingHandler');

ko.utils.domNodeDisposal is a list of callbacks to be executed when the element is removed from the DOM; it's necessary because it's where you have to put the code to destroy the widgets, or remove the event handlers.

Binding handler for the jQuery Autocomplete widget

So, now we can write our binding handler. We will define a binding handler named autoComplete, which takes the observable in which to put the selected value. We will also define two custom bindings, without any logic, to work as placeholders for the parameters we will send to the main binding handler.

Our binding handler should:

Get the value of the autoCompleteOptions and autoCompleteEvents optional data bindings.
Apply the Autocomplete Widget to the element using the options from the previous step.
Register all the event listeners.
Register the disposal of the Widget.

We also should ensure that if the observable gets cleared, the input field gets cleared too.

So, this is the code of the binding handler to put inside BookingOnline/app/binding-handlers/autocomplete.js (I put comments between the code to make it easier to understand):

define(function(require) {
    var ko = require("knockout"),
        $ = require("jquery"),
        autocomplete = require("ui/autocomplete");

    ko.bindingHandlers.autoComplete = {
        init: function(element, valueAccessor, allBindingsAccessor, data, context) {

Here, we are giving the name autoComplete to the new binding handler, and we are also loading the Autocomplete Widget of jQuery UI:

var value = ko.utils.unwrapObservable(valueAccessor()),
    allBindings = ko.utils.unwrapObservable(allBindingsAccessor()),
    options = allBindings.autoCompleteOptions || {},
    events = allBindings.autoCompleteEvents || {},
    $element = $(element);

Then, we take the data from the binding for the main parameter and for the optional binding handlers; we also put the current element into a jQuery container:

autocomplete(options, $element);
if (options._renderItem) {
    var widget = $element.autocomplete("instance");
    widget._renderItem = options._renderItem;
}
for (var event in events) {
    ko.utils.registerEventHandler(element, event, events[event]);
}

Now we can apply the Autocomplete Widget to the field.

If you are wondering why we used ko.utils.registerEventHandler here, the answer is: to show you this function. If you look at the source, you can see that under the hood it uses $.bind if jQuery is registered; so in our case we could simply use $.bind or $.on without any problem. But I wanted to show you this function because sometimes you use KnockoutJS without jQuery, and you can use it to support event handling in every supported browser.
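As a quick standalone illustration of this helper, the following registers a plain DOM event in a browser-independent way; the element and the handler here are invented for the example, not part of the project:

// with jQuery loaded, KnockoutJS delegates to jQuery's own event binding;
// without it, KnockoutJS falls back to the native mechanisms
var field = document.querySelector("input");
ko.utils.registerEventHandler(field, "focus", function() {
    console.log("the field got the focus");
});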
The source code of the function _renderItem is (looking at the file ui/autocomplete.js):

_renderItem: function( ul, item ) {
    return $( "<li>" ).text( item.label ).appendTo( ul );
},

As you can see, for security reasons, it uses the text function to avoid any possible code injection. It is important to know that you should validate the data every time you get it from an external source and put it in the page. In this case, the source of the data is already secured (because we manage it), so we override the normal behavior to also show the HTML tags for the bold part of the text.

In the last three rows, we loop over the events object and register each handler. The standard way to register for events is with the event binding handler. The only reason you should use a custom helper is to give the developer of the View a way to register events more than once.

Then, we add the disposal code to the init function:

// handle disposal
ko.utils.domNodeDisposal.addDisposeCallback(element, function() {
    $element.autocomplete("destroy");
});

Here, we use the destroy function of the widget. It's really important to clean up after the use of any jQuery UI widget or you'll create a really bad memory leak; it's not a big problem with simple applications, but it will be a really big problem if you build an SPA.

Now, we can add the update function:

    },
    update: function(element, valueAccessor) {
        var value = valueAccessor(),
            $element = $(element),
            data = value();
        if (!data)
            $element.val("");
    }
};
});

Here, we read the value of the observable, and clean the field if the observable is empty.

The update function is executed as a computed observable, so we must be sure that we subscribe to the observables required inside it. So, pay attention if you put conditional code before the subscription, because your update function might not be called anymore.

Now that the binding is ready, we should require it inside our form; update the View search.html by modifying the following row:

   <input type="text" placeholder="Enter a destination" />

Into this:

   <input type="text" placeholder="Enter a destination"
          data-bind="autoComplete: destination,
                     autoCompleteEvents: destination.events,
                     autoCompleteOptions: destination.options" />

If you try the application now you will not see any error; the reason is that KnockoutJS ignores any data binding not registered inside the ko.bindingHandlers object, and we didn't require the autocomplete binding handler module. So, the last step to get everything working is the update of the View Model of the component; add these rows at the top of search.js, with the other require(…) rows:

     Room = require("room"),
     rest = require("services/rest");
require("binding-handlers/autocomplete");

We need a reference to our new binding handler, and a reference to the rest object to use it as the source of data.
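Before moving on, the subscription caveat mentioned above deserves a tiny illustration; this is a sketch of the trap, with an invented binding name:

ko.bindingHandlers.badExample = {
    update: function(element, valueAccessor) {
        var value = valueAccessor();
        if (element.className === "skip-me") {
            // returning before reading the observable means KnockoutJS
            // never subscribes to it, so this update may never run again
            return;
        }
        // reading value() here is what creates the subscription
        element.textContent = value();
    }
};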
Now, we must declare the properties we used inside our data binding; add all these properties to the constructor as shown in the following code:

this.destination = ko.observable();
this.destination.options = {
    minLength: 3,
    source: rest.getDestinationByTerm,
    select: function(event, data) {
        this.destination(data.item);
    }.bind(this),
    _renderItem: function(ul, item) {
        return $("<li>").append(item.label).appendTo(ul);
    }
};
this.destination.events = {
    blur: function(event) {
        if (this.destination() && (event.currentTarget.value !==
                                   this.destination().value)) {
            this.destination(undefined);
        }
    }.bind(this)
};

Here, we are defining the container (destination) for the data selected inside the field, an object (destination.options) with any property we want to pass to the Autocomplete Widget (you can check all the documentation at http://api.jqueryui.com/autocomplete/), and an object (destination.events) with any event we want to apply to the field. Here, we are clearing the field if the text inside the field and the content of the saved data (inside destination) are different.

Have you noticed .bind(this) in the previous code? You can check by yourself that the value of this inside these functions is the input field. As you can see, in our code we put references to the destination property of this, so we have to update the context to be the object itself; the easiest way to do this is with a simple call to the bind function.

Summary

In this article, we have seen some functionalities of KnockoutJS (core). The application we realized was simple enough, but we used it to learn better how to use components and custom binding handlers. If you think we put too much code into such a small project, try to think about the differences you have seen between the first and the second component: the more component and binding handler code you write, the less you will have to write in the future. The most important point about components and custom binding handlers is that you have to realize them looking at future reuse; the more good code you write, the better it will be for you later.

The core point of this article was AMD and RequireJS: how to use them inside a KnockoutJS project, and why you should do it.

Resources for Article:

Further resources on this subject:

Components [article]
Web Application Testing [article]
Top features of KnockoutJS [article]
Building a Color Picker with Hex RGB Conversion

Packt
02 Mar 2015
18 min read
In this article by Vijay Joshi, author of the book Mastering jQuery UI, we are going to create a color selector, or color picker, that will allow the users to change the text and background color of a page using the slider widget. We will also use the spinner widget to represent individual colors. Any change in colors using the slider will update the spinner and vice versa. The hex value of both text and background colors will also be displayed dynamically on the page.

(For more resources related to this topic, see here.)

This is how our page will look after we have finished building it:

Setting up the folder structure

To set up the folder structure, follow this simple procedure:

Create a folder named Article inside the MasteringjQueryUI folder.
Directly inside this folder, create an HTML file and name it index.html.
Copy the js and css folders into the Article folder as well.
Now go inside the js folder and create a JavaScript file named colorpicker.js.

With the folder setup complete, let's start to build the project.

Writing markup for the page

The index.html page will consist of two sections. The first section will be a text block with some text written inside it, and the second section will have our color picker controls. We will create separate controls for the text color and the background color.

Inside the index.html file, write the following HTML code to build the page skeleton:

<html>
<head>
    <link rel="stylesheet" href="css/ui-lightness/jquery-ui-1.10.4.custom.min.css">
</head>
<body>
<div class="container">
    <div class="ui-state-highlight" id="textBlock">
        <p>
            Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
        </p>
        <p>
            Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
        </p>
        <p>
            Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
        </p>
    </div>
    <div class="clear">&nbsp;</div>
    <ul class="controlsContainer">
        <li class="left">
            <div id="txtRed" class="red slider" data-spinner="sptxtRed" data-type="text"></div><input type="text" value="0" id="sptxtRed" data-slider="txtRed" readonly="readonly" />
            <div id="txtGreen" class="green slider" data-spinner="sptxtGreen" data-type="text"></div><input type="text" value="0" id="sptxtGreen" data-slider="txtGreen" readonly="readonly" />
            <div id="txtBlue" class="blue slider" data-spinner="sptxtBlue" data-type="text"></div><input type="text" value="0" id="sptxtBlue" data-slider="txtBlue" readonly="readonly" />
            <div class="clear">&nbsp;</div>
            Text Color : <span>#000000</span>
        </li>
        <li class="right">
            <div id="bgRed" class="red slider" data-spinner="spBgRed" data-type="bg"></div><input type="text" value="255" id="spBgRed" data-slider="bgRed" readonly="readonly" />
            <div id="bgGreen" class="green slider" data-spinner="spBgGreen" data-type="bg"></div><input type="text" value="255" id="spBgGreen" data-slider="bgGreen" readonly="readonly" />
            <div id="bgBlue" class="blue slider" data-spinner="spBgBlue" data-type="bg"></div><input type="text" value="255" id="spBgBlue" data-slider="bgBlue" readonly="readonly" />
            <div class="clear">&nbsp;</div>
            Background Color : <span>#ffffff</span>
        </li>
    </ul>
</div>
<script src="js/jquery-1.10.2.js"></script>
<script src="js/jquery-ui-1.10.4.custom.min.js"></script>
<script src="js/colorpicker.js"></script>
</body>
</html>

We started by including the jQuery UI CSS file inside the head section. Proceeding to the body section, we created a div with the container class, which will act as the parent div for all the page elements. Inside this div, we created another div with id value textBlock and a ui-state-highlight class. We then put some text content inside this div. For this example, we have made three paragraph elements, each having some random text inside it.

After div#textBlock, there is an unordered list with the controlsContainer class. This ul element has two list items inside it. The first list item has the CSS class left applied to it and the second has the CSS class right applied to it.

Inside li.left, we created three div elements. Each of these three div elements will be converted to a jQuery slider and will represent the red (R), green (G), and blue (B) color code, respectively. Next to each of these divs is an input element where the current color code will be displayed. This input will be converted to a spinner as well.

Let's look at the first slider div and the input element next to it. The div has id txtRed and two CSS classes, red and slider, applied to it. The red class will be used to style the slider and the slider class will be used in our colorpicker.js file. Note that this div also has two data attributes attached to it: the first is data-spinner, whose value is the id of the input element next to the slider div, which we have provided as sptxtRed; the second attribute is data-type, whose value is text. The purpose of the data-type attribute is to let us know whether this slider will be used for changing the text color or the background color.

Moving on to the input element next to the slider now, we have set its id as sptxtRed, which matches the value of the data-spinner attribute on the slider div. It has another attribute named data-slider, which contains the id of the slider it is related to. Hence, its value is txtRed.

Similarly, all the slider elements have been created inside div.left and each slider has an input next to it.
The data-type attribute will have the text value for all sliders inside div.left. All input elements have also been assigned a value of 0 as the initial text color will be black. The same pattern that has been followed for elements inside div.left is also followed for elements inside div.right. The only difference is that the data-type value will be bg for slider divs. For all input elements, a value of 255 is set as the background color is white in the beginning. In this manner, all the six sliders and the six input elements have been defined. Note that each element has a unique ID. Finally, there is a span element inside both div.left and div.right. The hex color code will be displayed inside it. We have placed #000000 as the default value for the text color inside the span for the text color and #ffffff as the default value for the background color inside the span for background color. Lastly, we have included the jQuery source file, the jQuery UI source file, and the colorpicker.js file. With the markup ready, we can now write the properties for the CSS classes that we used here. Styling the content To make the page presentable and structured, we need to add CSS properties for different elements. We will do this inside the head section. Go to the head section in the index.html file and write these CSS properties for different elements: <style type="text/css">   body{     color:#025c7f;     font-family:Georgia,arial,verdana;     width:700px;     margin:0 auto;   }   .container{     margin:0 auto;     font-size:14px;     position:relative;     width:700px;     text-align:justify;    } #textBlock{     color:#000000;     background-color: #ffffff;   }   .ui-state-highlight{     padding: 10px;     background: none;   }   .controlsContainer{       border: 1px solid;       margin: 0;       padding: 0;       width: 100%;       float: left;   }   .controlsContainer li{       display: inline-block;       float: left;       padding: 0 0 0 50px;       width: 299px;   }   .controlsContainer div.ui-slider{       margin: 15px 0 0;       width: 200px;       float:left;   }   .left{     border-right: 1px solid;   }   .clear{     clear: both;   }     .red .ui-slider-range{ background: #ff0000; }   .green .ui-slider-range{ background: #00ff00; }   .blue .ui-slider-range{ background: #0000ff; }     .ui-spinner{       height: 20px;       line-height: 1px;       margin: 11px 0 0 15px;     }   input[type=text]{     margin-top: 0;     width: 30px;   } </style> First, we defined some general rules for page body and div .container. Then, we defined the initial text color and background color for the div with id textBlock. Next, we defined the CSS properties for the unordered list ul .controlsContainer and its list items. We have provided some padding and width to each list item. We have also specified the width and other properties for the slider as well. Since the class ui-slider is added by jQuery UI to a slider element after it is initialized, we have added our properties in the .controlsContainer div .ui-slider rule. To make the sliders attractive, we then defined the background colors for each of the slider bars by defining color codes for red, green, and blue classes. Lastly, CSS rules have been defined for the spinner and the input box. We can now check our progress by opening the index.html page in our browser. Loading it will display a page that resembles the following screenshot: It is obvious that sliders and spinners will not be displayed here. 
This is because we have not written the JavaScript code required to initialize those widgets. Our next section will take care of them. Implementing the color picker In order to implement the required functionality, we first need to initialize the sliders and spinners. Whenever a slider is changed, we need to update its corresponding spinner as well, and conversely if someone changes the value of the spinner, we need to update the slider to the correct value. In case any of the value changes, we will then recalculate the current color and update the text or background color depending on the context. Defining the object structure We will organize our code using the object literal. We will define an init method, which will be the entry point. All event handlers will also be applied inside this method. To begin with, go to the js folder and open the colorpicker.js file for editing. In this file, write the code that will define the object structure and a call to it: var colorPicker = {   init : function ()   {       },   setColor : function(slider, value)   {   },   getHexColor : function(sliderType)   {   },   convertToHex : function (val)   {   } }   $(function() {   colorPicker.init(); }); An object named colorPicker has been defined with four methods. Let's see what all these methods will do: init: This method will be the entry point where we will initialize all components and add any event handlers that are required. setColor: This method will be the main method that will take care of updating the text and background colors. It will also update the value of the spinner whenever the slider moves. This method has two parameters; the slider that was moved and its current value. getHexColor: This method will be called from within setColor and it will return the hex code based on the RGB values in the spinners. It takes a sliderType parameter based on which we will decide which color has to be changed; that is, text color or background color. The actual hex code will be calculated by the next method. convertToHex: This method will convert an RGB value for color into its corresponding hex value and return it to get a HexColor method. This was an overview of the methods we are going to use. Now we will implement these methods one by one, and you will understand them in detail. After the object definition, there is the jQuery's $(document).ready() event handler that will call the init method of our object. The init method In the init method, we will initialize the sliders and the spinners and set the default values for them as well. Write the following code for the init method in the colorpicker.js file:   init : function () {   var t = this;   $( ".slider" ).slider(   {     range: "min",     max: 255,     slide : function (event, ui)     {       t.setColor($(this), ui.value);     },     change : function (event, ui)     {       t.setColor($(this), ui.value);     }   });     $('input').spinner(   {     min :0,     max : 255,     spin : function (event, ui)     {       var sliderRef = $(this).data('slider');       $('#' + sliderRef).slider("value", ui.value);     }   });       $( "#txtRed, #txtGreen, #txtBlue" ).slider('value', 0);   $( "#bgRed, #bgGreen, #bgBlue" ).slider('value', 255); } In the first line, we stored the current scope value, this, in a local variable named t. Next, we will initialize the sliders. Since we have used the CSS class slider on each slider, we can simply use the .slider selector to select all of them. 
During initialization, we provide four options for sliders: range, max, slide, and change. Note the value for max, which has been set to 255. Since the value for R, G, or B can be only between 0 and 255, we have set max as 255. We do not need to specify min as it is 0 by default. The slide method has also been defined, which is invoked every time the slider handle moves. The call back for slide is calling the setColor method with an instance of the current slider and the value of the current slider. The setColor method will be explained in the next section. Besides slide, the change method is also defined, which also calls the setColor method with an instance of the current slider and its value. We use both the slide and change methods. This is because a change is called once the user has stopped sliding the slider handle and the slider value has changed. Contrary to this, the slide method is called each time the user drags the slider handle. Since we want to change colors while sliding as well, we have defined the slide as well as change methods. It is time to initialize the spinners now. The spinner widget is initialized with three properties. These are min and max, and the spin. min and max method has been set to 0 and 255, respectively. Every time the up/down button on the spinner is clicked or the up/down arrow key is used, the spin method will be called. Inside this method, $(this) refers to the current spinner. We find our related slider to this spinner by reading the data-slider attribute of this spinner. Once we get the exact slider, we set its value using the value method on the slider widget. Note that calling the value method will invoke the change method of the slider as well. This is the primary reason we have defined a callback for the change event while initializing the sliders. Lastly, we will set the default values for the sliders. For sliders inside div.left, we have set the value as 0 and for sliders inside div.right, the value is set to 255. You can now check the page on your browser. You will find that the slider and the spinner elements are initialized now, with the values we specified: You can also see that changing the spinner value using either the mouse or the keyboard will update the value of the slider as well. However, changing the slider value will not update the spinner. We will handle this in the next section where we will change colors as well. Changing colors and updating the spinner The setColor method is called each time the slider or the spinner value changes. We will now define this method to change the color based on whether the slider's or spinner's value was changed. Go to the setColor method declaration and write the following code: setColor : function(slider, value) {   var t = this;   var spinnerRef = slider.data('spinner');   $('#' + spinnerRef).spinner("value", value);     var sliderType = slider.data('type')     var hexColor = t.getHexColor(sliderType);   if(sliderType == 'text')   {       $('#textBlock').css({'color' : hexColor});       $('.left span:last').text(hexColor);                  }   else   {       $('#textBlock').css({'background-color' : hexColor});       $('.right span:last').text(hexColor);                  } } In the preceding code, we receive the current slider and its value as a parameter. First we get the related spinner to this slider using the data attribute spinner. Then we set the value of the spinner to the current value of the slider. 
Now we find out the type of the slider for which setColor is being called and store it in the sliderType variable. The value of sliderType will either be text, in the case of sliders inside div.left, or bg, in the case of sliders inside div.right.

In the next line, we call the getHexColor method and pass the sliderType variable as its argument. The getHexColor method will return the hex color code for the selected color.

Next, based on the sliderType value, we set the color of div#textBlock. If sliderType is text, we set the color CSS property of div#textBlock and display the selected hex code in the span inside div.left. If the sliderType value is bg, we set the background color of div#textBlock and display the hex code for the background color in the span inside div.right.

The getHexColor method

In the preceding section, we called the getHexColor method with the sliderType argument. Let's define it first, and then we will go through it in detail. Write the following code to define the getHexColor method:

getHexColor : function(sliderType)
{
    var t = this;
    var allInputs;
    var hexCode = '#';
    if(sliderType == 'text')
    {
        //text color
        allInputs = $('.left').find('input[type=text]');
    }
    else
    {
        //background color
        allInputs = $('.right').find('input[type=text]');
    }
    allInputs.each(function (index, element) {
        hexCode += t.convertToHex($(element).val());
    });
    return hexCode;
}

The local variable t stores this to point to the current scope. Another variable, allInputs, is declared, and lastly a variable to store the hex code is declared, whose value is set to # initially. Next comes the if condition, which checks the value of the parameter sliderType. If the value of sliderType is text, it means we need to get all the spinner values to change the text color. Hence, we use jQuery's find selector to retrieve all input boxes inside div.left. If the value of sliderType is bg, it means we need to change the background color. Therefore, the else block will be executed and all input boxes inside div.right will be retrieved.

To convert the color to hex, the individual values for red, green, and blue have to be converted to hex and then concatenated to get the full color code. Therefore, we iterate over the inputs using the .each method. Another method, convertToHex, is called, which converts the value of a single input to hex. Inside the each method, we keep concatenating the hex values of the R, G, and B components to the variable hexCode. Once all iterations are done, we return the hexCode to the parent function, where it is used.

Converting to hex

convertToHex is a small method that accepts a value and converts it to the hex equivalent. Here is the definition of the convertToHex method:

convertToHex : function (val)
{
    var x = parseInt(val, 10).toString(16);
    return x.length == 1 ? "0" + x : x;
}

Inside the method, we first convert the received value to an integer using the parseInt method, and then use JavaScript's toString method to convert it to hex, which has base 16. In the next line, we check the length of the converted hex value. Since we want the 6-character hash notation for the color (such as #ff00ff), we need two characters each for red, green, and blue. Hence, we check the length of the created hex value. If it is only one character, we prepend a 0 to make it two characters. The hex value is then returned to the parent function. With this, our implementation is complete and we can check it on a browser.
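If you want to sanity-check the conversion logic from the browser console before playing with the UI, a quick test like the following works because colorPicker is a global object; the values here are arbitrary:

// arbitrary values, just to verify the zero-padding
console.log(colorPicker.convertToHex("255")); // "ff"
console.log(colorPicker.convertToHex("5"));   // "05"
console.log(colorPicker.convertToHex("0"));   // "00"
// a full color code: R=255, G=0, B=128 gives "#ff0080"
console.log("#" + ["255", "0", "128"].map(colorPicker.convertToHex).join(""));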
Load the page in your browser and play with the sliders and spinners. You will see the text or background color changing based on their values:

You will also see the hex code displayed below the sliders. Also note that changing the sliders will change the value of the corresponding spinner and vice versa.

Improving the Colorpicker

This was a very basic tool that we built. You can add many more features to it and enhance its functionality. Here are some ideas to get you started:

Convert it into a widget where all the required DOM for sliders and spinners is created dynamically
Instead of two sliders, incorporate the text and background changing ability into a single slider with two handles, but keep two spinners as usual

Summary

In this article, we created a basic color picker/changer using sliders and spinners. You can use it to view and change the colors of your pages dynamically.

Resources for Article:

Further resources on this subject:

Testing UI Using WebDriverJS [article]
Important Aspects of AngularJS UI Development [article]
Kendo UI DataViz Advanced Charting [article]

Applications of WebRTC

Packt
27 Feb 2015
20 min read
This article is by Andrii Sergiienko, the author of the book WebRTC Cookbook. WebRTC is a relatively new and revolutionary technology that opens new horizons in the area of interactive applications and services. Most of the popular web browsers support it natively (such as Chrome and Firefox) or via extensions (such as Safari). Mobile platforms such as Android and iOS allow you to develop native WebRTC applications.

In this article, we will cover the following recipes:

Creating a multiuser conference using WebRTCO
Taking a screenshot using WebRTC
Compiling and running a demo for Android

(For more resources related to this topic, see here.)

Creating a multiuser conference using WebRTCO

In this recipe, we will create a simple application that supports a multiuser videoconference. We will do it using WebRTCO—an open source JavaScript framework for developing WebRTC applications.

Getting ready

For this recipe, you should have a web server installed and configured. The application we will create can work while running on the local filesystem, but it is more convenient to use it via the web server.

To create the application, we will use the signaling server located on the framework's homepage. The framework is open source, so you can download the signaling server from GitHub and install it locally on your machine. GitHub's page for the project can be found at https://github.com/Oslikas/WebRTCO.

How to do it…

The following recipe is built on the framework's infrastructure. We will use the framework's signaling server. What we need to do is include the framework's code and perform some initialization:

Create an HTML file and add the common HTML head:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">

Add some style definitions to make the web page look nicer:

    <style type="text/css">
        video {
            width: 384px;
            height: 288px;
            border: 1px solid black;
            text-align: center;
        }
        .container {
            width: 780px;
            margin: 0 auto;
        }
    </style>

Include the framework in your project:

<script type="text/javascript" src="https://cdn.oslikas.com/js/WebRTCO-1.0.0-beta-min.js" charset="utf-8"></script>
</head>

Define the onLoad function—it will be called after the web page is loaded. In this function, we will do some preliminary initialization work:

<body onload="onLoad();">

Define the HTML container where the local video will be placed:

<div class="container">
    <video id="localVideo"></video>
</div>

Define a place where the remote video will be added. Note that we don't create HTML video objects, and we just define a separate div.
Further, video objects will be created and added to the page by the framework automatically:

<div class="container" id="remoteVideos"></div>
<div class="container">

Create the controls for the chat area:

<div id="chat_area" style="width:100%; height:250px; overflow: auto; margin:0 auto 0 auto; border:1px solid rgb(200,200,200); background: rgb(250,250,250);"></div>
</div>
<div class="container" id="div_chat_input">
<input type="text" class="search-query" placeholder="chat here" name="msgline" id="chat_input">
<input type="submit" class="btn" id="chat_submit_btn" onclick="sendChatTxt();"/>
</div>

Initialize a few variables:

<script type="text/javascript">
    var videoCount = 0;
    var webrtco = null;
    var parent = document.getElementById('remoteVideos');
    var chatArea = document.getElementById("chat_area");
    var chatColorLocal = "#468847";
    var chatColorRemote = "#3a87ad";

Define a function that will be called by the framework when a new remote peer is connected. This function creates a new video object and puts it on the page:

    function getRemoteVideo(remPid) {
        var video = document.createElement('video');
        var id = 'remoteVideo_' + remPid;
        video.setAttribute('id', id);
        parent.appendChild(video);
        return video;
    }

Create the onLoad function. It initializes some variables and resizes the controls on the web page. Note that this is not mandatory, and we do it just to make the demo page look nicer:

    function onLoad() {
        var divChatInput = document.getElementById("div_chat_input");
        var divChatInputWidth = divChatInput.offsetWidth;
        var chatSubmitButton = document.getElementById("chat_submit_btn");
        var chatSubmitButtonWidth = chatSubmitButton.offsetWidth;
        var chatInput = document.getElementById("chat_input");
        var chatInputWidth = divChatInputWidth - chatSubmitButtonWidth - 40;
        chatInput.setAttribute("style", "width:" + chatInputWidth + "px");
        chatInput.style.width = chatInputWidth + 'px';
        var lv = document.getElementById("localVideo");

Create a new WebRTCO object and start the application. After this point, the framework will start the signaling connection, get access to the user's media, and will be ready for incoming connections from remote peers:

webrtco = new WebRTCO('wss://www.webrtcexample.com/signalling', lv, OnRoomReceived, onChatMsgReceived, getRemoteVideo, OnBye);
};

Here, the first parameter of the function is the URL of the signaling server. In this example, we used the signaling server provided by the framework. However, you can install your own signaling server and use an appropriate URL. The second parameter is the local video object. Then, we supply functions to process the received room, received chat messages, and received remote video streams. The last parameter is the function that will be called when one of the remote peers has disconnected.

The following function will be called when a remote peer has closed the connection. It will remove the video objects that became outdated:

    function OnBye(pid) {
        var video = document.getElementById("remoteVideo_" + pid);
        if (null !== video) video.remove();
    };

We also need a function that will create a URL to share with other peers in order to make them able to connect to the virtual room.
The following piece of code represents such a function:

function OnRoomReceived(room) {
    addChatTxt("Now, if somebody wants to join you, they should use this link: <a href=\"" + window.location.href + "?room=" + room + "\">" + window.location.href + "?room=" + room + "</a>", chatColorRemote);
};

The following function prints some text in the chat area. We will also use it to print the URL to share with remote peers:

    function addChatTxt(msg, msgColor) {
        var txt = "<font color=" + msgColor + ">" + getTime() + msg + "</font><br/>";
        chatArea.innerHTML = chatArea.innerHTML + txt;
        chatArea.scrollTop = chatArea.scrollHeight;
    };

The next function is a callback that is called by the framework when a peer has sent us a message. This function will print the message in the chat area:

    function onChatMsgReceived(msg) {
        addChatTxt(msg, chatColorRemote);
    };

To send messages to remote peers, we will create another function, which is represented in the following code:

    function sendChatTxt() {
        var msgline = document.getElementById("chat_input");
        var msg = msgline.value;
        addChatTxt(msg, chatColorLocal);
        msgline.value = '';
        webrtco.API_sendPutChatMsg(msg);
    };

We also want to print the time while printing messages; so we have a special function that formats the time data appropriately:

    function getTime() {
        var d = new Date();
        var c_h = d.getHours();
        var c_m = d.getMinutes();
        var c_s = d.getSeconds();
        if (c_h < 10) { c_h = "0" + c_h; }
        if (c_m < 10) { c_m = "0" + c_m; }
        if (c_s < 10) { c_s = "0" + c_s; }
        return c_h + ":" + c_m + ":" + c_s + ": ";
    };

We have some helper code to make our life easier. We will use it while removing obsolete video objects after remote peers are disconnected:

    Element.prototype.remove = function() {
        this.parentElement.removeChild(this);
    }
    NodeList.prototype.remove = HTMLCollection.prototype.remove = function() {
        for (var i = 0, len = this.length; i < len; i++) {
            if (this[i] && this[i].parentElement) {
                this[i].parentElement.removeChild(this[i]);
            }
        }
    }
</script>
</body>
</html>

Now, save the file and put it on the web server, where it can be accessed from a web browser.

How it works…

Open a web browser and navigate to the place where the file is located on the web server. You will see an image from the web camera and a chat area beneath it. At this stage, the application has created the WebRTCO object and initiated the signaling connection. If everything is good, you will see a URL in the chat area. Open this URL in a new browser window or on another machine—the framework will create a new video object for every new peer and will add it to the web page. The number of peers is not limited by the application. In the following screenshot, I have used three peers: two web browser windows on the same machine and a notebook as the third peer:

Taking a screenshot using WebRTC

Sometimes, it can be useful to be able to take screenshots from a video during videoconferencing. In this recipe, we will implement such a feature.

Getting ready

No specific preparation is necessary for this recipe. You can take any basic WebRTC videoconferencing application. We will add some code to the HTML and JavaScript parts of the application.

How to do it…

Follow these steps:

First of all, add image and canvas objects to the web page of the application.
We will use these objects to take screenshots and display them on the page:

<img id="localScreenshot" src="">
<canvas style="display:none;" id="localCanvas"></canvas>

Next, you have to add a button to the web page. After clicking on this button, the appropriate function will be called to take the screenshot from the local stream video:

<button onclick="btn_screenshot()" id="btn_screenshot">Make a screenshot</button>

Finally, we need to implement the screenshot taking function:

function btn_screenshot() {
    var v = document.getElementById("localVideo");
    var s = document.getElementById("localScreenshot");
    var c = document.getElementById("localCanvas");
    var ctx = c.getContext("2d");

Draw an image on the canvas object—the image will be taken from the video object:

    ctx.drawImage(v, 0, 0);

Now, take the canvas, convert its content to a DataURL string, and insert the value into the src option of the image object. As a result, the image object will show us the taken screenshot:

    s.src = c.toDataURL('image/png');
}

That is it. Save the file and open the application in a web browser. Now, when you click on the Make a screenshot button, you will see the screenshot in the appropriate image object on the web page. You can save the screenshot to the disk using right-click and the pop-up menu.

How it works…

We use the canvas object to take a frame of the video object. Then, we convert the canvas' data to a DataURL string and assign this value to the src parameter of the image object. After that, the image object displays the video frame that was captured into the canvas.

Compiling and running a demo for Android

Here, you will learn how to build a native demo WebRTC application for Android. Unfortunately, the demo application supplied by Google doesn't contain any IDE-specific project files, so you will have to deal with console scripts and commands during the build process.

Getting ready

We will need to check whether we have all the necessary libraries and packages installed on the work machine. For this recipe, I used a Linux box—Ubuntu 14.04.1 x64. So all the commands that might be specific to the OS will be relevant to Ubuntu. Nevertheless, using Linux is not mandatory and you can use Windows or Mac OS X. If you're using Linux, it should be 64-bit based. Otherwise, you most likely won't be able to compile the Android code.

Preparing the system

First of all, you need to install the necessary system packages:

sudo apt-get install git git-svn subversion g++ pkg-config gtk+-2.0 libnss3-dev libudev-dev ant gcc-multilib lib32z1 lib32stdc++6

Installing Oracle JDK

By default, Ubuntu is supplied with OpenJDK, but it is highly recommended that you install an Oracle JDK instead. Otherwise, you can face issues while building WebRTC applications for Android. Another thing that you should keep in mind is that you should probably use Oracle JDK version 1.6—other versions (in particular, 1.7 and 1.8) might not be compatible with the WebRTC code base. This will probably be fixed in the future, but in my case, only Oracle JDK 1.6 was able to build the demo successfully.

Download the Oracle JDK from its home page at http://www.oracle.com/technetwork/java/javase/downloads/index.html. In case there is no download link for such an old JDK, you can try another URL: http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-javase6-419409.html. Oracle will probably ask you to sign in or register first. You will be able to download anything from their archive.
Install the downloaded JDK:

sudo mkdir -p /usr/lib/jvm
cd /usr/lib/jvm && sudo /bin/sh ~/jdk-6u45-linux-x64.bin --noregister

Here, I assume that you downloaded the JDK package into the home directory.

Register the JDK in the system:

sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.6.0_45/bin/javac 50000
sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.6.0_45/bin/java 50000
sudo update-alternatives --config javac
sudo update-alternatives --config java
cd /usr/lib
sudo ln -s /usr/lib/jvm/jdk1.6.0_45 java-6-sun
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45/

Test the Java version:

java -version

You should see something like Java HotSpot on the screen—it means that the correct JVM is installed.

Getting the WebRTC source code

Perform the following steps to get the WebRTC source code:

Download and prepare Google Developer Tools:

mkdir -p ~/dev && cd ~/dev
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH=`pwd`/depot_tools:"$PATH"

Download the WebRTC source code:

gclient config http://webrtc.googlecode.com/svn/trunk
echo "target_os = ['android', 'unix']" >> .gclient
gclient sync

The last command can take a couple of minutes (actually, it depends on your Internet connection speed), as you will be downloading several gigabytes of source code.

Installing Android Developer Tools

To develop Android applications, you should have Android Developer Tools (ADT) installed. This SDK contains Android-specific libraries and tools that are necessary to build and develop native software for Android. Perform the following steps to install ADT:

Download ADT from its home page http://developer.android.com/sdk/index.html#download.

Unpack ADT to a folder:

cd ~/dev
unzip ~/adt-bundle-linux-x86_64-20140702.zip

Set up the ANDROID_HOME environment variable:

export ANDROID_HOME=`pwd`/adt-bundle-linux-x86_64-20140702/sdk

How to do it…

After you've prepared the environment and installed the necessary system components and packages, you can continue to build the demo application:

Prepare the Android-specific build dependencies:

cd ~/dev/trunk
source ./build/android/envsetup.sh

Configure the build scripts:

export GYP_DEFINES="$GYP_DEFINES build_with_libjingle=1 build_with_chromium=0 libjingle_java=1 OS=android"
gclient runhooks

Build the WebRTC code with the demo application:

ninja -C out/Debug -j 5 AppRTCDemo

After the last command, you can find the compiled Android package with the demo application at ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk.

Running on the Android simulator

Follow these steps to run the application on the Android simulator:

Run the Android SDK manager and install the necessary Android components:

$ANDROID_HOME/tools/android sdk

Choose at least Android 4.x—lower versions don't have WebRTC support. In the following screenshot, I've chosen Android SDK 4.4 and 4.2:

Create an Android virtual device:

cd $ANDROID_HOME/tools
./android avd &

The last command executes the Android SDK tool to create and maintain virtual devices. Create a new virtual device using this tool.
For the graphical tool, you can see an example in the following screenshot: Start the emulator using the virtual device you just created:
./emulator -avd emu1 &
This can take a couple of seconds (or even minutes), after which you should see a typical Android device home screen, like in the following screenshot: Check whether the virtual device is up and running:
cd $ANDROID_HOME/platform-tools
./adb devices
You should see something like the following:
List of devices attached
emulator-5554   device
This means that the virtual device you just created is up and running, so we can use it to test our demo application. Install the demo application on the virtual device:
./adb install ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk
You should see something like the following:
636 KB/s (2507985 bytes in 3.848s)
pkg: /data/local/tmp/AppRTCDemo-debug.apk
Success
This means that the application has been transferred to the virtual device and is ready to be started. Switch to the simulator window; you should see the demo application's icon. Launch it just as you would on a real Android device. In the following screenshot, you can see the installed demo application AppRTC: While trying to launch the application, you might see an error message with a Java runtime exception referring to GLSurfaceView. In this case, you probably need to switch to the Use Host GPU option while creating the virtual device with the Android Virtual Device (AVD) tool. Fixing a bug with GLSurfaceView Sometimes, if you're using an Android simulator with a virtual device on the ARM architecture, you can face an issue where the application says No config chosen, throws an exception, and exits. This is a known defect in the Android WebRTC code and its status can be tracked at https://code.google.com/p/android/issues/detail?id=43209. The following steps can help you fix this bug in the original demo application: Go to the ~/dev/trunk/talk/examples/android/src/org/appspot/apprtc folder and edit the AppRTCDemoActivity.java file. Look for the following line of code:
vsv = new AppRTCGLView(this, displaySize);
Right after this line, add the following line of code:
vsv.setEGLConfigChooser(8,8,8,8,16,16);
You will need to recompile the application:
cd ~/dev/trunk
ninja -C out/Debug AppRTCDemo
Now you can deploy your application and the issue will not appear anymore. Running on a physical Android device For deploying applications on an Android device, you don't need to have any developer certificates (unlike in the case of iOS devices). So if you have a physical Android device, it will probably be easier to debug and run the demo application on the device rather than on the simulator. Connect the Android device to the machine using a USB cable. On the Android device, switch the USB debug mode on. Check whether your machine sees your device:
cd $ANDROID_HOME/platform-tools
./adb devices
If the device is connected and the machine sees it, you should see the device's name in the output of the preceding command:
List of devices attached
QO4721C35410   device
Deploy the application onto the device:
cd $ANDROID_HOME/platform-tools
./adb -d install ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk
You will get the following output:
3016 KB/s (2508031 bytes in 0.812s)
pkg: /data/local/tmp/AppRTCDemo-debug.apk
Success
After that you should see the AppRTC demo application's icon on the device: After you have started the application, you should see a prompt to enter a room number.
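If the application crashes or misbehaves at any point, the device log is the quickest place to look. This is a general debugging aside rather than a step from the original recipe; the grep filter shown here is just one way to narrow the output:
cd $ANDROID_HOME/platform-tools
# stream the device log and keep only lines mentioning the demo app
./adb logcat | grep -i apprtc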
At this stage, go to http://apprtc.webrtc.org in your web browser on another machine; you will see an image from your camera. Copy the room number from the URL string and enter it in the demo application on the Android device. Your Android device and the other machine will try to establish a peer-to-peer connection, which might take some time. In the following screenshot, you can see the image on the desktop after the connection with the Android smartphone has been established: Here, the big image represents what is transmitted from the front camera of the Android smartphone; the small image depicts the image from the notebook's web camera. So both devices have established a direct connection and transmit audio and video to each other. The following screenshot represents what was seen on the Android device: There's more… The original demo doesn't contain any ready-to-use IDE project files, so you have to deal with console commands and scripts throughout the development process. You can make your life a bit easier if you use some third-party tools that simplify the building process. Such tools can be found at http://tech.pristine.io/build-android-apprtc. Summary In this article, we have learned to create a multiuser conference using WebRTCO, take a screenshot using WebRTC, and compile and run a demo for Android. Resources for Article: Further resources on this subject: WebRTC with SIP and IMS [article] Using the WebRTC Data API [article] Applying WebRTC for Education and E-Learning [article]


Introducing Web Application Development in Rails

Packt
25 Feb 2015
8 min read
In this article by Syed Fazle Rahman, author of the book Bootstrap for Rails, we will learn how to present your application in the best possible way, something that has been the most important concern for every web developer for ages. In this mobile-first generation, we are forced to go with the wind and make our application compatible with mobiles, tablets, PCs, and every possible display on Earth. Bootstrap is the one-stop solution for all woes that developers have been facing. It creates beautiful responsive designs without any extra effort and without any advanced CSS knowledge. It is a true boon for every developer. In this article, we will be focusing on how to beautify our Rails applications through the help of Bootstrap. We will create a basic Todo application with Rails. We will explore the folder structure of a Rails application and analyze which folders are important for templating a Rails Application. This will be helpful if you want to quickly revisit Rails concepts. We will also see how to create views, link them, and also style them. The styling in this article will be done traditionally through the application's default CSS files. Finally, we will discuss how we can speed up the designing process using Bootstrap. In short, we will cover the following topics: Why Bootstrap with Rails? Setting up a Todo Application in Rails Analyzing folder structure of a Rails application Creating views Styling views using CSS Challenges in traditionally styling a Rails Application (For more resources related to this topic, see here.) Why Bootstrap with Rails? Rails is one of the most popular Ruby frameworks, which is currently at its peak, both in terms of demand and technology trend. With more than 3,100 members contributing to its development, and tens of thousands of applications already built using it, Rails has created a standard for every other framework on the Web today. Rails was initially developed by David Heinemeier Hansson in 2003 to ease his own development process in Ruby. Later, he became generous enough to release Rails to the open source community. Today, it is popularly known as Ruby on Rails. Rails shortens the development life cycle by moving the focus from reinventing the wheel to innovating new features. It is based on the convention over configuration principle, which means that if you follow the Rails conventions, you would end up writing much less code than you would otherwise write. Bootstrap, on the other hand, is one of the most popular frontend development frameworks. It was initially developed at Twitter for some of its internal projects. It makes the life of a novice web developer easier by providing most of the reusable components that are already built and are ready to use. Bootstrap can be easily integrated with a Rails development environment through various methods. We can directly use the .css files provided by the framework, or can extend it through its Sass version and let Rails compile it. Sass is a CSS preprocessor that brings logic and functionality into CSS. It includes features like variables, functions, mixins, and others. Using the Sass version of Bootstrap is a recommended method in Rails. It gives various options to customize Bootstrap's default styles easily (a minimal sketch of this setup follows shortly). Bootstrap also provides various JavaScript components that can be used by those who don't have any real JavaScript knowledge. These components are required in almost every modern website being built today. Bootstrap with Rails is a deadly combination.
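As a quick illustration of that recommended Sass approach (a minimal sketch on our part; this article itself will style the application traditionally, and exact file names can vary by Rails version), the bootstrap-sass gem is the usual entry point:
# Gemfile (run bundle install after adding this)
gem 'bootstrap-sass'
# app/assets/stylesheets/application.css.scss (renamed from application.css)
@import "bootstrap";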
With this combination, you can build applications faster and invest more time thinking about functionality, rather than rewriting code. Setting up a Todo application in Rails I assume that you already have basic knowledge of Rails development. You should also have Rails and Ruby installed on your machine to start with. Let's first understand what this Todo application will do. Our application will allow us to create, update, and delete items from the Todo list. We will first analyze the folders that are created while scaffolding this application and which of them are necessary for templating the application. So, let's dip our feet into the water: First, we need to select our workspace, which can be any folder inside your system. Let's create a folder named Bootstrap_Rails_Project. Now, open the terminal and navigate to this folder. It's time to create our Todo application. Write the following command to create a Rails application named TODO:
rails new TODO
This command will execute a series of other commands that are necessary to create a Rails application. So, just wait for some time until it finishes. If you are using a newer version of Rails, then this command will also execute the bundle install command at the end. The bundle install command is used to install other dependencies. The output for the preceding command is as follows: Now, you should have a new folder inside Bootstrap_Rails_Project named TODO, which was created by the preceding code. Here is the output: Analyzing folder structure of a Rails application Let's navigate to the TODO folder to see what our application's folder structure looks like: Let me explain to you some of the important folders here: The first one is the app folder. The assets folder inside the app folder is the location to store all the static files like JavaScript, CSS, and Images. You can take a sneak peek inside them to look at the various files. The controllers folder handles various requests and responses of the browser. The helpers folder contains various helper methods both for the views and controllers. The next folder, mailers, contains all the necessary files to send an e-mail. The models folder contains files that interact with the database. Finally, we have the views folder, which contains all the .erb files that will be compiled to HTML files. So, let's start the Rails server and check out our application on the browser: Navigate to the TODO folder in the terminal and then type the following command to start a server:
rails server
You can also use the following command:
rails s
You will see that the server is deployed on port 3000. So, type the following URL to view the application: http://localhost:3000. You can also use the following URL: http://0.0.0.0:3000. If your application is properly set up, you should see the default page of Rails in the browser: Creating views We will be using Rails' scaffold method to create models, views, and other necessary files that Rails needs to make our application live. Here's the set of tasks that our application should perform: It should list out the pending items Every task should be clickable, and the details related to that item should be seen in a new view We can edit that item's description and some other details We can delete that item The task looks pretty lengthy, but any Rails developer would know how easy it is to do. We don't actually have to do anything to achieve it. We just have to pass a single scaffold command, and the rest will be taken care of.
Close the Rails server using the Ctrl + C keys and then proceed as follows: First, navigate to the project folder in the terminal. Then, pass the following command:
rails g scaffold todo title:string description:text completed:boolean
This will create a new model called todo that has various fields like title, description, and completed. Each field has a type associated with it. Since we have created a new model, it has to be reflected in the database. So, let's migrate it:
rake db:create db:migrate
The preceding code will create a new table inside a new database with the associated fields. Let's analyze what we have done. The scaffold command has created many HTML pages or views that are needed for managing the todo model. So, let's check out our application. We need to start our server again:
rails s
Go to http://localhost:3000 again. You should still see the Rails default page. Now, type the URL: http://localhost:3000/todos. You should now see the application, as shown in the following screenshot: Click on New Todo, and you will be taken to a form which allows you to fill out the various fields that we had earlier created. Let's create our first todo and click on submit. It will be shown on the listing page: It was easy, wasn't it? We haven't done anything yet. That's the power of Rails, which people are crazy about. Summary This article briefly showed you how to develop and design a simple Rails application without the help of any CSS frontend frameworks. We manually styled the application by creating an external CSS file styles.css and importing it into the application using another CSS file application.css. We also discussed the complexities that a novice web designer might face when directly styling the application. Resources for Article: Further resources on this subject: Deep Customization of Bootstrap [article] The Bootstrap grid system [article] Getting Started with Bootstrap [article]

Static Data Management

Packt
25 Feb 2015
29 min read
In this article by Loiane Groner, author of the book Mastering Ext JS, Second Edition, we will start implementing the application's core features, starting with static data management. What exactly is this? Every application has information that is not directly related to the core business, but this information is used by the core business logic somehow. There are two types of data in every application: static data and dynamic data. For example, the types of categories, languages, cities, and countries can exist independently of the core business and can be used by the core business information as well; this is what we call static data because it does not change very often. And there is the dynamic data, which is the information that changes in the application, what we call core business data. Clients, orders, and sales would be examples of dynamic or core business data. We can treat this static information as though they are independent MySQL tables (since we are using MySQL as the database server), and we can perform all the actions we can do on a MySQL table. (For more resources related to this topic, see here.) Creating a Model As usual, we are going to start by creating the Models. First, let's list the tables we will be working with and their columns: Actor: actor_id, first_name, last_name, last_update Category: category_id, name, last_update Language: language_id, name, last_update City: city_id, city, country_id, last_update Country: country_id, country, last_update We could create one Model for each of these entities with no problem at all; however, we want to reuse as much code as possible. Take another look at the list of tables and their columns. Notice that all tables have one column in common—the last_update column. That being said, we can create a super model that contains this field. When we implement the actor and category models, we can extend the super Model, in which case we do not need to declare the column. Don't you think? Abstract Model In OOP, there is a concept called inheritance, which is a way to reuse the code of existing objects. Ext JS uses an OOP approach, so we can apply the same concept in Ext JS applications. If you take a look back at the code we already implemented, you will notice that we are already applying inheritance in most of our classes (with the exception of the util package), but we are creating classes that inherit from Ext JS classes. Now, we will start creating our own super classes. As all the models that we will be working with have the last_update column in common (if you take a look, all the Sakila tables have this column), we can create a super Model with this field. So, we will create a new file under app/model/staticData named Base.js:
Ext.define('Packt.model.staticData.Base', {
    extend: 'Packt.model.Base', //#1

    fields: [
        {
            name: 'last_update',
            type: 'date',
            dateFormat: 'Y-m-j H:i:s'
        }
    ]
});
This Model has only one column, that is, last_update. On the tables, the last_update column has the type timestamp, so the type of the field needs to be date, and we will also apply date format: 'Y-m-j H:i:s', which is years, months, days, hours, minutes, and seconds, following the same format as we have in the database (2006-02-15 04:34:33). When we create each Model representing the tables, we will not need to declare the last_update field again. Look again at the code at line #1.
We are not extending the default Ext.data.Model class, but another Base class (security.Base). Adapting the Base Model schema Create a file named Base.js inside the app/model folder with the following content in it:
Ext.define('Packt.model.Base', {
    extend: 'Ext.data.Model',

    requires: [
        'Packt.util.Util'
    ],

    schema: {
        namespace: 'Packt.model', //#1
        urlPrefix: 'php',
        proxy: {
            type: 'ajax',
            api :{
                read : '{prefix}/{entityName:lowercase}/list.php',
                create: '{prefix}/{entityName:lowercase}/create.php',
                update: '{prefix}/{entityName:lowercase}/update.php',
                destroy: '{prefix}/{entityName:lowercase}/destroy.php'
            },
            reader: {
                type: 'json',
                rootProperty: 'data'
            },
            writer: {
                type: 'json',
                writeAllFields: true,
                encode: true,
                rootProperty: 'data',
                allowSingle: false
            },
            listeners: {
                exception: function(proxy, response, operation){
                    Packt.util.Util.showErrorMsg(response.responseText);
                }
            }
        }
    }
});
Instead of using Packt.model.security, we are going to use only Packt.model. The Packt.model.security.Base class will look simpler now, as follows:
Ext.define('Packt.model.security.Base', {
    extend: 'Packt.model.Base',

    idProperty: 'id',

    fields: [
        { name: 'id', type: 'int' }
    ]
});
It is very similar to the staticData.Base Model we are creating for this article. The difference is in the field that is common for the staticData package (last_update) and the security package (id). Having a single schema for the application now means the entityName of the Models will be created based on their name after 'Packt.model'. This means that the User and Group models we created will have the entityName security.User and security.Group, respectively. However, we do not want to break the code we have implemented already, and for this reason we want the User and Group Model classes to have the entity name as User and Group. We can do this by adding entityName: 'User' to the User Model and entityName: 'Group' to the Group Model. We will do the same for the specific models we will be creating next. Having a super Base Model for all models within the application means our models will follow a pattern. The proxy template is also common for all models, and this means our server-side code will also follow a pattern. This is good for organizing the application and for future maintenance. Specific models Now we can create all the models representing each table. Let's start with the Actor Model. We will create a new class named Packt.model.staticData.Actor; therefore, we need to create a new file named Actor.js under app/model/staticData, as follows:
Ext.define('Packt.model.staticData.Actor', {
    extend: 'Packt.model.staticData.Base', //#1

    entityName: 'Actor', //#2

    idProperty: 'actor_id', //#3

    fields: [
        { name: 'actor_id' },
        { name: 'first_name'},
        { name: 'last_name'}
    ]
});
There are three important things we need to note in the preceding code: This Model is extending (#1) from the Packt.model.staticData.Base class, which extends from the Packt.model.Base class, which in turn extends from the Ext.data.Model class.
This means this Model inherits all the attributes and behavior from the classes Packt.model.staticData.Base, Packt.model.Base, and Ext.data.Model. As we created a super Model with the schema Packt.model, the default entityName created for this Model would be staticData.Actor. The proxy uses entityName to compile its URL template, so to make our life easier, we are going to override entityName as well (#2). The third point is idProperty (#3). By default, idProperty has the value "id". This means that when we declare a Model with a field named "id", Ext JS already knows that this is the unique field of this Model. When it is different from "id", we need to specify it using the idProperty configuration. As the Sakila tables do not have a unique field called "id"—it is always the name of the entity + "_id"—we will need to declare this configuration in all models. Now we can do the same for the other models. We need to create four more classes: Packt.model.staticData.Category Packt.model.staticData.Language Packt.model.staticData.City Packt.model.staticData.Country At the end, we will have six Model classes (one super Model and five specific models) created inside the app/model/staticData package. If we create a UML-class diagram for the Model classes, we will have the following diagram: The Actor, Category, Language, City, and Country Models extend the Packt.model.staticData Base Model, which extends from Packt.model.Base, which in turn extends the Ext.data.Model class. Creating a Store The next step is to create the stores for each Model. As we did with the Model, we will try to create a generic Store as well (in this article, we will create generic code for all screens, so creating a super Model, Store, and View is part of the capability). Although the common configurations are not in the Store, but in the Proxy (which we declared inside the schema in the Packt.model.Base class), having a super Store class can help us to listen to events that are common for all the static data stores. We will create a super Store named Packt.store.staticData.Base. As we need a Store for each Model, we will create the following stores: Packt.store.staticData.Actors Packt.store.staticData.Categories Packt.store.staticData.Languages Packt.store.staticData.Cities Packt.store.staticData.Countries At the end of this topic, we will have created all the previous classes. If we create a UML diagram for them, we will have something like the following diagram: All the Store classes extend from the Base Store. Now that we know what we need to create, let's get our hands dirty! Abstract Store The first class we need to create is the Packt.store.staticData.Base class. Inside this class, we will only declare autoLoad as true so that all the subclasses of this Store can be loaded when the application launches:
Ext.define('Packt.store.staticData.Base', {
    extend: 'Ext.data.Store',

    autoLoad: true
});
All the specific stores that we will create will extend this Store. Creating a super Store like this can feel pointless; however, we never know whether, during future maintenance, we will need to add some common Store configuration. As we will use MVC for this module, another reason is that inside the Controller, we can also listen to Store events (available since Ext JS 4.2). If we want to listen to the same event of a set of stores and execute exactly the same method, having a super Store will save us some lines of code.
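As a sketch of that last point, and assuming a classic MVC controller (this controller and its selector are our own illustration, not part of the article's code), listening to the load event of several stores in one place could look like this:
Ext.define('Packt.controller.StaticData', {
    extend: 'Ext.app.Controller',

    init: function() {
        // '*' matches every store; a '#storeId' selector could
        // narrow this down to specific stores instead
        this.listen({
            store: {
                '*': {
                    load: this.onStaticDataLoad
                }
            }
        });
    },

    onStaticDataLoad: function(store, records) {
        console.log(store.getStoreId() + ' loaded ' +
            records.length + ' records');
    }
});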
Specific Store Our next step is to implement the Actors, Categories, Languages, Cities, and Countries stores. So let's start with the Actors Store:
Ext.define('Packt.store.staticData.Actors', {
    extend: 'Packt.store.staticData.Base', //#1

    model: 'Packt.model.staticData.Actor' //#2
});
After the definition of the Store, we need to extend from the Ext JS Store class. As we are using a super Store, we can extend directly from the super Store (#1), which means extending from the Packt.store.staticData.Base class. Next, we need to declare the fields or the model that this Store is going to represent. In our case, we always declare the Model (#2). Using a model inside the Store is good for reuse purposes. The fields configuration is recommended just in case we need to create a very specific Store with specific data that we are not planning to reuse throughout the application, as in a chart or a report. For the other stores, the only thing that is going to be different is the name of the Store and the Model. However, if you need the code to compare with yours or simply want to get the complete source code, you can download the code bundle for this book or get it at https://github.com/loiane/masteringextjs. Creating an abstract GridPanel for reuse Now is the time to implement the views. We have to implement five views: one to perform the CRUD operations for Actor, one for Category, one for Language, one for City, and one for Country. The following screenshot represents the final result we want to achieve after implementing the Actors screen: And the following screenshot represents the final result we want to achieve after implementing the Categories screen: Did you notice anything similar between these two screens? Let's take a look again: The top toolbar is the same (1); there is a Live Search capability (2); there is a filter plugin (4), and the Last Update and widget columns are also common (3). Going a little bit further, both GridPanels can be edited using a cell editor (similar to MS Excel capabilities, where you can edit a single cell by clicking on it). The only things different between these two screens are the columns that are specific to each screen (5). Does this mean we can reuse a good part of the code if we use inheritance by creating a super GridPanel with all these common capabilities? Yes! So this is what we are going to do. So let's create a new class named Packt.view.staticData.BaseGrid, as follows:
Ext.define('Packt.view.staticData.BaseGrid', {
    extend: 'Ext.ux.LiveSearchGridPanel', //#1
    xtype: 'staticdatagrid',

    requires: [
        'Packt.util.Glyphs' //#2
    ],

    columnLines: true,    //#3
    viewConfig: {
        stripeRows: true //#4
    },

    //more code here
});
We will extend the Ext.ux.LiveSearchGridPanel class instead of Ext.grid.Panel. The Ext.ux.LiveSearchGridPanel class already extends the Ext.grid.Panel class and also adds the Live Search toolbar (2). The LiveSearchGridPanel class is a plugin that is distributed with the Ext JS SDK. So, we do not need to worry about adding it manually to our project (you will learn how to add third-party plugins to the project later in this book). As we will also add a toolbar with the Add, Save Changes, Cancel Changes buttons, we need to require the util.Glyphs class we created (#2). Configuration #3 shows the border of each cell of the grid, and #4 alternates between a white background and a light gray background for the rows.
Like any other component that is responsible for displaying information in Ext JS, the "Panel" piece is only the shell. The View is responsible for displaying the information; in a GridPanel, that means the columns. We can customize it using the viewConfig (#4). The next step is to create an initComponent method. To initComponent or not? While browsing other developers' code, we might see some using the initComponent when declaring an Ext JS class and some who do not (as we have done until now). So what is the difference between using it and not using it? When declaring an Ext JS class, we usually configure it according to the application needs. The classes we declare might or might not become parent classes for other classes. If they become a parent class, some of the configurations will be overridden, while some will not. Usually, we declare the ones that we expect to be overridden as configurations, and we declare inside the initComponent method the ones we do not want to be overridden. As there are a few configurations we do not want to be overridden, we will declare them inside the initComponent, as follows:
initComponent: function() {
    var me = this;

    me.selModel = {
        selType: 'cellmodel' //#5
    };

    me.plugins = [
        {
            ptype: 'cellediting',  //#6
            clicksToEdit: 1,
            pluginId: 'cellplugin'
        },
        {
            ptype: 'gridfilters'  //#7
        }
    ];

    //docked items

    //columns

    me.callParent(arguments); //#8
}
We can define how the user can select information from the GridPanel: the default configuration is the Selection RowModel class. As we want the user to be able to edit cell by cell, we will use the Selection CellModel class (#5) and also the CellEditing plugin (#6), which is part of the Ext JS SDK. For the CellEditing plugin, we configure the cell to be available to edit when the user clicks on the cell (if we need the user to double-click, we can change to clicksToEdit: 2). To help us later in the Controller, we also assign an ID to this plugin. To be able to filter the information (the Live Search will only highlight the matching records), we will use the Filters plugin (#7). The Filters plugin is also part of the Ext JS SDK. The callParent method (#8) will call initComponent from the superclass Ext.ux.LiveSearchGridPanel, passing the arguments we defined. It is a common mistake to forget to include the callParent call when overriding the initComponent method. If the component does not work, make sure you are calling the callParent method! Next, we are going to declare dockedItems.
As all GridPanels will have the same toolbar, we can declare dockedItems in the super class we are creating, as follows:
me.dockedItems = [
    {
        xtype: 'toolbar',
        dock: 'top',
        itemId: 'topToolbar', //#9
        items: [
            {
                xtype: 'button',
                itemId: 'add', //#10
                text: 'Add',
                glyph: Packt.util.Glyphs.getGlyph('add')
            },
            {
                xtype: 'tbseparator'
            },
            {
                xtype: 'button',
                itemId: 'save',
                text: 'Save Changes',
                glyph: Packt.util.Glyphs.getGlyph('saveAll')
            },
            {
                xtype: 'button',
                itemId: 'cancel',
                text: 'Cancel Changes',
                glyph: Packt.util.Glyphs.getGlyph('cancel')
            },
            {
                xtype: 'tbseparator'
            },
            {
                xtype: 'button',
                itemId: 'clearFilter',
                text: 'Clear Filters',
                glyph: Packt.util.Glyphs.getGlyph('clearFilter')
            }
        ]
    }
];
We will have Add, Save Changes, Cancel Changes, and Clear Filters buttons. Note that the toolbar (#9) and each of the buttons (#10) have an itemId declared. As we are going to use the MVC approach in this example, we will declare a Controller. The itemId configuration has a responsibility similar to the reference that we declare when working with a ViewController. We will discuss the importance of itemId more when we declare the Controller. When declaring buttons inside a toolbar, we can omit the xtype: 'button' configuration since the button is the default component for toolbars. Inside the Glyphs class, we need to add the following attributes inside its config: saveAll: 'xf0c7', clearFilter: 'xf0b0' And finally, we will add the two columns that are common for all the screens (Last Update column and Widget Column delete (#13)) along with the columns already declared in each specific GridPanel:
me.columns = Ext.Array.merge( //#11
    me.columns,               //#12
    [{
        xtype    : 'datecolumn',
        text     : 'Last Update',
        width    : 150,
        dataIndex: 'last_update',
        format: 'Y-m-j H:i:s',
        filter: true
    },
    {
        xtype: 'widgetcolumn', //#13
        width: 45,
        sortable: false,       //#14
        menuDisabled: true,    //#15
        itemId: 'delete',
        widget: {
            xtype: 'button',   //#16
            glyph: Packt.util.Glyphs.getGlyph('destroy'),
            tooltip: 'Delete',
            scope: me,                //#17
            handler: function(btn) {  //#18
                me.fireEvent('widgetclick', me, btn);
            }
        }
    }]
);
In the preceding code we merge (#11) me.columns (#12) with two other columns and assign this value to me.columns again. We want all child grids to have these two columns plus the specific columns for each child grid. If the columns configuration from the BaseGrid class were outside initComponent, then when a child class declared its own columns configuration the value would be overridden. If we declare the columns configuration inside initComponent, a child class would not be able to add its own columns configuration, so we need to merge these two configurations (the columns from the child class #12 with the two columns we want each child class to have).
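As a quick aside on why Ext.Array.merge is used here (a generic illustration of the framework method, not code from the book): merge returns a new array containing the unique items of all the arrays passed in, so the child's columns and the common columns end up in a single flat array:
var childColumns  = [{ text: 'First Name' }, { text: 'Last Name' }];
var commonColumns = [{ text: 'Last Update' }];
// allColumns now holds all three column definitions, in order
var allColumns = Ext.Array.merge(childColumns, commonColumns);
console.log(allColumns.length); // prints 3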
For the delete button, we are going to use a Widget Column (#13) (introduced in Ext JS 5). Until Ext JS 4, the only way to have a button inside a Grid Column was using an Action Column. We are going to use a button (#16) to represent a Widget Column. Because it is a Widget Column, there is no reason to make this column sortable (#14), and we can also disable its menu (#15). Specific GridPanels for each table Our last stop before we implement the Controller is the specific GridPanels. We have already created the super GridPanel that contains most of the capabilities that we need. Now we just need to declare the specific configurations for each GridPanel. We will create five GridPanels that will extend from the Packt.view.staticData.BaseGrid class, as follows: Packt.view.staticData.Actors Packt.view.staticData.Categories Packt.view.staticData.Languages Packt.view.staticData.Cities Packt.view.staticData.Countries Let's start with the Actors GridPanel, as follows: Ext.define('Packt.view.staticData.Actors', {     extend: 'Packt.view.staticData.BaseGrid',     xtype: 'actorsgrid',        //#1       store: 'staticData.Actors', //#2       columns: [         {             text: 'Actor Id',             width: 100,             dataIndex: 'actor_id',             filter: {                 type: 'numeric'   //#3             }         },         {             text: 'First Name',             flex: 1,             dataIndex: 'first_name',             editor: {                 allowBlank: false, //#4                 maxLength: 45      //#5             },             filter: {                 type: 'string'     //#6             }         },         {             text: 'Last Name',             width: 200,             dataIndex: 'last_name',             editor: {                 allowBlank: false, //#7                 maxLength: 45      //#8             },             filter: {                 type: 'string'     //#9             }         }     ] }); Each specific class has its own xtype (#1). We also need to execute an UPDATE query in the database to update the menu table with the new xtypes we are creating: UPDATE `sakila`.`menu` SET `className`='actorsgrid' WHERE `id`='5'; UPDATE `sakila`.`menu` SET `className`='categoriesgrid' WHERE `id`='6'; UPDATE `sakila`.`menu` SET `className`='languagesgrid' WHERE `id`='7'; UPDATE `sakila`.`menu` SET `className`='citiesgrid' WHERE `id`='8'; UPDATE `sakila`.`menu` SET `className`='countriesgrid' WHERE `id`='9'; The first declaration that is specific to the Actors GridPanel is the Store (#2). We are going to use the Actors Store. Because the Actors Store is inside the staticData folder (store/staticData), we also need to pass the name of the subfolder; otherwise, Ext JS will think that this Store file is inside the app/store folder, which is not true. Then we need to declare the columns specific to the Actors GridPanel (we do not need to declare the Last Update and the Delete Action Column because they are already in the super GridPanel). What you need to pay attention to now are the editor and filter configurations for each column. The editor is for editing (cellediting plugin). We will only apply this configuration to the columns we want the user to be able to edit, and the filter (filters plugin) is the configuration that we will apply to the columns we want the user to be able to filter information from. 
For example, for the id column, we do not want the user to be able to edit it as it is a sequence provided by the MySQL database's auto increment, so we will not apply the editor configuration to it. However, the user can filter the information based on the ID, so we will apply the filter configuration (#3). We want the user to be able to edit the other two columns: first_name and last_name, so we will add the editor configuration. We can perform client validations just as we can do on a field of a form too. For example, we want both fields to be mandatory (#4 and #7) and the maximum number of characters the user can enter is 45 (#5 and #8). And lastly, as both columns render text values (string), we will also apply the filter (#6 and #9). For other filter types, please refer to the Ext JS documentation as shown in the following screenshot. The documentation provides an example and more configuration options that we can use: And that is it! The super GridPanel will provide all the other capabilities. Summary In this article, we covered how to implement screens that look very similar to the MySQL Table Editor. The most important concept we covered in this article is implementing abstract classes, using the inheritance concept from OOP. We are used to using these concepts on server-side languages, such as PHP, Java, .NET, and so on. This article demonstrated that it is also important to use these concepts on the Ext JS side; this way, we can reuse a lot of code and also implement generic code that provides the same capability for more than one screen. We created a Base Model and Store. We used the Live Search GridPanel and the filter plugin for the GridPanel as well. You learned how to perform CRUD operations using the Store capabilities. Resources for Article: Further resources on this subject: So, What Is EXT JS? [article] The Login Page Using EXT JS [article] Improving Code Quality [article]


URL Routing and Template Rendering

Packt
24 Feb 2015
11 min read
In this article by Ryan Baldwin, the author of Clojure Web Development Essentials, we will start building our application, creating actual endpoints that process HTTP requests, which return something we can look at. We will: Learn what the Compojure routing library is and how it works Build our own Compojure routes to handle an incoming request What this article won't cover, however, is making any of our HTML pretty, client-side frameworks, or JavaScript. Our goal is to understand the server-side/Clojure components and get up and running as quickly as possible. As a result, our templates are going to look pretty basic, if not downright embarrassing. (For more resources related to this topic, see here.) What is Compojure? Compojure is a small, simple library that allows us to create specific request handlers for specific URLs and HTTP methods. In other words, "HTTP Method A requesting URL B will execute Clojure function C." By allowing us to do this, we can create our application in a sane way (URL-driven), and thus architect our code in some meaningful way. For the studious among us, the Compojure docs can be found at https://github.com/weavejester/compojure/wiki. Creating a Compojure route Let's do an example that will allow the awful-sounding tech jargon to make sense. We will create an extremely basic route, which will simply print out the original request map to the screen. Let's perform the following steps: Open the home.clj file. Alter the home-routes defroutes such that it looks like this:
(defroutes home-routes
  (GET "/" [] (home-page))
  (GET "/about" [] (about-page))
  (ANY "/req" request (str request)))
Start the Ring Server if it's not already started. Navigate to http://localhost:3000/req. It's possible that your Ring Server will be serving off a port other than 3000. Check the output on lein ring server for the serving port if you're unable to connect to the URL listed in step 4. You should see something like this: Using defroutes Before we dive too much into the anatomy of the routes, we should speak briefly about what defroutes is. The defroutes macro packages up all of the routes and creates one big Ring handler out of them. Of course, you don't need to define all the routes for an application under a single defroutes macro. You can, and should, spread them out across various namespaces and then incorporate them into the app in Luminus' handler namespace. Before we start making a bunch of example routes, let's move the route we've already created to its own namespace: Create a new namespace hipstr.routes.test-routes (/hipstr/routes/test_routes.clj). Ensure that the namespace makes use of the Compojure library:
(ns hipstr.routes.test-routes
  (:require [compojure.core :refer :all]))
Next, use the defroutes macro and create a new set of routes, and move the /req route we created in the hipstr.routes.home namespace under it:
(defroutes test-routes
  (ANY "/req" request (str request)))
Incorporate the new test-routes route into our application handler. In hipstr.handler, perform the following steps: Add a requirement to the hipstr.routes.test-routes namespace:
(:require [compojure.core :refer [defroutes]]
  [hipstr.routes.home :refer [home-routes]]
  [hipstr.routes.test-routes :refer [test-routes]]
  …)
Finally, add the test-routes route to the list of routes in the call to app-handler:
(def app (app-handler
  ;; add your application routes here
  [home-routes test-routes base-routes]
We've now created a new routing namespace.
It is within this namespace that we will create the rest of the routing examples. Anatomy of a route So what exactly did we just create? We created a Compojure route, which responds to any HTTP method at /req and returns the result of a called function, in our case a string representation of the original request map. Defining the method The first argument of the route defines which HTTP method the route will respond to; our route uses the ANY macro, which means our route will respond to any HTTP method. Alternatively, we could have restricted which HTTP methods the route responds to by specifying a method-specific macro. The compojure.core namespace provides macros for GET, POST, PUT, DELETE, HEAD, OPTIONS, and PATCH. Let's change our route to respond only to requests made using the GET method:
(GET "/req" request (str request))
When you refresh your browser, the entire request map is printed to the screen, as we'd expect. However, if the URL and the method used to make the request don't match those defined in our route, the not-found route in hipstr.handler/base-routes is used. We can see this in action by changing our route to listen only to the POST methods:
(POST "/req" request (str request))
If we try to refresh the browser again, we'll notice we don't get anything back. In fact, an "HTTP 404: Page Not Found" response is returned to the client. If we POST to the URL from the terminal using curl, we'll get the following expected response:
# curl -d {} http://localhost:3000/req
{:ssl-client-cert nil, :go-bowling? "YES! NOW!", :cookies {}, :remote-addr "0:0:0:0:0:0:0:1", :params {}, :flash nil, :route-params {}, :headers {"user-agent" "curl/7.37.1", "content-type" "application/x-www-form-urlencoded", "content-length" "2", "accept" "*/*", "host" "localhost:3000"}, :server-port 3000, :content-length 2, :form-params {}, :session/key nil, :query-params {}, :content-type "application/x-www-form-urlencoded", :character-encoding nil, :uri "/req", :server-name "localhost", :query-string nil, :body #<HttpInput org.eclipse.jetty.server.HttpInput@38dea1>, :multipart-params {}, :scheme :http, :request-method :post, :session {}}
Defining the URL The second component of the route is the URL on which the route is served. This can be anything we want and as long as the request to the URL matches exactly, the route will be invoked. There are, however, two caveats we need to be aware of: Routes are tested in order of their declaration, so order matters. The trailing slash isn't handled well. Compojure will always strip the trailing slash from the incoming request but won't redirect the user to the URL without the trailing slash. As a result, an "HTTP 404: Page Not Found" response is returned. So never base anything off a trailing slash, lest ye peril in an ocean of confusion. Parameter destructuring In our previous example we directly referred to the implicit incoming request and passed that request to the function constructing the response. This works, but it's nasty. Nobody ever said, I love passing around requests and maintaining meaningless code and not leveraging URLs, and if anybody ever did, we don't want to work with them. Thankfully, Compojure has a rather elegant destructuring syntax that's easier to read than Clojure's native destructuring syntax. Let's create a second route that allows us to define a request map key in the URL, then simply prints that value in the response:
(GET "/req/:val" [val] (str val))
Compojure's destructuring syntax binds HTTP request parameters to variables of the same name.
In the previous syntax, the key :val will be in the request's :params map. Compojure will automatically map the value of {:params {:val...}} to the symbol val in [val]. In the end, you'll get the following output for the URL http://localhost:3000/req/holy-moly-molly: That's pretty slick but what if there is a query string? For example, http://localhost:3000/req/holy-moly-molly!?more=ThatsAHotTomalle. We can simply add the query parameter more to the vector, and Compojure will automatically bring it in:
(GET "/req/:val" [val more] (str val "<br>" more))
Destructuring the request What happens if we still need access to the entire request? It's natural to think we could do this:
(GET "/req/:val" [val request] (str val "<br>" request))
However, request will always be nil because it doesn't map back to a parameter key of the same name. In Compojure, we can use the magical :as key:
(GET "/req/:val" [val :as request] (str val "<br>" request))
This will now result in request being assigned the entire request map, as shown in the following screenshot: Destructuring unbound parameters Finally, we can bind any remaining unbound parameters into another map using &. Take a look at the following example code:
(GET "/req/:val/:another-val/:and-another"
  [val & remainders] (str val "<br>" remainders))
Saving the file and navigating to http://localhost:3000/req/holy-moly-molly!/what-about/susie-q will render both val and the map with the remaining unbound keys :another-val and :and-another, as seen in the following screenshot: Constructing the response The last argument in the route is the construction of the response. Whatever the third argument resolves to will be the body of our response. For example, in the following route:
(GET "/req/:val" [val] (str val))
The third argument, (str val), will echo whatever the value passed in on the URL is. So far, we've simply been making calls to Clojure's str but we can just as easily call one of our own functions. Let's add another route to our hipstr.routes.test-routes, and write the following function to construct its response:
(defn render-request-val [request-map & [request-key]]
  "Simply returns the value of request-key in request-map,
  if request-key is provided; otherwise, returns the request-map.
  If request-key is provided, but not found in the request-map,
  a message indicating as such will be returned."
  (str (if request-key
         (if-let [result ((keyword request-key) request-map)]
           result
           (str request-key " is not a valid key."))
         request-map)))

(defroutes test-routes
  (POST "/req" request (render-request-val request))
  ;no access to the full request map
  (GET "/req/:val" [val] (str val))
  ;use :as to get access to full request map
  (GET "/req/:val" [val :as full-req] (str val "<br>" full-req))
  ;use :as to get access to the remainder of unbound symbols
  (GET "/req/:val/:another-val/:and-another" [val & remainders]
    (str val "<br>" remainders))
  ;use & to get access to unbound params, and call our route
  ;handler function
  (GET "/req/:key" [key :as request]
    (render-request-val request key)))
Now when we navigate to http://localhost:3000/req/server-port, we'll see the value of the :server-port key in the request map… or wait… we should… what's wrong? If this doesn't seem right, it's because it isn't. Why is our /req/:val route getting executed? As stated earlier, the order of routes is important.
Because /req/:val with the GET method is declared earlier, it's the first route to match our request, regardless of whether or not :val is in the HTTP request map's parameters. Routes are matched on URL structure, not on parameter keys. As it stands right now, our /req/:key will never get matched. We'll have to change it as follows:
;use & to get access to unbound params, and call our route handler function
(GET "/req/:val/:another-val/:and-another" [val & remainders]
  (str val "<br>" remainders))
;giving the route a different URL from /req/:val will ensure its execution
(GET "/req/key/:key" [key :as request]
  (render-request-val request key)))
Now that our /req/key/:key route is logically unique, it will be matched appropriately and render the server-port value to the screen. Let's try to navigate to http://localhost:3000/req/key/server-port again: Generating complex responses What if we want to create more complex responses? How might we go about doing that? The last thing we want to do is hardcode a whole bunch of HTML into a function; it's not 1995 anymore, after all. This is where the Selmer library comes to the rescue. Summary In this article, we learned what Compojure is and how its routing works. You also learned to build your own Compojure routes to handle an incoming request, along the way learning how to use defroutes, the anatomy of a route, parameter destructuring, and how to define the URL. Resources for Article: Further resources on this subject: Vmware Vcenter Operations Manager Essentials - Introduction To Vcenter Operations Manager [article] Websockets In Wildfly [article] Clojure For Domain-Specific Languages - Design Concepts With Clojure [article]


Aggregators, File exchange Over FTP/FTPS, Social Integration, and Enterprise Messaging

Packt
20 Feb 2015
26 min read
In this article by Chandan Pandey, the author of Spring Integration Essentials, we will explore the out-of-the-box capabilities that the Spring Integration framework provides for a seamless flow of messages across heterogeneous components and see what Spring Integration has in the box when it comes to real-world integration challenges. We will cover Spring Integration's support for external components, looking at the following topics in detail: Aggregators File exchange over FTP/FTPS Social integration Enterprise messaging (For more resources related to this topic, see here.) Aggregators The aggregators are the opposite of splitters - they combine multiple messages and present them as a single message to the next endpoint. This is a very complex operation, so let's start with a real-life scenario. A news channel might have many correspondents who can upload articles and related images. It might happen that the text of the articles arrives much sooner than the associated images - but the article must be sent for publishing only when all relevant images have also arrived. This scenario throws up a lot of challenges; partial articles should be stored somewhere, there should be a way to correlate incoming components with existing ones, and also there should be a way to identify the completion of a message. Aggregators are there to handle all of these aspects - some of the relevant concepts that are used are MessageStore, CorrelationStrategy, and ReleaseStrategy. Let's start with a code sample and then we will dive down to explore each of these concepts in detail:
<int:aggregator
  input-channel="fetchedFeedChannelForAggregatior"
  output-channel="aggregatedFeedChannel"
  ref="aggregatorSoFeedBean"
  method="aggregateAndPublish"
  release-strategy="sofeedCompletionStrategyBean"
  release-strategy-method="checkCompleteness"
  correlation-strategy="soFeedCorrelationStrategyBean"
  correlation-strategy-method="groupFeedsBasedOnCategory"
  message-store="feedsMySqlStore"
  expire-groups-upon-completion="true">
  <int:poller fixed-rate="1000"></int:poller>
</int:aggregator>
Hmm, a pretty big declaration! And why not—a lot of things combine together to act as an aggregator. Let's quickly glance at all the tags used: int:aggregator: This is used to specify the Spring framework's namespace for the aggregator. input-channel: This is the channel from which messages will be consumed. output-channel: This is the channel to which messages will be dropped after aggregation. ref: This is used to specify the bean having the method that is called on the release of messages. method: This is used to specify the method that is invoked when messages are released. release-strategy: This is used to specify the bean having the method that decides whether aggregation is complete or not. release-strategy-method: This is the method having the logic to check for completeness of the message. correlation-strategy: This is used to specify the bean having the method to correlate the messages. correlation-strategy-method: This is the method having the actual logic to correlate the messages. message-store: This is used to specify the message store, where messages are temporarily stored until they have been correlated and are ready to release. This can be in memory (which is the default) or can be a persistence store. If a persistence store is configured, message delivery will be resumed across a server crash.
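One note before moving on: the input and output channels referenced by the aggregator must be declared elsewhere in the context. The article does not show that part, so the following declaration is our assumption about the surrounding configuration (a queue on the input channel matches the poller used above):
<int:channel id="fetchedFeedChannelForAggregatior">
  <int:queue/>
</int:channel>
<int:channel id="aggregatedFeedChannel"/>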
A Java class can be defined as an aggregator and, as described in the previous bullet points, the method and ref parameters decide which method of the bean (referred to by ref) should be invoked when messages have been aggregated as per CorrelationStrategy and released after fulfilment of ReleaseStrategy. In the following example, we are just printing the messages before passing them on to the next consumer in the chain:
public class SoFeedAggregator {
  public List<SyndEntry> aggregateAndPublish(List<SyndEntry> messages) {
    //Do some pre-processing before passing on to next channel
    return messages;
  }
}
Let's get to the details of the three most important components that complete the aggregator. Correlation strategy The aggregator needs to group the messages—but how will it decide the groups? In simple words, CorrelationStrategy decides how to correlate the messages. The default is based on a header named CORRELATION_ID. All messages having the same value for the CORRELATION_ID header will be put in one bracket. Alternatively, we can designate any Java class and its method to define a custom correlation strategy or can implement the Spring Integration framework's CorrelationStrategy interface to define it. If the CorrelationStrategy interface is implemented, then the getCorrelationKey() method should be implemented. Let's see our correlation strategy in the feeds example:
public class CorrelationStrategy {
  public Object groupFeedsBasedOnCategory(Message<?> message) {
    if(message!=null){
      SyndEntry entry = (SyndEntry)message.getPayload();
      List<SyndCategoryImpl> categories=entry.getCategories();
      if(categories!=null&&categories.size()>0){
        for (SyndCategoryImpl category: categories) {
          //for simplicity, lets consider the first category
          return category.getName();
        }
      }
    }
    return null;
  }
}
So how are we correlating our messages? We are correlating the feeds based on the category name. The method must return an object that can be used for correlating the messages. If a user-defined object is returned, it must satisfy the requirements for a key in a map, such as defining hashCode() and equals(). The return value must not be null. Alternatively, if we would have wanted to implement it by extending framework support, then it would have looked like this:
public class CorrelationStrategy implements CorrelationStrategy {
  public Object getCorrelationKey(Message<?> message) {
    if(message!=null){
      …
            return category.getName();
          }
        }
      }
      return null;
    }
  }
}
Release strategy We have been grouping messages based on the correlation strategy—but when will we release them to the next component? This is decided by the release strategy. Similar to the correlation strategy, any Java POJO can define the release strategy or we can extend framework support. Here is the example of using the Java POJO class:
public class CompletionStrategy {
  public boolean checkCompleteness(List<SyndEntry> messages) {
    if(messages!=null){
      if(messages.size()>2){
        return true;
      }
    }
    return false;
  }
}
The message argument must be of a collection type, and the method must return a Boolean indicating whether to release the accumulated messages or not. For simplicity, we have just checked for the number of messages from the same category—if it's greater than two, we release the messages. Message store Until aggregated messages fulfil the release criteria, the aggregator needs to store them temporarily.
This is where message stores come into the picture. Message stores can be of two types: in-memory and persistence store. The default is in-memory, and if this is to be used, then there is no need to declare this attribute at all. If a persistent message store needs to be used, then it must be declared and its reference should be given to the message-store attribute. A MySQL message store can be declared and referenced as follows: <bean id="feedsMySqlStore"   class="org.springframework.integration.jdbc.JdbcMessageStore">   <property name="dataSource" ref="feedsSqlDataSource"/> </bean> The data source is the Spring framework's standard JDBC data source. The greatest advantage of using a persistent store is recoverability - if the system crashes, the partially aggregated messages are not lost and processing can resume on restart. Another advantage is capacity - memory is limited and can accommodate only a limited number of messages for aggregation, but the database can have a much bigger space. FTP/FTPS FTP, or File Transfer Protocol, is used to transfer files across networks. FTP communications consist of two parts: server and client. The client establishes a session with the server, after which it can download or upload files. Spring Integration provides components that act as a client and connect to the FTP server to communicate with it. What about the server - which server will it connect to? If you have access to any public or hosted FTP server, use it. Else, the easiest way for trying out the example in this section is to set up a local instance of the FTP server. FTP setup is out of the scope of this article. Prerequisites To use Spring Integration components for FTP/FTPS, we need to add a namespace to our configuration file and then add the Maven dependency entry in the pom.xml file. The following entries should be made: Namespace support can be added by using the following code snippet: xmlns:int-ftp="http://www.springframework.org/schema/integration/ftp" The Maven entry can be provided using the following code snippet: <dependency>   <groupId>org.springframework.integration</groupId>   <artifactId>spring-integration-ftp</artifactId>   <version>${spring.integration.version}</version> </dependency> With these entries in place, the next step is to define a session factory that encapsulates the connection details: <bean id="ftpClientSessionFactory"   class="org.springframework.integration.     ftp.session.DefaultFtpSessionFactory">   <property name="host" value="localhost"/>   <property name="port" value="21"/>   <property name="username" value="testuser"/>   <property name="password" value="testuser"/> </bean> The DefaultFtpSessionFactory class is at work here, and it takes the following parameters: Host that is running the FTP server Port at which it's running the server Username Password for the server A session pool for the factory is maintained and an instance is returned when required. Spring takes care of validating that a stale session is never returned. Downloading files from the FTP server Inbound adapters can be used to read the files from the server. The most important aspect is the session factory that we just discussed in the preceding section. The following code snippet configures an FTP inbound adapter that downloads a file from a remote directory and makes it available for processing: <int-ftp:inbound-channel-adapter   channel="ftpOutputChannel"   session-factory="ftpClientSessionFactory"   remote-directory="/"   local-directory=   "C:\Chandan\Projects\siexample\ftp\ftplocalfolder"   auto-create-local-directory="true"   delete-remote-files="true"   filename-pattern="*.txt"   local-filename-generator-expression=   "#this.toLowerCase() + '.trns'">   <int:poller fixed-rate="1000"/> </int-ftp:inbound-channel-adapter> Let's quickly go through the tags used in this code: int-ftp:inbound-channel-adapter: This is the namespace support for the FTP inbound adapter. channel: This is the channel on which the downloaded files will be put as a message.
session-factory: This is a factory instance that encapsulates details for connecting to a server. remote-directory: This is the directory on the server where the adapter should listen for the new arrival of files. local-directory: This is the local directory where the downloaded files should be dumped. auto-create-local-directory: If enabled, this will create the local directory structure if it's missing. delete-remote-files: If enabled, this will delete the files in the remote directory after they have been downloaded successfully. This will help in avoiding duplicate processing. filename-pattern: This can be used as a filter; only files matching the specified pattern will be downloaded. local-filename-generator-expression: This can be used to generate a local filename. An inbound adapter is a special listener that listens for events on the remote directory, for example, an event fired on the creation of a new file. At this point, it will initiate the file transfer. It creates a payload of type Message<File> and puts it on the output channel. By default, the filename is retained and a file with the same name as the remote file is created in the local directory. This can be overridden by using local-filename-generator-expression. Incomplete files On the remote server, there could be files that are still in the process of being written. Typically, their extension is different, for example, filename.actualext.writing. The best way to avoid reading incomplete files is to use a filename pattern that will copy only those files that have been written completely. Uploading files to the FTP server Outbound adapters can be used to write files to the server. The following code snippet reads a message from a specified channel and writes it inside the FTP server's remote directory. The remote server session is determined as usual by the session factory. Make sure the username configured in the session object has the necessary permission to write to the remote directory. The following configuration sets up an FTP adapter that can upload files to the specified directory:   <int-ftp:outbound-channel-adapter channel="ftpOutputChannel"     remote-directory="/uploadfolder"     session-factory="ftpClientSessionFactory"     auto-create-directory="true">   </int-ftp:outbound-channel-adapter> Here is a brief description of the tags used: int-ftp:outbound-channel-adapter: This is the namespace support for the FTP outbound adapter. channel: This is the name of the channel whose payload will be written to the remote server. remote-directory: This is the remote directory where files will be put. The user configured in the session factory must have appropriate permission. session-factory: This encapsulates details for connecting to the FTP server. auto-create-directory: If enabled, this will automatically create the remote directory if it's missing, and the given user should have sufficient permission. The payload on the channel need not necessarily be a file type; it can be one of the following: java.io.File: A Java file object byte[]: This is a byte array that represents the file contents java.lang.String: This is the text that represents the file contents Avoiding partially written files Files on the remote server must be made available only when they have been written completely and not when they are still partial. Spring uses a mechanism of writing the files to a temporary location, and their availability is published only when they have been completely written.
By default, the suffix .writing is used, but it can be changed using the temporary-file-suffix property. This behavior can be completely disabled by setting use-temporary-file-name to false. FTP outbound gateway A gateway, by definition, is a two-way component: it accepts input and provides a result for further processing. So what is the input and output in the case of FTP? It issues commands to the FTP server and returns the result of the command. The following declaration will issue an ls command with the option -1 to the server. The result is a list of string objects containing the filename of each file, which will be put on the reply-channel. The code is as follows: <int-ftp:outbound-gateway id="ftpGateway"     session-factory="ftpClientSessionFactory"     request-channel="commandInChannel"     command="ls"     command-options="-1"     reply-channel="commandOutChannel"/> The tags are pretty simple: int-ftp:outbound-gateway: This is the namespace support for the FTP outbound gateway session-factory: This is the wrapper for details needed to connect to the FTP server command: This is the command to be issued command-options: These are the options for the command reply-channel: This is the channel on which the response of the command is put FTPS support For FTPS support, all that is needed is to change the factory class - an instance of org.springframework.integration.ftp.session.DefaultFtpsSessionFactory should be used. Note the s in DefaultFtpsSessionFactory. Once the session is created with this factory, it's ready to communicate over a secure channel. Here is an example of a secure session factory configuration: <bean id="ftpSClientFactory"   class="org.springframework.integration.ftp.session.   DefaultFtpsSessionFactory">   <property name="host" value="localhost"/>   <property name="port" value="22"/>   <property name="username" value="testuser"/>   <property name="password" value="testuser"/> </bean> Although it is obvious, I would remind you that the FTP server must be configured to support a secure connection and open the appropriate port. Social integration Any application in today's context is incomplete if it does not provide support for social messaging. Spring Integration provides in-built support for many social interfaces such as e-mails, Twitter feeds, and so on. Let's discuss the implementation of Twitter in this section. Prior to version 2.1, Spring Integration was dependent on the Twitter4J API for Twitter support, but now it leverages Spring's social module for Twitter integration. Spring Integration provides an interface for receiving and sending tweets as well as searching and publishing the search results in messages. Twitter uses OAuth for authentication purposes. An app must be registered before we start Twitter development on it. Prerequisites Let's look at the steps that need to be completed before we can use a Twitter component in our Spring Integration example: Twitter account setup: A Twitter account is needed. Perform the following steps to get the keys that will allow the user to use Twitter using the API: Visit https://apps.twitter.com/. Sign in to your account. Click on Create New App. Enter the details such as Application name, Description, Website, and so on. All fields are self-explanatory and appropriate help has also been provided. The value for the field Website need not be a valid one - put an arbitrary website name in the correct format. Click on the Create your application button.
If the application is created successfully, a confirmation message will be shown and the Application Management page will appear, as shown here: Go to the Keys and Access Tokens tab and note the details for Consumer Key (API Key) and Consumer Secret (API Secret) under Application Settings, as shown in the following screenshot: You need additional access tokens so that applications can use Twitter using APIs. Click on Create my access token; it takes a while to generate these tokens. Once they are generated, note down the value of Access Token and Access Token Secret. Go to the Permissions tab and provide permission to Read, Write and Access direct messages. After performing all these steps, and with the required keys and access tokens, we are ready to use Twitter. Let's store these in the twitterauth.properties property file: twitter.oauth.apiKey= lnrDlMXSDnJumKLFRym02kHsy twitter.oauth.apiSecret= 6wlriIX9ay6w2f6at6XGQ7oNugk6dqNQEAArTsFsAU6RU8F2Td twitter.oauth.accessToken= 158239940-FGZHcbIDtdEqkIA77HPcv3uosfFRnUM30hRix9TI twitter.oauth.accessTokenSecret= H1oIeiQOlvCtJUiAZaachDEbLRq5m91IbP4bhg1QPRDeh The next step towards Twitter integration is the creation of a Twitter template. This is similar to the datasource or connection factory for databases, JMS, and so on. It encapsulates details to connect to a social platform. Here is the code snippet: <context:property-placeholder location="classpath: twitterauth.properties "/> <bean id="twitterTemplate" class=" org.springframework.social.   twitter.api.impl.TwitterTemplate ">   <constructor-arg value="${twitter.oauth.apiKey}"/>   <constructor-arg value="${twitter.oauth.apiSecret}"/>   <constructor-arg value="${twitter.oauth.accessToken}"/>   <constructor-arg value="${twitter.oauth.accessTokenSecret}"/> </bean> As I mentioned, the template encapsulates all the values. Here is the order of the arguments: apiKey apiSecret accessToken accessTokenSecret With all the setup in place, let's now do some real work. Namespace support can be added by using the following code snippet: xmlns:int-twitter="http://www.springframework.org/schema/integration/twitter" Receiving tweets for the configured account is then just a matter of declaring an inbound channel adapter: <int-twitter:inbound-channel-adapter   twitter-template="twitterTemplate"   channel="twitterChannel"> </int-twitter:inbound-channel-adapter> The components in this code are covered in the following bullet points: int-twitter:inbound-channel-adapter: This is the namespace support for Twitter's inbound channel adapter. twitter-template: This is the most important aspect. The Twitter template encapsulates which account to use to poll the Twitter site. The details given in the preceding code snippet are fake; they should be replaced with real connection parameters. channel: Messages are dumped on this channel. Similar adapters are available for other scenarios, such as searching messages, retrieving direct messages, and retrieving tweets that mention your account. Let's have a quick look at the code snippets for these adapters. I will not go into detail for each one; they are almost similar to what has been discussed previously. Search: This adapter helps to search the tweets for the parameter configured in the query tag. The code is as follows: <int-twitter:search-inbound-channel-adapter id="testSearch"   twitter-template="twitterTemplate"   query="#springintegration"   channel="twitterSearchChannel"> </int-twitter:search-inbound-channel-adapter> Retrieving Direct Messages: This adapter allows us to receive the direct messages for the account in use (the account configured in the Twitter template).
The code is as follows: <int-twitter:dm-inbound-channel-adapter   id="testdirectMessage"   twitter-template="twitterTemplate"   channel="twitterDirectMessageChannel"> </int-twitter:dm-inbound-channel-adapter> Retrieving Mention Messages: This adapter allows us to receive messages that mention the configured account via the @user tag (the account configured in the Twitter template). The code is as follows: <int-twitter:mentions-inbound-channel-adapter   id="testmentionMessage"   twitter-template="twitterTemplate"   channel="twitterMentionMessageChannel"> </int-twitter:mentions-inbound-channel-adapter> Sending tweets Twitter exposes outbound adapters to send messages. Here is a sample code:   <int-twitter:outbound-channel-adapter     twitter-template="twitterTemplate"     channel="twitterSendMessageChannel"/> Whatever message is put on the twitterSendMessageChannel channel is tweeted by this adapter. Similar to the inbound case, an outbound adapter is also available for sending direct messages. Here is a simple example: <int-twitter:dm-outbound-channel-adapter   twitter-template="twitterTemplate"   channel="twitterSendDirectMessage"/> Any message that is put on the twitterSendDirectMessage channel is sent to the user directly. But where is the name of the user to whom the message will be sent? It is decided by a header in the message, TwitterHeaders.DM_TARGET_USER_ID. This must be populated either programmatically, or by using enrichers or SpEL. For example, it can be programmatically added as follows: Message message = MessageBuilder.withPayload("Chandan")   .setHeader(TwitterHeaders.DM_TARGET_USER_ID,   "test_id").build(); Alternatively, it can be populated by using a header enricher, as follows: <int:header-enricher input-channel="twitterIn"   output-channel="twitterOut">   <int:header name="twitter_dmTargetUserId" value=" test_id "/> </int:header-enricher> Twitter search outbound gateway As gateways provide a two-way window, the search outbound gateway can be used to issue dynamic search commands and receive the results as a collection. If no result is found, the collection is empty. Let's configure a search outbound gateway, as follows:   <int-twitter:search-outbound-gateway id="twitterSearch"     request-channel="searchQueryChannel"     twitter-template="twitterTemplate"     search-args-expression="#springintegration"     reply-channel="searchQueryResultChannel"/> And here is what the tags covered in this code mean: int-twitter:search-outbound-gateway: This is the namespace for the Twitter search outbound gateway request-channel: This is the channel that is used to send search requests to this gateway twitter-template: This is the Twitter template reference search-args-expression: This is used as the argument for the search reply-channel: This is the channel on which the search results are populated This gives us enough to get started with the social integration aspects of the Spring framework. Enterprise messaging The enterprise landscape is incomplete without JMS - it is one of the most commonly used means of enterprise integration. Spring provides very good support for this. Spring Integration builds over that support and provides adapters and gateways to send and receive messages across many middleware brokers such as ActiveMQ, RabbitMQ, Redis, and so on. Spring Integration provides inbound and outbound adapters to send and receive messages, along with gateways that can be used in a request/reply scenario. Let's walk through these implementations in a little more detail.
A basic understanding of the JMS mechanism and its concepts is expected; even an introduction to JMS is beyond the scope of this article. Let's start with the prerequisites. Prerequisites To use the Spring Integration messaging components, namespace support and the relevant Maven dependency should be added. Namespace support can be added by using the following code snippet: xmlns:int-jms="http://www.springframework.org/schema/integration/jms" The Maven entry can be provided using the following code snippet: <dependency>   <groupId>org.springframework.integration</groupId>   <artifactId>spring-integration-jms</artifactId>   <version>${spring.integration.version}</version> </dependency> After adding these two entries, we are ready to use the components. But before we can use an adapter, we must configure an underlying message broker. Let's configure ActiveMQ. Add the following in pom.xml:   <dependency>     <groupId>org.apache.activemq</groupId>     <artifactId>activemq-core</artifactId>     <version>${activemq.version}</version>     <exclusions>       <exclusion>         <artifactId>spring-context</artifactId>         <groupId>org.springframework</groupId>       </exclusion>     </exclusions>   </dependency>   <dependency>     <groupId>org.springframework</groupId>     <artifactId>spring-jms</artifactId>     <version>${spring.version}</version>     <scope>compile</scope>   </dependency> After this, we are ready to create a connection factory and JMS queue that will be used by the adapters to communicate. First, create a connection factory. As you will notice, this is wrapped in Spring's CachingConnectionFactory, but the underlying provider is ActiveMQ: <bean id="connectionFactory" class="org.springframework.   jms.connection.CachingConnectionFactory">   <property name="targetConnectionFactory">     <bean class="org.apache.activemq.ActiveMQConnectionFactory">       <property name="brokerURL" value="vm://localhost"/>     </bean>   </property> </bean> Let's create a queue that can be used to retrieve and put messages: <bean   id="feedInputQueue"   class="org.apache.activemq.command.ActiveMQQueue">   <constructor-arg value="queue.input"/> </bean> Now, we are ready to send and retrieve messages from the queue. Let's look into each of these one by one. Receiving messages – the inbound adapter Spring Integration provides two ways of receiving messages: polling and event listener. Both of them are based on the underlying Spring framework's comprehensive support for JMS. JmsTemplate is used by the polling adapter, while MessageListener is used by the event-driven adapter. As the name suggests, a polling adapter keeps polling the queue for the arrival of new messages and puts the message on the configured channel if it finds one. On the other hand, in the case of the event-driven adapter, it's the responsibility of the server to notify the configured adapter.
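Before wiring up the adapters described next, it can be useful to sanity-check the broker configuration by dropping a test message on the queue. The following is a minimal sketch, not part of the original configuration - it assumes access to the connectionFactory bean defined previously and reuses the queue.input queue name:

import javax.jms.ConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class QueueSmokeTest {
    public static void sendTestMessage(ConnectionFactory connectionFactory) {
        // JmsTemplate manages connections, sessions, and producers for us
        JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
        // The destination name matches the queue declared in the configuration
        jmsTemplate.convertAndSend("queue.input", "test feed payload");
    }
}

If one of the inbound adapters described below is running, the test payload should show up on its configured channel.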
The polling adapter Let's start with a code sample: <int-jms:inbound-channel-adapter   connection-factory="connectionFactory"   destination="feedInputQueue"   channel="jmsProcessedChannel">   <int:poller fixed-rate="1000" /> </int-jms:inbound-channel-adapter> This code snippet contains the following components: int-jms:inbound-channel-adapter: This is the namespace support for the JMS inbound adapter connection-factory: This is the encapsulation for the underlying JMS provider setup, such as ActiveMQ destination: This is the JMS queue where the adapter is listening for incoming messages channel: This is the channel on which incoming messages should be put There is a poller element, so it's obvious that it is a polling-based adapter. It can be configured in one of two ways: by providing a JMS template or by using a connection factory along with a destination. I have used the latter approach. The preceding adapter polls the queue mentioned in the destination attribute and, once it gets a message, puts it on the channel configured in the channel attribute. The event-driven adapter Similar to polling adapters, event-driven adapters also need either a reference to an implementation of AbstractMessageListenerContainer or a connection factory and destination. Again, I will use the latter approach. Here is a sample configuration: <int-jms:message-driven-channel-adapter   connection-factory="connectionFactory"   destination="feedInputQueue"   channel="jmsProcessedChannel"/> There is no poller sub-element here. As soon as a message arrives at its destination, the adapter is invoked, which puts it on the configured channel. Sending messages – the outbound adapter Outbound adapters convert messages on the channel to JMS messages and put them on the configured queue. To convert Spring Integration messages to JMS messages, the outbound adapter uses JmsSendingMessageHandler, which is an implementation of MessageHandler. Outbound adapters should be configured with either JmsTemplate or with a connection factory and destination queue. Keeping in sync with the preceding examples, we will take the latter approach, as follows: <int-jms:outbound-channel-adapter   connection-factory="connectionFactory"   channel="jmsChannel"   destination="feedInputQueue"/> This adapter receives the Spring Integration message from jmsChannel, converts it to a JMS message, and puts it on the destination. Gateway A gateway provides request/reply behavior instead of a one-way send or receive. For example, after sending a message, we might expect a reply, or we may want to send an acknowledgement after receiving a message. The inbound gateway Inbound gateways provide an alternative to inbound adapters when request-reply capabilities are expected. An inbound gateway is an event-based implementation that listens for a message on the queue, converts it to a Spring message, and puts it on the channel. Here is a sample code: <int-jms:inbound-gateway   request-destination="feedInputQueue"   request-channel="jmsProcessedChannel"/> However, this is what an inbound adapter does - even the configuration is similar, except for the namespace. So, what is the difference? The difference lies in replying back to the reply destination. Once the message is put on the channel, it will be propagated down the line and at some stage a reply will be generated and sent back as an acknowledgement. The inbound gateway, on receiving this reply, will create a JMS message and put it back on the reply destination queue.
Then, where is the reply destination? The reply destination is decided in one of the following ways: The original message has a JMSReplyTo property; if it's present, it has the highest precedence. The inbound gateway looks for a configured default reply destination, which can be configured either as a name or as a direct reference to a destination. For specifying the destination as a direct reference, the default-reply-destination attribute should be used. An exception will be thrown by the gateway if it finds neither of the preceding two. The outbound gateway Outbound gateways should be used in scenarios where a reply is expected for the sent messages. Let's start with an example: <int-jms:outbound-gateway   request-channel="jmsChannel"   request-destination="feedInputQueue"   reply-channel="jmsProcessedChannel" /> The preceding configuration will send messages to request-destination. When an acknowledgement is received, it will be put on the configured reply-channel. If a reply destination has not been configured, JMS TemporaryQueues will be created to receive the replies. Summary In this article, we covered out-of-the-box components provided by the Spring Integration framework, such as aggregators. This article also showcased the simplicity and abstraction that Spring Integration provides when it comes to handling complicated integrations, be it file-based, HTTP, JMS, or any other integration mechanism. Resources for Article: Further resources on this subject: Modernizing Our Spring Boot App [article] Home Security by Beaglebone [article] Integrating With Other Frameworks [article]
Testing a UI Using WebDriverJS

Packt
17 Feb 2015
30 min read
In this article by Enrique Amodeo, author of the book Learning Behavior-driven Development with JavaScript, we will look into an advanced concept: how to test a user interface. For this purpose, you will learn the following topics: Using WebDriverJS to manipulate a browser and inspect the resulting HTML generated by our UI Organizing our UI codebase to make it easily testable The right abstraction level for our UI tests (For more resources related to this topic, see here.) Our strategy for UI testing There are two traditional strategies towards approaching the problem of UI testing: record-and-replay tools and end-to-end testing. The first approach, record-and-replay, leverages tools capable of recording user activity in the UI and saving it into a script file. This script file can later be executed to perform exactly the same UI manipulation as the user performed and to check whether the results are exactly the same. This approach is not very compatible with BDD for the following reasons: We cannot test-first our UI. To be able to use the UI and record the user activity, we first need to have most of the code of our application in place. This is not a problem in the waterfall approach, where QA and testing are performed after the coding phase is finished. However, in BDD, we aim to document the product features as automated tests, so we should write the tests before or during the coding. The resulting test scripts are low-level and totally disconnected from the problem domain. There is no way to use them as live documentation for the requirements of the system. The resulting test suite is brittle and it will stop working whenever we make slight changes, even cosmetic ones, to the UI. The problem is that the tools record the low-level interaction with the system, which depends on technical details of the HTML. The other classic approach is end-to-end testing, where we not only test the UI layer, but also most of the system or even the whole of it. To perform the setup of the tests, the most common approach is to substitute the third-party systems with test doubles. Normally, the database is under the control of the development team, so some practitioners use a regular database for the setup. However, we could use an in-memory database or even mock the DAOs. In any case, this approach prompts us to create an integrated test suite where we are not only testing the correctness of the UI, but the business logic as well. In the context of this discussion, an integrated test is a test that checks several layers of abstraction, or subsystems, in combination. Do not confuse it with the act of testing several classes or functions together. This approach is not inherently against BDD; for example, we could use Cucumber.js to capture the features of the system and implement Gherkin steps using WebDriver to drive the UI and make assertions. In fact, for most people, the term BDD refers precisely to this kind of test. We will end up writing a lot of test cases, because we need to combine the scenarios from the business logic domain with the ones from the UI domain. Furthermore, in which language should we formulate the tests? If we use the UI language, maybe it will be too low-level to easily describe business concepts. If we use the business domain language, maybe we will not be able to test the important details of the UI because they are too low-level.
Alternatively, we can even end up with tests that mix UI language with business terminology, so they will neither be focused nor very clear to anyone. Choosing the right tests for the UI If we want to test whether the UI works, why should we test the business rules? After all, this is already tested in the BDD test suite of the business logic layer. To decide which tests to write, we should first determine the responsibilities of the UI layer, which are as follows: Presenting the information provided by the business layer to the user in a nice way. Transforming user interaction into requests for the business layer. Controlling the changes in the appearance of the UI components, which includes things such as enabling/disabling controls, highlighting entry fields, showing/hiding UI elements, and so on. Orchestration between the UI components. Transferring and adapting information between the UI components and navigation between pages fall under this category. We do not need to write tests about business rules, and we should not assume much about the business layer itself, apart from a loose contract. How should we word our tests? We should use a UI-related language when we talk about what the user sees and does. Words such as fields, buttons, forms, links, click, hover, highlight, enable/disable, or show and hide are relevant in this context. However, we should not go too far; otherwise, our tests will be too brittle. Saying, for example, that the name field should have a pink border is too low-level. The moment that the designer decides to use red instead of pink, or changes his mind and decides to change the background color instead of the border, our test will break. We should aim for tests that express the real intention of the user interface; for example, the name field should be highlighted as incorrect. The testing architecture At this point, we could write tests relevant for our UI using the following testing architecture: A simple testing architecture for our UI We can use WebDriver to issue user gestures to interact with the browser. These user gestures are transformed by the browser into DOM events that are the inputs of our UI logic and will trigger operations on it. We can use WebDriver again to read the resulting HTML in the assertions. We can simply use a test double to impersonate our server, so we can set up our tests easily. This architecture is very simple and sounds like a good plan, but it is not! There are three main problems here: UI testing is very slow. Take into account that the boot time and shutdown phase can take 3 seconds on a normal laptop. Each UI interaction using WebDriver can take between 50 and 100 milliseconds, and the latency with the fake server can be an extra 10 milliseconds. This gives us only around 10 tests per second, plus an extra 3 seconds. UI tests are complex and difficult to diagnose when they fail. What is failing? The selectors we use to tell WebDriver how to find the relevant elements? Some race condition we were not aware of? A cross-browser issue? Also note that our test is now distributed between two different processes, a fact that always makes debugging more difficult. UI tests are inherently brittle. We can try to make them less brittle with best practices, but even then a change in the structure of the HTML code will sometimes break our tests. This is a bad thing because the UI often changes more frequently than the business layer.
As UI testing is very risky and expensive, we should try to write as few tests that interact with the UI as possible. We can achieve this without losing testing power, with the following testing architecture: A smarter testing architecture We have now split our UI layer into two components: the view and the UI logic. This design aligns with the family of MV* design patterns. In the context of this article, the view corresponds with a passive view, and the UI logic corresponds with the controller or the presenter, in combination with the model. A passive view is usually very hard to test, so in this article we will focus mostly on how to do it. You will often be able to easily separate the passive view from the UI logic, especially if you are using an MV* pattern, such as MVC, MVP, or MVVM. Most of our tests will be for the UI logic. This is the component that implements the client-side validation, orchestration of UI components, navigation, and so on. It is the UI logic component that has all the rules about how the user can interact with the UI, and hence it needs to maintain some kind of internal state. The UI logic component can be tested completely in memory using standard techniques. We can simply mock the XMLHttpRequest object, or the corresponding object in the framework we are using, and test everything in memory using a single Node.js process. No interaction with the browser and the HTML is needed, so these tests will be blazingly fast and robust. Then we need to test the view. This is a very thin component with only two responsibilities: Manipulating and updating the HTML to present the user with the information whenever it is instructed to do so by the UI logic component Listening for HTML events and transforming them into suitable requests for the UI logic component The view should not have more responsibilities, and it is a stateless component. It simply does not need to store any internal state, because it only transforms and transmits information between the HTML and the UI logic. Since it is the only component that interacts with the HTML, it is the only one that needs to be tested using WebDriver. The point of all of this is that the view can be tested with only a bunch of tests that are conceptually simple. Hence, we minimize the number and complexity of the tests that need to interact with the UI. WebDriverJS Testing the passive view layer is a technical challenge. We not only need to find a way for our test to inject native events into the browser to simulate user interaction, but we also need to be able to inspect the DOM elements and inject and execute scripts. This was very challenging to do approximately 5 years ago. In fact, it was considered complex and expensive, and some practitioners recommended not to test the passive view. After all, this layer is very thin and mostly contains the bindings of the UI to the HTML DOM, so the risk of error is not supposed to be high, especially if we use modern cross-browser frameworks to implement this layer. Nonetheless, nowadays the technology has evolved, and we can do this kind of testing without much fuss if we use the right tools. One of these tools is Selenium 2.0 (also known as WebDriver) and its library for JavaScript, which is WebDriverJS (https://code.google.com/p/selenium/wiki/WebDriverJs). In this book, we will use WebDriverJS, but there are other bindings in JavaScript for Selenium 2.0, such as WebDriverIO (http://webdriver.io/). You can use the one you like most or even try both.
The point is that the techniques I will show you here can be applied with any client of WebDriver or even with other tools that are not WebDriver. Selenium 2.0 is a tool that allows us to make direct calls to a browser automation API. This way, we can simulate native events, we can access the DOM, and we can control the browser. Each browser provides a different API and has its own quirks, but Selenium 2.0 will offer us a unified API called the WebDriver API. This allows us to interact with different browsers without changing the code of our tests. As we are accessing the browser directly, we do not need a special server, unless we want to control browsers that are on a different machine. Actually, due to some technical limitations, this is only true if we want to test against a Google Chrome or a Firefox browser using WebDriverJS. So, basically, the testing architecture for our passive view looks like this: Testing with WebDriverJS We can see that we use WebDriverJS for the following: Sending native events to manipulate the UI, as if we were the user, during the action phase of our tests Inspecting the HTML during the assert phase of our test Sending small scripts to set up the test doubles, check them, and invoke the update method of our passive view Apart from this, we need some extra infrastructure, such as a web server that serves our test HTML page and the components we want to test. As is evident from the diagram, the commands of WebDriverJS require some network traffic to be able to send the appropriate request to the browser automation API, wait for the browser to execute, and get the result back through the network. This forces the API of WebDriverJS to be asynchronous in order not to block unnecessarily. That is why WebDriverJS has an API designed around promises. Most of the methods will return a promise or an object whose methods return promises. This plays perfectly well with Mocha and Chai.  There is a W3C specification for the WebDriver API. If you want to have a look, just visit https://dvcs.w3.org/hg/webdriver/raw-file/default/webdriver-spec.html. The API of WebDriverJS is a bit complex, and you can find its official documentation at http://selenium.googlecode.com/git/docs/api/javascript/module_selenium-webdriver.html. However, to follow this article, you do not need to read it, since I will now show you the most important API that WebDriverJS offers us. Finding and interacting with elements It is very easy to find an HTML element using WebDriverJS; we just need to use either the findElement or the findElements methods. Both methods receive a locator object specifying which element or elements to find. The first method will return the first element it finds, or simply fail with an exception if there are no elements matching the locator. The findElements method will return a promise for an array with all the matching elements. If there are no matching elements, the promised array will be empty and no error will be thrown. How do we specify which elements we want to find? To do so, we need to use a locator object as a parameter. For example, if we would like to find the element whose identifier is order_item1, then we could use the following code: var By = require('selenium-webdriver').By;   driver.findElement(By.id('order_item1')); We need to import the selenium-webdriver module and capture its locator factory object. By convention, we store this locator factory in a variable called By. Later, we will see how we can get a WebDriverJS instance.
This code is very expressive, but a bit verbose. There is another version of this: driver.findElement({ id: 'order_item1' }); Here, the locator criteria is passed in the form of a plain JSON object. There is no need to use the By object or any factory. Which version is better? Neither. You just use the one you like most. In this article, the plain JSON locator will be used. The following are the criteria for finding elements: Using the tag name, for example, to locate all the <li> elements in the document: driver.findElements(By.tagName('li'));driver.findElements({ tagName: 'li' }); We can also locate using the name attribute. It can be handy to locate the input fields. The following code will locate the first element named password: driver.findElement(By.name('password')); driver.findElement({ name: 'password' }); Using the class name; for example, the following code will locate the first element that contains a class called item: driver.findElement(By.className('item')); driver.findElement({ className: 'item' }); We can use any CSS selector that our target browser understands. If the target browser does not understand the selector, it will throw an exception; for example, to find the second item of an order (assuming there is only one order on the page): driver.findElement(By.css('.order .item:nth-of-type(2)')); driver.findElement({ css: '.order .item:nth-of-type(2)' }); Using only the CSS selector you can locate any element, and it is the one I recommend. The other ones can be very handy in specific situations. There are more ways of locating elements, such as linkText, partialLinkText, or xpath, but I seldom use them. Locating elements by their text, such as in linkText or partialLinkText, is brittle because small changes in the wording of the text can break the tests. Also, locating by xpath is not as useful in HTML as using a CSS selector. Obviously, it can be used if the UI is defined as an XML document, but this is very rare nowadays. In both methods, findElement and findElements, the resulting HTML elements are wrapped as a WebElement object. This object allows us to send an event to that element or inspect its contents. Some of its methods that allow us to manipulate the DOM are as follows: clear(): This will do nothing unless WebElement represents an input control. In this case, it will clear its value and then trigger a change event. It returns a promise that will be fulfilled whenever the operation is done. sendKeys(text or key, …): This will do nothing unless WebElement is an input control. In this case, it will send the equivalents of keyboard events to the parameters we have passed. It can receive one or more parameters with a text or key object. If it receives a text, it will transform the text into a sequence of keyboard events. This way, it will simulate a user typing on a keyboard. This is more realistic than simply changing the value property of an input control, since the proper keyDown, keyPress, and keyUp events will be fired. A promise is returned that will be fulfilled when all the key events are issued. For example, to simulate that a user enters some search text in an input field and then presses Enter, we can use the following code: var Key = require('selenium-webdriver').Key;   var searchField = driver.findElement({name: 'searchTxt'}); searchField.sendKeys('BDD with JS', Key.ENTER);  The webdriver.Key object allows us to specify any key that does not represent a character, such as Enter, the up arrow, Command, Ctrl, Shift, and so on. 
We can also use its chord method to represent a combination of several keys pressed at the same time. For example, to simulate Alt + Command + J, use driver.actions().sendKeys(Key.chord(Key.ALT, Key.COMMAND, 'J')).perform();. click(): This will issue a click event just in the center of the element. The returned promise will be fulfilled when the event is fired.  Sometimes, the center of an element is nonclickable, and an exception is thrown! This can happen, for example, with table rows, since the center of a table row may just be the padding between cells! submit(): This will look for the form that contains this element and will issue a submit event. Apart from sending events to an element, we can inspect its contents with the following methods: getId(): This will return a promise with the internal identifier of this element used by WebDriver. Note that this is not the value of the DOM ID property! getText(): This will return a promise that will be fulfilled with the visible text inside this element. It will include the text in any child element and will trim the leading and trailing whitespaces. Note that, if this element is not displayed or is hidden, the resulting text will be an empty string! getInnerHtml() and getOuterHtml(): These will return a promise that will be fulfilled with a string that contains innerHTML or outerHTML of this element. isSelected(): This will return a promise with a Boolean that determines whether the element has either been selected or checked. This method is designed to be used with the <option> elements. isEnabled(): This will return a promise with a Boolean that determines whether the element is enabled or not. isDisplayed(): This will return a promise with a Boolean that determines whether the element is displayed or not. Here, "displayed" is taken in a broad sense; in general, it means that the user can see the element without resizing the browser. If the element is hidden, has display: none, has no size, or is in an inaccessible part of the document, the returned promise will be fulfilled as false. getTagName(): This will return a promise with the tag name of the element. getSize(): This will return a promise with the size of the element. The size comes as a JSON object with width and height properties that indicate the width and height in pixels of the bounding box of the element. The bounding box includes padding and border. getLocation(): This will return a promise with the position of the element. The position comes as a JSON object with x and y properties that indicate the coordinates in pixels of the element relative to the page. getAttribute(name): This will return a promise with the value of the specified attribute. Note that WebDriver does not distinguish between attributes and properties! If there is neither an attribute nor a property with that name, the promise will be fulfilled as null. If the attribute is a "boolean" HTML attribute (such as checked or disabled), the promise will be evaluated as true only if the attribute is present. If there is both an attribute and a property with the same name, the attribute value will be used.  If you really need to be precise about getting an attribute or a property, it is much better to use an injected script to get it. getCssValue(cssPropertyName): This will return a promise with a string that represents the computed value of the specified CSS property.
The computed value is the resulting value after the browser has applied all the CSS rules and the style and class attributes. Note that the specific representation of the value depends on the browser; for example, the color property can be returned as red, #ff0000, or rgb(255, 0, 0) depending on the browser. This is not cross-browser, so we should avoid this method in our tests. findElement(locator) and findElements(locator): These will return an element, or all the elements that are the descendants of this element, and match the locator. isElementPresent(locator): This will return a promise with a Boolean that indicates whether there is at least one descendant element that matches this locator. As you can see, the WebElement API is pretty simple and allows us to do most of our tests easily. However, what if we need to perform some complex interaction with the UI, such as drag-and-drop? Complex UI interaction WebDriverJS allows us to define a complex action gesture in an easy way using the DSL defined in the webdriver.ActionSequence object. This DSL allows us to define any sequence of browser events using the builder pattern. For example, to simulate a drag-and-drop gesture, proceed with the following code: var beverageElement = driver.findElement({ id: 'expresso' });var orderElement = driver.findElement({ id: 'order' });driver.actions()    .mouseMove(beverageElement)    .mouseDown()    .mouseMove(orderElement)    .mouseUp()    .perform(); We want to drag an espresso to our order, so we move the mouse to the center of the espresso and press the mouse. Then, we move the mouse, by dragging the element, over the order. Finally, we release the mouse button to drop the espresso. We can add as many actions we want, but the sequence of events will not be executed until we call the perform method. The perform method will return a promise that will be fulfilled when the full sequence is finished. The webdriver.ActionSequence object has the following methods: sendKeys(keys...): This sends a sequence of key events, exactly as we saw earlier, to the method with the same name in the case of WebElement. The difference is that the keys will be sent to the document instead of a specific element. keyUp(key) and keyDown(key): These send the keyUp and keyDown events. Note that these methods only admit the modifier keys: Alt, Ctrl, Shift, command, and meta. mouseMove(targetLocation, optionalOffset): This will move the mouse from the current location to the target location. The location can be defined either as a WebElement or as page-relative coordinates in pixels, using a JSON object with x and y properties. If we provide the target location as a WebElement, the mouse will be moved to the center of the element. In this case, we can override this behavior by supplying an extra optional parameter indicating an offset relative to the top-left corner of the element. This could be needed in the case that the center of the element cannot receive events. mouseDown(), click(), doubleClick(), and mouseUp(): These will issue the corresponding mouse events. All of these methods can receive zero, one, or two parameters. 
Let's see what they mean with the following examples: var Button = require('selenium-webdriver').Button;   // to emit the event in the center of the expresso element driver.actions().mouseDown(expresso).perform(); // to make a right click in the current position driver.actions().click(Button.RIGHT).perform(); // Middle click in the expresso element driver.actions().click(expresso, Button.MIDDLE).perform();  The webdriver.Button object defines the three possible buttons of a mouse: LEFT, RIGHT, and MIDDLE. However, note that mouseDown() and mouseUp() only support the LEFT button! dragAndDrop(element, location): This is a shortcut to performing a drag-and-drop of the specified element to the specified location. Again, the location can be a WebElement or a page-relative coordinate. Injecting scripts We can use WebDriver to execute scripts in the browser and then wait for their results. There are two methods for this: executeScript and executeAsyncScript. Both methods receive a script and an optional list of parameters and send the script and the parameters to the browser to be executed. They return a promise that will be fulfilled with the result of the script; it will be rejected if the script failed. An important detail is how the script and its parameters are sent to the browser. For this, they need to be serialized and sent through the network. Once there, they will be deserialized, and the script will be executed inside an autoexecuted function that will receive the parameters as arguments. As a result of this, our scripts cannot access any variable in our tests, unless they are explicitly sent as parameters. The script is executed in the browser with the window object as its execution context (the value of this). When passing parameters, we need to take into consideration the kind of data that WebDriver can serialize. This data includes the following: Booleans, strings, and numbers. The null and undefined values. However, note that undefined will be translated as null. Any function will be transformed to a string that contains only its body. A WebElement object will be received as a DOM element. So, it will not have the methods of WebElement but the standard DOM methods instead. Conversely, if the script results in a DOM element, it will be received as a WebElement in the test. Arrays and objects will be converted to arrays and objects whose elements and properties have been converted using the preceding rules. With this in mind, we could, for example, retrieve the identifier of an element, such as the following one: var elementSelector = ".order ul > li"; driver.executeScript(     "return document.querySelector(arguments[0]).id;",     elementSelector ).then(function(id) {   expect(id).to.be.equal('order_item0'); }); Notice that the script is specified as a string with the code. This can be a bit awkward, so there is an alternative available: var elementSelector = ".order ul > li"; driver.executeScript(function() {     var selector = arguments[0];     return document.querySelector(selector).id; }, elementSelector).then(function(id) {   expect(id).to.be.equal('order_item0'); }); WebDriver will just convert the body of the function to a string and send it to the browser. Since the script is executed in the browser, we cannot access the elementSelector variable, and we need to access it through parameters. Unfortunately, we are forced to retrieve the parameters using the arguments pseudoarray, because WebDriver has no way of knowing the name of each argument.
As its name suggests, executeAsyncScript allows us to execute an asynchronous script. In this case, the last argument provided to the script is always a callback that we need to call to signal that the script has finalized. The result of the script will be the first argument provided to that callback. If no argument or undefined is explicitly provided, then the result will be null. Note that this is not directly compatible with the Node.js callback convention and that any extra parameters passed to the callback will be ignored. There is no way to explicitly signal an error in an asynchronous way. For example, if we want to return the value of an asynchronous DAO, then proceed with the following code: driver.executeAsyncScript(function() {   var cb = arguments[1],       userId = arguments[0];   window.userDAO.findById(userId).then(cb, cb); }, 'user1').then(function(userOrError) {   expect(userOrError).to.be.equal(expectedUser); }); Command control flows All the commands in WebDriverJS are asynchronous and return a promise or a WebElement. How do we execute an ordered sequence of commands? Well, using promises, it could look something like this: return driver.findElement({name:'quantity'}).sendKeys('23')     .then(function() {       return driver.findElement({name:'add'}).click();     })     .then(function() {       return driver.findElement({css:firstItemSel}).getText();     })     .then(function(quantity) {       expect(quantity).to.be.equal('23');     }); This works because we wait for each command to finish before issuing the next command. However, it is a bit verbose. Fortunately, with WebDriverJS we can do the following: driver.findElement({name:'quantity'}).sendKeys('23'); driver.findElement({name:'add'}).click(); return expect(driver.findElement({css:firstItemSel}).getText())     .to.eventually.be.equal('23'); How can the preceding code work? Because whenever we tell WebDriverJS to do something, it simply schedules the requested command in a queue-like structure called the control flow. The point is that each command will not be executed until it reaches the top of the queue. This way, we do not need to explicitly wait for the sendKeys command to be completed before executing the click command. The sendKeys command is scheduled in the control flow before click, so the latter one will not be executed until sendKeys is done. All the commands are scheduled against the same control flow queue that is associated with the WebDriver object. However, we can optionally create several control flows if we want to execute commands in parallel: var flow1 = webdriver.promise.createFlow(function() {   var driver = new webdriver.Builder().build();     // do something with driver here }); var flow2 = webdriver.promise.createFlow(function() {   var driver = new webdriver.Builder().build();     // do something with driver here }); webdriver.promise.fullyResolved([flow1, flow2]).then(function(){   // Wait for flow1 and flow2 to finish and do something }); We need to create each control flow instance manually and, inside each flow, create a separate WebDriver instance. The commands in both flows will be executed in parallel, and we can wait for both of them to be finalized to do something else using fullyResolved. In fact, we can even nest flows if needed to create a custom parallel command-execution graph. Taking screenshots Sometimes, it is useful to take some screenshots of the current screen for debugging purposes. This can be done with the takeScreenshot() method.
This method will return a promise that will be fulfilled with a string that contains a base-64 encoded PNG. It is our responsibility to save this string as a PNG file. The following snippet of code will do the trick: driver.takeScreenshot()     .then(function(shot) {       fs.writeFileSync(fileFullPath, shot, 'base64');     });  Note that not all browsers support this capability. Read the documentation for the specific browser adapter to see if it is available. Working with several tabs and frames WebDriver allows us to control several tabs, or windows, for the same browser. This can be useful if we want to test several pages in parallel or if our test needs to assert or manipulate things in several frames at the same time. This can be done with the switchTo() method that will return a webdriver.WebDriver.TargetLocator object. This object allows us to change the target of our commands to a specific frame or window. It has the following three main methods: frame(nameOrIndex): This will switch to a frame with the specified name or index. It will return a promise that is fulfilled when the focus has been changed to the specified frame. If we specify the frame with a number, this will be interpreted as a zero-based index in the window.frames array. window(windowName): This will switch focus to the window named as specified. The returned promise will be fulfilled when it is done. alert(): This will switch the focus to the active alert window. We can dismiss an alert with driver.switchTo().alert().dismiss();. The promise returned by these methods will be rejected if the specified window, frame, or alert window is not found. To make tests on several tabs at the same time, we must ensure that they do not share any kind of state, or interfere with each other through cookies, local storage, or any other kind of mechanism. Summary This article showed us that a good way to test the UI of an application is actually to split it into two parts and test them separately. One part is the core logic of the UI that takes responsibility for control logic, models, calls to the server, validations, and so on. This part can be tested in a classic way, using BDD, and mocking the server access. No new techniques are needed for this, and the tests will be fast. Here, we can involve nonengineer stakeholders, such as UX designers, users, and so on, to write some nice BDD features using Gherkin and Cucumber.js. The other part is a thin view layer that follows a passive view design. It only updates the HTML when it is asked to, and listens to DOM events to transform them into requests for the core logic UI layer. This layer has no internal state or control rules; it simply transforms data and manipulates the DOM. We can use WebDriverJS to test the view. This is a good approach because the most complex part of the UI can be fully test-driven easily, and the view, which is hard and slow to test, does not need many tests since it is very simple. In this sense, the passive view should not have a state; it should only act as a proxy of the DOM. Resources for Article: Further resources on this subject: Dart With Javascript [article] Behavior-Driven Development With Selenium WebDriver [article] Event-Driven Programming [article]

Programming littleBits circuits with JavaScript Part 1
Anna Gerber
12 Feb 2015
6 min read
littleBits are electronic building blocks that snap together with magnetic connectors. They are great for getting started with electronics and robotics and for prototyping circuits. The littleBits Arduino Coding Kit includes an Arduino-compatible microcontroller, which means that you can use the Johnny-Five JavaScript Robotics programming framework to program your littleBits creations using JavaScript, the programming language of the web.

Setup

Plug the Arduino bit into your computer from the port at the top of the Arduino module. You'll need to supply power to the Arduino by connecting a blue power module to any of the input connectors. The Arduino will appear as a device with a name like /dev/cu.usbmodemfa131 on Mac, or COM3 on Windows. Johnny-Five uses a communication protocol called Firmata to communicate with the Arduino microcontroller. We'll load the Standard Firmata sketch onto the Arduino the first time we go to use it, to make this communication possible.

Installing Firmata via the Chrome App

One of the easiest ways to get started programming with Johnny-Five is by using the Johnny-Five Chrome app for Google Chrome. After you have installed it, open the 'Johnny-Five Chrome' app from the Chrome apps page. To send the Firmata sketch to your board using the extension, select the port corresponding to your Arduino bit from the drop-down menu and then hit the Install Firmata button. If the device does not appear in the list at first, try the app's refresh button.

Installing Firmata via the command line

If you would prefer not to use the Chrome app, you can skip straight to using Node.js via the command line. You'll need a recent version of Node.js installed. Create a folder for your project's code. On a Mac run the Terminal app, and on Windows run Command Prompt. From the command line, change directory so you are inside your project folder, and then use npm to install the Johnny-Five library and nodebots-interchange:

npm install johnny-five
npm install -g nodebots-interchange

Use the interchange program from nodebots-interchange to send the StandardFirmata sketch to your Arduino:

interchange install StandardFirmata -a leonardo -p /dev/cu.usbmodemfa131

Note: If you are familiar with the Arduino IDE, you could alternatively use it to write Firmata to your Arduino. Open File > Examples > Firmata > StandardFirmata, select your port and Arduino Leonardo from Tools > Board, then hit Upload.

Inputs and Outputs

Programming with hardware is all about I/O: inputs and outputs. These can be either analog (continuous values) or digital (discrete 0 or 1 values). littleBits input modules are color coded pink, while outputs are green. The Arduino Coding Kit includes analog inputs (dimmers) as well as a digital input module (button). The output modules included in the kit are a servo motor and an LED bargraph, which can be used as a digital output (i.e. on or off), as an analog output to control the number of LEDs displayed, or with Pulse-Width-Modulation (PWM) - using a pattern of pulses on a digital output - to control LED brightness.

Building a circuit

Let's start with our output modules: the LED bargraph and servo. Connect a blue power module to any connector on the left-hand side of the Arduino. Connect the LED bargraph to the connector labelled d5 and the servo module to the connector labelled d9. Flick the switch next to both outputs to PWM. The mounting boards that come with the Arduino Coding Kit come in handy for holding your circuit together.
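Before writing the first real program, you can optionally confirm that Johnny-Five can reach the board with a minimal sketch. The connection-test.js file name below is just an assumption for this example; it assumes Firmata has already been installed as described above:

// connection-test.js -- a minimal connectivity check
var five = require("johnny-five");
var board = new five.Board();

board.on("ready", function() {
  // If this prints, Johnny-Five found the board and Firmata responded
  console.log("Arduino bit connected and ready");
});

Run it with node connection-test.js; if the ready message never appears, re-check the port and the Firmata installation before moving on.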
Blinking an LED bargraph

You can write the JavaScript program using the editor inside the Chrome app, or any text editor. We require the johnny-five library to create a board object with a "ready" handler. Our code for working with inputs and outputs will go inside the ready handler so that it will run after the Arduino has started up and communication has been established:

var five = require("johnny-five");
var board = new five.Board();
board.on("ready", function() {
  // code for button, dimmers, servo etc goes here
});

We'll treat the bargraph like a single output. It's connected to digital "pin" 5 (d5), so we'll need to provide this as a parameter when we create the Led object. The strobe function causes the LED to blink on and off. The parameter to the function indicates the number of milliseconds between toggling the LED on or off (one second in this case):

var led = new five.Led(5);
led.strobe( 1000 );

Running the code

Note: Make sure the power switch on your power module is switched on. If you are using the Chrome app, hit the Run button to start the program. You should see the LED bargraph start blinking. Any errors will be printed to the console below the code. If you have unplugged your Arduino since the last time you ran code via the app, you'll probably need to hit refresh and select the port for your device again from the drop-down above the code editor. The Chrome app is great for getting started, but eventually you'll want to switch to running programs using Node.js, because the Chrome app only supports a limited number of built-in libraries. Use a text editor to save your code to a file (e.g. blink.js) within your project directory, and run it from the command line using Node.js:

node blink.js

You can hit control-D (or control-C) to end the program.

Controlling a Servo

Johnny-Five includes a Servo class, but this is for controlling servo motors directly using PWM. The littleBits servo module already takes care of that for us, so we can treat it like a simple motor. Create a Motor object on pin 9 to correspond to the servo. We can start it moving using the start function. The parameter is a number between 0 and 255, which controls the speed. The stop function stops the servo. We'll use the board's wait function to stop the servo after 5 seconds (i.e. 5000 milliseconds):

var servo = new five.Motor(9);
servo.start(255);
this.wait(5000, function(){
  servo.stop();
});

In Part 2, we'll read data from our littleBits input modules and use these values to trigger changes to the servo and bargraph.

About the author

Anna Gerber is a full-stack developer with 15 years of experience in the university sector. Specializing in Digital Humanities, she was a Technical Project Manager at the University of Queensland's eResearch centre, and she has worked at Brisbane's Distributed System Technology Centre as a Research Scientist. Anna is a JavaScript robotics enthusiast who enjoys tinkering with soft circuits and 3D printers.

Fronting an external API with Ruby on Rails: Part 1
Mike Ball
09 Feb 2015
6 min read
Historically, a conventional Ruby on Rails application leverages server-side business logic, a relational database, and a RESTful architecture to serve dynamically-generated HTML. JavaScript-intensive applications and the widespread use of external web APIs, however, somewhat challenge this architecture. In many cases, Rails is tasked with performing as an orchestration layer, collecting data from various backend services and serving re-formatted JSON or XML to clients. In such instances, how is Rails' model-view-controller architecture still relevant? In this two-part post series, we'll create a simple Rails backend that makes requests to an external XML-based web service and serves JSON. We'll use RSpec for tests and Jbuilder for view rendering.

What are we building?

We'll create Noterizer, a simple Rails application that requests XML from externally hosted endpoints and re-renders the XML data as JSON at a single URL. To assist in this post, I've created NotesXmlService, a basic web application that serves two XML-based endpoints:

http://NotesXmlService.herokuapp.com/note-one
http://NotesXmlService.herokuapp.com/note-two

Why is this necessary in a real-world scenario? Fronting external endpoints with an application like Noterizer opens up a few opportunities:

Noterizer's endpoint could serve JavaScript clients who can't perform HTTP requests across domain names to the original, external API.
Noterizer's endpoint could reformat the externally hosted data to better serve its own clients' data formatting preferences.
Noterizer's endpoint is a single interface to the data; multiple requests are abstracted away by its backend.
Noterizer provides caching opportunities. While it's beyond the scope of this series, Rails can cache external request data, thus offloading traffic to the external API and avoiding any terms of service or rate limit violations imposed by the external service.

Setup

For this series, I'm using Mac OS 10.9.4, Ruby 2.1.2, and Rails 4.1.4. I'm assuming some basic familiarity with Git and the command line.

Clone and set up the repo

I've created a basic Rails 4 Noterizer app. Clone its repo, enter the project directory, and check out its tutorial branch:

$ git clone http://github.com/mdb/noterizer && cd noterizer && git checkout tutorial

Install its dependencies:

$ bundle install

Set up the test framework

Let's install RSpec for testing. Add the following to the project's Gemfile:

gem 'rspec-rails', '3.0.1'

Install rspec-rails:

$ bundle install

There's now an rspec generator available for the rails command. Let's generate a basic RSpec installation:

$ rails generate rspec:install

This creates a few new files in a spec directory:

├── spec
│   ├── rails_helper.rb
│   └── spec_helper.rb

We're going to make a few adjustments to our RSpec installation. First, because Noterizer does not use a relational database, delete the following ActiveRecord reference in spec/rails_helper.rb:

# Checks for pending migrations before tests are run.
# If you are not using ActiveRecord, you can remove this line.
ActiveRecord::Migration.maintain_test_schema!

Next, configure RSpec to be less verbose in its warning output; such verbose warnings are beyond the scope of this series. Remove the following line from .rspec:

--warnings

The RSpec installation also provides a spec rake task. Test this by running the following:

$ rake spec

You should see the following output, as there aren't yet any RSpec tests:

No examples found.

Finished in 0.00021 seconds (files took 0.0422 seconds to load)
0 examples, 0 failures

Note that a default Rails installation assumes tests live in a tests directory. RSpec uses a spec directory. For clarity's sake, you're free to delete the test directory from Noterizer.
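If you'd like to double-check the harness before writing real specs, a throwaway example works well. The file below is purely illustrative and not part of the Noterizer codebase; delete it once you've confirmed the framework runs:

# spec/sanity_spec.rb -- hypothetical file for verifying the setup
require 'spec_helper'

describe 'the RSpec setup' do
  it 'runs examples' do
    expect(1 + 1).to eq 2
  end
end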
Building a basic route and controller

Currently, Noterizer does not have any URLs; we'll create a single /notes URL route.

Creating the controller

First, generate a controller:

$ rails g controller notes

Note that this created quite a few files, including JavaScript files, stylesheet files, and a helpers module. These are not relevant to our NotesController, so let's undo our controller generation by removing all untracked files from the project. Note that you'll want to commit any changes you do want to preserve.

$ git clean -f

Now, open config/application.rb and add the following generator configuration:

config.generators do |g|
  g.helper false
  g.assets false
end

Re-running the generate command will now create only the desired files:

$ rails g controller notes

Testing the controller

Let's add a basic NotesController#index test to spec/controllers/notes_controller_spec.rb. The test looks like this:

require 'rails_helper'

describe NotesController, :type => :controller do
  describe '#index' do
    before :each do
      get :index
    end

    it 'successfully responds to requests' do
      expect(response).to be_success
    end
  end
end

This test currently fails when running rake spec, as we haven't yet created a corresponding route. Add the following route to config/routes.rb:

get 'notes' => 'notes#index'

The test still fails when running rake spec, because there isn't a proper #index controller action. Create an empty index method in app/controllers/notes_controller.rb:

class NotesController < ApplicationController
  def index
  end
end

rake spec still yields failing tests, this time because we haven't yet created a corresponding view. Let's create a view:

$ touch app/views/notes/index.json.jbuilder

To use this view, we'll need to tweak the NotesController a bit. Let's ensure that requests to the /notes route always return JSON via a before_filter run before each controller action:

class NotesController < ApplicationController
  before_filter :force_json

  def index
  end

  private

  def force_json
    request.format = :json
  end
end

Now, rake spec yields passing tests:

$ rake spec
.

Finished in 0.0107 seconds (files took 1.09 seconds to load)
1 example, 0 failures

Let's write one more test, asserting that the response returns the correct content type. Add the following to spec/controllers/notes_controller_spec.rb:

it 'returns JSON' do
  expect(response.content_type).to eq 'application/json'
end

Assuming rake spec confirms that the second test passes, you can also run the Rails server via the rails server command and visit the currently empty Noterizer http://localhost:3000/notes URL in your web browser.

Conclusion

In this first part of the series, we have created the basic route and controller for Noterizer, a basic example of a Rails application that fronts an external API. In the next blog post (Part 2), you will learn how to build out the backend, test the model, build up and test the controller, and also test the app with Jbuilder.

About this Author

Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media where he helps build web-based TV and video consumption applications.

Part 3: Migrating a WordPress Blog to Middleman and Deploying to Amazon S3
Mike Ball
09 Feb 2015
9 min read
Part 3: Migrating WordPress blog content and deploying to production In parts 1 and 2 of this series, we created middleman-demo, a basic Middleman-based blog, imported content from WordPress, and deployed middleman-demo to Amazon S3. Now that middleman-demo has been deployed to production, let’s design a continuous integration workflow that automates builds and deployments. In part 3, we’ll cover the following: Testing middleman-demo with RSpec and Capybara Integrating with GitHub and Travis CI Configuring automated builds and deployments from Travis CI If you didn’t follow parts 1 and 2, or you no longer have your original middleman-demo code, you can clone mine and check out the part3 branch:  $ git clone http://github.com/mdb/middleman-demo && cd middleman-demo && git checkout part3 Create some automated tests In software development, the practice of continuous delivery serves to frequently deploy iterative software bug fixes and enhancements, such that users enjoy an ever-improving product. Automated processes, such as tests, assist in rapidly validating quality with each change. middleman-demo is a relatively simple codebase, though much of its build and release workflow can still be automated via continuous delivery. Let’s write some automated tests for middleman-demo using RSpec and Capybara. These tests can assert that the site continues to work as expected with each change. Add the gems to the middleman-demo Gemfile:  gem 'rspec'gem 'capybara' Install the gems: $ bundle install Create a spec directory to house tests: $ mkdir spec As is the convention in RSpec, create a spec/spec_helper.rb file to house the RSpec configuration: $ touch spec/spec_helper.rb Add the following configuration to spec/spec_helper.rb to run middleman-demo during test execution: require "middleman" require "middleman-blog" require 'rspec' require 'capybara/rspec' Capybara.app = Middleman::Application.server.inst do set :root, File.expand_path(File.join(File.dirname(__FILE__), '..')) set :environment, :development end Create a spec/features directory to house the middleman-demo RSpec test files: $ mkdir spec/features Create an RSpec spec file for the homepage: $ touch spec/features/index_spec.rb Let’s create a basic test confirming that the Middleman Demo heading is present on the homepage. Add the following to spec/features/index_spec.rb: require "spec_helper" describe 'index', type: :feature do before do visit '/' end it 'displays the correct heading' do expect(page).to have_selector('h1', text: 'Middleman Demo') end end Run the test and confirm that it passes: $ rspec You should see output like the following: Finished in 0.03857 seconds (files took 6 seconds to load) 1 example, 0 failures Next, add a test asserting that the first blog post is listed on the homepage; confirm it passes by running the rspec command: it 'displays the "New Blog" blog post' do expect(page).to have_selector('ul li a[href="/blog/2014/08/20/new-blog/"]', text: 'New Blog') end As an example, let’s add one more basic test, this time asserting that the New Blog text properly links to the corresponding blog post. Add the following to spec/features/index_spec.rb and confirm that the test passes: it 'properly links to the "New Blog" blog post' do click_link 'New Blog' expect(page).to have_selector('h2', text: 'New Blog') end middleman-demo can be further tested in this fashion. The extent to which the specs test every element of the site’s functionality is up to you. 
At what point can it be confidently asserted that the site looks good, works as expected, and can be publicly deployed to users?

Push to GitHub

Next, push your middleman-demo code to GitHub. If you forked my original github.com/mdb/middleman-demo repository, skip this section.

1. Create a GitHub repository

If you don't already have a GitHub account, create one. Create a repository through GitHub's web UI called middleman-demo.

2. What should you do if your version of middleman-demo is not a git repository?

If your middleman-demo is already a git repository, skip to step 3. If you started from scratch and your code isn't already in a git repository, let's initialize one now. I'm assuming you have git installed and have some basic familiarity with it. Make a middleman-demo git repository:

$ git init && git add . && git commit -m 'initial commit'

Declare your git origin, where <your_git_url_from_step_1> is your GitHub middleman-demo repository URL:

$ git remote add origin <your_git_url_from_step_1>

Push to your GitHub repository:

$ git push origin master

You're done; skip step 3 and move on to Integrate with Travis CI.

3. If you cloned my mdb/middleman-demo repository…

If you cloned my middleman-demo git repository, you'll need to add your newly created middleman-demo GitHub repository as an additional remote:

$ git remote add my_origin <your_git_url_from_step_1>

If you are working in a branch, merge all your changes to master. Then push to your GitHub repository:

$ git push -u my_origin master

Integrate with Travis CI

Travis CI is a distributed continuous integration service that integrates with GitHub. It's free for open source projects. Let's configure Travis CI to run the middleman-demo tests when we push to the GitHub repository.

Log in to Travis CI

First, sign in to Travis CI using your GitHub credentials. Visit your profile. Find your middleman-demo repository in the "Repositories" list. Activate Travis CI for middleman-demo; click the toggle button "ON."

Create a .travis.yml file

Travis CI looks for a .travis.yml YAML file in the root of a repository. YAML is a simple, human-readable markup language; it's a popular option in authoring configuration files. The .travis.yml file informs Travis how to execute the project's build. Create a .travis.yml file in the root of middleman-demo:

$ touch .travis.yml

Configure Travis CI to use Ruby 2.1 when building middleman-demo. Add the following YAML to the .travis.yml file:

language: ruby
rvm: 2.1

Next, declare how Travis CI can install the necessary gem dependencies to build middleman-demo; add the following:

install: bundle install

Let's also add before_script, which runs the middleman-demo tests to ensure all tests pass in advance of a build:

before_script: bundle exec rspec

Finally, add a script that instructs Travis CI how to build middleman-demo:

script: bundle exec middleman build

At this point, the .travis.yml file should look like the following:

language: ruby
rvm: 2.1
install: bundle install
before_script: bundle exec rspec
script: bundle exec middleman build

Commit the .travis.yml file:

$ git add .travis.yml && git commit -m "added basic .travis.yml file"

Now, after pushing to GitHub, Travis CI will attempt to install middleman-demo dependencies using Ruby 2.1, run its tests, and build the site.
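If you haven't pushed since committing the file, a plain push (assuming the origin remote configured earlier) kicks off the first build:

$ git push origin master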
Travis CI’s command build output can be seen here: https://travis-ci.org/<your_github_username>/middleman-demo Add a build status badge Assuming the build passes, you should see a green build passing badge near the top right corner of the Travis CI UI on your Travis CI middleman-demo page. Let’s add this badge to the README.md file in middleman-demo, such that a build status badge reflecting the status of the most recent Travis CI build displays on the GitHub repository’s README. If one does not already exist, create a README.md file: $ touch README.md Add the following markdown, which renders the Travis CI build status badge: [![Build Status](https://travis-ci.org/<your_github_username>/middleman-demo.svg?branch=master)](https://travis-ci.org/<your_github_username>/middleman-demo) Configure continuous deployments Through continuous deployments, code is shipped to users as soon as a quality-validated change is committed. Travis CI can be configured to deploy a middleman-demo build with each successful build. Let’s configure Travis CI to continuously deploy middleman-demo to the S3 bucket created in part 2 of this tutorial series. First, install the travis command-line tools: $ gem install travis Use the travis command-line tools to set S3 deployments. Enter the following; you’ll be prompted for your S3 details (see the example below if you’re unsure how to answer): $ travis setup s3 An example response is: Access key ID: <your_aws_access_key_id> Secret access key: <your_aws_secret_access_key_id> Bucket: <your_aws_bucket> Local project directory to upload (Optional): build S3 upload directory (Optional): S3 ACL Settings (private, public_read, public_read_write, authenticated_read, bucket_owner_read, bucket_owner_full_control): |private| public_read Encrypt secret access key? |yes| yes Push only from <your_github_username>/middleman-demo? |yes| yes This automatically edits the .travis.yml file to include the following deploy information: deploy: provider: s3 access_key_id: <your_aws_access_key_id> secret_access_key: secure: <your_encrypted_aws_secret_access_key_id> bucket: <your_s3_bucket> local-dir: build acl: !ruby/string:HighLine::String public_read on: repo: <your_github_username>/middleman-demo Add one additional option, informing Travis to preserve the build directory for use during the deploy process: skip_cleanup: true The final .travis.yml file should look like the following: language: ruby rvm: 2.1 install: bundle install before_script: bundle exec rspec script: bundle exec middleman build deploy: provider: s3 access_key_id: <your_aws_access_key> secret_access_key: secure: <your_encrypted_aws_secret_access_key> bucket: <your_aws_bucket> local-dir: build skip_cleanup: true acl: !ruby/string:HighLine::String public_read on: repo: <your_github_username>/middleman-demo Confirm that your continuous integration works Commit your changes: $ git add .travis.yml && git commit -m "added travis deploy configuration" Push to GitHub and watch the build output on Travis CI: https://travis-ci.org/<your_github_username>/middleman-demo If all works as expected, Travis CI will run the middleman-demo tests, build the site, and deploy to the proper S3 bucket. Recap Throughout this series, we’ve examined the benefits of static site generators and covered some basics regarding Middleman blogging. We’ve learned how to use the wp2middleman gem to migrate content from a WordPress blog, and we’ve learned how to deploy Middleman to Amazon’s cloud-based Simple Storage Service (S3). 
We’ve configured Travis CI to run automated tests, produce a build, and automate deployments. Beyond what’s been covered within this series, there’s an extensive Middleman ecosystem worth exploring, as well as numerous additional features. Middleman’s custom extensions seek to extend basic Middleman functionality through third-party gems. Read more about Middleman at Middlemanapp.com. About this author Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media where he helps build web-based TV and video consumption applications. 

Managing local environments
Packt
09 Feb 2015
15 min read
In this article by Juampy Novillo Requena, author of Drush for Developers, Second Edition, we will learn that Drush site aliases offer a useful way to manage local environments without having to be within Drupal's root directory. (For more resources related to this topic, see here.)

A site alias consists of an array of settings for Drush to access a Drupal project. Site aliases can be defined in different locations, using various file structures; you can find all of the variations at drush topic docs-aliases. In this article, we will use the following two:

We will define local site aliases at $HOME/.drush/aliases.drushrc.php, which are accessible anywhere for our command-line user.
We will define a group of site aliases to manage the development and production environments of our sample Drupal project. These will be defined at sites/all/drush/example.aliases.drushrc.php.

In the following example, we will use the site-alias command to generate a site alias definition for our sample Drupal project:

$ cd /home/juampy/projects/example
$ drush --uri=example.local site-alias --alias-name=example.local @self
$aliases["example.local"] = array (
  'root' => '/home/juampy/projects/example',
  'uri' => 'example.local',
  '#name' => 'self',
);

The preceding command printed an array structure for the $aliases variable. You can see the root and uri options. There is also an internal property called #name that we can ignore. Now, we will place the preceding output at $HOME/.drush/aliases.drushrc.php so that we can invoke Drush commands on our local Drupal project from anywhere in the command-line interface:

<?php

/**
 * @file
 * User-wide site alias definitions.
 *
 * Site aliases defined here are available everywhere for the current user.
 */

// Sample Drupal project.
$aliases["example.local"] = array (
  'root' => '/home/juampy/projects/example',
  'uri' => 'example.local',
);

Here is how we use this site alias in a command. The following example runs the core-status command for our sample Drupal project:

$ cd /home/juampy
$ drush @example.local core-status
Drupal version        : 7.29-dev
Site URI              : example.local
Database driver       : mysql
Database username     : root
Database name         : drupal7x
Database              : Connected
...
Drush alias files     : /home/juampy/.drush/aliases.drushrc.php
Drupal root           : /home/juampy/projects/example
Site path             : sites/default
File directory path   : sites/default/files

Drush loaded our site alias file and used the root and uri options defined in it to find and bootstrap Drupal. The preceding command is equivalent to the following one:

$ drush --root=/home/juampy/projects/example --uri=example.local core-status

While $HOME/.drush/aliases.drushrc.php is a good place to define site aliases in your local environment, /etc/drush is a first-class location for site aliases on servers.
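Any other Drush command can be pointed at the alias in the same way. For example, assuming the classic cache-clear command available in Drush 6 and 7, you can flush all caches of the local project without leaving your home directory:

$ drush @example.local cache-clear all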
Now, let's discover how we can connect to remote environments via Drush.

Managing remote environments

Site aliases that reference remote websites can be accessed by Drush through a password-less SSH connection (http://en.wikipedia.org/wiki/Secure_Shell). Before we start with these, let's make sure that we meet the requirements.

Verifying requirements

First, it is recommended to install the same version of Drush on all the servers that host your website. Drush will fail to run a command if it is not installed on the remote machine, except for core-rsync, which runs rsync, a non-Drush command that is available in Unix-like systems. If you can already access the server that hosts your Drupal project through a public key, then skip to the next section. If not, you can either use the pushkey command from Drush extras (https://www.drupal.org/project/drush_extras), or continue reading to set it up manually.

Accessing a remote server through a public key

The first thing that we need to do is generate a public key for our command-line user on our local machine. Open the command-line interface and execute the following command. We will explain the output step by step:

$ cd $HOME
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/juampy/.ssh/id_rsa):

By default, SSH keys are created at $HOME/.ssh/. It is fine to go ahead with the suggested path in the preceding prompt, so let's hit Enter and continue:

Created directory '/home/juampy/.ssh'.
Enter passphrase (empty for no passphrase): *********
Enter same passphrase again: *********

If the .ssh directory does not exist for the current user, the ssh-keygen command will create it with the correct permissions. We are next prompted to enter a passphrase. It is highly recommended to set one, as it makes our private key safer. Here is the rest of the output once we have entered a passphrase:

Your identification has been saved in /home/juampy/.ssh/id_rsa.
Your public key has been saved in /home/juampy/.ssh/id_rsa.pub.
The key fingerprint is:
6g:bf:3j:a2:00:03:a6:00:e1:43:56:7a:a0:c7:e9:f3 juampy@juampy-box
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|..               |
|o..*             |
|o + . . S        |
| + * = . .       |
| = O o . .       |
|   *.o * . .     |
|   .oE oo.       |
+-----------------+

The result is a new hidden directory under our $HOME path named .ssh. This directory contains a private key file (id_rsa) and a public key file (id_rsa.pub). The former is to be kept secret by us, while the latter is the one we will copy into remote servers where we want to gain access. Now that we have a public key, we will announce it to the SSH agent so that it can be used without having to enter the passphrase every time:

$ ssh-add ~/.ssh/id_rsa
Identity added: /home/juampy/.ssh/id_rsa (/home/juampy/.ssh/id_rsa)

Our key is ready to be used. Assuming that we know an SSH username and password to access the server that hosts the development environment of our website, we will now copy our public key into it. In the following command, replace exampledev and dev.example.com with your own username and server URL:

$ ssh-copy-id exampledev@dev.example.com
exampledev@dev.example.com's password:
Now try logging into the machine, with "ssh 'exampledev@dev.example.com'", and check in:

  ~/.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

Our public key has been copied to the server, and now we do not need to enter a password to identify ourselves anymore when we log in to it. We could have logged on to the server ourselves and manually copied the key, but the benefit of using the ssh-copy-id command is that it takes care of setting the right permissions on the ~/.ssh/authorized_keys file.
Let's test it by logging in to the server:

$ ssh exampledev@dev.example.com
Welcome!

We are ready to set up remote site aliases and run commands using the credentials that we have just configured. We will do this in the next section. If you have any trouble setting up SSH authentication, you can find plenty of debugging tips at https://help.github.com/articles/generating-ssh-keys and http://git-scm.com/book/en/Git-on-the-Server-Generating-Your-SSH-Public-Key.

Defining a group of remote site aliases for our project

Before diving into the specifics of how to define a Drush site alias, let's assume the following scenario: you are part of a development team working on a project that has two environments, each one located on its own server:

Development, which holds the bleeding edge version of the project's codebase. It can be reached at http://dev.example.com.
Production, which holds the latest stable release and real data. It can be reached at http://www.example.com.

Additionally, there might be a variable number of local environments for each developer on their working machines, although these do not need a site alias. Given the preceding scenario, and assuming that we have SSH access to the development and production servers, we will create a group of site aliases that identify them. We will define this group at sites/all/drush/example.aliases.drushrc.php within our Drupal project:

<?php
/**
 * @file
 *
 * Site alias definitions for Example project.
 */

// Development environment.
$aliases['dev'] = array(
  'root' => '/var/www/exampledev/docroot',
  'uri' => 'dev.example.com',
  'remote-host' => 'dev.example.com',
  'remote-user' => 'exampledev',
);

// Production environment.
$aliases['prod'] = array(
  'root' => '/var/www/exampleprod/docroot',
  'uri' => 'www.example.com',
  'remote-host' => 'prod.example.com',
  'remote-user' => 'exampleprod',
);

The preceding file defines two arrays for the $aliases variable, keyed by environment name. Drush will find this group of site aliases when invoked from the root of our Drupal project. There are many more settings available, which you can find by reading the contents of the drush topic docs-aliases command. These site aliases contain options known to us: root and uri refer to the remote root path and the hostname of the remote Drupal project. There are also two new settings: remote-host and remote-user. The former defines the URL of the server hosting the website, while the latter is the user with which Drush authenticates when connecting via SSH. Now that we have a group of Drush site aliases to work with, the following section will cover some examples using them.

Using site aliases in commands

A site alias prepended to a command name tells Drush to bootstrap that site and run the command there. Our site aliases are @example.dev and @example.prod. The word example comes from the filename example.aliases.drushrc.php, while dev and prod are the two keys that we added to the $aliases array. Let's see them in action with a few command examples:

Check the status of the Development environment:

$ cd /home/juampy/projects/example
$ drush @example.dev status
Drupal version        : 7.26
Site URI              : http://dev.example.com
Database driver       : mysql
Database username     : exampledev
Drush temp directory  : /tmp
...
Drush alias files              : /home/juampy/projects/example/sites/all/drush/example.aliases.drushrc.php     Drupal root                    : /var/www/exampledev/docroot ...                                           The preceding output shows the current status of our development environment. Drush sent the command via SSH to our development environment and rendered back the resulting output. Most Drush commands support site aliases. Let's see the next example. Log in to the development environment and copy all the files from the files directory located at the production environment: $ drush @example.dev site-ssh Welcome to example.dev server! $ cd `drush @example.dev drupal-directory` $ drush core-rsync @example.prod:%files @self:%files You will destroy data from /var/www/exampledev/docroot/sites/default/files and replace with data from exampleprod@prod.example.com:/var/www/exampleprod/docroot/sites/default/files/ Do you really want to continue? (y/n): y Note the use of @self in the preceding command, which is a special Drush site alias that represents the current Drupal project where we are located. We are using @self instead of @example.dev because we are already logged inside the development environment. Now, we will move on to the next example. Open a connection with the Development environment's database: $ drush @example.dev sql-cli Welcome to the MySQL monitor. Commands end with ; or g. mysql> select database(); +------------+ | database() | +------------+ | exampledev | +------------+ 1 row in set (0.02 sec) The preceding command will be identical to the following set of commands: drush @example.dev site-ssh cd /var/www/exampledev drush sql-cli However, Drush is so clever that it opens the connection for us. Isn't this neat? This is one of the commands I use most frequently. Let's finish by looking at our last example. Log in as the administrator user in production: $ drush @example.prod user-login http://www.example.com/user/reset/1/some-long-token/login Created new window in existing browser session. The preceding command creates a login URL and attempts to open your default browser with it. I love Drush! Summary In this article, we covered practical examples with site aliases. We started by defining a site alias for our local Drupal project, and then went on to write a group of site aliases to manage remote environments for a hypothetical Drupal project with a development and production site. Before using site aliases for our remote environments, we covered the basics of setting up SSH in order for Drush to connect to these servers and run commands there. Resources for Article: Further resources on this subject: Installing and Configuring Drupal [article] Installing and Configuring Drupal Commerce [article] 25 Useful Extensions for Drupal 7 Themers [article]

Advanced Less Coding
Packt
09 Feb 2015
40 min read
In this article by Bass Jobsen, author of the book Less Web Development Cookbook, you will learn:

Giving your rules importance with the !important statement
Using mixins with multiple parameters
Using duplicate mixin names
Building a switch leveraging argument matching
Avoiding individual parameters to leverage the @arguments variable
Using the @rest... variable to use mixins with a variable number of arguments
Using mixins as functions
Passing rulesets to mixins
Using mixin guards (as an alternative for the if…else statements)
Building loops leveraging mixin guards
Applying guards to the CSS selectors
Creating color contrasts with Less
Changing the background color dynamically
Aggregating values under a single property

(For more resources related to this topic, see here.)

Giving your rules importance with the !important statement

The !important statement in CSS can be used to get some style rules always applied, no matter where that rule appears in the CSS code. In Less, the !important statement can be applied with mixins and variable declarations too.

Getting ready

You can write the Less code for this recipe with your favorite editor. After that, you can use the command-line lessc compiler to compile the Less code. Finally, you can inspect the compiled CSS code to see where the !important statements appear. To see the real effect of the !important statements, you should compile the Less code client side, with the client-side compiler less.js, and watch the effect in your web browser.

How to do it…

Create an important.less file that contains code like the following snippet:

.mixin() {
  color: red;
  font-size: 2em;
}
p {
  &.important {
    .mixin() !important;
  }
  &.unimportant {
    .mixin();
  }
}

After compiling the preceding Less code with the command-line lessc compiler, you will find the following code output produced in the console:

p.important {
  color: red !important;
  font-size: 2em !important;
}
p.unimportant {
  color: red;
  font-size: 2em;
}

You can, for instance, use the following snippet of HTML code to see the effect of the !important statements in your browser:

<p class="important" style="color:green;font-size:4em;">important</p>
<p class="unimportant" style="color:green;font-size:4em;">unimportant</p>

Your HTML document should also include the important.less and less.js files, as follows:

<link rel="stylesheet/less" type="text/css" href="important.less">
<script src="less.js" type="text/javascript"></script>

Finally, view the result in your browser: the paragraph with the important class is rendered with the mixin's styles (red, 2em) because the !important statement even beats the inline styles, while the unimportant paragraph keeps its green, 4em inline styles.

How it works…

In Less, you can use the !important statement not only for properties, but also with mixins. When !important is set for a certain mixin, all properties of this mixin will be declared with the !important statement. You can easily see this effect when inspecting the properties of the p.important selector; both the color and font-size properties got the !important statement after compiling the code.

There's more…

You should use the !important statements with care, as the only way to overrule an !important statement is to use another !important statement. The !important statement overrules the normal CSS cascading and specificity rules, and even the inline styles. Any incorrect or unnecessary use of the !important statements in your Less (or CSS) code will make your code messy and difficult to maintain. In most cases where you try to overrule a style rule, you should give preference to selectors with a higher specificity and not use the !important statements at all.
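For example, a more specific selector usually wins the cascade without any !important at all; the .sidebar wrapper class below is an assumption about your markup, used only for illustration:

// Instead of overruling p.important with !important,
// win the cascade with a more specific selector:
p.important {
  color: red;
}
.sidebar p.important {
  color: blue; // applies inside .sidebar thanks to higher specificity
}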
With Less V2, you can also use the !important statement when declaring your variables. A declaration with the !important statement can look like the following code:

@main-color: darkblue !important;

Using mixins with multiple parameters

In this section, you will learn how to use mixins with more than one parameter.

Getting ready

For this recipe, you will have to create a Less file, for instance, mixins.less. You can compile this mixins.less file with the command-line lessc compiler.

How to do it…

Create the mixins.less file and write down the following Less code into it:

.mixin(@color; @background: black;) {
  background-color: @background;
  color: @color;
}
div {
  .mixin(red; white;);
}

Compile the mixins.less file by running the following command in the console:

lessc mixins.less

Inspect the CSS code output on the console, and you will find that it looks like the following:

div {
  background-color: #ffffff;
  color: #ff0000;
}

How it works…

In Less, parameters are either semicolon-separated or comma-separated. Using a semicolon as the separator is preferred because the usage of a comma is ambiguous: the comma is not only used to separate parameters, but also to define a csv list, which can be an argument itself. The mixin in this recipe accepts two arguments. The first parameter sets the @color variable, while the second parameter sets the @background variable and has a default value set to black. In the argument list, default values are defined by writing a colon behind the variable's name, followed by the value. Parameters with a default value are optional when calling the mixins, so the mixin in this recipe can also be called with the following line of code:

.mixin(red);

Because the second argument has a default value set to black, the .mixin(red); call also matches the .mixin(@color; @background:black){} mixin, as described in the Building a switch leveraging argument matching recipe. Only variables set as parameters of a mixin are set inside the scope of the mixin. You can see this when compiling the following Less code:

.mixin(@color:blue){
  color2: @color;
}
@color: red;
div {
  color1: @color;
  .mixin;
}

The preceding Less code compiles into the following CSS code:

div {
  color1: #ff0000;
  color2: #0000ff;
}

As you can see in the preceding example, setting @color inside the mixin to its default value does not influence the value of @color assigned in the main scope. So lazy loading is applied only on variables inside the same scope; nevertheless, you should note that variables assigned in a mixin will leak into the caller. The leaking of variables can be used to use mixins as functions, as described in the Using mixins as functions recipe.

There's more…

Consider the mixin definition in the following Less code:

.mixin(@font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;) {
  font-family: @font-family;
}

The semicolon added at the end of the list prevents the fonts after the "Helvetica Neue" font name in the csv list from being read as arguments for this mixin. If the argument list contains any semicolon, the Less compiler will use semicolons as the separator. In the CSS3 specification, among others, the border and background shorthand properties accept csv. Also, note that the Less compiler allows you to use named parameters when calling mixins.
This can be seen in the following Less code that uses the @color variable as a named parameter:

.mixin(@width:50px; @color: yellow) {
  width: @width;
  color: @color;
}
span {
  .mixin(@color: green);
}

The preceding Less code will compile into the following CSS code:

span {
  width: 50px;
  color: #008000;
}

Note that in the preceding code, #008000 is the hexadecimal representation for the green color. When using named parameters, their order does not matter.

Using duplicate mixin names

When your Less code contains one or more mixins with the same name, the Less compiler compiles them all into the CSS code. If the mixin has parameters (see the Building a switch leveraging argument matching recipe), the number of parameters must also match.

Getting ready

Use your favorite text editor to create and edit the Less files used in this recipe.

How to do it…

Create a file called mixins.less that contains the following Less code:

.mixin(){
  height:50px;
}
.mixin(@color) {
  color: @color;
}

.mixin(@width) {
  color: green;
  width: @width;
}

.mixin(@color; @width) {
  color: @color;
  width: @width;
}

.selector-1 {
  .mixin(red);
}
.selector-2 {
  .mixin(red; 500px);
}

Compile the Less code from step 1 by running the following command in the console:

lessc mixins.less

After running the command from the previous step, you will find the following CSS code output on the console:

.selector-1 {
  color: #ff0000;
  color: green;
  width: #ff0000;
}
.selector-2 {
  color: #ff0000;
  width: 500px;
}

How it works…

The .selector-1 selector contains the .mixin(red); call. The .mixin(red); call does not match the .mixin(){}; mixin, as the number of arguments does not match. On the other hand, both .mixin(@color){}; and .mixin(@width){}; match the call. For this reason, both of these mixins compile into the CSS code. The .mixin(red; 500px); call inside the .selector-2 selector will match only the .mixin(@color; @width){}; mixin, so all other mixins with the same .mixin name will be ignored by the compiler when building the .selector-2 selector. The compiled CSS code for the .selector-1 selector also contains the width: #ff0000; property value, as the .mixin(@width){}; mixin matches the call too. Setting the width property to a color value makes no sense in CSS; the Less compiler does not check for this kind of error. In this recipe, you can also rewrite the .mixin(@width){}; mixin as follows: .mixin(@width) when (ispixel(@width)){};.

There's more…

Maybe you have noted that the .selector-1 selector contains two color properties. The Less compiler does not remove duplicate properties unless the value is also the same. The CSS code sometimes should contain duplicate properties in order to provide a fallback for older browsers.
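A typical case, shown here as a small sketch rather than a recipe from the book, is emitting a solid color first so that older browsers that ignore rgba() values still get a usable background:

.background(@color) {
  background-color: @color;            // solid fallback for older browsers
  background-color: fade(@color, 90%); // compiles to an rgba() value
}
div {
  .background(#000);
}

// Compiles to:
// div {
//   background-color: #000000;
//   background-color: rgba(0, 0, 0, 0.9);
// }

Because the two values differ, the compiler keeps both declarations, and browsers that understand rgba() use the second one.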
Building a switch leveraging argument matching

A Less mixin will compile into the final CSS code only when the number of arguments of the caller and the mixin match. This feature of Less can be used to build switches. Switches enable you to change the behavior of a mixin conditionally. In this recipe, you will create a mixin, or better yet, three mixins with the same name.

Getting ready

Use the command-line lessc compiler to evaluate the effect of this mixin. The compiler will output the final CSS to the console. You can use your favorite text editor to edit the Less code. This recipe makes use of browser-vendor prefixes, such as the -ms-transform prefix. CSS3 introduced vendor-specific rules, which offer you the possibility to write some additional CSS, applicable to only one browser. These rules allow browsers to implement proprietary CSS properties that would otherwise have no working standard (and might never actually become the standard). To find out which prefixes should be used for a certain property, you can consult the Can I use database (available at http://caniuse.com/).

How to do it…

Create a switch.less Less file, and write down the following Less code into it:

@browserversion: ie9;
.mixin(ie9; @degrees){
  transform:rotate(@degrees);
  -ms-transform:rotate(@degrees);
  -webkit-transform:rotate(@degrees);
}
.mixin(ie10; @degrees){
  transform:rotate(@degrees);
  -webkit-transform:rotate(@degrees);
}
.mixin(@_; @degrees){
  transform:rotate(@degrees);
}
div {
  .mixin(@browserversion; 70deg);
}

Compile the Less code from step 1 by running the following command in the console:

lessc switch.less

Inspect the compiled CSS code that has been output to the console, and you will find that it looks like the following code:

div {
  -ms-transform: rotate(70deg);
  -webkit-transform: rotate(70deg);
  transform: rotate(70deg);
}

Finally, run the following command, and you will find that the compiled CSS indeed differs from that of step 2:

lessc --modify-var="browserversion=ie10" switch.less

Now the compiled CSS code will look like the following code snippet:

div {
  -webkit-transform: rotate(70deg);
  transform: rotate(70deg);
}

How it works…

The switch in this recipe is the @browserversion variable, which can easily be changed just before compiling your code. Instead of changing your code, you can also set the --modify-var option of the compiler. Depending on the value of the @browserversion variable, the mixins that match will be compiled, and the other mixins will be ignored by the compiler. The .mixin(ie10; @degrees){} mixin matches the .mixin(@browserversion; 70deg); call only when the value of the @browserversion variable is equal to ie10. Note that the first ie10 argument of the mixin is used only for matching (argument = ie10) and does not assign any value. You will note that the .mixin(@_; @degrees){} mixin will match each call, no matter what the value of the @browserversion variable is. The .mixin(ie9; 70deg); call also compiles the .mixin(@_; @degrees){} mixin. Although this should result in the transform: rotate(70deg); property being output twice, you will find only one occurrence. Since the property got exactly the same value twice, the compiler outputs the property only once.

There's more…

Not only switches, but also mixin guards, as described in the Using mixin guards (as an alternative for the if…else statements) recipe, can be used to set some properties conditionally. Current versions of Less also support JavaScript evaluation; JavaScript code put between back quotes will be evaluated by the compiler, as can be seen in the following Less code example:

@string: "example in lower case";
p {
  &:after {
    content: "`@{string}.toUpperCase()`";
  }
}

The preceding code will be compiled into CSS, as follows:

p:after {
  content: "EXAMPLE IN LOWER CASE";
}

When using client-side compiling, JavaScript evaluation can also be used to get some information from the browser environment, such as the screen width (screen.width); but, as mentioned already, you should not use client-side compiling for production environments. Because you can't be sure that future versions of Less will still support JavaScript evaluation, and alternative compilers not written in JavaScript cannot evaluate the JavaScript code, you should always try to write your Less code without JavaScript.
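For many such use cases, the switch pattern from this recipe is a pure-Less alternative to JavaScript evaluation. As a quick sketch (the @env variable and the URLs are made up for this example), you can vary environment-specific values at compile time by combining argument matching, variable leaking, and string interpolation:

@env: dev;
.base-url(dev)  { @base-url: "http://localhost:3000"; }
.base-url(prod) { @base-url: "http://www.example.com"; }
body {
  .base-url(@env); // the @base-url variable leaks into this scope
  background-image: url("@{base-url}/images/bg.png");
}

// Compile the production variant without touching the code:
// lessc --modify-var="env=prod" switch.less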
Avoiding individual parameters to leverage the @arguments variable

In the Less code, the @arguments variable has a special meaning inside mixins: it contains all arguments passed to the mixin. In this recipe, you will use the @arguments variable together with the CSS url() function to set a background image for a selector.

Getting ready

You can inspect the compiled CSS code in this recipe after compiling the Less code with the command-line lessc compiler. Alternatively, you can inspect the results in your browser using the client-side less.js compiler. When inspecting the result in your browser, you will also need an example image that can be used as a background image. Use your favorite text editor to create and edit the Less files used in this recipe.

How to do it…

Create a background.less file that contains the following Less code:

.background(@color; @image; @repeat: no-repeat; @position: top right;) {
  background: @arguments;
}

div {
  .background(#000; url("./images/bg.png"));
  width:300px;
  height:300px;
}

Finally, inspect the compiled CSS code, and you will find that it looks like the following code snippet:

div {
  background: #000000 url("./images/bg.png") no-repeat top right;
  width: 300px;
  height: 300px;
}

How it works…

The four parameters of the .background() mixin are assigned as a space-separated list to the @arguments variable. After that, the @arguments variable can be used to set the background property. Other CSS properties, for example the margin and padding properties, accept space-separated lists too. Note that the @arguments variable does not contain only the parameters that have been set explicitly by the caller, but also the parameters set by their default values. You can easily see this when inspecting the compiled CSS code of this recipe. The .background(#000; url("./images/bg.png")); caller doesn't set the @repeat or @position argument, but you will find their values in the compiled CSS code.

Using the @rest... variable to use mixins with a variable number of arguments

As you can also see in the Using mixins with multiple parameters and Using duplicate mixin names recipes, only matching mixins are compiled into the final CSS code. In some situations, you don't know the number of parameters in advance, or you want a mixin to apply regardless of the number of parameters. In these situations, you can use the special ... syntax or the @rest... variable to create mixins that match independently of the number of parameters.

Getting ready

You will have to create a file called rest.less, and this file can be compiled with the command-line lessc compiler. You can edit the Less code with your favorite editor.

How to do it…

Create a file called rest.less that contains the following Less code:

.mixin(@a...) {
  .set(@a) when (iscolor(@a)) {
    color: @a;
  }
  .set(@a) when (length(@a) = 2) {
    margin: @a;
  }
  .set(@a);
}
p {
  .mixin(red);
}
p {
  .mixin(2px;4px);
}

Compile the rest.less file from step 1 using the following command in the console:

lessc rest.less

Inspect the CSS code output to the console, which will look like the following code:

p {
  color: #ff0000;
}
p {
  margin: 2px 4px;
}

How it works…

The special ... syntax (three dots) can be used as an argument for a mixin. Mixins with the ... syntax in their argument list match any number of arguments. When you put a variable name starting with an @ in front of the ... syntax, all parameters are assigned to that variable. You will find a list of examples of mixins that use the special ...
syntax as follows:

.mixin(@a; ...){}: This mixin matches 1-N arguments
.mixin(...){}: This mixin matches 0-N arguments; note that mixin() without any argument matches only 0 arguments
.mixin(@a: 1; @rest...){}: This mixin matches 0-N arguments; note that the first argument is assigned to the @a variable, and all other arguments are assigned as a space-separated list to @rest

Because the @rest... variable contains a space-separated list, you can process it with the Less built-in list functions, such as length() and extract().

Using mixins as functions

People who are used to functional programming expect a mixin to change or return a value. In this recipe, you will learn to use mixins as functions that return a value. Here, the value of the width property inside the div.small and div.big selectors will be set to the length of the longest side of a right-angled triangle, computed from the lengths of the two shorter sides using the Pythagorean theorem.

Getting ready

The best and easiest way to inspect the results of this recipe is to compile the Less code with the command-line lessc compiler. You can edit the Less code with your favorite editor.

How to do it…

Create a file called pythagoras.less that contains the following Less code:

.longestSide(@a,@b) {
  @length: sqrt(pow(@a,2) + pow(@b,2));
}
div {
  &.small {
    .longestSide(3,4);
    width: @length;
  }
  &.big {
    .longestSide(6,7);
    width: @length;
  }
}

Compile the pythagoras.less file from step 1 using the following command in the console:

lessc pythagoras.less

Inspect the CSS code output on the console after compilation, and you will see that it looks like the following code snippet:

div.small {
  width: 5;
}
div.big {
  width: 9.21954446;
}

How it works…

Variables set inside a mixin become available inside the scope of the caller. This specific behavior of the Less compiler was used in this recipe to set the @length variable and make it available in the scope of the div.small and div.big selectors, the callers. As you can see, you can use the mixin in this recipe more than once. With every call, a new scope is created, and both selectors get their own value of @length. Also, note that variables set inside the mixin do not overwrite variables with the same name that are set in the caller itself. Take, for instance, the following code:

.mixin() {
  @variable: 1;
}
.selector {
  @variable: 2;
  .mixin;
  property: @variable;
}

The preceding code will compile into the CSS code, as follows:

.selector {
  property: 2;
}

There's more…

Note that variables won't leak from the mixins to the caller in the following two situations:

Inside the scope of the caller, a variable with the same name has already been defined (lazy loading will be applied)
The variable has been previously defined by another mixin call (lazy loading will not be applied)

Passing rulesets to mixins

Since Version 1.7, Less allows you to pass complete rulesets as an argument for mixins. Rulesets, including Less code, can be assigned to variables and passed into mixins, which also allows you to wrap blocks of the CSS code defined inside mixins. In this recipe, you will learn how to do this.

Getting ready

For this recipe, you will have to create a Less file called keyframes.less, for instance. You can compile this keyframes.less file with the command-line lessc compiler. Finally, inspect the CSS code output to the console.
Passing rulesets to mixins
Since Version 1.7, Less allows you to pass complete rulesets as an argument for mixins. Rulesets, including the Less code, can be assigned to variables and passed into mixins, which also allows you to wrap blocks of the CSS code defined inside mixins. In this recipe, you will learn how to do this.
Getting ready
For this recipe, you will have to create a Less file called keyframes.less. You can compile this keyframes.less file with the command-line lessc compiler. Finally, inspect the CSS code output to the console.
How to do it…
Create the keyframes.less file, and write down the following Less code into it:

// Keyframes
.keyframe(@name; @rules) {
  @-webkit-keyframes @name {
    @rules();
  }
  @-o-keyframes @name {
    @rules();
  }
  @keyframes @name {
    @rules();
  }
}

.keyframe(progress-bar-stripes; {
  from { background-position: 40px 0; }
  to   { background-position: 0 0; }
});

Compile the keyframes.less file by running the following command in the console:

lessc keyframes.less

Inspect the CSS code output on the console and you will find that it looks like the following code:

@-webkit-keyframes progress-bar-stripes {
  from {
    background-position: 40px 0;
  }
  to {
    background-position: 0 0;
  }
}
@-o-keyframes progress-bar-stripes {
  from {
    background-position: 40px 0;
  }
  to {
    background-position: 0 0;
  }
}
@keyframes progress-bar-stripes {
  from {
    background-position: 40px 0;
  }
  to {
    background-position: 0 0;
  }
}

How it works…
Rulesets wrapped between curly brackets are passed as an argument to the mixin. A mixin's arguments are assigned to a (local) variable. When you assign the ruleset to the @rules variable, you are enabled to call @rules(); to mix the ruleset in. Note that the passed rulesets can contain the Less code, such as built-in functions, too. You can see this by compiling the following Less code:

.mixin(@color; @rules) {
  @othercolor: green;
  @media (print) {
    @rules();
  }
}

p {
  .mixin(red; {
    color: lighten(@othercolor, 20%);
    background-color: darken(@color, 20%);
  });
}

The preceding Less code will compile into the following CSS code:

@media (print) {
  p {
    color: #00e600;
    background-color: #990000;
  }
}

A group of CSS properties, nested rulesets, or media declarations stored in a variable is called a detached ruleset. Less offers support for the detached rulesets since Version 1.7.
There's more…
As you could see in the last example in the previous section, rulesets passed as an argument can be wrapped in the @media declarations too. This enables you to create mixins that, for instance, wrap any passed ruleset into a @media declaration or class. Consider the example Less code shown here:

.smallscreens-and-olderbrowsers(@rules) {
  .lt-ie9 & {
    @rules();
  }
  @media (min-width: 768px) {
    @rules();
  }
}

nav {
  float: left;
  width: 20%;
  .smallscreens-and-olderbrowsers({
    float: none;
    width: 100%;
  });
}

The preceding Less code will compile into the CSS code, as follows:

nav {
  float: left;
  width: 20%;
}
.lt-ie9 nav {
  float: none;
  width: 100%;
}
@media (min-width: 768px) {
  nav {
    float: none;
    width: 100%;
  }
}

The style rules wrapped in the .lt-ie9 class can, for instance, be used with Paul Irish's <html> conditional classes technique or Modernizr. Now you can call the .smallscreens-and-olderbrowsers(){} mixin anywhere in your code and pass any ruleset to it. All passed rulesets get wrapped in the .lt-ie9 class or the @media (min-width: 768px) declaration now. When your requirements change, you possibly have to change only these wrappers once.
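Detached rulesets do not have to travel through an argument list; you can also assign one to a variable directly and call that variable like a mixin. A minimal sketch (the names are illustrative):

@clearfix: {
  &:after {
    content: "";
    display: table;
    clear: both;
  }
};

.row {
  @clearfix();
}

The & inside the detached ruleset is resolved against the caller, so the preceding code compiles into a .row:after rule.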
Using mixin guards (as an alternative for the if…else statements)
Most programmers are familiar with the if…else statements in their code. Less does not have these if…else statements. Less tries to follow the declarative nature of CSS when possible and, for that reason, uses guards for matching expressions. In Less, conditional execution has been implemented with guarded mixins. Guarded mixins use the same logical and comparison operators as the @media feature in CSS does.
Getting ready
You can compile the Less code in this recipe with the command-line lessc compiler. Also, check the compiler options; you can find them by running the lessc command in the console without any argument. In this recipe, you will have to use the --modify-var option.
How to do it…
Create a Less file named guards.less, which contains the following Less code:

@color: white;

.mixin(@color) when (luma(@color) >= 50%) {
  color: black;
}
.mixin(@color) when (luma(@color) < 50%) {
  color: white;
}

p {
  .mixin(@color);
}

Compile the Less code in guards.less using the command-line lessc compiler with the following command entered in the console:

lessc guards.less

Inspect the output written on the console, which will look like the following code:

p {
  color: black;
}

Compile the Less code with different values set for the @color variable and see how the output changes. You can use the command as follows:

lessc --modify-var="color=green" guards.less

The preceding command will produce the following CSS code:

p {
  color: white;
}

Now, refer to the following command:

lessc --modify-var="color=lightgreen" guards.less

With the color set to light green, it will again produce the following CSS code:

p {
  color: black;
}

How it works…
The use of guards to build an if…else construct can easily be compared with the switch expression, which can be found in programming languages such as PHP, C#, and pretty much any other object-oriented programming language. Guards are written with the when keyword followed by one or more conditions. When the condition(s) evaluates true, the code will be mixed in. Also note that the arguments should match, as described in the Building a switch leveraging argument matching recipe, before the mixin gets compiled. The syntax and logic of guards is the same as that of the CSS @media feature.
A condition can contain the following comparison operators: >, >=, =, =<, and <. Additionally, the keyword true is the only value that evaluates as true. Two or more conditions can be combined with the and keyword, which is equivalent to the logical and operator, or, on the other hand, with a comma as the logical or operator. The following code will show you an example of the combined conditions:

.mixin(@a; @color) when (@a < 10) and (luma(@color) >= 50%) { }

The following code contains the not keyword that can be used to negate conditions:

.mixin(@a; @color) when not (luma(@color) >= 50%) { }

There's more…
Inside the guard conditions, (global) variables can also be compared. The following Less code example shows you how to use variables inside guards:

@a: 10;
.mixin() when (@a >= 10) {}

The preceding code will also enable you to compile different CSS versions from the same code base when using the --modify-var option of the compiler. The effect of the guarded mixin described in the preceding code will be very similar to that of the mixins built in the Building a switch leveraging argument matching recipe. Note that in the preceding example, variables in the mixin's scope overwrite variables from the global scope, as can be seen when compiling the following code:

@a: 10;
.mixin(@a) when (@a < 10) {
  property: @a;
}
selector {
  .mixin(5);
}

The preceding Less code will compile into the following CSS code:

selector {
  property: 5;
}

When you compare guarded mixins with the if…else constructs or switch expressions in other programming languages, you will also need a manner to create a conditional for the default situation. The built-in Less default() function can be used to create such a default conditional that is functionally equal to the else statement in the if…else constructs or the default statement in the switch expressions. The default() function returns true when no other mixin matches (matching also takes the guards into account) and can be evaluated as the guard condition.
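Here is a minimal sketch of default() acting as the else branch (the mixin name and color values are illustrative):

.text-color(@background) when (luma(@background) < 50%) {
  color: white;
}
.text-color(@background) when (default()) {
  // matches only when no other .text-color() mixin matches
  color: black;
}

.button-dark  { .text-color(navy); }
.button-light { .text-color(ivory); }

The navy background matches the first guard and gets white text; ivory fails that guard, so the default() variant applies and sets black text.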
Building loops leveraging mixin guards
Mixin guards, as described, among others, in the Using mixin guards (as an alternative for the if…else statements) recipe, can also be used to dynamically build a set of CSS classes. In this recipe, you will learn how to do this.
Getting ready
You can use your favorite editor to create the Less code in this recipe.
How to do it…
Create a shadesofblue.less Less file, and write down the following Less code into it:

.shadesofblue(@number; @blue: 100%) when (@number > 0) {
  .shadesofblue(@number - 1, @blue - 10%);

  @classname: e(%(".color-%a", @number));
  @{classname} {
    background-color: rgb(0, 0, @blue);
    height: 30px;
  }
}
.shadesofblue(10);

You can, for instance, use the following snippet of the HTML code to see the effect of the compiled Less code from the preceding step:

<div class="color-1"></div>
<div class="color-2"></div>
<div class="color-3"></div>
<div class="color-4"></div>
<div class="color-5"></div>
<div class="color-6"></div>
<div class="color-7"></div>
<div class="color-8"></div>
<div class="color-9"></div>
<div class="color-10"></div>

Your HTML document should also include the shadesofblue.less and less.js files, as follows:

<link rel="stylesheet/less" type="text/css" href="shadesofblue.less">
<script src="less.js" type="text/javascript"></script>

Finally, the result will be ten stacked bars of 30 pixels each, running from nearly black (.color-1) at the top to bright blue (.color-10) at the bottom.
How it works…
The CSS classes in this recipe are built with recursion. The recursion here has been done by the .shadesofblue(){} mixin calling itself with different parameters. The loop starts with the .shadesofblue(10); call. When the compiler reaches the .shadesofblue(@number - 1, @blue - 10%); line of code, it stops the current code and starts compiling the .shadesofblue(){} mixin again with @number decreased by one and @blue decreased by 10 percent. The process will be repeated till @number becomes less than 1. Finally, when the @number variable becomes equal to 0, the compiler tries to call the .shadesofblue(0, 0%); mixin, which does not match the when (@number > 0) guard. When no matching mixin is found, the compiler stops, compiles the rest of the code, and writes the first class into the CSS code, as follows:

.color-1 {
  background-color: #00001a;
  height: 30px;
}

Then, the compiler starts again where it stopped before, at the .shadesofblue(2, 20%); call, and writes the next class into the CSS code, as follows:

.color-2 {
  background-color: #000033;
  height: 30px;
}

The preceding code will be repeated until the tenth class.
There's more…
When inspecting the compiled CSS code, you will find that the height property has been repeated ten times, too. This kind of code repetition can be prevented using the :extend Less pseudo class. The following code will show you an example of the usage of the :extend Less pseudo class:

.baseheight {
  height: 30px;
}
.mixin(@i: 2) when (@i > 0) {
  .mixin(@i - 1);
  .class@{i} {
    width: 10 * @i;
    &:extend(.baseheight);
  }
}
.mixin();

Alternatively, in this situation, you can create a more generic selector, which sets the height property as follows:

div[class^="color-"] {
  height: 30px;
}

Recursive loops are also useful when iterating over a list of values.
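As a minimal sketch (the list and class names are illustrative), the built-in length() and extract() functions let such a recursive mixin walk through a list of values:

@sizes: small medium large;

.make-margins(@i: 1) when (@i =< length(@sizes)) {
  @name: extract(@sizes, @i);
  .margin-@{name} {
    margin: (5px * @i);
  }
  .make-margins(@i + 1);
}
.make-margins();

This generates the .margin-small, .margin-medium, and .margin-large classes with margins of 5px, 10px, and 15px, respectively.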
Max Mikhailov, one of the members of the Less core team, wrote a wrapper mixin for recursive Less loops, which can be found at https://github.com/seven-phases-max. This wrapper contains the .for and .-each mixins that can be used to build loops. The following code will show you how to write a nested loop:

@import "for";

#nested-loops {
  .for(3, 1); .-each(@i) {
    .for(0, 2); .-each(@j) {
      x: (10 * @i + @j);
    }
  }
}

The preceding Less code will produce the following CSS code:

#nested-loops {
  x: 30;
  x: 31;
  x: 32;
  x: 20;
  x: 21;
  x: 22;
  x: 10;
  x: 11;
  x: 12;
}

Finally, you can use a list of mixins as your data provider in some situations. The following Less code gives an example of using mixins to avoid recursion:

.data() {
  .-("dark"; black);
  .-("light"; white);
  .-("accent"; pink);
}

div {
  .data();
  .-(@class-name; @color) {
    @class: e(@class-name);
    &.@{class} {
      color: @color;
    }
  }
}

The preceding Less code will compile into the CSS code, as follows:

div.dark {
  color: black;
}
div.light {
  color: white;
}
div.accent {
  color: pink;
}

Applying guards to the CSS selectors
Since Version 1.5 of Less, guards can be applied not only on mixins, but also on the CSS selectors. This recipe will show you how to apply guards on the CSS selectors directly to create conditional rulesets for these selectors.
Getting ready
The easiest way to inspect the effect of the guarded selector in this recipe will be using the command-line lessc compiler.
How to do it…
Create a Less file named darkbutton.less that contains the following code:

@dark: true;

button when (@dark) {
  background-color: black;
  color: white;
}

Compile the darkbutton.less file with the command-line lessc compiler by entering the following command into the console:

lessc darkbutton.less

Inspect the CSS code output on the console, which will look like the following code:

button {
  background-color: black;
  color: white;
}

Now try the following command and you will find that the button selector is not compiled into the CSS code:

lessc --modify-var="dark=false" darkbutton.less

How it works…
The guarded CSS selectors are ignored by the compiler, and so not compiled into the CSS code, when the guard evaluates false. Guards for the CSS selectors and mixins leverage the same comparison and logical operators. You can read in more detail how to create guards with these operators in the Using mixin guards (as an alternative for the if…else statements) recipe.
There's more…
Note that the true keyword will be the only value that evaluates true. So the following command, which sets @dark equal to 1, will not generate the button selector as the guard evaluates false:

lessc --modify-var="dark=1" darkbutton.less

The following Less code will give you another example of applying a guard on a selector:

@width: 700px;

div when (@width >= 600px) {
  border: 1px solid black;
}

The preceding code will output the following CSS code:

div {
  border: 1px solid black;
}

On the other hand, nothing will be output when setting @width to a value smaller than 600 pixels. You can also rewrite the preceding code with the & feature referencing the selector, as follows:

@width: 700px;

div {
  & when (@width >= 600px) {
    border: 1px solid black;
  }
}

Although the CSS code produced by the latter does not differ from the first, it will enable you to add more properties without the need to repeat the selector. You can also add the code in a mixin, as follows:

.conditional-border(@width: 700px) {
  & when (@width >= 600px) {
    border: 1px solid black;
  }
  width: @width;
}
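A short usage sketch of this mixin (the selector and value are illustrative):

aside {
  .conditional-border(800px);
}

Because 800px passes the guard, this compiles to:

aside {
  border: 1px solid black;
  width: 800px;
}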
Creating color contrasts with Less
Color contrasts play an important role in the first impression of your website or web application. Color contrasts are also important for web accessibility. Using high contrast between background and text colors will help the visually disabled, color blind, and even people with dyslexia to read your content more easily.
The contrast() function returns a light (white by default) or dark (black by default) color depending on the input color. The contrast() function can help you to write dynamic Less code that always outputs the CSS styles with enough contrast between the background and text colors. Setting your text color to white or black depending on the background color enables you to meet the highest accessibility guidelines for every color. A sample can be found at http://www.msfw.com/accessibility/tools/contrastratiocalculator.aspx, which shows you that either black or white always gives enough color contrast.
When you use Less to create a set of buttons, for instance, you don't want some buttons with white text while others have black text. In this recipe, you solve this situation by adding a stroke to the button text (a text shadow) when the contrast ratio between the button background and button text colors is too low to meet your requirements.
Getting ready
You can inspect the results of this recipe in your browser using the client-side less.js compiler. You will have to create some HTML and Less code, and you can use your favorite editor to do this. You will have to create a project directory that contains the index.html file together with the contraststrokes.less file and the less.min.js compiler.
How to do it…
Create a Less file named contraststrokes.less, and write down the following Less code into it:

@safe: green;
@danger: red;
@warning: orange;
@buttonTextColor: white;
@ContrastRatio: 7; // AAA, small texts

.setcontrast(@backgroundcolor) when
  (luma(@backgroundcolor) =< luma(@buttonTextColor)) and
  (((luma(@buttonTextColor) + 5) / (luma(@backgroundcolor) + 5)) < @ContrastRatio) {
  color: @buttonTextColor;
  text-shadow: 0 0 2px black;
}
.setcontrast(@backgroundcolor) when
  (luma(@backgroundcolor) =< luma(@buttonTextColor)) and
  (((luma(@buttonTextColor) + 5) / (luma(@backgroundcolor) + 5)) >= @ContrastRatio) {
  color: @buttonTextColor;
}
.setcontrast(@backgroundcolor) when
  (luma(@backgroundcolor) >= luma(@buttonTextColor)) and
  (((luma(@backgroundcolor) + 5) / (luma(@buttonTextColor) + 5)) < @ContrastRatio) {
  color: @buttonTextColor;
  text-shadow: 0 0 2px white;
}
.setcontrast(@backgroundcolor) when
  (luma(@backgroundcolor) >= luma(@buttonTextColor)) and
  (((luma(@backgroundcolor) + 5) / (luma(@buttonTextColor) + 5)) >= @ContrastRatio) {
  color: @buttonTextColor;
}

button {
  padding: 10px;
  border-radius: 10px;
  color: @buttonTextColor;
  width: 200px;
}

.safe {
  .setcontrast(@safe);
  background-color: @safe;
}
.danger {
  .setcontrast(@danger);
  background-color: @danger;
}
.warning {
  .setcontrast(@warning);
  background-color: @warning;
}

Create an HTML file, and save this file as index.html.
Write down the following HTML code into this index.html file:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>High contrast buttons</title>
  <link rel="stylesheet/less" type="text/css" href="contraststrokes.less">
  <script src="less.min.js" type="text/javascript"></script>
</head>
<body>
  <button style="background-color:green;">safe</button>
  <button class="safe">safe</button><br>
  <button style="background-color:red;">danger</button>
  <button class="danger">danger</button><br>
  <button style="background-color:orange;">warning</button>
  <button class="warning">warning</button>
</body>
</html>

Now load the index.html file from step 2 in your browser. When all has gone well, you will see the statically colored buttons on the left-hand side and the high-contrast buttons, with a stroke where needed, on the right-hand side.
How it works…
The main purpose of this recipe is to show you how to write dynamic code based on the color contrast ratio.
Web Content Accessibility Guidelines (WCAG) 2.0 covers a wide range of recommendations to make web content more accessible. They have defined the following three conformance levels:

Conformance Level A: In this level, all Level A success criteria are satisfied
Conformance Level AA: In this level, all Level A and AA success criteria are satisfied
Conformance Level AAA: In this level, all Level A, AA, and AAA success criteria are satisfied

If you focus only on the color contrast aspect, you will find the following paragraphs in the WCAG 2.0 guidelines:

1.4.1 Use of Color: Color is not used as the only visual means of conveying information, indicating an action, prompting a response, or distinguishing a visual element. (Level A)
1.4.3 Contrast (Minimum): The visual presentation of text and images of text has a contrast ratio of at least 4.5:1. (Level AA)
1.4.6 Contrast (Enhanced): The visual presentation of text and images of text has a contrast ratio of at least 7:1. (Level AAA)

The contrast ratio can be calculated with a formula that can be found at http://www.w3.org/TR/WCAG20/#contrast-ratiodef:

(L1 + 0.05) / (L2 + 0.05)

In the preceding formula, L1 is the relative luminance of the lighter of the colors, and L2 is the relative luminance of the darker of the colors. In Less, the relative luminance of a color can be found with the built-in luma() function, which returns it as a percentage; this is why the Less code in this recipe adds 5 instead of 0.05 to the luma() values.
The Less code of this recipe contains four guarded .setcontrast(){} mixins. The guard conditions, such as (luma(@backgroundcolor) =< luma(@buttonTextColor)), are used to find which of the @backgroundcolor and @buttonTextColor colors is the lighter one. Then, the (((luma({the lighter color}) + 5) / (luma({the darker color}) + 5)) < @ContrastRatio) condition can, according to the preceding formula, be used to determine whether the contrast ratio between these colors meets the requirement (@ContrastRatio) or not. When the value of the calculated contrast ratio is lower than the value set by @ContrastRatio, the text-shadow: 0 0 2px {color}; ruleset will be mixed in, where {color} will be white or black depending on the relative luminance of the color set by the @buttonTextColor variable.
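To make the formula concrete, take the .safe button from this recipe: white text (relative luminance 1.0) on a green background (#008000, relative luminance of roughly 0.15). The contrast ratio then becomes:

(1.0 + 0.05) / (0.15 + 0.05) ≈ 5.1

A ratio of roughly 5.1:1 passes the Level AA requirement of 4.5:1 but fails the Level AAA requirement of 7:1, so with @ContrastRatio set to 7, the first .setcontrast(){} mixin matches and the green button's white text gets the black stroke.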
There's more…
In this recipe, you added a stroke to the web text to improve the accessibility. First, you will have to bear in mind that improving the accessibility by adding a stroke to your text is not a proven method. Also, the improvement cannot be verified by automatic accessibility testing, because such tests calculate plain color contrast ratios and ignore the stroke. Other options to solve this issue are to increase the font size or to change the background color itself. You can read how to change the background color dynamically based on color contrast ratios in the Changing the background color dynamically recipe.
When you read the exceptions in the 1.4.6 Contrast (Enhanced) paragraph of the WCAG 2.0 guidelines, you will find that large-scale text requires a color contrast ratio of only 4.5 instead of 7.0 to meet the requirements of the AAA Level. Large-scale text is defined as at least 18 point, or 14 point bold, or a font size that would yield the equivalent size for Chinese, Japanese, and Korean (CJK) fonts.
To try this, you could replace the text-shadow properties in the Less code of step 1 of this recipe with the font-size: 14pt; and font-weight: bold; declarations (a sketch follows after this section). After this, you can inspect the results in your browser again. Depending on, among others, the values you have chosen for the @buttonTextColor and @ContrastRatio variables, you will again find the original colored buttons on the left-hand side and the adapted high-contrast buttons on the right-hand side. Note that when you set the @ContrastRatio variable to 7.0, the code does not check whether the larger font indeed meets the 4.5 contrast ratio requirement.
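As a sketch of this large-scale text variant, the first guarded mixin from step 1 would then become (the other low-contrast mixin would be changed in the same way):

.setcontrast(@backgroundcolor) when
  (luma(@backgroundcolor) =< luma(@buttonTextColor)) and
  (((luma(@buttonTextColor) + 5) / (luma(@backgroundcolor) + 5)) < @ContrastRatio) {
  color: @buttonTextColor;
  font-size: 14pt;   // large-scale text instead of a stroke
  font-weight: bold;
}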
Changing the background color dynamically
When you define some basic colors to generate, for instance, a set of button elements, you can use the built-in contrast() function to set the font color. The built-in contrast() function provides the highest possible contrast, but does not guarantee that the contrast ratio is also high enough to meet your accessibility requirements. In this recipe, you will learn how to change your basic color automatically to meet the required contrast ratio.
Getting ready
You can inspect the results of this recipe in your browser using the client-side less.js compiler. Use your favorite editor to create the HTML and Less code in this recipe. You will have to create a project directory that contains the index.html file together with the backgroundcolors.less file and the less.min.js compiler.
How to do it…
Create a Less file named backgroundcolors.less, and write down the following Less code into it:

@safe: green;
@danger: red;
@warning: orange;
@ContrastRatio: 7.0; // AAA
@precision: 1%;
@buttonTextColor: black;
@threshold: 43;

// dark text: lighten the background until the ratio is met or white is reached
.setcontrastcolor(@startcolor) when (luma(@buttonTextColor) < @threshold) {
  .contrastcolor(@startcolor) when (luma(@startcolor) < 100) and
    (((luma(@startcolor) + 5) / (luma(@buttonTextColor) + 5)) < @ContrastRatio) {
    .contrastcolor(lighten(@startcolor, @precision));
  }
  .contrastcolor(@startcolor) when (@startcolor = color("white")),
    (((luma(@startcolor) + 5) / (luma(@buttonTextColor) + 5)) >= @ContrastRatio) {
    @contrastcolor: @startcolor;
  }
  .contrastcolor(@startcolor);
}

// light text: darken the background until the ratio is met or black is reached
.setcontrastcolor(@startcolor) when (default()) {
  .contrastcolor(@startcolor) when (luma(@startcolor) > 0) and
    (((luma(@buttonTextColor) + 5) / (luma(@startcolor) + 5)) < @ContrastRatio) {
    .contrastcolor(darken(@startcolor, @precision));
  }
  .contrastcolor(@startcolor) when (luma(@startcolor) = 0),
    (((luma(@buttonTextColor) + 5) / (luma(@startcolor) + 5)) >= @ContrastRatio) {
    @contrastcolor: @startcolor;
  }
  .contrastcolor(@startcolor);
}

button {
  padding: 10px;
  border-radius: 10px;
  color: @buttonTextColor;
  width: 200px;
}

.safe {
  .setcontrastcolor(@safe);
  background-color: @contrastcolor;
}
.danger {
  .setcontrastcolor(@danger);
  background-color: @contrastcolor;
}
.warning {
  .setcontrastcolor(@warning);
  background-color: @contrastcolor;
}

Create an HTML file and save this file as index.html. Write down the following HTML code into this index.html file:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>High contrast buttons</title>
  <link rel="stylesheet/less" type="text/css" href="backgroundcolors.less">
  <script src="less.min.js" type="text/javascript"></script>
</head>
<body>
  <button style="background-color:green;">safe</button>
  <button class="safe">safe</button><br>
  <button style="background-color:red;">danger</button>
  <button class="danger">danger</button><br>
  <button style="background-color:orange;">warning</button>
  <button class="warning">warning</button>
</body>
</html>

Now load the index.html file from step 2 in your browser. When all has gone well, you will see the original colored buttons on the left-hand side and the buttons with the dynamically adapted background colors on the right-hand side.
How it works…
The guarded .setcontrastcolor(){} mixins are used to determine whether the background colors should be lightened or darkened, depending on whether the @buttonTextColor variable holds a dark or a light color. When the color set by @buttonTextColor is a dark color, with a relative luminance below the threshold value set by the @threshold variable, the background colors should be made lighter. For light text colors, the background colors should be made darker.
Inside each .setcontrastcolor(){} mixin, a second set of mixins has been defined. These guarded .contrastcolor(){} mixins construct a recursive loop, as described in the Building loops leveraging mixin guards recipe. In each step of the recursion, the guards test whether the contrast ratio set by the @ContrastRatio variable has been reached. When the contrast ratio does not meet the requirements, the @startcolor variable will be darkened or lightened by the number of percent set by the @precision variable, with the built-in darken() and lighten() functions.
When the required contrast ratio has been reached, or the color set by @startcolor has become white or black, the modified color value of @startcolor will be assigned to the @contrastcolor variable. The guarded .contrastcolor(){} mixins are used as functions, as described in the Using mixins as functions recipe, to assign the @contrastcolor variable that will be used to set the background-color property of the button selectors.
There's more…
A small value of the @precision variable will increase the number of recursions (possibly) needed to find the required colors, as there will be more and smaller steps; with the number of recursions, the compilation time will increase too. When you choose a bigger value for @precision, the contrast color found might differ from the start color more than needed to meet the contrast ratio requirement.
When you choose, for instance, a dark button text color that is not black, all or some base background colors will be set to white. The chances that even pure white (or black) does not provide the required contrast increase for high values of the @ContrastRatio variable. The recursion will stop when white (or black) has been reached, as you cannot make the white color any lighter. When the recursion stops on reaching white or black, the colors set by the mixins in this recipe don't meet the required color contrast ratios.
Aggregating values under a single property
The merge feature of Less enables you to merge property values into a list under a single property. Each list can be either space-separated or comma-separated. The merge feature can be useful to define a property that accepts a list as a value. For instance, the background property accepts a comma-separated list of backgrounds.
Getting ready
For this recipe, you will need a text editor and a Less compiler.
How to do it…
Create a file called defaultfonts.less that contains the following Less code:

.default-fonts() {
  font-family+: Helvetica, Arial, sans-serif;
}
p {
  font-family+: "Helvetica Neue";
  .default-fonts();
}

Compile the defaultfonts.less file from step 1 using the following command in the console:

lessc defaultfonts.less

Inspect the CSS code output on the console after compilation and you will see that it looks like the following code:

p {
  font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
}

How it works…
When the compiler finds the plus sign (+) before the assignment sign (:), it will merge the values into a comma-separated list under a single property instead of creating a new property in the CSS code.
There's more…
Since Version 1.7 of Less, you can also merge a property's values separated by a space instead of a comma. For space-separated values, you should use the +_ sign instead of the + sign, as can be seen in the following code:

.text-overflow(@text-overflow: ellipsis) {
  text-overflow+_: @text-overflow;
}
p,
.text-overflow {
  .text-overflow();
  text-overflow+_: ellipsis;
}

The preceding Less code will compile into the CSS code, as follows:

p,
.text-overflow {
  text-overflow: ellipsis ellipsis;
}

Note that the text-overflow property doesn't force an overflow to occur; you will have to explicitly set, for instance, the overflow property to hidden for the element.
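Comma merging works the same way for multiple backgrounds. A minimal sketch (the image paths are illustrative):

.texture() {
  background+: url("texture.png") repeat;
}
header {
  background+: url("logo.png") no-repeat center;
  .texture();
}

This compiles to:

header {
  background: url("logo.png") no-repeat center, url("texture.png") repeat;
}

Since the first background layer in the list is drawn on top, the logo is rendered over the texture.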
Summary
This article walks you through the process of building parameterized mixins and shows you how to use guards. A guard can be used as an alternative for the if…else statements and makes it possible to construct recursive loops in Less.