Configuring Distributed Rails Applications with Chef: Part 1

Rahmal Conda
31 Oct 2014
4 min read
Between 2005 and 2010, Rails went from a niche web application framework to the center of a robust web application platform. To get there, it needed more than Ruby and a few complementary gems. Anyone who has ever deployed a Rails application into a production environment knows that Rails doesn't run in a vacuum. It needs a web server such as Apache or Nginx in front of it to help manage requests, and an application server such as Unicorn or Passenger behind that. Almost all Rails apps are backed by some sort of data persistence layer, usually a relational database, and increasingly a NoSQL store such as MongoDB. Depending on the application, you'll probably deploy a caching strategy at some point: Memcached, Redis, the list goes on. What about background jobs? You'll need another server instance for that too, and not just one, either; high-availability systems need to be redundant. And if you're lucky enough to get a lot of traffic, you'll need a way to scale all of this.

Why Chef?

Chances are that you're managing all of this manually. Don't feel bad; everyone starts out that way. But as you grow, how do you manage all of it without going insane? Most Rails developers start off with Capistrano, which is a great choice. Capistrano is a remote server automation tool, used most often as a deployment tool for Rails, and for the most part it's a great solution for managing the multiple servers that make up your Rails stack. It's only when your architecture reaches a certain size that I'd recommend choosing Chef over Capistrano. Really, though, there's no reason to choose one over the other: they work well together, and they are similar where deployment is concerned. Where Chef excels is when you need to provision multiple servers with different roles and changing software stacks. That is what I'm going to focus on in this post.
But first, let's introduce Chef. What is Chef, anyway? Chef is a Ruby-based configuration management engine, used for provisioning servers for particular roles within a platform stack and for deploying applications to those servers. It automates server configuration and integration into your infrastructure: you define your infrastructure in configuration files written in Chef's Ruby DSL, and Chef takes care of setting up individual machines and linking them together.

Chef server

You set up one of your server instances (virtual or otherwise) as the Chef server, and all your other instances are clients that communicate with it via REST over HTTPS. The server is an application that stores cookbooks for your nodes.

Recipes and cookbooks

Recipes are files containing sets of instructions written in Chef's Ruby DSL. These instructions perform some kind of procedure, usually installing software and configuring a service. Recipes are bundled together with configuration file templates, resources, and helper scripts as cookbooks. A cookbook generally corresponds to a specific server configuration: a Postgres cookbook, for instance, might contain recipes for the Postgres server, the Postgres client, maybe PostGIS, and some configuration files describing how the DB instance should be provisioned.

Chef Solo

For stacks that don't need a full Chef server setup, but still use cookbooks to set up Rails and DB servers, there's Chef Solo. Chef Solo is a standalone, local Chef application that can be used to provision servers and deploy applications without a central server.

Wait, where is the code? In Part 2 of this post, I'm going to walk you through setting up a Rails application with Chef Solo, and then expand that into a full Chef server configuration.
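The full walkthrough is deferred to Part 2, but to make the recipe idea concrete, here is a hedged sketch of what a minimal recipe might look like. It uses Chef's standard package, template, and service resources; the template source name and configuration path are illustrative, not taken from this post:

```ruby
# recipes/default.rb -- illustrative sketch of a web-server recipe.
# Install Nginx from the platform's package manager.
package 'nginx'

# Render a config file from a template shipped in the cookbook;
# 'nginx.conf.erb' is a hypothetical template name.
template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  owner  'root'
  mode   '0644'
  notifies :reload, 'service[nginx]'
end

# Make sure the service starts on boot and is running now.
service 'nginx' do
  action [:enable, :start]
end
```

Because the DSL is plain Ruby, a recipe can also wrap ordinary Ruby constructs (loops, conditionals, variables) around these resource declarations.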
While Chef can be used for many different application stacks, I'm going to focus on Rails: configuring, provisioning, and deploying the entire stack. See you next time!

About the Author

Rahmal Conda is a software development professional and Ruby aficionado from Chicago. After 10 years working in web and application development, he moved out to the Bay Area, eager to join the startup scene. He had a taste of startup life in Chicago, working at a small personal finance company, and after that he knew it was the life he had been looking for, so he moved his family out west. Since then he's made a name for himself in the social space at some high-profile Silicon Valley startups. He is currently one of the co-founders and the platform architect of Boxes, a mobile marketplace for the world's hidden treasures.

Loading data, creating an app, and adding dashboards and reports in Splunk

Packt
31 Oct 2014
13 min read
In this article by Josh Diakun, Paul R Johnson, and Derek Mock, authors of Splunk Operational Intelligence Cookbook, we will take a look at how to load sample data into Splunk, how to create an application, and how to add dashboards and reports in Splunk. (For more resources related to this topic, see here.)

Loading the sample data

While most of the data you will index with Splunk will be collected in real time, there might be instances where you have a set of data that you would like to put into Splunk, either to backfill some missing or incomplete data, or just to take advantage of its searching and reporting tools. This recipe will show you how to perform one-time bulk loads of data from files located on the Splunk server. We will also use this recipe to load the data samples that will be used as we build our Operational Intelligence app in Splunk.

There are two files that make up our sample data. The first is access_log, which represents data from our web layer and is modeled on an Apache web server. The second is app_log, which represents data from our application layer and is modeled on log4j application log data.

Getting ready

To step through this recipe, you will need a running Splunk server and a copy of the sample data generation app (OpsDataGen.spl). This file is part of the downloadable code bundle, which is available on the book's website.

How to do it...

Follow these steps to load the sample data generator on your system:

1. Log in to your Splunk server using your credentials.
2. From the home launcher, select the Apps menu in the top-left corner and click on Manage Apps.
3. Select Install App from file.
4. Select the location of the OpsDataGen.spl file on your computer, and then click on the Upload button to install the application.
5. After installation, a message should appear in a blue bar at the top of the screen, letting you know that the app has installed successfully. You should also now see the OpsDataGen app in the list of apps.
By default, the app installs with the data-generation scripts disabled. In order to generate data, you will need to enable either a Windows or Linux script, depending on your Splunk server's operating system. To enable the script:

1. Select the Settings menu from the top-right corner of the screen, and then select Data inputs.
2. From the Data inputs screen that follows, select Scripts.
3. On the Scripts screen, locate the OpsDataGen script for your operating system and click on Enable.
   For Linux, it will be: $SPLUNK_HOME/etc/apps/OpsDataGen/bin/AppGen.path
   For Windows, it will be: $SPLUNK_HOME\etc\apps\OpsDataGen\bin\AppGen-win.path
   The following screenshot displays both the Windows and Linux inputs that are available after installing the OpsDataGen app, and shows where to click to enable the correct one based on the operating system Splunk is installed on.
4. Select the Settings menu from the top-right corner of the screen, select Data inputs, and then select Files & directories.
5. On the Files & directories screen, locate the two OpsDataGen inputs for your operating system and click on Enable for each.
   For Linux, they will be:
   $SPLUNK_HOME/etc/apps/OpsDataGen/data/access_log
   $SPLUNK_HOME/etc/apps/OpsDataGen/data/app_log
   For Windows, they will be:
   $SPLUNK_HOME\etc\apps\OpsDataGen\data\access_log
   $SPLUNK_HOME\etc\apps\OpsDataGen\data\app_log
   The following screenshot displays both the Windows and Linux inputs that are available after installing the OpsDataGen app, and shows where to click to enable the correct one based on the operating system Splunk is installed on.
6. The data will now be generated in real time. You can test this by navigating to the Splunk search screen and running the following search over an All time (real-time) time range:

   index=main sourcetype=log4j OR sourcetype=access_combined

After a short while, you should see data from both source types flowing into Splunk; the data generation is now working, as displayed in the following screenshot.

How it works...
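As an aside, clicking Enable in the UI simply flips the disabled flag on the corresponding stanza in an inputs.conf file under the app. A hedged sketch of what the enabled script stanza might look like (the stanza path mirrors the Linux script path above; the exact attributes depend on how the app ships its defaults):

```ini
# local/inputs.conf inside the OpsDataGen app (illustrative)
[script://$SPLUNK_HOME/etc/apps/OpsDataGen/bin/AppGen.path]
disabled = 0
```

Edits made through the UI land in the app's local directory, which overrides the settings shipped in default/inputs.conf.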
In this case, you installed a Splunk application that leverages a scripted input. The script we wrote generates data for two source types: access_combined, which contains sample web access logs, and log4j, which contains application logs.

Creating an Operational Intelligence application

This recipe will show you how to create an empty Splunk app that we will use as the starting point in building our Operational Intelligence application.

Getting ready

To step through this recipe, you will need a running Splunk Enterprise server, with the sample data loaded from the previous recipe. You should be familiar with navigating the Splunk user interface.

How to do it...

Follow these steps to create the Operational Intelligence application:

1. Log in to your Splunk server.
2. From the top menu, select Apps and then select Manage Apps.
3. Click on the Create app button.
4. Complete the fields in the box that follows. Name the app Operational Intelligence and give it a folder name of operational_intelligence. Add a version number and provide an author name. Ensure that Visible is set to Yes, and that the barebones template is selected.
5. When the form is completed, click on Save. This should be followed by a blue bar with the message Successfully saved operational_intelligence.

Congratulations, you just created a Splunk application!

How it works...

When an app is created through the Splunk GUI, as in this recipe, Splunk essentially creates a new folder (or directory) named operational_intelligence within the $SPLUNK_HOME/etc/apps directory. Within the $SPLUNK_HOME/etc/apps/operational_intelligence directory, you will find four new subdirectories that contain all the configuration files needed for the barebones Operational Intelligence app we just created.

The eagle-eyed among you will have noticed that there were two templates, barebones and sample_app, either of which could have been selected when creating the app.
The barebones template creates an application with very little inside it, while the sample_app template creates an application populated with sample dashboards, searches, views, menus, and reports. If you create lots of apps, you can also develop your own custom template, which might, for example, enforce certain color schemes.

There's more...

As Splunk apps are just collections of directories and files, there are other methods of adding apps to your Splunk Enterprise deployment.

Creating an application from another application

It is relatively simple to create a new app from an existing app without going through the Splunk GUI, should you wish to do so. This approach can be very useful when creating multiple apps with different inputs.conf files for deployment to Splunk Universal Forwarders. Taking the app we just created as an example, copy the entire directory structure of the operational_intelligence app and name it copied_app:

cp -r $SPLUNK_HOME/etc/apps/operational_intelligence $SPLUNK_HOME/etc/apps/copied_app

Within the directory structure of copied_app, we must now edit the app.conf file in the default directory. Open $SPLUNK_HOME/etc/apps/copied_app/default/app.conf, change the label field to My Copied App, provide a new description, and then save the file:

# Splunk app configuration file

[install]
is_configured = 0

[ui]
is_visible = 1
label = My Copied App

[launcher]
author = John Smith
description = My Copied application
version = 1.0

Now restart Splunk, and the new My Copied App application should appear in the application menu:

$SPLUNK_HOME/bin/splunk restart

Downloading and installing a Splunk app

Splunk has an entire application website with hundreds of applications, created by Splunk, other vendors, and even users of Splunk. These are great ways to get started with a base application, which you can then modify to meet your needs.
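The copy-and-rebrand steps above can be sketched end to end as a small shell session. This is a simulation against a temporary stand-in for $SPLUNK_HOME/etc/apps (so it can be run anywhere without a Splunk install); the app.conf contents are a trimmed-down version of the fragment above:

```shell
# Stand-in for $SPLUNK_HOME/etc/apps -- a scratch directory, not a real Splunk install.
APPS_DIR=$(mktemp -d)

# A minimal source app with a default/app.conf, mirroring operational_intelligence.
mkdir -p "$APPS_DIR/operational_intelligence/default"
cat > "$APPS_DIR/operational_intelligence/default/app.conf" <<'EOF'
[ui]
is_visible = 1
label = Operational Intelligence
EOF

# Copy the whole app directory under a new name...
cp -r "$APPS_DIR/operational_intelligence" "$APPS_DIR/copied_app"

# ...and rebrand the copy by rewriting its label field (GNU sed syntax).
sed -i 's/^label = .*/label = My Copied App/' "$APPS_DIR/copied_app/default/app.conf"

grep '^label' "$APPS_DIR/copied_app/default/app.conf"   # label = My Copied App
```

On a real deployment you would work directly under $SPLUNK_HOME/etc/apps and restart Splunk afterwards so the new app is picked up.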
If the Splunk server that you are logged in to has access to the Internet, you can click on the Apps menu as you did earlier and then select the Find More Apps button. From here, you can search for apps and install them directly. An alternative way to install a Splunk app is to visit http://apps.splunk.com and search for the app. You will then need to download the application locally. From your Splunk server, click on the Apps menu and then on the Manage Apps button. After that, click on the Install App from File button and upload the app you just downloaded, in order to install it.

Once the app has been installed, go and look at the directory structure that the installed application just created, and familiarize yourself with some of the key files and where they are located.

When downloading applications from the Splunk apps site, it is best practice to test and verify them in a nonproduction environment first. The Splunk apps site is community driven and, as a result, quality checks and/or technical support for some of the apps might be limited.

Adding dashboards and reports

Dashboards are a great way to present many different pieces of information. Rather than having lots of disparate dashboards across your Splunk environment, it makes sense to group related dashboards into a common Splunk application, for example, putting operational intelligence dashboards into a common Operational Intelligence application. In this recipe, you will learn how to move the dashboards and associated reports into our new Operational Intelligence application.

Getting ready

To step through this recipe, you will need a running Splunk Enterprise server, with the sample data loaded from the Loading the sample data recipe. You should be familiar with navigating the Splunk user interface.

How to do it...

Follow these steps to move your dashboards into the new application:

1. Log in to your Splunk server.
2. Select the newly created Operational Intelligence application.
3. From the top menu, select Settings and then select the User interface menu item.
4. Click on the Views section.
5. In the App Context dropdown, select Searching & Reporting (search), or whatever application you were in when creating the dashboards.
6. Locate the website_monitoring dashboard row in the list of views and click on the Move link to the right of the row.
7. In the Move Object pop-up, select the Operational Intelligence (operational_intelligence) application that was created earlier and then click on the Move button.
8. A message bar will then be displayed at the top of the screen to confirm that the dashboard was moved successfully.
9. Repeat from step 6 to move the product_monitoring dashboard as well.
10. After the Website Monitoring and Product Monitoring dashboards have been moved, we now want to move all the reports that were created, as these power the dashboards and provide operational intelligence insight. From the top menu, select Settings, and this time select Searches, reports, and alerts.
11. Select the Search & Reporting (search) context and filter by cp0* to view the searches (reports) that were created.
12. Click on the Move link of the first cp0* search in the list.
13. Select to move the object to the Operational Intelligence (operational_intelligence) application and click on the Move button.
14. A message bar will then be displayed at the top of the screen to confirm that the report was moved successfully.
15. Select the Search & Reporting (search) context and repeat from step 12 to move all the other searches over to the new Operational Intelligence application. This seems like a lot, but it will not take you long!

All of the dashboards and reports are now moved over to your new Operational Intelligence application.

How it works...

In the previous recipe, we saw that Splunk apps are essentially just collections of directories and files. Dashboards are XML files found within the $SPLUNK_HOME/etc/apps directory structure.
When moving a dashboard from one app to another, Splunk essentially just moves the underlying file from a directory inside one app to a directory inside the other. In this recipe, you moved the dashboards from the Search & Reporting app to the Operational Intelligence app, as represented in the following screenshot.

As the visualizations on the dashboards leverage the underlying saved searches (or reports), you also moved these reports to the new app so that the dashboards maintain permissions to access them. Rather than moving the saved searches, you could have changed the permissions of each search to Global so that they could be seen from all the other apps in Splunk. However, the other reason for moving the reports was to keep everything contained within a single Operational Intelligence application, which you will continue to build on going forward.

It is best practice to avoid setting permissions to Global for reports and dashboards, as this makes them available to all the other applications when they most likely do not need to be. Additionally, setting global permissions can make things a little messy from a housekeeping perspective and crowd the lists of reports and views that belong to specific applications. The exception to this rule might be knowledge objects such as tags, event types, macros, and lookups, which often benefit from being available across all applications.

There's more...

As you went through this recipe, you likely noticed that the dashboards had application-level permissions, but the reports had private-level permissions. The reports are private because this is the default setting in Splunk when they are created. This private-level permission restricts access to your user account and admin users only. In order to make the reports available to other users of your application, you will need to change their permissions to Shared in App.
Changing the permissions of saved reports

Changing the sharing permission level of your reports from the default Private to App is relatively straightforward:

1. Ensure that you are in your newly created Operational Intelligence application.
2. Select the Reports menu item to see the list of reports.
3. Click on Edit next to the report you wish to change the permissions for. Then, click on Edit Permissions from the drop-down list.
4. An Edit Permissions pop-up box will appear. In the Display for section, change from Owner to App, and then click on Save.
5. The box will close, and you will see that the Sharing permission in the table now displays App for that report. The report will now be available to all the users of your application.

Summary

In this article, we loaded sample data into Splunk and saw how to organize dashboards and knowledge into a custom Splunk app.

Creating Our First Animation in AngularJS

Packt
31 Oct 2014
36 min read
In this article by Richard Keller, author of the book Learning AngularJS Animations, we will learn how to apply CSS animations within the context of AngularJS by creating animations using CSS transitions and CSS keyframe animations, integrated with AngularJS native directives using the ngAnimate module. In this article, we will learn:

  • The ngAnimate module setup and usage
  • AngularJS directives with support for out-of-the-box animation
  • AngularJS animations with the CSS transition
  • AngularJS animations with CSS keyframe animations
  • The naming convention of the CSS animation classes
  • Animation of the ngMessage and ngMessages directives

(For more resources related to this topic, see here.)

The ngAnimate module setup and usage

AngularJS is a module-based framework; if we want our AngularJS application to have the animation feature, we need to add the animation module (ngAnimate) as a dependency of our AngularJS application. Before that, we should include the angular-animate.js JavaScript file in the HTML. Both files are available on the Google content distribution network (CDN), Bower, Google Code, and https://angularjs.org/. The Google developers' CDN hosts many versions of AngularJS, as listed at https://developers.google.com/speed/libraries/devguide#angularjs.

Currently, AngularJS Version 1.3 is the latest stable version, so we will use AngularJS Version 1.3.0 in all sample files of this book; we can get the files from https://ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js and https://ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js. You might want to use Bower instead; if so, check out the video at https://thinkster.io/egghead/intro-to-bower/, which explains how to use Bower to get AngularJS.

We include the JavaScript files of AngularJS and the ngAnimate module, and then we include the ngAnimate module as a dependency of our app.
This is shown in the following sample, using the Google CDN and the minified versions of both files:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>AngularJS animation installation</title>
</head>
<body>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
  <script>
    var app = angular.module('myApp', ['ngAnimate']);
  </script>
</body>
</html>

Here, we already have an AngularJS web app configured to use animations. Now, we will learn how to animate using AngularJS directives.

AngularJS directives with native support for animations

AngularJS has the purpose of changing the way web developers and designers manipulate the Document Object Model (DOM). We don't directly manipulate the DOM when developing controllers, services, and templates; AngularJS does all the DOM manipulation work for us. The only place where an application touches the DOM is within directives, and for most DOM manipulation requirements, AngularJS already provides built-in directives that fit our needs.

Many important AngularJS directives have built-in support for animations through the ngAnimate module. This is why the module is so useful: it allows us to use animations within AngularJS directives' DOM manipulation, so we don't have to replicate native directives by extending them just to add animation functionality. The ngAnimate module provides a way to hook animations in between AngularJS directive execution, and it even allows us to hook into custom directives. As we are dealing with animations between DOM manipulations, we can have animations before and after an element is added to or removed from the DOM, after an element changes (by adding or removing classes), and before and after an element is moved in the DOM. These events are the moments when we might add animations.
Fade animations using AngularJS

Now that we know how to set up a web app with the ngAnimate module enabled, let's create fade-in and fade-out animations to get started with AngularJS animations. We will use the same HTML from the installation topic, add a simple controller that changes an ngShow directive model value, and add a CSS transition.

The ngShow directive shows or hides the given element based on the expression provided to the ng-show attribute. For this sample, we have a Toggle fade button that changes the ngShow model value, so we can see what happens when the element fades in and fades out of the DOM. The ngShow directive shows and hides an element by adding and removing the ng-hide class from the element that contains the directive, shown as follows:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>AngularJS animation installation</title>
</head>
<body>
  <style type="text/css">
    .firstSampleAnimation.ng-hide-add,
    .firstSampleAnimation.ng-hide-remove {
      -webkit-transition: 1s ease-in-out opacity;
      transition: 1s ease-in-out opacity;
      opacity: 1;
    }
    .firstSampleAnimation.ng-hide {
      opacity: 0;
    }
  </style>
  <div>
    <div ng-controller="animationsCtrl">
      <h1>ngShow animation</h1>
      <button ng-click="fadeAnimation = !fadeAnimation">Toggle fade</button>
      fadeAnimation value: {{fadeAnimation}}
      <div class="firstSampleAnimation" ng-show="fadeAnimation">
        This element appears when the fadeAnimation model is true
      </div>
    </div>
  </div>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
  <script>
    var app = angular.module('myApp', ['ngAnimate']);
    app.controller('animationsCtrl', function ($scope) {
      $scope.fadeAnimation = false;
    });
  </script>
</body>
</html>

In the CSS code, we declared an opacity transition for elements with both the firstSampleAnimation and ng-hide-add classes, or both the firstSampleAnimation and ng-hide-remove classes. We also added the firstSampleAnimation class to the same element that has the ng-show directive attribute.

The fadeAnimation model is initially false, so the element with the ngShow directive is initially hidden: the ngShow directive adds the ng-hide class to the element to set its display property to none. When we first click on the Toggle fade button, the fadeAnimation model becomes true. The ngShow directive will then remove the ng-hide class to display the element. But before that, the ngAnimate module knows there is a transition declared for this element. Because of that, the ngAnimate module will append the ng-hide-remove class to trigger the start of the show animation. Then, ngAnimate will add the ng-hide-remove-active class, which can contain the final state of the animation, and remove the ng-hide class at the same time. Both classes last until the animation (1 second in this sample) finishes, and then they are removed.

This is the fade-in animation. ngAnimate triggers animations by adding and removing the classes that contain the animations; this is why we say that AngularJS animations are class based. This is where the magic happens: all we did to create this fade-in animation was declare a CSS transition with the class name ng-hide-remove, a class that is appended when the ng-hide class is removed.

The fade-out animation will happen when we click on the Toggle fade button again, making the fadeAnimation model false. The ngShow directive will add the ng-hide class to hide the element, but before this, the ngAnimate module knows that there is a transition declared for this element too. The ngAnimate module will append the ng-hide-add class, and then add the ng-hide and ng-hide-add-active classes to the element at the same time.
Both classes will last until the animation (1 second in this sample) finishes; then they are removed, and only the ng-hide class is kept, to hide the element. The fade-out animation was created by just declaring a CSS transition with the class name ng-hide-add; it is easy to see that this class is appended to the element when the ng-hide class is about to be added.

The AngularJS animations convention

As this article is intended to teach you how to create animations with AngularJS, you need to know which directives already have built-in support for AngularJS animations. Here is a table of directives, with the directive names and the events of the directive life cycle for which animation hooks are supported. The first row means that the ngRepeat directive supports animation at the enter, leave, and move event times. All events are relative to DOM manipulations, for example, when an element enters or leaves the DOM, or when a class is added to or removed from an element.

Directive            Supported animations
ngRepeat             enter, leave, and move
ngView               enter and leave
ngInclude            enter and leave
ngSwitch             enter and leave
ngIf                 enter and leave
ngClass              add and remove
ngShow and ngHide    add and remove
form and ngModel     add and remove
ngMessages           add and remove
ngMessage            enter and leave

Perhaps the more experienced AngularJS users have noticed that the most frequently used directives appear in this list. This is great; it means that animating with AngularJS isn't hard for most use cases.

AngularJS animation with CSS transitions

We need to know how to bind CSS animations to the AngularJS directives listed in the previous table. The ngIf directive, for example, has support for the enter and leave animations. When the value of the ngIf model changes to true, it triggers the animation by adding the ng-enter class to the element just after the ngIf DOM element is created and injected.
This triggers the animation, and the classes are kept until the transition ends; then, the ng-enter class is removed. When the value of ngIf changes to false, the ng-leave class is added to the element just before the ngIf content is removed from the DOM, so the animation is triggered while the element still exists.

To illustrate the behavior of the AngularJS ngIf directive and the ngAnimate module, let's see what happens in a sample. First, we declare a button that toggles the value of the fadeAnimation model, and one div tag that uses ng-if="fadeAnimation", so we can see what happens when the element is removed and added back. Here, we create the HTML code using the HTML template we used in the last topic to install the ngAnimate module:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>AngularJS ngIf sample</title>
</head>
<body>
  <style>
    /* ngIf animation */
    .animationIf.ng-enter,
    .animationIf.ng-leave {
      -webkit-transition: opacity ease-in-out 1s;
      transition: opacity ease-in-out 1s;
    }
    .animationIf.ng-enter,
    .animationIf.ng-leave.ng-leave-active {
      opacity: 0;
    }
    .animationIf.ng-leave,
    .animationIf.ng-enter.ng-enter-active {
      opacity: 1;
    }
  </style>
  <div ng-controller="animationsCtrl">
    <h1>ngIf animation</h1>
    <div>
      fadeAnimation value: {{fadeAnimation}}
    </div>
    <button ng-click="fadeAnimation = !fadeAnimation">Toggle fade</button>
    <div ng-if="fadeAnimation" class="animationIf">
      This element appears when the fadeAnimation model is true
    </div>
  </div>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
  <script>
    var app = angular.module('myApp', ['ngAnimate']);
    app.controller('animationsCtrl', function ($scope) {
      $scope.fadeAnimation = false;
    });
  </script>
</body>
</html>

So, let's see what happens in the DOM just after we click on the Toggle fade button.
We will use Chrome Developer Tools (Chrome DevTools) to check the HTML in each animation step. It's a native tool that comes with the Chrome browser. To open Chrome DevTools, just right-click on any part of the page and click on Inspect Element.

The ng-enter class

Our CSS declaration added an animation to the element with the animationIf and ng-enter classes, so the transition is applied when the element has the ng-enter class too. This class is appended to the element when it has just entered the DOM. It's important to include the specific class of the element you want to animate in the selector, in this case the animationIf class, because many other elements might trigger animations and have the ng-enter class added too; we should always target the specific element class. Until the animation is completed, the resulting HTML fragment will be as follows:

<div ng-if="fadeAnimation" class="animationIf ng-scope ng-animate ng-enter ng-enter-active">
  This element appears when the fadeAnimation model is true
</div>

We can see that the ng-animate, ng-enter, and ng-enter-active classes were added to the element. After the animation is completed, the animation classes are removed from the DOM:

<div ng-if="fadeAnimation" class="animationIf ng-scope">
  This element appears when the fadeAnimation model is true
</div>

The ng-leave class

We added the same transition as for the ng-enter class to the element with the animationIf and ng-leave classes. The ng-leave class is added to the element before it leaves the DOM. So, before the element vanishes, it will display the fade effect too.
If we click on the Toggle fade button again, the leave animation will be displayed and the following HTML fragment will be rendered:

<div ng-if="fadeAnimation" class="animationIf ng-scope ng-animate ng-leave ng-leave-active">
  This element appears when the fadeAnimation model is true
</div>

We can notice that the ng-animate, ng-leave, and ng-leave-active classes were added to the element. Finally, after the element is removed from the DOM, the rendered result will be as follows:

<div ng-controller="animationsCtrl" class="ng-scope">
  <div class="ng-binding">
    fadeAnimation value: false
  </div>
  <button ng-click="fadeAnimation = !fadeAnimation">
    Toggle fade</button>
  <!-- ngIf: fadeAnimation -->
</div>

Furthermore, there are the ng-enter-active and ng-leave-active classes, which are appended to the element as well. The -active classes define the destination CSS, so we can create a transition between the start and the end of an event: ng-enter is the initial class of the enter event and ng-enter-active is its final class. They determine the style applied at the start of the animation and the final style displayed when the transition completes its cycle. A use case of the -active classes is when we want to set an initial color and a final color using a CSS transition. In the last sample, the ng-leave class has opacity set to 1 and the ng-leave-active class has opacity set to 0; so, the element fades away at the end of the animation. Great, we just created our first animation using AngularJS and CSS transitions.

AngularJS animation with CSS keyframe animations

We created an animation using the ngIf directive and CSS transitions. Now we are going to create an animation using ngRepeat and CSS keyframe animations.
As we saw in the earlier table of directives and supported animation events, the ngRepeat directive supports animations on the enter, leave, and move events. We already used the enter and leave events in the last sample. The move event is triggered when an item is moved around in the list of items. For this sample, we will create three functions on the controller scope: one to add elements to the list in order to trigger the enter event, one to remove an item from the list in order to trigger the leave event, and one to sort the elements so that we can see the move event. Here is the JavaScript with the functions; $scope.items is the array that we will use in the ngRepeat directive:

var app = angular.module('myApp', ['ngAnimate']);
app.controller('animationsCtrl', function ($scope) {
  $scope.items = [{ name: 'Richard' }, { name: 'Bruno' }, { name: 'Jobson' }];
  $scope.counter = 0;
  $scope.addItem = function () {
    var name = 'Item' + $scope.counter++;
    $scope.items.push({ name: name });
  };
  $scope.removeItem = function () {
    var length = $scope.items.length;
    var indexRemoved = Math.floor(Math.random() * length);
    $scope.items.splice(indexRemoved, 1);
  };
  $scope.sortItems = function () {
    $scope.items.sort(function (a, b) {
      return a.name < b.name ?
-1 : 1;
    });
  };
});

The HTML is as follows; the CSS styles are omitted here because we will see them later, separating each animation block:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>AngularJS ngRepeat sample</title>
</head>
<body>
  <div ng-controller="animationsCtrl">
    <h1>ngRepeat Animation</h1>
    <div>
      <div ng-repeat="item in items" class="repeatItem">
        {{item.name}}
      </div>
      <button ng-click="addItem()">Add item</button>
      <button ng-click="removeItem()">Remove item</button>
      <button ng-click="sortItems()">Sort items</button>
    </div>
  </div>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
</body>
</html>

We will add an animation to the element with the repeatItem and ng-enter classes, and we will declare the from and to keyframes. So, when an element appears, it starts with opacity set to 0 and color set to red, and animates for 1 second until opacity is 1 and color is black. This will be seen when an item is added to the ngRepeat array. The enter animation definition is declared as follows:

/* ngRepeat ng-enter animation */
.repeatItem.ng-enter {
  -webkit-animation: 1s ng-enter-repeat-animation;
  animation: 1s ng-enter-repeat-animation;
}
@-webkit-keyframes ng-enter-repeat-animation {
  from { opacity: 0; color: red; }
  to { opacity: 1; color: black; }
}
@keyframes ng-enter-repeat-animation {
  from { opacity: 0; color: red; }
  to { opacity: 1; color: black; }
}

The move animation, declared next, is triggered when we move an item of ngRepeat. We will add a keyframe animation to the element with the repeatItem and ng-move classes. We will declare the from and to keyframes.
So, when an element moves, it starts with opacity set to 1 and color set to black, and animates for 1 second until opacity is 0.5 and color is blue, shown as follows:

/* ngRepeat ng-move animation */
.repeatItem.ng-move {
  -webkit-animation: 1s ng-move-repeat-animation;
  animation: 1s ng-move-repeat-animation;
}
@-webkit-keyframes ng-move-repeat-animation {
  from { opacity: 1; color: black; }
  to { opacity: 0.5; color: blue; }
}
@keyframes ng-move-repeat-animation {
  from { opacity: 1; color: black; }
  to { opacity: 0.5; color: blue; }
}

The leave animation, declared next, is triggered when we remove an item of ngRepeat. We will add a keyframe animation to the element with the repeatItem and ng-leave classes, declaring the from and to keyframes; so, when an element leaves the DOM, it starts with opacity set to 1 and color set to black, and animates for 1 second until opacity is 0 and color is red, shown as follows:

/* ngRepeat ng-leave animation */
.repeatItem.ng-leave {
  -webkit-animation: 1s ng-leave-repeat-animation;
  animation: 1s ng-leave-repeat-animation;
}
@-webkit-keyframes ng-leave-repeat-animation {
  from { opacity: 1; color: black; }
  to { opacity: 0; color: red; }
}
@keyframes ng-leave-repeat-animation {
  from { opacity: 1; color: black; }
  to { opacity: 0; color: red; }
}

We can see that the ng-enter-active and ng-leave-active classes aren't used in this sample, as the keyframe animation already determines the initial and final states of the properties. When we use CSS keyframes, the classes with the -active suffix are not needed, whereas for CSS transitions they are useful to set the animation destination.

The CSS naming convention

In the last few sections, we saw how to create animations using AngularJS, CSS transitions, and CSS keyframe animations.
Creating animations using both CSS transitions and CSS animations is very similar because all animations in AngularJS are class based, and AngularJS animations have a well-defined class name pattern. We must follow the CSS naming convention by adding a specific class to the directive element so that we can determine the element animation. Otherwise, the ngAnimate module will not be able to recognize which element the animation applies to. We already know that both ngIf and ngRepeat use the ng-enter, ng-enter-active, ng-leave, and ng-leave-active classes that are added to the element in the enter and leave events. It's the same naming convention used by the ngInclude, ngSwitch, ngMessage, and ngView directives. The ngHide and ngShow directives follow a different convention. They add the ng-hide-add and ng-hide-add-active classes when the element is going to be hidden. When the element is going to be shown, they add the ng-hide-remove and ng-hide-remove-active classes. These class names are more intuitive for the purpose of hiding and showing elements. There is also the ngClass directive convention that uses the class name added to create the animation classes with the -add, -add-active, -remove, and -remove-active suffixes, similar to the ngHide directive. The ngRepeat directive uses the ng-move and ng-move-active classes when elements move their position in the DOM, as we already saw in the last sample. The ngClass directive animation sample The ngClass directive allows us to dynamically set CSS classes. So, we can programmatically add and remove CSS from DOM elements. Classes are already used to change element styles, so it's very good to see how useful animating the ngClass directive is. Let's see a sample of ngClass so that it's easier to understand. 
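The suffix convention can be captured in a tiny helper before we look at the sample. This is purely illustrative (animationClassesFor is our own name, not an AngularJS API); it only spells out the class names that ngAnimate derives from a toggled class:

```javascript
// Illustrative helper: enumerate the animation class names ngAnimate
// derives from a class toggled by ngClass (or from the ng-hide class).
function animationClassesFor(className) {
  return {
    add: [className + '-add', className + '-add-active'],
    remove: [className + '-remove', className + '-remove-active']
  };
}

console.log(animationClassesFor('animationClass'));
console.log(animationClassesFor('ng-hide'));
```

The same derivation applies to ng-hide, which is why the ngHide and ngShow animations use the ng-hide-add and ng-hide-remove classes.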
We will create the HTML code with a Toggle ngClass button that adds and removes the animationClass class on the element with the initialClass class through the ngClass directive:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>AngularJS ngClass sample</title>
</head>
<body>
  <link href="ngClassSample.css" rel="stylesheet" />
  <div>
    <h1>ngClass Animation</h1>
    <div>
      <button ng-click="toggleNgClass = !toggleNgClass">Toggle ngClass</button>
      <div class="initialClass" ng-class="{'animationClass' : toggleNgClass}">
        This element has class 'initialClass' and the ngClass directive is declared as ng-class="{'animationClass' : toggleNgClass}"
      </div>
    </div>
  </div>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
  <script>
    var app = angular.module('myApp', ['ngAnimate']);
  </script>
</body>
</html>

For this sample, we will use two basic classes: an initial class and the class that the ngClass directive will add to and remove from the element:

/* ngClass animation */
/* This is the initialClass, which stays on the element */
.initialClass {
  background-color: white;
  color: black;
  border: 1px solid black;
}
/* This is the animationClass, added or removed by the ngClass expression */
.animationClass {
  background-color: black;
  color: white;
  border: 1px solid white;
}

To create the animation, we will define a CSS animation using keyframes; so, we will only need to use the animationClass-add and animationClass-remove classes to add animations:

@-webkit-keyframes ng-class-animation {
  from { background-color: white; color: black; border: 1px solid black; }
  to { background-color: black; color: white; border: 1px solid white; }
}
@keyframes ng-class-animation {
  from { background-color: white; color: black; border: 1px solid black; }
  to { background-color: black; color: white; border: 1px solid white; }
}

In the initial state, the element is displayed with the initialClass style: a white background with black text and border. We want to display an animation when animationClass is added by the ngClass directive to the element with the initialClass class. This way, our animation selector will be:

.initialClass.animationClass-add {
  -webkit-animation: 1s ng-class-animation;
  animation: 1s ng-class-animation;
}

After 500 ms, the result should be a completely gray div tag, because the text, border, and background colors are halfway through the transition between black and white. After 1 second, the animation is complete and the element ends with the animationClass style: a black background with white text and border. The remove animation, which occurs when animationClass is removed, is similar to the add animation. However, it should be the reverse of the add animation, and so the CSS selector of the animation will be:

.initialClass.animationClass-remove {
  -webkit-animation: 1s ng-class-animation reverse;
  animation: 1s ng-class-animation reverse;
}

The animation result will be the same as described previously, but in the reverse order.

The ngHide and ngShow animation sample

Let's see a sample of the ngHide animation, which is the directive that hides the given HTML code based on an expression, just as the ngShow directive shows it. We will use this directive to create a success notification message that fades in and out. To keep a lean CSS file in this sample, we will use the Bootstrap CSS library, which is a great library to use with AngularJS. There is an AngularJS version of this library created by the Angular UI team, available at http://angular-ui.github.io/bootstrap/. The Twitter Bootstrap library is available at http://getbootstrap.com/. For this sample, we will use the Microsoft CDN; you can check out the Microsoft CDN libraries at http://www.asp.net/ajax/cdn.
Consider the following HTML:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>AngularJS ngHide sample</title>
</head>
<body>
  <link href="http://ajax.aspnetcdn.com/ajax/bootstrap/3.2.0/css/bootstrap.css" rel="stylesheet" />
  <style>
    /* ngHide animation */
    .ngHideSample {
      padding: 10px;
    }
    .ngHideSample.ng-hide-add {
      -webkit-transition: all linear 0.3s;
      -moz-transition: all linear 0.3s;
      -ms-transition: all linear 0.3s;
      -o-transition: all linear 0.3s;
      transition: all linear 0.3s;
      opacity: 1;
    }
    .ngHideSample.ng-hide-add-active {
      opacity: 0;
    }
    .ngHideSample.ng-hide-remove {
      -webkit-transition: all linear 0.3s;
      -moz-transition: all linear 0.3s;
      -ms-transition: all linear 0.3s;
      -o-transition: all linear 0.3s;
      transition: all linear 0.3s;
      opacity: 0;
    }
    .ngHideSample.ng-hide-remove-active {
      opacity: 1;
    }
  </style>
  <div>
    <h1>ngHide animation</h1>
    <div>
      <button ng-click="disabled = !disabled">Toggle ngHide animation</button>
      <div ng-hide="disabled" class="ngHideSample bg-success">
        This element has the ng-hide directive.
      </div>
    </div>
  </div>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
  <script>
    var app = angular.module('myApp', ['ngAnimate']);
  </script>
</body>
</html>

In this sample, we created an animation in which, when the element is about to hide, its opacity is transitioned to 0. When the element appears again, its opacity transitions back to 1, as we can see in the following sequence of screenshots.
In the initial state, the success message is visible. After we click on the button, the notification message starts to fade, and after the add (ng-hide-add) animation has completed, the element is hidden. Then, if we toggle again, we will see the success message fading in; after the animation has completed, it returns to the initial state. The ngShow directive uses the same convention; the only difference is that each directive has the opposite behavior for the model value. When the model is true, ngShow removes the ng-hide class and ngHide adds the ng-hide class, as we saw in the first sample of this article.

The ngModel directive and form animations

We can easily animate form controls such as input, select, and textarea on ngModel changes. Form controls already work with validation CSS classes such as ng-valid, ng-invalid, ng-dirty, and ng-pristine. These classes are appended to form controls by AngularJS, based on validations and the current form control status, and we are able to animate the add and remove events of those classes. So, let's see an example of how to change the input color to red when a field becomes invalid. This helps users check for errors while filling in the form, before it is submitted, and the animation eases the validation error experience. For this sample, a valid input will contain only digits and will become invalid once a non-digit character is entered. Consider the following HTML:

<h1>ngModel and form animation</h1>
<div>
  <form>
    <input ng-model="ngModelSample" ng-pattern="/^\d+$/" class="inputSample" />
  </form>
</div>

The ng-pattern directive uses the regular expression to validate that the ngModelSample model is a number. So, if we want to warn the user when the input is invalid, we will set the input text color to red using a CSS transition.
Consider the following CSS:

/* ngModel animation */
.inputSample.ng-invalid-add {
  -webkit-transition: 1s linear all;
  transition: 1s linear all;
  color: black;
}
.inputSample.ng-invalid {
  color: red;
}
.inputSample.ng-invalid-add-active {
  color: red;
}

We followed the same pattern as ngClass. So, when the ng-invalid class is added, the ng-invalid-add class is appended and the transition changes the text color to red over one second; the text then stays red, as we have defined the ng-invalid color as red too. The test is easy; we just need to type one non-numeric character into the input and it will display the animation.

The ngMessage and ngMessages directive animations

Both the ngMessage and ngMessages directives are complementary, but you can choose which one you want to animate, or even animate both of them. They were separated from the core module, so we have to add the ngMessages module as a dependency of our AngularJS application. These directives were added to AngularJS in version 1.3, and they are useful to display messages based on the state of the model of a form control. So, we can easily display a custom message if an input has a specific validation error, for example, when the input is required but is not filled in yet. Without these directives, we would rely on JavaScript code and/or complex ngIf statements to accomplish the same result.
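To make that motivation concrete, the hand-rolled logic that ngMessages replaces can be sketched as a plain function mapping an $error-like object to the messages to display (the helper, its name, and the message texts are ours, for illustration only):

```javascript
// Illustrative stand-in for what ngMessages does declaratively:
// map an ngModel $error object to the list of messages to display.
var MESSAGES = {
  pattern: '* This field is invalid, only numbers are allowed',
  minlength: "* It's mandatory at least 5 characters",
  maxlength: "* It's mandatory at most 10 characters"
};

function visibleMessages(error) {
  return Object.keys(MESSAGES)
    .filter(function (key) { return error[key] === true; })
    .map(function (key) { return MESSAGES[key]; });
}

console.log(visibleMessages({ pattern: true, minlength: true }));
console.log(visibleMessages({})); // no errors, so no messages
```

ngMessages expresses this selection declaratively in the template and, combined with ngAnimate, lets each appearing message be animated on its enter event.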
For this sample, we will create three different error messages for three different validations of a password field, as described in the following HTML:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>ngMessages animation</title>
</head>
<body>
  <link href="ngMessageAnimation.css" rel="stylesheet" />
  <h1>ngMessage and ngMessages animation</h1>
  <div>
    <form name="messageAnimationForm">
      <label for="modelSample">Password validation input</label>
      <div>
        <input ng-model="ngModelSample" id="modelSample" name="modelSample"
          type="password" ng-pattern="/^\d+$/" ng-minlength="5"
          ng-maxlength="10" required class="ngMessageSample" />
        <div ng-messages="messageAnimationForm.modelSample.$error"
          class="ngMessagesClass" ng-messages-multiple>
          <div ng-message="pattern" class="ngMessageClass">* This field is invalid, only numbers are allowed</div>
          <div ng-message="minlength" class="ngMessageClass">* It's mandatory at least 5 characters</div>
          <div ng-message="maxlength" class="ngMessageClass">* It's mandatory at most 10 characters</div>
        </div>
      </div>
    </form>
  </div>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-messages.min.js"></script>
  <script>
    var app = angular.module('myApp', ['ngAnimate', 'ngMessages']);
  </script>
</body>
</html>

We included the ngMessages script file too, as it's required for this sample. For the ngMessages directive, that is, the container of the ngMessage directives, we included an animation on ng-active-add that changes the container background color from white to red, and one on ng-inactive-add that does the opposite, changing the background color from red to white. This works because the ngMessages directive appends the ng-active class when there is any message to be displayed. When there is no message, it appends the ng-inactive class to the element.
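That container toggle amounts to a one-line decision; here is a hypothetical helper (not an AngularJS API) that mirrors it, so the CSS declarations that follow are easier to read:

```javascript
// Illustrative: the ngMessages container gets ng-active when any
// validation error is set, and ng-inactive otherwise.
function containerClass(error) {
  var hasError = Object.keys(error).some(function (k) { return error[k]; });
  return hasError ? 'ng-active' : 'ng-inactive';
}

console.log(containerClass({ minlength: true })); // → ng-active
console.log(containerClass({}));                  // → ng-inactive
```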
Let's see the ngMessages animation's declaration: .ngMessagesClass { height: 50px; width: 350px; } .ngMessagesClass.ng-active-add { transition: 0.3s linear all; background-color: red; } .ngMessagesClass.ng-active { background-color: red; } .ngMessagesClass.ng-inactive-add { transition: 0.3s linear all; background-color: white; } .ngMessagesClass.ng-inactive { background-color: white; } For the ngMessage directive, which contains a message, we created an animation that changes the color of the error message from transparent to white when the message enters the DOM, and changes the color from white to transparent when the message leaves DOM, shown as follows: .ngMessageClass { color: white; } .ngMessageClass.ng-enter { transition: 0.3s linear all; color: transparent; } .ngMessageClass.ng-enter-active { color: white; } .ngMessageClass.ng-leave { transition: 0.3s linear all; color: white; } .ngMessageClass.ng-leave-active { color: transparent; } This sample illustrates two animations for two directives that are related to each other. The initial result, before we add a password, is as follows: We can see both animations being triggered when we type in the a character, for example, in the password input. Between 0 and 300 ms of the animation, we will see both the background and text appearing for two validation messages: After 300 ms, the animation has completed, and the output is as follows: The ngView directive animation The ngView directive is used to add a template to the main layout. It has support for animation, for both enter and leave events. It's nice to have an animation for ngView, so the user has a better notion that we are switching views. For this directive sample, we need to add the ngRoute JavaScript file to the HTML and the ngRoute module as a dependency of our app. 
We will create a sample that slides the content of the current view to the left, and the new view appears sliding from the right to the left too so that we can see the current view leaving and the next view appearing. Consider the following HTML: <!DOCTYPE html> <html ng-app="myApp"> <head> <title>AngularJS ngView sample</title> </head> <body> <style> .ngViewRelative { position: relative; height: 300px; } .ngViewContainer { position: absolute; width: 500px; display: block; } .ngViewContainer.ng-enter, .ngViewContainer.ng-leave { -webkit-transition: 600ms linear all; transition: 600ms linear all; } .ngViewContainer.ng-enter { transform: translateX(500px); } .ngViewContainer.ng-enter-active { transform: translateX(0px); } .ngViewContainer.ng-leave { transform: translateX(0px); } .ngViewContainer.ng-leave-active { transform: translateX(-1000px); } </style> <h1>ngView sample</h1> <div class="ngViewRelative"> <a href="#/First">First page</a> <a href="#/Second">Second page</a> <div ng-view class="ngViewContainer"> </div> </div> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular-animate.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular-route.min.js"></script> <script> var app = angular.module('myApp', ['ngAnimate', 'ngRoute']); app.config(['$routeProvider', function ($routeProvider) { $routeProvider .when('/First', { templateUrl: 'first.html' }) .when('/Second', { templateUrl: 'second.html' }) .otherwise({ redirectTo: '/First' }); }]); </script> </body> </html> We need to configure the routes on config, as the JavaScript shows us. We then create the two HTML templates on the same directory. The content of the templates are just plain lorem ipsum. The first.html file content is shown as follows: <div> <h2>First page</h2> <p> Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Cras consectetur dui nunc, vel feugiat lectus imperdiet et. In hac habitasse platea dictumst. In rutrum malesuada justo, sed porttitor dolor rutrum eu. Sed condimentum tempus est at euismod. Donec in faucibus urna. Fusce fermentum in mauris at pretium. Aenean ut orci nunc. Nulla id velit interdum nibh feugiat ultricies eu fermentum dolor. Pellentesque lobortis rhoncus nisi, imperdiet viverra leo ullamcorper sed. Donec condimentum tincidunt mollis. Curabitur lorem nibh, mattis non euismod quis, pharetra eu nibh. </p> </div> The second.html file content is shown as follows: <div> <h2>Second page</h2> <p> Ut eu metus vel ipsum tristique fringilla. Proin hendrerit augue quis nisl pellentesque posuere. Aliquam sollicitudin ligula elit, sit amet placerat augue pulvinar eget. Aliquam bibendum pulvinar nisi, quis commodo lorem volutpat in. Donec et felis sit amet mauris venenatis feugiat non id metus. Fusce leo elit, egestas non turpis sed, tincidunt consequat tellus. Fusce quis auctor neque, a ultricies urna. Cras varius purus id sagittis luctus. Sed id lectus tristique, euismod ipsum ut, congue augue. </p> </div> Great, we now have our app set up to enable ngView and routes. The animation was defined by adding animation to the enter and leave events, using translateX(). This animation is defined to the new view coming from 500 px from the right and animating until the position on the x-axis is 0, leaving the view in the left corner. The leaving view goes from the initial position until it is at -1000 px on the x-axis. Then, it leaves the DOM. This animation creates a sliding effect; the leaving view leaves faster as it has to move the double of the distance of the entering view in the same animation duration. We can change the translation using the y-axis to change the animation direction, creating the same sliding effect but with different aesthetics. 
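The claim that the leaving view moves faster can be verified with quick arithmetic. This throwaway sketch uses the distances from the CSS above: the entering view travels 500 px and the leaving view 1000 px, both over the same 600 ms:

```javascript
// Quick check: both views animate for 600 ms but travel different
// distances, so the leaving view moves twice as fast.
function speedPxPerMs(distancePx, durationMs) {
  return distancePx / durationMs;
}

var enterSpeed = speedPxPerMs(500, 600);   // entering view: 500 px -> 0
var leaveSpeed = speedPxPerMs(1000, 600);  // leaving view: 0 -> -1000 px

console.log('enter:', enterSpeed.toFixed(2), 'px/ms');
console.log('leave:', leaveSpeed.toFixed(2), 'px/ms');
console.log('ratio:', leaveSpeed / enterSpeed); // → 2
```

So the leaving view moves at exactly twice the speed of the entering one, which is what produces the sliding effect described above.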
The ngSwitch directive animation The ngSwitch directive is a directive that is used to conditionally swap the DOM structure based on an expression. It supports animation on the enter and leave events, for example, the ngView directive animation events. For this sample, we will create the same sliding effect of the ngView sample, but in this case, we will create a sliding effect from top to bottom instead of right to left. This animation helps the user to understand that one item is being replaced by the other. The ngSwitch sample HTML is shown as follows: <!DOCTYPE html> <html ng-app="myApp"> <head> <title>AngularJS ngSwitch sample</title> </head> <body> <div ng-controller="animationsCtrl"> <h1>ngSwitch sample</h1> <p>Choose an item:</p> <select ng-model="ngSwitchSelected" ng-options="item for item in ngSwitchItems"></select> <p>Selected item:</p> <div class="switchItemRelative" ng-switch on="ngSwitchSelected"> <div class="switchItem" ng-switch-when="item1">Item 1</div> <div class="switchItem" ng-switch-when="item2">Item 2</div> <div class="switchItem" ng-switch-when="item3">Item 3</div> <div class="switchItem" ng-switch-default>Default Item</div> </div> </div> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular-animate.min.js"></script> <script> var app = angular.module('myApp', ['ngAnimate']); app.controller('animationsCtrl', function ($scope) { $scope.ngSwitchItems = ['item1', 'item2', 'item3']; }); </script> </body> </html> In the JavaScript controller, we added the ngSwitchItems array to the scope, and the animation CSS is defined as follows: /* ngSwitch animation */ .switchItemRelative { position: relative; height: 25px; overflow: hidden; } .switchItem { position: absolute; width: 500px; display: block; } /*The transition is added when the switch item is about to enter or about to leave DOM*/ .switchItem.ng-enter, .switchItem.ng-leave { 
-webkit-transition: 300ms linear all; -moz-transition: 300ms linear all; -ms-transition: 300ms linear all; -o-transition: 300ms linear all; transition: 300ms linear all; } /* When the element is about to enter DOM*/ .switchItem.ng-enter { bottom: 100%; } /* When the element completes the enter transition */ .switchItem.ng-enter-active { bottom: 0; } /* When the element is about to leave DOM*/ .switchItem.ng-leave { bottom: 0; } /*When the element end the leave transition*/ .switchItem.ng-leave-active { bottom: -100%; } This is almost the same CSS as the ngView sample; we just used the bottom property, added a different height to the switchItemRelative class, and included overflow:hidden. The ngInclude directive sample The ngInclude directive is used to fetch, compile, and include an HTML fragment; it supports animations for the enter and leave events, such as the ngView and ngSwitch directives. For this sample, we will use both templates created in the last ngView sample, first.html and second.html. 
The ngInclude animation sample HTML, with JavaScript and CSS included, is shown as follows:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>AngularJS ngInclude sample</title>
</head>
<body>
  <style>
    .ngIncludeRelative {
      position: relative;
      height: 500px;
      overflow: hidden;
    }
    .ngIncludeItem {
      position: absolute;
      width: 500px;
      display: block;
    }
    .ngIncludeItem.ng-enter,
    .ngIncludeItem.ng-leave {
      -webkit-transition: 300ms linear all;
      transition: 300ms linear all;
    }
    .ngIncludeItem.ng-enter { top: 100%; }
    .ngIncludeItem.ng-enter-active { top: 0; }
    .ngIncludeItem.ng-leave { top: 0; }
    .ngIncludeItem.ng-leave-active { top: -100%; }
  </style>
  <div ng-controller="animationsCtrl">
    <h1>ngInclude sample</h1>
    <p>Choose one template</p>
    <select ng-model="ngIncludeSelected" ng-options="item.name for item in ngIncludeTemplates"></select>
    <p>ngInclude:</p>
    <div class="ngIncludeRelative">
      <div class="ngIncludeItem" ng-include="ngIncludeSelected.url"></div>
    </div>
  </div>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
  <script>
    var app = angular.module('myApp', ['ngAnimate']);
    app.controller('animationsCtrl', function ($scope) {
      $scope.ngIncludeTemplates = [{ name: 'first', url: 'first.html' },
        { name: 'second', url: 'second.html' }];
    });
  </script>
</body>
</html>

In the JavaScript controller, we included the templates array. Finally, we can animate ngInclude using CSS. In this sample, we animate by sliding the templates using the top property, with the enter and leave event animations. To test this sample, just change the selected template value.
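Since the select box binds the whole template object, the URL handed to ng-include is just a property lookup on the selected item. A plain JavaScript sketch of that resolution (pickTemplate is our own illustrative helper):

```javascript
// The templates array from the controller; selecting an item in the
// dropdown hands its url to ng-include.
var ngIncludeTemplates = [
  { name: 'first', url: 'first.html' },
  { name: 'second', url: 'second.html' }
];

// Illustrative helper: resolve the url for a selected template name.
function pickTemplate(templates, name) {
  var match = templates.filter(function (t) { return t.name === name; })[0];
  return match ? match.url : null;
}

console.log(pickTemplate(ngIncludeTemplates, 'second')); // → second.html
console.log(pickTemplate(ngIncludeTemplates, 'missing')); // → null
```

Every time the resolved URL changes, ngAnimate triggers the leave animation on the old fragment and the enter animation on the new one.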
Do it yourself exercises

The following exercises will help you understand the concepts of this article better:

1. Create a spinning loading animation, using the ngShow or ngHide directives, that appears when the controller scope variable $scope.isLoading is equal to true.
2. Building on exercise 1, create a gray background layer with opacity 0.5 that smoothly fills the entire page behind the loading spinner and covers all the content until isLoading becomes false. The effect should be that of a drop of ink falling on a piece of paper and spreading until the paper is completely stained.
3. Create a success notification animation, similar to the ngShow example, but instead of the fade animation, use a slide-down animation, so the success message starts with height: 0px. Check http://api.jquery.com/slidedown/ for the expected animation effect.
4. Copy any animation from the http://capptivate.co/ website, using AngularJS and CSS animations.

Summary

In this article, we learned how to animate AngularJS native directives using the CSS transitions and CSS keyframe concepts. This article taught you how to create animations on AngularJS web apps.

Resources for Article:

Further resources on this subject:
- Important Aspect of AngularJS UI Development [article]
- Setting Up The Rig [article]
- AngularJS Project [article]
Packt
30 Oct 2014
18 min read

Untangle VPN Services

This article by Abd El-Monem A. El-Bawab, the author of Untangle Network Security, covers the Untangle VPN solution, OpenVPN. OpenVPN is an SSL/TLS-based VPN, mainly used for remote access, as it is easy to configure and offers clients for multiple operating systems and devices. OpenVPN can also provide site-to-site connections (only between two Untangle servers) with limited features.

OpenVPN

Untangle's OpenVPN is an SSL-based VPN solution based on the well-known open source application, OpenVPN. Untangle's OpenVPN is mainly used for client-to-site connections, with a client that is easy to deploy and configure and is widely available for Windows, Mac, Linux, and smartphones. Untangle's OpenVPN can also be used for site-to-site connections, but both sites need to have Untangle servers; site-to-site connections between Untangle and third-party devices are not supported.

How OpenVPN works

In reference to the OSI model, an SSL/TLS-based VPN encrypts only the application layer's data, while the lower layers' information is transferred unencrypted. In other words, the application packets are encrypted: the IP addresses of the server and client are visible, and the port number that the server uses for communication between the client and server is also visible, but the actual application port number is not. Furthermore, the final destination IP address is not visible; only the VPN server's IP address is seen.

Secure Sockets Layer (SSL) and Transport Layer Security (TLS) refer to the same technology; SSL is the predecessor of TLS. SSL was originally developed by Netscape, and several releases were produced (V.1 to V.3) until it was standardized under the TLS name.

The steps to create an SSL-based VPN session are as follows:

1. The client sends a message to the VPN server stating that it wants to initiate an SSL session. It also sends a list of all the ciphers (hash and encryption protocols) that it supports.
2. The server responds with a set of selected ciphers and sends its digital certificate to the client. The server's digital certificate includes the server's public key.
3. The client tries to verify the server's digital certificate by checking it against trusted certificate authorities and by checking the certificate's validity (valid from and valid through dates).
4. The server may need to authenticate the client before allowing it to connect to the internal network. This can be achieved either by asking for a valid username and password or by using the user's digital identity certificate. Untangle NGFW uses the digital certificates method.
5. The client creates a session key (which will be used to encrypt the data transferred between the two devices) and sends this key to the server, encrypted with the server's public key. No third party can obtain the session key, because the server is the only party that holds the private key required to decrypt it.
6. The server acknowledges to the client that it has received the session key and is ready for encrypted data transfer.

Configuring Untangle's OpenVPN server settings

After installing the OpenVPN application, the application will be turned off; you'll need to turn it on before you can use it. You can configure Untangle's OpenVPN server settings under OpenVPN Settings | Server. These settings configure how OpenVPN acts as a server for remote clients (which can be clients on Windows, Linux, or any other operating system, or another Untangle server). The different available settings are as follows:

- Site Name: This is the name of the OpenVPN site, used to identify this server among the other OpenVPN servers inside your organization. The name should be unique across all Untangle servers in the organization. A random name is automatically chosen for the site name.
- Site URL: This is the URL that the remote client will use to reach this OpenVPN server. This can be configured under Config | Administration | Public Address. If you have more than one WAN interface, the remote client will first try to initiate the connection using the settings defined in the public address. If this fails, it will randomly try the IPs of the remaining WAN interfaces.
- Server Enabled: If checked, the OpenVPN server will run and accept connections from remote clients.
- Address Space: This defines the IP subnet that will be used to assign IPs to the remote VPN clients. The value in Address Space must be unique and separate from all existing networks and other OpenVPN address spaces. A default address space is chosen that does not conflict with the existing configuration.

Configuring Untangle's OpenVPN remote client settings

Untangle's OpenVPN allows you to create OpenVPN clients to give your office employees, while out of the company, the ability to remotely access your internal network resources via their PCs and/or smartphones. An OpenVPN client can also be imported to another Untangle server to provide a site-to-site connection. Each OpenVPN client will have its own unique IP (from the address space range defined previously); thus, each OpenVPN client can only be used by one user. For multiple users, you'll have to create multiple clients, as using the same client for multiple users will result in client disconnection issues.

Creating a remote client

You can create remote access clients by clicking on the Add button located under OpenVPN Settings | Server | Remote Clients. A new window will open with the following settings:

- Enabled: If this checkbox is checked, the client is allowed to connect to the OpenVPN server. If unchecked, the connection is not allowed.
- Client Name: Give a unique name for the client; this will help you identify the client. Only alphanumeric characters are allowed.
- Group: Specify the group the client will be a member of. Groups are used to apply similar settings to their members.
- Type: Select Individual Client for remote access and Network for site-to-site VPN.

The following screenshot shows a remote access client created for JDoe:

After configuring the client settings, you'll need to press the Done button and then the OK or Apply button to save the client configuration. The new client will be available under the Remote Clients tab, as shown in the following screenshot:

Understanding remote client groups

Groups are used to group clients together and apply similar settings to the group members. By default, there will be a Default Group. Each group has the following settings:

- Group Name: Give a suitable name for the group that describes the group settings (for example, full tunneling clients) or the target clients (for example, remote access clients).
- Full Tunnel: If checked, all the traffic from the remote clients will be sent to the OpenVPN server, which allows Untangle to filter traffic directed to the Internet. If unchecked, the remote client will run in the split tunnel mode, which means that traffic directed to local resources behind Untangle is sent through the VPN, while traffic directed to the Internet is sent through the machine's default gateway. You can't use Full Tunnel for site-to-site connections.
- Push DNS: If checked, the remote OpenVPN client will use the DNS settings defined by the OpenVPN server. This is useful to resolve local names and services.
- Push DNS server: If OpenVPN Server is selected, remote clients will use the OpenVPN server for DNS queries. If set to Custom, the DNS servers configured here will be used for DNS queries.
- Push DNS Custom 1: If Push DNS server is set to Custom, the value configured here will be used as the primary DNS server for the remote client. If blank, no setting will be pushed to the remote client.
- Push DNS Custom 2: If Push DNS server is set to Custom, the value configured here will be used as the secondary DNS server for the remote client. If blank, no setting will be pushed to the remote client.
- Push DNS Domain: The configured value will be pushed to the remote clients to extend their domain search path during DNS resolution.

The following screenshot illustrates all these settings:

Defining the exported networks

Exported networks are used to define the internal networks behind the OpenVPN server that the remote client can reach after a successful connection. Additional routes will be added to the remote client's routing table stating that the exported networks (the main site's internal subnets) are reachable through the OpenVPN server. By default, each static non-WAN interface network will be listed in the Exported Networks list.

You can modify the default settings or create new entries. The Exported Networks settings are as follows:

- Enabled: If checked, the defined network will be exported to the remote clients.
- Export Name: Enter a suitable name for the exported network.
- Network: This defines the exported network. The exported network should be written in CIDR form.

These settings are illustrated in the following screenshot:

Using OpenVPN remote access clients

So far, we have been configuring the client settings but haven't created the actual package to be used on remote systems. We can get the remote client package by pressing the Download Client button located under OpenVPN Settings | Server | Remote Clients, which will start the process of building the OpenVPN client that will be distributed:

There are three available options to download the OpenVPN client. The first option is to download the client as a .exe file to be used with the Windows operating system. The second option is to download the client configuration files, which can be used with the Apple and Linux operating systems.
The third option is similar to the second one, except that the configuration file will be imported to another Untangle NGFW server, which is used for site-to-site scenarios. The following screenshot illustrates this:

The configuration files include the following files:

- <Site_name>.ovpn
- <Site_name>.conf
- keys\<Site_name>-<User_name>.crt
- keys\<Site_name>-<User_name>.key
- keys\<Site_name>-<User_name>-ca.crt

The certificate files are used for client authentication, and the .ovpn and .conf files hold the defined connection settings (that is, the OpenVPN server IP, the port used, and the ciphers used). The following screenshot shows the .ovpn file for the site Untangle-1849:

As shown in the following screenshot, the created file (openvpn-JDoe-setup.exe) includes the client name, which helps you identify the different clients and simplifies the process of distributing each file to the right user:

Using an OpenVPN client with Windows OS

Using an OpenVPN client with the Windows operating system is really very simple. To do this, perform the following steps:

1. Set up the OpenVPN client on the remote machine. The setup is very easy; it's just a next, next, install, and finish setup. Setting up and running the application as an administrator is important in order to allow the client to write the VPN routes to the Windows routing table. You should run the client as an administrator every time you use it so that the client can create the required routes.
2. Double-click on the OpenVPN icon on the Windows desktop. The application will run in the system tray.
3. Right-click on the application's system tray icon and select Connect.
The client will start to initiate the connection to the OpenVPN server, and a window with the connection status will appear, as shown in the following screenshot:

Once the VPN tunnel is initiated, a notification will appear from the client with the IP assigned to it, as shown in the following screenshot:

If the OpenVPN client is running in the task bar with an established connection, the client will automatically reconnect to the OpenVPN server if the tunnel is dropped due to Windows going to sleep.

By default, the OpenVPN client will not start at Windows login. We can change this, and allow it to start without requiring administrative privileges, by going to Control Panel | Administrative Tools | Services and changing the OpenVPN service's Startup Type to Automatic. Then, in the start parameters field, put --connect <Site_name>.ovpn; you can find <Site_name>.ovpn under C:\Program Files\OpenVPN\config.

Using OpenVPN with non-Windows clients

The method to configure OpenVPN clients to work with Untangle is the same for all non-Windows clients: simply download the .zip file provided by Untangle, which includes the configuration and certificate files, and place the files in the application's configuration folder. The steps are as follows:

- Download and install any of the following OpenVPN-compatible clients for your operating system:
  - For Mac OS X, Untangle, Inc. suggests using Tunnelblick, which is available at http://code.google.com/p/tunnelblick
  - For Linux, OpenVPN clients for different Linux distros can be found at https://openvpn.net/index.php/access-server/download-openvpn-as-sw.html
  - OpenVPN Connect for iOS is available at https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8
  - OpenVPN for Android 4.0+ is available at https://play.google.com/store/apps/details?id=net.openvpn.openvpn
- Log in to the Untangle NGFW server, download the .zip client configuration file, and extract the files from the .zip file.
- Place the configuration files into any of the following OpenVPN-compatible applications:
  - Tunnelblick: Manually copy the files into the Configurations folder located at ~/Library/Application Support/Tunnelblick.
  - Linux: Copy the extracted files into /etc/openvpn; you can then connect using sudo openvpn /etc/openvpn/<Site_name>.conf.
  - iOS: Open iTunes and select the files from the config ZIP file to add to the app on your iPhone or iPad.
  - Android: In the OpenVPN for Android application, click on all your precious VPNs. In the top-right corner, click on the folder, and then browse to the folder containing the OpenVPN .conf file. Click on the file and hit Select. Then, in the top-right corner, hit the little floppy disc icon to save the import. You should now see the imported profile; click on it to connect to the tunnel. For more information on this, visit http://forums.untangle.com/openvpn/30472-openvpn-android-4-0-a.html.
- Run the OpenVPN-compatible client.

Using OpenVPN for site-to-site connection

To use OpenVPN for a site-to-site connection, one Untangle NGFW server runs in the OpenVPN server mode, and the other server runs in the client mode. We will need to create a client that will be imported into the remote server. The client settings are shown in the following screenshot:

We will need to download the client configuration that is meant to be imported on another Untangle server (the third option on the client download menu), and then import this client configuration's zipped file on the remote server. To import the client, on the remote server under the Client tab, browse to the .zip file and press the Submit button. The client will be shown as follows:

You'll need to restart the two servers before being able to use the OpenVPN site-to-site connection. The site-to-site connection is bidirectional.
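The <Site_name>.conf (or .ovpn) file placed into the client's configuration folder is a standard OpenVPN client configuration. Its exact contents are generated by the Untangle server, but a file of this kind typically looks like the following hypothetical sketch (the server address, port, and file names are placeholders, not values taken from this article):

```
client
dev tun
proto udp
remote vpn.example.com 1194
nobind
persist-key
persist-tun
ca keys/untangle-1849-jdoe-ca.crt
cert keys/untangle-1849-jdoe.crt
key keys/untangle-1849-jdoe.key
verb 3
```

These are ordinary OpenVPN directives: remote names the server and port, dev tun requests a routed tunnel, and the ca/cert/key lines point at the per-client certificate files extracted from the downloaded .zip.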
Reviewing the connection details

The currently connected clients (whether OS clients or other Untangle NGFW clients) appear under Connected Remote Clients, located under the Status tab. The screen shows the client name, its external address, and the address assigned to it by OpenVPN, in addition to the connection start time and the amount of data transmitted and received (in MB) during the connection:

For the site-to-site connection, the client server shows the name of the remote server and whether the connection is established, in addition to the amount of data transmitted and received in MB:

Event logs show a detailed connection history, as shown in the following screenshot:

In addition, there are two reports available for Untangle's OpenVPN:

- Bandwidth usage: This report shows the maximum and average data transfer rate (KB/s) and the total amount of data transferred that day
- Top users: This report shows the top users connected to the Untangle OpenVPN server

Troubleshooting Untangle's OpenVPN

In this section, we will discuss some points to consider when dealing with Untangle NGFW OpenVPN.

OpenVPN acts as a router, as it routes between different networks. Using OpenVPN with Untangle NGFW in the bridge mode (that is, when the Untangle NGFW server is behind another router) requires additional configuration:

- Create a static route on the router that routes any traffic for the VPN range (the VPN address pool) to the Untangle NGFW server.
- Create a port forward rule for the OpenVPN port 1194 (UDP) on the router to Untangle NGFW.
- Verify that your setting under Config | Administration | Public Address is correct, as it is used by Untangle to configure OpenVPN clients, and ensure that the configured address is resolvable from outside the company.
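The addressing rules in this and the preceding sections can be sanity-checked with a short script: the VPN address pool must be unique and separate from all existing networks, and exported networks are written in CIDR form. The following is an illustrative sketch using Python's standard ipaddress module, not an Untangle tool; the subnets are the ones used in this article's lab scenario, and the VPN pool value is an assumption.

```python
import ipaddress

# Assumed values (the 172.16.136.0/24 pool is hypothetical).
vpn_address_pool = ipaddress.ip_network("172.16.136.0/24")
existing_networks = [
    ipaddress.ip_network("172.16.1.0/24"),    # Acme internal LAN
    ipaddress.ip_network("192.168.1.0/24"),   # remote user's home LAN
]

# The address space must not overlap any existing network.
for net in existing_networks:
    if vpn_address_pool.overlaps(net):
        print(f"Conflict: VPN pool {vpn_address_pool} overlaps {net}")
    else:
        print(f"OK: {net} does not overlap the VPN pool")

# Exported networks: a host behind the server is reachable through the
# tunnel only if it falls inside an exported CIDR block.
exported = ipaddress.ip_network("172.16.1.0/24")
file_server = ipaddress.ip_address("172.16.1.7")   # Acme-FS01
print(file_server in exported)                      # True -> routed via VPN
```

The same overlap check explains the troubleshooting note later in this article: if both sites use IPs from the same subnet, no routing decision can send traffic into the tunnel.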
If the OpenVPN client is connected, but you can't access anything, perform the following steps:

- Verify that the hosts you are trying to reach are listed in Exported Networks.
- Try to ping the Untangle NGFW LAN IP address (if exported).
- Try to bring up the Untangle NGFW GUI by entering the IP address in a browser.

If the preceding tasks work, your tunnel is up and operational. If you still can't reach any clients inside the network, check the following conditions:

- The client machine's firewall is not blocking connections from the OpenVPN client.
- The client machine uses Untangle as a gateway or has a static route that sends the VPN address pool to Untangle NGFW.

In addition, some port forwarding rules on Untangle NGFW are needed for OpenVPN to function properly. The required ports are 53, 445, 389, 88, 135, and 1025.

If the site-to-site tunnel is set up correctly, but the two sites can't talk to each other, the reasons may be as follows:

- If your sites have IPs from the same subnet (this typically happens when you use a service from the same ISP for both branches), OpenVPN may fail, as it considers that no routing is needed between IPs in the same subnet. You should ask your ISP to change the IPs.
- To get DNS resolution to work over the site-to-site tunnel, you'll need to go to Config | Network | Advanced | DNS Server | Local DNS Servers and add the IP of the DNS server on the far side of the tunnel. Enter the domain in the Domain List column and use the FQDN when accessing resources. You'll need to do this on both sides of the tunnel for it to work from either side.
- If you are using site-to-site VPN in addition to client-to-site VPN, but the OpenVPN clients are able to connect to the main site only, you'll need to add the VPN address pool to Exported Hosts and Networks.

Lab-based training

This section provides training for the OpenVPN site-to-site and client-to-site scenarios. In this lab, we will mainly use Untangle-01, Untangle-03, and a laptop (192.168.1.7).
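The connectivity checks listed above (reaching hosts across the tunnel, verifying that required ports such as 53, 445, and 389 are reachable) can be scripted. The following is a minimal hypothetical sketch using Python's standard socket module; the host value is a placeholder, and only TCP reachability is tested (OpenVPN's own port 1194 is UDP, so it needs a different check):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# TCP ports this article lists as required for clients behind the tunnel.
REQUIRED_PORTS = [53, 445, 389, 88, 135, 1025]

def check_tunnel_ports(host):
    """Probe each required port on a host behind the tunnel."""
    return {port: port_open(host, port) for port in REQUIRED_PORTS}

# Example usage (replace with the LAN IP of a server behind the tunnel):
# for port, ok in sorted(check_tunnel_ports("172.16.1.7").items()):
#     print(f"port {port}: {'open' if ok else 'closed/filtered'}")
```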
The ABC bank started a project with Acme schools. As part of this project, the ABC bank team needs to periodically access files located on Acme-FS01, so the two parties decided to opt for OpenVPN. However, Acme's network team doesn't want to leave access wide open for ABC bank members, so they set firewall rules to limit ABC bank's access to the file server only. In addition, the IT team director wants to have VPN access from home to the Acme network, which they decided to accomplish using OpenVPN.

The following diagram shows the environment used in the site-to-site scenario:

To create the site-to-site connection, we need to perform the following steps:

1. Enable the OpenVPN server on Untangle-01.
2. Create a network type client with a remote network of 172.16.1.0/24.
3. Download the client and import it under the Client tab in Untangle-03.
4. Restart the two servers.

After the restart, you have a site-to-site VPN connection. However, the Acme network is wide open to the ABC bank, so we need to create a limiting firewall rule. On Untangle-03, create a rule that allows any traffic that comes from the OpenVPN interface, has a source of 172.16.136.10 (the Untangle-01 client IP), and is directed to 172.16.1.7 (Acme-FS01). The rule is shown in the following screenshot:

We will also need a general block rule that comes after the preceding rule in the rule evaluation order.

The environment used for the client-to-site connection is shown in the following diagram:

To create a client-to-site VPN connection, we need to perform the following steps:

1. Enable the OpenVPN server on Untangle-03.
2. Create an individual client type client on Untangle-03.
3. Distribute the client to the intended user (that is, 192.168.1.7).
4. Install OpenVPN on your laptop.
5. Connect using the installed OpenVPN and try to ping Acme-DC01 using its name. The ping will fail because the client is not able to query the Acme DNS, so, in the Default Group settings, change Push DNS Domain to Acme.local.
Changing the group settings will not affect the OpenVPN client till the client is restarted. Now, the ping process will be a success. Summary In this article, we covered the VPN services provided by Untangle NGFW. We went deeply into understanding how each solution works. This article also provided a guide on how to configure and deploy the services. Untangle provides a free solution that is based on the well-known open source OpenVPN, which provides an SSL-based VPN. Resources for Article: Further resources on this subject: Important Features of Gitolite [Article] Target Exploitation [Article] IPv6 on Packet Tracer [Article]
Packt
30 Oct 2014
7 min read

concrete5 – Creating Blocks

In this article by Sufyan bin Uzayr, author of the book concrete5 for Developers, you will be introduced to concrete5. Basically, we will be talking about the creation of concrete5 blocks.

Creating a new block

Creating a new block in concrete5 can be a daunting task for beginners, but once you get the hang of it, the process is pretty simple. For the sake of clarity, we will focus on the creation of a new block from scratch. If you already have some experience with block building in concrete5, you can skip the initial steps of this section. The steps to create a new block are as follows:

1. First, create a new folder within your project's blocks folder. Ideally, the name of the folder should bear relevance to the actual purpose of the block; thus, a slideshow block could be named slide. Assuming that we are building a contact form block, let's name our block's folder contact.
2. Next, you need to add a controller class to your block. If you have some level of expertise with concrete5 development, you will already be aware of the meaning and purpose of the controller class. Basically, a controller is used to control the flow of an application; say, it can accept requests from the user, process them, and then prepare the data to present in the result, and so on. For now, we need to create a file named controller.php in our block's folder.
For the contact form block, this is how it is going to look (don't forget the PHP tags):

class ContactBlockController extends BlockController {

    protected $btTable = 'btContact';

    /**
     * Used for internationalization (i18n).
     */
    public function getBlockTypeDescription() {
        return t('Display a contact form.');
    }

    public function getBlockTypeName() {
        return t('Contact');
    }

    public function view() {
        // If the block is rendered
    }

    public function add() {
        // If the block is added to a page
    }

    public function edit() {
        // If the block instance is edited
    }
}

The preceding code is pretty simple and has become something of a norm when it comes to block creation in concrete5. Basically, our class extends BlockController, which is responsible for installing the block, saving the data, and rendering templates. The name of the class should be the camel case version of the block handle, followed by BlockController. We also need to specify the name of the database table in which the block's data will be saved. More importantly, as you must have noticed, we have three separate functions: view(), add(), and edit(). The roles of these functions have been described earlier.

Next, create three files within the block's folder: view.php, add.php, and edit.php (yes, the same names as the functions in our code). The names are self-explanatory: add.php will be used when a new block is added to a given page, edit.php will be used when an existing block is edited, and view.php jumps into action when users view the block live on the page.

Often, it becomes necessary to have more than one template file within a block. If so, you need to render templates dynamically in order to decide which one to use in a given situation. As discussed previously, the BlockController class has a render($view) method that accepts a single parameter in the form of the template's filename.
To do this from controller.php, we can use code such as the following:

public function view() {
    if ($this->isPost()) {
        $this->render('block_pb_view');
    }
}

In the preceding example, the file named block_pb_view.php will be rendered instead of view.php. To reiterate, note that the render($view) method does not require the .php extension in its parameter.

Now, it is time to display the contact form. The file in question is view.php, where we can put virtually any HTML or PHP code that suits our needs. For example, in order to display our contact form, we can hardcode the HTML markup or make use of the form helper to generate it. A hardcoded version of our contact form might look as follows:

<?php defined('C5_EXECUTE') or die("Access Denied.");
global $c; ?>
<form method="post" action="<?php echo $this->action('contact_submit'); ?>">
    <label for="txtContactTitle">SampleLabel</label>
    <input type="text" name="txtContactTitle" /><br /><br />
    <label for="taContactMessage"></label>
    <textarea name="taContactMessage"></textarea><br /><br />
    <input type="submit" name="btnContactSubmit" />
</form>

Each time the block is displayed, the view() function from controller.php will be called. The action() method in the preceding code generates URLs and verifies the submitted values each time a user inputs content in our contact form.

Much like any other contact form, we now need to handle contact requests. The procedure is pretty simple and almost the same as what we would use in any other development environment: we need to verify that the request in question is a POST request and, accordingly, read the $post values; if not, we need to discard the entire request. We can also use the mail helper to send an e-mail to the website owner or administrator.

Before our block can be fully functional, we need to add a database table, because concrete5, much like most other CMSs in its league, works with a database system.
In order to add a database table, create a file named db.xml within the concerned block's folder. Thereafter, concrete5 will automatically parse this file and create a relevant table in the database for your block. For our contact form block, and for other basic block-building purposes, this is how the db.xml file should look:

<?xml version="1.0"?>
<schema version="0.3">
    <table name="btContact">
        <field name="bID" type="I">
            <key />
            <unsigned />
        </field>
    </table>
</schema>

You can make relevant changes to the preceding schema definition to suit your needs. For instance, this is how the default YouTube block's db.xml file looks:

<?xml version="1.0"?>
<schema version="0.3">
    <table name="btYouTube">
        <field name="bID" type="I">
            <key />
            <unsigned />
        </field>
        <field name="title" type="C" size="255"></field>
        <field name="videoURL" type="C" size="255"></field>
    </table>
</schema>

The preceding steps enumerate the process of creating your first block in concrete5. While you can now work with concrete5 blocks for the most part, there are certain additional details you should be aware of in order to utilize block functionality in concrete5 to its fullest. The first, and probably the most useful, of these is the validation of user inputs within blocks and forms.

Summary

In this article, we learned how to create our very first block in concrete5.

Resources for Article:

Further resources on this subject:
- Alfresco 3: Writing and Executing Scripts [Article]
- Integrating Moodle 2.0 with Alfresco to Manage Content for Business [Article]
- Alfresco 3 Business Solutions: Types of E-mail Integration [Article]
Packt
30 Oct 2014
16 min read

Planning a Compliance Program in Microsoft System Center 2012

This article, created by Andreas Baumgarten, Ronnie Isherwood, and Susan Roesner, the authors of the book Microsoft System Center 2012 Compliance Management Cookbook, mentions all of the System Center products we work with in the book. It also talks about responsibilities/stakeholders and the reports involved.

Understanding the responsibilities of the System Center 2012 tools

This recipe shows how the System Center tools, in addition to Security Compliance Manager, work together, and it also explains the focus of each tool.

Getting ready

In order to create a successful compliance program, you must have a clear understanding of your goals and your regulatory and business requirements. This information is key to understanding which control objectives are required and which control activities fulfill your goals and requirements, that is, your control objectives.

How to do it...

Based on your company and regulatory requirements, you must create your control objectives. There are libraries, such as the Unified Compliance Framework (UCF), that provide control objectives and control activities based on regulatory requirements such as privacy laws and frameworks such as COBIT, COSO, and so on. These compliance libraries are great tools to help you with this first step, but always keep in mind that they are just tools.

The following diagram provides a visual summary of the tasks required for the successful creation of a technical compliance program, showing which System Center tool to use for the different tasks. The tasks are as follows:

- Define compliance requirements
- Perform authorized implementation
- Review adherence to compliance requirements
- Perform remediation

The minimum steps that are required are as follows:

1. Define and document your compliance program, including compliance policies, standards, corresponding control objectives, and control activities. The results of this step are control objectives and control activities.
Based on the scenario of access compliance, you should create a password policy that states the password rules and the reasons for the policy. This must be distributed to your users. With your policy and, more importantly, your regulatory and company requirements in mind, decide on your control activities.

2. The results of step 1 are the input for this task. For manual controls, use System Center Service Manager for documentation. The results are control status information. Based on the password policy scenario, in case no System Center Configuration Manager exists, the correct implementation of the password policy has to be checked manually. The result of this check should be entered into System Center Service Manager, either in Incident or in Change Management. Creating either one provides a record of the manual control and has the benefit of being included in the System Center Service Manager reports.

3. The results of step 1 are the input for this task. For automated control activities based on configuration settings, do the following:
   - Use Security Compliance Manager to create compliance baselines for control activities based on configuration settings. The result is the compliance baseline. For example, create a baseline with your password policy.
   - Use Compliance Settings within System Center Configuration Manager to run the compliance baseline (the input is the compliance baseline from the previous step). The results are compliance control status information. Use the compliance baseline from Security Compliance Manager to ensure adherence to it. After configuring the controls, they will run automatically. In addition, auto-remediation of failed controls is possible.

4. The results of step 1 are the input for this task. For automated control activities based on breach status, use Audit Collection Services from System Center Operations Manager. The results are compliance breach information.
Based on the password policy scenario, create a monitoring rule for unauthorized access to critical systems and unauthorized changes to your password policy.

5. The results from steps 2, 3, and 4 are the input for compliance status and audit reports. Reports are available in the following:

- System Center Service Manager, for manual control activities, in addition to centralized reports of automated controls, where the input of the controls comes from other tools such as System Center Configuration Manager. The reports are based on the System Center Service Manager function of Incident or Change Management.
- System Center Configuration Manager, for the control activities in step 3.
- System Center Operations Manager, for the control activities in step 4.

The results from step 3 could be remediated automatically. The results of steps 2 and 4 must be considered and, if required, steps for remediation should be taken.

How it works...

The tool to use for control objectives and the corresponding control activities depends on the possible input and the required result of the control activity. The most basic questions that have to be answered are as follows:

- Does the control activity have a manual input/output?
- Is the control activity based on a query for system or application configuration status information?
- Is the control activity based on a query for monitoring/breaching status information?
- What type of control (manual or automated) and which characteristic of the control (preventive or detective) do you require?

Based on the answers to those questions, it is possible to understand which System Center tool or Microsoft technology to use. Creating automated control activities within the System Center tools is but one task. The next steps must be as follows:

- Creating notifications or alerts for relevant stakeholders and reports
- Performing remediation

Every System Center tool has its own reporting capability.
This means there are different compliance reports in different System Center tools. To enhance usability, System Center Service Manager can be used to centralize most of those reports.

Remediation for automated control activities based on configuration settings is a feature of System Center Configuration Manager. Depending on the requirements, after running a control activity compliance baseline (which includes storing the results), all negative results of control activities can be remediated automatically. For auto-remediation based on monitoring or breach information, System Center Configuration Manager offers the capability to define actions. System Center Operations Manager offers the same capabilities: each breach of a baseline can be auto-remediated out of the box.

Regardless of the tool used, remediation must always be done in a documented fashion, as this is a very common compliance requirement. In case a change is involved, the change management capabilities of System Center Service Manager should be used. All System Center tools create logs showing the actions performed.

There's more...

Besides the already mentioned System Center tools, there are two more tools that belong to the core System Center product family: System Center Virtual Machine Manager and System Center Data Protection Manager.

System Center 2012 R2 Virtual Machine Manager doesn't offer any compliance functionality beyond an audit log of administrator activities. Such a log is a requirement in several regulations and is therefore quite useful, but out of the box, no additional benefits are provided.

System Center 2012 R2 Data Protection Manager (SCDPM) is different. This tool is used for backup and disaster recovery, two topics that are requirements in many standards and regulations.
But there are already great books out there focusing on SCDPM, such as the following one: http://www.amazon.com/Microsoft-System-Center-Protection-Manager/dp/1849686300/ref=sr_1_1?ie=UTF8&qid=1403700777&sr=8-1&keywords=System+Center+Data+Protection+Manager. Therefore, we have not included any SCDPM recipes in this book.

See also

- http://technet.microsoft.com/en-us/library/gg681958.aspx (the article in the TechNet library on the planning of Compliance Settings in SCCM 2012)
- http://technet.microsoft.com/en-us/library/hh212740.aspx (planning of Audit Collection Services in SCOM 2012 in the TechNet library)
- http://technet.microsoft.com/en-us/solutionaccelerators/cc835245.aspx (the article in the TechNet library on the planning of compliance baselines in Security Compliance Manager)

Planning the implementation of Microsoft System Center 2012 Service Manager

Microsoft System Center 2012 Service Manager (SCSM 2012) can be used to manage IT management processes (ITIL and MOF). The compliance management process is related to the IT management processes as well: compliance issues can be handled as incident records, and compliance-related changes can be managed in SCSM 2012 as change requests.

Getting ready

Before we start planning the installation of SCSM 2012, you should be familiar with the ITIL or MOF management processes. Also, you should have planned the Incident and Change Management processes for IT.

How to do it...

An example of the steps to plan the installation of the SCSM 2012 environment is as follows:

1. Identify the required components of SCSM 2012. The SP1 or R2 versions are also suitable.
2. Identify the sizing of the SCSM 2012 infrastructure.
3. Identify the number of Configuration Items you want to have in the CMDB of SCSM 2012.

How it works...

For good performance in SCSM 2012, sizing and planning of the environment is essential. A Service Manager Sizing Helper tool is available in the Service Manager job aids documentation.
You can download the SM_job_aids.zip file at http://go.microsoft.com/fwlink/p/?LinkID=232378. One sizing-related factor is the number of managed Configuration Items (CIs), for instance, managed users, computers, groups, printers, and other IT-related objects. Planning the number of CIs influences the sizing of SCSM 2012 as well.

There's more...

The IT Governance, Risk and Compliance Management Pack (IT GRC MP) is not supported in SCSM 2012 R2. The last supported version is Microsoft System Center 2012 SP1 Service Manager.

See also

- http://www.itil-officialsite.com/ (IT Infrastructure Library)
- http://technet.microsoft.com/en-us/library/cc543224.aspx (Microsoft Operations Framework)
- http://technet.microsoft.com/en-us/library/hh519640.aspx (planning of SCSM 2012 in the TechNet library)
- http://www.packtpub.com/microsoft-system-center-servicemanager-2012-cookbook/book (Microsoft System Center 2012 Service Manager Cookbook)
- http://www.microsoft.com/en-us/download/details.aspx?id=4953 (IT GRC Process Management Pack SP1 for System Center Service Manager)

Planning and defining compliance reports

The goal of compliance reports is to answer two questions: "How am I doing?" and "How effectively am I doing it?", especially with regard to helping the business understand current and future threats. This recipe gives an overview of how to plan compliance reports.

Getting ready

Research the regulatory requirements using your country's respective laws, industry standards, and regulations. This will ensure your reports are relevant to your business and technical compliance objectives. For example, standards such as SOX section 404 demand reports with certain criteria.

How to do it...

There are at least two different types of reports you must plan for:

- Compliance status or audit reports
- Stakeholder-targeted reports

Compliance status / audit reports

Compliance status / audit reports are based on your controls.
For these reports to answer the question about the actual compliance status and the effectiveness of the compliance program for your business, careful consideration has to be given to which control activities will support your business objectives. Therefore, the planning stage of these reports is already completed by the planning of your control objectives and their verification through control activities. With regard to System Center, out-of-the-box reports will be sufficient for many of these compliance status and audit reports. As mentioned in the Getting ready section of this recipe, be aware that several laws and industry standards demand certain formats for audit reports. In cases where the System Center out-of-the-box reports do not fulfill them, customized reports will have to be used.

Stakeholder-targeted reports

The purpose of these reports is to keep the stakeholders of your company informed about the compliance program. Therefore, the following questions must be answered:

- Who is the target audience of this report?
- What is your goal for this report?
- What input/output data can provide the required information?
- Who is responsible for the report (owner)?
- What is the frequency of the reports?
- How do you want to present this report?
- What is the improvement process on reports, including data source input/output and controls?

We will focus on the first question. You will have to consider reports for different levels, such as the following:

- CEO and/or board members
- CISO and/or IT/Security team
- IT application owner

Depending on the targeted audience of the report, different input/output values have to be used, and sometimes a translation of inputs/outputs is required. Using out-of-the-box reports from System Center tools will be possible for reports targeting IT application owners, the CISO, the IT Security team, or the compliance team.
Reports based on the analysis of certain control input/output data will be accomplished either by using customized reports within System Center Service Manager or by using additional tools such as System Center Orchestrator or dashboards.

Regardless of the actual report, during the planning phase, several principles that will increase the value of the reports to the business should be followed. These principles are as follows:

- The overall report should be complete for its context
- The input of these reports should be consistently measured, at a low cost and, preferably, as a number or percentage in context
- The output should be relevant
- The input and output should be transparent

Complete

Based on previous experience, compliance and IT security staff, while attempting to create reports, put in as much information and as many statistics as possible. In general, the report must be complete for the context it is designed for. So, on a CISO level, besides technical controls, manual controls including policy or process compliance may have to be included. Still, the report must be concise too. As a best practice, start out with the out-of-the-box reports provided by the System Center tools, as they offer a large number of compliance reports.

Measurable

Being measurable is a key principle for creating valuable reports. The input and output data source or control should be measured in a consistent way, so that two different people using the same control at the same time produce the same output. As much as possible, controls should be automated to ensure this consistency and also to minimize cost. In case a control has to be performed manually, it is essential that the people performing it do so in a consistent way. To accomplish this goal, each control should be documented; for example, there should be a document on IT compliance for identity and access management.
For general users, one important document is the password policy, which should answer the questions why, what, how, and by whom. For IT people, there should be an additional document stating the technical implementation and the automated or manual controls that ensure compliance with the password policy. It should also mention how each control activity should be performed.

In addition, whenever possible, controls used for reports should be expressed as numbers or percentages against a unit. For example, a control saying "10 systems out of 100 systems have missing critical security updates" provides a clear value for a decision on what to do next, compared to a control with the value of "medium".

Relevant

This principle is especially important for the stakeholder-targeted reports. Reports should include controls or output that help the targeted audience in decision making. So, the question of to whom the report will be provided is a deciding factor in what to include and how to present the report. The IT security staff or compliance team requires the exact numbers of controls, for example, that 10 systems out of 100 systems have missing critical security updates, whereas the IT application owner requires a drill-down on which systems and security updates are missing.

Transparent

The target audience of a report must understand the controls or output used in the report. The labels should be plain and consistent, and clear measures for controls or input/output should be used. In addition, the audience must understand how these results came to be. This is especially true for indexes that comprise several controls; if such an index is used, it should be clear which controls are included. If possible, indexes should be avoided, as they average out the values presented in the report. For example, System Center Service Manager allows a report on incidents in the category under Security Compliance | System Security.
The overall status of System Security may be green if all controls besides the patch management control relating to critical security updates did not report any issues. In case the affected systems hold sensitive information or are accessible by the public, this could be a factor in the decision process for immediate remediation steps. Understanding which controls are within the System Security category, and being able to drill down into those controls or the input/output used, is important for a report.

How it works...

The focus here is to give you an understanding of the types of reports you should consider and some basic considerations you should include in planning your reports. The first step is the planning of your control activities. If the control activities do not provide a measure of the effectiveness of your control objectives, then no report will be able to answer this question. Hence, qualified input/output controls are required. In this regard, the stakeholders of controls must provide input and sometimes must be included to improve processes or controls. Use the questions in this recipe to start out with the creation of your reports, but keep in mind that you have to adapt the information provided here to meet your business requirements and objectives.

There's more...

All System Center reports are based on the SQL Reporting service. This means you can create customized reports should the out-of-the-box reports not provide the information you require.

See also

Detailed information on how to plan and implement reports may be found in the book Security Metrics, Andrew Jaquith, Addison-Wesley Professional (http://www.amazon.com/Security-Metrics-Andrew-Jaquith-ebook/dp/B0050G2RC8). Security Metrics is a book focusing on the effective measurement of IT security operations. It provides insights into implementing qualitative and meaningful data sources, ensuring reports that provide the knowledge needed to make the right strategic decisions.
Look out for an upcoming cookbook by Sam Erskine from Packt Publishing. This book on System Center reporting will provide detailed information on how to plan and implement reports based on System Center.

Summary

This article provided recipes on how to integrate the System Center products. The recipes use hands-on examples to show the planning and implementation required to align the System Center tools with the compliance process.

Resources for Article:

Further resources on this subject:

- VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [Article]
- A Virtual Machine for a Virtual World [Article]
- An Introduction to Microsoft Remote Desktop Services and VDI [Article]
Importance of Windows RDS in Horizon View

Packt
30 Oct 2014
15 min read
In this article by Jason Ventresco, the author of VMware Horizon View 6 Desktop Virtualization Cookbook, we will learn about Windows Remote Desktop Services (RDS) and how it is implemented in Horizon View. We will discuss configuring the Windows RDS server and also creating an RDS farm in Horizon View. (For more resources related to this topic, see here.)

Configuring the Windows RDS server for use with Horizon View

This recipe will provide an introduction to the minimum steps required to configure Windows RDS and integrate it with our Horizon View pod. For a more in-depth discussion on Windows RDS optimization and management, consult the Microsoft TechNet page for Windows Server 2012 R2 (http://technet.microsoft.com/en-us/library/hh801901.aspx).

Getting ready

VMware Horizon View supports the following versions of Windows Server for use with RDS:

- Windows Server 2008 R2: Standard, Enterprise, or Datacenter, with SP1 or later installed
- Windows Server 2012: Standard or Datacenter
- Windows Server 2012 R2: Standard or Datacenter

The examples shown in this article were performed on Windows Server 2012 R2. Additionally, all of the applications required have already been installed on the server, which in this case included Microsoft Office 2010.

Microsoft Office has specific licensing requirements when used with Windows Server RDS. Consult Microsoft's Licensing of Microsoft Desktop Application Software for Use with Windows Server Remote Desktop Services document (http://www.microsoft.com/licensing/about-licensing/briefs/remote-desktop-services.aspx) for additional information.

The Windows RDS feature requires a licensing server component called the Remote Desktop Licensing role service. For reasons of availability, it is not recommended that you install it on the RDS host itself, but rather on an existing server that serves some other function, or even on a dedicated server if possible.
Ideally, the RDS licensing role should be installed on multiple servers for redundancy reasons. The Remote Desktop Licensing role service is different from the Microsoft Windows Key Management System (KMS), as it is used solely for Windows RDS hosts. Consult the Microsoft TechNet article, RD Licensing Configuration on Windows Server 2012 (http://blogs.technet.com/b/askperf/archive/2013/09/20/rd-licensing-configuration-on-windows-server-2012.aspx), for the steps required to install the Remote Desktop Licensing role service. Additionally, consult the Microsoft document Licensing Windows Server 2012 R2 Remote Desktop Services (http://download.microsoft.com/download/3/D/4/3D42BDC2-6725-4B29-B75A-A5B04179958B/WindowsServerRDS_VLBrief.pdf) for information about the licensing options for Windows RDS, which include both per-user and per-device options.

Windows RDS host – hardware recommendations

The following resources represent a starting point for assigning CPU and RAM resources to Windows RDS hosts. The actual resources required will vary based on the applications being used and the number of concurrent users, so it is important to monitor server utilization and adjust the CPU and RAM specifications if required. The requirements are as follows:

- One vCPU for every 15 concurrent RDS sessions
- A base RAM amount equal to 2 GB per vCPU, plus 64 MB of additional RAM for each concurrent RDS session
- Additional RAM equal to the application requirements, multiplied by the estimated number of concurrent users of the application
- Sufficient hard drive space to store RDS user profiles, which will vary based on the configuration of the Windows RDS host. Windows RDS supports multiple options to control user profile configuration and growth, including an RD user home directory, RD roaming user profiles, and mandatory profiles.
For information about these and other options, consult the Microsoft TechNet article, Manage User Profiles for Remote Desktop Services, at http://technet.microsoft.com/en-us/library/cc742820.aspx. This space is only required if you intend to store user profiles locally on the RDS hosts.

Horizon View Persona Management is not supported and will not work with Windows RDS hosts. Consider native Microsoft features such as those described previously in this recipe, or third-party tools such as AppSense Environment Manager (http://www.appsense.com/products/desktop/desktopnow/environment-manager).

Based on these values, a Windows Server 2012 R2 RDS host running Microsoft Office 2010 that will support 100 concurrent users will require the following resources:

- Seven vCPUs, to support up to 105 concurrent RDS sessions
- 45.25 GB of RAM, based on the following calculations:
  - 20.25 GB of base RAM (2 GB for each vCPU, plus 64 MB for each of the 100 users)
  - A total of 25 GB of additional RAM to support Microsoft Office 2010 (Office 2010 recommends 256 MB of RAM for each user)

While the vCPU and RAM requirements might seem excessive at first, remember that to deploy a virtual desktop for each of these 100 users, we would need at least 100 vCPUs and 100 GB of RAM, which is much more than what our Windows RDS host requires.

By default, Horizon View allows only 150 unique RDS user sessions for each available Windows RDS host; so, we need to deploy multiple RDS hosts if users need to stream two applications at once or if we anticipate having more than 150 connections. It is possible to change the number of supported sessions, but it is not recommended due to potential performance issues.

Importing the Horizon View RDS AD group policy templates

Some of the settings configured throughout this article are applied using AD group policy templates.
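Before moving on to the templates, the sizing arithmetic above can be sketched as a small helper function. This is illustration only: the per-session ratios are the starting-point guidance quoted earlier, and real workloads should always be measured.

```python
import math

def rds_host_sizing(concurrent_sessions, app_mb_per_user):
    """Starting-point sizing for a Windows RDS host, per the guidance above."""
    # One vCPU for every 15 concurrent RDS sessions, rounded up
    vcpus = math.ceil(concurrent_sessions / 15)
    # Base RAM: 2 GB per vCPU, plus 64 MB per concurrent session
    base_gb = 2 * vcpus + (64 * concurrent_sessions) / 1024
    # Application RAM: per-user requirement times concurrent users
    app_gb = (app_mb_per_user * concurrent_sessions) / 1024
    return vcpus, base_gb + app_gb

# 100 Office 2010 users at ~256 MB each reproduces the figures above:
print(rds_host_sizing(100, 256))  # (7, 45.25)
```

Treat the result as an initial allocation and adjust it after monitoring actual utilization, as the recipe recommends.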
Prior to using the RDS feature, these templates should be distributed either to the RDS hosts, in order to be used with the Windows local group policy editor, or to an AD domain controller, where they can be applied using the domain. Complete the following steps to install the View RDS group policy templates:

When referring to VMware Horizon View installation packages, y.y.y refers to the version number and xxxxxx refers to the build number. When you download the packages, the actual version and build numbers will be in a numeric format. For example, the filename of the current Horizon View 6 GPO bundle is VMware-Horizon-View-Extras-Bundle-3.1.0-2085634.zip.

Obtain the VMware-Horizon-View-GPO-Bundle-x.x.x-yyyyyyy.zip file, unzip it, and copy the en-US folder, the vmware_rdsh.admx file, and the vmware_rdsh_server.admx file to the C:\Windows\PolicyDefinitions folder on either an AD domain controller or your target RDS host, based on how you wish to manage the policies. Make note of the following points while doing so:

- If you want to set the policies locally on each RDS host, you will need to copy the files to each server
- If you wish to set the policies using domain-based AD group policies, you will need to copy the files to the domain controllers, the group policy Central Store (http://support.microsoft.com/kb/929841), or to the workstation from which you manage these domain-based group policies

How to do it…

The following steps outline the procedure to enable RDS on a Windows Server 2012 R2 host. The host used in this recipe has already been connected to the domain and is logged in with an AD account that has administrative permissions on the server. Perform the following steps:

1. Open the Windows Server Manager utility and go to Manage | Add Roles and Features to open the Add Roles and Features Wizard.
2. On the Before you Begin page, click on Next.
3. On the Installation Type page, shown in the following screenshot, select Remote Desktop Services installation and click on Next.
4. On the Deployment Type page, select Quick Start and click on Next. You can also implement the required roles using the standard deployment method outlined in the Deploy the Session Virtualization Standard deployment section of the Microsoft TechNet article, Test Lab Guide: Remote Desktop Services Session Virtualization Standard Deployment (http://technet.microsoft.com/en-us/library/hh831610.aspx). If you use this method, you will complete the component installation and proceed to step 9 in this recipe.
5. On the Deployment Scenario page, select Session-based desktop deployment and click on Next.
6. On the Server Selection page, shown in the following screenshot, select a server from the list under Server Pool, click the red, highlighted button to add the server to the list of selected servers, and click on Next.
7. On the Confirmation page, check the box marked Restart the destination server automatically if required and click on Deploy.
8. On the Completion page, monitor the installation process and click on Close when finished in order to complete the installation. If a reboot is required, the server will reboot without the need to click on Close. Once the reboot completes, proceed with the remaining steps.
9. Set the RDS licensing server using the Set-RDLicenseConfiguration Windows PowerShell command. In this example, we are configuring the local RDS host to point to redundant license servers (RDS-LIC1 and RDS-LIC2) and setting the license mode to PerUser. This command must be executed on the target RDS host. After entering the command, confirm the values for the license mode and license server name by answering Y when prompted. Refer to the following code:

Set-RDLicenseConfiguration -LicenseServer @("RDS-LIC1.vjason.local","RDS-LIC2.vjason.local") -Mode PerUser

This setting might also be set using group policies applied either to the local computer or using Active Directory (AD).
The policies are shown in the following screenshot, and you can locate them by going to Computer Configuration | Policies | Administrative Templates | Windows Components | Remote Desktop Services | Remote Desktop Session Host | Licensing when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path.

10. Use local computer or AD group policies to limit users to one session per RDS host using the Restrict Remote Desktop Services users to a single Remote Desktop Services session policy. The policy is shown in the following screenshot, and you can locate it by navigating to Computer Configuration | Policies | Administrative Templates | Windows Components | Remote Desktop Services | Remote Desktop Session Host | Connections.

11. Use local computer or AD group policies to enable time zone redirection. You can locate the policy by navigating to Computer Configuration | Policies | Administrative Templates | Windows Components | Horizon View RDSH Services | Remote Desktop Session Host | Device and Resource Redirection when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path. To enable the setting, set Allow time zone redirection to Enabled.

12. Use local computer or AD group policies to enable the Windows Basic aero-styled theme. You can locate the policy by going to User Configuration | Policies | Administrative Templates | Control Panel | Personalization when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path. To configure the theme, set Force a specific visual style file or force Windows Classic to Enabled and set Path to Visual Style to %windir%\resources\Themes\Aero\aero.msstyles.

13. Use local computer or AD group policies to start runonce.exe when the RDS session starts. You can locate the policy by going to User Configuration | Policies | Windows Settings | Scripts (Logon/Logoff) when using AD-based policies.
If you are using local group policies, there will be no Policies folder in the path. To configure the logon settings, double-click on Logon, click on Add, enter runonce.exe in the Script Name box, and enter /AlternateShellStartup in the Script Parameters box.

14. On the Windows RDS host, double-click on the 64-bit Horizon View Agent installer to begin the installation process. The installer should have a name similar to VMware-viewagent-x86_64-y.y.y-xxxxxx.exe.
15. On the Welcome to the Installation Wizard for VMware Horizon View Agent page, click on Next.
16. On the License Agreement page, select the I accept the terms in the license agreement radio button and click on Next.
17. On the Custom Setup page, either leave all the options set to their defaults or, if you are not using vCenter Operations Manager, deselect this optional component of the agent, and click on Next.
18. On the Register with Horizon View Connection Server page, shown in the following screenshot, enter the hostname or IP address of one of the Connection Servers in the pod where the RDS host will be used. If the user performing the installation of the agent software is an administrator in the Horizon View environment, leave the Authentication setting set to default; otherwise, select the Specify administrator credentials radio button and provide the username and password of an account that has administrative rights in Horizon View. Click on Next to continue.
19. On the Ready to Install the Program page, click on Install to begin the installation. When the installation completes, reboot the server if prompted.

The Windows RDS service is now enabled, configured with the optimal settings for use with VMware Horizon View, and has the necessary agent software installed. This process should be repeated on additional RDS hosts, as needed, to support the target number of concurrent RDS sessions.
How it works…

The following resources provide detailed information about the configuration options used in this recipe:

- Microsoft TechNet's Set-RDLicenseConfiguration article at http://technet.microsoft.com/en-us/library/jj215465.aspx provides the complete syntax of the PowerShell command used to configure the RDS licensing settings.
- Microsoft TechNet's Remote Desktop Services Client Access Licenses (RDS CALs) article at http://technet.microsoft.com/en-us/library/cc753650.aspx explains the different RDS license types, which reveals that an RDS per-user Client Access License (CAL) allows our Horizon View clients to access the RDS servers from an unlimited number of endpoints while still consuming only one RDS license.
- The Microsoft TechNet article, Remote Desktop Session Host, Licensing (http://technet.microsoft.com/en-us/library/ee791926(v=ws.10).aspx), provides additional information on the group policies used to configure the RDS licensing options.
- The VMware document Setting up Desktop and Application Pools in View (https://pubs.vmware.com/horizon-view-60/index.jsp?topic=%2Fcom.vmware.horizon-view.desktops.doc%2FGUID-931FF6F3-44C1-4102-94FE-3C9BFFF8E38D.html) explains that the Windows Basic aero-styled theme is the only theme supported by Horizon View, and demonstrates how to implement it.
- The VMware document Setting up Desktop and Application Pools in View (https://pubs.vmware.com/horizon-view-60/topic/com.vmware.horizon-view.desktops.doc/GUID-443F9F6D-C9CB-4CD9-A783-7CC5243FBD51.html) explains why time zone redirection is required, as it ensures that the Horizon View RDS client session will use the same time zone as the client device.
- The VMware document Setting up Desktop and Application Pools in View (https://pubs.vmware.com/horizon-view-60/topic/com.vmware.horizon-view.desktops.doc/GUID-85E4EE7A-9371-483E-A0C8-515CF11EE51D.html) explains why we need to add the runonce.exe /AlternateShellStartup command to the RDS logon script.
This ensures that applications which require Windows Explorer will work properly when streamed using Horizon View.

Creating an RDS farm in Horizon View

This recipe will discuss the steps that are required to create an RDS farm in our Horizon View pod. An RDS farm is a collection of Windows RDS hosts and serves as the point of integration between the View Connection Server and the individual applications installed on each RDS server. Additionally, key settings concerning client session handling and client connection protocols are set at the RDS farm level within Horizon View.

Getting ready

To create an RDS farm in Horizon View, we need to have at least one RDS host registered with our View pod. Assuming that the Horizon View Agent installation completed successfully in the previous recipe, we should see the RDS hosts registered in the Registered Machines menu under View Configuration of our View Manager Admin console. The tasks required to create the RDS farm are performed using the Horizon View Manager Admin console.

How to do it…

The following steps outline the procedure used to create an RDS farm. In this example, we have already created and registered two Windows RDS hosts named WINRDS01 and WINRDS02. Perform the following steps:

1. Navigate to Resources | Farms and click on Add, as shown in the following screenshot.
2. On the Identification and Settings page, shown in the following screenshot, provide a farm ID, a description if desired, make any desired changes to the default settings, and then click on Next. The settings can be changed at any time if needed.
3. On the Select RDS Hosts page, shown in the following screenshot, click on the RDS hosts to be added to the farm and then click on Next.
4. On the Ready to Complete page, review the configuration and click on Finish.

The RDS farm has been created, which allows us to create application pools.
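As a mental model of the farm-level options detailed in the next section, the defaults can be sketched as a configuration map. This is a hypothetical representation for reasoning about behavior, not an actual Horizon View API.

```python
# Hypothetical sketch of the farm-level defaults (not a real Horizon View API);
# the keys mirror the settings shown on the Identification and Settings page.
farm_settings = {
    "default_display_protocol": "PCoIP",        # PCoIP (default) or RDP
    "allow_users_to_choose_protocol": True,     # set to False to enforce the default
    "empty_session_timeout_minutes": 1,         # applications only
    "when_timeout_occurs": "Disconnect",        # "Disconnect" (default) or "Log off"
    "log_off_disconnected_sessions": "Never",   # "Never" (default), "Immediate", or "After"
}

def empty_session_action(idle_minutes, settings=farm_settings):
    """What the farm does once a client has closed all RDS applications."""
    if idle_minutes < settings["empty_session_timeout_minutes"]:
        return "session kept"
    return settings["when_timeout_occurs"]

print(empty_session_action(0))  # session kept
print(empty_session_action(5))  # Disconnect
```

With the defaults above, an idle application session is disconnected (not logged off) one minute after the last application is closed.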
How it works… The following RDS farm settings can be changed at any time and are described in the following points:

Default display protocol: PCoIP (default) and RDP are available.

Allow users to choose protocol: By default, Horizon View Clients can select their preferred protocol; we can change this setting to No in order to enforce the farm default.

Empty session timeout (applications only): This denotes the amount of time that must pass after a client closes all RDS applications before the RDS farm takes the action specified in the When timeout occurs setting. The default setting is 1 minute.

When timeout occurs: This determines which action is taken by the RDS farm when the session's timeout deadline passes; the options are Log off and Disconnect (default).

Log off disconnected sessions: This determines what happens when a View RDS session is disconnected; the options are Never (default), Immediate, or After. If After is selected, a time in minutes must be provided.

Summary

We have learned about configuring the Windows RDS server for use in Horizon View and about creating an RDS farm in Horizon View.

Resources for Article: Further resources on this subject: Backups in the VMware View Infrastructure [Article] An Introduction to VMware Horizon Mirage [Article] Designing and Building a Horizon View 6.0 Infrastructure [Article]
Packt
30 Oct 2014
8 min read

Hosting the service in IIS using the TCP protocol

In this article by Mike Liu, the author of WCF Multi-layer Services Development with Entity Framework, Fourth Edition, we will learn how to create and host a service in IIS using the TCP protocol. (For more resources related to this topic, see here.)

Hosting WCF services in IIS using the HTTP protocol gives the best interoperability to the service, because the HTTP protocol is supported everywhere today. However, sometimes interoperability might not be an issue. For example, the service may be invoked only within your network with Microsoft clients only. In this case, hosting the service by using the TCP protocol might be a better solution.

Benefits of hosting a WCF service using the TCP protocol

Compared to HTTP, there are a few benefits in hosting a WCF service using the TCP protocol:

It supports connection-based, stream-oriented delivery services with end-to-end error detection and correction
It is the fastest WCF binding for scenarios that involve communication between different machines
It supports duplex communication, so it can be used to implement duplex contracts
It has a reliable data delivery capability (this is applied between two TCP/IP nodes and is not the same thing as WS-ReliableMessaging, which applies between endpoints)

Preparing the folders and files

First, we need to prepare the folders and files for the host application, just as we did for hosting the service using the HTTP protocol. We will use the previous HTTP hosting application as the base to create the new TCP hosting application:

Create the folders: In Windows Explorer, create a new folder called HostIISTcp under C:\SOAwithWCFandEF\Projects\HelloWorld and a new subfolder called bin under the HostIISTcp folder. You should now have the following new folders: C:\SOAwithWCFandEF\Projects\HelloWorld\HostIISTcp and a bin folder inside the HostIISTcp folder.
Copy the files: Now, copy all the files from the HostIIS hosting application folder at C:\SOAwithWCFandEF\Projects\HelloWorld\HostIIS to the new folder that we created at C:\SOAwithWCFandEF\Projects\HelloWorld\HostIISTcp.

Create the Visual Studio solution folder: To make it easier to view and manage from the Visual Studio Solution Explorer, you can add a new solution folder, HostIISTcp, to the solution and add the Web.config file to this folder. Add another new solution folder, bin, under HostIISTcp and add the HelloWorldService.dll and HelloWorldService.pdb files under this bin folder. Add the following post-build events to the HelloWorldService project, so that next time all the files will be copied automatically when the service project is built:

xcopy "$(AssemblyName).dll" "C:\SOAwithWCFandEF\Projects\HelloWorld\HostIISTcp\bin" /Y
xcopy "$(AssemblyName).pdb" "C:\SOAwithWCFandEF\Projects\HelloWorld\HostIISTcp\bin" /Y

Modify the Web.config file: The Web.config file that we have copied from HostIIS uses the default basicHttpBinding as the service binding. To make our service use the TCP binding, we need to change the binding to TCP and add a TCP base address. Open the Web.config file and add the following node to it under the <system.serviceModel> node:

<services>
  <service name="HelloWorldService.HelloWorldService">
    <endpoint address="" binding="netTcpBinding"
      contract="HelloWorldService.IHelloWorldService"/>
    <host>
      <baseAddresses>
        <add baseAddress="net.tcp://localhost/HelloWorldServiceTcp/"/>
      </baseAddresses>
    </host>
  </service>
</services>

In this new services node, we have defined one service called HelloWorldService.HelloWorldService. The base address of this service is net.tcp://localhost/HelloWorldServiceTcp/.
Remember, we have defined the host activation relative address as ./HelloWorldService.svc, so we can invoke this service from the client application with the following URL: http://localhost/HelloWorldServiceTcp/HelloWorldService.svc.

For the file-less WCF activation, if no endpoint is defined explicitly, HTTP and HTTPS endpoints will be defined by default. In this example, we would like to expose only one TCP endpoint, so we have added an endpoint explicitly (as soon as this endpoint is added explicitly, the default endpoints will not be added). If you don't add this TCP endpoint explicitly here, the TCP client that we will create in the next section will still work, but in the client config file you will see three endpoints instead of one, and you will have to specify which endpoint you are using in the client program.

The following is the full content of the Web.config file:

<?xml version="1.0"?>
<!--
  For more information on how to configure your ASP.NET application, please visit
  http://go.microsoft.com/fwlink/?LinkId=169433
-->
<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.5"/>
    <httpRuntime targetFramework="4.5" />
  </system.web>
  <system.serviceModel>
    <serviceHostingEnvironment>
      <serviceActivations>
        <add factory="System.ServiceModel.Activation.ServiceHostFactory"
          relativeAddress="./HelloWorldService.svc"
          service="HelloWorldService.HelloWorldService"/>
      </serviceActivations>
    </serviceHostingEnvironment>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <serviceMetadata httpGetEnabled="true"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>
    <services>
      <service name="HelloWorldService.HelloWorldService">
        <endpoint address="" binding="netTcpBinding"
          contract="HelloWorldService.IHelloWorldService"/>
        <host>
          <baseAddresses>
            <add baseAddress="net.tcp://localhost/HelloWorldServiceTcp/"/>
          </baseAddresses>
        </host>
      </service>
    </services>
  </system.serviceModel>
</configuration>

Enabling the TCP WCF activation for the host machine

By default, the TCP WCF activation service is not enabled on your machine. This means your IIS server won't be able to host a WCF service with the TCP protocol. You can follow these steps to enable the TCP activation for WCF services:

1. Go to Control Panel | Programs | Turn Windows features on or off.
2. Expand the Microsoft .NET Framework 3.5.1 node on Windows 7 or .NET Framework 4.5 Advanced Services on Windows 8.
3. Check the checkbox for Windows Communication Foundation Non-HTTP Activation on Windows 7 or TCP Activation on Windows 8.

The following screenshot depicts the options required to enable WCF activation on Windows 7:

The following screenshot depicts the options required to enable TCP WCF activation on Windows 8:

4. Repair the .NET Framework: After you have turned on the TCP WCF activation, you have to repair .NET. Just go to Control Panel, click on Uninstall a Program, select Microsoft .NET Framework 4.5.1, and then click on Repair.

Creating the IIS application

Next, we need to create an IIS application named HelloWorldServiceTcp to host the WCF service using the TCP protocol. Follow these steps to create this application in IIS:

1. Open IIS Manager.
2. Add a new IIS application, HelloWorldServiceTcp, pointing to the HostIISTcp physical folder under your project's folder.
3. Choose DefaultAppPool as the application pool for the new application. Again, make sure your default app pool is a .NET 4.0.30319 application pool.
4. Enable the TCP protocol for the application. Right-click on HelloWorldServiceTcp, select Manage Application | Advanced Settings, and then add net.tcp to Enabled Protocols. Make sure you use all lowercase letters and separate it from the existing HTTP protocol with a comma.

Now the service is hosted in IIS using the TCP protocol.
To view the WSDL of the service, browse to http://localhost/HelloWorldServiceTcp/HelloWorldService.svc and you should see the service description and a link to the WSDL of the service.

Testing the WCF service hosted in IIS using the TCP protocol

Now that we have the service hosted in IIS using the TCP protocol, let's create a new test client to test it:

1. Add a new console application project to the solution, named HelloWorldClientTcp.
2. Add a reference to System.ServiceModel in the new project.
3. Add a service reference to the WCF service in the new project, naming the reference HelloWorldServiceRef and using the URL http://localhost/HelloWorldServiceTcp/HelloWorldService.svc?wsdl. You can still use the SvcUtil.exe command-line tool to generate the proxy and config files for the service hosted with TCP, just as we did in previous sections. Actually, behind the scenes Visual Studio is also calling SvcUtil.exe to generate the proxy and config files.
4. Add the following code to the Main method of the new project:

var client = new HelloWorldServiceRef.HelloWorldServiceClient();
Console.WriteLine(client.GetMessage("Mike Liu"));

5. Finally, set the new project as the startup project.

Now, if you run the program, you will get the same result as before; however, this time the service is hosted in IIS using the TCP protocol.

Summary

In this article, we created and tested an IIS application to host the service with the TCP protocol.

Resources for Article: Further resources on this subject: Microsoft WCF Hosting and Configuration [Article] Testing and Debugging Windows Workflow Foundation 4.0 (WF) Program [Article] Applying LINQ to Entities to a WCF Service [Article]
Michael Ang
30 Oct 2014
9 min read

Polygon Construction - Make a 3D printed kite

3D printers are incredible machines, but let's face it, it takes them a long time to work their magic! Printing small objects takes a relatively short amount of time, but larger objects can take hours and hours to print. Is there a way we can get the speed of printing small objects, while still making something big—even bigger than our printer can make in one piece? This tutorial shows a technique I'm calling "polygon construction", where you 3D print connectors and attach them with rods to make a larger structure. This technique is the basis for my Polygon Construction Kit (Polycon).

A Polycon object, with 3D printer for scale

I'm going to start simple, showing how even one connector can form the basis of a rather delightful object—a simple flying kite! The kite we're making today is a version of the Eddy diamond kite, originally invented in the 1890s by William Eddy. This classic diamond kite is easy to make, and flies well. We'll design and print the central connector in the kite, and use wooden rods to extend the shape. The total size of the kite is 50 centimeters (about 20 inches) tall and wide, which is bigger than most print beds. We'll design the connector so that it's parametric—we'll be able to change the important sizes of the connector just by changing a few numbers. Because the connector is a small object to print out, and the design is parametric, we'll be able to iterate quickly if we want to make changes.

A finished kite

The connector we need is a cross that holds two of the "arms" up at an angle. This angle is called the dihedral angle, and is one of the secrets to why the Eddy kite flies well. We'll use the OpenSCAD modeling program to create the connector. OpenSCAD allows you to create solid objects for printing by combining simple shapes such as cylinders and boxes using a basic programming language. It's open source and multi-platform. You can download OpenSCAD from http://openscad.org. Open OpenSCAD. You should have a blank document.
Let's set up a few variables to represent the important dimensions in our connector, and make the first part of our connector, which is a cylinder that will be one of the four "arms" of the cross. Go to Design->Compile to see the result.

rod_diameter = 4; // in millimeters
wall_thickness = 2;
tube_length = 20;
angle = 15; // degrees

cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);

First part of the connector

Now let's add the same shape, but translated down. You can see the axis indicator in the lower-left corner of the output window. The blue axis pointing up is the Z-axis, so you want to move down (negative) in the Z-axis. Add this line to your file and recompile (Design->Compile).

translate([0,0,-tube_length]) cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);

Second part of the main tube

Now that we have the long straight part of our connector, let's add the angled arms. We want the angled part of the connector to be 90 degrees from the straight part, and then rotated by our dihedral angle (in this case 15 degrees). In OpenSCAD, rotations can be specified as a rotation around the X, Y, and then Z axes. If you look at the axis indicator, you can see that a rotation around the Y-axis of 90 degrees, followed by a rotation around the Z-axis of 15 degrees, will get us to the right place. Here's the code:

rotate([0,90,angle]) cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);

First angled part

Let's do the same thing, but for the other side. Instead of rotating by 15 degrees, we'll rotate by 180 degrees and then subtract out the 15 degrees to put the new cylinder on the opposite side.

rotate([0,90,180-angle]) cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);

Opposite angled part

Awesome, we have the shape of our connector! There's only one problem: how do we make the holes for the rods to go in?
To do this we'll make the same shape, but a little smaller, and then subtract it out of the shape we already made. OpenSCAD supports Boolean operations on shapes, and in this case the Boolean operation we want is difference. To make the Boolean operation easier, we'll group the different parts of the shape together by putting them into modules. Once we have the parts we want together in a module, we can make a difference of the two modules. Here's the complete new version:

rod_diameter = 4; // in millimeters
wall_thickness = 2;
tube_length = 20;
angle = 15; // degrees

// Connector as a solid object (no holes)
module solid() {
    cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);
    translate([0,0,-tube_length])
        cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);
    rotate([0,90,angle])
        cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);
    rotate([0,90,180-angle])
        cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);
}

// Object representing the space for the rods.
module hole_cutout() {
    cut_overlap = 0.2; // Extra length to make a clean cut out of the main shape
    cylinder(r = rod_diameter, h = tube_length + cut_overlap);
    translate([0,0,-tube_length-cut_overlap])
        cylinder(r = rod_diameter, h = tube_length + cut_overlap);
    rotate([0,90,angle])
        cylinder(r = rod_diameter, h = tube_length + cut_overlap);
    rotate([0,90,180-angle])
        cylinder(r = rod_diameter, h = tube_length + cut_overlap);
}

difference() {
    solid();
    hole_cutout();
}

Completed connector

We've finished modeling our kite connector! But what if our rod isn't 4mm in diameter? What if it's 1/8"? Since we've written a program to describe our kite connector, making the change is easy. We can change the parameters at the beginning of the file to change the shape of the connector. There are 25.4 millimeters in an inch, and OpenSCAD can do the math for us to convert from inches to millimeters.
Let's change the rod diameter to 1/8" and also change the dihedral angle, so there's more of a visible change. Change the parameters at the top of the file and recompile (Design->Compile).

rod_diameter = 1/8 * 25.4; // inches to millimeters
wall_thickness = 2;
tube_length = 20;
angle = 20; // degrees

Different angle and rod diameter

Now you start to see the power of using a parametric model—making a new connector can be as simple as changing a few numbers and recompiling the design. To get the model ready for printing, change the rod diameter to the size of the rod you actually have, and change the angle back to 15 degrees. Now go to Design->Compile and Render so that the connector is fully rendered inside OpenSCAD. Go to File->Export->Export as STL and save the file. Open the .stl file in the software for your 3D printer. I have a Prusa i3 Berlin RepRap printer and use Cura as my printer software, but almost any printer/software combination should work. You may want to rotate the part so that it doesn't need as much support under the overhanging arms, but be aware that the orientation of the layers will affect the final strength (if the tube breaks, it's almost always from the layers splitting apart). It's worth experimenting with changing the orientation of the part to increase its strength. Orienting the layers slightly at an angle to the length of the tube seems to give the best strength.

Cura default part orientation

Part reoriented for printing, showing layers

Print your connector, and see if it fits on your rods. You may need to adjust the size a little to get a tight fit. Since the model is parametric, adjusting the size of the connector should just take a few minutes! To get the most strength in your print you can make multiple prints of the connector with different settings (temperature, wall thickness, and so on) and see how much force it takes to break them. This is a good technique in general for getting strong prints.
A printed connector

Kite dimensions

Now that we have the connector printed, we need to finish off the rest of the kite. You can see full instructions on making an Eddy kite, but here's the short version. I've built this kite by taking a 1m long 4mm diameter wooden rod from a kite store and cutting it into one 50cm piece and two 25cm pieces. The center connector goes 10cm from the top of the long rod. For the "sail", paper does fine (cut to fit the frame, making sure that the sail is symmetrical), and you can just tape the paper to the rods. Tie a piece of string about 80cm long between the center connector and a point 4cm from the tail to make a bridle. To find the right place to tie on the long flying line, take the kite out on a breezy day and hold it by the bridle, moving your hand up and down until you find a spot where the kite doesn't try to fly up too much, or fall back down. That's the spot to tie on your long flying line. If the kite is unstable while flying, you can add a long tail to the kite, but I haven't found it to be necessary (though it adds to the classic look).

Assembled kite

Back side of kite, showing printed connector

Being able to print your own kite parts makes it easy to experiment. If you want to try a different dihedral angle, just print a new center connector. It's quite a sight to see your kite flying high up in the sky, held together by a part you printed yourself. You can download a slightly different version of this code that includes additional bracing between the angled arms at http://www.thingiverse.com/thing:415345. For an idea of what's possible using the "polygon construction" technique, have a look at my Polygon Construction Kit for some examples of larger structures with multiple connectors. Happy flying!

Sky high

About the Author

Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world.
His latest project is the Polygon Construction Kit, a toolkit for bridging the virtual and physical worlds by translating simple 3D models into physical structures.
Packt
30 Oct 2014
7 min read

The default lifecycle

This article is written by Lorenzo Anardu, Roberto Baldi, Umberto Antonio Cicero, Riccardo Giomi, and Giacomo Veneri, the authors of the book Maven Build Customization. A lifecycle is a sequence of phases. In each phase, depending on the POM configuration, one or more tasks are executed. These tasks are called goals. Despite the enormous variety of work that can be accomplished by Maven, there are only three built-in Maven lifecycles: default, clean, and site. (For more resources related to this topic, see here.)

In this article, we will discuss the default lifecycle. The default lifecycle is responsible for the build process, so it's the most interesting. Among its phases, the most important ones are described in the following table:

process-resources - Filter the resource files and copy them in the output directory
compile - Compile the source code
process-test-resources - Filter the test resource files and copy them in the test output directory
test-compile - Compile the test source code
test - Run the unit tests
package - Produce the packaged artifact (JAR, WAR, EAR)
install - Install the package in the local repository so that other projects can use it as a dependency
deploy - Install the package in a remote repository

We'll speak later about local and remote Maven repositories. When we invoke one phase from the command line, Maven executes all the phases of the lifecycle from the beginning up to the specified phase (included). In fact, one of the most common ways to run Maven is just to use the following syntax:

$ mvn <phase>

It will run all the portions of the respective lifecycle, ending with this phase. Let's consider an example.
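The phase-ordering rule (invoking a phase runs every earlier phase of the default lifecycle, up to and including it) can be sketched in a few lines. This is an illustrative model only, not how Maven itself is implemented, and it keeps only the phases from the preceding table:

```javascript
// Toy model of the default lifecycle's ordering rule: requesting a phase
// selects every phase from the start of the lifecycle through that phase.
const defaultLifecycle = [
  "process-resources", "compile", "process-test-resources",
  "test-compile", "test", "package", "install", "deploy"
];

function phasesToRun(requestedPhase) {
  const index = defaultLifecycle.indexOf(requestedPhase);
  if (index === -1) {
    throw new Error("Unknown phase: " + requestedPhase);
  }
  return defaultLifecycle.slice(0, index + 1);
}

// "mvn install" runs everything up to install, but not deploy.
console.log(phasesToRun("install"));
```

Note that the real default lifecycle contains more phases (validate, generate-sources, verify, and so on) interleaved between these; the rule is the same.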
Suppose that the POM file of our transportation-acq-ejb module is the following, and it is located in the /transportation-project/transportation-acq-ejb directory:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.packt.examples</groupId>
    <artifactId>transportation-project</artifactId>
    <version>0.0.1-SNAPSHOT</version>
  </parent>
  <artifactId>transportation-acq-ejb</artifactId>
  <packaging>jar</packaging>
  <name>transportation-acq-ejb</name>
  <dependencies>
    <dependency>
      <groupId>javax</groupId>
      <artifactId>javaee-api</artifactId>
      <version>6.0</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
</project>

As we can see in the preceding code, the transportation-acq-ejb module's parent is the transportation-project parent project. We can add some sample Java classes and interfaces to the transportation-acq-ejb project. First, we add an EJB local interface, MyEjb.java:

package com.packt.samples;

import javax.ejb.Local;

@Local
public interface MyEjb
{
    public int myMethod();
}

Then, we add a dummy implementation, MyEjbImpl.java:

package com.packt.samples;

import javax.ejb.Stateless;

@Stateless
public class MyEjbImpl implements MyEjb
{
    @Override
    public int myMethod()
    {
        return 0;
    }
}

Finally, we add a unit test class, SampleTest.java:

package com.packt.samples;

import static org.junit.Assert.*;
import org.junit.Test;

public class SampleTest
{
    @Test
    public void test()
    {
        assertTrue(true);
    }
}

The directory structure of the transportation-acq-ejb module is as shown in the following screenshot: In a Maven project, we have to put the project sources under /src/main/java and the test sources under /src/test/java. These default conventional values can be overridden, but this is not recommended; remember the convention over configuration paradigm!
Execute the following command:

$ mvn install

We'll see the following output after executing the preceding command:

[INFO] Scanning for projects...
[...]
[INFO] -----------------------------------------------------------
[INFO] Building transportation-acq-ejb 0.0.1-SNAPSHOT
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ transportation-acq-ejb ---
[...]
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ transportation-acq-ejb ---
[INFO] Compiling 2 source files to ~/transportation-project/transportation-acq-ejb/target/classes
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ transportation-acq-ejb ---
[...]
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ transportation-acq-ejb ---
[INFO] Compiling 1 source file to ~/transportation-project/transportation-acq-ejb/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ transportation-acq-ejb ---
[INFO] Surefire report directory: ~/transportation-project/transportation-acq-ejb/target/surefire-reports

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.packt.samples.SampleTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.032 sec - in com.packt.samples.SampleTest

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ transportation-acq-ejb ---
[INFO] Building jar: ~/transportation-project/transportation-acq-ejb/target/transportation-acq-ejb-0.0.1-SNAPSHOT.jar
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ transportation-acq-ejb ---
[INFO] Installing ~/transportation-project/transportation-acq-ejb-0.0.1-SNAPSHOT.jar to ~/.m2/repository/com/packt/examples/transportation-acq-ejb/0.0.1-SNAPSHOT/transportation-acq-ejb-0.0.1-SNAPSHOT.jar
[INFO] Installing ~/transportation-project/transportation-acq-ejb/pom.xml to
~/.m2/repository/com/packt/examples/transportation-acq-ejb/0.0.1-SNAPSHOT/transportation-acq-ejb-0.0.1-SNAPSHOT.pom
[INFO] -----------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] -----------------------------------------------------------

If we run Maven for the first time, in addition to the preceding output, we'll see a lot of other output lines saying that project plugins and dependencies are being downloaded from the Maven central repository. So, as we can see, the sequence of operations performed by Maven follows the steps specified in the default lifecycle. You will also notice that each action performed by Maven is delegated to a certain plugin. In order to compile the project, Maven will download and use the specified dependencies (in this case, the Java EE API is needed to compile the EJB classes). Plugins and dependencies are downloaded on demand, and they are saved in the local repository, which is located under the local user home in the /.m2/repository subdirectory by default. For Linux users, the local repository is located under ~/.m2/repository, where ~ means the user home directory that usually has the /home/<username> path. For Windows users, the local repository is (usually) located under C:\Users\<username>\.m2\repository. Once Maven downloads an artifact or a plugin, it will reuse its stored copy and never again search for the same version of this artifact or plugin in the Maven central repository or in other remote repositories that can be specified in our POM file. The only exception to this rule concerns snapshot versions: if the version of a dependency or plugin is marked with the -SNAPSHOT suffix, that version is currently under development. For this reason, Maven will periodically attempt to download this artifact from all the remote repositories that have snapshots enabled in their configurations.
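The layout of the local repository follows directly from the artifact's coordinates: the dots in the groupId become directory separators, followed by the artifactId, the version, and finally the file name. The mapping can be sketched as follows (an illustration of the convention, not Maven code):

```javascript
// Maps Maven coordinates to the conventional path inside a repository,
// e.g. under ~/.m2/repository.
function localRepositoryPath(groupId, artifactId, version, extension) {
  return [
    groupId.replace(/\./g, "/"),   // com.packt.examples -> com/packt/examples
    artifactId,
    version,
    artifactId + "-" + version + "." + extension
  ].join("/");
}

console.log(localRepositoryPath(
  "com.packt.examples", "transportation-acq-ejb", "0.0.1-SNAPSHOT", "jar"));
// com/packt/examples/transportation-acq-ejb/0.0.1-SNAPSHOT/transportation-acq-ejb-0.0.1-SNAPSHOT.jar
```

This is exactly the path that appears in the Installing lines of the mvn install output shown earlier.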
If we look in the /target directory, we'll see all the work done by Maven; in this case, the compiled classes, unit test reports, and the packaged artifact transportation-acq-ejb-0.0.1-SNAPSHOT.jar:

Build output of the transportation-acq-ejb module

Note that if instead of running the previous command we run the mvn package command, the lifecycle execution will stop at the package phase and the artifact will not be installed in the local repository. This can be a problem if the artifact is needed by other projects as a dependency.

Summary

In this article, we saw that every Maven build process relies on a skeleton called the build lifecycle, and we discussed the default lifecycle.

Resources for Article: Further resources on this subject: Apache Maven and m2eclipse [article] Dynamic POM [article] Understanding Maven [article]
Packt
30 Oct 2014
10 min read

Theming with Highcharts

Besides the charting capabilities offered by Highcharts, theming is yet another strong feature of Highcharts. With its extensive theming API, charts can be customized completely to match the branding of a website or an app. Almost all of the chart elements are customizable through this API. In this article by Bilal Shahid, author of Highcharts Essentials, we will do the following things: (For more resources related to this topic, see here.)

Use different fill types and fonts
Create a global theme for our charts
Use jQuery easing for animations

Using Google Fonts with Highcharts

Google provides an easy way to include hundreds of high-quality web fonts in web pages. These fonts work in all major browsers and are served by the Google CDN for lightning-fast delivery. These fonts can also be used with Highcharts to further polish the appearance of our charts. This section assumes that you know the basics of using Google Web Fonts. If you are not familiar with them, visit https://developers.google.com/fonts/docs/getting_started.

We will style the following example with Google Fonts. We will use the Merriweather family from Google Fonts and link to its style sheet from our web page inside the <head> tag:

<link href='http://fonts.googleapis.com/css?family=Merriweather:400italic,700italic' rel='stylesheet' type='text/css'>

Having included the style sheet, we can use the font family in our code for the labels in yAxis:

yAxis: [{
    ...
    labels: {
        style: {
            fontFamily: 'Merriweather, sans-serif',
            fontWeight: 400,
            fontStyle: 'italic',
            fontSize: '14px',
            color: '#ffffff'
        }
    }
}, {
    ...
    labels: {
        style: {
            fontFamily: 'Merriweather, sans-serif',
            fontWeight: 700,
            fontStyle: 'italic',
            fontSize: '21px',
            color: '#ffffff'
        },
        ...
    }
}]

For the outer axis, we used a font size of 21px with a font weight of 700.
For the inner axis, we lowered the font size to 14px and used a font weight of 400 to compensate for the smaller font size. The following is the modified speedometer:

In the next section, we will continue with the same example to include jQuery UI easing in chart animations.

Using jQuery UI easing for series animation

Animations occurring at the point of initialization of charts can be disabled or customized. The customization requires modifying two properties: animation.duration and animation.easing. The duration property accepts the number of milliseconds for the duration of the animation. The easing property can have various values depending on the framework currently being used. For the standalone jQuery framework, the values can be either linear or swing. Using the jQuery UI framework adds a couple more options for the easing property to choose from. In order to follow this example, you must include the jQuery UI framework in the page. You can also grab the standalone easing plugin from http://gsgd.co.uk/sandbox/jquery/easing/ and include it inside your <head> tag.

We can now modify the series to have a modified animation:

plotOptions: {
    ...
    series: {
        animation: {
            duration: 1000,
            easing: 'easeOutBounce'
        }
    }
}

The preceding code will modify the animation property for all the series in the chart to have duration set to 1000 milliseconds and easing set to easeOutBounce. Each series can have its own animation by defining the animation property separately for each series as follows:

series: [{
    ...
    animation: {
        duration: 500,
        easing: 'easeOutBounce'
    }
}, {
    ...
    animation: {
        duration: 1500,
        easing: 'easeOutBounce'
    }
}, {
    ...
    animation: {
        duration: 2500,
        easing: 'easeOutBounce'
    }
}]

Different animation properties for different series can pair nicely with column and bar charts to produce visually appealing effects.
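For a sense of what easeOutBounce actually computes, here is the standard piecewise-quadratic formulation of that curve, as found in Robert Penner's easing equations on which the jQuery easing plugin is based. This is a standalone sketch; the plugin's own version takes additional jQuery-specific arguments:

```javascript
// easeOutBounce maps normalized time t (0..1) to progress (0..1) using
// four quadratic arcs, simulating a ball bouncing to rest at 1.
function easeOutBounce(t) {
  const n1 = 7.5625, d1 = 2.75;
  if (t < 1 / d1) {
    return n1 * t * t;
  } else if (t < 2 / d1) {
    return n1 * (t -= 1.5 / d1) * t + 0.75;
  } else if (t < 2.5 / d1) {
    return n1 * (t -= 2.25 / d1) * t + 0.9375;
  } else {
    return n1 * (t -= 2.625 / d1) * t + 0.984375;
  }
}

console.log(easeOutBounce(0));   // 0
console.log(easeOutBounce(0.5)); // partway through the curve, overshooting then settling
```

Plotting this function over 0..1 shows why a column sweep animated with it appears to bounce into place at the end of the duration.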
Creating a global theme for our charts A Highcharts theme is a collection of predefined styles that are applied before a chart is instantiated. A theme will be applied to all the charts on the page after the point of its inclusion, given that the styling options have not been modified within the chart instantiation. This provides us with an easy way to apply custom branding to charts without the need to define styles over and over again. In the following example, we will create a basic global theme for our charts. This way, we will get familiar with the fundamentals of Highcharts theming and some API methods. We will define our theme inside a separate JavaScript file to make the code reusable and keep things clean. Our theme will be contained in an options object that will, in turn, contain styling for different Highcharts components. Consider the following code placed in a file named custom-theme.js. This is a basic implementation of a Highcharts custom theme that includes colors and basic font styles along with some other modifications for axes: Highcharts.customTheme = {      colors: ['#1BA6A6', '#12734F', '#F2E85C', '#F27329', '#D95D30', '#2C3949', '#3E7C9B', '#9578BE'],      chart: {        backgroundColor: {            radialGradient: {cx: 0, cy: 1, r: 1},            stops: [                [0, '#ffffff'],                [1, '#f2f2ff']            ]        },        style: {            fontFamily: 'arial, sans-serif',            color: '#333'        }    },    title: {        style: {            color: '#222',            fontSize: '21px',            fontWeight: 'bold'        }    },    subtitle: {        style: {            fontSize: '16px',            fontWeight: 'bold'        }    },    xAxis: {        lineWidth: 1,        lineColor: '#cccccc',        tickWidth: 1,        tickColor: '#cccccc',        labels: {            style: {                fontSize: '12px'            }        }    },    yAxis: {        gridLineWidth: 1,        gridLineColor: '#d9d9d9',  
      labels: {           style: {                fontSize: '12px'            }        }    },    legend: {        itemStyle: {            color: '#666',            fontSize: '9px'        },        itemHoverStyle:{            color: '#222'        }      } }; Highcharts.setOptions( Highcharts.customTheme ); We start off by modifying the Highcharts object to include an object literal named customTheme that contains styles for our charts. Inside customTheme, the first option we defined is for series colors. We passed an array containing eight colors to be applied to series. In the next part, we defined a radial gradient as a background for our charts and also defined the default font family and text color. The next two object literals contain basic font styles for the title and subtitle components. Then comes the styles for the x and y axes. For the xAxis, we define lineColor and tickColor to be #cccccc with the lineWidth value of 1. The xAxis component also contains the font style for its labels. The y axis gridlines appear parallel to the x axis that we have modified to have the width and color at 1 and #d9d9d9 respectively. Inside the legend component, we defined styles for the normal and mouse hover states. These two states are stated by itemStyle and itemHoverStyle respectively. In normal state, the legend will have a color of #666 and font size of 9px. When hovered over, the color will change to #222. In the final part, we set our theme as the default Highcharts theme by using an API method Highcharts.setOptions(), which takes a settings object to be applied to Highcharts; in our case, it is customTheme. The styles that have not been defined in our custom theme will remain the same as the default theme. This allows us to partially customize a predefined theme by introducing another theme containing different styles. 
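The partial-customization behavior works because `Highcharts.setOptions()` merges the supplied object over the existing defaults, so any option the theme leaves out falls through untouched. The idea can be illustrated with a tiny stand-alone deep merge — this is a sketch of the concept, not Highcharts' actual implementation, and the default values used are made up:

```javascript
// Illustrative deep merge: keys present in the custom theme override
// the defaults; everything else falls through from the default theme.
function mergeTheme(defaults, theme) {
  var out = {};
  Object.keys(defaults).forEach(function (k) { out[k] = defaults[k]; });
  Object.keys(theme).forEach(function (k) {
    var isObj = theme[k] && typeof theme[k] === 'object' && !Array.isArray(theme[k]);
    out[k] = isObj ? mergeTheme(defaults[k] || {}, theme[k]) : theme[k];
  });
  return out;
}

// Hypothetical defaults and a partial theme that only overrides the title color.
var defaults = { title: { style: { color: '#333', fontSize: '18px' } }, colors: ['#7cb5ec'] };
var customTheme = { title: { style: { color: '#222' } } };

var merged = mergeTheme(defaults, customTheme);
// merged.title.style.color is overridden, fontSize is inherited from defaults
```

This is why introducing a second theme containing only a few styles leaves the rest of the first theme intact.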
In order to make this theme work, include the file custom-theme.js after the highcharts.js file: <script src="js/highcharts.js"></script> <script src="js/custom-theme.js"></script> The output of our custom theme is as follows: We can also tell our theme to include a web font from Google without having the need to include the style sheet manually in the header, as we did in a previous section. For that purpose, Highcharts provides a utility method named Highcharts.createElement(). We can use it as follows by placing the code inside the custom-theme.js file: Highcharts.createElement( 'link', {    href: 'http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,700italic,400,300,700',    rel: 'stylesheet',    type: 'text/css' }, null, document.getElementsByTagName( 'head' )[0], null ); The first argument is the name of the tag to be created. The second argument takes an object as tag attributes. The third argument is for CSS styles to be applied to this element. Since, there is no need for CSS styles on a link element, we passed null as its value. The final two arguments are for the parent node and padding, respectively. We can now change the default font family for our charts to 'Open Sans': chart: {    ...    style: {        fontFamily: "'Open Sans', sans-serif",        ...    } } The specified Google web font will now be loaded every time a chart with our custom theme is initialized, hence eliminating the need to manually insert the required font style sheet inside the <head> tag. This screenshot shows a chart with 'Open Sans' Google web font. Summary In this article, you learned about incorporating Google fonts and jQuery UI easing into our chart for enhanced styling. Resources for Article: Further resources on this subject: Integrating with other Frameworks [Article] Highcharts [Article] More Line Charts, Area Charts, and Scatter Plots [Article]
LeJOS – Unleashing EV3

Packt
29 Oct 2014
7 min read
In this article by Abid H. Mujtaba, author of Lego Mindstorms EV3 Essentials, we'll have a look at a powerful framework designed to grant an extraordinary degree of control over EV3, namely LeJOS:

Classic programming on EV3

LeJOS is what happens when robot and software enthusiasts set out to hack a robotics kit. Although Lego initially intended the Mindstorms series to be primarily targeted towards children, it was taken up with gleeful enthusiasm by adults. The visual programming language, which was meant to be used both on the brick and on computers, was also designed with children in mind. Although very powerful, it has a number of limitations and shortcomings. Enthusiasts have continually been on the lookout for ways to program Mindstorms using traditional programming languages. As a result, a number of development kits have been created by enthusiasts to allow the programming of EV3 in a traditional fashion, by writing and compiling code in traditional languages. A development kit for EV3 consists of the following:

- A traditional programming language (C, C++, Java, and so on)
- Firmware for the brick (basically, a new OS)
- An API in the chosen programming language, giving access to the robot's inputs and outputs
- A compiler that compiles code on a traditional computer to produce executable code for the brick
- Optionally, an Integrated Development Environment (IDE) to consolidate and simplify the process of developing for the brick

The release of each robot in the Mindstorms series has been associated with a consolidated effort by the open source community to hack the brick and make available a number of frameworks for programming robots using traditional programming languages. Some of the common frameworks available for Mindstorms are GNAT GPL (Ada), ROBOTC, Next Byte Code (NBC), an assembly language, Not Quite C (NQC), LeJOS, and many others.
This variety of frameworks is particularly useful for Linux users, not only because they love having the ability to program in their language of choice, but also because the visual programming suite for EV3 does not run on Linux at all. In its absence, these frameworks are essential for anyone who is looking to create programs of significant complexity for EV3.

LeJOS – introduction

LeJOS is a development kit for Mindstorms robots based on the Java programming language. There is no official pronunciation, with people using lay-joss, le-J-OS (those who claim, myself included, that it is French for "the Java Operating System"), or lay-hoss if you prefer the Latin-American touch. After considerable success with NXT, LeJOS was the first (and in my opinion, the most complete) framework released for EV3. This is a testament both to the prowess of the developers working on LeJOS and the fact that Lego built EV3 to be extremely hackable by running Linux under its hood and making its source publicly available. Within weeks, LeJOS had been ported to EV3, and you could program robots using Java. LeJOS works by installing its own OS (operating system) on the EV3's SD card as an alternate firmware. Before EV3, this would involve slightly difficult and dangerous tinkering with the brick itself, but one of the first things that EV3 does on booting up is to check for a bootable partition on the SD card. If it is found, the OS/firmware is loaded from the SD card instead of being loaded internally. Thus, in order to run LeJOS, you only need a suitably prepared SD card inserted into EV3 and it will take over the brick. When you want to return to the default firmware, simply remove the SD card before starting EV3. It's that simple! Lego wasn't kidding about the hackability of EV3. The firmware for LeJOS basically runs a Java Virtual Machine (JVM) inside EV3, which allows it to execute compiled Java code.
Along with the JVM, LeJOS installs an API library, defining methods that can be programmatically used to access the inputs and outputs attached to the brick. These API methods are used to control the various components of the robot. The LeJOS project also releases tools that can be installed on all modern computers. These tools are used to compile programs that are then transferred to EV3 and executed. These tools can be imported into any IDE that supports Java (Eclipse, NetBeans, IntelliJ, Android Studio, and so on) or used with a plain text editor combined with Ant or Gradle. Thus, leJOS qualifies as a complete development kit for EV3. The advantages of LeJOS Some of the obvious advantages of using LeJOS are: This was the first framework to have support for EV3 Its API is stable and complete This is an active developer and user base (the last stable version came out in March 2014, and the new beta was released in April) The code base is maintained in a public Git repository Ease of installation Ease of use The other advantages of using LeJOS are linked to the fact that its API as well as the programs you write yourself are all written in Java. The development kits allow a number of languages to be used for programming EV3, with the most popular ones being C and Java. C is a low-level language that gives you greater control over the hardware, but it comes at a price. Your instructions need to be more explicit, and the chances of making a subtle mistake are much higher. For every line of Java code, you might have to write dozens of lines of C code to get the same functionality. Java is a high-level language that is compiled into the byte code that runs on the JVM. This results in a lesser degree of control over the hardware, but in return, you get a powerful and stable API (that LeJOS provides) to access the inputs and outputs of EV3. The LeJOS team is committed to ensure that this API works well and continues to grow. 
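To give a feel for the Java side of this comparison, here is roughly what a trivial two-motor program looks like in the leJOS style. The `Motor` class below is a stand-in written for this sketch — the real API lives in leJOS packages and only runs on the brick — so the example stays runnable on any JVM:

```java
// A leJOS-style program sketch. The Motor class here is a stub invented
// for this illustration; the real leJOS API provides motor and sensor
// classes that are used in broadly the same way, but require the brick.
public class MotorSketch {

    static class Motor {
        private int speed;      // degrees per second
        private boolean moving;

        void setSpeed(int degreesPerSecond) { this.speed = degreesPerSecond; }
        void forward()  { this.moving = true;  }
        void stop()     { this.moving = false; }
        int  getSpeed() { return speed; }
        boolean isMoving() { return moving; }
    }

    public static void main(String[] args) {
        Motor left = new Motor();
        Motor right = new Motor();

        // Drive straight: same speed on both motors.
        left.setSpeed(360);
        right.setSpeed(360);
        left.forward();
        right.forward();

        // Stop both motors before the program exits.
        left.stop();
        right.stop();
    }
}
```

Even in this toy form, the high-level flavor is visible: a few self-describing method calls, with the regulated-motor details hidden behind the API rather than spelled out in low-level C.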
The use of a high-level language such as Java lowers the entry threshold to robotic programming, especially for people who already know at least one programming language. Even people who don't know programming yet can learn Java easily, much more so than C. Finally, two features of Java that are extremely useful when programming robots are its object-oriented nature (the heavy use of classes, interfaces, and inheritance) and its excellent support for multithreading. You can create and reuse custom classes to encapsulate common functionality and can integrate sensors and motors using different threads that communicate with each other. The latter allows the construction of subsumption architectures, an important development in robotics that allows for extremely responsive robots. I hope that I have made a compelling case for why you should choose to use LeJOS as your framework in order to take EV3 programming to the next level. However, the proof is in the pudding. Summary In this article, we learned how EV3's extremely hackable nature has led to the proliferation of alternate frameworks that allow EV3 to be programmed using traditional programming languages. One of these alternatives is LeJOS, a powerful framework based on the Java programming language. We studied the fundamentals of LeJOS and learned its advantages over other frameworks. Resources for Article: Further resources on this subject: Home Security by BeagleBone [Article] Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [Article] Managing Test Structure with Robot Framework [Article]
Building an iPhone App Using Swift: Part 2

Ryan Loomba
29 Oct 2014
5 min read
Let’s continue on from Part 1, and add a new table view to our app. In our storyboard, let’s add a table view controller by searching in the bottom right and dragging. Next, let’s add a button to our main view controller that will link to our new table view controller. Similar to what we did with the web view, Ctrl + click on this button and drag it to the newly created table view controller.Upon release, choose push. Now, let’s make sure everything works properly. Hit the large play button and click on Table View. You should now be taken to a blank table: Let’s populate this table with some text. Go to File ->  New ->  File  and choose a Cocoa Touch Class. Let’s call this file TableViewController, and make this a subclass of UITableViewController in the Swift language. Once the file is saved, we’ll be presented with a file with some boilerplate code.  On the first line in our class file, let’s declare a constant. This constant will be an array of strings that will be inserted into our table: let tableArray: NSArray = ["Apple", "Orange", "Banana", "Grape", "Kiwi"] Let’s modify the function that has this signature: func tableView(tableView: UITableView!, numberOfRowsInSection section: Int) -> Int This function returns the number of rows in our table view. Instead of setting this to zero, let’s change this to ten. Next, let’s uncomment the function that has this signature: override func numberOfSectionsInTableView(tableView: UITableView!) -> Int This function controls how many sections we will have in our table view. Let’s modify this function to return 1.  Finally, let’s add a function that will populate our cells: override func tableView(tableView: UITableView!, cellForRowAtIndexPath indexPath: NSIndexPath!) -> UITableViewCell! 
{ let cell: UITableViewCell = UITableViewCell(style: UITableViewCellStyle.Subtitle, reuseIdentifier: "MyTestCell") cell.textLabel.text = tableArray.objectAtIndex(indexPath.row) as NSString return cell }  This function iterates through each row in our table and sets the text value to be equal to the fruits we declared at the top of the class file. The final file should look like this: class TableViewController: UITableViewController { let tableArray: NSArray = ["Apple", "Orange", "Banana", "Grape", "Kiwi"] override func viewDidLoad() { super.viewDidLoad() } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } // MARK: - Table view data source override func numberOfSectionsInTableView(tableView: UITableView!) -> Int { // #warning Potentially incomplete method implementation. // Return the number of sections. return 1 } override func tableView(tableView: UITableView!, numberOfRowsInSection section: Int) -> Int { // #warning Incomplete method implementation. // Return the number of rows in the section. return tableArray.count } override func tableView(tableView: UITableView!, cellForRowAtIndexPath indexPath: NSIndexPath!) -> UITableViewCell! { let cell: UITableViewCell = UITableViewCell(style: UITableViewCellStyle.Subtitle, reuseIdentifier: "MyTestCell") cell.textLabel.text = tableArray.objectAtIndex(indexPath.row) as NSString return cell } } Finally, we need to go back to our storyboard and link to our custom table view controller class. Select the storyboard, click on the table view controller, choose the identity inspector and fill in TableViewController  for the custom class. If we click the play button to build our project and then click on our table view button, we should see our table populated with names of fruit: Adding a map view Click on the Sample Swift App icon in the top left of the screen and then choose Build Phases. 
Under Link Binary with Libraries, click the plus button and search for MapKit. Once found, click Add: In the story board, add another view controller. Search for a MKMapView and drag it into the newly created controller. In the main navigation controller, create another button named Map View, Ctrl + click + drag to the newly created view controller, and upon release choose push: Additionally, choose the Map View in the storyboard, click on the connections inspector, Ctrl + click on delegate and drag to the map view controller. Next, let’s create a custom view controller that will control our map view. Go to File -> New -> File and choose Cocoa Touch. Let’s call this file MapViewController and inherit from UIViewController. Let’s now link our map view in our storyboard to our newly created map view controller file. In the storyboard, Ctrl + click on the map view and drag to our Map View Controller to create an IBOutlet variable. It should look something like this: @IBOutlet var mapView: MKMapView! Let’s add some code to our controller that will display the map around Apple’s campus in Cupertino, CA. I’ve looked up the GPS coordinates already, so here is what the completed code should look like: import UIKit import MapKit class MapViewController: UIViewController, MKMapViewDelegate { @IBOutlet var mapView: MKMapView! override func viewDidLoad() { super.viewDidLoad() let latitude:CLLocationDegrees = 37.331789 let longitude:CLLocationDegrees = -122.029620 let latitudeDelta:CLLocationDegrees = 0.01 let longitudeDelta:CLLocationDegrees = 0.01 let span:MKCoordinateSpan = MKCoordinateSpan(latitudeDelta: latitudeDelta, longitudeDelta: longitudeDelta) let location:CLLocationCoordinate2D = CLLocationCoordinate2DMake(latitude, longitude) let region: MKCoordinateRegion = MKCoordinateRegionMake(location, span) self.mapView.setRegion(region, animated: true) // Do any additional setup after loading the view. 
} override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } } This should now build, and when you click on the Map View button, you should be able to see a map showing Apple’s campus at the center of the screen.  About the Author Ryan is a software engineer and electronic dance music producer currently residing in San Francisco, CA. Ryan started up as a biomedical engineer but fell in love with web/mobile programming after building his first Android app. You can find him on GitHub @rloomba.
WebRTC with SIP and IMS

Packt
29 Oct 2014
28 min read
In this article by Altanai Bisht, the author of the book WebRTC Integrator's Guide, we discuss the interaction of the WebRTC client with important IMS nodes and modules. IP Multimedia Subsystem (IMS) is an architectural framework for IP multimedia communications and IP telephony based on convergent applications. It specifies three layers in a telecom network:

- Transport or Access layer: This is the bottom-most segment, responsible for interacting with end systems such as phones.
- IMS layer: This is the middleware responsible for authenticating and routing the traffic and facilitating call control through the Service layer.
- Service or Application layer: This is the top-most layer, where all of the call control applications and Value Added Services (VAS) are hosted.

IMS standards are defined by the Third Generation Partnership Project (3GPP), which adopts and promotes Internet Engineering Task Force (IETF) Requests for Comments (RFCs). Refer to http://www.3gpp.org/technologies/keywords-acronyms/109-ims to learn more about 3GPP IMS specification releases. The WebRTC gateway is the first point of contact for the SIP requests from the WebRTC client to enter the IMS network. The gateway converts the SIP over WebSocket implementation to legacy/plain SIP; that is, it is a WebRTC to SIP gateway that connects to the IMS world and is able to communicate with a legacy SIP environment. It can also translate other REST- or JSON-based signaling protocols into SIP. The gateway also handles the media operations that involve DTLS, SRTP, RTP, transcoding, demuxing, and so on.
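The "SIP over WebSocket" signaling that the gateway translates is ordinary SIP text carried in WebSocket frames, standardized in RFC 7118 under the WebSocket subprotocol "sip". A minimal sketch of what such a request looks like follows; all addresses, tags, and identifiers are placeholder values invented for this illustration:

```javascript
// Build a plain SIP REGISTER request of the kind a WebRTC client sends
// over a WebSocket connection (RFC 7118). Every concrete value below
// (domain, tag, branch, Call-ID) is a placeholder for illustration.
function buildRegister(user, domain, wsHost) {
  return [
    'REGISTER sip:' + domain + ' SIP/2.0',
    'Via: SIP/2.0/WS ' + wsHost + ';branch=z9hG4bKexample',
    'From: <sip:' + user + '@' + domain + '>;tag=12345',
    'To: <sip:' + user + '@' + domain + '>',
    'Call-ID: example-call-id',
    'CSeq: 1 REGISTER',
    'Contact: <sip:' + user + '@' + wsHost + ';transport=ws>',
    'Content-Length: 0',
    '', ''
  ].join('\r\n');
}

var msg = buildRegister('alice', 'open-ims.test', 'client.invalid');
// In a browser, this text would be sent over a WebSocket opened with the
// "sip" subprotocol, e.g.:
//   var sock = new WebSocket('wss://gateway.example.com', 'sip');
//   sock.onopen = function () { sock.send(msg); };
```

The gateway's job, on the signaling side, is simply to take such messages off the WebSocket and relay them into the IMS network as plain SIP over UDP/TCP.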
In this article, we will study a case where there exists a simple IMS core environment, and the WebRTC clients are meant to interact after the signals traverse core IMS nodes such as the Call Session Control Function (CSCF), Home Subscriber Server (HSS), and Telecom Application Server (TAS).

The Interaction with core IMS nodes

This section describes the sequence of steps that must be followed for the integration of the WebRTC client with IMS. Before you go ahead, set up a Session Border Controller (SBC) / WebRTC gateway / SIP proxy node for the WebRTC client to interact with the IMS control layer. Direct the control towards the CSCF nodes of IMS, namely Proxy-CSCF, Interrogating-CSCF, and Serving-CSCF. The subscriber details and the location are updated in the HSS. The Serving-CSCF (S-CSCF) routes the call through the SIP Application Server to invoke any services before the call is processed. The Application Server, which is part of the IMS service layer, is the point of adding logic to call processing in the form of VAS. Additionally, we will uncover the process of integrating a media server for inter-codec conversion between legacy SIP phones and WebRTC clients. The setup will allow us to support all SIP nodes and endpoints as part of the IMS landscape. The following figure shows the placement of the SIPWS to SIP gateway in the IMS network:

The WebRTC client is a web-based dynamic application that is run over a Web Application Server. For simplification, we can club the components of the WebRTC client and the Web Application Server together and address them jointly as the WebRTC client, as shown in the following diagram:

There are four major components of the OpenIMS core involved in this setup, as described in the following sections. Along with these, two components of the WebRTC infrastructure (the client and the gateway) are also necessary to connect the WebRTC endpoints. Three optional entities are also described as part of this setup.
The components of Open IMS are CSCF nodes and HSS. More information on each component is given in the following sections. The Call Session Control Function The three parts of CSCF are described as follows: Proxy-CSCF (P-CSCF) is the first point of contact for a user agent (UA) to which all user equipments (UEs) are attached. It is responsible for routing an incoming SIP request to other IMS nodes, such as registrar and Policy and Charging Rules Function (PCRF), among others. Interrogating-CSCF (I-CSCF) is the inbound SIP proxy server for querying the HSS as to which S-CSCF should be serving the incoming request. Serving-CSCF (S-CFCS) is the heart of the IMS core as it enables centralized IMS service control by defining routing paths that act like the registrar, interact with the Media Server, and much more. Home Subscriber System IMS core Home Subscriber System (HSS) is the database component responsible for maintaining user profiles, subscriptions, and location information. The data is used in functions such as authentication and authorization of users while using IM services. The components of the WebRTC infrastructure primarily comprises of WebRTC Web Application Servers, WebRTC web-based clients, and the SIP gateway. WebRTC Web Application Server and client: The WebRTC client is intrinsically a web application that is composed of user interfaces, data access objects, and controllers to handle HTTP requests. A Web Application Server is where an application is hosted. As WebRTC is a browser-based technique, it is meant to be an HTML-based web application. The call functionalities are rendered through the SIP JavaScript files. The browser's native WebRTC capabilities are utilized to capture and transmit the data. A WebRTC service provider must embed the SIP call functions on a web page that has a call interface. 
It must provide values for the To and From SIP addresses, div to play audio/video content, and access to users' resources such as camera, mic, and speakers. WebRTC to IMS gateway: This is the point where the conversion of the signal from SIP over WebSockets to legacy/plain SIP takes place. It renders the signaling into a state that the IMS network nodes can understand. For media, it performs the transcoding from WebRTC standard codecs to others. It also performs decryption and demux of audio/video/RTCP/RTP. There are other servers that act as IMS nodes as well, such as the STUN/TURN Server, Media Server, and Application Server. They are described as follows: STUN/TURN Server: These are employed for NAT traversals and overcoming firewall restrictions through ICE candidates. They might not be needed when the WebRTC client is on the Internet and the WebRTC gateway is also listening on a publicly accessible IP. Media Server: Media server plays a role when media relay is required between the UEs instead of a direct peer-to-peer communication. It also comes into picture for services such as voicemail, Interactive Voice Response (IVR), playback, and recording. Application Server (AS): Application Server is the point where developers can make customized logic for call control such as VAS in the form of call redirecting in cases when the receiver is absent and selective call screening. The IP Multimedia Subsystem core IMS is an architecture for real-time multimedia (voice, data, video, and messaging) services using a common IP network. It defines a layered architecture. 
According to the 3GPP specification, IMS entities are classified into six categories: Session management and route (CSCF, GGSN, and SGSN) Database (HSS and SLF) Interworking elements (BGCF, MGCF, IM-MGW, and SGW) Service (Application Server, MRFC and MRFP) Strategy support entities (PDF) Billing Interoperability with the SIP infrastructure requires a session border controller to decrypt the WebRTC control and media flows. A media node is also set up for transcoding between WebRTC codecs and other legacy phones. When a gateway is involved, the WebRTC voice and video peer connections are between the browser and the border controller. In our case, we have been using Kamailio in this role. Kamailio is an open source SIP server capable of processing both SIP and SIPWS signaling. As WebRTC is made to function over SIP-based signaling, it is applicable to enjoy all of the services and solutions made for the IMS environment. The telecom operators can directly mount the services in the Service layer, and subscribers can avail the services right from their web browsers through the WebRTC client. This adds a new dimension to user accessibility and experience. A WebRTC client's true potential will come into effect only when it is integrated with the IMS framework. We have some readymade, open IMS setups that have been tested for WebRTC-to-IMS integration. The setups are as follows: 3GPP IMS: This is the IMS specification by 3GPP, which is an association of telecommunications group OpenIMS: This is the open source implementation of the IMS CSCFs and a lightweight HSS for the IMS core DubangoIMS: This is the cross-platform and open source 3GPP IMS/LTE framework KamailioIMS: Kamailio Version 4.0 and above incorporates IMS support by means of OpenIMS We can also use any other IMS structure for the integration. In this article, we will demonstrate the use of OpenIMS. 
For this, it is required that a WebRTC client and a non-WebRTC client be interoperable by means of signaling and media transcoding. Also, the essential components of the IMS world, such as the HSS, Media Server, and Application Server, should be integrated with the WebRTC setup.

The OpenIMS Core

The Open IMS Core is an open source implementation of the core elements of the IMS network, which includes the IMS CSCF nodes and HSS. The following diagram shows how a connection is made from WebRTC to CSCF:

The following are the prerequisites to install the Open IMS core:

- Make sure that you have the following packages installed on your Linux machine, as their absence can hinder the IMS installation process: Git and Subversion; GCC3/4, Make, JDK1.5, Ant; MySQL as the database; Bison and Flex, the Linux utilities; libxml2 (Version 2.6 and above) and libmysql with development versions. Install these packages from the Synaptic package manager or using the command prompt. For the LoST interface of E-CSCF, use the following command lines:

sudo apt-get install mysql-server libmysqlclient15-dev libxml2 libxml2-dev bind9 ant flex bison curl libcurl4-gnutls-dev
sudo apt-get install curl libcurl4-gnutls-dev

- The Domain Name Server (DNS), bind9, should be installed and running. To do this, we can run the following command line:

sudo apt-get install bind9

- We need a web browser to review the status of the connection on the web console. For example, Chrome can be downloaded from https://www.google.com/intl/en_in/chrome/browser/.

- We must verify that the Java version installed is above 1.5 so as to not break the compilation process, and set the path of JAVA_HOME as follows:

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/jre

The output of the command line that checks the Java version is as follows:

The following are the steps to install OpenIMS.
As the source code is preconfigured to work from a standard file path of /opt, we will use the predefined directory for installation.

Go to the /opt folder and create a directory to store the OpenIMS core, using the following command lines:

mkdir /opt/OpenIMSCore
cd /opt/OpenIMSCore

Create a directory to store FHoSS, check out the HSS, and compile the source using the following command lines:

mkdir FHoSS
svn checkout http://svn.berlios.de/svnroot/repos/openimscore/FHoSS/trunk FHoSS
cd FHoSS
ant compile deploy

Note that the code requires Java Version 7 or lower to work. Also, create a directory to store ser_ims, check out the CSCFs, and then install ser_ims using the following command lines:

mkdir ser_ims
svn checkout http://svn.berlios.de/svnroot/repos/openimscore/ser_ims/trunk ser_ims
cd ser_ims
make install-libs all

After downloading and installing, the contents of the OpenIMS installation directory are as follows:

By default, the nodes are configured to work only on the local loopback, and the default domain configured is open-ims.test. The MySQL access rights are also set only for local access. However, this can be modified using the following steps:

1. Run the following command line: ./opt/ser_ims/cfg/configurator.sh
2. Replace 127.0.0.1 (the default IP for the local host) with the new IP address that is required to configure the IMS Core server.
3. Replace the home domain (open-ims.test) with the required domain name.
4. Change the database passwords.

The following figure depicts the domain change process through configurator.sh:

To resolve the domain name, we need to add a new IMS domain to the bind configuration directory. Change to the system's bind folder (cd /etc/bind) and copy the open-ims.dnszone file there after replacing the domain name.
sudo cp /opt/OpenIMSCore/ser_ims/cfg/open-ims.dnszone /etc/bind/

Open the named.conf file and include open-ims.dnszone in the list that already exists:

include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
include "/etc/bind/named.conf.default-zones";
include "/etc/bind/open-ims.dnszone";

One can also add a reverse zone file, which, contrary to the DNS zone file, converts an address to a name. Restart the naming server using the following command:

sudo service bind9 restart

In case of any failure or error, the system logs can be inspected using the following command line:

tail -f /var/log/syslog

Open the MySQL client and run the SQL scripts that create the database and tables for HSS operations:

mysql -u root -p -h localhost < ser_ims/cfg/icscf.sql
mysql -u root -p -h localhost < FHoSS/scripts/hss_db.sql
mysql -u root -p -h localhost < FHoSS/scripts/userdata.sql

The following screenshot shows the tables for the HSS database:

Users should be registered with a domain (that is, one needs to make changes in the userdata.sql file by replacing the default domain name with the required domain name). Note that while it is not mandatory to change the domain, it is a good practice to add a new domain that describes the enterprise or service provider's name. The following screenshot shows user domains changed from the default to the personal domain:

Copy the pcscf.cfg, pcscf.sh, icscf.cfg, icscf.xml, icscf.sh, scscf.cfg, scscf.xml, and scscf.sh files to the /opt/OpenIMSCore location. Start the Proxy Call Session Control Function (P-CSCF) by executing the pcscf.sh script. The default port assigned to P-CSCF is 4060. A screenshot of the running P-CSCF is as follows:

Start the Interrogating Call Session Control Function (I-CSCF) by executing the icscf.sh script. The default port assigned to I-CSCF is 5060. If the scripts display a warning about the connection, it is just because the FHoSS client still needs to be started.
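For illustration, the open-ims.dnszone file essentially maps the CSCF hostnames to the server's IP through A records. The following sketch builds a simplified sample zone (a real zone file also carries SOA and NS records) and extracts the host-to-IP mapping with awk; the hostnames and IP are example values:

```shell
# Simplified stand-in for the open-ims.dnszone zone file (A records only).
cat > /tmp/open-ims.dnszone <<'EOF'
pcscf.open-ims.test. IN A 192.168.1.20
icscf.open-ims.test. IN A 192.168.1.20
scscf.open-ims.test. IN A 192.168.1.20
EOF

# Print each CSCF hostname with the address it resolves to.
awk '$3 == "A" {print $1, $4}' /tmp/open-ims.dnszone
```

After restarting bind, each of these names should resolve to the configured server IP.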
A screenshot of the running I-CSCF is as follows:

Start the Serving Call Session Control Function (S-CSCF) by executing the scscf.sh script. The default port assigned to S-CSCF is 6060. A screenshot of the running S-CSCF is as follows:

Start the FOKUS Home Subscriber Server (FHoSS) by executing FHoSS/deploy/startup.sh. The HSS communicates using the Diameter protocol; the ports used for this protocol are 3868, 3869, and 3870. A screenshot of the running HSS is shown as follows:

Go to http://<yourip>:8080 and log in to the web console with hssAdmin as the username and hss as the password, as shown in the following screenshot.

To register the WebRTC client with OpenIMS, we must use an IMS gateway that converts SIP over WebSocket to plain SIP. In order to achieve this, use the IP, port, or domain of the P-CSCF node while registering the client. The flow will be from the WebRTC client to the IMS gateway to the P-CSCF of the IMS Core. For example, the flow can be from the SIPML5 WebRTC client to the webrtc2sip gateway to the P-CSCF of the Open IMS Core. The subscribers are visible in the IMS subscription section of the OpenIMS portal. The following screenshot shows the user identities and their statuses on a web-based admin console:

As far as other components are concerned, they can be subsequently added to the core network over their respective interfaces.

The Telecom server

The Telecom Application Server (TAS) is where the logic for processing a call resides. It can be used to add applications such as call blocking, call forwarding, and call redirection according to predefined values. The inputs can be assigned at runtime or stored in a database using a suitable provisioning system. The following diagram shows the connection between WebRTC and the IMS Core Server:

For demonstration purposes, we can use an Application Server that can host SIP servlets and integrate it with the IMS core.
The Mobicents Telecom Application Server

Mobicents SIP Servlets and the Java APIs for Integrated Networks-Service Logic Execution Environment (JAIN-SLEE) are open platforms used to deploy new call controller logic and other converged applications. The steps to install the Mobicents TAS are as follows:

Download the SIP Application Server package from https://code.google.com/p/sipservlets/wiki/Downloads and unzip the contents. Make sure that the Java environment variables are in place.

Start the JBoss container from mobicents/jboss-5.1.0.GA/bin. In the case of MS Windows, run run.bat; on Linux, run run.sh. The following figure displays the traces on the console when the server is started on JBoss:

The Mobicents application can also be developed by installing the Tomcat/Mobicents plugin in the Eclipse IDE. The server can also be added as a Mobicents instance, enabling quick deployment of applications. Open the web console to review the settings. The following screenshot displays the process:

In order to deploy resource adapters, enter:

ant -f resources/<name of resource adapter>/build.xml deploy

To undeploy a resource adapter, execute ant undeploy with the name of the resource adapter:

ant -f resources/<name of resource adapter>/build.xml undeploy

Make sure that you have Apache Ant 1.7. The deployed instances should be visible in a web console as follows:

To deploy and run SIP Servlet applications, use the following command line:

ant -f examples/<name of application directory>/build.xml deploy-all

Configure CSCF to include the Application Server in the path of every incoming SIP request and response. With the introduction of TAS, it is now possible to provide customized call control logic to all subscribers or particular subscribers. The SIP solutions and services can range from simple activities, such as call screening and call rerouting, to complex call-handling applications, such as selective call screening based on the user's calendar.
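The ant deploy and undeploy invocations above differ only in the adapter name and the target. A small shell helper, shown here purely as a hypothetical convenience (the adapter name sip-ra is made up), can assemble the command so the pattern lives in one place:

```shell
# Assemble the ant command used to deploy or undeploy a resource adapter.
# $1 = deploy|undeploy, $2 = resource adapter name (example: sip-ra)
ra_cmd() {
  echo "ant -f resources/$2/build.xml $1"
}

ra_cmd deploy sip-ra
ra_cmd undeploy sip-ra
```

The helper only prints the command; pipe its output to sh (or replace echo with the command itself) once the paths match your installation.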
Some more examples of SIP applications are given as follows:

- Speed Dial: This application lets the user make a call using pre-programmed numbers that map to the actual SIP URIs of users.
- Click to Dial: This application makes a call using a web-based GUI. However, it is very different from WebRTC, as it makes/receives the call through an external SIP phone.
- Find me Follow Me: This application is beneficial if the user is registered on multiple devices simultaneously, for example, a SIP phone, X-Lite, and WebRTC. In such a case, when there is an incoming call, each of the user's devices rings for a few seconds in order of their recent use so that the user can pick the call from the device that is nearest to him.

These services are often referred to as Value Added Services (VAS), which can be innovative and can take the user experience to new heights.

The Media Server

To enable features such as Interactive Voice Response (IVR), voice mail recording, and the playing of announcements, the Media Server plays a critical role. The Media Server can be used as a standalone entity in the WebRTC infrastructure, or it can be referenced from the SIP server in the IMS environment.

The FreeSWITCH Media Server

FreeSWITCH has powerful Media Server capabilities, including functions such as IVR, conferencing, and voice mail. We will first see how to use FreeSWITCH as a standalone entity that provides SIP and RTP proxy features. Let's configure and install a basic setup of the FreeSWITCH Media Server using the following steps:

Download and store the source code for compilation in the /usr/src folder, and run the following command lines:

cd /usr/src
git clone -b v1.4 https://stash.freeswitch.org/scm/fs/freeswitch.git

The build will install its binaries into a directory named /usr/local/freeswitch. Assign ownership of this folder to your user:

sudo chown -R <username> /usr/local/freeswitch

Replace <username> with the name of the user who should own the folder.
Go to the directory where the source is stored, that is, the following directory:

cd /usr/src/freeswitch

Then, run bootstrap using the following command line:

./bootstrap.sh

One can add additional modules by editing the configuration file using the vi editor. We can open the file using the following command line:

vi modules.conf

The names of the modules are already listed. Remove the # symbol before a name to include the module at runtime, and add # to skip the module. Then, run the configure command:

./configure --enable-core-pgsql-support

Use the make command and install the components:

make && make install

Go to the Sofia profile and uncomment the parameter defined for WebSocket binding. By doing so, the WebRTC clients can register with FreeSWITCH on port 443. Sofia is the SIP stack used by FreeSWITCH; by default, it supports only pure SIP requests, and this parameter lets WebRTC clients register with FreeSWITCH's SIP server:

<!-- uncomment for SIP over WebSocket support -->
<param name="ws-binding" value=":443"/>

Install the sound files using the following command line:

make all cd-sounds-install cd-moh-install

Go to the installation directory, and in the vars.xml file under freeswitch/conf/, make sure that the codec preferences are set as follows:

<X-PRE-PROCESS cmd="set" data="global_codec_prefs=G722,PCMA,PCMU,GSM"/>
<X-PRE-PROCESS cmd="set" data="outbound_codec_prefs=G722,PCMA,PCMU,GSM"/>

Make sure that the SIP profile directly uses the codec values as follows:

<param name="inbound-codec-prefs" value="$${global_codec_prefs}"/>
<param name="outbound-codec-prefs" value="$${global_codec_prefs}"/>

We can later add more codecs, such as VP8, for video calling/conferencing. To start FreeSWITCH, go to the /freeswitch/bin installation directory and run FreeSWITCH. Run the command-line console that will be used to control and monitor the passing SIP packets by going to the /freeswitch/bin installation directory and executing fs_cli.
The following is the screenshot of the FreeSWITCH client console:

Go to the freeswitch/conf/sip_profiles installation directory and look at the existing configuration files. Load and start a SIP profile using the following command line:

sofia profile <name of profile> start

Restart and reload the profile in case of changes using the following command line:

sofia profile <name of profile> restart

Check that it is working by executing the following command line:

sofia status

We can check the status of an individual SIP profile's registrations by executing the following command line:

sofia status profile <name of profile> reg

The preceding figure depicts the status of the users registered with the server at one point in time.

Media Services

The following steps outline the process of using the FreeSWITCH media services:

Register the SIP softphone and WebRTC client with FreeSWITCH. Use sample values between 1000 and 1020 initially. Later, we can configure more users as specified by the freeswitch/conf/directory installation directory. The following are sample values to register Kapanga:

     Username: 1002
     Display name: any
     Domain/Realm: 14.67.87.45
     Outbound proxy: 14.67.87.45:5080
     Authorization user: 1002
     Password: 1234

The sample values for WebRTC client registration, if, for example, we decide to use the SIPML5 WebRTC client, will be as follows:

     Display name: any
     Private identity: 1001
     Public identity: sip:1001@14.67.87.45
     Password: 1234
     Realm: 14.67.87.45
     WebSocket Server URL: ws://14.67.87.45:443

Note that the values used here are arbitrary and for the purpose of understanding. The IP denotes the public IP of the FreeSWITCH machine, and the port is the WebSocket port configured in the Sofia profile.
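The Sofia commands above are typed at the interactive FreeSWITCH console, but they can also be issued non-interactively through fs_cli -x. The helper below only prints the command it would run, so it can be reviewed safely; the profile name internal is an example, not a required name:

```shell
# Print the fs_cli invocation for a given Sofia subcommand instead of
# executing it, so the commands can be reviewed or logged first.
sofia_cmd() {
  printf "fs_cli -x 'sofia %s'\n" "$*"
}

sofia_cmd profile internal start
sofia_cmd status profile internal reg
```

Replacing printf with an actual fs_cli call turns this into a scripted health check for the profile.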
As seen in the following screenshot, we are required to tick the Enable RTCWeb Breaker option in Expert settings to compensate for any incompatibility between the WebSocket and SIP standards that might arise:

Make a call between the SIP softphone and the WebRTC client. In this case, the signaling and media pass through FreeSWITCH as a proxy. A call from a WebRTC client is depicted in the following screenshot, which consists of SIP messages passing through the FreeSWITCH server and therefore visible in the FreeSWITCH client console. In this case, the server is operating in the default mode; the other modes are the bypass and proxy modes.

Make a call between two WebRTC clients, where SIP and RTP pass through FreeSWITCH as a proxy. We can use other services of FreeSWITCH as well, such as voicemail, IVR, and conferencing.

We can also configure this setup in such a way that the media passes through the FreeSWITCH Media Server, while the SIP signaling goes via the Kamailio SIP server. Use the RTP proxy in the SIP proxy server, in our case Kamailio, to pass the RTP media through the Media Server. The RTP proxy module of Kamailio should be built and configured in the kamailio.cfg file. The RTP proxy forces the RTP to pass through a node specified in the settings parameters. It enables communication between SIP user agents behind NAT and can also be used to set up a relaying host for RTP streams.

Configure the RTP Engine as the media proxy agent for RTP. It will be used to force the WebRTC media through it, rather than in the peer-to-peer fashion in which WebRTC is designed to operate. Perform the following steps to configure the RTP Engine:

Go to the Kamailio installation directory and then to the RTPProxy module. Run the make command and install the proxy engine:

cd rtpproxy
./configure && make

Load the module and parameters in the kamailio.cfg file:

listen=udp:<ip>:<port>
..
loadmodule "rtpproxy.so"
..
modparam("rtpproxy", "rtpproxy_sock", "unix:/var/run/rtpproxy/rtpproxy.sock")

Add rtpproxy_manage() for all of the requests and responses in the kamailio.cfg file. An example of rtpproxy_manage for INVITE is:

if (is_method("INVITE")) {
...
rtpproxy_manage();
...
};

Get the source code for the RTP Engine using Git as follows:

git clone https://github.com/sipwise/rtpengine.git

Go to the daemon folder in the installation directory and run the make command as follows:

sudo make

Start rtpengine in the default user-space mode on the local machine:

sudo ./rtpengine --ip=10.1.5.14 --listen-ng=12334

Check that rtpengine is running, using the following command:

ps -ef | grep rtpengine

Note that rtpengine must be installed on the same machine as the Kamailio SIP server. In the case of the sipml5 client, after configuring the modules described in the preceding section and before making a call through the Media Server, the flow for the media will become one of the following:

     In the case of Voicemail/IVR, the flow is as follows:
     WebRTC client to RTP proxy node to Media Server

     In the case of a call through a media relay, the flow is as follows:
     WebRTC client A to RTP proxy node to Media Server to RTP proxy to WebRTC client B

The following diagram shows the MediaProxy relay between WebRTC clients:

The potential of a Media Server lies in its transcoding of various media codecs. Different phones / call clients / software that support SIP as the signaling protocol do not necessarily support the same media codecs. Where a Media Server is absent and the codecs do not match between a caller and a receiver, the attempt to make a call is abruptly terminated when the media exchange needs to take place, that is, after the invite, success, response, and acknowledgement are sent.
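Since the rtpengine launch command shown earlier hardcodes the IP and control port, a small sketch like the following keeps them in shell variables so the same values can be reused when writing the matching kamailio.cfg entry; the IP and port are the example values from the text, not required values:

```shell
# Build the rtpengine launch command from variables so the address and
# ng-control port are declared once (10.1.5.14 and 12334 are examples).
RTP_IP=10.1.5.14
NG_PORT=12334
CMD="rtpengine --ip=$RTP_IP --listen-ng=$NG_PORT"
echo "$CMD"
```

The same variables can then feed the Kamailio side of the configuration, which must point at the identical host and port.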
In the following figure, the setup to traverse media through the FreeSWITCH Media Server and signaling through the Kamailio SIP server is depicted:

The role of the rtpengine (rtpproxy-ng) module is to enable media to pass via the Media Server; this is shown in the following diagram:

WebRTC over firewalls and proxies

There are many complicated issues involved in the correct working of WebRTC across domains, NATs, geographies, and so on. For now, it is important that the firewall of a system, or any kind of port-blocking policy, is turned off to be able to make a successful audio-video WebRTC call between any two parties that are not on the same Local Area Network (LAN). So that the user does not have to switch the firewall off, we need to configure a Simple Traversal of UDP through NAT (STUN) server or modify the Interactive Connectivity Establishment (ICE) parameter in the SDP exchanged. STUN helps in the packet routing of devices behind a NAT firewall; it only helps in device discoverability, by assigning publicly accessible addresses to devices within a private local network. Traversal Using Relay NAT (TURN) servers also serve to accomplish the task of interconnecting the endpoints behind NAT. As the name suggests, TURN forces media to be proxied through the server. To learn more about ICE as a NAT-traversal mechanism, refer to the official document named RFC 5245.

The ICE features are defined by sipML5 in the sipml.js file. They are added to the SIP SDP during the initial phase of setting up the SIP stack. Snippets from the sipml.js file regarding the ICE declaration are given as follows:

var configuration = {
...
websocket_proxy_url: 'ws://192.168.0.10:5060',
outbound_proxy_url: 'udp://192.168.0.12:5060',
ice_servers: [{ url: 'stun:stun.l.google.com:19302'},
  { url: 'turn:user@numb.viagenie.ca', credential: 'myPassword'}],
...
};

Under the postInit function in the call.htm page, add the following function:

oConfigCall = {
...
events_listener: { events: '*', listener: onSipEventSession },
   SIP_caps: [
     { name: '+g.oma.SIP-im' },
     { name: '+SIP.ice' },
     { name: 'language', value: '"en,fr"' }
   ]
};

With this, the WebRTC client is able to reach a client behind a firewall; the media, however, can still display unpredictable behavior. If you need to create your own STUN/TURN server, you can take the help of RFC 5766, or you can refer to open source implementations, such as the project at the following site:

https://code.google.com/p/rfc5766-turn-server/

When setting the parameters for WebRTC, we can add our own STUN/TURN server. The following screenshot shows the inputs suitable for ICE Servers if you are using your own TURN/STUN server:

If there are no firewall restrictions, for example, if the users are on the same network without any corporate proxies and port blocks, we can omit ICE by entering empty brackets, [], in the ICE Servers option on the Expert settings page of the WebRTC client.

The final architecture for the WebRTC-to-IMS integration

At the end of this article, we have arrived at an architecture similar to the following diagram, which depicts a basic WebRTC-to-IMS architecture. The WebRTC client sits in the Transport Layer, as it is the user endpoint. The IMS entities (CSCF and HSS), the WebRTC-to-IMS gateway, and the Media Server nodes are placed in the Network Control Layer, as they help in signal and media routing. The applications for call control are placed in the top-most Application Layer, which processes the call control logic. This architecture serves to provide a basic IMS-based setup for SIP-based WebRTC client interaction.

Summary

In this article, we saw how to interconnect the WebRTC setup with the IMS infrastructure. It included interaction with the CSCF nodes, namely P-CSCF, I-CSCF, and S-CSCF, after building and installing them from their sources.
Also, the FreeSWITCH Media Server was discussed, and the steps to build and integrate it were demonstrated. Call control logic was embedded in the Mobicents Application Server, with Kamailio acting as the SIP server. NAT traversal via STUN/TURN servers was also discussed and its importance highlighted. To deploy a WebRTC solution integrated with the IMS network, we must ensure that all of the required IMS nodes are consulted while making a call, that the values are reflected in the HSS data store, and that incoming SIP requests and responses are routed via the call logic of the Application Server before a call is connected.

Resources for Article:

Further resources on this subject:

Using the WebRTC Data API [Article]
Implementing Stacks using JavaScript [Article]
Applying WebRTC for Education and E-learning [Article]

Packt
29 Oct 2014
21 min read

OpenShift for Java Developers

This article, written by Shekhar Gulati, the author of OpenShift Cookbook, covers how Java developers can use OpenShift to develop and deploy Java applications. It also teaches us how to deploy Java EE 6 and Spring applications on OpenShift. (For more resources related to this topic, see here.)

Creating and deploying Java EE 6 applications using the JBoss EAP and PostgreSQL 9.2 cartridges

Gone are the days when Java EE, or J2EE as it was called in the olden days, was considered evil. Java EE now provides a very productive environment to build web applications. Java EE has embraced convention over configuration and annotations, which means that you are no longer required to maintain XML to configure each and every component. In this article, you will learn how to build a Java EE 6 application and deploy it on OpenShift. This article assumes that you have basic knowledge of Java and Java EE 6. If you are not comfortable with Java EE 6, please read the official tutorial at http://docs.oracle.com/javaee/6/tutorial/doc/.

In this article, you will build a simple job portal that will allow users to post job openings and view a list of all the persisted jobs in the system. These two functionalities will be exposed using two REST endpoints. The source code for the application created in this article is on GitHub at https://github.com/OpenShift-Cookbook/chapter7-jobstore-javaee6-simple. The example application that you will build in this article is a simple version of the jobstore application, with only a single domain class and without any application interface. You can get the complete jobstore application source code on GitHub as well, at https://github.com/OpenShift-Cookbook/chapter7-jobstore-javaee6.

Getting ready

To complete this article, you will need the rhc command-line client installed on your machine. Also, you will need an IDE to work with the application code.
The recommended IDE to work with OpenShift is Eclipse Luna, but you can also work with other IDEs, such as IntelliJ IDEA and NetBeans. Download and install the Eclipse IDE for Java EE developers from the official website at https://www.eclipse.org/downloads/.

How to do it…

Perform the following steps to create the jobstore application:

Open a new command-line terminal and go to a convenient location. Create a new JBoss EAP application by executing the following command:

$ rhc create-app jobstore jbosseap-6

The preceding command will create a Maven project and clone it to your local machine. Change the directory to jobstore, and execute the following command to add the PostgreSQL 9.2 cartridge to the application:

$ rhc cartridge-add postgresql-9.2

Open Eclipse and navigate to the project workspace. Then, import the application created in step 1 as a Maven project. To import an existing Maven project, navigate to File | Import | Maven | Existing Maven Projects, and then navigate to the location of your OpenShift Maven application created in step 1. Next, update pom.xml to use Java 7. The Maven project created by OpenShift is configured to use JDK 6; replace the properties with the ones shown in the following code:

<maven.compiler.source>1.7</maven.compiler.source>
<maven.compiler.target>1.7</maven.compiler.target>

Update the Maven project to allow the changes to take effect. You can update the Maven project by right-clicking on the project and navigating to Maven | Update Project. Now, let us write the domain classes for our application. Java EE uses JPA to define the data model and manage entities. The application has one domain class: Job. Create a new package called org.osbook.jobstore.domain, and then create a new Java class called Job inside it.
Have a look at the following code:

@Entity
public class Job {

  @Id
  @GeneratedValue(strategy = GenerationType.AUTO)
  private Long id;

  @NotNull
  private String title;

  @NotNull
  @Size(max = 4000)
  private String description;

  @Column(updatable = false)
  @Temporal(TemporalType.DATE)
  @NotNull
  private Date postedAt = new Date();

  @NotNull
  private String company;

  // setters and getters removed for brevity

}

Create a META-INF folder at src/main/resources, and then create a persistence.xml file with the following code:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
  version="2.0">

  <persistence-unit name="jobstore" transaction-type="JTA">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <jta-data-source>java:jboss/datasources/PostgreSQLDS</jta-data-source>
    <exclude-unlisted-classes>false</exclude-unlisted-classes>
    <properties>
      <property name="hibernate.show_sql" value="true" />
      <property name="hibernate.hbm2ddl.auto" value="update" />
    </properties>
  </persistence-unit>

</persistence>

Now, we will create the JobService class that will use the JPA EntityManager API to work with the database. Create a new package called org.osbook.jobstore.services, and create a new Java class as shown in the following code. It defines the save and findAll operations on the Job entity.
@Stateless
public class JobService {

  @PersistenceContext(unitName = "jobstore")
  private EntityManager entityManager;

  public Job save(Job job) {
    entityManager.persist(job);
    return job;
  }

  public List<Job> findAll() {
    return entityManager
      .createQuery("SELECT j from org.osbook.jobstore.domain.Job j order by j.postedAt desc", Job.class)
      .getResultList();
  }
}

Next, enable Contexts and Dependency Injection (CDI) in the jobstore application by creating a file with the name beans.xml in the src/main/webapp/WEB-INF directory as follows:

<?xml version="1.0"?>
<beans xmlns="http://java.sun.com/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://jboss.org/schema/cdi/beans_1_0.xsd"/>

The jobstore application will expose a REST JSON web service. Before you can write the JAX-RS resources, you have to configure JAX-RS in your application. Create a new package called org.osbook.jobstore.rest and a new class called RestConfig, as shown in the following code:

@ApplicationPath("/api/v1")
public class RestConfig extends Application {
}

Create a JAX-RS resource to expose the create and findAll operations of JobService as REST endpoints as follows:

@Path("/jobs")
public class JobResource {

  @Inject
  private JobService jobService;

  @POST
  @Consumes(MediaType.APPLICATION_JSON)
  public Response createNewJob(@Valid Job job) {
    job = jobService.save(job);
    return Response.status(Status.CREATED).build();
  }

  @GET
  @Produces(MediaType.APPLICATION_JSON)
  public List<Job> showAll() {
    return jobService.findAll();
  }
}

Commit the code, and push it to the OpenShift application as shown in the following commands:

$ git add .
$ git commit -am "jobstore application created"
$ git push

After the build finishes successfully, the application will be accessible at http://jobstore-{domain-name}.rhcloud.com. Please replace {domain-name} with your own domain name. To test the REST endpoints, you can use curl, a command-line tool for transferring data across various protocols.
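Because we will issue several such curl requests while testing, it can help to wrap the POST in a small shell function. The sketch below only prints the command for review rather than executing it; DOMAIN is a placeholder for your own OpenShift domain, and the job fields are examples:

```shell
# Print (not execute) the curl command that posts a job to the REST API.
# $1 = job title (reused as the description here), $2 = company name
post_job() {
  printf "curl -i -X POST -H 'Content-Type: application/json' -d '{\"title\":\"%s\",\"description\":\"%s\",\"company\":\"%s\"}' http://jobstore-%s.rhcloud.com/api/v1/jobs\n" "$1" "$1" "$2" "$DOMAIN"
}

DOMAIN=mydomain   # placeholder; substitute your own domain name
post_job "OpenShift Evangelist" "Red Hat"
```

Piping the printed command to sh (once DOMAIN is real) performs the actual request.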
To create a new job, you will run the following curl command:

$ curl -i -X POST -H "Content-Type: application/json" -H "Accept: application/json" -d '{"title":"OpenShift Evangelist","description":"OpenShift Evangelist","company":"Red Hat"}' http://jobstore-{domain-name}.rhcloud.com/api/v1/jobs

To view all the jobs, you can run the following curl command:

$ curl http://jobstore-{domain-name}.rhcloud.com/api/v1/jobs

How it works…

In the preceding steps, we created a Java EE application and deployed it on OpenShift. In step 1, you used the rhc create-app command to create a JBoss EAP web cartridge application. The rhc command-line tool makes a request to the OpenShift broker and asks it to create a new application using the JBoss EAP cartridge. Every OpenShift web cartridge specifies a template application that will be used as the default source code of the application. For Java web cartridges (JBoss EAP, JBoss AS7, Tomcat 6, and Tomcat 7), the template is a Maven-based application. After the application is created, it is cloned to the local machine using Git. The directory structure of the application is shown in the following command:

$ ls -a
.git .openshift README.md pom.xml deployments src

As you can see in the preceding output, apart from the .git and .openshift directories, this looks like a standard Maven project. OpenShift uses Maven to manage application dependencies and build your Java applications. Let us take a look at what's inside the jobstore directory to better understand the layout of the application:

The src directory: This directory contains the source code for the template application generated by OpenShift. You need to add your application source code here. The src folder helps in achieving source code deployment when following the standard Maven directory conventions.

The pom.xml file: The Java applications created by OpenShift are Maven-based projects, so a pom.xml file is required when you do source code deployment on OpenShift.
This pom.xml file has a profile called openshift, which will be executed when you push code to OpenShift, as shown in the following code. This profile will create a ROOT WAR file based on your application source code:

<profiles>
  <profile>
    <id>openshift</id>
    <build>
      <finalName>jobstore</finalName>
      <plugins>
        <plugin>
          <artifactId>maven-war-plugin</artifactId>
          <version>2.1.1</version>
          <configuration>
            <outputDirectory>deployments</outputDirectory>
            <warName>ROOT</warName>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

The deployments directory: You should use this directory if you want to do binary deployments on OpenShift, that is, if you want to deploy a WAR or EAR file directly instead of pushing the source code.

The .git directory: This is a local Git repository. This directory contains the complete history of the repository. The config file in .git/ contains the configuration for the repository. It defines a Git remote named origin that points to the OpenShift application gear SSH URL. This makes sure that when you do git push, the source code is pushed to the remote Git repository hosted on your application gear. You can view the details of the origin Git remote by executing the following command:

$ git remote show origin

The .openshift directory: This is an OpenShift-specific directory, which can be used for the following purposes:

The files under the action_hooks subdirectory allow you to hook onto the application lifecycle.

The files under the config subdirectory allow you to make changes to the JBoss EAP configuration. The directory contains the standalone.xml JBoss EAP-specific configuration file.

The files under the cron subdirectory are used when you add the cron cartridge to your application. This allows you to run scripts or jobs on a periodic basis.

The files under the markers subdirectory allow you to specify whether you want to use Java 6 or Java 7, whether you want to do hot deployment or debug the application running in the cloud, and so on.
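The markers mentioned above are plain empty files; their presence alone changes the gear's behavior on the next push. A quick sketch, using a throwaway directory instead of a real application checkout (java7 and hot_deploy are commonly used marker names):

```shell
# Marker files are empty; creating one under .openshift/markers toggles
# the corresponding behavior on the next git push.
mkdir -p /tmp/app/.openshift/markers
touch /tmp/app/.openshift/markers/java7      # build and run with Java 7
touch /tmp/app/.openshift/markers/hot_deploy # deploy without a full restart
ls /tmp/app/.openshift/markers
```

After creating a marker in a real checkout, commit it with git add and push for it to take effect.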
In step 2, you added the PostgreSQL 9.2 cartridge to the application using the rhc cartridge-add command. We will use the PostgreSQL database to store the jobstore application data. Then, in step 3, you imported the project into the Eclipse IDE as a Maven project. Eclipse has inbuilt support for Maven applications, which makes it easier to work with Maven-based applications. From step 3 through step 5, you updated the project to use JDK 1.7 for the Maven compiler plugin. All the OpenShift Java applications use OpenJDK 7, so it makes sense to update the application to also use JDK 1.7 for compilation.

In step 6, you created the Job domain class and annotated it with JPA annotations. The @Entity annotation marks the class as a JPA entity. An entity represents a table in the relational database, and each entity instance corresponds to a row in the table. Entity class fields represent the persistent state of the entity. You can learn more about JPA by reading the official documentation at http://docs.oracle.com/javaee/6/tutorial/doc/bnbpz.html.

The @NotNull and @Size annotations are Bean Validation annotations. Bean Validation is a new validation model available as a part of the Java EE 6 platform. The @NotNull annotation adds a constraint that the value of the field must not be null; if the value is null, an exception will be raised. The @Size annotation adds a constraint that the value must match the specified minimum and maximum boundaries. You can learn more about Bean Validation by reading the official documentation at http://docs.oracle.com/javaee/6/tutorial/doc/gircz.html.

In JPA, entities are managed within a persistence context. Within the persistence context, the entity manager manages the entities. The configuration of the entity manager is defined in a standard configuration XML file called persistence.xml. In step 7, you created the persistence.xml file. The most important configuration option is the jta-data-source tag.
It points to java:jboss/datasources/PostgreSQLDS. When a user creates a JBoss EAP 6 application, OpenShift defines a PostgreSQL datasource in the standalone.xml file. The standalone.xml file is a JBoss configuration file that includes the technologies required by the Java EE 6 full profile specification plus Java Connector 1.6 architecture, Java API for RESTful Web Services, and OSGi. Developers can override the configuration by making changes to the standalone.xml file in the .openshift/config location of your application directory. So, if you open the standalone.xml file in .openshift/config/ in your favorite editor, you will find the following PostgreSQL datasource configuration:

<datasource jndi-name="java:jboss/datasources/PostgreSQLDS" enabled="${postgresql.enabled}" use-java-context="true" pool-name="PostgreSQLDS" use-ccm="true">
    <connection-url>jdbc:postgresql://${env.OPENSHIFT_POSTGRESQL_DB_HOST}:${env.OPENSHIFT_POSTGRESQL_DB_PORT}/${env.OPENSHIFT_APP_NAME}</connection-url>
    <driver>postgresql</driver>
    <security>
        <user-name>${env.OPENSHIFT_POSTGRESQL_DB_USERNAME}</user-name>
        <password>${env.OPENSHIFT_POSTGRESQL_DB_PASSWORD}</password>
    </security>
    <validation>
        <check-valid-connection-sql>SELECT 1</check-valid-connection-sql>
        <background-validation>true</background-validation>
        <background-validation-millis>60000</background-validation-millis>
        <!-- <validate-on-match>true</validate-on-match> -->
    </validation>
    <pool>
        <flush-strategy>IdleConnections</flush-strategy>
        <allow-multiple-users />
    </pool>
</datasource>

In step 8, you created stateless Enterprise JavaBeans (EJBs) for our application service layer. The service classes work with the EntityManager API to perform operations on the Job entity. In step 9, you configured CDI by creating the beans.xml file in the src/main/webapp/WEB-INF directory. We are using CDI in our application so that we can use dependency injection instead of manually creating the objects ourselves.
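As an illustration of the service layer from step 8 and the JAX-RS resources from steps 10 and 11, a minimal sketch might look like the following. The class names, method names, and the /jobs URL path are assumptions for illustration and may differ from the actual jobstore source:

```java
import java.util.List;

import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Stateless EJB service that works with the EntityManager
// to perform operations on the Job entity (names assumed).
@Stateless
public class JobService {

    @PersistenceContext
    private EntityManager entityManager;

    public void save(Job job) {
        entityManager.persist(job);
    }

    public List<Job> findAll() {
        return entityManager.createQuery("SELECT j FROM Job j", Job.class)
                            .getResultList();
    }
}

// JAX-RS resource exposing the Job entity over REST. With /api/v1 as
// the application path, this resource is served at /api/v1/jobs
// (resource path assumed for illustration).
@Path("/jobs")
@Produces(MediaType.APPLICATION_JSON)
class JobResource {

    @Inject
    private JobService jobService;

    @GET
    public List<Job> list() {
        return jobService.findAll();
    }
}
```

Note that the EJB is injected into the resource via CDI rather than instantiated manually, which is exactly why the empty beans.xml marker file from step 9 is needed.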
The CDI container will manage the bean life cycle, and the developer just has to write the business logic. To let the JBoss application server know that we are using CDI, we need to create a file called beans.xml in our WEB-INF directory. The file can be completely blank, but its presence tells the container that the CDI framework needs to be loaded.

In step 10 and step 11, you configured JAX-RS and defined the REST resources for the Job entity. You activated JAX-RS by creating a class that extends javax.ws.rs.core.Application. You need to specify the base URL under which your web service will be available; this is done by annotating the RestConfig class with the @ApplicationPath annotation. You used /api/v1 as the application path.

In step 12, you added and committed the changes to the local repository and then pushed them to the application gear. After the bits are pushed, OpenShift will stop all the cartridges and then invoke the mvn -e clean package -Popenshift -DskipTests command to build the project. Maven will build a ROOT.war file, which will be copied to the JBoss EAP deployments folder. After the build finishes successfully, all the cartridges are started, and the new ROOT.war file is deployed. You can view the running application at http://jobstore-{domain-name}.rhcloud.com. Please replace {domain-name} with your account domain name. Finally, you tested the REST endpoints using curl in step 14.

There's more…

You can perform all the aforementioned steps with just a single command as follows:

$ rhc create-app jobstore jbosseap postgresql-9.2 --from-code https://github.com/OpenShift-Cookbook/chapter7-jobstore-javaee6-simple.git --timeout 180

Configuring application security by defining the database login module in standalone.xml

The application allows you to create company entities and then assign jobs to them. The problem with the application is that it is not secured.
The Java EE specification defines a simple, role-based security model for EJBs and web components. JBoss security is an extension to the application server and is included by default with your OpenShift JBoss applications. You can view the extension in the JBoss standalone.xml configuration file, which exists in the .openshift/config location. The following code shows the extension:

<extension module="org.jboss.as.security" />

OpenShift allows developers to update the standalone.xml configuration file to meet their application's needs. You make a change to the standalone.xml configuration file, commit the change to the local Git repository, and then push the changes to the OpenShift application gear. After a successful build, OpenShift replaces the existing standalone.xml file with your updated configuration file and then starts the server. Please make sure that your changes are valid; otherwise, the application will fail to start. In this article, you will learn how to define the database login module in standalone.xml to authenticate users before they can perform any operation with the application. The source code for the application created in this article is available on GitHub at https://github.com/OpenShift-Cookbook/chapter7-jobstore-security.

Getting ready

This article builds on the Java EE 6 application.

How to do it…

Perform the following steps to add security to your web application:

Create the OpenShift application using the following command:

$ rhc create-app jobstore jbosseap postgresql-9.2 --from-code https://github.com/OpenShift-Cookbook/chapter7-jobstore-javaee6-simple.git --timeout 180

After the application creation, SSH into the application gear, and connect with the PostgreSQL database using the psql client.
Then, create the following tables and insert the test data:

$ rhc ssh
$ psql
jobstore=# CREATE TABLE USERS(email VARCHAR(64) PRIMARY KEY, password VARCHAR(64));
jobstore=# CREATE TABLE USER_ROLES(email VARCHAR(64), role VARCHAR(32));
jobstore=# INSERT into USERS values('admin@jobstore.com', 'ISMvKXpXpadDiUoOSoAfww==');
jobstore=# INSERT into USER_ROLES values('admin@jobstore.com', 'admin');

Exit from the SSH shell, and open the standalone.xml file in the .openshift/config directory. Update the security domain with the following code:

<security-domain name="other" cache-type="default">
    <authentication>
        <login-module code="Remoting" flag="optional">
            <module-option name="password-stacking" value="useFirstPass" />
        </login-module>
        <login-module code="Database" flag="required">
            <module-option name="dsJndiName" value="java:jboss/datasources/PostgreSQLDS" />
            <module-option name="principalsQuery" value="select password from USERS where email=?" />
            <module-option name="rolesQuery" value="select role, 'Roles' from USER_ROLES where email=?" />
            <module-option name="hashAlgorithm" value="MD5" />
            <module-option name="hashEncoding" value="base64" />
        </login-module>
    </authentication>
</security-domain>

Create the web deployment descriptor (that is, web.xml) in the src/main/webapp/WEB-INF folder.
Add the following content to it:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">

    <security-constraint>
        <web-resource-collection>
            <web-resource-name>WebAuth</web-resource-name>
            <description>application security constraints</description>
            <url-pattern>/*</url-pattern>
            <http-method>GET</http-method>
            <http-method>POST</http-method>
        </web-resource-collection>
        <auth-constraint>
            <role-name>admin</role-name>
        </auth-constraint>
    </security-constraint>

    <login-config>
        <auth-method>FORM</auth-method>
        <realm-name>jdbcRealm</realm-name>
        <form-login-config>
            <form-login-page>/login.html</form-login-page>
            <form-error-page>/error.html</form-error-page>
        </form-login-config>
    </login-config>

    <security-role>
        <role-name>admin</role-name>
    </security-role>

</web-app>

Create the login.html file in the src/main/webapp directory. The login.html page will be used for user authentication. The following code shows the contents of this file:

<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
    <title>Login</title>
    <link href="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.1.1/css/bootstrap.css" rel="stylesheet">
</head>
<body>
    <div class="container">
        <form class="form-signin" role="form" method="post" action="j_security_check">
            <h2 class="form-signin-heading">Please sign in</h2>
            <input type="text" id="j_username" name="j_username" class="form-control" placeholder="Email address" required autofocus>
            <input type="password" id="j_password" name="j_password" class="form-control" placeholder="Password" required>
            <button class="btn btn-lg btn-primary btn-block" type="submit">Sign in</button>
        </form>
    </div>
</body>
</html>

Create an error.html file in the src/main/webapp directory. The error.html page will be shown after unsuccessful authentication.
The following code shows the contents of this file:

<!DOCTYPE html>
<html>
<head>
    <meta charset="US-ASCII">
    <title>Error page</title>
</head>
<body>
    <h2>Incorrect username/password</h2>
</body>
</html>

Commit the changes, and push them to the OpenShift application gear:

$ git add .
$ git commit -am "enabled security"
$ git push

Go to the application page at http://jobstore-{domain-name}.rhcloud.com, and you will be asked to log in before you can view the application. Use admin@jobstore.com/admin as the username/password combination to log in to the application.

How it works…

Let's now understand what you did in the preceding steps. In step 1, you recreated the jobstore application we developed previously. Next, in step 2, you SSHed into the application gear and created the USERS and USER_ROLES tables. These tables will be used by the JBoss database login module to authenticate users. As our application does not have user registration functionality, we created a default user for the application. Storing the password as a clear text string is bad practice, so we stored the MD5 hash of the password instead. The MD5 hash of the admin password is ISMvKXpXpadDiUoOSoAfww==. If you want to generate the hashed password in your application, I have included a simple Java class, which uses org.jboss.crypto.CryptoUtil to generate the MD5 hash of any string. The CryptoUtil class is part of the picketbox library. The following code depicts this:

import org.jboss.crypto.CryptoUtil;

public class PasswordHash {

    public static String getPasswordHash(String password) {
        return CryptoUtil.createPasswordHash("MD5", CryptoUtil.BASE64_ENCODING, null, null, password);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(getPasswordHash("admin"));
    }
}

In step 3, you logged out of the SSH session and updated the standalone.xml JBoss configuration file with the database login module configuration.
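If you prefer not to depend on the picketbox library, the same MD5 hash with Base64 encoding can be produced with the JDK alone. This is an alternative sketch, not the class shipped with the book's source; it matches the hashAlgorithm=MD5 and hashEncoding=base64 options configured in the login module:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Computes the MD5 digest of a password and Base64-encodes it,
// producing the format the database login module expects.
public class JdkPasswordHash {

    public static String hash(String password) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest(password.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }

    public static void main(String[] args) throws Exception {
        // Matches the value inserted into the USERS table for the admin user
        System.out.println(hash("admin")); // ISMvKXpXpadDiUoOSoAfww==
    }
}
```

Running this class prints exactly the hash stored in the USERS table, so you can use it to generate password values for additional users.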
There are several login module implementations available out of the box. This article only covers the database login module, as discussing all the modules is outside its scope. You can read about all the login modules at https://docs.jboss.org/author/display/AS7/Security+subsystem+configuration. The database login module checks the user credentials against a relational database. To configure the database login module, you have to specify a few configuration options. The dsJndiName option specifies the application datasource; as we are using the configured PostgreSQL datasource for our application, you specified its JNDI name as the dsJndiName value. Next, you specified the SQL queries used to fetch the user and its roles. Then, you specified that passwords are hashed with the MD5 algorithm via the hashAlgorithm configuration.

In step 4, you applied the database login module to the jobstore application by defining the security constraints in web.xml. This configuration adds a security constraint on all the web resources of the application, restricting access to authenticated users with the admin role. You also configured your application to use FORM-based authentication. This makes sure that when unauthenticated users visit the website, they are redirected to the login.html page created in step 5. If a user enters a wrong e-mail/password combination, they are redirected to the error.html page created in step 6. Finally, in step 7, you committed the changes to the local Git repository and pushed them to the application gear. OpenShift will make sure that the JBoss EAP application server uses the updated standalone.xml configuration file. Now, users will be asked to authenticate before they can work with the application.

Summary

This article showed us how to configure application security.
In this article, we also learned about the different ways in which Java applications can be developed on OpenShift. The article explained the database login module.

Further resources on this subject:

- Using OpenShift [Article]
- Troubleshooting [Article]
- Schemas and Models [Article]