
How-To Tutorials


Polygon Construction - Make a 3D printed kite

Michael Ang
30 Oct 2014
9 min read
3D printers are incredible machines, but let's face it, it takes them a long time to work their magic! Printing small objects takes a relatively short amount of time, but larger objects can take hours and hours to print. Is there a way we can get the speed of printing small objects while still making something big—even bigger than our printer can make in one piece? This tutorial shows a technique I'm calling "polygon construction", where you 3D print connectors and attach them with rods to make a larger structure. This technique is the basis for my Polygon Construction Kit (Polycon).

[Image: A Polycon object, with 3D printer for scale]

I'm going to start simple, showing how even one connector can form the basis of a rather delightful object—a simple flying kite! The kite we're making today is a version of the Eddy diamond kite, originally invented in the 1890s by William Eddy. This classic diamond kite is easy to make and flies well. We'll design and print the central connector for the kite and use wooden rods to extend the shape. The finished kite is 50 centimeters (about 20 inches) tall and wide, which is bigger than most print beds. We'll design the connector so that it's parametric—we'll be able to change its important dimensions just by changing a few numbers. Because the connector is a small object to print, and the design is parametric, we'll be able to iterate quickly if we want to make changes.

[Image: A finished kite]

The connector we need is a cross that holds two of the "arms" up at an angle. This angle is called the dihedral angle, and it is one of the secrets to why the Eddy kite flies well. We'll use the OpenSCAD modeling program to create the connector. OpenSCAD lets you create solid objects for printing by combining simple shapes such as cylinders and boxes using a basic programming language. It's open source and multi-platform, and you can download it from http://openscad.org.

Open OpenSCAD. You should have a blank document. Let's set up a few variables to represent the important dimensions of our connector, and make its first part: a cylinder that will be one of the four "arms" of the cross. Go to Design->Compile to see the result.

rod_diameter = 4; // in millimeters
wall_thickness = 2;
tube_length = 20;
angle = 15; // degrees

cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);

[Image: First part of the connector]

Now let's add the same shape, but translated down. You can see the axis indicator in the lower-left corner of the output window. The blue axis pointing up is the Z-axis, so we want to move down (negative) along the Z-axis. Add this line to your file and recompile (Design->Compile):

translate([0,0,-tube_length]) cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);

[Image: Second part of the main tube]

Now that we have the long straight part of our connector, let's add the angled arms. We want the angled part of the connector to be 90 degrees from the straight part, and then rotated by our dihedral angle (in this case, 15 degrees). In OpenSCAD, a rotation is specified as rotations around the X, Y, and then Z axes. If you look at the axis indicator, you can see that a rotation of 90 degrees around the Y-axis, followed by a rotation of 15 degrees around the Z-axis, will get us to the right place. Here's the code:

rotate([0,90,angle]) cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);

[Image: First angled part]
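The rotation reasoning is easy to check numerically. Here is a quick sanity check, written in Python with NumPy purely as an illustration (it's not part of the kite files): it applies the same Y-then-Z rotation to the +Z direction that an OpenSCAD cylinder grows along, and prints where the arm ends up.

# Reproduce OpenSCAD's rotate([0, 90, angle]) by hand. OpenSCAD applies
# the X rotation first, then Y, then Z, to a cylinder that grows along +Z.
import numpy as np

def rot_y(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), 0, np.sin(t)],
                     [0, 1, 0],
                     [-np.sin(t), 0, np.cos(t)]])

def rot_z(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t), np.cos(t), 0],
                     [0, 0, 1]])

angle = 15
arm = rot_z(angle) @ rot_y(90) @ np.array([0.0, 0.0, 1.0])
print(np.round(arm, 3))  # [0.966 0.259 0.], i.e. 15 degrees from the X-axis

The arm ends up tilted 15 degrees away from the X-axis, which is exactly the dihedral we asked for; the mirrored arm we add next, at 180 minus the angle, lands at the same tilt on the opposite side.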
Let's do the same thing, but for the other side. Instead of rotating by 15 degrees, we'll rotate by 180 degrees and then subtract out the 15 degrees to put the new cylinder on the opposite side:

rotate([0,90,180-angle]) cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);

[Image: Opposite angled part]

Awesome, we have the shape of our connector! There's only one problem: how do we make the holes for the rods to go in? To do this, we'll make the same shape, but a little smaller, and then subtract it out of the shape we already made. OpenSCAD supports Boolean operations on shapes, and in this case the Boolean operation we want is difference. To make the Boolean operation easier, we'll group the different parts of the shape together by putting them into modules. Once we have the parts we want together in modules, we can take the difference of the two modules. Here's the complete new version:

rod_diameter = 4; // in millimeters
wall_thickness = 2;
tube_length = 20;
angle = 15; // degrees

// Connector as a solid object (no holes)
module solid() {
    cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);
    translate([0,0,-tube_length]) cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);
    rotate([0,90,angle]) cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);
    rotate([0,90,180-angle]) cylinder(r = rod_diameter + wall_thickness * 2, h = tube_length);
}

// Object representing the space for the rods.
module hole_cutout() {
    cut_overlap = 0.2; // Extra length to make a clean cut out of the main shape
    cylinder(r = rod_diameter, h = tube_length + cut_overlap);
    translate([0,0,-tube_length-cut_overlap]) cylinder(r = rod_diameter, h = tube_length + cut_overlap);
    rotate([0,90,angle]) cylinder(r = rod_diameter, h = tube_length + cut_overlap);
    rotate([0,90,180-angle]) cylinder(r = rod_diameter, h = tube_length + cut_overlap);
}

difference() {
    solid();
    hole_cutout();
}

[Image: Completed connector]

We've finished modeling our kite connector! But what if our rod isn't 4 mm in diameter? What if it's 1/8"? Since we've written a program to describe our kite connector, making the change is easy: we can change the parameters at the beginning of the file to change the shape of the connector. There are 25.4 millimeters in an inch, and OpenSCAD can do the math for us to convert from inches to millimeters. Let's change the rod diameter to 1/8" and also change the dihedral angle, so there's a more visible change. Change the parameters at the top of the file and recompile (Design->Compile):

rod_diameter = 1/8 * 25.4; // inches to millimeters
wall_thickness = 2;
tube_length = 20;
angle = 20; // degrees

[Image: Different angle and rod diameter]

Now you start to see the power of using a parametric model—making a new connector can be as simple as changing a few numbers and recompiling the design. To get the model ready for printing, change the rod diameter to the size of the rod you actually have, and change the angle back to 15 degrees. Now go to Design->Compile and Render so that the connector is fully rendered inside OpenSCAD, then go to File->Export->Export as STL and save the file.

Open the .stl file in the software for your 3D printer. I have a Prusa i3 Berlin RepRap printer and use Cura as my printer software, but almost any printer/software combination should work. You may want to rotate the part so that it doesn't need as much support under the overhanging arms, but be aware that the orientation of the layers will affect the final strength (if the tube breaks, it's almost always from the layers splitting apart).
It's worth experimenting with the orientation of the part to increase its strength. Orienting the layers slightly at an angle to the length of the tube seems to give the best strength.

[Image: Cura default part orientation]
[Image: Part reoriented for printing, showing layers]

Print your connector and see if it fits on your rods. You may need to adjust the size a little to get a tight fit. Since the model is parametric, adjusting the size of the connector should take just a few minutes! To get the most strength in your print, you can make multiple prints of the connector with different settings (temperature, wall thickness, and so on) and see how much force it takes to break them. This is a good technique in general for getting strong prints.

[Image: A printed connector]

Kite dimensions

Now that we have the connector printed, we need to finish off the rest of the kite. You can find full instructions on making an Eddy kite elsewhere, but here's the short version. I built this kite by taking a 1 m long, 4 mm diameter wooden rod from a kite store and cutting it into one 50 cm piece and two 25 cm pieces. The center connector goes 10 cm from the top of the long rod. For the "sail", paper does fine (cut to fit the frame, making sure that the sail is symmetrical), and you can simply tape the paper to the rods. Tie a piece of string about 80 cm long between the center connector and a point 4 cm from the tail to make a bridle. To find the right place to tie on the long flying line, take the kite out on a breezy day and hold it by the bridle, moving your hand up and down until you find a spot where the kite doesn't try to fly up too much or fall back down. That's the spot to tie on your long flying line. If the kite is unstable while flying, you can add a long tail, but I haven't found it to be necessary (though it adds to the classic look).

[Image: Assembled kite]
[Image: Back side of kite, showing printed connector]

Being able to print your own kite parts makes it easy to experiment. If you want to try a different dihedral angle, just print a new center connector. It's quite a sight to see your kite flying high up in the sky, held together by a part you printed yourself. You can download a slightly different version of this code that includes additional bracing between the angled arms at http://www.thingiverse.com/thing:415345. For an idea of what's possible using the "polygon construction" technique, have a look at my Polygon Construction Kit for some examples of larger structures with multiple connectors. Happy flying!

[Image: Sky high]

About the Author

Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. His latest project is the Polygon Construction Kit, a toolkit for bridging the virtual and physical worlds by translating simple 3D models into physical structures.


Hosting the service in IIS using the TCP protocol

Packt
30 Oct 2014
8 min read
In this article by Mike Liu, the author of WCF Multi-layer Services Development with Entity Framework, Fourth Edition, we will learn how to create and host a service in IIS using the TCP protocol.

Hosting WCF services in IIS using the HTTP protocol gives the best interoperability to the service, because the HTTP protocol is supported everywhere today. However, sometimes interoperability might not be an issue. For example, the service may be invoked only within your network, with all-Microsoft clients only. In this case, hosting the service using the TCP protocol might be a better solution.

Benefits of hosting a WCF service using the TCP protocol

Compared to HTTP, there are a few benefits to hosting a WCF service using the TCP protocol:

- It supports connection-based, stream-oriented delivery services with end-to-end error detection and correction
- It is the fastest WCF binding for scenarios that involve communication between different machines
- It supports duplex communication, so it can be used to implement duplex contracts
- It has a reliable data delivery capability (this is applied between two TCP/IP nodes and is not the same thing as WS-ReliableMessaging, which applies between endpoints)

Preparing the folders and files

First, we need to prepare the folders and files for the host application, just as we did for hosting the service using the HTTP protocol. We will use the previous HTTP hosting application as the base to create the new TCP hosting application:

1. Create the folders: In Windows Explorer, create a new folder called HostIISTcp under C:\SOAwithWCFandEF\Projects\HelloWorld, and a new subfolder called bin under the HostIISTcp folder. You should now have the following new folders: C:\SOAwithWCFandEF\Projects\HelloWorld\HostIISTcp and a bin folder inside the HostIISTcp folder.

2. Copy the files: Now, copy all the files from the HostIIS hosting application folder at C:\SOAwithWCFandEF\Projects\HelloWorld\HostIIS to the new folder that we created at C:\SOAwithWCFandEF\Projects\HelloWorld\HostIISTcp.

3. Create the Visual Studio solution folder: To make it easier to view and manage from the Visual Studio Solution Explorer, you can add a new solution folder, HostIISTcp, to the solution and add the Web.config file to this folder. Add another new solution folder, bin, under HostIISTcp and add the HelloWorldService.dll and HelloWorldService.pdb files under this bin folder. Add the following post-build events to the HelloWorldService project, so that next time all the files will be copied automatically when the service project is built:

xcopy "$(AssemblyName).dll" "C:\SOAwithWCFandEF\Projects\HelloWorld\HostIISTcp\bin" /Y
xcopy "$(AssemblyName).pdb" "C:\SOAwithWCFandEF\Projects\HelloWorld\HostIISTcp\bin" /Y

4. Modify the Web.config file: The Web.config file that we copied from HostIIS uses the default basicHttpBinding as the service binding. To make our service use the TCP binding, we need to change the binding to TCP and add a TCP base address. Open the Web.config file and add the following node under the <system.serviceModel> node:

<services>
  <service name="HelloWorldService.HelloWorldService">
    <endpoint address="" binding="netTcpBinding"
        contract="HelloWorldService.IHelloWorldService"/>
    <host>
      <baseAddresses>
        <add baseAddress="net.tcp://localhost/HelloWorldServiceTcp/"/>
      </baseAddresses>
    </host>
  </service>
</services>

In this new services node, we have defined one service called HelloWorldService.HelloWorldService.
The base address of this service is net.tcp://localhost/HelloWorldServiceTcp/. Remember, we have defined the host activation relative address as ./HelloWorldService.svc, so we can invoke this service from the client application with the following URL: http://localhost/HelloWorldServiceTcp/HelloWorldService.svc.

For file-less WCF activation, if no endpoint is defined explicitly, HTTP and HTTPS endpoints will be defined by default. In this example, we would like to expose only one TCP endpoint, so we have added an endpoint explicitly (as soon as this endpoint is added explicitly, the default endpoints will not be added). If you don't add this TCP endpoint explicitly here, the TCP client that we will create in the next section will still work, but in the client config file you will see three endpoints instead of one, and you will have to specify which endpoint you are using in the client program.

The following is the full content of the Web.config file:

<?xml version="1.0"?>
<!--
  For more information on how to configure your ASP.NET application,
  please visit http://go.microsoft.com/fwlink/?LinkId=169433
-->
<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.5"/>
    <httpRuntime targetFramework="4.5" />
  </system.web>
  <system.serviceModel>
    <serviceHostingEnvironment>
      <serviceActivations>
        <add factory="System.ServiceModel.Activation.ServiceHostFactory"
            relativeAddress="./HelloWorldService.svc"
            service="HelloWorldService.HelloWorldService"/>
      </serviceActivations>
    </serviceHostingEnvironment>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <serviceMetadata httpGetEnabled="true"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>
    <services>
      <service name="HelloWorldService.HelloWorldService">
        <endpoint address="" binding="netTcpBinding"
            contract="HelloWorldService.IHelloWorldService"/>
        <host>
          <baseAddresses>
            <add baseAddress="net.tcp://localhost/HelloWorldServiceTcp/"/>
          </baseAddresses>
        </host>
      </service>
    </services>
  </system.serviceModel>
</configuration>

Enabling the TCP WCF activation for the host machine

By default, the TCP WCF activation service is not enabled on your machine. This means your IIS server won't be able to host a WCF service with the TCP protocol. You can follow these steps to enable TCP activation for WCF services:

1. Go to Control Panel | Programs | Turn Windows features on or off.
2. Expand the Microsoft .Net Framework 3.5.1 node on Windows 7, or .Net Framework 4.5 Advanced Services on Windows 8.
3. Check the checkbox for Windows Communication Foundation Non-HTTP Activation on Windows 7, or TCP Activation on Windows 8. The first screenshot that follows depicts the options required to enable WCF activation on Windows 7, and the second depicts the options required to enable TCP WCF activation on Windows 8.
4. Repair the .NET Framework: After you have turned on TCP WCF activation, you have to repair .NET. Go to Control Panel, click on Uninstall a Program, select Microsoft .NET Framework 4.5.1, and then click on Repair.

Creating the IIS application

Next, we need to create an IIS application named HelloWorldServiceTcp to host the WCF service using the TCP protocol. Follow these steps to create this application in IIS:

1. Open IIS Manager.
2. Add a new IIS application, HelloWorldServiceTcp, pointing to the HostIISTcp physical folder under your project's folder.
3. Choose DefaultAppPool as the application pool for the new application. Again, make sure your default app pool is a .NET 4.0.30319 application pool.
4. Enable the TCP protocol for the application. Right-click on HelloWorldServiceTcp, select Manage Application | Advanced Settings, and then add net.tcp to Enabled Protocols. Make sure you use all lowercase letters and separate it from the existing HTTP protocol with a comma.

Now the service is hosted in IIS using the TCP protocol. To view the WSDL of the service, browse to http://localhost/HelloWorldServiceTcp/HelloWorldService.svc and you should see the service description and a link to the WSDL of the service.
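If you prefer to script that check rather than use a browser, the short sketch below fetches the WSDL over HTTP and confirms that the net.tcp endpoint is advertised. It's written in Python purely as a neutral illustration (any HTTP client would do), and it assumes the metadata behavior configured in the Web.config above (httpGetEnabled="true"):

# Smoke test: confirm the IIS-hosted service is up and that its
# generated WSDL mentions the net.tcp endpoint we configured.
from urllib.request import urlopen

url = "http://localhost/HelloWorldServiceTcp/HelloWorldService.svc?wsdl"
with urlopen(url) as resp:
    wsdl = resp.read().decode("utf-8", errors="replace")

print("net.tcp" in wsdl)  # expect True once the TCP endpoint is exposed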
Testing the WCF service hosted in IIS using the TCP protocol

Now that we have the service hosted in IIS using the TCP protocol, let's create a new test client to test it:

1. Add a new console application project to the solution, named HelloWorldClientTcp.
2. Add a reference to System.ServiceModel in the new project.
3. Add a service reference to the WCF service in the new project, naming the reference HelloWorldServiceRef and using the URL http://localhost/HelloWorldServiceTcp/HelloWorldService.svc?wsdl. You can still use the SvcUtil.exe command-line tool to generate the proxy and config files for the service hosted with TCP, just as we did in previous sections. In fact, behind the scenes, Visual Studio also calls SvcUtil.exe to generate the proxy and config files.
4. Add the following code to the Main method of the new project:

var client = new HelloWorldServiceRef.HelloWorldServiceClient();
Console.WriteLine(client.GetMessage("Mike Liu"));

5. Finally, set the new project as the startup project.

Now, if you run the program, you will get the same result as before; however, this time the service is hosted in IIS using the TCP protocol.

Summary

In this article, we created and tested an IIS application to host the service with the TCP protocol.


concrete5 – Creating Blocks

Packt
30 Oct 2014
7 min read
In this article by Sufyan bin Uzayr, author of the book concrete5 for Developers, you will be introduced to concrete5. Basically, we will be talking about the creation of concrete5 blocks.

Creating a new block

Creating a new block in concrete5 can be a daunting task for beginners, but once you get the hang of it, the process is pretty simple. For the sake of clarity, we will focus on the creation of a new block from scratch. If you already have some experience with block building in concrete5, you can skip the initial steps of this section. The steps to create a new block are as follows:

1. First, create a new folder within your project's blocks folder. Ideally, the name of the folder should bear relevance to the actual purpose of the block; thus, a slideshow block's folder can be slide. Assuming that we are building a contact form block, let's name our block's folder contact.

2. Next, you need to add a controller class to your block. Again, if you have some level of expertise with concrete5 development, you will already be aware of the meaning and purpose of the controller class. Basically, a controller is used to control the flow of an application; say, it can accept requests from the user, process them, and then prepare the data to present in the result, and so on. For now, we need to create a file named controller.php in our block's folder. For the contact form block, this is how it is going to look (don't forget the PHP tags):

class ContactBlockController extends BlockController {

    protected $btTable = 'btContact';

    /**
     * Used for internationalization (i18n).
     */
    public function getBlockTypeDescription() {
        return t('Display a contact form.');
    }

    public function getBlockTypeName() {
        return t('Contact');
    }

    public function view() {
        // If the block is rendered
    }

    public function add() {
        // If the block is added to a page
    }

    public function edit() {
        // If the block instance is edited
    }
}

The preceding code is pretty simple and follows what has become the norm for block creation in concrete5. Basically, our class extends BlockController, which is responsible for installing the block, saving the data, and rendering templates. The name of the class should be the CamelCase version of the block handle, followed by BlockController. We also need to specify the name of the database table in which the block's data will be saved. More importantly, as you must have noticed, we have three separate functions: view(), add(), and edit(). The roles of these functions have been described earlier.

3. Next, create three files within the block's folder: view.php, add.php, and edit.php (yes, the same names as the functions in our code). The names are self-explanatory: add.php will be used when a new block is added to a given page, edit.php will be used when an existing block is edited, and view.php jumps into action when users view blocks live on the page.

Often, it becomes necessary to have more than one template file within a block. If so, you need to render templates dynamically in order to decide which one to use in a given situation. As discussed in the previous table, the BlockController class has a render($view) method that accepts a single parameter in the form of the template's filename. To do this from controller.php, we can use code as follows:

public function view() {
    if ($this->isPost()) {
        $this->render('block_pb_view');
    }
}

In the preceding example, the file named block_pb_view.php will be rendered instead of view.php.
To reiterate, note that the render($view) method does not require the .php extension with its parameter.

4. Now, it is time to display the contact form. The file in question is view.php, where we can put virtually any HTML or PHP code that suits our needs. For example, in order to display our contact form, we can hardcode the HTML markup or make use of the Form Helper to generate it. A hardcoded version of our contact form might look as follows:

<?php defined('C5_EXECUTE') or die("Access Denied.");
global $c; ?>
<form method="post" action="<?php echo $this->action('contact_submit'); ?>">
    <label for="txtContactTitle">SampleLabel</label>
    <input type="text" name="txtContactTitle" /><br /><br />
    <label for="taContactMessage"></label>
    <textarea name="taContactMessage"></textarea><br /><br />
    <input type="submit" name="btnContactSubmit" />
</form>

Each time the block is displayed, the view() function from controller.php will be called. The action() method in the previous code generates URLs and verifies the submitted values each time a user inputs content in our contact form.

5. Much like any other contact form, we now need to handle contact requests. The procedure is pretty simple and almost the same as what we would use in any other development environment: we verify that the request in question is a POST request and, accordingly, read the $post variable; if not, we discard the entire request. We can also use the mail helper to send an e-mail to the website owner or administrator.

6. Before our block can be fully functional, we need to add a database table, because concrete5, much like most other CMSs in its league, works with a database system. In order to add a database table, create a file named db.xml within the concerned block's folder. Thereafter, concrete5 will automatically parse this file and create a relevant table in the database for your block. For our contact form block, and for other basic block-building purposes, this is how the db.xml file should look:

<?xml version="1.0"?>
<schema version="0.3">
    <table name="btContact">
        <field name="bID" type="I">
            <key />
            <unsigned />
        </field>
    </table>
</schema>

You can make relevant changes to the preceding schema definition to suit your needs. For instance, this is how the default YouTube block's db.xml file looks:

<?xml version="1.0"?>
<schema version="0.3">
    <table name="btYouTube">
        <field name="bID" type="I">
            <key />
            <unsigned />
        </field>
        <field name="title" type="C" size="255"></field>
        <field name="videoURL" type="C" size="255"></field>
    </table>
</schema>
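To give a feel for what concrete5 does with this file, here is a toy sketch, written in Python purely as an illustration (it is not how concrete5 itself is implemented): it parses the contact block's db.xml and derives an equivalent CREATE TABLE statement. The type mapping shown is a small assumed subset of the AXMLS conventions ("I" for integer, "C" for sized character columns), and details such as <unsigned /> are ignored:

# Toy illustration: turn a db.xml schema into a CREATE TABLE statement,
# roughly what the CMS does for you when it parses the file.
import xml.etree.ElementTree as ET

DB_XML = """<?xml version="1.0"?>
<schema version="0.3">
  <table name="btContact">
    <field name="bID" type="I"><key /><unsigned /></field>
  </table>
</schema>"""

TYPE_MAP = {"I": "INTEGER", "C": "VARCHAR"}  # assumed subset of type codes

table = ET.fromstring(DB_XML).find("table")
cols = []
for f in table.findall("field"):
    sql_type = TYPE_MAP[f.get("type")]
    if f.get("type") == "C":
        sql_type += "({})".format(f.get("size", "255"))
    extras = " PRIMARY KEY" if f.find("key") is not None else ""
    cols.append("{} {}{}".format(f.get("name"), sql_type, extras))

print("CREATE TABLE {} ({});".format(table.get("name"), ", ".join(cols)))
# -> CREATE TABLE btContact (bID INTEGER PRIMARY KEY);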
The preceding steps enumerate the process of creating your first block in concrete5. While you can now work with concrete5 blocks for the most part, there are certain additional details that you should be aware of if you are to utilize the block functionality in concrete5 to its fullest. The first, and probably the most useful, of these is the validation of user input within blocks and forms.

Summary

In this article, we learned how to create our very first block in concrete5.


Untangle VPN Services

Packt
30 Oct 2014
18 min read
This article by Abd El-Monem A. El-Bawab, the author of Untangle Network Security, covers the Untangle VPN solution, OpenVPN. OpenVPN is an SSL/TLS-based VPN that is mainly used for remote access, as it is easy to configure and has clients that work on multiple operating systems and devices. OpenVPN can also provide site-to-site connections (only between two Untangle servers) with limited features.

OpenVPN

Untangle's OpenVPN is an SSL-based VPN solution based on the well-known open source application, OpenVPN. Untangle's OpenVPN is mainly used for client-to-site connections, with a client that is easy to deploy and configure and is widely available for Windows, Mac, Linux, and smartphones. Untangle's OpenVPN can also be used for site-to-site connections, but both sites need to have Untangle servers. Site-to-site connections between Untangle and third-party devices are not supported.

How OpenVPN works

In reference to the OSI model, an SSL/TLS-based VPN only encrypts the application layer's data, while the lower layers' information is transferred unencrypted. In other words, the application packets are encrypted. The IP addresses of the server and client are visible, and the port number that the server uses for communication with the client is also visible, but the actual application port number is not. Furthermore, the destination IP address is not visible; only the VPN server's IP address is seen.

Secure Sockets Layer (SSL) and Transport Layer Security (TLS) refer to the same thing. SSL is the predecessor of TLS. SSL was originally developed by Netscape, and many releases were produced (V.1 to V.3) until it was standardized under the TLS name.

The steps to create an SSL-based VPN session are as follows:

1. The client sends a message to the VPN server saying that it wants to initiate an SSL session, along with a list of all the ciphers (hash and encryption protocols) it supports.
2. The server responds with a set of selected ciphers and sends its digital certificate to the client. The server's digital certificate includes the server's public key.
3. The client tries to verify the server's digital certificate by checking it against trusted certificate authorities and by checking the certificate's validity (the valid from and valid through dates).
4. The server may need to authenticate the client before allowing it to connect to the internal network. This can be achieved either by asking for a valid username and password or by using the user's digital identity certificate. Untangle NGFW uses the digital certificates method.
5. The client creates a session key (which will be used to encrypt the data transferred between the two devices) and sends this key to the server, encrypted with the server's public key. Thus, no third party can obtain the session key, as the server is the only party that holds the private key needed to decrypt it.
6. The server acknowledges to the client that it has received the session key and is ready for encrypted data transfer.
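To make the certificate-verification step concrete, the sketch below performs the same check a TLS client does during the handshake—validation against the trusted CA store plus the validity dates—using Python's standard ssl module. It is only an illustration of steps 2 and 3, not part of Untangle; the hostname and port are placeholders you would replace with your own server:

# Illustrative only: fetch and validate a server certificate the way an
# SSL/TLS VPN client does during the handshake.
import socket
import ssl

host, port = "vpn.example.com", 443      # placeholder endpoint

ctx = ssl.create_default_context()       # trusted CA store + hostname checks
with socket.create_connection((host, port), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()         # raises earlier if validation fails

# The 'valid from' / 'valid through' dates checked in step 3:
print(cert["notBefore"], "->", cert["notAfter"])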
Configuring Untangle's OpenVPN server settings

After installing the OpenVPN application, it will be turned off; you'll need to turn it on before you can use it. You can configure Untangle's OpenVPN server settings under OpenVPN Settings | Server. These settings configure how OpenVPN acts as a server for remote clients (which can be Windows, Linux, or other operating system clients, or another Untangle server). The available settings are as follows:

- Site Name: This is the name of the OpenVPN site, used to identify this server among the other OpenVPN servers inside your organization. The name should be unique across all Untangle servers in the organization. A random name is chosen automatically.
- Site URL: This is the URL that remote clients will use to reach this OpenVPN server. It can be configured under Config | Administration | Public Address. If you have more than one WAN interface, the remote client will first try to initiate the connection using the settings defined in the public address. If this fails, it will randomly try the IPs of the remaining WAN interfaces.
- Server Enabled: If checked, the OpenVPN server will run and accept connections from remote clients.
- Address Space: This defines the IP subnet that will be used to assign IPs to the remote VPN clients. The value in Address Space must be unique and separate from all existing networks and other OpenVPN address spaces. A default address space is chosen that does not conflict with the existing configuration.
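If you manage several networks and want to double-check that a candidate address space really is separate from everything in use, the overlap test is easy to script. The following is a small sketch using Python's standard ipaddress module; the subnets listed are made-up examples, not values taken from Untangle:

# Helper sketch: verify a proposed OpenVPN address space does not
# overlap any subnet already in use before entering it in the UI.
import ipaddress

in_use = ["172.16.1.0/24", "192.168.2.0/24"]        # example local networks
proposed = ipaddress.ip_network("172.16.136.0/24")  # candidate VPN pool

clashes = [str(n) for n in map(ipaddress.ip_network, in_use)
           if n.overlaps(proposed)]
print(clashes if clashes else "No overlap - safe to use")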
Configuring Untangle's OpenVPN remote client settings

Untangle's OpenVPN allows you to create OpenVPN clients to give your office employees, when they are away from the company, the ability to remotely access your internal network resources from their PCs and/or smartphones. An OpenVPN client can also be imported to another Untangle server to provide a site-to-site connection. Each OpenVPN client will have its own unique IP (from the address space range defined previously). Thus, each OpenVPN client can only be used by one user. For multiple users, you'll have to create multiple clients, as using the same client for multiple users will result in client disconnection issues.

Creating a remote client

You can create remote access clients by clicking on the Add button located under OpenVPN Settings | Server | Remote Clients. A new window will open with the following settings:

- Enabled: If this checkbox is checked, the client is allowed to connect to the OpenVPN server. If unchecked, the connection is not allowed.
- Client Name: Give a unique name to the client; this will help you identify it. Only alphanumeric characters are allowed.
- Group: Specify the group the client will be a member of. Groups are used to apply similar settings to their members.
- Type: Select Individual Client for remote access and Network for site-to-site VPN.

The following screenshot shows a remote access client created for JDoe:

After configuring the client settings, press the Done button and then the OK or Apply button to save the client configuration. The new client will be available under the Remote Clients tab, as shown in the following screenshot:

Understanding remote client groups

Groups are used to group clients together and apply similar settings to all group members. By default, there will be a Default Group. Each group has the following settings:

- Group Name: Give the group a suitable name that describes the group settings (for example, full tunneling clients) or the target clients (for example, remote access clients).
- Full Tunnel: If checked, all traffic from the remote clients will be sent to the OpenVPN server, which allows Untangle to filter traffic directed to the Internet. If unchecked, the remote client will run in split tunnel mode, which means that traffic directed to local resources behind Untangle is sent through the VPN, while traffic directed to the Internet is sent through the machine's default gateway. You can't use Full Tunnel for site-to-site connections.
- Push DNS: If checked, the remote OpenVPN client will use the DNS settings defined by the OpenVPN server. This is useful for resolving local names and services.
- Push DNS server: If OpenVPN Server is selected, remote clients will use the OpenVPN server for DNS queries. If set to Custom, the DNS servers configured here will be used for DNS queries.
- Push DNS Custom 1: If Push DNS server is set to Custom, the value configured here will be used as the primary DNS server for the remote client. If blank, no setting will be pushed to the remote client.
- Push DNS Custom 2: If Push DNS server is set to Custom, the value configured here will be used as the secondary DNS server for the remote client. If blank, no setting will be pushed to the remote client.
- Push DNS Domain: The configured value will be pushed to the remote clients to extend their domain search path during DNS resolution.

The following screenshot illustrates all these settings:

Defining the exported networks

Exported networks are used to define the internal networks behind the OpenVPN server that the remote client can reach after a successful connection. Additional routes are added to the remote client's routing table stating that the exported networks (the main site's internal subnets) are reachable through the OpenVPN server. By default, each static non-WAN interface network will be listed in the Exported Networks list.

You can modify the default settings or create new entries. The Exported Networks settings are as follows:

- Enabled: If checked, the defined network will be exported to the remote clients.
- Export Name: Enter a suitable name for the exported network.
- Network: This defines the exported network, written in CIDR form.

These settings are illustrated in the following screenshot:

Using OpenVPN remote access clients

So far, we have been configuring the client settings but haven't created the actual package to be used on remote systems. We can get the remote client package by pressing the Download Client button located under OpenVPN Settings | Server | Remote Clients, which starts the process of building the OpenVPN client to be distributed.

There are three options available for downloading the OpenVPN client. The first option is to download the client as a .exe file to be used with the Windows operating system. The second option is to download the client configuration files, which can be used with the Apple and Linux operating systems. The third option is similar to the second, except that the configuration file is meant to be imported into another Untangle NGFW server for site-to-site scenarios. The following screenshot illustrates this:

The configuration files include the following:

- <Site_name>.ovpn
- <Site_name>.conf
- keys\<Site_name>-<User_name>.crt
- keys\<Site_name>-<User_name>.key
- keys\<Site_name>-<User_name>-ca.crt

The certificate files are used for client authentication, while the .ovpn and .conf files contain the defined connection settings (that is, the OpenVPN server IP, the port used, and the ciphers used).
The following screenshot shows the .ovpn file for the site Untangle-1849:

As shown in the following screenshot, the created file (openvpn-JDoe-setup.exe) includes the client name, which helps you identify the different clients and simplifies the process of distributing each file to the right user:

Using an OpenVPN client with Windows OS

Using an OpenVPN client with the Windows operating system is very simple. Perform the following steps:

1. Set up the OpenVPN client on the remote machine. The setup is a simple next, next, install, and finish affair. It is important to set up and run the application as an administrator in order to allow the client to write the VPN routes to the Windows routing table. You should run the client as an administrator every time you use it so that it can create the required routes.
2. Double-click on the OpenVPN icon on the Windows desktop. The application will run in the system tray.
3. Right-click on the application's system tray icon and select Connect. The client will start initiating the connection to the OpenVPN server, and a window with the connection status will appear, as shown in the following screenshot:
4. Once the VPN tunnel is initiated, a notification will appear from the client with the IP assigned to it, as shown in the following screenshot:

If the OpenVPN client is running in the taskbar with an established connection, the client will automatically reconnect to the OpenVPN server if the tunnel is dropped due to Windows going to sleep.

By default, the OpenVPN client will not start at Windows login. We can change this, and allow it to start without requiring administrative privileges, by going to Control Panel | Administrative Tools | Services and changing the OpenVPN service's Startup Type to Automatic. Then, in the start parameters field, put --connect <Site_name>.ovpn; you can find the <Site_name>.ovpn file under C:\Program Files\OpenVPN\config.

Using OpenVPN with non-Windows clients

The method of configuring OpenVPN clients to work with Untangle is the same for all non-Windows clients: simply download the .zip file provided by Untangle, which includes the configuration and certificate files, and place the files into the application's configuration folder. The steps are as follows:

1. Download and install any of the following OpenVPN-compatible clients for your operating system:
- For Mac OS X, Untangle, Inc. suggests using Tunnelblick, which is available at http://code.google.com/p/tunnelblick
- For Linux, OpenVPN clients for different Linux distros can be found at https://openvpn.net/index.php/access-server/download-openvpn-as-sw.html
- OpenVPN Connect for iOS is available at https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8
- OpenVPN for Android 4.0+ is available at https://play.google.com/store/apps/details?id=net.openvpn.openvpn
2. Log in to the Untangle NGFW server, download the .zip client configuration file, and extract the files from it.
3. Place the configuration files into your OpenVPN-compatible application:
- Tunnelblick: Manually copy the files into the Configurations folder located at ~/Library/Application Support/Tunnelblick.
- Linux: Copy the extracted files into /etc/openvpn; you can then connect using sudo openvpn /etc/openvpn/<Site_name>.conf.
- iOS: Open iTunes and select the files from the config ZIP file to add to the app on your iPhone or iPad.
- Android: From the OpenVPN for Android application, open the list of your VPN profiles.
In the top-right corner, click on the folder icon, then browse to the folder where you have the OpenVPN .conf file. Click on the file and hit Select. Then, in the top-right corner, hit the little floppy disc icon to save the import. You should now see the imported profile; click on it to connect to the tunnel. For more information, visit http://forums.untangle.com/openvpn/30472-openvpn-android-4-0-a.html.
4. Run the OpenVPN-compatible client.

Using OpenVPN for site-to-site connection

To use OpenVPN for a site-to-site connection, one Untangle NGFW server runs in OpenVPN server mode, while the other runs in client mode. We need to create a client that will be imported into the remote server. The client settings are shown in the following screenshot:

We then need to download the client configuration intended for import into another Untangle server (the third option in the client download menu) and import this zipped client configuration on the remote server. To import the client, on the remote server under the Client tab, browse to the .zip file and press the Submit button. The client will be shown as follows:

You'll need to restart the two servers before you can use the OpenVPN site-to-site connection. The site-to-site connection is bidirectional.

Reviewing the connection details

The currently connected clients (whether OS clients or another Untangle NGFW client) appear under Connected Remote Clients, located under the Status tab. The screen shows the client name, its external address, and the address assigned to it by OpenVPN, in addition to the connection start time and the amount of data (in MB) transmitted and received during the connection:

For a site-to-site connection, the client server shows the name of the remote server and whether the connection is established, in addition to the amount of transmitted and received data in MB:

Event logs show a detailed connection history, as shown in the following screenshot:

In addition, there are two reports available for Untangle's OpenVPN:

- Bandwidth usage: This report shows the maximum and average data transfer rate (KB/s) and the total amount of data transferred that day
- Top users: This report shows the top users connected to the Untangle OpenVPN server

Troubleshooting Untangle's OpenVPN

In this section, we will discuss some points to consider when dealing with Untangle NGFW OpenVPN.

OpenVPN acts as a router, as it routes between different networks. Using OpenVPN with Untangle NGFW in bridge mode (the Untangle NGFW server is behind another router) requires additional configuration:

- Create a static route on the router that routes any traffic for the VPN range (the VPN address pool) to the Untangle NGFW server.
- Create a port forward rule on the router for the OpenVPN port 1194 (UDP) to Untangle NGFW.
- Verify that your setting under Config | Administration | Public Address is correct, as it is used by Untangle to configure OpenVPN clients, and ensure that the configured address is resolvable from outside the company.

If the OpenVPN client is connected but you can't access anything, perform the following steps:

- Verify that the hosts you are trying to reach are exported in Exported Networks.
- Try to ping the Untangle NGFW LAN IP address (if exported).
- Try to bring up the Untangle NGFW GUI by entering the IP address in a browser.

If the preceding tasks work, your tunnel is up and operational.
If you can't reach any clients inside the network, check the following conditions:

- The client machine's firewall is not blocking the connection from the OpenVPN client.
- The client machine uses Untangle as its gateway, or has a static route that sends traffic for the VPN address pool to Untangle NGFW.

In addition, some port forwarding rules on Untangle NGFW are needed for OpenVPN to function properly. The required ports are 53, 445, 389, 88, 135, and 1025.

If the site-to-site tunnel is set up correctly but the two sites can't talk to each other, consider the following:

- If your sites have IPs from the same subnet (this typically happens when you use the same ISP's service for both branches), OpenVPN may fail, as it considers that no routing is needed between IPs in the same subnet; you should ask your ISP to change the IPs.
- To get DNS resolution to work over the site-to-site tunnel, you'll need to go to Config | Network | Advanced | DNS Server | Local DNS Servers and add the IP of the DNS server on the far side of the tunnel. Enter the domain in the Domain List column and use the FQDN when accessing resources. You'll need to do this on both sides of the tunnel for it to work from either side.
- If you are using site-to-site VPN in addition to client-to-site VPN, but the OpenVPN client is able to connect to the main site only, you'll need to add the VPN address pool to Exported Hosts and Networks.

Lab-based training

This section provides training for the OpenVPN site-to-site and client-to-site scenarios. In this lab, we will mainly use Untangle-01, Untangle-03, and a laptop (192.168.1.7).

The ABC bank started a project with Acme schools. As part of this project, the ABC bank team needs to periodically access files located on Acme-FS01, so the two parties decided to opt for OpenVPN. However, Acme's network team doesn't want to leave access wide open for ABC bank members, so they set firewall rules to limit ABC bank's access to the file server only. In addition, the IT team director wants to have VPN access from home to the Acme network, which they decided to accomplish using OpenVPN.

The following diagram shows the environment used in the site-to-site scenario:

To create the site-to-site connection, we need to perform the following steps:

1. Enable the OpenVPN server on Untangle-01.
2. Create a Network type client with a remote network of 172.16.1.0/24.
3. Download the client and import it under the Client tab on Untangle-03.
4. Restart the two servers.

After the restart, you have a site-to-site VPN connection. However, the Acme network is wide open to the ABC bank, so we need to create a limiting firewall rule. On Untangle-03, create a rule that allows any traffic that comes from the OpenVPN interface, has a source of 172.16.136.10 (the Untangle-01 client IP), and is directed to 172.16.1.7 (Acme-FS01). The rule is shown in the following screenshot:

We also need a general block rule that comes after the preceding rule in the rule evaluation order.

The environment used for the client-to-site connection is shown in the following diagram:

To create a client-to-site VPN connection, we need to perform the following steps:

1. Enable the OpenVPN server on Untangle-03.
2. Create an Individual Client type client on Untangle-03.
3. Distribute the client to the intended user (that is, 192.168.1.7).
4. Install OpenVPN on your laptop.
5. Connect using the installed OpenVPN client and try to ping Acme-DC01 by its name. The ping will fail because the client is not able to query the Acme DNS.
6. In the Default Group settings, change Push DNS Domain to Acme.local. Changing the group settings will not affect the OpenVPN client until the client is restarted. After the restart, the ping will succeed.

Summary

In this article, we covered the VPN services provided by Untangle NGFW: we went deep into understanding how the solution works and provided a guide on how to configure and deploy it. Untangle provides a free solution, based on the well-known open source OpenVPN, which offers an SSL-based VPN.


Importance of Windows RDS in Horizon View

Packt
30 Oct 2014
15 min read
In this article by Jason Ventresco, the author of VMware Horizon View 6 Desktop Virtualization Cookbook, we look at Windows Remote Desktop Services (RDS) and how they are implemented in Horizon View. We will discuss configuring the Windows RDS server and creating an RDS farm in Horizon View.

Configuring the Windows RDS server for use with Horizon View

This recipe covers the minimum steps required to configure Windows RDS and integrate it with our Horizon View pod. For a more in-depth discussion on Windows RDS optimization and management, consult the Microsoft TechNet page for Windows Server 2012 R2 (http://technet.microsoft.com/en-us/library/hh801901.aspx).

Getting ready

VMware Horizon View supports the following versions of Windows Server for use with RDS:

- Windows Server 2008 R2: Standard, Enterprise, or Datacenter, with SP1 or later installed
- Windows Server 2012: Standard or Datacenter
- Windows Server 2012 R2: Standard or Datacenter

The examples shown in this article were performed on Windows Server 2012 R2. Additionally, all of the required applications have already been installed on the server, which in this case includes Microsoft Office 2010.

Microsoft Office has specific licensing requirements when used with Windows Server RDS. Consult Microsoft's Licensing of Microsoft Desktop Application Software for Use with Windows Server Remote Desktop Services document (http://www.microsoft.com/licensing/about-licensing/briefs/remote-desktop-services.aspx) for additional information.

The Windows RDS feature requires a licensing server component called the Remote Desktop Licensing role service. For reasons of availability, it is not recommended that you install it on the RDS host itself, but rather on an existing server that serves some other function, or even on a dedicated server if possible. Ideally, the RDS licensing role should be installed on multiple servers for redundancy. The Remote Desktop Licensing role service is different from the Microsoft Windows Key Management Service (KMS), as it is used solely for Windows RDS hosts. Consult the Microsoft TechNet article, RD Licensing Configuration on Windows Server 2012 (http://blogs.technet.com/b/askperf/archive/2013/09/20/rd-licensing-configuration-on-windows-server-2012.aspx), for the steps required to install the Remote Desktop Licensing role service. Additionally, consult the Microsoft document Licensing Windows Server 2012 R2 Remote Desktop Services (http://download.microsoft.com/download/3/D/4/3D42BDC2-6725-4B29-B75A-A5B04179958B/WindowsServerRDS_VLBrief.pdf) for information about the licensing options for Windows RDS, which include both per-user and per-device options.

Windows RDS host – hardware recommendations

The following resources represent a starting point for assigning CPU and RAM resources to Windows RDS hosts. The actual resources required will vary based on the applications being used and the number of concurrent users, so it is important to monitor server utilization and adjust the CPU and RAM specifications if required.
The baseline requirements are as follows:

- One vCPU for every 15 concurrent RDS sessions
- Base RAM equal to 2 GB per vCPU, plus 64 MB of additional RAM for each concurrent RDS session
- Additional RAM equal to the application requirements, multiplied by the estimated number of concurrent users of the application
- Sufficient hard drive space to store RDS user profiles, which will vary based on the configuration of the Windows RDS host. This space is only required if you intend to store user profiles locally on the RDS hosts. Windows RDS supports multiple options to control user profile configuration and growth, including an RD user home directory, RD roaming user profiles, and mandatory profiles; for information about these and other options, consult the Microsoft TechNet article, Manage User Profiles for Remote Desktop Services (http://technet.microsoft.com/en-us/library/cc742820.aspx). Horizon View Persona Management is not supported and will not work with Windows RDS hosts; consider native Microsoft features such as those described previously, or third-party tools such as AppSense Environment Manager (http://www.appsense.com/products/desktop/desktopnow/environment-manager).

Based on these values, a Windows Server 2012 R2 RDS host running Microsoft Office 2010 that will support 100 concurrent users requires the following resources:

- Seven vCPUs, to support up to 105 concurrent RDS sessions
- 45.25 GB of RAM, based on the following calculations:
  - 20.25 GB of base RAM (2 GB for each of the seven vCPUs, plus 64 MB for each of the 100 users)
  - 25 GB of additional RAM to support Microsoft Office 2010 (Office 2010 recommends 256 MB of RAM for each user)

While the vCPU and RAM requirements might seem excessive at first, remember that to deploy a virtual desktop for each of these 100 users, we would need at least 100 vCPUs and 100 GB of RAM—much more than what our Windows RDS host requires. By default, Horizon View allows only 150 unique RDS user sessions for each available Windows RDS host, so you will need to deploy multiple RDS hosts if users need to stream two applications at once or if you anticipate more than 150 connections. It is possible to change the number of supported sessions, but this is not recommended due to potential performance issues.
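The sizing arithmetic above is easy to reproduce for other user counts. The following is a small sketch, written in Python purely for illustration, that applies the same rules of thumb from this recipe; the per-application figure is whatever your application vendor recommends:

# Reproduce the RDS host sizing rules of thumb from this recipe.
import math

users = 100
app_ram_mb_per_user = 256                     # Office 2010 recommendation

vcpus = math.ceil(users / 15)                 # 1 vCPU per 15 sessions -> 7
base_ram_gb = vcpus * 2 + users * 64 / 1024   # 2 GB/vCPU + 64 MB/session
app_ram_gb = users * app_ram_mb_per_user / 1024

print(vcpus, base_ram_gb, base_ram_gb + app_ram_gb)  # 7 20.25 45.25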
Importing the Horizon View RDS AD group policy templates

Some of the settings configured throughout this article are applied using AD group policy templates. Prior to using the RDS feature, these templates should be distributed either to the RDS hosts, to be used with the Windows local group policy editor, or to an AD domain controller, where they can be applied at the domain level.

When referring to VMware Horizon View installation packages, y.y.y refers to the version number and xxxxxx refers to the build number. When you download packages, the actual version and build numbers will be in numeric format. For example, the filename of the current Horizon View 6 GPO bundle is VMware-Horizon-View-Extras-Bundle-3.1.0-2085634.zip.

Complete the following steps to install the View RDS group policy templates: obtain the VMware-Horizon-View-GPO-Bundle-x.x.x-yyyyyyy.zip file, unzip it, and copy the en-US folder, the vmware_rdsh.admx file, and the vmware_rdsh_server.admx file to the C:\Windows\PolicyDefinitions folder on either an AD domain controller or your target RDS host, based on how you wish to manage the policies. Make note of the following points while doing so:

- If you want to set the policies locally on each RDS host, you will need to copy the files to each server.
- If you wish to set the policies using domain-based AD group policies, you will need to copy the files to the domain controllers, the group policy Central Store (http://support.microsoft.com/kb/929841), or the workstation from which you manage these domain-based group policies.

How to do it…

The following steps outline the procedure to enable RDS on a Windows Server 2012 R2 host. The host used in this recipe has already been joined to the domain, and we are logged in with an AD account that has administrative permissions on the server. Perform the following steps:

1. Open the Windows Server Manager utility and go to Manage | Add Roles and Features to open the Add Roles and Features Wizard.
2. On the Before you Begin page, click on Next.
3. On the Installation Type page, select Remote Desktop Services installation and click on Next. This is shown in the following screenshot:
4. On the Deployment Type page, select Quick Start and click on Next. You can also implement the required roles using the standard deployment method outlined in the Deploy the Session Virtualization Standard deployment section of the Microsoft TechNet article, Test Lab Guide: Remote Desktop Services Session Virtualization Standard Deployment (http://technet.microsoft.com/en-us/library/hh831610.aspx). If you use this method, complete the component installation and proceed to step 9 of this recipe.
5. On the Deployment Scenario page, select Session-based desktop deployment and click on Next.
6. On the Server Selection page, select a server from the list under Server Pool, click the red highlighted button to add the server to the list of selected servers, and click on Next. This is shown in the following screenshot:
7. On the Confirmation page, check the box marked Restart the destination server automatically if required and click on Deploy.
8. On the Completion page, monitor the installation process and click on Close when it finishes to complete the installation. If a reboot is required, the server will reboot without the need to click on Close. Once the reboot completes, proceed with the remaining steps.
9. Set the RDS licensing server using the Set-RDLicenseConfiguration Windows PowerShell command. In this example, we are configuring the local RDS host to point to redundant license servers (RDS-LIC1 and RDS-LIC2) and setting the license mode to PerUser. This command must be executed on the target RDS host. After entering the command, confirm the values for the license mode and license server names by answering Y when prompted:

Set-RDLicenseConfiguration -LicenseServer @("RDS-LIC1.vjason.local","RDS-LIC2.vjason.local") -Mode PerUser

This setting may also be configured using group policies applied either to the local computer or through Active Directory (AD). The policies are shown in the following screenshot, and you can locate them by going to Computer Configuration | Policies | Administrative Templates | Windows Components | Remote Desktop Services | Remote Desktop Session Host | Licensing when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path:
10. Use local computer or AD group policies to limit users to one session per RDS host using the Restrict Remote Desktop Services users to a single Remote Desktop Services session policy.
The policy is shown in the following screenshot, and you can locate it by navigating to Computer Configuration | Policies | Administrative Templates | Windows Components | Remote Desktop Services | Remote Desktop Session Host | Connections:

Use local computer or AD group policies to enable time zone redirection. You can locate the policy by navigating to Computer Configuration | Policies | Administrative Templates | Windows Components | Horizon View RDSH Services | Remote Desktop Session Host | Device and Resource Redirection when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path. To enable the setting, set Allow time zone redirection to Enabled.

Use local computer or AD group policies to enable the Windows Basic aero-styled theme. You can locate the policy by going to User Configuration | Policies | Administrative Templates | Control Panel | Personalization when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path. To configure the theme, set Force a specific visual style file or force Windows Classic to Enabled and set Path to Visual Style to %windir%\resources\Themes\Aero\aero.msstyles.

Use local computer or AD group policies to start Runonce.exe when the RDS session starts. You can locate the policy by going to User Configuration | Policies | Windows Settings | Scripts (Logon/Logoff) when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path. To configure the logon settings, double-click on Logon, click on Add, enter runonce.exe in the Script Name box, and enter /AlternateShellStartup in the Script Parameters box.

On the Windows RDS host, double-click on the 64-bit Horizon View Agent installer to begin the installation process. The installer should have a name similar to VMware-viewagent-x86_64-y.y.y-xxxxxx.exe.

On the Welcome to the Installation Wizard for VMware Horizon View Agent page, click on Next.

On the License Agreement page, select the I accept the terms in the license agreement radio button and click on Next.

On the Custom Setup page, either leave all the options set to default, or, if you are not using vCenter Operations Manager, deselect this optional component of the agent, and click on Next.

On the Register with Horizon View Connection Server page, shown in the following screenshot, enter the hostname or IP address of one of the Connection Servers in the pod where the RDS host will be used. If the user performing the installation of the agent software is an administrator in the Horizon View environment, leave the Authentication setting set to default; otherwise, select the Specify administrator credentials radio button and provide the username and password of an account that has administrative rights in Horizon View. Click on Next to continue:

On the Ready to Install the Program page, click on Install to begin the installation. When the installation completes, reboot the server if prompted.

The Windows RDS service is now enabled, configured with the optimal settings for use with VMware Horizon View, and has the necessary agent software installed. This process should be repeated on additional RDS hosts, as needed, to support the target number of concurrent RDS sessions.
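Before moving on to the next host, it can be worth confirming that the licensing settings from the PowerShell step took effect. The following is a minimal sketch that is not part of the original recipe; Get-RDLicenseConfiguration is the read-only counterpart of the Set-RDLicenseConfiguration command used earlier, and the expected values shown in the comments assume the example license servers from this recipe:

# Run on the RDS host after the configuration is complete
Get-RDLicenseConfiguration

# Expected output (assumption: the example values used in this recipe):
#   Mode          : PerUser
#   LicenseServer : {RDS-LIC1.vjason.local, RDS-LIC2.vjason.local}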
How it works…

The following resources provide detailed information about the configuration options used in this recipe:

- Microsoft TechNet's Set-RDLicenseConfiguration article at http://technet.microsoft.com/en-us/library/jj215465.aspx provides the complete syntax of the PowerShell command used to configure the RDS licensing settings.
- Microsoft TechNet's Remote Desktop Services Client Access Licenses (RDS CALs) article at http://technet.microsoft.com/en-us/library/cc753650.aspx explains the different RDS license types, which reveals that an RDS per-user Client Access License (CAL) allows our Horizon View clients to access the RDS servers from an unlimited number of endpoints while still consuming only one RDS license.
- The Microsoft TechNet article, Remote Desktop Session Host, Licensing (http://technet.microsoft.com/en-us/library/ee791926(v=ws.10).aspx), provides additional information on the group policies used to configure the RDS licensing options.
- The VMware document Setting up Desktop and Application Pools in View (https://pubs.vmware.com/horizon-view-60/index.jsp?topic=%2Fcom.vmware.horizon-view.desktops.doc%2FGUID-931FF6F3-44C1-4102-94FE-3C9BFFF8E38D.html) explains that the Windows Basic aero-styled theme is the only theme supported by Horizon View, and demonstrates how to implement it.
- The VMware document Setting up Desktop and Application Pools in View (https://pubs.vmware.com/horizon-view-60/topic/com.vmware.horizon-view.desktops.doc/GUID-443F9F6D-C9CB-4CD9-A783-7CC5243FBD51.html) explains why time zone redirection is required, as it ensures that the Horizon View RDS client session will use the same time zone as the client device.
- The VMware document Setting up Desktop and Application Pools in View (https://pubs.vmware.com/horizon-view-60/topic/com.vmware.horizon-view.desktops.doc/GUID-85E4EE7A-9371-483E-A0C8-515CF11EE51D.html) explains why we need to add the runonce.exe /AlternateShellStartup command to the RDS logon script. This ensures that applications that require Windows Explorer will work properly when streamed using Horizon View.

Creating an RDS farm in Horizon View

This recipe will discuss the steps that are required to create an RDS farm in our Horizon View pod. An RDS farm is a collection of Windows RDS hosts and serves as the point of integration between the View Connection Server and the individual applications installed on each RDS server. Additionally, key settings concerning client session handling and client connection protocols are set at the RDS farm level within Horizon View.

Getting ready

To create an RDS farm in Horizon View, we need to have at least one RDS host registered with our View pod. Assuming that the Horizon View Agent installation completed successfully in the previous recipe, we should see the RDS hosts registered in the Registered Machines menu under View Configuration of our View Manager Admin console. The tasks required to create the RDS farm are performed using the Horizon View Manager Admin console.

How to do it…

The following steps outline the procedure used to create an RDS farm. In this example, we have already created and registered two Windows RDS hosts named WINRDS01 and WINRDS02. Perform the following steps:

Navigate to Resources | Farms and click on Add, as shown in the following screenshot:

On the Identification and Settings page, shown in the following screenshot, provide a farm ID, a description if desired, make any desired changes to the default settings, and then click on Next.
These settings can also be changed after the farm has been created, if needed:

On the Select RDS Hosts page, shown in the following screenshot, click on the RDS hosts to be added to the farm and then click on Next:

On the Ready to Complete page, review the configuration and click on Finish.

The RDS farm has been created, which allows us to create application pools.

How it works…

The following RDS farm settings can be changed at any time and are described in the following points:

- Default display protocol: PCoIP (default) and RDP are available.
- Allow users to choose protocol: By default, Horizon View Clients can select their preferred protocol; we can change this setting to No in order to enforce the farm defaults.
- Empty session timeout (applications only): This denotes the amount of time that must pass after a client closes all RDS applications before the RDS farm takes the action specified in the When timeout occurs setting. The default setting is 1 minute.
- When timeout occurs: This determines which action is taken by the RDS farm when the session's timeout deadline passes; the options are Log off or Disconnect (default).
- Log off disconnected sessions: This determines what happens when a View RDS session is disconnected; the options are Never (default), Immediate, or After. If After is selected, a time in minutes must be provided.

Summary

We have learned about configuring the Windows RDS server for use in Horizon View and about creating an RDS farm in Horizon View.

Resources for Article:

Further resources on this subject: Backups in the VMware View Infrastructure [Article] An Introduction to VMware Horizon Mirage [Article] Designing and Building a Horizon View 6.0 Infrastructure [Article]
LeJOS – Unleashing EV3

Packt
29 Oct 2014
7 min read
In this article by Abid H. Mujtaba, author of Lego Mindstorms EV3 Essentials, we'll have a look at a powerful framework designed to grant an extraordinary degree of control over EV3, namely LeJOS: (For more resources related to this topic, see here.)

Classic programming on EV3

LeJOS is what happens when robot and software enthusiasts set out to hack a robotics kit. Although Lego initially intended the Mindstorms series to be targeted primarily towards children, it was taken up with gleeful enthusiasm by adults. The visual programming language, which was meant to be used both on the brick and on computers, was also designed with children in mind. Although very powerful, it has a number of limitations and shortcomings. Enthusiasts have continually been on the lookout for ways to program Mindstorms using traditional programming languages. As a result, a number of development kits have been created by enthusiasts to allow the programming of EV3 in a traditional fashion, by writing and compiling code in traditional languages. A development kit for EV3 consists of the following:

- A traditional programming language (C, C++, Java, and so on)
- Firmware for the brick (basically, a new OS)
- An API in the chosen programming language, giving access to the robot's inputs and outputs
- A compiler that compiles code on a traditional computer to produce executable code for the brick
- Optionally, an Integrated Development Environment (IDE) to consolidate and simplify the process of developing for the brick

The release of each robot in the Mindstorms series has been associated with a consolidated effort by the open source community to hack the brick and make available a number of frameworks for programming robots using traditional programming languages. Some of the common frameworks available for Mindstorms are GNAT GPL (Ada), ROBOTC, Next Byte Code (NBC), an assembly language, Not Quite C (NQC), LeJOS, and many others. This variety of frameworks is particularly useful for Linux users, not only because they love having the ability to program in their language of choice, but also because the visual programming suite for EV3 does not run on Linux at all. In its absence, these frameworks are essential for anyone who is looking to create programs of significant complexity for EV3.

LeJOS – introduction

LeJOS is a development kit for Mindstorms robots based on the Java programming language. There is no official pronunciation, with people using lay-joss, le-J-OS (claiming, as I do, that it is French for "the Java Operating System"), or lay-hoss if you prefer the Latin-American touch. After considerable success with NXT, LeJOS was the first (and in my opinion, the most complete) framework released for EV3. This is a testament both to the prowess of the developers working on LeJOS and to the fact that Lego built EV3 to be extremely hackable by running Linux under its hood and making its source publicly available. Within weeks, LeJOS had been ported to EV3, and you could program robots using Java. LeJOS works by installing its own OS (operating system) on the EV3's SD card as an alternate firmware. Before EV3, this would involve a slightly difficult and dangerous tinkering with the brick itself, but one of the first things that EV3 does on booting up is to check for a bootable partition on the SD card. If it is found, the OS/firmware is loaded from the SD card instead of being loaded internally.
Thus, in order to run LeJOS, you only need a suitably prepared SD card inserted into EV3 and it will take over the brick. When you want to return to the default firmware, simply remove the SD card before starting EV3. It's that simple! Lego wasn't kidding about the hackability of EV3. The firmware for LeJOS basically runs a Java Virtual Machine (JVM) inside EV3, which allows it to execute compiled Java code. Along with the JVM, LeJOS installs an API library, defining methods that can be used programmatically to access the inputs and outputs attached to the brick. These API methods are used to control the various components of the robot. The LeJOS project also releases tools that can be installed on all modern computers. These tools are used to compile programs that are then transferred to EV3 and executed. These tools can be imported into any IDE that supports Java (Eclipse, NetBeans, IntelliJ, Android Studio, and so on) or used with a plain text editor combined with Ant or Gradle. Thus, LeJOS qualifies as a complete development kit for EV3.

The advantages of LeJOS

Some of the obvious advantages of using LeJOS are:

- It was the first framework to have support for EV3
- Its API is stable and complete
- It has an active developer and user base (the last stable version came out in March 2014, and a new beta was released in April)
- The code base is maintained in a public Git repository
- Ease of installation
- Ease of use

The other advantages of using LeJOS are linked to the fact that its API, as well as the programs you write yourself, are all written in Java. The development kits allow a number of languages to be used for programming EV3, with the most popular ones being C and Java. C is a low-level language that gives you greater control over the hardware, but it comes at a price. Your instructions need to be more explicit, and the chances of making a subtle mistake are much higher. For every line of Java code, you might have to write dozens of lines of C code to get the same functionality. Java is a high-level language that is compiled into bytecode that runs on the JVM. This results in a lesser degree of control over the hardware, but in return, you get a powerful and stable API (that LeJOS provides) to access the inputs and outputs of EV3. The LeJOS team is committed to ensuring that this API works well and continues to grow. The use of a high-level language such as Java lowers the entry threshold to robotic programming, especially for people who already know at least one programming language. Even people who don't know programming yet can learn Java easily, much more so than C. Finally, two features of Java that are extremely useful when programming robots are its object-oriented nature (the heavy use of classes, interfaces, and inheritance) and its excellent support for multithreading. You can create and reuse custom classes to encapsulate common functionality and can integrate sensors and motors using different threads that communicate with each other. The latter allows the construction of subsumption architectures, an important development in robotics that allows for extremely responsive robots. I hope that I have made a compelling case for why you should choose LeJOS as your framework in order to take EV3 programming to the next level. However, the proof is in the pudding.

Summary

In this article, we learned how EV3's extremely hackable nature has led to the proliferation of alternate frameworks that allow EV3 to be programmed using traditional programming languages.
One of these alternatives is LeJOS, a powerful framework based on the Java programming language. We studied the fundamentals of LeJOS and learned its advantages over other frameworks. Resources for Article: Further resources on this subject: Home Security by BeagleBone [Article] Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [Article] Managing Test Structure with Robot Framework [Article]
OpenShift for Java Developers

Packt
29 Oct 2014
21 min read
This article, written by Shekhar Gulati, the author of OpenShift Cookbook, covers how Java developers can use OpenShift to develop and deploy Java applications. It also teaches us how to deploy Java EE 6 and Spring applications on OpenShift. (For more resources related to this topic, see here.)

Creating and deploying Java EE 6 applications using the JBoss EAP and PostgreSQL 9.2 cartridges

Gone are the days when Java EE or J2EE (as it was called in the olden days) was considered evil. Java EE now provides a very productive environment to build web applications. Java EE has embraced convention over configuration and annotations, which means that you are no longer required to maintain XML to configure each and every component. In this article, you will learn how to build a Java EE 6 application and deploy it on OpenShift. This article assumes that you have basic knowledge of Java and Java EE 6. If you are not comfortable with Java EE 6, please read the official tutorial at http://docs.oracle.com/javaee/6/tutorial/doc/. In this article, you will build a simple job portal that will allow users to post job openings and view a list of all the persisted jobs in the system. These two functionalities will be exposed using two REST endpoints. The source code for the application created in this article is on GitHub at https://github.com/OpenShift-Cookbook/chapter7-jobstore-javaee6-simple. The example application that you will build in this article is a simple version of the jobstore application, with only a single domain class and without any user interface. You can get the complete jobstore application source code on GitHub as well, at https://github.com/OpenShift-Cookbook/chapter7-jobstore-javaee6.

Getting ready

To complete this article, you will need the rhc command-line client installed on your machine. Also, you will need an IDE to work with the application code. The recommended IDE to work with OpenShift is Eclipse Luna, but you can also work with other IDEs, such as IntelliJ IDEA and NetBeans. Download and install the Eclipse IDE for Java EE developers from the official website at https://www.eclipse.org/downloads/.

How to do it…

Perform the following steps to create the jobstore application:

Open a new command-line terminal, and go to a convenient location. Create a new JBoss EAP application by executing the following command:

$ rhc create-app jobstore jbosseap-6

The preceding command will create a Maven project and clone it to your local machine.

Change the directory to jobstore, and execute the following command to add the PostgreSQL 9.2 cartridge to the application:

$ rhc cartridge-add postgresql-9.2

Open Eclipse and navigate to the project workspace. Then, import the application created in step 1 as a Maven project. To import an existing Maven project, navigate to File | Import | Maven | Existing Maven Projects. Then, navigate to the location of your OpenShift Maven application created in step 1.

Next, update pom.xml to use Java 7. The Maven project created by OpenShift is configured to use JDK 6. Replace the properties with the ones shown in the following code:

<maven.compiler.source>1.7</maven.compiler.source>
<maven.compiler.target>1.7</maven.compiler.target>

Update the Maven project to allow the changes to take effect. You can update the Maven project by right-clicking on the project and navigating to Maven | Update Project.

Now, let us write the domain classes for our application. Java EE uses JPA to define the data model and manage entities.
The application has one domain class: Job. Create a new package called org.osbook.jobstore.domain, and then create a new Java class called Job inside it. Have a look at the following code:

@Entity
public class Job {

  @Id
  @GeneratedValue(strategy = GenerationType.AUTO)
  private Long id;

  @NotNull
  private String title;

  @NotNull
  @Size(max = 4000)
  private String description;

  @Column(updatable = false)
  @Temporal(TemporalType.DATE)
  @NotNull
  private Date postedAt = new Date();

  @NotNull
  private String company;

  // setters and getters removed for brevity

}

Create a META-INF folder at src/main/resources, and then create a persistence.xml file with the following code:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
    version="2.0">
  <persistence-unit name="jobstore" transaction-type="JTA">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <jta-data-source>java:jboss/datasources/PostgreSQLDS</jta-data-source>
    <exclude-unlisted-classes>false</exclude-unlisted-classes>
    <properties>
      <property name="hibernate.show_sql" value="true" />
      <property name="hibernate.hbm2ddl.auto" value="update" />
    </properties>
  </persistence-unit>
</persistence>

Now, we will create the JobService class that will use the JPA EntityManager API to work with the database. Create a new package called org.osbook.jobstore.services, and create a new Java class as shown in the following code. It defines the save and findAll operations on the Job entity:

@Stateless
public class JobService {

  @PersistenceContext(unitName = "jobstore")
  private EntityManager entityManager;

  public Job save(Job job) {
    entityManager.persist(job);
    return job;
  }

  public List<Job> findAll() {
    return entityManager
        .createQuery("SELECT j from org.osbook.jobstore.domain.Job j order by j.postedAt desc", Job.class)
        .getResultList();
  }
}

Next, enable Contexts and Dependency Injection (CDI) in the jobstore application by creating a file with the name beans.xml in the src/main/webapp/WEB-INF directory, as follows:

<?xml version="1.0"?>
<beans xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://jboss.org/schema/cdi/beans_1_0.xsd"/>

The jobstore application will expose REST JSON web services. Before you can write the JAX-RS resources, you have to configure JAX-RS in your application. Create a new package called org.osbook.jobstore.rest and a new class called RestConfig, as shown in the following code:

@ApplicationPath("/api/v1")
public class RestConfig extends Application {
}

Create a JAX-RS resource to expose the create and findAll operations of JobService as REST endpoints, as follows:

@Path("/jobs")
public class JobResource {

  @Inject
  private JobService jobService;

  @POST
  @Consumes(MediaType.APPLICATION_JSON)
  public Response createNewJob(@Valid Job job) {
    job = jobService.save(job);
    return Response.status(Status.CREATED).build();
  }

  @GET
  @Produces(MediaType.APPLICATION_JSON)
  public List<Job> showAll() {
    return jobService.findAll();
  }
}

Commit the code, and push it to the OpenShift application as shown in the following commands:

$ git add .
$ git commit -am "jobstore application created"
$ git push

After the build finishes successfully, the application will be accessible at http://jobstore-{domain-name}.rhcloud.com. Please replace domain-name with your own domain name. To test the REST endpoints, you can use curl. curl is a command-line tool for transferring data across various protocols.
We will use it to test our REST endpoints. To create a new job, you will run the following curl command:

$ curl -i -X POST -H "Content-Type: application/json" -H "Accept: application/json" -d '{"title":"OpenShift Evangelist","description":"OpenShift Evangelist","company":"Red Hat"}' http://jobstore-{domain-name}.rhcloud.com/api/v1/jobs

To view all the jobs, you can run the following curl command:

$ curl http://jobstore-{domain-name}.rhcloud.com/api/v1/jobs

How it works…

In the preceding steps, we created a Java EE application and deployed it on OpenShift. In step 1, you used the rhc create-app command to create a JBoss EAP web cartridge application. The rhc command-line tool makes a request to the OpenShift broker and asks it to create a new application using the JBoss EAP cartridge. Every OpenShift web cartridge specifies a template application that will be used as the default source code of the application. For Java web cartridges (JBoss EAP, JBoss AS7, Tomcat 6, and Tomcat 7), the template is a Maven-based application. After the application is created, it is cloned to the local machine using Git. The directory structure of the application is shown in the following command:

$ ls -a
.git .openshift README.md pom.xml deployments src

As you can see in the preceding command, apart from the .git and .openshift directories, this looks like a standard Maven project. OpenShift uses Maven to manage application dependencies and build your Java applications. Let us take a look at what's inside the jobstore directory to better understand the layout of the application:

The src directory: This directory contains the source code for the template application generated by OpenShift. You need to add your application source code here. The src folder helps in achieving source code deployment when following the standard Maven directory conventions.

The pom.xml file: The Java applications created by OpenShift are Maven-based projects. So, a pom.xml file is required when you do source code deployment on OpenShift. This pom.xml file has a profile called openshift, which will be executed when you push code to OpenShift, as shown in the following code. This profile will create a ROOT WAR file based upon your application source code:

<profiles>
  <profile>
    <id>openshift</id>
    <build>
      <finalName>jobstore</finalName>
      <plugins>
        <plugin>
          <artifactId>maven-war-plugin</artifactId>
          <version>2.1.1</version>
          <configuration>
            <outputDirectory>deployments</outputDirectory>
            <warName>ROOT</warName>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

The deployments directory: You should use this directory if you want to do binary deployments on OpenShift, that is, you want to deploy a WAR or EAR file directly instead of pushing the source code.

The .git directory: This is a local Git repository. This directory contains the complete history of the repository. The config file in .git/ contains the configuration for the repository. It defines a Git remote origin that points to the OpenShift application gear SSH URL. This makes sure that when you do git push, the source code is pushed to the remote Git repository hosted on your application gear. You can view the details of the origin Git remote by executing the following command:

$ git remote show origin

The .openshift directory: This is an OpenShift-specific directory, which can be used for the following purposes:

The files under the action_hooks subdirectory allow you to hook onto the application lifecycle.
The files under the config subdirectory allow you to make changes to the JBoss EAP configuration. The directory contains the standalone.xml JBoss EAP-specific configuration file.

The files under the cron subdirectory are used when you add the cron cartridge to your application. This allows you to run scripts or jobs on a periodic basis.

The files under the markers subdirectory allow you to specify whether you want to use Java 6 or Java 7, whether you want to do hot deployment or debug the application running in the Cloud, and so on.

In step 2, you added the PostgreSQL 9.2 cartridge to the application using the rhc cartridge-add command. We will use the PostgreSQL database to store the jobstore application data. Then, in step 3, you imported the project in the Eclipse IDE as a Maven project. Eclipse Kepler has inbuilt support for Maven applications, which makes it easier to work with Maven-based applications. From step 3 through step 5, you updated the project to use JDK 1.7 for the Maven compiler plugin. All the OpenShift Java applications use OpenJDK 7, so it makes sense to update the application to also use JDK 1.7 for compilation.

In step 6, you created the Job domain class and annotated it with JPA annotations. The @Entity annotation marks the class as a JPA entity. An entity represents a table in the relational database, and each entity instance corresponds to a row in the table. Entity class fields represent the persistent state of the entity. You can learn more about JPA by reading the official documentation at http://docs.oracle.com/javaee/6/tutorial/doc/bnbpz.html. The @NotNull and @Size annotations are Bean Validation annotations. Bean Validation is a new validation model available as a part of the Java EE 6 platform. The @NotNull annotation adds a constraint that the value of the field must not be null. If the value is null, an exception will be raised. The @Size annotation adds a constraint that the value must match the specified minimum and maximum boundaries. You can learn more about Bean Validation by reading the official documentation at http://docs.oracle.com/javaee/6/tutorial/doc/gircz.html.

In JPA, entities are managed within a persistence context. Within the persistence context, the entity manager manages the entities. The configuration of the entity manager is defined in a standard configuration XML file called persistence.xml. In step 7, you created the persistence.xml file. The most important configuration option is the jta-data-source tag. It points to java:jboss/datasources/PostgreSQLDS. When a user creates a JBoss EAP 6 application, OpenShift defines a PostgreSQL datasource in the standalone.xml file. The standalone.xml file is a JBoss configuration file, which includes the technologies required by the Java EE 6 full profile specification plus the Java Connector 1.6 architecture, the Java API for RESTful web services, and OSGi. Developers can override the configuration by making changes to the standalone.xml file in the .openshift/config location of your application directory.
So, if you open the standalone.xml file in .openshift/config/ in your favorite editor, you will find the following PostgreSQL datasource configuration:

<datasource jndi-name="java:jboss/datasources/PostgreSQLDS" enabled="${postgresql.enabled}" use-java-context="true" pool-name="PostgreSQLDS" use-ccm="true">
  <connection-url>jdbc:postgresql://${env.OPENSHIFT_POSTGRESQL_DB_HOST}:${env.OPENSHIFT_POSTGRESQL_DB_PORT}/${env.OPENSHIFT_APP_NAME}</connection-url>
  <driver>postgresql</driver>
  <security>
    <user-name>${env.OPENSHIFT_POSTGRESQL_DB_USERNAME}</user-name>
    <password>${env.OPENSHIFT_POSTGRESQL_DB_PASSWORD}</password>
  </security>
  <validation>
    <check-valid-connection-sql>SELECT 1</check-valid-connection-sql>
    <background-validation>true</background-validation>
    <background-validation-millis>60000</background-validation-millis>
    <!-- <validate-on-match>true</validate-on-match> -->
  </validation>
  <pool>
    <flush-strategy>IdleConnections</flush-strategy>
    <allow-multiple-users />
  </pool>
</datasource>

In step 8, you created stateless Enterprise JavaBeans (EJBs) for our application service layer. The service classes work with the EntityManager API to perform operations on the Job entity. In step 9, you configured CDI by creating the beans.xml file in the src/main/webapp/WEB-INF directory. We are using CDI in our application so that we can use dependency injection instead of manually creating the objects ourselves. The CDI container will manage the bean life cycle, and the developer just has to write the business logic. To let the JBoss application server know that we are using CDI, we need to create a file called beans.xml in our WEB-INF directory. The file can be completely blank, but its presence tells the container that the CDI framework needs to be loaded.

In step 10 and step 11, you configured JAX-RS and defined the REST resources for the Job entity. You activated JAX-RS by creating a class that extends javax.ws.rs.core.Application. You need to specify the base URL under which your web service will be available. This is done by annotating the RestConfig class with the ApplicationPath annotation. You used /api/v1 as the application path.

In step 12, you added and committed the changes to the local repository and then pushed the changes to the application gear. After the bits are pushed, OpenShift will stop all the cartridges and then invoke the mvn -e clean package -Popenshift -DskipTests command to build the project. Maven will build a ROOT.war file, which will be copied to the JBoss EAP deployments folder. After the build successfully finishes, all the cartridges are started, and the new, updated ROOT.war file will be deployed. You can view the running application at http://jobstore-{domain-name}.rhcloud.com. Please replace {domain-name} with your account domain name. Finally, you tested the REST endpoints using curl in step 14.

There's more…

You can perform all the aforementioned steps with just a single command, as follows:

$ rhc create-app jobstore jbosseap postgresql-9.2 --from-code https://github.com/OpenShift-Cookbook/chapter7-jobstore-javaee6-simple.git --timeout 180

Configuring application security by defining the database login module in standalone.xml

The application allows you to create company entities and then assign jobs to them. The problem with the application is that it is not secured. The Java EE specification defines a simple, role-based security model for EJBs and web components.
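To make that model concrete before we wire in the JBoss specifics, the following is a small illustrative sketch, not part of the jobstore code, of how an EJB can be restricted to a role using standard Java EE annotations; the class name and method are assumptions for illustration only:

import javax.annotation.security.RolesAllowed;
import javax.ejb.Stateless;

// Illustrative only: every business method of this bean is restricted
// to authenticated callers that have been mapped to the admin role.
@Stateless
@RolesAllowed("admin")
public class SecuredJobService {

    public void purgeExpiredJobs() {
        // runs only when the container has authenticated the caller
        // and granted it the admin role
    }
}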
JBoss security is an extension to the application server and is included by default with your OpenShift JBoss applications. You can view the extension in the JBoss standalone.xml configuration file. The standalone.xml file exists in the .openshift/config location. The following code shows the extension:

<extension module="org.jboss.as.security" />

OpenShift allows developers to update the standalone.xml configuration file to meet their application needs. You make a change to the standalone.xml configuration file, commit the change to the local Git repository, and then push the changes to the OpenShift application gear. Then, after the successful build, OpenShift will replace the existing standalone.xml file with your updated configuration file and, finally, start the server. But please make sure that your changes are valid; otherwise, the application will fail to start. In this article, you will learn how to define the database login module in standalone.xml to authenticate users before they can perform any operation with the application. The source code for the application created in this article is on GitHub at https://github.com/OpenShift-Cookbook/chapter7-jobstore-security.

Getting ready

This article builds on the Java EE 6 application.

How to do it…

Perform the following steps to add security to your web application:

Create the OpenShift application using the following command:

$ rhc create-app jobstore jbosseap postgresql-9.2 --from-code https://github.com/OpenShift-Cookbook/chapter7-jobstore-javaee6-simple.git --timeout 180

After the application creation, SSH into the application gear, and connect with the PostgreSQL database using the psql client. Then, create the following tables and insert the test data:

$ rhc ssh
$ psql
jobstore=# CREATE TABLE USERS(email VARCHAR(64) PRIMARY KEY, password VARCHAR(64));
jobstore=# CREATE TABLE USER_ROLES(email VARCHAR(64), role VARCHAR(32));
jobstore=# INSERT into USERS values('admin@jobstore.com', 'ISMvKXpXpadDiUoOSoAfww==');
jobstore=# INSERT into USER_ROLES values('admin@jobstore.com', 'admin');

Exit from the SSH shell, and open the standalone.xml file in the .openshift/config directory. Update the security domain with the following code:

<security-domain name="other" cache-type="default">
  <authentication>
    <login-module code="Remoting" flag="optional">
      <module-option name="password-stacking" value="useFirstPass" />
    </login-module>
    <login-module code="Database" flag="required">
      <module-option name="dsJndiName" value="java:jboss/datasources/PostgreSQLDS" />
      <module-option name="principalsQuery" value="select password from USERS where email=?" />
      <module-option name="rolesQuery" value="select role, 'Roles' from USER_ROLES where email=?" />
      <module-option name="hashAlgorithm" value="MD5" />
      <module-option name="hashEncoding" value="base64" />
    </login-module>
  </authentication>
</security-domain>

Create the web deployment descriptor (that is, web.xml) in the src/main/webapp/WEB-INF folder.
Add the following content to it:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
    version="3.0">

  <security-constraint>
    <web-resource-collection>
      <web-resource-name>WebAuth</web-resource-name>
      <description>application security constraints</description>
      <url-pattern>/*</url-pattern>
      <http-method>GET</http-method>
      <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
      <role-name>admin</role-name>
    </auth-constraint>
  </security-constraint>

  <login-config>
    <auth-method>FORM</auth-method>
    <realm-name>jdbcRealm</realm-name>
    <form-login-config>
      <form-login-page>/login.html</form-login-page>
      <form-error-page>/error.html</form-error-page>
    </form-login-config>
  </login-config>

  <security-role>
    <role-name>admin</role-name>
  </security-role>

</web-app>

Create the login.html file in the src/main/webapp directory. The login.html page will be used for user authentication. The following code shows the contents of this file:

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>Login</title>
  <link href="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.1.1/css/bootstrap.css" rel="stylesheet">
</head>
<body>
  <div class="container">
    <form class="form-signin" role="form" method="post" action="j_security_check">
      <h2 class="form-signin-heading">Please sign in</h2>
      <input type="text" id="j_username" name="j_username" class="form-control" placeholder="Email address" required autofocus>
      <input type="password" id="j_password" name="j_password" class="form-control" placeholder="Password" required>
      <button class="btn btn-lg btn-primary btn-block" type="submit">Sign in</button>
    </form>
  </div>
</body>
</html>

Create an error.html file in the src/main/webapp directory. The error.html page will be shown after unsuccessful authentication. The following code shows the contents of this file:

<!DOCTYPE html>
<html>
<head>
  <meta charset="US-ASCII">
  <title>Error page</title>
</head>
<body>
  <h2>Incorrect username/password</h2>
</body>
</html>

Commit the changes, and push them to the OpenShift application gear:

$ git add .
$ git commit -am "enabled security"
$ git push

Go to the application page at http://jobstore-{domain-name}.rhcloud.com, and you will be asked to log in before you can view the application. Use admin@jobstore.com/admin as the username-password combination to log in to the application.

How it works…

Let's now understand what you did in the preceding steps. In step 1, you recreated the jobstore application we developed previously. Next, in step 2, you performed an SSH into the application gear and created the USERS and USER_ROLES tables. These tables will be used by the JBoss database login module to authenticate users. As our application does not have user registration functionality, we created a default user for the application. Storing the password as a clear text string is a bad practice, so we have stored the MD5 hash of the password. The MD5 hash of the admin password is ISMvKXpXpadDiUoOSoAfww==. If you want to generate the hashed password in your application, I have included a simple Java class, which uses org.jboss.crypto.CryptoUtil to generate the MD5 hash of any string. The CryptoUtil class is part of the picketbox library.
The following code depicts this:

import org.jboss.crypto.CryptoUtil;

public class PasswordHash {

  public static String getPasswordHash(String password) {
    return CryptoUtil.createPasswordHash("MD5", CryptoUtil.BASE64_ENCODING, null, null, password);
  }

  public static void main(String[] args) throws Exception {
    System.out.println(getPasswordHash("admin"));
  }
}

In step 3, you logged out of the SSH session and updated the standalone.xml JBoss configuration file with the database login module configuration. There are several login module implementations available out of the box. This article will only talk about the database login module, as discussing all the modules is outside its scope. You can read about all the login modules at https://docs.jboss.org/author/display/AS7/Security+subsystem+configuration. The database login module checks the user credentials against a relational database. To configure the database login module, you have to specify a few configuration options. The dsJndiName option is used to specify the application datasource. As we are using the configured PostgreSQL datasource for our application, you specified the same dsJndiName option value. Next, you have to specify the SQL queries to fetch the user and its roles. Then, you specified that the password will be hashed with the MD5 algorithm by setting the hashAlgorithm configuration option.

In step 4, you applied the database login module to the jobstore application by defining the security constraints in web.xml. This configuration adds a security constraint on all the web resources of the application, restricting access to authenticated users with the role admin. You have also configured your application to use FORM-based authentication. This makes sure that when unauthenticated users visit the website, they are redirected to the login.html page created in step 5. If the user enters a wrong e-mail/password combination, they are redirected to the error.html page created in step 6. Finally, in step 7, you committed the changes to the local Git repository and pushed the changes to the application gear. OpenShift will make sure that the JBoss EAP application server uses the updated standalone.xml configuration file. Now, the user will be asked to authenticate before they can work with the application.

Summary

This article showed us how to configure application security. In this article, we also learned about the different ways in which Java applications can be developed on OpenShift. The article explained the database login module.

Further resources on this subject: Using OpenShift [Article] Troubleshooting [Article] Schemas and Models [Article]
Building an iPhone App Using Swift: Part 2

Ryan Loomba
29 Oct 2014
5 min read
Let’s continue on from Part 1, and add a new table view to our app. In our storyboard, let’s add a table view controller by searching in the bottom right and dragging. Next, let’s add a button to our main view controller that will link to our new table view controller. Similar to what we did with the web view, Ctrl + click on this button and drag it to the newly created table view controller.Upon release, choose push. Now, let’s make sure everything works properly. Hit the large play button and click on Table View. You should now be taken to a blank table: Let’s populate this table with some text. Go to File ->  New ->  File  and choose a Cocoa Touch Class. Let’s call this file TableViewController, and make this a subclass of UITableViewController in the Swift language. Once the file is saved, we’ll be presented with a file with some boilerplate code.  On the first line in our class file, let’s declare a constant. This constant will be an array of strings that will be inserted into our table: let tableArray: NSArray = ["Apple", "Orange", "Banana", "Grape", "Kiwi"] Let’s modify the function that has this signature: func tableView(tableView: UITableView!, numberOfRowsInSection section: Int) -> Int This function returns the number of rows in our table view. Instead of setting this to zero, let’s change this to ten. Next, let’s uncomment the function that has this signature: override func numberOfSectionsInTableView(tableView: UITableView!) -> Int This function controls how many sections we will have in our table view. Let’s modify this function to return 1.  Finally, let’s add a function that will populate our cells: override func tableView(tableView: UITableView!, cellForRowAtIndexPath indexPath: NSIndexPath!) -> UITableViewCell! { let cell: UITableViewCell = UITableViewCell(style: UITableViewCellStyle.Subtitle, reuseIdentifier: "MyTestCell") cell.textLabel.text = tableArray.objectAtIndex(indexPath.row) as NSString return cell }  This function iterates through each row in our table and sets the text value to be equal to the fruits we declared at the top of the class file. The final file should look like this: class TableViewController: UITableViewController { let tableArray: NSArray = ["Apple", "Orange", "Banana", "Grape", "Kiwi"] override func viewDidLoad() { super.viewDidLoad() } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } // MARK: - Table view data source override func numberOfSectionsInTableView(tableView: UITableView!) -> Int { // #warning Potentially incomplete method implementation. // Return the number of sections. return 1 } override func tableView(tableView: UITableView!, numberOfRowsInSection section: Int) -> Int { // #warning Incomplete method implementation. // Return the number of rows in the section. return tableArray.count } override func tableView(tableView: UITableView!, cellForRowAtIndexPath indexPath: NSIndexPath!) -> UITableViewCell! { let cell: UITableViewCell = UITableViewCell(style: UITableViewCellStyle.Subtitle, reuseIdentifier: "MyTestCell") cell.textLabel.text = tableArray.objectAtIndex(indexPath.row) as NSString return cell } } Finally, we need to go back to our storyboard and link to our custom table view controller class. Select the storyboard, click on the table view controller, choose the identity inspector and fill in TableViewController  for the custom class. 
If we click the play button to build our project and then click on our table view button, we should see our table populated with names of fruit:

Adding a map view

Click on the Sample Swift App icon in the top left of the screen and then choose Build Phases. Under Link Binary with Libraries, click the plus button and search for MapKit. Once found, click Add:

In the storyboard, add another view controller. Search for an MKMapView and drag it into the newly created controller. In the main navigation controller, create another button named Map View, Ctrl + click + drag to the newly created view controller, and upon release choose push:

Additionally, choose the Map View in the storyboard, click on the connections inspector, Ctrl + click on delegate, and drag to the map view controller. Next, let's create a custom view controller that will control our map view. Go to File -> New -> File and choose Cocoa Touch. Let's call this file MapViewController and inherit from UIViewController. Let's now link our map view in our storyboard to our newly created map view controller file. In the storyboard, Ctrl + click on the map view and drag to our MapViewController to create an IBOutlet variable. It should look something like this:

@IBOutlet var mapView: MKMapView!

Let's add some code to our controller that will display the map around Apple's campus in Cupertino, CA. I've looked up the GPS coordinates already, so here is what the completed code should look like:

import UIKit
import MapKit

class MapViewController: UIViewController, MKMapViewDelegate {

    @IBOutlet var mapView: MKMapView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let latitude: CLLocationDegrees = 37.331789
        let longitude: CLLocationDegrees = -122.029620
        let latitudeDelta: CLLocationDegrees = 0.01
        let longitudeDelta: CLLocationDegrees = 0.01
        let span: MKCoordinateSpan = MKCoordinateSpan(latitudeDelta: latitudeDelta, longitudeDelta: longitudeDelta)
        let location: CLLocationCoordinate2D = CLLocationCoordinate2DMake(latitude, longitude)
        let region: MKCoordinateRegion = MKCoordinateRegionMake(location, span)
        self.mapView.setRegion(region, animated: true)
        // Do any additional setup after loading the view.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}

This should now build, and when you click on the Map View button, you should be able to see a map showing Apple's campus at the center of the screen.

About the Author

Ryan is a software engineer and electronic dance music producer currently residing in San Francisco, CA. Ryan started out as a biomedical engineer but fell in love with web/mobile programming after building his first Android app. You can find him on GitHub @rloomba.
WebRTC with SIP and IMS

Packt
29 Oct 2014
28 min read
In this article by Altanai Bisht, the author of the book WebRTC Integrator's Guide, we discuss the interaction of the WebRTC client with important IMS nodes and modules. IP Multimedia Subsystem (IMS) is an architectural framework for IP multimedia communications and IP telephony based on convergent applications. It specifies three layers in a telecom network:

- Transport or Access layer: This is the bottom-most segment, responsible for interacting with end systems such as phones.
- IMS layer: This is the middleware, responsible for authenticating and routing the traffic and facilitating call control through the Service layer.
- Service or Application layer: This is the top-most layer, where all of the call control applications and Value Added Services (VAS) are hosted.

(For more resources related to this topic, see here.)

IMS standards are defined by the Third Generation Partnership Project (3GPP), which adopts and promotes Internet Engineering Task Force (IETF) Request for Comments (RFCs). Refer to http://www.3gpp.org/technologies/keywords-acronyms/109-ims to learn more about 3GPP IMS specification releases. This article will walk us through the interaction of the WebRTC client with important IMS nodes and modules. The WebRTC gateway is the first point of contact for SIP requests from the WebRTC client entering the IMS network. The WebRTC gateway converts the SIP over WebSocket implementation to legacy/plain SIP; that is, it is a WebRTC-to-SIP gateway that connects to the IMS world and is able to communicate with a legacy SIP environment. It can also translate other REST- or JSON-based signaling protocols into SIP. The gateway also handles the media operations, which involve DTLS, SRTP, RTP, transcoding, demuxing, and so on. In this article, we will study a case where there exists a simple IMS core environment, and the WebRTC clients are meant to interact after the signals traverse core IMS nodes such as Call Session Control Function (CSCF), Home Subscriber Server (HSS), and Telecom Application Server (TAS).

The interaction with core IMS nodes

This section describes the sequence of steps that must be followed for the integration of the WebRTC client with IMS. Before you go ahead, set up a Session Border Controller (SBC) / WebRTC gateway / SIP proxy node for the WebRTC client to interact with the IMS control layer. Direct the control towards the CSCF nodes of IMS, namely, Proxy-CSCF, Interrogating-CSCF, and Serving-CSCF. The subscriber details and the location are updated in the HSS. Serving-CSCF (S-CSCF) routes the call through the SIP Application Server to invoke any services before the call is processed. The Application Server, which is part of the IMS service layer, is the point where logic is added to call processing in the form of VAS. Additionally, we will uncover the process of integrating a media server for inter-codec conversion between legacy SIP phones and WebRTC clients. The setup will allow us to support all SIP nodes and endpoints as part of the IMS landscape. The following figure shows the placement of the SIPWS-to-SIP gateway in the IMS network: The WebRTC client is a web-based dynamic application that is run over a Web Application Server. For simplification, we can club the components of the WebRTC client and the Web Application Server together and address them jointly as the WebRTC client, as shown in the following diagram: There are four major components of the OpenIMS core involved in this setup, as described in the following sections.
Along with these, two components of the WebRTC infrastructure (the client and the gateway) are also necessary to connect the WebRTC endpoints. Three optional entities are also described as part of this setup. The components of Open IMS are the CSCF nodes and the HSS. More information on each component is given in the following sections.

The Call Session Control Function

The three parts of CSCF are described as follows:

- Proxy-CSCF (P-CSCF) is the first point of contact to which all user equipment (UE) attaches. It is responsible for routing an incoming SIP request to other IMS nodes, such as the registrar and the Policy and Charging Rules Function (PCRF), among others.
- Interrogating-CSCF (I-CSCF) is the inbound SIP proxy server that queries the HSS as to which S-CSCF should serve the incoming request.
- Serving-CSCF (S-CSCF) is the heart of the IMS core, as it enables centralized IMS service control by defining routing paths; it acts as the registrar, interacts with the Media Server, and much more.

Home Subscriber Server

The IMS core Home Subscriber Server (HSS) is the database component responsible for maintaining user profiles, subscriptions, and location information. The data is used in functions such as authentication and authorization of users of IMS services. The components of the WebRTC infrastructure primarily comprise WebRTC Web Application Servers, WebRTC web-based clients, and the SIP gateway:

- WebRTC Web Application Server and client: The WebRTC client is intrinsically a web application that is composed of user interfaces, data access objects, and controllers to handle HTTP requests. A Web Application Server is where an application is hosted. As WebRTC is a browser-based technique, it is meant to be an HTML-based web application. The call functionalities are rendered through the SIP JavaScript files. The browser's native WebRTC capabilities are utilized to capture and transmit the data. A WebRTC service provider must embed the SIP call functions on a web page that has a call interface. It must provide values for the To and From SIP addresses, a div to play audio/video content, and access to users' resources such as the camera, mic, and speakers.
- WebRTC to IMS gateway: This is the point where the conversion of the signal from SIP over WebSockets to legacy/plain SIP takes place. It renders the signaling into a state that the IMS network nodes can understand. For media, it performs the transcoding from WebRTC standard codecs to others. It also performs decryption and demuxing of audio/video/RTCP/RTP.

There are other servers that act as IMS nodes as well, such as the STUN/TURN Server, Media Server, and Application Server. They are described as follows:

- STUN/TURN Server: These are employed for NAT traversal and overcoming firewall restrictions through ICE candidates. They might not be needed when the WebRTC client is on the Internet and the WebRTC gateway is also listening on a publicly accessible IP.
- Media Server: The media server plays a role when media relay is required between the UEs instead of direct peer-to-peer communication. It also comes into the picture for services such as voicemail, Interactive Voice Response (IVR), playback, and recording.
- Application Server (AS): The Application Server is the point where developers can create customized call control logic, such as VAS in the form of call redirection when the receiver is absent, or selective call screening.
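To make the client-to-gateway relationship concrete, here is a minimal JavaScript sketch of how a browser client built on the sipML5 library might be pointed at the WebRTC gateway rather than directly at the P-CSCF. The identities, domain, and gateway address below are placeholder assumptions, not values mandated by this setup:

// Initialize the sipML5 engine, then bring up a SIP stack that signals
// over WebSocket to the gateway, which relays plain SIP towards the P-CSCF.
SIPml.init(function () {
    var stack = new SIPml.Stack({
        realm: 'open-ims.test',                       // IMS home domain (assumed)
        impi: 'alice',                                // private identity (assumed)
        impu: 'sip:alice@open-ims.test',              // public identity (assumed)
        password: 'secret',                           // example credential
        websocket_proxy_url: 'ws://gateway.example.com:10060', // gateway address (assumed)
        events_listener: {
            events: 'started',
            listener: function () {
                // Once the stack is up, send a SIP REGISTER through the gateway
                stack.newSession('register').register();
            }
        }
    });
    stack.start();
});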
The IP Multimedia Subsystem core

IMS is an architecture for real-time multimedia (voice, data, video, and messaging) services over a common IP network. It defines a layered architecture. According to the 3GPP specification, IMS entities are classified into six categories:

- Session management and routing (CSCF, GGSN, and SGSN)
- Database (HSS and SLF)
- Interworking elements (BGCF, MGCF, IM-MGW, and SGW)
- Service (Application Server, MRFC, and MRFP)
- Policy support entities (PDF)
- Billing

Interoperability with the SIP infrastructure requires a session border controller to decrypt the WebRTC control and media flows. A media node is also set up for transcoding between WebRTC codecs and other legacy phones. When a gateway is involved, the WebRTC voice and video peer connections are between the browser and the border controller. In our case, we have been using Kamailio in this role. Kamailio is an open source SIP server capable of processing both SIP and SIPWS signaling. As WebRTC is made to function over SIP-based signaling, it can take advantage of all of the services and solutions made for the IMS environment. The telecom operators can directly mount the services in the Service layer, and subscribers can avail the services right from their web browsers through the WebRTC client. This adds a new dimension to user accessibility and experience. A WebRTC client's true potential will come into effect only when it is integrated with the IMS framework. We have some readymade, open IMS setups that have been tested for WebRTC-to-IMS integration. The setups are as follows:

- 3GPP IMS: This is the IMS specification by 3GPP, which is an association of telecommunications groups
- OpenIMS: This is the open source implementation of the IMS CSCFs and a lightweight HSS for the IMS core
- DoubangoIMS: This is the cross-platform and open source 3GPP IMS/LTE framework
- KamailioIMS: Kamailio Version 4.0 and above incorporates IMS support by means of OpenIMS

We can also use any other IMS structure for the integration. In this article, we will demonstrate the use of OpenIMS. For this, it is required that a WebRTC client and a non-WebRTC client be interoperable by means of signaling and media transcoding. Also, the essential components of the IMS world, such as the HSS, Media Server, and Application Server, should be integrated with the WebRTC setup.

The OpenIMS Core

The Open IMS Core is an open source implementation of core elements of the IMS network, which includes the IMS CSCF nodes and a lightweight HSS. The following diagram shows how a connection is made from WebRTC to CSCF: The following are the prerequisites to install the Open IMS core. Make sure that you have the following packages installed on your Linux machine, as their absence can hinder the IMS installation process:

- Git and Subversion
- GCC3/4, Make, JDK1.5, Ant
- MySQL as the database
- Bison and Flex, the Linux utilities
- libxml2 (Version 2.6 and above) and libmysql, with development versions

Install these packages from the Synaptic package manager or using the command prompt. For the LoST interface of E-CSCF, use the following command lines:

sudo apt-get install mysql-server libmysqlclient15-dev libxml2 libxml2-dev bind9 ant flex bison curl libcurl4-gnutls-dev
sudo apt-get install curl libcurl4-gnutls-dev

The Domain Name Server (DNS), bind9, should be installed and running. To do this, we can run the following command line:

sudo apt-get install bind9

We need a web browser to review the status of the connection on the web console. To download a web browser, go to its download page.
For example, Chrome can be downloaded from https://www.google.com/intl/en_in/chrome/browser/. We must verify that the installed Java version is above 1.5 so as to not break the compilation process midway, and set the path of JAVA_HOME as follows:

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/jre

The output of the command line that checks the Java version is as follows:

The following are the steps to install OpenIMS. As the source code is preconfigured to work from a standard file path of /opt, we will use the predefined directory for installation.
Go to the /opt folder and create a directory to store the OpenIMS core, using the following command lines:

mkdir /opt/OpenIMSCore
cd /opt/OpenIMSCore

Create a directory to store FHoSS, check out the HSS, and compile the source using the following command lines:

mkdir FHoSS
svn checkout http://svn.berlios.de/svnroot/repos/openimscore/FHoSS/trunk FHoSS
cd FHoSS
ant compile deploy

Note that the code requires Java Version 7 or lower to work.
Also, create a directory to store ser_ims, check out the CSCFs, and then install ser_ims using the following command lines:

mkdir ser_ims
svn checkout http://svn.berlios.de/svnroot/repos/openimscore/ser_ims/trunk ser_ims
cd ser_ims
make install-libs all

After downloading and compiling, the contents of the OpenIMS installation directory are as follows:

By default, the nodes are configured to work only on the local loopback, and the default domain configured is open-ims.test. The MySQL access rights are also set only for local access. However, this can be modified using the following steps:
Run the following command line:

/opt/OpenIMSCore/ser_ims/cfg/configurator.sh

Replace 127.0.0.1 (the default IP for the local host) with the new IP address that is required to configure the IMS Core server.
Replace the home domain (open-ims.test) with the required domain name.
Change the database passwords.
The following figure depicts the domain change process through configurator.sh:

To resolve the domain name, we need to add the new IMS domain to the bind configuration directory. Change to the system's bind folder (cd /etc/bind) and copy the open-ims.dnszone file there after replacing the domain name:

sudo cp /opt/OpenIMSCore/ser_ims/cfg/open-ims.dnszone /etc/bind/

Open the named.conf file and include open-ims.dnszone in the list that already exists:

include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
include "/etc/bind/named.conf.default-zones";
include "/etc/bind/open-ims.dnszone";

One can also add a reverse zone file, which, contrary to the DNS zone file, converts an address to a name.
Restart the naming server using the following command:

sudo service bind9 restart

In case of any failure or error, the system logs/reports can be inspected using the following command line:

tail -f /var/log/syslog

Import the SQL scripts for the creation of the database and tables for HSS operations from the command line:

mysql -u root -p -h localhost < ser_ims/cfg/icscf.sql
mysql -u root -p -h localhost < FHoSS/scripts/hss_db.sql
mysql -u root -p -h localhost < FHoSS/scripts/userdata.sql

The following screenshot shows the tables for the HSS database:

Users should be registered with a domain (that is, one needs to make changes in the userdata.sql file by replacing the default domain name with the required domain name). Note that while it is not mandatory to change the domain, it is a good practice to add a new domain that describes the enterprise or service provider's name.
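Before moving on, you can sanity-check the DNS zone and the database import from the shell. The following is a quick sketch; the default open-ims.test domain with its standard pcscf host entry, the hss_db database name created by the FHoSS scripts, and root MySQL credentials are all assumptions to adapt (dig ships with the dnsutils package):

dig @127.0.0.1 open-ims.test
dig @127.0.0.1 pcscf.open-ims.test
mysql -u root -p -e "SHOW TABLES" hss_db

If the dig queries return the addresses you configured and the table listing matches the screenshot, the groundwork is in place.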
The following screenshot shows user domains changed from the default to the personal domain:

Copy the pcscf.cfg, pcscf.sh, icscf.cfg, icscf.xml, icscf.sh, scscf.cfg, scscf.xml, and scscf.sh files to the /opt/OpenIMSCore location.
Start the Proxy Call Session Control Function (P-CSCF) by executing the pcscf.sh script. The default port assigned to P-CSCF is 4060. A screenshot of the running P-CSCF is as follows:
Start the Interrogating Call Session Control Function (I-CSCF) by executing the icscf.sh script. The default port assigned to I-CSCF is 5060. If the script displays a warning about the connection, it is just because the FHoSS client still needs to be started. A screenshot of the running I-CSCF is as follows:
Start the Serving Call Session Control Function (S-CSCF) by executing the scscf.sh script. The default port assigned to S-CSCF is 6060. A screenshot of the running S-CSCF is as follows:
Start the FOKUS Home Subscriber Server (FHoSS) by executing FHoSS/deploy/startup.sh. The HSS interacts using the Diameter protocol. The ports used for this protocol are 3868, 3869, and 3870. A screenshot of the running HSS is shown as follows:
Go to http://<yourip>:8080 and log in to the web console with hssAdmin as the username and hss as the password, as shown in the following screenshot.

To register the WebRTC client with OpenIMS, we must use an IMS gateway that performs the function of converting the SIP over WebSocket format to SIP. In order to achieve this, use the IP port or domain of the P-CSCF node while registering the client. The flow will be from the WebRTC client to the IMS gateway to the P-CSCF of the IMS Core. The flow can also be from the SIPML5 WebRTC client to the webrtc2sip gateway to the P-CSCF of the OpenIMS Core. The subscribers are visible in the IMS subscription section of the OpenIMS portal. The following screenshot shows the user identities and their statuses on a web-based admin console:

As far as other components are concerned, they can be subsequently added to the core network over their respective interfaces.

The Telecom server

The Telecom Application Server (TAS) is where the logic for processing a call resides. It can be used to add applications such as call blocking, call forwarding, and call redirection according to predefined values. The inputs can be assigned at runtime or stored in a database using a suitable provisioning system. The following diagram shows the connection between WebRTC and the IMS Core Server:

For demonstration purposes, we can use an Application Server that can host SIP servlets and integrate it with the IMS core.

The Mobicents Telecom Application Server

Mobicents SIP Servlets and Java APIs for Integrated Networks-Service Logic Execution Environment (JAIN-SLEE) are open platforms to deploy new call controller logic and other converged applications. The steps to install the Mobicents TAS are as follows:
Download the SIP Application Server logic package from https://code.google.com/p/sipservlets/wiki/Downloads.
Unzip the contents.
Make sure that the Java environment variables are in place.
Start the JBoss container from mobicents/jboss-5.1.0.GA/bin. In the case of MS Windows, run run.bat, and for Linux, execute run.sh.
The following figure displays the traces on the console when the server is started on JBoss:

The Mobicents application can also be developed by installing the Tomcat/Mobicents plugin in the Eclipse IDE. A server can also be added for the Mobicents instance, enabling quick deployment of applications. Open the web console to review the settings.
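If the web console is not reachable from another machine, note that JBoss 5 binds to localhost by default. The run script accepts a bind address; the following is a sketch using the Linux script (0.0.0.0 exposes all interfaces, so use it only on trusted networks):

./run.sh -b 0.0.0.0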
The following screenshot displays the process:

In order to deploy Resource Adaptors, enter:

ant -f resources/<name of resource adapter>/build.xml deploy

To undeploy the resource adapters, execute ant undeploy with the name of the resource adapter:

ant -f resources/<name of resource adapter>/build.xml undeploy

Make sure that you have Apache Ant 1.7. The deployed instances should be visible in the web console as follows:

To deploy and run SIP Servlet applications, use the following command line:

ant -f examples/<name of application directory>/build.xml deploy-all

Configure CSCF to include the Application Server in the path of every incoming SIP request and response.

With the introduction of the TAS, it is now possible to provide customized call control logic to all subscribers or to particular subscribers. The SIP solutions and services can range from simple activities, such as call screening and call rerouting, to a complex call-handling application, such as selective call screening based on the user's calendar. Some more examples of SIP applications are given as follows:
Speed Dial: This application lets the user make a call using pre-programmed numbers that map to the actual SIP URIs of users.
Click to Dial: This application makes a call using a web-based GUI. However, it is very different from WebRTC, as it makes/receives the call through an external SIP phone.
Find me Follow Me: This application is beneficial if the user is registered on multiple devices simultaneously, for example, a SIP phone, X-Lite, and WebRTC. In such a case, when there is an incoming call, each of the user's devices rings for a few seconds in order of their recent use so that the user can pick up the call from the device that is nearest to them.

These services are often referred to as VAS; they can be innovative and can take the user experience to new heights.

The Media Server

The Media Server plays a critical role in enabling features such as Interactive Voice Response (IVR), voicemail recording, and playing announcements. The Media Server can be used as a standalone entity in the WebRTC infrastructure, or it can be referenced from the SIP server in the IMS environment.

The FreeSWITCH Media Server

FreeSWITCH has powerful Media Server capabilities, including those for functions such as IVR, conferencing, and voicemail. We will first see how to use FreeSWITCH as a standalone entity that provides SIP and RTP proxy features. Let's configure and install a basic setup of the FreeSWITCH Media Server using the following steps:
Download and store the source code for compilation in the /usr/src folder by running the following command lines:

cd /usr/src
git clone -b v1.4 https://stash.freeswitch.org/scm/fs/freeswitch.git

The compiled binaries will be installed into a directory named /usr/local/freeswitch. Assign ownership of this folder to your user:

sudo chown -R <username> /usr/local/freeswitch

Replace <username> with the name of the user who should own the folder.
Go to the directory where the source is stored, that is, the following directory:

cd /usr/src/freeswitch

Then, run bootstrap using the following command line:

./bootstrap.sh

One can add additional modules by editing the configuration file using the vi editor. We can open the file using the following command line:

vi modules.conf

The names of the modules are already listed. Remove the # symbol before a name to include the module at runtime, and add # to skip the module.
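If you prefer not to edit modules.conf interactively, the same toggle can be scripted. The following one-liner is a sketch that enables a hypothetical example module, mod_voicemail, by stripping its leading # (the applications/mod_voicemail path is an assumption based on the stock modules.conf layout):

cd /usr/src/freeswitch
sed -i 's|^#applications/mod_voicemail|applications/mod_voicemail|' modules.conf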
Then, run the configure command:

./configure --enable-core-pgsql-support

Use the make command and install the components:

make && make install

Go to the Sofia profile and uncomment the parameter defined for WebSocket binding. By doing so, WebRTC clients can register with FreeSWITCH on port 443. Sofia is the SIP stack used by FreeSWITCH. By default, it supports only pure SIP requests; to let WebRTC clients register with FreeSWITCH's SIP server, uncomment the following line:

<!-- uncomment for SIP over WebSocket support -->
<param name="ws-binding" value=":443"/>

Install the sound files using the following command line:

make all cd-sounds-install cd-moh-install

Go to the installation directory, and in the vars.xml file under freeswitch/conf/, make sure that the codec preferences are set as follows:

<X-PRE-PROCESS cmd="set" data="global_codec_prefs=G722,PCMA,PCMU,GSM"/>
<X-PRE-PROCESS cmd="set" data="outbound_codec_prefs=G722,PCMA,PCMU,GSM"/>

Make sure that the SIP profile is directly using the codec values as follows:

<param name="inbound-codec-prefs" value="$${global_codec_prefs}"/>
<param name="outbound-codec-prefs" value="$${global_codec_prefs}"/>

We can later add more codecs, such as vp8, for video calling/conferencing.
To start FreeSWITCH, go to the /freeswitch/bin installation directory and run the freeswitch binary.
Run the command-line console that will be used to control and monitor the passing SIP packets by going to the /freeswitch/bin installation directory and executing fs_cli. The following is a screenshot of the FreeSWITCH client console:
Go to the /freeswitch/conf/sip_profiles installation directory and look at the existing configuration files. Load and start a SIP profile using the following command line:

sofia profile <name of profile> start

Restart and reload the profile in case of changes using the following command line:

sofia profile <name of profile> restart

Check that it is working by executing the following command line:

sofia status

We can check the status of an individual SIP profile by executing the following command line:

sofia status profile <name of profile> reg

The preceding figure depicts the status of the users registered with the server at one point in time.

Media Services

The following steps outline the process of using the FreeSWITCH media services:
Register the SIP softphone and WebRTC client with FreeSWITCH. Use sample values between 1000 and 1020 initially. Later, we can configure more users, as specified by the /freeswitch/conf/directory installation directory. The following are sample values to register Kapanga:
Username: 1002
Display name: any
Domain/Realm: 14.67.87.45
Outbound proxy: 14.67.87.45:5080
Authorization user: 1002
Password: 1234
The sample values for WebRTC client registration, if, for example, we decide to use the sipml5 WebRTC client, will be as follows:
Display name: any
Private identity: 1001
Public identity: sip:1001@14.67.87.45
Password: 1234
Realm: 14.67.87.45
WebSocket Server URL: ws://14.67.87.45:443
Note that the values used here are arbitrary and for the purpose of understanding only. The IP denotes the public IP of the FreeSWITCH machine, and the port is the WebSocket port configured in the Sofia profile. As seen in the following screenshot, we are required to tick the Enable RTCWeb Breaker option in Expert settings to compensate for incompatibilities that might arise between WebRTC and plain SIP endpoints:
Make a call between the SIP softphone and the WebRTC client.
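Between calls, the same console commands can be issued non-interactively with fs_cli's -x switch, which is handy for scripted checks. This is a small sketch, assuming fs_cli is on your PATH and FreeSWITCH's stock internal profile name (substitute your own profile):

fs_cli -x "sofia status"
fs_cli -x "sofia status profile internal reg"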
In such a call, the signaling and media pass through FreeSWITCH as a proxy. A call from a WebRTC client is depicted in the following screenshot; it consists of SIP messages passing through the FreeSWITCH server, which are therefore visible in the FreeSWITCH client console. In this case, the server is operating in the default mode; the other modes are the bypass and proxy modes.
Make a call between two WebRTC clients, where SIP and RTP are passing through FreeSWITCH as a proxy.

We can use other services of FreeSWITCH as well, such as voicemail, IVR, and conferencing. We can also configure this setup in such a way that the media passes through the FreeSWITCH Media Server while the SIP signaling goes via the Kamailio SIP server.
Use the RTP proxy in the SIP proxy server, in our case, Kamailio, to pass the RTP media through the Media Server. The RTP proxy module of Kamailio should be built and then configured in the kamailio.cfg file. The RTP proxy forces the RTP to pass through a node, as specified in the settings parameters. It enables communication between SIP user agents behind NAT and can also be used to set up a relaying host for RTP streams.
Configure the RTP Engine as the media proxy agent for RTP. It will be used to force the WebRTC media through it rather than following the peer-to-peer fashion in which WebRTC is designed to operate. Perform the following steps to configure the RTP Engine:
Go to the Kamailio installation directory and then to the RTPProxy module. Run the make command and install the proxy engine:

cd rtpproxy
./configure && make

Load the module and its parameters in the kamailio.cfg file:

listen=udp:<ip>:<port>
..
loadmodule "rtpproxy.so"
..
modparam("rtpproxy", "rtpproxy_sock", "unix:/var/run/rtpproxy/rtpproxy.sock")

Add rtpproxy_manage() for all of the requests and responses in the kamailio.cfg file. An example of rtpproxy_manage for INVITE is:

if (is_method("INVITE")) {
...
rtpproxy_manage();
...
};

Get the source code for the RTP Engine using git as follows:

git clone https://github.com/sipwise/rtpengine.git

Go to the daemon folder in the installation directory and run the make command as follows:

sudo make

Start rtpengine in the default user-space mode on the local machine:

sudo ./rtpengine --ip=10.1.5.14 --listen-ng=12334

Check that rtpengine is running using the following command:

ps -ef | grep rtpengine

Note that rtpengine must be installed on the same machine as the Kamailio SIP server.

In the case of the sipml5 client, after configuring the modules described in the preceding section and before making a call through the Media Server, the flow for the media will become one of the following:
In the case of Voicemail/IVR, the flow is as follows: WebRTC client to RTP proxy node to Media Server
In the case of a call through media relay, the flow is as follows: WebRTC client A to RTP proxy node to Media Server to RTP proxy to WebRTC client B
The following diagram shows the MediaProxy relay between WebRTC clients:

The potential of the Media Server lies in its transcoding between various media codecs. Different phones/call clients/software that support SIP as the signaling protocol do not necessarily support the same media codecs. In a situation where the Media Server is absent and the codecs do not match between a caller and receiver, the attempt to make a call is abruptly terminated when the media exchange needs to take place, that is, after the invite, success response, and acknowledgement are sent.
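Before restarting Kamailio with the new RTP proxy settings, it is worth validating the configuration syntax, and running rtpengine in the foreground makes startup errors immediately visible. The following is a sketch; the configuration path is an assumption to adapt to your install, the IP and port repeat the arbitrary values used above, and flag names can differ slightly between rtpengine versions:

kamailio -c -f /etc/kamailio/kamailio.cfg
sudo ./rtpengine --ip=10.1.5.14 --listen-ng=12334 --foreground --log-stderr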
In the following figure, the setup to traverse media through the FreeSWITCH Media Server and signaling through the Kamailio SIP server is depicted:

The role of the rtpproxy-ng module is to enable media to pass via the Media Server; this is shown in the following diagram:

WebRTC over firewalls and proxies

There are many complicated issues involved in the correct working of WebRTC across domains, NATs, geographies, and so on. For now, it is important that the firewall of a system, or any kind of port-blocking policy, be turned off to be able to make a successful audio-video WebRTC call between any two parties that are not on the same Local Area Network (LAN). For the user to not have to switch the firewall off, we need to configure a Simple Traversal of UDP through NAT (STUN) server or modify the Interactive Connectivity Establishment (ICE) parameter in the SDP exchanged. STUN helps in the packet routing of devices behind a NAT firewall. STUN only helps in device discoverability by assigning publicly accessible addresses to devices within a private local network. Traversal Using Relays around NAT (TURN) servers also serve to accomplish the task of interconnecting the endpoints behind NAT. As the name suggests, TURN forces media to be proxied through the server. To learn more about ICE as a NAT-traversal mechanism, refer to the official document named RFC 5245.

The ICE features are defined by sipML5 in the sipml.js file. They are added to the SIP SDP during the initial phase of setting up the SIP stack. Snippets regarding the ICE declaration are given as follows:

var configuration = {
...
websocket_proxy_url: 'ws://192.168.0.10:5060',
outbound_proxy_url: 'udp://192.168.0.12:5060',
ice_servers: [
{ url: 'stun:stun.l.google.com:19302' },
{ url: 'turn:user@numb.viagenie.ca', credential: 'myPassword' }
],
...
};

Under the postInit function in the call.htm page, add the following configuration:

oConfigCall = {
...
events_listener: { events: '*', listener: onSipEventSession },
SIP_caps: [
{ name: '+g.oma.SIP-im' },
{ name: '+SIP.ice' },
{ name: 'language', value: '"en,fr"' }
]
};

With this, the WebRTC client is able to reach a client behind the firewall; however, the media can display unpredictable behavior. If you need to create your own STUN/TURN server, you can take the help of RFC 5766, or you can refer to open source implementations, such as the project at the following site (a sketch of starting such a server follows at the end of this section):
https://code.google.com/p/rfc5766-turn-server/

When setting the parameters for WebRTC, we can add our own STUN/TURN server. The following screenshot shows the inputs suitable for ICE Servers if you are using your own TURN/STUN server:

If there are no firewall restrictions, for example, if the users are on the same network without any corporate proxies and port blocks, we can omit ICE by entering empty brackets, [], in the ICE Servers option on the Expert settings page in the WebRTC client.

The final architecture for the WebRTC-to-IMS integration

At the end of this article, we have arrived at an architecture similar to the following diagram, which depicts a basic WebRTC-to-IMS architecture. The diagram shows the WebRTC client in the Transport Layer, as it is the user endpoint. The IMS entities (CSCF and HSS), the WebRTC-to-IMS gateway, and the Media Server nodes are placed in the Network Control Layer, as they help in signal and media routing. The applications for call control are placed in the topmost Application Layer, which processes the call control logic.
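Picking up the forward reference above: starting a server built from the rfc5766-turn-server project for a quick test can look like the following sketch. The listening address, realm, and user/password pair (reusing the arbitrary credentials from the earlier snippet) are all assumptions to adapt, and flag names may vary between versions, so check turnserver --help:

sudo turnserver --listening-ip=0.0.0.0 --listening-port=3478 --user=user:myPassword --realm=myrealm.example -v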
The architecture above serves to provide a basic IMS-based setup for SIP-based WebRTC client interaction.

Summary

In this article, we saw how to interconnect the WebRTC setup with the IMS infrastructure. This included interaction with the CSCF nodes, namely P-CSCF, I-CSCF, and S-CSCF, after building and installing them from their sources. Also, the FreeSWITCH Media Server was discussed, and the steps to build and integrate it were practiced. The Application Server used to embed call control logic was the Mobicents TAS, while Kamailio served as the SIP server. NAT traversal via STUN/TURN servers was also discussed and its importance highlighted. To deploy the WebRTC solution integrated with the IMS network, we must ensure that all of the required IMS nodes are consulted while making a call, that the values are reflected in the HSS data store, and that incoming SIP requests and responses are routed via the call logic of the Application Server before a call is connected.

Resources for Article:

Further resources on this subject:
Using the WebRTC Data API [Article]
Implementing Stacks using JavaScript [Article]
Applying WebRTC for Education and E-learning [Article]

Scaling friendly font rendering with distance fields

Packt
28 Oct 2014
8 min read
This article by David Saltares Márquez and Alberto Cejas Sánchez, the authors of Libgdx Cross-platform Game Development Cookbook, describes how we can generate a distance field font and render it in Libgdx. As a bitmap font is scaled up, it becomes blurry due to linear interpolation. It is possible to tell the underlying texture to use the nearest filter, but the result will be pixelated. Additionally, until now, if you wanted big and small pieces of text using the same font, you would have had to export it twice at different sizes. The output texture gets bigger rather quickly, and this is a memory problem.

(For more resources related to this topic, see here.)

Distance field fonts use a technique that enables us to scale monochromatic textures without losing quality, which is pretty amazing. It was first published by Valve (Half Life, Team Fortress…) in 2007. It involves an offline preprocessing step and a very simple fragment shader when rendering, but the results are great and there is very little performance penalty. You also get to use smaller textures! In this article, we will cover the entire process of how to generate a distance field font and how to render it in Libgdx.

Getting ready

For this, we will load the data/fonts/oswald-distance.fnt and data/fonts/oswald.fnt files. To generate the fonts, Hiero is needed, so download the latest Libgdx package from http://libgdx.badlogicgames.com/releases and unzip it. Make sure the sample projects are in your workspace. Please visit the link https://github.com/siondream/libgdx-cookbook to download the sample projects, which you will need.

How to do it…

First, we need to generate a distance field font with Hiero. Then, a special fragment shader is required to finally render scaling-friendly text in Libgdx.

Generating distance field fonts with Hiero

Open up Hiero from the command line. Linux and Mac users only need to replace semicolons with colons and backslashes with forward slashes:

java -cp gdx.jar;gdx-natives.jar;gdx-backend-lwjgl.jar;gdx-backend-lwjgl-natives.jar;extensions\gdx-tools\gdx-tools.jar com.badlogic.gdx.tools.hiero.Hiero

Select the font using either the System or File options. This time, you don't need a really big size; the point is to generate a small texture and still be able to render text at high resolutions while maintaining quality. We have chosen 32 this time.
Remove the Color effect, and add a white Distance field effect.
Set the Spread effect; the thicker the font, the bigger this value should be. For Oswald, 4.0 seems to be a sweet spot.
To cater to the spread, you need to set a matching padding. Since this will make the characters render further apart, you need to counterbalance this by setting the X and Y values to twice the negative padding.
Finally, set the Scale to be the same as the font size. Hiero will struggle to render the charset, which is why we wait until the end to set this property.
Generate the font by going to File | Save BMFont files (text)....
The following is the Hiero UI showing a font texture with a Distance field effect applied to it:

Distance field fonts shader

We cannot use the distance field texture to render text for obvious reasons—it is blurry! A special shader is needed to get the information from the distance field and transform it into the final, smoothed result. The vertex shader found in data/fonts/font.vert is simple. The magic takes place in the fragment shader, found in data/fonts/font.frag and explained later.
First, we sample the alpha value from the texture for the current fragment and call it distance. Then, we use the smoothstep() function to obtain the actual fragment alpha. If distance is between 0.5-smoothing and 0.5+smoothing, Hermite interpolation will be used. If the distance is greater than 0.5+smoothing, the function returns 1.0, and if the distance is smaller than 0.5-smoothing, it will return 0.0. The code is as follows:

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform sampler2D u_texture;

varying vec4 v_color;
varying vec2 v_texCoord;

const float smoothing = 1.0/128.0;

void main() {
    float distance = texture2D(u_texture, v_texCoord).a;
    float alpha = smoothstep(0.5 - smoothing, 0.5 + smoothing, distance);
    gl_FragColor = vec4(v_color.rgb, alpha * v_color.a);
}

The smoothing constant determines how hard or soft the edges of the font will be. Feel free to play around with the value and render fonts at different sizes to see the results. You could also make it a uniform and configure it from the code.

Rendering distance field fonts in Libgdx

Let's move on to DistanceFieldFontSample.java, where we have two BitmapFont instances: normalFont (pointing to data/fonts/oswald.fnt) and distanceFont (pointing to data/fonts/oswald-distance.fnt). This will help us illustrate the difference between the two approaches. Additionally, we have a ShaderProgram instance for our previously defined shader. In the create() method, we instantiate both the fonts and the shader normally:

normalFont = new BitmapFont(Gdx.files.internal("data/fonts/oswald.fnt"));
normalFont.setColor(0.0f, 0.56f, 1.0f, 1.0f);
normalFont.setScale(4.5f);

distanceFont = new BitmapFont(Gdx.files.internal("data/fonts/oswald-distance.fnt"));
distanceFont.setColor(0.0f, 0.56f, 1.0f, 1.0f);
distanceFont.setScale(4.5f);

fontShader = new ShaderProgram(Gdx.files.internal("data/fonts/font.vert"), Gdx.files.internal("data/fonts/font.frag"));

if (!fontShader.isCompiled()) {
    Gdx.app.error(DistanceFieldFontSample.class.getSimpleName(), "Shader compilation failed:\n" + fontShader.getLog());
}

We need to make sure that the texture our distanceFont just loaded is using linear filtering:

distanceFont.getRegion().getTexture().setFilter(TextureFilter.Linear, TextureFilter.Linear);

Remember to free up resources in the dispose() method, and let's get on with render(). First, we render some text with the regular font using the default shader, and right after this, we do the same with the distance field font using our awesome shader:

batch.begin();
batch.setShader(null);
normalFont.draw(batch, "Distance field fonts!", 20.0f, VIRTUAL_HEIGHT - 50.0f);

batch.setShader(fontShader);
distanceFont.draw(batch, "Distance field fonts!", 20.0f, VIRTUAL_HEIGHT - 250.0f);
batch.end();

The results are pretty obvious; it is a huge win in memory and quality for a very small price in GPU time. Try increasing the font size even more and be amazed at the results! You might have to slightly tweak the smoothing constant in the shader code though:

How it works…

Let's explain the fundamentals behind this technique. However, for a thorough explanation, we recommend that you read the original paper by Chris Green from Valve (http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf). A distance field is a derived representation of a monochromatic texture. For each pixel in the output, the generator determines whether the corresponding one in the original is colored or not.
Then, it examines its neighborhood to determine the 2D distance, in pixels, to a pixel with the opposite state. Once the distance is calculated, it is mapped to a [0, 1] range, with 0 being the maximum negative distance and 1 being the maximum positive distance. A value of 0.5 indicates the exact edge of the shape. The following figure illustrates this process:

Within Libgdx, the BitmapFont class uses SpriteBatch to render text normally, only this time, it is using a texture with a Distance field effect applied to it. The fragment shader is responsible for performing a smoothing pass. If the alpha value for a fragment is higher than 0.5, it can be considered as in; it will be out in any other case. This produces a clean result.

There's more…

We have applied distance fields to text, but we have also mentioned that the technique can work with monochromatic images. It is simple; you need to generate a low-resolution distance field transform. Luckily enough, Libgdx comes with a tool that does just this. Open a command-line window, access your Libgdx package folder, and enter the following command:

java -cp gdx.jar;gdx-natives.jar;gdx-backend-lwjgl.jar;gdx-backend-lwjgl-natives.jar;extensions\gdx-tools\gdx-tools.jar com.badlogic.gdx.tools.distancefield.DistanceFieldGenerator

The distance field generator takes the following parameters:
--color: This parameter is in hexadecimal RGB format; the default is ffffff
--downscale: This is the factor by which the original texture will be downscaled
--spread: This is the edge scan distance, expressed in terms of the input

Take a look at this example:

java […] DistanceFieldGenerator --color ff0000 --downscale 32 --spread 128 texture.png texture-distance.png

Alternatively, you can use the gdx-smart-font library to handle scaling. It is a simpler but somewhat more limited solution (https://github.com/jrenner/gdx-smart-font).

Summary

In this article, we have covered the entire process of how to generate a distance field font and how to render it in Libgdx.

Further resources on this subject:
Cross-platform Development - Build Once, Deploy Anywhere [Article]
Getting into the Store [Article]
Adding Animations [Article]

Execution of Test Plans

Packt
28 Oct 2014
23 min read
In this article by Bayo Erinle, author of JMeter Cookbook, we will cover the following recipes:
Using the View Results Tree listener
Using the Aggregate Report listener
Debugging with Debug Sampler
Using Constant Throughput Timer
Using the JSR223 postprocessor
Analyzing Response Times Over Time
Analyzing transactions per second

(For more resources related to this topic, see here.)

One of the critical aspects of performance testing is knowing the right tools to use to attain your desired targets. Even when you settle on a tool, it is helpful to understand its features, component sets, and extensions, and to apply them appropriately when needed. In this article, we will go over some helpful components that will aid you in recording robust and realistic test plans while effectively analyzing reported results. We will also cover some components to help you debug test plans.

Using the View Results Tree listener

One of the most often used listeners in JMeter is the View Results Tree listener. This listener shows a tree of all sample responses, giving you quick navigation to any sample's response time, response codes, response content, and so on. The component offers several ways to view the response data, some of which allow you to debug CSS/jQuery, regular expressions, and XPath queries, among other things. In addition, the component offers the ability to save responses to a file, in case you need to store them for offline viewing or run some other processes on them. Along with the various bundled testers, the component provides a search functionality that allows you to quickly search the responses of relevant items.

How to do it…

In this recipe, we will cover how to add the View Results Tree listener to a test plan and then use its built-in testers to test the response and derive expressions that we can use in postprocessor components. Perform the following steps:
Launch JMeter.
Add Thread Group to the test plan by navigating to Test Plan | Add | Threads (Users) | Thread Group.
Add HTTP Request to the thread group by navigating to Thread Group | Add | Sampler | HTTP Request. Fill in the following details:
Server Name or IP: dailyjs.com
Add the View Results Tree listener to the test plan by navigating to Test Plan | Add | Listener | View Results Tree.
Save and run the test plan.
Once done, navigate to the View Results Tree component and click on the Response Data tab. Observe some of the built-in renders.
Switch to the HTML render view by clicking on the dropdown and use the search textbox to search for any word on the page.
Switch to the HTML (download resources) render view by clicking on the dropdown.
Switch to the XML render view by clicking on the dropdown. Notice that the entire HTML DOM structure is presented as XML node elements.
Switch to the RegExp Tester render view by clicking on the dropdown and try out some regular expression queries.
Switch to the XPath Query Tester render view and try out some XPath queries.
Switch to the CSS/jQuery Tester render view and try out some jQuery queries, for example, selecting all links inside divs marked with a class preview (Selector: div.preview a, Attribute: href, CSS/jQuery Implementation: JSOUP).

How it works…

As your test plans execute, the View Results Tree listener reports each sampler in your test plan individually.
The Sampler Result tab of the component gives you a summarized view of the request and response, including information such as load time, latency, response headers, body content sizes, response code and messages, response header content, and so on. The Request tab shows the actual request that got fulfilled by the sampler, which could be any of the acceptable requests the server can fulfill (for example, GET, POST, PUT, DELETE, and so on), along with details of the request headers. Finally, the Response Data tab gives the rendered view of the response received back from the server. The component includes several built-in renders along with tester components (CSS/jQuery, RegExp, and XPath) that allow us to test and come up with the right expressions or queries needed in postprocessor components within our test plans. This is a huge time saver, as it means we don't have to exercise the same tests repeatedly to nail down such expressions.

There's more…

As with most things bundled with JMeter, additional view renders can be added to the View Results Tree component. The defaults included are Document, HTML, HTML (download resources), JSON, Text, and XML. Should any of these not suit your needs, you can create additional ones by implementing the org.apache.jmeter.visualizers.ResultRenderer interface and/or extending the org.apache.jmeter.visualizers.SamplerResultTab abstract class, bundling up the compiled classes as a JAR file, and placing them in the $JMETER_HOME/lib/ext directory to make them available to JMeter.

The View Results Tree listener consumes a lot of memory and CPU resources, and should not be used during load testing. Use it only to debug and validate the test plans.

See also

The Debugging with Debug Sampler recipe
The detailed component reference for the View Results Tree listener can be found at http://jmeter.apache.org/usermanual/component_reference.html#View_Results_Tree

Using the Aggregate Report listener

Another often used listener in JMeter is the Aggregate Report listener. This listener creates a row for each uniquely named request in the test plan. Each row gives a summarized view of useful information, including Request Count, Average, Median, Min, Max, 90% Line, Error Rate, Throughput, Requests/second, and KB/sec. The 90% Line column is particularly worth paying close attention to as you execute your tests. This figure gives you the time it takes for the majority of threads/users to execute a particular request. It is measured in milliseconds. Higher numbers here are indicative of slow requests and/or components within the application under test. Equally important is the Error % column, which reports the failure rate of each sampled request. It is reasonable to have some level of failure when exercising test runs, but too high a number is an indication of either errors in the scripts or failing components in the application under test. Finally, of interest to stakeholders might be the number of requests per second, which the Throughput column reports. The throughput values are approximate and let you know just how many requests per second the server is able to handle.

How to do it…

In this recipe, we will cover how to add an Aggregate Report listener to a test plan and then see the summarized view of our execution:
Launch JMeter.
Open the ch7_shoutbox.jmx script bundled with the code samples. Alternatively, you can download it from https://github.com/jmeter-cookbook/bundled-code/scripts/ch7/ch7_shoutbox.jmx.
Add the Aggregate Report listener to Thread Group by navigating to Thread Group | Add | Listener | Aggregate Report.
Save and run the test plan.
Observe the real-time summary of results in the listener as the test proceeds.

How it works…

As your test plans execute, the Aggregate Report listener reports each sampler in your test plan on a separate row. Each row is packed with useful information. The Label column reflects the sample name, # Samples gives a count of each sampler, and Average, Median, Min, and Max all give you the respective times of each sampler. As mentioned earlier, you should pay close attention to the 90% Line and Error % columns. These can help quickly pinpoint problematic components within the application under test and/or scripts. The Throughput column gives an idea of the responsiveness of the application under test and/or server. This can also be indicative of the capacity of the underlying server that the application under test runs on. This entire process is demonstrated in the following screenshot:

Using the Aggregate Report listener

See also

http://jmeter.apache.org/usermanual/component_reference.html#Summary_Report

Debugging with Debug Sampler

Often, in the process of recording a new test plan or modifying an existing one, you will need to debug the scripts to finally get your desired results. Without such capabilities, the process would be a mix of trial and error and would become a time-consuming exercise. Debug Sampler is a nifty little component that generates a sample containing the values of all JMeter variables and properties. The generated values can then be seen in the Response Data tab of the View Results Tree listener. As such, to use this component, you need to have a View Results Tree listener added to your test plan. This component is especially useful when dealing with postprocessor components, as it helps to verify the correct or expected values that were extracted during the test run.

How to do it…

In this recipe, we will see how we can use Debug Sampler to debug a postprocessor in our test plans. Perform the following steps:
Launch JMeter.
Open the prerecorded script ch7_debug_sampler.jmx bundled with the book. Alternatively, you can download it from http://git.io/debug_sampler.
Add Debug Sampler to the test Thread Group by navigating to Thread Group | Add | Sampler | Debug Sampler.
Save and run the test.
Navigate to the View Results Tree listener component.
Switch to RegExp Tester by clicking on the dropdown. Observe the response data of the Get All Requests sampler. What we want is a regular expression that will help us extract the ID of entries within this response. After a few attempts, we settle on "id":(\d+).
Enable all the currently disabled samplers, that is, Request/Create Holiday Request, Modify Holiday, Get All Requests, and Delete Holiday Request. You can achieve this by selecting all the disabled components, right-clicking on them, and clicking on Enable.
Add the Regular Expression Extractor postprocessor to the Request/Create Holiday Request sampler by navigating to Request/Create Holiday Request | Add | Post Processors | Regular Expression Extractor. Fill in the following details:
Reference Name: id
Regular Expression: "id":(\d+)
Template: $1$
Match No.: 0
Default Value: NOT_FOUND
Save and rerun the test.
Observe the ID of the newly created holiday request and whether it was correctly extracted and reported in Debug Sampler.
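Once the plan has been validated in the GUI, the same script can also be exercised without the GUI, writing results to a file that listeners such as Aggregate Report can load afterwards. A sketch, assuming the jmeter launcher is on your PATH and using the script from this recipe:

jmeter -n -t ch7_debug_sampler.jmx -l results.jtl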
How it works…

Our goal was to test a REST API endpoint that allows us to list, modify, and delete existing resources or create new ones. When we create a new resource, the identifier (ID) is autogenerated by the server. To perform any other operations on the newly created resource, we need to grab its autogenerated ID, store it in a JMeter variable, and use it further down the execution chain. In step 7, we were able to observe the format of the server response for the resource when we executed the Get All Requests sampler. With the aid of RegExp Tester, we were able to nail down the right regular expression to use to extract the ID of a resource, that is, "id":(\d+). Armed with this information, we added a Regular Expression Extractor postprocessor component to the Request/Create Holiday Request sampler and used the derived expression to get the ID of the newly created resource. We then used the ID stored in JMeter to modify and delete the resource down the execution chain. After test completion, with the help of Debug Sampler, we were able to verify whether the resource ID was properly extracted by the Regular Expression Extractor component and stored in JMeter as the id variable.

Using Constant Throughput Timer

While running test simulations, it is sometimes necessary to be able to specify the throughput in terms of the number of requests per minute. This is the function of Constant Throughput Timer. This component introduces pauses to the test plan in such a way as to keep the throughput as close as possible to the specified target value. Though the name implies it is constant, various factors affect the behavior, such as server capacity, and other timers or time-consuming elements in the test plan. As a result, the targeted throughput could be lowered.

How to do it…

In this recipe, we will add Constant Throughput Timer to our test plan and see how we can specify the expected throughput with it. Perform the following steps:
Launch JMeter.
Open the prerecorded script ch7_constant_throughput.jmx bundled with the book. Alternatively, you can download it from http://git.io/constant_throughput.
Add Constant Throughput Timer to Thread Group by navigating to Thread Group | Add | Timer | Constant Throughput Timer. Fill in the following details:
Target throughput (in samples per minute): 200
Calculate Throughput based on: this thread only
Save and run the test plan. Allow the test to run for about 5 minutes.
Observe the result in the Aggregate Report listener as the test is going on.
Stop the test manually, as it is currently set to run forever.

How it works…

The goal of the Constant Throughput Timer component is to get your test plan samples as close as possible to a specified desired throughput. It achieves this by introducing variable pauses to the test plan in such a manner as to keep the numbers as close as possible to the desired throughput. That said, throughput will be lowered if the server resources of the system under test can't handle the load. Also, other elements (for example, other timers, the number of specified threads, and so on) within the test plan can affect attaining the desired throughput. In our recipe, we have specified the throughput rate to be calculated based on a single thread, but Constant Throughput Timer also allows throughput to be calculated based on all active threads, or all active threads in the current thread group. Each of these settings can be used to alter the behavior of the desired throughput.
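As a rough worked example of the pacing involved: with the 200 samples per minute configured above, calculated on this thread only, a single thread should start a new sample roughly every 60,000 / 200 = 300 milliseconds, so the timer inserts pauses sized to approximate that interval after accounting for each sample's own duration.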
As a rule of thumb, avoid using other timers at the same time as Constant Throughput Timer, since you'll not achieve the desired throughput.

See also

The Using Throughput Shaping Timer recipe
http://jmeter.apache.org/usermanual/component_reference.html#timers

Using the JSR223 postprocessor

The JSR223 postprocessor allows you to use precompiled scripts within test plans. The fact that the scripts are compiled before they are actually used brings a significant performance boost compared to other postprocessors. This also allows a variety of programming languages to be used, including Java, Groovy, BeanShell, JEXL, and so on, letting us harness the powerful language features of those languages within our test plans. JSR223 components, for example, could help us tackle preprocessor or postprocessor elements and samplers, allowing us more control over how elements are extracted from responses and stored as JMeter variables.

How to do it…

In this recipe, we will see how to use a JSR223 postprocessor within our test plan. We have chosen Groovy (http://groovy.codehaus.org/) as our scripting language, but any of the other supported languages will do:
Download the standard set of plugins from http://jmeter-plugins.org/.
Install the plugins by doing the following:
Extract the ZIP archive to the location of your chosen directory
Copy the lib folder in the extracted directory into the $JMETER_HOME directory
Download the groovy-all JAR file from http://devbucket-afriq.s3.amazonaws.com/jmeter-cookbook/groovy-all-2.3.3.jar and add it to the $JMETER_HOME/lib directory.
Launch JMeter.
Add Thread Group by navigating to Test Plan | Add | Threads(Users) | Thread Group.
Add Dummy Sampler to Thread Group by navigating to Thread Group | Add | Sampler | jp@gc - Dummy Sampler.
In the Response Data text area, add the following content:

<records>
  <car name='HSV Maloo' make='Holden' year='2006'>
    <country>Australia</country>
    <record type='speed'>Production Pickup Truck with speed of 271kph</record>
  </car>
  <car name='P50' make='Peel' year='1962'>
    <country>Isle of Man</country>
    <record type='size'>Smallest Street-Legal Car at 99cm wide and 59 kg in weight</record>
  </car>
  <car name='Royale' make='Bugatti' year='1931'>
    <country>France</country>
    <record type='price'>Most Valuable Car at $15 million</record>
  </car>
</records>

Download the Groovy script file from http://git.io/8jCXMg to any location of your choice. Alternatively, you can get it from the code sample bundle accompanying the book (ch7_jsr223.groovy).
Add JSR223 PostProcessor as a child of Dummy Sampler by navigating to jp@gc - Dummy Sampler | Add | Post Processors | JSR223 PostProcessor.
Select Groovy as the language of choice in the Language drop-down box.
In the File Name textbox, put in the absolute path to the Groovy script file, for example, /tmp/scripts/ch7/ch7_jsr223.groovy.
Add the View Results Tree listener to the test plan by navigating to Test Plan | Add | Listener | View Results Tree.
Add Debug Sampler to Thread Group by navigating to Thread Group | Add | Sampler | Debug Sampler.
Save and run the test.
Observe the Response Data tab of Debug Sampler and see how we now have the JMeter variables car_0, car_1, and car_2, all extracted from the response data and populated by our JSR223 postprocessor component.
How it works…

JMeter exposes certain variables to the JSR223 component, allowing it to get hold of sample details and information, perform logic operations, and store the results as JMeter variables. The exposed attributes include log, Label, Filename, Parameters, args[], ctx, vars, props, prev, sampler, and OUT. Each of these allows access to important and useful information that can be used during the postprocessing of sampler responses. The log attribute gives access to a Logger (an Apache Commons Logging Log instance; see http://bit.ly/1xt5dmd), which can be used to write log statements to the logfile. The Label and Filename attributes give us access to the sample label and script file name respectively. The Parameters and args[] attributes give us access to parameters sent to the script. The ctx attribute gives access to the current thread's JMeter context (http://bit.ly/1lM31MC). vars allows us to write values into JMeter variables (http://bit.ly/1o5DDBr), exposing them to the rest of the test plan. The props attribute gives us access to JMeter properties. The sampler attribute gives us access to the current sampler, while OUT allows us to write log statements to the standard output, that is, System.out. Finally, the prev attribute gives access to the previous sample result (http://bit.ly/1rKn8Cs), allowing us to get useful information such as the response data, headers, assertion results, and so on.

In our script, we made use of the prev and vars attributes. With prev, we were able to get hold of the XML response from the sample. Using Groovy's XmlSlurper (http://bit.ly/1AoRMnb), we were able to effortlessly process the XML response and compose the interesting bits, storing them as JMeter variables using the vars attribute. Using this technique, we are able to accomplish tasks that might otherwise have been cumbersome to achieve using the other postprocessor elements we have seen in other recipes. We are able to take full advantage of the language features of any chosen scripting language. In our case, we used Groovy, but any other supported scripting language you are comfortable with will do as well.

See also

http://jmeter.apache.org/api
http://jmeter.apache.org/usermanual/component_reference.html#BSF_PostProcessor
http://jmeter.apache.org/api/org/apache/jmeter/threads/JMeterContext.html
http://jmeter.apache.org/api/org/apache/jmeter/threads/JMeterVariables.html
http://jmeter.apache.org/api/org/apache/jmeter/samplers/SampleResult.html

Analyzing Response Times Over Time

An important aspect of performance testing is the response times of the application under test. As such, it is often important to visually see the response times over a duration of time as the test plan is executed. Out of the box, JMeter comes with the Response Time Graph listener for this purpose, but it is limited and lacks some features. Such features include the ability to focus on a particular sample when viewing chart results, controlling the granularity of timeline values, selectively choosing which samples appear in the resulting chart, controlling whether to use relative graphs or not, and so on. To address all these and more, the Response Times Over Time listener extension from the JMeter plugins project comes to the rescue. It shines in the areas where the Response Time Graph falls short.

How to do it…

In this recipe, we will see how to use the Response Times Over Time listener extension in our test plan and get the response times of our samples over time.
Perform the following steps:
Download the standard set of plugins from http://jmeter-plugins.org/.
Install the plugins by doing the following:
Extract the ZIP archive to the location of your chosen directory
Copy the lib folder in the extracted directory into the $JMETER_HOME directory
Launch JMeter.
Open any of your existing prerecorded scripts or record a new one. Alternatively, you can open the ch7_response_times_over_time.jmx script accompanying the book or download it from http://git.io/response_times_over_time.
Add the Response Times Over Time listener to the test plan by navigating to Test Plan | Add | Listener | jp@gc - Response Times Over Time.
Save and execute the test plan.
View the resulting chart in the tab by clicking on the Response Times Over Time component. Observe the time elapsed on the x axis and the response time in milliseconds on the y axis for all samples contained in the test plan.
Navigate to the Rows tab and exclude some of the samples from the chart by unchecking the selection boxes next to the samples.
Switch back to the Chart tab and observe that the chart now reflects your changes, allowing you to focus in on the samples of interest.
Switch to the Settings tab and see all the available configuration options. Change some options and repeat the test execution. This is shown in the following screenshot:

Analyzing Response Times Over Time
How to do it… In this recipe, we will see how to use the Transactions Per Second listener extension in our test plan and get the transactions per second for a test API service: Download the standard set of plugins from http://jmeter-plugins.org/. Install the plugins by doing the following:    Extract the ZIP archive to the location of your chosen directory    Copy the lib folder in the extracted directory into the $JMETER_HOME directory Launch JMeter. Open the ch7_transaction_per_sec.jmx script accompanying the book or download it from http://git.io/trans_per_sec. Add the Transactions Per Second listener to the test plan by navigating to Test Plan | Add | Listener | jp@gc - Transactions per Second. Save and execute the test plan. View the resulting chart in the tab by clicking on the Transactions Per Second component. Observe the time elapsed on the x axis and the transactions/sec on the y axis for all samples contained in the test plan. Navigate to the Rows tab and exclude some of the samples from the chart by unchecking the selection boxes next to the samples. Switch back to the Chart tab and observe that the chart now reflects your changes, allowing you to focus in on interesting samples. Switch to the Settings tab and see all the available configuration options. Change some options and repeat the test execution. How it works… The Transactions Per Second listener extension displays the transactions per second for each sample in the test plan by counting the number of successfully completed transactions each second. It comes with various configuration options that allow you to customize the resulting graph. Such configurations allow you to focus in on specific samples of interest in your test plan, helping you to get at impending bottlenecks within the application under test. It is helpful to give your samples sensible descriptive names to help make better sense of the resulting graphs and data points. This is shown in the following screenshot: Analyzing Transactions per Second Summary In this article, you learned how to build a test plan using the steps mentioned in the recipe. Furthermore, you saw how to debug and analyze the result of a test plan after building it. Resources for Article: Further resources on this subject: Functional Testing with JMeter [article] Performance Testing Fundamentals [article] Common performance issues [article]

Basic Concepts of Proxmox Virtual Environment

Packt
27 Oct 2014
20 min read
This article by Simon M.C. Cheng, author of the book Proxmox High Availability, will show you some basic concepts of Proxmox VE before you actually use it, including the technology involved, basic administration, and some options available during setup. The following topics are going to be covered in this article:

- An explanation of the server virtualization used by Proxmox VE
- An introduction to the basic administrative tools available in Proxmox VE
- An explanation of the different virtualization modes and storage options

(For more resources related to this topic, see here.)

Introduction to server virtualization

Have you ever heard about cloud computing? It is a hot topic in the IT industry, and it claims that you can allocate nearly unlimited computing resources on a pay-as-you-go basis. Have you ever wondered how providers are able to offer such a service? The underlying technology that allows them to do so is hardware virtualization. Depending on the kind of processor used, there are three different types of virtualization available: full virtualization, para-virtualization, and hardware-assisted virtualization:

- Full virtualization: In this mode, the virtual machine monitor (VMM) is placed in Ring 0 while the virtualized guest OS is installed in Ring 1. However, some system calls can only be executed in Ring 0. Therefore, a process called binary translation is used to translate such system calls, which degrades performance. In this mode, the guest OS does not know it is being virtualized, so it does not require kernel modification. Here is a simple structure for this type of virtualization:
- Para-virtualization: This is very similar to full virtualization, but custom drivers are installed on the guest OS in order to access CPU resources without dropping to Ring 1. The performance of the guest OS is therefore close to that of the physical machine, because the translation process is not needed, but the guest OS requires a modified kernel. Thus, the guest cannot run an operating system different from the host operating system. The following shows the structure of this virtualization:
- Hardware-assisted virtualization: CPU manufacturers have introduced new functionality for virtualized platforms: Intel VT-x and AMD-V. Ring levels 0 to 3 are categorized as non-root mode, and a new level, -1, is introduced as the root mode. The guest OS is now installed in Ring 0, which means it can access hardware directly. Because it does not need a custom API to make system calls in Ring 0, no kernel modification is needed. The following diagram shows you the structure of this virtualization mode:

Comparing server virtualization software

We have discussed why we need to learn server virtualization and how virtualization works, so let's take a closer look at the major virtualization software products in the market and the differences between them:

- Proxmox VE: As mentioned earlier, Proxmox VE is an open source hypervisor based on GNU/Linux (Debian-based) with an RHEL-based kernel, published under GNU AGPL v3. It differs from alternative virtualization software in that Proxmox provides central web-based management without further installation. The underlying technologies used are Open Virtuozzo (OpenVZ) and Kernel-based Virtual Machine (KVM), which are described later in this article. Subscription plans are available for access to the enterprise repository, software updates, and user support.
- XenServer: This is a native hypervisor based on GNU/Linux, developed by the Xen Project and published as open source under GNU GPL v2. In Xen, the concept of a domain is used for all virtual machines, and the most privileged domain, dom0 (which, for example, has direct access to the hardware), is used by the hypervisor to manage the other domU virtual machines. It supports para-virtualization, which allows a user to run virtualized guests on a CPU without support for virtualization; that is, no Intel VT-x or AMD-V is needed. Amazon Web Services (AWS) is a production example of Xen in use.
- VMware ESX/ESXi: This is a bare-metal hypervisor developed by VMware, based on a custom microkernel called vmkernel. The difference between ESX and ESXi is that ESXi is a free version of ESX with some resource limitations. ESX has a hardware compatibility list that includes many drivers for network cards and SCSI cards, and extra hardware drivers can be added to the base installation media if needed. On top of para-virtualization and hardware-assisted virtualization, ESX provides full virtualization as another option. There are two management tools available: vSphere Client and vCenter Server. vSphere Client is enough for normal administration operations on a single ESX host, while vCenter Server allows the user to manage multiple ESX hosts, including the configuration of advanced features such as high availability and live migration.
- Hyper-V Server: This is a proprietary virtualization platform produced by Microsoft Corporation, running on the Windows platform starting from Windows Server 2008. If you mainly use Windows for your virtualized guests, it is recommended that you use Hyper-V, especially if you have Active Directory Domain Services enabled. Hyper-V provides better migration options to users; it not only provides live migration, but also unlimited guest movements between hosts. A benefit of the Active Directory domain features is that Hyper-V provides replicas of virtual machines, which allow a user to copy a specific VM from a source site to a target site asynchronously via a WAN or a secure VPN.

Virtualization options explained in Proxmox VE

There are two types of virtualization available in Proxmox: OpenVZ and KVM:

- OpenVZ is operating-system-level virtualization based on the GNU/Linux kernel and the host operating system. Strictly speaking, OpenVZ is not a type of virtualization but more like the jail concept in Linux. Since a patched Linux kernel is needed, only Linux guests can be created. All guests are called containers; they share the same kernel and architecture as the host OS, while each container keeps a separate user space.
- Kernel-based Virtual Machine (KVM) is basically hardware-assisted virtualization using a modified Linux kernel built with the KVM module. KVM itself does not perform any emulation or virtualization. Instead, it simply exposes the /dev/kvm interface. QEMU is used as a software-based emulator to simulate hardware for the virtualized environment.
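Incidentally, since KVM works by exposing the /dev/kvm interface, a quick way to confirm that it is available on a Linux host is to check for that device. The following one-line R sketch is illustrative only and not part of Proxmox itself:

# illustrative check: is the KVM interface exposed on this Linux host?
file.exists("/dev/kvm")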
Virtual disk options under Proxmox VE

During virtual machine creation, the following virtual disk options are available:

- RAW: This is a raw file format. The disk space is allocated during creation and uses up the full specified size. Compared with QCOW2, it gives better overall performance.
- QCOW2: This is an enhanced version of QCOW, which offers a provisioning ability for disk storage used by QEMU. Compared to the older version, QCOW2 offers the capability to create multiple virtual machine snapshots. The following features are supported:
  - Thin-provisioning: During disk creation, a file smaller than the specified size is created, and the specified size is configured as the maximum size of the disk image. The file size grows according to the usage inside the guest system; this is called thin-provisioning.
  - Snapshot: QCOW2 allows the user to create snapshots of the current system. With the use of copy-on-write technology and a read-only base image, differential backup can be achieved.
- VMDK: This disk image file format is used by VMware. The virtual disk file can be imported back into VMware Player, ESX, and ESXi. It also provides a thin-provisioning function, like QCOW2.

What is availability?

What does availability mean? Availability is expressed as the percentage of uptime within a year. To calculate it, we subtract the Downtime duration (DD) from the Expected uptime (EU), divide the result by the Expected uptime, and multiply by 100:

Availability (%) = ((EU - DD) / EU) x 100

- Downtime duration (DD): This refers to the number of hours for which the system is unavailable.
- Expected uptime (EU): This refers to the expected system availability; normally, we expect the system to be available 24 x 7, that is, 365 x 24 hours in a year.

For example, if a system is down for 8.76 hours in a year (out of 8,760 expected hours), its availability is ((8760 - 8.76) / 8760) x 100 = 99.9 percent.
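As a quick illustration of the formula, here is a minimal R sketch; the downtime figures are illustrative only:

# minimal sketch of the availability formula; figures are illustrative
availability <- function(expected_uptime, downtime) {
  (expected_uptime - downtime) / expected_uptime * 100
}

hours_per_year <- 365 * 24              # expected uptime (EU) in hours
availability(hours_per_year, 8.76)      # 8.76 hours down  -> 99.9
availability(hours_per_year, 0.0876)    # about 5 min down -> 99.999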
Negative effects of system downtime

What problems does downtime bring? Let's have a look:

- Loss of customer trust: This has a huge impact if your application is an online buying platform. When a user tries to pay for the products or services they have selected and the system responds with an error page, or even worse a blank screen, would that customer still trust the platform as a reliable one? I think the answer is no. Customers tend to share bad experiences with friends, and thus the company's reputation is damaged.
- System recovery: Behind the scenes, lots of system recovery and troubleshooting tasks must be carried out. Sometimes we have to wait for support from the vendor, and essential spare parts might not be available for an older system. If that's the case, the repair time will be longer than normal, while you still have to pay the rack rental fee in the data center.
- Reduced productivity of internal staff: If the affected server contains an internal system, the daily operation of a group of staff is affected. For example, if it is a CRM system, the sales staff cannot load customer information to process; if it is a financial system, the accountants cannot send and receive money from banks and customers.

Introduction to DRBD

DRBD is short for Distributed Replicated Block Device, and it is intended for use in high-availability environments. DRBD provides high availability by mirroring an existing system to another machine, including the disk storage, network card status, and services running on the existing system. So, if the existing system goes out of service, we can instantly switch to the backup system to avoid service interruption. Besides high availability, a few more functions are provided by the Proxmox cluster mode, the most important of which is live migration. Unlike normal migration, in a Proxmox cluster, migration can be performed without shutting down the virtual machine. This approach is called live migration, and it greatly reduces the downtime of each virtual machine.

The Proxmox Cluster file system (pmxcfs)

Are you curious about how multiple configuration files are managed in Proxmox cluster mode? The Proxmox Cluster file system (pmxcfs) is a built-in function that a Proxmox cluster provides to synchronize configuration files between cluster member nodes. It is an essential component of a Proxmox cluster, acting as version control for configuration files, including the cluster configuration, the virtual machine configurations, and so on. It is basically a database-driven file system that stores configuration files for all host servers and replicates them in real time to all host nodes using corosync. The underlying file system is created with FUSE and currently has a maximum size of 30 MB. Here are the concepts behind this file system:

- FUSE: This is short for Filesystem in Userspace, which allows users to define their own device in their own user space. With the use of FUSE, we don't have to worry about crashing the system if we have misconfigured a file system, because FUSE is not linked to the system kernel.
- Corosync: This is short for the Corosync Cluster Engine, a group communication system allowing clients to communicate with each other.

The following diagram shows the structure of the Proxmox Cluster file system:

An explanation of Logical Volume Manager (LVM)

Unlike building a local RAID 1 device using the mdadm command, here we need to form an LVM volume with a dedicated local disk on multiple servers. LVM is used to simplify the management of large hard disks. By adding an abstraction layer, users are able to add or replace their hard disks without downtime, in combination with hot swapping. Besides this, users are able to add, remove, and resize their LVM volumes, or even create a RAID volume, easily. The structure of LVM is shown as follows:

The Gluster file system

GlusterFS is a distributed file system running in a server-client architecture. It makes use of the native Gluster protocol, but it can also be seen as an NFS share or even work as object storage (Amazon S3-like networked key-value storage) with GlusterFS UFO. Gluster over LVM with iSCSI provides an auto-healing function. With auto healing, a Gluster client is still able to read and write files even if one Gluster server has failed, which is similar to what RAID 1 offers. Let's check out how the Gluster file system handles server failure:

1. Initially, we need to have at least two storage servers installed with the Gluster server package in order to enjoy the auto-healing functionality. On the client side, we configure Replicate mode and mount the file system at /glusterfs. In this mode, the file content will be stored on both storage servers, as follows:
2. If Storage 1 fails, the Gluster client will redirect requests to Storage 2. When Storage 1 becomes available again, the updated content will be synchronized from Storage 2. Therefore, the client will not notice that there was a server failure. This is shown in the following diagram:

Thus, the Gluster file system can provide high availability if we use replication mode. For performance, we can also distribute files to more servers, as follows:

The Ceph filesystem

Ceph is also a distributed filesystem providing petabyte-level storage, but it is more focused on eliminating single points of failure. To ensure high availability, replicas are created on other storage nodes.
Ceph is developed based on the concept of RADOS (reliable autonomic distributed object store), with different access methods provided:

Accessing method     Supported platforms                Usage
Library packages     C, C++, Java, Python, Ruby, PHP    Programming
RADOS gateway        Amazon S3, Swift                   Cloud platform
RBD                  KVM                                Virtualization
CEPH file system     Linux kernel, FUSE                 File system

Here is a simple diagram demonstrating the structure of the CEPH file system:

What is a fencing device?

A fencing device, as the name states, is a virtual fence that prevents communication between two nodes. It is used to separate a failed node from the shared resources. If two nodes access the shared resources at the same time, a collision occurs, which might corrupt the shared data, that is, the data inside the virtual machines.

Available fencing device options

It is very important to protect our data from corruption, so what types of fencing devices are available, and how do they build their fences during a node failure? There are two approaches, as listed below:

- Power fencing: Both nodes are added to the fencing device for monitoring. If there is a suspected failure on the production node for a period of time, the fencing device simply turns off the power outlet to which the affected server is connected, while the backup node takes over its position to provide services. For the failed node, the power switch sends a notification to the system administrator; manual recovery is required, but there is no service interruption on the client side.
- Network fencing: The server nodes are connected to a network switch instead of a power switch. There are two types of network fencing:
  - IPMI: This requires a separate IPMI card or an onboard IPMI port to function. While the operating system is running, periodic query checks can be performed to ensure the accessibility of the monitored server. If the query check fails too many times, an IPMI message is sent to the failed node to turn it off. Here is how it operates:
  - SNMP: When there is a suspected failure on a server node for a period of time, the network port between the network switch and the failed server is disabled, which prevents the failed server from accessing the shared storage. When the operator wants to bring the server back into service, manual configuration is required. Here is a diagram of how it operates:

The concept of a quorum device

The voting system is a democratic one, which means there is one vote for each node. So, if we only have two nodes, neither can win the race, which causes a racing problem. As a result, we need to add a third node to the system (that is, the quorum device in our case). Here is a sample of why the racing problem appears and how we can fix it:

1. Assume we have a cluster with only two nodes; the diagram above shows the initial state of the cluster. We have marked Node 1 as the primary node.
2. Here, Node 1 is disconnected, and therefore Node 2 would like to take over its position to become the primary node. But it cannot succeed, because two votes are needed for the role-switching operation. Therefore, the cluster becomes non-operational until Node 1 is recovered, as follows:
3. When Node 1 recovers from the failure, it tries to rejoin the cluster but fails, because the cluster has stopped working.

To solve this problem, an extra node is recommended to join the cluster in order to create a high-availability environment. Here is an example in which a node fails and Node 2 can become the primary node:
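Beyond the diagrams, the majority rule itself is simple. Here is a minimal R sketch of the vote count (illustrative only, not Proxmox code), showing why a two-node cluster cannot survive a single failure while a three-node cluster can:

# minimal sketch of the quorum rule: the cluster stays operational
# only while the active nodes hold more than half of the total votes
has_quorum <- function(active_votes, total_votes) {
  active_votes > total_votes / 2
}

has_quorum(1, 2)  # FALSE: a two-node cluster loses quorum on one failure
has_quorum(2, 3)  # TRUE: a third vote lets the cluster tolerate one failure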
The concept of a bonding device

For the network interfaces, bonding devices (Bond0 and Bond1) will be created in Proxmox. A bonding device, also called NIC teaming, is a native Linux kernel feature that allows the user to double network throughput or provide network redundancy. There are two options for network redundancy, 802.3ad and Active-backup, and they have different response patterns when handling multiple sessions:

- In 802.3ad mode, both network interfaces are active, so sessions can be processed by different network cards; this is an active-active model.
- On the other hand, only one interface is in the active state in Active-backup mode. The backup interface becomes active only if the active one fails.

The concepts of the cluster manager

The following points explain the concepts of the cluster manager:

- Resource Group Manager (RGManager) combines with the cluster manager (CMAN) and distributed lock manager (DLM) processes to manage and provide failover capabilities for collections of resources called services, resource groups, or resource trees. It is the essential process for high availability of services. If this service is turned off, the HA function is disabled.
- Cluster Manager (CMAN) is the main process of the cluster architecture. CMAN manages the quorum state and the status of the different cluster members. To check the status of all cluster members, monitoring messages are sent periodically to all cluster nodes. If there is any status change on a cluster member, it is distributed to all other cluster nodes. CMAN is also responsible for quorum management: when more than half of the node members are active, the cluster is said to be healthy. If the number of active member nodes drops below a majority, all cluster-related activity is blocked:
  - Any change to the cluster.conf file is not allowed
  - The resource manager cannot be started, which disables the HA function
  - Any VM creation operation is blocked

  NOTE: The operation of existing virtual machines without high availability is not affected.
- Distributed Lock Manager (DLM) is used by the resource group manager, applying different lock modes to resources to prevent multiple concurrent accesses. For details, please refer to http://en.wikipedia.org/wiki/Distributed_lock_manager.

I have prepared a simple diagram showing the relationship between them:

Backing up virtual machine data in Proxmox

After we have made a copy of the container configurations, we are going to back up the actual data inside the virtual machines. There are two different methods: manual backup with the vzdump command for both KVM and OpenVZ guests, and backup via the GUI management console.

Backup with vzdump for virtual machines

There are three different backup approaches in the vzdump command:

- Stop mode: This stops the VM during backup, which results in a long backup time.
- Suspend mode: This uses the rsync command to copy the data to a temporary location (defined in --tmpdir), and then performs a second rsync operation while suspending the container. When the second rsync operation completes, the suspended VM is resumed.
- Snapshot mode: This mode makes use of the LVM2 snapshot function. It requires extra space within the LVM volume.

Backup with the web management console

Apart from manually backing up a container using the command-line interface, we can also do it from the web management interface.
Here are the steps to perform a backup with the GUI:

1. Log in to the web management console with the root account information.
2. Browse the left panel to locate the virtual machine to be backed up.
3. Choose the Backup tab in the right panel, and you will see only the latest backup files you created in the previous steps:
4. Then, we can simply click on the Backup button to open the backup dialog. Notice that Proxmox uses a TAR package as the output format and makes use of Snapshot mode by default. Therefore, make sure you have enough free space in the Volume Group that stores the virtual machine data before using the default values. By default, the volume group used is pve, which is mounted at /var/lib/vz, and you cannot place your dump file in the same volume group.
5. From the dialog, we can choose whether the backup output file is compressed or not. To conserve disk space, here we choose GZIP as the compression method and choose snapshot mode to enjoy a zero-downtime backup process, as follows:

Building up our own OS template

There are two types of templates: one is the OpenVZ template and the other is the VM template:

- OpenVZ template: This is only used for building OpenVZ containers, not KVM machines, which limits the choice of operating system to Linux platforms.
- VM template: Introduced with the Proxmox 3.x series, this is used to deploy KVM virtual machines and therefore removes the limitation on the operating system.

Here are the steps to download an OpenVZ template:

1. Log in to the web interface of Proxmox and find the local storage in the panel on the left-hand side.
2. Click on the Content tab and choose Templates, as shown in the following screenshot:
3. Next, we need to find a suitable template to download; for example, we can download a system with Drupal installed, as shown in the following screenshot:
4. Then, we click on the Download button. When the download completes, the template file is listed on the templates page, as shown in the following screenshot:

Troubleshooting system access problems

Basically, it should not be difficult for you to install a Proxmox server from scratch. But after performing a few installations on different platforms, I noticed there are a few scenarios that might cause you trouble. Here are the problems I have found.

Undefined video mode number

- Symptom: On some motherboards, you will receive the Undefined video mode number warning after you have pressed Enter to begin the installation. It simply tells you that you cannot run the fancy installation wizard, as shown below:
- Root cause: The main problem is the display chipset. When your motherboard uses a display chipset that is not VESA 2.0 compatible, this error message appears. To learn more about VESA 2.0, please find the following links:
- Solution: You will be asked to press either <ENTER> or <SPACE>, or to wait for 30 seconds to continue. If you press <ENTER>, the possible video modes available on your system will be shown, and you can pick a display mode number from that list. Normally, you can choose display mode 314, with an 800 x 600 resolution and 16-bit color depth, or display mode 311, which provides a 640 x 480 resolution and 16-bit color depth. Then you should be able to continue the installation process.
- Prevention: I found that this problem normally happens with Nvidia display cards. If possible, you can try to replace yours with an Intel or ATI display card during the installation.
Summary

In this article, we explained the concept of virtualization and compared Proxmox with other virtualization software.

Resources for Article:

Further resources on this subject:

- A Virtual Machine for a Virtual World [article]
- Planning Desktop Virtualization [article]
- Setting up of Software Infrastructure on the Cloud [article]

The EMR Architecture

Packt
27 Oct 2014
6 min read
This article is written by Amarkant Singh and Vijay Rayapati, the authors of Learning Big Data with Amazon Elastic MapReduce. The goal of this article is to introduce you to the EMR architecture and EMR use cases.

(For more resources related to this topic, see here.)

Traditionally, very few companies had access to the large-scale infrastructure needed to build Big Data applications. However, cloud computing has democratized access to infrastructure, allowing developers and companies to quickly perform new experiments without worrying about setting up or scaling infrastructure. A cloud provides an infrastructure-as-a-service platform that allows businesses to build applications and host them reliably on scalable infrastructure. It also includes a variety of application-level services to help developers accelerate their development and deployment times. Amazon EMR is one of the hosted services provided by AWS and is built on top of scalable AWS infrastructure for building Big Data applications.

The EMR architecture

Let's get familiar with EMR. This section outlines its key concepts. Hadoop offers distributed processing by using the MapReduce framework for the execution of tasks on a set of servers or compute nodes (also known as a cluster). One of the nodes in the Hadoop cluster controls the distribution of tasks to the other nodes and is called the Master Node. The nodes executing the tasks using MapReduce are called Slave Nodes:

Amazon EMR is designed to work with many other AWS services, such as S3 for input/output data storage, and DynamoDB and Redshift for output data. EMR uses AWS CloudWatch metrics to monitor cluster performance and raise notifications for user-specified alarms. We can create on-demand Hadoop clusters using EMR while storing the input and output data in S3, without worrying about managing a 24*7 cluster or HDFS for data storage. The Amazon EMR job flow is shown in the following diagram:

Types of nodes

Amazon EMR provides three different roles for the servers or nodes in the cluster, and they map to the Hadoop roles of master and slave nodes. When you create an EMR cluster, it is called a Job Flow, and it executes a set of jobs or job steps one after the other:

- Master node: This node controls and manages the cluster. It distributes the MapReduce tasks to the nodes in the cluster and monitors the status of task execution. Every EMR cluster has exactly one master node, in a master instance group.
- Core nodes: These nodes execute MapReduce tasks and provide HDFS for storing the data related to task execution. Core nodes are part of the EMR cluster's core instance group and correspond to slave nodes in Hadoop. So, basically, these nodes have a two-fold responsibility: the first is to execute the map and reduce tasks allocated by the master, and the second is to hold the data blocks.
- Task nodes: These nodes are used only for MapReduce task execution, and they are optional when launching an EMR cluster. Task nodes correspond to slave nodes in Hadoop and are part of a task instance group in EMR.

When you scale down your cluster, you cannot remove any core nodes; this is because EMR doesn't want to let you lose your data blocks. You can, however, remove nodes from a task group while scaling down your cluster. You should also use only task instance groups for spot instances, as spot instances can be taken away depending on your bid price, and you would not want to lose your data blocks.

You can launch a cluster with just one node, that is, with only a master node and no other nodes. In that case, the same node acts as both a master and a core node. For simplicity, you can think of a node as an EC2 server in EMR.

EMR use cases

Amazon EMR can be used to build a variety of applications, such as recommendation engines, data analysis, log processing, event/clickstream analysis, data transformations (ETL), fraud detection, scientific simulations, genomics, financial analysis, or data correlation, in various industries. The following sections outline some of these use cases in detail.

Web log processing

We can use EMR to process logs to understand the usage of content such as videos and file downloads, the top web URLs accessed by end users, user consumption from different parts of the world, and much more. We can process any web or mobile application logs using EMR to extract insights relevant to our business. We can move all our web or mobile application logs to Amazon S3 for analysis using EMR, even if we are not using AWS to run our production applications.
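To make this concrete, here is a minimal sketch of a Hadoop Streaming mapper written in R that could run as an EMR step to count requests per URL. It is only an illustration: it assumes logs in the common/combined format (where the request path is the seventh whitespace-separated field) and that R is available on the cluster nodes, for example via a bootstrap action. A corresponding reducer would sum the emitted counts per key.

#!/usr/bin/env Rscript
# Minimal sketch of a streaming mapper: reads log lines from stdin and
# emits "<url>\t1" per request; assumes combined-format logs where the
# request path is the 7th whitespace-separated field
con <- file("stdin", open = "r")
while (length(line <- readLines(con, n = 1)) > 0) {
  fields <- strsplit(line, " ")[[1]]
  if (length(fields) >= 7) {
    cat(fields[7], "\t1\n", sep = "")
  }
}
close(con)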
Clickstream analysis

By using clickstream analysis, we can segment users into different groups and understand their behavior with respect to advertisements or application usage. Ad networks or advertisers can perform clickstream analysis on ad-impression logs to deliver more effective campaigns or advertisements to end users. Reports generated from this analysis can include various metrics, such as source traffic distribution, purchase funnel, lead source ROI, and abandoned carts, among others.

Product recommendation engine

Recommendation engines can be built using EMR for e-commerce, retail, or web businesses. Many e-commerce businesses have a large inventory of products across different categories and regularly add new products or categories, which makes it very difficult for end users to search for and identify products quickly. With recommendation engines, we can help end users quickly find relevant products, or suggest products based on what they are viewing, and so on. We may also want to notify users via e-mail based on their past purchase behavior.

Scientific simulations

When you need distributed processing on large-scale infrastructure for scientific or research simulations, EMR can be of great help. We can quickly launch large clusters in a matter of minutes and install specific MapReduce programs for analysis using EMR. AWS also offers genomics datasets for free on S3.

Data transformations

We can perform complex extract, transform, and load (ETL) processes using EMR for either data analysis or data warehousing needs. This can be as simple as transforming XML file data into JSON data for further usage, or moving all the financial transaction records of a bank into a common date-time format for archiving purposes. You can also use EMR to move data between different systems in AWS, such as DynamoDB, Redshift, S3, and many more.

Summary

In this article, we learned about the EMR architecture. We understood the concepts related to the various EMR node types in detail.

Resources for Article:

Further resources on this subject:

- Introduction to MapReduce [Article]
- Understanding MapReduce [Article]
- HDFS and MapReduce [Article]

Animation and Unity3D Physics

Packt
27 Oct 2014
5 min read
In this article, written by K. Aava Rani, author of the book Learning Unity Physics, you will learn to use Physics in animation creation. We will see that several animations can be easily handled by Unity3D's Physics. During development, you will come to know that working with animations and Physics is easy in Unity3D, and you will find the combination of Physics and animation very interesting. We are going to cover the following topics:

- Interpolate and Extrapolate
- The Cloth component and its uses in animation

(For more resources related to this topic, see here.)

Developing simple and complex animations

As mentioned earlier, you will learn how to handle and create simple and complex animations using Physics, for example, creating a rope animation and a hanging ball. Let's start with the Physics properties of the Rigidbody component, which help in syncing animation.

Interpolate and Extrapolate

Unity3D offers a way for its Rigidbody component to help in the syncing of animation: using the interpolation and extrapolation properties, we sync animations. Interpolation is not only for animation; it also works with a Rigidbody. Let's see in detail how interpolation and extrapolation work:

1. Create a new scene and save it.
2. Create a Cube game object and apply a Rigidbody to it.
3. Look at the Inspector panel shown in the following screenshot. On clicking Interpolate, a drop-down list will appear with three options: None, Interpolate, and Extrapolate. By selecting any one of them, we can apply that feature.

In interpolation, the position of an object is calculated from the current update time, moved backwards by one Physics update delta time. Delta time, or delta timing, is a concept used among programmers in relation to frame rate and time. For more details, check out http://docs.unity3d.com/ScriptReference/Time-deltaTime.html. If you look closely, you will observe that there are at least two Physics updates involved, which are as follows:

- One ahead of the chosen time
- One behind the chosen time

Unity interpolates between these two updates to get the position to use for the current update. So, we can say that interpolation actually lags behind by one Physics update.

The second option is Extrapolate, which is used for extrapolation. In this case, Unity predicts the future position of the object. Although this does not show any lag, an incorrect prediction sometimes causes visual jitter.
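To illustrate the difference conceptually, here is a small sketch in plain R (not Unity code); the numbers are made up, and the point is only how each mode derives a position from the two most recent Physics updates:

# conceptual sketch: positions from the two most recent physics updates
prev_pos <- 10.0   # position at the older physics update
curr_pos <- 12.0   # position at the latest physics update
t        <- 0.5    # fraction of a physics step elapsed since the latest update

# interpolation blends between the two known updates, so the rendered
# position trails the latest physics position (it lags one update)
interpolated <- prev_pos + (curr_pos - prev_pos) * t

# extrapolation projects the last known velocity forward: no lag, but a
# wrong prediction (for example, after a bounce) shows up as jitter
extrapolated <- curr_pos + (curr_pos - prev_pos) * t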
One more important component that is widely used to animate cloth is the Cloth component. Here, you will learn about its properties and how to use it.

The Cloth component

To make animation easy, Unity provides an interactive component called Cloth. In the GameObject menu, you can directly create a Cloth game object. Have a look at the following screenshot:

Unity also provides Cloth components in its Physics section. To apply one, let's look at an example:

1. Create a new scene and save it.
2. Create a Plane game object. (We can also create a Cloth game object.)
3. Navigate to Component | Physics and choose InteractiveCloth.

As shown in the following screenshot, you will see the following properties in the Inspector panel:

Let's have a look at the properties one by one. Blending Stiffness and Stretching Stiffness define the blending and stretching stiffness of the Cloth, while Damping defines the damping of the Cloth's motion. Using the Thickness property, we decide the thickness of the Cloth, which ranges from 0.001 to 10,000. If we enable the Use Gravity property, gravity will affect the Cloth simulation. Similarly, if we enable Self Collision, the Cloth is allowed to collide with itself. For a constant or random acceleration, we apply the External Acceleration and Random Acceleration properties, respectively. World Velocity Scale decides how much the character's movement in the world affects the Cloth vertices: the higher the value, the more the character's movement affects the Cloth. World Acceleration works similarly.

The Interactive Cloth component depends on the Cloth Renderer component. Note that using lots of Cloth components in a game reduces its performance. To simulate clothing on characters, we use the Skinned Cloth component.

Important points while using the Cloth component

The following are important points to remember while using the Cloth component:

- Cloth simulation will not generate tangents. So, if you are using a tangent-dependent shader, the lighting will look wrong for a Cloth component that has been moved from its initial position.
- We cannot directly change the transform of a moving Cloth game object; this is not supported. Disabling the Cloth before changing the transform is supported.
- The SkinnedCloth component works together with SkinnedMeshRenderer to simulate clothing on a character. As shown in the following screenshot, we can apply Skinned Cloth:
- As you can see in the following screenshot, there are different properties that we can use to get the desired effect:
- We can disable or enable the Skinned Cloth component at any time to turn it on or off.

Summary

In this article, you learned how interpolation and extrapolation work. We also learned about the Cloth component and its uses in animation, with an example.

Resources for Article:

Further resources on this subject:

- Animations in Cocos2d-x [article]
- Unity Networking – The Pong Game [article]
- The physics engine [article]

Data visualization

Packt
27 Oct 2014
8 min read
Data visualization is one of the most important tasks on the data science track. Through effective visualization, we can easily uncover underlying patterns among variables without doing any sophisticated statistical analysis. In this cookbook, we have focused on graphical analysis using R in a very simple way, with each example kept independent. We have covered default R functionality along with more advanced visualization techniques such as lattice, ggplot2, and three-dimensional plots. Readers will not only learn the code to produce a graph, but also why that code has been written, through specific examples.

R Graphs Cookbook, Second Edition, written by Jaynal Abedin and Hrishi V. Mittal, is such a book: the user will learn how to produce various graphs using R, how to customize them, and finally how to make them ready for publication. This practical recipe book starts with a very brief description of the R graphics system and then gradually works through basic to advanced plots with examples. Besides the default R graphics, this recipe book introduces advanced graphics systems such as lattice and ggplot2, the grammar of graphics. We have also provided examples of how to inspect large datasets using advanced visualizations such as tableplots and three-dimensional plots. We also cover the following topics:

- How to create various types of bar charts using default R functions, lattice, and ggplot2
- How to produce density plots along with histograms using lattice and ggplot2, and how to customize them for publication
- How to produce graphs of frequency-tabulated data
- How to inspect large datasets by simultaneously visualizing numeric and categorical variables in a single plot
- How to annotate graphs using ggplot2

(For more resources related to this topic, see here.)

This recipe book is targeted at readers who have already been exposed to R programming and want to learn effective graphics with the power of R and its various libraries. This hands-on guide starts with a very short introduction to the R graphics system and then gets straight to the point: actually creating graphs, instead of just theoretical learning. Each recipe is specifically tailored to fulfill the reader's appetite for visually representing data in the best way possible. Now, we will present a few examples so that you can get an idea of the content of this recipe book.

The ggplot2 R package is based on The Grammar of Graphics (by Leland Wilkinson, Springer). Using this package, we can produce a variety of traditional graphics, and users can produce their own customized graphs as well. The beauty of this package lies in its layered graphics facilities; through the use of these layered graphics utilities, we can produce almost any kind of data visualization. Recently, ggplot2 has been the most searched keyword in the R community, including on the most popular R blog (www.r-bloggers.com). The comprehensive theme system allows the user to produce publication-quality graphs with a variety of themes of choice. If we want to explain this package in a single sentence, we can say that if whatever data visualization we can think of can be structured in a data frame, then producing the visualization is a matter of a few seconds. In the chapter on ggplot2, we will see different examples and use themes to produce publication-quality graphs. However, in this introductory article, we will show you one of the important features of the ggplot2 package: producing various types of graphs.
The main function is ggplot(), but with the help of different geom functions, we can easily produce different types of graphs, such as the following:

- geom_point(): This will create a scatter plot
- geom_line(): This will create a line chart
- geom_bar(): This will create a bar chart
- geom_boxplot(): This will create a box plot
- geom_text(): This will write certain text inside the plot area

Now, we will see a simple example of the use of different geom functions with the default R mtcars dataset:

# loading the ggplot2 library
library(ggplot2)

# creating a basic ggplot object
p <- ggplot(data=mtcars)

# creating a scatter plot of the mpg and disp variables
p1 <- p + geom_point(aes(x=disp, y=mpg))

# creating a line chart from the same ggplot object but with a
# different geom function
p2 <- p + geom_line(aes(x=disp, y=mpg))

# creating a bar chart of the mpg variable
p3 <- p + geom_bar(aes(x=mpg))

# creating a box plot of mpg over gear
p4 <- p + geom_boxplot(aes(x=factor(gear), y=mpg))

# writing certain text into the scatter plot
p5 <- p1 + geom_text(x=200, y=25, label="Scatter plot")

The visualization of the preceding five plots will look like the following figure:
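Since the theme system was mentioned earlier as the route to publication-quality output, here is a minimal sketch that restyles the box plot object p4 created above with a built-in theme; the axis label text is illustrative:

# restyling the box plot created above with a built-in theme and
# descriptive axis labels (the label text is illustrative)
p4 + theme_bw() +
  labs(x = "Number of gears", y = "Miles per gallon")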
Visualizing an empirical Cumulative Distribution function

The empirical Cumulative Distribution function (CDF) is the non-parametric maximum-likelihood estimate of the CDF. In this recipe, we will see how the empirical CDF can be produced.

Getting ready

To produce this plot, we need to use the latticeExtra library. We will use a simulated dataset, as shown in the following code:

# Set a seed value to make the data reproducible
set.seed(12345)
qqdata <- data.frame(disA=rnorm(n=100,mean=20,sd=3),
                     disB=rnorm(n=100,mean=25,sd=4),
                     disC=rnorm(n=100,mean=15,sd=1.5),
                     age=sample(c(1,2,3,4),size=100,replace=T),
                     sex=sample(c("Male","Female"),size=100,replace=T),
                     econ_status=sample(c("Poor","Middle","Rich"),
                                        size=100,replace=T))

How to do it…

To plot an empirical CDF, we first need to load the latticeExtra library (note that this library has a dependency on RColorBrewer). Now, to plot the empirical CDF, we can use the following simple code:

library(latticeExtra)
ecdfplot(~disA|sex, data=qqdata)

Graph annotation with ggplot

To produce publication-quality data visualization, we often need to annotate the graph with various texts, symbols, or even shapes. In this recipe, we will see how we can easily annotate an existing graph.

Getting ready

In this recipe, we will use the disA and disD variables from ggplotdata. Let's call ggplotdata for this recipe. We also need to call the grid and gridExtra libraries.

How to do it...

In this recipe, we will apply the following annotations to an existing scatter plot, so the whole procedure will be as follows:

1. Create a scatter plot.
2. Add customized text within the plot.
3. Highlight a certain region to indicate extreme values.
4. Draw a line segment with an arrow within the scatter plot to indicate a single extreme observation.

Now, we will implement each of the steps one by one:

library(grid)
library(gridExtra)

# creating the scatter plot and printing it
annotation_obj <- ggplot(data=ggplotdata, aes(x=disA, y=disD)) + geom_point()
annotation_obj

# adding custom text at the (18,29) position
annotation_obj1 <- annotation_obj +
  annotate(geom="text", x=18, y=29, label="Extreme value", size=3)
annotation_obj1

# highlighting a certain region with a box
annotation_obj2 <- annotation_obj1 +
  annotate("rect", xmin=24, xmax=27, ymin=17, ymax=22, alpha=.2)
annotation_obj2

# drawing a line segment with an arrow
annotation_obj3 <- annotation_obj2 +
  annotate("segment", x=16, xend=17.5, y=25, yend=27.5, colour="red",
           arrow=arrow(length=unit(0.5, "cm")), size=2)
annotation_obj3

The preceding four steps are displayed in the following single graph:

How it works...

The annotate() function takes as input a geom, such as "text", "rect", or "segment", and then takes further inputs specifying the position of that geom, that is, where to draw or place it. In this particular recipe, we used three geom instances: text to write customized text within the plot, rect to highlight a certain region in the plot, and segment to draw an arrow. The alpha argument represents the transparency of the highlighted region, and the size argument represents the size of the text and the line width of the line segment.

Summary

This article gives just a sample of the kinds of recipes included in the book, and of how each recipe is structured.

Resources for Article:

Further resources on this subject:

- Using R for Statistics, Research, and Graphics [Article]
- First steps with R [Article]
- Aspects of Data Manipulation in R [Article]