Customizing App Controller

Packt
23 Feb 2015
15 min read
In this article, Nasir Naeem, the author of Learning System Center App Controller, introduces you to the App Controller administrative portal. Further instructions are provided to integrate the SCVMM server with App Controller. The article also covers integrating an Azure cloud subscription, role-based access, adding a network share to the SCVMM server, and configuring the SSL certificate for the App Controller website. System Center 2012 R2 App Controller provides a web-based portal to manage an on-premises private cloud, a Windows Azure subscription, and third-party cloud solutions through a single pane of glass. Before we can manage these solutions, we need to connect to these resources by integrating them in the App Controller admin console. This article will walk you through the steps required for integration. (For more resources related to this topic, see here.)

Logging in to the App Controller interface

In this section, we will log in to the App Controller web portal for the first time. Perform the following steps to open the App Controller admin console: Before attempting to log on to the App Controller server, ensure that Microsoft Silverlight is installed on the server. It can be downloaded from http://www.microsoft.com/silverlight/. Log on to the App Controller server. Launch Internet Explorer, type https://localhost, and press the Enter key. If a warning message saying There is a problem with this website's security certificate shows up, select the Continue to this website (not recommended) link. You will now be presented with the logon screen. Provide administrative credentials to log on; to start with, these are the details of the App Controller administrator service account. Click on the Sign In button in the browser. Depending on the speed of your system, the Microsoft System Center 2012 R2 App Controller Admin portal will open. The Admin portal is based on Silverlight technology and looks very similar to the Virtual Machine Manager console, as follows: The App Controller Admin console is divided into seven sections. By default, the Overview page shows up every time we log in to the admin portal. We can manage multiple subscriptions and common tasks on the Overview page. There are three main categories on the Overview page. Out of them, Status contains Private Clouds created in the VMM server, Public Clouds displays Microsoft Azure subscriptions being managed by App Controller, and Hosting Service Providers shows third-party service providers.

Integrating the Virtual Machine Manager server for private cloud management

In this section, we will integrate our previously installed Virtual Machine Manager server with the App Controller server. Perform the following steps to complete the task: Log on to the App Controller server. Launch Internet Explorer and log in with an account that has local administrative access on the App Controller server. On the Overview page, under the Private Clouds subsection of the Status section, click on the Connect a Virtual Machine Manager server link. In future, we can instead click on the Settings link in the left pane, select Connections in the submenu, click on Connect in the middle pane, and select SCVMM from the pop-up menu. In the pop-up dialog box, provide a Connection Name, Description, Server name, and a Port for VMM communication. Ensure that you select Automatically import SSL certificates, and then click on the OK button. 
After a couple of minutes, VMM integration will be completed and the Private Clouds section will be populated with the current configuration set in the VMM server. We can also see the configuration of the configured clouds in Virtual Machine Manager. By clicking on Clouds in the left pane, Contoso Cloud can be seen in the middle pane with a description and cloud name assigned. To see computer limitations set on the cloud, we can change the View option to show information cards by clicking on the Show items as cards button in the top-right corner. Configuring a Microsoft Azure subscription In this section, we will configure the on-premises App Controller deployment to connect to the Windows Azure subscription. The following capabilities will be enabled for our private cloud users in both private and public cloud: Start virtual machines Stop virtual machines Shut down virtual machines Restart virtual machines Connect to virtual machines Modify existing virtual machines Copy existing virtual machines to Azure Deploy virtual machines Deploy cloud services Add virtual machines to cloud services Modify existing services View and manage jobs To connect App Controller to Windows Azure, we have to first create a self-signed certificate. Then export the certificate package with private keys and also export the certificate without private keys. Next, we need to upload the certificate without private keys to the Windows Azure management portal and import the certificate package with the subscription ID into App Controller. Perform the following steps to complete the task: Log on to the App Controller server and launch the IIS manager console. Left-click on the Server name in the console. Double-click on Server Certificates in the IIS feature section, as follows: In the Actions pane on the right side of the console. Click on the Create Self-Signed Certificate link. In the Create Self-Signed Certificate wizard, provide a friendly name like AzureManagementCertificate and store it in the Personal store. Then click on OK. Launch MMC by typing MMC in the Run dialog box. Add Certificate snap-in from Add/remove snap-ins. Select Computer account for managing certificates store. Then click on Next. In the Select the computer you want this snap-in to manage section dialog page, select Local computer. Next, click on Finish and then click on OK. Back in the MMC console, expand Certificates (Local Computer). Expand Personal, then Highlight Certificates. In the middle pane, we can see the list of certificates available. Right-click on AzureManagementCertificate. Select All Tasks and click on Export…. Click on Next to start the Certificate Wizard. Make sure the Yes export the private key option is selected. Then click on Next. Leave default settings for Export File Format and click on Next. Provide a strong password to protect the PFX package. Then click on Next. Provide a path to store the package locally and click on Next. Finally, click on Finish. Now log on to the Windows Azure portal. Sign in with your Azure Administrative ID details. In the Management Portal, select Settings in the left pane. In the middle pane, click on the MANAGEMENT CERTIFICATES link. Then click on the UPLOAD A MANAGEMENT CERTIFICATE link. Browse for the CER file without private keys. Wait for the upload to complete. Take a note of the Subscription ID in the Management Certificates section. This will be used during the App Controller connection configuration. Now, we are ready to connect App Controller to the Azure subscription. 
Azure cloud subscription will use certificate authentication. The certificate uploaded in the previous step will be used for encrypting traffic between App Controller and Azure cloud. Back in the App Controller admin portal, click on Clouds in the left pane. Click on the down arrow on the Connect button in the middle pane. Then select Windows Azure subscription. Provide a friendly name for the subscription and values for the Description, Subscription ID, and Management certificate fields with a private key and Management certificate password for the PFX package file. Then click on OK. After a couple of minutes, Azure subscription will be added to the App Controller environment. Now we can manage Services and Virtual Machines attached to this subscription. In the Cloud section, we can also see the new Windows Azure connection as shown in the following screenshot: Configuring roles-based access In this section, we will be adding a new tenant user to the App Controller. This user will be assigned particular settings to manage their environment. I have created a standard domain user called Contoso_Tenant01 for demo purposes. This account will be given full administrative access to the Contoso Cloud only. Follow the following steps to complete this task: Log on to the Virtual Machine Manager server. Launch VMM Console. Select Settings in the left pane. Expand Security and select User Roles. In the ribbon, click on Create User Role. After the Create User Role wizard launches, provide the Name and Description and then click on Next. I have used Contoso Cloud Administrator and Administrator of Contoso Cloud. Select Tenant Administrator and click on Next. In the Members section, add a security group on individual user accounts. I have added a Contoso_Tenant01 account to the members list. Then click on Next. In the Scope section of the wizard, select the checkbox next to Contoso Cloud and click on Next. In the Quotas for the Contoso Cloud section, adjust Role Level and Member level quotas as required. We will be using the default settings of Use Maximum for all settings. Then click on Next. In the Networking section of the wizard, add Logical network belonging to Contoso by clicking on the Add button. Then click on Next. In the Resources section of the wizard, add resources that this tenant administrator can use. I have selected OS profiles, Small HW hardware profile, VM Template, and Service Template, available in the list. Then click on Next. In the Permissions section of the wizard, we can specify tasks that this user account can perform in the environment. Switch to Contoso Cloud in the middle pane. Click on the Select All button. Then click on Next In the Run As Accounts section of the create User Roles wizard, specify a privileged account that is required in Contoso Cloud and click on Next. In the Summary section of the wizard, review specified settings and then click on Finish. Adding a new VMM Library share In this section, we will add a new Virtual Machine Library share to SCVMM. Perform the following steps for the new Virtual Machine Library share to SCVMM: To specify a dedicated folder to upload data by this user, I have created a folder called Contoso_Cloud in the root of system drive. Give full permission to the VMM computer account and the VMM service account on the security tab and share the folder. Add the new share to the VMM Library by clicking on Library in the left pane. Right-click on the VMM server name and select Add Library Shares. 
Select the checkbox next to the Contoso_Cloud share. Then, click on Next and Finish. Now click on Settings in the left pane. Select User Roles in the Security section in the left pane. Right-click on Contoso Cloud Administrator and select Properties. Switch to the Resources section in the left pane. Click on the Browse button in the Specify the library location where this user can upload data section. Select the Contoso_Cloud folder from the Select destination folder dialog box and click on OK. Now log on to the App Controller server. Open a new session in Internet Explorer and browse to the App Controller admin portal. Log on with the new account; in our case, it is domainname\contoso_tenant01. After logging on, the Contoso_Tenant01 account can only see items that are allowed in the Virtual Machine Manager server.

Adding a network share

In this section, we will be adding a network share to the App Controller server. This share will be used as a local cache during the download or upload of virtual machines. It can be any folder on the local network, as long as the App Controller service account has the ability to make changes to the content of the shared folder. Perform the following steps to complete this task: Log on to the App Controller server. Launch Internet Explorer and log in with administrative credentials. We also need a shared folder with the correct permissions assigned, so launch Windows Explorer and create a folder. We will be creating a folder in the root of the system drive called SCAC_Share. Once the folder is created, right-click on the folder name and select Properties. Switch to the Security tab and add the App Controller service account. Give full control permission to the service account; in our case, the account name is srv_scac_acc. Click on Apply and then on OK. Repeat the same process by switching to the Sharing tab. Click on the Share button. Add the service account if it is not already present and then click on the Share button. Now click on the Done button and click on the Close button on the folder properties dialog box. Now go back to the Internet Explorer browser and select the Overview page in the App Controller admin portal. Under the Next Steps section in the Common Tasks subsection, click on the Add a network file share link. Provide the UNC path to the folder that we created in step 3. The naming syntax is \\<servername>\<sharename>. Then click on OK. A confirmation message will show up at the bottom of the screen when the task completes. We can verify the addition of the share by clicking on Library in the left pane and expanding Shares in the middle pane. We can also add more shares or remove listed shares in the Library's Shares section.

Configuring the SSL certificate for the App Controller website

In this section, we will change the default self-signed SSL certificate to one that is generated by our internal certificate authority (CA). Building a PKI infrastructure is beyond the scope of this book; please refer to the TechNet articles on creating a PKI infrastructure. Perform the following steps to complete this task: I will try to explain the tasks that have to be completed to get a certificate from the internal CA. To get the CA certificate published, log on to the CA server and launch the Certsrv.msc console. Expand the server name. Right-click on Certificate Templates and make a duplicate copy of the Webserver template. Ensure that Server Authentication is listed in the Extensions tab. Give the template a unique name. 
I have used Generic Web SSL Certificate. In the Security tab, grant the App Controller server the Enroll permission. Then right-click on Certificate Templates in the Certsrv console, select New, and then select Certificate Templates to Issue. From the list, select the new template. Now, reboot the App Controller server. After the reboot, launch the MMC console, add the Certificates snap-in, and ensure that it shows the Local Computer store. Then expand Personal and expand Certificates. Right-click on Certificates, select All Tasks, and then Request New Certificates. Select the new template we just published and click on the Add more information link. Change Type from Full DN to Common Name and specify appcontroller.contoso.internal. Give this certificate a friendly name and then click on OK. Back in the Certificate Enrollment wizard, click on Enroll. Log on to the App Controller server and launch the Internet Information Services console. The IIS Manager console can also be launched by pressing the Windows key and typing InetMgr.exe. Expand the server name and also expand Sites. Right-click on the AppController website and select Edit Bindings…. In the Edit Site Bindings dialog box, select https and then click on the Edit button. Select the appcontroller Webserver Cert from the drop-down list. Verify that the certificate is correct by clicking on the View button. Click on the Select button and then click on the OK button. Now that we have a valid certificate assigned to the website in IIS Manager, create a Host (A) record in DNS. Specify appcontroller.contoso.internal as the FQDN and provide the IP address of the App Controller server. Make sure Silverlight is installed on the testing machine. Launch Internet Explorer and browse to https://appcontroller.contoso.internal. After a couple of seconds, the App Controller logon screen will show up. Take a look at the browser address bar; the certificate error should have disappeared, and we no longer get a warning message before the logon screen. We can also verify the certificate assigned to this website by going to File | Properties of the site and clicking on the Certificates button.

Customizing App Controller branding

In some scenarios, corporate branding is required. It is very simple to change the branding on the App Controller management portal pages. The following screenshot highlights the areas that can be changed by altering or replacing specific files on the App Controller server: Both files are typically located at C:\Program Files\Microsoft System Center 2012 R2\App Controller\wwwroot. Let's take a look at the following steps: To replace the top-left logo, create a file named SC2012_WebHeaderLeft_AC.png with dimensions of 213 x 38 pixels and a transparent background. To replace the top-right logo, create a file named SC2012_WebHeaderRight_AC.png with dimensions of 108 x 16 pixels and a transparent background. Overwrite the existing files on the App Controller server with the new files. Close the browser window. Open a new browser window and try to log in to the App Controller portal. The newly added logo files will be shown on top of the logon dialog box. The same new branding logos will be displayed after logging on to the App Controller Management portal, as shown in the following screenshot:

Summary

In this article, we integrated Virtual Machine Manager with App Controller. We also attached a Windows Azure subscription to App Controller. 
We added a network share to the App Controller environment and saw how to configure role-based access for users. We also changed the SSL certificate of the App Controller admin portal. Resources for Article: Further resources on this subject: Google App Engine [article] Securing Your Twilio App [article] Advanced Programming And Control [article]

Selenium Testing Tools

Packt
23 Feb 2015
8 min read
In this article by Raghavendra Prasad MG, author of the book Learning Selenium Testing Tools, Third Edition, you will be introduced to Selenium IDE and Selenium WebDriver: their installation, basic and advanced features, and the implementation of an automation framework, along with the programming language basics required for automation. With automation being a key factor in the success of any software organization, everybody is looking at a free, hugely community-supported tool like Selenium. Anybody who is willing to learn and work on automation with Selenium has an opportunity to take the tool from the basic to the advanced stage with this book, and the book will serve as a lifetime reference for the reader. (For more resources related to this topic, see here.)

Key features of the book

Following are the key features of the book:
The book takes you from the basic to the advanced level, and the reader does not need any prior knowledge as a prerequisite.
The book contains real-time examples that let readers relate the material to their own real-time scenarios.
The book covers the basics of Java required for Selenium automation, so the reader does not have to consult other books for material that is not needed for Selenium automation.
The book covers automation framework design and implementation, which will definitely help readers build and implement their own automation frameworks.

What you will learn from this book

There are a lot of things you will learn from the book. A few of them are mentioned as follows:
History of Selenium and its evolution
Working with the previous version of Selenium, that is, Selenium IDE
Selenium WebDriver – from basic to advanced usage
Basics of Java (only what is required for Selenium automation)
WebElement handling with Selenium WebDriver
Page Object Factory implementation
Automation framework types, design, and implementation
Building utilities for automation

Who this book is for

The book is for manual testers. Any software professional who wants to make a career in Selenium automation testing can use this book. It also helps automation testers and automation architects who want to build or implement automation frameworks on the Selenium automation tool.

What this book covers

The book covers the following major topics:

Selenium IDE

Selenium IDE is a Firefox add-on developed originally by Shinya Kasatani as a way to use the original Selenium Core code without having to copy Selenium Core onto the server. Selenium Core is the key JavaScript module that allows Selenium to drive the browser. It has been developed using JavaScript so that it can interact with the DOM (Document Object Model) using native JavaScript calls. Selenium IDE has been developed to allow testers and developers to record their actions as they follow the workflow that they need to test.

Locators

Locators show how we can find elements on the page to be used in our tests. We will use XPath, CSS, link text, and ID to find elements on the page so that we can interact with them. Locators allow us to find elements on a page that can be used in our tests. In the last chapter, we managed to work against a page that had decent locators. In HTML, it is seen as a good practice to make sure that every element you need to interact with has an ID attribute and a name attribute. Unfortunately, following best practices can be extremely difficult, especially when building the HTML dynamically on the server before sending it back to the browser. The following locators are used in Selenium IDE:

Locator | Description | Example
ID | Identifies an element by its ID attribute on the page | id=inputButton
name | Identifies an element by its name attribute on the page | name=buttonFind
link | Identifies a link by its text | link=index
XPath | Identifies an element by an XPath expression | xpath=//div[@class='classname']
CSS | Identifies an element by a CSS selector | css=#divinthecenter
DOM | Identifies an element through the DOM | dom=document.getElementById("inputButton")
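The same strategies are also available programmatically through the By class in Selenium WebDriver, which is covered in the next section. The following is a minimal, hedged sketch that simply reuses the locator values from the table above for illustration; the URL points at the book's demo site as a placeholder, and these locators will not necessarily all exist on that single page:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LocatorExamples {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://book.theautomatedtester.co.uk/chapter2");

        // Each By strategy mirrors one of the Selenium IDE locator types above
        WebElement byId = driver.findElement(By.id("inputButton"));
        WebElement byName = driver.findElement(By.name("buttonFind"));
        WebElement byLink = driver.findElement(By.linkText("index"));
        WebElement byXPath = driver.findElement(By.xpath("//div[@class='classname']"));
        WebElement byCss = driver.findElement(By.cssSelector("#divinthecenter"));

        System.out.println(byId.getTagName()); // do something with the elements
        driver.quit();
    }
}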
Selenium WebDriver

The primary feature of Selenium WebDriver is the integration of the WebDriver API, which is designed to provide a simpler, more concise programming interface in addition to addressing some limitations of the Selenium-RC API. Selenium WebDriver was developed to better support dynamic web pages where elements of a page may change without the page itself being reloaded. WebDriver's goal is to supply a well-designed, object-oriented API that provides improved support for modern advanced web application testing problems.

Finding elements

When working with WebDriver on a web application, we will need to find elements on the page. This is core to being able to work with the application: all the methods for performing actions on the web application, such as typing and clicking, require that we find the element first.

Finding an element on the page by its ID

The first item that we are going to look at is finding an element by ID. Searching for elements by ID is one of the easiest ways to find an element. We start with findElementById(). This method is a helper method that sets an argument for a more generic findElement call. We will now see how we can use it in action. The method's signature looks like the following line of code: findElementById(String using); The using variable takes the ID of the element that you wish to look for. It will return a WebElement object that we can then work with.

Using findElementById()

We find an element on the page by using the findElementById() method that is on each of the browser driver classes. findElement calls will return a WebElement object that we can perform actions on. Follow these steps to see how it works: Open your Java IDE (IntelliJ and Eclipse are the ones most commonly used). We are going to use the following command: WebElement element = ((FindsById) driver).findElementById("verifybutton"); Run the test from the IDE. It will look like the following screenshot:

Page Objects

In this section of the article, we are going to have a look at how we can apply some best practices to tests. You will learn how to make maintainable test suites that will allow you to update tests in seconds. We will have a look at creating your own DSL so that people can see intent. We will create tests using the Page Object pattern.
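As a brief illustration of the Page Object pattern described above, the following minimal sketch wraps a hypothetical login page in its own class. The page classes, element IDs, and method names here are invented for the example and are not taken from the book:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// A minimal Page Object: tests talk to this class instead of raw locators,
// so a UI change only requires updating the locators kept in one place.
public class LoginPage {
    private static final By USERNAME = By.id("username");
    private static final By PASSWORD = By.id("password");
    private static final By LOGIN_BUTTON = By.id("loginButton");

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Returning the next page object gives tests a readable, DSL-like flow:
    // new LoginPage(driver).loginAs("tester", "secret").isLoggedIn();
    public HomePage loginAs(String user, String password) {
        driver.findElement(USERNAME).sendKeys(user);
        driver.findElement(PASSWORD).sendKeys(password);
        driver.findElement(LOGIN_BUTTON).click();
        return new HomePage(driver);
    }
}

class HomePage {
    private final WebDriver driver;

    HomePage(WebDriver driver) {
        this.driver = driver;
    }

    boolean isLoggedIn() {
        // The logout link is only rendered for authenticated users in this sketch.
        return !driver.findElements(By.id("logoutLink")).isEmpty();
    }
}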
Working with FirefoxDriver

FirefoxDriver is the easiest driver to use, since everything that we need is bundled with the Java client bindings. We do the basic task of loading the browser and typing into the page as follows: Update the setUp() method to load FirefoxDriver: driver = new FirefoxDriver(); Now we need to find an element. We will find the one with the ID nextBid: WebElement element = driver.findElement(By.id("nextBid")); Now we need to type into that element as follows: element.sendKeys("100"); Run your test and it should look like the following:

import org.openqa.selenium.*;
import org.openqa.selenium.firefox.*;
import org.testng.annotations.*;

public class TestChapter6 {

  WebDriver driver;

  @BeforeTest
  public void setUp(){
    driver = new FirefoxDriver();
    driver.get("http://book.theautomatedtester.co.uk/chapter4");
  }

  @AfterTest
  public void tearDown(){
    driver.quit();
  }

  @Test
  public void testExamples(){
    WebElement element = driver.findElement(By.id("nextBid"));
    element.sendKeys("100");
  }
}

We are currently witnessing an explosion of mobile devices in the market. A lot of them are more powerful than your average computer was just over a decade ago. This means that, in addition to having nice, clean, responsive, and functional desktop applications, we are starting to have to make sure the same basic functionality is available to mobile devices. We are going to look at how we can set up mobile devices to be used with Selenium WebDriver. We will learn the following topics: how to use the stock browser on Android, how to test with Opera Mobile, and how to test on iOS.

Understanding Selenium Grid

Selenium Grid is a version of Selenium that allows teams to set up a number of Selenium instances and then have one central point to send your Selenium commands to. This differs from what we saw in Selenium Remote WebDriver, where we always had to explicitly say where the Selenium Server is, as well as know which browsers that server can handle. With Selenium Grid, we just ask for a specific browser, and the hub that is part of Selenium Grid will route all the Selenium commands through to the Remote Control you want.

Summary

We have understood and learnt what Selenium is and its evolution from IDE to WebDriver and Grid. In addition, we have learnt how to identify WebElements using WebDriver, its design patterns, and locators through WebDriver. We also learnt about automation framework design and implementation, and mobile application automation on Android and iOS. Finally, we understood the concept of Selenium Grid. Resources for Article: Further resources on this subject: Quick Start into Selenium Tests [article] Getting Started With Selenium Webdriver and Python [article] Exploring Advanced Interactions of WebDriver [article]

Actors and Pawns

Packt
23 Feb 2015
7 min read
In this article by William Sherif, author of the book Learning C++ by Creating Games with UE4, we will really delve into UE4 code. At first, it is going to look daunting. The UE4 class framework is massive, but don't worry. The framework is massive, so your code doesn't have to be. You will find that you can get a lot done and a lot onto the screen using relatively less code. This is because the UE4 engine code is so extensive and well programmed that they have made it possible to get almost any game-related task done easily. Just call the right functions, and voila, what you want to see will appear on the screen. The entire notion of a framework is that it is designed to let you get the gameplay you want, without having to spend a lot of time in sweating out the details. (For more resources related to this topic, see here.) Actors versus pawns A Pawn is an object that represents something that you or the computer's Artificial Intelligence (AI) can control on the screen. The Pawn class derives from the Actor class, with the additional ability to be controlled by the player directly or by an AI script. When a pawn or actor is controlled by a controller or AI, it is said to be possessed by that controller or AI. Think of the Actor class as a character in a play. Your game world is going to be composed of a bunch of actors, all acting together to make the gameplay work. The game characters, Non-player Characters (NPCs), and even treasure chests will be actors. Creating a world to put your actors in Here, we will start from scratch and create a basic level into which we can put our game characters. The UE4 team has already done a great job of presenting how the world editor can be used to create a world in UE4. I want you to take a moment to create your own world. First, create a new, blank UE4 project to get started. To do this, in the Unreal Launcher, click on the Launch button beside your most recent engine installation, as shown in the following screenshot: That will launch the Unreal Editor. The Unreal Editor is used to visually edit your game world. You're going to spend a lot of time in the Unreal Editor, so please take your time to experiment and play around with it. I will only cover the basics of how to work with the UE4 editor. You will need to let your creative juices flow, however, and invest some time in order to become familiar with the editor. To learn more about the UE4 editor, take a look at the Getting Started: Introduction to the UE4 Editor playlist, which is available at https://www.youtube.com/playlist?list=PLZlv_N0_O1gasd4IcOe9Cx9wHoBB7rxFl. Once you've launched the UE4 editor, you will be presented with the Projects dialog. The following screenshot shows the steps to be performed with numbers corresponding to the order in which they need to be performed: Perform the following steps to create a project: Select the New Project tab at the top of the screen. Click on the C++ tab (the second subtab). Then select Basic Code from the available projects listing. Set the directory where your project is located (mine is Y:Unreal Projects). Choose a hard disk location with a lot of space (the final project will be around 1.5 GB). Name your project. I called mine GoldenEgg. Click on Create Project to finalize project creation. Once you've done this, the UE4 launcher will launch Visual Studio. There will only be a couple of source files in Visual Studio, but we're not going to touch those now. 
Make sure that Development Editor is selected from the Configuration Manager dropdown at the top of the screen, as shown in the following screenshot: Now launch your project by pressing Ctrl + F5 in Visual Studio. You will find yourself in the Unreal Engine 4 editor, as shown in the following screenshot: The UE4 editor We will explore the UE4 editor here. We'll start with the controls since it is important to know how to navigate in Unreal. Editor controls If you've never used a 3D editor before, the controls can be quite hard to learn. These are the basic navigation controls while in edit mode: Use the arrow keys to move around in the scene Press Page Up or Page Down to go up and down vertically Left mouse click + drag it left or right to change the direction you are facing Left mouse click + drag it up or down to dolly (move the camera forward and backward, same as pressing up/down arrow keys) Right mouse click + drag to change the direction you are facing Middle mouse click + drag to pan the view Right mouse click and the W, A, S, and D keys to move around the scene Play mode controls Click on the Play button in the bar at the top, as shown in the following screenshot. This will launch the play mode. Once you click on the Play button, the controls change. In play mode, the controls are as follows: The W, A, S, and D keys for movement The left or right arrow keys to look toward the left and right, respectively The mouse's motion to change the direction in which you look The Esc key to exit play mode and return to edit mode What I suggest you do at this point is try to add a bunch of shapes and objects into the scene and try to color them with different materials. Adding objects to the scene Adding objects to the scene is as easy as dragging and dropping them in from the Content Browser tab. The Content Browser tab appears, by default, docked at the left-hand side of the window. If it isn't seen, simply select Window and navigate to Content Browser in order to make it appear. Make sure that the Content Browser is visible in order to add objects to your level Next, select the Props folder on the left-hand side of the Content Browser. Drag and drop things from the Content Browser into your game world To resize an object, press R on your keyboard. The manipulators around the object will appear as boxes, which denotes resize mode. Press R on your keyboard to resize an object In order to change the material that is used to paint the object, simply drag and drop a new material from the Content Browser window inside the Materials folder. Drag and drop a material from the Content Browser's Materials folder to color things with a new color Materials are like paints. You can coat an object with any material you want by simply dragging and dropping the material you desire onto the object you desire it to be coated on. Materials are only skin-deep: they don't change the other properties of an object (such as weight). Starting from scratch If you want to start creating a level from scratch, simply click on File and navigate to New Level..., as shown here: You can then select between Default and Empty Level. I think selecting Empty Level is a good idea, for the reasons that are mentioned later. The new level will be completely black in color to start with. Try dragging and dropping some objects from the Content Browser tab again. 
This time, I added a resized Shapes / Box for the ground plane and textured it with moss, a couple of Props / SM_Rocks, Particles / P_Fire, and most importantly, a light source. Be sure to save your map. Here's a snapshot of my map (how does yours look?):

Summary

In this article, we reviewed how realistic environments are created with the actors and monsters that are part of the game, and we also saw how various kinds of levels are created from scratch. Resources for Article: Further resources on this subject: Program structure, execution flow, and runtime objects [article] What is Quantitative Finance? [article] Creating and Utilizing Custom Entities [article]

Android and UDOO for Home Automation

Packt
20 Feb 2015
6 min read
This article, written by Emanuele Palazzetti, the author of Getting Started with UDOO, will teach us about home automation and about using sensors to monitor the CO2 emissions of the wood heater in our home. (For more resources related to this topic, see here.) During the last few years, the maker culture has greatly improved the way hobbyists, students, and, more generally, technology enthusiasts create new hardware devices. The advent of prototyping boards such as Arduino, together with a widespread open source philosophy, has changed how and where ideas are realized. If you're a maker, or you have a friend who really likes building prototypes and devices, probably both of you have already transformed the garage or the personal studio into a home lab. This process is so spontaneous that nowadays newcomers in the maker community begin by creating their first device at home.

UDOO and Home Automation

Like other communities, the maker family has sustained its ideas by joining its passion with the Do It Yourself (DIY) philosophy, which has led makers to build and use creative devices in their everyday life. The DIY movement was the key factor that triggered home automation built with open source platforms, making the process more fun and maker-friendly. Indeed, if we take a look at the projects released on the Internet, we can find thousands of prototypes, usually composed of an Arduino together with a Raspberry Pi, or other combinations of similar prototyping boards. However, in this scenario, we have a new alternative proposed by the UDOO board—a standalone computer that provides a multidevelopment-platform solution for the Linux and Android operating systems with an integrated Arduino Due. Thanks to its flexibility, UDOO combines the best from two different worlds: hardware makers and software programmers. Applying the UDOO board to home automation is a natural process because of its power and versatility. Furthermore, the capability to install the Android operating system increases the capabilities of this board, because it offers an out-of-the-box mobile application ecosystem that can be easily reused in our appliances, enhancing the user experience of our prototype.

Since 2011, Android has supported interaction with Arduino-compatible devices through the Android Accessory Development Kit (ADK) reference. Designed for implementing compelling accessories that can be connected to Android-powered devices, this reference defines the Android Open Accessory (AOA) protocol used to communicate with an Arduino device through a USB On-The-Go (OTG) port or via Bluetooth. UDOO makes use of this protocol and opens serial communication between the two processors soldered on the same board, which run the Android operating system and the microcontroller code respectively. Mastering the ADK is not so easy, especially on a first approach. For this reason, the community has developed an Android library that greatly simplifies the use of the ADK: the Android ADKToolkit (http://adktoolkit.org).

To put our hands on some code, we can imagine a scenario in which we want to monitor the carbon dioxide (CO2) emissions produced by the wood heater in our home. The first step is to connect the CO2 sensor to our board and use the manufacturer's library, if available, in the Arduino sketch to retrieve the detected CO2 value. Using the ADK implementation, we can send this value back to the main UDOO processor that runs the Android operating system, so it can use its powerful APIs, and a more powerful processor, to visualize or process the collected data. The following example is part of an Arduino sketch used to send sensor data back to the Android application using the ADK implementation:

uint8_t buffer[128];
int co2Sensor = 0;

void loop() {
  Usb.Task();
  if (adk.isReady()) {
    // Hypothetical function that
    // gets the value from the sensor
    co2Sensor = readFromCO2Sensor();
    buffer[0] = co2Sensor;
    // Sends the value to Android
    adk.write(1, buffer);
    delay(1000);
  }
}

On the Android side, using a traditional AsyncTask method or a more powerful ExecutorService, we should read this value from the buffer. This task is greatly simplified by the ADKToolkit library, which requires the initialization of the AdkManager class that holds the connection and exposes a handler to open or close the communication with Arduino. This instance can be initialized during the onCreate() activity callback with the following code:

private AdkManager mAdkManager;

@Override
protected void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);
  setContentView(R.layout.hello_world);
  mAdkManager = new AdkManager(this);
  mAdkManager.open();
}

In a separate thread, we can use the mAdkManager instance to read data from the ADK buffer. The following is an example that reads the message from the buffer and uses the getInt() method to parse the value:

private class SensorThread implements Runnable {
  @Override
  public void run() {
    // Reads detected CO2 from ADK
    AdkMessage response = mAdkManager.read();
    int co2 = response.getInt();
    // Continues the execution
    doSomething(co2);
  }
}
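The article mentions an ExecutorService as the more powerful alternative to AsyncTask but does not show one, so here is a minimal, hedged sketch of how the SensorThread runnable above could be scheduled from the same activity; the one-second polling interval and the lifecycle methods used are assumptions for illustration, not part of the original example:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Polls the ADK buffer once per second off the UI thread by scheduling
// the SensorThread runnable defined above.
private ScheduledExecutorService mScheduler;

@Override
protected void onResume() {
  super.onResume();
  mScheduler = Executors.newSingleThreadScheduledExecutor();
  mScheduler.scheduleWithFixedDelay(new SensorThread(), 0, 1, TimeUnit.SECONDS);
}

@Override
protected void onPause() {
  super.onPause();
  // Stop polling when the activity is no longer visible.
  mScheduler.shutdownNow();
}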
Once we retrieve the detected CO2 value from the sensor, we can use it in our application. For instance, we may plot this value using an Android open source chart library or send the collected data to an external web service. The preceding snippets are just examples to show how easy it is to implement communication on UDOO between the Android operating system and the onboard Arduino. With a powerful operating system such as Android and one of the most widespread prototyping platforms provided on a single board, UDOO can play a key role in our home automation projects.

Summary

As makers, if we gain enough experience in the home automation field, chances are that we will be able to develop and build a high-end system for our own house, flexible enough to be easily extended without any further knowledge. When I wrote Getting Started with UDOO, the idea was to make a comprehensive guide and a collection of examples to help developers quickly grasp the key concepts of this prototyping board, focusing on Android development to bring to light the advantages provided by this widespread platform when it is used in our prototypes and not only in our mobile devices. Resources for Article: Further resources on this subject: Android Virtual Device Manager [Article] Writing Tag Content [Article] The Arduino Mobile Robot [Article]

Puppet Language and Style

Packt
20 Feb 2015
18 min read
In this article by Thomas Uphill, author of the book Puppet Cookbook Third Edition, we will cover the following recipes: Installing a package before starting a service Installing, configuring, and starting a service Using regular expressions in if statements Using selectors and case statements Creating a centralized Puppet infrastructure Creating certificates with multiple DNS names (For more resources related to this topic, see here.) Installing a package before starting a service To show how ordering works, we'll create a manifest that installs httpd and then ensures the httpd package service is running. How to do it... We start by creating a manifest that defines the service: service {'httpd':    ensure => running,    require => Package['httpd'], } The service definition references a package resource named httpd; we now need to define that resource: package {'httpd':    ensure => 'installed', } How it works... In this example, the package will be installed before the service is started. Using require within the definition of the httpd service ensures that the package is installed first, regardless of the order within the manifest file. Capitalization Capitalization is important in Puppet. In our previous example, we created a package named httpd. If we wanted to refer to this package later, we would capitalize its type (package) as follows: Package['httpd'] To refer to a class, for example, the something::somewhere class, which has already been included/defined in your manifest, you can reference it with the full path as follows: Class['something::somewhere'] When you have a defined type, for example the following defined type: example::thing {'one':} The preceding resource may be referenced later as follows: Example::Thing['one'] Knowing how to reference previously defined resources is necessary for the next section on metaparameters and ordering. Learning metaparameters and ordering All the manifests that will be used to define a node are compiled into a catalog. A catalog is the code that will be applied to configure a node. It is important to remember that manifests are not applied to nodes sequentially. There is no inherent order to the application of manifests. With this in mind, in the previous httpd example, what if we wanted to ensure that the httpd process started after the httpd package was installed? We couldn't rely on the httpd service coming after the httpd package in the manifests. What we have to do is use metaparameters to tell Puppet the order in which we want resources applied to the node. Metaparameters are parameters that can be applied to any resource and are not specific to any one resource type. They are used for catalog compilation and as hints to Puppet but not to define anything about the resource to which they are attached. When dealing with ordering, there are four metaparameters used: before require notify subscribe The before and require metaparameters specify a direct ordering; notify implies before and subscribe implies require. The notify metaparameter is only applicable to services; what notify does is tell a service to restart after the notifying resource has been applied to the node (this is most often a package or file resource). In the case of files, once the file is created on the node, a notify parameter will restart any services mentioned. The subscribe metaparameter has the same effect but is defined on the service; the service will subscribe to the file. 
Trifecta

The relationship between package and service previously mentioned is an important and powerful paradigm of Puppet. Adding one more resource type, file, into the fold creates what puppeteers refer to as the trifecta. Almost all system administration tasks revolve around these three resource types. As a system administrator, you install a package, configure the package with files, and then start the service.

Diagram of the trifecta (the file resources require the package for their directory; the service requires the files and the package)

Idempotency

A key concept of Puppet is that the state of the system when a catalog is applied to a node cannot affect the outcome of a Puppet run. In other words, at the end of a Puppet run (if the run was successful), the system will be in a known state, and any further application of the catalog will result in a system that is in the same state. This property of Puppet is known as idempotency. Idempotency is the property that no matter how many times you do something, it remains in the same state as the first time you did it. For instance, if you had a light switch and you gave the instruction to turn it on, the light would turn on. If you gave the instruction again, the light would remain on.

Installing, configuring, and starting a service

There are many examples of this pattern online. In our simple example, we will create an Apache configuration file under /etc/httpd/conf.d/cookbook.conf. The /etc/httpd/conf.d directory will not exist until the httpd package is installed. After this file is created, we would want httpd to restart to notice the change; we can achieve this with a notify parameter.

How to do it...

We will need the same definitions as our last example; we need the package and service installed. We now need two more things: the configuration file and the index page (index.html). For this, we follow these steps: As in the previous example, we ensure the service is running and specify that the service requires the httpd package:

service {'httpd':
  ensure  => running,
  require => Package['httpd'],
}

We then define the package as follows:

package {'httpd':
  ensure => installed,
}

Now, we create the /etc/httpd/conf.d/cookbook.conf configuration file; the /etc/httpd/conf.d directory will not exist until the httpd package is installed. The require metaparameter tells Puppet that this file requires the httpd package to be installed before it is created:

file {'/etc/httpd/conf.d/cookbook.conf':
  content => "<VirtualHost *:80>\nServername cookbook\nDocumentRoot /var/www/cookbook\n</VirtualHost>\n",
  require => Package['httpd'],
  notify  => Service['httpd'],
}

We then go on to create an index.html file for our virtual host in /var/www/cookbook. This directory won't exist yet, so we need to create it as well, using the following code:

file {'/var/www/cookbook':
  ensure => directory,
}

file {'/var/www/cookbook/index.html':
  content => "<html><h1>Hello World!</h1></html>\n",
  require => File['/var/www/cookbook'],
}

How it works…

The require attribute on the file resources tells Puppet that we need the /var/www/cookbook directory created before we can create the index.html file. The important concept to remember is that we cannot assume anything about the target system (node). We need to define everything on which the target depends. Anytime you create a file in a manifest, you have to ensure that the directory containing that file exists. 
Anytime you specify that a service should be running, you have to ensure that the package providing that service is installed. In this example, using metaparameters, we can be confident that no matter what state the node is in before running Puppet, after Puppet runs, the following will be true: httpd will be running The VirtualHost configuration file will exist httpd will restart and be aware of the VirtualHost file The DocumentRoot directory will exist An index.html file will exist in the DocumentRoot directory Using regular expressions in if statements Another kind of expression you can test in if statements and other conditionals is the regular expression. A regular expression is a powerful way to compare strings using pattern matching. How to do it… This is one example of using a regular expression in a conditional statement. Add the following to your manifest: if $::architecture =~ /64/ { notify { '64Bit OS Installed': } } else { notify { 'Upgrade to 64Bit': } fail('Not 64 Bit') } How it works… Puppet treats the text supplied between the forward slashes as a regular expression, specifying the text to be matched. If the match succeeds, the if expression will be true and so the code between the first set of curly braces will be executed. In this example, we used a regular expression because different distributions have different ideas on what to call 64bit; some use amd64, while others use x86_64. The only thing we can count on is the presence of the number 64 within the fact. Some facts that have version numbers in them are treated as strings to Puppet. For instance, $::facterversion. On my test system, this is 2.0.1, but when I try to compare that with 2, Puppet fails to make the comparison: Error: comparison of String with 2 failed at /home/thomas/.puppet/manifests/version.pp:1 on node cookbook.example.com If you wanted instead to do something if the text does not match, use !~ rather than =~: if $::kernel !~ /Linux/ { notify { 'Not Linux, could be Windows, MacOS X, AIX, or ?': } } There's more… Regular expressions are very powerful, but can be difficult to understand and debug. If you find yourself using a regular expression so complex that you can't see at a glance what it does, think about simplifying your design to make it easier. However, one particularly useful feature of regular expressions is the ability to capture patterns. Capturing patterns You can not only match text using a regular expression, but also capture the matched text and store it in a variable: $input = 'Puppet is better than manual configuration' if $input =~ /(.*) is better than (.*)/ { notify { "You said '${0}'. Looks like you're comparing ${1}    to ${2}!": } } The preceding code produces this output: You said 'Puppet is better than manual configuration'. Looks like you're comparing Puppet to manual configuration! The variable $0 stores the whole matched text (assuming the overall match succeeded). If you put brackets around any part of the regular expression, it creates a group, and any matched groups will also be stored in variables. The first matched group will be $1, the second $2, and so on, as shown in the preceding example. Regular expression syntax Puppet's regular expression syntax is the same as Ruby's, so resources that explain Ruby's regular expression syntax will also help you with Puppet. You can find a good introduction to Ruby's regular expression syntax at this website: http://www.tutorialspoint.com/ruby/ruby_regular_expressions.htm. 
Using selectors and case statements Although you could write any conditional statement using if, Puppet provides a couple of extra forms to help you express conditionals more easily: the selector and the case statement. How to do it… Here are some examples of selector and case statements: Add the following code to your manifest: $systemtype = $::operatingsystem ? { 'Ubuntu' => 'debianlike', 'Debian' => 'debianlike', 'RedHat' => 'redhatlike', 'Fedora' => 'redhatlike', 'CentOS' => 'redhatlike', default => 'unknown', }   notify { "You have a ${systemtype} system": } Add the following code to your manifest: class debianlike { notify { 'Special manifest for Debian-like systems': } }   class redhatlike { notify { 'Special manifest for RedHat-like systems': } }   case $::operatingsystem { 'Ubuntu', 'Debian': {    include debianlike } 'RedHat', 'Fedora', 'CentOS', 'Springdale': {    include redhatlike } default: {    notify { "I don't know what kind of system you have!":    } } } How it works… Our example demonstrates both the selector and the case statement, so let's see in detail how each of them works. Selector In the first example, we used a selector (the ? operator) to choose a value for the $systemtype variable depending on the value of $::operatingsystem. This is similar to the ternary operator in C or Ruby, but instead of choosing between two possible values, you can have as many values as you like. Puppet will compare the value of $::operatingsystem to each of the possible values we have supplied in Ubuntu, Debian, and so on. These values could be regular expressions (for example, for a partial string match, or to use wildcards), but in our case, we have just used literal strings. As soon as it finds a match, the selector expression returns whatever value is associated with the matching string. If the value of $::operatingsystem is Fedora, for example, the selector expression will return the redhatlike string and this will be assigned to the variable $systemtype. Case statement Unlike selectors, the case statement does not return a value. case statements come in handy when you want to execute different code depending on the value of some expression. In our second example, we used the case statement to include either the debianlike or redhatlike class, depending on the value of $::operatingsystem. Again, Puppet compares the value of $::operatingsystem to a list of potential matches. These could be regular expressions or strings, or as in our example, comma-separated lists of strings. When it finds a match, the associated code between curly braces is executed. So, if the value of $::operatingsystem is Ubuntu, then the code including debianlike will be executed. There's more… Once you've got a grip of the basic use of selectors and case statements, you may find the following tips useful. Regular expressions As with if statements, you can use regular expressions with selectors and case statements, and you can also capture the values of the matched groups and refer to them using $1, $2, and so on: case $::lsbdistdescription { /Ubuntu (.+)/: {    notify { "You have Ubuntu version ${1}": } } /CentOS (.+)/: {    notify { "You have CentOS version ${1}": } } default: {} } Defaults Both selectors and case statements let you specify a default value, which is chosen if none of the other options match (the style guide suggests you always have a default clause defined): $lunch = 'Filet mignon.' $lunchtype = $lunch ? 
{ /fries/ => 'unhealthy', /salad/ => 'healthy', default => 'unknown', }   notify { "Your lunch was ${lunchtype}": } The output is as follows: t@mylaptop ~ $ puppet apply lunchtype.pp Notice: Your lunch was unknown Notice: /Stage[main]/Main/Notify[Your lunch was unknown]/message: defined 'message' as 'Your lunch was unknown' When the default action shouldn't normally occur, use the fail() function to halt the Puppet run. How to do it… The following steps will show you how to use the in operator: Add the following code to your manifest: if $::operatingsystem in [ 'Ubuntu', 'Debian' ] { notify { 'Debian-type operating system detected': } } elsif $::operatingsystem in [ 'RedHat', 'Fedora', 'SuSE',   'CentOS' ] { notify { 'RedHat-type operating system detected': } } else { notify { 'Some other operating system detected': } } Run Puppet: t@cookbook:~/.puppet/manifests$ puppet apply in.pp Notice: Compiled catalog for cookbook.example.com in environment production in 0.03 seconds Notice: Debian-type operating system detected Notice: /Stage[main]/Main/Notify[Debian-type operating system detected]/message: defined 'message' as 'Debian-type operating system detected'Notice: Finished catalog run in 0.02 seconds There's more… The value of an in expression is Boolean (true or false) so you can assign it to a variable: $debianlike = $::operatingsystem in [ 'Debian', 'Ubuntu' ]   if $debianlike { notify { 'You are in a maze of twisty little packages, all alike': } } Creating a centralized Puppet infrastructure A configuration management tool such as Puppet is best used when you have many machines to manage. If all the machines can reach a central location, using a centralized Puppet infrastructure might be a good solution. Unfortunately, Puppet doesn't scale well with a large number of nodes. If your deployment has less than 800 servers, a single Puppet master should be able to handle the load, assuming your catalogs are not complex (take less than 10 seconds to compile each catalog). If you have a larger number of nodes, I suggest a load balancing configuration described in Mastering Puppet, Thomas Uphill, Packt Publishing. A Puppet master is a Puppet server that acts as an X509 certificate authority for Puppet and distributes catalogs (compiled manifests) to client nodes. Puppet ships with a built-in web server called WEBrick, which can handle a very small number of nodes. In this section, we will see how to use that built-in server to control a very small (less than 10) number of nodes. Getting ready The Puppet master process is started by running puppet master; most Linux distributions have start and stop scripts for the Puppet master in a separate package. To get started, we'll create a new debian server named puppet.example.com. How to do it... Install Puppet on the new server and then use Puppet to install the Puppet master package: # puppet resource package puppetmaster ensure='installed' Notice: /Package[puppetmaster]/ensure: created package { 'puppetmaster':   ensure => '3.7.0-1puppetlabs1', } Now start the Puppet master service and ensure it will start at boot: # puppet resource service puppetmaster ensure=true enable=true service { 'puppetmaster':   ensure => 'running',   enable => 'true', } How it works... The Puppet master package includes the start and stop scripts for the Puppet master service. We use Puppet to install the package and start the service. 
Once the service is started, we can point another node at the Puppet master (you might need to disable the host-based firewall on your machine). From another node, run puppet agent to start a puppet agent, which will contact the server and request a new certificate:
t@ckbk:~$ sudo puppet agent -t
Info: Creating a new SSL key for cookbook.example.com
Info: Caching certificate for ca
Info: Creating a new SSL certificate request for cookbook.example.com
Info: Certificate Request fingerprint (SHA256): 06:C6:2B:C4:97:5D:16:F2:73:82:C4:A9:A7:B1:D0:95:AC:69:7B:27:13:A9:1A:4C:98:20:21:C2:50:48:66:A2
Info: Caching certificate for ca
Exiting; no certificate found and waitforcert is disabled
Now on the Puppet server, sign the new key:
root@puppet:~# puppet cert list
  "cookbook.example.com" (SHA256) 06:C6:2B:C4:97:5D:16:F2:73:82:C4:A9:A7:B1:D0:95:AC:69:7B:27:13:A9:1A:4C:98:20:21:C2:50:48:66:A2
root@puppet:~# puppet cert sign cookbook.example.com
Notice: Signed certificate request for cookbook.example.com
Notice: Removing file Puppet::SSL::CertificateRequest cookbook.example.com at '/var/lib/puppet/ssl/ca/requests/cookbook.example.com.pem'
Return to the cookbook node and run Puppet again:
t@ckbk:~$ sudo puppet agent -vt
Info: Caching certificate for cookbook.example.com
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for cookbook.example.com
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for cookbook
Info: Applying configuration version '1410401823'
Notice: Finished catalog run in 0.04 seconds
There's more...
When we ran puppet agent, Puppet looked for a host named puppet.example.com (since our test node is in the example.com domain); if it couldn't find that host, it would then look for a host named puppet. We can specify the server to contact with the --server option to puppet agent.
When we installed the Puppet master package and started the Puppet master service, Puppet created default SSL certificates based on our hostname.
Creating certificates with multiple DNS names
By default, Puppet will create an SSL certificate for your Puppet master that contains the fully qualified domain name of the server only. Depending on how your network is configured, it can be useful for the server to be known by other names. In this recipe, we'll make a new certificate for our Puppet master that has multiple DNS names.
Getting ready
Install the Puppet master package if you haven't already done so. You will then need to start the Puppet master service at least once to create a certificate authority (CA).
How to do it...
The steps are as follows:
Stop the running Puppet master process with the following command:
# service puppetmaster stop
[ ok ] Stopping puppet master.
Delete (clean) the current server certificate:
# puppet cert clean puppet
Notice: Revoked certificate with serial 6
Notice: Removing file Puppet::SSL::Certificate puppet at '/var/lib/puppet/ssl/ca/signed/puppet.pem'
Notice: Removing file Puppet::SSL::Certificate puppet at '/var/lib/puppet/ssl/certs/puppet.pem'
Notice: Removing file Puppet::SSL::Key puppet at '/var/lib/puppet/ssl/private_keys/puppet.pem'
Create a new Puppet certificate using puppet certificate generate with the --dns-alt-names option:
root@puppet:~# puppet certificate generate puppet --dns-alt-names puppet.example.com,puppet.example.org,puppet.example.net --ca-location local
Notice: puppet has a waiting certificate request
true
Sign the new certificate:
root@puppet:~# puppet cert --allow-dns-alt-names sign puppet
Notice: Signed certificate request for puppet
Notice: Removing file Puppet::SSL::CertificateRequest puppet at '/var/lib/puppet/ssl/ca/requests/puppet.pem'
Restart the Puppet master process:
root@puppet:~# service puppetmaster restart
[ ok ] Restarting puppet master.
How it works...
When your puppet agents connect to the Puppet server, they look for a host called puppet; they then look for a host called puppet.[your domain]. If your clients are in different domains, you need your Puppet master to reply to all of these names correctly. By removing the existing certificate and generating a new one, you can have your Puppet master reply to multiple DNS names.
Summary
Configuration management has become a requirement for system administrators. Knowing how to use configuration management tools, such as Puppet, enables administrators to take full advantage of automated provisioning systems and cloud resources. There is a natural progression from performing a task manually, to scripting it, to creating a Puppet module for it (in other words, Puppetizing the task).
Resources for Article:
Further resources on this subject:
Designing Puppet Architectures [article]
External Tools and the Puppet Ecosystem [article]
Puppet: Integrating External Tools [article]

Sprites in Action

Packt
20 Feb 2015
6 min read
In this article by Milcho G. Milchev, author of the book SFML Essentials, we will see how we can use SFML to create a customized animation using a sequence of images. We will also see how SFML renders an animation. Animation exists in many forms. The traditional approach to animation is drawing a sequence of images which differ slightly from each other, and showing them on a screen one after the other. Even though this approach is still widely used, there are more elegant alternatives. For example, drawing (or modelling in 3D) only the limbs of a character and then animating how they move relative to time is a technique that saves a lot of time for artists. It also creates smoother results because not every frame of the animation has to be redrawn. In this book, we are going to explore only the traditional approach, since it is the simpler solution for programmers, and in many cases it is enough to bring life to any sprite. (For more resources related to this topic, see here.) The setup As we established earlier, the traditional approach involves a set of images that need to change over time. For our example, we will use a crystal, which rotates around its centre. Typically, an animation is kept in a single file (a sprite sheet), where each frame of the animation is stored, and in most cases, each frame is the same size—the size of the object. In our example, the sprite is, 32 x 32 pixels and has eight frames, which play for one second. Here is what the sprite sheet looks like: The following screenshot shows our animation setup in code: First of all, note that we are using the AssetManager class to load our sprite sheet. The next line sets the texture rectangle of the sprite to target the first image in our sprite sheet. Here is what this means in terms of the sprite sheet texture: Next, we will move this texture rectangle once in a while to simulate a rotating crystal. In the previous code, we set the number of frames to eight (as many as there are in  the sprite sheet), and set the time of the animation to one second in total, which means that each frame stays for about 0.125 seconds (the animation duration is divided by the number of frames) at a time. We know what we want to do now, so let's do it: In the code, we first measure the delta time since the last frame and add it to the accumulated time. The last two lines of the code actually do all the work. The first one looks intimidating at first glance, but it is simply a way to choose the correct frame, based on how much time has passed and how long the animation is. The formula timeAsSeconds / animationDuration gives us the time relative to the animation duration. So let's say that 0.4 seconds have passed and our animation duration is 1 second. This leaves us with 0.4 seconds in local animation time. Multiply this 0.4 seconds by the number of frames, and we get the following result: 0.4 * 8 = 3.2 This gives us which frame we should be on at the moment, and how long we have been there. The current frame index is the whole part of 3.2 (which is three), and the fraction part (0.2) is how long we have been on that frame. In this case, we are only interested in the current frame so we will take that by casting the whole expression to int. This rounds the number down if the number is positive (which it always is in this case). The last part, % frameNum is there to restart the animation when it reaches beyond its last frame. 
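Since the original listings appear only as screenshots, here is a minimal, self-contained sketch of the setup and update logic described so far, assuming SFML 2.x; the crystal.png file name and the plain sf::Texture (standing in for the book's AssetManager) are assumptions:
#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(320, 240), "Rotating crystal");

    // Load the 256x32 sprite sheet: 8 frames of 32x32 (file name is an assumption)
    sf::Texture texture;
    if (!texture.loadFromFile("crystal.png"))
        return 1;

    sf::Sprite sprite(texture);
    const sf::Vector2i spriteSize(32, 32);
    const int frameNum = 8;
    const float animationDuration = 1.f; // the whole cycle plays in one second

    // Start by targeting the first frame of the sheet
    sprite.setTextureRect(sf::IntRect(0, 0, spriteSize.x, spriteSize.y));

    sf::Clock frameClock;
    float timeAsSeconds = 0.f;

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        // Accumulate the delta time since the last frame
        timeAsSeconds += frameClock.restart().asSeconds();

        // Scale elapsed time to [0..frameNum), truncate to an index, and wrap with %
        int animFrame = static_cast<int>((timeAsSeconds / animationDuration) * frameNum) % frameNum;
        sprite.setTextureRect(sf::IntRect(animFrame * spriteSize.x, 0,
                                          spriteSize.x, spriteSize.y));

        window.clear();
        window.draw(sprite);
        window.display();
    }
    return 0;
}
With that sketch in mind, the wrap-around behaviour of % frameNum is easiest to see with a concrete number.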
So in the case where 2.3 seconds have passed, we have the following result: 2.3 * 8 = 18.4 We do not have a 19th frame to show, so we show the frame which corresponds to that in our local scale [0…7]. In this case: 18 / 8 = 2 (and 2 remainder) Since the % operator takes the remainder of a division, we are left to show the frame with the index two, which is the third frame. (We start counting from zero as programmers, remember?) The last line of the code sets the texture rectangle to the current frame. The process is quite straightforward—since we only have frames on the x axis, we do not need to worry about the y coordinate of the rectangle, and so we will set it to zero. The x is computed by animFrame * spriteSize.x, which multiplies the current frame by the width of the frame. In the case, the current frame is two and the frame's width is 32, so we get: 2 * 32 = 64 Here is what the texture rectangle will look like: The last thing we need to do is render the sprite inside the render frame and we are done. If everything goes smoothly, we should have a rotating crystal on the screen with eight frames. With this technique, we can animate sprites of all kinds no matter how many frames they have or how long the animation is. There are problems with the current approach though - the code looks messy, and it is only useful for a single animation. What if we want multiple animations for a sprite (rotating the crystal in a vertical direction as well), and we want to be able to switch between them? Currently, we would have to duplicate all our code for each animation and each animated sprite. In the next section, we will talk about how to avoid these issues by building a fully featured animation system that requires as little code duplication as possible. Summary Sprite animations seem quite easy now, don't they? Just keep in mind that there is a lot more to explore when it comes to animation. Not only are there different techniques to doing them, but also perfecting what we've developed so far might take some time. Fortunately, what we have so far will work as is in the majority of cases so I would say that you are pretty much set to go. If you want to dig deeper, buy the book and read SFML Essentials in a simple step-by-step fashion by using SFML library to create realistic looking animations as well as to develop  2D and 3D games using the SFML. Resources for Article: Further resources on this subject: Vmware Vcenter Operations Manager Essentials - Introduction To Vcenter Operations Manager [article] Translating a File In Sdl Trados Studio [article] Adding Finesse To Your Game [article]

The Spark Programming Model

Packt
20 Feb 2015
13 min read
In this article by Nick Pentreath, author of the book Machine Learning with Spark, we will delve into a high-level overview of Spark's design, we will introduce the SparkContext object as well as the Spark shell, which we will use to interactively explore the basics of the Spark programming model. While this section provides a brief overview and examples of using Spark, we recommend that you read the following documentation to get a detailed understanding:Spark Quick Start: http://spark.apache.org/docs/latest/quick-start.htmlSpark Programming guide, which covers Scala, Java, and Python: http://spark.apache.org/docs/latest/programming-guide.html (For more resources related to this topic, see here.) SparkContext and SparkConf The starting point of writing any Spark program is SparkContext (or JavaSparkContext in Java). SparkContext is initialized with an instance of a SparkConf object, which contains various Spark cluster-configuration settings (for example, the URL of the master node). Once initialized, we will use the various methods found in the SparkContext object to create and manipulate distributed datasets and shared variables. The Spark shell (in both Scala and Python, which is unfortunately not supported in Java) takes care of this context initialization for us, but the following lines of code show an example of creating a context running in the local mode in Scala: val conf = new SparkConf().setAppName("Test Spark App").setMaster("local[4]")val sc = new SparkContext(conf) This creates a context running in the local mode with four threads, with the name of the application set to Test Spark App. If we wish to use default configuration values, we could also call the following simple constructor for our SparkContext object, which works in exactly the same way: val sc = new SparkContext("local[4]", "Test Spark App") The Spark shell Spark supports writing programs interactively using either the Scala or Python REPL (that is, the Read-Eval-Print-Loop, or interactive shell). The shell provides instant feedback as we enter code, as this code is immediately evaluated. In the Scala shell, the return result and type is also displayed after a piece of code is run. To use the Spark shell with Scala, simply run ./bin/spark-shell from the Spark base directory. This will launch the Scala shell and initialize SparkContext, which is available to us as the Scala value, sc. Your console output should look similar to the following screenshot: To use the Python shell with Spark, simply run the ./bin/pyspark command. Like the Scala shell, the Python SparkContext object should be available as the Python variable sc. You should see an output similar to the one shown in this screenshot: Resilient Distributed Datasets The core of Spark is a concept called the Resilient Distributed Dataset (RDD). An RDD is a collection of "records" (strictly speaking, objects of some type) that is distributed or partitioned across many nodes in a cluster (for the purposes of the Spark local mode, the single multithreaded process can be thought of in the same way). An RDD in Spark is fault-tolerant; this means that if a given node or task fails (for some reason other than erroneous user code, such as hardware failure, loss of communication, and so on), the RDD can be reconstructed automatically on the remaining nodes and the job will still complete. 
Creating RDDs RDDs can be created from existing collections, for example, in the Scala Spark shell that you launched earlier: val collection = List("a", "b", "c", "d", "e")val rddFromCollection = sc.parallelize(collection) RDDs can also be created from Hadoop-based input sources, including the local filesystem, HDFS, and Amazon S3. A Hadoop-based RDD can utilize any input format that implements the Hadoop InputFormat interface, including text files, other standard Hadoop formats, HBase, Cassandra, and many more. The following code is an example of creating an RDD from a text file located on the local filesystem: val rddFromTextFile = sc.textFile("LICENSE") The preceding textFile method returns an RDD where each record is a String object that represents one line of the text file. Spark operations Once we have created an RDD, we have a distributed collection of records that we can manipulate. In Spark's programming model, operations are split into transformations and actions. Generally speaking, a transformation operation applies some function to all the records in the dataset, changing the records in some way. An action typically runs some computation or aggregation operation and returns the result to the driver program where SparkContext is running. Spark operations are functional in style. For programmers familiar with functional programming in Scala or Python, these operations should seem natural. For those without experience in functional programming, don't worry; the Spark API is relatively easy to learn. One of the most common transformations that you will use in Spark programs is the map operator. This applies a function to each record of an RDD, thus mapping the input to some new output. For example, the following code fragment takes the RDD we created from a local text file and applies the size function to each record in the RDD. Remember that we created an RDD of Strings. Using map, we can transform each string to an integer, thus returning an RDD of Ints: val intsFromStringsRDD = rddFromTextFile.map(line => line.size) You should see output similar to the following line in your shell; this indicates the type of the RDD: intsFromStringsRDD: org.apache.spark.rdd.RDD[Int] = MappedRDD[5] at map at <console>:14 In the preceding code, we saw the => syntax used. This is the Scala syntax for an anonymous function, which is a function that is not a named method (that is, one defined using the def keyword in Scala or Python, for example). The line => line.size syntax means that we are applying a function where the input variable is to the left of the => operator, and the output is the result of the code to the right of the => operator. In this case, the input is line, and the output is the result of calling line.size. In Scala, this function that maps a string to an integer is expressed as String => Int.This syntax saves us from having to separately define functions every time we use methods such as map; this is useful when the function is simple and will only be used once, as in this example. Now, we can apply a common action operation, count, to return the number of records in our RDD: intsFromStringsRDD.count The result should look something like the following console output: 14/01/29 23:28:28 INFO SparkContext: Starting job: count at <console>:17...14/01/29 23:28:28 INFO SparkContext: Job finished: count at <console>:17, took 0.019227 sres4: Long = 398 Perhaps we want to find the average length of each line in this text file. 
We can first use the sum function to add up all the lengths of all the records and then divide the sum by the number of records: val sumOfRecords = intsFromStringsRDD.sumval numRecords = intsFromStringsRDD.countval aveLengthOfRecord = sumOfRecords / numRecords The result will be as follows: aveLengthOfRecord: Double = 52.06030150753769 Spark operations, in most cases, return a new RDD, with the exception of most actions, which return the result of a computation (such as Long for count and Double for sum in the preceding example). This means that we can naturally chain together operations to make our program flow more concise and expressive. For example, the same result as the one in the preceding line of code can be achieved using the following code: val aveLengthOfRecordChained = rddFromTextFile.map(line => line.size).sum / rddFromTextFile.count An important point to note is that Spark transformations are lazy. That is, invoking a transformation on an RDD does not immediately trigger a computation. Instead, transformations are chained together and are effectively only computed when an action is called. This allows Spark to be more efficient by only returning results to the driver when necessary so that the majority of operations are performed in parallel on the cluster. This means that if your Spark program never uses an action operation, it will never trigger an actual computation, and you will not get any results. For example, the following code will simply return a new RDD that represents the chain of transformations: val transformedRDD = rddFromTextFile.map(line => line.size).filter(size => size > 10).map(size => size * 2) This returns the following result in the console: transformedRDD: org.apache.spark.rdd.RDD[Int] = MappedRDD[8] at map at <console>:14 Notice that no actual computation happens and no result is returned. If we now call an action, such as sum, on the resulting RDD, the computation will be triggered: val computation = transformedRDD.sum You will now see that a Spark job is run, and it results in the following console output: ...14/11/27 21:48:21 INFO SparkContext: Job finished: sum at <console>:16, took 0.193513 scomputation: Double = 60468.0 The complete list of transformations and actions possible on RDDs as well as a set of more detailed examples are available in the Spark programming guide (located at http://spark.apache.org/docs/latest/programming-guide.html#rdd-operations), and the API documentation (the Scala API documentation) is located at http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.rdd.RDD). Caching RDDs One of the most powerful features of Spark is the ability to cache data in memory across a cluster. This is achieved through use of the cache method on an RDD: rddFromTextFile.cache Calling cache on an RDD tells Spark that the RDD should be kept in memory. The first time an action is called on the RDD that initiates a computation, the data is read from its source and put into memory. Hence, the first time such an operation is called, the time it takes to run the task is partly dependent on the time it takes to read the data from the input source. However, when the data is accessed the next time (for example, in subsequent queries in analytics or iterations in a machine learning model), the data can be read directly from memory, thus avoiding expensive I/O operations and speeding up the computation, in many cases, by a significant factor. 
If we now call the count or sum function on our cached RDD, we will see that the RDD is loaded into memory: val aveLengthOfRecordChained = rddFromTextFile.map(line => line.size).sum / rddFromTextFile.count Indeed, in the following output, we see that the dataset was cached in memory on the first call, taking up approximately 62 KB and leaving us with around 270 MB of memory free: ...14/01/30 06:59:27 INFO MemoryStore: ensureFreeSpace(63454) called with curMem=32960, maxMem=31138775014/01/30 06:59:27 INFO MemoryStore: Block rdd_2_0 stored as values to memory (estimated size 62.0 KB, free 296.9 MB)14/01/30 06:59:27 INFO BlockManagerMasterActor$BlockManagerInfo:Added rdd_2_0 in memory on 10.0.0.3:55089 (size: 62.0 KB, free: 296.9 MB)... Now, we will call the same function again: val aveLengthOfRecordChainedFromCached = rddFromTextFile.map(line => line.size).sum / rddFromTextFile.count We will see from the console output that the cached data is read directly from memory: ...14/01/30 06:59:34 INFO BlockManager: Found block rdd_2_0 locally... Spark also allows more fine-grained control over caching behavior. You can use the persist method to specify what approach Spark uses to cache data. More information on RDD caching can be found here: http://spark.apache.org/docs/latest/programming-guide.html#rdd-persistence. Broadcast variables and accumulators Another core feature of Spark is the ability to create two special types of variables: broadcast variables and accumulators. A broadcast variable is a read-only variable that is made available from the driver program that runs the SparkContext object to the nodes that will execute the computation. This is very useful in applications that need to make the same data available to the worker nodes in an efficient manner, such as machine learning algorithms. Spark makes creating broadcast variables as simple as calling a method on SparkContext as follows: val broadcastAList = sc.broadcast(List("a", "b", "c", "d", "e")) The console output shows that the broadcast variable was stored in memory, taking up approximately 488 bytes, and it also shows that we still have 270 MB available to us: 14/01/30 07:13:32 INFO MemoryStore: ensureFreeSpace(488) called with curMem=96414, maxMem=31138775014/01/30 07:13:32 INFO MemoryStore: Block broadcast_1 stored as values to memory(estimated size 488.0 B, free 296.9 MB)broadCastAList: org.apache.spark.broadcast.Broadcast[List[String]] = Broadcast(1) A broadcast variable can be accessed from nodes other than the driver program that created it (that is, the worker nodes) by calling value on the variable: sc.parallelize(List("1", "2", "3")).map(x => broadcastAList.value ++ x).collect This code creates a new RDD with three records from a collection (in this case, a Scala List) of ("1", "2", "3"). In the map function, it returns a new collection with the relevant record from our new RDD appended to the broadcastAList that is our broadcast variable. Notice that we used the collect method in the preceding code. This is a Spark action that returns the entire RDD to the driver as a Scala (or Python or Java) collection. We will often use collect when we wish to apply further processing to our results locally within the driver program. Note that collect should generally only be used in cases where we really want to return the full result set to the driver and perform further processing. 
If we try to call collect on a very large dataset, we might run out of memory on the driver and crash our program.It is preferable to perform as much heavy-duty processing on our Spark cluster as possible, preventing the driver from becoming a bottleneck. In many cases, however, collecting results to the driver is necessary, such as during iterations in many machine learning models. On inspecting the result, we will see that for each of the three records in our new RDD, we now have a record that is our original broadcasted List, with the new element appended to it (that is, there is now either "1", "2", or "3" at the end): ...14/01/31 10:15:39 INFO SparkContext: Job finished: collect at <console>:15, took 0.025806 sres6: Array[List[Any]] = Array(List(a, b, c, d, e, 1), List(a, b, c, d, e, 2), List(a, b, c, d, e, 3)) An accumulator is also a variable that is broadcasted to the worker nodes. The key difference between a broadcast variable and an accumulator is that while the broadcast variable is read-only, the accumulator can be added to. There are limitations to this, that is, in particular, the addition must be an associative operation so that the global accumulated value can be correctly computed in parallel and returned to the driver program. Each worker node can only access and add to its own local accumulator value, and only the driver program can access the global value. Accumulators are also accessed within the Spark code using the value method. For more details on broadcast variables and accumulators, see the Shared Variables section of the Spark Programming Guide: http://spark.apache.org/docs/latest/programming-guide.html#shared-variables. Summary In this article, we learned the basics of Spark's programming model and API using the interactive Scala console. Resources for Article: Further resources on this subject: Ridge Regression [article] Clustering with K-Means [article] Machine Learning Examples Applicable to Businesses [article]

Aggregators, File exchange Over FTP/FTPS, Social Integration, and Enterprise Messaging

Packt
20 Feb 2015
26 min read
In this article by Chandan Pandey, the author of Spring Integration Essentials, we will explore the out-of-the-box capabilities that the Spring Integration framework provides for a seamless flow of messages across heterogeneous components and see what Spring Integration has in the box when it comes to real-world integration challenges. We will cover Spring Integration's support for external components and we will cover the following topics in detail: Aggregators File exchange over FTP/FTPS Social integration Enterprise messaging (For more resources related to this topic, see here.) Aggregators The aggregators are the opposite of splitters - they combine multiple messages and present them as a single message to the next endpoint. This is a very complex operation, so let's start by a real life scenario. A news channel might have many correspondents who can upload articles and related images. It might happen that the text of the articles arrives much sooner than the associated images - but the article must be sent for publishing only when all relevant images have also arrived. This scenario throws up a lot of challenges; partial articles should be stored somewhere, there should be a way to correlate incoming components with existing ones, and also there should be a way to identify the completion of a message. Aggregators are there to handle all of these aspects - some of the relevant concepts that are used are MessageStore, CorrelationStrategy, and ReleaseStrategy. Let's start with a code sample and then we will dive down to explore each of these concepts in detail: <int:aggregator   input-channel="fetchedFeedChannelForAggregatior"   output-channel="aggregatedFeedChannel"   ref="aggregatorSoFeedBean"   method="aggregateAndPublish"   release-strategy="sofeedCompletionStrategyBean"   release-strategy-method="checkCompleteness"   correlation-strategy="soFeedCorrelationStrategyBean"   correlation-strategy-method="groupFeedsBasedOnCategory"   message-store="feedsMySqlStore "   expire-groups-upon-completion="true">   <int:poller fixed-rate="1000"></int:poller> </int:aggregator> Hmm, a pretty big declaration! And why not—a lot of things combine together to act as an aggregator. Let's quickly glance at all the tags used: int:aggregator: This is used to specify the Spring framework's namespace for the aggregator. input-channel: This is the channel from which messages will be consumed. output-channel: This is the channel to which messages will be dropped after aggregation. ref: This is used to specify the bean having the method that is called on the release of messages. method: This is used to specify the method that is invoked when messages are released. release-strategy: This is used to specify the bean having the method that decides whether aggregation is complete or not. release-strategy-method: This is the method having the logic to check for completeness of the message. correlation-strategy: This is used to specify the bean having the method to correlate the messages. correlation-strategy-method: This is the method having the actual logic to correlate the messages. message-store: This is used to specify the message store, where messages are temporarily stored until they have been correlated and are ready to release. This can be in memory (which is default) or can be a persistence store. If a persistence store is configured, message delivery will be resumed across a server crash. 
Java class can be defined as an aggregator and, as described in the previous bullet points, the method and ref parameters decide which method of bean (referred by ref) should be invoked when messages have been aggregated as per CorrelationStrategy and released after fulfilment of ReleaseStrategy. In the following example, we are just printing the messages before passing them on to the next consumer in the chain: public class SoFeedAggregator {   public List<SyndEntry> aggregateAndPublish(List<SyndEntry>     messages) {     //Do some pre-processing before passing on to next channel     return messages;   } } Let's get to the details of the three most important components that complete the aggregator. Correlation strategy Aggregator needs to group the messages—but how will it decide the groups? In simple words, CorrelationStrategy decides how to correlate the messages. The default is based on a header named CORRELATION_ID. All messages having the same value for the CORRELATION_ID header will be put in one bracket. Alternatively, we can designate any Java class and its method to define a custom correlation strategy or can extend Spring Integration framework's CorrelationStrategy interface to define it. If the CorrelationStrategy interface is implemented, then the getCorrelationKey() method should be implemented. Let's see our correlation strategy in the feeds example: public class CorrelationStrategy {   public Object groupFeedsBasedOnCategory(Message<?> message) {     if(message!=null){       SyndEntry entry = (SyndEntry)message.getPayload();       List<SyndCategoryImpl> categories=entry.getCategories();       if(categories!=null&&categories.size()>0){         for (SyndCategoryImpl category: categories) {           //for simplicity, lets consider the first category           return category.getName();         }       }     }     return null;   } } So how are we correlating our messages? We are correlating the feeds based on the category name. The method must return an object that can be used for correlating the messages. If a user-defined object is returned, it must satisfy the requirements for a key in a map such as defining hashcode() and equals(). The return value must not be null. Alternatively, if we would have wanted to implement it by extending framework support, then it would have looked like this: public class CorrelationStrategy implements CorrelationStrategy {   public Object getCorrelationKey(Message<?> message) {     if(message!=null){       …             return category.getName();           }         }       }       return null;     }   } } Release strategy We have been grouping messages based on correlation strategy—but when will we release it for the next component? This is decided by the release strategy. Similar to the correlation strategy, any Java POJO can define the release strategy or we can extend framework support. Here is the example of using the Java POJO class: public class CompletionStrategy {   public boolean checkCompleteness(List<SyndEntry> messages) {     if(messages!=null){       if(messages.size()>2){         return true;       }     }     return false;   } } The argument of a message must be of type collection and it must return a Boolean indication whether to release the accumulated messages or not. For simplicity, we have just checked for the number of messages from the same category—if it's greater than two, we release the messages. Message store Until an aggregated message fulfils the release criteria, the aggregator needs to store them temporarily. 
This is where message stores come into the picture. Message stores can be of two types: in-memory and persistence store. Default is in memory, and if this is to be used, then there is no need to declare this attribute at all. If a persistent message store needs to be used, then it must be declared and its reference should be given to the message- store attribute. A mysql message store can be declared and referenced as follows: <bean id=" feedsMySqlStore "   class="org.springframework.integration.jdbc.JdbcMessageStore">   <property name="dataSource" ref="feedsSqlDataSource"/> </bean> Data source is Spring framework's standard JDBC data source. The greatest advantage of using persistence store is recoverability—if the system recovers from a crash, all in-memory aggregated messages will not be lost. Another advantage is capacity—memory is limited, which can accommodate a limited number of messages for aggregation, but the database can have a much bigger space. FTP/FTPS FTP, or File Transfer Protocol, is used to transfer files across networks. FTP communications consist of two parts: server and client. The client establishes a session with the server, after which it can download or upload files. Spring Integration provides components that act as a client and connect to the FTP server to communicate with it. What about the server—which server will it connect to? If you have access to any public or hosted FTP server, use it. Else, the easiest way for trying out the example in this section is to set up a local instance of the FTP server. FTP setup is out of the scope of this article. Prerequisites To use Spring Integration components for FTP/FTPS, we need to add a namespace to our configuration file and then add the Maven dependency entry in the pom.xml file. The following entries should be made: Namespace support can be added by using the following code snippet:   class="org.springframework.integration.     ftp.session.DefaultFtpSessionFactory">   <property name="host" value="localhost"/>   <property name="port" value="21"/>   <property name="username" value="testuser"/>   <property name="password" value="testuser"/> </bean> The DefaultFtpSessionFactory class is at work here, and it takes the following parameters: Host that is running the FTP server Port at which it's running the server Username Password for the server A session pool for the factory is maintained and an instance is returned when required. Spring takes care of validating that a stale session is never returned. Downloading files from the FTP server Inbound adapters can be used to read the files from the server. The most important aspect is the session factory that we just discussed in the preceding section. The following code snippet configures an FTP inbound adapter that downloads a file from a remote directory and makes it available for processing: <int-ftp:inbound-channel-adapter   channel="ftpOutputChannel"   session-factory="ftpClientSessionFactory"   remote-directory="/"   local-directory=   "C:\Chandan\Projects\siexample\ftp\ftplocalfolder"   auto-create-local-directory="true"   delete-remote-files="true"   filename-pattern="*.txt"   local-filename-generator-expression=   "#this.toLowerCase() + '.trns'">   <int:poller fixed-rate="1000"/> </int-ftp:inbound-channel-adapter> Let's quickly go through the tags used in this code: int-ftp:inbound-channel-adapter: This is the namespace support for the FTP inbound adapter. channel: This is the channel on which the downloaded files will be put as a message. 
session-factory: This is a factory instance that encapsulates details for connecting to a server. remote-directory: This is the directory on the server where the adapter should listen for the new arrival of files. local-directory: This is the local directory where the downloaded files should be dumped. auto-create-local-directory: If enabled, this will create the local directory structure if it's missing. delete-remote-files: If enabled, this will delete the files on the remote directory after it has been downloaded successfully. This will help in avoiding duplicate processing. filename-pattern: This can be used as a filter, but only files matching the specified pattern will be downloaded. local-filename-generator-expression: This can be used to generate a local filename. An inbound adapter is a special listener that listens for events on the remote directory, for example, an event fired on the creation of a new file. At this point, it will initiate the file transfer. It creates a payload of type Message<File> and puts it on the output channel. By default, the filename is retained and a file with the same name as the remote file is created in the local directory. This can be overridden by using local- filename-generator-expression. Incomplete files On the remote server, there could be files that are still in the process of being written. Typically, there the extension is different, for example, filename.actualext.writing. The best way to avoid reading incomplete files is to use the filename pattern that will copy only those files that have been written completely. Uploading files to the FTP server Outbound adapters can be used to write files to the server. The following code snippet reads a message from a specified channel and writes it inside the FTP server's remote directory. The remote server session is determined as usual by the session factory. Make sure the username configured in the session object has the necessary permission to write to the remote directory. The following configuration sets up a FTP adapter that can upload files in the specified directory:   <int-ftp:outbound-channel-adapter channel="ftpOutputChannel"     remote-directory="/uploadfolder"     session-factory="ftpClientSessionFactory"     auto-create-directory="true">   </int-ftp:outbound-channel-adapter> Here is a brief description of the tags used: int-ftp:outbound-channel-adapter: This is the namespace support for the FTP outbound adapter. channel: This is the name of the channel whose payload will be written to the remote server. remote-directory: This is the remote directory where files will be put. The user configured in the session factory must have appropriate permission. session-factory: This encapsulates details for connecting to the FTP server. auto-create-directory: If enabled, this will automatically create a remote directory if it's missing, and the given user should have sufficient permission. The payload on the channel need not necessarily be a file type; it can be one of the following: java.io.File: A Java file object byte[]: This is a byte array that represents the file contents java.lang.String: This is the text that represents the file contents Avoiding partially written files Files on the remote server must be made available only when they have been written completely and not when they are still partial. Spring uses a mechanism of writing the files to a temporary location and its availability is published only when it has been completely written. 
By default, the suffix is written, but it can be changed using the temporary-file-suffix property. This can be completely disabled by setting use-temporary-file- name to false. FTP outbound gateway Gateway, by definition, is a two-way component: it accepts input and provides a result for further processing. So what is the input and output in the case of FTP? It issues commands to the FTP server and returns the result of the command. The following command will issue an ls command with the option –l to the server. The result is a list of string objects containing the filename of each file that will be put on the reply- channel. The code is as follows: <int-ftp:outbound-gateway id="ftpGateway"     session-factory="ftpClientSessionFactory"     request-channel="commandInChannel"     command="ls"     command-options="-1"     reply-channel="commandOutChannel"/> The tags are pretty simple: int-ftp:outbound-gateway: This is the namespace support for the FTP outbound gateway session-factory: This is the wrapper for details needed to connect to the FTP server command: This is the command to be issued command-options: This is the option for the command reply-channel: This is the response of the command that is put on this channel FTPS support For FTPS support, all that is needed is to change the factory class—an instance of org.springframework.integration.ftp.session.DefaultFtpsSessionFactory should be used. Note the s in DefaultFtpsSessionFactory. Once the session is created with this factory, it's ready to communicate over a secure channel. Here is an example of a secure session factory configuration: <bean id="ftpSClientFactory"   class="org.springframework.integration.ftp.session.   DefaultFtpsSessionFactory">   <property name="host" value="localhost"/>   <property name="port" value="22"/>   <property name="username" value="testuser"/>   <property name="password" value="testuser"/> </bean> Although it is obvious, I would remind you that the FTP server must be configured to support a secure connection and open the appropriate port. Social integration Any application in today's context is incomplete if it does not provide support for social messaging. Spring Integration provides in-built support for many social interfaces such as e-mails, Twitter feeds, and so on. Let's discuss the implementation of Twitter in this section. Prior to Version 2.1, Spring Integration was dependent on the Twitter4J API for Twitter support, but now it leverages Spring's social module for Twitter integration. Spring Integration provides an interface for receiving and sending tweets as well as searching and publishing the search results in messages. Twitter uses oauth for authentication purposes. An app must be registered before we start Twitter development on it. Prerequisites Let's look at the steps that need to be completed before we can use a Twitter component in our Spring Integration example: Twitter account setup: A Twitter account is needed. Perform the following steps to get the keys that will allow the user to use Twitter using the API: Visit https://apps.twitter.com/. Sign in to your account. Click on Create New App. Enter the details such as Application name, Description, Website, and so on. All fields are self-explanatory and appropriate help has also been provided. The value for the field Website need not be a valid one—put an arbitrary website name in the correct format. Click on the Create your application button. 
If the application is created successfully, a confirmation message will be shown and the Application Management page will appear, as shown here: Go to the Keys and Access Tokens tab and note the details for Consumer Key (API Key) and Consumer Secret (API Secret) under Application Settings, as shown in the following screenshot: You need additional access tokens so that applications can use Twitter using APIs. Click on Create my access token; it takes a while to generate these tokens. Once it is generated, note down the value of Access Token and Access Token Secret. Go to the Permissions tab and provide permission to Read, Write and Access direct messages. After performing all these steps, and with the required keys and access token, we are ready to use Twitter. Let's store these in the twitterauth.properties property file: twitter.oauth.apiKey= lnrDlMXSDnJumKLFRym02kHsy twitter.oauth.apiSecret= 6wlriIX9ay6w2f6at6XGQ7oNugk6dqNQEAArTsFsAU6RU8F2Td twitter.oauth.accessToken= 158239940-FGZHcbIDtdEqkIA77HPcv3uosfFRnUM30hRix9TI twitter.oauth.accessTokenSecret= H1oIeiQOlvCtJUiAZaachDEbLRq5m91IbP4bhg1QPRDeh The next step towards Twitter integration is the creation of a Twitter template. This is similar to the datasource or connection factory for databases, JMS, and so on. It encapsulates details to connect to a social platform. Here is the code snippet: <context:property-placeholder location="classpath: twitterauth.properties "/> <bean id="twitterTemplate" class=" org.springframework.social.   twitter.api.impl.TwitterTemplate ">   <constructor-arg value="${twitter.oauth.apiKey}"/>   <constructor-arg value="${twitter.oauth.apiSecret}"/>   <constructor-arg value="${twitter.oauth.accessToken}"/>   <constructor-arg value="${twitter.oauth.accessTokenSecret}"/> </bean> As I mentioned, the template encapsulates all the values. Here is the order of the arguments: apiKey apiSecret accessToken accessTokenSecret With all the setup in place, let's now do some real work: Namespace support can be added by using the following code snippet: <beans   twitter-template="twitterTemplate"   channel="twitterChannel"> </int-twitter:inbound-channel-adapter> The components in this code are covered in the following bullet points: int-twitter:inbound-channel-adapter: This is the namespace support for Twitter's inbound channel adapter. twitter-template: This is the most important aspect. The Twitter template encapsulates which account to use to poll the Twitter site. The details given in the preceding code snippet are fake; it should be replaced with real connection parameters. channel: Messages are dumped on this channel. These adapters are further used for other applications, such as for searching messages, retrieving direct messages, and retrieving tweets that mention your account, and so on. Let's have a quick look at the code snippets for these adapters. I will not go into detail for each one; they are almost similar to what have been discussed previously. Search: This adapter helps to search the tweets for the parameter configured in the query tag. The code is as follows: <int-twitter:search-inbound-channel-adapter id="testSearch"   twitter-template="twitterTemplate"   query="#springintegration"   channel="twitterSearchChannel"> </int-twitter:search-inbound-channel-adapter> Retrieving Direct Messages: This adapter allows us to receive the direct message for the account in use (account configured in Twitter template). 
The code is as follows: <int-twitter:dm-inbound-channel-adapter   id="testdirectMessage"   twitter-template="twiterTemplate"   channel="twitterDirectMessageChannel"> </int-twitter:dm-inbound-channel-adapter> Retrieving Mention Messages: This adapter allows us to receive messages that mention the configured account via the @user tag (account configured in the Twitter template). The code is as follows: <int-twitter:mentions-inbound-channel-adapter   id="testmentionMessage"   twitter-template="twiterTemplate"   channel="twitterMentionMessageChannel"> </int-twitter:mentions-inbound-channel-adapter> Sending tweets Twitter exposes outbound adapters to send messages. Here is a sample code:   <int-twitter:outbound-channel-adapter     twitter-template="twitterTemplate"     channel="twitterSendMessageChannel"/> Whatever message is put on the twitterSendMessageChannel channel is tweeted by this adapter. Similar to an inbound gateway, the outbound gateway provides support for sending direct messages. Here is a simple example of an outbound adapter: <int-twitter:dm-outbound-channel-adapter   twitter-template="twitterTemplate"   channel="twitterSendDirectMessage"/> Any message that is put on the twitterSendDirectMessage channel is sent to the user directly. But where is the name of the user to whom the message will be sent? It is decided by a header in the message TwitterHeaders.DM_TARGET_USER_ID. This must be populated either programmatically, or by using enrichers or SpEL. For example, it can be programmatically added as follows: Message message = MessageBuilder.withPayload("Chandan")   .setHeader(TwitterHeaders.DM_TARGET_USER_ID,   "test_id").build(); Alternatively, it can be populated by using a header enricher, as follows: <int:header-enricher input-channel="twitterIn"   output-channel="twitterOut">   <int:header name="twitter_dmTargetUserId" value=" test_id "/> </int:header-enricher> Twitter search outbound gateway As gateways provide a two-way window, the search outbound gateway can be used to issue dynamic search commands and receive the results as a collection. If no result is found, the collection is empty. Let's configure a search outbound gateway, as follows:   <int-twitter:search-outbound-gateway id="twitterSearch"     request-channel="searchQueryChannel"     twitter-template="twitterTemplate"     search-args-expression="#springintegration"     reply-channel="searchQueryResultChannel"/> And here is what the tags covered in this code mean: int-twitter:search-outbound-gateway: This is the namespace for the Twitter search outbound gateway request-channel: This is the channel that is used to send search requests to this gateway twitter-template: This is the Twitter template reference search-args-expression: This is used as arguments for the search reply-channel: This is the channel on which searched results are populated This gives us enough to get started with the social integration aspects of the spring framework. Enterprise messaging Enterprise landscape is incomplete without JMS—it is one of the most commonly used mediums of enterprise integration. Spring provides very good support for this. Spring Integration builds over that support and provides adapter and gateways to receive and consume messages from many middleware brokers such as ActiveMQ, RabbitMQ, Rediss, and so on. Spring Integration provides inbound and outbound adapters to send and receive messages along with gateways that can be used in a request/reply scenario. Let's walk through these implementations in a little more detail. 
A basic understanding of the JMS mechanism and its concepts is expected. It is not possible to cover even the introduction of JMS here. Let's start with the prerequisites. Prerequisites To use Spring Integration messaging components, namespaces, and relevant Maven the following dependency should be added: Namespace support can be added by using the following code snippet: > Maven entry can be provided using the following code snippet: <dependency>   <groupId>org.springframework.integration</groupId>   <artifactId>spring-integration-jms</artifactId>   <version>${spring.integration.version}</version> </dependency> After adding these two dependencies, we are ready to use the components. But before we can use an adapter, we must configure an underlying message broker. Let's configure ActiveMQ. Add the following in pom.xml:   <dependency>     <groupId>org.apache.activemq</groupId>     <artifactId>activemq-core</artifactId>     <version>${activemq.version}</version>     <exclusions>       <exclusion>         <artifactId>spring-context</artifactId>         <groupId>org.springframework</groupId>       </exclusion>     </exclusions>   </dependency>   <dependency>     <groupId>org.springframework</groupId>     <artifactId>spring-jms</artifactId>     <version>${spring.version}</version>     <scope>compile</scope>   </dependency> After this, we are ready to create a connection factory and JMS queue that will be used by the adapters to communicate. First, create a session factory. As you will notice, this is wrapped in Spring's CachingConnectionFactory, but the underlying provider is ActiveMQ: <bean id="connectionFactory" class="org.springframework.   jms.connection.CachingConnectionFactory">   <property name="targetConnectionFactory">     <bean class="org.apache.activemq.ActiveMQConnectionFactory">       <property name="brokerURL" value="vm://localhost"/>     </bean>   </property> </bean> Let's create a queue that can be used to retrieve and put messages: <bean   id="feedInputQueue"   class="org.apache.activemq.command.ActiveMQQueue">   <constructor-arg value="queue.input"/> </bean> Now, we are ready to send and retrieve messages from the queue. Let's look into each message one by one. Receiving messages – the inbound adapter Spring Integration provides two ways of receiving messages: polling and event listener. Both of them are based on the underlying Spring framework's comprehensive support for JMS. JmsTemplate is used by the polling adapter, while MessageListener is used by the event-driven adapter. As the name suggests, a polling adapter keeps polling the queue for the arrival of new messages and puts the message on the configured channel if it finds one. On the other hand, in the case of the event-driven adapter, it's the responsibility of the server to notify the configured adapter. 
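Before looking at each adapter in turn, a quick way to confirm that the ActiveMQ setup above works is to drop a test message on queue.input with Spring's plain JmsTemplate. This is only a sketch; the configuration file name and the way the connectionFactory bean is obtained are assumptions:
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.jms.core.JmsTemplate;

import javax.jms.ConnectionFactory;

public class JmsSmokeTest {
    public static void main(String[] args) {
        // Load the Spring configuration that defines connectionFactory and feedInputQueue
        // (the file name jms-context.xml is an assumption)
        ApplicationContext context =
                new ClassPathXmlApplicationContext("jms-context.xml");
        ConnectionFactory connectionFactory =
                context.getBean("connectionFactory", ConnectionFactory.class);

        // Send a simple text message to the queue that the inbound adapters listen on
        JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
        jmsTemplate.convertAndSend("queue.input", "test feed entry");
    }
}
If the broker is running, the message lands on queue.input and will be picked up by whichever inbound adapter is configured next.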
The polling adapter Let's start with a code sample: <int-jms:inbound-channel-adapter   connection-factory="connectionFactory"   destination="feedInputQueue"   channel="jmsProcessedChannel">   <int:poller fixed-rate="1000" /> </int-jms:inbound-channel-adapter> This code snippet contains the following components: int-jms:inbound-channel-adapter: This is the namespace support for the JMS inbound adapter connection-factory: This is the encapsulation for the underlying JMS provider setup, such as ActiveMQ destination: This is the JMS queue where the adapter is listening for incoming messages channel: This is the channel on which incoming messages should be put There is a poller element, so it's obvious that it is a polling-based adapter. It can be configured in one of two ways: by providing a JMS template or using a connection factory along with a destination. I have used the latter approach. The preceding adapter has a polling queue mentioned in the destination and once it gets any message, it puts the message on the channel configured in the channel attribute. The event-driven adapter Similar to polling adapters, event-driven adapters also need a reference either to an implementation of the interface AbstractMessageListenerContainer or need a connection factory and destination. Again, I will use the latter approach. Here is a sample configuration: <int-jms:message-driven-channel-adapter   connection-factory="connectionFactory"   destination="feedInputQueue"   channel="jmsProcessedChannel"/> There is no poller sub-element here. As soon as a message arrives at its destination, the adapter is invoked, which puts it on the configured channel. Sending messages – the outbound adapter Outbound adapters convert messages on the channel to JMS messages and put them on the configured queue. To convert Spring Integration messages to JMS messages, the outbound adapter uses JmsSendingMessageHandler. This is is an implementation of MessageHandler. Outbound adapters should be configured with either JmsTemplate or with a connection factory and destination queue. Keeping in sync with the preceding examples, we will take the latter approach, as follows: <int-jms:outbound-channel-adapter   connection-factory="connectionFactory"   channel="jmsChannel"   destination="feedInputQueue"/> This adapter receives the Spring Integration message from jmsChannel, converts it to a JMS message, and puts it on the destination. Gateway Gateway provides a request/reply behavior instead of a one-way send or receive. For example, after sending a message, we might expect a reply or we may want to send an acknowledgement after receiving a message. The inbound gateway Inbound gateways provide an alternative to inbound adapters when request-reply capabilities are expected. An inbound gateway is an event-based implementation that listens for a message on the queue, converts it to Spring Message, and puts it on the channel. Here is a sample code: <int-jms:inbound-gateway   request-destination="feedInputQueue"   request-channel="jmsProcessedChannel"/> However, this is what an inbound adapter does—even the configuration is similar, except the namespace. So, what is the difference? The difference lies in replying back to the reply destination. Once the message is put on the channel, it will be propagated down the line and at some stage a reply would be generated and sent back as an acknowledgement. The inbound gateway, on receiving this reply, will create a JMS message and put it back on the reply destination queue. 
Then, where is the reply destination? The reply destination is decided in one of the following ways:

If the original message has a JMSReplyTo property, it takes the highest precedence.
Otherwise, the inbound gateway looks for a configured default-reply-destination, which can be specified either as a name or as a direct reference to a destination; for a direct reference, the default-reply-destination attribute should be used.
If the gateway finds neither of the preceding two, it will throw an exception.

The outbound gateway

Outbound gateways should be used in scenarios where a reply is expected for the sent messages. Let's start with an example:

<int-jms:outbound-gateway
  request-channel="jmsChannel"
  request-destination="feedInputQueue"
  reply-channel="jmsProcessedChannel" />

The preceding configuration will send messages to request-destination. When an acknowledgement is received, it can be fetched from the configured reply-destination. If reply-destination has not been configured, JMS TemporaryQueues will be created instead.

Summary

In this article, we covered out-of-the-box components provided by the Spring Integration framework, such as the aggregator. This article also showcased the simplicity and abstraction that Spring Integration provides when it comes to handling complicated integrations, be it file-based, HTTP, JMS, or any other integration mechanism.

Resources for Article:

Further resources on this subject: Modernizing Our Spring Boot App [article] Home Security by Beaglebone [article] Integrating With Other Frameworks [article]

article-image-visualize
Packt
19 Feb 2015
17 min read
Save for later

Visualize This!

Packt
19 Feb 2015
17 min read
This article, written by Michael Phillips, the author of the book TIBCO Spotfire: A Comprehensive Primer, discusses how human beings are fundamentally visual in the way they process information. The invention of writing was as much about visually representing our thoughts to others as it was about record keeping and accountancy. In the modern world, we are bombarded with formalized visual representations of information, from the ubiquitous opinion poll pie chart to clever and sophisticated infographics. The website http://data-art.net/resources/history_of_vis.php provides an informative and entertaining quick history of data visualization. If you want a truly breathtaking demonstration of the power of data visualization, seek out Hans Rosling's The best stats you've ever seen at http://ted.com.

(For more resources related to this topic, see here.)

We will spend time getting to know some of Spotfire's data capabilities. It's important that you continue to think about data: how it's structured, how it's related, and where it comes from. Building good visualizations requires visual imagination, but it also requires data literacy. This article is all about getting you to think about the visualization of information and empowering you to use Spotfire to do so. Apart from learning the basic features and properties of the various Spotfire visualization types, there is much more to learn about the seamless interactivity that Spotfire allows you to build in to your analyses. We will be taking a close look at 7 of the 16 visualization types provided by Spotfire; these 7 are the most commonly used. We will cover the following topics:

Displaying information quickly in tabular form
Enriching your visualizations with color categorization
Visualizing categorical information using bar charts
Dividing a visualization across a trellis grid
Key Spotfire concept: marking
Visualizing trends using line charts
Visualizing proportions using pie charts
Visualizing relationships using scatter plots
Visualizing hierarchical relationships using treemaps
Key Spotfire concept: filters
Enhancing tabular presentations using graphical tables

Now let's have some fun!

Displaying information quickly in tabular form

While working through the data examples, we used the Spotfire Table visualization, but now we're going to take a closer look. People will nearly always want to see the "underlying data", the details behind any visualization you create. The Table visualization meets this need. It's very important not to confuse a table in the general data sense with the Spotfire Table visualization; the underlying data table remains immutable and complete in the background. The Table visualization is a highly manipulatable view of the underlying data table and should be treated as a visualization, not a data table. The data used here is BaseballPlayerData.xls.

There is always more than one way to do the same thing in Spotfire, and this is particularly true for the manipulation of visualizations. Let's start with some very quick manipulations:

First, insert a Table visualization by going to the Insert menu, selecting New Visualization, and then Table.
To move a column, left-click on the column name, hold, and drag it.
To sort by a column, left-click on the column name.
To sort by more than one column, left-click on the first column name and then press Shift + left-click on the subsequent columns in order of sort precedence.
To widen or narrow a column, hover the mouse over the right-hand edge of the column title until you see the cursor change to a two-way arrow, and then click and drag it. These and other properties of the Table visualization are also accessed via visualization properties. As you work through the various Spotfire visualizations, you'll notice that some types have more options than others, but there are common trends and an overall consistency in conventions. Visualization properties can be opened in a number of ways: By right-clicking on the visualization, a table in this case, and selecting Properties. By going to the Edit menu and selecting Visualization Properties. By clicking on the Visualization Properties icon, as shown in the following screenshot, in the icon tray below the main menu bar. It's beyond the scope of this book to explore every property and option. The context-sensitive help provided by Spotfire is excellent and explains all the options in glorious detail. I'd like to highlight four important properties of the Table visualization: The General property allows you to change the table visualization title, not the name of the underlying data table. It also allows you to hide the title altogether. The Data property allows you to switch the underlying data table, if you have more than one table loaded into your analysis. The Columns property allows you to hide columns and order the columns you do want to show. The Show/Hide Items property allows you to limit what is shown by a rule you define, such as top five hitters. After clicking on the Add button, you select the relevant column from a dropdown list, choose Rule type (Top), and finally, choose Value for the rule (5). The resulting visualization will only show the rows of data that meet the rule you defined. Enriching your visualizations with color categorization Color is a strong feature in Spotfire and an important visualization tool, often underestimated by report creators. It can be seen as merely a nice-to-have customization, but paying attention to color can be the difference between creating a stimulating and intuitive data visualization rather than an uninspiring and even confusing corporate report. Take some pride and care in the visual aesthetics of your analytics creations! Let's take a look at the color properties of the Table visualization. Open the Table visualization properties again, select Colors, and then Add the column Runs. Now, you can play with a color gradient, adding points by clicking on the Add Point button and customizing the colors. It's as easy as left-clicking on any color box and then selecting from a prebuilt palette or going into a full RGB selection dialog by choosing More Colors…. The result is a heatmap type effect for runs scored, with yellow representing low run totals, transitioning to green as the run total approaches the average value in the data, and becoming blue for the highest run totals. Visualizing categorical information using bar charts We saw how the Table visualization is perfect for showing and ordering detailed information. It's quite similar to a spreadsheet. The Bar Chart visualization is very good for visualizing categorical information, that is, where you have categories with supporting hard numbers—sales by region, for example. The region is the category, whereas the sales is the hard number or fact. Bar charts are typically used to show a distribution. 
Depending on your data or your analytic requirement, the bars can be ordered by value, placed side by side, stacked on top of each other, or arranged vertically or horizontally. There is a special case of the category and value combination and that is where you want to plot the frequencies of a set of numerical values. This type of bar chart is referred to as a histogram, and although it is number against number, it is still, in essence, a distribution plot. It is very common in fact to transform the continuous number range in such cases into a set of discrete bins or categories for the plot. For example, you could take some demographic data and plot age as the category and the number of people at that age as the value (the frequency) on a bar chart. The result, for a general population, would approach a bell-shaped curve. Let's create a bar chart using the baseball data. The data we will use is BaseballPlayerData.xls, which you can download from http://www.insidespotfire.com. Create a new page by right-clicking on any page tab and selecting New Page. You can also select New Page from the Insert menu or click on the new page icon in the icon bar below the main menu. Create a Bar Chart visualization by left-clicking on the bar chart icon or by selecting New Visualization and then Bar Chart from the Insert menu. Spotfire will automatically create a default chart, that is, rarely exactly what you want, so the next step is to configure the chart. Two distributions might be interesting to look at: the distribution of home runs across all the teams and the distribution of player salaries across all the teams. The axes are easy to change; simply use the axes selectors.   If the bars are vertical, it means that the category—Team, in our case—should be on the horizontal axis, with the value—Home Runs or Salary—on the vertical axis, representing the height of the bars.   We're going to pick Home Runs from the vertical axis selector and then an appropriate aggregation dropdown, which is highlighted in red in the screenshot. Sum would be a valid option, but let's go with Avg (Average). Similarly, select Team from the horizontal axis dropdown selector. The vertical, or value, axis must be an aggregation because there is more than one home run value for each category. You must decide if you want a sum, an average, a minimum, and so on. You can modify the visualization properties just as you did for the Table visualization. Some of the options are the same; some are specific to the bar chart. We're going to select the Sort bars by value option in the Appearance property. This will order the bars in descending order of value. We're also going to check the option Vertically under Scale labels | Show labels for the Category Axis property. There are two more actions to perform: create an identical bar chart except with average salary as the value axis, and give each bar chart an appropriate title (Visualization Properties|General|Title:). To copy an existing visualization, simply right-click on it and select Duplicate Visualization. We can now compare the distribution of home run average and salary average across all the baseball teams, but there's a better way to do this in a single visualization using color. Close the salary distribution bar chart by left-clicking on X in the upper right-hand corner of the visualization (X appears when you hover the mouse) or right-clicking on the visualization and selecting Close. 
Now, open the home run bar chart visualization properties, go to the Colors property, and color by Avg(Salary). Select a Gradient color mode, and add a median point by clicking on the Add Point button and selecting Median from the dropdown list of options on the added point. Finally, choose a suitable heat map range of colors; something like blue (min) through pale yellow (median) through red (max). You will still see the distribution of home runs across the baseball teams, but now you will have a superimposed salary heat map. Texas and Cleveland appear to be getting much more bang for their buck than the NY Yankees. Dividing a visualization across a trellis grid Trellising, whereby you divide a series of visualizations into individual panels, is a useful technique when you want to subdivide your analysis. In the example we've been working with, we might, for instance, want to split the visualization by league. Open the visualization properties for the home runs distribution bar chart colored by salary and select the Trellis property. Go to Panels and split by League (use the dropdown column selector). Spotfire allows you to build layers of information with even basic visualizations such as the bar chart. In one chart, we see the home run distribution by team, salary distribution by team, and breakdown by league. Key Spotfire concept – marking It's time to introduce one of the most important Spotfire concepts, called marking, which is central to the interactivity that makes Spotfire such a powerful analysis tool. Marking refers to the action of selecting data in a visualization. Every element you see is selectable, or markable, that is, a single row or multiple rows in a table, a single bar or multiple bars in a bar chart. You need to understand two aspects to marking. First, there is the visual effect, or color(s) you see, when you mark (select) visualization elements. Second, there is the behavior that follows marking: what happens to data and the display of data when you mark something. How to change the marking color From Spotfire v5.5 onward, you can choose, on a visualization-by-visualization basis, two distinct visual effects for marking: Use a separate color for marked items: all marked items are uniformly colored with the marking color, and all unmarked items retain their existing color. Keep existing color attributes and fade out unmarked items: all marked items keep their existing color, and all unmarked items also keep their existing color but with a high degree of color fade applied, leaving the marked items strongly highlighted. The second option is not available in versions older than v5.5 but is the default option in Versions 5.5 onward. The setting is made in the visualization's Appearance property by checking or unchecking the option Use separate color for marked items. The default color when using a separate color for marked items is dark green, but this can be changed by going to Edit|Document Properties|Markings|Edit. The new option has the advantage of retaining any underlying coloring you defined, but you might not like how the rest of the chart is washed out. Which approach you choose depends on what information you think is critical for your particular situation. When you create a new analysis, a default marking is created and applied to every visualization you create by default. You can change the color of the marking in Document Properties, which is found in the Edit menu. 
Just open Document Properties, click on the Markings tab, select the marking, click on the Edit button, and change the color. You can also create as many markings as you need, giving them convenient names for reference purposes, but we'll just focus on using one for now.

How to set the marking behavior of a visualization

Marking behavior depends fundamentally on data relationships. The data within a single data table is intrinsically related; the data in separate data tables must be explicitly related before you configure marking behavior for visualizations based on separate datasets. When you mark something in a visualization, five things can happen depending on the data involved and how you configured your visualizations:

Condition: Two visualizations with the same underlying data table (they can be on different pages in the analysis file) and the same marking scheme applied.
Behavior: Marking data on one visualization will automatically mark the same data on the other.

Condition: Two visualizations with related underlying data tables and the same marking scheme applied.
Behavior: The same as the previous condition's behavior, but subject to differences in data granularity. For example, marking a baseball team in one visualization will mark all the team's players in another visualization that is based on a more detailed table related by team.

Condition: Two visualizations with the same or related data tables where one has been configured with data dependency on the marking in the other.
Behavior: Nothing will display in the marking-dependent visualization other than what is marked in the reference visualization.

Condition: Visualizations with unrelated underlying data tables.
Behavior: No marking interaction will occur, and the visualizations will mark completely independently of one another.

Condition: Two visualizations with the same underlying data table or related data tables and with different marking schemes applied.
Behavior: Marking data on one visualization will not show on the other because the marking schemes are different.

Here's how we set these behaviors: Open the visualization properties of the bar chart we have been working with and navigate to the Data property. You'll notice that two settings refer to marking: Marking and Limit data using markings. Use the dropdown under Marking to select the marking to be used for the visualization. Having no marking is an option. Visualizations with the same marking will display synchronous selection, subject to the data relation conditions described earlier. The options under Limit data using markings determine how the visualization will be limited to marking elsewhere in the analysis. The default here is no dependency. If you select a marking, then the visualization will only display data selected elsewhere with that marking. It's not good to have the same marking for Marking and Limit data using markings. If you are using the limit data setting, select no marking, or create a second marking and select it under Marking.

You're possibly a bit confused by now. Fortunately, marking is much harder to describe than to use! Let's build a tangible example. We'll start a new analysis, so close any analysis you have open and create a new one, loading the player-level baseball data (BaseballPlayerData.xls). Add two bar charts and a table. You can rearrange the layout by left-clicking on the title bar of a visualization, holding, and dragging it. Position the visualizations any way you wish, but you can place the two bar charts side by side, with the table below them spanning both.
Save your analysis file at this point and at regular intervals. It's good behavior to save regularly as you build an analysis. It will save you a lot of grief if your PC fails in any way. There is no autosave function in Spotfire.

For the first bar chart, set the following visualization properties (Property: Value):

General | Title: Home Runs
Data | Marking: Marking
Data | Limit data using markings: Nothing checked
Appearance | Orientation: Vertical bars
Appearance | Sort bars by value: Check
Category Axis | Columns: Team
Value Axis | Columns: Avg(Home Runs)
Colors | Columns: Avg(Salary)
Colors | Color mode: Gradient (Add Point for median; Max = strong red, Median = pale yellow, Min = strong blue)
Labels | Show labels for: Marked Rows
Labels | Types of labels | Complete bar: Check

For the second bar chart, set the following visualization properties (Property: Value):

General | Title: Roster
Data | Marking: Marking
Data | Limit data using markings: Nothing checked
Appearance | Orientation: Horizontal bars
Appearance | Sort bars by value: Check
Category Axis | Columns: Team
Value Axis | Columns: Count(Player Name)
Colors | Columns: Position
Colors | Color mode: Categorical

For the table, set the following visualization properties (Property: Value):

General | Title: Details
Data | Marking: (None)
Data | Limit data using markings: Check Marking
Columns: Team, Player Name, Games Played, Home Runs, Salary, Position

Now start selecting visualization elements with your mouse. You can click on elements such as bars or segments of bars, or you can click and drag a rectangular block around multiple elements. When you select a bar on the Home Runs bar chart, the corresponding team bar is automatically selected on the Roster bar chart, and details for all the players in that team display in the Details table. When you select a bar segment on the Roster bar chart, the corresponding team bar automatically selects on the Home Runs bar chart, and only players in the selected position for the selected team appear in the details.

There are some very useful additional functions associated with marking, and you can access these by right-clicking on a marked item. They are Unmark, Invert, Delete, Filter To, and Filter Out. You can also unmark by left-clicking on any blank space in the visualization. Play with this analysis file until you are comfortable with the marking concept and functionality.

Summary

This article is a small taste of the book TIBCO Spotfire: A comprehensive primer. You've seen how the Table visualization is an easy and traditional way to display detailed information in tabular form and how the Bar Chart visualization is excellent for visualizing categorical information, such as distributions. You've learned how to enrich visualizations with color categorization and how to divide a visualization across a trellis grid. You've also been introduced to the key Spotfire concept of marking. Apart from gaining a functional understanding of these Spotfire concepts and techniques, you should have gained some insight into the science and art of data visualization.

Resources for Article:

Further resources on this subject: The Spotfire Architecture Overview [article] Interacting with Data for Dashboards [article] Setting Up and Managing E-mails and Batch Processing [article]

article-image-getting-twitter-data
Packt
19 Feb 2015
9 min read
Save for later

Getting Twitter data

Packt
19 Feb 2015
9 min read
In this article by Paulo A Pereira, the author of Elixir Cookbook, we will build an application that will query the Twitter timeline for a given word and will display any new tweet with that keyword in real time. We will be using an Elixir twitter client extwitter as well as an Erlang application to deal with OAuth. We will wrap all in a phoenix web application. (For more resources related to this topic, see here.) Getting ready Before getting started, we need to register a new application with Twitter to get the API keys that will allow the authentication and use of Twitter's API. To do this, we will go to https://apps.twitter.com and click on the Create New App button. After following the steps, we will have access to four items that we need: consumer_key, consumer_secret, access_token, and access_token_secret. These values can be used directly in the application or setup as environment variables in an initialization file for bash or zsh (if using Unix). After getting the keys, we are ready to start building the application. How to do it… To begin with building the application, we need to follow these steps: Create a new Phoenix application: > mix phoenix.new phoenix_twitter_stream code/phoenix_twitter_stream Add the dependencies in the mix.exs file: defp deps do   [     {:phoenix, "~> 0.8.0"},     {:cowboy, "~> 1.0"},     {:oauth, github: "tim/erlang-oauth"},     {:extwitter, "~> 0.1"}   ] end Get the dependencies and compile them: > mix deps.get && mix deps.compile Configure the application to use the Twitter API keys by adding the configuration block with the keys we got from Twitter in the Getting ready section of this article. Edit lib/phoenix_twitter_stream.ex so that it looks like this: defmodule PhoenixTweeterStream do   use Application   def start(_type, _args) do     import Supervisor.Spec, warn: false     ExTwitter.configure(       consumer_key: System.get_env("SMM_TWITTER_CONSUMER_KEY"),       consumer_secret: System.get_env("SMM_TWITTER_CONSUMER_SECRET"),       access_token: System.get_env("SMM_TWITTER_ACCESS_TOKEN"),       access_token_secret: System.get_env("SMM_TWITTER_ACCESS_TOKEN_SECRET")     )     children = [       # Start the endpoint when the application starts       worker(PhoenixTweeterStream.Endpoint, []),       # Here you could define other workers and supervisors as children       # worker(PhoenixTweeterStream.Worker, [arg1, arg2, arg3]),     ]     opts = [strategy: :one_for_one, name: PhoenixTweeterStream.Supervisor]     Supervisor.start_link(children, opts)   end   def config_change(changed, _new, removed) do     PhoenixTweeterStream.Endpoint.config_change(changed, removed)     :ok   end end In this case, the keys are stored as environment variables, so we use the System.get_env function: System.get_env("SMM_TWITTER_CONSUMER_KEY") (…) If you don't want to set the keys as environment variables, the keys can be directly declared as strings this way: consumer_key: "this-is-an-example-key" (…) Define a module that will handle the query for new tweets in the lib/phoenix_twitter_stream/tweet_streamer.ex file, and add the following code: defmodule PhoenixTwitterStream.TweetStreamer do   def start(socket, query) do     stream = ExTwitter.stream_filter(track: query)     for tweet <- stream do       Phoenix.Channel.reply(socket, "tweet:stream", tweet)     end   end end Create the channel that will handle the tweets in the web/channels/tweets.ex file: defmodule PhoenixTwitterStream.Channels.Tweets do   use Phoenix.Channel   alias PhoenixTwitterStream.TweetStreamer   def 
join("tweets", %{"track" => query}, socket) do     spawn(fn() -> TweetStreamer.start(socket, query) end)     {:ok, socket}   end  end Edit the application router (/web/router.ex) to register the websocket handler and the tweets channel. The file will look like this: defmodule PhoenixTwitterStream.Router do   use Phoenix.Router   pipeline :browser do     plug :accepts, ~w(html)     plug :fetch_session     plug :fetch_flash     plug :protect_from_forgery   end   pipeline :api do     plug :accepts, ~w(json)   end   socket "/ws" do     channel "tweets", PhoenixTwitterStream.Channels.Tweets   end   scope "/", PhoenixTwitterStream do     pipe_through :browser # Use the default browser stack     get "/", PageController, :index   end end Replace the index template (web/templates/page/index.html.eex) content with this: <div class="row">   <div class="col-lg-12">     <ul id="tweets"></ul>   </div>   <script src="/js/phoenix.js" type="text/javascript"></script>   <script src="https://code.jquery.com/jquery-2.1.1.js" type="text/javascript"></script>   <script type="text/javascript">     var my_track = "programming";     var socket = new Phoenix.Socket("ws://" + location.host + "/ws");     socket.join("tweets", {track: my_track}, function(chan){       chan.on("tweet:stream", function(message){         console.log(message);         $('#tweets').prepend($('<li>').text(message.text));         });     });   </script> </div> Start the application: > mix phoenix.server Go to http://localhost:4000/ and after a few seconds, tweets should start arriving and the page will be updated to display every new tweet at the top. How it works… We start by creating a Phoenix application. We could have created a simple application to output the tweets in the console. However, Phoenix is a great choice for our purposes, displaying a web page with tweets getting updated in real time via websockets! In step 2, we add the dependencies needed to work with the Twitter API. We use parroty's extwitter Elixir application (https://hex.pm/packages/extwitter) and Tim's erlang-oauth application (https://github.com/tim/erlang-oauth/). After getting the dependencies and compiling them, we add the Twitter API keys to our application (step 4). These keys will be used to authenticate against Twitter where we previously registered our application. In step 5, we define a function that, when started, will query Twitter for any tweets containing a specific query. The stream = ExTwitter.stream_filter(track: query) line defines a stream that is returned by the ExTwitter application and is the result of filtering Twitter's timeline, extracting only the entries (tracks) that contain the defined query. The next line, which is for tweet <- stream do Phoenix.Channel.reply(socket, "tweet:stream", tweet), is a stream comprehension. For every new entry in the stream defined previously, send the entry through a Phoenix channel. Step 6 is where we define the channel. This channel is like a websocket handler. Actually, we define a join function:  def join(socket, "stream", %{"track" => query}) do    reply socket, "join", %{status: "connected"}    spawn(fn() -> TweetStreamer.start(socket, query) end)    {:ok, socket}  end It is here, when the websocket connection is performed, that we initialize the module defined in step 5 in the spawn call. This function receives a query string defined in the frontend code as track and passes that string to ExTwitter, which will use it as the filter. 
In step 7, we register and mount the websocket handler in the router using use Phoenix.Router.Socket, mount: "/ws", and we define the channel and its handler module using channel "tweets", PhoenixTwitterStream.Channels.Tweets. The channel definition must occur outside any scope definition! If we tried to define it, say, right before get "/", PageController, :index, the compiler would issue an error message and the application wouldn't even start. The last code we need to add is related to the frontend. In step 8, we mix HTML and JavaScript on the same file that will be responsible for displaying the root page and establishing the websocket connection with the server. We use a phoenix.js library helper (<script src="/js/phoenix.js" type="text/javascript"></script>), providing some functions to deal with Phoenix websockets and channels. We will take a closer look at some of the code in the frontend: // initializes the query … in this case filter the timeline for // all tweets containing "programming"  var my_track = "programming"; // initialize the websocket connection. The endpoint is /ws.  //(we already have registered with the phoenix router on step 7) var socket = new Phoenix.Socket("ws://" + location.host + "/ws"); // in here we join the channel 'tweets' // this code triggers the join function we saw on step 6 // when a new tweet arrives from the server via websocket // connection it is prepended to the existing tweets in the page socket.join("tweets", "stream", {track: my_track}, function(chan){       chan.on("tweet:stream", function(message){         $('#tweets').prepend($('<li>').text(message.text));         });     }); There's more… If you wish to see the page getting updated really fast, select a more popular word for the query. Summary In this article, we looked at how we can use extwitter to query Twitter for relevant tweets. Resources for Article: Further resources on this subject: NMAP Fundamentals [article] Api With Mongodb And Node.JS [article] Creating a Restful Api [article]
article-image-raspberry-pi-gaming-operating-systems
Packt
19 Feb 2015
3 min read
Save for later

Raspberry Pi Gaming Operating Systems

Packt
19 Feb 2015
3 min read
In this article by Shea Silverman, author of the book Raspberry Pi Gaming Second Edition, we will see how the Raspberry Pi, while a powerful little device, is nothing without software to run on it. Setting up emulators, games, and an operating system can be a daunting task for those who are new to using Linux. Luckily, there are distributions (operating system images) that handle all of this for us. In this article, we will demonstrate a distribution that has been specially made for gaming.

(For more resources related to this topic, see here.)

PiPlay

PiPlay is an open source premade distribution that combines numerous emulators, games, and a custom frontend that serves as the GUI for the Raspberry Pi. Created in 2012, PiPlay started as PiMAME. Originally, PiMAME was a version of Raspbian that included the AdvanceMAME and AdvanceMENU frontend. The distribution was set to autologin and start up AdvanceMENU at boot up. This project was founded because of the numerous issues users were facing in getting MAME to compile and run on their own devices. As more and more emulators were released, PiMAME began to include them in the image, and changed its name to PiPlay, as it wasn't just for arcade emulation anymore. Currently, PiPlay contains the following emulators and games:

AdvanceMAME (Arcade)
MAME4ALL (Arcade)
Final Burn Alpha (Capcom and Neo Geo)
PCSX_ReARMed (PlayStation)
Dgen (Genesis)
SNES9x (Super Nintendo)
FCEUX (NES)
Gearboy (Gameboy)
GPSP (Gameboy Advance)
ScummVM (point-and-click games)
Stella (Atari 2600)
NXEngine (Cave Story)
VICE (Commodore 64)
Mednafen (Game Gear, Neo Geo Pocket Color, Sega Master System, Turbo Grafx 16/PC-Engine)

To download the latest version of PiPlay, go to http://piplay.org and click on the Download option. Now you need to burn the PiPlay image to your SD card (a sample command-line approach for Linux and macOS is sketched after this article's resources). When this is completed, insert the SD card into your Raspberry Pi and turn it on. Within a few moments, you should see an image like this on your screen:

Once it's finished booting, you will be presented with the PiPlay menu screen:

Here, you will see all the different emulators and tools you have available. PiPlay includes an extensive controller setup tool. Pressing the Tab key or button 3 on your controller will bring up a popup window. Select Controller Setup and follow the onscreen guide to properly configure your controller:

At the moment, there isn't much to do because you haven't loaded any games for the emulators. The easiest way to load your game files into PiPlay is to use the web frontend. If you connect your Pi to your network, an IP address should appear at the top right of your screen. Another way to find out your IP address is by running the command ifconfig on the command line. Navigate your computer's web browser to this address, and the PiPlay frontend will appear:

Here, you can reboot, shut down, and upload numerous files to the Pi via a drag and drop interface. Simply select the emulator you want to upload files to, find your game file, and drag it onto the box. In a few moments, the file will be uploaded.

Summary

In this article, you have been introduced to the PiPlay Raspberry Pi distribution. Although all Raspberry Pi gaming distributions share a lot in common, each goes about implementing gaming in its own unique way. Try them all and use the one that fits your gaming style best.
Resources for Article: Further resources on this subject: Testing Your Speed [article] Making the Unit Very Mobile – Controlling the Movement of a Robot with Legs [article] Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [article]
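For readers who prefer the command line, here is one common way to write the image mentioned in the article on Linux or macOS. This is only a sketch: the file names are placeholders for whatever you download from piplay.org, and the target device name must be verified on your own machine (on Windows, a tool such as Win32DiskImager is the usual route).

# Example only: the device name (/dev/sdX) varies per system.
# Double-check it with lsblk (Linux) or diskutil list (macOS) before writing,
# because dd will overwrite the target device without asking.
unzip piplay-latest.zip
sudo dd if=piplay.img of=/dev/sdX bs=4M
sync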

article-image-animating-game-character
Packt
18 Feb 2015
8 min read
Save for later

Animating a Game Character

Packt
18 Feb 2015
8 min read
In this article by Claudio Scolastici, author of the book Unity 2D Game Development Cookbook, we will cover the following recipes:

Creating an animation tree
Dealing with transitions

(For more resources related to this topic, see here.)

Now that we have imported the necessary graphic assets for a prototype, we can approach its actual building in Unity, starting by making an animation set for our character. Unity implements an easy-to-approach, though quite powerful, animation system called Mecanim. Mecanim is a proprietary tool of Unity in which the animation clips belonging to a character are represented as boxes connected by directional arrows. Boxes represent states, which you can simply think of as idle, walk, run...you get the idea. Arrows, on the other hand, represent the transitions between the states, which are responsible for actually blending between one animation clip and the next. Thanks to transitions, we can make characters that pass smoothly, for example, from a walking animation into a running one.

The control of transitions is achieved through parameters: variables belonging to different types that are stored in the character animator and are used to define and check the conditions that trigger an animation clip. The types available are common in programming and scripting languages: int, float, and bool. A distinctive type implemented in Mecanim is the trigger, which is useful when you want a transition to be triggered as an all-or-nothing event. By the way, an animator is a built-in component of Unity, strictly connected with the Mecanim system, which is represented as a panel in the Unity interface. Inside this panel, the so-called animation tree of a character is actually built up and the control parameters for the transitions are set and linked to the clips. Time for an image to help you better understand what we are talking about! The following picture shows an example of an animator of a standard game character:

As you can see, there are four states connected by transitions that configure the logic of the flow between one state and another. Inside these arrows, the parameters and their reference values to actually trigger the animations are stored. With Mecanim, it's quite easy to build the animation tree of a character and create the logic that determines the conditions for the animations to be played. One example is to use a float variable to blend between a walking and a running cycle, having the speed of the character as the control parameter. Using a trigger or a boolean variable to add a jumping animation to the character is another fairly common example. These are the subjects of our following two recipes, starting with trigger-based blending.

Creating the animation tree

In this recipe, we show you how to add animation clips to the animator component of a game object (our game character). This being done, we will be able to set the transitions between the clips and create the logic for the animations to be correctly played.

Getting ready

First of all, we need a set of animation clips, imported in Unity and configured in Inspector. Before we proceed, be sure you have these four animation clips imported into your Unity project as FBX files: Char@Idle, Char@Run, Char@Jump, and Char@Walk.

How to do it...

The first operation is to create a folder to store the Animator Controller. From the project panel, select the Assets folder and create a new folder for the Animator Controller. Name this folder Animators.
In the Animators folder, create a new Animator Controller by navigating to Create | Animator Controller, as shown in the following screenshot:

Name the asset Character_Animator, or any other name you like. Double-click on Character_Animator to open the Animator panel in Unity. Refer to the following screenshot; you should have an empty grid with a single magenta box called Any State:

Access the Models/Animations folder and select Char@Idle. Expand its hierarchy to access the actual animation clip named Idle; animation clips are represented by small play icons. Refer to the following screenshot for more clarity:

Now drag the clip into the Animator window. The clip should turn into a box inside the panel (colored in orange to represent that). Being the first clip imported into the Animator window, it is assumed to be the default animation for the character. That's exactly what we want! Repeat this operation with the clip named Jump, taken from the Char@Jump FBX file. The following screenshot shows what should appear in the Animator window:

How it works...

By dragging animation clips from the project panel into the Animator editor, Mecanim creates a logic state for each of them. As states, the clips are available to connect through transitions, and the animation tree of the character can come to life. With the Idle and Jump animations added to the Animator window, we can define the logic to control the conditions to switch between them. In the following recipe, we create the transition to blend between these two animation clips.

Dealing with transitions

In this recipe, we create and set up the transition for the character to switch between the Idle and Jump animation clips. For this task, we also need a parameter, which we will call bJump, to trigger the jump animation through code.

Getting ready

We will build on the previous recipe. Have the Animator window open, and be ready to follow our instructions.

How to do it...

As you move to the Animator panel in Unity, you should see an orange box representing the Idle animation from our previous recipe. If it is not set as the default state, right-click on it, and from the menu, select Set As Default. You can refer to the following screenshot:

Right-click on the Idle clip and select Make Transition from the menu, as shown in the following screenshot:

Drag the arrow that appears onto the Jump clip and click to create the transition. It should appear in the Inspector window, to the right of the Animator window. Check the following screenshot to see whether you did it right:

Now that we have got the transition, we need a parameter to switch between Idle and Jump. We use a boolean type for this, so we first need to create it. In the bottom-left corner of the Animator window, click on the small +, and from the menu that appears, select Bool, as shown in the following screenshot:

Name the newly created parameter bJump (the "b" stands for the boolean type; it's a good habit to create meaningful variable names). Click on the white arrow representing the transition to access its properties in Inspector. There, a visual representation of the transition between the two clips is available. By checking the Conditions section in Inspector, you can see that the transition is right now controlled by Exit Time, meaning that the Jump clip will be played only after the Idle clip has finished playing. The 0.97 value tells us that the transition is actually blending between the two clips for the last 3 percent of the idle animation.
For your reference, you can adjust this value if you want to blend it a bit more or a bit less. Please refer to the following screenshot:

As we want our bJump parameter to control the transition, we need to replace Exit Time with the bJump parameter. We do that by clicking on the drop-down menu on Exit Time and selecting bJump from the menu, as shown in the following screenshot:

Note that it is possible to add or remove conditions by acting on the small + and - buttons in the interface if you need extra conditions to control one single transition. For now, we just want to be sure that the Atomic option is not flagged in the Inspector panel. The Atomic flag interrupts an animation, even if it has not finished playing yet. We don't want that to happen; when the character jumps, the animation must get to its end before playing any other clip. The following screenshot highlights these options we just mentioned:

How it works...

We made our first transition with Mecanim and used a boolean variable called bJump to control it. It is now possible to link bJump to an event, for example, pressing the spacebar to trigger the Jump animation clip; a short script sketch for this is included after the resources list below.

Summary

There was a time when building games was a cumbersome and almost exclusive activity, as you needed to program your own game engine, or pay a good amount of money to license one. Thanks to Unity, creating video games today is still a cumbersome activity, though less exclusive and expensive! With this article, we aim to provide you with a detailed guide to approach the development of an actual 2D game with Unity. As it is a complex process that requires several operations to be performed, we will do our best to support you at every step by providing all the relevant information to help you successfully make games with Unity.

Resources for Article:

Further resources on this subject: 2D Twin-stick Shooter [article] Components in Unity [article] Introducing the Building Blocks for Unity Scripts [article]
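As a complement to the preceding recipes, here is a minimal sketch of how the bJump parameter could be set from a script when the spacebar is pressed. The class name and the idea of resetting the flag afterwards are illustrative assumptions, not part of the original recipe.

using UnityEngine;

// Illustrative example: attach this to the character GameObject that carries
// the Animator with the bJump parameter created in the recipe above.
public class JumpInput : MonoBehaviour
{
    private Animator animator;

    void Start()
    {
        // Cache the Animator component on the same GameObject
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        // Raise the bJump flag when the spacebar is pressed,
        // which lets the Idle -> Jump transition fire
        if (Input.GetKeyDown(KeyCode.Space))
        {
            animator.SetBool("bJump", true);
        }
        // Note: bJump should be reset to false once the jump has played,
        // for example from an animation event or when the character lands.
    }
}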

article-image-unit-testing-0
Packt
18 Feb 2015
18 min read
Save for later

Unit Testing

Packt
18 Feb 2015
18 min read
In this article by Mikael Lundin, author of the book Testing with F#, we will see how unit testing is the art of designing our program in such a way that we can easily test each function as an isolated unit and thus verify its correctness. Unit testing is not only a tool for the verification of functionality, but mostly a tool for designing that functionality in a testable way. What you gain is the means of finding problems early, facilitating change, documentation, and design. In this article, we will dive into how to write good unit tests using F#:

Testing in isolation
Finding the abstraction level

(For more resources related to this topic, see here.)

FsUnit

The current state of unit testing in F# is good. You can get all the major test frameworks running with little effort, but there is still something that feels a bit off with the way tests and asserts are expressed:

open NUnit.Framework
Assert.That(result, Is.EqualTo(42))

Using FsUnit, you can achieve much higher expressiveness in writing unit tests by simply reversing the way the assert is written:

open FsUnit
result |> should equal 42

The FsUnit framework is not a test runner in itself, but uses an underlying test framework to execute. The underlying framework can be MSTest, NUnit, or xUnit. FsUnit can best be explained as providing a different structure and syntax for writing tests. While this is a denser syntax, the need for structure still exists, and AAA is needed more than ever. Consider the following test example:

[<Measure>] type EUR
[<Measure>] type SEK

type Country =
| Sweden
| Germany
| France

let calculateVat country (amount : float<'u>) =
   match country with
   | Sweden -> amount * 0.25
   | Germany -> amount * 0.19
   | France -> amount * 0.2

open NUnit.Framework
open FsUnit

[<Test>]
let ``Sweden should have 25% VAT`` () =
   let amount = 200.<SEK>
   calculateVat Sweden amount |> should equal 50<SEK>

This code will calculate the VAT in Sweden in Swedish currency. What is interesting is that when we break down the test code, we see that it actually follows the AAA structure, even if it doesn't explicitly tell us so:

[<Test>]
let ``Germany should have 19% VAT`` () =
   // arrange
   let amount = 200.<EUR>
   // act
   calculateVat Germany amount
   //assert
   |> should equal 38<EUR>

The only thing I did here was add the annotations for AAA. It gives us the perspective of what we're doing, the frames we're working inside, and the rules for writing good unit tests.

Assertions

We have already seen the equal assertion, which verifies that the test result is equal to the expected value:

result |> should equal 42

You can negate this assertion by using the not' statement, as follows:

result |> should not' (equal 43)

With strings, it's quite common to assert that a string starts or ends with some value, as follows:

"$12" |> should startWith "$"
"$12" |> should endWith "12"

And, you can also negate that, as follows:

"$12" |> should not' (startWith "€")
"$12" |> should not' (endWith "14")

You can verify that a result is within a boundary.
This will, in turn, verify that the result is somewhere between the values of 35-45: result |> should (equalWithin 5) 40 And, you can also negate that, as follows: result |> should not' ((equalWithin 1) 40) With the collection types list, array, and sequence, you can check that it contains a specific value: [1..10] |> should contain 5 And, you can also negate it to verify that a value is missing, as follows: [1; 1; 2; 3; 5; 8; 13] |> should not' (contain 7) It is common to test the boundaries of a function and then its exception handling. This means you need to be able to assert exceptions, as follows: let getPersonById id = failwith "id cannot be less than 0" (fun () -> getPersonById -1 |> ignore) |> should throw typeof<System.Exception> There is a be function that can be used in a lot of interesting ways. Even in situations where the equal assertion can replace some of these be structures, we can opt for a more semantic way of expressing our assertions, providing better error messages. Let us see examples of this, as follows: // true or false 1 = 1 |> should be True 1 = 2 |> should be False        // strings as result "" |> should be EmptyString null |> should be NullOrEmptyString   // null is nasty in functional programming [] |> should not' (be Null)   // same reference let person1 = new System.Object() let person2 = person1 person1 |> should be (sameAs person2)   // not same reference, because copy by value let a = System.DateTime.Now let b = a a |> should not' (be (sameAs b))   // greater and lesser result |> should be (greaterThan 0) result |> should not' (be lessThan 0)   // of type result |> should be ofExactType<int>   // list assertions [] |> should be Empty [1; 2; 3] |> should not' (be Empty) With this, you should be able to assert most of the things you're looking for. But there still might be a few edge cases out there that default FsUnit asserts won't catch. Custom assertions FsUnit is extensible, which makes it easy to add your own assertions on top of the chosen test runner. This has the possibility of making your tests extremely readable. The first example will be a custom assert which verifies that a given string matches a regular expression. This will be implemented using NUnit as a framework, as follows: open FsUnit open NUnit.Framework.Constraints open System.Text.RegularExpressions   // NUnit: implement a new assert type MatchConstraint(n) =    inherit Constraint() with       override this.WriteDescriptionTo(writer : MessageWriter) : unit =            writer.WritePredicate("matches")            writer.WriteExpectedValue(sprintf "%s" n)        override this.Matches(actual : obj) =            match actual with            | :? string as input -> Regex.IsMatch(input, n)            | _ -> failwith "input must be of string type"            let match' n = MatchConstraint(n)   open NUnit.Framework   [<Test>] let ``NUnit custom assert`` () =    "2014-10-11" |> should match' "d{4}-d{2}-d{2}"    "11/10 2014" |> should not' (match' "d{4}-d{2}-d{2}") In order to create your own assert, you need to create a type that implements the NUnit.Framework.Constraints.IConstraint interface, and this is easily done by inheriting from the Constraint base class. You need to override both the WriteDescriptionTo() and Matches() method, where the first one controls the message that will be output from the test, and the second is the actual test. In this implementation, I verify that input is a string; or the test will fail. Then, I use the Regex.IsMatch() static function to verify the match. 
Next, we create an alias for the MatchConstraint() function, match', with the extra apostrophe to avoid conflict with the internal F# match expression, and then we can use it as any other assert function in FsUnit. Doing the same for xUnit requires a completely different implementation. First, we need to add a reference to NHamcrest API. We'll find it by searching for the package in the NuGet Package Manager: Instead, we make an implementation that uses the NHamcrest API, which is a .NET port of the Java Hamcrest library for building matchers for test expressions, shown as follows: open System.Text.RegularExpressions open NHamcrest open NHamcrest.Core   // test assertion for regular expression matching let match' pattern =    CustomMatcher<obj>(sprintf "Matches %s" pattern, fun c ->        match c with        | :? string as input -> Regex.IsMatch(input, pattern)        | _ -> false)   open Xunit open FsUnit.Xunit   [<Fact>] let ``Xunit custom assert`` () =    "2014-10-11" |> should match' "d{4}-d{2}-d{2}"    "11/10 2014" |> should not' (match' "d{4}-d{2}-d{2}") The functionality in this implementation is the same as the NUnit version, but the implementation here is much easier. We create a function that receives an argument and return a CustomMatcher<obj> object. This will only take the output message from the test and the function to test the match. Writing an assertion for FsUnit driven by MSTest works exactly the same way as it would in Xunit, by NHamcrest creating a CustomMatcher<obj> object. Unquote There is another F# assertion library that is completely different from FsUnit but with different design philosophies accomplishes the same thing, by making F# unit tests more functional. Just like FsUnit, this library provides the means of writing assertions, but relies on NUnit as a testing framework. Instead of working with a DSL like FsUnit or API such as with the NUnit framework, the Unquote library assertions are based on F# code quotations. Code quotations is a quite unknown feature of F# where you can turn any code into an abstract syntax tree. Namely, when the F# compiler finds a code quotation in your source file, it will not compile it, but rather expand it into a syntax tree that represents an F# expression. The following is an example of a code quotation: <@ 1 + 1 @> If we execute this in F# Interactive, we'll get the following output: val it : Quotations.Expr = Call (None, op_Addition, [Value (1), Value (1)]) This is truly code as data, and we can use it to write code that operates on code as if it was data, which in this case, it is. It brings us closer to what a compiler does, and gives us lots of power in the metadata programming space. We can use this to write assertions with Unquote. Start by including the Unquote NuGet package in your test project, as shown in the following screenshot: And now, we can implement our first test using Unquote, as follows: open NUnit.Framework open Swensen.Unquote   [<Test>] let ``Fibonacci sequence should start with 1, 1, 2, 3, 5`` () =     test <@ fibonacci |> Seq.take 5 |> List.ofSeq = [1; 1; 2; 3; 5] @> This works by Unquote first finding the equals operation, and then reducing each side of the equals sign until they are equal or no longer able to reduce. Writing a test that fails and watching the output more easily explains this. 
The following test should fail because 9 is not a prime number: [<Test>] let ``prime numbers under 10 are 2, 3, 5, 7, 9`` () =    test <@ primes 10 = [2; 3; 5; 7; 9] @> // fail The test will fail with the following message: Test Name: prime numbers under 10 are 2, 3, 5, 7, 9 Test FullName: chapter04.prime numbers under 10 are 2, 3, 5, 7, 9 Test Outcome: Failed Test Duration: 0:00:00.077   Result Message: primes 10 = [2; 3; 5; 7; 9] [2; 3; 5; 7] = [2; 3; 5; 7; 9] false   Result StackTrace: at Microsoft.FSharp.Core.Operators.Raise[T](Exception exn) at chapter04.prime numbers under 10 are 2, 3, 5, 7, 9() In the resulting message, we can see both sides of the equals sign reduced until only false remains. It's a very elegant way of breaking down a complex assertion. Assertions The assertions in Unquote are not as specific or extensive as the ones in FsUnit. The idea of having lots of specific assertions for different situations is to get very descriptive error messages when the tests fail. Since Unquote actually outputs the whole reduction of the statements when the test fails, the need for explicit assertions is not that high. You'll get a descript error message anyway. The absolute most common is to check for equality, as shown before. You can also verify that two expressions are not equal: test <@ 1 + 2 = 4 - 1 @> test <@ 1 + 2 <> 4 @> We can check whether a value is greater or smaller than the expected value: test <@ 42 < 1337 @> test <@ 1337 > 42 @> You can check for a specific exception, or just any exception: raises<System.NullReferenceException> <@ (null : string).Length @> raises<exn> <@ System.String.Format(null, null) @> Here, the Unquote syntax excels compared to FsUnit, which uses a unit lambda expression to do the same thing in a quirky way. The Unquote library also has its reduce functionality in the public API, making it possible for you to reduce and analyze an expression. Using the reduceFully syntax, we can get the reduction in a list, as shown in the following: > <@ (1+2)/3 @> |> reduceFully |> List.map decompile;; val it : string list = ["(1 + 2) / 3"; "3 / 3"; "1"] If we just want the output to console output, we can run the unquote command directly: > unquote <@ [for i in 1..5 -> i * i] = ([1..5] |> List.map (fun i -> i * i)) @>;; Seq.toList (seq (Seq.delay (fun () -> Seq.map (fun i -> i * i) {1..5}))) = ([1..5] |> List.map (fun i -> i * i)) Seq.toList (seq seq [1; 4; 9; 16; ...]) = ([1; 2; 3; 4; 5] |> List.map (fun i -> i * i)) Seq.toList seq [1; 4; 9; 16; ...] = [1; 4; 9; 16; 25] [1; 4; 9; 16; 25] = [1; 4; 9; 16; 25] true It is important to know what tools are out there, and Unquote is one of those tools that is fantastic to know about when you run into a testing problem in which you want to reduce both sides of an equals sign. Most often, this belongs to difference computations or algorithms like price calculation. We have also seen that Unquote provides a great way of expressing tests for exceptions that is unmatched by FsUnit. Testing in isolation One of the most important aspects of unit testing is to test in isolation. This does not only mean to fake any external dependency, but also that the test code itself should not be tied up to some other test code. If you're not testing in isolation, there is a potential risk that your test fails. This is not because of the system under test, but the state that has lingered from a previous test run, or external dependencies. Writing pure functions without any state is one way of making sure your test runs in isolation. 
Another way is by making sure that the test creates all the needed state itself. Sharing state, such as connections, between tests is a bad idea. Using TestFixtureSetUp/TearDown attributes to set up state for a whole set of tests is a bad idea. Keeping heavyweight resources alive between tests because they're expensive to set up is a bad idea. The most common shared states are the following:

The ASP.NET Model View Controller (MVC) session state
Dependency injection setup
Database connection, even though a test that uses one is no longer strictly a unit test

Here's how one should think about unit testing in isolation: each test is responsible for setting up the SUT and its database/web service stubs in order to perform the test and assert on the result. It is equally important that the test cleans up after itself, which in the case of unit tests can most often be handed over to the garbage collector, and doesn't need to be explicitly disposed.

It is common to think that one should only isolate a test fixture from other test fixtures, but isolating at the fixture level is the wrong goal. Instead, one should strive to have each test stand on its own to as large an extent as possible, and not depend on outside setups. This does not mean you will end up with unnecessarily long unit tests, provided you write the SUT and its tests well within that context. The problem we often run into is that the SUT itself maintains some kind of state that is present between tests. The state can simply be a value that is set in the application domain and persists between different test runs, as follows:

let getCustomerFullNameByID id =
    if cache.ContainsKey(id) then
        (cache.[id] :?> Customer).FullName
    else
        // get from database
        // NOTE: stub code
        let customer = db.getCustomerByID id
        cache.[id] <- customer
        customer.FullName

The problem we see here is that the cache will persist from one test to another, so when the second test runs, it needs to make sure that it's running with a clean cache, or the result might not be as expected. One way to test this properly would be to separate the core logic from the cache and test each independently. Another would be to treat it as a black box and ignore the cache completely: if the cache makes the test fail, then the functionality fails as a whole. Which approach to choose depends on whether we see the cache as an implementation detail of the function or as functionality in its own right. Testing implementation details, or private functions, is dirty because our tests might break even if the functionality hasn't changed. And yet, there might be benefits to taking the implementation detail into account. In this case, we could use the cache functionality to easily stub out the database without the need for any mocking framework.
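As a rough sketch of the first option, we can pass the data source into the core logic as a function and move the caching into a separate decorator that can be tested on its own. The Customer type, the withCache name, and the stub below are invented for illustration; they are not taken from the sample code:

type Customer = { ID : int; FullName : string }

// Core logic: the data source is passed in, so a test can supply a stub
let getCustomerFullNameByID (getCustomer : int -> Customer) id =
    (getCustomer id).FullName

// Caching decorator: wraps any data source and can be tested independently
let withCache (cache : System.Collections.Generic.Dictionary<int, Customer>) getCustomer id =
    match cache.TryGetValue id with
    | true, customer -> customer
    | _ ->
        let customer = getCustomer id
        cache.[id] <- customer
        customer

// In a test: the stub replaces the database and each test creates its own cache
let stubDatabase id = { ID = id; FullName = "Jane Doe" }
let cache = new System.Collections.Generic.Dictionary<int, Customer>()
let fullName = getCustomerFullNameByID (withCache cache stubDatabase) 1

With this shape, the caching behavior and the name lookup can each be asserted in their own tests, and no test depends on state left behind by another.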
As we have a public data access layer, we could easily implement an in-memory representation that can be used not only by our tests, but in development of the product, as shown in the following image: When you're running the application in development mode, you configure it toward the in-memory version of the dependency. This provides you with the following benefits: You'll get a faster development environment Your tests will become simpler You have complete control of your dependency As your development environment is doing everything in memory, it becomes blazing fast. And as you develop your application, you will appreciate adjusting that public API and getting to understand completely what you expect from that dependency. It will lead to a cleaner API, where very few side effects are allowed to seep through. Your tests will become much simpler, as instead of mocking away the dependency, you can call the in-memory dependency and set whatever state you want. Here's an example of what a public data access API might look like: type IDataAccess =    abstract member GetCustomerByID : int -> Customer    abstract member FindCustomerByName : string -> Customer option    abstract member UpdateCustomerName : int -> string -> Customer    abstract member DeleteCustomerByID : int -> bool This is surely a very simple API, but it will demonstrate the point. There is a database with a customer inside it, and we want to do some operations on that. In this case, our in-memory implementation would look like this: type InMemoryDataAccess() =    let data = new System.Collections.Generic.Dictionary<int, Customer>()      // expose the add method    member this.Add customer = data.Add(customer.ID, customer)      interface IDataAccess with       // throw exception if not found        member this.GetCustomerByID id =            data.[id]        member this.FindCustomerByName fullName =            data.Values |> Seq.tryFind (fun customer -> customer.FullName = fullName)          member this.UpdateCustomerName id fullName =            data.[id] <- { data.[id] with FullName = fullName }            data.[id]          member this.DeleteCustomerByID id =            data.Remove(id) This is a simple implementation that provides the same functionality as the database would, but in memory. This makes it possible to run the tests completely in isolation without worrying about mocking away the dependencies. The dependencies are already substituted with in-memory replacements, and as seen with this example, the in-memory replacement doesn't have to be very extensive. The only extra function except from the interface implementation is the Add() function which lets us set the state prior to the test, as this is something the interface itself doesn't provide for us. 
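As a minimal sketch of what a test against this in-memory implementation could look like (assuming Customer is a record with just the ID and FullName fields used above, and reusing NUnit and Unquote from earlier in this article), we set the state through Add() and then exercise the code through the IDataAccess interface:

open NUnit.Framework
open Swensen.Unquote

[<Test>]
let ``finding an existing customer by name returns it`` () =
    // arrange: put the state directly into the in-memory dependency
    let inMemory = new InMemoryDataAccess()
    inMemory.Add { ID = 1; FullName = "Jane Doe" }
    let dataAccess = inMemory :> IDataAccess
    // act and assert through the same interface the production code uses
    test <@ dataAccess.FindCustomerByName "Jane Doe" = Some { ID = 1; FullName = "Jane Doe" } @>

[<Test>]
let ``finding a missing customer returns None`` () =
    let dataAccess = new InMemoryDataAccess() :> IDataAccess
    test <@ dataAccess.FindCustomerByName "Nobody" = None @>

No mocking framework is involved, and each test creates its own InMemoryDataAccess instance, so no state leaks between tests.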
Now, in order to sew it together with the real implementation, we need to create a configuration in order to select what version to use, as shown in the following code: open System.Configuration open System.Collections.Specialized   // TryGetValue extension method to NameValueCollection type NameValueCollection with    member this.TryGetValue (key : string) =        if this.Get(key) = null then            None        else            Some (this.Get key)   let dataAccess : IDataAccess =    match ConfigurationManager.AppSettings.TryGetValue("DataAccess") with    | Some "InMemory" -> new InMemoryDataAccess() :> IDataAccess    | Some _ | None -> new DefaultDataAccess() :> IDataAccess        // usage let fullName = (dataAccess.GetCustomerByID 1).FullName Again, with only a few lines of code, we manage to select the appropriate IDataAccess instance and execute against it without using dependency injection or taking a penalty in code readability, as we would in C#. The code is straightforward and easy to read, and we can execute any tests we want without touching the external dependency, or in this case, the database. Finding the abstraction level In order to start unit testing, you have to start writing tests; this is what they'll tell you. If you want to get good at it, just start writing tests, any and a lot of them. The rest will solve itself. I've watched experienced developers sit around staring dumbfounded at an empty screen because they couldn't get into their mind how to get started, what to test. The question is not unfounded. In fact, it is still debated in the Test Driven Development (TDD) community what should be tested. The ground rule is that the test should bring at least as much value as the cost of writing it, but that is a bad rule for someone new to testing, as all tests are expensive for them to write. Summary In this article, we've learned how to write unit tests by using the appropriate tools to our disposal: NUnit, FsUnit, and Unquote. We have also learned about different techniques for handling external dependencies, using interfaces and functional signatures, and executing dependency injection into constructors, properties, and methods. Resources for Article: Further resources on this subject: Learning Option Pricing [article] Pentesting Using Python [article] Penetration Testing [article]
Testing a UI Using WebDriverJS

Packt
17 Feb 2015
30 min read
In this article, by the author, Enrique Amodeo, of the book, Learning Behavior-driven Development with JavaScript, we will look into an advanced concept: how to test a user interface. For this purpose, you will learn the following topics: Using WebDriverJS to manipulate a browser and inspect the resulting HTML generated by our UI Organizing our UI codebase to make it easily testable The right abstraction level for our UI tests (For more resources related to this topic, see here.) Our strategy for UI testing There are two traditional strategies towards approaching the problem of UI testing: record-and-replay tools and end-to-end testing. The first approach, record-and-replay, leverages the use of tools capable of recording user activity in the UI and saves this into a script file. This script file can be later executed to perform exactly the same UI manipulation as the user performed and to check whether the results are exactly the same. This approach is not very compatible with BDD because of the following reasons: We cannot test-first our UI. To be able to use the UI and record the user activity, we first need to have most of the code of our application in place. This is not a problem in the waterfall approach, where QA and testing are performed after the codification phase is finished. However, in BDD, we aim to document the product features as automated tests, so we should write the tests before or during the coding. The resulting test scripts are low-level and totally disconnected from the problem domain. There is no way to use them as a live documentation for the requirements of the system. The resulting test suite is brittle and it will stop working whenever we make slight changes, even cosmetic ones, to the UI. The problem is that the tools record the low-level interaction with the system that depends on technical details of the HTML. The other classic approach is end-to-end testing, where we do not only test the UI layer, but also most of the system or even the whole of it. To perform the setup of the tests, the most common approach is to substitute the third-party systems with test doubles. Normally, the database is under the control of the development team, so some practitioners use a regular database for the setup. However, we could use an in-memory database or even mock the DAOs. In any case, this approach prompts us to create an integrated test suite where we are not only testing the correctness of the UI, but the business logic as well. In the context of this discussion, an integrated test is a test that checks several layers of abstraction, or subsystems, in combination. Do not confuse it with the act of testing several classes or functions together. This approach is not inherently against BDD; for example, we could use Cucumber.js to capture the features of the system and implement Gherkin steps using WebDriver to drive the UI and make assertions. In fact, for most people, when you say BDD they always interpret this term to refer to this kind of test. We will end up writing a lot of test cases, because we need to combine the scenarios from the business logic domain with the ones from the UI domain. Furthermore, in which language should we formulate the tests? If we use the UI language, maybe it will be too low-level to easily describe business concepts. If we use the business domain language, maybe we will not be able to test the important details of the UI because they are too low-level. 
Alternatively, we can even end up with tests that mix UI language with business terminology, so they will neither be focused nor very clear to anyone. Choosing the right tests for the UI If we want to test whether the UI works, why should we test the business rules? After all, this is already tested in the BDD test suite of the business logic layer. To decide which tests to write, we should first determine the responsibilities of the UI layer, which are as follows: Presenting the information provided by the business layer to the user in a nice way. Transforming user interaction into requests for the business layer. Controlling the changes in the appearance of the UI components, which includes things such as enabling/disabling controls, highlighting entry fields, showing/hiding UI elements, and so on. Orchestration between the UI components. Transferring and adapting information between the UI components and navigation between pages fall under this category. We do not need to write tests about business rules, and we should not assume much about the business layer itself, apart from a loose contract. How we should word our tests? We should use a UI-related language when we talk about what the user sees and does. Words such as fields, buttons, forms, links, click, hover, highlight, enable/disable, or show and hide are relevant in this context. However, we should not go too far; otherwise, our tests will be too brittle. Saying, for example, that the name field should have a pink border is too low-level. The moment that the designer decides to use red instead of pink, or changes his mind and decides to change the background color instead of the border, our test will break. We should aim for tests that express the real intention of the user interface; for example, the name field should be highlighted as incorrect. The testing architecture At this point, we could write tests relevant for our UI using the following testing architecture: A simple testing architecture for our UI We can use WebDriver to issue user gestures to interact with the browser. These user gestures are transformed by the browser in to DOM events that are the inputs of our UI logic and will trigger operations on it. We can use WebDriver again to read the resulting HTML in the assertions. We can simply use a test double to impersonate our server, so we can set up our tests easily. This architecture is very simple and sounds like a good plan, but it is not! There are three main problems here: UI testing is very slow. Take into account that the boot time and shutdown phase can take 3 seconds in a normal laptop. Each UI interaction using WebDriver can take between 50 and 100 milliseconds, and the latency with the fake server can be an extra 10 milliseconds. This gives us only around 10 tests per second, plus an extra 3 seconds. UI tests are complex and difficult to diagnose when they fail. What is failing? Our selectors used to tell WebDriver how to find the relevant elements. Some race condition we were not aware of? A cross-browser issue? Also note that our test is now distributed between two different processes, a fact that always makes debugging more difficult. UI tests are inherently brittle. We can try to make them less brittle with best practices, but even then a change in the structure of the HTML code will sometimes break our tests. This is a bad thing because the UI often changes more frequently than the business layer. 
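To make the brittleness problem concrete, here is a small, hypothetical sketch; the selectors and markup hooks are invented for illustration, and the WebDriverJS locator API used here is covered later in this article. The first locator is coupled to the exact structure of the HTML, so any cosmetic reshuffling of the markup breaks the test, while the second one relies on a dedicated, intention-revealing hook:

// Brittle: breaks if a wrapper <div> is added or the list is reordered
driver.findElement({ css: 'div.main > div:nth-child(2) > ul > li:first-child > span.price' });

// More robust: relies on an attribute that exists only to express intent
driver.findElement({ css: '[data-test="first-item-price"]' });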
As UI testing is very risky and expensive, we should try to write as few tests that interact with the UI as possible. We can achieve this, without losing testing power, with the following testing architecture:

A smarter testing architecture

We have now split our UI layer into two components: the view and the UI logic. This design aligns with the family of MV* design patterns. In the context of this article, the view corresponds to a passive view, and the UI logic corresponds to the controller or the presenter, in combination with the model. A passive view is usually very hard to test, so in this article we will focus mostly on how to do that. You will often be able to easily separate the passive view from the UI logic, especially if you are using an MV* pattern, such as MVC, MVP, or MVVM.

Most of our tests will be for the UI logic. This is the component that implements the client-side validation, orchestration of UI components, navigation, and so on. It is the UI logic component that holds all the rules about how the user can interact with the UI, and hence it needs to maintain some kind of internal state. The UI logic component can be tested completely in memory using standard techniques. We can simply mock the XMLHttpRequest object, or the corresponding object in the framework we are using, and test everything in memory using a single Node.js process. No interaction with the browser and the HTML is needed, so these tests will be blazingly fast and robust.

Then we need to test the view. This is a very thin component with only two responsibilities:

Manipulating and updating the HTML to present the user with the information whenever it is instructed to do so by the UI logic component
Listening for HTML events and transforming them into suitable requests for the UI logic component

The view should not have more responsibilities, and it is a stateless component. It simply does not need to store internal state, because it only transforms and transmits information between the HTML and the UI logic. Since it is the only component that interacts with the HTML, it is the only one that needs to be tested using WebDriver. The point of all of this is that the view can be tested with only a handful of conceptually simple tests. Hence, we minimize the number and complexity of the tests that need to interact with the UI.

WebDriverJS

Testing the passive view layer is a technical challenge. We not only need to find a way for our test to inject native events into the browser to simulate user interaction, but we also need to be able to inspect the DOM elements and inject and execute scripts. This was very challenging to do approximately 5 years ago. In fact, it was considered complex and expensive, and some practitioners recommended not testing the passive view. After all, this layer is very thin and mostly contains the bindings of the UI to the HTML DOM, so the risk of error is not supposed to be high, especially if we use modern cross-browser frameworks to implement this layer. Nonetheless, the technology has evolved, and nowadays we can do this kind of testing without much fuss if we use the right tools. One of these tools is Selenium 2.0 (also known as WebDriver) and its library for JavaScript, which is WebDriverJS (https://code.google.com/p/selenium/wiki/WebDriverJs). In this book, we will use WebDriverJS, but there are other JavaScript bindings for Selenium 2.0, such as WebDriverIO (http://webdriver.io/). You can use the one you like most or even try both.
The point is that the techniques I will show you here can be applied with any client of WebDriver or even with other tools that are not WebDriver. Selenium 2.0 is a tool that allows us to make direct calls to a browser automation API. This way, we can simulate native events, we can access the DOM, and we can control the browser. Each browser provides a different API and has its own quirks, but Selenium 2.0 will offer us a unified API called the WebDriver API. This allows us to interact with different browsers without changing the code of our tests. As we are accessing the browser directly, we do not need a special server, unless we want to control browsers that are on a different machine. Actually, this is only true, due some technical limitations, if we want to test against a Google Chrome or a Firefox browser using WebDriverJS. So, basically, the testing architecture for our passive view looks like this: Testing with WebDriverJS We can see that we use WebDriverJS for the following: Sending native events to manipulate the UI, as if we were the user, during the action phase of our tests Inspecting the HTML during the assert phase of our test Sending small scripts to set up the test doubles, check them, and invoke the update method of our passive view Apart from this, we need some extra infrastructure, such as a web server that serves our test HTML page and the components we want to test. As is evident from the diagram, the commands of WebDriverJS require some network traffic to able to send the appropriate request to the browser automation API, wait for the browser to execute, and get the result back through the network. This forces the API of WebDriverJS to be asynchronous in order to not block unnecessarily. That is why WebDriverJS has an API designed around promises. Most of the methods will return a promise or an object whose methods return promises. This plays perfectly well with Mocha and Chai.  There is a W3C specification for the WebDriver API. If you want to have a look, just visit https://dvcs.w3.org/hg/webdriver/raw-file/default/webdriver-spec.html. The API of WebDriverJS is a bit complex, and you can find its official documentation at http://selenium.googlecode.com/git/docs/api/javascript/module_selenium-webdriver.html. However, to follow this article, you do not need to read it, since I will now show you the most important API that WebDriverJS offers us. Finding and interacting with elements It is very easy to find an HTML element using WebDriverJS; we just need to use either the findElement or the findElements methods. Both methods receive a locator object specifying which element or elements to find. The first method will return the first element it finds, or simply fail with an exception, if there are no elements matching the locator. The findElements method will return a promise for an array with all the matching elements. If there are no matching elements, the promised array will be empty and no error will be thrown. How do we specify which elements we want to find? To do so, we need to use a locator object as a parameter. For example, if we would like to find the element whose identifier is order_item1, then we could use the following code: var By = require('selenium-webdriver').By;   driver.findElement(By.id('order_item1')); We need to import the selenium-webdriver module and capture its locator factory object. By convention, we store this locator factory in a variable called By. Later, we will see how we can get a WebDriverJS instance. 
This code is very expressive, but a bit verbose. There is another version of this: driver.findElement({ id: 'order_item1' }); Here, the locator criteria is passed in the form of a plain JSON object. There is no need to use the By object or any factory. Which version is better? Neither. You just use the one you like most. In this article, the plain JSON locator will be used. The following are the criteria for finding elements: Using the tag name, for example, to locate all the <li> elements in the document: driver.findElements(By.tagName('li'));driver.findElements({ tagName: 'li' }); We can also locate using the name attribute. It can be handy to locate the input fields. The following code will locate the first element named password: driver.findElement(By.name('password')); driver.findElement({ name: 'password' }); Using the class name; for example, the following code will locate the first element that contains a class called item: driver.findElement(By.className('item')); driver.findElement({ className: 'item' }); We can use any CSS selector that our target browser understands. If the target browser does not understand the selector, it will throw an exception; for example, to find the second item of an order (assuming there is only one order on the page): driver.findElement(By.css('.order .item:nth-of-type(2)')); driver.findElement({ css: '.order .item:nth-of-type(2)' }); Using only the CSS selector you can locate any element, and it is the one I recommend. The other ones can be very handy in specific situations. There are more ways of locating elements, such as linkText, partialLinkText, or xpath, but I seldom use them. Locating elements by their text, such as in linkText or partialLinkText, is brittle because small changes in the wording of the text can break the tests. Also, locating by xpath is not as useful in HTML as using a CSS selector. Obviously, it can be used if the UI is defined as an XML document, but this is very rare nowadays. In both methods, findElement and findElements, the resulting HTML elements are wrapped as a WebElement object. This object allows us to send an event to that element or inspect its contents. Some of its methods that allow us to manipulate the DOM are as follows: clear(): This will do nothing unless WebElement represents an input control. In this case, it will clear its value and then trigger a change event. It returns a promise that will be fulfilled whenever the operation is done. sendKeys(text or key, …): This will do nothing unless WebElement is an input control. In this case, it will send the equivalents of keyboard events to the parameters we have passed. It can receive one or more parameters with a text or key object. If it receives a text, it will transform the text into a sequence of keyboard events. This way, it will simulate a user typing on a keyboard. This is more realistic than simply changing the value property of an input control, since the proper keyDown, keyPress, and keyUp events will be fired. A promise is returned that will be fulfilled when all the key events are issued. For example, to simulate that a user enters some search text in an input field and then presses Enter, we can use the following code: var Key = require('selenium-webdriver').Key;   var searchField = driver.findElement({name: 'searchTxt'}); searchField.sendKeys('BDD with JS', Key.ENTER);  The webdriver.Key object allows us to specify any key that does not represent a character, such as Enter, the up arrow, Command, Ctrl, Shift, and so on. 
We can also use its chord method to represent a combination of several keys pressed at the same time. For example, to simulate Alt + Command + J, use driver.sendKeys(Key.chord(Key.ALT, Key.COMMAND, 'J'));. click(): This will issue a click event just in the center of the element. The returned promise will be fulfilled when the event is fired.  Sometimes, the center of an element is nonclickable, and an exception is thrown! This can happen, for example, with table rows, since the center of a table row may just be the padding between cells! submit(): This will look for the form that contains this element and will issue a submit event. Apart from sending events to an element, we can inspect its contents with the following methods: getId(): This will return a promise with the internal identifier of this element used by WebDriver. Note that this is not the value of the DOM ID property! getText(): This will return a promise that will be fulfilled with the visible text inside this element. It will include the text in any child element and will trim the leading and trailing whitespaces. Note that, if this element is not displayed or is hidden, the resulting text will be an empty string! getInnerHtml() and getOuterHtml(): These will return a promise that will be fulfilled with a string that contains innerHTML or outerHTML of this element. isSelected(): This will return a promise with a Boolean that determines whether the element has either been selected or checked. This method is designed to be used with the <option> elements. isEnabled(): This will return a promise with a Boolean that determines whether the element is enabled or not. isDisplayed(): This will return a promise with a Boolean that determines whether the element is displayed or not. Here, "displayed" is taken in a broad sense; in general, it means that the user can see the element without resizing the browser. For example, whether the element is hidden, whether it has diplay: none, or whether it has no size, or is in an inaccessible part of the document, the returned promise will be fulfilled as false. getTagName(): This will return a promise with the tag name of the element. getSize(): This will return a promise with the size of the element. The size comes as a JSON object with width and height properties that indicate the height and width in pixels of the bounding box of the element. The bounding box includes padding, margin, and border. getLocation(): This will return a promise with the position of the element. The position comes as a JSON object with x and y properties that indicate the coordinates in pixels of the element relative to the page. getAttribute(name): This will return a promise with the value of the specified attribute. Note that WebDriver does not distinguish between attributes and properties! If there is neither an attribute nor a property with that name, the promise will be fulfilled as null. If the attribute is a "boolean" HTML attribute (such as checked or disabled), the promise will be evaluated as true only if the attribute is present. If there is both an attribute and a property with the same name, the attribute value will be used.  If you really need to be precise about getting an attribute or a property, it is much better to use an injected script to get it. getCssValue(cssPropertyName): This will return a promise with a string that represents the computed value of the specified CSS property. 
The computed value is the resulting value after the browser has applied all the CSS rules and the style and class attributes. Note that the specific representation of the value depends on the browser; for example, the color property can be returned as red, #ff0000, or rgb(255, 0, 0) depending on the browser. This is not cross-browser, so we should avoid this method in our tests. findElement(locator) and findElements(locator): These will return an element, or all the elements that are the descendants of this element, and match the locator. isElementPresent(locator): This will return a promise with a Boolean that indicates whether there is at least one descendant element that matches this locator. As you can see, the WebElement API is pretty simple and allows us to do most of our tests easily. However, what if we need to perform some complex interaction with the UI, such as drag-and-drop? Complex UI interaction WebDriverJS allows us to define a complex action gesture in an easy way using the DSL defined in the webdriver.ActionSequence object. This DSL allows us to define any sequence of browser events using the builder pattern. For example, to simulate a drag-and-drop gesture, proceed with the following code: var beverageElement = driver.findElement({ id: 'expresso' });var orderElement = driver.findElement({ id: 'order' });driver.actions()    .mouseMove(beverageElement)    .mouseDown()    .mouseMove(orderElement)    .mouseUp()    .perform(); We want to drag an espresso to our order, so we move the mouse to the center of the espresso and press the mouse. Then, we move the mouse, by dragging the element, over the order. Finally, we release the mouse button to drop the espresso. We can add as many actions we want, but the sequence of events will not be executed until we call the perform method. The perform method will return a promise that will be fulfilled when the full sequence is finished. The webdriver.ActionSequence object has the following methods: sendKeys(keys...): This sends a sequence of key events, exactly as we saw earlier, to the method with the same name in the case of WebElement. The difference is that the keys will be sent to the document instead of a specific element. keyUp(key) and keyDown(key): These send the keyUp and keyDown events. Note that these methods only admit the modifier keys: Alt, Ctrl, Shift, command, and meta. mouseMove(targetLocation, optionalOffset): This will move the mouse from the current location to the target location. The location can be defined either as a WebElement or as page-relative coordinates in pixels, using a JSON object with x and y properties. If we provide the target location as a WebElement, the mouse will be moved to the center of the element. In this case, we can override this behavior by supplying an extra optional parameter indicating an offset relative to the top-left corner of the element. This could be needed in the case that the center of the element cannot receive events. mouseDown(), click(), doubleClick(), and mouseUp(): These will issue the corresponding mouse events. All of these methods can receive zero, one, or two parameters. 
Let's see what they mean with the following examples: var Button = require('selenium-webdriver').Button;   // to emit the event in the center of the expresso element driver.actions().mouseDown(expresso).perform(); // to make a right click in the current position driver.actions().click(Button.RIGHT).perform(); // Middle click in the expresso element driver.actions().click(expresso, Button.MIDDLE).perform();  The webdriver.Button object defines the three possible buttons of a mouse: LEFT, RIGHT, and MIDDLE. However, note that mouseDown() and mouseUp() only support the LEFT button! dragAndDrop(element, location): This is a shortcut to performing a drag-and-drop of the specified element to the specified location. Again, the location can be WebElement of a page-relative coordinate. Injecting scripts We can use WebDriver to execute scripts in the browser and then wait for its results. There are two methods for this: executeScript and executeAsyncScript. Both methods receive a script and an optional list of parameters and send the script and the parameters to the browser to be executed. They return a promise that will be fulfilled with the result of the script; it will be rejected if the script failed. An important detail is how the script and its parameters are sent to the browser. For this, they need to be serialized and sent through the network. Once there, they will be deserialized, and the script will be executed inside an autoexecuted function that will receive the parameters as arguments. As a result of of this, our scripts cannot access any variable in our tests, unless they are explicitly sent as parameters. The script is executed in the browser with the window object as its execution context (the value of this). When passing parameters, we need to take into consideration the kind of data that WebDriver can serialize. This data includes the following: Booleans, strings, and numbers. The null and undefined values. However, note that undefined will be translated as null. Any function will be transformed to a string that contains only its body. A WebElement object will be received as a DOM element. So, it will not have the methods of WebElement but the standard DOM method instead. Conversely, if the script results in a DOM element, it will be received as WebElement in the test. Arrays and objects will be converted to arrays and objects whose elements and properties have been converted using the preceding rules. With this in mind, we could, for example, retrieve the identifier of an element, such as the following one: var elementSelector = ".order ul > li"; driver.executeScript(     "return document.querySelector(arguments[0]).id;",     elementSelector ).then(function(id) {   expect(id).to.be.equal('order_item0'); }); Notice that the script is specified as a string with the code. This can be a bit awkward, so there is an alternative available: var elementSelector = ".order ul > li"; driver.executeScript(function() {     var selector = arguments[0];     return document.querySelector(selector).id; }, elementSelector).then(function(id) {   expect(id).to.be.equal('order_item0'); }); WebDriver will just convert the body of the function to a string and send it to the browser. Since the script is executed in the browser, we cannot access the elementSelector variable, and we need to access it through parameters. Unfortunately, we are forced to retrieve the parameters using the arguments pseudoarray, because WebDriver have no way of knowing the name of each argument. 
As its name suggest, executeAsyncScript allows us to execute an asynchronous script. In this case, the last argument provided to the script is always a callback that we need to call to signal that the script has finalized. The result of the script will be the first argument provided to that callback. If no argument or undefined is explicitly provided, then the result will be null. Note that this is not directly compatible with the Node.js callback convention and that any extra parameters passed to the callback will be ignored. There is no way to explicitly signal an error in an asynchronous way. For example, if we want to return the value of an asynchronous DAO, then proceed with the following code: driver.executeAsyncScript(function() {   var cb = arguments[1],       userId = arguments[0];   window.userDAO.findById(userId).then(cb, cb); }, 'user1').then(function(userOrError) {   expect(userOrError).to.be.equal(expectedUser); }); Command control flows All the commands in WebDriverJS are asynchronous and return a promise or WebElement. How do we execute an ordered sequence of commands? Well, using promises could be something like this: return driver.findElement({name:'quantity'}).sendKeys('23')     .then(function() {       return driver.findElement({name:'add'}).click();     })     .then(function() {       return driver.findElement({css:firstItemSel}).getText();     })     .then(function(quantity) {       expect(quantity).to.be.equal('23');     }); This works because we wait for each command to finish before issuing the next command. However, it is a bit verbose. Fortunately, with WebDriverJS we can do the following: driver.findElement({name:'quantity'}).sendKeys('23'); driver.findElement({name:'add'}).click(); return expect(driver.findElement({css:firstItemSel}).getText())     .to.eventually.be.equal('23'); How can the preceding code work? Because whenever we tell WebDriverJS to do something, it simply schedules the requested command in a queue-like structure called the control flow. The point is that each command will not be executed until it reaches the top of the queue. This way, we do not need to explicitly wait for the sendKeys command to be completed before executing the click command. The sendKeys command is scheduled in the control flow before click, so the latter one will not be executed until sendKeys is done. All the commands are scheduled against the same control flow queue that is associated with the WebDriver object. However, we can optionally create several control flows if we want to execute commands in parallel: var flow1 = webdriver.promise.createFlow(function() {   var driver = new webdriver.Builder().build();     // do something with driver here }); var flow2 = webdriver.promise.createFlow(function() {   var driver = new webdriver.Builder().build();     // do something with driver here }); webdriver.promise.fullyResolved([flow1, flow2]).then(function(){   // Wait for flow1 and flow2 to finish and do something }); We need to create each control flow instance manually and, inside each flow, create a separate WebDriver instance. The commands in both flows will be executed in parallel, and we can wait for both of them to be finalized to do something else using fullyResolved. In fact, we can even nest flows if needed to create a custom parallel command-execution graph. Taking screenshots Sometimes, it is useful to take some screenshots of the current screen for debugging purposes. This can be done with the takeScreenshot() method. 
This method will return a promise that will be fulfilled with a string that contains a base-64 encoded PNG. It is our responsibility to save this string as a PNG file. The following snippet of code will do the trick: driver.takeScreenshot()     .then(function(shot) {       fs.writeFileSync(fileFullPath, shot, 'base64');     });  Note that not all browsers support this capability. Read the documentation for the specific browser adapter to see if it is available. Working with several tabs and frames WebDriver allows us to control several tabs, or windows, for the same browser. This can be useful if we want to test several pages in parallel or if our test needs to assert or manipulate things in several frames at the same time. This can be done with the switchTo() method that will return a webdriver.WebDriver.TargetLocator object. This object allows us to change the target of our commands to a specific frame or window. It has the following three main methods: frame(nameOrIndex): This will switch to a frame with the specified name or index. It will return a promise that is fulfilled when the focus has been changed to the specified frame. If we specify the frame with a number, this will be interpreted as a zero-based index in the window.frames array. window(windowName): This will switch focus to the window named as specified. The returned promise will be fulfilled when it is done. alert(): This will switch the focus to the active alert window. We can dismiss an alert with driver.switchTo().alert().dismiss();. The promise returned by these methods will be rejected if the specified window, frame, or alert window is not found. To make tests on several tabs at the same time, we must ensure that they do not share any kind of state, or interfere with each other through cookies, local storage, or an other kind of mechanism. Summary This article showed us that a good way to test the UI of an application is actually to split it into two parts and test them separately. One part is the core logic of the UI that takes responsibility for control logic, models, calls to the server, validations, and so on. This part can be tested in a classic way, using BDD, and mocking the server access. No new techniques are needed for this, and the tests will be fast. Here, we can involve nonengineer stakeholders, such as UX designers, users, and so on, to write some nice BDD features using Gherkin and Cucumber.js. The other part is a thin view layer that follows a passive view design. It only updates the HTML when it is asked for, and listens to DOM events to transform them as requests to the core logic UI layer. This layer has no internal state or control rules; it simply transforms data and manipulates the DOM. We can use WebDriverJS to test the view. This is a good approach because the most complex part of the UI can be fully test-driven easily, and the hard and slow parts to test the view do not need many tests since they are very simple. In this sense, the passive view should not have a state; it should only act as a proxy of the DOM. Resources for Article: Further resources on this subject: Dart With Javascript [article] Behavior-Driven Development With Selenium WebDriver [article] Event-Driven Programming [article]

The AsyncTask and HardwareTask Classes

Packt
17 Feb 2015
10 min read
This article is written by Andrew Henderson, the author of Android for the BeagleBone Black. This article will cover the usage of the AsyncTask and HardwareTask classes. (For more resources related to this topic, see here.) Understanding the AsyncTask class HardwareTask extends the AsyncTask class, and using it provides a major advantage over the way hardware interfacing is implemented in the gpio app. AsyncTasks allows you to perform complex and time-consuming hardware-interfacing tasks without your app becoming unresponsive while the tasks are executed. Each instance of an AsyncTask class can create a new thread of execution within Android. This is similar to how multithreaded programs found on other OSes spin new threads to handle file and network I/O, manage UIs, and perform parallel processing. The gpio app only used a single thread during its execution. This thread is the main UI thread that is part of all Android apps. The UI thread is designed to handle UI events as quickly as possible. When you interact with a UI element, that element's handler method is called by the UI thread. For example, clicking a button causes the UI thread to invoke the button's onClick() handler. The onClick() handler then executes a piece of code and returns to the UI thread. Android is constantly monitoring the execution of the UI thread. If a handler takes too long to finish its execution, Android shows an Application Not Responding (ANR) dialog to the user. You never want an ANR dialog to appear to the user. It is a sign that your app is running inefficiently (or even not at all!) by spending too much time in handlers within the UI thread. The Application Not Responding dialog in Android The gpio app performed reads and writes of the GPIO states very quickly from within the UI thread, so the risk of triggering the ANR was very small. Interfacing with the FRAM is a much slower process. With the BBB's I2C bus clocked at its maximum speed of 400 KHz, it takes approximately 25 microseconds to read or write a byte of data when using the FRAM. While this is not a major concern for small writes, reading or writing the entire 32,768 bytes of the FRAM can take close to a full second to execute! Multiple reads and writes of the full FRAM can easily trigger the ANR dialog, so it is necessary to move these time-consuming activities out of the UI thread. By placing your hardware interfacing into its own AsyncTask class, you decouple the execution of these time-intensive tasks from the execution of the UI thread. This prevents your hardware interfacing from potentially triggering the ANR dialog. Learning the details of the HardwareTask class The AsyncTask base class of HardwareTask provides many different methods, which you can further explore by referring to the Android API documentation. The four AsyncTask methods that are of immediate interest for our hardware-interfacing efforts are: onPreExecute() doInBackground() onPostExecute() execute() Of these four methods, only the doInBackground() method executes within its own thread. The other three methods all execute within the context of the UI thread. Only the methods that execute within the UI thread context are able to update screen UI elements.   
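Before looking at how HardwareTask uses these methods, a minimal, hypothetical AsyncTask sketch (not taken from the fram app; the class name and log tags are invented for illustration) shows which of these four methods runs on which thread:

import android.os.AsyncTask;
import android.util.Log;

public class ExampleTask extends AsyncTask<Void, Void, Boolean> {

    @Override
    protected void onPreExecute() {
        // Runs on the UI thread before the background work starts:
        // disable buttons, show a progress indicator, and so on
        Log.i("ExampleTask", "onPreExecute: UI thread");
    }

    @Override
    protected Boolean doInBackground(Void... params) {
        // Runs on its own background thread: put slow hardware or I/O work
        // here so that the UI thread never blocks
        Log.i("ExampleTask", "doInBackground: background thread");
        return true;
    }

    @Override
    protected void onPostExecute(Boolean result) {
        // Runs on the UI thread after doInBackground() returns:
        // safe to update UI elements with the result
        Log.i("ExampleTask", "onPostExecute: UI thread, result=" + result);
    }
}

Calling new ExampleTask().execute() from the UI thread runs onPreExecute() on the UI thread, doInBackground() on a new background thread, and onPostExecute() back on the UI thread once the background work is done.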
The thread contexts in which the HardwareTask methods and the PacktHAL functions are executed Much like the MainActivity class of the gpio app, the HardwareTask class provides four native methods that are used to call PacktHAL JNI functions related to FRAM hardware interfacing: public class HardwareTask extends AsyncTask<Void, Void, Boolean> {   private native boolean openFRAM(int bus, int address); private native String readFRAM(int offset, int bufferSize); private native void writeFRAM(int offset, int bufferSize,      String buffer); private native boolean closeFRAM(); The openFRAM() method initializes your app's access to a FRAM located on a logical I2C bus (the bus parameter) and at a particular bus address (the address parameter). Once the connection to a particular FRAM is initialized via an openFRAM() call, all readFRAM() and writeFRAM() calls will be applied to that FRAM until a closeFRAM() call is made. The readFRAM() method will retrieve a series of bytes from the FRAM and return it as a Java String. A total of bufferSize bytes are retrieved starting at an offset of offset bytes from the start of the FRAM. The writeFRAM() method will store a series of bytes to the FRAM. A total of bufferSize characters from the Java string buffer are stored in the FRAM started at an offset of offset bytes from the start of the FRAM. In the fram app, the onClick() handlers for the Load and Save buttons in the MainActivity class each instantiate a new HardwareTask. Immediately after the instantiation of HardwareTask, either the loadFromFRAM() or saveToFRAM() method is called to begin interacting with the FRAM: public void onClickSaveButton(View view) {    hwTask = new HardwareTask();    hwTask.saveToFRAM(this); }    public void onClickLoadButton(View view) {    hwTask = new HardwareTask();    hwTask.loadFromFRAM(this); } Both the loadFromFRAM() and saveToFRAM() methods in the HardwareTask class call the base AsyncTask class execution() method to begin the new thread creation process: public void saveToFRAM(Activity act) {    mCallerActivity = act;    isSave = true;    execute(); }    public void loadFromFRAM(Activity act) {    mCallerActivity = act;    isSave = false;    execute(); } Each AsyncTask instance can only have its execute() method called once. If you need to run an AsyncTask a second time, you must instantiate a new instance of it and call the execute() method of the new instance. This is why we instantiate a new instance of HardwareTask in the onClick() handlers of the Load and Save buttons, rather than instantiating a single HardwareTask instance and then calling its execute() method many times. The execute() method automatically calls the onPreExecute() method of the HardwareTask class. The onPreExecute() method performs any initialization that must occur prior to the start of the new thread. In the fram app, this requires disabling various UI elements and calling openFRAM() to initialize the connection to the FRAM via PacktHAL: protected void onPreExecute() {    // Some setup goes here    ...   if ( !openFRAM(2, 0x50) ) {      Log.e("HardwareTask", "Error opening hardware");      isDone = true; } // Disable the Buttons and TextFields while talking to the hardware saveText.setEnabled(false); saveButton.setEnabled(false); loadButton.setEnabled(false); } Disabling your UI elements When you are performing a background operation, you might wish to keep your app's user from providing more input until the operation is complete. 
During a FRAM read or write, we do not want the user to press any UI buttons or change the data held within the saveText text field. If your UI elements remain enabled all the time, the user can launch multiple AsyncTask instances simultaneously by repeatedly hitting the UI buttons. To prevent this, disable any UI elements needed to restrict user input until that input is appropriate again.

Once the onPreExecute() method finishes, the AsyncTask base class spins a new thread and executes the doInBackground() method within that thread. The lifetime of the new thread is only for the duration of the doInBackground() method. Once doInBackground() returns, the new thread will terminate. As everything that takes place within the doInBackground() method is performed in a background thread, it is the perfect place to perform any time-consuming activities that would trigger an ANR dialog if they were executed from within the UI thread. This means that the slow readFRAM() and writeFRAM() calls that access the I2C bus and communicate with the FRAM should be made from within doInBackground():

protected Boolean doInBackground(Void... params) {
    ...
    Log.i("HardwareTask", "doInBackground: Interfacing with hardware");
    try {
      if (isSave) {
          writeFRAM(0, saveData.length(), saveData);
      } else {
        loadData = readFRAM(0, 61);
      }
    } catch (Exception e) {
      ...

The loadData and saveData string variables used in the readFRAM() and writeFRAM() calls are both class variables of HardwareTask. The saveData variable is populated with the contents of the saveEditText text field via a saveEditText.getText().toString() call in the HardwareTask class' onPreExecute() method.

How do I update the UI from within an AsyncTask thread?

While the fram app does not make use of them in this example, the AsyncTask class provides two special methods, publishProgress() and onProgressUpdate(), that are worth mentioning. The AsyncTask thread uses these methods to communicate with the UI thread while the AsyncTask thread is running. The publishProgress() method executes within the AsyncTask thread and triggers the execution of onProgressUpdate() within the UI thread. These methods are commonly used to update progress meters (hence the name publishProgress) or other UI elements that cannot be directly updated from within the AsyncTask thread.

After doInBackground() has completed, the AsyncTask thread terminates. This triggers the calling of onPostExecute() from the UI thread. The onPostExecute() method is used for any post-thread cleanup and for updating any UI elements that need to be modified. The fram app uses the closeFRAM() PacktHAL function to close the current FRAM context that it opened with openFRAM() in the onPreExecute() method.

protected void onPostExecute(Boolean result) {
    if (!closeFRAM()) {
        Log.e("HardwareTask", "Error closing hardware");
    }
    ...
Log.i("HardwareTask", "onPostExecute: Completed."); if (isSave) {    Toast toast = Toast.makeText(mCallerActivity.getApplicationContext(),      "Data stored to FRAM", Toast.LENGTH_SHORT);    toast.show(); } else {    ((MainActivity)mCallerActivity).updateLoadedData(loadData); } Giving Toast feedback to the user The Toast class is a great way to provide quick feedback to your app's user. It pops up a small message that disappears after a configurable period of time. If you perform a hardware-related task in the background and you want to notify the user of its completion without changing any UI elements, try using a Toast message! Toast messages can only be triggered by methods that are executing from within the UI thread. An example of the Toast message Finally, the onPostExecute() method will re-enable all of the UI elements that were disabled in onPreExecute(): saveText.setEnabled(true);saveButton.setEnabled(true); loadButton.setEnabled(true); The onPostExecute() method has now finished its execution and the app is back to patiently waiting for the user to make the next fram access request by pressing either the Load or Save button. Are you ready for a challenge? Now that you have seen all of the pieces of the fram app, why not change it to add new functionality? For a challenge, try adding a counter that indicates to the user how many more characters can be entered into the saveText text field before the 60-character limit is reached. Summary The fram app in this article demonstrated how to use the AsyncTask class to perform time-intensive hardware interfacing tasks without stalling the app's UI thread and triggering the ANR dialog. Resources for Article: Further resources on this subject: Sound Recorder for Android [article] Reversing Android Applications [article] Saying Hello to Unity and Android [article]