

Automated testing using Robotium

Packt
19 Nov 2013
10 min read
Robotium framework

Robotium is an open source automation testing framework used to write robust and powerful black box tests for Android applications (the emphasis is mostly on black box test cases). It fully supports testing of both native and hybrid applications. Native apps live on the device, that is, they are designed for a specific platform and can be installed from the Google Play Store, whereas hybrid apps are partly native and partly web apps: they can also be installed from an app store, but require their HTML to be rendered in the browser. Robotium is mostly used to automate UI test cases and internally uses run-time binding to Graphical User Interface (GUI) components.

Robotium is released under the Apache License 2.0. It is free to download, can easily be used by individuals and enterprises, and is built on Java and JUnit 3. It is perhaps most accurately described as an extension of the Android test framework, available at http://developer.android.com/tools/testing/testing_android.html. Robotium can also work without access to the source code of the application under test. Test cases written using Robotium can be executed either on the Android Emulator (an Android Virtual Device (AVD); we will see how to create one during installation in the following section) or on a real Android device. Developers can write function, system, and acceptance test scenarios spanning multiple activities.

Robotium is currently one of the leading Android automation testing frameworks, and many open source developers are contributing more and more exciting features in subsequent releases. The following screenshot shows the git repository website for the Robotium project. As Robotium is an open source project, anyone can contribute to its development and help enhance the framework with many more features.
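Before looking at the project itself, it helps to see the basic shape of a Robotium test case: an ordinary JUnit 3 instrumentation test built around the Solo class. The following is a minimal sketch; the LoginActivity class, the "Log in" button, and the "Welcome" text are hypothetical, and the test runs only on an emulator or device under the Android test runner, not standalone:

```java
import android.test.ActivityInstrumentationTestCase2;
import com.jayway.android.robotium.solo.Solo;

public class LoginActivityTest extends ActivityInstrumentationTestCase2<LoginActivity> {

    private Solo solo;

    public LoginActivityTest() {
        super(LoginActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        // Solo drives the UI using the instrumentation and the activity under test
        solo = new Solo(getInstrumentation(), getActivity());
    }

    public void testLogin() {
        solo.enterText(0, "user@example.com"); // index 0: first EditText on the screen
        solo.clickOnButton("Log in");
        // Passes if the expected text appears within Robotium's default timeout
        assertTrue(solo.waitForText("Welcome"));
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();
        super.tearDown();
    }
}
```

Note how little the test needs to know about the application's internals: controls are addressed by index or visible text, which is what makes Robotium a black box tool.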
The Robotium source code is maintained on GitHub and can be accessed at the following link: https://github.com/jayway/robotium. You just need to fork the project, make your changes in a clone, and click on Pull Request on your repository to tell the core team members which changes to pull in. If you are new to the git environment, you can refer to the GitHub help pages at the following link: https://help.github.com/

Robotium is like Selenium, but for Android. The project was started in January 2010 by Renas Reda, the founder and main developer of Robotium. The project began with v1.0 and continues to see new releases driven by new requirements. It supports Android features such as activities, toasts, menus, context menus, web views, and remote control. Let's look at the main Robotium features and benefits for Android test case developers.

Features and benefits

Automated testing using Robotium has many features and benefits. The triangular workflow diagram between the user, Robotium, and the Android device clearly illustrates the use cases between them. The features and benefits of Robotium are as follows:

- Robotium helps us quickly write powerful test cases with minimal knowledge of the application under test.
- Robotium offers APIs to interact directly with UI controls within the Android application, such as EditText, TextView, and Button.
- Robotium officially supports Android 1.6 and above.
- The Android platform is not modified by Robotium.
- Robotium tests can also be executed from the command prompt.
- Robotium can be integrated smoothly with Maven or Ant, which helps add Robotium to your project's build automation process.
- Screenshots can be captured in Robotium (an example screenshot is shown as follows).
- The test project and the application project run in the same JVM, that is, the Dalvik Virtual Machine (DVM).
- It is possible to run Robotium without the source code.
- Robotium can work with other code coverage measurement tools, such as Cobertura and EMMA.
- Robotium can detect the messages shown on the screen (toasts).
- Robotium supports Android features such as activities, menus, and context menus.
- Robotium automated tests can be implemented quickly.
- Robotium is built on JUnit, so it inherits all of JUnit's features.
- The Robotium framework automatically handles multiple activities in an Android application.
- Robotium test cases are considerably more readable than standard instrumentation tests.
- Scrolling is handled automatically by the Robotium framework.
- Recent versions of Robotium support hybrid applications. Hybrid applications use WebViews to present HTML and JavaScript files in full screen, using the native browser rendering engine.

API set

Web support has been added to the Robotium framework since the Robotium 4.0 release, and Robotium now has full support for hybrid applications. There are some key differences between native and hybrid applications. Let's go through them one by one:

Native application:
- Platform dependent
- Runs on the device's internal software and hardware
- Needs more developers to build apps for different platforms, and the learning time is longer
- Excellent performance

Hybrid application:
- Cross-platform
- Built using HTML5 and JavaScript, wrapped inside a thin native container that provides access to native platform features
- Saves development cost and time
- Lower performance

Let's see some of the existing methods in Robotium that support access to web content:

- searchText(String text)
- scrollUp()/scrollDown()
- clickOnText(String text)
- takeScreenshot()
- waitForText(String text)

The methods specifically added for web support take the class By as a parameter. By is an abstract class used in conjunction with the web methods.
These methods select different WebElements by their properties, such as id and name. An element used in a web view is referred to as a WebElement; it is similar to the WebElement type in Selenium WebDriver. The following list covers the methods of the class By:

- className(String className): selects a WebElement by its class name
- cssSelector(String selectors): selects a WebElement by its CSS selector
- getValue(): returns the value
- id(String id): selects a WebElement by its id
- name(String name): selects a WebElement by its name
- tagName(String tagName): selects a WebElement by its tag name
- textContent(String textContent): selects a WebElement by its text content
- xpath(String xpath): selects a WebElement by its XPath

Some of the important methods in the Robotium framework that communicate directly with web content in Android applications are as follows:

- clickOnWebElement(By by): clicks on the WebElement matching the specified By object.
- waitForWebElement(By by): waits for the WebElement matching the specified By object.
- getWebElement(By by, int index): returns the WebElement matching the specified By object and index.
- enterTextInWebElement(By by, String text): enters the text directly into the WebElement matching the specified By object.
- typeTextInWebElement(By by, String text): types the text into the WebElement matching the specified By object. This method types the text letter by letter using the keyboard, whereas enterTextInWebElement sets the text directly in the matched element.
- clearTextInWebElement(By by): clears the text in the WebElement matching the specified By object.
- getCurrentWebElements(By by): returns an ArrayList of the WebElements displayed in the active web view that match the specified By object.

Before actually looking into a hybrid test example, let's learn more about WebViews.
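As a preview of how these methods combine in practice, the following sketch drives a login form rendered inside a web view. The element ids (email, password) and name (submit) are hypothetical, solo is created as in a normal Robotium test, and the code runs only under the Android test runner:

```java
// Wait until the form has loaded inside the WebView
solo.waitForWebElement(By.id("email"));
// Fill in the form fields and submit
solo.enterTextInWebElement(By.id("email"), "user@example.com");
solo.enterTextInWebElement(By.id("password"), "secret");
solo.clickOnWebElement(By.name("submit"));
// Verify the page reacted as expected
assertTrue(solo.waitForText("Welcome"));
```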
You can get an instance of WebView using the Solo class as follows:

WebView wb = solo.getCurrentViews(WebView.class).get(0);

Now that you have control of the WebView, you can inject your JavaScript code as follows:

wb.loadUrl("<JavaScript>");

This is very powerful, as we can call every function on the current page, which helps automation.

Robotium Remote Control using SAFS

SAFS tests are not wrapped up as JUnit tests, and the SAFS Remote Control of Robotium uses an implementation that is not JUnit based; there is no technical requirement for JUnit on the remote-control side of the test. The test setup and deployment of the automation of the target app can be achieved using the SDK tools, such as adb and aapt, which are used as part of the test runtime. The existing packaging tools can be used to repackage a compiled Robotium test with an alternate AndroidManifest.xml file, which can change the target application at runtime.

SAFS is a general-purpose, data-driven framework. The only thing that needs to be provided by the user is the target package name or APK path arguments. The test will extract and redeploy the modified packages automatically and then launch the actual test. Traditional JUnit/Robotium users might not have, or see the need for, this general-purpose nature, but that is likely because previous Android tests had to be JUnit tests, and a JUnit test is required to target one specific application. With SAFS, the Remote Control test is application specific; that is why the test app installed on the device no longer needs to be.

Remote Control in Robotium means there are two test applications to build for any given test:

- The traditional on-device Robotium/JUnit test app
- The Remote Control app

These two build projects have entirely different dependencies and build scripts.
The on-device test app has the traditional Robotium/Android/JUnit dependencies and build scripts, while the Remote Control app only depends on TCP sockets for communication and the Robotium Remote Control API. The implementation of remote-controlled Robotium consists of the following two pieces:

- On device: ActivityInstrumentationTestCase2.setup() is initialized when Robotium's Solo class object is to be used by the RobotiumTestRunner (RTR). The RTR has a Remote Control listener and routes remote control calls and data to the appropriate Solo class methods, returning any results, as needed, to the Remote Controller. The on-device implementation may exploit test-result asserts if that is desirable.
- Remote Controller: The RemoteSolo API duplicates the traditional Solo API, but its implementation largely pushes the data through the Remote Control to the RTR and then receives results back from it. The Remote Control implementation may exploit any number of options for asserting, handling, or otherwise reporting or tracking the test results for each call.

As you can see, the remote-control side only requires the RemoteSolo API, without any specific JUnit context. It can be wrapped in a JUnit context if the tester desires, but it does not need to be. The sample code and installation instructions for Robotium Remote Control can be accessed at the following link: http://code.google.com/p/robotium/wiki/RemoteControl

Summary

This article introduced the Robotium framework, its different features, its benefits in the world of automated testing, the API set of the Robotium framework, and how to implement Robotium Remote Control using SAFS.


DOM and QTP

Packt
15 Nov 2013
7 min read
The Object property of a Web object in QuickTest allows us to get a reference to the underlying DOM object and perform any operation on it. For example, the following code retrieves the elements named userName on the page (the textbox) and then assigns it the value ashish:

Set obj = Browser("Tours").Page("Tours").Object.getElementsByName("userName")
'Get the number of matching objects
Print obj.length
obj(0).value = "ashish"

The following code snippet shows various operations that can be performed using the Object property of a web object:

'Retrieve all the link elements in a web page
Set links = Browser("Mercury Tours").Page("Mercury Tours").Object.links
'The length property provides the total number of links
For i = 0 To links.length - 1
    Print links(i).toString() 'Print the value of the link
Next

'Get the web edit object and set the focus on it
Set MyWebEdit = Browser("Tours").Page("Tours").WebEdit("username").Object
MyWebEdit.focus

'Retrieve the html element by its name
Set obj = Browser("Tours").Page("Tours").Object.getElementsByName("userName")
'Set the value of the retrieved object
obj(0).value = "ashish"

'Retrieve the total number of images in the web page
Set images = Browser("Mercury Tours").Page("Mercury Tours").Object.images
Print images.length

'Click on the image
Set obj = Browser("Tours").Page("Mercury Tours").Object.getElementsByName("login")
obj(0).click

'Select a value from the drop-down list using the selectedIndex property
Browser("Flight").Page("Flight").WebList("pass.0.meal").Object.selectedIndex = 1

'Click on the check box
Browser("Flight").Page("Flight").WebCheckBox("ticketLess").Object.click

Firing an event

QTP allows firing events on web objects:

Browser("The Fishing Website fishing").Page("The Fishing Website fishing").Link("Link").FireEvent "onmouseover"

The following example uses the FireEvent method to trigger the onpropertychange event on a form:

Browser("New Page").Page("New Page").WebElement("html tag:=Form").FireEvent "onpropertychange"

QTP also allows executing JavaScript code. There are two functions that let us interact with web pages this way: we can retrieve objects and perform actions on them, or we can retrieve the properties of elements on the page.

RunScript executes the JavaScript code passed as an argument to this function. The following example shows how the RunScript method calls an ImgCount function, which returns the number of images on the page:

length = Browser("Mercury Tours").Page("Mercury Tours").RunScript("ImgCount(); function ImgCount() {var list = document.getElementsByTagName('img'); return list.length;}")
Print "The total number of images on the page is " & length

RunScriptFromFile uses the full path of a JavaScript file to execute it. The location can be an absolute or relative file system path or a Quality Center path. The following is a sample JavaScript file (logo.js):

var myNode = document.getElementById("lga");
myNode.innerHTML = '';

Use the logo.js file as shown in the following code:

Browser("Browser").Page("page").RunScriptFromFile "c:\logo.js"
'Check that the web page behaves correctly
If Browser("Browser").Page("page").Image("Image").Exist Then
    Reporter.ReportEvent micFail, "Logo check", "Failed to remove logo"
End If

The above example uses the RunScriptFromFile method to remove a DOM element from a web page and then checks whether the page still behaves correctly when the element has been removed.

Using XPath

XPath allows navigating and finding elements and attributes in an HTML document. XPath uses path expressions to navigate HTML documents.
QTP allows using XPath to create an object description; for example:

xpath:=//input[@type='image' and contains(@name,'findFlights')]

In the following sections, we will learn the various XPath terms and the methods of finding objects using XPath.

XPath terminology

XPath uses various terms to define elements and their relationships among HTML elements, as shown in the following table:

- Atomic values: nodes with no children or parent
- Ancestors: a node's parent, parent's parent, and so on
- Descendants: a node's children, children's children, and so on
- Parent: each element and attribute has one parent
- Children: element nodes may have zero, one, or more children
- Siblings: nodes that have the same parent

Selecting nodes

A path expression allows selecting nodes in a document. The commonly used path expressions are shown in the following table:

- / (slash): selects elements relative to the root node
- // (double slash): selects nodes in the document from the current node that match the selection, irrespective of their position
- . (dot): represents the current node
- ..: represents the parent of the current node
- @: represents an attribute
- nodename: selects all nodes with the name "nodename"

A slash (/) at the beginning defines an absolute path; for example, /html/head/title returns the title tag. Used in the middle, it defines a parent-child relationship; for example, //div/table returns the tables that are direct children of a div. A double slash (//) is used to find a node at any location; for example, //table returns all the tables. Used in the middle, it defines a descendant relationship; for example, /html//title returns the title tag, which is a descendant of the html tag.
Refer to the following table to see a few more examples with their meanings:

- //a: finds all anchor tags
- //a//img: lists the images that are inside a link
- //img/@alt: shows the alt attribute of every image
- //a/@href: shows the href attribute of every link
- //a[@*]: anchor tags with any attribute
- //title/text() or /html/head/title/text(): gets the title of a page
- //img[@alt]: lists the images that have alt attributes
- //img[not(@alt)]: lists the images that don't have alt attributes
- //*[@id='mainContent']: gets an element with a particular CSS ID
- //div[not(@id="div1")]: selects the div elements whose id is not div1
- //p/..: selects the parent element of p (paragraph)
- XXX[@att]: selects the XXX elements that have an attribute named att
- ./@*, for example //script/./@*: finds all attribute values of the current element

Predicates

A predicate is embedded in square brackets and is used to find specific node(s), or nodes that contain a specific value:

- //p[@align]: finds all the p tags that have an align attribute
- //img[@alt]: finds all the img (image) tags that have an alt attribute
- //table[@border]: finds all the table tags that have a border attribute
- //table[@border >1]: finds the tables with a border value greater than 1

Retrieve a table row using the complete path:

//body/div/table/tbody/tr[1]

Get the name of the parent of the table tag:

//body/div/table/..[name()]

A few more path expressions and their results:

- //div/p[1]: selects the first paragraph element that is a child of the div element
- //div/p[last()]: selects the last paragraph element that is a child of the div element
- //div/p[last()-1]: selects the second-to-last paragraph element that is a child of the div element
- //div/p[position()<3]: selects the first two paragraph elements that are children of the div element
- //script[@language]: selects all script elements with an attribute named language
- //script[@language='javascript']: selects all the script elements that have a language attribute with the value javascript
- //div/p[text()>45.00]: selects all the paragraph elements of the div element whose text value is greater than 45.00

Selecting unknown nodes

Apart from selecting specific nodes, XPath allows us to select groups of HTML elements using *, @*, and node():

- * matches any element node
- @* matches any attribute node
- node() matches any node of any kind

These allow selecting unknown nodes; for example:

- /div/* selects all the child nodes of a div element
- //* selects all the elements in a document
- //script[@*] selects all the script elements that have at least one attribute
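Several of the expressions above can be verified outside QTP with any XPath 1.0 engine. The following is a small, self-contained sketch using the standard Java javax.xml.xpath API against a hypothetical page fragment:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;

public class XPathDemo {

    // Parse an XML/XHTML string and return how many nodes the XPath matches
    static int count(String xml, String expr) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList nodes = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate(expr, doc, XPathConstants.NODESET);
        return nodes.getLength();
    }

    public static void main(String[] args) throws Exception {
        String page = "<html><body><div id='mainContent'>"
                + "<a href='/x'><img alt='logo'/></a>"
                + "<img/>" // an image without an alt attribute
                + "<p>first</p><p>second</p>"
                + "</div></body></html>";
        System.out.println(count(page, "//a"));                    // 1
        System.out.println(count(page, "//img[@alt]"));            // 1
        System.out.println(count(page, "//img[not(@alt)]"));       // 1
        System.out.println(count(page, "//*[@id='mainContent']")); // 1
        System.out.println(count(page, "//div/p"));                // 2
    }
}
```

Note that the document must be well-formed XML for this parser; real-world HTML usually needs a lenient HTML parser first, but the XPath semantics are the same ones QTP applies.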


Enabling and configuring SNMP on Windows

Packt
14 Nov 2013
5 min read
This article by Justin M. Brant, the author of SolarWinds Server & Application Monitor: Deployment and Administration, covers enabling and configuring SNMP on Windows. The procedures in this article are not required pre-deployment, as it is possible to populate SolarWinds SAM with nodes after deployment; however, doing so beforehand is recommended. Even after deployment, you should still enable and configure advanced monitoring services on your vital nodes.

SolarWinds SAM uses three types of protocols to poll management data:

- Simple Network Management Protocol (SNMP): This is the most common network management service protocol. To utilize it, SNMP must be enabled and an SNMP community string must be assigned on the server, device, or application. The community string is essentially a password that is sent between a node and SolarWinds SAM. Once the community string is set and assigned, the node is permitted to expose management data to SolarWinds SAM, in the form of variables. Currently, there are three versions of SNMP: v1, v2c, and v3. SolarWinds SAM uses SNMPv2c by default. To poll using SNMPv1, you must disable SNMPv2c on the device. Similarly, to poll using SNMPv3, you must configure your devices and SolarWinds SAM accordingly.
- Windows Management Instrumentation (WMI): This adds functionality by incorporating Windows-specific communication and security features. WMI comes preinstalled on Windows by default but is not automatically enabled and configured. WMI is not exclusive to Windows server platforms; it comes installed on all modern Microsoft operating systems and can also be used to poll desktop operating systems, such as Windows 7.
- Internet Control Message Protocol (ICMP): This is the most basic of the three; it simply sends echo requests (pings) to a server or device for status, response time, and packet loss. SolarWinds SAM uses ICMP in conjunction with SNMP and WMI.
Nodes can be configured to poll with ICMP exclusively, but you then miss out on CPU, memory, and volume data. Some devices can only be polled with ICMP, although in most instances you will rarely use ICMP exclusively.

Trying to decide between SNMP and WMI? SNMP is more standardized and provides data that you may not be able to poll with WMI, such as interface information. In addition, polling a single WMI-enabled node uses roughly five times the resources required to poll the same node with SNMP.

This article explains how to prepare for SolarWinds SAM deployment by enabling and configuring network management services and protocols on Windows servers. In this article we will reference service accounts. A service account is an account created to hand off credentials to SolarWinds SAM. Service accounts are a best practice, primarily for security reasons, but also to ensure that user accounts do not become locked out.

Enabling and configuring SNMP on Windows

The procedures listed in this article explain how to enable SNMP and then assign a community string on Windows Server 2008 R2. All Windows server-related procedures in this book are performed on Windows Server 2008 R2; procedures vary slightly in other supported versions.

Installing the SNMP service on Windows

This procedure explains how to install the SNMP service on Windows Server 2008 R2:

1. Log in to a Windows server.
2. Navigate to Start Menu | Control Panel | Administrative Tools | Server Manager. In order to see Administrative Tools in the Control Panel, you may need to select View by: Small icons or Large icons.
3. Select Features and click on Add Features.
4. Check SNMP Services, then click on Next and Install.
5. Click on Close.

Assigning an SNMP community string on Windows

This procedure explains how to assign a community string on Windows Server 2008 R2 and ensure that the SNMP service is configured to run automatically on startup:

1. Log in to a Windows server.
2. Navigate to Start Menu | Control Panel | Administrative Tools | Services.
3. Double-click on SNMP Service.
4. On the General tab, select Automatic under Startup type.
5. Select the Agent tab and ensure Physical, Applications, Internet, and End-to-end are all checked under the Service area. Optionally, enter a Contact person and system Location.
6. Select the Security tab and click on the Add button under Accepted community names.
7. Enter a Community Name and click on the Add button. For example, we used S4MS3rv3r; we recommend using something secure, as this is a password. Community String and Community Name mean the same thing. READ ONLY community rights will normally suffice. A detailed explanation of community rights can be found on the author's blog: http://justinmbrant.blogspot.com/
8. Next, tick the Accept SNMP packets from these hosts radio button.
9. Click on the Add button underneath the radio buttons and add the IP of the server you have designated as the SolarWinds SAM host.

Once you complete these steps, the Security tab of SNMP Service Properties should look something like the following screenshot. Notice that we used 192.168.1.3, as that is the IP of the server where we plan to deploy SolarWinds SAM.

Summary

In this article, we learned about the different types of protocols used to poll management data. We also learned how to install SNMP and assign an SNMP community string on Windows.


Package Management

Packt
14 Nov 2013
12 min read
Using NuGet with source control

Source control systems are an integral part of software development. As soon as more than one person works on a project, source control becomes an invaluable tool for sharing source code; even when we are on a project on our own, there is no better way to track versions and source code changes. The question arises: how should we put the installed packages into source control?

The first impulse would be to simply add the packages folder to the repository. Though this will work, it isn't the best possible approach. Packages can grow quite large, and they can be obtained from elsewhere; therefore, we would only "pollute" the repository with redundant files. Many source control systems don't handle large binary files well, and even for those that don't have such problems, having packages in the repository doesn't add much value; it does noticeably increase the repository size, though. Fortunately, NuGet offers a feature called Package Restore, which can be used to avoid adding packages to source control. Let's see how it works.

The following sample will use Team Foundation Service (TFS) for source control. If you don't have an account yet, you can sign up for free at http://tfs.visualstudio.com; you need a Microsoft account for authentication. If you decide to use a different source control system instead, just skip all the steps dealing with TFS, replacing them with equivalent actions in your source control system of choice.

We'll start by creating a sample project:

1. Create a new Console Application project in Visual Studio.
2. Install the Json.NET NuGet package into the project.
3. Add the following code to the Main method so that the project won't compile without a valid reference to the Newtonsoft.Json.dll assembly:

var jsonString = @"{
    ""title"": ""NuGet 2 Essentials"",
    ""authors"": ""Damir Arh & Dejan Dakic"",
    ""publisher"": ""Packt Publishing"" }";
var parsedJson = Newtonsoft.Json.JsonConvert.DeserializeObject(jsonString);

4. Compile and run the project to make sure the code works.

It's time to create a source code repository. If you already have a repository, you can skip the following steps; just make sure you have a repository ready and know how to connect to it before moving on. You will need Visual Studio 2012, or Visual Studio 2010 with Service Pack 1 and KB2662296 installed, to connect to TFS.

1. In a browser, navigate to https://<accountname>.visualstudio.com/ (replacing <accountname> with the name you used when signing up for TFS).
2. Click on New team project.
3. In the CREATE NEW TEAM PROJECT dialog box, enter the project name (for example, PackageRestore) and click on Create project, leaving the rest of the fields unchanged.
4. Click on Navigate to project once the creation process is complete.
5. On the project page, click on Open new instance of Visual Studio in the Activities menu on the right to connect Visual Studio to your TFS account. You can close this Visual Studio instance once the connection is established.

Now we're ready to add the project to the repository:

1. Return to Visual Studio, right-click on the solution node in the Solution Explorer window, and click on the Add Solution to Source Control… menu item. Make sure you select Team Foundation Version Control as the source control system if a dialog box pops up asking you to make a selection.
2. In the Connect to Team Foundation Server dialog box which opens next, select your TFS account (for example, accountname.visualstudio.com) from the drop-down list and check the created repository in the Team Projects list box.
3. Click on Connect to move to the next step and confirm the default settings by clicking on OK in the dialog box that follows.

We still need to select the right set of files to add to source control and check them in so that they will be available to others:

1. Open the Team Explorer window and click on the Source Control Explorer link inside it.
2. Look at the tree view on the left side of the Source Control Explorer window. You will notice that, apart from your project, the packages folder is also included. We need to exclude it, since we don't want to add packages to source control.
3. Right-click on the packages folder and select Undo Pending Changes… from the context menu.
4. Click on the Undo Changes button in the Undo Pending Changes dialog box that pops up.
5. Click on the Check In button in the toolbar, then click on Check In again in the Pending Changes pane inside the Team Explorer window. Close the confirmation dialog box if it pops up. The packages folder should now be removed from the tree view.
6. Navigate to https://<accountname>.visualstudio.com/DefaultCollection/<PackageRestore>/_versionControl in your browser to check which files have been successfully checked in, replacing <accountname> and <PackageRestore> with the appropriate values for your case.

Let's retrieve the code, place it in a different location, and see how the packages are going to get restored:

1. With TFS, a new workspace needs to be created for this purpose in the Manage Workspaces dialog box, which can be accessed by clicking on Workspaces… in the Workspace drop-down list in the Source Control Explorer toolbar.
2. Click on Add… to add a new workspace. You need to specify both the Source Control Folder (the solution folder, that is, $/<PackageRestore>/<SolutionName>) and the Local Folder where you want to put the files.
3. After adding the workspace, confirm the next dialog box to get the files from the repository into your selected local folder.
4. Check the contents of that folder after Visual Studio is done to see that there is no packages folder inside it.
5. Open the solution from the new folder in Visual Studio and build it. You should notice a NuGet Package Manager dialog box popping up, displaying the package restore progress and closing again once it is done.
6. The application should build successfully. You can run it to see that it works as expected. If you check the contents of the solution folder once again, you will see that the packages folder has been restored, with all the required packages inside it.

Automatic Package Restore, described earlier, has only been available since NuGet 2.7. Earlier versions only supported MSBuild-Integrated Package Restore. In case your repository will be accessed by users still using NuGet 2.6 or older, it might be a better idea to use this instead; otherwise, package restore won't work for them. You can enable it by following these steps (if you do this in NuGet 2.7, Automatic Package Restore will get disabled):

1. Right-click on the solution node in the Solution Explorer window and click on the Enable NuGet Package Restore menu item.
2. Confirm the dialog box that pops up, briefly explaining what is going to happen. Another dialog box will pop up once the process is complete.

Notice that a .nuget folder containing three files has been added to the solution, shown as follows. When adding a solution configured like this to source control, don't forget to include the .nuget folder as well; the packages folder, of course, still remains outside source control.

If you encounter a repository with MSBuild-Integrated Package Restore that was enabled with NuGet 2.6 or older, restoring packages before the build might fail with the following error:

Package restore is disabled by default. To give consent, open the Visual Studio Options dialog, click on Package Manager node and check 'Allow NuGet to download missing packages during build.'
You can also give consent by setting the environment variable 'EnableNuGetPackageRestore' to 'true'.

This happens because the Allow NuGet to download missing packages during build setting was disabled by default in NuGet versions before 2.7. To fix the problem, navigate to Tools | Library Package Manager | Package Manager Settings to open the Options dialog box at the right node; then uncheck and recheck the mentioned setting and click on OK to explicitly set it. A more permanent solution is to either update NuGet.exe in the .nuget folder to the latest version or to switch to Automatic Package Restore instead, as described at http://bit.ly/NuGetAutoRestore.

Using NuGet on a build server

Automatic Package Restore only works within Visual Studio. If you try to build such a solution on a build server by using MSBuild only, it will fail if the packages are missing. To solve this problem, you should use the Command-Line Package Restore approach by executing the following command as a separate step before building the solution file:

C:\> NuGet.exe restore path\to\SolutionFile.sln

This will restore all of the packages in the solution, making sure that the build won't fail because of missing packages. Even if the solution is using MSBuild-Integrated Package Restore, this approach will still work, because all of the packages will already be available when the MSBuild-integrated step is invoked and it will just silently be skipped. The exact procedure for adding the extra step will depend on your existing build server configuration. You should either call it from within your build script or add it as an additional step to your build server configuration. In any case, you need to make sure you have installed NuGet 2.7 on your build server.

The NuGet Package Restore feature can be optimized even more on a build server by defining a common package repository for all solutions.
This way each package will be downloaded only once even if it is used in multiple solutions, saving both download time and storage space. To achieve this, save a NuGet.config file with the following content at the root folder containing all your solutions in its subfolders:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <add key="repositorypath" value="C:\path\to\repository" />
  </config>
</configuration>

You can have even more control over your repository location and other NuGet settings by taking advantage of the hierarchical or machine-wide NuGet.config files, as explained at http://bit.ly/NuGetConfig.

Using Package Manager Console

We have already used Package Manager Console twice to achieve something that couldn't have been done using the graphical user interface. It's time to take a closer look at it and the commands that are available. The Package Manager Console window is accessible by either navigating to Tools | Library Package Manager | Package Manager Console or by navigating to View | Other Windows | Package Manager Console.

The most important commands are used to install, update, and uninstall packages on a project. By default, they operate on the Default project selected from a drop-down list in the window's toolbar. The target project name can be specified using the -ProjectName parameter. To get a list of all commands, type Get-Help NuGet in the console. To get more information about a command, type Get-Help CommandName in the console, replacing CommandName with the actual name of the command. You can also check the online PowerShell command reference at http://bit.ly/NuGetPsRef.
Let's take a look at a few examples:

To install the latest version of the Newtonsoft.Json package to the default project, type:
PM> Install-Package Newtonsoft.Json

To install Version 5.0.1 of the Newtonsoft.Json package to the default project, type:
PM> Install-Package Newtonsoft.Json -Version 5.0.1

To install the latest version of the Newtonsoft.Json package to the Net40 project, type:
PM> Install-Package Newtonsoft.Json -ProjectName Net40

To update the Newtonsoft.Json package in all projects to its latest version, type:
PM> Update-Package Newtonsoft.Json

To update the Newtonsoft.Json package in all projects to Version 5.0.3 (this will fail for projects with a newer version already installed), type:
PM> Update-Package Newtonsoft.Json -Version 5.0.3

To update the Newtonsoft.Json package in the Net40 project to the latest version, type:
PM> Update-Package Newtonsoft.Json -ProjectName Net40

To update all packages in all projects to the latest available version with the same major and minor version components, type:
PM> Update-Package -Safe

To uninstall the Newtonsoft.Json package from the default project, type:
PM> Uninstall-Package Newtonsoft.Json

To uninstall the Newtonsoft.Json package from the Net40 project, type:
PM> Uninstall-Package Newtonsoft.Json -ProjectName Net40

To list all packages in the online package source matching the Newtonsoft.Json search filter, type:
PM> Get-Package -ListAvailable -Filter Newtonsoft.Json

To list all installed packages having an update in the online package source, type:
PM> Get-Package -Updates

Installed packages can add their own commands. An example of such a package is EntityFramework. To get a list of all commands for a package, type Get-Help PackageName, replacing PackageName with the actual name of the package after it is installed, for example:
PM> Get-Help EntityFramework

Summary

This article has covered various NuGet features in detail. We started out with package versioning support and the package update process.
We then moved on to built-in support for different target platforms. A large part of the article was dedicated to the usage of NuGet in conjunction with source control systems. We have seen how to avoid adding packages to source control and still have them automatically restored when they are required during build. We concluded the article with a quick overview of the console and the commands that give access to features not available using the graphical user interface. This concludes our tour of NuGet from the package consumer point of view. In the following article, we will take on the role of a package creator and look at the basics of creating and publishing our own NuGet package. Resources for Article: Further resources on this subject: Lucene.NET: Optimizing and merging index segments [Article] Creating your first collection (Simple) [Article] Material nodes in Cycles [Article]
The Business Layer (Java EE 7 First Look)

Packt
13 Nov 2013
7 min read
Enterprise JavaBeans 3.2

The Enterprise JavaBeans 3.2 Specification was developed under JSR 345. This section just gives you an overview of the improvements in the API. The complete specification document (for more information) can be downloaded from http://jcp.org/aboutJava/communityprocess/final/jsr345/index.html.

The business layer of an application is the part of the application that is located between the presentation layer and the data access layer. The following diagram presents a simplified Java EE architecture. As you can see, the business layer acts as a bridge between the data access and the presentation layers. It implements the business logic of the application. To do so, it can use specifications such as Bean Validation for data validation, CDI for context and dependency injection, interceptors to intercept processing, and so on. As this layer can be located anywhere in the network and is expected to serve more than one user, it needs a minimum of non-functional services such as security, transaction, concurrency, and remote access management. With EJBs, the Java EE platform gives developers the possibility to implement this layer without worrying about the different non-functional services that are necessarily required.

In general, this specification does not introduce any major new feature. It continues the work started by the last version, making the implementation of certain features that became obsolete optional and adding slight modifications to others.

Pruning some features

After the pruning process introduced by Java EE 6 with the aim of removing obsolete features, support for some features has been made optional in the Java EE 7 platform, and their description was moved to another document called EJB 3.2 Optional Features for Evaluation.
The features involved in this movement are:

- EJB 2.1 and earlier Entity Bean Component Contract for Container-Managed Persistence
- EJB 2.1 and earlier Entity Bean Component Contract for Bean-Managed Persistence
- Client View of EJB 2.1 and earlier Entity Bean
- EJB QL: Query Language for Container-Managed Persistence Query Methods
- JAX-RPC-based Web Service Endpoints
- JAX-RPC Web Service Client View

The latest improvements in EJB 3.2

For those who have had to use EJB 3.0 and EJB 3.1, you will notice that EJB 3.2 has brought, in fact, only minor changes to the specification. However, some improvements cannot be overlooked, since they improve the testability of applications, simplify the development of session beans and Message-Driven Beans, and improve control over transaction management and the passivation of stateful beans.

Session bean enhancement

A session bean is a type of EJB that allows us to implement business logic accessible to local, remote, or Web Service Client Views. There are three types of session beans: stateless for processing without state, stateful for processes that require the preservation of state between different method calls, and singleton for sharing a single instance of an object between different clients. The following code shows an example of a stateless session bean that saves an entity in the database:

@Stateless
public class ExampleOfSessionBean {
    @PersistenceContext
    EntityManager em;

    public void persistEntity(Object entity){
        em.persist(entity);
    }
}

Talking about improvements to session beans, we first note two changes in stateful session beans: the ability to execute life-cycle callback interceptor methods in a user-defined transaction context and the ability to manually disable passivation of stateful session beans. It is possible to define a process that must be executed according to the lifecycle of an EJB bean (post-construct, pre-destroy).
Due to the @TransactionAttribute annotation, you can perform processes related to the database during these phases and control how they impact your system. The following code retrieves an entity after the bean is initialized and ensures that all changes made to the persistence context are sent to the database at the time of destruction of the bean. As you can see in the following code, the TransactionAttributeType of the init() method is NOT_SUPPORTED; this means that the retrieved entity will not be included in the persistence context and any changes made to it will not be saved in the database:

@Stateful
public class StatefulBeanNewFeatures {
    @PersistenceContext(type = PersistenceContextType.EXTENDED)
    EntityManager em;

    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    @PostConstruct
    public void init() {
        entity = em.find(...);
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    @PreDestroy
    public void destroy() {
        em.flush();
    }
}

The following code demonstrates how to control passivation of the stateful bean. Usually, session beans are removed from memory to be stored on the disk after a certain time of inactivity. This process requires data to be serialized, but during serialization all transient variables are skipped and restored to the default value of their data type, which is null for objects, zero for int, and so on. To prevent the loss of this type of data, you can simply disable the passivation of stateful session beans by passing the false value to the passivationCapable attribute of the @Stateful annotation:

@Stateful(passivationCapable = false)
public class StatefulBeanNewFeatures {
    //...
}

For the sake of simplicity, EJB 3.2 has relaxed the rules for defining the default local or remote business interface of a session bean. The following code shows how a simple interface can be considered as local or remote depending on the case:

//In this example, yellow and green are local interfaces
public interface yellow { ... }
public interface green { ...
}
@Stateless
public class Color implements yellow, green { ... }

//In this example, yellow and green are local interfaces
public interface yellow { ... }
public interface green { ... }
@Local
@Stateless
public class Color implements yellow, green { ... }

//In this example, yellow and green are remote interfaces
public interface yellow { ... }
public interface green { ... }
@Remote
@Stateless
public class Color implements yellow, green { ... }

//In this example, only the yellow interface is exposed as a remote interface
@Remote
public interface yellow { ... }
public interface green { ... }
@Stateless
public class Color implements yellow, green { ... }

//In this example, only the yellow interface is exposed as a remote interface
public interface yellow { ... }
public interface green { ... }
@Remote(yellow.class)
@Stateless
public class Color implements yellow, green { ... }

EJB Lite improvements

Before EJB 3.1, the implementation of a Java EE application required the use of a full Java EE server with more than twenty specifications. This could be heavy enough for applications that only need some of those specifications (as if you were asked to take a hammer to kill a fly). To adapt Java EE to this situation, the JCP (Java Community Process) introduced the concept of profiles and EJB Lite. Specifically, EJB Lite is a subset of EJBs, grouping the essential capabilities for local transactional and secured processing. With this concept, it has become possible to unit test an EJB application without using a Java EE server, and it is also possible to use EJBs in web applications or Java SE effectively. In addition to the features already present in EJB 3.1, the EJB 3.2 Specification has added support for local asynchronous session bean invocations and a non-persistent EJB Timer Service. This enriches the embeddable EJBContainer and web profiles, and augments the number of testable features in an embeddable EJBContainer.
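As a sketch of what this testability means in practice, the following snippet bootstraps an embeddable container from plain Java SE. The class name, module name, and JNDI path are illustrative placeholders, and a Java EE 7 embeddable container implementation must be on the classpath:

```java
public class EmbeddableContainerSketch {
    public static void main(String[] args) throws Exception {
        // Starts an embeddable EJB container without a full Java EE server
        javax.ejb.embeddable.EJBContainer container =
                javax.ejb.embeddable.EJBContainer.createEJBContainer();
        try {
            // "moduleName" and the bean name are placeholders for your module
            Object bean = container.getContext()
                    .lookup("java:global/moduleName/EjbLiteSessionBean");
            // ... invoke business methods on the bean here ...
        } finally {
            container.close();
        }
    }
}
```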
The following code shows an EJB packaged in a WAR archive that contains two methods. The asynchronousMethod() is an asynchronous method that allows you to compare the time gap between the end of a method call on the client side and the end of execution of the method on the server side. The nonPersistentEJBTimerService() method demonstrates how to define a non-persistent EJB Timer Service that will be executed every minute while the hour is one o'clock:

@Stateless
public class EjbLiteSessionBean {
    @Asynchronous
    public void asynchronousMethod() {
        try {
            System.out.println("EjbLiteSessionBean - start : " + new Date());
            Thread.sleep(1000*10);
            System.out.println("EjbLiteSessionBean - end : " + new Date());
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    @Schedule(persistent = false, minute = "*", hour = "1")
    public void nonPersistentEJBTimerService() {
        System.out.println("nonPersistentEJBTimerService method executed");
    }
}

Changes made to the TimerService API

The EJB 3.2 Specification enhanced the TimerService API with a new method called getAllTimers(). This method gives you the ability to access all active timers in an EJB module.
The following code demonstrates how to create different types of timers, access their information, and cancel them; it makes use of the getAllTimers() method:

@Stateless
public class ChangesInTimerAPI implements ChangesInTimerAPILocal {
    @Resource
    TimerService timerService;

    public void createTimer() {
        //create a programmatic timer
        long initialDuration = 1000*5;
        long intervalDuration = 1000*60;
        String timerInfo = "PROGRAMMATIC TIMER";
        timerService.createTimer(initialDuration, intervalDuration, timerInfo);
    }

    @Timeout
    public void timerMethodForProgrammaticTimer() {
        System.out.println("ChangesInTimerAPI - programmatic timer : " + new Date());
    }

    @Schedule(info = "AUTOMATIC TIMER", hour = "*", minute = "*")
    public void automaticTimer() {
        System.out.println("ChangesInTimerAPI - automatic timer : " + new Date());
    }

    public void getListOfAllTimers() {
        Collection alltimers = timerService.getAllTimers();
        for (Timer timer : alltimers) {
            System.out.println("The next time out : " + timer.getNextTimeout() + ", " + " timer info : " + timer.getInfo());
            timer.cancel();
        }
    }
}

In addition to this method, the specification has removed the restriction that required javax.ejb.Timer and javax.ejb.TimerHandle references to be used only inside a bean.
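To illustrate what this relaxation allows, the following sketch stores a TimerHandle outside any bean and later re-obtains the Timer from it. The class name is ours, not part of the specification, and the code assumes a Java EE 7 container and a persistent timer, since non-persistent timers do not support getHandle():

```java
public class TimerHandleHolder implements java.io.Serializable {
    private javax.ejb.TimerHandle handle;

    public void remember(javax.ejb.Timer timer) {
        // getHandle() is only valid for persistent timers
        this.handle = timer.getHandle();
    }

    public java.util.Date nextTimeout() {
        // a Timer reference used outside a bean, which EJB 3.2 now permits
        return handle.getTimer().getNextTimeout();
    }
}
```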
Adding Connectors in Bonita

Packt
11 Nov 2013
7 min read
(For more resources related to this topic, see here.)

Bonita connectors

Bonita connectors are used to set variables or some other parameters inside Bonita. They can also be used to start a process or execute a step. These connectors let the user work with the different parameters of the Bonita workflow. Another kind of connector is used to integrate with third-party tools. Most of the Bonita connectors are related to the documents and comments at a particular step. Although these may be useful in some cases, in the majority of cases we will not find much use for them. The most useful ones are getting the users of a step, executing a step, starting a new process, and setting variables.

Click on any step on which you want to define the connector and click on Add.... Here, we will look at the start an instance connector of Bonita. Give a name to this connector and click on Next. Here we have to fill in the name of the process that we want to invoke. We also have an option to specify different versions of the process. If we leave this blank, it will pick up the latest version. Next, we can specify the process variables that need to be copied from one pool to the other.

Start an instance connector in Bonita Studio

In the previous example, the process variables that we specify will be copied over to the target pool. We have to make sure that the target pool has the process variables mentioned in this connector. Make sure that you mention the name of the variable in the first column without the curly braces. If you select the names from the drop-down menu, make sure you remove the $ and the {} when filling in the name. The value field can be filled with the actual process variable. We can also use the set variable connector to set a value to a variable, either a process variable or a step variable. Here, we have two parameters: one is the variable whose value we have to set and the other parameter is the actual value of the variable.
Note that this value may be a Groovy expression, too. Hence, it is similar to writing a Groovy script to assign a value to a variable.

Another type of connector is the one to start or finish a step. In this connector, all we have to do is mention the name of the step we want to start or stop. Similarly, there is another connector to execute a step. Executing will run all the start and end connectors of a particular step and then finish it. These connectors might be useful in cases where some step may be waiting for another step, and at the end of the current step we might execute that step or mark it finished.

We also have connectors to get the users from the workflow. There are connectors to find out the initiator of a process and the step submitter. Another useful connector is to get a user based on the username. This returns the User class that Bonita uses to implement the functionality of a user in the workflow. Select the connector to get a user from a username. Enter the username and click on Next. Here, we get the output of the connector and we can decide to save the output in a particular pool or step variable.

Saving the connector output in a variable in Bonita

The User class has methods to retrieve data, such as the e-mail, first name, last name, metadata, and password from the user.

The e-mail connector

We have a connector in the messaging group to send an e-mail. Now, we might use this connector for a variety of purposes: to send information about the workflow to an external e-mail, to send a notification to the person performing the task that he/she has some pending items in his/her inbox, and so on. We have to configure the e-mail connector on various parameters. In our TicketingWorkflow, let us send an e-mail to the person in whose name the tickets are booked. He/she enters his/her e-mail address in the Payment step of the workflow.
Hence, let us send an e-mail at the end of the Payment step to the e-mail address with which the tickets have been booked. For this, let us configure the e-mail connector:

Click on the Payment step of the workflow. Click on the Connectors tab to add a connector. Select the connector as a medium to send an e-mail. Then name the connector as SendEmail and make sure that this connector is at the finish event of the step.

In the next step, we are required to enter the configuration details of the SMTP server we will use for sending the e-mail. By default, it is set to the Gmail configuration with the host as smtp.gmail.com and the port as 465. Let us stick to the default option and send an e-mail from a Gmail hosted server. Leave the Security option as it is, but enter your credentials in the Authentication section. Here, you should enter your full e-mail address, not just your username. You can also use your own domain e-mail address if it is hosted on a Gmail server.

Next, we define the parameters of the e-mail notification that has to be sent. After entering the From address as the ticketing admin address or some similar address, enter the To address as the variable in which we have saved the e-mail address: email.

In the title field, we have to specify the subject of the e-mail. We have already seen that we can use Java inside the Groovy editor. Here, we will have a look at a simple piece of Java code that is executed inside the editor. Enter the following code in the Groovy editor:

import java.text.SimpleDateFormat;
return "Flight ticket from " + from + " to " + to + " on " + new SimpleDateFormat("MM-dd-yyyy").format(departOn);

The overview of the flight details is mentioned in the subject of the e-mail. We know that the departOn variable is a Date object. For printing the date, we have to convert it into a String by using the SimpleDateFormat class.
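Outside of Bonita, the same subject-building logic can be tried as plain Java. The class below is only a standalone sketch; SubjectDemo and its method are ours, while from, to, and departOn mirror the process variables:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class SubjectDemo {
    // Builds the same subject line as the Groovy expression in the connector
    static String subject(String from, String to, Date departOn) {
        return "Flight ticket from " + from + " to " + to + " on "
                + new SimpleDateFormat("MM-dd-yyyy").format(departOn);
    }

    public static void main(String[] args) throws Exception {
        Date departOn = new SimpleDateFormat("MM-dd-yyyy").parse("11-20-2013");
        // prints: Flight ticket from London to Paris on 11-20-2013
        System.out.println(subject("London", "Paris", departOn));
    }
}
```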
Below the Title field, make sure that the e-mail body is in HTML and not plain text. We can insert Groovy scripts in between the text, which will be substituted with the actual variable values when the e-mail is sent. Write the following in the body of the e-mail:

Hi ${passenger1},
Your ${from} to ${to} flight is confirmed. The flight details are given below:
Date: ${import java.text.SimpleDateFormat; return new SimpleDateFormat("MM-dd-yyyy").format(departOn);}
Departure: ${departure}
Arrival: ${arrival}
Duration: ${duration}
Price: ${price}
Travelers: ${passenger1} ${passenger2} ${passenger3}
Payment Details:
Card Holder - ${cardHolder}
Card Number - ${cardNumber}
Thank you for booking with TicketingWorkflow!

Configuring the e-mail connector

Clicking on Next will get you to the advanced options. Generally, it's not really required to configure these options, and we can make do with the default settings.

Summary

This article looked at the various connector integration options available in Bonita Studio. It showed how connectors can be used to fetch data into the workflow and how to export data, too. We had a close look at the Bonita inbuilt connectors and e-mail connectors.

Resources for Article: Further resources on this subject: Oracle BPM Suite 11gR1: Creating a BPM Application [Article] Managing Oracle Business Intelligence [Article] Setting Up Oracle Order Management [Article]
Building Ladder Diagram programs (Simple)

Packt
31 Oct 2013
7 min read
(For more resources related to this topic, see here.)

There are several editions of RSLogix 5000 available today, which are similar to Microsoft Windows' home and professional versions. The more "basic" (less expensive) editions of RSLogix 5000 have many features disabled. For example, only the full and professional editions, which are more expensive, support the editing of Function Block Diagrams, Graphical Structured Text, and Sequential Function Chart. In my experience, Ladder Logic is the most commonly used language. Refer to http://www.rockwellautomation.com/rockwellsoftware/design/rslogix5000/orderinginfo.html for more on this.

Getting ready

You will need to have added the cards and tags from the previous recipes to complete this exercise.

How to do it...

Open Controller Organizer and expand the leaf Tasks | Main Tasks | Main Program. Right-click on Main Program and select New Routine as shown in the following screenshot:

Configure a new Ladder Logic program by setting the following values:

Name: VALVES
Description: Valve Control Program
Type: Ladder Diagram

For our newly created routine to be executed with each scan of the PLC, we will need to add a reference to it in MainRoutine that is executed with each scan of the MainTask task. Double-click on our MainRoutine program to display the Ladder Logic contained within it. Next, we will add a Jump To Subroutine (JSR) element that will add our newly added Ladder Diagram program to the main task and ensure that it is executed with each scan. Above the Ladder Diagram, there are tab buttons that organize Ladder Elements into Element Groups. Click on the left and right arrows that are on the left side of Element Groups and find the one labeled Program Control. After clicking on the Program Control element group, you will see the JSR element. Click on the JSR element to add it to the current Ladder Logic Rung in MainRoutine.
Next, we will make some modifications to the JSR element so that it calls our newly added Ladder Diagram. Click on the Routine Name parameter of the JSR element and select the VALVES routine from the list as shown in the following screenshot:

There are two additional parameters that we are not using as part of the JSR element, which can be removed. Select the Input Par parameter and then click on the Remove Parameter icon in the toolbar above the Ladder Diagram. This icon looks as shown in the following screenshot: Repeat this process for the other optional parameter: Return Par.

Now that we have ensured that our newly added Ladder Logic routine will be scanned, we can add the elements to our Ladder Logic routine. Double-click on our VALVES routine in the Controller Organizer tab under the MainTask task. Find the Timer/Counter element group and click on the TON (Timer On Delay) element to add it to our Ladder Diagram.

Now we will create the Timer object. Enter the name in the Timer field as FC1001_TON. Right-click on the TIMER object tag name we just entered and select New "FC1001_TON" (or press Ctrl + W). In the New Tag form that appears, enter in the description FAULT TIMER FOR FLOW CONTROL VALVE 1001 and click on OK to create the new TIMER tag. Next, we will configure our TON element to count to five seconds (5,000 milliseconds). Double-click on the Preset parameter and enter in the value 5000, which is in milliseconds.

Now, we will need to add the condition that will start the TIMER object. We will be adding a Less Than (LES) element from the Compare element group. Be sure to add the element to the same Ladder Logic Rung as the Timer On Delay element. The LES element will compare the valve position with the valve set point and return true if the position value is less than the set point.
So set the two parameters of the LES element to the following:

FC1001_PV
FC1001_SP

Now, we will add a second Ladder Logic Rung where a latched fault alarm is triggered after the TIMER reaches five seconds. Right-click under the first Ladder Logic Rung and select Add Rung (or press Ctrl + R). Find the Favorites element group and select the Examine On icon as shown in the following screenshot: Click on the ? above the Examine On element and select the TIMER object's Done property, FC1001_TON.DN, as shown in the following screenshot. Now, once the valve position does not match the set point and the TIMER has completed its count to five seconds, this Ladder Logic Rung will be activated as shown in the following screenshot:

Next, we will add an Output Latched element to this Ladder Logic Rung. Click on the Output Latched element from the Favorites element group with our new rung selected. Click on the ? above the Output Latched element and type in the name of a new base tag we are going to add as FC1001_FLT. Press Enter or click on the element to complete the text entry. Right-click on FC1001_FLT and select New "FC1001_FLT" (or press Ctrl + W). Set the following values in the New Tag form that appears:

Description: FLOW CONTROL VALVE 1001 POSITION FAULT
Type: Base
Scope: FirstController
Data Type: Bool

Click on OK to add the new tag. Our new tag will look like the following screenshot:

It is considered bad practice to latch a bit without having the code to unlatch the bit directly below it. Create a new BOOL type tag called ALARM_RESET with the following properties:

Name: ALARM_RESET
Description: RESET ALARMS
Type: Base
Scope: FirstController
Data Type: BOOL

Click on OK to add the new tag. Then add the following coil and OTU to unlatch the fault when the master alarm reset is triggered. Finally, we will add a comment so that we can see what our Ladder Diagram is doing at a glance. Right-click in the far-right area of the first Ladder Logic Rung (where the 0 is) and select Edit Rung Comment (Ctrl + D).
Enter the following helpful comment:

TRIGGER FAULT IF THE SETPOINT OF THE FLOW CONTROL VALVE 1001 IS NOT EQUAL TO THE VALVE POSITION

How it works...

We have created our first Ladder Logic Diagram and linked it to the MainTask task. Now, each time that the task is scanned (executed), our Ladder Logic routine will be run from left to right and top to bottom.

There's more...

More information on Ladder Logic can be found in the Rockwell publication Logix5000 Controllers Ladder Diagram available at http://literature.rockwellautomation.com/idc/groups/literature/documents/pm/1756-pm008_-en-p.pdf. Ladder Logic is the most commonly used programming language in RSLogix 5000. This recipe describes a few more helpful hints to get you started.

Understanding Ladder Rung statuses

Did you notice the vertical output eeeeeee on the left-hand side of your Ladder Logic Rung? This indicates that an error is present in your Ladder Logic code. After making changes to your controller project, it is a good practice to Verify your project using the drop-down menu item Logic | Verify | Controller. Once Verify has been run, you will see the error pane appear with any errors that it has detected.

Element help

You can easily get detailed documentation on Ladder Logic Elements, Function Block Diagram Elements, Structured Text Code, and other element types by selecting the object and pressing F1.

Copying and pasting Ladder Logic

Ladder Logic Rungs and elements can be copied and pasted within your ladder routine. Simply select the rung or element you wish to copy and press Ctrl + C. Then, to paste the rung or element, select the location where you would like to paste it and press Ctrl + V.

Summary

This article took a first look at creating new routines using ladder logic diagrams. The reader was introduced to the concept of Tasks and also learned how to link routines.
In this article, we learned how to navigate the ladder elements that are available, how to find help on each element, and how to create a simple alarm timer using ladder logic. Resources for Article: Further resources on this subject: DirectX graphics diagnostic [Article] Flash 10 Multiplayer Game: Game Interface Design [Article] HTML5 Games Development: Using Local Storage to Store Game Data [Article]
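For readers who want to reason about this recipe's two rungs away from RSLogix, they can be approximated as a rough plain-Java model. The class and its scan() method are our own sketch (tag names mirror the recipe's tags and 5000 ms is the TON preset), not anything generated by the tool:

```java
public class ValveFaultRung {
    static final long PRESET_MS = 5000;   // TON preset
    long accumMs = 0;                     // TON accumulator
    boolean fault = false;                // FC1001_FLT, a latched bit

    // One scan of the routine; deltaMs is the time elapsed since the last scan.
    public void scan(double pv, double sp, boolean alarmReset, long deltaMs) {
        if (pv < sp) {                            // LES: FC1001_PV < FC1001_SP
            accumMs = Math.min(PRESET_MS, accumMs + deltaMs);
        } else {
            accumMs = 0;                          // TON resets when its rung is false
        }
        if (accumMs >= PRESET_MS) {               // FC1001_TON.DN is set
            fault = true;                         // OTL latches the fault
        }
        if (alarmReset) {
            fault = false;                        // OTU clears it on ALARM_RESET
        }
    }
}
```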
Mocking static methods (Simple)

Packt
30 Oct 2013
7 min read
(For more resources related to this topic, see here.)

Getting ready

The use of static methods is usually considered a bad object-oriented programming practice, but if we end up in a project that uses a pattern such as active record (see http://en.wikipedia.org/wiki/Active_record_pattern), we will end up having a lot of static methods. In such situations, we will need to write some unit tests, and PowerMock can be quite handy. Start your favorite IDE (which we set up in the Getting and installing PowerMock (Simple) recipe), and let's fire away.

How to do it...

We will start where we left off. In the EmployeeService.java file, we need to implement the getEmployeeCount method; currently it throws an instance of UnsupportedOperationException. Let's implement the method in the EmployeeService class; the updated classes are as follows:

/**
 * This class is responsible for handling the CRUD
 * operations on the Employee objects.
 * @author Deep Shah
 */
public class EmployeeService {
    /**
     * This method is responsible for returning
     * the count of employees in the system.
     * It does so by calling the
     * static count method on the Employee class.
     * @return Total number of employees in the system.
     */
    public int getEmployeeCount() {
        return Employee.count();
    }
}

/**
 * This is a model class that will hold
 * properties specific to an employee in the system.
 * @author Deep Shah
 */
public class Employee {
    /**
     * The method that is responsible for returning the
     * count of employees in the system.
     * @return The total number of employees in the system.
     * Currently this method throws UnsupportedOperationException.
     */
    public static int count() {
        throw new UnsupportedOperationException();
    }
}

The getEmployeeCount method of EmployeeService calls the static method count of the Employee class. This method in turn throws an instance of UnsupportedOperationException. To write a unit test of the getEmployeeCount method of EmployeeService, we will need to mock the static method count of the Employee class.
Let's create a file called EmployeeServiceTest.java in the test directory. This class is as follows:

/**
 * The class that holds all unit tests for
 * the EmployeeService class.
 * @author Deep Shah
 */
@RunWith(PowerMockRunner.class)
@PrepareForTest(Employee.class)
public class EmployeeServiceTest {

    @Test
    public void shouldReturnTheCountOfEmployeesUsingTheDomainClass() {
        PowerMockito.mockStatic(Employee.class);
        PowerMockito.when(Employee.count()).thenReturn(900);

        EmployeeService employeeService = new EmployeeService();
        Assert.assertEquals(900, employeeService.getEmployeeCount());
    }
}

If we run the preceding test, it passes. The important things to notice are the two annotations (@RunWith and @PrepareForTest) at the top of the class, and the call to the PowerMockito.mockStatic method.

The @RunWith(PowerMockRunner.class) statement tells JUnit to execute the test using PowerMockRunner.

The @PrepareForTest(Employee.class) statement tells PowerMock to prepare the Employee class for tests. This annotation is required when we want to mock final classes or classes with final, private, static, or native methods.

The PowerMockito.mockStatic(Employee.class) statement tells PowerMock that we want to mock all the static methods of the Employee class.

The next statements in the code are pretty standard, and we have looked at them earlier in the Saying Hello World! (Simple) recipe. We are basically setting up the static count method of the Employee class to return 900. Finally, we are asserting that when the getEmployeeCount method on the instance of EmployeeService is invoked, we do get 900 back.

Let's look at one more example of mocking a static method; but this time, let's mock a static method that returns void. We want to add another method to the EmployeeService class that will increment the salary of all employees (wouldn't we love to have such a method in reality?).
The updated code is as follows:

/**
 * This method is responsible for incrementing the salary
 * of all employees in the system by the given percentage.
 * It does this by calling the static giveIncrementOf method
 * on the Employee class.
 * @param percentage the percentage value by which
 * salaries will be increased
 * @return true if the increment was successful,
 * false if the increment failed because of some exception.
 */
public boolean giveIncrementToAllEmployeesOf(int percentage) {
    try {
        Employee.giveIncrementOf(percentage);
        return true;
    } catch (Exception e) {
        return false;
    }
}

The static method Employee.giveIncrementOf is as follows:

/**
 * The method that is responsible for incrementing the
 * salaries of all employees by the given percentage.
 * @param percentage the percentage value by which
 * salaries will be increased
 * Currently this method throws UnsupportedOperationException.
 */
public static void giveIncrementOf(int percentage) {
    throw new UnsupportedOperationException();
}

The earlier syntax would not work for mocking a void static method.
The test case that mocks this method would look like the following:

@RunWith(PowerMockRunner.class)
@PrepareForTest(Employee.class)
public class EmployeeServiceTest {

    @Test
    public void shouldReturnTrueWhenIncrementOf10PercentageIsGivenSuccessfully() {
        PowerMockito.mockStatic(Employee.class);
        PowerMockito.doNothing().when(Employee.class);
        Employee.giveIncrementOf(10);

        EmployeeService employeeService = new EmployeeService();
        Assert.assertTrue(employeeService.giveIncrementToAllEmployeesOf(10));
    }

    @Test
    public void shouldReturnFalseWhenIncrementOf10PercentageIsNotGivenSuccessfully() {
        PowerMockito.mockStatic(Employee.class);
        PowerMockito.doThrow(new IllegalStateException()).when(Employee.class);
        Employee.giveIncrementOf(10);

        EmployeeService employeeService = new EmployeeService();
        Assert.assertFalse(employeeService.giveIncrementToAllEmployeesOf(10));
    }
}

Notice that we still need the two annotations @RunWith and @PrepareForTest, and we still need to inform PowerMock that we want to mock the static methods of the Employee class. Notice the syntax for PowerMockito.doNothing and PowerMockito.doThrow:

The PowerMockito.doNothing method tells PowerMock to literally do nothing when a certain method is called. The statement after the doNothing call sets up the mocked method; in this case it's the Employee.giveIncrementOf method. This essentially means that PowerMock will do nothing when the Employee.giveIncrementOf method is called.

The PowerMockito.doThrow method tells PowerMock to throw an exception when a certain method is called. The statement after the doThrow call tells PowerMock about the method that should throw an exception; in this case, it is again Employee.giveIncrementOf. Hence, when the Employee.giveIncrementOf method is called, PowerMock will throw an instance of IllegalStateException.

How it works...

PowerMock uses a custom class loader and bytecode manipulation to enable mocking of static methods.
It does this with the help of the @RunWith and @PrepareForTest annotations.

The rule of thumb is: whenever we want to mock a method that returns a non-void value, we should use the PowerMockito.when().thenReturn() syntax. It's the same syntax for instance methods as well as static methods. But for methods that return void, the preceding syntax cannot work; hence, we have to use PowerMockito.doNothing and PowerMockito.doThrow. This syntax for static methods looks a bit like the record-playback style.

On a mocked instance created using PowerMock, we can choose to return canned values only for a few methods; PowerMock will provide default values for all the other methods. This means that if we did not provide any canned value for a method that returns an int value, PowerMock will mock such a method and return 0 (since 0 is the default value for the int datatype) when it is invoked.

There's more...

The syntax of PowerMockito.doNothing and PowerMockito.doThrow can be used on instance methods as well.

.doNothing and .doThrow on instance methods

The syntax on instance methods is simpler compared to the one used for static methods. Let's say we want to mock the instance method save on the Employee class. The save method returns void; hence, we have to use the doNothing and doThrow syntax. The test code to achieve this is as follows:

/**
 * The class that holds all unit tests for
 * the Employee class.
 * @author Deep Shah
 */
public class EmployeeTest {

    @Test
    public void shouldNotDoAnythingIfEmployeeWasSaved() {
        Employee employee = PowerMockito.mock(Employee.class);
        PowerMockito.doNothing().when(employee).save();

        try {
            employee.save();
        } catch (Exception e) {
            Assert.fail("Should not have thrown an exception");
        }
    }

    @Test(expected = IllegalStateException.class)
    public void shouldThrowAnExceptionIfEmployeeWasNotSaved() {
        Employee employee = PowerMockito.mock(Employee.class);
        PowerMockito.doThrow(new IllegalStateException()).when(employee).save();

        employee.save();
    }
}

To inform PowerMock about the method to mock, we just have to invoke it on the return value of the when method. The line PowerMockito.doNothing().when(employee).save() essentially means: do nothing when the save method is invoked on the mocked Employee instance. Similarly, PowerMockito.doThrow(new IllegalStateException()).when(employee).save() means: throw IllegalStateException when the save method is invoked on the mocked Employee instance. Notice that the syntax is more fluent when we want to mock void instance methods.

Summary

In this article, we saw how easily we can mock static methods.

Resources for Article:

Further resources on this subject:

Important features of Mockito [Article]
Python Testing: Mock Objects [Article]
Easily Writing SQL Queries with Spring Python [Article]
Multiserver Installation

Packt
29 Oct 2013
7 min read
(For more resources related to this topic, see here.)

The prerequisites for Zimbra

Let us dive into the prerequisites for Zimbra:

Zimbra supports only 64-bit LTS versions of Ubuntu, release 10.04 and above. If you would like to use a 32-bit version, you should use Ubuntu 8.04.x LTS with Zimbra 7.2.3.

A clean and freshly installed system is preferred for Zimbra; it requires a dedicated system, and there is no need to install components such as Apache and MySQL since the Zimbra server contains all the components it needs. Note that installing Zimbra alongside another service (such as a web server) on the same server can cause operational issues.

The dependencies (libperl5.14, libgmp3c2, build-essential, sqlite3, sysstat, and ntp) should be installed beforehand.

Configure a fixed IP address on the server.

Have a domain name and a well-configured DNS (A and MX entries) that points to the server.

The system clocks should be synced on all servers.

Configure the file /etc/resolv.conf on all servers to point at the server on which we installed bind (it can be installed on any Zimbra server or on a separate server). We will explain this point in detail later.

Preparing the environment

Before starting the Zimbra installation process, we should prepare the environment. In the first part of this section, we will see the different possible configurations, and in the second part, we will present the assumptions needed to apply the chosen configuration.

Multiserver configuration examples

One of the greatest advantages of Zimbra is its scalability; we can deploy it for a small business with a few mail accounts as well as for a huge organization with thousands of mail accounts. There are many possible configuration options; the following are the most commonly used:

Small configuration: All Zimbra components are installed on only one server.

Medium configuration: Here, LDAP and the message store are installed on one server, and Zimbra MTA on a separate server.
Note here that we can use more Zimbra MTA servers so that we can scale more easily for large incoming or outgoing e-mail volumes.

Large configuration: In this case, LDAP will be installed on a dedicated server and we will have multiple mailbox and MTA servers, so we can scale more easily for a large number of users.

Very large configuration: The difference between this configuration and the large one is the existence of an additional LDAP server, so we will have a master LDAP server and its replica.

We choose the medium configuration; so, we will install LDAP and the mailbox on one server, and the MTA on the other server. Install the different servers in the following order (for the medium configuration, steps 1 and 2 are combined into a single step):

1. First of all, install and configure the LDAP server.
2. Then, install and configure the Zimbra mailbox servers.
3. Finally, install the Zimbra MTA servers and finish the whole installation configuration.

New installations of Zimbra limit spam/ham training to the first installed MTA. If you uninstall or move this MTA, you should enable spam/ham training on another MTA, as one host should have this enabled to run zmtrainsa --cleanup. To do this, execute the following command:

zmlocalconfig -e zmtrainsa_cleanup_host=TRUE

Assumptions

In this article, we will use some specific information as input in the Zimbra installation process, which, in most cases, will be different for each user. Therefore, we will note some of the most frequently used values in this section. Remember that you should specify your own values rather than using the arbitrary values provided here.
The following is the list of assumptions used:

OS version: ubuntu-12.04.2-server-amd64
Zimbra version: zcs-8.0.3_GA_5664.UBUNTU12_64.20130305090204
MTA server name: mta
MTA hostname: mta.zimbra-essentials.com
Internet domain: zimbra-essentials.com
MTA server IP address: 172.16.126.141
MTA server IP subnet mask: 255.255.255.0
MTA server IP gateway: 172.16.126.1
Internal DNS server: 172.16.126.11
External DNS server: 8.8.8.8
MTA admin ID: abdelmonam
MTA admin password: Z!mbra@dm1n
Zimbra admin password: zimbrabook
LDAP server name: ldap
LDAP hostname: ldap.zimbra-essentials.com
LDAP server IP address: 172.16.126.140
LDAP server IP subnet mask: 255.255.255.0
LDAP server IP gateway: 172.16.126.1
Internal DNS server: 172.16.126.11
External DNS server: 8.8.8.8
LDAP admin ID: abdelmonam
LDAP admin password: Z!mbra@dm1n

To be able to follow the steps described in the next sections, especially each time we need to perform a configuration, you should know how to use the vi editor. If not, you should develop your skills with the vi editor or use another editor instead. You can find good basic training for the vi editor at http://www.cs.colostate.edu/helpdocs/vi.html

System requirements

For the various system requirements, please refer to the following link:

http://www.zimbra.com/docs/os/8.0.0/multi_server_install/wwhelp/wwhimpl/common/html/wwhelp.htm#href=ZCS_Multiserver_Open_8.0.System_Requirements_for_VMware_Zimbra_Collaboration_Server_8.0.html&single=true

If you are using another version of Zimbra, please check the correct requirements on the Zimbra website.

Ubuntu server installation

First of all, choose the appropriate language. Choose Install Ubuntu Server and then press Enter.
When the installation prompts you to provide a hostname, configure only a one-word hostname; in the Assumptions section, we chose ldap for the LDAP and mailstore server and mta for the MTA server. Do not give the fully qualified domain name (for example, mta.zimbra-essentials.com). On the next screen, which asks for the domain name, assign it zimbra-essentials.com (without the hostname).

The hard disk setup is simple if you are using a single drive; however, in the case of a server, this is not the best way to do things. There are a lot of options for partitioning your drives. In our case, we just make a small partition (2x RAM) for swapping, and what remains is used for the whole system. Others may recommend separate partitions for the mailstore, the system, and so on. Feel free to use the recommendation you want depending on your IT architecture; use your own judgment here or ask your IT manager.

After finishing the partitioning task, you will be asked to enter a username and password; you can choose anything you want except admin and zimbra.

When asked if you want to encrypt the home directory, select No and then press Enter.

Press Enter to accept an empty entry for the HTTP proxy.

Choose Install security updates automatically and then press Enter.

On the Software Selection screen, you must select the DNS Server and OpenSSH Server choices for installation; no other options. This will enable remote administration (SSH) and set up bind9 for a split DNS. You can install bind9 on only one server, which is what we've done in this article.

Select Yes and then press Enter to install the GRUB boot loader to the master boot record. The installation should now complete successfully.
Preparing Ubuntu for Zimbra installation

In order to prepare Ubuntu for the Zimbra installation, the following steps need to be performed:

Log in to the newly installed system, then update and upgrade Ubuntu using the following commands:

sudo apt-get update
sudo apt-get upgrade

Install the dependencies as follows:

sudo apt-get install libperl5.14 libgmp3c2 build-essential sqlite3 sysstat ntp

Zimbra recommends (but does not require) disabling and removing AppArmor:

sudo /etc/init.d/apparmor stop
sudo /etc/init.d/apparmor teardown
sudo update-rc.d -f apparmor remove
sudo aptitude remove apparmor apparmor-utils

Set the static IP for your server as follows:

Open the network interfaces file using the following command:

sudo vi /etc/network/interfaces

Then replace the following line:

iface eth0 inet dhcp

With:

iface eth0 inet static
address 172.16.126.14
netmask 255.255.255.0
gateway 172.16.126.1
network 172.16.126.0
broadcast 172.16.126.255

Restart the networking service by typing in the following:

sudo /etc/init.d/networking restart

Sanity test! To verify that your network configuration is set up properly, type in ifconfig and ensure that the settings are correct. Then try to ping any working website (such as google.com) to see if that works.

On each server, pay attention when you set the static IP address (172.16.126.140 for the LDAP server and 172.16.126.141 for the MTA server).

Summary

In this article, we learned the prerequisites for a Zimbra multiserver installation and prepared the environment for the installation of the Zimbra server in a multiserver environment.

Resources for Article:

Further resources on this subject:

Routing Rules in AsteriskNOW - The Calling Rules Tables [Article]
Users, Profiles, and Connections in Elgg [Article]
Integrating Zimbra Collaboration Suite with Microsoft Outlook [Article]
Miscellaneous Tips

Packt
29 Oct 2013
24 min read
(For more resources related to this topic, see here.)

Mission Briefing

The topics covered here include:

Tracing Tkinter variables
Widget traversal
Validating user input
Formatting widget data
More on fonts
Working with Unicode characters
Tkinter class hierarchy
Custom-made mixins
Tips for code cleanup and program optimization
Distributing the Tkinter application
Limitations of Tkinter
Tkinter alternatives
Getting interactive help
Tkinter in Python 3.x

Tracing Tkinter variables

When you specify a Tkinter variable as a textvariable for a widget (textvariable=myvar), the widget automatically gets updated whenever the value of the variable changes. However, there might be times when, in addition to updating the widget, you need to do some extra processing when the variable is read or written (or modified).

Tkinter provides a method to attach a callback that is triggered every time the value of a variable is accessed. Thus, the callback acts as a variable observer. The callback method is named trace_variable(self, mode, callback), or simply trace(self, mode, callback).

The mode argument can take any one of the 'r', 'w', or 'u' values, which stand for read, write, or undefined. Depending on the mode specification, the callback is triggered when the variable is read or written.

The callback method gets three arguments by default. The arguments, in order of their position, are:

The name of the Tkinter variable
The index of the variable if the Tkinter variable is an array, else an empty string
The access mode ('w', 'r', or 'u')

Note that the triggered callback function may also modify the value of the variable. This modification does not, however, trigger any additional callbacks.
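Before looking at the GUI example, the observer idea behind trace can be illustrated without a GUI at all. The following is a minimal pure-Python sketch; the ObservedVar class is hypothetical (it is not part of Tkinter) and only mimics write-mode tracing, not the real Tcl machinery:

```python
# Hypothetical stand-in for a Tkinter StringVar with a write ("w") trace.
class ObservedVar(object):
    def __init__(self):
        self._value = ""
        self._write_callbacks = []

    def trace_variable(self, mode, callback):
        # Only the "w" (write) mode is mimicked here
        if mode == "w":
            self._write_callbacks.append(callback)

    def set(self, value):
        self._value = value
        # Like Tkinter, fire every registered observer on each write
        for cb in self._write_callbacks:
            cb("myvar", "", "w")

    def get(self):
        return self._value

log = []
myvar = ObservedVar()
myvar.trace_variable("w", lambda var, indx, mode: log.append(myvar.get()))
for ch in "Tra":
    myvar.set(myvar.get() + ch)
print(log)  # → ['T', 'Tr', 'Tra']
```

Each call to set triggers the registered callback with the same three arguments (name, index, mode) that a real Tkinter trace callback receives.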
Let's see a small example of variable tracing in Tkinter, where writing into an entry widget tied to a Tkinter variable triggers a callback function (refer to the 8.01 trace variable.py Python file available in the code bundle):

from Tkinter import *

root = Tk()
myvar = StringVar()

def trace_when_myvar_written(var, indx, mode):
    print "Traced variable %s" % myvar.get()

myvar.trace_variable("w", trace_when_myvar_written)

Label(root, textvariable=myvar).pack(padx=5, pady=5)
Entry(root, textvariable=myvar).pack(padx=5, pady=5)

root.mainloop()

The description of the preceding code is as follows:

This code creates a trace on the Tkinter variable myvar in the write ("w") mode.

The trace is attached to a callback method named trace_when_myvar_written (this means that every time the value of myvar is changed, the callback method will be triggered).

Now, every time you write into the entry widget, it modifies the value of myvar. Because we have set a trace on myvar, it triggers the callback method, which in our example simply prints the new value to the console.

The code creates a GUI window similar to the one shown here. It also produces console output in IDLE, which looks like the following once you start typing in the GUI window:

Traced variable T
Traced variable Tr
Traced variable Tra
Traced variable Trac
Traced variable Traci
Traced variable Tracin
Traced variable Tracing

The trace on a variable is active until it is explicitly deleted. You can delete a trace using:

trace_vdelete(self, mode, callbacktobedeleted)

The trace method returns the name of the callback method. This can be used to get the name of the callback method that is to be deleted.

Widget traversal

When a GUI has more than one widget, a given widget can come under focus with an explicit mouse-click on the widget. Alternatively, the focus can be shifted to another given widget by pressing the Tab key on the keyboard, in the order the widgets were created in the program.
It is therefore vital to create widgets in the order we want the user to traverse through them, or else the user will have a tough time navigating between the widgets using the keyboard. Different widgets are designed to behave differently in response to different keyboard strokes. Let's therefore spend some time trying to understand the rules of traversing widgets using the keyboard.

Let's look at the code of the 8.02 widget traversal.py Python file to understand the keyboard traversal behavior of different widgets. Once you run the mentioned .py file, it shows a window something like the following:

The code is simple. It adds an entry widget, a few buttons, a few radio buttons, a text widget, and a scale widget. However, it also demonstrates some of the most important keyboard traversal behaviors for these widgets. Here are some important points to note (refer to 8.02 widget traversal.py):

The Tab key can be used to traverse forward, and Shift + Tab can be used to traverse backwards.

The text widget cannot be traversed using the Tab key. This is because the text widget can contain tab characters as its content. Instead, the text widget can be traversed using Ctrl + Tab.

Buttons on the widget can be pressed using the spacebar. Similarly, check buttons and radio buttons can also be toggled using the spacebar.

You can go up and down the items in a Listbox widget using the up and down arrow keys.

The Scale widget responds to both the left and right keys and the up and down keys. Similarly, the Scrollbar widget responds to either the left/right or up/down keys, depending on its orientation.

Most of the widgets (except Frame, Label, and Menu) get an outline by default when they have the focus set on them. This outline normally displays as a thin black border around the widget. You can even set the Frame and Label widgets to show this outline by specifying the highlightthickness option with a non-zero integer value for these widgets.
We change the color of the outline using highlightcolor='red' in our code.

Frame, Label, and Menu widgets are not included in the tab navigation path. However, they can be included in the navigation path by using the takefocus=1 option. You can explicitly exclude a widget from the tab navigation path by setting the takefocus=0 option.

The Tab key traverses widgets in the order they were created. It visits a parent widget first (unless it is excluded using takefocus=0), followed by all its children widgets.

You can use widget.focus_force() to force the input focus to the widget.

Validating user input

Let's now discuss input data validation. Most of the applications we have developed in this article are point-and-click based (drum machine, chess, drawing application), where validation of user input is not required. However, data validation is a must in programs like our phonebook application, where the user enters some data and we store it in a database.

Ignoring user input validation can be dangerous in such applications because input data can be misused for SQL injection. In general, any application where a user can enter textual data is a good candidate for validating user input. In fact, it is almost considered a maxim not to trust user inputs. A wrong user input may be intentional or accidental. In either case, if you fail to validate or sanitize the data, you may cause unexpected errors in your program. In the worst case, user input can be used to inject harmful code that may be capable of crashing a program or wiping out an entire database.

Widgets such as Listbox, Combobox, and Radiobutton allow limited input options, and hence cannot normally be misused to input wrong data. On the other hand, widgets such as the Entry widget, Spinbox widget, and Text widget allow a large range of user inputs, and hence need to be validated for correctness.
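The SQL injection risk mentioned above is usually addressed with parameterized queries, where user input is bound as data rather than spliced into the SQL string. A minimal sqlite3 sketch (the contacts table and the sample values are made up for illustration, not from the phonebook application):

```python
import sqlite3

# In-memory database with a made-up contacts table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, phone TEXT)")

def add_contact(name, phone):
    # Parameterized query: the ? placeholders bind user input as data,
    # so it is never interpreted as SQL.
    conn.execute("INSERT INTO contacts VALUES (?, ?)", (name, phone))

# Even a hostile-looking name is stored as plain text
add_contact("Robert'); DROP TABLE contacts;--", "555-0100")
count = conn.execute("SELECT COUNT(*) FROM contacts").fetchone()[0]
print(count)  # → 1
```

The table survives intact; string concatenation of the same input into the SQL statement is exactly what parameterization exists to avoid.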
To enable validation on a widget, you need to specify an additional option of the form validate='validationmode' to the widget. For example, if you want to enable validation on an entry widget, you begin by specifying the validate option as follows:

Entry(root, validate="all", validatecommand=vcmd)

The validation can occur in one of the following validation modes:

none: This is the default mode. No validation occurs if validate is set to "none".
focus: When validate is set to "focus", the validate command is called twice; once when the widget receives focus and once when the focus is lost.
focusin: The validate command is called when the widget receives focus.
focusout: The validate command is called when the widget loses focus.
key: The validate command is called when the entry is edited.
all: The validate command is called in all the above cases.

The code of the 8.03 validation mode demo.py file demonstrates all these validation modes by attaching them to a single validation method. Note the different ways different Entry widgets respond to different events. Some Entry widgets call the validation method on focus events, while others call the validation method when key strokes are entered into the widget, and still others use a combination of focus and key events.

Although we did set the validation mode to trigger the validate method, we need some sort of data to validate against our rules. This is passed to the validate method using percent substitution.
For instance, we passed the mode as an argument to our validate method by performing a percent substitution on the validate command, as shown in the following:

vcmd = (self.root.register(self.validate), '%V')

We followed by passing the value of v as an argument to our validate method:

def validate(self, v)

In addition to %V, Tkinter recognizes the following percent substitutions:

%d: The type of action that occurred on the widget (1 for insert, 0 for delete, and -1 for focus, forced, or textvariable validation).
%i: The index of the char string inserted or deleted, if any; otherwise -1.
%P: The value of the entry if the edit is allowed. If you are configuring the Entry widget to have a new textvariable, this will be the value of that textvariable.
%s: The current value of the entry, prior to editing.
%S: The text string being inserted/deleted, if any; {} otherwise.
%v: The type of validation currently set.
%V: The type of validation that triggered the callback method (key, focusin, focusout, or forced).
%W: The name of the Entry widget.

These substitutions provide us with the necessary data we can use to validate the input. Let's now pass all this data through a dummy validate method that just prints it, so we can see the kind of data we can expect to get for carrying out our validations (refer to the code of 8.04 percent substitutions demo.py).

Take particular note of the data returned by %P and %s, because they pertain to the actual data entered by the user in the Entry widget. In most cases, you will check either of these two values against your validation rules.

Now that we have covered the rules of data validation, let's see two practical examples that demonstrate input validation.

Key Validation

Let's assume that we have a form that asks for a user's name. We want the user to input only alphabets or space characters in the name.
Thus, any number or special character is not to be allowed, as shown in the following screenshot of the widget:

This is clearly a case for the 'key' validation mode, because we want to check if an entry is valid after every key press. The percent substitution that we need to check is %S, because it yields the text string being inserted or deleted in the Entry widget. Accordingly, the code that validates the entry widget is as follows (refer to 8.05 key validation.py):

import Tkinter as tk

class KeyValidationDemo():
    def __init__(self):
        root = tk.Tk()
        tk.Label(root, text='Enter your name').pack()
        vcmd = (root.register(self.validate_data), '%S')
        invcmd = (root.register(self.invalid_name), '%S')
        tk.Entry(root, validate="key", validatecommand=vcmd, invalidcommand=invcmd).pack(pady=5, padx=5)
        self.errmsg = tk.Label(root, text='', fg='red')
        self.errmsg.pack()
        root.mainloop()

    def validate_data(self, S):
        self.errmsg.config(text='')
        return (S.isalpha() or S == '')  # always return True or False

    def invalid_name(self, S):
        self.errmsg.config(text='Invalid character %s: name can only have alphabets' % S)

app = KeyValidationDemo()

The description of the preceding code is as follows:

We first register two options, validatecommand (vcmd) and invalidcommand (invcmd). In our example, validatecommand is registered to call the validate_data method, and the invalidcommand option is registered to call another method named invalid_name.

The validatecommand option specifies a method to be evaluated that validates the input. The validation method must return a Boolean value, where True signifies that the data entered is valid, and False signifies that the data is invalid.

If the validate method returns False (invalid data), no data is added to the Entry widget, and the script registered for invalidcommand is evaluated. In our case, a False validation calls the invalid_name method.
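The validation predicate itself can be exercised outside of Tkinter. A quick sketch of the same isalpha-or-empty rule applied keystroke by keystroke (the function name is ours, not from the recipe):

```python
def validate_keystroke(S):
    # Mirrors the rule used in the recipe: accept alphabetic
    # characters, or an empty string (e.g. after a delete).
    return S.isalpha() or S == ''

# Simulate the per-key checks Tkinter would perform
assert validate_keystroke('J')      # letter accepted
assert validate_keystroke('')       # deletion accepted
assert not validate_keystroke('3')  # digit rejected
assert not validate_keystroke('!')  # punctuation rejected
```

Note that str.isalpha() returns False for a space, so a space character would also be rejected by this predicate as written.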
The invalidcommand method is generally responsible for displaying error messages or setting the focus back to the Entry widget. Let's look at the method register(self, func, subst=None, needcleanup=1). The register method returns a newly created Tcl function. If this function is called, the Python function func is executed. If an optional function subst is provided, it is executed before func.

Focus Out Validation

The previous example demonstrated validation in 'key' mode. This means that the validation method was called after every key press to check if the entry was valid. However, there are situations when you might want to check the entire string entered into the widget, rather than checking individual key stroke entries. For example, if an Entry widget accepts a valid e-mail address, we would ideally like to check the validity after the user has entered the entire e-mail address, and not after every key stroke entry. This would qualify as validation in 'focusout' mode. Check out the code of 8.06 focus out validation.py for a demonstration of e-mail validation in the focusout mode:

import Tkinter as tk
import re

class FocusOutValidationDemo():
    def __init__(self):
        self.master = tk.Tk()
        self.errormsg = tk.Label(text='', fg='red')
        self.errormsg.pack()
        tk.Label(text='Enter Email Address').pack()
        vcmd = (self.master.register(self.validate_email), '%P')
        invcmd = (self.master.register(self.invalid_email), '%P')
        self.emailentry = tk.Entry(self.master, validate="focusout",
                                   validatecommand=vcmd, invalidcommand=invcmd)
        self.emailentry.pack()
        tk.Button(self.master, text="Login").pack()
        tk.mainloop()

    def validate_email(self, P):
        self.errormsg.config(text='')
        x = re.match(r"[^@]+@[^@]+\.[^@]+", P)
        return (x != None)  # True (valid email) / False (invalid email)

    def invalid_email(self, P):
        self.errormsg.config(text='Invalid Email Address')
        self.emailentry.focus_set()

app = FocusOutValidationDemo()

The description of the preceding code is as follows: The code has a lot of similarities to the previous validation example. However, note the following differences: The validate mode is set to 'focusout', in contrast to the 'key' mode in the previous example. This means that the validation will be done only when the Entry widget loses focus. This program uses data provided by the %P percent substitution, in contrast to %S as used in the previous example. This is understandable, as %P provides the value entered in the Entry widget, whereas %S provides the value of the last key stroke. This program uses a regular expression to check whether the entered value corresponds to a valid e-mail format. Validation usually relies on regular expressions, but a full treatment of regular expressions is beyond the scope of this article. For more information on the regular expression module, visit the following link: http://docs.python.org/2/library/re.html This concludes our discussion on input validation in Tkinter. Hopefully, you should now be able to implement input validation to suit your custom needs.

Formatting widget data

Several input data such as date, time, phone number, credit card number, website URL, IP number, and so on have an associated display format. For instance, a date is better represented in an MM/DD/YYYY format.
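Before moving on to formatting, note that the e-mail pattern from the focusout example can be exercised outside of the GUI. The following is a minimal sketch (the helper name is ours, not part of the book's code); note the escaped dot (\.), which makes the pattern match a literal period:

```python
import re

# Loose e-mail shape: non-'@' characters, an '@', more non-'@'
# characters, a literal dot, and a final part.
EMAIL_RE = re.compile(r"[^@]+@[^@]+\.[^@]+")

def is_valid_email(text):
    """Return True if text loosely matches an e-mail address shape."""
    return EMAIL_RE.match(text) is not None
```

This pattern is deliberately permissive; fully validating e-mail addresses per RFC 5322 is far more involved and rarely worth it for a GUI field.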
Fortunately, it is easy to format the data in the required format as the user enters them in the widget (refer to 8.07 formatting entry widget to display date.py). The mentioned Python file formats the user input automatically to insert forward slashes at the required places to display the user-entered date in the MM/DD/YYYY format.

from Tkinter import *

class FormatEntryWidgetDemo:
    def __init__(self, root):
        Label(root, text='Date(MM/DD/YYYY)').pack()
        self.entereddata = StringVar()
        self.dateentrywidget = Entry(textvariable=self.entereddata)
        self.dateentrywidget.pack(padx=5, pady=5)
        self.dateentrywidget.focus_set()
        self.slashpositions = [2, 5]
        root.bind('<Key>', self.format_date_entry_widget)

    def format_date_entry_widget(self, event):
        entrylist = [c for c in self.entereddata.get() if c != '/']
        for pos in self.slashpositions:
            if len(entrylist) > pos:
                entrylist.insert(pos, '/')
        self.entereddata.set(''.join(entrylist))
        # Controlling cursor
        cursorpos = self.dateentrywidget.index(INSERT)
        for pos in self.slashpositions:
            if cursorpos == (pos + 1):  # if cursor is on slash
                cursorpos += 1
        if event.keysym not in ['BackSpace', 'Right', 'Left', 'Up', 'Down']:
            self.dateentrywidget.icursor(cursorpos)

root = Tk()
FormatEntryWidgetDemo(root)
root.mainloop()

The description of the preceding code is as follows: The Entry widget is bound to the key press event, where every new key press calls the related callback format_date_entry_widget method. First, the format_date_entry_widget method breaks down the entered text into an equivalent list named entrylist, ignoring any slash '/' symbol entered by the user. It then iterates through the self.slashpositions list and inserts the slash symbol at all required positions in the entrylist argument. The net result of this is a list that has slashes inserted at all the right places. The next line converts this list into an equivalent string using join(), and then sets the value of our Entry widget to this string.
This ensures that the Entry widget text is formatted into the aforementioned date format. The remaining pieces of code simply control the cursor to ensure that the cursor advances by one position whenever it encounters a slash symbol. They also ensure that key presses such as 'BackSpace', 'Right', 'Left', 'Up', and 'Down' are handled properly. Note that this method does not validate the date value, and the user may enter any invalid date. The method defined here will simply format it by adding a forward slash at the third and sixth positions. Adding date validation to this example is left as an exercise for you to complete. This concludes our brief discussion on formatting data within widgets. Hopefully, you should now be able to create formatted widgets for a wide variety of input data that can be displayed better in a given format. More on fonts Many Tkinter widgets let you specify custom font specifications either at the time of widget creation or later using the configure() option. For most cases, default fonts provide a standard look and feel. However, should you want to change font specifications, Tkinter lets you do so. There is one caveat though. When you specify your own font, you need to make sure it looks good on all platforms where the program is intended to be deployed. This is because a font might look good and match well on a particular platform, but may look awful on another. Unless you know what you are doing, it is always advisable to stick to Tkinter's default fonts. Most platforms have their own set of standard fonts that are used by the platform's native widgets. So, rather than trying to reinvent the wheel on what looks good on a given platform or what would be available for a given platform, Tkinter assigns these standard platform-specific fonts to its widgets, thus providing a native look and feel on every platform. Tkinter assigns nine fonts to nine different names, which you can therefore use in your programs.
The font names are as follows: TkDefaultFont, TkTextFont, TkFixedFont, TkMenuFont, TkHeadingFont, TkCaptionFont, TkSmallCaptionFont, TkIconFont, and TkTooltipFont. Accordingly, you can use them in your programs in the following way:

Label(text="Sale Up to 50% Off !", font="TkHeadingFont 20")
Label(text="**Conditions Apply", font="TkSmallCaptionFont 8")

Using this kind of font markup, you can be assured that your font will look native across all platforms.

Finer Control over Font

In addition to the above method of handling fonts, Tkinter provides a separate Font class implementation. The source code of this class is located at <Python27_installation_dir>\Lib\lib-tk\tkFont.py. To use this module, you need to import tkFont into your namespace (refer to 8.08 tkfont demo.py):

from Tkinter import Tk, Label, Pack
import tkFont

root = Tk()
label = Label(root, text="Humpty Dumpty was pushed")
label.pack()
currentfont = tkFont.Font(font=label['font'])
print 'Actual :' + str(currentfont.actual())
print 'Family :' + currentfont.cget("family")
print 'Weight :' + currentfont.cget("weight")
print 'Text width of Dumpty : %d' % currentfont.measure("Dumpty")
print 'Metrics:' + str(currentfont.metrics())
currentfont.config(size=14)
label.config(font=currentfont)
print 'New Actual :' + str(currentfont.actual())
root.mainloop()

The console output of this program is as follows:

Actual :{'family': 'Segoe UI', 'weight': 'normal', 'slant': 'roman', 'overstrike': 0, 'underline': 0, 'size': 9}
Family : Segoe UI
Weight : normal
Text width of Dumpty : 43
Metrics:{'fixed': 0, 'ascent': 12, 'descent': 3, 'linespace': 15}

As you can see, the tkFont module provides much finer-grained control over various aspects of fonts, which are otherwise inaccessible.

Font Selector

Now that we have seen the basic features available in the tkFont module, let's use it to implement a font selector.
The font selector would look like the one shown here: The code for the font selector is as follows (refer to 8.09 font selector.py):

from Tkinter import *
import ttk
import tkFont

class FontSelectorDemo():
    def __init__(self):
        self.currentfont = tkFont.Font(font=('Times New Roman', 12))
        self.family = StringVar(value='Times New Roman')
        self.fontsize = StringVar(value='12')
        self.fontweight = StringVar(value=tkFont.NORMAL)
        self.slant = StringVar(value=tkFont.ROMAN)
        self.underlinevalue = BooleanVar(value=False)
        self.overstrikevalue = BooleanVar(value=False)
        self.gui_creator()

The description of the preceding code is as follows: We import Tkinter (for all widgets), ttk (for the Combobox widget), and tkFont for handling the font-related aspects of the program. We create a class named FontSelectorDemo and use its __init__ method to initialize all attributes that we intend to track in our program. Finally, the __init__ method calls another method named gui_creator(), which is responsible for creating all the GUI elements of the program.

Creating the GUI

The code represented here is a highly abridged version of the actual code (refer to 8.09 font selector.py).
Here, we removed all the code that creates basic widgets, such as Label and Checkbuttons, in order to show only the font-related code:

def gui_creator(self):
    # create the top labels - code removed
    fontList = ttk.Combobox(textvariable=self.family)
    fontList.bind('<<ComboboxSelected>>', self.on_value_change)
    allfonts = list(tkFont.families())
    allfonts.sort()
    fontList['values'] = allfonts
    # Font Sizes
    sizeList = ttk.Combobox(textvariable=self.fontsize)
    sizeList.bind('<<ComboboxSelected>>', self.on_value_change)
    allfontsizes = range(6, 70)
    sizeList['values'] = allfontsizes
    # add four checkbuttons to provide choice for font style
    # all checkbuttons command attached to self.on_value_change
    # create text widget
    sampletext = 'The quick brown fox jumps over the lazy dog'
    self.text.insert(INSERT, '%s\n%s' % (sampletext, sampletext.upper()),
                     'fontspecs')
    self.text.config(state=DISABLED)

The description of the preceding code is as follows: We have highlighted the code that creates two Combobox widgets; one for the Font Family and the other for the Font Size selection. We use tkFont.families() to fetch the list of all the fonts installed on the computer. This is converted into a list format and sorted before it is inserted into the fontList Combobox widget. Similarly, we add a range of font sizes from 6 to 70 to the Font Size Combobox. We also add four Checkbutton widgets to keep track of the font styles bold, italic, underline, and overstrike. The code for this has not been shown previously, because we have created similar check buttons in some of our previous programs. We then add a Text widget and insert a sample text into it. More importantly, we add a tag named fontspecs to the text. Finally, all our widgets have a command callback connecting back to a common method named on_value_change. This method is responsible for updating the display of the sample text whenever the value of any of the widgets changes.
Updating Sample Text

def on_value_change(self, event=None):
    try:
        self.currentfont.config(family=self.family.get(),
                                size=self.fontsize.get(),
                                weight=self.fontweight.get(),
                                slant=self.slant.get(),
                                underline=self.underlinevalue.get(),
                                overstrike=self.overstrikevalue.get())
        self.text.tag_config('fontspecs', font=self.currentfont)
    except ValueError:
        pass  # invalid entry - ignored for now. You can use a tkMessageBox dialog to show an error

The description of the preceding code is as follows: This method is called at the time of a state change for any of the widgets. It simply fetches all font data and configures our currentfont attribute with the updated font values. Finally, it updates the text content tagged as fontspecs with the values of the current font.

Working with Unicode characters

Computers only understand binary numbers. Therefore, everything that you see on your computer, for example, text, images, audio, video, and so on, needs to be expressed in terms of binary numbers. This is where encoding comes into play. An encoding is a set of standard rules that assigns a unique numeric value to each text character. Python 2.x's default encoding is ASCII (American Standard Code for Information Interchange). ASCII is a 7-bit encoding that can encode 2^7 (128) characters. Because ASCII encoding was developed in America, it encodes characters from the English alphabet, namely, the numbers 0-9, the letters a-z and A-Z, some common punctuation symbols, some teletype machine control codes, and the blank space. It is here that Unicode encoding comes to our rescue.
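The 128-character limit of ASCII is easy to observe directly. The following sketch (our own illustration, not from the book's code files, and runnable under both Python 2 and Python 3) shows plain English text encoding cleanly to ASCII while a character outside that range cannot be encoded:

```python
def ascii_roundtrip(text):
    """Encode text as ASCII if possible; report the failure otherwise."""
    try:
        return text.encode('ascii')
    except UnicodeEncodeError:
        return 'UnicodeEncodeError'

# ASCII covers only 2**7 = 128 code points, so u'hello' encodes fine,
# but u'\u0939' (a Devanagari letter, as used in Hindi text) does not.
```

This is exactly the failure mode that surfaces as errors when non-ASCII characters appear in Python 2.x programs.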
The following are the key features of Unicode encoding: It is a way to represent text independently of any particular byte encoding. It provides a unique code point for each character of every language. It defines more than a million code points, representing characters of all major scripts on the earth. Within Unicode, there are several Unicode Transformation Formats (UTF). UTF-8 is one of the most commonly used encodings, where 8 means that 8-bit numbers are used in the encoding. Python also supports UTF-16 encoding, but it's less frequently used, and UTF-32 is not supported by Python 2.x.

Say you want to display a Hindi character on a Tkinter Label widget. You would intuitively try to run code like the following:

from Tkinter import *
root = Tk()
Label(root, text=" भारतमेंआपकास्वागतहै ").pack()
root.mainloop()

If you try to run the previous code, you will get an error message as follows: SyntaxError: Non-ASCII character '\xe0' in file 8.07.py on line 4, but no encoding declared; see http://www.Python.org/peps/pep-0263.html for details. This means that Python 2.x, by default, cannot handle non-ASCII characters. The Python standard library supports over 100 encodings, but if you are trying to use anything other than ASCII encoding, you have to declare the encoding explicitly. Fortunately, handling other encodings is very simple in Python. There are two ways in which you can deal with non-ASCII characters.

Declaring line encoding

The first way is to mark a string containing Unicode characters with the prefix u explicitly, as shown in the following code snippet (refer to 8.10 line encoding.py):

from Tkinter import *
root = Tk()
Label(root, text=u"भारतमेंआपकास्वागतहै").pack()
root.mainloop()

When you try to run this program from IDLE, you get a warning message similar to the following one: Simply click on Ok to save this file as UTF-8 and run this program to display the Unicode label.

Summary

In this article, we discussed some vital aspects of GUI programming that form a common theme in many GUI programs.
Resources for Article: Further resources on this subject: Getting Started with Spring Python [Article] Python Testing: Installing the Robot Framework [Article] Getting Up and Running with MySQL for Python [Article]
Packt
28 Oct 2013
7 min read

Getting Started with Code::Blocks

(For more resources related to this topic, see here.) Why Code::Blocks? Before we go on learning more about Code::Blocks, let us understand why we should use Code::Blocks over other IDEs. It is a cross-platform Integrated Development Environment (IDE). It supports the Windows, Linux, and Mac operating systems. It fully supports the GCC compiler and GNU debugger on all supported platforms. It supports numerous other compilers to various degrees on multiple platforms. It is scriptable and extendable. It comes with several plugins that extend its core functionality. It is lightweight on resources and doesn't require a powerful computer to run. Finally, it is free and open source. Installing Code::Blocks on Windows Our primary focus in this article will be on the Windows platform. However, we'll touch upon other platforms wherever possible. Official Code::Blocks binaries are available from www.codeblocks.org. Perform the following steps for a successful installation of Code::Blocks: For installation on the Windows platform, download the codeblocks-12.11mingw-setup.exe file from http://www.codeblocks.org/downloads/26 or from the sourceforge mirror http://sourceforge.net/projects/codeblocks/files/Binaries/12.11/Windows/codeblocks-12.11mingw-setup.exe/download and save it in a folder. Double-click on this file and run it. You'll be presented with the following screen: As shown in the following screenshot, click on the Next button to continue. The license text will be presented. The Code::Blocks application is licensed under GNU GPLv3 and the Code::Blocks SDK is licensed under GNU LGPLv3. You can learn more about these licenses at this URL—https://www.gnu.org/licenses/licenses.html. Click on I Agree to accept the License Agreement. The component selection page will be presented in the following screenshot: You may choose any of the following options: Default install: This is the default installation option. This will install Code::Blocks' core components and core plugins.
Contrib Plugins: Plugins are small programs that extend Code::Blocks' functionality. Select this option to install plugins contributed by several other developers. C::B Share Config: This utility can copy all or parts of a configuration file. MinGW Compiler Suite: This option will install GCC 4.7.1 for Windows. Select Full Installation and click on the Next button to continue. As shown in the following screenshot, the installer will now prompt you to select the installation directory: You can install it to the default installation directory. Otherwise, choose Destination Folder and then click on the Install button. The installer will now proceed with the installation. As shown in the following screenshot, Code::Blocks will now prompt us to run it after the installation is completed: Click on the No button here and then click on the Next button. The installation will now be completed: Click on the Finish button to complete the installation. A shortcut will be created on the desktop. This completes our Code::Blocks installation on Windows. Installing Code::Blocks on Linux Code::Blocks runs on numerous Linux distributions. In this section we'll learn about the installation of Code::Blocks on CentOS Linux. CentOS is a freely available, enterprise-grade Linux distribution based on Red Hat Enterprise Linux. Perform the following steps to install Code::Blocks on Linux OS: Navigate to the Settings | Administration | Add/Remove Software menu option. Enter wxGTK in the Search box and hit the Enter key. As of writing, wxGTK-2.8.12 is the latest wxWidgets stable release available. Select it and click on the Apply button to install the wxGTK package via the package manager, as shown in the following screenshot. Download packages for CentOS 6 from this URL—http://www.codeblocks.org/downloads/26.
Unpack the .tar.bz2 file by issuing the following command in shell: tar xvjf codeblocks-12.11-1.el6.i686.tar.bz2 Right-click on the codeblocks-12.11-1.el6.i686.rpm file as shown in the following screenshot and choose the Open with Package Installer option. The following window will be displayed. Click on the Install button to begin installation, as shown in the following screenshot: You may be asked to enter the root password if you are installing it from a user account. Enter the root password and click on the Authenticate button. Code::Blocks will now be installed. Repeat steps 4 to 6 to install other rpm files. We have now learned to install Code::Blocks on the Windows and Linux platforms. We are now ready for C++ development. Before doing that we'll learn about the Code::Blocks user interface. First run On the Windows platform navigate to the Start | All Programs | CodeBlocks | CodeBlocks menu options to launch Code::Blocks. Alternatively you may double-click on the shortcut displayed on the desktop to launch Code::Blocks, as in the following screenshot: On Linux navigate to Applications | Programming | Code::Blocks IDE menu options to run Code::Blocks. Code::Blocks will now ask the user to select the default compiler. Code::Blocks supports several compilers and hence, is able to detect the presence of other compilers. The following screenshot shows that Code::Blocks has detected GNU GCC Compiler (which was bundled with the installer and has been installed). Click on it to select and then click on Set as default button, as shown in the following screenshot: Do not worry about the items highlighted in red in the previous screenshot. Red colored lines indicate Code::Blocks was unable to detect the presence of a particular compiler. Finally, click on the OK button to continue with the loading of Code::Blocks. After the loading is complete the Code::Blocks window will be shown. The following screenshot shows main window of Code::Blocks. 
Annotated portions highlight different User Interface (UI) components: Now, let us understand more about the different UI components: Menu bar and toolbar: All Code::Blocks commands are available via the menu bar. On the other hand, toolbars provide quick access to commonly used commands. Start page and code editors: The Start page is the default page when Code::Blocks is launched. It contains some useful links and recent project and file history. Code editors are text containers to edit C++ (and other language) source files. These editors offer syntax highlighting—a feature that highlights keywords in different colors. Management pane: This window shows all open files (including source files, project files, and workspace files). This pane is also used by other plugins to provide additional functionality. In the preceding screenshot, the FileManager plugin is providing a Windows Explorer like facility and the Code Completion plugin is providing details of currently open source files. Log windows: Log messages from different tools, for example, compiler, debugger, document parser, and so on, are shown here. This component is also used by other plugins. Status bar: This component shows various status information of Code::Blocks, for example, file path, file encoding, line numbers, and so on. Introduction to important toolbars Toolbars provide easier access to different functions of Code::Blocks. Amongst the several toolbars, the following ones are the most important. Main toolbar The main toolbar holds core component commands. From left to right there are new file, open file, save, save all, undo, redo, cut, copy, paste, find, and replace buttons. Compiler toolbar The compiler toolbar holds commonly used compiler-related commands. From left to right there are build, run, build and run, rebuild, stop build, and build target buttons. Compilation of C++ source code is also called a build and this terminology will be used throughout the article.
Debugger toolbar The debugger toolbar holds commonly used debugger related commands. From left to right there are debug/continue, run to cursor, next line, step into, step out, next instruction, step into instruction, break debugger, stop debugger, debugging windows, and various info buttons. Summary In this article we have learned to download and install Code::Blocks. We also learnt about different interface elements. Resources for Article: Further resources on this subject: OpenGL 4.0: Building a C++ Shader Program Class [Article] Application Development in Visual C++ - The Tetris Application [Article] Building UI with XAML for Windows 8 Using C [Article]

Packt
25 Oct 2013
3 min read

Image classification and feature extraction from images

(For more resources related to this topic, see here.) Classifying images Automated Remote Sensing (ARS) is rarely ever done in the visible spectrum. The most commonly available wavelengths outside of the visible spectrum are infrared and near-infrared. The following scene is a thermal image (band 10) from a fairly recent Landsat 8 flyover of the US Gulf Coast from New Orleans, Louisiana to Mobile, Alabama. Major natural features in the image are labeled so you can orient yourself: Because every pixel in that image has a reflectance value, it is information. Python can "see" those values and pick out features the same way we intuitively do by grouping related pixel values. We can colorize pixels based on their relation to each other to simplify the image and view related features. This technique is called classification. Classifying can range from fairly simple groupings based only on some value distribution algorithm derived from the histogram to complex methods involving training data sets and even computer learning and artificial intelligence. The simplest forms are called unsupervised classifications, whereas methods involving some sort of training data to guide the computer are called supervised. It should be noted that classification techniques are used across many fields, from medical doctors trying to spot cancerous cells in a patient's body scan, to casinos using facial-recognition software on security videos to automatically spot known con-artists at blackjack tables. To introduce remote sensing classification we'll just use the histogram to group pixels with similar colors and intensities and see what we get. First you'll need to download the Landsat 8 scene here: http://geospatialpython.googlecode.com/files/thermal.zip Instead of our histogram() function from previous examples, we'll use the version included with NumPy that allows you to easily specify a number of bins and returns two arrays with the frequency as well as the ranges of the bin values.
We'll use the second array with the ranges as our class definitions for the image. The lut or look-up table is an arbitrary color palette used to assign colors to classes. You can use any colors you want.

import gdalnumeric

# Input file name (thermal image)
src = "thermal.tif"
# Output file name
tgt = "classified.jpg"
# Load the image into numpy using gdal
srcArr = gdalnumeric.LoadFile(src)
# Split the histogram into 20 bins as our classes
classes = gdalnumeric.numpy.histogram(srcArr, bins=20)[1]
# Color look-up table (LUT) - must be len(classes)+1.
# Specified as R,G,B tuples
lut = [[255,0,0],[191,48,48],[166,0,0],[255,64,64],
       [255,115,115],[255,116,0],[191,113,48],[255,178,115],
       [0,153,153],[29,115,115],[0,99,99],[166,75,0],
       [0,204,0],[51,204,204],[255,150,64],[92,204,204],[38,153,38],
       [0,133,0],[57,230,57],[103,230,103],[184,138,0]]
# Starting value for classification
start = 1
# Set up the RGB color JPEG output image
rgb = gdalnumeric.numpy.zeros((3, srcArr.shape[0], srcArr.shape[1],),
                              gdalnumeric.numpy.float32)
# Process all classes and assign colors
for i in range(len(classes)):
    mask = gdalnumeric.numpy.logical_and(start <= srcArr,
                                         srcArr <= classes[i])
    for j in range(len(lut[i])):
        rgb[j] = gdalnumeric.numpy.choose(mask, (rgb[j], lut[i][j]))
    start = classes[i]+1
# Save the image
gdalnumeric.SaveArray(rgb.astype(gdalnumeric.numpy.uint8), tgt,
                      format="JPEG")

The following image is our classification output, which we just saved as a JPEG. We didn't specify the prototype argument when saving as an image, so it has no georeferencing information. This result isn't bad for a very simple unsupervised classification. The islands and coastal flats show up as different shades of green. The clouds were isolated as shades of orange and dark blues. We did have some confusion inland where the land features were colored the same as the Gulf of Mexico. We could further refine this process by defining the class ranges manually instead of just using the histogram.
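The binning idea at the heart of this classification can be tried without GDAL or a real scene. The following pure-Python sketch (our own toy example with synthetic values, not Landsat data) mimics the equal-width bins that numpy.histogram's edge array gives us:

```python
def classify(values, bins=5):
    """Bucket each value into one of `bins` equal-width classes."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / float(bins)
    classes = []
    for v in values:
        idx = int((v - lo) / width)
        classes.append(min(idx, bins - 1))  # clamp the top edge into the last bin
    return classes

# Five synthetic "pixels"; a real scene would be a 2-D reflectance array.
data = [0.0, 0.1, 0.5, 0.9, 1.0]
```

Each resulting class index can then be looked up in a color table, exactly as the lut list does above.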

Packt
25 Oct 2013
11 min read

Scratching the Surface of Zend Framework 2

Bootstrap your app There are two ways to bootstrap your ZF2 app. The default way is less flexible but handles the entire configuration, while the manual way is really flexible but you have to take care of everything. The goal of the bootstrap is to provide the application, ZendMvcApplication, with all the components and dependencies needed to successfully handle a request. A Zend Framework 2 application relies on the following six components: Configuration array ServiceManager instance EventManager instance ModuleManager instance Request object Response object As these are the pillars of a ZF2 application, we will take a look at how these components are configured to bootstrap the app. To begin with, we will see how the components interact from a high-level perspective, and then we will jump into the details of how each one works. When a new request arrives at our application, ZF2 needs to set up the environment to be able to fulfill it. This process implies reading configuration files, creating the required objects and services, attaching them to the events that are going to be used, and finally creating the request object based on the request data. Once we have the request object, ZF2 will tell the router to do its job and will inspect the request object to determine who is responsible for processing the data. Once a controller and action have been identified as the ones in charge of the request, ZF2 dispatches them and gives the controller/action control of the program in order to execute the code that will interpret the request and do something with it. This can range from accepting an uploaded image to showing a sign-up form or changing data in an external database. When the controller processes the data, sometimes a view object is generated to encapsulate the data that we should send to the client who made the request, and a response object is created. After we have a response object, ZF2 sends it to the browser and the request ends.
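The flow described above (request in, router, controller/action, response out) is not specific to PHP. As a rough, language-agnostic illustration (a toy Python sketch with invented names, not ZF2 code), a front controller boils down to:

```python
class Application:
    """Toy front controller: route a path to an action, build a response."""
    def __init__(self, routes):
        self.routes = routes  # path -> callable controller action

    def handle(self, path):
        action = self.routes.get(path)
        if action is None:
            return (404, 'Not Found')  # no route matched the request
        return (200, action())         # dispatch and wrap the result

app = Application({'/hello': lambda: 'Hello, world'})
```

In ZF2, these responsibilities are spread across the router, the dispatch listener, and the response object, with events fired at each step so that listeners can intervene.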
Now that we have seen a very simple overview of the lifecycle of a request, we will jump into the details of how each object works, the options available, and some examples of each one. Configuration array Let's dissect the first component of the list by taking a look at the index.php file:

chdir(dirname(__DIR__));

// Setup autoloading
require 'init_autoloader.php';

// Run the application!
ZendMvcApplication::init(require 'config/application.config.php')->run();

As you can see, we only do three things. First, we change the current folder for the convenience of making everything relative to the root folder. Then we require the autoloader file; we will examine this file later. Finally, we initialize a ZendMvcApplication object by passing a configuration file, and only then does the run method get called. The configuration file looks like the following code snippet:

return array(
    'modules' => array(
        'Application',
    ),
    'module_listener_options' => array(
        'config_glob_paths' => array(
            'config/autoload/{,*.}{global,local}.php',
        ),
        'module_paths' => array(
            './module',
            './vendor',
        ),
    ),
);

This file returns an array containing the configuration options for the application. Two options are used: modules and module_listener_options. As ZF2 uses a module organization approach, we should add the modules that we want to use in the application here. The second option is passed as configuration to the ModuleManager object. The config_glob_paths array is used when scanning the folders in search of config files, and the module_paths array is used to tell ModuleManager a set of paths where the modules reside. ZF2 uses a module approach to organize files. A module can contain almost anything: simple PHP files, view scripts, images, CSS, JavaScript, and so on. This approach will allow us to build reusable blocks of functionality, and we will adhere to this while developing our project.
PSR-0 and autoloaders Before continuing with the key components, let's take a closer look at the init_autoloader.php file used in the index.php file. As stated in the first block comment, this file is more complicated than it's supposed to be. This is because ZF2 will try to set up different loading mechanisms and configurations.

if (file_exists('vendor/autoload.php')) {
    $loader = include 'vendor/autoload.php';
}

The first thing is to check if there is an autoload.php file inside the vendor folder; if it's found, we will load it. This is because the user might be using composer, in which case composer will provide a PSR-0 class loader. Also, this will register the namespaces defined by composer on the loader. PSR-0 is an autoloading standard proposed by the PHP Framework Interop Group (http://www.php-fig.org/) that describes the mandatory requirements for autoloader interoperability between frameworks. Zend Framework 2 is one of the projects that adheres to it.

if (getenv('ZF2_PATH')) {
    $zf2Path = getenv('ZF2_PATH');
} elseif (get_cfg_var('zf2_path')) {
    $zf2Path = get_cfg_var('zf2_path');
} elseif (is_dir('vendor/ZF2/library')) {
    $zf2Path = 'vendor/ZF2/library';
}

In the next section we try to get the path of the ZF2 files from different sources. We first try to get it from the environment; if that fails, we try a directive value in the php.ini file. Finally, if the previous methods fail, the code checks whether a specific folder exists inside the vendor folder.

if ($zf2Path) {
    if (isset($loader)) {
        $loader->add('Zend', $zf2Path);
    } else {
        include $zf2Path . '/Zend/Loader/AutoloaderFactory.php';
        ZendLoaderAutoloaderFactory::factory(array(
            'ZendLoaderStandardAutoloader' => array(
                'autoregister_zf' => true
            )
        ));
    }
}

Finally, if the framework is found by any of these methods, then, based on the existence of the composer autoloader, the code will either just add the Zend namespace or instantiate an internal autoloader, ZendLoaderAutoloader, and use it as the default. As you can see, there are multiple ways to set up the autoloading mechanism in ZF2, and in the end what matters is which one you prefer, as in essence all of them will behave the same. ServiceManager After all this execution of code, we arrive at the last section of the index.php file, where we actually instantiate the ZendMvcApplication object. As we said, there are two methods of creating an instance of ZendMvcApplication. In the default approach, we call the static init method of the class, passing an optional configuration as the first parameter. This method will take care of instantiating a new ServiceManager object, storing the configuration inside, loading the modules specified in the configuration, and getting a configured ZendMvcApplication. ServiceManager is a service/object locator that implements the Service Locator design pattern; its responsibility is to retrieve other objects.

$serviceManager = new ServiceManager(
    new ServiceServiceManagerConfig($smConfig)
);
$serviceManager->setService('ApplicationConfig', $configuration);
$serviceManager->get('ModuleManager')->loadModules();
return $serviceManager->get('Application')->bootstrap();

As you can see, the init method calls the bootstrap() method of the ZendMvcApplication instance. Service Locator is a design pattern used in software development to encapsulate the process of obtaining other objects. The concept is based on a central repository that stores the objects and also knows how to create them if required. EventManager This component is designed to provide multiple functionalities.
It can be used to implement simple observer patterns, to do aspect-oriented design, or even to create event-driven architectures. The basic operations you can perform with this component are attaching and detaching listeners to named events, triggering events, and interrupting the execution of listeners when an event is fired. Let's see a couple of examples of how to attach to an event and how to fire one:

    // Registering an event listener
    $events = new EventManager();
    $events->attach(array('EVENT_NAME'), $callback);

    // Triggering an event
    $events->trigger('EVENT_NAME', $this, $params);

Inside the bootstrap method of Zend\Mvc\Application, we register the events of RouteListener, DispatchListener, and ViewManager. After that, the code instantiates a new custom event called MvcEvent that will be used as the target when firing events. Finally, this piece of code fires the bootstrap event.

ModuleManager

Zend Framework 2 introduces a completely redesigned ModuleManager. This new component has been built with simplicity, flexibility, and reuse in mind. Modules can hold everything from PHP to images, CSS, library code, views, and so on. The responsibility of this component in the bootstrap process of an app is loading the available modules specified by the config file. This is accomplished by the following line of code, located in the init method of Zend\Mvc\Application:

    $serviceManager->get('ModuleManager')->loadModules();

When executed, this line retrieves the list of modules located in the config file and loads each module. Each module has to contain a file called Module.php with the initialization of the components of the module, if needed. This allows the module manager to retrieve the configuration of the module. Let's see the usual content of this file:

    namespace MyModule;

    class Module
    {
        public function getAutoloaderConfig()
        {
            return array(
                'Zend\Loader\ClassMapAutoloader' => array(
                    __DIR__ . '/autoload_classmap.php',
                ),
                'Zend\Loader\StandardAutoloader' => array(
                    'namespaces' => array(
                        __NAMESPACE__ => __DIR__ . '/src/' . __NAMESPACE__,
                    ),
                ),
            );
        }

        public function getConfig()
        {
            return include __DIR__ . '/config/module.config.php';
        }
    }

As you can see, we define a method called getAutoloaderConfig() that provides the configuration for the autoloader to ModuleManager. The last method, getConfig(), is used to provide the configuration of the module to ModuleManager; for example, this will contain the routes handled by the module.

Request object

This object encapsulates all the data related to a request and allows the developer to interact with the different parts of the request. It is used in the constructor of Zend\Mvc\Application and is also set inside MvcEvent so it can be retrieved when events are fired.

Response object

This object encapsulates all the parts of an HTTP response and provides the developer with a fluent interface to set all the response data. It is used in the same way as the request object: it is instantiated in the constructor and added to MvcEvent to be able to interact with it across all the events and classes.

The request object

As we said, the request object encapsulates all the data related to a request and provides the developer with a fluent API to access that data. Let's take a look at the details of the request object in order to understand how to use it and what it can offer us:

    use Zend\Http\Request;

    $string = "GET /foo HTTP/1.1\r\n\r\nSome Content";
    $request = Request::fromString($string);

    $request->getMethod();
    $request->getUri();
    $request->getUriString();
    $request->getVersion();
    $request->getContent();

This example comes directly from the documentation and shows how a request object can be created from a string; we then access some data related to the request using the methods provided.
So, every time we need to know something related to the request, we will access this object to get the data we need. If we check the code in Zend\Http\PhpEnvironment\Request.php, the first thing we notice is that the data is populated in the constructor using the superglobal arrays. All this data is processed and then stored inside the object so it can be exposed in a standard way through methods. To manipulate the URI of the request you can get/set the data with three methods: two getters and one setter. The only difference between the getters is that one returns a plain string and the other returns an HttpUri object.

    getUri() and getUriString()
    setUri()

To retrieve the data passed in the request, there are a few specialized methods, depending on the data you want to get:

    getQuery()
    getPost()
    getFiles()
    getHeader() and getHeaders()

Regarding the request method, the object has a general way to know the method used, returning a string, plus nine specialized functions that test for specific methods based on RFC 2616, which defines the standard methods for an HTTP request.

    getMethod()
    isOptions()
    isGet()
    isHead()
    isPost()
    isPut()
    isDelete()
    isTrace()
    isConnect()
    isPatch()

Finally, two more methods are available in this object that test special requests, such as AJAX requests and requests made by a Flash object.

    isXmlHttpRequest()
    isFlashRequest()

Notice that when the data stored in the superglobal arrays is populated into the object, it is converted from an Array to a Parameters object. The Parameters object lives in the Stdlib section of ZF2, a folder where common objects used across the framework can be found. In this case, the Parameters class is an extension of ArrayObject and implements ParametersInterface, which brings ArrayAccess, Countable, Serializable, and Traversable functionality to the parameters stored inside the object. The goal of this object is to provide a common interface to access data stored in the superglobal arrays.
This expands the ways you can interact with that data, following an object-oriented approach.
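The Request::fromString() example above is easy to mimic in any language: split the raw message at the blank line, then split the request line into its parts. Here is a minimal Python sketch of that parsing (illustrative only, not ZF2's implementation; it handles just the request line and body, ignoring headers):

```python
def parse_request(raw):
    """Split a raw HTTP message into method, uri, version, and content."""
    head, _, content = raw.partition("\r\n\r\n")
    request_line = head.split("\r\n")[0]
    method, uri, protocol = request_line.split(" ")
    version = protocol.split("/")[1]  # "HTTP/1.1" -> "1.1"
    return {"method": method, "uri": uri,
            "version": version, "content": content}
```

Feeding it the same string used in the ZF2 example ("GET /foo HTTP/1.1\r\n\r\nSome Content") yields the values that getMethod(), getUri(), getVersion(), and getContent() would expose.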
Packt
24 Oct 2013
13 min read
Using Media Files – playing audio files

(For more resources related to this topic, see here.)

Playing audio files

JUCE provides a sophisticated set of classes for dealing with audio. This includes: sound file reading and writing utilities, interfacing with the native audio hardware, audio data conversion functions, and a cross-platform framework for creating audio plugins for a range of well-known host applications. Covering all of these aspects is beyond the scope of this article, but the examples in this section will outline the principles of playing sound files and communicating with the audio hardware. In addition to showing the audio features of JUCE, in this section we will also create the GUI and autogenerate some other aspects of the code using the Introjucer application.

Creating a GUI to control audio file playback

Create a new GUI application Introjucer project of your choice, selecting the option to create a basic window. In the Introjucer application, select the Config panel, and select Modules in the hierarchy. For this project we need the juce_audio_utils module (which contains a special Component class for configuring the audio device hardware); therefore, turn ON this module.

Even though we created a basic window and a basic component, we are going to create the GUI using the Introjucer application. Navigate to the Files panel and right-click (on the Mac, press control and click) on the Source folder in the hierarchy, and select Add New GUI Component… from the contextual menu. When asked, name the header MediaPlayer.h and click on Save.

In the Files hierarchy, select the MediaPlayer.cpp file. First select the Class panel and change the Class name from NewComponent to MediaPlayer. We will need four buttons for this basic project: a button to open an audio file, a Play button, a Stop button, and an audio device settings button. Select the Subcomponents panel, and add four TextButton components to the editor by right-clicking to access the contextual menu.
Space the buttons equally near the top of the editor, and configure each button as outlined in the following table:

    Purpose            member name     name      text               background (normal)
    Open file          openButton      open      Open...            Default
    Play/pause file    playButton      play      Play               Green
    Stop playback      stopButton      stop      Stop               Red
    Configure audio    settingsButton  settings  Audio Settings...  Default

Arrange the buttons as shown in the following screenshot:

For each button, access the mode pop-up menu for the width setting, and choose Subtracted from width of parent. This will keep the right-hand side of the buttons the same distance from the right-hand side of the window if the window is resized. There are more customizations to be done in the Introjucer project, but for now, make sure that you have saved the MediaPlayer.h file, the MediaPlayer.cpp file, and the Introjucer project before you open your native IDE project. Make sure that you have saved all of these files in the Introjucer application; otherwise the files may not get correctly updated in the file system when the project is opened in the IDE.

In the IDE we need to replace the MainContentComponent class code to place a MediaPlayer object within it. Change the MainComponent.h file as follows:

    #ifndef __MAINCOMPONENT_H__
    #define __MAINCOMPONENT_H__

    #include "../JuceLibraryCode/JuceHeader.h"
    #include "MediaPlayer.h"

    class MainContentComponent : public Component
    {
    public:
        MainContentComponent();
        void resized();

    private:
        MediaPlayer player;
    };

    #endif

Then, change the MainComponent.cpp file to:

    #include "MainComponent.h"

    MainContentComponent::MainContentComponent()
    {
        addAndMakeVisible (&player);
        setSize (player.getWidth(),
                 player.getHeight());
    }

    void MainContentComponent::resized()
    {
        player.setBounds (0, 0, getWidth(), getHeight());
    }

Finally, make the window resizable in the Main.cpp file, and build and run the project to check that the window appears as expected.

Adding audio file playback support

Quit the application and return to the Introjucer project.
Select the MediaPlayer.cpp file in the Files panel hierarchy and select its Class panel. The Parent classes setting already contains public Component. We are going to be listening for state changes from two of our member objects that are ChangeBroadcaster objects. To do this, we need our MediaPlayer class to inherit from the ChangeListener class. Change the Parent classes setting such that it reads:

    public Component, public ChangeListener

Save the MediaPlayer.h file, the MediaPlayer.cpp file, and the Introjucer project again, and open it in your IDE. Notice in the MediaPlayer.h file that the parent classes have been updated to reflect this change. For convenience, we are going to add some enumerated constants to reflect the current playback state of our MediaPlayer object, and a function to centralize the change of this state (which will, in turn, update the state of various objects, such as the text displayed on the buttons). The ChangeListener class also has one pure virtual function, which we need to add. Add the following code to the [UserMethods] section of MediaPlayer.h:

    //[UserMethods] -- You can add your own custom methods...
    enum TransportState {
        Stopped,
        Starting,
        Playing,
        Pausing,
        Paused,
        Stopping
    };

    void changeState (TransportState newState);
    void changeListenerCallback (ChangeBroadcaster* source);
    //[/UserMethods]

We also need some additional member variables to support our audio playback. Add these to the [UserVariables] section:

    //[UserVariables] -- You can add your own custom variables...
    AudioDeviceManager deviceManager;
    AudioFormatManager formatManager;
    ScopedPointer<AudioFormatReaderSource> readerSource;
    AudioTransportSource transportSource;
    AudioSourcePlayer sourcePlayer;
    TransportState state;
    //[/UserVariables]

The AudioDeviceManager object will manage our interface between the application and the audio hardware. The AudioFormatManager object will assist in creating an object that will read and decode the audio data from an audio file.
This object will be stored in the ScopedPointer<AudioFormatReaderSource> object. The AudioTransportSource object will control the playback of the audio file and perform any sampling rate conversion that may be required (if the sampling rate of the audio file differs from the audio hardware sampling rate). The AudioSourcePlayer object will stream audio from the AudioTransportSource object to the AudioDeviceManager object. The state variable will store one of our enumerated constants to reflect the current playback state of our MediaPlayer object.

Now add some code to the MediaPlayer.cpp file. In the [Constructor] section of the constructor, add the following two lines:

    playButton->setEnabled (false);
    stopButton->setEnabled (false);

This sets the Play and Stop buttons to be disabled (and grayed out) initially. Later, we enable the Play button once a valid file is loaded, and change the state of each button and the text displayed on the buttons, depending on whether the file is currently playing or not. In this [Constructor] section you should also initialize the AudioFormatManager as follows:

    formatManager.registerBasicFormats();

This allows the AudioFormatManager object to detect different audio file formats and create appropriate file reader objects. We also need to connect the AudioSourcePlayer, AudioTransportSource, and AudioDeviceManager objects together, and initialize the AudioDeviceManager object. To do this, add the following lines to the [Constructor] section:

    sourcePlayer.setSource (&transportSource);
    deviceManager.addAudioCallback (&sourcePlayer);
    deviceManager.initialise (0, 2, nullptr, true);

The first line connects the AudioTransportSource object to the AudioSourcePlayer object. The second line connects the AudioSourcePlayer object to the AudioDeviceManager object. The final line initializes the AudioDeviceManager object with:

The number of required audio input channels (0 in this case).
The number of required audio output channels (2 in this case, for stereo output).
An optional "saved state" for the AudioDeviceManager object (nullptr initializes from scratch).
Whether to open the default device if the saved state fails to open. As we are not using a saved state, this argument is irrelevant, but it is useful to set it to true in any case.

The final three lines to add to the [Constructor] section configure our MediaPlayer object as a listener to the AudioDeviceManager and AudioTransportSource objects, and set the current state to Stopped:

    deviceManager.addChangeListener (this);
    transportSource.addChangeListener (this);
    state = Stopped;

In the buttonClicked() function we need to add some code to the various sections. In the [UserButtonCode_openButton] section, add:

    //[UserButtonCode_openButton] -- add your button handler...
    FileChooser chooser ("Select a Wave file to play...",
                         File::nonexistent,
                         "*.wav");
    if (chooser.browseForFileToOpen()) {
        File file (chooser.getResult());
        readerSource = new AudioFormatReaderSource
            (formatManager.createReaderFor (file), true);
        transportSource.setSource (readerSource);
        playButton->setEnabled (true);
    }
    //[/UserButtonCode_openButton]

When the openButton button is clicked, this will create a FileChooser object that allows the user to select a file using the native interface for the platform. The types of files that are allowed to be selected are limited using the wildcard *.wav to allow only files with the .wav file extension to be selected. If the user actually selects a file (rather than cancels the operation), the code can call the FileChooser::getResult() function to retrieve a reference to the file that was selected. This file is then passed to the AudioFormatManager object to create a file reader object, which in turn is passed to create an AudioFormatReaderSource object that will manage and own this file reader object.
Finally, the AudioFormatReaderSource object is connected to the AudioTransportSource object and the Play button is enabled. The handlers for the playButton and stopButton objects will make a call to our changeState() function depending on the current transport state. We will define the changeState() function in a moment, where its purpose should become clear. In the [UserButtonCode_playButton] section, add the following code:

    //[UserButtonCode_playButton] -- add your button handler...
    if ((Stopped == state) || (Paused == state))
        changeState (Starting);
    else if (Playing == state)
        changeState (Pausing);
    //[/UserButtonCode_playButton]

This changes the state to Starting if the current state is either Stopped or Paused, and changes the state to Pausing if the current state is Playing. This is in order to have a button with combined play and pause functionality. In the [UserButtonCode_stopButton] section, add the following code:

    //[UserButtonCode_stopButton] -- add your button handler...
    if (Paused == state)
        changeState (Stopped);
    else
        changeState (Stopping);
    //[/UserButtonCode_stopButton]

This sets the state to Stopped if the current state is Paused, and sets it to Stopping in other cases. Again, we will add the changeState() function in a moment, where these state changes update various objects. In the [UserButtonCode_settingsButton] section add the following code:

    //[UserButtonCode_settingsButton] -- add your button handler...
    bool showMidiInputOptions = false;
    bool showMidiOutputSelector = false;
    bool showChannelsAsStereoPairs = true;
    bool hideAdvancedOptions = false;

    AudioDeviceSelectorComponent settings (deviceManager,
                                           0, 0, 1, 2,
                                           showMidiInputOptions,
                                           showMidiOutputSelector,
                                           showChannelsAsStereoPairs,
                                           hideAdvancedOptions);
    settings.setSize (500, 400);
    DialogWindow::showModalDialog (String ("Audio Settings"),
                                   &settings,
                                   TopLevelWindow::getTopLevelWindow (0),
                                   Colours::white,
                                   true);
    //[/UserButtonCode_settingsButton]

This presents a useful interface to configure the audio device settings.
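The two button handlers above together form a small state machine: the combined Play/Pause button moves between states depending on where the transport currently is. The transition logic, stripped of all JUCE specifics, can be sketched in a few lines of Python (class and method names are invented for the illustration):

```python
class Transport:
    """Minimal sketch of the Play/Pause/Stop button state logic."""
    def __init__(self):
        self.state = "Stopped"

    def play_clicked(self):
        # Combined play/pause button: start from Stopped or Paused,
        # pause while Playing.
        if self.state in ("Stopped", "Paused"):
            self.state = "Starting"
        elif self.state == "Playing":
            self.state = "Pausing"

    def stop_clicked(self):
        # From Paused the stop button acts as "Return to Zero";
        # otherwise it requests a stop.
        if self.state == "Paused":
            self.state = "Stopped"
        else:
            self.state = "Stopping"
```

The transitional states (Starting, Pausing, Stopping) exist because requesting a transport change and the change actually happening are separate events, as the next section explains.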
We need to add the changeListenerCallback() function to respond to changes in the AudioDeviceManager and AudioTransportSource objects. Add the following to the [MiscUserCode] section of the MediaPlayer.cpp file:

    //[MiscUserCode] You can add your own definitions...
    void MediaPlayer::changeListenerCallback (ChangeBroadcaster* src)
    {
        if (&deviceManager == src) {
            AudioDeviceManager::AudioDeviceSetup setup;
            deviceManager.getAudioDeviceSetup (setup);

            if (setup.outputChannels.isZero())
                sourcePlayer.setSource (nullptr);
            else
                sourcePlayer.setSource (&transportSource);
        } else if (&transportSource == src) {
            if (transportSource.isPlaying()) {
                changeState (Playing);
            } else {
                if ((Stopping == state) || (Playing == state))
                    changeState (Stopped);
                else if (Pausing == state)
                    changeState (Paused);
            }
        }
    }
    //[/MiscUserCode]

If our MediaPlayer object receives a message that the AudioDeviceManager object changed in some way, we need to check that this change wasn't to disable all of the audio output channels, by obtaining the setup information from the device manager. If the number of output channels is zero, we disconnect our AudioSourcePlayer object from the AudioTransportSource object (otherwise our application may crash) by setting the source to nullptr. If the number of output channels becomes nonzero again, we reconnect these objects. If our AudioTransportSource object has changed, this is likely to be a change in its playback state. It is important to note the difference between requesting the transport to start or stop, and this change actually taking place. This is why we created the enumerated constants for all the other states (including transitional states). Again, we issue calls to the changeState() function depending on the current value of our state variable and the state of the AudioTransportSource object.
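The ChangeBroadcaster/ChangeListener pair used here is an instance of the observer pattern: each broadcaster keeps a list of listeners and passes itself as the source when it notifies them, which is why changeListenerCallback() can compare the source pointer against deviceManager and transportSource. A minimal Python sketch of that mechanism (illustrative only, not JUCE's actual API):

```python
class ChangeBroadcaster:
    """Keeps a list of listeners and notifies them, passing itself as source."""
    def __init__(self):
        self._listeners = []

    def add_change_listener(self, listener):
        self._listeners.append(listener)

    def send_change_message(self):
        for listener in self._listeners:
            listener.change_listener_callback(self)


class RecordingListener:
    """Example listener that records which broadcaster fired."""
    def __init__(self):
        self.sources = []

    def change_listener_callback(self, source):
        # In MediaPlayer this is where source is compared against
        # deviceManager and transportSource to decide what changed.
        self.sources.append(source)
```

One listener can subscribe to several broadcasters; the source argument is what lets it tell them apart, exactly as MediaPlayer does with its two member objects.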
Finally, add the important changeState() function to the [MiscUserCode] section of the MediaPlayer.cpp file, which handles all of these state changes:

    void MediaPlayer::changeState (TransportState newState)
    {
        if (state != newState) {
            state = newState;

            switch (state) {
                case Stopped:
                    playButton->setButtonText ("Play");
                    stopButton->setButtonText ("Stop");
                    stopButton->setEnabled (false);
                    transportSource.setPosition (0.0);
                    break;

                case Starting:
                    transportSource.start();
                    break;

                case Playing:
                    playButton->setButtonText ("Pause");
                    stopButton->setButtonText ("Stop");
                    stopButton->setEnabled (true);
                    break;

                case Pausing:
                    transportSource.stop();
                    break;

                case Paused:
                    playButton->setButtonText ("Resume");
                    stopButton->setButtonText ("Return to Zero");
                    break;

                case Stopping:
                    transportSource.stop();
                    break;
            }
        }
    }

After checking that the newState value is different from the current value of the state variable, we update the state variable with the new value. Then, we perform the appropriate actions for this particular point in the cycle of state changes. These are summarized as follows:

In the Stopped state, the buttons are configured with the Play and Stop labels, the Stop button is disabled, and the transport is positioned at the start of the audio file.
In the Starting state, the AudioTransportSource object is told to start.
Once the AudioTransportSource object has actually started playing, the system is in the Playing state. Here we update the playButton button to display the text Pause, ensure the stopButton button displays the text Stop, and enable the Stop button.
If the Pause button is clicked, the state becomes Pausing, and the transport is told to stop.
Once the transport has actually stopped, the state changes to Paused, the playButton button is updated to display the text Resume, and the stopButton button is updated to display Return to Zero.
If the Stop button is clicked, the state is changed to Stopping, and the transport is told to stop.
Once the transport has actually stopped, the state changes to Stopped (as described in the first point).
If the Return to Zero button is clicked, the state is changed directly to Stopped (again, as previously described).
When the audio file reaches its end, the state is also changed to Stopped.

Build and run the application. You should be able to select a .wav audio file after clicking the Open... button; play, pause, resume, and stop the audio file using the respective buttons; and configure the audio device using the Audio Settings… button. The audio settings window allows you to select the input and output device, the sample rate, and the hardware buffer size. It also provides a Test button that plays a tone through the selected output device.

Summary

This article has covered a few of the techniques for dealing with audio files in JUCE. The article has given only an introduction to get you started; there are many other options and alternative approaches, which may suit different circumstances. The JUCE documentation will take you through each of these and point you to related classes and functions.

Resources for Article:

Further resources on this subject: Quick start – media files and XBMC [Article] Audio Playback [Article] Automating the Audio Parameters – How it Works [Article]
Packt
18 Oct 2013
6 min read
Parse Objects and Queries

(For more resources related to this topic, see here.)

In this article, we will learn how to work with Parse objects and write queries to set and get data from Parse. Every application has its own specific Application ID associated with the Client Key, which remains the same for all the applications of the same user. Parse is based on object-oriented principles. All the operations on Parse are done in the form of objects. Parse saves your data in the form of the objects you send, and helps you fetch the data in the same format again. In this article, you will learn about objects and the operations that can be performed on Parse objects.

Parse objects

All the data in Parse is saved in the form of PFObject. When you fetch any data from Parse by firing a query, the result will be in the form of PFObject. The detailed concept of PFObject is explained in the following section.

PFObject

Data stored on Parse is in the form of objects, and it's built around PFObject. PFObject can be defined as a key-value (dictionary format) pair of JSON data. Parse data is schemaless, which means that you don't need to specify ahead of time what keys exist on each PFObject. The Parse backend will take care of storing your data simply as a set of whatever key-value pairs you want. Let's say you are tracking the visited count of the username with a user ID using your application. A single PFObject could contain the following data:

    visitedCount: 1122, userName: "Jack Samuel", userId: 1232333332

Parse accepts only strings as keys. Values can be strings, numbers, Booleans, or even arrays and dictionaries: anything that can be JSON encoded. The class name of PFObject is used to distinguish different sorts of data; for example, you might call the user's object visitedCounts. Parse recommends that you write your class names NameYourClassLikeThis and your keys nameYourKeysLikeThis, just to make the code more readable.
As you have seen in the previous example, we used visitedCount to represent the visited count key.

Operations on Parse objects

You can perform save, update, and delete operations on Parse objects. The following is a detailed explanation of the operations that can be performed on Parse objects.

Saving objects

To save your User table on the Parse Cloud with additional fields, you need to follow a coding convention similar to the NSMutableDictionary methods. After updating the data, you have to call the saveInBackground method to save it on the Parse Cloud. Here is an example that shows how to save additional data on the Parse Cloud (note that the current user is obtained through the PFUser class):

    PFUser *userObject = [PFUser currentUser];
    [userObject setObject:[NSNumber numberWithInt:1122]
                   forKey:@"visitedCount"];
    [userObject setObject:@"Jack Samuel" forKey:@"userName"];
    [userObject setObject:@"1232333332" forKey:@"userId"];
    [userObject saveInBackground];

Just after executing the preceding piece of code, your data is saved on the Parse Cloud. You can check your data in the Data Browser of your application on Parse. It should look similar to the following:

    objectId: "xWMyZ4YEGZ", visitedCount: 1122, userName: "Jack Samuel",
    userId: "1232333332", createdAt: "2011-06-10T18:33:42Z",
    updatedAt: "2011-06-10T18:33:42Z"

There are two things to note here:

You don't have to configure or set up a new class called User before running your code. Parse will automatically create the class when it first encounters it.
There are also a few fields you don't need to specify; those are provided as a convenience. objectId is a unique identifier for each saved object. createdAt and updatedAt represent the time that each object was created and last modified in the Parse Cloud. Each of these fields is filled in by Parse, so they don't exist on PFObject until a save operation has completed.
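The convenience fields can be illustrated with a tiny in-memory store: a save operation accepts whatever keys the object carries and stamps objectId, createdAt, and updatedAt on completion. The following Python sketch is hypothetical (it is not the Parse SDK; the class and identifier scheme are invented for the example):

```python
import datetime
import uuid

class SchemalessStore:
    """Minimal schemaless store: any key-value pairs, server-filled metadata."""
    def __init__(self):
        self.classes = {}

    def save(self, class_name, obj):
        now = datetime.datetime.utcnow().isoformat() + "Z"
        if "objectId" not in obj:  # first save: create the object
            obj["objectId"] = uuid.uuid4().hex[:10]
            obj["createdAt"] = now
        obj["updatedAt"] = now  # every save refreshes updatedAt
        self.classes.setdefault(class_name, {})[obj["objectId"]] = obj
        return obj
```

Note that the caller never declares a schema: the first save of a GameScore object implicitly creates the GameScore class, mirroring Parse's behavior.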
You can provide additional logic to run after the success or failure of the operation using the saveInBackgroundWithBlock or saveInBackgroundWithTarget:selector: methods provided by Parse:

    [userObject saveInBackgroundWithBlock:^(BOOL succeeded, NSError *error) {
        if (succeeded)
            NSLog(@"Success");
        else
            NSLog(@"Error %@", error);
    }];

Fetching objects

Fetching saved data from the Parse Cloud is even easier than saving it. You can fetch a complete object from its objectId using PFQuery. Methods to fetch data from the cloud are asynchronous. You can implement this using either the block-based or the callback-based methods provided by Parse:

    PFQuery *query = [PFQuery queryWithClassName:@"GameScore"]; // 1
    [query getObjectInBackgroundWithId:@"xWMyZ4YEGZ"
                                 block:^(PFObject *gameScore, NSError *error) { // 2
        // Do something with the returned PFObject in the gameScore variable.
        int score = [[gameScore objectForKey:@"score"] intValue];
        NSString *playerName = [gameScore objectForKey:@"playerName"]; // 3
        BOOL cheatMode = [[gameScore objectForKey:@"cheatMode"] boolValue];
        NSLog(@"%@", gameScore);
    }];
    // The InBackground methods are asynchronous, so any code written after
    // this will be executed immediately. Code that depends on the query
    // result should be moved inside the completion block above.

Let's analyze each line, as follows:

Line 1: It creates a query object pointing to the class name given in the argument.
Line 2: It calls an asynchronous method on the query object created in line 1 to download the complete object for the objectId provided as an argument. As we are using the block-based method, we can provide code inside the block, which will execute on success or failure.
Line 3: It reads data from the PFObject that we got in response to the query.
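The asynchronous, block-based pattern used by getObjectInBackgroundWithId:block: can be sketched generically: the fetch runs on a worker thread and the caller's callback receives either the object or an error. This Python sketch is only an illustration of the pattern, not the Parse SDK (the store, function, and error message are invented):

```python
import threading

def get_object_in_background(store, object_id, callback):
    """Fetch object_id from store on a worker thread, then call callback(obj, error)."""
    def worker():
        obj = store.get(object_id)
        error = None if obj is not None else "object not found"
        callback(obj, error)

    t = threading.Thread(target=worker)
    t.start()
    return t  # the caller may join() if it needs to wait for completion
```

As with the Objective-C version, any code that depends on the result belongs inside the callback; statements after the call may run before the fetch completes.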
Parse provides some common values of all Parse objects as properties:

    NSString *objectId = gameScore.objectId;
    NSDate *updatedAt = gameScore.updatedAt;
    NSDate *createdAt = gameScore.createdAt;

To refresh the current Parse object, type:

    [myObject refresh];

This method can be called on any Parse object, and is useful when you want to refresh the data of the object. Let's say you want to re-authenticate a user; you can call the refresh method on the user object to refresh it.

Saving objects offline

Parse provides functions to save your data when the user is offline. When the user is not connected to the Internet, the data will be saved locally in the objects, and as soon as the user is connected to the Internet, the data will be saved automatically on the Parse Cloud. If your application is forcefully closed before establishing the connection, Parse will try again to save the object the next time the application is opened. For such operations, Parse provides the saveEventually method, so that you will not lose any data even when the user is not connected to the Internet. All saveEventually calls are eventually executed in the order in which the requests were made. The following code demonstrates the saveEventually call:

    // Create the object.
    PFObject *gameScore = [PFObject objectWithClassName:@"GameScore"];
    [gameScore setObject:[NSNumber numberWithInt:1337] forKey:@"score"];
    [gameScore setObject:@"Sean Plott" forKey:@"playerName"];
    [gameScore setObject:[NSNumber numberWithBool:NO] forKey:@"cheatMode"];
    [gameScore saveEventually];

Summary

In this article, we explored Parse objects and the way to query the data available on Parse. We started by exploring Parse objects and the ways to save these objects on the cloud. Finally, we learned about the queries that help us fetch the saved data from Parse.
Resources for Article: Further resources on this subject: New iPad Features in iOS 6 [Article] Creating a New iOS Social Project [Article] Installing Alfresco Software Development Kit (SDK) [Article]