Advanced Playbooks

Packt
24 Apr 2015
10 min read
In this article by Daniel Hall, author of the book Ansible Configuration Management - Second Edition, we start digging a bit deeper into playbooks. We will be covering the following topics:

- External data lookups
- Storing results
- Processing data
- Debugging playbooks

External data lookups

Ansible introduced lookup plugins in version 0.9. These plugins allow Ansible to fetch data from outside sources. Ansible provides several plugins, but you can also write your own. This really opens the doors and allows you to be flexible in your configuration.

Lookup plugins are written in Python and run on the controlling machine. They are executed in two different ways: direct calls and with_* keys. Direct calls are useful when you want to use them like you would use variables. Using the with_* keys is useful when you want to use them as loops.

In the next example, we use a lookup plugin directly to get the http_proxy value from the environment and send it through to the configured machine. This makes sure that the machines we are configuring will use the same proxy server to download the file.

---
- name: Downloads a file using a proxy
  hosts: all
  tasks:
    - name: Download file
      get_url:
        dest: /var/tmp/file.tar.gz
        url: http://server/file.tar.gz
      environment:
        http_proxy: "{{ lookup('env', 'http_proxy') }}"

You can also use lookup plugins in the variables section. This doesn't immediately look up the result and put it in the variable, as you might assume; instead, it stores it as a macro and looks it up every time you use it. This is good to know if you are using something whose value might change over time.

Using lookup plugins in the with_* form allows you to iterate over things you wouldn't normally be able to. You can use any plugin like this, but ones that return a list are most useful. In the following code, we show how to dynamically register a webapp farm.

---
- name: Registers the app server farm
  hosts: localhost
  connection: local
  vars:
    hostcount: 5
  tasks:
    - name: Register the webapp farm
      local_action: add_host name={{ item }} groupname=webapp
      with_sequence: start=1 end={{ hostcount }} format=webapp%02x

If you were using this example, you would append a task to create each as a virtual machine and then a new play to configure each of them.

Situations where lookup plugins are useful include the following:

- Copying a whole directory of Apache config to a conf.d style directory
- Using environment variables to adjust what the playbook does
- Getting configuration from DNS TXT records
- Fetching the output of a command into a variable

Storing results

Almost every module outputs something, even the debug module. Most of the time, the only variable used is the one named changed. The changed variable helps Ansible decide whether to run handlers or not and which color to print the output in. However, if you wish to, you can store the returned values and use them later in the playbook. In this example, we look at the mode of the /tmp directory and create a new directory named /tmp/subtmp with the same mode, as shown here.
---
- name: Using register
  hosts: ansibletest
  user: root
  tasks:
    - name: Get /tmp info
      file:
        dest: /tmp
        state: directory
      register: tmp

    - name: Set mode on /tmp/subtmp
      file:
        dest: /tmp/subtmp
        mode: "{{ tmp.mode }}"
        state: directory

Some modules, such as the file module in the previous example, can be configured to simply give information. By combining this with the register feature, you can create playbooks that examine the environment and calculate how to proceed.

Combining the register feature with the set_fact module allows you to process the data you receive back from modules. This lets you compute values and perform data processing on them, making your playbooks even smarter and more flexible than ever.

Register allows you to make your own facts about hosts from modules already available to you. This can be useful in many different circumstances:

- Getting a list of files in a remote directory and downloading them all with fetch
- Running a task when a previous task changes, before the handlers run
- Getting the contents of the remote host's SSH key and building a known_hosts file

Processing data

Ansible uses Jinja2 filters to allow you to transform data in ways that aren't possible with basic templates. We use filters when the data available to us in our playbooks is not in the format we want, or requires further complex processing before it can be used with modules or templates. Filters can be used anywhere we would normally use a variable, such as in templates, as arguments to modules, and in conditionals.

Filters are used by providing the variable name, a pipe character, and then the filter name. Multiple filter names separated by pipe characters can be chained, and they are applied from left to right. Here is an example where we ensure that all users are created with lowercase usernames:

---
- name: Create user accounts
  hosts: all
  vars:
    users:
  tasks:
    - name: Create accounts
      user: name={{ item|lower }} state=present
      with_items:
        - Fred
        - John
        - DanielH

Here are a few popular filters that you may find useful:

- min: When the argument is a list, it returns only the smallest value.
- max: When the argument is a list, it returns only the largest value.
- random: When the argument is a list, it picks a random item from the list.
- changed: When used on a variable created with the register keyword, it returns true if the task changed anything; otherwise, it returns false.
- failed: When used on a variable created with the register keyword, it returns true if the task failed; otherwise, it returns false.
- skipped: When used on a variable created with the register keyword, it returns true if the task was skipped; otherwise, it returns false.
- default(X): If the variable does not exist, then the value of X will be used instead.
- unique: When the argument is a list, it returns a list without any duplicate items.
- b64decode: Converts the base64-encoded string in the variable to its binary representation. This is useful with the slurp module, as it returns its data as a base64-encoded string.
- replace(X, Y): Returns a copy of the string with any occurrences of X replaced by Y.
- join(X): When the variable is a list, returns a string with all the entries separated by X.
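To give a feel for how these filters chain together, here is a small, illustrative sketch (not from the book) that combines default, unique, and join to turn an optional list variable into a comma-separated string; the packages variable and its values are assumptions for the example:

---
- name: Demonstrate chaining filters
  hosts: localhost
  connection: local
  vars:
    packages:
      - httpd
      - httpd
      - vim
  tasks:
    - name: Show a de-duplicated, comma-separated package list
      debug:
        # default([]) guards against the variable being undefined, unique drops
        # the duplicate entry, and join glues the list into a single string
        msg: "Packages to install: {{ packages|default([])|unique|join(', ') }}"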
Debugging playbooks

There are a few ways in which you can debug a playbook. Ansible includes both a verbose mode and a debug module specifically for debugging. You can also use modules such as fetch and get_url for help. These debugging techniques can also be used to examine how modules behave when you wish to learn how to use them.

The debug module

Using the debug module is really quite simple. It takes two optional arguments, msg and fail. msg sets the message that will be printed by the module, and fail, if set to yes, indicates a failure to Ansible, which will cause it to stop processing the playbook for that host. We used this module earlier in the skipping modules section to bail out of a playbook if the operating system was not recognized.

In the following example, we show how to use the debug module to list all the interfaces available on the machine:

---
- name: Demonstrate the debug module
  hosts: ansibletest
  user: root
  vars:
    hostcount: 5
  tasks:
    - name: Print interface
      debug:
        msg: "{{ item }}"
      with_items: ansible_interfaces

The preceding code gives the following output:

PLAY [Demonstrate the debug module] *********************************

GATHERING FACTS *****************************************************
ok: [ansibletest]

TASK: [Print interface] *********************************************
ok: [ansibletest] => (item=lo) => {"item": "lo", "msg": "lo"}
ok: [ansibletest] => (item=eth0) => {"item": "eth0", "msg": "eth0"}

PLAY RECAP **********************************************************
ansibletest               : ok=2   changed=0   unreachable=0   failed=0

As you can see, the debug module is an easy way to see the current value of a variable during the play.

The verbose mode

Your other option for debugging is the verbose option. When running Ansible with verbose, it prints out all the values that were returned by each module after it runs. This is especially useful if you are using the register keyword introduced in the previous section. To run ansible-playbook in verbose mode, simply add --verbose to your command line as follows:

ansible-playbook --verbose playbook.yml

The check mode

In addition to the verbose mode, Ansible also includes a check mode and a diff mode. You can use the check mode by adding --check to the command line, and --diff to use the diff mode. The check mode instructs Ansible to walk through the play without actually making any changes to remote systems. This allows you to obtain a listing of the changes that Ansible plans to make to the configured system.

It is important to note that the check mode of Ansible is not perfect. Any modules that do not implement the check feature are skipped. Additionally, if a skipped module would have provided more variables, or if variables depend on a module actually changing something (such as file size), then they will not be available. This is an obvious limitation when using the command or shell modules.

The diff mode shows the changes that will be made by the template module. It is limited to the template module because diffs are only meaningful for text files; if you were to provide a diff of a binary file from the copy module, the result would be almost unreadable. The diff mode also works with the check mode to show you the planned changes that were not made due to being in check mode.

The pause module

Another technique is to use the pause module to pause the playbook while you examine the configured machine as it runs. This way, you can see the changes that the modules have made at the current position in the play, and then watch while it continues with the rest of the play.
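As a small, illustrative sketch (not from the book) of this technique, the following play creates a directory and then pauses so you can inspect it on the host before the play continues; the path and prompt text are assumptions for the example:

---
- name: Pause for manual inspection
  hosts: ansibletest
  user: root
  tasks:
    - name: Make a change worth inspecting
      file:
        dest: /tmp/inspect-me
        state: directory

    # The play stops here until you press Enter (or Ctrl+C to abort),
    # giving you time to log in to the host and examine the change.
    - name: Wait for the operator
      pause:
        prompt: "Check /tmp/inspect-me on the host, then press Enter to continue"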
Summary

In this article, we explored the more advanced details of writing playbooks. You should now be able to use features such as delegation, looping, conditionals, and fact registration to make your plays much easier to maintain and edit. We also looked at how to access information from other hosts, configure the environment for a module, and gather data from external sources. Finally, we covered some techniques for debugging plays that are not behaving as expected.

Project Management

Packt
24 Apr 2015
17 min read
In this article by Patrick Li, author of the book JIRA Essentials - Third Edition, we will start with a high-level view of how data is structured in JIRA. We will then take a look at the various user interfaces that JIRA provides for working with projects, both as an administrator and as an everyday user. We will also introduce permissions for the first time in the context of projects. In this article, you will learn the following:

- How JIRA structures content
- Different user interfaces for project management in JIRA
- How to create new projects in JIRA
- How to import data from other systems into JIRA
- How to manage and configure a project
- How to manage components and versions

The JIRA hierarchy

Like most other information systems, JIRA organizes its data in a hierarchical structure. At the lowest level, we have fields, which are used to hold raw information. At the next level up, we have issues, which are like units of work to be performed. An issue belongs to a project, which defines the context of the issue. Finally, we have project categories, which logically group similar projects together.

Project category

A project category is a logical grouping for projects, usually of a similar nature. Project categories are optional; projects do not have to belong to any category in JIRA. When a project does not belong to any category, it is considered uncategorized. The categories themselves do not contain any information; they serve as a way to organize all your projects in JIRA, especially when you have many of them.

Project

In JIRA, a project is a collection of issues. Projects provide the background context for issues by letting users know where issues should be created. Users will be members of a project, working on issues in the project. Most configurations in JIRA, such as permissions and screen layouts, are applied at the project level. It is important to remember that projects are not limited to software development projects that need to deliver a product. They can be anything logical, such as the following:

- Company department or team
- Software development projects
- Products or systems
- A risk register

Issue

Issues represent work to be performed. From a functional perspective, an issue is the base unit for JIRA. Users create issues and assign them to other people to be worked on. Project leaders can generate reports on issues to see how everything is tracking. In a sense, you can say JIRA is issue-centric. Here, you just need to remember three things:

- An issue can belong to only one project
- There can be many different types of issues
- An issue contains many fields that hold values for the issue

Field

Fields are the most basic unit of data in JIRA. They hold data for issues and give meaning to them. Fields in JIRA can be broadly categorized into two distinct categories, namely system fields and custom fields. They come in many different forms, such as text fields, drop-down lists, and user pickers. Here, you just need to remember three things:

- Fields hold values for issues
- Fields can have behaviors (hidden or mandatory)
- Fields can have a view and structure (text field or drop-down list)

Project permissions

Before we start working with projects in JIRA, we need to first understand a little bit about permissions. We will briefly talk about the permissions related to creating and deleting, administering, and browsing projects.
In JIRA, users with the JIRA Administrator permission are able to create and delete projects. By default, users in the jira-administrators group have this permission, so the administrator user we created during the installation process will be able to create new projects. We will be referring to this user, and any other users with this permission, as the JIRA Administrator.

For any given project, users with the Administer Project permission for that project are able to administer the project's configuration settings. This allows them to update the project's details, manage versions and components, and decide who will be able to access the project. We will be referring to users with this permission as the Project Administrator. By default, the JIRA Administrator has this permission.

If a user needs to browse the contents of a given project, then he must have the Browse Project permission for that project. This means that the user will have access to the Project Browser interface for the project. By default, the JIRA Administrator has this permission.

As you have probably realized already, one of the key differences between the three permissions is that the JIRA Administrator permission is global across all projects in JIRA, whereas the Administer Project and Browse Project permissions are project-specific. A user may have the Administer Project permission for project A, but only the Browse Project permission for project B. This separation of permissions allows you to set up your JIRA instance in such a way that you can effectively delegate permission controls: you can still centralize control over who can create and delete projects without becoming over-burdened by having to manually manage each project's own settings. Now, with this in mind, let's first take a quick look at JIRA from the JIRA Administrator user's view.

Creating projects

The easiest way to create a new project is to select the Create Project option from the Projects drop-down menu in the top navigation bar. This will bring up the create project dialog. Note that, as explained earlier, you need to be a JIRA Administrator (such as the user we created during installation) to create projects; this option is only available if you have the permission.

When creating a new project in JIRA, we first need to select the type of project we want to create from a list of project templates. A project template, as the name suggests, acts as a blueprint for the project. Each project template has a predefined set of configurations, such as issue types and workflows. For example, if we select the Simple Issue Tracking project template and click on the Next button, JIRA will show us the issue types and workflow for the Simple Issue Tracking template. If we select a different template, then a different set of configurations will be applied. For those who have been using JIRA since JIRA 5 or earlier, JIRA Classic is the template that has the classic issue types and the classic JIRA workflow. Clicking on the Select button will accept and select the project template.

For the last step, we need to provide the new project's details. JIRA will help you validate the details, such as making sure the project key conforms to the configured format. After filling in the project details, click on the Submit button to create the new project. The following list describes the information you need to provide when creating a new project:

- Name: A unique name for the project.
- Key: A unique identity key for the project. As you type the name of your project, JIRA will auto-fill the key based on the name, but you can replace the autogenerated key with one of your own. Starting from JIRA 6.1, the project key cannot be changed after the project is created. The project key also becomes the first part of the issue key for issues created in the project.
- Project Lead: The lead of the project, which can be used to auto-assign issues. Each project can have only one lead. This option is available only if you have more than one user in JIRA.

Changing the project key format

When creating new projects, you may find that the project key needs to be in a specific format and length. By default, the project key needs to adhere to the following criteria:

- Contain at least two characters
- Be no more than 10 characters in length
- Contain only letters, that is, no numbers

You can change the default format to have less restrictive rules. These changes are for advanced users only. First, to change the project key length, perform the following steps:

1. Browse to the JIRA Administration console.
2. Select the System tab and then the General Configuration option.
3. Click on the Edit Settings button.
4. Change the value for the Maximum project key size option to a value between 2 and 255 (inclusive), and click on the Update button to apply the change.

Changing the project key format is a bit more complicated. JIRA uses a regular expression to define what the format should be. To change the project key format, use the following steps:

1. Browse to the JIRA Administration console.
2. Select the System tab and then the General Configuration option.
3. Click on the Advanced Settings button.
4. Hover over and click on the value (([A-Z][A-Z]+)) for the jira.projectkey.pattern option.
5. Enter the new regular expression for the project key format, and click on Update.

There are a few rules when it comes to setting the project key format:

- The key must start with a letter
- All letters must be uppercase, that is, A-Z
- Only letters, numbers, and the underscore character can be used

Importing data into JIRA

JIRA supports importing data directly from many popular issue-tracking systems, such as Bugzilla, GitHub, and Trac. All the importers have a wizard-driven interface that guides you through a series of steps. These steps are mostly identical, with a few differences. Generally speaking, there are four steps when importing data into JIRA:

1. Select your source data. For example, if you are importing from CSV, you select the CSV file; if you are importing from Bugzilla, you provide the Bugzilla database details.
2. Select a destination project where the imported issues will go. This can be an existing project or a new project created on the fly.
3. Map old system fields to JIRA fields.
4. Map old system field values to JIRA field values. This is usually required for select-based fields, such as the priority field, or select list custom fields.

Importing data through CSV

JIRA comes with a CSV importer, which lets you import data in the comma-separated value format. This is a useful tool if you want to import data from a system that is not directly supported by JIRA, since most systems are able to export their data in CSV. It is recommended to do a trial import on a test instance first. To start the import wizard, select the Import External Project option from the Projects drop-down menu, and then click on the Import from Comma-separated Values (CSV) importer option.
First, you need to select the CSV file that contains the data you want to import by clicking on the Choose File button. After you have selected the source file, you can also expand the Advanced section to select the file encoding and the delimiter used in the CSV file. There is also a Use an existing configuration option, which we will talk about later in this section. Click on the Next button to proceed.

For the second step, you need to select the project you want to import the data into. You can also select the Create New option to create a new project on the fly. If your CSV file contains date-related data, make sure you enter the format used in the Date format field. Click on the Next button to proceed.

For the third step, you need to map the CSV fields to the fields in JIRA. Not all fields need to be mapped. If you do not want to import a particular field, simply leave it as Don't map this field for the corresponding JIRA field selection. For fields that contain data that needs to be mapped manually, such as select list fields, you need to check the Map field value option. This will let you map the CSV field value to the JIRA field value so that they can be imported correctly. If you do not manually map these values, they will be copied over as is. Click on the Next button to proceed.

For the last step, you need to map the CSV field values to the JIRA field values. This step is only required if you checked the Map field value option for a field in the previous step. Enter the JIRA field value for each CSV field value. Once you are done with mapping field values, click on the Begin Import button to start the actual import process. Depending on the size of your data, this may take some time to complete.

Once the import process completes, you will get a confirmation message that tells you the number of issues that have been imported. This number should match the number of records you have in the CSV file. On the last confirmation screen, you can click on the download a detailed log link to download the full log file containing all the information for the import process. This is particularly useful if the import was not successful. You can also click on the save the configuration link, which will generate a text file containing all the mappings you have done for this import. If you need to run a similar import in the future, you can use this configuration file so that you will not need to manually re-map everything again. To use the configuration file, check the Use an existing configuration file option in the first step.

As we can see, JIRA's project importer makes importing data from other systems simple and straightforward. However, you must not underestimate its complexity. For any data migration, especially if you are moving off one platform and onto a new one such as JIRA, there are a number of factors you need to consider and prepare for. The following list summarizes some of the common tasks for most data migrations:

- Evaluate the size and impact. This includes how many records you will be importing and the number of users that will be affected.
- Perform a full gap analysis between the old system and JIRA, such as how the fields will map from one to the other.
- Set up test environments where you can run test imports to make sure your mappings are done correctly.
- Involve your end users as early as possible, and have them review your test results.
- Prepare and communicate any outages and the support procedure post-migration.
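To make the walkthrough more concrete, here is a small, purely illustrative CSV file of the kind the importer can consume; the column names, values, and date format are assumptions for the example and simply need to match whatever you map in the wizard:

Summary,Priority,Created,Description
"Printer on level 2 is offline",High,16/03/2015 14:30,"Reported by reception"
"New starter needs a laptop",Medium,17/03/2015 09:05,"Hardware request for the new hire"

With a file like this, Summary and Description would typically be mapped to the corresponding JIRA system fields, Priority would be a good candidate for the Map field value option, and the Created column would only import correctly if the Date format field is set to a matching pattern such as dd/MM/yyyy HH:mm.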
Project user interfaces

There are two distinct interfaces for projects in JIRA. The first, called the Project Browser, is designed for everyday users and provides useful information on how the project is going, with graphs and statistics. The second, called Project Administration, is designed for project administrators to control project configuration settings, such as permissions and workflows.

The Help Desk project

In this exercise, we will be setting up a project for our support teams:

- A new project category for all support teams
- A new project for our help desk support team
- Components for the systems supported by the team
- Versions to better manage issues created by users

Creating a new project category

Let's start by creating a project category. We will create a category for all of our internal support teams and their respective support JIRA projects. Please note that this step is optional, as JIRA does not require a project to belong to a project category:

1. Log in to JIRA with a user who has the JIRA Administrator permission.
2. Browse to the JIRA administration console.
3. Select the Projects tab and then Project Categories.
4. Fill in the fields for the new category.
5. Click on Add to create the new project category.

Creating a new project

Now that we have a project category, let's create a project for our help desk support team. To create a new project, perform the following steps:

1. Bring up the create project dialog by selecting the Create Project option from the Projects drop-down menu.
2. Select the Simple Issue Tracking project template.
3. Name the new project Global Help Desk and accept the default values for Key and Project Lead.
4. Click on the Submit button to create the new project.

You should now be taken to the Project Browser interface of your new project.

Assigning a project to a category

Having created the new project, you need to assign it to your project category, which you can do from the Project Administration interface:

1. Select the Administration tab.
2. Click on the None link next to Category, at the top left of the page, right underneath the project's name.
3. Select the new Support project category we just created.
4. Click on Select to assign the project.

Creating new components

As discussed in the earlier sections, components are subsections of a project. This makes logical sense for a software development project, where each component represents a deliverable software module. For other types of projects, components may at first appear useless or inappropriate. It is true that components are not for every type of project out there, and this is the reason why you are not required to have them by default. Just like everything else in JIRA, all the features come down to how well you can map them to your business needs.

The power of a component is more than just a flag field for an issue. For example, let's imagine that the company you are working for has a range of systems that need to be supported. These may range from phone systems and desktop computers to other business applications. Let's also assume that our support team needs to support all of these systems. That is a lot of systems to support. To help manage and delegate support for these systems, we will create a component for each of the systems that the help desk team supports. We will also assign a lead for each of the components.
This setup allows us to establish a structure where the Help Desk project is led by the support team lead, and each component is led by its respective system expert (who may or may not be the same as the team lead). This allows for a very flexible management process when we start wiring in other JIRA features, such as notification schemes:

1. From the Project Administration interface, select the Components tab.
2. Type Internal Phone System for the new component's name.
3. Provide a short description for the new component.
4. Select a user to be the lead of the component.
5. Click on Add to create the new component.
6. Add a few more components.

Putting it together

Now that you have fully prepared your project, let's see how everything comes together by creating an issue. If everything is done correctly, you should see a dialog box where you can choose your new project to create the issue in, along with the new components that are available for selection:

1. Click on the Create button in the top navigation bar. This will bring up the Create Issue dialog box.
2. Select Global Help Desk for Project.
3. Select Task for Issue Type, and click on the Next button.
4. Fill in the fields with some dummy data. Note that the Component/s field should display the components we just created.
5. Click on the Create button to create the issue.

You can test out the default assignee feature by leaving the Assignee field as Automatic and selecting a component; JIRA will automatically assign the issue to the default assignee defined for the component. If everything goes well, the issue will be created in the new project.

Summary

In this article, we looked at one of the most important concepts in JIRA, projects, and how to create and manage them. Permissions were introduced for the first time, and we looked at three permissions related to creating and deleting, administering, and browsing projects. We were also introduced to the two interfaces JIRA provides for project administrators and everyday users: the Project Administration interface and the Project Browser interface, respectively.

Profiling an app

Packt
24 Apr 2015
9 min read
This article is written by Cecil Costa, the author of the book Swift Cookbook. We'll delve into what profiling is and how we can profile an app by following some simple steps.

It's very common to hear about issues, but if an app doesn't have any obvious issue, it doesn't mean that it is working fine. Imagine that you have a program with a memory leak; you probably won't find any problem using it for 10 minutes, but a user may find it after using the app for a few days. Don't think that this sort of thing is impossible; remember that iOS apps don't terminate, so if you do have memory leaks, they will be kept until your app blows up.

Performance is another important, common topic. What if your app looks okay, but it gets slower with the passing of time? We, therefore, have to be aware of this problem. This kind of test is called profiling, and Xcode comes with a very good tool for performing this operation, called Instruments. In this instance, we will profile our app to visualize the amount of energy wasted by the app and, of course, try to reduce it.

Getting ready

For this recipe you will need a physical device, and to install your app onto the device you will need to be enrolled in the Apple Developer Program. If you meet both requirements, the next thing you have to do is create a new project called Chapter 7 Energy.

How to do it...

To profile an app, follow these steps:

1. Before we start coding, we need to add frameworks to the project. Click on the Build Phases tab of your project, go to the Link Binary with Libraries section, and press the plus sign. Once Xcode opens a dialog window asking for the framework to add, choose CoreLocation and MapKit.

2. Now, go to the storyboard and place a label and a MapKit view. Link the MapKit view and call it just map, and link the UILabel and call it just label:

   @IBOutlet var label: UILabel!
   @IBOutlet var map: MKMapView!

3. Continue with the view controller; click at the beginning of the file to add the Core Location and MapKit imports:

   import CoreLocation
   import MapKit

4. After this, initialize the location manager object in the viewDidLoad method:

   override func viewDidLoad() {
       super.viewDidLoad()
       locationManager.delegate = self
       locationManager.desiredAccuracy = kCLLocationAccuracyBest
       locationManager.requestWhenInUseAuthorization()
       locationManager.startUpdatingLocation()
   }

5. At the moment, you may get an error because your view controller doesn't conform to CLLocationManagerDelegate, so go to the header of the view controller class and specify that it implements this protocol. Another error we have to deal with is the locationManager variable, because it is not declared. Therefore, we have to create it as an attribute.
6. As we are declaring attributes, we will also add the geocoder, which will be used later:

   class ViewController: UIViewController, CLLocationManagerDelegate {
       var locationManager = CLLocationManager()
       var geocoder = CLGeocoder()

7. Before we implement the method that receives the positioning, let's create another method to detect whether there was any authorization error:

   func locationManager(manager: CLLocationManager!, didChangeAuthorizationStatus status: CLAuthorizationStatus) {
       var locationStatus: String
       switch status {
       case CLAuthorizationStatus.Restricted:
           locationStatus = "Access: Restricted"
           break
       case CLAuthorizationStatus.Denied:
           locationStatus = "Access: Denied"
           break
       case CLAuthorizationStatus.NotDetermined:
           locationStatus = "Access: NotDetermined"
           break
       default:
           locationStatus = "Access: Allowed"
       }
       NSLog(locationStatus)
   }

8. Then, we can implement the method that will update our location:

   func locationManager(manager: CLLocationManager, didUpdateLocations locations: [AnyObject]) {
       if locations[0] is CLLocation {
           let location: CLLocation = locations[0] as CLLocation
           self.map.setRegion(MKCoordinateRegionMakeWithDistance(location.coordinate, 800, 800), animated: true)
           geocoder.reverseGeocodeLocation(location, completionHandler: { (addresses, error) -> Void in
               let placeMarket: CLPlacemark = addresses[0] as CLPlacemark
               let curraddress: String = (placeMarket.addressDictionary["FormattedAddressLines"] as [String])[0] as String
               self.label.text = "You are at \(curraddress)"
           })
       }
   }

9. Before you test the app, there is still another step to follow. In your project navigator, click to expand the supporting files, and then click on Info.plist. Add a row by right-clicking on the list and selecting add row. On this new row, type NSLocationWhenInUseUsageDescription as the key and Permission required as the value.

10. Now, select a device, install the app onto it, and test the application by walking around your street (or walking around planet Earth if you want); you will see that the label changes and the map displays your current position.

11. Go back to your computer and plug the device in again. Instead of clicking on play, hold the play button until you see more options, and then click on the Profile option.

12. The next thing that will happen is that Instruments opens; a dialog will probably pop up asking for an administrator account. This is due to the fact that Instruments needs some special permissions to access low-level information. In the next dialog, you will see different kinds of instruments; some of them are OS X specific, some are iOS specific, and others are for both. If you choose an instrument for the wrong platform, the record button will be disabled. For this recipe, click on Energy Diagnostics.
13. Once the Energy Diagnostics window is open, you can click on the record button, which is in the upper-left corner, and try to move around. Yes, you need to keep the device connected to your computer, so you have to move around with both together. Perform some actions with your device, such as pressing the home button and turning off the screen.

14. Now, you can analyze what is spending more energy in your app. To get a better idea of this, go to your code and replace the constant kCLLocationAccuracyBest with kCLLocationAccuracyThreeKilometers, then check whether you have saved some energy.

How it works...

Instruments is a tool used for profiling your application. It gives you information about your app that can't be retrieved by code, or at least can't be retrieved easily. You can check whether your app has memory leaks, whether it is losing performance, and, as you can see, whether it is wasting lots of energy or not.

In this recipe we used the GPS because it is a sensor that requires some energy. You can also check the table at the bottom of the instrument to see which Internet requests were completed, which is something that will also drain your battery quickly if done very frequently.

Something you might be asking is: why did we have to change Info.plist? Since iOS 8, some sensors require user permission; the GPS is one of them, so you need to provide the message that will be shown to the user.

There's more...

I recommend that you read about the way instruments work, mainly those that you will use. Check the Apple documentation about Instruments to get more details (https://developer.apple.com/library/mac/documentation/DeveloperTools/Conceptual/InstrumentsUserGuide/Introduction/Introduction.html).

Summary

In this article, we looked at the hows and whats of profiling an app. We specifically looked at profiling our app to visualize the amount of energy it wastes. So, go ahead and try it.

Solr Indexing Internals

Packt
23 Apr 2015
9 min read
In this article by Jayant Kumar, author of the book Apache Solr Search Patterns, we will discuss use cases for Solr in e-commerce and job sites. We will look at the problems faced while providing search on an e-commerce or job site:

- The e-commerce problem statement
- The job site problem statement
- Challenges of large-scale indexing

The e-commerce problem statement

E-commerce provides an easy way to sell products to a large customer base. However, there is a lot of competition among multiple e-commerce sites. When users land on an e-commerce site, they expect to find what they are looking for quickly and easily. Also, users are not always sure about the brands or the actual products they want to purchase; they often have only a broad idea about what they want to buy. Many customers nowadays search for their products on Google rather than visiting specific e-commerce sites, believing that Google will take them to the e-commerce sites that have their product.

The purpose of any e-commerce website is to help customers narrow down their broad ideas and enable them to finalize the products they want to purchase. For example, suppose a customer is interested in purchasing a mobile. His or her search for a mobile should list mobile brands, operating systems on mobiles, screen sizes, and all other features as facets. As the customer selects more and more features or options from the facets provided, the search narrows down to a small list of mobiles that suit his or her choice. If the list is small enough and the customer likes one of the mobiles listed, he or she will make the purchase.

The challenge is that each category will have a different set of facets to be displayed. For example, searching for books should display facets such as format (paperback or hardcover), author name, book series, language, and other facets related to books. These facets are different from the mobile facets we discussed earlier. Similarly, each category will have different facets, and they need to be designed properly so that customers can narrow down to their preferred products, irrespective of the category they are looking into. The takeaway from this is that categorization and feature listing of products should be taken care of; misrepresentation of features can lead to incorrect search results.

Another takeaway is that we need to provide multiple facets in the search results. For example, while displaying the list of all mobiles, we need to provide facets for brand. Once a brand is selected, another set of facets for operating systems, network, and mobile phone features has to be provided. As more and more facets are selected, we still need to show facets within the remaining products.

(Figure: Example of facet selection on Amazon.com)

Another problem is that we do not know what product the customer is searching for. A site that displays a huge list of products from different categories, such as electronics, mobiles, clothes, or books, needs to be able to identify what the customer is searching for. A customer can search for samsung, which can be in mobiles, tablets, electronics, or computers. The site should be able to identify whether the customer has entered an author name or a book name. Identifying the input helps increase the relevance of the result set by increasing the precision of the search results. Most e-commerce sites provide search suggestions that include the category to help customers target the right category during their search.
Amazon, for example, provides search suggestions that include both recently searched terms and products, along with category-wise suggestions.

(Figure: Search suggestions on Amazon.com)

It is also important that products are added to the index as soon as they are available. It is even more important that they are removed from the index, or marked as sold out, as soon as their stock is exhausted. For this, modifications to the index should be immediately visible in the search. This is facilitated by a concept in Solr known as Near Real Time Indexing and Search (NRT).

The job site problem statement

A job site serves a dual purpose. On the one hand, it provides jobs to candidates, and on the other, it serves as a database of registered candidates' profiles for companies to shortlist from. A job search has to be very intuitive for candidates so that they can find jobs suiting their skills, position, industry, role, and location, or even search by company name. As it is important to keep candidates engaged during their job search, it is important to provide facets on the abovementioned criteria so that they can narrow down to the job of their choice. The searches by candidates are not very elaborate. If the search is generic, the results need to have high precision. On the other hand, if the search does not return many results, then recall has to be high to keep the candidate engaged on the site. Providing a personalized job search to candidates on the basis of their profiles and past search history also makes sense.

On the recruiter side, the search provided over the candidate database needs a huge set of fields covering every data point that the candidate has entered. Recruiters are very selective when it comes to searching for candidates for specific jobs. Educational qualification, industry, function, key skills, designation, location, and experience are some of the fields provided to the recruiter during a search. In such cases, the precision has to be high. A recruiter may like a certain candidate and be interested in more candidates similar to the selected one; the more like this search in Solr can be used to provide a search for candidates similar to a selected candidate.

NRT is important here too, as the site should be able to return a job or a candidate in a search as soon as one is added to the database by either the recruiter or the candidate. The promptness of the site is an important factor in keeping users engaged.

Challenges of large-scale indexing

Let us understand how indexing happens and what can be done to speed it up. We will also look at the challenges faced during the indexing of a large number of documents or bulky documents. An e-commerce site is a perfect example of a site containing a large number of products, while a job site is an example of a search where documents are bulky because of the content in candidate resumes.

During indexing, Solr first analyzes the documents and converts them into tokens that are stored in the RAM buffer. When the RAM buffer is full, data is flushed into a segment on the disk. When the number of segments exceeds the value defined by the MergeFactor setting in the Solr configuration, the segments are merged. Data is also written to disk when a commit is made in Solr. Let us discuss a few points to make Solr indexing fast and to handle a large index containing a huge number of documents.
Using multiple threads for indexing on Solr

We can divide our data into smaller chunks, and each chunk can be indexed in a separate thread. Ideally, the number of threads should be twice the number of processor cores to avoid a lot of context switching. However, we can increase the number of threads beyond that and check for performance improvements.

Using the Java binary format of data for indexing

Instead of using XML files, we can use the Java bin format for indexing. This reduces a lot of the overhead of parsing an XML file and converting it into a usable binary format. The way to use the Java bin format is to write our own program for creating fields, adding fields to documents, and finally adding documents to the index. Here is a sample code:

// Create an instance of the Solr server
String SOLR_URL = "http://localhost:8983/solr";
SolrServer server = new HttpSolrServer(SOLR_URL);

// Create a collection of documents to add to the Solr server
SolrInputDocument doc1 = new SolrInputDocument();
doc1.addField("id", 1);
doc1.addField("desc", "description text for doc 1");

SolrInputDocument doc2 = new SolrInputDocument();
doc2.addField("id", 2);
doc2.addField("desc", "description text for doc 2");

Collection<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
docs.add(doc1);
docs.add(doc2);

// Add the collection of documents to the Solr server and commit.
server.add(docs);
server.commit();

Here is the reference to the API for the HttpSolrServer program: http://lucene.apache.org/solr/4_6_0/solr-solrj/org/apache/solr/client/solrj/impl/HttpSolrServer.html. Add all files from the <solr_directory>/dist folder to the classpath for compiling and running the HttpSolrServer program.

Using the ConcurrentUpdateSolrServer class for indexing

Using the ConcurrentUpdateSolrServer class instead of the HttpSolrServer class can provide performance benefits, as the former uses buffers to store processed documents before sending them to the Solr server. We can also specify the number of background threads to use to empty the buffers. The API docs for ConcurrentUpdateSolrServer are found at the following link: http://lucene.apache.org/solr/4_6_0/solr-solrj/org/apache/solr/client/solrj/impl/ConcurrentUpdateSolrServer.html

The constructor for the ConcurrentUpdateSolrServer class is defined as:

ConcurrentUpdateSolrServer(String solrServerUrl, int queueSize, int threadCount)

Here, queueSize is the buffer size and threadCount is the number of background threads used to flush the buffers to the index on disk. Note that using too many threads can increase the context switching between threads and reduce performance. In order to optimize the number of threads, we should monitor performance (documents indexed per minute) after each increase and ensure that there is no decrease in performance.
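To make the constructor above more concrete, here is a minimal, illustrative sketch (not from the book) of bulk indexing with ConcurrentUpdateSolrServer against a Solr 4.x server; the URL, queue size, thread count, and field names are assumptions for the example and should be adapted to your own schema and hardware:

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class ConcurrentIndexer {
    public static void main(String[] args) throws SolrServerException, IOException {
        // Queue up to 1000 documents in memory and flush them with 4 background threads.
        ConcurrentUpdateSolrServer server =
            new ConcurrentUpdateSolrServer("http://localhost:8983/solr", 1000, 4);

        Collection<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
        for (int i = 0; i < 10000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", i);
            doc.addField("desc", "description text for doc " + i);
            docs.add(doc);
        }

        server.add(docs);   // documents are buffered and sent to Solr in the background
        server.commit();    // commit once at the end rather than after every document
        server.shutdown();  // stop the background threads once indexing is complete
    }
}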
Summary

In this article, we saw in brief the problems faced by e-commerce and job websites during indexing and search. We discussed the challenges faced while indexing a large number of documents, and we saw some tips on improving the speed of indexing.

Managing Files, Folders, and Registry Items Using PowerShell

Packt
23 Apr 2015
27 min read
This article is written by Brenton J.W. Blawat, the author of Mastering Windows PowerShell Scripting. When you are automating tasks on servers and workstations, you will frequently run into situations where you need to manage files, folders, and registry items. PowerShell provides a wide variety of cmdlets that enable you to create, view, modify, and delete items on a system.

In this article, you will learn many techniques for interacting with files, folders, and registry items. These techniques and items include:

- Registry provider
- Creating files, folders, registry keys, and registry named values
- Adding named values to registry keys
- Verifying the existence of files, folders, and registry keys
- Renaming files, folders, registry keys, and named values
- Copying and moving files and folders
- Deleting files, folders, registry keys, and named values

To properly follow the examples in this article, you will need to execute them sequentially. Each example builds on the previous ones, and some of these examples may not function properly if you do not execute the previous steps.

Registry provider

When you're working with the registry, PowerShell interprets the registry in the same way it does files and folders. In fact, the cmdlets that you use for files and folders are the same ones that you would use for registry items. The only difference with the registry is the way in which you call the registry path locations. When you want to reference the registry in PowerShell, you use the [RegistryLocation]:\Path syntax. This is made available through the PowerShell Windows Registry Provider.

While referencing [RegistryLocation]:\Path, PowerShell provides you with the ability to use registry abbreviations for registry path locations. Instead of referencing the full path of HKEY_LOCAL_MACHINE, you can use the abbreviation HKLM. Some other abbreviations include:

- HKLM: Abbreviation for the HKEY_LOCAL_MACHINE hive
- HKCU: Abbreviation for the HKEY_CURRENT_USER hive
- HKU: Abbreviation for the HKEY_USERS hive
- HKCR: Abbreviation for the HKEY_CLASSES_ROOT hive
- HKCC: Abbreviation for the HKEY_CURRENT_CONFIG hive

For example, if you wanted to reference the named values in the Run registry key for programs that start up on boot, the command line syntax would look like this:

HKLM:\Software\Microsoft\Windows\CurrentVersion\Run

While it is recommended that you don't use cmdlet aliases in your scripts, it is recommended, and a common practice, to use registry abbreviations in your code. This not only reduces the amount of effort needed to create the scripts but also makes it easier for others to read the registry locations.

Creating files, folders, and registry items with PowerShell

When you want to create a new file, folder, or registry key, you will need to leverage the new-item cmdlet. The syntax of this command is new-item, calling the -path argument to specify the location, the -name argument to provide a name for the item, and the -ItemType argument to designate whether you want a file or a directory (folder). When you are creating a file, there is an additional argument, -value, which allows you to prepopulate data into the file after creation. When you are creating a new registry key, you can omit the -ItemType argument, as it is not needed for registry key creation; PowerShell assumes that when you are interacting with the registry using new-item, you are creating registry keys.
The new-item command accepts the -force argument for instances where the file, folder, or key is being created in a location that is restricted by User Account Control (UAC). To create a new folder and registry item, run the following:

New-Item -path "c:\Program Files" -name MyCustomSoftware -ItemType Directory
New-Item -path HKCU:\Software\MyCustomSoftware -force

The preceding example shows how you can create folders and registry keys for a custom application. You first create a new folder in c:\Program Files named MyCustomSoftware. You then create a new registry key in HKEY_CURRENT_USER:\Software named MyCustomSoftware.

You start by issuing the new-item cmdlet followed by the -path argument to designate that the new folder should be placed in c:\Program Files. You then call the -name argument to specify the name MyCustomSoftware. Finally, you tell the cmdlet that the -ItemType argument is Directory. After executing this command, you will see a new folder in c:\Program Files named MyCustomSoftware.

You then create the new registry key by calling the new-item cmdlet, issuing the -path argument, specifying the HKCU:\Software\MyCustomSoftware key location, and completing it with the -force argument to force the creation of the key. After executing this command, you will see a new registry key in HKEY_CURRENT_USER:\Software named MyCustomSoftware.

One of the main benefits of PowerShell breaking apart the -path, -name, and -value arguments is that you have the ability to customize each of the values before you use them with the new-item cmdlet. For example, if you want to name a log file with a date stamp, build that value in a string and set the -name value to that string. To create a log file with a date included in the filename, run the following:

$logpath = "c:\Program Files\MyCustomSoftware\Logs\"
New-Item -path $logpath -ItemType Directory | out-null
$itemname = (get-date -format "yyyyMMddmmss") + "MyLogFile.txt"
$itemvalue = "Starting Logging at: " + " " + (get-date)
New-Item -path $logpath -name $itemname -ItemType File -value $itemvalue
$logfile = $logpath + $itemname
$logfile

The preceding example displays how you can properly create a new log file with a date-time stamp included in the log file name. It also shows how to create a new directory for the logs, how to include text inside the log file designating the start of a new log, and how to save the log file name and path in a variable to use later in your scripts.

You first start by declaring the path c:\Program Files\MyCustomSoftware\Logs\ in the $logpath variable. You then use the new-item cmdlet to create a new folder in c:\Program Files\MyCustomSoftware named Logs. By piping the command to out-null, the default output of the directory creation is silenced. You then declare the name that you want the file to have by using the get-date cmdlet, with the -format argument set to yyyyMMddmmss, and appending MyLogFile.txt. This generates a name made up of the four-digit year followed by the two-digit month, day, minutes, and seconds, followed by MyLogFile.txt. You then set the name of the file in the $itemname variable. Finally, you declare the $itemvalue variable, which contains Starting Logging at: and the standard PowerShell date-time information.
After the variables are populated, you issue the new-item command, the -path argument referencing the $logpath variable, the -name argument referencing the $itemname variable, the -ItemType referencing File, and the -value argument referencing the $itemvalue variable. At the end of the script, you will take the $logpath and $itemname variables to create a new variable of $logfile, which contains the location of the log file. As you will see from this example, after you execute the script the log file is populated with the value of Starting Logging at: 03/16/2015 14:38:24.

Adding named values to registry keys

When you are interacting with the registry, you typically view and edit named values or properties that are contained within the keys and sub-keys. PowerShell uses several cmdlets to interact with named values. The first is the get-itemproperty cmdlet, which allows you to retrieve the properties of a named value. The proper syntax for this cmdlet is to specify get-itemproperty, to use the -path argument to specify the location in the registry, and to use the -name argument to specify the named value.

The second cmdlet is new-itemproperty, which allows you to create new named values. The proper syntax for this cmdlet is specifying new-itemproperty, followed by the -path argument and the location where you want to create the new named value. You then specify the -name argument and the name you want to give the named value. Finally, you use the -PropertyType argument, which allows you to specify what kind of registry named value you want to create. The PropertyType argument can be set to Binary, DWord, ExpandString, MultiString, String, and Qword, depending on what your need for the registry value is. Finally, you specify the -value argument, which enables you to place a value into that named value. You may also use the -force argument to force the creation of the key in the instance that the key may be restricted by UAC. To create a named value in the registry, do the following action:

$regpath = "HKCU:\Software\MyCustomSoftware"
$regname = "BuildTime"
$regvalue = "Build Started At: " + " " + (get-date)
New-itemproperty -path $regpath -name $regname -PropertyType String -value $regvalue
$verifyValue = Get-itemproperty -path $regpath -name $regname
Write-Host "The $regName named value is set to: " $verifyValue.$regname

The output of this is shown in the following screenshot:

After executing the script, the registry will look like the following screenshot:

This script displays how you can create a registry named value in a specific location. It also displays how you can retrieve a value and display it in the console. You first start by defining several variables. The first variable, $regpath, defines where you want to create the new named value, which is in the HKCU:\Software\MyCustomSoftware registry key. The second variable, $regname, defines what you want the new named value to be named, which is BuildTime. The third variable defines what you want the value of the named value to be, which is Build Started At: with the current date and time.

The next step in the script is to create the new value. You first call the new-itemproperty cmdlet with the -path argument and specify the $regpath variable. You then use the -name argument and specify $regname. This is followed by specifying the -PropertyType argument and the String property type. Finally, you specify the -value argument and use the $regvalue variable to fill the named value with data.
Proceeding forward in the script, you verify that the named value has proper data by leveraging the get-itemproperty cmdlet. You first define the $verifyvalue variable that captures the data from the cmdlet. You then issue get-itemproperty with the -path argument of $regpath and the -name argument of $regname. You then write to the console that the $regname named value is set to $verifyvalue.$regname. When you are done with script execution, you should have a new registry named value of BuildTime in the HKEY_CURRENT_USER\Software\MyCustomSoftware key with a value similar to Build Started At: 03/16/2015 14:49:22.

Verifying files, folders, and registry items

When you are creating and modifying objects, it's important to check whether a file, folder, or registry item already exists before you create or modify it. The test-path cmdlet allows you to test whether a file, folder, or registry item exists prior to working with it. The proper syntax for this is first calling test-path and then specifying a file, folder, or registry location. The result of the test-path command is True if the object exists or False if the object doesn't exist. To verify whether files, folders, and registry entries exist, do the following action:

$testfolder = test-path "c:\Program Files\MyCustomSoftware\Logs"
#Update The Following Line with the Date/Timestamp of your file
$testfile = test-path "c:\Program Files\MyCustomSoftware\Logs\201503163824MyLogFile.txt"
$testreg = test-path "HKCU:\Software\MyCustomSoftware"
If ($testfolder) { write-host "Folder Found!" }
If ($testfile) { write-host "File Found!" }
If ($testreg) { write-host "Registry Key Found!" }

The output is shown in the following screenshot:

This example displays how to verify whether a file, folder, and registry item exist. You first start by declaring a variable to catch the output from the test-path cmdlet. You then specify test-path, followed by a file, folder, or registry item whose existence you want to verify. In this example, you start by using the test-path cmdlet to verify whether the Logs folder is located in the c:\Program Files\MyCustomSoftware directory. You then store the result in the $testfolder variable. You then use the test-path cmdlet to check whether the file located at c:\Program Files\MyCustomSoftware\Logs\201503163824MyLogFile.txt exists. You then store the result in the $testfile variable. Finally, you use the test-path cmdlet to see whether the registry key of HKCU:\Software\MyCustomSoftware exists. You then store the result in the $testreg variable. To evaluate the variables, you create If statements to check whether the variables are True and write to the console if the items are found. After executing the script, the console will output the messages Folder Found!, File Found!, and Registry Key Found!.

Copying and moving files and folders

When you are working in the operating system, there may be instances where you need to copy or move files and folders around on the operating system. PowerShell provides two cmdlets to copy and move files. The copy-item cmdlet allows you to copy a file or a folder from one location to another. The proper syntax of this cmdlet is calling copy-item, followed by the -path argument for the source you want to copy and the -destination argument for the destination of the file or folder. The copy-item cmdlet also has the -force argument to write over a read-only or hidden file.
There are instances when read-only files cannot be overwritten, such as a lack of user permissions, which will require additional code to change the file attributes before copying over files or folders. The copy-item cmdlet also has a -recurse argument, which allows you to recursively copy the files in a folder and its subdirectories.

A common trick to use with the copy-item cmdlet is to rename during the copy operation. To do this, you change the destination to the desired name you want the file or folder to have. After executing the command, the file or folder will have a new name in its destination. This reduces the number of steps required to copy and rename a file or folder.

The move-item cmdlet allows you to move files from one location to another. The move-item cmdlet has the same syntax as the copy-item cmdlet. The proper syntax of this cmdlet is calling move-item, followed by the -path argument for the source you want to move and the -destination argument for the destination of the file or folder. The move-item cmdlet also has the -force argument to write over a read-only or hidden file. There are also instances when read-only files cannot be overwritten, such as a lack of user permissions, which will require additional code to change the file attributes before moving files or folders. The move-item cmdlet does not, however, have a -recurse argument.

Also, it's important to remember that the move-item cmdlet requires the destination to be created prior to the move. If the destination folder is not available, it will throw an exception. It's recommended to use the test-path cmdlet in conjunction with the move-item cmdlet to verify that the destination exists prior to the move operation; a short sketch of this pattern is shown after the walkthrough below.

PowerShell has the same file and folder limitations as the core operating system it is being run on. This means that file paths that are longer than 256 characters in length will receive an error message during the copy process. For paths that are over 256 characters in length, you need to leverage robocopy.exe or a similar file copy program to copy or move files.

All move-item operations are recursive by default. You do not have to specify the -recurse argument to recursively move files. To copy files recursively, you need to specify the -recurse argument.

To copy and move files and folders, do the following action:

New-item -path "c:\Program Files\MyCustomSoftware\AppTesting" -ItemType Directory | Out-null
New-item -path "c:\Program Files\MyCustomSoftware\AppTesting\Help" -ItemType Directory | Out-null
New-item -path "c:\Program Files\MyCustomSoftware\AppTesting" -name AppTest.txt -ItemType File | out-null
New-item -path "c:\Program Files\MyCustomSoftware\AppTesting\Help" -name HelpInformation.txt -ItemType File | out-null
New-item -path "c:\Program Files\MyCustomSoftware" -name ConfigFile.txt -ItemType File | out-null
move-item -path "c:\Program Files\MyCustomSoftware\AppTesting" -destination "c:\Program Files\MyCustomSoftware\Archive" -force
copy-item -path "c:\Program Files\MyCustomSoftware\ConfigFile.txt" "c:\Program Files\MyCustomSoftware\Archive\Archived_ConfigFile.txt" -force

The output of this is shown in the following screenshot:

This example displays how to properly use the copy-item and move-item cmdlets. You first start by using the new-item cmdlet with the -path argument set to c:\Program Files\MyCustomSoftware\AppTesting and the -ItemType argument set to Directory. You then pipe the command to out-null to suppress the default output.
This creates the AppTesting sub directory in the c:\Program Files\MyCustomSoftware directory. You then create a second folder using the new-item cmdlet with the -path argument set to c:\Program Files\MyCustomSoftware\AppTesting\Help and the -ItemType argument set to Directory. You then pipe the command to out-null. This creates the Help sub directory in the c:\Program Files\MyCustomSoftware\AppTesting directory.

After creating the directories, you create a new file using the new-item cmdlet with the path of c:\Program Files\MyCustomSoftware\AppTesting, the -name argument set to AppTest.txt, and the -ItemType argument set to File; you then pipe it to out-null. You create a second file by using the new-item cmdlet with the path of c:\Program Files\MyCustomSoftware\AppTesting\Help, the -name argument set to HelpInformation.txt, and the -ItemType argument set to File, and then piping it to out-null. Finally, you create a third file using the new-item cmdlet with the path of c:\Program Files\MyCustomSoftware, the -name argument set to ConfigFile.txt, and the -ItemType argument set to File, and then pipe it to out-null.

After creating the files, you are ready to start copying and moving files. You first move the AppTesting directory to the Archive directory by using the move-item cmdlet and then specifying the -path argument with the value of c:\Program Files\MyCustomSoftware\AppTesting as the source, the -destination argument with the value of c:\Program Files\MyCustomSoftware\Archive as the destination, and the -force argument to force the move if the directory is hidden. You then copy a configuration file by using the copy-item cmdlet, using the -path argument with c:\Program Files\MyCustomSoftware\ConfigFile.txt as the source, and then specifying the -destination argument with c:\Program Files\MyCustomSoftware\Archive\Archived_ConfigFile.txt as the new destination with a new filename; you then leverage the -force argument to force the copy if the file is hidden.

The following screenshot displays the file and folder hierarchy after executing this script:

After executing this script, the file and folder hierarchy should be as displayed in the preceding screenshot. This also displays that when you move the AppTesting directory to the Archive folder, it automatically performs the move recursively, keeping the file and folder structure intact.

Renaming files, folders, registry keys, and named values

When you are working with PowerShell scripts, you may have instances where you need to rename files, folders, and registry keys. The rename-item cmdlet can be used to perform renaming operations on a system. The syntax for this cmdlet is rename-item, specifying the -path argument with the path to the original object, and then calling the -newname argument with a full path to what you want the item to be renamed to. The rename-item cmdlet has a -force argument to force the rename in instances where the file or folder is hidden or restricted by UAC or to avoid prompting for the rename action.
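Before moving on to the renaming examples, here is the minimal sketch of the test-path plus move-item pattern recommended in the previous section. It only performs the move when the destination folder is already present; the paths used here are illustrative and not part of the article's script:

# Illustrative paths; adjust them to match your own environment
$source = "c:\Program Files\MyCustomSoftware\ConfigFile.txt"
$destinationFolder = "c:\Program Files\MyCustomSoftware\Archive"

# Only move the file when the destination folder already exists
If (test-path $destinationFolder) {
    move-item -path $source -destination $destinationFolder -force
}
Else {
    write-host "$destinationFolder was not found; the move was skipped."
}

This keeps the script from throwing an exception when the destination has not been created yet.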
To copy and rename files and folders, do the following action:

New-item -path "c:\Program Files\MyCustomSoftware\OldConfigFiles" -ItemType Directory | out-null
Rename-item -path "c:\Program Files\MyCustomSoftware\OldConfigFiles" -newname "c:\Program Files\MyCustomSoftware\ConfigArchive" -force
copy-item -path "c:\Program Files\MyCustomSoftware\ConfigFile.txt" "c:\Program Files\MyCustomSoftware\ConfigArchive\ConfigFile.txt" -force
Rename-item -path "c:\Program Files\MyCustomSoftware\ConfigArchive\ConfigFile.txt" -newname "c:\Program Files\MyCustomSoftware\ConfigArchive\Old_ConfigFile.txt" -force

The output of this is shown in the following screenshot:

In this example, you create a script that creates a new folder and a new file, and then renames the file and the folder. To start, you leverage the new-item cmdlet to create a new folder in c:\Program Files\MyCustomSoftware named OldConfigFiles. You then pipe that command to Out-Null, which silences the standard console output of the folder creation. You proceed to rename the folder c:\Program Files\MyCustomSoftware\OldConfigFiles with the rename-item cmdlet, using the -newname argument to set it to c:\Program Files\MyCustomSoftware\ConfigArchive. You follow the command with the -force argument to force the renaming of the folder.

You leverage the copy-item cmdlet to copy the ConfigFile.txt into the ConfigArchive directory. You first start by specifying the copy-item cmdlet with the -path argument set to c:\Program Files\MyCustomSoftware\ConfigFile.txt and the destination set to c:\Program Files\MyCustomSoftware\ConfigArchive\ConfigFile.txt. You include the -force argument to force the copy. After copying the file, you leverage the rename-item cmdlet with the -path argument to rename c:\Program Files\MyCustomSoftware\ConfigArchive\ConfigFile.txt, using the -newname argument to set it to c:\Program Files\MyCustomSoftware\ConfigArchive\Old_ConfigFile.txt. You follow this command with the -force argument to force the renaming of the file. At the end of this script, you will have successfully renamed a folder, copied a file into that renamed folder, and renamed a file in the newly created folder.

In the instance that you want to rename a registry key, do the following action:

New-item -path "HKCU:\Software\MyCustomSoftware" -name CInfo -force | out-null
Rename-item -path "HKCU:\Software\MyCustomSoftware\CInfo" -newname ConnectionInformation -force

The output of this is shown in the following screenshot:

After renaming the subkey, the registry will look like the following screenshot:

This example displays how to create a new subkey and rename it. You first start by using the new-item cmdlet to create a new sub key with the -path argument of the HKCU:\Software\MyCustomSoftware key and the -name argument set to CInfo. You then pipe that line to out-null in order to suppress the standard output from the script. You proceed to execute the rename-item cmdlet with the -path argument set to HKCU:\Software\MyCustomSoftware\CInfo and the -newname argument set to ConnectionInformation. You then use the -force argument to force the renaming in instances when the subkey is restricted by UAC. After executing this command, you will see that the CInfo subkey located in HKCU:\Software\MyCustomSoftware is now renamed to ConnectionInformation.

When you want to update named values in the registry, you will not be able to use the rename-item cmdlet. This is due to the named values being properties of the keys themselves. Instead, PowerShell provides the rename-itemproperty cmdlet to rename the named values in the key.
The proper syntax for this cmdlet is calling rename-itemproperty by using the -path argument, followed by the path to the key that contains the named value. You then issue the -name argument to specify the named value you want to rename. Finally, you specify the -newname argument and the name you want the named value to be renamed to. To rename the registry named value, do the following action:

$regpath = "HKCU:\Software\MyCustomSoftware\ConnectionInformation"
$regname = "DBServer"
$regvalue = "mySQLserver.mydomain.local"
New-itemproperty -path $regpath -name $regname -PropertyType String -value $regvalue | Out-null
Rename-itemproperty -path $regpath -name DBServer -newname DatabaseServer

The output of this is shown in the following screenshot:

After updating the named value, the registry will reflect this change, and so should look like the following screenshot:

The preceding script displays how to create a new named value and rename it to a different named value. You first start by defining the variables to be used with the new-itemproperty cmdlet. You define the location of the registry subkey in the $regpath variable and set it to HKCU:\Software\MyCustomSoftware\ConnectionInformation. You then specify the named value name of DBServer and store it in the $regname variable. Finally, you define the $regvalue variable and store the value of mySQLserver.mydomain.local.

To create the new named value, you leverage new-itemproperty, specify the -path argument with the $regpath variable, use the -name argument with the $regname variable, and use the -value argument with the $regvalue variable. You then pipe this command to out-null in order to suppress the default output of the command. This command will create the new named value of DBServer with the value of mySQLserver.mydomain.local in the HKCU:\Software\MyCustomSoftware\ConnectionInformation subkey.

The last step in the script is renaming the DBServer named value to DatabaseServer. You first start by calling the rename-itemproperty cmdlet and then using the -path argument and specifying the $regpath variable, which contains the HKCU:\Software\MyCustomSoftware\ConnectionInformation subkey; you then proceed by calling the -name argument and specifying DBServer, and finally calling the -newname argument with the new value of DatabaseServer. After executing this command, you will see that the HKCU:\Software\MyCustomSoftware\ConnectionInformation key has a new named value of DatabaseServer containing the same value of mySQLserver.mydomain.local.

Deleting files, folders, registry keys, and named values

When you are creating scripts, there are instances when you need to delete items from a computer. PowerShell has the remove-item cmdlet that enables the removal of objects from a computer. The syntax of this cmdlet starts by calling the remove-item cmdlet and proceeds with specifying the -path argument with a file, folder, or registry key to delete.

The remove-item cmdlet has several useful arguments that can be leveraged. The -force argument is available to delete files, folders, and registry keys that are read-only, hidden, or restricted by UAC. The -recurse argument is available to enable recursive deletion of files, folders, and registry keys on a system. The -include argument enables you to delete specific files, folders, and registry keys. The -include argument allows you to use the wildcard character of an asterisk (*) to search for specific values in an object name or a specific object type.
The -exclude argument will exclude specific files, folders, and registry keys on a system. It also accepts the wildcard character of an asterisk (*) to search for specific values in an object name or a specific object type.

The named values in the registry are properties of the key that they are contained in. As a result, you cannot use the remove-item cmdlet to remove them. Instead, PowerShell offers the remove-itemproperty cmdlet to enable the removal of the named values. The remove-itemproperty cmdlet has arguments similar to those of the remove-item cmdlet. It is important to note, however, that the -filter, -include, and -exclude arguments will not work with named values in the registry. They only work with item paths such as registry keys.

To set up the system for the deletion example, you need to process the following script:

# Create New Directory
new-item -path "c:\program files\MyCustomSoftware\Graphics" -ItemType Directory | Out-null
# Create Files for This Example
new-item -path "c:\program files\MyCustomSoftware\Graphics" -name FirstGraphic.bmp -ItemType File | Out-Null
new-item -path "c:\program files\MyCustomSoftware\Graphics" -name FirstGraphic.png -ItemType File | Out-Null
new-item -path "c:\program files\MyCustomSoftware\Graphics" -name SecondGraphic.bmp -ItemType File | Out-Null
new-item -path "c:\program files\MyCustomSoftware\Graphics" -name SecondGraphic.png -ItemType File | Out-Null
new-item -path "c:\program files\MyCustomSoftware\Logs" -name 201301010101LogFile.txt -ItemType File | Out-Null
new-item -path "c:\program files\MyCustomSoftware\Logs" -name 201302010101LogFile.txt -ItemType File | Out-Null
new-item -path "c:\program files\MyCustomSoftware\Logs" -name 201303010101LogFile.txt -ItemType File | Out-Null
# Create New Registry Keys and Named Values
New-item -path "HKCU:\Software\MyCustomSoftware\AppSettings" | Out-null
New-item -path "HKCU:\Software\MyCustomSoftware\ApplicationSettings" | Out-null
New-itemproperty -path "HKCU:\Software\MyCustomSoftware\ApplicationSettings" -name AlwaysOn -PropertyType String -value True | Out-null
New-itemproperty -path "HKCU:\Software\MyCustomSoftware\ApplicationSettings" -name AutoDeleteLogs -PropertyType String -value True | Out-null

The output of this is shown in the following screenshot:

The preceding example is designed to set up the file structure for the following example. You first use the new-item cmdlet to create a new directory called Graphics in c:\program files\MyCustomSoftware. You then use the new-item cmdlet to create new files named FirstGraphic.bmp, FirstGraphic.png, SecondGraphic.bmp, and SecondGraphic.png in the c:\Program Files\MyCustomSoftware\Graphics directory. You then use the new-item cmdlet to create new log files in c:\Program Files\MyCustomSoftware\Logs named 201301010101LogFile.txt, 201302010101LogFile.txt, and 201303010101LogFile.txt. After creating the files, you create two new registry keys located at HKCU:\Software\MyCustomSoftware\AppSettings and HKCU:\Software\MyCustomSoftware\ApplicationSettings. You then populate the HKCU:\Software\MyCustomSoftware\ApplicationSettings key with a named value of AlwaysOn set to True and a named value of AutoDeleteLogs set to True.
To remove files, folders, and registry items from a system, do the following action:

# Get Current Year
$currentyear = get-date -f yyyy
# Build the Exclude String
$exclude = "*" + $currentyear + "*"
# Remove Items from System
Remove-item -path "c:\Program Files\MyCustomSoftware\Graphics" -include *.bmp -force -recurse
Remove-item -path "c:\Program Files\MyCustomSoftware\Logs" -exclude $exclude -force -recurse
Remove-itemproperty -path "HKCU:\Software\MyCustomSoftware\ApplicationSettings" -Name AutoDeleteLogs
Remove-item -path "HKCU:\Software\MyCustomSoftware\ApplicationSettings"

The output of this is shown in the following screenshot:

This script displays how you can leverage PowerShell to clean up files and folders with the remove-item cmdlet and the -exclude and -include arguments. You first start by building the exclusion string for the remove-item cmdlet. You retrieve the current year by using the get-date cmdlet with the -f parameter set to yyyy. You save the output into the $currentyear variable. You then create a $exclude variable that appends asterisks on each end of the $currentyear variable, which contains the current date. This will allow the exclusion filter to find the year anywhere in the file or folder names.

In the first command, you use the remove-item cmdlet and call the -path argument with the path of c:\Program Files\MyCustomSoftware\Graphics. You then specify the -include argument with the value of *.bmp. This tells the remove-item cmdlet to delete all files that end in .bmp. You then specify -force to force the deletion of the files and -recurse to search the entire Graphics directory to delete the files that meet the *.bmp inclusion criteria but leave the other files you created with the *.png extension.

The second command leverages the remove-item cmdlet with the -path argument set to c:\Program Files\MyCustomSoftware\Logs. You use the -exclude argument with the value of $exclude to exclude files that contain the current year. You then specify -force to force the deletion of the files and -recurse to search the entire Logs directory to delete the files and folders that do not meet the exclusion criteria.

The third command leverages the remove-itemproperty cmdlet with the -path argument set to HKCU:\Software\MyCustomSoftware\ApplicationSettings and the -name argument set to AutoDeleteLogs. After execution, the AutoDeleteLogs named value is deleted from the registry.

The last command leverages the remove-item cmdlet with the -path argument set to HKCU:\Software\MyCustomSoftware\ApplicationSettings. After running this last command, the entire subkey of ApplicationSettings is removed from HKCU:\Software\MyCustomSoftware.

After executing this script, you will see that the script deletes the .BMP files in the c:\Program Files\MyCustomSoftware\Graphics directory, but it leaves the .PNG files. You will also see that the script deletes all of the log files except the ones that had the current year contained in them. Last, you will see that the ApplicationSettings subkey that was created in the previous step is successfully deleted from HKCU:\Software\MyCustomSoftware.

When you use the remove-item and -recurse parameters together, it is important to note that if the remove-item cmdlet deletes all the files and folders in a directory, the -recurse parameter will also delete the empty folder and subfolders that contained those files. This is only true when there are no remaining files in the folders in a particular directory.
This may create undesirable results on your system, and so you should use caution while performing this combination.

Summary

This article thoroughly explained the interaction of PowerShell with files, folders, and registry objects. It began by displaying how to create a folder and a registry key by leveraging the new-item cmdlet. It also displayed the additional arguments that can be used with the new-item cmdlet to create a log file with the date and time integrated in the filename. The article proceeded to display how to create and view a registry key property using the get-itemproperty and new-itemproperty cmdlets.

This article then moved to verification of files, folders, and registry items through the test-path cmdlet. By using this cmdlet, you can test whether an object exists prior to interacting with it. You also learned how to interact with copying and moving files and folders by leveraging the copy-item and move-item cmdlets. You also learned how to rename files, folders, registry keys, and registry properties with the use of the rename-item and rename-itemproperty cmdlets. This article ends with learning how to delete files, folders, and registry items by leveraging the remove-item and remove-itemproperty cmdlets.

Resources for Article:

Further resources on this subject:
Azure Storage [article]
Hyper-V Basics [article]
Unleashing Your Development Skills with PowerShell [article]
Creating Cool Content

Packt
23 Apr 2015
26 min read
In this article by Alex Ogorek, author of the book Mastering Cocos2d Game Development, you'll be learning how to implement the really complex, subtle game mechanics that not many developers do. This is what separates the good games from the great games. There will be many examples, tutorials, and code snippets in this article intended for adaptation in your own projects, so feel free to come back at any time to look at something you may have either missed the first time, or are just curious to know about in general. In this article, we will cover the following topics:

Adding a table for scores
Adding subtle sliding to the units
Creating movements on a Bézier curve instead of straight paths

(For more resources related to this topic, see here.)

Adding a table for scores

Because we want a way to show the user their past high scores, in the GameOver scene, we're going to add a table that displays the most recent high scores that are saved. For this, we're going to use CCTableView. It's still relatively new, but it works for what we're going to use it for.

CCTableView versus UITableView

Although UITableView might be known to some of you who've made non-Cocos2d apps before, you should be aware of its downfalls when it comes to using it within Cocos2d. For example, if you want a BMFont in your table, you can't add LabelBMFont (you could try to convert the BMFont into a TTF font and use that within the table, but that's outside the scope of this book). If you still wish to use a UITableView object (or any UIKit element for that matter), you can create the object like normal, and add it to the scene, like this (tblScores is the name of the UITableView object):

[[[CCDirector sharedDirector] view] addSubview:tblScores];

Saving high scores (NSUserDefaults)

Before we display any high scores, we have to make sure we save them. The easiest way to do this is by making use of Apple's built-in data preservation tool, NSUserDefaults. If you've never used it before, let me tell you that it's basically a dictionary with "save" mechanics that stores the values on the device so that the next time the user loads the device, the values are available for the app. Also, because there are three different values we're tracking for each gameplay, let's only say a given game is better than another game when the total score is greater.

Therefore, let's create a saveHighScore method that will go through all the total scores in our saved list and see whether the current total score is greater than any of the saved scores. If so, it will insert itself and bump the rest down. In MainScene.m, add the following method:

-(NSInteger)saveHighScore {
 //save top 20 scores
 //an array of Dictionaries...
 //keys in each dictionary:
 // [DictTotalScore]
 // [DictTurnsSurvived]
 // [DictUnitsKilled]

 //read the array of high scores saved on the user's device
 NSMutableArray *arrScores = [[[NSUserDefaults standardUserDefaults] arrayForKey:DataHighScores] mutableCopy];
 //sentinel value of -1 (in other words, if a high score was not found on this play through)
 NSInteger index = -1;
 //loop through the scores in the array
 for (NSDictionary *dictHighScore in arrScores) {
  //if the current game's total score is greater than the score stored in the current index of the array...
  if (numTotalScore > [dictHighScore[DictTotalScore] integerValue])
  {
   //then store that index and break out of the loop
   index = [arrScores indexOfObject:dictHighScore];
   break;
  }
 }
 //if a new high score was found
 if (index > -1)
 {
  //create a dictionary to store the score, turns survived, and units killed
  NSDictionary *newHighScore = @{ DictTotalScore : @(numTotalScore),
   DictTurnsSurvived : @(numTurnSurvived),
   DictUnitsKilled : @(numUnitsKilled) };
  //then insert that dictionary into the array of high scores
  [arrScores insertObject:newHighScore atIndex:index];
  //remove the very last object in the high score list (in other words, limit the number of scores)
  [arrScores removeLastObject];
  //then save the array
  [[NSUserDefaults standardUserDefaults] setObject:arrScores forKey:DataHighScores];
  [[NSUserDefaults standardUserDefaults] synchronize];
 }

 //finally return the index of the high score (whether it's -1 or an actual value within the array)
 return index;
}

Finally, call this method in the endGame method right before you transition to the next scene:

-(void)endGame {
 //call the method here to save the high score, then grab the index of the high score within the array
 NSInteger hsIndex = [self saveHighScore];
 NSDictionary *scoreData = @{ DictTotalScore : @(numTotalScore),
  DictTurnsSurvived : @(numTurnSurvived),
  DictUnitsKilled : @(numUnitsKilled),
  DictHighScoreIndex : @(hsIndex)};
 [[CCDirector sharedDirector] replaceScene:[GameOverScene sceneWithScoreData:scoreData]];
}

Now that we have our high scores being saved, let's create the table to display them.

Creating the table

It's really simple to set up a CCTableView object. All we need to do is modify the contentSize object, and then put in a few methods that handle the size and content of each cell. So first, open the GameOverScene.h file and set the scene as a data source for the CCTableView:

@interface GameOverScene : CCScene <CCTableViewDataSource>

Then, in the initWithScoreData method, create the header labels as well as initialize the CCTableView:

//get the high score array from the user's device
arrScores = [[NSUserDefaults standardUserDefaults] arrayForKey:DataHighScores];

//create labels
CCLabelBMFont *lblTableTotalScore = [CCLabelBMFont labelWithString:@"Total Score:" fntFile:@"bmFont.fnt"];
CCLabelBMFont *lblTableUnitsKilled = [CCLabelBMFont labelWithString:@"Units Killed:" fntFile:@"bmFont.fnt"];
CCLabelBMFont *lblTableTurnsSurvived = [CCLabelBMFont labelWithString:@"Turns Survived:" fntFile:@"bmFont.fnt"];

//position the labels
lblTableTotalScore.position = ccp(winSize.width * 0.5, winSize.height * 0.85);
lblTableUnitsKilled.position = ccp(winSize.width * 0.675, winSize.height * 0.85);
lblTableTurnsSurvived.position = ccp(winSize.width * 0.875, winSize.height * 0.85);

//add the labels to the scene
[self addChild:lblTableTurnsSurvived];
[self addChild:lblTableTotalScore];
[self addChild:lblTableUnitsKilled];

//create the tableview and add it to the scene
CCTableView * tblScores = [CCTableView node];
tblScores.contentSize = CGSizeMake(0.6, 0.4);
CGFloat ratioX = (1.0 - tblScores.contentSize.width) * 0.75;
CGFloat ratioY = (1.0 - tblScores.contentSize.height) / 2;
tblScores.position = ccp(winSize.width * ratioX, winSize.height * ratioY);
tblScores.dataSource = self;
tblScores.block = ^(CCTableView *table){
    //if they press a cell, do something here.
//NSLog(@"Cell %ld", (long)table.selectedRow); }; [self addChild: tblScores]; With the CCTableView object's data source being set to self we can add the three methods that will determine exactly how our table looks and what data goes in each cell (that is, row). Note that if we don't set the data source, the table view's method will not be called; and if we set it to anything other than self, the methods will be called on that object/class instead. That being" said, add these three methods: -(CCTableViewCell*)tableView:(CCTableView *)tableView nodeForRowAtIndex:(NSUInteger)index { CCTableViewCell* cell = [CCTableViewCell node]; cell.contentSizeType = CCSizeTypeMake(CCSizeUnitNormalized, CCSizeUnitPoints); cell.contentSize = CGSizeMake(1, 40); // Color every other row differently CCNodeColor* bg; if (index % 2 != 0) bg = [CCNodeColor nodeWithColor:[CCColor colorWithRed:0 green:0 blue:0 alpha:0.3]]; else bg = [CCNodeColor nodeWithColor: [CCColor colorWithRed:0 green:0 blue:0 alpha:0.2]]; bg.userInteractionEnabled = NO; bg.contentSizeType = CCSizeTypeNormalized; bg.contentSize = CGSizeMake(1, 1); [cell addChild:bg]; return cell; }   -(NSUInteger)tableViewNumberOfRows:(CCTableView *)tableView { return [arrScores count]; }   -(float)tableView:(CCTableView *)tableView heightForRowAtIndex:(NSUInteger)index { return 40.f; } The first method, tableView:nodeForRowAtIndex:, will format each cell "based on which index it is. For now, we're going to color each cell in one of two different colors. The second method, tableViewNumberOfRows:, returns the number of rows, "or cells, that will be in the table view. Since we know there are going to be 20, "we can technically type 20, but what if we decide to change that number later? "So, let's stick with using the count of the array. The third "method, tableView:heightForRowAtIndex:, is meant to return the height of the row, or cell, at the given index. Since we aren't doing anything different with any cell in particular, we can hardcode this value to a fairly reasonable height of 40. At this point, you should be able to run the game, and when you lose, you'll be taken to the game over screen with the labels across the top as well as a table that scrolls on the right side of the screen. It's good practice when learning Cocos2d to just mess around with stuff to see what sort of effects you can make. For example, you could try using some ScaleTo actions to scale the text up from 0, or use a MoveTo action to slide it from the bottom or the side. Feel free to see whether you can create a cool way to display the text right now. Now that we have the table in place, let's get the data displayed, shall we? Showing the scores Now that "we have our table created, it's a simple addition to our code to get the proper numbers to display correctly. In the nodeForRowAtIndex method, add the following block of code right after adding the background color to the cell: //Create the 4 labels that will be used within the cell (row). 
CCLabelBMFont *lblScoreNumber = [CCLabelBMFont labelWithString:[NSString stringWithFormat:@"%d)", index+1] fntFile:@"bmFont.fnt"];
//Set the anchor point to the middle-right (default middle-middle)
lblScoreNumber.anchorPoint = ccp(1,0.5);

CCLabelBMFont *lblTotalScore = [CCLabelBMFont labelWithString:[NSString stringWithFormat:@"%d", [arrScores[index][DictTotalScore] integerValue]] fntFile:@"bmFont.fnt"];
CCLabelBMFont *lblUnitsKilled = [CCLabelBMFont labelWithString:[NSString stringWithFormat:@"%d", [arrScores[index][DictUnitsKilled] integerValue]] fntFile:@"bmFont.fnt"];
CCLabelBMFont *lblTurnsSurvived = [CCLabelBMFont labelWithString:[NSString stringWithFormat:@"%d", [arrScores[index][DictTurnsSurvived] integerValue]] fntFile:@"bmFont.fnt"];

//set the position type of each label to normalized (where (0,0) is the bottom left of its parent and (1,1) is the top right of its parent)
lblScoreNumber.positionType =
lblTotalScore.positionType =
lblUnitsKilled.positionType =
lblTurnsSurvived.positionType = CCPositionTypeNormalized;

//position all of the labels within the cell
lblScoreNumber.position = ccp(0.15,0.5);
lblTotalScore.position = ccp(0.35,0.5);
lblUnitsKilled.position = ccp(0.6,0.5);
lblTurnsSurvived.position = ccp(0.9,0.5);

//if the index we're iterating through is the same index as our High Score index...
if (index == highScoreIndex)
{
 //then set the color of all the labels to a golden color
 lblScoreNumber.color =
 lblTotalScore.color =
 lblUnitsKilled.color =
 lblTurnsSurvived.color = [CCColor colorWithRed:1 green:183/255.f blue:0];
}

//add all of the labels to the individual cell
[cell addChild:lblScoreNumber];
[cell addChild:lblTurnsSurvived];
[cell addChild:lblTotalScore];
[cell addChild:lblUnitsKilled];

And that's it! When you play the game and end up at the game over screen, you'll see the high scores being displayed (even the scores from earlier attempts, because they were saved, remember?). Notice the high score that is yellow. It's an indication that the score you got in the game you just played is on the scoreboard, and shows you where it is.

Although the CCTableView might feel a bit weird with things disappearing and reappearing as you scroll, let's get some Threes!-like sliding into our game.

If you're considering adding a CCTableView to your own project, the key takeaway here is to make sure you modify the contentSize and position properly. By default, the contentSize is a normalized CGSize, so from 0 to 1, and the anchor point is (0,0). Plus, make sure you perform these two steps:
Set the data source of the table view
Add the three table view methods
With all that in mind, it should be relatively easy to implement a CCTableView.

Adding subtle sliding to the units

If you've ever played Threes! (or if you haven't, check out the trailer at http://asherv.com/threes/, and maybe even download the game on your phone), you would be aware of the sliding feature when a user begins to make their move but hasn't yet completed the move. At the speed of the dragging finger, the units slide in the direction they're going to move, showing the user where each unit will go and how each unit will combine with another. This is useful as it not only adds that extra layer of "cool factor" but also provides a preview of the future for the user if they want to revert their decision ahead of time and make a different, more calculated move.
Here's a side note: if you want your game to go really viral, you have to make the user believe it was their fault that they lost, and not your "stupid game mechanics" (as some players might say). Think Angry Birds, Smash Hit, Crossy Road, Threes!, Tiny Wings… the list goes on and on with more games that became popular, and all had one underlying theme: when the user loses, it was entirely in their control to win or lose, and they made the wrong move. This unseen mechanic pushes players to play again with a better strategy in mind. And this is exactly why we want our users to see their move before it gets made. It's a win-win situation for both the developers and the players.

Sliding one unit

If we can get one unit to slide, we can surely get the rest of the units to slide by simply looping through them, modularizing the code, or some other form of generalization. That being said, we need to set up the Unit class so that it can detect how far the finger has dragged. Thus, we can determine how far to move the unit. So, open Unit.h and add the following variable. It will track the distance from the previous touch position:

@property (nonatomic, assign) CGPoint previousTouchPos;

Then, in the touchMoved method of Unit.m, add the following assignment to previousTouchPos. It sets the previous touch position to the touch-down position, but only after the distance is greater than 20 units:

if (!self.isBeingDragged && ccpDistance(touchPos, self.touchDownPos) > 20) {
 self.isBeingDragged = YES;
 //add it here:
 self.previousTouchPos = self.touchDownPos;

Once that's in place, we can begin calculating the distance while the finger is being dragged. To do that, we'll do a simple check. Add the following block of code at the end of touchMoved, after the end of the initial if block:

//only if the unit is currently being dragged
if (self.isBeingDragged) {
    CGFloat dist = 0;
    //if the direction the unit is being dragged is either UP or DOWN
    if (self.dragDirection == DirUp || self.dragDirection == DirDown)
      //then subtract the previously-recorded Y-value from the current touch position's Y-value to determine the distance to move
      dist = touchPos.y - self.previousTouchPos.y;
    //else if the direction the unit is being dragged is either LEFT or RIGHT
    else if (self.dragDirection == DirLeft || self.dragDirection == DirRight)
      //then subtract the previously-recorded X-value from the current touch position's X-value to determine the distance to move
      dist = touchPos.x - self.previousTouchPos.x;

    //then assign the touch position for the next iteration of touchMoved to work properly
    self.previousTouchPos = touchPos;
}

The assignment of previousTouchPos at the end will ensure that while the unit is being dragged, we continue to update the touch position so that we can determine the distance. Plus, the distance is calculated in only the direction in which the unit is being dragged (up and down are denoted by Y, and left and right are denoted by X).

Now that we have the distance between finger drags being calculated, let's push this into a function that will move our unit based on which direction it's being dragged in. So, right after you've calculated dist in the previous code block, call the following method to move our unit based on the amount dragged:

dist /= 2; //optional
[self slideUnitWithDistance:dist withDragDirection:self.dragDirection];

Dividing the distance by 2 is optional.
You may think the squares are too small, and want the user to be able to see their square. So note that dividing by 2, or a larger number, will mean that for every 1 point the finger moves, the unit will move by 1/2 (or less) points.

With that method call being ready, we need to implement it, so add the following method body for now. Since this method is rather complicated, it's going to be added in parts:

-(void)slideUnitWithDistance:(CGFloat)dist withDragDirection:(enum UnitDirection)dir {
}

The first thing we need to do is set up a variable to calculate the new x and y positions of the unit. We'll call these newX and newY, and set them to the unit's current position:

CGFloat newX = self.position.x, newY = self.position.y;

Next, we want to grab the position that the unit starts at, that is, the position the unit would be at if it was positioned at its current grid coordinate. To do that, we're going to call the getPositionForGridCoordinate method from MainScene (since that's where the positions are being calculated anyway, we might as well use that function):

CGPoint originalPos = [MainScene getPositionForGridCoord:self.gridPos];

Next, we're going to move the newX or newY based on the direction in which the unit is being dragged. For now, let's just add the up direction:

if (self.dragDirection == DirUp) {
    newY += dist;
    if (newY > originalPos.y + self.gridWidth)
      newY = originalPos.y + self.gridWidth;
    else if (newY < originalPos.y)
      newY = originalPos.y;
}

In this if block, we're first going to add the distance to the newY variable (because we're going up, we're adding to Y instead of X). Then, we want to make sure the position is at most 1 square up. We're going to use the gridWidth (which is essentially the width of the square, assigned in the initCommon method). Also, we need to make sure that if they're bringing the square back to its original position, it doesn't go into the square beneath it. So let's add the rest of the directions as else if statements:

else if (self.dragDirection == DirDown) {
    newY += dist;
    if (newY < originalPos.y - self.gridWidth)
      newY = originalPos.y - self.gridWidth;
    else if (newY > originalPos.y)
      newY = originalPos.y;
}
else if (self.dragDirection == DirLeft) {
    newX += dist;
    if (newX < originalPos.x - self.gridWidth)
      newX = originalPos.x - self.gridWidth;
    else if (newX > originalPos.x)
      newX = originalPos.x;
}
else if (self.dragDirection == DirRight) {
    newX += dist;
    if (newX > originalPos.x + self.gridWidth)
      newX = originalPos.x + self.gridWidth;
    else if (newX < originalPos.x)
      newX = originalPos.x;
}

Finally, we will set the position of the unit based on the newly calculated x and y positions:

self.position = ccp(newX, newY);

Running the game at this point should cause the unit you drag to slide along with your finger. Nice, huh? Since we have a function that moves one unit, we can very easily alter it so that every unit can be moved like this. But first, there's something you've probably noticed a while ago (or maybe just recently), and that's the unit movement being canceled only when you bring your finger back to the original touch down position. Because we're dragging the unit itself, we can "cancel" the move by dragging the unit back to where it started. However, the finger might be in a completely different position, so we need to modify how the cancelling gets determined.
To do that, in your touchEnded method of Unit.m, locate this if statement:

if (ccpDistance(touchPos, self.touchDownPos) > self.boundingBox.size.width/2)

Change it to the following, which will determine the unit's distance, and not the finger's distance:

CGPoint oldSelfPos = [MainScene getPositionForGridCoord:self.gridPos];
CGFloat dist = ccpDistance(oldSelfPos, self.position);
if (dist > self.gridWidth/2)

Yes, this means you no longer need the touchPos variable in touchEnded if you're getting that warning and wish to get rid of it. But that's it for sliding 1 unit. Now we're ready to slide all the units, so let's do it!

Sliding all units

Now that we have the dragging unit being slid, let's continue and make all the units slide (even the enemy units so that we can better predict our troops' movement). First, we need a way to move all the units on the screen. However, since the Unit class only contains information about the individual unit (which is a good thing), we need to call a method in MainScene, since that's where the arrays of units are. Moreover, we cannot simply call [MainScene method], since the arrays are instance variables, and instance variables must be accessed through an instance of the object itself.

That being said, because we know that our unit will be added to the scene as a child, we can use Cocos2d to our advantage, and call an instance method on the MainScene class via the parent parameter. So, in touchMoved of Unit.m, make the following change:

[(MainScene*)self.parent slideAllUnitsWithDistance:dist withDragDirection:self.dragDirection];
//[self slideUnitWithDistance:dist withDragDirection:self.dragDirection];

Basically, we've commented out (or deleted) the old method call here, and instead called it on our parent object (which we cast as a MainScene so that we know which functions it has). But we don't have that method created yet, so in MainScene.h, add the following method declaration:

-(void)slideAllUnitsWithDistance:(CGFloat)dist withDragDirection:(enum UnitDirection)dir;

Just in case you haven't noticed, the enum UnitDirection is declared in Unit.h, which is why MainScene.h imports Unit.h, so that we can make use of that enum in this class, and the function to be more specific. Then in MainScene.m, we're going to loop through both the friendly and enemy arrays, and call the slideUnitWithDistance function on each individual unit:

-(void)slideAllUnitsWithDistance:(CGFloat)dist withDragDirection:(enum UnitDirection)dir {
 for (Unit *u in arrFriendlies)
    [u slideUnitWithDistance:dist withDragDirection:dir];
 for (Unit *u in arrEnemies)
    [u slideUnitWithDistance:dist withDragDirection:dir];
}

However, that still isn't functional, as we haven't declared that function in the header file for the Unit class. So go ahead and do that now. Declare the function header in Unit.h:

-(void)slideUnitWithDistance:(CGFloat)dist withDragDirection:(enum UnitDirection)dir;

We're almost done. We initially set up our slideUnitWithDistance method with a drag direction in mind. However, only the unit that's currently being dragged will have a drag direction. Every other unit will need to use the direction it's currently facing (that is, the direction in which it's already going). To do that, we just need to modify how the slideUnitWithDistance method does its checking to determine which direction to modify the distance by. But first, we need to handle the negatives. What does that mean?
Well, if you're dragging a unit to the left and a unit being moved is supposed to be moving to the left, it will work properly, as x - 10 (for example) will still be less than the grid's width. However, if you're dragging left and a unit being moved is supposed to be moving right, it won't be moving at all: it tries to add a negative value (x - 10), but because it needs to be moving to the right, it'll hit the left bound right away (less than the original position), and stay still. The following diagram should help explain what is meant by "handling negatives."

As you can see, in the top section, when the non-dragged unit is supposed to be going left by 10 (in other words, negative 10 in the x direction), it works. But when the non-dragged unit is going the opposite sign (in other words, positive 10 in the x direction), it doesn't.

To handle this, we set up a pretty complicated if statement. It checks when the drag direction and the unit's own direction are opposite (positive versus negative), and multiplies the distance by -1 (flips it). Add this to the top of the slideUnitWithDistance method, right after you grab the newX and the original position:

-(void)slideUnitWithDistance:(CGFloat)dist withDragDirection:(enum UnitDirection)dir {
 CGFloat newX = self.position.x, newY = self.position.y;
 CGPoint originalPos = [MainScene getPositionForGridCoord:self.gridPos];
 if (!self.isBeingDragged &&
   (((self.direction == DirUp || self.direction == DirRight) && (dir == DirDown || dir == DirLeft)) ||
   ((self.direction == DirDown || self.direction == DirLeft) && (dir == DirUp || dir == DirRight)))) {
    dist *= -1;
 }
}

The logic of this if statement works as follows: suppose the unit is not being dragged, and either the unit's direction is positive while the drag direction is negative, or the unit's direction is negative while the drag direction is positive; then multiply the distance by -1.

Finally, as mentioned earlier, we just need to handle the non-dragged units. So, in every if statement, add an "or" portion that will check for the same direction, but only if the unit is not currently being dragged. In other words, in the slideUnitWithDistance method, modify your if statements to look like this:

if (self.dragDirection == DirUp || (!self.isBeingDragged && self.direction == DirUp)) {}
else if (self.dragDirection == DirDown || (!self.isBeingDragged && self.direction == DirDown)) {}
else if (self.dragDirection == DirLeft || (!self.isBeingDragged && self.direction == DirLeft)) {}
else if (self.dragDirection == DirRight || (!self.isBeingDragged && self.direction == DirRight)) {}

Finally, we can run the game. Bam! All the units go gliding across the screen with our drag. Isn't it lovely? Now the player can better choose their move. That's it for the sliding portion. The key to unit sliding is to loop through the arrays to ensure that all the units get moved by an equal amount, hence passing the distance to the move function.

Creating movements on a Bézier curve

If you don't know what a Bézier curve is, it's basically a line that goes from point A to point B over a curve. Instead of being a straight line with two points, it uses a second set of points called control points that bend the line in a smooth way. When you want to apply movement with animations in Cocos2d, it's very tempting to queue up a bunch of MoveTo actions in a sequence. However, it's going to look a lot nicer (in both the game and the code) if you use a smoother Bézier curve animation.
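For reference, the kind of curve used here is the standard cubic Bézier. Given a start point P0, control points P1 and P2, and an end point P3, the point on the curve at time t (where t runs from 0 to 1) is:

B(t) = (1 - t)^3 P0 + 3(1 - t)^2 t P1 + 3(1 - t) t^2 P2 + t^3 P3

This is why each Bézier configuration in the sample project below takes exactly two control points and one end position; the start point comes from wherever the node is when the action begins.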
Here's a good example of what a Bézier curve looks like:

As you can see, the red line goes from point P0 to P3. However, the line is influenced in the direction of the control points, P1 and P2.

Examples of using a Bézier curve

Let's list a few examples where it would be a good choice to use a Bézier curve instead of just the regular MoveTo or MoveBy actions:
A character that will perform a jumping animation, for example, in Super Mario Bros
A boomerang as a weapon that the player throws
Launching a missile or rocket and giving it a parabolic curve
A tutorial hand that indicates a curved path the user must make with their finger
A skateboarder on a half-pipe ramp (if not done with Chipmunk)
There are obviously a lot of other examples that could use a Bézier curve for their movement. But let's actually code one, shall we?

Sample project – Bézier map route

First, to make things go a lot faster (as this isn't going to be part of the book's project), simply download the project from the code repository or the website. If you open the project and run it on your device or a simulator, you will notice a blue screen and a square in the bottom-left corner. If you tap anywhere on the screen, you'll see the blue square make an M shape ending in the bottom-right corner. If you hold your finger, it will repeat. Tap again and the animation will reset. Imagine the path this square takes is over a map, and indicates what route a player will travel with their character. This is a very choppy, very sharp path. Generally, paths are curved, so let's make one that is!

Here is a screenshot that shows a very straight path of the blue square:

The following screenshot shows the Bézier path of the yellow square:

Curved M-shape

Open MainScene.h and add another CCNodeColor variable, named unitBezier:

CCNodeColor *unitBezier;

Then open MainScene.m and add the following code to the init method so that your yellow block shows up on the screen:

unitBezier = [[CCNodeColor alloc] initWithColor:[CCColor colorWithRed:1 green:1 blue:0] width:50 height:50];
[self addChild:unitBezier];
CCNodeColor *shadow2 = [[CCNodeColor alloc] initWithColor:[CCColor blackColor] width:50 height:50];
shadow2.anchorPoint = ccp(0.5,0.5);
shadow2.position = ccp(26,24);
shadow2.opacity = 0.5;
[unitBezier addChild:shadow2 z:-1];

Then, in the sendFirstUnit method, add the lines of code that will reset the yellow block's position as well as queue up the method to move the yellow block:

-(void)sendFirstUnit {
 unitRegular.position = ccp(0,0);
 //Add these 2 lines
 unitBezier.position = ccp(0,0);
 [self scheduleOnce:@selector(sendSecondUnit) delay:2];
 CCActionMoveTo *move1 = [CCActionMoveTo actionWithDuration:0.5 position:ccp(winSize.width/4, winSize.height * 0.75)];
 CCActionMoveTo *move2 = [CCActionMoveTo actionWithDuration:0.5 position:ccp(winSize.width/2, winSize.height/4)];
 CCActionMoveTo *move3 = [CCActionMoveTo actionWithDuration:0.5 position:ccp(winSize.width*3/4, winSize.height * 0.75)];
 CCActionMoveTo *move4 = [CCActionMoveTo actionWithDuration:0.5 position:ccp(winSize.width - 50, 0)];
 [unitRegular runAction:[CCActionSequence actions:move1, move2, move3, move4, nil]];
}

After this, you'll need to actually create the sendSecondUnit method, like this:

-(void)sendSecondUnit {
 ccBezierConfig bezConfig1;
 bezConfig1.controlPoint_1 = ccp(0, winSize.height);
 bezConfig1.controlPoint_2 = ccp(winSize.width*3/8, winSize.height);
 bezConfig1.endPosition = ccp(winSize.width*3/8, winSize.height/2);
 CCActionBezierTo *bez1 = [CCActionBezierTo
"actionWithDuration:1.0 bezier:bezConfig1]; ccBezierConfig bezConfig2; bezConfig2.controlPoint_1 = ccp(winSize.width*3/8, 0); bezConfig2.controlPoint_2 = ccp(winSize.width*5/8, 0); bezConfig2.endPosition = ccp(winSize.width*5/8, winSize.height/2); CCActionBezierBy *bez2 = [CCActionBezierTo "actionWithDuration:1.0 bezier:bezConfig2]; ccBezierConfig bezConfig3; bezConfig3.controlPoint_1 = ccp(winSize.width*5/8, "winSize.height); bezConfig3.controlPoint_2 = ccp(winSize.width, winSize.height); bezConfig3.endPosition = ccp(winSize.width - 50, 0); CCActionBezierTo *bez3 = [CCActionBezierTo "actionWithDuration:1.0 bezier:bezConfig3]; [unitBezier runAction:[CCActionSequence actions:bez1, bez2, bez3, nil]]; } The preceding method creates three Bézier configurations and attaches them to a MoveTo command that takes a Bézier configuration. The reason for this is that each Bézier configuration can take only two control points. As you can see in this marked-up screenshot, where each white and red square represents a control point, you can make only a U-shaped parabola with a single Bézier configuration. Thus, to make three U-shapes, you need three Bézier configurations. Finally, make sure that in the touchBegan method, you make the unitBezier stop all its actions (that is, stop on reset): [unitBezier stopAllActions]; And that's it! When you run the project and tap on the screen (or tap and hold), you'll see the blue square M-shape its way across, followed by the yellow square in its squiggly M-shape. If you" want to adapt the Bézier MoveTo or MoveBy actions for your own project, you should know that you can create only one U-shape with each Bézier configuration. They're fairly easy to implement and can quickly be copied and pasted, as shown in the sendSecondUnit function. Plus, as the control points and end position are just CGPoint values, they can be relative (that is, relative to the unit's current position, the world's position, or an enemy's position), and as a regular CCAction, they can be run with any CCNode object quite easily. Summary In this article, you learned how to do a variety of things, from making a score table and previewing the next move, to making use of Bézier curves. The code was built with a copy-paste mindset, so it can be adapted for any project without much reworking (if it is required at all). Resources for Article: Further resources on this subject: Cocos2d-x: Installation [article] Dragging a CCNode in Cocos2D-Swift [article] Animations in Cocos2d-x [article]
article-image-using-mock-objects-test-interactions
Packt
23 Apr 2015
25 min read
Save for later

Using Mock Objects to Test Interactions

In this article by Siddharta Govindaraj, author of the book Test-Driven Python Development, we will look at the Event class. The Event class is very simple: receivers can register with the event to be notified when the event occurs. When the event fires, all the receivers are notified of the event. (For more resources related to this topic, see here.) A more detailed description is as follows: Event classes have a connect method, which takes a method or function to be called when the event fires When the fire method is called, all the registered callbacks are called with the same parameters that are passed to the fire method Writing tests for the connect method is fairly straightforward—we just need to check that the receivers are being stored properly. But, how do we write the tests for the fire method? This method does not change any state or store any value that we can assert on. The main responsibility of this method is to call other methods. How do we test that this is being done correctly? This is where mock objects come into the picture. Unlike ordinary unit tests that assert on object state, mock objects are used to test that the interactions between multiple objects occurs as it should. Hand writing a simple mock To start with, let us look at the code for the Event class so that we can understand what the tests need to do. The following code is in the file event.py in the source directory: class Event:    """A generic class that provides signal/slot functionality"""      def __init__(self):        self.listeners = []      def connect(self, listener):        self.listeners.append(listener)      def fire(self, *args, **kwargs):        for listener in self.listeners:            listener(*args, **kwargs) The way this code works is fairly simple. Classes that want to get notified of the event should call the connect method and pass a function. This will register the function for the event. Then, when the event is fired using the fire method, all the registered functions will be notified of the event. The following is a walk-through of how this class is used: >>> def handle_event(num): ...   print("I got number {0}".format(num)) ... >>> event = Event() >>> event.connect(handle_event) >>> event.fire(3) I got number 3 >>> event.fire(10) I got number 10 As you can see, every time the fire method is called, all the functions that registered with the connect method get called with the given parameters. So, how do we test the fire method? The walk-through above gives a hint. What we need to do is to create a function, register it using the connect method, and then verify that the method got notified when the fire method was called. The following is one way to write such a test: import unittest from ..event import Event   class EventTest(unittest.TestCase):    def test_a_listener_is_notified_when_an_event_is_raised(self):        called = False        def listener():            nonlocal called            called = True          event = Event()        event.connect(listener)        event.fire()        self.assertTrue(called) Put this code into the test_event.py file in the tests folder and run the test. The test should pass. The following is what we are doing: First, we create a variable named called and set it to False. Next, we create a dummy function. When the function is called, it sets called to True. Finally, we connect the dummy function to the event and fire the event. 
If the dummy function was successfully called when the event was fired, then the called variable would be changed to True, and we assert that the variable is indeed what we expected. The dummy function we created above is an example of a mock. A mock is simply an object that is substituted for a real object in the test case. The mock then records some information such as whether it was called, what parameters were passed, and so on, and we can then assert that the mock was called as expected. Talking about parameters, we should write a test that checks that the parameters are being passed correctly. The following is one such test:    def test_a_listener_is_passed_right_parameters(self):        params = ()        def listener(*args, **kwargs):            nonlocal params            params = (args, kwargs)        event = Event()        event.connect(listener)        event.fire(5, shape="square")        self.assertEquals(((5, ), {"shape":"square"}), params) This test is the same as the previous one, except that it saves the parameters that are then used in the assert to verify that they were passed properly. At this point, we can see some repetition coming up in the way we set up the mock function and then save some information about the call. We can extract this code into a separate class as follows: class Mock:    def __init__(self):        self.called = False        self.params = ()      def __call__(self, *args, **kwargs):        self.called = True        self.params = (args, kwargs) Once we do this, we can use our Mock class in our tests as follows: class EventTest(unittest.TestCase):    def test_a_listener_is_notified_when_an_event_is_raised(self):        listener = Mock()        event = Event()        event.connect(listener)        event.fire()        self.assertTrue(listener.called)      def test_a_listener_is_passed_right_parameters(self):        listener = Mock()        event = Event()        event.connect(listener)        event.fire(5, shape="square")        self.assertEquals(((5, ), {"shape": "square"}),             listener.params) What we have just done is to create a simple mocking class that is quite lightweight and good for simple uses. However, there are often times when we need much more advanced functionality, such as mocking a series of calls or checking the order of specific calls. Fortunately, Python has us covered with the unittest.mock module that is supplied as a part of the standard library. Using the Python mocking framework The unittest.mock module provided by Python is an extremely powerful mocking framework, yet at the same time it is very easy to use. Let us redo our tests using this library. First, we need to import the mock module at the top of our file as follows: from unittest import mock Next, we rewrite our first test as follows: class EventTest(unittest.TestCase):    def test_a_listener_is_notified_when_an_event_is_raised(self):        listener = mock.Mock()        event = Event()        event.connect(listener)        event.fire()        self.assertTrue(listener.called) The only change that we've made is to replace our own custom Mock class with the mock.Mock class provided by Python. That is it. With that single line change, our test is now using the inbuilt mocking class. The unittest.mock.Mock class is the core of the Python mocking framework. All we need to do is to instantiate the class and pass it in where it is required. The mock will record if it was called in the called instance variable. How do we check that the right parameters were passed? 
Let us look at the rewrite of the second test as follows:    def test_a_listener_is_passed_right_parameters(self):        listener = mock.Mock()        event = Event()        event.connect(listener)        event.fire(5, shape="square")        listener.assert_called_with(5, shape="square") The mock object automatically records the parameters that were passed in. We can assert on the parameters by using the assert_called_with method on the mock object. The method will raise an assertion error if the parameters don't match what was expected. In case we are not interested in testing the parameters (maybe we just want to check that the method was called), then we can pass the value mock.ANY. This value will match any parameter passed. There is a subtle difference in the way normal assertions are called compared to assertions on mocks. Normal assertions are defined as a part of the unittest.Testcase class. Since our tests inherit from that class, we call the assertions on self, for example, self.assertEquals. On the other hand, the mock assertion methods are a part of the mock object, so you call them on the mock object, for example, listener.assert_called_with. Mock objects have the following four assertions available out of the box: assert_called_with: This method asserts that the last call was made with the given parameters assert_called_once_with: This assertion checks that the method was called exactly once and was with the given parameters assert_any_call: This checks that the given call was made at some point during the execution assert_has_calls: This assertion checks that a list of calls occurred The four assertions are very subtly different, and that shows up when the mock has been called more than one. The assert_called_with method only checks the last call, so if there was more than one call, then the previous calls will not be asserted. The assert_any_call method will check if a call with the given parameters occurred anytime during execution. The assert_called_once_with assertion asserts for a single call, so if the mock was called more than once during execution, then this assert would fail. The assert_has_calls assertion can be used to assert that a set of calls with the given parameters occurred. Note that there might have been more calls than what we checked for in the assertion, but the assertion would still pass as long as the given calls are present. Let us take a closer look at the assert_has_calls assertion. Here is how we can write the same test using this assertion:    def test_a_listener_is_passed_right_parameters(self):        listener = mock.Mock()        event = Event()        event.connect(listener)        event.fire(5, shape="square")        listener.assert_has_calls([mock.call(5, shape="square")]) The mocking framework internally uses _Call objects to record calls. The mock.call function is a helper to create these objects. We just call it with the expected parameters to create the required call objects. We can then use these objects in the assert_has_calls assertion to assert that the expected call occurred. This method is useful when the mock was called multiple times and we want to assert only some of the calls. Mocking objects While testing the Event class, we only needed to mock out single functions. A more common use of mocking is to mock a class. 
Take a look at the implementation of the Alert class in the following: class Alert:    """Maps a Rule to an Action, and triggers the action if the rule    matches on any stock update"""      def __init__(self, description, rule, action):        self.description = description        self.rule = rule        self.action = action      def connect(self, exchange):        self.exchange = exchange        dependent_stocks = self.rule.depends_on()        for stock in dependent_stocks:            exchange[stock].updated.connect(self.check_rule)      def check_rule(self, stock):        if self.rule.matches(self.exchange):            self.action.execute(self.description) Let's break down how this class works as follows: The Alert class takes a Rule and an Action in the initializer. When the connect method is called, it takes all the dependent stocks and connects to their updated event. The updated event is an instance of the Event class that we saw earlier. Each Stock class has an instance of this event, and it is fired whenever a new update is made to that stock. The listener for this event is the self.check_rule method of the Alert class. In this method, the alert checks if the new update caused a rule to be matched. If the rule matched, it calls the execute method on the Action. Otherwise, nothing happens. This class has a few requirements, as shown in the following, that need to be met. Each of these needs to be made into a unit test. If a stock is updated, the class should check if the rule matches If the rule matches, then the corresponding action should be executed If the rule doesn't match, then nothing happens There are a number of different ways in which we could test this; let us go through some of the options. The first option is not to use mocks at all. We could create a rule, hook it up to a test action, and then update the stock and verify that the action was executed. The following is what such a test would look like: import unittest from datetime import datetime from unittest import mock   from ..alert import Alert from ..rule import PriceRule from ..stock import Stock   class TestAction:    executed = False      def execute(self, description):        self.executed = True   class AlertTest(unittest.TestCase):    def test_action_is_executed_when_rule_matches(self):        exchange = {"GOOG": Stock("GOOG")}        rule = PriceRule("GOOG", lambda stock: stock.price > 10)       action = TestAction()        alert = Alert("sample alert", rule, action)        alert.connect(exchange)        exchange["GOOG"].update(datetime(2014, 2, 10), 11)        self.assertTrue(action.executed) This is the most straightforward option, but it requires a bit of code to set up and there is the TestAction that we need to create just for the test case. Instead of creating a test action, we could instead replace it with a mock action. We can then simply assert on the mock that it got executed. The following code shows this variation of the test case:    def test_action_is_executed_when_rule_matches(self):        exchange = {"GOOG": Stock("GOOG")}        rule = PriceRule("GOOG", lambda stock: stock.price > 10)        action = mock.MagicMock()       alert = Alert("sample alert", rule, action)        alert.connect(exchange)        exchange["GOOG"].update(datetime(2014, 2, 10), 11)        action.execute.assert_called_with("sample alert") A couple of observations about this test: If you notice, alert is not the usual Mock object that we have been using so far, but a MagicMock object. 
A MagicMock object is like a Mock object but it has special support for Python's magic methods which are present on all classes, such as __str__, hasattr. If we don't use MagicMock, we may sometimes get errors or strange behavior if the code uses any of these methods. The following example illustrates the difference: >>> from unittest import mock >>> mock_1 = mock.Mock() >>> mock_2 = mock.MagicMock() >>> len(mock_1) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: object of type 'Mock' has no len() >>> len(mock_2) 0 >>>  In general, we will be using MagicMock in most places where we need to mock a class. Using Mock is a good option when we need to mock stand alone functions, or in rare situations where we specifically don't want a default implementation for the magic methods. The other observation about the test is the way methods are handled. In the test above, we created a mock action object, but we didn't specify anywhere that this mock class should contain an execute method and how it should behave. In fact, we don't need to. When a method or attribute is accessed on a mock object, Python conveniently creates a mock method and adds it to the mock class. Therefore, when the Alert class calls the execute method on our mock action object, that method is added to our mock action. We can then check that the method was called by asserting on action.execute.called. The downside of Python's behavior of automatically creating mock methods when they are accessed is that a typo or change in interface can go unnoticed. For example, suppose we rename the execute method in all the Action classes to run. But if we run our test cases, it still passes. Why does it pass? Because the Alert class calls the execute method, and the test only checks that the execute method was called, which it was. The test does not know that the name of the method has been changed in all the real Action implementations and that the Alert class will not work when integrated with the actual actions. To avoid this problem, Python supports using another class or object as a specification. When a specification is given, the mock object only creates the methods that are present in the specification. All other method or attribute accesses will raise an error. Specifications are passed to the mock at initialization time via the spec parameter. Both the Mock as well as MagicMock classes support setting a specification. The following code example shows the difference when a spec parameter is set compared to a default Mock object: >>> from unittest import mock >>> class PrintAction: ...     def run(self, description): ...         print("{0} was executed".format(description)) ...   >>> mock_1 = mock.Mock() >>> mock_1.execute("sample alert") # Does not give an error <Mock name='mock.execute()' id='54481752'>   >>> mock_2 = mock.Mock(spec=PrintAction) >>> mock_2.execute("sample alert") # Gives an error Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:Python34libunittestmock.py", line 557, in __getattr__    raise AttributeError("Mock object has no attribute %r" % name) AttributeError: Mock object has no attribute 'execute' Notice in the above example that mock_1 goes ahead and executes the execute method without any error, even though the method has been renamed in the PrintAction. On the other hand, by giving a spec, the method call to the nonexistent execute method raises an exception. 
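A related helper worth knowing about is mock.create_autospec, which builds the specification for you from a class or function. Besides restricting the available attributes the way spec does, it also checks that the arguments of each call match the real method's signature. The following is a small sketch that reuses the PrintAction class from the previous example; treat it as an illustration rather than part of the book's code:

from unittest import mock

strict_action = mock.create_autospec(PrintAction, instance=True)
strict_action.run("sample alert")       # allowed: run() exists on PrintAction
strict_action.execute("sample alert")   # raises AttributeError, just like spec=PrintAction

There is also a spec_set argument that goes one step further and additionally prevents the test from setting attributes that do not exist on the specification.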
Mocking return values The second variant above showed how we could use a mock Action class in the test instead of a real one. In the same way, we can also use a mock rule instead of creating a PriceRule in the test. The alert calls the rule to see whether the new stock update caused the rule to be matched. What the alert does depends on whether the rule returned True or False. All the mocks we've created so far have not had to return a value. We were just interested in whether the right call was made or not. If we mock the rule, then we will have to configure it to return the right value for the test. Fortunately, Python makes that very simple to do. All we have to do is to set the return value as a parameter in the constructor to the mock object as follows: >>> matches = mock.Mock(return_value=True) >>> matches() True >>> matches(4) True >>> matches(4, "abcd") True As we can see above, the mock just blindly returns the set value, irrespective of the parameters. Even the type or number of parameters is not considered. We can use the same procedure to set the return value of a method in a mock object as follows: >>> rule = mock.MagicMock() >>> rule.matches = mock.Mock(return_value=True) >>> rule.matches() True >>>  There is another way to set the return value, which is very convenient when dealing with methods in mock objects. Each mock object has a return_value attribute. We simply set this attribute to the return value and every call to the mock will return that value, as shown in the following: >>> from unittest import mock >>> rule = mock.MagicMock() >>> rule.matches.return_value = True >>> rule.matches() True >>>  In the example above, the moment we access rule.matches, Python automatically creates a mock matches object and puts it in the rule object. This allows us to directly set the return value in one statement without having to create a mock for the matches method. Now that we've seen how to set the return value, we can go ahead and change our test to use a mocked rule object, as shown in the following:    def test_action_is_executed_when_rule_matches(self):        exchange = {"GOOG": Stock("GOOG")}        rule = mock.MagicMock(spec=PriceRule)        rule.matches.return_value = True        rule.depends_on.return_value = {"GOOG"}        action = mock.MagicMock()        alert = Alert("sample alert", rule, action)        alert.connect(exchange)        exchange["GOOG"].update(datetime(2014, 2, 10), 11)        action.execute.assert_called_with("sample alert") There are two calls that the Alert makes to the rule: one to the depends_on method and the other to the matches method. We set the return value for both of them and the test passes. In case no return value is explicitly set for a call, the default return value is to return a new mock object. The mock object is different for each method that is called, but is consistent for a particular method. This means if the same method is called multiple times, the same mock object will be returned each time. Mocking side effects Finally, we come to the Stock class. This is the final dependency of the Alert class. We're currently creating Stock objects in our test, but we could replace it with a mock object just like we did for the Action and PriceRule classes. The Stock class is again slightly different in behavior from the other two mock objects. The update method doesn't just return a value—it's primary behavior in this test is to trigger the updated event. Only if this event is triggered will the rule check occur. 
In order to do this, we must tell our mock stock class to fire the event when the update event is called. Mock objects have a side_effect attribute to enable us to do just this. There are many reasons we might want to set a side effect. Some of them are as follows: We may want to call another method, like in the case of the Stock class, which needs to fire the event when the update method is called. To raise an exception: this is particularly useful when testing error situations. Some errors such as a network timeout might be very difficult to simulate, and it is better to test using a mock that simply raises the appropriate exception. To return multiple values: these may be different values each time the mock is called, or specific values, depending on the parameters passed. Setting the side effect is just like setting the return value. The only difference is that the side effect is a lambda function. When the mock is executed, the parameters are passed to the lambda function and the lambda is executed. The following is how we would use this with a mocked out Stock class:    def test_action_is_executed_when_rule_matches(self):        goog = mock.MagicMock(spec=Stock)        goog.updated = Event()        goog.update.side_effect = lambda date, value:                goog.updated.fire(self)        exchange = {"GOOG": goog}      rule = mock.MagicMock(spec=PriceRule)        rule.matches.return_value = True        rule.depends_on.return_value = {"GOOG"}        action = mock.MagicMock()        alert = Alert("sample alert", rule, action)        alert.connect(exchange)         exchange["GOOG"].update(datetime(2014, 2, 10), 11)        action.execute.assert_called_with("sample alert") So what is going on in that test? First, we create a mock of the Stock class instead of using the real one. Next, we add in the updated event. We need to do this because the Stock class creates the attribute at runtime in the __init__ scope. Because the attribute is set dynamically, MagicMock does not pick up the attribute from the spec parameter. We are setting an actual Event object here. We could set it as a mock as well, but it is probably overkill to do that. Finally, we set the side effect for the update method in the mock stock object. The lambda takes the two parameters that the method does. In this particular example, we just want to fire the event, so the parameters aren't used in the lambda. In other cases, we might want to perform different actions based on the values of the parameters. Setting the side_effect attribute allows us to do that. Just like with the return_value attribute, the side_effect attribute can also be set in the constructor. Run the test and it should pass. The side_effect attribute can also be set to an exception or a list. If it is set to an exception, then the given exception will be raised when the mock is called, as shown in the following: >>> m = mock.Mock() >>> m.side_effect = Exception() >>> m() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:Python34libunittestmock.py", line 885, in __call__    return _mock_self._mock_call(*args, **kwargs) File "C:Python34libunittestmock.py", line 941, in _mock_call    raise effect Exception If it is set to a list, then the mock will return the next element of the list each time it is called. 
This is a good way to mock a function that has to return different values each time it is called, as shown in the following: >>> m = mock.Mock() >>> m.side_effect = [1, 2, 3] >>> m() 1 >>> m() 2 >>> m() 3 >>> m() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:Python34libunittestmock.py", line 885, in __call__    return _mock_self._mock_call(*args, **kwargs) File "C:Python34libunittestmock.py", line 944, in _mock_call    result = next(effect) StopIteration As we have seen, the mocking framework's method of handling side effects using the side_effect attribute is very simple, yet quite powerful. How much mocking is too much? In the previous few sections, we've seen the same test written with different levels of mocking. We started off with a test that didn't use any mocks at all, and subsequently mocked out each of the dependencies one by one. Which one of these solutions is the best? As with many things, this is a point of personal preference. A purist would probably choose to mock out all dependencies. My personal preference is to use real objects when they are small and self-contained. I would not have mocked out the Stock class. This is because mocks generally require some configuration with return values or side effects, and this configuration can clutter the test and make it less readable. For small, self-contained classes, it is simpler to just use the real object. At the other end of the spectrum, classes that might interact with external systems, or that take a lot of memory, or are slow are good candidates for mocking out. Additionally, objects that require a lot of dependencies on other object to initialize are candidates for mocking. With mocks, you just create an object, pass it in, and assert on parts that you are interested in checking. You don't have to create an entirely valid object. Even here there are alternatives to mocking. For example, when dealing with a database, it is common to mock out the database calls and hardcode a return value into the mock. This is because the database might be on another server, and accessing it makes the tests slow and unreliable. However, instead of mocks, another option could be to use a fast in-memory database for the tests. This allows us to use a live database instead of a mocked out database. Which approach is better depends on the situation. Mocks versus stubs versus fakes versus spies We've been talking about mocks so far, but we've been a little loose on the terminology. Technically, everything we've talked about falls under the category of a test double. A test double is some sort of fake object that we use to stand in for a real object in a test case. Mocks are a specific kind of test double that record information about calls that have been made to it, so that we can assert on them later. Stubs are just an empty do-nothing kind of object or method. They are used when we don't care about some functionality in the test. For example, imagine we have a method that performs a calculation and then sends an e-mail. If we are testing the calculation logic, we might just replace the e-mail sending method with an empty do-nothing method in the test case so that no e-mails are sent out while the test is running. Fakes are a replacement of one object or system with a simpler one that facilitates easier testing. Using an in-memory database instead of the real one, or the way we created a dummy TestAction earlier in this article would be examples of fakes. Finally, spies are objects that are like middlemen. 
Like mocks, they record the calls so that we can assert on them later, but after recording, they continue execution to the original code. Spies are different from the other three in the sense that they do not replace any functionality. After recording the call, the real code is still executed. Spies sit in the middle and do not cause any change in execution pattern. Summary In this article, you looked at how to use mocks to test interactions between objects. You saw how to hand write our own mocks, followed by using the mocking framework provided in the Python standard library. Resources for Article: Further resources on this subject: Analyzing a Complex Dataset [article] Solving problems – closest good restaurant [article] Importing Dynamic Data [article]

article-image-light-speed-unit-testing
Packt
23 Apr 2015
6 min read
Save for later

Light Speed Unit Testing

In this article by Paulo Ragonha, author of the book Jasmine JavaScript Testing - Second Edition, we will learn Jasmine stubs and Jasmine Ajax plugin. (For more resources related to this topic, see here.) Jasmine stubs We use stubs whenever we want to force a specific path in our specs or replace a real implementation for a simpler one. Let's take the example of the acceptance criteria, "Stock when fetched, should update its share price", by writing it using Jasmine stubs. The stock's fetch function is implemented using the $.getJSON function, as follows: Stock.prototype.fetch = function(parameters) { $.getJSON(url, function (data) {    that.sharePrice = data.sharePrice;    success(that); }); }; We could use the spyOn function to set up a spy on the getJSON function with the following code: describe("when fetched", function() { beforeEach(function() {    spyOn($, 'getJSON').and.callFake(function(url, callback) {      callback({ sharePrice: 20.18 });    });    stock.fetch(); });   it("should update its share price", function() {    expect(stock.sharePrice).toEqual(20.18); }); }); We will use the and.callFake function to set a behavior to our spy (by default, a spy does nothing and returns undefined). We make the spy invoke its callback parameter with an object response ({ sharePrice: 20.18 }). Later, at the expectation, we use the toEqual assertion to verify that the stock's sharePrice has changed. To run this spec, you no longer need a server to make the requests to, which is a good thing, but there is one issue with this approach. If the fetch function gets refactored to use $.ajax instead of $.getJSON, then the test will fail. A better solution, provided by a Jasmine plugin called jasmine-ajax, is to stub the browser's AJAX infrastructure instead, so the implementation of the AJAX request is free to be done in different manners. Jasmine Ajax Jasmine Ajax is an official plugin developed to help out the testing of AJAX requests. It changes the browser's AJAX request infrastructure to a fake implementation. This fake (or mocked) implementation, although simpler, still behaves like the real implementation to any code using its API. Installing the plugin Before we dig into the spec implementation, first we need to add the plugin to the project. Go to https://github.com/jasmine/jasmine-ajax/ and download the current release (which should be compatible with the Jasmine 2.x release). Place it inside the lib folder. It is also needed to be added to the SpecRunner.html file, so go ahead and add another script: <script type="text/javascript" src="lib/mock-ajax.js"></script> A fake XMLHttpRequest Whenever you are using jQuery to make AJAX requests, under the hood it is actually using the XMLHttpRequest object to perform the request. XMLHttpRequest is the standard JavaScript HTTP API. Even though its name suggests that it uses XML, it supports other types of content such as JSON; the name has remained the same for compatibility reasons. So, instead of stubbing jQuery, we could change the XMLHttpRequest object with a fake implementation. That is exactly what this plugin does. 
Let's rewrite the previous spec to use this fake implementation: describe("when fetched", function() { beforeEach(function() {    jasmine.Ajax.install(); });   beforeEach(function() {    stock.fetch();      jasmine.Ajax.requests.mostRecent().respondWith({      'status': 200,      'contentType': 'application/json',      'responseText': '{ "sharePrice": 20.18 }'    }); });   afterEach(function() {    jasmine.Ajax.uninstall(); });   it("should update its share price", function() {    expect(stock.sharePrice).toEqual(20.18); }); }); Drilling the implementation down: First, we tell the plugin to replace the original implementation of the XMLHttpRequest object by a fake implementation using the jasmine.Ajax.install function. We then invoke the stock.fetch function, which will invoke $.getJSON, creating XMLHttpRequest anew under the hood. And finally, we use the jasmine.Ajax.requests.mostRecent().respondWith function to get the most recently made request and respond to it with a fake response. We use the respondWith function, which accepts an object with three properties: The status property to define the HTTP status code. The contentType (JSON in the example) property. The responseText property, which is a text string containing the response body for the request. Then, it's all a matter of running the expectations: it("should update its share price", function() { expect(stock.sharePrice).toEqual(20.18); }); Since the plugin changes the global XMLHttpRequest object, you must remember to tell Jasmine to restore it to its original implementation after the test runs; otherwise, you could interfere with the code from other specs (such as the Jasmine jQuery fixtures module). Here's how you can accomplish this: afterEach(function() { jasmine.Ajax.uninstall(); }); There is also a slightly different approach to write this spec; here, the request is first stubbed (with the response details) and the code to be exercised is executed later. The previous example is changed to the following: beforeEach(function() {jasmine.Ajax.stubRequest('http://localhost:8000/stocks/AOUE').andReturn({    'status': 200,    'contentType': 'application/json',    'responseText': '{ "sharePrice": 20.18 }' });   stock.fetch(); }); It is possible to use the jasmine.Ajax.stubRequest function to stub any request to a specific request. In the example, it is defined by the URL http://localhost:8000/stocks/AOUE, and the response definition is as follows: { 'status': 200, 'contentType': 'application/json', 'responseText': '{ "sharePrice": 20.18 }' } The response definition follows the same properties as the previously used respondWith function. Summary In this article, you learned how asynchronous tests can hurt the quick feedback loop you can get with unit testing. I showed how you can use either stubs or fakes to make your specs run quicker and with fewer dependencies. We have seen two different ways in which you could test AJAX requests with a simple Jasmine stub and with the more advanced, fake implementation of the XMLHttpRequest. You also got more familiar with spies and stubs and should be more comfortable using them in different scenarios. Resources for Article: Further resources on this subject: Optimizing JavaScript for iOS Hybrid Apps [article] Working with Blender [article] Category Theory [article]

article-image-develop-digital-clock
Packt
22 Apr 2015
15 min read
Save for later

Develop a Digital Clock

In this article by Samarth Shah, author of the book Learning Raspberry Pi, we will take your Raspberry Pi to the real world. Make sure you have all the components listed for you to go ahead: Raspberry Pi with Raspbian OS. A keyboard/mouse. A monitor to display the content of Raspberry Pi. If you don't have Raspberry Pi, you can install the VNC server on Raspberry Pi, and on your laptop using the VNC viewer, you will be able to display the content. Hook up wires of different colors (keep around 30 wires of around 10 cm long). To do: Read instructions on how to cut the wires. An HD44780-based LCD. Note; I have used JHD162A. A breadboard. 10K potentiometer (optional). You will be using potentiometer to control the contrast of the LCD, so if you don't have potentiometer, contrast would be fixed and that would be okay for this project. Potentiometer is just a fancy word used for variable resistor. Basically, it is just a three-terminal resistor with sliding or rotating contact, which is used for changing the value of the resistor. (For more resources related to this topic, see here.) Setting up Raspberry Pi Once you have all the components listed in the previous section, before you get started, there are some software installation that needs to be done: Sudo apt-get update For controlling GPIO pins of Raspberry Pi, you will be using Python so for that python-dev, python-setuptools, and rpi.gpio (the Python wrapper of WiringPi) are required. Install them using the following command: Sudo apt-get install python-dev Sudo apt-get install python-setuptools Sudo apt-get install rpi.gpio Now, your Raspberry Pi is all set to control the LCD, but before you go ahead and start connecting LCD pins with Raspberry Pi GPIO pins, you need to understand how LCD works and more specifically how HD44780 based LCD works. Understanding HD44780-based LCD The LCD character displays can be found in espresso machines, laser printers, children's toys, and maybe even the odd toaster. The Hitachi HD44780 controller has become an industry standard for these types of displays. If you look at the back side of the LCD that you have bought, you will find 16 pins: Vcc / HIGH / '1' +5 V GND / LOW / '0' 0 V The following table depicts the HD44780 pin number and functionality: Pin number Functionality 1 Ground 2 VCC 3 Contrast adjustment 4 Register select 5 Read/Write(R/W) 6 Clock(Enable) 7 Bit 0 8 Bit 1 9 Bit 2 10 Bit 3 11 Bit 4 12 Bit 5 13 Bit 6 14 Bit 7 15 Backlight anode (+) 16 Backlight cathode (-) Pin 1 and Pin 2 are the power supply pins. They need to be connected with ground and +5 V power supply respectively. Pin 3 is a contrast setting pin. It should be connected to a potentiometer to control the contrast. However, in a JHD162A LCD, if you directly connect this pin to ground, initially, you will see dark boxes but that will work if you don't have potentiometer. Pin 4, Pin 5, and Pin 6 are the control pins. Pin 7 to Pin 14 are the data pins of LCD. Pin 7 is the least significant bit and pin 14 is the most significant bit of the date inputs. You can use LCD in two modes, that is, 4-bit or 8-bit. In the next section, you will be doing 4-bit operation to control the LCD. If you want to display some number/character on the display, you have to input the appropriate codes for that number/character on these pins. Pin 15 and Pin 16 provide power supply to the backlight of LCD. A backlight is a light within the LCD panel, which makes seeing the characters on the screen easier. 
When you leave your cell phone or MP3 player untouched for some time, the screen goes dark. This is the backlight turning off. It is possible to use the LCD without the backlight as well. JHD162A has a backlight, so you need to connect the power supply and ground to these pins respectively. Pin 4, Pin 5, and Pin 6 are the most important pins. As mentioned in the table, Pin 4 is the register select pin. This allows you to switch between two operating modes of the LCD, namely the instruction and character modes. Depending on the status of this pin, the data on the 8 data pins (D0-D7) is treated as either an instruction or as character data. To display some characters on LCD, you have to activate the character mode. And to give some instructions such as "clear the display" and "move cursor to home", you have to activate the command mode. To set the LCD in the instruction mode, set Pin 4 to '0' and to put it in character mode, set Pin 4 to '1'. Mostly, you will be using the LCD to display something on the screen; however, sometimes you may require to read what is being written on the LCD. In this case, Pin 5 (read-write) is used. If you set Pin 5 to 0, it will work in the write mode, and if you set Pin 5 to 1, it will work in the read mode. For all the practical purposes, Pin 5 (R/W) has to be permanently set to 0, that is, connect it with GND (Ground). Pin 6 (enable pin) has a very simple function. This is just the clock input for the LCD. The instruction or the character data at the data pins (Pin 7-Pin 14) is processed by the LCD on the falling edge of this pin. The enable pin should be normally held at 1, that is, Vcc by a pull up resistor. When a momentary button switch is pressed, the pin goes low and back to high again when you leave the switch. Your instruction or character will be executed on the falling edge of the pulse, that is, the moment when the switch get closed. So, the flow diagram of a typical write sequence to LCD will be: Connecting LCD pins and Raspberry Pi GPIO pins Having understood the way LCD works, you are ready to connect your LCD with Raspberry Pi. Connect LCD pins with Raspberry Pi pins using following table: LCD pins Functionality Raspberry Pi pins 1 Ground Pin 6 2 Vcc Pin 2 3 Contrast adjustment Pin 6 4 Register select Pin 26 5 Read/Write (R/W) Pin 6 6 Clock (Enable) Pin 24 7 Bit 0 Not used 8 Bit 1 Not used 9 Bit 2 Not used 10 Bit 3 Not used 11 Bit 4 Pin 22 12 Bit 5 Pin 18 13 Bit 6 Pin 16 14 Bit 7 Pin 12 15 Backlight anode (+) Pin 2 16 Backlight cathode (-) Pin 6 Your LCD should have come with 16-pin single row pin header (male/female) soldered with 16 pins of LCD. If you didn't get any pin header with the LCD, you have to buy a 16-pin single row female pin header. If the LCD that you bought doesn't have a soldered 16-pin single row pin header, you have to solder it. Please note that if you have not done soldering before, don't try to solder it by yourself. Ask some of your friends who can help or the easiest option is to take it to the place from where you have bought it; he will solder it for you in merely 5 minutes. The final connections are shown in the following screenshot. Make sure your Raspberry Pi is not running when you are connecting LCD pins with Raspberry Pi pins. Connect LCD pins with Raspberry Pi using hook up wires. 
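One detail that is easy to trip over when you move on to the code: the preceding table lists the physical header pin numbers (what the RPi.GPIO library calls BOARD numbering), while the script in the next section configures the library with GPIO.setmode(GPIO.BCM) and therefore refers to the same pins by their Broadcom channel numbers. Assuming the standard Raspberry Pi pinout, the wiring above corresponds to the default pin numbers used by the HD44780 class that follows; the constant names in this short Python reference are mine, purely for illustration:

# Physical (BOARD) pins from the wiring table and their BCM equivalents:
#   LCD pin 4  (Register select) -> physical pin 26 -> BCM GPIO 7
#   LCD pin 6  (Enable)          -> physical pin 24 -> BCM GPIO 8
#   LCD pin 11 (Bit 4)           -> physical pin 22 -> BCM GPIO 25
#   LCD pin 12 (Bit 5)           -> physical pin 18 -> BCM GPIO 24
#   LCD pin 13 (Bit 6)           -> physical pin 16 -> BCM GPIO 23
#   LCD pin 14 (Bit 7)           -> physical pin 12 -> BCM GPIO 18
PIN_RS = 7
PIN_E = 8
PINS_DB = [18, 23, 24, 25]   # the defaults used by the HD44780 class in the next section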
Before you start scripting, you need to understand a few more things about operating the LCD in the 4-bit mode: The LCD defaults to the 8-bit mode, so the first commands must be sent as 8-bit instructions that tell the LCD to switch to the 4-bit mode. In the 4-bit mode, a write sequence is a bit different from the one depicted earlier for the 8-bit mode. In the following diagram, Step 3 will be different in the 4-bit mode, as there are only 4 data pins available for data transfer and you cannot transfer 8 bits at the same time. So in this case, as per the HD44780 datasheet, you have to send the upper "nibble" to LCD Pin 11 to Pin 14 first and execute those bits by pulling the Enable pin LOW (the instruction/character is executed on the falling edge of the pulse) and back to HIGH again. Once you have sent the upper nibble, send the lower nibble to LCD Pin 11 to Pin 14 and, to execute the instruction/character, set the Enable pin to LOW again. So, the typical 4-bit mode write process will look like this: Scripting Once you have connected the LCD pins with Raspberry Pi, as per the previously shown diagram, boot up your Raspberry Pi. The following are the steps to develop a digital clock: Create a new Python script by right-clicking Create | New File | DigitalClock.py. Here is the code that needs to be copied into the DigitalClock.py file:
#!/usr/bin/python
import RPi.GPIO as GPIO
import time
from time import sleep
from datetime import datetime
from time import strftime

class HD44780:
    def __init__(self, pin_rs=7, pin_e=8, pins_db=[18, 23, 24, 25]):
        self.pin_rs = pin_rs
        self.pin_e = pin_e
        self.pins_db = pins_db
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(self.pin_e, GPIO.OUT)
        GPIO.setup(self.pin_rs, GPIO.OUT)
        for pin in self.pins_db:
            GPIO.setup(pin, GPIO.OUT)
        self.clear()

    def clear(self):
        # Blank / reset the LCD and set it to the 4-bit, 2-line mode
        self.cmd(0x33)
        self.cmd(0x32)
        self.cmd(0x28)
        self.cmd(0x0C)
        self.cmd(0x06)
        self.cmd(0x01)

    def cmd(self, bits, char_mode=False):
        # Send an instruction (or a character when char_mode is True) to the LCD
        sleep(0.001)
        bits = bin(bits)[2:].zfill(8)
        GPIO.output(self.pin_rs, char_mode)
        # Upper nibble on the four data pins, then pulse Enable
        for pin in self.pins_db:
            GPIO.output(pin, False)
        for i in range(4):
            if bits[i] == "1":
                GPIO.output(self.pins_db[i], True)
        GPIO.output(self.pin_e, True)
        GPIO.output(self.pin_e, False)
        # Lower nibble on the same pins, then pulse Enable again
        for pin in self.pins_db:
            GPIO.output(pin, False)
        for i in range(4, 8):
            if bits[i] == "1":
                GPIO.output(self.pins_db[i - 4], True)
        GPIO.output(self.pin_e, True)
        GPIO.output(self.pin_e, False)

    def message(self, text):
        # Send a string to the LCD. A newline wraps to the second line
        for char in text:
            if char == '\n':
                self.cmd(0xC0)  # move to the next line
            else:
                self.cmd(ord(char), True)

if __name__ == '__main__':
    while True:
        lcd = HD44780()
        lcd.message(" " + datetime.now().strftime("%H:%M:%S"))
        time.sleep(1)
Run the preceding script by executing the following command: sudo python DigitalClock.py This code accesses the Raspberry Pi GPIO port, which requires root privileges, so sudo is used. Now, your digital clock is ready. The current Raspberry Pi time will be displayed on the LCD screen. Only 4 data pins of the LCD are connected, so this script works in the LCD 4-bit mode. I have used the JHD162A LCD, which is based on the HD44780 controller. The controlling mechanism for all HD44780 LCDs is the same, so the preceding class, HD44780, can also be used for controlling another HD44780-based LCD.
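Before wiring the class into the clock loop, it can be useful to check the display with a one-off message. The following is a minimal, assumed usage of the HD44780 class shown above (run it with sudo, like the main script); the greeting text is arbitrary:

# Quick wiring check -- not part of DigitalClock.py
lcd = HD44780()                      # uses the default BCM pins 7, 8, 18, 23, 24, 25
lcd.message("Hello\nRaspberry Pi")   # '\n' moves the rest of the text to the LCD's second line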
Once you have understood the preceding LCD 4-bit operation, it will be much easier to understand this code:
if __name__ == '__main__':
    while True:
        lcd = HD44780()
        lcd.message(" " + datetime.now().strftime("%H:%M:%S"))
        time.sleep(1)
The main HD44780 class is initialized, and the current time is sent to the LCD every second by calling the message function of the lcd object. The most important part of this project is the HD44780 class, which has the following four components: __init__: This initializes the necessary components. clear: This clears the LCD and instructs it to work in the 4-bit mode. cmd: This is the core of the class; each and every command/instruction that has to be executed is executed in this function. message: This is used to display a message on the screen.
A complete discussion of this code is beyond the scope of this article; however, to give a high-level overview, the functionality of each code is as follows: 0x33: This is the function set to the 8-bit mode 0x32: This is the function set to the 8-bit mode again 0x28: This is the function set to the 4-bit mode, which indicates the LCD has two lines 0x0C: This turns on just the LCD and not the cursor 0x06: This sets the entry mode to the autoincrement cursor and disables the shift mode 0x01: This clears the display The cmd function The cmd function sends the command to the LCD as per the LCD operation, and is shown as follows:    def cmd(self, bits, char_mode=False):      # Send command to LCD      sleep(0.001)      bits=bin(bits)[2:].zfill(8)      GPIO.output(self.pin_rs, char_mode)      for pin in self.pins_db:        GPIO.output(pin, False)      for i in range(4):        if bits[i] == "1":          GPIO.output(self.pins_db[i], True)      GPIO.output(self.pin_e, True)      GPIO.output(self.pin_e, False)      for pin in self.pins_db:        GPIO.output(pin, False)      for i in range(4,8):        if bits[i] == "1":          GPIO.output(self.pins_db[i-4], True)      GPIO.output(self.pin_e, True)      GPIO.output(self.pin_e, False) As the 4-bit mode is used drive the LCD, you will be using the flowchart that has been discussed previously. In the first line, the sleep(0.001) second is used because a few LCD operations will take some time to execute, so you have to wait for a certain time before it gets completed. The bits=bin(bits)[2:].zfill(8) line will convert the integer number to binary string and then pad it with zeros on the left to make it proper 8-bit data that can be sent to the LCD processor. The preceding lines of code does the operation as per the 4-bit write operation flowchart. The message function The message function sends the string to LCD, as shown here:    def message(self, text):      # Send string to LCD. Newline wraps to second line      for char in text:        if char == 'n':          self.cmd(0xC0) # next line        else:          self.cmd(ord(char),True) This function will take the string as input, convert each character into the corresponding ASCII code, and then send it to the cmd function. For a new line, the 0xC0 code is used. Discover more Raspberry Pi projects in Raspberry Pi LED Blueprints - pick up our guide and see how software can bring LEDs to life in amazing different ways!

article-image-designing-jasmine-tests-spies
Packt
22 Apr 2015
17 min read
Save for later

Designing Jasmine Tests with Spies

In this article by Munish Sethi, author of the book Jasmine Cookbook, we will see the implementation of Jasmine tests using spies. (For more resources related to this topic, see here.) Nowadays, JavaScript has become the de facto programming language to build and empower frontend/web applications. We can use JavaScript to develop simple or complex applications. However, applications in production are often vulnerable to bugs caused by design inconsistencies, logical implementation errors, and similar issues. Due to this, it is usually difficult to predict how applications will behave in real-time environments, which leads to unexpected behavior, nonavailability of applications, or outage for shorter/longer durations. This generates lack of confidence and dissatisfaction among application users. Also, high cost is often associated with fixing the production bugs. Therefore, there is a need to develop applications with high quality and high availability. Jasmine is a Behavior-Driven development (BDD) framework for testing JavaScript code both in browser and on server side. It plays a vital role to establish effective development process by applying efficient testing processes. Jasmine provides a rich set of libraries to design and develop Jasmine specs (unit tests) for JavaScript (or JavaScript enabled) applications. In this article, we will see how to develop specs using Jasmine spies and matchers. We will also see how to write Jasmine specs with the Data-Driven approach using JSON/HTML fixture from end-to-end (E2E) perspective. Let's understand the concept of mocks before we start developing Jasmine specs with spies. Generally, we write one unit test corresponding to a Jasmine spec to test a method, object, or component in isolation, and see how it behaves in different circumstances. However, there are situations where a method/object has dependencies on other methods or objects. In this scenario, we need to design tests/specs across the unit/methods or components to validate behavior or simulate a real-time scenario. However, due to nonavailability of dependent methods/objects or staging/production environment, it is quite challenging to write Jasmine tests for methods that have dependencies on other methods/objects. This is where Mocks come into the picture. A mock is a fake object that replaces the original object/method and imitates the behavior of the real object without going into the nitty-gritty or creating the real object/method. Mocks work by implementing the proxy model. Whenever, we create a mock object, it creates a proxy object, which replaces the real object/method. We can then define the methods that are called and their returned values in our test method. Mocks can then be utilized to retrieve some of the runtime statistics, as follows: How many times was the mocked function/object method called? What was the value that the function returned to the caller? With how many arguments was the function called? Developing Jasmine specs using spies In Jasmine, mocks are referred to as spies. Spies are used to mock a function/object method. A spy can stub any function and track calls to it and all its arguments. Jasmine provides a rich set of functions and properties to enable mocking. There are special matchers to interact with spies, that is, toHaveBeenCalled and toHaveBeenCalledWith. Now, to understand the preceding concepts, let's assume that you are developing an application for a company providing solutions for the healthcare industry. 
Currently, there is a need to design a component that gets a person's details (such as name, age, blood group, details of diseases, and so on) and processes it further for other usage. Now, assume that you are developing a component that verifies a person's details for blood or organ donation. There are also a few factors or biological rules that exist to donate or receive blood. For now, we can consider the following biological factors: The person's age should be greater than or equal to 18 years The person should not be infected with HIV+ Let's create the validate_person_eligibility.js file and consider the following code in the current context: var Person = function(name, DOB, bloodgroup, donor_receiver) {    this.myName = name; this.myDOB = DOB; this.myBloodGroup = bloodgroup; this.donor_receiver = donor_receiver; this.ValidateAge    = function(myDOB){     this.myDOB = myDOB || DOB;     return this.getAge(this.myDOB);    };    this.ValidateHIV   = function(personName,personDOB,personBloodGroup){     this.myName = personName || this.myName;     this.myDOB = personDOB || this.myDOB;     this.myBloodGroup = personBloodGroup || this.myBloodGroup;     return this.checkHIV(this.myName, this.myDOB, this.myBloodGroup);    }; }; Person.prototype.getAge = function(birth){ console.log("getAge() function is called"); var calculatedAge=0; // Logic to calculate person's age will be implemented later   if (calculatedAge<18) {    throw new ValidationError("Person must be 18 years or older"); }; return calculatedAge; }; Person.prototype.checkHIV = function(pName, pDOB, pBloodGroup){ console.log("checkHIV() function is called"); bolHIVResult=true; // Logic to verify HIV+ will be implemented later   if (bolHIVResult == true) {    throw new ValidationError("A person is infected with HIV+");   }; return bolHIVResult; };   // Define custom error for validation function ValidationError(message) { this.message = message; } ValidationError.prototype = Object.create(Error.prototype); In the preceding code snapshot, we created an object Person, which accepts four parameters, that is, name of the person, date of birth, the person's blood group, and the donor or receiver. Further, we defined the following functions within the person's object to validate biological factors: ValidateAge(): This function accepts an argument as the date of birth and returns the person's age by calling the getAge function. You can also notice that under the getAge function, the code is not developed to calculate the person's age. ValidateHIV(): This function accepts three arguments as name, date of birth, and the person's blood group. It verifies whether the person is infected with HIV or not by calling the checkHIV function. Under the function checkHIV, you can observe that code is not developed to check whether the person is infected with HIV+ or not. 
Next, let's create the spec file (validate_person_eligibility_spec.js) and code the following lines to develop the Jasmine spec, which validates all of the test conditions (biological rules) described in the previous sections:

describe("<ABC> Company: Health Care Solution, ", function() {
  describe("When to donate or receive blood, ", function(){
    it("Person's age should be greater than " +
       "or equal to 18 years", function() {
      var testPersonCriteria = new Person();
      spyOn(testPersonCriteria, "getAge");
      testPersonCriteria.ValidateAge("10/25/1990");
      expect(testPersonCriteria.getAge).toHaveBeenCalled();
      expect(testPersonCriteria.getAge).toHaveBeenCalledWith("10/25/1990");
    });
    it("A person should not be " +
       "infected with HIV+", function() {
      var testPersonCriteria = new Person();
      spyOn(testPersonCriteria, "checkHIV");
      testPersonCriteria.ValidateHIV();
      expect(testPersonCriteria.checkHIV).toHaveBeenCalled();
    });
  });
});

In the preceding snapshot, we mocked the functions getAge and checkHIV using spyOn(). We also applied the toHaveBeenCalled matcher to verify whether the getAge function was called or not. Let's look at the following pointers before we jump to the next step:

Jasmine provides the spyOn() function to mock any JavaScript function.
A spy can stub any function and track calls to it and to all of its arguments.
A spy only exists in the describe or it block in which it is defined, and will be removed after each spec.
Jasmine provides special matchers, toHaveBeenCalled and toHaveBeenCalledWith, to interact with spies.
The matcher toHaveBeenCalled returns true if the spy was called.
The matcher toHaveBeenCalledWith returns true if the argument list matches any of the recorded calls to the spy.

Let's add the reference of the validate_person_eligibility.js file to the Jasmine runner (that is, SpecRunner.html) and run the spec file to execute both the specs. You will see that both the specs are passing, as shown in the following screenshot:

While executing the Jasmine specs, you can notice that the log messages we defined under the getAge() and checkHIV() functions are not printed in the browser console window. Whenever we mock a function using Jasmine's spyOn() function, it replaces the original method of the object with a proxy method.

Next, let's consider a situation where function <B> is called inside function <A>, and <A> is mocked in your test. Because the mock replaces <A> with a proxy, the real implementation never runs, so function <B> will never be called. However, if the test needs that code to be executed, we chain the spyOn() function with .and.callThrough(). Let's consider the following test code:

it("Person's age should be greater than " +
   "or equal to 18 years", function() {
  var testPersonCriteria = new Person();
  spyOn(testPersonCriteria, "getAge").and.callThrough();
  testPersonCriteria.ValidateAge("10/25/1990");
  expect(testPersonCriteria.getAge).toHaveBeenCalled();
  expect(testPersonCriteria.getAge).toHaveBeenCalledWith("10/25/1990");
});

Whenever the spyOn() function is chained with and.callThrough, the spy will still track all calls to it; in addition, however, it will delegate control back to the actual implementation. To see the effect, let's run the spec file validate_person_eligibility_spec.js with the Jasmine runner.
You will see that the spec is failing, as shown in the following screenshot: This time, while executing the spec file, you can notice that a log message (that is, getAge() function is called) is also printed in the browser console window. On the other hand, you can also define your own logic or set values in your test code as per specific requirements by chaining the spyOn() function with and.callFake. For example, consider the following code: it("Person's age should be greater than " +    "or equal to 18 years", function() { var testPersonCriteria = new Person(); spyOn(testPersonCriteria, "getAge").and.callFake(function()      {    return 18; }); testPersonCriteria.ValidateAge("10/25/1990"); expect(testPersonCriteria.getAge).toHaveBeenCalled(); expect(testPersonCriteria.getAge).toHaveBeenCalledWith("10/25/1990"); expect(testPersonCriteria.getAge()).toEqual(18); }); Whenever the spyOn() function is chained with and.callFake, all calls to the spy will be delegated to the supplied function. You can also notice that we added one more expectation to validate the person's age. To see execution results, run the spec file with the Jasmine runner. You will see that both the specs are passing: Implementing Jasmine specs using custom spy method In the previous section, we looked at how we can spy on a function. Now, we will understand the need of custom spy method and how Jasmine specs can be designed using it. There are several cases when one would need to replace the original method. For example, original function/method takes a long time to execute or it depends on the other method/object (or third-party system) that is/are not available in the test environment. In this situation, it is beneficial to replace the original method with a fake/custom spy method for testing purpose. Jasmine provides a method called jasmine.createSpy to create your own custom spy method. As we described in the previous section, there are few factors or biological rules that exist to donate or receive bloods. Let's consider few more biological rules as follows: Person with O+ blood group can receive blood from a person with O+ blood group Person with O+ blood group can give the blood to a person with A+ blood group First, let's update the JavaScript file validate_person_eligibility.js and add a new method ValidateBloodGroup to the Person object. Consider the following code: this.ValidateBloodGroup   = function(callback){ var _this = this; var matchBloodGroup; this.MatchBloodGroupToGiveReceive(function (personBloodGroup) {    _this.personBloodGroup = personBloodGroup;    matchBloodGroup = personBloodGroup;    callback.call(_this, _this.personBloodGroup); }); return matchBloodGroup; };   Person.prototype.MatchBloodGroupToGiveReceive = function(callback){ // Network actions are required to match the values corresponding // to match blood group. Network actions are asynchronous hence the // need for a callback. // But, for now, let's use hard coded values. var matchBloodGroup; if (this.donor_receiver == null || this.donor_receiver == undefined){    throw new ValidationError("Argument (donor_receiver) is missing "); }; if (this.myBloodGroup == "O+" && this.donor_receiver.toUpperCase() == "RECEIVER"){    matchBloodGroup = ["O+"]; }else if (this.myBloodGroup == "O+" && this.donor_receiver.toUpperCase() == "DONOR"){    matchBloodGroup = ["A+"]; }; callback.call(this, matchBloodGroup); }; In the preceding code snapshot, you can notice that the ValidateBloodGroup() function accepts an argument as the callback function. 
The ValidateBloodGroup() function returns matching/eligible blood group(s) for receiver/donor by calling the MatchBloodGroupToGiveReceive function. Let's create the Jasmine tests with custom spy method using the following code: describe("Person With O+ Blood Group: ", function(){ it("can receive the blood of the " +      "person with O+ blood group", function() {    var testPersonCriteria = new Person("John Player", "10/30/1980", "O+", "Receiver");    spyOn(testPersonCriteria, "MatchBloodGroupToGiveReceive").and.callThrough();    var callback = jasmine.createSpy();    testPersonCriteria.ValidateBloodGroup(callback);    //Verify, callback method is called or not    expect(callback).toHaveBeenCalled();    //Verify, MatchBloodGroupToGiveReceive is    // called and check whether control goes back    // to the function or not    expect(testPersonCriteria.MatchBloodGroupToGiveReceive).toHaveBeenCalled();    expect(testPersonCriteria.MatchBloodGroupToGiveReceive.calls.any()).toEqual(true);        expect(testPersonCriteria.MatchBloodGroupToGiveReceive.calls.count()).toEqual(1);    expect(testPersonCriteria.ValidateBloodGroup(callback)).toContain("O+"); }); it("can give the blood to the " +      "person with A+ blood group", function() {    var testPersonCriteria = new Person("John Player", "10/30/1980", "O+", "Donor");    spyOn(testPersonCriteria, "MatchBloodGroupToGiveReceive").and.callThrough();    var callback = jasmine.createSpy();    testPersonCriteria.ValidateBloodGroup(callback);    expect(callback).toHaveBeenCalled();    expect(testPersonCriteria.MatchBloodGroupToGiveReceive).toHaveBeenCalled();    expect(testPersonCriteria.ValidateBloodGroup(callback)).toContain("A+"); }); }); You can notice that in the preceding snapshot, first we mocked the function MatchBloodGroupToGiveReceive using spyOn() and chained it with and.callThrough() to hand over the control back to the function. Thereafter, we created callback as the custom spy method using jasmine.createSpy. Furthermore, we are tracking calls/arguments to the callback and MatchBloodGroupToGiveReceive functions using tracking properties (that is, .calls.any() and .calls.count()). Whenever we create a custom spy method using jasmine.createSpy, it creates a bare spy. It is a good mechanism to test the callbacks. You can also track calls and arguments corresponding to custom spy method. However, there is no implementation behind it. To execute the tests, run the spec file with the Jasmine runner. You will see that all the specs are passing: Implementing Jasmine specs using Data-Driven approach In Data-Driven approach, Jasmine specs get input or expected values from the external data files (JSON, CSV, TXT files, and so on), which are required to run/execute tests. In other words, we isolate test data and Jasmine specs so that one can prepare the test data (input/expected values) separately as per the need of specs. For example, in the previous section, we provided all the input values (that is, name of person, date of birth, blood group, donor or receiver) to the person's object in the test code itself. However, for better management, it's always good to maintain test data and code/specs separately. To implement Jasmine tests with the data-driven approach, let's create a data file fixture_input_data.json. 
For now, you can use the following data in JSON format: [ { "Name": "John Player", "DOB": "10/30/1980", "Blood_Group": "O+", "Donor_Receiver": "Receiver" }, { "Name": "John Player", "DOB": "10/30/1980", "Blood_Group": "O+", "Donor_Receiver": "Donor" } ] Next, we will see how to provide all the required input values in our tests through a data file using the jasmine-jquery plugin. Before we move to the next step and implement the Jasmine tests with the Data-Driven approach, let's note the following points regarding the jasmine-jquery plugin: It provides two extensions to write the tests with HTML and JSON fixture: An API for handling HTML and JSON fixtures in your specs A set of custom matchers for jQuery framework The loadJSONFixtures method loads fixture(s) from one or more JSON files and makes it available at runtime To know more about the jasmine-jquery plugin, you can visit the following website: https://github.com/velesin/jasmine-jquery Let's implement both the specs created in the previous section using the Data-Driven approach. Consider the following code: describe("Person With O+ Blood Group: ", function(){    var fixturefile, fixtures, myResult; beforeEach(function() {        //Start - Load JSON Files to provide input data for all the test scenarios        fixturefile = "fixture_input_data.json";        fixtures = loadJSONFixtures(fixturefile);        myResult = fixtures[fixturefile];            //End - Load JSON Files to provide input data for all the test scenarios });   it("can receive the blood of the " +      "person with O+ blood group", function() {    //Start - Provide input values from the data file    var testPersonCriteria = new Person(        myResult[0].Name,        myResult[0].DOB,        myResult[0].Blood_Group,        myResult[0].Donor_Receiver    );    //End - Provide input values from the data file    spyOn(testPersonCriteria, "MatchBloodGroupToGiveReceive").and.callThrough();    var callback = jasmine.createSpy();    testPersonCriteria.ValidateBloodGroup(callback);    //Verify, callback method is called or not    expect(callback).toHaveBeenCalled();    //Verify, MatchBloodGroupToGiveReceive is    // called and check whether control goes back    // to the function or not    expect(testPersonCriteria.MatchBloodGroupToGiveReceive).toHaveBeenCalled();    expect(testPersonCriteria.MatchBloodGroupToGiveReceive.calls.any()).toEqual(true);        expect(testPersonCriteria.MatchBloodGroupToGiveReceive.calls.count()).toEqual(1);    expect(testPersonCriteria.ValidateBloodGroup(callback)).toContain("O+"); }); it("can give the blood to the " +      "person with A+ blood group", function() {    //Start - Provide input values from the data file    var testPersonCriteria = new Person(        myResult[1].Name,        myResult[1].DOB,        myResult[1].Blood_Group,        myResult[1].Donor_Receiver    );    //End - Provide input values from the data file    spyOn(testPersonCriteria, "MatchBloodGroupToGiveReceive").and.callThrough();    var callback = jasmine.createSpy();    testPersonCriteria.ValidateBloodGroup(callback);    expect(callback).toHaveBeenCalled();    expect(testPersonCriteria.MatchBloodGroupToGiveReceive).toHaveBeenCalled();    expect(testPersonCriteria.ValidateBloodGroup(callback)).toContain("A+"); }); }); In the preceding code snapshot, you can notice that first we provided the input data from an external JSON file (that is, fixture_input_data.json) using the loadJSONFixtures function and made it available at runtime. 
Thereafter, we provided input values/data to both the specs as required; we set the values of name, date of birth, blood group, and donor/receiver for specs 1 and 2, respectively. Following the same methodology, we can also create a separate data file for the expected values that we need in our tests to compare with the actual values. If test data (input or expected values) is required during execution, it is advisable to provide it from an external file instead of using hardcoded values in your tests. Now, execute the test suite with the Jasmine runner and you will see that all the specs are passing:

Summary
In this article, we looked at the implementation of Jasmine tests using spies. We also demonstrated how to test a callback function using a custom spy method. Further, we saw the implementation of the Data-Driven approach, where you learned how to isolate test data from the code.

Resources for Article:
Further resources on this subject:
Web Application Testing [article]
Testing Backbone.js Application [article]
The architecture of JavaScriptMVC [article]

Arduino Development

Packt
22 Apr 2015
19 min read
Most systems using the Arduino have a similar architecture. They have a way of reading data from the environment—a sensor—they make decision using the code running inside the Arduino and then output those decisions to the environment using various actuators, such as a simple motor. Using three recipes from the book, Arduino Development Cookbook, by Cornel Amariei, we will build such a system, and quite a useful one—a fan controlled by the air temperature. Let's break the process into three key steps, the first and easiest will be to connect an LED to the Arduino, a few of them will act as a thermometer, displaying the room temperature. The second step will be to connect the sensor and program it, and the third will be to connect the motor. Here, we will learn this basic skills. (For more resources related to this topic, see here.) Connecting an external LED Luckily, the Arduino boards come with an internal LED connected to pin 13. It is simple to use and always there. But most times we want our own LEDs in different places of our system. It is possible that we connect something on top of the Arduino board and are unable to see the internal LED anymore. Here, we will explore how to connect an external LED. Getting ready For this step we need the following ingredients: An Arduino board connected to the computer via USB A breadboard and jumper wires A regular LED (the typical LED size is 3 mm) A resistor between 220–1,000 ohm How to do it… Follow these steps to connect an external LED to an Arduino board: Mount the resistor on the breadboard. Connect one end of the resistor to a digital pin on the Arduino board using a jumper wire. Mount the LED on the breadboard. Connect the anode (+) pin of the LED to the available pin on the resistor. We can determine the anode on the LED in two ways. Usually, the longer pin is the anode. Another way is to look for the flat edge on the outer casing of the LED. The pin next to the flat edge is the cathode (-). Connect the LED cathode (-) to the Arduino GND using jumper wires. Schematic This is one possible implementation on the second digital pin. Other digital pins can also be used. Here is a simple way of wiring the LED: Code The following code will make the external LED blink: // Declare the LED pin int LED = 2; void setup() { // Declare the pin for the LED as Output pinMode(LED, OUTPUT); } void loop(){ // Here we will turn the LED ON and wait 200 milliseconds digitalWrite(LED, HIGH); delay(200); // Here we will turn the LED OFF and wait 200 milliseconds digitalWrite(LED, LOW); delay(200); } If the LED is connected to a different pin, simply change the LED value to the value of the pin that has been used. How it works… This is all semiconductor magic. When the second digital pin is set to HIGH, the Arduino provides 5 V of electricity, which travels through the resistor to the LED and GND. When enough voltage and current is present, the LED will light up. The resistor limits the amount of current passing through the LED. Without it, it is possible that the LED (or worse, the Arduino pin) will burn. Try to avoid using LEDs without resistors; this can easily destroy the LED or even your Arduino. Code breakdown The code simply turns the LED on, waits, and then turns it off again. In this one we will use a blocking approach by using the delay() function. 
Here we declare the LED pin on digital pin 2: int LED = 2; In the setup() function we set the LED pin as an output: void setup() { pinMode(LED, OUTPUT); } In the loop() function, we continuously turn the LED on, wait 200 milliseconds, and then we turn it off. After turning it off we need to wait another 200 milliseconds, otherwise it will instantaneously turn on again and we will only see a permanently on LED. void loop(){ // Here we will turn the LED ON and wait 200 miliseconds digitalWrite(LED, HIGH); delay(200); // Here we will turn the LED OFF and wait 200 miliseconds digitalWrite(LED, LOW); delay(200); } There's more… There are a few more things we can do. For example, what if we want more LEDs? Do we really need to mount the resistor first and then the LED? LED resistor We do need the resistor connected to the LED; otherwise there is a chance that the LED or the Arduino pin will burn. However, we can also mount the LED first and then the resistor. This means we will connect the Arduino digital pin to the anode (+) and the resistor between the LED cathode (-) and GND. If we want a quick cheat, check the following See also section. Multiple LEDs Each LED will require its own resistor and digital pin. For example, we can mount one LED on pin 2 and one on pin 3 and individually control each. What if we want multiple LEDs on the same pin? Due to the low voltage of the Arduino, we cannot really mount more than three LEDs on a single pin. For this we require a small resistor, 220 ohm for example, and we need to mount the LEDs in series. This means that the cathode (-) of the first LED will be mounted to the anode (+) of the second LED, and the cathode (-) of the second LED will be connected to the GND. The resistor can be placed anywhere in the path from the digital pin to the GND. See also For more information on external LEDs, take a look at the following recipes and links: For more details about LEDs in general, visit http://electronicsclub.info/leds.htm To connect multiple LEDs to a single pin, read the instructable at http://www.instructables.com/id/How-to-make-a-string-of-LEDs-in-parallel-for-ardu/ Because we are always lazy and we don't want to compute the needed resistor values, use the calculator at http://www.evilmadscientist.com/2009/wallet-size-led-resistance-calculator/ Now that we know how to connect an LED, let's also learn how to work with a basic temperature sensor, and built the thermometer we need. Temperature sensor Almost all sensors use the same analog interface. Here, we explore a very useful and fun sensor that uses the same. Temperature sensors are useful for obtaining data from the environment. They come in a variety of shapes, sizes, and specifications. We can mount one at the end of a robotic hand and measure the temperature in dangerous liquids. Or we can just build a thermometer. Here, we will build a small thermometer using the classic LM35 and a bunch of LEDs. Getting ready The following are the ingredients required: A LM35 temperature sensor A bunch of LEDs, different colors for a better effect Some resistors between 220–1,000 ohm How to do it… The following are the steps to connect a button without a resistor: Connect the LEDs next to each other on the breadboard. Connect all LED negative terminals—the cathodes—together and then connect them to the Arduino GND. Connect a resistor to each positive terminal of the LED. Then, connect each of the remaining resistor terminals to a digital pin on the Arduino. Here, we used pins 2 to 6. 
Plug the LM35 in the breadboard and connect its ground to the GND line. The GND pin is the one on the right, when looking at the flat face. Connect the leftmost pin on the LM35 to 5V on the Arduino. Lastly, use a jumper wire to connect the center LM35 pin to an analog input on the Arduino. Here we used the A0 analog pin. Schematic This is one possible implementation using the pin A0 for analog input and pins 2 to 6 for the LEDs: Here is a possible breadboard implementation: Code The following code will read the temperature from the LM35 sensor, write it on the serial, and light up the LEDs to create a thermometer effect: // Declare the LEDs in an array int LED [5] = {2, 3, 4, 5, 6}; int sensorPin = A0; // Declare the used sensor pin void setup(){    // Start the Serial connection Serial.begin(9600); // Set all LEDs as OUTPUTS for (int i = 0; i < 5; i++){    pinMode(LED[i], OUTPUT); } }   void loop(){ // Read the value of the sensor int val = analogRead(sensorPin); Serial.println(val); // Print it to the Serial // On the LM35 each degree Celsius equals 10 mV // 20C is represented by 200 mV which means 0.2 V / 5 V * 1023 = 41 // Each degree is represented by an analogue value change of   approximately 2 // Set all LEDs off for (int i = 0; i < 5; i++){    digitalWrite(LED[i], LOW); } if (val > 40 && val < 45){ // 20 - 22 C    digitalWrite( LED[0], HIGH); } else if (val > 45 && val < 49){ // 22 - 24 C    digitalWrite( LED[0], HIGH);    digitalWrite( LED[1], HIGH); } else if (val > 49 && val < 53){ // 24 - 26 C    digitalWrite( LED[0], HIGH);    digitalWrite( LED[1], HIGH);    digitalWrite( LED[2], HIGH); } else if (val > 53 && val < 57){ // 26 - 28 C    digitalWrite( LED[0], HIGH);    digitalWrite( LED[1], HIGH);    digitalWrite( LED[2], HIGH);    digitalWrite( LED[3], HIGH); } else if (val > 57){ // Over 28 C    digitalWrite( LED[0], HIGH);    digitalWrite( LED[1], HIGH);    digitalWrite( LED[2], HIGH);    digitalWrite( LED[3], HIGH);    digitalWrite( LED[4], HIGH); } delay(100); // Small delay for the Serial to send } Blow into the temperature sensor to observe how the temperature goes up or down. How it works… The LM35 is a very simple and reliable sensor. It outputs an analog voltage on the center pin that is proportional to the temperature. More exactly, it outputs 10 mV for each degree Celsius. For a common value of 25 degrees, it will output 250 mV, or 0.25 V. We use the ADC inside the Arduino to read that voltage and light up LEDs accordingly. If it's hot, we light up more of them, if not, less. If the LEDs are in order, we will get a nice thermometer effect. Code breakdown First, we declare the used LED pins and the analog input to which we connected the sensor. We have five LEDs to declare so, rather than defining five variables, we can store all five pin numbers in an array with 5 elements: int LED [5] = {2, 3, 4, 5, 6}; int sensorPin = A0; We use the same array trick to simplify setting each pin as an output in the setup() function. Rather than using the pinMode() function five times, we have a for loop that will do it for us. It will iterate through each value in the LED[i] array and set each pin as output: void setup(){ Serial.begin(9600); for (int i = 0; i < 5; i++){    pinMode(LED[i], OUTPUT); } } In the loop() function, we continuously read the value of the sensor using the analogRead() function; then we print it on the serial: int val = analogRead(sensorPin); Serial.println(val); At last, we create our thermometer effect. For each degree Celsius, the LM35 returns 10 mV more. 
We can convert this to our analogRead() value in this way: 5V returns 1023, so a value of 0.20 V, corresponding to 20 degrees Celsius, will return 0.20 V/5 V * 1023, which will be equal to around 41. We have five different temperature areas; we'll use standard if and else casuals to determine which region we are in. Then we light the required LEDs. There's more… Almost all analog sensors use this method to return a value. They bring a proportional voltage to the value they read that we can read using the analogRead() function. Here are just a few of the sensor types we can use with this interface: Temperature Humidity Pressure Altitude Depth Liquid level Distance Radiation Interference Current Voltage Inductance Resistance Capacitance Acceleration Orientation Angular velocity Magnetism Compass Infrared Flexing Weight Force Alcohol Methane and other gases Light Sound Pulse Unique ID such as fingerprint Ghost! The last building block is the fan motor. Any DC fan motor will do for this, here we will learn how to connect and program it. Controlling motors with transistors We can control a motor by directly connecting it to the Arduino digital pin; however, any motor bigger than a coin would kill the digital pin and most probably burn Arduino. The solution is to use a simple amplification device, the transistor, to aid in controlling motors of any size. Here, we will explore how to control larger motors using both NPN and PNP transistors. Getting ready To execute this recipe, you will require the following ingredients: A DC motor A resistor between 220 ohm and 10K ohm A standard NPN transistor (BC547, 2N3904, N2222A, TIP120) A standard diode (1N4148, 1N4001, 1N4007) All these components can be found on websites such as Adafruit, Pololu, and Sparkfun, or in any general electronics store. How to do it… The following are the steps to connect a motor using a transistor: Connect the Arduino GND to the long strip on the breadboard. Connect one of the motor terminals to VIN or 5V on the Arduino. We use 5V if we power the board from the USB port. If we want higher voltages, we could use an external power source, such as a battery, and connect it to the power jack on Arduino. However, even the power jack has an input voltage range of 7 V–12 V. Don't exceed these limitations. Connect the other terminal of the motor to the collector pin on the NPN transistor. Check the datasheet to identify which terminal on the transistor is the collector. Connect the emitter pin of the NPN transistor to the GND using the long strip or a long connection. Mount a resistor between the base pin of the NPN transistor and one digital pin on the Arduino board. Mount a protection diode in parallel with the motor. The diode should point to 5V if the motor is powered by 5V, or should point to VIN if we use an external power supply. Schematic This is one possible implementation on the ninth digital pin. The Arduino has to be powered by an external supply. If not, we can connect the motor to 5V and it will be powered with 5 volts. Here is one way of hooking up the motor and the transistor on a breadboard: Code For the coding part, nothing changes if we compare it with a small motor directly mounted on the pin. 
The code will start the motor for 1 second and then stop it for another one: // Declare the pin for the motor int motorPin = 2; void setup() { // Define pin #2 as output pinMode(motorPin, OUTPUT); } void loop(){ // Turn motor on digitalWrite(motorPin, HIGH); // Wait 1000 ms delay(1000); // Turn motor off digitalWrite(motorPin, LOW); // Wait another 1000 ms delay(1000); } If the motor is connected to a different pin, simply change the motorPin value to the value of the pin that has been used. How it works… Transistors are very neat components that are unfortunately hard to understand. We should think of a transistor as an electric valve: the more current we put into the valve, the more water it will allow to flow. The same happens with a transistor; only here, current flows. If we apply a current on the base of the transistor, a proportional current will be allowed to pass from the collector to the emitter, in the case of an NPN transistor. The more current we put on the base, the more the flow of current will be between the other two terminals. When we set the digital pin at HIGH on the Arduino, current passes from the pin to the base of the NPN transistor, thus allowing current to pass through the other two terminals. When we set the pin at LOW, no current goes to the base and so, no current will pass through the other two terminals. Another analogy would be a digital switch that allows current to pass from the collector to the emitter only when we 'push' the base with current. Transistors are very useful because, with a very small current on the base, we can control a very large current from the collector to the emitter. A typical amplification factor called b for a transistor is 200. This means that, for a base current of 1 mA, the transistor will allow a maximum of 200 mA to pass from the collector to the emitter. An important component is the diode, which should never be omitted. A motor is also an inductor; whenever an inductor is cut from power it may generate large voltage spikes, which could easily destroy a transistor. The diode makes sure that all current coming out of the motor goes back to the power supply and not to the motor. There's more… Transistors are handy devices; here are a few more things that can be done with them. Pull-down resistor The base of a transistor is very sensitive. Even touching it with a finger might make the motor turn. A solution to avoid unwanted noise and starting the motor is to use a pull-down resistor on the base pin, as shown in the following figure. A value of around 10K is recommended, and it will safeguard the transistor from accidentally starting. PNP transistors A PNP transistor is even harder to understand. It uses the same principle, but in reverse. Current flows from the base to the digital pin on the Arduino; if we allow that current to flow, the transistor will allow current to pass from its emitter to its collector (yes, the opposite of what happens with an NPN transistor). Another important point is that the PNP is mounted between the power source and the load we want to power up. The load, in this case a motor, will be connected between the collector on the PNP and the ground. A key point to remember while using PNP transistors with Arduino is that the maximum voltage on the emitter is 5 V, so the motor will never receive more than 5 V. If we use an external power supply for the motor, the base will have a voltage higher than 5 V and will burn the Arduino. 
One possible solution, which is quite complicated, has been shown here: MOSFETs Let's face it; NPN and PNP transistors are old. There are better things these days that can provide much better performance. They are called Metal-oxide-semiconductor field-effect transistors. Normal people just call them MOSFETs and they work mostly the same. The three pins on a normal transistor are called collector, base, and emitter. On the MOSFET, they are called drain, gate, and source. Operation-wise, we can use them exactly the same way as with normal transistors. When voltage is applied at the gate, current will pass from the drain to the source in the case of an N-channel MOSFET. A P-channel is the equivalent of a PNP transistor. However, there are some important differences in the way a MOSFET works compared with a normal transistor. Not all MOSFETs can be properly powered on by the Arduino. Usually logic-level MOSFETs will work. Some of the famous N-channel MOSFETs are the FQP30N06, the IRF510, and the IRF520. The first one can handle up to 30 A and 60 V while the following two can handle 5.6 A and 10 A, respectively, at 100 V. Here is one implementation of the previous circuit, this time using an N-channel MOSFET: We can also use the following breadboard arrangement: Different loads A motor is not the only thing we can control with a transistor. Any kind of DC load can be controlled. An LED, a light or other tools, even another Arduino can be powered up by an Arduino and a PNP or NPN transistor. Arduinoception! See also For general and easy to use motors, Solarbotics is quite nice. Visit the site at https://solarbotics.com/catalog/motors-servos/. For higher-end motors that pack quite some power, Pololu has made a name for itself. Visit the site at https://www.pololu.com/category/51/pololu-metal-gearmotors. Putting it all together Now that we have the three key building blocks, we need to assemble them together. 
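One optional refinement before we assemble everything: if the transistor's base resistor is wired to a PWM-capable pin (such as pin 9, which the combined sketch below uses on an Uno), the same circuit can also vary the fan speed with analogWrite() instead of simply switching it on and off. The following is a rough, stand-alone sketch of that idea; it is not part of the original recipe, and very low duty cycles may not be enough to start a small motor:

// Hypothetical speed-control sketch, not part of the original recipe.
// Assumes the transistor base resistor is wired to PWM-capable pin 9.
int fanPin = 9;

void setup() {
  pinMode(fanPin, OUTPUT);
}

void loop() {
  // Ramp the fan from stopped to full speed in small steps
  for (int duty = 0; duty <= 255; duty += 5) {
    analogWrite(fanPin, duty); // 0 = off, 255 = fully on
    delay(100);
  }
  analogWrite(fanPin, 0);      // Stop the fan
  delay(2000);
}

For the project in this article, simple on/off switching is all we need, so the combined sketch that follows sticks to digitalWrite().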
For the code, we only need to briefly modify the temperature sensor code to also output to the motor:

// Declare the LEDs in an array
int LED[5] = {2, 3, 4, 5, 6};
int sensorPin = A0; // Declare the used sensor pin
int motorPin = 9;   // Declare the used motor pin

void setup(){
  // Start the Serial connection
  Serial.begin(9600);
  // Set all LEDs as OUTPUTS
  for (int i = 0; i < 5; i++){
    pinMode(LED[i], OUTPUT);
  }
  // Define motorPin as output
  pinMode(motorPin, OUTPUT);
}

void loop(){
  // Read the value of the sensor
  int val = analogRead(sensorPin);
  Serial.println(val); // Print it to the Serial
  // On the LM35 each degree Celsius equals 10 mV
  // 20C is represented by 200 mV which means 0.2 V / 5 V * 1023 = 41
  // Each degree is represented by an analogue value change of approximately 2
  // Set all LEDs off
  for (int i = 0; i < 5; i++){
    digitalWrite(LED[i], LOW);
  }
  if (val > 40 && val < 45){ // 20 - 22 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(motorPin, LOW); // Fan OFF
  } else if (val > 45 && val < 49){ // 22 - 24 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(motorPin, LOW); // Fan OFF
  } else if (val > 49 && val < 53){ // 24 - 26 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(LED[2], HIGH);
    digitalWrite(motorPin, LOW); // Fan OFF
  } else if (val > 53 && val < 57){ // 26 - 28 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(LED[2], HIGH);
    digitalWrite(LED[3], HIGH);
    digitalWrite(motorPin, LOW); // Fan OFF
  } else if (val > 57){ // Over 28 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(LED[2], HIGH);
    digitalWrite(LED[3], HIGH);
    digitalWrite(LED[4], HIGH);
    digitalWrite(motorPin, HIGH); // Fan ON
  }
  delay(100); // Small delay for the Serial to send
}

Summary
In this article, we learned three basic skills: how to connect an LED to the Arduino, how to connect a sensor, and how to connect a motor.

Resources for Article:
Further resources on this subject:
Internet of Things with Xively [article]
Avoiding Obstacles Using Sensors [article]
Hardware configuration [article]

Constructing Common UI Widgets

Packt
22 Apr 2015
21 min read
One of the biggest features that draws developers to Ext JS is the vast array of UI widgets available out of the box. The ease with which they can be integrated with each other and the attractive and consistent visuals each of them offers is also a big attraction. No other framework can compete on this front, and this is a huge reason Ext JS leads the field of large-scale web applications. In this article by Stuart Ashworth and Andrew Duncan by authors of the book, Ext JS Essentials, we will look at how UI widgets fit into the framework's structure, how they interact with each other, and how we can retrieve and reference them. We will then delve under the surface and investigate the lifecycle of a component and the stages it will go through during the lifetime of an application. (For more resources related to this topic, see here.) Anatomy of a UI widget Every UI element in Ext JS extends from the base component class Ext.Component. This class is responsible for rendering UI elements to the HTML document. They are generally sized and positioned by layouts used by their parent components and participate in the automatic component lifecycle process. You can imagine an instance of Ext.Component as a single section of the user interface in a similar way that you might think of a DOM element when building traditional web interfaces. Each subclass of Ext.Component builds upon this simple fact and is responsible for generating more complex HTML structures or combining multiple Ext.Components to create a more complex interface. Ext.Component classes, however, can't contain other Ext.Components. To combine components, one must use the Ext.container.Container class, which itself extends from Ext.Component. This class allows multiple components to be rendered inside it and have their size and positioning managed by the framework's layout classes. Components and HTML Creating and manipulating UIs using components requires a slightly different way of thinking than you may be used to when creating interactive websites with libraries such as jQuery. The Ext.Component class provides a layer of abstraction from the underlying HTML and allows us to encapsulate additional logic to build and manipulate this HTML. This concept is different from the way other libraries allow you to manipulate UI elements and provides a hurdle for new developers to get over. The Ext.Component class generates HTML for us, which we rarely need to interact with directly; instead, we manipulate the configuration and properties of the component. The following code and screenshot show the HTML generated by a simple Ext.Component instance: var simpleComponent = Ext.create('Ext.Component', { html   : 'Ext JS Essentials!', renderTo: Ext.getBody() }); As you can see, a simple <DIV> tag is created, which is given some CSS classes and an autogenerated ID, and has the HTML config displayed inside it. This generated HTML is created and managed by the Ext.dom.Element class, which wraps a DOM element and its children, offering us numerous helper methods to interrogate and manipulate it. After it is rendered, each Ext.Component instance has the element instance stored in its el property. You can then use this property to manipulate the underlying HTML that represents the component. As mentioned earlier, the el property won't be populated until the component has been rendered to the DOM. You should put logic dependent on altering the raw HTML of the component in an afterrender event listener or override the afterRender method. 
The following example shows how you can manipulate the underlying HTML once the component has been rendered. It will set the background color of the element to red: Ext.create('Ext.Component', { html     : 'Ext JS Essentials!', renderTo : Ext.getBody(), listeners: {    afterrender: function(comp) {      comp.el.setStyle('background-color', 'red');    } } }); It is important to understand that digging into and updating the HTML and CSS that Ext JS creates for you is a dangerous game to play and can result in unexpected results when the framework tries to update things itself. There is usually a framework way to achieve the manipulations you want to include, which we recommend you use first. We always advise new developers to try not to fight the framework too much when starting out. Instead, we encourage them to follow its conventions and patterns, rather than having to wrestle it to do things in the way they may have previously done when developing traditional websites and web apps. The component lifecycle When a component is created, it follows a lifecycle process that is important to understand, so as to have an awareness of the order in which things happen. By understanding this sequence of events, you will have a much better idea of where your logic will fit and ensure you have control over your components at the right points. The creation lifecycle The following process is followed when a new component is instantiated and rendered to the document by adding it to an existing container. When a component is shown explicitly (for example, without adding to a parent, such as a floating component) some additional steps are included. These have been denoted with a * in the following process. constructor First, the class' constructor function is executed, which triggers all of the other steps in turn. By overriding this function, we can add any setup code required for the component. Config options processed The next thing to be handled is the config options that are present in the class. This involves each option's apply and update methods being called, if they exist, meaning the values are available via the getter from now onwards. initComponent The initComponent method is now called and is generally used to apply configurations to the class and perform any initialization logic. render Once added to a container, or when the show method is called, the component is rendered to the document. boxready At this stage, the component is rendered and has been laid out by its parent's layout class, and is ready at its initial size. This event will only happen once on the component's first layout. activate (*) If the component is a floating item, then the activate event will fire, showing that the component is the active one on the screen. This will also fire when the component is brought back to focus, for example, in a Tab panel when a tab is selected. show (*) Similar to the previous step, the show event will fire when the component is finally visible on screen. The destruction process When we are removing a component from the Viewport and want to destroy it, it will follow a destruction sequence that we can use to ensure things are cleaned up sufficiently, so as to avoid memory leaks and so on. The framework takes care of the majority of this cleanup for us, but it is important that we tidy up any additional things we instantiate. hide (*) When a component is manually hidden (using the hide method), this event will fire and any additional hide logic can be included here. 
deactivate (*)
Similar to the activate step, this is fired when the component becomes inactive. As with the activate step, this will happen when floating and nested components are hidden and are no longer the items under focus.

destroy
This is the final step in the teardown process and is executed when the component and its internal properties and objects are cleaned up. At this stage, it is best to remove event handlers, destroy subclasses, and ensure any other references are released.

Component Queries
Ext JS boasts a powerful system to retrieve references to components, called Component Queries. This is a CSS/XPath-style query syntax that lets us target broad sets of components, or very specific ones, within our application. For example, within our controller, we may want to find a button with the text "Save" within a component of type MyForm. In this section, we will demonstrate the Component Query syntax and how it can be used to select components. We will also go into detail about how it can be used within Ext.container.Container classes to scope selections.

xtypes
Before we dive in, it is important to understand the concept of xtypes in Ext JS. An xtype is a shorthand name for an Ext.Component that allows us to identify its declarative component configuration objects. For example, we can create a new Ext.Component as a child of an Ext.container.Container using an xtype with the following code:

Ext.create('Ext.Container', {
  items: [
    {
      xtype: 'component',
      html : 'My Component!'
    }
  ]
});

Using xtypes allows you to lazily instantiate components when required, rather than having them all created upfront. Common component xtypes include:

Classes                    xtypes
Ext.tab.Panel              tabpanel
Ext.container.Container    container
Ext.grid.Panel             gridpanel
Ext.Button                 button

xtypes form the basis of our Component Query syntax in the same way that element types (for example, div, p, span, and so on) do for CSS selectors. We will use these heavily in the following examples.
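To see how a class ends up with its own xtype, here is a small, hypothetical sketch (the class name and xtype are invented for illustration) showing the alias config registering an xtype, which a container can then instantiate lazily:

Ext.define('MyApp.view.InfoPanel', {
    extend: 'Ext.Component',
    // Registers the 'infopanel' xtype for this class
    alias : 'widget.infopanel',
    html  : 'Some informational content'
});

Ext.create('Ext.container.Container', {
    renderTo: Ext.getBody(),
    items   : [
        {
            // Created lazily from its xtype when the container is rendered
            xtype: 'infopanel'
        }
    ]
});

The same alias mechanism is what gives the custom tree class later in this article its own xtype.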
Sample component structure We will use the following sample component structure—a panel with a child tab panel, form, and buttons—to perform our example queries on: var panel = Ext.create('Ext.panel.Panel', { height : 500, width : 500, renderTo: Ext.getBody(), layout: {    type : 'vbox',    align: 'stretch' }, items : [    {      xtype : 'tabpanel',      itemId: 'mainTabPanel',      flex : 1,      items : [        {          xtype : 'panel',          title : 'Users',          itemId: 'usersPanel',          layout: {            type : 'vbox',            align: 'stretch'            },            tbar : [              {                xtype : 'button',                text : 'Edit',                itemId: 'editButton'                }              ],              items : [                {                  xtype : 'form',                  border : 0,                  items : [                  {                      xtype : 'textfield',                      fieldLabel: 'Name',                      allowBlank: false                    },                    {                      xtype : 'textfield',                      fieldLabel: 'Email',                      allowBlank: false                    }                  ],                  buttons: [                    {                      xtype : 'button',                      text : 'Save',                      action: 'saveUser'                    }                  ]                },                {                  xtype : 'grid',                  flex : 1,                  border : 0,                  columns: [                    {                     header : 'Name',                      dataIndex: 'Name',                      flex : 1                    },                    {                      header : 'Email',                      dataIndex: 'Email'                    }                   ],                  store : Ext.create('Ext.data.Store', {                    fields: [                      'Name',                      'Email'                    ],                    data : [                      {                        Name : 'Joe Bloggs',                        Email: 'joe@example.com'                      },                      {                        Name : 'Jane Doe',                        Email: 'jane@example.com'                      }                    ]                  })                }              ]            }          ]        },        {          xtype : 'component',          itemId : 'footerComponent',          html : 'Footer Information',          extraOptions: {            option1: 'test',            option2: 'test'          },          height : 40        }      ]    }); Queries with Ext.ComponentQuery The Ext.ComponentQuery class is used to perform Component Queries, with the query method primarily used. This method accepts two parameters: a query string and an optional Ext.container.Container instance to use as the root of the selection (that is, only components below this one in the hierarchy will be returned). The method will return an array of components or an empty array if none are found. We will work through a number of scenarios and use Component Queries to find a specific set of components. Finding components based on xtype As we have seen, we use xtypes like element types in CSS selectors. 
We can select all the Ext.panel.Panel instances using its xtype, panel:

var panels = Ext.ComponentQuery.query('panel');

We can also add the concept of hierarchy by including a second xtype separated by a space. The following code will select all Ext.Button instances that are descendants (at any level) of an Ext.panel.Panel class:

var buttons = Ext.ComponentQuery.query('panel button');

We could also use the > character to limit it to buttons that are direct descendants of a panel:

var directDescendantButtons = Ext.ComponentQuery.query('panel > button');

Finding components based on attributes
It is simple to select a component based on the value of a property. We use the XPath syntax to specify the attribute and the value. The following code will select buttons with an action attribute of saveUser:

var saveButtons = Ext.ComponentQuery.query('button[action="saveUser"]');

Finding components based on itemIds
ItemIds are commonly used to retrieve components, and they are specially optimized for performance within the ComponentQuery class. They should be unique only within their parent container, not globally unique like the id config. To select a component based on itemId, we prefix the itemId with a # symbol:

var usersPanel = Ext.ComponentQuery.query('#usersPanel');

Finding components based on member functions
It is also possible to identify matching components based on the result of a function of that component. For example, we can select all text fields whose values are valid (that is, when a call to the isValid method returns true):

var validFields = Ext.ComponentQuery.query('form > textfield{isValid()}');

Scoped Component Queries
All of our previous examples will search the entire component tree to find matches, but often we may want to keep our searches local to a specific container and its descendants. This can help reduce the complexity of the query and improve performance, as fewer components have to be processed. Ext.Containers have three handy methods to do this: up, down, and query. We will take each of these in turn and explain their features.

up
This method accepts a selector and will traverse up the hierarchy to find a single matching parent component. This can be useful to find the grid panel that a button belongs to, so an action can be taken on it:

var grid = button.up('gridpanel');

down
This returns the first descendant component that matches the given selector:

var firstButton = grid.down('button');

query
The query method performs much like Ext.ComponentQuery.query but is automatically scoped to the current container. This means that it will search all descendant components of the current container and return all matching ones as an array:

var allButtons = grid.query('button');

Hierarchical data with trees
Now that we know and understand components, their lifecycle, and how to retrieve references to them, we will move on to more specific UI widgets. The tree panel component allows us to display hierarchical data in a way that reflects the data's structure and relationships. In our application, we are going to use a tree panel to represent our navigation structure, allowing users to see how the different areas of the app are linked and structured.

Binding to a data source
Like all other data-bound components, tree panels must be bound to a data store. In this particular case it must be an Ext.data.TreeStore instance or subclass, as it takes advantage of the extra features added to this specialist store class.
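The Navigation store itself is defined elsewhere in the application, so it is not listed in this article. Purely as an illustration, and with invented fields and data rather than the book's actual definition, a TreeStore of this kind could look roughly like the following:

Ext.define('BizDash.store.Navigation', {
    extend : 'Ext.data.TreeStore',
    storeId: 'Navigation',
    // Hypothetical field list; the real store defines its own model/fields
    fields : ['Label'],
    root   : {
        expanded: true,
        children: [
            { Label: 'Dashboard', leaf: true },
            {
                Label   : 'Administration',
                expanded: true,
                children: [
                    { Label: 'Users', leaf: true }
                ]
            }
        ]
    }
});

The important points for this article are the storeId, which the tree panel uses to find the store, and the Label field, which the tree column displays.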
We will make use of the BizDash.store.Navigation TreeStore to bind to our tree panel. Defining a tree panel The tree panel is defined in the Ext.tree.Panel class (which has an xtype of treepanel), which we will extend to create a custom class called BizDash.view.navigation.NavigationTree: Ext.define('BizDash.view.navigation.NavigationTree', { extend: 'Ext.tree.Panel', alias: 'widget.navigation-NavigationTree', store : 'Navigation', columns: [    {      xtype : 'treecolumn',      text : 'Navigation',      dataIndex: 'Label',      flex : 1    } ], rootVisible: false, useArrows : true }); We configure the tree to be bound to our TreeStore by using its storeId, in this case, Navigation. A tree panel is a subclass of the Ext.panel.Table class (similar to the Ext.grid.Panel class), which means it must have a columns configuration present. This tells the component what values to display as part of the tree. In a simple, traditional tree, we might only have one column showing the item and its children; however, we can define multiple columns and display additional fields in each row. This would be useful if we were displaying, for example, files and folders and wanted to have additional columns to display the file type and file size of each item. In our example, we are only going to have one column, displaying the Label field. We do this by using the treecolumn xtype, which is responsible for rendering the tree's navigation elements. Without defining treecolumn, the component won't display correctly. The treecolumn xtype's configuration allows us to define which of the attached data model's fields to use (dataIndex), the column's header text (text), and the fact that the column should fill the horizontal space. Additionally, we set the rootVisible to false, so the data's root is hidden, as it has no real meaning other than holding the rest of the data together. Finally, we set useArrows to true, so the items with children use an arrow instead of the +/- icon. Summary In this article, we have learnt how Ext JS' components fit together and the lifecycle that they follow when created and destroyed. We covered the component lifecycle and Component Queries. Resources for Article: Further resources on this subject: So, what is Ext JS? [article] Function passing [article] Static Data Management [article]

article-image-third-party-libraries
Packt
21 Apr 2015
21 min read
Save for later

Third Party Libraries

Packt
21 Apr 2015
21 min read
In this article by Nathan Rozentals, author of the book Mastering TypeScript, the author believes that our TypeScript development environment would not amount to much if we were not able to reuse the myriad of existing JavaScript libraries, frameworks and general goodness. However, in order to use a particular third party library with TypeScript, we will first need a matching definition file. Soon after TypeScript was released, Boris Yankov set up a github repository to house TypeScript definition files for third party JavaScript libraries. This repository, named DefinitelyTyped (https://github.com/borisyankov/DefinitelyTyped) quickly became very popular, and is currently the place to go for high-quality definition files. DefinitelyTyped currently has over 700 definition files, built up over time from hundreds of contributors from all over the world. If we were to measure the success of TypeScript within the JavaScript community, then the DefinitelyTyped repository would be a good indication of how well TypeScript has been adopted. Before you go ahead and try to write your own definition files, check the DefinitelyTyped repository to see if there is one already available. In this article, we will have a closer look at using these definition files, and cover the following topics: Choosing a JavaScript Framework Using TypeScript with Backbone Using TypeScript with Angular (For more resources related to this topic, see here.) Using third party libraries In this section of the article, we will begin to explore some of the more popular third party JavaScript libraries, their declaration files, and how to write compatible TypeScript for each of these frameworks. We will compare Backbone, and Angular which are all frameworks for building rich client-side JavaScript applications. During our discussion, we will see that some frameworks are highly compliant with the TypeScript language and its features, some are partially compliant, and some have very low compliance. Choosing a JavaScript framework Choosing a JavaScript framework or library to develop Single Page Applications is a difficult and sometimes daunting task. It seems that there is a new framework appearing every other month, promising more and more functionality for less and less code. To help developers compare these frameworks, and make an informed choice, Addy Osmani wrote an excellent article, named Journey Through the JavaScript MVC Jungle. (http://www.smashingmagazine.com/2012/07/27/journey-through-the-javascript-mvc-jungle/). In essence, his advice is simple – it's a personal choice – so try some frameworks out, and see what best fits your needs, your programming mindset, and your existing skill set. The TodoMVC project (http://todomvc.com), which Addy started, does an excellent job of implementing the same application in a number of MV* JavaScript frameworks. This really is a reference site for digging into a fully working application, and comparing for yourself the coding techniques and styles of different frameworks. Again, depending on the JavaScript library that you are using within TypeScript, you may need to write your TypeScript code in a specific way. Bear this in mind when choosing a framework - if it is difficult to use with TypeScript, then you may be better off looking at another framework with better integration. If it is easy and natural to work with the framework in TypeScript, then your productivity and overall development experience will be much better. 
We will look at some of the popular JavaScript libraries, along with their declaration files, and see how to write compatible TypeScript. The key thing to remember is that TypeScript generates JavaScript - so if you are battling to use a third party library, then crack open the generated JavaScript and see what the JavaScript code looks like that TypeScript is emitting. If the generated JavaScript matches the JavaScript code samples in the library's documentation, then you are on the right track. If not, then you may need to modify your TypeScript until the compiled JavaScript starts matching up with the samples. When trying to write TypeScript code for a third party JavaScript framework – particularly if you are working off the JavaScript documentation – your initial foray may just be one of trial and error. Along the way, you may find that you need to write your TypeScript in a specific way in order to match this particular third party library. The rest of this article shows how three different libraries require different ways of writing TypeScript. Backbone Backbone is a popular JavaScript library that gives structure to web applications by providing models, collections and views, amongst other things. Backbone has been around since 2010, and has gained a very large following, with a wealth of commercial websites using the framework. According to Infoworld.com, Backbone has over 1,600 Backbone related projects on GitHub that rate over 3 stars - meaning that it has a vast ecosystem of extensions and related libraries. Let's take a quick look at Backbone written in TypeScript. To follow along with the code in your own project, you will need to install the following NuGet packages: backbone.js ( currently at v1.1.2), and backbone.TypeScript.DefinitelyTyped (currently at version 1.2.3). Using inheritance with Backbone From the Backbone documentation, we find an example of creating a Backbone.Model in JavaScript as follows: var Note = Backbone.Model.extend(    {        initialize: function() {            alert("Note Model JavaScript initialize");        },        author: function () { },        coordinates: function () { },        allowedToEdit: function(account) {            return true;        }    } ); This code shows a typical usage of Backbone in JavaScript. We start by creating a variable named Note that extends (or derives from) Backbone.Model. This can be seen with the Backbone.Model.extend syntax. The Backbone extend function uses JavaScript object notation to define an object within the outer curly braces { … }. In the preceding code, this object has four functions: initialize, author, coordinates and allowedToEdit. According to the Backbone documentation, the initialize function will be called once a new instance of this class is created. The initialize function simply creates an alert to indicate that the function was called. The author and coordinates functions are blank at this stage, with only the allowedToEdit function actually doing something: return true. If we were to simply copy and paste the above JavaScript into a TypeScript file, we would generate the following compile error: Build: 'Backbone.Model.extend' is inaccessible. When working with a third party library, and a definition file from DefinitelyTyped, our first port of call should be to see if the definition file may be in error. After all, the JavaScript documentation says that we should be able to use the extend method as shown, so why is this definition file causing an error? 
If we open up the backbone.d.ts file, and then search to find the definition of the class Model, we will find the cause of the compilation error: class Model extends ModelBase {      /**    * Do not use, prefer TypeScript's extend functionality.    **/    private static extend(        properties: any, classProperties?: any): any; This declaration file snippet shows some of the definition of the Backbone Model class. Here, we can see that the extend function is defined as private static, and as such, it will not be available outside the Model class itself. This, however, seems contradictory to the JavaScript sample that we saw in the documentation. In the preceding comment on the extend function definition, we find the key to using Backbone in TypeScript: prefer TypeScript's extend functionality. This comment indicates that the declaration file for Backbone is built around TypeScript's extends keyword – thereby allowing us to use natural TypeScript inheritance syntax to create Backbone objects. The TypeScript equivalent to this code, therefore, must use the extends TypeScript keyword to derive a class from the base class Backbone.Model, as follows: class Note extends Backbone.Model {    initialize() {      alert("Note model Typescript initialize");    }    author() { }    coordinates() { }    allowedToEdit(account) {        return true;    } } We are now creating a class definition named Note that extends the Backbone.Model base class. This class then has the functions initialize, author, coordinates and allowedToEdit, similar to the previous JavaScript version. Our Backbone sample will now compile and run correctly. With either of these versions, we can create an instance of the Note object by including the following script within an HTML page: <script type="text/javascript">    $(document).ready( function () {        var note = new Note();    }); </script> This JavaScript sample simply waits for the jQuery document.ready event to be fired, and then creates an instance of the Note class. As documented earlier, the initialize function will be called when an instance of the class is constructed, so we would see an alert box appear when we run this in a browser. All of Backbone's core objects are designed with inheritance in mind. This means that creating new Backbone collections, views and routers will use the same extends syntax in TypeScript. Backbone, therefore, is a very good fit for TypeScript, because we can use natural TypeScript syntax for inheritance to create new Backbone objects. Using interfaces As Backbone allows us to use TypeScript inheritance to create objects, we can just as easily use TypeScript interfaces with any of our Backbone objects as well. Extracting an interface for the Note class above would be as follows: interface INoteInterface {    initialize();    author();    coordinates();    allowedToEdit(account: string); } We can now update our Note class definition to implement this interface as follows: class Note extends Backbone.Model implements INoteInterface {    // existing code } Our class definition now implements the INoteInterface TypeScript interface. This simple change protects our code from being modified inadvertently, and also opens up the ability to work with core Backbone objects in standard object-oriented design patterns. We could, if we needed to, apply the Factory Pattern to return a particular type of Backbone Model – or any other Backbone object for that matter. 
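To make that last point slightly more concrete, the following is a minimal, hypothetical sketch of such a factory (the ReadOnlyNote subclass and the NoteFactory class are invented for this illustration and are not part of Backbone or of the book's sample code):

class ReadOnlyNote extends Note {
    // Overrides the base behaviour: this kind of note can never be edited.
    allowedToEdit(account: string) {
        return false;
    }
}

class NoteFactory {
    // Returns the interface type, so callers do not care which concrete
    // Backbone model they are given.
    static create(readOnly: boolean): INoteInterface {
        return readOnly ? new ReadOnlyNote() : new Note();
    }
}

var editableNote = NoteFactory.create(false);
var lockedNote = NoteFactory.create(true);

Because both concrete classes satisfy INoteInterface, the calling code can be written purely against the interface, which is exactly the kind of object-oriented design that extracting the interface makes possible.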
Using generic syntax The declaration file for Backbone has also added generic syntax to some class definitions. This brings with it further strong typing benefits when writing TypeScript code for Backbone. Backbone collections (surprise, surprise) house a collection of Backbone models, allowing us to define collections in TypeScript as follows: class NoteCollection extends Backbone.Collection<Note> {    model = Note;    //model: Note; // generates compile error    //model: { new (): Note }; // ok } Here, we have a NoteCollection that derives from, or extends a Backbone.Collection, but also uses generic syntax to constrain the collection to handle only objects of type Note. This means that any of the standard collection functions such as at() or pluck() will be strongly typed to return Note models, further enhancing our type safety and Intellisense. Note the syntax used to assign a type to the internal model property of the collection class on the second line. We cannot use the standard TypeScript syntax model: Note, as this causes a compile time error. We need to assign the model property to a the class definition, as seen with the model=Note syntax, or we can use the { new(): Note } syntax as seen on the last line. Using ECMAScript 5 Backbone also allows us to use ECMAScript 5 capabilities to define getters and setters for Backbone.Model classes, as follows: interface ISimpleModel {    Name: string;    Id: number; } class SimpleModel extends Backbone.Model implements ISimpleModel {    get Name() {        return this.get('Name');    }    set Name(value: string) {        this.set('Name', value);    }    get Id() {        return this.get('Id');    }    set Id(value: number) {        this.set('Id', value);    } } In this snippet, we have defined an interface with two properties, named ISimpleModel. We then define a SimpleModel class that derives from Backbone.Model, and also implements the ISimpleModel interface. We then have ES 5 getters and setters for our Name and Id properties. Backbone uses class attributes to store model values, so our getters and setters simply call the underlying get and set methods of Backbone.Model. Backbone TypeScript compatibility Backbone allows us to use all of TypeScript's language features within our code. We can use classes, interfaces, inheritance, generics and even ECMAScript 5 properties. All of our classes also derive from base Backbone objects. This makes Backbone a highly compatible library for building web applications with TypeScript. Angular AngularJs (or just Angular) is also a very popular JavaScript framework, and is maintained by Google. Angular takes a completely different approach to building JavaScript SPA's, introducing an HTML syntax that the running Angular application understands. This provides the application with two-way data binding capabilities, which automatically synchronizes models, views and the HTML page. Angular also provides a mechanism for Dependency Injection (DI), and uses services to provide data to your views and models. 
The example provided in the tutorial shows the following JavaScript: var phonecatApp = angular.module('phonecatApp', []); phonecatApp.controller('PhoneListCtrl', function ($scope) { $scope.phones = [    {'name': 'Nexus S',      'snippet': 'Fast just got faster with Nexus S.'},    {'name': 'Motorola XOOM™ with Wi-Fi',      'snippet': 'The Next, Next Generation tablet.'},    {'name': 'MOTOROLA XOOM™',      'snippet': 'The Next, Next Generation tablet.'} ]; }); This code snippet is typical of Angular JavaScript syntax. We start by creating a variable named phonecatApp, and register this as an Angular module by calling the module function on the angular global instance. The first argument to the module function is a global name for the Angular module, and the empty array is a place-holder for other modules that will be injected via Angular's Dependency Injection routines. We then call the controller function on the newly created phonecatApp variable with two arguments. The first argument is the global name of the controller, and the second argument is a function that accepts a specially named Angular variable named $scope. Within this function, the code sets the phones object of the $scope variable to be an array of JSON objects, each with a name and snippet property. If we continue reading through the tutorial, we find a unit test that shows how the PhoneListCtrl controller is used: describe('PhoneListCtrl', function(){    it('should create "phones" model with 3 phones', function() {      var scope = {},          ctrl = new PhoneListCtrl(scope);        expect(scope.phones.length).toBe(3); });   }); The first two lines of this code snippet use a global function called describe, and within this function another function called it. These two functions are part of a unit testing framework named Jasmine. We declare a variable named scope to be an empty JavaScript object, and then a variable named ctrl that uses the new keyword to create an instance of our PhoneListCtrl class. The new PhoneListCtrl(scope) syntax shows that Angular is using the definition of the controller just like we would use a normal class in TypeScript. Building the same object in TypeScript would allow us to use TypeScript classes, as follows: var phonecatApp = angular.module('phonecatApp', []);   class PhoneListCtrl {    constructor($scope) {        $scope.phones = [            { 'name': 'Nexus S',              'snippet': 'Fast just got faster' },            { 'name': 'Motorola',              'snippet': 'Next generation tablet' },            { 'name': 'Motorola Xoom',              'snippet': 'Next, next generation tablet' }        ];    } }; Our first line is the same as in our previous JavaScript sample. We then, however, use the TypeScript class syntax to create a class named PhoneListCtrl. By creating a TypeScript class, we can now use this class as shown in our Jasmine test code: ctrl = new PhoneListCtrl(scope). 
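One step that is easy to miss (it is not shown in the tutorial snippet above, so treat the following as a conventional Angular 1.x sketch rather than as the book's code) is that the TypeScript class still needs to be registered with the Angular module under the same global name. This is done by passing the class itself—which compiles down to a plain constructor function—to the controller function:

// Register the TypeScript class with the Angular module.
phonecatApp.controller('PhoneListCtrl', PhoneListCtrl);

// Optional: name the dependencies explicitly so that Angular's
// dependency injection still works after minification.
PhoneListCtrl['$inject'] = ['$scope'];

With the registration in place, Angular constructs the class for us at runtime, passing in $scope just as it did with the original anonymous function.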
The constructor function of our PhoneListCtrl class now acts as the anonymous function seen in the original JavaScript sample: phonecatApp.controller('PhoneListCtrl', function ($scope) {    // this function is replaced by the constructor } Angular classes and $scope Let's expand our PhoneListCtrl class a little further, and have a look at what it would look like when completed: class PhoneListCtrl {    myScope: IScope;    constructor($scope, $http: ng.IHttpService, Phone) {        this.myScope = $scope;        this.myScope.phones = Phone.query();        $scope.orderProp = 'age';          _.bindAll(this, 'GetPhonesSuccess');    }    GetPhonesSuccess(data: any) {       this.myScope.phones = data;    } }; The first thing to note in this class, is that we are defining a variable named myScope, and storing the $scope argument that is passed in via the constructor, into this internal variable. This is again because of JavaScript's lexical scoping rules. Note the call to _.bindAll at the end of the constructor. This Underscore utility function will ensure that whenever the GetPhonesSuccess function is called, it will use the variable this in the context of the class instance, and not in the context of the calling code. The GetPhonesSuccess function uses the this.myScope variable within its implementation. This is why we needed to store the initial $scope argument in an internal variable. Another thing we notice from this code, is that the myScope variable is typed to an interface named IScope, which will need to be defined as follows: interface IScope {    phones: IPhone[]; } interface IPhone {    age: number;    id: string;    imageUrl: string;    name: string;    snippet: string; }; This IScope interface just contains an array of objects of type IPhone (pardon the unfortunate name of this interface – it can hold Android phones as well). What this means is that we don't have a standard interface or TypeScript type to use when dealing with $scope objects. By its nature, the $scope argument will change its type depending on when and where the Angular runtime calls it, hence our need to define an IScope interface, and strongly type the myScope variable to this interface. Another interesting thing to note on the constructor function of the PhoneListCtrl class is the type of the $http argument. It is set to be of type ng.IHttpService. This IHttpService interface is found in the declaration file for Angular. In order to use TypeScript with Angular variables such as $scope or $http, we need to find the matching interface within our declaration file, before we can use any of the Angular functions available on these variables. The last point to note in this constructor code is the final argument, named Phone. It does not have a TypeScript type assigned to it, and so automatically becomes of type any. Let's take a quick look at the implementation of this Phone service, which is as follows: var phonecatServices =     angular.module('phonecatServices', ['ngResource']);   phonecatServices.factory('Phone',    [        '$resource', ($resource) => {            return $resource('phones/:phoneId.json', {}, {                query: {                    method: 'GET',                    params: {                        phoneId: 'phones'                    },                    isArray: true                }            });        }    ] ); The first line of this code snippet again creates a global variable named phonecatServices, using the angular.module global function. 
We then call the factory function available on the phonecatServices variable, in order to define our Phone resource. This factory function uses a string named 'Phone' to define the Phone resource, and then uses Angular's dependency injection syntax to inject a $resource object. Looking through this code, we can see that we cannot easily create standard TypeScript classes for Angular to use here. Nor can we use standard TypeScript interfaces or inheritance on this Angular service. Angular TypeScript compatibility When writing Angular code with TypeScript, we are able to use classes in certain instances, but must rely on the underlying Angular functions such as module and factory to define our objects in other cases. Also, when using standard Angular services, such as $http or $resource, we will need to specify the matching declaration file interface in order to use these services. We can therefore describe the Angular library as having medium compatibility with TypeScript. Inheritance – Angular versus Backbone Inheritance is a very powerful feature of object-oriented programming, and is also a fundamental concept when using JavaScript frameworks. Using a Backbone controller or an Angular controller within each framework relies on certain characteristics, or functions being available. Each framework implements inheritance in a different way. As JavaScript does not have the concept of inheritance, each framework needs to find a way to implement it, so that the framework can allow us to extend base classes and their functionality. In Backbone, this inheritance implementation is via the extend function of each Backbone object. The TypeScript extends keyword follows a similar implementation to Backbone, allowing the framework and language to dovetail each other. Angular, on the other hand, uses its own implementation of inheritance, and defines functions on the angular global namespace to create classes (that is angular.module). We can also sometimes use the instance of an application (that is <appName>.controller) to create modules or controllers. We have found, though, that Angular uses controllers in a very similar way to TypeScript classes, and we can therefore simply create standard TypeScript classes that will work within an Angular application. So far, we have only skimmed the surface of both the Angular TypeScript syntax and the Backbone TypeScript syntax. The point of this exercise was to try and understand how TypeScript can be used within each of these two third party frameworks. Be sure to visit http://todomvc.com, and have a look at the full source-code for the Todo application written in TypeScript for both Angular and Backbone. They can be found on the Compile-to-JS tab in the example section. These running code samples, combined with the documentation on each of these sites, will prove to be an invaluable resource when trying to write TypeScript syntax with an external third party library such as Angular or Backbone. Angular 2.0 The Microsoft TypeScript team and the Google Angular team have just completed a months long partnership, and have announced that the upcoming release of Angular, named Angular 2.0, will be built using TypeScript. Originally, Angular 2.0 was going to use a new language named AtScript for Angular development. During the collaboration work between the Microsoft and Google teams, however, the features of AtScript that were needed for Angular 2.0 development have now been implemented within TypeScript. 
This means that the Angular 2.0 library will be classed as highly compatible with TypeScript, once the Angular 2.0 library and the 1.5 edition of the TypeScript compiler are available.

Summary

In this article, we looked at two popular third party libraries and discussed how to integrate them with TypeScript. We explored Backbone, which can be categorized as a highly compliant third party library, and Angular, which is a partially compliant library.

Resources for Article:

Further resources on this subject:

Optimizing JavaScript for iOS Hybrid Apps [article]
Introduction to TypeScript [article]
Getting Ready with CoffeeScript [article]
article-image-structure-applications
Packt
21 Apr 2015
21 min read
Save for later

Structure of Applications

Packt
21 Apr 2015
21 min read
In this article by Colin Ramsay, author of the book Ext JS Application Development Blueprints, we will learn that one of the great things about imposing structure is that it automatically gives predictability (a kind of filing system in which we immediately know where a particular piece of code should live). The same applies to the files that make up your application. Certainly, we could put all of our files in the root of the website, mixing CSS, JavaScript, configuration and HTML files in a long alphabetical list, but we'd be losing out on a number of opportunities to keep our application organized. In this article, we'll look at: Ideas to structure your code The layout of a typical Ext JS application Use of singletons, mixins, and inheritance Why global state is a bad thing Structuring your application is like keeping your house in order. You'll know where to find your car keys, and you'll be prepared for unexpected guests. (For more resources related to this topic, see here.) Ideas for structure One of the ways in which code is structured in large applications involves namespacing (the practice of dividing code up by naming identifiers). One namespace could contain everything relating to Ajax, whereas another could contain classes related to mathematics. Programming languages (such as C# and Java) even incorporate namespaces as a first-class language construct to help with code organization. Separating code from directories based on namespace becomes a sensible extension of this: From left: Java's Platform API, Ext JS 5, and .NET Framework A namespace identifier is made up of one or more name tokens, such as "Java" or "Ext", "Ajax" or "Math", separated by a symbol, in most cases a full stop/period. The top level name will be an overarching identifier for the whole package (such as "Ext") and will become less specific as names are added and you drill down into the code base. The Ext JS source code makes heavy use of this practice to partition UI components, utility classes, and all the other parts of the framework, so let's look at a real example. The GridPanel component is perhaps one of the most complicated in the framework; a large collection of classes contribute to features (such as columns, cell editing, selection, and grouping). These work together to create a highly powerful UI widget. Take a look at the following files that make up GridPanel: The Ext JS grid component's directory structure The grid directory reflects the Ext.grid namespace. Likewise, the subdirectories are child namespaces with the deepest namespace being Ext.grid.filters.filter. The main Panel and View classes: Ext.grid.Grid and Ext.grid.View respectively are there in the main director. Then, additional pieces of functionality, for example, the Column class and the various column subclasses are further grouped together in their own subdirectories. We can also see a plugins directory, which contains a number of grid-specific plugins. Ext JS actually already has an Ext.plugins namespace. It contains classes to support the plugin infrastructure as well as plugins that are generic enough to apply across the entire framework. In the event of uncertainty regarding the best place in the code base for a plugin, we might mistakenly have put it in Ext.plugins. Instead, Ext JS follows best practice and creates a new, more specific namespace underneath Ext.grid. Going back to the root of the Ext JS framework, we can see that there are only a few files at the top level. 
In general, these will be classes that are either responsible for orchestrating other parts of the framework (such as EventManager or StoreManager) or classes that are widely reused across the framework (such as Action or Component). Any more specific functionality should be namespaced in a suitably specific way. As a rule of thumb, you can take your inspiration from the organization of the Ext JS framework, though as a framework rather than a full-blown application. It's lacking some of the structural aspects we'll talk about shortly. Getting to know your application When generating an Ext JS application using Sencha Cmd, we end up with a code base that adheres to the concept of namespacing in class names and in the directory structure, as shown here: The structure created with Sencha Cmd's "generate app" feature We should be familiar with all of this, as it was already covered when we discussed MVVM in Ext JS. Having said that, there are some parts of this that are worth examining further to see whether they're being used to the full. /overrides This is a handy one to help us fall into a positive and predictable pattern. There are some cases where you need to override Ext JS functionality on a global level. Maybe, you want to change the implementation of a low-level class (such as Ext.data.proxy.Proxy) to provide custom batching behavior for your application. Sometimes, you might even find a bug in Ext JS itself and use an override to hotfix until the next point release. The overrides directory provides a logical place to put these changes (just mirror the directory structure and namespacing of the code you're overriding). This also provides us with a helpful rule, that is, subclasses go in /app and overrides go in /overrides. /.sencha This contains configuration information and build files used by Sencha Cmd. In general, I'd say try and avoid fiddling around in here too much until you know Sencha Cmd inside out because there's a chance you'll end up with nasty conflicts if you try and upgrade to a newer version of Sencha Cmd. bootstrap.js, bootstrap.json, and bootstrap.css The Ext JS class system has powerful dependency management through the requires feature, which gives us the means to create a build that contains only the code that's in use. The bootstrap files contain information about the minimum CSS and JavaScript needed to run your application as provided by the dependency system. /packages In a similar way to something like Ruby has RubyGems and Node.js has npm, Sencha Cmd has the concept of packages (a bundle which can be pulled into your application from a local or remote source). This allows you to reuse and publish bundles of functionality (including CSS, images, and other resources) to reduce copy and paste of code and share your work with the Sencha community. This directory is empty until you configure packages to be used in your app. /resources and SASS SASS is a technology that aids in the creation of complex CSS by promoting reuse and bringing powerful features (such as mixins and functions) to your style sheets. Ext JS uses SASS for its theme files and encourages you to use it as well. index.html We know that index.html is the root HTML page of our application. It can be customized as you see fit (although, it's rare you'll need to). 
There's one caveat and it's written in a comment in the file already: <!-- The line below must be kept intact for Sencha Cmd to build your application --><script id="microloader" type="text/javascript" src="bootstrap.js"></script> We know what bootstrap.js refers to (loading up our application and starting to fulfill its dependencies according to the current build). So, heed the comment and leave this script tag, well, alone! /build and build.xml The /build directory contains build artifacts (the files created when the build process is run). If you run a production build, then you'll get a directory inside /build called production and you should use only these files when deploying. The build.xml file allows you to avoid tweaking some of the files in /.sencha when you want to add some extra functionality to a build process. If you want to do something before, during, or after the build, this is the place to do it. app.js This is the main JavaScript entry point to your application. The comments in this file advise avoiding editing it in order to allow Sencha Cmd to upgrade it in the future. The Application.js file at /app/Application.js can be edited without fear of conflicts and will enable you to do the majority of things you might need to do. app.json This contains configuration options related to Sencha Cmd and to boot your application. When we refer to the subject of this article as a JavaScript application, we need to remember that it's just a website composed of HTML, CSS, and JavaScript as well. However, when dealing with a large application that needs to target different environments, it's incredibly useful to augment this simplicity with tools that assist in the development process. At first, it may seem that the default application template contains a lot of cruft, but they are the key to supporting the tools that will help you craft a solid product. Cultivating your code As you build your application, there will come a point at which you create a new class and yet it doesn't logically fit into the directory structure Sencha Cmd created for you. Let's look at a few examples. I'm a lumberjack – let's go log in Many applications have a centralized SessionManager to take care of the currently logged in user, perform authentication operations, and set up persistent storage for session credentials. There's only one SessionManager in an application. A truncated version might look like this: /** * @class CultivateCode.SessionManager * @extends extendsClass * Description */ Ext.define('CultivateCode.SessionManager', {    singleton: true,    isLoggedIn: false,      login: function(username, password) {        // login impl    },        logout: function() {        // logout impl    },        isLoggedIn() {        return isLoggedIn;    } }); We create a singleton class. This class doesn't have to be instantiated using the new keyword. As per its class name, CultivateCode.SessionManager, it's a top-level class and so it goes in the top-level directory. In a more complicated application, there could be a dedicated Session class too and some other ancillary code, so maybe, we'd create the following structure: The directory structure for our session namespace What about user interface elements? There's an informal practice in the Ext JS community that helps here. We want to create an extension that shows the coordinates of the currently selected cell (similar to cell references in Excel). 
In this case, we'd create an ux directory—user experience or user extensions—and then go with the naming conventions of the Ext JS framework: Ext.define('CultivateCode.ux.grid.plugins.CoordViewer', {    extend: 'Ext.plugin.Abstract',    alias: 'plugin.coordviewer',      mixins: {        observable: 'Ext.util.Observable'    },      init: function(grid) {        this.mon(grid.view, 'cellclick', this.onCellClick, this);    },      onCellClick: function(view, cell, colIdx, record, row, rowIdx, e) {        var coords = Ext.String.format('Cell is at {0}, {1}', colIdx, rowIdx)          Ext.Msg.alert('Coordinates', coords);    } }); It looks a little like this, triggering when you click on a grid cell: Also, the corresponding directory structure follows directly from the namespace: You can probably see a pattern emerging already. We've mentioned before that organizing an application is often about setting things up to fall into a position of success. A positive pattern like this is a good sign that you're doing things right. We've got a predictable system that should enable us to create new classes without having to think too hard about where they're going to sit in our application. Let's take a look at one more example of a mathematics helper class (one that is a little less obvious). Again, we can look at the Ext JS framework itself for inspiration. There's an Ext.util namespace containing over 20 general classes that just don't fit anywhere else. So, in this case, let's create CultivateCode.util.Mathematics that contains our specialized methods for numerical work: Ext.define('CultivateCode.util.Mathematics', {    singleton: true,      square: function(num) {        return Math.pow(num, 2);    },      circumference: function(radius) {        return 2 * Math.PI * radius;    } }); There is one caveat here and it's an important one. There's a real danger that rather than thinking about the namespace for your code and its place in your application, a lot of stuff ends up under the utils namespace, thereby defeating the whole purpose. Take time to carefully check whether there's a more suitable location for your code before putting it in the utils bucket. This is particularly applicable if you're considering adding lots of code to a single class in the utils namespace. Looking again at Ext JS, there are lots of specialized namespaces (such as Ext.state or Ext.draw. If you were working with an application with lots of mathematics, perhaps you'd be better off with the following namespace and directory structure: Ext.define('CultivateCode.math.Combinatorics', {    // implementation here! }); Ext.define('CultivateCode.math.Geometry', {    // implementation here! }); The directory structure for the math namespace is shown in the following screenshot: This is another situation where there is no definitive right answer. It will come to you with experience and will depend entirely on the application you're building. Over time, putting together these high-level applications, building blocks will become second nature. Money can't buy class Now that we're learning where our classes belong, we need to make sure that we're actually using the right type of class. Here's the standard way of instantiating an Ext JS class: var geometry = Ext.create('MyApp.math.Geometry'); However, think about your code. Think how rare it's in Ext JS to actually manually invoke Ext.create. So, how else are the class instances created? 
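As a brief aside before we look at how instances are created, it is worth showing how the CoordViewer extension from earlier would actually be wired up. The grid below is only a hedged sketch to demonstrate the plugins config and the plugin alias—the store and columns are placeholders, not code from the book:

Ext.create('Ext.grid.Panel', {
    title   : 'Coordinate viewer demo',
    store   : userStore,           // assume any existing store instance
    columns : [
        { text: 'Name', dataIndex: 'name', flex: 1 }
    ],
    plugins : ['coordviewer'],     // resolved via the alias 'plugin.coordviewer'
    renderTo: Ext.getBody()
});

Clicking a cell in this grid would then trigger the plugin's onCellClick handler and display the alert with the cell's column and row indexes.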
Singletons A singleton is simply a class that only has one instance across the lifetime of your application. There are quite a number of singleton classes in the Ext JS framework. While the use of singletons in general is a contentious point in software architecture, they tend to be used fairly well in Ext JS. It could be that you prefer to implement the mathematical functions (we discussed earlier) as a singleton. For example, the following command could work: var area = CultivateCode.math.areaOfCircle(radius); However, most developers would implement a circle class: var circle = Ext.create('CultivateCode.math.Circle', { radius: radius }); var area = circle.getArea(); This keeps the circle-related functionality partitioned off into the circle class. It also enables us to pass the circle variable round to other functions and classes for additional processing. On the other hand, look at Ext.Msg. Each of the methods here are fired and forget, there's never going to be anything to do further actions on. The same is true of Ext.Ajax. So, once more we find ourselves with a question that does not have a definitive answer. It depends entirely on the context. This is going to happen a lot, but it's a good thing! This article isn't going to teach you a list of facts and figures; it's going to teach you to think for yourself. Read other people's code and learn from experience. This isn't coding by numbers! The other place you might find yourself reaching for the power of the singleton is when you're creating an overarching manager class (such as the inbuilt StoreManager or our previous SessionManager example). One of the objections about singletons is that they tend to be abused to store lots of global state and break down the separation of concerns we've set up in our code as follows: Ext.define('CultivateCode.ux.grid.GridManager', {       singleton: true,    currentGrid: null,    grids: [],      add: function(grid) {        this.grids.push(grid);    },      setCurrentGrid: function(grid) {        this.focusedGrid = grid;    } }); No one wants to see this sort of thing in a code base. It brings behavior and state to a high level in the application. In theory, any part of the code base could call this manager with unexpected results. Instead, we'd do something like this: Ext.define('CultivateCode.view.main.Main', {    extend: 'CultivateCode.ux.GridContainer',      currentGrid: null,    grids: [],      add: function(grid) {        this.grids.push(grid);    },      setCurrentGrid: function(grid) {        this.currentGrid = grid;    } }); We still have the same behavior (a way of collecting together grids), but now, it's limited to a more contextually appropriate part of the grid. Also, we're working with the MVVM system. We avoid global state and organize our code in a more correct manner. A win all round. As a general rule, if you can avoid using a singleton, do so. Otherwise, think very carefully to make sure that it's the right choice for your application and that a standard class wouldn't better fit your requirements. In the previous example, we could have taken the easy way out and used a manager singleton, but it would have been a poor choice that would compromise the structure of our code. Mixins We're used to the concept of inheriting from a subclass in Ext JS—a grid extends a panel to take on all of its functionality. Mixins provide a similar opportunity to reuse functionality to augment an existing class with a thin slice of behavior. 
An Ext.Panel "is an" Ext.Component, but it also "has a" pinnable feature that provides a pin tool via the Ext.panel.Pinnable mixin. In your code, you should be looking at mixins to provide a feature, particularly in cases where this feature can be reused. In the next example, we'll create a UI mixin called shakeable, which provides a UI component with a shake method that draws the user's attention by rocking it from side to side: Ext.define('CultivateCode.util.Shakeable', {    mixinId: 'shakeable',      shake: function() {        var el = this.el,            box = el.getBox(),            left = box.x - (box.width / 3),            right = box.x + (box.width / 3),            end = box.x;          el.animate({            duration: 400,            keyframes: {                33: {                      x: left                },                66: {                    x: right                },                 100: {                    x: end                }            }        });    } }); We use the animate method (which itself is actually mixed in Ext.Element) to set up some animation keyframes to move the component's element first left, then right, then back to its original position. Here's a class that implements it: Ext.define('CultivateCode.ux.button.ShakingButton', {    extend: 'Ext.Button',    mixins: ['CultivateCode.util.Shakeable'],    xtype: 'shakingbutton' }); Also it's used like this: var btn = Ext.create('CultivateCode.ux.button.ShakingButton', {    text: 'Shake It!' }); btn.on('click', function(btn) {    btn.shake(); }); The button has taken on the new shake method provided by the mixin. Now, if we'd like a class to have the shakeable feature, we can reuse this mixin where necessary. In addition, mixins can simply be used to pull out the functionality of a class into logical chunks, rather than having a single file of many thousands of lines. Ext.Component is an example of this. In fact, most of its core functionality is found in classes that are mixed in Ext.Component. This is also helpful when navigating a code base. Methods that work together to build a feature can be grouped and set aside in a tidy little package. Let's take a look at a practical example of how an existing class could be refactored using a mixin. Here's the skeleton of the original: Ext.define('CultivateCode.ux.form.MetaPanel', {    extend: 'Ext.form.Panel',      initialize: function() {        this.callParent(arguments);        this.addPersistenceEvents();    },      loadRecord: function(model) {        this.buildItemsFromRecord(model);        this.callParent(arguments);    },      buildItemsFromRecord: function(model) {        // Implementation    },      buildFieldsetsFromRecord: function(model){        // Implementation    },      buildItemForField: function(field){        // Implementation    },      isStateAvailable: function(){        // Implementation    },      addPersistenceEvents: function(){      // Implementation    },      persistFieldOnChange: function(){        // Implementation    },      restorePersistedForm: function(){        // Implementation    },      clearPersistence: function(){        // Implementation    } }); This MetaPanel does two things that the normal FormPanel does not: It reads the Ext.data.Fields from an Ext.data.Model and automatically generates a form layout based on these fields. It can also generate field sets if the fields have the same group configuration value. 
When the values of the form change, it persists them to localStorage so that the user can navigate away and resume completing the form later. This is useful for long forms. In reality, implementing these features would probably require additional methods to the ones shown in the previous code skeleton. As the two extra features are clearly defined, it's easy enough to refactor this code to better describe our intent: Ext.define('CultivateCode.ux.form.MetaPanel', {    extend: 'Ext.form.Panel',      mixins: [        // Contains methods:        // - buildItemsFromRecord        // - buildFieldsetsFromRecord        // - buildItemForField        'CultivateCode.ux.form.Builder',          // - isStateAvailable        // - addPersistenceEvents        // - persistFieldOnChange        // - restorePersistedForm        // - clearPersistence        'CultivateCode.ux.form.Persistence'    ],      initialize: function() {        this.callParent(arguments);        this.addPersistenceEvents();    },      loadRecord: function(model) {        this.buildItemsFromRecord(model);        this.callParent(arguments);    } }); We have a much shorter file and the behavior we're including in this class is described a lot more concisely. Rather than seven or more method bodies that may span a couple of hundred lines of code, we have two mixin lines and the relevant methods extracted to a well-named mixin class. Summary This article showed how the various parts of an Ext JS application can be organized into a form that eases the development process. Resources for Article: Further resources on this subject: CreateJS – Performing Animation and Transforming Function [article] Good time management in CasperJS tests [article] The Login Page using Ext JS [article]

article-image-inserting-gis-objects
Packt
21 Apr 2015
15 min read
Save for later

Inserting GIS Objects

Packt
21 Apr 2015
15 min read
In this article by Angel Marquez author of the book PostGIS Essentials see how to insert GIS objects. Now is the time to fill our tables with data. It's very important to understand some of the theoretical concepts about spatial data before we can properly work with it. We will cover this concept through the real estate company example, used previously. Basically, we will insert two kinds of data: firstly, all the data that belongs to our own scope of interest. By this, I mean the spatial data that was generated by us (the positions of properties in the case of the example of the real estate company) for our specific problem, so as to save this data in a way that can be easily exploited. Secondly, we will import data of a more general use, which was provided by a third party. Another important feature that we will cover in this article are the spatial data files that we could use to share, import, and export spatial data within a standardized and popular format called shp or Shape files. In this article, we will cover the following topics: Developing insertion queries that include GIS objects Obtaining useful spatial data from a public third-party Filling our spatial tables with the help of spatial data files using a command line tool Filling our spatial tables with the help of spatial data files using a GUI tool provided by PostGIS (For more resources related to this topic, see here.) Developing insertion queries with GIS objects Developing an insertion query is a very common task for someone who works with databases. Basically, we follow the SQL language syntax of the insertion, by first listing all the fields involved and then listing all the data that will be saved in each one: INSERT INTO tbl_properties( id, town, postal_code, street, "number) VALUES (1, 'London', 'N7 6PA', 'Holloway Road', 32); If the field is of a numerical value, we simply write the number; if it's a string-like data type, we have to enclose the text in two single quotes. Now, if we wish to include a spatial value in the insertion query, we must first find a way to represent this value. This is where the Well-Known Text (WKT) notation enters. WKT is a notation that represents a geometry object that can be easily read by humans; following is an example of this: POINT(-0.116190 51.556173) Here, we defined a geographic point by using a list of two real values, the latitude (y-axis) and the longitude (x-axis). Additionally, if we need to specify the elevation of some point, we will have to specify a third value for the z-axis; this value will be defined in meters by default, as shown in the following code snippet: POINT(-0.116190 51.556173 100) Some of the other basic geometry types defined by the WKT notation are: MULTILINESTRING: This is used to define one or more lines POLYGON: This is used to define only one polygon MULTIPOLYGON: This is used to define several polygons in the same row So, as an example, an SQL insertion query to add the first row to the table, tbl_properties, of our real estate database using the WKT notation, should be as follows: INSERT INTO tbl_properties (id, town, postal_code, street, "number", the_geom) VALUES (1, 'London', 'N7 6PA', 'Holloway Road', 32, ST_GeomFromText('POINT(-0.116190 51.556173)')); The special function provided by PostGIS, ST_GeomFromText, parses the text given as a parameter and converts it into a GIS object that can be inserted in the_geom field. Now, we could think this is everything and, therefore, start to develop all the insertion queries that we need. 
It could be true if we just want to work with the data generated by us and there isn't a need to share this information with other entities. However, if we want to have a better understanding of GIS (believe me, it could help you a lot and prevent a lot of unnecessary headache when working with data from several sources), it would be better to specify another piece of information as part of our GIS object representation to establish its Spatial Reference System (SRS). In the next section, we will explain this concept. What is a spatial reference system? We could think about Earth as a perfect sphere that will float forever in space and never change its shape, but it is not. Earth is alive and in a state of constant change, and it's certainly not a perfect circle; it is more like an ellipse (though not a perfect ellipse) with a lot of small variations, which have taken place over the years. If we want to represent a specific position inside this irregular shape called Earth, we must first make some abstractions: First we have to choose a method to represent Earth's surface into a regular form (such as a sphere, ellipsoid, and so on). After this, we must take this abstract three-dimensional form and represent it into a two-dimensional plane. This process is commonly called map projection, also known as projection. There are a lot of ways to make a projection; some of them are more precise than others. This depends on the usefulness that we want to give to the data, and the kind of projection that we choose. The SRS defines which projection will be used and the transformation that will be used to translate a position from a given projection to another. This leads us to another important point. Maybe it has occurred to you that a geographic position was unique, but it is not. By this, I mean that there could be two different positions with the same latitude and longitude values but be in different physical places on Earth. For a position to be unique, it is necessary to specify the SRS that was used to obtain this position. To explain this concept, let's consider Earth as a perfect sphere; how can you represent it as a two-dimensional square? Well, to do this, you will have to make a projection, as shown in the following figure:   A projection implies that you will have to make a spherical 3D image fit into a 2D figure, as shown in the preceding image; there are several ways to achieve this. We applied an azimuthal projection, which is a result of projecting a spherical surface onto a plane. However, as I told you earlier, there are several other ways to do this, as we can see in the following image:   These are examples of cylindrical and conical projections. Each one produces a different kind of 2D image of the terrain. Each has its own peculiarities and is used for several distinct purposes. If we put all the resultant images of these projections one above the other, we must get an image similar to the following figure:   As you can see, the terrain positions, which are not necessary, are the same between two projections, so you must clearly specify which projection you are using in your project in order to avoid possible mistakes and errors when you establish a position. There are a lot of SRS defined around the world. They could be grouped by their reach, that is, they could be local (state or province), national (an entire country), regional (several countries from the same area), or global (worldwide). 
The International Association of Oil and Gas Producers has defined a collection of Coordinate Reference System (CRS) known as the European Petroleum Survey Group (EPSG) dataset and has assigned a unique ID to each of these SRSs; this ID is called SRID. For uniquely defining a position, you must establish the SRS that it belongs to, using its particular ID; this is the SRID. There are literally hundreds of SRSs defined; to avoid any possible error, we must standardize which SRS we will use. A very common SRS, widely used around the globe is the WGS84 SRS with the SRID 4326. It is very important that you store the spatial data on your database, using EPSG: 4326 as much as possible, or almost use one equal projection on your database; this way you will avoid problems when you analyze your data. The WKT notation doesn't support the SRID specification as part of the text, since this was developed at the EWKT notation that allows us to include this information as part of our input string, as we will see in the following example: 'SRID=4326;POINT(51.556173 -0.116190)' When you create a spatial field, you must specify the SRID that will be used. Including SRS information in our spatial tables The matter that was discussed in the previous section is very important to develop our spatial tables. Taking into account the SRS that they will use from the beginning, we will follow a procedure to recreate our tables by adding this feature. This procedure must be applied to all the tables that we have created on both databases. Perform the following steps: Open a command session on pgSQL in your command line tool or by using the graphical GUI, PGAdmin III. We will open the Real_Estate database. Drop the spatial fields of your tables using the following instruction: SELECT DropGeometryColumn('tbl_properties', 'the_geom') Add the spatial field using this command: SELECT AddGeometryColumn('tbl_properties', 'the_geom', 4326, 'POINT', 2); Repeat these steps for the rest of the spatial tables. Now that we can specify the SRS that was used to obtain this position, we will develop an insertion query using the Extended WKT (EWKT) notation: INSERT INTO tbl_properties ( id, town, postal_code, street, "number", the_geom)VALUES (1, 'London', 'N7 6PA', 'Holloway Road', 32, ST_GeomFromEWKT('SRID=4326;POINT(51.556173 -0.116190)')); The ST_GeomFromEWKT function works exactly as ST_GeomFromText, but it implements the extended functionality of the WKT notation. Now that you know how to represent a GIS object as text, it is up to you to choose the most convenient way to generate a SQL script that inserts existing data into the spatial data tables. As an example, you could develop a macro in Excel, a desktop application in C#, a PHP script on your server, and so on. Getting data from external sources In this section, we will learn how to obtain data from third-party sources. Most often, this data interchange is achieved through a spatial data file. There are many data formats for this file (such as KML, geoJSON, and so on). We will choose to work with the *.shp files, because they are widely used and supported in practically all the GIS tools available in the market. There are dozens of sites where you could get useful spatial data from practically any city, state, or country in the world. Much of this data is public and freely available. In this case, we will use data from a fabulous website called http://www.openstreetmap.org/. 
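Before moving on to the download steps, it is worth making the earlier remark about generating insertion scripts a little more concrete. The sketch below is only one possible shape of such a generator, written in TypeScript for Node.js purely as an illustration; the input array, output file name, and column list are assumptions based on the tbl_properties example above, and the coordinate order simply mirrors the EWKT example shown earlier:

// Minimal sketch: turn an in-memory list of properties into a SQL insert script.
import fs = require('fs');

interface PropertyRow {
    id: number;
    town: string;
    postalCode: string;
    street: string;
    number: number;
    x: number;
    y: number;
}

function toInsert(p: PropertyRow): string {
    // Note: a real generator would also need to escape quote characters in the text values.
    return "INSERT INTO tbl_properties (id, town, postal_code, street, \"number\", the_geom) " +
        "VALUES (" + p.id + ", '" + p.town + "', '" + p.postalCode + "', '" + p.street + "', " +
        p.number + ", ST_GeomFromEWKT('SRID=4326;POINT(" + p.x + " " + p.y + ")'));";
}

var rows: PropertyRow[] = [
    { id: 1, town: 'London', postalCode: 'N7 6PA', street: 'Holloway Road',
      number: 32, x: 51.556173, y: -0.116190 }
];

fs.writeFileSync('insert_properties.sql', rows.map(toInsert).join('\n'));

Compiling this with tsc and running the result with Node.js produces an insert_properties.sql file that can then be executed in pgAdmin or psql, in the same way as the scripts shown earlier.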
Getting data from external sources
In this section, we will learn how to obtain data from third-party sources. Most often, this data interchange is achieved through a spatial data file. There are many formats for such files (KML, GeoJSON, and so on). We will choose to work with *.shp files, because they are widely used and supported by practically all the GIS tools available in the market.

There are dozens of sites where you can get useful spatial data for practically any city, state, or country in the world. Much of this data is public and freely available. In this case, we will use data from a fabulous website called http://www.openstreetmap.org/.

The following is a series of steps that you can follow to obtain spatial data from this particular provider. I'm pretty sure you can easily adapt this procedure to obtain data from other providers on the Internet. Continuing with the example of the real estate company, we will get data for the English county of Buckinghamshire. The idea is that you, as a member of the IT department, import data for the cities where the company has activities:

1. Open your favorite Internet browser and go to http://www.openstreetmap.org/, as shown in the following screenshot:
2. Click on the Export tab.
3. Click on the Geofabrik Downloads link; you will be taken to http://download.geofabrik.de/, as shown in the following screenshot:
4. There, you will find a list of subregions of the world; select Europe.
5. Next is a list of all the countries in Europe; notice a column called .shp.zip. This is the file format that we need to download. Select Great Britain.
6. In the next list, select England; you can see your selection on the map located on the right-hand side of the web page, as shown in the following screenshot:
7. Now you will see a list of all the counties. Select the .shp.zip column for the county of Buckinghamshire.
8. A download will start. When it finishes, you will get a file called buckinghamshire-latest.shp.zip. Unzip it.

At this point, we have just obtained the data (several shp files). The next procedure will show us how to convert these files into SQL insertion scripts.

Extracting spatial data from an shp file
Inside the unzipped folder there are several shp files; each of them stores a particular feature of the geography of this county. We will focus on the file named buildings.shp. We will now extract the data stored in this shp file and convert it into a SQL script so that we can insert its data into the tbl_buildings table. For this, we will use a PostGIS tool called shp2pgsql. Perform the following steps:

1. Open a command window with the cmd command.
2. Go to the unzipped folder.
3. Type the following command:

    shp2pgsql -g the_geom buildings.shp tbl_buildings > buildings.sql

4. Open the script with Notepad.
5. Delete the following lines from the script:

    CREATE TABLE "tbl_buildings" (gid serial,
        "osm_id" varchar(20),
        "name" varchar(50),
        "type" varchar(20),
        "timestamp" varchar(30));
    ALTER TABLE "tbl_buildings" ADD PRIMARY KEY (gid);
    SELECT AddGeometryColumn('','tbl_buildings','geom','0','MULTIPOLYGON',2);

6. Save the script.
7. Open and run it with the pgAdmin query editor.
8. Open the table; you should have at least 13363 new records. Keep in mind that this number can change as new updates are published.
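As a quick sanity check after running the generated script, you can ask PostGIS what ended up in the table. The following is a minimal sketch using standard PostGIS functions; the row count will vary with the data snapshot you downloaded:

    -- Row count, geometry type, and SRID of the imported buildings
    SELECT count(*) AS imported_rows,
           GeometryType(the_geom) AS geom_type,
           ST_SRID(the_geom) AS srid
    FROM tbl_buildings
    GROUP BY GeometryType(the_geom), ST_SRID(the_geom);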
Importing shp files with a graphical tool
There is another way to import an shp file into our tables: we can use a graphical tool, shp2pgsql-gui, for this. To use this tool, perform the following steps:

1. In the file explorer, open the folder C:\Program Files\PostgreSQL\9.3\bin\postgisgui.
2. Execute the shp2pgsql-gui application. Once this is done, we will see the following window:
3. Configure the connection with the server: click on the View Connection Details... button.
4. Set the data needed to connect to the server, as shown in the following screenshot:
5. Click the Add File button.
6. Select the points.shp file.
7. Once it is selected, type the following parameters in the Import List section:
    Mode: In this field, type Append
    SRID: In this field, type 4326
    Geo column: In this field, type the_geom
    Table: In this field, type tbl_landmarks
8. Click on the Import button.

The import process will fail and show you the following message:

This happens because the structure of the shp file is not the same as that of our table, and there is no way to tell the tool which fields we do not want to import. So, the only way to solve this problem is to let the tool create a new table and change its structure afterwards. This can be done by following these steps:

1. Go to pgAdmin and drop the tbl_landmarks table.
2. Change the mode to Create in the Import List.
3. Click on the Import button. This time the import process succeeds, but the table structure has changed.
4. Go to pgAdmin again, refresh the data, and edit the table structure so that it is the same as it was before:
    Change the name of the geom field to the_geom.
    Change the name of the osm_id field to id.
    Drop the timestamp field.
    Drop the primary key constraint and add a new one attached to the id field. To do this, right-click on Constraints in the left panel, navigate to New Object | New Primary Key, type pk_landmarks_id, and, in the Columns tab, add the id field.

Now we have two spatial tables: one with data that contains positions represented by the PostGIS type POINT (tbl_landmarks), and another with polygons represented by the PostGIS type MULTIPOLYGON (tbl_buildings).

Now I would like you to import the data contained in the roads.shp file, using either of the two methods we have just seen. This file contains data that represents the paths of the different highways, streets, roads, and so on, that belong to this area, in the form of lines, represented by PostGIS with the MULTILINESTRING type. When it is imported, change the table's name to tbl_roads and adjust its columns to the structure used for the other tables in this article. Here is an example of how the imported data should look; as you can see, the spatial data is shown in its binary form in the following table:

Summary
In this article, you learned some basic concepts of GIS (such as WKT, EWKT, and SRS), which are fundamental for working with GIS data. Now you are able to craft your own spatial insertion queries or import existing data into your own spatial data tables.