
How-To Tutorials


Orchestrate Multiple Docker Containers Simply Using Fig

Felix Rabe
15 Dec 2014
7 min read
When you start learning how to use Docker, you play around running a single container in a single project. Soon after, you want to start multiple Docker containers in multiple projects. A nifty little tool called Fig helps you do just that.

Sneak preview

At the end of this blog post, you will have the following small sample app running on Docker using a MongoDB database and a Node.js web server in two separate containers. But first, some background.

Bash vs Fig

Before this article, I had written my own shell script for the setup of named-data.education. But as I was exploring the realm of Docker orchestration, I came to the conclusion that throwing my script out and going with an existing solution would be the better practice. I ended up choosing Fig because it supports (and simplifies) the workflow I had implemented with my custom shell script. Also, it was recently acquired by Docker, and will soon be integrated with Docker.

What does Fig do for you?

- It builds, runs, and removes multiple containers together in a single command.
- It keeps docker command-line arguments out of sight and inside the fig.yml file. This reduces long docker run … commands to a simple fig up command, similar to vagrant up (sketched right after this list).
- It avoids naming conflicts by giving image and container names project-specific prefixes, derived from the name of the directory that contains the fig.yml file.
- It knows the state the application environment is in, so docker ps ; docker stop ; docker rm just becomes fig up, which restarts running containers transparently.
- The fig.yml file is much more readable and maintainable than an equivalent shell script would be.
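To make the second point concrete, here is a rough sketch of what fig up stands in for (the image and container names here are illustrative only; Fig derives the real prefixes from the project directory name):

```
# Without Fig: build and wire up each container by hand
docker build -t myproject_web ./web
docker run -d --name myproject_db_1 mongo:2.6
docker run -d --name myproject_web_1 \
  --link myproject_db_1:db_1 \
  -p 8080:8080 \
  myproject_web

# With Fig: one command reads fig.yml and does the equivalent
fig up -d
```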
Life cycle of a single Docker container (without Fig)

This state diagram illustrates the life cycle of a typical Docker container.

States

- source - There is a Dockerfile, but nothing was done with it
- built - A Docker image was built from the Dockerfile
- running - A Docker container was started from the Docker image
- stopped - The Docker container has been stopped or it has stopped on its own

Transitions

These actually correspond to Docker commands, the most prominent ones being build and run:

- build - Takes a Dockerfile and creates a Docker image
- run - Takes a Docker image and runs a command in a container
- stop - Stops a running Docker container, or the container dies
- rm - Removes a stopped Docker container
- rmi - Removes a Docker image

Life cycle of multiple Docker containers with Fig

This diagram illustrates how Fig orchestrates multiple Docker containers. These transitions also correspond to Fig commands, with fig up being the champion here. (There is also a fig run command, but it has a marginal role in comparison.)

Okay, let's get practical.

A web app and a database

As an example, let's say your app is really simple and consists of just a database and a web frontend. In Docker, this architecture is implemented by running each part in a separate container. The external connection, between NodeJS and the Internet, is realized by exposing a port, whereas the internal connection, between NodeJS and MongoDB, is realized using a link.

Application source code

For this sample application, create a directory with the following layout with the listings shown below, or get the source code from GitHub:

```
fig-nodejs-mongodb-example/
    fig.yml
    web/
        Dockerfile
        liststorage.coffee
        package.json
        server.coffee
```

fig.yml:

```
web:
  build: ./web
  ports:
    - "8080:8080"
  links:
    - db
db:
  image: mongo:2.6
```

Dockerfile:

```
FROM node:0.10
ADD package.json /code/
WORKDIR /code
RUN npm install
ADD . /code
CMD ["./node_modules/.bin/coffee", "./server.coffee"]
```

liststorage.coffee:

```
{MongoClient, ObjectID} = require 'mongodb'

class module.exports.ListStorage
  constructor: ->
    @ready = false
    @collection = null
    MongoClient.connect 'mongodb://db_1:27017/list', (err, db) =>
      throw err if err
      db.createCollection 'list', (err, collection) =>
        throw err if err
        @ready = true
        @collection = collection

  toArray: (callback) ->
    return callback new Error 'not ready' unless @ready
    @collection.find().toArray (err, list) ->
      return callback err if err
      callback null, list

  push: (item, callback) ->
    doc = item: item
    @collection.insert doc, {w: 1}, (err, result) ->
      return callback err if err
      callback null

  remove: (_id, callback) ->
    @collection.remove {_id: ObjectID(_id)}, {w: 1}, (err, result) ->
      return callback err if err
      callback null
```

package.json (the bare minimum to make npm install happy; normally npm init should be used to create this file):

```
{
  "dependencies": {
    "body-parser": "^1.6.6",
    "coffee-script": "^1.8.0",
    "express": "^4.8.5",
    "handlebars": "^2.0.0-beta.1",
    "mongodb": "^1.4.9"
  }
}
```

server.coffee:

```
#!/usr/bin/env coffee
require 'coffee-script/register'

ListStorage = require('./liststorage').ListStorage
listStorage = new ListStorage

handlebars = require 'handlebars'
indexTemplate = handlebars.compile '''
  <title>List</title>
  <h1>List</h1>
  <ul>
  {{#each items}}
    <li><a href="/delete/{{_id}}">[&times;]</a> {{item}}</li>
  {{/each}}
  </ul>
  <form method="POST">
    <label>Add something:</label>
    <input name="item" autofocus="autofocus" />
    <input type="submit" value="Submit" />
  </form>
'''

express = require 'express'
bodyParser = require 'body-parser'
app = express()
app.use bodyParser.urlencoded extended: true

app.get '/', (req, res) ->
  listStorage.toArray (err, items) ->
    throw err if err
    res.send indexTemplate items: items

app.post '/', (req, res) ->
  listStorage.push req.body.item, (err) ->
    throw err if err
    res.redirect '/'

app.get '/delete/:_id', (req, res) ->
  listStorage.remove req.params._id, (err) ->
    throw err if err
    res.redirect '/'

app.listen 8080
```

Start it up

First, make sure you have Docker and Fig installed. (This example has been tested with Fig 0.5.2 and Docker 1.2.0. On OS X, brew install fig works fairly well, together with docker-osx or boot2docker.) Then, open the terminal and type the following:

```
cd fig-nodejs-mongodb-example
fig up -d
```

Then (assuming you run docker-osx), open http://localdocker:8080/ and play around - knowing that you did not have to manually set up two virtual machines!

Other commands you might wish to use

- fig up (Ctrl-C will stop this from running)
- fig logs - for logging
- fig stop - for stopping
- fig rm - for removing
- fig ps - for status

Remarks

The sections in fig.yml are called "services." As Fig allows scaling by running multiple instances of services, link aliases get an additional suffix inside containers compared to plain Docker; db_1 for Fig versus just db for Docker. Also, as Docker (and Fig) manage the container's /etc/hosts file, you get a db_1 host for free.

Now, go have fun and keep containin'!

About the author

Felix Rabe has been programming and working with different technologies and companies at different levels since 1993. Currently, he is researching and promoting Named Data Networking (http://named-data.net/), an evolution of the Internet architecture that currently relies on the host-bound Internet Protocol.


How to Auto-Scale Your Cloud with SaltStack

Nicole Thomas
15 Dec 2014
10 min read
What is SaltStack?

SaltStack is an extremely fast, scalable, and powerful remote execution engine and configuration management tool created to control distributed infrastructure, code, and data efficiently. At the heart of SaltStack, or "Salt", is its remote execution engine, which is a bi-directional, secure communication system administered through the use of a Salt Master daemon. This daemon is used to control Salt Minion daemons, which receive commands from the remote Salt Master.

A major component of Salt's approach to configuration management is Salt Cloud, which was made to manage Salt Minions in cloud environments. The main purpose of Salt Cloud is to spin up instances on cloud providers, install a Salt Minion on the new instance using Salt's Bootstrap Script, and configure the new minion so it can immediately get to work. Salt Cloud makes it easy to get an infrastructure up and running quickly and supports an array of cloud providers such as OpenStack, Digital Ocean, Joyent, Linode, Rackspace, Amazon EC2, and Google Compute Engine, to name a few. Here is a full list of cloud providers supported by SaltStack and the automation features supported for each.

What is cloud auto scaling?

One of the most formidable benefits of cloud application hosting and data storage is the cloud infrastructure's capacity to scale as demand fluctuates. Many cloud providers offer auto scaling features that automatically increase or decrease the number of instances that are up and running in a user's cloud at any given time. These components generate new instances as needed to ensure optimal performance as activity escalates, while during idle periods, instances are destroyed to reduce costs. To harness the power of cloud auto-scaling technologies, SaltStack provides two reactor formulas that integrate Salt's configuration management and remote execution capabilities with either Amazon EC2 Auto Scaling or Rackspace Auto Scale.

The Salt Cloud Reactor

Salt Formulas can be very helpful in the rapid build-out of management frameworks for cloud infrastructures. Formulas are pre-written Salt States that can be used to configure services, install packages, or perform any other common configuration management tasks. The Salt Cloud Reactor is a formula that allows Salt to interact with supported Salt Cloud providers who provide cloud auto scaling features. (Note: at the time this article was written, the only supported Salt Cloud providers with cloud auto scaling capabilities were Rackspace Auto Scale and Amazon EC2 Auto Scaling. The Salt Cloud Reactor can also be used directly with EC2 Auto Scaling, but it is recommended that the EC2 Autoscale Reactor be used instead, as discussed in the following section.)

The Salt Cloud Reactor allows SaltStack to know when instances are spawned or destroyed by the cloud provider. When a new instance comes online, a Salt Minion is automatically installed and the minion's key is accepted by the Salt Master. If the configuration for the minion contains the appropriate startup state, it will configure itself and start working on its tasks. Accordingly, when an instance is deleted by the cloud provider, the minion's key is removed from the Salt Master.

In order to use the Salt Cloud Reactor, the Salt Master must be configured appropriately. In addition to applying all necessary settings on the Salt Master, a Salt Cloud query must be executed on a regular basis. The query polls data from the cloud provider to collect changes in the auto scaling sequence, as cloud providers using the Salt Cloud Reactor do not directly trigger notifications to Salt upon instance creation and deletion. The cloud query must be issued via a scheduling system such as cron or the Salt Scheduler.
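For instance, here is a minimal sketch of an /etc/cron.d-style entry that runs the query every five minutes (the interval is illustrative; the salt-cloud --full-query command itself is discussed again in the Salt Scheduler section below):

```
# Run a full Salt Cloud query every 5 minutes
*/5 * * * * root salt-cloud --full-query
```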
Once the Salt Master has been configured and query scheduling has been implemented, the reactor will manage itself and allow the Salt Master to interact with any Salt Minions created or destroyed by the auto scaling system.

The EC2 Autoscale Reactor

Salt's EC2 Autoscale Reactor enables Salt to collaborate with Amazon EC2 Auto Scaling. Similarly to the Salt Cloud Reactor, the EC2 Autoscale Reactor will bootstrap a Salt Minion on any newly created instances, and the Salt Master will automatically accept the new minion's key. Additionally, when an EC2 instance is destroyed, the Salt Minion's key will be automatically removed from the Salt Master.

However, the EC2 Autoscale Reactor formula differs from the Salt Cloud Reactor formula in one major way: Amazon EC2 provides notifications directly to the reactor when the EC2 cloud is scaled up or down, making it easy for Salt to immediately bootstrap new instances with a Salt Minion, or to delete old Salt Minion keys from the master. This behavior, therefore, does not require any kind of scheduled query to poll EC2 for changes in scale like the Salt Cloud Reactor demands. Changes to the EC2 cloud can be acted upon by the Salt Master immediately, whereas clouds using the Salt Cloud Reactor may experience a delay between an instance being created and the Salt Master bootstrapping it with a new minion.

Configuring the EC2 Autoscale Reactor

Both of the cloud auto scaling reactors were only recently added to the SaltStack arsenal, and as such, the Salt develop branch is required to set up any auto scaling capabilities. To get started, clone the Salt repository from GitHub onto the machine serving as the Salt Master:

```
git clone https://github.com/saltstack/salt
```

Depending on the operating system you are using, there are a few dependencies that also need to be installed to run SaltStack from the develop branch. Check out the Installing Salt for Development documentation for OS-specific instructions.

Once Salt has been installed for development, the Salt Master needs to be configured. First, create the default salt directory in /etc:

```
mkdir /etc/salt
```

The default Salt Master configuration file resides in salt/conf/master. Copy this file into the new salt directory:

```
cp path/to/salt/conf/master /etc/salt/master
```

The Salt Master configuration file is completely commented out, as the default configuration for the master will work on most systems. However, some additional settings must be configured to enable the EC2 Autoscale Reactor to work with the Salt Master. Under the external_auth section of the master configuration file, replace the commented-out lines with the following:

```
external_auth:
  pam:
    myuser:
      - .*
      - '@runner'
      - '@wheel'

rest_cherrypy:
  port: 8080
  host: 0.0.0.0
  webhook_url: /hook
  webhook_disable_auth: True

reactor:
  - 'salt/netapi/hook/ec2/autoscale':
    - '/srv/reactor/ec2-autoscale.sls'

ec2.autoscale:
  provider: my-ec2-config
  ssh_username: ec2-user
```

These settings allow the Salt API web hook system to interact with EC2. When a web request is received from EC2, the Salt API will execute an event for the reactor system to respond to.
The final ec2.autoscale setting points the reactor to the corresponding Salt Cloud provider configuration file. If authenticity problems with the reactor's web hook occur, an email notification from Amazon will be sent to the user. To configure the Salt Master to connect to a mail server, see the example SMTP settings in the EC2 Autoscale Reactor documentation.

Next, the Salt Cloud provider configuration file must be created. First, create the cloud provider configuration directory:

```
mkdir /etc/salt/cloud.providers.d
```

In /etc/salt/cloud.providers.d, create a file named ec2.conf, and set the following configurations according to your Amazon EC2 account:

```
my-ec2-config:
  id: <my aws id>
  key: <my aws key>
  keyname: <my aws key name>
  securitygroup: <my aws security group>
  private_key: </path/to/my/private_key.pem>
  location: us-east-1
  provider: ec2
  minion:
    master: saltmaster.example.com
```

The last line, master: saltmaster.example.com, represents the location of the Salt Master, so each new Salt Minion knows where to connect once it's up and running.

To set up the actual reactor, create a new reactor directory, download the ec2-autoscale-reactor formula, and copy the reactor formula into the new directory, like so:

```
mkdir /srv/reactor
cp path/to/downloaded/package/ec2-autoscale.sls /srv/reactor/ec2-autoscale.sls
```

The last major configuration step is to configure all of the appropriate settings on your EC2 account. First, log in to your AWS account and set up SNS HTTP(S) notifications by selecting SNS (Push Notification Service) from the AWS Console. Click Create New Topic, enter a topic name and a display name, and click the Create Topic button. Then, inside the Topic Details area, click Create Subscription. Choose HTTP or HTTPS as needed and enter the web hook for the Salt API. Assuming your Salt Master is set up at https://saltmaster.example.com, the final web hook endpoint will be https://saltmaster.example.com/hook/ec2/autoscale. Finally, click Subscribe.
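Before wiring up the rest of AWS, it can be worth checking that the endpoint is reachable at all. A minimal sketch (the URL comes from the example above; the JSON payload is purely illustrative and not a real SNS message):

```
# Smoke-test the Salt API web hook endpoint
curl -X POST https://saltmaster.example.com/hook/ec2/autoscale \
  -H 'Content-Type: application/json' \
  -d '{"test": true}'
```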
Next, set up the launch configurations by choosing EC2 (Virtual Servers in the Cloud) from the AWS Console. Then, select Launch Configurations on the left-hand side. Click Create Launch Configuration and follow the prompts to define the appropriate settings for your cloud. Finally, on the review screen, click Create Launch Configuration to save your settings.

Once the launch configuration is set up, click Auto Scaling Groups from the left-hand navigation menu to create auto scaling variables such as the minimum and maximum number of instances your cloud should contain. Click Create Auto Scaling Group, choose Create an Auto Scaling group from an existing launch configuration, select the appropriate configuration, and then click Next Step. From there, follow the prompts until you reach the Configure Notifications screen. Click Add Notification and choose the notification setting that was configured during the SNS configuration step. Finally, complete the rest of the prompts.

Congratulations! At this point, you should have successfully configured SaltStack to work with EC2 Auto Scaling!

Salt Scheduler

As mentioned in the Salt Cloud Reactor section, some type of scheduling system must be implemented when using the Salt Cloud Reactor formula. SaltStack provides its own scheduler, which can be used by adding the following state to the Salt Master's configuration file:

```
schedule:
  job1:
    function: cloud.full_query
    seconds: 300
```

Here, the seconds setting ensures that the Salt Master will perform a salt-cloud --full-query command every 5 minutes. A value of 300 seconds or greater is recommended; however, the value can be changed as necessary.

Salting instances from the web interface

Another exciting quality of Salt's auto-scale reactor formulas is that once a reactor is configured, the respective cloud provider's web interface can be used to spin up new instances that are automatically "Salted". Since the reactor integrates with the web interface to automatically install a Salt Minion on any new instances, it will perform the same operations when instances are created manually via the web interface. The same functionality is true for manually deleting instances: if an instance is manually destroyed on the web interface, the corresponding minion's key will be removed from the Salt Master.

More resources

For troubleshooting, more configuration options, or SaltStack specifics, SaltStack has many helpful resources, such as the SaltStack, Salt Cloud, Salt Cloud Reactor, and EC2 Autoscale Reactor documentation. SaltStack also has a thriving, active, and friendly open source community.

About the Author

Nicole Thomas is a QA Engineer at SaltStack, Inc. Before coming to SaltStack, she wore many hats, from web and Android developer to contributing editor to working in Environmental Education. Nicole recently graduated Summa Cum Laude from Westminster College with a degree in Computer Science. Nicole also has a degree in Environmental Studies from the University of Utah.


QGIS Feature Selection Tools

Packt
05 Dec 2014
4 min read
In this article by Anita Graser, the author of Learning QGIS, Third Edition, we will cover the following topics:

- Selecting features with the mouse
- Selecting features using expressions
- Selecting features using spatial queries

Selecting features with the mouse

The first group of tools in the Attributes toolbar allows us to select features on the map using the mouse. The following screenshot shows the Select Feature(s) tool. We can select a single feature by clicking on it or select multiple features by drawing a rectangle. The other tools can be used to select features by drawing different shapes: polygons, freehand areas, or circles around the features. All features that intersect with the drawn shape are selected. Holding down the Ctrl key will add the new selection to an existing one. Similarly, holding down Ctrl + Shift will remove the new selection from the existing selection.

Selecting features by expression

The second type of select tool is called Select by Expression, and it is also available in the Attributes toolbar. It selects features based on expressions that can contain references and functions using feature attributes and/or geometry. The list of available functions is pretty long, but we can use the search box to filter the list by name to find the function we are looking for faster. On the right-hand side of the window, we will find Selected Function Help, which explains the functionality and how to use the function in an expression. The Function List option also shows the layer attribute fields, and by clicking on Load all unique values or Load 10 sample values, we can easily access their content. As with the mouse tools, we can choose between creating a new selection or adding to or deleting from an existing selection. Additionally, we can choose to only select features from within an existing selection.

Let's have a look at some example expressions that you can build on and use in your own work:

- Using the lakes.shp file in our sample data, we can, for example, select big lakes with an area bigger than 1,000 square miles using a simple attribute query, "AREA_MI" > 1000.0, or using geometry functions such as $area > (1000.0 * 27878400). Note that the lakes.shp CRS uses feet, and we, therefore, have to multiply by 27,878,400 to convert from square feet to square miles. The dialog will look like the one shown in the following screenshot.
- We can also work with string functions, for example, to find lakes with long names, such as length("NAMES") > 12, or lakes with names that contain the s or S character, such as lower("NAMES") LIKE '%s%', which first converts the names to lowercase and then looks for any appearance of s. We will combine these two conditions in the sketch after this list.
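Such conditions can also be combined with the usual boolean operators. As a small sketch (assuming the same lakes.shp attribute names as above), the following expression selects large lakes whose names contain an s:

```
"AREA_MI" > 1000.0 AND lower("NAMES") LIKE '%s%'
```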
Selecting features using spatial queries

The third type of tool is called Spatial Query and allows us to select features in one layer based on their location relative to the features in a second layer. These tools can be accessed by going to Vector | Research Tools | Select by location and by going to Vector | Spatial Query | Spatial Query. Enable it in Plugin Manager if you cannot find it in the Vector menu. In general, we want to use the Spatial Query plugin, as it supports a variety of spatial operations such as crosses, equals, intersects, is disjoint, overlaps, touches, and contains, depending on the layer's geometry type.

Let's test the Spatial Query plugin using railroads.shp and pipelines.shp from the sample data. For example, we might want to find all the railroad features that cross a pipeline; we will, therefore, select the railroads layer, the Crosses operation, and the pipelines layer. After clicking on Apply, the plugin presents us with the query results. There is a list of IDs of the result features on the right-hand side of the window, as you can see in the following screenshot. Below this list, we can select the Zoom to item checkbox, and QGIS will zoom to the feature that belongs to the selected ID. Additionally, the plugin offers buttons to directly save all the resulting features to a new layer.

Summary

This article introduced you to three ways to select features in QGIS: selecting features with the mouse, using expressions, and using spatial queries.


Solving Problems With Spring Boot

Greg Turnquist
05 Dec 2014
4 min read
I first became aware of Spring Boot early in 2013. At the time, we were rebuilding Spring's website and decided to write a collection of guides that could be consumed during a single lunch break (at spring.io/guides). Realizing how much code Spring Boot saved us from writing (and explaining to readers) thanks to its auto-configuration feature, we embraced it fully. In fact, it led me to write several patches for Spring Boot to help with several of the guides I was writing. Discovering that boilerplate Spring code was unnecessary was incredibly exciting and very effective.

Another of Spring Boot's amazing features was its support for properties. This was something I learned more about when I attended Dave Syer and Phil Webb's presentation at the 2013 SpringOne conference. The room was packed with attendees. The keynote presentation of Spring Boot from the night before had whetted many appetites. I learned that not only did Spring Boot provide the means to inject critical values into auto-configured beans, it also had strong support for configuration on any platform through different naming conventions. You could also override embedded property settings at any stage, even after deployment into production. At my previous company, I had built something similar by hand, but never as sophisticated. Given Java's frankly ineffective property APIs, it doesn't surprise me how much people like Spring Boot's solution to this.

Another golden feature is Spring Boot's library of built-in actuators. These include metrics, controls, and reports. In a production environment, this type of material is critical. Not having to build it up for the nth time, as I have done in the past, makes it a killer feature to me. And Spring Boot's support for adding your own metrics and management endpoints is really cool.

Every feature Spring Boot provides makes it incredibly appealing to move quickly into coding features and deploying them into real apps. Things are simpler and more aligned with what is needed when you deploy and maintain an app. You don't change gears when production time comes. Spring Boot doesn't load me down with complex XML configuration files. Instead, I can configure things with code and simple properties. But when I need to customize something special, like a view resolver, Spring Boot gets out of the way by withdrawing its auto-configured one.

I was speaking with a colleague a couple months ago and discovered he was helping a local school to set up a computer science workshop. Thanks to Spring Boot, he gave them advice on setting up some exercises where the students would be able to immediately start writing the code that displays "Hello World" on a web page. They wouldn't have to start with installing a build system nor standing up an application server. The thought of going through all those extras sounds really boring; I can only imagine how that would dampen the spirits of kids just getting started. Being able to instead move right into application development, and seeing results within minutes, sounds more exciting than ever.
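To give a feel for how little code that takes, here is a minimal sketch of such an app (the class name is illustrative, not from a specific guide, and it assumes the spring-boot-starter-web dependency is on the classpath):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// A minimal "Hello World" web app: auto-configuration supplies the
// embedded server, so there is nothing to install or deploy to.
@SpringBootApplication
@RestController
public class HelloApp {

    @RequestMapping("/")
    public String hello() {
        return "Hello World";
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloApp.class, args);
    }
}
```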
So taking all this into account, I started writing my proposal about Spring Boot earlier this year. It was within two weeks of that when Packt reached out to me about writing a Spring Framework-oriented book. I immediately polished up my proposal and responded with it. It didn't take Packt long (24 hours, perhaps?) to leap at the idea. We hammered out the specifics in less than a week and got moving quickly. I have never been more excited about writing. Given that I have been reading blog articles about Spring Boot for almost two years, I have seen so many examples of how people are solving problems, not just building toy apps. I decided to tilt my book towards solving those problems and show how Spring Boot really is the innovative answer to modern application development. I hope everyone is able to enjoy it.

About The Author

Greg Turnquist is a test-bitten script junky. He is a member of the Spring team at Pivotal. He works on Spring Data REST, Spring Boot, and other Spring projects, while also working as an editor-at-large of Spring's Getting Started guides. He launched the Nashville JUG in 2010. He also created Spring Python and wrote Spring Python 1.1 and Python Testing Cookbook for Packt. He has been a Spring fan for years.


Building a Remote-controlled TV with Node-Webkit

Roberto González
04 Dec 2014
14 min read
Node-webkit is one of the most promising technologies to come out in the last few years. It lets you ship a native desktop app for Windows, Mac, and Linux just using HTML, CSS, and some JavaScript. These are the exact same languages you use to build any web app. You basically get your very own frameless WebKit to build your app, which is then supercharged with NodeJS, giving you access to some powerful libraries that are not available in a typical browser.

As a demo, we are going to build a remote-controlled Youtube app. This involves creating a native app that displays YouTube videos on your computer, as well as a mobile client that will let you search for and select the videos you want to watch straight from your couch.

You can download the finished project from https://github.com/Aerolab/youtube-tv. You need to follow the first part of this guide (Getting started) to set up the environment and then run run.sh (on Mac) or run.bat (on Windows) to start the app.

Getting started

First of all, you need to install Node.JS (a JavaScript platform), which you can download from http://nodejs.org/download/. The installer comes bundled with NPM (Node.JS Package Manager), which lets you install everything you need for this project.

Since we are going to be building two apps (a desktop app and a mobile app), it's better if we get the boring HTML+CSS part out of the way, so we can concentrate on the JavaScript part of the equation. Download the project files from https://github.com/Aerolab/youtube-tv/blob/master/assets/basics.zip and put them in a new folder. You can name the project's folder youtube-tv or whatever you want. The folder should look like this:

```
- index.html   // This is the starting point for our desktop app
- css          // Our desktop app styles
- js           // This is where the magic happens
- remote       // This is where the magic happens (Part 2)
- libraries    // FFMPEG libraries, which give you H.264 video support in Node-Webkit
- player       // Our youtube player
- Gruntfile.js // Build scripts
- run.bat      // run.bat runs the app on Windows
- run.sh       // sh run.sh runs the app on Mac
```

Now open the Terminal (on Mac or Linux) or a new command prompt (on Windows) right in that folder. We'll install a couple of dependencies we need for this project, so type these commands to install node-gyp and grunt-cli. Each one will take a few seconds to download and install.

On Mac or Linux:

```
sudo npm install node-gyp -g
sudo npm install grunt-cli -g
```

On Windows:

```
npm install node-gyp -g
npm install grunt-cli -g
```

Leave the Terminal open. We'll be using it again in a bit.

All Node.JS apps start with a package.json file (our manifest), which holds most of the settings for your project, including which dependencies you are using. Go ahead and create your own package.json file (right inside the project folder) with the following contents. Feel free to change anything you like, such as the project name, the icon, or anything else. Check out the documentation at https://github.com/rogerwang/node-webkit/wiki/Manifest-format:

```
{
  "//": "The // keys in package.json are comments.",

  "//": "Your project's name. Go ahead and change it!",
  "name": "Remote",

  "//": "A simple description of what the app does.",
  "description": "An example of node-webkit",

  "//": "This is the first html the app will load. Just leave this this way",
  "main": "app://host/index.html",

  "//": "The version number. 0.0.1 is a good start :D",
  "version": "0.0.1",

  "//": "This is used by Node-Webkit to set up your app.",
  "window": {
    "//": "The Window Title for the app",
    "title": "Remote",
    "//": "The Icon for the app",
    "icon": "css/images/icon.png",
    "//": "Do you want the File/Edit/Whatever toolbar?",
    "toolbar": false,
    "//": "Do you want a standard window around your app (a title bar and some borders)?",
    "frame": true,
    "//": "Can you resize the window?",
    "resizable": true
  },
  "webkit": {
    "plugin": false,
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36"
  },

  "//": "These are the libraries we'll be using:",
  "//": "Express is a web server, which will handle the files for the remote",
  "//": "Socket.io lets you handle events in real time, which we'll use with the remote as well.",
  "dependencies": {
    "express": "^4.9.5",
    "socket.io": "^1.1.0"
  },

  "//": "And these are just task handlers to make things easier",
  "devDependencies": {
    "grunt": "^0.4.5",
    "grunt-contrib-copy": "^0.6.0",
    "grunt-node-webkit-builder": "^0.1.21"
  }
}
```

You'll also find Gruntfile.js, which takes care of downloading all of the node-webkit assets and building the app once we are ready to ship. Feel free to take a look into it, but it's mostly boilerplate code.

Once you've set everything up, go back to the Terminal and install everything you need by typing:

```
npm install
grunt nodewebkitbuild
```

You may run into some issues when doing this on Mac or Linux. In that case, try using sudo npm install and sudo grunt nodewebkitbuild.

npm install installs all of the dependencies you mentioned in package.json, both the regular dependencies and the development ones, like grunt and grunt-node-webkit-builder, which downloads the Windows and Mac versions of node-webkit, setting them up so they can play videos, and building the app. Wait a bit for everything to install properly and we're ready to get started. Note that if you are using Windows, you might get a scary error related to Visual C++ when running npm install. Just ignore it.
Building the desktop app

All web apps (or websites for that matter) start with an index.html file. We are going to be creating just that to get our app to run:

```
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <title>Youtube TV</title>

  <link href='http://fonts.googleapis.com/css?family=Roboto:500,400' rel='stylesheet' type='text/css' />
  <link href="css/normalize.css" rel="stylesheet" type="text/css" />
  <link href="css/styles.css" rel="stylesheet" type="text/css" />
</head>
<body>

  <div id="serverInfo"><h1>Youtube TV</h1></div>

  <div id="videoPlayer"></div>

  <script src="js/jquery-1.11.1.min.js"></script>
  <script src="js/youtube.js"></script>
  <script src="js/app.js"></script>

</body>
</html>
```

As you may have noticed, we are using three scripts for our app: jQuery (pretty well known at this point), a Youtube video player, and finally app.js, which contains our app's logic. Let's dive into that!

First of all, we need to create the basic elements for our remote control. The easiest way of doing this is to create a basic web server and serve a small web app that can search Youtube, select a video, and have some play/pause controls so we don't have any good reasons to get up from the couch. Open js/app.js and type the following:

```
// Show the Developer Tools. And yes, Node-Webkit has developer tools built in!
// Uncomment it to open it automatically
//require('nw.gui').Window.get().showDevTools();

// Express is a web server, which will allow us to create a small web app with which to control the player
var express = require('express');
var app = express();
var server = require('http').Server(app);
var io = require('socket.io')(server);

// We'll be opening up our web server on Port 8080 (which doesn't require root privileges)
// You can access this server at http://127.0.0.1:8080
var serverPort = 8080;
server.listen(serverPort);

// All the static files (css, js, html) for the remote will be served using Express.
// These assets are in the /remote folder
app.use('/', express.static('remote'));
```

With those seven lines of code (not counting comments), we just got a neat web server working on port 8080. If you were paying attention to the code, you may have noticed that we required something called socket.io. This lets us use websockets with minimal effort, which means we can communicate with, from, and to our remote instantly. You can learn more about socket.io at http://socket.io/. Let's set that up next in app.js:
```
// Socket.io handles the communication between the remote and our app in real time,
// so we can instantly send commands from a computer to our remote and back
io.on('connection', function (socket) {

  // When a remote connects to the app, let it know immediately the current status of the video (play/pause)
  socket.emit('statusChange', Youtube.status);

  // This is what happens when we receive the watchVideo command (picking a video from the list)
  socket.on('watchVideo', function (video) {
    // video contains a bit of info about our video (id, title, thumbnail)
    // Order our Youtube Player to watch that video
    Youtube.watchVideo(video);
  });

  // These are playback controls. They receive the "play" and "pause" events from the remote
  socket.on('play', function () {
    Youtube.playVideo();
  });
  socket.on('pause', function () {
    Youtube.pauseVideo();
  });

});

// Notify all the remotes when the playback status changes (play/pause)
// This is done with io.emit, which sends the same message to all the remotes
Youtube.onStatusChange = function (status) {
  io.emit('statusChange', status);
};
```

That's the desktop part done! In a few dozen lines of code we got a web server running at http://127.0.0.1:8080 that can receive commands from a remote to watch a specific video, as well as handling some basic playback controls (play and pause). We are also notifying the remotes of the status of the player as soon as they connect so they can update their UI with the correct buttons (if it's playing, show the pause button and vice versa).

Now we just need to build the remote.

Building the remote control

The server is just half of the equation. We also need to add the corresponding logic on the remote control, so it's able to communicate with our app.
In remote/index.html, add the following HTML:

```
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <title>TV Remote</title>

  <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />

  <link rel="stylesheet" href="/css/normalize.css" />
  <link rel="stylesheet" href="/css/styles.css" />
</head>
<body>

  <div class="controls">
    <div class="search">
      <input id="searchQuery" type="search" value="" placeholder="Search on Youtube..." />
    </div>
    <div class="playback">
      <button class="play">&gt;</button>
      <button class="pause">||</button>
    </div>
  </div>

  <div id="results" class="video-list"></div>

  <div class="__templates" style="display:none;">
    <article class="video">
      <figure><img src="" alt="" /></figure>
      <div class="info"><h2></h2></div>
    </article>
  </div>

  <script src="/socket.io/socket.io.js"></script>
  <script src="/js/jquery-1.11.1.min.js"></script>

  <script src="/js/search.js"></script>
  <script src="/js/remote.js"></script>

</body>
</html>
```

Again, we have a few libraries: Socket.io is served automatically by our desktop app at /socket.io/socket.io.js, and it manages the communication with the server. jQuery is somehow always there, search.js manages the integration with the Youtube API (you can take a look if you want), and remote.js handles the logic for the remote.

The remote itself is pretty simple. It can look for videos on Youtube, and when we click on a video it connects with the app, telling it to play the video with socket.emit. Let's dive into remote/js/remote.js to make this thing work:

```
// First of all, connect to the server (our desktop app)
var socket = io.connect();

// Search youtube when the user stops typing. This gives us an automatic search.
var searchTimeout = null;
$('#searchQuery').on('keyup', function (event) {
  clearTimeout(searchTimeout);
  searchTimeout = setTimeout(function () {
    searchYoutube($('#searchQuery').val());
  }, 500);
});

// When we click on a video, watch it on the App
$('#results').on('click', '.video', function (event) {
  // Send an event to notify the server we want to watch this video
  socket.emit('watchVideo', $(this).data());
});

// When the server tells us that the player changed status (play/pause), alter the playback controls
socket.on('statusChange', function (status) {
  if (status === 'play') {
    $('.playback .pause').show();
    $('.playback .play').hide();
  } else if (status === 'pause' || status === 'stop') {
    $('.playback .pause').hide();
    $('.playback .play').show();
  }
});

// Notify the app when we hit the play button
$('.playback .play').on('click', function (event) {
  socket.emit('play');
});

// Notify the app when we hit the pause button
$('.playback .pause').on('click', function (event) {
  socket.emit('pause');
});
```

This is very similar to our server, except we are using socket.emit a lot more often to send commands back to our desktop app, telling it which videos to play and handling our basic play/pause controls.

The only thing left to do is make the app run. Ready? Go to the terminal again and type:

If you are on a Mac:

```
sh run.sh
```

If you are on Windows:

```
run.bat
```

If everything worked properly, you should be seeing the app, and if you open a web browser to http://127.0.0.1:8080 the remote client will open up. Search for a video, pick anything you like, and it'll play in the app. This also works if you point any other device on the same network to your computer's IP, which brings me to the next (and last) point.

Finishing touches

There is one small improvement we can make: print out the computer's IP to make it easier to connect to the app from any other device on the same Wi-Fi network (like a smartphone).
On js/app.js, add the following code to find out the IP and update our UI so it's the first thing we see when we open the app:

```
// Find the local IP
function getLocalIP(callback) {
  require('dns').lookup(require('os').hostname(), function (err, add, fam) {
    typeof callback == 'function' ? callback(add) : null;
  });
}

// To make things easier, find out the machine's ip and communicate it
getLocalIP(function (ip) {
  $('#serverInfo h1').html('Go to<br/><strong>http://' + ip + ':' + serverPort + '</strong><br/>to open the remote');
});
```

The next time you run the app, the first thing you'll see is the IP for your computer, so you just need to type that URL into your smartphone to open the remote and control the player from any computer, tablet, or smartphone (as long as they are on the same Wi-Fi network).

That's it! You can start expanding on this to improve the app. Why not open the app in fullscreen by default? Why not get rid of the horrible default frame and create your own? You can actually designate any div as a window handle with CSS (using -webkit-app-region: drag), so you can drag the window by that div and create your own custom title bar.
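As a quick sketch of that last idea (the class names are illustrative), the CSS would look something like this:

```
/* Make a custom title bar act as the window handle */
.titlebar {
  -webkit-app-region: drag;
}

/* Buttons inside the draggable region must opt out, or clicks won't reach them */
.titlebar button {
  -webkit-app-region: no-drag;
}
```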
Summary

While the app has a lot of interlocking parts, it's a good first project to find out what you can achieve with node-webkit in just a few minutes. I hope you enjoyed this post!

About the author

Roberto González is the co-founder of Aerolab, "an awesome place where we really push the barriers to create amazing, well-coded designs for the best digital products". He can be reached at @robertcode.


Advanced Techniques and Reflection

Packt
02 Dec 2014
27 min read
In this chapter, we will discuss the flexibility and reusability of your code with the help of advanced techniques in Dart. Generic programming is widely useful and is about making your code type-unaware. Using types and generics makes your code safer and allows you to detect bugs early. The debate over errors versus exceptions splits developers into two sides. Which side to choose? It doesn't matter, if you know the secret of using both. Annotation is another advanced technique used to decorate existing classes at runtime to change their behavior. Annotations can help reduce the amount of boilerplate code needed to write your applications. And last but not least, we will open Pandora's box through the mirrors of reflection.

In this chapter, we will cover the following topics:

- Generics
- Errors versus exceptions
- Annotations
- Reflection

Generics

Dart originally came with generics—a facility of generic programming. We have to tell the static analyzer the permitted type of a collection so it can inform us at compile time if we insert a wrong type of object. As a result, programs become clearer and safer to use. We will discuss how to effectively use generics and minimize the complications associated with them.

Raw types

Dart supports arrays in the form of the List class. Let's say you use a list to store data. The data that you put in the list depends on the context of your code. The list may contain different types of data at the same time, as shown in the following code:

```
// List of data
List raw = [1, "Letter", {'test':'wrong'}];
// Ordinary item
double item = 1.23;

void main() {
  // Add the item to array
  raw.add(item);
  print(raw);
}
```

In the preceding code, we assigned data of different types to the raw list. When the code executes, we get the following result:

```
[1, Letter, {test: wrong}, 1.23]
```

So what's the problem with this code? There is no problem. In our code, we intentionally used the default raw list class in order to store items of different types. But such situations are very rare. Usually, we keep data of a specific type in a list. How can we prevent inserting the wrong data type into the list? One way is to check the data type each time we read or write data to the list, as shown in the following code:

```
// Array of String data
List parts = ['wheel', 'bumper', 'engine'];
// Ordinary item
double item = 1.23;

void main() {
  if (item is String) {
    // Add the item to array
    parts.add(item);
  }
  print(parts);
}
```

Now, from the following result, we can see that the code is safer and works as expected:

```
[wheel, bumper, engine]
```

The code becomes more complicated with those extra conditional statements. What should you do when you add the wrong type in the list and it throws exceptions? What if you forget to insert an extra conditional statement? This is where generics come to the fore. Instead of writing a lot of type checks and class casts when manipulating a collection, we tell the static analyzer what type of object the list is allowed to contain.
Here is the modified code, where we specify that parts can only contain strings:

```
// Array of String data
List<String> parts = ['wheel', 'bumper', 'engine'];
// Ordinary item
double item = 1.23;

void main() {
  // Add the item to array
  parts.add(item);
  print(parts);
}
```

Now, List is a generic class with the String parameter. Dart Editor invokes the static analyzer to check the types in the code for potential problems at compile time and alert us if we try to insert a wrong type of object in our collection. This helps us make the code clearer and safer because the static analyzer checks the type of the collection at compile time. The important point is that you shouldn't use raw types. As a bonus, we can use a whole bunch of shorthand methods to organize iteration through the list of items. Bear in mind that the static analyzer only warns about potential problems and doesn't generate any errors. Dart checks the types of generic classes only in checked mode. Execution in production mode, or of code compiled to JavaScript, loses all the type information.

Using generics

Let's discuss how to make the transition to using generics in our code with some real-world examples. Assume that we have the following AssemblyLine class:

```
part of assembly.room;

// AssemblyLine.
class AssemblyLine {
  // List of items on line.
  List _items = [];

  // Add [item] to line.
  add(item) {
    _items.add(item);
  }

  // Make operation on all items in line.
  make(operation) {
    _items.forEach((item) {
      operation(item);
    });
  }
}
```

Also, we have a set of different kinds of cars, as shown in the following code:

```
part of assembly.room;

// Car
abstract class Car {
  // Color
  String color;
}

// Passenger car
class PassengerCar extends Car {
  String toString() => "Passenger Car";
}

// Truck
class Truck extends Car {
  String toString() => "Truck";
}
```

Finally, we have the following assembly.room library with a main method:

```
library assembly.room;

part 'assembly_line.dart';
part 'car.dart';

operation(car) {
  print('Operate ${car}');
}

main() {
  // Create passenger assembly line
  AssemblyLine passengerCarAssembly = new AssemblyLine();
  // We can add passenger car
  passengerCarAssembly.add(new PassengerCar());
  // We can occasionally add Truck as well
  passengerCarAssembly.add(new Truck());
  // Operate
  passengerCarAssembly.make(operation);
}
```

In the preceding example, we were able to add the occasional truck to the assembly line for passenger cars without any problem, to get the following result:

```
Operate Passenger Car
Operate Truck
```

This seems a bit farfetched since, in real life, we can't assemble passenger cars and trucks on the same assembly line. So to make your solution safer, you need to make the AssemblyLine type generic.

Generic types

In general, it's not difficult to make a type generic. Consider the following example of the AssemblyLine class:

```
part of assembly.room;

// AssemblyLine.
class AssemblyLine<E extends Car> {
  // List of items on line.
  List<E> _items = [];

  // Add [item] to line.
  add(E item) {
    _items.insert(0, item);
  }

  // Make operation on all items in line.
  make(operation) {
    _items.forEach((E item) {
      operation(item);
    });
  }
}
```
In the preceding code, we added one type parameter, E, in the declaration of the AssemblyLine class. In this case, the type parameter requires the original one to be a subtype of Car. This allows the AssemblyLine implementation to take advantage of the Car type without the need for casting. The type parameter E is known as a bounded type parameter. Any changes to the assembly.room library will look like this:

```
library assembly.room;

part 'assembly_line.dart';
part 'car.dart';

operation(car) {
  print('Operate ${car}');
}

main() {
  // Create passenger assembly line
  AssemblyLine<PassengerCar> passengerCarAssembly =
      new AssemblyLine<PassengerCar>();
  // We can add passenger car
  passengerCarAssembly.add(new PassengerCar());
  // We can occasionally add truck as well
  passengerCarAssembly.add(new Truck());
  // Operate
  passengerCarAssembly.make(operation);
}
```

The static analyzer alerts us at compile time if we try to insert the Truck argument in the assembly line for passenger cars. After we fix the code in line 17, all looks good. Our assembly line is now safe. But if you look at the operation function, it is totally different for passenger cars than it is for trucks; this means that we must make the operation generic as well. The static analyzer doesn't show any warnings and, even worse, we cannot make the operation generic directly because Dart doesn't support generics for functions. But there is a solution.

Generic functions

Functions, like all other data types in Dart, are objects, and they have the data type Function. In the following code, we will create an Operation class as an implementation of Function and then apply generics to it as usual:

```
part of assembly.room;

// Operation for specific type of car
class Operation<E extends Car> implements Function {
  // Operation name
  final String name;

  // Create new operation with [name]
  Operation(this.name);

  // We call our function here
  call(E car) {
    print('Make ${name} on ${car}');
  }
}
```

The gem in our class is the call method. As Operation implements Function and has a call method, we can pass an instance of our class as a function in the make method of the assembly line, as shown in the following code:

```
library assembly.room;

part 'assembly.dart';
part 'car.dart';
part 'operation.dart';

main() {
  // Paint operation for passenger cars
  Operation<PassengerCar> paint = new Operation<PassengerCar>("paint");
  // Paint operation for trucks
  Operation<Truck> paintTruck = new Operation<Truck>("paint");
  // Create passenger assembly line
  Assembly<PassengerCar> passengerCarAssembly = new Assembly<PassengerCar>();
  // We can add passenger car
  passengerCarAssembly.add(new PassengerCar());
  // Operate only with passenger cars
  passengerCarAssembly.make(paint);
  // Operate with mistake
  passengerCarAssembly.make(paintTruck);
}
```
In the preceding code, we created the paint operation to paint the passenger cars and the paintTruck operation to paint trucks. Later, we created the passengerCarAssembly line and added a new passenger car to the line via the add method. We can run the paint operation on the passenger car by calling the make method of the passengerCarAssembly line. Next, we intentionally made a mistake and tried to paint the truck on the assembly line for passenger cars, which resulted in the following runtime exception:

```
Make paint on Passenger Car
Unhandled exception:
type 'PassengerCar' is not a subtype of type 'Truck' of 'car'.
#0  Operation.call (…/generics_operation.dart:10:10)
#1  Assembly.make.<anonymous closure> (…/generics_assembly.dart:16:15)
#2  List.forEach (dart:core-patch/growable_array.dart:240)
#3  Assembly.make (…/generics_assembly.dart:15:18)
#4  main (…/generics_assembly_and_operation_room.dart:20:28)
…
```

This trick with the call method of the Function type helps you make all the aspects of your assembly line generic. We've seen how to make a class and a function generic to make the code of our application safer and cleaner. The documentation generator automatically adds information about generics to the generated documentation pages.

To understand the differences between errors and exceptions, let's move on to the next topic.

Errors versus exceptions

Runtime faults can and do occur during the execution of a Dart program. We can split all faults into two types:

- Errors
- Exceptions

There is always some confusion when deciding which kind of fault to use, but you will be given several general rules to make your life a bit easier. All your decisions will be based on the simple principle of recoverability. If your code generates a fault that can reasonably be recovered from, use exceptions. Conversely, if the code generates a fault that cannot be recovered from, or where continuing the execution would do more harm, use errors. Let's take a look at each of them in detail.

Errors

An error occurs if your code has programming errors that should be fixed by the programmer. Let's take a look at the following main function:

```
main() {
  // Fixed length list
  List list = new List(5);
  // Fill list with values
  for (int i = 0; i < 10; i++) {
    list[i] = i;
  }
  print('Result is ${list}');
}
```

We created an instance of the List class with a fixed length and then tried to fill it with values in a loop with more items than the fixed size of the List class. Executing the preceding code generates RangeError. This error occurred because we performed a precondition violation in our code when we tried to insert a value in the list at an index outside the valid range. Mostly, these types of failures occur when the contract between the code and the calling API is broken. In our case, RangeError indicates that the precondition was violated. There are a whole bunch of errors in the Dart SDK, such as CastError, RangeError, NoSuchMethodError, UnsupportedError, OutOfMemoryError, and StackOverflowError. Also, there are many others that you will find in the errors.dart file as a part of the dart.core library. All error classes inherit from the Error class and can return stack trace information to help find the bug quickly. In the preceding example, the error happened in line 6 of the main method in the range_error.dart file. We can catch errors in our code, but because the code was badly implemented, we should rather fix it.
Exceptions

Exceptions, unlike errors, are meant to be caught and usually carry information about the failure, but they don't include the stack trace information. Exceptions happen in recoverable situations and don't stop the execution of a program. You can throw any non-null object as an exception, but it is better to create a new exception class that implements the marker interface Exception and overrides the toString method of the Object class in order to deliver additional information. An exception should be handled in a catch clause or made to propagate outwards. The following is an example of code without the use of exceptions:

import 'dart:io';

main() {
  // File URI
  Uri uri = new Uri.file("test.json");
  // Check uri
  if (uri != null) {
    // Create the file
    File file = new File.fromUri(uri);
    // Check whether file exists
    if (file.existsSync()) {
      // Open file
      RandomAccessFile random = file.openSync();
      // Check random
      if (random != null) {
        // Read file
        List<int> notReadyContent = random.readSync(random.lengthSync());
        // Check not ready content
        if (notReadyContent != null) {
          // Convert to String
          String content = new String.fromCharCodes(notReadyContent);
          // Print results
          print('File content: ${content}');
        }
        // Close file
        random.closeSync();
      }
    } else {
      print("File doesn't exist");
    }
  }
}

Here is the result of this code execution:

File content: [{ name: Test, length: 100 }]

As you can see, the error detection and handling leads to confusing spaghetti code. Worse yet, the logical flow of the code has been lost, making it difficult to read and understand. So, we transform our code to use exceptions as follows:

import 'dart:io';

main() {
  RandomAccessFile random;
  try {
    // File URI
    Uri uri = new Uri.file("test.json");
    // Create the file
    File file = new File.fromUri(uri);
    // Open file
    random = file.openSync();
    // Read file
    List<int> notReadyContent = random.readSync(random.lengthSync());
    // Convert to String
    String content = new String.fromCharCodes(notReadyContent);
    // Print results
    print('File content: ${content}');
  } on ArgumentError catch(ex) {
    print('Argument error exception');
  } on UnsupportedError catch(ex) {
    print('URI cannot reference a file');
  } on FileSystemException catch(ex) {
    print("File doesn't exist or isn't accessible");
  } finally {
    try {
      random.closeSync();
    } on FileSystemException catch(ex) {
      print("File can't be closed");
    }
  }
}

The code in the finally statement will always be executed, whether or not an exception occurred, to close the random file. Finally, we have a clear separation of exception handling from the working code and we can now propagate uncaught exceptions outwards in the call stack. The suggestions based on recoverability after exceptions are fragile. In our example, we caught ArgumentError and UnsupportedError together with FileSystemException. This was only done to show that errors and exceptions have the same nature and can be caught at any time. So, what is the truth?
While developing my own framework, I used the following principle: If I believe the code cannot recover, I use an error, and if I think it can recover, I use an exception. Let's discuss another advanced technique that has become very popular and that helps you change the behavior of the code without making any changes to it.

Annotations

An annotation is metadata—data about data. An annotation is a way to keep additional information about the code in the code itself. An annotation can have parameter values to pass specific information about an annotated member. An annotation without parameters is called a marker annotation. The purpose of a marker annotation is just to mark the annotated member. Dart annotations are constant expressions beginning with the @ character. We can apply annotations to all the members of the Dart language, excluding comments and annotations themselves. Annotations can be:

• Interpreted statically by parsing the program and evaluating the constants via a suitable interpreter
• Retrieved via reflection at runtime by a framework

The documentation generator does not add annotations to the generated documentation pages automatically, so the information about annotations must be specified separately in comments.

Built-in annotations

There are several built-in annotations defined in the Dart SDK interpreted by the static analyzer. Let's take a look at them.

Deprecated

The first built-in annotation is deprecated, which is very useful when you need to mark a function, variable, a method of a class, or even a whole class as deprecated, meaning that it should no longer be used. The static analyzer generates a warning whenever a marked statement is used in code, as shown in the following screenshot:

Override

Another built-in annotation is override. This annotation informs the static analyzer that any instance member, such as a method, getter, or setter, is meant to override the member of a superclass with the same name. The class instance variables as well as static members never override each other. If an instance member marked with override fails to correctly override a member in one of its superclasses, the static analyzer generates the following warning:

Proxy

The last annotation is proxy. Proxy is a well-known pattern used when we need to call a real class's methods through the instance of another class. Let's assume that we have the following Car class:

part of cars;

// Class Car
class Car {
  int _speed = 0;
  // The car speed
  int get speed => _speed;
  // Accelerate car
  accelerate(acc) {
    _speed += acc;
  }
}

To drive the car instance, we must accelerate it as follows:

library cars;

part 'car.dart';

main() {
  Car car = new Car();
  car.accelerate(10);
  print('Car speed is ${car.speed}');
}

We now run our example to get the following result:

Car speed is 10

In practice, we may have a lot of different car types and would want to test all of them. To help us with this, we created the CarProxy class by passing an instance of Car in the proxy's constructor.
From now on, we can invoke the car's methods through the proxy and save the results in a log as follows:

part of cars;

// Proxy to [Car]
class CarProxy {
  final Car _car;
  // Create new proxy to [car]
  CarProxy(this._car);

  @override
  noSuchMethod(Invocation invocation) {
    if (invocation.isMethod &&
        invocation.memberName == const Symbol('accelerate')) {
      // Get acceleration value
      var acc = invocation.positionalArguments[0];
      // Log info
      print("LOG: Accelerate car with ${acc}");
      // Call original method
      _car.accelerate(acc);
    } else if (invocation.isGetter &&
               invocation.memberName == const Symbol('speed')) {
      var speed = _car.speed;
      // Log info
      print("LOG: The car speed ${speed}");
      return speed;
    }
    return super.noSuchMethod(invocation);
  }
}

As you can see, CarProxy does not implement the Car interface. All the magic happens inside noSuchMethod, which is overridden from the Object class. In this method, we compare the invoked member name with accelerate and speed. If the comparison results match one of our conditions, we log the information and then call the original method on the real object. Now let's make changes to the main method, as shown in the following screenshot:

Here, the static analyzer alerts you with a warning because the CarProxy class doesn't have the accelerate method and the speed getter. You must add the proxy annotation to the definition of the CarProxy class to suppress the static analyzer warning, as shown in the following screenshot:

Now with all the warnings gone, we can run our example to get the following successful result:

Car speed is 10
LOG: Accelerate car with 10
LOG: The car speed 20
Car speed through proxy is 20

Custom annotations

Let's say we want to create a test framework. For this, we will need several custom annotations to mark methods in a testable class to be included in a test case. The following code has two custom annotations. In cases where we need only a marker annotation, we use a constant string test. In the event that we need to pass parameters to an annotation, we will use a Test class with a constant constructor, as shown in the following code:

library test;

// Marker annotation test
const String test = "test";

// Test annotation
class Test {
  // Should test be ignored?
  final bool include;
  // Default constant constructor
  const Test({this.include:true});

  String toString() => 'test';
}

The Test class has the final include variable initialized with a default value of true. To exclude a method from tests, we should pass false as a parameter for the annotation, as shown in the following code:

library test.case;

import 'test.dart';
import 'engine.dart';

// Test case of Engine
class TestCase {
  Engine engine = new Engine();
  // Start engine
  @test
  testStart() {
    engine.start();
    if (!engine.started) throw new Exception("Engine must start");
  }
  // Stop engine
  @Test()
  testStop() {
    engine.stop();
    if (engine.started) throw new Exception("Engine must stop");
  }
  // Warm up engine
  @Test(include: false)
  testWarmUp() {
    // ...
  }
}
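The Engine class imported above is not shown in this chapter; a minimal sketch that would satisfy this test case might look like the following, where the field and method names are inferred from how TestCase uses them:

library engine;

// A minimal engine with just enough state for the test case above.
class Engine {
  bool _started = false;
  // Whether the engine is currently running
  bool get started => _started;
  // Start engine
  start() { _started = true; }
  // Stop engine
  stop() { _started = false; }
}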
In this scenario, we test the Engine class via the invocation of the testStart and testStop methods of TestCase, while avoiding the invocation of the testWarmUp method. So what's next? How can we really use annotations? Annotations are useful with reflection at runtime, so now it's time to discuss how to make annotations available through reflection.

Reflection

Introspection is the ability of a program to discover and use its own structure. Reflection is the ability of a program to use introspection to examine and modify the structure and behavior of the program at runtime. You can use reflection to dynamically create an instance of a type or get the type from an existing object and invoke its methods or access its fields and properties. This makes your code more dynamic and can be written against known interfaces so that the actual classes can be instantiated using reflection. Another purpose of reflection is to create development and debugging tools, and it is also used for meta-programming. There are two different approaches to implementing reflection:

• The first approach is that the information about reflection is tightly integrated with the language and exists as part of the program's structure. Access to program-based reflection is available by a property or method.
• The second approach is based on the separation of reflection information and program structure. Reflection information is separated inside a distinct mirror object that binds to the real program member.

Dart reflection follows the second approach with Mirrors. You can find more information about the concept of Mirrors in the original paper written by Gilad Bracha at http://bracha.org/mirrors.pdf.
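To make "a distinct mirror object" concrete before we build anything serious, here is a minimal sketch that reuses the hypothetical Engine class sketched earlier:

import 'dart:mirrors';
import 'engine.dart'; // The minimal Engine sketched in the previous section

main() {
  Engine engine = new Engine();
  // reflect() returns a separate InstanceMirror bound to [engine]; we examine
  // the object through the mirror rather than through the object itself.
  InstanceMirror mirror = reflect(engine);
  print(mirror.type.simpleName);                           // Symbol("Engine")
  print(mirror.getField(new Symbol('started')).reflectee); // false
}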
Then, we call the getAnnotatedMethods method and specify the name of the annotation that we are interested in. This will return a list of MethodMirror that will contain methods annotated with specified parameters. One by one, we step through all the instance members and call the private_isMethodAnnotated method. If the result of the execution of the _isMethodAnnotated method is successful, then we add the discovering method to theresult list of foundMethodMirror's, as shown in the following code: // Return list of method mirrors assigned by [annotation]. List<MethodMirror> getAnnotatedMethods(String annotation) { List<MethodMirror> result = []; // Get all methods _classMirror.instanceMembers.forEach( (Symbol name, MethodMirror method) { if (_isMethodAnnotated(method, annotation)) { result.add(method); } }); return result; } The first argument of _isMethodAnnotated has the metadata property that keeps a list of annotations. The second argument of this method is the annotation name that we would like to find. The inst variable holds a reference to the original object in the reflectee property. We pass through all the method's metadata to exclude some of them annotated with the Test class and marked with include equals false. All other method's annotations should be compared to the annotation name, as follows: // Check is [method] annotated with [annotation]. bool _isMethodAnnotated(MethodMirror method, String annotation) { return method.metadata.any( (InstanceMirror inst) { // For [Test] class we check include condition if (inst.reflectee is Test && !(inst.reflectee as Test).include) { // Test must be exclude return false; } // Literal compare of reflectee and annotation return inst.reflectee.toString() == annotation; }); } } Your browser does not support the canvas tag! [ 21 ] // DrawPage21(); // Page 22 Advanced Techniques and Reflection Your browser does not support the canvas tag! Dart mirrors have the following three main functions for introspection: • reflect: This function is used to introspect an instance that is passed as a parameter and saves the result in InstanceMirror or ClosureMirror. For the first one, we can call methods, functions, or get and set fields of the reflectee property. For the second one, we can execute the closure. • reflectClass: This function reflects the class declaration and returns ClassMirror. It holds full information about the type passed as a parameter. • reflectType: This function returns TypeMirror and reflects a class, typedef, function type, or type variable. Let's take a look at the main code: library test.framework; import 'type_inspector.dart'; import 'test_case.dart'; main() { TypeInspector inspector = new TypeInspector(TestCase); List methods = inspector.getAnnotatedMethods('test'); print(methods); } Firstly, we created an instance of our TypeInspector class and passed the testable class, in our case, TestCase. Then, we called getAnnotatedMethods from inspector with the name of the annotation, test. Here is the result of the execution: [MethodMirror on 'testStart', MethodMirror on 'testStop'] The inspector method found the methods testStart and testStop and ignored testWarmUp of the TestCase class as per our requirements. Reflection in action We have seen how introspection helps us find methods marked with annotations. Now we need to call each marked method to run the actual tests. We will do that using reflection. 
Let's make a MethodInvoker class to show reflection in action:

library executor;

import 'dart:mirrors';

class MethodInvoker implements Function {
  // Invoke the method
  call(MethodMirror method) {
    ClassMirror classMirror = method.owner as ClassMirror;
    // Create an instance of class
    InstanceMirror inst = classMirror.newInstance(new Symbol(''), []);
    // Invoke method of instance
    inst.invoke(method.simpleName, []);
  }
}

As the MethodInvoker class implements the Function interface and has the call method, we can call an instance of it as if it were a function. In order to call the method, we must first instantiate a class. Each MethodMirror method has the owner property, which points to the owner object in the hierarchy. The owner of MethodMirror in our case is ClassMirror. In the preceding code, we created a new instance of the class with an empty constructor and then we invoked the method of inst by name. In both cases, the second parameter was an empty list of method parameters. Now, we introduce MethodInvoker to the main code. In addition to TypeInspector, we create the instance of MethodInvoker. One by one, we step through the methods and send each of them to invoker. We print Success only if no exceptions occur. To prevent the program from terminating if any of the tests failed, we wrap invoker in the try-catch block, as shown in the following code:

library test.framework;

import 'type_inspector.dart';
import 'method_invoker.dart';
import 'engine_case.dart';

main() {
  TypeInspector inspector = new TypeInspector(TestCase);
  List methods = inspector.getAnnotatedMethods(test);
  MethodInvoker invoker = new MethodInvoker();
  methods.forEach((method) {
    try {
      invoker(method);
      print('Success ${method.simpleName}');
    } on Exception catch(ex) {
      print(ex);
    } on Error catch(ex) {
      print("$ex : ${ex.stackTrace}");
    }
  });
}

As a result, we will get the following output:

Success Symbol("testStart")
Success Symbol("testStop")

To prove that the program will not terminate in the case of an exception in the tests, we will change the code in TestCase to break it, as follows:

// Start engine
@test
testStart() {
  engine.start();
  // !!! Broken on purpose
  if (engine.started) throw new Exception("Engine must start");
}

When we run the program, the code for testStart fails, but the program continues executing until all the tests are finished, as shown in the following output:

Exception: Engine must start
Success Symbol("testStop")

And now our test library is ready for use. It uses introspection and reflection to observe and invoke marked methods of any class.

Summary

This concludes our mastering of advanced techniques in Dart. You now know that generics produce safer and clearer code, annotations with reflection help execute code dynamically, and errors and exceptions play an important role in finding bugs that are detected at runtime. In the next chapter, we will talk about the creation of objects and how and when to create them using best practices from the programming world.
article-image-lc-process-article
Packt
02 Dec 2014
27 min read
Save for later

LC Process :Article

In this chapter, we will discuss the flexibility and reusability of your code with the help of advanced techniques in Dart. Generic programming is widely useful and is about making your code type-unaware. Using types and generics makes your code safer and allows you to detect bugs early. The debate over errors versus exceptions splits developers into two sides. Which side to choose? It doesn't matter if you know the secret of using both. Annotation is another advanced technique used to decorate existing classes at runtime to change their behavior. Annotations can help reduce the amount of boilerplate code to write your applications. And last but not least, we will open Pandora's box through Mirrors of reflection. In this chapter, we will cover the following topics: Generics Errors versus exceptions Annotations Reflection Generics Dart originally came with generics—a facility of generic programming. We have to tell the static analyzer the permitted type of a collection so it can inform us at compile time if we insert a wrong type of object. As a result, programs become clearer and safer to use. We will discuss how to effectively use generics and minimize the complications associated with them. Raw types Dart supports arrays in the form of the List class. Let's say you use a list to store data. The data that you put in the list depends on the context of your code. The list may contain different types of data at the same time, as shown in the following code: // List of data List raw = [1, "Letter", {'test':'wrong'}]; // Ordinary item double item = 1.23;   void main() { // Add the item to array raw.add(item); print(raw); } In the preceding code, we assigned data of different types to the raw list. When the code executes, we get the following result: [1, Letter, {test: wrong}, 1.23] So what's the problem with this code? There is no problem. In our code, we intentionally used the default raw list class in order to store items of different types. But such situations are very rare. Usually, we keep data of a specific type in a list. How can we prevent inserting the wrong data type into the list? One way is to check the data type each time we read or write data to the list, as shown in the following code: // Array of String data List parts = ['wheel', 'bumper', 'engine']; // Ordinary item double item = 1.23;   void main() { if (item is String) {    // Add the item to array    parts.add(item); } print(parts); } Now, from the following result, we can see that the code is safer and works as expected: [wheel, bumper, engine] The code becomes more complicated with those extra conditional statements. What should you do when you add the wrong type in the list and it throws exceptions? What if you forget to insert an extra conditional statement? This is where generics come to the fore. Instead of writing a lot of type checks and class casts when manipulating a collection, we tell the static analyzer what type of object the list is allowed to contain. Here is the modified code, where we specify that parts can only contain strings: // Array of String data List<String> parts = ['wheel', 'bumper', 'engine']; // Ordinary item double item = 1.23;   void main() { // Add the item to array parts.add(item); print(parts); } Now, List is a generic class with the String parameter. 
Dart Editor invokes the static analyzer to check the types in the code for potential problems at compile time and alert us if we try to insert a wrong type of object in our collection, as shown in the following screenshot:   This helps us make the code clearer and safer because the static analyzer checks the type of the collection at compile time. The important point is that you shouldn't use raw types. As a bonus, we can use a whole bunch of shorthand methods to organize iteration through the list of items with safer casts. Bear in mind that the static analyzer only warns about potential problems and doesn't generate any errors.     Dart checks the types of generic classes only in the check mode. Execution in the production mode or code compiled to JavaScript loses all the type information.     Using generics Let's discuss how to make the transition to using generics in our code with some real-world examples. Assume that we have the following AssemblyLine class: part of assembly.room;   // AssemblyLine. class AssemblyLine { // List of items on line. List _items = []; // Add [item] to line. add(item) {    _items.add(item); }   // Make operation on all items in line. make(operation) {    _items.forEach((item) {      operation(item);    }); } } Also, we have a set of different kinds of cars, as shown in the following code: part of assembly.room;   // Car abstract class Car { // Color String color; }   // Passenger car class PassengerCar extends Car { String toString() => "Passenger Car"; }   // Truck class Truck extends Car { String toString() => "Truck"; } Finally, we have the following assembly.room library with a main method: library assembly.room;   part 'assembly_line.dart'; part 'car.dart';   operation(car) { print('Operate ${car}'); }   main() { // Create passenger assembly line AssemblyLine passengerCarAssembly = new AssemblyLine(); // We can add passenger car passengerCarAssembly.add(new PassengerCar()); // We can occasionally add Truck as well passengerCarAssembly.add(new Truck()); // Operate passengerCarAssembly.make(operation); } In the preceding example, we were able to add the occasional truck in the assembly line for passenger cars without any problem to get the following result: Operate Passenger Car Operate Truck This seems a bit far-fetched since in real life, we can't assemble passenger cars and trucks in the same assembly line. So to make your solution safer, you need to make the AssemblyLine type generic. Generic types In general, it's not difficult to make a type generic. Consider the following example of the AssemblyLine class: part of assembly.room;   // AssemblyLine. class AssemblyLine <E extends Car> { // List of items on line. List<E> _items = []; // Add [item] to line. add(E item) {    _items.insert(0, item); } // Make operation on all items in line. make(operation) {    _items.forEach((E item) {      operation(item);    }); } } In the preceding code, we added one type parameter, E, in the declaration of the AssemblyLine class. In this case, the type parameter requires the original one to be a subtype of Car. This allows the AssemblyLine implementation to take advantage of Car without the need for casting a class. The type parameter E is known as a bounded type parameter. 
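The bound is exactly what lets the implementation use Car members on E without a cast. A minimal sketch of that idea follows; the ReportingAssemblyLine class and its report method are invented for illustration and are not part of the assembly example:

part of assembly.room;

// Because E is bounded by Car, members declared on Car, such as [color],
// are available on the items without any cast.
class ReportingAssemblyLine<E extends Car> {
  List<E> _items = [];

  add(E item) { _items.add(item); }

  report() {
    _items.forEach((E item) => print('${item} is ${item.color}'));
  }
}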
Any changes to the assembly.room library will look like this: library assembly.room;   part 'assembly_line.dart'; part 'car.dart';   operation(car) { print('Operate ${car}'); }   main() { // Create passenger assembly line AssemblyLine<PassengerCar> passengerCarAssembly =      new AssemblyLine<PassengerCar>(); // We can add passenger car passengerCarAssembly.add(new PassengerCar()); // We can occasionally add truck as well passengerCarAssembly.add(new Truck()); // Operate passengerCarAssembly.make(operation); } The static analyzer alerts us at compile time if we try to insert the Truck argument in the assembly line for passenger cars, as shown in the following screenshot:   After we fix the code in line 17, all looks good. Our assembly line is now safe. But if you look at the operation function, it is totally different for passenger cars than it is for trucks; this means that we must make the operation generic as well. The static analyzer doesn't show any warnings and, even worse, we cannot make the operation generic directly because Dart doesn't support generics for functions. But there is a solution. Generic functions Functions, like all other data types in Dart, are objects, and they have the data type Function. In the following code, we will create an Operation class as an implementation of Function and then apply generics to it as usual: part of assembly.room;   // Operation for specific type of car class Operation<E extends Car> implements Function { // Operation name final String name; // Create new operation with [name] Operation(this.name); // We call our function here call(E car) {    print('Make ${name} on ${car}'); } } The gem in our class is the call method. As Operation implements Function and has a call method, we can pass an instance of our class as a function in the make method of the assembly line, as shown in the following code: library assembly.room;   part 'assembly.dart'; part 'car.dart'; part 'operation.dart';   main() { // Paint operation for passenger car Operation<PassengerCar> paint = new      Operation<PassengerCar>("paint"); // Paint operation for Trucks Operation<Truck> paintTruck = new Operation<Truck>("paint"); // Create passenger assembly line Assembly<PassengerCar> passengerCarAssembly =    new Assembly<PassengerCar>(); // We can add passenger car passengerCarAssembly.add(new PassengerCar()); // Operate only with passenger car passengerCarAssembly.make(paint); // Operate with mistake passengerCarAssembly.make(paintTruck); } In the preceding code, we created the paint operation to paint the passenger cars and the paintTruck operation to paint trucks. Later, we created the passengerCarAssembly line and added a new passenger car to the line via the add method. We can run the paint operation on the passenger car by calling the make method of the passengerCarAssembly line. Next, we intentionally made a mistake and tried to paint the truck on the assembly line for passenger cars, which resulted in the following runtime exception: Make paint on Passenger Car Unhandled exception: type 'PassengerCar' is not a subtype of type 'Truck' of 'car'. #0 Operation.call (…/generics_operation.dart:10:10) #1 Assembly.make.<anonymous   closure>(…/generics_assembly.dart:16:15) #2 List.forEach (dart:core-patch/growable_array.dart:240) #3 Assembly.make (…/generics_assembly.dart:15:18) #4 main (…/generics_assembly_and_operation_room.dart:20:28) … This trick with the call method of the Function type helps you make all the aspects of your assembly line generic. 
We've seen how to make a class and a function generic to make the code of our application safer and cleaner.     The documentation generator automatically adds information about generics in the generated documentation pages.     To understand the differences between errors and exceptions, let's move on to the next topic. Errors versus exceptions Runtime faults can and do occur during the execution of a Dart program. We can split all faults into two types: Errors Exceptions There is always some confusion on deciding when to use each kind of fault, but you will be given several general rules to make your life a bit easier. All your decisions will be based on the simple principle of recoverability. If your code generates a fault that can reasonably be recovered from, use exceptions. Conversely, if the code generates a fault that cannot be recovered from, or where continuing the execution would do more harm, use errors. Let's take a look at each of them in detail. Errors An error occurs if your code has programming errors that should be fixed by the programmer. Let's take a look at the following main function: main() { // Fixed length list List list = new List(5); // Fill list with values for (int i = 0; i < 10; i++) {    list[i] = i; } print('Result is ${list}'); } We created an instance of the List class with a fixed length and then tried to fill it with values in a loop with more items than the fixed size of the List class. Executing the preceding code generates RangeError, as shown in the following screenshot:   This error occurred because we performed a precondition violation in our code when we tried to insert a value in the list at an index outside the valid range. Mostly, these types of failures occur when the contract between the code and the calling API is broken. In our case, RangeError indicates that the precondition was violated. There are a whole bunch of errors in the Dart SDK such as CastError, RangeError, NoSuchMethodError, UnsupportedError, OutOfMemoryError, and StackOverflowError. Also, there are many others that you will find in the errors.dart file as a part of the dart.core library. All error classes inherit from the Error class and can return stack trace information to help find the bug quickly. In the preceding example, the error happened in line 6 of the main method in the range_error.dart file. We can catch errors in our code, but because the code was badly implemented, we should rather fix it. Errors are not designed to be caught, but we can throw them if a critical situation occurs. A Dart program should usually terminate when an error occurs. Exceptions Exceptions, unlike errors, are meant to be caught and usually carry information about the failure, but they don't include the stack trace information. Exceptions happen in recoverable situations and don't stop the execution of a program. You can throw any non-null object as an exception, but it is better to create a new exception class that implements the abstract class Exception and overrides the toString method of the Object class in order to deliver additional information. An exception should be handled in a catch clause or made to propagate outwards. 
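For instance, a custom exception class along the lines the paragraph above describes might look like this minimal sketch; EngineException is an invented name, not a class from this chapter:

// Custom exception that carries information about the failure.
class EngineException implements Exception {
  final String message;
  EngineException(this.message);
  String toString() => 'EngineException: ${message}';
}

main() {
  try {
    throw new EngineException('Engine must start');
  } on EngineException catch (ex) {
    // The situation is recoverable: report it and keep running
    print(ex); // EngineException: Engine must start
  }
}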
The following is an example of code without the use of exceptions: import 'dart:io';   main() { // File URI Uri uri = new Uri.file("test.json"); // Check uri if (uri != null) {    // Create the file    File file = new File.fromUri(uri);    // Check whether file exists    if (file.existsSync()) {      // Open file      RandomAccessFile random = file.openSync();      // Check random      if (random != null) {        // Read file        List<int> notReadyContent =          random.readSync(random.lengthSync());         // Check not ready content        if (notReadyContent != null) {          // Convert to String          String content = new            String.fromCharCodes(notReadyContent);          // Print results          print('File content: ${content}');        }        // Close file        random.closeSync();      }    } else {      print("File doesn't exist");    } } } Here is the result of this code execution: File content: [{ name: Test, length: 100 }] As you can see, the error detection and handling leads to confusing spaghetti code. Worse yet, the logical flow of the code has been lost, making it difficult to read and understand. So, we transform our code to use exceptions as follows: import 'dart:io';   main() { RandomAccessFile random; try {    // File URI    Uri uri = new Uri.file("test.json");    // Create the file    File file = new File.fromUri(uri);    // Open file    random = file.openSync();    // Read file    List<int> notReadyContent =     random.readSync(random.lengthSync());    // Convert to String    String content = new String.fromCharCodes(notReadyContent);    // Print results    print('File content: ${content}'); } on ArgumentError catch(ex) {    print('Argument error exception'); } on UnsupportedError catch(ex) {    print('URI cannot reference a file'); } on FileSystemException catch(ex) {    print("File doesn't exist or isn't accessible"); } finally {    try {      random.closeSync();    } on FileSystemException catch(ex) {      print("File can't be closed");    } } } The code in the finally statement will always be executed, whether or not an exception occurred, to close the random file. Finally, we have a clear separation of exception handling from the working code and we can now propagate uncaught exceptions outwards in the call stack. The suggestions based on recoverability after exceptions are fragile. In our example, we caught ArgumentError and UnsupportedError together with FileSystemException. This was only done to show that errors and exceptions have the same nature and can be caught at any time. So, what is the truth? While developing my own framework, I used the following principle: If I believe the code cannot recover, I use an error, and if I think it can recover, I use an exception. Let's discuss another advanced technique that has become very popular and that helps you change the behavior of the code without making any changes to it. Annotations An annotation is metadata—data about data. An annotation is a way to keep additional information about the code in the code itself. An annotation can have parameter values to pass specific information about an annotated member. An annotation without parameters is called a marker annotation. The purpose of a marker annotation is just to mark the annotated member. Dart annotations are constant expressions beginning with the @ character. We can apply annotations to all the members of the Dart language, excluding comments and annotations themselves. 
Annotations can be: Interpreted statically by parsing the program and evaluating the constants via a suitable interpreter Retrieved via reflection at runtime by a framework     The documentation generator does not add annotations to the generated documentation pages automatically, so the information about annotations must be specified separately in comments.     Built-in annotations There are several built-in annotations defined in the Dart SDK interpreted by the static analyzer. Let's take a look at them. Deprecated The first built-in annotation is deprecated, which is very useful when you need to mark a function, variable, a method of a class, or even a whole class as deprecated, meaning that it should no longer be used. The static analyzer generates a warning whenever a marked statement is used in code, as shown in the following screenshot:   Override Another built-in annotation is override. This annotation informs the static analyzer that any instance member, such as a method, getter, or setter, is meant to override the member of a superclass with the same name. The class instance variables as well as static members never override each other. If an instance member marked with override fails to correctly override a member in one of its superclasses, the static analyzer generates the following warning:   Proxy The last annotation is proxy. Proxy is a well-known pattern used when we need to call a real class's methods through the instance of another class. Let's assume that we have the following Car class: part of cars;   // Class Car class Car { int _speed = 0; // The car speed int get speed => _speed; // Accelerate car accelerate(acc) {    _speed += acc; } } To drive the car instance, we must accelerate it as follows: library cars;   part 'car.dart';   main() { Car car = new Car(); car.accelerate(10); print('Car speed is ${car.speed}'); } We now run our example to get the following result: Car speed is 10 In practice, we may have a lot of different car types and would want to test all of them. To help us with this, we created the CarProxy class by passing an instance of Car in the proxy's constructor. From now on, we can invoke the car's methods through the proxy and save the results in a log as follows: part of cars;   // Proxy to [Car] class CarProxy { final Car _car; // Create new proxy to [car] CarProxy(this._car); @override noSuchMethod(Invocation invocation) {    if (invocation.isMethod &&        invocation.memberName == const Symbol('accelerate')) {      // Get acceleration value      var acc = invocation.positionalArguments[0];      // Log info      print("LOG: Accelerate car with ${acc}");      // Call original method      _car.accelerate(acc);    } else if (invocation.isGetter &&                invocation.memberName == const Symbol('speed')) {      var speed = _car.speed;      // Log info      print("LOG: The car speed ${speed}");      return speed;    }    return super.noSuchMethod(invocation); } } As you can see, CarProxy does not implement the Car interface. All the magic happens inside noSuchMethod, which is overridden from the Object class. In this method, we compare the invoked member name with accelerate and speed. If the comparison results match one of our conditions, we log the information and then call the original method on the real object. Now let's make changes to the main method, as shown in the following screenshot:   Here, the static analyzer alerts you with a warning because the CarProxy class doesn't have the accelerate method and the speed getter. 
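The screenshot with the modified main method is not reproduced here; judging by the program output shown a little further on, it presumably looks something like this sketch (the part file name car_proxy.dart is an assumption):

library cars;

part 'car.dart';
part 'car_proxy.dart'; // Assumed file name for the CarProxy class

main() {
  Car car = new Car();
  car.accelerate(10);
  print('Car speed is ${car.speed}');
  // Drive the same car through the proxy; both calls below are logged
  CarProxy proxy = new CarProxy(car);
  proxy.accelerate(10);
  print('Car speed through proxy is ${proxy.speed}');
}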
You must add the proxy annotation to the definition of the CarProxy class to suppress the static analyzer warning, as shown in the following screenshot:   Now with all the warnings gone, we can run our example to get the following successful result: Car speed is 10 LOG: Accelerate car with 10 LOG: The car speed 20 Car speed through proxy is 20 Custom annotations Let's say we want to create a test framework. For this, we will need several custom annotations to mark methods in a testable class to be included in a test case. The following code has two custom annotations. In cases where we need only a marker annotation, we use a constant string test. In the event that we need to pass parameters to an annotation, we will use a Test class with a constant constructor, as shown in the following code: library test;   // Marker annotation test const String test = "test";   // Test annotation class Test { // Should test be ignored? final bool include; // Default constant constructor const Test({this.include:true});   String toString() => 'test'; } The Test class has the final include variable initialized with a default value of true. To exclude a method from tests, we should pass false as a parameter for the annotation, as shown in the following code: library test.case;   import 'test.dart'; import 'engine.dart';   // Test case of Engine class TestCase { Engine engine = new Engine(); // Start engine @test testStart() {    engine.start();    if (!engine.started) throw new Exception("Engine must start"); } // Stop engine @Test() testStop() {    engine.stop();    if (engine.started) throw new Exception("Engine must stop"); } // Warm up engine @Test(include: false) testWarmUp() {    // ... } } In this scenario, we test the Engine class via the invocation of the testStart and testStop methods of TestCase, while avoiding the invocation of the testWarmUp method. So what's next? How can we really use annotations? Annotations are useful with reflection at runtime, so now it's time to discuss how to make annotations available through reflection. Reflection Introspection is the ability of a program to discover and use its own structure. Reflection is the ability of a program to use introspection to examine and modify the structure and behavior of the program at runtime. You can use reflection to dynamically create an instance of a type or get the type from an existing object and invoke its methods or access its fields and properties. This makes your code more dynamic and can be written against known interfaces so that the actual classes can be instantiated using reflection. Another purpose of reflection is to create development and debugging tools, and it is also used for meta-programming. There are two different approaches to implementing reflection: The first approach is that the information about reflection is tightly integrated with the language and exists as part of the program's structure. Access to program-based reflection is available by a property or method. The second approach is based on the separation of reflection information and program structure. Reflection information is separated inside a distinct Mirror object that binds to the real program member. Dart reflection follows the second approach with Mirrors. You can find more information about the concept of Mirrors in the original paper written by Gilad Bracha at http://bracha.org/mirrors.pdf. 
Let's discuss the advantages of Mirrors: Mirrors are separate from the main code and cannot be exploited for malicious purposes As reflection is not part of the code, the resulting code is smaller There are no method-naming conflicts between the reflection API and inspected classes It is possible to implement many different Mirrors with different levels of reflection privileges It is possible to use Mirrors in command-line and web applications Let's try Mirrors and see what we can do with them. We will continue to create a library to run our tests. Introspection in action We will demonstrate the use of Mirrors with something simple such as introspection. We will need a universal code that can retrieve the information about any object or class in our program to discover its structure and possibly manipulate it with properties and call methods. For this, we've prepared the TypeInspector class. Let's take a look at the code. We've imported the dart:mirrors library here to add the introspection ability to our code: library inspector;   import 'dart:mirrors'; import 'test.dart';   class TypeInspector { ClassMirror _classMirror; // Create type inspector for [type]. TypeInspector(Type type) {    _classMirror = reflectClass(type); } The ClassMirror class contains all the information about the observing type. We perform the actual introspection with the reflectClass function of Mirrors and return a distinct Mirror object as the result. Then, we call the getAnnotatedMethods method and specify the name of the annotation that we are interested in. This will return a list of MethodMirror that will contain methods annotated with specified parameters. One by one, we step through all the instance members and call the private _isMethodAnnotated method. If the result of the execution of the _isMethodAnnotated method is successful, then we add the discovering method to the result list of found MethodMirror's, as shown in the following code: // Return list of method mirrors assigned by [annotation]. List<MethodMirror> getAnnotatedMethods(String annotation) {    List<MethodMirror> result = [];    // Get all methods    _classMirror.instanceMembers.forEach(      (Symbol name, MethodMirror method) {      if (_isMethodAnnotated(method, annotation)) {        result.add(method);      }    });    return result; } The first argument of _isMethodAnnotated has the metadata property that keeps a list of annotations. The second argument of this method is the annotation name that we would like to find. The inst variable holds a reference to the original object in the reflectee property. We pass through all the method's metadata to exclude some of them annotated with the Test class and marked with include equals false. All other method's annotations should be compared to the annotation name, as follows: // Check is [method] annotated with [annotation]. bool _isMethodAnnotated(MethodMirror method, String annotation) {    return method.metadata.any(      (InstanceMirror inst) {      // For [Test] class we check include condition      if (inst.reflectee is Test &&        !(inst.reflectee as Test).include) {        // Test must be exclude        return false;      }      // Literal compare of reflectee and annotation      return inst.reflectee.toString() == annotation;    }); } } Dart Mirrors have the following three main functions for introspection: reflect: This function is used to introspect an instance that is passed as a parameter and saves the result in InstanceMirror or ClosureMirror. 
For the first one, we can call methods, functions, or get and set fields of the reflectee property. For the second one, we can execute the closure. reflectClass: This function reflects the class declaration and returns ClassMirror. It holds full information about the type passed as a parameter. reflectType: This function returns TypeMirror and reflects a class, typedef, function type, or type variable. Let's take a look at the main code: library test.framework;   import 'type_inspector.dart'; import 'test_case.dart';   main() { TypeInspector inspector = new TypeInspector(TestCase); List methods = inspector.getAnnotatedMethods('test'); print(methods); } Firstly, we created an instance of our TypeInspector class and passed the testable class, in our case, TestCase. Then, we called getAnnotatedMethods from inspector with the name of the annotation, test. Here is the result of the execution: [MethodMirror on 'testStart', MethodMirror on 'testStop'] The inspector method found the methods testStart and testStop and ignored testWarmUp of the TestCase class as per our requirements. Reflection in action We have seen how introspection helps us find methods marked with annotations. Now we need to call each marked method to run the actual tests. We will do that using reflection. Let's make a MethodInvoker class to show reflection in action: library executor;   import 'dart:mirrors';   class MethodInvoker implements Function { // Invoke the method call(MethodMirror method) {    ClassMirror classMirror = method.owner as ClassMirror;    // Create an instance of class    InstanceMirror inst =      classMirror.newInstance(new Symbol(''), []);    // Invoke method of instance    inst.invoke(method.simpleName, []); } } As the MethodInvoker class implements the Function interface and has the call method, we can call an instance of it as if it were a function. In order to call the method, we must first instantiate a class. Each MethodMirror method has the owner property, which points to the owner object in the hierarchy. The owner of MethodMirror in our case is ClassMirror. In the preceding code, we created a new instance of the class with an empty constructor and then we invoked the method of inst by name. In both cases, the second parameter was an empty list of method parameters. Now, we introduce MethodInvoker to the main code. In addition to TypeInspector, we create the instance of MethodInvoker. One by one, we step through the methods and send each of them to invoker. We print Success only if no exceptions occur. To prevent the program from terminating if any of the tests failed, we wrap invoker in the try-catch block, as shown in the following code: library test.framework;   import 'type_inspector.dart'; import 'method_invoker.dart'; import 'engine_case.dart';   main() { TypeInspector inspector = new TypeInspector(TestCase); List methods = inspector.getAnnotatedMethods(test); MethodInvoker invoker = new MethodInvoker(); methods.forEach((method) {    try {      invoker(method);      print('Success ${method.simpleName}');    } on Exception catch(ex) {      print(ex);    } on Error catch(ex) {      print("$ex : ${ex.stackTrace}");    } }); } As a result, we will get the following output: Success Symbol("testStart") Success Symbol("testStop") To prove that the program will not terminate in the case of an exception in the tests, we will change the code in TestCase to break it, as follows: // Start engine @test testStart() { engine.start(); // !!! Broken on purpose if (engine.started) throw new Exception("Engine must start"); } 
When we run the program, the code for testStart fails, but the program continues executing until all the tests are finished, as shown in the following output: Exception: Engine must start Success Symbol("testStop") And now our test library is ready for use. It uses introspection and reflection to observe and invoke marked methods of any class. Summary This concludes our mastering of advanced techniques in Dart. You now know that generics produce safer and clearer code, annotations with reflection help execute code dynamically, and errors and exceptions play an important role in finding bugs that are detected at runtime. In the next chapter, we will talk about the creation of objects and how and when to create them using best practices from the programming world.

article-image-test456456
Packt
02 Dec 2014
28 min read
Save for later

test456456

Advanced Techniques and Reflection

In this chapter, we will discuss the flexibility and reusability of your code with the help of advanced techniques in Dart. Generic programming is widely useful and is about making your code type-unaware. Using types and generics makes your code safer and allows you to detect bugs early. The debate over errors versus exceptions splits developers into two sides. Which side to choose? It doesn't matter if you know the secret of using both. Annotation is another advanced technique used to decorate existing classes at runtime to change their behavior. Annotations can help reduce the amount of boilerplate code to write your applications. And last but not least, we will open Pandora's box through mirrors of reflection. In this chapter, we will cover the following topics:

• Generics
• Errors versus exceptions
• Annotations
• Reflection

Generics

Dart originally came with generics—a facility of generic programming. We have to tell the static analyzer the permitted type of a collection so it can inform us at compile time if we insert a wrong type of object. As a result, programs become clearer and safer to use. We will discuss how to effectively use generics and minimize the complications associated with them.

Raw types

Dart supports arrays in the form of the List class. Let's say you use a list to store data. The data that you put in the list depends on the context of your code. The list may contain different types of data at the same time, as shown in the following code:

// List of data
List raw = [1, "Letter", {'test':'wrong'}];
// Ordinary item
double item = 1.23;

void main() {
  // Add the item to array
  raw.add(item);
  print(raw);
}

In the preceding code, we assigned data of different types to the raw list. When the code executes, we get the following result:

[1, Letter, {test: wrong}, 1.23]

So what's the problem with this code? There is no problem. In our code, we intentionally used the default raw list class in order to store items of different types. But such situations are very rare. Usually, we keep data of a specific type in a list. How can we prevent inserting the wrong data type into the list? One way is to check the data type each time we read or write data to the list, as shown in the following code:

// Array of String data
List parts = ['wheel', 'bumper', 'engine'];
// Ordinary item
double item = 1.23;

void main() {
  if (item is String) {
    // Add the item to array
    parts.add(item);
  }
  print(parts);
}

Now, from the following result, we can see that the code is safer and works as expected:

[wheel, bumper, engine]

The code becomes more complicated with those extra conditional statements. What should you do when you add the wrong type in the list and it throws exceptions? What if you forget to insert an extra conditional statement? This is where generics come to the fore. Instead of writing a lot of type checks and class casts when manipulating a collection, we tell the static analyzer what type of object the list is allowed to contain. Here is the modified code, where we specify that parts can only contain strings:

// Array of String data
List<String> parts = ['wheel', 'bumper', 'engine'];
// Ordinary item
double item = 1.23;

void main() {
  // Add the item to array
  parts.add(item);
  print(parts);
}

Now, List is a generic class with the String parameter.
Dart Editor invokes the static analyzer to check the types in the code for potential problems at compile time and alert us if we try to insert a wrong type of object in our collection, as shown in the following screenshot:

This helps us make the code clearer and safer because the static analyzer checks the type of the collection at compile time. The important point is that you shouldn't use raw types. As a bonus, we can use a whole bunch of shorthand methods to organize iteration through the list of items with safer casts. Bear in mind that the static analyzer only warns about potential problems and doesn't generate any errors. Dart checks the types of generic classes only in the check mode. Execution in the production mode or code compiled to JavaScript loses all the type information.

Using generics

Let's discuss how to make the transition to using generics in our code with some real-world examples. Assume that we have the following AssemblyLine class:

part of assembly.room;

// AssemblyLine.
class AssemblyLine {
  // List of items on line.
  List _items = [];
  // Add [item] to line.
  add(item) {
    _items.add(item);
  }
  // Make operation on all items in line.
  make(operation) {
    _items.forEach((item) {
      operation(item);
    });
  }
}

Also, we have a set of different kinds of cars, as shown in the following code:

part of assembly.room;

// Car
abstract class Car {
  // Color
  String color;
}

// Passenger car
class PassengerCar extends Car {
  String toString() => "Passenger Car";
}

// Truck
class Truck extends Car {
  String toString() => "Truck";
}

Finally, we have the following assembly.room library with a main method:

library assembly.room;

part 'assembly_line.dart';
part 'car.dart';

operation(car) {
  print('Operate ${car}');
}

main() {
  // Create passenger assembly line
  AssemblyLine passengerCarAssembly = new AssemblyLine();
  // We can add passenger car
  passengerCarAssembly.add(new PassengerCar());
  // We can occasionally add Truck as well
  passengerCarAssembly.add(new Truck());
  // Operate
  passengerCarAssembly.make(operation);
}

In the preceding example, we were able to add the occasional truck in the assembly line for passenger cars without any problem to get the following result:

Operate Passenger Car
Operate Truck

This seems a bit farfetched since in real life, we can't assemble passenger cars and trucks in the same assembly line. So to make your solution safer, you need to make the AssemblyLine type generic.

Generic types

In general, it's not difficult to make a type generic. Consider the following example of the AssemblyLine class:

part of assembly.room;

// AssemblyLine.
class AssemblyLine <E extends Car> {
  // List of items on line.
  List<E> _items = [];
  // Add [item] to line.
  add(E item) {
    _items.insert(0, item);
  }
  // Make operation on all items in line.
  make(operation) {
    _items.forEach((E item) {
      operation(item);
    });
  }
}

In the preceding code, we added one type parameter, E, in the declaration of the AssemblyLine class. In this case, the type parameter requires the original one to be a subtype of Car. This allows the AssemblyLine implementation to take advantage of Car without the need for casting a class. The type parameter E is known as a bounded type parameter.
Any changes to the assembly.room library will look like this:

library assembly.room;

part 'assembly_line.dart';
part 'car.dart';

operation(car) {
  print('Operate ${car}');
}

main() {
  // Create passenger assembly line
  AssemblyLine<PassengerCar> passengerCarAssembly =
      new AssemblyLine<PassengerCar>();
  // We can add passenger car
  passengerCarAssembly.add(new PassengerCar());
  // We can occasionally add truck as well
  passengerCarAssembly.add(new Truck());
  // Operate
  passengerCarAssembly.make(operation);
}

The static analyzer alerts us at compile time if we try to insert the Truck argument into the assembly line for passenger cars, as shown in the following screenshot:

After we fix the code in line 17, all looks good. Our assembly line is now safe. But if you look at the operation function, it is totally different for passenger cars than it is for trucks; this means that we must make the operation generic as well. The static analyzer doesn't show any warnings and, even worse, we cannot make the operation generic directly because Dart doesn't support generics for functions. But there is a solution.

Generic functions

Functions, like all other data types in Dart, are objects, and they have the data type Function. In the following code, we will create an Operation class as an implementation of Function and then apply generics to it as usual:

part of assembly.room;

// Operation for specific type of car
class Operation<E extends Car> implements Function {
  // Operation name
  final String name;

  // Create new operation with [name]
  Operation(this.name);

  // We call our function here
  call(E car) {
    print('Make ${name} on ${car}');
  }
}

The gem in our class is the call method. As Operation implements Function and has a call method, we can pass an instance of our class as a function in the make method of the assembly line, as shown in the following code:

library assembly.room;

part 'assembly.dart';
part 'car.dart';
part 'operation.dart';

main() {
  // Paint operation for passenger cars
  Operation<PassengerCar> paint =
      new Operation<PassengerCar>("paint");
  // Paint operation for trucks
  Operation<Truck> paintTruck = new Operation<Truck>("paint");
  // Create passenger assembly line
  Assembly<PassengerCar> passengerCarAssembly =
      new Assembly<PassengerCar>();
  // We can add passenger car
  passengerCarAssembly.add(new PassengerCar());
  // Operate only with passenger car
  passengerCarAssembly.make(paint);
  // Operate with mistake
  passengerCarAssembly.make(paintTruck);
}

In the preceding code, we created the paint operation to paint the passenger cars and the paintTruck operation to paint trucks. Later, we created the passengerCarAssembly line and added a new passenger car to the line via the add method. We can run the paint operation on the passenger car by calling the make method of the passengerCarAssembly line. Next, we intentionally made a mistake and tried to paint the truck on the assembly line for passenger cars, which resulted in the following runtime exception:

Make paint on Passenger Car
Unhandled exception:
type 'PassengerCar' is not a subtype of type 'Truck' of 'car'.
#0 Operation.call (…/generics_operation.dart:10:10)
#1 Assembly.make.<anonymous closure> (…/generics_assembly.dart:16:15)
#2 List.forEach (dart:core-patch/growable_array.dart:240)
#3 Assembly.make (…/generics_assembly.dart:15:18)
#4 main (…/generics_assembly_and_operation_room.dart:20:28)
…

This trick with the call method of the Function type helps you make all the aspects of your assembly line generic. We've seen how to make classes and functions generic to make the code of our application safer and cleaner.

The documentation generator automatically adds information about generics to the generated documentation pages.

To understand the differences between errors and exceptions, let's move on to the next topic.

Errors versus exceptions

Runtime faults can and do occur during the execution of a Dart program. We can split all faults into two types:

Errors
Exceptions

There is always some confusion when deciding which kind of fault to use, but you will be given several general rules to make your life a bit easier. All your decisions will be based on the simple principle of recoverability. If your code generates a fault that can reasonably be recovered from, use exceptions. Conversely, if the code generates a fault that cannot be recovered from, or where continuing the execution would do more harm, use errors.

Let's take a look at each of them in detail.

Errors

An error occurs if your code has programming errors that should be fixed by the programmer. Let's take a look at the following main function:

main() {
  // Fixed length list
  List list = new List(5);
  // Fill list with values
  for (int i = 0; i < 10; i++) {
    list[i] = i;
  }
  print('Result is ${list}');
}

We created an instance of the List class with a fixed length and then tried to fill it with more values than the fixed size of the List class allows. Executing the preceding code generates RangeError, as shown in the following screenshot:

This error occurred because we performed a precondition violation in our code when we tried to insert a value into the list at an index outside the valid range. Mostly, these types of failures occur when the contract between the code and the calling API is broken. In our case, RangeError indicates that the precondition was violated. There is a whole bunch of errors in the Dart SDK, such as CastError, RangeError, NoSuchMethodError, UnsupportedError, OutOfMemoryError, and StackOverflowError. Also, there are many others that you will find in the errors.dart file as a part of the dart.core library. All error classes inherit from the Error class and can return stack trace information to help find the bug quickly. In the preceding example, the error happened in line 6 of the main method in the range_error.dart file.

We can catch errors in our code, but because the code was badly implemented, we should rather fix it. Errors are not designed to be caught, but we can throw them if a critical situation occurs. A Dart program should usually terminate when an error occurs.

Exceptions

Exceptions, unlike errors, are meant to be caught and usually carry information about the failure, but they don't include the stack trace information. Exceptions happen in recoverable situations and don't stop the execution of a program.
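Here is a minimal sketch of that distinction (our own illustration, not the book's file example): parsing user input can reasonably fail, so the SDK signals it with FormatException, which we catch and recover from:

int readNumber(String input) {
  try {
    return int.parse(input); // Throws FormatException on bad input
  } on FormatException {
    print('"$input" is not a number, falling back to 0');
    return 0; // Recover and continue execution
  }
}

main() {
  print(readNumber('42'));  // 42
  print(readNumber('abc')); // Recovers with 0
}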
You can throw any non-null object as an exception, but it is better to create a new exception class that implements the marker interface Exception and overrides the toString method of the Object class in order to deliver additional information. An exception should be handled in a catch clause or made to propagate outwards. The following is an example of code without the use of exceptions:

import 'dart:io';

main() {
  // File URI
  Uri uri = new Uri.file("test.json");
  // Check uri
  if (uri != null) {
    // Create the file
    File file = new File.fromUri(uri);
    // Check whether file exists
    if (file.existsSync()) {
      // Open file
      RandomAccessFile random = file.openSync();
      // Check random
      if (random != null) {
        // Read file
        List<int> notReadyContent =
            random.readSync(random.lengthSync());
        // Check not ready content
        if (notReadyContent != null) {
          // Convert to String
          String content = new String.fromCharCodes(notReadyContent);
          // Print results
          print('File content: ${content}');
        }
        // Close file
        random.closeSync();
      }
    } else {
      print("File doesn't exist");
    }
  }
}

Here is the result of this code execution:

File content: [{ name: Test, length: 100 }]

As you can see, the error detection and handling leads to confusing spaghetti code. Worse yet, the logical flow of the code has been lost, making it difficult to read and understand. So, we transform our code to use exceptions as follows:

import 'dart:io';

main() {
  RandomAccessFile random;
  try {
    // File URI
    Uri uri = new Uri.file("test.json");
    // Create the file
    File file = new File.fromUri(uri);
    // Open file
    random = file.openSync();
    // Read file
    List<int> notReadyContent =
        random.readSync(random.lengthSync());
    // Convert to String
    String content = new String.fromCharCodes(notReadyContent);
    // Print results
    print('File content: ${content}');
  } on ArgumentError catch(ex) {
    print('Argument error exception');
  } on UnsupportedError catch(ex) {
    print('URI cannot reference a file');
  } on FileSystemException catch(ex) {
    print("File doesn't exist or isn't accessible");
  } finally {
    try {
      random.closeSync();
    } on FileSystemException catch(ex) {
      print("File can't be closed");
    }
  }
}

The code in the finally statement will always be executed, independent of whether an exception happened, to close the random file. Finally, we have a clear separation of exception handling from the working code, and we can now propagate uncaught exceptions outwards in the call stack.

The suggestions based on recoverability after exceptions are fragile. In our example, we caught ArgumentError and UnsupportedError along with FileSystemException. This was only done to show that errors and exceptions have the same nature and can be caught at any time. So, what is the truth? While developing my own framework, I used the following principle:

If I believe the code cannot recover, I use an error, and if I think it can recover, I use an exception.

Let's discuss another advanced technique that has become very popular and that helps you change the behavior of the code without making any changes to it.

Annotations

An annotation is metadata, that is, data about data. An annotation is a way to keep additional information about the code in the code itself. An annotation can have parameter values to pass specific information about an annotated member. An annotation without parameters is called a marker annotation.
The purpose of a marker annotation is just to mark the annotated member.

Dart annotations are constant expressions beginning with the @ character. We can apply annotations to all the members of the Dart language, excluding comments and annotations themselves. Annotations can be:

Interpreted statically by parsing the program and evaluating the constants via a suitable interpreter
Retrieved via reflection at runtime by a framework

The documentation generator does not add annotations to the generated documentation pages automatically, so the information about annotations must be specified separately in comments.

Built-in annotations

There are several built-in annotations defined in the Dart SDK that are interpreted by the static analyzer. Let's take a look at them.

Deprecated

The first built-in annotation is deprecated, which is very useful when you need to mark a function, a variable, a method of a class, or even a whole class as deprecated, meaning that it should no longer be used. The static analyzer generates a warning whenever a marked statement is used in code, as shown in the following screenshot:

Override

Another built-in annotation is override. This annotation informs the static analyzer that any instance member, such as a method, getter, or setter, is meant to override the member of a superclass with the same name. Class instance variables as well as static members never override each other. If an instance member marked with override fails to correctly override a member in one of its superclasses, the static analyzer generates the following warning:

Proxy

The last annotation is proxy. Proxy is a well-known pattern used when we need to call a real class's methods through an instance of another class. Let's assume that we have the following Car class:

part of cars;

// Class Car
class Car {
  int _speed = 0;
  // The car speed
  int get speed => _speed;

  // Accelerate car
  accelerate(acc) {
    _speed += acc;
  }
}

To drive the car instance, we must accelerate it as follows:

library cars;

part 'car.dart';

main() {
  Car car = new Car();
  car.accelerate(10);
  print('Car speed is ${car.speed}');
}

We now run our example to get the following result:

Car speed is 10

In practice, we may have a lot of different car types and would want to test all of them. To help us with this, we created the CarProxy class by passing an instance of Car in the proxy's constructor. From now on, we can invoke the car's methods through the proxy and save the results in a log as follows:

part of cars;

// Proxy to [Car]
class CarProxy {
  final Car _car;

  // Create new proxy to [car]
  CarProxy(this._car);

  @override
  noSuchMethod(Invocation invocation) {
    if (invocation.isMethod &&
        invocation.memberName == const Symbol('accelerate')) {
      // Get acceleration value
      var acc = invocation.positionalArguments[0];
      // Log info
      print("LOG: Accelerate car with ${acc}");
      // Call original method
      _car.accelerate(acc);
    } else if (invocation.isGetter &&
        invocation.memberName == const Symbol('speed')) {
      var speed = _car.speed;
      // Log info
      print("LOG: The car speed ${speed}");
      return speed;
    }
    return super.noSuchMethod(invocation);
  }
}

As you can see, CarProxy does not implement the Car interface.
All the magic happens inside noSuchMethod, which is overridden from the Object class. In this method, we compare the invoked member name with accelerate and speed. If the comparison matches one of our conditions, we log the information and then call the original method on the real object. Now let's make changes to the main method, as shown in the following screenshot:

Here, the static analyzer alerts you with a warning because the CarProxy class doesn't have the accelerate method and the speed getter. You must add the proxy annotation to the definition of the CarProxy class to suppress the static analyzer warning, as shown in the following screenshot:

Now, with all the warnings gone, we can run our example to get the following successful result:

Car speed is 10
LOG: Accelerate car with 10
LOG: The car speed 20
Car speed through proxy is 20

Custom annotations

Let's say we want to create a test framework. For this, we will need several custom annotations to mark methods in a testable class to be included in a test case. The following code has two custom annotations. In the case where we need only a marker annotation, we use a constant string, test. In the event that we need to pass parameters to an annotation, we will use a Test class with a constant constructor, as shown in the following code:

library test;

// Marker annotation test
const String test = "test";

// Test annotation
class Test {
  // Should test be ignored?
  final bool include;
  // Default constant constructor
  const Test({this.include:true});

  String toString() => 'test';
}

The Test class has the final include variable initialized with a default value of true. To exclude a method from tests, we should pass false as a parameter for the annotation, as shown in the following code:

library test.case;

import 'test.dart';
import 'engine.dart';

// Test case of Engine
class TestCase {
  Engine engine = new Engine();

  // Start engine
  @test
  testStart() {
    engine.start();
    if (!engine.started) throw new Exception("Engine must start");
  }

  // Stop engine
  @Test()
  testStop() {
    engine.stop();
    if (engine.started) throw new Exception("Engine must stop");
  }

  // Warm up engine
  @Test(include:false)
  testWarmUp() {
    // ...
  }
}

In this scenario, we test the Engine class via the invocation of the testStart and testStop methods of TestCase, while avoiding the invocation of the testWarmUp method.

So what's next? How can we really use annotations? Annotations are useful with reflection at runtime, so now it's time to discuss how to make annotations available through reflection.

Reflection

Introspection is the ability of a program to discover and use its own structure. Reflection is the ability of a program to use introspection to examine and modify the structure and behavior of the program at runtime. You can use reflection to dynamically create an instance of a type, or to get the type from an existing object and invoke its methods or access its fields and properties. This makes your code more dynamic, and it can be written against known interfaces so that the actual classes can be instantiated using reflection. Another purpose of reflection is to create development and debugging tools, and it is also used for meta-programming.
There are two different approaches to implementing reflection:

The first approach is that the information about reflection is tightly integrated with the language and exists as part of the program's structure. Access to program-based reflection is available through a property or method.
The second approach is based on the separation of reflection information from the program structure. Reflection information is kept in a distinct mirror object that binds to the real program member.

Dart reflection follows the second approach with Mirrors. You can find more information about the concept of Mirrors in the original paper written by Gilad Bracha at http://bracha.org/mirrors.pdf. Let's discuss the advantages of mirrors:

Mirrors are separate from the main code and cannot be exploited for malicious purposes
As reflection is not part of the code, the resulting code is smaller
There are no method-naming conflicts between the reflection API and inspected classes
It is possible to implement many different mirrors with different levels of reflection privileges
It is possible to use mirrors in command-line and web applications

Let's try Mirrors and see what we can do with them. We will continue to create a library to run our tests.

Introspection in action

We will demonstrate the use of Mirrors with something simple such as introspection. We will need universal code that can retrieve the information about any object or class in our program to discover its structure, and possibly manipulate its properties and call its methods. For this, we've prepared the TypeInspector class. Let's take a look at the code. We've imported the dart:mirrors library here to add the introspection ability to our code:

library inspector;

import 'dart:mirrors';
import 'test.dart';

class TypeInspector {
  ClassMirror _classMirror;
  // Create type inspector for [type].
  TypeInspector(Type type) {
    _classMirror = reflectClass(type);
  }

The ClassMirror class contains all the information about the observed type. We perform the actual introspection with the reflectClass function of Mirrors and return a distinct mirror object as the result. Then, we call the getAnnotatedMethods method and specify the name of the annotation that we are interested in. This will return a list of MethodMirror that contains the methods annotated with the specified parameter. One by one, we step through all the instance members and call the private _isMethodAnnotated method. If the result of the execution of the _isMethodAnnotated method is successful, then we add the discovered method to the result list of found MethodMirrors, as shown in the following code:

  // Return list of method mirrors assigned by [annotation].
  List<MethodMirror> getAnnotatedMethods(String annotation) {
    List<MethodMirror> result = [];
    // Get all methods
    _classMirror.instanceMembers.forEach(
        (Symbol name, MethodMirror method) {
      if (_isMethodAnnotated(method, annotation)) {
        result.add(method);
      }
    });
    return result;
  }

The first argument of _isMethodAnnotated has the metadata property that keeps a list of annotations. The second argument of this method is the annotation name that we would like to find. The inst variable holds a reference to the original object in the reflectee property. We pass through all the method's metadata to exclude those annotated with the Test class and marked with include equal to false.
All other methods' annotations should be compared to the annotation name, as follows:

  // Check is [method] annotated with [annotation].
  bool _isMethodAnnotated(MethodMirror method, String annotation) {
    return method.metadata.any(
        (InstanceMirror inst) {
      // For [Test] class we check include condition
      if (inst.reflectee is Test &&
          !(inst.reflectee as Test).include) {
        // Test must be excluded
        return false;
      }
      // Literal compare of reflectee and annotation
      return inst.reflectee.toString() == annotation;
    });
  }
}

Dart mirrors have the following three main functions for introspection:

reflect: This function is used to introspect an instance that is passed as a parameter and saves the result in InstanceMirror or ClosureMirror. For the first one, we can call methods or functions, or get and set the fields of the reflectee property. For the second one, we can execute the closure.
reflectClass: This function reflects the class declaration and returns ClassMirror. It holds full information about the type passed as a parameter.
reflectType: This function returns TypeMirror and reflects a class, typedef, function type, or type variable.

Let's take a look at the main code:

library test.framework;

import 'type_inspector.dart';
import 'test_case.dart';

main() {
  TypeInspector inspector = new TypeInspector(TestCase);
  List methods = inspector.getAnnotatedMethods('test');
  print(methods);
}

Firstly, we created an instance of our TypeInspector class and passed it the testable class, in our case, TestCase. Then, we called getAnnotatedMethods from inspector with the name of the annotation, test. Here is the result of the execution:

[MethodMirror on 'testStart', MethodMirror on 'testStop']

The inspector found the methods testStart and testStop and ignored testWarmUp of the TestCase class, as per our requirements.

Reflection in action

We have seen how introspection helps us find methods marked with annotations. Now we need to call each marked method to run the actual tests. We will do that using reflection. Let's make a MethodInvoker class to show reflection in action:

library executor;

import 'dart:mirrors';

class MethodInvoker implements Function {
  // Invoke the method
  call(MethodMirror method) {
    ClassMirror classMirror = method.owner as ClassMirror;
    // Create an instance of class
    InstanceMirror inst =
        classMirror.newInstance(new Symbol(''), []);
    // Invoke method of instance
    inst.invoke(method.simpleName, []);
  }
}

As the MethodInvoker class implements the Function interface and has the call method, we can invoke an instance of it as if it were a function. In order to call the method, we must first instantiate a class. Each MethodMirror has the owner property, which points to the owner object in the hierarchy. The owner of MethodMirror in our case is ClassMirror. In the preceding code, we created a new instance of the class with the default constructor, and then we invoked the method of inst by name. In both cases, the second parameter was an empty list of method parameters.

Now, we introduce MethodInvoker to the main code. In addition to TypeInspector, we create an instance of MethodInvoker. One by one, we step through the methods and send each of them to invoker. We print Success only if no exceptions occur.
To prevent the program from terminating if any of the tests fail, we wrap invoker in a try-catch block, as shown in the following code (note that we also import test.dart, which defines the test constant):

library test.framework;

import 'test.dart';
import 'type_inspector.dart';
import 'method_invoker.dart';
import 'test_case.dart';

main() {
  TypeInspector inspector = new TypeInspector(TestCase);
  List methods = inspector.getAnnotatedMethods(test);
  MethodInvoker invoker = new MethodInvoker();
  methods.forEach((method) {
    try {
      invoker(method);
      print('Success ${method.simpleName}');
    } on Exception catch(ex) {
      print(ex);
    } on Error catch(ex) {
      print("$ex : ${ex.stackTrace}");
    }
  });
}

As a result, we will get the following output:

Success Symbol("testStart")
Success Symbol("testStop")

To prove that the program will not terminate in the case of an exception in the tests, we will change the code in TestCase to break it, as follows:

// Start engine
@test
testStart() {
  engine.start();
  // !!! Broken for a reason
  if (engine.started) throw new Exception("Engine must start");
}

When we run the program, the code for testStart fails, but the program continues executing until all the tests are finished, as shown in the following output:

Exception: Engine must start
Success Symbol("testStop")

And now our test library is ready for use. It uses introspection and reflection to observe and invoke the marked methods of any class.

Summary

This concludes our tour of the advanced techniques in Dart. You now know that generics produce safer and clearer code, annotations with reflection help execute code dynamically, and errors and exceptions play an important role in finding bugs that are detected at runtime.

In the next chapter, we will talk about the creation of objects and how and when to create them using best practices from the programming world.
Using Classes

Packt
28 Nov 2014
26 min read
In this article by Meir Bar-Tal and Jonathon Lee Wright, authors of Advanced UFT 12 for Test Engineers Cookbook, we will cover the following recipes:

Implementing a class
Implementing a simple search class
Implementing a generic Login class
Implementing function pointers

Introduction

This article describes how to use classes in VBScript, along with some very useful and illustrative implementation examples. Classes are a fundamental feature of object-oriented programming languages such as C++, C#, and Java. Classes enable us to encapsulate data fields with the methods and properties that process them, in contrast to global variables and functions scattered in function libraries. UFT already uses classes, such as with reserved objects, and Test Objects are also instances of classes. Although elementary object-oriented features such as inheritance and polymorphism are not supported by VBScript, using classes can be an excellent choice to make your code more structured, better organized, and more efficient and reusable.

Implementing a class

In this recipe, you will learn the following:

The basic concepts and the syntax required by VBScript to implement a class
The different components of a class and their interoperation
How to implement a type of generic constructor function for VBScript classes
How to use a class during runtime

Getting ready

From the File menu, navigate to New | Function Library…, or use the Alt + Shift + N shortcut. Name the new function library cls.MyFirstClass.vbs and associate it with your test.

How to do it...

We will build our MyFirstClass class from the ground up. There are several steps one must follow to implement a class; they are as follows:

Define the class as follows:

Class MyFirstClass

Next, we define the class fields. Fields are like regular variables, but encapsulated within the namespace defined by the class. The fields can be private or public. A private field can be accessed only by class members. A public field can be accessed from any block of code. The code is as follows:

Class MyFirstClass
    Private m_sMyPrivateString
    Private m_oMyPrivateObject
    Public m_iMyPublicInteger
End Class

It is a matter of convention to use the prefix m_ for class member fields; and str for string, int for integer, obj for Object, flt for Float, bln for Boolean, chr for Character, lng for Long, and dbl for Double, to distinguish between fields of different data types. For examples of other prefixes to represent additional data types, please refer to sites such as https://en.wikipedia.org/wiki/Hungarian_notation.

Hence, the private fields m_sMyPrivateString and m_oMyPrivateObject will be accessible only from within the class methods, properties, and subroutines. The public field m_iMyPublicInteger will be accessible from any part of the code that has a reference to an instance of the MyFirstClass class. One can also grant partial or full access to private fields by implementing public properties, as we will see shortly. By default, within a script file, VBScript treats identifiers such as functions, subroutines, and any constants or variables defined with Const and Dim as public, even if their scope is not explicitly defined. When associating function libraries to UFT, one can limit access to specific globally defined identifiers by preceding them with the keyword Private. The same applies to the members of a class: function, sub, and property.
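To illustrate these scope rules (a minimal sketch of our own, not part of the recipe), attempting to read a private field from outside the class fails at runtime, while a public field is accessible:

Class ScopeDemo
    Private m_sSecret
    Public m_sGreeting

    Private Sub Class_Initialize
        m_sSecret = "hidden"
        m_sGreeting = "hello"
    End Sub
End Class

Dim obj
Set obj = New ScopeDemo
MsgBox obj.m_sGreeting ' Works: public field
' MsgBox obj.m_sSecret ' Would raise a runtime error: private field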
Class fields must be preceded either by Public or Private; the public scope is not assumed by VBScript, and failing to precede a field identifier with its access scope will result in a syntax error. Remember that, by default, VBScript creates a new variable on its first use, unless Option Explicit is used at the script level to force the explicit declaration of all variables in that script.

Next, we define the class properties. A property is a code structure used to selectively provide access to a class' private member fields. Hence, a property is often referred to as a getter (to allow for data retrieval) or setter (to allow for data change). A property is a special case in VBScript; it is the only code structure that allows for a duplicate identifier. That is, one can have a Property Get and a Property Let procedure (or Property Set, to be used when the member field is actually meant to store a reference to an instance of another class) with the same identifier. Note that Property Let and Property Set accept a mandatory argument. For example:

Class MyFirstClass
    Private m_sMyPrivateString
    Private m_oMyPrivateObject
    Public m_iMyPublicInteger

    Property Get MyPrivateString()
        MyPrivateString = m_sMyPrivateString
    End Property

    Property Let MyPrivateString(ByVal str)
        m_sMyPrivateString = str
    End Property

    Property Get MyPrivateObject()
        Set MyPrivateObject = m_oMyPrivateObject
    End Property

    Private Property Set MyPrivateObject(ByRef obj)
        Set m_oMyPrivateObject = obj
    End Property
End Class

The public field m_iMyPublicInteger can be accessed from any code block, so defining a getter and setter (as properties are often referred to) for such a field is optional. However, it is a good practice to define fields as private and explicitly provide access through public properties. For fields that are for the exclusive use of the class members, one can define the properties as private. In such a case, usually, the setter (Property Let or Property Set) would be defined as private, while the getter (Property Get) would be defined as public. This way, one can prevent other code components from making changes to the internal fields of the class, to ensure data integrity and validity.

Define the class methods and subroutines. A method is a function that is a member of a class. Like fields and properties, methods (as well as subroutines) can be Private or Public. For example:

Class MyFirstClass
    '… Continued

    Private Function MyPrivateFunction(ByVal str)
        MsgBox TypeName(me) & " - Private Func: " & str
        MyPrivateFunction = 0
    End Function

    Function MyPublicFunction(ByVal str)
        MsgBox TypeName(me) & " - Public Func: " & str
        MyPublicFunction = 0
    End Function

    Sub MyPublicSub(ByVal str)
        MsgBox TypeName(me) & " - Public Sub: " & str
    End Sub
End Class

Keep in mind that subroutines do not return a value. A function that does not need to return a value could be implemented as a subroutine instead. A better practice is to have every function return a value that tells the caller whether it executed properly (usually zero (0) for no errors and one (1) for any fault). Recall that a function that is not explicitly assigned a value will return Empty, which may cause problems if the caller attempts to evaluate the returned value.

Now, we define how to initialize the class when a VBScript object is instantiated:

Set obj = New MyFirstClass

The Initialize event takes place at the time the object is created.
It is possible to add code that we wish to execute every time an object is created. So, now define the standard private subroutine Class_Initialize, sometimes referred to (albeit only by analogy) as the constructor of the class. If implemented, the code will automatically be executed during the Initialize event. For example, if we add the following code to our class:

Private Sub Class_Initialize
    MsgBox TypeName(me) & " started"
End Sub

Now, every time the Set obj = New MyFirstClass statement is executed, the following message will be displayed:

Define how to finalize the class. We finalize a class when a VBScript object is disposed of (as follows), when the script exits the current scope (such as when a local object is disposed of when a function returns control to the caller), or when a global object is disposed of (when UFT ends its run session):

Set obj = Nothing

The Finalize event takes place at the time the object is removed from memory. It is possible to add code that we wish to execute every time an object is disposed of. If so, then define the standard private subroutine Class_Terminate, sometimes referred to (albeit only by analogy) as the destructor of the class. If implemented, the code will automatically be executed during the Finalize event. For example, if we add the following code to our class:

Private Sub Class_Terminate
    MsgBox TypeName(me) & " ended"
End Sub

Now, every time the Set obj = Nothing statement is executed, the following message will be displayed:

Invoking (calling) a class method or property is done as follows:

'Declare variables
Dim obj, var
'Calling MyPublicFunction
obj.MyPublicFunction("Hello")
'Retrieving the value of m_sMyPrivateString
var = obj.MyPrivateString
'Setting the value of m_sMyPrivateString
obj.MyPrivateString = "My String"

Note that the usage of the public members is done by using the syntax obj.<method or property name>, where obj is the variable holding the reference to the object of the class. The dot operator (.) after the variable identifier provides access to the public members of the class. Private members can be called only by other members of the class, and this is done like any other regular function call.

VBScript supports classes with a default behavior. To utilize this feature, we need to define a single default method or property that will be invoked every time an object of the class is referred to without specifying which method or property to call. For example, if we define the public method MyPublicFunction as default:

Public Default Function MyPublicFunction(ByVal str)
    MsgBox TypeName(me) & " - Public Func: " & str
    MyPublicFunction = 0
End Function

Now, the following statements would invoke the MyPublicFunction method implicitly:

Set obj = New MyFirstClass
obj("Hello")

This is exactly the same as if we called the MyPublicFunction method explicitly:

Set obj = New MyFirstClass
obj.MyPublicFunction("Hello")

Contrary to the usual standard for such functions, a default method or property must be explicitly defined as public.

Now, we will see how to add a constructor-like function. When using classes stored in function libraries, UFT (known as QTP in previous versions) cannot create an object using the New operator inside a test Action. In general, the reason is linked to the fact that UFT uses a wrapper on top of WSH, which actually executes the VBScript (VBS 5.6) code.
Therefore, in order to create instances of such a custom class, we need to use a kind of constructor function that will perform the New operation from the proper memory namespace. Add the following generic constructor to your function library:

Function Constructor(ByVal sClass)
    Dim obj
    On Error Resume Next
    'Get instance of sClass
    Execute "Set obj = New [" & sClass & "]"
    If Err.Number <> 0 Then
        Set obj = Nothing
        Reporter.ReportEvent micFail, "Constructor", _
            "Failed to create an instance of class '" & sClass & "'."
    End If
    Set Constructor = obj
End Function

We will then instantiate the object from the UFT Action, as follows:

Set obj = Constructor("MyFirstClass")

Consequently, use the object reference in the same fashion as seen in the previous line of code:

obj.MyPublicFunction("Hello")

How it works...

As mentioned earlier, using the internal public fields, methods, subroutines, and properties is done using a variable followed by the dot operator and the relevant identifier (for example, the function name).

As to the constructor, it accepts a string with the name of a class as an argument and attempts to create an instance of the given class. By using the Execute command (which performs any string containing valid VBScript syntax), it tries to set the variable obj with a new reference to an instance of sClass. Hence, we can handle any custom class with this function. If the class cannot be instantiated (for instance, because the string passed to the function is faulty, the function library is not associated with the test, or there is a syntax error in the function library), then an error arises, which is gracefully handled by the error-handling mechanism, leading to the function returning Nothing. Otherwise, the function will return a valid reference to the newly created object.

See also

The following articles at www.advancedqtp.com are part of a wider collection, which also discuss classes and code design in depth:

An article by Yaron Assa at http://www.advancedqtp.com/introduction-to-classes
An article by Yaron Assa at http://www.advancedqtp.com/introduction-to-code-design
An article by Yaron Assa at http://www.advancedqtp.com/introduction-to-design-patterns

Implementing a simple search class

In this recipe, we will see how to create a class that can be used to execute a search on Google.

Getting ready

From the File menu, navigate to New | Test, and name the new test SimpleSearch. Then create a new function library by navigating to New | Function Library, or use the Alt + Shift + N shortcut. Name the new function library cls.Google.vbs and associate it with your test.

How to do it...

Proceed with the following steps:

Define an environment variable as OPEN_URL.
Insert the following code in the new library:

Class GoogleSearch
    Public Function DoSearch(ByVal sQuery)
        With me.Page_
            .WebEdit("name:=q").Set sQuery
            .WebButton("html id:=gbqfba").Click
        End With
        me.Browser_.Sync
        If me.Results.WaitProperty("visible", 1, 10000) Then
            DoSearch = GetNumResults()
        Else
            DoSearch = 0
            Reporter.ReportEvent micFail, TypeName(Me), _
                "Search did not retrieve results until timeout"
        End If
    End Function

    Public Function GetNumResults()
        Dim tmpStr
        tmpStr = me.Results.GetROProperty("innertext")
        tmpStr = Split(tmpStr, " ")
        GetNumResults = CLng(tmpStr(1)) 'Assumes the number is always in the second entry
    End Function

    Public Property Get Browser_()
        Set Browser_ = Browser(me.Title)
    End Property

    Public Property Get Page_()
        Set Page_ = me.Browser_.Page(me.Title)
    End Property

    Public Property Get Results()
        Set Results = me.Page_.WebElement(me.ResultsId)
    End Property

    Public Property Get ResultsId()
        ResultsId = "html id:=resultStats"
    End Property

    Public Property Get Title()
        Title = "title:=.*Google.*"
    End Property

    Private Sub Class_Initialize
        If Not me.Browser_.Exist(0) Then
            SystemUtil.Run "iexplore.exe", Environment("OPEN_URL")
            Reporter.Filter = rfEnableErrorsOnly
            While Not Browser_.Exist(0)
                Wait 0, 50
            Wend
            Reporter.Filter = rfEnableAll
            Reporter.ReportEvent micDone, TypeName(Me), "Opened browser"
        Else
            Reporter.ReportEvent micDone, TypeName(Me), "Browser was already open"
        End If
    End Sub

    Private Sub Class_Terminate
        If me.Browser_.Exist(0) Then
            me.Browser_.Close
            Reporter.Filter = rfEnableErrorsOnly
            While me.Browser_.Exist(0)
                Wait 0, 50
            Wend
            Reporter.Filter = rfEnableAll
            Reporter.ReportEvent micDone, TypeName(Me), "Closed browser"
        End If
    End Sub
End Class

In Action, write the following code (note that the TypeName comparison for the dictionary must be against the lowercase string "dictionary", since the result of LCase is compared):

Dim oGoogleSearch
Dim oListResults
Dim oDicSearches
Dim iNumResults
Dim sMaxResults
Dim iMaxResults

'--- Create these objects only in the first iteration
If Not LCase(TypeName(oListResults)) = "arraylist" Then
    Set oListResults = CreateObject("System.Collections.ArrayList")
End If
If Not LCase(TypeName(oDicSearches)) = "dictionary" Then
    Set oDicSearches = CreateObject("Scripting.Dictionary")
End If

'--- Get a fresh instance of GoogleSearch
Set oGoogleSearch = GetGoogleSearch()

'--- Get search term from the DataTable for each action iteration
sToSearch = DataTable("Query", dtLocalSheet)
iNumResults = oGoogleSearch.DoSearch(sToSearch)

'--- Store the results of the current iteration
'--- Store the number of results
oListResults.Add iNumResults
'--- Store the search term attached to the number of results as key (if not exists)
If Not oDicSearches.Exists(iNumResults) Then
    oDicSearches.Add iNumResults, sToSearch
End If

'Last iteration (assuming we always run on all rows), so perform
'the comparison between the different searches
If CInt(Environment("ActionIteration")) = DataTable.LocalSheet.GetRowCount Then
    'Sort the results ascending
    oListResults.Sort
    'Get the last item, which is the largest
    iMaxResults = oListResults.Item(oListResults.Count - 1)
    'Print to the Output pane for debugging
    Print iMaxResults
    'Get the search text which got the most results
    sMaxResults = oDicSearches(iMaxResults)
    'Report result
    Reporter.ReportEvent micDone, "Max search", sMaxResults & " got " & iMaxResults
    'Dispose of the objects used
    Set oListResults = Nothing
    Set oDicSearches = Nothing
    Set oGoogleSearch = Nothing
End If
In the local datasheet, create a parameter named Query and enter several values to be used in the test as search terms. Next, from the UFT home page, navigate to View | Test Flow, right-click the Action component in the graphic display, select Action Call Properties, and set the Action to run on all rows.

How it works...

The Action preserves the data collected through the iterations in the oListResults array list and the oDicSearches dictionary. After each search is done, it checks whether it has reached the last iteration. Upon reaching the last iteration, it analyzes the data to decide which term yielded the most results. A more detailed description of the workings of the code follows.

First, we create an instance of the GoogleSearch class, and the Class_Initialize subroutine automatically checks whether the browser is already open. If not, Class_Initialize opens it with the SystemUtil.Run command and waits until it is open at the web address defined in Environment("OPEN_URL"). The Title property always returns the Descriptive Programming (DP) value required to identify the Google browser and page. The Browser_, Page_, and Results properties always return a reference to the Google browser, page, and WebElement respectively; the latter holds the text with the search results.

After the browser is open, we retrieve the search term from the local DataTable parameter Query and call the GoogleSearch DoSearch method with the search term string as a parameter. The DoSearch method returns the number of results, which is given by the internal method GetNumResults. In the Action, we store the number itself and add to the dictionary an entry with this number as the key and the search term as the value. When the last iteration is reached, an analysis of the results is automatically done by invoking the Sort method of the oListResults ArrayList, getting the last item (the greatest), and then retrieving the search term associated with this number from the dictionary; the result is then reported. At last, we dispose of all the objects used, and the Class_Terminate subroutine automatically checks whether the browser is open. If open, Class_Terminate closes the browser.

Implementing a generic Login class

In this recipe, we will see how to implement a generic Login class. The class captures both the GUI structure and the processes that are common to all applications with regard to their user access module. It is agnostic to the particular object classes, their technologies, and other identification properties. The class shown here implements the command wrapper design pattern, as it encapsulates a process (Login) with the main default method (Run).

Getting ready

You can use the same function library cls.Google.vbs as in the previous recipe, Implementing a simple search class, or create a new one (for instance, cls.Login.vbs) and associate it with your test.

How to do it...
In the function library, we will write the following code to define the class Login:

Class Login
    Private m_wndContainer 'Such as a Browser, Window, SwfWindow
    Private m_wndLoginForm 'Such as a Page, Dialog, SwfWindow
    Private m_txtUsername  'Such as a WebEdit, WinEdit, SwfEdit
    Private m_txtIdField   'Such as a WebEdit, WinEdit, SwfEdit
    Private m_txtPassword  'Such as a WebEdit, WinEdit, SwfEdit
    Private m_chkRemember  'Such as a WebCheckbox, WinCheckbox, SwfCheckbox
    Private m_btnLogin     'Such as a WebButton, WinButton, SwfButton
End Class

These fields define the test objects required for any Login class, and the following fields are used to keep runtime data for the report:

    Public Status 'As Integer
    Public Info   'As String

The Run function is defined as a Default method that accepts a Dictionary as an argument. This way, we can pass a set of named arguments, some of which are optional, such as timeout.

    Public Default Function Run(ByVal ArgsDic)
        'Check if the timeout parameter was passed; if not, assign it 10 seconds
        If Not ArgsDic.Exists("timeout") Then ArgsDic.Add "timeout", 10
        'Check if the client window exists
        If Not me.Container.Exist(ArgsDic("timeout")) Then
            me.Status = micFail
            me.Info = "Failed to detect login browser/dialog/window."
            Exit Function
        End If
        'Set the Username
        me.Username.Set ArgsDic("Username")
        'If the login form has an additional mandatory field
        If me.IdField.Exist(ArgsDic("timeout")) And ArgsDic.Exists("IdField") Then
            me.IdField.Set ArgsDic("IdField")
        End If
        'Set the password
        me.Password.SetSecure ArgsDic("Password")
        'It is a common practice that Login forms have a checkbox
        'to keep the user logged in if set ON
        If me.Remember.Exist(ArgsDic("timeout")) And ArgsDic.Exists("Remember") Then
            me.Remember.Set ArgsDic("Remember")
        End If
        me.LoginButton.Click
    End Function

The Run method actually performs the login procedure: setting the username and password, as well as checking or unchecking the Remember Me or Keep me Logged In checkbox according to the argument passed within the ArgsDic dictionary. The Initialize method accepts a Dictionary just like the Run method. However, in this case, we pass the actual Test Objects with which we wish to perform the login procedure. This way, we can utilize the class for any Login form, whatever the technology used to develop it. We can say that the class is technology agnostic.
The parent client dialog/browser/window of the objects is retrieved using the GetTOProperty("parent") statement:

    Function Initialize(ByVal ArgsDic)
        Set m_txtUsername = ArgsDic("Username")
        Set m_txtIdField  = ArgsDic("IdField")
        Set m_txtPassword = ArgsDic("Password")
        Set m_btnLogin    = ArgsDic("LoginButton")
        Set m_chkRemember = ArgsDic("Remember")
        'Get parents
        Set m_wndLoginForm = me.Username.GetTOProperty("parent")
        Set m_wndContainer = me.LoginForm.GetTOProperty("parent")
    End Function

In addition, here you can see the following properties used in the class for better readability:

    Property Get Container()
        Set Container = m_wndContainer
    End Property
    Property Get LoginForm()
        Set LoginForm = m_wndLoginForm
    End Property
    Property Get Username()
        Set Username = m_txtUsername
    End Property
    Property Get IdField()
        Set IdField = m_txtIdField
    End Property
    Property Get Password()
        Set Password = m_txtPassword
    End Property
    Property Get Remember()
        Set Remember = m_chkRemember
    End Property
    Property Get LoginButton()
        Set LoginButton = m_btnLogin
    End Property

    Private Sub Class_Initialize()
        'TODO: Additional initialization code here
    End Sub
    Private Sub Class_Terminate()
        'TODO: Additional finalization code here
    End Sub

We will also add a custom function to override the WinEdit and WinEditor Type methods:

Function WinEditSet(ByRef obj, ByVal str)
    obj.Type str
End Function

This way, no matter which technology the textbox belongs to, the Set method will work seamlessly. To actually test the Login class, write the following code in the test Action (this time, we assume that the Login form was already opened by another procedure). Note that the result is reported from the oLogin object's Status and Info fields:

Dim ArgsDic, oLogin
'Register the Set method for the WinEdit and WinEditor
RegisterUserFunc "WinEdit", "WinEditSet", "Set"
RegisterUserFunc "WinEditor", "WinEditSet", "Set"
'Create a Dictionary object
Set ArgsDic = CreateObject("Scripting.Dictionary")
'Create a Login object
Set oLogin = New Login
'Add the test objects to the Dictionary
With ArgsDic
    .Add "Username", Browser("Gmail").Page("Gmail").WebEdit("txtUsername")
    .Add "Password", Browser("Gmail").Page("Gmail").WebEdit("txtPassword")
    .Add "Remember", Browser("Gmail").Page("Gmail").WebCheckbox("chkRemember")
    .Add "LoginButton", Browser("Gmail").Page("Gmail").WebButton("btnLogin")
End With
'Initialize the Login class
oLogin.Initialize(ArgsDic)
'Initialize the dictionary to pass the arguments to the login
ArgsDic.RemoveAll
With ArgsDic
    .Add "Username", "myuser"
    .Add "Password", "myencriptedpassword"
    .Add "Remember", "OFF"
End With
'Login
oLogin.Run(ArgsDic) 'or: oLogin(ArgsDic)
'Report result
Reporter.ReportEvent oLogin.Status, "Login", "Ended with " & _
    GetStatusText(oLogin.Status) & "." & vbNewLine & oLogin.Info
'Dispose of the objects
Set oLogin = Nothing
Set ArgsDic = Nothing

How it works...

Here we will not delve into the parts of the code already explained in the Implementing a simple search class recipe. Let's see what we did in this recipe:

We registered the custom function WinEditSet to the WinEdit and WinEditor TO classes using RegisterUserFunc. As discussed previously, this will make every call to the Set method be rerouted to our custom function, resulting in applying the correct method to Standard Windows text fields.
Next, we created the objects we need: a Dictionary object and a Login object.
Then, we added the required test objects to the Dictionary and invoked the Initialize method, passing the Dictionary as the argument.
We cleared the Dictionary and then added to it the values needed for actually executing the login (Username, Password, and whether to set the Remember Me or Keep me Logged In checkbox usually found in Login forms).
We called the Run method of the Login class with the newly populated Dictionary.
Later, we reported the result by taking the Status and Info public fields from the oLogin object.
At the end of the script, we unregistered the custom function from all classes in the environment (StdWin in this case).

Implementing function pointers

What is a function pointer? A function pointer is a variable that stores the memory address of a block of code that is programmed to fulfill a specific function. Function pointers are useful to avoid complex switch-case structures. Instead, they support direct access at runtime to previously loaded functions or class methods. This enables the construction of callback functions. A callback is, in essence, executable code that is passed as an argument to a function. This enables more generic coding, by having lower-level modules calling higher-level functions or subroutines. This recipe will describe how to implement function pointers in VBScript, a scripting language that does not natively support the usage of pointers.

Getting ready

Create a new function library (for instance, cls.FunctionPointers.vbs) and associate it with your test.

How to do it...

Write the following code in the function library:

Class WebEditSet
    Public Default Function Run(ByRef obj, ByVal sText)
        On Error Resume Next
        Run = 1 'micFail (pessimistic initialization)
        Select Case True
            Case obj.Exist(0) And _
                 obj.GetROProperty("visible") And _
                 obj.GetROProperty("enabled")
                'Perform the set operation
                obj.Set(sText)
            Case Else
                Reporter.ReportEvent micWarning, TypeName(me), "Object not available."
                Exit Function
        End Select
        If Err.Number = 0 Then
            Run = 0 'micPass
        End If
    End Function
End Class

Write the following code in Action:

Dim pFunc
Set pFunc = New WebEditSet
Reporter.ReportEvent pFunc(Browser("Google").Page("Google").WebEdit("q"), "UFT"), _
    "Set the Google Search WebEdit", "Done"

How it works...

The WebEditSet class actually implements the command wrapper design pattern (refer also to the Implementing a generic Login class recipe). This recipe also demonstrates an alternative way of overriding any native UFT TO method without resorting to the RegisterUserFunc method.

First, we create an instance of the WebEditSet class and set the reference to our pFunc variable. Note that the Run method of WebEditSet is declared as a default function, so we can invoke its execution by merely referring to the object reference, as is done with the statement pFunc in the last line of code in the How to do it… section. This way, pFunc actually functions as if it were a function pointer.

Let us take a close look at the following line of code, beginning with Reporter.ReportEvent:

Reporter.ReportEvent pFunc(Browser("Google").Page("Google").WebEdit("q"), "UFT"), _
    "Set the Google Search WebEdit", "Done"

We call the ReportEvent method of Reporter and, as its first parameter, instead of a status constant such as micPass or micFail, we pass pFunc and the arguments accepted by the Run method (the target TO and its parameter, a string).
This way of using the function pointer actually implements a kind of callback. The value returned by the Run method of WebEditSet determines whether UFT reports a success or failure with regard to the Set operation. It is returned through the call invoked by accessing the function pointer.

See also

The following articles are part of a wider collection at www.advancedqtp.com, which also discusses function pointers in depth:

An article by Meir Bar-Tal at http://www.advancedqtp.com/function-pointers-in-vb-script-revised
An article by Meir Bar-Tal at http://www.advancedqtp.com/using-to-custom-property-as-function-pointer

Summary

In this article, we learned how to implement a general class: the basic concepts and the syntax required by VBScript to implement a class. Then we saw how to implement a simple class that can be used to execute a search on Google, and a generic Login class. We also saw how to implement function pointers in VBScript, along with various links to articles that discuss function pointers further.

Resources for Article:

Further resources on this subject: DOM and QTP [Article] Getting Started with Selenium Grid [Article] Quick Start into Selenium Tests [Article]
Connecting to a database

Packt
28 Nov 2014
20 min read
In this article by Christopher Ritchie, the author of WildFly Configuration, Deployment, and Administration, Second Edition, you will learn to configure enterprise services and components, such as transactions, connection pools, and Enterprise JavaBeans.

To allow your application to connect to a database, you will need to configure your server by adding a datasource. Upon server startup, each datasource is prepopulated with a pool of database connections. Applications acquire a database connection from the pool by doing a JNDI lookup and then calling getConnection(). Take a look at the following code:

Connection result = null;
try {
    Context initialContext = new InitialContext();
    DataSource datasource =
        (DataSource) initialContext.lookup("java:/MySqlDS");
    result = datasource.getConnection();
} catch (Exception ex) {
    log("Cannot get connection: " + ex);
}

After the connection has been used, you should always call connection.close() as soon as possible. This frees the connection and allows it to be returned to the connection pool, ready for other applications or processes to use.

Releases prior to JBoss AS 7 required a datasource configuration file (*-ds.xml) to be deployed with the application. Ever since the release of JBoss AS 7, this approach has no longer been mandatory due to the modular nature of the application server. Out of the box, the application server ships with the H2 open source database engine (http://www.h2database.com), which, because of its small footprint and browser-based console, is ideal for testing purposes. However, a real-world application requires an industry-standard database, such as the Oracle database or MySQL. In the following section, we will show you how to configure a datasource for the MySQL database.

Any database configuration requires a two-step procedure, which is as follows:

Installing the JDBC driver
Adding the datasource to your configuration

Let's look at each step in detail.

Installing the JDBC driver

In WildFly's modular server architecture, you have a couple of ways to install your JDBC driver. You can install it either as a module or as a deployment unit. The first and recommended approach is to install the driver as a module, and this is the approach we will follow here. The alternative, deploying the driver as a deployment unit, is faster, but it has various limitations, which we will cover shortly.

The first step to install a new module is to create the directory structure under the modules folder. The actual path for the module is JBOSS_HOME/modules/<module>/main. The main folder is where all the key module components are installed, namely, the driver and the module.xml file. So, next we need to add the following units:

JBOSS_HOME/modules/com/mysql/main/mysql-connector-java-5.1.30-bin.jar
JBOSS_HOME/modules/com/mysql/main/module.xml

The MySQL JDBC driver used in this example, also known as Connector/J, can be downloaded for free from the MySQL site (http://dev.mysql.com/downloads/connector/j/). At the time of writing, the latest version is 5.1.30.

The last thing to do is to create the module.xml file. This file contains the actual module definition. It is important to make sure that the module name (com.mysql) corresponds to the module attribute defined in your datasource.
Releases prior to JBoss AS 7 required a datasource configuration file (*-ds.xml) to be deployed with the application. Since the release of JBoss AS 7, this approach is no longer mandatory due to the modular nature of the application server. Out of the box, the application server ships with the H2 open source database engine (http://www.h2database.com), which, because of its small footprint and browser-based console, is ideal for testing purposes. However, a real-world application requires an industry-standard database, such as the Oracle database or MySQL. In the following section, we will show you how to configure a datasource for the MySQL database.

Any database configuration requires a two-step procedure, which is as follows:

Installing the JDBC driver
Adding the datasource to your configuration

Let's look at each step in detail.

Installing the JDBC driver

In WildFly's modular server architecture, you have a couple of ways to install your JDBC driver. You can install it either as a module or as a deployment unit. The first and recommended approach is to install the driver as a module. Installing the driver as a deployment unit is a faster approach, but it does have various limitations, which we will cover shortly.

The first step to install a new module is to create the directory structure under the modules folder. The actual path for the module is JBOSS_HOME/modules/<module>/main. The main folder is where all the key module components are installed, namely, the driver and the module.xml file. So, next we need to add the following units:

JBOSS_HOME/modules/com/mysql/main/mysql-connector-java-5.1.30-bin.jar
JBOSS_HOME/modules/com/mysql/main/module.xml

The MySQL JDBC driver used in this example, also known as Connector/J, can be downloaded for free from the MySQL site (http://dev.mysql.com/downloads/connector/j/). At the time of writing, the latest version is 5.1.30.

The last thing to do is to create the module.xml file. This file contains the actual module definition. It is important to make sure that the module name (com.mysql) corresponds to the module attribute defined in your datasource. You must also state the path to the JDBC driver resource and finally add the module dependencies, as shown in the following code:

<module xmlns="urn:jboss:module:1.3" name="com.mysql">
    <resources>
        <resource-root path="mysql-connector-java-5.1.30-bin.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>

Here is the final directory structure of this new module:

JBOSS_HOME/modules/
  com/
    mysql/
      main/
        module.xml
        mysql-connector-java-5.1.30-bin.jar

You will notice that there is a directory structure already within the modules folder. All the system libraries are housed inside the system/layers/base directory. Your custom modules should be placed directly inside the modules folder and not with the system modules.

Adding a local datasource

Once the JDBC driver is installed, you need to configure the datasource within the application server's configuration file. In WildFly, you can configure two kinds of datasources, local datasources and xa-datasources, which are distinguishable by the element name in the configuration file. A local datasource does not support two-phase commits and uses a java.sql.Driver. On the other hand, an xa-datasource supports two-phase commits and uses a javax.sql.XADataSource.

Adding a datasource definition can be completed by adding the definition within the server configuration file or by using the management interfaces. The management interfaces are the recommended way, as they will accurately update the configuration for you, which means that you do not need to worry about getting the correct syntax. In this article, we are going to add the datasource by modifying the server configuration file directly. Although this is not the recommended approach, it will allow you to get used to the syntax and layout of the file; a CLI alternative is sketched at the end of this section.

Here is a sample MySQL datasource configuration that you can copy into your datasources subsystem section within the standalone.xml configuration file:

<datasources>
  <datasource jndi-name="java:/MySqlDS" pool-name="MySqlDS_Pool"
      enabled="true" jta="true" use-java-context="true" use-ccm="true">
    <connection-url>jdbc:mysql://localhost:3306/MyDB</connection-url>
    <driver>mysql</driver>
    <pool/>
    <security>
      <user-name>jboss</user-name>
      <password>jboss</password>
    </security>
    <statement/>
    <timeout>
      <idle-timeout-minutes>0</idle-timeout-minutes>
      <query-timeout>600</query-timeout>
    </timeout>
  </datasource>
  <drivers>
    <driver name="mysql" module="com.mysql"/>
  </drivers>
</datasources>

As you can see, the configuration file uses the same XML schema definition as the earlier *-ds.xml files, so it will not be difficult to migrate to WildFly from previous releases. In WildFly, it's mandatory that the datasource is bound into the java:/ or java:jboss/ JNDI namespace. Let's take a look at the various elements of this file:

connection-url: This element is used to define the connection path to the database.
driver: This element is used to define the JDBC driver class.
pool: This element is used to define the JDBC connection pool properties. In this case, we are going to leave the default values.
security: This element is used to configure the connection credentials.
statement: This element is added just as a placeholder for statement-caching options.
timeout: This element is optional and contains a set of other elements, such as query-timeout, which is a static configuration of the maximum seconds before a query times out. The included idle-timeout-minutes element indicates the maximum time a connection may be idle before being closed; setting it to 0 disables it, and the default is 15 minutes.
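As mentioned earlier, the management interfaces are the recommended way to add a datasource. Purely as a sketch of ours—the exact option names should be verified against your WildFly version—the equivalent operations from the jboss-cli.sh command-line interface would look something like this, assuming the com.mysql module is already installed:

# Register the JDBC driver backed by the com.mysql module
/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql, driver-module-name=com.mysql)

# Create the datasource itself
data-source add --name=MySqlDS_Pool --jndi-name=java:/MySqlDS --driver-name=mysql --connection-url=jdbc:mysql://localhost:3306/MyDB --user-name=jboss --password=jboss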
Configuring the connection pool

One key aspect of the datasource configuration is the pool element. You can use connection pooling without modifying any of the existing WildFly configuration, as WildFly will fall back on default settings. If you want to customize the pooling configuration, for example, to change the pool size or the types of connections that are pooled, you will need to learn how to modify the configuration file. Here's an example of a pool configuration, which can be added to your datasource configuration:

<pool>
    <min-pool-size>5</min-pool-size>
    <max-pool-size>10</max-pool-size>
    <prefill>true</prefill>
    <use-strict-min>true</use-strict-min>
    <flush-strategy>FailingConnectionOnly</flush-strategy>
</pool>

The attributes included in the pool configuration are actually borrowed from earlier releases, so we include them here for your reference:

initial-pool-size: The initial number of connections a pool should hold (the default is 0).
min-pool-size: The minimum number of connections in the pool (the default is 0).
max-pool-size: The maximum number of connections in the pool (the default is 20).
prefill: This attempts to prefill the connection pool to the minimum number of connections.
use-strict-min: This determines whether idle connections below min-pool-size should be closed.
allow-multiple-users: This determines whether multiple users can access the datasource through the getConnection method. This has changed slightly in WildFly: the line <allow-multiple-users>true</allow-multiple-users> is required, whereas in JBoss AS 7 the empty element <allow-multiple-users/> was used.
capacity: This specifies the capacity policies for the pool—either incrementer or decrementer.
connection-listener: Here, you can specify an org.jboss.jca.adapters.jdbc.spi.listener.ConnectionListener that allows you to listen for connection callbacks, such as activation and passivation.
flush-strategy: This specifies how the pool should be flushed in the event of an error (the default is FailingConnectionsOnly).

Configuring the statement cache

For each connection within a connection pool, the WildFly server is able to create a statement cache. When a prepared statement or callable statement is used, WildFly will cache the statement so that it can be reused. In order to activate the statement cache, you have to specify a value greater than 0 within the prepared-statement-cache-size element. Take a look at the following code:

<statement>
    <track-statements>true</track-statements>
    <prepared-statement-cache-size>10</prepared-statement-cache-size>
    <share-prepared-statements/>
</statement>

Notice that we have also set track-statements to true. This will enable automatic closing of statements and ResultSets, which is important if you want to use prepared statement caching and/or want to prevent cursor leaks.

The last element, share-prepared-statements, can only be used when the prepared statement cache is enabled. This property determines whether two requests in the same transaction should return the same statement (the default is false).
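The pool attributes can also be written through the CLI instead of hand-editing standalone.xml. Here is a minimal sketch of ours, assuming the datasource created earlier (the datasources subsystem keys the resource by its pool name):

/subsystem=datasources/data-source=MySqlDS_Pool:write-attribute(name=min-pool-size, value=5)
/subsystem=datasources/data-source=MySqlDS_Pool:write-attribute(name=max-pool-size, value=10)

# Some attribute changes only take effect after a server reload
reload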
Adding an xa-datasource

Adding an xa-datasource requires some modification to the datasource configuration. The xa-datasource is configured within its own xa-datasource element, and its connection properties are set through xa-datasource-property elements. You will also need to specify the xa-datasource class within the driver element. In the following code, we will add a configuration for our MySQL JDBC driver, which will be used to set up an xa-datasource:

<datasources>
  <xa-datasource jndi-name="java:/XAMySqlDS" pool-name="MySqlDS_Pool"
      enabled="true" use-java-context="true" use-ccm="true">
    <xa-datasource-property name="URL">jdbc:mysql://localhost:3306/MyDB</xa-datasource-property>
    <xa-datasource-property name="User">jboss</xa-datasource-property>
    <xa-datasource-property name="Password">jboss</xa-datasource-property>
    <driver>mysql-xa</driver>
  </xa-datasource>
  <drivers>
    <driver name="mysql-xa" module="com.mysql">
      <xa-datasource-class>com.mysql.jdbc.jdbc2.optional.MysqlXADataSource</xa-datasource-class>
    </driver>
  </drivers>
</datasources>

Datasource versus xa-datasource

You should use an xa-datasource in cases where a single transaction spans multiple datasources, for example, if a method consumes a Java Message Service (JMS) message and updates a Java Persistence API (JPA) entity.

Installing the driver as a deployment unit

In the WildFly application server, every library is a module. Thus, simply deploying the JDBC driver to the application server will trigger its installation. If the JDBC driver consists of more than a single JAR file, you will not be able to install the driver as a deployment unit. In this case, you will have to install the driver as a core module.

To install the database driver as a deployment unit, simply copy the mysql-connector-java-5.1.30-bin.jar driver into the JBOSS_HOME/standalone/deployments folder of your installation. Once you have deployed your JDBC driver, you still need to add the datasource to your server configuration file. The simplest way to do this is to paste the following datasource definition into the configuration file, as follows:

<datasource jndi-name="java:/MySqlDS" pool-name="MySqlDS_Pool"
    enabled="true" jta="true" use-java-context="true" use-ccm="true">
  <connection-url>jdbc:mysql://localhost:3306/MyDB</connection-url>
  <driver>mysql-connector-java-5.1.30-bin.jar</driver>
  <pool/>
  <security>
    <user-name>jboss</user-name>
    <password>jboss</password>
  </security>
</datasource>

Alternatively, you can use the command-line interface (CLI) or the web administration console to achieve the same result.

What about domain deployment?

In this article, we are discussing the configuration of standalone servers. The services can also be configured in domain servers. Domain servers, however, don't have a specific folder scanned for deployment. Rather, the management interfaces are used to inject resources into the domain.

Choosing the right driver deployment strategy

At this point, you might wonder about a best practice for deploying the JDBC driver. Installing the driver as a deployment unit is a handy shortcut; however, it can limit its usage. Firstly, it requires a JDBC 4-compliant driver. Deploying a non-JDBC-4-compliant driver is possible, but it requires a simple patching procedure. To do this, create a META-INF/services structure containing a java.sql.Driver file whose content is the driver class name. For example, let's suppose you have to patch a MySQL driver—the content will be com.mysql.jdbc.Driver. Once you have created this structure, you can package your JDBC driver with any zipping utility or the jar command: jar -uf <your-jdbc-driver.jar> META-INF/services/java.sql.Driver.
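Spelled out as shell commands, this patching procedure amounts to the following sketch (the JAR name is just a placeholder):

# Create the services structure containing the driver class name
mkdir -p META-INF/services
echo "com.mysql.jdbc.Driver" > META-INF/services/java.sql.Driver

# Add the file to the existing driver JAR
jar -uf your-jdbc-driver.jar META-INF/services/java.sql.Driver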
The most current JDBC drivers are compliant with JDBC 4 although, curiously, not all are recognized as such by the application server. The following list describes some of the most used drivers and their JDBC compliance:

MySQL (mysql-connector-java-5.1.30-bin.jar): JDBC 4 compliant, though not recognized as compliant by WildFly; contains java.sql.Driver.
PostgreSQL (postgresql-9.3-1101.jdbc4.jar): JDBC 4 compliant, though not recognized as compliant by WildFly; contains java.sql.Driver.
Oracle (ojdbc6.jar/ojdbc5.jar): JDBC 4 compliant; contains java.sql.Driver.
Oracle (ojdbc4.jar): not JDBC 4 compliant; does not contain java.sql.Driver.

As you can see, the most notable exception in the list of drivers is the older Oracle ojdbc4.jar, which is not compliant with JDBC 4 and does not contain the driver information in META-INF/services/java.sql.Driver.

The second issue with driver deployment is related to the specific case of xa-datasources. Installing the driver as a deployment means that the application server by itself cannot deduce the information about the xa-datasource class used in the driver. Since this information is not contained inside META-INF/services, you are forced to specify information about the xa-datasource class for each xa-datasource you are going to create. When you install a driver as a module, the xa-datasource class information can be shared by all the installed datasources:

<driver name="mysql-xa" module="com.mysql">
    <xa-datasource-class>com.mysql.jdbc.jdbc2.optional.MysqlXADataSource</xa-datasource-class>
</driver>

So, if you are not too limited by these issues, installing the driver as a deployment is a handy shortcut that can be used in your development environment. For a production environment, it is recommended that you install the driver as a static module.

Configuring a datasource programmatically

After installing your driver, you may want to limit the amount of application configuration in the server file. This can be done by configuring your datasource programmatically. This option requires zero modification to your configuration file, which means greater application portability. Programmatic configuration of a datasource is one of the cool features of Java EE and can be achieved by using the @DataSourceDefinition annotation, as follows:

@DataSourceDefinition(name = "java:/OracleDS",
    className = "oracle.jdbc.pool.OracleDataSource",
    portNumber = 1521,
    serverName = "192.168.1.1",
    databaseName = "OracleSID",
    user = "scott",
    password = "tiger",
    properties = {"createDatabase=create"})
@Singleton
public class DataSourceEJB {
    @Resource(lookup = "java:/OracleDS")
    private DataSource ds;
}

In this example, we defined a datasource for an Oracle database; note that className must reference a javax.sql.DataSource implementation, such as oracle.jdbc.pool.OracleDataSource. It's important to note that, when configuring a datasource programmatically, you will actually bypass JCA, which proxies requests between the client and the connection pool.

The obvious advantage of this approach is that you can move your application from one application server to another without the need for reconfiguring its datasources. On the other hand, by modifying the datasource within the configuration file, you will be able to utilize the full benefits of the application server, many of which are required for enterprise applications.
Configuring the Enterprise JavaBeans container

The Enterprise JavaBeans (EJB) container is a fundamental part of the Java Enterprise architecture. The EJB container provides the environment used to host and manage the EJB components deployed in the container. The container is responsible for providing a standard set of services, including caching, concurrency, persistence, security, transaction management, and locking services. The container also provides distributed access and lookup functions for hosted components, and it intercepts all method invocations on hosted components to enforce declarative security and transaction contexts. You will be able to deploy the full set of EJB components within WildFly:

Stateless session bean (SLSB): SLSBs are objects whose instances have no conversational state. This means that all bean instances are equivalent when they are not servicing a client.

Stateful session bean (SFSB): SFSBs support conversational services with tightly coupled clients. A stateful session bean accomplishes a task for a particular client. It maintains the state for the duration of a client session. After session completion, the state is not retained.

Message-driven bean (MDB): MDBs are a kind of enterprise bean that is able to asynchronously process messages sent by any JMS producer.

Singleton EJB: This is essentially similar to a stateless session bean; however, it uses a single instance to serve the client requests. Thus, you are guaranteed to use the same instance across invocations. Singletons can use a set of events with a richer life cycle and a stricter locking policy to control concurrent access to the instance.

No-interface EJB: This is just another view of the standard session bean, except that local clients do not require a separate interface; that is, all public methods of the bean class are automatically exposed to the caller. Interfaces should only be used in EJB 3.x if you have multiple implementations.

Asynchronous EJB: These are able to process client requests asynchronously just like MDBs, except that they expose a typed interface and follow a more complex approach to processing client requests, which are composed of: the fire-and-forget asynchronous void methods, which are invoked by the client, and the retrieve-result-later asynchronous methods having a Future<?> return type.

EJB components that don't keep conversational state (SLSB and MDB) can be optionally configured to emit timed notifications.

Configuring the EJB components

Now that we have briefly outlined the basic types of EJB, we will look at the specific details of the application server configuration. This comprises the following components: the SLSB configuration, the SFSB configuration, the MDB configuration, and the Timer service configuration. Let's see them all in detail.

Configuring the stateless session beans

EJBs are configured within the ejb3 subsystem. By default, no stateless session bean instances exist in WildFly at startup time. As individual beans are invoked, the EJB container initializes new SLSB instances. These instances are then kept in a pool that will be used to service future EJB method calls. The EJB remains active for the duration of the client's method call. After the method call is complete, the EJB instance is returned to the pool.
Because the EJB container unbinds stateless session beans from clients after each method call, the actual bean class instance that a client uses can be different from invocation to invocation.

If all instances of an EJB class are active and the pool's maximum pool size has been reached, new clients requesting the EJB class will be blocked until an active EJB completes a method call. Depending on how you have configured your stateless pool, an acquisition timeout can be triggered if you are not able to acquire an instance from the pool within a maximum time.

You can either configure your session pool through your main configuration file or programmatically. Let's look at both approaches, starting with the main configuration file. In order to configure your pool, you can operate on two parameters: the maximum size of the pool (max-pool-size) and the instance acquisition timeout (instance-acquisition-timeout). Let's see an example:

<subsystem xmlns="urn:jboss:domain:ejb3:2.0">
  <session-bean>
    <stateless>
      <bean-instance-pool-ref pool-name="slsb-strict-max-pool"/>
    </stateless>
    ...
  </session-bean>
  ...
  <pools>
    <bean-instance-pools>
      <strict-max-pool name="slsb-strict-max-pool" max-pool-size="25"
          instance-acquisition-timeout="5"
          instance-acquisition-timeout-unit="MINUTES"/>
    </bean-instance-pools>
  </pools>
  ...
</subsystem>

In this example, we have configured the SLSB pool with a strict upper limit of 25 elements. The strict maximum pool is the only available pool instance implementation; it allows a fixed number of concurrent requests to run at one time. If there are more requests running than the pool's strict maximum size, those requests will get blocked until an instance becomes available. Within the pool configuration, we have also set an instance-acquisition-timeout value of 5 minutes, which will come into play if your requests outnumber the pool size.

You can configure as many pools as you like. The pool used by the EJB container is indicated by the pool-name attribute on the bean-instance-pool-ref element. For example, here we have added one more pool configuration, large-pool, and set it as the EJB container's pool implementation. Have a look at the following code:

<subsystem xmlns="urn:jboss:domain:ejb3:2.0">
  <session-bean>
    <stateless>
      <bean-instance-pool-ref pool-name="large-pool"/>
    </stateless>
  </session-bean>
  <pools>
    <bean-instance-pools>
      <strict-max-pool name="large-pool" max-pool-size="100"
          instance-acquisition-timeout="5"
          instance-acquisition-timeout-unit="MINUTES"/>
      <strict-max-pool name="slsb-strict-max-pool" max-pool-size="25"
          instance-acquisition-timeout="5"
          instance-acquisition-timeout-unit="MINUTES"/>
    </bean-instance-pools>
  </pools>
</subsystem>

Using the CLI to configure the stateless pool size

We have detailed the steps necessary to configure the SLSB pool size through the main configuration file. However, the suggested best practice is to use the CLI to alter the server model.
Here's how you can add a new pool named large-pool to your ejb3 subsystem:

/subsystem=ejb3/strict-max-bean-instance-pool=large-pool:add(max-pool-size=100)

Now, you can set this pool as the default to be used by the EJB container, as follows:

/subsystem=ejb3:write-attribute(name=default-slsb-instance-pool, value=large-pool)

Finally, you can, at any time, change the pool size property by operating on the max-pool-size attribute, as follows:

/subsystem=ejb3/strict-max-bean-instance-pool=large-pool:write-attribute(name=max-pool-size, value=50)

Summary

In this article, we continued the analysis of the application server configuration by looking at Java's enterprise services. We first learned how to configure datasources, which can be used to add database connectivity to your applications. Installing a datasource in WildFly 8 requires two simple steps: installing the JDBC driver and adding the datasource to the server configuration. We then looked at the Enterprise JavaBeans subsystem, which allows you to configure and tune your EJB container, focusing on the basic EJB component configuration of SLSBs.

Resources for Article:

Further resources on this subject:

Dart with JavaScript [article]
Creating Java EE Applications [article]
OpenShift for Java Developers [article]

Using front controllers to create a new page

Packt
28 Nov 2014
22 min read
In this article, by Fabien Serny, author of PrestaShop Module Development, you will learn about controllers and object models. Controllers handle display on the frontend and permit us to create new page types. Object models handle all required database requests. We will also see that, sometimes, hooks are not enough and can't change the way PrestaShop works. In these cases, we will use overrides, which permit us to alter the default process of PrestaShop without making changes in the core code. (For more resources related to this topic, see here.)

If you need to create a complex module, you will need to use front controllers. First of all, using front controllers will permit you to split the code into several classes (and files) instead of coding all your module actions in the same class. Also, unlike hooks (which handle some of the display in the existing PrestaShop pages), they will allow you to create new pages.

Creating the front controller

To make this section easier to understand, we will make an improvement on our current module. Instead of displaying all of the comments (there can be many), we will only display the last three comments and a link that redirects to a page containing all the comments of the product.

First of all, we will add a limit to the Db request in the assignProductTabContent method of your module class that retrieves the comments on the product page:

$comments = Db::getInstance()->executeS('
    SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$id_product.'
    ORDER BY `date_add` DESC
    LIMIT 3');

Now, if you go to a product, you should only see the last three comments.

We will now create a controller that will display all comments concerning a specific product. Go to your module's root directory and create the following directory path: /controllers/front/

Create the file that will contain the controller. You have to choose a simple and explicit name, since the filename will be used in the URL; let's name it comments.php. In this file, create a class and name it, ensuring that you follow the [ModuleName][ControllerFilename]ModuleFrontController convention, which extends the ModuleFrontController class. So in our case, the file will be as follows:

<?php
class MyModCommentsCommentsModuleFrontController extends ModuleFrontController
{
}

The naming convention has been defined by PrestaShop and must be respected. The class names are a bit long, but they enable us to avoid having two identical class names in different modules.

Now you just have to set the template file you want to display with the following lines:

class MyModCommentsCommentsModuleFrontController extends ModuleFrontController
{
    public function initContent()
    {
        parent::initContent();
        $this->setTemplate('list.tpl');
    }
}

Next, create a template named list.tpl and place it in views/templates/front/ of your module's directory:

<h1>{l s='Comments' mod='mymodcomments'}</h1>

Now, you can check the result by loading this link on your shop:

/index.php?fc=module&module=mymodcomments&controller=comments

You should see the Comments title displayed. The fc parameter defines the front controller type, the module parameter defines in which module directory the front controller is, and, at last, the controller parameter defines which controller file to load.

Maintaining compatibility with the Friendly URL option

In order to let the visitor access the controller page we created in the preceding section, we will just add a link between the last three comments displayed and the comment form in the displayProductTabContent.tpl template.
To maintain compatibility with the Friendly URL option of PrestaShop, we will use the getModuleLink method. This will generate a URL according to the URL settings (defined in Preferences | SEO & URLs). If the Friendly URL option is enabled, then it will generate a friendly URL (for example, /en/5-tshirts-doctor-who); if not, it will generate a classic URL (for example, /index.php?id_category=5&controller=category&id_lang=1).

This function takes three parameters: the name of the module, the controller filename you want to call, and an array of parameters. The array of parameters must contain all of the data that's needed, which will be used by the controller. In our case, we will need at least the product identifier, id_product, to display only the comments related to the product. We can also add a module_action parameter just in case our controller contains several possible actions.

Here is an example. As you will notice, I created the parameters array directly in the template using the assign Smarty method. From my point of view, it is easier to have the content of the parameters close to the link. However, if you want, you can create this array in your module class and assign it to your template in order to have cleaner code:

<div class="rte">
    {assign var=params value=['module_action' => 'list', 'id_product' => $smarty.get.id_product]}
    <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">
        {l s='See all comments' mod='mymodcomments'}
    </a>
</div>

Now, go to your product page and click on the link; the URL displayed should look something like this:

/index.php?module_action=list&id_product=1&fc=module&module=mymodcomments&controller=comments&id_lang=1

Creating a small action dispatcher

In our case, we won't need to have several possible actions in the comments controller. However, it would be great to create a small dispatcher in our front controller just in case we want to add other actions later. To do so, in controllers/front/comments.php, we will create new methods corresponding to each action. I propose to use the init[Action] naming convention (but this is not mandatory). So in our case, it will be a method named initList:

protected function initList()
{
    $this->setTemplate('list.tpl');
}

Now in the initContent method, we will create an $actions_list array containing all possible actions and associated callbacks:

$actions_list = array('list' => 'initList');

Now, we will retrieve the id_product and module_action parameters in variables. Once complete, we will check whether the id_product parameter is valid and whether the action exists by checking the $actions_list array. If the method exists, we will dynamically call it:

if ($id_product > 0 && isset($actions_list[$module_action]))
    $this->$actions_list[$module_action]();

Here's what your code should look like:

public function initContent()
{
    parent::initContent();
    $id_product = (int)Tools::getValue('id_product');
    $module_action = Tools::getValue('module_action');
    $actions_list = array('list' => 'initList');
    if ($id_product > 0 && isset($actions_list[$module_action]))
        $this->$actions_list[$module_action]();
}

If you did this correctly, nothing should have changed when you refresh the page in your browser, and the Comments title should still be displayed.

Displaying the product name and comments

We will now display the product name (to let the visitor know he or she is on the right page) and the associated comments.
First of all, create a public variable, $product, in your controller class, and set it in the initContent method with an instance of the selected product. This way, the product object will be available in every action method:

$this->product = new Product((int)$id_product, false, $this->context->cookie->id_lang);

In the initList method, just before setTemplate, we will make a DB request to get all comments associated with the product and then assign the product object and the comments list to Smarty:

// Get comments
$comments = Db::getInstance()->executeS('
    SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$this->product->id.'
    ORDER BY `date_add` DESC');

// Assign comments and product object
$this->context->smarty->assign('comments', $comments);
$this->context->smarty->assign('product', $this->product);

Once complete, we will display the product name by changing the h1 title:

<h1>{l s='Comments on product' mod='mymodcomments'} "{$product->name}"</h1>

If you refresh your page, you should now see the product name displayed. I won't explain the comment-listing markup, since it's exactly the same HTML code we used in the displayProductTabContent.tpl template. At this point, the comments should appear without the CSS style; do not panic, just go to the next section of this article.

Including CSS and JS media in the controller

As you can see, the comments are now displayed. However, you are probably asking yourself why the CSS style hasn't been applied properly. If you look back at your module class, you will see that it is the hookDisplayProductTab hook in the product page that includes the CSS and JS files. The problem is that we are not on a product page here, so we have to include them on this page ourselves.

To do so, we will create a method named setMedia in our controller and add the CSS and JS files (as we did in the hookDisplayProductTab hook). It will override the default setMedia method contained in the FrontController class. Since this method includes general CSS and JS files used by PrestaShop, it is very important to call the setMedia parent method in our override:

public function setMedia()
{
    // We call the parent method
    parent::setMedia();

    // Save the module path in a variable
    $this->path = __PS_BASE_URI__.'modules/mymodcomments/';

    // Include the module CSS and JS files needed
    $this->context->controller->addCSS($this->path.'views/css/starrating.css', 'all');
    $this->context->controller->addJS($this->path.'views/js/starrating.js');
    $this->context->controller->addCSS($this->path.'views/css/mymodcomments.css', 'all');
    $this->context->controller->addJS($this->path.'views/js/mymodcomments.js');
}

If you refresh your browser, the comments should now appear well formatted.

In an attempt to improve the display, we will just add the date of the comment beside the author's name. Just replace <p>{$comment.firstname} {$comment.lastname|substr:0:1}.</p> in your list.tpl template with this line:

<div>{$comment.firstname} {$comment.lastname|substr:0:1}. <small>{$comment.date_add|substr:0:10}</small></div>

You can also replace the same line in the displayProductTabContent.tpl template if you want. If you want more information on how Smarty modifiers such as substr (which I used for the date) work, you can check the official Smarty documentation.

Adding a pagination system

Your controller page is now fully working. However, if one of your products has thousands of comments, the display won't be quick. We will add a pagination system to handle this case.
First of all, in the initList method, we need to set the number of comments per page and find out how many comments are associated with the product:

// Get number of comments
$nb_comments = Db::getInstance()->getValue('
    SELECT COUNT(`id_product`)
    FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$this->product->id);

// Init
$nb_per_page = 10;

By default, I have set the number per page to 10, but you can set any number you want. The value is stored in a variable to easily change the number, if needed. Now we just have to calculate how many pages there will be:

$nb_pages = ceil($nb_comments / $nb_per_page);

Also, set the page the visitor is on:

$page = 1;
if (Tools::getValue('page') != '')
    $page = (int)$_GET['page'];

Now that we have this data, we can generate the SQL limit and use it in the comment's DB request in such a way so as to display the 10 comments corresponding to the page the visitor is on:

$limit_start = ($page - 1) * $nb_per_page;
$limit_end = $nb_per_page;
$comments = Db::getInstance()->executeS('
    SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$this->product->id.'
    ORDER BY `date_add` DESC
    LIMIT '.(int)$limit_start.','.(int)$limit_end);

If you refresh your browser, you should only see the last 10 comments displayed. To conclude, we just need to add links to the different pages for navigation. First, assign the page the visitor is on and the total number of pages to Smarty:

$this->context->smarty->assign('page', $page);
$this->context->smarty->assign('nb_pages', $nb_pages);

Then in the list.tpl template, we will display numbers in a list from 1 to the total number of pages. On each number, we will add a link with the getModuleLink method we saw earlier, with an additional parameter, page:

<ul class="pagination">
    {for $count=1 to $nb_pages}
        {assign var=params value=['module_action' => 'list', 'id_product' => $smarty.get.id_product, 'page' => $count]}
        <li>
            <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">
                <span>{$count}</span>
            </a>
        </li>
    {/for}
</ul>

To make the pagination clearer for the visitor, we can use the native CSS class to indicate the page the visitor is on:

{if $page ne $count}
    <li>
        <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">
            <span>{$count}</span>
        </a>
    </li>
{else}
    <li class="active current"><span><span>{$count}</span></span></li>
{/if}

Your pagination should now be fully working.
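The tutorial stops at numbered page links; purely as an optional extension of our own (not part of the original article), previous/next links can be built with the very same getModuleLink pattern:

{if $page > 1}
    {assign var=params value=['module_action' => 'list', 'id_product' => $smarty.get.id_product, 'page' => $page-1]}
    <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">{l s='Previous' mod='mymodcomments'}</a>
{/if}
{if $page < $nb_pages}
    {assign var=params value=['module_action' => 'list', 'id_product' => $smarty.get.id_product, 'page' => $page+1]}
    <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">{l s='Next' mod='mymodcomments'}</a>
{/if}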
Creating routes for a module's controller

At the beginning of this article, we chose to use the getModuleLink method to keep compatibility with the Friendly URL option of PrestaShop. Let's enable this option in the SEO & URLs section under Preferences. Now go to your product page and look at the target of the See all comments link; it should have changed from /index.php?module_action=list&id_product=1&fc=module&module=mymodcomments&controller=comments&id_lang=1 to /en/module/mymodcomments/comments?module_action=list&id_product=1.

The result is nice, but it is not really a friendly URL yet. The ISO code at the beginning of the URL appears only if you have enabled several languages; so if you have only one language enabled, the ISO code will not appear in your URLs.

Since PrestaShop 1.5.3, you can create specific routes for your module's controllers. To do so, you have to attach your module to the ModuleRoutes hook. In your module's install method in mymodcomments.php, add the registerHook method for ModuleRoutes:

// Register hooks
if (!$this->registerHook('displayProductTabContent') ||
    !$this->registerHook('displayBackOfficeHeader') ||
    !$this->registerHook('ModuleRoutes'))
    return false;

Don't forget; you will have to uninstall/install your module if you want it to be attached to this hook. If you don't want to uninstall your module (because you don't want to lose all the comments you filled in), you can go to the Positions section under the Modules section of your back office and hook it manually.

Now we have to create the corresponding hook method in the module's class. This method will return an array with all the routes we want to add. The array is a bit complex to explain, so let me write an example first:

public function hookModuleRoutes()
{
    return array(
        'module-mymodcomments-comments' => array(
            'controller' => 'comments',
            'rule' => 'product-comments{/:module_action}{/:id_product}/page{/:page}',
            'keywords' => array(
                'id_product' => array('regexp' => '[\d]+', 'param' => 'id_product'),
                'page' => array('regexp' => '[\d]+', 'param' => 'page'),
                'module_action' => array('regexp' => '[\w]+', 'param' => 'module_action'),
            ),
            'params' => array(
                'fc' => 'module',
                'module' => 'mymodcomments',
                'controller' => 'comments'
            )
        )
    );
}

The array can contain several routes. The naming convention for the array key of a route is module-[ModuleName]-[ModuleControllerName]. So in our case, the key will be module-mymodcomments-comments.

In the array, you have to set the following:

The controller; in our case, it is comments.

The construction of the route (the rule parameter). You can use all the parameters you passed in the getModuleLink method by using the {/:YourParameter} syntax. PrestaShop will automatically add / before each dynamic parameter. In our case, I chose to construct the route this way (but you can change it if you want): product-comments{/:module_action}{/:id_product}/page{/:page}

The keywords array corresponding to the dynamic parameters. For each dynamic parameter, you have to set the regexp, which will permit retrieving it from the URL (basically, [\d]+ for integer values and [\w]+ for string values), and the parameter name.

The parameters associated with the route. In the case of a module's front controller, it will always be the same three parameters: the fc parameter set with the fixed value module, the module parameter set with the module name, and the controller parameter set with the filename of the module's controller.

Very important: PrestaShop now expects a page parameter to build the link. To avoid fatal errors, you will have to set the page parameter to 1 in your getModuleLink parameters in the displayProductTabContent.tpl template:

{assign var=params value=['module_action' => 'list', 'id_product' => $smarty.get.id_product, 'page' => 1]}

Once complete, if you go to a product page, the target of the See all comments link should now be:

/en/product-comments/list/1/page/1

It's really better, but we can improve it a little more by setting the name of the product in the URL.
In the assignProductTabContent method of your module, we will load the product object and assign it to Smarty:

$product = new Product((int)$id_product, false, $this->context->cookie->id_lang);
$this->context->smarty->assign('product', $product);

This way, in the displayProductTabContent.tpl template, we will be able to add the product's rewritten link to the parameters of the getModuleLink method (do not forget to add it in the list.tpl template too!):

{assign var=params value=['module_action' => 'list', 'product_rewrite' => $product->link_rewrite, 'id_product' => $smarty.get.id_product, 'page' => 1]}

We can now update the rule of the route with the product's link_rewrite variable:

'rule' => 'product-comments{/:module_action}{/:product_rewrite}{/:id_product}/page{/:page}',

Do not forget to add the product_rewrite string in the keywords array of the route:

'product_rewrite' => array('regexp' => '[\w-_]+', 'param' => 'product_rewrite'),

If you refresh your browser, the link should look like this now:

/en/product-comments/list/tshirt-doctor-who/1/page/1

Nice, isn't it?

Installing overrides with modules

As we saw in the introduction of this article, sometimes hooks are not sufficient to meet the needs of developers; hooks can't alter the default process of PrestaShop. We could add code to core classes; however, this is not recommended, as all those core changes will be erased when PrestaShop is updated using the autoupgrade module (even a manual upgrade would be difficult). That's where overrides take the stage.

Creating the override class

Installing new object model and controller overrides on PrestaShop is very easy. To do so, you have to create an override directory in the root of your module's directory. Then, you just have to place your override files respecting the path of the original file that you want to override. When you install the module, PrestaShop will automatically move the override to the overrides directory of PrestaShop.

In our case, we will override the find method of the /classes/Search.php class to display the grade and the number of comments on the product list. So we just have to create the Search.php file in /modules/mymodcomments/override/classes/Search.php, and fill it with:

<?php
class Search extends SearchCore
{
    public static function find($id_lang, $expr, $page_number = 1,
        $page_size = 1, $order_by = 'position', $order_way = 'desc',
        $ajax = false, $use_cookie = true, Context $context = null)
    {
    }
}

In this method, first of all, we will call the parent method to get the products list and return it:

// Call parent method
$find = parent::find($id_lang, $expr, $page_number, $page_size,
    $order_by, $order_way, $ajax, $use_cookie, $context);

// Return products
return $find;

We want to add information (the grade and number of comments) to the products list. So, between the find method call and the return statement, we will add some lines of code. First, we will check whether $find contains products. The find method can return an empty array when no products match the search. In this case, we don't have to change the way this method works.
We also have to check whether the mymodcomments module is installed (if the override is being used, the module is most likely installed, but as I said, it's just for safety):

if (isset($find['result']) && !empty($find['result']) &&
    Module::isInstalled('mymodcomments'))
{
}

If we enter this condition, we will list the product identifiers returned by the parent find method:

// List id product
$products = $find['result'];
$id_product_list = array();
foreach ($products as $p)
    $id_product_list[] = (int)$p['id_product'];

Next, we will retrieve the grade average and number of comments for the products in the list:

// Get grade average and nb comments for products in list
$grades_comments = Db::getInstance()->executeS('
    SELECT `id_product`, AVG(`grade`) as grade_avg,
        count(`id_mymod_comment`) as nb_comments
    FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` IN ('.implode(',', $id_product_list).')
    GROUP BY `id_product`');

Finally, fill in the $products array with the data (grades and comments) corresponding to each product:

// Associate grade and nb comments with product
foreach ($products as $kp => $p)
    foreach ($grades_comments as $gc)
        if ($gc['id_product'] == $p['id_product'])
        {
            $products[$kp]['mymodcomments']['grade_avg'] = round($gc['grade_avg']);
            $products[$kp]['mymodcomments']['nb_comments'] = $gc['nb_comments'];
        }
$find['result'] = $products;

Now, as we saw at the beginning of this section, the overrides of the module are installed when you install the module. So you will have to uninstall/install your module. Once this is done, you can check the override contained in your module; the content of /modules/mymodcomments/override/classes/Search.php should be copied to /override/classes/Search.php. If an override of the class already exists, PrestaShop will try to merge it by adding the methods you want to override to the existing override class.

Once the override is added by your module, PrestaShop should have regenerated the cache/class_index.php file (which contains the path of every core class and controller), and the path of the Search class should have changed. Open the cache/class_index.php file and search for 'Search'; the corresponding entry should now be:

'Search' => array(
    'path' => 'override/classes/Search.php',
    'type' => 'class',
),

If it's not the case, it probably means the permissions of this file are wrong and PrestaShop could not regenerate it. To fix this, just delete the file manually and refresh any page of your PrestaShop. The file will be regenerated and the new path will appear.

Since you uninstalled/installed the module, all your comments will have been deleted. So take 2 minutes to fill in one or two comments on a product. Then search for this product. As you must have noticed, nothing has changed. The data is assigned to Smarty, but not used by the template yet.

To avoid deletion of comments each time you uninstall the module, you should comment out the loadSQLFile call in the uninstall method of mymodcomments.php. We will uncomment it once we have finished working with the module.

Editing the template file to display grades on the products list

In a perfect world, you should avoid using overrides. In this case, we could have used the displayProductListReviews hook, but I just wanted to show you a simple example with an override. Moreover, this hook exists only since PrestaShop 1.6, so it would not work on PrestaShop 1.5.
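For comparison, here is a rough sketch of ours—not the approach followed in this article—of what the hook-based implementation could look like on PrestaShop 1.6. We assume the theme passes the current product row to the hook as $params['product'] (as the default 1.6 theme does), and displayProductListReviews.tpl is a hypothetical template of the module:

public function hookDisplayProductListReviews($params)
{
    // Assumption: the theme supplies the current product row
    $id_product = (int)$params['product']['id_product'];

    // Reuse the same aggregate query as the override, for one product
    $row = Db::getInstance()->getRow('
        SELECT AVG(`grade`) as grade_avg, COUNT(`id_mymod_comment`) as nb_comments
        FROM `'._DB_PREFIX_.'mymod_comment`
        WHERE `id_product` = '.$id_product);

    $this->context->smarty->assign('mymodcomments', $row);
    return $this->display(__FILE__, 'displayProductListReviews.tpl');
}

No override—and therefore no uninstall/install cycle—would be needed in that case, at the cost of dropping PrestaShop 1.5 support.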
Now, we will have to edit the product-list.tpl template of the active theme (by default, it is /themes/default-bootstrap/), so the module won't be a turnkey module anymore. A merchant who installs this module will have to manually edit this template if he wants to have this feature.

In the product-list.tpl template, just after the short description, check whether the $product.mymodcomments variable exists (to test if there are comments on the product), and then display the grade average and the number of comments:

{if isset($product.mymodcomments)}
    <p>
        <b>{l s='Grade:'}</b> {$product.mymodcomments.grade_avg}/5<br/>
        <b>{l s='Number of comments:'}</b> {$product.mymodcomments.nb_comments}
    </p>
{/if}

The products list will now display the grade average and the number of comments under each product's short description.

Creating a new method in a native class

In our case, we have overridden an existing method of a PrestaShop class. But we could also have added a method to an existing class. For example, we could have added a method named getComments to the Product class:

<?php
class Product extends ProductCore
{
    public function getComments($limit_start, $limit_end = false)
    {
        $limit = (int)$limit_start;
        if ($limit_end)
            $limit = (int)$limit_start.','.(int)$limit_end;
        $comments = Db::getInstance()->executeS('
            SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
            WHERE `id_product` = '.(int)$this->id.'
            ORDER BY `date_add` DESC
            LIMIT '.$limit);
        return $comments;
    }
}

This way, you can easily access the product comments anywhere in the code just with an instance of the Product class.

Summary

This article taught us about the main design patterns of PrestaShop and explained how to use them to construct a well-organized application.

Resources for Article:

Further resources on this subject:

Django 1.2 E-commerce: Generating PDF Reports from Python using ReportLab [Article]
Customizing PrestaShop Theme Part 2 [Article]
Django 1.2 E-commerce: Data Integration [Article]

Part 2: Migrating a WordPress Blog to Middleman and Deploying to Amazon S3

Mike Ball
28 Nov 2014
9 min read
Part 2: Migrating WordPress blog content and deploying to production

In part 1 of this series, we created middleman-demo, a basic Middleman-based blog. Part 1 addressed the benefits of a static site, setting up a Middleman development environment, Middleman's templating system, and how to configure a Middleman project to support basic blogging functionality. Now that middleman-demo is configured for blogging, let's export old content from an existing WordPress blog, compile the application for production, and deploy it to a web server. In this part, we'll cover the following:

Using the wp2middleman gem to migrate content from an existing WordPress blog
Creating a Rake task to establish an Amazon Web Services S3 bucket
Deploying a Middleman blog to Amazon S3
Setting up a custom domain for an S3-hosted site

If you didn't follow part 1, or you no longer have your original middleman-demo code, you can clone mine and check out the part2 branch:

$ git clone http://github.com/mdb/middleman-demo && cd middleman-demo && git checkout part2

Export your content from Wordpress

Now that middleman-demo is configured for blogging, let's export old content from an existing Wordpress blog. Wordpress provides a tool through which blog content can be exported as an XML file, also called a WordPress "eXtended RSS" or "WXR" file. A WXR file can be generated and downloaded via the Wordpress admin's Tools > Export screen, as is explained in Wordpress's WXR documentation. In absence of a real Wordpress blog, download the middleman_demo.wordpress.xml file, a sample WXR file:

$ wget www.mikeball.info/downloads/middleman_demo.wordpress.xml

Migrating the Wordpress posts to markdown

To migrate the posts contained in the Wordpress WXR file, I created wp2middleman, a command line tool to generate Middleman-style markdown files from the posts in a WXR. Install wp2middleman via Rubygems:

$ gem install wp2middleman

wp2middleman provides a wp2mm command. Pass the middleman_demo.wordpress.xml file to the wp2mm command:

$ wp2mm middleman_demo.wordpress.xml

If all goes well, the following output is printed to the terminal:

Successfully migrated middleman_demo.wordpress.xml

wp2middleman also produced an export directory. The export directory houses the blog posts from the middleman_demo.wordpress.xml WXR file, now represented as Middleman-style markdown files:

$ ls export/
2007-02-14-Fusce-mauris-ligula-rutrum-at-tristique-at-pellentesque-quis-nisl.html.markdown
2007-07-21-Suspendisse-feugiat-enim-vel-lorem.html.markdown
2008-02-20-Suspendisse-rutrum-Suspendisse-nisi-turpis-congue-ac.html.markdown
2008-03-17-Duis-euismod-purus-ac-quam-Mauris-tortor.html.markdown
2008-04-02-Donec-cursus-tincidunt-libero-Nam-blandit.html.markdown
2008-04-28-Etiam-nulla-nisl-cursus-vel-auctor-at-mollis-a-quam.html.markdown
2008-06-08-Praesent-faucibus-ligula-luctus-dolor.html.markdown
2008-07-08-Proin-lobortis-sapien-non-venenatis-luctus.html.markdown
2008-08-08-Etiam-eu-urna-eget-dolor-imperdiet-vehicula-Phasellus-dictum-ipsum-vel-neque-mauris-interdum-iaculis-risus.html.markdown
2008-09-08-Lorem-ipsum-dolor-sit-amet-consectetuer-adipiscing-elit.html.markdown
2013-12-30-Hello-world.html.markdown

Note that the wp2mm command supports additional options, though these are beyond the scope of this tutorial. Read more on wp2middleman's GitHub page. Also note that the markdown posts in export are named *.html.markdown, and some contain HTML embedded in the original Wordpress post.
Middleman supports the ability to embed multiple languages within a single post file. For example, Middleman will evaluate a file named .html.erb.markdown first as markdown and then as ERb. The final result would be HTML.

Move the contents of export to source/blog and remove the export directory:

$ mv export/* source/blog && rm -rf export

Now, assuming the Middleman server is running, visiting http://localhost:4567 lists all the blog posts migrated from Wordpress. Each post links to its permalink. In the case of posts with tags, each tag links to a tag page.

Compiling for production

Thus far, we've been viewing middleman-demo in local development, where the Middleman server dynamically generates the HTML, CSS, and JavaScript with each request. However, Middleman's value lies in its ability to generate a static website—simple HTML, CSS, JavaScript, and image files—served directly by a web server such as Nginx or Apache, and thus requiring no application server or internal backend. Compile middleman-demo to a static build directory:

$ middleman build

The resulting build directory houses every HTML file that can be served by middleman-demo, as well as all necessary CSS, JavaScript, and images. Its directory layout maps to the URL patterns defined in config.rb. The build directory is typically ignored from source control.

Deploying the build to Amazon S3

Amazon Web Services is Amazon's cloud computing platform. Amazon S3, or Simple Storage Service, is a simple data storage service. Because S3 "buckets" can be made accessible over HTTP, S3 offers a great cloud-based hosting solution for static websites, such as middleman-demo. While S3 is not free, it is generally extremely affordable. Amazon charges on a per-usage basis according to how many requests your bucket serves, including PUT requests, that is, uploads. Read more about S3 pricing in AWS's pricing guide.

Let's deploy the middleman-demo build to Amazon S3. First, sign up for AWS. Through AWS's web-based admin, create an IAM user and locate the corresponding "access key id" and "secret access key":

1. Visit the AWS IAM console.
2. From the navigation menu, click Users.
3. Select your IAM user name.
4. Click User Actions; then click Manage Access Keys.
5. Click Create Access Key.
6. Click Download Credentials; store the keys in a secure location.
7. Store your access key id in an environment variable named AWS_ACCESS_KEY_ID:

$ export AWS_ACCESS_KEY_ID=your_access_key_id

8. Store your secret access key in an environment variable named AWS_SECRET_ACCESS_KEY:

$ export AWS_SECRET_ACCESS_KEY=your_secret_access_key

Note that, to persist these environment variables beyond the current shell session, you may want to set them automatically in each shell session. Setting them in a file such as your ~/.bashrc ensures this:

export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key

Creating an S3 bucket with Ruby

To deploy to S3, we'll need to create a "bucket," or an S3 endpoint to which middleman-demo's build directory can be deployed. This can be done via AWS's management console, but we can also automate its creation with Ruby. We'll use the aws-sdk Ruby gem and a Rake task to create an S3 bucket for middleman-demo.
Add the aws-sdk gem to middleman-demo's Gemfile:

gem 'aws-sdk'

Install the new gem:

$ bundle install

Create a Rakefile:

$ touch Rakefile

Add the following Ruby to the Rakefile; this code establishes a Rake task—a quick command line utility—to automate the creation of an S3 bucket:

require 'aws-sdk'

desc "Create an AWS S3 bucket"
task :s3_bucket, :bucket_name do |task, args|
  s3 = AWS::S3.new(region: 'us-east-1')
  bucket = s3.buckets.create(args[:bucket_name])
  bucket.configure_website do |config|
    config.index_document_suffix = 'index.html'
    config.error_document_key = 'error/index.html'
  end
end

From the command line, use the newly established :s3_bucket Rake task to create a unique S3 bucket for your middleman-demo. Note that, if you have an existing domain you'd like to use, your bucket should be named www.yourdomain.com:

$ rake s3_bucket[some_unique_bucket_name]

For example, I named my S3 bucket www.middlemandemo.com by entering the following:

$ rake s3_bucket[www.middlemandemo.com]

After running rake s3_bucket[YOUR_BUCKET], you should see YOUR_BUCKET amongst the buckets listed in your AWS web console.

Creating an error template

Our Rake task specifies a config.error_document_key whose value is error/index.html. This configures your S3 bucket to serve an error page for erroring responses, such as 404s. Create a source/error.html.erb template:

$ touch source/error.html.erb

And add the following:

---
title: Oops - something went wrong
---

<h2><%= current_page.data.title %></h2>

Deploying to your S3 bucket

With an S3 bucket established, the middleman-sync Ruby gem can be used to automate uploading middleman-demo builds to S3. Add the middleman-sync gem to the Gemfile:

gem 'middleman-sync'

Install the middleman-sync gem:

$ bundle install

Add the necessary middleman-sync configuration to config.rb:

activate :sync do |sync|
  sync.fog_provider = 'AWS'
  sync.fog_region = 'us-east-1'
  sync.fog_directory = '<YOUR_BUCKET>'
  sync.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
  sync.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
end

Build and deploy middleman-demo:

$ middleman build && middleman sync

Note: if your deployment fails with a 'post_connection_check': hostname "YOUR_BUCKET" does not match the server certificate (OpenSSL::SSL::SSLError) (Excon::Errors::SocketError), it's likely due to an open issue with middleman-sync. To work around this issue, add the following to the top of config.rb:

require 'fog'
Fog.credentials = { path_style: true }

Now, middleman-demo is browsable online at http://YOUR_BUCKET.s3-website-us-east-1.amazonaws.com/

Using a custom domain

With middleman-demo deployed to an S3 bucket whose name matches a domain name, a custom domain can be configured easily. To use a custom domain, log into your domain management provider and add a CNAME mapping your domain to www.yourdomain.com.s3-website-us-east-1.amazonaws.com. While the exact process for managing a CNAME varies between domain name providers, the process is generally fairly simple. Note that your S3 bucket name must perfectly match your domain name.
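As an illustration, using the hypothetical www.middlemandemo.com domain from earlier, the resulting record would look something like this in BIND-style zone file notation:

; Point the www host at the S3 website endpoint
www.middlemandemo.com.  300  IN  CNAME  www.middlemandemo.com.s3-website-us-east-1.amazonaws.com.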
He works for Comcast Interactive Media where he helps build web-based TV and video consumption applications.

OGC for ESRI Professionals

Packt
27 Nov 2014
16 min read
In this article by Stefano Iacovella, author of GeoServer Cookbook, we take a brief look at how GeoServer compares with ArcGIS for Server, a map server created by ESRI. The importance of adopting OGC standards when building a geographical information system is stressed. We will also learn how OGC standards let us create a system where different pieces of software cooperate with each other. (For more resources related to this topic, see here.)

ArcGIS versus GeoServer

As an ESRI professional, you surely know the product from this vendor that compares most directly to GeoServer. It is called ArcGIS for Server, and in many ways it can play the same role as GeoServer; the opposite is true as well, of course. Undoubtedly, the big question for you is: why should I use GeoServer and not stand safely on the vendor side, leveraging the integration with the other software members of the big ArcGIS family?

Listening to colleagues, asking experts, and browsing the Internet, you'll find a lot of different answers to this question, often supported by strong arguments and sometimes by a religious, fanatical approach. There are a few benchmarks available on the Internet that compare the performance of GeoServer and other open source map servers against ArcGIS for Server. Although they're not definitively authoritative, they show a reasonably objective advantage of GeoServer and its open source cousins over ArcGIS for Server. Anyway, I don't think your choice should overestimate the importance of performance.

I'm sorry, but my answer to your original question is another question: why should you choose a particular piece of software? This may sound puzzling, so let me elaborate a bit on the topic.

Let's say you are an IT architect and a customer has asked you to design a solution for a GIS portal. Of course, in that specific case, you have to give him or her a detailed response, naming the specific software that'll be used for data publication. Also, as a professional, you'll arrive at the solution by carefully considering all the requirements and constraints that can be inferred from the talks and from surveying what is already up and running at the customer site. So a specific answer to which software is best suited for the task does exist in any specific case.

However, if you consider the question from a more general point of view, you should be aware that a map server that is the best choice for every case does not exist. You may find that licensing costs are a limit in some cases, while performance leads you to a different choice in others. Also, as in any other job, the best tool is often the one you know better, and this is quite true when you are in a hurry and your customer can't wait to have the site up and running. So the right approach, although a little bit generic, is to keep your mind open and try to pick the right tool for each scenario.

However, a general answer does exist. It's not about the vendor or the name of the piece of software you're going to use; it's about the way the components of your system communicate among themselves and with external systems. It's about standard protocols. This is a crucial consideration for any GIS architect or developer; whether you're going to use an ESRI suite of products or open source tools, you should build your system with special care to expose data with open standards.
Understanding standards

Let's take a closer look at what standards are and why they're so important when you are designing your GIS solution. The term standard, as explained on Wikipedia (http://en.wikipedia.org/wiki/Technical_standard), may be defined as follows:

"An established norm or requirement in regard to technical systems. It is usually a formal document that establishes uniform engineering or technical criteria, methods, processes and practices. In contrast, a custom, convention, company product, corporate standard, etc. that becomes generally accepted and dominant is often called a de facto standard."

Obviously, a lot of standards exist in the Information Technology domain. Standards are usually formalized by a standards organization, which usually involves several members from different areas, such as government agencies, private companies, education, and so on. In the GIS world, an authoritative organization is the Open Geospatial Consortium (OGC), which you may find often cited in this book in many links to the reference information.

In recent years, OGC has published several standards that cover how GIS systems interact and how data is transferred from one piece of software to another. We'll focus on three of them that are widely used and particularly important for GeoServer and ArcGIS for Server:

WMS: This is the acronym for Web Map Service. This standard describes how a server should publish data for mapping purposes, that is, a static representation of data.
WFS: This is the acronym for Web Feature Service. This standard describes the details of publishing data for feature streaming to a client.
WCS: This is the acronym for Web Coverage Service. This standard describes the details of publishing data for raster data streaming to a client. It's the equivalent of WFS applied to raster data.

Now let's dive into these three standards. We'll explore the similarities and differences between GeoServer and ArcGIS for Server.

WMS versus the mapping service

As an ESRI user, you surely know how to publish some data in a map service. This lets you create a web service that can be used by a client who wants to show the map and data. This is the proprietary equivalent of exposing data through a WMS service. With WMS, you can inquire the server about its capabilities with an HTTP request:

$ curl -XGET -H 'Accept: text/xml' 'http://localhost:8080/geoserver/wms?service=WMS&version=1.1.1&request=GetCapabilities' -o capabilitiesWMS.xml

Browsing through the XML document, you'll know which data is published and how it can be represented. If you're using the proprietary way of exposing map services with ESRI, you can perform a similar query that starts from the root:

$ curl -XGET 'http://localhost/arcgis/rest/services?f=pjson' -o capabilitiesArcGIS.json

The output, in this case formatted as a JSON file, is a text file containing the first of the services and folders available to an anonymous user. It looks like the following code snippet:

{
  "currentVersion": 10.22,
  "folders": [
    "Geology",
    "Cultural data",
    …
    "Hydrography"
  ],
  "services": [
    {"name": "SampleWorldCities", "type": "MapServer"}
  ]
}

At a glance, you can recognize two big differences here. Firstly, there are logical items, the folders, which work only as containers for services. Secondly, there is no complete definition of items, just a list of the elements contained at a certain level of a publishing tree.
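As a side-by-side comparison, GeoServer ships a proprietary REST interface of its own next to the OGC endpoints, following the same list/detail pattern. A sketch against a default local install (the admin/geoserver credentials below are GeoServer's out-of-the-box defaults; adjust them if you've changed your setup):

$ curl -u admin:geoserver -XGET 'http://localhost:8080/geoserver/rest/layers.json' -o layersGeoServer.json

The response lists each published layer by name together with a link to a per-layer document holding the full definition.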
To obtain specific information about an element, you can perform another request pointing to the item:

$ curl -XGET 'http://localhost/arcgis/rest/services/SampleWorldCities/MapServer?f=pjson' -o SampleWorldCities.json

Setting up an ArcGIS site is out of the scope of this book; besides, this appendix assumes that you are familiar with the software and its terminology. Anyway, all the examples use the SampleWorldCities service, which is a default service created by the standard installation. In the new JSON file, you'll find a lot of information about the specific service:

{
  "currentVersion": 10.22,
  "serviceDescription": "A sample service just for demonstation.",
  "mapName": "World Cities Population",
  "description": "",
  "copyrightText": "",
  "supportsDynamicLayers": false,
  "layers": [
    {
      "id": 0,
      "name": "Cities",
      "parentLayerId": -1,
      "defaultVisibility": true,
      "subLayerIds": null,
      "minScale": 0,
      "maxScale": 0
    },
  …
  "supportedImageFormatTypes": "PNG32,PNG24,PNG,JPG,DIB,TIFF,EMF,PS,PDF,GIF,SVG,SVGZ,BMP",
  …
  "capabilities": "Map,Query,Data",
  "supportedQueryFormats": "JSON, AMF",
  "exportTilesAllowed": false,
  "maxRecordCount": 1000,
  "maxImageHeight": 4096,
  "maxImageWidth": 4096,
  "supportedExtensions": "KmlServer"
}

Please note the information about the image formats supported. We're, in fact, dealing with a map service. As for the operations supported, this one shows three different operations: Map, Query, and Data. For the first two, you can probably recognize the equivalents of the GetMap and GetFeatureInfo operations of WMS, while the third one is a little bit more mysterious. In fact, it is not relevant to map services and we'll explore it in the next paragraph.

If you're familiar with the GeoServer REST interface, you can see the similarities in the way you can retrieve information. We don't want to explore the ArcGIS for Server interface in detail here. What is important to understand is the huge difference from the standard WMS capabilities document. If you're going to create a client to interact with maps produced by a mix of ArcGIS for Server and GeoServer, you would have to implement both interfaces: the proprietary REST interface for ArcGIS and the standard WMS interface for GeoServer. However, there is good news for you. ESRI also supports standards. If you go to the map service parameters page, you can change the way the data is published.

The situation shown in the previous screenshot is the default capabilities configuration. As you can see, there are options for WMS, WFS, and WCS, so you can expose your data with ArcGIS for Server according to the OGC standards. If you enable the WMS option, you can now perform this query:

$ curl -XGET 'http://localhost/arcgis/services/SampleWorldCities/MapServer/WMSServer?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetCapabilities' -o capabilitiesArcGISWMS.xml

The information contained is very similar to that of the GeoServer capabilities. A point of attention is the fundamental difference in data publishing between the two pieces of software. In ArcGIS for Server, you always start from a map project. A map project is a collection of datasets, containing vector or raster data, with a drawing order, a coordinate reference system, and rules to draw. It is, in fact, very similar to a map project you can prepare with a GIS desktop application. Actually, in the ESRI world, you should use ArcGIS for Desktop to prepare the map project and then publish it on the server. In GeoServer, the map concept doesn't exist.
You publish data, setting several parameters, and the map composition is entirely delegated to the client. You can only mimic a map, server side, by using a layer group for a logical merge of several layers into a single entity. In ArcGIS for Server, the map is central to the publication process; even if you just want to publish a single dataset, you have to create a map project containing just that dataset and publish it. Always remember this different approach; when using WMS, you can use the same operation on both servers. A GetMap request on the previous map service will look like this:

$ curl -XGET 'http://localhost/arcgis/services/SampleWorldCities/MapServer/WMSServer?service=WMS&version=1.1.0&request=GetMap&layers=fields&styles=&bbox=47.130647,8.931116,48.604188,29.54223&srs=EPSG:4326&height=445&width=1073&format=image/png' -o map.png

Please note that you can filter which layers will be drawn on the map. By default, all the layers contained in the map service definition will be drawn.

WFS versus feature access

If you open the capabilities panel for the ArcGIS service again, you will note that there is an option called feature access. This lets you enable feature streaming to a client. With this option enabled, your clients can acquire features and symbology information from ArcGIS and render them directly on the client side. In fact, feature access can also be used to edit features, that is, you can modify the features on the client and then post the changes to the server.

When you check the Feature Access option, many specific settings appear. In particular, you'll note that by default, the Update operation is enabled, but Geometry Updates is disabled, so you can't edit the shape of each feature. If you want to stream features using a standard approach, you should instead turn on the WFS option. ArcGIS for Server supports versions 1.1 and 1.0 of WFS. Moreover, the transactional option, also known as WFS-T, is fully supported.

As you can see in the previous screenshot, when you check the WFS option, several more options appear. In the lower part of the panel, you'll find the option to enable transactions, that is, the editing feature. In this case, there is no separate option for geometry and attributes; you can only decide whether to enable editing on any part of your features. After you enable WFS, you can access the capabilities from this address:

$ curl -XGET 'http://localhost/arcgis/services/SampleWorldCities/MapServer/WFSServer?SERVICE=WFS&VERSION=1.1.0&REQUEST=GetCapabilities' -o capabilitiesArcGISWFS.xml

Also, a request for features is shown as follows:

$ curl -XGET "http://localhost/arcgis/services/SampleWorldCities/MapServer/WFSServer?service=wfs&version=1.1.0&request=GetFeature&TypeName=SampleWorldCities:cities&maxFeatures=1" -o getFeatureArcGIS.xml

This will output GML as a result of your request. As with WMS, the syntax is the same.
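For comparison, the same standard GetFeature request against a default GeoServer install looks almost identical; a sketch, assuming the sample topp:states layer that ships with GeoServer's release data directory:

$ curl -XGET "http://localhost:8080/geoserver/wfs?service=wfs&version=1.1.0&request=GetFeature&typeName=topp:states&maxFeatures=1" -o getFeatureGeoServer.xml

Only the endpoint and the layer name change; the WFS parameters themselves are identical on both servers.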
You only need to pay attention to the difference between the service and the contained layers:

<wfs:FeatureCollection xsi:schemaLocation="http://localhost/arcgis/services/SampleWorldCities/MapServer/WFSServer http://localhost/arcgis/services/SampleWorldCities/MapServer/WFSServer?request=DescribeFeatureType%26version=1.1.0%26typename=cities http://www.opengis.net/wfs http://schemas.opengis.net/wfs/1.1.0/wfs.xsd">
  <gml:boundedBy>
    <gml:Envelope srsName="urn:ogc:def:crs:EPSG:6.9:4326">
      <gml:lowerCorner>-54.7919921875 -176.1514892578125</gml:lowerCorner>
      <gml:upperCorner>78.2000732421875 179.221923828125</gml:upperCorner>
    </gml:Envelope>
  </gml:boundedBy>
  <gml:featureMember>
    <SampleWorldCities:cities gml:id="F4__1">
      <SampleWorldCities:OBJECTID>1</SampleWorldCities:OBJECTID>
      <SampleWorldCities:Shape>
        <gml:Point>
          <gml:pos>-15.614990234375 -56.093017578125</gml:pos>
        </gml:Point>
      </SampleWorldCities:Shape>
      <SampleWorldCities:CITY_NAME>Cuiaba</SampleWorldCities:CITY_NAME>
      <SampleWorldCities:POP>521934</SampleWorldCities:POP>
      <SampleWorldCities:POP_RANK>3</SampleWorldCities:POP_RANK>
      <SampleWorldCities:POP_CLASS>500,000 to 999,999</SampleWorldCities:POP_CLASS>
      <SampleWorldCities:LABEL_FLAG>0</SampleWorldCities:LABEL_FLAG>
    </SampleWorldCities:cities>
  </gml:featureMember>
</wfs:FeatureCollection>

Publishing raster data with WCS

The WCS option is always present in the panel used to configure services. As we already noted, WCS is used to publish raster data, so this may sound odd to you. Indeed, ArcGIS for Server lets you enable the WCS option only if the map project for the service contains one of the following:

A map containing raster or mosaic layers
A raster or mosaic dataset
A layer file referencing a raster or mosaic dataset
A geodatabase that contains raster data

If you try to enable the WCS option on SampleWorldCities, you won't get an error. Then, try to ask for the capabilities:

$ curl -XGET "http://localhost/arcgis/services/SampleWorldCities/MapServer/WCSServer?SERVICE=WCS&VERSION=1.1.1&REQUEST=GetCapabilities" -o capabilitiesArcGISWCS.xml

You'll get a proper document, compliant with the standard and well formatted, but containing no reference to any dataset. Indeed, the sample service does not contain any raster data:

<Capabilities xsi:schemaLocation="http://www.opengis.net/wcs/1.1.1 http://schemas.opengis.net/wcs/1.1/wcsGetCapabilities.xsd http://www.opengis.net/ows/1.1/ http://schemas.opengis.net/ows/1.1.0/owsAll.xsd" version="1.1.1">
  <ows:ServiceIdentification>
    <ows:Title>WCS</ows:Title>
    <ows:ServiceType>WCS</ows:ServiceType>
    <ows:ServiceTypeVersion>1.0.0</ows:ServiceTypeVersion>
    <ows:ServiceTypeVersion>1.1.0</ows:ServiceTypeVersion>
    <ows:ServiceTypeVersion>1.1.1</ows:ServiceTypeVersion>
    <ows:ServiceTypeVersion>1.1.2</ows:ServiceTypeVersion>
    <ows:Fees>NONE</ows:Fees>
    <ows:AccessConstraints>None</ows:AccessConstraints>
  </ows:ServiceIdentification>
  ...
  <Contents>
    <SupportedCRS>urn:ogc:def:crs:EPSG::4326</SupportedCRS>
    <SupportedFormat>image/GeoTIFF</SupportedFormat>
    <SupportedFormat>image/NITF</SupportedFormat>
    <SupportedFormat>image/JPEG</SupportedFormat>
    <SupportedFormat>image/PNG</SupportedFormat>
    <SupportedFormat>image/JPEG2000</SupportedFormat>
    <SupportedFormat>image/HDF</SupportedFormat>
  </Contents>
</Capabilities>

If you want to try out WCS operations other than GetCapabilities, you need to publish a service with raster data; or, you may take a look at the sample services from ESRI arcgisonline™.
Try the following request:

$ curl -XGET "http://sampleserver3.arcgisonline.com/ArcGIS/services/World/Temperature/ImageServer/WCSServer?SERVICE=WCS&VERSION=1.1.0&REQUEST=GETCAPABILITIES" -o capabilitiesArcGISWCS.xml

Parsing the XML file, you'll find that the contents section now contains a coverage, that is, raster data that you can retrieve from that server:

…
<Contents>
  <CoverageSummary>
    <ows:Title>Temperature1950To2100_1</ows:Title>
    <ows:Abstract>Temperature1950To2100</ows:Abstract>
    <ows:WGS84BoundingBox>
      <ows:LowerCorner>-179.99999999999994 -55.5</ows:LowerCorner>
      <ows:UpperCorner>180.00000000000006 83.5</ows:UpperCorner>
    </ows:WGS84BoundingBox>
    <Identifier>1</Identifier>
  </CoverageSummary>
  <SupportedCRS>urn:ogc:def:crs:EPSG::4326</SupportedCRS>
  <SupportedFormat>image/GeoTIFF</SupportedFormat>
  <SupportedFormat>image/NITF</SupportedFormat>
  <SupportedFormat>image/JPEG</SupportedFormat>
  <SupportedFormat>image/PNG</SupportedFormat>
  <SupportedFormat>image/JPEG2000</SupportedFormat>
  <SupportedFormat>image/HDF</SupportedFormat>
</Contents>

You can, of course, use all the operations supported by the standard. The following request will return a full description of one or more coverages within the service in the GML format. An example of the URL is shown as follows:

$ curl -XGET "http://sampleserver3.arcgisonline.com/ArcGIS/services/World/Temperature/ImageServer/WCSServer?SERVICE=WCS&VERSION=1.1.0&REQUEST=DescribeCoverage&COVERAGE=1" -o describeCoverageArcGISWCS.xml

Also, you can obviously request the data itself, using requests that return the coverage in one of the supported formats, namely GeoTIFF, NITF, HDF, JPEG, JPEG2000, and PNG. Another URL example is shown as follows:

$ curl -XGET "http://sampleserver3.arcgisonline.com/ArcGIS/services/World/Temperature/ImageServer/WCSServer?SERVICE=WCS&VERSION=1.0.0&REQUEST=GetCoverage&COVERAGE=1&CRS=EPSG:4326&RESPONSE_CRS=EPSG:4326&BBOX=-158.203125,-105.46875,158.203125,105.46875&WIDTH=500&HEIGHT=500&FORMAT=jpeg" -o coverage.jpeg

Summary

In this article, we started with the differences between ArcGIS and GeoServer and then moved on to understanding standards. Then we went on to compare WMS with the mapping service as well as WFS with feature access. Finally, we successfully published a raster dataset with WCS.

Resources for Article:

Further resources on this subject:
Getting Started with GeoServer [Article]
Enterprise Geodatabase [Article]
Sending Data to Google Docs [Article]

Setting up Qt Creator for Android

Packt
27 Nov 2014
8 min read
This article by Ray Rischpater, the author of the book Application Development with Qt Creator, Second Edition, focuses on setting up Qt Creator for Android. Android's functionality is delimited in API levels; Qt for Android supports Android level 10 and above: that's Android 2.3.3, a variant of Gingerbread. Fortunately, most devices in the market today are at least Gingerbread, making Qt for Android a viable development platform for millions of devices.

Downloading all the pieces

To get started with Qt Creator for Android, you're going to need to download a lot of stuff. Let's get started:

Begin with a release of Qt for Android, which you can download from http://qt-project.org/downloads.
The Android developer tools require the current version of the Java Development Kit (JDK) (not just the runtime, the Java Runtime Environment, but the whole kit and caboodle); you can download it from http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html.
You need the latest Android Software Development Kit (SDK), which you can download for Mac OS X, Linux, or Windows at http://developer.android.com/sdk/index.html.
You need the latest Android Native Development Kit (NDK), which you can download at http://developer.android.com/tools/sdk/ndk/index.html.
You need the current version of Ant, the Java build tool, which you can download at http://ant.apache.org/bindownload.cgi.

Download, unzip, and install each of these, in the given order. On Windows, I installed the Android SDK and NDK by unzipping them to the root of my hard drive and installed the JDK at the default location I was offered.

Setting environment variables

Once you install the JDK, you need to be sure that you've set your JAVA_HOME environment variable to point to the directory where it was installed. How you do this differs from platform to platform; on a Mac OS X or Linux box, you'd edit .bashrc, .tcshrc, or the like; on Windows, go to System Properties, click on Environment Variables, and add the JAVA_HOME variable. The path should point to the base of the JDK directory; for me, it was C:\Program Files\Java\jdk1.7.0_25, although the path for you will depend on where you installed the JDK and which version you installed. (Make sure you set the path with the trailing directory separator; the Android SDK is pretty fussy about that sort of thing.)

Next, you need to update your PATH to point to all the stuff you just installed. Again, this is an environment variable, and you'll need to add the following:

The bin directory of your JDK
The android-sdk\tools directory
The android-sdk\platform-tools directory

For me, on my Windows 8 computer, my PATH now includes this:

…C:\Program Files\Java\jdk1.7.0_25\bin;C:\adt-bundle-windows-x86_64-20130729\sdk\tools;C:\adt-bundle-windows-x86_64-20130729\sdk\platform-tools;…

Don't forget the separators: on Windows, it's a semicolon (;), while on Mac OS X and Linux, it's a colon (:). An environment variable is a variable maintained by your operating system which affects its configuration; see http://en.wikipedia.org/wiki/Environment_variable for more details.
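For reference, here is what the equivalent settings might look like on a Mac OS X or Linux box, in ~/.bashrc or a similar shell configuration file. This is a minimal sketch; the paths are hypothetical and must be adjusted to wherever you actually installed the JDK and unpacked the SDK:

export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25          # adjust to your JDK install
export ANDROID_SDK=$HOME/adt-bundle/sdk            # adjust to your SDK location
export PATH=$JAVA_HOME/bin:$ANDROID_SDK/tools:$ANDROID_SDK/platform-tools:$PATH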
At this point, it's a good idea to restart your computer (if you're running Windows) or log out and log in again (on Linux or Mac OS X) to make sure that all these settings take effect. If you're on a Mac OS X or Linux box, you might be able to start a new terminal and have the same effect (or reload your shell configuration file) instead, but I like the idea of restarting at this point to ensure that the next time I start everything up, it'll work correctly.

Finishing the Android SDK installation

Now, we need to use the Android SDK tools to ensure that you have a full version of the SDK for at least one Android API level installed. We'll need to start Eclipse, the Android SDK's development environment, and run the Android SDK manager. To do this, follow these steps:

Find Eclipse. It's probably in the Eclipse directory of the directory where you installed the Android SDK. If Eclipse doesn't start, check your JAVA_HOME and PATH variables; the odds are that Eclipse will not find the Java environment it needs to run.
Click on OK when Eclipse prompts you for a workspace. This doesn't matter; you won't use Eclipse except to download Android SDK components.
Click on the Android SDK Manager button in the Eclipse toolbar (circled in the next screenshot):
Make sure that you have at least one Android API level above API level 10 installed, along with the Google USB Driver (you'll need this to debug on the hardware).
Quit Eclipse.

Next, let's see whether the Android Debug Bridge—the software component that transfers your executables to your Android device and supports on-device debugging—is working as it should. Fire up a shell prompt and type adb. If you see a lot of output and no errors, the bridge is correctly installed. If not, go back and check your PATH variable to be sure it's correct. While you're at it, you should developer-enable your Android device too so that it'll work with ADB. Follow the steps provided at http://bit.ly/1a29sal.

Configuring Qt Creator

Now, it's time to tell Qt Creator about all the stuff you just installed. Perform the following steps:

Start Qt Creator but don't create a new project.
Under the Tools menu, select Options and then click on Android.
Fill in the blanks, as shown in the next screenshot. They should be:
The path to the SDK directory, in the directory where you installed the Android SDK.
The path to where you installed the Android NDK.
Check Automatically create kits for Android tool chains.
The path to Ant; here, enter either the path to the Ant executable itself on Mac OS X and Linux platforms or the path to ant.bat in the bin directory of the directory where you unpacked Ant.
The directory where you installed the JDK (this might be automatically picked up from your JAVA_HOME directory), as shown in the following screenshot:
Click on OK to close the Options window.

You should now be able to create a new Qt GUI or Qt Quick application for Android! Do so, and ensure that Android is a target option in the wizard, as the next screenshot shows; be sure to choose at least one ARM target, one x86 target, and one target for your desktop environment:

If you want to add Android build configurations to an existing project, the process is slightly different. Perform the following steps:

Load the project as you normally would.
Click on Projects in the left-hand side pane. The Projects pane will open.
Click on Add Kit and choose the desired Android (or other) device build kit.
The following screenshot shows you where the Projects and Add Kit buttons are in Qt Creator:

Building and running your application

Write and build your application normally. A good idea is to build the Qt Quick Hello World application for Android first before you go to town and make a lot of changes, and test the environment by compiling for the device. When you're ready to run on the device, perform the following steps:

Navigate to Projects (on the left-hand side) and then select the Android for arm kit's Run Settings.
Under Package Configurations, ensure that the Android SDK level is set to the SDK level of the SDK you installed.
Ensure that the Package name reads something similar to org.qtproject.example, followed by your project name.
Connect your Android device to your computer using the USB cable.
Select the Android for arm run target and then click on either Debug or Run to debug or run your application on the device.

Summary

Qt for Android gives you an excellent leg up on mobile development, but it's not a panacea. If you're planning to target mobile devices, you should be sure to have a good understanding of the usage patterns for your application's users as well as the constraints in CPU, GPU, memory, and network that a mobile application must run on. Once we understand these, however, all of our skills with Qt Creator and Qt carry over to the mobile arena.

To develop for Android, begin by installing the JDK, Android SDK, Android NDK, and Ant, and then develop applications as usual: compiling for the device and running on the device frequently to iron out any unexpected problems along the way.

Resources for Article:

Further resources on this subject:
Reversing Android Applications [article]
Building Android (Must know) [article]
Introducing an Android platform [article]

Logistic regression

Packt
27 Nov 2014
9 min read
This article is written by Breck Baldwin and Krishna Dayanidhi, the authors of Natural Language Processing with Java and LingPipe Cookbook. In this article, we will cover logistic regression. (For more resources related to this topic, see here.)

Logistic regression is probably responsible for the majority of industrial classifiers, with the possible exception of naïve Bayes classifiers. It almost certainly is one of the best performing classifiers available, albeit at the cost of slow training and considerable complexity in configuration and tuning. Logistic regression is also known as maximum entropy, neural network classification with a single neuron, and others. The classifiers covered so far have been based on the underlying characters or tokens, but logistic regression uses unrestricted feature extraction, which allows for arbitrary observations of the situation to be encoded in the classifier. This article closely follows a more complete tutorial at http://alias-i.com/lingpipe/demos/tutorial/logistic-regression/read-me.html.

How logistic regression works

All that logistic regression does is take a vector of feature weights over the data, apply a vector of coefficients, and do some simple math, which results in a probability for each class encountered in training. The complicated bit is in determining what the coefficients should be.

The following are some of the features produced by our training example for 21 tweets annotated for English e and non-English n. There are relatively few features because feature weights are being pushed to 0.0 by our prior, and once a weight is 0.0, the feature is removed. Note that one category, n, is set to 0.0 for all features—this is a property of the logistic regression process, which fixes one category's features to 0.0 and adjusts all other categories' features with respect to that:

FEATURE    e      n
I        : 0.37   0.0
!        : 0.30   0.0
Disney   : 0.15   0.0
"        : 0.08   0.0
to       : 0.07   0.0
anymore  : 0.06   0.0
isn      : 0.06   0.0
'        : 0.06   0.0
t        : 0.04   0.0
for      : 0.03   0.0
que      : -0.01  0.0
moi      : -0.01  0.0
_        : -0.02  0.0
,        : -0.08  0.0
pra      : -0.09  0.0
?        : -0.09  0.0

Take the string, I luv Disney, which will only have two non-zero features: I=0.37 and Disney=0.15 for e, and zeros for n. Since there is no feature that matches luv, it is ignored. The probability that the tweet is English breaks down to:

vectorMultiply(e, [I,Disney]) = exp(.37*1 + .15*1) = 1.68
vectorMultiply(n, [I,Disney]) = exp(0*1 + 0*1) = 1

We will rescale to a probability by summing the outcomes and dividing:

p(e|[I,Disney]) = 1.68/(1.68 + 1) = 0.62
p(n|[I,Disney]) = 1/(1.68 + 1) = 0.38

This is how the math works on running a logistic regression model. Training is another issue entirely.
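In general form, this two-category computation is just the softmax function over linear scores. A sketch in standard notation (mine, not the article's): with $\beta_c$ the coefficient vector for category $c$, $x$ the feature vector, and one category's coefficients fixed at zero as noted above:

$$p(c \mid x) = \frac{\exp(\beta_c \cdot x)}{\sum_{c'} \exp(\beta_{c'} \cdot x)}$$

For the two categories here, this reduces exactly to the 1.68/(1.68 + 1) computation, since $\exp(0 \cdot x) = 1$ for the fixed category n.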
Getting ready

This example assumes the same framework that we have been using all along to get training data from .csv files, train the classifier, and run it from the command line. Setting up to train the classifier is a bit complex because of the number of parameters and objects used in training. The main() method starts with what should be familiar classes and methods:

public static void main(String[] args) throws IOException {
  String trainingFile = args.length > 0 ? args[0] : "data/disney_e_n.csv";
  List<String[]> training = Util.readAnnotatedCsvRemoveHeader(new File(trainingFile));
  int numFolds = 0;
  XValidatingObjectCorpus<Classified<CharSequence>> corpus = Util.loadXValCorpus(training, numFolds);
  TokenizerFactory tokenizerFactory = IndoEuropeanTokenizerFactory.INSTANCE;

Note that we are using XValidatingObjectCorpus when a simpler implementation such as ListCorpus will do. We will not take advantage of any of its cross-validation features, because the numFolds param as 0 will have training visit the entire corpus. We are trying to keep the number of novel classes to a minimum, and we tend to always use this implementation in real-world gigs anyway.

Now, we will start to build the configuration for our classifier. The FeatureExtractor<E> interface provides a mapping from data to features; this will be used to train and run the classifier. In this case, we are using a TokenFeatureExtractor() method, which creates features based on the tokens found by the tokenizer supplied during construction. This is similar to what naïve Bayes reasons over:

FeatureExtractor<CharSequence> featureExtractor = new TokenFeatureExtractor(tokenizerFactory);

The minFeatureCount item is usually set to a number higher than 1, but with small training sets, this is needed to get any performance. The thought behind filtering feature counts is that logistic regression tends to overfit low-count features that, just by chance, exist in one category of training data. As training data grows, the minFeatureCount value is adjusted, usually by paying attention to cross-validation performance:

int minFeatureCount = 1;

The addInterceptFeature Boolean controls whether a category feature exists that models the prevalence of the category in training. The default name of the intercept feature is *&^INTERCEPT%$^&**, and you will see it in the weight vector output if it is being used. By convention, the intercept feature is set to 1.0 for all inputs. The idea is that if a category is just very common or very rare, there should be a feature that captures just this fact, independent of other features that might not be as cleanly distributed. This models the category probability in naïve Bayes in some way, but the logistic regression algorithm will decide how useful it is, as it does with all other features:

boolean addInterceptFeature = true;
boolean noninformativeIntercept = true;

These Booleans control what happens to the intercept feature if it is used. Priors, in the following code, are typically not applied to the intercept feature; this is the result if this parameter is true. Set the Boolean to false, and the prior will be applied to the intercept.

Next is the RegressionPrior instance, which controls how the model is fit. What you need to know is that priors help prevent logistic regression from overfitting the data by pushing coefficients towards 0. There is a non-informative prior that does not do this, with the consequence that if there is a feature that applies to just one category, it will be scaled to infinity, because the model keeps fitting better as the coefficient is increased in the numeric estimation. Priors, in this context, function as a way to not be overconfident in observations about the world.

Another dimension in the RegressionPrior instance is the expected variance of the features. Low variance will push coefficients to zero more aggressively. The prior returned by the static laplace() method tends to work well for NLP problems.
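To make the prior's effect concrete, here is a sketch of the penalized objective that training maximizes under a Laplace prior. This is my reconstruction using the standard double-exponential parameterization (where a Laplace density with variance $\sigma^2$ has scale $b = \sqrt{\sigma^2/2}$), not a line-by-line account of LingPipe's internals:

$$\mathrm{objective}(\beta) = \sum_i \log p(y_i \mid x_i, \beta) \;-\; \frac{1}{b} \sum_j |\beta_j|, \qquad b = \sqrt{\sigma^2/2}$$

The absolute-value penalty is what drives coefficients to exactly 0.0 and out of the model, as seen in the feature listing earlier; lowering the variance raises $1/b$ and prunes more aggressively.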
There is a lot going on, but it can be managed without a deep theoretical understanding:

double priorVariance = 2;
RegressionPrior prior = RegressionPrior.laplace(priorVariance, noninformativeIntercept);

Next, we will control how the algorithm searches for an answer:

AnnealingSchedule annealingSchedule = AnnealingSchedule.exponential(0.00025, 0.999);
double minImprovement = 0.000000001;
int minEpochs = 100;
int maxEpochs = 2000;

AnnealingSchedule is best understood by consulting the Javadoc, but what it does is change how much the coefficients are allowed to vary when fitting the model. The minImprovement parameter sets the amount the model fit has to improve to not terminate the search, because the algorithm has converged. The minEpochs parameter sets a minimal number of iterations, and maxEpochs sets an upper limit if the search does not converge as determined by minImprovement.

Next is some code that allows for basic reporting/logging. LogLevel.INFO will report a great deal of information about the progress of the classifier as it tries to converge:

PrintWriter progressWriter = new PrintWriter(System.out, true);
progressWriter.println("Reading data.");
Reporter reporter = Reporters.writer(progressWriter);
reporter.setLevel(LogLevel.INFO);

Here ends the Getting ready section of one of our most complex classes—next, we will train and run the classifier.

How to do it...

It has been a bit of work setting up to train and run this class. We will just go through the steps to get it up and running:

Note that there is a more complex 14-argument train method as well, which extends configurability. This is the 10-argument version:

LogisticRegressionClassifier<CharSequence> classifier =
    LogisticRegressionClassifier.<CharSequence>train(corpus,
        featureExtractor,
        minFeatureCount,
        addInterceptFeature,
        prior,
        annealingSchedule,
        minImprovement,
        minEpochs,
        maxEpochs,
        reporter);

The train() method, depending on the LogLevel constant, will produce anything from nothing with LogLevel.NONE to prodigious output with LogLevel.ALL. While we are not going to use it, we show how to serialize the trained model to disk:

AbstractExternalizable.compileTo(classifier, new File("models/myModel.LogisticRegression"));

Once trained, we will apply the standard classification loop with:

Util.consoleInputPrintClassification(classifier);

Run the preceding code in the IDE of your choice or use the command-line command:

java -cp lingpipe-cookbook.1.0.jar:lib/lingpipe-4.1.0.jar:lib/opencsv-2.4.jar com.lingpipe.cookbook.chapter3.TrainAndRunLogReg

The result is a big dump of information about the training:

Reading data.
:00 Feature Extractor class=class com.aliasi.tokenizer.TokenFeatureExtractor
:00 min feature count=1
:00 Extracting Training Data
:00 Cold start
:00 Regression callback handler=null
:00 Logistic Regression Estimation
:00 Monitoring convergence=true
:00 Number of dimensions=233
:00 Number of Outcomes=2
:00 Number of Parameters=233
:00 Number of Training Instances=21
:00 Prior=LaplaceRegressionPrior(Variance=2.0,noninformativeIntercept=true)
:00 Annealing Schedule=Exponential(initialLearningRate=2.5E-4,base=0.999)
:00 Minimum Epochs=100
:00 Maximum Epochs=2000
:00 Minimum Improvement Per Period=1.0E-9
:00 Has Informative Prior=true
:00 epoch=    0 lr=0.000250000 ll=   -20.9648 lp=  -232.0139 llp=  -252.9787 llp*=  -252.9787
:00 epoch=    1 lr=0.000249750 ll=   -20.9406 lp=  -232.0195 llp=  -252.9602 llp*=  -252.9602

The epoch reporting goes on until either the number of epochs is met or the search converges.
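The column labels in the trace aren't spelled out in this excerpt, but their relationship is visible from the arithmetic; reading them as learning rate (lr), log likelihood (ll), log prior (lp), their sum (llp), and best sum so far (llp*), the first epoch line checks out:

$$\mathrm{llp} = \mathrm{ll} + \mathrm{lp}: \quad -20.9648 + (-232.0139) = -252.9787$$

Treat these expansions as informed guesses rather than official documentation.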
In the following case, the number of epochs was met:

:00 epoch= 1998 lr=0.000033868 ll=   -15.4568 lp=  -233.8125 llp=  -249.2693 llp*=  -249.2693
:00 epoch= 1999 lr=0.000033834 ll=   -15.4565 lp=  -233.8127 llp=  -249.2692 llp*=  -249.2692

Now, we can play with the classifier a bit:

Type a string to be classified. Empty string to quit.
I luv Disney
Rank  Category  Score              P(Category|Input)
0=e   0.626898085027528  0.626898085027528
1=n   0.373101914972472  0.373101914972472

This should look familiar; it is exactly the same result as the worked example at the start. That's it! You have trained up and used the world's most relevant industrial classifier. However, there's a lot more to harnessing the power of this beast.

Summary

In this article, we learned how to do logistic regression.

Resources for Article:

Further resources on this subject:
Installing NumPy, SciPy, matplotlib, and IPython [Article]
Introspecting Maya, Python, and PyMEL [Article]
Understanding the Python regex engine [Article]