Clustering with K-Means

Packt
27 Oct 2014
9 min read
In this article by Gavin Hackeling, the author of Mastering Machine Learning with scikit-learn, we will discuss an unsupervised learning task called clustering. Clustering is used to find groups of similar observations within a set of unlabeled data. We will discuss the K-Means clustering algorithm, apply it to an image compression problem, and learn to measure its performance. Finally, we will work through a semi-supervised learning problem that combines clustering with classification.

Clustering, or cluster analysis, is the task of grouping observations such that members of the same group, or cluster, are more similar to each other by some metric than they are to the members of the other clusters. As with supervised learning, we will represent an observation as an n-dimensional vector. For example, assume that your training data consists of the samples plotted in the following figure. Clustering might reveal the following two groups, indicated by squares and circles. Clustering could also reveal the following four groups.

Clustering is commonly used to explore a data set. Social networks can be clustered to identify communities and to suggest missing connections between people. In biology, clustering is used to find groups of genes with similar expression patterns. Recommendation systems sometimes employ clustering to identify products or media that might appeal to a user. In marketing, clustering is used to find segments of similar consumers. In the following sections, we will work through an example of using the K-Means algorithm to cluster a data set.

Clustering with the K-Means Algorithm

The K-Means algorithm is a clustering method that is popular because of its speed and scalability. K-Means is an iterative process of moving the centers of the clusters, or the centroids, to the mean position of their constituent points, and re-assigning instances to their closest clusters.
The titular K is a hyperparameter that specifies the number of clusters that should be created; K-Means automatically assigns observations to clusters but cannot determine the appropriate number of clusters. K must be a positive integer that is less than the number of instances in the training set. Sometimes, the number of clusters is specified by the clustering problem's context. For example, a company that manufactures shoes might know that it is able to support manufacturing three new models. To understand what groups of customers to target with each model, it surveys customers and creates three clusters from the results. That is, the value of K was specified by the problem's context. Other problems may not require a specific number of clusters, and the optimal number of clusters may be ambiguous. We will discuss a heuristic for estimating the optimal number of clusters, called the elbow method, later in this article.

The parameters of K-Means are the positions of the clusters' centroids and the observations that are assigned to each cluster. Like generalized linear models and decision trees, the optimal values of K-Means' parameters are found by minimizing a cost function. The cost function for K-Means is given by the following equation:

    J = Σ (k = 1..K) Σ (i ∈ Ck) ||xi − µk||²

where µk is the centroid for cluster k and Ck is the set of instances assigned to it. The cost function sums the distortions of the clusters. Each cluster's distortion is equal to the sum of the squared distances between its centroid and its constituent instances. The distortion is small for compact clusters, and large for clusters that contain scattered instances. The parameters that minimize the cost function are learned through an iterative process of assigning observations to clusters and then moving the clusters. First, the clusters' centroids are initialized to random positions. In practice, setting the centroids' positions equal to the positions of randomly selected observations yields the best results.
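The cost function can be sketched in a few lines of pure Python. The helper names `distortion` and `cost`, and the sample clusters, are our own illustrations rather than code from this article:

```python
# A minimal pure-Python sketch of the K-Means cost function.
# `distortion` and `cost` are illustrative names, not part of any library.
from math import dist  # Euclidean distance, available in Python 3.8+

def distortion(points, centroid):
    # Sum of squared distances between a centroid and its assigned points
    return sum(dist(p, centroid) ** 2 for p in points)

def cost(clusters):
    # `clusters` maps each centroid to the list of points assigned to it
    return sum(distortion(points, centroid)
               for centroid, points in clusters.items())

# Two clusters built from points in the worked example's training data
clusters = {
    (4, 6): [(5, 7), (4, 6), (3, 7)],   # compact cluster -> small distortion
    (5, 5): [(7, 5), (0, 0), (5, 5)],   # scattered cluster -> large distortion
}
print(round(cost(clusters), 6))
```

Scattered points inflate a cluster's distortion, which is why the compact first cluster contributes far less to the total cost than the scattered second one.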
During each iteration, K-Means assigns observations to the cluster that they are closest to, and then moves the centroids to their assigned observations' mean location. Let's work through an example by hand using the training data shown in the following table:

Instance     X0  X1
1            7   5
2            5   7
3            7   7
4            3   3
5            4   6
6            1   4
7            0   0
8            2   2
9            8   7
10           6   8
11           5   5
12           3   7

There are two explanatory variables; each instance has two features. The instances are plotted in the following figure. Assume that K-Means initializes the centroid for the first cluster to the fifth instance and the centroid for the second cluster to the eleventh instance. For each instance, we will calculate its distance to both centroids, and assign it to the cluster with the closest centroid. The initial assignments are shown in the "Cluster" column of the following table:

Instance     X0  X1  C1 distance  C2 distance  Last Cluster  Cluster  Changed?
1            7   5   3.16228      2            None          C2       Yes
2            5   7   1.41421      2            None          C1       Yes
3            7   7   3.16228      2.82843      None          C2       Yes
4            3   3   3.16228      2.82843      None          C2       Yes
5            4   6   0            1.41421      None          C1       Yes
6            1   4   3.60555      4.12311      None          C1       Yes
7            0   0   7.21110      7.07107      None          C2       Yes
8            2   2   4.47214      4.24264      None          C2       Yes
9            8   7   4.12311      3.60555      None          C2       Yes
10           6   8   2.82843      3.16228      None          C1       Yes
11           5   5   1.41421      0            None          C2       Yes
12           3   7   1.41421      2.82843      None          C1       Yes
C1 centroid  4   6
C2 centroid  5   5

The plotted centroids and the initial cluster assignments are shown in the following graph. Instances assigned to the first cluster are marked with "Xs", and instances assigned to the second cluster are marked with dots. The markers for the centroids are larger than the markers for the instances. Now we will move both centroids to the means of their constituent instances, re-calculate the distances of the training instances to the centroids, and re-assign the instances to the closest centroids:

Instance     X0  X1  C1 distance  C2 distance  Last Cluster  New Cluster  Changed?
1            7   5   3.492850     2.575394     C2            C2           No
2            5   7   1.341641     2.889107     C1            C1           No
3            7   7   3.255764     3.749830     C2            C1           Yes
4            3   3   3.492850     1.943067     C2            C2           No
5            4   6   0.447214     1.943067     C1            C1           No
6            1   4   3.687818     3.574285     C1            C2           Yes
7            0   0   7.443118     6.169378     C2            C2           No
8            2   2   4.753946     3.347250     C2            C2           No
9            8   7   4.242641     4.463000     C2            C1           Yes
10           6   8   2.720294     4.113194     C1            C1           No
11           5   5   1.843909     0.958315     C2            C2           No
12           3   7   1.000000     3.260775     C1            C1           No
C1 centroid  3.8       6.4
C2 centroid  4.571429  4.142857

The new clusters are plotted in the following graph. Note that the centroids are diverging, and several instances have changed their assignments. Now we will move the centroids to the means of their constituents' locations again, and re-assign the instances to their nearest centroids. The centroids continue to diverge, as shown in the following figure. None of the instances' centroid assignments will change in the next iteration; K-Means will continue iterating until some stopping criterion is satisfied. Usually, this criterion is either a threshold for the difference between the values of the cost function for subsequent iterations, or a threshold for the change in the positions of the centroids between subsequent iterations. If these stopping criteria are small enough, K-Means will converge on an optimum. This optimum will not necessarily be the global optimum.

Local Optima

Recall that K-Means initially sets the positions of the clusters' centroids to the positions of randomly selected observations. Sometimes the random initialization is unlucky, and the centroids are set to positions that cause K-Means to converge to a local optimum. For example, assume that K-Means randomly initializes two cluster centroids to the following positions: K-Means will eventually converge on a local optimum like that shown in the following figure. These clusters may be informative, but it is more likely that the top and bottom groups of observations are more informative clusters.
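The assignment and update steps worked through in the tables above can be checked with a short pure-Python sketch. The variable and function names below are our own; the article does not provide code for the hand-worked example:

```python
# Reproduce the first K-Means iteration from the worked example.
from math import dist

instances = [(7, 5), (5, 7), (7, 7), (3, 3), (4, 6), (1, 4),
             (0, 0), (2, 2), (8, 7), (6, 8), (5, 5), (3, 7)]
c1, c2 = (4, 6), (5, 5)  # initial centroids: the fifth and eleventh instances

# Assignment step: each instance goes to the cluster with the nearest centroid
assignments = ['C1' if dist(p, c1) < dist(p, c2) else 'C2' for p in instances]

# Update step: move each centroid to the mean of its assigned instances
def mean_of(label):
    members = [p for p, a in zip(instances, assignments) if a == label]
    return (sum(x for x, _ in members) / len(members),
            sum(y for _, y in members) / len(members))

c1, c2 = mean_of('C1'), mean_of('C2')
print(c1, c2)  # matches the updated centroids in the table above
```

The printed centroids agree with the table: (3.8, 6.4) for the first cluster and roughly (4.5714, 4.1429) for the second.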
To avoid local optima, K-Means is often repeated dozens or hundreds of times. In each repetition, it is randomly initialized to different starting cluster positions. The initialization that produces the lowest value of the cost function is selected.

The Elbow Method

If K is not specified by the problem's context, the optimal number of clusters can be estimated using a technique called the elbow method. The elbow method plots the value of the cost function produced by different values of K. As K increases, the average distortion will decrease; each cluster will have fewer constituent instances, and the instances will be closer to their respective centroids. However, the improvements to the average distortion will decline as K increases. The value of K at which the improvement to the distortion declines the most is called the elbow.

Let's use the elbow method to choose the number of clusters for a data set. The following scatter plot visualizes a data set with two obvious clusters. We will calculate and plot the mean distortion of the clusters for each value of K from one to ten with the following code:

>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> from scipy.spatial.distance import cdist
>>> import matplotlib.pyplot as plt
>>> cluster1 = np.random.uniform(0.5, 1.5, (2, 10))
>>> cluster2 = np.random.uniform(3.5, 4.5, (2, 10))
>>> X = np.hstack((cluster1, cluster2)).T
>>> K = range(1, 11)
>>> meandistortions = []
>>> for k in K:
...     kmeans = KMeans(n_clusters=k)
...     kmeans.fit(X)
...     meandistortions.append(sum(np.min(cdist(X, kmeans.cluster_centers_, 'euclidean'), axis=1)) / X.shape[0])
>>> plt.plot(K, meandistortions, 'bx-')
>>> plt.xlabel('k')
>>> plt.ylabel('Average distortion')
>>> plt.title('Selecting k with the Elbow Method')
>>> plt.show()

The average distortion improves rapidly as we increase K from one to two. There is little improvement for values of K greater than two.
Now let's use the elbow method on the following data set with three clusters. The following is the elbow plot for the data set. From this, we can see that the rate of improvement to the average distortion declines the most when adding a fourth cluster. That is, the elbow method confirms that K should be set to three for this data set.

Summary

In this article, we explained what clustering is, worked through the K-Means algorithm by hand, and introduced the elbow method for estimating the optimal number of clusters.

Resources for Article:

Further resources on this subject:
Machine Learning in IPython with scikit-learn [Article]
Machine Learning in Bioinformatics [Article]
Specialized Machine Learning Topics [Article]

Creating Java EE Applications

Packt
24 Oct 2014
16 min read
In this article by Grant Shipley, author of Learning OpenShift, we are going to learn how to use OpenShift in order to create and deploy Java-EE-based applications using the JBoss Enterprise Application Platform (EAP) application server. To illustrate and learn the concepts of Java EE, we are going to create an application that displays an interactive map that contains all of the major league baseball parks in the United States. We will start by covering some background information on the Java EE framework and then introduce each part of the sample application. The process of creating the sample application, named mlbparks, starts with creating the JBoss EAP container, then adding a database, creating the web services, and lastly, creating the responsive map UI.

(For more resources related to this topic, see here.)

Evolution of Java EE

I can't think of a single programming language other than Java that has so many fans while at the same time having a large community of developers that profess their hatred towards it. The bad reputation Java has can largely be attributed to early promises made by the community when the language was first released, which it was then unable to fulfill. Developers were told that we would be able to write once and run anywhere, but we quickly found out that this meant that we could write once and then debug on every platform. Java was also perceived to consume more memory than required and was accused of being overly verbose by relying heavily on XML configuration files.

Another problem the language had was not being able to focus on and excel at one particular task. We used Java to create thick client applications, applets that could be downloaded via a web browser, embedded applications, web applications, and so on. Having Java available as a tool that completes most projects was a great thing, but the implementation for each project was often confusing.
For example, let's examine the history of GUI development using the Java programming language. When the language was first introduced, it included an API called the Abstract Window Toolkit (AWT) that was essentially a Java wrapper around native UI components supplied by the operating system. When Java 1.2 was released, the AWT implementation was deprecated in favor of the Swing API that contained GUI elements written in 100 percent Java. By this time, a lot of developers were quickly growing frustrated with the available APIs, and a new toolkit called the Standard Widget Toolkit (SWT) was developed. SWT was developed at IBM, is the windowing toolkit in use by the Eclipse IDE, and is considered by most to be the superior toolkit that can be used when creating applications.

As you can see, rapid changes in the core functionality of the language coupled with the refusal of some vendors to ship the JRE as part of the operating system left a bad taste in most developers' mouths. Another reason why developers began switching from Java to more attractive programming languages was the implementation of Enterprise JavaBeans (EJB). The first Java EE release occurred in December 1999, and the Java community is just now beginning to recover from the complexity introduced by the language in order to create applications. If you were able to escape creating applications using early EJBs, consider yourself one of the lucky ones, as many of your fellow developers were consumed by implementing large-scale systems using this new technology. It wasn't fun; trust me. I was there and experienced it firsthand.

When developers began abandoning Java EE, they seemed to go in one of two directions. Developers who understood that the Java language itself was quite beautiful and useful adopted the Spring Framework methodology of having enterprise grade features while sticking with a Plain Old Java Object (POJO) implementation.
Other developers were wooed away by languages that were considered more modern, such as Ruby and the popular Rails framework. While the rise in popularity of both Ruby and Spring was happening, the team behind Java EE continued to improve and innovate, which resulted in the creation of a new implementation that is both easy to use and develop with. I am happy to report that if you haven't taken a look at Java EE in the last few years, now is the time to do so. Working with the language after a long hiatus has been a rewarding and pleasurable experience.

Introducing the sample application

For the remainder of this article, we are going to develop an application called mlbparks that displays a map of the United States with a pin on the map representing the location of each major league baseball stadium. The requirements for the application are as follows:

- A single map that a user can zoom in and out of
- As the user moves the map around, the map must be updated with all baseball stadiums that are located in the shown area
- The location of the stadiums must be searchable based on map coordinates that are passed to the REST-based API
- The data should be transferred in the JSON format
- The web application must be responsive so that it is displayed correctly regardless of the resolution of the browser
- When a stadium is listed on the map, the user should be able to click on the stadium to view details about the associated team

The end state application will look like the following screenshot. The user will also be able to zoom in on a specific location by double-clicking on the map or by clicking on the + zoom button in the top-left corner of the application. For example, if a user zooms the map in to the Phoenix, Arizona area of the United States, they will be able to see the information for the Arizona Diamondbacks stadium as shown in the following screenshot.

To view this sample application running live, open your browser and type http://mlbparks-packt.rhcloud.com.
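The search-by-coordinates requirement can be illustrated with a small sketch before we build the real services. The stadium records (the Fenway Park coordinates are approximate) and the `stadiums_in_view` function below are hypothetical stand-ins of our own; the actual application answers such queries with MongoDB geospatial lookups rather than in-memory filtering:

```python
# Hypothetical sketch of the "search by map coordinates" requirement.
# The data and the function name are illustrative, not from the mlbparks app.
stadiums = [
    {"name": "Chase Field", "coordinates": [-112.066662, 33.444799]},
    {"name": "Fenway Park", "coordinates": [-71.097218, 42.346676]},
]

def stadiums_in_view(stadiums, lon_min, lat_min, lon_max, lat_max):
    # Return the names of stadiums whose coordinates fall inside the
    # bounding box of the currently visible map area
    return [s["name"] for s in stadiums
            if lon_min <= s["coordinates"][0] <= lon_max
            and lat_min <= s["coordinates"][1] <= lat_max]

# Zooming in on the Phoenix, Arizona area should show only Chase Field
print(stadiums_in_view(stadiums, -113, 33, -111, 34))
```

As the user pans or zooms the map, the UI will send the new bounding box to the REST API, which performs exactly this kind of filtering against the database.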
Now that we have our requirements and know what the end result should look like, let's start creating our application.

Creating a JBoss EAP application

For the sample application that we are going to develop as part of this article, we are going to take advantage of the JBoss EAP application server that is available on the OpenShift platform. The JBoss EAP application server is a fully tested, stable, and supported platform for deploying mission-critical applications. Some developers prefer to use the open source community application server from JBoss called WildFly. Keep in mind when choosing WildFly over EAP that it only comes with community-based support and is a bleeding edge application server.

To get started with building the mlbparks application, the first thing we need to do is create a gear that contains the cartridge for our JBoss EAP runtime. For this, we are going to use the RHC tools. Open up your terminal application and enter the following command:

$ rhc app create mlbparks jbosseap-6

Once the previous command is executed, you should see the following output:

Application Options
-------------------
Domain:     yourDomainName
Cartridges: jbosseap-6 (addtl. costs may apply)
Gear Size:  default
Scaling:    no

Creating application 'mlbparks' ... done
Waiting for your DNS name to be available ... done
Cloning into 'mlbparks'...
Your application 'mlbparks' is now available.

URL:        http://mlbparks-yourDomainName.rhcloud.com/
SSH to:     5311180f500446f54a0003bb@mlbparks-yourDomainName.rhcloud.com
Git remote: ssh://5311180f500446f54a0003bb@mlbparks-yourDomainName.rhcloud.com/~/git/mlbparks.git/
Cloned to:  /home/gshipley/code/mlbparks

Run 'rhc show-app mlbparks' for more details about your app.

If you have a paid subscription to OpenShift Online, you might want to consider using a medium- or large-size gear to host your Java-EE-based applications.
To create this application using a medium-size gear, use the following command:

$ rhc app create mlbparks jbosseap-6 -g medium

Adding database support to the application

Now that our application gear has been created, the next thing we want to do is embed a database cartridge that will hold the information about the baseball stadiums we want to track. Given that we are going to develop an application that doesn't require referential integrity but provides a REST-based API that will return JSON, it makes sense to use MongoDB as our database. MongoDB is arguably the most popular NoSQL database available today. The company behind the database, MongoDB, offers paid subscriptions and support plans for production deployments. For more information on this popular NoSQL database, visit www.mongodb.com.

Run the following command to embed a database into our existing mlbparks OpenShift gear:

$ rhc cartridge add mongodb-2.4 -a mlbparks

Once the preceding command is executed and the database has been added to your application, you will see the following information on the screen that contains the username and password for the database:

Adding mongodb-2.4 to application 'mlbparks' ... done

mongodb-2.4 (MongoDB 2.4)
-------------------------
Gears:          Located with jbosseap-6
Connection URL: mongodb://$OPENSHIFT_MONGODB_DB_HOST:$OPENSHIFT_MONGODB_DB_PORT/
Database Name:  mlbparks
Password:       q_6eZ22-fraN
Username:       admin

MongoDB 2.4 database added. Please make note of these credentials:

   Root User:      admin
   Root Password:  yourPassword
   Database Name:  mlbparks
   Connection URL: mongodb://$OPENSHIFT_MONGODB_DB_HOST:$OPENSHIFT_MONGODB_DB_PORT/

Importing the MLB stadiums into the database

Now that we have our application gear created and our database added, we need to populate the database with the information about the stadiums that we are going to place on the map.
The data is provided as a JSON document and contains the following information:

- The name of the baseball team
- The total payroll for the team
- The location of the stadium represented with the longitude and latitude
- The name of the stadium
- The name of the city where the stadium is located
- The league the baseball club belongs to (National or American)
- The year the data is relevant for
- All of the players on the roster, including their position and salary

A sample for the Arizona Diamondbacks looks like the following code:

{
  "name": "Diamondbacks",
  "payroll": 89000000,
  "coordinates": [
    -112.066662,
    33.444799
  ],
  "ballpark": "Chase Field",
  "city": "Phoenix",
  "league": "National League",
  "year": "2013",
  "players": [
    {
      "name": "Miguel Montero",
      "position": "Catcher",
      "salary": 10000000
    },
    …
  ]
}

In order to import the preceding data, we are going to use the SSH command. To get started with the import, SSH into your OpenShift gear for the mlbparks application by issuing the following command in your terminal prompt:

$ rhc app ssh mlbparks

Once we are connected to the remote gear, we need to download the JSON file and store it in the /tmp directory of our gear. To complete these steps, use the following commands on your remote gear:

$ cd /tmp
$ wget https://raw.github.com/gshipley/mlbparks/master/mlbparks.json

Wget is a software package that is available on most Linux-based operating systems in order to retrieve files using HTTP, HTTPS, or FTP. Once the file has completed downloading, take a quick look at the contents using your favorite text editor in order to get familiar with the structure of the document.
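One quick way to get familiar with the document structure is to load a fragment of it with a small script. The snippet below inlines the Diamondbacks sample from above, trimmed to a single player, rather than reading the downloaded file:

```python
# Inspect the structure of a team document using only the standard library.
import json

# A fragment of the sample record shown above (one player kept for brevity)
fragment = '''
{
  "name": "Diamondbacks",
  "payroll": 89000000,
  "coordinates": [-112.066662, 33.444799],
  "ballpark": "Chase Field",
  "city": "Phoenix",
  "league": "National League",
  "year": "2013",
  "players": [
    {"name": "Miguel Montero", "position": "Catcher", "salary": 10000000}
  ]
}
'''

team = json.loads(fragment)
print(team["ballpark"], team["coordinates"])
print(len(team["players"]), "player(s) in this fragment")
```

The `coordinates` array (longitude first, then latitude) is what the map UI and the geospatial queries later in this article rely on.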
When you are comfortable with the data that we are going to import into the database, execute the following command on the remote gear to populate MongoDB with the JSON documents:

$ mongoimport --jsonArray -d $OPENSHIFT_APP_NAME -c teams --type json --file /tmp/mlbparks.json -h $OPENSHIFT_MONGODB_DB_HOST --port $OPENSHIFT_MONGODB_DB_PORT -u $OPENSHIFT_MONGODB_DB_USERNAME -p $OPENSHIFT_MONGODB_DB_PASSWORD

If the command was executed successfully, you should see the following output on the screen:

connected to: 127.7.150.130:27017
Fri Feb 28 20:57:24.125 check 9 30
Fri Feb 28 20:57:24.126 imported 30 objects

What just happened? To understand this, we need to break the command we issued into smaller chunks, as detailed in the following table:

Command/argument                   Description
mongoimport                        This command is provided by MongoDB to allow users to import data into a database.
--jsonArray                        This specifies that we are going to import an array of JSON documents.
-d $OPENSHIFT_APP_NAME             This specifies the database into which we are going to import the data. We are using a system environment variable to use the database that was created by default when we embedded the database cartridge in our application.
-c teams                           This defines the collection to which we want to import the data. If the collection does not exist, it will be created.
--type json                        This specifies the type of file we are going to import.
--file /tmp/mlbparks.json          This specifies the full path and name of the file that we are going to import into the database.
-h $OPENSHIFT_MONGODB_DB_HOST      This specifies the host of the MongoDB server.
--port $OPENSHIFT_MONGODB_DB_PORT  This specifies the port of the MongoDB server.
-u $OPENSHIFT_MONGODB_DB_USERNAME  This specifies the username to be used to authenticate to the database.
-p $OPENSHIFT_MONGODB_DB_PASSWORD  This specifies the password to be used to authenticate to the database.
To verify that the data was loaded properly, you can use the following command, which will print out the number of documents in the teams collection of the mlbparks database:

$ mongo --quiet $OPENSHIFT_MONGODB_DB_HOST:$OPENSHIFT_MONGODB_DB_PORT/$OPENSHIFT_APP_NAME -u $OPENSHIFT_MONGODB_DB_USERNAME -p $OPENSHIFT_MONGODB_DB_PASSWORD --eval "db.teams.count()"

The result should be 30.

Lastly, we need to create a 2d index on the teams collection to ensure that we can perform spatial queries on the data. Geospatial queries are what allow us to search for specific documents that fall within a given location as provided by the latitude and longitude parameters. To add the 2d index to the teams collection, enter the following command on the remote gear:

$ mongo $OPENSHIFT_MONGODB_DB_HOST:$OPENSHIFT_MONGODB_DB_PORT/$OPENSHIFT_APP_NAME --eval 'db.teams.ensureIndex( { coordinates : "2d" } );'

Adding database support to our Java application

The next step in creating the mlbparks application is adding the MongoDB driver dependency to our application. OpenShift Online supports the popular Apache Maven build system as the default way of compiling the source code and resolving dependencies. Maven was originally created to simplify the build process by allowing developers to specify the specific JARs that their application depends on. This alleviates the bad practice of checking JAR files into the source code repository and provides a way to share JARs across several projects. This is accomplished via a pom.xml file that contains configuration items and dependency information for the project. In order to add the dependency for the MongoDB client to our mlbparks application, we need to modify the pom.xml file that is in the root directory of the Git repository. The Git repository was cloned to our local machine during the application creation step that we performed earlier in this article.
Open up your favorite text editor and modify the pom.xml file to include the following lines of code in the <dependencies> block:

<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongo-java-driver</artifactId>
    <version>2.9.1</version>
</dependency>

Once you have added the dependency, commit the changes to your local repository by using the following command:

$ git commit -am "added MongoDB dependency"

Finally, let's push the change to our Java application to include the MongoDB database drivers using the git push command:

$ git push

The first time the Maven build system builds the application, it downloads all the dependencies for the application and then caches them. Because of this, the first build will always take a bit longer than any subsequent build.

Creating the database access class

At this point, we have our application created, the MongoDB database embedded, all the information for the baseball stadiums imported, and the dependency for our database driver added to our application. The next step is to do some actual coding by creating a Java class that will act as the interface for connecting to and communicating with the MongoDB database.
Create a Java file named DBConnection.java in the mlbparks/src/main/java/org/openshift/mlbparks/mongo directory and add the following source code:

package org.openshift.mlbparks.mongo;

import java.net.UnknownHostException;
import javax.annotation.PostConstruct;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Named;
import com.mongodb.DB;
import com.mongodb.Mongo;

@Named
@ApplicationScoped
public class DBConnection {

    private DB mongoDB;

    public DBConnection() {
        super();
    }

    @PostConstruct
    public void afterCreate() {
        String mongoHost = System.getenv("OPENSHIFT_MONGODB_DB_HOST");
        String mongoPort = System.getenv("OPENSHIFT_MONGODB_DB_PORT");
        String mongoUser = System.getenv("OPENSHIFT_MONGODB_DB_USERNAME");
        String mongoPassword = System.getenv("OPENSHIFT_MONGODB_DB_PASSWORD");
        String mongoDBName = System.getenv("OPENSHIFT_APP_NAME");
        int port = Integer.decode(mongoPort);

        Mongo mongo = null;
        try {
            mongo = new Mongo(mongoHost, port);
        } catch (UnknownHostException e) {
            System.out.println("Couldn't connect to MongoDB: " + e.getMessage() + " :: " + e.getClass());
        }
        mongoDB = mongo.getDB(mongoDBName);
        if (mongoDB.authenticate(mongoUser, mongoPassword.toCharArray()) == false) {
            System.out.println("Failed to authenticate DB ");
        }
    }

    public DB getDB() {
        return mongoDB;
    }
}

The preceding source code as well as all source code for this article is available on GitHub at https://github.com/gshipley/mlbparks. The preceding code snippet simply creates an application-scoped bean that is available until the application is shut down. The @ApplicationScoped annotation is used when creating application-wide data or constants that should be available to all the users of the application. We chose this scope because we want to maintain a single connection class for the database that is shared among all requests.
The next bit of interesting code is the afterCreate method, which authenticates to the database using the system environment variables. Once you have created the DBConnection.java file and added the preceding source code, add the file to your local repository and commit the changes as follows:

$ git add .
$ git commit -am "Adding database connection class"

Creating the beans.xml file

The DBConnection class we just created makes use of Context Dependency Injection (CDI), which is part of the Java EE specification, for dependency injection. According to the official specification for CDI, an application that uses CDI must have a file called beans.xml. The file must be present and located under the WEB-INF directory. Given this requirement, create a file named beans.xml under the mlbparks/src/main/webapp/WEB-INF directory and add the following lines of code:

<?xml version="1.0"?>
<beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://jboss.org/schema/cdi/beans_1_0.xsd"/>

After you have added the beans.xml file, add and commit it to your local Git repository:

$ git add .
$ git commit -am "Adding beans.xml for CDI"

Summary

In this article, we learned about the evolution of Java EE, created a JBoss EAP application, and created the database access class.

Resources for Article:

Further resources on this subject:
Using OpenShift [Article]
Common performance issues [Article]
The Business Layer (Java EE 7 First Look) [Article]

Wireshark

Packt
24 Oct 2014
16 min read
In this article by James H. Baxter, author of Wireshark Essentials, we will learn how to install Wireshark, perform a packet capture, use display filters to isolate traffic of interest, and save a filtered packet trace file.

(For more resources related to this topic, see here.)

Installing Wireshark

Wireshark can be installed on machines running 32- and 64-bit Windows (XP, Win7, Win8.1, and so on), Mac OS X (10.5 and higher), and most flavors of Linux/Unix. Installation on Windows and Mac machines is quick and easy because installers are available from the Wireshark website download page. Wireshark is a standard package available on many Linux distributions, and there is a list of links to third-party installers provided on the Wireshark download page for a variety of popular *nix platforms. Alternatively, you can download the source code and compile Wireshark for your environment if a precompiled installation package isn't available.

Wireshark relies on the WinPcap (Windows) or libpcap (Linux/Unix/Mac) libraries to provide the packet capture and capture filtering functions; the appropriate library is installed during the Wireshark installation. You might need administrator (Windows) or root (Linux/Unix/Mac) privileges to install Wireshark and the WinPcap/libpcap utilities on your workstation.

Assuming that you're installing Wireshark on a Windows or Mac machine, you need to go to the Wireshark website (https://www.wireshark.org/) and click on the Download button at the top of the page. This will take you to the download page, and at the same time attempt to perform an autodiscovery of your operating system type and version from your browser info. The majority of the time, the correct Wireshark installation package for your machine will be highlighted, and you only have to click on the highlighted link to download the correct installer.
If you already have Wireshark installed, an autoupdate feature will notify you of available version updates when you launch Wireshark.

Installing Wireshark on Windows

In the following screenshot, the Wireshark download page has identified that a 64-bit Windows installer is appropriate for this Windows workstation:

Clicking on the highlighted link downloads a Wireshark-win64-1.10.8.exe file or similar executable file that you can save on your hard drive. Double-clicking on the executable starts the installation process. You need to follow these steps:

Agree to the License Agreement.
Accept all of the defaults by clicking on Next for each prompt, including the prompt to install WinPcap, which is a library needed to capture packets from the Network Interface Card (NIC) on your workstation.
Early in the Wireshark installation, the process will pause and prompt you to click on Install and several Next buttons in separate windows to install WinPcap.
After the WinPcap installation is complete, click through the remaining Next prompts to finish the Wireshark installation.

Installing Wireshark on Mac OS X

The process to install Wireshark on a Mac is the same as the process for Windows, except that you will not be prompted to install WinPcap; libpcap, the packet capture library for Mac and *nix machines, gets installed instead (without prompting). There are, however, two additional requirements that may need to be addressed in a Mac installation:

The first is to install X11, a windowing system library. If this is needed for your system, you will be informed and provided a link that ultimately takes you to the XQuartz project download page so you can install this package.
The second requirement might come up if, upon starting Wireshark, you are informed that there are no interfaces on which a capture can be done.
This is a permissions issue on the Berkeley packet filter (BPF) that can be resolved by opening a terminal window and typing the following command:

bash-3.2$ sudo chmod 644 /dev/bpf*

If this process needs to be repeated each time you start Wireshark, you can perform a web search for a more permanent permissions solution for your environment.

Installing Wireshark on Linux/Unix

The requirements and process to install Wireshark on a Linux or Unix platform can vary significantly depending on the particular environment.

Performing your first packet capture

When you first start Wireshark, you are presented with an initial Start Page, as shown in the following screenshot:

Don't get too fond of this screen. Although you'll see this every time you start Wireshark, once you do a capture, open a trace file, or perform any other function within Wireshark, this screen will be replaced with the standard Wireshark user interface and you won't see it again until the next time you start Wireshark. So, we won't spend much time here.

Selecting a network interface

If you have a number of network interfaces on your machine, you may not be sure which one to select to capture packets, but there's a fairly easy way to figure this out. On the Wireshark start page, click on Interface List (alternatively, click on Interfaces from the Capture menu or click on the first icon on the icon bar). The Wireshark Capture Interfaces window that opens provides a list and description of all the network interfaces on your machine, the IP address assigned to each one (if an address has been assigned), and a couple of counters, such as the total number of packets seen on the interface since this window opened and a packets/s (packets per second) counter. If an interface has an IPv6 address assigned (which may start with fe80:: and contain a number of colons) and this is being displayed, you can click on the IPv6 address and it will toggle to display the IPv4 address.
This is shown in the following screenshot:

On Linux/Unix/Mac platforms, you might also see a loopback interface that can be selected to capture packets being sent between applications on the same machine. However, in most cases, you'll only be interested in capturing packets from a network interface.

The goal is to identify the active interface that will be used to communicate with the Internet when you open a browser and navigate to a website. If you have a wired local area network connection and the interface is enabled, that's probably the active interface, but you might also have a wireless interface that is enabled, and it may or may not be the primary interface. The most reliable indicator of the active network interface is that it will have a greater number of steadily increasing packets with a corresponding active number of packets/s (which will vary over time). Another possible indicator is if an interface has an IP address assigned and others do not. If you're still unsure, open a browser window and navigate to one of your favorite websites, and watch the packets and packets/s counters to identify the interface that shows the greatest increase in activity.

Performing the packet capture

Once you've identified the correct interface, select the checkbox on the left-hand side of that interface and click on the Start button at the bottom of the Capture Interfaces window. Wireshark will start capturing all the packets that can be seen from that interface, including the packets sent to and from your workstation. If you don't see this, try a different interface.

It's a bit amazing just how much background traffic there is on a typical network, such as broadcast packets from devices advertising their names, addresses, and services, and from other devices asking for addresses of stations they want to communicate with.
Also, a fair amount of traffic is generated from your own workstation for applications and services that are running in the background, and you probably had no idea they were creating this much noise. Your Wireshark Packet List pane may look similar to the following screenshot; however, we can ignore all this for now:

We're ready to generate some traffic that we'll be interested in analyzing. Open a new Internet browser window, enter www.wireshark.org in the address box, and press Enter. When the https://www.wireshark.org/ home page finishes loading, stop the Wireshark capture by either selecting Stop from the Capture menu or by clicking on the red square stop icon that's between the View and Go menu headers.

Wireshark user interface essentials

Once you have completed your first capture, you will see the normal Wireshark user interface main screen. So before we go much further, a quick introduction to the primary parts of this user interface will be helpful, so you'll know what's being referred to as we continue the analysis process.
There are eight significant sections or elements of the default Wireshark user interface, as shown in the following screenshot. Let's look at the eight significant sections in detail:

Title: This area reflects the interface from where a capture is being taken or the filename of an open packet trace file
Menu: This is the standard row of main functions and subfunctions in Wireshark
Main toolbar (icons): These provide a quick way to access the most useful Wireshark functions and are well worth getting familiar with and using
Display filter toolbar: This allows you to quickly create, edit, clear, apply, and save filters to isolate packets of interest for analysis
Packet list pane: This section contains a summary info line for each captured packet, as well as a packet number and relative timestamp
Packet details pane: This section provides a hierarchical display of information about a single packet that has been selected in the packet list pane, divided into sections for the various protocols contained in a packet
Packet bytes pane: This section displays the selected packet's contents in hex bytes or bits form, as well as an ASCII display of the data that can be helpful
Status bar: This section provides an expert info indicator, an edit capture comments icon, trace file path, name, and size information, data on the number of packets captured and displayed, and a profile display and selection section

Filtering out the noise

Somewhere in your packet capture, there are packets involved with loading the Wireshark home page—but how do you find and view just those packets out of all the background noise? The simplest and most reliable method is to determine the IP address of the Wireshark website and filter out all the packets except those flowing between that IP address and the IP address of your workstation by using a display filter.
The best approach—and the one that you'll likely use as a first step for most of your post-capture analysis work in future—is to investigate a list of all the conversations by IP address and/or hostnames, sorted by the most active nodes, and identify your target hostname, website name, or IP address from this list.

From the Wireshark menu, select Conversations from the Statistics menu, and in the Conversations window that opens, select the IPv4 tab at the top. You'll see a list of network conversations identified by Address A and Address B, with columns for total Packets, Bytes, Packets A→B, Bytes A→B, Packets A←B, and Bytes A←B. Scrolling over to the right-hand side of this window, there are Relative Start values. These are the times when each particular conversation was first observed in the capture, relative to the start of the capture in seconds. The next column is Duration, which is how long this conversation persisted in the capture (first to last packet seen). Finally, there are average data rates in bits per second (bps) in each direction for each conversation, which is the network impact for this conversation. All these are shown in the following screenshot:

We want to sort the list of conversations to get the busiest ones—called the Top Talkers in network jargon—at the top of the list. Click on the Bytes column header and then click it again. Your list should look something like the preceding screenshot, and if you didn't get a great deal of other background traffic flowing to/from your workstation, the traffic from https://www.wireshark.org/ should have the greatest volume and therefore be at the top of the list. In this example, the conversation between IP addresses 162.159.241.165 and 192.168.1.116 has the greatest overall volume, and looking at the Bytes A->B column, it's apparent that the majority of the traffic was from the 162.159.241.165 address to the 192.168.1.116 address.
However, at this point, how do we know if this is really the conversation that we're after? We will need to resolve the IP addresses from our list to hostnames or website addresses, and this can be done from within Wireshark by turning on Network Name Resolution and trying to get hostnames and/or website addresses resolved for those IP addresses using reverse DNS queries (using what is known as a pointer (PTR) DNS record type).

If you just installed or started Wireshark, the Name Resolution option may not be turned on by default. This is usually a good thing, as Wireshark can create traffic of its own by transmitting the DNS queries trying to resolve all the IP addresses that it comes across during the capture, and you don't really want that going on during a capture. However, the Name Resolution option can be very helpful to resolve IP addresses to proper hostnames after a capture is complete.

To enable Name Resolution, navigate to View | Name Resolution | Enable for Network Layer (click to turn on the checkmark) and make sure Use External Network Name Resolver is enabled as well. Wireshark will attempt to resolve all the IP addresses in the capture to their hostname or website address, and the resolved names will then appear (replacing the previous IP addresses) in the packet list as well as the Conversations window.

Note that the Name Resolution option at the bottom of the Conversations window must be enabled as well (it usually is by default), and this setting affects whether resolved names or IP addresses appear in that Conversations window (if Name Resolution is enabled in the Wireshark main screen), as shown in the following screenshot:

At this point, you should see the conversation pair between wireshark.org and your workstation at or near the top of the list, as shown in the following screenshot. Of course, your workstation will have a different name or may only appear as an IP address, but identifying the conversation to wireshark.org has been achieved.
Applying a display filter

You now want to see just the conversation between your workstation and wireshark.org, and get rid of all the extraneous conversations so you can focus on the traffic of interest. This is accomplished by creating a filter that only displays the desired traffic.

Right-click on the line containing the wireshark.org entry and navigate to Apply as Filter | Selected | A<->B, as shown in the following screenshot:

Wireshark will create and apply a display filter string that isolates the displayed traffic to just the conversation between the IP addresses of wireshark.org and your workstation, as shown in the following screenshot. Note that if you create or edit a display filter entry manually, you will need to click on Apply to apply the filter to the trace file (or Clear to clear it).

This particular display filter syntax works with IP addresses, not with hostnames, and uses an ip.addr== (IP address equals) syntax for each node, along with the && (and) logical operator, to build a string that says display any packet that contains this IP address *and* that IP address. This is the type of display filter that you will be using a great deal for packet analysis.

You'll notice as you scroll up and down in the Packet List pane that all the other packets, except those between your workstation and wireshark.org, are gone. They're not gone in the strict sense, they're just hidden—as you can observe by inspecting the Packet No. column, there are gaps in the numbering sequence; those are for the hidden packets.

Saving the packet trace

Now that you've isolated the traffic of interest using a display filter, you can save a new packet trace file that contains just the filtered packets. This serves two purposes. Firstly, you can close Wireshark, come back to it later, open the filtered trace file, and pick up where you left off in your analysis, as well as have a record of the capture in case you need to reference it later, such as in a troubleshooting scenario.
Secondly, it's much easier and quicker to work in the various Wireshark screens and functions with a smaller, more focused trace file that contains just the packets that you want to analyze.

To create a new packet trace file containing just the filtered/displayed packets, select Export Specified Packets from the Wireshark File menu. You can navigate to and/or create a folder to hold your Wireshark trace files, and then enter a filename for the trace file that you want to save. In this example, the filename is wireshark_website.pcapng. By default, Wireshark will save the trace file in the pcapng format (which is the preferred and more recent format). If you don't specify a file extension with the filename, Wireshark will provide the appropriate extension based on the Save as type selection, as shown in the following screenshot:

Also, by default, Wireshark will have the All packets option selected, and if a display filter is applied (as it is in this scenario), the Displayed option will be selected, as opposed to the Captured option that saves all the packets regardless of whether a filter was applied. Having entered a filename and confirmed that all the save selections are correct, you can click on Save to save the new packet trace file.

Note that when you have finished this trace file save activity, Wireshark still has all the original packets from the capture in memory, and they can still be viewed by clicking on Clear in the Display Filter Toolbar menu. If you want to work further with the new trace file you just saved, you'll need to open it by clicking on Open in the File menu (or Open Recent in the File menu).

Summary

Congratulations!
If you accomplished all the activities covered in this article, you have successfully installed Wireshark, performed a packet capture, created a filter to isolate and display just the packets you were interested in from all the extraneous noise, and created a new packet trace file containing just those packets so you can analyze them later. Moreover, in the process, you gained an initial familiarity with the Wireshark user interface and learned how to use several of its most useful and powerful features.

Resources for Article:

Further resources on this subject:

Wireshark: Working with Packet Streams [Article]
The Kendo MVVM Framework [Article]
Kali Linux – Wireless Attacks [Article]

Packt
22 Oct 2014
10 min read

Implementing Stacks using JavaScript

In this article by Loiane Groner, author of the book Learning JavaScript Data Structures and Algorithms, we will discuss stacks. (For more resources related to this topic, see here.)

A stack is an ordered collection of items that follows the LIFO (short for Last In First Out) principle. The addition of new items and the removal of existing items take place at the same end. The end of the stack is known as the top and the opposite end is known as the base. The newest elements are near the top, and the oldest elements are near the base. We have several examples of stacks in real life, for example, a pile of books, as we can see in the following image, or a stack of trays from a cafeteria or food court:

A stack is also used by compilers in programming languages and by computer memory to store variables and method calls.

Creating a stack

We are going to create our own class to represent a stack. Let's start from the basics and declare our class:

function Stack() {
    //properties and methods go here
}

First, we need a data structure that will store the elements of the stack. We can use an array to do this:

var items = [];

Next, we need to declare the methods available for our stack:

push(element(s)): This adds a new item (or several items) to the top of the stack.
pop(): This removes the top item from the stack. It also returns the removed element.
peek(): This returns the top element from the stack. The stack is not modified (it does not remove the element; it only returns the element for information purposes).
isEmpty(): This returns true if the stack does not contain any elements, and false if the size of the stack is bigger than 0.
clear(): This removes all the elements of the stack.
size(): This returns how many elements the stack contains. It is similar to the length property of an array.

The first method we will implement is the push method.
This method will be responsible for adding new elements to the stack, with one very important detail: we can only add new items to the top of the stack, meaning at the end of the stack. The push method is represented as follows:

this.push = function(element){
    items.push(element);
};

As we are using an array to store the elements of the stack, we can use the push method from the JavaScript array class.

Next, we are going to implement the pop method. This method will be responsible for removing items from the stack. As the stack uses the LIFO principle, the last item that we added is the one that is removed. For this reason, we can use the pop method from the JavaScript array class. The pop method is represented as follows:

this.pop = function(){
    return items.pop();
};

With the push and pop methods being the only methods available for adding and removing items from the stack, the LIFO principle will apply to our own Stack class.

Now, let's implement some additional helper methods for our class. If we would like to know what the last item added to our stack was, we can use the peek method. This method will return the item from the top of the stack:

this.peek = function(){
    return items[items.length-1];
};

As we are using an array to store the items internally, we can obtain the last item from an array using length - 1, as follows: for example, in the previous diagram, we have a stack with three items; therefore, the length of the internal array is 3. The last position used in the internal array is 2. As a result, length - 1 (3 - 1) is 2!

The next method is the isEmpty method, which returns true if the stack is empty (no item has been added) and false otherwise:

this.isEmpty = function(){
    return items.length == 0;
};

Using the isEmpty method, we can simply verify whether the length of the internal array is 0.

Similar to the length property of the array class, we can also implement length for our Stack class.
For collections, we usually use the term "size" instead of "length". And again, as we are using an array to store the items internally, we can simply return its length:

this.size = function(){
    return items.length;
};

Finally, we are going to implement the clear method. The clear method simply empties the stack, removing all its elements. The simplest way of implementing this method is as follows:

this.clear = function(){
    items = [];
};

An alternative implementation would be calling the pop method until the stack is empty.

And we are done! Our Stack class is implemented. Just to make our lives easier during the examples, to help us inspect the contents of our stack, let's implement a helper method called print that is going to output the content of the stack on the console:

this.print = function(){
    console.log(items.toString());
};

And now we are really done!

The complete Stack class

Let's take a look at how our Stack class looks after its full implementation:

function Stack() {
    var items = [];

    this.push = function(element){
        items.push(element);
    };

    this.pop = function(){
        return items.pop();
    };

    this.peek = function(){
        return items[items.length-1];
    };

    this.isEmpty = function(){
        return items.length == 0;
    };

    this.size = function(){
        return items.length;
    };

    this.clear = function(){
        items = [];
    };

    this.print = function(){
        console.log(items.toString());
    };
}

Using the Stack class

Before we dive into some examples, we need to learn how to use the Stack class. The first thing we need to do is instantiate the Stack class we just created.
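As an aside before we put the class to use: the closure-based approach above keeps items genuinely private, at the cost of every instance carrying its own copy of each method. A common alternative in pre-ES6 JavaScript is to place the methods on the prototype so all instances share them. The sketch below (not from the original article; the name ProtoStack is illustrative) shows the same API written that way; note that items becomes an ordinary property and is no longer private:

```javascript
// Prototype-based variant of the same Stack API. Methods are shared by
// all instances via the prototype; the trade-off is that `items` is a
// plain property, so callers could modify it directly.
function ProtoStack() {
  this.items = [];
}

ProtoStack.prototype.push = function (element) {
  this.items.push(element);
};

ProtoStack.prototype.pop = function () {
  return this.items.pop();
};

ProtoStack.prototype.peek = function () {
  return this.items[this.items.length - 1];
};

ProtoStack.prototype.isEmpty = function () {
  return this.items.length === 0;
};

ProtoStack.prototype.size = function () {
  return this.items.length;
};

ProtoStack.prototype.clear = function () {
  this.items = [];
};

// Behaves exactly like the closure-based class:
var ps = new ProtoStack();
ps.push(5);
ps.push(8);
console.log(ps.peek()); // outputs 8
console.log(ps.size()); // outputs 2
```

Which form to prefer is a design choice: the closure version favors encapsulation, the prototype version favors shared methods and lower per-instance overhead.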
Next, we can verify whether it is empty (the output is true because we have not added any elements to our stack yet):

var stack = new Stack();
console.log(stack.isEmpty()); //outputs true

Next, let's add some elements to it (let's push the numbers 5 and 8; you can add any element type to the stack):

stack.push(5);
stack.push(8);

If we call the peek method, the output will be the number 8 because it was the last element that was added to the stack:

console.log(stack.peek()); // outputs 8

Let's also add another element:

stack.push(11);
console.log(stack.size()); // outputs 3
console.log(stack.isEmpty()); //outputs false

We added the element 11. If we call the size method, it will give the output as 3, because we have three elements in our stack (5, 8, and 11). Also, if we call the isEmpty method, the output will be false (we have three elements in our stack). Finally, let's add another element:

stack.push(15);

The following diagram shows all the push operations we have executed so far and the current status of our stack:

Next, let's remove two elements from the stack by calling the pop method twice:

stack.pop();
stack.pop();
console.log(stack.size()); // outputs 2
stack.print(); // outputs 5,8

Before we called the pop method twice, our stack had four elements in it. After the execution of the pop method two times, the stack now has only two elements: 5 and 8. The following diagram exemplifies the execution of the pop method:

Decimal to binary

Now that we know how to use the Stack class, let's use it to solve some Computer Science problems. You are probably already aware of the decimal base. However, binary representation is very important in Computer Science, as everything in a computer is represented by binary digits (0 and 1). Without the ability to convert back and forth between decimal and binary numbers, it would be a little bit difficult to communicate with a computer.
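Number-base conversion, which we turn to next, is one classic stack application; bracket matching is another. As an illustration (not from the original article; the names MiniStack and isBalanced are ours), the following sketch uses the same push/pop operations to check that every opening bracket in a string is closed in the right order. A small stack is included inline so the sketch runs on its own:

```javascript
// Minimal stack mirroring the article's class, included so this
// sketch is self-contained.
function MiniStack() {
  var items = [];
  this.push = function (element) { items.push(element); };
  this.pop = function () { return items.pop(); };
  this.isEmpty = function () { return items.length === 0; };
}

// Classic stack application: push every opening bracket; on a closing
// bracket, the most recently opened one must match (LIFO order).
function isBalanced(text) {
  var opens = '([{';
  var pairs = { ')': '(', ']': '[', '}': '{' };
  var stack = new MiniStack();
  for (var i = 0; i < text.length; i++) {
    var ch = text.charAt(i);
    if (opens.indexOf(ch) >= 0) {
      stack.push(ch);
    } else if (pairs[ch]) {
      // A closing bracket with no matching opener, or the wrong
      // opener on top, means the string is unbalanced.
      if (stack.isEmpty() || stack.pop() !== pairs[ch]) {
        return false;
      }
    }
  }
  // Any leftover openers are also unbalanced.
  return stack.isEmpty();
}

console.log(isBalanced('(a[b]{c})')); // true
console.log(isBalanced('(a[b)]'));    // false
```

The LIFO property is exactly what the problem needs: the most recently opened bracket must always be the first one closed.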
To convert a decimal number to a binary representation, we can divide the number by 2 (binary is a base 2 number system) until the division result is 0. As an example, we will convert the number 10 into binary digits. This conversion is one of the first things you learn in college (Computer Science classes). The following is our algorithm:

function divideBy2(decNumber){
    var remStack = new Stack(),
        rem,
        binaryString = '';

    while (decNumber > 0){ //{1}
        rem = Math.floor(decNumber % 2); //{2}
        remStack.push(rem); //{3}
        decNumber = Math.floor(decNumber / 2); //{4}
    }

    while (!remStack.isEmpty()){ //{5}
        binaryString += remStack.pop().toString();
    }

    return binaryString;
}

In this code, while the division result is not zero (line {1}), we get the remainder of the division (mod) and push it to the stack (lines {2} and {3}), and finally, we update the number that will be divided by 2 (line {4}). An important observation: JavaScript has a numeric data type, but it does not distinguish integers from floating points. For this reason, we need to use the Math.floor function to obtain only the integer value from the division operations. And finally, we pop the elements from the stack until it is empty, concatenating the elements that were removed from the stack into a string (line {5}).

We can try the previous algorithm and output its result on the console using the following code:

console.log(divideBy2(233));
console.log(divideBy2(10));
console.log(divideBy2(1000));

We can easily modify the previous algorithm to make it work as a converter from decimal to any base.
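Before generalizing, it is worth noting a quick way to sanity-check a conversion routine like this: JavaScript's built-in Number#toString accepts a radix, so the two results can be compared directly. The sketch below (our illustration, not the article's code; the name toBinary is ours) re-implements the same remainder-stack idea with a plain array standing in for the stack, so it runs on its own:

```javascript
// Same division-by-2 idea as divideBy2, with a plain array used as the
// stack so this sketch is self-contained.
function toBinary(decNumber) {
  var remStack = [];
  var binaryString = '';
  while (decNumber > 0) {
    remStack.push(decNumber % 2);           // remainder of division by 2
    decNumber = Math.floor(decNumber / 2);  // integer quotient for the next round
  }
  while (remStack.length > 0) {
    // Popping reverses the remainders into the correct digit order.
    binaryString += remStack.pop().toString();
  }
  return binaryString;
}

// Cross-check against the built-in radix conversion.
console.log(toBinary(233));                       // 11101001
console.log(toBinary(233) === (233).toString(2)); // true
```

Checking a hand-rolled algorithm against a trusted built-in like this is a cheap way to catch off-by-one and ordering mistakes early.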
Instead of dividing the decimal number by 2, we can pass the desired base as an argument to the method and use it in the divisions, as shown in the following algorithm:

function baseConverter(decNumber, base){
    var remStack = new Stack(),
        rem,
        baseString = '',
        digits = '0123456789ABCDEF'; //{6}

    while (decNumber > 0){
        rem = Math.floor(decNumber % base);
        remStack.push(rem);
        decNumber = Math.floor(decNumber / base);
    }

    while (!remStack.isEmpty()){
        baseString += digits[remStack.pop()]; //{7}
    }

    return baseString;
}

There is one more thing we need to change. In the conversion from decimal to binary, the remainders will be 0 or 1; in the conversion from decimal to octal, the remainders will be from 0 to 7; but in the conversion from decimal to hexadecimal, the remainders can be 0 to 9 plus the letters A to F (values 10 to 15). For this reason, we need to convert these values as well (lines {6} and {7}).

We can use the previous algorithm and output its result on the console as follows:

console.log(baseConverter(100345, 2));
console.log(baseConverter(100345, 8));
console.log(baseConverter(100345, 16));

Summary

In this article, we learned about the stack data structure. We implemented our own class that represents a stack, and we learned how to add and remove elements from it using the push and pop methods. We also covered a very famous example of how to use a stack.

Resources for Article:

Further resources on this subject:

Organizing Backbone Applications - Structure, Optimize, and Deploy [article]
Introduction to Modern OpenGL [article]
Customizing the Backend Editing in TYPO3 Templates [article]

Packt
22 Oct 2014
18 min read

Designing and Building a Horizon View 6.0 Infrastructure

This article is written by Peter von Oven, the author of VMware Horizon View Essentials. In this article, we will start by taking a closer look at the design process. We will then look at the reference architecture and how we start to put together a design, building out the infrastructure for a production deployment.

Proving the technology – from PoC to production

In this section, we are going to discuss how to approach a VDI project. This is a key and very important piece of work that needs to be completed in the very early stages, and it is somewhat different from how you would typically approach an IT project. Our starting point is to focus on the end users rather than the IT department. After all, these are the people that will be using the solution on a daily basis and know what tools they need to get their jobs done. Rather than giving them what you think they need, let's ask them what they actually need and then, within reason, deliver this. It's that old saying of don't try and fit a square peg into a round hole. No matter how hard you try, it's just never going to fit. First and foremost, we need to design the technology around the user requirements rather than building a backend infrastructure only to find that it doesn't deliver what the users require.

Assessment

Once you have built your business case and validated it against your EUC strategy, and there is a requirement for delivering a VDI solution, the next stage is to run an assessment. It's quite fitting that this book is entitled "Essentials", as this stage of the project is exactly that, and is essential for a successful outcome. We need to build up a picture of what the current environment looks like, ranging from what applications are being used to the types of access devices. This goes back to the earlier point about giving the users what they need, and the only way to find that out is to conduct an assessment. By doing this, we are creating a baseline.
Then, as we move into defining the success criteria and proving the technology, we have the baseline as a reference point to demonstrate how we have improved current working practices and delivered on the business case and strategy. There are a number of tools that can be used in the assessment phase to gather the information required, for example, Liquidware Labs Stratusphere FIT or SysTrack from Lakeside Software. Don't forget to actually talk to the users as well, so you are armed with the hard-and-fast facts from an assessment as well as the user's perspective.

Defining the success criteria

The key objective in defining the success criteria is to document what a "good" solution should look like for the project to succeed and become production-ready. We need to clearly define the elements that need to function correctly in order to move from proof of concept to proof of technology, and then into a pilot phase before deploying into production. You need to fully document what these elements are and get the end users or other project stakeholders to sign up to them. It's almost like creating a statement of work with a clearly defined list of tasks.

Another important factor is to ensure that during this phase of the project, the criteria don't start to grow beyond the original scope. By that, we mean other additional things should not get added to the success criteria, or at least not without discussion first. It may well transpire that something key was missed; however, if you have conducted your assessment thoroughly, this shouldn't happen.

Another thing that works well at this stage is to involve the end users. Set up a steering committee or advisory panel by selecting people from different departments to act as sponsors within their area of business. Actively involve them in the testing phases, but get them on board early as well to get their input in shaping the solution. Too many projects fail when an end user tries something that didn't work.
However, the thing that they tried may not actually be a relevant use case or something that is used by the business as a critical line-of-business application, and therefore it shouldn't derail the project. If we have a set of success criteria defined up front that the end users have signed up to, anything outside those criteria is not in scope. If it's not defined in the document, it should be disregarded as not being part of what success should look like.

Proving the technology

Once the previous steps have been discussed and documented, we should be able to build a picture around what's driving the project. We will understand what you are trying to achieve/deliver and, based upon hard-and-fast facts from the assessment phase, be able to work out what success should look like. From there, we can then move into testing some form of the technology, should that be a requirement. There are three key stages within the testing cycle to consider, and it might be the case that you don't need all of them. The three stages we are talking about are as follows:

Proof of concept (PoC)
Proof of technology (PoT)
Pilot

In the next sections, we are briefly going to cover what each of these stages means and why you might or might not need them.

Proof of concept

A proof of concept typically refers to a partial solution, often built on any old hardware kicking about, that involves a relatively small number of users, usually within the confines of the IT department acting in business roles, to establish whether the system satisfies some aspect of the purpose it was designed for. Once proven, one of two things happens. The first is that nothing happens, as it's just the IT department playing with technology and there wasn't a real business driver in the first place. This is usually down to the previous steps not having been defined. In a similar way, by not having any success criteria, it will also fail, as you don't know exactly what you are setting out to prove.
The second outcome is that the project moves into a pilot phase, which we will discuss in a later section. You could consider moving directly into this phase and bypassing the PoC altogether. Maybe a demonstration of the technology would suffice, and using a demo environment over a longer period would show you how the technology works.

Proof of technology

In contrast to the PoC, the objective of a proof of technology is to determine whether or not the proposed solution or technology will integrate into your existing environment and therefore demonstrate compatibility. The objective is to highlight any technical problems specific to your environment, such as how your bespoke systems might integrate. As with the PoC, a PoT is typically run by the IT department and no business users would be involved. A PoT is purely a technical validation exercise.

Pilot

A pilot refers to what is almost a small-scale roll-out of the solution in a production-style environment that targets a limited scope of the intended final solution. The scope may be limited by the number of users who can access the pilot system, the business processes affected, or the business partners involved. The purpose of a pilot is to test, often in a production-like environment, whether the system works as it was designed to, while limiting business exposure and risk. It will also touch real users, so as to gauge feedback on what would ultimately become a live, production solution. This is a critical step in achieving success, as the users are the ones that have to interact with the system on a daily basis, and the reason why you should set up some form of working group to gather their feedback. That also mitigates the risk of the project failing: the solution may deliver everything the IT department could ever wish for, but if it goes live and the first user logs on and reports a bad experience or poor performance, you may as well not have bothered.
The pilot should be carefully scoped, sized, and implemented. We will discuss this in the next section.

The pilot phase

In this section, we are going to discuss the pilot phase in a bit more detail and break it down into four distinct phases. These are important, as the output from the pilot will ultimately shape the design of your production environment. The following diagram shows the workflow we will follow in defining our project:

Phase 1 – pilot design

The pilot infrastructure should be designed on the same hardware platforms that the production solution is going to be deployed on, for example, the same servers and storage. This takes into account any anomalies between platforms and configuration differences that could affect things such as scalability or, more importantly, performance. Even at the pilot stage, the design is absolutely key, and you should make sure you take the production design into account. Why? Basically because many pilot solutions end up going straight into production, and more and more users get added above and beyond those scoped for the pilot. It's great going live with the solution and not having to go back and rebuild it, but when you start to scale by adding more users and applications, you might have some issues due to the pilot sizing. It may sound obvious, but often with a successful pilot, the users just keep on using it and additional users get added. If it's only ever going to be a pilot, that's fine, but keep this in mind and ask the question: if you are planning on taking the pilot straight into production, design it for production. It is always useful to work from a prerequisite document to understand the different elements that need consideration in the design.
Key design elements include:

Hardware sizing (servers – CPU, memory, and consolidation ratios)
Pool design (based on user segmentation)
Storage design (local SSD, SAN, and acceleration technologies)
Image creation (rebuild from scratch and optimize for VDI)
Network design (load balancing and external access)
Antivirus considerations
Application delivery (delivering virtually versus installing in the core image)
User profile management
Floating or dedicated desktop assignments
Persistent or non-persistent desktop builds (linked clone or full clone)

Once you have all this information, you can start to deploy the pilot.

Phase 2 – pilot deployment

In the deployment phase of the pilot, we are going to start building out the infrastructure, deploying the test users, building the OS images, and then start testing.

Phase 3 – pilot test

During the testing phase, the key thing is to work closely with the end users and your sponsors, showing them the solution and how it works, closely monitoring the users, and assessing the solution as it's being used. This allows you to keep in contact with the users and give them the opportunity to continually provide real-time feedback. It also allows you to answer questions and make adjustments and enhancements on the fly, rather than waiting until the end of the project only to be told that it didn't work or that they simply didn't understand something. This then leads us on to the last phase, the review.

Phase 4 – pilot review

This final phase sometimes tends to get forgotten. We have deployed the solution, the users have been testing it, and then it ends there for whatever reason. However, there is one very important last thing to do to enable the customer to move to production. We need to measure the user experience or the IT department's experience against the success criteria we set out at the start of this process.
We need to get customer sign-off and agreement that we have successfully met all the objectives and requirements. If this is not the case, we need to understand the reasons why. Have we missed something in the use case, have the user requirements changed, or is it simply a perception issue? Whatever the case, we need to cycle round the process again. Go back to the use case, understand and reevaluate the user requirements (what it is that is seemingly failing or not behaving as expected), and then tweak the design or make the required changes and get them to test the solution again. We need to continue this process until we get acceptance and sign-off; otherwise, we will not get to the final solution deployment phase. When the project has been signed off after a successful pilot test, there is no reason why you cannot deploy the technology in production. Now that we have talked about how to prove the technology and successfully demonstrated that it delivers against both our business case and user requirements, in the next sections, we are going to start looking at the design for our production environment.

Designing a Horizon 6.0 architecture

We are going to start this section by looking at the VMware reference architecture for Horizon View 6.0 before we go into more detail around the design considerations and best practices, and then the sizing guidelines.

The pod and block reference architecture

VMware has produced a reference architecture model for deploying Horizon View, with the approach being to make it easy to scale the environment by adding set component pieces of infrastructure, known as View blocks. To scale the number of users, you add View blocks up to the maximum configuration of five blocks. This maximum configuration of five View blocks is called a View pod. The important numbers to remember are that each View block supports up to a maximum of 2,000 users, and a View pod is made up of up to five View blocks, therefore supporting a maximum of 10,000 users.
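These block and pod maximums translate into simple capacity arithmetic. The following sketch is not from the original article — the function name and its defaults are our own — but it illustrates how the documented limits (2,000 users per block, five blocks per pod) determine how many blocks and pods a given user count requires:

```python
import math

def view_capacity(users, users_per_block=2000, blocks_per_pod=5):
    """Estimate the number of View blocks and pods needed for a user count,
    using the documented limits of 2,000 users per block and 5 blocks per pod.
    (Illustrative only; real sizing must also consider host and storage design.)"""
    blocks = math.ceil(users / users_per_block)
    pods = math.ceil(blocks / blocks_per_pod)
    return blocks, pods

# A single pod tops out at 5 blocks x 2,000 users = 10,000 users.
print(view_capacity(10000))   # (5, 1)
print(view_capacity(12500))   # (7, 2) -- one user over 10,000 forces a second pod
```

Anything beyond 10,000 users therefore means a second pod, which, as discussed below, is effectively a separate deployment unless the Cloud Pod Architecture is used.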
The View block contains all the infrastructure required to host the virtual desktop machines, so appropriately sized ESXi hosts, a vCenter Server, and the associated networking and storage requirements. We will cover the sizing aspects later on in this article. The following diagram shows an individual View block: Apart from having a View block that supports the virtual desktop machines, there is also a management block for the supporting infrastructure components. The management block contains the management elements of Horizon View, such as the connection servers and security servers. These will also be virtual machines hosted on the vSphere platform but using separate ESXi hosts and vCenter servers from those being used to host the desktops. The following diagram shows a typical View management block: The management block contains the key Horizon View components to support the maximum configuration of 10,000 users or a View pod. In terms of connection servers, the management block consists of a maximum of seven connection servers. This is often written as 5 + 2, which can be misleading, but what it means is you can have five connection servers and two that serve as backups to replace a failed server. Each connection server supports one of the five blocks, with the two spare in reserve in the event of a failure. As we discussed previously, the View Security Servers are paired with one of the connection servers in order to provide external access to the users. In our example diagram, we have drawn three security servers meaning that these servers are configured for external access, while the others serve the internal users only. In this scenario, the View Connection Servers and View Security Servers are deployed as virtual machines, and are therefore controlled and managed by vCenter. The vCenter Server can run on a virtual machine, or you can use the vCenter Virtual Appliance. It can also run on a physical Windows Server, as it's just a Windows application. 
The entire infrastructure is hosted on a vSphere cluster that's separate from the one being used to host the virtual desktop machines. There are a couple of other components that are not shown in the diagram, and those are the databases required for View, such as the events database and the database for View Composer. If we now look at the entire Horizon View pod and block architecture for up to 10,000 users, the architecture design would look something like the following diagram:

One thing to note is that although a pod is limited to 10,000 users, you can deploy more than one pod should you need an environment that exceeds 10,000 users. Bear in mind, though, that the pods do not communicate with each other and will effectively be completely separate deployments. This is potentially a limitation, not only for scalability but more so for disaster recovery, where you need to have two pods across two sites. To address this, Horizon View 6.0 includes a feature that allows you to deploy pods across sites. This is called the Cloud Pod Architecture (CPA), and we will cover this in the next section.

The Cloud Pod Architecture

The Cloud Pod Architecture, also referred to as linked-mode View (LMV) or multidatacenter View (MDCV), allows you to link up to four View pods together across two sites, with a maximum of 20,000 supported users.
There are four key features available by deploying Horizon View using this architecture:

Scalability: This hosts more than 10,000 users on a single site
Multidatacenter support: This supports View across more than one data center
Geo roaming: This supports roaming desktops for users moving across sites
DR: This delivers resilience in the event of a data center failure

Let's take a look at the Cloud Pod Architecture in the following diagram to explain the features and how it builds on the pod and block architecture we discussed previously:

With the Cloud Pod Architecture, user information is replicated globally and the pods are linked using the View interpod API (VIPA)—the setup for which is command-line-based. For scalability, with the Cloud Pod Architecture model, you have the ability to entitle users across pools on both different pods and sites. This means that, if you have already scaled beyond a single pod, you can link the pods together to allow you to go beyond the 10,000-user limit and also administer your users from a single location. The pods can, apart from being located on the same site, also be on two different sites to deliver a multidatacenter configuration running as active/active. This also introduces DR capabilities. In the event of one of the data centers failing or losing connectivity, users will still be able to connect to a virtual desktop machine. Users don't need to worry about which View Connection Server they need to use to connect to their virtual desktop machine. The Cloud Pod Architecture supports a single namespace with access via a global URL. As users can now connect from anywhere, there are some configuration options that you need to consider as to how they access their virtual desktop machine and from where it gets delivered.
There are three options that form part of the global user entitlement feature:

Any: This is delivered from any pod as part of the global entitlement
Site: This is delivered from any pod on the same site the user is connecting from
Local: This is delivered only from the local pod that the user is connected to

It's not just the users that get the global experience; the administrators can also be segregated in this way so that you can deliver delegated management. Administration of pods could be delegated to the local IT teams on a per-region/geo basis, with some operations such as provisioning and patching performed on the local pods, or so that local language support can be delivered. Only global policy is managed globally, typically from an organization's global HQ. Now that we have covered some of the high-level architecture options, you should be able to start looking at your overall design, factoring in locations and the number of users. In the next section, we will start to look at how to size some of these components.

Sizing the infrastructure

In this section, we are going to discuss the sizing of the components previously described in the architecture section. We will start by looking at the management blocks containing the connection servers and security servers, then the servers that host the desktops, before finishing off with the desktops themselves. The management block and the block hosting the virtual desktop machines should run on separate infrastructure (ESXi hosts and vCenter Servers), the reason being the different workload patterns between servers and desktops and the need to avoid performance issues. It's also easier to manage, as you can easily determine which hosts run desktops and which run servers, but more importantly, it's also the way in which the products are licensed.
vSphere Desktop, which comes with Horizon View, only entitles you to run workloads that host and manage the virtual desktop infrastructure.

Summary

In this article, you learned how to design a Horizon 6.0 architecture.

Resources for Article:

Further resources on this subject: Backups in the VMware View Infrastructure [Article] Setting up of Software Infrastructure on the Cloud [Article] Introduction to Veeam® Backup & Replication for VMware [Article]
Creating a JSF composite component
Packt
22 Oct 2014
9 min read
This article by David Salter, author of the book NetBeans IDE 8 Cookbook, explains how to create a JSF composite component in NetBeans. (For more resources related to this topic, see here.) JSF is a rich component-based framework, which provides many components that developers can use to enrich their applications. JSF 2 also allows composite components to be easily created, which can then be inserted into other JSF pages in a similar way to any other JSF components such as buttons and labels. In this article, we'll see how to create a custom component that displays an input label and asks for corresponding input. If the input is not validated by the JSF runtime, we'll show an error message. The component is going to look like this:

The custom component is built up from three different standard JSF components. On the left, we have a <h:outputText /> component that displays the label. Next, we have a <h:inputText /> component. Finally, we have a <h:message /> component. Putting these three components together like this is a very useful pattern when designing input forms within JSF.

Getting ready

To create a JSF composite component, you will need to have a working installation of WildFly that has been configured within NetBeans. We will be using the Enterprise download bundle of NetBeans as this includes all of the tools we need without having to download any additional plugins.

How to do it…

First of all, we need to create a web application and then create a JSF composite component within it. Perform the following steps:

Click on File and then New Project….
Select Java Web from the list of Categories and Web Application from the list of Projects. Click on Next.
Enter the Project Name value as CompositeComp. Click on Next.
Ensure that Add to Enterprise Application is set to <None>, Server is set to WildFly Application Server, Java EE Version is set to Java EE 7 Web, and Context Path is set to /CompositeComp. Click on Next.
Click on the checkbox next to JavaServer Faces as we are using this framework. All of the default JSF configurations are correct, so click on the Finish button to create the project.
Right-click on the CompositeComp project within the Projects explorer and click on New and then Other….
In the New File dialog, select JavaServer Faces from the list of Categories and JSF Composite Component from the list of File Types. Click on Next.
On the New JSF Composite Component dialog, enter the File Name value as inputWithLabel and change the folder to resources/cookbook. Click on Finish to create the custom component.

In JSF, custom components are created as Facelets files that are stored within the resources folder of the web application. Within the resources folder, multiple subfolders can exist, each representing a namespace of a custom component. Within each namespace folder, individual custom components are stored with filenames that match the composite component names. We have just created a composite component within the cookbook namespace called inputWithLabel. Within each composite component file, there are two sections: an interface and an implementation. The interface lists all of the attributes that are required by the composite component, and the implementation provides the XHTML code to represent the component. Let's now define our component by specifying the interface and the implementation. Perform the following steps:

The inputWithLabel.xhtml file should be open for editing. If not, double-click on it within the Projects explorer to open it.
For our composite component, we need two attributes to be passed into the component. We need the text for the label and the expression language to bind the input box to.
Change the interface section of the file to read:

<cc:interface>
   <cc:attribute name="labelValue" />
   <cc:attribute name="editValue" />
</cc:interface>

To render the component, we need to instantiate a <h:outputText /> tag to display the label, a <h:inputText /> tag to receive the input from the user, and a <h:message /> tag to display any errors for the input field. Change the implementation section of the file to read:

<cc:implementation>
   <style>
   .outputText{width: 100px; }
   .inputText{width: 100px; }
   .errorText{width: 200px; color: red; }
   </style>
   <h:panelGrid id="panel" columns="3" columnClasses="outputText, inputText, errorText">
       <h:outputText value="#{cc.attrs.labelValue}" />
       <h:inputText value="#{cc.attrs.editValue}" id="inputText" />
       <h:message for="inputText" />
   </h:panelGrid>
</cc:implementation>

Click on the lightbulb on the left-hand side of the editor window and accept the fix to add the xmlns:h="http://xmlns.jcp.org/jsf/html" namespace declaration to the <html> element. To use the component from a page, the page's <html> element must also declare the composite component's namespace, which by JSF convention is xmlns:cookbook="http://xmlns.jcp.org/jsf/composite/cookbook" for components stored in resources/cookbook. We can now reference the composite component from within the Facelets page. Add the following code inside the <h:body> tag on the page:

<h:form id="inputForm">
   <cookbook:inputWithLabel labelValue="Forename" editValue="#{personController.person.foreName}"/>
   <cookbook:inputWithLabel labelValue="Last Name" editValue="#{personController.person.lastName}"/>
   <h:commandButton type="submit" value="Submit" action="#{personController.submit}"/>
</h:form>

This code instantiates two instances of our inputWithLabel composite control and binds them to personController. We haven't got one of those yet, so let's create one and a class to represent a person. Perform the following steps:

Create a new Java class within the project. Enter Class Name as Person and Package as com.davidsalter.cookbook.compositecomp. Click on Finish.
Add members to the class to represent foreName and lastName:

private String foreName;
private String lastName;

Use the Encapsulate Fields refactoring to generate getters and setters for these members.
To allow error messages to be displayed if the foreName and lastName values are entered incorrectly, we will add some Bean Validation annotations to the attributes of the class. Annotate the foreName member of the class as follows:

@NotNull
@Size(min=1, max=25)
private String foreName;

Annotate the lastName member of the class as follows:

@NotNull
@Size(min=1, max=50)
private String lastName;

Use the Fix Imports tool to add the required imports for the Bean Validation annotations.
Create a new Java class within the project. Enter Class Name as PersonController and Package as com.davidsalter.cookbook.compositecomp. Click on Finish.
We need to make the PersonController class an @Named bean so that it can be referenced via expression language from within JSF pages. Annotate the PersonController class as follows:

@Named
@RequestScoped
public class PersonController {

We need to add a Person instance into PersonController that will be used to transfer data from the JSF page to the named bean. We will also need to add a method onto the bean that will redirect JSF to an output page after the names have been entered. Add the following to the PersonController class:

private Person person = new Person();

public Person getPerson() {
   return person;
}

public void setPerson(Person person) {
   this.person = person;
}

public String submit() {
   return "results.xhtml";
}

The final task before completing our application is to add a results page so we can see what input the user entered. This output page will simply display the values of foreName and lastName that have been entered.
Create a new JSF page called results that uses the Facelets syntax. Change the <h:body> tag of this page to read:

<h:body>
   You Entered:
   <h:outputText value="#{personController.person.foreName}" />&nbsp;
   <h:outputText value="#{personController.person.lastName}" />
</h:body>

The application is now complete.
Deploy and run the application by right-clicking on the project within the Projects explorer and selecting Run. Note that two instances of the composite component have been created and displayed within the browser. Click on the Submit button without entering any information and note how the error messages are displayed. Enter some valid information and click on Submit, and note how the information entered is echoed back on a second page.

How it works…

Creating composite components was a new feature added in JSF 2. Creating JSF components was a very tedious job in JSF 1.x, and the designers of JSF 2 thought that the majority of custom components created in JSF could probably be built by adding different existing components together. As can be seen, we've added together three different existing JSF components and made a very useful composite component. It's useful to distinguish between custom components and composite components. Custom components are entirely new components that did not exist before. They are created entirely in Java code and built into frameworks such as PrimeFaces and RichFaces. Composite components are built from existing components, and their graphical view is designed in the .xhtml files.

There's more...

When creating composite components, it may be necessary to specify attributes. The default option is that the attributes are not mandatory when creating a custom component. They can, however, be made mandatory by adding the required="true" attribute to their definition, as follows:

<cc:attribute name="labelValue" required="true" />

If an attribute is specified as required but is not present, a JSF error will be produced, as follows:

/index.xhtml @11,88 <cookbook:inputWithLabel> The following attribute(s) are required, but no values have been supplied for them: labelValue.

Sometimes, it can be useful to specify a default value for an attribute.
This is achieved by adding the default="…" attribute to their definition: <cc:attribute name="labelValue" default="Please enter a value" /> Summary In this article, we have learned to create a JSF composite component using NetBeans. Resources for Article: Further resources on this subject: Creating a Lazarus Component [article] Top Geany features you need to know about [article] Getting to know NetBeans [article]
Introduction to S4 Classes
Packt
22 Oct 2014
36 min read
In this article by Kelly Black, the author of the book R Object-oriented Programming, we will examine S4 classes. The approach associated with S3 classes is more flexible, while the approach associated with S4 classes is more formal and structured. This article is roughly divided into four parts:

Class definition: This section gives you an overview of how a class is defined and how the data (slots) associated with the class are specified
Class methods: This section gives you an overview of how methods that are associated with a class are defined
Inheritance: This section gives you an overview of how child classes that build on the definition of a parent class can be defined
Miscellaneous commands: This section explains four commands that can be used to explore a given object or class

(For more resources related to this topic, see here.)

Introducing the Ant class

We will introduce the idea of S4 classes, which is a more formal way to implement classes in R. One of the odd quirks of S4 classes is that you first define the class along with its data, and then you define the methods separately. As a result of this separation in the way a class is defined, we will first discuss the general idea of how to define a class and its data. We will then discuss how to add a method to an existing class. Next, we will discuss how inheritance is implemented. Finally, we will provide a few notes about other options that do not fit nicely into the categories mentioned earlier. The approach associated with an S4 class is less flexible and requires a bit more forethought in terms of how a class is defined. We will take a different approach and create a complete class from the beginning. In this case, we will build on an idea proposed by Cole and Cheshire. The authors proposed a cellular automata simulation to mimic how ants move within a colony. As part of a simulation, we will assume that we need an Ant class.
We will depart from the paper and assume that the ants are not homogeneous. We will then assume that there are male (drones) and female ants, and the females can be either workers or soldiers. We will need an ant base class, which is discussed in the first two sections of this article as a means to demonstrate how to create an S4 class. In the third section, we will define a hierarchy of classes based on the original Ant class. This hierarchy includes male and female classes. The worker class will then inherit from the female class, and the soldier class will inherit from the worker class. Defining an S4 class We will define the base Ant class called Ant. The class is represented in the following figure. The class is used to represent the fundamental aspects that we need to track for an ant, and we focus on creating the class and data. The methods are constructed in a separate step and are examined in the next section. A class is created using the setClass command. When creating the class, we specify the data in a character vector using the slots argument. The slots argument is a vector of character objects and represents the names of the data elements. These elements are often referred to as the slots within the class. Some of the arguments that we will discuss here are optional, but it is a good practice to use them. In particular, we will specify a set of default values (the prototype) and a function to check whether the data is consistent (a validity function). Also, it is a good practice to keep all of the steps necessary to create a class within the same file. To that end, we assume that you will not be entering the commands from the command line. They are all found within a single file, so the formatting of the examples will reflect the lack of the R workspace markers. The first step is to define the class using the setClass command. 
This command defines a new class by name, and it also returns a generator that can be used to construct an object for the new class. The first argument is the name of the class, followed by the data to be included in the class. We will also include the default initial values and the definition of the function used to ensure that the data is consistent. The validity function can be set separately using the setValidity command. The data types for the slots are character values that match the names of the R data types which will be returned by the class command:

# Define the base Ant class.
Ant <- setClass(
    # Set the name of the class
    "Ant",

    # Name the data types (slots) that the class will track
    slots = c(
        Length="numeric",           # the length (size) of this ant.
        Position="numeric",         # the position of this ant.
                                    # (a 3 vector!)
        pA="numeric",               # Probability that an ant will
                                    # transition from active to
                                    # inactive.
        pI="numeric",               # Probability that an ant will
                                    # transition from inactive to
                                    # active.
        ActivityLevel="numeric"     # The ant's current activity
                                    # level.
        ),

    # Set the default values for the slots. (optional)
    prototype=list(
        Length=4.0,
        Position=c(0.0,0.0,0.0),
        pA=0.05,
        pI=0.1,
        ActivityLevel=0.5
        ),

    # Make a function that can test to see if the data is consistent.
    # (optional)
    validity=function(object)
    {
        # Check to see if the activity level and length is
        # non-negative.
        # See the discussion on the @ notation in the text below.
        if(object@ActivityLevel<0.0) {
            return("Error: The activity level is negative")
        } else if (object@Length<0.0) {
            return("Error: The length is negative")
        }
        return(TRUE)
    }
    )

With this definition, there are two ways to create an Ant object: one is using the new command and the other is using the Ant generator, which is created after the successful execution of the setClass command. Note that in the following examples, the default values can be overridden when a new object is created:

> ant1 <- new("Ant")
> ant1
An object of class "Ant"
Slot "Length":
[1] 4
Slot "Position":
[1] 0 0 0
Slot "pA":
[1] 0.05
Slot "pI":
[1] 0.1
Slot "ActivityLevel":
[1] 0.5

We can override the default values when creating a new object:

> ant2 <- new("Ant",Length=4.5)
> ant2
An object of class "Ant"
Slot "Length":
[1] 4.5
Slot "Position":
[1] 0 0 0
Slot "pA":
[1] 0.05
Slot "pI":
[1] 0.1
Slot "ActivityLevel":
[1] 0.5

The object can also be created using the generator that is defined when creating the class using the setClass command:

> ant3 <- Ant(Length=5.0,Position=c(3.0,2.0,1.0))
> ant3
An object of class "Ant"
Slot "Length":
[1] 5
Slot "Position":
[1] 3 2 1
Slot "pA":
[1] 0.05
Slot "pI":
[1] 0.1
Slot "ActivityLevel":
[1] 0.5
> class(ant3)
[1] "Ant"
attr(,"package")
[1] ".GlobalEnv"
> getClass(ant3)
An object of class "Ant"
Slot "Length":
[1] 5
Slot "Position":
[1] 3 2 1
Slot "pA":
[1] 0.05
Slot "pI":
[1] 0.1
Slot "ActivityLevel":
[1] 0.5

When the object is created and a validity function is defined, the validity function will determine whether the given initial values are consistent:

> ant4 <- Ant(Length=-1.0,Position=c(3.0,2.0,1.0))
Error in validObject(.Object) :
invalid class “Ant” object: Error: The length is negative
> ant4
Error: object 'ant4' not found

In the last steps, during the attempted creation of ant4, an error message is displayed. The new variable, ant4, was not created.
If you wish to test whether the object was created, you must be careful to ensure that the variable name does not already exist prior to the attempted creation of the new object. Also, the validity function is only executed when a request to create a new object is made; if you change the values of the data later, the validity function is not called.

Before we move on to discuss methods, we need to see how to access the data within an object. The syntax is different from that of other data structures: we use @ to indicate that we want to access an element from within the object. This can be used either to get a copy of the value of an element or to set it:

> adomAnt <- Ant(Length=5.0,Position=c(-1.0,2.0,1.0))
> adomAnt@Length
[1] 5
> adomAnt@Position
[1] -1 2 1
> adomAnt@ActivityLevel = -5.0
> adomAnt@ActivityLevel
[1] -5

Note that in the preceding example, we set a value for the activity level that is not allowed according to the validity function. Since it was set after the object was created, no check was performed: the validity function is only executed during the creation of the object or when the validObject function is called.

One final note: it is generally bad form to work directly with an element within an object, and a better practice is to create methods that get or set an individual element within an object. It is best to be careful about the encapsulation of an object's slots. The R environment does not recognize the idea of private versus public data, and the onus is on the programmer to maintain discipline with respect to this important principle.

Defining methods for an S4 class

When a new class is defined, only the data elements are declared; the methods associated with the class are defined in a separate step. Methods are implemented in a manner similar to the one used for S3 classes: a function is defined, and the way the function reacts depends on the classes of its arguments.
If a method is used to change one of the data components of an object, then it must return a copy of the object, just as we saw with S3 classes. The creation of new methods is discussed in two steps: we will first discuss how to define a method for a class where the method does not yet exist, and we will then discuss some of the predefined methods that are available and how to extend them to accommodate a new class.

Defining new methods

The first step in creating a new method is to reserve the name. Some functions are included by default, such as the initialize, print, or show commands, and we will later see how to extend them. To reserve a new name, you must first use the setGeneric command. At the very least, you need to give this command the name of the function as a character string. As in the previous section, we will use more options in an attempt to practice safe programming.

The methods to be created are shown in the preceding figure. There are a number of methods, but we will only define four here. All of the methods are accessors; they are used either to get or to set the values of the data components. We will only define the methods associated with the length slot in this text, and you can see the rest of the code in the examples available on the website. The other methods closely follow the code used for the length slot. There are two methods to set the activity level, and that code is examined separately to provide an example of how a method can be overloaded.

First, we will define the methods to get and set the length. We start with the method to get the length, as it is a little more straightforward. The first step is to tell R that a new function will be defined, and the name is reserved using the setGeneric command.
The method that is called when an Ant object is passed to the command is defined using the setMethod command:

setGeneric(name="GetLength",
           def=function(antie)
           {
               standardGeneric("GetLength")
           }
           )

setMethod(f="GetLength",
          signature="Ant",
          definition=function(antie)
          {
              return(antie@Length)
          }
          )

Now that the GetLength function is defined, it can be used to get the length component of an Ant object:

> ant2 <- new("Ant",Length=4.5)
> GetLength(ant2)
[1] 4.5

The method to set the length is similar, but there is one difference: the method must return a copy of the object passed to it, and it requires an additional argument:

setGeneric(name="SetLength",
           def=function(antie,newLength)
           {
               standardGeneric("SetLength")
           }
           )

setMethod(f="SetLength",
          signature="Ant",
          definition=function(antie,newLength)
          {
              if(newLength>0.0) {
                  antie@Length = newLength
              } else {
                  warning("Error - invalid length passed")
              }
              return(antie)
          }
          )

When setting the length, the new object must be set using the object that is passed back from the function:

> ant2 <- new("Ant",Length=4.5)
> ant2@Length
[1] 4.5
> ant2 <- SetLength(ant2,6.25)
> ant2@Length
[1] 6.25

Polymorphism

The definition of S4 classes allows methods to be overloaded. That is, multiple functions with the same name can be defined, and the function that is executed is determined by the types of the arguments. We will now examine this idea in the context of defining the methods used to set the activity level in the Ant class. Two or more functions can have the same name, but the types of the arguments passed to them differ. There are two methods to set the activity level.
One takes a floating point number and sets the activity level based on the value passed to it. The other takes a logical value and sets the activity level to zero if the argument is FALSE; otherwise, it sets it to a default value. The idea is to use the signature option in the setMethod command. It is set to a vector of class names, and the order of the class names is used to determine which function should be called for a given set of arguments. An important thing to note, though, is that the prototype defined in the setGeneric command defines the names of the arguments, and the argument names in both methods must be exactly the same and in the same order:

setGeneric(name="SetActivityLevel",
           def=function(antie,activity)
           {
               standardGeneric("SetActivityLevel")
           }
           )

setMethod(f="SetActivityLevel",
          signature=c("Ant","logical"),
          definition=function(antie,activity)
          {
              if(activity) {
                  antie@ActivityLevel = 0.1
              } else {
                  antie@ActivityLevel = 0.0
              }
              return(antie)
          }
          )

setMethod(f="SetActivityLevel",
          signature=c("Ant","numeric"),
          definition=function(antie,activity)
          {
              if(activity>=0.0) {
                  antie@ActivityLevel = activity
              } else {
                  warning("The activity level cannot be negative")
              }
              return(antie)
          }
          )

Once the two methods are defined, R will use the class names of the arguments to determine which function to call in a given context:

> ant2 <- SetActivityLevel(ant2,0.1)
> ant2@ActivityLevel
[1] 0.1
> ant2 <- SetActivityLevel(ant2,FALSE)
> ant2@ActivityLevel
[1] 0

There are two additional data types recognized by the signature option: ANY and missing. These can be used to match any data type or a missing argument, respectively.
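As a quick sketch of how the missing type can be used, the following variant of SetActivityLevel (not part of the original class design) matches calls that omit the second argument entirely and resets the activity level to a default. The guarded definitions at the top are only scaffolding so that the snippet can run on its own; in the session built up in this article, the Ant class and the SetActivityLevel generic already exist:

```r
library(methods)

# Scaffolding for a standalone run; these are skipped when the class
# and generic from the article are already defined in the session.
if(!isClass("Ant")) {
    setClass("Ant", representation(ActivityLevel="numeric"),
             prototype=list(ActivityLevel=0.5))
}
if(!isGeneric("SetActivityLevel")) {
    setGeneric(name="SetActivityLevel",
               def=function(antie,activity)
               {
                   standardGeneric("SetActivityLevel")
               }
               )
}

# A method whose second signature entry is "missing": it is dispatched
# when SetActivityLevel is called with no activity argument at all.
setMethod(f="SetActivityLevel",
          signature=c("Ant","missing"),
          definition=function(antie,activity)
          {
              antie@ActivityLevel = 0.5   # reset to the default level
              return(antie)
          }
          )
```

With this in place, a call such as ant2 <- SetActivityLevel(ant2) resets the level to 0.5, while the numeric and logical methods defined previously still handle calls that supply a second argument. A signature of c("Ant","ANY") would instead match any second argument not caught by a more specific method.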
Also note that we have left out the use of ellipses (...) as an argument in the preceding examples. The ... argument must be the last argument and is used to indicate that any remaining parameters are passed along as they appear in the original call to the function. Ellipses allow overloaded functions to be used in more flexible ways than indicated here. More information can be found using the help(dotsMethods) command.

Extending the existing methods

There are a number of generic functions defined in a basic R session, and we will examine how to extend an existing function. For example, the show command is a generic function whose behavior depends on the class name of the object passed to it. Since the function name is already reserved, the setGeneric command is not used to reserve it.

The show command is a standard example. The command takes an object and converts it to a character value to be displayed; it defines how other commands print out and express an object. In the following example, a new class called Coordinate is defined; it keeps track of two values, x and y, for a coordinate, and we will add one method to set the values of the coordinate:

# Define the base coordinates class.
Coordinate <- setClass(
    # Set the name of the class
    "Coordinate",

    # Name the data types (slots) that the class will track
    slots = c(
        x="numeric",  # the x position
        y="numeric"   # the y position
        ),

    # Set the default values for the slots. (optional)
    prototype=list(
        x=0.0,
        y=0.0
        ),

    # Make a function that can test to see if the data
    # is consistent.
    # (optional)
    # This is not called if you have an initialize
    # function defined!
    validity=function(object)
    {
        # Check to see if the coordinate is outside of a circle of
        # radius 100
        print("Checking the validity of the point")
        if(object@x*object@x+object@y*object@y>100.0*100.0) {
            return(paste("Error: The point is too far ",
                         "away from the origin."))
        }
        return(TRUE)
    }
    )

# Add a method to set the value of a coordinate
setGeneric(name="SetPoint",
           def=function(coord,x,y)
           {
               standardGeneric("SetPoint")
           }
           )

setMethod(f="SetPoint",
          signature="Coordinate",
          def=function(coord,x,y)
          {
              print("Setting the point")
              coord@x = x
              coord@y = y
              return(coord)
          }
          )

We will now extend the show method so that it can properly react to a Coordinate object. As the name is already reserved, we do not have to use the setGeneric command but can simply define the method:

setMethod(f="show",
          signature="Coordinate",
          def=function(object)
          {
              cat("The coordinate is X: ",object@x," Y: ",object@y,"\n")
          }
          )

As noted previously, the signature option must match the original definition of the function that you wish to extend. You can use the getMethod('show') command to examine the signature for the function. With the new method in place, the show command is used to convert a Coordinate object to a string when it is printed:

> point <- Coordinate(x=1,y=5)
[1] "Checking the validity of the point"
> print(point)
The coordinate is X: 1 Y: 5
> point
The coordinate is X: 1 Y: 5

Another important predefined method is the initialize command. If an initialize method is created for a class, then it is called when a new object is created; that is, you can define an initialize function to act as a constructor. If an initialize function is defined for a class, the validator is not called.
You have to manually call the validator using the validObject command. Also note that the prototype for the initialize command requires the first argument to be named .Object, and default values are given for the remaining arguments in case a new object is created without specifying any values for the slots:

setMethod(f="initialize",
          signature="Coordinate",
          def=function(.Object,x=0.0,y=0.0)
          {
              print("Checking the point")
              .Object = SetPoint(.Object,x,y)
              validObject(.Object) # you must explicitly call
                                   # the inspector
              return(.Object)
          }
          )

Now, when you create a new object, the new initialize function is called immediately:

> point <- Coordinate(x=2,y=3)
[1] "Checking the point"
[1] "Setting the point"
[1] "Checking the validity of the point"
> point
The coordinate is X: 2 Y: 3

Using the initialize and validity functions together can result in surprising code paths. This is especially true when inheriting from one class and calling the initialize function of a parent class from the child class. It is important to test your code to ensure that it executes in the order that you expect. Personally, I try to use either the validator or the constructor, but not both.

Inheritance

The Ant class discussed in the first section of this article provided an example of how to define a class and then define the methods associated with it. We will now extend the class by creating new classes that inherit from the base class. The original Ant class is shown in the preceding figure, and now we will propose four classes that inherit from the base class. Two new classes that inherit from Ant are the Male and Female classes. The Worker class inherits from the Female class, while the Soldier class inherits from the Worker class. The relationships are shown in the following figure.
The code for all of the new classes is included in our example code available on our website, but we will only focus on two of the new classes in the text to keep the discussion focused.

Relationships between the classes that inherit from the base Ant class

When a new class is created, it can inherit from an existing class by setting the contains parameter. This can be set to a vector of classes for multiple inheritance. However, we will focus on single inheritance here to avoid discussing the complications associated with determining how R finds a method when there are collisions. Assuming that the Ant base class given in the first section has already been defined in the current session, the child classes can be defined. The details of the two classes, FemaleAnt and WorkerAnt, are discussed here.

First, the FemaleAnt class is defined. It adds a new slot, Food, and inherits from the Ant class. Before defining the FemaleAnt class, we add a caveat about the Ant class: the base Ant class should have been a virtual class, as we would not ordinarily create an object of the Ant class. We did not make it a virtual class in order to simplify our introduction. We are wiser now and wish to demonstrate how to define a virtual class, so the FemaleAnt class will be a virtual class to demonstrate the idea. We make it a virtual class by including the "VIRTUAL" character string in the contains parameter, and it will not be possible to create an object of the FemaleAnt class:

# Define the female ant class.
FemaleAnt <- setClass(
    # Set the name of the class
    "FemaleAnt",

    # Name the data types (slots) that the class will track
    slots = c(
        Food="numeric"   # The number of food units carried
        ),

    # Set the default values for the slots. (optional)
    prototype=list(
        Food=0
        ),

    # Make a function that can test to see if the data is consistent.
    # (optional)
    # This is not called if you have an initialize function defined!
    validity=function(object)
    {
        print("Validity: FemaleAnt")
        # Check to see if the number of food units is non-negative.
        if(object@Food<0) {
            return("Error: The number of food units is negative")
        }
        return(TRUE)
    },

    # This class inherits from the Ant class
    contains=c("Ant","VIRTUAL")
    )

Now, we will define a WorkerAnt class that inherits from the FemaleAnt class:

# Define the worker ant class.
WorkerAnt <- setClass(
    # Set the name of the class
    "WorkerAnt",

    # Name the data types (slots) that the class will track
    slots = c(
        Foraging="logical",  # Whether or not the ant is actively
                             # looking for food
        Alarm="logical"      # Whether or not the ant is actively
                             # announcing an alarm.
        ),

    # Set the default values for the slots. (optional)
    prototype=list(
        Foraging = FALSE,
        Alarm    = FALSE
        ),

    # Make a function that can test to see if the data is consistent.
    # (optional)
    # This is not called if you have an initialize function defined!
    validity=function(object)
    {
        print("Validity: WorkerAnt")
        return(TRUE)
    },

    # This class inherits from the FemaleAnt class
    contains="FemaleAnt"
    )

When a new WorkerAnt object is created, it inherits the slots defined in its parent classes:

> worker <- WorkerAnt(Position=c(-1,3,5),Length=2.5)
> worker
An object of class "WorkerAnt"
Slot "Foraging":
[1] FALSE

Slot "Alarm":
[1] FALSE

Slot "Food":
[1] 0

Slot "Length":
[1] 2.5

Slot "Position":
[1] -1 3 5

Slot "pA":
[1] 0.05

Slot "pI":
[1] 0.1

Slot "ActivityLevel":
[1] 0.5

> worker <- SetLength(worker,3.5)
> GetLength(worker)
[1] 3.5

We have not defined all of the relevant methods in the preceding examples. The code is available in our set of examples, and we will not discuss most of it, to keep this discussion focused. We will examine the initialize method, though.
The reason to do so is to explore the callNextMethod command, which is used to request that R search for and execute a method of the same name that is a member of a parent class. We chose the initialize method because a common task is to build a chain of constructors that initialize the data associated with the class tied to each constructor. We have not yet created any of the initialize methods, so we start with the base Ant class:

setMethod(f="initialize",
          signature="Ant",
          def=function(.Object,Length=4,Position=c(0.0,0.0,0.0))
          {
              print("Ant initialize")
              .Object = SetLength(.Object,Length)
              .Object = SetPosition(.Object,Position)
              #validObject(.Object) # you must explicitly call the inspector
              return(.Object)
          }
          )

The constructor takes three arguments: the object itself (.Object), the length, and the position of the ant; default values are given in case none are provided when a new object is created. The validObject command is commented out. You should try uncommenting the line and creating new objects to see whether the validator can in turn call the initialize method. Another important feature is that the initialize method returns a copy of the object.

The initialize command is next created for the FemaleAnt class, and the arguments to the initialize command should be respected when the request to callNextMethod is made:

setMethod(f="initialize",
          signature="FemaleAnt",
          def=function(.Object,Length=4,Position=c(0.0,0.0,0.0))
          {
              print("FemaleAnt initialize")
              .Object <- callNextMethod(.Object,Length,Position)
              #validObject(.Object) # you must explicitly call
                                    # the inspector
              return(.Object)
          }
          )

The callNextMethod command is used to call the initialize method associated with the Ant class.
The arguments are arranged to match the definition of the Ant class's method, and it returns a new copy of the current object. Finally, the initialize function for the WorkerAnt class is created. It also makes use of callNextMethod to ensure that the method of the same name associated with the parent class is called:

setMethod(f="initialize",
          signature="WorkerAnt",
          def=function(.Object,Length=4,Position=c(0.0,0.0,0.0))
          {
              print("WorkerAnt initialize")
              .Object <- callNextMethod(.Object,Length,Position)
              #validObject(.Object) # you must explicitly call the inspector
              return(.Object)
          }
          )

Now, when a new object of the WorkerAnt class is created, the initialize method associated with the WorkerAnt class is called, and the associated method for each parent class is called in turn:

> worker <- WorkerAnt(Position=c(-1,3,5),Length=2.5)
[1] "WorkerAnt initialize"
[1] "FemaleAnt initialize"
[1] "Ant initialize"

Miscellaneous notes

In the previous sections, we discussed how to create a new class as well as how to define a hierarchy of classes. We will now discuss four commands that are helpful when working with classes: the slotNames, getSlots, getClass, and slot commands. Each command is briefly discussed in turn, and it is assumed that the Ant, FemaleAnt, and WorkerAnt classes given in the previous section are defined in the current workspace.

The first command, slotNames, is used to list the data components of an object of some class. It returns the name of each component in a vector of characters:

> worker <- WorkerAnt(Position=c(1,2,3),Length=5.6)
> slotNames(worker)
[1] "Foraging"      "Alarm"         "Food"          "Length"
[5] "Position"      "pA"            "pI"            "ActivityLevel"

The getSlots command is similar to the slotNames command.
The difference is that the argument is a character variable giving the name of the class you want to investigate:

> getSlots("WorkerAnt")
     Foraging         Alarm          Food        Length      Position
    "logical"     "logical"     "numeric"     "numeric"     "numeric"
           pA            pI ActivityLevel
    "numeric"     "numeric"     "numeric"

The getClass command has two forms. If the argument is an object, the command will print out the details of the object. If the argument is a character string, then it will print out the details of the class whose name matches the argument:

> worker <- WorkerAnt(Position=c(1,2,3),Length=5.6)
> getClass(worker)
An object of class "WorkerAnt"
Slot "Foraging":
[1] FALSE

Slot "Alarm":
[1] FALSE

Slot "Food":
[1] 0

Slot "Length":
[1] 5.6

Slot "Position":
[1] 1 2 3

Slot "pA":
[1] 0.05

Slot "pI":
[1] 0.1

Slot "ActivityLevel":
[1] 0.5

> getClass("WorkerAnt")
Class "WorkerAnt" [in ".GlobalEnv"]

Slots:

Name:   Foraging     Alarm      Food    Length  Position
Class:   logical   logical   numeric   numeric   numeric

Name:         pA        pI ActivityLevel
Class:   numeric   numeric       numeric

Extends:
Class "FemaleAnt", directly
Class "Ant", by class "FemaleAnt", distance 2

Known Subclasses: "SoldierAnt"

Finally, we will examine the slot command, which is used to retrieve the value of a slot from a given object based on the name of the slot:

> worker <- WorkerAnt(Position=c(1,2,3),Length=5.6)
> slot(worker,"Position")
[1] 1 2 3

Summary

We introduced the idea of an S4 class and provided several examples. An S4 class is constructed in at least two stages: the first stage is to define the name of the class and the associated data components. The methods associated with the class are then defined in a separate step.
In addition to defining a class and its methods, the idea of inheritance was explored. A partial example was given in this article; it built on the base class defined in the first section. Additionally, the way to call associated methods in parent classes was explored, and the example made use of the constructor (the initialize method) to demonstrate how to build a chain of constructors. Finally, four useful commands were explained, which offer different ways to get information about a class or about an object of a given class.

For more information, you can refer to Mobile Cellular Automata Models of Ant Behavior: Movement Activity of Leptothorax allardycei, Blaine J. Cole and David Cheshire, The American Naturalist.

Resources for Article:

Further resources on this subject:
Using R for Statistics, Research, and Graphics [Article]
Learning Data Analytics with R and Hadoop [Article]
First steps with R [Article]

Packt
22 Oct 2014
6 min read

Understanding the context of BDD

In this article by Sujoy Acharya, author of Mockito Essentials, you will learn about BDD concepts and BDD examples. You will also learn how BDD can help you minimize project failure risks. (For more resources related to this topic, see here.)

This section of the article deals with software development strategies, their drawbacks, and how to conquer the shortcomings of the traditional approaches. The following strategies are applied to deliver software products to customers:

Top-down or waterfall approach
Bottom-up approach

We'll cover these two approaches in the following sections. The following key people/roles/stakeholders are involved in software development:

Customers: They explore the concept and identify the high-level goal of the system, such as automating the expense claim process
Analysts: They analyze the requirements, work with the customer to understand the system, and build the system requirement specifications
Designers/architects: They visualize the system, design the baseline architecture, identify the components and interactions, and handle the nonfunctional requirements, such as scalability and availability
Developers: They construct the system from the design and specification documents
Testers: They design test cases and verify the implementation
Operational folks: They install the software in the customer's environment
Maintenance team: They handle bugs and monitor the system's health
Managers: They act as facilitators and keep track of the progress and schedule

Exploring the top-down strategy

In the top-down strategy, analysts analyze the requirements and hand over the use cases / functional specifications to the designers and architects for designing the system. The architects/designers design the baseline architecture, identify the system components and interactions, and then pass the design over to the developers for implementation.
The testers then verify the implementation (and might report bugs to be fixed), and finally, the software is deployed to the customer's environment. The following diagram depicts the top-down flow from requirement engineering to maintenance:

The biggest drawback of this approach is the cost of rework. For instance, if the development team finds that a requirement is not feasible, they consult the design or analysis team. The architects or analysts then look at the issue and rework the analysis or design. This approach has a cascading effect, and the cost of rework is very high. Customers rarely know what they want before they see the system in action, and building everything all at once is a quick way to cause your requirements to change. Even setting aside the difference in the cost of requirement changes, you'll have fewer changes if you write the requirements later in the process, when you have a partially working product that the customer can see and everybody has more information about how the product will work.

Exploring the bottom-up strategy

In the bottom-up strategy, the requirement is broken into small chunks, and each chunk is designed, developed, and unit tested separately before the chunks are finally integrated. The individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which in turn are linked until a complete top-level system is formed. Each subsystem is developed in isolation from the other subsystems, so integration is very important in the bottom-up approach. If integration fails, the cost and effort of building the subsystems is jeopardized. Suppose you are building a healthcare system with three subsystems, namely, patient management, receivable management, and the claims module. If the patient module cannot talk to the claims module, the system fails, and the effort of building the patient management and claims management subsystems is simply wasted.
Agile development methodology would suggest building the functionality feature by feature across subsystems, that is, building very basic patient management and claims management subsystems to make the functionality work initially, and then adding more to both simultaneously to support each new feature that is required.

Finding the gaps

In real-life projects, the following is the typical breakdown of feature usage:

60 percent of features are never used
30 percent of features are occasionally used
10 percent of features are frequently used

However, in the top-down approach, the analyst pays attention and brainstorms to create system requirements for all the features, so time is spent building a system where 90 percent of the features are either never or only occasionally used. Instead, we can identify the high-value features and start building them, rather than paying attention to the low-priority features. In the bottom-up approach, subsystems are built in isolation from each other, and this causes integration problems.

If we prioritize the requirements and start with the highest-priority feature (design the feature, build it, unit test it, integrate it, and then show a demo to the stakeholders, such as customers, analysts, and product managers), we can easily identify the gaps and reduce the risk of rework. We can then pick the next feature and follow the same steps (designing, coding, testing, and getting feedback from the customers), and finally integrate the feature with the existing system. This reduces the integration issues of the bottom-up approach. The following figure represents this approach, where each feature is analyzed, designed, coded, tested, and integrated separately. An example of a requirement could be that login failure error messages appear red and in bold, while a feature could be that incorrect logins are rejected.
Typically, a feature should be a little larger than a single requirement: a useful, standalone bit of functionality rather than one specific requirement for that functionality. Another problem associated with software development is communication; each stakeholder has a different vocabulary, and this hinders common understanding. The following are the best practices to minimize software delivery risks:

Focus on high-value, frequently used features.
Build a common vocabulary for the stakeholders: a domain-specific language that anybody can understand.
No more big, fat, upfront design. Evolve the design with the requirements, iteratively.
Code to satisfy the current requirement. Don't code for a future requirement, which may or may not be delivered. Follow the YAGNI (You Aren't Going to Need It) principle.
Build a test safety net for each requirement. Integrate the code with the system and rerun the regression tests.
Get feedback from the stakeholders and make immediate changes.

BDD suggests the preceding best practices.

Summary

This article covered the BDD concepts and BDD examples.

Resources for Article:

Further resources on this subject:
Important features of Mockito [article]
Progressive Mockito [article]
Getting Started with Mockito [article]

Packt
22 Oct 2014
20 min read

Important Aspect of AngularJS UI Development

In this article by Matthias Nehlsen, co-author of the book AngularJS UI Development, we learn about managing client-side dependencies with Bower. He also explains how to build an application, run Protractor from Grunt, and manage source code with Git, as well as how to build AngularUI Utils and integrate AngularUI-Utils into our project. (For more resources related to this topic, see here.)

Managing client-side dependencies with Bower

We downloaded AngularJS and placed the angular.js file in our directory structure manually. This was not so bad for a single file, but this process will very soon become tedious and error-prone when dealing with multiple dependencies. Luckily, there is a great tool for managing client-side dependencies, and it can be found at http://bower.io. Bower allows us to record which dependencies we need for an application. Then, after downloading the application for the first time, we can simply run bower install, and it will download all the libraries and assets specified in the configuration file for us.

First, we will need to install Bower (potentially with sudo):

# npm install -g bower

Now, let's try it out by running the following command:

# bower install angular

You will notice an error message when running the previous command on a machine that does not have Git installed.

Okay, this will download AngularJS and place the files in the app/bower_components/ folder. However, our remaining sources are in the src/ folder, so let's store the Bower files there as well. Create a file named .bowerrc in the project root, with these three lines in it:

{
  "directory": "src/bower"
}

This will tell Bower that we want the managed dependencies inside the src/bower/ folder. Now, remove the app/ folder and run the earlier bower install command one more time. You should see the AngularJS files in the src/bower/ folder now.
Now, we said we wanted to record the dependencies in a configuration file so that we can later run bower install after downloading/checking out the application. Why can't we just store all the dependencies in our version control system? Of course, we can, but this would bloat the repository a lot. Instead, it is better to focus on the artifacts that we created ourselves and pull in the dependencies when we check out the application for the first time. We will create the configuration file now. We could do this by hand or let Bower do it for us. Let's do the latter and then examine the results:

# bower init

This will start a dialogue that guides us through the initial Bower project setup process. Give the application a name, version number, and description as you desire. The main file stays empty for now. In the module type selection, just press Enter on the first option. Whenever you see something in square brackets, for example, [MIT], this is the default value that will be used when you simply press Enter. When we confirm that the currently installed dependencies should be set as dependencies, AngularJS will automatically appear as a project dependency in the configuration file if you have followed the previous instructions. Finally, let's set the package as private. There is no need to have it accidentally appear in the Bower registry. Once we are done, the bower.json file should look roughly as follows (note that bower.json is plain JSON, so all keys and strings must be double-quoted):

{
  "name": "Hello World",
  "version": "0.0.0",
  "homepage": "https://github.com/matthiasn/AngularUI-Code",
  "authors": ["Matthias Nehlsen <mn@nehlsen-edv.de>"],
  "description": "Fancy starter application",
  "license": "MIT",
  "private": true,
  "ignore": [
    "**/.*",
    "node_modules",
    "bower_components",
    "src/bower",
    "test",
    "tests"
  ],
  "dependencies": {
    "angular": "~1.2.22"
  }
}

Now that we have dependency management in place, we can use AngularJS from here and delete the manually downloaded version inside the js/vendor/ folder.
Also, edit index.html to use the file from the bower/ directory. Find the following line of code:

   <script src="js/vendor/angular.js"></script>

Replace it with this:

   <script src="bower/angular/angular.js"></script>

Now, we have a package manager for client-side dependencies in place. We will use more of this in the next step.

Building the application

Preparing an application for production environments can be a tedious task. For example, it is highly recommended that you create a single file that contains all the JavaScript needed by the application instead of multiple small files, as this can dramatically reduce page load time, particularly on slow mobile connections with high latency. It is not feasible to do this by hand though, and it is definitely no fun. This is where a build system really shines. We are going to use Grunt for this. A single command in the Terminal will run the tests and then put the necessary files (with a single, larger JavaScript file) for the application in the dist/ folder. Many more tasks can be automated with Grunt, such as minifying the JavaScript, automating tasks by watching folders, and running JsHint (http://www.jshint.com), but we can only cover a fairly basic setup here. Let's get started. We need to install Grunt first (possibly with sudo):

# npm install -g grunt-cli

Then, we create a file named package.json in the root of our application folder with the following content:

{
  "name": "my-hello-world",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "~0.4.5",
    "grunt-contrib-concat": "~0.5.0",
    "grunt-contrib-copy": "~0.5.0",
    "grunt-targethtml": "~0.2.6",
    "grunt-karma": "~0.8.3",
    "karma-jasmine": "~0.1.5",
    "karma-firefox-launcher": "~0.1.3",
    "karma-chrome-launcher": "~0.1.4"
  }
}

This file defines the npm modules that our application depends on. These will be installed automatically by running npm install inside the root of our application folder.
Next, we define which tasks Grunt should automate by creating Gruntfile.js, also in the root of our application folder, with the following content: module.exports = function(grunt) { grunt.initConfig({    pkg: grunt.file.readJSON('package.json'),    concat: {      options: {        separator: ';'      },      dist: {        src: ['src/js/vendor/*.js','src/js/*.js'],        dest: 'dist/js/<%= pkg.name %>.js'      }    },    copy: {      main: {        src: 'src/css/main.css',        dest: 'dist/css/main.css',      },    },    targethtml: {      dist: {        files: {          'dist/index.html': 'src/index.html'        }      }    },    karma: {      unit: {        configFile: 'conf/karma.conf.js',        singleRun: true      }    } }); grunt.loadNpmTasks('grunt-contrib-concat'); grunt.loadNpmTasks('grunt-contrib-copy'); grunt.loadNpmTasks('grunt-targethtml'); grunt.loadNpmTasks('grunt-karma'); grunt.registerTask('dist', ['karma', 'concat', 'targethtml', 'copy']); }; This file contains different sections that are of interest. In the first part, the configuration object is created, which defines the options for the individual modules. For example, the concat section defines that a semicolon should be used as a separator when it merges all JavaScript files into a single file inside the dist/js folder, with the name of the application. It is important that angular.js come before the application code inside this file, which is guaranteed by the order inside the src array. Files in the vendor subfolder are processed first with this order. The copy task configuration is straightforward; here, we only copy a single CSS file into the dist/css folder. We will do more interesting things with CSS later when talking about CSS frameworks. The targethtml task processes the HTML so that it only loads the one concatenated JavaScript file. 
For this to work, we need to modify index.html, as follows: <!DOCTYPE html> <head>    <meta charset="utf-8">    <title>Angular UI Template</title>    <link rel="stylesheet" href="css/main.css"> </head> <body data-ng-app="myApp">    <div data-ng-controller="helloWorldCtrl">      <div hello-world name="name"></div>    </div>    <!--(if target dev)><!-->    <script src="js/vendor/angular.js"></script>    <script src="js/app.js"></script>    <script src="js/controllers.js"></script>    <script src="js/directives.js"></script>      <!--<!(endif)-->    <!--(if target dist)><!-->    <script src="js/my-hello-world.js"></script>    <!--<!(endif)--> </body> </html> This, together with the configuration, tells the targethtml task to only leave the section for the dist task inside the HTML file, effectively removing the section that will load individual files. One might be tempted to think that it will not make much of a difference if one or multiple files need to be retrieved when the page is loaded. After all, the simple concatenation step does not reduce the overall size of what needs to be loaded. However, particularly on mobile networks, it makes a huge difference because of the latency of the network. When it takes 300 ms to get a single response, it soon becomes noticeable whether one or ten files need to be loaded. This is still true even when you get the maximum speed in 3G networks. LTE significantly reduces latency, so the difference is not quite as noticeable. The improvements with LTE only occur in ideal conditions, so it is best not to count on them. The karma section does nothing more than tell the karma task where to find the previously defined configuration file and that we want a single test run for now. Next, we tell Grunt to load the modules for which we have created the configuration, and then we define the dist task, consisting of all the previously described tasks. 
All that remains to be done is to run grunt dist in the command line when we want to test and build the application. The complete AngularJS web application can then be found in the dist/ folder. Running Protractor from Grunt Let's also run Protractor from our grunt task. First, we need to install it as follows: # npm install grunt-protractor-runner --save-dev This will not only install the grunt-protractor-runner module, but also add it as a dependency to package.json so that when you, for example, check out your application from your version control system (covered next) on a new computer and you want to install all your project's dependencies, you can simply run: # npm install If you follow along using the companion source code instead of typing the source code yourself, you will need to run npm install again in the last step's folder. Next, edit Gruntfile.js so that from the karma section, it looks as follows:    karma: {      unit: {        configFile: 'conf/karma.conf.js',        singleRun: true      }    },    protractor: {      e2e: {        options: {         configFile: 'conf/protractor.conf.js'        }      }    } }); grunt.loadNpmTasks('grunt-contrib-concat'); grunt.loadNpmTasks('grunt-contrib-copy'); grunt.loadNpmTasks('grunt-targethtml'); grunt.loadNpmTasks('grunt-karma'); grunt.loadNpmTasks('grunt-protractor-runner'); grunt.registerTask('dist', ['karma', 'protractor', 'concat', 'targethtml', 'copy']); }; Now, Protractor will also run every single time we call grunt dist to build our application. The build process will be stopped when either the karma or the protractor step reports an error, keeping us from ever finding code in the dist folder that fails tests. Note that we will need to have both the webdriver-manager and http-server modules running in separate windows for the grunt dist task to work. 
As a little refresher, these were started as follows:

# webdriver-manager start
# http-server -a localhost -p 8000

Both can also be managed by Grunt, but that makes the configuration more complex and it would also mean that the task runs longer because of startup times. These can also be part of a complex configuration that watches folders and runs and spawns all the required tasks automatically. Explore the Grunt documentation further for tailoring an environment specific to your exact needs. Now that you know how the build system works in general, you may already want to explore more advanced features. The project's website is a good place to start (http://gruntjs.com).

Managing the source code with Git

You are probably using Git already. If not, you really should. Git is a distributed Version Control System (VCS). I cannot imagine working without it, neither in a team nor when working on a project myself. We don't have the space to cover Git for team development here; this topic can easily fill an entire book. However, if you are working in a team that uses Git, you will probably know how to use it already. What we can do is go through the basics for a single developer. First, you need to install Git if you do not have it on your system yet.

OS X

On a Mac, again the easiest way to do this is using Homebrew (http://brew.sh). Run the following command in the Terminal after installing Homebrew:

# brew install git

Windows

On Windows, the easiest way to install Git is to run the Windows installer from http://git-scm.com/downloads.

Linux (Ubuntu)

On Ubuntu, run the following command in the shell:

# sudo apt-get install git

Let's initialize a fresh repository in the current directory:

git init

Then, we create a hidden file named .gitignore, for now with only the following content:

node_modules

This tells Git to ignore the hundreds of files and directories in the node_modules folder.
These don't need to be stored in our version control system because the modules can be restored by running npm install in the root folder of the application instead, as all the dependencies are defined in the package.json file. Next, we add all files in the current directory (and in all subdirectories):

git add .

Next, we freeze the file system in its current state:

git commit -m "initial commit"

Now, we can edit any file, try things out, and accidentally break things without worry because we can always come back to anything that was committed into the VCS. This adds incredible peace of mind when you are playing around with the code. When you issue the git status command, you will notice that we are on a branch called master. A project can have multiple branches at the same time; these are really useful when working on a new feature. We should always keep master as the latest stable version; additional features (or bug fixes) should always be worked upon in a separate branch. Let's create a new feature branch called additional-feature:

git branch additional-feature

Once it is created, we need to check out the branch:

git checkout additional-feature

Now, when the new code is ready to be committed, the process is the same as above:

git add .
git commit -m "additional feature added"

We should commit early and often; this habit will make it much easier to undo previous changes when things go wrong. Now, when everything is working in the new branch (all the tests pass), we can go back into the master branch:

git checkout master

Then, we can merge the changes back into the master branch:

git merge additional-feature

Being able to freely change between branches, for example, makes it very easy to go back to the master branch from whatever you are working on and do a quick bug fix (in a specialized branch, ideally) without having to think about what you just broke with the current changes in the new feature branch.
Please don't forget to commit before you switch the branch though. You can merge the bug fix in this example back into the master, go back to the feature branch you were working on, and even pull those changes that were just done in the master branch into the branch you are working on. For this, when you are inside the feature branch, merge the changes: git merge master If these changes were in different areas of your files, this should run without conflicts. If they were in the same lines, you will need to manually resolve the conflicts. Using Git is really useful to manage source code. However, it is in no way limited to code files (or text files, for that matter). Any file can be placed under the source control. For example, this book was written with the heavy usage of Git, for any file involved. This is extremely useful when you are trying to go back to the previous versions of any file. Building AngularUI Utils With what we learned about package managing, let's build AngularUI-Utils ourselves before we start using it. We could just download the JavaScript file(s), but it will be more gratifying to do this ourselves. Learning how to use Grunt will also be very helpful in any larger project later on. For this, first of all, either clone or fork the repository or just download the zip file from https://github.com/angular-ui/ui-utils. For simplicity, I suggest that you download the zip file; you can find the link on the right-hand side of the GitHub project page. Once we have unpacked the zip file, we first need to install the dependencies. On your command line inside the project folder, run the following commands: $ npm install $ bower install This will install the necessary files needed for the build process. 
Let's first check whether all the tests are passing after the previous commands have run through: $ karma start --browsers=Chrome test/karma.conf.js --single-run=true Alternatively, you can also simply run: $ grunt The tests are part of the default task specified in gruntFile.js. Take a moment and familiarize yourself with the file by trying to find where the default task is specified. Note that one subtask is the karma:unit task. Try to locate this task further down in the file; it specifies which Karma configuration file to load. If all the tests pass, as they should, we can then build the suite using the following command: $ grunt build This will run the following tasks specified in gruntFile.js: The concat:tmp task concatenates the modules into a temporary JavaScript file, except modules/utils.js. Take a look at the configuration for this task; the easiest way is to search for concat within gruntFile.js. The concat:modules task concatenates the resulting temporary file from step one into the final JavaScript library file, which you can then find in the bower_components/angular-ui-docs/build directory. The configuration for the concat:modules task should be right below the previous one. Here, the difference is that there is no absolute file and path; instead, the name is resolved so that common parts, such as the repository name and such, are not repeated within the configuration file. This follows the DRY (don't repeat yourself) principle and makes the configuration file easier to maintain. The clean:rm_tmp task removes the temporary file, created previously. The uglify task finally creates a minified version of the JavaScript file for use in production because of the smaller file size. I highly recommend that you spend some time reading and following through the gruntFile.js file and the tasks specified therein. 
It is not strictly necessary that you follow along in this article, as simply building the suite (or even downloading it from the project website) would suffice, but knowing more about the Grunt build system will always be helpful for our own projects. Imagine this: you find an issue in some project and add it to the list of known issues on the GitHub project. Someone picks it up immediately and fixes it. Now, do you want to wait for someone (maybe an automated build system) to decide when it is a good time to publish a version that includes the said fix? I wouldn't; I'd much rather be able to build it myself. You never know when the next release will happen.

Integrating AngularUI-Utils into our project

In this step, we will take the ui-utils.js file we built in the previous section and use it in a sample project. The UI-Utils suite consists of a large and growing number of individual components. We will not be able to cover all of them here, but the ones we do cover should give you a good idea about what is available. Now, let's do the following edits:

Copy the ui-utils.js file from the ui-utils/bower_components/angular-ui-docs/build/ folder to the src/js/vendor/ folder of the current project.

Open the package.json file and change the name of the application as you wish, for example:

{
  "name": "fun-with-ui-utils",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "~0.4.1",
    "grunt-contrib-concat": "~0.3.0",
    "grunt-contrib-copy": "~0.4.1",
    "grunt-targethtml": "~0.2.6",
    "grunt-karma": "~0.6.2"
  }
}

Edit src/index.html by inserting a script tag right below the script tag for the angular.js file, and change the name of our concatenated project JavaScript file, as the name change above will result in a different filename.
The body of the index.html file now looks as follows:

<body ng-app="myApp">
  <div ng-controller="helloWorldCtrl">
    <h1 hello-world name="name" id="greeting"></h1>
  </div>
  <!--(if target dev)><!-->
  <script src="bower/angular/angular.js"></script>
  <script src="js/vendor/ui-utils.js"></script>
  <script src="js/app.js"></script>
  <script src="js/controllers.js"></script>
  <script src="js/directives.js"></script>
  <!--<!(endif)-->
  <!--(if target dist)><!-->
  <script src="js/fun-with-ui-utils.js"></script>
  <!--<!(endif)-->
</body>

Run grunt dist to see if our tests are still passing and if the project gets built without problems.

Dist folder

You will notice that the my-hello-world.js file is still in the dist/js/ folder, despite not being used any longer. You can safely remove it. You could also remove the entire dist folder and run the build again:

$ grunt dist

This will recreate the folder with only the required files. Deleting the folder before recreating it could become a part of the dist task by adding a clean task that runs first. Check out gruntFile.js of the UI-Utils project if you want to know how this is done.

Grunt task concat

Note that all JavaScript files get concatenated into one file during the concat task that runs in our project during the grunt dist task. These files need to be in the correct order, as the browser will read the file from beginning to end and it will complain when, for example, something references the AngularJS namespace without that being loaded already. So, while it might be tempting to use wildcards as we did so far for simplicity, we shall name individual files in the correct order. This might seem tedious at first, but once you are in the habit of doing it for each file the moment you create or add it, it will only take a few seconds for each file and will keep you from scratching your head later. Let's fix this right away.
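One way to add such a clean step is with the grunt-contrib-clean module. The module is real, but the exact configuration below is a sketch based on the Gruntfile shown earlier in this article, not code from the book:

```js
// Hypothetical addition inside grunt.initConfig({ ... }):
clean: {
  dist: ['dist']   // delete the entire dist/ folder before rebuilding
},

// Then, alongside the other loadNpmTasks calls, load the module
// and make clean the first step of the dist task:
grunt.loadNpmTasks('grunt-contrib-clean');
grunt.registerTask('dist', ['clean', 'karma', 'concat', 'targethtml', 'copy']);
```

With this in place, every grunt dist run starts from an empty dist/ folder, so stale files such as my-hello-world.js can no longer linger there.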
Find the source property of the concat task in Gruntfile.js of our project:

       src: ['src/js/vendor/*.js','src/js/*.js'],

Now, replace it with the following:

       src: ['src/bower/angular/angular.js',
             'src/js/vendor/ui-utils.js',
             'src/js/app.js',
             'src/js/controllers.js',
             'src/js/directives.js'],

We also need to edit the app module so that it loads UI-Utils. For this, edit app.js as follows:

'use strict';
angular.module('myApp', ['myApp.controllers', 'myApp.directives', 'ui.utils'])

With these changes in place, we are in pretty good shape to try out the different components in the UI-Utils suite. We will use a single project for all the different components; this will save us time by not having to set up separate projects for every single one of them.

Summary

In this article, we learned about managing client-side dependencies with Bower. We also learned about building an application, running Protractor from Grunt, and managing the source code with Git. We then learned about building AngularUI Utils and integrating AngularUI-Utils into our project.

Resources for Article: Further resources on this subject: AngularJS Project [article] AngularJS [article] Working with Live Data and AngularJS [article]

Media Queries with Less

Packt
21 Oct 2014
9 min read
In this article by Alex Libby, author of Learning Less.js, we'll see how Less can make creating media queries a cinch; we will cover the following topics:

- How media queries work
- What's wrong with CSS?
- Creating a simple example

(For more resources related to this topic, see here.)

Introducing media queries

If you've ever spent time creating content for sites, particularly for display on a mobile platform, then you might have come across media queries. For those of you who are new to the concept, media queries are a means of tailoring the content that is displayed on screen when the viewport is resized to a smaller size. Historically, websites were always built at a static size—with more and more people viewing content on smartphones and tablets, this meant viewing them became harder, as scrolling around a page can be a tiresome process! Thankfully, this became less of an issue with the advent of media queries—they help us decide what should or should not be displayed when viewing content on a particular device. Almost all modern browsers offer native support for media queries—the only exception being IE Version 8 or below, where they are not supported natively. Media queries always begin with @media and consist of two parts:

- The first part, only screen, determines the media type where a rule should apply—in this case, it will only apply the rule if we're viewing content on screen; content viewed when printed can easily be different.
- The second part, or media feature, (min-width: 530px) and (max-width: 949px), means the rule will only apply between a screen size set at a minimum of 530px and a maximum of 949px. This will rule out any smartphones and will apply to larger tablets, laptops, or PCs.

There are literally dozens of combinations of media queries to suit a variety of needs—for some good examples, visit http://cssmediaqueries.com/overview.html, where you can see an extensive list, along with an indication of whether each query is supported in the browser you normally use.
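Putting those two parts together, a complete query of this form looks like the following; the breakpoint values come from the text, but the rule body is purely illustrative:

```css
@media only screen and (min-width: 530px) and (max-width: 949px) {
  /* These rules apply only on screens between 530px and 949px wide */
  .sidebar {
    width: 35%;
  }
}
```

Any selectors outside the @media block continue to apply at every size; only the rules inside the braces are switched on and off by the viewport width.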
Media queries are perfect for dynamically adjusting your site to work in multiple browsers—indeed, they are an essential part of a responsive web design. While browsers support media queries, there are some limitations we need to consider; let's take a look at these now.

The limitations of CSS

If we spend any time working with media queries, there are some limitations we need to consider; these apply equally whether we are writing Less or plain CSS:

- Not every browser supports media features uniformly; to see the differences, visit http://cssmediaqueries.com/overview.html using different browsers.
- Current thinking is that a range of breakpoints has to be provided; this can result in a lot of duplication and a constant battle to keep up with numerous different screen sizes!
- The @media keyword is not supported in IE8 or below; you will need to use JavaScript or jQuery to achieve the same result, or a library such as Modernizr to provide a graceful fallback option.
- Writing media queries will tie your design to a specific display size; this increases the risk of duplication as you might want the same element to appear in multiple breakpoints, but have to write individual rules to cover each breakpoint.

Breakpoints are points where your design will break if it is resized larger or smaller than a particular set of given dimensions. The traditional thinking is that we have to provide different style rules for different breakpoints within our style sheets. While this is valid, ironically it is something we should not follow! The reason for this is the potential proliferation of breakpoint rules that you might need to add, just to manage a site. With care, planning, and a design-based breakpoints mindset, we can often get away with fewer rules. There is only one breakpoint given, but it works in a range of sizes without the need for more breakpoints. The key to the process is to start small, then increase the size of your display.
As soon as it breaks your design (this is where your first breakpoint is), add a query to fix it, and then keep doing so until you get to your maximum size. Okay, so we've seen what media queries are; let's change tack and look at what you need to consider when working with clients, before getting down to writing the queries in code.

Creating a simple example

The best way to see how media queries work is in the form of a simple demo. In this instance, we have a simple set of requirements, in terms of what should be displayed at each size:

- We need to cater for four different sizes of content
- The small version must be shown to the authors as plain text e-mail links, with no decoration
- For medium-sized screens, we will add an icon before the link
- On large screens, we will add an e-mail address after the e-mail links
- On extra-large screens, we will combine the medium and large breakpoints together, so both icons and e-mail addresses are displayed

In all instances, we will have a simple container in which there will be some dummy text and a list of editors. The media queries we create will control the appearance of the editor list, depending on the window size of the browser being used to display the content. Next, add the following code to a new document.
We'll go through it section by section, starting with the variables created for our media queries:

@small: ~"(max-width: 699px) and (min-width: 520px)";
@medium: ~"(max-width: 1000px) and (min-width: 700px)";
@large: ~"(min-width: 1001px)";
@xlarge: ~"(min-width: 1151px)";

Next comes some basic styles to define margins, font sizes, and styles:

* { margin: 0; padding: 0; }
body { font: 14px Georgia, serif; }
h3 { margin: 0 0 8px 0; }
p { margin: 0 25px }

We need to set sizes for each area within our demo, so go ahead and add the following styles:

#fluid-wrap {
  width: 70%;
  margin: 60px auto;
  padding: 20px;
  background: #eee;
  overflow: hidden;
}

#main-content {
  width: 65%;
  float: right;
}

#sidebar {
  width: 35%;
  float: left;
  ul { list-style: none; }
  ul li a {
    color: #900;
    text-decoration: none;
    padding: 3px 0;
    display: block;
  }
}

Now that the basic styles are set, we can add our media queries—beginning with the query catering for small screens, where we simply display an e-mail logo:

@media @small {
  #sidebar ul li a {
    padding-left: 21px;
    background: url(../img/email.png) left center no-repeat;
  }
}

The medium query comes next; here, we add the word Email before the e-mail address instead:

@media @medium {
  #sidebar ul li a:before {
    content: "Email: ";
    font-style: italic;
    color: #666;
  }
}

In the large media query, we switch to showing the name first, followed by the e-mail (the latter extracted from the data-email attribute):

@media @large {
  #sidebar ul li a:after {
    content: " (" attr(data-email) ")";
    font-size: 11px;
    font-style: italic;
    color: #666;
  }
}

We finish with the extra-large query, where we use the e-mail address format shown in the large media query, but add an e-mail logo to it:

@media @xlarge {
  #sidebar ul li a {
    padding-left: 21px;
    background: url(../img/email.png) left center no-repeat;
  }
}

Save the file as simple.less. Now that our files are prepared, let's preview the results in a browser.
For this, I recommend that you use Responsive Design View within Firefox (activated by pressing Ctrl + Shift + M). Once activated, resize the view to 416 x 735; here we can see that only the name is displayed as an e-mail link: Increasing the size to 544 x 735 adds an e-mail logo, while still keeping the same name/e-mail format as before: If we increase it further to 716 x 735, the e-mail logo changes to the word Email, as seen in the following screenshot: Let's increase the size even further to 735 x 1029; the format changes again, to a name/e-mail link, followed by an e-mail address in parentheses: In our final change, increase the size to 735 x 1182. Here, we can see the previous style being used, but with the addition of an e-mail logo: These screenshots illustrate perfectly how you can resize your screen and still maintain a suitable layout for each device you decide to support; let's take a moment to consider how the code works. The normal accepted practice for developers is to work on the basis of "mobile first", or create the smallest view so it is perfect, then increase the size of the screen and adjust the content until the maximum size is reached. This works perfectly well for new sites, but the principle might have to be reversed if a mobile view is being retrofitted to an existing site. In our instance, we've produced the content for a full-size screen first. From a Less perspective, there is nothing here that isn't new—we've used nesting for the #sidebar div, but otherwise the rest of this part of the code is standard CSS. The magic happens in two parts—immediately at the top of the file, we've set a number of Less variables, which encapsulate the media definition strings we use in the queries. Here, we've created four definitions, ranging from @small (for devices between 520px to 699px), right through to @xlarge for widths of 1151px or more. 
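Because the variables hold escaped strings (the ~"..." syntax), Less substitutes them verbatim into the @media rule at compile time. For example, the @small query from this demo should compile to roughly the following plain CSS:

```css
@media (max-width: 699px) and (min-width: 520px) {
  #sidebar ul li a {
    padding-left: 21px;
    background: url(../img/email.png) left center no-repeat;
  }
}
```

This is why the technique works in any browser that supports media queries: by the time the style sheet reaches the browser, no trace of the Less variables remains.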
We then take each of the variables and use them within each query as appropriate, for example, the @small query is set as shown in the following code: @media @small { #sidebar ul li a { padding-left: 21px; background: url(../img/email.png) left center no-repeat; } } In the preceding code, we have standard CSS style rules to display an e-mail logo before the name/e-mail link. Each of the other queries follows exactly the same principle; they will each compile as valid CSS rules when running through Less. Summary Media queries have rapidly become a de facto part of responsive web design. We started our journey through media queries with a brief introduction, with a review of some of the limitations that we must work around and considerations that need to be considered when working with clients. We then covered how to create a simple media query. Resources for Article: Further resources on this subject: Creating Blog Content in WordPress [Article] Customizing WordPress Settings for SEO [Article] Introduction to a WordPress application's frontend [Article]

Introduction to TypeScript

Packt
20 Oct 2014
16 min read
One of the primary benefits of compiled languages is that they provide a cleaner syntax for the developer to work with before the code is eventually converted to machine code. TypeScript is able to bring this advantage to JavaScript development by wrapping several different patterns into language constructs that allow us to write better code. Every explicit type annotation that is provided is simply syntactic sugar that will be removed during compilation, but not before its constraints are analyzed and any errors are caught. In this article by Christopher Nance, the author of TypeScript Essentials, we will explore this type system in depth. We will also discuss the different language structures that TypeScript introduces, and look at how these structures are emitted by the compiler into plain JavaScript. This article will contain a detailed look at each of these concepts:

(For more resources related to this topic, see here.)

Types
Functions
Interfaces
Classes

Types

These type annotations put a specific set of constraints on the variables being created. These constraints allow the compiler and development tools to better assist in the proper use of the object. This includes a list of functions, variables, and properties available on the object. If a variable is created and no type is provided for it, TypeScript will attempt to infer the type from the context in which it is used. For instance, in the following code, we do not explicitly declare the variable hello as a string; however, since it is created with an initial value, TypeScript is able to infer that it should always be treated as a string:

var hello = "Hello There";

The ability of TypeScript to do this contextual typing provides development tools with the ability to enhance the development experience in a variety of ways. The type information allows our IDE to warn us of potential errors in our code, or provide intelligent code completion and suggestions.
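The inference described above can be tried in a couple of lines (an illustrative sketch; the values are arbitrary):

```typescript
// No annotations: the compiler infers string and number from the initializers.
var greeting = "Hello There";   // inferred as string
var total = 42;                 // inferred as number

// Because greeting is known to be a string, string members are available
// (and calling a number method on it would be a compile-time error):
var shout = greeting.toUpperCase();

console.log(shout);      // HELLO THERE
console.log(total + 1);  // 43
```

Note that the checks happen entirely at compile time; the emitted JavaScript is identical to what you would have written by hand.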
As you can see from the following screenshot, Visual Studio is able to provide a list of methods and properties associated with string objects, as well as their type information. When an object's type is not given and cannot be inferred from its initialization, it will be treated as the Any type. The Any type is the base type for all other types in TypeScript. It can represent any JavaScript value, and the minimum amount of type checking is performed on objects of type Any. Every other type that exists in TypeScript falls into one of three categories: primitive types, object types, or type parameters. TypeScript's primitive types closely mirror those of JavaScript. The TypeScript primitive types are as follows:

Number: var myNum: number = 2;
Boolean: var myBool: boolean = true;
String: var myString: string = "Hello";
Void: function(): void { var x = 2; }
Null: if (x != null) { alert(x); }
Undefined: if (x != undefined) { alert(x); }

All of these types correspond directly to JavaScript's primitive types, except for Void. The Void type is meant to represent the absence of a value: a function that returns no value has a return type of void. Object types are the most common types you will see in TypeScript, and they are made up of references to classes, interfaces, and anonymous object types. Object types are made up of a complex set of members. These members fall into one of four categories: properties, call signatures, constructor signatures, or index signatures. Type parameters are used when referencing generic types or calling generic functions. Type parameters are used to keep code generic enough to be used on a multitude of objects while limiting those objects to a specific set of constraints. An early example of generics that we can cover is arrays. Arrays exist just like they do in JavaScript, and they have an extra set of type constraints placed upon them.
The array object itself has certain type constraints and methods that come from being an object of the Array type; the second piece of information that comes from the array declaration is the type of the objects contained in the array. There are two ways to explicitly type an array; otherwise, the contextual typing system will attempt to infer the type information:

var array1: string[] = [];
var array2: Array<string> = [];

Both of these examples are completely legal ways of declaring an array. They both generate the same JavaScript output and they both provide the same type information. The first example is a shorthand type literal using the [ and ] characters to create arrays. The resulting JavaScript for each of these arrays is shown as follows:

var array1 = [];
var array2 = [];

Despite all of the type annotations and compile-time checking, TypeScript compiles to plain JavaScript and therefore adds absolutely no overhead to the runtime speed of your applications. All of the type annotations are removed from the final code, providing us with both a much richer development experience and a clean finished product.

Functions

If you are at all familiar with JavaScript you will be very familiar with the concept of functions. TypeScript has added type annotations to the parameter list as well as the return type. Due to the new constraints being placed on the parameter list, the concept of function overloads was also included in the language specification. TypeScript also takes advantage of JavaScript's arguments object and provides syntax for rest parameters. Let's take a look at a function declaration in TypeScript:

function add(x: number, y: number): number {
    return x + y;
}

As you can see, we have created a function called add. It takes two parameters that are both of the type number, one of the primitive types, and it returns a number. This function is useful in its current form, but it is a little limited in overall functionality.
What if we want to add a third number to the first two? Then we have to call our function multiple times. TypeScript provides a way to declare optional parameters for functions. So now we can modify our function to take a third parameter, z, that will get added to the first two numbers, as shown in the following code:

function add(x: number, y: number, z?: number) {
    if (z !== undefined) {
        return x + y + z;
    }
    return x + y;
}

As you can see, we have a third named parameter now, but this one is followed by ?. This tells the compiler that this parameter is not required for the function to be called. Optional parameters tell the compiler not to generate an error if the parameter is not provided when the function is called. In JavaScript, this compile-time checking is not performed, meaning an exception could occur at runtime because each missing parameter will have a value of undefined. It is the responsibility of the developer to write code that verifies a value exists before attempting to use it. So now we can add three numbers together, and we haven't broken any of our previous code that relied on the add method only taking two parameters. This has added a little bit more functionality, but I think it would be nice to extend this code to operate on multiple types. We know that strings can be added together just the same as numbers can, so why not use the same method? In its current form, though, passing strings to the add function will result in compilation errors. We will modify the function's definition to take not only numbers but strings as well, as shown in the following code:

function add(x: string, y: string): string;
function add(x: number, y: number): number;
function add(x: any, y: any): any {
    return x + y;
}

As you can see, we now have two declarations of the add function: one for strings, one for numbers, and then we have the final implementation using the any type.
The signature of the actual function implementation is not included in the function's type definition, though. Attempting to call our add method with anything other than a number or string will fail at compile time; however, the overloads have no effect on the generated JavaScript. All of the type annotations are stripped out, as well as the overloads, and all we are left with is a very simple JavaScript method:

function add(x, y) {
    return x + y;
}

Great, so now we have a multipurpose add function that can take two values and combine them together for either strings or numbers. This still feels a little limited in overall functionality, though. What if we wanted to add an indeterminate number of values together? We would have to call our add method over and over again until we eventually had only one value. Thankfully, TypeScript includes rest parameters, which are essentially an unbounded list of optional parameters. The following code shows how to modify our add functions to include a rest parameter:

function add(arg1: string, ...args: string[]): string;
function add(arg1: number, ...args: number[]): number;
function add(arg1: any, ...args: any[]): any {
    var total = arg1;
    for (var i = 0; i < args.length; i++) {
        total += args[i];
    }
    return total;
}

A rest parameter can only be the final parameter in a function's declaration. The TypeScript compiler recognizes the syntax of this final parameter and generates an extra bit of JavaScript that builds a shifted array from the JavaScript arguments object that is available to code inside of a function.
The resulting JavaScript code shows the loop that the compiler has added to create the array that represents our indeterminate list of parameters:

function add(arg1) {
    var args = [];
    for (var _i = 0; _i < (arguments.length - 1); _i++) {
        args[_i] = arguments[_i + 1];
    }
    var total = arg1;
    for (var i = 0; i < args.length; i++) {
        total += args[i];
    }
    return total;
}

Now adding numbers and strings together is very simple and is completely type-safe. If you attempt to mix the different parameter types, a compile error will occur. The first two of the following statements are legal calls to our add function; however, the third is not, because the objects being passed in are not of the same type:

alert(add("Hello ", "World!"));
alert(add(3, 5, 9, 120, 42));
//Error
alert(add(3, "World!"));

We are still very early into our exploration of TypeScript, but the benefits are already very apparent. There are still a few features of functions that we haven't covered yet, but we need to learn more about the language first. Next, we will discuss the interface construct and the benefits it provides at absolutely no cost.

Interfaces

Interfaces are a key piece of creating large-scale software applications. They are a way of representing complex types for any object. Despite their usefulness, they have absolutely no runtime consequences, because JavaScript does not include any sort of runtime type checking. Interfaces are analyzed at compile time and then omitted from the resulting JavaScript. Interfaces create a contract for developers to use when developing new objects or writing methods to interact with existing ones. Interfaces are named types that contain a list of members. Let's look at an example of an interface:

interface IPoint {
    x: number;
    y: number;
}

As you can see, we use the interface keyword to start the interface declaration. Then we give the interface a name that we can easily reference from our code.
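As a quick sketch of how such an interface is used (illustrative values; the IPoint declaration is repeated so the snippet stands alone):

```typescript
interface IPoint {
    x: number;
    y: number;
}

// An object literal is assignable to IPoint as long as it has both members
// with the right types; the compiler checks this at compile time only.
var start: IPoint = { x: 0, y: 0 };
var offset: IPoint = { x: 3, y: 4 };

console.log(start.x + offset.x, start.y + offset.y); // 3 4
```

In the compiled JavaScript, the interface vanishes entirely; only the two object literals remain.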
Interfaces can be named anything, for example, foo or bar; however, a simple naming convention will improve the readability of the code. Interfaces will be given the format I<name> and object types will just use <name>, for example, IFoo and Foo. The interface's declaration body contains just a list of members and functions and their types. Interface members can only be instance members of an object; using the static keyword in an interface declaration will result in a compile error. Interfaces have the ability to inherit from base types. This interface inheritance allows us to extend existing interfaces into a more enhanced version, as well as to merge separate interfaces together. To create an inheritance chain, interfaces use the extends clause. The extends clause is followed by a comma-separated list of types that the interface will merge with:

interface IAdder {
    add(arg1: number, ...args: number[]): number;
}

interface ISubtractor {
    subtract(arg1: number, ...args: number[]): number;
}

interface ICalculator extends IAdder, ISubtractor {
    multiply(arg1: number, ...args: number[]): number;
    divide(arg1: number, arg2: number): number;
}

Here, we see three interfaces:

IAdder, which defines a type that must implement the add method that we wrote earlier
ISubtractor, which defines a new method called subtract that any object typed with ISubtractor must define
ICalculator, which extends both IAdder and ISubtractor, as well as defining two new methods that perform operations a calculator would be responsible for, which an adder or subtractor wouldn't perform

These interfaces can now be referenced in our code as type parameters or type declarations. Interfaces cannot be directly instantiated, and attempting to reference the members of an interface by using its type name directly will result in an error. In the following function declaration, the ICalculator interface is used to restrict the object type that can be passed to the function.
The compiler can now examine the function body and infer all of the type information associated with the calculator parameter, and warn us if the object used does not implement this interface:

function performCalculations(calculator: ICalculator, num1, num2) {
    calculator.add(num1, num2);
    calculator.subtract(num1, num2);
    calculator.multiply(num1, num2);
    calculator.divide(num1, num2);
    return true;
}

The last thing that you need to know about interface definitions is that their declarations are open-ended and will implicitly merge together if they have the same type name. Our ICalculator interface could have been split into two separate declarations, with each one adding its own list of base types and its own list of members. The resulting type definition from the following declaration is equivalent to the declaration we saw previously:

interface ICalculator extends IAdder {
    multiply(arg1: number, ...args: number[]): number;
}

interface ICalculator extends ISubtractor {
    divide(arg1: number, arg2: number): number;
}

Creating large-scale applications requires code that is flexible and reusable. Interfaces are a key component of keeping TypeScript as flexible as plain JavaScript, yet they allow us to take advantage of the type checking provided at compile time. Your code doesn't have to be dependent on existing object types, and it will be ready for any new object types that might be introduced. The TypeScript compiler also implements a duck typing system that allows us to create objects on the fly while keeping type safety.
The following example shows how we can pass objects that don't explicitly implement an interface, but contain all of the required members, to a function:

function addPoints(p1: IPoint, p2: IPoint): IPoint {
    var x = p1.x + p2.x;
    var y = p1.y + p2.y;
    return { x: x, y: y };
}

//Valid
var newPoint = addPoints({ x: 3, y: 4 }, { x: 5, y: 1 });
//Error
var newPoint2 = addPoints({ x: 1 }, { x: 4, y: 3 });

Classes

In the next version of JavaScript, ECMAScript 6, a standard has been proposed for the definition of classes. TypeScript brings this concept to the current versions of JavaScript. Classes consist of a variety of different properties and members. These members can be either public or private, and static or instance members.

Definitions

Creating classes in TypeScript is essentially the same as creating interfaces. Let's create a very simple Point class that keeps track of an x and a y position for us:

class Point {
    public x: number;
    public y: number;
    constructor(x: number, y = 0) {
        this.x = x;
        this.y = y;
    }
}

As you can see, defining a class is very simple. Use the keyword class and then provide a name for the new type. Then you create a constructor for the object, with any parameters you wish to provide upon creation. Our Point class requires two values that represent a location on a plane. The constructor is completely optional. If a constructor implementation is not provided, the compiler will automatically generate one that takes no parameters and initializes any instance members. We provided a default value for the property y. This default value tells the compiler to generate an extra JavaScript statement compared to if we had only given it a type. This also allows TypeScript to treat parameters with default values as optional parameters. If the parameter is not provided, then the parameter's value is assigned to the default value you provide. This provides a simple method for ensuring that you are always operating on instantiated objects.
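To see the default value in action, here is a small usage sketch (the Point class from above is repeated so the snippet stands alone; the coordinate values are illustrative):

```typescript
class Point {
    public x: number;
    public y: number;
    constructor(x: number, y = 0) {
        this.x = x;
        this.y = y;
    }
}

var p1 = new Point(3, 4);
var p2 = new Point(5); // y is omitted, so it falls back to the default of 0

console.log(p1.x, p1.y); // 3 4
console.log(p2.x, p2.y); // 5 0
```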
The best part is that default values are available for all functions, not just constructors. Now let's examine the JavaScript output for the Point class:

var Point = (function () {
    function Point(x, y) {
        if (typeof y === "undefined") { y = 0; }
        this.x = x;
        this.y = y;
    }
    return Point;
})();

As you can see, a new object is created and assigned to an anonymous function that initializes the definition of the Point class. As we will see later, any public methods or static members will be added to the inner Point function's prototype. JavaScript closures are a very important concept in understanding TypeScript. Classes, modules, and enums in TypeScript all compile into JavaScript closures. Closures are actually a construct of the JavaScript language that provide a way of creating a private state for a specific segment of code. When a closure is created, it contains two things: a function, and the state of the environment when the function was created. The function is returned to the caller of the closure and the state is used when the function is called. For more information about JavaScript closures and the module pattern, visit http://www.adequatelygood.com/JavaScript-Module-Pattern-In-Depth.html. The optional parameter was accounted for by checking its type and initializing it if a value is not available. You can also see that both x and y properties were added to the new instance and assigned to the values that were passed into the constructor.

Summary

This article has thoroughly discussed the different language constructs in TypeScript.

Resources for Article: Further resources on this subject: Setting Up The Rig [Article] Making Your Code Better [Article] Working with Flexible Content Elements in TYPO3 Templates [Article]

Handle Web Applications

Packt
20 Oct 2014
13 min read
In this article by Ivo Balbaert, author of Dart Cookbook, we will cover the following recipes:

Sanitizing HTML
Using a browser's local storage
Using an application cache to work offline
Preventing an onSubmit event from reloading the page

(For more resources related to this topic, see here.)

Sanitizing HTML

We've all heard of (or perhaps even experienced) cross-site scripting (XSS) attacks, where evil-minded attackers try to inject client-side script or SQL statements into web pages. This could be done to gain access to session cookies or database data, or to get elevated access privileges to sensitive page content. Verifying an HTML document and producing a new HTML document that preserves only whatever tags are designated safe is called sanitizing the HTML.

How to do it...

Look at the web project sanitization. Run the following script and see how the text content and default sanitization work:

var elem1 = new Element.html('<div class="foo">content</div>');
document.body.children.add(elem1);
var elem2 = new Element.html('<script class="foo">evil content</script><p>ok?</p>');
document.body.children.add(elem2);

The text content and ok? from elem1 and elem2 are displayed, but the console gives the message Removing disallowed element <SCRIPT>. So a script is removed before it can do harm. Sanitize using HtmlEscape, which is mainly used with user-generated content:

import 'dart:convert' show HtmlEscape;

In main(), use the following code:

var unsafe = '<script class="foo">evil   content</script><p>ok?</p>';
var sanitizer = const HtmlEscape();
print(sanitizer.convert(unsafe));

This prints the following output to the console:

&lt;script class=&quot;foo&quot;&gt;evil   content&lt;&#x2F;script&gt;&lt;p&gt;ok?&lt;&#x2F;p&gt;

Sanitize using node validation.
The following code forbids the use of a <p> tag in node1; only <a> tags are allowed:

var html_string = '<p class="note">a note aside</p>';
var node1 = new Element.html(
    html_string,
    validator: new NodeValidatorBuilder()
      ..allowElement('a', attributes: ['href'])
);

The console prints the following output:

Removing disallowed element <p>
Breaking on exception: Bad state: No elements

A NullTreeSanitizer for no validation is used as follows:

final allHtml = const NullTreeSanitizer();
class NullTreeSanitizer implements NodeTreeSanitizer {
    const NullTreeSanitizer();
    void sanitizeTree(Node node) {}
}

It can also be used as follows:

var elem3 = new Element.html('<p>a text</p>');
elem3.setInnerHtml(html_string, treeSanitizer: allHtml);

How it works...

First, we have very good news: Dart automatically sanitizes all methods through which HTML elements are constructed, such as new Element.html(), Element.innerHtml(), and a few others. With them, you can build HTML hardcoded, but also through string interpolation, which entails more risks. The default sanitization removes all scriptable elements and attributes. If you want to escape all characters in a string so that they are transformed into HTML special characters (such as &#x2F; for a /), use the class HtmlEscape from dart:convert, as shown in the second step. The default behavior is to escape apostrophes, greater than/less than, quotes, and slashes. If your application is using untrusted HTML to put in variables, it is strongly advised to use a validation scheme, which only covers the syntax you expect users to feed into your app. This is possible because Element.html() has the following optional arguments:

Element.html(String html, {NodeValidator validator, NodeTreeSanitizer treeSanitizer})

In step 3, only <a> was an allowed tag. By adding more allowElement rules in cascade, you can allow more tags. Using allowHtml5() permits all HTML5 tags.
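To make the escaping described above concrete, here is an illustrative re-implementation of the same idea in plain TypeScript. This is not Dart's actual HtmlEscape class, just a sketch of the character mapping it describes (ampersand, angle brackets, quotes, apostrophes, and slashes):

```typescript
// Minimal HTML-escaping sketch; the replacement table mirrors the defaults
// described above. The ampersand must be replaced first, so that the
// entities produced by the later rules are not themselves double-escaped.
function escapeHtml(unsafe: string): string {
    return unsafe
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#x27;")
        .replace(/\//g, "&#x2F;");
}

console.log(escapeHtml('<script class="foo">evil</script>'));
// &lt;script class=&quot;foo&quot;&gt;evil&lt;&#x2F;script&gt;
```

Escaped markup like this is safe to insert as text, because the browser will render the entities literally instead of parsing them as tags.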
If you want to remove all control in some cases (perhaps you are dealing with known safe HTML and need to bypass sanitization for performance reasons), you can add the class NullTreeSanitizer to your code, which performs no checking at all and defines an object allHtml, as shown in step 4. Then, use setInnerHtml() with the optional named attribute treeSanitizer set to allHtml.

Using a browser's local storage

Local storage (also called the Web Storage API) is widely supported in modern browsers. It enables the application's data to be persisted locally (on the client side) as a map-like structure: a dictionary of key-value string pairs, in fact using JSON strings to store and retrieve data. It provides our application with an offline mode of functioning when the server is not available to store the data in a database. Local storage does not expire, but every application can only access its own data, up to a certain limit depending on the browser. In addition, of course, different browsers can't access each other's stores.

How to do it...

Look at the following example, the local_storage.dart file:

import 'dart:html';

Storage local = window.localStorage;

void main() {
  var job1 = new Job(1, "Web Developer", 6500, "Dart Unlimited");

Perform the following steps to use the browser's local storage:

Write to a local storage with the key Job:1 using the following code:

  local["Job:${job1.id}"] = job1.toJson;
  ButtonElement bel = querySelector('#readls');
  bel.onClick.listen(readShowData);
}

A click on the button checks to see whether the key Job:1 can be found in the local storage and, if so, reads the data in.
This is then shown in the data <div>:

readShowData(Event e) {
    var key = 'Job:1';
    if(local.containsKey(key)) {
      // read data from local storage:
      String job = local[key];
      querySelector('#data').appendText(job);
    }
}

class Job {
  int id;
  String type;
  int salary;
  String company;
  Job(this.id, this.type, this.salary, this.company);
  String get toJson => '{ "type": "$type", "salary": "$salary", "company": "$company" } ';
}

The following screenshot depicts how data is stored in and retrieved from a local storage:

How it works...

You can store data with a certain key in the local storage from the Window class using window.localStorage[key] = data; (both key and data are Strings). You can retrieve it with var data = window.localStorage[key];. In our code, we used the abbreviation Storage local = window.localStorage; because local is a map. You can check the existence of this piece of data in the local storage with containsKey(key). In Chrome (also in other browsers via Developer Tools), you can verify this by navigating to Extra | Tools | Resources | Local Storage (as shown in the previous screenshot). window.localStorage also has a length property; you can query whether it contains something with isEmpty, and you can loop through all stored values using the following code:

for(var key in window.localStorage.keys) {
  String value = window.localStorage[key];
  // more code
}

There's more...

Local storage can be disabled (by user action, or via an installed plugin or extension), so we must alert the user when it needs to be enabled; we can do this by catching the exception that occurs in this case:

try {
  window.localStorage[key] = data;
} on Exception catch (ex) {
  window.alert("Data not stored: Local storage is disabled!");
}

Local storage is a simple key-value store and does have good cross-browser coverage.
However, it can only store strings, and it is a blocking (synchronous) API; this means that it can temporarily pause your web page from responding while it is storing or reading large amounts of data, such as images. Moreover, it has a space limit of 5 MB (this varies with browsers); you can't detect when you are nearing this limit and you can't ask for more space. When the limit is reached, an error occurs so that the user can be informed. These properties make local storage only useful as a temporary data storage tool; this means that it is better than cookies, but not suited for reliable, database-like storage. Web storage also has another way of storing data, called sessionStorage, used in the same way, but this limits the persistence of the data to the current browser session only. So, data is lost when the browser is closed or another application is started in the same browser window.

Using an application cache to work offline

When, for some reason, our users don't have web access, or the website is down for maintenance (or even broken), our web-based applications should also work offline. The browser cache is not robust enough to be able to do this, so HTML5 has given us the mechanism of ApplicationCache. This cache tells the browser which files should be made available offline. The effect is that the application loads and works correctly, even when the user is offline. The files to be held in the cache are specified in a manifest file, which has a .mf or .appcache extension.

How to do it...

Look at the appcache application; it has a manifest file called appcache.mf. The manifest file can be specified in every web page that has to be cached. This is done with the manifest attribute of the <html> tag:

<html manifest="appcache.mf">

If a page has to be cached and doesn't have the manifest attribute, it must be specified in the CACHE section of the manifest file.
The manifest file has the following (minimum) content:

CACHE MANIFEST
# 2012-09-28:v3

CACHE:
Cached1.html
appcache.css
appcache.dart
http://dart.googlecode.com/svn/branches/bleeding_edge/dart/client/dart.js

NETWORK:
*

FALLBACK:
/ offline.html

Run cached1.html. This displays the This page is cached, and works offline! text. Change the text to This page has been changed! and reload the browser. You don't see the changed text, because the page is created from the application cache. When the manifest file is changed (change version v1 to v2), the cache becomes invalid and the new version of the page is loaded, with the This page has been changed! text. The Dart script appcache.dart of the page should contain the following minimal code to access the cache:

main() {
  new AppCache(window.applicationCache);
}

class AppCache {
  ApplicationCache appCache;

  AppCache(this.appCache) {
    appCache.onUpdateReady.listen((e) => updateReady());
    appCache.onError.listen(onCacheError);
  }

  void updateReady() {
    if (appCache.status == ApplicationCache.UPDATEREADY) {
      // The browser downloaded a new app cache. Alert the user:
      appCache.swapCache();
      window.alert('A new version of this site is available. Please reload.');
    }
  }

  void onCacheError(Event e) {
    print('Cache error: ${e}');
    // Implement more complete error reporting to developers
  }
}

How it works...

The CACHE section in the manifest file enumerates all the entries that have to be cached. The NETWORK: and * options mean that the user has to be online to use all other resources. FALLBACK specifies that offline.html will be displayed if the user is offline and a resource is inaccessible. A page is cached when either of the following is true:

Its HTML tag has a manifest attribute pointing to the manifest file
The page is specified in the CACHE section of the manifest file

The browser is notified when the manifest file is changed, and the user will be forced to refresh its cached resources.
Adding a timestamp and/or a version number, such as # 2014-05-18:v1, works fine. Changing the date or the version invalidates the cache, and the updated pages are again loaded from the server. To access the browser's app cache from your code, use the window.applicationCache object. Make an object of a class AppCache, and alert the user when the application cache has become invalid (the status is UPDATEREADY) by defining an onUpdateReady listener.

There's more...

The other known states of the application cache are UNCACHED, IDLE, CHECKING, DOWNLOADING, and OBSOLETE. To log all these cache events, you could add the following listeners to the appCache constructor:

appCache.onCached.listen(onCacheEvent);
appCache.onChecking.listen(onCacheEvent);
appCache.onDownloading.listen(onCacheEvent);
appCache.onNoUpdate.listen(onCacheEvent);
appCache.onObsolete.listen(onCacheEvent);
appCache.onProgress.listen(onCacheEvent);

Provide an onCacheEvent handler using the following code:

void onCacheEvent(Event e) {
    print('Cache event: ${e}');
}

Preventing an onSubmit event from reloading the page

The default action for a submit button on a web page that contains an HTML form is to post all the form data to the server on which the application runs. What if we don't want this to happen?

How to do it...

Experiment with the submit application by performing the following steps:

Our web page submit.html contains the following code:

<form id="form1" action="http://www.dartlang.org" method="POST">
  <label>Job:<input type="text" name="Job" size="75"></input></label>
  <input type="submit" value="Job Search">
</form>

Comment out all the code in submit.dart. Run the app, enter a job name, and click on the Job Search submit button; the Dart site appears.
When the following code is added to submit.dart, clicking on the button no longer makes the Dart site appear:

import 'dart:html';

void main() {
  querySelector('#form1').onSubmit.listen(submit);
}

submit(Event e) {
    e.preventDefault();
    // code to be executed when button is clicked
}

How it works...

In the first step, when the submit button is pressed, the browser sees that the method is POST. This method collects the data and names from the input fields and sends them to the URL specified in action to be executed, which only shows the Dart site in our case. To prevent the form from posting the data, make an event handler for the onSubmit event of the form. Calling e.preventDefault(); as the first statement of this handler will cancel the default submit action. However, the rest of the submit event handler (and even the same handler of a parent control, should there be one) is still executed on the client side.

Summary

In this article, we learned how to handle web applications: how to sanitize HTML, use a browser's local storage, use an application cache to work offline, and prevent an onSubmit event from reloading a page.

Resources for Article: Further resources on this subject: Handling the DOM in Dart [Article] QR Codes, Geolocation, Google Maps API, and HTML5 Video [Article] HTML5 Game Development – A Ball-shooting Machine with Physics Engine [Article]
Packt
17 Oct 2014
20 min read

Cordova Plugins

In this article by Hazem Saleh, author of JavaScript Mobile Application Development, we will continue to deep dive into Apache Cordova. You will learn how to create your own custom Cordova plugin on the three most popular mobile platforms: Android (using the Java programming language), iOS (using the Objective-C programming language), and Windows Phone 8 (using the C# programming language). (For more resources related to this topic, see here.) Developing a custom Cordova plugin Before going into the details of the plugin, it is important to note that developing custom Cordova plugins is not a common scenario if you are developing Apache Cordova apps. This is because the Apache Cordova core and community custom plugins already cover many of the use cases that are needed to access a device's native functions. So, make sure of two things: You are not developing a custom plugin that already exists in Apache Cordova core plugins. You are not developing a custom plugin whose functionality already exists in other good Apache Cordova custom plugin(s) that are developed by the Apache Cordova development community. Building plugins from scratch can consume precious time from your project; otherwise, you can save time by reusing one of the available good custom plugins. Another thing to note is that developing custom Cordova plugins is an advanced topic. It requires you to be aware of the native programming languages of the mobile platforms, so make sure you have an overview of Java, Objective-C, and C# (or at least one of them) before reading this section. This will be helpful in understanding all the plugin development steps (plugin structuring, JavaScript interface definition, and native plugin implementation). Now, let's start developing our custom Cordova plugin. It can be used in order to send SMS messages from one of the three popular mobile platforms (Android, iOS, and Windows Phone 8). Before we start creating our plugin, we need to define its API. 
The following code listing shows you how to call the sms.sendMessage method of our plugin, which will be used in order to send an SMS across platforms:

var messageInfo = {
   phoneNumber: "xxxxxxxxxx",
   textMessage: "This is a test message"
};

sms.sendMessage(messageInfo, function(message) {
   console.log("success: " + message);
}, function(error) {
   console.log("code: " + error.code + ", message: " + error.message);
});

The sms.sendMessage method has the following parameters:

messageInfo: This is a JSON object that contains two main attributes: phoneNumber, which represents the phone number that will receive the SMS message, and textMessage, which represents the text message to be sent.
successCallback: This is a callback that will be called if the message is sent successfully.
errorCallback: This is a callback that will be called if the message is not sent successfully. This callback receives an error object as a parameter. The error object has code (the error code) and message (the error message) attributes.

Using plugman

In addition to the Apache Cordova CLI utility, you can use the plugman utility in order to add or remove plugin(s) to/from your Apache Cordova projects. However, it's worth mentioning that plugman is a lower-level tool that you can use if your Apache Cordova application follows platform-centered workflow and not cross-platform workflow. If your application follows cross-platform workflow, then Apache Cordova CLI should be your choice. If you want your application to run on different mobile platforms (which is a common use case if you want to use Apache Cordova), it's recommended that you follow cross-platform workflow. Use platform-centered workflow if you want to develop your Apache Cordova application on a single platform and modify your application using the platform-specific SDK.
Besides adding and removing plugins to/from platform-centered workflow Cordova projects, plugman can also be used:

To create basic scaffolding for your custom Cordova plugin
To add and remove a platform to/from your custom Cordova plugin
To add user(s) to the Cordova plugin registry (a repository that hosts the different Apache Cordova core and custom plugins)
To publish your custom Cordova plugin(s) to the Cordova plugin registry
To unpublish your custom plugin(s) from the Cordova plugin registry
To search for plugin(s) in the Cordova plugin registry

In this section, we will use the plugman utility to create the basic scaffolding of our custom SMS plugin. In order to install plugman, you need to make sure that Node.js is installed in your operating system. Then, to install plugman, execute the following command:

> npm install -g plugman

After installing plugman, we can start generating our initial custom plugin artifacts using the plugman create command as follows:

> plugman create --name sms --plugin_id com.jsmobile.plugins.sms --plugin_version 0.0.1

It is important to note the following parameters:

--name: This specifies the plugin name (in our case, sms)
--plugin_id: This specifies an ID for the plugin (in our case, com.jsmobile.plugins.sms)
--plugin_version: This specifies the plugin version (in our case, 0.0.1)

The following are the two parameters that the plugman create command can accept as well:

--path: This specifies the directory path of the plugin
--variable: This can specify extra variables such as author or description

After executing the previous command, we will have initial artifacts for our custom plugin. As we will be supporting multiple platforms, we can use the plugman platform add command.
The following two commands add the Android and iOS platforms to our custom plugin:

> plugman platform add --platform_name android
> plugman platform add --platform_name ios

In order to run the plugman platform add command, we need to run it from the plugin directory. Unfortunately, for Windows Phone 8 platform support, we need to add it manually later to our plugin. Now, let's check the initial scaffolding of our custom plugin code. The following screenshot shows the hierarchy of our initial plugin code:

Hierarchy of our initial plugin code

As shown in the preceding screenshot, there is one file and two parent directories. They are as follows:

plugin.xml file: This contains the plugin definition.
src directory: This contains the plugin native implementation code for each platform. For now, it contains two subdirectories: android and ios. The android subdirectory contains sms.java, which represents the initial implementation of the plugin in Android. The ios subdirectory contains sms.m, which represents the initial implementation of the plugin in iOS.
www directory: This mainly contains the JavaScript interface of the plugin. It contains sms.js, which represents the initial implementation of the plugin's JavaScript API.

We will need to edit these generated files (and maybe refactor and add new implementation files) in order to implement our custom SMS plugin.

Plugin definition

First of all, we need to define our plugin structure. In order to do so, we need to define our plugin in the plugin.xml file.
The following code listing shows our plugin.xml code:

<?xml version='1.0' encoding='utf-8'?>
<plugin id="com.jsmobile.plugins.sms" version="0.0.1">
    <name>sms</name>
    <description>A plugin for sending sms messages</description>
    <license>Apache 2.0</license>
    <keywords>cordova,plugins,sms</keywords>
    <js-module name="sms" src="www/sms.js">
        <clobbers target="window.sms" />
    </js-module>
    <platform name="android">
        <config-file parent="/*" target="res/xml/config.xml">
            <feature name="Sms">
                <param name="android-package" value="com.jsmobile.plugins.sms.Sms" />
            </feature>
        </config-file>
        <config-file target="AndroidManifest.xml" parent="/manifest">
            <uses-permission android:name="android.permission.SEND_SMS" />
        </config-file>
        <source-file src="src/android/Sms.java"
                     target-dir="src/com/jsmobile/plugins/sms" />
    </platform>
    <platform name="ios">
        <config-file parent="/*" target="config.xml">
            <feature name="Sms">
                <param name="ios-package" value="Sms" />
            </feature>
        </config-file>
        <source-file src="src/ios/Sms.h" />
        <source-file src="src/ios/Sms.m" />
        <framework src="MessageUI.framework" weak="true" />
    </platform>
    <platform name="wp8">
        <config-file target="config.xml" parent="/*">
            <feature name="Sms">
                <param name="wp-package" value="Sms" />
            </feature>
        </config-file>
        <source-file src="src/wp8/Sms.cs" />
    </platform>
</plugin>

The plugin.xml file defines the plugin structure and contains a top-level <plugin> element, whose id and version attributes match the values we passed to the plugman create command. Inside <plugin>, the <js-module> element specifies the plugin's common JavaScript interface, and its <clobbers target="window.sms" /> tag mainly inserts the smsExport JavaScript object that is defined in the www/sms.js file and exported using module.exports (the smsExport object will be illustrated in the Defining the plugin's JavaScript interface section) into the window object as
window.sms. This means that our plugin users will be able to access our plugin's API using the window.sms object (this will be shown in detail in the Testing our Cordova plugin section).

The <plugin> element can contain one or more <platform> element(s). The <platform> element specifies the platform-specific plugin configuration. It mainly has one attribute, name, which specifies the platform name (android, ios, wp8, bb10, wp7, and so on). The <platform> element can have the following child elements:

<source-file>: This element represents the native platform source code that will be installed and executed in the plugin-client project. The <source-file> element has the following two main attributes:

src: This attribute represents the location of the source file relative to plugin.xml.
target-dir: This attribute represents the target directory (relative to the project root) in which the source file will be placed when the plugin is installed in the client project. This attribute is mainly needed in a Java platform (Android), because a file under the x.y.z package must be placed under the x/y/z directories. For the iOS and Windows platforms, this parameter should be ignored.

<config-file>: This element represents the configuration file that will be modified. This is required in many cases; for example, in Android, in order to send an SMS from your Android application, you need to modify the Android configuration file to have the permission to send an SMS from the device. The <config-file> element has two main attributes:

target: This attribute represents the file to be modified and the path relative to the project root.
parent: This attribute represents an XPath selector that references the parent of the elements to be added to the configuration file.

<framework>: This element specifies a platform-specific framework that the plugin depends on.
It mainly has a src attribute to specify the framework name and a weak attribute to indicate whether the specified framework should be weakly linked. With this explanation of the <platform> element in place, if we get back to our plugin.xml file, you will notice that we have the following three <platform> elements:

Android (<platform name="android">) performs the following operations:

It creates a <feature> element for our SMS plugin under the root element of the res/xml/config.xml file to register our plugin in an Android project. In Android, the <feature> element's name attribute represents the service name, and its "android-package" parameter represents the fully qualified name of the Java plugin class:

<feature name="Sms">
    <param name="android-package" value="com.jsmobile.plugins.sms.Sms" />
</feature>

It modifies the AndroidManifest.xml file to add the <uses-permission android:name="android.permission.SEND_SMS" /> element (to have permission to send an SMS on the Android platform) under the <manifest> element.

Finally, it specifies the plugin's implementation source file, "src/android/Sms.java", and its target directory, "src/com/jsmobile/plugins/sms" (we will explore the contents of this file in the Developing the Android code section).

iOS (<platform name="ios">) performs the following operations:

It creates a <feature> element for our SMS plugin under the root element of the config.xml file to register our plugin in the iOS project. In iOS, the <feature> element's name attribute represents the service name, and its "ios-package" parameter represents the Objective-C plugin class name:

<feature name="Sms">
    <param name="ios-package" value="Sms" />
</feature>

It specifies the plugin implementation source files: Sms.h (the header file) and Sms.m (the methods file). We will explore the contents of these files in the Developing the iOS code section.

It adds "MessageUI.framework" as a weakly linked dependency for our iOS plugin.
Windows Phone 8 (<platform name="wp8">) performs the following operations:

It creates a <feature> element for our SMS plugin under the root element of the config.xml file to register our plugin in the Windows Phone 8 project. The <feature> element's name attribute represents the service name, and its "wp-package" parameter represents the C# service class name:

<feature name="Sms">
    <param name="wp-package" value="Sms" />
</feature>

It specifies the plugin implementation source file, "src/wp8/Sms.cs" (we will explore the contents of this file in the Developing Windows Phone 8 code section).

This is all we need to know in order to understand the structure of our custom plugin; however, there are many more attributes and elements that are not mentioned here, as we didn't use them in our example. In order to get the complete list of attributes and elements of plugin.xml, you can check out the plugin specification page in the Apache Cordova documentation at http://cordova.apache.org/docs/en/3.4.0/plugin_ref_spec.md.html#Plugin%20Specification.

Defining the plugin's JavaScript interface

As indicated in the plugin definition file (plugin.xml), our plugin's JavaScript interface is defined in sms.js, which is located under the www directory. The following code snippet shows the sms.js file content:

var smsExport = {};

smsExport.sendMessage = function(messageInfo, successCallback, errorCallback) {
    if (messageInfo == null || typeof messageInfo !== 'object') {
        if (errorCallback) {
            errorCallback({
                code: "INVALID_INPUT",
                message: "Invalid Input"
            });
        }
        return;
    }
    var phoneNumber = messageInfo.phoneNumber;
    var textMessage = messageInfo.textMessage || "Default Text from SMS plugin";
    if (!phoneNumber) {
        console.log("Missing Phone Number");
        if (errorCallback) {
            errorCallback({
                code: "MISSING_PHONE_NUMBER",
                message: "Missing Phone number"
            });
        }
        return;
    }
    cordova.exec(successCallback, errorCallback, "Sms", "sendMessage", [phoneNumber, textMessage]);
};

module.exports = smsExport;

The smsExport object contains a single method, sendMessage(messageInfo, successCallback, errorCallback). In the sendMessage method, phoneNumber and textMessage are extracted from the messageInfo object. If a phone number is not specified by the user, then errorCallback will be called with a JSON error object, which has a code attribute set to "MISSING_PHONE_NUMBER" and a message attribute set to "Missing Phone number". After passing this validation, a call is performed to the cordova.exec() API in order to call the native code (whether it is Android, iOS, Windows Phone 8, or any other supported platform) from Apache Cordova JavaScript.
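Because everything funnels through cordova.exec(), the JavaScript interface can be exercised off-device by substituting a small recording stub for the real Cordova bridge. The harness below is purely illustrative (the stub and the trimmed-down sendMessage are our own sketch of the sms.js logic above, not part of the plugin itself):

```javascript
// Recording stub standing in for Apache Cordova's native bridge.
var execCalls = [];
var cordova = {
  exec: function (success, error, service, action, args) {
    execCalls.push({ service: service, action: action, args: args });
    success('SMS message is sent successfully');
  }
};

// Same shape as the sms.js listing above (the input-validation branch is trimmed).
var smsExport = {
  sendMessage: function (messageInfo, successCallback, errorCallback) {
    var phoneNumber = messageInfo.phoneNumber;
    var textMessage = messageInfo.textMessage || 'Default Text from SMS plugin';
    if (!phoneNumber) {
      if (errorCallback) {
        errorCallback({ code: 'MISSING_PHONE_NUMBER', message: 'Missing Phone number' });
      }
      return;
    }
    cordova.exec(successCallback, errorCallback, 'Sms', 'sendMessage',
                 [phoneNumber, textMessage]);
  }
};

// The happy path reaches the bridge with the expected service and action...
var outcome;
smsExport.sendMessage({ phoneNumber: '5550100' },
  function (message) { outcome = message; },
  function (error) { outcome = error.code; });
console.log(execCalls[0].service + '/' + execCalls[0].action); // Sms/sendMessage
console.log(execCalls[0].args[1]); // Default Text from SMS plugin

// ...while a missing phone number short-circuits into errorCallback
// before the bridge is ever touched.
smsExport.sendMessage({},
  function (message) { outcome = message; },
  function (error) { outcome = error.code; });
console.log(outcome); // MISSING_PHONE_NUMBER
```

On a real device, the cordova.exec provided by cordova.js takes the place of the stub, and the same calls are routed to the registered Sms native class.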
It is important to note that the cordova.exec(successCallback, errorCallback, "service", "action", [args]) API has the following parameters:

successCallback: This represents the success callback function that will be called (with any specified parameter(s)) if the Cordova exec call completes successfully.
errorCallback: This represents the error callback function that will be called (with any specified error parameter(s)) if the Cordova exec call does not complete successfully.
"service": This represents the native service name that is mapped to a native class using the <feature> element (in sms.js, the native service name is "Sms").
"action": This represents the action name to be executed, and an action is mapped to a class method in some platforms (in sms.js, the action name is "sendMessage").
[args]: This is an array that represents the action arguments (in sms.js, the action arguments are [phoneNumber, textMessage]).

It is very important to note that in cordova.exec(successCallback, errorCallback, "service", "action", [args]), the "service" parameter must match the name of the <feature> element, which we set in our plugin.xml file, in order to call the mapped native plugin class correctly. Finally, the smsExport object is exported using module.exports. Do not forget that our JavaScript module is mapped to window.sms using the <clobbers target="window.sms" /> element inside the <js-module src="www/sms.js"> element, which we discussed in the plugin.xml file. This means that in order to call the sendMessage method of the smsExport object from our plugin-client application, we use the sms.sendMessage() method.

Developing the Android code

As indicated in our plugin.xml file's platform section for Android, the implementation of our plugin in Android is located at src/android/Sms.java.
The following code snippet shows the first part of the Sms.java file:

package com.jsmobile.plugins.sms;

import org.apache.cordova.CordovaPlugin;
import org.apache.cordova.CallbackContext;
import org.apache.cordova.PluginResult;
import org.apache.cordova.PluginResult.Status;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import android.app.Activity;
import android.app.PendingIntent;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.content.pm.PackageManager;
import android.telephony.SmsManager;

public class Sms extends CordovaPlugin {
    private static final String SMS_GENERAL_ERROR = "SMS_GENERAL_ERROR";
    private static final String NO_SMS_SERVICE_AVAILABLE = "NO_SMS_SERVICE_AVAILABLE";
    private static final String SMS_FEATURE_NOT_SUPPORTED = "SMS_FEATURE_NOT_SUPPORTED";
    private static final String SENDING_SMS_ID = "SENDING_SMS";

    @Override
    public boolean execute(String action, JSONArray args, CallbackContext callbackContext) throws JSONException {
        if (action.equals("sendMessage")) {
            String phoneNumber = args.getString(0);
            String message = args.getString(1);
            boolean isSupported = getActivity().getPackageManager().hasSystemFeature(PackageManager.FEATURE_TELEPHONY);
            if (!isSupported) {
                JSONObject errorObject = new JSONObject();
                errorObject.put("code", SMS_FEATURE_NOT_SUPPORTED);
                errorObject.put("message", "SMS feature is not supported on this device");
                callbackContext.sendPluginResult(new PluginResult(Status.ERROR, errorObject));
                return false;
            }
            this.sendSMS(phoneNumber, message, callbackContext);
            return true;
        }
        return false;
    }

    // Code is omitted here for simplicity ...
    private Activity getActivity() {
        return this.cordova.getActivity();
    }
}

In order to create our Cordova Android plugin class, our Android plugin class must extend the CordovaPlugin class and must override one of the execute() methods of CordovaPlugin. In our Sms Java class, the execute(String action, JSONArray args, CallbackContext callbackContext) method, which has the following parameters, is overridden:

String action: This represents the action to be performed, and it matches the specified action parameter in the cordova.exec() JavaScript API
JSONArray args: This represents the action arguments, and it matches the [args] parameter in the cordova.exec() JavaScript API
CallbackContext callbackContext: This represents the callback context used when calling a function back to JavaScript

In the execute() method of our Sms class, the phoneNumber and message parameters are retrieved from the args parameter. Using getActivity().getPackageManager().hasSystemFeature(PackageManager.FEATURE_TELEPHONY), we can check if the device has a telephony radio with data communication support. If the device does not have this feature, this API returns false, so we create errorObject of the JSONObject type that contains an error code attribute ("code") and an error message attribute ("message") that inform the plugin user that the SMS feature is not supported on this device. The plugin tells the JavaScript caller that the operation failed by calling callbackContext.sendPluginResult() and specifying a PluginResult object as a parameter (the PluginResult object's status is set to Status.ERROR, and its message is set to errorObject). As indicated in our Android implementation, in order to send a plugin result to JavaScript from Android, we use the callbackContext.sendPluginResult() method, specifying the PluginResult status and message. Other platforms (iOS and Windows Phone 8) behave in much the same way.
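On the JavaScript side of this bridge, a client's errorCallback can branch on the code attribute carried by these error objects. The helper below is not part of the plugin, just an illustrative sketch that maps the codes defined in sms.js and Sms.java to user-facing text:

```javascript
// Error codes: INVALID_INPUT and MISSING_PHONE_NUMBER are raised by
// sms.js; the remaining three come from the Android Sms.java class.
var SMS_ERROR_MESSAGES = {
  INVALID_INPUT: 'The message information passed in was not an object.',
  MISSING_PHONE_NUMBER: 'Please provide a destination phone number.',
  SMS_FEATURE_NOT_SUPPORTED: 'This device cannot send SMS messages.',
  NO_SMS_SERVICE_AVAILABLE: 'No SMS service is available right now.',
  SMS_GENERAL_ERROR: 'The SMS could not be sent.'
};

// A hypothetical errorCallback helper: fall back to the message
// attribute supplied by the plugin when the code is unrecognized.
function describeSmsError(error) {
  return SMS_ERROR_MESSAGES[error.code] || error.message;
}

console.log(describeSmsError({
  code: 'SMS_FEATURE_NOT_SUPPORTED',
  message: 'SMS feature is not supported on this device'
})); // This device cannot send SMS messages.
```

A helper like this keeps the per-platform error codes in one place, so each call site passes a simple function such as describeSmsError into sms.sendMessage as its errorCallback.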
If an Android device supports sending SMS messages, then a call to the sendSMS() private method is performed. The following code snippet shows the sendSMS() code:

private void sendSMS(String phoneNumber, String message, final CallbackContext callbackContext) throws JSONException {
    PendingIntent sentPI = PendingIntent.getBroadcast(getActivity(), 0, new Intent(SENDING_SMS_ID), 0);
    getActivity().registerReceiver(new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            switch (getResultCode()) {
            case Activity.RESULT_OK:
                callbackContext.sendPluginResult(new PluginResult(Status.OK, "SMS message is sent successfully"));
                break;
            case SmsManager.RESULT_ERROR_NO_SERVICE:
                try {
                    JSONObject errorObject = new JSONObject();
                    errorObject.put("code", NO_SMS_SERVICE_AVAILABLE);
                    errorObject.put("message", "SMS is not sent because no service is available");
                    callbackContext.sendPluginResult(new PluginResult(Status.ERROR, errorObject));
                } catch (JSONException exception) {
                    exception.printStackTrace();
                }
                break;
            default:
                try {
                    JSONObject errorObject = new JSONObject();
                    errorObject.put("code", SMS_GENERAL_ERROR);
                    errorObject.put("message", "SMS general error");
                    callbackContext.sendPluginResult(new PluginResult(Status.ERROR, errorObject));
                } catch (JSONException exception) {
                    exception.printStackTrace();
                }
                break;
            }
        }
    }, new IntentFilter(SENDING_SMS_ID));
    SmsManager sms = SmsManager.getDefault();
    sms.sendTextMessage(phoneNumber, null, message, sentPI, null);
}

In order to understand the sendSMS() method, let's look into the method's
last two lines:

SmsManager sms = SmsManager.getDefault();
sms.sendTextMessage(phoneNumber, null, message, sentPI, null);

SmsManager is an Android class that provides an API to send text messages. Calling SmsManager.getDefault() returns an SmsManager object. In order to send a text-based message, a call to sms.sendTextMessage() should be performed. The sms.sendTextMessage(String destinationAddress, String scAddress, String text, PendingIntent sentIntent, PendingIntent deliveryIntent) method has the following parameters:

destinationAddress: This represents the address (phone number) to send the message to.
scAddress: This represents the service center address. It can be set to null to use the current default SMS center.
text: This represents the text message to be sent.
sentIntent: This represents the PendingIntent that is broadcast when the message is successfully sent or fails to send. It can be set to null.
deliveryIntent: This represents the PendingIntent that is broadcast when the message is delivered to the recipient. It can be set to null.

As shown in the preceding code snippet, we specified a destination address (phoneNumber), a text message (message), and finally, a pending intent (sentPI) in order to listen to the message-sending status. If you return to the sendSMS() code and look at it from the beginning, you will notice that sentPI is initialized by calling PendingIntent.getBroadcast(), and in order to receive the SMS-sending broadcast, a BroadcastReceiver is registered. When the SMS message is sent successfully or fails, the onReceive() method of BroadcastReceiver will be called, and the result code can be retrieved using getResultCode(). The result code can indicate:

Success when getResultCode() is equal to Activity.RESULT_OK. In this case, a PluginResult object is constructed with status = Status.OK and message = "SMS message is sent successfully", and it is sent to the client using callbackContext.sendPluginResult().
Failure when getResultCode() is not equal to Activity.RESULT_OK. In this case, a PluginResult object is constructed with status = Status.ERROR and message = errorObject (which contains the error code and error message), and it is sent to the client using callbackContext.sendPluginResult().

These are the details of our SMS plugin implementation on the Android platform. Now, let's move on to the iOS implementation of our plugin.

Summary

This article showed you how to design and develop your own custom Apache Cordova plugin using JavaScript and Java for Android, Objective-C for iOS, and finally, C# for Windows Phone 8.

Resources for Article: Further resources on this subject: Building Mobile Apps [article] Digging into the Architecture [article] So, what is KineticJS? [article]
Packt
16 Oct 2014
3 min read

Planning Desktop Virtualization

This article by Andy Paul, author of the book Citrix XenApp® 7.5 Virtualization Solutions, explains VDI and its building blocks in detail. (For more resources related to this topic, see here.)

The building blocks of VDI

The first step in understanding Virtual Desktop Infrastructure (VDI) is to identify what VDI means to your environment. VDI is an all-encompassing term for most virtual infrastructure projects. For this book, we will use the definitions cited in the following sections for clarity.

Hosted Virtual Desktop (HVD)

Hosted Virtual Desktop is a machine running a single-user operating system such as Windows 7 or Windows 8, sometimes called a desktop OS, which is hosted on a virtual platform within the data center. Users remotely access a desktop that may or may not be dedicated but runs with isolated resources. This is typically a Citrix XenDesktop virtual desktop, as shown in the following figure:

Hosted Virtual Desktop model; each user has dedicated resources

Hosted Shared Desktop (HSD)

Hosted Shared Desktop is a machine running a multiuser operating system such as Windows 2008 Server or Windows 2012 Server, sometimes called a server OS, possibly hosted on a virtual platform within the data center. Users remotely access a desktop that may be using resources shared among multiple users. This has historically been a Citrix XenApp published desktop, as demonstrated in the following figure:

Hosted Shared Desktop model; each user shares the desktop server resources

Session-based Computing (SBC)

With Session-based Computing, users remotely access applications or other resources on a server running in the data center. These are typically client/server applications. This server may or may not be virtualized. This is a multiuser environment, but the users do not access the underlying operating system directly.
This will typically be a Citrix XenApp hosted application, as shown in the following figure:

Session-based computing model; each user accesses applications remotely, but shares resources

Application virtualization

In application virtualization, applications are centrally managed and distributed, but they are locally executed. This may be in conjunction with, or separate from, the other options mentioned previously. Application virtualization typically involves application isolation, allowing the applications to operate independently of any other software. Examples include Citrix XenApp offline applications as well as Citrix profiled applications, Microsoft App-V application packages, and VMware ThinApp solutions. Have a look at the following figure:

Application virtualization model; the application packages execute locally

The preceding list is not a definitive list of options, but it serves to highlight the most commonly used elements of VDI. Other options include client-side hypervisors for local execution of a virtual desktop, hosted physical desktops, and cloud-based applications. Depending on the environment, all of these components can be relevant.

Summary

In this article, we learned about VDI and its building blocks in detail.

Resources for Article: Further resources on this subject: Installation and Deployment of Citrix Systems®' CPSM [article] Designing, Sizing, Building, and Configuring Citrix VDI-in-a-Box [article] Introduction to Citrix XenDesktop [article]
Packt
16 Oct 2014
17 min read

Routing

In this article by Mitchel Kelonye, author of Mastering Ember.js, we will learn URL-based state management in Ember.js, which constitutes routing. Routing enables us to translate the different states in our applications into URLs and vice versa. It is a key concept in Ember.js that enables developers to easily separate application logic. It also enables users to link back to content in the application via the usual HTTP URLs. (For more resources related to this topic, see here.) We all know that in traditional web development, every request is linked to a URL that enables the server to make a decision on the incoming request. Typical actions include sending back a resource file or a JSON payload, redirecting the request to a different resource, or sending back an error response, such as in the case of unauthorized access. Ember.js strives to preserve these ideas in the browser environment by enabling association between these URLs and the state of the application. The main component that manages these states is the application router. It is responsible for restoring an application to a state matching the given URL. It also enables the user to navigate through the application's history as expected. The router is automatically created on application initialization and can be referenced as MyApplicationNamespace.Router. Before we proceed, we will be using the bundled sample to better understand this extremely convenient component. The sample is a simple implementation of the Contacts OS X application, as shown in the following screenshot:

It enables users to add new contacts as well as edit and delete existing ones. For simplicity, we won't support avatars, but that could be an implementation exercise for the reader. We already mentioned some of the states into which this application can transition. These states have to be registered in the same way server-side frameworks have URL dispatchers that backend programmers use to map URL patterns to views.
The article sample already illustrates how these possible states are defined:

// app.js
var App = Ember.Application.create();

App.Router.map(function() {
  this.resource('contacts', function() {
    this.route('new');
    this.resource('contact', { path: '/:contact_id' }, function() {
      this.route('edit');
    });
  });
  this.route('about');
});

Notice that the already instantiated router was referenced as App.Router. Calling its map method gives the application an opportunity to register its possible states. In addition, two other methods are used to classify these states into routes and resources.

Mapping URLs to routes

When defining routes and resources, we are essentially mapping URLs to possible states in our application. As shown in the first code snippet, the router's map function takes a function as its only argument. Inside this function, we may define a resource using the corresponding method, which takes the following signature:

this.resource(resourceName, options, function);

The first argument specifies the name of the resource and, coincidentally, the path to match the request URL. The next argument is optional and holds configurations that we may need to specify, as we shall see later. The last one is a function that is used to define the routes of that particular resource. For example, the first defined resource in the sample says, let the contacts resource handle any requests whose URL starts with /contacts. It also specifies one route, new, that is used to handle the creation of new contacts. Routes, on the other hand, accept the same arguments apart from the function argument. You must be asking yourself, "So how are routes different from resources?" The two are essentially the same, other than that the latter offers a way to categorize states (routes) that perform actions on a specific entity. We can think of an Ember.js application as a tree, composed of a trunk (the router), branches (resources), and leaves (routes). For example, the contact state (a resource) caters for a specific contact.
This resource can be displayed in two modes, read and write; hence the index and edit routes respectively, as shown:

this.resource('contact', {path: '/:contact_id'}, function() {
  this.route('index'); // auto defined
  this.route('edit');
});

Because Ember.js encourages convention, there are two components of routes and resources that are always autodefined:

A default application resource: This is the master resource into which all other resources are defined. We therefore did not need to define it in the router. It is not mandatory to define resources for every state. For example, our about state is a route because it only needs to display static content to the user. It can, however, be thought of as a route of the already autodefined application resource.

A default index route on every resource: Every resource has a default index route. It is autodefined because an application cannot settle on a bare resource state; the application uses this route if no other route within the same resource was intended to be used.

Nesting resources

Resources can be nested depending on the architecture of the application. In our case, we need to load contacts in the sidebar before displaying any of them to the user. Therefore, we define the contact resource inside contacts. On the other hand, in an application such as Twitter, it would not make sense to define a tweet resource embedded inside a tweets resource, because an extra overhead would be incurred when a user just wants to view a single tweet linked from an external application.

Understanding the state transition cycle

A request is handled in the same way water travels from the roots (the application), up the trunk, and is eventually lost off the leaves. The request we are referring to is a change in the browser location, which can be triggered in a number of ways. Before we proceed into finer details about routes, let's discuss what happens when the application is first loaded.
On boot, a few things happen, as outlined here:

1. The application first transitions into the application state, and then the index state.
2. The application index route redirects the request to the contacts resource.
3. Our application uses the browser's local storage to store the contacts, so for demoing purposes, the contacts resource populates this store with fixtures (located at fixtures.js).
4. The application then transitions into the corresponding contacts resource index route, contacts.index.
5. Here we again make a few decisions based on whether our store contains any data. Since we do have data, we redirect the application into the contact resource, passing the ID of the first contact along.
6. Just as in the two preceding resources, the application transitions from this last resource into the corresponding index route, contact.index.

The following figure gives a good view of the preceding state change:

Configuring the router

The router can be customized in the following ways:

Logging state transitions
Specifying the root app URL
Changing the browser location lookup method

During development, it may be necessary to track the states that the application transitions into. Enabling these logs is as simple as:

var App = Ember.Application.create({
  LOG_TRANSITIONS: true
});

As illustrated, we enable the LOG_TRANSITIONS flag when creating the application. If an application is not served from the root of the website domain, then it may be necessary to specify the path name used, as in the following example:

App.Router.reopen({
  rootURL: '/contacts/'
});

One other modification we may need to make concerns the technique Ember.js uses to subscribe to the browser's location changes. This makes it possible for the router to do its job of transitioning the app into the matched URL state.
Two of these methods are as follows:

Subscribing to the hashchange event
Using the history.pushState API

The default technique is provided by the HashLocation class, documented at http://emberjs.com/api/classes/Ember.HashLocation.html. This means that URL paths are usually prefixed with the hash symbol, for example, /#/contacts/1/edit. The other one is provided by the HistoryLocation class, located at http://emberjs.com/api/classes/Ember.HistoryLocation.html. This does not distinguish URLs from traditional ones and can be enabled as:

App.Router.reopen({
  location: 'history'
});

We can also opt to let Ember.js pick the method best suited to our app with the following code:

App.Router.reopen({
  location: 'auto'
});

If we don't need any of these techniques, especially when performing tests, we can disable location management altogether:

App.Router.reopen({
  location: 'none'
});

Specifying a route's path

We now know that when defining a route or resource, the resource name used also serves as the path the router uses to match request URLs. Sometimes, it may be necessary to specify a different path to use to match states. There are two common reasons to do this, the first of which is delegating route handling to another route. Although we have not yet covered route handlers, we already mentioned that our application transitions from the application index route into the contacts.index state. We may, however, specify that the contacts route handler should manage this path as:

this.resource('contacts', {path: '/'}, function() {
});

Therefore, to specify an alternative path for a resource, simply pass the desired path in a hash as the second argument during resource definition. This also applies when defining routes. The second reason is when a resource contains dynamic segments. For example, our contact resource handles contacts, who should obviously have different URLs linking back to them.
Ember.js uses the URL pattern matching techniques popularized by other open source projects such as Ruby on Rails, Sinatra, and Express.js. Therefore, our contact resource is defined as:

this.resource('contact', {path: '/:contact_id'}, function() {
});

In the preceding snippet, /:contact_id is the dynamic segment that will be replaced by the actual contact's ID. One thing to note is that nested resources prefix their paths with those of their parent resources; therefore, the contact resource's full path is /contacts/:contact_id. It is also worth noting that the name of the dynamic segment is not mandated, so we could have named it /:id.

Defining route and resource handlers

Now that we have defined all the possible states that our application can transition into, we need to define handlers for these states. From this point onwards, we will use the terms route handler and resource handler interchangeably. A route handler performs the following major functions:

Providing data (model) to be used by the current state
Specifying the view and/or template to use to render the provided data to the user
Redirecting the application into another state

Before we move on to discussing these roles, we need to know that a route handler is defined from the Ember.Route class as:

App.RouteHandlerNameRoute = Ember.Route.extend();

This class is used to define handlers for both resources and routes, so the naming should not be a concern. Just as routes and resources are associated with paths and handlers, they are also associated with controllers, views, and templates, using the Ember.js naming conventions.
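The dynamic segment matching described above can be sketched without any framework. The following compilePath helper is our own illustration of the Rails/Sinatra/Express-style technique, not Ember.js internals:

```javascript
// Sketch: compile a path pattern with dynamic segments (e.g. ':contact_id')
// into a matcher that extracts segment values from a URL.
// Illustrative only, not Ember.js API.
function compilePath(pattern) {
  var keys = [];
  // Replace each ':name' segment with a capture group, remembering the name.
  var source = pattern.replace(/:([A-Za-z_]+)/g, function (_, key) {
    keys.push(key);
    return '([^/]+)';
  });
  var regex = new RegExp('^' + source + '$');
  return function match(url) {
    var m = regex.exec(url);
    if (!m) return null;
    var params = {};
    keys.forEach(function (key, i) { params[key] = m[i + 1]; });
    return params;
  };
}

var matchContact = compilePath('/contacts/:contact_id');
matchContact('/contacts/42');  // → { contact_id: '42' }
matchContact('/about');        // → null
```

The extracted params hash is exactly the shape a route handler's model hook receives, as we will see shortly.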
For example, when the application initializes, it enters the application state, and therefore the following objects are sought:

The application route
The application controller
The application view
The application template

In the spirit of doing more with less boilerplate code, Ember.js autogenerates these objects unless they are explicitly defined in order to override the default implementations. As another example, if we examine our application, we notice that the contact.edit route has a corresponding App.ContactEditController controller and contact/edit template. We did not need to define its route handler or view. As this example shows, when referring to routes, we normally separate the resource name from the route name by a period, as in:

resourceName.routeName

In the case of templates, we may use a period or a forward slash:

resourceName/routeName

The other objects' names are camelized and suffixed by the class name:

ResourcenameRoutenameClassname

For example, the following table shows all the objects used. As mentioned earlier, some are autogenerated.

Route name       Controller                Route handler         View                  Template
application      ApplicationController     ApplicationRoute      ApplicationView       application
index            IndexController           IndexRoute            IndexView             index
about            AboutController           AboutRoute            AboutView             about
contacts         ContactsController        ContactsRoute         ContactsView          contacts
contacts.index   ContactsIndexController   ContactsIndexRoute    ContactsIndexView     contacts/index
contacts.new     ContactsNewController     ContactsNewRoute      ContactsNewView       contacts/new
contact          ContactController         ContactRoute          ContactView           contact
contact.index    ContactIndexController    ContactIndexRoute     ContactIndexView      contact/index
contact.edit     ContactEditController     ContactEditRoute      ContactEditView       contact/edit

One thing to note is that objects associated with the intermediary application state do not need to carry the suffix; hence just index or about.
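The naming convention in the table is mechanical enough to sketch in a few lines. The classify helper below is our own illustration of the camelization rule, not an Ember.js function:

```javascript
// Sketch: derive the conventional controller, route handler, view, and
// template names from a route name such as 'contact.edit'.
// classify() is illustrative, not an Ember.js API.
function classify(routeName) {
  var base = routeName.split('.').map(function (part) {
    // Capitalize each dot-separated part, then join: contact.edit → ContactEdit
    return part.charAt(0).toUpperCase() + part.slice(1);
  }).join('');
  return {
    controller: base + 'Controller',
    routeHandler: base + 'Route',
    view: base + 'View',
    template: routeName.replace('.', '/')
  };
}

classify('contact.edit');
// → { controller: 'ContactEditController', routeHandler: 'ContactEditRoute',
//     view: 'ContactEditView', template: 'contact/edit' }
```

Running classify over every route name in the table reproduces each row, which is exactly why Ember.js can autogenerate the objects we don't define ourselves.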
Specifying a route's model

We mentioned that route handlers provide controllers with the data (the model) to be displayed by templates. These handlers have a model hook that can be used to provide this data, in the following form:

AppNamespace.RouteHandlerName = Ember.Route.extend({
  model: function() {
  }
});

For instance, the contacts route handler in the sample loads any saved contacts from local storage as:

model: function() {
  return App.Contact.find();
}

We have abstracted this logic into our App.Contact model. Notice how we reopen the class in order to define this static method. A static method can only be called on the class itself and not on its instances:

App.Contact.reopenClass({
  find: function(id) {
    return (!!id)
      ? App.Contact.findOne(id)
      : App.Contact.findAll();
  },
  ...
});

If no argument is passed to the method, it calls the findAll method, which uses the local storage helper to retrieve the contacts:

findAll: function() {
  var contacts = store('contacts') || [];
  return contacts.map(function(contact) {
    return App.Contact.create(contact);
  });
}

Because we want to deal with contact objects, we iteratively convert the contents of the loaded contact list. If we examine the corresponding contacts template, we notice that we are able to populate the sidebar as shown in the following code:

<ul class="nav nav-pills nav-stacked">
  {{#each model}}
  <li>
    {{#link-to "contact.index" this}}{{name}}{{/link-to}}
  </li>
  {{/each}}
</ul>

Do not worry about the template syntax at this point if you're new to Ember.js. The important thing to note is that the model is accessed via the model variable. Of course, before that, we check whether the model has any content in it:

{{#if model.length}}
  ...
{{else}}
  <h1>Create contact</h1>
{{/if}}

As we shall see later, if the list is empty, the application is forced to transition into the contacts.new state, so that the user can add the first contact, as shown in the following screenshot:

The contact handler is a different case.
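As an aside, the find/findAll/findOne pattern above can be exercised outside Ember.js with a plain in-memory store. Every name in this sketch (the store object, createContact) is a stand-in for the sample's local storage helper and App.Contact.create, not real API:

```javascript
// Framework-free sketch of the find pattern: read raw records from a
// store and wrap each one in a model object. All names are stand-ins.
var store = {
  contacts: [
    { id: 1, name: 'Jon Doe' },
    { id: 2, name: 'Jane Doe' }
  ]
};

function createContact(attrs) {
  // Stand-in for App.Contact.create(attrs)
  return Object.assign({ type: 'contact' }, attrs);
}

function findAll() {
  var contacts = store.contacts || [];
  return contacts.map(createContact);   // convert raw records to model objects
}

function findOne(id) {
  var contacts = store.contacts || [];
  var record = contacts.find(function (c) { return c.id == id; });
  return record ? createContact(record) : undefined;
}

findAll().length;   // → 2
findOne(2).name;    // → 'Jane Doe'
```

The important point carried over from the sample is the conversion step: raw stored records are mapped into model objects before the route hands them to the controller.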
Remember, we mentioned that its path has a dynamic segment that is passed to the handler. This information is passed to the model hook in an options hash:

App.ContactRoute = Ember.Route.extend({
  model: function(params) {
    return App.Contact.find(params.contact_id);
  },
  ...
});

Notice that we are able to access the contact's ID via the contact_id attribute of the hash. This time, the find method calls the findOne static method of the contact class, which searches for the contact matching the provided ID, as shown in the following code:

findOne: function(id) {
  var contacts = store('contacts') || [];
  var contact = contacts.find(function(contact) {
    return contact.id == id;
  });
  if (!contact) return;
  return App.Contact.create(contact);
}

Serializing resources

We've mentioned that Ember.js supports linking back to content externally. Internally, Ember.js simplifies creating these links in templates. In our sample application, when the user selects a contact, the application transitions into the contact.index state, passing the contact's ID along. This is made possible through the use of the link-to handlebars expression:

{{#link-to "contact.index" this}}{{name}}{{/link-to}}

The important thing to note is that this expression enables us to construct a link that points to the said resource by passing the resource name and the affected model. The destination resource or route handler is responsible for yielding this path, which constitutes serialization. To serialize a resource, we need to override the matching serialize hook, as in the contact handler shown in the following code:

App.ContactRoute = Ember.Route.extend({
  ...
  serialize: function(model, params) {
    var data = {};
    data[params[0]] = Ember.get(model, 'id');
    return data;
  }
});

Serialization means that the hook is supposed to return the values of all the specified segments. It receives two arguments, the first of which is the affected resource (the model) and the second an array of all the segments specified during the resource definition.
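The serialize idea can be sketched in isolation. The serializeModel helper below is our own illustration of the hook's contract (model in, hash of segment values out); the rule that a segment ending in _id reads the model's id mirrors the sample's contact handler, and is an assumption of this sketch rather than Ember.js behavior:

```javascript
// Sketch: given a model and the dynamic segment names declared for a
// resource, produce the hash of segment values needed to build a URL.
// serializeModel is illustrative, not an Ember.js API.
function serializeModel(model, segmentNames) {
  var data = {};
  segmentNames.forEach(function (name) {
    var key = name.replace(/^:/, '');           // ':contact_id' → 'contact_id'
    // Assumption: '<anything>_id' segments read the model's id field;
    // other segments read the model field of the same name.
    data[key] = /_id$/.test(key) ? model.id : model[key];
  });
  return data;
}

serializeModel({ id: 1, name: 'Jon Doe' }, [':contact_id']);
// → { contact_id: 1 }

serializeModel({ name: 'jon+doe', publish_year: '1990' },
               [':name', ':publish_year']);
// → { name: 'jon+doe', publish_year: '1990' }
```

Both return values have the same shape as the hashes the serialize hook must produce, one key per dynamic segment.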
In our case, we only had one segment, and so we returned the required hash, which resembles the following:

{contact_id: 1}

If we, for example, defined a resource with multiple segments, like the following code:

this.resource('book', {path: '/name/:name/:publish_year'}, function() {
});

the serialize hook would need to return something close to:

{name: 'jon+doe', publish_year: '1990'}

Asynchronous routing

In real-world apps, we often need to load the model data in an asynchronous fashion. Various approaches can be used to deliver this kind of data, but the most robust way to load asynchronous data is through the use of promises. Promises are objects whose unknown value can be set at a later point in time. It is very easy to create promises in Ember.js. For example, if our contacts were located in a remote resource, we could use jQuery to load them as:

App.ContactsRoute = Ember.Route.extend({
  model: function(params) {
    return Ember.$.getJSON('/contacts');
  }
});

jQuery's HTTP utilities also return promises that Ember.js can consume. Incidentally, jQuery can also be referenced as Ember.$ in an Ember.js application. In the preceding snippet, once the data is loaded, Ember.js sets it as the model of the resource. However, one thing is missing: we require the loaded data to be converted to the defined contact model, as shown in the following small modification:

App.ContactsRoute = Ember.Route.extend({
  model: function(params) {
    var promise = Ember.Object.createWithMixins(Ember.DeferredMixin);
    Ember.$.getJSON('/contacts').then(resolve, reject);
    function resolve(contacts) {
      contacts = contacts.map(function(contact) {
        return App.Contact.create(contact);
      });
      promise.resolve(contacts);
    }
    function reject(res) {
      var err = new Error(res.responseText);
      promise.reject(err);
    }
    return promise;
  }
});

We first create the promise, kick off the XHR request, and then return the promise while the request is still being processed. Ember.js resumes routing once this promise is rejected or resolved.
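The same load-then-convert flow can be shown with standard JavaScript promises, with no Ember.js or jQuery involved. Here, fetchContacts is a hypothetical stand-in for Ember.$.getJSON('/contacts'):

```javascript
// Framework-free sketch of a promise-based model hook: load raw data
// asynchronously, convert each record to a model object, and resolve.
function fetchContacts() {
  // Stand-in for Ember.$.getJSON('/contacts'); resolves with raw records.
  return Promise.resolve([{ id: 1, name: 'Jon Doe' }]);
}

function modelHook() {
  return fetchContacts().then(function (contacts) {
    // Convert raw records before the router sets them as the model.
    return contacts.map(function (contact) {
      // Stand-in for App.Contact.create(contact)
      return Object.assign({ type: 'contact' }, contact);
    });
  });
}

modelHook().then(function (contacts) {
  // contacts is now a list of converted model objects.
});
```

Because then returns a new promise for the mapped value, the conversion step chains naturally; any load failure propagates as a rejection, which is what makes the transition fail with an error.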
The XHR call also creates a promise, so we need to attach to it the then method, which essentially says: invoke the passed resolve or reject function on successful or failed load respectively. The resolve function converts the loaded data and resolves the promise, passing the data along, thereby resuming routing. If the promise is rejected, the transition fails with an error. We will see how to handle this error in a moment. Note that there are two other flavors we can use to create promises in Ember.js, as shown in the following examples:

var promise = Ember.Deferred.create();
Ember.$.getJSON('/contacts').then(success, fail);
function success(contacts) {
  contacts = contacts.map(function(contact) {
    return App.Contact.create(contact);
  });
  promise.resolve(contacts);
}
function fail(res) {
  var err = new Error(res.responseText);
  promise.reject(err);
}
return promise;

The second example is as follows:

return new Ember.RSVP.Promise(function(resolve, reject) {
  Ember.$.getJSON('/contacts').then(success, fail);
  function success(contacts) {
    contacts = contacts.map(function(contact) {
      return App.Contact.create(contact);
    });
    resolve(contacts);
  }
  function fail(res) {
    var err = new Error(res.responseText);
    reject(err);
  }
});

Summary

This article detailed how browser location-based state management is accomplished in Ember.js apps. We covered how to create a router, define resources and routes, define a route's model, and perform redirects.

Resources for Article:

Further resources on this subject:
AngularJS Project [Article]
Automating performance analysis with YSlow and PhantomJS [Article]
AngularJS [Article]