
Reactive Data Streams

Packt
03 Jun 2015
11 min read
In this article by Shiti Saxena, author of the book Mastering Play Framework for Scala, we will discuss the Iteratee approach to handling data streams. This article covers the basics of handling data streams with a brief explanation of the following topics:

- Iteratees
- Enumerators
- Enumeratees

Iteratee

An Iteratee is defined as a trait, Iteratee[E, +A], where E is the input type and A is the result type. The state of an Iteratee is represented by an instance of Step, which is defined as follows:

    sealed trait Step[E, +A] {
      def it: Iteratee[E, A] = this match {
        case Step.Done(a, e) => Done(a, e)
        case Step.Cont(k) => Cont(k)
        case Step.Error(msg, e) => Error(msg, e)
      }
    }

    object Step {
      // done state of an iteratee
      case class Done[+A, E](a: A, remaining: Input[E]) extends Step[E, A]

      // continuing state of an iteratee
      case class Cont[E, +A](k: Input[E] => Iteratee[E, A]) extends Step[E, A]

      // error state of an iteratee
      case class Error[E](msg: String, input: Input[E]) extends Step[E, Nothing]
    }

The input used here represents an element of the data stream, which can be empty, an element, or an end-of-file indicator. Therefore, Input is defined as follows:

    sealed trait Input[+E] {
      def map[U](f: (E => U)): Input[U] = this match {
        case Input.El(e) => Input.El(f(e))
        case Input.Empty => Input.Empty
        case Input.EOF => Input.EOF
      }
    }

    object Input {
      // An input element
      case class El[+E](e: E) extends Input[E]

      // An empty input
      case object Empty extends Input[Nothing]

      // An end of file input
      case object EOF extends Input[Nothing]
    }

An Iteratee is an immutable data type, and each result of processing an input is a new Iteratee with a new state. To handle the possible states of an Iteratee, there is a predefined helper object for each state: Cont, Done, and Error.

Let's see the definition of the readLine method, which utilizes these objects:

    def readLine(line: List[Array[Byte]] = Nil): Iteratee[Array[Byte], String] = Cont {
      case Input.El(data) => {
        val s = data.takeWhile(_ != '\n')
        if (s.length == data.length) {
          readLine(s :: line)
        } else {
          Done(new String(Array.concat((s :: line).reverse: _*), "UTF-8").trim(),
            elOrEmpty(data.drop(s.length + 1)))
        }
      }
      case Input.EOF => {
        Error("EOF found while reading line", Input.Empty)
      }
      case Input.Empty => readLine(line)
    }

The readLine method is responsible for reading a line and returning an Iteratee. As long as there are more bytes to be read, the readLine method is called recursively. Once a complete line has been read, an Iteratee in the Done state is returned; otherwise, an Iteratee in the Cont state is returned. If the method encounters EOF, an Iteratee in the Error state is returned.

In addition to these, Play Framework exposes a companion Iteratee object, which has helper methods to deal with Iteratees. The API exposed through the Iteratee object is documented at https://www.playframework.com/documentation/2.3.x/api/scala/index.html#play.api.libs.iteratee.Iteratee$.
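As a quick illustration of those helpers, here is a minimal sketch of mine (not from the book); it assumes Play 2.3's play.api.libs.iteratee package and Play's default execution context import, and uses Iteratee.fold to build an Iteratee that counts the bytes it consumes:

    import play.api.libs.iteratee._
    import play.api.libs.concurrent.Execution.Implicits.defaultContext

    // Folds every chunk of the stream into a running total of bytes seen
    val byteCounter: Iteratee[Array[Byte], Int] =
      Iteratee.fold[Array[Byte], Int](0)((total, chunk) => total + chunk.length)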
The Iteratee object is also used internally within the framework to provide some key features. For example, consider the request body parsers. The apply method of the BodyParser object is defined as follows:

    def apply[T](debugName: String)(f: RequestHeader => Iteratee[Array[Byte], Either[Result, T]]): BodyParser[T] =
      new BodyParser[T] {
        def apply(rh: RequestHeader) = f(rh)
        override def toString = "BodyParser(" + debugName + ")"
      }

So, to define BodyParser[T], we need to define a method that accepts a RequestHeader and returns an Iteratee whose input is an Array[Byte] and whose result is Either[Result, T]. Let's look at some of the existing implementations to understand how this works. The RawBuffer parser is defined as follows:

    def raw(memoryThreshold: Int): BodyParser[RawBuffer] =
      BodyParser("raw, memoryThreshold=" + memoryThreshold) { request =>
        import play.core.Execution.Implicits.internalContext
        val buffer = RawBuffer(memoryThreshold)
        Iteratee.foreach[Array[Byte]](bytes => buffer.push(bytes)).map { _ =>
          buffer.close()
          Right(buffer)
        }
      }

The RawBuffer parser uses the Iteratee.foreach method and pushes the input received into a buffer. The file parser is defined as follows:

    def file(to: File): BodyParser[File] =
      BodyParser("file, to=" + to) { request =>
        import play.core.Execution.Implicits.internalContext
        Iteratee.fold[Array[Byte], FileOutputStream](new FileOutputStream(to)) { (os, data) =>
          os.write(data)
          os
        }.map { os =>
          os.close()
          Right(to)
        }
      }

The file parser uses the Iteratee.fold method to write the incoming data to a FileOutputStream. Now, let's see the implementation of Enumerator and how these two pieces fit together.

Enumerator

Similar to the Iteratee, an Enumerator is also defined through a trait and backed by an object of the same name:

    trait Enumerator[E] { parent =>
      def apply[A](i: Iteratee[E, A]): Future[Iteratee[E, A]]
      ...
    }

    object Enumerator {
      def apply[E](in: E*): Enumerator[E] = in.length match {
        case 0 => Enumerator.empty
        case 1 => new Enumerator[E] {
          def apply[A](i: Iteratee[E, A]): Future[Iteratee[E, A]] =
            i.pureFoldNoEC {
              case Step.Cont(k) => k(Input.El(in.head))
              case _ => i
            }
        }
        case _ => new Enumerator[E] {
          def apply[A](i: Iteratee[E, A]): Future[Iteratee[E, A]] =
            enumerateSeq(in, i)
        }
      }
      ...
    }

Observe that the apply methods of the trait and its companion object are different. The apply method of the trait accepts Iteratee[E, A] and returns Future[Iteratee[E, A]], while that of the companion object accepts a sequence of type E and returns an Enumerator[E].

Now, let's define a simple data flow using the companion object's apply method; first, let's get the character count in a given line, split into a Seq[String] of words:

    val line: String = "What we need is not the will to believe, but the wish to find out."
    val words: Seq[String] = line.split(" ")
    val src: Enumerator[String] = Enumerator(words: _*)
    val sink: Iteratee[String, Int] = Iteratee.fold[String, Int](0)((x, y) => x + y.length)
    val flow: Future[Iteratee[String, Int]] = src(sink)
    val result: Future[Int] = flow.flatMap(_.run)

The result variable has the Future[Int] type. We can now process this to get the actual count. In the preceding code snippet, we got the result by following these steps:

1. Building an Enumerator using the companion object's apply method:

       val src: Enumerator[String] = Enumerator(words: _*)

2. Getting a Future[Iteratee[String, Int]] by binding the Enumerator to an Iteratee:

       val flow: Future[Iteratee[String, Int]] = src(sink)

3. Flattening the Future[Iteratee[String, Int]] and processing it:

       val result: Future[Int] = flow.flatMap(_.run)

4. Fetching the result from the Future[Int] (a sketch of this step follows this list).
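The original snippet for step 4 is not shown above; the following is a hedged sketch of mine using the standard Scala Future API (the five-second timeout is an arbitrary choice):

    import scala.concurrent.Await
    import scala.concurrent.duration._

    // Block until the count is available (fine in a test or demo, not in production code)
    val count: Int = Await.result(result, 5.seconds)
    println(s"character count: $count")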
Thankfully, Play provides a shortcut method that merges steps 2 and 3 so that we don't have to repeat the same process every time. The method is represented by the |>>> symbol. Using the shortcut method, our code is reduced to this:

    val src: Enumerator[String] = Enumerator(words: _*)
    val sink: Iteratee[String, Int] = Iteratee.fold[String, Int](0)((x, y) => x + y.length)
    val result: Future[Int] = src |>>> sink

Why use this approach when we could simply call the data type's own methods? In this case, couldn't we use String's length method to get the same value (ignoring whitespace)? In this example, we get the data as a single String, but this will not be the only scenario. We need ways to process continuous data, such as a file upload or feed data from various networking sites, and so on. For example, suppose our application receives heartbeats at a fixed interval from all the devices (such as cameras, thermometers, and so on) connected to it. We can simulate a data stream using the Enumerator.generateM method:

    val dataStream: Enumerator[String] = Enumerator.generateM {
      Promise.timeout(Some("alive"), 100 millis)
    }

In the preceding snippet, the "alive" String is produced every 100 milliseconds. The function passed to the generateM method is called whenever the Iteratee bound to the Enumerator is in the Cont state. This method is used internally to build enumerators and can come in handy when we want to analyze the processing for an expected data stream.

An Enumerator can be created from a file, InputStream, or OutputStream. Enumerators can be concatenated or interleaved. The Enumerator API is documented at https://www.playframework.com/documentation/2.3.x/api/scala/index.html#play.api.libs.iteratee.Enumerator$.

Using the Concurrent object

The Concurrent object is a helper that provides utilities for using Iteratees, Enumerators, and Enumeratees concurrently. Two of its important methods are:

- unicast: This is useful when sending data to a single Iteratee.
- broadcast: This facilitates sending the same data to multiple Iteratees concurrently.

Unicast

For example, the character count example in the previous section can be implemented as follows:

    val unicastSrc = Concurrent.unicast[String](channel => channel.push(line))
    val unicastResult: Future[Int] = unicastSrc |>>> sink

The unicast method accepts onStart, onError, and onComplete handlers. In the preceding code snippet, we have provided the onStart handler, which is mandatory. The signature of unicast is this:

    def unicast[E](onStart: (Channel[E]) => Unit,
      onComplete: => Unit = (),
      onError: (String, Input[E]) => Unit = (_: String, _: Input[E]) => ()
    )(implicit ec: ExecutionContext): Enumerator[E] = {...}

So, to add a log for errors, we can define the onError handler as follows:

    val unicastSrc2 = Concurrent.unicast[String](
      channel => channel.push(line),
      onError = { (msg, str) => Logger.error(s"encountered $msg for $str") }
    )
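The onStart handler receives the Channel, so it can also push a series of items and then close the stream. The following is a hedged sketch of mine (it assumes the words and sink values from the earlier example, and that Play 2.3's Concurrent.Channel exposes push and eofAndEnd as documented):

    val wordSrc: Enumerator[String] = Concurrent.unicast[String](onStart = { channel =>
      words.foreach(w => channel.push(w)) // push each word into the stream
      channel.eofAndEnd()                 // signal EOF and close the channel
    })
    val wordCount: Future[Int] = wordSrc |>>> sink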
Now, let's see how broadcast works.

Broadcast

The broadcast[E] method creates an enumerator and a channel and returns an (Enumerator[E], Channel[E]) tuple. The enumerator and channel thus obtained can be used to broadcast data to multiple Iteratees:

    val (broadcastSrc: Enumerator[String], channel: Concurrent.Channel[String]) =
      Concurrent.broadcast[String]

    private val vowels: Seq[Char] = Seq('a', 'e', 'i', 'o', 'u')

    def getVowels(str: String): String = {
      val result = str.filter(c => vowels.contains(c))
      result
    }

    def getConsonants(str: String): String = {
      val result = str.filterNot(c => vowels.contains(c))
      result
    }

    val vowelCount: Iteratee[String, Int] =
      Iteratee.fold[String, Int](0)((x, y) => x + getVowels(y).length)
    val consonantCount: Iteratee[String, Int] =
      Iteratee.fold[String, Int](0)((x, y) => x + getConsonants(y).length)

    val vowelInfo: Future[Int] = broadcastSrc |>>> vowelCount
    val consonantInfo: Future[Int] = broadcastSrc |>>> consonantCount

    words.foreach(w => channel.push(w))
    channel.end()

    vowelInfo onSuccess { case count => println(s"vowels:$count") }
    consonantInfo onSuccess { case count => println(s"consonants:$count") }

Enumeratee

An Enumeratee is also defined using a trait and a companion object with the same Enumeratee name. It is defined as follows:

    trait Enumeratee[From, To] {
      ...
      def applyOn[A](inner: Iteratee[To, A]): Iteratee[From, Iteratee[To, A]]

      def apply[A](inner: Iteratee[To, A]): Iteratee[From, Iteratee[To, A]] =
        applyOn[A](inner)
      ...
    }

An Enumeratee transforms the Iteratee given to it as input and returns a new Iteratee. Let's look at a method that defines an Enumeratee by implementing the applyOn method. An Enumeratee's flatten method accepts a Future[Enumeratee] and returns another Enumeratee, which is defined as follows:

    def flatten[From, To](futureOfEnumeratee: Future[Enumeratee[From, To]]) =
      new Enumeratee[From, To] {
        def applyOn[A](it: Iteratee[To, A]): Iteratee[From, Iteratee[To, A]] =
          Iteratee.flatten(futureOfEnumeratee.map(_.applyOn[A](it))(dec))
      }

In the preceding snippet, applyOn is called on the Enumeratee whose future is passed, and dec is defaultExecutionContext.

Defining an Enumeratee using the companion object is a lot simpler. The companion object has a lot of methods to deal with Enumeratees, such as map, transform, collect, take, filter, and so on. The API is documented at https://www.playframework.com/documentation/2.3.x/api/scala/index.html#play.api.libs.iteratee.Enumeratee$.

Let's define an Enumeratee by working through a problem. The example we used in the previous section to find the count of vowels and consonants will not work correctly if a vowel is capitalized in a sentence, that is, the result of src |>>> vowelCount will be incorrect when the line variable is defined as follows:

    val line: String = "What we need is not the will to believe, but the wish to find out.".toUpperCase

To fix this, let's alter the case of all the characters in the data stream to lowercase. We can use an Enumeratee to update the input provided to the Iteratee. Now, let's define an Enumeratee to return a given string in lowercase:

    val toSmallCase: Enumeratee[String, String] = Enumeratee.map[String] {
      s => s.toLowerCase
    }

There are two ways to add an Enumeratee to the data flow. It can be bound to either an Enumerator or an Iteratee (a sketch of both bindings follows the resource list below).

Summary

In this article, we discussed the concept of Iteratees, Enumerators, and Enumeratees. We also saw how they are implemented in Play Framework and used internally.

Resources for Article:

Further resources on this subject:

- Play Framework: Data Validation Using Controllers [Article]
- Play Framework: Introduction to Writing Modules [Article]
- Integrating with other Frameworks [Article]
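As promised above, here is a hedged sketch of mine showing the two bindings of toSmallCase, reusing the src and vowelCount values from the earlier example; it assumes the symbolic aliases &> (Enumerator.through) and &>> (Enumeratee.transform) from Play 2.3's iteratee library:

    // Bind the Enumeratee to the Enumerator: elements are lowercased as they are produced
    val lowerCasedSrc: Enumerator[String] = src &> toSmallCase
    val countViaEnumerator: Future[Int] = lowerCasedSrc |>>> vowelCount

    // Bind the Enumeratee to the Iteratee: elements are lowercased as they are consumed
    val lowerCasingSink: Iteratee[String, Int] = toSmallCase &>> vowelCount
    val countViaIteratee: Future[Int] = src |>>> lowerCasingSink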


Microsoft Azure – Developing Web API for Mobile Apps

Packt
03 Jun 2015
9 min read
Azure Websites is an excellent platform to deploy and manage a Web API. However, Microsoft Azure provides another alternative in the form of Azure Mobile Services, which targets mobile application developers. In this article by Nikhil Sachdeva, coauthor of the book Building Web Services with Microsoft Azure, we delve into the capabilities of Azure Mobile Services and how it provides a quick and easy development ecosystem for developing Web APIs that support mobile apps.

Creating a Web API using Mobile Services

In this section, we will create a Mobile Services-enabled Web API using Visual Studio 2013. For our fictitious scenario, we will create an Uber-like service, but for medical emergencies. In the case of a medical emergency, users will have the option to send a request using their mobile device. Additionally, third-party applications and services can integrate with the Web API to display doctor availability. All requests sent to the Web API will follow this process flow:

1. The request will be persisted to a data store.
2. An algorithm will find a doctor that matches the incoming request based on availability and proximity.
3. Push Notifications will be sent to update the physician and patient.

Creating the project

Mobile Services provides two options to create a project:

- Using the Management portal, we can create a new Mobile Service and download a preassembled package that contains the Web API as well as the targeted mobile platform project
- Using Visual Studio templates

The Management portal approach is easier to implement and does give a jumpstart by creating and configuring the project. However, for the scope of this article, we will use the Visual Studio template approach. For more information on creating a Mobile Services Web API using the Azure Management Portal, please refer to http://azure.microsoft.com/en-us/documentation/articles/mobile-services-dotnet-backend-windows-store-dotnet-get-started/.

Azure Mobile Services provides a Visual Studio 2013 template to create a .NET Web API; we will use this template for our scenario. Note that the Azure Mobile Services template is only available from Visual Studio 2013 Update 2 onward.

Creating a Mobile Service in Visual Studio 2013 requires the following steps:

1. Create a new Azure Mobile Service project and assign it a Name, Location, and Solution. Click OK.
2. In the next tab, we have a familiar ASP.NET project type dialog. However, we notice a few differences from the traditional ASP.NET dialog, which are as follows:
   - The Web API option is enabled by default and is the only choice available
   - The Authentication tab is disabled by default
   - The Test project option is disabled
   - The Host in the cloud option automatically suggests Mobile Services and is currently the only choice
3. Select the default settings and click on OK. Visual Studio 2013 prompts developers to enter their Azure credentials in case they are not already logged in. For more information on Azure tools for Visual Studio, please visit https://msdn.microsoft.com/en-us/library/azure/ee405484.aspx.
4. Since we are building a new Mobile Service, the next screen gathers information about how to configure the service. We can specify existing Azure resources in our subscription or create new ones from within Visual Studio. Select the appropriate options and click on Create.

The options are described here:
- Subscription: This lists the name of the Azure subscription where the service will be deployed. Select from the dropdown if multiple subscriptions are available.
- Name: This is the name of the Mobile Services deployment; it will eventually become the root DNS URL for the mobile service unless a custom domain is specified (for example, contoso.azure-mobile.net).
- Runtime: This allows selection of the runtime. Note that, as of writing this book, only the .NET framework was supported in Visual Studio, so this option is currently prepopulated and disabled.
- Region: Select the Azure data center where the Web API will be deployed. As of writing this book, Mobile Services is available in the following regions: West US, East US, North Europe, East Asia, and West Japan. For details on the latest regional availability, please refer to http://azure.microsoft.com/en-us/regions/#services.
- Database: By default, a SQL Azure database gets associated with every Mobile Services deployment. It comes in handy if SQL is being used as the data store. However, in scenarios where different data stores such as table storage or MongoDB may be used, we still create this SQL database. We can select a free 20 MB SQL database or an existing paid standard SQL database. For more information about SQL tiers, please visit http://azure.microsoft.com/en-us/pricing/details/sql-database.
- Server user name: Provide the server user name for the Azure SQL database.
- Server password: Provide a password for the Azure SQL database.

This process creates the required entities in the configured Azure subscription. Once completed, we have a new Web API project in the Visual Studio solution.

When we create a Mobile Service Web API project, the following NuGet packages are referenced in addition to the default ASP.NET Web API NuGet packages:

- WindowsAzure MobileServices Backend: This package enables developers to build a scalable and secure .NET mobile backend hosted in Microsoft Azure. We can also incorporate structured storage, user authentication, and push notifications. Assembly: Microsoft.WindowsAzure.Mobile.Service
- Microsoft Azure Mobile Services .NET Backend Tables: This package contains the common infrastructure needed when exposing structured storage as part of the .NET mobile backend hosted in Microsoft Azure. Assembly: Microsoft.WindowsAzure.Mobile.Service.Tables
- Microsoft Azure Mobile Services .NET Backend Entity Framework Extension: This package contains all types necessary to surface structured storage (using Entity Framework) as part of the .NET mobile backend hosted in Microsoft Azure. Assembly: Microsoft.WindowsAzure.Mobile.Service.Entity

Additionally, the following third-party packages are installed:

- EntityFramework: Since Mobile Services provides a default SQL database, it leverages Entity Framework to provide an abstraction for the data entities.
- AutoMapper: AutoMapper is a convention-based object-to-object mapper. It is used to map legacy custom entities to DTO objects in Mobile Services.
- OWIN Server and related assemblies: Mobile Services uses OWIN as the default hosting mechanism. The current template also adds Microsoft OWIN Katana packages to run the solution in IIS, and OWIN security packages for Google, Azure AD, Twitter, and Facebook.
- Autofac: This is a favorite Inversion of Control (IoC) framework.
- Azure Service Bus: Microsoft Azure Service Bus provides the Notification Hub functionality.

We now have our Mobile Services Web API project created.
The default project added by Visual Studio is not an empty project but a sample implementation of a Mobile Service-enabled Web API. In fact, a controller and an Entity Data Model are already defined in the project. If we hit F5 now, we can see a running sample in the local dev environment.

Note that Mobile Services modifies the WebApiConfig file under the App_Start folder to accommodate some initialization and configuration changes:

    ConfigOptions options = new ConfigOptions();

    HttpConfiguration config = ServiceConfig.Initialize(new ConfigBuilder(options));

In the preceding code, the ServiceConfig.Initialize method defined in the Microsoft.WindowsAzure.Mobile.Service assembly is called to load the hosting provider for our mobile service. It loads all assemblies from the current application domain and searches for types with HostConfigProviderAttribute. If it finds one, the custom host provider is loaded; otherwise, the default host provider is used.

Let's extend the project to develop our scenario.

Defining the data model

We now create the required entities and data model. Note that while the entities have been kept simple for this article, in a real-world application it is recommended to define a data architecture before creating any data entities. For our scenario, we create two entities that inherit from EntityData. These are described here.

Record

Record is an entity that represents data for the medical emergency. We use the Record entity when invoking CRUD operations using our controller. We also use this entity to update doctor allocation and the status of the request, as shown:

    namespace Contoso.Hospital.Entities
    {
        /// <summary>
        /// Emergency Record for the hospital
        /// </summary>
        public class Record : EntityData
        {
            public string PatientId { get; set; }
            public string InsuranceId { get; set; }
            public string DoctorId { get; set; }
            public string Emergency { get; set; }
            public string Description { get; set; }
            public string Location { get; set; }
            public string Status { get; set; }
        }
    }

Doctor

The Doctor entity represents the doctors that are registered practitioners in the area; the service will search for the availability of a doctor based on the properties of this entity. We will also assign the primary DoctorId to the Record type when a doctor is assigned to an emergency. The schema for the Doctor entity is as follows:

    namespace Contoso.Hospital.Entities
    {
        public class Doctor : EntityData
        {
            public string Speciality { get; set; }
            public string Location { get; set; }
            public bool Availability { get; set; }
        }
    }

Summary

In this article, we looked at a solution for developing a Web API that targets mobile developers.

Resources for Article:

Further resources on this subject:

- Security in Microsoft Azure [article]
- Azure Storage [article]
- High Availability, Protection, and Recovery using Microsoft Azure [article]


Getting Started with LiveCode Mobile

Packt
03 Jun 2015
34 min read
In this article, written by Joel Gerdeen, author of the book LiveCode Mobile Development: Beginner's Guide - Second Edition, we will learn about the following topics:

- Sign up for Google Play
- Sign up for Amazon Appstore
- Download and install the Android SDK
- Configure LiveCode so that it knows where to look for the Android SDK
- Become an iOS developer with Apple
- Download and install Xcode
- Configure LiveCode so that it knows where to look for iOS SDKs
- Set up simulators and physical devices
- Test a stack in a simulator and physical device

Disclaimer

This article references many Internet pages that are not under our control. Where we show screenshots or URLs, remember that the content may have changed since we wrote this. The suppliers may also have changed some of the details, but in general, our description of the procedures should still work the way we have described them. Here we go...

iOS, Android, or both?

It could be that you only have an interest in iOS or Android. You should be able to easily skip to the sections you're interested in, unless you're intrigued about how the other half works! If, like me, you're a capitalist, then you should be interested in both operating systems. Far fewer steps are needed to get the Android SDK than the iOS developer tools, because for iOS, we have to sign up as a developer with Apple. However, the configuration for Android is more involved. We'll go through all the steps for Android and then the ones for iOS. If you're an iOS-only kind of person, skip the next few pages and start up again at the Becoming an iOS developer section.

Becoming an Android developer

It is possible to develop Android OS apps without signing up for anything. We'll try to be optimistic and assume that within the next 12 months, you will find time to make an awesome app that will make you rich! To that end, we'll go over everything that is involved in the process of signing up to publish your apps in both Google Play (formerly known as Android Market) and Amazon Appstore.

Google Play

The starting location for opening Google Play is http://developer.android.com/. We will come back to this page again shortly to download the Android SDK, but for now, click on the Distribute link in the menu bar and then on the Developer Console button on the following screen. Since Google changes these pages occasionally, you can use the URL https://play.google.com/apps/publish/ or search for "Google Play Developer Console". The screens you will progress through are not shown here, since they tend to change with time. There will be a sign-in page; sign in using your usual Google details.

Which e-mail address to use? Some Google services are easier to sign up for if you have a Gmail account. Creating a Google+ account, or signing up for some of their cloud services, requires a Gmail address (or so it seemed to me at the time!). If you have previously set up Google Wallet as part of your account, some of the steps in signing up become simpler. So, use your Gmail address, and if you don't have one, create one!

Google charges you a $25 fee to sign up for Google Play. At least now, you know about this! Enter the developer name, e-mail address, website URL (if you have one), and your phone number. The payment of $25 will be done through Google Wallet, which will save you from entering the billing details yet again. Now, you're all signed up and ready to make your fortune!
Amazon Appstore

Although the rules and costs for Google Play are fairly relaxed, Amazon has a more Apple-like approach, both in the amount they charge you to register and in the review process for accepting app submissions. The URL for the Amazon Appstore developer portal is http://developer.amazon.com/public. Follow these steps to get started with the Amazon Appstore:

1. When you select Get Started, you need to sign in to your Amazon account.

   Which e-mail address to use? This feels like déjà vu! There is no real advantage to using your Google e-mail address when signing up for the Amazon Appstore Developer Program, but if you happen to have an account with Amazon, sign in with that one. It will simplify the payment stage, and your developer account and general Amazon account will be associated with each other.

2. You are then asked to agree to the Appstore Distribution Agreement terms before learning about the costs. These costs are $99 per year, but the first year is free. So that's good!
3. Unlike the Google Android Market, Amazon asks for your bank details up front, ready to send you lots of money later, we hope!

That's it; you're ready to make another fortune to go along with the one that Google sent you!

Pop quiz – when is something too much?

You're at the end of developing your mega app, it's 49.5 MB in size, and you just need to add title screen music. Why would you not add the two-minute epic tune you have lined up?

1. It would take too long to load.
2. People tend to skip the title screen soon anyway.
3. The file size is going to be over 50 MB.
4. Heavy metal might not be appropriate for a children's storybook app!

Answer: 3

The other answers are valid too, though you could play the music as an external sound to reduce loading time. But if your file size goes over 50 MB, you would cut out potential sales from people who are connected by cellular and not wireless networks. At the time of writing this article, all the stores require that you be connected to the site via a wireless network if you intend to download apps that are over 50 MB.

Downloading the Android SDK

Head back to http://developer.android.com/ and click on the Get the SDK link, or go straight to http://developer.android.com/sdk/index.html. This link defaults to the OS that you are running on. Click on the Other Download Options link to see the full set of options for other systems. In this article, we're only going to cover Windows and Mac OS X (Intel), and only as much as is needed to make LiveCode work with the Android and iOS SDKs. If you intend to make native Java-based applications, you may be interested in reading through all the steps described on the web page http://developer.android.com/sdk/installing.html. Click on the SDK download link for your platform. Note that you don't need the ADT Bundle unless you plan to develop outside the LiveCode IDE. The steps you'll have to go through are different for Mac and Windows. Let's start with Mac.

Installing the Android SDK on Mac OS X (Intel)

LiveCode itself doesn't require an Intel Mac; you can develop stacks using a PowerPC-based Mac, but both the Android SDK and some of the iOS tools require an Intel-based Mac, which sadly means that if you're reading this as you sit next to your Mac G4 or G5, you're not going to get too far! The Android SDK requires the Java Runtime Environment (JRE). Since Apple stopped including the JRE in more recent OS X systems, you should check whether you have it in your system by typing java -version in a Terminal window.
The Terminal will display the version of Java installed. If not, you may get a message prompting you to install Java; click on the More Info button and follow the instructions to install the JRE and verify its installation. At the time of writing this article, JRE 8 doesn't work with OS X 10.10, and I had to use JRE 6, obtained from http://support.apple.com/kb/DL1572.

The file that you downloaded will automatically expand to show a folder named android-sdk-macosx. It may be in your Downloads folder right now, but a more natural place for it would be in your Documents folder, so move it there before performing the next steps. There is an SDK readme text file that lists the steps you need to follow during the installation. If these steps differ from what we have here, follow the steps in the readme file, in case they have been updated since this procedure was written.

1. Open the Terminal application, which is in Applications/Utilities.
2. You need to change the directory to the android-sdk-macosx folder. One handy trick with Terminal is that you can drag items into the Terminal window to get the file path to that item. Using this trick, you can type cd and a space in the Terminal window and then drag in the android-sdk-macosx folder after the space character. You'll end up with this line if your username is Fred:

       new-host-3:~ fred$ cd /Users/fred/Documents/android-sdk-macosx

   Of course, the first part of the line and the user folder will match yours, not Fred's! Whatever your name is, press the Return or Enter key after entering the preceding line. The location line now changes to look like this:

       new-host-3:android-sdk-macosx fred$

3. Either carefully type or copy and paste the following line from the readme file, then press Return or Enter again:

       tools/android update sdk --no-ui

How long the download takes depends on your Internet connection. Even with a very fast Internet connection, it could still take over an hour. If you care to follow the update progress, you can run the android file in the tools directory. This will open the Android SDK Manager, which is similar to the Windows version shown a couple of pages further on in this article.

Installing the Android SDK on Windows

The downloads page recommends that you use the .exe download link, as it gives extra services to you, such as checking whether you have the Java Development Kit (JDK) installed. When you click on the link, either use the Run or Save options, as you would with any download of a Windows installer. Here, we've opted to use Run; if you use Save, then you need to open the file after it has been saved to your hard drive. In the following case, as the JDK wasn't installed, a dialog box appears directing you to Oracle's site to get the JDK. If you see this screen too, you can leave the dialog box open and click on the Visit java.oracle.com button. On the Oracle page, click on a checkbox to agree to their terms and then on the download link that corresponds to your platform. Choose the 64-bit option if you are running a 64-bit version of Windows, or the x86 option if you are running a 32-bit version of Windows. Either way, you're greeted with another installer that you can Run or Save as you prefer. Naturally, it takes a while for the installer to do its thing too! When the installation is complete, you will see a JDK registration page, and it's up to you whether to register or not.
Back in the Android SDK installer dialog box, you can click on the Back button and then the Next button to get back to the JDK checking stage; only now, it sees that you have the JDK installed. Complete the remaining steps of the SDK installer as you would with any Windows installer. One important thing to note is that the last screen of the installer offers to open the SDK Manager. You should do that, so resist the temptation to uncheck that box! Click on Finish, and you'll be greeted with a command-line window for a few moments; then, the Android SDK Manager will appear and do its thing. As with the Mac version, it takes a very long time for all these add-ons to download.

Pointing LiveCode to the Android SDK

After all the installation and command-line work, it's a refreshing change to get back to LiveCode! Open the LiveCode Preferences and choose Mobile Support. We will set the two iOS entries after we get iOS going (but these options will be grayed out on Windows). For now, click on the ... button next to the Android development SDK root field and navigate to where the SDK is installed. If you've followed the earlier steps correctly, the SDK will be in the Documents folder on Mac, or you can navigate to C:\Program Files (x86)\Android to find it on Windows (or somewhere else, if you chose to use a custom location). Depending on the APIs that were loaded in the SDK Manager, you may get a message that the path does not include support for Android 2.2 (API 8). If so, use the Android SDK Manager to install it. LiveCode seems to want API 8, even though at this time Android 5.0 uses API 21. Phew! Now, let's do the same for iOS...

Pop quiz – tasty code names

An Android OS uses some curious code names for each version. At the time of writing this article, we were on Android OS 5, which had a code name of Lollipop. Version 4.1 was Jelly Bean and version 4.4 was KitKat. Which of these is most likely to be the code name for the next Android OS?

1. Lemon Cheesecake
2. Munchies
3. Noodle
4. Marshmallow

Answer: 4

The pattern, if it isn't obvious, is that the code name takes on the next letter of the alphabet and is a kind of food, but more specifically, it's a dessert. "Munchies" almost works for Android OS 6, but "Marshmallow" or "Meringue Pie" would be a better choice!

Becoming an iOS developer

Creating iOS LiveCode applications requires that LiveCode has access to the iOS SDK. This is installed as part of the Xcode developer tools, which are Mac-only. Also, when you upload an app to the iOS App Store, the application used is Mac-only and is part of the Xcode installation. If you are a Windows-based developer and wish to develop and publish for iOS, you need either an actual Mac-based system or a virtual machine that can run Mac OS. We can even use VirtualBox to run a Mac-based virtual machine, but performance will be an issue. Refer to http://apple.stackexchange.com/questions/63147/is-mac-os-x-in-a-virtualbox-vm-suitable-for-ios-development for more information. The biggest difference between becoming an Android developer and becoming an iOS developer is that you have to sign up with Apple for their developer program even if you never produce an app for the iOS App Store, whereas no such signing up is required for becoming an Android developer. If things go well and you end up making an app for various stores, then this isn't such a big deal.
It will cost you $25 to submit an app to the Android Market, $99 a year (with the first year free) to submit an app to the Amazon Appstore, and $99 a year (including the first year) to be an iOS developer with Apple. Just try to sell more than 300 copies of your amazing $0.99 app and you'll find that it has paid for itself!

Note that there is free iOS App Store app licensing included with LiveCode Membership, which also costs $99 per year. As a LiveCode member, you can submit your free non-commercial app to RunRev, who will provide a license that allows you to submit your app as "closed source" to the iOS App Store. This service is exclusively available to LiveCode members. The first submission each year is free; after that, there is a $25 administration fee per submission. Refer to http://livecode.com/membership/ for more information.

You can enroll in the iOS Developer Program at http://developer.apple.com/programs/ios/. While signing up to be an iOS developer, there are a number of possibilities when it comes to your current status. If you already have an Apple ID, which you use with your iTunes or Apple online store purchases, you could choose the I already have an Apple ID... option. In order to illustrate all the steps of signing up, we will start as a brand new user. You can choose whether you want to sign up as an individual or as a company; we will choose Individual. As with any such sign-up process, you need to enter your personal details, set a security question, and enter your postal address. Most Apple software and services have their own legal agreement for you to sign; the first is the general Registered Apple Developer Agreement. In order to verify the e-mail address you have used, a verification code is sent to you with a link in the e-mail; you can click this or enter the code manually. Once you have completed the verification code step, you can enter your billing details.

It could be that you might go on to make LiveCode applications for the Mac App Store, in which case you will need to add the Mac Developer Program product. For our purpose, we only need to sign up for the iOS Developer Program. Each product that you sign up for has its own agreement. Lots of small print to read! The actual purchase of the iOS developer account is handled through the Apple Store of your own region. It is going to cost you $99 per year, or $198 per year if you also sign up for the Mac Developer account. Most LiveCode users won't need to sign up for the Mac Developer account unless their plan is to submit desktop apps to the Mac App Store. After submitting the order, you are rewarded with a message that tells you that you are now registered as an Apple developer! Sadly, you won't get instant approval, as was the case with the Android Market or Amazon Appstore; you may have to wait up to five days for approval. In the early iPhone Developer days, the approval could take a month or more, so this is an improvement!

Pop quiz – iOS code names

You had it easy with the pop quiz about Android OS code names! Not so with iOS. Which of these names is more likely to be a code name for a future version of iOS?

1. Las Vegas
2. Laguna Beach
3. Hunter Mountain
4. Death Valley

Answer: 3

Although not publicized, Apple does use code names for each version of iOS.
Previous examples include Big Bear, Apex, Kirkwood, and Telluride. These, and all the others, are apparently ski resorts. Hunter Mountain is a relatively small mountain (3,200 feet), so if it does get used, perhaps it would be a minor update!

Installing Xcode

Once you receive confirmation of becoming an iOS developer, you will be able to log in to the iOS Dev Center at https://developer.apple.com/devcenter/ios/index.action. This same page is used by iOS developers who are not using LiveCode and is full of support documents that can help you create native applications using Xcode and Objective-C. We don't need all the support documents, but we do need to download Xcode itself. In the downloads area of the iOS Dev Center page, you will see a link to the current version of Xcode and a link to the older versions as well. The current version is delivered via the Mac App Store; when you follow the given link, you will see a button that takes you to the App Store application. Installing Xcode from the Mac App Store is very straightforward. It's just like buying any other app from the store, except that it's free! It does require you to use the latest version of Mac OS X. Xcode will show up in your Applications folder. If you are using an older system, then you need to download one of the older versions from the developer page. The older Xcode installation process is much like the installation process of any other Mac application. The older version of Xcode takes a long time to install, but in the end, you should have the Developer folder or a new Xcode application ready for LiveCode.

Coping with newer and older devices

In early 2012, Apple brought to market a new version of the iPad. The main selling point of this one, compared to iPad 2, is that it has a Retina display. The original iPads have a resolution of 1024 x 768, and the Retina version has a resolution of 2048 x 1536. If you wish to build applications that take advantage of this, you must get the current version of Xcode from the Mac App Store and not one of the older versions from the developer page. The new version of Xcode demands that you work on Mac OS 10.10 or a later version. So, to fully support the latest devices, you may have to update your system software more than you were expecting! But wait, there's more... By taking a later version of Xcode, you are missing the iOS SDK versions needed to support older iOS devices, such as the original iPhone and iPhone 3G. Fortunately, there is a Downloads tab in Xcode's Preferences where you can get these older SDKs downloaded into the new version of Xcode. Typically, Apple only allows you to download one version older than the one currently provided in Xcode. There are older versions available, but they are not accepted by Apple for App Store submission.

Pointing LiveCode to the iOS SDKs

Open the LiveCode Preferences and choose Mobile Support. Click on the Add Entry button in the upper-right section of the window to see a dialog box that asks whether you are using Xcode 4.2 or 4.3 and later. If you choose 4.2, go on to select the folder named Developer at the root of your hard drive. For 4.3 or later, choose the Xcode application itself in your Applications folder. LiveCode now knows where to find the SDKs for iOS.

Before we make our first mobile app...

Now that the required SDKs are installed and LiveCode knows where they are, we can make a stack and test it in a simulator or on a physical device.
We do, however, have to get the simulators and physical devices warmed up...

Getting ready for test development on an Android device

Simulating on iOS is easier than it is on Android, and testing on a physical device is easier on Android than on iOS, but the setting up of physical Android devices can be horrendous!

Time for action – starting an Android Virtual Device

You will have to dig a little deep into the Android SDK folders to find the Android Virtual Device setup program. You might as well create a shortcut or an alias to it for quicker access. The following steps will help you set up and start an Android virtual device:

1. Navigate to the Android SDK tools folder, located at C:\Program Files (x86)\Android\android-sdk on Windows; on Mac, navigate to your Documents/android-sdk-macosx/tools folder.
2. Open AVD Manager on Windows or android on Mac (the latter looks like a Unix executable file; just double-click on it and the application will open via a command-line window).
3. If you're on Mac, select Manage AVDs… from the Tools menu.
4. Select Tablet from the list of devices if there is one. If not, you can add your own custom devices, as described in the following section.
5. Click on the Start button.
6. Sit patiently while the virtual device starts up!
7. Open LiveCode, create a new Mainstack, and click on Save to save the stack to your hard drive.
8. Navigate to File | Standalone Application Settings….
9. Click on the Android icon and select the Build for Android checkbox.
10. Close the settings dialog box and take a look at the Development menu. If the virtual machine is up and running, you should see it listed in the Test Target submenu.

Creating an Android Virtual Device

If there are no devices listed when you open the Android Virtual Device (AVD) Manager, you may wish to create a device, so click on the Create button. Further explanation of the various fields can be found at https://developer.android.com/tools/devices/index.html. After you have created a device, you can click on Start to start the virtual device and change some of the Launch Options. You should typically select Scale display to real size, unless it is too big for your development screen. Then, click on Launch to fire up the emulator. Further information on how to run the emulator can be found at http://developer.android.com/tools/help/emulator.html.

What just happened?

Now that you've opened an Android virtual device, LiveCode will be able to test stacks using this device. Once it has finished loading, that is!

Connecting a physical Android device

Connecting a physical Android device can be extremely straightforward:

1. Connect your device to the system by USB.
2. Select your device from the Development | Test Target submenu.
3. Select Test from the Development menu or click on the Test button in the Tool Bar.

There can be problem cases though, and Google Search will become your best friend before you are done solving them! We should look at an example problem case, so that you get an idea of how to solve similar situations that you may encounter.

Using Kindle Fire

When it comes to finding Android devices, the Android SDK recognizes a lot of them automatically. Some devices are not recognized, and you have to do something to help Android Debug Bridge (ADB) find these devices. Android Debug Bridge (ADB) is the part of the Android SDK that acts as an intermediary between your device and any software that needs to access the device.
In some cases, you will need to go into the Android system on the device to tell it to allow access for development purposes. For example, on an Android 3 (Honeycomb) device, you need to go to the Settings | Applications | Development menu and activate the USB debugging mode. Before ADB connects to a Kindle Fire device, that device must first be configured so that it allows the connection. This is enabled by default on the first-generation Kindle Fire device. On all other Kindle Fire models, go to the device settings screen, select Security, and set Enable ADB to On. The original Kindle Fire model comes with USB debugging already enabled, but the ADB system doesn't know about the device at all. You can fix this!

Time for action – adding Kindle Fire to ADB

It only takes one line of text to add Kindle Fire to the list of devices that ADB knows about. The hard part is tracking down the text file to edit and getting ADB to restart after making the required changes. Things are more involved on Windows than on Mac, because you also have to configure the USB driver, so the two systems are shown here as separate sets of steps. The steps for adding a Kindle Fire to ADB on Windows are as follows:

1. In Windows Explorer, navigate to C:\Users\yourusername\.android, where the adb_usb.ini file is located.
2. Open the adb_usb.ini text file in a text editor. The file has no visible line breaks, so it is better to use WordPad than Notepad. On the line after the three instruction lines, type 0x1949. Make sure that there are no blank lines; the last character in the text file should be the 9 at the end of 0x1949. Now, save the file.
3. Navigate to C:\Program Files (x86)\Android\android-sdk\extras\google\usb_driver, where android_winusb.inf is located.
4. Right-click on the file and, in Properties | Security, select Users from the list and click on Edit to set the permissions so that you are allowed to write to the file.
5. Open the android_winusb.inf file in Notepad. Add the following three lines to the [Google.NTx86] and [Google.NTamd64] sections and save the file:

       ;Kindle Fire
       %SingleAdbInterface% = USB_Install, USB\VID_1949&PID_0006
       %CompositeAdbInterface% = USB_Install, USB\VID_1949&PID_0006&MI_01

6. You need to set the Kindle so that it uses the Google USB driver that you just edited. In the Windows control panel, navigate to Device Manager and find the Kindle entry in the list under USB.
7. Right-click on the Kindle entry and choose Update Driver Software…. Choose the option that lets you find the driver on your local drive, navigate to the google\usb_driver folder, and then select it to be the new driver.
8. When the driver is updated, open a command window (a handy trick to open a command window is to Shift-right-click on the desktop and choose "Open command window here"). Change directories to where the ADB tool is located by typing:

       cd C:\Program Files (x86)\Android\android-sdk\platform-tools

9. Type the following three lines of code and press Enter after each line:

       adb kill-server
       adb start-server
       adb devices

You should see the Kindle Fire listed (as an obscure-looking number), as well as the virtual device if you still have that running.

The steps for a Mac (MUCH simpler!) system are as follows:

1. Navigate to where the adb_usb.ini file is located. On Mac, in Finder, select Go | Go to Folder… and type ~/.android/ into the dialog.
2. Open the adb_usb.ini file in a text editor. On the line after the three instruction lines, type 0x1949.
   Make sure that there are no blank lines; the last character in the text file should be the 9 at the end of 0x1949.

3. Save the adb_usb.ini file.
4. Open Terminal, found under Utilities. You can let OS X know how to find ADB from anywhere by typing the following line (replace yourusername with your actual username, and also change the path if you've installed the Android SDK to some other location):

       export PATH=$PATH:/Users/yourusername/Documents/android-sdk-macosx/platform-tools

5. Now, try the same three lines as we did on Windows:

       adb kill-server
       adb start-server
       adb devices

Again, you should see the Kindle Fire listed here.

What just happened?

I suspect that you're going to have nightmares about all these steps! It took a lot of research on the Web to find some of these obscure hacks. The general case with Android devices on Windows is that you have to modify the USB driver so that the device is handled by the Google USB driver, and you may have to modify the adb_usb.ini file (on Mac too) for the device to be considered an ADB-compatible device.

Getting ready for test development on an iOS device

If you carefully went through all these Android steps, especially on Windows, you will hopefully be amused by the brevity of this section! There is a catch though; you can't really test on an iOS device from LiveCode. We'll look at what you have to do instead in a moment, but first, we'll look at the steps required to test an app in the iOS simulator.

Time for action – using the iOS simulator

The initial steps are much like what we did for Android apps, but the process becomes a lot quicker in the later steps. Remember, this only applies to Mac OS; you can only do these things on Windows if you are running Mac OS in a virtual machine, which may have performance issues (and is most likely not covered by the Mac OS user agreement!). In other words, get a Mac if you intend to develop for iOS. The following steps will help you achieve that:

1. Open LiveCode, create a new Mainstack, and save the stack to your hard drive.
2. Select File and then Standalone Application Settings….
3. Click on the iOS icon and select the Build for iOS checkbox.
4. Close the settings dialog box and take a look at the Test Target menu under Development. You will see a list of simulator options for iPhone and iPad and different versions of iOS.
5. To start the iOS simulator, select an option and click on the Test button.

What just happened?

This was all it took for us to get the testing done using the iOS simulators! To test on a physical iOS device, we need to create an application file first. Let's do that.

Appiness at last!

At this point, you should be able to create a new Mainstack, save it, select either iOS or Android in the Standalone Settings dialog box, and see simulators or virtual devices in the Development/Test menu item. In the case of an Android app, you will also see your device listed if it is connected via USB at the time.

Time for action – testing a simple stack in the simulators

Feel free to make things that are more elaborate than the ones we make in these steps! The following instructions assume that you know how to find things by yourself in the object inspector palette:

1. Open LiveCode, create a new Mainstack, and save it someplace where it will be easy to find a moment from now.
2. Set the card window to the size 480 x 320 and uncheck the Resizable checkbox.
3. Drag a label field to the top-left corner of the card window and set its contents to something appropriate. Hello World might do.
4. If you're developing on Windows, skip to step 11.
5. Open the Standalone Application Settings dialog box, click on the iOS icon, and select the Build for iOS checkbox.
6. Under Orientation Options, set the iPhone Initial Orientation to Landscape Left.
7. Close the dialog box.
8. Navigate to the Development | Test Target submenu and choose an iPhone Simulator.
9. Select Test from the Development menu.
10. You should now be able to see your test stack running in the iOS simulator!
11. As discussed earlier, launch the Android virtual device.
12. Open the Standalone Application Settings dialog box, click on the Android icon, and select the Build for Android checkbox.
13. Under User Interface Options, set the Initial Orientation to Landscape.
14. Close the dialog box.
15. If the virtual device is running by now, do whatever it takes to get past the locked home screen, if that's what it is showing.
16. From the Development | Test Target submenu, choose the Android emulator.
17. Select Test from the Development menu.
18. You should now see your test stack running in the Android emulator!

What just happened?

All being well, you just made and ran your first mobile app on both Android and iOS! For an encore, we should try this on physical devices, if only to give Android a chance to show how easily it can be done. There is a whole can of worms we haven't opened yet that has to do with getting an iOS device configured so that it can be used for testing. You could visit the iOS Provisioning Portal at https://developer.apple.com/ios/manage/overview/index.action and look at the How To tab in each of the different sections.

Time for action – testing a simple stack on devices

Now, let's try running our tests on physical devices. Get your USB cables ready and connect the devices to your computer. Let's go through the steps for an Android device first:

1. You should still have Android selected in Standalone Application Settings.
2. Get your device to its home screen, past the initial lock screen if there is one.
3. Choose Development | Test Target and select your Android device. It may well say "Android" and a very long number.
4. Choose Development | Test. The stack should now be running on your Android device.

Now, we'll go through the steps to test a simple stack on an iOS device:

1. Change the Standalone Application Settings back to iOS.
2. Under Basic Application Settings of the iOS settings is a Profile drop-down menu of the provisioning files that you have installed. Choose one that is configured for the device you are going to test on.
3. Close the dialog box and choose Save as Standalone Application… from the File menu.
4. In Finder, locate the folder that was just created and open it to reveal the app file itself. As we didn't give the stack a sensible name, it will be named Untitled 1.
5. Open Xcode, which is in the Developer folder you installed earlier, in the Applications subfolder. In Xcode, choose Devices from the Window menu if it isn't already selected. You should see your device listed. Select it, and if you see a button labeled Use for Development, click on that button.
6. Drag the app file straight from the Finder window to your device in the Devices window. You should see a green circle with a + sign. You can also click on the + sign below Installed Apps and locate your app file in the Finder window. You can also replace or delete an installed app from this window.
7. You can now open the app on your iOS device!

What just happened?
In addition to getting a test stack to work on real devices, we also saw how easy it is, once everything is configured, to test a stack straight on an Android device. If you are developing an app that is to be deployed on both Android and iOS, you may find that the fastest way to work is to test with the iOS simulator for the iOS side, and directly on a physical Android device rather than with the slow Android SDK virtual devices.

Have a go hero – Nook
Until recently, the Android support for the Nook Color from Barnes & Noble wasn't good enough to install LiveCode apps. It seems to have improved, though, and the Nook store could well be another worthwhile app store for you to target. Investigate the sign-up process, download their SDK, and so on. With any luck, some of the processes that you've learned while signing up for the other stores will also apply to the Nook store. You can start the sign-up process at https://nookdeveloper.barnesandnoble.com.

Further reading
The SDK providers, Google and Apple, have extensive pages of information on how to set up development environments, create certificates and provisioning files, and so on. The information covers a lot of topics that don't apply to LiveCode, so try not to get lost! These URLs would be good starting points if you want to read further:

http://developer.android.com/
http://developer.apple.com/ios/

Summary
Signing up for programs, downloading files, using the command line all over the place, and patiently waiting for the Android emulator to launch: fortunately, you only have to go through all of this once. In this article, we worked through a number of tasks that you have to do before you can create a mobile app in LiveCode. We had to sign up as an iOS developer before we could download and install Xcode and the iOS SDKs. We then downloaded and installed the Android SDK and configured LiveCode for devices and simulators. We also covered some topics that will be useful once you are ready to upload a finished app, showing you how to sign up for the Android Market and the Amazon Appstore. There will be a few more mundane things that we have to cover at the end of the article, but not for a while! Next up, we will start to play with some of the special abilities of mobile devices.

Resources for Article:
Further resources on this subject:
LiveCode: Loops and Timers [article]
Creating Quizzes [article]
Getting Started with LiveCode for Mobile [article]

Running Cucumber

Packt
03 Jun 2015
6 min read
In this article by Shankar Garg, author of the book Cucumber Cookbook, we will cover the following topics:

Integrating Cucumber with Maven
Running Cucumber from the Terminal
Overriding options from the Terminal

(For more resources related to this topic, see here.)

Integrating Cucumber with Maven
Maven has a lot of advantages over other build tools, such as dependency management, a large plugin ecosystem, and the convenience of running integration tests. So let's also integrate our framework with Maven. Maven will allow our test cases to be run in different flavors, such as from the Terminal, integrated with Jenkins, and in parallel execution. So how do we integrate with Maven? Let's find out in the next section.

Getting ready
I am assuming that we know the basics of Maven (the basics of Maven are out of the scope of this book). Follow the upcoming instructions to install Maven on your system and to create a sample Maven project. We need to install Maven on our system first, so follow the instructions mentioned on the following blogs:

For Windows: http://www.mkyong.com/maven/how-to-install-maven-in-windows/
For Mac: http://www.mkyong.com/maven/install-maven-on-mac-osx/

We can also install the Maven Eclipse plugin by following the instructions mentioned on this blog: http://theopentutorials.com/tutorials/eclipse/installing-m2eclipse-maven-plugin-for-eclipse/. To import a Maven project into Eclipse, follow the instructions on this blog: http://www.tutorialspoint.com/maven/maven_eclispe_ide.htm.

How to do it…
Since it is a Maven project, we are going to change the pom.xml file to add the Cucumber dependencies. First, we declare some custom properties that we will use to manage the dependency versions:

<properties>
   <junit.version>4.11</junit.version>
   <cucumber.version>1.2.2</cucumber.version>
   <selenium.version>2.45.0</selenium.version>
   <maven.compiler.version>2.3.2</maven.compiler.version>
</properties>

Now, we add the dependency for Cucumber-Java with scope test:

<!-- Cucumber-Java -->
<dependency>
   <groupId>info.cukes</groupId>
   <artifactId>cucumber-java</artifactId>
   <version>${cucumber.version}</version>
   <scope>test</scope>
</dependency>

Next, we add the dependency for Cucumber-JUnit, also with scope test:

<!-- Cucumber-JUnit -->
<dependency>
   <groupId>info.cukes</groupId>
   <artifactId>cucumber-junit</artifactId>
   <version>${cucumber.version}</version>
   <scope>test</scope>
</dependency>

That's it! We have integrated Cucumber and Maven.

How it works…
By following these steps, we have created a Maven project and added the Cucumber-Java dependency. At the moment, this project only has a pom.xml file, but it can be used for adding different modules such as Feature files and Step Definitions. The advantage of using properties is that each dependency version is declared in exactly one place in the pom.xml file; otherwise, we might declare a dependency at multiple places and end up with a discrepancy between versions. The Cucumber-Java dependency is the main dependency necessary for the different building blocks of Cucumber. The Cucumber-JUnit dependency is for the Cucumber JUnit Runner, which we use to run Cucumber test cases.
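The runner class that mvn test picks up is only referred to by name (RunCukesTest) in this article, so here is a minimal sketch of what such a class might look like. The package, the features path, and the glue package are illustrative assumptions, not taken from the book:

package com.features;

import org.junit.runner.RunWith;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

// An empty class: the two annotations alone tell JUnit to hand control to Cucumber.
@RunWith(Cucumber.class)
@CucumberOptions(
    features = "src/test/java/com/features",  // where the .feature files live (assumed layout)
    glue = "com.features",                    // package containing the Step Definitions (assumed)
    plugin = {"pretty"}                       // console formatter
)
public class RunCukesTest {
}

With a class like this on the test classpath, mvn test discovers it like any other JUnit test and delegates the run to Cucumber.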
Running Cucumber from the Terminal
Now that we have integrated Cucumber with Maven, running Cucumber from the Terminal will not be a problem. Running any test framework from the Terminal has its own advantages, such as the ability to override the run configurations mentioned in the code. So how do we run Cucumber test cases from the Terminal? Let's find out in the next section.

How to do it…
Open the command prompt and cd to the project root directory. First, let's run all the Cucumber Scenarios from the command prompt. Since it's a Maven project, with Cucumber added as a test-scoped dependency and all the Features placed under the test packages, run the following command in the command prompt:

mvn test

The previous command runs everything as specified in the JUnit Runner class. However, if we want to override the configurations mentioned in the Runner, we need to use the following command:

mvn test -Dcucumber.options="<<OPTIONS>>"

If you need help on these Cucumber options, enter the following command in the command prompt and look at the output:

mvn test -Dcucumber.options="--help"

How it works…
mvn test runs Cucumber Features using Cucumber's JUnit Runner. The @RunWith(Cucumber.class) annotation on the RunCukesTest class tells JUnit to kick off Cucumber. The Cucumber runtime parses the command-line options to know what Feature to run, where the Glue Code lives, what plugins to use, and so on. When you use the JUnit Runner, these options are generated from the @CucumberOptions annotation on your test.

Overriding options from the Terminal
When it is necessary to override the options mentioned in the JUnit Runner, we pass -Dcucumber.options from the Terminal. Let's look at some practical examples.

How to do it…
If we want to run a Scenario by specifying its filesystem path, run the following command and look at the output:

mvn test -Dcucumber.options="src/test/java/com/features/sample.feature:5"

In the preceding command, "5" is the line number in the Feature file at which a Scenario starts. If we want to run test cases using Tags, we run the following command and observe the output:

mvn test -Dcucumber.options="--tags @sanity"

If we want to generate a different report, we can use the following command and see the JUnit report generated at the location mentioned:

mvn test -Dcucumber.options="--plugin junit:target/cucumber-junit-report.xml"

How it works…
When you override the options with -Dcucumber.options, you completely override whatever options are hardcoded in your @CucumberOptions. There is one exception to this rule: the --plugin option. It does not override; instead, it adds a plugin.

Summary
In this article, we learned that for the successful implementation of any testing framework, it is mandatory that test cases can be run in multiple ways, so that people with different competency levels can use the framework as they need to. We also covered more advanced ways of running Cucumber test cases through the combination of Cucumber and Maven.

Resources for Article:
Further resources on this subject:
Signing an application in Android using Maven [article]
Apache Maven and m2eclipse [article]
Understanding Maven [article]

Playing with Physics

Packt
03 Jun 2015
26 min read
In this article by Maxime Barbier, author of the book SFML Blueprints, we will add physics to this game and turn it into a new one. By doing this, we will learn:

What a physics engine is
How to install and use the Box2D library
How to pair the physics engine with SFML for the display
How to add physics to the game

In this article, we will learn the magic of physics. We will also do some mathematics, but relax, it's for conversion only. Now, let's go!

(For more resources related to this topic, see here.)

A physics engine – késako?
We will speak about physics engines, but the first question is "what is a physics engine?", so let's explain it. A physics engine is a piece of software or a library that is able to simulate physics, for example, the Newton-Euler equations that describe the movement of a rigid body. A physics engine is also able to manage collisions, and some engines can deal with soft bodies and even fluids. There are different kinds of physics engines, mainly categorized into real-time and non-real-time engines. The first kind is mostly used in video games or simulators, and the second in high-performance scientific simulation and in the creation of special effects for cinema and animation. As our goal is to use the engine in a video game, let's focus on real-time engines. Here again, there are two important types of engine: one for 2D and the other for 3D. Of course you can use a 3D engine in a 2D world, but for optimization purposes it's preferable to use a 2D engine. There are plenty of engines, but not all of them are open source.

3D physics engines
For 3D games, I advise you to use the Bullet physics library. It is integrated into the Blender software and has been used in the creation of some commercial games as well as in the making of films. This is a really good engine written in C/C++ that can deal with rigid and soft bodies, fluids, collisions, forces… everything that you need.

2D physics engines
As previously said, in a 2D environment you can use a 3D physics engine; you just have to ignore the depth (the Z axis). However, it is more interesting to use an engine optimized for the 2D environment. There are several engines like this, the most famous being Box2D and Chipmunk. Both of them are really good and neither is better than the other, but I had to make a choice, and it was Box2D. I made this choice not only because of its C++ API, which allows you to use overloading, but also because of the big community involved in the project.

Physics engine versus game engine
Do not mistake a physics engine for a game engine. A physics engine only simulates a physical world, nothing else: there are no graphics and no game logic, only physics simulation. A game engine, on the contrary, most of the time includes a physics engine paired with a rendering technology (such as OpenGL or DirectX), some predefined logic depending on the goal of the engine (RPG, FPS, and so on), and sometimes artificial intelligence. So as you can see, a game engine is more complete than a physics engine. The two best-known game engines are Unity and Unreal Engine, which are both very complete. Moreover, they are free for non-commercial usage. So why don't we directly use a game engine? This is a good question. Sometimes it's better to use something that already exists instead of reinventing it. However, do we really need all the functionalities of a game engine for this project? More importantly, what do we need one for?
Let's see the following:

A graphical output
A physics engine that can manage collisions

Nothing else is required. So as you can see, using a game engine for this project would be like killing a fly with a bazooka. I hope that you have understood the aim of a physics engine, the differences between a game engine and a physics engine, and the reasons for the choices made for this project.

Using Box2D
As previously said, Box2D is a physics engine. It has a lot of features, but the most important ones for the project are the following (taken from the Box2D documentation):

Collision: This functionality is very interesting as it allows our tetriminos to interact with each other
  Continuous collision detection
  Rigid bodies (convex polygons and circles)
  Multiple shapes per body
Physics: This functionality will allow a piece to fall down and more
  Continuous physics with the time of impact solver
  Joint limits, motors, and friction
  Fairly accurate reaction forces/impulses

As you can see, Box2D provides all that we need in order to build our game. There are a lot of other features usable with this engine, but they don't interest us right now, so I will not describe them in detail. However, if you are interested, you can take a look at the official website for more details on the Box2D features (http://box2d.org/about/). It's important to note that Box2D uses meters, kilograms, seconds, and radians for angles as units, while SFML uses pixels, seconds, and degrees, so we will need to make some conversions. I will come back to this later.

Preparing Box2D
Now that Box2D has been introduced, let's install it. You will find the list of available versions on the Google Code project page at https://code.google.com/p/box2d/downloads/list. Currently, the latest stable version is 2.3. Once you have downloaded the source code (from the compressed file or using SVN), you will need to build it.

Install
Once you have successfully built your Box2D library, you will need to configure your system or IDE to find the Box2D library and headers. The newly built library can be found in the /path/to/Box2D/build/Box2D/ directory and is named libBox2D.a. The headers are located in the /path/to/Box2D/Box2D/ directory. If everything is okay, you will find a Box2D.h file in this folder. On Linux, the following command installs Box2D system-wide without requiring any configuration:

sudo make install

Pairing Box2D and SFML
Now that Box2D is installed and your system is configured to find it, let's build the physics "hello world": a falling square. As noted earlier, Box2D uses meters, kilograms, seconds, and radians for angles, while SFML uses pixels, seconds, and degrees, so we will need to make some conversions. Converting radians to degrees or vice versa is not difficult, but pixels to meters… that is another story. In fact, there is no way to convert a pixel to a meter, unless the number of pixels per meter is fixed. This is the technique that we will use. So let's start by creating some utility functions. We should be able to convert radians to degrees, degrees to radians, meters to pixels, and finally pixels to meters. We will also need to fix the pixels-per-meter value. As we don't need any class for these functions, we will define them in a converter namespace.
This results in the following code snippet:

namespace converter
{
    constexpr double PIXELS_PER_METERS = 32.0;
    constexpr double PI = 3.14159265358979323846;

    template<typename T>
    constexpr T pixelsToMeters(const T& x){return x/PIXELS_PER_METERS;};

    template<typename T>
    constexpr T metersToPixels(const T& x){return x*PIXELS_PER_METERS;};

    template<typename T>
    constexpr T degToRad(const T& x){return PI*x/180.0;};

    template<typename T>
    constexpr T radToDeg(const T& x){return 180.0*x/PI;}
}

As you can see, there is no difficulty here. We first define some constants and then the conversion functions. I've chosen to make the functions templates to allow the use of any numeric type; in practice, this will mostly be double or int. The conversion functions are also declared constexpr to allow the compiler to calculate the value at compile time when possible (for example, with a constant as a parameter). This is interesting because we will use these primitives a lot.

Box2D, how does it work?
Now that we can convert SFML units to Box2D units and vice versa, we can pair Box2D with SFML. But first, how exactly does Box2D work? Box2D works like most physics engines:

You start by creating an empty world with some gravity.
Then, you create some object patterns. Each pattern contains the shape of the object, its position, its type (static or dynamic), and some other characteristics such as its density, friction, and energy restitution.
You ask the world to create a new object defined by the pattern.
In each game loop, you update the physical world with a small step, just like our world in the games we've already made.

Because the physics engine does not display anything on the screen, we will need to loop over all the objects and display them ourselves. Let's start by creating a simple scene with two kinds of objects: a ground and squares. The ground will be fixed and the squares will not. The squares will be generated by a user event: a mouse click. This project is very simple, but the goal is to show you how to use Box2D and SFML together in a simple case study. A more complex one will come later. We will need three functionalities for this small project:

Create a shape
Display the world
Update/fill the world

Of course, there is also the initialization of the world and the window. Let's start with the main function:

As always, we create a window for the display and we limit the FPS number to 60. I will come back to this point with the displayWorld function.
We create the physical world from Box2D, with gravity as a parameter.
We create a container that will store all the physical objects, for memory-cleaning purposes.
We create the ground by calling the createBox function (explained just after).
Now it is time for the minimalist game loop:

Close event management
Creating a box when the left button of the mouse is pressed

Finally, we clean up the memory before exiting the program:

int main(int argc,char* argv[])
{
    sf::RenderWindow window(sf::VideoMode(800, 600, 32), "04_Basic");
    window.setFramerateLimit(60);
    b2Vec2 gravity(0.f, 9.8f);
    b2World world(gravity);
    std::list<b2Body*> bodies;
    bodies.emplace_back(book::createBox(world,400,590,800,20,b2_staticBody));

    while(window.isOpen()) {
        sf::Event event;
        while(window.pollEvent(event)) {
            if (event.type == sf::Event::Closed)
                window.close();
        }
        if (sf::Mouse::isButtonPressed(sf::Mouse::Left)) {
            int x = sf::Mouse::getPosition(window).x;
            int y = sf::Mouse::getPosition(window).y;
            bodies.emplace_back(book::createBox(world,x,y,32,32));
        }
        displayWorld(world,window);
    }

    for(b2Body* body : bodies) {
        delete static_cast<sf::RectangleShape*>(body->GetUserData());
        world.DestroyBody(body);
    }
    return 0;
}

For the moment, except for the Box2D world, nothing should surprise you, so let's continue with the box creation. This function is under the book namespace.

b2Body* createBox(b2World& world,int pos_x,int pos_y, int size_x,int size_y,b2BodyType type = b2_dynamicBody)
{
    b2BodyDef bodyDef;
    bodyDef.position.Set(converter::pixelsToMeters<double>(pos_x),
                         converter::pixelsToMeters<double>(pos_y));
    bodyDef.type = type;
    b2PolygonShape b2shape;
    b2shape.SetAsBox(converter::pixelsToMeters<double>(size_x/2.0),
                     converter::pixelsToMeters<double>(size_y/2.0));

    b2FixtureDef fixtureDef;
    fixtureDef.density = 1.0;
    fixtureDef.friction = 0.4;
    fixtureDef.restitution= 0.5;
    fixtureDef.shape = &b2shape;

    b2Body* res = world.CreateBody(&bodyDef);
    res->CreateFixture(&fixtureDef);

    sf::Shape* shape = new sf::RectangleShape(sf::Vector2f(size_x,size_y));
    shape->setOrigin(size_x/2.0,size_y/2.0);
    shape->setPosition(sf::Vector2f(pos_x,pos_y));
    if(type == b2_dynamicBody)
        shape->setFillColor(sf::Color::Blue);
    else
        shape->setFillColor(sf::Color::White);

    res->SetUserData(shape);

    return res;
}

This function contains a lot of new functionality. Its goal is to create a rectangle of a specific size at a predefined position, with its type (dynamic or static) set by the caller. Here again, let's explain the function step by step:

We create a b2BodyDef object. It contains the definition of the body to create, so we set its position and type. The position will be relative to the gravity center of the object.
Then, we create a b2Shape. This is the physical shape of the object; in our case, a box. Note that the SetAsBox() method doesn't take the same parameters as sf::RectangleShape: the parameters are half the size of the box, which is why we divide the values by two.
We create a b2FixtureDef object and initialize it. It holds all the physical characteristics of the object: its density, friction, restitution, and shape.
Then, we properly create the object in the physical world.
Now, we create the display of the object. This will be more familiar because we only use SFML here: we create a rectangle and set its position, origin, and color.
As we need to associate an SFML shape with each physical object for display, we use a Box2D functionality: the SetUserData() function. It takes a void* as a parameter and holds it internally, so we use it to keep track of our SFML shape.
Finally, the body is returned by the function. This pointer has to be stored so we can clean up the memory later. This is the reason for the bodies container in main().
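The same pattern extends to other shapes. As a sketch (not code from the book), here is a createCircle variant: only the b2Shape subclass and the SFML drawable change, while the body and fixture setup stay identical, so displayWorld can render it unmodified:

#include <SFML/Graphics.hpp>
#include <Box2D/Box2D.h>

// Sketch: a circular body paired with an sf::CircleShape, mirroring createBox.
b2Body* createCircle(b2World& world, int pos_x, int pos_y, int radius,
                     b2BodyType type = b2_dynamicBody)
{
    b2BodyDef bodyDef;
    bodyDef.position.Set(converter::pixelsToMeters<double>(pos_x),
                         converter::pixelsToMeters<double>(pos_y));
    bodyDef.type = type;

    b2CircleShape b2shape;                                // circle instead of box
    b2shape.m_radius = converter::pixelsToMeters<double>(radius);

    b2FixtureDef fixtureDef;
    fixtureDef.density = 1.0;
    fixtureDef.friction = 0.4;
    fixtureDef.restitution = 0.5;
    fixtureDef.shape = &b2shape;

    b2Body* res = world.CreateBody(&bodyDef);
    res->CreateFixture(&fixtureDef);

    sf::Shape* shape = new sf::CircleShape(radius);       // matching SFML drawable
    shape->setOrigin(radius, radius);
    shape->setPosition(pos_x, pos_y);
    shape->setFillColor(sf::Color::Green);
    res->SetUserData(shape);
    return res;
}

Note that if you mix shapes this way, the cleanup loop in main() should cast the user data to sf::Shape* rather than sf::RectangleShape* before deleting it, so the correct destructor runs.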
We now have the capability to simply create a box and add it to the world. Next, let's render it to the screen. This is the goal of the displayWorld function:

void displayWorld(b2World& world,sf::RenderWindow& render)
{
    world.Step(1.0/60,int32(8),int32(3));
    render.clear();
    for (b2Body* body=world.GetBodyList(); body!=nullptr; body=body->GetNext())
    {
        sf::Shape* shape = static_cast<sf::Shape*>(body->GetUserData());
        shape->setPosition(converter::metersToPixels(body->GetPosition().x),
                           converter::metersToPixels(body->GetPosition().y));
        shape->setRotation(converter::radToDeg<double>(body->GetAngle()));
        render.draw(*shape);
    }
    render.display();
}

This function takes the physical world and the window as parameters. Here again, let's explain it step by step:

We update the physical world. If you remember, we have set the frame rate to 60, which is why we use 1.0/60 as the step parameter here; the two other parameters only control the solver precision. In good code, the time step should not be hardcoded as it is here: we should use a clock to be sure that the value is always right (a clock-based sketch follows this list). It has been done this way here to keep the focus on the important part: the physics.
We reset the screen, as usual.
Here is the new part: we loop over the bodies stored by the world, get back each SFML shape, update it with the information taken from its physical body, and render it on the screen.
Finally, we display the result on the screen.
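Here is the clock-based variant mentioned in the first step, as a sketch under the same 60 FPS assumption: real elapsed time is accumulated with sf::Clock and consumed in fixed slices, so the simulation speed no longer depends on the actual frame rate:

#include <SFML/System.hpp>
#include <Box2D/Box2D.h>

// Sketch: drive b2World::Step() with a fixed time step fed by a real clock.
void updatePhysics(b2World& world)
{
    static const float timeStep = 1.f/60;  // the fixed step we simulate with
    static sf::Clock clock;                // measures real elapsed time
    static float accumulator = 0.f;

    accumulator += clock.restart().asSeconds();

    // Consume the accumulated time in fixed slices so the simulation stays
    // deterministic even if the rendering frame rate fluctuates.
    while(accumulator >= timeStep) {
        world.Step(timeStep, 8, 3);        // 8 velocity and 3 position iterations
        accumulator -= timeStep;
    }
}

Calling this once per frame in place of the hardcoded world.Step() line keeps the physics running at the same speed whatever the display refresh rate is.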
As you can see, it's not really difficult to pair SFML with Box2D, and it's not a pain to add it. However, we have to take care of the data conversions. This is the real trap: pay attention to the precision required (int, float, double) and everything should be fine. Now that you have all the keys in hand, let's build a real game with physics.

Adding physics to a game
Now that Box2D has been introduced with a basic project, let's focus on the real one. We will modify our basic Tetris to get a gravity Tetris, alias Gravitris. The game controls will be the same as in Tetris, but the game engine will not: we will replace the board with a real physics engine. In this project, we will reuse a lot of work previously done. As already said, the goal of some of our classes is to be reusable in any game using SFML. Here, this will be achieved without any difficulties, as you will see. The classes concerned are those that deal with user events (Action, ActionMap, ActionTarget) but also Configuration and ResourceManager. There are still some changes to make in the Configuration class, more precisely in its enums and initialization methods, because we don't use the exact same sounds and events as in the Asteroid game, so we need to adjust them to our needs.

Enough with explanations, let's do it with the following code:

class Configuration
{
    public:
        Configuration() = delete;
        Configuration(const Configuration&) = delete;
        Configuration& operator=(const Configuration&) = delete;

        enum Fonts : int {Gui};
        static ResourceManager<sf::Font,int> fonts;

        enum PlayerInputs : int {TurnLeft, TurnRight, MoveLeft, MoveRight, HardDrop};
        static ActionMap<int> playerInputs;

        enum Sounds : int {Spawn, Explosion, LevelUp};
        static ResourceManager<sf::SoundBuffer,int> sounds;

        enum Musics : int {Theme};
        static ResourceManager<sf::Music,int> musics;

        static void initialize();

    private:
        static void initTextures();
        static void initFonts();
        static void initSounds();
        static void initMusics();
        static void initPlayerInputs();
};

As you can see, the changes are in the enums, more precisely in Sounds and PlayerInputs: we change the values into ones better adapted to this project. We still have the font and the music theme. Now, take a look at the initialization methods that have changed:

void Configuration::initSounds()
{
    sounds.load(Sounds::Spawn,"media/sounds/spawn.flac");
    sounds.load(Sounds::Explosion,"media/sounds/explosion.flac");
    sounds.load(Sounds::LevelUp,"media/sounds/levelup.flac");
}
void Configuration::initPlayerInputs()
{
    playerInputs.map(PlayerInputs::TurnRight,Action(sf::Keyboard::Up));
    playerInputs.map(PlayerInputs::TurnLeft,Action(sf::Keyboard::Down));
    playerInputs.map(PlayerInputs::MoveLeft,Action(sf::Keyboard::Left));
    playerInputs.map(PlayerInputs::MoveRight,Action(sf::Keyboard::Right));
    playerInputs.map(PlayerInputs::HardDrop,Action(sf::Keyboard::Space, Action::Type::Released));
}

No real surprises here; we simply adjust the resources to the needs of the project. As you can see, the changes are minimal and easily made. This is the aim of all reusable modules or classes. A piece of advice, however: keep your code as modular as possible. This will allow you to change a part very easily and also to import any generic part of your project into another one.

The Piece class
Now that the Configuration class is done, the next step is the Piece class. This class will be the most modified one; actually, as there is too much change involved, let's rebuild it from scratch. A piece has to be considered as an ensemble of four squares that are independent from one another. This will allow us to split a piece at runtime. Each of these squares will be a different fixture attached to the same body: the piece. We will also need to apply forces to a piece, especially to the current piece controlled by the player. These forces can move the piece horizontally or rotate it. Finally, we will need to draw the piece on the screen.
The result is the following code snippet:

constexpr int BOOK_BOX_SIZE = 32;
constexpr int BOOK_BOX_SIZE_2 = BOOK_BOX_SIZE / 2;
class Piece : public sf::Drawable
{
    public:
        Piece(const Piece&) = delete;
        Piece& operator=(const Piece&) = delete;

        enum TetriminoTypes {O=0,I,S,Z,L,J,T,SIZE};
        static const sf::Color TetriminoColors[TetriminoTypes::SIZE];

        Piece(b2World& world,int pos_x,int pos_y,TetriminoTypes type,float rotation);
        ~Piece();
        void update();
        void rotate(float angle);
        void moveX(int direction);
        b2Body* getBody()const;

    private:
        virtual void draw(sf::RenderTarget& target, sf::RenderStates states) const override;
        b2Fixture* createPart(int pos_x,int pos_y,TetriminoTypes type); ///< position is relative to the piece in matrix coordinates (0 to 3)
        b2Body* _body;
        b2World& _world;
};

Some parts of the class don't change, such as the TetriminoTypes and TetriminoColors enums. This is normal because we don't change any piece's shape or colors. The rest, on the other hand, is very different from the previous version. Let's see the implementation:

Piece::Piece(b2World& world,int pos_x,int pos_y,TetriminoTypes type,float rotation) : _world(world)
{
    b2BodyDef bodyDef;
    bodyDef.position.Set(converter::pixelsToMeters<double>(pos_x),
                         converter::pixelsToMeters<double>(pos_y));
    bodyDef.type = b2_dynamicBody;
    bodyDef.angle = converter::degToRad(rotation);
    _body = world.CreateBody(&bodyDef);

    switch(type)
    {
        case TetriminoTypes::O : {
            createPart(0,0,type); createPart(0,1,type);
            createPart(1,0,type); createPart(1,1,type);
        }break;
        case TetriminoTypes::I : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(2,0,type); createPart(3,0,type);
        }break;
        case TetriminoTypes::S : {
            createPart(0,1,type); createPart(1,1,type);
            createPart(1,0,type); createPart(2,0,type);
        }break;
        case TetriminoTypes::Z : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(1,1,type); createPart(2,1,type);
        }break;
        case TetriminoTypes::L : {
            createPart(0,1,type); createPart(0,0,type);
            createPart(1,0,type); createPart(2,0,type);
        }break;
        case TetriminoTypes::J : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(2,0,type); createPart(2,1,type);
        }break;
        case TetriminoTypes::T : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(1,1,type); createPart(2,0,type);
        }break;
        default:break;
    }
    _body->SetUserData(this);
    update();
}

The constructor is the most important method of this class. It initializes the physical body and adds each square to it by calling createPart(). Then, we set the user data to the piece itself. This will allow us to navigate from the physics to SFML and vice versa.
Finally, we synchronize the physical object with the drawable by calling the update() function:

Piece::~Piece()
{
    for(b2Fixture* fixture=_body->GetFixtureList();fixture!=nullptr; fixture=fixture->GetNext()) {
        sf::ConvexShape* shape = static_cast<sf::ConvexShape*>(fixture->GetUserData());
        fixture->SetUserData(nullptr);
        delete shape;
    }
    _world.DestroyBody(_body);
}

The destructor loops over all the fixtures attached to the body, destroys all the SFML shapes, and then removes the body from the world:

b2Fixture* Piece::createPart(int pos_x,int pos_y,TetriminoTypes type)
{
    b2PolygonShape b2shape;
    b2shape.SetAsBox(converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2),
        converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2),
        b2Vec2(converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2+(pos_x*BOOK_BOX_SIZE)),
               converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2+(pos_y*BOOK_BOX_SIZE))),
        0);

    b2FixtureDef fixtureDef;
    fixtureDef.density = 1.0;
    fixtureDef.friction = 0.5;
    fixtureDef.restitution= 0.4;
    fixtureDef.shape = &b2shape;

    b2Fixture* fixture = _body->CreateFixture(&fixtureDef);

    sf::ConvexShape* shape = new sf::ConvexShape((unsigned int)b2shape.GetVertexCount());
    shape->setFillColor(TetriminoColors[type]);
    shape->setOutlineThickness(1.0f);
    shape->setOutlineColor(sf::Color(128,128,128));
    fixture->SetUserData(shape);

    return fixture;
}

This method adds a square to the body at a specific place. It starts by creating a physical shape as the desired box and then adds it to the body. It also creates the SFML square that will be used for the display and attaches it as user data to the fixture. We don't set the initial position because the constructor does it.

void Piece::update()
{
    const b2Transform& xf = _body->GetTransform();

    for(b2Fixture* fixture = _body->GetFixtureList(); fixture != nullptr; fixture=fixture->GetNext()) {
        sf::ConvexShape* shape = static_cast<sf::ConvexShape*>(fixture->GetUserData());
        const b2PolygonShape* b2shape = static_cast<b2PolygonShape*>(fixture->GetShape());
        const uint32 count = b2shape->GetVertexCount();
        for(uint32 i=0;i<count;++i) {
            b2Vec2 vertex = b2Mul(xf,b2shape->m_vertices[i]);
            shape->setPoint(i,sf::Vector2f(converter::metersToPixels(vertex.x),
                converter::metersToPixels(vertex.y)));
        }
    }
}

This method synchronizes the position and rotation of all the SFML shapes with the physical position and rotation calculated by Box2D. Because each piece is composed of several parts (fixtures), we need to iterate through them and update them one by one.

void Piece::rotate(float angle)
{
    _body->ApplyTorque((float32)converter::degToRad(angle),true);
}
void Piece::moveX(int direction)
{
    _body->ApplyForceToCenter(b2Vec2(converter::pixelsToMeters(direction),0),true);
}

These two methods apply some force to the object to move or rotate it. We forward the job to the Box2D library.

b2Body* Piece::getBody()const {return _body;}

void Piece::draw(sf::RenderTarget& target, sf::RenderStates states) const
{
    for(const b2Fixture* fixture=_body->GetFixtureList();fixture!=nullptr; fixture=fixture->GetNext()) {
        sf::ConvexShape* shape = static_cast<sf::ConvexShape*>(fixture->GetUserData());
        if(shape)
            target.draw(*shape,states);
    }
}

This function draws the entire piece. However, because the piece is composed of several parts, we need to iterate over them and draw them one by one in order to display the entire piece. This is done by using the user data saved in the fixtures.
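To see how these methods fit together at runtime, here is a sketch (not from the book's text) of one frame of input handling and rendering for the current piece; the key bindings and force magnitudes are illustrative assumptions:

#include <SFML/Graphics.hpp>
#include <Box2D/Box2D.h>

// Sketch: one frame of input, simulation, and rendering for the current piece.
// Piece is the class defined above.
void processFrame(b2World& world, sf::RenderWindow& window, Piece& current)
{
    if (sf::Keyboard::isKeyPressed(sf::Keyboard::Left))
        current.moveX(-32);     // push the piece to the left
    if (sf::Keyboard::isKeyPressed(sf::Keyboard::Right))
        current.moveX(32);      // push the piece to the right
    if (sf::Keyboard::isKeyPressed(sf::Keyboard::Up))
        current.rotate(45.f);   // apply a torque

    world.Step(1.f/60, 8, 3);   // advance the simulation
    current.update();           // synchronize the SFML shapes

    window.clear();
    window.draw(current);       // works because Piece derives from sf::Drawable
    window.display();
}

In the real game, the input handling would of course go through the ActionMap bindings from the Configuration class rather than raw sf::Keyboard polling.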
Summary
Since the usage of a physics engine has its own particularities, such as the units and the game loop, we have learned how to deal with them. Finally, we learned how to pair Box2D with SFML, integrated our fresh knowledge into our existing Tetris project, and built a new, fun game.

Resources for Article:
Further resources on this subject:
Skinning a character [article]
Audio Playback [article]
Sprites in Action [article]

Pointers and references

Packt
03 Jun 2015
14 min read
In this article by Ivo Balbaert, author of the book Rust Essentials, we will go through pointers and memory safety.

(For more resources related to this topic, see here.)

The stack and the heap
When a program starts, by default a 2 MB chunk of memory called the stack is granted to it. The program uses its stack to store all its local variables and function parameters; for example, an i32 variable takes 4 bytes of the stack. When our program calls a function, a new stack frame is allocated to it. Through this mechanism, the stack knows the order in which the functions are called, so that the functions return correctly to the calling code and possibly return values as well. Dynamically sized types, such as strings or arrays, can't be stored on the stack. For these values, a program can request memory space on its heap, which is a potentially much bigger piece of memory than the stack. When possible, stack allocation is preferred over heap allocation because accessing the stack is a lot more efficient.

Lifetimes
All variables in Rust code have a lifetime. Suppose we declare an n variable with the let n = 42u32; binding. Such a value is valid from where it is declared to where it is no longer referenced, which is called the lifetime of the variable. This is illustrated in the following code snippet:

fn main() {
    let n = 42u32;
    let n2 = n; // a copy of the value from n to n2
    life(n);
    println!("{}", m); // error: unresolved name `m`.
    println!("{}", o); // error: unresolved name `o`.
}

fn life(m: u32) -> u32 {
    let o = m;
    o
}

The lifetime of n ends when main() ends; in general, the start and end of a lifetime happen in the same scope. The words lifetime and scope are synonymous, but we generally use the word lifetime to refer to the extent of a reference. As in other languages, local variables or parameters declared in a function do not exist anymore after the function has finished executing; in Rust, we say that their lifetime has ended. This is the case for the m and o variables in the preceding code snippet, which are only known in the life function. Likewise, the lifetime of a variable declared in a nested block is restricted to that block, like phi in the following example:

{
    let phi = 1.618;
}
println!("The value of phi is {}", phi); // is error

Trying to use phi when its lifetime is over results in the error: unresolved name 'phi'. The lifetime of a value can be indicated in the code by an annotation such as 'a, which reads as "lifetime a", where a is simply an indicator; it could also be written as 'b, 'n, or 'life. It is common to see single letters used to represent lifetimes. In the preceding example, an explicit lifetime indication was not necessary since there were no references involved. All values tagged with the same lifetime have the same maximum lifetime. In the following example, we have a transform function that explicitly declares the lifetime of its s parameter to be 'a:

fn transform<'a>(s: &'a str) { /* ... */ }

Note the <'a> indication after the name of the function. In nearly all cases, this explicit indication is not needed because the compiler is smart enough to deduce the lifetimes, so we can simply write this:

fn transform_without_lifetime(s: &str) { /* ... */ }

Here is an example where, even when we indicate a lifetime specifier 'a, the compiler does not allow our code.
Let's suppose that we define a Magician struct as follows:

struct Magician {
    name: &'static str,
    power: u32
}

We will get an error message if we try to compile the following function:

fn return_magician<'a>() -> &'a Magician {
    let mag = Magician { name: "Gandalf", power: 4625};
    &mag
}

The error message is error: 'mag' does not live long enough. Why does this happen? The lifetime of the mag value ends when the return_magician function ends, but the function nevertheless tries to return a reference to the Magician value, which no longer exists at that point. Such an invalid reference is known as a dangling pointer. This is a situation that would clearly lead to errors and cannot be allowed. The lifetime of a pointer must always be shorter than or equal to that of the value it points to, thus avoiding dangling (or null) references.
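The usual way out of this error, shown here as a sketch rather than as the book's own code, is to return the Magician by value, handing ownership to the caller so that no reference can outlive its data:

struct Magician {
    name: &'static str,
    power: u32,
}

// Returning the value moves ownership to the caller,
// so there is no reference left to dangle.
fn return_magician() -> Magician {
    Magician { name: "Gandalf", power: 4625 }
}

fn main() {
    let mag = return_magician();
    println!("{} has power {}", mag.name, mag.power);
}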
In some situations, the decision to determine whether the lifetime of an object has ended is complicated, but in almost all cases the borrow checker does this for us automatically by inserting lifetime annotations in the intermediate code, so we don't have to do it ourselves. This is known as lifetime elision. For example, when working with structs, we can safely assume that the struct instance and its fields have the same lifetime. Only when the borrow checker is not completely sure do we need to indicate the lifetime explicitly; however, this happens only on rare occasions, mostly when references are returned. One example is when we have a struct with fields that are references. The following code snippet explains this:

struct MagicNumbers {
    magn1: &u32,
    magn2: &u32
}

This won't compile and gives us the following error: missing lifetime specifier [E0106]. Therefore, we have to change the code as follows:

struct MagicNumbers<'a> {
    magn1: &'a u32,
    magn2: &'a u32
}

This specifies that both the struct and its fields have the lifetime 'a.

Perform the following exercise. Explain why the following code won't compile:

fn main() {
    let m: &u32 = {
        let n = &5u32;
        &*n
    };
    let o = *m;
}

Answer the same question for this code snippet as well:

let mut x = &3;
{
    let mut y = 4;
    x = &y;
}

Copying values and the Copy trait
In the code that we discussed in the earlier section, the value of n is copied to a new location each time n is assigned via a new let binding or passed as a function argument:

let n = 42u32;
// no move, only a copy of the value:
let n2 = n;
life(n);
fn life(m: u32) -> u32 {
    let o = m;
    o
}

At a certain moment in the program's execution, we would have four memory locations that contain the copied value 42. Each value disappears (and its memory location is freed) when the lifetime of its corresponding variable ends, which happens at the end of the function or code block in which it is defined. Nothing much can go wrong with this Copy behavior, in which the value (its bits) is simply copied to another location on the stack. Many built-in types, such as u32 and i64, work like this, and this copy-value behavior is defined in Rust as the Copy trait, which u32 and i64 implement. You can also implement the Copy trait for your own type, provided all of its fields or items implement Copy. For example, the MagicNumber struct, which contains a field of the u64 type, can have the same behavior. There are two ways to indicate this. One way is to explicitly write the Copy implementation as follows:

struct MagicNumber {
    value: u64
}
impl Copy for MagicNumber {}

Otherwise, we can annotate it with a Copy attribute:

#[derive(Copy)]
struct MagicNumber {
    value: u64
}

(Note that in current Rust, Copy requires Clone, so you would write #[derive(Copy, Clone)], and the manual version also needs an impl of Clone.) This now means that we can create two different copies, mag and mag2, of a MagicNumber by assignment:

let mag = MagicNumber {value: 42};
let mag2 = mag;

They are copies because they have different memory addresses (the values shown will differ at each execution):

println!("{:?}", &mag as *const MagicNumber); // address is 0x23fa88
println!("{:?}", &mag2 as *const MagicNumber); // address is 0x23fa80

The *const here denotes a so-called raw pointer. A type that does not implement the Copy trait is called non-copyable. Another way to accomplish this is by letting MagicNumber implement the Clone trait:

#[derive(Clone)]
struct MagicNumber {
    value: u64
}

Then, we can use clone() to copy mag into a different object called mag3, effectively making a copy as follows:

let mag3 = mag.clone();
println!("{:?}", &mag3 as *const MagicNumber); // address is 0x23fa78

mag3 is a new object containing a new copy of the value of mag.

Pointers
The n variable in the let n = 42i32; binding is stored on the stack. Values on the stack or the heap can be accessed by pointers. A pointer is a variable that contains the memory address of some value. To access the value it points to, dereference the pointer with *. This happens automatically in simple cases, such as in println! or when a pointer is given as a parameter to a method. For example, in the following code, m is a pointer containing the address of n:

let m = &n;
println!("The address of n is {:p}", m);
println!("The value of n is {}", *m);
println!("The value of n is {}", m);

This prints out the following output, which differs for each program run:

The address of n is 0x23fb34
The value of n is 42
The value of n is 42

So, why do we need pointers? When we work with dynamically allocated values, such as a String, that can change in size, the memory address of the value is not known at compile time, so it needs to be calculated at runtime. To be able to keep track of it, we need a pointer whose value will change when the location of the String in memory changes. The compiler automatically takes care of the memory allocation of pointers and the freeing of memory when their lifetime ends. You don't have to do this yourself as in C/C++, where you could mess up by freeing memory at the wrong moment or multiple times. The incorrect use of pointers in languages such as C++ leads to all kinds of problems. However, Rust enforces a strong set of rules at compile time, called the borrow checker, so we are protected against them. We have already seen these rules in action, but from here onwards we'll explain the logic behind them. Pointers can also be passed as arguments to functions and returned from functions, but the compiler severely restricts their usage. When passing a pointer value to a function, it is always better to use the reference-dereference &* mechanism, as shown in this example:

let q = &42;
println!("{}", square(q)); // 1764
fn square(k: &i32) -> i32 {
    *k * *k
}

References
In our previous example, m, which had the &n value, is the simplest form of pointer: it is called a reference (or borrowed pointer). m is a reference to the stack-allocated n variable and has the type &i32, because it points to a value of the i32 type.
In general, when n is a value of the T type, the &n reference is of the &T type. Here, n is immutable, so m is also immutable; for example, if you try to change the value of n through m with *m = 7;, you will get the error: cannot assign to immutable borrowed content '*m'. Contrary to C, Rust does not let you change an immutable variable via its pointer. Since there is no danger of changing the value of n through a reference, multiple references to an immutable value are allowed; they can only be used to read the value, for example:

let o = &n;
println!("The address of n is {:p}", o);
println!("The value of n is {}", *o);

It prints out the same output as described earlier:

The address of n is 0x23fb34
The value of n is 42

It is clear that working with pointers like this, or in much more complex situations, necessitates much stricter rules than the Copy behavior. For example, the memory can only be freed when no variables or pointers are associated with it anymore. And when the value is mutable, can it be changed through any of its pointers? Mutable references do exist, and they are declared as let m = &mut n. However, n also has to be a mutable value. When n is immutable, the compiler rejects the m mutable reference binding with the error: cannot borrow immutable local variable 'n' as mutable. This makes sense, since immutable variables cannot be changed even when you know their memory location. To reiterate, in order to change a value through a reference, both the variable and its reference have to be mutable, as shown in the following code snippet:

let mut u = 3.14f64;
let v = &mut u;
*v = 3.15;
println!("The value of u is now {}", *v);

This prints: The value of u is now 3.15. The value at the memory location of u has been changed to 3.15. However, note that we now cannot change (or even print) that value anymore by using u; the statement u = u * 2.0; gives us the compiler error: cannot assign to 'u' because it is borrowed. We say that borrowing a variable (by making a reference to it) freezes that variable; the original u variable is frozen (and no longer usable) until the reference goes out of scope. In addition, we can only have one mutable reference: let w = &mut u; results in the error: cannot borrow 'u' as mutable more than once at a time. The compiler even adds the following note to the previous let v = &mut u; code line: note: previous borrow of 'u' occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of 'u' until the borrow ends. This is logical: the compiler is (rightfully) concerned that a change to the value of u through one reference might change its memory location, because u might change in size and no longer fit within its previous location, in which case it would have to be relocated to another address. This would render all other references to u invalid, and even dangerous, because through them we might inadvertently change another variable that has taken up the previous location of u! A mutable value can also be changed by passing its address as a mutable reference to a function, as shown in this example:

let mut m = 7;
add_three_to_magic(&mut m);
println!("{}", m); // prints out 10

With the add_three_to_magic function declared as follows:

fn add_three_to_magic(num: &mut i32) {
    *num += 3; // value is changed in place through +=
}

To summarize, when n is a mutable value of the T type, only one mutable reference to it (of the &mut T type) can exist at any time. Through this reference, the value can be changed.
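A small sketch (not from the book's text) shows that the freeze ends with the borrow's scope: once the mutable reference goes out of scope, the original variable becomes usable again:

fn main() {
    let mut u = 3.14f64;
    {
        let v = &mut u;   // u is frozen while v is alive
        *v = 3.15;
    }                     // the mutable borrow ends here
    u = u * 2.0;          // u is usable again
    println!("The value of u is now {}", u); // prints 6.3
}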
Using ref in a match
If you want to get a reference to a matched variable inside a match, use the ref keyword, as shown in the following example:

fn main() {
    let n = 42;
    match n {
        ref r => println!("Got a reference to {}", r),
    }
    let mut m = 42;
    match m {
        ref mut mr => {
            println!("Got a mutable reference to {}", mr);
            *mr = 43;
        },
    }
    println!("m has changed to {}!", m);
}

This prints out:

Got a reference to 42
Got a mutable reference to 42
m has changed to 43!

The r variable inside the match has the &i32 type. In other words, the ref keyword creates a reference for use in the pattern. If you need a mutable reference, use ref mut. We can also use ref to get a reference to a field of a struct or tuple in a destructuring via a let binding. For example, reusing the Magician struct, we can extract the name of mag by using ref and then return it from the enclosing block:

let mag = Magician { name: "Gandalf", power: 4625};
let name = {
    let Magician { name: ref ref_to_name, power: _ } = mag;
    *ref_to_name
};
println!("The magician's name is {}", name);

This prints: The magician's name is Gandalf. References are the most common pointer type and have the most possibilities; other pointer types should only be applied in very specific use cases.

Summary
In this article, we learned about the intelligence behind the Rust compiler, which is embodied in the principles of ownership, moving values, and borrowing.

Resources for Article:
Further resources on this subject:
Getting Started with NW.js [article]
Creating Random Insults [article]
Creating Man-made Materials in Blender 2.5 [article]

Adding a Graphical User Interface

Packt
03 Jun 2015
12 min read
In this article by Dr. Edward Lavieri, the author of Getting Started with Unity 5, you will learn how to use Unity 5's new User Interface (UI) system.

(For more resources related to this topic, see here.)

An overview of graphical user interfaces
A Graphical User Interface, or GUI (pronounced gooey), is a collection of visual components, such as text, buttons, and images, that facilitates a user's interaction with software. GUIs are also used to provide feedback to players. In the case of our game, the GUI allows players to interact with it; without a GUI, the user would have no visual indication of how to use the game. Imagine software without any on-screen indicators of how to use it: early user interfaces were anything but intuitive. We use GUIs all the time and might not pay too close attention to them, unless they are poorly designed. If you've ever struggled to figure out how to use an app on your smartphone, or could not figure out how to perform a specific action with desktop software, you've most likely encountered a poorly designed GUI.

Functions of a GUI
Our goal is to create a GUI for our game that both informs the user and allows for interaction between the game and the user. To that end, GUIs have two primary purposes: feedback and control. Feedback is generated by the game for the user, and control is given to the user and managed by user input. Let's look at each of these more closely.

Feedback
Feedback can come in many forms. The most common forms of game feedback are visual and audio. Visual feedback can be something as simple as text on the game screen; an example would be a game player's current score, ever-present on the game screen. Games that include dialog systems, where the player interacts with non-player characters (NPCs), usually have text feedback on the screen showing the NPCs' responses. Visual feedback can also be non-textual, such as smoke, fire, explosions, or other graphic effects. Audio feedback can be as simple as a click sound when the user clicks or taps a button, or as complex as a radar ping when an enemy submarine is detected on a long-distance sonar scan. You can probably think of all the audio feedback your favorite game provides: when you run your cart over a coin, a sound effect is played so there is no question that you earned the coin. If you take a moment to consider all of the audio feedback you are exposed to in games, you'll begin to appreciate its significance. Less typical feedback includes device vibration, which is sometimes used with smartphone applications and console games. Some attractions have taken feedback to another level through seat movement and vibration, dispensing liquid and vapor, and introducing chemicals that provide olfactory input.

Control
Giving players control of the game is the second function of GUIs. There is a wide gamut of types of control. The simplest is using buttons or menus in a game. A game might have a graphical icon of a backpack that, when clicked, gives the user access to the game's inventory management system. Control seems like an easy concept, and it is. Interestingly, many popular console games lack good GUIs, especially when it comes to control. If you play console games, think about how many times you have to refer to the printed or in-game manual. Do you intuitively know all of the controller key mappings? How do you jump, switch weapons, crouch, throw a grenade, or go into stealth mode?
In defense of the game studios that publish these games, there is a lot to control, and it can be difficult to make it all intuitive. By extension, control is often physical in addition to graphical. Physical components of control include keyboards, mice, trackballs, console controllers, microphones, and other devices.

Feedback and control
Feedback and control GUI elements are often paired. When you click or tap a button, it usually has both visual and audio effects as well as executing the user's action. When you click (control) on a treasure chest, it opens (visual feedback) and you hear the creak of the old wooden hinges (audio feedback). This example shows the power of adding feedback to control actions.

Game layers
At a primitive level, there are three layers to every game. The core or base level is the Game Layer. The top layer is the User Layer; this is the actual person playing your game. The layer in between, the GUI Layer, serves as an intermediary between the game and its player. It becomes clear that designing and developing intuitive, well-functioning GUIs is important to a game's functionality, the user experience, and the game's success.

Unity 5's UI system
Unity's UI system has recently been re-engineered and is now more powerful than ever. Perhaps the most important concept to grasp is the Canvas object. All UI elements are contained in a canvas, and projects and scenes can have more than one canvas. You can think of a canvas as a container for UI elements.

Canvas
To create a canvas, you simply navigate to and select GameObject | UI | Canvas from the drop-down menu. You can see from the GameObject | UI menu pop-up that there are 11 different UI elements. Alternatively, you can create your first UI element, such as a button, and Unity will automatically create a canvas for you and add it to your Hierarchy view. When you create subsequent UI elements, simply highlight the canvas in the Hierarchy view and then navigate to the GameObject | UI menu to select a new UI element. Here is a brief description of each of the UI elements:

Panel: A frame object
Button: A standard button that can be clicked
Text: Text with standard text formatting
Image: Images can be simple, sliced, tiled, and filled
Raw Image: A texture file
Slider: A slider with min and max values
Scrollbar: A scrollbar with values between 0 and 1
Toggle: A standard checkbox; toggles can also be grouped
Input Field: A text input field
Canvas: The game object container for UI elements
Event System: Allows us to trigger scripts from UI elements; an Event System is automatically created when you create a canvas

You can have multiple canvases in your game. As you start building larger games, you'll likely find a use for more than one canvas.

Render mode
There are a few settings in the Inspector view that you should be aware of regarding your canvas game object. The first setting is the render mode, of which there are three: Screen Space – Overlay, Screen Space – Camera, and World Space. In the first render mode, Screen Space – Overlay, the canvas is automatically resized when the user changes the size or resolution of the game screen. The second render mode, Screen Space – Camera, has a plane distance property that determines how far the canvas is rendered from the camera. The third render mode is World Space; this mode gives you the most control and can be manipulated much like any other game object. I recommend experimenting with different render modes so you know which one you like best and when to use each one.
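To make the scripting side concrete before we build the GUI, here is a small C# sketch (not from the book) touching the two points just discussed: the using UnityEngine.UI namespace and a canvas's render mode set from code. The class and field names are illustrative, and the references would be assigned in the Inspector:

using UnityEngine;
using UnityEngine.UI;   // required for the new UI types such as Text

public class HudController : MonoBehaviour
{
    public Canvas hudCanvas;   // the HUD's canvas (assigned in the Inspector)
    public Text scoreLabel;    // a UI Text element on that canvas

    private int score;

    void Start()
    {
        // Screen Space - Overlay resizes automatically with the screen.
        hudCanvas.renderMode = RenderMode.ScreenSpaceOverlay;
        scoreLabel.text = "Score: 0";
    }

    public void AddPoints(int points)
    {
        score += points;
        scoreLabel.text = "Score: " + score;
    }
}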
Creating a GUI

Creating a GUI in Unity is a relatively easy task. We first create a canvas, or have Unity create it for us when we create our first UI element. Next, we simply add the desired UI elements to our canvas. Once all the necessary elements are in our canvas, you can arrange and format them. It is often best to switch to 2D mode in the Scene view when placing the UI elements on the canvas. This simply makes the task a bit easier.

If you have used earlier versions of Unity, you'll note that several things have changed regarding creating and referencing GUI elements. For example, you'll need to include the using UnityEngine.UI; statement before referencing UI components. Also, instead of referencing GUI text as public GUIText waterHeld; you now use public Text waterHeld;.

Heads-up displays

A game's heads-up display (HUD) is graphical and textual information available to the user at all times. No action should be required of the user other than to look at a specific region of the screen to read the displays. For example, if you are playing a car-racing game, you might have an odometer, speedometer, compass, fuel tank level, air pressure, and other visual indicators always on the screen.

Creating a HUD

Here are the basic steps to create a HUD:

1. Open the game project and load the scene.
2. Navigate and select the GameObject | UI | Text option from the drop-down menu. This will result in a Canvas game object being added to the Hierarchy view, along with a text child item.
3. Select the child item in the Hierarchy view. Then, in the Inspector view, change the text to what you want displayed on the screen.
4. In the Inspector view, you can change the font size. If necessary, you can change the Horizontal Overflow option from Wrap to Overflow.
5. Zoom out in the Scene view until you can see the GUI canvas. Use the transform tools to place your new GUI element in the top-left corner of the screen. Depending on how you are viewing the scene in the Scene view, you might need to use the hand tool to rotate the scene. So, if your GUI text appears backwards, just rotate the scene until it is correct.
6. Repeat steps 2 through 5 until you've created all the HUD elements you need for your game.

Mini-maps

Miniature maps, or mini-maps, provide game players with a small visual aid that helps them maintain perspective and direction in a game. These mini-maps can be used for many different purposes, depending on the game. Some examples include the ability to view a mini-map that overlooks an enemy encampment; a zoomed-out view of the game map with friendly and enemy force indicators; and a mini-map that shows the overall tunnel map while the main game screen views the current section of tunnel.

Creating a mini-map

Here are the steps used to create a mini-map for our game:

1. Navigate and select GameObject | Camera from the top menu. In the Hierarchy view, change the name from Camera to Mini-Map.
2. With the mini-map camera selected, go to the Inspector view and click on the Layer button, then Add Layer in the pop-up menu. In the next available User Layer, add the name Mini-Map.
3. Select the Mini-Map option in the Hierarchy view, and then select Layer | Mini-Map. Now the new mini-map camera is assigned to the Mini-Map layer.
4. Next, we'll ensure the main camera is not rendering the Mini-Map layer. Select the Main Camera option in the Hierarchy view. In the Inspector view, select Culling Mask, and then deselect Mini-Map from the pop-up menu.

Now we are ready to finish the configuration of our mini-map camera.
5. Select the Mini-Map camera in the Hierarchy view. Using the transform tools in the Scene view, adjust the camera object so that it shows the area of the game environment you want visible via the mini-map.
6. In the Inspector view, under Camera, make the settings match the following values:

   - Clear Flags: Depth only
   - Culling Mask: Everything
   - Projection: Orthographic
   - Size: 25
   - Clipping Planes: Near 0.3; Far 1000
   - Viewport Rect: X 0.75; Y 0.75; W 1; H 1
   - Depth: 1
   - Rendering Path: Use Player Settings
   - Target Texture: None
   - Occlusion Culling: Selected
   - HDR: Not selected

7. With the Mini-Map camera still selected, right-click on each of the Flare Layer, GUI Layer, and Audio Listener components in the Inspector view and select Remove Component.
8. Save your scene and your project. You are ready to test your mini-map.

Mini-maps can be very powerful game components. There are a few things to keep in mind if you are going to use mini-maps in your games:

- Make sure the mini-map size does not obstruct too much of the game environment. There is nothing worse than getting shot by an enemy that you could not see because a mini-map was in the way.
- The mini-map should have a purpose; we do not include them in games because they are cool. They take up screen real estate and should only be used if needed, such as helping the player make informed decisions. In our game, the player is able to keep an eye on Colt's farm animals while he is out gathering water and corn.
- Items should be clearly visible on the mini-map. Many games use red dots for enemies, yellow for neutral forces, and blue for friendlies. This type of color-coding provides users with a lot of information at a very quick glance.
- Ideally, the user should have the flexibility to move the mini-map to a corner of their choosing and toggle it on and off. In our game, we placed the mini-map in the top-right corner of the game screen so that the HUD objects would not be in the way.

Summary

In this article, you learned about the UI system in Unity 5. You gained an appreciation for the importance of GUIs in the games we create.

Resources for Article:

Further resources on this subject:

- Bringing Your Game to Life with AI and Animations [article]
- Looking Back, Looking Forward [article]
- Introducing the Building Blocks for Unity Scripts [article]

Object-Oriented JavaScript with Backbone Classes

Packt
03 Jun 2015
9 min read
In this article by Jeremy Walker, author of the book Backbone.js Essentials, we will explore the following topics:

- The differences between JavaScript's class system and the class systems of traditional object-oriented languages
- How new, this, and prototype enable JavaScript's class system
- extend, Backbone's much easier mechanism for creating subclasses

(For more resources related to this topic, see here.)

JavaScript's class system

Programmers who use JavaScript can use classes to encapsulate units of logic in the same way as programmers of other languages. However, unlike those languages, JavaScript relies on a less popular form of inheritance known as prototype-based inheritance. Since Backbone classes are, at their core, just JavaScript classes, they too rely on the prototype system and can be subclassed in the same way as any other JavaScript class.

For instance, let's say you wanted to create your own Book subclass of the Backbone Model class with additional logic that Model doesn't have, such as book-related properties and methods. Here's how you can create such a class using only JavaScript's native object-oriented capabilities:

// Define Book's initializer
var Book = function() {
    // define Book's default properties
    this.currentPage = 1;
    this.totalPages = 1;
}

// Define Book's parent class
Book.prototype = new Backbone.Model();

// Define a method of Book
Book.prototype.turnPage = function() {
    this.currentPage += 1;
    return this.currentPage;
}

If you've never worked with prototypes in JavaScript, the preceding code may look a little intimidating. Fortunately, Backbone provides a much easier and more readable mechanism for creating subclasses. However, since that system is built on top of JavaScript's native system, it's important to first understand how the native system works. This understanding will be helpful later when you want to do more complex class-related tasks, such as calling a method defined on a parent class.

The new keyword

The new keyword is a relatively simple but extremely useful part of JavaScript's class system. The first thing that you need to understand about new is that it doesn't create objects in the same way as other languages. In JavaScript, every variable is either a function, object, or primitive, which means that when we refer to a class, what we're really referring to is a specially designed initialization function. Creating this class-like function is as simple as defining a function that modifies this and then using the new keyword to call that function.

Normally, when you call a function, its this is obvious. For instance, when you call the turnPage method of a book object, the this inside turnPage will be set to this book object, as shown here:

var simpleBook = {currentPage: 3, pages: 60};
simpleBook.turnPage = function() {
    this.currentPage += 1;
    return this.currentPage;
}
simpleBook.turnPage(); // == 4

Calling a function that isn't attached to an object (in other words, a function that is not a method) results in this being set to the global scope. In a web browser, this means the window object:

var testGlobalThis = function() {
    alert(this);
}
testGlobalThis(); // alerts window
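As a quick sanity check, here's a minimal sketch contrasting a call with and without new (assuming non-strict mode in a browser; this sketch is illustrative and not from the original text):

var Book = function() {
    this.currentPage = 1;
};

var book = new Book();            // `this` is a brand new object
console.log(book.currentPage);    // 1

Book();                           // without `new`, `this` is the global object
console.log(window.currentPage);  // 1 -- the property leaked onto window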
When we use the new keyword before calling an initialization function, three things happen (well, actually four, but we'll wait to explain the fourth one until we explain prototypes):

1. JavaScript creates a brand new object ({}) for us.
2. JavaScript sets the this inside the initialization function to the newly created object.
3. After the function finishes, JavaScript ignores the normal return value and instead returns the object that was created.

As you can see, although the new keyword is simple, it's nevertheless important because it allows you to treat initialization functions as if they really are actual classes. At the same time, it does so without violating the JavaScript principle that all variables must either be a function, object, or primitive.

Prototypal inheritance

That's all well and good, but if JavaScript has no true concept of classes, how can we create subclasses? As it turns out, every object in JavaScript has two special properties to solve this problem: prototype and __proto__ (hidden). These two properties are, perhaps, the most commonly misunderstood aspects of JavaScript, but once you learn how they work, they are actually quite simple to use.

When you call a method on an object or try to retrieve a property, JavaScript first checks whether the object has the method or property defined in the object itself. In other words, if you define a method such as this one:

book.turnPage = function() {
    this.currentPage += 1;
};

JavaScript will use that definition first when you call turnPage. In real-world code, however, you will almost never want to put methods directly in your objects, for two reasons. First, doing that will result in duplicate copies of those methods, as each instance of your class will have its own separate copy. Second, adding methods in this way requires an extra step, and that step can be easily forgotten when you create new instances.

If the object doesn't have a turnPage method defined in it, JavaScript will next check the object's hidden __proto__ property. If this __proto__ object doesn't have a turnPage method, then JavaScript will look at the __proto__ property on the object's __proto__. If that doesn't have the method, JavaScript continues to check the __proto__ of the __proto__ of the __proto__ and keeps checking each successive __proto__ until it has exhausted the chain.

This is similar to single-class inheritance in more traditional object-oriented languages, except that instead of going through a class chain, JavaScript instead uses a prototype chain. Just as in an object-oriented language, we wind up with only a single copy of each method, but instead of the method being defined on the class itself, it's defined on the class's prototype.

In a future version of JavaScript (ES6), it will be possible to work with the __proto__ object directly, but for now, the only way to actually see the __proto__ property is to use your browser's debugging tool (for instance, the Chrome Developer Tools debugger).

This means that you can't use this line of code:

book.__proto__.turnPage();

Also, you can't use the following code:

book.__proto__ = {
    turnPage: function() {
        this.currentPage += 1;
    }
};

But, if you can't manipulate __proto__ directly, how can you take advantage of it? Fortunately, it is possible to manipulate __proto__, but you can only do this indirectly by manipulating prototype.
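A minimal illustrative sketch makes this indirection tangible; because lookup walks the chain at call time, a method added to the prototype is shared by every instance, even instances created beforehand:

var Book = function() {};
var book1 = new Book();
var book2 = new Book();

// Added after the instances were created, yet both can see it,
// because the lookup happens through __proto__ at call time
Book.prototype.describe = function() {
    return 'A book';
};

console.log(book1.describe());                  // "A book"
console.log(book1.describe === book2.describe); // true -- one shared copy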
Do you remember I mentioned that the new keyword actually does four things? The fourth thing is that it sets the __proto__ property of the new object it creates to the prototype property of the initialization function. In other words, if you want to add a turnPage method to every new instance of Book that you create, you can assign this turnPage method to the prototype property of the Book initialization function. For example:

var Book = function() {};
Book.prototype.turnPage = function() {
    this.currentPage += 1;
};
var book = new Book();
book.turnPage();
// this works because book.__proto__ == Book.prototype

Since these concepts often cause confusion, let's briefly recap:

- Every object has a prototype property and a hidden __proto__ property.
- An object's __proto__ property is set to the prototype property of its constructor when it is first created and cannot be changed.
- Whenever JavaScript can't find a property or method on an object, it checks each step of the __proto__ chain until it finds one or until it runs out of chain.

Extending Backbone classes

With that explanation out of the way, we can finally get down to the workings of Backbone's subclassing system, which revolves around Backbone's extend method. To use extend, you simply call it from the class that your new subclass will be based on, and extend will return the new subclass. This new subclass will have its __proto__ property set to the prototype property of its parent class, allowing objects created with the new subclass to access all the properties and methods of the parent class. Take an example of the following code snippet:

var Book = Backbone.Model.extend();
// Book.prototype.__proto__ == Backbone.Model.prototype;
var book = new Book();
book.destroy();

In the preceding example, the last line works because JavaScript will look up the __proto__ chain, find the Model method destroy, and use it. In other words, all the functionality of our original class has been inherited by our new class.

But of course, extend wouldn't be exciting if all it could do is make exact clones of the parent classes, which is why extend takes a properties object as its first argument. Any properties or methods on this object will be added to the new class's prototype. For instance, let's try making our Book class a little more interesting by adding a property and a method:

var Book = Backbone.Model.extend({
    currentPage: 1,
    turnPage: function() {
        this.currentPage += 1;
    }
});
var book = new Book();
book.currentPage; // == 1
book.turnPage();  // increments book.currentPage by one

The extend method also allows you to create static properties or methods, or in other words, properties or methods that live on the class rather than on objects created from that class. These static properties and methods are passed in as the second classProperties argument to extend. Here's a quick example of how to add a static method to our Book class:

var Book = Backbone.Model.extend({}, {
    areBooksGreat: function() {
        alert("yes they are!");
    }
});
Book.areBooksGreat(); // alerts "yes they are!"
var book = new Book();
book.areBooksGreat(); // fails because static methods must be called on a class

As you can see, there are several advantages to Backbone's approach to inheritance over the native JavaScript approach. First, the word prototype did not appear even once in any of the previously mentioned code; while you still need to understand how prototype works, you don't have to think about it just to create a class. Another benefit is that the entire class definition is contained within a single extend call, keeping all of the class's parts together visually. Also, when we use extend, the various pieces of logic that make up the class are ordered the same way as in most other programming languages, defining the superclass first and then the initializer and properties, instead of the other way around.
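One place where prototype still surfaces, even with extend, is when an overridden method needs to invoke its parent's version while keeping the right this. Here's a minimal sketch; the set override is purely illustrative:

var LoggingBook = Backbone.Model.extend({
    set: function() {
        console.log('set called with', arguments);
        // Call the parent implementation, preserving `this`
        return Backbone.Model.prototype.set.apply(this, arguments);
    }
});

var book = new LoggingBook();
book.set('title', 'Backbone.js Essentials'); // logs, then sets as usual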
Summary

In this article, we explored how JavaScript's native class system works and how the new, this, and prototype keywords/properties form the basis of it. We also learned how Backbone's extend method makes creating new subclasses much more convenient, as well as how to use apply and call when invoking parent methods (or when providing callback functions) to preserve the desired this.

Resources for Article:

Further resources on this subject:

- Testing Backbone.js Application [Article]
- Building an app using Backbone.js [Article]
- Organizing Backbone Applications - Structure, Optimize, and Deploy [Article]

Working with Touch Gestures

Packt
03 Jun 2015
5 min read
In this article by Ajit Kumar, the author of Sencha Charts Essentials, we will cover the following topics:

- Touch gestures support in Sencha Charts
- Using gestures on existing charts
- Out-of-the-box interactions
- Creating custom interactions using touch gestures

(For more resources related to this topic, see here.)

Interacting with interactions

The interactions code is located under the ext/packages/sencha-charts/src/chart/interactions folder. The Ext.chart.interactions.Abstract class is the base class for all the chart interactions. Interactions must be associated with a chart by configuring interactions on it. Consider the following example:

Ext.create('Ext.chart.PolarChart', {
    title: 'Chart',
    interactions: ['rotate'],
    ...

The gestures config is an important config. It is an integral part of an interaction, where it tells the framework which touch gestures will be part of the interaction. It's a map where the event name is the key and the handler method name is its value. Consider the following example:

gestures: {
    tap: 'onTapGesture',
    doubletap: 'onDoubleTapGesture'
}

Once an interaction has been associated with a chart, the framework registers the events and their handlers, as listed in the gestures config, on the chart as part of the chart initialization, as shown here:

Here is what happens during each stage of the preceding flowchart:

1. The chart's construction starts when its constructor is called, either by a call to Ext.create or by xtype usage.
2. The interactions config is applied to the AbstractChart class by the class system, which calls the applyInteractions method.
3. The applyInteractions method sets the chart object on each of the interaction objects. This setter operation will call the updateChart method of the interaction class, Ext.chart.interactions.Abstract.
4. The updateChart method calls addChartListener to add the gesture-related events and their handlers.
5. The addChartListener method iterates through the gestures object and registers the listed events and their handlers on the chart object.

Interactions work on touch as well as non-touch devices (for example, desktop). On non-touch devices, the gestures are simulated based on their mouse or pointer events. For example, mousedown is treated as a tap event.
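Because the gestures map is all the framework needs to wire events to handlers, writing a custom interaction mostly comes down to extending Ext.chart.interactions.Abstract and supplying that map. Here is a minimal sketch; the class name, alias, and the item-highlighting logic in the handler are illustrative assumptions, not framework requirements:

// A minimal custom interaction sketch (names are assumptions)
Ext.define('MyApp.chart.interactions.TapHighlight', {
    extend: 'Ext.chart.interactions.Abstract',
    alias: 'interaction.taphighlight',

    // Map touch gestures to handler method names
    gestures: {
        tap: 'onTapGesture'
    },

    onTapGesture: function(e) {
        var chart = this.getChart(),
            // Hypothetical lookup of the series item under the tap
            item = chart.getItemForPoint(e.getX(), e.getY());
        if (item) {
            chart.setHighlightItem(item);
        }
    }
});

Once defined, it could be attached to a chart like any built-in interaction, for example, interactions: ['taphighlight'].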
Using built-in interactions

Here is a list of the built-in interactions:

Crosshair: This interaction helps the user to get precise x and y values for a specific point on a chart. Because of this, it is applicable to Cartesian charts only. The x and y values are obtained by single-touch dragging on the chart. The interaction also offers additional configs:

- axes: This can be used to provide label text and label rectangle configs on a per-axis basis using the left, right, top, and bottom configs, or a single config applying to all the axes. If the axes config is not specified, the axis label value is shown as the text and the rectangle will be filled with white color.
- lines: The interaction draws horizontal and vertical lines through the point on the chart. Line sprite attributes can be passed using the horizontal or vertical configs.

For example, we configure the following crosshair interaction on a CandleStick chart:

interactions: [{
    type: 'crosshair',
    axes: {
        left: {
            label: { fillStyle: 'white' },
            rect: {
                fillStyle: 'pink',
                radius: 2
            }
        },
        bottom: {
            label: {
                fontSize: '14px',
                fontWeight: 'bold'
            },
            rect: { fillStyle: 'cyan' }
        }
    }
}]

The preceding configuration will produce the following output, where the labels and rectangles on the two axes have been styled as per the configuration:

CrossZoom: This interaction allows the user to zoom in on a selected area of a chart using drag events. It is very useful in getting a microscopic view of your macroscopic data. For example, if the chart presents month-wise data for two years, using zoom you can look at the values for, say, a specific month. The interaction offers an additional config, axes, using which we indicate the axes that will be zoomed. Consider the following configuration on a CandleStick chart:

interactions: [{
    type: 'crosszoom',
    axes: ['left', 'bottom']
}]

This will produce the following output, where a user will be able to zoom in to both the left and bottom axes:

Additionally, we can control the zoom level by passing minZoom and maxZoom, as shown in the following snippet:

interactions: [{
    type: 'crosszoom',
    axes: {
        left: {
            maxZoom: 8,
            startZoom: 2
        },
        bottom: true
    }
}]

The zoom is reset when the user double-clicks on the chart.

ItemHighlight: This interaction allows the user to highlight series items in the chart. It works in conjunction with the highlight config that is configured on a series. The interaction identifies and sets the highlightItem on a chart, on which the highlight and highlightCfg configs are applied.

PanZoom: This interaction allows the user to navigate the data for one or more chart axes by panning and/or zooming. Navigation can be limited to particular axes. Pinch gestures are used for zooming, whereas drag gestures are used for panning. For devices that do not support multi-touch events, zooming cannot be done via pinch gestures; in this case, the interaction will allow the user to perform both zooming and panning using the same single-touch drag gesture. By default, zooming is not enabled. We can enable it by setting zoomOnPanGesture: true on the interaction, as shown here:

interactions: {
    type: 'panzoom',
    zoomOnPanGesture: true
}

By default, all the axes are navigable. However, the panning and zooming can be controlled at the axis level, as shown here:

{
    type: 'panzoom',
    axes: {
        left: {
            maxZoom: 5,
            allowPan: false
        },
        bottom: true
    }
}

Rotate: This interaction allows the user to rotate a polar chart about its centre. It implements the rotation using single-touch drag gestures. This interaction does not have any additional config.

RotatePie3D: This is an extension of the Rotate interaction to rotate a Pie3D chart. It does not have any additional config.

Summary

In this article, you learned how Sencha Charts offers interaction classes to build interactivity into the charts. We looked at the out-of-the-box interactions, their specific configurations, and how to use them on different types of charts.

Resources for Article:

Further resources on this subject:

- The Various Components in Sencha Touch [Article]
- Creating a Simple Application in Sencha Touch [Article]
- Sencha Touch: Catering Form Related Needs [Article]

Let's Build with AngularJS and Bootstrap

Packt
03 Jun 2015
14 min read
In this article by Stephen Radford, author of the book Learning Web Development with Bootstrap and AngularJS, we're going to use Bootstrap and AngularJS. We'll look at building a maintainable code base as well as exploring the full potential of both frameworks. (For more resources related to this topic, see here.)

Working with directives

Something we've been using already without knowing it is what Angular calls directives. These are essentially powerful functions that can be called from an attribute or even its own element, and Angular is full of them. Whether we want to loop data, handle clicks, or submit forms, Angular will speed everything up for us.

We first used a directive to initialize Angular on the page using ng-app, and all of the directives we're going to look at in this article are used in the same way: by adding an attribute to an element.

Before we take a look at some more of the built-in directives, we need to quickly make a controller. Create a new file and call it controller.js. Save this to your js directory within your project and open it up in your editor. Controllers are just standard JS constructor functions that we can inject Angular's services, such as $scope, into. These functions are instantiated when Angular detects the ng-controller attribute. As such, we can have multiple instances of the same controller within our application, allowing us to reuse a lot of code. This familiar function declaration is all we need for our controller:

function AppCtrl(){}

To let the framework know this is the controller we want to use, we need to include this on the page after Angular is loaded and also attach the ng-controller directive to our opening <html> tag:

<html ng-controller="AppCtrl">
…
<script type="text/javascript" src="assets/js/controller.js"></script>

ng-click and ng-mouseover

One of the most basic things you'll have ever done with JavaScript is listened for a click event. This could have been using the onclick attribute on an element, using jQuery, or even with an event listener. In Angular, we use a directive. To demonstrate this, we'll create a button that will launch an alert box (simple stuff). First, let's add the button to the content area we created earlier:

<div class="col-sm-8">
    <button>Click Me</button>
</div>

If you open this up in your browser, you'll see a standard HTML button created; no surprises there. Before we attach the directive to this element, we need to create a handler in our controller. This is just a function within our controller that is attached to the scope. It's very important we attach our function to the scope or we won't be able to access it from our view at all:

function AppCtrl($scope){
    $scope.clickHandler = function(){
        window.alert('Clicked!');
    };
}

As we already know, we can have multiple scopes on a page, and these are just objects that Angular allows the view and the controller to have access to. In order for the controller to have access, we've injected the $scope service into our controller. This service provides us with the scope Angular creates on the element we added the ng-controller attribute to.

Angular relies heavily on dependency injection, which you may or may not be familiar with. As we've seen, Angular is split into modules and services. Each of these modules and services depends upon one another, and dependency injection provides referential transparency. When unit testing, we can also mock objects that will be injected to confirm our test results. DI allows us to tell Angular what services our controller depends upon, and the framework will resolve these for us. An in-depth explanation of AngularJS' dependency injection can be found in the official documentation at https://docs.angularjs.org/guide/di.
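Because DI resolves dependencies by parameter name, minifiers that rename function parameters can break it; annotating the controller explicitly avoids this. A minimal sketch of the same controller with the $inject annotation:

function AppCtrl($scope){
    $scope.clickHandler = function(){
        window.alert('Clicked!');
    };
}

// Explicit annotation: Angular injects by these names,
// even if a minifier renames the function's parameters
AppCtrl.$inject = ['$scope'];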
Okay, so our handler is set up; now we just need to add our directive to the button. Just like before, we need to add it as an additional attribute. This time, we're going to pass through the name of the function we're looking to execute, which in this case is clickHandler. Angular will evaluate anything we put within our directive as an AngularJS expression, so we need to be sure to include two parentheses indicating that this is a function we're calling:

<button ng-click="clickHandler()">Click Me</button>

If you load this up in your browser, you'll be presented with an alert box when you click the button. You'll also notice that we don't need to include the $scope variable when calling the function in our view. Functions and variables that can be accessed from the view live within the current scope or any ancestor scope.

Should we wish to display our alert box on hover instead of click, it's just a case of changing the name of the directive to ng-mouseover, as they both function in the exact same way.

ng-init

The ng-init directive is designed to evaluate an expression on the current scope and can be used on its own or in conjunction with other directives. It's executed at a higher priority than other directives to ensure the expression is evaluated in time. Here's a basic example of ng-init in action:

<div ng-init="test = 'Hello, World'"></div>
{{test}}

This will display Hello, World onscreen when the application is loaded in your browser. Above, we've set the value of the test model and then used the double curly-brace syntax to display it.

ng-show and ng-hide

There will be times when you'll need to control whether an element is displayed programmatically. Both ng-show and ng-hide can be controlled by the value returned from a function or a model. We can extend the clickHandler function we created to demonstrate the ng-click directive so that it toggles the visibility of our element. We'll do this by creating a new model and toggling its value between true and false. First of all, let's create the element we're going to be showing or hiding. Pop this below your button:

<div ng-hide="isHidden">Click the button above to toggle.</div>

The value within the ng-hide attribute is our model. Because this is within our scope, we can easily modify it within our controller:

$scope.clickHandler = function(){
    $scope.isHidden = !$scope.isHidden;
};

Here we're just reversing the value of our model, which in turn toggles the visibility of our <div>. If you open up your browser, you'll notice that the element isn't hidden by default. There are a few ways we could tackle this. Firstly, we could set the value of $scope.isHidden to true within our controller. We could also set the value of isHidden to true using the ng-init directive. Alternatively, we could switch to the ng-show directive, which functions in reverse to ng-hide and will only make an element visible if a model's value is set to true.

Ensure Angular is loaded within your header or ng-hide and ng-show won't function correctly. This is because Angular uses its own classes to hide elements and these need to be loaded on page render.
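Of those options, setting the default in the controller keeps the state in one place; a minimal sketch of the controller with the default applied:

function AppCtrl($scope){
    // Hidden by default; clicking the button toggles it
    $scope.isHidden = true;

    $scope.clickHandler = function(){
        $scope.isHidden = !$scope.isHidden;
    };
}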
ng-if

Angular also includes an ng-if directive that works in a similar fashion to ng-show and ng-hide. However, ng-if actually removes the element from the DOM, whereas ng-show and ng-hide just toggle the element's visibility. Let's take a quick look at how we'd use ng-if with the preceding code:

<div ng-if="isHidden">Click the button above to toggle.</div>

If we wanted to reverse the statement's meaning, we'd simply need to add an exclamation point before our expression:

<div ng-if="!isHidden">Click the button above to toggle.</div>

ng-repeat

Something you'll come across very quickly when building a web app is the need to render an array of items. For example, in our contacts manager, this would be a list of contacts, but it could be anything. Angular allows us to do this with the ng-repeat directive. Here's an example of some data we may come across. It's an array of objects with multiple properties within it. To display the data, we're going to need to be able to access each of the properties. Thankfully, ng-repeat allows us to do just that. Here's our controller with an array of contact objects assigned to the contacts model:

function AppCtrl($scope){
    $scope.contacts = [
        {
            name: 'John Doe',
            phone: '01234567890',
            email: 'john@example.com'
        },
        {
            name: 'Karan Bromwich',
            phone: '09876543210',
            email: 'karan@email.com'
        }
    ];
}

We have just a couple of contacts here, but as you can imagine, this could be hundreds of contacts served from an API, which just wouldn't be feasible to work with without ng-repeat. First, add an array of contacts to your controller and assign it to $scope.contacts. Next, open up your index.html file and create a <ul> tag. We're going to be repeating a list item within this unordered list, so this is the element we need to add our directive to:

<ul>
    <li ng-repeat="contact in contacts"></li>
</ul>

If you're familiar with how loops work in PHP or Ruby, then you'll feel right at home here. We create a variable that we can access within the current element being looped. The variable after the in keyword references the model we created on $scope within our controller. This now gives us the ability to access any of the properties set on that object, with each iteration or item repeated gaining a new scope. We can display these on the page using Angular's double curly-brace syntax:

<ul>
    <li ng-repeat="contact in contacts">{{contact.name}}</li>
</ul>

You'll notice that this outputs the name within our list item as expected, and we can easily access any property on our contact object by referencing it using the standard dot syntax.
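Each repeated scope also receives some special properties from ng-repeat, such as $index, $first, and $last; a small sketch showing $index used to number the list:

<ul>
    <li ng-repeat="contact in contacts">
        {{$index + 1}}. {{contact.name}} ({{contact.email}})
    </li>
</ul>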
ng-class

Often there are times where you'll want to change or add a class to an element programmatically. We can use the ng-class directive to achieve this. It will let us define a class to add or remove based on the value of a model. There are a couple of ways we can utilize ng-class. In its simplest form, Angular will apply the value of the model as a CSS class to the element:

<div ng-class="exampleClass"></div>

Should the model referenced be undefined or false, Angular won't apply a class. This is great for single classes, but what if you want a little more control or want to apply multiple classes to a single element? Try this:

<div ng-class="{className: model, class2: model2}"></div>

Here, the expression is a little different. We've got a map of class names with the model we wish to check against. If the model returns true, then the class will be added to the element. Let's take a look at this in action. We'll use checkboxes with the ng-model attribute to apply some classes to a paragraph:

<p ng-class="{'text-center': center, 'text-danger': error}">Lorem ipsum dolor sit amet</p>

I've added two Bootstrap classes: text-center and text-danger. These observe a couple of models, which we can quickly change with some checkboxes:

<label><input type="checkbox" ng-model="center"> text-center</label>
<label><input type="checkbox" ng-model="error"> text-danger</label>

The single quotation marks around the class names within the expression are only required when using hyphens, or an error will be thrown by Angular. When these checkboxes are checked, the relevant classes will be applied to our element.

ng-style

In a similar way to ng-class, this directive is designed to allow us to dynamically style an element with Angular. To demonstrate this, we'll create a third checkbox that will apply some additional styles to our paragraph element. The ng-style directive uses a standard JavaScript object, with the keys being the properties we wish to change (for example, color and background). This can be applied from a model or a value returned from a function. Let's take a look at hooking it up to a function that will check a model. First, open up your controller.js file and create a new function attached to the scope. I'm calling mine styleDemo:

$scope.styleDemo = function(){
    if(!$scope.styler){
        return;
    }
    return {
        background: 'red',
        fontWeight: 'bold'
    };
};

Inside the function, we need to check the value of a model; in this example, it's called styler. If it's false, we don't need to return anything; otherwise, we're returning an object with our CSS properties. You'll notice that we used fontWeight rather than font-weight in our returned object. Either is fine, and Angular will automatically switch the camelCase over to the correct CSS property. Just remember that when using hyphens in JavaScript object keys, you'll need to wrap them in quotation marks. This model is going to be attached to a checkbox, just like we did with ng-class:

<label><input type="checkbox" ng-model="styler"> ng-style</label>

The last thing we need to do is add the ng-style directive to our paragraph element:

<p .. ng-style="styleDemo()">Lorem ipsum dolor sit amet</p>

Angular is clever enough to recall this function every time the scope changes. This means that as soon as our model's value changes from false to true, our styles will be applied, and vice versa.

ng-cloak

The final directive we're going to look at is ng-cloak. When using Angular's templates within our HTML page, the double curly braces are temporarily displayed before AngularJS has finished loading and compiling everything on our page. To get around this, we need to temporarily hide our template before it's finished rendering. Angular allows us to do this with the ng-cloak directive. This sets an additional style on our element whilst it's being loaded: display: none !important;. To ensure there's no flashing while content is being loaded, it's important that Angular is loaded in the head section of our HTML page.
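For completeness, here's a minimal sketch of ng-cloak in use; the greeting model is just a placeholder name:

<!-- Hidden until Angular has compiled the template,
     so the raw {{greeting}} braces never flash on screen -->
<div ng-cloak>
    <p>{{greeting}}</p>
</div>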
Summary

We've covered a lot in this article, so let's recap it all. Bootstrap allowed us to quickly create a responsive navigation. We needed to include the JavaScript file that ships with our Bootstrap download to enable the toggle on the mobile navigation. We also looked at the powerful responsive grid system included with Bootstrap and created a simple two-column layout. While we were doing this, we learnt about the four different column class prefixes as well as nesting our grid. To adapt our layout, we discovered some of the helper classes included with the framework to allow us to float, center, and hide elements.

In this article, we saw in detail Angular's built-in directives: functions Angular allows us to use from within our view. Before we could look at them, we needed to create a controller, which is just a function that we can pass Angular's services into using dependency injection.

Directives such as ng-click and ng-mouseover are essentially just new ways of handling events that you will have no doubt handled before using either jQuery or vanilla JavaScript. However, directives such as ng-repeat will probably be a completely new way of working. It brings some logic directly within our view to loop through data and display it on the page.

We also looked at directives that observe models on our scope and perform different actions based on their values. Directives such as ng-show and ng-hide will show or hide an element based on a model's value. We also saw this in action in ng-class, which allowed us to add some classes to our elements based on our models' values.

Resources for Article:

Further resources on this subject:

- AngularJS Performance [Article]
- AngularJS Web Application Development Cookbook [Article]
- Role of AngularJS [Article]

Implementing Membership Roles, Permissions, and Features

Packt
02 Jun 2015
34 min read
In this article by Rakhitha Nimesh Ratnayake, author of the book WordPress Web Application Development - Second Edition, we will see how to implement frontend registration and how to create a login form in the frontend. (For more resources related to this topic, see here.)

Implementing frontend registration

Fortunately, we can make use of the existing functionalities to implement registration from the frontend. We can use a regular HTTP request or an AJAX-based technique to implement this feature. In this article, I will focus on a normal process instead of using AJAX. Our first task is to create the registration form in the frontend. There are various ways to implement such forms in the frontend. Let's look at some of the possibilities, as described in the following section:

- Shortcode implementation
- Page template implementation
- Custom template implementation

Now, let's look at the implementation of each of these techniques.

Shortcode implementation

Shortcodes are the quickest way to add dynamic content to your pages. In this situation, we need to create a page for registration. Therefore, we need to create a shortcode that generates the registration form, as shown in the following code:

add_shortcode( "register_form", "display_register_form" );

function display_register_form(){
    $html = "HTML for registration form";
    return $html;
}

Then, you can add the shortcode inside the created page using the following code snippet to display the registration form:

[register_form]

Pros and cons of using shortcodes

Following are the pros and cons of using shortcodes:

- Shortcodes are easy to implement in any part of your application.
- It's hard to manage the template code assigned using PHP variables.
- There is a possibility of the shortcode getting deleted from the page by mistake.

Page template implementation

Page templates are a widely used technique in modern WordPress themes. We can create a page template to embed the registration form. Consider the following code for a sample page template:

/*
* Template Name : Registration
*/
HTML code for registration form

Next, we have to copy the template inside the theme folder. Finally, we can create a page and assign the page template to display the registration form. Now, let's look at the pros and cons of this technique.

Pros and cons of page templates

Following are the pros and cons of page templates:

- A page template is more stable than a shortcode.
- Generally, page templates are associated with the look of the website rather than providing dynamic forms. The full-width page, two-column page, and left-sidebar page are some common implementations of page templates.
- A template is managed separately from logic, without using PHP variables.
- Page templates depend on the theme and need to be updated on theme switching.

Custom template implementation

Experienced web application developers will always look to separate business logic from view templates. This will be the perfect technique for such people. In this technique, we will create our own independent templates by intercepting the WordPress default routing process. An implementation of this technique starts from the next section on routing.

Building a simple router for a user module

Routing is one of the important aspects of advanced application development. We need to figure out ways of building custom routes for specific functionalities. In this scenario, we will create a custom router to handle all the user-related functionalities of our application.
Let's list the requirements for building the router:

- All the user-related functionalities should go through a custom URL, such as http://www.example.com/user
- Registration should be implemented at http://www.example.com/user/register
- Login should be implemented at http://www.example.com/user/login
- Activation should be implemented at http://www.example.com/user/activate

Make sure to set up your permalinks structure to post name for the examples in this article. If you prefer a different permalinks structure, you will have to update the URLs and routing rules accordingly.

As you can see, the user section is common for all the functionalities. The second URL segment changes dynamically based on the functionality. In MVC terms, user acts as the controller and the next URL segment (register, login, and activate) acts as the action. Now, let's see how we can implement a custom router for the given requirements.

Creating the routing rules

There are various ways and action hooks used to create custom rewrite rules. We will choose the init action to define our custom routes for the user section, as shown in the following code:

public function manage_user_routes() {
    add_rewrite_rule( '^user/([^/]+)/?',
        'index.php?control_action=$matches[1]', 'top' );
}

Based on the discussed requirements, all the URLs for the user section will follow the /user/custom action pattern. Therefore, we will define the regular expression for matching all the routes in the user section. Redirection is made to the index.php file with a query variable called control_action. This variable will contain the URL segment after the /user segment. The third parameter of the add_rewrite_rule function will decide whether to check this rewrite rule before the existing rules or after them. The value of top will give a higher precedence, while the value of bottom will give a lower precedence.

We need to complete two other tasks to get these rewriting rules to take effect:

1. Add query variables to the WordPress query_vars
2. Flush the rewriting rules

Adding query variables

WordPress doesn't allow you to use just any variable in the query string. It will check for query variables within the existing list, and all other variables will be ignored. Whenever we want to use a new query variable, we need to make sure to add it to the existing list. First, we need to update our constructor with the following filter to customize query variables:

add_filter( 'query_vars', array( $this, 'manage_user_routes_query_vars' ) );

This filter on query_vars will allow us to customize the list of existing variables by adding or removing entries from an array. Now, consider the implementation to add a new query variable:

public function manage_user_routes_query_vars( $query_vars ) {
    $query_vars[] = 'control_action';
    return $query_vars;
}

As this is a filter, the existing query_vars variable will be passed as an array. We will modify the array by adding a new query variable called control_action and return the list. Now, we have the ability to access this variable from the URL.

Flushing the rewriting rules

Once rewrite rules are modified, it's a must to flush the rules in order to prevent 404 page generation. Flushing existing rules is a time-consuming task, which impacts the performance of the application and hence should be avoided in repetitive actions such as init. It's recommended that you perform such tasks on plugin activation or installation, as we did earlier in user roles and capabilities.
So, let's implement the function for flushing rewrite rules on plugin activation:

public function flush_application_rewrite_rules() {
    flush_rewrite_rules();
}

As usual, we need to update the constructor to include the following action to call the flush_application_rewrite_rules function:

register_activation_hook( __FILE__, array( $this, 'flush_application_rewrite_rules' ) );

Now, go to the admin panel, deactivate the plugin, and activate the plugin again. Then, go to the URL http://www.example.com/user/login and check whether it works. Unfortunately, you will still get the 404 error for the request. You might be wondering what went wrong. Let's go back and think about the process in order to understand the issue. We flushed the rules on plugin activation, so the new rules should persist successfully. However, we define the rules on the init action, which is only executed after the plugin is activated. Therefore, the new rules will not be available at the time of flushing. Consider the updated version of the flush_application_rewrite_rules function for a quick fix to our problem:

public function flush_application_rewrite_rules() {
    $this->manage_user_routes();
    flush_rewrite_rules();
}

We call the manage_user_routes function on plugin activation, followed by the call to flush_rewrite_rules. So, the new rules are generated before flushing is executed. Now, follow the previous process once again; you won't get a 404 page, since all the rules have taken effect.

You can get 404 errors due to modifications in rewriting rules that were not flushed properly. In such situations, go to the Permalinks section on the Settings page and click on the Save Changes button to flush the rewrite rules manually.

Now, we are ready with our routing rules for user functionalities. It's important to know the existing routing rules of your application. Even though we can have a look at the routing rules from the database, it's difficult to decode the serialized array, as we encountered in the previous section. So, I recommend that you use the free plugin called Rewrite Rules Inspector. You can grab a copy at http://wordpress.org/plugins/rewrite-rules-inspector/. Once installed, this plugin allows you to view all the existing routing rules as well as offering a button to flush the rules, as shown in the following screen:

Controlling access to your functions

We have a custom router, which handles the URLs of the user section of our application. Next, we need a controller to handle the requests and generate the template for the user. This works similar to the controllers in the MVC pattern. Even though we have changed the default routing, WordPress will look for an existing template to be sent back to the user. Therefore, we need to intercept this process and create our own templates. WordPress offers an action hook called template_redirect for intercepting requests. So, let's implement our frontend controller based on template_redirect. First, we need to update the constructor with the template_redirect action, as shown in the following code:

add_action( 'template_redirect', array( $this, 'front_controller' ) );

Now, let's take a look at the implementation of the front_controller function using the following code:

public function front_controller() {
    global $wp_query;
    $control_action = isset( $wp_query->query_vars['control_action'] ) ?
        $wp_query->query_vars['control_action'] : '';
    switch ( $control_action ) {
        case 'register':
            do_action( 'wpwa_register_user' );
            break;
    }
}

We will be handling custom routes based on the value of the control_action query variable assigned in the previous section. The value of this variable can be grabbed through the global query_vars array of the $wp_query object. Then, we can use a simple switch statement to handle the controlling based on the action. The first action to consider will be register, as we are in the registration process. Once the control_action query variable is matched with registration, we will call a handler function using do_action. You might be confused about why we use do_action in this scenario. So, let's consider the same implementation in a normal PHP application, where we don't have the do_action hook:

switch ( $control_action ) {
    case 'register':
        $this->register_user();
        break;
}

This is the typical scenario, where we call a function within the class or in an external class to implement the registration. In the previous code, we called a function within the class, but with the do_action hook instead of the usual function call.

The advantages of using the do_action function

WordPress action hooks define specific points in the execution process, where we can develop custom functions to modify existing behavior. In this scenario, we are calling the wpwa_register_user function within the class using do_action. Unlike websites or blogs, web applications need to be extendable with future requirements. Think of a situation where we only allow Gmail addresses for user registration. This Gmail validation is not implemented in the original code. Therefore, we need to change the existing code to implement the necessary validations. Changing a working component is considered bad practice in application development. Let's see why it's considered a bad practice by looking at the definition of the open/closed principle on Wikipedia:

"Open/closed principle states "software entities (classes, modules, functions, and so on) should be open for extension, but closed for modification"; that is, such an entity can allow its behavior to be modified without altering its source code. This is especially valuable in a production environment, where changes to the source code may necessitate code reviews, unit tests, and other such procedures to qualify it for use in a product: the code obeying the principle doesn't change when it is extended, and therefore, needs no such effort."

WordPress action hooks come to our rescue in this scenario. We can define an action for registration using the add_action function, as shown in the following code:

add_action( 'wpwa_register_user', array( $this, 'register_user' ) );

Now, you can implement this action multiple times using different functions. In this scenario, register_user will be our primary registration handler. For Gmail validation, we can define another function using the following code:

add_action( 'wpwa_register_user', array( $this, 'validate_gmail_registration' ) );

Inside this function, we can make the necessary validations, as shown in the following code:

public function validate_gmail_registration(){
    // Code to validate user
    // remove registration function if validation fails
    remove_action( 'wpwa_register_user', array( $this, 'register_user' ) );
}

Now, the validation function is executed before the primary function (provided it is registered to run first, that is, with a lower priority number). So, we can remove the primary registration function if something goes wrong in validation.
With this technique, we have the capability of adding new functionalities as well as changing existing functionalities without affecting the already written code. We have implemented a simple controller, which can be quite effective in developing web application functionalities. In the following sections, we will continue the process of implementing registration on the frontend with custom templates.

Creating custom templates

Themes provide a default set of templates to cater to the existing behavior of WordPress. Here, we are trying to implement a custom template system to suit web applications. So, our first option is to include the template files directly inside the theme. Personally, I don't like this option for two possible reasons:

- Whenever we switch the theme, we have to move the custom template files to the new theme. So, our templates become theme dependent.
- In general, all existing templates are related to CMS functionality. Mixing custom templates with the existing ones becomes hard to manage.

As a solution to these concerns, we will implement the custom templates inside the plugin. First, create a folder inside the current plugin folder and name it templates to get things started.

Designing the registration form

We need to design a custom form for frontend registration containing the default header and footer. The whole content area will be used for the registration, and the default sidebar will be omitted for this screen. Create a PHP file called register-template.php inside the templates folder with the following code:

<?php get_header(); ?>
<div id="wpwa_custom_panel">
<?php
if( isset( $errors ) && count( $errors ) > 0 ) {
    foreach( $errors as $error ){
        echo '<p class="wpwa_frm_error">' . $error . '</p>';
    }
}
?>
HTML Code for Form
</div>
<?php get_footer(); ?>

We can include the default header and footer using the get_header and get_footer functions, respectively. After the header, we will include a display area for the error messages generated in registration. Then, we have the HTML form, as shown in the following code:

<form id='registration-form' method='post' action='<?php echo get_site_url() . '/user/register'; ?>'>
    <ul>
        <li>
            <label class='wpwa_frm_label'><?php echo __('Username','wpwa'); ?></label>
            <input class='wpwa_frm_field' type='text' id='wpwa_user' name='wpwa_user' value='' />
        </li>
        <li>
            <label class='wpwa_frm_label'><?php echo __('Email','wpwa'); ?></label>
            <input class='wpwa_frm_field' type='text' id='wpwa_email' name='wpwa_email' value='' />
        </li>
        <li>
            <label class='wpwa_frm_label'><?php echo __('User Type','wpwa'); ?></label>
            <select class='wpwa_frm_field' name='wpwa_user_type'>
                <option><?php echo __('Follower','wpwa'); ?></option>
                <option><?php echo __('Developer','wpwa'); ?></option>
                <option><?php echo __('Member','wpwa'); ?></option>
            </select>
        </li>
        <li>
            <label class='wpwa_frm_label' for=''>&nbsp;</label>
            <input type='submit' value='<?php echo __('Register','wpwa'); ?>' />
        </li>
    </ul>
</form>

As you can see, the form action is set to a custom route called user/register, to be handled through the front controller. Also, we have added an extra field called user type to choose the preferred user type on registration.

You might have noticed that we used wpwa as the prefix for form element names, element IDs, as well as CSS classes. Even though it's not a must to use a prefix, it can be highly effective when working with multiple third-party plugins. A unique plugin-specific prefix avoids or limits conflicts with other plugins and themes.
We will get a screen similar to the following one once we access the /user/register link in the browser:

Once the form is submitted, we have to create the user based on the application requirements.

Planning the registration process

In this application, we have opted to build a complex registration process in order to understand the typical requirements of web applications. So, it's better to plan it upfront before moving into the implementation. Let's build a list of requirements for registration:

- The user should be able to register as any of the given user roles
- An activation code needs to be generated and sent to the user
- The default notification on successful registration needs to be customized to include the activation link
- Users should activate their account by clicking the link

So, let's begin the task of registering users by displaying the registration form, as given in the following code:

public function register_user() {
    if ( !is_user_logged_in() ) {
        include dirname(__FILE__) . '/templates/register-template.php';
        exit;
    }
}

Once a user requests /user/register, our controller will call the register_user function using the do_action call. In the initial request, we need to check whether a user is already logged in using the is_user_logged_in function. If not, we can directly include the registration template located inside the templates folder to display the registration form.

WordPress templates can be included using the get_template_part function. However, it doesn't work like a typical template library, as we cannot pass data to the template. In this technique, we are including the template directly inside the function. Therefore, we have access to the data inside this function.

Handling registration form submission

Once the user fills in the data and clicks the submit button, we have to execute quite a few tasks in order to register a user in the WordPress database. Let's figure out the main tasks for registering a user:

1. Validating form data
2. Registering the user details
3. Creating and saving the activation code
4. Sending e-mail notifications with an activation link

In the registration form, we specified the action as /user/register, and hence the same register_user function will be used to handle form submission. Validating user data is one of the main tasks in form submission handling. So, let's take a look at the register_user function with the updated code:

public function register_user() {
    if ( $_POST ) {
        $errors = array();
        $user_login = ( isset( $_POST['wpwa_user'] ) ? $_POST['wpwa_user'] : '' );
        $user_email = ( isset( $_POST['wpwa_email'] ) ? $_POST['wpwa_email'] : '' );
        $user_type = ( isset( $_POST['wpwa_user_type'] ) ? $_POST['wpwa_user_type'] : '' );

        // Validating user data
        if ( empty( $user_login ) )
            array_push( $errors, __('Please enter a username.','wpwa') );
        if ( empty( $user_email ) )
            array_push( $errors, __('Please enter e-mail.','wpwa') );
        if ( empty( $user_type ) )
            array_push( $errors, __('Please enter user type.','wpwa') );
    }
    // Including the template
}

The following steps are performed:

1. First, we check whether the request was made as POST.
2. Then, we get the form data from the POST array.
3. Finally, we check the passed values for empty conditions and push the error messages to the $errors variable created at the beginning of this function.
Now, we can move into more advanced validations inside the register_user function, as shown in the following code:

$sanitized_user_login = sanitize_user( $user_login );
if ( !empty($user_email) && !is_email( $user_email ) )
  array_push( $errors, __('Please enter a valid email.','wpwa') );
elseif ( email_exists( $user_email ) )
  array_push( $errors, __('User with this email already registered.','wpwa') );

if ( empty( $sanitized_user_login ) || !validate_username( $user_login ) )
  array_push( $errors, __('Invalid username.','wpwa') );
elseif ( username_exists( $sanitized_user_login ) )
  array_push( $errors, __('Username already exists.','wpwa') );

The steps to perform are as follows:

- First, we use the existing sanitize_user function to remove unsafe characters from the username.
- Then, we validate the e-mail to check whether it's valid and whether it already exists in the system. Both the email_exists and username_exists functions check for the existence of an e-mail and username in the database.

Once all the validations are completed, the $errors array will be either empty or filled with error messages. In this scenario, we chose to go with the most essential validations for the registration form. You can add more advanced validation in your implementations in order to minimize potential security threats. In case we get validation errors in the form, we can directly print the contents of the errors array on top of the form, as it's visible to the registration template. Here is a preview of our registration screen with generated error messages:

Also, it's important to repopulate the form values once errors are generated. We are using the same function for loading the registration form and handling form submission. Therefore, we can directly access the POST variables inside the template to echo the values, as shown in the updated registration form:

<form id='registration-form' method='post' action='<?php echo get_site_url() . '/user/register'; ?>'>
  <ul>
    <li>
      <label class='wpwa_frm_label'><?php echo __('Username','wpwa'); ?></label>
      <input class='wpwa_frm_field' type='text' id='wpwa_user' name='wpwa_user' value='<?php echo isset( $user_login ) ? $user_login : ''; ?>' />
    </li>
    <li>
      <label class='wpwa_frm_label'><?php echo __('Email','wpwa'); ?></label>
      <input class='wpwa_frm_field' type='text' id='wpwa_email' name='wpwa_email' value='<?php echo isset( $user_email ) ? $user_email : ''; ?>' />
    </li>
    <li>
      <label class='wpwa_frm_label'><?php echo __('User Type','wpwa'); ?></label>
      <select class='wpwa_frm_field' name='wpwa_user_type'>
        <option <?php echo (isset( $user_type ) && $user_type == 'follower') ? 'selected' : ''; ?> value='follower'><?php echo __('Follower','wpwa'); ?></option>
        <option <?php echo (isset( $user_type ) && $user_type == 'developer') ? 'selected' : ''; ?> value='developer'><?php echo __('Developer','wpwa'); ?></option>
        <option <?php echo (isset( $user_type ) && $user_type == 'member') ? 'selected' : ''; ?> value='member'><?php echo __('Member','wpwa'); ?></option>
      </select>
    </li>
    <li>
      <label class='wpwa_frm_label' for=''>&nbsp;</label>
      <input type='submit' value='<?php echo __('Register','wpwa'); ?>' />
    </li>
  </ul>
</form>
Exploring the registration success path

Now, let's look at the success path, where we don't have any errors, by looking at the remaining sections of the register_user function:

if ( empty( $errors ) ) {
  $user_pass = wp_generate_password();
  $user_id = wp_insert_user( array(
    'user_login' => $sanitized_user_login,
    'user_email' => $user_email,
    'role' => $user_type,
    'user_pass' => $user_pass
  ) );

  if ( !$user_id ) {
    array_push( $errors, __('Registration failed.','wpwa') );
  } else {
    $activation_code = $this->random_string();
    update_user_meta( $user_id, 'wpwa_activation_code', $activation_code );
    update_user_meta( $user_id, 'wpwa_activation_status', 'inactive' );
    wp_new_user_notification( $user_id, $user_pass, $activation_code );
    $success_message = __('Registration completed successfully. Please check your email for activation link.','wpwa');
  }

  if ( !is_user_logged_in() ) {
    include dirname(__FILE__) . '/templates/login-template.php';
    exit;
  }
}

We can generate the default password using the wp_generate_password function. Then, we can use the wp_insert_user function with the respective parameters generated from the form to save the user in the database.

The wp_insert_user function will be used to update the current user or add new users to the application. Make sure you are not logged in while executing this function; otherwise, your admin will suddenly change into another user type after using this function.

If the system fails to save the user, we can create a registration failed message and assign it to the $errors variable as we did earlier. Once the registration is successful, we will generate a random string as the activation code. You can use any function here to generate a random string. Then, we update the user with the activation code and set the activation status as inactive for the moment. Finally, we will use the wp_new_user_notification function to send an e-mail containing the registration details. By default, this function takes the user ID and password and sends the login details. In this scenario, we have a problem, as we need to send an activation link with the e-mail.

This is a pluggable function, and hence we can create our own implementation of it to override the default behavior. Since this is a built-in WordPress function, we cannot declare it inside our plugin class. So, we will implement it as a standalone function inside our main plugin file. The full source code for this function will not be included here, as it is quite extensive. I'll explain the modified code from the original function, and you can have a look at the source code for the complete implementation:

$activate_link = site_url() . "/user/activate/?wpwa_activation_code=$activate_code";
$message = __('Hi there,') . "\r\n\r\n";
$message .= sprintf( __('Welcome to %s! Please activate your account using the link:','wpwa'), get_option('blogname') ) . "\r\n\r\n";
$message .= sprintf( __('<a href="%s">%s</a>','wpwa'), $activate_link, $activate_link ) . "\r\n";
$message .= sprintf( __('Username: %s','wpwa'), $user_login ) . "\r\n";
$message .= sprintf( __('Password: %s','wpwa'), $plaintext_pass ) . "\r\n\r\n";

We create a custom activation link using the third parameter passed to this function. Then, we modify the existing message to include the activation link. That's about all we need to change from the original function.
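To make the shape of that override clearer, here is a minimal skeleton of how the standalone function could be declared in the main plugin file. This is a sketch reconstructed from the description above; the function_exists guard and the wp_mail call are my assumptions about the surrounding code, while the message-building lines are the ones shown earlier:

if ( !function_exists( 'wp_new_user_notification' ) ) {
  // Plugins load before wp-includes/pluggable.php, so this definition takes precedence
  function wp_new_user_notification( $user_id, $plaintext_pass = '', $activate_code = '' ) {
    $user = get_userdata( $user_id );
    $user_login = stripslashes( $user->user_login );
    $user_email = stripslashes( $user->user_email );

    // ... build $activate_link and $message exactly as shown above ...

    wp_mail( $user_email, sprintf( __('[%s] Your username and password','wpwa'), get_option('blogname') ), $message );
  }
}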
Finally, we set the success message to be passed into the login screen. Now, let's move back to the register_user function. Once the notification is sent, the registration process is completed, and the user will be redirected to the login screen. Once the user has the e-mail in their inbox, they can use the activation link to activate the account.

Automatically logging in the user after registration

In general, most web applications use e-mail confirmation before allowing users to log in to the system. However, there can be certain scenarios where we need to automatically authenticate the user into the application. A social network sign-in is a great example of such a scenario. When using social network logins, the system checks whether the user is already registered. If not, the application automatically registers the user and authenticates them. We can easily modify our code to implement an automatic login after registration. Consider the following code:

if ( !is_user_logged_in() ) {
  wp_set_auth_cookie( $user_id, false, is_ssl() );
  include dirname(__FILE__) . '/templates/login-template.php';
  exit;
}

The registration code is updated to use the wp_set_auth_cookie function. Once it's called, the user authentication cookie will be created, and hence the user will be considered automatically signed in. Then, we redirect to the login page as usual. Since the user is already logged in through the authentication cookie, they will be redirected back to the home page with access to the backend. This is an easy way of automatically authenticating users into WordPress.

Activating system users

Once the user clicks on the activation link, a redirection will be made to the /user/activate URL of the application. So, we need to modify our controller with a new case for activation, as shown in the following code:

case 'activate':
  do_action( 'wpwa_activate_user' );

As usual, the definition of add_action goes in the constructor, as shown in the following code:

add_action( 'wpwa_activate_user', array( $this, 'activate_user' ) );

Next, we can have a look at the actual implementation of the activate_user function:

public function activate_user() {
  $activation_code = isset( $_GET['wpwa_activation_code'] ) ? $_GET['wpwa_activation_code'] : '';
  $message = '';

  // Get activation record for the user
  $user_query = new WP_User_Query( array(
    'meta_key' => 'wpwa_activation_code',
    'meta_value' => $activation_code
  ) );
  $users = $user_query->get_results();

  // Check and update activation status
  if ( !empty( $users ) ) {
    $user_id = $users[0]->ID;
    update_user_meta( $user_id, 'wpwa_activation_status', 'active' );
    $message = __('Account activated successfully.','wpwa');
  } else {
    $message = __('Invalid Activation Code','wpwa');
  }

  include dirname(__FILE__) . '/templates/info-template.php';
  exit;
}

We get the activation code from the link and query the database for a matching entry. If no record is found, we set the message as activation failed; otherwise, we update the activation status of the matching user to activate the account. Upon activation, the user will be given a message using the info-template.php template, which consists of a very basic template like the following one:

<?php get_header(); ?>
<div id='wpwa_info_message'><?php echo $message; ?></div>
<?php get_footer(); ?>

Once the user visits the activation page at the /user/activate URL, information will be given to the user, as illustrated in the following screen:

We successfully created and activated a new user.
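Before moving on to login, one loose end: the random_string helper used to generate the activation code is not listed in this article. Since any hard-to-guess string will do, one possible implementation (an assumption on my part, not the book's exact code) simply reuses WordPress's own password generator:

private function random_string( $length = 20 ) {
  // wp_generate_password( length, include special chars, include extra special chars )
  return wp_generate_password( $length, false, false );
}

Disabling the special characters keeps the generated code safe to embed in the activation URL's query string.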
The final task of this process is to authenticate and log the user into the system. Let's see how we can create the login functionality.

Creating a login form in the frontend

The frontend login can be found on many WordPress websites, including small blogs. Usually, we place the login form in the sidebar of the website. In web applications, user interfaces are complex and different compared to normal websites. Hence, we will implement a full-page login screen, as we did with registration. First, we need to update our controller with another case for login, as shown in the following code:

switch ( $control_action ) {
  // Other cases
  case 'login':
    do_action( 'wpwa_login_user' );
    break;
}

This action will be executed once the user enters /user/login in the browser URL to display the login form. The design form for login will be located in the templates directory as a separate template called login-template.php. Here is the implementation of the login form design with the necessary error messages:

<?php get_header(); ?>
<div id='wpwa_custom_panel'>
<?php
  if (isset($errors) && count($errors) > 0) {
    foreach ($errors as $error) {
      echo '<p class="wpwa_frm_error">' . $error . '</p>';
    }
  }
  if( isset( $success_message ) && $success_message != "" ){
    echo '<p class="wpwa_frm_success">' . $success_message . '</p>';
  }
?>
<form method='post' action='<?php echo site_url(); ?>/user/login' id='wpwa_login_form' name='wpwa_login_form'>
  <ul>
    <li>
      <label class='wpwa_frm_label' for='username'><?php echo __('Username','wpwa'); ?></label>
      <input class='wpwa_frm_field' type='text' name='wpwa_username' value='<?php echo isset( $username ) ? $username : ''; ?>' />
    </li>
    <li>
      <label class='wpwa_frm_label' for='password'><?php echo __('Password','wpwa'); ?></label>
      <input class='wpwa_frm_field' type='password' name='wpwa_password' value="" />
    </li>
    <li>
      <label class='wpwa_frm_label'>&nbsp;</label>
      <input type='submit' name='submit' value='<?php echo __('Login','wpwa'); ?>' />
    </li>
  </ul>
</form>
</div>
<?php get_footer(); ?>

Similar to the registration template, we have a header, error messages, the HTML form, and the footer in this template. We have to point the action of this form to /user/login. The remaining code is self-explanatory, and hence I am not going to explain it in detail. You can take a look at the preview of our login screen in the following screenshot:

Next, we need to implement the form submission handler for the login functionality. Before this, we need to update our plugin constructor with the following code to define another custom action for login:

add_action( 'wpwa_login_user', array( $this, 'login_user' ) );

Once the user requests /user/login from the browser, the controller will execute the do_action( 'wpwa_login_user' ) function to load the login form in the frontend.

Displaying the login form

We will use the same function to handle both template inclusion and form submission for login, as we did earlier with registration. So, let's look at the initial code of the login_user function for including the template:

public function login_user() {
  if ( !is_user_logged_in() ) {
    include dirname(__FILE__) . '/templates/login-template.php';
  } else {
    wp_redirect( home_url() );
  }
  exit;
}

First, we need to check whether the user has already logged in to the system. Based on the result, we will redirect the user to the login template or the home page for the moment. Once the whole system is implemented, we will be redirecting logged-in users to their own admin area. Now, we can take a look at the implementation of the login to finalize our process.
Let's take a look at the form submission handling part of the login_user function:

if ( $_POST ) {
  $errors = array();
  $username = isset ( $_POST['wpwa_username'] ) ? $_POST['wpwa_username'] : '';
  $password = isset ( $_POST['wpwa_password'] ) ? $_POST['wpwa_password'] : '';

  if ( empty( $username ) )
    array_push( $errors, __('Please enter a username.','wpwa') );
  if ( empty( $password ) )
    array_push( $errors, __('Please enter password.','wpwa') );

  if( count($errors) > 0 ){
    include dirname(__FILE__) . '/templates/login-template.php';
    exit;
  }

  $credentials = array();
  $credentials['user_login'] = $username;
  $credentials['user_login'] = sanitize_user( $credentials['user_login'] );
  $credentials['user_password'] = $password;
  $credentials['remember'] = false;
  // Rest of the code
}

As usual, we need to validate the POST data and generate the necessary errors to be shown in the frontend. Once the validations are successfully completed, we assign all the form data to an array after sanitizing the values. The username and password are contained in the credentials array with the user_login and user_password keys. The remember key defines whether to remember the password or not. Since we don't have a remember checkbox in our form, it will be set to false. Next, we need to execute the WordPress login function in order to log the user into the system, as shown in the following code:

$user = wp_signon( $credentials, false );
if ( is_wp_error( $user ) )
  array_push( $errors, $user->get_error_message() );
else
  wp_redirect( home_url() );

WordPress handles user authentication through the wp_signon function. We have to pass all the credentials generated in the previous code, with an additional second parameter of true or false to define whether to use a secure cookie. We can set it to false for this example. The wp_signon function will return an object of the WP_User or the WP_Error class based on the result.

Internally, this function sets an authentication cookie. Users will not be logged in if it is not set. If you are using any other process for authenticating users, you have to set this authentication cookie manually. Once a user is successfully authenticated, a redirection will be made to the home page of the site. Now, we should have the ability to authenticate users from the login form in the frontend.

Checking whether we implemented the process properly

Take a moment to think carefully about our requirements and try to figure out what we have missed. Actually, we didn't check the activation status on login. Therefore, any user would be able to log in to the system without activating their account. Now, let's fix this issue by intercepting the authentication process with a built-in filter called authenticate, as shown in the following code:

public function authenticate_user( $user, $username, $password ) {
  if( !empty($username) && !is_wp_error($user) ){
    $user = get_user_by( 'login', $username );
    if ( !in_array( 'administrator', (array) $user->roles ) ) {
      $active_status = '';
      $active_status = get_user_meta( $user->data->ID, 'wpwa_activation_status', true );
      if ( 'inactive' == $active_status ) {
        $user = new WP_Error( 'denied', __('<strong>ERROR</strong>: Please activate your account.','wpwa') );
      }
    }
  }
  return $user;
}

This function will be called during authentication, receiving the user, username, and password variables as default parameters. All the user types of our application need to be activated, except for the administrator accounts. Therefore, we check the roles of the authenticated user to figure out whether they are an admin.
Then, we can check the activation status of other user types before authenticating. If an authenticated user is in the inactive status, we can return a WP_Error object and prevent authentication from succeeding. Last but not least, we have to register the authenticate filter in the controller to make it work, as shown in the following code:

add_filter( 'authenticate', array( $this, 'authenticate_user' ), 30, 3 );

This filter is also executed when the user logs out of the application. Therefore, we need the following validation to prevent any errors in the logout process:

if( !empty($username) && !is_wp_error($user) )

Now, we have a simple and useful user registration and login system, ready to be implemented in the frontend of web applications. Make sure to check the login- and registration-related plugins from the official repository to gain knowledge of complex requirements in real-world scenarios.

Time to practice

In this article, we implemented simple registration and login functionality from the frontend. Before we have a complete user creation and authentication system, there are plenty of other tasks to be completed. So, I would recommend you try out the following tasks in order to be comfortable with implementing such functionalities for web applications:

- Create a frontend functionality for the lost password
- Block the default WordPress login page and redirect it to our custom page
- Include extra fields in the registration form

Make sure to try out these exercises and validate your answers against the implementations provided on the website for this book.

Summary

In this article, we looked at how we can customize the built-in registration and login process in the frontend to cater to advanced requirements in web application development. By now, you should be capable of creating custom routers for common modules, implementing custom controllers with custom template systems, and customizing the existing user registration and authentication process.

Resources for Article:

Further resources on this subject:
Web Application Testing [Article]
Creating Blog Content in WordPress [Article]
WordPress 3: Designing your Blog [Article]

Introducing PrimeFaces
Packt
02 Jun 2015
21 min read
In this article by Mert Çalışkan and Oleg Varaksin, authors of PrimeFaces Cookbook - Second Edition, we will cover the following recipes:

- Setting up and configuring the PrimeFaces library
- AJAX basics with process and update
- PrimeFaces selectors
- Internationalization (i18n) and Localization (L10n)

This article will provide details on the setup and configuration of PrimeFaces, along with the basics of the PrimeFaces AJAX mechanism. The goal of this article is to provide a sneak preview of some of the features of PrimeFaces, such as the AJAX processing mechanism, Internationalization, and Localization. (For more resources related to this topic, see here.)

Setting up and configuring the PrimeFaces library

PrimeFaces is a lightweight JSF component library with one JAR file, which needs no configuration and does not contain any required external dependencies. To start with the development of the library, all we need is the artifact for the library.

Getting ready

You can download the PrimeFaces library from http://primefaces.org/downloads.html, and you need to add the primefaces-{version}.jar file to your classpath. After that, all you need to do is import the namespace of the library, which is necessary to add the PrimeFaces components to your pages, to get started. If you are using Maven (for more information on installing Maven, please visit http://maven.apache.org/guides/getting-started/maven-in-five-minutes.html), you can retrieve the PrimeFaces library by defining the Maven repository in your Project Object Model XML file, pom.xml, as follows:

<repository>
  <id>prime-repo</id>
  <name>PrimeFaces Maven Repository</name>
  <url>http://repository.primefaces.org</url>
</repository>

Add the dependency configuration as follows:

<dependency>
  <groupId>org.primefaces</groupId>
  <artifactId>primefaces</artifactId>
  <version>5.2</version>
</dependency>

At the time of writing this article, the latest and most stable version of PrimeFaces was 5.2. To check whether this is the latest available version or not, please visit http://primefaces.org/downloads.html. The code in this article will work properly with PrimeFaces 5.2. In prior or future versions, some methods, attributes, or components' behaviors may change.

How to do it…

In order to use PrimeFaces components, first we need to add the namespace declaration to our pages. The namespace for PrimeFaces components is as follows:

xmlns:p="http://primefaces.org/ui"

That is all there is to it. Note that the p prefix is just a symbolic link, and any other character can be used to define the PrimeFaces components. Now you can create your first XHTML page with a PrimeFaces component, as shown in the following code snippet (the standard JSF namespace declarations, stripped in the original listing, have been restored here):

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:p="http://primefaces.org/ui">
  <f:view contentType="text/html">
    <h:head />
    <h:body>
      <h:form>
        <p:spinner />
      </h:form>
    </h:body>
  </f:view>
</html>

This will render a spinner component with an empty value, as shown in the following screenshot:

A link to the working example for the given page is given at the end of this recipe.

How it works…

When the page is requested, the p:spinner component is rendered with the SpinnerRenderer class implemented by the PrimeFaces library. Since the spinner component is an input component, the request-processing life cycle will get executed when the user inputs data and performs a postback on the page. For the first page, we also needed to provide the contentType parameter for f:view, since WebKit-based browsers, such as Google Chrome and Safari, request the content type application/xhtml+xml by default. This would overcome unexpected layout and styling issues that might occur.
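To go one small step beyond the empty spinner, its value can be bound to a backing-bean property so that the submitted number survives the postback. The bean and property names below are illustrative, not taken from the book:

<h:form>
  <p:spinner value="#{counterBean.count}" min="0" max="100"/>
  <p:commandButton value="Save"/>
</h:form>

import java.io.Serializable;
import javax.enterprise.context.RequestScoped;
import javax.inject.Named;

@Named
@RequestScoped
public class CounterBean implements Serializable {
  private int count;

  public int getCount() { return count; }
  public void setCount(int count) { this.count = count; }
}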
There's more…

PrimeFaces only requires a Java 5+ runtime and a JSF 2.x implementation as mandatory dependencies. There are some optional libraries for certain features. All of these are listed in this table:

Dependency | Version | Type | Description
JSF runtime | 2.0, 2.1, or 2.2 | Required | Apache MyFaces or Oracle Mojarra
itext | 2.1.7 | Optional | DataExporter (PDF)
apache-poi | 3.7 | Optional | DataExporter (Excel)
rome | 1.0 | Optional | FeedReader
commons-fileupload | 1.3 | Optional | FileUpload
commons-io | 2.2 | Optional | FileUpload
atmosphere | 2.2.2 | Optional | PrimeFaces Push
barcode4j-light | 2.1 | Optional | Barcode Generation
qrgen | 1.4 | Optional | QR code support for barcode
hazelcast | 2.6.5+ | Optional | Integration of the <p:cache> component with hazelcast
ehcache | 2.7.4+ | Optional | Integration of the <p:cache> component with ehcache

Please ensure that you have only one JAR file of PrimeFaces or a specific PrimeFaces theme in your classpath in order to avoid any issues regarding resource rendering. Currently, PrimeFaces fully supports nonlegacy web browsers with Internet Explorer 10, Safari, Firefox, Chrome, and Opera.

The PrimeFaces Cookbook Showcase application

This recipe is available in the demo web application on GitHub (https://github.com/ova2/primefaces-cookbook/tree/second-edition). Clone the project if you have not done it yet, explore the project structure, and build and deploy the WAR file on application servers compatible with Servlet 3.x, such as JBoss WildFly and Apache TomEE. The showcase for the recipe is available under http://localhost:8080/pf-cookbook/views/chapter1/yourFirstPage.jsf.

AJAX basics with process and update

PrimeFaces provides Partial Page Rendering (PPR) and the view-processing feature, based on standard JSF 2 APIs, to enable choosing what to process in the JSF life cycle and what to render in the end with AJAX. The PrimeFaces AJAX framework is based on the standard server-side APIs of JSF 2. On the client side, rather than using the client-side API implementations of JSF, such as Mojarra or MyFaces, PrimeFaces scripts are based on the jQuery JavaScript library, which is well tested and widely adopted.

How to do it...

We can create a simple page with a command button to update a string property with the current time in milliseconds that is created on the server side, and an output text to show the value of that string property, as follows:

<p:commandButton update="display" action="#{basicPPRBean.updateValue}" value="Update" />
<h:outputText id="display" value="#{basicPPRBean.value}"/>

If we want to update multiple components with the same trigger mechanism, we can provide the IDs of the components to the update attribute, separated by a space, a comma, or both, as follows:

<p:commandButton update="display1,display2" />
<p:commandButton update="display1 display2" />
<p:commandButton update="display1,display2 display3" />

In addition, there are reserved keywords that are used for a partial update. We can also make use of these keywords along with the IDs of the components, as described in the following table. Some of them come with the JSF standard, and PrimeFaces extends this list with custom keywords.
Here's the table we talked about:

Keyword | JSF/PrimeFaces | Description
@this | JSF | The component that triggers the PPR is updated
@form | JSF | The encapsulating form of the PPR trigger is updated
@none | JSF | PPR does not change the DOM with an AJAX response
@all | JSF | The whole document is updated, as in non-AJAX requests
@parent | PrimeFaces | The parent of the PPR trigger is updated
@composite | PrimeFaces | This is the closest composite component ancestor
@namingcontainer | PrimeFaces | This is the closest naming container ancestor of the current component
@next | PrimeFaces | This is the next sibling
@previous | PrimeFaces | This is the previous sibling
@child(n) | PrimeFaces | This is the nth child
@widgetVar(name) | PrimeFaces | This is a component stated with a given widget variable name

The keywords are a server-side part of the PrimeFaces Search Expression Framework (SEF), which provides both server-side and client-side extensions to make it easier to reference components. We can also update a component that resides in a different naming container from the component that triggers the update. In order to achieve this, we need to specify the absolute component identifier of the component that needs to be updated. An example of this could be the following:

<h:form id="form1">
  <p:commandButton update=":form2:display" action="#{basicPPRBean.updateValue}" value="Update"/>
</h:form>
<h:form id="form2">
  <h:outputText id="display" value="#{basicPPRBean.value}"/>
</h:form>

@Named
@ViewScoped
public class BasicPPRBean implements Serializable {
  private String value;

  public String updateValue() {
    value = String.valueOf(System.currentTimeMillis());
    return null;
  }
  // getter / setter
}

PrimeFaces also provides partial processing, which executes the JSF life cycle phases (apply request values, process validations, update model, and invoke application) for determined components with the process attribute. This provides the ability to do group validation on JSF pages easily. Mostly, group validation needs arise in situations where different values need to be validated in the same form, depending on an action that gets executed. By grouping components for validation, errors that would arise from other components when the page has been submitted can be overcome easily. Components such as commandButton, commandLink, autoComplete, fileUpload, and many others provide this attribute to process partially instead of processing the whole view.

Partial processing could become very handy in cases where a drop-down list needs to be populated upon a selection on another dropdown and where there is an input field on the page with the required attribute set to true. This approach also makes immediate subforms and regions obsolete. It will also prevent submission of the whole page; thus, this will result in lightweight requests. Without partially processing the view for the dropdowns, a selection on one of the dropdowns will result in a validation error on the required field.
A working example for this is shown in the following code snippet: <h:outputText value="Country: " /> <h:selectOneMenu id="countries" value="#{partialProcessing Bean.country}"> <f:selectItems value="#{partialProcessingBean.countries}" /> <p:ajax listener= "#{partialProcessingBean.handleCountryChange}" event="change" update="cities" process="@this"/> </h:selectOneMenu> <h:outputText value="City: " /> <h:selectOneMenu id="cities" value="#{partialProcessingBean.city}"> <f:selectItems value="#{partialProcessingBean.cities}" /> </h:selectOneMenu> <h:outputText value="Email: " /> <h:inputText value="#{partialProcessingBean.email}" required="true" /> With this partial processing mechanism, when a user changes the country, the cities of that country will be populated in the dropdown regardless of whether any input exists for the email field or not. How it works… As illustrated in the partial processing example to update a component in a different naming container, <p:commandButton> is updating the <h:outputText> component that has the display ID and the :form2:display absolute client ID, which is the search expression for the findComponent method. An absolute client ID starts with the separator character of the naming container, which is : by default. The <h:form>, <h:dataTable>, and composite JSF components, along with <p:tabView>, <p:accordionPanel>, <p:dataTable>, <p:dataGrid>, <p:dataList>, <p:carousel>, <p:galleria>, <p:ring>, <p:sheet>, and <p:subTable> are the components that implement the NamingContainer interface. The findComponent method, which is described at http://docs.oracle.com/javaee/7/api/javax/faces/component/UIComponent.html, is used by both JSF core implementation and PrimeFaces. There's more… JSF uses : (colon) as the separator for the NamingContainer interface. The client IDs that will be rendered in the source page will be of the kind id1:id2:id3. If needed, the configuration of the separator can be changed for the web application to something other than the colon with a context parameter in the web.xml file of the web application, as follows: <context-param> <param-name>javax.faces.SEPARATOR_CHAR</param-name> <param-value>_</param-value> </context-param> It's also possible to escape the : character, if needed, in the CSS files with the character, as :. The problem that might occur with the colon is that it's a reserved keyword for the CSS and JavaScript frameworks, like jQuery, so it might need to be escaped. The PrimeFaces Cookbook Showcase application This recipe is available in the demo web application on GitHub (https://github.com/ova2/primefaces-cookbook/tree/second-edition). Clone the project if you have not done it yet, explore the project structure, and build and deploy the WAR file on application servers compatible with Servlet 3.x, such as JBoss WildFly and Apache TomEE. For the demos of this recipe, refer to the following: Basic Partial Page Rendering is available at http://localhost:8080/pf-cookbook/views/chapter1/basicPPR.jsf Updating Component in a Different Naming Container is available at http://localhost:8080/pf-cookbook/views/chapter1/componentInDifferentNamingContainer.jsf An example of Partial Processing is available at http://localhost:8080/pf-cookbook/views/chapter1/partialProcessing.jsf PrimeFaces selectors PrimeFaces integrates the jQuery Selector API (http://api.jquery.com/category/selectors) with the JSF component-referencing model. 
Partial processing and updating of the JSF components can be done using the jQuery Selector API instead of a regular server-side approach with findComponent(). This feature is called the PrimeFaces Selector (PFS) API. PFS provides an alternative, flexible approach to reference components to be processed or updated partially. PFS is a client-side part of the PrimeFaces SEF, which provides both server-side and client-side extensions to make it easier to reference components. In comparison with regular referencing, there is less CPU server load because the JSF component tree is not traversed on the server side in order to find client IDs. PFS is implemented on the client side by looking at the DOM tree. Another advantage is avoiding container limitations, and thus the cannot find component exception—since the component we were looking for was in a different naming container. The essential advantage of this feature, however, is speed. If we reference a component by an ID, jQuery uses document.getElementById(), a native browser call behind the scene. This is a very fast call, much faster than that on the server side with findComponent(). The second use case, where selectors are faster, is when we have a lot of components with the rendered attributes set to true or false. The JSF component tree is very big in this case, and the findComponent() call is time consuming. On the client side, only the visible part of the component tree is rendered as markup. The DOM is smaller than the component tree and its selectors work faster. In this recipe, we will learn PFS in detail. PFS is recognized when we use @(...) in the process or update attribute of AJAX-ified components. We will use this syntax in four command buttons to reference the parts of the page we are interested in. How to do it… The following code snippet contains two p:panel tags with the input, select, and checkbox components respectively. The first p:commandButton component processes/updates all components in the form(s). The second one processes / updates all panels. The third one processes input, but not select components, and updates all panels. The last button only processes the checkbox components in the second panel and updates the entire panel. <p:messages id="messages" autoUpdate="true"/> <p:panel id="panel1" header="First panel"> <h:panelGrid columns="2"> <p:outputLabel for="name" value="Name"/> <p:inputText id="name" required="true"/> <p:outputLabel for="food" value="Favorite food"/> <h:selectOneMenu id="food" required="true"> ... </h:selectOneMenu> <p:outputLabel for="married" value="Married?"/> <p:selectBooleanCheckbox id="married" required="true" label="Married?"> <f:validator validatorId="org.primefaces.cookbook. validator.RequiredCheckboxValidator"/> </p:selectBooleanCheckbox> </h:panelGrid> </p:panel> <p:panel id="panel2" header="Second panel"> <h:panelGrid columns="2"> <p:outputLabel for="address" value="Address"/> <p:inputText id="address" required="true"/> <p:outputLabel for="pet" value="Favorite pet"/> <h:selectOneMenu id="pet" required="true"> ... </h:selectOneMenu> <p:outputLabel for="gender" value="Male?"/> <p:selectBooleanCheckbox id="gender" required="true" label="Male?"> <f:validator validatorId="org.primefaces.cookbook. 
validator.RequiredCheckboxValidator"/> </p:selectBooleanCheckbox> </h:panelGrid> </p:panel> <h:panelGrid columns="5" style="margin-top:20px;"> <p:commandButton process="@(form)" update="@(form)" value="Process and update all in form"/> <p:commandButton process="@(.ui-panel)" update="@(.ui-panel)" value="Process and update all panels"/> <p:commandButton process="@(.ui-panel :input:not(select))" update="@(.ui-panel)" value="Process inputs except selects in all panels"/> <p:commandButton process="@(#panel2 :checkbox)" update="@(#panel2)" value="Process checkboxes in second panel"/> </h:panelGrid> In terms of jQuery selectors, regular input field, selection, and checkbox controls are all inputs. They can be selected by the :input selector. The following screenshot shows what happens when the third button is pushed. The p:inputText and p:selectBooleanCheckbox components are marked as invalid. The h:selectOneMenu component is not marked as invalid although no value was selected by the user. How it works… The first selector from the @(form) first button selects all forms on the page. The second selector, @(.ui-panel), selects all panels on the page as every main container of PrimeFaces' p:panel component has this style class. Component style classes are usually documented in the Skinning section in PrimeFaces User's Guide (http://www.primefaces.org/documentation.html). The third selector, @(.ui-panel :input:not(select)), only selects p:inputText and p:selectBooleanCheckbox within p:panel. This is why h:selectOneMenu was not marked as invalid in the preceding screenshot. The validation of this component was skipped because it renders itself as an HTML select element. The last selector variant, @(#panel2 :checkbox), intends to select p:selectBooleanCheckbox in the second panel only. In general, it is recommended that you use Firebug (https://getfirebug.com) or a similar browser add-on to explore the generated HTML structure when using jQuery selectors. A common use case is skipping validation for the hidden fields. Developers often hide some form components dynamically with JavaScript. Hidden components get validated anyway, and the form validation can fail if the fields are required or have other validation constraints. The first solution would be to disable the components (in addition to hiding them). The values of disabled fields are not sent to the server. The second solution would be to use jQuery's :visible selector in the process attribute of a command component that submits the form. There's more… PFS can be combined with regular component referencing as well, for example, update="compId1 :form:compId2 @(.ui-tabs :input)". The PrimeFaces Cookbook Showcase application This recipe is available in the demo web application on GitHub (https://github.com/ova2/primefaces-cookbook/tree/second-edition). Clone the project if you have not done it yet, explore the project structure, and build and deploy the WAR file on application servers compatible with Servlet 3.x, such as JBoss WildFly and Apache TomEE. The showcase for the recipe is available at http://localhost:8080/pf-cookbook/views/chapter1/pfs.jsf. Internationalization (i18n) and Localization (L10n) Internationalization (i18n) and Localization (L10n) are two important features that should be provided in the web application's world to make it accessible globally. 
With Internationalization, we are emphasizing that the web application should support multiple languages, and with Localization, we are stating that the text, date, or other fields should be presented in a form specific to a region. PrimeFaces only provides English translations. Translations for other languages should be provided explicitly. In the following sections, you will find details on how to achieve this. Getting ready For internationalization, first we need to specify the resource bundle definition under the application tag in faces-config.xml, as follows: <application> <locale-config> <default-locale>en</default-locale> <supported-locale>tr_TR</supported-locale> </locale-config> <resource-bundle> <base-name>messages</base-name> <var>msg</var> </resource-bundle> </application> A resource bundle is a text file with the .properties suffix that would contain locale-specific messages. So, the preceding definition states that the resource bundle messages_{localekey}.properties file will reside under classpath, and the default value of localekey is en, which stands for English, and the supported locale is tr_TR, which stands for Turkish. For projects structured by Maven, the messages_{localekey}.properties file can be created under the src/main/resources project path. The following image was made in the IntelliJ IDEA: How to do it… To showcase Internationalization, we will broadcast an information message via the FacesMessage mechanism that will be displayed in PrimeFaces' growl component. We need two components—growl itself and a command button—to broadcast the message: <p:growl id="growl" /> <p:commandButton action="#{localizationBean.addMessage}" value="Display Message" update="growl" /> The addMessage method of localizationBean is as follows: public String addMessage() { addInfoMessage("broadcast.message"); return null; } The preceding code uses the addInfoMessage method, which is defined in the static MessageUtil class as follows: public static void addInfoMessage(String str) { FacesContext context = FacesContext.getCurrentInstance(); ResourceBundle bundle = context.getApplication().getResourceBundle(context, "msg"); String message = bundle.getString(str); FacesContext.getCurrentInstance().addMessage(null, new FacesMessage(FacesMessage.SEVERITY_INFO, message, "")); } Localization of components, such as calendar and schedule, can be achieved by providing the locale attribute. By default, locale information is retrieved from the view's locale, and it can be overridden by a string locale key or with a java.util.Locale instance. Components such as calendar and schedule use a shared PrimeFaces.locales property to display labels. As stated before, PrimeFaces only provides English translations, so in order to localize the calendar, we need to put the corresponding locales into a JavaScript file and include the scripting file to the page. The content for the German locale of the Primefaces.locales property for calendar would be as shown in the following code snippet. 
For the sake of the recipe, only the German locale definition is given and the Turkish locale definition is omitted; you can find it in the showcase application Here's the code snippet we talked about: PrimeFaces.locales['de'] = { closeText: 'Schließen', prevText: 'Zurück', nextText: 'Weiter', monthNames: ['Januar', 'Februar', 'März', 'April', 'Mai', 'Juni', 'Juli', 'August', 'September', 'Oktober', 'November', 'Dezember'], monthNamesShort: ['Jan', 'Feb', 'Mär', 'Apr', 'Mai', 'Jun', 'Jul', 'Aug', 'Sep', 'Okt', 'Nov', 'Dez'], dayNames: ['Sonntag', 'Montag', 'Dienstag', 'Mittwoch', 'Donnerstag', 'Freitag', 'Samstag'], dayNamesShort: ['Son', 'Mon', 'Die', 'Mit', 'Don', 'Fre', 'Sam'], dayNamesMin: ['S', 'M', 'D', 'M ', 'D', 'F ', 'S'], weekHeader: 'Woche', FirstDay: 1, isRTL: false, showMonthAfterYear: false, yearSuffix: '', timeOnlyTitle: 'Nur Zeit', timeText: 'Zeit', hourText: 'Stunde', minuteText: 'Minute', secondText: 'Sekunde', currentText: 'Aktuelles Datum', ampm: false, month: 'Monat', week: 'Woche', day: 'Tag', allDayText: 'Ganzer Tag' }; The definition of the calendar components both with and without the locale attribute would be as follows: <p:calendar showButtonPanel="true" navigator="true" mode="inline" id="enCal"/> <p:calendar locale="tr" showButtonPanel="true" navigator="true" mode="inline" id="trCal"/> <p:calendar locale="de" showButtonPanel="true" navigator="true" mode="inline" id="deCal"/> They will be rendered as follows: How it works… For Internationalization of the PrimeFaces message, the addInfoMessage method retrieves the message bundle via the defined msg variable. It then gets the string from the bundle with the given key by invoking the bundle.getString(str) method. Finally, the message is added by creating a new PrimeFaces message with the FacesMessage.SEVERITY_INFO severity level. There's more… For some components, localization could be accomplished by providing labels to the components via attributes, such as with p:selectBooleanButton: <p:selectBooleanButton value="#{localizationBean.selectedValue}" onLabel="#{msg['booleanButton.onLabel']}" offLabel="#{msg['booleanButton.offLabel']}" /> The msg variable is the resource bundle variable that is defined in the resource bundle definition in the PrimeFaces configuration file. The English version of the bundle key definitions in the messages_en.properties file that resides under the classpath would be as follows: booleanButton.onLabel=Yes booleanButton.offLabel=No The PrimeFaces Cookbook Showcase application This recipe is available in the demo web application on GitHub (https://github.com/ova2/primefaces-cookbook/tree/second-edition). Clone the project if you have not done it yet, explore the project structure, and build and deploy the WAR file on application servers compatible with Servlet 3.x, such as JBoss WildFly and Apache TomEE. For the demos of this recipe, refer to the following: Internationalization is available at http://localhost:8080/pf-cookbook/views/chapter1/internationalization.jsf Localization of the calendar component is available at http://localhost:8080/pf-cookbook/views/chapter1/localization.jsf Localization with resources is available at http://localhost:8080/pf-cookbook/views/chapter1/localizationWithResources.jsf For already translated locales of the calendar, see http://code.google.com/p/primefaces/wiki/PrimeFacesLocales. 
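One detail the recipe leaves implicit is how the view's locale actually gets switched at runtime so that components such as the calendar pick up a different translation. A common approach (a sketch; the method name and language codes are illustrative, not from the book) is to set the locale on the view root from a managed bean:

import java.util.Locale;
import javax.faces.context.FacesContext;

public void switchToTurkish() {
  // Locale-aware components and #{msg} lookups resolve against the view root's locale
  FacesContext.getCurrentInstance().getViewRoot().setLocale(new Locale("tr", "TR"));
}

Triggering this from a p:commandButton that updates the form (or @all) then re-renders the page with the Turkish labels.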
Summary In this article, we learned about setting up and configuring the PrimeFaces library, AJAX basics with process and update, PrimeFaces selectors, and Internationalization (i18n) and Localization (L10n). Resources for Article: Further resources on this subject: Components of PrimeFaces Extensions [Article] JSF2 composite component with PrimeFaces [Article] Getting Started with PrimeFaces [Article]

Installing and Configuring Network Monitoring Software
Packt
02 Jun 2015
9 min read
This article written by Bill Pretty, Glenn Vander Veer, authors of the book Building Networks and Servers Using BeagleBone will serve as an installation guide for the software that will be used to monitor the traffic on your local network. These utilities can help determine which devices on your network are hogging the bandwidth, which slows down the network for other devices on your network. Here are the topics that we are going to cover: Installing traceroute and My Trace Route (MTR or Matt's Traceroute): These utilities will give you a real-time view of the connection between one node and another Installing Nmap: This utility is a network scanner that can list all the hosts on your network and all the services available on those hosts Installing iptraf-ng: This utility gathers various network traffic information and statistics (For more resources related to this topic, see here.) Installing Traceroute Traceroute is a tool that can show the path from one node on a network to another. This can help determine the ideal placement of a router to maximize wireless bandwidth in order to stream music and videos from the BeagleBone server to remote devices. Traceroute can be installed with the following command: apt-get install traceroute   Once Traceroute is installed, it can be run to find the path from the BeagleBone to any server anywhere in the world. For example, here's the route from my BeagelBone to the Canadian Google servers: Now, it is time to decipher all the information that is presented. This first command line tells traceroute the parameters that it must use: traceroute to google.ca (74.125.225.23), 30 hops max, 60 byte packets This gives the hostname, the IP address returned by the DNS server, the maximum number of hops to be taken, and the size of the data packet to be sent. The maximum number of hops can be changed with the –m flag and can be up to 255. In the context of this book, this will not have to be changed. After the first line, the next few lines show the trip from the BeagleBone, through the intermediate hosts (or hops), to the Google.ca server. Each line follows the following format: hop_number host_name (host IP_address) packet_round_trip_times From the command that was run previously (specifically hop number 4): 2 10.149.206.1 (10.149.206.1) 15.335 ms 17.319 ms 17.232 ms Here's a breakdown of the output: The hop number 2: This is a count of the number of hosts between this host and the originating host. The higher the number, the greater is the number of computers that the traffic has to go through to reach its destination. 10.149.206.1: This denotes the hostname. This is the result of a reverse DNS lookup on the IP address. If no information is returned from the DNS query (as in this case), the IP address of the host is given instead. (10.149.206.1): This is the actual host IP address. Various numbers: This is the round-trip time for a packet to go from the BeagleBone to the server and back again. These numbers will vary depending on network traffic, and lower is better. Sometimes, the traceroute will return some asterisks (*). This indicates that the packet has not been acknowledged by the host. If there are consecutive asterisks and the final destination is not reached, then there may be a routing problem. In a local network trace, it most likely is a firewall that is blocking the data packet. 
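As a quick illustration of the options mentioned above, the following invocation caps the trace at 10 hops with the -m flag and adds the standard -n switch, which skips the reverse DNS lookups and so speeds the trace up considerably (the target host is just an example):

traceroute -m 10 -n google.ca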
Installing My Traceroute My Traceroute (MTR) is an extension of traceroute, which probes the routers on the path from the packet source and destination, and keeps track of the response times of the hops. It does this repeatedly so that the response times can be averaged. Now, install mtr with the following command: sudo apt-get install mtr After it is run, mtr will provide quite a bit more information to look at, which would look like the following: While the output may look similar, the big advantage over traceroute is that the output is constantly updated. This allows you to accumulate trends and averages and also see how network performance varies over time. When using traceroute, there is a possibility that the packets that were sent to each hop happened to make the trip without incident, even in a situation where the route is suffering from intermittent packet loss. The mtr utility allows you to monitor this by gathering data over a wider range of time. Here's an mtr trace from my Beaglebone to my Android smartphone: Here's another trace, after I changed the orientation of the antennae of my router: As you can see, the original orientation was almost 100 milliseconds faster for ping traffic. Installing Nmap Nmap is designed to allow the scanning of networks in order to determine which hosts are up and what services are they offering. Nmap supports a large number of scanning options, which are overkill for what will be done in this book. Nmap is installed with the following command: sudo apt-get install nmap Answer Yes to install nmap and its dependent packages. Using Nmap After it is installed, run the following command to see all the hosts that are currently on the network: nmap –T4 –F <your_local_ip_range> The option -T4 sets the timing template to be used, and the -F option is for fast scanning. There are other options that can be used and found via the nmap manpage. Here, your_local_ip_range is within the range of addresses assigned by your router. Here's a node scan of my local network. If you have a lot of devices on your local network, this command may take a long time to complete. Now, I know that I have more nodes on my network, but they don't show up. This is because the command we ran didn't tell nmap to explicitly query each IP address to see whether the host responds but to query common ports that may be open to traffic. Instead, only use the -Pn option in the command to tell nmap to scan all the ports for every address in the range. This will scan more ports on each address to determine whether the host is active or not. Here, we can see that there are definitely more hosts registered in the router device table. This scan will attempt to scan a host IP address even if the device is powered off. Resetting the router and running the same scan will scan the same address range, but it will not return any device names for devices that are not powered at the time of the scan. You will notice that after scanning, nmap reports that some IP addresses' ports are closed and some are filtered. Closed ports are usually maintained on the addresses of devices that are locked down by their firewall. Filtered ports are on the addresses that will be handled by the router because there actually isn't a node assigned to these addresses. 
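Combining the flags discussed above, a scan that skips host discovery while keeping the faster timing template would look like the following; substitute your own network range, as this one is only an example:

nmap -Pn -T4 192.168.1.0/24

Because -Pn probes every address in the range rather than only the hosts that answer discovery probes, expect this scan to take noticeably longer than the -F variant.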
Here's a part of the output from an nmap scan of my Windows machine: Here's a part of the output of a scan of the BeagleBone: Installing iptraf-ng Iptraf-ng is a utility that monitors traffic on any of the interfaces or IP addresses on your network via custom filters. Because iptraf-ng is based on the ncurses libraries, we will have to install them first before downloading and compiling the actual iptraf-ng package. To install ncurses, run the following command: sudo apt-get install libncurses5-dev Here's how you will install ncurses and its dependent packages: Once ncurses is installed, download and extract the iptraf-ng tarball so that it can be built. At the time of writing this book, iptrf-ng's version 1.1.4 was available. This will change over time, and a quick search on Google will give you the latest and greatest version to download. You can download this version with the following command: wget https://fedorahosted.org/releases/i/p/iptraf-ng/iptraf-ng- <current_version_number>.tar.gz The following screenshot shows how to download the iptraf-ng tarball: After we have completed the downloading, extract the tarball using the following command: tar –xzf iptraf-ng-<current_version_number>.tar.gz Navigate to the iptraf-ng directory created by the tar command and issue the following commands: ./configure make sudo make install After these commands are complete, iptraf-ng is ready to run, using the following command: sudo iptraf-ng When the program starts, you will be presented with the following screen: Configuring iptraf-ng As an example, we are going to monitor all incoming traffic to the BeagleBone. In order to do this, iptraf-ng should be configured. Selecting the Configure... menu item will show you the following screen: Here, settings can be changed by highlighting an option in the left-hand side window and pressing Enter to select a new value, which will be shown in the Current Settings window. In this case, I have enabled all the options except Logging. Exit the configuration screen and enter the Filter Status screen. This is where we will set up the filter to only monitor traffic coming to the BeagleBone and from it. Then, the following screen will be presented: Selecting IP... will create an IP filter, and the following subscreen will pop up: Selecting Define new filter... will allow the creation and saving of a filter that will only display traffic for the IP address and the IP protocols that are selected, as shown in the following screenshot: Here, I have put in the BeagleBone's IP address, and to match all IP protocols. Once saved, return to the main menu and select IP traffic monitor. Here, you will be able to select the network interfaces to be monitored. Because my BeagleBone is connected to my wired network, I have selected eth0. The following screenshot should shows us the options: If all went well with your filter, you should see traffic to your BeagleBone and from it. Here are the entries for my PuTTy session; 192.168.17.2 is my Windows 8 machine, and 192.168.17.15 is my BeagleBone: Here's an image of the traffic generated by browsing the DLNA server from the Windows Explorer: Moreover, here's the traffic from my Android smartphone running a DLNA player, browsing the shared directories that were set up: Summary In this article, you saw how to install and configure the software that will be used to monitor the traffic on your local network. 
With these programs and a bit of experience, you can determine which devices on your network are hogging the bandwidth and find out whether you have any unauthorized users. Resources for Article: Further resources on this subject: Learning BeagleBone [article] Protecting GPG Keys in BeagleBone [article] Home Security by BeagleBone [article]
Developing Location-based Services with Neo4j
Packt
02 Jun 2015
22 min read
In this article by Ankur Goel, author of the book, Neo4j Cookbook, we will cover the following recipes: Installing the Neo4j Spatial extension Importing the Esri shapefiles Importing the OpenStreetMap files Importing data using the REST API Creating a point layer using the REST API Finding geometries within the bounding box Finding geometries within a distance Finding geometries within a distance using Cypher By definition, any database that is optimized to store and query the data that represents objects defined in a geometric space is called a spatial database. Although Neo4j is primarily a graph database, due to the importance of geospatial data in today's world, the spatial extension has been introduced in Neo4j as an unmanaged extension. It gives you most of the facilities, which are provided by common geospatial databases along with the power of connectivity through edges, which Neo4j, as a graph database, provides. In this article, we will take a look at some of the widely used use cases of Neo4j as a spatial database, and you will learn how typical geospatial operations can be performed on it. Before proceeding further, you need to install Neo4j on your system using one of the recipies that follow here. The installation will depend on your system type. (For more resources related to this topic, see here.) Single node installation of Neo4j over Linux Neo4j is a highly scalable graph database that runs over all the common platforms; it can be used as is or can be embedded inside applications as well. The following recipe will show you how to set up a single instance of Neo4j over the Linux operating system. Getting ready Perform the following steps to get started with this recipe: Download the community edition of Neo4j from http://www.neo4j.org/download for the Linux platform: $ wget http://dist.neo4j.org/neo4j-community-2.2.0-M02-unix.tar.gz Check whether JDK/JRE is installed for your operating system or not by typing this in the shell prompt: $ echo $JAVA_HOME If this command throws no output, install Java for your Linux distribution and also set the JAVA_HOME path How to do it... Now, let's install Neo4j over the Linux operating system, which is simple, as shown in the following steps: Extract the TAR file using the following command: $ tar –zxvf neo4j-community-2.2.0-M02-unix.tar.gz $ ls Go to the bin directory under the root folder: $ cd <neo4j-community-2.2.0-M02>/bin/ Start the Neo4j graph database server: $ ./neo4j start Check whether Neo4j is running or not using the following command: $ ./neo4j status Neo4j can also be monitored using the web console. Open http://<ip>:7474/webadmin, as shown in the following screenshot: The preceding diagram is a screenshot of the web console of Neo4j through which the server can be monitored and different Cypher queries can be run over the graph database. How it works... Neo4j comes with prebuilt binaries over the Linux operating system, which can be extracted and run over. Neo4j comes with both web-based and terminal-based consoles, over which the Neo4j graph database can be explored. See also During installation, you may face several kind of issues, such as max open files and so on. For more information, check out http://neo4j.com/docs/stable/server-installation.html#linux-install. Single node installation of Neo4j over Windows Neo4j is a highly scalable graph database that runs over all the common platforms; it can be used as is or can be embedded inside applications. 
Single node installation of Neo4j over Windows

Neo4j is a highly scalable graph database that runs over all the common platforms; it can be used as is or can be embedded inside applications. The following recipe will show you how to set up a single instance of Neo4j over the Windows operating system.

Getting ready

Perform the following steps to get started with this recipe:

Download the Windows installer from http://www.neo4j.org/download. This has both 32- and 64-bit prebuilt binaries.
Check whether Java is installed for the operating system or not by typing this in the cmd prompt:
echo %JAVA_HOME%
If this command throws no output, install Java for your Windows distribution and also set the JAVA_HOME path.

How to do it...

Now, let's install Neo4j over the Windows operating system, which is simple, as shown here:

Run the installer by clicking on the downloaded file. The preceding screenshot shows the Windows installer running.
After the installation is complete, when you run the software, it will ask for the database location. Choose the location carefully, as the entire graph database will be stored in this folder. The preceding screenshot shows the Windows installer asking for the graph database's location.
The Neo4j browser can be opened by entering http://localhost:7474/ in the browser. The following screenshot depicts Neo4j started over the Windows platform:

How it works...

Neo4j comes with prebuilt binaries for the Windows operating system, which can simply be extracted and run. Neo4j ships with both web-based and terminal-based consoles, over which the Neo4j graph database can be explored.

See also

During installation, you might face several kinds of issues, such as the max open files limit. For more information, check out http://neo4j.com/docs/stable/server-installation.html#windows-install.

Single node installation of Neo4j over Mac OS X

Neo4j is a highly scalable graph database that runs over all the common platforms; it can be used as is or can be embedded inside applications. The following recipe will show you how to set up a single instance of Neo4j over the OS X operating system.

Getting ready

Perform the following steps to get started with this recipe:

Download the community edition of Neo4j for the Mac OS X platform from http://www.neo4j.org/download, as shown in the following command:
$ wget http://dist.neo4j.org/neo4j-community-2.2.0-M02-unix.tar.gz
Check whether Java is installed for the operating system or not by typing this at the shell prompt:
$ echo $JAVA_HOME
If this command throws no output, install Java for your Mac OS X distribution and also set the JAVA_HOME path.

How to do it...

Now, let's install Neo4j over the OS X operating system, which is very simple, as shown in the following steps:

Extract the TAR file using the following command:
$ tar -zxvf neo4j-community-2.2.0-M02-unix.tar.gz
$ ls
Go to the bin directory under the root folder:
$ cd <neo4j-community-2.2.0-M02>/bin/
Start the Neo4j graph database server:
$ ./neo4j start
Check whether Neo4j is running or not using the following command:
$ ./neo4j status

How it works...

Neo4j comes with prebuilt binaries for the OS X operating system, which can simply be extracted and run. Neo4j ships with both web-based and terminal-based consoles, over which the Neo4j graph database can be explored.

There's more…

Neo4j over Mac OS X can also be installed using brew, as explained here. Run the following commands over the shell:
$ brew update
$ brew install neo4j
After this, Neo4j can be started using the start option with the neo4j command:
$ neo4j start
This will start the Neo4j server, which can be accessed from the default URL (http://localhost:7474).
The installation directory can be reached using the following commands:
$ cd /usr/local/Cellar/neo4j/
$ cd {NEO4J_VERSION}/libexec/
You can learn more about OS X installation from http://neo4j.com/docs/stable/server-installation.html#osx-install.
Due to the limitation on the content that can be provided in this article, we assume that you already know how to perform the basic operations using Neo4j, such as creating a graph, importing data from different formats into Neo4j, and the common configurations used for Neo4j.

Installing the Neo4j Spatial extension

Neo4j Spatial is a library of utilities for Neo4j that enables spatial operations on your data. Geospatial indexes can be added even to existing data, and many geospatial operations can then be performed on it. In this recipe, you will learn how to install the Neo4j Spatial extension.

Getting ready

The following steps will get you started with this recipe:

Install Neo4j using the earlier recipes in this article.
Install the dependencies listed in the pom.xml file for this project from https://github.com/neo4j-contrib/spatial/blob/master/pom.xml.
Install Maven using the following command for your operating system:
For Debian systems: apt-get install maven
For Redhat/Centos systems: yum install apache-maven
To install on a Windows-based system, please refer to https://maven.apache.org/guides/getting-started/windows-prerequisites.html.

How to do it...

Now, let's install the Neo4j Spatial plugin, which is very simple to do, by following these steps:

Clone the GitHub repository for the spatial extension:
git clone git://github.com/neo4j/spatial spatial
Move into the spatial directory:
cd spatial
Build the code using Maven. This will download all the dependencies, compile the library, run the tests, and install the artifacts in the local repository:
mvn clean install
Move the built artifact to the Neo4j plugin directory:
unzip target/neo4j/neo4j-spatial-0.11-SNAPSHOT-server-plugin.zip -d $NEO4J_ROOT_DIR/plugins/
Restart the Neo4j graph database server:
$NEO4J_ROOT_DIR/bin/neo4j restart
Check whether the Neo4j Spatial plugin is properly installed or not:
curl -L http://<neo4j_server_ip>:<port>/db/data
If you are using Neo4j 2.2 or higher, then use the following command:
curl --user neo4j:<password> http://localhost:7474/db/data/
The output will look like what is shown in the following screenshot, which shows the Neo4j Spatial plugin installed:

How it works...

Neo4j Spatial is a library of utilities that helps perform geospatial operations on the dataset present in the Neo4j graph database. You can add geospatial indexes to the existing data and perform operations such as finding the data within a specified region or within some distance of a point of interest. Neo4j Spatial comes as an unmanaged extension, which can be easily installed as well as removed from Neo4j. The extension does not interfere with any of the core functionality.

There's more…

To read more about the Neo4j Spatial extension, we encourage you to visit the GitHub repository at https://github.com/neo4j-contrib/spatial. It is also worth reading about Neo4j unmanaged extensions in general (http://neo4j.com/docs/stable/server-unmanaged-extensions.html).
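The same check can be scripted instead of inspected by eye. The following is a minimal sketch in Python, assuming the default localhost:7474 endpoint and that the plugin advertises itself under the SpatialPlugin key in the extensions map of the service root (add basic authentication for Neo4j 2.2+, as in the curl example above):

import json
import urllib2

# Fetch the REST service root and inspect the advertised extensions
req = urllib2.Request("http://localhost:7474/db/data/")
req.add_header('Content-Type', 'application/json')
response = urllib2.urlopen(req)
root = json.loads(response.read())

if 'SpatialPlugin' in root.get('extensions', {}):
    print 'Neo4j Spatial plugin is installed'
else:
    print 'Neo4j Spatial plugin was not found'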
Importing the Esri shapefiles

The shapefile format is a popular geospatial vector data format for Geographic Information System (GIS) software. It is developed and regulated by Esri as an open specification for data interoperability among Esri and other GIS software products. It is very popular among GIS products, and very often the data you receive will come as Esri shapefiles. The main file is the .shp file, which contains the geometry data. This binary data file consists of a single, fixed-length header followed by variable-length data records. In this recipe, you will learn how to import Esri shapefiles into the Neo4j graph database.

Getting ready

Perform the following steps to get started with this recipe:

Install Neo4j using the earlier recipes in this article.
Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article.
Restart the Neo4j graph database server using the following command:
$NEO4J_ROOT_DIR/bin/neo4j restart

How to do it...

Since the Esri shapefile format is supported by the Neo4j Spatial extension by default, it is very easy to import data from it using the Java API, as shown in the following steps:

Download a sample .shp file from http://www.statsilk.com/maps/download-free-shapefile-maps.
Execute the following commands:
wget http://biogeo.ucdavis.edu/data/diva/adm/AFG_adm.zip
unzip AFG_adm.zip
mv AFG_adm1.* /data
The ShapefileImporter method lets you import data from the Esri shapefile using the following code:
// Open an embedded Neo4j database and import the shapefile into a named layer
GraphDatabaseService esri_database = new GraphDatabaseFactory().newEmbeddedDatabase(storeDir);
try {
    ShapefileImporter importer = new ShapefileImporter(esri_database);
    importer.importFile("/data/AFG_adm1.shp", "layer_afganistan");
} finally {
    esri_database.shutdown();
}
Using similar code, we can import multiple SHP files into the same layer or into different layers, as shown in the following code snippet:
// Import every .shp file found in /data into the same layer
File dir = new File("/data");
FilenameFilter filter = new FilenameFilter() {
    public boolean accept(File dir, String name) {
        return name.endsWith(".shp");
    }
};
File[] listOfFiles = dir.listFiles(filter);
for (final File fileEntry : listOfFiles) {
    System.out.println("FileEntry Directory " + fileEntry);
    try {
        importer.importFile(fileEntry.toString(), "layer_afganistan");
    } catch (Exception e) {
        esri_database.shutdown();
    }
}

How it works...

The Neo4j Spatial extension natively supports the import of data in the shapefile format. Using the ShapefileImporter method, any SHP file can be easily imported into Neo4j. The ShapefileImporter method takes two arguments: the first argument is the path to the SHP file and the second is the layer into which it should be imported.

There's more…

We encourage you to read more about shapefiles and layers in general; for this, please visit the following URLs for more information:
http://en.wikipedia.org/wiki/Shapefile
http://wiki.openstreetmap.org/wiki/Shapefiles
http://www.gdal.org/drv_shapefile.html
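After an import, you may want to confirm that the layer exists without writing more Java. The following is a possible sketch in Python; it assumes the spatial plugin exposes a getLayer endpoint that returns the layer node when the named layer is known to the extension:

import json
import requests

headers = {'Content-Type': 'application/json'}

# Ask the spatial extension for the layer created by the shapefile import
url = "http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/getLayer"
payload = {"layer": "layer_afganistan"}
r = requests.post(url, data=json.dumps(payload), headers=headers)
print r.status_code
print r.json()   # the layer node, if the layer exists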
Importing the OpenStreetMap files

OpenStreetMap is a powerhouse of geospatial data. It is a collaborative project to create a free, editable map of the world. OpenStreetMap provides geospatial data in the .osm file format. To read more about .osm files in general, check out http://wiki.openstreetmap.org/wiki/.osm. In this recipe, you will learn how to import .osm files into the Neo4j graph database.

Getting ready

Perform the following steps to get started with this recipe:

Install Neo4j using the earlier recipes in this article.
Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article.
Restart the Neo4j graph database server:
$NEO4J_ROOT_DIR/bin/neo4j restart

How to do it...

Since the OSM file format is supported by the Neo4j Spatial extension by default, it is very easy to import data from it using the following steps:

Download one sample .osm file from http://wiki.openstreetmap.org/wiki/Planet.osm#Downloading.
Execute the following commands:
wget http://download.geofabrik.de/africa-latest.osm.bz2
bunzip2 africa-latest.osm.bz2
mv africa-latest.osm /data
The importFile method lets you import data from the .osm file, as shown in the following code snippet:
// osm_database is a GraphDatabaseService opened as in the previous recipe
OSMImporter importer = new OSMImporter("africa");
try {
    importer.importFile(osm_database, "/data/africa-latest.osm", false, 5000, true);
} catch (Exception e) {
    osm_database.shutdown();
}
importer.reIndex(osm_database, 10000);
Using similar code, we can import multiple OSM files into the same layer or into different layers, as shown here:
// Import every .osm file found in /data
File dir = new File("/data");
FilenameFilter filter = new FilenameFilter() {
    public boolean accept(File dir, String name) {
        return name.endsWith(".osm");
    }
};
File[] listOfFiles = dir.listFiles(filter);
for (final File fileEntry : listOfFiles) {
    System.out.println("FileEntry Directory " + fileEntry);
    try {
        importer.importFile(osm_database, fileEntry.toString(), false, 5000, true);
        importer.reIndex(osm_database, 10000);
    } catch (Exception e) {
        osm_database.shutdown();
    }
}

How it works...

This import is slightly more complex, as it runs in two phases: in the first phase, a batch inserter performs the insertions into the database, and in the second phase, the database is reindexed with the spatial indexes.

There's more…

We encourage you to read more about the OSM file format and the batch inserter in general; for this, visit the following URLs:
http://en.wikipedia.org/wiki/OpenStreetMap
http://wiki.openstreetmap.org/wiki/OSM_file_formats
http://neo4j.com/api_docs/2.0.2/org/neo4j/unsafe/batchinsert/BatchInserter.html
Importing data using the REST API

The recipes that you have learned until now used Java code to import spatial data into Neo4j. However, spatial data can just as easily be imported into Neo4j from any other programming language, such as Python or Ruby, using the REST interface. In this recipe, you will learn how to import geospatial data using the REST interface.

Getting ready

Perform the following steps to get started with this recipe:

Install Neo4j using the earlier recipes in this article.
Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article.
Restart the Neo4j graph database server:
$NEO4J_ROOT_DIR/bin/neo4j restart

How to do it...

Using the REST interface, importing geospatial data into the Neo4j graph database server is a very simple three-stage process. For the sake of simplicity, Python code has been used to explain this recipe, although you can also use curl:

Create the spatial index, as shown in the following code:
# Create the geom index
import json
import urllib2

url = "http://<neo4j_server_ip>:<port>/db/data/index/node/"
payload = {
    "name" : "geom",
    "config" : {
        "provider" : "spatial",
        "geometry_type" : "point",
        "lat" : "lat",
        "lon" : "lon"
    }
}
req = urllib2.Request(url)
req.add_header('Content-Type', 'application/json')
response = urllib2.urlopen(req, json.dumps(payload))
Create a node with lat/lon and other data as properties, as shown in the following code:
url = "http://<neo4j_server_ip>:<port>/db/data/node"
payload = {'lon': 38.6, 'lat': 67.88, 'name': 'abc'}
req = urllib2.Request(url)
req.add_header('Content-Type', 'application/json')
response = urllib2.urlopen(req, json.dumps(payload))
node = json.loads(response.read())['self']
Add the node created in the previous step to the geospatial index, as shown in the following code snippet:
# Add the node to the geom index
url = "http://<neo4j_server_ip>:<port>/db/data/index/node/geom"
payload = {'value': 'dummy', 'key': 'dummy', 'uri': node}
req = urllib2.Request(url)
req.add_header('Content-Type', 'application/json')
response = urllib2.urlopen(req, json.dumps(payload))
print response.read()

The data will look like what is shown in the following screenshot after the addition of a few more nodes; this screenshot depicts the Neo4j Spatial data that has been imported:

The following screenshot depicts the properties of a single node that has been imported into Neo4j:

How it works...

Adding geospatial data using the REST API is a three-step process, listed as follows:

Create a geospatial index using an endpoint, by following this URL as a template: http://<neo4j_server_ip>:<port>/db/data/index/node/
Add a node to the Neo4j graph database using an endpoint, by following this URL as a template: http://<neo4j_server_ip>:<port>/db/data/node
Add the created node to the geospatial index using the endpoint, by following this URL as a template: http://<neo4j_server_ip>:<port>/db/data/index/node/geom

There's more…

We encourage you to read more about the spatial REST interfaces in general (http://neo4j-contrib.github.io/spatial/).

Creating a point layer using the REST API

In this recipe, you will learn how to create a point layer using the REST API interface.

Getting ready

Perform the following steps to get started with this recipe:

Install Neo4j using the earlier recipes in this article.
Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article.
Restart the Neo4j graph database server using the following command:
$NEO4J_ROOT_DIR/bin/neo4j restart

How to do it...

In this recipe, we will use the http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/addSimplePointLayer endpoint to create a simple point layer. Let's add a simple point layer, as shown in the following code:
import json
import requests

headers = {'Content-Type': 'application/json'}
url = "http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/addSimplePointLayer"
payload = {
    "layer" : "geom",
    "lat"   : "lat",
    "lon"   : "lon"
}
r = requests.post(url, data=json.dumps(payload), headers=headers)

The data will look like what is shown in the following screenshot; this screenshot shows the output of the create point in layer query:

How it works...

Creating a point layer is based on the REST interface that the Neo4j Spatial plugin already provides.
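Once the layer exists, nodes can be attached to it over REST as well. The following is a possible sketch, assuming the spatial plugin's addNodeToLayer endpoint; the node URI shown here is hypothetical and would normally be captured from a node-creation response, as in the previous recipe:

import json
import requests

headers = {'Content-Type': 'application/json'}

# Attach an existing node (by its URI) to the spatial layer
url = "http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/addNodeToLayer"
payload = {
    "layer" : "geom",
    "node"  : "http://<neo4j_server_ip>:<port>/db/data/node/42"   # hypothetical node URI
}
r = requests.post(url, data=json.dumps(payload), headers=headers)
print r.status_code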
There's more…

We encourage you to read more about the spatial REST interfaces in general; to do this, visit http://neo4j-contrib.github.io/spatial/.

Finding geometries within the bounding box

In this recipe, you will learn how to find all the geometries within a bounding box using the spatial REST interface.

Getting ready

Perform the following steps to get started with this recipe:

Install Neo4j using the earlier recipes in this article.
Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article.
Restart the Neo4j graph database server using the following command:
$NEO4J_ROOT_DIR/bin/neo4j restart

How to do it...

In this recipe, we will use the following endpoint to find all the geometries within the bounding box: http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/findGeometriesInBBox. Let's find all the geometries, using the following code:
import json
import requests

headers = {'Content-Type': 'application/json'}
url = "http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/findGeometriesInBBox"
payload = {
    "layer" : "geom",
    "minx" : 0.0,
    "maxx" : 100.0,
    "miny" : 0.0,
    "maxy" : 100.0
}
r = requests.post(url, data=json.dumps(payload), headers=headers)

The data will look like what is shown in the following screenshot; this screenshot shows the output of the bounding box query:

How it works...

Finding geometries in the bounding box is based on the REST interface that the Neo4j Spatial plugin provides. The output of the REST call contains an array of nodes, each containing the node's ID, lat/lon, and its incoming/outgoing relationships. In the preceding output, you can see the node with ID 54 returned as the output.

There's more…

We encourage you to read more about the spatial REST interfaces in general; to do this, visit http://neo4j-contrib.github.io/spatial/.

Finding geometries within a distance

In this recipe, you will learn how to find all the geometries within a distance using the spatial REST interface.

Getting ready

Perform the following steps to get started with this recipe:

Install Neo4j using the earlier recipes in this article.
Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article.
Restart the Neo4j graph database server using the following command:
$NEO4J_ROOT_DIR/bin/neo4j restart

How to do it...

In this recipe, we will use the following endpoint to find all the geometries within a certain distance: http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/findGeometriesWithinDistance. Let's find all the geometries within the specified distance using the following code:
import json
import requests

headers = {'Content-Type': 'application/json'}
url = "http://<neo4j_server_ip>:<port>/db/data/ext/SpatialPlugin/graphdb/findGeometriesWithinDistance"
payload = {
    "layer" : "geom",
    "pointY" : 46.8625,
    "pointX" : -114.0117,
    "distanceInKm" : 500
}
r = requests.post(url, data=json.dumps(payload), headers=headers)

The data will look like what is shown in the following screenshot; this screenshot shows the output of a withinDistance query:

How it works...

Finding geometries within a distance is based on the REST interface that the Neo4j Spatial plugin provides. The output of the REST call contains an array of nodes, each containing the node's ID, lat/lon, and its incoming/outgoing relationships. In the preceding output, we can see the node with ID 71 returned as the output.
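Since the query returns JSON, the matched nodes can be consumed programmatically rather than read off a screenshot. Here is a minimal sketch that reuses the response r from the preceding snippet, assuming each array element is a standard REST node representation with a self URL and a data map holding the lat/lon properties:

# Print the node ID (the tail of the self URL) and the coordinates of each match
for node in r.json():
    node_id = node['self'].rsplit('/', 1)[-1]
    data = node.get('data', {})
    print node_id, data.get('lat'), data.get('lon')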
There's more…

We encourage you to read more about the spatial REST interfaces in general (http://neo4j-contrib.github.io/spatial/).

Finding geometries within a distance using Cypher

In this recipe, you will learn how to find all the geometries within a distance using a Cypher query.

Getting ready

Perform the following steps to get started with this recipe:

Install Neo4j using the earlier recipes in this article.
Install the Neo4j Spatial plugin using the recipe Installing the Neo4j Spatial extension, from this article.
Restart the Neo4j graph database server:
$NEO4J_ROOT_DIR/bin/neo4j restart

How to do it...

In this recipe, we will use the following endpoint to run the query: http://<neo4j_server_ip>:<port>/db/data/cypher. Let's find all the geometries within a distance using a Cypher query:
import json
import requests

headers = {'Content-Type': 'application/json'}
url = "http://<neo4j_server_ip>:<port>/db/data/cypher"
payload = {
    "query" : "START n=node:geom('withinDistance:[46.9163, -114.0905, 500.0]') RETURN n"
}
r = requests.post(url, data=json.dumps(payload), headers=headers)

The data will look like what is shown in the following screenshot; this screenshot shows the output of the withinDistance query that uses Cypher:

The following is the Cypher output in the Neo4j console:

How it works...

Cypher comes with a withinDistance query on spatial indexes, which takes three parameters: lat, lon, and the search distance in kilometers.
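The same spatial index can be queried with a bounding box from Cypher in a similar way. The following is a possible sketch, assuming the index's bbox syntax of [minx, maxx, miny, maxy] (that is, the longitude bounds followed by the latitude bounds); the coordinate values here are illustrative:

import json
import requests

headers = {'Content-Type': 'application/json'}
url = "http://<neo4j_server_ip>:<port>/db/data/cypher"

# Bounding-box variant of the same legacy index lookup
payload = {
    "query" : "START n=node:geom('bbox:[-115.0, -113.0, 46.0, 48.0]') RETURN n"
}
r = requests.post(url, data=json.dumps(payload), headers=headers)
print r.json()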
There's more…

We encourage you to read more about the spatial REST interfaces in general (http://neo4j-contrib.github.io/spatial/).

Summary

Developing Location-based Services with Neo4j taught you about the most important aspect of today's data, location, and how to deal with it in Neo4j. You have learned how to import geospatial data into Neo4j and run queries over it, such as proximity searches, bounding boxes, and so on.

Resources for Article:

Further resources on this subject: Recommender systems dissected Components [article] Working with a Neo4j Embedded Database [article] Differences in style between Java and Scala code [article]

Basic Image Processing

Packt
02 Jun 2015
8 min read
In this article, Ashwin Pajankar, the author of the book Raspberry Pi Computer Vision Programming, takes us through basic image processing in OpenCV. We will do this with the help of the following topics:

Image arithmetic operations: adding, subtracting, and blending images
Splitting color channels in an image
Negating an image
Performing logical operations on an image

This article is very short and easy to code, with plenty of hands-on activities.

(For more resources related to this topic, see here.)

Arithmetic operations on images

In this section, we will have a look at the various arithmetic operations that can be performed on images. Images are represented as matrices in OpenCV, so arithmetic operations on images are similar to arithmetic operations on matrices. The images must be of the same size for you to perform arithmetic operations on them, and these operations are performed on individual pixels.

cv2.add(): This function is used to add two images, where the images are passed as parameters.
cv2.subtract(): This function is used to subtract an image from another. We know that the subtraction operation is not commutative, so cv2.subtract(img1,img2) and cv2.subtract(img2,img1) will yield different results, whereas cv2.add(img1,img2) and cv2.add(img2,img1) will yield the same result, as the addition operation is commutative. Both images have to be of the same size and type, as explained before.

Check out the following code:

import cv2

img1 = cv2.imread('/home/pi/book/test_set/4.2.03.tiff',1)
img2 = cv2.imread('/home/pi/book/test_set/4.2.04.tiff',1)
cv2.imshow('Image1',img1)
cv2.waitKey(0)
cv2.imshow('Image2',img2)
cv2.waitKey(0)
cv2.imshow('Addition',cv2.add(img1,img2))
cv2.waitKey(0)
cv2.imshow('Image1-Image2',cv2.subtract(img1,img2))
cv2.waitKey(0)
cv2.imshow('Image2-Image1',cv2.subtract(img2,img1))
cv2.waitKey(0)
cv2.destroyAllWindows()

The preceding code demonstrates the usage of arithmetic functions on images. Here's the output window of Image1:
Here is the output window of Addition:
The output window of Image1-Image2 looks like this:
Here is the output window of Image2-Image1:
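One detail worth knowing about cv2.add() and cv2.subtract() is that they saturate: results are clipped to the 0 to 255 range of an 8-bit image, whereas plain NumPy arithmetic on the same arrays wraps around modulo 256. A small sketch of the difference, with illustrative values:

import cv2
import numpy as np

a = np.uint8([250])
b = np.uint8([10])

print cv2.add(a, b)       # [[255]] - saturated at the maximum value
print a + b               # [4]     - NumPy wraps around (260 % 256)
print cv2.subtract(b, a)  # [[0]]   - saturated at the minimum value
print b - a               # [16]    - wraps around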
Blending and transitioning images

The cv2.addWeighted() function calculates the weighted sum of two images. Because of the weight factor, it provides a blending effect on the images. Add the following lines of code before destroyAllWindows() in the previous code listing to see this function in action:

cv2.imshow('Blending',cv2.addWeighted(img1,0.5,img2,0.5,0))
cv2.waitKey(0)

In the preceding code, we passed the following five arguments to the addWeighted() function:

img1: This is the first image.
alpha: This is the weight factor for the first image (0.5 in the example).
img2: This is the second image.
beta: This is the weight factor for the second image (0.5 in the example).
gamma: This is the scalar value (0 in the example).

The output image value is calculated with the following formula:

output = (alpha * img1) + (beta * img2) + gamma

This operation is performed on every individual pixel. Here is the output of the preceding code:

We can create a film-style transition effect on the two images by using the same function. Check out the output of the following code, which creates a smooth transition from one image to another:

import cv2
import numpy as np
import time

img1 = cv2.imread('/home/pi/book/test_set/4.2.03.tiff',1)
img2 = cv2.imread('/home/pi/book/test_set/4.2.04.tiff',1)

for i in np.linspace(0,1,40):
    alpha = i
    beta = 1 - alpha
    print 'ALPHA ='+ str(alpha)+' BETA ='+str(beta)
    cv2.imshow('Image Transition', cv2.addWeighted(img1,alpha,img2,beta,0))
    time.sleep(0.05)
    if cv2.waitKey(1) == 27:
        break

cv2.destroyAllWindows()

Splitting and merging image colour channels

On several occasions, we may be interested in working separately with the red, green, and blue channels. For example, we might want to build a histogram for every channel of an image. Here, cv2.split() is used to split an image into three different intensity arrays, one for each color channel, whereas cv2.merge() is used to merge different arrays into a single multi-channel array, that is, a color image. The following example demonstrates this:

import cv2

img = cv2.imread('/home/pi/book/test_set/4.2.03.tiff',1)
b,g,r = cv2.split(img)
cv2.imshow('Blue Channel',b)
cv2.imshow('Green Channel',g)
cv2.imshow('Red Channel',r)
img = cv2.merge((b,g,r))
cv2.imshow('Merged Output',img)
cv2.waitKey(0)
cv2.destroyAllWindows()

The preceding program first splits the image into three channels (blue, green, and red) and then displays each one of them. The separate channels only hold the intensity values of the particular color, so they are essentially displayed as grayscale intensity images. The program then merges all the channels back into an image and displays it.
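As mentioned above, per-channel histograms are a common reason to look at channels separately. The following is a small sketch of that idea using cv2.calcHist() and matplotlib, reusing the same sample image path as above; note that cv2.calcHist() can address a channel directly by its index, so an explicit split is not even required:

import cv2
import matplotlib.pyplot as plt

img = cv2.imread('/home/pi/book/test_set/4.2.03.tiff',1)

# Compute and plot a 256-bin intensity histogram for each BGR channel
for i, col in enumerate(('b','g','r')):
    hist = cv2.calcHist([img],[i],None,[256],[0,256])
    plt.plot(hist, color=col)

plt.xlim([0,256])
plt.title('Per-channel histograms')
plt.show()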
Creating a negative of an image

In mathematical terms, the negative of an image is the inversion of its colors. For a grayscale image, it is even simpler! The negative of a grayscale image is just the intensity inversion, which can be achieved by finding the complement of the intensity from 255. A pixel value ranges from 0 to 255, so negation involves subtracting the pixel value from the maximum value, that is, 255. The code for this is as follows:

import cv2

img = cv2.imread('/home/pi/book/test_set/4.2.07.tiff')
grayscale = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
negative = abs(255-grayscale)
cv2.imshow('Original',img)
cv2.imshow('Grayscale',grayscale)
cv2.imshow('Negative',negative)
cv2.waitKey(0)
cv2.destroyAllWindows()

Here is the output window of Grayscale:
Here's the output window of Negative:

The negative of a negative will be the original grayscale image. Try this on your own by taking the negative of the negative again.

Logical operations on images

OpenCV provides bitwise logical operation functions for images. We will have a look at the functions that provide the bitwise logical AND, OR, XOR (exclusive OR), and NOT (inversion) functionality. These functions can be better demonstrated visually with grayscale images. I am going to use barcode images in horizontal and vertical orientations for the demonstration. Let's have a look at the following code:

import cv2
import matplotlib.pyplot as plt

img1 = cv2.imread('/home/pi/book/test_set/Barcode_Hor.png',0)
img2 = cv2.imread('/home/pi/book/test_set/Barcode_Ver.png',0)
not_out = cv2.bitwise_not(img1)
and_out = cv2.bitwise_and(img1,img2)
or_out = cv2.bitwise_or(img1,img2)
xor_out = cv2.bitwise_xor(img1,img2)

titles = ['Image 1','Image 2','Image 1 NOT','AND','OR','XOR']
images = [img1,img2,not_out,and_out,or_out,xor_out]

for i in xrange(6):
    plt.subplot(2,3,i+1)
    plt.imshow(images[i],cmap='gray')
    plt.title(titles[i])
    plt.xticks([]),plt.yticks([])
plt.show()

We first read the images in grayscale mode, calculated the NOT, AND, OR, and XOR outputs, and then displayed them in a neat grid with matplotlib. We leveraged the plt.subplot() function to display multiple images. In the preceding example, we created a grid with two rows and three columns for our images and displayed each image in a cell of the grid. You can modify this line and change it to plt.subplot(3,2,i+1) to create a grid with three rows and two columns.

We can also use this technique without a loop, in the following way. For each image, you have to write the following statements. I will write the code for the first image only; go ahead and write it for the rest of the five images:

plt.subplot(2,3,1)
plt.imshow(img1,cmap='gray')
plt.title('Image 1')
plt.xticks([]),plt.yticks([])

Finally, use plt.show() to display the grid. This technique is useful for avoiding the loop when a very small number of images, usually two or three, have to be displayed. The output of this is as follows:

Make a note of the fact that the logical NOT operation is the negative of the image.

Exercise

You may want to have a look at the functionality of cv2.copyMakeBorder(). This function is used to create borders and paddings for images, and many of you will find it useful for your projects. Exploring this function is left as an exercise for the reader. You can check the Python OpenCV API documentation at the following location: http://docs.opencv.org/modules/refman.html

Summary

In this article, we learned how to perform arithmetic and logical operations on images and split images by their channels. We also learned how to display multiple images in a grid by using matplotlib.

Resources for Article:

Further resources on this subject: Raspberry Pi and 1-Wire [article] Raspberry Pi Gaming Operating Systems [article] The Raspberry Pi and Raspbian [article]