
How-To Tutorials

7019 Articles

Real-time Communication with SocketIO

Daan van Berkel
07 Oct 2015
8 min read
In this blog post we will create a small game and explore how to create a simple server that allows real-time communication between players. The game we are going to create is an interpretation of the playground game Tag. One player is "it" and needs to tag the other players by moving their circle around and touching the other players' circles.

We will use Node.js to create the server. Node.js is a platform that brings JavaScript to the server by leveraging Chrome's JavaScript runtime and provides APIs to easily create applications. Don't worry if you are unfamiliar with Node; there are detailed instructions to follow. Node consists of a small core and is meant to be enhanced by modules. One of the modules we are going to use is SocketIO. SocketIO harnesses WebSockets in an easy-to-use way. WebSockets are a recent addition to the protocols over TCP and allow for bi-directional communication, ideal for real-time games.

Follow along

If you want to follow along with the code examples, download Tag-follow-along.zip and unzip it in a suitable location. The project uses Node.js, so you need to install that before we can start. Once it is installed we need to retrieve the dependencies for the project. Open a console, enter the Tag-follow-along directory and execute the following command:

npm install

This will install the dependencies over the network. You can start the game by running:

npm run game

If you now open http://localhost:3000 in your browser you should be greeted with Tag, although there is not a lot going on yet.

Tag

We will create a game of Tag. One player is "it" and needs to tag the other players. Below you see a screenshot of the game.

A Game of Tag

The big circle is "it" and needs to tag the other circles. Two smaller circles are trying to flee, but they seem to be cornered. Four tiny circles can be seen as well. These are already tagged and unable to move.

Game Overview

Because this blog post is about real-time communication, we are not going into the game mechanics in detail. If you want to learn more about HTML games, take a look at HTML5 Games Development by Example. The important files in the project are listed below.

├── lib
│   ├── extend.js
│   └── game.js
├── package.json
├── public
│   ├── css
│   │   └── tag.css
│   ├── index.html
│   ├── js
│   │   ├── application.js
│   │   └── tag.js
│   └── options.json
└── server.js

Here is a short description of the role each file plays:

- package.json is a description of the project. It is mainly used to manage the dependencies of the project.
- server.js is the source for the server. It has two responsibilities: serving static files from the public directory and running the game.
- game.js is the source for the actual game of Tag. It keeps track of players, their positions, and whether they are tagged.
- tag.js is the counterpart of game.js in the browser.
- application.js is responsible for setting up the game in the browser.

Communication

Communication between server and clients

The diagram above shows how the server and clients communicate with each other. The clients know about the positions of the players. Each time a player moves, the client should inform the server. The server in turn keeps track of all the positions that the clients send and updates the game state accordingly. It also needs to inform the clients of the updated game state. This is contrary to traditional pull-based communication between client and server, i.e. a client makes a request and the server responds.
From the diagram it is clear that the server can push information to clients without the clients making a request for it. This is made possible by WebSockets, a protocol providing full-duplex communication channels over a single TCP connection. We will use the SocketIO library to tap into the potential of WebSockets.

Setting up Communication

In order to let the client and the server communicate with each other we need to add some code. Open server.js and require SocketIO:

var express = require('express');
var http = require('http');
var socketio = require('socket.io');
var Game = require('./lib/game');
var options = require('./public/options.json');

This allows us to use the library in our code. Next we need to get a handle on io by adding it to the server. Find the line

var server = http.Server(app);

and change it to

var server = http.Server(app);
var io = socketio(server);

This sets up the server for two-way communication with clients. In order to make an actual connection, open public/js/application.js and add the following line:

var socket = io();

Head over to server.js and we will make sure we get some feedback when a client connects. At the end of the server.js file add the following lines:

io.on('connection', function(socket){
  console.log('%s connected', socket.id);

  socket.on('disconnect', function(){
    console.log('%s disconnected', socket.id);
  });
});

This will write a log message each time a client connects and disconnects. Restart the server: use Ctrl-C to stop it and run npm run game to start it again. If you now refresh your browser a few times you should see something like

Listening on http://:::3000
jK0r5Eqhzn16wySjAAAA connected
jK0r5Eqhzn16wySjAAAA disconnected
52cTgdk_p3IwcrMXAAAB connected
52cTgdk_p3IwcrMXAAAB disconnected
X89V2p12k9Jsw2UUAAAC connected
X89V2p12k9Jsw2UUAAAC disconnected
hw-7y6phGkvRz-TmAAAD connected

Register Players

The game is responsible for keeping track of players. So when a client connects we need to inform the game of a new player, and when a client disconnects we need to remove the player from the game. Update server.js to reflect this:

io.on('connection', function(socket){
  console.log('%s connected', socket.id);
  game.addPlayer(socket.id);

  socket.on('disconnect', function(){
    console.log('%s disconnected', socket.id);
    game.removePlayer(socket.id);
  });
});

Keeping Track of Player Position

Each client should inform the server of the position the player wants to be in. We can use the socket that we created to emit this information to the server. Open the development tools of the browser and select the Console tab. If you now move your mouse around inside the square you will see some log messages appear:

Object { x: 28, y: 417 }
Object { x: 123, y: 403 }
Object { x: 210, y: 401 }
Object { x: 276, y: 401 }
Object { x: 397, y: 401 }
Object { x: 480, y: 419 }
Object { x: 519, y: 435 }
Object { x: 570, y: 471 }

The position is already logged in the mouseMoveHandler in public/js/application.js. Change it to

function mouseMoveHandler(event){
  console.log({
    'x': event.pageX - this.offsetLeft,
    'y': event.pageY - this.offsetTop
  });

  socket.emit('position', {
    'x': event.pageX - this.offsetLeft,
    'y': event.pageY - this.offsetTop
  });
}

The socket.emit call will send the position to the server. Let's make sure the server handles the updated position when it receives it. In server.js head over to the socket registration code and add the following snippet.
socket.on('position', function(data){
  console.log('%s is at (%s, %s)', socket.id, data.x, data.y);
});

This registers an event handler for the position events that the client sends. If you now restart the server, refresh your browser and move your mouse around in the Tag square, you should also see log messages in the server output:

kZyZkVjycq0_ZIvcAAAC is at (628, 133)
kZyZkVjycq0_ZIvcAAAC is at (610, 136)
kZyZkVjycq0_ZIvcAAAC is at (588, 139)
kZyZkVjycq0_ZIvcAAAC is at (561, 145)
kZyZkVjycq0_ZIvcAAAC is at (540, 148)
kZyZkVjycq0_ZIvcAAAC is at (520, 148)
kZyZkVjycq0_ZIvcAAAC is at (506, 148)
kZyZkVjycq0_ZIvcAAAC is at (489, 148)
kZyZkVjycq0_ZIvcAAAC is at (477, 148)
kZyZkVjycq0_ZIvcAAAC is at (469, 148)

Broadcasting Game State

Now that the server receives the position of the client when it changes, we need to close the communication loop by sending the updated game state back. In server.js, change the position event handler into

socket.on('position', function(data){
  console.log('%s is at (%s, %s)', socket.id, data.x, data.y);
  game.update(socket.id, data);
});

This updates the position of the player in the game. In the emitGameState function the game state is updated by calling the game's tick method. This method is called 60 times a second. Change it to also broadcast the game state to all connected clients:

function emitGameState() {
  game.tick();
  io.emit('state', game.state());
};

In the client we need to respond when the server sends us a state event. We can do this by registering a state event handler in public/js/application.js:

socket.on('state', function(state){
  game.updateState(state);
});

If you now restart the server and reconnect your browser, you should see a big circle trying to follow your mouse around.

Multiple Clients

If you now open multiple browsers and let them all connect to http://localhost:3000, you could see the following picture.

Multiple Clients

If you move your mouse in the Tag square in one of the browsers, you will see the corresponding circle move in all of the browsers.

Conclusion

We have seen that SocketIO is a very nice library that harnesses the power of WebSockets to enable real-time communication between client and server. Now start having fun with it!

About the author

Daan van Berkel is an enthusiastic software craftsman with a knack for presenting technical details in a clear and concise manner. Driven by the desire to understand complex matters, Daan is always on the lookout for innovative uses of software.


Managing Pools for Desktops

Packt
07 Oct 2015
14 min read
In this article by Andrew Alloway, the author of VMware Horizon View High Availability, we will review strategies for providing High Availability for various types of VMware Horizon View desktop pools.

Overview of pools

VMware Horizon View provides administrators with the ability to automatically provision and manage pools of desktops. As part of our provisioning of desktops, we must also consider how we will continue service for individual users in the event of a host or storage failure. Generally, High Availability requirements fall into two categories for each pool: stateless desktops, where the user information is not stored on the VM between sessions, and stateful desktops, where the user information is stored on the desktop between sessions.

Stateless desktops

In a stateless configuration, we are not required to store data on the virtual desktops between user sessions. This allows us to use local storage instead of shared storage for our HA strategies, as we can tolerate host failures without the use of shared disk. We can achieve a stateless desktop configuration using roaming profiles and/or View Persona profiles. This can greatly reduce cost and maintenance requirements for View deployments. Stateless desktops are typical in the following environments:

Task Workers: A group of workers whose tasks are well known and who all share a common set of core applications. Task workers can use roaming profiles to maintain data between user sessions. In a multi-shift environment, having stateless desktops means we only need to provision as many desktops as will be used concurrently. Task worker setups are typically found in the following scenarios:
- Data entry
- Call centers
- Finance, Accounts Payable, Accounts Receivable
- Classrooms (in some situations)
- Laboratories
- Healthcare terminals

Kiosk Users: A group of users that do not log in. Logins are typically automatic or without credentials. Kiosk users are typically untrusted users. Kiosk VMs should be locked down and restricted to only the core applications that need to be run. Kiosks are typically refreshed after logoff or at scheduled times after hours. Kiosks can be found in situations such as the following:
- Airline check-in stations
- Library terminals
- Classrooms (in some situations)
- Customer service terminals
- Customer self-serve
- Digital signage

Stateful desktops

Stateful desktops are desktops that require user data to be stored on the VM or desktop host between user sessions. They have some advantages, such as reduced IOPS and higher disk performance, due to the ability to choose thick provisioning. These machines are typically required by users who will extensively customize their desktop in non-trivial ways, require complex or unique applications that are not shared by a large group, or require the ability to modify their VM. Stateful desktops are typically used in the following situations:
- Users who require the ability to modify the installed applications
- Developers
- IT administrators
- Unique or specialized users
- Department managers
- VIP staff/managers

Dedicated pools

Dedicated pools are View desktops provisioned using thin or thick provisioning. Dedicated pools are typically used for stateful desktop deployments. Each desktop can be provisioned with a dedicated persistent disk used for storing the user profile and data. Once assigned a desktop, a user will always log into the same desktop, ensuring that their profile is kept constant.
During OS refreshes, rebalances, and recomposes, the OS disk is reverted back to the base image. Dedicated pools with persistent disks offer simplicity for managing desktops, as minimal profile management takes place; it is all managed by the View Composer/View Connection Server. This also ensures that applications that store profile data will almost always be able to retrieve that data on the next login, meaning the administrator doesn't have to track down applications that incorrectly store data outside the roaming profile folder.

HA considerations for dedicated pools

Dedicated pools unfortunately have very difficult HA requirements. Storing the user profile with the VM means that the VM has to be stored and maintained in an HA-aware fashion. This almost always results in a shared disk solution being required for dedicated pools. In the event of a host outage, other hosts connected to the same storage can start up the VM. For shared storage, we can use NFS, iSCSI, Fibre Channel, or VMware Virtual SAN storage. Consider investing in storage systems with primary and backup controllers, as we will be dependent on the disk controllers always being available. Backups are also a must with this system, as there are very few recovery options in the event of a storage array failure.

Floating pools

Floating pools are pools of desktops where any user can be assigned to any desktop in the pool upon login. Floating pools are generally used for stateless desktop deployments. They can be used with roaming profiles or View Persona to provide a consistent user experience on login. Since floating pool desktops are treated as disposable VMs, we open up additional options for HA.

Floating pool desktops are given two local disks: the OS disk, which is a replica of the assigned base VM, and the disposable disk, where the page file, hibernation file, and temp drive are located. When floating pools are refreshed, recomposed, or rebalanced, all changes made to the desktop by the users are lost. This is because the disposable disk is discarded between refreshes and the OS disk is reverted back to the base image. As such, any session information such as the profile, temp directory, and software changes is lost between refreshes. Refreshes can be scheduled to occur after logoff or after every X days, or can be triggered manually.

HA considerations for floating pools

Floating pools can be protected in several ways depending on the environment. Since floating pools can be deployed on local storage, we can protect against a host failure by provisioning the floating pool VMs on multiple separate hosts. In the event of a host failure, the remaining virtual desktops will be used to log users in. If there is free capacity in the cluster, more virtual desktops will be provisioned on other hosts.

For environments with shared storage, floating pools can still be deployed on the shared storage, but it is a good idea to have a secondary shared storage device or a highly available storage device. In the event of a storage failure, the VMs can be started on the secondary storage device. VMware Virtual SAN is inherently HA-safe, and there is no need for a secondary datastore when using Virtual SAN.

Many floating pool environments will utilize a profile management solution such as roaming profiles or View Persona Management. In these situations it is essential to set up a redundant storage location for View Persona profiles and/or roaming profiles.
In practice, a Windows DFS share is a convenient and easy way to guard profiles against loss in the event of an outage. DFS can be configured to replicate changes made to the profiles in real time between hosts. If the Windows DFS servers are provisioned as VMs on shared storage, make sure to create a DRS rule to separate the VMs onto different hosts. Where possible, DFS servers should be stored on separate disk arrays to ensure the data is preserved in the event of a disk array or storage processor failure. For more information regarding Windows DFS, visit https://technet.microsoft.com/en-us/library/jj127250.aspx.

Manual pools

Manual pools are custom dedicated desktops for each user. A VM is manually built for each user who is using the manual pool. Manual pools are stateful pools that generally do not utilize profile management technologies such as View Persona or roaming profiles. Like dedicated pools, once a user is assigned to a VM they will always log into the same VM. As such, HA requirements for manual pools are very similar to those of dedicated pools.

Manual desktops can be configured in almost any manner desired by the administrator. There is no requirement for more than one disk to be attached to a manual pool desktop. Manual pools can also be configured to use physical hardware as the desktop, such as blade servers, desktop computers, or even laptops. In this situation there are limited high availability options without investing in exotic and expensive hardware. As a best practice, the physical hosts should be built with redundant power supplies, ECC RAM, and mirrored hard disks, budget and HA requirements permitting. There should also be a good backup strategy for managing the physical hosts connected to manual pools.

HA considerations for manual pools

Manual pools, like dedicated pools, have difficult HA requirements. Storing the user profile with the VM means that the VM has to be stored and maintained in an HA-aware fashion. This almost always results in a shared disk solution being required for manual pools. In the event of a host outage, other hosts connected to the same storage can start up the VM. For shared storage, we can use NFS, iSCSI, Fibre Channel, or VMware VSAN storage. Consider investing in storage systems with primary and backup controllers, as we will be dependent on the disk controllers always being available. Backups are also a must with this system, as there are very few recovery options in the event of a storage array failure. VSAN deployments are inherently HA-safe and are excellent candidates for manual pool storage.

Manual pools, given their static nature, also have the option of using replication technology to back up the VMs onto another disk. You can use VMware vSphere Replication to do automatic replication, or use a variety of storage replication solutions offered by storage and backup vendors. In some cases it may be possible to use Fault Tolerance on the virtual desktops for truly high availability. Note that this would limit the individual VMs to a single vCPU, which may be undesirable.

Remote Desktop Services pools

Remote Desktop Services pools (RDS pools) are pools where the remote session or application is hosted on a Windows Remote Desktop Server. The application or remote session runs under the user's credentials. Usually all the user data is stored locally on the Remote Desktop Server, but it can also be stored remotely using roaming profiles or View Persona profiles. Folder redirection to a central network location is also used with RDS pools.
Typical uses for Remote Desktop Services are migrating users off legacy RDS environments, hosting applications, and providing access to troublesome applications or applications with large memory footprints. The Windows Remote Desktop Server can be either a VM or a standalone physical host. It can be combined with Windows clustering technology to provide scalability and high availability. You can also deploy a load balancer solution to manage connections between multiple Windows Remote Desktop Servers.

Remote Desktop Services pool HA considerations

Remote Desktop Services HA revolves around protecting individual RDS VMs or provisioning a cluster of RDS servers. When a single VM is deployed with RDS, it is generally best to use vSphere HA and clustering features to protect the VM. If the RDS resources are larger than practical for a VM, then we must focus on protecting an individual host or clustering multiple hosts.

When the Windows Remote Desktop Server is deployed as a VM, the following options are available:

- Protect the VM with VMware HA, using shared storage: This allows vCenter to fail the VM over to another host in the event of a host failure. vSphere will be responsible for starting the VM on another host. The VM will resume from a crashed state.
- Replicate the virtual machine to separate disks on separate hosts using VMware Virtual SAN: Same as above, but in this case the VM has been replicated to another host using Virtual SAN technology. The remote VM will be started up from a crashed state, using the last consistent hard drive image that was replicated.
- Use replication technologies such as vSphere Replication: The VM will be periodically synchronized to a remote host. In the event of a host failure, we can manually activate the remotely synchronized VM.
- Use a vendor's storage-level replication: In this case we allow our storage vendor to provide the replication technology for a redundant backup. This protects us in the event of a storage or host failure. Failover can be automated or manual; consult your storage vendor for more information.
- Protect the VM using backup technologies: This provides redundancy in the sense that we won't lose the VM if it fails. Unfortunately, you are at the mercy of your restore process to bring the VM back to life. The VM will resume from a crashed state. Always keep backups of production servers.

For RDS servers running on dedicated physical hardware, we could utilize the following:

- Redundant power supplies: Redundant power supplies will keep the server going while a PSU is being replaced or becomes defective. It is also a good idea to have two separate power sources for each power supply. Simple things like a power bar going faulty or tripping a breaker could bring down the server if there are not two independent power sources.
- Uninterruptible Power Supply: Battery backups are always a must for production-level equipment. Make sure to scale the UPS to provide adequate power and duration for your environment.
- Redundant network interfaces: In rare circumstances a NIC can go bad or a cable can be damaged. In this case, redundant NICs will prevent a server outage. Remember that to protect against a switch outage we should plug the NICs into separate switches.
- Mirrored or redundant disks: Hard drives are one of the most common failures in computers. Mirrored hard drives or RAID configurations are a must for production-level equipment.
- Two or more hosts: Clustering physical servers will ensure that host failures won't cause downtime.
Consider multi-site configurations for even more redundancy.

Shared strategies for VMs and hardware:

- Provide High Availability to the RDS servers using Microsoft Network Load Balancer (NLB): Microsoft Network Load Balancer can provide load balancing to the RDS servers directly. In this situation, the clients connect to a single IP managed by the NLB, which randomly assigns them to a server.
- Provide High Availability using a load balancer to manage sessions between RDS servers: A hardware or software load balancer can be used instead of Microsoft Network Load Balancer. Load balancer vendors provide a wide variety of capabilities and features for their load balancers. Consult your load balancer vendor for best practices.
- Use DNS round robin to alternate between RDS hosts: One of the most cost-effective load balancing methods. It has the drawback of not being able to balance the load or to direct clients away from failed hosts. Updating DNS may delay adding new capacity to the cluster or delay removing a failed host from the cluster.
- Remote Desktop Connection Broker with High Availability: We can provide RDS failover using the Connection Broker feature of our RDS server. For more information regarding Remote Desktop Connection Broker with High Availability, see https://technet.microsoft.com/en-us/library/ff686148%28WS.10%29.aspx.

Here is an example topology using physical or virtual Microsoft RDS servers. We use a load balancing technology for the View Connection Servers, as described in the previous chapter. We then connect to the RDS servers via either a load balancer, DNS round robin, or a cluster IP.

Summary

In this article, we covered the concept of stateful and stateless desktops and the consequences and techniques of supporting each in a highly available environment.

Further resources on this subject:
- Working with Virtual Machines [article]
- Storage Scalability [article]
- Upgrading VMware Virtual Infrastructure Setups [article]


Using Resources

Packt
06 Oct 2015
6 min read
In this article, written by Mathieu Nayrolles, author of the book Xamarin Studio for Android Programming: A C# Cookbook, we will learn how to play sound and how to play a movie, either in a user-interactive way—by pressing a button—or programmatically.

Playing sound

There are countless occasions on which you will want your applications to play sound. In this section, we will learn how to play a song from our application, either when the user presses a button or programmatically.

Getting ready

To follow this recipe you need to create a project of your own.

How to do it…

Add the following using directive to your MainActivity.cs file:

using Android.Media;

Create a class variable named _myPlayer of the MediaPlayer class:

MediaPlayer _myPlayer;

Create a subfolder named raw under Resources. Place the sound you wish to play inside the newly created folder. We will use the Mario theme, free of rights for non-commercial use, downloaded from http://mp3skull.com. Add the following lines at the end of the OnCreate() method of your MainActivity:

_myPlayer = MediaPlayer.Create (this, Resource.Raw.mario);
Button button = FindViewById<Button> (Resource.Id.myButton);
button.Click += delegate {
    _myPlayer.Start();
};

In the preceding code sample, the first line creates an instance of the MediaPlayer using the this statement as the Context and Resource.Raw.mario as the file to play with this MediaPlayer. The rest of the code is trivial: we just acquire a reference to the button and create a behavior for the OnClick() event of the button. In this event, we call the Start() method of the _myPlayer variable. Run your application and press the button as shown in the following screenshot:

You should hear the Mario theme playing right after you press the button, even if you are running the application on the emulator.

How it works...

Playing sound (and video) is an activity handled by the MediaPlayer class of the Android platform. This class involves some serious implementation and a multitude of states, in the same way as activities. However, as Android application developers rather than Android platform developers, we only require a little background on this. The Android multimedia framework includes—through the MediaPlayer class—support for playing a very wide variety of media, such as MP3s, from the filesystem or from the Internet. Also, you can only play a song on the current sound device, which could be the phone speakers, a headset, or even a Bluetooth-enabled speaker. In other words, even if there are many sound outputs available on the phone, the current default set by the user is the one where your sound will be played. Finally, you cannot play a sound during a call.

There's more...

Playing sound which is not stored locally

Obviously, you may want to play sounds that are not stored locally—in the raw folder—but anywhere else on the phone, such as the SD card. To do so, you have to use the following code sample:

Uri myUri = new Uri ("uriString");
_myPlayer = new MediaPlayer();
_myPlayer.SetAudioStreamType (AudioManager.UseDefaultStreamType);
_myPlayer.SetDataSource(myUri);
_myPlayer.Prepare();

The first line defines a Uri for the target file to play. The three following lines set the stream type, set the Uri as the data source, and prepare the MediaPlayer. Prepare() is a method which prepares the player for playback in a synchronous manner, meaning that this instruction blocks the program until the player has loaded the file and is ready to play.
You could also call PrepareAsync(), which returns immediately and performs the loading asynchronously.

Playing online sound

Using code very similar to what is required to play sounds stored somewhere on the phone, we can play sound from the Internet. Basically, we just have to replace the Uri parameter with an HTTP address, like the following:

String url = "http://myWebsite/mario.mp3";
_myPlayer = new MediaPlayer();
_myPlayer.SetAudioStreamType (AudioManager.UseDefaultStreamType);
_myPlayer.SetDataSource(url);
_myPlayer.Prepare();

Also, you must request the permission to access the Internet for your application. This is done in the manifest by adding a <uses-permission> tag as a child of the <manifest> element (outside <application>), as shown in the following code sample:

<manifest ...>
    <uses-permission android:name="android.permission.INTERNET" />
    <application android:icon="@drawable/Icon" android:label="Splash">
        ...
    </application>
</manifest>

See also

See the next recipe for playing video.

Playing a movie

As a final recipe in this article, we will see how to play a movie with your Android application. Playing video, unlike playing audio, involves a special view for displaying it to the users.

Getting ready

For the last time, we will reuse the same project; more specifically, we will play a Mario video under the button that plays the Mario theme seen in the previous recipe.

How to do it...

Add the following code to your Main.axml layout file:

<VideoView
    android:id="@+id/myVideoView"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
</VideoView>

As a result, the content of your Main.axml file should look like the following screenshot:

Add the following code to the MainActivity.cs file in the OnCreate() method:

var videoView = FindViewById<VideoView> (Resource.Id.myVideoView);
var uri = Android.Net.Uri.Parse ("url of your video");
videoView.SetVideoURI (uri);

Finally, invoke the videoView.Start() method. Note that playing an Internet-based video, even a short one, will take a very long time with this technique, as the video needs to be fully loaded before playback.

How it works...

Playing video involves a special view to display it to users. This special view is the VideoView tag, which is used in much the same way as a simple TextView tag:

<VideoView
    android:id="@+id/myVideoView"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
</VideoView>

As you can see in the preceding code sample, you can apply the same parameters to a VideoView tag as to a TextView tag, such as layout-based options. The VideoView, like the MediaPlayer for audio, has a method to set the video URI, named SetVideoURI, and another one to start the video, named Start().

Summary

In this article, we learned how to play a sound clip as well as video in the Android application we developed.

Further resources on this subject:
- Heads up to MvvmCross [article]
- XamChat – a Cross-platform App [article]
- Gesture [article]


The Swift Programming Language

Packt
06 Oct 2015
25 min read
This article is by Chuck Gaffney, the author of the book iOS 9 Game Development Essentials. It delves into some vital specifics of the Swift language.

At the core of all game development is your game's code. It is the brain of your project and, outside of the art, sound, and various asset development, it is where you will spend most of your time creating and testing your game. Up until Apple's Worldwide Developers Conference (WWDC14) in June of 2014, the language of choice for iOS game and app development was Objective-C. At WWDC14, a new and faster programming language, Swift, was announced and is now the recommended language for all current and future iOS games and general app creation. As of the writing of this book, you can still use Objective-C to design your games, but programmers both new and seasoned will see that writing in Swift is not only easier for expressing your game's logic but also more performant.

Keeping your game running at that critical 60 FPS is dependent on fast code and logic. Engineers at Apple developed the Swift programming language from the ground up with performance and readability in mind, so this language can execute certain code iterations faster than Objective-C while also keeping code ambiguity to a minimum. Swift also uses many of the methodologies and syntax found in more modern languages like Scala, JavaScript, Ruby, and Python.

So let's dive into the Swift language. Some basic knowledge of Object Oriented Programming (OOP) is recommended beforehand, but we will try to keep the build-up and explanation of code simple and easy to follow as we move on to the more advanced topics related to game development.

Hello World!

It's somewhat of a tradition in the teaching of programming languages to begin with a Hello World example. A Hello World program simply uses your code to display or log the text Hello World. It has always been the general starting point because sometimes just getting your code environment set up and having your code execute correctly is half the battle. At least, this was more the case in previous programming languages. Swift makes this easier than ever. Without going into the structure of a Swift file (which we shall do later on, and which is also much simpler than in Objective-C and past languages), here's how you create a Hello World program:

print("Hello, World!")

That's it! That is all you need to have the text "Hello, World" appear in Xcode's Debug Area output.

No more semicolons

Those of us who have been programming for some time might notice that the usually all-important semicolon (;) is missing. This isn't a mistake: in Swift we don't have to use a semicolon to mark the end of an expression. We can if we'd like, and some of us might still do it out of force of habit, but Swift has removed that common concern. The use of the semicolon to mark the end of an expression stems from the earliest days of programming, when code was written in simple word processors and needed a special character to mark where one expression ends and the next begins.
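To make that concrete, here is a hedged two-line sketch (the names are made up for illustration); a semicolon is only needed when two statements share the same line:

let score = 0                 // no semicolon required
var lives = 3; var level = 1  // semicolons separate statements placed on one line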
Variables, constants, and primitive data types

When programming any application, whether you are new to programming or learning a different language, the first thing to understand is how the language handles variables, constants, and various data types, such as Booleans, integers, floats, strings, and arrays. You can think of the data in your program as boxes or containers of information. Those containers can be of different flavors, or types. Throughout the life of your game, data can change (variables, objects, and so on) or stay the same.

For example, the number of lives a player has would be stored as a variable, as that is expected to change during the course of the game. That variable would be of the primitive data type integer, which is basically a whole number. Data that stores, say, the name of a certain weapon or power-up in your game would be stored in what's known as a constant, as the name of that item is never going to change.

In a game where the player can have interchangeable weapons or power-ups, the best way to represent the currently equipped item would be to use a variable. A variable is a piece of data that is bound to change. That weapon or power-up will also most likely carry a bit more information than just a name or a number—the primitive types we mentioned prior. The currently equipped item would be made up of properties like its name, power, effects, index number, and the sprite or 3D model that visually represents it. Thus the currently equipped item wouldn't just be a variable of a primitive data type, but would be what is known as an object. Objects in programming can hold a number of properties and functionality and can be thought of as a black box of both function and information. The currently equipped item in our case would be a sort of placeholder that can hold an item of that type and interchange it when needed, fulfilling its purpose as a replaceable item.

Swift is what's known as a type-safe language, so keeping track of the exact type of data, and even its future usage (that is, whether the data is or will be nil), is very important when working with Swift compared to other languages. Apple made Swift behave this way to help keep runtime errors and bugs in your applications to a minimum, so we can find them much earlier in the development process.

Variables

Let's look at how variables are declared in Swift:

var lives = 3 //variable representing the player's lives
lives = 1 //changes that variable to a value of 1

Those of us who have been developing in JavaScript will feel right at home here. Like JavaScript, we use the keyword var to declare a variable, and we named the variable lives. The compiler implicitly knows that the type of this variable is a whole number, so the data type is a primitive one: Int. The type can also be explicitly declared:

var lives: Int = 3 //variable of type Int

We can also represent lives with the floating point data types Double or Float:

// lives are represented here as 3.0 instead of 3
var lives: Double = 3 //of type Double
var lives: Float = 3 //of type Float

Using a colon after the variable's name lets us explicitly typecast the variable.

Constants

During your game there will be pieces of data that don't change throughout the life of the game or the game's current level or scene. This can be various data like gravity, a text label in the Heads-Up Display (HUD), the center point of a character's 2D animation, an event declaration, or the time before your game checks for new touches or swipes. Declaring constants is almost the same as declaring variables:

let gravityImplicit = -9.8 //implicit declaration
let gravityExplicit: Float = -9.8 //explicit declaration

As we can see, we use the keyword let to declare constants.
Here's another example, using a string that could represent a message displayed on the screen at the start or end of a stage:

let stageMessage = "Start!"
stageMessage = "You Lose!" //error

Since the string stageMessage is a constant, we cannot change it once it has been declared. Something like this would be better off as a variable, declared with var instead of let.

Why don't we declare everything as a variable?

This is a question sometimes asked by new developers, and it's understandable why, especially since game apps tend to have a larger number of variables and more interchangeable states than the average application. When the compiler is building its internal list of your game's objects and data, more goes on behind the scenes with variables than with constants. Without getting too much into topics like the program's stack and other details: in short, having objects, events, and data declared as constants with the let keyword is more efficient than var. In a small app on the newest devices today, though not recommended, we could possibly get away with declaring everything as variables without seeing a great deal of loss in app performance. When it comes to video games, however, performance is critical. Buying back as much performance as possible allows for a better player experience. Apple recommends that, when in doubt, you always use let when declaring and let the compiler tell you when to change to var.

More about constants

As of Swift version 1.2, constants can have a conditionally controlled initial value. Prior to this update, we had to initialize a constant with a single starting value or be forced to make the property a variable. In Xcode 6.3 and newer, we can perform the following logic:

let x : SomeThing
if condition {
    x = foo()
} else {
    x = bar()
}
use(x)

An example of this in a game could be:

let stageBoss : Boss
if (stageDifficulty == gameDifficulty.hard) {
    stageBoss = Boss.toughBoss()
} else {
    stageBoss = Boss.normalBoss()
}
loadBoss(stageBoss)

With this functionality, a constant's initialization can have a layer of variance while still remaining unchangeable, or immutable, throughout its use. Here, the constant stageBoss can be one of two values based on the game's difficulty: Boss.toughBoss() or Boss.normalBoss(). The boss won't change for the course of this stage, so it makes sense to keep it as a constant. More on if and else statements is covered later in the article.

Arrays, matrices, sets, and dictionaries

Variables and constants can represent a collection of various properties and objects. The most common collection types are arrays, matrices, sets, and dictionaries. An Array is an ordered list of objects, a Matrix is, in short, an array of arrays, a Set is an unordered collection of distinct objects, and a Dictionary is an unordered collection that uses a key : value association to look up data.

Arrays

Here's an example of an Array in Swift:

let stageNames : [String] = ["Downtown Tokyo","Heaven Valley", "Nether"]

The object stageNames is a collection of strings representing the names of a game's stages. Arrays are indexed by subscripts from 0 to (array length - 1). So stageNames[0] is Downtown Tokyo, stageNames[2] is Nether, and stageNames[4] would give an error since that index is beyond the limits of the array and doesn't exist. We use brackets around the class type of stageNames, [String], to tell the compiler that we are dealing with an array of Strings. Brackets are also used around the individual members of this array.
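Because an out-of-range subscript crashes at runtime, in game code it can pay to guard an index before using it. A hedged sketch using the stageNames array above (selectedStage is a made-up name for illustration):

let selectedStage = 4
if selectedStage < stageNames.count {
    print(stageNames[selectedStage])
} else {
    print("No stage at index \(selectedStage)")
}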
2D arrays or matrices

A common collection type used in physics calculations, graphics, and game design—particularly grid-based puzzle games—is the two-dimensional array, or matrix. 2D arrays are simply arrays that have arrays as their members. These arrays can be expressed in a rectangular fashion in rows and columns. For example, the 4x4 (4 rows, 4 columns) tile board in the 15 Puzzle game can be represented as follows:

var tileBoard = [[1,2,3,4],
                 [5,6,7,8],
                 [9,10,11,12],
                 [13,14,15,""]]

In the 15 Puzzle game, your goal is to shift the tiles using the one empty spot (represented with the blank String "") so that they all end up in the 1–15 order we see above. The game would start with the numbers arranged in a random but solvable order, and the player would then have to swap the numbers and the blank space. To better perform various actions on, and/or store information about, each tile in the 15 game (and other games), it'd be better to create a tile object as opposed to using the raw values seen here. For the sake of understanding what a matrix or 2D array is, simply take note of how the array is surrounded by doubly encapsulated brackets [[]]. We will later use one of our example games, SwiftSweeper, to better understand how puzzle games use 2D arrays of objects to create a full game.

Here are ways to declare blank 2D arrays with strict types:

var twoDTileArray : [[Tiles]] = [] //blank 2D array of type Tiles
var anotherArray = Array<Array<Tile>>() //same array, using Generics

The variable twoDTileArray uses the double brackets [[Tiles]] to declare it as a blank 2D array, or matrix, for the made-up type Tiles. The variable anotherArray is a rather oddly declared array that uses angle bracket characters <> for enclosures. It utilizes what's known as Generics. Generics is a rather advanced topic that we will touch on more later; they allow for very flexible functionality among a wide array of data types and classes. For the moment we can think of them as a catch-all way of working with objects. To fill in the data for either version of this array, we would then use for-loops, as sketched below. More on loops and iterations later in the article!
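As a hedged illustration of that filling step, here is one way nested for-loops could build a 4x4 grid of plain integers; the same pattern would apply to a grid of tile objects:

var grid: [[Int]] = []
for row in 0..<4 {
    var currentRow: [Int] = []
    for column in 0..<4 {
        currentRow.append(row * 4 + column + 1) // fills 1 through 16, left to right
    }
    grid.append(currentRow)
}
// grid is now [[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]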
Sets

This is how we would make a set of various game items in Swift:

var keyItems = Set([Dungeon_Prize, Holy_Armor, Boss_Key, "A"])

This set, keyItems, holds various objects and a character, "A". Unlike an Array, a Set is not ordered and contains only unique items. So, unlike with stageNames, attempting to get keyItems[1] would return an error, and the second item stored wouldn't necessarily be the Holy_Armor object, as the placement of objects is internally random in a set. The advantage Sets have over Arrays is that Sets are great at checking for duplicated objects and for searching for specific content in the collection overall. Sets make use of hashing to pinpoint items in the collection, so checking for items in a Set's contents can be much faster than in an array. In game development, a game's key items, which the player may only get once and should never have duplicates of, could work great as a Set. Calling keyItems.contains(Boss_Key) returns the Boolean value true in this case.

Sets were added in Swift 1.2 / Xcode 6.3. Their class is represented by the generic type Set<T>, where T is the class type of the collection. In other words, the set Set([45, 66, 1233, 234]) would be of the type Set<Int>, and our example here would be a Set<NSObject> instance due to it holding a collection of various data types. We will discuss more on Generics and class hierarchy later in this article.

Dictionaries

A Dictionary can be represented this way in Swift:

var playerInventory: [Int : String] = [1 : "Buster Sword", 43 : "Potion", 22 : "StrengthBooster"]

Dictionaries use a key : value association, so playerInventory[22] returns the value StrengthBooster based on the key 22. Both the key and the value can be of almost any class type*. In addition to the inventory example given, we can have code such as the following:

var stageReward: [Int : GameItem] = [:] //blank initialization

//use of the Dictionary at the end of a current stage
stageReward = [currentStage.score : currentStage.rewardItem]

*The keys of a Dictionary, though rather flexible in Swift, do have a limitation: the key must conform to what's known as the Hashable protocol. Basic data types like Int and String already have this functionality, so if you make your own classes or data structures that are to be used as Dictionary keys—say, mapping player actions to player input—this protocol must be adopted first, as sketched below.

Dictionaries are like Sets in that they are unordered, but with the additional layer of having a key and a value associated with their content instead of just the hashed key. As with Sets, Dictionaries are great for quick insertion and retrieval of specific data. In iOS apps and in web applications, Dictionaries are what's used to parse and select items from JSON (JavaScript Object Notation) data. In the realm of game development, Dictionaries using JSON, or Apple's internal data class NSUserDefaults, can be used to save and load game data, set up game configurations, or access specific members of a game's API.

For example, here's one way to save a player's high score in an iOS game using Swift:

let newBestScore : Void = NSUserDefaults.standardUserDefaults().setInteger(bestScore, forKey: "bestScore")

This code comes directly from a published Swift-developed game called PikiPop, which we will use from time to time to show code used in actual game applications.

Again, note that Dictionaries are unordered, but Swift has ways to iterate or search through an entire Dictionary.
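Here is a hedged sketch of that Hashable requirement; PlayerInput and its keyCode property are made up for illustration, and the conformance is written in the Swift 1.2-era shape used elsewhere in this article (newer Swift versions can synthesize the conformance or use hash(into:) instead):

struct PlayerInput: Hashable {
    let keyCode: Int
    var hashValue: Int { return keyCode.hashValue } //required by Hashable in this era
}

func == (lhs: PlayerInput, rhs: PlayerInput) -> Bool { //Hashable also requires Equatable
    return lhs.keyCode == rhs.keyCode
}

//PlayerInput can now be used as a Dictionary key, mapping input to actions
var playerActions: [PlayerInput : String] = [PlayerInput(keyCode: 32) : "Jump",
                                             PlayerInput(keyCode: 90) : "Attack"]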
Mutable or immutable collections

One rather important discussion we have left out so far is how to subtract from, edit, or add to Arrays, Sets, and Dictionaries. Before we do that, we should understand the concept of mutable and immutable data and collections. A mutable collection is simply data that can be changed, added to, or subtracted from, while an immutable collection cannot be changed, added to, or subtracted from.

To work with mutable and immutable collections efficiently in Objective-C, we had to explicitly state the mutability of the collection beforehand. For example, an array of the type NSArray in Objective-C is always immutable. There are methods we can call on NSArray that would edit the collection, but behind the scenes this would be creating brand new NSArrays, which would be rather inefficient if done often over the life of our game. Objective-C solved this issue with the class type NSMutableArray.

Thanks to the flexibility of Swift's type inference, we already know how to make a collection mutable or immutable! The concepts of constants and variables have us covered when it comes to data mutability in Swift. Using the keyword let when creating a collection will make that collection immutable, while using var will initialize it as a mutable collection.

//mutable Array
var unlockedLevels : [Int] = [1, 2, 5, 8]

//immutable Dictionary
let playersForThisRound : [PlayerNumber : PlayerUserName] = [453 : "userName3344xx5", 233 : "princeTrunks", 6567 : "noScopeMan98", 211 : "egoDino"]

The Array of Int, unlockedLevels, can be edited simply because it's a variable. The immutable Dictionary playersForThisRound can't be changed since it's already been declared as a constant—no additional layers of ambiguity concerning additional class types.

Editing or accessing collection data

As long as a collection type is a variable, declared with the var keyword, we can make various edits to its data. Let's go back to our unlockedLevels array. Many games have the functionality of unlocking levels as the player progresses. Say the player reached the high score needed to unlock the previously locked level 3 (as 3 isn't a member of the array). We can add 3 to the array using the append function:

unlockedLevels.append(3)

Another neat attribute of Swift is that we can add data to an array using the += assignment operator:

unlockedLevels += [3]

Doing it this way, however, will simply add 3 to the end of the array. So our previous array of [1, 2, 5, 8] is now [1, 2, 5, 8, 3]. This probably isn't a desirable order, so to insert the number 3 in the third spot, unlockedLevels[2], we can use the following method:

unlockedLevels.insert(3, atIndex: 2)

Now our array of unlocked levels is ordered as [1, 2, 3, 5, 8]. This assumes, though, that we know the members of the array before 3 are already sorted. Swift provides various sorting functions that can assist in keeping an array sorted; we will leave the details of sorting to our discussion of loops and control flow later in this article, but there is a short sketch below.

Removing items from an array is just as simple. Let's use our unlockedLevels array again. Imagine our game has an overworld for the player to travel to and from, and the player just unlocked a secret that triggered an event which blocked off access to level 1. Level 1 would now have to be removed from the unlocked levels. We can do it like this:

unlockedLevels.removeAtIndex(0) // array is now [2, 3, 5, 8]

Alternatively, imagine the player lost all of their lives and got a Game Over. A penalty for that could be to lock up the furthest level. Though probably a rather infuriating design choice, and knowing that level 8 is the furthest level in our array, we can remove it using the .removeLast() function of Array types:

unlockedLevels.removeLast() // array is now [2,3,5]

This assumes we know the exact order of the collection. Sets or Dictionaries might be better at controlling certain aspects of your game.
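As a hedged aside on those sorting functions—written against the Swift 1.2-era global sorted() used elsewhere in this article (newer Swift versions use the sorted()/sort() methods instead)—starting again from an unlockedLevels of [1, 2, 5, 8]:

unlockedLevels += [3]                                 // appended at the end: [1, 2, 5, 8, 3]
unlockedLevels = sorted(unlockedLevels)               // back in ascending order: [1, 2, 3, 5, 8]
let hardestFirst = sorted(unlockedLevels) { $0 > $1 } // a new array, descending: [8, 5, 3, 2, 1]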
Here are some ways to edit a Set or a Dictionary, as a quick guide.

Set

inventory.insert("Power Ring") //.insert() adds items to a set
inventory.remove("Magic Potion") //.remove() removes a specific item
inventory.count //counts # of items in the Set
inventory.union(EnemyLoot) //combines two Sets
inventory.removeAll() //removes everything from the Set
inventory.isEmpty //returns true

Dictionary

var inventory = [Float : String]() //creates a mutable dictionary

/* one way to set an equipped weapon in a game, where 1.0 could represent
   the first "item slot" acting as a placeholder for the player's "current weapon" */
inventory.updateValue("Broadsword", forKey: 1.0)

//removes an item from a Dictionary based on its key
inventory.removeValueForKey(1.0)

inventory.count //counts items in the Dictionary
inventory.removeAll(keepCapacity: false) //deletes the Dictionary's contents
inventory.isEmpty //returns true

//creates an array of the Dictionary's values
let inventoryNames = [String](inventory.values)

//creates an array of the Dictionary's keys
let inventoryKeys = [Float](inventory.keys)

Iterating through collection types

We can't discuss collection types without mentioning how to iterate through them in bulk. Here are some ways we'd iterate through an Array, Set, or Dictionary in Swift:

//(a) outputs every item in the entire collection
//works for Arrays, Sets, and Dictionaries, but the output will vary
for item in inventory {
    print(item)
}

//(b) outputs a sorted item list using Swift's sorted() function
//works for Sets
for item in sorted(inventory) {
    print("\(item)")
}

//(c) outputs every item as well as its current index
//works for Arrays, Sets, and Dictionaries
for (index, value) in enumerate(inventory) {
    print("Item \(index + 1): \(value)")
}

//(d) iterates through the keys of a Dictionary
for itemCode in inventory.keys {
    print("Item code: \(itemCode)")
}

//(e) iterates through the values of a Dictionary
for itemName in inventory.values {
    print("Item name: \(itemName)")
}

As stated previously, this is done with what's known as a for-loop; in these examples we see how Swift utilizes the for-in variation using the in keyword. In all of these examples the code repeats until it reaches the end of the collection. In example (c) we also see the use of the Swift function enumerate(). This function returns a compound value, (index, value), for each item. This compound value is known as a tuple, and Swift's use of tuples makes for a wide variety of functionality in functions and loops as well as code blocks. We will delve more into tuples, loops, and blocks later on.
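Those same tuples also let us walk a Dictionary's key/value pairs directly, a pattern the list above doesn't show. A hedged sketch using the inventory dictionary from the quick guide:

for (itemSlot, itemName) in inventory {
    print("Slot \(itemSlot) holds \(itemName)")
}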
Comparing Objective-C and Swift

Here's a quick review of our Swift code alongside the Objective-C equivalent.

Objective-C

An example in Objective-C is as follows:

const int MAX_ENEMIES = 10; //constant
float playerPower = 1.3; //variable

//Array of NSStrings
NSArray * stageNames = @[@"Downtown Tokyo", @"Heaven Valley", @"Nether"];

//Set of various NSObjects
NSSet *items = [NSSet setWithObjects: Weapons, Armor, HealingItems, @"A", nil];

//Dictionary with an Int:String key:value
NSDictionary *inventory = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:1], @"Buster Sword",
    [NSNumber numberWithInt:43], @"Potion",
    [NSNumber numberWithInt:22], @"Strength",
    nil];

Swift

The same examples in Swift are as follows:

let MAX_ENEMIES = 10 //constant
var playerPower = 1.3 //variable

//Array of Strings
let stageNames : [String] = ["Downtown Tokyo","Heaven Valley","Nether"]

//Set of various NSObjects
var items = Set([Weapons, Armor, HealingItems, "A"])

//Dictionary with an Int:String key:value
var playerInventory: [Int : String] = [1 : "Buster Sword", 43 : "Potion", 22 : "StrengthBooster"]

In the preceding code we see examples of variables, constants, Arrays, Sets, and Dictionaries, first in Objective-C syntax and then in the equivalent Swift declarations. We can see from this comparison how compact Swift is compared to Objective-C.

Characters and strings

For some time in this article we've been mentioning Strings. Strings are also a collection data type, but a specially handled collection of Characters, of the class type String. Swift is Unicode-compliant, so we can have Strings like this:

let gameOverText = "Game Over!"

We can also have strings with emoji characters, like this:

let cardSuits = "♠ ♥ ♣ ♦"

What we did just now was create what's known as a string literal. A string literal is when we explicitly define a String between two quotes, "". We can create empty String variables for later use in our games as follows:

var emptyString = "" // empty string literal
var anotherEmptyString = String() // using type initializer

Both are valid ways to create an empty String, "".

String interpolation

We can also create a string from a mixture of other data types, which is known as string interpolation. String interpolation is rather common in game development, debugging, and string use in general. The most notable examples are displaying the player's score and lives. This is how one of our example games, PikiPop, uses string interpolation to display current player stats:

//displays the player's current lives
var livesLabel = "x \(currentScene.player!.lives)"

//displays the player's current score
var scoreText = "Score: \(score)"

Take note of the "\(variable_name)" formatting. We've actually seen this before in our past code snippets; in the various print() outputs, we used it to display the value of the variable or collection we wanted information on. In Swift, the way to output the value of a data type inside a String is to use this formatting. For those of us who came from Objective-C, it's the equivalent of this:

NSString *livesLabel = @"Lives: ";
int lives = 3;
NSString *livesText = [NSString stringWithFormat:@" %@ (%d days ago)", livesLabel, lives];

Notice how Swift makes string interpolation much cleaner and easier to read than its Objective-C predecessor.

Mutating strings

There are various ways to change strings. We can add to a string just the way we did while working with collection objects.
Here's a basic example: var gameText = "The player enters the stage" gameText += " and quickly lost due to not leveling up" /* gameText now says "The player enters the stage and quickly lost due to not leveling up" */ Since Strings are essentially arrays of characters, we can, like arrays, use the += assignment operator to add to the previous String. Also, akin to arrays, we can use the append() function to add a character to the end of a string: let exclamationMark: Character = "!" gameText.append(exclamationMark) /*gameText now says "The player enters the stage and quickly lost due to not leveling up!"*/ Here's how we iterate through the Characters in a string in Swift: for character in "Start!" { print(character) } //outputs: //S //t //a //r //t //! Notice how again we use the for-in loop, and we even have the flexibility of using a string literal as the thing the loop iterates through, if we'd like. String indices Another similarity between Arrays and Strings is the fact that a String's individual characters can be located via indices. Unlike Arrays, however, since a character can be a varying size of data, broken into 21-bit numbers known as Unicode scalars, characters cannot be located in Swift with Int type index values. Instead, we can use the .startIndex and .endIndex properties of a String and move one place ahead or one place behind the index with the .successor() and .predecessor() functions, respectively, to retrieve the needed character or characters of a String. Here are some examples that use these properties and functions on our previous gameText String: gameText[gameText.startIndex] // = T gameText[gameText.startIndex.successor()] // = h gameText[gameText.endIndex.predecessor()] // = ! Note that endIndex refers to the position one past the last character, so subscripting it directly would trigger a runtime error; to reach the final character we step back with predecessor(). There are many ways to manipulate, mix, remove, and retrieve various aspects of a String and Characters. For more information, be sure to check out the official Swift documentation on Characters and Strings here: https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/StringsAndCharacters.html. Summary There's much more to the Swift programming language than we can fit here. Throughout the course of this book we will throw in a few extra tidbits and nuances about Swift as they become relevant to our upcoming game programming needs. If you wish to become more versed in the Swift programming language, Apple actually provides a wonderful tool for us in what's known as a Playground. Playgrounds were introduced with the Swift programming language at WWDC14 in June of 2014 and allow us to test various code output and syntax without having to create a project, build, run, and repeat again when in many cases we simply needed to tweak a few variables and function loop iterations. There are a number of resources to check out on the official Swift developer page (https://developer.apple.com/swift/resources/). Two highly recommended Playgrounds to check out are as follows: Guided Tour Playground (https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/GuidedTour.playground.zip) This Playground covers many of the topics we mentioned in this article and more, from Hello World all the way to Generics. The second playground to test out is the Balloons Playground (https://developer.apple.com/swift/blog/downloads/Balloons.zip). The Balloons Playground was the keynote Playgrounds demonstration from WWDC14 and shows off many of the features Playgrounds have to offer, particularly for making and testing games.
Sometimes the best way to learn a programming language is to test live code; and that's exactly what Playgrounds allow us to do.
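To tie the pieces from this article together, here is a small, self-contained snippet you might paste into a Playground; the inventory items and stat values are made up purely for illustration:

// a hypothetical loadout to experiment with in a Playground
var loadout = ["Buster Sword", "Potion", "StrengthBooster"]
loadout.append("Phoenix Down") //Arrays grow with append()

for (index, item) in enumerate(loadout) {
    print("Slot \(index + 1): \(item)") //string interpolation at work
}

let stats = (hp: 100, mp: 25) //a simple named tuple
print("HP: \(stats.hp), MP: \(stats.mp)")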

Creating Interactive Spreadsheets using Tables and Slicers

Packt
06 Oct 2015
10 min read
In this article by Hernan D Rojas author of the book Data Analysis and Business Modeling with Excel 2013 introduces additional materials to the advanced Excel developer in the presentation stage of the data analysis life cycle. What you will learn in this article is how to leverage Excel's new features to add interactivity to your spreadsheets. (For more resources related to this topic, see here.) What are slicers? Slicers are essentially buttons that automatically filter your data. Excel has always been able to filter data, but slicers are more practical and visually appealing. Let's compare the two in the following steps: First, fire up Excel 2013, and create a new spreadsheet. Manually enter the data, as shown in the following figure: Highlight cells A1 through E11, and press Ctrl + T to convert our data to an Excel table. Converting your data to a table is the first step that you need to take in order to introduce slicers in your spreadsheet. Let's filter our data using the default filtering capabilities that we are already familiar with. Filter the Type column and only select the rows that have the value equal to SUV, as shown in the following figure. Click on the OK button to apply the filter to the table. You will now be left with four rows that have the Type column equal to SUV. Using a typical Excel filter, we were able to filter our data and only show all of the SUV cars. We can then continue to filter by other columns, such as MPG (miles per gallon) and Price. How can we accomplish the same results using slicers? Continue reading this article to find this out. How to create slicers? In this article, we will be going through simple but powerful steps that are required to build slicers. After we create our first slicer, make sure that you compare and contrast the old way of filtering versus the new way of filtering data. Remove the filter that we just applied to our table by clicking on the option named Clear Filter From "Type", as shown in the following figure: With your Excel table selected, click on the TABLE TOOLS tab. Click on the Insert Slicer button. In the Insert Slicers dialog box, select the Type checkbox, and click on the OK button, as shown in the following screenshot: You should now have a slicer that looks similar to the one in the following figure. Notice that you can resize and move the slicer anywhere you want in the spreadsheet. Click on the Sedan filter in the slicer that we build in the previous step. Wow! The data is filtered and only the rows where the Type column is equal to Sedan is shown in the results. Click on the Sport filter and see what happens. The data is now filtered where the Type column is equal to Sport. Notice that the previous filter of Sedan was removed as soon as we clicked on the Sport filter. What if we want to filter the data by both Sport and Sedan? We can just highlight both the filters with our mouse, or click on Sedan, press Ctrl, and then, click on the Sport filter. The end result will look like this: To clear the filter, just click on the Clear Filter button. Do you see the advantage of slicers over filters? Yes, of course, they are simply better. Filtering between Sedan, Sport, or SUV is very easy and convenient. It will certainly take less key strokes and the feedback is instant. Think about the end users interacting with your spreadsheet. At a touch of a button, they can answer questions that arise in their heads. This is what you call an interactive spreadsheet or an interactive dashboard. 
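By the way, the slicer we just added through the ribbon can also be created from VBA. The following is only a rough sketch, and it assumes the table we created earlier is named Table1 and sits on the active sheet; adjust the table name, slicer name, and position arguments to match your own workbook:

Sub Add_Type_Slicer()
    ' Build a slicer cache on the "Type" column of the assumed table Table1
    Dim sc As SlicerCache
    Set sc = ActiveWorkbook.SlicerCaches.Add(ActiveSheet.ListObjects("Table1"), "Type")
    ' Drop the slicer onto the active sheet (name, caption, top, left, width, height)
    sc.Slicers.Add ActiveSheet, , "TypeSlicer", "Type", 150, 400, 120, 150
End Sub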
Styling slicers There are not many options to style slicers but Excel does give you a decent amount of color schemes that you can experiment with: With the Type slicer selected, navigate to the SLICER TOOLS tab, as shown in the following figure: Click on the various slicer styles available to get a feel of what Excel offers. Adding multiple slicers You are able to add multiple slicers and multiple charts in one spreadsheet. Why would we do this? Well, this is the beginning of a dashboard creation. Let's expand on the example we have just been working on, and see how we can turn raw data into an interactive dashboard: Let's start with creating slicers for # of Passengers and MPG, as shown in the following figure: Rename Sheet1 as Data, and create a new sheet called Dashboard, as shown here: Move the three slicers by cutting and pasting them from the Data sheet to the Dashboard sheet. Create a line chart using the columns Company and MPG, as shown in the following figure: Create a bar chart using the columns Type and MPG. Create another bar chart with the columns company and # of Passengers, as shown in the following figure. These types of charts are technically called column charts, but you can get away by calling them bar charts. Now, move the three charts from the Data tab to the Dashboard tab. Right-click on the bar chart, and select the Move Chart… option. In the Move Chart dialog box, change the Object in: parameter from Data to Dashboard, and then click on the OK button. Move the other two charts to the Dashboard tab so that there are no more charts in the Data tab. Rearrange the charts and slicers so that they look as closely as possible to the ones in the following figure. As you can see that this tab is starting to look like a dashboard. The Type slicer will look better if Sedan, Sport, and SUV are laid out horizontally. Select the Type slicer, and click on the SLICER TOOLS menu option. Change the Columns parameter from 1 to 3, as shown in the following figure. This is how we are able to change the layout or shape of the slicer. Resize the Type slicer so that it looks like the one in the following figure: Clearing filters You can click on one or more filters in the dashboard that we just created. Very cool! Every time we select a filter, all of the three charts that we created get updated. This again is called adding interactivity to your spreadsheets. This allows the end users of your dashboard to interact with your data and perform their own analysis. If you notice, there is really a no good way of removing multiple filters at once. For example, if you select Sedans that have a MPG of greater or equal to 30, how would you remove all of the filters? You would have to clear the filters from the Type slicer and then from the MPG slicer. This can be a little tedious to your end user, and you will want to avoid this at any cost. The next steps will show you how to create a button using VBA that will filter all of our data in a flash: Press Alt + F11, and create a sub procedure called Clear_Slicer, as shown in the following figure. This code will basically find all of the filters that you have selected and then manually clears them for you one at a time. The next step is to bind this code to a button: Sub Clear_Slicer() ' Declare Variables Dim cache As SlicerCache ' Loop through each filter For Each cache In ActiveWorkbook.SlicerCaches ' clear filter cache.ClearManualFilter Next cache End Sub Select the DEVELOPER tab, and click on the Insert button. 
In the pop-up menu called Form Controls, select the Button option. Now, click anywhere on the sheet, and you will get a dialog box that looks like the following figure. This is where we are going to assign a macro to the button. This means that whenever you click on the button we are creating, Excel will run the macro of our choice. Since we have already created a macro called Clear_Slicer, it will make sense to select this macro, and then click on the OK button. Change the text of the button to Clear All Filters and resize it so that it looks like this: Adjust the properties of the button by right-clicking on the button and selecting the Format Control… option. Here, you can change the font size and the color of your button label. Now, select a bunch of filters, and click on our new shiny button. Yes, that was pretty cool. The most important part is that it is now even easier to "reset" your dashboard and start a brand new analysis. What do I mean by start a brand new analysis? In general, when a user initially starts using your dashboard, he/she will click on the filters aimlessly. The users do this just to figure out how to mechanically use the dashboard. Then, after they get the hang of it, they want to start with a clean slate and perform some data analysis. If we did not have the Clear All Filters button, the users would have to figure out how they would clear every slicer one at a time to start over. The worst case scenario is when the user does not realize when the filters are turned on and when they are turned off. Now, do not laugh at this situation, or assume that your end user is not as smart as you are. This just means that you need to lower the learning curve of your dashboard and make it easy to use. With the addition of the clear button, the end user can think of a question, use the slicers to answer it, click on the clear button, and start the process all over again. These little details are what that is going to separate you from the average Excel developer. Summary The aim of this article was to give you ideas and tools to present your data artistically. Whether you like it or not, sometimes, a better looking analysis will trump the better but less attractive one. Excel gives you the tools to not be on the short end of the stick but to always be able to present visually stunning analysis. You now know have your Excel slicers, and you learned how to bind them to your data. Users of your spreadsheet can now slice and dice your data to answer multiple questions. Executives like flashy visualizations, and when you combine them with a strong analysis, you have a very powerful combination. In this article, we also went through a variety of strategies to customize slicers and chart elements. These little changes made to your dashboard will make them standout and help you get your message across. Excel as always has been an invaluable tool that gives you all of the tools necessary to overcome any data challenges you might come across. As I tell all my students, the key to become better is simply to practice, practice, and practice. Resources for Article: Further resources on this subject: Introduction to Stata and Data Analytics [article] Getting Started with Apache Spark DataFrames [article] Big Data Analysis (R and Hadoop) [article]

Hand Gesture Recognition Using a Kinect Depth Sensor

Packt
06 Oct 2015
26 min read
In this article by Michael Beyeler author of the book OpenCV with Python Blueprints is to develop an app that detects and tracks simple hand gestures in real time using the output of a depth sensor, such as that of a Microsoft Kinect 3D sensor or an Asus Xtion. The app will analyze each captured frame to perform the following tasks: Hand region segmentation: The user's hand region will be extracted in each frame by analyzing the depth map output of the Kinect sensor, which is done by thresholding, applying some morphological operations, and finding connected components Hand shape analysis: The shape of the segmented hand region will be analyzed by determining contours, convex hull, and convexity defects Hand gesture recognition: The number of extended fingers will be determined based on the hand contour's convexity defects, and the gesture will be classified accordingly (with no extended finger corresponding to a fist, and five extended fingers corresponding to an open hand) Gesture recognition is an ever popular topic in computer science. This is because it not only enables humans to communicate with machines (human-machine interaction or HMI), but also constitutes the first step for machines to begin understanding the human body language. With affordable sensors, such as Microsoft Kinect or Asus Xtion, and open source software such as OpenKinect and OpenNI, it has never been easy to get started in the field yourself. So what shall we do with all this technology? The beauty of the algorithm that we are going to implement in this article is that it works well for a number of hand gestures, yet is simple enough to run in real time on a generic laptop. And if we want, we can easily extend it to incorporate more complicated hand pose estimations. The end product looks like this: No matter how many fingers of my left hand I extend, the algorithm correctly segments the hand region (white), draws the corresponding convex hull (the green line surrounding the hand), finds all convexity defects that belong to the spaces between fingers (large green points) while ignoring others (small red points), and infers the correct number of extended fingers (the number in the bottom-right corner), even for a fist. This article assumes that you have a Microsoft Kinect 3D sensor installed. Alternatively, you may install Asus Xtion or any other depth sensor for which OpenCV has built-in support. First, install OpenKinect and libfreenect from http://www.openkinect.org/wiki/Getting_Started. Then, you need to build (or rebuild) OpenCV with OpenNI support. The GUI used in this article will again be designed with wxPython, which can be obtained from http://www.wxpython.org/download.php. Planning the app The final app will consist of the following modules and scripts: gestures: A module that consists of an algorithm for recognizing hand gestures. We separate this algorithm from the rest of the application so that it can be used as a standalone module without the need for a GUI. gestures.HandGestureRecognition: A class that implements the entire process flow of hand gesture recognition. It accepts a single-channel depth image (acquired from the Kinect depth sensor) and returns an annotated RGB color image with an estimated number of extended fingers. gui: A module that provides a wxPython GUI application to access the capture device and display the video feed. In order to have it access the Kinect depth sensor instead of a generic camera, we will have to extend some of the base class functionality. 
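To make the intended division of labor concrete, here is a minimal usage sketch of the gestures module described above; it assumes that depth_frame already holds a single-channel, 8-bit depth image (the variable name is a placeholder, not code from the book):

# minimal sketch: feeding one depth frame to the recognizer
from gestures import HandGestureRecognition

recognizer = HandGestureRecognition()
num_fingers, img_draw = recognizer.recognize(depth_frame)
print("extended fingers: %d" % num_fingers)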
gui.BaseLayout: A generic layout from which more complicated layouts can be built. Setting up the app Before we can get to the nitty-grittyof our gesture recognition algorithm, we need to make sure that we can access the Kinect sensor and display a stream of depth frames in a simple GUI. Accessing the Kinect 3D sensor Accessing Microsoft Kinect from within OpenCV is not much different from accessing a computer's webcam or camera device. The easiest way to integrate a Kinect sensor with OpenCV is by using an OpenKinect module called freenect. For installation instructions, see the preceding information box. The following code snippet grants access to the sensor using cv2.VideoCapture: import cv2 import freenect device = cv2.cv.CV_CAP_OPENNI capture = cv2.VideoCapture(device) On some platforms, the first call to cv2.VideoCapture fails to open a capture channel. In this case, we provide a workaround by opening the channel ourselves: if not(capture.isOpened(device)): capture.open(device) If you want to connect to your Asus Xtion, the device variable should be assigned the cv2.cv.CV_CAP_OPENNI_ASUS value instead. In order to give our app a fair chance to run in real time, we will limit the frame size to 640 x 480 pixels: capture.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640) capture.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480) If you are using OpenCV 3, the constants you are looking for might be called cv3.CAP_PROP_FRAME_WIDTH and cv3.CAP_PROP_FRAME_HEIGHT. The read() method of cv2.VideoCapture is inappropriate when we need to synchronize a set of cameras or a multihead camera, such as a Kinect. In this case, we should use the grab() and retrieve() methods instead. An even easier way when working with OpenKinect is to use the sync_get_depth() and sync_get_video()methods. For the purpose of this article, we will need only the Kinect's depth map, which is a single-channel (grayscale) image in which each pixel value is the estimated distance from the camera to a particular surface in the visual scene. The latest frame can be grabbed via this code: depth, timestamp = freenect.sync_get_depth() The preceding code returns both the depth map and a timestamp. We will ignore the latter for now. By default, the map is in 11-bit format, which is inadequate to be visualized with cv2.imshow right away. Thus, it is a good idea to convert the image to 8-bit precision first. In order to reduce the range of depth values in the frame, we will clip the maximal distance to a value of 1,023 (or 2**10-1). 
This will get rid of values that correspond either to noise or distances that are far too large to be of interest to us: np.clip(depth, 0, 2**10-1, depth) depth >>= 2 Then, we convert the image into 8-bit format and display it: depth = depth.astype(np.uint8) cv2.imshow("depth", depth) Running the app In order to run our app, we will need to execute a main function routine that accesses the Kinect, generates the GUI, and executes the main loop of the app: import numpy as np import wx import cv2 import freenect from gui import BaseLayout from gestures import HandGestureRecognition def main(): device = cv2.cv.CV_CAP_OPENNI capture = cv2.VideoCapture() if not(capture.isOpened()): capture.open(device) capture.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640) capture.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480) We will design a suitable layout (KinectLayout) for the current project: # start graphical user interface app = wx.App() layout = KinectLayout(None, -1, 'Kinect Hand Gesture Recognition', capture) layout.Show(True) app.MainLoop() The Kinect GUI The layout chosen for the current project (KinectLayout) is as plain as it gets. It should simply display the live stream of the Kinect depth sensor at a comfortable frame rate of 10 frames per second. Therefore, there is no need to further customize BaseLayout: class KinectLayout(BaseLayout): def _create_custom_layout(self): pass The only parameter that needs to be initialized this time is the recognition class. This will be useful in just a moment: def _init_custom_layout(self): self.hand_gestures = HandGestureRecognition() Instead of reading a regular camera frame, we need to acquire a depth frame via the freenect method sync_get_depth(). This can be achieved by overriding the following method: def _acquire_frame(self): As mentioned earlier, by default, this function returns a single-channel depth image with 11-bit precision and a timestamp. However, we are not interested in the timestamp, and we simply pass on the frame if the acquisition was successful: frame, _ = freenect.sync_get_depth() # return success if frame size is valid if frame is not None: return (True, frame) else: return (False, frame) The rest of the visualization pipeline is handled by the BaseLayout class. We only need to make sure that we provide a _process_frame method. This method accepts a depth image with 11-bit precision, processes it, and returns an annotated 8-bit RGB color image. Conversion to a regular grayscale image is the same as mentioned in the previous subsection: def _process_frame(self, frame): # clip max depth to 1023, convert to 8-bit grayscale np.clip(frame, 0, 2**10 - 1, frame) frame >>= 2 frame = frame.astype(np.uint8) The resulting grayscale image can then be passed to the hand gesture recognizer, which will return the estimated number of extended fingers (num_fingers) and the annotated RGB color image mentioned earlier (img_draw): num_fingers, img_draw = self.hand_gestures.recognize(frame) In order to simplify the segmentation task of the HandGestureRecognition class, we will instruct the user to place their hand in the center of the screen.
To provide a visual aid for this, let's draw a rectangle around the image center and highlight the center pixel of the image in orange: height, width = frame.shape[:2] cv2.circle(img_draw, (width/2, height/2), 3, [255, 102, 0], 2) cv2.rectangle(img_draw, (width/3, height/3), (width*2/3, height*2/3), [255, 102, 0], 2) In addition, we print num_fingers on the screen: cv2.putText(img_draw, str(num_fingers), (30, 30),cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255)) return img_draw Tracking hand gestures in real time The bulk of the work is done by the HandGestureRecognition class, especially by its recognize method. This class starts off with a few parameter initializations, which will be explained and used later: class HandGestureRecognition: def __init__(self): # maximum depth deviation for a pixel to be considered # within range self.abs_depth_dev = 14 # cut-off angle (deg): everything below this is a convexity # point that belongs to two extended fingers self.thresh_deg = 80.0 The recognize method is where the real magic takes place. This method handles the entire process flow, from the raw grayscale image all the way to a recognized hand gesture. It implements the following procedure: It extracts the user's hand region by analyzing the depth map (img_gray) and returning a hand region mask (segment): def recognize(self, img_gray): segment = self._segment_arm(img_gray) It performs contour analysis on the hand region mask (segment). Then, it returns the largest contour area found in the image (contours) and any convexity defects (defects): [contours, defects] = self._find_hull_defects(segment) Based on the contours found and the convexity defects, it detects the number of extended fingers (num_fingers) in the image. Then, it annotates the output image (img_draw) with contours, defect points, and the number of extended fingers: img_draw = cv2.cvtColor(img_gray, cv2.COLOR_GRAY2RGB) [num_fingers, img_draw] = self._detect_num_fingers(contours, defects, img_draw) It returns the estimated number of extended fingers (num_fingers) as well as the annotated output image (img_draw): return (num_fingers, img_draw) Hand region segmentation The automatic detection of an arm, and later the hand region, could be designed to be arbitrarily complicated, maybe by combining information about the shape and color of an arm or hand. However, using a skin color as a determining feature to find hands in visual scenes might fail terribly in poor lighting conditions or when the user is wearing gloves. Instead, we choose to recognize the user's hand by its shape in the depth map. Allowing hands of all sorts to be present in any region of the image unnecessarily complicates the mission of this article, so we make two simplifying assumptions: We will instruct the user of our app to place their hand in front of the center of the screen, orienting their palm roughly parallel to the orientation of the Kinect sensor so that it is easier to identify the corresponding depth layer of the hand. We will also instruct the user to sit roughly 1 to 2 meters away from the Kinect, and to slightly extend their arm in front of their body so that the hand will end up in a slightly different depth layer than the arm. However, the algorithm will still work even if the full arm is visible. In this way, it will be relatively straightforward to segment the image based on the depth layer alone. Otherwise, we would have to come up with a hand detection algorithm first, which would unnecessarily complicate our mission. 
If you feel adventurous, feel free to do this on your own. Finding the most prominent depth of the image center region Once the hand is placed roughly in the center of the screen, we can start finding all image pixels that lie on the same depth plane as the hand. For this, we simply need to determine the most prominent depth value of the center region of the image. The simplest approach would be as follows: look only at the depth value of the center pixel: height, width = depth.shape center_pixel_depth = depth[height/2, width/2] Then, create a mask in which all pixels at a depth of center_pixel_depth are white and all others are black: import numpy as np depth_mask = np.where(depth == center_pixel_depth, 255, 0).astype(np.uint8) However, this approach will not be very robust, because chances are that: Your hand is not placed perfectly parallel to the Kinect sensor Your hand is not perfectly flat The Kinect sensor values are noisy Therefore, different regions of your hand will have slightly different depth values. The _segment_arm method takes a slightly better approach, that is, looking at a small neighborhood in the center of the image and determining the median (meaning the most prominent) depth value. First, we find the center (for example, 21 x 21 pixels) region of the image frame: def _segment_arm(self, frame): """ segments the arm region based on depth """ center_half = 10 # half-width of 21 is 21/2-1 lowerHeight = self.height/2 - center_half upperHeight = self.height/2 + center_half lowerWidth = self.width/2 - center_half upperWidth = self.width/2 + center_half center = frame[lowerHeight:upperHeight, lowerWidth:upperWidth] We can then reshape the depth values of this center region into a one-dimensional vector and determine the median depth value, med_val: med_val = np.median(center) We can now compare med_val with the depth value of all pixels in the image and create a mask in which all pixels whose depth values are within a particular range [med_val-self.abs_depth_dev, med_val+self.abs_depth_dev] are white and all other pixels are black. However, for reasons that will be clear in a moment, let's paint the pixels gray instead of white: frame = np.where(abs(frame - med_val) <= self.abs_depth_dev, 128, 0).astype(np.uint8) The result will look like this: Applying morphological closing to smoothen the segmentation mask A common problem with segmentation is that a hard threshold typically results in small imperfections (that is, holes, as in the preceding image) in the segmented region. These holes can be alleviated using morphological opening and closing. Opening removes small objects from the foreground (assuming that the objects are bright against a dark background), whereas closing removes small holes (dark regions). This means that we can get rid of the small black regions in our mask by applying morphological closing (dilation followed by erosion) with a small 3 x 3 pixel kernel: kernel = np.ones((3, 3), np.uint8) frame = cv2.morphologyEx(frame, cv2.MORPH_CLOSE, kernel) The result looks a lot smoother, as follows: Notice, however, that the mask still contains regions that do not belong to the hand or arm, such as what appears to be one of my knees on the left and some furniture on the right. These objects just happen to be on the same depth layer as my arm and hand. If possible, we could now combine the depth information with another descriptor, maybe a texture-based or skeleton-based hand classifier, that would weed out all non-skin regions.
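One quick aside on the morphology step above: closing is the right tool here because our problem is small holes inside the hand region. If the mask instead suffered from small isolated bright specks, the complementary operation, a morphological opening (erosion followed by dilation), would be the one to reach for; a minimal sketch using the same kernel:

# hypothetical variation: remove small bright specks instead of filling holes
kernel = np.ones((3, 3), np.uint8)
frame = cv2.morphologyEx(frame, cv2.MORPH_OPEN, kernel)

Either way, the unrelated objects sitting on the same depth layer remain, which is the problem tackled next.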
Finding connected components in a segmentation mask An easier approach is to realize that most of the times, hands are not connected to knees or furniture. We already know that the center region belongs to the hand, so we can simply apply cv2.floodfill to find all the connected image regions. Before we do this, we want to be absolutely certain that the seed point for the flood fill belongs to the right mask region. This can be achieved by assigning a grayscale value of 128 to the seed point. But we also want to make sure that the center pixel does not, by any coincidence, lie within a cavity that the morphological operation failed to close. So, let's set a small 7 x 7 pixel region with a grayscale value of 128 instead: small_kernel = 3 frame[self.height/2-small_kernel : self.height/2+small_kernel, self.width/2-small_kernel : self.width/2+small_kernel] = 128 Because flood filling (as well as morphological operations) is potentially dangerous, the Python version of later OpenCV versions requires specifying a mask that avoids flooding the entire image. This mask has to be 2 pixels wider and taller than the original image and has to be used in combination with the cv2.FLOODFILL_MASK_ONLY flag. It can be very helpful in constraining the flood filling to a small region of the image or a specific contour so that we need not connect two neighboring regions that should have never been connected in the first place. It's better to be safe than sorry, right? Ah, screw it! Today, we feel courageous! Let's make the mask entirely black: mask = np.zeros((self.height+2, self.width+2), np.uint8) Then we can apply the flood fill to the center pixel (seed point) and paint all the connected regions white: flood = frame.copy() cv2.floodFill(flood, mask, (self.width/2, self.height/2), 255, flags=4 | (255 << 8)) At this point, it should be clear why we decided to start with a gray mask earlier. We now have a mask that contains white regions (arm and hand), gray regions (neither arm nor hand but other things in the same depth plane), and black regions (all others). With this setup, it is easy to apply a simple binary threshold to highlight only the relevant regions of the pre-segmented depth plane: ret, flooded = cv2.threshold(flood, 129, 255, cv2.THRESH_BINARY) This is what the resulting mask looks like: The resulting segmentation mask can now be returned to the recognize method, where it will be used as an input to _find_hull_defects as well as a canvas for drawing the final output image (img_draw). Hand shape analysis Now that we (roughly) know where the hand is located, we aim to learn something about its shape. Determining the contour of the segmented hand region The first step involves determining the contour of the segmented hand region. Luckily, OpenCV comes with a pre-canned version of such an algorithm—cv2.findContours. This function acts on a binary image and returns a set of points that are believed to be part of the contour. Because there might be multiple contours present in the image, it is possible to retrieve an entire hierarchy of contours: def _find_hull_defects(self, segment): contours, hierarchy = cv2.findContours(segment, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) Furthermore, because we do not know which contour we are looking for, we have to make an assumption to clean up the contour result. 
Since it is possible that some small cavities are left over even after the morphological closing—but we are fairly certain that our mask contains only the segmented area of interest—we will assume that the largest contour found is the one that we are looking for. Thus, we simply traverse the list of contours, calculate the contour area (cv2.contourArea), and store only the largest one (max_contour): max_contour = max(contours, key=cv2.contourArea) Finding the convex hull of a contour area Once we have identified the largest contour in our mask, it is straightforward to compute the convex hull of the contour area. The convex hull is basically the envelope of the contour area. If you think of all the pixels that belong to the contour area as a set of nails sticking out of a board, then the convex hull is the shape formed by a tight rubber band that surrounds all the nails. We can get the convex hull directly from our largest contour (max_contour): hull = cv2.convexHull(max_contour, returnPoints=False) Because we now want to look at convexity deficits in this hull, we are instructed by the OpenCV documentation to set the returnPoints optional flag to False. The convex hull drawn in green around a segmented hand region looks like this: Finding convexity defects of a convex hull As is evident from the preceding screenshot, not all points on the convex hull belong to the segmented hand region. In fact, all the fingers and the wrist cause severe convexity defects, that is, points of the contour that are far away from the hull. We can find these defects by looking at both the largest contour (max_contour) and the corresponding convex hull (hull): defects = cv2.convexityDefects(max_contour, hull) The output of this function (defects) is a 4-tuple that contains start_index (the point of the contour where the defect begins), end_index (the point of the contour where the defect ends), farthest_pt_index (the farthest from the convex hull point within the defect), and fixpt_depth (distance between the farthest point and the convex hull). We will make use of this information in just a moment when we reason about fingers. But for now, our job is done. The extracted contour (max_contour) and convexity defects (defects) can be passed to recognize, where they will be used as inputs to _detect_num_fingers: return (cnt,defects) Hand gesture recognition What remains to be done is classifying the hand gesture based on the number of extended fingers. For example, if we find five extended fingers, we assume the hand to be open, whereas no extended fingers imply a fist. All that we are trying to do is count from zero to five and make the app recognize the corresponding number of fingers. This is actually trickier than it might seem at first. For example, people in Europe might count to three by extending their thumb, index finger, and middle finger. If you do that in the US, people there might get horrendously confused, because people do not tend to use their thumbs when signaling the number two. This might lead to frustration, especially in restaurants (trust me). If we could find a way to generalize these two scenarios—maybe by appropriately counting the number of extended fingers—we would have an algorithm that could teach simple hand gesture recognition to not only a machine but also (maybe) to an average waitress. As you might have guessed, the answer has to do with convexity defects. As mentioned earlier, extended fingers cause defects in the convex hull. 
However, the inverse is not true; that is, not all convexity defects are caused by fingers! There might be additional defects caused by the wrist as well as the overall orientation of the hand or the arm. How can we distinguish between these different causes for defects? Distinguishing between different causes for convexity defects The trick is to look at the angle between the farthest point from the convex hull point within the defect (farthest_pt_index) and the start and end points of the defect (start_index and end_index, respectively), as illustrated in the following screenshot: In this screenshot, the orange markers serve as a visual aid to center the hand in the middle of the screen, and the convex hull is outlined in green. Each red dot corresponds to a farthest from the convex hull point (farthest_pt_index) for every convexity defect detected. If we compare a typical angle that belongs to two extended fingers (such as θj) to an angle that is caused by general hand geometry (such as θi), we notice that the former is much smaller than the latter. This is obviously because humans can spread their finger only a little, thus creating a narrow angle made by the farthest defect point and the neighboring fingertips. Therefore, we can iterate over all convexity defects and compute the angle between the said points. For this, we will need a utility function that calculates the angle (in radians) between two arbitrary, list-like vectors, v1 and v2: def angle_rad(v1, v2): return np.arctan2(np.linalg.norm(np.cross(v1, v2)), np.dot(v1, v2)) This method uses the cross product to compute the angle, rather than the standard way. The standard way of calculating the angle between two vectors v1 and v2 is by calculating their dot product and dividing it by the norm of v1 and the norm of v2. However, this method has two imperfections: You have to manually avoid division by zero if either the norm of v1 or the norm of v2 is zero The method returns relatively inaccurate results for small angles Similarly, we provide a simple function to convert an angle from degrees to radians: def deg2rad(angle_deg): return angle_deg/180.0*np.pi Classifying hand gestures based on the number of extended fingers What remains to be done is actually classifying the hand gesture based on the number of extended fingers. The _detect_num_fingers method will take as input the detected contour (contours), the convexity defects (defects), and a canvas to draw on (img_draw): def _detect_num_fingers(self, contours, defects, img_draw): Based on these parameters, it will then determine the number of extended fingers. However, we first need to define a cut-off angle that can be used as a threshold to classify convexity defects as being caused by extended fingers or not. Except for the angle between the thumb and the index finger, it is rather hard to get anything close to 90 degrees, so anything close to that number should work. We do not want the cut-off angle to be too high, because that might lead to misclassifications: self.thresh_deg = 80.0 For simplicity, let's focus on the special cases first. If we do not find any convexity defects, it means that we possibly made a mistake during the convex hull calculation, or there are simply no extended fingers in the frame, so we return 0 as the number of detected fingers: if defects is None: return [0, img_draw] But we can take this idea even further. 
Due to the fact that arms are usually slimmer than hands or fists, we can assume that the hand geometry will always generate at least two convexity defects (which usually belong to the wrist). So if there are no additional defects, it implies that there are no extended fingers: if len(defects) <= 2: return [0, img_draw] Now that we have ruled out all special cases, we can begin counting real fingers. If there are a sufficient number of defects, we will find a defect between every pair of fingers. Thus, in order to get the number right (num_fingers), we should start counting at 1: num_fingers = 1 Then we can start iterating over all convexity defects. For each defect, we will extract the four elements and draw its hull for visualization purposes: for i in range(defects.shape[0]): # each defect point is a 4-tuple start_idx, end_idx, farthest_idx, _ = defects[i, 0] start = tuple(contours[start_idx][0]) end = tuple(contours[end_idx][0]) far = tuple(contours[farthest_idx][0]) # draw the hull cv2.line(img_draw, start, end, [0, 255, 0], 2) Then we will compute the angle between the two edges from far to start and from far to end. If the angle is smaller than self.thresh_deg degrees, it means that we are dealing with a defect that is most likely caused by two extended fingers. In this case, we want to increment the number of detected fingers (num_fingers), and we draw the point in green. Otherwise, we draw the point in red: # if angle is below a threshold, defect point belongs # to two extended fingers if angle_rad(np.subtract(start, far), np.subtract(end, far)) < deg2rad(self.thresh_deg): # increment number of fingers num_fingers = num_fingers + 1 # draw point as green cv2.circle(img_draw, far, 5, [0, 255, 0], -1) else: # draw point as red cv2.circle(img_draw, far, 5, [255, 0, 0], -1) After iterating over all convexity defects, we pass the number of detected fingers and the assembled output image to the recognize method: return (min(5, num_fingers), img_draw) This will make sure that we do not exceed the common number of fingers per hand. The result can be seen in the following screenshots: Interestingly, our app is able to detect the correct number of extended fingers in a variety of hand configurations. Defect points between extended fingers are easily classified as such by the algorithm, and others are successfully ignored. Summary This article showed a relatively simple and yet surprisingly robust way of recognizing a variety of hand gestures by counting the number of extended fingers. The algorithm first shows how a task-relevant region of the image can be segmented using depth information acquired from a Microsoft Kinect 3D Sensor, and how morphological operations can be used to clean up the segmentation result. By analyzing the shape of the segmented hand region, the algorithm comes up with a way to classify hand gestures based on the types of convexity defects found in the image. Once again, mastering our use of OpenCV to perform a desired task did not require us to produce a large amount of code. Instead, we were challenged to gain an important insight that made us use the built-in functionality of OpenCV in the most effective way possible. Gesture recognition is a popular but challenging field in computer science, with applications in a large number of areas, such as human-computer interaction, video surveillance, and even the video game industry.
You can now use your advanced understanding of segmentation and structure analysis to build your own state-of-the-art gesture recognition system. Resources for Article: Tracking Faces with Haar Cascades Our First Machine Learning Method - Linear Classification Solving problems with Python: Closest good restaurant

Creating Instagram Clone Layout using Ionic framework

Packt
06 Oct 2015
7 min read
In this article by Zainul Setyo Pamungkas, author of the book PhoneGap 4 Mobile Application Development Cookbook, we will see how Ionic framework is one of the most popular HTML5 framework for hybrid application development. Ionic framework provides native such as the UI component that user can use and customize. (For more resources related to this topic, see here.) In this article, we will create a clone of Instagram mobile app layout: First, we need to create new Ionic tabs application project named ionSnap: ionic start ionSnap tabs Change directory to ionSnap: cd ionSnap Then add device platforms to the project: ionic platform add ios ionic platform add android Let's change the tab name. Open www/templates/tabs.html and edit each title attribute of ion-tab: <ion-tabs class="tabs-icon-top tabs-color-active-positive"> <ion-tab title="Timeline" icon-off="ion-ios-pulse" icon-on="ion-ios-pulse-strong" href="#/tab/dash"> <ion-nav-view name="tab-dash"></ion-nav-view> </ion-tab> <ion-tab title="Explore" icon-off="ion-ios-search" icon-on="ion-ios-search" href="#/tab/chats"> <ion-nav-view name="tab-chats"></ion-nav-view> </ion-tab> <ion-tab title="Profile" icon-off="ion-ios-person-outline" icon-on="ion-person" href="#/tab/account"> <ion-nav-view name="tab-account"></ion-nav-view> </ion-tab> </ion-tabs> We have to clean our application to start a new tab based application. Open www/templates/tab-dash.html and clean the content so we have following code: <ion-view view-title="Timeline"> <ion-content class="padding"> </ion-content> </ion-view> Open www/templates/tab-chats.html and clean it up: <ion-view view-title="Explore"> <ion-content> </ion-content> </ion-view> Open www/templates/tab-account.html and clean it up: <ion-view view-title="Profile"> <ion-content> </ion-content> </ion-view> Open www/js/controllers.js and delete methods inside controllers so we have following code: angular.module('starter.controllers', []) .controller('DashCtrl', function($scope) {}) .controller('ChatsCtrl', function($scope, Chats) { }) .controller('ChatDetailCtrl', function($scope, $stateParams, Chats) { }) .controller('AccountCtrl', function($scope) { }); We have clean up our tabs application. If we run our application, we will have view like this: The next steps, we will create layout for timeline view. Each post of timeline will be displaying username, image, Like button, and Comment button. Open www/template/tab-dash.html and add following div list: <ion-view view-title="Timelines"> <ion-content class="has-header"> <div class="list card"> <div class="item item-avatar"> <img src="http://placehold.it/50x50"> <h2>Some title</h2> <p>November 05, 1955</p> </div> <div class="item item-body"> <img class="full-image" src="http://placehold.it/500x500"> <p> <a href="#" class="subdued">1 Like</a> <a href="#" class="subdued">5 Comments</a> </p> </div> <div class="item tabs tabs-secondary tabs-icon-left"> <a class="tab-item" href="#"> <i class="icon ion-heart"></i> Like </a> <a class="tab-item" href="#"> <i class="icon ion-chatbox"></i> Comment </a> <a class="tab-item" href="#"> <i class="icon ion-share"></i> Share </a> </div> </div> </ion-content> </ion-view> Our timeline view will be like this: Then, we will create explore page to display photos in a grid view. 
First we need to add some styles on our www/css/styles.css: .profile ul { list-style-type: none; } .imageholder { width: 100%; height: auto; display: block; margin-left: auto; margin-right: auto; } .profile li img { float: left; border: 5px solid #fff; width: 30%; height: 10%; -webkit-transition: box-shadow 0.5s ease; -moz-transition: box-shadow 0.5s ease; -o-transition: box-shadow 0.5s ease; -ms-transition: box-shadow 0.5s ease; transition: box-shadow 0.5s ease; } .profile li img:hover { -webkit-box-shadow: 0px 0px 7px rgba(255, 255, 255, 0.9); box-shadow: 0px 0px 7px rgba(255, 255, 255, 0.9); } Then we just put list with image item like so: <ion-view view-title="Explore"> <ion-content> <ul class="profile" style="margin-left:5%;"> <li class="profile"> <a href="#"><img src="http://placehold.it/50x50"></a> </li> <li class="profile" style="list-style-type: none;"> <a href="#"><img src="http://placehold.it/50x50"></a> </li> <li class="profile" style="list-style-type: none;"> <a href="#"><img src="http://placehold.it/50x50"></a> </li> <li class="profile" style="list-style-type: none;"> <a href="#"><img src="http://placehold.it/50x50"></a> </li> <li class="profile" style="list-style-type: none;"> <a href="#"><img src="http://placehold.it/50x50"></a> </li> <li class="profile" style="list-style-type: none;"> <a href="#"><img src="http://placehold.it/50x50"></a> </li> </ul> </ion-content> </ion-view> Now, our explore page will look like this: For the last, we will create our profile page. The profile page consists of two parts. The first one is profile header, which shows user information such as username, profile picture, and number of post. The second part is a grid list of picture uploaded by user. It's similar to grid view on explore page. To add profile header, open www/css/style.css and add following styles bellow existing style: .text-white{ color:#fff; } .profile-pic { width: 30%; height: auto; display: block; margin-top: -50%; margin-left: auto; margin-right: auto; margin-bottom: 20%; border-radius: 4em 4em 4em / 4em 4em; } Open www/templates/tab-account.html and then add following code inside ion-content: <ion-content> <div class="user-profile" style="width:100%;heigh:auto;background-color:#fff;float:left;"> <img src="img/cover.jpg"> <div class="avatar"> <img src="img/ionic.png" class="profile-pic"> <ul> <li> <p class="text-white text-center" style="margin-top:-15%;margin-bottom:10%;display:block;">@ionsnap, 6 Pictures</p> </li> </ul> </div> </div> … The second part of profile page is the grid list of user images. Let's add some pictures under profile header and before the close of ion-content tag: <ul class="profile" style="margin-left:5%;"> <li class="profile"> <a href="#"><img src="http://placehold.it/100x100"></a> </li> <li class="profile" style="list-style-type: none;"> <a href="#"><img src="http://placehold.it/100x100"></a> </li> <li class="profile" style="list-style-type: none;"> <a href="#"><img src="http://placehold.it/100x100"></a> </li> <li class="profile" style="list-style-type: none;"> <a href="#"><img src="http://placehold.it/100x100"></a> </li> <li class="profile" style="list-style-type: none;"> <a href="#"><img src="http://placehold.it/100x100"></a> </li> <li class="profile" style="list-style-type: none;"> <a href="#"><img src="http://placehold.it/100x100"></a> </li> </ul> </ion-content> Our profile page will now look like this: Summary In this article we have seen steps to create Instagram clone with an Ionic framework with the help of an example. 
If you are a developer who wants to get started with mobile application development using PhoneGap, then this article is for you. Basic understanding of web technologies such as HTML, CSS and JavaScript is a must. Resources for Article: Further resources on this subject: The Camera API [article] Working with the sharing plugin [article] Building the Middle-Tier [article]

Object Detection Using Image Features in JavaScript

Packt
05 Oct 2015
16 min read
In this article by Foat Akhmadeev, author of the book Computer Vision for the Web, we will discuss how we can detect an object on an image using several JavaScript libraries. In particular, we will see techniques such as FAST features detection, and BRIEF and ORB descriptors matching. Eventually, the object detection example will be presented. There are many ways to detect an object on an image. Color object detection, which is the detection of changes in intensity of an image is just a simple computer vision methods. There are some sort of fundamental things which every computer vision enthusiast should know. The libraries we use here are: JSFeat (http://inspirit.github.io/jsfeat/) tracking.js (http://trackingjs.com) (For more resources related to this topic, see here.) Detecting key points What information do we get when we see an object on an image? An object usually consists of some regular parts or unique points, which represent this particular object. Of course, we can compare each pixel of an image, but it is not a good thing in terms of computational speed. Probably, we can take unique points randomly, thus reducing the computation cost significantly, but we will still not get much information from random points. Using the whole information, we can get too much noise and lose important parts of an object representation. Eventually, we need to consider that both ideas, getting all pixels and selecting random pixels, are really bad. So, what can we do in that case? We are working with a grayscale image and we need to get unique points of an image. Then, we need to focus on the intensity information. For example, getting object edges in the Canny edge detector or the Sobel filter. We are closer to the solution! But still not close enough. What if we have a long edge? Don't you think that it is a bit bad that we have too many unique pixels that lay on this edge? An edge of an object has end points or corners; if we reduce our edge to those corners, we will get enough unique pixels and remove unnecessary information. There are various methods of getting keypoints from an image, many of which extract corners as those keypoints. To get them, we will use the FAST (Features from Accelerated Segment Test) algorithm. It is really simple and you can easily implement it by yourself if you want. But you do not need to. The algorithm implementation is provided by both tracking.js and JSFeat libraries. The idea of the FAST algorithm can be captured from the following image: Suppose we want to check whether the pixel P is a corner. We will check 16 pixels around it. If at least 9 pixels in an arc around P are much darker or brighter than the value of P, then we say that P is a corner. How much darker or brighter should the P pixels be? The decision is made by applying a threshold for the difference between the value of P and the value of pixels around P. A practical example First, we will start with an example of FAST corner detection for the tracking.js library. Before we do something, we can set the detector threshold. Threshold defines the minimum difference between a tested corner and the points around it: tracking.Fast.THRESHOLD = 30; It is usually a good practice to apply a Gaussian blur on an image before we start the method. 
It significantly reduces the noise of an image: var imageData = context.getImageData(0, 0, cols, rows); var gray = tracking.Image.grayscale(imageData.data, cols, rows, true); var blurred4 = tracking.Image.blur(gray, cols, rows, 3); Remember that the blur function returns a 4 channel array—RGBA. In that case, we need to convert it to 1-channel. Since we can easily skip other channels, it should not be a problem: var blurred1 = new Array(blurred4.length / 4); for (var i = 0, j = 0; i < blurred4.length; i += 4, ++j) { blurred1[j] = blurred4[i]; } Next, we run a corner detection function on our image array: var corners = tracking.Fast.findCorners(blurred1, cols, rows); The result returns an array with its length twice the length of the corner's number. The array is returned in the format [x0,y0,x1,y1,...]. Where [xn, yn] are coordinates of a detected corner. To print the result on a canvas, we will use the fillRect function: for (i = 0; i < corners.length; i += 2) { context.fillStyle = '#0f0'; context.fillRect(corners[i], corners[i + 1], 3, 3); } Let's see an example with the JSFeat library,. for which the steps are very similar to that of tracking.js. First, we set the global threshold with a function: jsfeat.fast_corners.set_threshold(30); Then, we apply a Gaussian blur to an image matrix and run the corner detection: jsfeat.imgproc.gaussian_blur(matGray, matBlurred, 3); We need to preallocate keypoints for a corners result. The keypoint_t function is just a new type which is useful for keypoints of an image. The first two parameters represent coordinates of a point and the other parameters set: point score (which checks whether the point is good enough to be a key point), point level (which you can use it in an image pyramid, for example), and point angle (which is usually used for the gradient orientation): var corners = []; var i = cols * rows; while (--i >= 0) { corners[i] = new jsfeat.keypoint_t(0, 0, 0, 0, -1); } After all this, we execute the FAST corner detection method. As a last parameter of detection function, we define a border size. The border is used to constrain circles around each possible corner. For example, you cannot precisely say whether the point is a corner for the [0,0] pixel. There is no [0, -3] pixel in our matrix: var count = jsfeat.fast_corners.detect(matBlurred, corners, 3); Since we preallocated the corners, the function returns the number of calculated corners for us. The result returns an array of structures with the x and y fields, so we can print it using those fields: for (var i = 0; i < count; i++) { context.fillStyle = '#0f0'; context.fillRect(corners[i].x, corners[i].y, 3, 3); } The result is nearly the same for both algorithms. The difference is in some parts of realization. Let's look at the following example: From left to right: tracking.js without blur, JSFeat without blur, tracking.js and JSFeat with blur. If you look closely, you can see the difference between tracking.js and JSFeat results, but it is not easy to spot it. Look at how much noise was reduced by applying just a small 3 x 3 Gaussian filter! A lot of noisy points were removed from the background. And now the algorithm can focus on points that represent flowers and the pattern of the vase. We have extracted key points from our image, and we successfully reached the goal of reducing the number of keypoints and focusing on the unique points of an image. Now, we need to compare or match those points somehow. How we can do that? 
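Before we move on to descriptors, one practical note about the JSFeat snippet above: it assumes that matGray already holds a grayscale matrix. A minimal way to build it from a canvas, assuming a 2D context and the same cols/rows dimensions used earlier, might look like this (matBlurred is preallocated the same way):

// build a JSFeat grayscale matrix from canvas pixels (sketch; variable names assumed)
var imageData = context.getImageData(0, 0, cols, rows);
var matGray = new jsfeat.matrix_t(cols, rows, jsfeat.U8C1_t);
jsfeat.imgproc.grayscale(imageData.data, cols, rows, matGray);
var matBlurred = new jsfeat.matrix_t(cols, rows, jsfeat.U8C1_t);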
Descriptors and object matching Image features by themselves are a bit useless. Yes, we have found unique points on an image, but what did we get? Only the values of pixels, and that's it. If we try to compare these values directly, it will not give us much information. Moreover, if we change the overall image brightness, we will not find the same keypoints on the same image! Taking all of this into account, we need the information that surrounds our key points, and we need a method to compare this information efficiently. First, we need to describe the image features, which is what image descriptors are for. In this part, we will see how these descriptors can be extracted and matched. The tracking.js and JSFeat libraries provide different methods for image descriptors; we will discuss both. BRIEF and ORB descriptors The descriptors theory is focused on changes in image pixel intensities. The tracking.js library provides the BRIEF (Binary Robust Independent Elementary Features) descriptors, and the JSFeat library provides their extension, ORB (Oriented FAST and Rotated BRIEF). As we can see from the ORB naming, it is rotation invariant. This means that even if you rotate an object, the algorithm can still detect it. Moreover, the authors of the JSFeat library provide an example using the image pyramid, which is scale invariant too. Let's start by explaining BRIEF, since it is the source for ORB descriptors. As a first step, the algorithm takes the computed image features and, for each feature, it takes unique pairs of elements around it. Based on the intensities of these pairs, it forms a binary string. For example, if we have a pair of positions i and j, and if I(i) < I(j) (where I(pos) means the image value at the position pos), then the result is 1, else 0. We add this result to the binary string, and we do that for N pairs, where N is taken as a power of 2 (128, 256, 512). Since descriptors are just binary strings, we can compare them in an efficient manner. To match these strings, the Hamming distance is usually used. It shows the minimum number of substitutions required to change one string into another. For example, if we have the two binary strings 10011 and 11001, the Hamming distance between them is 2, since we need to change 2 bits of information to change the first string into the second. The JSFeat library provides the functionality to apply ORB descriptors. The core idea is very similar to BRIEF. However, there are two major differences: The implementation is scale invariant, since the descriptors are computed for an image pyramid. The descriptors are rotation invariant; the direction is computed using the intensity of the patch around a feature. Using this orientation, ORB manages to compute the BRIEF descriptor in a rotation-invariant manner. Implementation of descriptors and their matching Our goal is to find an object from a template on a scene image. We can do that by finding features and descriptors on both images and matching the descriptors of the template against those of the scene image. We start with the tracking.js library and BRIEF descriptors. The first thing that we can do is set the number of location pairs: tracking.Brief.N = 512 By default, it is 256, but you can choose a higher value. The larger the value, the more information you will get, and the more memory and computational cost it requires. Before starting the computation, do not forget to apply the Gaussian blur to reduce the image noise. Next, we find the FAST corners and compute descriptors on both images. 
Here and in the next example, we use the suffix Object for a template image and Scene for a scene image: var cornersObject = tracking.Fast.findCorners(grayObject, colsObject, rowsObject); var cornersScene = tracking.Fast.findCorners(grayScene, colsScene, rowsScene); var descriptorsObject = tracking.Brief.getDescriptors(grayObject, colsObject, cornersObject); var descriptorsScene = tracking.Brief.getDescriptors(grayScene, colsScene, cornersScene); Then we do the matching: var matches = tracking.Brief.reciprocalMatch(cornersObject, descriptorsObject, cornersScene, descriptorsScene); We need to pass both the corners and the descriptors to the function, since it returns coordinate information as a result. Next, we print both images on one canvas. To draw the matches with this trick, we need to shift the scene keypoints by the width of the template image, since keypoint1 of a match returns a point on the template and keypoint2 returns a point on the scene image. Both keypoint1 and keypoint2 are arrays with the x and y coordinates at indexes 0 and 1, respectively: for (var i = 0; i < matches.length; i++) { var color = '#' + Math.floor(Math.random() * 16777215).toString(16); context.fillStyle = color; context.strokeStyle = color; context.fillRect(matches[i].keypoint1[0], matches[i].keypoint1[1], 5, 5); context.fillRect(matches[i].keypoint2[0] + colsObject, matches[i].keypoint2[1], 5, 5); context.beginPath(); context.moveTo(matches[i].keypoint1[0], matches[i].keypoint1[1]); context.lineTo(matches[i].keypoint2[0] + colsObject, matches[i].keypoint2[1]); context.stroke(); } The JSFeat library provides most of the code for pyramids and scale-invariant features not in the library itself, but in its examples, which are available at https://github.com/inspirit/jsfeat/blob/gh-pages/sample_orb.html. We will not provide the full code here, because it requires too much space, but do not worry, we will highlight the main points. Let's start with the functions that are included in the library. First, we need to preallocate the descriptors matrix, where 32 is the length of a descriptor and 500 is the maximum number of descriptors. Again, 32 is a power of two: var descriptors = new jsfeat.matrix_t(32, 500, jsfeat.U8C1_t); Then, we compute the ORB descriptors for each corner; we need to do that for both the template and the scene images: jsfeat.orb.describe(matBlurred, corners, num_corners, descriptors); The matching function uses global variables, which mainly define the input descriptors and the output matches: function match_pattern() The resulting match_t structure contains the following fields: screen_idx: This is the index of a scene descriptor pattern_lev: This is the index of a pyramid level pattern_idx: This is the index of a template descriptor Since ORB works with the image pyramid, it returns corners and matches for each level: var s_kp = screen_corners[m.screen_idx]; var p_kp = pattern_corners[m.pattern_lev][m.pattern_idx]; We can print each match as shown here. Again, we use the shift, since we computed the descriptors on separate images but print the result on one canvas: context.fillRect(p_kp.x, p_kp.y, 4, 4); context.fillRect(s_kp.x + shift, s_kp.y, 4, 4); Working with a perspective Let's take a step aside for a moment. Sometimes, an object you want to detect is affected by a perspective distortion. In that case, you may want to rectify the object plane. For example, a building wall: Looks good, doesn't it? How do we do that? 
Let's look at the code: var imgRectified = new jsfeat.matrix_t(mat.cols, mat.rows, jsfeat.U8_t | jsfeat.C1_t); var transform = new jsfeat.matrix_t(3, 3, jsfeat.F32_t | jsfeat.C1_t); jsfeat.math.perspective_4point_transform(transform, 0, 0, 0, 0, // first pair x1_src, y1_src, x1_dst, y1_dst 640, 0, 640, 0, // x2_src, y2_src, x2_dst, y2_dst and so on. 640, 480, 640, 480, 0, 480, 180, 480); jsfeat.matmath.invert_3x3(transform, transform); jsfeat.imgproc.warp_perspective(mat, imgRectified, transform, 255); First, as we did earlier, we define a result matrix object. Next, we assign a matrix for the image perspective transformation. We calculate it based on four pairs of corresponding points. For example, the last, that is, the fourth point of the original image, which is [0, 480], should be projected to the point [180, 480] on the rectified image. Here, the first coordinate refers to x and the second to y. Then, we invert the transform matrix to be able to apply it to the original image (the mat variable). We pick the background color as white (255 for an unsigned byte). As a result, we get a nice image without any perspective distortion. Finding an object location Returning to our primary goal: we found a match, which is great, but we have not yet found the object's location. There is no function for that in the tracking.js library, but JSFeat provides such functionality in its examples section. First, we need to compute a perspective transform matrix; this is why we discussed an example of such a transformation previously. We have matching points from the two images, but we do not have a transformation for the whole image. First, we define a transform matrix: var homo3x3 = new jsfeat.matrix_t(3, 3, jsfeat.F32C1_t); To compute the homography, we need only four points, but after the matching we get many more than that. In addition, there can be noisy points, which we will need to skip somehow. For that, we use the RANSAC (Random sample consensus) algorithm. It is an iterative method for estimating a mathematical model from a dataset that contains outliers (noise). It estimates the outliers and generates a model that is computed without the noisy data. Before we start, we need to define the algorithm parameters. The first parameter is a match mask, where all matches will be marked as good (1) or bad (0): var match_mask = new jsfeat.matrix_t(500, 1, jsfeat.U8C1_t); Our mathematical model to find: var mm_kernel = new jsfeat.motion_model.homography2d(); The minimum number of points needed to estimate a model (4 points to get a homography): var num_model_points = 4; The maximum threshold to classify a data point as an inlier or a good match: var reproj_threshold = 3; Finally, the variable that holds the main parameters; its last two arguments define the maximum ratio of outliers (0.5) and the requested probability of success (0.99): var ransac_param = new jsfeat.ransac_params_t(num_model_points, reproj_threshold, 0.5, 0.99); Then, we run the RANSAC algorithm. The last parameter represents the maximum number of iterations for the algorithm: jsfeat.motion_estimator.ransac(ransac_param, mm_kernel, object_xy, screen_xy, count, homo3x3, match_mask, 1000); The shape finding can be applied to both the tracking.js and JSFeat libraries; you just need to pass the matches as object_xy and screen_xy, where both arguments must hold an array of objects with x and y fields. 
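As a small, hedged sketch (not part of the original example code), this is how the matches returned earlier by tracking.Brief.reciprocalMatch() could be adapted into the object_xy and screen_xy arrays that the RANSAC call expects; the RANSAC invocation itself is repeated from above for context:

var object_xy = [];
var screen_xy = [];
for (var i = 0; i < matches.length; i++) {
  // keypoint1 lies on the template image, keypoint2 on the scene image.
  object_xy[i] = { x: matches[i].keypoint1[0], y: matches[i].keypoint1[1] };
  screen_xy[i] = { x: matches[i].keypoint2[0], y: matches[i].keypoint2[1] };
}
var count = matches.length;
jsfeat.motion_estimator.ransac(ransac_param, mm_kernel, object_xy, screen_xy, count, homo3x3, match_mask, 1000);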
After we find the transformation matrix, we compute the projection of the object's shape onto the new image: var shape_pts = tCorners(homo3x3.data, colsObject, rowsObject); After the computation is done, we draw the computed shapes on our images: As we can see, our program successfully found the object in both cases. In practice, the two methods can show different performance; this mainly depends on the thresholds you set. Summary Image features and descriptor matching are powerful tools for object detection. Both the JSFeat and tracking.js libraries provide different functionalities to match objects using these features. In addition to this, the JSFeat project contains algorithms for object finding. These methods can be useful for tasks such as unique object detection, face tracking, and creating a human interface by tracking various objects very efficiently. Resources for Article: Further resources on this subject: Welcome to JavaScript in the full stack[article] Introducing JAX-RS API[article] Using Google Maps APIs with Knockout.js [article]

article-image-mathematical-imaging
Packt
05 Oct 2015
17 min read
Save for later

Mathematical Imaging

In this article by Francisco J. Blanco-Silva, author of the book Mastering SciPy, you will learn about image editing. The purpose of editing is the alteration of digital images, usually to improve their properties or to turn them into an intermediate step for further processing. Let's examine different examples of editing: Transformations of the domain Intensity adjustment Image restoration Transformations of the domain In this setting, we address transformations of images that change the location of pixels: rotations, compressions, stretching, swirls, cropping, perspective control, and so on. Once the transformation of the pixels in the domain of the original is performed, we observe the size of the output. If there are more pixels in this image than in the original, the extra locations are filled with numerical values obtained by interpolating the given data. We do have some control over the kind of interpolation performed, of course. To better illustrate these techniques, we will pair an actual image (say, Lena) with a representation of its domain as a checkerboard: In [1]: import numpy as np, matplotlib.pyplot as plt In [2]: from scipy.misc import lena; ...: from skimage.data import checkerboard In [3]: image = lena().astype('float') ...: domain = checkerboard() In [4]: print image.shape, domain.shape Out[4]: (512, 512) (200, 200) Rescale and resize Before we proceed with the pairing of image and domain, we have to make sure that they both have the same size. One quick fix is to rescale both objects, or simply resize one of the images to match the size of the other. Let's go with the first choice, so that we can illustrate the usage of the two functions available for this task in the module skimage.transform, rescale and resize: In [5]: from skimage.transform import rescale, resize In [6]: domain = rescale(domain, scale=1024./200); ...: image = resize(image, output_shape=(1024, 1024), order=3) Observe how, in the resizing operation, we requested a bicubic interpolation. Swirl To perform a swirl, we call the function swirl from the module skimage.transform. In all the examples of this section, we will present the results visually after performing the requested computations. In all cases, the syntax of the call used to display the images is the same. For a given operation mapping, we issue the command display(mapping, image, domain), where the routine display is defined as follows: def display(mapping, image, domain): plt.figure() plt.subplot(121) plt.imshow(mapping(image)) plt.gray() plt.subplot(122) plt.imshow(mapping(domain)) plt.show() For the sake of brevity, we will not include this command in the following code, but assume it is called every time: In [7]: from skimage.transform import swirl In [8]: mapping = lambda img: swirl(img, strength=6, ...: radius=512) Geometric transformations A simple rotation around any location (no matter whether inside or outside the domain of the image) can be achieved with the function rotate from either module scipy.ndimage or skimage.transform. They are basically the same under the hood, but the syntax of the function in the scikit-image toolkit is more user-friendly: In [10]: from skimage.transform import rotate In [11]: mapping = lambda img: rotate(img, angle=30, resize=True, ....: center=None) This gives a counter-clockwise rotation of 30 degrees (angle=30) around the center of the image (center=None). 
The size of the output image is expanded to guarantee that all the original pixels are present in the output (resize=True): Rotations are a special case of what we call an affine transformation—a combination of rotation with scales (one for each dimension), shear, and translation. Affine transformations are in turn a special case of a homography (a projective transformation). Rather than learning a different set of functions, one for each kind of geometric transformation, the library skimage.transform allows a very comfortable setting. There is a common function (warp) that gets called with the requested geometric transformation and performs the computations. Each suitable geometric transformation is previously initialized with an appropriate python class. For example, to perform an affine transformation with a counter-clockwise rotation angle of 30 degrees about the point with coordinates (512, -2048), and scale factors of 2 and 3 units, respectively for the x and y coordinates, we issue the following command: In [13]: from skimage.transform import warp, AffineTransform In [14]: operation = AffineTransform(scale=(2,3), rotation=np.pi/6, ....: translation = (512, -2048)) In [15]: mapping = lambda img: warp(img, operation) Observe how all the lines in the transformed checkerboard are either parallel or perpendicular—affine transformations preserve angles. The following illustrates the effect of a homography: In [17]: from skimage.transform import ProjectiveTransform In [18]: generator = np.matrix('1,0,10; 0,1,20; -0.0007,0.0005,1'); ....: homography = ProjectiveTransform(matrix=generator); ....: mapping = lambda img: warp(img, homography) Observe how, unlike in the case of an affine transformation, the lines cease to be all parallel or perpendicular. All vertical lines are now incident at a point at infinity. All horizontal lines are also incident at a different point at infinity. The real usefulness of homographies arises, for example, when we need to perform perspective control. For instance, the image skimage.data.text is clearly slanted. By choosing the four corners of what we believe is a perfect rectangle (we estimate such a rectangle by visual inspection), we can compute a homography that transforms the given image into one that is devoid of any perspective. The Python classes representing geometric transformations allow us to perform this estimation very easily, as the following example shows: In [20]: from skimage.data import text In [21]: text().shape Out[21]: (172, 448) In [22]: source = np.array(((155, 15), (65, 40), ....: (260, 130), (360, 95), ....: (155, 15))) In [23]: mapping = ProjectiveTransform() Let's estimate the homography that transforms the given set of points into a perfect rectangle of the size 48 x 256 centered in an output image of size 512 x 512. The choice of size of the output image is determined by the length of the diagonal of the original image (about 479 pixels). This way, after the homography is computed, the output is likely to contain all the pixels from the original. Observe that we have included one of vertices in the source, twice. This is not strictly necessary for the following computations, but will make the visualization of rectangles much easier to code. We use the same trick for the target rectangle. 
In [24]: target = np.array(((256-128, 256-24), (256-128, 256+24), ....: (256+128, 256+24), (256+128, 256-24), ....: (256-128, 256-24))) In [25]: mapping.estimate(target, source) Out[25]: True In [26]: plt.figure(); ....: plt.subplot(121); ....: plt.imshow(text()); ....: plt.gray(); ....: plt.plot(source[:,0], source[:,1],'-', lw=1, color='red'); ....: plt.xlim(0, 448); ....: plt.ylim(172, 0); ....: plt.subplot(122); ....: plt.imshow(warp(text(), mapping,output_shape=(512, 512))); ....: plt.plot(target[:,0], target[:,1],'-', lw=1, color='red'); ....: plt.xlim(0, 512); ....: plt.ylim(512, 0); ....: plt.show() Other more involved geometric operations are needed, for example, to fix vignetting and some of the other kinds of distortions produced by photographic lenses. Traditionally, once we acquire an image, we assume that all these distortions are present. By knowing the technical specifications of the equipment used to take the photographs, we can automatically rectify these defects. With this purpose in mind, in the SciPy stack, we have access to the lensfun library (http://lensfun.sourceforge.net/) through the package lensfunpy (https://pypi.python.org/pypi/lensfunpy). For examples of usage and documentation, an excellent resource is the API reference of lensfunpy at http://pythonhosted.org/lensfunpy/api/. Intensity adjustment In this category, we have operations that only modify the intensity of an image obeying a global formula. All these operations can therefore be easily coded by using purely NumPy operations, by creating vectorized functions adapting the requested formulas. The applications of these operations can be explained in terms of exposure in black and white photography, for example. For this reason, all the examples in this section are applied on gray-scale images. We have mainly three approaches to enhancing images by working on its intensity: Histogram equalizations Intensity clipping/resizing Contrast adjustment Histogram equalization The starting point in this setting is always the concept of intensity histogram (or more precisely, the histogram of pixel intensity values)—a function that indicates the number of pixels in an image at each different intensity value found in that image. For instance, for the original version of Lena, we could issue the following command: In [27]: plt.figure(); ....: plt.hist(lena().flatten(), 256); ....: plt.show() The operations of histogram equalization improve the contrast of images by modifying the histogram in a way that most of the relevant intensities have the same impact. We can accomplish this enhancement by calling, from the module skimage.exposure, any of the functions equalize_hist (pure histogram equalization) or equalize_adaphist (contrast limited adaptive histogram equalization (CLAHE)). Note the obvious improvement after the application of the histogram equalization of the image skimage.data.moon. In the following examples, we include the corresponding histogram below all relevant images for comparison. 
A suitable code to perform this visualization could be as follows: def display(image, transform, bins): target = transform(image) plt.figure() plt.subplot(221) plt.imshow(image) plt.gray() plt.subplot(222) plt.imshow(target) plt.subplot(223) plt.hist(image.flatten(), bins) plt.subplot(224) plt.hist(target.flatten(), bins) plt.show() In [28]: from skimage.exposure import equalize_hist; ....: from skimage.data import moon In [29]: display(moon(), equalize_hist, 256) Intensity clipping/resizing A peak at the histogram indicates the presence of a particular intensity that is remarkably more predominant than its neighboring ones. If we desire to isolate intensities around a peak, we can do so using purely NumPy masking/clipping operations on the original image. If storing the result is not needed, we can request a quick visualization of the result by employing the command clim in the library matplotlib.pyplot. For instance, to isolate intensities around the highest peak of Lena (roughly, these are between 150 and 160), we could issue the following command: In [30]: plt.figure(); ....: plt.imshow(lena()); ....: plt.clim(vmin=150, vmax=160); ....: plt.show() Note how this operation, in spite of having reduced the representative range of intensities from 256 to 10, offers us a new image that keeps sufficient information to recreate the original one. Naturally, we can regard this operation also as a lossy compression method. Contrast enhancement An obvious drawback of clipping intensities is the loss of perceived lightness contrast. To overcome this loss, it is preferable to employ formulas that do not reduce the size of the range. Among the many available formulas conforming to this mathematical property, the most successful ones are those that replicate an optical property of the acquisition method. We explore the following three cases: Gamma correction: Human vision follows a power function, with greater sensitivity to relative differences between darker tones than between lighter ones. Each original image, as captured by the acquisition device, might allocate too many bits or too much bandwidth to highlight that humans cannot actually differentiate. Similarly, too few bits/bandwidth could be allocated to the darker areas of the image. By manipulation of this power function, we are capable of addressing the correct amount of bits and bandwidth. Sigmoid correction: Independently of the amount of bits and bandwidth, it is often desirable to maintain the perceived lightness contrast. Sigmoidal remapping functions were then designed based on empirical contrast enhancement model developed from the results of psychophysical adjustment experiments. Logarithmic correction: This is a purely mathematical process designed to spread the range of naturally low-contrast images by transforming to a logarithmic range. To perform gamma correction on images, we could employ the function adjust_gamma in the module skimage.exposure. The equivalent mathematical operation is the power-law relationship output = gain * input^gamma. Observe the great improvement in definition of the brighter areas of a stained micrograph of colonic glands, when we choose the exponent gamma=2.5 and no gain (gain=1.0): In [31]: from skimage.exposure import adjust_gamma; ....: from skimage.color import rgb2gray; ....: from skimage.data import immunohistochemistry In [32]: image = rgb2gray(immunohistochemistry()) In [33]: correction = lambda img: adjust_gamma(img, gamma=2.5, ....: gain=1.) 
Note the huge improvement in contrast in the lower right section of the micrograph, allowing a better description and differentiation of the observed objects: Immunohistochemical staining with hematoxylin counterstaining. This image was acquired at the Center for Microscopy And Molecular Imaging (CMMI). To perform sigmoid correction with given gain and cutoff coefficients, according to the formula output = 1/(1 + exp*(gain*(cutoff - input))), we employ the function adjust_sigmoid in skimage.exposure. For example, with gain=10.0 and cutoff=0.5 (the default values), we obtain the following enhancement: In [35]: from skimage.exposure import adjust_sigmoid In [36]: display(image[:256, :256], adjust_sigmoid, 256) Note the improvement in the definition of the walls of cells in the enhanced image: We have already explored logarithmic correction in the previous section, when enhancing the visualization of the frequency of an image. This is equivalent to applying a vectorized version of np.log1p to the intensities. The corresponding function in the scikit-image toolkit is adjust_log in the sub module exposure. Image restoration In this category of image editing, the purpose is to repair the damage produced by post or preprocessing of the image, or even the removal of distortion produced by the acquisition device. We explore two classic situations: Noise reduction Sharpening and blurring Noise reduction In mathematical imaging, noise is by definition a random variation of the intensity (or the color) produced by the acquisition device. Among all the possible types of noise, we acknowledge four key cases: Gaussian noise: We add to each pixel a value obtained from a random variable with normal distribution and a fixed mean. We usually allow the same variance on each pixel of the image, but it is feasible to change the variance depending on the location. Poisson noise: To each pixel, we add a value obtained from a random variable with Poisson distribution. Salt and pepper: This is a replacement noise, where some pixels are substituted by zeros (black or pepper), and some pixels are substituted by ones (white or salt). Speckle: This is a multiplicative kind of noise, where each pixel gets modified by the formula output = input + n * input. The value of the modifier n is a value obtained from a random variable with uniform distribution of fixed mean and variance. To emulate all these kinds of noise, we employ the utility random_noise from the module skimage.util. Let's illustrate the possibilities in a common image: In [37]: from skimage.data import camera; ....: from skimage.util import random_noise In [38]: gaussian_noise = random_noise(camera(), 'gaussian', ....: var=0.025); ....: poisson_noise = random_noise(camera(), 'poisson'); ....: saltpepper_noise = random_noise(camera(), 's&p', ....: salt_vs_pepper=0.45); ....: speckle_noise = random_noise(camera(), 'speckle', var=0.02) In [39]: variance_generator = lambda i,j: 0.25*(i+j)/1022. + 0.001; ....: variances = np.fromfunction(variance_generator,(512,512)); ....: lclvr_noise = random_noise(camera(), 'localvar', ....: local_vars=variances) In the last example, we have created a function that assigns a variance between 0.001 and 0.026 depending on the distance to the upper corner of an image. When we visualize the corresponding noisy version of skimage.data.camera, we see that the level of degradation gets stronger as we get closer to the lower right corner of the picture. 
The following is an example of visualization of the corresponding noisy images: The purpose of noise reduction is to remove as much of this unwanted signal, so we obtain an image as close to the original as possible. The trick, of course, is to do so without any previous knowledge of the properties of the noise. The most basic methods of denoising are the application of either a Gaussian or a median filter. We explored them both in the previous section. The former was presented as a smoothing filter (gaussian_filter), and the latter was discussed when we explored statistical filters (median_filter). They both offer decent noise removal, but they introduce unwanted artifacts as well. For example, the Gaussian filter does not preserve edges in images. The application of any of these methods is also not recommended if preserving texture information is needed. We have a few more advanced methods in the module skimage.restoration, able to tailor denoising to images with specific properties: denoise_bilateral: This is the bilateral filter. It is useful when preserving edges is important. denoise_tv_bregman, denoise_tv_chambolle: We will use this if we require a denoised image with small total variation. nl_means_denoising: The so-called non-local means denoising. This method ensures the best results to denoise areas of the image presenting texture. wiener, unsupervised_wiener: This is the Wiener-Hunt deconvolution. It is useful when we have knowledge of the point-spread function at acquisition time. Let's show you, by example, the performance of one of these methods on some of the noisy images we computed earlier: In [40]: from skimage.restoration import nl_means_denoising as dnoise In [41]: images = [gaussian_noise, poisson_noise, ....: saltpepper_noise, speckle_noise]; ....: names = ['Gaussian', 'Poisson', 'Salt & Pepper', 'Speckle'] In [42]: plt.figure() Out[42]: <matplotlib.figure.Figure at 0x118301490> In [43]: for index, image in enumerate(images): ....: output = dnoise(image, patch_size=5, patch_distance=7) ....: plt.subplot(2, 4, index+1) ....: plt.imshow(image) ....: plt.gray() ....: plt.title(names[index]) ....: plt.subplot(2, 4, index+5) ....: plt.imshow(output) ....: In [44]: plt.show() Under each noisy image, we have presented the corresponding result after employing, by nonlocal means, denoising: It is also possible to perform denoising by thresholding coefficient, provided we represent images with a transform. 
For example, to do a soft thresholding, employing Biorthonormal 2.8 wavelets; we will use the package PyWavelets: In [45]: import pywt In [46]: def dnoise(image, wavelet, noise_var): ....: levels = int(np.floor(np.log2(image.shape[0]))) ....: coeffs = pywt.wavedec2(image, wavelet, level=levels) ....: value = noise_var * np.sqrt(2 * np.log2(image.size)) ....: threshold = lambda x: pywt.thresholding.soft(x, value) ....: coeffs = map(threshold, coeffs) ....: return pywt.waverec2(coeffs, wavelet) ....: In [47]: plt.figure() Out[47]: <matplotlib.figure.Figure at 0x10e5ed790> In [48]: for index, image in enumerate(images): ....: output = dnoise(image, pywt.Wavelet('bior2.8'), ....: noise_var=0.02) ....: plt.subplot(2, 4, index+1) ....: plt.imshow(image) ....: plt.gray() ....: plt.title(names[index]) ....: plt.subplot(2, 4, index+5) ....: plt.imshow(output) ....: In [49]: plt.show() Observe that the results are of comparable quality to those obtained with the previous method: Sharpening and blurring There are many situations that produce blurred images: Incorrect focus at acquisition Movement of the imaging system The point-spread function of the imaging device (like in electron microscopy) Graphic-art effects For blurring images, we could replicate the effect of a point-spread function by performing convolution of the image with the corresponding kernel. The Gaussian filter that we used for denoising performs blurring in this fashion. In the general case, convolution with a given kernel can be done with the routine convolve from the module scipy.ndimage. For instance, for a constant kernel supported on a 10 x 10 square, we could do as follows: In [50]: from scipy.ndimage import convolve; ....: from skimage.data import page In [51]: kernel = np.ones((10, 10))/100.; ....: blurred = convolve(page(), kernel) To emulate the blurring produced by movement too, we could convolve with a kernel as created here: In [52]: from skimage.draw import polygon In [53]: x_coords = np.array([14, 14, 24, 26, 24, 18, 18]); ....: y_coords = np.array([ 2, 18, 26, 24, 22, 18, 2]); ....: kernel_2 = np.zeros((32, 32)); ....: kernel_2[polygon(x_coords, y_coords)]= 1.; ....: kernel_2 /= kernel_2.sum() In [54]: blurred_motion = convolve(page(), kernel_2) In order to reverse the effects of convolution when we have knowledge of the degradation process, we perform deconvolution. For example, if we have knowledge of the point-spread function, in the module skimage.restoration we have implementations of the Wiener filter (wiener), the unsupervised Wiener filter (unsupervised_wiener), and Lucy-Richardson deconvolution (richardson_lucy). We perform deconvolution on blurred images by applying a Wiener filter. Note the huge improvement in readability of the text: In [55]: from skimage.restoration import wiener In [56]: deconv = wiener(blurred, kernel, balance=0.025, clip=False) Summary In this article, we have seen how the SciPy stack helps us solve many problems in mathematical imaging, from the effective representation of digital images, to modifying, restoring, or analyzing them. Although certainly exhaustive, this article scratches but the surface of this challenging field of engineering. Resources for Article: Further resources on this subject: Symbolizers [article] Machine Learning [article] Analyzing a Complex Dataset [article]

article-image-using-underscorejs-collections
Packt
01 Oct 2015
21 min read
Save for later

Using Underscore.js with Collections

In this article by Alex Pop, the author of the book Learning Underscore.js, we will explore Underscore functionality for collections using more in-depth examples. Some of the more advanced concepts related to Underscore functions, such as scope resolution and execution context, will be explained. The topics of the article are as follows: Key Underscore functions revisited Searching and filtering This article assumes that you are familiar with JavaScript fundamentals such as prototypical inheritance and the built-in data types. The source code for the examples from this article is hosted online at https://github.com/popalexandruvasile/underscorejs-examples/tree/master/collections, and you can execute the examples using the Cloud9 IDE at the address https://ide.c9.io/alexpop/underscorejs-examples from the collections folder. Key Underscore functions – each, map, and reduce Underscore's flexible approach means that some of its functions can operate over collections: an Underscore-specific term for arrays, array-like objects, and objects (where the collection represents the object's properties). We will refer to the elements within these collections as collection items. By providing functions that operate over object properties, Underscore expands JavaScript's reflection-like capabilities. Reflection is a programming feature for examining the structure of a computer program, especially during program execution. JavaScript is a dynamic language without static type system support (as of ES6). This makes it convenient to use a technique named duck typing when working with objects that share similar behaviors. Duck typing is a programming technique used in dynamic languages where objects are identified through their structure, represented by properties and methods, rather than their type (the name of duck typing is derived from the phrase "if it walks like a duck, swims like a duck, and quacks like a duck, then it is a duck"). Underscore itself uses duck typing to assert that an object is an array by checking for a property called length of type Number. Applying reflection techniques We will build an example that demonstrates duck typing and reflection techniques through a function that extracts object properties so that they can be persisted to a relational database. Usually, a relational database stores an object as a data row whose column types map to regular SQL data types. We will use the _.each() function to iterate over object properties and extract those of type boolean, number, string, and Date, as they can be easily mapped to SQL data types, and ignore everything else: var propertyExtractor = (function() { "use strict"; return { extractStorableProperties: function(source) { var storableProperties = {}; if (!source || source.id !== +source.id) { return storableProperties; } _.each(source, function(value, key) { var isDate = typeof value === 'object' && value instanceof Date; if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') { storableProperties[key] = value; } }); return storableProperties; } }; }()); You can find the example in the propertyExtractor.js file within the each-with-properties-and-context folder from the source code for this article. The first highlighted code snippet checks whether the object passed to the extractStorableProperties() function has a property called id that is a number. 
The + sign converts the id property to a number and the non-identity operator !== compares the result of this conversion with the unconverted original value. The non-identity operator returns true only if the type of the compared objects is different or they are of the same type and have different values. This was a duck typing technique used by Underscore up until version 1.7 to assert whether it deals with an array-like instance or an object instance in its collections related functions. Underscore collection related functions operate over array-like objects as they do not strictly check for the built in Array object. These functions can also work with the arguments objects or the HTML DOM NodeList objects. The last highlighted code snippet is the _.each() function that operates over object properties using an iteration function that receives the property value as its first argument and the property name as the optional second argument. If a property has a null or undefined value it will not appear in the returned object. The extractStorableProperties() function will return a new object with all the storable properties. The return value is used in the test specifications to assert that, given a sample object, the function behaves as expected: describe("Given propertyExtractor", function() { describe("when calling extractStorableProperties()", function() { var storableProperties; beforeEach(function() { var source = { id: 2, name: "Blue lamp", description: null, ui: undefined, price: 10, purchaseDate: new Date(2014, 10, 1), isInUse: true, }; storableProperties = propertyExtractor.extractStorableProperties(source); }); it("then the property count should be correct", function() { expect(Object.keys(storableProperties).length).toEqual(5); }); it("then the 'price' property should be correct", function() { expect(storableProperties.price).toEqual(10); }); it("then the 'description' property should not be defined", function() { expect(storableProperties.description).toEqual(undefined); }); }); }); Notice how we used the propertyExtractor global instance to access the function under test, and then, we used the ES5 function Object.keys to assert that the number of returned properties has the correct size. In a production ready application, we need to ensure that the global objects names do not clash among other best practices. You can find the test specification in the spec/propertyExtractorSpec.js file and execute them by browsing the SpecRunner.html file from the example source code folder. There is also an index.html file that will display the results of the example rendered in the browser using the index.js file. Manipulating the this variable Many Underscore functions have a similar signature with _.each(list, iteratee, [context]),where the optional context parameter will be used to set the this value for the iteratee function when it is called for each collection item. In JavaScript, the built in this variable will be different depending on the context where it is used. When the this variable is used in the global scope context, and in a browser environment, it will return the native window object instance. If this is used in a function scope, then the variable will have different values: If the function is an object method or an object constructor, then this will return the current object instance. 
Here is a short example code for this scenario: var item1 = { id: 1, name: "Item1", getInfo: function(){ return "Object: " + this.id + "-" + this.name; } }; console.log(item1.getInfo()); // -> “Object: 1-Item1” If the function does not belong to an object, then this will be undefined in the JavaScript strict mode. In the non-strict mode, this will return its global scope value. With a library such as Underscore that favors a functional style, we need to ensure that the functions used as parameters are using the this variable correctly. Let's assume that you have a function that references this (maybe it was used as an object method) and you want to use it with one of the Underscore functions such as _.each.. You can still use the function as is and provide the desired this value as the context parameter value when calling each. I have rewritten the previous example function to showcase the use of the context parameter: var propertyExtractor = (function() { "use strict"; return { extractStorablePropertiesWithThis: function(source) { var storableProperties = {}; if (!source || source.id !== +source.id) { return storableProperties; } _.each(source, function(value, key) { var isDate = typeof value === 'object' && value instanceof Date; if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') { this[key] = value; } }, storableProperties); return storableProperties; } }; }()); The first highlighted snippet shows the use of this, which is typical for an object method. The last highlighted snippet shows the context parameter value that this was set to. The storableProperties value will be passed as this for each iteratee function call. The test specifications for this example are identical with the previous example, and you can find them in the same folder each-with-properties-and-context from the source code for this article. You can use the optional context parameter in many of the Underscore functions where applicable and is a useful technique when working with functions that rely on a specific this value. Using map and reduce with object properties In the previous example, we had some user interface-specific code in the index.js file that was tasked with displaying the results of the propertyExtractor.extractStorableProperties() call in the browser. Let's pull this functionality in another example and imagine that we need a new function that, given an object, will transform its properties in a format suitable for displaying in a browser by returning an array of formatted text for each property. To achieve this, we will use the Underscore _.map() function over object properties as demonstrated in the next example: var propertyFormatter = (function() { "use strict"; return { extractPropertiesForDisplayAsArray: function(source) { if (!source || source.id !== +source.id) { return []; } return _.map(source, function(value, key) { var isDate = typeof value === 'object' && value instanceof Date; if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') { return "Property: " + key + " of type: " + typeof value + " has value: " + value; } return "Property: " + key + " cannot be displayed."; }); } }; }()); With Underscore, we can write compact and expressive code that manipulates these properties with little effort. 
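As a quick illustration, calling the new function could look like the following sketch; the sample product object is an assumption made for this example and is not part of the article's source code:

var product = { id: 3, name: "Red lamp", price: 25, isInUse: false, ui: undefined };
var lines = propertyFormatter.extractPropertiesForDisplayAsArray(product);
// Each element is a formatted string, for example:
// "Property: price of type: number has value: 25"
// "Property: ui cannot be displayed."
_.each(lines, function(line) { console.log(line); });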
The test specifications for the extractPropertiesForDisplayAsArray() function are using Jasmine regular expression matchers to assert the test conditions in the highlighted code snippets from the following example: describe("Given propertyFormatter", function() { describe("when calling extractPropertiesForDisplayAsArray()", function() { var propertiesForDisplayAsArray; beforeEach(function() { var source = { id: 2, name: "Blue lamp", description: null, ui: undefined, price: 10, purchaseDate: new Date(2014, 10, 1), isInUse: true, }; propertiesForDisplayAsArray = propertyFormatter.extractPropertiesForDisplayAsArray(source); }); it("then the returned property count should be correct", function() { expect(propertiesForDisplayAsArray.length).toEqual(7); }); it("then the 'price' property should be displayed", function() { expect(propertiesForDisplayAsArray[4]).toMatch("price.+10"); }); it("then the 'description' property should not be displayed", function() { expect(propertiesForDisplayAsArray[2]).toMatch("cannot be displayed"); }); }); }); The following example shows how _.reduce() is used to manipulate object properties. This will transform the properties of an object in a format suitable for browser display by returning a string value that contains all the properties in a convenient format: extractPropertiesForDisplayAsString: function(source) { if (!source || source.id !== +source.id) { return []; } return _.reduce(source, function(memo, value, key) { if (memo && memo !== "") { memo += "<br/>"; } var isDate = typeof value === 'object' && value instanceof Date; if (isDate || typeof value === 'boolean' || typeof value === 'number' || typeof value === 'string') { return memo + "Property: " + key + " of type: " + typeof value + " has value: " + value; } return memo + "Property: " + key + " cannot be displayed."; }, ""); } The example is almost identical with the previous one with the exception of the memo accumulator used to build the returned string value. The test specifications for the extractPropertiesForDisplayAsString() function are using a regular expression matcher and can be found in the spec/propertyFormatterSpec.js file: describe("when calling extractPropertiesForDisplayAsString()", function() { var propertiesForDisplayAsString; beforeEach(function() { var source = { id: 2, name: "Blue lamp", description: null, ui: undefined, price: 10, purchaseDate: new Date(2014, 10, 1), isInUse: true, }; propertiesForDisplayAsString = propertyFormatter.extractAllPropertiesForDisplay(source); }); it("then the returned string has expected length", function() { expect(propertiesForDisplayAsString.length).toBeGreaterThan(0); }); it("then the 'price' property should be displayed", function() { expect(propertiesForDisplayAsString).toMatch("<br/>Property: price of type: number has value: 10<br/>"); }); }); The examples from this subsection can be found within the map.reduce-with-properties folder from the source code for this article. Searching and filtering The _.find(list, predicate, [context]) function is part of the Underscore comprehensive functionality for searching and filtering collections represented by object properties and array like objects. We will make a distinction between search and filter functions with the former tasked with finding one item in a collection and the latter tasked with retrieving a subset of the collection (although sometimes, you will find the distinction between these functions thin and blurry). 
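To make that distinction concrete, here is a minimal sketch (the sample array is made up for illustration): _.find() returns the first matching item, while _.filter() returns an array with every matching item:

var numbers = [1, 4, 7, 10, 13];
var firstAboveFive = _.find(numbers, function(n) { return n > 5; });
// -> 7 (a single collection item)
var allAboveFive = _.filter(numbers, function(n) { return n > 5; });
// -> [7, 10, 13] (a subset of the collection)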
We will revisit the find function and the other search- and filtering-related functions using an example with slightly more diverse data that is suitable for database persistence. We will use the problem domain of a bicycle rental shop and build an array of bicycle objects with the following structure: var getBicycles = function() { return [{ id: 1, name: "A fast bike", type: "Road Bike", quantity: 10, rentPrice: 20, dateAdded: new Date(2015, 1, 2) }, { ... }, { id: 12, name: "A clown bike", type: "Children Bike", quantity: 2, rentPrice: 12, dateAdded: new Date(2014, 11, 1) }]; }; Each bicycle object has an id property, and we will use the propertyFormatter object built in the previous section to display the examples results in the browser for your convenience. The code was shortened here for brevity (you can find its full version alongside the other examples from this section within the searching and filtering folders from the source code for this article). All the examples are covered by tests and these are the recommended starting points if you want to explore them in detail. Searching For the first example of this section, we will define a bicycle-related requirement where we need to search for a bicycle of a specific type and with a rental price under a maximum value. Compared to the previous _.find() example, we will start with writing the tests specifications first for the functionality that is yet to be implemented. This is a test-driven development approach where we will define the acceptance criteria for the function under test first followed by the actual implementation. Writing the tests first forces us to think about what the code should do, rather than how it should do it, and this helps eliminate waste by writing only the code required to make the tests pass. Underscore find The test specifications for our initial requirement are as follows: describe("Given bicycleFinder", function() { describe("when calling findBicycle()", function() { var bicycle; beforeEach(function() { bicycle = bicycleFinder.findBicycle("Urban Bike", 16); }); it("then it should return an object", function() { expect(bicycle).toBeDefined(); }); it("then the 'type' property should be correct", function() { expect(bicycle.type).toEqual("Urban Bike"); }); it("then the 'rentPrice' property should be correct", function() { expect(bicycle.rentPrice).toEqual(15); }); }); }); The highlighted function call bicyleFinder.findBicycle() should return one bicycle object of the expected type and price as asserted by the tests. Here is the implementation that satisfies the test specifications: var bicycleFinder = (function() { "use strict"; var getBicycles = function() { return [{ id: 1, name: "A fast bike", type: "Road Bike", quantity: 10, rentPrice: 20, dateAdded: new Date(2015, 1, 2) }, { ... }, { id: 12, name: "A clown bike", type: "Children Bike", quantity: 2, rentPrice: 12, dateAdded: new Date(2014, 11, 1) }]; }; return { findBicycle: function(type, maxRentPrice) { var bicycles = getBicycles(); return _.find(bicycles, function(bicycle) { return bicycle.type === type && bicycle.rentPrice <= maxRentPrice; }); } }; }()); The code returns the first bicycle that satisfies the search criteria ignoring the rest of the bicycles that might meet the same criteria. You can browse the index.html file from the searching folder within the source code for this article to see the result of calling the bicyleFinder.findBicycle() function displayed on the browser via the propertyFormatter object. 
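One detail worth keeping in mind, sketched below with made-up arguments that are unlikely to match any bicycle in the sample data: _.find() returns undefined when no collection item satisfies the predicate, so callers of findBicycle() may want to guard for that case:

var bicycle = bicycleFinder.findBicycle("Tandem Bike", 5);
if (_.isUndefined(bicycle)) {
  console.log("No bicycle matches the requested type and maximum rent price.");
} else {
  console.log("Found: " + bicycle.name);
}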
Underscore some There is a closely related function to _.find() with the signature _.some(list, [predicate], [context]). This function will return true if at least one item of the list collection satisfies the predicate function. The predicate parameter is optional, and if it is not specified, the _.some() function will return true if at least one item of the collection is not null. This makes the function a good candidate for implementing guard clauses. A guard clause is a function that ensures that a variable (usually a parameter) satisfies a specific condition before it is being used any further. The next example shows how _.some() is used to perform checks that are typical for a guard clause: var list1 = []; var list2 = [null, , undefined, {}]; var object1 = {}; var object2 = { property1: null, property3: true }; if (!_.some(list1) && !_.some(object1)) { alert("Collections list1 and object1 are not valid when calling _.some() over them."); } if(_.some(list2) && _.some(object2)){ alert("Collections list2 and object2 have at least one valid item and they are valid when calling _.some() over them."); } If you execute this code in a browser, you will see both alerts being displayed. The first alert gets triggered when an empty array or an object without any properties defined are found. The second alert appears when we have an array with at least one element that is not null and is not undefined or when we have an object that has at least one property that evaluates as true. Going back to our bicycle data, we will define a new requirement to showcase the use of _.some() in this context. We will implement a function that will ensure that we can find at least one bicycle of a specific type and with a maximum rent price. The code is very similar to the bicycleFinder.findBicycle() implementation with the difference that the new function returns true if the specific bicycle is found (rather than the actual object): hasBicycle: function(type, maxRentPrice) { var bicycles = getBicycles(); return _.some(bicycles, function(bicycle) { return bicycle.type === type && bicycle.rentPrice <= maxRentPrice; }); } You can find the tests specifications for this function in the spec/bicycleFinderSpec.js file from the searching example folder. Underscore findWhere Another function similar to _.find() has the signature _.findWhere(list, properties). This compares the property key-value pairs of each collection item from list with the property key-value pairs found on the properties object parameter. Usually, the properties parameter is an object literal that contains a subset of the properties of a collection item. The _.findWhere() function is useful when we need to extract a collection item matching an exact value compared to _.find() that can extract a collection item that matches a range of values or more complex criteria. To showcase the function, we will implement a requirement that needs to search a bicycle that has a specific id value. 
This is how the test specifications look like: describe("when calling findBicycleById()", function() { var bicycle; beforeEach(function() { bicycle = bicycleFinder.findBicycleById(6); }); it("then it should return an object", function() { expect(bicycle).toBeDefined(); }); it("then the 'id' property should be correct", function() { expect(bicycle.id).toEqual(6); }); }); And the next code snippet from the bicycleFinder.js file contains the actual implementation: findBicycleById: function(id){ var bicycles = getBicycles(); return _.findWhere(bicycles, {id: id}); } Underscore contains In a similar vein, with the _.some() function, there is a _.contains(list, value) function that will return true if there is at least one item from the list collection that is equal to the value parameter. The equality check is based on the strict comparison operator === where the operands will be checked for both type and value equality. We will implement a function that checks whether a bicycle with a specific id value exists in our collection: hasBicycleWithId: function(id) { var bicycles = getBicycles(); var bicycleIds = _.pluck(bicycles,"id"); return _.contains(bicycleIds, id); } Notice how the _.pluck(list, propertyName) function was used to create an array that stores the id property value for each collection item. In its implementation, _.pluck() is actually using _.map(), acting like a shortcut function for it. Filtering As we mentioned at the beginning of this section, Underscore provides powerful filtering functions, which are usually tasked with working on a subsection of a collection. We will reuse the same example data as before, and we will build some new functions to explore this functionality. Underscore filter We will start by defining a new requirement for our data where we need to build a function that retrieves all bicycles of a specific type and with a maximum rent price. This is how the test specifications looks like for the yet to be implemented function bicycleFinder.filterBicycles(type, maxRentPrice): describe("when calling filterBicycles()", function() { var bicycles; beforeEach(function() { bicycles = bicycleFinder.filterBicycles("Urban Bike", 16); }); it("then it should return two objects", function() { expect(bicycles).toBeDefined(); expect(bicycles.length).toEqual(2); }); it("then the 'type' property should be correct", function() { expect(bicycles[0].type).toEqual("Urban Bike"); expect(bicycles[1].type).toEqual("Urban Bike"); }); it("then the 'rentPrice' property should be correct", function() { expect(bicycles[0].rentPrice).toEqual(15); expect(bicycles[1].rentPrice).toEqual(14); }); }); The test expectations are assuming the function under test filterBicycles() returns an array, and they are asserting against each element of this array. To implement the new function, we will use the _.filter(list, predicate, [context]) function that returns an array with all the items from the list collection that satisfy the predicate function. Here is our example implementation code: filterBicycles: function(type, maxRentPrice) { var bicycles = getBicycles(); return _.filter(bicycles, function(bicycle) { return bicycle.type === type && bicycle.rentPrice <= maxRentPrice; }); } The usage of the _.filter() function is very similar to the _.find() function with the only difference in the return type of these functions. You can find this example together with the rest of examples from this subsection within the filtering folder from the source code for this article. 
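Because _.filter() always returns an array (possibly an empty one), its result can be passed safely to other collection functions. The following sketch, which reuses the filterBicycles() function and the _.pluck() function introduced earlier, is an illustrative combination rather than code from the article's source:

var cheapUrbanBikes = bicycleFinder.filterBicycles("Urban Bike", 16);
var names = _.pluck(cheapUrbanBikes, "name");
// Based on the test expectations above, names holds the two matching bicycle names.
console.log(names.join(", "));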
Underscore where Underscore defines a shortcut function for _.filter() which is _.where(list, properties). This function is similar to the _.findWhere() function, and it uses the properties object parameter to compare and retrieve all the items from the list collection with matching properties. To showcase the function, we defined a new requirement for our example data where we need to retrieve all bicycles of a specific type. This is the code that implements the requirement: filterBicyclesByType: function(type) { var bicycles = getBicycles(); return _.where(bicycles, { type: type }); } By using _.where(), we are in fact using a more compact and expressive version of _.filter() in scenarios where we need to perform exact value matches. Underscore reject and partition Underscore provides a useful function which is the opposite for _.filter() and has a similar signature: _.reject(list, predicate, [context]). Calling the function will return an array of values from the list collection that do not satisfy the predicate function. To show its usage we will implement a function that retrieves all bicycles with a rental price less than or equal with a given value. Here is the function implementation: getAllBicyclesForSetRentPrice: function(setRentPrice) { var bicycles = getBicycles(); return _.reject(bicycles, function(bicycle) { return bicycle.rentPrice > setRentPrice; }); } Using the _.filter() function alongside the _.reject() function with the same list collection and predicate function will allow us to partition the collection in two arrays. One array holds items that do satisfy the predicate function while the other holds items that do not satisfy the predicate function. Underscore has a more convenient function that achieves the same result and this is _.partition(list, predicate). It returns an array that has two array elements: the first has the values that would be returned by calling _.filter() using the same input parameters and the second has the values for calling _.reject(). Underscore every We mentioned _.some() as being a great function for implementing guard clauses. It is also worth mentioning another closely related function _.every(list, [predicate], [context]). The function will check every item of the list collection and will return true if every item satisfies the predicate function or if list is null, undefined or empty. If the predicate function is not specified the value of each item will be evaluated instead. 
If we use the same data from the guard clause example for _.some() we will get the opposite results as shown in the next example: var list1 = []; var list2 = [null, , undefined, {}]; var object1 = {}; var object2 = { property1: null, property3: true }; if (_.every(list1) && _.every(object1)) { alert("Collections list1 and object1 are valid when calling _.every() over them."); } if (!_.every(list2) && !_.every(object2)) { alert("Collections list2 and object2 do not have all items valid so they are not valid when calling _.every() over them."); } To ensure a collection is not null, undefined, or empty and each item is also not null or undefined, we should use both _.some() and _.every() as part of the same check as shown in the next example: var list1 = [{}]; var object1 = { property1: {}}; if (_.every(list1) && _.every(object1) && _.some(list1) && _.some(object1)) { alert("Collections list1 and object1 are valid when calling both _.some() and _.every() over them."); } If the list1 object is an empty array or an empty object literal, calling _.every() for it returns true while calling _.some() returns false, hence the need to use both functions when validating a collection. These code examples demonstrate how you can build your own guard clauses or data validation rules by using simple Underscore functions. Summary In this article, we explored many of the collection-specific functions provided by Underscore and demonstrated additional functionality. We continued with searching and filtering functions. Resources for Article: Further resources on this subject: Packaged Elegance[article] Marshalling Data Services with Ext.Direct[article] Understanding and Developing Node Modules [article]
article-image-apps-different-platforms
Packt
01 Oct 2015
9 min read

Apps for Different Platforms

In this article by Hoc Phan, the author of the book Ionic Cookbook, we will cover tasks related to building and publishing apps, such as: Building and publishing an app for iOS Building and publishing an app for Android Using PhoneGap Build for cross–platform (For more resources related to this topic, see here.) Introduction In the past, it used to be very cumbersome to build and successfully publish an app. However, there are many documentations and unofficial instructions on the Internet today that can pretty much address any problem you may run into. In addition, Ionic also comes with its own CLI to assist in this process. This article will guide you through the app building and publishing steps at a high level. You will learn how to: Build iOS and Android app via Ionic CLI Publish iOS app using Xcode via iTunes Connect Build Windows Phone app using PhoneGap Build The purpose of this article is to provide ideas on what to look for and some "gotchas". Apple, Google, and Microsoft are constantly updating their platforms and processes so the steps may not look exactly the same over time. Building and publishing an app for iOS Publishing on App Store could be a frustrating process if you are not well prepared upfront. In this section, you will walk through the steps to properly configure everything in Apple Developer Center, iTunes Connect and local Xcode Project. Getting ready You must register for Apple Developer Program in order to access https://developer.apple.com and https://itunesconnect.apple.com because those websites will require an approved account. In addition, the instructions given next use the latest version of these components: Mac OS X Yosemite 10.10.4 Xcode 6.4 Ionic CLI 1.6.4 Cordova 5.1.1 How to do it Here are the instructions: Make sure you are in the app folder and build for the iOS platform. $ ionic build ios Go to the ios folder under platforms/ to open the .xcodeproj file in Xcode. Go through the General tab to make sure you have correct information for everything, especially Bundle Identifier and Version. Change and save as needed. Visit Apple Developer website and click on Certificates, Identifiers & Profiles. For iOS apps, you just have to go through the steps in the website to fill out necessary information. The important part you need to do correctly here is to go to Identifiers | App IDs because it must match your Bundle Identifier in Xcode. Visit iTunes Connect and click on the My Apps button. Select the Plus (+) icon to click on New iOS App. Fill out the form and make sure to select the right Bundle Identifier of your app. There are several additional steps to provide information about the app such as screenshots, icons, addresses, and so on. If you just want to test the app, you could just provide some place holder information initially and come back to edit later. That's it for preparing your Developer and iTunes Connect account. Now open Xcode and select iOS Device as the archive target. Otherwise, the archive feature will not turn on. You will need to archive your app before you can submit it to the App Store. Navigate to Product | Archive in the top menu. After the archive process completed, click on Submit to App Store to finish the publishing process. At first, the app could take an hour to appear in iTunes Connect. However, subsequent submission will go faster. You should look for the app in the Prerelease tab in iTunes Connect. iTunes Connect has very nice integration with TestFlight to test your app. You can switch on and off this feature. 
Note that for each publish, you have to change the version number in Xcode so that it won't conflict with existing version in iTunes Connect. For publishing, select Submit for Beta App Review. You may want to go through other tabs such as Pricing and In-App Purchases to configure your own requirements. How it works Obviously this section does not cover every bit of details in the publishing process. In general, you just need to make sure your app is tested thoroughly, locally in a physical device (either via USB or TestFlight) before submitting to the App Store. If for some reason the Archive feature doesn't build, you could manually go to your local Xcode folder to delete that specific temporary archived app to clear cache: ~/Library/Developer/Xcode/Archives See also TestFlight is a separate subject by itself. The benefit of TestFlight is that you don't need your app to be approved by Apple in order to install the app on a physical device for testing and development. You can find out more information about TestFlight here: https://developer.apple.com/library/prerelease/ios/documentation/LanguagesUtilities/Conceptual/iTunesConnect_Guide/Chapters/BetaTestingTheApp.html Building and publishing an app for Android Building and publishing an Android app is a little more straightforward than iOS because you just interface with the command line to build the .apk file and upload to Google Play's Developer Console. Ionic Framework documentation also has a great instruction page for this: http://ionicframework.com/docs/guide/publishing.html. Getting ready The requirement is to have your Google Developer account ready and login to https://play.google.com/apps/publish. Your local environment should also have the right SDK as well as keytool, jarsigner, and zipalign command line for that specific version. How to do it Here are the instructions: Go to your app folder and build for Android using this command: $ ionic build --release android You will see the android-release-unsigned.apk file in the apk folder under /platforms/android/build/outputs. Go to that folder in the Terminal. If this is the first time you create this app, you must have a keystore file. This file is used to identify your app for publishing. If you lose it, you cannot update your app later on. To create a keystore, type the following command in the command line and make sure it's the same keytool version of the SDK: $ keytool -genkey -v -keystore my-release-key.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000 Once you fill out the information in the command line, make a copy of this file somewhere safe because you will need it later. The next step is to use that file to sign your app so it will create a new .apk file that Google Play allow users to install: $ jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore my-release-key.keystore HelloWorld-release-unsigned.apk alias_name To prepare for final .apk before upload, you must package it using zipalign: $ zipalign -v 4 HelloWorld-release-unsigned.apk HelloWorld.apk Log in to Google Developer Console and click on Add new application. Fill out as much information as possible for your app using the left menu. Now you are ready to upload your .apk file. First is to perform a beta testing. Once you are completed with beta testing, you can follow Developer Console instructions to push the app to Production. How it works This section does not cover other Android marketplaces such as Amazon Appstore because each of them has different processes. 
However, the common idea is that you need to completely build the unsigned version of the apk folder, sign it using existing or new keystore file, and finally zipalign to prepare for upload. Using PhoneGap Build for cross-platform Adobe PhoneGap Build is a very useful product that provides build-as-a-service in the cloud. If you have trouble building the app locally in your computer, you could upload the entire Ionic project to PhoneGap Build and it will build the app for Apple, Android, and Windows Phone automatically. Getting ready Go to https://build.phonegap.com and register for a free account. You will be able to build one private app for free. For additional private apps, there is monthly fee associated with the account. How to do it Here are the instructions: Zip your entire /www folder and replace cordova.js to phonegap.js in index.html as described in http://docs.build.phonegap.com/en_US/introduction_getting_started.md.html#Getting%20Started%20with%20Build. You may have to edit config.xml to ensure all plugins are included. Detailed changes are at PhoneGap documentation: http://docs.build.phonegap.com/en_US/configuring_plugins.md.html#Plugins. Select Upload a .zip file under private tab. Upload the .zip file of the www folder. Make sure to upload appropriate key for each platform. For Windows Phone, upload publisher ID file. After that, you just build the app and download the completed build file for each platform. How it works In a nutshell, PhoneGap Build is a convenience way when you are only familiar with one platform during development process but you want your app to be built quickly for other platforms. Under the hood, PhoneGap Build has its own environment to automate the process for each user. However, the user still has to own the responsibility of providing key file for signing the app. PhoneGap Build just helps attach the key to your app. See also The common issue people usually face with when using PhoneGap Build is failure to build. You may want to refer to their documentation for troubleshooting: http://docs.build.phonegap.com/en_US/support_failed-builds.md.html#Failed%20Builds Summary This article provided you with general information about tasks related to building and publishing apps for Android, for iOS and for cross-platform using PhoneGap, wherein you came to know how to publish an app in various places such as App Store and Google Play. Resources for Article: Further resources on this subject: Our App and Tool Stack[article] Directives and Services of Ionic[article] AngularJS Project [article]

article-image-writing-3d-space-rail-shooter-threejs-part-2
Martin Naumann
01 Oct 2015
10 min read

Writing a 3D space rail shooter in Three.js, Part 2

In the course of this 3 part article series, you will learn how to write a simple 3D space shooter game with Three.js. The game will look like this: It will introduce the basic concepts of a Three.js application, how to write modular code and the core principles of a game, such as camera, player motion and collision detection. In Part 1 we set up our package and created the world of our game. In this Part 2, we will add the spaceship and the asteroids for our game. Adding the spaceship Now we'll need two additional modules for loading our spaceship 3D model file: objmtlloader.js and mtlloader.js - both put into the js folder. We can then load the spaceship by requiring the ObjMtlLoader with var ObjMtlLoader = require('./objmtlloader') and loading the model with var loader = new ObjMtlLoader(), player = null loader.load('models/spaceship.obj', 'models/spaceship.mtl', function(mesh) { mesh.scale.set(0.2, 0.2, 0.2) mesh.rotation.set(0, Math.PI, 0) mesh.position.set(0, -25, 0) player = mesh World.add(player) }) Looks about right! So let's see what's going on here. First of all we're calling ObjMtlLoader.load with the name of the model file (spaceship.obj) where the polygons are defined and the material file (spaceship.mtl) where colors, textures, transparency and so on are defined. The most important thing is the callback function that returns the loaded mesh. In the callback we're scaling the mesh to 20% of its original size, rotate it by 180° (Three.js uses euler angles instead of degrees) for it to face the vortex instead of our camera and then we finally positioned it a bit downwards, so we get a nice angle. Now with our spaceship in space, it's time to take off! Launch it! Now let's make things move! First things first, let's get a reference to the camera: var loader = new ObjMtlLoader(), player = null, cam = World.getCamera() and instead of adding the spaceship directly to the world, we will make it a child of the camera and add the camera to the world instead. We should also adjust the position of the spaceship a bit: player.position.set(0, -25, -100) cam.add(player) World.add(cam) Now in our render function we move the camera a bit more into the vortex on each frame, like this: function render() { cam.position.z -= 1; } This makes us fly into the vortex... Intermission - modularize your code Now that our code gains size and function is a good time to come up with a strategy to keep it understandable and clean. A little housekeeping, if you will. The strategy we're going for in this article is splitting the code up in modules. Every bit of the code that is related to a small, clearly limited piece of functionality will be put into a separate module and interacts with other modules by using the functions those other modules expose. Our first example of such a module is the player module: Everything that is related to our player, the spaceship and its motion should be captured into a player module. So we'll create the js/player.js file. 
var ObjMtlLoader = require('./objmtlloader') var spaceship = null var Player = function(parent) { var loader = new ObjMtlLoader(), self = this this.loaded = false loader.load('models/spaceship.obj', 'models/spaceship.mtl', function(mesh) { mesh.scale.set(0.2, 0.2, 0.2) mesh.rotation.set(0, Math.PI, 0) mesh.position.set(0, -25, 0) spaceship = mesh self.player = spaceship parent.add(self.player) self.loaded = true }) } module.exports = Player And we can also move out the tunnel code into a module called tunnel.js: var THREE = require('three') var Tunnel = function() { var mesh = new THREE.Mesh( new THREE.CylinderGeometry(100, 100, 5000, 24, 24, true), new THREE.MeshBasicMaterial({ map: THREE.ImageUtils.loadTexture('images/space.jpg', null, function(tex) { tex.wrapS = tex.wrapT = THREE.RepeatWrapping tex.repeat.set(5, 10) tex.needsUpdate = true }), side: THREE.BackSide }) ) mesh.rotation.x = -Math.PI/2 this.getMesh = function() { return mesh } return this; } module.exports = Tunnel Using these modules, our main.js now looks more tidy: var World = require('three-world'), THREE = require('three'), Tunnel = require('./tunnel'), Player = require('./player') function render() { cam.position.z -= 1; } World.init({ renderCallback: render, clearColor: 0x000022}) var cam = World.getCamera() var tunnel = new Tunnel() World.add(tunnel.getMesh()) var player = new Player(cam) World.add(cam) World.getScene().fog = new THREE.FogExp2(0x0000022, 0.00125) World.start() The advantage is that we can reuse these modules in other projects and the code in main.js is pretty straight forward. Now when you keep the browser tab with our spaceship in the vortex open long enough, you'll see that we're flying out of our vortex quite quickly. To infinity and beyond! Let's make our vortex infinite. To do so, we'll do a little trick: We'll use two tunnels, positioned after each other. Once one of them is behind the camera (i.e. no longer visible), we will move it to the end of the currently visible tunnel. It's gonna be a bit like laying the tracks while we're riding on them. Our code will need a little adjustment for this trick. We'll add a new update method to our Tunnel class and use a THREE.Object3D to hold both our tunnel parts. Our render loop will then call the new update method with the current camera z-coordinate to check, if a tunnel segment is invisible and can be moved to the end of the tunnel. In tunnel.js it will look like this: And the update method of the tunnel: this.update = function(z) { for(var i=0; i<2; i++) { if(z < meshes[i].position.z - 2500) { meshes[i].position.z -= 10000 break } } } This method may look a bit odd at first. It takes the z position from the camera and then checks both tunnel segments (meshes) if they are invisible. But what's that -2500 doing in there? Well, that's because Three.js uses coordinates in a particular way. The coordinates are in the center of the mesh, which means the tunnel reaches from meshes[i].position.z + 2500 to meshes[i].position.z - 2500. The code is accounting for that by making sure that the camera has gone past the farthest point of the tunnel segment, before moving it to a new position. It's being moved 10000 units into the screen, as its current position + 2500 is the beginning of the next tunnel segment. The next tunnel segment ends at the current position + 7500. Then we already know that our tunnel will start at its new position - 2500. So all in all, we'll move the segment by 10000 to make it seamlessly continue the tunnel. 
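The constructor changes that create the two segments are not shown above, so here is a rough sketch of what tunnel.js might look like after the change. The shared THREE.Object3D, the meshes array and the initial -5000 offset of the second segment are assumptions chosen to match the update() method; the texture repeat callback from the earlier version is omitted for brevity:

var THREE = require('three')

var Tunnel = function() {
  var group = new THREE.Object3D(), meshes = []

  for (var i = 0; i < 2; i++) {
    var mesh = new THREE.Mesh(
      new THREE.CylinderGeometry(100, 100, 5000, 24, 24, true),
      new THREE.MeshBasicMaterial({
        map: THREE.ImageUtils.loadTexture('images/space.jpg'),
        side: THREE.BackSide
      })
    )
    mesh.rotation.x = -Math.PI/2
    mesh.position.z = -5000 * i // the second segment starts where the first one ends
    meshes.push(mesh)
    group.add(mesh)
  }

  this.getMesh = function() { return group }

  // this.update(z) exactly as shown above goes here

  return this
}

module.exports = Tunnel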
Space is full of rocks Now that's all a bit boring - so we'll spice it up with Asteroids! Let's write our asteroids.js module: var THREE = require('three'), ObjLoader = require('./objloader') var loader = new ObjLoader() var rockMtl = new THREE.MeshLambertMaterial({ map: THREE.ImageUtils.loadTexture('models/lunarrock_s.png') }) var Asteroid = function(rockType) { var mesh = new THREE.Object3D(), self = this this.loaded = false // Speed of motion and rotation mesh.velocity = Math.random() * 2 + 2 mesh.vRotation = new THREE.Vector3(Math.random(), Math.random(), Math.random()) loader.load('models/rock' + rockType + '.obj', function(obj) { obj.traverse(function(child) { if(child instanceof THREE.Mesh) { child.material = rockMtl } }) obj.scale.set(10,10,10) mesh.add(obj) mesh.position.set(-50 + Math.random() * 100, -50 + Math.random() * 100, -1500 - Math.random() * 1500) self.loaded = true }) this.update = function(z) { mesh.position.z += mesh.velocity mesh.rotation.x += mesh.vRotation.x * 0.02; mesh.rotation.y += mesh.vRotation.y * 0.02; mesh.rotation.z += mesh.vRotation.z * 0.02; if(mesh.position.z > z) { mesh.velocity = Math.random() * 2 + 2 mesh.position.set( -50 + Math.random() * 100, -50 + Math.random() * 100, z - 1500 - Math.random() * 1500 ) } } this.getMesh = function() { return mesh } return this } module.exports = Asteroid This module is pretty similar to the player module but still a lot of things are going on, so let's go through it: var loader = new ObjLoader() var rockMtl = new THREE.MeshLambertMaterial({ map: THREE.ImageUtils.loadTexture('models/lunarrock_s.png') }) We're creating a material with a rocky texture, so our rocks look nice. This material will be shared by all the asteroid models later. var mesh = new THREE.Object3D() // Speed of motion and rotation mesh.velocity = Math.random() * 2 + 2 mesh.vRotation = new THREE.Vector3(Math.random(), Math.random(), Math.random()) In this part of the code we're creating a THREE.Object3D to later contain the 3D model from the OBJ file and give it two custom properties: velocity - how fast the asteroid should move towards the camera vRotation - how fast the asteroid rotates around each of its axes This gives our asteroids a bit more variety as some are moving faster than others, just as they would in space. obj.traverse(function(child) { if(child instanceof THREE.Mesh) { child.material = rockMtl } }) obj.scale.set(10,10,10) We're iterating through all the children of the loaded OBJ to make sure they're using the material we've defined at the beginnin of our module, then we scale the object (and all its children) to be nicely sized in relation to our spaceship. On to the update method: this.update = function(z) { mesh.position.z += mesh.velocity mesh.rotation.x += mesh.vRotation.x * 0.02; mesh.rotation.y += mesh.vRotation.y * 0.02; mesh.rotation.z += mesh.vRotation.z * 0.02; if(mesh.position.z > z) { mesh.velocity = Math.random() * 2 + 2 mesh.position.set(-50 + Math.random() * 100, -50 + Math.random() * 100, z - 1500 - Math.random() * 1500) } } This method, just like the tunnel update method is called in our render loop and given the position.z coordinate of the camera. Its responsibility is to move and rotate the asteroid and reposition it whenever it flew past the camera. 
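Before wiring the module into the game, a quick standalone usage sketch (not part of the article's code) shows how the class is meant to be driven; it assumes the World and cam objects from the earlier main.js:

var Asteroid = require('./asteroid')

var rock = new Asteroid(3)      // use rock3.obj, one of the six rock models
World.add(rock.getMesh())

// inside the render callback, once per frame:
rock.update(cam.position.z)     // move, spin and recycle the asteroid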
With this module, we can extend the code in main.js: var World = require('three-world'), THREE = require('three'), Tunnel = require('./tunnel'), Player = require('./player'), Asteroid = require('./asteroid') var NUM_ASTEROIDS = 10 function render() { cam.position.z -= 1 tunnel.update(cam.position.z) for(var i=0;i<NUM_ASTEROIDS;i++) asteroids[i].update(cam.position.z) } and a bit further down in the code: var asteroids = [] for(var i=0;i<NUM_ASTEROIDS; i++) { asteroids.push(new Asteroid(Math.floor(Math.random() * 6) + 1)) World.add(asteroids[i].getMesh()) } So we're creating 10 asteroids, randomly picking from the 6 available types (Math.random() returns something that is smaller than 1, so flooring will result in a maximum of 5). Now we've got the asteroids coming at our ship - but they go straight through... we need a way to fight them and we need them to be an actual danger to us! In the final Part 3, we will set the collision detection, add weapons to our craft and add a way to score and health management as well. About the author Martin Naumann is an open source contributor and web evangelist by heart from Zurich with a decade of experience from the trenches of software engineering in multiple fields. He works as a software engineer at Archilogic in front and backend. He devotes his time to moving the web forward, fixing problems, building applications and systems and breaking things for fun & profit. Martin believes in the web platform and is working with bleeding edge technologies that will allow the web to prosper.

article-image-writing-3d-space-rail-shooter-threejs-part-1
Martin Naumann
01 Oct 2015
7 min read

Writing a 3D space rail shooter in Three.js, Part 1

In the course of this 3 part article series, you will learn how to write a simple 3D space shooter game with Three.js. The game will look like this: It will introduce the basic concepts of a Three.js application, how to write modular code and the core principles of a game, such as camera, player motion and collision detection. But now it's time to get ready! Prerequisites I will introduce you to a small set of tools that have proven to work reliably and help build things in a quick and robust fashion. The tools we will use are: npm - the node.js package manager browserify / watchify for using npm modules in the browser Three.js - a wrapper around WebGL and other 3D rendering APIs three-world - a highlevel wrapper around Three.js to reduce boilerplate. uglify-js - a tool to minify and strip down Javascript code To set up our project, create a directory and bring up a terminal inside this directory, e.g. mkdir space-blaster cd space-blaster Initializing our package As we'll use npm, it's a good idea to setup our project as an npm package. This way we'll get dependency management and some handy meta information - and it's easy: npm init Answer the questions and you're ready. Getting the dependencies Next up is getting the dependencies installed. This can be done conveniently by having npm install the dependencies and save them to our package.json as well. This way others can quickly grab our source code along with the package.json and just run npm install to get all required dependencies installed. So we now do npm install --save three three-world npm install --save-dev browserify watchify uglify-js The first line installs the two dependencies that our application needs to run, the second installs dependencies that we'll use throughout development. Creating the scaffold Now we'll need an HTML file to host our game and the main entry point Javascript file for our game. Let's start with the index.html: <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>Space Blaster</title> <style> html, body { padding: 0; margin: 0; height: 100%; height: 100vh; overflow: hidden; } </style> </head> <body> <script src="app.js"></script> <noscript>Yikes, you'll need Javascript to play the game :/</noscript> </body> </html> That is enough to get us a nice and level field to put our game on. Note that the app.js will be the result of running watchify while we're developing and browserify when making a release build, respectively. Now we just create a js folder where our Javascript files will be placed in and we're nearly done with our setup, just one more thing: Build scripts In order to not have to clutter our system with globally-installed npm packages for our development tools, we can use npm's superpowers by extending the package.json a bit. Replace the scripts entry in your package.json with: "scripts": { "build": "browserify js/main.js | uglifyjs -mc > app.js", "dev": "watchify js/main.js -v -o app.js" }, and you'll have two handy new commands available: build runs browserify and minifies the output into app.js, starting from js/main.js which will be our game code's entry point dev runs watchify which will look for changes in our Javascript code and update app.js, but without minifying, which decreases the cycle time but produces a larger output file. 
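After npm init and the two install commands above, the resulting package.json might look roughly like the following; the version numbers are placeholders and will depend on what npm resolves at install time:

{
  "name": "space-blaster",
  "version": "1.0.0",
  "scripts": {
    "build": "browserify js/main.js | uglifyjs -mc > app.js",
    "dev": "watchify js/main.js -v -o app.js"
  },
  "dependencies": {
    "three": "^0.71.0",
    "three-world": "^1.0.0"
  },
  "devDependencies": {
    "browserify": "^11.0.0",
    "uglify-js": "^2.4.0",
    "watchify": "^3.3.0"
  }
}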
To kick off development, we now create an empty js/main.js file and run npm run dev which should output something like this: > space-blaster@1.0.0 dev /code/space-blast > watchify js/main.js -v -o app.js 498 bytes written to app.js (0.04 seconds) With this running, open the js/main.js file in your favourite editor - it's time to start coding! Example repository and assets In the course of this article series, we will be using resources such as images and 3D models that are available for free on the internet. The models come from cgtrader, the space background from mysitemyway. All sources can be found at the Github repository for this article. Light, Camera, Action! Let's start creating - first of all we need a world and a function that is updating this world on each frame: var World = require('three-world') function render() { // here we'll update the world on each frame } World.init({ renderCallback: render }) // Things will be put into the world here World.start() Behind the curtain, this code creates a camera, a scene (think of this as our stage) and an ambient light. With World.start() the rendering loop is started and the world starts to "run". But so far our world has been empty and, thus, just a black void. Let's start to fill this void - we'll start with making a vortex in which we'll fly through space. Into the vortex To add things to our stage, we put more code in between World.init and World.start. var tunnel = new THREE.Mesh( new THREE.CylinderGeometry(100, 100, 5000, 24, 24, true), new THREE.MeshBasicMaterial({ map: THREE.ImageUtils.loadTexture('images/space.jpg'), side: THREE.BackSide }) ) tunnel.rotation.x = -Math.PI/2 World.add(tunnel) With this code we're creating a new mesh with a cylindrical geometry that is open at the ends and our space background, lay it on the side and add it to the world. The THREE.BackSide means that the texture shall be drawn on the inside of the cylinder, which is what we need as we'll be inside the cylinder. Now our world looks like this: There's a few things that we should tweak at this point. First of all, there's an ugly black hole at the end - let's fix that. First of all, we'll change the color that the World uses to clear out parts that are too far for the camera to see by adjusting the parameters to World.init: World.init({ renderCallback: render, clearColor: 0x000022}) With this we'll get a very dark blue at the end of our vortex. But it still looks odd as it is very sharp on the edges. But fortunately we can fix that by adding fog to our scene with a single line: World.getScene().fog = newTHREE.FogExp2(0x0000022, 0.00125) Note that the fog has the same color as the clearColor of the World and is added to the Scene. And voilà: It looks much nicer! Now let's do something about this stretched look of the space texture on the vortex by repeating it a few times to make it look better: var tunnel = new THREE.Mesh( new THREE.CylinderGeometry(100, 100, 5000, 24, 24, true), new THREE.MeshBasicMaterial({ map: THREE.ImageUtils.loadTexture('images/space.jpg', null, function(tex) { tex.wrapS = tex.wrapT = THREE.RepeatWrapping tex.repeat.set(5, 10) tex.needsUpdate = true }), side: THREE.BackSide }) ) Alright, that looks good. In Part 2 of this series we will add the spaceship and add asteroids to our game. About the author Martin Naumann is an open source contributor and web evangelist by heart from Zurich with a decade of experience from the trenches of software engineering in multiple fields. 
He works as a software engineer at Archilogic in front and backend. He devotes his time to moving the web forward, fixing problems, building applications and systems and breaking things for fun & profit. Martin believes in the web platform and is working with bleeding edge technologies that will allow the web to prosper.
article-image-storing-and-generating-graphs-stored-data
Packt
01 Oct 2015
20 min read

Storing and Generating Graphs of the Stored Data

In this article by Matthijs Kooijman, the author of the book Building Wireless Sensor Networks Using Arduino, we will explore some of the ways to persistently store the collected sensor data and visualize the data using convenient graphs. First, you will see how to connect your coordinator to the Internet and send its data to the Beebotte cloud platform. You will learn how to create a custom dashboard in this platform that can show you the collected data in a convenient graph format. Second, you will see how you can collect and visualize the data on your own computer instead of sending it to the Internet directly. (For more resources related to this topic, see here.) For the first part, you will need some shield to connect your coordinator Arduino to the Internet in addition to the hardware that has been recommended for the coordinator in the previous chapter. This article provides examples for the following two shields: The Arduino Ethernet shield (https://www.arduino.cc/en/Main/ArduinoEthernetShield) The Adafruit CC3000 Wi-Fi shield (https://www.adafruit.com/products/1491) If possible, the Ethernet shield is recommended as its library is small, and it is easier to keep a reliable connection using a wired connection. Additionally, the CC3000 shield conflicts with the SparkFun XBee shield and this would require some modification on the latter's part to make them work together. For the second part, no additional hardware is needed. There are other Ethernet or Wi-Fi shields that you can use. Storing your data in the cloud When it comes to storing your data online somewhere, there are literally dozens of online platforms that offer some kind of data storage service aimed at collecting sensor data. Each of these has different features, complexity, and cost, and you are encouraged to have a look at what is available. Even though a lot of platforms are available, almost none of them are really suited for a hobby sensor network like the one presented in this article. Most platforms support the basic collection of data and offer some web API to access the data but there were two requirements that ruled out most of the platforms: It has to be affordable for a home user with just a bit of data. Ideally, there is a free version to get started. It has to support creating a dashboard that can show data and graphs and also show input elements that can be used to talk back to the network. (This will be used in the next chapter to create an online thermostat). When this article was written, only two platforms seemed completely suitable: Beebotte (https://beebotte.com/) and Adafruit IO (https://io.adafruit.com/). The examples in this article use Beebotte because at the time of writing, Adafruit IO was not publicly available yet and Beebotte currently has some additional features. However, you are encouraged to check out Adafruit IO as an alternative. As both the platforms use the MQTT protocol (explained in the next section), you should be able to reuse the example code with just minimal changes for Adafruit IO. Introducing Beebotte Beebotte, like most of these services, can be seen as a big online database that stores any data you send to it and allows retrieving any data you are interested in. Additionally, you can easily create dashboards that allow you to look at your data and even interact with it through various configurable widgets. 
By the end of this chapter, you might have a dashboard that looks like this: Before showing you how to talk to Beebotte from your Arduino, some important concepts in the Beebotte system will be introduced: channels, resources, security tokens, and access protocols. The examples in this article serve to get started with Beebotte, but will certainly not cover all features and details. Be sure to check out the extensive documentation on the Beebotte site at https://beebotte.com/overview. Channels and resources All data collected by Beebotte is organized into resources, each representing a single series of data. All data stored in a resource signifies the same thing, such as the temperature in your living room or on/off status of the air conditioner but at different points in time. This kind of data is also often referred to as time-series data. To keep your data organized, Beebotte supports the concept of channels. Essentially, a channel is just a group of resources that somehow belong together. Typically, a channel represents a single device or data source but you are free to group your resources in any way that you see fit. In this example, every sensor module in the network will get its own channel, each containing a resource to store the temperature data and a resource to store the humidity data. Security To be able to access the data stored in your Beebotte account or publish new data, every connection needs to be authenticated. This happens using a secret token or key (similar to a password) that you configure in your Arduino code and proves to the Beebotte server that you are allowed to access the data. There are two kinds of secrets currently supported by Beebotte: Your account secret key: This is a single key for your account, which allows access to all resources and all channels in your account. It additionally allows the creation and deletion of the channels and resources. Channel tokens: Each channel has an associated channel token, which allows reading and writing data from that channel only. Additionally, the channel token can be regenerated if the token is ever compromised. This example uses the account secret key to authenticate the connection. It would be better to use a more limited channel token (to limit the consequences if the token is leaked) but in this example, as the coordinator forwards data for multiple sensor nodes (each of which have their own channel), a channel token does not provide enough access. As an alternative, you could consider using a single channel containing all the resources (named, for example, "Livingroom_Temperature" to still allow grouping) so that you can use the slightly more secure channel tokens. In the future, Beebotte might also support more flexible limited tokens that support writing to more than one channel. The examples in this article use an unencrypted connection, so make sure at least your Wi-Fi connection is encrypted (using WPA or WPA2). If you are working with particularly sensitive information, be sure to consider using SSL/TLS for the connection. Due to limited microcontroller speeds, running SSL/TLS directly on the microcontroller does not seem feasible, so this would need external cryptographic hardware or support on the Wi-Fi / Ethernet shield that is being used. 
At the time of writing, there does not seem to be any shield that directly supports this but it seems that at least the ESP2866-based shields and the Arduino Yún could be made to support this and the upcoming Arduino Wi-Fi shield 101 might support it as well (but this is out of the scope for this article). Access protocols To store new data and access the existing data over the Internet, a few different access methods are supported by Beebotte: HTTP / REST: The hypertext transfer protocol (HTTP) is the protocol that powers the web. Originally, it was used to let a web browser request a web page from a server, but now HTTP is also commonly used to let all kinds of devices send and request arbitrary data (instead of webpages) to and from servers as well. In this case, the server is commonly said to export an HTTP or REST (Relational state transfer) API.HTTP APIs are convenient, since HTTP is a very widespread protocol and HTTP libraries are available for most programming languages and environments. WebSockets: A downside of HTTP is that it is not very convenient for sending events from the server to the client. A server can only send data to the client after the client sends a request, which means the client must poll for new events continuously.To overcome this, the WebSockets standard was created. WebSockets is a protocol on top of HTTP that keeps a connection (socket) open indefinitely, ready for the server to send new data whenever it wants to, instead of having to wait for the client to request new data. MQTT: The message queuing telemetry transport protocol (MQTT) is a so-called publish/subscribe (pubsub) protocol. The idea is that multiple devices can connect to a central server and each can publish a message to a given topic and also subscribe to any number of topics. Whenever a message is published to a topic, it is automatically forwarded to all the devices that have subscribed to the same topic.MQTT, like WebSockets, keeps a connection open continuously so that both the client and server can send data at any time, making this protocol especially suitable for real-time data and events. MQTT can not be used to access historical data, though. A lot of alternative platforms only support the HTTP access protocol, which works fine to push and access the data and would be suitable for the examples in this chapter but is less suitable to control your network from the Internet, as used in the next chapter. To prepare for this, the examples in this chapter will already use the MQTT protocol, which supports both the use cases efficiently. Sending your data to Beebotte Now that you have learned about some important Beebotte concepts, you are ready to send your collected sensor data to Beebotte. First, you will prepare Beebotte and your Arduino to connect to each other. Then, you will write a sketch for your coordinator to send the data. Finally, you will see how to access and visualize the stored data. Preparing Beebotte Before you can start sending data to Beebotte, you will have to prepare the proper channels and resources to store the data. This example uses two channels: "Livingroom" and "Study", referring to the rooms where the sensors have been placed. You should, of course, use names that reflect your setup and adapt things if you have more or fewer sensors. The first step is to register an account on beebotte.com. 
Once you have done this, you can access your Control Panel, which will initially show you an empty list of channels: You can create a new channel by clicking the Create New channel button. In the resulting window, you can fill in a name and description for the channel and define the resources. This should look something like this: After creating a channel for every sensor node that you have built, you have prepared Beebotte to receive all the sensor data. The next step is to modify the coordinator sketch to actually send the data. Connecting your Arduino to the Internet In order to let your coordinator send its data to Beebotte, it must be connected to the Internet somehow. There are plenty of shields out there that add wired Ethernet or wireless Wi-Fi connectivity to your Arduino. Wi-Fi shields are typically a lot more expensive than the Ethernet ones but the recently introduced ESP2866 cheap Wi-Fi chipset will likely change that. (However, at the time of writing, no ready-to-use Arduino shield was available yet.) As the code to connect your Arduino to the Internet will be significantly different for each shield, every part of the code will not be discussed in this article. Instead, we will focus on the code that connects to the MQTT server and publishes the data, assuming that the Internet connection is already set up. In the code bundle for this article, two complete examples are available for use with the Arduino Ethernet shield and Adafruit CC3000 shield. These examples should be usable as a starting point to use the other hardware as well. Some things to keep in mind when selecting your hardware have been listed, as follows: Check carefully for conflicting pins. For example, the Adafruit CC3000 Wi-Fi shield uses pins 2 and 3[t1]  to communicate with the shield but you might be using these pins to talk to the XBee module as well (particularly, when using the SparkFun XBee shield). The libraries for the various Wi-Fi shields take up a lot of code space on the Arduino. For example, using the SparkFun or Adafruit CC3000 library along with the Adafruit MQTT library fills up most of the available code space on an Arduino Uno. The Ethernet library is a bit (but not much) smaller. It is not always easy to keep a reliable Wi-Fi connection. In theory, this is a matter of simply reconnecting when the Wi-Fi connection is failing but in practice it can be tricky to implement this completely reliably. Again, this is easier with wired Ethernet as it does not have the same disconnection issues as Wi-Fi. If you use a different hardware than recommended (including the recently announced Arduino Ethernet shield 2), you will likely need a different Arduino library and need to make changes to the example code provided. Writing the sketch To send the data to Beebotte, the Coordinator.ino sketch from the previous chapter needs to be modified. As noted before, only the MQTT code will be shown here. However, the code to establish the Internet connection is included in the full examples in the code bundle (in the Coordinator_Ethernet.ino and Coordinator_CC3000.ino sketches). This example uses the "Adafruit MQTT library" for the MQTT protocol, which can be installed through the Arduino library manager or can be found here: https://github.com/adafruit/Adafruit_MQTT_Library. Depending on the Internet shield, you might need more libraries as well (see the comments in the example sketches). Do not forget to add the appropriate includes for the libraries that you are using. 
For the MQTT library, this is: #include <Adafruit_MQTT.h> #include <Adafruit_MQTT_Client.h> To set up the MQTT library, you first need to define some settings: const char MQTT_SERVER[] PROGMEM = "mqtt.beebotte.com"; const char MQTT_CLIENTID[] PROGMEM = ""; const char MQTT_USERNAME[] PROGMEM = "your_key_here"; const char MQTT_PASSWORD[] PROGMEM = ""; const int MQTT_PORT = 1883; This defines the connection settings to use for the MQTT connection. These are appropriate for Beebotte; if you are using some other platform, check its documentation for the appropriate settings. Note that here the username must be set to your account's secret key, for example: const char MQTT_USERNAME[] PROGMEM = "840f626930a07c87aa315e27b22448468844edcad03fe34f551ac747533f544f"; If you are using a channel token, it must be set to the token prefixed with "token:", such as: const char MQTT_USERNAME[] PROGMEM = "token:1438006626319_UNJoxdmKBoMFIPt7"; The password is unused and can be empty, just like the client identifier. The port is just the default MQTT port number, 1883. Note that all the string variables are marked with the PROGMEM keyword. This tells the compiler to store the string variables in program memory, just like the F() macro you have seen before does (which also uses the PROGMEM keyword underwater). However, the F() macro can only be used in the functions, which is why these variables use the PROGMEM keyword directly. This also means that the extra checking offered by the F() macro is not available, so be careful not to mix up the normal and PROGMEM strings here as this will not result in any compiler error and instead things will be broken when you run the code. Using the configuration constants defined earlier, you can now define the main mqtt object: Adafruit_MQTT_Client mqtt(&client, MQTT_SERVER, MQTT_PORT, MQTT_CLIENTID, MQTT_USERNAME, MQTT_PASSWORD); There are a few different flavors of this object. For example, there are flavors optimized for the specific hardware and corresponding libraries, and there is also a generic flavor that works with any hardware whose library exposes a generic Client object (like most libraries currently do). The latter flavor, Adafruit_MQTT_Client, is used in this example and should be fine. The &client [t2] part of this line refers to a previously created Client object (not shown here as it depends on the Internet shield used), which is used by the MQTT library to set up the MQTT connection. To actually connect to the MQTT server, a function called connect() will be defined. This function is called to connect once at startup and to reconnect every time when the publishing of some data fails. In the CC3300 version, this function associates to the WiFi access point and then sets up the connection to the MQTT server. On the Ethernet version, where the network is always available after initial initialization, the connect() function only sets up the MQTT connection. The latter version is shown as follows: void connect() { client.stop(); // Ensure any old connection is closed uint8_t ret = mqtt.connect(); if (ret == 0) DebugSerial.println(F("MQTT connected")); else DebugSerial.println(mqtt.connectErrorString(ret)); } This calls mqtt.connect() to connect to the MQTT server and writes a debug message to report the success or failure. Note that mqtt.connect() returns a number as an error code (with 0 meaning OK), which is translated to a human readable error message using the mqtt.connectErrorString() function. 
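The full Coordinator_Ethernet.ino and Coordinator_CC3000.ino examples call connect() once during startup, after the network hardware has been initialized. A stripped-down sketch of that part might look like this; the MAC address is a placeholder, and the XBee setup, the DebugSerial definition and connect() itself are assumed to be present elsewhere in the sketch as shown in this article:

#include <SPI.h>
#include <Ethernet.h>

void setup() {
  DebugSerial.begin(115200);
  // ... XBee serial setup as in the previous chapter ...

  // Bring up the network first (the CC3000 version associates with the
  // Wi-Fi access point here instead).
  byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED }; // placeholder MAC address
  Ethernet.begin(mac);

  // Then establish the initial MQTT connection.
  connect();
}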
Now, to actually publish a single value, there is the publish() function: void publish(const __FlashStringHelper *resource, float value) { // Use JSON to wrap the data, so Beebotte will remember the data // (instead of just publishing it to whoever is currently listening). String data; data += "{"data": "; data += value; data += ", "write": true}"; DebugSerial.print(F("Publishing ")); DebugSerial.print(data); DebugSerial.print(F( " to ")); DebugSerial.println(resource); // Publish data and try to reconnect when publishing data fails if (!mqtt.publish(resource, data.c_str())) { DebugSerial.println(F("Failed to publish, trying reconnect...")); connect(); if (!mqtt.publish(resource, data.c_str())) DebugSerial.println(F("Still failed to publish data")); } } This function takes two parameters: the name of the resource to publish to and the value to publish. Note the type of the resource parameter: __FlashStringHelper* is similar to the more common char* string type but indicates that the string is stored in the program memory instead of RAM. This is also the type returned by the F() macro that you have seen before. Just like the MQTT server configuration values that used the PROGMEM keyword before, the MQTT library also expects the MQTT topic names to be stored in the program memory. The actual value is sent using the JSON (JavaScript object notation) format. For example, for a temperature of 20 degrees, it constructs {"data": 20.00, "write": true}. This format allows, in addition to transmitting the value, indicating that Beebotte should store the value so that it can be retrieved later. If the write value is false, or not present, Beebotte will only forward the value to any other devices currently subscribed to the appropriate topic without saving it for later. This example uses some quick and dirty string concatenation to generate the JSON. If you want something more robust and elegant, have a look at the ArduinoJson library at https://github.com/bblanchon/ArduinoJson. If publishing the data fails, it is likely that the WiFi or MQTT connection has failed and so it attempts to reconnect and publish the data once more. As before, there is a processRxPacket() function that gets called when a radio packet is received through the XBee module: void processRxPacket(ZBRxResponse& rx, uintptr_t) { Buffer b(rx.getData(), rx.getDataLength()); uint8_t type = b.remove<uint8_t>(); XBeeAddress64 addr = rx.getRemoteAddress64(); if (addr == 0x0013A20040DADEE0 && type == 1 && b.len() == 8) { publish(F("Livingroom/Temperature"), b.remove<float>()); publish(F("Livingroom/Humidity"), b.remove<float>()); return; } if (addr == 0x0013A20040E2C832 && type == 1 && b.len() == 8) { publish(F("Study/Temperature"), b.remove<float>()); publish(F("Study/Humidity"), b.remove<float>()); return; } DebugSerial.println(F("Unknown or invalid packet")); printResponse(rx, DebugSerial); } Instead of simply printing the packet contents as before, it figures out who the sender of the packet is and which Beebotte resource corresponds to it and calls the publish() function that was defined earlier. As you can see, the Beebotte resources are identified using "Channel/Resource", resulting in a unique identifier for each resource (which is later used in the MQTT message as the topic identifier). Note that the F() macro is used for the resource names to store them in program memory as this is what publish() and the MQTT library expect. 
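For reference, the same payload could be built with the ArduinoJson library mentioned above instead of manual string concatenation. This sketch follows the ArduinoJson 5 API that was current at the time of writing (newer versions use a different interface), and the buffer it fills can then be handed to mqtt.publish() just like the concatenated string:

#include <ArduinoJson.h>

// Build the {"data": <value>, "write": true} payload into the caller's buffer.
void buildPayload(float value, char *buffer, size_t size) {
  StaticJsonBuffer<JSON_OBJECT_SIZE(2)> jsonBuffer;
  JsonObject& root = jsonBuffer.createObject();
  root["data"] = value;
  root["write"] = true;
  root.printTo(buffer, size);
}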
If you run the resulting sketch and everything connects correctly, the coordinator will forward any sensor values that it receives to the Beebotte server. If you wait for (at most) five minutes to pass (or reset the sensor Arduino to have it send a reading right away) and then go to the appropriate channel in your Beebotte control panel, it should look something like this: Visualizing your data To easily allow the visualizing of your data, Beebotte supports dashboards. A dashboard is essentially a web page where you can add graphs, gauges, tables, buttons, and so on (collectively called widgets). These widgets can then display or control the data in one or more previously defined resources. To create such a dashboard, head over to the My Dashboards section of your control panel and click Create Dashboard to start building one. Once you set a name for the dashboard, you can start adding widgets to it. To display the temperature and humidity for all the sensors that you are using, you could use the Multi-line Chart widget. As the temperature and humidity values will be fairly far apart, it makes sense to put them each in a separate chart. Adding the temperature chart could look like this: If you also add a chart for the humidity, it should look like this: Here, only the living room sensor has been powered on, so no data is shown for the study yet. Also, to make the graphs a bit more interesting, some warm and humid air was breathed onto the sensor, causing the big spike in both the charts. There are plenty of other widget types that will prove useful to you. The Beebotte documentation provides information on the supported types, but you are encouraged to just play with the widgets a bit to see what they can add to your project. In the next chapter, you will see how to use the control widgets, which allow sending events and data back to the coordinator to control it. Accessing your data You have been accessing your data through a Beebotte dashboard so far. However, when these dashboards are not powerful enough for your needs or you want to access your data from an existing application, you can also access the recent and historical data through the Beebotte's HTTP or WebSocket API's. This gives you full control over the processing and display of the data without being limited in any way by what Beebotte offers in its dashboards. As creating a custom (web) application is out of the scope of this article, the HTTP and WebSocket API will not be discussed in detail. Instead, you should be able to find extensive documentation on this API on the Beebotte site at https://beebotte.com/overview. Summary In this article we learnt some of the ways to store the collected sensor data and visualize the data using convenient graphs. We also learnt as to how to save our data on the Cloud. Resources for Article: Further resources on this subject: TV Set Constant Volume Controller[article] Internet Connected Smart Water Meter[article] Getting Started with Arduino [article]

article-image-deploying-your-own-server
Packt
30 Sep 2015
16 min read

Deploying on your own server

In this article by Jack Stouffer, the author of the book Mastering Flask, you will learn how to deploy and host your application on the different options available, and the advantages and disadvantages related to them. The most common way to deploy any web app is to run it on a server that you have control over. Control in this case means access to the terminal on the server with an administrator account. This type of deployment gives you the most amount of freedom out of the other choices as it allows you to install any program or tool you wish. This is in contrast to other hosting solutions where the web server and database are chosen for you. This type of deployment also happens to be the least expensive option. The downside to this freedom is that you take the responsibility of keeping the server up, backing up user data, keeping the software on the server up to date to avoid security issues, and so on. Entire books have been written on good server management, so if this is not a responsibility that you believe you or your company can handle, it would be best if you choose one of the other deployment options. This section will be based on a Debian Linux-based server, as Linux is far and away the most popular OS for running web servers, and Debian is the most popular Linux distro (a particular combination of software and the Linux kernel released as a package). Any OS with Bash and a program called SSH (which will be introduced in the next section) will work for this article, the only differences will be the command-line programs to install software on the server. (For more resources related to this topic, see here.) Each of these web servers will use a protocol named Web Server Gateway Interface (WSGI), which is a standard designed to allow Python web applications to easily communicate with web servers. We will never directly work with WSGI. However, most of the web server interfaces we will be using will have WSGI in their name, and it can be confusing if you don't know what the name is. Pushing code to your server with fabric To automate the process of setting up and pushing our application code to the server, we will use a Python tool called fabric. Fabric is a command-line program that reads and executes Python scripts on remote servers using a tool called SSH. SSH is a protocol that allows a user of one computer to remotely log in to another computer and execute commands on the command line, provided that the user has an account on the remote machine. To install fabric, we will use pip: $ pip install fabric Fabric commands are collections of command-line programs to be run on the remote machine's shell, in this case, Bash. We are going to make three different commands: one to run our unit tests, one to set up a brand new server to our specifications, and one to have the server update its copy of the application code with git. We will store these commands in a new file at the root of our project directory called fabfile.py. As it's the easiest to create, let's make the test command first: from fabric.api import local def test(): local('python -m unittest discover') To run this function from the command line, we can use fabric's command-line interface by passing the name of the command to run: $ fab test [localhost] local: python -m unittest discover ..... --------------------------------------------------------------------- Ran 5 tests in 6.028s OK Fabric has three main commands: local, run, and sudo. 
Pushing code to your server with fabric

To automate the process of setting up and pushing our application code to the server, we will use a Python tool called fabric. Fabric is a command-line program that reads and executes Python scripts on remote servers using a tool called SSH. SSH is a protocol that allows a user of one computer to remotely log in to another computer and execute commands on the command line, provided that the user has an account on the remote machine.

To install fabric, we will use pip:

$ pip install fabric

Fabric commands are collections of command-line programs to be run in the remote machine's shell, in this case Bash. We are going to make three different commands: one to run our unit tests, one to set up a brand new server to our specifications, and one to have the server update its copy of the application code with git. We will store these commands in a new file at the root of our project directory called fabfile.py.

As it's the easiest to create, let's make the test command first:

from fabric.api import local

def test():
    local('python -m unittest discover')

To run this function from the command line, we can use fabric's command-line interface by passing the name of the command to run:

$ fab test
[localhost] local: python -m unittest discover
.....
---------------------------------------------------------------------
Ran 5 tests in 6.028s

OK

Fabric has three main commands: local, run, and sudo. The local function, as seen in the preceding function, runs commands on the local computer. The run and sudo functions run commands on a remote machine, but sudo runs its commands as an administrator. All of these functions notify fabric whether the command ran successfully or not. If a command didn't run successfully, meaning that our tests failed in this case, any other commands in the function will not be run. This is useful for our commands because it forces us not to push any code to the server that does not pass our tests. (A sketch of how to override this behavior follows at the end of this section.)

Now we need to create the command to set up a new server from scratch. This command will install the software our production environment needs and download the code from our centralized git repository. It will also create a new user that will act as the runner of the web server and the owner of the code repository.

Do not run your web server or deploy your code as the root user. This opens your application to a whole host of security vulnerabilities.

This command will differ based on your operating system, and we will be adding to it in the rest of the article based on the server you choose:

from fabric.api import env, local, run, sudo, cd

env.hosts = ['deploy@[your IP]']

def upgrade_libs():
    sudo("apt-get update")
    sudo("apt-get upgrade")

def setup():
    test()
    upgrade_libs()

    # necessary to install many Python libraries
    sudo("apt-get install -y build-essential")
    sudo("apt-get install -y git")
    sudo("apt-get install -y python")
    sudo("apt-get install -y python-pip")
    # necessary to install many Python libraries
    sudo("apt-get install -y python-all-dev")

    run("useradd -d /home/deploy/ deploy")
    run("gpasswd -a deploy sudo")

    # allows Python packages to be installed by the deploy user
    sudo("chown -R deploy /usr/local/")
    sudo("chown -R deploy /usr/lib/python2.7/")

    run("git config --global credential.helper store")
    with cd("/home/deploy/"):
        run("git clone [your repo URL]")

    with cd('/home/deploy/webapp'):
        run("pip install -r requirements.txt")
        run("python manage.py createdb")

There are two new fabric features in this script. One is the env.hosts assignment, which tells fabric the user and IP address of the machine it should log in to. Second, there is the cd function used in conjunction with the with keyword, which executes any commands in the context of that directory instead of the home directory of the deploy user. The line that modifies the git configuration tells git to remember your repository's username and password, so you do not have to enter them every time you push code to the server. Also, before the server is set up, we make sure to update the server's software to keep it up to date.

Finally, we have the function to push our new code to the server. In time, this command will also restart the web server and reload any configuration files that come from our code. But this depends on the server you choose, so it is filled out in the subsequent sections:

def deploy():
    test()
    upgrade_libs()
    with cd('/home/deploy/webapp'):
        run("git pull")
        run("pip install -r requirements.txt")

So, if we were to begin working on a new server, all we would need to do to set it up is run the following commands:

$ fab setup
$ fab deploy
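As promised above, here is a small sketch of how to override fabric's stop-on-failure behavior. For deployments you normally want a failing command to abort the task, but for an optional step, such as a cleanup that may find nothing to delete, Fabric 1.x provides the settings context manager with warn_only. This is only an illustrative pattern; the deployment commands in this article do not need it:

from fabric.api import local, settings

def clean_pyc():
    # with warn_only=True, a non-zero exit status prints a warning instead of aborting the task
    with settings(warn_only=True):
        local("find . -name '*.pyc' -delete")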
Running your web server with supervisor

Now that we have automated our updating process, we need some program on the server to make sure that our web server, and the database if you aren't using SQLite, is running. To do this, we will use a simple program called supervisor. All supervisor does is automatically run command-line programs in background processes and let you see the status of the running programs. Supervisor also monitors all of the processes it's running, and if a process dies, it tries to restart it.

To install supervisor, we need to add it to the setup command in our fabfile.py:

def setup():
    …
    sudo("apt-get install -y supervisor")

To tell supervisor what to do, we need to create a configuration file and then copy it to the /etc/supervisor/conf.d/ directory of our server during the deploy fabric command. Supervisor will load all of the files in this directory when it starts and attempt to run them. In a new file in the root of our project directory named supervisor.conf, add the following:

[program:webapp]
command=
directory=/home/deploy/webapp
user=deploy

[program:rabbitmq]
command=rabbitmq-server
user=deploy

[program:celery]
command=celery worker -A celery_runner
directory=/home/deploy/webapp
user=deploy

This is the bare minimum configuration needed to get a web server up and running, but supervisor has many more configuration options. To view all of the customizations, see the supervisor documentation at http://supervisord.org/.

This configuration tells supervisor to run a command in the context of /home/deploy/webapp under the deploy user. The right-hand side of the command value is empty because it depends on which server you are running and will be filled in for each section.

Now we need to add a sudo call in the deploy command to copy this configuration file to the /etc/supervisor/conf.d/ directory:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp supervisor.conf /etc/supervisor/conf.d/webapp.conf")
        sudo('service supervisor restart')

A lot of projects just create these files on the server and forget about them, but keeping the configuration file in our git repository and copying it on every deployment gives several advantages. First, it is easy to revert changes with git if something goes wrong. Second, we don't have to log in to the server in order to change the files.
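Once supervisor is managing these processes, you can check on them from the server's shell with the supervisorctl utility that ships with supervisor. The program names used here are the [program:x] section names from the configuration above; this is just a useful aside, not a step the deploy command performs:

$ sudo supervisorctl status          # list each configured program and its current state
$ sudo supervisorctl restart webapp  # restart a single program by name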
Don't use the Flask development server in production. Not only does it fail to handle concurrent connections, but it also allows arbitrary Python code to be run on your server.

Gevent

The simplest option to get a web server up and running is to use a Python library called gevent to host your application. Gevent is a Python library that adds an alternative way of doing concurrent programming outside of the Python threading library, called coroutines. Gevent has an interface for running WSGI applications that is both simple and performs well. A simple gevent server can easily handle hundreds of concurrent users, which is more than 99 percent of websites on the Internet will ever have. The downside to this option is that its simplicity means a lack of configuration options. There is no way, for example, to add rate limiting to the server or to serve HTTPS traffic. This deployment option is purely for sites that you don't expect to receive a huge amount of traffic. Remember YAGNI (short for You Aren't Gonna Need It); only upgrade to a different web server if you really need to.

Coroutines are a bit outside the scope of this book; a good explanation can be found at https://en.wikipedia.org/wiki/Coroutine.

To install gevent, we will use pip:

$ pip install gevent

In a new file in the root of the project directory named gserver.py, add the following:

from gevent.wsgi import WSGIServer
from webapp import create_app

app = create_app('webapp.config.ProdConfig')
server = WSGIServer(('', 80), app)
server.serve_forever()

To run the server with supervisor, just change the command value to the following:

[program:webapp]
command=python gserver.py
directory=/home/deploy/webapp
user=deploy

Now when you deploy, gevent will be installed automatically by running your requirements.txt on every deployment, that is, if you are properly pip freeze–ing after every new dependency is added (a short example of that workflow follows below).
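The note above assumes a workflow in which requirements.txt is regenerated and committed whenever a new dependency is added. Something like the following, which is the usual convention rather than anything specific to this project, keeps the server's dependencies in sync with development:

$ pip install gevent
$ pip freeze > requirements.txt   # assumes a virtualenv, so only project dependencies are captured
$ git add requirements.txt
$ git commit -m "Add gevent to requirements"
$ git push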
Tornado

Tornado is another very simple way to deploy WSGI apps purely with Python. Tornado is a web server that is designed to handle thousands of simultaneous connections. If your application needs real-time data, Tornado also supports websockets for continuous, long-lived connections to the server.

Do not use Tornado in production on a Windows server. The Windows version of Tornado is not only much slower, but it is considered beta-quality software.

To use Tornado with our application, we will use Tornado's WSGIContainer to wrap the application object and make it Tornado-compatible. Then, Tornado will listen on port 80 for requests until the process is terminated. In a new file named tserver.py, add the following:

from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from webapp import create_app

app = WSGIContainer(create_app("webapp.config.ProdConfig"))
http_server = HTTPServer(app)
http_server.listen(80)
IOLoop.instance().start()

To run Tornado with supervisor, just change the command value to the following:

[program:webapp]
command=python tserver.py
directory=/home/deploy/webapp
user=deploy

Nginx and uWSGI

If you need more performance or customization, the most popular way to deploy a Python web application is to use the web server Nginx as a frontend for the WSGI server uWSGI by using a reverse proxy. A reverse proxy is a program that retrieves content from another server on behalf of a client and returns it as if it came from the proxy itself.

Nginx and uWSGI are used together because we get the power of the Nginx frontend while having the customization of uWSGI.

Nginx is a very powerful web server that became popular by providing the best combination of speed and customization. Nginx is consistently faster than other web servers, such as Apache httpd, and has native support for WSGI applications. It achieves this speed through several good architecture decisions, as well as the early decision not to try to cover a large number of use cases the way Apache does. Having a smaller feature set makes it much easier to maintain and optimize the code. From a programmer's perspective, it is also much easier to configure Nginx, as there is no giant default configuration file (httpd.conf) that needs to be overridden with .htaccess files in each of your project directories. One downside is that Nginx has a much smaller community than Apache, so if you have an obscure problem, you are less likely to find answers online. Also, it's possible that a feature most programmers are used to in Apache isn't supported in Nginx.

uWSGI is a web server that supports several different types of server interfaces, including WSGI. uWSGI handles serving the application content as well as things such as load balancing traffic across several different processes and threads.

To install uWSGI, we will use pip in the following way:

$ pip install uwsgi

In order to run our application, uWSGI needs a file with an accessible WSGI application. In a new file named wsgi.py in the top level of the project directory, add the following:

from webapp import create_app

app = create_app("webapp.config.ProdConfig")

To test uWSGI, we can run it from the command line with the following:

$ uwsgi --socket 127.0.0.1:8080 --wsgi-file wsgi.py --callable app --processes 4 --threads 2

If you are running this on your server, you should be able to access port 8080 and see your app (provided you don't have a firewall blocking it). What this command does is load the app object from the wsgi.py file and make it accessible from localhost on port 8080. It also spawns four different processes with two threads each, which are automatically load balanced by a master process. This number of processes is overkill for the vast, vast majority of websites. To start off, use a single process with two threads and scale up from there.

Instead of adding all of the configuration options on the command line, we can create a text file to hold our configuration, which brings the same benefits that were listed in the section on supervisor. In a new file in the root of the project directory named uwsgi.ini, add the following:

[uwsgi]
socket = 127.0.0.1:8080
wsgi-file = wsgi.py
callable = app
processes = 4
threads = 2

uWSGI supports hundreds of configuration options as well as several official and unofficial plugins. To leverage the full power of uWSGI, you can explore the documentation at http://uwsgi-docs.readthedocs.org/.

Let's run the server now from supervisor:

[program:webapp]
command=uwsgi uwsgi.ini
directory=/home/deploy/webapp
user=deploy

We also need to install Nginx during the setup function:

def setup():
    …
    sudo("apt-get install -y nginx")

Because we are installing Nginx from the OS's package manager, the OS will handle running Nginx for us.

At the time of writing, the Nginx version in the official Debian package manager is several years old. To install the most recent version, follow the instructions at http://wiki.nginx.org/Install.

Next, we need to create an Nginx configuration file and then copy it to the /etc/nginx/sites-available/ directory when we push the code. In a new file in the root of the project directory named nginx.conf, add the following:

server {
    listen 80;
    server_name your_domain_name;

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:8080;
    }

    location /static {
        alias /home/deploy/webapp/webapp/static;
    }
}

What this configuration file does is tell Nginx to listen for incoming requests on port 80 and forward them all to the WSGI application listening on port 8080. Also, it makes an exception for requests for static files and sends those requests directly to the file system. Bypassing uWSGI for static files gives a great performance boost, as Nginx is really good at serving static files quickly.

Finally, in the fabfile.py file:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp nginx.conf "
             "/etc/nginx/sites-available/[your_domain]")
        sudo("ln -sf /etc/nginx/sites-available/[your_domain] "
             "/etc/nginx/sites-enabled/[your_domain]")
        sudo("service nginx restart")
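Because a broken Nginx configuration will take the site down when the service restarts, it can be worth validating the configuration before the restart step. The nginx -t command checks the syntax of the installed configuration; adding it to the deploy command is an optional extra step rather than part of the recipe above:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("nginx -t")                  # validate the configuration; fails loudly on a syntax error
        sudo("service nginx restart")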
Apache and uWSGI

Using Apache httpd with uWSGI has mostly the same setup. First off, we need an Apache configuration file in a new file in the root of our project directory named apache.conf:

<VirtualHost *:80>
    <Location />
        ProxyPass / uwsgi://127.0.0.1:8080/
    </Location>
</VirtualHost>

This file just tells Apache to pass all requests on port 80 to the uWSGI web server listening on port 8080. However, this functionality requires an extra Apache plugin from uWSGI called mod-proxy-uwsgi. We can install this, as well as Apache, in the setup command:

def setup():
    …
    sudo("apt-get install -y apache2")
    sudo("apt-get install -y libapache2-mod-proxy-uwsgi")

Finally, in the deploy command, we need to copy our Apache configuration file into Apache's configuration directory:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp apache.conf "
             "/etc/apache2/sites-available/[your_domain]")
        sudo("ln -sf /etc/apache2/sites-available/[your_domain] "
             "/etc/apache2/sites-enabled/[your_domain]")
        sudo("service apache2 restart")

Summary

In this article you learned that there are many different options for hosting your application, each with its own pros and cons. Deciding on one depends on the amount of time and money you are willing to spend, as well as the total number of users you expect.