
How-To Tutorials - Mobile

213 Articles

Intro to the Swift REPL and Playgrounds

Dov Frankel
11 Sep 2016
6 min read
When Apple introduced Swift at WWDC (its annual Worldwide Developers Conference) in 2014, it had a few goals for the new language. Among them was being easy to learn, especially compared to other compiled languages. The following is quoted from Apple's The Swift Programming Language:

Swift is friendly to new programmers. It is the first industrial-quality systems programming language that is as expressive and enjoyable as a scripting language.

The REPL

Swift's playgrounds embody that friendliness by modernizing the concept of a REPL (Read-Eval-Print Loop, pronounced "repple"). Introduced by the LISP language, and now a common feature of many modern scripting languages, REPLs allow you to quickly and interactively build up your code, one line at a time. A post on the Swift Blog gives an overview of the Swift REPL's features, but this is what using it looks like (to launch it, enter swift in Terminal, if you have Xcode already installed):

Welcome to Apple Swift version 2.2 (swiftlang-703.0.18.1 clang-703.0.29). Type :help for assistance.
  1> "Hello world"
$R0: String = "Hello world"
  2> let a = 1
a: Int = 1
  3> let b = 2
b: Int = 2
  4> a + b
$R1: Int = 3
  5> func aPlusB() {
  6.     print("\(a + b)")
  7. }
  8> aPlusB()
3

If you look at what's there, each line containing input you give has a small indentation. A line that starts a new command begins with the line number, followed by > (1, 2, 3, 4, 5, and 8), and each subsequent line for a given command begins with the line number, followed by . (6 and 7). These help to keep you oriented as you enter your code one line at a time. While it's certainly possible to work on more complicated code this way, it requires the programmer to keep more state about the code in their head, and it limits the output to data types that can be represented in text only.

Playgrounds

Playgrounds take the concept of a REPL to the next level by allowing you to see all of your code in a single editable code window, and giving you richer options to visualize your data. To get started with a Swift playground, launch Xcode, and select File > New > Playground… (⌥⇧⌘N) to create a new playground. In the following, you can see a new playground with the same code entered into the previous REPL.

The results, shown in the gray area on the right-hand side of the window, update as you type, which allows for rapid iteration. You can write standalone functions, classes, or whatever level of abstraction you wish to work in for the task at hand, removing barriers that prevent the expression of your ideas, or experimentation with the language and APIs. So, what types of goals can you accomplish?

Experimentation with the language and APIs

Playgrounds are an excellent place to learn about Swift, whether you're new to the language, or new to programming. You don't need to worry about main(), app bundles, simulators, or any of the other things that go into making a full-blown app. Or, if you hear about a new framework and would like to try it out, you can import it into your playground and get your hands dirty with minimal effort. Crucially, it also blows away the typical code-build-run-quit-repeat cycle that can often take up so much development time.

Providing living documentation or code samples

Playgrounds provide a rich experience that allows users to try out concepts they're learning, whether they're reading a technical article, or learning to use a framework.
Aside from interactivity, playgrounds provide a whole other type of richness: built-in formatting via Markdown, which you can sprinkle into your playground as easily as writing comments. This allows some interesting options, such as describing exercises for students to complete or providing sample code that runs without any action required of the user. Swift blog posts have included playgrounds to experiment with, as does the Swift Programming Language's A Swift Tour chapter. To author Markdown in your playground, start a comment block with /*:. Then, to see the comment rendered, click on Editor > Show Rendered Markup. There are some unique features available, such as marking page boundaries and adding fields that populate the built-in help Xcode shows. You can learn more at Apple's Markup Formatting Reference page.

Designing code or UI

You can also use playgrounds to interactively visualize how your code functions. There are a few ways to see what your code is doing. Individual results are shown in the gray side panel and can be Quick-Looked (including your own custom objects that implement debugQuickLookObject()). Individual or looped values can be pinned to show inline with your code. A line inside a loop will read "(10 times)," with a little circle you can toggle to pin it. For instance, you can show how a value changes with each iteration, or how a view looks. Using some custom APIs provided in the XCPlayground module, you can show live UIViews and captured values. Just import XCPlayground and set XCPlayground.currentPage.liveView to a UIView instance, or call XCPlayground.currentPage.captureValue(someValue, withIdentifier: "My label") to fill the Assistant view. You also still have Xcode's console available to you for when debugging is best served by printing values and keeping scrolled to the bottom. As with any Swift code, you can write to the console with NSLog and print.

Working with resources

Playgrounds can also include resources for your code to use, such as images. A .playground file is an OS X package (a folder presented to the user as a file), which contains two subfolders: Sources and Resources. To view these folders in Xcode, show the Project Navigator in Xcode's left sidebar, the same as for a regular project. You can then drag any resources into the Resources folder, and they'll be exposed to your playground code the same way resources are in an app bundle. You can refer to them like so:

let spriteImage = UIImage(named:"sprite.png")

Xcode versions starting with 7.1 even support image literals, meaning you can drag a Resources image into your source code and treat it as a UIImage instance. It's a neat idea, but it makes for some strange-looking code. It's more useful for UIColors, which allow you to use a color picker. The Swift blog post goes into more detail on how image, color, and file literals work in playgrounds.

Wrap-up

Hopefully this has opened your eyes to the opportunities afforded by playgrounds. They can be useful in different ways to developers of various skill and experience levels (in Swift, or in general) when learning new things or working on solving a given problem. They allow you to iterate more quickly, and to visualize how your code operates, more easily than other debugging methods.

About the author

Dov Frankel (pronounced as in "he dove swiftly into the shallow pool") works on Windows in-house software at a Connecticut hedge fund by day, and independent Mac and iOS apps by night.
His digital comic reader is available on the Mac App Store, and his iPhone app Afterglo is in the iOS App Store. He blogs when the mood strikes him at dovfrankel.com; he's @DovFrankel on Twitter, and @abbeycode on GitHub.


Building a Gallery Application

Packt
19 Aug 2016
17 min read
In this article, Michael Williams, author of the book Xamarin Blueprints, will walk you through native development with Xamarin by building an iOS and Android application that will read your local gallery files and display them in a UITableView and a ListView.

Create an iOS project

Let's begin our Xamarin journey; we will start by setting up our iOS project in Xamarin Studio. Start by opening Xamarin Studio and creating a new iOS project. To do so, we simply select File | New | Solution and select an iOS Single View App; we must also give it a name and add in the bundle ID you want in order to run your application. It is recommended that for each project, a new bundle ID be created, along with a developer provisioning profile for each project.

Now that we have created the iOS project, you will be taken to the following screen. Doesn't this look familiar? Yes, it is our AppDelegate file; notice the .cs on the end. Because we are using C-sharp (C#), all our code files will have this extension (no more .h or .m files). Before we go any further, spend a few minutes moving around the IDE, expand the folders, and explore the project structure; it is very similar to an iOS project created in Xcode.

Create a UIViewController and UITableView

Now that we have our new iOS project, we are going to start by creating a UIViewController. Right-click on the project file, select Add | New File, and select ViewController from the iOS menu selection in the left-hand box. You will notice three files generated: a .xib, a .cs, and a .designer.cs file. We don't need to worry about the third file; this is automatically generated based upon the other two files.

Right-click on the project item and select Reveal in Finder. This will bring up Finder, where you will double-click on the GalleryCell.xib file; this will bring up the user-interface designer in Xcode. You should see automated text inserted into the document to help you get started.

Firstly, we must set our namespace accordingly, and import our libraries with using statements. In order to use the iOS user interface elements, we must import the UIKit and CoreGraphics libraries. Our class will inherit the UIViewController class, in which we will override the ViewDidLoad function:

namespace Gallery.iOS
{
    using System;
    using System.Collections.Generic;

    using CoreGraphics;
    using UIKit;

    public partial class MainController : UIViewController
    {
        private UITableView _tableView;

        private TableSource _source;

        private ImageHandler _imageHandler;

        public MainController () : base ("MainController", null)
        {
            _source = new TableSource ();

            _imageHandler = new ImageHandler ();
            _imageHandler.AssetsLoaded += handleAssetsLoaded;
        }

        private void handleAssetsLoaded (object sender, EventArgs e)
        {
            _source.UpdateGalleryItems (_imageHandler.CreateGalleryItems());
            _tableView.ReloadData ();
        }

        public override void ViewDidLoad ()
        {
            base.ViewDidLoad ();

            var width = View.Bounds.Width;
            var height = View.Bounds.Height;

            _tableView = new UITableView(new CGRect(0, 0, width, height));
            _tableView.AutoresizingMask = UIViewAutoresizing.All;
            _tableView.Source = _source;

            Add (_tableView);
        }
    }
}

Our first UI element created is a UITableView.
This will be inserted into the UIView of the UIViewController, and we also retrieve the width and height values of the UIView to stretch the UITableView to fit the entire bounds of the UIViewController. We must also call Add to insert the UITableView into the UIView.

In order to have the list filled with data, we need to create a UITableSource to contain the list of items to be displayed in the list. We will also need an object called GalleryModel; this will be the model of the data to be displayed in each cell. Follow the previous process for adding in two new .cs files: one will be used to create our UITableSource class and the other for the GalleryModel class. In TableSource.cs, first we must import the Foundation library with the using statement:

using Foundation;

Now for the rest of our class. Remember, we have to override specific functions for our UITableSource to describe its behavior. It must also include a list for containing the item view-models that will be used for the data displayed in each cell:

public class TableSource : UITableViewSource
{
    protected List<GalleryItem> galleryItems;
    protected string cellIdentifier = "GalleryCell";

    public TableSource (string[] items)
    {
        galleryItems = new List<GalleryItem> ();
    }
}

We must override the NumberOfSections function; in our case, it will always be one because we are not using list sections:

public override nint NumberOfSections (UITableView tableView)
{
    return 1;
}

To determine the number of list items, we return the count of the list:

public override nint RowsInSection (UITableView tableview, nint section)
{
    return galleryItems.Count;
}

Then we must add the GetCell function; this will be used to get the UITableViewCell to render for a particular row. But before we do this, we need to create a custom UITableViewCell.

Customizing a cell's appearance

We are now going to design the cells that will appear for every model found in the TableSource class. Add in a new .cs file for our custom UITableViewCell. We are not going to use a .xib; we will simply build the user interface directly in code using a single .cs file. Now for the implementation:

public class GalleryCell : UITableViewCell
{
    private UIImageView _imageView;

    private UILabel _titleLabel;

    private UILabel _dateLabel;

    public GalleryCell (string cellId) : base (UITableViewCellStyle.Default, cellId)
    {
        SelectionStyle = UITableViewCellSelectionStyle.Gray;

        _imageView = new UIImageView()
        {
            TranslatesAutoresizingMaskIntoConstraints = false,
        };

        _titleLabel = new UILabel ()
        {
            TranslatesAutoresizingMaskIntoConstraints = false,
        };

        _dateLabel = new UILabel ()
        {
            TranslatesAutoresizingMaskIntoConstraints = false,
        };

        ContentView.Add (_imageView);
        ContentView.Add (_titleLabel);
        ContentView.Add (_dateLabel);
    }
}

Our constructor must call the base constructor, as we need to initialize each cell with a cell style and cell identifier. We then add in a UIImageView and two UILabels for each cell, one for the file name and one for the date. Finally, we add all three elements to the main content view of the cell.
When we have our initializer, we add the following:

public void UpdateCell (GalleryItem gallery)
{
    _imageView.Image = UIImage.LoadFromData (NSData.FromArray (gallery.ImageData));
    _titleLabel.Text = gallery.Title;
    _dateLabel.Text = gallery.Date;
}

public override void LayoutSubviews ()
{
    base.LayoutSubviews ();

    ContentView.TranslatesAutoresizingMaskIntoConstraints = false;

    // set layout constraints for main view
    AddConstraints (NSLayoutConstraint.FromVisualFormat("V:|[imageView(100)]|", NSLayoutFormatOptions.DirectionLeftToRight, null, new NSDictionary("imageView", _imageView)));
    AddConstraints (NSLayoutConstraint.FromVisualFormat("V:|[titleLabel]|", NSLayoutFormatOptions.DirectionLeftToRight, null, new NSDictionary("titleLabel", _titleLabel)));
    AddConstraints (NSLayoutConstraint.FromVisualFormat("H:|-10-[imageView(100)]-10-[titleLabel]-10-|", NSLayoutFormatOptions.AlignAllTop, null, new NSDictionary ("imageView", _imageView, "titleLabel", _titleLabel)));
    AddConstraints (NSLayoutConstraint.FromVisualFormat("H:|-10-[imageView(100)]-10-[dateLabel]-10-|", NSLayoutFormatOptions.AlignAllTop, null, new NSDictionary ("imageView", _imageView, "dateLabel", _dateLabel)));
}

Our first function, UpdateCell, simply adds the model data to the view, and our second function overrides the LayoutSubviews method of the UITableViewCell class (equivalent to the ViewDidLoad function of a UIViewController).

Now that we have our cell design, let's create the properties required for the view model. We only want to store data in our GalleryItem model, meaning we want to store images as byte arrays. Let's create the properties for the item model:

namespace Gallery.iOS
{
    using System;

    public class GalleryItem
    {
        public byte[] ImageData;

        public string ImageUri;

        public string Title;

        public string Date;

        public GalleryItem ()
        {
        }
    }
}

Now back to our TableSource class. The next step is to implement the GetCell function:

public override UITableViewCell GetCell (UITableView tableView, NSIndexPath indexPath)
{
    var cell = (GalleryCell)tableView.DequeueReusableCell (cellIdentifier);
    var galleryItem = galleryItems[indexPath.Row];

    if (cell == null)
    {
        // we create a new cell if this row has not been created yet
        cell = new GalleryCell (cellIdentifier);
    }

    cell.UpdateCell (galleryItem);

    return cell;
}

Notice the cell reuse in the if statement; you should be familiar with this type of approach. It is a common pattern for reusing cell views and is the same as the Objective-C implementation (this is a very basic cell reuse implementation). We also call the UpdateCell method to pass in the required GalleryItem data to show in the cell. Let's also set a constant height for all cells. Add the following to your TableSource class:

public override nfloat GetHeightForRow (UITableView tableView, NSIndexPath indexPath)
{
    return 100;
}

So what is next? We make sure the table's source is assigned in ViewDidLoad:

public override void ViewDidLoad ()
{
    ...
    _tableView.Source = new TableSource();
    ...
}

Let's stop development and have a look at what we have achieved so far. We have created our first UIViewController, UITableView, UITableViewSource, and UITableViewCell, and bound them all together. Fantastic!
We now need to access the local storage of the phone to pull out the required gallery items. But before we do this, we are going to create an Android project and replicate what we have done with iOS.

Create an Android project

Let's continue our Xamarin journey with Android. Our first step is to create a new general Android app. The first screen you will land on is MainActivity. This is our starting activity, which will inflate the first user interface; take notice of the configuration attributes:

[Activity (Label = "Gallery.Droid", MainLauncher = true, Icon = "@mipmap/icon")]

The MainLauncher flag indicates the starting activity; one activity must have this flag set to true so the application knows which activity to load first. The Icon property is used to set the application icon, and the Label property is used to set the text of the application, which appears in the top left of the navigation bar:

namespace Gallery.Droid
{
    using Android.App;
    using Android.Widget;
    using Android.OS;

    [Activity (Label = "Gallery.Droid", MainLauncher = true, Icon = "@mipmap/icon")]
    public class MainActivity : Activity
    {
        int count = 1;

        protected override void OnCreate (Bundle savedInstanceState)
        {
            base.OnCreate (savedInstanceState);

            // Set our view from the "main" layout resource
            SetContentView (Resource.Layout.Main);
        }
    }
}

The formula for our activities is the same as in Java; we must override the OnCreate method for each activity, where we will inflate the first XML interface, Main.xml.

Creating an XML interface and ListView

Our starting point is the main.xml sheet; this is where we will be creating the ListView:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <ListView
        android:id="@+id/listView"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:layout_marginBottom="10dp"
        android:layout_marginTop="5dp"
        android:background="@android:color/transparent"
        android:cacheColorHint="@android:color/transparent"
        android:divider="#CCCCCC"
        android:dividerHeight="1dp"
        android:paddingLeft="2dp" />
</LinearLayout>

The main.xml file should already be in the Resources | layout directory, so simply copy and paste the previous code into this file. Excellent! We now have our starting activity and interface, so now we have to create a ListAdapter for our ListView. An adapter works very much like a UITableSource, where we must override functions to determine cell data, row design, and the number of items in the list. Xamarin Studio also has an Android GUI designer.

Right-click on the Android project and add in a new empty class file for our adapter class. Our class must inherit the BaseAdapter class, and we are going to override the following functions (a minimal sketch of such an adapter is shown at the end of this section):

public override long GetItemId(int position);
public override View GetView(int position, View convertView, ViewGroup parent);

Before we go any further, we need to create a model for the objects used to contain the data to be presented in each row. In our iOS project, we created a GalleryItem to hold the byte array of image data used to create each UIImage. We have two approaches here: we could create another object to do the same as the GalleryItem, or even better, why don't we reuse this object using a shared project?
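Before moving on to shared projects, here is a minimal sketch of what such an adapter could look like once GalleryItem is available to the Android project. The GalleryAdapter name, the constructor, the Resource.Layout.GalleryCell row layout, and the view ids are illustrative assumptions rather than the book's actual code; the book builds its own adapter step by step later.

// GalleryAdapter.cs - a minimal sketch of the adapter described above.
using System.Collections.Generic;
using Android.Content;
using Android.Graphics;
using Android.Views;
using Android.Widget;

public class GalleryAdapter : BaseAdapter<GalleryItem>
{
    private readonly Context _context;
    private readonly List<GalleryItem> _items;

    public GalleryAdapter(Context context, List<GalleryItem> items)
    {
        _context = context;
        _items = items;
    }

    public override int Count
    {
        get { return _items.Count; }
    }

    public override GalleryItem this[int position]
    {
        get { return _items[position]; }
    }

    public override long GetItemId(int position)
    {
        return position;
    }

    public override View GetView(int position, View convertView, ViewGroup parent)
    {
        // Reuse the recycled row view when available, the same idea as
        // DequeueReusableCell on iOS.
        View view = convertView ?? LayoutInflater.From(_context)
            .Inflate(Resource.Layout.GalleryCell, parent, false);

        GalleryItem item = _items[position];

        var image = view.FindViewById<ImageView>(Resource.Id.image);
        var title = view.FindViewById<TextView>(Resource.Id.title);
        var date = view.FindViewById<TextView>(Resource.Id.date);

        // GalleryItem stores raw image bytes, so decode them into a bitmap.
        image.SetImageBitmap(
            BitmapFactory.DecodeByteArray(item.ImageData, 0, item.ImageData.Length));
        title.Text = item.Title;
        date.Text = item.Date;

        return view;
    }
}

The reuse of convertView mirrors the cell-reuse pattern we used in the iOS TableSource, which is exactly the point of sharing the GalleryItem model between the two platforms.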
Shared projects

We are going to delve into our first technique for sharing code between different platforms. This is what Xamarin tries to achieve with all of its development: we want to reuse as much code as possible. The biggest disadvantage when developing Android and iOS applications in two different languages is that we can't reuse anything.

Let's create our first shared project. Our shared project will be used to contain the GalleryItem model, so whatever code we include in this shared project can be accessed by both the iOS and Android projects. In the preceding screenshot, have a look at the Solution explorer, and notice how the shared project doesn't contain anything more than .cs code sheets. Shared projects do not have any references or components, just code that is shared by all platform projects. When our native projects reference these shared projects, any libraries being referenced via using statements come from the native projects.

Now we must have the iOS and Android projects reference the shared project; right-click on the References folder and select Edit References. Select the shared project you just created, and we can now reference the GalleryItem object from both projects.

Summary

In this article, we have seen a walkthrough of building a gallery application on both iOS and Android using native libraries. On Android, this is done using a ListView and a ListAdapter.

Resources for Article:

Further resources on this subject:
Optimizing Games for Android [article]
Getting started with Android Development [article]
Creating User Interfaces [article]


Conference App

Packt
09 Aug 2016
4 min read
In this article by Indermohan Singh, the author of Ionic 2 Blueprints, we will create a conference app. We will create an app that provides a list of speakers, the schedule, directions to the venue, ticket booking, and lots of other features. We will learn the following things:

Using the device's native features
Leveraging localStorage
Ionic menu and tabs
Using RxJS to build a perfect search filter

A conference app is a companion application for conference attendees. In this application, we are using a Lanyrd JSON export and a hardcoded JSON file as our backend. We will have a tabs and side menu interface, just like our e-commerce application. When a user opens our app, the app will show a tab interface with SpeakersPage open. It will have SchedulePage for the conference schedule and AboutPage for information about the conference. We will also make this app work offline, without any Internet connection. So, your user will still be able to view speakers, see the schedule, and do other stuff without using the Internet at all.

JSON data

In the application, we have used a hardcoded JSON file as our database. But in the truest sense, we are actually using a JSON export of a Lanyrd event. I was trying to make this article using Lanyrd as the backend, but unfortunately, Lanyrd is mostly in maintenance mode, so I was not able to use it. In this article, I am still using a JSON export from Lanyrd, from a previous event. So, if you are able to get a JSON export for your event, you can just swap the URL and you are good to go. Those who don't want to use Lanyrd and instead want to use their own backend should have a look at the next section. I have described the structure of the JSON that I have used to make this app; you can create your REST API accordingly.

Understanding JSON

Let's understand the structure of the JSON export. The whole JSON database is an object with two keys, timezone and sessions, like the following:

{
  timezone: "Australia/Brisbane",
  sessions: [..]
}

Timezone is just a string, but the sessions key is an array of lists of all the sessions of our conference. Items in the sessions array are divided according to the days of the conference. Each item represents a day of the conference and has the following structure:

{
  day: "Saturday 21st November",
  sessions: [..]
}

So, the sessions array of each day has the actual sessions as items. Each item has the following structure:

{
  start_time: "2015-11-21 09:30:00",
  topics: [],
  web_url: "url of event",
  times: "9:30am - 10:00am",
  id: "sdtpgq",
  types: [ ],
  end_time_epoch: 1448064000,
  speakers: [],
  title: "Talk Title",
  event_id: "event_id",
  space: "Space",
  day: "Saturday 21st November",
  end_time: "2015-11-21 10:00:00",
  other_url: null,
  start_time_epoch: 1448062200,
  abstract: "<p>Abstract of Talk</p>"
},

Here, the speakers array has a list of all the speakers of that session. We will use that speakers array to create a list of all speakers in an array. You can see the whole structure in the export itself.

That's all we need to understand about the JSON.

Defining the app

In this section, we will define the various functionalities of our application. We will also show the architecture of our app using an app flow diagram.
Functionalities

We will be including the following functionalities in our application:

List of speakers
Schedule detail
Search functionality using session title, abstract, and speakers' names
Hide/show any day of the schedule
Favorite list for sessions
Adding favorite sessions to the device calendar
Ability to share sessions to other applications
Directions to the venue
Offline working

App flow

This is how the control will flow inside our application. Let's understand the flow:

RootComponent: RootComponent is the root Ionic component. It is defined inside the /app/app.ts file.
TabsPage: TabsPage acts as a container for our SpeakersPage, SchedulePage, and AboutPage.
SpeakersPage: SpeakersPage shows a list of all the speakers of our conference.
SchedulePage: SchedulePage shows us the schedule of our conference and provides various filter features.
AboutPage: AboutPage provides us with information about the conference.
SpeakerDetail: The SpeakerDetail page shows the details of a speaker and a list of his/her presentations at this conference.
SessionDetail: The SessionDetail page shows the details of a session, with the title and abstract of the session.
FavoritePage: FavoritePage shows a list of the user's favorite sessions.

Summary

In this article, we discussed the JSON files that will be used as the database in our app. We also defined the functionalities of our app and understood its flow.

Resources for Article:

Further resources on this subject:
First Look at Ionic [article]
Ionic JS Components [article]
Creating Our First App with Ionic [article]


Getting started with Android Development

Packt
08 Aug 2016
14 min read
In this article by Raimon Ràfols Montané, author of the book Learning Android Application Development, we will go through all the steps required to start developing for Android devices. We have to be aware that Android is an evolving platform, and so are its development tools. We will show how to download and install Android Studio, and how to create a new project and run it on either an emulator or a real device.

Setting up Android Studio

Before being able to build an Android application, we have to download and install Android Studio on our computer. It is still possible to download and use Eclipse with the Android Development Tools (ADT) plugin, but Google no longer supports it and recommends that we migrate to Android Studio. In order to be aligned with this, we will only focus on Android Studio in this article. For more information on this, visit http://android-developers.blogspot.com.es/2015/06/an-update-on-eclipse-android-developer.html.

Getting the right version of Android Studio

The latest stable version of Android Studio can be found at http://developer.android.com/sdk/index.html. If you are among the bravest developers, and you are not afraid of bugs, you can always go to the Canary channel and download the latest version. The Canary channel is one of the preview channels available on the Android tools download page (available at http://tools.android.com/download/studio) and contains weekly builds. The following are the preview channels available at that URL:

The Canary channel contains weekly builds. These builds are tested, but they might contain some issues. Just use a build from this channel if you need or want to see the latest features.
The Dev channel contains selected Canary builds.
The Beta channel contains the beta milestones for the next version of Android Studio.
The Stable channel contains the most recent stable builds of Android Studio.

The following screenshot illustrates the Android tools download page.

It is not recommended to use an unstable version for production. To be on the safer side, always use the latest stable version. In this article, we will use the version 2.2 preview. Although it is a beta version at this moment, the main version will be available quite soon.

Installing Android Studio

Android Studio requires JDK 6 or higher, and at least JDK 7 is required if you aim to develop for Android 5.0 and higher. You can easily check which version you have installed by running this on your command line:

javac -version

If you don't have any version of the JDK, or you have an unsupported version, please install or update your JDK before proceeding to install Android Studio. Refer to the official documentation for a more comprehensive installation guide and details on all platforms (Windows, Linux, and Mac OS X): http://developer.android.com/sdk/installing/index.html?pkg=studio

Once you have the JDK installed, unpack the package you have just downloaded from the Internet and proceed with the installation. For example, let's use Mac OS X. If you download the latest stable version, you will get a .dmg file that can be mounted on your filesystem. Once mounted, a new Finder window that appears will ask us to drag the Android Studio icon to the Applications folder. Just doing this simple step will complete the basic installation.
If you have downloaded a preview version, you will have a ZIP file that, once unpacked, will contain the Android Studio application directly (it can be just dragged to the Applications folder using Finder). For other platforms, refer to the official installation guide provided by Google at the web address mentioned earlier.

First run

Once you have finished installing Android Studio, it is time to run it for the first time. On the first execution (at least if you have downloaded version 2.2), it will let you configure some options and install some SDK components if you choose the custom installation type. Otherwise, both these settings and the SDK components can be configured or installed later.

The first option you will be able to choose is the UI theme. We have the default UI theme or the Darcula theme, which basically is a choice of light or dark background, respectively. After this step, the next window will show the SDK Components Setup, where the installation process will let you choose some components to automatically download and install.

On Mac OS, there is a bug in some versions of Android Studio 2.0 that sometimes does not allow selecting any option if the target folder does not exist. If that happens, follow these steps for a quick fix:

1. Copy the contents of the Android SDK Location field, just the path or something like /Users/<username>/Library/Android/sdk, to the clipboard.
2. Open the terminal application.
3. Create the folder manually, as in mkdir /Users/<username>/Library/Android/sdk.
4. Go back to Android Studio, press the Previous button and then the Next button to come back to this screen. Now, you will be able to select the components that you would like to install.
5. If that still does not work, cancel the installation process, ensuring that you checked the option to rerun the setup on the next installation. Quit Android Studio and rerun it.

Creating a sample project

We will introduce some of the most common elements of Android Studio by creating a sample project, building it, and running it on an Android emulator or on a real Android device. It is better to display those elements when you need them rather than just enumerating a long list without a real use behind them.

Starting a new project

Just click on the Start a new Android Studio project button to start a project from scratch. Android Studio will ask you to make some project configuration settings, and you will be able to launch your project. If you have an already existing project and would like to import it to Android Studio, you could do it now as well. Any project based on Eclipse, Ant, or a Gradle build can be easily imported into Android Studio. Projects can also be checked out from version control software such as Subversion or Git directly from Android Studio.

When creating a new project, it will ask for the application name and the company domain name, which will be reversed into the application package name. Once this information is filled out, Android Studio will ask for the type of devices or form factors your application will target. This includes not only phone and tablet, but also Android Wear, Android TV, Android Auto, or Google Glass. In this example, we will target only phone and tablet and require a minimum SDK API level of 14 (Android 4.0 or Ice Cream Sandwich). By setting the minimum required level to 14, we make sure that the app will run on approximately 96.2% of devices accessing Google Play Store, which is good enough.
If we set 23 as the minimum API level (Android 6.0 Marshmallow), our application will only run on Android Marshmallow devices, which is less than 1% of the active devices on Google Play right now. Unless we require a very specific feature available at a specific API level, we should use common sense and try to aim for as many devices as we can. Having said that, we should not waste time supporting very old devices (or very old versions of Android), as they might be, for example, only 5% of the active devices but may imply lots and lots of work to make your application support them. In addition to the minimum SDK version, there is also the target SDK version. The target SDK version should, ideally, be set to the latest stable version of Android available to allow your application to take advantage of all the new features, styles, and behaviors from newer versions.

As a rule of thumb, Google gives you the percentage of active devices on Google Play, not the percentage of devices out there in the wild. So, unless we need to build an enterprise application for a closed set of devices installed ad hoc, we should not mind those people not even accessing Google Play, as they will not be the users of our application because they do not usually download applications, unless we are targeting countries where Google Play is not available. In that case, we should analyze our requirements with real data from the application stores available in those countries.

To see the Android OS version distribution, always check Android's developer dashboard at http://developer.android.com/about/dashboards/index.html. Alternatively, when creating a new project from Android Studio, there is a link to help you choose the version that you would like to target, which will open a new screen with the cumulative percentage of coverage. If you click on each version, it will give you more details about that Android OS version and the features that were introduced.

After this step, and to simplify our application creation process, Android Studio will allow us to add an Activity class to the project from some templates. In this case, we can add an empty Activity class for the moment. Let's not worry about the name of the Activity class and layout file at this moment; we can safely proceed with the prefilled values.

As defined by the Android developer documentation (http://developer.android.com/reference/android/app/Activity.html), an:

Activity is a single, focused thing that the user can do.

To simplify further, we can consider an Activity class as every single screen of our application where the user can interact with it. If we take into consideration the MVC pattern, we can assume the activity to be the controller, as it will receive all the user inputs and events from the views, and the layout XML files and UI widgets to be the views. To know more about the MVC pattern, visit https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller.

So, we have just added one activity to our application; let's see what else the Android Studio wizard created for us.
Running your project

The Android Studio project wizard not only created an empty Activity class for us, but it also created an AndroidManifest, a layout file (activity_main.xml) defining the view controlled by the Activity class, an application icon placed carefully into different mipmaps (https://en.wikipedia.org/wiki/Mipmap) so that the most appropriate one will be used depending on the screen resolution, some Gradle scripts, and some other .xml files containing colors, dimensions, strings, and style definitions. We can have multiple resources, and even repeated resources, depending on screen resolution, screen orientation, night mode, layout direction, or even the mobile country code of the SIM card. Take a look at the next topic to understand how to add qualifiers and filters to resources. For the time being, let's just try to run this example by pressing the play button next to our build configuration named app at the top of the screen.

Android Studio will show us a small window where we can select the deployment target: a real device or an emulator where our application will be installed and launched. If we have not connected any device or created any emulator, we can do it from this screen. Let's press the Create New Emulator button. From this new screen, we can easily select a device and create an emulator that looks like that device. A Nexus 5X will suit us. After choosing the device, we can choose which version of the Android OS and which architecture the platform will run on. For instance, if we want to select Android Marshmallow (API level 23), we can choose from armeabi-v7a, x86 (Intel processors), and x86_64 (64-bit Intel processors). As we previously installed HAXM during our first run (https://software.intel.com/en-us/android/articles/intel-hardware-accelerated-execution-manager), we should install an Intel image, so the emulator will be a lot faster than having to emulate an ARM processor.

If we do not have the Android OS image downloaded to our computer, we can do it from this screen as well. Note that you can have an image of the OS with Google APIs or without them. We will use one image or the other depending on whether the application uses any of the Google-specific libraries (Google Play Services) or only the Android core libraries.

Once the image is selected (and downloaded and installed, if needed), we can proceed to finish the Android Virtual Device (AVD) configuration. On the last configuration screen, we can fine-tune some elements of our emulator, such as the default orientation (portrait or landscape), the screen scale, the SD card (if we enable the advanced settings), the amount of physical RAM, and the network latency, and we can use the webcam on our computer as the emulator's camera.

You are now ready to run your application on the Android emulator that you just created. Just select it as the deployment target and wait for it to load and install the app.
If everything goes as it should, you should see this screen on the Android emulator.

If you want to use a real device instead of an emulator, make sure that your device has the developer options enabled and that it is connected to your computer using a USB cable. To enable development mode on your device, or to get information on how to develop and debug applications over the network instead of having the device connected through a USB cable, check out the following links:

http://developer.android.com/tools/help/adb.html
http://developer.android.com/tools/device.html

If these steps are performed correctly, your device will appear as a connected device in the deployment target selection window.

Resource configuration qualifiers

As we introduced in the previous section, we can have multiple resources depending on the screen resolution or any other device configuration, and Android will choose the most appropriate resource at runtime. In order to do that, we have to use what are called configuration qualifiers. These qualifiers are just strings appended to the resource folder. Consider the following example:

drawable
drawable-hdpi
drawable-mdpi
drawable-en-rUS-land
layout
layout-en
layout-sw600dp
layout-v7

Qualifiers can be combined, but they must always follow the order specified by Google in the Providing Resources documentation, available at http://developer.android.com/guide/topics/resources/providing-resources.html. This allows us, for instance, to target multiple resolutions and have the best experience for each of them. It can also be used to have different images based on the country in which the application is executed, or on the language.

We have to be aware that putting in too many resources (basically, images or any other media) will make our application grow in size. It is always good to apply common sense. And, in the case of having too many different resources or configurations, do not bloat the application; produce different binaries that can be deployed selectively to different devices on Google Play. We will briefly explain, in the Gradle build system topic in this article, how to produce different binaries from one single source code. It will add some complexity to our development but will make our application smaller and more convenient for end users. For more information on multiple APK support, visit http://developer.android.com/google/play/publishing/multiple-apks.html.

Summary

In this article, we covered how to install Android Studio and get started with it. We also introduced some of the most common elements of Android Studio by creating a sample project, building it, and running it on an Android emulator or on a real Android device.

Resources for Article:

Further resources on this subject:
Hacking Android Apps Using the Xposed Framework [article]
Speeding up Gradle builds for Android [article]
The Art of Android Development Using Android Studio [article]


Getting Started with VR Programming

Jake Rheude
04 Jul 2016
8 min read
This guide will go through some simple programming for VR apps using the Google VR SDK (software development kit) and the Unity3D game engine. This guide assumes that you already have a mobile device capable of running Google VR apps with a Google Cardboard, as well as a computer able to run Unity3D.

Getting Started

First and foremost, download the latest version of Unity3D from their website. Out of the four options, select "Personal" since it costs nothing to the user. Then download and run the installer. The installation process is straightforward. However, you must make sure that you select the "Android Build Support" component if you are planning on using an Android device, or "iOS Build Support" for an iOS device. If you are unsure at this point, just select both, as neither of them requires a lot of space.

Now that you have Unity3D installed, the next step is to set it up for the Google VR SDK, which can be found here. After agreeing to the terms and conditions, you will be given a link to download the repository directly. After downloading and extracting the ZIP file, you will notice that it contains a Unity Package file. Double-click on the file, and Unity will automatically load up. You will then see a window similar to the pop-up below on your screen. Click the "NEW" button in the top right corner to begin your first Google VR project. Give it any project name other than the default "New Unity Project" name. For this guide, I have chosen "VR Programming Tutorial" as the project name.

As soon as your new project loads up, so will the Google VR SDK Unity Package. The relevant files should all be selected by default, so simply click the "Import" button in the bottom right corner to include the SDK in your project.

In your project's "Assets" folder, there should be a folder named "GoogleVR". This is where all the necessary components are located in order to begin working with the SDK.

From the "Assets" folder, go into "GoogleVR" -> "DemoScenes" -> "HeadSetDemo". Double-click on the Unity icon that is named "DemoScene". You should see something similar to this upon opening the scene file. This is where you can preview the scene before playing it to get an idea of how the game objects will be laid out in the environment. So let's try that by clicking on the "Play" button. The scene will start out from the user's perspective, which would be the main camera.

There is a slight difference in how the left eye and right eye cameras display the environment. This is called distortion correction, which is intentionally designed that way in order to match the display to the Google Cardboard eye lenses. You may be wondering why you are unable to look around with your mouse. This design is also intentional, to allow the developer to hover the mouse pointer in and out of the game window without disrupting the scene while it is playing. In order to look around in the environment, hold down the Ctrl key, and then the Alt key, to enable head movement. Make sure to press the keys in this order; otherwise you will only be rotating the display along the Z-axis.

You might also be wondering where the interactive menu on the floor canvas has gone. The menu is still there; it just does not appear in VR mode. Notice that the dot in the center of the display will turn into a halo when you move it over the hovering cube. This happens whenever the dot is placed over a game object in the environment that is interactive.
So even if the menu is not visible, you are still able to select the menu items. If you happen to click on the "VR Mode" button, the left eye and right eye cameras will simply go away and the main camera will be the only camera that displays the world space. VR Mode can be enabled or disabled by clicking on the "VR Mode Enabled" checkbox in the project's inspector. Simply select "GvrMain" in the DemoScene hierarchy to have the inspector display its information. This is how the scene is displayed when VR mode is disabled.

Note that, as of the current implementation of Google VR, it is impossible to add UI components into the world space. This is due to the stereoscopic functionality of Google VR and the mathematics involved in calculating the distance of the game objects from the left eye and right eye cameras relative to the world environment. However, it is possible to add non-interactive UI elements (i.e. a player HUD) as a child 3D element with the main camera as its parent. If you wish to create interactive UI components, they must be done strictly as game objects in the world space. This also implies that the interactive UI components must be selected by the user from a fixed position in the world space, as they would find it difficult to make their selections otherwise.

Now that we have gone over the basics of the Google VR SDK, let's move on to some programming.

Applying Attributes to Game Objects

When creating an interactive medium of any kind (in this case, a VR app), some of the most basic functions can end up being more complicated than they initially seem to be. We will demonstrate that by incorporating what seems to be simple user movement.

In the same DemoScene scene, we will add four more cubes to the environment. For the sake of cleanliness, first we will remove the existing cube, as it will be an obstruction for our new cubes. To delete a game object from a scene, simply right-click it in the hierarchy and select "Delete". Now that we have removed the existing cube, add a new one by clicking "Create" in the hierarchy, selecting "3D Object" and then "Cube".

Move the cube about 4-5 units along the X or Z axis away from the origin. You can do so by clicking and dragging the red or blue arrow. Now that we have added our cube, the next step is to add a script to the player's perspective object. For this project, we can use the "GvrMain" game object to incorporate the player's movement. In the inspector tab, click on the "Add Component" button, select "New Script" and create a new script titled "MoveToCube". Once the script has been created, click on the cogwheel icon and select "Edit Script", then fill in the MoveToCube.cs code (a minimal sketch of this script and the next one is given at the end of this section).

Next, add an Event Trigger component to your cube. Create a new script titled "CubeSelect". Then select the cogwheel icon and select "Edit Script" to open the script in the script editor, and fill in the CubeSelect.cs code.

Click on the "Add New Event Type" button. Select "PointerClick". Click the + icon to add a method to refer to. In the left box, select the "Cube" game object. For the method, select "CubeSelect" and then click on "GetCubePosition". Finally, select "GvrMain" as the target game object for the method.

When you are finished adding the necessary components, copy and paste the cube in the project hierarchy tab three times in order to get four cubes. They will seem as if they did not appear in the scene, only because they are overlapping each other.
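The original tutorial supplied the contents of MoveToCube.cs and CubeSelect.cs as screenshots, which are not reproduced here. The following is a minimal sketch of what the two scripts could look like, assuming that selecting a cube should move GvrMain toward that cube's position. Apart from the MoveToCube and CubeSelect class names and the GetCubePosition method wired to the event trigger, every field, method, and value below is an illustrative assumption rather than the article's original code.

// MoveToCube.cs - attached to GvrMain; moves the player rig toward a target.
using UnityEngine;

public class MoveToCube : MonoBehaviour
{
    // Hypothetical fields: where to go and how fast to get there.
    private Vector3 targetPosition;
    private bool hasTarget;
    public float speed = 2.0f;

    // Called by CubeSelect when the user clicks a cube.
    public void SetTargetPosition(Vector3 position)
    {
        // Keep the camera rig at its current height.
        targetPosition = new Vector3(position.x, transform.position.y, position.z);
        hasTarget = true;
    }

    void Update()
    {
        if (!hasTarget)
        {
            return;
        }

        // Step toward the target every frame and stop when close enough.
        transform.position = Vector3.MoveTowards(
            transform.position, targetPosition, speed * Time.deltaTime);

        if (Vector3.Distance(transform.position, targetPosition) < 0.1f)
        {
            hasTarget = false;
        }
    }
}

// CubeSelect.cs - attached to each cube; GetCubePosition is the method
// selected in the PointerClick event trigger, with GvrMain passed in
// as the target game object.
using UnityEngine;

public class CubeSelect : MonoBehaviour
{
    public void GetCubePosition(GameObject player)
    {
        // Hand this cube's position to the movement script on GvrMain.
        MoveToCube mover = player.GetComponent<MoveToCube>();

        if (mover != null)
        {
            mover.SetTargetPosition(transform.position);
        }
    }
}

With this wiring in place, gazing at a cube and pulling the Cardboard trigger fires the PointerClick event, which calls GetCubePosition on the cube and, in turn, tells the MoveToCube component on GvrMain where to go.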
Change the positions of each cube so that they are separated from each other along the X and Z axes. Once completed, the scene should look something similar to this.

Now you can run the VR app and see for yourself that we have incorporated player movement in this basic implementation.

Tips and General Advice

Many developers recommend that you do not apply any acceleration and/or deceleration to the main camera. Doing so will cause nausea in users and thus give them a negative experience with your VR application. Keep your VR app relatively simple! The user only has two modes of input: head tracking and the Cardboard trigger button. Trying to force functionality with multiple gestures (i.e. looking straight down and/or up) will not be intuitive to the user and will more than likely cause frustration.

About the Author

Jake Rheude is the Director of Business Development for Red Stag Fulfillment, a US-based e-commerce fulfillment provider focused primarily on serving e-commerce businesses shipping heavy, large, or valuable products to customers all around the world. Red Stag is so confident in its fulfillment software combined with its warehouse operations that, for any error, inaccuracy, or late shipment, not only will they reimburse you for that order, but they'll write you a check for $50.


Voice Interaction and Android Marshmallow

Raka Mahesa
30 Jun 2016
6 min read
"Jarvis, play some music." You might imagine that to be a quote from some Iron Man stories (and hey, that might be an actual quote), but if you replace the "Jarvis" part with "OK Google," you'll get an actual line that you can speak to your Android phone right now that will open a music player and play a song. Go ahead and try it out yourself. Just make sure you're on your phone's home screen when you do it. This feature is called Voice Action, and it was actually introduced years ago in 2010, though back then it only worked on certain apps. However, Voice Action only accepts a single-line voice command, unlike Jarvis who usually engages in a conversation with its master. For example, if you ask Jarvis to play music, it will probably reply by asking what music you want to play. Fortunately, this type of conversation will no longer be limited to movies or comic books, because with Android Marshmallow, Google has introduced an API for that: the Voice Interaction API. As the name implies, the Voice Interaction API enables you to add voice-based interaction to its app. When implemented properly, the user will be able to command his/her phone to do a particular task without any touch interaction just by having a conversation with the phone. Pretty similar to Jarvis, isn't it? So, let's try it out! One thing to note before beginning: the Voice Interaction API can only be activated if the app is launched using Voice Action. This means that if the app is opened from the launcher via touch, the API will return a null object and cannot be used on that instance. So let’s cover a bit of Voice Action first before we delve further into using the Voice Interaction API. Requirements To use the Voice Interaction API, you need: Android Studio v1.0 or above Android 6.0 (API 23) SDK A device with Android Marshmallow installed (optional) Voice Action Let's start by creating a new project with a blank activity. You won’t use the app interface and you can use the terminal logging to check what app does, so it's fine to have an activity with no user interface here. Okay, you now have the activity. Let’s give the user the ability to launch it using a voice command. Let's pick a voice command for our app—such as a simple "take a picture" command? This can be achieved by simply adding intent filters to the activity. Add these lines to your app manifest file and put them below the original intent filter of your app activity. <intent-filter> <action android_name="android.media.action.STILL_IMAGE_CAMERA" /> <category android_name="android.intent.category.DEFAULT" /> <category android_name="android.intent.category.VOICE" /> </intent-filter> These lines will notify the operating system that your activity should be triggered when a certain voice command is spoken. The action "android.media.action.STILL_IMAGE_CAMERA" is associated with the "take a picture" command, so to activate the app using a different command, you need to specify a different action. Check out this list if you want to find out what other commands are supported. And that's all you need to do to implement Voice Action for your app. Build the app and run it on your phone. So when you say "OK Google, take a picture", your activity will show up. Voice Interaction All right, let's move on to Voice Interaction. When the activity is created, before you start the voice interaction part, you must always check whether the activity was started from Voice Action and whether the VoiceInteractor service is available. 
To do that, call the isVoiceInteraction() function and check the returned value. If it returns true, then the service is available for you to use. Let's say you want your app to first ask the user which side he/she is on, and then change the app background color accordingly. If the user chooses the dark side, the color will be black, but if the user chooses the light side, the app color will be white. Sounds like a simple and fun app, doesn't it?

So first, let's define what options are available for users to choose. You can do this by creating an instance of VoiceInteractor.PickOptionRequest.Option for each available choice. Note that you can associate more than one word with a single option, as can be seen in the following code:

VoiceInteractor.PickOptionRequest.Option option1 = new VoiceInteractor.PickOptionRequest.Option("Light", 0);
option1.addSynonym("White");
option1.addSynonym("Jedi");

VoiceInteractor.PickOptionRequest.Option option2 = new VoiceInteractor.PickOptionRequest.Option("Dark", 1);
option2.addSynonym("Black");
option2.addSynonym("Sith");

The next step is to define a voice interaction request and tell the VoiceInteractor service to execute that request. For this app, use PickOptionRequest as the request object. You can check out other request types on this page.

VoiceInteractor.PickOptionRequest.Option[] options = new VoiceInteractor.PickOptionRequest.Option[] { option1, option2 };
VoiceInteractor.Prompt prompt = new VoiceInteractor.Prompt("Which side are you on");
getVoiceInteractor().submitRequest(new PickOptionRequest(prompt, options, null) {
    // Handle each option here
});

Then determine what to do based on the choice picked by the user. This time, we'll simply check the index of the selected option and change the app background color based on that (we won't delve into how to change the app background color here; let's leave it for another occasion).

@Override
public void onPickOptionResult(boolean finished, Option[] selections, Bundle result) {
    if (finished && selections.length == 1) {
        if (selections[0].getIndex() == 0)
            changeBackgroundToWhite();
        else if (selections[0].getIndex() == 1)
            changeBackgroundToBlack();
    }
}

@Override
public void onCancel() {
    closeActivity();
}

And that's it! When you run your app on your phone, it should ask which side you're on if you launch it using Voice Action. You've only learned the basics here, but this should be enough to add a little voice interactivity to your app. And if you ever want to create a Jarvis version, you just need to add "sir" to every question your app asks.

About the author

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also tweets regularly as @legacy99.

Designing a Simple, Robust Object Detector and Classifier

Packt
13 Jun 2016
19 min read
In this article by Joseph Howse, author of the book, iOS Application Development with OpenCV 3, illustrates a scale-invariant,rotation-invariant approach to object detection and classification, using OpenCV 3and just 250 lines of custom C++ code. The technique relies on blob detection, histogram analysis, and SURF (or ORB if SURF is unavailable).The classifier is sensitive to colors as well as keypoints, and itcan work with a small number of training images. For background information, sample images, and a complete tutorial on how to integrate this detector and classifier into an iOS application, refer toChapter 5, Classifying Coins and Commodities in the book,iOS Application Development with OpenCV 3 (Packt Publishing, 2016). You could also use this article's C++ code on other platforms besides iOS. (For more resources related to this topic, see here.) Defining blobs and a blob detector For our purposes, a blob simply has an image and a label. The image is cv::Mat and the label is an unsigned integer. The label's default value is 0, which shall signify that the blob has not yet been classified. Create a new header file, Blob.h, and fill it with the following declaration of a Blob class: #ifndef BLOB_H #define BLOB_H   #include <opencv2/core.hpp>   class Blob { public:   Blob(const cv::Mat &mat, uint32_t label = 0ul);     /**    * Construct an empty blob.    */   Blob();     /**    * Construct a blob by copying another blob.    */   Blob(const Blob &other);     bool isEmpty() const;     uint32_t getLabel() const;   void setLabel(uint32_t value);     const cv::Mat &getMat() const;   int getWidth() const;   int getHeight() const;   private:   uint32_t label;     cv::Mat mat; };   #endif // BLOB_H A Blob's image does not change after construction, but the label may change as a result of our classification process. Note that most of Blob's methods have the const modifier, but of course,setLabel does not because it changes the label. Now, let's declare a BlobDetector class in another new header file, BlobDetector.h. This class provides a detect public method to analyze a given image and populate vector<Blob> based on detected objects in the image. Another public method, getMask, returns a thresholded version of the most recent image that the detect method received. Internally, BlobDetector uses several more matrices and vectors to hold intermediate results, including the mask, detected edges, detected contours, and hierarchy that describes the contours' relationship to each other. Here is the detector's declaration: class BlobDetector { public:   void detect(cv::Mat &image, std::vector<Blob>&blob,     double resizeFactor = 1.0, bool draw = false);     const cv::Mat &getMask() const;   private:   void createMask(const cv::Mat &image);     cv::Mat resizedImage;   cv::Mat mask;   cv::Mat edges;   std::vector<std::vector<cv::Point>> contours;   std::vector<cv::Vec4i> hierarchy; };   #endif // !BLOB_DETECTOR_H Later, in the Detecting blobs against a plain background section, we will define the methods' bodies in new files called Blob.cpp and BlobDetector.cpp. Defining blob descriptors and a blob classifier If you are familiar with keypoint matching, you know that a keypoint has a descriptor or set of descriptive statistics. Similarly, we can define a custom descriptor for a blob. As our classifier relies on histogram comparison and keypoint matching, let's say that a blob's descriptor consists of a normalized histogram and matrix of keypoint descriptors. 
The descriptor object is also a convenient place to put the label. Create a new header file, BlobDescriptor.h, and put the following declaration of a BlobDescriptor class in it: #ifndef BLOB_DESCRIPTOR_H #define BLOB_DESCRIPTOR_H   #include <opencv2/core.hpp>   class BlobDescriptor { public:   BlobDescriptor(const cv::Mat &normalizedHistogram,     const cv::Mat &keypointDescriptors, uint32_t label);     const cv::Mat &getNormalizedHistogram() const;   const cv::Mat &getKeypointDescriptors() const;   uint32_t getLabel() const;   private:   cv::Mat normalizedHistogram;   cv::Mat keypointDescriptors;   uint32_t label; };   #endif // !BLOB_DESCRIPTOR_H Note that BlobDescriptor is designed as an immutable class. All its methods, except the constructor, have the const modifier. Now, let's declare a BlobClassifier class in another new header file, BlobClassifier.h. Publicly, this class receives Blob objects via an update method (for reference blobs) and a classify method (for blobs that the detector found in the scene). Privately, BlobClassifier creates, owns, and compares BlobDescriptor objects that pertain to the Blob objects. Thus, BlobClassifier is the only part of our program that needs to deal with BlobDescriptor. BlobClassifier also owns instances of OpenCV classes that are responsible for keypoint detection, description, and matching. Here is our classifier's declaration: #ifndef BLOB_CLASSIFIER_H #define BLOB_CLASSIFIER_H   #import "Blob.h" #import "BlobDescriptor.h"   #include <opencv2/features2d.hpp>   class BlobClassifier { public:   BlobClassifier();     /**    * Add a reference blob to the classification model.    */   void update(const Blob &referenceBlob);     /**    * Clear the classification model.    */   void clear();     /**    * Classify a blob that was detected in a scene.    */   void classify(Blob &detectedBlob) const;   private:   BlobDescriptor createBlobDescriptor(const Blob &blob) const;   float findDistance(const BlobDescriptor &detectedBlobDescriptor,     const BlobDescriptor &referenceBlobDescriptor) const;     /**    * A feature detector and descriptor extractor.    * It finds features in images.    * Then, it creates descriptors of the features.    */   cv::Ptr<cv::Feature2D> featureDetectorAndDescriptorExtractor;     /**    * A descriptor matcher.    * It matches features based on their descriptors.    */   cv::Ptr<cv::DescriptorMatcher> descriptorMatcher;     /**    * Descriptors of the reference blobs.    */   std::vector<BlobDescriptor> referenceBlobDescriptors; };   #endif // !BLOB_CLASSIFIER_H Later, in the Classifying blobs by color and keypoints section, we will write the methods' bodies in new files called BlobDescriptor.cpp and BlobClassifier.cpp. Detecting blobs against a plain background Let's assume that the background has a distinctive color range, such as "cream to snow white". Our blob detector will calculate the image's dominant color range and search for large regions whose colors differ from this range. These anomalous regions will constitute the detected blobs. For small objects such as a bean or coin, a user can easily find a plain background such as a blank sheet of paper, plain table-top, plain piece of clothing, or even the palm of a hand. As our blob detector dynamically estimates the background color range, it can cope with various backgrounds and lighting conditions; it is not limited to a lab environment. Create a new file, BlobDetector.cpp, for the implementation of our BlobDetector class. 
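Before moving on to the implementations, it may help to see how these pieces are meant to fit together once everything is written. The following is a minimal usage sketch, not taken from the book; the image file names are placeholders, and it assumes the headers declared above along with OpenCV's imgcodecs module.

#include <cstdio>
#include <vector>
#include <opencv2/imgcodecs.hpp>

#include "Blob.h"
#include "BlobDetector.h"
#include "BlobClassifier.h"

int main() {
  BlobDetector detector;
  BlobClassifier classifier;

  // Train the classifier with one labeled reference image per class.
  cv::Mat referenceImage = cv::imread("reference_coin.jpg");
  classifier.update(Blob(referenceImage, 1u));

  // Detect blobs in a scene, then ask the classifier to label each one.
  cv::Mat scene = cv::imread("scene.jpg");
  std::vector<Blob> blobs;
  detector.detect(scene, blobs, 0.5, true);
  for (Blob &blob : blobs) {
    classifier.classify(blob);
    std::printf("Blob %dx%d labeled %u\n", blob.getWidth(),
                blob.getHeight(), (unsigned)blob.getLabel());
  }
  return 0;
}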
(To review the header, refer back to the Defining blobs and a blob detector section.) At the top of BlobDetector.cpp, we will define several constants that pertain to the breadth of the background color range, the size and smoothing of the blobs, and the color of the blobs' rectangles in the preview image. Here is the relevant code: #include <opencv2/imgproc.hpp>   #include "BlobDetector.h"   const double MASK_STD_DEVS_FROM_MEAN = 1.0; const double MASK_EROSION_KERNEL_RELATIVE_SIZE_IN_IMAGE = 0.005; const int MASK_NUM_EROSION_ITERATIONS = 8;   const double BLOB_RELATIVE_MIN_SIZE_IN_IMAGE = 0.05;   const cv::Scalar DRAW_RECT_COLOR(0, 255, 0); // Green Of course, the heart of BlobDetector is its detect method. Optionally, the method creates a downsized version of the image for faster processing. Then, we call a helper method, createMask, to perform thresholding and erosion on the (resized) image. We pass the resulting mask to the cv::Canny function to perform Canny edge detection. We pass the edge mask to the cv::findContours function, which populates a vector of contours, in the vector<vector<cv::Point>> format. That is to say, each contour is a vector of points. For each contour, we find the points' bounding rectangle. If we are working with a resized image, we restore the bounding rectangle to the original scale. We reject rectangles that are very small. Finally, for each accepted rectangle, we put a new Blob object in the output vector and optionally draw the rectangle in the original image. Here is the detect method's implementation: void BlobDetector::detect(cv::Mat &image,   std::vector<Blob>&blobs, double resizeFactor, bool draw) {   blobs.clear();     if (resizeFactor == 1.0) {     createMask(image);   } else {     cv::resize(image, resizedImage, cv::Size(), resizeFactor,       resizeFactor, cv::INTER_AREA);     createMask(resizedImage);   }     // Find the edges in the mask.   cv::Canny(mask, edges, 191, 255);     // Find the contours of the edges.   cv::findContours(edges, contours, hierarchy, cv::RETR_TREE,     cv::CHAIN_APPROX_SIMPLE);     std::vector<cv::Rect> rects;   int blobMinSize = (int)(MIN(image.rows, image.cols) *     BLOB_RELATIVE_MIN_SIZE_IN_IMAGE);   for (std::vector<cv::Point> contour : contours) {       // Find the contour's bounding rectangle.     cv::Rect rect = cv::boundingRect(contour);       // Restore the bounding rectangle to the original scale.     rect.x /= resizeFactor;     rect.y /= resizeFactor;     rect.width /= resizeFactor;     rect.height /= resizeFactor;       if (rect.width < blobMinSize || rect.height < blobMinSize) {       continue;     }       // Create the blob from the sub-image inside the bounding     // rectangle.     blobs.push_back(Blob(cv::Mat(image, rect)));       // Remember the bounding rectangle in order to draw it later.     rects.push_back(rect);   }     if (draw) {     // Draw the bounding rectangles.     for (const cv::Rect &rect : rects) {       cv::rectangle(image, rect.tl(), rect.br(), DRAW_RECT_COLOR);     }   } } The getMask method simply returns the mask that we previously computed in the detect method: const cv::Mat &BlobDetector::getMask() const {   return mask; } The createMask helper method begins by finding the image's mean color and standard deviation using the cv::meanStdDev function. We calculate a range of background colors based on a certain number of standard deviations from the mean, as defined by the MASK_STD_DEVS_FROM_MEAN constant near the top of BlobDetector.cpp. 
We deem values outside this range to be foreground colors. Using the cv::inRange function, we map the background colors (in the image) to white (in the mask) and the foreground colors (in the image) to black (in the mask). Then, we create a square kernel using the cv::getStructuringElement function. Finally, we use the kernel in the cv::erode function to apply the erosion morphological operation to the mask. This has the effect of smoothing the black (foreground) regions such that they swallow up little gaps that are probably just noise. Here is the relevant code: void BlobDetector::createMask(const cv::Mat &image) {     // Find the image's mean color.   // Presumably, this is the background color.   // Also find the standard deviation.   cv::Scalar meanColor;   cv::Scalar stdDevColor;   cv::meanStdDev(image, meanColor, stdDevColor);     // Create a mask based on a range around the mean color.   cv::Scalar halfRange = MASK_STD_DEVS_FROM_MEAN * stdDevColor;   cv::Scalar lowerBound = meanColor - halfRange;   cv::Scalar upperBound = meanColor + halfRange;   cv::inRange(image, lowerBound, upperBound, mask);     // Erode the mask to merge neighboring blobs.   int kernelWidth = (int)(MIN(image.cols, image.rows) *     MASK_EROSION_KERNEL_RELATIVE_SIZE_IN_IMAGE);   if (kernelWidth > 0) {     cv::Size kernelSize(kernelWidth, kernelWidth);     cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT,       kernelSize);     cv::erode(mask, mask, kernel, cv::Point(-1, -1),       MASK_NUM_EROSION_ITERATIONS);   } } That is the end of the blob detector's code. As you can see, it uses a general-purpose and rather linear approach, without any special cases for different kinds of objects.Moreover, we are using a separate blob detector and blob classifier, and this separation of responsibilities enables us to keep each class's implementation relatively simple. For completeness, note that the Blob class's constructors have straightforward implementations that copy the arguments. For the blob's image, we make a deep copy because the original may change. (For example, the original may be a subimage in a frame of video, and after detection we may draw rectangles atop the frame of video.) Similarly, Blob's getter and setter methods are self-explanatory. Create a new file, Blop.cpp, and fill it with the following implementation: #import "Blob.h"   Blob::Blob(const cv::Mat &mat, uint32_t label) : label(label) {   mat.copyTo(this->mat); }   Blob::Blob() { }   Blob::Blob(const Blob &other) : label(other.label) {   other.mat.copyTo(mat); }   bool Blob::isEmpty() const {   return mat.empty(); } uint32_t Blob::getLabel() const {   return label; } void Blob::setLabel(uint32_t value) {   label = value; } const cv::Mat &Blob::getMat() const {   return mat; } int Blob::getWidth() const {   return mat.cols; } int Blob::getHeight() const {   return mat.rows; } Classifying blobs by color and keypoints Our classifier operates on the assumption that a blob contains distinctive colors, distinctive keypoints, or both. To conserve memory and precompute as much relevant information as possible, we do not store images of the reference blobs, but instead we store histograms and keypoint descriptors. Create a new file, BlobClassifier.cpp, for the implementation of our BlobClassifier class. (To review the header, refer back to the Defining blob descriptors and a blob classifier section.) 
At the top of BlobDetector.cpp, we will define several constants that pertain to the number of histogram bins, the histogram comparison method, and the relative importance of the histogram comparison versus the keypoint comparison. Here is the relevant code: #include <opencv2/imgproc.hpp>   #include "BlobClassifier.h"   #ifdef WITH_OPENCV_CONTRIB #include <opencv2/xfeatures2d.hpp> #endif   const int HISTOGRAM_NUM_BINS_PER_CHANNEL = 32; const int HISTOGRAM_COMPARISON_METHOD = cv::HISTCMP_CHISQR_ALT;   const float HISTOGRAM_DISTANCE_WEIGHT = 0.98f; const float KEYPOINT_MATCHING_DISTANCE_WEIGHT = 1.0f -   HISTOGRAM_DISTANCE_WEIGHT; Beware that the HISTOGRAM_NUM_BINS_PER_CHANNEL constant has a cubic relationship to memory usage. For each blob descriptor, we store a three-dimensional (BGR) histogram with HISTOGRAM_NUM_BINS_PER_CHANNEL^3 elements, and each element is a 32-bit floating point number. If the constant is 32, each histogram's size in megabytes is (32^3)*32/(10^6)=1.0. This is fine for a small set of reference descriptors. If the constant is 256 (the maximum number of bins for an 8-bit color channel), the histogram's size goes up to a whopping value of (256^3)*32/(10^6)=536.9 megabytes! For an iOS application, this is unacceptable, given the platform's memory constraints. At best, in a high-end iOS device, one gigabyte of RAM might be available to each application. Conservatively, you should worry if your app's memory usage approaches 100 megabytes. Remember that OpenCV's SURF implementation is in the xfeatures2d module, which is part of opencv_contrib. If opencv_contrib is available, let's define the WITH_OPENCV_CONTRIB preprocessor flag. Then, our code imports the <opencv/xfeatures2d.hpp> header, and we use SURF. Otherwise, we use ORB. This selection also affects the implementation of BlobClassifier's constructor. OpenCV provides factory methods for various feature detectors, descriptors, and matchers, so we simply have to use the right combination of factory methods for SURF with Flann matching or ORB with brute-force matching based on the Hamming distance. Here is the constructor's implementation: BlobClassifier::BlobClassifier() { #ifdef WITH_OPENCV_CONTRIB   featureDetectorAndDescriptorExtractor =     cv::xfeatures2d::SURF::create();   descriptorMatcher = cv::DescriptorMatcher::create("FlannBased"); #else   featureDetectorAndDescriptorExtractor = cv::ORB::create();   descriptorMatcher = cv::DescriptorMatcher::create( "BruteForce-HammingLUT"); #endif } The update method's implementation calls a helper method, createBlobDescriptor, and adds the resulting BlobDescriptor to a vector of reference descriptors: void BlobClassifier::update(const Blob &referenceBlob) {   referenceBlobDescriptors.push_back(     createBlobDescriptor(referenceBlob)); } The clear method's implementation discards all the reference descriptors such that the BlobClassifier reverts to its initial, untrained state: void BlobClassifier::clear() {   referenceBlobDescriptors.clear(); } The implementation of the classify method relies on another helper method, findDistance. For each reference descriptor, classify calls findDistance to obtain a measure of dissimilarity between the query blob's descriptor and reference descriptor. We find the reference descriptor with the least distance (best similarity) and return its label as the classification result. If there are no reference descriptors, classify returns 0, the "unknown" label. 
Here is classify's implementation: void BlobClassifier::classify(Blob &detectedBlob) const {   BlobDescriptor detectedBlobDescriptor =     createBlobDescriptor(detectedBlob);   float bestDistance = FLT_MAX;   uint32_t bestLabel = 0;   for (const BlobDescriptor &referenceBlobDescriptor :       referenceBlobDescriptors) {     float distance = findDistance(detectedBlobDescriptor,       referenceBlobDescriptor);     if (distance < bestDistance) {       bestDistance = distance;       bestLabel = referenceBlobDescriptor.getLabel();     }   }   detectedBlob.setLabel(bestLabel); } The createBlobDescriptor helper method is responsible for calculating a normalized histogram of Bloband keypoint descriptors and using them to build a new BlobDescriptor. To calculate the (non-normalized) histogram, we use the cv::calcHist function. Among its arguments, it requires three arrays to specify the channels we want to use, the number of bins per channel, and the range of each channel's values. To normalize the resulting histogram, we divide by the number of pixels in the blob's image. The following code, pertaining to the histogram, is the first half of implementation of createBlobDescriptor: BlobDescriptor BlobClassifier::createBlobDescriptor(   const Blob &blob) const {    const cv::Mat &mat = blob.getMat();   int numChannels = mat.channels();     // Calculate the histogram of the blob's image.   cv::Mat histogram;   int channels[] = { 0, 1, 2 };   int numBins[] = { HISTOGRAM_NUM_BINS_PER_CHANNEL,     HISTOGRAM_NUM_BINS_PER_CHANNEL,     HISTOGRAM_NUM_BINS_PER_CHANNEL };   float range[] = { 0.0f, 256.0f };   const float *ranges[] = { range, range, range };   cv::calcHist(&mat, 1, channels, cv::Mat(), histogram, 3,     numBins, ranges);     // Normalize the histogram.   histogram *= (1.0f / (mat.rows * mat.cols)); Now, we must convert the blob's image to grayscale and obtain keypoints and keypoint descriptors using the detect and compute methods of cv::Feature2D. With the normalized histogram and keypoint descriptors, we have everything that we need to construct and return a new BlobDescriptor. Here is the remainder of implementation of createBlobDescriptor: // Convert the blob's image to grayscale.   cv::Mat grayMat;   switch (numChannels) {     case 4:       cv::cvtColor(mat, grayMat, cv::COLOR_BGRA2GRAY);       break;     default:       cv::cvtColor(mat, grayMat, cv::COLOR_BGR2GRAY);       break;   }     // Detect features in the grayscale image.   std::vector<cv::KeyPoint> keypoints;   featureDetectorAndDescriptorExtractor->detect(grayMat,     keypoints);     // Extract descriptors of the features.   cv::Mat keypointDescriptors;   featureDetectorAndDescriptorExtractor->compute(grayMat,     keypoints, keypointDescriptors);     return BlobDescriptor(histogram, keypointDescriptors,     blob.getLabel()); } The findDistance helper method performs histogram comparison using the cv::compareHist function and keypoint matching using the match method of cv::DescriptorMatcher. Each of the resulting keypoint matches has a distance, and we sum these distances. Then, as an overall measure of distance between the two blob descriptors, we return a weighted average of the histogram distance and the total keypoint matching distance. Here is the relevant code: float BlobClassifier::findDistance(   const BlobDescriptor &detectedBlobDescriptor,   const BlobDescriptor &referenceBlobDescriptor) const {    // Calculate the histogram distance.   
float histogramDistance = (float)cv::compareHist(     detectedBlobDescriptor.getNormalizedHistogram(),     referenceBlobDescriptor.getNormalizedHistogram(),     HISTOGRAM_COMPARISON_METHOD);     // Calculate the keypoint matching distance.   float keypointMatchingDistance = 0.0f;   std::vector<cv::DMatch> keypointMatches;   descriptorMatcher->match(     detectedBlobDescriptor.getKeypointDescriptors(),     referenceBlobDescriptor.getKeypointDescriptors(),     keypointMatches);   for (const cv::DMatch &keypointMatch : keypointMatches) {     keypointMatchingDistance += keypointMatch.distance;   }     return histogramDistance * HISTOGRAM_DISTANCE_WEIGHT +     keypointMatchingDistance * KEYPOINT_MATCHING_DISTANCE_WEIGHT; } That is the end of the blob classifier's code. Again, we see that a single class can provide useful, general-purpose computer vision functionality without a terribly complicated implementation. Perhaps this is a Zen moment; our previous work and studieshave been a path to (some kind of) simplicity! Of course, OpenCV hides a lot of complexity for us in its implementations of histogram-related functions and keypoint-related classes, and in this way, the library offers us a relatively gentle path. For completeness, note that the BlobDescriptor class has a straightforward implementation. Create a new file, BlobDescriptor.cpp, and fill it with the following bodies for a constructor and getters: #include "BlobDescriptor.h"   BlobDescriptor::BlobDescriptor(const cv::Mat &normalizedHistogram, const cv::Mat &keypointDescriptors, uint32_t label) : normalizedHistogram(normalizedHistogram) , keypointDescriptors(keypointDescriptors) , label(label) { }   const cv::Mat &BlobDescriptor::getNormalizedHistogram() const {   return normalizedHistogram; } const cv::Mat &BlobDescriptor::getKeypointDescriptors() const {   return keypointDescriptors; } uint32_t BlobDescriptor::getLabel() const {   return label; } Summary Now, we have finished all the code for the detector, descriptor, and classifier! Again, for more information, refer to Chapter 5, Classifying Coins and Commodities in the book,iOS Application Development with OpenCV 3. Resources for Article: Further resources on this subject: Making subtle color shifts with curves [article] New functionality in OpenCV 3.0 [article] Camera Calibration [article]


Understanding UIKit Fundamentals

Packt
01 Jun 2016
9 min read
In this article by Jak Tiano, author of the book Learning Xcode, we're mostly going to be talking about concepts rather than concrete code examples. Since we've been using UIKit throughout the whole book (and we will continue to do so), I'm going to do my best to elaborate on some things we've already seen and give you new information that you can apply to what we do in the future. (For more resources related to this topic, see here) As we've heard a lot about UIKit. We've seen it at the top of our Swift files in the form of import UIKit. We've used many of the UI elements and classes it provides for us. Now, it's time to take an isolated look at the biggest and most important framework in iOS development. Application management Unlike most other frameworks in the iOS SDK, UIKit is deeply integrated into the way your app runs. That's because UIKit is responsible for some of the most essential functionalities of an app. It also manages your application's window and view architecture, which we'll be talking about next. It also drives the main run loop, which basically means that it is executing your program. The UIDevice class In addition to these very important features, UIKit also gives you access to some other useful information about the device the app is currently running on through the UIDevice class. Using online resources and documentation: Since this article is about exploring frameworks, it is a good time to remind you that you can (and should!) always be searching online for anything and everything. For example, if you search for UIDevice, you'll end up on Apple's developer page for the UIDevice class, where you can see even more bits of information that you can pull from it. As we progress, keep in mind that searching the name of a class or framework will usually give you quick access to the full documentation. Here are some code examples of the information you can access: UIDevice.currentDevice().name UIDevice.currentDevice().model UIDevice.currentDevice().orientation UIDevice.currentDevice().batteryLevel UIDevice.currentDevice().systemVersion Some developers have a little bit of fun with this information: for example, Snapchat gives you a special filter to use for photos when your battery is fully charged.Always keep an open mind about what you can do with data you have access to! Views One of the most important responsibilities of UIKit is that it provides views and the view hierarchy architecture. We've talked before about what a view is within the MVC programming paradigm, but here we're referring to the UIView class that acts as the base for (almost) all of our visual content in iOS programming. While it wasn't too important to know about when just getting our feet wet, now is a good time to really dig in a bit and understand what UIViews are and how they work both on their own and together. Let's start from the beginning: a view (UIView) defines a rectangle on your screen that is responsible for output and input, meaning drawing to the screen and receiving touch events.It can also contain other views, known as subviews, which ultimately create a view hierarchy. As a result of this hierarchy, we have to be aware of the coordinate systems involved. Now, let's talk about each of these three functions: drawing, hierarchies, and coordinate systems. Drawing Each UIView is responsible for drawing itself to the screen. In order to optimize drawing performance, the views will usually try to render their content once and then reuse that image content when it doesn't change. 
It can even move and scale content around inside of it without needing to redraw, which can be an expensive operation.

(Figure: An overview of how UIView draws itself to the screen)

With the system-provided views, all of this is handled automatically. However, if you ever need to create your own UIView subclass that uses custom drawing, it's important to know what goes on behind the scenes. To implement custom drawing in a view, you need to implement the drawRect() function in your subclass. When something changes in your view, you need to call the setNeedsDisplay() function, which acts as a marker to let the system know that your view needs to be redrawn. During the next drawing cycle, the code in your drawRect() function will be executed to refresh the content of your view, which will then be cached for performance. A code example of this custom drawing functionality is a bit beyond the scope of this article, but discussing this will hopefully give you a better understanding of how drawing works in addition to giving you a jumping off point should you need to do this in the future.

Hierarchies

Now, let's discuss view hierarchies. When we would use a view controller in a storyboard, we would drag UI elements onto the view controller. However, what we were actually doing is adding a subview to the base view of the view controller. And in fact, that base view was a subview of the UIWindow, which is also a UIView. So, though we haven't really acknowledged it, we've already put view hierarchies to work many times.

The easiest way to think about what happens in a view hierarchy is that you set one view's parent coordinate system relative to another view. By default, you'd be setting a view's coordinate system to be relative to the base view, which is normally just the whole screen. But you can also set the parent coordinate system to some other view so that when you move or transform the parent view, the child views are moved and transformed along with it.

(Figure: Example of how parenting works with a view hierarchy)

It's also important to note that the view hierarchy impacts the draw order of your views. All of a view's subviews will be drawn on top of the parent view, and the subviews will be drawn in the order they were added (the last subview added will be on top). To add a subview through code, you can use the addSubview() function. Here's an example:

var view1 = UIView()
var view2 = UIView()
view1.addSubview(view2)

The top-most view will intercept a touch first, and if it doesn't respond, it will pass the touch down the view hierarchy until a view does respond.

Coordinate systems

With all of this drawing and parenting, we need to take a minute to look at how the coordinate system works in UIKit for our views. The origin (0,0 point) in UIKit is the top left of the screen, and it increases along X to the right and along Y downward. Each view is placed in this upper-left positioning system relative to its parent view's origin. Be careful! Other frameworks in iOS use different coordinate systems. For example, SpriteKit uses the lower-left corner as the origin.

Each view also has its own set of positioning information. This is composed of the view's frame, bounds, and center. The frame rectangle describes the origin and the size of the view relative to its parent view's coordinate system. The bounds rectangle describes the origin and the size of the view from its local coordinate system. The center is just the center point of the view relative to the parent view.
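To make the distinction concrete, here is a small playground-style sketch (not from the original article; the numbers are arbitrary) showing how frame, bounds, and center describe the same view differently:

import UIKit

// A container placed at (20, 40) in its parent's coordinate system.
let container = UIView(frame: CGRect(x: 20, y: 40, width: 200, height: 100))

// A child placed at (10, 10) inside the container.
let child = UIView(frame: CGRect(x: 10, y: 10, width: 50, height: 50))
container.addSubview(child)

container.frame   // (20.0, 40.0, 200.0, 100.0) - relative to the parent
container.bounds  // (0.0, 0.0, 200.0, 100.0) - the view's own coordinate space
container.center  // (120.0, 90.0) - midpoint, in the parent's coordinates
child.frame       // (10.0, 10.0, 50.0, 50.0) - relative to the container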
When dealing with so many different coordinate systems, it can seem like a nightmare to compare positions from different views. Luckily, the UIView class provides a simple convertPoint()function to convert points between systems. Try running this little experiment in a playground to see how the point gets converted from one view's coordinate system to the other: import UIKit let view1 = UIView(frame: CGRect(x: 0, y: 0, width: 50, height: 50)) let view2 = UIView(frame: CGRect(x: 10, y: 10, width: 30, height: 30)) view1.addSubview(view2) let pointFrom1 = CGPoint(x: 20, y: 20) let pointFromView2 = view1.convertPoint(pointFrom1, toView: view2) Hopefully, you now have a much better understanding of some of the underlying workings of the view system in UIKit. Documents, displays, printing, and more In this section, I'm going to do my best to introduce you to the many additional features of the UIKit framework. The idea is to give you a better understanding of what is possible with UIKit, and if anything sounds interesting to you, you can go off and explore these features on your own. Documents UIKit has built in support for documents, much like you'd find on a desktop operating system. Using the UIDocument class, UIKit can help you save and load documents in the background in addition to saving them to iCloud. This could be a powerful feature for any app that allows the user to create content that they expect to save and resume working on later. Displays On most new iOS devices, you can connect external screens via HDMI. You can take advantage of these external displays by creating a new instance of the UIWindow class, and associating it with the external display screen. You can then add subviews to that window to create a secondscreen experience for devices like a bigscreen TV. While most consumers don't ever use HDMI-connected external displays, this is a great feature to keep in mind when working on internal applications for corporate or personal use. Printing Using the UIPrintInteractionController, you can set up and send print jobs to AirPrint-enabled printers on the user's network. Before you print, you can also create PDFs by drawing content off screen to make printing easier. And more! There are many more features of UIKit that are just waiting to be explored! To be honest, UIKit seems to be pretty much a dumping ground for any general features that were just a bit too small to deserve their own framework. If you do some digging in Apple's documentation, you'll find all kinds of interesting things you can do with UIKit, such as creating custom keyboards, creating share sheets, and custom cut-copy-paste support. Summary In this article, we looked at the biggest and most important UIKit and learned about some of the most important system processes like the view hierarchy. Resources for Article:   Further resources on this subject: Building Surveys using Xcode [article] Run Xcode Run [article] Tour of Xcode [article]


Step Detector and Step Counters Sensors

Packt
14 Apr 2016
13 min read
In this article by Varun Nagpal, author of the book, Android Sensor Programming By Example, we will focus on learning about the use of step detector and step counter sensors. These sensors are very similar to each other and are used to count steps. Both sensors are based on a common hardware sensor, which internally uses the accelerometer, but Android still treats them as logically separate sensors. Both of these sensors are highly battery optimized and consume very low power. Now, let's look at each individual sensor in detail.

(For more resources related to this topic, see here.)

The step counter sensor

The step counter sensor is used to get the total number of steps taken by the user since the last reboot (power on) of the phone. When the phone is restarted, the value of the step counter sensor is reset to zero. In the onSensorChanged() method, the number of steps is given by event.values[0]; although it's a float value, the fractional part is always zero. The event timestamp represents the time at which the last step was taken. This sensor is especially useful for those applications that don't want to run in the background and maintain the history of steps themselves.

This sensor works in batches and in continuous mode. If we specify 0 or no latency in the SensorManager.registerListener() method, then it works in continuous mode; otherwise, if we specify any latency, then it groups the events in batches and reports them at the specified latency. For prolonged usage of this sensor, it's recommended to use the batch mode, as it saves power. The step counter uses the on-change reporting mode, which means it reports the event as soon as there is a change in the value.

The step detector sensor

The step detector sensor triggers an event each time a step is taken by the user. The value reported in the onSensorChanged() method is always one, the fractional part being always zero, and the event timestamp is the time when the user's foot hit the ground. The step detector sensor has very low latency in reporting the steps, which is generally within 1 to 2 seconds.

The step detector sensor has lower accuracy and produces more false positives, as compared to the step counter sensor. The step counter sensor is more accurate, but has more latency in reporting the steps, as it uses this extra time after each step to remove any false positive values. The step detector sensor is recommended for those applications that want to track the steps in real time and want to maintain their own history of each and every step with their timestamp.

Time for action – using the step counter sensor in activity

Now, you will learn how to use the step counter sensor with a simple example. The good thing about the step counter is that, unlike other sensors, your app doesn't need to tell the sensor when to start counting the steps and when to stop counting them. It automatically starts counting as soon as the phone is powered on.
For using it, we just have to register the listener with the sensor manager and then unregister it after using it. In the following example, we will show the total number of steps taken by the user since the last reboot (power on) of the phone in the Android activity. We created a PedometerActivity and implemented it with the SensorEventListener interface, so that it can receive the sensor events. We initiated the SensorManager and Sensor object of the step counter and also checked the sensor availability in the OnCreate() method of the activity. We registered the listener in the onResume() method and unregistered it in the onPause() method as a standard practice. We used a TextView to display the total number of steps taken and update its latest value in the onSensorChanged() method. public class PedometerActivity extends Activity implements SensorEventListener{ private SensorManager mSensorManager; private Sensor mSensor; private boolean isSensorPresent = false; private TextView mStepsSinceReboot; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_pedometer); mStepsSinceReboot = (TextView)findViewById(R.id.stepssincereboot); mSensorManager = (SensorManager) this.getSystemService(Context.SENSOR_SERVICE); if(mSensorManager.getDefaultSensor(Sensor.TYPE_STEP_COUNTER) != null) { mSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_STEP_COUNTER); isSensorPresent = true; } else { isSensorPresent = false; } } @Override protected void onResume() { super.onResume(); if(isSensorPresent) { mSensorManager.registerListener(this, mSensor, SensorManager.SENSOR_DELAY_NORMAL); } } @Override protected void onPause() { super.onPause(); if(isSensorPresent) { mSensorManager.unregisterListener(this); } } @Override public void onSensorChanged(SensorEvent event) { mStepsSinceReboot.setText(String.valueOf(event.values[0])); } Time for action – maintaining step history with step detector sensor The Step counter sensor works well when we have to deal with the total number of steps taken by the user since the last reboot (power on) of the phone. It doesn't solve the purpose when we have to maintain history of each and every step taken by the user. The Step counter sensor may combine some steps and process them together, and it will only update with an aggregated count instead of reporting individual step detail. For such cases, the step detector sensor is the right choice. In our next example, we will use the step detector sensor to store the details of each step taken by the user, and we will show the total number of steps for each day, since the application was installed. Our next example will consist of three major components of Android, namely service, SQLite database, and activity. Android service will be used to listen to all the individual step details using the step counter sensor when the app is in the background. All the individual step details will be stored in the SQLite database and finally the activity will be used to display the list of total number of steps along with dates. Let's look at the each component in detail. The first component of our example is PedometerListActivity. We created a ListView in the activity to display the step count along with dates. Inside the onCreate() method of PedometerListActivity, we initiated the ListView and ListAdaptor required to populate the list. 
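One detail worth noting: the SensorEventListener interface also requires an onAccuracyChanged() callback, which the excerpt above leaves out. A minimal stub like the following (not part of the original listing) is enough to satisfy the interface:

@Override
public void onAccuracyChanged(Sensor sensor, int accuracy) {
    // Step sensors rarely report accuracy changes, so there is nothing to do here.
}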
Another important task that we do in the onCreate() method is starting the service (StepsService.class), which will listen to all the individual steps' events. We also make a call to the getDataForList() method, which is responsible for fetching the data for ListView. public class PedometerListActivity extends Activity{ private ListView mSensorListView; private ListAdapter mListAdapter; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); mSensorListView = (ListView)findViewById(R.id.steps_list); getDataForList(); mListAdapter = new ListAdapter(); mSensorListView.setAdapter(mListAdapter); Intent mStepsIntent = new Intent(getApplicationContext(), StepsService.class); startService(mStepsIntent); } In our example, the DateStepsModel class is used as a POJO (Plain Old Java Object) class, which is a handy way of grouping logical data together, to store the total number of steps and date. We also use the StepsDBHelper class to read and write the steps data in the database (discussed further in the next section). Inside the getDataForList() method, we initiated the object of the StepsDBHelper class and call the readStepsEntries() method of the StepsDBHelper class, which returns ArrayList of the DateStepsModel objects containing the total number of steps along with dates after reading from database. The ListAdapter class is used for populating the values for ListView, which internally uses ArrayList of DateStepsModel as the data source. The individual list item is the string, which is the concatenation of date and the total number of steps. class DateStepsModel { public String mDate; public int mStepCount; } private StepsDBHelper mStepsDBHelper; private ArrayList<DateStepsModel> mStepCountList; public void getDataForList() { mStepsDBHelper = new StepsDBHelper(this); mStepCountList = mStepsDBHelper.readStepsEntries(); } private class ListAdapter extends BaseAdapter{ private TextView mDateStepCountText; @Override public int getCount() { return mStepCountList.size(); } @Override public Object getItem(int position) { return mStepCountList.get(position); } @Override public long getItemId(int position) { return position; } @Override public View getView(int position, View convertView, ViewGroup parent) { if(convertView==null){ convertView = getLayoutInflater().inflate(R.layout.list_rows, parent, false); } mDateStepCountText = (TextView)convertView.findViewById(R.id.sensor_name); mDateStepCountText.setText(mStepCountList.get(position).mDate + " - Total Steps: " + String.valueOf(mStepCountList.get(position).mStepCount)); return convertView; } } The second component of our example is StepsService, which runs in the background and listens to the step detector sensor until the app is uninstalled. We implemented this service with the SensorEventListener interface so that it can receive the sensor events. We also initiated theobjects of StepsDBHelper, SensorManager, and the step detector sensor inside the OnCreate() method of the service. We only register the listener when the step detector sensor is available on the device. A point to note here is that we never unregistered the listener because we expect our app to log the step information indefinitely until the app is uninstalled. Both step detector and step counter sensors are very low on battery consumptions and are highly optimized at the hardware level, so if the app really requires, it can use them for longer durations without affecting the battery consumption much. 
We get a step detector sensor callback in the onSensorChanged() method whenever the operating system detects a step, and from this callback, we call the createStepsEntry() method of the StepsDBHelper class to store the step information in the database.

public class StepsService extends Service implements SensorEventListener{

    private SensorManager mSensorManager;
    private Sensor mStepDetectorSensor;
    private StepsDBHelper mStepsDBHelper;

    @Override
    public void onCreate() {
        super.onCreate();
        mSensorManager = (SensorManager) this.getSystemService(Context.SENSOR_SERVICE);
        if(mSensorManager.getDefaultSensor(Sensor.TYPE_STEP_DETECTOR) != null) {
            mStepDetectorSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_STEP_DETECTOR);
            mSensorManager.registerListener(this, mStepDetectorSensor, SensorManager.SENSOR_DELAY_NORMAL);
            mStepsDBHelper = new StepsDBHelper(this);
        }
    }

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        return Service.START_STICKY;
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        mStepsDBHelper.createStepsEntry();
    }

The last component of our example is the SQLite database. We created a StepsDBHelper class and extended it from the SQLiteOpenHelper abstract utility class provided by the Android framework to easily manage database operations. In the class, we created a database called StepsDatabase, which is automatically created on the first object creation of the StepsDBHelper class by the onCreate() method. This database has one table, StepsSummary, which consists of only three columns (id, stepscount, and creationdate). The first column, id, is the unique integer identifier for each row of the table and is incremented automatically on creation of every new row. The second column, stepscount, is used to store the total number of steps taken for each date. The third column, creationdate, is used to store the date in the mm/dd/yyyy string format.

Inside the createStepsEntry() method, we first check whether there is an existing step count for the current date, and if we find one, we read the existing step count of the current date and update the step count by incrementing it by 1. If there is no step count for the current date, then we assume that it is the first step of the current date and we create a new entry in the table with the current date and a step count value of 1. The createStepsEntry() method is called from onSensorChanged() of the StepsService class whenever a new step is detected by the step detector sensor.
public class StepsDBHelper extends SQLiteOpenHelper { private static final int DATABASE_VERSION = 1; private static final String DATABASE_NAME = "StepsDatabase"; private static final String TABLE_STEPS_SUMMARY = "StepsSummary"; private static final String ID = "id"; private static final String STEPS_COUNT = "stepscount"; private static final String CREATION_DATE = "creationdate";//Date format is mm/dd/yyyy private static final String CREATE_TABLE_STEPS_SUMMARY = "CREATE TABLE " + TABLE_STEPS_SUMMARY + "(" + ID + " INTEGER PRIMARY KEY AUTOINCREMENT," + CREATION_DATE + " TEXT,"+ STEPS_COUNT + " INTEGER"+")"; StepsDBHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } @Override public void onCreate(SQLiteDatabase db) { db.execSQL(CREATE_TABLE_STEPS_SUMMARY); } public boolean createStepsEntry() { boolean isDateAlreadyPresent = false; boolean createSuccessful = false; int currentDateStepCounts = 0; Calendar mCalendar = Calendar.getInstance(); String todayDate = String.valueOf(mCalendar.get(Calendar.MONTH))+"/" + String.valueOf(mCalendar.get(Calendar.DAY_OF_MONTH))+"/"+String.valueOf(mCalendar.get(Calendar.YEAR)); String selectQuery = "SELECT " + STEPS_COUNT + " FROM " + TABLE_STEPS_SUMMARY + " WHERE " + CREATION_DATE +" = '"+ todayDate+"'"; try { SQLiteDatabase db = this.getReadableDatabase(); Cursor c = db.rawQuery(selectQuery, null); if (c.moveToFirst()) { do { isDateAlreadyPresent = true; currentDateStepCounts = c.getInt((c.getColumnIndex(STEPS_COUNT))); } while (c.moveToNext()); } db.close(); } catch (Exception e) { e.printStackTrace(); } try { SQLiteDatabase db = this.getWritableDatabase(); ContentValues values = new ContentValues(); values.put(CREATION_DATE, todayDate); if(isDateAlreadyPresent) { values.put(STEPS_COUNT, ++currentDateStepCounts); int row = db.update(TABLE_STEPS_SUMMARY, values, CREATION_DATE +" = '"+ todayDate+"'", null); if(row == 1) { createSuccessful = true; } db.close(); } else { values.put(STEPS_COUNT, 1); long row = db.insert(TABLE_STEPS_SUMMARY, null, values); if(row!=-1) { createSuccessful = true; } db.close(); } } catch (Exception e) { e.printStackTrace(); } return createSuccessful; } The readStepsEntries() method is called from PedometerListActivity to display the total number of steps along with the date in the ListView. The readStepsEntries() method reads all the step counts along with their dates from the table and fills the ArrayList of DateStepsModelwhich is used as a data source for populating the ListView in PedometerListActivity. public ArrayList<DateStepsModel> readStepsEntries() { ArrayList<DateStepsModel> mStepCountList = new ArrayList<DateStepsModel>(); String selectQuery = "SELECT * FROM " + TABLE_STEPS_SUMMARY; try { SQLiteDatabase db = this.getReadableDatabase(); Cursor c = db.rawQuery(selectQuery, null); if (c.moveToFirst()) { do { DateStepsModel mDateStepsModel = new DateStepsModel(); mDateStepsModel.mDate = c.getString((c.getColumnIndex(CREATION_DATE))); mDateStepsModel.mStepCount = c.getInt((c.getColumnIndex(STEPS_COUNT))); mStepCountList.add(mDateStepsModel); } while (c.moveToNext()); } db.close(); } catch (Exception e) { e.printStackTrace(); } return mStepCountList; } What just happened? We created a small pedometer utility app that maintains the step history along with dates using the steps detector sensor. We used PedometerListActivityto display the list of the total number of steps along with their dates. StepsServiceis used to listen to all the steps detected by the step detector sensor in the background. 
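Note that Calendar.MONTH is zero-based, so the dates stored by the code above will have January as month 0. If you prefer a conventional mm/dd/yyyy string, a small helper along these lines could replace the manual concatenation (a hypothetical variation, not from the book; the imports would go at the top of StepsDBHelper.java):

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

// Formats today's date as, for example, "04/14/2016" with a one-based month.
private String getTodayDateString() {
    SimpleDateFormat formatter = new SimpleDateFormat("MM/dd/yyyy", Locale.US);
    return formatter.format(new Date());
}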
And finally, the StepsDBHelperclass is used to create and update the total step count for each date and to read the total step counts along with dates from the database. Resources for Article: Further resources on this subject: Introducing the Android UI [article] Building your first Android Wear Application [article] Mobile Phone Forensics – A First Step into Android Forensics [article]


How to Use Currying in Swift for Fun and Profit

Alexander Altman
06 Apr 2016
5 min read
Swift takes inspiration from functional languages in a lot of its features, and one of those features is currying. The idea behind currying is relatively straightforward, and Apple has already taken the time to explain the basics of it in The Swift Programming Language. Nonetheless, there's a lot more to currying in Swift than what first meets the eye. What is currying? Let's say we have a function, f, which takes two parameters, a: Int and b: String, and returns a Bool: func f(a: Int, _ b: String) -> Bool { // … do somthing here … } Here, we're taking both a and b simultaneously as parameters to our function, but we don't have to do it that way! We can just as easily write this function to take just a as a parameter and then return another function that takes b as it's only parameter and returns the final result: func f(a: Int) -> ((String) -> Bool) { return { b in // … do somthing here … } } (I've added a few extra parentheses for clarity, but Swift is actually just fine if you write String -> Bool instead of ((String) -> Bool); the two notations mean exactly the same thing.) This formulation uses a closure, but you can also use a nested function for the exact same effect: func f(a: Int) -> ((String) -> Bool) { func g(b: String) -> Bool { // … do somthing here … } return g } Of course, Swift wouldn't be Swift without providing a convenient syntax for things like this, so there is even a third way to write the curried version of f, and it's (usually) preferred over either of the previous two: func f(a: Int)(_ b: String) -> Bool { // … do somthing here … } Any of these iterations of our curried function f can be called like this: let res: Bool = f(1)("hello") Which should look very similar to the way you would call the original uncurried f: let res: Bool = f(1, "hello") Currying isn't limited to just two parameters either; here's an example of a partially curried function of five parameters (taking them in groups of two, one, and two): func weirdAddition(x: Int, use useX: Bool)(_ y: Int)(_ z: Int, use useZ: Bool) -> Int { return (useX ? x : 0) + y + (useZ ? z : 0) } How is currying used in Swift? Believe it or not, Swift actually uses currying all over the place, even if you don't notice it. Probably, the most prominent example is that of instance methods, which are just curried type methods: // This: NSColor.blueColor().shadowWithLevel(1/3) // …is the same as this: NSColor.shadowWithLevel(NSColor.blueColor())(1/3) But, there's a much deeper implication of currying's availability in Swift: all functions secretly take only one parameter! How is this possible, you ask? It has to do with how Swift treats tuples. A function that “officially” takes, say, three parameters, actually only takes one parameter that happens to be a three-tuple. This is perhaps most visible when exploited via the higher-order collections method: func dotProduct(xVec: [Double], _ yVec: [Double]) -> Double { // Note that (this particular overload of) the `*` operator // has the signature `(Double, Double) -> Double`. return zip(xVec, yVec).map(*).reduce(0, combine: +) } It would seem that anything you can do with tuples, you can do with a function parameter list and vice versa; in fact, that is almost true. The four features of function parameter lists that don't carry over directly into tuples are the variadic, inout, defaulted, and @autoclosure parameters. 
You can, technically, form a variadic, inout, defaulted, or @autoclosure tuple type, but if you try to use it in any context other than as a function's parameter type, swiftc will give you an error. What you definitely can do with tuples is use named values, notwithstanding the unfortunate prohibition on single-element tuples in Swift (named or not). Apple provides some information on tuples with named elements in The Swift Programming Language; it also gives an example of one in the same book. It should be noted that the names given to tuple elements are somewhat ephemeral in that they can very easily be introduced, eliminated, and altered via implicit conversions. This applies regardless of whether the tuple type is that of a standalone value or of a function's parameter: // converting names in a function's parameter list func printBoth(first x: Int, second y: String) { print(x, y, separator: ", ") } let printTwo: (a: Int, b: String) -> Void = printBoth // converting names in a standalone tuple type // (for some reason, Swift dislikes assigning `firstAndSecond` // directly to `aAndB`, but going through `nameless` is fine) let firstAndSecond: (first: Int, second: String) = (first: 1, second: "hello") let nameless: (Int, String) = firstAndSecond let aAndB: (a: Int, b: String) = nameless Currying, with its connection to tuples, is a very powerful feature of Swift. Use it wherever it seems helpful, and the language will be more than happy to oblige. About the author Alexander Altman is a functional programming enthusiast who enjoys the mathematical and ergonomic aspects of programming language design. He's been working with Swift since the language's first public release, and he is one of the core contributors to the TypeLift project.

Integrating with Objective-C

Packt
01 Apr 2016
11 min read
In this article written by Kyle Begeman author of the book Swift 2 Cookbook, we will cover the following recipes: Porting your code from one language to another Replacing the user interface classes Upgrading the app delegate Introduction Swift 2 is out, and we can see that it is going to replace Objective-C on iOS development sooner or later, however how should you migrate your Objective-C app? Is it necessary to rewrite everything again? Of course you don't have to rewrite a whole application in Swift from scratch, you can gradually migrate it. Imagine a four years app developed by 10 developers, it would take a long time to be rewritten. Actually, you've already seen that some of the codes we've used in this book have some kind of "old Objective-C fashion". The reason is that not even Apple computers could migrate the whole Objective-C code into Swift. (For more resources related to this topic, see here.) Porting your code from one language to another In the previous recipe we learned how to add a new code into an existing Objective-C project, however you shouldn't only add new code but also, as far as possible, you should migrate your old code to the new Swift language. If you would like to keep your application core on Objective-C that's ok, but remember that new features are going to be added on Swift and it will be difficult keeping two languages on the same project. In this recipe we are going to port part of the code, which is written in Objective-C to Swift. Getting ready Make a copy of the previous recipe, if you are using any version control it's a good time for committing your changes. How to do it… Open the project and add a new file called Setup.swift, here we are going to add a new class with the same name (Setup): class Setup { class func generate() -> [Car]{ var result = [Car]() for distance in [1.2, 0.5, 5.0] { var car = Car() car.distance = Float(distance) result.append(car) } var car = Car() car.distance = 4 var van = Van() van.distance = 3.8 result += [car, van] return result } } Now that we have this car array generator we can call it on the viewDidLoad method replacing the previous code: - (void)viewDidLoad { [super viewDidLoad]; vehicles = [Setup generate]; [self->tableView reloadData]; } Again press play and check that the application is still working. How it works… The reason we had to create a class instead of creating a function is that you can only export to Objective-C classes, protocols, properties, and subscripts. Bear that in mind in case of developing with the two languages. If you would like to export a class to Objective-C you have two choices, the first one is inheriting from NSObject and the other one is adding the @objc attribute before your class, protocol, property, or subscript. If you paid attention, our method returns a Swift array but it was converted to an NSArray, but as you might know, they are different kinds of array. Firstly, because Swift arrays are mutable and NSArray are not, and the other reason is that their methods are different. Can we use NSArray in Swift? The answer is yes, but I would recommend avoiding it, imagine once finished migrating to Swift your code still follows the old way, it would be another migration. 
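To make the export rules concrete, here is a small sketch (not part of the recipe; the type names are made up) showing the two ways of exposing Swift declarations to Objective-C mentioned above:

import Foundation

// Option 1: inherit from NSObject, so the class is visible to Objective-C.
class FareCalculator: NSObject {
    func fare(forDistance distance: Float) -> Float {
        return 2.5 + distance * 1.2
    }
}

// Option 2: mark a protocol (or property, or subscript) with the @objc attribute.
@objc protocol Bookable {
    func book()
}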
There's more…

Migrating from Objective-C is something that you should do with care. Don't try to change the whole application at once, and remember that some Swift objects behave differently from their Objective-C counterparts; for example, dictionaries in Swift have their key and value types specified, but in Objective-C they can be of any type.

Replacing the user interface classes

At this moment you know how to migrate the model part of an application; however, in real life we also have to replace the graphical classes. Doing so is not complicated, but it does involve a number of details.

Getting ready

Continuing with the previous recipe, make a copy of it or just commit the changes you have, and let's continue with our migration.

How to do it…

First create a new file called MainViewController.swift and start by importing UIKit:

import UIKit

The next step is creating a class called MainViewController; this class must inherit from UIViewController and implement the protocols UITableViewDataSource and UITableViewDelegate:

class MainViewController: UIViewController, UITableViewDataSource, UITableViewDelegate {

Then, add the attributes we had in the previous view controller, keeping the same names you used before:

private var vehicles = [Car]()

@IBOutlet var tableView: UITableView!

Next, we need to implement the methods. Let's start with the table view data source methods:

func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return vehicles.count
}

func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    var cell: UITableViewCell? = self.tableView.dequeueReusableCellWithIdentifier("vehiclecell")
    if cell == nil {
        cell = UITableViewCell(style: .Subtitle, reuseIdentifier: "vehiclecell")
    }
    var currentCar = self.vehicles[indexPath.row]
    cell!.textLabel?.numberOfLines = 1
    cell!.textLabel?.text = "Distance \(currentCar.distance * 1000) meters"
    var detailText = "Pax: \(currentCar.pax) Fare: \(currentCar.fare)"
    if currentCar is Van {
        detailText += ", Volume: \((currentCar as Van).capacity)"
    }
    cell!.detailTextLabel?.text = detailText
    cell!.imageView?.image = currentCar.image
    return cell!
}

Pay attention: this conversion is not 100% equivalent. The fare, for example, isn't going to be shown with two digits of precision; there is an explanation later of why we are not going to fix this now.

The next step is adding the event, in this case the action performed when the user selects a car:

func tableView(tableView: UITableView, willSelectRowAtIndexPath indexPath: NSIndexPath) -> NSIndexPath? {
    var currentCar = self.vehicles[indexPath.row]
    var time = currentCar.distance / 50.0 * 60.0
    UIAlertView(title: "Car booked", message: "The car will arrive in \(time) minutes", delegate: nil, cancelButtonTitle: "OK").show()
    return indexPath
}

As you can see, we only need one more step to complete our code, in this case viewDidLoad. Note that another difference between Objective-C and Swift is that in Swift you have to specify that you are overriding an existing method:

override func viewDidLoad() {
    super.viewDidLoad()
    vehicles = Setup.generate()
    self.tableView.reloadData()
}
} // end of class

Our code is complete, but of course our application is still using the old code.
To complete this operation, click on the storyboard; if the document outline isn't being displayed, click on the Editor menu and then on Show Document Outline. Now that you can see the document outline, click on View Controller, which appears with a yellow circle with a square inside. Then, on the right-hand side, click on the identity inspector, go to the custom class, and change the value of the class from ViewController to MainViewController.

After that, press play and check that your application is running; select a car and check that it is working. Make sure that it is running with your new Swift class by paying attention to the fare value, which in this case isn't shown with two digits of precision.

Is everything done? I would say no; it's a good time to commit your changes. Lastly, delete the original Objective-C files, because you won't need them anymore.

How it works…

As you can see, it's not so hard to replace an old view controller with a Swift one. The first thing you need to do is create a new view controller class with its protocols. Keep the same names you had in your old code for attributes and methods that are linked as IBActions; it will make the switch very straightforward, otherwise you will have to link them again.

Bear in mind that you need to be sure that your changes are applied and working. Sometimes it is a good idea to have something visibly different; otherwise your application could still be using the old Objective-C code without you realizing it. Try to modernize your code using the Swift way instead of the old Objective-C style; for example, nowadays it's preferable to use string interpolation rather than stringWithFormat.

We also learned that you don't need to relink any action or outlet if you keep the same name. If you want to change the name of anything, you might first keep its original name, test your app, and after that refactor it following the traditional refactoring steps. Don't delete the original Objective-C files until you are sure that the equivalent Swift file covers every piece of functionality.

There's more…

This application had only one view controller; however, applications usually have more than one. In that case, the best way to update them is one by one instead of all at the same time.

Upgrading the app delegate

As you know, there is an object that controls the events of an application, called the application delegate. Usually you shouldn't have much code here, but you might have a little. For example, you may deactivate camera or GPS requests when your application goes to the background and reactivate them when the app becomes active again. It is certainly a good idea to update this file even if you don't have any custom code in it, so it won't be a problem in the future.

Getting ready

If you are using a version control system, commit your changes from the last recipe, or if you prefer, just copy your application.

How to do it…

Open the previous recipe's application and create a new Swift file called ApplicationDelegate.swift, then create a class with the same name. As we didn't have any custom code in the previous application delegate, we can differentiate the new one by printing to the log console. So add this traditional application delegate to your Swift file:

class ApplicationDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
        print("didFinishLaunchingWithOptions")
        return true
    }

    func applicationWillResignActive(application: UIApplication) {
        print("applicationWillResignActive")
    }

    func applicationDidEnterBackground(application: UIApplication) {
        print("applicationDidEnterBackground")
    }

    func applicationWillEnterForeground(application: UIApplication) {
        print("applicationWillEnterForeground")
    }

    func applicationDidBecomeActive(application: UIApplication) {
        print("applicationDidBecomeActive")
    }

    func applicationWillTerminate(application: UIApplication) {
        print("applicationWillTerminate")
    }
}

Now go to your project navigator and expand the Supporting Files group, then click on the main.m file. In this file we are going to import the magic file, the Swift header file:

#import "Chapter_8_Vehicles-Swift.h"

After that, we have to specify that the application delegate is the new class we have, so replace the AppDelegate class in the UIApplicationMain call with ApplicationDelegate. Your main function should look like this:

int main(int argc, char * argv[]) {
    @autoreleasepool {
        return UIApplicationMain(argc, argv, nil, NSStringFromClass([ApplicationDelegate class]));
    }
}

It's time to press play and check whether the application is working. Press the home button, or the combination shift + command + H if you are using the simulator, and open your application again. Notice that you now have some messages in your log console.

Now that you are sure that your Swift code is working, remove the original app delegate and its import in main.m. Test your app just in case. You could consider this part finished, but actually we still have another step to do: removing the main.m file. This is very easy: click on the ApplicationDelegate.swift file and, before the class declaration, add the attribute @UIApplicationMain; then right-click on main.m and choose to delete it. Test it, and your application is done.

How it works…

The application delegate class has always been specified at the start of an application. In Objective-C, it follows the C entry point, which is a function called main. In iOS, you can specify the class that you want to use as an application delegate. If you program for OS X the procedure is different: you have to go to your nib file and change its class name to the new one.

Why did we have to change the main function and then eliminate it? The reason is that you should avoid massive changes; if something goes wrong, you won't know at which step you failed, so you will probably have to roll everything back. If you do your migration step by step, ensuring that it still works at each stage, any error you find will be easier to solve.

Avoid making massive changes to your project; changing things step by step makes issues easier to solve.

There's more…

In this recipe, we learned the last steps of migrating an app from Objective-C to Swift code. However, remember that programming is not only about applications; you can also have a framework. In the next recipe, we are going to learn how to create your own framework compatible with Swift and Objective-C.

Summary

This article shows you how Swift and Objective-C can live together and gives you a step-by-step guide on how to migrate your Objective-C app to Swift.

Resources for Article:

Further resources on this subject:
Concurrency and Parallelism with Swift 2 [article]
Swift for Open Source Developers [article]
Your First Swift 2 Project [article]

ALM – Developers and QA

Packt
30 Mar 2016
15 min read
This article by Can Bilgin, the author of Mastering Cross-Platform Development with Xamarin, provides an introduction to Application Lifecycle Management (ALM) and continuous integration methodologies on Xamarin cross-platform applications. As the part of the ALM process that is most relevant for developers, unit test strategies will be discussed and demonstrated, as well as automated UI testing. This article is divided into the following sections: Development pipeline Troubleshooting Unit testing UI testing (For more resources related to this topic, see here.) Development pipeline The development pipeline can be described as the virtual production line that steers a project from a mere bundle of business requirements to the consumers. Stakeholders that are part of this pipeline include, but are not limited to, business proxies, developers, the QA team, the release and configuration team, and finally the consumers themselves. Each stakeholder in this production line assumes different responsibilities, and they should all function in harmony. Hence, having an efficient, healthy, and preferably automated pipeline that is going to provide the communication and transfer of deliverables between units is vital for the success of a project. In the Agile project management framework, the development pipeline is cyclical rather than a linear delivery queue. In the application life cycle, requirements are inserted continuously into a backlog. The backlog leads to a planning and development phase, which is followed by testing and QA. Once the production-ready application is released, consumers can be made part of this cycle using live application telemetry instrumentation. Figure 1: Application life cycle management In Xamarin cross-platform application projects, development teams are blessed with various tools and frameworks that can ease the execution of ALM strategies. From sketching and mock-up tools available for early prototyping and design to source control and project management tools that make up the backbone of ALM, Xamarin projects can utilize various tools to automate and systematically analyze project timeline. The following sections of this article concentrate mainly on the lines of defense that protect the health and stability of a Xamarin cross-platform project in the timeline between the assignment of tasks to developers to the point at which the task or bug is completed/resolved and checked into a source control repository. Troubleshooting and diagnostics SDKs associated with Xamarin target platforms and development IDEs are equipped with comprehensive analytic tools. Utilizing these tools, developers can identify issues causing app freezes, crashes, slow response time, and other resource-related problems (for example, excessive battery usage). Xamarin.iOS applications are analyzed using the XCode Instruments toolset. In this toolset, there are a number of profiling templates, each used to analyze a certain perspective of application execution. Instrument templates can be executed on an application running on the iOS simulator or on an actual device. Figure 2: XCode Instruments Similarly, Android applications can be analyzed using the device monitor provided by the Android SDK. Using Android Monitor, memory profile, CPU/GPU utilization, and network usage can also be analyzed, and application-provided diagnostic information can be gathered. Android Debug Bridge (ADB) is a command-line tool that allows various manual or automated device-related operations. 
For Windows Phone applications, Visual Studio provides a number of analysis tools for profiling CPU usage, energy consumption, memory usage, and XAML UI responsiveness. XAML diagnostic sessions in particular can provide valuable information on problematic sections of view implementation and pinpoint possible visual and performance issues: Figure 3: Visual Studio XAML analyses Finally, Xamarin Profiler, as a maturing application (currently in preview release), can help analyze memory allocations and execution time. Xamarin Profiler can be used with iOS and Android applications. Unit testing The test-driven development (TDD) pattern dictates that the business requirements and the granular use-cases defined by these requirements should be initially reflected on unit test fixtures. This allows a mobile application to grow/evolve within the defined borders of these assertive unit test models. Whether following a TDD strategy or implementing tests to ensure the stability of the development pipeline, unit tests are fundamental components of a development project. Figure 4: Unit test project templates Xamarin Studio and Visual Studio both provide a number of test project templates targeting different areas of a cross-platform project. In Xamarin cross-platform projects, unit tests can be categorized into two groups: platform-agnostic and platform-specific testing. Platform-agnostic unit tests Platform-agnostic components, such as portable class libraries containing shared logic for Xamarin applications, can be tested using the common unit test projects targeting the .NET framework. Visual Studio Test Tools or the NUnit test framework can be used according to the development environment of choice. It is also important to note that shared projects used to create shared logic containers for Xamarin projects cannot be tested with .NET unit test fixtures. For shared projects and the referencing platform-specific projects, platform-specific unit test fixtures should be prepared. When following an MVVM pattern, view models are the focus of unit test fixtures since, as previously explained, view models can be perceived as a finite state machine where the bindable properties are used to create a certain state on which the commands are executed, simulating a specific use-case to be tested. This approach is the most convenient way to test the UI behavior of a Xamarin application without having to implement and configure automated UI tests. While implementing unit tests for such projects, a mocking framework is generally used to replace the platform-dependent sections of the business logic. Loosely coupling these dependent components makes it easier for developers to inject mocked interface implementations and increases the testability of these modules. The most popular mocking frameworks for unit testing are Moq and RhinoMocks. Both Moq and RhinoMocks utilize reflection and, more specifically, the Reflection.Emit namespace, which is used to generate types, methods, events, and other artifacts in the runtime. Aforementioned iOS restrictions on code generation make these libraries inapplicable for platform-specific testing, but they can still be included in unit test fixtures targeting the .NET framework. For platform-specific implementation, the True Fakes library provides compile time code generation and mocking features. 
Depending on the implementation specifics (such as namespaces used, network communication, multithreading, and so on), in some scenarios it is imperative to test the common logic implementation on specific platforms as well. For instance, some multithreading and parallel task implementations give different results on Windows Runtime, Xamarin.Android, and Xamarin.iOS. These variations generally occur because of the underlying platform's mechanism or slight differences between the .NET and Mono implementation logic. In order to ensure the integrity of these components, common unit test fixtures can be added as linked/referenced files to platform-specific test projects and executed on the test harness. Platform-specific unit tests In a Xamarin project, platform-dependent features cannot be unit tested using the conventional unit test runners available in Visual Studio Test Suite and NUnit frameworks. Platform-dependent tests are executed on empty platform-specific projects that serve as a harness for unit tests for that specific platform. Windows Runtime application projects can be tested using the Visual Studio Test Suite. However, for Android and iOS, the NUnit testing framework should be used, since Visual Studio Test Tools are not available for the Xamarin.Android and Xamarin.iOS platforms.                              Figure 5: Test harnesses The unit test runner for Windows Phone (Silverlight) and Windows Phone 8.1 applications uses a test harness integrated with the Visual Studio test explorer. The unit tests can be executed and debugged from within Visual Studio. Xamarin.Android and Xamarin.iOS test project templates use NUnitLite implementation for the respective platforms. In order to run these tests, the test application should be deployed on the simulator (or the testing device) and the application has to be manually executed. It is possible to automate the unit tests on Android and iOS platforms through instrumentation. In each Xamarin target platform, the initial application lifetime event is used to add the necessary unit tests: [Activity(Label = "Xamarin.Master.Fibonacci.Android.Tests", MainLauncher = true, Icon = "@drawable/icon")] public class MainActivity : TestSuiteActivity { protected override void OnCreate(Bundle bundle) { // tests can be inside the main assembly //AddTest(Assembly.GetExecutingAssembly()); // or in any reference assemblies AddTest(typeof(Fibonacci.Android.Tests.TestsSample).Assembly); // Once you called base.OnCreate(), you cannot add more assemblies. base.OnCreate(bundle); } } In the Xamarin.Android implementation, the MainActivity class derives from the TestSuiteActivity, which implements the necessary infrastructure to run the unit tests and the UI elements to visualize the test results. On the Xamarin.iOS platform, the test application uses the default UIApplicationDelegate, and generally, the FinishedLaunching event delegate is used to create the ViewController for the unit test run fixture: public override bool FinishedLaunching(UIApplication application, NSDictionary launchOptions) { // Override point for customization after application launch. 
// If not required for your application you can safely delete this method var window = new UIWindow(UIScreen.MainScreen.Bounds); var touchRunner = new TouchRunner(window); touchRunner.Add(System.Reflection.Assembly.GetExecutingAssembly()); window.RootViewController = new UINavigationController(touchRunner.GetViewController()); window.MakeKeyAndVisible(); return true; } The main shortcoming of executing unit tests this way is the fact that it is not easy to generate a code coverage report and archive the test results. Neither of these testing methods provide the ability to test the UI layer. They are simply used to test platform-dependent implementations. In order to test the interactive layer, platform-specific or cross-platform (Xamarin.Forms) coded UI tests need to be implemented. UI testing In general terms, the code coverage of the unit tests directly correlates with the amount of shared code which amounts to, at the very least, 70-80 percent of the code base in a mundane Xamarin project. One of the main driving factors of architectural patterns was to decrease the amount of logic and code in the view layer so that the testability of the project utilizing conventional unit tests reaches a satisfactory level. Coded UI (or automated UI acceptance) tests are used to test the uppermost layer of the cross-platform solution: the views. Xamarin.UITests and Xamarin Test Cloud The main UI testing framework used for Xamarin projects is the Xamarin.UITests testing framework. This testing component can be used on various platform-specific projects, varying from native mobile applications to Xamarin.Forms implementations, except for the Windows Phone platform and applications. Xamarin.UITests is an implementation based on the Calabash framework, which is an automated UI acceptance testing framework targeting mobile applications. Xamarin.UITests is introduced to the Xamarin.iOS or Xamarin.Android applications using the publicly available NuGet packages. The included framework components are used to provide an entry point to the native applications. The entry point is the Xamarin Test Cloud Agent, which is embedded into the native application during the compilation. The cloud agent is similar to a local server that allows either the Xamarin Test Cloud or the test runner to communicate with the app infrastructure and simulate user interaction with the application. Xamarin Test Cloud is a subscription-based service allowing Xamarin applications to be tested on real mobile devices using UI tests implemented via Xamarin.UITests. Xamarin Test Cloud not only provides a powerful testing infrastructure for Xamarin.iOS and Xamarin.Android applications with an abundant amount of mobile devices but can also be integrated into Continuous Integration workflows. After installing the appropriate NuGet package, the UI tests can be initialized for a specific application on a specific device. In order to initialize the interaction adapter for the application, the app package and the device should be configured. 
On Android, the APK package path and the device serial can be used for the initialization: IApp app = ConfigureApp.Android.ApkFile("<APK Path>/MyApplication.apk") .DeviceSerial("<DeviceID>") .StartApp(); For an iOS application, the procedure is similar: IApp app = ConfigureApp.iOS.AppBundle("<App Bundle Path>/MyApplication.app") .DeviceIdentifier("<DeviceID of Simulator") .StartApp(); Once the App handle has been created, each test written using NUnit should first create the pre-conditions for the tests, simulate the interaction, and finally test the outcome. The IApp interface provides a set of methods to select elements on the visual tree and simulate certain interactions, such as text entry and tapping. On top of the main testing functionality, screenshots can be taken to document test steps and possible bugs. Both Visual Studio and Xamarin Studio provide project templates for Xamarin.UITests. Xamarin Test Recorder Xamarin Test Recorder is an application that can ease the creation of automated UI tests. It is currently in its preview version and is only available for the Mac OS platform. Figure 6: Xamarin Test Recorder Using this application, developers can select the application in need of testing and the device/simulator that is going to run the application. Once the recording session starts, each interaction on the screen is recorded as execution steps on a separate screen, and these steps can be used to generate the preparation or testing steps for the Xamarin.UITests implementation. Coded UI tests (Windows Phone) Coded UI tests are used for automated UI testing on the Windows Phone platform. Coded UI Tests for Windows Phone and Windows Store applications are not any different than their counterparts for other .NET platforms such as Windows Forms, WPF, or ASP.Net. It is also important to note that only XAML applications support Coded UI tests. Coded UI tests are generated on a simulator and written on an Automation ID premise. The Automation ID property is an automatically generated or manually configured identifier for Windows Phone applications (only in XAML) and the UI controls used in the application. Coded UI tests depend on the UIMap created for each control on a specific screen using the Automation IDs. While creating the UIMap, a crosshair tool can be used to select the application and the controls on the simulator screen to define the interactive elements: Figure 7:- Generating coded UI accessors and tests Once the UIMap has been created and the designer files have been generated, gestures and the generated XAML accessors can be used to create testing pre-conditions and assertions. For Coded UI tests, multiple scenario-specific input values can be used and tested on a single assertion. Using the DataRow attribute, unit tests can be expanded to test multiple data-driven scenarios. The code snippet below uses multiple input values to test different incorrect input values: [DataRow(0,"Zero Value")] [DataRow(-2, "Negative Value")] [TestMethod] public void FibonnaciCalculateTest_IncorrectOrdinal(int ordinalInput) { // TODO: Check if bad values are handled correctly } Automated tests can run on available simulators and/or a real device. They can also be included in CI build workflows and made part of the automated development pipeline. Calabash Calabash is an automated UI acceptance testing framework used to execute Cucumber tests. Cucumber tests provide an assertion strategy similar to coded UI tests, only broader and behavior oriented. 
The Cucumber test framework supports tests written in the Gherkin language (a human-readable programming grammar description for behavior definitions). Calabash makes up the necessary infrastructure to execute these tests on various platforms and application runtimes. A simple declaration of the feature and the scenario that is previously tested on Coded UI using the data-driven model would look similar to the excerpt below. Only two of the possible test scenarios are declared in this feature for demonstration; the feature can be extended: Feature: Calculate Single Fibonacci number. Ordinal entry should greater than 0. Scenario: Ordinal is lower than 0. Given I use the native keyboard to enter "-2" into text field Ordinal And I touch the "Calculate" button Then I see the text "Ordinal cannot be a negative number." Scenario: Ordinal is 0. Given I use the native keyboard to enter "0" into text field Ordinal And I touch the "Calculate" button Then I see the text "Cannot calculate the number for the 0th ordinal." Calabash test execution is possible on Xamarin target platforms since the Ruby API exposed by the Calabash framework has a bidirectional communication line with the Xamarin Test Cloud Agent embedded in Xamarin applications with NuGet packages. Calabash/Cucumber tests can be executed on Xamarin Test Cloud on real devices since the communication between the application runtime and Calabash framework is maintained by Xamarin Test Cloud Agent, the same as Xamarin.UI tests. Summary Xamarin projects can benefit from a properly established development pipeline and the use of ALM principles. This type of approach makes it easier for teams to share responsibilities and work out business requirements in an iterative manner. In the ALM timeline, the development phase is the main domain in which most of the concrete implementation takes place. In order for the development team to provide quality code that can survive the ALM cycle, it is highly advised to analyze and test native applications using the available tooling in Xamarin development IDEs. While the common codebase for a target platform in a Xamarin project can be treated and tested as a .NET implementation using the conventional unit tests, platform-specific implementations require more particular handling. Platform-specific parts of the application need to be tested on empty shell applications, called test harnesses, on the respective platform simulators or devices. To test views, available frameworks such as Coded UI tests (for Windows Phone) and Xamarin.UITests (for Xamarin.Android and Xamarin.iOS) can be utilized to increase the test code coverage and create a stable foundation for the delivery pipeline. Most tests and analysis tools discussed in this article can be integrated into automated continuous integration processes. Resources for Article:   Further resources on this subject: A cross-platform solution with Xamarin.Forms and MVVM architecture [article] Working with Xamarin.Android [article] Application Development Workflow [article]

Get your Apps Ready for Android N

Packt
18 Mar 2016
9 min read
It seems likely that Android N will get its first proper outing in May, at this year's Google I/O conference, but there's no need to wait until then to start developing for the next major release of the Android platform. Thanks to Google's decision to release preview versions early, you can start getting your apps ready for Android N today. In this article by Jessica Thornsby, author of the book Android UI Design, we're going to look at the major new UI features that you can start experimenting with right now. And since you'll need something to develop your Android N-ready apps in, we're also going to look at Android Studio 2.1, which is currently the recommended development environment for Android N.

Multi-window mode

Beginning with Android N, the Android operating system will give users the option to display more than one app at a time, in a split-screen environment known as multi-window mode. Multi-window paves the way for some serious multi-app multi-tasking, allowing users to perform tasks such as replying to an email without abandoning the video they were halfway through watching on YouTube, and reading articles in one half of the screen while jotting down notes in Google Keep on the other. When two activities are sharing the screen, users can even drag data from one activity and drop it into another activity directly, for example dragging a restaurant's address from a website and dropping it into Google Maps.

Android N users can switch to multi-window mode either by:

- Making sure one of the apps they want to view in multi-window mode is visible onscreen, then tapping their device's Recent Apps softkey (that's the square softkey). The screen will split in half, with one side displaying the current activity and the other displaying the Recent Apps carousel. The user can then select the secondary app they want to view, and it'll fill the remaining half of the screen.
- Navigating to the home screen, and then pressing the Recent Apps softkey to open the Recent Apps carousel. The user can then drag one of these apps to the edge of the screen, and it'll open in multi-window mode. The user can then repeat this process for the second activity.

If your app targets Android N or higher, the Android operating system assumes that your app supports multi-window mode unless you explicitly state otherwise. To prevent users from displaying your app in multi-window mode, you'll need to add android:resizeableActivity="false" to the <activity> or <application> section of your project's Manifest file. If your app does support multi-window mode, you may want to prevent users from shrinking your app's UI beyond a specified size, using the android:minimalSize attribute. If the user attempts to resize your app so it's smaller than the android:minimalSize value, the system will crop your UI instead of shrinking it.

Direct reply notifications

Google are adding a few new features to notifications in Android N, including an inline reply action button that allows users to reply to notifications directly from the notification UI. This is particularly useful for messaging apps, as it means users can reply to messages without even having to launch the messaging application. You may have already encountered direct reply notifications in Google Hangouts. To create a notification that supports direct reply, you need to create an instance of RemoteInput.Builder and then add it to your notification action.
The following code adds a RemoteInput to a Notification.Action, and creates a Quick Reply key. When the user triggers the action, the notification prompts the user to input their response:

private static final String KEY_QUICK_REPLY = "key_quick_reply";
String replyLabel = getResources().getString(R.string.reply_label);
RemoteInput remoteInput = new RemoteInput.Builder(KEY_QUICK_REPLY)
    .setLabel(replyLabel)
    .build();

To retrieve the user's input from the notification interface, you need to call getResultsFromIntent(Intent) and pass the notification action's intent as the input parameter:

Bundle remoteInput = RemoteInput.getResultsFromIntent(intent);
// This method returns a Bundle that contains the text response
if (remoteInput != null) {
    // Query the bundle using the result key, which is provided to the RemoteInput.Builder constructor
    return remoteInput.getCharSequence(KEY_QUICK_REPLY);
}

Bundled notifications

Don't you just hate it when you connect to the World Wide Web first thing in the morning, and Gmail bombards you with multiple new message notifications, but doesn't give you any more information about the individual emails? Not particularly helpful! When you receive a notification that consists of multiple items, the only thing you can really do is launch the app in question and take a closer look at the events that make up this grouped notification.

Android N overcomes this drawback by letting you group multiple notifications from the same app into a single, bundled notification via a new notification style: bundled notifications. A bundled notification consists of a parent notification that displays summary information for that group, plus individual notification items. If the user wants to see more information about one or more individual items, they can unfurl the bundled notification into separate notifications by swiping down with two fingers. The user can then act on each mini-notification individually, for example they might choose to dismiss the first three notifications about spam emails, but open the fourth e-mail.

To group notifications, you need to call setGroup() for each notification you want to add to the same notification stack, and then assign these notifications the same key.

final static String GROUP_KEY_MESSAGES = "group_key_messages";

Notification notif = new NotificationCompat.Builder(mContext)
    .setContentTitle("New SMS from " + sender1)
    .setContentText(subject1)
    .setSmallIcon(R.drawable.new_message)
    .setGroup(GROUP_KEY_MESSAGES)
    .build();

Then, when you create another notification that belongs to this stack, you just need to assign it the same group key.

Notification notif2 = new NotificationCompat.Builder(mContext)
    .setContentTitle("New SMS from " + sender1)
    .setContentText(subject2)
    .setGroup(GROUP_KEY_MESSAGES)
    .build();

The second Android N developer preview introduced an Android-specific implementation of the Vulkan API. Vulkan is a cross-platform, 3D rendering API for providing high-quality, real-time 3D graphics. For draw-call heavy applications, Vulkan also promises to deliver a significant performance boost, thanks to a threading-friendly design and a reduction of CPU overhead. You can try Vulkan for yourself on devices running Developer Preview 2, or learn more about Vulkan at the official Android docs (https://developer.android.com/ndk/guides/graphics/index.html?utm_campaign=android_launch_npreview2_041316&utm_source=anddev&utm_medium=blog).
Android N Support in Android Studio 2.1

The two Developer Previews aren't the only important releases for developers who want to get their apps ready for Android N. Google also recently released a stable version of Android Studio 2.1, which is the recommended IDE for developing Android N apps. Crucially, with the release of Android Studio 2.1 the emulator can now run the N Developer Preview Emulator System Images, so you can start testing your apps against Android N. Particularly with features like multi-window mode, it's important to test your apps across multiple screen sizes and configurations, and creating various Android N Android Virtual Devices (AVDs) is the quickest and easiest way to do this.

Android Studio 2.1 also adds the ability to use the new Jack compiler (Java Android Compiler Kit), which compiles Java source code into Android dex bytecode. Jack is particularly important as it opens the door to using Java 8 language features in your Android N projects, without having to resort to additional tools or resources.

Although not Android N-specific, Android Studio 2.1 makes some improvements to the Instant Run feature, which should result in faster editing and deploy builds for all your Android projects. Previously, one small change in the Java code would cause all Java sources in the module to be recompiled. Instant Run aims to reduce compilation time by analyzing the changes you've made and determining how it can deploy them in the fastest way possible. This is instead of Android Studio automatically going through the lengthy process of recompiling the code, converting it to dex format, generating an APK, and installing it on the connected device or emulator every time you make even a small change to your project.

To start using Instant Run, select Android Studio from the toolbar followed by Preferences…. In the window that appears, select Build, Execution, Deployment from the side-menu and select Instant Run. Uncheck the box next to Restart activity on code changes.

Instant Run is supported only when you deploy a debug build for Android 4.0 or higher. You'll also need to be using Android Plugin for Gradle version 2.0 or higher. Instant Run isn't currently compatible with the Jack toolchain.

To use Instant Run, deploy your app as normal. Then, if you make some changes to your project, you'll notice that a yellow thunderbolt icon appears within the Run icon, indicating that Android Studio will push updates via Instant Run when you click this button.

You can update to the latest version of Android Studio by launching the IDE and then selecting Android Studio from the toolbar, followed by Check for Updates….

Summary

In this article, we looked at the major new UI features currently available in the Android N Developer Preview. We also looked at the Android Studio 2.1 features that are particularly useful for developing and testing apps that target the upcoming Android N release. Although we should expect some pretty dramatic changes between these early previews and the final release of Android N, taking the time to explore these features now means you'll be in a better position to update your apps when Android N is finally released.

Resources for Article:

Further resources on this subject:
Drawing and Drawables In Android Canvas [article]
Behavior-Driven Development With Selenium Webdriver [article]
Development of Iphone Applications [article]

Delegate Pattern Limitations in Swift

Anthony Miller
18 Mar 2016
5 min read
If you've ever built anything using UIKit, then you are probably familiar with the delegate pattern. The delegate pattern is used frequently throughout Apple's frameworks and many open source libraries you may come in contact with. But many times, it is treated as a one-size-fits-all solution for problems that it is just not suited for. This post will describe the major shortcomings of the delegate pattern.

Note: This article assumes that you have a working knowledge of the delegate pattern. If you would like to learn more about the delegate pattern, see The Swift Programming Language - Delegation.

1. Too Many Lines!

Implementation of the delegate pattern can be cumbersome. Most experienced developers will tell you that less code is better code, and the delegate pattern does not really allow for this. To demonstrate, let's try implementing a new view controller that has a delegate using the least amount of lines possible.

First, we have to create a view controller and give it a property for its delegate:

class MyViewController: UIViewController {
    var delegate: MyViewControllerDelegate?
}

Then, we define the delegate protocol.

protocol MyViewControllerDelegate {
    func foo()
}

Now we have to implement the delegate. Let's make another view controller that presents a MyViewController:

class DelegateViewController: UIViewController {
    func presentMyViewController() {
        let myViewController = MyViewController()
        presentViewController(myViewController, animated: false, completion: nil)
    }
}

Next, our DelegateViewController needs to conform to the delegate protocol:

class DelegateViewController: UIViewController, MyViewControllerDelegate {
    func presentMyViewController() {
        let myViewController = MyViewController()
        presentViewController(myViewController, animated: false, completion: nil)
    }

    func foo() {
        /// Respond to the delegate method.
    }
}

Finally, we can make our DelegateViewController the delegate of MyViewController:

class DelegateViewController: UIViewController, MyViewControllerDelegate {
    func presentMyViewController() {
        let myViewController = MyViewController()
        myViewController.delegate = self
        presentViewController(myViewController, animated: false, completion: nil)
    }

    func foo() {
        /// Respond to the delegate method.
    }
}

That's a lot of boilerplate code that is repeated every time you want to create a new delegate. This opens you up to a lot of room for errors. In fact, the above code has a pretty big error already that we are going to fix now.

2. No Non-Class Type Delegates

Whenever you create a delegate property on an object, you should use the weak keyword. Otherwise, you are likely to create a retain cycle. Retain cycles are one of the most common ways to create memory leaks and can be difficult to track down. Let's fix this by making our delegate weak:

class MyViewController: UIViewController {
    weak var delegate: MyViewControllerDelegate?
}

This causes another problem though. Now we are getting a build error from Xcode!

'weak' cannot be applied to non-class type 'MyViewControllerDelegate'; consider adding a class bound.

This is because you can't make a weak reference to a value type, such as a struct or an enum, so in order to use the weak keyword here, we have to guarantee that our delegate is going to be a class. Let's take Xcode's advice here and add a class bound to our protocol:

protocol MyViewControllerDelegate: class {
    func foo()
}

Well, now everything builds just fine, but we have another issue. Now your delegate must be an object (sorry structs and enums!).
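As a quick illustration of this constraint (the conforming type below is hypothetical, not from the post), a class can still adopt the class-bound protocol and be assigned to the weak delegate property, but a value type can no longer conform at all:

// A class can adopt the class-bound protocol, so it can be stored
// in the weak `delegate` property without any trouble.
class AnalyticsObserver: MyViewControllerDelegate {
    func foo() {
        print("delegate notified")
    }
}

// A struct, on the other hand, is rejected outright: the compiler
// refuses to let a non-class type conform to a class-bound protocol,
// so something like `struct LoggingObserver: MyViewControllerDelegate`
// will simply not build.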
You are now creating more constraints on what can conform to your delegate. The whole point of the delegate pattern is to allow an unknown "something" to respond to the delegate events. We should be putting as few constraints as possible on our delegate object, which brings us to the next issue with the delegate pattern.

3. Optional Delegate Methods

In pure Swift, protocols don't have optional functions. This means your delegate must implement every method in the delegate protocol, even if it is irrelevant in your case. For example, you may not always need to be notified when a user taps a cell in a UITableView. There are ways to get around this though.

In Swift 2.0+, you can make a protocol extension on your delegate protocol that contains a default implementation for protocol methods that you want to make optional. Let's make a new optional method on our delegate protocol using this method:

protocol MyViewControllerDelegate: class {
    func foo()
    func optionalFunction()
}

extension MyViewControllerDelegate {
    func optionalFunction() { }
}

This adds even more unnecessary code. It isn't really clear what the intention of this extension is unless you understand what's going on already, and there is no way to explicitly show that this method is optional.

Alternatively, if you mark your protocol as @objc, you can use the optional keyword in your function declaration. The problem here is that now your delegate must be an Objective-C object. Just like our last example, this is creating additional constraints on your delegate, and this time they are even more restrictive.

4. There Can Be Only One

The delegate pattern only allows for one delegate to respond to events. This may be just fine for some situations, but if you need multiple objects to be notified of an event, the delegate pattern may not work for you. Another common scenario you may come across is when you need different objects to be notified of different delegate events. The delegate pattern can be a very useful tool, which is why it is so widely used, but recognizing the limitations that it creates is important when you are deciding whether it is the right solution for any given problem.

About the author

Anthony Miller is the lead iOS developer at App-Order in Las Vegas, Nevada, USA. He has written and released numerous apps on the App Store and is an avid open source contributor. When he's not developing, Anthony loves board games, line-dancing, and frequent trips to Disneyland.

Building an iPhone App Using Swift: Part 1

Ryan Loomba
17 Mar 2016
6 min read
In this post, I'll be showing you how to create an iPhone app using Apple's new Swift programming language. Swift is a new programming language that Apple released in June at their special WWDC event in San Francisco, CA. You can find more information about Swift on the official page. Apple has released a book on Swift, The Swift Programming Language, which is available on the iBook Store or can be viewed online here.

OK, let's get started! The first thing you need in order to write an iPhone app using Swift is to download a copy of Xcode 6. Currently, the only way to get a copy of Xcode 6 is to sign up for Apple's developer program. The cost to enroll is $99 USD/year, so enroll here. Once enrolled, click on the iOS 8 GM Seed link, and scroll down to the link that says Xcode 6 GM Seed.

Once Xcode is installed, go to File -> New -> New Project. We will click on Application within the iOS section and choose a Single View Application.

Click on the play button in the top left of the project to build the project. You should see the iPhone simulator open with a blank white screen. Next, click on the top-left blue Sample Swift App project file and navigate to the general tab. In the Deployment Info section, select portrait for the device orientation. This will force the app to only be viewed in portrait mode.

First View Controller

If we navigate on the left to Main.storyboard, we see a single View Controller, with a single View. First, make sure that Use Size Classes is unchecked in the Interface Builder Document section. Let's add a text view to the top of our view. In the bottom right text box, search for Text View. Drag the Text View and position it at the top of the View. Click on the Attributes inspector on the right toolbar to adjust the font and alignment. If we click the play button to build the project, we should see the same white screen, but now with our Swift Sample App text.

View a web page

Let's add our first feature: a button that will open up a web page. First embed our controller in a navigation controller, so we can easily navigate back and forth between views. Select the view controller in the storyboard, then go to Editor -> Embed in -> Navigation controller. Note that you might need to resize the text view you added in the previous step.

Now, let's add a button that will open up a web view. Back in our view, in the bottom right, let's search for a button, drag it somewhere in the view, and label it Web View. The final product should look like this:

If we build the project and click on the button, nothing will happen. We need to create a destination controller that will contain the web view. Go to File -> New and create a new Cocoa Touch Class. Let's name our new controller WebViewController and make it a subclass of UIViewController. Make sure you choose Swift as the language. Click Create to save the controller file.

Back in our storyboard, search for a View Controller in the bottom-right search box and drag it to the storyboard. In the Attributes inspector toolbar on the right side of the screen, let's give this controller the title WebViewController. In the identity inspector, let's give this view controller a custom class of WebViewController.

Let's wire up our two controllers. Ctrl + click on the Web View button we created earlier and hold. Drag your cursor over to your newly created WebViewController. Upon release, choose push. In our storyboard, let's search for a web view in the lower-right search box and drag it into our newly created WebViewController.
Resize the web view so that it takes up the entire screen, except for the top nav bar area. If we hit the large play button at the top left to build our app, clicking on the Web View link will take us to a blank screen. We'll also have a back button that takes us back to the first screen.

Writing some Swift code

Let's have the web view load up a pre-determined website. Time to get our hands dirty writing some Swift! The first thing we need to do is link the WebView in our controller to the WebViewController.swift file. In the storyboard, click on the Assistant editor button at the top-right of the screen. You should see the storyboard view of WebViewController and WebViewController.swift next to each other. Control + click on WebViewController in the storyboard and drag it over to the line right before the WebViewController class is defined. Name the variable webView.

In the viewDidLoad function, we are going to add some initialization to load up our webpage. After super.viewDidLoad(), let's first declare the URL we want to use. This can be any URL; for the example, I'm going to use my own homepage. It will look something like this:

let requestURL = NSURL(string: "http://ryanloomba.com")

In Swift, the keyword let is used to designate constants, or variables that will not change. Next, we will convert this URL into an NSURLRequest object. Finally, we will tell our WebView to make this request and pass in the request object:

import UIKit

class WebViewController: UIViewController {
    @IBOutlet var webView: UIWebView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let requestURL = NSURL(string: "http://ryanloomba.com")
        let request = NSURLRequest(URL: requestURL)
        webView.loadRequest(request)
        // Do any additional setup after loading the view.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    /*
    // MARK: - Navigation
    // In a storyboard-based application, you will often want to do a little preparation before navigation
    override func prepareForSegue(segue: UIStoryboardSegue!, sender: AnyObject!) {
        // Get the new view controller using segue.destinationViewController.
        // Pass the selected object to the new view controller.
    }
    */
}

Try changing the URL to see different websites. Here's an example of what it should look like:

About the author

Ryan is a software engineer and electronic dance music producer currently residing in San Francisco, CA. Ryan started up as a biomedical engineer but fell in love with web/mobile programming after building his first Android app. You can find him on GitHub @rloomba