
How-To Tutorials - Mobile

213 Articles

Introduction to WatchKit

Packt
04 Sep 2015
7 min read
In this article by Hossam Ghareeb, author of the book Application Development with Swift, we will talk about a new technology, WatchKit, and a new era of wearable technologies. Technology is now a part of all aspects of our lives, even the things we wear: you can see smartwatches such as the new Apple Watch, or glasses such as Google Glass. We will go through the new WatchKit framework to learn how to extend your iPhone app's functionality to your wrist.

Apple Watch

Apple Watch is a new device for your wrist that can be used to extend your iPhone app's functionality; you can access the most important information and respond in easy ways using the watch. The watch is now available in most countries in different styles and models, so that everyone can find a watch that suits them. When you get your Apple Watch, you can pair it with your iPhone. The watch can't be paired with an iPad; it can only be paired with your iPhone, and this pairing is required to run third-party apps on the watch. Once paired, when you install any app on your iPhone that has an extension for the watch, the app is installed on the watch automatically and wirelessly.

WatchKit

WatchKit is a new framework for building Apple Watch apps. WatchKit helps you create apps for Apple Watch by creating two targets in Xcode:

The WatchKit app: An executable app installed on your watch, which you can install or uninstall from your iPhone. The WatchKit app contains the storyboard file and resource files. It doesn't contain any source code, just the interface and resource files.

The WatchKit extension: This extension runs on the iPhone and holds the interface controller files for your storyboard. It contains just the model and controller classes. The actions and outlets from the WatchKit app's storyboard are linked to these controller files in the WatchKit extension.

These two bundles (the WatchKit extension and the WatchKit app) are packed together inside the iPhone application. When the user installs the iPhone app, the system prompts the user to install the WatchKit app if there is a paired watch. Using WatchKit, you can extend your iOS app in three different ways:

The WatchKit app

As we mentioned earlier, the WatchKit app is an app installed on the Apple Watch, and the user can find it in the list of watch apps. The user can launch, control, and interact with the app. Once the app is launched, the WatchKit extension in the iPhone app runs in the background to update the user interface, perform any required logic, and respond to user actions. Note that the iPhone app can't launch or wake up the WatchKit extension or the WatchKit app. However, the WatchKit extension can ask the system to launch the iPhone app, and this is performed in the background.

Glances

Glances are single interfaces that the user can navigate between. The glance view is read-only information, which means that you can't add any interactive UI controls such as buttons and switches. Apps should use glances to display very important and timely information. The glance view is a nonscrolling view, so your glance view should fit the watch screen. Avoid using tables and maps in glance interface controllers, and focus on delivering the most important information in a clear way. Once the user taps on the glance view, the watch app is launched.
The glance view is optional in your app. The glance interface and its interface controller file are split between the two targets: the glance interface resides in the storyboard, which lives in the WatchKit app, while the interface controller that is responsible for filling the view with the timely, important information is located in the WatchKit extension, which runs in the background in the iPhone app, as we said before.

Actionable notifications

You can also handle and respond to local and remote notifications in an easy and fast way using Apple Watch. WatchKit helps you build user interfaces for the notifications that you want to handle in your WatchKit app, and it lets you add actionable buttons so that the user can take action based on the notification. For example, if a notification for an invitation is sent to you, you can accept or reject it right from your wrist. You can respond to these actions easily in the interface controllers in the WatchKit extension.

Working with WatchKit

Enough talking about theory; let's see some action. Go to our lovely Xcode, create a new single-view application, and name it WatchKitDemo. Don't forget to select Swift as the app language. Then navigate to File | New | Target to create a new target for the WatchKit app. In the pop-up window, from the left side under iOS, choose Apple Watch and select WatchKit App. After you click on Next, Xcode asks which application to embed the target in and which scenes to include. Check the Include Notification Scene and Include Glance Scene options. Click on Finish, and now you have an iPhone app with the built-in WatchKit extension and WatchKit app.

Xcode targets

Now your project should be divided into three parts. In section 1, you can see the iPhone app's source files, interface files (storyboard), and resource files. In section 2, you can find the WatchKit extension, which contains only interface controllers and model files; again, as we said before, this extension runs on the iPhone in the background. In section 3, you can see the WatchKit app, which runs on the Apple Watch itself. It contains the storyboard and resource files; no source code can be added to this target.

Interface controllers

In the WatchKit extension of your Xcode project, open InterfaceController.swift. This is the interface controller file for the scene that exists in Interface.storyboard in the WatchKit app. The InterfaceController class extends WKInterfaceController, which is the base class for interface controllers. Forget the UI classes that you were using from the UIKit framework in iOS apps; WatchKit has different interface controller classes, and they are very limited in configuration and customization. In the InterfaceController file, you can find three important methods that describe the lifecycle of your controller: awakeWithContext, willActivate, and didDeactivate. Another important lifecycle method, init, can also be overridden, but it is not implemented in the generated controller file. Let's now explain the four lifecycle methods:

init: You can consider this your first chance to update your interface elements.
awakeWithContext: This is called after the init method and receives context data that can be used to update your interface elements or to perform some logical operations on that data. Context data is passed between interface controllers when you push or present another controller and want to pass some data along.

willActivate: Here, your scene is about to become visible onscreen, and it's your last chance to update your UI. Keep the UI changes in this method simple so as not to cause freezes in the UI.

didDeactivate: Your scene is about to become invisible; if you want to clean up your code, it's time to stop animations or timers.

Summary

In this article, we covered a very important topic: how to develop apps for the new wearable technology, Apple Watch. We first gave a quick introduction to the new device and how it communicates with a paired iPhone. We then talked about WatchKit, the new framework that enables you to develop apps for Apple Watch and design their interfaces. Apple has designed the watch app to contain only the storyboard and resource files; all logic and operations are performed in the iPhone app in the background.
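To recap the lifecycle in code, here is a minimal sketch of an interface controller, written against the WatchKit 1.x Swift API that this article targets. The statusLabel outlet is hypothetical; your storyboard and project may name things differently.

import WatchKit

class DemoInterfaceController: WKInterfaceController {

    // Hypothetical outlet; connect it to a label in Interface.storyboard.
    @IBOutlet weak var statusLabel: WKInterfaceLabel!

    override func awakeWithContext(context: AnyObject?) {
        super.awakeWithContext(context)
        // Context passed by a pushing or presenting controller arrives here.
        if let message = context as? String {
            statusLabel.setText(message)
        }
    }

    override func willActivate() {
        // The scene is about to appear onscreen; keep updates lightweight.
        super.willActivate()
    }

    override func didDeactivate() {
        // The scene is going offscreen; stop timers and animations here.
        super.didDeactivate()
    }
}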

Page Events

Packt
02 Jan 2013
4 min read
Page initialization events

The jQuery Mobile framework provides the page plugin, which automatically handles page initialization events. The pagebeforecreate event is fired before the page is created. The pagecreate event is fired after the page is created but before the widgets are initialized. The pageinit event is fired after the complete initialization. This recipe shows you how to use these events.

Getting ready

Copy the full code of this recipe from the code/08/pageinit sources folder. You can launch this code using the URL http://localhost:8080/08/pageinit/main.html.

How to do it...

Carry out the following steps:

Create main.html with three empty <div> tags as follows:

<div id="content" data-role="content">
  <div id="div1"></div>
  <div id="div2"></div>
  <div id="div3"></div>
</div>

Add the following script to the <head> section to handle the pagebeforecreate event:

var str = "<a href='#' data-role='button'>Link</a>";
$("#main").live("pagebeforecreate", function(event) {
  $("#div1").html("<p>DIV1 :</p>" + str);
});

Next, handle the pagecreate event:

$("#main").live("pagecreate", function(event) {
  $("#div1").find("a").attr("data-icon", "star");
});

Finally, handle the pageinit event:

$("#main").live("pageinit", function(event) {
  $("#div2").html("<p>DIV 2 :</p>" + str);
  $("#div3").html("<p>DIV 3 :</p>" + str);
  $("#div3").find("a").buttonMarkup({"icon": "star"});
});

How it works...

In main.html, add three empty divs to the page content as shown. Add the given script to the page. In the script, str is an HTML string for creating an anchor link with the data-role="button" attribute. Add the callback for the pagebeforecreate event, and set str on the div1 container. Since the page has not yet been created, the button in div1 is automatically initialized and enhanced. Add the callback for the pagecreate event. Select the anchor button in div1 using the jQuery find() method, and set its data-icon attribute. Since this change was made after page creation but before the button was initialized, the star icon is automatically shown for the div1 button. Finally, add the callback for the pageinit event and add str to both the div2 and div3 containers. At this point, the page and widgets are already initialized and enhanced. The anchor link in div2 therefore shows up only as a native link without any enhancement. But for div3, find the anchor link and manually call the buttonMarkup method of the button plugin, setting its icon to star. Now, when you load the page, the link in div3 gets enhanced as well.

There's more...

You can trigger "create" or "refresh" on the plugins to let the jQuery Mobile framework enhance dynamic changes made to the page or the widgets after initialization.

Page initialization events fire only once

The page initialization events fire only once, so this is a good place to make any specific initializations or to add your custom controls.

Do not use $(document).ready()

The $(document).ready() handler only works when the first page is loaded or when the DOM is ready for the first time. If you load a page via Ajax, the ready() function is not triggered. The pageinit event, on the other hand, is triggered whenever a page is created or loaded and initialized. So, this is the best place to do post-initialization activities in your app:

$(document).bind("pageinit", function(event) {
  // post-initialization activities go here
});
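To make the There's more... note concrete, the following is a small hypothetical sketch (not part of the recipe's code) that injects a button after the page is already enhanced and then asks jQuery Mobile to enhance the new markup by triggering create:

$("#main").live("pageinit", function(event) {
  // Inject markup after the page has already been enhanced...
  $("#div2").append("<a href='#' data-role='button'>Late Link</a>");
  // ...then ask jQuery Mobile to enhance everything inside #div2.
  $("#div2").trigger("create");
});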

QR Codes, Geolocation, Google Maps API, and HTML5 Video

Packt
07 Jun 2013
9 min read
QR codes

We love our smartphones. We love showing off what our smartphones can do. So, when those cryptic squares started showing up all over the place and befuddling the masses, smartphone users quickly stepped up and started showing people what it's all about, in the same overly enthusiastic manner with which we whip out our phones to answer even the most trivial question heard in passing. And, since it looks like NFC isn't taking off anytime soon, we'd better be familiar with QR codes and how to leverage them. Survey data (http://researchaccess.com/2012/01/new-data-on-qrcode-adoption/) shows that knowledge and usage of QR codes is very high:

More than two-thirds of smartphone users have scanned a code
More than 70 percent of the users say they'd do it again (especially for a discount)

Wait, what does this have to do with jQuery Mobile? Traffic. Big-time successful traffic. A banner ad is considered successful if only two percent of people click through (http://en.wikipedia.org/wiki/Clickthrough_rate). QR codes get more than 66 percent! I'd say it's a pretty good way to get people to our creations and, thus, should be of concern. But QR codes are for more than just URLs; a code can also hold a block of text, a phone number, or an SMS. There are many ways to generate QR codes (http://www.the-qrcode-generator.com/, http://www.qrstuff.com/). Really, just search for QR Code Generator on Google and you'll have numerous options.

Let us consider a local movie theater chain. Dickinson Theatres (dtmovies.com) has been around since the 1920s and is considering throwing its hat into the mobile ring. Perhaps they will invest in a mobile website and go all-out in placing posters and ads at bus stops and other outdoor locations. Naturally, people are going to start scanning, and this is valuable to us because they're going to tell us exactly which locations are paying off. This is really a first in the advertising industry: we have a medium that seems to spur people to interact on devices that will tell us exactly where they were when they scanned it. Geolocation matters, and this can help us find the right locations.

Geolocation

When GPS first came out on phones, it was pretty useless for anything other than police tracking in case of emergencies. Today, it is making the devices that we hold in our hands even more personal than our personal computers. For now, we can get a latitude, longitude, and timestamp very dependably. The geolocation API specification from the W3C can be found at http://dev.w3.org/geo/api/spec-source.html. For now, we'll pretend that we have a poster prompting the user to scan a QR code to find the nearest theater and show the timings.

Since there's no better first date than dinner and a movie, the moviegoing crowd tends to skew a bit to the younger side. Unfortunately, that group does not tend to have a lot of money. They may have more feature phones than smartphones. Some might only have very basic browsers. Maybe they have JavaScript, but we can't count on it. If they do, they might have geolocation. Regardless, given the audience, progressive enhancement is going to be the key. The first thing we'll do is create a base-level page with a simple form that will submit a zip code to a server.
Since we're using our template from before, we'll add validation to the form for anyone who has JavaScript using the validateMe class. If they have JavaScript and geolocation, we'll replace the form with a message saying that we're trying to find their location. For now, don't worry about creating this file; the source code is incomplete at this stage. This page will evolve, and the final version will be in the source package for the article in the file called qrresponse.php, as shown in the following code:

<?php
  $documentTitle = "Dickinson Theatres";
  $headerLeftHref = "/";
  $headerLeftLinkText = "Home";
  $headerLeftIcon = "home";
  $headerTitle = "";
  $headerRightHref = "tel:8165555555";
  $headerRightLinkText = "Call";
  $headerRightIcon = "grid";
  $fullSiteLinkHref = "/";
?>
<!DOCTYPE html>
<html>
<head>
  <?php include("includes/meta.php"); ?>
</head>
<body>
  <div id="qrfindclosest" data-role="page">
    <div class="logoContainer ui-shadow"></div>
    <div data-role="content">
      <div id="latLong">
        <form id="findTheaterForm" action="fullshowtimes.php" method="get" class="validateMe">
          <p>
            <label for="zip">Enter Zip Code</label>
            <input type="tel" name="zip" id="zip" class="required number"/>
          </p>
          <p><input type="submit" value="Go"></p>
        </form>
      </div>
      <p>
        <ul id="showing" data-role="listview" class="movieListings" data-dividertheme="g">
        </ul>
      </p>
    </div>
    <?php include("includes/footer.php"); ?>
  </div>
  <script type="text/javascript">
    //We'll put our page specific code here soon
  </script>
</body>
</html>

For anyone who does not have JavaScript, this is nothing special. We could spruce it up with a little CSS, but what would be the point? If they're on a browser that doesn't have JavaScript, there's a pretty good chance their browser is also miserable at rendering CSS. That's fine, really. After all, progressive enhancement doesn't necessarily mean making it wonderful for everyone; it just means being sure it works for everyone. Most users will never see this fallback, but if they do, it will work just fine.

For everyone else, we'll need to start working with JavaScript to get our theater data into a format we can digest programmatically. JSON is perfectly suited to this task. If you are already familiar with the concept of JSON, skip to the next paragraph now. If you're not familiar with it, it's basically another way of shipping data across the interwebs. It's like XML but more useful: it's less verbose, and it can be directly interacted with and manipulated using JavaScript because it is actually written in JavaScript. JSON is an acronym for JavaScript Object Notation. A special thank you goes out to Douglas Crockford (the father of JSON). XML still has its place on the server, but it has no business in the browser as a data format if you can get JSON. This is such a widespread view that at the last developer conference I went to, one of the speakers chuckled as he asked, "Is anyone still actually using XML?"

{
  "theaters":[
    {
      "id":161,
      "name":"Chenal 9 IMAX Theatre",
      "address":"17825 Chenal Parkway",
      "city":"Little Rock",
      "state":"AR",
      "zip":"72223",
      "distance":9999,
      "geo":{"lat":34.7684775,"long":-92.4599322},
      "phone":"501-821-2616"
    },
    {
      "id":158,
      "name":"Gateway 12 IMAX Theatre",
      "address":"1935 S. Signal Butte",
      "city":"Mesa",
      "state":"AZ",
      "zip":"85209",
      "distance":9999,
      "geo":{"lat":33.3788674,"long":-111.6016081},
      "phone":"480-354-8030"
    },
    {
      "id":135,
      "name":"Northglen 14 Theatre",
      "address":"4900 N.E. 80th Street",
      "city":"Kansas City",
      "state":"MO",
      "zip":"64119",
      "distance":9999,
      "geo":{"lat":39.240027,"long":-94.5226432},
      "phone":"816-468-1100"
    }
  ]
}

Now that we have data to work with, we can prepare the on-page scripts. Let's put the following chunks of JavaScript in the script tag at the bottom of the HTML, where we had the comment "We'll put our page specific code here soon":

//declare our global variables
var theaterData = null;
var timestamp = null;
var latitude = null;
var longitude = null;
var closestTheater = null;

//Once the page is initialized, hide the manual zip code form
//and place a message saying that we're attempting to find
//their location.
$(document).on("pageinit", "#qrfindclosest", function(){
  if(navigator.geolocation){
    $("#findTheaterForm").hide();
    $("#latLong").append("<p id='finding'>Finding your location...</p>");
  }
});

//Once the page is showing, go grab the theater data and find
//out which one is closest.
$(document).on("pageshow", "#qrfindclosest", function(){
  theaterData = $.getJSON("js/theaters.js", function(data){
    theaterData = data;
    selectClosestTheater();
  });
});

function selectClosestTheater(){
  navigator.geolocation.getCurrentPosition(
    function(position) { //success
      latitude = position.coords.latitude;
      longitude = position.coords.longitude;
      timestamp = position.timestamp;
      for(var x = 0; x < theaterData.theaters.length; x++) {
        var theater = theaterData.theaters[x];
        var distance = getDistance(latitude, longitude,
          theater.geo.lat, theater.geo.long);
        theaterData.theaters[x].distance = distance;
      }
      theaterData.theaters.sort(compareDistances);
      closestTheater = theaterData.theaters[0];
      _gaq.push(['_trackEvent', "qr", "ad_scan",
        (""+latitude+","+longitude)]);
      var dt = new Date();
      dt.setTime(timestamp);
      $("#latLong").html("<div class='theaterName'>"
        +closestTheater.name+"</div><strong>"
        +closestTheater.distance.toFixed(2)
        +" miles</strong><br/>"
        +closestTheater.address+"<br/>"
        +closestTheater.city+", "+closestTheater.state+" "
        +closestTheater.zip+"<br/><a href='tel:"
        +closestTheater.phone+"'>"
        +closestTheater.phone+"</a>");
      $("#showing").load("showtimes.php", function(){
        $("#showing").listview('refresh');
      });
    },
    function(error){ //error
      switch(error.code) {
        case error.TIMEOUT:
          $("#latLong").prepend("<div class='ui-bar-e'>Unable to get your position: Timeout</div>");
          break;
        case error.POSITION_UNAVAILABLE:
          $("#latLong").prepend("<div class='ui-bar-e'>Unable to get your position: Position unavailable</div>");
          break;
        case error.PERMISSION_DENIED:
          $("#latLong").prepend("<div class='ui-bar-e'>Unable to get your position: Permission denied. You may want to check your settings.</div>");
          break;
        case error.UNKNOWN_ERROR:
          $("#latLong").prepend("<div class='ui-bar-e'>Unknown error while trying to access your position.</div>");
          break;
      }
      $("#finding").hide();
      $("#findTheaterForm").show();
    },
    {maximumAge:600000}); //nothing too stale
}

The key here is the geolocation.getCurrentPosition function, which prompts the user to allow us access to their location data. If somebody is a privacy advocate, they may have turned off all location services. In this case, we'll need to inform the user that their choice has impacted our ability to help them. That's what the error function is all about: in such a case, we display an error message and show the standard form again.
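One loose end: the script above calls two helpers, getDistance and compareDistances, that are not defined in this excerpt. A hypothetical implementation, computing a great-circle distance in miles with the haversine formula and sorting theaters by ascending distance, could look like this:

function getDistance(lat1, lon1, lat2, lon2) {
  // Haversine formula; returns the distance in miles.
  var toRad = function(deg) { return deg * Math.PI / 180; };
  var R = 3958.8; // mean radius of the Earth in miles
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

function compareDistances(a, b) {
  // Sort callback: nearest theater first.
  return a.distance - b.distance;
}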

Creating Instagram Clone Layout using Ionic framework

Packt
06 Oct 2015
7 min read
In this article by Zainul Setyo Pamungkas, author of the book PhoneGap 4 Mobile Application Development Cookbook, we will see how the Ionic framework, one of the most popular HTML5 frameworks for hybrid application development, provides native-like UI components that users can use and customize.

In this article, we will create a clone of the Instagram mobile app layout:

First, we need to create a new Ionic tabs application project named ionSnap:

ionic start ionSnap tabs

Change directory to ionSnap:

cd ionSnap

Then add device platforms to the project:

ionic platform add ios
ionic platform add android

Let's change the tab names. Open www/templates/tabs.html and edit the title attribute of each ion-tab:

<ion-tabs class="tabs-icon-top tabs-color-active-positive">
  <ion-tab title="Timeline" icon-off="ion-ios-pulse" icon-on="ion-ios-pulse-strong" href="#/tab/dash">
    <ion-nav-view name="tab-dash"></ion-nav-view>
  </ion-tab>
  <ion-tab title="Explore" icon-off="ion-ios-search" icon-on="ion-ios-search" href="#/tab/chats">
    <ion-nav-view name="tab-chats"></ion-nav-view>
  </ion-tab>
  <ion-tab title="Profile" icon-off="ion-ios-person-outline" icon-on="ion-person" href="#/tab/account">
    <ion-nav-view name="tab-account"></ion-nav-view>
  </ion-tab>
</ion-tabs>

We have to clean up our application to start a new tab-based application. Open www/templates/tab-dash.html and clean the content so we have the following code:

<ion-view view-title="Timeline">
  <ion-content class="padding">
  </ion-content>
</ion-view>

Open www/templates/tab-chats.html and clean it up:

<ion-view view-title="Explore">
  <ion-content>
  </ion-content>
</ion-view>

Open www/templates/tab-account.html and clean it up:

<ion-view view-title="Profile">
  <ion-content>
  </ion-content>
</ion-view>

Open www/js/controllers.js and delete the methods inside the controllers so we have the following code:

angular.module('starter.controllers', [])

.controller('DashCtrl', function($scope) {})

.controller('ChatsCtrl', function($scope, Chats) {
})

.controller('ChatDetailCtrl', function($scope, $stateParams, Chats) {
})

.controller('AccountCtrl', function($scope) {
});

We have cleaned up our tabs application. Next, we will create the layout for the timeline view. Each post in the timeline will display a username, an image, a Like button, and a Comment button. Open www/templates/tab-dash.html and add the following list:

<ion-view view-title="Timelines">
  <ion-content class="has-header">
    <div class="list card">
      <div class="item item-avatar">
        <img src="http://placehold.it/50x50">
        <h2>Some title</h2>
        <p>November 05, 1955</p>
      </div>
      <div class="item item-body">
        <img class="full-image" src="http://placehold.it/500x500">
        <p>
          <a href="#" class="subdued">1 Like</a>
          <a href="#" class="subdued">5 Comments</a>
        </p>
      </div>
      <div class="item tabs tabs-secondary tabs-icon-left">
        <a class="tab-item" href="#">
          <i class="icon ion-heart"></i>
          Like
        </a>
        <a class="tab-item" href="#">
          <i class="icon ion-chatbox"></i>
          Comment
        </a>
        <a class="tab-item" href="#">
          <i class="icon ion-share"></i>
          Share
        </a>
      </div>
    </div>
  </ion-content>
</ion-view>

Then, we will create the explore page to display photos in a grid view.
First, we need to add some styles to www/css/style.css:

.profile ul {
  list-style-type: none;
}
.imageholder {
  width: 100%;
  height: auto;
  display: block;
  margin-left: auto;
  margin-right: auto;
}
.profile li img {
  float: left;
  border: 5px solid #fff;
  width: 30%;
  height: 10%;
  -webkit-transition: box-shadow 0.5s ease;
  -moz-transition: box-shadow 0.5s ease;
  -o-transition: box-shadow 0.5s ease;
  -ms-transition: box-shadow 0.5s ease;
  transition: box-shadow 0.5s ease;
}
.profile li img:hover {
  -webkit-box-shadow: 0px 0px 7px rgba(255, 255, 255, 0.9);
  box-shadow: 0px 0px 7px rgba(255, 255, 255, 0.9);
}

Then we just add a list with image items, like so:

<ion-view view-title="Explore">
  <ion-content>
    <ul class="profile" style="margin-left:5%;">
      <li class="profile">
        <a href="#"><img src="http://placehold.it/50x50"></a>
      </li>
      <li class="profile" style="list-style-type: none;">
        <a href="#"><img src="http://placehold.it/50x50"></a>
      </li>
      <li class="profile" style="list-style-type: none;">
        <a href="#"><img src="http://placehold.it/50x50"></a>
      </li>
      <li class="profile" style="list-style-type: none;">
        <a href="#"><img src="http://placehold.it/50x50"></a>
      </li>
      <li class="profile" style="list-style-type: none;">
        <a href="#"><img src="http://placehold.it/50x50"></a>
      </li>
      <li class="profile" style="list-style-type: none;">
        <a href="#"><img src="http://placehold.it/50x50"></a>
      </li>
    </ul>
  </ion-content>
</ion-view>

Finally, we will create our profile page. The profile page consists of two parts. The first is the profile header, which shows user information such as the username, profile picture, and number of posts. The second part is a grid list of pictures uploaded by the user, similar to the grid view on the explore page. To add the profile header, open www/css/style.css and add the following styles below the existing styles:

.text-white{
  color:#fff;
}
.profile-pic {
  width: 30%;
  height: auto;
  display: block;
  margin-top: -50%;
  margin-left: auto;
  margin-right: auto;
  margin-bottom: 20%;
  border-radius: 4em 4em 4em / 4em 4em;
}

Open www/templates/tab-account.html and then add the following code inside ion-content:

<ion-content>
  <div class="user-profile" style="width:100%;height:auto;background-color:#fff;float:left;">
    <img src="img/cover.jpg">
    <div class="avatar">
      <img src="img/ionic.png" class="profile-pic">
      <ul>
        <li>
          <p class="text-white text-center" style="margin-top:-15%;margin-bottom:10%;display:block;">@ionsnap, 6 Pictures</p>
        </li>
      </ul>
    </div>
  </div>
  …
The second part of the profile page is the grid list of user images. Let's add some pictures under the profile header, before the closing ion-content tag:

<ul class="profile" style="margin-left:5%;">
  <li class="profile">
    <a href="#"><img src="http://placehold.it/100x100"></a>
  </li>
  <li class="profile" style="list-style-type: none;">
    <a href="#"><img src="http://placehold.it/100x100"></a>
  </li>
  <li class="profile" style="list-style-type: none;">
    <a href="#"><img src="http://placehold.it/100x100"></a>
  </li>
  <li class="profile" style="list-style-type: none;">
    <a href="#"><img src="http://placehold.it/100x100"></a>
  </li>
  <li class="profile" style="list-style-type: none;">
    <a href="#"><img src="http://placehold.it/100x100"></a>
  </li>
  <li class="profile" style="list-style-type: none;">
    <a href="#"><img src="http://placehold.it/100x100"></a>
  </li>
</ul>
</ion-content>

Summary

In this article, we saw the steps to create an Instagram clone layout with the Ionic framework, with the help of an example. If you are a developer who wants to get started with mobile application development using PhoneGap, then this article is for you. A basic understanding of web technologies such as HTML, CSS, and JavaScript is a must.
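As a hypothetical refinement that goes beyond this recipe, you could drive each grid from controller data with AngularJS's ng-repeat directive instead of repeating <li> elements by hand. The photos array below is an assumption, not part of the recipe's code:

// In www/js/controllers.js: expose hypothetical photo data on the scope.
.controller('ChatsCtrl', function($scope) {
  $scope.photos = [
    { url: 'http://placehold.it/50x50' },
    { url: 'http://placehold.it/50x50' },
    { url: 'http://placehold.it/50x50' }
  ];
})

<!-- In www/templates/tab-chats.html: one <li> is stamped out per photo. -->
<ul class="profile" style="margin-left:5%;">
  <li class="profile" ng-repeat="photo in photos">
    <a href="#"><img ng-src="{{photo.url}}"></a>
  </li>
</ul>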

Cocos2d for iPhone: Surfing Through Scenes

Packt
29 Dec 2010
10 min read
We'll be doing a lot of things in this article, so let's get started.

Aerial Gun, a vertical shooter game

Let's talk about the game we will be making. Aerial Gun, as I said earlier, is a vertical shooter game. That means you will be in control of an airship that moves vertically. Actually, the airship won't be moving anywhere (well, except to the left and right); what is really going to move here is the background, giving the sense that the airship is the one moving. The player will be able to control the airship by using accelerometer controls. We'll also provide some areas where the player can touch to fire bullets or bombs to destroy the enemies. Enemies will appear at different time intervals and will have different behaviors attached to them. For example, some of them could just move towards the airship, others may stay there and shoot back, and so on. We'll do it in a way that lets you create more custom behaviors later.

For this game we will be using the Cocos2d template again, just because it is the easier way to have a project properly set up, at least to begin our work. So, follow the same steps as when we created the Coloured Stones game: open Xcode, go to the File menu, and select to create a new project. Then choose the Cocos2d template (the simple one, without any physics engine) and give a name to the project. I will call it AerialGun. Once you do that, your new project should be open.

Creating new scenes

We will begin this new game by first creating a couple of scenes other than the one that holds the actual game. If you take a look at the class that the template generates, at first sight you won't find what would appear to be a scene. That is because the template handles scenes in a confusing fashion, at least for people who are just getting started with Cocos2d. Let's take a look at that. Open the newly generated HelloWorldScene.m file. This is the same code we based our first game on. This time we will analyze it so you understand what it is doing behind the scenes. Take a look at the first method of the implementation of the HelloWorld layer:

+(id) scene
{
    // 'scene' is an autorelease object.
    CCScene *scene = [CCScene node];

    // 'layer' is an autorelease object.
    HelloWorld *layer = [HelloWorld node];

    // add layer as a child to scene
    [scene addChild: layer];

    // return the scene
    return scene;
}

If you remember, this is the method that gets called in the AppDelegate when the Director needs to start running a scene. So why are we passing this method of a layer instead of an instance of a CCScene class? If you pay attention to the +(id)scene method of the layer, what it does is instantiate a CCScene object, then instantiate a HelloWorld object (the layer), add that layer to the scene, and return the scene. This actually works, as you have witnessed, and is pretty quick to set up and get you up and running.
However, sometimes you will need to have your own custom scenes, that is, a class that inherits from CCScene and extends it in some fashion. Now we are going to create some of the scenes that we need for this game and add a layer to each one. Then, when it is time to work with the game scene, we will modify it to match our needs. Before doing anything, please change the name of the HelloWorld layer to GameLayer and the name of the HelloWorldScene to GameScene, just as we did back then with our first game.

Time for action – creating the splash and main menu scene

We will begin with a simple scene, the splash screen. A splash screen is the first thing that appears as you launch a game. Well, the second, if you count the Default.png image. The objective of this screen is to show the player some information about the developer, other companies that helped, and so on. Most times you will just show a couple of logos and move to the main menu. So, what we will do now is put in a new Default.png image, then create the SplashScene.

The Default.png image is located in your project's Resources group folder, and it is the first image that is shown when the application is launched. You should always have one, so that something is shown while the application is being launched. This happens before anything is initialized, so you can't do anything else other than show this image. Its dimensions must be 320 x 480 (when developing for older-generation iPhones), so if your game is supposed to be in landscape mode, you should rotate the image with any graphics software before including it in your project.

The SplashScene is going to show a sprite for a moment, then fade it, show another one, and then move to the MainMenuScene. Let's add a new file to the project. Select New File (Cmd + N) from the File menu, then select Objective-C class (we are going to do it from scratch anyway) and name it SplashScene. This file will hold both the SplashScene and the SplashLayer classes. The SplashLayer will be added as a child of the SplashScene, and inside the layer we will add the sprites. Before writing any code, add the three splash images I created for you to the project. You can find them in the companion files folder. Once you have added them to your project, we can begin working on the SplashScene.
The following is the SplashScene.h file:

#import <Foundation/Foundation.h>
#import "cocos2d.h"
#import "MainMenuScene.h"

@interface SplashScene : CCScene {
}
@end

@interface SplashLayer : CCLayer {
}
@end

And the SplashScene.m file:

#import "SplashScene.h"

//Here is the implementation of the SplashScene
@implementation SplashScene

- (id) init
{
    self = [super init];
    if (self != nil) {
        [self addChild:[SplashLayer node]];
    }
    return self;
}

-(void)dealloc
{
    [super dealloc];
}
@end

//And here is the implementation of the SplashLayer
@implementation SplashLayer

- (id) init
{
    if ((self = [super init])) {
        isTouchEnabled = YES;
        NSMutableArray * splashImages = [[NSMutableArray alloc] init];
        for(int i = 1; i <= 3; i++)
        {
            CCSprite * splashImage = [CCSprite spriteWithFile:
                [NSString stringWithFormat:@"splash%d.png", i]];
            [splashImage setPosition:ccp(240,160)];
            [self addChild:splashImage];
            if(i != 1)
                [splashImage setOpacity:0];
            [splashImages addObject:splashImage];
        }
        [self fadeAndShow:splashImages];
    }
    return self;
}

//Now we add the methods that handle the image switching
-(void)fadeAndShow:(NSMutableArray *)images
{
    if([images count] <= 1)
    {
        [images release];
        [[CCDirector sharedDirector] replaceScene:[MainMenuScene node]];
    }
    else
    {
        CCSprite * actual = (CCSprite *)[images objectAtIndex:0];
        [images removeObjectAtIndex:0];
        CCSprite * next = (CCSprite *)[images objectAtIndex:0];
        [actual runAction:[CCSequence actions:
            [CCDelayTime actionWithDuration:2],
            [CCFadeOut actionWithDuration:1],
            [CCCallFuncN actionWithTarget:self selector:@selector(remove:)],
            nil]];
        [next runAction:[CCSequence actions:
            [CCDelayTime actionWithDuration:2],
            [CCFadeIn actionWithDuration:1],
            [CCDelayTime actionWithDuration:2],
            [CCCallFuncND actionWithTarget:self
                selector:@selector(cFadeAndShow:data:) data:images],
            nil]];
    }
}

-(void) cFadeAndShow:(id)sender data:(void*)data
{
    NSMutableArray * images = (NSMutableArray *)data;
    [self fadeAndShow:images];
}

-(void)remove:(CCSprite *)s
{
    [s.parent removeChild:s cleanup:YES];
}

-(void)dealloc
{
    [super dealloc];
}
@end

As you may notice, I have put two classes together in one file. You can do this with any number of classes, as long as you don't get lost in such a long file. Generally, you will create a pair of files (the .m and .h) for each class, but as the SplashScene class has very little code, I'd rather put it there. For this to work properly, we need to make some other changes. First, open the AerialGunAppDelegate.m file and change the line where the Director starts running the GameScene. We want it to start by running the SplashScene now, so replace that line with the following:

[[CCDirector sharedDirector] runWithScene:[SplashScene node]];

Also remember to import your SplashScene.h file in order for the project to compile properly. Finally, we have to create the MainMenuScene, which is the scene the Director will run after the last image has faded out. So, let's create that one and leave it blank for now. The following is the MainMenuScene.h file:

#import <Foundation/Foundation.h>
#import "cocos2d.h"

@interface MainMenuScene : CCScene {
}
@end

@interface MainMenuLayer : CCLayer {
}
@end

The following is the MainMenuScene.m file:

#import "MainMenuScene.h"

@implementation MainMenuScene

- (id) init
{
    self = [super init];
    if (self != nil) {
        [self addChild:[MainMenuLayer node]];
    }
    return self;
}

-(void)dealloc
{
    [super dealloc];
}
@end

@implementation MainMenuLayer

- (id) init
{
    if ((self = [super init])) {
        isTouchEnabled = YES;
    }
    return self;
}

-(void)dealloc
{
    [super dealloc];
}
@end

That's all for now.
Run the project and you should see the first image appear, fade into the next one after a while, and so on until the last one. Once the last one has faded out, we move on to the MainMenuScene.

What just happened?

Creating a new scene is as easy as that. You can see from the MainMenuScene code that what we are doing is quite simple: when the MainMenuScene gets created, we just instantiate the MainMenuLayer and add it to the scene. That layer is where we will later add all the necessary logic for the main menu. The MainMenuScene and the SplashScene both inherit from the CCScene class. This class is just another CCNode. Let's take a little look at the logic behind the SplashScene:

NSMutableArray * splashImages = [[NSMutableArray alloc] init];
for(int i = 1; i <= 3; i++)
{
    CCSprite * splashImage = [CCSprite spriteWithFile:
        [NSString stringWithFormat:@"splash%d.png", i]];
    [splashImage setPosition:ccp(240,160)];
    [self addChild:splashImage];
    if(i != 1)
        [splashImage setOpacity:0];
    [splashImages addObject:splashImage];
}

[self fadeAndShow:splashImages];

This piece of code is from the SplashLayer init method. What we are doing here is creating an array that will hold any number of sprites (in this case, three of them). Then we create and add those sprites to it. Finally, the fadeAndShow method is called, passing the newly created array to it.

The fadeAndShow method is responsible for fading the images it receives. It grabs the first image in the array (after which that sprite is removed from the array); it then grabs the next one. Then it applies actions to both of them to fade them out and in. The last action of the second sprite's sequence is a CCCallFuncND, which we use to call the fadeAndShow method again with the modified array. This occurs if there is more than one sprite remaining in the array. If there isn't, the fadeAndShow method calls the Director's replaceScene method with the MainMenuScene. I have built the SplashLayer logic so you can add more splash images to the array (or leave just one) if you like. Generally, just one or two images will be more than enough.
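One hypothetical refinement, not part of this article's code: instead of a hard cut, you can wrap the new scene in a transition when calling replaceScene. Be aware that the transition class names depend on your cocos2d version; 0.99 ships CCFadeTransition, which later versions renamed CCTransitionFade:

// Hypothetical variation: fade into the main menu instead of cutting.
// In cocos2d 0.99 the class is CCFadeTransition (CCTransitionFade in 1.x).
[[CCDirector sharedDirector] replaceScene:
    [CCFadeTransition transitionWithDuration:1.0f scene:[MainMenuScene node]]];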

iPhone JavaScript: Web 2.0 Integration

Packt
04 Oct 2011
7 min read
Introduction

Mashup applications allow us to exchange data with other web applications or services. Web 2.0 applications provide this feature through different mechanisms. Currently, some of the most popular websites, like YouTube, Flickr, and Twitter, provide a way to exchange data through their APIs. From the point of view of user interfaces, mashups allow us to build rich interfaces for our application. The first recipe in this article covers embedding information from a standard RSS feed. Later, we'll delve into YouTube, Facebook, Twitter, and Flickr and build some interesting mashup web applications for the iPhone.

Embedding an RSS feed

Our goal for this recipe will be to read an RSS feed and present the information it provides in our application. In practice, we're going to use a feed offered by The New York Times newspaper, where each item provides a summary and a link to the original web page where the information resides. You could also choose another RSS feed for testing this recipe. The code for this recipe can be found at code/ch10/rss.html in the code bundle provided on the Packtpub site.

Getting ready

Make sure iWebKit is installed on your computer before continuing.

How to do it...

As you've learned in the previous recipes, you need to create an XHTML file with the required headers for loading the files provided by iWebKit:

<link href="../iwebkit/css/style.css" rel="stylesheet" media="screen" type="text/css" />
<script src="../iwebkit/javascript/functions.js" type="text/javascript"></script>

The second step will be to build our simple user interface, containing only a top bar and an unordered list with one item. The top bar is added with the following lines:

<div id="topbar">
  <div id="title">RSS feed</div>
</div>

To add the unordered list, use the following code:

<div id="content">
  <ul class="pageitem">
    <li class="textbox">
      <p>
        <script src="http://rssxpress.ukoln.ac.uk/lite/viewer/?rss=http://www.nytimes.com/services/xml/rss/nyt/HomePage.xml" type="text/javascript"></script>
      </p>
    </li>
  </ul>
</div>

Finally, you should add the code for closing the body and html tags and save the new file as rss.html. After loading your new application, you will see the feed items; if you click on one of them, Safari Mobile will open the web page for the article.

How it works...

To avoid complexity and keep our recipe as simple as possible, we used a free web service provided by RSSxpress. This service is called RSSxpressLite, and it works by returning a chunk of JavaScript code. This code inserts an HTML table containing a summary and a link for each item provided by the referenced RSS feed. Thanks to this web service, we don't need to parse the response of the original feed; RSSxpressLite does the job for us. Because the web service returns the code that we need, you only have to write a small line of JavaScript referring to the web service through its URL, passing the RSS feed to display as a parameter.

There's more...

To learn more about RSSxpressLite, take a look at http://rssxpress.ukoln.ac.uk/lite/include/.

Opening a YouTube video

It is safe to say that everyone who uses the Internet knows of YouTube. It is one of the most popular websites in the world. Millions of people use YouTube to watch videos through an assortment of devices, such as PCs, tablets, and smartphones.
Apple's devices are no exception, and of course we can watch YouTube videos on the iPhone and iPad. In this case, we're going to load a YouTube video when the user clicks on a specific button. The link will open a new web page, which allows us to play it. The simple XHTML recipe can be found at code/ch10/youtube.html in the code bundle provided on the Packtpub site.

Getting ready

This recipe only requires the UiUIKit framework for building the user interface of this application. You can use your favorite YouTube video for this recipe. By default, we're using a video provided by Apple introducing the new iPad 2 device.

How to do it...

Following the example from the previous recipe, create a new XHTML file called youtube.html and insert the standard headers for loading the UiUIKit framework. Then add the following CSS inside the <head> section to style the main button:

<style type="text/css">
  #btn {
    margin-right: 12px;
  }
</style>

Our graphical user interface will be completed by adding the following XHTML code:

<div id="header">
  <h1>YouTube video</h1>
</div>
<h1>Video</h1>
<p id="btn">
  <a href="http://m.youtube.com/watch?v=Z_d6_gbb90I" class="button white">Watch</a>
</p>

When the user clicks on our main button, Safari Mobile goes to the video's page on YouTube. After clicking on the play button, the video starts playing, and we can rotate the device for a better aspect ratio.

How it works...

This recipe is pretty simple; we only need to create a link to the desired YouTube video. The most important thing to keep in mind is that we use a mobile-specific domain for loading our video on mobile devices: the URL is simply http://m.youtube.com, instead of the regular http://www.youtube.com.

Posting on your Facebook wall

The application developed for this recipe shows how to authenticate with Facebook and how to write a post on your public wall. If everything is successful, an alert box reports it to the user. Although there are myriad complex applications with better functionality that can be built for Facebook, we will focus on simple posting in this recipe. This application will only allow you to post on your own wall, and for this you need to hardcode your Facebook account for posting. This keeps the recipe as simple as possible while giving a good understanding of all the complex processes involved in dealing with the OAuth protocol used by Facebook. Thanks to this open protocol, it is easier to allow secure authentication against APIs from web applications. Also, this recipe requires a real Internet domain and a server with public access, so you cannot test this application on your local machine. Our application needs to use a server-side language, for which we'll use PHP. Currently, it's very easy to find hosting services for PHP applications, and very cheap ones at that. You can find the complete code for this recipe at code/ch10/facebook/ in the code bundle provided on the Packtpub site.

Getting ready

To build the application for this recipe, you need a public server with an Internet domain linked to it. Also, you must install a web server with a PHP interpreter and have the UiUIKit framework ready to use. You need to install the cURL library, which allows PHP to connect and communicate with many different types of servers.
Two interesting resources on this topic are:

http://www.php.net/manual/en/book.curl.php
http://www.php.net/manual/en/curl.setup.php
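To give a feel for how PHP and cURL ultimately talk to Facebook, here is a minimal hypothetical sketch of posting to a wall through the Graph API of that era. It is not the recipe's actual code, and the access token value is a placeholder that a real OAuth flow would supply:

<?php
// Hypothetical sketch: post a message to your own wall via the Graph API.
// $accessToken is a placeholder; a real token comes from the OAuth dance.
$accessToken = 'YOUR_OAUTH_ACCESS_TOKEN';

$ch = curl_init('https://graph.facebook.com/me/feed');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    'message' => 'Posted from my iPhone web app',
    'access_token' => $accessToken
)));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$response = curl_exec($ch);
curl_close($ch);
echo $response;
?>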

Designing a Simple, Robust Object Detector and Classifier

Packt
13 Jun 2016
19 min read
This article by Joseph Howse, author of the book iOS Application Development with OpenCV 3, illustrates a scale-invariant, rotation-invariant approach to object detection and classification, using OpenCV 3 and just 250 lines of custom C++ code. The technique relies on blob detection, histogram analysis, and SURF (or ORB if SURF is unavailable). The classifier is sensitive to colors as well as keypoints, and it can work with a small number of training images. For background information, sample images, and a complete tutorial on how to integrate this detector and classifier into an iOS application, refer to Chapter 5, Classifying Coins and Commodities, in the book iOS Application Development with OpenCV 3 (Packt Publishing, 2016). You could also use this article's C++ code on other platforms besides iOS.

Defining blobs and a blob detector

For our purposes, a blob simply has an image and a label. The image is a cv::Mat and the label is an unsigned integer. The label's default value is 0, which shall signify that the blob has not yet been classified. Create a new header file, Blob.h, and fill it with the following declaration of a Blob class:

#ifndef BLOB_H
#define BLOB_H

#include <opencv2/core.hpp>

class Blob
{
public:
    Blob(const cv::Mat &mat, uint32_t label = 0ul);

    /**
     * Construct an empty blob.
     */
    Blob();

    /**
     * Construct a blob by copying another blob.
     */
    Blob(const Blob &other);

    bool isEmpty() const;

    uint32_t getLabel() const;
    void setLabel(uint32_t value);

    const cv::Mat &getMat() const;
    int getWidth() const;
    int getHeight() const;

private:
    uint32_t label;

    cv::Mat mat;
};

#endif // BLOB_H

A Blob's image does not change after construction, but the label may change as a result of our classification process. Note that most of Blob's methods have the const modifier, but of course, setLabel does not, because it changes the label.

Now, let's declare a BlobDetector class in another new header file, BlobDetector.h. This class provides a detect public method to analyze a given image and populate a vector of Blob objects based on detected objects in the image. Another public method, getMask, returns a thresholded version of the most recent image that the detect method received. Internally, BlobDetector uses several more matrices and vectors to hold intermediate results, including the mask, detected edges, detected contours, and a hierarchy that describes the contours' relationship to each other. Here is the detector's declaration:

#ifndef BLOB_DETECTOR_H
#define BLOB_DETECTOR_H

#include <vector>

#include "Blob.h"

class BlobDetector
{
public:
    void detect(cv::Mat &image, std::vector<Blob> &blobs,
        double resizeFactor = 1.0, bool draw = false);

    const cv::Mat &getMask() const;

private:
    void createMask(const cv::Mat &image);

    cv::Mat resizedImage;
    cv::Mat mask;
    cv::Mat edges;
    std::vector<std::vector<cv::Point>> contours;
    std::vector<cv::Vec4i> hierarchy;
};

#endif // !BLOB_DETECTOR_H

Later, in the Detecting blobs against a plain background section, we will define the methods' bodies in new files called Blob.cpp and BlobDetector.cpp.
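Before we define those bodies, here is a hypothetical sketch (not code from the book) of how calling code could eventually drive the detector once it is implemented:

#include <cassert>

#include "BlobDetector.h"

// Hypothetical usage: detect blobs in a frame at half resolution and
// draw their bounding rectangles on the original frame.
void processFrame(cv::Mat &frame)
{
    BlobDetector detector;
    std::vector<Blob> blobs;
    detector.detect(frame, blobs, 0.5, true);
    for (const Blob &blob : blobs) {
        // Every blob starts unclassified; a label of 0 means "unknown".
        assert(blob.getLabel() == 0);
    }
}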
Defining blob descriptors and a blob classifier

If you are familiar with keypoint matching, you know that a keypoint has a descriptor or set of descriptive statistics. Similarly, we can define a custom descriptor for a blob. As our classifier relies on histogram comparison and keypoint matching, let's say that a blob's descriptor consists of a normalized histogram and a matrix of keypoint descriptors. The descriptor object is also a convenient place to put the label. Create a new header file, BlobDescriptor.h, and put the following declaration of a BlobDescriptor class in it:

#ifndef BLOB_DESCRIPTOR_H
#define BLOB_DESCRIPTOR_H

#include <opencv2/core.hpp>

class BlobDescriptor
{
public:
    BlobDescriptor(const cv::Mat &normalizedHistogram,
        const cv::Mat &keypointDescriptors, uint32_t label);

    const cv::Mat &getNormalizedHistogram() const;
    const cv::Mat &getKeypointDescriptors() const;
    uint32_t getLabel() const;

private:
    cv::Mat normalizedHistogram;
    cv::Mat keypointDescriptors;
    uint32_t label;
};

#endif // !BLOB_DESCRIPTOR_H

Note that BlobDescriptor is designed as an immutable class: all its methods, except the constructor, have the const modifier.

Now, let's declare a BlobClassifier class in another new header file, BlobClassifier.h. Publicly, this class receives Blob objects via an update method (for reference blobs) and a classify method (for blobs that the detector found in the scene). Privately, BlobClassifier creates, owns, and compares BlobDescriptor objects that pertain to the Blob objects. Thus, BlobClassifier is the only part of our program that needs to deal with BlobDescriptor. BlobClassifier also owns instances of the OpenCV classes that are responsible for keypoint detection, description, and matching. Here is our classifier's declaration:

#ifndef BLOB_CLASSIFIER_H
#define BLOB_CLASSIFIER_H

#import "Blob.h"
#import "BlobDescriptor.h"

#include <opencv2/features2d.hpp>

class BlobClassifier
{
public:
    BlobClassifier();

    /**
     * Add a reference blob to the classification model.
     */
    void update(const Blob &referenceBlob);

    /**
     * Clear the classification model.
     */
    void clear();

    /**
     * Classify a blob that was detected in a scene.
     */
    void classify(Blob &detectedBlob) const;

private:
    BlobDescriptor createBlobDescriptor(const Blob &blob) const;
    float findDistance(const BlobDescriptor &detectedBlobDescriptor,
        const BlobDescriptor &referenceBlobDescriptor) const;

    /**
     * A feature detector and descriptor extractor.
     * It finds features in images.
     * Then, it creates descriptors of the features.
     */
    cv::Ptr<cv::Feature2D> featureDetectorAndDescriptorExtractor;

    /**
     * A descriptor matcher.
     * It matches features based on their descriptors.
     */
    cv::Ptr<cv::DescriptorMatcher> descriptorMatcher;

    /**
     * Descriptors of the reference blobs.
     */
    std::vector<BlobDescriptor> referenceBlobDescriptors;
};

#endif // !BLOB_CLASSIFIER_H

Later, in the Classifying blobs by color and keypoints section, we will write the methods' bodies in new files called BlobDescriptor.cpp and BlobClassifier.cpp.

Detecting blobs against a plain background

Let's assume that the background has a distinctive color range, such as cream to snow white. Our blob detector will calculate the image's dominant color range and search for large regions whose colors differ from this range. These anomalous regions will constitute the detected blobs. For small objects such as a bean or coin, a user can easily find a plain background such as a blank sheet of paper, a plain tabletop, a plain piece of clothing, or even the palm of a hand. As our blob detector dynamically estimates the background color range, it can cope with various backgrounds and lighting conditions; it is not limited to a lab environment. Create a new file, BlobDetector.cpp, for the implementation of our BlobDetector class.
(To review the header, refer back to the Defining blobs and a blob detector section.) At the top of BlobDetector.cpp, we will define several constants that pertain to the breadth of the background color range, the size and smoothing of the blobs, and the color of the blobs' rectangles in the preview image. Here is the relevant code:

#include <opencv2/imgproc.hpp>

#include "BlobDetector.h"

const double MASK_STD_DEVS_FROM_MEAN = 1.0;
const double MASK_EROSION_KERNEL_RELATIVE_SIZE_IN_IMAGE = 0.005;
const int MASK_NUM_EROSION_ITERATIONS = 8;

const double BLOB_RELATIVE_MIN_SIZE_IN_IMAGE = 0.05;

const cv::Scalar DRAW_RECT_COLOR(0, 255, 0); // Green

Of course, the heart of BlobDetector is its detect method. Optionally, the method creates a downsized version of the image for faster processing. Then, we call a helper method, createMask, to perform thresholding and erosion on the (resized) image. We pass the resulting mask to the cv::Canny function to perform Canny edge detection. We pass the edge mask to the cv::findContours function, which populates a vector of contours, in the vector<vector<cv::Point>> format. That is to say, each contour is a vector of points. For each contour, we find the points' bounding rectangle. If we are working with a resized image, we restore the bounding rectangle to the original scale. We reject rectangles that are very small. Finally, for each accepted rectangle, we put a new Blob object in the output vector and optionally draw the rectangle in the original image. Here is the detect method's implementation:

void BlobDetector::detect(cv::Mat &image,
  std::vector<Blob> &blobs, double resizeFactor, bool draw)
{
  blobs.clear();

  if (resizeFactor == 1.0) {
    createMask(image);
  } else {
    cv::resize(image, resizedImage, cv::Size(), resizeFactor,
      resizeFactor, cv::INTER_AREA);
    createMask(resizedImage);
  }

  // Find the edges in the mask.
  cv::Canny(mask, edges, 191, 255);

  // Find the contours of the edges.
  cv::findContours(edges, contours, hierarchy, cv::RETR_TREE,
    cv::CHAIN_APPROX_SIMPLE);

  std::vector<cv::Rect> rects;
  int blobMinSize = (int)(MIN(image.rows, image.cols) *
    BLOB_RELATIVE_MIN_SIZE_IN_IMAGE);
  for (const std::vector<cv::Point> &contour : contours) {

    // Find the contour's bounding rectangle.
    cv::Rect rect = cv::boundingRect(contour);

    // Restore the bounding rectangle to the original scale.
    rect.x /= resizeFactor;
    rect.y /= resizeFactor;
    rect.width /= resizeFactor;
    rect.height /= resizeFactor;

    if (rect.width < blobMinSize || rect.height < blobMinSize) {
      continue;
    }

    // Create the blob from the sub-image inside the bounding
    // rectangle.
    blobs.push_back(Blob(cv::Mat(image, rect)));

    // Remember the bounding rectangle in order to draw it later.
    rects.push_back(rect);
  }

  if (draw) {
    // Draw the bounding rectangles.
    for (const cv::Rect &rect : rects) {
      cv::rectangle(image, rect.tl(), rect.br(), DRAW_RECT_COLOR);
    }
  }
}

The getMask method simply returns the mask that we previously computed in the detect method:

const cv::Mat &BlobDetector::getMask() const
{
  return mask;
}

The createMask helper method begins by finding the image's mean color and standard deviation using the cv::meanStdDev function. We calculate a range of background colors based on a certain number of standard deviations from the mean, as defined by the MASK_STD_DEVS_FROM_MEAN constant near the top of BlobDetector.cpp.
We deem values outside this range to be foreground colors. Using the cv::inRange function, we map the background colors (in the image) to white (in the mask) and the foreground colors (in the image) to black (in the mask). Then, we create a square kernel using the cv::getStructuringElement function. Finally, we use the kernel in the cv::erode function to apply the erosion morphological operation to the mask. This has the effect of smoothing the black (foreground) regions such that they swallow up little gaps that are probably just noise. Here is the relevant code:

void BlobDetector::createMask(const cv::Mat &image)
{
  // Find the image's mean color.
  // Presumably, this is the background color.
  // Also find the standard deviation.
  cv::Scalar meanColor;
  cv::Scalar stdDevColor;
  cv::meanStdDev(image, meanColor, stdDevColor);

  // Create a mask based on a range around the mean color.
  cv::Scalar halfRange = MASK_STD_DEVS_FROM_MEAN * stdDevColor;
  cv::Scalar lowerBound = meanColor - halfRange;
  cv::Scalar upperBound = meanColor + halfRange;
  cv::inRange(image, lowerBound, upperBound, mask);

  // Erode the mask to merge neighboring blobs.
  int kernelWidth = (int)(MIN(image.cols, image.rows) *
    MASK_EROSION_KERNEL_RELATIVE_SIZE_IN_IMAGE);
  if (kernelWidth > 0) {
    cv::Size kernelSize(kernelWidth, kernelWidth);
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT,
      kernelSize);
    cv::erode(mask, mask, kernel, cv::Point(-1, -1),
      MASK_NUM_EROSION_ITERATIONS);
  }
}

That is the end of the blob detector's code. As you can see, it uses a general-purpose and rather linear approach, without any special cases for different kinds of objects. Moreover, we are using a separate blob detector and blob classifier, and this separation of responsibilities enables us to keep each class's implementation relatively simple.

For completeness, note that the Blob class's constructors have straightforward implementations that copy the arguments. For the blob's image, we make a deep copy because the original may change. (For example, the original may be a subimage in a frame of video, and after detection we may draw rectangles atop the frame of video.) Similarly, Blob's getter and setter methods are self-explanatory. Create a new file, Blob.cpp, and fill it with the following implementation:

#import "Blob.h"

Blob::Blob(const cv::Mat &mat, uint32_t label) : label(label)
{
  mat.copyTo(this->mat);
}

Blob::Blob()
{
}

Blob::Blob(const Blob &other) : label(other.label)
{
  other.mat.copyTo(mat);
}

bool Blob::isEmpty() const
{
  return mat.empty();
}
uint32_t Blob::getLabel() const
{
  return label;
}
void Blob::setLabel(uint32_t value)
{
  label = value;
}
const cv::Mat &Blob::getMat() const
{
  return mat;
}
int Blob::getWidth() const
{
  return mat.cols;
}
int Blob::getHeight() const
{
  return mat.rows;
}

Classifying blobs by color and keypoints

Our classifier operates on the assumption that a blob contains distinctive colors, distinctive keypoints, or both. To conserve memory and precompute as much relevant information as possible, we do not store images of the reference blobs, but instead we store histograms and keypoint descriptors.

Create a new file, BlobClassifier.cpp, for the implementation of our BlobClassifier class. (To review the header, refer back to the Defining blob descriptors and a blob classifier section.)
At the top of BlobClassifier.cpp, we will define several constants that pertain to the number of histogram bins, the histogram comparison method, and the relative importance of the histogram comparison versus the keypoint comparison. Here is the relevant code:

#include <cfloat>

#include <opencv2/imgproc.hpp>

#include "BlobClassifier.h"

#ifdef WITH_OPENCV_CONTRIB
#include <opencv2/xfeatures2d.hpp>
#endif

const int HISTOGRAM_NUM_BINS_PER_CHANNEL = 32;
const int HISTOGRAM_COMPARISON_METHOD = cv::HISTCMP_CHISQR_ALT;

const float HISTOGRAM_DISTANCE_WEIGHT = 0.98f;
const float KEYPOINT_MATCHING_DISTANCE_WEIGHT = 1.0f -
  HISTOGRAM_DISTANCE_WEIGHT;

(The <cfloat> include provides FLT_MAX, which the classify method uses later.)

Beware that the HISTOGRAM_NUM_BINS_PER_CHANNEL constant has a cubic relationship to memory usage. For each blob descriptor, we store a three-dimensional (BGR) histogram with HISTOGRAM_NUM_BINS_PER_CHANNEL^3 elements, and each element is a 32-bit (4-byte) floating point number. If the constant is 32, each histogram's size in megabytes is (32^3)*4/(10^6)≈0.13. This is fine for a small set of reference descriptors. If the constant is 256 (the maximum number of bins for an 8-bit color channel), each histogram's size goes up to (256^3)*4/(10^6)≈67.1 megabytes, and we store one such histogram per reference descriptor! For an iOS application, this is unacceptable, given the platform's memory constraints. At best, in a high-end iOS device, one gigabyte of RAM might be available to each application. Conservatively, you should worry if your app's memory usage approaches 100 megabytes.

Remember that OpenCV's SURF implementation is in the xfeatures2d module, which is part of opencv_contrib. If opencv_contrib is available, let's define the WITH_OPENCV_CONTRIB preprocessor flag. Then, our code includes the <opencv2/xfeatures2d.hpp> header, and we use SURF. Otherwise, we use ORB. This selection also affects the implementation of BlobClassifier's constructor. OpenCV provides factory methods for various feature detectors, descriptors, and matchers, so we simply have to use the right combination of factory methods: SURF with Flann-based matching, or ORB with brute-force matching based on the Hamming distance. Here is the constructor's implementation:

BlobClassifier::BlobClassifier()
{
#ifdef WITH_OPENCV_CONTRIB
  featureDetectorAndDescriptorExtractor =
    cv::xfeatures2d::SURF::create();
  descriptorMatcher = cv::DescriptorMatcher::create("FlannBased");
#else
  featureDetectorAndDescriptorExtractor = cv::ORB::create();
  descriptorMatcher = cv::DescriptorMatcher::create(
    "BruteForce-HammingLUT");
#endif
}

The update method's implementation calls a helper method, createBlobDescriptor, and adds the resulting BlobDescriptor to a vector of reference descriptors:

void BlobClassifier::update(const Blob &referenceBlob)
{
  referenceBlobDescriptors.push_back(
    createBlobDescriptor(referenceBlob));
}

The clear method's implementation discards all the reference descriptors such that the BlobClassifier reverts to its initial, untrained state:

void BlobClassifier::clear()
{
  referenceBlobDescriptors.clear();
}

The implementation of the classify method relies on another helper method, findDistance. For each reference descriptor, classify calls findDistance to obtain a measure of dissimilarity between the query blob's descriptor and the reference descriptor. We find the reference descriptor with the least distance (best similarity) and return its label as the classification result. If there are no reference descriptors, classify returns 0, the "unknown" label.
Here is classify's implementation:

void BlobClassifier::classify(Blob &detectedBlob) const
{
  BlobDescriptor detectedBlobDescriptor =
    createBlobDescriptor(detectedBlob);
  float bestDistance = FLT_MAX;
  uint32_t bestLabel = 0;
  for (const BlobDescriptor &referenceBlobDescriptor :
      referenceBlobDescriptors) {
    float distance = findDistance(detectedBlobDescriptor,
      referenceBlobDescriptor);
    if (distance < bestDistance) {
      bestDistance = distance;
      bestLabel = referenceBlobDescriptor.getLabel();
    }
  }
  detectedBlob.setLabel(bestLabel);
}

The createBlobDescriptor helper method is responsible for calculating a normalized histogram and keypoint descriptors for a Blob, and using them to build a new BlobDescriptor. To calculate the (non-normalized) histogram, we use the cv::calcHist function. Among its arguments, it requires three arrays to specify the channels we want to use, the number of bins per channel, and the range of each channel's values. To normalize the resulting histogram, we divide by the number of pixels in the blob's image. The following code, pertaining to the histogram, is the first half of the implementation of createBlobDescriptor:

BlobDescriptor BlobClassifier::createBlobDescriptor(
  const Blob &blob) const
{
  const cv::Mat &mat = blob.getMat();
  int numChannels = mat.channels();

  // Calculate the histogram of the blob's image.
  cv::Mat histogram;
  int channels[] = { 0, 1, 2 };
  int numBins[] = { HISTOGRAM_NUM_BINS_PER_CHANNEL,
    HISTOGRAM_NUM_BINS_PER_CHANNEL,
    HISTOGRAM_NUM_BINS_PER_CHANNEL };
  float range[] = { 0.0f, 256.0f };
  const float *ranges[] = { range, range, range };
  cv::calcHist(&mat, 1, channels, cv::Mat(), histogram, 3,
    numBins, ranges);

  // Normalize the histogram.
  histogram *= (1.0f / (mat.rows * mat.cols));

Now, we must convert the blob's image to grayscale and obtain keypoints and keypoint descriptors using the detect and compute methods of cv::Feature2D. With the normalized histogram and keypoint descriptors, we have everything that we need to construct and return a new BlobDescriptor. Here is the remainder of the implementation of createBlobDescriptor:

  // Convert the blob's image to grayscale.
  cv::Mat grayMat;
  switch (numChannels) {
    case 4:
      cv::cvtColor(mat, grayMat, cv::COLOR_BGRA2GRAY);
      break;
    default:
      cv::cvtColor(mat, grayMat, cv::COLOR_BGR2GRAY);
      break;
  }

  // Detect features in the grayscale image.
  std::vector<cv::KeyPoint> keypoints;
  featureDetectorAndDescriptorExtractor->detect(grayMat,
    keypoints);

  // Extract descriptors of the features.
  cv::Mat keypointDescriptors;
  featureDetectorAndDescriptorExtractor->compute(grayMat,
    keypoints, keypointDescriptors);

  return BlobDescriptor(histogram, keypointDescriptors,
    blob.getLabel());
}

The findDistance helper method performs histogram comparison using the cv::compareHist function and keypoint matching using the match method of cv::DescriptorMatcher. Each of the resulting keypoint matches has a distance, and we sum these distances. Then, as an overall measure of distance between the two blob descriptors, we return a weighted average of the histogram distance and the total keypoint matching distance. Here is the relevant code:

float BlobClassifier::findDistance(
  const BlobDescriptor &detectedBlobDescriptor,
  const BlobDescriptor &referenceBlobDescriptor) const
{
  // Calculate the histogram distance.
  float histogramDistance = (float)cv::compareHist(
    detectedBlobDescriptor.getNormalizedHistogram(),
    referenceBlobDescriptor.getNormalizedHistogram(),
    HISTOGRAM_COMPARISON_METHOD);

  // Calculate the keypoint matching distance.
  float keypointMatchingDistance = 0.0f;
  std::vector<cv::DMatch> keypointMatches;
  descriptorMatcher->match(
    detectedBlobDescriptor.getKeypointDescriptors(),
    referenceBlobDescriptor.getKeypointDescriptors(),
    keypointMatches);
  for (const cv::DMatch &keypointMatch : keypointMatches) {
    keypointMatchingDistance += keypointMatch.distance;
  }

  return histogramDistance * HISTOGRAM_DISTANCE_WEIGHT +
    keypointMatchingDistance * KEYPOINT_MATCHING_DISTANCE_WEIGHT;
}

That is the end of the blob classifier's code. Again, we see that a single class can provide useful, general-purpose computer vision functionality without a terribly complicated implementation. Perhaps this is a Zen moment; our previous work and studies have been a path to (some kind of) simplicity! Of course, OpenCV hides a lot of complexity for us in its implementations of histogram-related functions and keypoint-related classes, and in this way, the library offers us a relatively gentle path.

For completeness, note that the BlobDescriptor class has a straightforward implementation. Create a new file, BlobDescriptor.cpp, and fill it with the following bodies for a constructor and getters:

#include "BlobDescriptor.h"

BlobDescriptor::BlobDescriptor(const cv::Mat &normalizedHistogram,
  const cv::Mat &keypointDescriptors, uint32_t label)
: normalizedHistogram(normalizedHistogram)
, keypointDescriptors(keypointDescriptors)
, label(label)
{
}

const cv::Mat &BlobDescriptor::getNormalizedHistogram() const
{
  return normalizedHistogram;
}
const cv::Mat &BlobDescriptor::getKeypointDescriptors() const
{
  return keypointDescriptors;
}
uint32_t BlobDescriptor::getLabel() const
{
  return label;
}

Summary

Now, we have finished all the code for the detector, descriptor, and classifier! Again, for more information, refer to Chapter 5, Classifying Coins and Commodities, in the book iOS Application Development with OpenCV 3.

Resources for Article:

Further resources on this subject:
Making subtle color shifts with curves [article]
New functionality in OpenCV 3.0 [article]
Camera Calibration [article]

Profiling an app

Packt
24 Apr 2015
9 min read
This article is written by Cecil Costa, the author of the book Swift Cookbook. We'll delve into what profiling is and how we can profile an app by following some simple steps.

It's very common to hear about bugs and crashes, but just because an app doesn't show any obvious problem, it doesn't mean that it is working fine. Imagine that you have a program with a memory leak; presumably, you won't find any problem using it for 10 minutes. However, a user may find one after using it for a few days. Don't think that this sort of thing is impossible; remember that iOS apps don't terminate, so any leaked memory accumulates until your app blows up.

Performance is another important, common topic. What if your app looks okay, but it gets slower over time? We, therefore, have to be aware of this problem. This kind of testing is called profiling, and Xcode comes with a very good tool for performing it, called Instruments.

In this instance, we will profile our app to visualize the amount of energy it wastes and, of course, try to reduce it.

(For more resources related to this topic, see here.)

Getting ready

For this recipe you will need a physical device, and to install your app onto the device you will need to be enrolled in the Apple Developer Program. If you meet both requirements, the next thing you have to do is create a new project called Chapter 7 Energy.

How to do it...

To profile an app, follow these steps:

Before we start coding, we will need to add two frameworks to the project. Click on the Build Phases tab of your project, go to the Link Binary With Libraries section, and press the plus sign. Once Xcode opens a dialog window asking for the frameworks to add, choose CoreLocation and MapKit.

Now, go to the storyboard and place a label and a MapKit view. You might have a layout similar to this one:

Link the MapKit view to an outlet called map, and the UILabel to an outlet called label:

   @IBOutlet var label: UILabel!
   @IBOutlet var map: MKMapView!

Continue with the view controller; let's click at the beginning of the file to add the Core Location and MapKit imports:

import CoreLocation
import MapKit

After this, you have to initialize the location manager object in the viewDidLoad method:

   override func viewDidLoad() {
       super.viewDidLoad()
       locationManager.delegate = self
       locationManager.desiredAccuracy =
         kCLLocationAccuracyBest
       locationManager.requestWhenInUseAuthorization()
       locationManager.startUpdatingLocation()
   }

At the moment, you may get an error because your view controller doesn't conform to CLLocationManagerDelegate, so let's go to the header of the view controller class and specify that it implements this protocol. Another error we have to deal with is the locationManager variable, because it is not declared. Therefore, we have to create it as a property.
And as we are declaring properties, we will add the geocoder, which will be used later:

class ViewController: UIViewController, CLLocationManagerDelegate {
   var locationManager = CLLocationManager()
   var geocoder = CLGeocoder()

Before we implement the method that receives the positioning, let's create another method to detect whether there was any authorization error:

   func locationManager(manager: CLLocationManager!,
      didChangeAuthorizationStatus status:
         CLAuthorizationStatus) {
           var locationStatus: String
           switch status {
           case CLAuthorizationStatus.Restricted:
               locationStatus = "Access: Restricted"
               break
           case CLAuthorizationStatus.Denied:
               locationStatus = "Access: Denied"
               break
           case CLAuthorizationStatus.NotDetermined:
               locationStatus = "Access: NotDetermined"
               break
           default:
               locationStatus = "Access: Allowed"
           }
           NSLog(locationStatus)
   }

And then, we can implement the method that will update our location:

   func locationManager(manager: CLLocationManager,
     didUpdateLocations locations: [AnyObject]) {
       if locations[0] is CLLocation {
           let location: CLLocation = locations[0] as
             CLLocation
           self.map.setRegion(
             MKCoordinateRegionMakeWithDistance(
               location.coordinate, 800, 800),
             animated: true)

           geocoder.reverseGeocodeLocation(location,
             completionHandler: { (addresses,
               error) -> Void in
                   let placeMark: CLPlacemark =
                     addresses[0] as CLPlacemark
                   let curraddress: String = (placeMark.
                     addressDictionary["FormattedAddressLines"]
                     as [String])[0] as String
                   self.label.text = "You are at \(curraddress)"
           })
       }
   }

Before you test the app, there is still another step to follow. In your project navigator, click to expand the supporting files, and then click on Info.plist. Add a row by right-clicking on the list and selecting Add Row. In this new row, type NSLocationWhenInUseUsageDescription as the key and Permission required as the value, like the one shown here:

Now, select a device and install this app onto it, and test the application walking around your street (or walking around the planet Earth if you want); you will see that the label changes, and also that the map displays your current position.

Now, go back to your computer and plug the device in again. Instead of clicking on play, you have to hold the play button until you see more options, and then you have to click on the Profile option.

The next thing that will happen is that Instruments will open; a dialog will probably pop up asking for an administrator account. This is due to the fact that Instruments needs special permission to access some low-level information.

In the next dialog, you will see different kinds of instruments; some of them are OS X specific, some are iOS specific, and others are for both. If you choose an instrument for the wrong platform, the record button will be disabled. For this recipe, click on Energy Diagnostics.
Once the Energy Diagnostics window is open, you can click on the record button, which is in the upper-left corner, and try to move around. Yes, you need to keep the device connected to your computer, so you have to move around with both elements together, and do some actions with your device, such as pressing the home button and turning off the screen. Now, you may have a screen that displays an output similar to this one:

Now, you can analyze what is spending more energy in your app. To get a better idea of this, go to your code and replace the constant kCLLocationAccuracyBest with kCLLocationAccuracyThreeKilometers and check whether you have saved some energy.

How it works...

Instruments is a tool used for profiling your application. It gives you information about your app that can't be retrieved by code, or at least can't be retrieved easily. You can check whether your app has memory leaks, whether it is losing performance, and, as you can see, whether it is wasting lots of energy or not.

In this recipe we used the GPS because it is a sensor that requires some energy. Also, you can check the table at the bottom of your instrument to see that Internet requests were completed, which is something that will also empty your battery fast if you do it very frequently.

Something you might be asking is: why did we have to change Info.plist? Since iOS 8, some sensors require user permission; the GPS is one of them, so you need to specify the message that will be shown to the user.

There's more...

I recommend that you read about the way instruments work, mainly those that you will use. Check the Apple documentation about Instruments to get more details (https://developer.apple.com/library/mac/documentation/DeveloperTools/Conceptual/InstrumentsUserGuide/Introduction/Introduction.html).

Summary

In this article, we looked at the hows and whats of profiling an app. We specifically profiled our app to visualize the amount of energy it wastes. So, go ahead and try it.

Resources for Article:

Further resources on this subject:
Playing with Swift [article]
Using OpenStack Swift [article]
Android Virtual Device Manager [article]

Making Your iAd

Packt
17 Feb 2012
6 min read
Getting iAd Producer

iAd Producer is the tool that allows us to assemble great interactive ads with a simple drag-and-drop visual interface. Download and install iAd Producer on your Mac, so that you can start creating an ad.

Time for action – installing iAd Producer

To install iAd Producer, follow these steps:

To download and use iAd Producer, you need to be a paid member of the iOS Developer Program. Go to https://developer.apple.com/ios/ and click on the Log in button. Enter your Apple ID and password, and click on Sign In.

After you've signed in, find the Downloads section at the bottom of the page. Click on iAd Producer to start downloading it. You can see the download highlighted here:

If you cannot see iAd Producer in the Downloads section, make sure you're logged in and your developer account has been activated.

After the download is complete, open the file and run iAd Producer.mpkg to start the installation wizard. Follow the steps in the installation and enter your Mac password, if asked for it.

When installing certain software, you need to enter your Mac password to allow it to have privileged access to your system. Don't confuse this with the Apple ID that we set up for the iOS Developer Program. If you don't have a password on your Mac, just leave the password area blank and click on OK.

When you've gone through the installation steps, it'll take a couple of moments to install. After you get a "The installation was successful" message, you can close the installer.

What just happened?

We now have iAd Producer installed; whenever you need to open it, you can find it in the Applications folder on your Mac.

Working with iAd Producer

Let's take a look at some of the main parts of iAd Producer that you'll be using regularly, to familiarize yourself with the interface.

Launch screen

When you first open iAd Producer, you'll be able to start a new iPhone or iPad project from the project selector, as shown in the following screenshot. As the screen size and experience are so different between the two devices, we have to design and build ads specifically for each one:

From the launch screen, you can also open existing projects you've been working on.

Default ad

Once you have chosen to create either an iPad or iPhone iAd, a placeholder ad is created for you, showing the visual flow. This is the overview of your ad, which you'll be using to piece the sections of your ad together. The following screenshot shows the default overview:

Double-clicking on any of the screens in your ad flow will ask you to pick a template for that page; once assigned, you're then able to design the iAd using the canvas editor.

Template selector

Before we edit any page of an ad, we have to apply a template to it, even if it's just a blank canvas to build upon. iAd Producer automatically shows the templates relevant to the current page you're editing. This means your ad follows a structure that users expect.

Templates provide some great starting points for your iAd, whether it's a simple banner with an image and text or a 3D image carousel that the user can flick and manipulate, all created with easy point and click. The following screenshot is an example of the template chooser:

Asset Library

The Asset Library holds all the media and content for your iAd, such as the images, videos, and audio. When adding media to your Asset Library, make sure you're using high-resolution images for the high-resolution Retina display.
iAd Producer automatically generates the lower-resolution images for your ad whenever you import resources. If you wanted an image to be 200px wide and 300px high, you should double the horizontal and vertical pixels to 400px wide and 600px high. This means your graphics will look crisp and awesome on the high-resolution screens. The following screenshot shows an example of media in the Asset Library:

Ad canvas

Once you've selected a template, you can double-click on the item in the overview to open up the canvas for that page. The ad canvas is where you customize your iAd, with a powerful visual editor for manipulating each page of your ad. Here's an example of the ad canvas with a video carousel added to it:

Setting up your ad

Let's create and save an empty project to use as we create our iAd; you'll only need to do this once for each ad.

Whenever you're working with something digital, it's important to save your iAd whenever you make a significant change, in case iAd Producer closes unexpectedly. Try to get into the habit of saving regularly, to avoid losing your ad.

Time for action – creating a new project

In order to create a new project, follow the ensuing steps:

If you haven't created a new project already, open iAd Producer from your Applications folder.
Select the iPhone from the launch screen and choose Select. You'll now see the default ad overview. iAd Producer has automatically made us a project called Untitled and populated it with the default set of pages.
From the File menu, select Save to save your empty iAd, ready to have the components added to it later. Name the project something like Dino Stores, as that is the ad we'll be working on.

You can now save the progress of your project at any time by choosing File and then Save from the menu bar, or by pressing Command + S on your keyboard.

What just happened?

You've now seen the project selector and the launch screen in action, and you have the base project that we'll be building upon as we make our first iAd. If you quit this project, you can open it again from within iAd Producer by clicking on File | Open from the menu bar; or, simply double-click the project file in Finder to automatically open it.

Getting the resources

In this article, we'll be using the Dino Stores example resources that are available to download with this book. If you want to use your own assets, you'll need the following media:

An image for your banner, approximately 120px wide and 100px high
An image of your company logo or name, around 420px wide and 45px high
An 80px square image, with transparency, to be used as a map pin
A loading image, approximately 600px wide and 400px high
Between six and 10 images for a gallery, each around 304px wide and 440px high
Two or more images that will change when the iPhone is shaken, each around 600px wide and 800px high
An image related to your product or service, at least 300px wide, to use on the main menu page

These pixel sizes are at double-size to account for the high-resolution Retina display found on the iPhone 4 and later. iAd Producer will automatically create the lower-resolution versions for older devices.

Creating User Interfaces

Packt
15 Oct 2013
6 min read
When creating an Android application, we have to be aware of the existence of multiple screen sizes and screen resolutions. It is important to check how our layouts are displayed in different screen configurations. To accomplish this, Android Studio provides a functionality to change the layout preview when we are in the design mode. We can find this functionality in the toolbar; the device definition used in the preview is, by default, Nexus 4. Click on it to open the list of available device definitions, and try some of them. The difference between a tablet device and a device like the Nexus One is very notable. We should adapt our views to all the screen configurations our application supports, to ensure they are displayed optimally.

The device definitions indicate the screen size in inches, the resolution, and the screen density. Android divides screen densities into ldpi, mdpi, hdpi, xhdpi, and even xxhdpi:

ldpi (low-density dots per inch): About 120 dpi
mdpi (medium-density dots per inch): About 160 dpi
hdpi (high-density dots per inch): About 240 dpi
xhdpi (extra-high-density dots per inch): About 320 dpi
xxhdpi (extra-extra-high-density dots per inch): About 480 dpi

The last dashboards published by Google show that most devices have high-density screens (34.3 percent), followed by xhdpi (23.7 percent) and mdpi (23.5 percent). Therefore, we can cover 81.5 percent of devices by testing our application using these three screen densities. Official Android dashboards are available at http://developer.android.com/about/dashboards.

Another issue to keep in mind is the device orientation. Do we want to support landscape mode in our application? If the answer is yes, we have to test our layouts in the landscape orientation. On the toolbar, click on the layout state option to change the mode from portrait to landscape or from landscape to portrait.

If our application supports landscape mode and the layout does not display as expected in this orientation, we may want to create a variation of the layout. Click on the first icon of the toolbar, that is, the configuration option, and select the option Create Landscape Variation. A new layout will be opened in the editor. This layout has been created in the resources folder, under the directory layout-land and using the same name as the portrait layout: /src/main/res/layout-land/activity_main.xml. Now we can edit the new layout variation, perfectly conformed to landscape mode.

Similarly, we can create a variation of the layout for xlarge screens. Select the option Create layout-xlarge Variation. The new layout will be created in the layout-xlarge folder: /src/main/res/layout-xlarge/activity_main.xml. Android divides actual screen sizes into small, normal, large, and xlarge:

small: Screens classified in this category are at least 426 dp x 320 dp
normal: Screens classified in this category are at least 470 dp x 320 dp
large: Screens classified in this category are at least 640 dp x 480 dp
xlarge: Screens classified in this category are at least 960 dp x 720 dp

A dp is a density-independent pixel, equivalent to one physical pixel on a 160 dpi screen. The last dashboards published by Google show that most devices have a normal screen size (79.6 percent). If you want to cover a bigger percentage of devices, test your application by also using a small screen (9.5 percent), so the coverage will be 89.1 percent of devices.
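To make the dp unit concrete: a value in dp maps to physical pixels as px = dp x (dpi / 160). The following is a minimal Java sketch of this conversion; the ScreenUtils class and the dpToPx helper are hypothetical names for illustration, not part of the Android SDK:

import android.content.Context;

public final class ScreenUtils {

    private ScreenUtils() {
    }

    // Converts dp to physical pixels: px = dp * (dpi / 160).
    // DisplayMetrics.density already holds the (dpi / 160)
    // scaling factor for the current screen.
    public static int dpToPx(Context context, float dp) {
        float density =
            context.getResources().getDisplayMetrics().density;
        return Math.round(dp * density);
    }
}

On an mdpi screen (160 dpi) this returns the dp value unchanged; on an xhdpi screen (320 dpi) it returns twice as many pixels.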
To display multiple device configurations at the same time, click on the configuration option in the toolbar and select the option Preview All Screen Sizes, or click on Preview Representative Sample to open just the most important screen sizes. We can also delete any of the samples by right-clicking on it and selecting the Delete option from the menu. Another useful action of this menu is the Save screenshot option, which allows us to take a screenshot of the layout preview. If we create some layout variations, we can preview all of them by selecting the option Preview Layout Versions.

Changing the UI theme

Layouts and widgets are created using the default UI theme of our project. We can change the appearance of the elements of the UI by creating styles. Styles can be grouped to create a theme, and a theme can be applied to a whole activity or application. Some themes are provided by default, such as the Holo style. Styles and themes are created as resources under the /src/res/values folder.

Open the main layout using the graphical editor. The theme selected for our layout is indicated in the toolbar: AppTheme. This theme was created for our project and can be found in the styles file (/src/res/values/styles.xml). Open the styles file and notice that this theme is an extension of another theme (Theme.Light). To customize our theme, edit the styles file. For example, add the next line in the AppTheme definition to change the window background color:

<style name="AppTheme" parent="AppBaseTheme">
    <item name="android:windowBackground">#dddddd</item>
</style>

Save the file and switch to the layout tab. The background is now light gray. This background color will be applied to all our layouts due to the fact that we configured it in the theme and not just in the layout.

To completely change the layout theme, click on the theme option from the toolbar in the graphical editor. The theme selector dialog is now opened, displaying a list of the available themes. The themes created in our own project are listed in the Project Themes section. The section Manifest Themes shows the theme configured in the application manifest file (/src/main/AndroidManifest.xml). The All section lists all the available themes.

Summary

In this article, we have seen how to develop and build Android applications with this new IDE. It also shows how to support multiple screens and change their properties using the SDK. The changing of UI themes for the devices is also discussed, along with its properties. The main focus was on the creation of the user interfaces using both the graphical view and the text-based view.

Resources for Article:

Further resources on this subject:
Creating Dynamic UI with Android Fragments
Building Android (Must know)
Android Fragmentation Management
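As an addendum to the theme discussion above, a theme can also be applied to a single activity in code instead of through the manifest or the theme selector. The following is a minimal sketch; the ThemedActivity class is a hypothetical example, and it assumes the AppTheme style from the styles file shown earlier. Note that setTheme must run before setContentView, or the theme will not take effect:

import android.app.Activity;
import android.os.Bundle;

public class ThemedActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        // Apply the theme before any views are inflated.
        setTheme(R.style.AppTheme);
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }
}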
iPhone JavaScript: Installing Frameworks

Packt
23 Jun 2011
14 min read
iPhone JavaScript Cookbook: clear and practical recipes for building web applications using JavaScript and AJAX, without having to learn Objective-C or Cocoa.

(For more resources related to this subject, see here.)

Introduction

Many web applications implement common features independent of the final purpose for which they have been designed. Functionalities and features such as authentication, form validation, retrieving records from a database, caching, logging, and pagination are very common in modern web applications. As a developer, surely you have implemented one or more of these features in your applications more than once.

Good developers and software engineers insist on concepts such as modularity, reusability, and encapsulation; as a consequence, you can find a lot of books, papers, and articles talking about how to design your software using these techniques. In fact, modern and popular methodologies, such as Extreme Programming, Scrum, and Test-driven Development, are based on those principles.

Although this approach sounds very appealing in theory, it can be complicated to carry out in practice. Developing any kind of software from scratch for any platform is undoubtedly a hard task. Complexity grows when the target platform, operating system, or machine has its own specific rules and mechanisms. Some tools can make our job less complicated, but only one kind of them is definitely a safe bet. It is here that we meet frameworks: sets of proven code that offer common functionality and standard structures for software development. This code makes our life much easier without reinventing the wheel, and it gives a skeleton to our applications, making sure that we're doing things correctly. In addition, frameworks avoid starting from scratch once more.

From a technical point of view, most frameworks are a set of libraries implementing functions, classes, and methods. Using frameworks, we can save time and money, writing less code thanks to the code skeleton and the features implemented on it. Usually, frameworks force us to follow standards, and they offer well-proven code that helps beginners avoid common mistakes. Tasks such as testing, maintenance, and deployment are easier to do using frameworks, due to the tools and mechanisms included. On the other hand, the learning curve could be a big and difficult drawback for beginners.

Through this article, we'll learn how to install the main frameworks for JavaScript, HTML, and CSS development for iPhone. All of them offer a base to develop applications with a consistent and native look and feel, using different methods. While some of them are focused on the user interface, others allow using AJAX in an efficient and easy way. Some frameworks even allow building native applications from the original code of the web application. We have the chance to choose whichever is better to fulfill our requirements; it is even possible to use more than one of these solutions for the same application. For our recipes, we'll use the following frameworks:

iUI: This is focused on the look and feel of the iPhone and consists of CSS files, images, and a small JavaScript library. Its objective is to get a web application running on the device with a consistent interface, such as a native application. This framework establishes a correspondence between HTML tags and the conventions used for developing native applications.
UiUIKit: Using a set of CSS and image files, it provides a coherent system for building web applications with a graphic interface like native iPhone applications. The features offered by this framework are very similar to iUI.

XUI: This is a pure JavaScript library specific to mobile development. It has been designed to be faster and lighter than other similar libraries, such as jQuery, MooTools, and Prototype.

iWebKit: This is developed specifically for Apple's devices and is compatible with the CSS3 standard; it helps to write web applications or websites with minimal HTML knowledge. Its modular design supports plugins for adding new features, and we can build and use themes for UI customization.

WebApp.Net: This framework comes loaded with JavaScript, CSS, and image files for developing web applications for mobile devices whose web browsers use the WebKit engine. Besides building interfaces, this framework includes functionality to use AJAX in an easy and efficient way.

PhoneGap: This is designed to minimize the effort of developing native mobile applications for different operating systems, platforms, and devices. It is based on the WORA (write once, run anywhere) principle, and it allows conversion of a web application into a native application. It supports many platforms and operating systems, such as iOS, Android, webOS, Symbian, and BlackBerry OS.

Apple Dashcode: Formally, this is a software development tool for Mac OS X, included in the Leopard and Snow Leopard versions and focused on widget development for these operating systems. However, the later versions allow you to write web applications for iPhone and other iOS devices, offering a graphical interface builder.

Installing the iUI framework

This recipe shows how to download and install the iUI framework on different operating systems. In particular, we'll cover Microsoft Windows, Mac OS X, and GNU/Linux.

Getting ready

The first step is to get ready; some tools need to be downloaded and decompressed. As computer users, we know how to decompress files using software such as WinZip, Ark, or the built-in utility on Mac OS X. You will surely have a web browser installed on your computer. If you are a Linux or Mac developer, you already know how to use curl or wget. These tools are very useful for quick downloads, and you only need the command line, through applications such as GNOME Terminal, Konsole, iTerm, or Terminal.

iUI is an open source project, so you can download the code for free. The project releases some stable versions, packed and ready to download, but it is also possible to download a development version. This one could be suitable if you prefer working with the latest changes made by the official developers contributing to the project. Because these developers use the Mercurial version control system, we'll need to install a Mercurial client to get access to this code.

How to do it...

iUI is an open source project, so you can download the code for free. Open your favorite web browser and enter this URL: http://code.google.com/p/iui/downloads/list

On that web page, you'll see a list of files that correspond to the different release versions of this framework. Clicking on the link for the latest release takes you to a new web page that shows you a new link for the file. Click on it for instant downloading.

If you are a GNU/Linux user or a Mac developer, you will be used to the command line.
Open your terminal application and launch this command from your desired directory:

$ wget http://iui.googlecode.com/files/iui-0.31.tar.gz

Once you have downloaded the tarball file, it's time to extract its content to a specific folder on your computer. WinZip and WinRAR are the most popular tools for this task on Windows. Linux distributions install similar tools by default, such as File Roller and Ark. Double-clicking from the download window of the Safari browser will extract the files directly to your default folder on your Mac, which is usually called Downloads. Command-line enthusiasts can execute the following command:

$ tar -zxvf iui-0.31.tar.gz

How it works...

After decompressing the downloaded file, you'll find a folder with different subfolders and files. The most important is a subfolder called iui, which contains the CSS, image, and JavaScript files for building our web applications for iPhone. We need to copy this subfolder to our working folder, where the other application files reside. Sharing this framework across different web applications is possible; you only need to put iUI in a place where these applications have permission to access it. Usually, this place is a folder under the DocumentRoot of your web server. If you're planning to write a high-load application, it would be a good idea to use a cloud or CDN (Content Delivery Network) service, such as Amazon Simple Storage Service (Amazon S3), for hosting and serving static HTML, CSS, JavaScript, and image files.

Installing the iUI framework is a straightforward process. You simply download and decompress one file, and then copy one folder to another folder that the web server has permission to access. Apache is one of the most used and widespread web servers in the world. Other popular options are Internet Information Services (IIS), lighttpd, and nginx. The Apache web server is installed by default on Mac OS X; most operating systems based on Linux and UNIX offer binary packages for easy installation, and you can find binary files for installing on Windows as well. IIS was designed for Windows operating systems; meanwhile, lighttpd and nginx are gaining popularity and are used on UNIX systems such as Linux distros, FreeBSD, and OpenBSD.

Ubuntu Linux uses the /var/www/ directory as the main DocumentRoot for Apache. So, in order to share the iUI framework across applications, you can copy the folder by executing this command:

$ cp -r iui-0.31/iui /var/www/iui

If you are a Mac user, your target directory will be /Library/WebServer/Documents/iui.

There's more...

Inside the samples subfolder, you'll find some files showing the capabilities of this framework, including HTML and PHP files. Some examples need a web server with PHP support, but you can test the others using a WebKit-based browser such as Safari or Google Chrome. Open index.html in a web browser and use it as your starting point.

If you prefer to use the latest development version from version control, you'll need to install a Mercurial client. Most GNU/Linux distributions, such as Fedora, Debian, and Ubuntu, include binary packages ready to install. Usually, the name of the binary package is mercurial. The following command will install the client on Ubuntu Linux:

$ sudo apt-get install mercurial

Mercurial is an open source project and offers binary files ready to install for Mac OS X and Windows systems.
If you're using one of these, go to the following page and download the specific file for your operating system and version: http://mercurial.selenic.com/downloads/

After downloading, you can install the client using the regular process for your operating system. Mac users will find a ZIP file containing a binary package. For Windows, the distributed file is an MSI (Microsoft Installer) file, ready for self-installation after clicking on it. Although the client for this version control system was developed for the command line, we can find some GUI tools, such as TortoiseHg for Windows. These tools are intuitive and user-friendly, allowing interactive work between the user and the source files hosted in the version control system. TortoiseHg can be downloaded from the same web page as the Mercurial client.

Finally, we'll download the development version of the iUI framework by executing the following command:

$ hg clone https://iui.googlecode.com/hg/ iui

The new iui folder includes all the files of the iUI framework. We should copy this folder to our DocumentRoot. If you want to know more about this framework, point your browser at the official project wiki: http://code.google.com/p/iui/w/list

Also, taking a look at the complete code of the project may be interesting for advanced developers, or just for people wanting to learn more about the internal details: http://code.google.com/p/iui/source/browse

Installing the UiUIKit framework

UiUIKit is the short name of the Universal iPhone UI Kit framework. The development of this framework is carried out through an open source project hosted on Google Code and is distributed under the GNU Public License v3. Let's see how to install it on different operating systems.

Getting ready

As the main project file is distributed as a ZIP file, we'll need a tool for decompressing these kinds of files. Most modern operating systems include tools for this process. As seen in the previous recipe, we can use the wget or curl programs for downloading the files. If you are planning to read the source code, or you'd like to use the current development version of the framework, you'll need a Subversion client, as the UiUIKit project works with this open source version control system.

How to do it...

Open your web browser and type the following URL: http://code.google.com/p/iphone-universal/downloads/list

After the page loads, click on the link for the latest version from the main list, for instance, the link called UiUIKit-2.1.zip. The next page will show you a different link for this file, which represents version 2.1 of the UiUIKit framework. Click on the link and the file will start downloading immediately. Mac users will see the Safari browser show a window with the content of the compressed file, which is a folder called UiUIKit, stored in the default folder for downloads.

Command-line fans can use these simple commands from their favorite terminal tool:

$ cd ~
$ curl -O http://iphone-universal.googlecode.com/files/UiUIKit-2.1.zip

After downloading, don't forget to decompress the file in your web-specific directory. The commands given next execute this action on Linux and Mac OS X systems:

$ cd /var/www/
$ unzip ~/UiUIKit-2.1.zip

How it works...

The main folder of the UiUIKit framework contains two subfolders, called images and stylesheets. The first one includes many images used to get a native look for web applications on the iPhone. The other one offers a CSS file called iphone.css.
We only need the images subfolder with its graphic files and the CSS file. In order to use this framework in our projects, our HTML files need access to the images and the CSS file of the framework. These files should be in a folder with permissions for the web server. For example, we'll have a directory structure like the following for our new web application for iPhone:

myapp/
    index.html
    images/
        actionButtons.png
        apple-touch-icon.png
        backButton.png
        toolButton.png
        whiteButton.png
    first.html
    second.html
    stylesheets/
        iphone.css

Remember that this framework doesn't include HTML files; we only need a bunch of graphic files and one stylesheet file. The HTML files shown in the previous example are our own files, created for the web application. We'll also find a lot of examples in different HTML files located in the root directory, outside the mentioned subfolders. These files are not required for development, but they can be very useful to show how to use some features and functionalities.

There's more...

For initial contact with the capabilities of the framework, it would be interesting to take a look at the examples included in the main directory of the framework. We can load index.html in our browser. This file is an index to the different examples and offers a native-style interface for the iPhone. Safari could be used, but it is better to access it from a real iPhone device.

Subversion is a well-proven version control system used by many developers, companies, and, of course, open source projects. UiUIKit is an example of a project using this popular version control system. So, to access the latest development version, we'll need a client to download it. Popular Linux distributions, including Ubuntu and Debian, have binary packages ready to install. For instance, the following command is enough to install it on Ubuntu Linux:

$ sudo apt-get install subversion

The latest versions of Mac OS X, including Leopard and Snow Leopard, include a Subversion client ready to use. For Windows, you can download Slik SVN, available for 32-bit and 64-bit platforms; installation programs can be downloaded from http://www.sliksvn.com/en/download.

When you are sure that your client is running, you can execute it to get the latest development version of the UiUIKit framework. Mac and Linux users will execute the following command:

$ svn checkout http://iphone-universal.googlecode.com/svn/trunk/ UiUIKit

All information related to the UiUIKit framework project can be found at: http://code.google.com/p/iphone-universal/

New iPad Features in iOS 6

Packt
20 Feb 2013
2 min read
(For more resources related to this topic, see here.)

New iPad features and native applications

We're going to identify some of the applications that come built into the new iPad. The applications designed by Apple for the iPad are Safari, Mail, Photos, FaceTime, Maps, Siri, Newsstand, Messages, Calendar, Reminders, Contacts, App Store, iTunes, Music, Videos, Notes, Camera, Photo Booth, Clock, Game Center, and Settings. Let's get started by exploring these applications.

Getting ready

Locate the Mail, Photos, App Store, iTunes, Music, and Settings apps. For details on each app, visit http://www.apple.com/ipad/built-in-apps/.

How to do it...

These apps form the base of the iPad and, by themselves, can satisfy most of our media and communication needs. We'll delve deeper into each of the following apps in the upcoming recipes:

Mail
Photos
App Store
iTunes and Music
Settings

How it works...

The applications can work together to provide a unifying experience. From the Photos app, we are able to share photos via the Mail application. Purchasing music in iTunes will allow us playback in the Music app. Ubiquity is what makes this device and its apps so useful.

Summary

This article gave us a brief overview of the applications that illustrate the new iPad's features.

Resources for Article:

Further resources on this subject:
Build iPhone, Android and iPad Applications using jQTouch [Article]
Getting Started on UDK with iOS [Article]
Interface Designing for Games in iOS [Article]

Add a Twitter Sign In To Your iOS App with TwitterKit

Doron Katz
15 Mar 2015
5 min read
What is TwitterKit & Digits?

In this post we take a look at Twitter's new sign-in API, TwitterKit, and Digits, bundled as part of its new Fabric suite of tools announced by Twitter earlier this year, and we provide two quick snippets on how to integrate Twitter's sign-in mechanism into your iOS app.

Facebook, and to a lesser extent Google, have for a while dominated the single sign-on paradigm, through their SDKs or the Accounts.framework on iOS, to encourage developers to provide a consolidated form of login for their users. Twitter has decided to finally get on the bandwagon and work toward improving its brand through increasing sign-on participation, providing a concise way for users to log in to their favorite apps without needing to remember individual passwords.

By providing a Login via Twitter button, developers gain the user's Twitter identity, and subsequently their associated tweets and associations. That is, once the Twitter account is identified, the app can engage followers of that account (friends), or access the user's history of tweets, to data-mine for a specific keyword or hashtag. In addition to offering single sign-on, Twitter is also offering Digits, the ability for users to sign on anonymously using one's phone number, synonymous with Facebook's new Anonymous Login API.

The benefits of Digits

The rationale behind Digits is to provide users with a choice. One option is to trust the app or website and provide their Twitter identification in order to log in. Another option, for the more hesitant ones wanting to protect their social graph history, is to provide only a unique number, which happens to be a mobile number, as a means of identification and authentication. Another benefit for users is that logging in is dead simple: rather than having to go through a deterring form of identification questions, you just ask them for their number, from which they will get an authentication confirmation SMS, allowing them to log in.

With a brief introduction to TwitterKit and Digits, let's show you how simple it is to implement each one.

Logging in with TwitterKit

Twitter wanted to make implementing its authentication mechanism a simpler and more attractive process for developers, which they did. By using the SDK as part of Twitter's Fabric suite, you will already get your Twitter app set up and ready, registered for use with the company's SDK. TwitterKit aims to leverage the existing Twitter account on iOS, using the Accounts.framework, which is the preferred and most rudimentary approach, with a fallback to using the OAuth mechanism.

The easiest way to implement Twitter authentication is through the generated button, TWTRLogInButton, as we will demonstrate using iOS's Swift language:

let authenticationButton = TWTRLogInButton(logInCompletion: {
    (session, error) in
    if (session != nil) {
        // We signed in; the details are stored in the session object.
    } else {
        // We got an error, accessible from the error object.
    }
})

It's quite simple, leaving you with a TWTRLogInButton button subclass that you can add to your view hierarchy and have users interact with.

Logging in with Digits

Having created a login button using TwitterKit, we will now create the same feature using Digits.
Logging in with Digits

Having created a login button using TwitterKit, we will now create the same feature using Digits. The simplicity of implementation is maintained with Digits; the simplest process is once again to create a pre-configured button, DGTAuthenticateButton:

    let authenticationButton = DGTAuthenticateButton(authenticationCompletion: { (session, error) in
        if (session != nil) {
            // We signed in; the details are stored in the session object.
        } else {
            // We got an error, accessible from the error object.
        }
    })

Summary

Implementing TwitterKit and Digits is straightforward in iOS, and each serves a different intention. Whereas TwitterKit gives you full access to the authenticated user's social history, Digits allows for a more abbreviated, privacy-protected approach to authentication. If at some stage the user decides to trust the app and feels comfortable granting full access to her or his social history, you can cater to that later in the app's life cycle. The complete iOS reference for TwitterKit and Digits can be found in the official Fabric documentation.

The popularity and uptake of TwitterKit remains to be seen, but with this extra option alongside Facebook and Google+ login, users can pick their most trusted social media tool as their choice of authentication. Providing an anonymous mode of login also falls in line with a more privacy-conscious world, and Digits provides both a seamless way of implementing authentication and an impressively straightforward way for users to authenticate using their phone number.

We have briefly demonstrated how to interact with Twitter's SDK using iOS and Swift, but there is also an Android SDK version, with a web version in the pipeline very soon, according to Twitter. This is certainly worth exploring, along with the rest of the tools offered in the Fabric suite, including analytics and beta-distribution tools, and more.

About the author

Doron Katz is an established mobile project manager, architect, and developer, and a pupil of the methodology of Agile project management, such as applying Kanban principles. Doron also believes in Behaviour-Driven Development (BDD) — anticipating user interaction prior to design, that is. Doron is also heavily involved in various technical user groups, such as CocoaHeads Sydney and the Adobe User Group.
Introducing TabLayout

Packt
29 Oct 2015
14 min read
In this article by Antonio Pachón, author of the book Mastering Android Application Development, we take a look at the TabLayout design library and the different activities you can do with it.

The TabLayout design library allows us to have fixed or scrollable tabs with text, icons, or a customized view. You will remember from the first instance of customizing tabs in this book that it isn't very easy to do, and that to change from scrolling to fixed tabs we need different implementations.

(For more resources related to this topic, see here.)

We now want to change the color and design of the tabs to be fixed; for this, we first need to go to activity_main.xml and add TabLayout, removing the previous PagerTabStrip. Our view will look as follows:

    <?xml version="1.0" encoding="utf-8"?>
    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:orientation="vertical">

        <android.support.design.widget.TabLayout
            android:id="@+id/tab_layout"
            android:layout_width="match_parent"
            android:layout_height="50dp"/>

        <android.support.v4.view.ViewPager
            android:id="@+id/pager"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"/>

    </LinearLayout>

When we have this, we need to add tabs to TabLayout. There are two ways to do this; the first is to create the tabs manually, as follows:

    tabLayout.addTab(tabLayout.newTab().setText("Tab 1"));

The second way, which is the one we'll use, is to set the view pager on TabLayout. In our example, MainActivity.java should look as follows:

    public class MainActivity extends ActionBarActivity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);

            MyPagerAdapter adapter = new MyPagerAdapter(getSupportFragmentManager());
            ViewPager viewPager = (ViewPager) findViewById(R.id.pager);
            viewPager.setAdapter(adapter);

            TabLayout tabLayout = (TabLayout) findViewById(R.id.tab_layout);
            tabLayout.setupWithViewPager(viewPager);
        }

        @Override
        protected void attachBaseContext(Context newBase) {
            super.attachBaseContext(CalligraphyContextWrapper.wrap(newBase));
        }
    }

If we don't specify any color, TabLayout will use the default color from the theme, and the position of the tabs will be fixed. Our new tab bar will look as follows:

Toolbar, action bar, and app bar

Before continuing to add motion and animation to our app, we need to clarify the concepts of the toolbar, the action bar, the app bar, and AppBarLayout, which might cause a bit of confusion.

The action bar and app bar are the same component; app bar is just a new name acquired in material design. This is the opaque bar fixed at the top of our activity that usually shows the title of the app, navigation options, and the different actions. The icon is or isn't displayed depending on the theme. Since Android 3.0, the Holo theme or one of its descendants is used by default for the action bar.

Moving on to the next concept, the toolbar: introduced in API 21, Android Lollipop, it is a generalization of the action bar that doesn't need to be fixed at the top of the activity. We can specify whether a toolbar is to act as the activity's action bar with the setActionBar() method. This means that a toolbar can act as an action bar depending on what we want.
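For instance — a minimal sketch, assuming an AppCompatActivity and a Toolbar with the ID toolbar already present in the layout; setSupportActionBar() is the AppCompat counterpart of setActionBar() — promoting a toolbar to act as the action bar looks like this:

    // Assumes this activity extends AppCompatActivity and the layout contains
    // an android.support.v7.widget.Toolbar with the id "toolbar".
    Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
    setSupportActionBar(toolbar); // the toolbar now acts as this activity's action bar
    getSupportActionBar().setTitle("Job offers");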
If we create a toolbar and set it as an action bar, we must use a theme with the .NoActionBar option to avoid having a duplicated action bar. Otherwise, we would have the one that comes by default in the theme along with the toolbar that we have created as the action bar.

A new element, called AppBarLayout, has been introduced in the design support library; it is a LinearLayout intended to contain the toolbar in order to display animations based on scrolling events. We can specify how its children behave while scrolling with the app:layout_scrollFlags attribute. AppBarLayout is intended to be contained in CoordinatorLayout — a component also introduced in the design support library — which we describe in the following section.

Adding motion with CoordinatorLayout

CoordinatorLayout allows us to add motion to our app, connecting touch events and gestures with views. We can, for instance, coordinate a scroll movement with the collapsing animation of a view. These gestures or touch events are handled by the CoordinatorLayout.Behavior class; AppBarLayout already ships with its own behavior. If we wanted to use this kind of motion with a custom view, we would have to create the behavior ourselves.

CoordinatorLayout can be implemented at the top level of our app, so we can combine it with the application bar or any element inside our activity or fragment. It can also be implemented as a container that interacts with its child views.

Continuing with our app, we will show a full view of a job offer when we click on a card. This will be displayed in a new activity. The activity will contain a toolbar showing the title of the job offer and the logo of the company. If the description is long, we will need to scroll down to read it, and at the same time we want to collapse the logo at the top, as it is no longer relevant. In the same way, while scrolling back up, we want to expand it again.

To control the collapsing of the toolbar, we will need CollapsingToolbarLayout. The description will be contained in NestedScrollView, a scroll view from the Android v4 support library. The reason to use NestedScrollView is that this class can propagate scroll events to the toolbar, while ScrollView can't. Ensure that compile 'com.android.support:support-v4:22.2.0' is up to date.

So, for now, we can just place an image from the drawable folder to implement the CoordinatorLayout functionality.
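For reference, the relevant build.gradle dependencies would look something like the following sketch; the exact version numbers are assumptions matching the 22.2.x generation of the support libraries, so align them with your project:

    dependencies {
        // AppCompat theme and Toolbar support
        compile 'com.android.support:appcompat-v7:22.2.0'
        // TabLayout, AppBarLayout, CoordinatorLayout, CollapsingToolbarLayout
        compile 'com.android.support:design:22.2.0'
        // NestedScrollView
        compile 'com.android.support:support-v4:22.2.0'
    }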
Our offer detail view, activity_offer_detail.xml, will look as follows:

    <android.support.design.widget.CoordinatorLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <android.support.design.widget.AppBarLayout
            android:id="@+id/appbar"
            android:layout_width="match_parent"
            android:layout_height="256dp">

            <android.support.design.widget.CollapsingToolbarLayout
                android:id="@+id/collapsingtoolbar"
                android:layout_width="match_parent"
                android:layout_height="match_parent"
                app:layout_scrollFlags="scroll|exitUntilCollapsed">

                <ImageView
                    android:id="@+id/logo"
                    android:layout_width="match_parent"
                    android:layout_height="match_parent"
                    android:scaleType="centerInside"
                    android:src="@drawable/googlelogo"
                    app:layout_collapseMode="parallax" />

                <android.support.v7.widget.Toolbar
                    android:id="@+id/toolbar"
                    android:layout_width="match_parent"
                    android:layout_height="?attr/actionBarSize"
                    app:layout_collapseMode="pin" />

            </android.support.design.widget.CollapsingToolbarLayout>
        </android.support.design.widget.AppBarLayout>

        <android.support.v4.widget.NestedScrollView
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:paddingLeft="20dp"
            android:paddingRight="20dp"
            app:layout_behavior="@string/appbar_scrolling_view_behavior">

            <TextView
                android:id="@+id/rowJobOfferDesc"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:text="Long scrollable text"
                android:textColor="#999"
                android:textSize="18sp" />

        </android.support.v4.widget.NestedScrollView>

    </android.support.design.widget.CoordinatorLayout>

As you can see here, CollapsingToolbarLayout reacts to the scroll flags and tells its children how to react. The toolbar is pinned at the top, always remaining visible (app:layout_collapseMode="pin"); the logo, however, will disappear with a parallax effect (app:layout_collapseMode="parallax"). Don't forget to add the app:layout_behavior="@string/appbar_scrolling_view_behavior" attribute to NestedScrollView, and clean the project to generate this string resource internally. If you have problems, you can set the string directly by typing "android.support.design.widget.AppBarLayout$ScrollingViewBehavior", and this will help you identify the issue.

When we click on a job offer, we need to navigate to OfferDetailActivity and send information about the offer. As you probably know from the beginner level, we use intents to send information between activities. In these intents we can put data or serialized objects; to be able to send an object of the JobOffer type, we have to make the JobOffer class implement Serializable.
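The model itself isn't reprinted here, so as a hypothetical sketch — the fields and constructor are assumptions; only the Serializable marker and the getTitle() and getDescription() getters are dictated by the adapter code that follows — JobOffer would end up looking something like this:

    import java.io.Serializable;

    // Hypothetical model class: only Serializable and the two getters
    // used by the adapter below are required by the article.
    public class JobOffer implements Serializable {

        private String title;
        private String description;

        public JobOffer(String title, String description) {
            this.title = title;
            this.description = description;
        }

        public String getTitle() { return title; }

        public String getDescription() { return description; }
    }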
Once we have done this, we can detect the click on the element in JobOffersAdapter:

    public class MyViewHolder extends RecyclerView.ViewHolder
            implements View.OnClickListener, View.OnLongClickListener {

        public TextView textViewName;
        public TextView textViewDescription;

        public MyViewHolder(View v) {
            super(v);
            textViewName = (TextView) v.findViewById(R.id.rowJobOfferTitle);
            textViewDescription = (TextView) v.findViewById(R.id.rowJobOfferDesc);
            v.setOnClickListener(this);
            v.setOnLongClickListener(this);
        }

        @Override
        public void onClick(View view) {
            Intent intent = new Intent(view.getContext(), OfferDetailActivity.class);
            JobOffer selectedJobOffer = mOfferList.get(getPosition());
            intent.putExtra("job_title", selectedJobOffer.getTitle());
            intent.putExtra("job_description", selectedJobOffer.getDescription());
            view.getContext().startActivity(intent);
        }

Once we start the activity, we need to retrieve the title and set it on the toolbar. Add a long text to the TextView description inside NestedScrollView to test with dummy data first; we want to be able to scroll to test the animation:

    public class OfferDetailActivity extends AppCompatActivity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_offer_detail);

            String job_title = getIntent().getStringExtra("job_title");
            CollapsingToolbarLayout collapsingToolbar =
                    (CollapsingToolbarLayout) findViewById(R.id.collapsingtoolbar);
            collapsingToolbar.setTitle(job_title);
        }
    }
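Once the dummy-data test works, wiring the real description through is symmetrical. A small sketch, reusing the job_description extra and the rowJobOfferDesc ID from the layout above (appended to the end of onCreate):

    // Hypothetical continuation of onCreate: show the offer's real description.
    String jobDescription = getIntent().getStringExtra("job_description");
    TextView descriptionView = (TextView) findViewById(R.id.rowJobOfferDesc);
    descriptionView.setText(jobDescription);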
Finally, ensure that your styles.xml file in the values folder uses a theme with no action bar by default:

    <resources>
        <!-- Base application theme. -->
        <style name="AppTheme" parent="Theme.AppCompat.Light.NoActionBar">
            <!-- Customize your theme here. -->
        </style>
    </resources>

We are now ready to test the behavior of the app. Launch it and scroll down. Take a look at how the image collapses and the toolbar is pinned at the top. It will look similar to this:

We are missing one attribute needed for a nice animation effect. Just collapsing the image doesn't collapse it enough; we need to make the image disappear in a smooth way and be replaced by the background color of the toolbar. Add the contentScrim attribute to CollapsingToolbarLayout; this fades the image into the primary color of the theme as it collapses, which is the same color currently used by the toolbar:

    <android.support.design.widget.CollapsingToolbarLayout
        android:id="@+id/collapsingtoolbar"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:layout_scrollFlags="scroll|exitUntilCollapsed"
        app:contentScrim="?attr/colorPrimary">

With this attribute, the app looks better when collapsed and expanded:

We just need to style the app a bit more by changing colors and adding padding to the image; we can change the colors of the theme in styles.xml through the following code:

    <resources>
        <!-- Base application theme. -->
        <style name="AppTheme" parent="Theme.AppCompat.Light.NoActionBar">
            <item name="colorPrimary">#8bc34a</item>
            <item name="colorPrimaryDark">#33691e</item>
            <item name="colorAccent">#FF4081</item>
        </style>
    </resources>

Resize AppBarLayout to 190dp and add 50dp of paddingLeft and paddingRight to the ImageView to achieve the following result:

Summary

In this article we learned what the TabLayout design library is and the different activities you can do with it.

Resources for Article:

Further resources on this subject:
Using Resources [article]
Prerequisites for a Map Application [article]
Remote Desktop to Your Pi from Everywhere [article]

Physics with UIKit Dynamics

Packt
14 May 2014
8 min read
(For more resources related to this topic, see here.)

Motion and physics in UIKit

With the introduction of iOS 7, Apple completely removed the skeuomorphic design that had been used since the introduction of the iPhone and iOS. In its place is a new and refreshing flat design that features muted gradients and minimal interface elements. Apple has strongly encouraged developers to move away from skeuomorphic, real-world-based design in favor of these flat designs. Although we are guided away from a real-world look, Apple also strongly encourages that your user interface have a real-world feel. Some may think this is a contradiction; however, the goal is to give users a deeper connection to the user interface. UI elements that respond to touch, gestures, and changes in orientation are examples of how to apply this new design paradigm. To assist in this new design approach, Apple has introduced two very nifty APIs: UIKit Dynamics and Motion Effects.

UIKit Dynamics

To put it simply, iOS 7 has a fully featured physics engine built into UIKit. You can manipulate specific properties to provide a more real-world feel to your interface. This includes gravity, springs, elasticity, bounce, and force, to name a few. Each interface item contains its own properties, and the dynamics engine will abide by these properties.

Motion effects

One of the coolest features of iOS 7 on our devices is the parallax effect found on the home screen. Tilting the device in any direction pans the background image to emphasize depth. Using motion effects, we can monitor the data supplied by the device's accelerometer to adjust our interface based on movement and orientation. By combining these two features, you can create great-looking interfaces with a realistic feel that brings them to life.
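The motion-effects half of this pairing is compact in practice. As a minimal, hypothetical sketch — the view name and offsets here are placeholders, not taken from this article's project — a horizontal parallax tilt can be attached to any view like so:

    // Pans someView up to 20 points left or right as the device tilts.
    UIInterpolatingMotionEffect *xTilt = [[UIInterpolatingMotionEffect alloc]
        initWithKeyPath:@"center.x"
        type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
    xTilt.minimumRelativeValue = @(-20);
    xTilt.maximumRelativeValue = @(20);
    [someView addMotionEffect:xTilt];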
To demonstrate UIKit Dynamics, we will be adding some code to our FoodDetailViewController.m file to create some nice effects and animations.

Adding gravity

Open FoodDetailViewController.m and add the following instance variables to the view controller:

    UIDynamicAnimator *animator;
    UIGravityBehavior *gravity;

Scroll to viewDidLoad and add the following code to the bottom of the method:

    animator = [[UIDynamicAnimator alloc] initWithReferenceView:self.view];
    gravity = [[UIGravityBehavior alloc] initWithItems:@[self.foodImageView]];
    [animator addBehavior:gravity];

Run the application, open the My Foods view, select a food item from the table view, and watch what happens. The food image accelerates towards the bottom of the screen until it eventually falls off the screen, as shown in the following set of screenshots:

Let's go over the code, specifically the two new classes that were just introduced, UIDynamicAnimator and UIGravityBehavior.

UIDynamicAnimator

This is the core component of UIKit Dynamics. It is safe to say that the dynamic animator is the physics engine itself, wrapped in a convenient and easy-to-use class. The animator does nothing on its own; instead, it keeps track of the behaviors assigned to it. Each behavior interacts inside this physics engine.

UIGravityBehavior

Behaviors are the core compositions of UIKit Dynamics animation. Each behavior defines an individual response to the physics environment. This particular behavior mimics the effect of gravity by applying force. Each behavior is associated with a view (or views) when created. Because you explicitly define this association, you control which views will perform the behavior.

Behavior properties

Almost all behaviors have multiple properties that can be adjusted to achieve the desired effect. A good example is the gravity behavior: we can adjust its angle and magnitude. Add the following code before adding the behavior to the animator:

    gravity.magnitude = 0.1f;

Run the application and test it to see what happens. The picture view starts to fall; however, this time it falls at a much slower rate. Replace the preceding line with the following one:

    gravity.magnitude = 10.0f;

Run the application, and this time you will notice that the image falls much faster. Feel free to play with these properties and get a feel for each value.

Creating boundaries

When dealing with gravity, UIKit Dynamics does not conform to the boundaries of the screen. Although it is not visible, the food image continues to fall after it has passed the edge of the screen, and it will continue to fall unless we set boundaries to contain the image view. At the top of the file, create another instance variable:

    UICollisionBehavior *collision;

Now, in our viewDidLoad method, add the following code below our gravity code:

    collision = [[UICollisionBehavior alloc] initWithItems:@[self.foodImageView]];
    collision.translatesReferenceBoundsIntoBoundary = YES;
    [animator addBehavior:collision];

Here we create an instance of a new behavior class, UICollisionBehavior. Just like our gravity behavior, we associate this behavior with our food image view. Rather than explicitly defining the coordinates for the boundary, we use the convenient translatesReferenceBoundsIntoBoundary property on our collision behavior. By setting this property to YES, the boundary is defined by the bounds of the reference view that we set when allocating our dynamic animator. Because the reference view is self.view, the boundary is now the visible space of our view. Run the application and watch how the image falls but stops once it reaches the bottom of the screen, as shown in the following screenshot:

Collisions

With our image view responding to gravity and our screen bounds, we can start detecting collisions. You may have noticed that the falling image view passes right through the two labels below it. This is because UIKit Dynamics only responds to UIView elements that have been assigned behaviors. Each behavior can be assigned to multiple objects, and each object can have multiple behaviors. Because our labels have no behaviors associated with them, the UIKit Dynamics physics engine simply ignores them.

Let's make the food image view collide with the date label. To do this, we simply add the label to the collision behavior's allocation call. Here is what the new code looks like:

    collision = [[UICollisionBehavior alloc] initWithItems:@[self.foodImageView, self.foodDateLabel]];

As you can see, all we have done is add self.foodDateLabel to the initWithItems: array parameter. As mentioned before, any single behavior can be associated with multiple items. Run your code and see what happens. When the image falls, it hits the date label but continues to fall, pushing the date label with it. Because we didn't associate the gravity behavior with the label, the label does not fall immediately. Although it does not respond to gravity, the label is still moved, because it is a physics object after all.
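As an aside, collisions can also be observed in code rather than just rendered on screen. A hypothetical sketch, assuming the view controller adopts UICollisionBehaviorDelegate and has been assigned as collision.collisionDelegate:

    // Called by UIKit Dynamics when an item touches a boundary.
    - (void)collisionBehavior:(UICollisionBehavior *)behavior
          beganContactForItem:(id<UIDynamicItem>)item
       withBoundaryIdentifier:(id<NSCopying>)identifier
                      atPoint:(CGPoint)p
    {
        NSLog(@"Item hit boundary %@ at %@", identifier, NSStringFromCGPoint(p));
    }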
This approach is not ideal, so let's use another awesome feature of UIKit Dynamics: invisible boundaries.

Creating invisible boundaries

We are going to take a slightly different approach to this problem. Our label is only a point of reference for where we want to add a boundary that will stop our food image view. Because of this, the label does not need to be associated with any UIKit Dynamics behaviors. Remove self.foodDateLabel from the following code:

    collision = [[UICollisionBehavior alloc] initWithItems:@[self.foodImageView, self.foodDateLabel]];

Instead, add the following code to the bottom of viewDidLoad, but before we add our collision behavior to the animator:

    // Add a boundary to the top edge
    CGPoint topEdge = CGPointMake(self.foodDateLabel.frame.origin.x + self.foodDateLabel.frame.size.width,
                                  self.foodDateLabel.frame.origin.y);
    [collision addBoundaryWithIdentifier:@"barrier"
                               fromPoint:self.foodDateLabel.frame.origin
                                 toPoint:topEdge];

Here we add a boundary to the collision behavior and pass some parameters. First we define an identifier, which we will use later; then we pass the food date label's origin as the fromPoint property. The toPoint property is set to the CGPoint we created using the food date label's frame. Go ahead and run the application, and you will see that the food image now stops at the invisible boundary we defined. The label is still visible to the user, but the dynamic animator ignores it. Instead, the animator sees the barrier we defined and responds accordingly, even though the barrier is invisible to the user. Here is a side-by-side comparison of the before and after:

Dynamic items

When using UIKit Dynamics, it is important to understand what UIKit Dynamics items are. Rather than referencing dynamics as views, they are referenced as items, which adhere to the UIDynamicItem protocol. This protocol defines the center, transform, and bounds of any object that adopts it. UIView is the most common class that adheres to the UIDynamicItem protocol. Another example of a class that conforms to this protocol is the UICollectionViewLayoutAttributes class.

Summary

In this article, we covered some of the basics of how UIKit Dynamics manages your application's behaviors, enabling us to create some really unique interface effects.

Resources for Article:

Further resources on this subject:
Linking OpenCV to an iOS project [article]
Unity iOS Essentials: Flyby Background [article]
New iPad Features in iOS 6 [article]