
How-To Tutorials - Mobile

213 Articles

Understanding Passbook

Packt
15 Jul 2013
5 min read
(For more resources related to this topic, see here.)

Getting ready

With iOS 6, Apple introduced the Passbook app as a central digital wallet for all the store cards, coupons, boarding passes, and event tickets that have become a popular feature of apps. A company wishing to take advantage of this digital wallet and the extra functionality it provides can use Apple's developer platform to create a Pass for its users.

How to do it...

To understand Passbook, we need to see a Pass in action. Download the example Pass from http://passkit.pro/example-generic-pkpass.

If you open this link within Mobile Safari on an iPhone or iPod Touch running iOS 6, you will be presented with the Pass and the option to add it to your Passbook. Alternatively, you can download the Pass on a Mac or PC, e-mail it to yourself, and then open the e-mail within the Mail app on an iPhone or iPod Touch. Tapping the Pass attachment link will present the Pass.

If you choose to add the Pass to your Passbook app, the displayed Pass will disappear, having been filed away within your Passbook. Now click on the home button to return to the home screen and launch the Passbook app. In the app you will now see the Pass that was just added. It contains information specified by the app creator and can be presented when interacting with the company providing the service. Additional information can be placed on the back of the Pass. Tap the i button in the top-right corner of the Pass to reveal this information.

How it works…

The following diagram describes how Passes are delivered to a Passbook, and how they can be updated:

The process of creating a Pass involves cryptographically signing the Pass using a certificate and key generated from your iOS developer account. For this reason, the generation of the Pass needs to take place on a server, and then be delivered to Passbook either via your own app, as an e-mail attachment, or by embedding it in a website.

It's important to note that Apple does not provide any system for Pass providers to authenticate, validate, or invalidate Passes. The Pass can contain barcode information, but it is up to the Pass provider to supply the infrastructure for reading and processing these barcodes.

Instead of just sitting in the Passbook app waiting to be used, a Pass can contain location and time triggers that proactively present the Pass to the user, serving both as a reminder and as convenient access. For example, an event Pass could be set to appear 15 minutes before the start time, when a user likely wants to present it to an attendant. Alternatively, a coupon Pass could be presented as a user approaches the local store where the coupon can be redeemed.

Passes that have been added to Passbook can also be updated dynamically. For example, if the Pass is for a store card, a change to the card balance may require an update to the Pass. In the case of an airline ticket Pass, a departure gate change should trigger a Pass update. When a Pass needs to be updated, your server sends a push notification to the Passbook app on the user's device; this push notification is not displayed to the user. Upon receiving it, the Passbook app makes a request to your server for the updated Pass information. Your server then responds to the relevant request, providing the updated information in the expected format. When the Passbook app on the user's device receives the updated information, it silently updates the Pass. The next time the user looks at the Pass contained in the Passbook app, the updated information is displayed.
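Under the hood, a Pass is a signed and compressed .pkpass bundle whose core is a pass.json file describing the Pass's identity, style, and fields. The following is a minimal sketch of such a file, based on Apple's PassKit documentation; every identifier and value shown here is a placeholder, not part of the example Pass above:

{
  "formatVersion": 1,
  "passTypeIdentifier": "pass.com.example.coupon",
  "serialNumber": "EXAMPLE-0001",
  "teamIdentifier": "A1B2C3D4E5",
  "organizationName": "Example Store",
  "description": "Example discount coupon",
  "barcode": {
    "message": "EXAMPLE-0001",
    "format": "PKBarcodeFormatQR",
    "messageEncoding": "iso-8859-1"
  },
  "coupon": {
    "primaryFields": [
      { "key": "offer", "label": "Offer", "value": "20% off" }
    ]
  }
}

The top-level style key (coupon here; generic, boardingPass, eventTicket, and storeCard are the alternatives) determines how Passbook lays out the fields.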
There's more…

Support for Passbook is also built into OS X Mountain Lion (10.8.2). Pass files with the .pkpass file extension will open in a preview window. Clicking on the Add to Passbook button will place the Pass in the Passbook associated with the iCloud account set up in OS X System Preferences. The OS X Mail app and Safari also support embedded Passes.

When building a Pass, you can specify a relevant time and up to 10 relevant locations that will trigger a message to be displayed on the lock screen. The message looks similar to a push notification; however, a Pass notification is less intrusive. When it becomes relevant to display, it doesn't vibrate the iPhone and it doesn't wake up the screen; the notification only becomes visible when the phone wakes from sleep. The option to specify relevant times and locations, and how far from a location the notification is triggered, is determined by the Pass type, as we will see later.
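These relevance triggers are also declared in pass.json. A minimal sketch, again with placeholder values (the exact keys come from Apple's PassKit documentation):

"relevantDate": "2013-07-15T19:30:00+01:00",
"locations": [
  {
    "latitude": 51.5014,
    "longitude": -0.1419,
    "relevantText": "Your coupon can be redeemed at this store."
  }
]

relevantText is the message shown on the lock screen when the device is near the given coordinates.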
Apps using Passbook

Some of the apps in the App Store using Passbook are as follows:

Hotels.com: This uses Passbook for room reservation details. It can be downloaded from http://appstore.com/hotelscom/hotelscom.
Starbucks: This uses Passbook for a store card. It can be downloaded from http://appstore.com/starbuckscoffeecompany.
Ticketmaster: This uses Passbook for event tickets. It can be downloaded from http://appstore.com/ticketmaster/ticketmaster.
United Airlines: This uses Passbook for boarding passes. It can be downloaded from http://appstore.com/unitedairlines.

Summary

This article introduced you to Passbook. Apple's Passbook feature is a collection of technologies that come together to provide digital wallet functionality to the user. We looked at what Passbook consists of, from both the user and the Pass creator perspective.

Resources for Article:

Further resources on this subject:
Development of iPhone Applications [Article]
iPhone Applications Tune-Up: Design for Performance [Article]
New iPad Features in iOS 6 [Article]


Geolocation and Accelerometer APIs

Packt
25 Jan 2012
13 min read
(For more resources on iOS, see here.)

The iOS family makes use of many onboard sensors, including the three-axis accelerometer, digital compass, camera, microphone, and Global Positioning System (GPS). Their inclusion has created a world of opportunity for developers, and has resulted in a slew of innovative, creative, and fun apps that have contributed to the overwhelming success of the App Store.

Determining your current location

The iOS family of devices are location-aware, allowing your approximate geographic position to be determined. How this is achieved depends on the hardware present in the device. For example, the original iPhone, all models of the iPod touch, and Wi-Fi-only iPads use Wi-Fi network triangulation to provide location information. The remaining devices can more accurately calculate their position using an onboard GPS chip or cell-phone tower triangulation. The AIR SDK provides a layer of abstraction that allows you to extract location information in a hardware-independent manner, meaning you can access the information on any iOS device using the same code. This recipe will take you through the steps required to determine your current location.

Getting ready

An FLA has been provided as a starting point for this recipe. From Flash Professional, open chapter9\recipe1\recipe.fla from the code bundle, which can be downloaded from http://www.packtpub.com/support.

How to do it...

Perform the following steps to listen for and display geolocation data:

1. Create a document class and name it Main.

2. Import the following classes and add a member variable of type Geolocation:

package {
    import flash.display.MovieClip;
    import flash.events.GeolocationEvent;
    import flash.sensors.Geolocation;

    public class Main extends MovieClip {

        private var geo:Geolocation;

        public function Main() {
            // constructor code
        }

    }
}

3. Within the class' constructor, instantiate a Geolocation object and listen for updates from it:

public function Main() {
    if (Geolocation.isSupported) {
        geo = new Geolocation();
        geo.setRequestedUpdateInterval(1000);
        geo.addEventListener(GeolocationEvent.UPDATE, geoUpdated);
    }
}

4. Now, write an event handler that will obtain the updated geolocation data and populate the dynamic text fields with it:

private function geoUpdated(e:GeolocationEvent):void {
    latitudeField.text = e.latitude.toString();
    longitudeField.text = e.longitude.toString();
    altitudeField.text = e.altitude.toString();
    hAccuracyField.text = e.horizontalAccuracy.toString();
    vAccuracyField.text = e.verticalAccuracy.toString();
    timestampField.text = e.timestamp.toString();
}

5. Save the class file as Main.as within the same folder as the FLA. Move back to the FLA and save it too.

6. Publish and test the app on your device. When launched for the first time, a native iOS dialog will appear. Tap the OK button to grant your app access to the device's location data. Devices running iOS 4 and above will remember your choice, while devices running older versions of iOS will prompt you each time the app is launched.

The location data will be shown on screen and periodically updated. Take your device on the move and you will see changes in the data as your geographical location changes.

How it works...

AIR provides the Geolocation class in the flash.sensors package, allowing location data to be retrieved from your device. To access the data, create a Geolocation instance and listen for it dispatching GeolocationEvent.UPDATE events.
We did this within our document class' constructor, using the geo member variable to hold a reference to the object:

geo = new Geolocation();
geo.setRequestedUpdateInterval(1000);
geo.addEventListener(GeolocationEvent.UPDATE, geoUpdated);

The frequency with which location data is retrieved can be set by calling the Geolocation.setRequestedUpdateInterval() method. You can see this in the earlier code, where we requested an update interval of 1000 milliseconds. This only acts as a hint to the device, meaning the actual time between updates may be greater or smaller than your request. Omitting this call will result in the device using a default update interval, which can be anything ranging from milliseconds to seconds depending on the device's hardware capabilities.

Each UPDATE event dispatches a GeolocationEvent object, which contains properties describing your current location. Our geoUpdated() method handles this event by outputting several of the properties to the dynamic text fields sitting on the stage:

private function geoUpdated(e:GeolocationEvent):void {
    latitudeField.text = e.latitude.toString();
    longitudeField.text = e.longitude.toString();
    altitudeField.text = e.altitude.toString();
    hAccuracyField.text = e.horizontalAccuracy.toString();
    vAccuracyField.text = e.verticalAccuracy.toString();
    timestampField.text = e.timestamp.toString();
}

The following information was output:

- Latitude and longitude
- Altitude
- Horizontal and vertical accuracy
- Timestamp

The latitude and longitude positions are used to identify your geographical location. Your altitude is also obtained and is measured in meters above sea level. As you move with the device, these values will update to reflect your new location.

The accuracy of the location data is also shown and depends on the hardware capabilities of the device. Both the horizontal and vertical accuracy are measured in meters. Finally, a timestamp is associated with every GeolocationEvent object that is dispatched, allowing you to determine the actual time interval between each. The timestamp specifies the milliseconds that have passed since the app was launched.

Some older devices that do not include a GPS unit only dispatch UPDATE events occasionally. Initially, one or two UPDATE events are dispatched, with additional events only being dispatched when location information changes noticeably.

Also note the use of the static Geolocation.isSupported property within the constructor. Although this will currently return true for all iOS devices, it cannot be guaranteed for future devices. Checking for geolocation support is also advisable when writing cross-platform code. For more information, perform a search for flash.sensors.Geolocation and flash.events.GeolocationEvent within Adobe Community Help.

There's more...

The amount of information made available, and the accuracy of that information, depends on the capabilities of the device.

Accuracy

The accuracy of the location data depends on the method employed by the device to calculate your position. Typically, iOS devices with an onboard GPS chip will have an advantage over those that rely on Wi-Fi triangulation. For example, running this recipe's app on an iPhone 4, which contains a GPS unit, results in a horizontal accuracy of around 10 meters. The same app running on a third-generation iPod touch, relying on a Wi-Fi network, reports a horizontal accuracy of around 100 meters. Quite a difference!

Altitude support

The current altitude can only be obtained from GPS-enabled devices.
On devices without a GPS unit, the GeolocationEvent.verticalAccuracy property will return -1 and GeolocationEvent.altitude will return 0. A vertical accuracy of -1 indicates that altitude cannot be detected.

You should be aware of, and code for, these restrictions when developing apps that provide location-based services. Do not make assumptions about a device's capabilities. If your application relies on the presence of GPS hardware, it is possible to state this within your application descriptor file. Doing so will prevent users without the necessary hardware from downloading your app from the App Store.

Mapping your location

The most obvious use for geolocation data is mapping. Typically, an app will obtain a geographic location and display a map of its surrounding area. There are several ways to achieve this, but launching and passing location data to the device's native maps application is possibly the easiest solution. If you would prefer an ActionScript solution, there is the UMap ActionScript 3.0 API, which integrates with map data from a wide range of providers including Bing, Google, and Yahoo!. You can sign up and download the API from www.umapper.com.

Calculating distance between geolocations

When the geographic coordinates of two separate locations are known, it is possible to determine the distance between them. AIR does not provide an API for this, but an AS3 solution can be found on the Adobe Developer Connection website at http://cookbooks.adobe.com/index.cfm?event=showdetails&postId=5701. The UMap ActionScript 3.0 API can also be used to calculate distances. Refer to www.umapper.com.
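If you would rather compute the distance yourself, the usual approach is the haversine formula, which gives the great-circle distance between two latitude-longitude pairs. As a sketch (with φ for latitude and λ for longitude, both in radians, and R ≈ 6,371 km for the Earth's mean radius):

a = sin²(Δφ/2) + cos(φ₁) · cos(φ₂) · sin²(Δλ/2)
c = 2 · atan2(√a, √(1 − a))
d = R · c

For the short distances typical of location-based apps, the result is accurate to well within the sensor's own error margin.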
Geocoding

Mapping providers, such as Google and Yahoo!, provide geocoding and reverse-geocoding web services. Geocoding is the process of finding the latitude and longitude of an address, whereas reverse-geocoding converts a latitude-longitude pair into a readable address. You can make HTTP requests from your AIR for iOS application to any of these services. As an example, take a look at the Yahoo! PlaceFinder web service at http://developer.yahoo.com/geo/placefinder. Alternatively, the UMap ActionScript 3.0 API integrates with many of these services to provide geocoding functionality directly within your Flash projects. Refer to the uMapper website.

Gyroscope support

Another popular sensor is the gyroscope, which is found in more recent iOS devices. While the AIR SDK does not directly support gyroscope access, Adobe has made available a native extension for AIR 3.0 that provides a Gyroscope ActionScript class. A download link and usage examples can be found on the Adobe Developer Connection site at www.adobe.com/devnet/air/native-extensions-for-air/extensions/gyroscope.html.

Determining your speed and heading

The availability of an onboard GPS unit makes it possible to determine your speed and heading. In this recipe, we will write a simple app that uses the Geolocation class to obtain and use this information. In addition, we will add compass functionality by utilizing the user's current heading.

Getting ready

You will need a GPS-enabled iOS device. The iPhone has featured an onboard GPS unit since the release of the 3G. GPS hardware can also be found in all cellular network-enabled iPads.

From Flash Professional, open chapter9\recipe2\recipe.fla from the code bundle. Sitting on the stage are three dynamic text fields. The first two (speed1Field and speed2Field) will be used to display the current speed in meters per second and miles per hour respectively. We will write the device's current heading into the third, headingField. Also, a movie clip named compass has been positioned near the bottom of the stage, representing a compass with north, south, east, and west clearly marked on it. We will update the rotation of this clip in response to heading changes to ensure that it always points towards true north.

How to do it...

To obtain the device's speed and heading, carry out the following steps:

1. Create a document class and name it Main.

2. Add the necessary import statements, a constant, and a member variable of type Geolocation:

package {
    import flash.display.MovieClip;
    import flash.events.GeolocationEvent;
    import flash.sensors.Geolocation;

    public class Main extends MovieClip {

        private const CONVERSION_FACTOR:Number = 2.237;

        private var geo:Geolocation;

        public function Main() {
            // constructor code
        }

    }
}

3. Within the constructor, instantiate a Geolocation object and listen for updates:

public function Main() {
    if (Geolocation.isSupported) {
        geo = new Geolocation();
        geo.setRequestedUpdateInterval(50);
        geo.addEventListener(GeolocationEvent.UPDATE, geoUpdated);
    }
}

4. We will need an event listener for the Geolocation object's UPDATE event. This is where we will obtain and display the current speed and heading, and also update the compass movie clip to ensure it points towards true north. Add the following method:

private function geoUpdated(e:GeolocationEvent):void {
    var metersPerSecond:Number = e.speed;
    var milesPerHour:uint = getMilesPerHour(metersPerSecond);
    speed1Field.text = String(metersPerSecond);
    speed2Field.text = String(milesPerHour);

    var heading:Number = e.heading;
    compass.rotation = 360 - heading;
    headingField.text = String(heading);
}

5. Finally, add this support method to convert meters per second to miles per hour:

private function getMilesPerHour(metersPerSecond:Number):uint {
    return metersPerSecond * CONVERSION_FACTOR;
}

6. Save the class file as Main.as. Move back to the FLA and save it too.

7. Compile the FLA and deploy the IPA to your device. Launch the app. When prompted, grant your app access to the GPS unit.

Hold the device in front of you and start turning on the spot. The heading (degrees) field will update to show the direction you are facing. The compass movie clip will also update, showing you where true north is in relation to your current heading.

Take your device outside and start walking, or better still, start running. On average every 50 milliseconds you will see the top two text fields update and show your current speed, measured in both meters per second and miles per hour.

How it works...

In this recipe, we created a Geolocation object and listened for it dispatching UPDATE events. An update interval of 50 milliseconds was specified in an attempt to receive the speed and heading information frequently.

Both the speed and heading information are obtained from the GeolocationEvent object, which is dispatched on each UPDATE event. The event is captured and handled by our geoUpdated() handler, which displays the speed and heading information obtained from the GPS unit.

The current speed is measured in meters per second and is obtained by querying the GeolocationEvent.speed property. Our handler also converts the speed to miles per hour before displaying each value within the appropriate text field.
The following code does this:

var metersPerSecond:Number = e.speed;
var milesPerHour:uint = getMilesPerHour(metersPerSecond);
speed1Field.text = String(metersPerSecond);
speed2Field.text = String(milesPerHour);

The heading, which represents the direction of movement (with respect to true north) in degrees, is retrieved from the GeolocationEvent.heading property. The value is used to set the rotation property of the compass movie clip and is also written to the headingField text field:

var heading:Number = e.heading;
compass.rotation = 360 - heading;
headingField.text = String(heading);

The remaining method is getMilesPerHour(), which is used within geoUpdated() to convert the current speed from meters per second into miles per hour. Notice the use of the CONVERSION_FACTOR constant that was declared within your document class:

private function getMilesPerHour(metersPerSecond:Number):uint {
    return metersPerSecond * CONVERSION_FACTOR;
}

Although the speed and heading obtained from the GPS unit will suffice for most applications, the accuracy can vary across devices. Your surroundings can also have an effect; moving through streets with tall buildings or under tree coverage can impair the readings. You can find more information regarding flash.sensors.Geolocation and flash.events.GeolocationEvent within Adobe Community Help.

There's more...

The following information provides some additional detail.

Determining support

Your current speed and heading can only be determined by devices that possess a GPS receiver. Although you can install this recipe's app on any iOS device, you won't receive valid readings from any model of iPod touch, the original iPhone, or Wi-Fi-only iPads. Instead, the GeolocationEvent.speed property will return -1 and GeolocationEvent.heading will return NaN. If your application relies on the presence of GPS hardware, it is possible to state this within the application descriptor file. Doing so will prevent users without the necessary hardware from downloading your app from the App Store.

Simulating the GPS receiver

During the development lifecycle it is not feasible to continually test your app in a live environment. Instead, you will probably want to record live data from your device and reuse it during testing. There are various apps available that will log data from the sensors on your device. One such app is xSensor, which can be downloaded for free from iTunes or the App Store. Its sensor data log is limited to 5 KB, but this restriction can be lifted by purchasing xSensor Pro.

Preventing screen idle

Many of this article's apps don't require you to touch the screen that often. Therefore, you are likely to experience the backlight dimming or the screen locking while testing them. This can be inconvenient, and can be prevented by disabling screen locking.


Creating Grids, Panels, and other Widgets

Packt
10 Feb 2016
6 min read
In this article by Raymond Camden, author of the book jQuery Mobile Web Development Essentials – Third Edition, we will look at dialogs, grids, and other widgets. While jQuery Mobile provides great support for them, you get even more UI controls within the framework. In this article, we will see how to lay out content with grids and make responsive grids.

(For more resources related to this topic, see here.)

Laying out content with grids

Grids are one of the few features of jQuery Mobile that do not make use of particular data attributes. Instead, you work with grids simply by specifying CSS classes for your content.

Grids come in four flavors: two-column, three-column, four-column, and five-column grids. You will probably not want to use the five-column one on a phone device; save that for a tablet instead.

You begin a grid with a div block that makes use of the class ui-grid-X, where X will be either a, b, c, or d. ui-grid-a represents a two-column grid; ui-grid-b is a three-column grid. You can probably guess what c and d create. So, to begin a two-column grid, you would wrap your content with the following:

<div class="ui-grid-a">
  Content
</div>

Within the div tag, you then use a div for each cell of content. The class for grid cells begins with ui-block-X, where X goes from a to d. The ui-block-a class would be used for the first cell, ui-block-b for the next, and so on. This works much like HTML tables.

Putting it together, the following code snippet demonstrates a simple two-column grid with two cells of content:

<div class="ui-grid-a">
  <div class="ui-block-a">Left</div>
  <div class="ui-block-b">Right</div>
</div>

The text within a cell will automatically wrap. Listing 7-1 demonstrates a simple grid with a large amount of text in one of the columns. In the mobile browser, you can clearly see the two columns.

If the text in these divs seems a bit close together, there is a simple fix for that. In order to add a bit more space between the content of the grid cells, you can add the ui-content class to your main div. This tells jQuery Mobile to pad the content a bit. For example:

<div class="ui-grid-a ui-content">

This small change adds padding around the grid content shown in the previous screenshot.

Listing 7-1: test1.html

<div data-role="page" id="first">
  <div data-role="header">
    <h1>Grid Test</h1>
  </div>
  <div role="main" class="ui-content">
    <div class="ui-grid-a">
      <div class="ui-block-a">
        <p>
        This is my left hand content. There won't be a lot of it.
        </p>
      </div>
      <div class="ui-block-b">
        <p>
        This is my right hand content. I'm going to fill it with some dummy text.
        </p>
        <p>
        Bacon ipsum dolor sit amet andouille capicola spare ribs, short loin venison sausage prosciutto turkey flank frankfurter pork belly short ribs. Chop, pancetta turkey bacon short ribs ham flank pork belly. Tongue strip steak short ribs tail.
        </p>
      </div>
    </div>
  </div>
</div>

Working with other types of grids, then, is simply a matter of switching to the other classes.
For example, a four-column grid would be set up similar to the following code snippet (note that a four-column grid uses blocks a through d):

<div class="ui-grid-c">
  <div class="ui-block-a">1st cell</div>
  <div class="ui-block-b">2nd cell</div>
  <div class="ui-block-c">3rd cell</div>
  <div class="ui-block-d">4th cell</div>
</div>

Again, keep in mind your target audience. Anything over two columns may be too thin on a mobile phone.

To create multiple rows in a grid, you simply repeat the blocks. The following code snippet demonstrates a simple example of a grid with two rows of cells:

<div class="ui-grid-a">
  <div class="ui-block-a">Left Top</div>
  <div class="ui-block-b">Right Top</div>
  <div class="ui-block-a">Left Bottom</div>
  <div class="ui-block-b">Right Bottom</div>
</div>

Notice that there isn't any concept of a row. jQuery Mobile knows that it should create a new row when the block starts over with the one marked ui-block-a. The following code snippet, Listing 7-2, is a simple example:

Listing 7-2: test2.html

<div data-role="page" id="first">
  <div data-role="header">
    <h1>Grid Test</h1>
  </div>
  <div role="main" class="ui-content">
    <div class="ui-grid-a">
      <div class="ui-block-a">
        <p>
          <img src="ray.png">
        </p>
      </div>
      <div class="ui-block-b">
        <p>
        This is Raymond Camden. Here is some text about him. It may wrap or it may not but jQuery Mobile will make it look good. Unlike Ray!
        </p>
      </div>
      <div class="ui-block-a">
        <p>
        This is Scott Stroz. Scott Stroz is a guy who plays golf and is really good at FPS video games.
        </p>
      </div>
      <div class="ui-block-b">
        <p>
          <img src="scott.png">
        </p>
      </div>
    </div>
  </div>
</div>

The following screenshot shows the result.

Making responsive grids

Earlier we mentioned that complex grids may not work well depending on the size of your targeted devices. A simple two-column grid is fine, but the larger grids would render well only on tablets. Luckily, there's a simple solution. jQuery Mobile's latest updates include much better support for responsive design. Let's consider a simple example: a web page using a four-column grid is readable on a phone for sure, but it is a bit dense. By making use of responsive design, we can handle the different sizes intelligently using the same basic HTML. jQuery Mobile enables a simple solution for this by adding the class ui-responsive to an existing grid. Here is an example:

<div class="ui-grid-c ui-responsive">

With this one small change, the four-column layout becomes a one-column layout on a phone. If viewed on a tablet, the original four-column design will be preserved.

Summary

In this article, you learned more about how jQuery Mobile enhances basic HTML to provide additional layout controls for our mobile pages. With grids, you learned a new way to easily lay out content in columns.

Resources for Article:

Further resources on this subject:
Classes and Instances of Ember Object Model [article]
Introduction to Akka [article]
CoreOS Networking and Flannel Internals [article]


So, what is Spring for Android?

Packt
20 Feb 2013
3 min read
(For more resources related to this topic, see here.)

RestTemplate

The RestTemplate module is a port of the Java-based REST client RestTemplate, which initially appeared in 2009 in Spring for MVC. Like the other Spring template counterparts (JdbcTemplate, JmsTemplate, and so on), its aim is to bring to Java developers (and thus Android developers) a high-level abstraction over lower-level Java APIs; in this case, it eases the development of HTTP clients.

In its Android version, RestTemplate relies on the core Java HTTP facilities (HttpURLConnection) or the Apache HTTP Client. Depending on the Android version your app runs on, RestTemplate for Android can pick the most appropriate one for you, in line with the Android developers' recommendations. See http://android-developers.blogspot.ca/2011/09/androids-http-clients.html; this blog post explains why in certain cases Apache HTTP Client is preferred over HttpURLConnection.

RestTemplate for Android also supports gzip compression and different message converters to convert your Java objects from and to JSON, XML, and so on.
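To give a feel for the API, here is a minimal sketch of fetching and deserializing JSON with RestTemplate for Android. The endpoint URL and the Tweet class are hypothetical placeholders, Gson must be on the classpath for the converter shown, and on Android the request must run off the main thread (for example, in AsyncTask#doInBackground):

import org.springframework.http.converter.json.GsonHttpMessageConverter;
import org.springframework.web.client.RestTemplate;

public class TweetClient {

    // Hypothetical model class; Gson maps JSON fields onto it by name.
    public static class Tweet {
        public String text;
    }

    // Call from a background thread, never from the UI thread.
    public Tweet[] fetchTweets() {
        RestTemplate restTemplate = new RestTemplate();
        // Register a converter so JSON responses are mapped to Java objects.
        restTemplate.getMessageConverters().add(new GsonHttpMessageConverter());
        // Placeholder URL - substitute a real JSON endpoint.
        return restTemplate.getForObject("https://example.com/api/tweets.json", Tweet[].class);
    }
}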
Auth/Spring Social

The goal of the Spring Android Auth module is to let an Android app gain authorization to a web service provider using OAuth (version 1 or 2). OAuth is probably the most popular authorization protocol (and, it is worth mentioning, an open standard) and is currently used by Facebook, Twitter, Google apps (and many others) to let third-party applications access users' accounts.

The Spring for Android Auth module is based on several Spring libraries, because it needs to securely (with cryptography) persist (via JDBC) a token obtained via HTTP. Here is a list of the libraries needed for OAuth:

- Spring Security Crypto: To encrypt the token
- Spring Android OAuth: This extends Spring Security Crypto, adding a dedicated encryptor for Android and a SQLite-based persistence provider
- Spring Android Rest Template: To interact with the HTTP services
- Spring Social Core: The OAuth workflow abstraction

While performing the OAuth workflow, we will also need the browser to take the user to the service provider's authentication page; for example, the Twitter OAuth authentication dialog.

What Spring for Android is not

SpringSource (the company behind Spring for Android) is very famous among Java developers. Their most popular product is the Spring Framework for Java, which includes a dependency injection framework (also called an inversion of control framework). Spring for Android does not bring inversion of control to the Android platform. In its very first release (1.0.0.M1), Spring for Android brought a common logging facade for Android; the authors removed it in the next version.

Summary

In this article, we learned how Spring for Android helps with the development of Android applications. We learned the details of the important modules present in it and their functions. We also briefly touched on dependency injection, and noted that Spring for Android does not bring inversion of control to the Android platform.

Resources for Article:

Further resources on this subject:
Top 5 Must-have Android Applications [Article]
Creating, Compiling, and Deploying Native Projects from the Android NDK [Article]
Manifest Assurance: Security and Android Permissions for Flash [Article]


Get your Apps Ready for Android N

Packt
18 Mar 2016
9 min read
It seems likely that Android N will get its first proper outing in May, at this year's Google I/O conference, but there's no need to wait until then to start developing for the next major release of the Android platform. Thanks to Google's decision to release preview versions early, you can start getting your apps ready for Android N today. In this article by Jessica Thornsby, author of the book Android UI Design, we're going to look at the major new UI features that you can start experimenting with right now. And since you'll need something to develop your Android N-ready apps in, we're also going to look at Android Studio 2.1, which is currently the recommended development environment for Android N.

(For more resources related to this topic, see here.)

Multi-window mode

Beginning with Android N, the Android operating system will give users the option to display more than one app at a time, in a split-screen environment known as multi-window mode. Multi-window paves the way for some serious multi-app multitasking, allowing users to perform tasks such as replying to an email without abandoning the video they were halfway through watching on YouTube, and reading articles in one half of the screen while jotting down notes in Google Keep on the other. When two activities are sharing the screen, users can even drag data from one activity and drop it into another directly, for example dragging a restaurant's address from a website and dropping it into Google Maps.

Android N users can switch to multi-window mode in either of the following ways:

- Making sure one of the apps they want to view in multi-window mode is visible onscreen, then tapping the device's Recent Apps softkey (that's the square softkey). The screen will split in half, with one side displaying the current activity and the other displaying the Recent Apps carousel. The user can then select the secondary app they want to view, and it'll fill the remaining half of the screen.
- Navigating to the home screen, and then pressing the Recent Apps softkey to open the Recent Apps carousel. The user can then drag one of these apps to the edge of the screen, and it'll open in multi-window mode. The user can then repeat this process for the second activity.

If your app targets Android N or higher, the Android operating system assumes that your app supports multi-window mode unless you explicitly state otherwise. To prevent users from displaying your app in multi-window mode, you'll need to add android:resizeableActivity="false" to the <activity> or <application> section of your project's Manifest file, as shown in the sketch below. If your app does support multi-window mode, you may want to prevent users from shrinking your app's UI beyond a specified size, using the android:minimalSize attribute. If the user attempts to resize your app so it's smaller than the android:minimalSize value, the system will crop your UI instead of shrinking it.
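A minimal sketch of the opt-out just described; the activity name is a placeholder:

<!-- In AndroidManifest.xml: opt this activity out of multi-window mode. -->
<activity
    android:name=".MainActivity"
    android:resizeableActivity="false" />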
Direct reply notifications

Google is adding a few new features to notifications in Android N, including an inline reply action button that allows users to reply to notifications directly from the notification UI. This is particularly useful for messaging apps, as it means users can reply to messages without even having to launch the messaging application. You may have already encountered direct reply notifications in Google Hangouts.

To create a notification that supports direct reply, you need to create an instance of RemoteInput.Builder and then add it to your notification action. The following code creates a RemoteInput with a quick-reply key; when the user triggers the action, the notification prompts the user to input their response:

private static final String KEY_QUICK_REPLY = "key_quick_reply";

String replyLabel = getResources().getString(R.string.reply_label);
RemoteInput remoteInput = new RemoteInput.Builder(KEY_QUICK_REPLY)
        .setLabel(replyLabel)
        .build();
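The RemoteInput must then be attached to a notification action. A minimal sketch follows, reusing replyLabel and remoteInput from the previous snippet; the icon resources and replyPendingIntent are placeholder assumptions, not part of the original example:

// Sketch: attach the RemoteInput to an action and post the notification.
// R.drawable.ic_reply, R.drawable.ic_message, and replyPendingIntent are placeholders.
Notification.Action replyAction =
        new Notification.Action.Builder(R.drawable.ic_reply, replyLabel, replyPendingIntent)
                .addRemoteInput(remoteInput)
                .build();

Notification notification = new Notification.Builder(this)
        .setSmallIcon(R.drawable.ic_message)
        .setContentTitle("New message")
        .addAction(replyAction)
        .build();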
To retrieve the user's input from the notification interface, you need to call RemoteInput.getResultsFromIntent(), passing in the notification action's intent:

// This method returns a Bundle that contains the text response
Bundle remoteInput = RemoteInput.getResultsFromIntent(intent);

if (remoteInput != null) {
    // Query the bundle using the result key, which is provided to the
    // RemoteInput.Builder constructor
    return remoteInput.getCharSequence(KEY_QUICK_REPLY);
}

Bundled notifications

Don't you just hate it when you connect to the World Wide Web first thing in the morning, and Gmail bombards you with multiple new message notifications, but doesn't give you any more information about the individual emails? Not particularly helpful! When you receive a notification that consists of multiple items, the only thing you can really do is launch the app in question and take a closer look at the events that make up this grouped notification.

Android N overcomes this drawback by letting you group multiple notifications from the same app into a single, bundled notification via a new notification style: bundled notifications. A bundled notification consists of a parent notification that displays summary information for the group, plus individual notification items. If the user wants to see more information about one or more individual items, they can unfurl the bundled notification into separate notifications by swiping down with two fingers. The user can then act on each mini-notification individually, for example they might choose to dismiss the first three notifications about spam emails, but open the fourth e-mail.

To group notifications, you need to call setGroup() for each notification you want to add to the same notification stack, and then assign these notifications the same key:

final static String GROUP_KEY_MESSAGES = "group_key_messages";

Notification notif = new NotificationCompat.Builder(mContext)
        .setContentTitle("New SMS from " + sender1)
        .setContentText(subject1)
        .setSmallIcon(R.drawable.new_message)
        .setGroup(GROUP_KEY_MESSAGES)
        .build();

Then, when you create another notification that belongs to this stack, you just need to assign it the same group key:

Notification notif2 = new NotificationCompat.Builder(mContext)
        .setContentTitle("New SMS from " + sender2)
        .setContentText(subject2)
        .setGroup(GROUP_KEY_MESSAGES)
        .build();

Vulkan support

The second Android N developer preview introduced an Android-specific implementation of the Vulkan API. Vulkan is a cross-platform 3D rendering API for providing high-quality, real-time 3D graphics. For draw-call-heavy applications, Vulkan also promises to deliver a significant performance boost, thanks to a threading-friendly design and a reduction of CPU overhead. You can try Vulkan for yourself on devices running Developer Preview 2, or learn more about Vulkan in the official Android docs (https://developer.android.com/ndk/guides/graphics/index.html).

Android N support in Android Studio 2.1

The two Developer Previews aren't the only important releases for developers who want to get their apps ready for Android N. Google also recently released a stable version of Android Studio 2.1, which is the recommended IDE for developing Android N apps.

Crucially, with the release of Android Studio 2.1 the emulator can now run the N Developer Preview Emulator System Images, so you can start testing your apps against Android N. Particularly with features like multi-window mode, it's important to test your apps across multiple screen sizes and configurations, and creating various Android N Android Virtual Devices (AVDs) is the quickest and easiest way to do this.

Android Studio 2.1 also adds the ability to use the new Jack compiler (Java Android Compiler Kit), which compiles Java source code into Android dex bytecode. Jack is particularly important as it opens the door to using Java 8 language features in your Android N projects, without having to resort to additional tools or resources.

Although not Android N-specific, Android Studio 2.1 makes some improvements to the Instant Run feature, which should result in faster edit-and-deploy builds for all your Android projects. Previously, one small change in the Java code would cause all Java sources in the module to be recompiled. Instant Run aims to reduce compilation time by analyzing the changes you've made and determining how it can deploy them in the fastest way possible, instead of Android Studio automatically going through the lengthy process of recompiling the code, converting it to dex format, generating an APK, and installing it on the connected device or emulator every time you make even a small change to your project.

To start using Instant Run, select Android Studio from the toolbar followed by Preferences…. In the window that appears, select Build, Execution, Deployment from the side menu and select Instant Run. Uncheck the box next to Restart activity on code changes.

Instant Run is supported only when you deploy a debug build for Android 4.0 or higher. You'll also need to be using Android Plugin for Gradle version 2.0 or higher. Instant Run isn't currently compatible with the Jack toolchain.

To use Instant Run, deploy your app as normal. Then, if you make some changes to your project, you'll notice that a yellow thunderbolt icon appears within the Run icon, indicating that Android Studio will push updates via Instant Run when you click this button.

You can update to the latest version of Android Studio by launching the IDE and then selecting Android Studio from the toolbar, followed by Check for Updates….

Summary

In this article, we looked at the major new UI features currently available in the Android N Developer Preview. We also looked at the Android Studio 2.1 features that are particularly useful for developing and testing apps that target the upcoming Android N release. Although we should expect some pretty dramatic changes between these early previews and the final release of Android N, taking the time to explore these features now means you'll be in a better position to update your apps when Android N is finally released.

Resources for Article:

Further resources on this subject:
Drawing and Drawables in Android Canvas [article]
Behavior-Driven Development with Selenium WebDriver [article]
Development of iPhone Applications [article]


The prototype pattern

Packt
27 Dec 2016
5 min read
In this article by Kyle Mew, author of the book Android Design Patterns and Best Practices, we will discuss how the prototype design pattern performs similar tasks to other creational patterns, such as builders and factories, but takes a very different approach. Rather than relying heavily on a number of hardcoded subclasses, the prototype, as its name suggests, makes copies of an original, vastly reducing the number of subclasses required and preventing any lengthy creation processes.

(For more resources related to this topic, see here.)

Setting up a prototype

The prototype is most useful when the creation of an instance is expensive in some way. This could be the loading of a large file, a detailed cross-examination of a database, or some other computationally expensive operation. Furthermore, it allows us to decouple cloned objects from their originals, allowing us to make modifications without having to reinstantiate each time. In the following example, we will demonstrate this using functions that take considerable time to calculate when first created: the nth prime number and the nth Fibonacci number. Viewed diagrammatically, our prototype will look like this:

We will not need the prototype pattern in our main app, as there are very few expensive creations as such. However, it is vitally important in many situations and should not be neglected. Follow the given steps to apply a prototype pattern:

1. We will start with the following abstract class:

public abstract class Sequence implements Cloneable {

    protected long result;
    private String id;

    public long getResult() {
        return result;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public Object clone() {
        Object clone = null;
        try {
            clone = super.clone();
        } catch (CloneNotSupportedException e) {
            e.printStackTrace();
        }
        return clone;
    }
}

2. Next, add this cloneable concrete class, as shown:

// Calculates the 10,000th prime number
public class Prime extends Sequence {

    public Prime() {
        result = nthPrime(10000);
    }

    public static int nthPrime(int n) {
        int i, count;
        for (i = 2, count = 0; count < n; ++i) {
            if (isPrime(i)) {
                ++count;
            }
        }
        return i - 1;
    }

    // Test for prime number
    private static boolean isPrime(int n) {
        for (int i = 2; i < n; ++i) {
            if (n % i == 0) {
                return false;
            }
        }
        return true;
    }
}

3. Add another Sequence class for the Fibonacci numbers, like so:

// Calculates the 100th Fibonacci number
public class Fibonacci extends Sequence {

    public Fibonacci() {
        result = nthFib(100);
    }

    private static long nthFib(int n) {
        long f = 0;
        long g = 1;
        for (int i = 1; i <= n; i++) {
            f = f + g;
            g = f - g;
        }
        return f;
    }
}

4. Next, create the cache class, as follows:

public class SequenceCache {

    private static Hashtable<String, Sequence> sequenceHashtable = new Hashtable<String, Sequence>();

    public static Sequence getSequence(String sequenceId) {
        Sequence cachedSequence = sequenceHashtable.get(sequenceId);
        return (Sequence) cachedSequence.clone();
    }

    public static void loadCache() {
        Prime prime = new Prime();
        prime.setId("1");
        sequenceHashtable.put(prime.getId(), prime);

        Fibonacci fib = new Fibonacci();
        fib.setId("2");
        sequenceHashtable.put(fib.getId(), fib);
    }
}

5. Add three TextViews to your layout, and then add the code to your main activity's onCreate() method.
6. Add the following lines to the client code:

// Load the cache once only
SequenceCache.loadCache();

// Lengthy calculation and display of prime result
Sequence prime = (Sequence) SequenceCache.getSequence("1");
primeText.setText(new StringBuilder()
        .append(getString(R.string.prime_text))
        .append(prime.getResult())
        .toString());

// Lengthy calculation and display of Fibonacci result
Sequence fib = (Sequence) SequenceCache.getSequence("2");
fibText.setText(new StringBuilder()
        .append(getString(R.string.fib_text))
        .append(fib.getResult())
        .toString());

As you can see, the preceding code creates the pattern but does not demonstrate it. Once loaded, the cache can create instant copies of our previously expensive output. Furthermore, we can modify the copy, making the prototype very useful when we have a complex object and want to modify just one or two properties.

Applying the prototype

Consider a detailed user profile similar to what you might find on a social media site. Users modify details such as images and text, but the overall structure is the same for all profiles, making it an ideal candidate for a prototype pattern. To put this principle into practice, include the following code in your client source code:

// Create a clone of an already constructed object
Sequence clone = (Fibonacci) new Fibonacci().clone();

// Modify the result
long result = clone.getResult() / 2;

// Display the result quickly
cloneText.setText(new StringBuilder()
        .append(getString(R.string.clone_text))
        .append(result)
        .toString());

The prototype is a very useful pattern on many occasions, where we have expensive objects to create or when we face a proliferation of subclasses. Although it is not the only pattern that helps reduce excessive subclassing, it leads us on to another design pattern, the decorator.
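One caveat worth noting: super.clone() performs a shallow copy, which is safe here because Sequence holds only a primitive and a String. If a subclass ever held mutable state, its clone() override would need to deep-copy that state. The following is a hypothetical sketch, not part of the example above; the history field is an assumption used purely for illustration:

import java.util.ArrayList;
import java.util.List;

// Hypothetical variant of Sequence with mutable state.
public class TrackedSequence extends Sequence {

    // Assumed mutable field - not in the original example
    private List<Long> history = new ArrayList<Long>();

    @Override
    public Object clone() {
        TrackedSequence copy = (TrackedSequence) super.clone();
        // Without this, both objects would share the same list (shallow copy)
        copy.history = new ArrayList<Long>(this.history);
        return copy;
    }
}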
Summary

In this article, we explored the prototype pattern and saw how vital it can be whenever we have large files or slow processes to contend with.

Resources for Article:

Further resources on this subject:
Why we need Design Patterns? [article]
Understanding Material Design [article]
Asynchronous Control Flow Patterns with ES2015 and beyond [article]

Using Location Data with PhoneGap

Packt
30 Oct 2013
11 min read
(For more resources related to this topic, see here.)

An introduction to Geolocation

The term geolocation refers to the process of identifying the real-world geographic location of an object. Devices that are able to detect the user's position are becoming more common each day, and we are now used to getting content based on our location (geo targeting). Using the Global Positioning System (GPS), a space-based satellite navigation system that provides location and time information consistently across the globe, you can now get the accurate location of a device.

During the early 1970s, the US military created Navstar, a defense navigation satellite system. Navstar created the basis for the GPS infrastructure used today by billions of devices. Since 1978, more than 60 GPS satellites have been successfully placed in orbit around the Earth (refer to http://en.wikipedia.org/wiki/List_of_GPS_satellite_launches for a detailed report of past and planned launches).

The location of a device is represented through a point comprised of two components: latitude and longitude. Modern devices can determine location information through several methods, including:

- Global Positioning System (GPS)
- IP address
- GSM/CDMA cell IDs
- Wi-Fi and Bluetooth MAC addresses

Each approach delivers the same information; what changes is the accuracy of the device's position. The GPS satellites continuously transmit information that a receiver can parse, including, for example, the general health of the GPS array, roughly where all of the satellites are in orbit, information on the precise orbit or path of the transmitting satellite, and the time of the transmission. The receiver calculates its own position by timing the signals sent by any of the satellites in the array that are visible. The process of measuring the distance from a point to a group of satellites to locate a position is known as trilateration; the distance is determined using the speed of light as a constant along with the time the signal left the satellites.

An emerging trend in mobile development is GPS-based "people discovery" apps such as Highlight, Sonar, Banjo, and Foursquare. Each app has different features and has been built for different purposes, but all of them share the same killer feature: using location as a piece of metadata in order to filter information according to the user's needs.

The PhoneGap Geolocation API

The Geolocation API is not a part of the HTML5 specification, but it is tightly integrated with mobile development. The PhoneGap Geolocation API and the W3C Geolocation API mirror each other; both define the same methods and arguments. Several devices already implement the W3C Geolocation API; for those devices you can use the native support instead of the PhoneGap API.

As per the specification, the user has to explicitly allow the website or the app to use the device's current position. The Geolocation API is exposed through the geolocation object, a child of the navigator object, and consists of the following three methods:

- getCurrentPosition() returns the device's position.
- watchPosition() watches for changes in the device's position.
- clearWatch() stops the watcher for the device's position changes.

The watchPosition() and clearWatch() methods work in the same way that the setInterval() and clearInterval() methods work; in fact, the first one returns an identifier that is passed in to the second one.
The getCurrentPosition() and watchPosition() methods mirror each other and take the same arguments: a success and a failure callback function, and an optional configuration object. The configuration object is used to specify the maximum age of a cached value of the device's position, to set a timeout after which the method will fail, and to specify whether the application requires only accurate results:

var options = {maximumAge: 3000, timeout: 5000, enableHighAccuracy: true};
navigator.geolocation.watchPosition(onSuccess, onFailure, options);

Only the first argument is mandatory, but it's recommended to always handle the failure use case. The success handler function receives a Position object as its argument. Accessing its properties, you can read the device's coordinates and the creation timestamp of the object that stores the coordinates:

function onSuccess(position) {
    console.log('Coordinates: ' + position.coords);
    console.log('Timestamp: ' + position.timestamp);
}

The coords property of the Position object contains a Coordinates object; so far, the most important properties of this object are longitude and latitude. Using those properties, it's possible to start to integrate positioning information as relevant metadata in your app.

The failure handler receives a PositionError object as its argument. Using the code and message properties of this object, you can gracefully handle every possible error:

function onError(error) {
    console.log('message: ' + error.message);
    console.log('code: ' + error.code);
}

The message property returns a detailed description of the error; the code property returns an integer whose possible values are represented through the following pseudo constants:

- PositionError.PERMISSION_DENIED: The user denied the app the use of the device's current position.
- PositionError.POSITION_UNAVAILABLE: The position of the device could not be determined. If you want to recover the last available position when this error is returned, you have to write a custom plugin that uses the platform-specific API; Android and iOS both have this feature. You can find a detailed example at http://stackoverflow.com/questions/10897081/retrieving-last-known-geolocation-phonegap.
- PositionError.TIMEOUT: The specified timeout elapsed before the implementation could successfully acquire a new Position object.

JavaScript doesn't support constants in the way Java and other object-oriented programming languages do. With the term "pseudo constants", I refer to those values that should never change in a JavaScript app.

One of the most common tasks to perform with the device's position information is to show the device's location on a map. You can quickly perform this task by integrating Google Maps in your app; the only requirement is a valid API key. To get the key, use the following steps:

1. Visit the APIs console at https://code.google.com/apis/console and log in with your Google account.
2. Click the Services link on the left-hand menu.
3. Activate the Google Maps API v3 service.

Time for action – showing device position with Google Maps

Get ready to add a map renderer to the PhoneGap default app template. Refer to the following steps:

1. Open the command-line tool and create a new PhoneGap project named MapSample.

$ cordova create ~/the/path/to/your/source/mapsample com.gnstudio.pg.MapSample MapSample

2. Add the Geolocation API plugin using the command line.
$ cordova plugin add https://git-wip-us.apache.org/repos/asf/cordova-plugin-geolocation.git

Go to the www folder, open the index.html file, and add a div element with the id value #map inside the main div of the app, below the #deviceready one.

<div id='map'></div>

Add a new script tag to include the Google Maps JavaScript library.

<script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&sensor=true"></script>

Go to the css folder and define a new rule inside the index.css file to give the div element and its content an appropriate size.

#map {
    width: 280px;
    height: 230px;
    display: block;
    margin: 5px auto;
    position: relative;
}

Go to the js folder, open the index.js file, and define a new function named initMap.

initMap: function(lat, long) {
    // The code needed to show the map and the
    // device position will be added here
}

In the body of the function, define an options object to specify how the map has to be rendered.

var options = {
    zoom: 8,
    center: new google.maps.LatLng(lat, long),
    mapTypeId: google.maps.MapTypeId.ROADMAP
};

Add to the body of the initMap function the code to initialize the rendering of the map, and to show a marker representing the current device's position over it.

var map = new google.maps.Map(document.getElementById('map'), options);
var markerPoint = new google.maps.LatLng(lat, long);
var marker = new google.maps.Marker({
    position: markerPoint,
    map: map,
    title: "Device's Location"
});

Define a function to use as the success handler, and call from its body the initMap function previously defined.

onSuccess: function(position) {
    var coords = position.coords;
    app.initMap(coords.latitude, coords.longitude);
}

Define another function in order to have a failure handler able to notify the user that something went wrong.

onFailure: function(error) {
    navigator.notification.alert(error.message, null);
}

Go into the deviceready function and add, as the last statement, the call to the Geolocation API needed to recover the device's position.

navigator.geolocation.getCurrentPosition(app.onSuccess, app.onFailure, {timeout: 5000, enableHighAccuracy: false});

Open the command-line tool, build the app, and then run it on your testing devices.

$ cordova build
$ cordova run android

What just happened? You integrated Google Maps inside an app. The map is an interactive map most users are familiar with—the most common gestures are already working and the Google Street View controls are already enabled. To successfully load the Google Maps API on iOS, it's mandatory to whitelist the googleapis.com and gstatic.com domains. Open the .plist file of the project as source code (right-click on the file and then Open As | Source Code) and add the following array of domains:

<key>ExternalHosts</key>
<array>
    <string>*.googleapis.com</string>
    <string>*.gstatic.com</string>
</array>

Other Geolocation data In the previous example, you only used the latitude and longitude properties of the position object that you received. There are other attributes that can be accessed as properties of the Coordinates object: altitude, the height of the device, in meters, above sea level. accuracy, the accuracy level of the latitude and longitude, in meters; it can be used to show a radius of accuracy when mapping the device's position. altitudeAccuracy, the accuracy of the altitude in meters. heading, the direction of the device in degrees clockwise from true north. speed, the current ground speed of the device in meters per second.
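To make the accuracy property concrete, here is a hedged sketch, not part of the original recipe, that draws the accuracy radius around the device's position; it assumes initMap stores its map instance on the app object as app.map:

onSuccess: function(position) {
    var coords = position.coords;
    app.initMap(coords.latitude, coords.longitude);
    // Assumption: initMap saves the google.maps.Map instance in app.map
    new google.maps.Circle({
        map: app.map,
        center: new google.maps.LatLng(coords.latitude, coords.longitude),
        radius: coords.accuracy,   // meters
        fillOpacity: 0.15,
        strokeWeight: 1
    });
}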
Latitude and longitude are the best supported of these properties, and the ones that will be most useful when communicating with remote APIs. The other properties are mainly useful if you're developing an application for which geolocation is a core component of its standard functionality, such as apps that use this data to create a flow of information contextualized to the user's position. The accuracy property is the most important of these additional features, because as an application developer, you typically won't know which particular sensor is giving you the location, and you can use the accuracy property as a range in your queries to external services. There are several APIs that allow you to discover interesting data related to a place; among these, the most interesting are the Google Places API and the Foursquare API. The Google Places and Foursquare online documentation is very well organized, and it's the right place to start if you want to dig deeper into these topics. You can access the Google Places docs at https://developers.google.com/maps/documentation/javascript/places and the Foursquare docs at https://developer.foursquare.com/. The itinero reference app for this article implements both APIs. In the next example, you will look at how to integrate Google Places inside the RequireJS app. In order to include the Google Places API inside an app, all you have to do is add the libraries parameter to the Google Maps API call. The resulting URL should look similar to http://maps.google.com/maps/api/js?key=SECRET_KEY&sensor=true&libraries=places. The itinero app lets users create and plan a trip with friends. Once the user provides the name of the trip, the name of the country to be visited, and the trip mates and dates, it's time to start selecting the travel, eat, and sleep options. When the user selects the Eat option, the Google Places data provider will return bakeries, take-out places, groceries, and so on, close to the trip's destination. The app will show on the screen a list of possible places the user can select to plan the trip. For a complete list of the types of place searches supported by the Google API, refer to the online documentation at https://developers.google.com/places/documentation/supported_types.
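As a hedged sketch of what such a query can look like once the places library is loaded — the place type and radius here are illustrative assumptions, not values from the itinero app:

var service = new google.maps.places.PlacesService(map);
service.nearbySearch({
    location: new google.maps.LatLng(lat, long),
    radius: 5000,             // meters around the trip's destination
    types: ['restaurant']     // for example, when the user picks the Eat option
}, function (results, status) {
    if (status === google.maps.places.PlacesServiceStatus.OK) {
        results.forEach(function (place) {
            console.log(place.name, place.vicinity);
        });
    }
});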

Using Storyboards

Packt
14 Jun 2013
7 min read
(For more resources related to this topic, see here.) Configuring storyboards for a project Getting ready In this recipe, we will learn how to configure an application's project properties using Xcode so that it is set up correctly to use a storyboard file. How to do it... To begin, perform the following simple steps: Select your project from the project navigator window. Then, select your project target from under the TARGETS group and select the Summary tab. Select MainStoryboard from the Main Storyboard drop-down menu, as shown in the preceding screenshot. How it works... In this recipe, we gained an understanding of what storyboards are, as well as how they differ from user interfaces created in the past, whereby a new XIB file would need to be created for each view of your application. Whether you are creating applications for the iPad or iPhone, each view controller that gets created within your storyboard represents the contents of a single screen, and a storyboard is comprised of the contents of more than one scene. Each object contained within a view controller can be linked to another view controller that implements another scene. In our final steps, we looked at how to configure our project properties so that our application is set up to use the storyboard user interface file. There's more… You can also choose to manually add a new Storyboard template to your project. This can be achieved by performing the following simple steps: Select your project from the project navigator window. Select File | New | File…, or press command + N. Select the Storyboard template from the list of available templates, located under the User Interface subsection within the iOS section. Click on the Next button to proceed to the next step in the wizard. Ensure that you have selected iPhone from under the Device Family drop-down menu. Click on the Next button to proceed to the next step in the wizard. Specify the name of the storyboard file within the Save As field as the name of the file to be created. Click on the Create button to save the file to the specified folder. Finally, when we create our project using storyboards, we will need to modify our application's delegate AppDelegate.m file, as shown in the following code snippet:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Override point for customization after
    // application launch.
    return YES;
}

For more information about using storyboards in your applications, you can refer to the Apple Developer documentation, located at https://developer.apple.com/library/ios/#documentation/ToolsLanguages/Conceptual/Xcode4UserGuide/InterfaceBuilder/InterfaceBuilder. Creating a Twitter application In this recipe, we will learn how to create a single view application to build our Twitter application. Getting ready In this recipe, we will start by creating our TwitterExample project. How to do it... To begin with creating a new Xcode project, perform the following simple steps: Launch Xcode from the /Developer/Applications folder. Select Create a new Xcode project, or click on File | New | Project…. Select Single View Application from the list of available templates. Click on the Next button to proceed to the next step in the wizard. Next, enter TwitterExample as the name of your project. Select iPhone from under the Devices drop-down menu. Ensure that the Use Storyboards checkbox has been checked. Ensure that the Use Automatic Reference Counting checkbox has been checked.
Ensure that the Include Unit Tests checkbox has not been checked. Click on the Next button to proceed to the next step in the wizard. Specify the location where you would like to save your project. Then, click on the Create button to save your project at the specified location. The Company Identifier for your app needs to be unique. Apple recommends that you use the reverse domain style (for example, com.domainName.appName). Once your project has been created, you will be presented with the Xcode development environment, along with the project files that the template created for you. How it works... In this recipe, we just created an application that contains a storyboard and consists of one view controller, which does not provide any functionality at the moment. In the following recipes, we will look at how we can add functionality to view controllers, create storyboard scenes, and transition between them. Creating storyboard scenes The process of creating scenes involves adding a new view controller to the storyboard, where each view controller is responsible for managing a single scene. A better way to describe scenes would be to think of a movie reel, where each frame that is being displayed is the actual scene that connects onto the next part. Getting ready When adding scenes to your storyboard file, you can add controls and views to the view controller's view, just as you would do for an XIB file, and you have the ability to configure outlets and actions between your view controllers and their views. How to do it... To add a new scene into your storyboard file, perform the following simple steps: From the project navigator, select the file named MainStoryboard.storyboard. From Object Library, select and drag a new View Controller object on to the storyboard canvas. This is shown in the following screenshot: Next, drag a Label control on to the view and change the label's text property to read About Twitter App. Next, drag a Round Rect Button control on to the view; we will use this in a later section to return to the calling view. In the button's attributes, change the text to read Go Back. Next, on the first view controller, drag a Round Rect Button control on to the view. In the button's attributes, change the text to read About Twitter App. This will be used to call the new view that we added in the previous step. Next, on the first view controller, drag a Round Rect Button control on to the view, underneath the About Twitter App button that we created in the previous step. In the button's attributes, change the text to read Compose Tweet. Next, save your project by selecting File | Save from the menu bar, or alternatively by pressing command + S. Once you have added the controls to each of the views, your final interface should look something like what is shown in the following screenshot: The next step is to create the Action event for our Compose Tweet button so that it has the ability to post tweets. To create an action, perform the following steps: Open the assistant editor by selecting Navigate | Open In Assistant Editor or by pressing option + command + ,. Ensure that the ViewController.h interface file gets displayed. Select the Compose Tweet button; hold down the control key, and drag from the Compose Tweet button to the ViewController.h interface file, between the @interface and @end tags. Choose Action from the Connection drop-down menu for the connection to be created. Enter composeTweet for the name of the method to create.
Choose UIButton from the Type drop-down menu for the type of method to create. The following code snippet shows the completed ViewController.h interface file, with the method that will be responsible for calling and displaying our tweet sheet.

// ViewController.h
// TwitterExample
//
// Created by Steven F Daniel on 21/09/12.
// Copyright (c) 2012 GenieSoft Studios. All rights reserved.

#import <UIKit/UIKit.h>

@interface ViewController : UIViewController

// Create the action methods
- (IBAction)composeTweet:(UIButton *)sender;

@end

Now that we have created our scene, buttons, and actions, our next step is to configure the scene, which is shown in the next recipe. How it works... In this recipe, we looked at how we can add a new view controller to our storyboard, and then started to add controls to each of our view controllers and customize their properties. Next, we looked at how we can create an Action event for our Compose Tweet button that will be responsible for responding to and executing the associated code behind it to display our tweet sheet. Instead of hooking up an event handler to the TouchUpInside event of the button, we decided to simply add an action to it and handle the output of this ourselves. These types of actions are called "instance methods". Here we are basically creating the Action method that will be responsible for allowing the user to compose and send a Twitter message.
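The tweet sheet itself is configured in the next recipe, but as a hedged sketch of where this is heading, the composeTweet: body might look something like the following, using the Twitter framework's TWTweetComposeViewController available since iOS 5; the initial text is an illustrative assumption:

#import <Twitter/Twitter.h>   // also link Twitter.framework to the target

- (IBAction)composeTweet:(UIButton *)sender {
    // Only present the sheet if a Twitter account is configured on the device
    if ([TWTweetComposeViewController canSendTweet]) {
        TWTweetComposeViewController *tweetSheet = [[TWTweetComposeViewController alloc] init];
        [tweetSheet setInitialText:@"Hello from TwitterExample!"];
        [self presentViewController:tweetSheet animated:YES completion:nil];
    }
}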

Building your first Android Wear Application

Packt
18 Feb 2016
18 min read
One of the most exciting new directions that Android has gone in recently is the extension of the platform from phones and tablets to televisions, car dashboards, and wearables such as watches. These new devices allow us to provide added functionality to our existing apps, as well as creating wholly original apps designed specifically for these new environments. This article is really concerned with explaining the idiosyncrasies of each platform and the guidelines Google is keen for us to follow. This is particularly vital when it comes to developing apps that people will use when driving, as safety has to be of prime importance. There are also certain technical issues that need to be addressed when developing wearable apps, such as the pairing of the device with a handset and the entirely different UI and methods of use. In this article, you will: Create wearable AVDs Connect a wearable emulator to a handset with adb commands Connect a wearable emulator to a handset emulator Create a project with both mobile and wearable modules Use the Wearable UI Library Create shape-aware layouts Create and customize cards for wearables Understand wearable design principles (For more resources related to this topic, see here.) Android Wear Creating or adapting apps for wearables is probably the most complicated of these tasks and requires a little more setting up than the other projects. However, wearables often give us access to one of the more fun new sensors: the heart rate monitor. In seeing how this works, we also get to see how to manage sensors in general. Do not worry if you do not have access to an Android wearable device, as we will be constructing AVDs. You will ideally have an actual Android 5 handset if you wish to pair it with the AVD. If you do not, it is still possible to work with two emulators, but it is a little more complex to set up. Bearing this in mind, we can now prepare our first wearable app. Constructing and connecting to a wearable AVD It is perfectly possible to develop and test wearable apps on the emulator alone, but if we want to test all wearable features, we will need to pair it with a phone or a tablet. The next exercise assumes that you have an actual device. If you do not, still complete tasks 1 through 4, and we will cover how the rest can be achieved with an emulator a little later on. Open Android Studio. You do not need to start a project at this point. Start the SDK Manager and ensure you have the relevant packages installed. Open the AVD Manager. Create two new Android Wear AVDs, one round and one square, like so: Ensure USB Debugging is selected on your handset. Install the Android Wear app from the Play Store at this URL: https://play.google.com/store/apps/details?id=com.google.android.wearable.app. Connect it to your computer and start one of the AVDs we just created. Locate and open the folder containing the adb.exe file. It will probably be something like user\AppData\Local\Android\sdk\platform-tools. Using Shift + right-click, select Open command window here. In the command window, issue the following command:

adb -d forward tcp:5601 tcp:5601

Launch the companion app and follow the instructions to pair the two devices. Being able to connect a real-world device to an AVD is a great way to develop form factors without having to own the devices. The wearable companion app simplifies the process of connecting the two.
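In practice, the pairing session at the command prompt looks something like this minimal sketch (the device listing will differ on your machine); remember that the forward command must be re-issued after every reconnect:

adb devices                        # confirm both the handset and the Wear AVD are visible
adb -d forward tcp:5601 tcp:5601   # open the bridge between the handset and the Wear AVD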
If you have had the emulator running for any length of time, you will have noticed that many actions, such as notifications, are sent to the wearable automatically. This means that very often our apps will link seamlessly with a wearable device without us having to include code to pre-empt this. The adb.exe (Android Debug Bridge) is a vital part of our development toolkit. Most of the time, Android Studio manages it for us. However, it is useful to know that it is there and a little about how to interact with it. We used it here to manually open a port between our wearable AVD and our handset. There are many adb commands that can be issued from the command prompt, and perhaps the most useful is adb devices, which lists all currently debuggable devices and emulators, and is very handy when things are not working, to see if an emulator needs restarting. Switching the ADB off and on can be achieved using adb kill-server and adb start-server respectively. Using adb help will list all available commands. The port forwarding command we used in Step 10 needs to be issued every time the phone is disconnected from the computer. Without writing any code as such, we have already seen some of the features that are built into an Android Wear device and the way that the Wear UI differs from most other Android devices. Even if you usually develop with the latest Android hardware, it is often still a good idea to use an emulator, especially for testing the latest SDK updates and pre-releases. If you do not have a real device, then the next, small section will show you how to connect your wearable AVD to a handset AVD. Connecting a wearable AVD with another emulator Pairing two emulators is very similar to pairing with a real device. The main difference is the way we install the companion app without access to the Play Store. Follow these steps to see how it is done: Start up an AVD. This will need to be targeting Google APIs, as seen here: Download the com.google.android.wearable.app-2.apk. There are many places online where it can be found with a simple search; I used www.file-upload.net/download. Place the file in your sdk/platform-tools directory. Shift + right-click in this folder and select Open command window here. Enter the following command:

adb install com.google.android.wearable.app-2.apk

Start your wearable AVD. Enter adb devices into the command prompt, making sure that both emulators are visible, with an output similar to this:

List of devices attached
emulator-5554 device
emulator-5555 device

Enter telnet localhost 5554 at the command prompt, where 5554 is the phone emulator. Next, in the emulator console, enter redir add tcp:5601:5601. You can now use the Wear app on the handheld AVD to connect to the watch. As we've just seen, setting up a Wear project takes a little longer than some of the other exercises we have performed. Once set up, though, the process is very similar to that of developing for other form factors, and something we can now get on with. Creating a wearable project All of the apps that we have developed so far have required just a single module, and this makes sense as we have only been building for single devices. In this next step, we will be developing across two devices and so will need two modules. This is very simple to do, as you will see in these next steps. Start a new project in Android Studio and call it something like Wearable App. On the Target Android Devices screen, select both Phone and Tablet and Wear, like so: You will be asked to add two Activities.
Select Blank Activity for the Mobile Activity and Blank Wear Activity for Wear. Everything else can be left as it is. Run the app on both round and square virtual devices. The first thing you will have noticed is the two modules, mobile and wear. The first is the same as we have seen many times, but there are a few subtle differences with the wear module, and it is worth taking a little look at them. The most important difference is the WatchViewStub class. The way it is used can be seen in the activity_main.xml and MainActivity.java files of the wear module. This frame layout extension is designed specifically for wearables and detects the shape of the device so that the appropriate layout is inflated. Utilizing the WatchViewStub is not quite as straightforward as one might imagine, as the appropriate layout is only inflated after the WatchViewStub has done its thing. This means that, to access any views within the layout, we need to employ a special listener that is called once the layout has been inflated. How this OnLayoutInflatedListener() works can be seen by opening the MainActivity.java file in the wear module and examining the onCreate() method, which will look like this:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    final WatchViewStub stub = (WatchViewStub) findViewById(R.id.watch_view_stub);
    stub.setOnLayoutInflatedListener(new WatchViewStub.OnLayoutInflatedListener() {
        @Override
        public void onLayoutInflated(WatchViewStub stub) {
            mTextView = (TextView) stub.findViewById(R.id.text);
        }
    });
}

Other than the way that wearable apps and devices are set up for developing, the other significant difference is the UI. The widgets and layouts that we use for phones and tablets are not suitable, in most cases, for the diminished size of a watch screen. Android provides a whole new set of UI components that we can use, and this is what we will look at next. Designing a UI for wearables As well as having to consider the small size of a wearable when designing layouts, we also have the issue of shape. Designing for a round screen brings its own challenges, but fortunately the Wearable UI Library makes this very simple. As well as the WatchViewStub that we encountered in the previous section, which inflates the correct layout, there is also a way to design a single layout that inflates in such a way that it is suitable for both square and round screens. Designing the layout The project setup wizard included this library for us automatically in the build.gradle (Module: wear) file as a dependency:

compile 'com.google.android.support:wearable:1.1.0'

The following steps demonstrate how to create a shape-aware layout with a BoxInsetLayout: Open the project we created in the last section. You will need three images that must be placed in the drawable folder of the wear module: one called background_image of around 320x320 px and two of around 50x50 px, called right_icon and left_icon. Open the activity_main.xml file in the wear module.
Replace its content with the following code:

<android.support.wearable.view.BoxInsetLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:background="@drawable/background_image"
    android:layout_height="match_parent"
    android:layout_width="match_parent"
    android:padding="15dp">
</android.support.wearable.view.BoxInsetLayout>

Inside the BoxInsetLayout, add the following FrameLayout:

<FrameLayout
    android:id="@+id/wearable_layout"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:padding="5dp"
    app:layout_box="all">
</FrameLayout>

Inside this, add these three views:

<TextView
    android:gravity="center"
    android:layout_height="wrap_content"
    android:layout_width="match_parent"
    android:text="Weather warning"
    android:textColor="@color/black" />
<ImageView
    android:layout_gravity="bottom|left"
    android:layout_height="60dp"
    android:layout_width="60dp"
    android:src="@drawable/left_icon" />
<ImageView
    android:layout_gravity="bottom|right"
    android:layout_height="60dp"
    android:layout_width="60dp"
    android:src="@drawable/right_icon" />

Open the MainActivity.java file in the wear module. In the onCreate() method, delete all lines after the line setContentView(R.layout.activity_main);. Now, run the app on both square and round emulators. As we can see, the BoxInsetLayout does a fine job of inflating our layout regardless of screen shape. How it works is very simple. The BoxInsetLayout creates a square region that is as large as can fit inside the circle of a round screen. This is set with the app:layout_box="all" instruction, which can also be used for positioning components, as we will see in a minute. We have also set the padding of the BoxInsetLayout to 15 dp and that of the FrameLayout to 5 dp. This has the effect of a margin of 5 dp on round screens and 15 dp on square ones. Whether you use the WatchViewStub and create separate layouts for each screen shape, or the BoxInsetLayout and just one layout file, depends entirely on your preference and the purpose and design of your app. Whichever method you choose, you will no doubt want to add Material Design elements to your wearable app, the most common and versatile of these being the card. In the following section, we will explore the two ways that we can do this: the CardScrollView and the CardFragment. Adding cards The CardFragment class provides a default card view, providing two text views and an image. It is beautifully simple to set up, has all the Material Design features such as rounded corners and a shadow, and is suitable for nearly all purposes. It can be customized, as we will see, although the CardScrollView is often a better option. First, let us see how to implement a default card for wearables: Open the activity_main.xml file in the wear module of the current project. Delete or comment out the text view and two image views. Open the MainActivity.java file in the wear module. In the onCreate() method, add the following code:

FragmentManager fragmentManager = getFragmentManager();
FragmentTransaction fragmentTransaction = fragmentManager.beginTransaction();
CardFragment cardFragment = CardFragment.create("Short title", "with a longer description");
fragmentTransaction.add(R.id.wearable_layout, cardFragment);
fragmentTransaction.commit();

Run the app on one or the other of the wearable emulators to see how the default card looks. The way we created the CardFragment itself is also very straightforward.
We used two string parameters here, but there is a third, drawable parameter, and if the line is changed to

CardFragment cardFragment = CardFragment.create("TITLE", "with description and drawable", R.drawable.left_icon);

then we will get the following output: This default implementation for cards on wearables is fine for most purposes, and it can be customized by overriding its onCreateContentView() method. However, the CardScrollView is a very handy alternative, and this is what we will look at next. Customizing cards The CardScrollView is defined from within our layout, and furthermore it detects screen shape and adjusts the margins to suit each shape. To see how this is done, follow these steps: Open the activity_main.xml file in the wear module. Delete or comment out every element except the root BoxInsetLayout. Place the following CardScrollView inside the BoxInsetLayout:

<android.support.wearable.view.CardScrollView
    android:id="@+id/card_scroll_view"
    android:layout_height="match_parent"
    android:layout_width="match_parent"
    app:layout_box="bottom">
</android.support.wearable.view.CardScrollView>

Inside this, add this CardFrame:

<android.support.wearable.view.CardFrame
    android:layout_width="match_parent"
    android:layout_height="wrap_content">
</android.support.wearable.view.CardFrame>

Inside the CardFrame, add a LinearLayout. Add some views to this, so that the preview resembles the layout here: Open the MainActivity.java file. Replace the code we added to the onCreate() method with this:

CardScrollView cardScrollView = (CardScrollView) findViewById(R.id.card_scroll_view);
cardScrollView.setCardGravity(Gravity.BOTTOM);

You can now test the app on an emulator, which will produce the following result: As can be seen in the previous image, Android Studio has preview screens for both wearable shapes. Like some other previews, these are not always what you will see on a device, but they allow us to put layouts together very quickly by dragging and dropping widgets. As we can see, the CardScrollView and CardFrame are even easier to implement than the CardFragment and also far more flexible, as we can design almost any layout we can imagine. We assigned app:layout_box here again, only this time using bottom, causing the card to be placed as low on the screen as possible. It is very important, when designing for such small screens, to keep our layouts as clean and simple as possible. Google's design principles state that wearable apps should be glanceable. This means that, as with a traditional wrist watch, the user should be able to glance at our app, immediately take in the information, and return to what they were doing. Another of Google's design principles—zero to low interaction—means that a single tap or swipe should be all a user needs to interact with our app. With these principles in mind, let us create a small app with some actual functionality. In the next section, we will take advantage of the new heart rate sensor found in many wearable devices and display the current beats-per-minute on the display. Accessing sensor data The location of an Android Wear device on the user's wrist makes it the perfect piece of hardware for fitness apps, and not surprisingly, these apps are immensely popular. As with most features of the SDK, accessing sensors is pleasantly simple, using classes such as managers and listeners and requiring only a few lines of code, as you will see by following these steps: Open the project we have been working on in this article.
Replace the background image with one that might be suitable for a fitness app. I have used a simple image of a heart. Open the activity_main.xml file. Delete everything except the root BoxInsetLayout. Place this TextView inside it:

<TextView
    android:id="@+id/text_view"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_gravity="center_vertical"
    android:gravity="center"
    android:text="BPM"
    android:textColor="@color/black"
    android:textSize="42sp" />

Open the Manifest file in the wear module. Add the following permission inside the root manifest node:

<uses-permission android:name="android.permission.BODY_SENSORS" />

Open the MainActivity.java file in the wear module. Add the following fields:

private TextView textView;
private SensorManager sensorManager;
private Sensor sensor;

Implement a SensorEventListener on the Activity:

public class MainActivity extends Activity implements SensorEventListener {

Implement the two methods required by the listener. Edit the onCreate() method, like this:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    textView = (TextView) findViewById(R.id.text_view);
    sensorManager = ((SensorManager) getSystemService(SENSOR_SERVICE));
    sensor = sensorManager.getDefaultSensor(Sensor.TYPE_HEART_RATE);
}

Add this onResume() method:

protected void onResume() {
    super.onResume();
    sensorManager.registerListener(this, this.sensor, 3);
}

And this onPause() method:

@Override
protected void onPause() {
    super.onPause();
    sensorManager.unregisterListener(this);
}

Edit the onSensorChanged() callback, like so:

@Override
public void onSensorChanged(SensorEvent event) {
    textView.setText("" + (int) event.values[0]);
}

If you do not have access to a real device, you can download a sensor simulator from here: https://code.google.com/p/openintents/wiki/SensorSimulator The app is now ready to test. We began by adding a permission in the AndroidManifest.xml file in the appropriate module; this is something we have done before, and we need to do it any time we are using features that require the user's permission before installing. The inclusion of a background image may seem unnecessary, but an appropriate background is a real aid to glanceability, as the user can tell instantly which app they are looking at. It should be clear, from the way the SensorManager and the Sensor are set up in the onCreate() method, that all sensors are accessed in the same way and different sensors can be accessed with different constants. We used TYPE_HEART_RATE here, but any other sensor can be started with the appropriate constant, and all sensors can be managed with the same basic structures as we found here, the only real difference being the way each sensor returns SensorEvent.values[]. A comprehensive list of all sensors, and descriptions of the values they produce, can be found at http://developer.android.com/reference/android/hardware/Sensor.html. As with any time our apps utilize functions that run in the background, it is vital that we unregister our listeners in our Activity's onPause() method whenever they are no longer needed. We didn't use the onAccuracyChanged() callback here, but its purpose should be clear, and there are many possible apps where its use is essential.
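For completeness, here is a hedged sketch of one way the unused onAccuracyChanged() callback could be filled in; blanking the reading when the sensor is unreliable is an illustrative assumption, not part of the recipe:

@Override
public void onAccuracyChanged(Sensor sensor, int accuracy) {
    // Hide readings we cannot trust, for example while the watch is being strapped on
    if (accuracy == SensorManager.SENSOR_STATUS_UNRELIABLE) {
        textView.setText("--");
    }
}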
This concludes our exploration of wearable apps and how they are put together. Such devices continue to become more prevalent, and the possibility of ever more imaginative uses is endless. Provided we consider why and how people use smart watches and the like, and develop to take advantage of the location of these devices by programming glanceable interfaces that require the minimum of interactivity, Android Wear seems certain to grow in popularity and use, and developers will continue to produce ever more innovative apps. Summary In this article, we have explored Android wearable apps and how they are put together. Despite their diminutive size and functionality, wearables offer us an enormous range of possibilities. We now know how to create and connect wearable AVDs and how to develop easily for both square and round devices. We then designed the user interface for both round and square screens. To learn more about Android UI design and developing Android applications, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Android User Interface Development: Beginner's Guide, found at https://www.packtpub.com/application-development/android-user-interface-development-beginners-guide Android 4: New Features for Application Development, found at https://www.packtpub.com/application-development/android-4-new-features-application-development Resources for Article: Further resources on this subject: Understanding Material Design [Article] Speaking Java – Your First Game [Article] Mobile Game Design Best Practices [Article]

Evenly Spaced Views with Auto Layout in iOS

Joe Masilotti
16 Apr 2015
5 min read
When the iPhone first came out, there was only one screen size to worry about: 320 x 480. Then the Retina screen was introduced, doubling the screen's resolution. Apple quickly introduced the iPhone 5 and added an extra 88 points to the bottom. With the most recent line of iPhones, two more sizes were added to the mix. Before even mentioning the iPad line, that is already five different combinations of heights and widths to account for. To help remedy this growing number of sizes and resolutions, Apple introduced Auto Layout with iOS 6. Auto Layout is a dynamic way of laying out views with constraints and rules to let the content fit on multiple screen sizes; think "responsive" for mobile. Lots of layouts are possible with Auto Layout, but some require an extra bit of work. One of the more common, albeit tricky, arrangements is to have evenly spaced elements. Having the view scale up to different resolutions and look great on all devices isn't hard, and can be done in both Interface Builder and manually in code. Let's walk through how to evenly space views with Auto Layout using Xcode's Interface Builder. Using Interface Builder The easiest way to play around and test layouts in IB is to create a new Single View Application iOS project. Open Main.storyboard and select ViewController on the left. Don't worry that it is showing a square view, since we will be laying everything out dynamically. The first addition to the view will be the three `UIView`s we will be evenly spacing. Add them along the view from left to right and assign different colors to each. This will make it easier to distinguish them later. Don't worry about where they are; we will fix the layout soon enough. Spacer View Layout Ideally we would be able to add constraints that evenly space these out directly. Unfortunately, you can not set *equals* restrictions on constraints, only on views. What that means is we have to create spacer views in between our content and then set equal constraints on those. Add four more views, one between the edges and the content views. Before we add our first constraint, let's name each view so we can have a little more context when adding their attributes. One of the most frustrating things when working with Auto Layout is seeing the little red arrow telling you something is wrong. Let's try and incrementally add constraints and get back to a working state as quickly as possible. The first item we want to add will constrain the Left Content view using the spacer. Select the Left Spacer and add left 20, top 20, and bottom 20 constraints. To fix this first error we need to assign a width to the spacer. While we will be removing it later, it makes sense to always have a clean slate when moving on to another view. Add in a width (50) constraint and let IB automatically update its frame. Now do the same thing to the Right Spacer. Content View Layout We will remove the width constraints when everything else is working correctly. Consider them temporary placeholders for now. Next let's lay out the Left Content view. Add a left 0, top 20, bottom 20, width 20 constraint to it. Follow the same method on the Right Content view. Twice more, follow the same procedure for the Middle Spacer Views, giving them left/right 0, top 20, bottom 20, width 50 constraints. Finally, let's constrain the Middle Content view. Add left 0, top 20, right 0, bottom 20 constraints to it and lay it out. Final Constraints Remember when I said it was tricky? Maybe a better way to describe this process is long and tedious.
All of the setup we have done so far was to set us up in a good position to give the constraints we actually want. If you look at the view, it doesn't look very special, and it won't even resize the right way yet. To start fixing this, we bring in the magic constraint of this entire example: Equal Widths on the spacer views. Go ahead and delete the four explicit Width constraints on the spacer views and add an Equal Width constraint to each. Select them all at the same time, then add the constraint so they work off of each other. Finally, set explicit widths on the three content views. This is where you can start customizing the layout to have it look the way you want. For my view I want the three views to be 75 points wide, so I removed all of the Width constraints and added them back in for each. Now set the background color of the four spacer views to clear and hide them. Running the app on different size simulators will produce the same result: the three content views remain the same width and stay evenly spaced out along the screen. Even when you rotate the device, the views remain spaced out correctly. Try playing around with different explicit widths of the content views. The same technique can be used to create very dynamic layouts for a variety of applications. For example, this procedure can be used to create a table cell with an image on the left, text in the middle, and a button on the right. Or it can make one row in a calculator that sizes to fit the screen of the device. What are you using it for? About the author Joe Masilotti is a test-driven iOS developer living in Brooklyn, NY. He contributes to open-source testing tools on GitHub and talks about development, cooking, and craft beer on Twitter.
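As a closing aside—the introduction noted that the same result can be achieved manually in code—here is a hedged Objective-C sketch of the equivalent constraints using the Visual Format Language. The view names and the 75-point width are assumptions mirroring the recipe, and every view is assumed to already be a subview of self.view:

// Spacers are plain UIViews that will be hidden, just as in Interface Builder
NSDictionary *views = NSDictionaryOfVariableBindings(spacer1, left, spacer2, middle, spacer3, right, spacer4);
for (UIView *view in [views allValues]) {
    view.translatesAutoresizingMaskIntoConstraints = NO;
}
// Pin everything in a horizontal line; all spacers share spacer1's width
[self.view addConstraints:[NSLayoutConstraint constraintsWithVisualFormat:
    @"H:|[spacer1][left(75)][spacer2(==spacer1)][middle(75)][spacer3(==spacer1)][right(75)][spacer4(==spacer1)]|"
    options:NSLayoutFormatAlignAllCenterY metrics:nil views:views]];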

Creating, Compiling, and Deploying Native Projects from the Android NDK

Packt
13 Feb 2012
13 min read
(For more resources on Android, see here.) Compiling and deploying NDK sample applications I guess you cannot wait anymore to test your new development environment. So why not compile and deploy the elementary samples provided by the Android NDK first to see it in action? To get started, I propose to run HelloJni, a sample application which retrieves a character string defined inside a native C library into a Java activity (an activity in Android being more or less equivalent to an application screen). Time for action – compiling and deploying the hellojni sample Let's compile and deploy the HelloJni project from the command line using Ant: Open a command-line prompt (or Cygwin prompt on Windows). Go to the hello-jni sample directory inside the Android NDK. All the following steps have to be performed from this directory:

$ cd $ANDROID_NDK/samples/hello-jni

Create the Ant build file and all related configuration files automatically using the android command (android.bat on Windows). These files describe how to compile and package an Android application:

android update project -p .

Build the libhello-jni native library with ndk-build, which is a wrapper Bash script around Make. The ndk-build command sets up the compilation toolchain for native C/C++ code and automatically calls the GCC version featured with the NDK.

$ ndk-build

Make sure your Android development device or emulator is connected and running. Compile, package, and install the final HelloJni APK (an Android application package). All these steps can be performed in one command, thanks to the Ant build automation tool. Among other things, Ant runs javac to compile Java code, AAPT to package the application with its resources, and finally ADB to deploy it on the development device. Following is only a partial extract of the output:

$ ant install

The result should look like the following extract: Launch a shell session using adb (or adb.exe on Windows). The ADB shell is similar to shells that can be found on Linux systems:

$ adb shell

From this shell, launch the HelloJni application on your device or emulator. To do so, use am, the Android Activity Manager. The am command allows starting Android activities or services, and sending intents (that is, inter-activity messages), from the command line. The command parameters come from the Android manifest:

# am start -a android.intent.action.MAIN -n com.example.hellojni/com.example.hellojni.HelloJni

Finally, look at your development device. HelloJni appears on the screen! What just happened? We have compiled, packaged, and deployed an official NDK sample application with Ant and SDK command-line tools. We will explore them more in a later part. We have also compiled our first native C library (also called a module) using the ndk-build command. This library simply returns a character string to the Java part of the application on request. Both sides of the application, the native and the Java one, communicate through the Java Native Interface. JNI is a standard framework that allows Java code to explicitly call native C/C++ code with a dedicated API. Finally, we have launched HelloJni on our device from an Android shell (adb shell) with the am Activity Manager command. The command parameters passed in step 8 come from the Android manifest: com.example.hellojni is the package name and com.example.hellojni.HelloJni is the main Activity class name concatenated to the main package.

<?xml version="1.0" encoding="utf-8"?>
<manifest package="com.example.hellojni"
          android:versionCode="1"
          android:versionName="1.0">
...
    <activity android:name=".HelloJni"
              android:label="@string/app_name">
...
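To make the JNI round-trip concrete, here is a hedged sketch closely modeled on what the hello-jni sample contains (the exact sample code may differ slightly): the C function name encodes the Java package and class, and the Java side declares the method as native and loads the library:

/* Native side (C), in jni/hello-jni.c: the name maps to com.example.hellojni.HelloJni.stringFromJNI() */
#include <jni.h>

jstring Java_com_example_hellojni_HelloJni_stringFromJNI(JNIEnv* env, jobject thiz)
{
    return (*env)->NewStringUTF(env, "Hello from JNI !");
}

// Java side, in HelloJni.java: declare the native method and load libhello-jni.so
public native String stringFromJNI();
static {
    System.loadLibrary("hello-jni");
}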
Automated build Because the Android SDK, NDK, and their open source bricks are not bound to Eclipse or any specific IDE, creating an automated build chain or setting up a continuous integration server becomes possible. A simple Bash script with Ant is enough to make it work! The HelloJni sample is a little bit... let's say rustic! So what about trying something fancier? The Android NDK provides a sample named San Angeles. San Angeles is a coding demo created in 2004 for the Assembly 2004 competition. It has later been ported to OpenGL ES and reused as a sample demonstration in several languages and systems, including Android. You can find more information by visiting one of the author's pages: http://jet.ro/visuals/4k-intros/san-angeles-observation/. Have a go hero – compiling the San Angeles OpenGL demo To test this demo, you need to follow the same steps: Go to the San Angeles sample directory. Generate the project files. Compile and install the final San Angeles application. Finally, run it. As this application uses OpenGL ES 1, AVD emulation will work, but may be somewhat slow! You may encounter some errors while compiling the application with Ant: The reason is simple: in the res/layout/ directory, the main.xml file is defined. This file usually defines the main screen layout in a Java application—the displayed components and how they are organized. However, when Android 2.2 (API Level 8) was released, the layout_width and layout_height enumerations, which describe the way UI components should be sized, were modified: FILL_PARENT became MATCH_PARENT. But San Angeles uses API Level 4. There are basically two ways to overcome this problem. The first one is selecting the right Android version as the target. To do so, specify the target when creating the Ant project files:

$ android update project -p . --target android-8

This way, the build target is set to API Level 8 and MATCH_PARENT is recognized. You can also change the build target manually by editing default.properties at the project root and replacing:

target=android-4

with the following line:

target=android-8

The second way is more straightforward: erase the main.xml file! Indeed, this file is in fact not used by the San Angeles demo, as only an OpenGL screen created programmatically is displayed, without any UI components. Target right! When compiling an Android application, always check carefully whether you are using the right target platform, as some features are added or updated between Android versions. A target can also dramatically change the size of your audience because of the multiple versions of Android in the wild... Indeed, targets are moving a lot, and fast, on Android! All these efforts are not in vain: it is just a pleasure to see this old-school 3D environment full of flat-shaded polygons running for the first time. So just stop reading and run it! Exploring Android SDK tools The Android SDK includes tools which are quite useful for developers and integrators. We have already briefly encountered some of them, including the Android Debug Bridge and the android command. Let's explore them deeper. Android debug bridge You may not have noticed it specifically since the beginning, but it has always been there, over your shoulder. The Android Debug Bridge is a multifaceted tool used as an intermediary between the development environment and emulators/devices. More specifically, ADB is: A background process running on emulators and devices to receive orders or requests from an external computer.
A background server on your development computer communicating with connected devices and emulators. When listing devices, the ADB server is involved. When debugging, the ADB server is involved. When any communication with a device happens, the ADB server is involved! A client running on your development computer and communicating with devices through the ADB server. That is what we have done to launch HelloJni: we got connected to our device using adb shell before issuing the required commands. The ADB shell is a real Linux shell embedded in the ADB client. Although not all standard commands are available, classical commands, such as ls, cd, pwd, cat, chmod, ps, and so on, are executable. A few specific commands are also provided, such as:

logcat: to display device log messages
dumpsys: to dump the system state
dmesg: to dump kernel messages

The ADB shell is a real Swiss Army knife. It also allows manipulating your device in a flexible way, especially with root access. For example, it becomes possible to observe applications deployed in their "sandbox" (see the directory /data/data) or to list and kill currently running processes. ADB also offers other interesting options; some of them are as follows:

pull <device path> <local path>: to transfer a file to your computer
push <local path> <device path>: to transfer a file to your device or emulator
install <application package>: to install an application package
install -r <package to reinstall>: to reinstall an application, if already deployed
devices: to list all Android devices currently connected, including emulators
reboot: to restart an Android device programmatically
wait-for-device: to sleep until a device or emulator is connected to your computer (for example, in a script)
start-server: to launch the ADB server communicating with devices and emulators
kill-server: to terminate the ADB server
bugreport: to print the whole device state (like dumpsys)
help: to get exhaustive help with all options and flags available

To ease the writing of issued commands, ADB provides facultative flags to specify before options:

-s <device id>: to target a specific device
-d: to target the current physical device, if only one is connected (or an error message is raised)
-e: to target the currently running emulator, if only one is connected (or an error message is raised)

The ADB client and its shell can be used for advanced manipulation of the system, but most of the time, it will not be necessary. ADB itself is generally used transparently. In addition, without root access to your phone, possible actions are limited. For more information, see http://developer.android.com/guide/developing/tools/adb.html. Root or not root. If you know the Android ecosystem a bit, you may have heard about rooted phones and non-rooted phones. Rooting a phone means getting root access to it, either "officially" while using development phones or using hacks with an end user phone. The main interest is to upgrade your system before the manufacturer provides updates (if any!) or to use a custom version (optimized or modified, for example, CyanogenMod). You can also do any possible (especially dangerous) manipulations that an Administrator can do (for example, deploying a custom kernel). Rooting is not an illegal operation, as you are modifying YOUR device. But not all manufacturers appreciate this practice, and it usually voids the warranty.
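For reference, a typical session using these options might look like the following sketch; the file names and paths are illustrative assumptions:

adb devices                                  # check which devices and emulators are attached
adb push clip.mp3 /sdcard/media/clip.mp3     # send a file to the device
adb pull /sdcard/media/clip.mp3 backup.mp3   # retrieve a file from the device
adb -e install -r MyProject-debug.apk        # reinstall a package on the running emulator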
Have a go hero – transferring a file to SD card from command line Using the information provided, you should be able to connect to your phone like in the good old days of computers (I mean a few years ago!) and execute some basic manipulations using a shell prompt. I propose you transfer a resource file by hand, like a music clip or a resource that you will be reading from a future program of yours. To do so, you need to open a command-line prompt and perform the following steps: Check if your device is available using adb from the command line. Connect to your device using the Android Debug Bridge shell prompt. Check the content of your SD card using the standard Unix ls command. Please note that ls on Android has a specific behavior, as it differentiates ls mydir from ls mydir/ when mydir is a symbolic link. Create a new directory on your SD card using the classic mkdir command. Finally, transfer your file by issuing the appropriate adb command. Project configuration tool The command named android is the main entry point when manipulating not only projects but also AVDs and SDK updates. There are a few options available, which are as follows: create project: This option is used to create a new Android project through the command line. A few additional options must be specified to allow proper generation:

-p: the project path
-n: the project name
-t: the Android API target
-k: the Java package, which contains the application's main class
-a: the application's main class name (Activity in Android terms)

For example:

$ android create project -p ./MyProjectDir -n MyProject -t android-8 -k com.mypackage -a MyActivity

update project: This is what we use to create Ant project files from an existing source. It can also be used to upgrade an existing project to a new version. The main parameters are as follows:

-p: the project path
-n: to change the project name
-l: to include an Android library project (that is, reusable code); the path must be relative to the project directory
-t: to change the Android API target

There are also options to create library projects (create lib-project, update lib-project) and test projects (create test-project, update test-project). I will not go into details here, as this is more related to the Java world. As for ADB, the android command is your friend and can give you some help:

$ android create project --help

The android command is a crucial tool to implement a continuous integration toolchain in order to compile, package, deploy, and test a project automatically, entirely from the command line. Have a go hero – towards continuous integration With the adb, android, and ant commands, you have enough knowledge to build a minimal automatic compilation and deployment script to perform some continuous integration; a sketch of such a script appears at the end of this section. I assume here that you have versioning software available and you know how to use it. Subversion (also known as SVN) is a good candidate and can work locally (without a server). Perform the following operations: Create a new project by hand using the android command. Then, create a Unix or Cygwin shell script and assign it the necessary execution rights (chmod command). All the following steps have to be scribbled in it. In the script, check out the sources from your versioning system (for example, using an svn checkout command) on disk. If you do not have a versioning system, you can still copy your own project directory using Unix commands. Build the application using ant. Do not forget to check command results using $?. If the returned value is different from 0, it means an error occurred.
Additionally, you can use grep or some custom tools to check potential error messages. If needed, you can deploy resource files using adb. Install the application on your device or on the emulator using ant, as shown previously. You can even try to launch your application automatically and check the Android logs (see the logcat option in adb). Of course, your application needs to make use of logs! A free monkey to test your app! In order to automate UI testing on an Android application, an interesting utility that is provided with the Android SDK is MonkeyRunner, which can simulate user actions on a device to perform some automated UI testing. Have a look at http://developer.android.com/guide/developing/tools/monkeyrunner_concepts.html. To favor automation, a single Android shell statement can be executed from the command line as follows:

adb shell ls /sdcard/

To execute a command on an Android device and retrieve its result back on your host shell, execute the following command:

adb shell "ls /notexistingdir/ 1> /dev/null 2>&1; echo \$?"

Redirection is necessary to avoid polluting the standard output. The escape character before $? is required to avoid early interpretation by the host shell. Now you are fully prepared to automate your own build toolchain!
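As promised, here is a hedged sketch of what such a continuous integration script could look like; the repository URL, project name, and package are assumptions, and error handling is kept to a bare minimum:

#!/bin/sh
# Minimal build-and-deploy script (assumes the SDK/NDK tools and svn are on the PATH)
svn checkout svn://localhost/myproject MyProject || exit 1
cd MyProject || exit 1
# Regenerate the Ant build files for the checked-out sources
android update project -p . --target android-8 || exit 1
# Compile native code first (only needed if the project contains a jni/ directory)
ndk-build || exit 1
# Compile, package, and deploy to the connected device or emulator
ant install || exit 1
# Launch the application and echo the device-side exit code back to the host
adb shell "am start -a android.intent.action.MAIN -n com.mypackage/com.mypackage.MyActivity; echo \$?"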
Getting Ready for RubyMotion

Packt
22 Aug 2013
13 min read
(For more resources related to this topic, see here.)

How can I develop an iOS application?

To develop iOS applications, various third-party frameworks are available apart from the Apple libraries. If we broadly categorize the ways in which we can create iOS applications, we can divide them into three.

Native apps using Objective-C

This is the most standard way to build your application: interacting with the Apple APIs and writing apps in Objective-C. Applications made using the native Apple APIs can use all possible device capabilities, and are relatively more reliable and high performing (although the topic of performance is debatable, depending on the quality of the developer's code).

Mobile web applications

Mobile web applications are simple web applications extended for mobile web browsers, which can be created using standard web technologies such as HTML5. For example, if we browse to http://www.twitter.com in a mobile browser, it is redirected to http://mobile.twitter.com, which renders the corresponding views for mobile devices. These applications are easy to create, but the downside is that they have limited access to user data (for example, the phonebook) and hardware (for example, the camera).

Hybrid applications

These applications sit somewhere between mobile web apps and native applications. They are created using common web technologies such as HTML5 and JavaScript, and have the ability to use device capabilities via their homegrown APIs. Some of the popular hybrid frameworks include Rhomobile and PhoneGap.

If we compare the speed of development and the user experience, it can be summed up as follows: mobile web apps can be created very quickly, but we have to compromise on user experience; native apps using Objective-C offer a good user experience, but they have a very steep initial learning curve for web developers. RubyMotion is good news for both users and developers: users get the experience of a native application, and developers are able to develop applications rapidly in comparison to applications developed using Objective-C. Let's now learn about RubyMotion.

What is RubyMotion?

RubyMotion is a toolchain that allows developers to develop native iOS applications using the Ruby programming language. RubyMotion acts as a compiler that interacts with the iOS SDK (Software Development Kit). This gives us enormous power to make use of the Apple libraries; once the application has compiled and loaded, the device has no idea whether it's an application made using Objective-C or RubyMotion. RubyMotion is a product of HipByte, founded by Laurent Sansonetti.

While developing applications with RubyMotion using Ruby, you always have access to the iOS SDK classes. This gives you the benefit of even mixing Objective-C and Ruby code, as RubyMotion implements Ruby on top of the Objective-C runtime and iOS Foundation classes. The code written in RubyMotion is fully compiled into machine code, so an application created with RubyMotion is as fast as one created using Objective-C.
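As a flavor of what this looks like in practice, here is a minimal, hypothetical RubyMotion app delegate written in plain Ruby that calls an iOS SDK class directly; the message text is illustrative, and the overall shape mirrors the stock RubyMotion template rather than anything specific to this article:

class AppDelegate
  def application(application, didFinishLaunchingWithOptions: launchOptions)
    # Plain Ruby calling an iOS SDK class (UIAlertView) directly
    alert = UIAlertView.new
    alert.message = "Hello from RubyMotion!"
    alert.show
    true
  end
end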
Why RubyMotion?

So far we have learned what RubyMotion is, but the question that comes to mind is: why should we use it? There are many reasons why RubyMotion is a good choice for building robust iOS apps. The following sections detail a few that we think matter the most.

If you are not an Objective-C fan

For a newbie developer, Objective-C is an arduous affair. It is complicated to code in; even to do a simple thing, we have to write many lines of code. Though it is a powerful language and one of the best object-oriented ones available, it is time consuming, and the learning curve is very steep. On the other hand, Ruby is more expressive, simple, and productive in comparison to Objective-C. Because of its simplicity, developers can shift their focus onto problem solving rather than spending time on trivial stuff, which is taken care of by Ruby itself. In short, we can say RubyMotion allows us to use the power of Objective-C with the simplicity of Ruby.

Ruby classes used in RubyMotion are inherited from Objective-C classes. If you are familiar with the concept of object-oriented programming, you can understand its power: it means we can directly use Apple iOS SDK classes from our RubyMotion code.

It is not a bridge

RubyMotion apps get direct access to the iOS SDK APIs, which means the size and performance of an application created using RubyMotion are comparable to one created using Objective-C. RubyMotion implements Ruby on top of the Objective-C runtime and iOS Foundation classes, and uses a state-of-the-art static compiler based on Low Level Virtual Machine (LLVM), which converts the Ruby source code into blazing fast machine code. The original source code is never present in the application bundle. A typical application weighs less than 1 MB, though the size can increase depending on the use case.

Managed memory

One of the key features of RubyMotion is that it takes care of memory management. Just like ARC (Automatic Reference Counting) with Xcode 4.4 and above, we don't have to take the pain of releasing memory once an object is no longer used. RubyMotion does the magic, and we don't need to think about it; it is handled on its own.

Terminal-based workflow

RubyMotion has a terminal-based workflow; from the creation of the application to deployment, everything can be done through the terminal. If you are used to working in terminals, you know this adds to speedier development.

Easy debugging with REPL

The terminal window where you run Rake also gives you the option to debug with REPL (Read Evaluate Print Loop), which lets you use Ruby expressions that are evaluated on the spot, with the results reflected in the simulator while the application is still running. The ability to make live changes to the user interface and internal application data structures at runtime is extremely useful for testing and troubleshooting issues with the application, as it saves a lot of time and is much faster than a traditional edit-compile-run loop.

It is extendable

We can use RubyMotion-flavored gems easily by just adding them to the Rakefile. What are RubyMotion-flavored gems? We can't use all the gems that are available for Ruby right now, but a lot of gems have been developed specifically for RubyMotion. As the RubyMotion developer community expands, so will its gem bouquet, and this will make our application development rapid. Third-party Objective-C libraries can also be used in a RubyMotion project with ease; RubyMotion supports CocoaPods, a dependency manager for Objective-C libraries, which makes this process a bit easier.

Debugging and testing

RubyMotion has a console-based, inbuilt interactive debugger for troubleshooting issues both on the simulator and on a device, using GDB (GNU Debugger). GDB is extremely powerful on its own, and RubyMotion uses it for debugging the compiled Ruby code. RubyMotion projects are also fit for Test Driven Development (TDD): we can write unit tests for our code from the beginning, and we can use Behavior Driven Development (BDD), which is integrated into every project.
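As an illustration of that built-in testing support, the default spec generated in a new project looks something like the following; this is a sketch modeled on the stock template, and the app name and window expectation are template defaults rather than anything specific to this article:

describe "Application 'Hello'" do
  before do
    @app = UIApplication.sharedApplication
  end

  # The stock template simply asserts that the app has one window
  it "has one window" do
    @app.windows.size.should == 1
  end
end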
Pop quiz

Q1. How can you distinguish between an iOS application created with RubyMotion and one created with Objective-C?

1. You can distinguish based on the user experience of the application.
2. You can distinguish based on the performance of the application.
3. You can't distinguish based on the user experience or performance of the application.

Solution: If your answer was option 3, you were right. We can't distinguish between applications created with RubyMotion and those created with Objective-C, as the user experience and performance are similar.

Q2. How can we extend RubyMotion?

1. We can use Objective-C libraries.
2. We can use all Ruby gems.
3. We can use RubyMotion-flavored gems.
4. We can't use any other libraries.

Solution: If your answer was options 1 and 3, you were right. Yes, we can use Objective-C libraries and also RubyMotion-flavored gems.

RubyMotion installation – furnish your environment

Now that we have had a good introduction to RubyMotion, let's set up our development environment; but before that, let's run through some of the prerequisites.

Prerequisites for RubyMotion

- You need a Mac: we can't develop iOS applications with RubyMotion on any other operating system, so a Mac is essential.
- OS X 10.6 or higher: RubyMotion requires a Mac running OS X 10.6 or higher; OS X 10.7 Lion is highly recommended.
- Ruby: the Ruby framework comes preinstalled with Mac OS X. If you have multiple versions of Ruby, we recommend that you use Ruby Version Manager (RVM). For more details, visit https://rvm.io/.
- Xcode: next we need to install Xcode, which includes the iOS SDK, is developed by Apple, and is essential for developing iOS applications. It can be downloaded from the App Store for free. It also includes the iPhone/iPad simulator, which will be used for testing our application.
- Command Line Tools: after installing the Xcode toolchain, we need to install the Command Line Tools package, which is necessary for RubyMotion. To confirm that the Command Line Tools are installed with your Xcode, open Xcode from your Applications folder, go to the Preferences window, and click on the Downloads tab. You should see the Command Line Tools package in the list. If it is not yet installed, make sure to click on the Install button. If you have an old version of Xcode, run the following command in the terminal:

sudo xcode-select -switch /Applications/Xcode.app/Contents/Developer

This command sets up the default Xcode path.

Installing RubyMotion

RubyMotion installation is really simple and takes no time at all. RubyMotion is a commercial product that you need to purchase from www.rubymotion.com. Once purchased, you will receive your unique license key and installer. RubyMotion installation is a five-step procedure:

1. Once you have received the package, run the RubyMotion installer.
2. Read and accept the EULA (End User License Agreement).
3. Enter the license number you have received.
4. Time for a short break: it will take a few minutes for RubyMotion to be downloaded and installed on your system. You can relax for some time.
5. Yippee!! There is no step 5.

And that's how quick it is to start working with RubyMotion.

Update RubyMotion

RubyMotion is a fast-moving framework, and we need to upgrade it whenever there is a new release available.
Upgrading RubyMotion is again really simple: with one command, you can easily upgrade to the latest version.

sudo motion update

You need to be connected to the Internet for the upgrade to happen. If you want to work with an old version, you can downgrade using the following command:

sudo motion update --force-version=1.2

But we recommend using the latest version.

How do we check we've done everything correctly?

Now that we have installed our copy of RubyMotion, it's good practice to confirm which version has been installed. To do this, go to the terminal and run the following:

motion -v

This command outputs the RubyMotion version installed on your machine. If you get an error, you need to reinstall.

Pick your own editor – you are not forced to use Xcode

With RubyMotion, you are not forced to use Xcode. As every developer is more comfortable with a specific editor, you are free to choose what you like. However, we recommend the following editors for Ruby development:

- RubyMine
- Vim
- TextMate
- Sublime Text
- Emacs

RubyMine now provides full support for RubyMotion projects.

How to get help

If you are facing an issue, the preferred way to find a solution is to discuss it in the RubyMotion Google group (https://groups.google.com/forum/?fromgroups#!forum/rubymotion), where you can interact with fellow developers from the community and get a speedy resolution. Sometimes you might not get a precise response from the RubyMotion group. Not to worry, RubyMotion support is there to rescue you. If you have a feature request, an issue, or simply want to ask a question, you can log a support ticket, right from the command line, using the following command:

$ motion support

This will open a new window in your browser. You can fill in and submit the form with your query; your RubyMotion license key, email address, and environment details will be added automatically. The RubyMotion community is growing at a very fast pace, and in a short span of time a lot of popular RubyMotion gems have been created by developers.

FAQs

We believe no question is silly. By now you will have many questions in your mind regarding RubyMotion. We have tried to answer a few of the most frequently asked questions (FAQs) related to the topics covered so far in this section. Here are a few of them:

Q1. Are the applications created with RubyMotion in keeping with Apple guidelines?

Answer: Yes, RubyMotion strongly follows the review guidelines provided by Apple. Many applications created using RubyMotion are already available in the App Store.

Q2. Will my RubyMotion application work on a BlackBerry, Android, or Windows phone?

Answer: No, applications created using RubyMotion are only for iOS devices; it is an alternative to programming in Objective-C. For a single-source multi-device application, we would recommend hybrid frameworks such as Rhomobile, PhoneGap, and Titanium. For Android development using Ruby, you can try Ruboto.

Q3. Can I share an application with someone?

Answer: Yes and no. With the Apple Developer Program membership, you can share your application for testing purposes only, with a maximum of 100 devices, where each device has to be registered individually with Apple. You cannot distribute your application on the App Store for testing. Once you have finished developing your application and are ready to ship, you can submit it to Apple for an App Store review.

Q4. Can I use Ruby gems?

Answer: Yes and no.
No, because we can't use the normal Ruby gems that you generally use in your Ruby on Rails projects; and yes, because you can use gems that are specifically developed for RubyMotion, and there are already many such gems.

Q5. Will my application work on iPad and iPod Touch?

Answer: Absolutely. Your application will work on all iOS devices, namely iPhone, iPad, and iPod Touch.

Q6. Is Ruby allowed in the App Store?

Answer: The App Store can't distinguish between applications made using Objective-C and those made using RubyMotion. So, no worries, our RubyMotion applications are fit for the App Store.

Q7. Can I use third-party Objective-C libraries?

Answer: Certainly. Third-party Objective-C libraries can be used in your project. RubyMotion provides integration with the CocoaPods dependency manager, which helps reduce the hassle. You can also use C/C++ code, provided that you wrap it in Objective-C classes and methods.

Q8. Is RubyMotion open source?

Answer: RubyMotion as a toolchain is open source (available on GitHub). The closed-source part is the Ruby runtime, which is, however, very similar to the MacRuby runtime (which is open source).

Summary

Let's review all that we have learned so far. We first discussed the different ways to create iOS applications. Then we got started with RubyMotion and discussed why to use it. In the last section, we learned how to get started with RubyMotion and which editors fit with it. Now that we have our RubyMotion framework up and running, the next obvious task is to create our very first application, the most rudimentary Hello World application.

Resources for Article:

Further resources on this subject:
- Introducing RubyMotion and the Hello World app [Article]
- iPhone Applications Tune-Up: Design for Performance [Article]
- Introducing Xcode Tools for iPhone Development [Article]
How to Interact with a Database using Rhom

Packt
28 Jul 2011
5 min read
(For more resources on this topic, see here.)

What is ORM?

ORM connects business objects and database tables to create a domain model where logic and data are presented in one wrapping. In addition, ORM classes wrap our database tables to provide a set of class-level methods that perform table-level operations. For example, we might need to find the employee with a particular ID. This is implemented as a class method that returns the corresponding Employee object. In Ruby code, this looks like:

employee = Employee.find(1)

This code returns the employee object whose ID is 1.

Exploring Rhom

Rhom is a mini Object Relational Mapper (ORM) for Rhodes. It is similar to another ORM, Active Record in Rails, but with limited features. Interaction with the database is simplified, as we don't need to worry about which database is being used by the phone: iPhone uses SQLite, and BlackBerry uses HSQL or SQLite, depending on the device. Now we will create a new model and see how Rhom interacts with the database.

Time for action – Creating a company model

We will create a model, company. In addition to the default ID attribute created by Rhodes, it will have one attribute, name, which stores the name of the company.

1. Go to the application directory and run the following command:

$ rhogen model company name

which generates the following files:

[ADDED] app/Company/index.erb
[ADDED] app/Company/edit.erb
[ADDED] app/Company/new.erb
[ADDED] app/Company/show.erb
[ADDED] app/Company/index.bb.erb
[ADDED] app/Company/edit.bb.erb
[ADDED] app/Company/new.bb.erb
[ADDED] app/Company/show.bb.erb
[ADDED] app/Company/company_controller.rb
[ADDED] app/Company/company.rb
[ADDED] app/test/company_spec.rb

Note the number of files generated by the rhogen command.

2. Now, add a link on the index page so that the model can be browsed from the homepage. Add the link to the index.erb file for all phones except BlackBerry; if the target phone is a BlackBerry, add the link to the index.bb.erb file inside the app folder (we have separate views for BlackBerry):

<li>
  <a href="<%= url_for :controller => :Company %>"><span class="title">Company</span><span class="disclosure_indicator"/></a>
</li>

A Company link is now created on the homepage of our application.

3. Build the application and add some dummy data; for example, the three companies Google, Apple, and Microsoft.

What just happened?

We just created a model, company, with an attribute, name; made a link to access it from our homepage; and added some dummy data to it. We added a few company names because this will help us in the next section.
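For reference, the generated model file (app/Company/company.rb) is essentially an empty class that pulls in Rhom. The following is a sketch based on the standard Rhodes template rather than code reproduced from this article:

class Company
  include Rhom::PropertyBag

  # Attributes such as 'name' are stored schema-less in the property bag.
  # Uncomment the following line to enable synchronization for this model.
  # enable :sync
end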
Association

Associations are connections between two models that make common operations simpler and easier in your code. We will create an association between the Employee model and the Company model.

Time for action – Creating an association between employee and company

The relationship between an employee and a company can be defined as: "An employee can be in only one company, but one company may have many employees." So now we will add an association between the employee and the company model. Once we have made entries for companies in the company model, we will be able to see the company select box populated in the employee form.

1. The relationship between the two models is defined in the employee.rb file as:

belongs_to :company_id, 'Company'

Here, Company corresponds to the model name and company_id corresponds to the foreign key. Since at present we have a company field instead of company_id in the employee model, we rename company to company_id.

2. To retrieve all the companies stored in the Company model, add this line to the new action of employee_controller:

@companies = Company.find(:all)

The find command is provided by Rhom and is used to form a query and retrieve results from the database. Company.find(:all) returns all the values stored in the Company model as an array of objects.

3. Now, edit the new.erb and edit.erb files present inside the Employee folder:

<h4 class="groupTitle">Company</h4>
<ul>
  <li>
    <select name="employee[company_id]">
      <% @companies.each do |company| %>
        <option value="<%= company.object %>" <%= "selected" if company.object == @employee.company_id %>><%= company.name %></option>
      <% end %>
    </select>
  </li>
</ul>

As you can observe in the code, we have created a select box for selecting a company. The variable @companies is an array of objects, each holding two fields: the company name and its ID. We loop over the array and show all the companies it contains. The companies are thus populated in the select box, which is displayed in the employee form.

What just happened?

We just created an association between the employee and company models, and used this association to populate the company select box in the employee form. As of now, Rhom has fewer features than other ORMs such as Active Record, and there is only limited support for database associations.
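Beyond find(:all), Rhom's find supports a few more query forms. The following sketch shows the common ones under the assumption of the Company model above; some_object_id is a placeholder variable, and exact option support may vary across Rhodes versions:

# Fetch every company, as used in the controller above
companies = Company.find(:all)

# Fetch a single record by its object ID (placeholder variable)
company = Company.find(some_object_id)

# Fetch only the records whose attributes match the given conditions
matches = Company.find(:all, :conditions => {'name' => 'Google'})

# Fetch just the first matching record
first_match = Company.find(:first, :conditions => {'name' => 'Google'})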
BatteryMonitor Application

Packt
30 Nov 2012
9 min read
(For more resources related to this topic, see here.)

Overview of the technologies

The BatteryMonitor application makes use of two very important frameworks to allow drawing of graphics to the iOS device's view, as well as composing and sending of e-mail messages directly within the application. In this article, we will use the Core Graphics framework, which will be responsible for creating our battery gauge, allowing the contents to be filled based on the total amount of battery life remaining on the device. We will then use the MessageUI framework, which will be responsible for composing and sending e-mails whenever the application determines that the battery level has fallen below the 20 percent threshold. This is all handled directly within our app.

We will make use of the UIDevice class to gather the device information for our iOS device. This class enables you to recover device-specific values, including the model of the iOS device being used, the device name, and the OS name and version. We will then use the MFMailComposeViewController class to open the e-mail dialog directly within the application. The information that you can retrieve from the UIDevice class is shown in the following table:

- System name: Returns the name of the operating system currently in use. Since all current-generation iOS devices run the same OS, only one will be displayed; that is, iOS 5.1.
- System version: Lists the firmware version currently installed on the iOS device; that is, 4.3, 4.31, 5.01, and so on.
- Unique identifier: The unique identifier of the iOS device is a hexadecimal number guaranteed to be unique for each iOS device, generated by applying an internal hash to several of its hardware specifiers, including the device's serial number. This unique identifier is used to register iOS devices at the iOS portal for provisioning the distribution of software apps. Apple is currently phasing out and rejecting apps that access the unique device identifier on an iOS device, to address issues with piracy, and has suggested that you should create an identifier that is unique to your app instead.
- Model: Returns a string that describes the platform of the iOS device; that is, iPhone, iPod Touch, or iPad.
- Name: Represents the name of the iOS device, as assigned by the user within iTunes. This name is also used to create the localhost name for the device, particularly when networking is used.

For more information on the UIDevice class, you can refer to the Apple Developer Documentation located at https://developer.apple.com/library/ios/#DOCUMENTATION/UIKit/Reference/UIDevice_Class/Reference/UIDevice.html.
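As a quick illustration of the classes just mentioned, the following hedged Objective-C sketch reads a few UIDevice values and switches on battery monitoring; the 0.2 threshold constant is our own, mirroring the 20 percent figure above, and is not part of the API:

// Reading device information and battery state through UIDevice
UIDevice *device = [UIDevice currentDevice];
NSLog(@"Model: %@, System: %@ %@", device.model, device.systemName, device.systemVersion);

// Battery values are only meaningful once monitoring is enabled
device.batteryMonitoringEnabled = YES;
float level = device.batteryLevel;    // 0.0 to 1.0, or -1.0 if unknown
if (level >= 0.0f && level < 0.2f) {  // 0.2f mirrors the 20 percent threshold above
    // Trigger the e-mail alert (see the MessageUI sketch later in this article)
}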
Building the BatteryMonitor application

Monitoring battery levels is a common part of our everyday lives. The battery indicator on the iPhone/iPad lets us know when it is time to recharge our iOS device. In this section, we will look at how to create an application that can run on an iOS device to enable us to monitor its battery levels, and then send an e-mail alert when the level falls below the threshold.

We first need to create our BatteryMonitor project. It is very simple to create this in Xcode; just follow the steps listed here:

1. Launch Xcode from the /Xcode4/Applications folder.
2. Choose Create a new Xcode project, or File | New Project.
3. Select the Single View Application template from the list of available templates.
4. Select iPad from under the Device Family drop-down list.
5. Ensure that the Use Storyboard checkbox has not been selected.
6. Select the Use Automatic Reference Counting checkbox.
7. Ensure that the Include Unit Tests checkbox has not been selected.
8. Click on the Next button to proceed with the next step in the wizard.
9. Enter BatteryMonitor as the name for your project, then click on the Next button to proceed with the next step of the wizard.
10. Specify the location where you would like to save your project. Then, click on the Save button to continue and display the Xcode workspace environment.

Now that we have created our BatteryMonitor project, we need to add the MessageUI framework to the project. This will enable us to send e-mail alerts when the battery level falls below the threshold.

Adding the MessageUI framework to the project

As mentioned previously, we need to add the MessageUI framework to our project to allow us to compose and send an e-mail directly within our iOS application whenever we determine that our device is running below the allowable percentage. To add the MessageUI framework, follow the simple steps outlined here (a sketch of the composing code follows the UI steps below):

1. Click and select your project from Project Navigator.
2. Then, select your project target from under the TARGETS group.
3. Select the Build Phases tab.
4. Expand the Link Binary With Libraries disclosure triangle.
5. Finally, use + to add the library you want; select MessageUI.framework from the list of available frameworks.

Now that we have added MessageUI.framework to our project, we need to start building the user interface that will be responsible for allowing us to monitor the battery levels of our iOS device, as well as handle sending out e-mails when the battery level falls below the agreed threshold.

Creating the main application screen

The BatteryMonitor application doesn't do anything at this stage; all we have done is create the project and add the MessageUI framework to handle the sending of e-mails when the battery level falls below the threshold. We now need to start building the user interface for our BatteryMonitor application. This screen will consist of a View Controller, and some controls to handle setting the number of bars to be displayed, as well as whether monitoring of the battery should be enabled or disabled.

1. Select the ViewController.xib file from Project Navigator.
2. Set the value of Background of the View Controller to Black Color.
3. Next, from Object Library, select and drag a (UILabel) Label control and add it to our view. Modify the Text property of the control to Battery Status:, the Font property to System 42.0, and the Alignment property to Center.
4. Next, from Object Library, select and drag another (UILabel) Label control and add it to our view, directly underneath the Battery Status label. Modify the Text property of the control to Battery Level:, the Font property to System 74.0, and the Alignment property to Center.

Now that we have added our label controls to our View Controller, our next step is to add the rest of the controls that make up our user interface. So let's proceed to the next section.
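Before wiring up the rest of the UI, here is a minimal, hedged sketch of how the alert e-mail might be composed with the MessageUI framework we just linked. The method name, subject, and body strings are illustrative, and the presenting view controller must adopt the MFMailComposeViewControllerDelegate protocol:

#import <MessageUI/MessageUI.h>

// Compose the alert e-mail in-app; invoked when the level drops below the threshold
- (void)sendBatteryAlert
{
    if (![MFMailComposeViewController canSendMail]) {
        return; // the device may have no mail account configured
    }
    MFMailComposeViewController *composer = [[MFMailComposeViewController alloc] init];
    composer.mailComposeDelegate = self; // self adopts MFMailComposeViewControllerDelegate
    [composer setSubject:@"Battery Alert"];
    [composer setMessageBody:@"The battery level has fallen below 20 percent." isHTML:NO];
    [self presentViewController:composer animated:YES completion:nil];
}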
Adding the Enable Monitoring UISwitch control

Our next step is to add a switch control to our View Controller; this will be responsible for determining whether or not we monitor our battery levels and send out alert e-mails whenever the battery life on our iOS device is running low. This can be achieved by following these simple steps:

1. From Object Library, select and drag a (UILabel) Label control and add it to the bottom right-hand corner of our View Controller. Modify the Text property of the control to Enable Monitoring:, the Font property to System 17.0, and the Alignment property to Left.
2. Next, from Object Library, select and drag a (UISwitch) Switch control to the right of the Enable Monitoring label.
3. Next, from the Attributes Inspector section, change the value of State to On. Then, change the value of On Tint to Default.

Now that we have added our Enable Monitoring switch control to our BatteryMonitor View Controller, our next step is to add the Send E-mail Alert switch, which will be responsible for sending out e-mail alerts if the application determines that the battery level has fallen below our threshold. So, let's proceed with the next section.

Adding the Send E-mail Alert UISwitch control

Now, we need to add another switch control to our view, which will be responsible for sending e-mail alerts. This can be achieved by following these simple steps:

1. From Object Library, select and drag another (UILabel) Label control and add it underneath our Enable Monitoring label. Modify the Text property of the control to Send E-mail Alert:, the Font property to System 17.0, and the Alignment property to Left.
2. Next, from Object Library, select and drag a (UISwitch) Switch control to the right of the Send E-mail Alert label.
3. Next, from the Attributes Inspector section, change the value of State to On. Then, change the value of On Tint to Default.

To duplicate a UILabel and/or UISwitch control and have it retain the same attributes, you can use the keyboard shortcut Command + D, and then update the Text label for the newly added control.

Now that we have added our Send E-mail Alert switch to our BatteryMonitor View Controller, our next step is to add the Fill Gauge Levels switch, which will be responsible for filling our battery gauge when it is set to ON.

Adding the Fill Gauge Levels UISwitch control

Now, we need to add another switch control to our view, which will determine whether our gauge should be filled to show the amount of battery remaining. This can be achieved by following these simple steps:

1. From Object Library, select and drag another (UILabel) Label control and add it underneath our Send E-mail Alert label. Modify the Text property of the control to Fill Gauge Levels:, the Font property to System 17.0, and the Alignment property to Left.
2. Next, from Object Library, select and drag a (UISwitch) Switch control to the right of the Fill Gauge Levels label.
3. Next, from the Attributes Inspector section, change the value of State to On. Then, change the value of On Tint to Default.

Now that we have added our Fill Gauge Levels switch control to our BatteryMonitor View Controller, our next step is to add the Increment Bars stepper, which will be responsible for increasing the number of bar cells within our battery gauge.
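Although the article stops before wiring these switches up, a hedged sketch of the action methods they would typically trigger might look like the following; the method names and the emailAlertsEnabled property are our own invention for illustration:

// Hypothetical actions the switches could be wired to in Interface Builder
- (IBAction)enableMonitoringChanged:(UISwitch *)sender
{
    // Toggle UIDevice battery monitoring together with the switch
    [UIDevice currentDevice].batteryMonitoringEnabled = sender.isOn;
}

- (IBAction)sendEmailAlertChanged:(UISwitch *)sender
{
    self.emailAlertsEnabled = sender.isOn; // assumed BOOL property on the view controller
}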
Tour of Xcode

Packt
06 Feb 2015
13 min read
In this article, written by Jayant Varma, the author of Xcode 6 Essentials, we shall look closely at Xcode, as this is the tool you will use a great deal for all aspects of your app development for Apple devices. It is a good idea to be familiar with the interface, its sections, the shortcut keys, and so on.

(For more resources related to this topic, see here.)

Starting Xcode

Xcode, like many other Mac applications, is found in the Applications folder or the Launchpad. On starting Xcode, you will be greeted with the launch screen, which offers some entry points for working with Xcode. Mostly, you will select Create a new Xcode project, or Check out an existing project if you have an existing project to continue working on. Xcode remembers what it was doing last, so if you had a project or file open, it will open those windows again.

Creating a new project

After selecting the Create a new project option, we are guided by a wizard that helps us get started.

Selecting the project type

The first step is to select what type of project you want to create. At the moment, there are two distinct types of projects, mobile (iOS) or desktop (OS X), and within each of those types you can select the kind of project you want. The templates used when the selected type of project is created are self-sufficient; that is, when the Run button is pressed, the app compiles and runs. It might do nothing, as this is a minimalistic template. On selecting the type of project, we can move to the next step.

Setting the project options

This step allows us to set the project options, namely the application name, the organization name, the identifier, the language, and the devices to support. In the past, the language was always set to Objective-C; with Xcode 6, however, there are two options: Objective-C and Swift.

Setting the project properties

On creation, the main screen is displayed. Here it offers the option to change other details related to the application, such as the version number and build. It also allows you to configure the team ID and the certificates used for signing the application, whether for testing on a mobile device or for distribution to the App Store, and it allows you to set compatibility with earlier versions. The orientation, app icons, splash screens, and so on are also set from this screen. If you want to set these up later in the project, that is fine; this screen can be accessed at any time and does not stop you from developing. It does need to be set prior to deploying the app on a device or creating an App Store ready application.

Xcode overview

Let us have a look at the Xcode interface to familiarize ourselves with it, as this will help improve productivity when building your application. The top section, immediately following the traffic lights (window chrome), displays the Play and Stop buttons, which allow the project to be run and stopped. The breadcrumb toolbar next to them displays the project-specific settings with respect to the product and the target; for an iOS project, this could be a particular simulator for iPhone, iPad, and so on, or a physical device (number 5 in the list of areas below). Just under this are vertical areas that form the main content area, with all the files, editors, UI, and so on. These can be displayed or hidden as required, and can be stacked vertically or horizontally.
The distinct areas in Xcode are as follows:

- Project navigation (number 1)
- Editor and assistant editor (number 2 and number 3)
- Utility/inspector (number 4)
- The toolbar (number 5 and number 6)

These sections can be switched on and off (shown or hidden) as required, to make space for other sections or to gain more screen space to work with.

Sections in Xcode

The project section

The project navigation section has three subsections, the topmost being the project toolbar, which has eight icons. The next subsection contains the project files and all the assets required for the project. The bottommost subsection consists of recently edited files and filters. You can use the keyboard shortcuts CMD + 1...8 to access these areas quickly.

The eight areas available under project navigation are key, and for a beginner to Xcode this can be a bit daunting. When you run the project, the current section might change and display another, where you might wonder how to get back to the project (file) navigator. Getting familiar with these is always helpful, and the easiest way to navigate between them is with the CMD + 1..8 keys:

- Project navigator (CMD + 1): This displays all of the files, folders, assets, frameworks, and so on that are part of this project. It is displayed as a hierarchical view and is the way the majority of developers access their files, folders, and so on.
- Symbol navigator (CMD + 2): This displays all of the classes, along with their members and methods. This is the easiest way to navigate quickly to a method/function or attribute/property.
- Search navigator (CMD + 3): This allows you to search the project for a particular match. It is quite useful for finding and replacing text.
- Issues navigator (CMD + 4): This displays the warnings and errors that occur while typing your code or on building and running it. It also displays the results of the static analyzer.
- Tests navigator (CMD + 5): This displays the tests present in your code, either added by yourself or the default ones created with the project.
- Debug navigator (CMD + 6): This displays information about the application when you run it. It has some amazingly detailed information on CPU usage, memory usage, disk usage, threads, and so on.
- Breakpoint navigator (CMD + 7): This displays all the breakpoints in your project, from all files. It also allows you to create exception and symbolic breakpoints.
- Log navigator (CMD + 8): This displays a log of all actions carried out, namely compiling, building, and running. This is most useful for examining the results of automated builds.

The editor and assistant editor sections

The second area contains the editor and assistant editor sections. These display the code, the XIB (as appropriate), storyboard files, device previews, and so on. Each of the subsections has a jump bar at the top that relates to files, allows navigating back and forth between files, and displays the location of the file in the workspace. To the right of this is a mini issues navigator that displays all warnings and errors. In the case of the assistant editors, it also displays two buttons: one to add a new assistant editor area and another to close it.

Source code editors

While we are looking at the interface, it is worth noting that the Xcode code editor is a very advanced editor with a lot of features that are now seen as standard in many text editors.
Some of the features that make working with Xcode easier are as follows:

- Code folding: This feature helps to hide code at points such as function declarations, loops, matching braces, and so on. When a function or portion of code is folded, it is hidden from view, allowing you to view other areas of the code that would not be visible unless you scrolled.
- Syntax highlighting: This is one of the most useful features, as it helps you, the developer, to visually differentiate, at a glance, your source code from variables, constants, and strings. Xcode has syntax highlighting for a couple of languages, as mentioned earlier.
- Context help: This is one of the best features. When you hover over a word in the source code with OPT pressed, the word shows a dotted underline and the cursor changes to a question mark. When you click on a word with the dotted underline and question-mark cursor, a popup is displayed with details about that word, and all instances of that word in the file are highlighted. The popup provides as much information as is available: if the word is a variable or function that you added to the code, it displays the name of the file where it was declared; if it is a word contained in the Apple libraries, it displays the description and other additional details.
- Context jump: This is another cool feature that allows jumping to the point of declaration of a word. This is achieved by clicking on a word while keeping the CMD button pressed. In many cases, this is mainly helpful for seeing how a function is declared and what parameters it expects. It can also be useful for getting information on other enumerators and constants used with that function. The jump could be within the same file you are editing, or to the header file where the symbol is declared.
- Edit all in scope: This is a cool feature that lets you edit all instances of a word together, rather than using search and replace. A typical scenario: you want to change the name of a variable and ensure that all instances used in the file are changed, but not the ones that occur in text; you can use this option to change them quickly.
- Catching mistakes with fix-it: This is another cool feature in Xcode that will save you a lot of time and hassle. As you type, Xcode keeps analyzing the code, looking for errors. If you have declared a variable and not used it in your code, Xcode immediately draws attention to it, suggesting that it is an unused variable. However, if it was supposed to be a pointer and you declared it without the *, Xcode immediately flags an error that the interface type cannot be statically allocated, and offers a fix-it solution of inserting *, with a greyed * character showing where it will be added. This helps the developer fix commonly overlooked issues such as missing semicolons, missing declarations, or misspelled variable names.
- Code completion: This is the bit that makes writing code so much easier: type in a few letters of a function name, and Xcode pops up a list of functions, constants, methods, and so on that start with those letters, displaying all the required parameters (as applicable), including the return type. When one is selected, Xcode adds token placeholders that can be replaced with the actual parameter values. The results might vary from person to person, depending on the settings and the speed of the system you run Xcode on.
The assistant editor

The assistant editor is mainly used to display the counterparts and related files of the file open in the primary editor (generally used when working with Objective-C, where the .h and .m files are the related files). The assistant editors track the contents of the editor; Xcode is quite intelligent and knows the corresponding sections and counterparts. When you click on a file, it opens in the editor. However, if you press OPT + Shift while clicking on the file, you are presented with an interactive dialog to select where to open the file: the primary editor or the assistant editor. You can also add assistant editors as required.

Another way to open a file quickly is to use the Open Quickly option, which has the shortcut key CMD + Shift + O. This displays a textbox that allows you to access any file in the project.

The utility/inspector section

The last section contains the inspector and library. This section changes based on the type of file selected in the current editor. The inspector has six tabs/sections, which are as follows:

- The file inspector (CMD + OPT + 1): This displays the physical file information for the selected file. For code files, this is the text encoding, the targets the file belongs to, and the physical file path; for a storyboard, it is the physical file path, plus attributes such as auto layout and size classes (new in Xcode 6).
- The quick help inspector (CMD + OPT + 2): This displays information about the selected class or object.
- The identity inspector (CMD + OPT + 3): This displays the class name, ID, and other attributes that identify the selected object.
- The attributes inspector (CMD + OPT + 4): This displays the attributes of the selected object, such as whether it is the initial root view controller, whether it extends under the top bars, whether it has a navigation bar, and others. It also displays user-defined attributes (a new feature with Xcode 6).
- The size inspector (CMD + OPT + 5): This displays the size of the selected control and the associated constraints that help position it in its container.
- The connections inspector (CMD + OPT + 6): This displays the connections created in Interface Builder between the UI and the code.

The lower half of this inspector contains four options that help you work efficiently. They are as follows (a sketch of a code snippet follows this list):

- The file template library: This contains the options to create a new class or protocol; the same options that are available when selecting the File | New option from the menu.
- The code snippets library: This is a wonderful but not widely used option. It can hold code snippets that help you avoid writing repetitive blocks of code in your app; you can drag and drop a snippet into your code in the editor. It also offers features such as shortcuts, scopes, platforms, and languages, so you can have a shortcut such as appDidLoad (for example) that inserts the code to create and populate a button, simply by setting the platform as appropriate to iOS or OS X. After creating a code snippet, as soon as you type its first few characters, the snippet shows up in the list of autocomplete options. Adding a code snippet is as easy as dragging the selected code from the editor onto the snippet area. It is a little tricky, because the moment you start dragging, it can break your selection highlight; you need to select the text, click (and hold), and then drag it.
- The object library: This is the toolbox that contains all of the controls you need for creating your UI, be it a button, a label, a Table View, a view, a View Controller, or anything else.
- The media library: This contains the list of all images and other media types available to this project/workspace.
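To illustrate the code snippets library, here is a hypothetical Objective-C snippet body one might store under a shortcut such as appDidLoad. The <#...#> markers are Xcode's placeholder tokens; the snippet contents themselves are invented for illustration:

// Hypothetical 'appDidLoad' snippet: create and populate a button
UIButton *<#button#> = [UIButton buttonWithType:UIButtonTypeSystem];
[<#button#> setTitle:@"<#title#>" forState:UIControlStateNormal];
<#button#>.frame = CGRectMake(<#x#>, <#y#>, <#width#>, <#height#>);
[self.view addSubview:<#button#>];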
Summary

In this article, you have seen a quick tour of Xcode. Keep the shortcuts and tips handy, as they really do help get things done faster. Code snippets are a wonderful feature that allows commonly used code to be set up quickly with shortcut keywords.

Resources for Article:

Further resources on this subject:
- Introducing Xcode Tools for iPhone Development [article]
- Xcode 4 iOS: Displaying Notification Messages [article]
- Linking OpenCV to an iOS project [article]