
How-To Tutorials - Mobile

213 Articles

Publication of Apps

Packt
22 Feb 2016
10 min read
Ever wondered how to prepare and publish an app on Google Play, and wished for a short article explaining how to get it done quickly? Here it is. Read on and you'll be able to get your app running on Google Play.

Preparing to publish

You probably don't want to upload any of the apps from this book, so the first step is to develop an app that you want to publish. Head over to https://play.google.com/apps/publish/ and follow the instructions to get a Google Play developer account. This was $25 at the time of writing and is a one-time charge with no limit on the number of apps you can publish.

Creating an app icon

Exactly how to design an icon is beyond the remit of this book. Simply put, you need to create a nice image for each of the Android screen density categorizations. This is easier than it sounds. Design one nice app icon in your favorite drawing program and save it as a .png file. Then, visit http://romannurik.github.io/AndroidAssetStudio/icons-launcher.html. This will turn your single icon into a complete set of icons for every screen density.

Warning! The trade-off for using this service is that the website will collect your e-mail address for its own marketing purposes. There are many sites that offer a similar free service.

Once you have downloaded your .zip file from the preceding site, you can simply copy the res folder from the download into the main folder within the project explorer. All icons at all densities have now been updated.

Preparing the required resources

When we log into Google Play to create a new listing in the store, there is nothing technical to handle, but we do need to prepare quite a few images that we will upload:

- Up to eight screenshots for each device type (phone/tablet/TV/watch) that your app is compatible with. Don't crop or pad these images.
- A 512 x 512 pixel image that will be used to show off your app icon on the Google Play store. You can prepare your own icon, or use one of the icons autogenerated by the process we just discussed.
- Three banner graphics, with the following dimensions:
  - 1024 x 500
  - 180 x 120
  - 320 x 180

The banners can be screenshots, but it is usually worth taking a little time to create something a bit more special. If you are not artistically minded, you can place a screenshot inside some quite cool device art and then simply add a background image. You can generate some device art at https://developer.android.com/distribute/tools/promote/device-art.html. Then, just add the title or feature of your app to the background. A perfectly decent banner can be put together with no skill at all, using nothing more than a pretty background purchased for around $10 and the device art tool just mentioned.

Also, consider creating a video of your app. Recording video of your Android device is nearly impossible unless your device is rooted. I cannot recommend that you root your device; however, there is a tool called ARC (App Runtime for Chrome) that enables you to run APK files on your desktop. There is no debugging output, but it can run a demanding app a lot more smoothly than the emulator. It is then quite simple to use a free, open source desktop capture program such as OBS (Open Broadcaster Software) to record your app running within ARC. You can learn more about ARC at https://developer.chrome.com/apps/getstarted_arc and about OBS at https://obsproject.com/.
Building the publishable APK file

What we are doing in this section is preparing the file that we will upload to Google Play. The format of the file we will create is .apk, and this type of file is often referred to as an APK. The actual contents of this file are the compiled class files, all the resources that we've added, and the files and resources that Android Studio has autogenerated. We don't need to concern ourselves with the details; we just need to follow the steps below. The steps not only create the APK, they also create a key and sign your app with that key. This process is required, and it also protects the ownership of your app.

Note that this is not the same thing as copy protection/digital rights management.

1. In Android Studio, open the project that you want to publish and navigate to Build | Generate Signed APK; a pop-up window will open.
2. In the Generate Signed APK window, click on the Create new button. After this, you will see the New Key Store window.
3. In the Key store path field, browse to a location on your hard disk where you would like to keep your new key, and enter a name for your key store. If you don't have a preference, simply enter keys and click on OK.
4. Add a password and then retype it to confirm it.
5. Next, you need to choose an alias and type it into the Alias field. You can treat this like a name for your key; it can be any word that you like.
6. Now, enter another password for the key itself and type it again to confirm. Leave Validity (years) at its default value of 25.
7. Now, all you need to do is fill out your personal/business details. This doesn't need to be 100% complete, as the only mandatory field is First and Last Name. Click on the OK button to continue.
8. You will be taken back to the Generate Signed APK window with all the fields completed and ready to proceed. Click on Next to move to the next screen.
9. Choose where you would like to export your new APK file and select release for the Build Type field.
10. Click on Finish and Android Studio will build the shiny new APK into the location you've specified, ready to be uploaded to Google Play.

Take a backup of your key store and keep it in multiple safe places! The key store is extremely valuable. If you lose it, you will effectively lose control over your app. For example, if you try to update an app that you have on Google Play, it will need to be signed by the same key. Without it, you would not be able to update it. Think of the chaos if you had lots of users and your app needed a database update, but you had to issue a whole new app because of a lost key store.

As we will need it quite soon, locate the file that has been built and ends in the .apk extension.
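If you prefer working from a terminal, the same kind of key store can also be created up front with the JDK's keytool utility and then selected from the Key store path field instead of creating a new one in the wizard. This is only a sketch; the file name and alias are examples, and the 9,125-day validity simply mirrors the 25 years used above:

```
keytool -genkey -v -keystore keys.jks -alias myappkey -keyalg RSA -keysize 2048 -validity 9125
```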
Publishing the app

Log in to your developer account at https://play.google.com/apps/publish/. From the left-hand side of your developer console, make sure that the All applications tab is selected. On the top right-hand corner, click on the Add new application button. Now we have a bit of form filling to do, and you will need all the images from the Preparing the required resources section near the start of this article. In the ADD NEW APPLICATION window, choose a default language and type the title of your application. Now, click on the Upload APK button, then the Upload your first APK button, and browse to the APK file that you built and signed earlier. Wait for the file to finish uploading.

Now, from the inner left-hand side menu, click on Store Listing. We are faced with a fair bit of form filling here. If, however, you have all your images to hand, you can get through this in about 10 minutes. Almost all the fields are self-explanatory, and the ones that aren't have helpful tips next to the field entry box. Here are a few hints and tips to make the process smooth and produce a good end result:

- In the Full description and Short description fields, you enter the text that will be shown to potential users/buyers of your app. Be sure to make the description as enticing and exciting as you can. Mention all the best features in a clear list, but start the description with one sentence that sums up your app and what it does.
- Don't worry about the New content rating field, as we will cover that in a minute.
- If you haven't built your app for tablet/phone devices, then don't add images in these tabs. If you have, however, make sure that you add a full range of images for each, because these are the only images that users of that type of device will see.

When you have completed the form, click on the Save draft button at the top-right corner of the web page.

Now, click on the Content rating tab, where you can answer questions about your app to get a content rating that is valid (and sometimes varied) across multiple countries.

The last tab you need to complete is the Pricing and Distribution tab. Click on this tab and choose the Paid or Free distribution button. Then, enter a price if you've chosen Paid. Note that if you choose Free, you can never change this; you can, however, unpublish the app. If you chose Paid, you can click on Auto-convert prices now to set up equivalent pricing for all currencies around the world. In the DISTRIBUTE IN THESE COUNTRIES section, you can select countries individually or check the SELECT ALL COUNTRIES checkbox.

The next six options, under the Device categories and User programs sections, should all be left unchecked in the context of what you have learned in this book. Do read the tips to find out more about Android Wear, Android TV, Android Auto, Designed for families, Google Play for work, and Google Play for education, however. Finally, you must check two boxes to agree with the Google consent guidelines and US export laws.

Click on the Publish App button in the top-right corner of the web page and your app will soon be live on Google Play. Congratulations.

Summary

You can now start building Android apps. Don't run off and build the next Evernote, Runtastic, or Angry Birds just yet. Head over to our book, Android Programming for Beginners: https://www.packtpub.com/application-development/android-programming-beginners.

Here are a few more books that you can check out to learn more about Android:

- Android Studio Cookbook (https://www.packtpub.com/application-development/android-studio-cookbook)
- Learning Android Google Maps (https://www.packtpub.com/application-development/learning-android-google-maps)
- Android 6 Essentials (https://www.packtpub.com/application-development/android-6-essentials)
- Android Sensor Programming By Example (https://www.packtpub.com/application-development/android-sensor-programming-example)

Further resources on this subject:

- Saying Hello to Unity and Android
- Android and iOS Apps Testing at a Glance
- Testing with the Android SDK


Payment Processing Workflow

Packt
09 Feb 2016
4 min read
In this article by Ernest Bruce, author of the book Apple Pay Essentials, we look at the actors and operations in the payment processing workflow. After the user authorizes the payment request, the user app, the payment gateway, and the order-processing web app team up to securely deliver the payment information to the issuing bank, to transfer the funds from the user's account to the acquiring bank, and to inform the user of the transaction status (approved or declined).

The payment processing workflow is made up of three phases:

- Preprocess phase: The app gets a charge token from the payment gateway and sends order information (including the charge token) to the order-processing server.
- Process phase: The order-processing web app (running on your server) charges the user's card through the payment gateway, updates order and inventory data if the charge is successful, and sends the transaction status to the user app.
- Postprocess phase: The user app informs the user about the status of the transaction and dismisses the payment sheet.

Because, in general, the payment-gateway API does not run appropriately in the Simulator app, you must use an actual iOS device to test the payment processing workflow in your development environment. In addition, you need either a separate computer to run the order-processing web app or a proxy server that intercepts network traffic from the device and redirects the appropriate requests to your development computer. For details, see the documentation for the example project.

Actors and Operations in the Processing Workflow

The payment processing workflow is the process by which the payment information generated by Apple Pay from the payment request, together with the information the user entered in the payment sheet, is transmitted to your payment gateway and the card's issuing bank to charge the card and make the payment's funds available in your acquiring bank. The workflow starts when the payment sheet calls the paymentAuthorizationViewController:didAuthorizePayment:completion: delegate method, providing the user app general order information (such as shipping and billing information) and a payment token containing encrypted payment data.
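For orientation, here is a minimal Swift sketch of what that delegate callback might look like. CheckoutViewController and sendToGateway are made-up names standing in for your own view controller and gateway client; the book's example project is considerably more involved:

```swift
import UIKit
import PassKit

class CheckoutViewController: UIViewController, PKPaymentAuthorizationViewControllerDelegate {

    // Hypothetical gateway call: in a real app this is where the encrypted
    // payment token is exchanged for a charge token by your payment gateway.
    func sendToGateway(token: PKPaymentToken, completion: (Bool) -> Void) {
        completion(true) // placeholder: pretend the gateway approved the charge
    }

    // Called by the payment sheet once the user authorizes the payment.
    func paymentAuthorizationViewController(controller: PKPaymentAuthorizationViewController,
        didAuthorizePayment payment: PKPayment,
        completion: (PKPaymentAuthorizationStatus) -> Void) {
        sendToGateway(payment.token) { approved in
            completion(approved ? .Success : .Failure)
        }
    }

    // Called when the payment sheet is done and should be dismissed.
    func paymentAuthorizationViewControllerDidFinish(controller: PKPaymentAuthorizationViewController) {
        controller.dismissViewControllerAnimated(true, completion: nil)
    }
}
```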
These are the operations and data that are part of the workflow:

- payment authorized: The payment sheet tells the app that the user authorized the payment.
- payment token: The app provides the payment token to the payment gateway, which returns a charge token.
- order info and charge token: The app sends information about the order and the charge token to the order-processing web app.
- charge card: The web app charges the card through the payment gateway.
- approved or declined: The payment gateway tells the web app whether the payment is approved or declined.
- transaction result and order metadata: The web app provides the user app the result of the transaction and order information, such as the order number.
- transaction result: The app tells the payment sheet the result of the payment transaction: approved or declined.
- payment sheet done: The payment sheet tells the app that the transaction is complete.
- dismiss: The app dismisses the payment sheet.

Summary

You can also check out the following books on Apple platforms:

- Mastering Apple Aperture, by Thomas Fitzgerald, Packt Publishing
- Apple Motion 5 Cookbook, by Nick Harauz, Packt Publishing

Further resources on this subject:

- BatteryMonitor Application
- Introducing Xcode Tools for iPhone Development
- Network Development with Swift


LiveCode: Loops and Timers

Packt
10 Sep 2014
9 min read
In this article by Dr. Edward Lavieri, author of LiveCode Mobile Development Cookbook, you will learn how to use timers and loops in your mobile apps. Timers can be used for many different functions, including a basketball shot clock, car racing times, the length of time logged into a system, and much more. Loops are useful for counting and iterating through lists. All of this will be covered in this article.

Implementing a countdown timer

To implement a countdown timer, we will create two objects: a field to display the current timer and a button to start the countdown. We will code two handlers: one for the button and one for the timer.

How to do it...

Perform the following steps to create a countdown timer:

1. Create a new main stack.
2. Place a field on the stack's card and name it timerDisplay.
3. Place a button on the stack's card and name it Count Down.
4. Add the following code to the Count Down button:

```
on mouseUp
  local pTime
  put 19 into pTime
  put pTime into fld "timerDisplay"
  countDownTimer pTime
end mouseUp
```

5. Add the following code to the Count Down button:

```
on countDownTimer currentTimerValue
  subtract 1 from currentTimerValue
  put currentTimerValue into fld "timerDisplay"
  if currentTimerValue > 0 then
    send "countDownTimer" && currentTimerValue to me in 1 sec
  end if
end countDownTimer
```

6. Test the code using a mobile simulator or an actual device.

How it works...

To implement our timer, we created a simple callback situation where the countDownTimer handler will be called each second until the timer reaches zero. We avoided the temptation to use a repeat loop because that would have blocked all other messages and introduced unwanted app behavior.

There's more...

LiveCode provides us with the send command, which allows us to transfer messages to handlers and objects immediately or at a specific time, as in this recipe's example.

Implementing a count-up timer

To implement a count-up timer, we will create two objects: a field to display the current timer and a button to start the upwards counting. We will code two handlers: one for the button and one for the timer.

How to do it...

Perform the following steps to implement a count-up timer:

1. Create a new main stack.
2. Place a field on the stack's card and name it timerDisplay.
3. Place a button on the stack's card and name it Count Up.
4. Add the following code to the Count Up button:

```
on mouseUp
  local pTime
  put 0 into pTime
  put pTime into fld "timerDisplay"
  countUpTimer pTime
end mouseUp
```

5. Add the following code to the Count Up button:

```
on countUpTimer currentTimerValue
  add 1 to currentTimerValue
  put currentTimerValue into fld "timerDisplay"
  if currentTimerValue < 10 then
    send "countUpTimer" && currentTimerValue to me in 1 sec
  end if
end countUpTimer
```

6. Test the code using a mobile simulator or an actual device.

How it works...

To implement our timer, we created a simple callback situation where the countUpTimer handler will be called each second until the timer reaches 10. We avoided the temptation to use a repeat loop because that would have blocked all other messages and introduced unwanted app behavior.

There's more...

Timers can be tricky, especially on mobile devices. For example, using the repeat loop control when working with timers is not recommended because repeat blocks other messages.

Pausing a timer

It can be important to have the ability to stop or pause a timer once it is started. The difference between stopping and pausing a timer lies in keeping track of where the timer was when it was interrupted.
In this recipe, you'll learn how to pause a timer. Of course, if you never resume the timer, then the act of pausing it has the same effect as stopping it.

How to do it...

Use the following steps to create a count-up timer and pause function:

1. Create a new main stack.
2. Place a field on the stack's card and name it timerDisplay.
3. Place a button on the stack's card and name it Count Up.
4. Add the following code to the Count Up button:

```
on mouseUp
  local pTime
  put 0 into pTime
  put pTime into fld "timerDisplay"
  countUpTimer pTime
end mouseUp
```

5. Add the following code to the Count Up button:

```
on countUpTimer currentTimerValue
  add 1 to currentTimerValue
  put currentTimerValue into fld "timerDisplay"
  if currentTimerValue < 60 then
    send "countUpTimer" && currentTimerValue to me in 1 sec
  end if
end countUpTimer
```

6. Add a button to the card and name it Pause.
7. Add the following code to the Pause button:

```
on mouseUp
  repeat for each line i in the pendingMessages
    cancel (item 1 of i)
  end repeat
end mouseUp
```

In LiveCode, pendingMessages returns a list of currently scheduled messages. These are messages that have been scheduled for delivery but are yet to be delivered.

To test this, first click on the Count Up button, and then click on the Pause button before the timer reaches 60.

How it works...

We first created a timer that counts up from 0 to 60. Next, we created a Pause button that, when clicked, cancels all pending system messages, including the call to the countUpTimer handler.

Resuming a timer

If you have a timer as part of your mobile app, you will most likely want the user to be able to pause and resume a timer, either directly or through in-app actions. See the previous recipes in this article to create and pause a timer. This recipe covers how to resume a timer once it is paused.

How to do it...

Perform the following steps to resume a timer once it is paused:

1. Create a new main stack.
2. Place a field on the stack's card and name it timerDisplay.
3. Place a button on the stack's card and name it Count Up.
4. Add the following code to the Count Up button:

```
on mouseUp
  local pTime
  put 0 into pTime
  put pTime into fld "timerDisplay"
  countUpTimer pTime
end mouseUp

on countUpTimer currentTimerValue
  add 1 to currentTimerValue
  put currentTimerValue into fld "timerDisplay"
  if currentTimerValue < 60 then
    send "countUpTimer" && currentTimerValue to me in 1 sec
  end if
end countUpTimer
```

5. Add a button to the card and name it Pause.
6. Add the following code to the Pause button:

```
on mouseUp
  repeat for each line i in the pendingMessages
    cancel (item 1 of i)
  end repeat
end mouseUp
```

7. Place a button on the card and name it Resume.
8. Add the following code to the Resume button:

```
on mouseUp
  local pTime
  put the text of fld "timerDisplay" into pTime
  countUpTimer pTime
end mouseUp

on countUpTimer currentTimerValue
  add 1 to currentTimerValue
  put currentTimerValue into fld "timerDisplay"
  if currentTimerValue < 60 then
    send "countUpTimer" && currentTimerValue to me in 1 sec
  end if
end countUpTimer
```

9. To test this, first click on the Count Up button, then click on the Pause button before the timer reaches 60. Finally, click on the Resume button.

How it works...

We first created a timer that counts up from 0 to 60. Next, we created a Pause button that, when clicked, cancels all pending system messages, including the call to the countUpTimer handler. When the Resume button is clicked, the current value of the timer, taken from the timerDisplay field, is used to continue incrementing the timer.
In LiveCode, pendingMessages returns a list of currently scheduled messages. These are messages that have been scheduled for delivery but are yet to be delivered.

Using a loop to count

There are numerous reasons why you might want to implement a counter in a mobile app. You might want to count the number of items on a screen (gold pieces in a game, for instance), the number of players using your app simultaneously, and so on. One of the easiest methods of counting is to use a loop. This recipe shows you how to easily implement a loop.

How to do it...

Use the following steps to instantiate a loop that counts:

1. Create a new main stack.
2. Rename the stack's default card to MainScreen.
3. Drag a label field to the card and name it counterDisplay.
4. Drag five checkboxes to the card and place them anywhere. Change the names to 1, 2, 3, 4, and 5.
5. Drag a button to the card and name it Loop to Count.
6. Add the following code to the Loop to Count button:

```
on mouseUp
  local tButtonNumber
  put the number of buttons on this card into tButtonNumber
  if tButtonNumber > 0 then
    repeat with tLoop = 1 to tButtonNumber
      set the label of btn value(tLoop) to "Changed " & tLoop
    end repeat
    put "Number of buttons changed: " & tButtonNumber into fld "counterDisplay"
  end if
end mouseUp
```

7. Test the code by running it in a mobile simulator or on an actual device.

How it works...

In this recipe, we created several buttons on a card. Next, we wrote code to count the number of buttons and used a repeat control structure to sequence through the buttons and change their labels.

Using a loop to iterate through a list

In this recipe, we will create a loop to iterate through a list of text items. Our list will be a to-do or action list. Our loop will process each line and number them on screen. This type of loop can be useful when you need to process lists of unknown lengths.

How to do it...

Perform the following steps to create an iterative loop:

1. Create a new main stack.
2. Drag a scrolling list field to the stack's card and name it myList.
3. Change the contents of the myList field to the following, paying special attention to the upper- and lowercase values of each line:

```
Wash Truck
Write Paper
Clean Garage
Eat Dinner
Study for Exam
```

4. Drag a button to the card and name it iterate.
5. Add the following code to the iterate button:

```
on mouseUp
  local tLines
  put the number of lines of fld "myList" into tLines
  repeat with tLoop = 1 to tLines
    put tLoop & " - " & line tLoop of fld "myList" into line tLoop of fld "myList"
  end repeat
end mouseUp
```

6. Test the code by clicking on the iterate button.

How it works...

We used the repeat control structure to iterate through a list field one line at a time. This was accomplished by first determining the number of lines in that list field, and then setting the repeat control structure to sequence through the lines.

Summary

In this article, we examined the LiveCode scripting required to implement and control count-up and countdown timers. We also learned how to use loops to count and iterate through a list.

Further resources on this subject:

- Introduction to Mobile Forensics
- Working with Pentaho Mobile BI
- Building Mobile Apps


How to Make Generic typealiases in Swift

Alexander Altman
16 Nov 2015
4 min read
Swift's typealias declarations are a good way to clean up our code. It's generally considered good practice in Swift to use typealiases to give more meaningful and domain-specific names to types that would otherwise be either too general-purpose or too long and complex. For example, the declaration:

```swift
typealias Username = String
```

gives a less vague name to the type String, since we're going to be using strings as usernames and we want a more domain-relevant name for that type. Similarly, the declaration:

```swift
typealias IDMultimap = [Int: Set<Username>]
```

gives a name for [Int: Set<Username>] that is not only more meaningful, but somewhat shorter.

However, we run into problems when we want to do something a little more advanced; there is a possible application of typealias that Swift doesn't let us make use of. Specifically, Swift doesn't accept typealiases with generic parameters. If we try it the naïvely obvious way,

```swift
typealias Multimap<Key: Hashable, Value: Hashable> = [Key: Set<Value>]
```

we get an error at the beginning of the type parameter list: Expected '=' in typealias declaration. Swift (as of version 2.1, at least) doesn't let us directly declare a generic typealias. This is quite a shame, as such an ability would be very useful in a lot of different contexts, and languages that have it (such as Rust, Haskell, OCaml, F♯, or Scala) make use of it all the time, including in their standard libraries. Is there any way to work around this linguistic lack? As it turns out, there is!

The Solution

It's actually possible to effectively give a typealias type parameters, despite Swift appearing to disallow it. Here's how we can trick Swift into accepting a generic typealias:

```swift
enum Multimap<Key: Hashable, Value: Hashable> {
    typealias T = [Key: Set<Value>]
}
```

The basic idea here is that we declare a generic enum whose sole purpose is to hold our (now technically non-generic) typealias as a member. We can then supply the enum with its type parameters and project out the actual typealias inside, like so:

```swift
let idMMap: Multimap<Int, Username>.T = [0: ["alexander", "bob"], 1: [], 2: ["christina"]]

func hasID0(user: Username) -> Bool {
    return idMMap[0]?.contains(user) ?? false
}
```

Notice that we used an enum rather than a struct; this is because an enum with no cases cannot have any instances (which is exactly what we want), but a struct with no members still has (at least) one instance, which breaks our layer of abstraction. We are essentially treating our caseless enum as a tiny generic module, within which everything (that is, just the typealias) has access to the Key and Value type parameters. This pattern is used in at least a few libraries dedicated to functional programming in Swift, since such constructs are especially valuable there. Nonetheless, this is a broadly useful technique, and it's the best method currently available for creating generic typealiases in Swift.

The Applications

As sketched above, this technique works because Swift doesn't object to an ordinary typealias nested inside a generic type declaration. However, it does object to multiple generic types being nested inside each other; it even objects to either a generic type being nested inside a non-generic type or a non-generic type being nested inside a generic type. As a result, type-level currying is not possible.

Despite this limitation, this kind of generic typealias is still useful for a lot of purposes; one big one is specialized error-return types, in which Swift can use this technique to imitate Rust's standard library:

```swift
enum Result<V, E> {
    case Ok(V)
    case Err(E)
}

enum IO_Result<V> {
    typealias T = Result<V, ErrorType>
}
```

Another use for generic typealiases comes in the form of nested collection types:

```swift
enum DenseMatrix<Element> {
    typealias T = [[Element]]
}

enum FlatMatrix<Element> {
    typealias T = (width: Int, elements: [Element])
}

enum SparseMatrix<Element> {
    typealias T = [(x: Int, y: Int, value: Element)]
}
```
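As a quick illustration of how these aliases read at a call site, here is a small sketch; the values are made up, and a real matrix library would naturally wrap these raw representations in richer operations:

```swift
// The same 2 x 2 identity matrix in two of the representations above.
let dense: DenseMatrix<Double>.T = [[1.0, 0.0],
                                    [0.0, 1.0]]

// The sparse form stores only the non-zero entries.
let sparse: SparseMatrix<Double>.T = [(x: 0, y: 0, value: 1.0),
                                      (x: 1, y: 1, value: 1.0)]
```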
Finally, since Swift is a relatively young language, there are sure to be undiscovered applications for things like this; if you search, maybe you'll find one!

About the author

Alexander Altman (https://pthariensflame.wordpress.com) is a functional programming enthusiast who enjoys the mathematical and ergonomic aspects of programming language design. He's been working with Swift since the language's first public release, and he is one of the core contributors to the TypeLift (https://github.com/typelift) project.


The Project Structure

Packt
08 Jan 2016
14 min read
In this article written by Nathanael Anderson, author of the book Getting Started with NativeScript, we will see how to navigate through your new project and its full structure. We will explain each of the files that are automatically created and where you create your own files. Then, we will proceed to gain a deeper understanding of some of the basic components of your application, and we will finally see how to change screens. In this article, we will cover the following topics:

- An overview of the project directory
- The root directory
- The application components

Project directory overview

Running the nativescript create crossCommunicator command creates a nice structure of files and folders for us to explore. First, we will do a high-level overview of the different folders and their purposes and touch on the important files in those folders. Then, we will finish the overview by going into depth about the App directory, which is where you will spend pretty much all of your time when developing an application.

The root directory

In your root folder, you will see only a couple of directories. The package.json file will look something like this:

```json
{
  "nativescript": {
    "id": "org.nativescript.crossCommunicator",
    "tns-android": {
      "version": "1.5.0"
    },
    "tns-ios": {
      "version": "1.5.0"
    }
  },
  "dependencies": {
    "tns-core-modules": "1.5.0"
  }
}
```

This is the NativeScript master project configuration file for your entire application. It basically outlines the basic information and all platform requirements of the project. It will also be modified by the nativescript tool when you add and remove any plugins or platforms. So, in the preceding package.json file, you can see that I have installed the Android (tns-android) and iOS (tns-ios) platforms using the nativescript platform add command, and they are both currently at version 1.5.0. The tns-core-modules dependency was added by the nativescript command when we created the project; it contains the core modules.

Changing the app ID

Now, if you want this to be your company's name instead of the default ID of org.nativescript.yourProjectName, there are two ways to set the app ID. The first way is to set it when you create the project; executing a command such as

```
nativescript create myProjName --appid com.myCompany.myProjName
```

will automatically set the ID value. If you forget to run the create command with an --appid option, you can change it here. However, any time you change the option here, you will also have to remove all the installed platforms and then re-add the platforms you are using. This must be done because, when each platform is added, this configuration ID is used while building the platform folders and all of the needed platform files.

The package.json file

This file contains basic information about the current template that is installed. When you create a new application via the nativescript create command, by default it copies everything from a template project called tns-template-hello-world. In the future, NativeScript will allow you to choose the template, but currently this is the only template that is available and working. This template contains the folders we discussed earlier and all the files that we will discuss further.
The package.json file is from this template, and it basically tells you about the template and the specific version that is installed. Let's take a look at the following code snippet:

```json
{
  "name": "tns-template-hello-world",
  "main": "app.js",
  "version": "1.5.0",
  ... more json documentation fields ...
}
```

Feel free to modify the package.json file so that it matches your project's name and details. This file currently does not serve much purpose beyond template documentation and the link to the main application file.

License

The License file that is present in the App folder is the license that the tns-template-hello-world template is under. In your case, your app will probably be distributed under a different license. You can either update this file to match your application's license or delete it.

App.js

Awesome! We have finally made it to the first of the JavaScript files that make up the code that we will change to make this our application. This file is the bootstrap file of the entire application, and this is where the fun begins. In the preceding package.json file, you can see that it references this file (app.js) under the main key. This key in the package.json file is what NativeScript uses to make this file the entry point for the entire application. Looking at this file, we can see that it currently has four simple lines of code:

```js
var application = require("application");
application.mainModule = "main-page";
application.cssFile = "./app.css";
application.start();
```

The file seems simple enough, so let's work through the code as we see it. The first line loads the NativeScript common application component class; the application component wraps the initialization and life cycle of the entire application. The require() function is what we use to reference another file in your project. It will look in your current directory by default, and then it will use some additional logic to look in a couple of places in the tns_core_modules folder to check whether it can find the name as a common component.

Now that we have the application component loaded, the next lines are used to configure what the application class does when your program starts and runs. The second line tells it which file is the main page. The third line tells the application which CSS file is the main CSS file for the entire application. And finally, we tell the application component to actually start. Now, in a sense, the application has actually started running; we are already running our JavaScript code. This start() function actually starts the code that manages the application life cycle and the application events, loads and applies your main CSS file, and finally loads your main page files.

If you want to add any additional code to this file, you will need to put it before the application.start() command. On some platforms, nothing below the application.start() command will be run in the app.js file.

The main-page.js file

The JavaScript portion of a page is probably where you will spend the majority of your development time, as it contains all of the logic of your page. To create a page, you typically have a JavaScript file, as you can do everything in JavaScript, and this is where all your logic for the page resides.
In our application, the main page currently has only six lines of code:

```js
var vmModule = require("./main-view-model");

function pageLoaded(args) {
  var page = args.object;
  page.bindingContext = vmModule.mainViewModel;
}
exports.pageLoaded = pageLoaded;
```

The first line loads the main-view-model.js file. If you haven't guessed yet, in our new application this is used as the model file for this page. We will check out the optional model file in a few minutes, after we are done exploring the rest of the main-page files. An app page does not have to have a model; they are totally optional. Furthermore, you can actually combine your model JavaScript into the page's JavaScript file. Some people find it easier to keep their model separate, so when Telerik designed this example, they built this application using the MVVM pattern, which uses a separate view model file. For more information on MVVM, you can take a look at the Wikipedia entry at https://en.wikipedia.org/wiki/Model_View_ViewModel.

This file also has a function called pageLoaded, which is what sets the model object as the model for this page. The third line assigns the page variable to the page component object that is passed as part of the event handler. The fourth line assigns the model to the current page's bindingContext attribute. Then, we export the pageLoaded function as a function called pageLoaded. Using exports and module.exports is the way in which we publish something to other files that use require() to load it. Each file is its own independent black box; nothing that is not exported can be seen by any of the other files. Using exports, you create the interface of your code to the rest of the world. This is part of the CommonJS standard, and you can read more about it at the NodeJS site.

The main-page.xml file

The final file of our application folder that is named main-page is the page layout file. As you can see, the main-page.xml layout consists of seven simple lines of XML code, which actually do quite a lot:

```xml
<Page loaded="pageLoaded">
  <StackLayout>
    <Label text="Tap the button" cssClass="title"/>
    <Button text="TAP" tap="{{ tapAction }}" />
    <Label text="{{ message }}" cssClass="message" textWrap="true"/>
  </StackLayout>
</Page>
```

Each of the XML layout files is actually a simplified way to load the visual components that you want on your page; in this case, it is what produces the simple "Tap the button" screen you see when you run the template.

The main-view-model.js file

The final file in our tour of the App folder is the model file. This file has about 30 lines of code. By looking at the first couple of lines, you might have figured out that this file was transpiled from TypeScript. Since this file actually has a lot of boilerplate and unneeded code from the TypeScript conversion, we will rewrite the code in plain JavaScript to help you easily understand what each of the parts is used for. This rewrite will be as close to the original as I can make it. So, without further ado, here is the original transpiled code to compare our new code with:

```js
var observable = require("data/observable");
var HelloWorldModel = (function (_super) {
  __extends(HelloWorldModel, _super);
  function HelloWorldModel() {
    _super.call(this);
    this.counter = 42;
    this.set("message", this.counter + " taps left");
  }
  HelloWorldModel.prototype.tapAction = function () {
    this.counter--;
    if (this.counter <= 0) {
      this.set("message", "Hoorraaay! You unlocked the NativeScript clicker achievement!");
    }
    else {
      this.set("message", this.counter + " taps left");
    }
  };
  return HelloWorldModel;
})(observable.Observable);
exports.HelloWorldModel = HelloWorldModel;
exports.mainViewModel = new HelloWorldModel();
```

The rewrite of the main-view-model.js file

The rewrite of the main-view-model.js file is very straightforward. The first thing we need for a working model is to require the Observable class, which is the primary class that handles data-binding events in NativeScript. We then create a new instance of the Observable class named mainViewModel. Next, we assign the two default values to the mainViewModel instance. Then, we create the same tapAction() function, which is the code that is executed each time the user taps on the button. Finally, we export the mainViewModel model we created so that it is available to any other files that require this file. This is what the new JavaScript version looks like:

```js
// Require the Observable class and create a new model from it
var Observable = require("data/observable").Observable;
var mainViewModel = new Observable();

// Set up our default values
mainViewModel.counter = 42;
mainViewModel.set("message", mainViewModel.counter + " taps left");

// Set up the function that runs when a tap is detected.
mainViewModel.tapAction = function() {
  this.counter--;
  if (this.counter <= 0) {
    this.set("message", "Hoorraaay! You unlocked the NativeScript clicker achievement!");
  } else {
    this.set("message", this.counter + " taps left");
  }
};

// Export our already instantiated model class as the variable name
// that main-page.js is expecting on line 4.
exports.mainViewModel = mainViewModel;
```

The set() command is the only thing that is not totally self-explanatory or explained in this code. What is probably fairly obvious is that this command sets the specified variable to the specified value. What is not obvious is that when a value is set on an instance of the Observable class, it will automatically send a change event to anyone who has asked to be notified of any changes to that specific variable. If you recall, in the main-page.xml file, the <Label text="{{ message }}" ...> line will automatically register the label component as a listener for all change events on the message variable when the layout system creates the label. Now, every time the message variable is changed, the text on this label changes.

The application component

If you recall, earlier in the article we discussed the app.js file. It basically contains only the code to set up the properties of your application, and then finally it starts the application component. So, you have probably guessed that this is the primary component for your entire application life cycle. Part of what this component provides us is access to all the application-wide events. Frequently in an app, you will want to know when your app is no longer the foreground application or when it finally returns to being the foreground application. To get this information, you can attach code to two of the events that it provides, like this:

```js
application.on("suspend", function(event) {
  console.log("Hey, I was suspended - I thought I was your favorite app!");
});

application.on("resume", function(event) {
  console.log("Awesome, we are back!");
});
```

Some of the other events that you can watch from the application component are launch, exit, lowMemory, and uncaughtError. These events allow you to handle different application-wide issues that your application might need to know about.

Creating settings.js

In our application, we will need a settings page, so we will create the framework for our application's settings page now. We will just get our feet a little wet and explore how to build it purely in JavaScript. As you can see, the following code is fairly straightforward. First, we require all the components that we will be using: Frame, Page, StackLayout, Button, and finally the Label component. Then, we have to export a createPage function, which is what NativeScript will run to generate the page if you do not have an XML layout file to go along with the page's JavaScript file. At the beginning of our createPage function, we create each of the four components that we will need. Then, we assign some values and properties so that they have some sort of visual appearance. Next, we create the parent-child relationships, add our label and button to the layout component, and then assign that layout to the Page component. Finally, we return the page component:

```js
// Add our requires for the components we need on our page
var frame = require("ui/frame");
var Page = require("ui/page").Page;
var StackLayout = require("ui/layouts/stack-layout").StackLayout;
var Label = require("ui/label").Label;
var Button = require("ui/button").Button;

// Create our required function, which NativeScript uses
// to build the page.
exports.createPage = function() {
  // Create our components for this page
  var page = new Page();
  var layout = new StackLayout();
  var welcomeLabel = new Label();
  var backButton = new Button();

  // Assign our page title
  page.actionBar.title = "Settings";

  // Set up our welcome label
  welcomeLabel.text = "You are now in Settings!";
  welcomeLabel.cssClass = "message";

  // Set up our Go Back button
  backButton.text = "Go Back";
  backButton.on("tap", function () {
    frame.topmost().goBack();
  });

  // Add our layout items to our StackLayout
  layout.addChild(welcomeLabel);
  layout.addChild(backButton);

  // Assign our layout to the page.
  page.content = layout;

  // Return our created page
  return page;
};
```

One thing I do want to mention here is that if you are creating a page totally programmatically, without the use of a declarative XML file, the createPage function must return the Page component; the Frame component expects to be given a Page component.
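Since settings.js has no matching settings.xml, NativeScript calls this exported createPage() function whenever something navigates to the "settings" module. As a quick sketch of how the main page might get there (the handler name is made up for illustration, and wiring a tap attribute to it in main-page.xml is left to you):

```js
// main-page.js (sketch): navigate to the settings page.
var frame = require("ui/frame");

function goToSettings() {
  // "settings" is resolved as a page module name; with no settings.xml
  // present, NativeScript uses the module's createPage() function instead.
  frame.topmost().navigate("settings");
}
exports.goToSettings = goToSettings;
```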
Summary

We have covered a large amount of foundational information in this article. We covered which files make up your application and where to find and change the project control files. In addition, we covered several foundational components, such as the Application, Frame, and Page components.

Further resources on this subject:

- Overview of TDD
- Understanding outside-in
- Understanding TDD


Directives and Services of Ionic

Packt
28 Jul 2015
18 min read
In this article by Arvind Ravulavaru, author of Learning Ionic, we are going to take a look at Ionic directives and services, which provide reusable components and functionality that can help us develop applications even faster.

Ionic Platform service

The first service we are going to deal with is the Ionic Platform service ($ionicPlatform). This service provides device-level hooks that you can tap into to better control your application behavior. We will start off with the very basic ready method. This method is fired once the device is ready, or immediately if the device is already ready. All the Cordova-related code needs to be written inside the $ionicPlatform.ready method, as this is the point in the app life cycle where all the plugins are initialized and ready to be used.

To try out Ionic Platform services, we will scaffold a blank app and then work with the services. Before we scaffold the blank app, we will create a folder named chapter5. Inside that folder, we will run the following command:

```
ionic start -a "Example 16" -i app.example.sixteen example16 blank
```

Once the app is scaffolded, if you open www/js/app.js, you should find a section such as:

```js
.run(function($ionicPlatform) {
  $ionicPlatform.ready(function() {
    // Hide the accessory bar by default (remove this to show the accessory bar above the keyboard
    // for form inputs)
    if (window.cordova && window.cordova.plugins.Keyboard) {
      cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true);
    }
    if (window.StatusBar) {
      StatusBar.styleDefault();
    }
  });
})
```

You can see that the $ionicPlatform service is injected as a dependency to the run method. It is highly recommended to use the $ionicPlatform.ready method inside other AngularJS components, such as controllers and directives, where you are planning to interact with Cordova plugins.

In the preceding run method, note that we are hiding the keyboard accessory bar by setting:

```js
cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true);
```

You can override this by setting the value to false. Also, do notice the if condition before the statement. It is always better to check for variables related to Cordova before using them.

The $ionicPlatform service comes with a handy method to detect the hardware back button event. A few (Android) devices have a hardware back button and, if you want to listen to the back button pressed event, you will need to hook into the onHardwareBackButton method on the $ionicPlatform service:

```js
var hardwareBackButtonHandler = function() {
  console.log('Hardware back button pressed');
  // do more interesting things here
};

$ionicPlatform.onHardwareBackButton(hardwareBackButtonHandler);
```

This event needs to be registered inside the $ionicPlatform.ready method, preferably inside AngularJS's run method. The hardwareBackButtonHandler callback will be called whenever the user presses the device back button. A simple piece of functionality you can build with this handler is to ask users whether they really want to quit your app, making sure that they have not accidentally hit the back button. Sometimes this may be annoying, so you can provide a setting in your app whereby the user selects whether they want to be alerted when trying to quit. Based on that, you can either defer registering the event or unsubscribe from it.
The code for the preceding logic will look something like this:

```js
.run(function($ionicPlatform) {
  $ionicPlatform.ready(function() {
    var alertOnBackPress = localStorage.getItem('alertOnBackPress');

    var hardwareBackButtonHandler = function() {
      console.log('Hardware back button pressed');
      // do more interesting things here
    };

    function manageBackPressEvent(alertOnBackPress) {
      if (alertOnBackPress) {
        $ionicPlatform.onHardwareBackButton(hardwareBackButtonHandler);
      } else {
        $ionicPlatform.offHardwareBackButton(hardwareBackButtonHandler);
      }
    }

    // when the app boots up
    manageBackPressEvent(alertOnBackPress);

    // later in the code/controller when you let
    // the user update the setting
    function updateSettings(alertOnBackPressModified) {
      localStorage.setItem('alertOnBackPress', alertOnBackPressModified);
      manageBackPressEvent(alertOnBackPressModified);
    }

  });
})
```

In the preceding code snippet, we are looking in localStorage for the value of alertOnBackPress. Next, we create a handler named hardwareBackButtonHandler, which will be triggered when the back button is pressed. Finally, a utility method named manageBackPressEvent() takes in a Boolean value that decides whether to register or de-register the callback for HardwareBackButton.

With this set up, when the app boots we call the manageBackPressEvent method with the value from localStorage. If the value is present and equal to true, we register the event; otherwise, we do not. Later on, we can have a settings controller that lets users change this setting. When the user changes the state of alertOnBackPress, we call the updateSettings method, passing in whether the user wants to be alerted or not. The updateSettings method updates localStorage with this setting and calls the manageBackPressEvent method, which takes care of registering or de-registering the callback for the hardware back pressed event.

This is one powerful example that showcases the power of AngularJS when combined with Cordova to provide APIs to manage your application easily. This example may seem a bit complex at first, but most of the services that you are going to consume will be quite similar. There will be events that you need to register and de-register conditionally, based on preferences. So, I thought this would be a good place to share an example such as this, assuming that this concept will grow on you.

registerBackButtonAction

The $ionicPlatform service also provides a method named registerBackButtonAction. This is another API that lets you control the way your application behaves when the back button is pressed. By default, pressing the back button executes one task. For example, if you have a multi-page application and you are navigating from page one to page two, and then you press the back button, you will be taken back to page one. In another scenario, when a user navigates from page one to page two and page two displays a pop-up dialog when it loads, pressing the back button here will only hide the pop-up dialog but will not navigate to page one. The registerBackButtonAction method provides a hook to override this behavior.
The registerBackButtonAction method takes the following three arguments:

- callback: This is the method to be called when the event is fired
- priority: This is the number that indicates the priority of the listener
- actionId (optional): This is the ID assigned to the action

By default, the priorities are as follows:

- Previous view = 100
- Close side menu = 150
- Dismiss modal = 200
- Close action sheet = 300
- Dismiss popup = 400
- Dismiss loading overlay = 500

So, if you want a certain piece of functionality/custom code to override the default behavior of the back button, you will write something like this:

```js
var cancelRegisterBackButtonAction = $ionicPlatform.registerBackButtonAction(backButtonCustomHandler, 201);
```

This listener will override (take precedence over) all the default listeners below the priority value of 201 (that is, dismiss modal, close side menu, and previous view), but not those above that priority value. When the $ionicPlatform.registerBackButtonAction method executes, it returns a function. We have assigned that function to the cancelRegisterBackButtonAction variable. Executing cancelRegisterBackButtonAction de-registers the registerBackButtonAction listener.

The on method

Apart from the preceding handy methods, $ionicPlatform has a generic on method that can be used to listen to all of Cordova's events (https://cordova.apache.org/docs/en/edge/cordova_events_events.md.html). You can set up hooks for application pause, application resume, volumedownbutton, volumeupbutton, and so on, and execute custom functionality accordingly. You can set up these listeners inside the $ionicPlatform.ready method as follows:

```js
var cancelPause = $ionicPlatform.on('pause', function() {
  console.log('App is sent to background');
  // do stuff to save power
});

var cancelResume = $ionicPlatform.on('resume', function() {
  console.log('App is retrieved from background');
  // re-init the app
});

// Supported only in BlackBerry 10 & Android
var cancelVolumeUpButton = $ionicPlatform.on('volumeupbutton', function() {
  console.log('Volume up button pressed');
  // moving a slider up
});

var cancelVolumeDownButton = $ionicPlatform.on('volumedownbutton', function() {
  console.log('Volume down button pressed');
  // moving a slider down
});
```

The on method returns a function that, when executed, de-registers the event. Now you know how to control your app better when dealing with mobile OS events and hardware keys.

Content

Next, we will take a look at content-related directives. The first is the ion-content directive.

Navigation

The next component we are going to take a look at is the navigation component. The navigation component has a bunch of directives as well as a couple of services. The first directive we are going to take a look at is ion-nav-view. When the app boots up, $stateProvider will look for the default state and then will try to load the corresponding template inside the ion-nav-view.

Tabs and side menu

To understand navigation a bit better, we will explore the tabs directive and the side menu directive. We will scaffold the tabs template and go through the directives related to tabs; run this:

```
ionic start -a "Example 19" -i app.example.nineteen example19 tabs
```

Using the cd command, go to the example19 folder and run this:

```
ionic serve
```

This will launch the tabs app.
If you open www/index.html file, you will notice that this template uses ion-nav-bar to manage the header with ion-nav-back-button inside it. Next open www/js/app.js and you will find the application states configured: .state('tab.dash', {     url: '/dash',     views: {       'tab-dash': {         templateUrl: 'templates/tab-dash.html',         controller: 'DashCtrl'       }     }   }) Do notice that the views object has a named object: tab-dash. This will be used when we work with the tabs directive. This name will be used to load a given view when a tab is selected into the ion-nav-view directive with the name tab-dash. If you open www/templates/tabs.html, you will find a markup for the tabs component: <ion-tabs class="tabs-icon-top tabs-color-active-positive">     <!-- Dashboard Tab -->   <ion-tab title="Status" icon-off="ion-ios-pulse" icon-on="ion- ios-pulse-strong" href="#/tab/dash">     <ion-nav-view name="tab-dash"></ion-nav-view>   </ion-tab>     <!-- Chats Tab -->   <ion-tab title="Chats" icon-off="ion-ios-chatboxes-outline" icon-on="ion-ios-chatboxes" href="#/tab/chats">     <ion-nav-view name="tab-chats"></ion-nav-view>   </ion-tab>     <!-- Account Tab -->   <ion-tab title="Account" icon-off="ion-ios-gear-outline" icon- on="ion-ios-gear" href="#/tab/account">     <ion-nav-view name="tab-account"></ion-nav-view>   </ion-tab>   </ion-tabs> The tabs.html will be loaded before any of the child tabs load, since tab state is defined as an abstract route. The ion-tab directive is nested inside ion-tabs and every ion-tab directive has an ion-nav-view directive nested inside it. When a tab is selected, the route with the same name as the name attribute on the ion-nav-view will be loaded inside the corresponding tab. Very neatly structured! You can read more about tabs directive and its services at http://ionicframework.com/docs/nightly/api/directive/ionTabs/. Next, we are going to scaffold an app using the side menu template and go through the navigation inside it; run this: ionic start -a "Example 20" -i app.example.twenty example20 sidemenu Using the cd command, go to the example20 folder and run this:   ionic serve This will launch the side menu app. We start off exploring with www/index.html. This file has only the ion-nav-view directive inside the body. Next, we open www/js/app/js. Here, the routes are defined as expected. But one thing to notice is the name of the views for search, browse, and playlists. It is the same—menuContent—for all: .state('app.search', {     url: "/search",     views: {       'menuContent': {         templateUrl: "templates/search.html"       }     } }) If we open www/templates/menu.html, you will notice ion-side-menus directive. It has two children ion-side-menu-content and ion-side-menu. The ion-side-menu-content displays the content for each menu item inside the ion-nav-view named menuContent. This is why all the menu items in the state router have the same view name. The ion-side-menu is displayed on the left-hand side of the page. You can set the location on the ion-side-menu to the right to show the side menu on the right or you can have two side menus. Do notice the menu-toggle directive on the button inside ion-nav-buttons. This directive is used to toggle the side menu. 
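Besides the menu-toggle directive, a side menu can also be driven from code through the $ionicSideMenuDelegate service. As a rough sketch (the controller name and the scope method are our own additions for illustration, not part of the template):

.controller('MenuCtrl', function($scope, $ionicSideMenuDelegate) {
    // Toggle the left side menu from a button or gesture anywhere in the view
    $scope.toggleLeftMenu = function() {
        $ionicSideMenuDelegate.toggleLeft();
    };
})

Calling $ionicSideMenuDelegate.toggleLeft(), or its toggleRight() counterpart, opens or closes the corresponding menu, which is useful when something outside the header bar needs to control it.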
If you want to have the menu on both sides, your menu.html will look as follows: <ion-side-menus enable-menu-with-back-views="false">   <ion-side-menu-content>     <ion-nav-bar class="bar-stable">       <ion-nav-back-button>       </ion-nav-back-button>         <ion-nav-buttons side="left">         <button class="button button-icon button-clear ion- navicon" menu-toggle="left">         </button>       </ion-nav-buttons>       <ion-nav-buttons side="right">         <button class="button button-icon button-clear ion- navicon" menu-toggle="right">         </button>       </ion-nav-buttons>     </ion-nav-bar>     <ion-nav-view name="menuContent"></ion-nav-view>   </ion-side-menu-content>     <ion-side-menu side="left">     <ion-header-bar class="bar-stable">       <h1 class="title">Left</h1>     </ion-header-bar>     <ion-content>       <ion-list>         <ion-item menu-close ng-click="login()">           Login         </ion-item>         <ion-item menu-close href="#/app/search">           Search         </ion-item>         <ion-item menu-close href="#/app/browse">           Browse         </ion-item>         <ion-item menu-close href="#/app/playlists">           Playlists         </ion-item>       </ion-list>     </ion-content>   </ion-side-menu>   <ion-side-menu side="right">     <ion-header-bar class="bar-stable">       <h1 class="title">Right</h1>     </ion-header-bar>     <ion-content>       <ion-list>         <ion-item menu-close ng-click="login()">           Login         </ion-item>         <ion-item menu-close href="#/app/search">           Search         </ion-item>         <ion-item menu-close href="#/app/browse">           Browse         </ion-item>         <ion-item menu-close href="#/app/playlists">           Playlists         </ion-item>       </ion-list>     </ion-content>   </ion-side-menu> </ion-side-menus> You can read more about side menu directive and its services at http://ionicframework.com/docs/nightly/api/directive/ionSideMenus/. This concludes our journey through the navigation directives and services. Next, we will move to Ionic loading. Ionic loading The first service we are going to take a look at is $ionicLoading. This service is highly useful when you want to block a user's interaction from the main page and indicate to the user that there is some activity going on in the background. To test this, we will scaffold a new blank template and implement $ionicLoading; run this: ionic start -a "Example 21" -i app.example.twentyone example21 blank Using the cd command, go to the example21 folder and run this:   ionic serve This will launch the blank template in the browser. We will create an app controller and define the show and hide methods inside it. Open www/js/app.js and add the following code: .controller('AppCtrl', function($scope, $ionicLoading, $timeout) {       $scope.showLoadingOverlay = function() {         $ionicLoading.show({             template: 'Loading...'         });     };     $scope.hideLoadingOverlay = function() {         $ionicLoading.hide();     };       $scope.toggleOverlay = function() {         $scope.showLoadingOverlay();           // wait for 3 seconds and hide the overlay         $timeout(function() {             $scope.hideLoadingOverlay();         }, 3000);     };   }) We have a function named showLoadingOverlay, which will call $ionicLoading.show(), and a function named hideLoadingOverlay(), which will call $ionicLoading.hide(). 
We have also created a utility function named toggleOverlay(), which will call showLoadingOverlay() and after 3 seconds will call hideLoadingOverlay(). We will update our www/index.html body section as follows: <body ng-app="starter" ng-controller="AppCtrl">     <ion-header-bar class="bar-stable">         <h1 class="title">$ionicLoading service</h1>     </ion-header-bar>     <ion-content class="padding">         <button class="button button-dark" ng-click="toggleOverlay()">             Toggle Overlay         </button>     </ion-content> </body> We have a button that calls toggleOverlay(). If you save all the files, head back to the browser, and click on the Toggle Overlay button, you will see the following screenshot: As you can see, the overlay is shown till the hide method is called on $ionicLoading. You can also move the preceding logic inside a service and reuse it across the app. The service will look like this: .service('Loading', function($ionicLoading, $timeout) {     this.show = function() {         $ionicLoading.show({             template: 'Loading...'         });     };     this.hide = function() {         $ionicLoading.hide();     };       this.toggle= function() {         var self  = this;         self.show();           // wait for 3 seconds and hide the overlay         $timeout(function() {             self.hide();         }, 3000);     };   }) Now, once you inject the Loading service into your controller or directive, you can use Loading.show(), Loading.hide(), or Loading.toggle(). If you would like to show only a spinner icon instead of text, you can call the $ionicLoading.show method without any options: $scope.showLoadingOverlay = function() {         $ionicLoading.show();     }; Then, you will see this: You can configure the show method further. More information is available at http://ionicframework.com/docs/nightly/api/service/$ionicLoading/. You can also use the $ionicBackdrop service to show just a backdrop. Read more about $ionicBackdrop at http://ionicframework.com/docs/nightly/api/service/$ionicBackdrop/. You can also checkout the $ionicModal service at http://ionicframework.com/docs/api/service/$ionicModal/; it is quite similar to the loading service. Popover and Popup services Popover is a contextual view that generally appears next to the selected item. This component is used to show contextual information or to show more information about a component. To test this service, we will be scaffolding a new blank app: ionic start -a "Example 23" -i app.example.twentythree example23 blank Using the cd command, go to the example23 folder and run this:   ionic serve This will launch the blank template in the browser. We will add a new controller to the blank project named AppCtrl. We will be adding our controller code in www/js/app.js. .controller('AppCtrl', function($scope, $ionicPopover) {       // init the popover     $ionicPopover.fromTemplateUrl('button-options.html', {         scope: $scope     }).then(function(popover) {         $scope.popover = popover;     });       $scope.openPopover = function($event, type) {         $scope.type = type;         $scope.popover.show($event);     };       $scope.closePopover = function() {         $scope.popover.hide();         // if you are navigating away from the page once         // an option is selected, make sure to call         // $scope.popover.remove();     };   }); We are using the $ionicPopover service and setting up a popover from a template named button-options.html. 
We are assigning the current controller scope as the scope to the popover. We have two methods on the controller scope that will show and hide the popover. The openPopover method receives two options. One is the event and second is the type of the button we are clicking (more on this in a moment). Next, we update our www/index.html body section as follows: <body ng-app="starter" ng-controller="AppCtrl">     <ion-header-bar class="bar-positive">         <h1 class="title">Popover Service</h1>     </ion-header-bar>     <ion-content class="padding">         <button class="button button-block button-dark" ng- click="openPopover($event, 'dark')">             Dark Button         </button>         <button class="button button-block button-assertive" ng- click="openPopover($event, 'assertive')">             Assertive Button         </button>         <button class="button button-block button-calm" ng- click="openPopover($event, 'calm')">             Calm Button         </button>     </ion-content>     <script id="button-options.html" type="text/ng-template">         <ion-popover-view>             <ion-header-bar>                 <h1 class="title">{{type}} options</h1>             </ion-header-bar>             <ion-content>                 <div class="list">                     <a href="#" class="item item-icon-left">                         <i class="icon ion-ionic"></i> Option One                     </a>                     <a href="#" class="item item-icon-left">                         <i class="icon ion-help-buoy"></i> Option Two                     </a>                     <a href="#" class="item item-icon-left">                         <i class="icon ion-hammer"></i> Option Three                     </a>                     <a href="#" class="item item-icon-left" ng- click="closePopover()">                         <i class="icon ion-close"></i> Close                     </a>                 </div>             </ion-content>         </ion-popover-view>     </script> </body> Inside ion-content, we have created three buttons, each themed with a different mood (dark, assertive, and calm). When a user clicks on the button, we show a popover that is specific for that button. For this example, all we are doing is passing in the name of the mood and showing the mood name as the heading in the popover. But you can definitely do more. Do notice that we have wrapped our template content inside ion-popover-view. This takes care of positioning the modal appropriately. The template must be wrapped inside the ion-popover-view for the popover to work correctly. When we save all the files and head back to the browser, we will see the three buttons. Depending on the button you click, the heading of the popover changes, but the options remain the same for all of them: Then, when we click anywhere on the page or the close option, the popover closes. If you are navigating away from the page when an option is selected, make sure to call: $scope.popover.remove(); You can read more about Popover at http://ionicframework.com/docs/api/controller/ionicPopover/. Our GitHub organization With the ever-changing frontend world, keeping up with latest in the business is quite essential. During the course of the book, Cordova, Ionic, and Ionic CLI has evolved a lot and we are predicting that they will keep evolving till they become stable. So, we have created a GitHub organization named Learning Ionic (https://github.com/learning-ionic), which consists of code for all the chapters. 
You can raise issues or submit pull requests, and we will try to keep it updated with the latest changes, so you can always refer back to the GitHub organization for the most recent code. Summary In this article, we looked at various Ionic directives and services that help us develop applications easily. Resources for Article: Further resources on this subject: Mailing with Spring Mail [article] Implementing Membership Roles, Permissions, and Features [article] Time Travelling with Spring [article]
Creating an Adaptive Application Layout

Packt
14 Jan 2016
17 min read
In this article by Jim Wilson, the author of the book, Creating Dynamic UI with Android Fragments - Second Edition, we will see how to create an adaptive application layout. (For more resources related to this topic, see here.) In our application, we'll leave the wide-display aspect of the program alone because static layout management is working fine there. We work on the portrait-oriented handset aspect of the application. For these devices, we'll update the application's main activity to dynamically switch between displaying the fragment containing the list of books and the fragment displaying the selected book's description. Updating the layout to support dynamic fragments Before we write any code to dynamically manage the fragments within our application, we first need to modify the activity layout resource for portrait-oriented handset devices. This resource is contained in the activity_main.xml layout resource file that is not followed by (land) or (600dp). The layout resource currently appears as shown here: <LinearLayout android_orientation="vertical" android_layout_width="match_parent" android_layout_height="match_parent" > <!-- List of Book Titles --> <fragment android_layout_width="match_parent" android_layout_height="0dp" android_layout_weight="1" android_name="com.jwhh.fragments.BookListFragment2" android_id="@+id/fragmentTitles" tools_layout="@layout/fragment_book_list"/> </LinearLayout> We need to make two changes to the layout resource. The first is to add an id attribute to the LinearLayout view group so that we can easily locate it in code. The other change is to completely remove the fragment element. The updated layout resource now contains only the LinearLayout view group, which includes an id attribute value of @+id/layoutRoot. The layout resource now appears as shown here: <LinearLayout android_id="@+id/layoutRoot" android_orientation="vertical" android_layout_width="match_parent" android_layout_height="match_parent" > </LinearLayout> We still want our application to initially display the book list fragment, so removing the fragment element may seem like a strange change, but doing so is essential as we move our application to dynamically manage the fragments. We will eventually need to remove the book list fragment to replace it with the book description fragment. If we were to leave the book list fragment in the layout resource, our attempt to dynamically remove it later would silently fail. Only dynamically added fragments can be dynamically removed. Attempting to dynamically remove a fragment that was statically added with the fragment element in a layout resource will silently fail. Adapting to device differences When our application is running on a portrait-oriented handset device, the activity needs to programmatically load the fragment containing the book list. This is the same Fragment class, BookListFragment2, we were previously loading with the fragment element in the activity_main.xml layout resource file. Before we load the book list fragment, we first need to determine whether we're running on a device that requires dynamic fragment management. Remember that, for the wide-display devices, we're going to leave the static fragment management in place. There'll be a couple of places in our code where we'll need to take different logic paths depending on which layout we're using. 
So we'll need to add a boolean class-level field to the MainActivity class in which we can store whether we're using dynamic or static fragment management: boolean mIsDynamic; We can interrogate the device for its specific characteristics such as screen size and orientation. However, remember that much of our previous work was to configure our application to take the advantage of the Android resource system to automatically load the appropriate layout resources based on the device characteristics. Rather than repeating those characteristics checks in code, we can instead simply include the code to determine which layout resource was loaded. The layout resource for wide-display devices we created earlier, activity_main_wide.xml, statically loads both the book list fragment and the book description fragment. We can include in our activity's onCreate method code to determine whether the loaded layout resource includes one of those fragments as shown here: public class MainActivity extends Activity implements BookListFragment.OnSelectedBookChangeListener { protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main_dynamic); // Get the book description fragment FragmentManager fm = getFragmentManager(); Fragment bookDescFragment = fm.findFragmentById(R.id.fragmentDescription); // If not found than we're doing dynamic mgmt mIsDynamic = bookDescFragment == null || !bookDescFragment.isInLayout(); } // Other members elided for clarity } When the call to the setContentView method returns, we know that the appropriate layout resource for the current device has been loaded. We then use the FragmentManager instance to search for the fragment with an id value of R.id.fragmentDescription that is included in the layout resource for wide-display devices, but not the layout resource for portrait-oriented handsets. A return value of null indicates that the fragment was not loaded and we are, therefore, on a device that requires us to dynamically manage the fragments. In addition to the test for null, we also include the call to the isInLayout method to protect against one special case scenario. In the scenario where the device is in a landscape layout and then rotated to portrait, a cached instance to the fragment identified by R.id.fragmentDescription may still exist even though in the current orientation, the activity is not using the fragment. By calling the isInLayout method, we're able to determine whether the returned reference is part of the currently loaded layout. With this, our test to set the mIsDynamic member variable effectively says that we'll set mIsDynamic to true when the R.id.fragmentDescription fragment is not found (equals null), or it's found but is not a part of the currently loaded layout (!bookDescFragment.isInLayout). 
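For reference, this check works because the wide-display resource, activity_main_wide.xml, declares both fragments statically. A simplified sketch of that layout is shown below; the BookDescFragment class name and the layout weights are assumptions based on the portrait resource, so the actual file in the project may differ slightly:

<LinearLayout
    android:orientation="horizontal"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <!-- List of Book Titles -->
    <fragment
        android:id="@+id/fragmentTitles"
        android:name="com.jwhh.fragments.BookListFragment2"
        android:layout_width="0dp"
        android:layout_height="match_parent"
        android:layout_weight="1"
        tools:layout="@layout/fragment_book_list"/>

    <!-- Description of the selected book -->
    <fragment
        android:id="@+id/fragmentDescription"
        android:name="com.jwhh.fragments.BookDescFragment"
        android:layout_width="0dp"
        android:layout_height="match_parent"
        android:layout_weight="2"
        tools:layout="@layout/fragment_book_desc"/>

</LinearLayout>

Because R.id.fragmentDescription appears only in this resource, the findFragmentById call above returns null on portrait-oriented handsets, which is exactly what drives the mIsDynamic flag.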
Dynamically loading a fragment at startup Now that we're able to determine whether or not dynamically loading the book list fragment is necessary, we add the code to do so to our onCreate method as shown here: protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main_dynamic); // Get the book description fragment FragmentManager fm = getFragmentManager(); Fragment bookDescFragment = fm.findFragmentById(R.id.fragmentDescription); // If not found than we're doing dynamic mgmt mIsDynamic = bookDescFragment == null || !bookDescFragment.isInLayout(); // Load the list fragment if necessary if (mIsDynamic) { // Begin transaction FragmentTransaction ft = fm.beginTransaction(); // Create the Fragment and add BookListFragment2 listFragment = new BookListFragment2(); ft.add(R.id.layoutRoot, listFragment, "bookList"); // Commit the changes ft.commit(); } } Following the check to determine whether we're on a device that requires dynamic fragment management, we include FragmentTransaction to add an instance of the BookListFragment2 class to the activity as a child of the LinearLayout view group identified by the id value R.id.layoutRoot. This code capitalizes on the changes we made to the activity_main.xml resource file by removing the fragment element and including an id value on the LinearLayout view group. Now that we're dynamically loading the book list, we're ready to get rid of that other activity. Transitioning between fragments As you'll recall, whenever the user selects a book title within the BookListFragment2 class, the fragment notifies the main activity by calling the MainActivity.onSelectedBookChanged method passing the index of the selected book. The onSelectedBookChanged method currently appears as follows: public void onSelectedBookChanged(int bookIndex) { FragmentManager fm = getFragmentManager(); // Get the book description fragment BookDescFragment bookDescFragment = (BookDescFragment) fm.findFragmentById(R.id.fragmentDescription); // Check validity of fragment reference if(bookDescFragment == null || !bookDescFragment.isVisible()){ // Use activity to display description Intent intent = new Intent(this, BookDescActivity.class); intent.putExtra("bookIndex", bookIndex); startActivity(intent); } else { // Use contained fragment to display description bookDescFragment.setBook(bookIndex); } } In the current implementation, we use a technique similar to what we did in the onCreate method to determine which layout is loaded; we try to find the book description fragment within the currently loaded layout. If we find it, we know the current layout includes the fragment, so we go ahead and set the book description directly on the fragment. If we don't find it, we call the startActivity method to display the activity that contains the book description fragment. Starting a separate activity to handle the interaction with the BookListFragment2 class unnecessarily adds complexity to our program. Doing so requires that we pass data from one activity to another, which can sometimes be complex, especially if there are a large number of values or some of those values are object types that require additional coding to be passed in an Intent instance. More importantly, using a separate activity to manage the interaction with the BookListFragment2 class results in redundant work due to the fact that we already have all the code necessary to interact with the BookListFragment2 class in the MainActivity class. 
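As a quick reminder of how that interaction is wired, the activity implements the BookListFragment.OnSelectedBookChangeListener interface, which the list fragment uses to report selections. The declaration is not repeated in this article, but it amounts to something like the following sketch inside the list fragment:

public interface OnSelectedBookChangeListener {
    // Called by the fragment whenever the user taps a book title
    void onSelectedBookChanged(int bookIndex);
}

The fragment invokes onSelectedBookChanged on its host activity whenever a title is tapped, which is exactly the method we rework in the following sections.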
We'd prefer to handle the interaction with the BookListFragment2 class consistently in all cases. Eliminating redundant handling To eliminate this redundant handling, we start by stripping any code in the current implementation that deals with starting an activity. We can also avoid repeating the check for the book description fragment because we performed that check earlier in the onCreate method. Instead, we can now check the mIsDynamic class-level field to determine the proper handling. With that in mind, now we can initially modify the onSelectedBookChanged method to look like the following code: public void onSelectedBookChanged(int bookIndex) { BookDescFragment bookDescFragment; FragmentManager fm = getFragmentManager(); // Check validity of fragment reference if(mIsDynamic) { // Handle dynamic switch to description fragment } else { // Use the already visible description fragment bookDescFragment = (BookDescFragment) fm.findFragmentById(R.id.fragmentDescription); bookDescFragment.setBook(bookIndex); } } We now check the mIsDynamic member field to determine the appropriate code path. We still have some work to do if it turns out to be true, but in the case of it being false, we can simply get a reference to the book description fragment that we know is contained within the current layout and set the book index on it much like we were doing before. Creating the fragment on the fly In the case of the mIsDynamic field being true, we can display the book description fragment by simply replacing the book list fragment we added in the onCreate method with the book description fragment using the code shown here: FragmentTransaction ft = fm.beginTransaction(); bookDescFragment = new BookDescFragment(); ft.replace(R.id.layoutRoot, bookDescFragment, "bookDescription"); ft.addToBackStack(null); ft.setCustomAnimations( android.R.animator.fade_in, android.R.animator.fade_out); ft.commit(); Within FragmentTransaction, we create an instance of the BookDescFragment class and call the replace method passing the ID of the same view group that contains the BookListFragment2 instance that we added in the onCreate method. We include a call to the addToBackStack method so that the back button functions correctly, allowing the user to tap the back button to return to the book list. The code includes a call to the FragmentTransaction class' setCustomAnimations method that creates a fade effect when the user switches from one fragment to another. Managing asynchronous creation We have one final challenge that is to set the book index on the dynamically added book description fragment. Our initial thought might be to simply call the BookDescFragment class' setBook method after we create the BookDescFragment instance, but let's first take a look at the current implementation of the setBook method. The method currently appears as follows: public void setBook(int bookIndex) { // Lookup the book description String bookDescription = mBookDescriptions[bookIndex]; // Display it mBookDescriptionTextView.setText(bookDescription); } The last line of the method attempts to set the value of mBookDescriptionTextView within the fragment, which is a problem. Remember that the work we do within a FragmentTransaction class is not immediately applied to the user interface. Instead, as we discussed earlier in this chapter, in the deferred execution of transaction changes section, the work within the transaction is performed sometime after the completion of the call to the commit method. 
Therefore, the BookDescFragment instance's onCreate and onCreateView methods have not yet been called. As a result, any views associated with the BookDescFragment instance have not yet been created. An attempt to call the setText method on the BookDescriptionTextView instance will result in a null reference exception. One possible solution is to modify the setBook method to be aware of the current state of the fragment. In this scenario, the setBook method checks whether the BookDescFragment instance has been fully created. If not, it will store the book index value in the class-level field and later automatically set the BookDescriptionTextView value as part of the creation process. Although there may be some scenarios that warrant such a complicated solution, fragments give us an easier one. The Fragment base class includes a method called setArguments. With the setArguments method, we can attach data values, otherwise known as arguments, to the fragment that can then be accessed later in the fragment lifecycle using the getArguments method. Much like we do when associating extras with an Intent instance, a good practice is to define constants on the target class to name the argument values. It is also a good programming practice to provide a constant for an argument default value in the case of non-nullable types such as integers as shown here: public class BookDescFragment extends Fragment { // Book index argument name public static final String BOOK_INDEX = "book index"; // Book index default value private static final int BOOK_INDEX_NOT_SET = -1; // Other members elided for clarity } If you used Android Studio to generate the BookDescFragment class, you'll find that the ARG_PARAM1 and ARG_PARAM2 constants are included in the class. Android Studio includes these constants to provide examples of how to pass values to fragments just as we're discussing now. Since we're adding our own constant declarations, you can delete the ARG_PARAM1 and ARG_PARAM2 constants from the BookDescFragment class and also the lines in the generated BookDescFragment.onCreate and BookDescFragment.newInstance methods that reference them. We'll use the BOOK_INDEX constant to get and set the book index value and the BOOK_INDEX_NOT_SET constant to indicate whether the book index argument has been set. To simplify the process of creating the BookDescFragment instance and passing it the book index value, we'll add a static factory method named newInstance to the BookDescFragment class that appears as follows: public static BookDescFragment newInstance(int bookIndex) { BookDescFragment fragment = new BookDescFragment(); Bundle args = new Bundle(); args.putInt(BOOK_INDEX, bookIndex); fragment.setArguments(args); return fragment; } The newInstance methods start by creating an instance of the BookDescFragment class. It then creates an instance of the Bundle class, stores the book index in the Bundle instance, and then uses the setArguments method to attach it to the BookDescFragment instance. Finally, the newInstance method returns the BookDescFragment instance. We'll use this method shortly within the MainActivity class to create our BookDescFragment instance. If you used Android Studio to generate the BookDescFragment class, you'll find that most of the newInstance method is already in place. The only change you'll have to make is replace the two lines that referenced the ARG_PARAM1 and ARG_PARAM2 constants you deleted with the call to the args.putInt method shown in the preceding code. 
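With the factory method in place, creating a fully configured fragment becomes a one-liner wherever it is needed; for example (the local variable name is just for illustration):

BookDescFragment bookDescFragment = BookDescFragment.newInstance(bookIndex);

We will make exactly this call from the main activity a little later when we put everything together.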
We can now update the BookDescFragment class' onCreateView method to look for arguments that might be attached to the fragment. Before we make any changes to the onCreateView method, let's look at the current implementation that appears as follows: public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View viewHierarchy = inflater.inflate( R.layout.fragment_book_desc, container, false); // Load array of book descriptions mBookDescriptions = getResources().getStringArray(R.array.bookDescriptions); // Get reference to book description text view mBookDescriptionTextView = (TextView) viewHierarchy.findViewById(R.id.bookDescription); return viewHierarchy; } As the onCreateView method is currently implemented, it simply inflates the layout resource, loads the array containing the book descriptions, and caches a reference to the TextView instance where the book description is loaded. We can now update the method to look for and use a book index that might be attached as an argument. The updated method appears as follows: public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View viewHierarchy = inflater.inflate( R.layout.fragment_book_desc, container, false); // Load array of book descriptions mBookDescriptions = getResources().getStringArray(R.array.bookDescriptions); // Get reference to book description text view mBookDescriptionTextView = (TextView) viewHierarchy.findViewById(R.id.bookDescription); // Retrieve the book index if attached Bundle args = getArguments(); int bookIndex = args != null ? args.getInt(BOOK_INDEX, BOOK_INDEX_NOT_SET) : BOOK_INDEX_NOT_SET; // If we find the book index, use it if (bookIndex != BOOK_INDEX_NOT_SET) setBook(bookIndex); return viewHierarchy; } Just before we return the fragment's view hierarchy, we call the getArguments method to retrieve any arguments that might be attached. The arguments are returned as an instance of the Bundle class. If the Bundle instance is non-null, we call the Bundle class' getInt method to retrieve the book index and assign it to the bookIndex local variable. The second parameter of the getInt method, BOOK_INDEX_NOT_SET, is returned if the fragment happens to have arguments attached that do not include the book index. Although this should not normally be the case, being prepared for any such an unexpected circumstance is a good idea. Finally, we check the value of the bookIndex variable. If it contains a book index, we call the fragment's setBook method to display it. Putting it all together With the BookDescFragment class now including support for attaching the book index as an argument, we're ready to fully implement the main activity's onSelectedBookChanged method to include switching to the BookDescFragment instance and attaching the book index as an argument. 
The method now appears as follows: public void onSelectedBookChanged(int bookIndex) { BookDescFragment bookDescFragment; FragmentManager fm = getFragmentManager(); // Check validity of fragment reference if(mIsDynamic){ // Handle dynamic switch to description fragment FragmentTransaction ft = fm.beginTransaction(); // Create the fragment and pass the book index bookDescFragment = BookDescFragment.newInstance(bookIndex); // Replace the book list with the description ft.replace(R.id.layoutRoot, bookDescFragment, "bookDescription"); ft.addToBackStack(null); ft.setCustomAnimations( android.R.animator.fade_in, android.R.animator.fade_out); ft.commit(); } else { // Use the already visible description fragment bookDescFragment = (BookDescFragment) fm.findFragmentById(R.id.fragmentDescription); bookDescFragment.setBook(bookIndex); } } Just as before, we start with the check to see whether we're doing dynamic fragment management. Once we determine that we are, we start the FragmentTransaction instance and create the BookDescFragment instance. We then create a new Bundle instance, store the book index into it, and then attach the Bundle instance to the BookDescFragment instance with the setArguments method. Finally, we put the BookDescFragment instance into place as the current fragment, take care of the back stack, enable animation, and complete the transaction. Everything is now complete. When the user selects a book title from the list, the onSelectedBookChanged method gets called. The onSelectedBookChanged method then creates and displays the BookDescFragment instance with the appropriate book index attached as an argument. When the BookDescFragment instance is ultimately created, its onCreateView method will then retrieve the book index from the arguments and display the appropriate description. Summary In this article, we saw that using the FragmentTransaction class, we're able to dynamically switch between individual fragments within an activity, eliminating the need to create a separate activity class for each screen in our application. This helps to prevent the proliferation of unnecessary activity classes, better organize our applications, and avoid the associated increase in complexity. Resources for Article: Further resources on this subject: UI elements and their implementation[article] Creating Dynamic UI with Android Fragments[article] Android Fragmentation Management[article]
How to Use Currying in Swift for Fun and Profit

Alexander Altman
06 Apr 2016
5 min read
Swift takes inspiration from functional languages in a lot of its features, and one of those features is currying. The idea behind currying is relatively straightforward, and Apple has already taken the time to explain the basics of it in The Swift Programming Language. Nonetheless, there's a lot more to currying in Swift than what first meets the eye. What is currying? Let's say we have a function, f, which takes two parameters, a: Int and b: String, and returns a Bool: func f(a: Int, _ b: String) -> Bool { // … do somthing here … } Here, we're taking both a and b simultaneously as parameters to our function, but we don't have to do it that way! We can just as easily write this function to take just a as a parameter and then return another function that takes b as it's only parameter and returns the final result: func f(a: Int) -> ((String) -> Bool) { return { b in // … do somthing here … } } (I've added a few extra parentheses for clarity, but Swift is actually just fine if you write String -> Bool instead of ((String) -> Bool); the two notations mean exactly the same thing.) This formulation uses a closure, but you can also use a nested function for the exact same effect: func f(a: Int) -> ((String) -> Bool) { func g(b: String) -> Bool { // … do somthing here … } return g } Of course, Swift wouldn't be Swift without providing a convenient syntax for things like this, so there is even a third way to write the curried version of f, and it's (usually) preferred over either of the previous two: func f(a: Int)(_ b: String) -> Bool { // … do somthing here … } Any of these iterations of our curried function f can be called like this: let res: Bool = f(1)("hello") Which should look very similar to the way you would call the original uncurried f: let res: Bool = f(1, "hello") Currying isn't limited to just two parameters either; here's an example of a partially curried function of five parameters (taking them in groups of two, one, and two): func weirdAddition(x: Int, use useX: Bool)(_ y: Int)(_ z: Int, use useZ: Bool) -> Int { return (useX ? x : 0) + y + (useZ ? z : 0) } How is currying used in Swift? Believe it or not, Swift actually uses currying all over the place, even if you don't notice it. Probably, the most prominent example is that of instance methods, which are just curried type methods: // This: NSColor.blueColor().shadowWithLevel(1/3) // …is the same as this: NSColor.shadowWithLevel(NSColor.blueColor())(1/3) But, there's a much deeper implication of currying's availability in Swift: all functions secretly take only one parameter! How is this possible, you ask? It has to do with how Swift treats tuples. A function that “officially” takes, say, three parameters, actually only takes one parameter that happens to be a three-tuple. This is perhaps most visible when exploited via the higher-order collections method: func dotProduct(xVec: [Double], _ yVec: [Double]) -> Double { // Note that (this particular overload of) the `*` operator // has the signature `(Double, Double) -> Double`. return zip(xVec, yVec).map(*).reduce(0, combine: +) } It would seem that anything you can do with tuples, you can do with a function parameter list and vice versa; in fact, that is almost true. The four features of function parameter lists that don't carry over directly into tuples are the variadic, inout, defaulted, and @autoclosure parameters. 
You can, technically, form a variadic, inout, defaulted, or @autoclosure tuple type, but if you try to use it in any context other than as a function's parameter type, swiftc will give you an error. What you definitely can do with tuples is use named values, notwithstanding the unfortunate prohibition on single-element tuples in Swift (named or not). Apple provides some information on tuples with named elements in The Swift Programming Language; it also gives an example of one in the same book. It should be noted that the names given to tuple elements are somewhat ephemeral in that they can very easily be introduced, eliminated, and altered via implicit conversions. This applies regardless of whether the tuple type is that of a standalone value or of a function's parameter: // converting names in a function's parameter list func printBoth(first x: Int, second y: String) { print(x, y, separator: ", ") } let printTwo: (a: Int, b: String) -> Void = printBoth // converting names in a standalone tuple type // (for some reason, Swift dislikes assigning `firstAndSecond` // directly to `aAndB`, but going through `nameless` is fine) let firstAndSecond: (first: Int, second: String) = (first: 1, second: "hello") let nameless: (Int, String) = firstAndSecond let aAndB: (a: Int, b: String) = nameless Currying, with its connection to tuples, is a very powerful feature of Swift. Use it wherever it seems helpful, and the language will be more than happy to oblige. About the author Alexander Altman is a functional programming enthusiast who enjoys the mathematical and ergonomic aspects of programming language design. He's been working with Swift since the language's first public release, and he is one of the core contributors to the TypeLift project.
BSD Socket Library

Packt
17 Jan 2014
9 min read
(For more resources related to this topic, see here.) Introduction The Berkeley Socket API (where API stands for Application Programming Interface) is a set of standard functions used for inter-process network communications. Other socket APIs also exist; however, the Berkeley socket is generally regarded as the standard. The Berkeley Socket API was originally introduced in 1983 when 4.2 BSD was released. The API has evolved with very few modifications into a part of the Portable Operating System Interface for Unix (POSIX) specification. All modern operating systems have some implementation of the Berkeley Socket Interface for connecting devices to the Internet. Even Winsock, which is MS Window's socket implementation, closely follows the Berkeley standards. BSD sockets generally rely on client/server architecture when they establish their connections. Client/server architecture is a networking approach where a device is assigned one of the two following roles: Server: A server is a device that selectively shares resources with other devices on the network Client: A client is a device that connects to a server to make use of the shared resources Great examples of the client/server architecture are web pages. When you open a web page in your favorite browser, for example https://www.packtpub.com, your browser (and therefore your computer) becomes the client and Packt Publishing's web servers become the servers. One very important concept to keep in mind is that any device can be a server, a client, or both. For example, you may be visiting the Packt Publishing website, which makes you a client, and at the same time you have file sharing enabled, which also makes your device a server. The Socket API generally uses one of the following two core protocols: Transmission Control Protocol (TCP): TCP provides a reliable, ordered, and error-checked delivery of a stream of data between two devices on the same network. TCP is generally used when you need to ensure that all packets are correctly received and are in the correct order (for example, web pages). User Datagram Protocol (UDP): UDP does not provide any of the error-checking or reliability features of TCP, but offers much less overhead. UDP is generally used when providing information to the client quickly is more important than missing packets (for example, a streaming video). Darwin, which is an open source POSIX compliant operating system, forms the core set of components upon which Mac OS X and iOS are based. This means that both OS X and iOS contain the BSD Socket Library. The last paragraph is very important to understand when you begin thinking about creating network applications for the iOS platform, because almost any code example that uses the BSD Socket Library will work on the iOS platform. The biggest difference between using the BSD Socket API on any standard Unix platform and the iOS platform is that the iOS platform does not support forking of processes. You will need to use multiple threads rather than multiple processes. The BSD Socket API can be used to build both client and server applications. There are a few networking concepts that you should understand: IP address: Any device on an Internet Protocol (IP) network, whether it is a client or server, has a unique identifier known as an IP address. The IP address serves two basic purposes: host identification and location identification. There are currently two IP address formats: IPv4: This is currently the standard for the Internet and most internal intranets. 
This is an example of an IPv4 address: 83.166.169.231. IPv6: This is the latest revision of the Internet Protocol (IP). It was developed to eventually replace IPv4 and to address the long-anticipated problem of running out of IPv4 addresses. This is an example of an IPv6 address: 2001:0db8:0000:0000:0000:ff00:0042:8329. An IPv6 can be shortened by replacing all the consecutive zero fields with two colons. The previous address could be rewritten as 2001:0db8::ff00:0042:8329. Ports: A port is an application or process-specific software construct serving as a communications endpoint on a device connected to an IP network, where the IP address identifies the device to connect to, and the port number identifies the application to connect to. The best way to think of network addressing is to think about how you mail a letter. For a letter to reach its destination, you must put the complete address on the envelope. For example, if you were going to send a letter to friend who lived at the following address: Your Friend 123 Main St Apt. 223 San Francisco CA, 94123 If I were to translate that into network addressing, the IP address would be equal to the street address, city, state, and zip code (123 Main St, San Francisco CA, 94123), and the apartment number would be equal to the port number (223). So the IP address gets you to the exact location, and the port number will tell you which door to knock on. A device has 65,536 available ports with the first 1024 being reserved for common protocols such as HTTP, HTTPS, SSH, and SMTP. Fully Qualified Domain Name (FQDN): As humans, we are not very good at remembering numbers; for example, if your friend tells you that he found a really excellent website for books and the address was 83.166.169.231, you probably would not remember that two minutes from now. However, if he tells you that the address was www.packtpub.com, you would probably remember it. FQDN is the name that can be used to refer to a device on an IP network. So now you may be asking yourself, how does the name get translated to the IP address? The Domain Name Server (DNS) would do that. Domain Name System Servers: A Domain Name System Server translates a fully qualified domain name to an IP address. When you use an FQDN of www.packtpub.com, your computer must get the IP address of the device from the DNS configured in your system. To find out what the primary DNS is for your machine, open a terminal window and type the following command: cat /etc/resolv.conf Byte order: As humans, when we look at a number, we put the most significant number first and the least significant number last; for example, in number 123, 1 represents 100, so it is the most significant number, while 3 is the least significant number. For computers, the byte order refers to the order in which data (not only integers) is stored into memory. Some computers store the most significant bytes first (at the lowest byte address), while others store the most significant bytes last. If a device stores the most significant bytes first, it is known as big-endian, while a device that stores the most significant bytes last is known as little-endian. The order of how data is stored in memory is of great importance when developing network applications, where you may have two devices that use different byte-ordering communication. You will need to account for this by using the Network-to-Host and Host-to-Network functions to convert between the byte order of your device and the byte order of the network. 
The byte order of the device is commonly referred to as the host byte order, and the byte order of the network is commonly referred to as the network byte order. Finding the byte order of your device One of the concepts that was briefly discussed in this article was how devices store information in memory (byte order). After that discussion, you may be wondering what the byte order of your device is. The byte order of a device depends on the Microprocessor architecture being used by the device. You can pretty easily go on to the Internet and search for "Mac OS X i386 byte order" and find out what the byte order is, but where is the fun in that? We are developers, so let's see if we can figure it out with code. We can determine the byte order of our devices with a few lines of C code; however, like most of the code in this article, we will put the C code within an Objective-C wrapper to make it easy to port to your projects. How to do it… Let's get started by defining an ENUM in our header file: We create an ENUM that will be used to identify the byte order of the system as shown in the following code: typedef NS_ENUM(NSUInteger, EndianType) { ENDIAN_UNKNOWN, ENDIAN_LITTLE, ENDIAN_BIG }; To determine the byte order of the device, we will use the byteOrder method as shown in the following code: -( EndianType)byteOrder { union { short sNum; char cNum[sizeof(short)]; } un; un.sNum = 0x0102; if (sizeof(short) == 2) { if(un.cNum[0] == 1 && un.cNum[1] == 2) return ENDIAN_BIG; else if (un.cNum[0] == 2 && un.cNum[1] == 1) return ENDIAN_LITTLE; else return ENDIAN_UNKNOWN; } else return ENDIAN_UNKNOWN; } How it works… In the ByteOrder header file, we defined an ENUM with three constants. The constants are as follows: ENDIAN_UNKNOWN: We are unable to determine the byte order of the device ENDIAN_LITTLE: This specifies that the most significant bytes are last (little-endian) ENDIAN_BIG: This specifies that the most significant bytes are first (big-endian) The byteOrder method determines the byte order of our device and returns an integer that can be translated using the constants defined in the header file. To determine the byte order of our device, we begin by creating a union of short int and char[]. We then store the value 0x0102 in the union. Finally, we look at the character array to determine the order in which the integer was stored in the character array. If the number one was stored first, it means that the device uses big-endian; if the number two was stored first, it means that the device uses little-endian.
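To round this out, the Host-to-Network and Network-to-Host functions mentioned earlier are available through the standard socket headers, so you rarely need to branch on the result of byteOrder yourself. Here is a minimal C sketch; the port number is an arbitrary example:

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    // A port number in the device's (host) byte order
    uint16_t hostPort = 8080;

    // Convert to network byte order before storing it in a sockaddr structure
    uint16_t networkPort = htons(hostPort);

    // Convert back to host byte order, for example after reading it from a packet
    uint16_t backToHost = ntohs(networkPort);

    printf("host: %u, network: %u, back: %u\n",
           (unsigned)hostPort, (unsigned)networkPort, (unsigned)backToHost);
    return 0;
}

On a little-endian device such as an iOS handset, htons and ntohs swap the two bytes; on a big-endian device they return the value unchanged, so using them consistently keeps your code portable. The 32-bit counterparts, htonl and ntohl, work the same way for values such as IPv4 addresses.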
Creating a Puzzle App

Packt
06 Sep 2013
11 min read
(For more resources related to this topic, see here.) A quick introduction to puzzle games Puzzle games are a genre of video games that have been around for decades. These types of games challenge players to use logic and critical thinking to complete patterns. There is a large variety of puzzle games available, and in this article, we'll start by learning how to create a 3-by-3 jigsaw puzzle titled My Jigsaw Puzzle. In My Jigsaw Puzzle, players will have to complete a jigsaw puzzle by using nine puzzle pieces. Each puzzle piece will have an image on it, and the player will have to match the puzzle piece to the puzzle board by dragging the pieces from right to left. When the puzzle piece matches the correct location, the game will lock in the piece. Let's take a look at the final game product. Downloading the starter kit Before creating our puzzle app, you can get the starter kit for the jigsaw puzzle from the code files available with this book. The starter kit includes all of the graphics that we will be using in this article. My Jigsaw Puzzle For the Frank's Fitness app, we used Corona's built-in new project creator to help us with setting up our project. With My Jigsaw Puzzle, we will be creating the project from scratch. Although creating a project from scratch can be more time consuming, the process will introduce you to each element that goes into Corona's new project creator. Creating the project will include creating the build.settings, config.lua, main.lua, menu.lua, and gameplay.lua files. Before we can start creating the files for our project, we will need to create a new project folder on your computer. This folder will hold all of the files that will be used in our app. build.settings The first file that we will create is the build.settings file. This file will handle our device orientation and specify our icons for the iPhone. Inside our build.settings file, we will create one table named settings, which will hold two more tables named orientation and iphone. The orientation table will tell our app to start in landscape mode and to only support landscapeLeft and landscapeRight. The iphone table will specify the icons that we want to use for our app. To create the build.settings file, create a new file named build.settings in your project's folder and input the following code: settings = { orientation = { default = "landscapeRight", supported = { "landscapeLeft", "landscapeRight" }, }, iphone = { plist = { CFBundleIconFile = "Icon.png", CFBundleIconFiles = { "Icon.png", "Icon@2x.png", "Icon-72.png", } } }} config.lua Next, we will be creating a file named config.lua in our project's folder. The config.lua file is used to specify any runtime properties for our app. For My Jigsaw Puzzle, we will be specifying the width, height, and scale methods. We will be using the letterbox scale method, which will uniformly scale content as much as possible. When letterbox doesn't scale to the entire screen, our app will display black borders outside of the playable screen. To create the config.lua file, create a new file named config.lua in your project's folder and input the following code: application ={ content = { width = 320, height = 480, scale = "letterbox" }} main.lua Now that we've configured our project, we will be creating the main.lua file—the start point for every app. For now, we are going to keep the file simple. Our main. lua file will hide the status bar while the app is active and redirect the app to the next file—menu.lua. 
To create main.lua, create a new file named main.lua in your project's folder and copy the following code into the file: display.setStatusBar ( display.HiddenStatusBar )local storyboard = require ( "storyboard" )storyboard.gotoScene("menu") menu.lua Our next step is to create the menu for our app. The menu will show a background, the game title, and a play button. The player can then tap on the PLAY GAME button to start playing the jigsaw puzzle. To get started, create a new file in your project's folder called menu.lua. Once the file has been created, open menu.lua in your favorite text editor. Let's start the file by getting the widget and storyboard libraries. We'll also set up Storyboard by assigning the variable scene to storyboard.newScene(). local widget = require "widget"local storyboard = require( "storyboard" )local scene = storyboard.newScene() Next, we will set up our createScene() function. The function createScene() is called when entering a scene for the first time. Inside this function, we will create objects that will be displayed on the screen. Most of the following code should look familiar by now. Here, we are creating two image display objects and one widget. Each object will also be inserted into the variable group to let our app know that these objects belong to the scene menu.lua. function scene:createScene( event ) local group = self.view background = display.newImageRect( "woodboard.png", 480, 320 ) background.x = display.contentWidth*0.5 background.y = display.contentHeight*0.5 group:insert(background) logo = display.newImageRect( "logo.png", 400, 54 ) logo.x = display.contentWidth/2 logo.y = 65 group:insert(logo) function onPlayBtnRelease() storyboard.gotoScene("gameplay") end playBtn = widget.newButton{ default="button-play.png", over="button-play.png", width=200, height=83, onRelease = onPlayBtnRelease } playBtn.x = display.contentWidth/2 playBtn.y = display.contentHeight/2 group:insert(playBtn)end After the createScene() function, we will set up the enterScene() function. The enterScene() function is called after a scene has moved on to the screen. In My Jigsaw Puzzle, we will be using this function to remove the gameplay scene. We need to make sure we are removing the gameplay scene so that the jigsaw puzzle is reset and the player can play a new game. function scene:enterScene( event ) storyboard.removeScene( "gameplay" )end After we've created our createScene() and enterScene() functions, we need to set up our event listeners for Storyboard. scene:addEventListener( "createScene", scene )scene:addEventListener( "enterScene", scene ) Finally, we end our menu.lua file by adding the following line: return scene This line of code lets our app know that we are done with this scene. Now that we've added the last line, we have finished editing menu.lua, and we will now start setting up our jigsaw puzzle. gameplay.lua By now, our game has been configured and we have set up two files—main.lua and menu.lua. In our next step, we will be creating the jigsaw puzzle. The following screenshot shows the puzzle that we will be making: Getting local libraries To get started, create a new file called gameplay.lua and open the file in your favorite text editor. Similar to our menu.lua file, we need to start the file by getting in other libraries and setting up Storyboard. 
local widget = require("widget")local storyboard = require( "storyboard" )local scene = storyboard.newScene() Creating variables After our local libraries, we are going to create some variables to use in gameplay. lua. When you separate the variables from the rest of your code, the process of refining your app later becomes easier. _W = display.contentWidth_H = display.contentHeightpuzzlePiecesCompleted = 0totalPuzzlePieces = 9puzzlePieceWidth = 120puzzlePieceHeight = 120puzzlePieceStartingY = { 80,220,360,500,640,780,920,1060,1200 }puzzlePieceSlideUp = 140puzzleWidth, puzzleHeight = 320, 320puzzlePieces = {}puzzlePieceCheckpoint = { {x=-243,y=76}, {x=-160, y=76}, {x=-76, y=74}, {x=-243,y=177}, {x=-143, y=157}, {x=-57, y=147}, {x=-261,y=258}, {x=-176,y=250}, {x=-74,y=248}}puzzlePieceFinalPosition = { {x=77,y=75}, {x=160, y=75}, {x=244, y=75}, {x=77,y=175}, {x=179, y=158}, {x=265, y=144}, {x=58,y=258}, {x=145,y=251}, {x=248,y=247}} Here's a breakdown of what we will be using each variable for in our app: _W and _H: These variables capture the width and height of the screen. In our app, we have already specified the size of our app to be 480 x 320. puzzlePiecesCompleted: This variable is used to track the progress of the game by tracking the number of puzzle pieces completed. totalPuzzlePieces: This variable allows us to tell our app how many puzzle pieces we are using. puzzlePieceWidth and puzzlePieceHeight: These variables specify the width and height of our puzzle piece images within the app. puzzlePieceStartingY: This table contains the starting Y location of each puzzle piece. Since we can't have all nine puzzle pieces on screen at the same time, we are displaying the first two pieces and the other seven pieces are placed off the screen below the first two. We will be going over this in detail when we add the puzzle pieces. puzzlePieceSlideUp: After a puzzle piece is added, we will slide the puzzle pieces up; this variable sets the sliding distance. puzzleWidth and puzzleHeight: These variables specify the width and height of our puzzle board. puzzlePieces: This creates a table to hold our puzzle pieces once they are added to the board. puzzlePieceCheckpoint: This table sets up the checkpoints for each puzzle piece in x and y coordinates. When a puzzle piece is dragged to the checkpoint, it will be locked into position. When we add the checkpoint logic, we will learn more about this in greater detail. puzzlePieceFinalPosition: This table sets up the final puzzle location in x and y coordinates. This table is only used once the puzzle piece passes the checkpoint. Creating display groups After we have added our variables, we are going to create two display groups to hold our display objects. Display groups are simply a collection of display objects that allow us to manipulate multiple display objects at once. In our app, we will be creating two display groups—playGameGroup and finishGameGroup. playGameGroup will contain objects that are used when the game is being played and the finishGameGroup will contain objects that are used when the puzzle is complete.Insert the following code after the variables: playGameGroup = display.newGroup()finishGameGroup = display.newGroup() The shuffle function Our next task is to create a shuffle function for My Jigsaw Puzzle. This function will randomize the puzzle pieces that are presented on the right side of the screen. 
Without the shuffle function, our puzzle pieces would always be presented in the same 1, 2, 3 order, while the shuffle function makes sure that the player has a new experience every time. Creating the shuffle function To create the shuffle function, we will start by creating a function named shuffle. This function will accept one argument (t) and proceed to randomize the table for us. We're going to be using some advanced topics in this function, but before we start explaining it, let's add the following code to gameplay.lua under our display group:

function shuffle(t)
    local n = #t
    while n > 1 do
        local k = math.random(n)
        t[n], t[k] = t[k], t[n]
        n = n - 1
    end
    return t
end

At first glance, this function may look complex; however, the code gets a lot simpler once it's explained. Here's a line-by-line breakdown of our shuffle function. The local n = #t line introduces two new features: local and #. By using the keyword local in front of our variable name, we are saying that this variable (n) is only needed for the duration of the function or loop that we are in. By using local variables, you are getting the most out of your memory resources and practicing good programming techniques. For more information about local variables, visit www.lua.org/pil/4.2.html. In this line, we are also using the # symbol. This symbol tells us how many pieces or elements are in a table. In our app, our table will contain nine elements. Inside the while loop, the very first line is local k = math.random(n). This line assigns a random number between 1 and the current value of n (which starts at 9 in our app) to the local variable k. Then, we randomize the elements of the table by swapping the piece at position n with the piece at position k. Finally, we use n = n - 1 to work our way backwards through all of the elements in the table. Summary After reading this article, you will have a game that is ready to be played by you and your friends. We learned how to use Corona's feature set to create our first puzzle game. In My Jigsaw Puzzle, we only provided one puzzle for the player, and although it's a great puzzle, I suggest adding more puzzles to make the game more appealing to more players. Resources for Article: Further resources on this subject: Atmosfall – Managing Game Progress with Coroutines [Article] Creating and configuring a basic mobile application [Article] Defining the Application's Policy File [Article]
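As a closing aside on My Jigsaw Puzzle: the article defers the actual drag-and-lock logic to the full gameplay.lua source, but the puzzlePieceCheckpoint and puzzlePieceFinalPosition tables described earlier suggest how it works. The following Lua sketch only illustrates that idea and is not the book's code; it assumes each puzzle piece display object stores its own index in piece.index when it is created, and it uses an arbitrary 20-pixel snap tolerance.

local function onPieceTouch( event )
    local piece = event.target
    local index = piece.index  -- hypothetical field set when the piece is created
    if event.phase == "began" then
        display.getCurrentStage():setFocus( piece )
        piece.markX, piece.markY = piece.x, piece.y
    elseif event.phase == "moved" then
        -- drag the piece with the finger
        piece.x = piece.markX + ( event.x - event.xStart )
        piece.y = piece.markY + ( event.y - event.yStart )
    elseif event.phase == "ended" or event.phase == "cancelled" then
        display.getCurrentStage():setFocus( nil )
        local checkpoint = puzzlePieceCheckpoint[index]
        if math.abs( piece.x - checkpoint.x ) < 20 and math.abs( piece.y - checkpoint.y ) < 20 then
            -- close enough: snap the piece to its final position and lock it in
            piece.x = puzzlePieceFinalPosition[index].x
            piece.y = puzzlePieceFinalPosition[index].y
            piece:removeEventListener( "touch", onPieceTouch )
            puzzlePiecesCompleted = puzzlePiecesCompleted + 1
        end
    end
    return true
end

A piece created with display.newImageRect would register this listener with piece:addEventListener( "touch", onPieceTouch ); when puzzlePiecesCompleted reaches totalPuzzlePieces, the puzzle is complete.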
Common AsyncTask issues

Packt
20 Dec 2013
7 min read
(For more resources related to this topic, see here.) Fragmentation issues AsyncTask has evolved with new releases of the Android platform, resulting in behavior that varies with the platform of the device running the task, which is a part of the wider issue of fragmentation. The simple fact is that if we target a broad range of API levels, the execution characteristics of our AsyncTasks—and therefore, the behavior of our apps—can vary considerably on different devices. So what can we do to reduce the likelihood of encountering AsyncTask issues due to fragmentation? The most obvious approach is to deliberately target devices running at least Honeycomb, by setting a minSdkVersion of 11 in the Android Manifest file. This neatly puts us in the category of devices, which, by default, execute AsyncTasks serially, and therefore, much more predictably. However, this significantly reduces the market reach of our apps. At the time of writing in September 2013, more than 34 percent of Android devices in the wild run a version of Android in the danger zone between API levels 4 and 10. A second option is to design our code carefully and test exhaustively on a range of devices—always commendable practices of course, but as we've seen, concurrent programming is hard enough without the added complexity of fragmentation, and invariably, subtle bugs will remain. A third solution that has been suggested by the Android development community is to reimplement AsyncTaskin a package within your own project, then extend your own AsyncTask class instead of the SDK version. In this way, you are no longer at the mercy of the user's device platform, and can regain control of your AsyncTasks. Since the source code for AsyncTask is readily available, this is not difficult to do. Activity lifecycle issues Having deliberately moved any long-running tasks off the main thread, we've made our applications nice and responsive—the main thread is free to respond very quickly to any user interaction. Unfortunately, we have also created a potential problem for ourselves, because the main thread is able to finish the Activity before our background tasks complete. Activity might finish for many reasons, including configuration changes caused the by the user rotating the device (the default behavior of Activity on a change in orientation is to restart with an entirely new instance of the activity). If we continue processing a background task after the Activity has finished, we are probably doing unnecessary work, and therefore wasting CPU and other resources (including battery life), which could be put to better use. Also, any object references held by the AsyncTask will not be eligible for garbage collection until the task explicitly nulls those references or completes and is itself eligible for GC ( garbage collection ). Since our AsyncTask probably references the Activity or parts of the View hierarchy, we can easily leak a significant amount of memory in this way. A common usage of AsyncTask is to declare it as an anonymous inner class of the host Activity, which creates an implicit reference to the Activity and an even bigger memory leak. There are two approaches for preventing these resource wastage problems. Handling lifecycle issues with early cancellation First, and most obviously, we can synchronize our AsyncTask lifecycle with that of the Activity by canceling running tasks when our Activity is finishing. When an Activity finishes, its lifecycle callback methods are invoked on the main thread. 
We can check to see why the lifecycle method is being called, and if the Activity is finishing, cancel the background tasks. The most appropriate Activity lifecycle method for this is onPause, which is guaranteed to be called before the Activity finishes. protected void onPause() { super.onPause(); if ((task != null) && (isFinishing())) task.cancel(false); } If the Activity is not finishing—say because it has started another Activity and is still on the back stack—we might simply allow our background task to continue to completion. Handling lifecycle issues with retained headless fragments If the Activity is finishing because of a configuration change, it may still be useful to complete the background task and display the results in the restarted Activity. One pattern for achieving this is through the use of retained Fragments. Fragments were introduced to Android at API level 11, but are available through a support library to applications targeting earlier API levels. All of the downloadable examples use the support library, and target API levels 7 through 19. To use Fragments, our Activity must extend the FragmentActivity class. The Fragment lifecycle is closely bound to that of the host Activity, and a fragment will normally be disposed when the activity restarts. However, we can explicitly prevent this by invoking setRetainInstance (true) on our Fragment so that it survives across Activity restarts. Typically, a Fragment will be responsible for creating and managing at least a portion of the user interface of an Activity, but this is not mandatory. A Fragment that does not manage a view of its own is known as a headless Fragment. Isolating our AsyncTask in a retained headless Fragment makes it less likely that we will accidentally leak references to objects such as the View hierarchy, because the AsyncTask will no longer directly interact with the user interface. To demonstrate this, we'll start by defining an interface that our Activity will implement: public interface AsyncListener<Progress, Result> { void onPreExecute(); void onProgressUpdate(Progress... progress); void onPostExecute(Result result); void onCancelled(Result result); } Next, we'll create a retained headless Fragment, which wraps our AsyncTask. For brevity, doInBackground is omitted, as it is unchanged from the previous examples—see the downloadable samples for the complete code. public class PrimesFragment extends Fragment { private AsyncListener<Integer,BigInteger> listener; private PrimesTask task; public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setRetainInstance(true); task = new PrimesTask(); task.execute(2000); } public void onAttach(Activity activity) { super.onAttach(activity); listener = (AsyncListener<Integer,BigInteger>)activity; } public void onDetach() { super.onDetach(); listener = null; } class PrimesTask extends AsyncTask<Integer, Integer, BigInteger>{ protected void onPreExecute() { if (listener != null) listener.onPreExecute(); } protected void onProgressUpdate(Integer... 
values) { if (listener != null) listener.onProgressUpdate(values); } protected void onPostExecute(BigInteger result) { if (listener != null) listener.onPostExecute(result); } protected void onCancelled(BigInteger result) { if (listener != null) listener.onCancelled(result); } // … doInBackground elided for brevity … } } We're using the Fragment lifecycle methods (onAttach and onDetach) to add or remove the current Activity as a listener, and PrimesTask delegates directly to it from all of its main-thread callbacks. Now, all we need is the host Activity that implements AsyncListener and uses PrimesFragment to implement its long-running task. The full source code is available to download from the Packt Publishing website, so we'll just take a look at the highlights. First, the code in the button's OnClickListener now checks to see if the Fragment already exists, and only creates one if it is missing: FragmentManager fm = getSupportFragmentManager(); PrimesFragment primes = (PrimesFragment)fm.findFragmentByTag("primes"); if (primes == null) { primes = new PrimesFragment(); FragmentTransaction transaction = fm.beginTransaction(); transaction.add(primes, "primes").commit(); } If our Activity has been restarted, it will need to re-display the progress dialog when a progress update callback is received, so we check and show it, if necessary, before updating the progress bar (the varargs parameter is named values so that it does not shadow the progress bar field): public void onProgressUpdate(Integer... values) { if (dialog == null) prepareProgressDialog(); progress.setProgress(values[0]); } Finally, the Activity will need to implement the onPostExecute and onCancelled callbacks defined by AsyncListener. Both methods will update the resultView as in the previous examples, then do a little cleanup: dismissing the dialog and removing the Fragment as its work is now done: public void onPostExecute(BigInteger result) { resultView.setText(result.toString()); cleanUp(); } public void onCancelled(BigInteger result) { resultView.setText("cancelled at " + result); cleanUp(); } private void cleanUp() { dialog.dismiss(); dialog = null; FragmentManager fm = getSupportFragmentManager(); Fragment primes = fm.findFragmentByTag("primes"); fm.beginTransaction().remove(primes).commit(); } Summary In this article, we've taken a look at AsyncTask and how to use it to write responsive applications that perform operations without blocking the main thread. Resources for Article: Further resources on this subject: Android Native Application API [Article] So, what is Spring for Android? [Article] Creating Dynamic UI with Android Fragments [Article]
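One last sketch for this article: the fragment listing above elides doInBackground for brevity, and the full implementation belongs to the downloadable source. Purely as a hedged illustration of the cooperative-cancellation advice from the early-cancellation section, and not the book's actual code, a cancellable doInBackground for a task like PrimesTask might look roughly like this, assuming it computes the n-th prime and reports progress as a percentage:

protected BigInteger doInBackground(Integer... params) {
    int target = params[0];                        // e.g. task.execute(2000)
    BigInteger prime = BigInteger.valueOf(2);
    for (int i = 1; i < target; i++) {
        // Check for cancellation regularly so that cancel(false) takes effect quickly
        if (isCancelled()) {
            return prime;                          // delivered to onCancelled(result)
        }
        prime = prime.nextProbablePrime();
        if (i % 100 == 0) {
            publishProgress((i * 100) / target);   // drives onProgressUpdate on the main thread
        }
    }
    return prime;                                   // delivered to onPostExecute(result)
}

Because the loop polls isCancelled() on every iteration, calling task.cancel(false) from onPause stops the work promptly without having to interrupt the background thread.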

First Look at Ionic

Packt
13 Jan 2016
9 min read
In this article, Sani Yusuf the author of the book Ionic Framework By Example, explains how Ionic framework lets you build Hybrid mobile applications with web technologies such as HTML5, CSS, and JavaScript. But that is not where it stops. Ionic provides you with components that you use to build native-like features for your mobile applications. Think of Ionic as the SDK for making your Hybrid mobile application. Most of the feature you have on native apps such as Modals, Gestures, and POP-UPs etc are all provided to you by Ionic and can be easily extended for new features or customized to suit your needs. Ionic itself does not grant you the ability to communicate with device features such as GPS and Camera, instead it works side by side with Cordova to achieve this. Another great feature of Ionic is how loosely coupled all its components are. You can decide to use only some of Ionic on an already existing hybrid application if you wish to do so. The Ionic framework is built with AngularJS, which is arguably the most well tested and widely used JavaScript framework out there. This feature is particularly powerful as it gives you all the goodness of angular as a part of any Ionic app you develop. In the past, architecting hybrid application proved to be difficult, but with angular, we can create our mobile application using the Single Page Application (SPA) technique. Angular also makes it really easy to organize your application for development and working across teams whiles providing you the possibility of easily adding custom features or libraries. (For more resources related to this topic, see here.) Features of Ionic Ionic provides you with a lot of cool neat features and tricks that help you create beautiful and well-functional hybrid apps in no time. The features of Ionic come under three categories: CSS features JavaScript features Ionic CLI CSS features To start off with, Ionic comes in stock with a great CSS library that provides you with some boilerplate styles. This Ionic CSS styles are generated with SASS, a CSS pre-processor for more advanced CSS style manipulation. Some of the cool CSS features that come built-in with Ionic include: Buttons Cards Header and Footers Lists Forms Elements Grid System All these features and many more come already provided for you and are easily customizable. They also have the same look and feel that native equivalents have, so you will not have to do any editing to make them look like native components. The JavaScript features The JavaScript features are at the heart of the Ionic framework and are essential for building Ionic apps. They also consist of other features that let you do things from under hood like customize your application or even provide you with helper functions that you can use to make developing your app more pleasant. A lot of these JavaScript features actually exist as HTML custom elements that make it easy to declaratively use these features. Some of these features include: Modal Slide-Box Action Sheet Side Menu Tabs Complex Lists Collection Repeat All the JavaScript features of Ionic are built with Angular, and most can be easily plugged in as angular directives. Each of them also perform different actions that help you achieve specific functions and are all documented in the Ionic website. The Ionic CLI This is the final part that makes up the three major arms of the Ionic framework. The Ionic CLI is a very important tool that lets you use the issue Ionic commands via the command line/terminal. 
It is also with the Ionic CLI that we get access to some Ionic features that make our app development process more streamlined. It is arguably the most important part of Ionic and it is also the feature you will use to perform most actions. The features of the Ionic CLI include: Creating Ionic projects Issuing Cordova commands Developing and testing Ionic splash/icon generator Ionic labs SASS Uploading the app to Ionic view Accessing Ionic.IO tools The Ionic CLI is a very powerful tool, and for the most part it is the tool we will be using throughout this book to perform specific actions. This is why the first thing we are going to do is to set up the Ionic CLI. Setting up Ionic The following steps give a brief overview of how to set up Ionic: Install Node JS: To set up Ionic, the first thing you will need to do is to install Node JS on your computer so that you can have access to the Node Package Manager (NPM). If you already have Node installed on your computer, you can skip this step and go to step 2. Go to www.nodejs.org and click on the Install button. That should download the latest version of Node JS to your computer. Don't worry if you are on a Mac, PC, or Linux; the correct one for your operating system will be downloaded automatically. After the download is finished, install the downloaded software on your computer. You may need to restart your computer if you are running Windows. Open up the terminal if you are on Mac/Linux or the Windows command line if you are on a Windows machine. Type the command node -v and press Enter. You should see the version number of your current installation of Node JS. If you do not see a version number, this might mean that you have not correctly installed Node JS and should try running step 1 again. Install Ionic CLI: The next step is to use NPM to install the Ionic CLI. Open a new Terminal (OSX and Linux) or command line (Windows) window and run the npm install ionic -g command. If you are on Linux/OSX, you might need to run sudo npm install ionic -g. This command installs Ionic globally. After this has finished running, run the ionic -v command on your Terminal/command line and press Enter. You should see the version number of your Ionic CLI. This means that you have Ionic installed correctly and are good to go. If you do not see a version number, then you have not installed Ionic correctly on your machine and should perform step 2 again. The Ionic workflow When you create a new Ionic project, there are a couple of folders and files that come in stock. Your directory should look similar to the following screenshot: The structure you see is pretty much the same as in every Cordova project with the exception of a few files and folders. For example, there is an scss folder. This contains a file that lets us customize the look and feel of our application. You will also note that in your www/lib folder, there is a folder called ionic, which contains all the required files to run Ionic. There is a CSS, fonts, JS, and SCSS folder: CSS: This folder contains all the default CSS that comes with an Ionic app. Fonts: Ionic comes with its own font and icon library called Ionicons. This Ionicons library contains hundreds of icons, which are all available for use in your app. JS: This contains all the code for the core Ionic library. Since Ionic is built with Angular, there is a version of Angular here with a bunch of other files that make up the Ionic framework.
SCSS: This is a folder that contains SASS files used to build the beautiful Ionic framework CSS styles. Everything here can be overwritten easily. If you take a look at the root folder, you will see a lot of other files that are generated for you as part of the Ionic workflow. These files are not overly important now, but let's take a look at the more important ones: Bower.json: This is the file that contains some of the dependencies gotten from the bower package manager. The bower dependencies are resolved in the lib folder by as specified in the bowerrc file. This is a great place to specify other third party dependencies that your project might need. Config.xml: This is the standard config file that comes along with any Phonegap/Cordova project. This is where you request permissions for device features and also specify universal and platform-specific configurations for you app. Gulpfile: Ionic uses the gulp build tool, and this file contains some code that is provided by Ionic that enables you do some amazing things. Ionic.project: This is a file specific for Ionic services. It is the file used by the Ionic CLI and the Ionic.IO services as a place to specify some of your Ionic specific configuration. Package.json: This is a file used by node to specify some node dependencies. When you create a project with the Ionic CLI, Ionic uses both the Node and Bowe Package manager to resolve some of your dependencies. If you require a node module when you are developing ionic apps, you can specify these dependencies here. These files are some of the more important files that come stock with a project created with the Ionic CLI. At the moment, you do not need to worry too much about them but it's always good to know that they exist and have an idea about what they actually represent. Summary In this article, we discussed exactly what Ionic means and what problems it aims to solve. We also got to discuss the CSS, JS, and Ionic CLI features of the Ionic framework lightly. Resources for Article: Further resources on this subject: Ionic JS Components [article] Creating Instagram Clone Layout using Ionic framework [article] A Virtual Machine for a Virtual World [article]

Drawing and Drawables in Android Canvas

Packt
21 Nov 2013
8 min read
In this article by Mir Nauman Tahir, the author of the book Learning Android Canvas, our goal is to learn about the following: Drawing on a Canvas Drawing on a View Drawing on a SurfaceView Drawables Drawables from resource images Drawables from resource XML Shape Drawables (For more resources related to this topic, see here.) Android provides us with 2D drawing APIs that enable us to draw our custom drawing on the Canvas. When working with 2D drawings, we will either draw on view or directly on the surface or Canvas. Using View for our graphics, the drawing is handled by the system's normal View hierarchy drawing process. We only define our graphics to be inserted in the View; the rest is done automatically by the system. While using the method to draw directly on the Canvas, we have to manually call the suitable drawing Canvas methods such as onDraw() or createBitmap(). This method requires more efforts and coding and is a bit more complicated, but we have everything in control such as the animation and everything else like being in control of the size and location of the drawing and the colors and the ability to move the drawing from its current location to another location through code. The implementation of the onDraw() method can be seen in the drawing on the view section and the code for createBitmap() is shown in the Drawing on a Canvas section. We will use the drawing on the View method if we are dealing with static graphics–static graphics do not change dynamically during the execution of the application–or if we are dealing with graphics that are not resource hungry as we don't wish to put our application performance at stake. Drawing on a View can be used for designing eye-catching simple applications with static graphics and simple functionality–simple attractive backgrounds and buttons. It's perfectly okay to draw on View using the main UI thread as these graphics are not a threat to the overall performance of our application. The drawing on a Canvas method should be used when working with heavy graphics that change dynamically like those in games. In this scenario, the Canvas will continuously redraw itself to keep the graphics updated. We can draw on a Canvas using the main UI thread, but when working with heavy, resource-hungry, dynamically changing graphics, the application will continuously redraw itself. It is better to use a separate thread to draw these graphics. Keeping such graphics on the main UI thread will not make them go into the non-responding mode, and after working so hard we certainly won't like this. So this choice should be made very carefully. Drawing on a Canvas A Canvas is an interface, a medium that enables us to actually access the surface, which we will use to draw our graphics. The Canvas contains all the necessary drawing methods needed to draw our graphics. The actual internal mechanism of drawing on a Canvas is that, whenever anything needs to be drawn on the Canvas, it's actually drawn on an underlying blank bitmap image. By default, this bitmap is automatically provided for us. But if we want to use a new Canvas, then we need to create a new bitmap image and then a new Canvas object while providing the already created bitmap to the constructor of the Canvas class. A sample code is explained as follows. Initially, the bitmap is drawn but not on the screen; it's actually drawn in the background on an internal Canvas. 
But to bring it to the front, we need to create a new Canvas object and provide the already created bitmap to it to be painted on the screen. Bitmap ourNewBitmap = Bitmap.CreateBitmap(100,100,Bitmap.Config.ARGB_8888); Canvas ourNewCanvas = new Canvas(ourNewBitmap); Drawing on a View If our application does not require heavy system resources or fast frame rates, we should use View.onDraw(). The benefit in this case is that the system will automatically give the Canvas its underlying bitmap as well. All we need is to make our drawing calls and be done with our drawings. We will create our class by extending it from the View class and will define the onDraw() method in it. The onDraw() method is where we will define whatever we want to draw on our Canvas. The Android framework will call the onDraw() method to ask our View to draw itself. The onDraw() method will be called by the Android framework on a need basis; for example, whenever our application wants to draw itself, this method will be called. We have to call the invalidate() method whenever we want our view to redraw itself. This means that, whenever we want our application's view to be redrawn, we will call the invalidate() method and the Android framework will call the onDraw() method for us. Let's say we want to draw a line, then the code would be something like this: class DrawView extends View { Paint paint = new Paint(); public DrawView(Context context) { super(context); paint.setColor(Color.BLUE); } @Override public void onDraw(Canvas canvas) { super.onDraw(canvas); canvas.drawLine(10, 10, 90, 10, paint); } } Inside the onDraw() method, we will use all kinds of facilities that are provided by the Canvas class such as the different drawing methods made available by the Canvas class. We can also use drawing methods from other classes as well. The Android framework will draw a bitmap on the Canvas for us once our onDraw() method is complete with all our desired functionality. If we are using the main UI thread, we will call the invalidate() method, but if we are using another thread, then we will call the postInvalidate() method. Drawing on a SurfaceView The View class provides a subclass SurfaceView that provides a dedicated drawing surface within the hierarchy of the View. The goal is to draw using a secondary thread so that the application won't wait for the resources to be free and ready to redraw. The secondary thread has access to the SurfaceView object that has the ability to draw on its own Canvas with its own redraw frequency. We will start by creating a class that will extend the SurfaceView class. We should implement an interface SurfaceHolder.Callback. This interface is important in the sense that it will provide us with the information when a surface is created, modified, or destroyed. When we have timely information about the creation, change, or destruction of a surface, we can make a better decision on when to start drawing and when to stop. The secondary thread class that will perform all the drawing on our Canvas can also be defined in the SurfaceView class. To get information, the Surface object should be handled through SurfaceHolder and not directly. To do this, we will get the Holder by calling the getHolder() method when the SurfaceView is initialized. We will then tell the SurfaceHolder object that we want to receive all the callbacks; to do this, we will call addCallBacks(). After this, we will override all the methods inside the SurfaceView class to get our job done according to our functionality. 
The next step is to draw the surface's Canvas from inside the second thread; to do this, we will pass our SurfaceHolder object to the thread and will get the Canvas using the lockCanvas() method. This will get the Canvas for us and will lock it for drawing from the current thread only. We need to do this because we don't want an open Canvas that can be drawn on by another thread; if that happens, it will disturb all our graphics and drawings on the Canvas. When we are done with drawing our graphics on the Canvas, we will unlock the Canvas by calling the unlockCanvasAndPost() method, passing our Canvas object. To have a successful drawing, we will need repeated redraws; so we will repeat this locking and unlocking as needed and the surface will draw the Canvas. For uniform and smooth animation, remember that the Canvas we get back keeps its previous state; we will retrieve the Canvas from the SurfaceHolder object every time and the whole surface should be redrawn each time. If we don't do so, for instance by not painting the whole surface, the drawing from the previous Canvas will persist and that will destroy the whole look of our graphics-intensive application. A sample code would be the following:

class OurGameView extends SurfaceView implements SurfaceHolder.Callback, Runnable {
    Thread thread = null;
    SurfaceHolder surfaceHolder;
    volatile boolean running = false;

    public OurGameView(Context context) {
        super(context);
        surfaceHolder = getHolder();
        // register for the surface lifecycle callbacks described above
        surfaceHolder.addCallback(this);
    }

    public void onResumeOurGameView() {
        running = true;
        // the view implements Runnable, so it can be handed to the drawing thread
        thread = new Thread(this);
        thread.start();
    }

    public void onPauseOurGameView() {
        boolean retry = true;
        running = false;
        while (retry) {
            try {
                thread.join();
                retry = false;
            } catch (InterruptedException e) {
                // keep waiting for the drawing thread to finish
            }
        }
    }

    public void run() {
        while (running) {
            if (surfaceHolder.getSurface().isValid()) {
                Canvas canvas = surfaceHolder.lockCanvas();
                //... actual drawing on canvas
                surfaceHolder.unlockCanvasAndPost(canvas);
            }
        }
    }

    public void surfaceCreated(SurfaceHolder holder) { }

    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    public void surfaceDestroyed(SurfaceHolder holder) { }
}
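The sample above exposes onResumeOurGameView and onPauseOurGameView, but the article stops before showing who calls them. As a hedged sketch that is not taken from the book, the host Activity would normally forward its own lifecycle callbacks so that the drawing thread starts and stops at the right times:

import android.app.Activity;
import android.os.Bundle;

public class GameActivity extends Activity {
    private OurGameView gameView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        gameView = new OurGameView(this);
        setContentView(gameView);
    }

    @Override
    protected void onResume() {
        super.onResume();
        gameView.onResumeOurGameView();   // start the drawing thread
    }

    @Override
    protected void onPause() {
        super.onPause();
        gameView.onPauseOurGameView();    // stop and join the thread before the surface goes away
    }
}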
What is NGUI?

Packt
20 Jan 2014
8 min read
(For more resources related to this topic, see here.) The Next-Gen User Interface kit is a plugin for Unity 3D. It has the great advantage of being easy to use, very powerful, and optimized compared to Unity's built-in GUI system, UnityGUI. Since it is written in C#, it is easily understandable and you may tweak it or add your own features, if necessary. The NGUI Standard License costs $95. With this, you will have useful example scenes included. I recommend this license to start comfortably—a free evaluation version is available, but it is limited, outdated, and not recommended. The NGUI Professional License, priced at $200, gives you access to NGUI's GIT repository to access the latest beta features and releases in advance. A $2000 Site License is available for an unlimited number of developers within the same studio. Let's have an overview of the main features of this plugin and see how they work. UnityGUI versus NGUI With Unity's GUI, you must create the entire UI in code by adding lines that display labels, textures, or any other UI element on the screen. These lines have to be written inside a special function, OnGUI(), that is called for every frame. This is no longer necessary; with NGUI, UI elements are simple GameObjects! You can create widgets—this is what NGUI calls labels, sprites, input fields, and so on—move them, rotate them, and change their dimensions using handles or the Inspector. Copying, pasting, creating prefabs, and every other useful feature of Unity's workflow is also available. These widgets are viewed by a camera and rendered on a layer that you can specify. Most of the parameters are accessible through Unity's Inspector, and you can see what your UI looks like directly in the Game window, without having to hit the Play button. Atlases Sprites and fonts are all contained in a large texture called atlas. With only a few clicks, you can easily create and edit your atlases. If you don't have any images to create your own UI assets, simple default atlases come with the plugin. That system means that for a complex UI window composed of different textures and fonts, the same material and texture will be used when rendering. This results in only one draw call for the entire window. This, along with other optimizations, makes NGUI the perfect tool to work on mobile platforms. Events NGUI also comes with an easy-to-use event framework that is written in C#. The plugin comes with a large number of additional components that you can attach to GameObjects. These components can perform advanced tasks depending on which events are triggered: hover, click, input, and so on. Therefore, you may enhance your UI experience while keeping it simple to configure. Code less, get more! Localization NGUI comes with its own localization system, enabling you to easily set up and change your UI's language with the push of a button. All your strings are located in the .txt files: one file per language. Shaders Lighting, normal mapping, and refraction shaders are supported in NGUI, which can give you beautiful results. Clipping is also a shader-controlled feature with NGUI, used for showing or hiding specific areas of your UI. We've now covered what NGUI's main features are, and how it can be useful to us as a plugin, and now it's time to import it inside Unity. Importing NGUI After buying the product from the Asset Store or getting the evaluation version, you have to download it. Perform the following steps to do so: Create a new Unity project. Navigate to Window | Asset Store. 
Select your downloads library. Click on the Download button next to NGUI: Next-Gen UI. When the download completes, click on the NGUI icon / product name in the library to access the product page. Click on the Import button and wait for a pop-up window to appear. Check the checkbox for NGUI v.3.0.2.unity package and click on Import. In the Project view, navigate to Assets | NGUI and double-click on NGUI v.3.0.2. A new imported pop-up window will appear. Click on Import again. Click any button on the toolbar to refresh it.The NGUI tray will appear! The NGUI tray will look like the following screenshot: You have now successfully imported NGUI to your project. Let's create your first 2D UI. Creating your UI We will now create our first 2D user interface with NGUI's UI Wizard. This wizard will add all the elements needed for NGUI to work. Before we continue, please save your scene as Menu.unity. UI Wizard Create your UI by opening the UI Wizard by navigating to NGUI | Open | UIWizard from the toolbar. Let's now take a look at the UI Wizard window and its parameters. Window You should now have the following pop-up window with two parameters: Parameters The two parameters are as follows: Layer: This is the layer on which your UI will be displayed Camera: This will decide if the UI will have a camera, and its drop-down options are as follows: None: No camera will be created Simple 2D:Uses a camera with orthographic projection Advanced 3D:Uses a camera with perspective projection Separate UI Layer I recommend that you separate your UI from other usual layers. We should do it as shown in the following steps: Click on the drop-down menu next to the Layer parameter. Select Add Layer. Create a new layer and name it GUI2D. Go back to the UI Wizard window and select this new GUI2D layer for your UI. You can now click on the Create Your UI button. Your first 2D UI has been created! Your UI structure The wizard has created four new GameObjects on the scene for us: UI Root (2D) Camera Anchor Panel Let's now review each in detail. UI Root (2D) The UIRoot component scales widgets down to keep them at a manageable size. It is also responsible for the Scaling Style—it will either scale UI elements to remain pixel perfect or to occupy the same percentage of the screen, depending on the parameters you specify. Select the UI Root (2D) GameObject in the Hierarchy. It has the UIRoot.cs script attached to it. This script adjusts the scale of the GameObject it's attached to in order to let you specify widget coordinates in pixels, instead of Unity units as shown in the following screenshot: Parameters The UIRoot component has four parameters: Scaling Style: The following are the available scaling styles: PixelPerfect:This will ensure that your UI will always try to remain at the same size in pixels, no matter what resolution. In this scaling mode, a 300 x 200 window will be huge on a 320 x 240 screen and tiny on a 1920 x 1080 screen. That also means that if you have a smaller resolution than your UI, it will be cropped. FixedSize:This will ensure that your UI will be proportionally resized depending on the screen's height. The result is that your UI will not be pixel perfect but will scale to fit the current screen size. FixedSizeOnMobiles:This will ensure fixed size on mobiles and pixel perfect everywhere else. Manual Height: With the FixedSize scaling style, the scale will be based on this height. 
If your screen's height goes over or under this value, it will be resized to be displayed identically while maintaining the aspect ratio (the width/height proportional relationship). Minimum Height: With the PixelPerfect scaling style, this parameter defines the minimum height for the screen. If your screen height goes below this value, your UI will resize. It will be as if the Scaling Style parameter was set to FixedSize with Manual Height set to this value. Maximum Height: With the PixelPerfect scaling style, this parameter defines the maximum height for the screen. If your screen height goes over this value, your UI will resize. It will be as if the Scaling Style parameter was set to FixedSize with Manual Height set to this value. Please set the Scaling Style parameter to FixedSize with a Manual Height value of 1080. This will allow us to have the same UI on any screen size up to 1920 x 1080. Even though the UI will look the same on different resolutions, the aspect ratio is still a problem since the rescale is based on the screen's height only. If you want to cover both 4:3 and 16:9 screens, your UI should not be too large; try to keep it square. Otherwise, your UI might be cropped on certain screen resolutions. On the other hand, if you want a 16:9 UI, I recommend you force this aspect ratio only. Let's do it now for this project by performing the following steps: Navigate to Edit | Project Settings | Player. In the Inspector, unfold the Resolution and Presentation group. Unfold the Supported Aspect Ratios group. Check only the 16:9 box. Summary In this article, we discussed NGUI's basic workflow: it works with GameObjects, uses atlases to combine multiple textures in one large texture, has an event system, can use shaders, and has a localization system. After importing the NGUI plugin, we created our first 2D UI with the UI Wizard, reviewed its parameters, and created our own GUI 2D layer for our UI to reside on. Resources for Article: Further resources on this subject: Unity 3D Game Development: Don't Be a Clock Blocker [Article] Component-based approach of Unity [Article] Unity 3: Building a Rocket Launcher [Article]
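One small addition to round off the events discussion from earlier in this article: NGUI's UICamera forwards interaction to widgets as messages such as OnClick and OnHover, so a plain MonoBehaviour attached to a widget with a collider can react to them. The following C# sketch is illustrative only; the class name and log messages are arbitrary and it is not part of the article's project.

using UnityEngine;

// Attach to any NGUI widget that has a collider (for example, a button background).
public class MenuButtonHandler : MonoBehaviour
{
    // Called by NGUI when the widget is clicked or tapped.
    void OnClick()
    {
        Debug.Log(name + " was clicked");
    }

    // Called by NGUI when the cursor enters or leaves the widget.
    void OnHover(bool isOver)
    {
        Debug.Log(name + (isOver ? " is hovered" : " is no longer hovered"));
    }
}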

Developing apps with the Google Speech APIs

Packt
21 Nov 2013
6 min read
(For more resources related to this topic, see here.) Speech technology has come of age. If you own a smartphone or tablet, you can perform many tasks on your device using voice. For example, you can send a text message, update your calendar, set an alarm, and do many other things with a single spoken command that would take multiple steps to complete using traditional methods such as tapping and selecting. You can also ask the sorts of queries that you would previously have typed into your Google search box and get a spoken response. For those who wish to develop their own speech-based apps, Google provides APIs for the basic technologies of text-to-speech synthesis (TTS) and automated speech recognition (ASR). Using these APIs, developers can create useful interactive voice applications. This article provides a brief overview of the Google APIs and then goes on to show some examples of voice-based apps built around the APIs. Using the Google text-to-speech synthesis API TTS has been available on Android devices since Android 1.6 (API Level 4). The components of the Google TTS API (package android.speech.tts) are documented at http://developer.android.com/reference/android/speech/tts/package-summary.html. Interfaces and classes are listed here and further details can be obtained by clicking on these. Starting the TTS engine involves creating an instance of the TextToSpeech class along with the method that will be executed when the TTS engine is initialized. Checking that TTS has been initialized is done through an interface called OnInitListener. If TTS initialization is complete, the method onInit is invoked. If TTS has been initialized correctly, the speak method is invoked to speak out some words:

TextToSpeech tts = new TextToSpeech(this, new OnInitListener() {
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS)
            // speak() stands for a call to the engine's speak method,
            // typically wrapped in a small helper in the enclosing Activity
            speak("Hello world", TextToSpeech.QUEUE_ADD, null);
    }
});

Due to limited storage on some devices, not all languages that are supported may actually be installed on a particular device, so it is important to check if a particular language is available before creating the TextToSpeech object. This way, it is possible to download and install the required language-specific resource files if necessary. This is done by sending an Intent with the action ACTION_CHECK_TTS_DATA, which is part of the TextToSpeech.Engine class:

Intent intent = new Intent(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
startActivityForResult(intent, TTS_DATA_CHECK);

If the language data has been correctly installed, the onActivityResult handler will receive CHECK_VOICE_DATA_PASS. If the data is not available, the action ACTION_INSTALL_TTS_DATA will be executed:

Intent installData = new Intent(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
startActivity(installData);

The next figure shows an example of an app using the TTS API. A potential use case for this type of app is when the user accesses some text on the Web - for example, a news item, email, or a sports report. This is useful if the user's eyes and hands are busy, or if there are problems reading the text on the screen. In this example, the app retrieves some text and the user presses the Speak button to hear it. A Stop button is provided in case the user does not wish to hear all of the text. Using the Google speech recognition API The components of the Google Speech API (package android.speech) are documented at http://developer.android.com/reference/android/speech/package-summary.html.
Interfaces and classes are listed and further details can be obtained by clicking on these. There are two ways in which speech recognition can be carried out on an Android Device: based solely on a RecognizerIntent, or by creating an instance of SpeechRecognizer. The following code shows how to start an activity to recognize speech using the first approach: Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH); // Specify language model intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, languageModel); // Specify how many results to receive. Results listed in order of confidence intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, numberRecoResults); // Start listening startActivityForResult(intent, ASR_CODE); The app shown below illustrates the following: The user selects the parameters for speech recognition. The user presses a button and says some words. The words recognized are displayed in a list along with their confidence scores. Multilingual apps It is important to be able to develop apps in languages other than English. The TTS and ASR engines can be configured to a wide range of languages. However, we cannot expect that all languages will be available or that they are supported on a particular device. Thus, before selecting a language it is necessary to check whether it is one of the supported languages, and if not, to set the currently preferred language. In order to do this, a RecognizerIntent.ACTION_GET_LANGUAGE_DETAILS ordered broadcast is sent that returns a Bundle from which the information about the preferred language (RecognizerIntent.EXTRA_LANGUAGE_PREFERENCE) and the list of supported languages (RecognizerIntent.EXTRA_SUPPORTED_LANGUAGES) can be extracted. For speech recognition this introduces a new parameter for the intent in which the language is specified that will be used for recognition, as shown in the following code line: intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, language); As shown in the next figure, the user is asked for a language and then the app recognizes what the user says and plays back the best recognition result in the language selected. Creating a Virtual Personal Assistant (VPA) Many voice-based apps need to do more than simply speak and understand speech. For example, a VPA also needs the ability to engage in dialog with the user and to perform operations such as connecting to web services and activating device functions. One way to enable these additional capabilities is to make use of chatbot technology (see, for example, the Pandorabots web service: http://www.pandorabots.com/). The following figure shows two VPAs, Jack and Derek, that have been developed in this way. Jack is a general-purpose VPA, while Derek is a specialized VPA that can answer questions about Type 2 diabetes, such as symptoms, causes, treatment, risks to children, and complications. Summary The Google Speech APIs can be used in countless ways to develop interesting and useful voice-based apps. This article has shown some examples. By building on these you will be able to bring the power of voice to your Android apps, making them smarter and more intuitive, and boosting your users' mobile experience. Resources for Article: Further resources on this subject: Introducing an Android platform [Article] Building Android (Must know) [Article] Top 5 Must-have Android Applications [Article]
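As a final sketch for this article: the snippets above show how to launch recognition with startActivityForResult, but not how the results come back. The following is a hedged illustration rather than the book's code (the logging and request-code handling are illustrative, and it assumes the usual java.util.ArrayList and android.util.Log imports); the calling Activity reads the recognition hypotheses and confidence scores in onActivityResult:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == ASR_CODE && resultCode == RESULT_OK && data != null) {
        // Recognition hypotheses, best first
        ArrayList<String> matches =
                data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
        // Confidence scores (available from API level 14); may be null on older devices
        float[] scores =
                data.getFloatArrayExtra(RecognizerIntent.EXTRA_CONFIDENCE_SCORES);
        for (int i = 0; i < matches.size(); i++) {
            String line = matches.get(i)
                    + (scores != null ? " (" + scores[i] + ")" : "");
            Log.d("ASR", line);   // or add the line to a ListView adapter for display
        }
    }
}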