
How-To Tutorials - Android Programming


Memory

Packt
01 Aug 2016
26 min read
In this article by Enrique López Mañas and Diego Grancini, authors of the book Android High Performance Programming, we explain why memory is a matter worth focusing on. A badly memory-managed application can affect the behavior of the whole system, or it can affect the other applications installed on our device, in the same way that other applications can affect ours. As we all know, Android has a wide range of devices on the market, with many different configurations and amounts of memory. It's up to developers to understand the strategy to take while dealing with this amount of fragmentation, the patterns to follow while developing, and the tools to use to profile the code. This is the aim of this article. In the following sections, we will focus on heap memory. We will take a look at how a device handles memory, what garbage collection is, and how it works, in order to understand how to avoid common development mistakes and to prepare for the best practices we will define. We will also go through pattern definitions that drastically reduce the risk of what we will identify as memory leaks and memory churn. The article ends with an overview of the official tools and APIs that Android provides to profile our code and to find the possible causes of memory leaks.

Walkthrough

Before starting the discussion about how to improve and profile our code, it's really important to understand how Android devices handle memory. In the following pages, we will analyze the differences between the runtimes that Android uses, learn more about garbage collection, understand what memory leaks and memory churn are, and see how Java handles object references.

How memory works

Have you ever thought about how a restaurant works during its service? Let's think about it for a while. When a new group of customers enters the restaurant, a waiter is ready to find a place to seat them. But the restaurant is a limited space, so there is a need to free tables when possible. That's why, when a group has finished eating, another waiter cleans and prepares the just-freed table for the next group to come. The first waiter has to find a table with the right number of seats for every new group, while the second waiter's task should be fast and shouldn't hinder or block anyone else's work. Another important aspect is how many seats the group occupies: the restaurant owner wants as many free seats as possible available for new clients, so it's important to ensure that every group fills just the right number of seats, without occupying tables that could otherwise be freed and used to host new groups.

This is absolutely similar to what happens in an Android system. Every time we create a new object in our code, it needs to be saved in memory. So it's allocated as part of our application's private memory, to be accessed whenever needed, and the system keeps allocating memory for us during the whole application lifetime. Nevertheless, the system has limited memory to work with, and it cannot keep allocating it indefinitely. So, how is it possible for the system to have enough memory for our application all the time? And why is there no need for an Android developer to free up memory? Let's find out.
Garbage collection

Garbage collection is an old concept based on two main tasks: finding objects that are no longer referenced, and freeing the memory referenced by those objects. When an object is no longer referenced, its "table" can be cleaned and freed up to provide memory for future object allocations. These operations of allocating new objects and deallocating unreferenced ones are executed by the particular runtime in use on the device; the developer doesn't need to do anything, because they are all managed automatically. Unlike in other languages, such as C or C++, there is no need for the developer to allocate and deallocate memory explicitly. In particular, while allocation happens when needed, the garbage collection task is executed when a memory upper limit is reached. These automatic background operations don't exempt developers from being aware of their app's memory management: if memory is managed badly, the application can suffer lags and malfunctions, and it can even crash when an OutOfMemoryError exception is thrown.

Shared memory

In Android, every app runs in its own process that is completely managed by the runtime, with the aim of reclaiming memory in order to free resources for other foreground processes, if needed. The available amount of memory for our application lies completely in RAM, as Android doesn't use swap memory. The main consequence of this is that the only way for our app to get more memory is to unreference objects that are no longer used. However, Android uses paging and memory mapping: the first technique defines blocks of memory of the same size, called pages, in secondary storage, while the second maps files in secondary storage into memory. These are used when the system needs to allocate memory for other processes, so the system creates paged, memory-mapped files to hold Dalvik code files, app resources, or native code files. In this way, those files can be shared between multiple processes. As a matter of fact, the Android system uses shared memory in order to better handle resources across many different processes. Furthermore, every new process is forked from an already existing one called Zygote. This particular process contains the common framework classes and resources to speed up the first boot of an application, which means that the Zygote process is shared between processes and applications. This extensive use of shared memory makes it difficult to profile our application's memory usage, because there are many facets to consider before reaching a correct analysis.

Runtime

Some memory management functions and operations depend on the runtime in use. That's why we are going through some specific features of the two main runtimes used by Android devices:

- Dalvik
- Android Runtime (ART)

ART was added later to replace Dalvik and to improve performance from different points of view. It was introduced in Android KitKat (API level 19) as an option for developers to enable, and it became the main and only runtime from Android Lollipop (API level 21) onwards. Besides the differences between Dalvik and ART in code compilation, file formats, and internal instructions, what we are focusing on at the moment is memory management and garbage collection.
So, let's understand how the Google team improved garbage collection performance in the runtimes over time, and what to pay attention to while developing our application. Let's step back and return to the restaurant for a bit. What would happen if all the employees, such as the other waiters and cooks, and all the services, such as the dishwashers and so on, stopped their tasks to wait for a single waiter to free a table? That single employee's performance would determine the success or failure of everyone. It's really important to have a very fast waiter in this case. But what if you cannot afford one? The owner wants him to do his job as fast as possible, maximizing his productivity and seating all the customers in the best way, and this is exactly what we have to do as developers: we have to optimize memory allocations so that garbage collection is fast, even though it stops all other operations while it runs.

What is described here is just how the runtime's garbage collection works. When the upper memory limit is reached, garbage collection starts its task, pausing every other method, task, thread, or process execution, and none of them resume until the garbage collection task is completed. So, it's really important that the collection is fast enough not to impede the 16 ms per frame rule, which would result in lags and jank in the UI. The longer garbage collection runs, the less time the system has to prepare frames to be rendered on the screen. Keep in mind that automatic garbage collection is not free: bad memory management can lead to bad UI performance and, thus, bad UX. No runtime feature can replace good memory management. That's why we need to be careful about new object allocations and, above all, references. Obviously, ART introduced a lot of improvements in this process after the Dalvik era (it reduces the collection steps, adds a dedicated memory space for Bitmap objects, and uses new, faster algorithms, with more to come), but the underlying concept is the same: there is no escaping the fact that we need to profile our code and memory usage if we want our application to have the best performance.

Android N JIT compiler

The ART runtime uses ahead-of-time compilation which, as the name suggests, performs compilation when an application is first installed. This approach brings advantages to the overall system in different ways, because the system can:

- Reduce battery consumption thanks to pre-compilation and, therefore, improve autonomy
- Execute applications faster than Dalvik
- Improve memory management and garbage collection

However, those advantages have a cost in installation time: the system needs to compile the application at that point, which is slower than other compilation strategies. For this reason, Google added a just-in-time (JIT) compiler alongside ART's ahead-of-time compiler in the new Android N. The JIT compiler acts when needed, during the execution of the application, and so takes a different approach from the ahead-of-time one. It uses code-profiling techniques, and it's not a replacement for ahead-of-time compilation but an addition to it. It's a good enhancement to the system for the performance advantages it introduces: profile-guided compilation adds the possibility to precompile, cache, and reuse the methods of an application, depending on usage and/or device conditions.
This saves compilation time and improves performance in every kind of system, so every device benefits from this new memory management. The key advantages are:

- Less memory used
- Fewer RAM accesses
- Lower impact on battery

All of these advantages introduced in Android N, however, are no excuse to neglect good memory management in our applications. For this, we need to know what pitfalls are lurking behind our code and, more than that, how to behave in particular situations to improve the system's memory management while our application is active.

Memory leak

The main mistake a developer can make from a memory performance perspective while developing an Android application is called a memory leak. It refers to an object that is no longer used but is still referenced by another object that is, instead, still active. In this situation, the garbage collector skips it, because the reference is enough to leave the object in memory. We are effectively preventing the garbage collector from freeing memory for future allocations. Our available heap memory shrinks because of this, which leads to garbage collection being invoked more often, blocking the rest of the application's execution. It can even reach the point where there is no more memory available to allocate a new object, and an OutOfMemoryError exception is thrown by the system. Consider the case where a used object references no-longer-used objects, which in turn reference other no-longer-used objects, and so on: none of them can be collected, just because the root object is still in use.

Memory churn

Another anomaly in memory management is called memory churn. It refers to a rate of allocation that is not sustainable by the runtime, because too many new objects are instantiated in a small period of time. In this case, many garbage collection events are triggered over and over, affecting the application's overall memory and UI performance. The rule about avoiding allocations in the View.onDraw() method is closely related to memory churn: we know that this method is called every time the view needs to be drawn again, and the screen needs to be refreshed every 16.6667 ms. If we instantiate objects inside that method, we may cause memory churn, because those objects are instantiated in View.onDraw() and are no longer used afterwards, so they are collected very soon. In some cases, this leads to one or more garbage collection events being executed every time a frame is drawn on the screen, reducing the available time to draw it below 16.6667 ms, depending on the collection event's duration. Both anomalies are illustrated in the sketch below.
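As a minimal, hypothetical illustration of both anomalies (the class names are ours, not taken from the book's sample code), the following sketch leaks an Activity through a static reference and churns memory by allocating a new Paint on every frame:

import android.app.Activity;
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

public class LeakExample {

    // Memory leak: a static field outlives every Activity assigned to it, so the
    // garbage collector can never reclaim the Activity or its view hierarchy.
    private static Activity sLeakedActivity;

    public static void remember(Activity activity) {
        sLeakedActivity = activity; // stays referenced until the process dies
    }
}

class ChurnView extends View {

    public ChurnView(Context context) {
        super(context);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // Memory churn: a new Paint is allocated up to 60 times per second and
        // becomes garbage right after the frame is drawn.
        Paint paint = new Paint();
        paint.setColor(Color.RED);
        canvas.drawRect(0, 0, getWidth(), getHeight(), paint);
    }
}

The fix for the second case is to allocate the Paint once, as a field, and reuse it inside onDraw(); the fix for the first is simply to avoid holding components in static fields or other long-lived references.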
References

Let's have a quick overview of the different kinds of reference that Java provides, so that we have an idea of when and how to use them. Java defines four levels of reference strength:

- Normal: This is the main type of reference. It corresponds to the simple creation of an object, and the object will be collected when it is no longer used and referenced. It's just the classical object instantiation:

SampleObject sampleObject = new SampleObject();

- Soft: This is a reference that is not strong enough to keep an object in memory when a garbage collection event is triggered, so it can become null at any time during execution. Using this reference, the garbage collector decides when to free the object's memory based on the memory demand of the system. To use it, just create a SoftReference object, passing the real object as a parameter to the constructor, and call the SoftReference.get() method to get the object:

SoftReference<SampleObject> sampleObjectSoftRef = new SoftReference<SampleObject>(new SampleObject());
SampleObject sampleObject = sampleObjectSoftRef.get();

- Weak: This works exactly like a SoftReference, but it is weaker than the soft one:

WeakReference<SampleObject> sampleObjectWeakRef = new WeakReference<SampleObject>(new SampleObject());

- Phantom: This is the weakest reference: the object is eligible for finalization. This kind of reference is rarely used, and the PhantomReference.get() method always returns null. It is meant for reference queues, which don't interest us at the moment, but it's worth knowing that this kind of reference is also provided.

These classes may be useful while developing if we know which objects have a lower priority and can be collected without causing problems to the normal execution of our application. We will see how they can help us manage memory in the following pages.

Memory-side projects

During the development of the Android platform, Google has always tried to improve the platform's memory management system, to maintain wide compatibility with both increasingly powerful devices and low-resource ones. This is the main purpose of the side projects Google develops in parallel with the platform; every new Android version released brings improvements and changes from those projects, with an impact on system performance. Each of these side projects focuses on a different matter:

- Project Butter: Introduced in Android Jelly Bean 4.1 (API level 16) and then improved in Android Jelly Bean 4.2 (API level 17), it added features related to the graphical aspect of the platform (VSync and buffering are the main additions) in order to improve the responsiveness of the device in use.
- Project Svelte: Introduced in Android KitKat 4.4 (API level 19), it deals with memory management improvements in order to support low-RAM devices.
- Project Volta: Introduced in Android Lollipop (API level 21), it focuses on the battery life of the device. It adds important APIs for batching expensive, battery-draining operations, such as the JobScheduler, and new tools such as the Battery Historian.

Project Svelte and Android N

When it was first introduced, Project Svelte reduced the memory footprint and improved memory management in order to support entry-level devices with low memory availability, thereby broadening the range of supported devices, with clear advantages for the platform. With the new release of Android N, Google wants to provide an optimized way to run applications in the background. We know that our application's process lasts in the background even when it is not visible on the screen, or even when there are no started activities, because a service could be executing some operations. This is a key feature for memory management: the overall system performance can be affected by bad memory management of background processes. But what has changed in application behavior and APIs with the new Android N?
The strategy chosen to improve memory management and reduce the impact of background processes is to stop sending applications the broadcasts for the following actions:

- ConnectivityManager.CONNECTIVITY_ACTION: Starting from Android N, a connectivity action will be received only by applications that are in the foreground and have registered a BroadcastReceiver for this action. Applications that declare an implicit intent inside the manifest file will no longer receive it, so an application needs to change its logic to do the same as before (a sketch of the required change follows at the end of this section).
- Camera.ACTION_NEW_PICTURE: This one is used to notify that a picture has just been taken and added to the media store. This action is no longer available, neither for receiving nor for sending, and this applies to every application, not just the ones targeting the new Android N.
- Camera.ACTION_NEW_VIDEO: This is used to notify that a video has just been taken and added to the media store. Like the previous one, this action can no longer be used, and this also applies to every application.

Keep these changes in mind when targeting Android N, to avoid unwanted or unexpected behaviors. All of the actions listed above were changed by Google to force developers not to use them in applications. As a more general rule, we should not use implicit receivers for the same reason, and we should always check the behavior of our application while it's in the background, because it could otherwise cause unexpected memory usage and battery drain. Implicit receivers can start our application components, while explicit ones are set up for a limited time while the activity is in the foreground, so they cannot affect background processes. It's good practice to avoid implicit broadcasts while developing applications, to reduce their impact on background operations that could otherwise waste memory and, in turn, drain the battery.

Furthermore, Android N introduces a new ADB command to test application behavior with background processes ignored. Use the following command to ignore background services and processes (replace <package.name> with your application ID):

adb shell cmd appops set <package.name> RUN_IN_BACKGROUND ignore

Use the following one to restore the initial state:

adb shell cmd appops set <package.name> RUN_IN_BACKGROUND allow
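Coming back to the connectivity change: as a minimal sketch of the adaptation Android N requires (the activity name is hypothetical), an application can register its receiver dynamically while in the foreground, instead of declaring it in the manifest:

import android.app.Activity;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.net.ConnectivityManager;

public class NetworkAwareActivity extends Activity {

    // Receives CONNECTIVITY_ACTION only while this activity is in the foreground.
    private final BroadcastReceiver connectivityReceiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            boolean noConnectivity = intent.getBooleanExtra(
                    ConnectivityManager.EXTRA_NO_CONNECTIVITY, false);
            // React to the connectivity change here.
        }
    };

    @Override
    protected void onResume() {
        super.onResume();
        // Dynamic registration replaces the implicit manifest entry.
        registerReceiver(connectivityReceiver,
                new IntentFilter(ConnectivityManager.CONNECTIVITY_ACTION));
    }

    @Override
    protected void onPause() {
        unregisterReceiver(connectivityReceiver);
        super.onPause();
    }
}

Because registration is bound to the activity's visible lifetime, the receiver can never keep the process busy in the background.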
Best practices

Now that we know what can happen in memory while our application is active, let's take a deep look at what we can do to avoid memory leaks and memory churn, and to optimize our memory management in order to reach our performance target, not just in memory usage, but also in garbage collection frequency, because, as we know, it stops every other operation while it runs. In the following pages, we will go through a lot of hints and tips using a bottom-up strategy, starting from low-level details of Java code and moving up to higher-level Android practices.

Data types

We weren't joking; we are really talking about Java primitive types, as they are the foundation of all applications, and it's really important to know how to deal with them, even though it may seem obvious. It's not, and we will see why. Java provides primitive types that need to be saved in memory when used: the system allocates the amount of memory requested for that particular type. The following are the Java primitive types, with the number of bits needed to store each of them:

- byte: 8 bits
- short: 16 bits
- int: 32 bits
- long: 64 bits
- float: 32 bits
- double: 64 bits
- boolean: 8 bits (though this depends on the virtual machine)
- char: 16 bits

At first glance, what is clear is that you should be careful to choose the right primitive type every time you use one. Don't use a bigger primitive type than you really need: never use long, float, or double if you can represent the number with an int. Otherwise, it is a useless waste of memory and of calculation time whenever the CPU deals with it, and remember that, to evaluate an expression, the system performs an implicit widening primitive conversion to the largest primitive type involved in the calculation.

Autoboxing

Autoboxing is the term used to indicate the automatic conversion between a primitive type and its corresponding wrapper class object. The primitive type wrapper classes are the following:

- java.lang.Byte
- java.lang.Short
- java.lang.Integer
- java.lang.Long
- java.lang.Float
- java.lang.Double
- java.lang.Boolean
- java.lang.Character

They can be instantiated using the assignment operator, just like primitive types, and they can be used as their primitive counterparts:

Integer i = 0;

This is exactly the same as the following:

Integer i = new Integer(0);

But autoboxing is not the right way to improve the performance of our applications; it has many costs. First of all, the wrapper object is much bigger than the corresponding primitive type. For instance, an Integer object needs 16 bytes in memory instead of the 32 bits of the primitive int, so more memory is needed to handle it. Then, when we declare a variable using a primitive wrapper object, any operation on it implies at least one more object allocation. Take a look at the following snippet:

Integer integer = 0;
integer++;

Every Java developer knows what it does, but this simple code needs a step-by-step explanation of what happens. First, the integer value is unboxed from the Integer variable and incremented by 1:

int temp = integer.intValue() + 1;

Then the result is assigned back to integer, which means a new autoboxing operation needs to be executed:

integer = temp;

Undoubtedly, these operations are slower than if we had used the primitive type instead of the wrapper class: no autoboxing is needed, hence no extra allocations. Things get worse in loops, where the mentioned operations are repeated on every cycle; take, for example, the following code:

Integer sum = 0;
for (int i = 0; i < 500; i++) {
    sum += i;
}

In this case, there are a lot of inappropriate allocations caused by autoboxing, and if we compare this with the primitive-type loop, we notice that there are no allocations:

int sum = 0;
for (int i = 0; i < 500; i++) {
    sum += i;
}

Autoboxing should be avoided as much as possible. The more we use primitive wrapper classes instead of the primitive types themselves, the more memory is wasted while executing our application, and this waste is amplified when autoboxing happens inside loop cycles, affecting not just memory, but CPU timings too.

Sparse array family

So, in all of the cases described in the previous paragraph, we can just use the primitive type instead of its object counterpart. Nevertheless, it's not always so simple. What happens if we are dealing with generics?
For example, let's think about collections: we cannot use a primitive type as the generic type of objects implementing one of the collection interfaces. We have to use the wrapper classes, like this:

List<Integer> list;
Map<Integer, Object> map;
Set<Integer> set;

Every time we use one of the Integer objects of a collection, autoboxing occurs at least once, producing the waste outlined above, and we know well how many times we deal with this kind of object in everyday development. But isn't there a solution to avoid autoboxing in these situations? Android provides a useful family of objects created on purpose to replace Map objects and avoid autoboxing, protecting memory from pointlessly large allocations: the Sparse arrays. The list of Sparse arrays, with the type of Map each can replace, is the following:

- SparseBooleanArray: HashMap<Integer, Boolean>
- SparseLongArray: HashMap<Integer, Long>
- SparseIntArray: HashMap<Integer, Integer>
- SparseArray<E>: HashMap<Integer, E>
- LongSparseArray<E>: HashMap<Long, E>

In the following, we will talk specifically about the SparseArray object, but everything we say holds for all the other objects above as well. The SparseArray uses two different arrays to store hashes and objects: the first collects the sorted hashes, while the second stores the key/value pairs ordered according to the sorting of the key hashes array, as in Figure 1:

Figure 1: SparseArray's hashes structure

When you need to add a value, you have to specify the integer key and the value to be added in the SparseArray.put() method, just like in the HashMap case. This can create collisions if multiple key hashes are added at the same position. When a value is needed, simply call SparseArray.get(), specifying the related key; internally, the key object is used to binary-search the index of the hash, and then the value of the related key, as in Figure 2:

Figure 2: SparseArray's workflow

When the key found at the index resulting from the binary search does not match the original one, a collision has happened, so the search continues in both directions to find the same key and to provide its value, if it's still inside the array. Thus, the time needed to find a value increases significantly when the array holds a large number of objects. By contrast, a HashMap uses just a single array to store hashes, keys, and values, and it uses larger arrays as a technique to avoid collisions. This is not good for memory, because it allocates more memory than is really needed. So HashMap is fast, because it implements a better way to avoid collisions, but it is not memory efficient. Conversely, SparseArray is memory efficient, because it uses the right number of object allocations, at the cost of an acceptable increase in execution time. The memory used for these arrays is contiguous, so every time you remove a key/value pair from a SparseArray, the arrays can be compacted or resized:

- Compaction: The object to remove is shifted to the end, and all the other objects are shifted left. The last block, containing the item to be removed, can be reused for future additions to save allocations.
- Resizing: All the elements of the arrays are copied to new arrays, and the old ones are deleted. The addition of new elements produces the same effect of copying all elements into new arrays. This is the slowest method, but it's completely memory safe, because there are no useless memory allocations.
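As a minimal sketch of the basic API (the values are hypothetical), here is how a SparseArray replaces a HashMap<Integer, String> with the same put/get/remove semantics and no boxing of the keys:

import android.util.SparseArray;

public class SparseArrayExample {

    public static void demo() {
        SparseArray<String> users = new SparseArray<String>();

        users.put(42, "Alice");                  // stored in key-sorted order
        users.put(7, "Bob");

        String name = users.get(42);             // binary search on the int key
        String fallback = users.get(99, "none"); // default value for a missing key

        users.delete(7);                         // frees the slot for compaction or reuse
    }
}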
In general, HashMap is faster at these addition and removal operations, because it keeps more blocks than it really needs; hence the memory waste. Whether to use the SparseArray family depends on the strategy applied, because it trades memory management against CPU performance: the calculations cost more, while the memory saving is greater. So its use is right in some situations. Consider using it when:

- The number of objects you are dealing with is below a thousand, and you are not going to do a lot of additions and deletions.
- You are using collections of Maps with a few items, but lots of iterations.

Another useful feature of these objects is that they let you iterate by index, instead of using the iterator pattern, which is slower and memory inefficient. The following snippet shows how the iteration doesn't involve objects:

// SparseArray
for (int i = 0; i < map.size(); i++) {
    Object value = map.get(map.keyAt(i));
}

Conversely, an Iterator object is needed to iterate through a HashMap:

// HashMap
for (Iterator iter = map.keySet().iterator(); iter.hasNext(); ) {
    Object key = iter.next();
    Object value = map.get(key);
}

Some developers think the HashMap object is the better choice, because it can be taken from an Android application to other Java ones, while the SparseArray family's objects cannot. But what we have analyzed here in terms of memory management gains applies to many other cases as well, and, as developers, we should strive to reach performance goals on every platform, instead of reusing the same code on different platforms, because each platform may be affected differently from a memory perspective. That's why our main suggestion is to always profile the code on every platform we are working on, and then draw our own conclusions about better or worse approaches depending on the results.

ArrayMap

An ArrayMap object is an Android implementation of the Map interface that is more memory efficient than the HashMap one. This class is provided by the Android platform starting from Android KitKat (API level 19), but there is another implementation of it inside the Support package v4, because its main usage is on older and lower-end devices. Its implementation and usage are totally similar to those of the SparseArray objects, with all the implications about memory usage and computational costs, but its main purpose is to let you use objects as the keys of the map, just like HashMap does. Hence, it provides the best of both worlds.

Summary

We defined a lot of best practices to help maintain good memory management, introducing helpful design patterns and analyzing the best choices while developing things taken for granted that can actually affect memory and performance. Then, we faced the main causes of the worst leaks in the Android platform, those related to main components such as Activities and Services. To conclude the practices, we introduced APIs both to use and to avoid, as well as others that define a strategy for events related to the system and, thus, external to the application.


Sending and Syncing Data

Packt
10 Aug 2015
4 min read
This article, by Steven F. Daniel, author of the book Android Wearable Programming, will provide you with the background and understanding of how you can effectively build applications that communicate between the Android handheld device and the Android wearable. Android Wear comes with a number of APIs that help make communicating between the handheld and the wearable a breeze. We will learn the differences between using the MessageApi, which is sometimes referred to as a "fire and forget" type of message; the DataApi, which supports syncing of data between a handheld and a wearable; and the NodeApi, which handles events related to each of the local and connected device nodes.

Creating a wearable send-and-receive application

In this section, we will take a look at how to create an Android wearable application that will send an image and a message, and display them on our wearable device. In the next sections, we will take a look at the steps required to send data to the Android wearable using the DataApi, NodeApi, and MessageApi. Firstly, create a new project in Android Studio by following these simple steps:

1. Launch Android Studio, and then click on the File | New Project menu option.
2. Next, enter SendReceiveData in the Application name field.
3. Then, provide a name for the Company Domain field.
4. Now, choose the Project location and select where you would like to save your application code. Click on the Next button to proceed to the next step.
5. Next, we will need to specify the form factors for the phone/tablet and Android Wear devices on which our application will run. On this screen, we need to choose the minimum SDK versions for the phone/tablet and Android Wear. Click on the Phone and Tablet option and choose API 19: Android 4.4 (KitKat) for Minimum SDK. Click on the Wear option and choose API 21: Android 5.0 (Lollipop) for Minimum SDK. Click on the Next button to proceed to the next step.
6. In our next step, we will need to add a Blank Activity to our application project for the mobile section of our app. From the Add an activity to Mobile screen, choose the Blank Activity option from the list of activities shown, and click on the Next button to proceed to the next step.
7. Next, we need to customize the properties of the Blank Activity so that it can be used by our application. Here we will need to specify the name of our activity, layout information, title, and menu resource file. From the Customize the Activity screen, enter MobileActivity for Activity Name, and click on the Next button to proceed to the next step in the wizard.
8. In the next step, we will need to add a Blank Activity to our application project for the Android wearable section of our app. From the Add an activity to Wear screen, choose the Blank Wear Activity option from the list of activities shown, and click on the Next button to proceed to the next step.
9. Next, we need to customize the properties of the Blank Wear Activity so that our Android wearable can use it. Here we will need to specify the name of our activity and the layout information. From the Customize the Activity screen, enter WearActivity for Activity Name, and click on the Next button to proceed to the next step in the wizard.
10. Finally, click on the Finish button; the wizard will generate your project, and after a few moments, the Android Studio window will appear with your project displayed.
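Before moving on to the summary, here is a minimal sketch (not the book's sample code; the class name and message path are ours) of what a "fire and forget" MessageApi send looks like once the project is in place. The blocking await() calls mean it must run off the UI thread:

import android.content.Context;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.MessageApi;
import com.google.android.gms.wearable.Node;
import com.google.android.gms.wearable.NodeApi;
import com.google.android.gms.wearable.Wearable;

public class MessageSender {

    // Sends a small payload to every connected node; call from a background thread.
    public static void sendToAllNodes(Context context, String path, byte[] payload) {
        GoogleApiClient client = new GoogleApiClient.Builder(context)
                .addApi(Wearable.API)
                .build();
        client.blockingConnect();

        // NodeApi lists the nodes (devices) currently reachable from this one.
        NodeApi.GetConnectedNodesResult nodes =
                Wearable.NodeApi.getConnectedNodes(client).await();
        for (Node node : nodes.getNodes()) {
            MessageApi.SendMessageResult result = Wearable.MessageApi
                    .sendMessage(client, node.getId(), path, payload)
                    .await();
            if (!result.getStatus().isSuccess()) {
                // MessageApi is "fire and forget": nothing is retried on failure.
            }
        }
        client.disconnect();
    }
}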
Summary

In this article, we learned about three new APIs, the DataApi, NodeApi, and MessageApi, and how we can use them and their associated methods to transmit information between the handheld mobile and the wearable. If, for whatever reason, the connected wearable node gets disconnected from the paired handheld device, the DataApi class is smart enough to try sending again automatically once the connection is re-established.


The AsyncTask and HardwareTask Classes

Packt
17 Feb 2015
10 min read
This article is written by Andrew Henderson, the author of Android for the BeagleBone Black. It covers the usage of the AsyncTask and HardwareTask classes.

Understanding the AsyncTask class

HardwareTask extends the AsyncTask class, and using it provides a major advantage over the way hardware interfacing is implemented in the gpio app. AsyncTask allows you to perform complex and time-consuming hardware-interfacing tasks without your app becoming unresponsive while the tasks are executed. Each instance of an AsyncTask class can create a new thread of execution within Android. This is similar to how multithreaded programs on other OSes spin up new threads to handle file and network I/O, manage UIs, and perform parallel processing.

The gpio app used only a single thread during its execution: the main UI thread that is part of all Android apps. The UI thread is designed to handle UI events as quickly as possible. When you interact with a UI element, that element's handler method is called by the UI thread. For example, clicking a button causes the UI thread to invoke the button's onClick() handler, which then executes a piece of code and returns to the UI thread. Android constantly monitors the execution of the UI thread. If a handler takes too long to finish, Android shows an Application Not Responding (ANR) dialog to the user. You never want an ANR dialog to appear: it is a sign that your app is running inefficiently (or even not at all!) by spending too much time in handlers within the UI thread.

The Application Not Responding dialog in Android

The gpio app performed reads and writes of the GPIO states very quickly from within the UI thread, so the risk of triggering an ANR was very small. Interfacing with the FRAM is a much slower process. With the BBB's I2C bus clocked at its maximum speed of 400 KHz, it takes approximately 25 microseconds to read or write a byte of data when using the FRAM. While this is not a major concern for small writes, reading or writing the entire 32,768 bytes of the FRAM can take close to a full second! Multiple reads and writes of the full FRAM can easily trigger the ANR dialog, so it is necessary to move these time-consuming activities out of the UI thread. By placing your hardware interfacing into its own AsyncTask class, you decouple the execution of these time-intensive tasks from the execution of the UI thread, which prevents your hardware interfacing from potentially triggering the ANR dialog.

Learning the details of the HardwareTask class

The AsyncTask base class of HardwareTask provides many different methods, which you can further explore in the Android API documentation. The four AsyncTask methods of immediate interest for our hardware-interfacing efforts are:

- onPreExecute()
- doInBackground()
- onPostExecute()
- execute()

Of these four methods, only doInBackground() executes within its own thread. The other three methods all execute within the context of the UI thread, and only the methods that execute within the UI thread context are able to update screen UI elements. A minimal skeleton showing where each method runs follows.
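As a quick orientation (this is a generic sketch, not the book's HardwareTask code), the thread boundaries look like this:

import android.os.AsyncTask;

public class ExampleTask extends AsyncTask<Void, Void, Boolean> {

    @Override
    protected void onPreExecute() {
        // UI thread: disable buttons, show progress indicators, and so on.
    }

    @Override
    protected Boolean doInBackground(Void... params) {
        // Background thread: the only place for slow hardware I/O.
        return true;
    }

    @Override
    protected void onPostExecute(Boolean result) {
        // UI thread: re-enable buttons and report the result to the user.
    }
}

// Started from the UI thread with: new ExampleTask().execute();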
The thread contexts in which the HardwareTask methods and the PacktHAL functions are executed

Much like the MainActivity class of the gpio app, the HardwareTask class provides four native methods that are used to call the PacktHAL JNI functions related to FRAM hardware interfacing:

public class HardwareTask extends AsyncTask<Void, Void, Boolean> {

    private native boolean openFRAM(int bus, int address);
    private native String readFRAM(int offset, int bufferSize);
    private native void writeFRAM(int offset, int bufferSize, String buffer);
    private native boolean closeFRAM();

The openFRAM() method initializes your app's access to a FRAM located on a logical I2C bus (the bus parameter) and at a particular bus address (the address parameter). Once the connection to a particular FRAM is initialized via an openFRAM() call, all readFRAM() and writeFRAM() calls are applied to that FRAM until a closeFRAM() call is made. The readFRAM() method retrieves a series of bytes from the FRAM and returns it as a Java String: a total of bufferSize bytes are retrieved, starting at an offset of offset bytes from the start of the FRAM. The writeFRAM() method stores a series of bytes to the FRAM: a total of bufferSize characters from the Java string buffer are stored, starting at an offset of offset bytes from the start of the FRAM.

In the fram app, the onClick() handlers for the Load and Save buttons in the MainActivity class each instantiate a new HardwareTask. Immediately after the instantiation of HardwareTask, either the loadFromFRAM() or saveToFRAM() method is called to begin interacting with the FRAM:

public void onClickSaveButton(View view) {
    hwTask = new HardwareTask();
    hwTask.saveToFRAM(this);
}

public void onClickLoadButton(View view) {
    hwTask = new HardwareTask();
    hwTask.loadFromFRAM(this);
}

Both the loadFromFRAM() and saveToFRAM() methods in the HardwareTask class call the base AsyncTask class's execute() method to begin the new thread-creation process:

public void saveToFRAM(Activity act) {
    mCallerActivity = act;
    isSave = true;
    execute();
}

public void loadFromFRAM(Activity act) {
    mCallerActivity = act;
    isSave = false;
    execute();
}

Each AsyncTask instance can only have its execute() method called once. If you need to run an AsyncTask a second time, you must instantiate a new instance of it and call the execute() method of the new instance. This is why we instantiate a new instance of HardwareTask in the onClick() handlers of the Load and Save buttons, rather than instantiating a single HardwareTask instance and then calling its execute() method many times.

The execute() method automatically calls the onPreExecute() method of the HardwareTask class. The onPreExecute() method performs any initialization that must occur prior to the start of the new thread. In the fram app, this requires disabling various UI elements and calling openFRAM() to initialize the connection to the FRAM via PacktHAL:

protected void onPreExecute() {
    // Some setup goes here
    ...
    if ( !openFRAM(2, 0x50) ) {
        Log.e("HardwareTask", "Error opening hardware");
        isDone = true;
    }
    // Disable the Buttons and TextFields while talking to the hardware
    saveText.setEnabled(false);
    saveButton.setEnabled(false);
    loadButton.setEnabled(false);
}

Disabling your UI elements

When you are performing a background operation, you might wish to keep your app's user from providing more input until the operation is complete.
During a FRAM read or write, we do not want the user to press any UI buttons or change the data held within the saveText text field. If your UI elements remain enabled all the time, the user can launch multiple AsyncTask instances simultaneously by repeatedly hitting the UI buttons. To prevent this, disable any UI elements required to restrict user input until that input is necessary.

Once the onPreExecute() method finishes, the AsyncTask base class spins up a new thread and executes the doInBackground() method within that thread. The lifetime of the new thread is only the duration of the doInBackground() method; once doInBackground() returns, the new thread terminates. As everything that takes place within the doInBackground() method is performed in a background thread, it is the perfect place for any time-consuming activities that would trigger an ANR dialog if they were executed from within the UI thread. This means that the slow readFRAM() and writeFRAM() calls, which access the I2C bus and communicate with the FRAM, should be made from within doInBackground():

protected Boolean doInBackground(Void... params) {
    ...
    Log.i("HardwareTask", "doInBackground: Interfacing with hardware");
    try {
        if (isSave) {
            writeFRAM(0, saveData.length(), saveData);
        } else {
            loadData = readFRAM(0, 61);
        }
    } catch (Exception e) {
    ...

The loadData and saveData string variables used in the readFRAM() and writeFRAM() calls are both class variables of HardwareTask. The saveData variable is populated with the contents of the saveEditText text field via a saveEditText.toString() call in the HardwareTask class's onPreExecute() method.

How do I update the UI from within an AsyncTask thread?

While the fram app does not make use of them in this example, the AsyncTask class provides two special methods, publishProgress() and onProgressUpdate(), that are worth mentioning. The AsyncTask thread uses these methods to communicate with the UI thread while the AsyncTask thread is running. The publishProgress() method executes within the AsyncTask thread and triggers the execution of onProgressUpdate() within the UI thread. These methods are commonly used to update progress meters (hence the name publishProgress) or other UI elements that cannot be directly updated from within the AsyncTask thread.

After doInBackground() has completed, the AsyncTask thread terminates. This triggers the calling of onPostExecute() from the UI thread. The onPostExecute() method is used for any post-thread cleanup and for updating any UI elements that need to be modified. The fram app uses the closeFRAM() PacktHAL function to close the current FRAM context that it opened with openFRAM() in the onPreExecute() method:

protected void onPostExecute(Boolean result) {
    if (!closeFRAM()) {
        Log.e("HardwareTask", "Error closing hardware");
    }
    ...

The user must now be notified that the task has been completed. If the Load button was pressed, the string displayed in the loadTextField widget is updated via the MainActivity class's updateLoadedData() method. If the Save button was pressed, a Toast message is displayed to notify the user that the save was successful.
Log.i("HardwareTask", "onPostExecute: Completed."); if (isSave) {    Toast toast = Toast.makeText(mCallerActivity.getApplicationContext(),      "Data stored to FRAM", Toast.LENGTH_SHORT);    toast.show(); } else {    ((MainActivity)mCallerActivity).updateLoadedData(loadData); } Giving Toast feedback to the user The Toast class is a great way to provide quick feedback to your app's user. It pops up a small message that disappears after a configurable period of time. If you perform a hardware-related task in the background and you want to notify the user of its completion without changing any UI elements, try using a Toast message! Toast messages can only be triggered by methods that are executing from within the UI thread. An example of the Toast message Finally, the onPostExecute() method will re-enable all of the UI elements that were disabled in onPreExecute(): saveText.setEnabled(true);saveButton.setEnabled(true); loadButton.setEnabled(true); The onPostExecute() method has now finished its execution and the app is back to patiently waiting for the user to make the next fram access request by pressing either the Load or Save button. Are you ready for a challenge? Now that you have seen all of the pieces of the fram app, why not change it to add new functionality? For a challenge, try adding a counter that indicates to the user how many more characters can be entered into the saveText text field before the 60-character limit is reached. Summary The fram app in this article demonstrated how to use the AsyncTask class to perform time-intensive hardware interfacing tasks without stalling the app's UI thread and triggering the ANR dialog. Resources for Article: Further resources on this subject: Sound Recorder for Android [article] Reversing Android Applications [article] Saying Hello to Unity and Android [article]


Signing an application in Android using Maven

Packt
18 Mar 2015
10 min read
In this article, written by Patroklos Papapetrou and Jonathan LALOU, authors of the book Android Application Development with Maven, we'll learn about the different modes of digital signing, and how to use Maven to digitally sign applications.

Signing an application

Android requires that all packages, in order to be valid for installation on devices, are digitally signed with a certificate. This certificate is used by the Android ecosystem to validate the author of the application. Thankfully, the certificate is not required to be issued by a certificate authority; that would be a total nightmare for every Android developer, and it would increase the cost of developing applications. However, if you want the certificate signed by a trusted authority, like the majority of the certificates used in web servers, you are free to do so.

Android supports two modes of signing: debug and release. Debug mode is used by default during the development of the application, and release mode when we are ready to release and publish it. In debug mode, when building and packaging an application, the Android SDK automatically generates a certificate and signs the package. So don't worry: even though we haven't told Maven to do anything about signing, Android knows what to do, and behind the scenes it signs the package with the autogenerated key.

When it comes to distributing an application, debug mode is not enough, so we need to prepare our own self-signed certificate and instruct Maven to use it instead of the default one. Before we dive into the Maven configuration, let us quickly remind you how to issue your own certificate. Open a command window, and type the following command:

keytool -genkey -v -keystore my-android-release-key.keystore -alias my-android-key -keyalg RSA -keysize 2048 -validity 10000

If the keytool command-line utility is not in your path, it's a good idea to add it; it's located under the %JAVA_HOME%/bin directory. Alternatively, you can execute the command inside that directory.

Let us explain some parameters of this command. We use the keytool command-line utility to create a new keystore file, under the name my-android-release-key.keystore, inside the current directory. The -alias parameter is used to define an alias name for this key, which will be used later on in the Maven configuration. We also specify the algorithm, RSA, and the key size, 2048; finally, we set the validity period in days. The generated key will be valid for 10,000 days, long enough for many, many new versions of our application!

After running the command, you will be prompted to answer a series of questions. First, type a password for the keystore file, twice. It's a good idea to note it down, because we will use it again in our Maven configuration. Type the word "secret" in both prompts. Then, we need to provide some identification data: name, surname, organization details, and location. Finally, we need to set a password for the key; if we want to keep the same password as the keystore file, we can just hit RETURN. If everything goes well, we will see a final message informing us that the key is stored in the keystore file with the name we just defined. After this, our key is ready to be used to sign our Android application.
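As a quick, optional sanity check (assuming the filename used above), you can list the contents of the new keystore and verify the alias and validity period before wiring it into Maven:

keytool -list -v -keystore my-android-release-key.keystore

Enter the keystore password when prompted; the output should include the alias name (my-android-key), the creation date, and the validity window we requested.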
The key used in debug mode can be found in the file ~/.android/debug.keystore and contains the following information:

Keystore name: "debug.keystore"
Keystore password: "android"
Key alias: "androiddebugkey"
Key password: "android"
CN: "CN=Android Debug,O=Android,C=US"

Now, it's time to let Maven use the key we just generated. Before we add the necessary configuration to our pom.xml file, we need to add a Maven profile to the global Maven settings. The profiles defined in the user settings.xml file can be used by all Maven projects on the same machine. This file is usually located under the folder %M2_HOME%/conf/settings.xml. One fundamental advantage of defining global profiles in the user's Maven settings is that this configuration is not shared through the pom.xml file with all the developers who work on the application. The settings.xml file should never be kept under the Source Control Management (SCM) tool; users can safely enter personal or critical information like passwords and keys, which is exactly the case in our example.

Now, edit the settings.xml file and add the following lines inside the <profiles> element:

<profile>
  <id>release</id>
  <properties>
    <sign.keystore>/path/to/my/keystore/my-android-release-key.keystore</sign.keystore>
    <sign.alias>my-android-key</sign.alias>
    <sign.storepass>secret</sign.storepass>
    <sign.keypass>secret</sign.keypass>
  </properties>
</profile>

Keep in mind that the keystore name, the alias name, the keystore password, and the key password should be the ones we used when we created the keystore file. Clearly, storing passwords in plain text, even in a file that is normally protected from other users, is not a very good practice. A quick way to make the password slightly less easy to read is to use XML entities to write the value. Some sites on the Internet, like http://coderstoolbox.net/string/#!encoding=xml&action=encode&charset=none, provide such encodings. The value will be resolved as plain text when the file is loaded, so Maven won't even notice it. In this case, the property would become:

<sign.storepass>&#115;&#101;&#99;&#114;&#101;&#116;</sign.storepass>

We have prepared our global profile and the corresponding properties, so we can now edit the pom.xml file of the parent project and do the proper configuration. Adding common configuration to the parent file for all Maven submodules is good practice in our case, because at some point we would like to release both free and paid versions, and it's preferable to avoid duplicating the same configuration in two files. We want to create a new profile and add all the necessary settings there, because the release process is not something that runs every day during the development phase; it should run only at the final stage, when we are ready to deploy our application.

Our first priority is to tell Maven to disable debug mode. Then, we need to specify a new Maven plugin, named maven-jarsigner-plugin, which is responsible for driving the verification and signing process for custom/private certificates.
You can find the complete release profile as follows:

<profiles>
  <profile>
    <id>release</id>
    <build>
      <plugins>
        <plugin>
          <groupId>com.jayway.maven.plugins.android.generation2</groupId>
          <artifactId>android-maven-plugin</artifactId>
          <extensions>true</extensions>
          <configuration>
            <sdk>
              <platform>19</platform>
            </sdk>
            <sign>
              <debug>false</debug>
            </sign>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jarsigner-plugin</artifactId>
          <executions>
            <execution>
              <id>signing</id>
              <phase>package</phase>
              <goals>
                <goal>sign</goal>
                <goal>verify</goal>
              </goals>
              <inherited>true</inherited>
              <configuration>
                <removeExistingSignatures>true</removeExistingSignatures>
                <archiveDirectory />
                <includes>
                  <include>${project.build.directory}/${project.artifactId}.apk</include>
                </includes>
                <keystore>${sign.keystore}</keystore>
                <alias>${sign.alias}</alias>
                <storepass>${sign.storepass}</storepass>
                <keypass>${sign.keypass}</keypass>
                <verbose>true</verbose>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

We instruct the JAR signer plugin to be triggered during the package phase and to run the verify and sign goals. Furthermore, we tell the plugin to remove any existing signatures from the package and to use the variable values we defined in our global profile: ${sign.alias}, ${sign.keystore}, ${sign.storepass}, and ${sign.keypass}. The verbose setting is used here to verify that the private key is used instead of the debug key.

Before we run our new profile, for comparison purposes, let's package our application without using the signing capability. Open a terminal window, and type the following Maven command:

mvn clean package

When the command finishes, navigate to the paid version's target directory, /PaidVersion/target, and take a look at its contents. You will notice that there are two packaging files: PaidVersion.jar (size 14 KB) and PaidVersion.apk (size 46 KB).

Since we haven't yet discussed releasing an application, we can just run the following command in a terminal window and see how the private key is used for signing the package:

mvn clean package -Prelease

You have probably noticed that we use only one profile name, and that is the beauty of Maven: profiles with the same ID are merged together, so it's easier to understand and maintain the build scripts. If you want to double-check that the package is signed with your private certificate, you can monitor the Maven output; at some point you will see output verifying that the classes have been properly signed through the execution of the Maven JAR signer plugin.

To better understand how signing and optimization affect package generation, we can navigate again to the /PaidVersion/target directory and take a look at the files created. You may be surprised to see the same packages again, but with different sizes.
The PaidVersion.jar file has a size of 18KB, which is greater than the file generated without signing. However, the PaidVersion.apk is smaller (size 44KB) than the first version. These differences happen because the .jar file is signed with the new certificate, so its size gets slightly bigger. But what about the .apk file? Shouldn't it be bigger as well, since every file in it is signed with the certificate? The answer can be easily found if we open both .apk files and compare them. They are compressed files, so any well-known tool that opens compressed files can do this. If you take a closer look at the contents of the .apk files, you will notice that the contents of the .apk file that was generated using the private certificate are slightly larger, except for the resources.arsc file. This file, in the case of custom signing, is compressed, whereas in the debug signing mode it is in raw format. This explains why the signed version of the .apk file is smaller than the original one. There's also one last thing that verifies the correct completion of signing. Keep the compressed .apk files opened and navigate to the META-INF directory. This directory contains a couple of different files: the package signed with our personal certificate contains the key files named after the alias we used when we created the certificate, while the package signed in debug mode contains the default certificate used by Android. Summary Android developers often struggle when it comes to properly packaging and releasing an application to the public. We have analyzed in detail the steps necessary for a correct and complete Maven packaging configuration. After reading this article, you should have a basic knowledge of digitally signing packages in Android, both with and without the help of Maven. Resources for Article: Further resources on this subject: Best Practices [article] Installing Apache Karaf [article] Apache Maven and m2eclipse [article]
Tracking Objects in Videos

Packt
12 Aug 2015
13 min read
In this article by Salil Kapur and Nisarg Thakkar, authors of the book Mastering OpenCV Android Application Programming, we will look at the broader aspects of object tracking in videos. Object tracking is one of the most important applications of computer vision. It can be used for many applications, some of which are as follows: Human–computer interaction: We might want to track the position of a person's finger and use its motion to control the cursor on our machines Surveillance: Street cameras can capture pedestrians' motions that can be tracked to detect suspicious activities Video stabilization and compression Statistics in sports: By tracking a player's movement in a game of football, we can provide statistics such as distance travelled, heat maps, and so on In this article, you will learn the following topics: Optical flow Image Pyramids (For more resources related to this topic, see here.) Optical flow Optical flow is an algorithm that detects the pattern of the motion of objects, or edges, between consecutive frames in a video. This motion may be caused by the motion of the object or the motion of the camera. Optical flow is a vector that depicts the motion of a point from the first frame to the second. The optical flow algorithm works under two basic assumptions: The pixel intensities are almost constant between consecutive frames The neighboring pixels have the same motion as the anchor pixel We can represent the intensity of a pixel in any frame by f(x,y,t). Here, the parameter t represents the frame in a video. Let's assume that, in the next dt time, the pixel moves by (dx,dy). Since we have assumed that the intensity doesn't change in consecutive frames, we can say:

f(x, y, t) = f(x + dx, y + dy, t + dt)

Now we take the Taylor series expansion of the RHS in the preceding equation:

f(x + dx, y + dy, t + dt) = f(x, y, t) + \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy + \frac{\partial f}{\partial t}\,dt + \ldots

Cancelling the common term, we get:

f_x\,dx + f_y\,dy + f_t\,dt = 0

Where f_x = \frac{\partial f}{\partial x}, f_y = \frac{\partial f}{\partial y}, and f_t = \frac{\partial f}{\partial t}. Dividing both sides of the equation by dt we get:

f_x u + f_y v + f_t = 0, \quad \text{where } u = \frac{dx}{dt},\; v = \frac{dy}{dt}

This equation is called the optical flow equation. Rearranging the equation we get:

f_x u + f_y v = -f_t

We can see that this represents the equation of a line in the (u,v) plane. However, with only one equation available and two unknowns, this problem is under-constrained at the moment. The Horn and Schunck method By taking into account our assumptions, we get the following energy function to minimize:

E = \iint \left[ (f_x u + f_y v + f_t)^2 + \alpha^2 \left( \lVert \nabla u \rVert^2 + \lVert \nabla v \rVert^2 \right) \right] dx\, dy

We can say that the first term will be small due to our assumption that the brightness is constant between consecutive frames. So, the square of this term will be even smaller. The second term corresponds to the assumption that the neighboring pixels have similar motion to the anchor pixel. We need to minimize the preceding equation. For this, we differentiate the preceding equation with respect to u and v. We get the following equations:

f_x (f_x u + f_y v + f_t) = \alpha^2 \nabla^2 u \qquad f_y (f_x u + f_y v + f_t) = \alpha^2 \nabla^2 v

Here, \nabla^2 u and \nabla^2 v are the Laplacians of u and v respectively. The Lucas and Kanade method We start off with the optical flow equation that we derived earlier and notice that it is under-constrained, as it has one equation and two variables:

f_x u + f_y v = -f_t

To overcome this problem, we make use of the assumption that pixels in a 3x3 neighborhood have the same optical flow:

f_x(q_i)\, u + f_y(q_i)\, v = -f_t(q_i), \quad i = 1, \ldots, 9

where q_1, \ldots, q_9 are the pixels of the neighborhood. We can rewrite these equations in the form of matrices, as shown here:

\begin{bmatrix} f_x(q_1) & f_y(q_1) \\ \vdots & \vdots \\ f_x(q_9) & f_y(q_9) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} -f_t(q_1) \\ \vdots \\ -f_t(q_9) \end{bmatrix}

This can be rewritten in the form:

AU = b

Where A is the matrix of spatial derivatives, U = \begin{bmatrix} u & v \end{bmatrix}^T, and b is the vector of negated temporal derivatives. As we can see, A is a 9x2 matrix, U is a 2x1 matrix, and b is a 9x1 matrix. Ideally, to solve for U, we just need to multiply by A^{-1} on both sides of the equation. However, this is not possible, as we can only take the inverse of square matrices.
Thus, we try to transform A into a square matrix by first multiplying the equation by A^T on both sides:

A^T A\, U = A^T b

Now A^T A is a square matrix of dimension 2x2. Hence, we can take its inverse:

U = (A^T A)^{-1} A^T b

On solving this equation, we get the values of u and v. This method of multiplying by the transpose and then taking an inverse is called the pseudo-inverse. This equation can also be obtained by finding the minimum of the following equation:

E = \sum_i \left( f_x(q_i)\, u + f_y(q_i)\, v + f_t(q_i) \right)^2

According to the optical flow equation and our assumptions, this value should be equal to zero. Since the neighborhood pixels do not have exactly the same values as the anchor pixel, this value is very small. This method is called Least Square Error. To solve for the minimum, we differentiate this equation with respect to u and v, and equate it to zero. We get the following equations:

\sum_i f_x(q_i) \left( f_x(q_i)\, u + f_y(q_i)\, v + f_t(q_i) \right) = 0 \qquad \sum_i f_y(q_i) \left( f_x(q_i)\, u + f_y(q_i)\, v + f_t(q_i) \right) = 0

Now we have two equations and two variables, so this system of equations can be solved. We rewrite the preceding equations as follows:

u \sum f_x^2 + v \sum f_x f_y = -\sum f_x f_t \qquad u \sum f_x f_y + v \sum f_y^2 = -\sum f_y f_t

So, by arranging these equations in the form of a matrix, we get the same equation as obtained earlier:

\begin{bmatrix} \sum f_x^2 & \sum f_x f_y \\ \sum f_x f_y & \sum f_y^2 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} -\sum f_x f_t \\ -\sum f_y f_t \end{bmatrix}

Since the matrix on the left-hand side (this is A^T A) is now a 2x2 matrix, it is possible to take an inverse. On taking the inverse, the equation obtained is as follows:

\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} \sum f_x^2 & \sum f_x f_y \\ \sum f_x f_y & \sum f_y^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum f_x f_t \\ -\sum f_y f_t \end{bmatrix}

Solving for u and v, we get:

u = \frac{\sum f_x f_y \sum f_y f_t - \sum f_y^2 \sum f_x f_t}{\sum f_x^2 \sum f_y^2 - \left(\sum f_x f_y\right)^2} \qquad v = \frac{\sum f_x f_y \sum f_x f_t - \sum f_x^2 \sum f_y f_t}{\sum f_x^2 \sum f_y^2 - \left(\sum f_x f_y\right)^2}

Now we have the values of f_x, f_y, and f_t for every pixel. Thus, we can find the values of u and v for each pixel. When we implement this algorithm, it is observed that the optical flow is not very smooth near the edges of the objects. This is due to the brightness constraint not being satisfied. To overcome this situation, we use image pyramids. Checking out the optical flow on Android To see the optical flow in action on Android, we will create a grid of points over a video feed from the camera, and then lines will be drawn for each point, depicting the motion of the point on the video, superimposed by the point on the grid. Before we begin, we will set up our project to use OpenCV and obtain the feed from the camera. We will process the frames to calculate the optical flow. First, create a new project in Android Studio. We will set the activity name to MainActivity.java and the XML resource file as activity_main.xml. Second, we will give the app the permissions to access the camera. In the AndroidManifest.xml file, add the following line to the manifest tag: <uses-permission android:name="android.permission.CAMERA" /> Make sure that your activity tag for MainActivity contains the following line as an attribute: android:screenOrientation="landscape" Our activity_main.xml file will contain a simple JavaCameraView. This is a custom OpenCV-defined layout that enables us to access the camera frames and process them as normal Mat objects. The XML code is shown here:
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="horizontal">

    <org.opencv.android.JavaCameraView
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:id="@+id/main_activity_surface_view" />

</LinearLayout>
Now, let's work on some Java code.
First, we'll define some global variables that we will use later in the code: private static final String   TAG = "com.packtpub.masteringopencvandroid.chapter5.MainActivity";      private static final int       VIEW_MODE_KLT_TRACKER = 0;    private static final int       VIEW_MODE_OPTICAL_FLOW = 1;      private int                   mViewMode;    private Mat                   mRgba;    private Mat                   mIntermediateMat;    private Mat                   mGray;    private Mat                   mPrevGray;      MatOfPoint2f prevFeatures, nextFeatures;    MatOfPoint features;      MatOfByte status;    MatOfFloat err;      private MenuItem               mItemPreviewOpticalFlow, mItemPreviewKLT;      private CameraBridgeViewBase   mOpenCvCameraView; We will need to create a callback function for OpenCV, like we did earlier. In addition to the code we used earlier, we will also enable CameraView to capture frames for processing: private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {        @Override        public void onManagerConnected(int status) {            switch (status) {                case LoaderCallbackInterface.SUCCESS:                {                    Log.i(TAG, "OpenCV loaded successfully");                      mOpenCvCameraView.enableView();                } break;                default:                {                    super.onManagerConnected(status);                } break;            }        }    }; We will now check whether the OpenCV manager is installed on the phone, which contains the required libraries. In the onResume function, add the following line of code: OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_10,   this, mLoaderCallback); In the onCreate() function, add the following line before calling setContentView to prevent the screen from turning off, while using the app: getWindow().addFlags(WindowManager.LayoutParams. FLAG_KEEP_SCREEN_ON); We will now initialize our JavaCameraView object. Add the following lines after setContentView has been called: mOpenCvCameraView = (CameraBridgeViewBase)   findViewById(R.id.main_activity_surface_view); mOpenCvCameraView.setCvCameraViewListener(this); Notice that we called setCvCameraViewListener with the this parameter. For this, we need to make our activity implement the CvCameraViewListener2 interface. So, your class definition for the MainActivity class should look like the following code: public class MainActivity extends Activity   implements CvCameraViewListener2 We will add a menu to this activity to toggle between different examples in this article. Add the following lines to the onCreateOptionsMenu function: mItemPreviewKLT = menu.add("KLT Tracker"); mItemPreviewOpticalFlow = menu.add("Optical Flow"); We will now add some actions to the menu items. In the onOptionsItemSelected function, add the following lines: if (item == mItemPreviewOpticalFlow) {            mViewMode = VIEW_MODE_OPTICAL_FLOW;            resetVars();        } else if (item == mItemPreviewKLT){            mViewMode = VIEW_MODE_KLT_TRACKER;            resetVars();        }          return true; We used a resetVars function to reset all the Mat objects. 
It has been defined as follows: private void resetVars(){        mPrevGray = new Mat(mGray.rows(), mGray.cols(), CvType.CV_8UC1);        features = new MatOfPoint();        prevFeatures = new MatOfPoint2f();        nextFeatures = new MatOfPoint2f();        status = new MatOfByte();        err = new MatOfFloat();    } We will also add the code to make sure that the camera is released for use by other applications, whenever our application is suspended or killed. So, add the following snippet of code to the onPause and onDestroy functions: if (mOpenCvCameraView != null)            mOpenCvCameraView.disableView(); After the OpenCV camera has been started, the onCameraViewStarted function is called, which is where we will add all our object initializations: public void onCameraViewStarted(int width, int height) {        mRgba = new Mat(height, width, CvType.CV_8UC4);        mIntermediateMat = new Mat(height, width, CvType.CV_8UC4);        mGray = new Mat(height, width, CvType.CV_8UC1);        resetVars();    } Similarly, the onCameraViewStopped function is called when we stop capturing frames. Here we will release all the objects we created when the view was started: public void onCameraViewStopped() {        mRgba.release();        mGray.release();        mIntermediateMat.release();    } Now we will add the implementation to process each frame of the feed that we captured from the camera. OpenCV calls the onCameraFrame method for each frame, with the frame as a parameter. We will use this to process each frame. We will use the viewMode variable to distinguish between the optical flow and the KLT tracker, and have different case constructs for the two: public Mat onCameraFrame(CvCameraViewFrame inputFrame) {        final int viewMode = mViewMode;        switch (viewMode) {            case VIEW_MODE_OPTICAL_FLOW: We will use the gray() function to obtain the Mat object that contains the captured frame in a grayscale format. OpenCV also provides a similar function called rgba() to obtain a colored frame. Then we will check whether this is the first run. If this is the first run, we will create and fill up a features array that stores the position of all the points in a grid, where we will compute the optical flow:                mGray = inputFrame.gray();                if(features.toArray().length==0){                   int rowStep = 50, colStep = 100;                    int nRows = mGray.rows()/rowStep, nCols = mGray.cols()/colStep;                      Point points[] = new Point[nRows*nCols];                    for(int i=0; i<nRows; i++){                        for(int j=0; j<nCols; j++){                            points[i*nCols+j]=new Point(j*colStep, i*rowStep);                        }                    }                      features.fromArray(points);                      prevFeatures.fromList(features.toList());                    mPrevGray = mGray.clone();                    break;                } The mPrevGray object refers to the previous frame in a grayscale format. We copied the points to a prevFeatures object that we will use to calculate the optical flow and store the corresponding points in the next frame in nextFeatures. All of the computation is carried out in the OpenCV-defined calcOpticalFlowPyrLK function.
This function takes in the grayscale version of the previous frame, the current grayscale frame, an object that contains the feature points whose optical flow needs to be calculated, and an object that will store the position of the corresponding points in the current frame:                nextFeatures.fromArray(prevFeatures.toArray());                Video.calcOpticalFlowPyrLK(mPrevGray, mGray,                    prevFeatures, nextFeatures, status, err); Now, we have the position of the grid of points and their position in the next frame as well. So, we will now draw a line that depicts the motion of each point on the grid:                List<Point> prevList=features.toList(), nextList=nextFeatures.toList();                Scalar color = new Scalar(255);                  for(int i = 0; i<prevList.size(); i++){                    Core.line(mGray, prevList.get(i), nextList.get(i), color);                } Before the loop ends, we have to copy the current frame to mPrevGray so that we can calculate the optical flow in the subsequent frames:                mPrevGray = mGray.clone();                break; default: mViewMode = VIEW_MODE_OPTICAL_FLOW; After we end the switch case construct, we will return a Mat object. This is the image that will be displayed as an output to the user of the application. Here, since all our operations and processing were performed on the grayscale image, we will return this image: return mGray; So, this is all about optical flow. The result can be seen in the following image: Optical flow at various points in the camera feed Image pyramids Pyramids are multiple copies of the same images that differ in their sizes. They are represented as layers, as shown in the following figure. Each level in the pyramid is obtained by reducing the rows and columns by half. Thus, effectively, we make the image's size one quarter of its original size: Relative sizes of pyramids Pyramids intrinsically define reduce and expand as their two operations. Reduce refers to a reduction in the image's size, whereas expand refers to an increase in its size. We will use a convention that lower levels in a pyramid mean downsized images and higher levels mean upsized images. Gaussian pyramids In the reduce operation, the equation that we use to successively find levels in pyramids, while using a 5x5 sliding window, has been written as follows. Notice that the size of the image reduces to a quarter of its original size:

g_{l-1}(i, j) = \sum_{m=-2}^{2} \sum_{n=-2}^{2} w(m, n)\, g_l(2i + m, 2j + n)

The elements of the weight kernel, w, should add up to 1. We use a 5x5 Gaussian kernel for this task. This operation is similar to convolution with the exception that the resulting image doesn't have the same size as the original image. The following image shows you the reduce operation: The reduce operation The expand operation is the reverse process of reduce. We try to generate images of a higher size from images that belong to lower layers. Thus, the resulting image is blurred and is of a lower resolution. The equation we use to perform expansion is as follows:

g_{l+1}(i, j) = 4 \sum_{m=-2}^{2} \sum_{n=-2}^{2} w(m, n)\, g_l\!\left(\frac{i - m}{2}, \frac{j - n}{2}\right)

where only the terms for which (i - m)/2 and (j - n)/2 are integers contribute to the sum. The weight kernel in this case, w, is the same as the one used to perform the reduce operation. The following image shows you the expand operation: The expand operation The weights are calculated using the Gaussian function to perform Gaussian blur.
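As a hedged illustration of these two operations, the following minimal sketch uses OpenCV's Java bindings, where Imgproc.pyrDown and Imgproc.pyrUp apply the 5x5 Gaussian kernel described above; the class and method names here are our own and are not part of the original example:

import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class PyramidDemo {

    // Builds a small Gaussian pyramid: each call to pyrDown halves the
    // width and height, so each level holds a quarter of the pixels.
    public static Mat[] buildPyramid(Mat level0, int levels) {
        Mat[] pyramid = new Mat[levels];
        pyramid[0] = level0;
        for (int i = 1; i < levels; i++) {
            pyramid[i] = new Mat();
            Imgproc.pyrDown(pyramid[i - 1], pyramid[i]); // reduce
        }
        return pyramid;
    }

    // Expand: upsamples a level back to double its size; the result is
    // blurred, since the detail discarded by reduce cannot be recovered.
    public static Mat expand(Mat level) {
        Mat expanded = new Mat();
        Imgproc.pyrUp(level, expanded);
        return expanded;
    }
}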
Summary In this article, we have seen how to detect local and global motion in a video, and how we can track objects. We have also learned about Gaussian pyramids, and how they can be used to improve the performance of some computer vision tasks. Resources for Article: Further resources on this subject: New functionality in OpenCV 3.0 [article] Seeing a Heartbeat with a Motion Amplifying Camera [article] Camera Calibration [article]
Unit and Functional Tests

Packt
21 Aug 2014
13 min read
In this article by Belén Cruz Zapata and Antonio Hernández Niñirola, authors of the book Testing and Securing Android Studio Applications, you will learn how to use unit tests that allow developers to quickly verify the state and behavior of an activity on its own. (For more resources related to this topic, see here.) Testing activities There are two possible modes of testing activities: Functional testing: In functional testing, the activity being tested is created using the system infrastructure. The test code can communicate with the Android system, send events to the UI, or launch another activity. Unit testing: In unit testing, the activity being tested is created with minimal connection to the system infrastructure. The activity is tested in isolation. In this article, we will explore the Android testing API to learn about the classes and methods that will help you test the activities of your application. The test case classes The Android testing API is based on JUnit. Android JUnit extensions are included in the android.test package. The main classes involved when testing activities are the following: TestCase: This JUnit class belongs to the junit.framework package and represents a general test case. This class is extended by the Android API. InstrumentationTestCase: This class and its subclasses belong to the android.test package. It represents a test case that has access to instrumentation. ActivityTestCase: This class is used to test activities, but you should use one of its more specialized subclasses instead of the main class. ActivityInstrumentationTestCase2: This class provides functional testing of an activity and is parameterized with the activity under test. For example, to evaluate your MainActivity, you have to create a test class named MainActivityTest that extends the ActivityInstrumentationTestCase2 class, shown as follows: public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> ActivityUnitTestCase: This class provides unit testing of an activity and is parameterized with the activity under test. For example, to evaluate your MainActivity, you can create a test class named MainActivityUnitTest that extends the ActivityUnitTestCase class, shown as follows: public class MainActivityUnitTest extends ActivityUnitTestCase<MainActivity> A new term emerges from these class names: instrumentation. Instrumentation The execution of an application is ruled by its life cycle, which is determined by the Android system. For example, the life cycle of an activity is controlled by the invocation of some methods: onCreate(), onResume(), onDestroy(), and so on. These methods are called by the Android system and your code cannot invoke them, except while testing. The mechanism that allows your test code to invoke callback methods is known as Android instrumentation. Android instrumentation is a set of methods to control a component independently of its normal lifecycle. To invoke the callback methods from your test code, you have to use the classes that are instrumented. For example, to start the activity under test, you can use the getActivity() method that returns the activity instance. For each test method invocation, the activity will not be created until the first time this method is called. Instrumentation is necessary to test activities, considering that the lifecycle of an activity is based on its callback methods, which include the UI events as well. A minimal skeleton of such an instrumented test case is sketched below.
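To make the previous classes concrete, here is a hedged, minimal sketch of a functional test case for a hypothetical MainActivity; the activity name and the trivial test method are our assumptions for illustration and are not part of the original example:

public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> {

    private MainActivity mActivity;

    public MainActivityTest() {
        // Tells the test runner which activity class is under test
        super(MainActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        // Launches (or returns) the activity under test
        mActivity = getActivity();
    }

    public void testActivityIsLaunched() {
        assertNotNull("The activity should have been created", mActivity);
    }
}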
From an instrumented test case, you can use the getInstrumentation() method to get access to an Instrumentation object. This class provides methods related to the system's interaction with the application. The complete documentation about this class can be found at: http://developer.android.com/reference/android/app/Instrumentation.html. Some of the most important methods are as follows: The addMonitor method: This method adds a monitor to get information about a particular type of Intent and can be used to look for the creation of an activity. A monitor can be created by indicating an IntentFilter or the class name of the activity to monitor. Optionally, the monitor can block the activity start and return its canned result instead. You can use the following call definitions to add a monitor: ActivityMonitor addMonitor (IntentFilter filter, ActivityResult result, boolean block). ActivityMonitor addMonitor (String cls, ActivityResult result, boolean block). The following is an example line of code to add a monitor: Instrumentation.ActivityMonitor monitor = getInstrumentation().addMonitor(SecondActivity.class.getName(), null, false); The activity lifecycle methods: The methods to call the activity lifecycle methods are: callActivityOnCreate, callActivityOnDestroy, callActivityOnPause, callActivityOnRestart, callActivityOnResume, callActivityOnStart, finish, and so on. For example, you can pause an activity using the following line of code: getInstrumentation().callActivityOnPause(mActivity); The getTargetContext method: This method returns the context for the application. The startActivitySync method: This method starts a new activity and waits for it to begin running. The function returns when the new activity has gone through the full initialization after the call to its onCreate method. The waitForIdleSync method: This method waits for the application to be idle synchronously. The test case methods JUnit's TestCase class provides the following protected methods that can be overridden by the subclasses: setUp(): This method is used to initialize the fixture state of the test case. It is executed before every test method is run. If you override this method, the first line of code should call the superclass. A standard setUp method should follow the given code definition: @Override protected void setUp() throws Exception { super.setUp(); // Initialize the fixture state } tearDown(): This method is used to tear down the fixture state of the test case. You should use this method to release resources. It is executed after running every test method. If you override this method, the last line of code should call the superclass, shown as follows: @Override protected void tearDown() throws Exception { // Tear down the fixture state super.tearDown(); } The fixture state is usually implemented as a group of member variables, but it can also consist of database or network connections. If you open or initialize connections in the setUp method, they should be closed or released in the tearDown method. When testing activities in Android, you have to initialize the activity under test in the setUp method. This can be done with the getActivity() method. The Assert class and method JUnit's TestCase class extends the Assert class, which provides a set of assert methods to check for certain conditions. When an assert method fails, an AssertionFailedError is thrown. The test runner will handle the multiple assertion errors to present the testing results.
Optionally, you can specify the error message that will be shown if the assert fails. You can read the Android reference of the Assert class to examine all the available methods at http://developer.android.com/reference/junit/framework/Assert.html. The assertion methods provided by the Assert superclass are as follows: assertEquals: This method checks whether the two values provided are equal. It receives the actual and expected value that are to be compared with each other. This method is overloaded to support values of different types, such as short, String, char, int, byte, boolean, float, double, long, or Object. For example, the following assertion method throws an exception since both values are not equal: assertEquals(true, false); assertTrue or assertFalse: These methods check whether the given Boolean condition is true or false. assertNull or assertNotNull: These methods check whether an object is null or not. assertSame or assertNotSame: These methods check whether two references refer to the same object or not. fail: This method fails a test. It can be used to make sure that a part of code is never reached, for example, if you want to test that a method throws an exception when it receives a wrong value, as shown in the following code snippet: try{ dontAcceptNullValuesMethod(null); fail("No exception was thrown"); } catch (NullPointerException e) { // OK } The Android testing API, which extends JUnit, provides additional and more powerful assertion classes: ViewAsserts and MoreAsserts. The ViewAsserts class The assertion methods offered by JUnit's Assert class are not enough if you want to test some special Android objects such as the ones related to the UI. The ViewAsserts class implements more sophisticated methods related to the Android views, that is, for the View objects. The whole list with all the assertion methods can be explored in the Android reference about this class at http://developer.android.com/reference/android/test/ViewAsserts.html. Some of them are described as follows: assertBottomAligned or assertLeftAligned or assertRightAligned or assertTopAligned(View first, View second): These methods check that the two specified View objects are bottom, left, right, or top aligned, respectively assertGroupContains or assertGroupNotContains(ViewGroup parent, View child): These methods check whether the specified ViewGroup object contains the specified child View assertHasScreenCoordinates(View origin, View view, int x, int y): This method checks that the specified View object has a particular position on the origin screen assertHorizontalCenterAligned or assertVerticalCenterAligned(View reference, View view): These methods check that the specified View object is horizontally or vertically aligned with respect to the reference view assertOffScreenAbove or assertOffScreenBelow(View origin, View view): These methods check that the specified View object is above or below the visible screen assertOnScreen(View origin, View view): This method checks that the specified View object is loaded on the screen even if it is not visible The MoreAsserts class The Android API extends some of the basic assertion methods from the Assert class to present some additional methods.
Some of the methods included in the MoreAsserts class are: assertContainsRegex(String expectedRegex, String actual): This method checks that the actual given string contains a match of the expected regular expression (regex) assertContentsInAnyOrder(Iterable<?> actual, Object… expected): This method checks that the iterable object contains the given objects, in any order assertContentsInOrder(Iterable<?> actual, Object… expected): This method checks that the iterable object contains the given objects, but in the same order assertEmpty: This method checks if a collection is empty assertEquals: This method extends the assertEquals method from JUnit to cover collections: the Set objects, int arrays, String arrays, Object arrays, and so on assertMatchesRegex(String expectedRegex, String actual): This method checks whether the expected regex matches the given actual string exactly Opposite methods such as assertNotContainsRegex, assertNotEmpty, assertNotEquals, and assertNotMatchesRegex are included as well. All these methods are overloaded to optionally include a custom error message. The Android reference about the MoreAsserts class can be inspected to learn more about these assert methods at http://developer.android.com/reference/android/test/MoreAsserts.html. UI testing and TouchUtils The test code is executed in a different thread than the application under test, although both threads run in the same process. When testing the UI of an application, UI objects can be referenced from the test code, but you cannot change their properties or send events to them directly from the test thread. There are two strategies to invoke methods that should run in the UI thread: Activity.runOnUiThread(): This method creates a Runnable object in the UI thread in which you can add the code in the run() method. For example, if you want to request the focus of a UI component: public void testComponent() { mActivity.runOnUiThread( new Runnable() { public void run() { mComponent.requestFocus(); } } ); … } @UiThreadTest: This annotation affects the whole method because it is executed on the UI thread. Considering the annotation refers to an entire method, statements that do not interact with the UI are not allowed in it. For example, consider the previous example using this annotation, shown as follows: @UiThreadTest public void testComponent () { mComponent.requestFocus(); … } There is also a helper class that provides methods to perform touch interactions on the view of your application: TouchUtils. The touch events are sent to the UI thread safely from the test thread; therefore, the methods of the TouchUtils class should not be invoked in the UI thread. Some of the methods provided by this helper class are as follows: The clickView method: This method simulates a click on the center of a view The drag, dragQuarterScreenDown, dragViewBy, dragViewTo, dragViewToTop methods: These methods simulate a click on a UI element and then drag it accordingly The longClickView method: This method simulates a long press click on the center of a view The scrollToTop or scrollToBottom methods: These methods scroll a ViewGroup to the top or bottom The mock object classes The Android testing API provides some classes to create mock system objects. Mock objects are fake objects that simulate the behavior of real objects but are totally controlled by the test. They allow isolation of tests from the rest of the system. Mock objects can, for example, simulate a part of the system that has not been implemented yet, or a part that is not practical to be tested. In Android, the following mock classes can be found: MockApplication, MockContext, MockContentProvider, MockCursor, MockDialogInterface, MockPackageManager, MockResources, and MockContentResolver. These classes are under the android.test.mock package. The methods of these objects are nonfunctional and throw an exception if they are called. You have to override the methods that you want to use, as in the sketch that follows.
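As a hedged illustration (the class name, the overridden method, and the returned package name are chosen here for demonstration and are not part of the original example), a mock context that only answers getPackageName() could look like this:

public class FakePackageNameContext extends MockContext {

    // Only this method is overridden; every other method inherited from
    // MockContext still throws UnsupportedOperationException when called.
    @Override
    public String getPackageName() {
        return "com.example.fake";
    }
}

Any code under test that receives this context can safely call getPackageName(), while an accidental call to any other Context method fails fast, making unexpected dependencies on the system visible.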
Creating an activity test In this section, we will create an example application so that we can learn how to implement the test cases to evaluate it. Some of the methods presented in the previous section will be put into practice. You can download the example code files from your account at http://www.packtpub.com. Our example is a simple alarm application that consists of two activities: MainActivity and SecondActivity. The MainActivity implements a self-built digital clock using text views and buttons. The purpose of creating a self-built digital clock is to have more code and elements to use in our tests. The layout of MainActivity is a relative one that includes two text views: one for the hour (the tvHour ID) and one for the minutes (the tvMinute ID). There are two buttons below the clock: one to subtract 10 minutes from the clock (the bMinus ID) and one to add 10 minutes to the clock (the bPlus ID). There is also an edit text field to specify the alarm name. Finally, there is a button to launch the second activity (the bValidate ID). Each button has a pertinent method that receives the click event when the button is pressed. The SecondActivity receives the hour from the MainActivity and shows its value in a text view, simulating that the alarm was saved. The objective of creating this second activity is to be able to test the launch of another activity in our test case. Summary In this article, you learned how to use unit tests that allow developers to quickly verify the state and behavior of an activity on its own. Resources for Article: Further resources on this subject: Creating Dynamic UI with Android Fragments [article] Saying Hello to Unity and Android [article] Augmented Reality [article]
Prerequisites for a Map Application

Packt
16 Sep 2015
10 min read
In this article by Raj Amal, author of the book Learning Android Google Maps, we will cover the following topics: Generating an SHA1 fingerprint in Windows, Linux, and Mac OS X Registering our application in the Google Developer Console Configuring Google Play services with our application Adding permissions and defining an API key Generating the SHA1 fingerprint Let's learn about generating the SHA1 fingerprint on the different platforms one by one. Windows The keytool usually comes with the JDK package. We use the keytool to generate the SHA1 fingerprint. Navigate to the bin directory in your default JDK installation location, which is what you configured in the JAVA_HOME variable, for example, C:\Program Files\Java\jdk1.7.0_71. Then, navigate to File | Open command prompt. Now, the command prompt window will open. Enter the following command, and then hit the Enter key: keytool -list -v -keystore "%USERPROFILE%\.android\debug.keystore" -alias androiddebugkey -storepass android -keypass android You will see output similar to what is shown here: Valid from: Sun Nov 02 16:49:26 IST 2014 until: Tue Oct 25 16:49:26 IST 2044 Certificate fingerprints: MD5: 55:66:D0:61:60:4D:66:B3:69:39:23:DB:84:15:AE:17 SHA1: C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33 In the preceding output, note down the SHA1 value that is required to register our application with the Google Developer Console. Linux We are going to obtain the SHA1 fingerprint from the debug.keystore file, which is present in the .android folder in your home directory. If you installed Java directly from a PPA, open the terminal and enter the following command: keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android This will return an output similar to the one we obtained in Windows. Note down the SHA1 fingerprint, which we will use later. If you've installed Java manually, you'll need to run the keytool from its installation location. You can export the Java JDK path as follows: export JAVA_HOME={PATH to JDK} After exporting the path, run the keytool as follows: $JAVA_HOME/bin/keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android Mac OS X Generating the SHA1 fingerprint in Mac OS X is similar to what you performed in Linux. Open the terminal and enter the command. It will show output similar to what we obtained in Linux. Note down the SHA1 fingerprint, which we will use later: keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android Registering your application to the Google Developer Console This is one of the most important steps in our process. Our application will not function without obtaining an API key from the Google Developer Console. Follow these steps one by one to obtain the API key: Open the Google Developer Console by visiting https://console.developers.google.com and click on the CREATE PROJECT button. A new dialog box appears. Give your project a name and a unique project ID. Then, click on Create. As soon as your project is created, you will be redirected to the Project dashboard. On the left-hand side, under the APIs & auth section, select APIs. Then, scroll down and enable Google Maps Android API v2. Next, under the same APIs & auth section, select Credentials.
Select Create new Key under Public API access, and then select Android key in the dialog that follows. In the next window, enter the SHA1 fingerprint we noted in the previous section, followed by a semicolon and the package name of the Android application we wish to register. For example, my SHA1 fingerprint value is C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33, and the package name of the app I wish to create is com.raj.map; so, I need to enter the following: C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33;com.raj.map Finally, click on Create. Now our Android application will be registered with the Google Developer Console, and it will display a screen containing the new key. Note down the API key from the screen, which will be similar to this: AIzaSyAdJdnEG5vfo925VV2T9sNrPQ_rGgIGnEU Configuring Google Play services Google Play services includes the classes required for our map application, so it needs to be set up properly. The setup differs between Eclipse with the ADT plugin and Gradle-based Android Studio. Let's see how to configure Google Play services for each of them separately; it is relatively simple. Android Studio Configuring Google Play services with Android Studio is very simple. You need to add a line of code to your build.gradle file, which contains the Gradle build script required to build our project. There are two build.gradle files; you must add the code to the inner app module's build.gradle file. Add the following code to the dependencies section in that Gradle build file: compile 'com.google.android.gms:play-services:7.5.0' The structure should be similar to the following code: dependencies { compile 'com.google.android.gms:play-services:7.5.0' compile 'com.android.support:appcompat-v7:21.0.3' } The 7.5.0 in the code is the version number of Google Play services. Change the version number according to your current version. The current version can be found in the values.xml file present in the res/values directory of the Google Play services library project. The newest version of Google Play services can be found at https://developers.google.com/android/guides/setup. That's it. Now resync your project. You can sync by navigating to Tools | Android | Sync Project with Gradle Files. Now, Google Play services will be integrated with your project. Eclipse Let's take a look at how to configure Google Play services in Eclipse with the ADT plugin. First, we need to import Google Play services into the workspace. Navigate to File | Import, then select Android | Existing Android Code Into Workspace, and click on Next. In the next window, browse to the sdk/extras/google/google_play_services/libproject/google-play-services_lib directory. Finally, click on Finish. Now, google-play-services_lib will be added to your workspace. Next, let's take a look at how to configure Google Play services with our application project. Select your project, right-click on it, and select Properties. In the Library section, click on Add and choose google-play-services_lib. Then, click on OK.
Now, google-play-services_lib will be added as a library to our application project. In the next section, we will see how to configure the API key and add the permissions that are needed for our application. Adding permissions and defining the API key The permissions and API key must be defined in the AndroidManifest.xml file, which provides essential information about the application to the operating system. The OpenGL ES version, which is required to render the map, must be specified in the manifest file, as must the Google Play services version. Four permissions are required for our map application to work properly. The permissions should be added inside the <manifest> element. The four permissions are as follows: INTERNET ACCESS_NETWORK_STATE WRITE_EXTERNAL_STORAGE READ_GSERVICES Let's take a look at what these permissions are for. INTERNET This permission is required for our application to gain access to the Internet. Since Google Maps mainly relies on real-time Internet access, this permission is essential. ACCESS_NETWORK_STATE This permission gives information about a network and whether we are connected to a particular network or not. WRITE_EXTERNAL_STORAGE This permission is required to write data to an external storage. In our application, it is required to cache map data to the external storage. READ_GSERVICES This permission allows you to read Google services. The permissions are added to AndroidManifest.xml as follows: <uses-permission android:name="android.permission.INTERNET"/> <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/> <uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" /> There are some more permissions that are currently not required. Specifying the Google Play services version The Google Play services version must be specified in the manifest file for the functioning of maps. It must be within the <application> element. Add the following code to AndroidManifest.xml: <meta-data android:name="com.google.android.gms.version" android:value="@integer/google_play_services_version" /> Specifying the OpenGL ES version 2 Android Google Maps uses OpenGL ES to render the map. Google Maps will not work on devices that do not support version 2 of OpenGL ES. Hence, it is necessary to specify the version in the manifest file. It must be added within the <manifest> element, similar to permissions. Add the following code to AndroidManifest.xml: <uses-feature android:glEsVersion="0x00020000" android:required="true"/> The preceding code specifies that version 2 of OpenGL ES is required for the functioning of our application. Defining the API key The Google Maps API key is required to provide authorization to the Google Maps service. It must be specified within the <application> element. Add the following code to AndroidManifest.xml: <meta-data android:name="com.google.android.maps.v2.API_KEY" android:value="API_KEY"/> The API_KEY value must be replaced with the API key we noted earlier from the Google Developer Console.
The complete AndroidManifest structure, after adding the permissions and specifying the OpenGL ES version, the Google Play services version, and the API key, is as follows:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.raj.sampleapplication"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-feature android:glEsVersion="0x00020000" android:required="true"/>
    <uses-permission android:name="android.permission.INTERNET"/>
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
    <uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" />

    <application>
        <meta-data android:name="com.google.android.gms.version" android:value="@integer/google_play_services_version" />
        <meta-data android:name="com.google.android.maps.v2.API_KEY" android:value="AIzaSyBVMWTLk4uKcXSHBJTzrxsrPNSjfL18lk0"/>
    </application>
</manifest>
Summary In this article, we learned how to generate the SHA1 fingerprint on different platforms, how to register our application in the Google Developer Console, and how to generate an API key. We also configured Google Play services in Android Studio and Eclipse, and added the permissions and other data to the manifest file that are essential to create a map application. Resources for Article: Further resources on this subject: Testing with the Android SDK [article] Signing an application in Android using Maven [article] Code Sharing Between iOS and Android [article]
Applications of Physics

Packt
18 Feb 2013
16 min read
(For more resources related to this topic, see here.) Introduction to the Box2D physics extension Physics-based games are one of the most popular types of games available for mobile devices. AndEngine allows the creation of physics-based games with the Box2D extension. With this extension, we can construct any type of physically realistic 2D environment, from small, simple simulations to complex games. In this recipe, we will create an activity that demonstrates a simple setup for utilizing the Box2D physics engine extension. Furthermore, we will use this activity for the remaining recipes in this article. Getting ready... First, create a new activity class named PhysicsApplication that extends BaseGameActivity and implements IAccelerationListener and IOnSceneTouchListener. How to do it... Follow these steps to build our PhysicsApplication activity class: Create the following variables in the class: public static int cameraWidth = 800; public static int cameraHeight = 480; public Scene mScene; public FixedStepPhysicsWorld mPhysicsWorld; public Body groundWallBody; public Body roofWallBody; public Body leftWallBody; public Body rightWallBody; We need to set up the foundation of our activity. To start doing so, place these four common overridden methods in the class to set up the engine, resources, and the main scene: @Override public Engine onCreateEngine(final EngineOptions pEngineOptions) { return new FixedStepEngine(pEngineOptions, 60); } @Override public EngineOptions onCreateEngineOptions() { EngineOptions engineOptions = new EngineOptions(true, ScreenOrientation.LANDSCAPE_SENSOR, new FillResolutionPolicy(), new Camera(0,0, cameraWidth, cameraHeight)); engineOptions.getRenderOptions().setDithering(true); engineOptions.getRenderOptions().getConfigChooserOptions().setRequestedMultiSampling(true); engineOptions.setWakeLockOptions(WakeLockOptions.SCREEN_ON); return engineOptions; } @Override public void onCreateResources(OnCreateResourcesCallback pOnCreateResourcesCallback) { pOnCreateResourcesCallback.
onCreateResourcesFinished(); } @Override public void onCreateScene(OnCreateSceneCallback pOnCreateSceneCallback) { mScene = new Scene(); mScene.setBackground(new Background(0.9f,0.9f,0.9f)); pOnCreateSceneCallback.onCreateSceneFinished(mScene); } Continue setting up the activity by adding the following overridden method, which will be used to populate our scene: @Override public void onPopulateScene(Scene pScene, OnPopulateSceneCallback pOnPopulateSceneCallback) { } Next, we will fill the previous method with the following code to create our PhysicsWorld object and populate our Scene: mPhysicsWorld = new FixedStepPhysicsWorld(60, new Vector2(0f,-SensorManager.GRAVITY_EARTH*2), false, 8, 3); mScene.registerUpdateHandler(mPhysicsWorld); final FixtureDef WALL_FIXTURE_DEF = PhysicsFactory.createFixtureDef(0, 0.1f, 0.5f); final Rectangle ground = new Rectangle(cameraWidth / 2f, 6f, cameraWidth - 4f, 8f, this.getVertexBufferObjectManager()); final Rectangle roof = new Rectangle(cameraWidth / 2f, cameraHeight - 6f, cameraWidth - 4f, 8f, this.getVertexBufferObjectManager()); final Rectangle left = new Rectangle(6f, cameraHeight / 2f, 8f, cameraHeight - 4f, this.getVertexBufferObjectManager()); final Rectangle right = new Rectangle(cameraWidth - 6f, cameraHeight / 2f, 8f, cameraHeight - 4f, this.getVertexBufferObjectManager()); ground.setColor(0f, 0f, 0f); roof.setColor(0f, 0f, 0f); left.setColor(0f, 0f, 0f); right.setColor(0f, 0f, 0f); groundWallBody = PhysicsFactory.createBoxBody(this.mPhysicsWorld, ground, BodyType.StaticBody, WALL_FIXTURE_DEF); roofWallBody = PhysicsFactory.createBoxBody(this.mPhysicsWorld, roof, BodyType.StaticBody, WALL_FIXTURE_DEF); leftWallBody = PhysicsFactory.createBoxBody(this.mPhysicsWorld, left, BodyType.StaticBody, WALL_FIXTURE_DEF); rightWallBody = PhysicsFactory.createBoxBody(this.mPhysicsWorld, right, BodyType.StaticBody, WALL_FIXTURE_DEF); this.mScene.attachChild(ground); this.mScene.attachChild(roof); this.mScene.attachChild(left); this.mScene.attachChild(right); // Further recipes in this chapter will require us to place code here. mScene.setOnSceneTouchListener(this); pOnPopulateSceneCallback.onPopulateSceneFinished(); The following overridden methods handle the scene touch events, the accelerometer input, and the two engine life cycle events—onResumeGame and onPauseGame. Place them at the end of the class to finish this recipe: @Override public boolean onSceneTouchEvent(Scene pScene, TouchEvent pSceneTouchEvent) { // Further recipes in this chapter will require us to place code here. return true; } @Override public void onAccelerationAccuracyChanged( AccelerationData pAccelerationData) {} @Override public void onAccelerationChanged( AccelerationData pAccelerationData) { final Vector2 gravity = Vector2Pool.obtain( pAccelerationData.getX(), pAccelerationData.getY()); this.mPhysicsWorld.setGravity(gravity); Vector2Pool.recycle(gravity); } @Override public void onResumeGame() { super.onResumeGame(); this.enableAccelerationSensor(this); } @Override public void onPauseGame() { super.onPauseGame(); this.disableAccelerationSensor(); } How it works... The first thing that we do is define a camera width and height. Then, we define a Scene object and a FixedStepPhysicsWorld object in which the physics simulations will take place. The last set of variables defines what will act as the borders for our physics-based scenes. In the second step, we override the onCreateEngine() method to return a FixedStepEngine object that will process 60 updates per second.
The reason that we do this, while also using a FixedStepPhysicsWorld object, is to create a simulation that will be consistent across all devices, regardless of how efficiently a device can process the physics simulation. We then create the EngineOptions object with standard preferences, create the onCreateResources() method with only a simple callback, and set the main scene with a light-gray background. In the onPopulateScene() method, we create our FixedStepPhysicsWorld object that has double the gravity of the Earth, passed as an (x,y) coordinate Vector2 object, and will update 60 times per second. The gravity can be set to other values to make our simulations more realistic or 0 to create a zero gravity simulation. A gravity setting of 0 is useful for space simulations or for games that use a top-down camera view instead of a profile. The false Boolean parameter sets the AllowSleep property of the PhysicsWorld object, which tells PhysicsWorld to not let any bodies deactivate themselves after coming to a stop. The last two parameters of the FixedStepPhysicsWorld object tell the physics engine how many times to calculate velocity and position movements. Higher iterations will create simulations that are more accurate, but can cause lag or jitteriness because of the extra load on the processor. After creating the FixedStepPhysicsWorld object, we register it with the main scene as an update handler. The physics world will not run a simulation without being registered. The variable WALL_FIXTURE_DEF is a fixture definition. Fixture definitions hold the shape and material properties of entities that will be created within the physics world as fixtures. The shape of a fixture can be either circular or polygonal. The material of a fixture is defined by its density, elasticity, and friction, all of which are required when creating a fixture definition. Following the creation of the WALL_FIXTURE_DEF variable, we create four rectangles that will represent the locations of the wall bodies. A body in the Box2D physics world is made of fixtures. While only one fixture is necessary to create a body, multiple fixtures can create complex bodies with varying properties. Further along in the onPopulateScene() method, we create the box bodies that will act as our walls in the physics world. The rectangles that were previously created are passed to the bodies to define their position and shape. We then define the bodies as static, which means that they will not react to any forces in the physics simulation. Lastly, we pass the wall fixture definition to the bodies to complete their creation. After creating the bodies, we attach the rectangles to the main scene and set the scene's touch listener to our activity, which will be accessed by the onSceneTouchEvent() method. The final line of the onPopulateScene() method tells the engine that the scene is ready to be shown. The overridden onSceneTouchEvent() method will handle all touch interactions for our scene. The onAccelerationAccuracyChanged() and onAccelerationChanged() methods are inherited from the IAccelerationListener interface and allow us to change the gravity of our physics world when the device is tilted, rotated, or panned. We override onResumeGame() and onPauseGame() to keep the accelerometer from using unnecessary battery power when our game activity is not in the foreground. There's more... In the overridden onAccelerationChanged() method, we make two calls to the Vector2Pool class. 
The Vector2Pool class simply gives us a way of re-using our Vector2 objects that might otherwise require garbage collection by the system. On newer devices, the Android Garbage Collector has been streamlined to reduce noticeable hiccups, but older devices might still experience lag depending on how much memory the variables being garbage collected occupy. Visit http://www.box2d.org/manual.html to see the Box2D User Manual. The AndEngine Box2D extension is based on a Java port of the official Box2D C++ physics engine, so some variations in procedure exist, but the general concepts still apply. See also Understanding different body types in this article. Understanding different body types The Box2D physics world gives us the means to create different body types that allow us to control the physics simulation. We can generate dynamic bodies that react to forces and other bodies, static bodies that do not move, and kinematic bodies that move but are not affected by forces or other bodies. Choosing which type each body will be is vital to producing an accurate physics simulation. In this recipe, we will see how three bodies react to each other during collision, depending on their body types. Getting ready... Follow the recipe in the Introduction to the Box2D physics extension section given at the beginning of this article to create a new activity that will facilitate the creation of our bodies with varying body types. How to do it... Complete the following steps to see how specifying a body type for bodies affects them: First, insert the following fixture definition into the onPopulateScene() method: FixtureDef BoxBodyFixtureDef = PhysicsFactory.createFixtureDef(20f, 0f, 0.5f); Next, place the following code that creates three rectangles and their corresponding bodies after the fixture definition from the previous step: Rectangle staticRectangle = new Rectangle(cameraWidth / 2f,75f,400f,40f,this.getVertexBufferObjectManager()); staticRectangle.setColor(0.8f, 0f, 0f); mScene.attachChild(staticRectangle); PhysicsFactory.createBoxBody(mPhysicsWorld, staticRectangle, BodyType.StaticBody, BoxBodyFixtureDef); Rectangle dynamicRectangle = new Rectangle(400f, 120f, 40f, 40f, this.getVertexBufferObjectManager()); dynamicRectangle.setColor(0f, 0.8f, 0f); mScene.attachChild(dynamicRectangle); Body dynamicBody = PhysicsFactory.createBoxBody(mPhysicsWorld, dynamicRectangle, BodyType.DynamicBody, BoxBodyFixtureDef); mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector(dynamicRectangle, dynamicBody)); Rectangle kinematicRectangle = new Rectangle(600f, 100f, 40f, 40f, this.getVertexBufferObjectManager()); kinematicRectangle.setColor(0.8f, 0.8f, 0f); mScene.attachChild(kinematicRectangle); Body kinematicBody = PhysicsFactory.createBoxBody(mPhysicsWorld, kinematicRectangle, BodyType.KinematicBody, BoxBodyFixtureDef); mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector(kinematicRectangle, kinematicBody)); Lastly, add the following code after the definitions from the previous step to set the linear and angular velocities for our kinematic body: kinematicBody.setLinearVelocity(-2f, 0f); kinematicBody.setAngularVelocity((float) (-Math.PI)); How it works... In the first step, we create the BoxBodyFixtureDef fixture definition that we will use when creating our bodies in the second step. For more information on fixture definitions, see the Introduction to the Box2D physics extension recipe in this article. In step two, we first define the staticRectangle rectangle by calling the Rectangle constructor.
We place staticRectangle at the position of cameraWidth / 2f, 75f, which is near the lower-center of the scene, and we set the rectangle to have a width of 400f and a height of 40f, which makes the rectangle into a long, flat bar. Then, we set the staticRectangle rectangle's color to be red by calling staticRectangle.setColor(0.8f, 0f, 0f). Lastly, for the staticRectangle rectangle, we attach it to the scene by calling the mScene.attachChild() method with staticRectangle as the parameter. Next, we create a body in the physics world that matches our staticRectangle. To do this, we call the PhysicsFactory.createBoxBody() method with the parameters of mPhysicsWorld, which is our physics world, staticRectangle to tell the box to be created with the same position and size as the staticRectangle rectangle, BodyType.StaticBody to define the body as static, and our BoxBodyFixtureDef fixture definition. Our next rectangle, dynamicRectangle, is created at the location of 400f and 120f, which is the middle of the scene slightly above the staticRectangle rectangle. Our dynamicRectangle rectangle's width and height are set to 40f to make it a small square. Then, we set its color to green by calling dynamicRectangle.setColor(0f, 0.8f, 0f) and attach it to our scene using mScene.attachChild(dynamicRectangle). Next, we create the dynamicBody variable using the PhysicsFactory.createBoxBody() method in the same way that we did for our staticRectangle rectangle. Notice that we set the dynamicBody variable to have a BodyType of DynamicBody. This sets the body to be dynamic. Now, we register a PhysicsConnector with the physics world to link dynamicRectangle and dynamicBody. A PhysicsConnector class links an entity within our scene to a body in the physics world, representing the body's real-time position and rotation in our scene. Our last rectangle, kinematicRectangle, is created at the location of 600f and 100f, which places it on top of our staticRectangle rectangle toward the right-hand side of the scene. It is set to have a height and width of 40f, which makes it a small square like our dynamicRectangle rectangle. We then set the kinematicRectangle rectangle's color to yellow and attach it to our scene. Similar to the previous two bodies that we created, we call the PhysicsFactory.createBoxBody() method to create our kinematicBody variable. Take note that we create our kinematicBody variable with a BodyType of KinematicBody. This sets it to be kinematic and thus moved only by the setting of its velocities. Lastly, we register a PhysicsConnector class between our kinematicRectangle rectangle and our kinematicBody body. In the last step, we set our kinematicBody body's linear velocity by calling the setLinearVelocity() method with a vector of -2f on the x axis, which makes it move to the left. Finally, we set our kinematicBody body's angular velocity to negative pi by calling kinematicBody.setAngularVelocity((float) (-Math.PI)). For more information on setting a body's velocities, see the Using forces, velocities, and torque recipe in this article. There's more... Static bodies cannot move from applied or set forces, but can be relocated using the setTransform() method. However, we should avoid using the setTransform() method while a simulation is running, because it makes the simulation unstable and can cause some strange behaviors. 
Instead, if we want to change the position of a static body, we can do so when creating the simulation or, if we need to change the position at runtime, simply check first that the new position will not cause the static body to overlap existing dynamic or kinematic bodies. Kinematic bodies cannot have forces applied, but we can set their velocities via the setLinearVelocity() and setAngularVelocity() methods. See also Introduction to the Box2D physics extension in this article. Using forces, velocities, and torque in this article. Creating category-filtered bodies Depending on the type of physics simulation that we want to achieve, controlling which bodies are capable of colliding can be very beneficial. In Box2D, we can assign a category and a category-filter to fixtures to control which fixtures can interact. This recipe will cover the defining of two category-filtered fixtures that will be applied to bodies created by touching the scene to demonstrate category-filtering. Getting ready... Create an activity by following the steps in the Introduction to the Box2D physics extension section given at the beginning of the article. This activity will facilitate the creation of the category-filtered bodies used in this section. How to do it... Follow these steps to build our category-filtering demonstration activity: Define the following class-level variables within the activity: private int mBodyCount = 0; public static final short CATEGORYBIT_DEFAULT = 1; public static final short CATEGORYBIT_RED_BOX = 2; public static final short CATEGORYBIT_GREEN_BOX = 4; public static final short MASKBITS_RED_BOX = CATEGORYBIT_DEFAULT + CATEGORYBIT_RED_BOX; public static final short MASKBITS_GREEN_BOX = CATEGORYBIT_DEFAULT + CATEGORYBIT_GREEN_BOX; public static final FixtureDef RED_BOX_FIXTURE_DEF = PhysicsFactory.createFixtureDef(1, 0.5f, 0.5f, false, CATEGORYBIT_RED_BOX, MASKBITS_RED_BOX, (short)0); public static final FixtureDef GREEN_BOX_FIXTURE_DEF = PhysicsFactory.createFixtureDef(1, 0.5f, 0.5f, false, CATEGORYBIT_GREEN_BOX, MASKBITS_GREEN_BOX, (short)0); Next, create this method within the class that generates new category-filtered bodies at a given location: private void addBody(final float pX, final float pY) { this.mBodyCount++; final Rectangle rectangle = new Rectangle(pX, pY, 50f, 50f, this.getVertexBufferObjectManager()); rectangle.setAlpha(0.5f); final Body body; if(this.mBodyCount % 2 == 0) { rectangle.setColor(1f, 0f, 0f); body = PhysicsFactory.createBoxBody(this.mPhysicsWorld, rectangle, BodyType.DynamicBody, RED_BOX_FIXTURE_DEF); } else { rectangle.setColor(0f, 1f, 0f); body = PhysicsFactory.createBoxBody(this.mPhysicsWorld, rectangle, BodyType.DynamicBody, GREEN_BOX_FIXTURE_DEF); } this.mScene.attachChild(rectangle); this.mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector(rectangle, body, true, true)); } Lastly, fill the body of the onSceneTouchEvent() method with the following code that calls the addBody() method by passing the touched location: if(this.mPhysicsWorld != null) if(pSceneTouchEvent.isActionDown()) this.addBody(pSceneTouchEvent.getX(), pSceneTouchEvent.getY()); How it works... In the first step, we create an integer, mBodyCount, which counts how many bodies we have added to the physics world. The mBodyCount integer is used in the second step to determine which color, and thus which category, should be assigned to the new body. 
We also create the CATEGORYBIT_DEFAULT, CATEGORYBIT_RED_BOX, and CATEGORYBIT_GREEN_BOX category bits by defining them with unique power-of-two short integers, and the MASKBITS_RED_BOX and MASKBITS_GREEN_BOX mask bits by adding their associated category bits together. The category bits are used to assign a category to a fixture, while the mask bits combine the different category bits to determine which categories a fixture can collide with. We then pass the category bits and mask bits to the fixture definitions to create fixtures that have category collision rules. The second step is a simple method that creates a rectangle and its corresponding body. The method takes the X and Y location parameters that we want to use to create a new body and passes them to a Rectangle object's constructor, to which we also pass a height and width of 50f and the activity's VertexBufferObjectManager. Then, we set the rectangle to be 50 percent transparent using the rectangle.setAlpha() method. After that, we define a body and modulate the mBodyCount variable by 2 to determine the color and fixture of every other created body. After determining the color and fixture, we assign them by setting the rectangle's color and creating a body by passing our mPhysicsWorld physics world, the rectangle, a dynamic body type, and the previously-determined fixture to use. Finally, we attach the rectangle to our scene and register a PhysicsConnector class to connect the rectangle to our body. The third step calls the addBody() method from step two only if the physics world has been created and only if the scene's TouchEvent is ActionDown. The parameters that are passed, pSceneTouchEvent.getX() and pSceneTouchEvent.getY(), represent the location on the scene that received a touch input, which is also the location where we want to create a new category-filtered body. There's more... The default category of all fixtures has a value of one. When creating mask bits for specific fixtures, remember that any combination that includes the default category will cause the fixture to collide with all other fixtures that are not masked to avoid collision with the fixture (the underlying bitwise rule is sketched just after the See also references below). See also Introduction to the Box2D physics extension in this article. Understanding different body types in this article.
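To make the filtering rule concrete, here is a minimal sketch of the check Box2D performs internally (ignoring collision groups, which this recipe sets to zero); this illustrates standard Box2D behavior and is not code from the recipe:

// Standard Box2D filter rule: both checks must pass for two fixtures to collide.
boolean canCollide(short categoryA, short maskA, short categoryB, short maskB) {
    return (categoryA & maskB) != 0 && (categoryB & maskA) != 0;
}

With the values defined in step one, a red box (category 2, mask 1 + 2 = 3) tested against a green box (category 4, mask 1 + 4 = 5) gives (2 & 5) == 0, so red and green boxes pass straight through each other, while both still collide with default-category (1) fixtures such as walls.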


Android 3.0 Application Development: GPS, Locations, and Maps

Packt
22 Jul 2011
7 min read
Android 3.0 Application Development Cookbook: Design and develop rich smartphone and tablet applications for Android 3.0. Introduction For managing location-based information, Android provides the android.location package, which in turn gives us the LocationManager class that provides access to location-based functions such as the latitude and longitude of a device's position. Tracking a device over time is made equally convenient and the LocationListener class monitors changes in location as they occur. Listening for location changes is only a part of the story, as Google provides APIs for managing Google Maps data and displaying and manipulating maps through the use of the MapView and MapController classes. These powerful tools require us to sign up with Google first, and once done enable us to zoom in and out of maps, pan to any location that we are looking for, and when we want to, include application information on a map, and even add our own layers to maps and mark locations on a Google map. Detecting a device's location Android locations are expressed in terms of latitude and longitude coordinates. The default format is degrees. The Location object can also be used to store a time-stamp and other information such as speed and distance traveled. Although obtaining a device's last known location does not always yield the most accurate information, it is often the first reading that we may want. It is fast, simple to employ, and makes a good introduction to the LocationManager. This recipe requires fine location access, so add the following permission to the manifest file: <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" /> How to do it... Use the TextView provided in the main.xml file and give it a resource ID: android:id="@+id/text_view" Declare a TextView as a class-wide field in the Java activity code: TextView textView; Then, find it in the usual way, from within the onCreate() method: textView = (TextView) findViewById(R.id.text_view); Next, and still within onCreate(), declare and define our LocationManager: LocationManager manager = (LocationManager) getSystemService(Context.LOCATION_SERVICE); Then, to retrieve the last known location using GPS and display this in the text view, add these lines: Location loc = manager.getLastKnownLocation(LocationManager.GPS_PROVIDER); textView.setText("latitude: " + loc.getLatitude() + "\nlongitude: " + loc.getLongitude()); Run the code on a handset or emulator to obtain its location. How it works... The use of a LocationManager to obtain the device's last known location is very straightforward. As with other system services, we obtained it with getSystemService() and the getLastKnownLocation() method returns the Location object itself, which can be further queried to provide latitude and longitude coordinates. We could have done more with the Location object; for example, Location.getAltitude() will return altitude, and distanceTo(Location) and bearingTo(Location) will return the distance and bearing to another Location. It is possible to send mock locations to an emulator using the DDMS perspective in Eclipse. Before sending location data this way, make sure that you have set the emulator to allow mock locations under Settings | Applications | Development. It is worth noting that although use of the getLastKnownLocation() method may not always be accurate, particularly if the device has been switched off for some time, it does have the advantage of yielding almost immediate results. There's more... Using GPS to obtain a location has a couple of drawbacks. 
Firstly, it does not work indoors; and secondly, it is very demanding on the battery. Location can be determined by comparing cell tower signal strengths, and although this method is not as accurate, it works well indoors and is much more considerate to the device's battery. Obtaining a location with a network provider The network provider is set up in exactly the same way as the previous GPS example; simply exchange the Location declaration with: Location loc = manager.getLastKnownLocation(LocationManager.NETWORK_PROVIDER); You will also need to change, or amend, the permission in the manifest file with: <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" /> Listening for location changes Obtaining the last known location as we did in the previous recipe is all well and good and handy for retrieving a Location quickly, but it can be unreliable if the handset has been switched off or if the user is on the move. Ideally we want to be able to detect location changes as they happen and to do this we employ a LocationListener. In this recipe we will create a simple application that keeps track of a mobile device's movements. Getting ready This task can be performed most easily by starting where the previous one left off. If you have not completed that task yet, do so now—it is very short—then return here. If you have already completed the recipe then simply open it up to proceed. How to do it... First, move the declaration of our LocationManager so that it is a class-wide field, and declare our LocationListener alongside it so that it can be reached from the life cycle callbacks: LocationManager manager; LocationListener listener; In the main Java activity code, before the TextView.setText() call, add the following three lines: listener = new MyLocationListener(); manager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 30000, 50, listener); Location location = manager.getLastKnownLocation(LocationManager.GPS_PROVIDER); Now create an inner class called MyLocationListener that implements LocationListener: public class MyLocationListener implements LocationListener { } Eclipse will most likely insist that you add some unimplemented methods and you should do so. For now, only complete one of them, the onLocationChanged() callback: @Override public void onLocationChanged(Location l) { textView.setText("\n\nlatitude: " + l.getLatitude() + "\nlongitude: " + l.getLongitude()); } Leave the others as they are: @Override public void onProviderDisabled(String provider) {} @Override public void onProviderEnabled(String provider) {} @Override public void onStatusChanged(String provider, int status, Bundle extras) {} If you want to test this code on an emulator, then go right ahead. However, this code will create a serious drain on the battery of a handset, and it is wise to switch our listener off when it is not needed. Here we have used the activity's onPause() and onResume() functions to control this. You may wish to include these statements in any part of your activity's life cycle that suits your application's purpose: @Override protected void onResume() { super.onResume(); manager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 30000, 50, listener); } @Override protected void onPause() { super.onPause(); manager.removeUpdates(listener); } If you have not already tested this application, do so now. You will need to move around if you are testing it on a real device, or send mock locations to an emulator to see the code in action. How it works... 
In this recipe we used the LocationManager to provide location updates roughly every 30 seconds (30,000 milliseconds) or whenever the location changed by more than 50 meters. We say 'roughly' because these values work only as a guide and the actual frequency of updates often varies from the values we set. Nevertheless, setting these two parameters of the requestLocationUpdates() method to high values can make a big difference to the amount of battery power the GPS provider consumes. Hopefully the use of the provider and the LocationListener as the other two parameters is self-explanatory. The LocationListener operates very much as other listeners do, and the purpose of onProviderEnabled() and onProviderDisabled() should be clear. The onStatusChanged() method is called whenever a provider becomes unavailable after a period of availability or vice versa. Its int parameter, status, can represent 0 (OUT_OF_SERVICE), 1 (TEMPORARILY_UNAVAILABLE), or 2 (AVAILABLE).
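A practical pattern that ties the two providers together is to try GPS first and fall back to the network provider when GPS has nothing cached. This is a sketch rather than part of the original recipes, and it assumes the relevant location permissions are declared in the manifest:

Location loc = manager.getLastKnownLocation(LocationManager.GPS_PROVIDER);
if (loc == null) {
    // Fall back to the coarser, but indoor-friendly, cell/Wi-Fi based provider
    loc = manager.getLastKnownLocation(LocationManager.NETWORK_PROVIDER);
}
if (loc != null) {
    textView.setText("latitude: " + loc.getLatitude() + "\nlongitude: " + loc.getLongitude());
} else {
    textView.setText("No last known location available");
}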


New Connectivity APIs – Android Beam

Packt
18 Jan 2013
7 min read
(For more resources related to this topic, see here.) Android Beam Devices that have NFC hardware can share data by tapping them together. This can be done with the help of the Android Beam feature. It is similar to Bluetooth, as we get seamless discovery and pairing as in a Bluetooth connection. Devices connect when they are close to each other (not more than a few centimeters). Users can share pictures, videos, contacts, and so on, using the Android Beam feature. Beaming NdefMessages In this section, we are going to implement a simple Android Beam application. This application will send an image to another device when two devices are tapped together. There are three methods that are introduced with Android Ice Cream Sandwich that are used in sending NdefMessages. These methods are as follows: setNdefPushMessage(): This method takes an NdefMessage as a parameter and sends it to another device automatically when devices are tapped together. This is commonly used when the message is static and doesn't change. setNdefPushMessageCallback(): This method is used for creating dynamic NdefMessages. When two devices are tapped together, the createNdefMessage() method is called. setOnNdefPushCompleteCallback(): This method sets a callback which is called when the Android Beam is successful. We are going to use the second method in our sample application. Our sample application's user interface will contain a TextView component for displaying text messages and an ImageView component for displaying the received images sent from another device. The layout XML code will be as follows:

<RelativeLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent" >
    <TextView
        android:id="@+id/textView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerHorizontal="true"
        android:layout_centerVertical="true"
        android:text="" />
    <ImageView
        android:id="@+id/imageView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@+id/textView"
        android:layout_centerHorizontal="true"
        android:layout_marginTop="14dp" />
</RelativeLayout>

Now, we are going to implement, step-by-step, the Activity class of the sample application. The code of the Activity class with the onCreate() method is as follows:

public class Chapter9Activity extends Activity implements CreateNdefMessageCallback {
    NfcAdapter mNfcAdapter;
    TextView mInfoText;
    ImageView imageView;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        imageView = (ImageView) findViewById(R.id.imageView);
        mInfoText = (TextView) findViewById(R.id.textView);
        // Check for available NFC Adapter
        mNfcAdapter = NfcAdapter.getDefaultAdapter(getApplicationContext());
        if (mNfcAdapter == null) {
            mInfoText.setText("NFC is not available on this device.");
            finish();
            return;
        }
        // Register callback to set NDEF message
        mNfcAdapter.setNdefPushMessageCallback(this, this);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        getMenuInflater().inflate(R.menu.main, menu);
        return true;
    }
}

As you can see in this code, we check whether the device provides an NfcAdapter. If it does, we get an instance of NfcAdapter. Then, we call the setNdefPushMessageCallback() method to set the callback using the NfcAdapter instance. 
We send the Activity class as a callback parameter because the Activity class implements CreateNdefMessageCallback. In order to implement CreateNdefMessageCallback, we should override the createNdefMessage() method as shown in the following code block:

@Override
public NdefMessage createNdefMessage(NfcEvent arg0) {
    Bitmap icon = BitmapFactory.decodeResource(this.getResources(), R.drawable.ic_launcher);
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    icon.compress(Bitmap.CompressFormat.PNG, 100, stream);
    byte[] byteArray = stream.toByteArray();
    NdefMessage msg = new NdefMessage(new NdefRecord[] {
        createMimeRecord("application/com.chapter9", byteArray),
        NdefRecord.createApplicationRecord("com.chapter9") });
    return msg;
}

public NdefRecord createMimeRecord(String mimeType, byte[] payload) {
    byte[] mimeBytes = mimeType.getBytes(Charset.forName("US-ASCII"));
    NdefRecord mimeRecord = new NdefRecord(NdefRecord.TNF_MIME_MEDIA, mimeBytes, new byte[0], payload);
    return mimeRecord;
}

As you can see in this code, we get a drawable, convert it to a bitmap, and then to a byte array. Then we create an NdefMessage with two NdefRecords. The first record contains the MIME type and the byte array. The first record is created by the createMimeRecord() method. The second record contains the Android Application Record (AAR). The Android Application Record was introduced with Android Ice Cream Sandwich. This record contains the package name of the application and increases the certainty that your application will start when an NFC Tag is scanned. That is, the system firstly tries to match the intent filter and AAR together to start the activity. If they don't match, the activity that matches the AAR is started. When the activity is started by an Android Beam event, we need to handle the message that is sent by the Android Beam. We handle this message in the onResume() method of the Activity class as shown in the following code block:

@Override
public void onResume() {
    super.onResume();
    // Check to see that the Activity started due to an Android Beam
    if (NfcAdapter.ACTION_NDEF_DISCOVERED.equals(getIntent().getAction())) {
        processIntent(getIntent());
    }
}

@Override
public void onNewIntent(Intent intent) {
    // onResume gets called after this to handle the intent
    setIntent(intent);
}

void processIntent(Intent intent) {
    Parcelable[] rawMsgs = intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
    // only one message sent during the beam
    NdefMessage msg = (NdefMessage) rawMsgs[0];
    // record 0 contains the MIME type, record 1 is the AAR
    byte[] bytes = msg.getRecords()[0].getPayload();
    Bitmap bmp = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    imageView.setImageBitmap(bmp);
}

As you can see in this code, we firstly check whether the intent is ACTION_NDEF_DISCOVERED. This means the Activity class is started due to an Android Beam. If it is started due to an Android Beam, we process the intent with the processIntent() method. We firstly get the NdefMessage from the intent. Then we get the first record and convert the byte array in the first record to a bitmap using BitmapFactory. Remember that the second record is the AAR; we do nothing with it. Finally, we set the bitmap of the ImageView component. 
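The third method listed earlier, setOnNdefPushCompleteCallback(), is not used by this sample. For completeness, here is a minimal sketch of wiring it up; the status text is illustrative, and since onNdefPushComplete() is not called on the UI thread, any view update is posted back with runOnUiThread():

mNfcAdapter.setOnNdefPushCompleteCallback(new NfcAdapter.OnNdefPushCompleteCallback() {
    @Override
    public void onNdefPushComplete(NfcEvent event) {
        // Called on a binder thread; switch to the UI thread before touching views
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                mInfoText.setText("Beam complete");
            }
        });
    }
}, this);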
The AndroidManifest.xml file of the application should be as follows:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.chapter9"
    android:versionCode="1"
    android:versionName="1.0" >
    <uses-permission android:name="android.permission.NFC" />
    <uses-feature android:name="android.hardware.nfc" android:required="false" />
    <uses-sdk android:minSdkVersion="14" android:targetSdkVersion="15" />
    <application
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >
        <activity
            android:name=".Chapter9Activity"
            android:label="@string/title_activity_chapter9" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
            <intent-filter>
                <action android:name="android.nfc.action.NDEF_DISCOVERED" />
                <category android:name="android.intent.category.DEFAULT" />
                <data android:mimeType="application/com.chapter9" />
            </intent-filter>
        </activity>
    </application>
</manifest>

As you can see in this code, we need to set the minimum SDK to API Level 14 or more in the AndroidManifest.xml file because these APIs are available in API Level 14 or more. Furthermore, we need to set the permissions to use NFC. We also set the uses-feature element in AndroidManifest.xml. The feature is set as not required. This means that our application will be available even for devices that don't have NFC support. Finally, we create an intent filter for android.nfc.action.NDEF_DISCOVERED with the mimeType of application/com.chapter9. When a device sends an image using our sample application, the received image is displayed on the screen. Summary In this article, we firstly learned the Android Beam feature of Android. With this feature, devices can send data using the NFC hardware. We implemented a sample Android Beam application and learned how to use the Android Beam APIs. Resources for Article: Further resources on this subject: Android 3.0 Application Development: Multimedia Management [Article] Animating Properties and Tweening Pages in Android 3-0 [Article] Android User Interface Development: Animating Widgets and Layouts [Article]

Manifest Assurance: Security and Android Permissions for Flash

Packt
29 Jun 2011
8 min read
Setting application permissions with the Android Manifest file When users choose to install an application on Android, they are always presented with a warning about which permissions the application will have within their particular system. From Internet access to full Geolocation, Camera, or External Storage permissions, the user is explicitly told what rights the application will have on their system. If it seems as though the application is asking for more permissions than necessary, the user will usually refuse the install and look for another application to perform the task they need. It is very important to only require the permissions your application truly needs, or else users might be suspicious of you and the applications you make available. How to do it... There are three ways in which we can modify the Android Manifest file to set application permissions for compiling our application with Adobe AIR. Using Flash Professional: Within an AIR for Android project, open the Properties panel and click the little wrench icon next to Player selection. The AIR for Android Settings dialog window will appear. You will be presented with a list of permissions to either enable or disable for your application. Check only the ones your application will need and click OK when finished. Using Flash Builder: When first setting up your AIR for Android project in Flash Builder, define everything required in the Project Location area, and click Next. You are now in the Mobile Settings area of the New Flex Mobile Project dialog. Click the Permissions tab, making sure that Google Android is the selected platform. You will be presented with a list of permissions to either enable or disable for your application. Check only the ones your application will need and continue along with your project setup. To modify any of these permissions after you've begun developing the application, simply open the AIR descriptor file and edit it as is detailed in the following sections. Using a simple text editor: Find the AIR Descriptor File in your project. It is normally named something like {MyProject}-app.xml as it resides at the project root. Browse the file for a node named <android>; within this node will be another called <manifestAdditions>, which holds a child node called <manifest>. This section of the document contains everything we need to set permissions for our Android application. All we need to do is either comment out or remove those particular permissions that our application does not require. For instance, this application needs Internet, External Storage, and Camera access. Every other permission node is commented out using the standard XML comment syntax of <!-- <comment here> -->:

<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<!--<uses-permission android:name="android.permission.READ_PHONE_STATE"/>-->
<!--<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>-->
<!--<uses-permission android:name="android.permission.DISABLE_KEYGUARD"/>-->
<!--<uses-permission android:name="android.permission.WAKE_LOCK"/>-->
<uses-permission android:name="android.permission.CAMERA"/>
<!--<uses-permission android:name="android.permission.RECORD_AUDIO"/>-->
<!--<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>-->
<!--<uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/>-->

How it works... 
The permissions you define within the AIR descriptor file will be used to create an Android Manifest file to be packaged within the .apk produced by the tool used to compile the project. These permissions restrict and enable the application, once installed on a user's device, and also alert the user as to which activities and resources the application will be given access to prior to installation. It is very important to provide only the permissions necessary for an application to perform the expected tasks once installed upon a device. The following is a list of the possible permissions for the Android manifest document: ACCESS_COARSE_LOCATION: Allows the Geolocation class to access WIFI and triangulated cell tower location data. ACCESS_FINE_LOCATION: Allows the Geolocation class to make use of the device GPS sensor. ACCESS_NETWORK_STATE: Allows an application to access the network state through the NetworkInfo class. ACCESS_WIFI_STATE: Allows an application to access the WIFI state through the NetworkInfo class. CAMERA: Allows an application to access the device camera. INTERNET: Allows the application to access the Internet and perform data transfer requests. READ_PHONE_STATE: Allows the application to mute audio when a phone call is in effect. RECORD_AUDIO: Allows microphone access to the application to record or monitor audio data. WAKE_LOCK: Allows the application to prevent the device from going to sleep using the SystemIdleMode class. (Must be used alongside DISABLE_KEYGUARD.) DISABLE_KEYGUARD: Allows the application to prevent the device from going to sleep using the SystemIdleMode class. (Must be used alongside WAKE_LOCK.) WRITE_EXTERNAL_STORAGE: Allows the application to write to external memory. This memory is normally stored as a device SD card. Preventing the device screen from dimming The Android operating system will dim, and eventually turn off, the device screen after a certain amount of time has passed. It does this to preserve battery life, as the display is the primary power drain on a device. For most applications, if a user is interacting with the interface, that interaction will prevent the screen from dimming. However, if your application does not involve user interaction for lengthy periods of time, yet the user is looking at or reading something upon the display, it would make sense to prevent the screen from dimming. How to do it... There are two settings in the AIR descriptor file that can be changed to ensure the screen does not dim. We will also modify properties of our application to complete this recipe: Find the AIR descriptor file in your project. It is normally named something like {MyProject}-app.xml as it resides at the project root. Browse the file for a node named <android>; within this node will be another called <manifestAdditions>, which holds a child node called <manifest>. This section of the document contains everything we need to set permissions for our Android application. All we need to do is make sure the following two nodes are present within this section of the descriptor file. Note that enabling both of these permissions is required to allow application control over the system through the SystemIdleMode class. Uncomment them if necessary. 
<uses-permission android:name="android.permission.WAKE_LOCK" /> <uses-permission android:name="android.permission.DISABLE_KEYGUARD" /> Within our application, we will import the following classes: import flash.desktop.NativeApplication; import flash.desktop.SystemIdleMode; import flash.display.Sprite; import flash.display.StageAlign; import flash.display.StageScaleMode; import flash.text.TextField; import flash.text.TextFormat; Declare a TextField and TextFormat pair to trace out messages to the user: private var traceField:TextField; private var traceFormat:TextFormat; Now, we will set the system idle mode for our application by assigning the SystemIdleMode.KEEP_AWAKE constant to the NativeApplication.nativeApplication.systemIdleMode property: protected function setIdleMode():void { NativeApplication.nativeApplication.systemIdleMode = SystemIdleMode.KEEP_AWAKE; } We will, at this point, continue to set up our TextField, apply a TextFormat, and add it to the DisplayList. Here, we create a method to perform all of these actions for us: protected function setupTraceField():void { traceFormat = new TextFormat(); traceFormat.bold = true; traceFormat.font = "_sans"; traceFormat.size = 24; traceFormat.align = "left"; traceFormat.color = 0xCCCCCC; traceField = new TextField(); traceField.defaultTextFormat = traceFormat; traceField.selectable = false; traceField.multiline = true; traceField.wordWrap = true; traceField.mouseEnabled = false; traceField.x = 20; traceField.y = 20; traceField.width = stage.stageWidth-40; traceField.height = stage.stageHeight - traceField.y; addChild(traceField); } Here, we simply output the currently assigned system idle mode String to our TextField, letting the user know that the device will not be going to sleep: protected function checkIdleMode():void { traceField.text = "System Idle Mode: " + NativeApplication.nativeApplication.systemIdleMode; } When the application is run on a device, the System Idle Mode will be set and the results traced out to our display. The user can leave the device unattended for as long as necessary and the screen will not dim or lock. In the following example, this application was allowed to run for five minutes without user intervention. How it works... There are two things that must be done in order to get this to work correctly and both are absolutely necessary. First, we have to be sure the application has correct permissions through the Android Manifest file. Allowing the application permissions for WAKE_LOCK and DISABLE_KEYGUARD within the AIR descriptor file will do this for us. The second part involves setting the NativeApplication.systemIdleMode property to keepAwake. This is best accomplished through use of the SystemIdleMode.KEEP_AWAKE constant. Ensuring that these conditions are met will enable the application to keep the device display lit and prevent Android from locking the device after it has been idle.


Writing Tag Content

Packt
10 Jun 2014
9 min read
(For more resources related to this topic, see here.) The NDEF message is composed of one or more NDEF records, and each record contains the payload and a header in which the data length, type, and identifier are stored. In this article, we will create some examples that will allow us to work with the NDEF standard and start writing NFC applications. Working with the NDEF record Android provides an easy way to read and write data when it is formatted as per the NDEF standard. This format is the easiest way for us to work with tag data because it saves us from performing lots of operations and processes of reading and writing raw bytes. So, unless we need to get our hands dirty and write our own protocol, this is the way to go (you can still build it on top of NDEF and achieve a custom, yet standard-based protocol). Getting ready Make sure you have a working Android development environment. If you don't, ADT Bundle is a good environment to start with (you can access it by navigating to http://developer.android.com/sdk/index.html). Make sure you have an NFC-enabled Android device or a virtual test environment. It will be assumed that Eclipse is the development IDE. How to do it... We are going to create an application that writes any NDEF record to a tag by performing the following steps: Open Eclipse and create a new Android application project named NfcBookCh3Example1 with the package name nfcbook.ch3.example1. Make sure the AndroidManifest.xml file is correctly configured. Open the MainActivity.java file located under com.nfcbook.ch3.example1 and add the following class member: private NfcAdapter nfcAdapter; Implement the enableForegroundDispatch() method, filtering tags by using Ndef and NdefFormatable, and invoke it in the onResume() method: private void enableForegroundDispatch() { Intent intent = new Intent(this, MainActivity.class).addFlags(Intent.FLAG_RECEIVER_REPLACE_PENDING); PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, intent, 0); IntentFilter[] intentFilter = new IntentFilter[] {}; String[][] techList = new String[][] { { android.nfc.tech.Ndef.class.getName() }, { android.nfc.tech.NdefFormatable.class.getName() } }; if ( Build.DEVICE.matches(".*generic.*") ) { //clean up the tech filter when in emulator since it doesn't work properly. techList = null; } nfcAdapter.enableForegroundDispatch(this, pendingIntent, intentFilter, techList); } Instantiate the nfcAdapter class field in the onCreate method: protected void onCreate(Bundle savedInstanceState) { ... 
nfcAdapter = NfcAdapter.getDefaultAdapter(this); } Implement the formatTag method: private boolean formatTag(Tag tag, NdefMessage ndefMessage) { try { NdefFormatable ndefFormat = NdefFormatable.get(tag); if (ndefFormat != null) { ndefFormat.connect(); ndefFormat.format(ndefMessage); ndefFormat.close(); return true; } } catch (Exception e) { Log.e("formatTag", e.getMessage()); } return false; } Implement the writeNdefMessage method: private boolean writeNdefMessage(Tag tag, NdefMessage ndefMessage) { try { if (tag != null) { Ndef ndef = Ndef.get(tag); if (ndef == null) { return formatTag(tag, ndefMessage); } else { ndef.connect(); if (ndef.isWritable()) { ndef.writeNdefMessage(ndefMessage); ndef.close(); return true; } ndef.close(); } } } catch (Exception e) { Log.e("formatTag", e.getMessage()); } return false; } Implement the isNfcIntent method: boolean isNfcIntent(Intent intent) { return intent.hasExtra(NfcAdapter.EXTRA_TAG); } Override the onNewIntent method using the following code: @Override protected void onNewIntent(Intent intent) { try { if (isNfcIntent(intent)) { NdefRecord ndefEmptyRecord = new NdefRecord(NdefRecord.TNF_EMPTY, new byte[]{}, new byte[]{}, new byte[]{}); NdefMessage ndefMessage = new NdefMessage(new NdefRecord[] { ndefEmptyRecord }); Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG); if (writeNdefMessage(tag, ndefMessage)) { Toast.makeText(this, "Tag written!", Toast.LENGTH_SHORT).show(); } else { Toast.makeText(this, "Failed to write tag", Toast.LENGTH_SHORT).show(); } } } catch (Exception e) { Log.e("onNewIntent", e.getMessage()); } } Override the onPause method and insert the following code: @Override protected void onPause() { super.onPause(); nfcAdapter.disableForegroundDispatch(this); } Open the NFC Simulator tool and simulate a few tags. The tag should be marked as modified, as shown in the following screenshot: In the NFC Simulator, a tag name that ends with _LOCKED is not writable, so we won't be able to write any content to the tag and, therefore, a Failed to write tag toast will appear. How it works... NFC intents carry an extra value with them that is a virtual representation of the tag and can be obtained using the NfcAdapter.EXTRA_TAG key. We can get information about the tag, such as the tag ID and its content type, through this object. In the onNewIntent method, we retrieve the tag instance and then use other classes provided by Android to easily read, write, and retrieve even more information about the tag. These classes are as follows: android.nfc.tech.Ndef: This class provides methods to retrieve and modify the NdefMessage object on a tag android.nfc.tech.NdefFormatable: This class provides methods to format tags that are capable of being formatted as NDEF The first thing we need to do while writing a tag is to call the get(Tag tag) method from the Ndef class, which will return an instance of the same class. Then, we need to open a connection with the tag by calling the connect() method. With an open connection, we can now write a NDEF message to the tag by calling the writeNdefMessage(NdefMessage msg) method. Checking whether the tag is writable or not is always a good practice to prevent unwanted exceptions. We can do this by calling the isWritable() method. Note that this method may not account for physical write protection. When everything is done, we call the close() method to release the previously opened connection. 
If the get(Tag tag) method returns null, it means that the tag is not formatted as per the NDEF format, and we should try to format it correctly. For formatting a tag with the NDEF format, we use the NdefFormatable class in the same way as we did with the Ndef class. However, in this case, we want to format the tag and write a message. This is achieved by calling the format(NdefMessage firstMessage) method. So, we should call the get(Tag tag) method, then open a connection by calling connect(), format the tag and write the message by calling the format(NdefMessage firstMessage) method, and finally close the connection with the close() method. If the get(Tag tag) method returns null here as well, it means that the Android NFC API cannot automatically format the tag to NDEF. An NDEF message is composed of several NDEF records. Each of these records is composed of four key properties: Type Name Format (TNF): This property defines how the type field should be interpreted. Record Type Definition (RTD): This property is used together with the TNF to help Android create the correct NDEF message and trigger the corresponding intent. Id: This property lets you define a custom identifier for the record. Payload: This property contains the content that will be transported in the record. Using the combinations between the TNF and the RTD, we can create several different NDEF records to hold our data and even create our custom types. In this recipe, we created an empty record. The main TNF property values of a record are as follows: - TNF_ABSOLUTE_URI: This is a URI-type field - TNF_WELL_KNOWN: This is an NFC Forum-defined URN - TNF_EXTERNAL_TYPE: This is a URN-type field - TNF_MIME_MEDIA: This is a MIME type based on the type specified The main RTD property values of a record are: - RTD_URI: This is the URI based on the payload - RTD_TEXT: This is the NFC Forum-defined record type Writing a URI-formatted record URI is probably the most common content written to NFC tags. It allows you to share a website, an online service, or a link to online content. This can be used, for example, in advertising and marketing. How to do it... We are going to create an application that writes URI records to a tag by performing the following steps. The URI will be hardcoded and will point to the Packt Publishing website. Open Eclipse and create a new Android application project named NfcBookCh3Example2. Make sure the AndroidManifest.xml file is configured correctly. Set the minimum SDK version to 14: <uses-sdk android:minSdkVersion="14" /> Implement the enableForegroundDispatch, isNfcIntent, formatTag, and writeNdefMessage methods from the previous recipe—steps 2, 4, 6, and 7. Add the following class member and instantiate it in the onCreate method: private NfcAdapter nfcAdapter; protected void onCreate(Bundle savedInstanceState) { ... 
nfcAdapter = NfcAdapter.getDefaultAdapter(this); } Override the onNewIntent method and place the following code: @Override protected void onNewIntent(Intent intent) { try { if (isNfcIntent(intent)) { NdefRecord uriRecord = NdefRecord.createUri("http://www.packtpub.com"); NdefMessage ndefMessage = new NdefMessage(new NdefRecord[] { uriRecord }); Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG); boolean writeResult = writeNdefMessage(tag, ndefMessage); if (writeResult) { Toast.makeText(this, "Tag written!", Toast.LENGTH_SHORT).show(); } else { Toast.makeText(this, "Tag write failed!", Toast.LENGTH_SHORT).show(); } } } catch (Exception e) { Log.e("onNewIntent", e.getMessage()); } super.onNewIntent(intent); } Run the application. Tap a tag on your phone or simulate a tap in the NFC Simulator. A Tag written! toast should appear. How it works... URIs are perfect content for NFC tags because with a relatively small amount of content, we can send users to richer and more complete resources. These types of records are the easiest to create, and this is done by calling the NdefRecord.createUri method and passing the URI as the first parameter. URIs are not necessarily URLs for a website. We can use other URIs that are quite well known in Android, such as the following: - tel:+000 000 000 000 - sms:+000 000 000 000 If we write the tel: URI, the user will be prompted to initiate a phone call. We can always create a URI record the hard way without using the createUri method: public NdefRecord createUriRecord(String uri) { NdefRecord rtdUriRecord = null; try { byte[] uriField; uriField = uri.getBytes("UTF-8"); byte[] payload = new byte[uriField.length + 1]; //+1 for the URI prefix payload[0] = 0x00; //prefixes the URI System.arraycopy(uriField, 0, payload, 1, uriField.length); rtdUriRecord = new NdefRecord(NdefRecord.TNF_WELL_KNOWN, NdefRecord.RTD_URI, new byte[0], payload); } catch (UnsupportedEncodingException e) { Log.e("createUriRecord", e.getMessage()); } return rtdUriRecord; } The first byte in the payload indicates which prefix should be used with the URI. This way, we don't need to write the whole URI in the tag, which saves some tag space. The following list describes the recognized prefixes:
0x00 - No prepending is done
0x01 - http://www.
0x02 - https://www.
0x03 - http://
0x04 - https://
0x05 - tel:
0x06 - mailto:
0x07 - ftp://anonymous:anonymous@
0x08 - ftp://ftp.
0x09 - ftps://
0x0A - sftp://
0x0B - smb://
0x0C - nfs://
0x0D - ftp://
0x0E - dav://
0x0F - news:
0x10 - telnet://
0x11 - imap:
0x12 - rtsp://
0x13 - urn:
0x14 - pop:
0x15 - sip:
0x16 - sips:
0x17 - tftp:
0x18 - btspp://
0x19 - btl2cap://
0x1A - btgoep://
0x1B - tcpobex://
0x1C - irdaobex://
0x1D - file://
0x1E - urn:epc:id:
0x1F - urn:epc:tag:
0x20 - urn:epc:pat:
0x21 - urn:epc:raw:
0x22 - urn:epc:
0x23 - urn:nfc:
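Two further illustrations of these APIs follow; both are sketches built on the material above, not code from the recipes. First, the prefix byte: setting it to 0x05 lets a telephone URI omit its tel: scheme entirely (the phone number below is a placeholder):

byte[] number = "+000000000000".getBytes(Charset.forName("UTF-8"));
byte[] payload = new byte[number.length + 1];
payload[0] = 0x05; // the reading device prepends "tel:" to the rest of the payload
System.arraycopy(number, 0, payload, 1, number.length);
NdefRecord telRecord = new NdefRecord(NdefRecord.TNF_WELL_KNOWN, NdefRecord.RTD_URI, new byte[0], payload);

Second, the RTD_TEXT type from the earlier list: a TNF_WELL_KNOWN record typed as RTD_TEXT carries plain text, laid out per the NFC Forum text record format as one status byte, then the language code, then the text:

public NdefRecord createTextRecord(String text) {
    byte[] language = "en".getBytes(Charset.forName("US-ASCII"));
    byte[] content = text.getBytes(Charset.forName("UTF-8"));
    byte[] payload = new byte[1 + language.length + content.length];
    payload[0] = (byte) language.length; // status byte: UTF-8 flag clear, language code length in the low bits
    System.arraycopy(language, 0, payload, 1, language.length);
    System.arraycopy(content, 0, payload, 1 + language.length, content.length);
    return new NdefRecord(NdefRecord.TNF_WELL_KNOWN, NdefRecord.RTD_TEXT, new byte[0], payload);
}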


Creating a sample application (Simple)

Packt
04 Sep 2013
8 min read
(For more resources related to this topic, see here.) How to do it... To create an application, include the JavaScript and CSS files in your page. Perform the following steps: Create an HTML document, index.html, under your project directory. Please note that this directory should be placed in the web root of your web server. Create the directories styles and scripts under your project directory. Copy the CSS file kendo.mobile.all.min.css, from <downloaded directory>/styles to the styles directory created in step 2. Then add a reference to the CSS file in the head section of the document. Download the jQuery library from jQuery.com. Place this file in the scripts directory and add a reference to this file in the document before closing the body tag. You can also specify the CDN location of the file in the document. Copy the JavaScript file kendo.mobile.min.js, from <downloaded directory>/js to the scripts directory created in step 2. Then add a reference to this JavaScript file in the document (after jQuery). Add the text "Hello Kendo!!" in the body tag of the index.html file as follows: <!DOCTYPE HTML> <html> <head> <title>My first Kendo Mobile Application</title> <link rel="stylesheet" type="text/css" href="styles/kendo.mobile.all.min.css"> </head> <body> Hello Kendo!! <script type="text/javascript" src="scripts/jquery.min.js"></script> <script type="text/javascript" src="scripts/kendo.mobile.min.js"></script> </body> </html> The preceding code snippet is a simple HTML page with references to Kendo Mobile CSS and JavaScript files. These files are minified and contain all the features, themes, and widgets. In production, you would include only those files that are required. The downloaded ZIP file includes CSS and JavaScript files for specific features. However, in development you can use the minified files that contain all features. Another thing to note is that apart from the reference to the script kendo.mobile.min.js, the page also includes a reference to jQuery. It is the only external dependency for Kendo UI. When you view this page on a mobile device, you will see the text Hello Kendo!! shown. This page does not include any of the widgets that come as a part of the library. Now let's build on top of our Hello World application and add some visual elements; that is, UI widgets to the page. This can be done with the following steps: Add a layout first. A mobile application generally has a header, a footer, and multiple views. It is also observed that while navigating through different views in an application, the header and footer remain constant. The framework allows you to define a global layout that may contain a header and a footer for all the views in the application. Also, the framework allows you to define multiple views that can share the same layout. The following is the same page that now includes a header and footer defined in the layout: <body> <div data-role="layout" data-id="defaultLayout"> <header data-role="header"> <div data-role="navbar"> My first application </div> </header> <footer data-role="footer"> <div data-role="tabstrip"> <a data-icon="about">About</a> <a data-icon="settings">Settings</a> </div> </footer> </div> </body> The body contains a few div tags with data attributes. Let's look into one of these tags in detail. <div data-role="layout" data-id="defaultLayout"> Here, the div tag contains two data attributes, role and id. The role data attribute is used to initialize and configure a widget. 
The data-role attribute has a value, layout, identifying the target element as a layout widget. All the widgets are expected to have a role data attribute that helps in marking the target element for a specific purpose. It instructs the library which widget needs to be added to the page. The id data attribute is used to identify the widget (the layout widget) in the page. A page may define several layout widgets and each one of these must be identified by a unique ID. Here, the data-id attribute has defaultLayout as its value. Now there can be many views referring to this layout by its id. Similarly, there are other elements in the page with the data-role attribute, defining them as one of the widgets in the page. Let's take a look at the header and footer widgets defined inside the layout. <header data-role="header">... </header><footer data-role="footer">...</footer> The header and footer tags have the role data attribute set to header and footer respectively. This aligns them to the top and bottom of the page, giving the rest of the available space for different views to render. Also, note that there is a navbar widget in the header and a tabstrip widget defined in the footer. As mentioned earlier, the framework comes with several widgets that can help you build the application rapidly. Now add views to the page. The index.html page now has a layout defined and when you run the page in the browser, you will see an error message in the console which says: Uncaught Error: Your kendo mobile application element does not contain any direct child elements with data-role="view" attribute set. Make sure that you instantiate the mobile application using the correct container. Views represent the actual content that has to be displayed between the header and the footer that we defined while creating a layout. A layout cannot exist without a view and hence you see that error message in the console. To fix this error, you need to define a view for your mobile application. Add the following to your index.html page: <div data-role="view" data-layout="defaultLayout"> Hello Kendo!!</div> As mentioned earlier, every widget needs to have a role data attribute to identify itself as a particular widget in the page. Here, the target element is defined as a view widget and tied to the layout by defining the data-layout attribute. The data-layout attribute has a value defaultLayout that is the same as the value for the data-id attribute of the layout that we defined earlier. This attaches the view to the layout and you will not see the error message anymore. Similarly, you can have multiple views defined in the page that can make use of the same layout. Now, there's only one pending task for the application to start working: initializing the application. A Kendo Mobile application can be initialized using the Application object. To do that, add the following code to the page: <script> var app = new kendo.mobile.Application(); </script> Include the previous script block right after the references to jQuery and Kendo Mobile and before closing the body tag. This single line of JavaScript code will initialize your Kendo Mobile application and all the widgets with the data-role attribute. The Application object is used for many other purposes. How it works... When you run the index.html page in a browser, you will see a navbar and a tabstrip in the header and footer of the page. Also, the message Hello Kendo!! is shown in the body of the page. 
The following screenshot shows how it will look when you view the page on an iPhone: If you have noticed, this looks like a native iOS application. The framework has the capability to render the application so that it looks like a native application on a device. When you view the same page on an Android device, it will look like a native Android application, as shown in the following screenshot: The framework identifies the platform on which the mobile application is being run and then provides a native look and feel to the application. There are ways in which you can customize this behavior. Summary Creating a sample application (Simple) got us started with the Kendo UI Mobile framework and showed us how to build a sample application using the same. We also saw some of the Mobile UI widgets, such as layouts, views, navbar, and tabstrip, in brief. Resources for Article: Further resources on this subject: Working with remote data [Article] The Decider: External APIs [Article] Constructing and Evaluating Your Design Solution [Article]

Android 3.0 Application Development: Managing Menus

Packt
26 Jul 2011
7 min read
Android 3.0 Application Development Cookbook. All Android handsets have a hard menu key for calling up secondary choices that do not need to be made available from the main screen or perhaps need to be made available across an application. In concord with Android's philosophy of separating appearance from function, menus are generally created in the same way as other visual elements, that is, with the use of a definitive XML layout file. There is a lot that can be done to control menus dynamically and Android provides classes and interfaces for displaying context-sensitive menus, organizing menu items into groups, and including shortcuts. Creating and inflating an options menu To keep our application code separate from our menu layout information, Android uses a designated resource folder (res/menu) and an XML layout file to define the physical appearance of our menu, such as the titles and icons we see in Android pop-up menus. The Activity class contains a callback method, onCreateOptionsMenu(), that can be overridden to inflate a menu. Getting ready Android menus are defined in a specific, designated folder. Eclipse does not create this folder by default, so start up a new project and add a new folder inside the res folder and call it menu. How to do it... Create a new XML file in our new res/menu folder and call it my_menu.xml. Complete the new file as follows: <?xml version="1.0" encoding="utf-8"?> <menu xmlns:android="http://schemas.android.com/apk/res/android"> <item android:id="@+id/item_one" android:title="first item" /> <item android:id="@+id/item_two" android:title="second item" /> </menu> In the Java application file, include the following overridden callback: @Override public boolean onCreateOptionsMenu(Menu menu) { MenuInflater inflater = getMenuInflater(); inflater.inflate(R.menu.my_menu, menu); return true; } Run the application on a handset or emulator and press the hard menu key to view the menu. How it works... Whenever we create an Android menu using XML we must place it in the folder we used here (res/menu). Likewise, the base node of our XML structure must be <menu>. The purpose of the id element should be self-explanatory, and the title attribute is used to set the text that the user sees when the menu item is inflated. The MenuInflater object is a straightforward way of turning an XML layout file into a Java object. We create a MenuInflater with getMenuInflater(), which returns a MenuInflater from the current activity, of which it is a member. The inflate() call takes both the XML file and the equivalent Java object as its parameters. There's more... The type of menu we created here is referred to as an options menu and it comes in two flavors depending on how many items it contains. There is also a neater way to handle item titles when they are too long to be completely displayed, and a way to respond to item selections is sketched below. Handling longer options menus When an options menu has six or fewer items it appears as a block of items at the bottom of the screen. This is called the icon menu and is, as its name suggests, the only menu type capable of displaying icons. On tablets running API level 11 or greater, the Action bar can also be used to access the menu. The icon menu is also the only menu type that cannot display radio buttons or check marks. When an inflated options menu has more than six items, the sixth place on the icon menu is replaced by the system's own More item, which when pressed calls up the extended menu that displays all items from the sixth onwards, adding a scroll bar if necessary. 
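Responding to menu selections Inflating the menu only displays it. To react when the user taps an item, override the Activity's onOptionsItemSelected() callback, as in this minimal sketch (the toast messages are illustrative):

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
    case R.id.item_one:
        Toast.makeText(this, "first item selected", Toast.LENGTH_SHORT).show();
        return true;
    case R.id.item_two:
        Toast.makeText(this, "second item selected", Toast.LENGTH_SHORT).show();
        return true;
    default:
        // Let the system handle anything we did not recognize
        return super.onOptionsItemSelected(item);
    }
}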
Providing condensed menu titles

If Android cannot fit an item's title text into the space provided (often as little as one third of the screen width) it will simply truncate it. To provide a more readable alternative, include the android:titleCondensed="string" attribute alongside android:title in the item definition.

Adding options menu items to the Action Bar

For tablet devices targeting Android 3.0 or greater, options menu items can be added to the Action Bar. Adjust the target build of the above project to API level 11 or above and replace the res/menu/my_menu.xml file with the following:

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@+id/item_one"
        android:title="first item"
        android:icon="@drawable/icon"
        android:showAsAction="ifRoom" />
    <item android:id="@+id/item_two"
        android:title="second item"
        android:icon="@drawable/icon"
        android:showAsAction="ifRoom|withText" />
    <item android:id="@+id/item_three"
        android:title="third item"
        android:icon="@drawable/icon"
        android:showAsAction="always" />
    <item android:id="@+id/item_four"
        android:title="fourth item"
        android:icon="@drawable/icon"
        android:showAsAction="never" />
</menu>

Note from the output that unless the withText flag is included, the menu item will display only as an icon.

Designing Android compliant menu icons

The menu items we defined in the previous recipe had only text titles to identify them to the user; however, nearly all icon menus that we see on Android devices combine a text title with an icon. Although it is perfectly possible to use any graphic image as a menu icon, using images that do not conform to Android's own guidelines on icon design is strongly discouraged, and Android's own development team is particularly insistent that only the prescribed color palette and effects are used. This is so that these built-in menus, which are universal across Android applications, provide a consistent experience for the user. Here we examine the colors and dimensions prescribed, and also look at how to provide the resulting images as system resources in such a way as to cater for a variety of screen densities.

Getting ready

The little application we put together in the last recipe makes a good starting point for this one. Most of the information here concerns the design of the icons, so you may want to have a graphics editor such as GIMP or Photoshop open, or you may want to refer back here later for the exact dimensions and palettes.

How to do it...

1. Open the res/menu/my_menu.xml file and add the android:icon attributes seen here to each item:

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@+id/item_one"
        android:icon="@drawable/my_menu_icon"
        android:title="first item" />
    <item android:id="@+id/item_two"
        android:icon="@drawable/my_menu_icon"
        android:title="second item" />
</menu>

2. With your graphics editor, create a new transparent PNG file, precisely 48 by 48 pixels in dimension. Ensuring that there is at least a 6 pixel border all the way around, produce your icon as a simple two-dimensional flat shape.
3. Fill the shape with a grayscale gradient that ranges from 47% to 64% white, with the lighter end at the top.
4. Provide a black inner shadow with the following settings:
   - 20% opaque
   - 90° angle (top to bottom)
   - 2 pixel width
   - 2 pixel distance
5. Next, add an inner bevel with:
   - depth of 1%
   - 90° altitude
   - 70% opaque white highlight
   - 25% opaque black shadow
6. Now give the graphic a white outer glow with:
   - 55% opacity
   - 3 pixel size
   - 10% spread
7. Make two copies of our graphic, one resized to 36 by 36 pixels and one to 72 by 72 pixels.
8. Save the largest file in res/drawable-hdpi as my_menu_icon.png. Save the 48 by 48 pixel file with the same name in the drawable-mdpi folder, and the smallest image in drawable-ldpi.
9. To see the full effect of these three files in action you will need to run the software on handsets with different screen resolutions or construct emulators for that purpose.

How it works...

As already mentioned, Android currently insists that menu icons conform to its guidelines, and most of the terms used here should be familiar to anyone who has designed an icon before. The designated drawable folders allow us to provide the best possible graphics for a wide variety of screen densities. Android will automatically select the most appropriate graphic for a handset or tablet, so we can refer to our icons generically with the @drawable/ prefix. It is only ever necessary to provide icons for the first five menu items, as the icon menu is the only type to allow icons.

Android 3.0 Application Development: Multimedia Management

Packt
08 Aug 2011
6 min read
Android 3.0 Application Development Cookbook
Over 70 working recipes covering every aspect of Android development

Very few successful applications are completely silent or have only static graphics, and in order that Android developers take full advantage of the advanced multimedia capabilities of today's smartphones, the system provides the android.media package, which contains many useful classes. The MediaPlayer class allows the playback of both audio and video from raw resources, files, and network streams, and the MediaRecorder class makes it possible to record both sound and images. Android also offers ways to manipulate sounds and create interactive effects through the use of the SoundPool class, which allows us not only to bend the pitch of our sounds but also to play more than one at a time.

Playing an audio file from within an application

One of the first things that we may want to do with regard to multimedia is play back an audio file. Android provides the android.media.MediaPlayer class for us, and this makes playback and most media-related functions remarkably simple. In this recipe we will create a simple media player that will play a single audio file.

Getting ready

Before we start this project we will need an audio file for playback. Android can decode audio with any of the following file extensions:

- .3GP
- .MP4
- .M4A
- .MP3
- .OGG
- .WAV

There are also quite a few MIDI file formats that are acceptable, but they have not been included here as their use is less common and their availability often depends on whether a device is running the standard Android platform or a specific vendor extension. Before you start this exercise, create or find a short sound sample in one of the given formats. We used a five second Ogg Vorbis file and called it my_sound_file.ogg.

How to do it...

1. Start up a new Android project in Eclipse and create a new folder: res/raw. Place the sound file that you just prepared in this folder. In this example we refer to it as my_sound_file.
2. Using either the Graphical Layout or the main.xml panel, edit the file res/layout/main.xml to contain three buttons. Call these buttons play_button, pause_button, and stop_button.
3. In the Java activity code declare a MediaPlayer as a field of the activity (a final local variable inside onCreate(), as sometimes shown, would not compile, because the click listeners below need to reassign it) and set the layout in onCreate():

private MediaPlayer mPlayer;

@Override
public void onCreate(Bundle state) {
    super.onCreate(state);
    setContentView(R.layout.main);

4. Associate the buttons we added in step 2 with Java variables by adding the following lines to onCreate():

    Button playButton = (Button) findViewById(R.id.play_button);
    Button pauseButton = (Button) findViewById(R.id.pause_button);
    Button stopButton = (Button) findViewById(R.id.stop_button);

5. We need a click listener for our play button. This too can be defined within onCreate(). Note that the context passed to MediaPlayer.create() must be the activity's, not the listener's:

    playButton.setOnClickListener(new OnClickListener() {
        public void onClick(View v) {
            // v.getContext() gives the activity context; a plain "this"
            // here would refer to the OnClickListener instead
            mPlayer = MediaPlayer.create(v.getContext(), R.raw.my_sound_file);
            mPlayer.setLooping(true);
            mPlayer.start();
        }
    });

6. Next add a listener for the pause button as follows:

    pauseButton.setOnClickListener(new OnClickListener() {
        public void onClick(View v) {
            if (mPlayer != null) {  // guard against pausing before play
                mPlayer.pause();
            }
        }
    });

7. Finally, include a listener for the stop button:

    stopButton.setOnClickListener(new OnClickListener() {
        public void onClick(View v) {
            if (mPlayer != null) {  // guard against stopping before play
                mPlayer.stop();
                mPlayer.reset();
            }
        }
    });

8. Now run this code on an emulator or your handset and test each of the buttons.

How it works...

The MediaPlayer class provides some useful functions, and the use of start(), pause(), stop(), and setLooping() should be clear.
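One thing the recipe leaves out is cleanup: a MediaPlayer holds native resources, so it should be released when the activity no longer needs it. A minimal sketch, assuming the mPlayer field declared in the steps above:

@Override
protected void onDestroy() {
    super.onDestroy();
    // Free the native resources held by the player
    if (mPlayer != null) {
        mPlayer.release();
        mPlayer = null;
    }
}

After release() the object can no longer be used, which is one more reason the recipe creates a fresh player inside the play button's listener.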
If you are thinking that calling MediaPlayer.create(context, ID) every time the play button is pressed is overkill, you would be correct. This is because once stop() has been called on the MediaPlayer, the media needs to be reset and prepared (with reset() and prepare()) before start() can be called again. Fortunately, MediaPlayer.create() also calls prepare(), so that the first time we play an audio file we do not have to worry about this. The lifecycle of the MediaPlayer is not always straightforward, and the order in which it takes on various states is best explained by the state diagram in the MediaPlayer class documentation.

Otherwise, MediaPlayer has lots of useful methods, such as isPlaying(), which returns a Boolean telling us whether our file is being played or not, or getDuration() and getCurrentPosition(), which inform us of how long the sample is and how far through it we are. There are also some useful hooks that we can employ using MediaPlayer, and the most commonly used are setOnCompletionListener() and setOnErrorListener().

There's more...

We are not restricted to playing back raw resources. We can also play back local files or even stream audio.

Playing back a file or a stream

Use the MediaPlayer.setDataSource(String) method to play an audio file or stream. In the case of streaming audio this will need to be a URL representing a media file that is capable of being played progressively, and you will need to prepare the media player each time it runs:

MediaPlayer player = new MediaPlayer();
player.setDataSource("string value of your file path or URL");
player.prepare();
player.start();

It is essential to surround setDataSource() with a try/catch clause in case the source does not exist when dealing with removable or online media.

Playing back video from external memory

The MediaPlayer class that we met in the previous recipe works for video in the same manner that it does for audio, and so as not to make this task a near copy of the last, here we will look at how to play back video files stored on an SD card using the VideoView object.

Getting ready

This recipe requires a video file for our application to play back. Android can decode H.263, H.264, and MPEG-4 files; generally speaking, this means files with .3gp and .mp4 file extensions. For platforms since 3.0 (API level 11) it is also possible to manage H.264 AVC files. Find a short video clip in one of these compatible formats and save it on the SD card of your handset. Alternatively, you can create an emulator with an SD card enabled and push your video file onto it. This can be done easily through Eclipse's DDMS perspective, from the File Explorer tab. In this example we called our video file my_video.3gp.
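As a rough sketch of how such a recipe typically proceeds, the following shows how a VideoView is wired up. We assume an activity layout containing a VideoView with the id video_view (our name, purely for illustration) and the my_video.3gp clip at the root of the SD card:

VideoView videoView = (VideoView) findViewById(R.id.video_view);

// Attach the standard transport controls (play/pause and seek bar)
MediaController controller = new MediaController(this);
controller.setAnchorView(videoView);
videoView.setMediaController(controller);

// Environment.getExternalStorageDirectory() resolves to the SD card root
String path = Environment.getExternalStorageDirectory() + "/my_video.3gp";
videoView.setVideoPath(path);
videoView.start();

VideoView wraps a MediaPlayer and a SurfaceView, so the prepare/start lifecycle from the previous recipe is handled internally.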