Intents for Mobile Components

by Muhammad Usama bin Aftab Wajahat Karim | January 2014 | Open Source

In this article, Wajahat Karim, one of the authors of the book "Learning Android Intents", discusses some of the applications of intents with a more practical approach. He also discusses the mobile components that are commonly found in all Android phones.

Now, we will see how these mobile components can be accessed and used very easily via intents. Android provides a vast collection of libraries and features through which a developer can utilize mobile components. This article covers four different categories of components: visual components such as the camera; communication components such as Wi-Fi and Bluetooth; media components such as video and audio recording, speech recognition, and text-to-speech conversion; and finally, motion components such as the proximity sensor. The following topics will be discussed in this article:

  • Common mobile components
  • Components and intents
  • Communication components
  • Using Bluetooth through intents
  • Using Wi-Fi through intents
  • Media components
  • Taking pictures and recording video through intents
  • Speech recognition using intents
  • Role of intents in text-to-speech conversion
  • Motion components
  • Proximity alerts through intents

The concepts and structures of intents are the prerequisites for understanding this article.


Common mobile components

Due to the open source nature of the Android operating system, many different companies, such as HTC and Samsung, have ported the Android OS to their devices with many different functionalities and styles. Each Android phone is unique in one way or another and possesses many features and components that differ from other brands and phones. But some components are common to all Android phones.

We are using two key terms here: components and features. A component is a hardware part of an Android phone, such as the camera, Bluetooth, and so on. A feature is a software part of an Android phone, such as the SMS feature, the e-mail feature, and so on. This article is all about hardware components, their access, and their use through intents.

These common components can generally be used and implemented independently of any mobile phone brand or model. And there is no doubt that intents, being asynchronous messages, are the best way to activate these Android components. These intents are used to trigger the Android OS when some event occurs and some action should be taken. Android, on the basis of the data received, determines the receiver for the intent and triggers it. Here are a few common components found in each Android phone:

The Wi-Fi component

Each Android phone comes with complete support for the Wi-Fi connectivity component. Newer Android phones, running Android 4.0 and above, support the Wi-Fi Direct feature as well. This allows the user to connect to nearby devices without needing to connect to a hotspot or network access point.

The Bluetooth component

An Android phone includes Bluetooth network support that allows its user to exchange data wirelessly over short ranges with other devices. The Android application framework provides developers with access to the Bluetooth functionality through the Android Bluetooth APIs.

The Cellular component

No mobile phone is complete without a cellular component. Each Android phone has a cellular component for mobile communication through SMS, calls, and so on. The Android system provides rich, flexible APIs that utilize the telephony and cellular components to create interesting and innovative apps.

Global Positioning System (GPS) and geo-location

GPS is a very useful but battery-consuming component of any Android phone. It is used for developing location-based apps for Android users. Google Maps is the best-known feature related to GPS and geo-location. Developers have built many innovative apps and games utilizing Google Maps and the GPS component in Android.

The Geomagnetic field component

The geomagnetic field component is found in most Android phones. This component is used to estimate the magnetic field at a given point on the Earth and, in particular, to compute the magnetic declination from true north.

The geomagnetic field component uses the World Magnetic Model produced by the United States National Geospatial-Intelligence Agency. The current model that is being used for the geomagnetic field is valid until 2015. Newer Android phones will have the newer version of the geomagnetic field model.

Sensor components

Most Android devices have built-in sensors that measure motion, orientation, environmental conditions, and so on. These sensors sometimes act as the brains of the app. For example, they can take actions based on the device's surroundings (such as the weather) and allow users to have an automatic interaction with the app. These sensors provide raw data with high precision and accuracy. For example, the gravity sensor can be used to track gestures and motions, such as tilting, shaking, and so on, in any app or game. Similarly, a temperature sensor can be used to detect the device's temperature, or a geomagnetic sensor (as introduced in the previous section) can be used in a travel application to track the compass bearing. Broadly, there are three categories of sensors in Android: motion, position, and environmental sensors. The following subsections discuss these types of sensors briefly.

Motion sensors

Motion sensors let the Android user monitor the motion of the device. There are both hardware-based sensors, such as the accelerometer and gyroscope, and software-based sensors, such as the gravity, linear acceleration, and rotation vector sensors. Motion sensors are used to detect a device's motion, including the tilt effect, shake effect, rotation, swing, and so on. Used properly, these effects can make any app or game very interesting and flexible, and can provide a great user experience.

Position sensors

The two position sensors, the geomagnetic sensor and the orientation sensor, are used to determine the position of the mobile device. Another sensor, the proximity sensor, lets the user determine how close the face of a device is to an object. For example, when we receive a call on an Android phone, placing the phone against the ear shuts off the screen, and when we take the phone back in our hands, the screen display reappears automatically. This simple feature uses the proximity sensor to detect how close the ear (the object) is to the face of the device (the screen).

Environmental sensors

These sensors are not used much in Android apps, but they are used widely by the Android system to detect a lot of little things. For example, the temperature sensor is used to detect the temperature of the phone, which can be used to save battery and extend the device's life.

At the time of writing this article, the Samsung Galaxy S4 Android phone had just been launched. The phone makes great use of environmental gestures by allowing users to perform actions, such as taking calls, through no-touch gestures such as moving your hand or face in front of the phone.

Components and intents

Android phones contain a large number of components and features. This benefits both Android developers and users. Android developers can use these mobile components and features to customize the user experience. For most components, developers have two options: either they extend the components and customize them according to their application requirements, or they use the built-in interfaces provided by the Android system. We won't cover the first option of extending components, as it is beyond the scope of this article. Instead, we will study the second option of using the built-in interfaces for mobile components.

Generally, to use any mobile component from an Android app, the developer sends intents to the Android system, and Android then acts accordingly to call the respective component. Intents are asynchronous messages sent to the Android OS to perform some functionality. Most of the mobile components can be triggered by intents with just a few lines of code and can be utilized fully by developers in their apps. In the following sections of this article, we will see a few components and how they are used and triggered by intents, with practical examples. We have divided the components into three categories: communication components, media components, and motion components. Now, let's discuss these components in the following sections.

Communication components

Any mobile phone's core purpose is communication, though Android phones provide a lot of features beyond it. Android phones contain SMS/MMS, Wi-Fi, and Bluetooth for communication purposes. This article focuses on the hardware components, so we will discuss only Wi-Fi and Bluetooth. The Android system provides built-in APIs to manage and use Bluetooth devices, settings, discoverability, and much more. It offers full network APIs not only for Bluetooth but also for Wi-Fi, hotspots, configuring settings, Internet connectivity, and much more. More importantly, these APIs and components can be used very easily through intents by writing a few lines of code. We will start by discussing Bluetooth, and how we can use it through intents, in the next section.


Using Bluetooth through intents

Bluetooth is a communication protocol designed for short-range, low-bandwidth, peer-to-peer communication. In this section, we will discuss how to interact and communicate with the local Bluetooth device and how we can communicate with nearby, remote devices using Bluetooth. Despite its short range, Bluetooth can be used to transmit and receive data such as files, media, and so on. As of Android 2.1, only paired devices can communicate with each other via Bluetooth due to the encryption of the data.

Bluetooth APIs and libraries became available in Android 2.0 (SDK API Level 5). It should also be noted that not all Android phones necessarily include the Bluetooth hardware.

The Bluetooth API provided by the Android system is used to perform many Bluetooth-related actions, including turning Bluetooth on or off, pairing with nearby devices, communicating with other Bluetooth devices, and much more. But not all of these actions can be performed through intents. We will discuss only those actions that can: setting Bluetooth on or off from our Android app, tracking the Bluetooth adapter state, and making our device discoverable for a short time. The actions that can't be performed through intents include sending data and files to other Bluetooth devices, pairing with other devices, and so on. Now, let's explain these actions one by one in the following sections.

Some Bluetooth API classes

In this section, we will discuss some classes from the Android Bluetooth API that are used in all Android apps using Bluetooth. Understanding these classes will help the developers understand the following examples more easily.


The BluetoothDevice class

This class represents each remote device with which the user is communicating. It is a thin wrapper around the Bluetooth hardware of the remote device. To perform operations on an object of this class, developers have to use the BluetoothAdapter class. The objects of this class are immutable. We can get a BluetoothDevice object by calling BluetoothAdapter.getRemoteDevice(String macAddress), passing the MAC address of a device. Some important methods of this class are:

  • BluetoothDevice.getAddress(): It returns the MAC address of the current device.
  • BluetoothDevice.getBondState(): It returns the bonding state of the current device, such as not bonded, bonding, or bonded.

The MAC address is a string of 12 characters represented in the form of xx:xx:xx:xx:xx:xx. For example, 00:11:22:AA:BB:CC.


The BluetoothAdapter class

This class represents the local Bluetooth adapter of the device on which our Android app is running. It should be noted that the BluetoothAdapter class represents the current device, while the BluetoothDevice class represents other devices that may or may not be bonded with ours. This class is a singleton and cannot be instantiated directly; to get its object, we use the BluetoothAdapter.getDefaultAdapter() method. This class is the main starting point for any Bluetooth-related action. Some of its methods include BluetoothAdapter.getBondedDevices(), which returns all paired devices, and BluetoothAdapter.startDiscovery(), which searches for all discoverable devices nearby. There is also a method called startLeScan(BluetoothAdapter.LeScanCallback callback) that is used to receive a callback whenever a device is discovered. This method was introduced in API Level 18.
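As a brief illustrative sketch of these calls (assuming the device has Bluetooth hardware and the app holds the BLUETOOTH and BLUETOOTH_ADMIN permissions; the "BT" log tag is arbitrary):

```java
// Obtain the singleton adapter for the local Bluetooth hardware
BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
if (adapter != null) {
    // All devices that have been paired with this phone
    Set<BluetoothDevice> paired = adapter.getBondedDevices();
    for (BluetoothDevice device : paired) {
        Log.d("BT", device.getName() + " @ " + device.getAddress());
    }
    // Asynchronously search for discoverable devices nearby
    // (requires the BLUETOOTH_ADMIN permission)
    adapter.startDiscovery();
}
```

A null adapter indicates that the phone has no Bluetooth hardware at all, which is why the check comes first.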

Some of the methods in the BluetoothAdapter and BluetoothDevice classes require the BLUETOOTH permission, and some require the BLUETOOTH_ADMIN permission as well. So, when using these classes in your app, don't forget to add these permissions in your Android manifest file.

So far, we have discussed some Bluetooth classes in the Android OS along with some of the methods in those classes. In the next section, we will develop our first Android app that will ask the user to turn on the Bluetooth.

Turning on the Bluetooth app

To perform any Bluetooth action, Bluetooth must first be turned on. So, in this section, we will develop an Android app that asks the user to turn on Bluetooth if it is not already on. The user can accept, and Bluetooth will be turned on, or the user can reject the request. In the latter case, the application will continue and Bluetooth will remain off. Conveniently, this action can be performed very easily using intents. Let's see how by looking at the code.

First, create an empty Android project in your favourite IDE. We have developed it in Android Studio. At the time of writing this article, Android Studio is in preview mode, and its beta launch is expected soon. Now, we will modify two files from the project to make our Android Bluetooth app. Let's see those files in the following sections.

The MainActivity.java file

This class represents the main activity of our Android app. The following code is implemented in this class:

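Reconstructed from the description that follows (and from the nearly identical listing in the next example), the activity likely looked similar to this sketch; the activity_main layout name is the project default:

```java
public class MainActivity extends Activity {

    // Request code used to match the result returned by the Android system
    final int BLUETOOTH_REQUEST_CODE = 0;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Ask the Android system to prompt the user to enable Bluetooth
        String enableBT = BluetoothAdapter.ACTION_REQUEST_ENABLE;
        Intent bluetoothIntent = new Intent(enableBT);
        startActivityForResult(bluetoothIntent, BLUETOOTH_REQUEST_CODE);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == BLUETOOTH_REQUEST_CODE) {
            if (resultCode == RESULT_OK) {
                Toast.makeText(this, "Bluetooth Turned On", Toast.LENGTH_SHORT).show();
            } else {
                Toast.makeText(this, "Bluetooth Not Turned On", Toast.LENGTH_SHORT).show();
            }
        }
    }
}
```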
In our activity, we have declared a constant value with the name BLUETOOTH_REQUEST_CODE. This constant is used as a request code, a unique identifier for the communication between our app and the Android system. When we request the Android OS to perform some action, we pass a request code. The Android system performs the action and returns the same request code back to us. By comparing our request code with the one Android returns, we know which action has been performed. If the codes don't match, the result belongs to some other request, not ours. In the onCreate() method, we set the layout of the activity by calling the setContentView() method, and then we perform our real task in the next few lines.

We create a string, enableBT, that gets the value of the ACTION_REQUEST_ENABLE constant of the BluetoothAdapter class. This string is passed to the intent constructor to tell the intent that it is meant to enable the Bluetooth device. Like the Bluetooth-enable request string, the Android OS also defines many other request actions for features such as Wi-Fi, sensors, the camera, and more. In this article, we will learn about a few of these request strings. After creating the request string, we create our intent and pass the request string to it. Then, we start our intent by passing it to the startActivityForResult() method.

Basically, the startActivity() method just starts any activity that is passed through an intent, but the startActivityForResult() method starts an activity and, after the action is performed, returns to the original activity with the results of the action. So, in this example, we started the activity that requests the Android system to enable the Bluetooth device. The Android system performs the action and asks the user whether it should enable the device or not. Then, the Android system returns the result to the original activity that started the intent.

To get any result from other activities back in our activity, we override the onActivityResult() method. This method is called after returning from other activities. The method takes three parameters: requestCode, resultCode, and dataIntent. The requestCode parameter is an integer value containing the request code provided by the developer. The resultCode parameter is the result of the action; it tells the developer whether the action was performed successfully (a positive response) or not (a negative response). The dataIntent object contains the original calling-intent data, such as which activity started the intent and all the related information. Now, let's look at our overridden method in detail. We first check whether requestCode matches our BLUETOOTH_REQUEST_CODE. If it does, we compare the result code to check whether our result is okay. If it is, Bluetooth has been enabled, and we display a toast notifying the user; if it isn't, Bluetooth has not been enabled, and here too we notify the user by displaying a toast.

This was the activity class that performs the core functionality of our Bluetooth-enabling app. Now, let's see the Android manifest file in the following section.

The AndroidManifest.xml file

The AndroidManifest.xml file contains all the necessary settings and preferences for the app. The following is the code contained in this manifest file:

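A manifest matching this description would look roughly like the following; the package name and the application label are placeholders:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.bluetoothapp">

    <!-- Required for any Bluetooth communication -->
    <uses-permission android:name="android.permission.BLUETOOTH" />
    <!-- Required for admin actions such as discovery and pairing -->
    <uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />

    <application android:label="Bluetooth App">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>
```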
Any Android application that uses the Bluetooth device must have permission for Bluetooth usage. To request the permissions, the developer declares <uses-permission> tags in the Android manifest file listing the necessary permissions. As shown in the code, we have requested two permissions: android.permission.BLUETOOTH and android.permission.BLUETOOTH_ADMIN. For most Bluetooth-enabled apps, the BLUETOOTH permission alone does most of the work. The BLUETOOTH_ADMIN permission is only for those apps that use Bluetooth admin features such as making the device discoverable, searching for other devices, pairing, and so on. When the user first installs the application, they are shown the permissions needed by the app. If the user accepts and grants the permissions, the app gets installed; otherwise, the user can't install the app.

After discussing the activity and Android manifest files, we can test our project by compiling and running it. When we run the project, we should see the screens shown in the following screenshots:

Enabling Bluetooth App

As the app starts, the user is presented with a dialog to enable or disable the Bluetooth device. If the user chooses Yes, Bluetooth is turned on, and a toast displays the updated status of the Bluetooth device.

Tracking the Bluetooth adapter state

In the previous example, we saw how we can turn on the Bluetooth device just by passing a Bluetooth-request intent to the Android system, in just a few lines of code. But enabling and disabling Bluetooth are time-consuming, asynchronous operations. So, instead of polling the state of the Bluetooth adapter, we can use a broadcast receiver for the state change. In this example, we will see how we can track the Bluetooth state using intents in a broadcast receiver.

This example is an extension of the previous one, and we will reuse the same code and add new code to it. Let's look at the code now. We have three files: MainActivity.java, BluetoothStateReceiver.java, and AndroidManifest.xml. Let's discuss these files one by one.

The MainActivity.java file

This class represents the main activity of our Android app. The following code is implemented in this class:

public class MainActivity extends Activity {

    final int BLUETOOTH_REQUEST_CODE = 0;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        registerReceiver(new BluetoothStateReceiver(),
                new IntentFilter(BluetoothAdapter.ACTION_STATE_CHANGED));

        String enableBT = BluetoothAdapter.ACTION_REQUEST_ENABLE;
        Intent bluetoothIntent = new Intent(enableBT);
        startActivityForResult(bluetoothIntent, BLUETOOTH_REQUEST_CODE);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (resultCode == RESULT_OK) {
            if (requestCode == BLUETOOTH_REQUEST_CODE) {
                Toast.makeText(this, "Turned On", Toast.LENGTH_SHORT).show();
            }
        } else if (resultCode == RESULT_CANCELED) {
            Toast.makeText(this, "Didn't Turn On", Toast.LENGTH_SHORT).show();
        }
    }
}

From the code, it is clear that the code is almost the same as in the previous example. The only difference is that we have added one line after setting the content view of the activity. We called the registerReceiver() method that registers any broadcast receiver with the Android system programmatically. We can also register the receivers via XML by declaring them in the Android manifest file. A broadcast receiver is used to receive the broadcasts sent from the Android system.

While performing general actions, such as turning Bluetooth on or turning Wi-Fi on or off, the Android system sends broadcast notifications that developers can use to detect state changes on the device. There are two types of broadcasts. Normal broadcasts are completely asynchronous: their receivers run in an undefined order, and multiple receivers can receive the broadcast at the same time. This makes them more efficient than the other type, ordered broadcasts. Ordered broadcasts are delivered to one receiver at a time. As each receiver finishes, it passes the result to the next receiver or aborts the broadcast completely, in which case the remaining receivers never receive it.

Although the Intent class is used for sending and receiving broadcasts, the intent broadcast is a completely different mechanism and is separate from the intents used in the startActivity() method. There is no way for the broadcast receiver to see or capture the intents used with the startActivity() method. The main difference between these two intent mechanisms is that the intents used in the startActivity() method perform the foreground operation that the user is currently engaged in. However, the intent used with the broadcast receivers performs some background operations that the user is not aware of.

In our activity code, we used the registerReceiver() method to register an object of our customized broadcast receiver, defined in the BluetoothStateReceiver class, and we passed an intent filter for BluetoothAdapter.ACTION_STATE_CHANGED according to the type of the receiver. This action tells the intent filter that our broadcast receiver object is used to detect Bluetooth state changes in the app. After registering the receiver, we created an intent with BluetoothAdapter.ACTION_REQUEST_ENABLE, telling the app to turn on Bluetooth. Finally, we started our action by calling startActivityForResult(), and we compare the results in the onActivityResult() method to see whether Bluetooth was turned on or not. You can read about these steps in the previous example in this article.

When you register a receiver in the onCreate() or onResume() method of the activity, you should unregister it in the onPause() or onDestroy() method, respectively. The advantage of this approach is that you won't receive any broadcasts when the app is paused or closed, which reduces unnecessary overhead and results in better battery life.
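A minimal sketch of this pairing of lifecycle calls (the stateReceiver field name is illustrative):

```java
private final BroadcastReceiver stateReceiver = new BluetoothStateReceiver();

@Override
protected void onResume() {
    super.onResume();
    // Start listening for Bluetooth state broadcasts while visible
    registerReceiver(stateReceiver,
            new IntentFilter(BluetoothAdapter.ACTION_STATE_CHANGED));
}

@Override
protected void onPause() {
    super.onPause();
    // Stop listening so no broadcasts are delivered while paused
    unregisterReceiver(stateReceiver);
}
```

Keeping the receiver in a field (rather than creating it inline) is what makes it possible to pass the same object to unregisterReceiver() later.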

Now, let's see the code of our customized broadcast receiver class.

The BluetoothStateReceiver.java file

This class represents our customized broadcast receiver that tracks the state change in the Bluetooth device. The following code shows the implementation of the file:

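Based on the description that follows, the receiver likely looked similar to this sketch; the toast strings are illustrative:

```java
public class BluetoothStateReceiver extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        // Key under which the new adapter state is stored in the intent
        String stateExtra = BluetoothAdapter.EXTRA_STATE;
        // The states are integers, so use getIntExtra() with -1 as the default
        int state = intent.getIntExtra(stateExtra, -1);

        switch (state) {
            case BluetoothAdapter.STATE_TURNING_ON:
                Toast.makeText(context, "Bluetooth turning on...", Toast.LENGTH_SHORT).show();
                break;
            case BluetoothAdapter.STATE_ON:
                Toast.makeText(context, "Bluetooth on", Toast.LENGTH_SHORT).show();
                break;
            case BluetoothAdapter.STATE_TURNING_OFF:
                Toast.makeText(context, "Bluetooth turning off...", Toast.LENGTH_SHORT).show();
                break;
            case BluetoothAdapter.STATE_OFF:
                Toast.makeText(context, "Bluetooth off", Toast.LENGTH_SHORT).show();
                break;
        }
    }
}
```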
Just as we did for activities and services, to create a custom broadcast receiver we extend the BroadcastReceiver class and override its methods to declare the custom behavior. We have overridden the onReceive() method and perform the main functionality of tracking the Bluetooth device status there. First, we create a string variable to store the lookup key for the current state; we use BluetoothAdapter.EXTRA_STATE for this. Since the states are integers stored as extras, we call Intent.getIntExtra(), passing our key along with -1 as the default value. Once we have the current state code, we can compare it with the predefined codes in BluetoothAdapter to determine the state of the Bluetooth device. There are four predefined states:

  • STATE_TURNING_ON: This state notifies the user that the Bluetooth turn-on operation is in progress.
  • STATE_ON: This state notifies the user that Bluetooth has already been turned on.
  • STATE_TURNING_OFF: This state notifies the user that the Bluetooth device is being turned off.
  • STATE_OFF: This state notifies the user that the Bluetooth has been turned off.

We compare our state with these constants and display a toast according to the result we get. The Android manifest file is the same as in the previous example.

Thus, in a nutshell, we discussed how we can enable the Bluetooth device and ask the user to turn it on or off through intents. We also saw how to track the state of the Bluetooth operations using intents in the broadcast receiver and displaying the toasts. The following screenshots show the application demo:

Enabling the Bluetooth App

Being discoverable

So far, we have only been interacting with Bluetooth by turning it on or off. But, to start communication via Bluetooth, a device must be discoverable so that pairing can begin. We will not create a full example for this application of intents; instead, we will explain how it can be done. To turn on Bluetooth, we used the BluetoothAdapter.ACTION_REQUEST_ENABLE intent: we passed the intent to the startActivityForResult() method and checked the result in the onActivityResult() method. To make the device discoverable, we instead pass the BluetoothAdapter.ACTION_REQUEST_DISCOVERABLE string to the intent. Then, we pass this intent to the startActivityForResult() method and compare the results in the onActivityResult() method.

The following code snippet shows the intent-creation process for making a device discoverable:

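A minimal sketch of this; the DISCOVERABLE_REQUEST_CODE constant and the 300-second duration are illustrative values:

```java
// Ask the system to make this device discoverable to nearby devices
Intent discoverableIntent =
        new Intent(BluetoothAdapter.ACTION_REQUEST_DISCOVERABLE);
// Optional extra: how long (in seconds) the device stays discoverable
discoverableIntent.putExtra(BluetoothAdapter.EXTRA_DISCOVERABLE_DURATION, 300);
startActivityForResult(discoverableIntent, DISCOVERABLE_REQUEST_CODE);
```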
In the code, you can see that there is nothing new that hasn't been discussed earlier. Only the intent action string type has been changed, and the rest is the same. This is the power of intents; you can do almost anything with just a few lines of code in a matter of minutes.

Monitoring the discoverability modes

Just as we tracked the state changes of Bluetooth, we can also monitor the discoverability mode using exactly the same approach explained earlier in this article. We create a customized broadcast receiver by extending the BroadcastReceiver class. In the onReceive() method, we use two extra keys: BluetoothAdapter.EXTRA_PREVIOUS_SCAN_MODE and BluetoothAdapter.EXTRA_SCAN_MODE. We pass those strings to the Intent.getIntExtra() method to get the integer values for the modes, and then we compare these integers with the predefined modes to detect our mode. The following code snippet shows the code sample:

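A sketch of such a receiver's onReceive() method (it would be registered for the BluetoothAdapter.ACTION_SCAN_MODE_CHANGED action; the toast strings are illustrative):

```java
@Override
public void onReceive(Context context, Intent intent) {
    // Current and previous scan modes arrive as integer extras
    int mode = intent.getIntExtra(BluetoothAdapter.EXTRA_SCAN_MODE, -1);
    int previous = intent.getIntExtra(BluetoothAdapter.EXTRA_PREVIOUS_SCAN_MODE, -1);

    if (mode == BluetoothAdapter.SCAN_MODE_CONNECTABLE_DISCOVERABLE) {
        Toast.makeText(context, "Discoverable and connectable", Toast.LENGTH_SHORT).show();
    } else if (mode == BluetoothAdapter.SCAN_MODE_CONNECTABLE) {
        Toast.makeText(context, "Connectable only", Toast.LENGTH_SHORT).show();
    } else if (mode == BluetoothAdapter.SCAN_MODE_NONE) {
        Toast.makeText(context, "Neither discoverable nor connectable", Toast.LENGTH_SHORT).show();
    }
}
```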

Communication via Bluetooth

The Bluetooth communication APIs are just wrappers around RFCOMM, the standard Bluetooth radio frequency communication protocol. To communicate, Bluetooth devices must be paired with each other. We can carry out bidirectional communication via Bluetooth using the BluetoothServerSocket class, which establishes a listening socket for initiating a link between devices, and the BluetoothSocket class, which represents a client socket; a new BluetoothSocket is also returned by the server socket once a connection is established. We will not discuss Bluetooth data communication further, as it is beyond the scope of this article.

Using Wi-Fi through intents

Today, the Internet and its vast usage on mobile phones have made worldwide information available on the go. Almost every Android phone user expects optimal use of the Internet from all apps, so it becomes the developer's responsibility to add Internet access to the app. For example, when users use your app, they may want to share their activities in it, such as completing a level of a game or reading an article in a news app, with their friends on various social networks or by sending messages. If your app doesn't connect users to the Internet, social platforms, or worldwide information, it can feel too limited and perhaps even boring.

To perform any activity that uses the Internet, we first have to deal with Internet connectivity itself, such as whether the phone has any active connection. In this section, we will see how we can access Internet connectivity through our core topic—the intents. Like Bluetooth, we can do much work through intents related to Internet connectivity. We will implement three main examples: to check the Internet status of a phone, to pick any available Wi-Fi network, and to open the Wi-Fi settings. Let's start our first example of checking the Internet connectivity status of a phone using intents.

Checking the Internet connectivity status

Before we start coding our example, we need to know some important things. An Android phone can be connected to the Internet through different types of connection: a data connection or an open or secured Wi-Fi network. A data connection, also called a mobile connection, connects via the mobile network provided by the SIM and the service provider. In this example, we will detect whether the phone is connected to any network, and if it is, which type of network it is connected to. Let's implement the code now.

There are two main files that perform the functionality of the app: the broadcast receiver and AndroidManifest.xml. You might be wondering about the missing activity file. In this example, it is not needed because of the requirements of the app: whenever the Internet connectivity status of the phone changes, such as Wi-Fi being turned on or off, the app will display a toast showing the status. The app performs its work in the background, so an activity and layouts are not needed. Now, let's explain these files one by one:

The broadcast receiver file

This class represents our customized broadcast receiver that tracks the state change in the network connectivity of the device. The following code shows the implementation of the file:

Just as we did for activities and services, to create a custom broadcast receiver, we extend the BroadcastReceiver class and override its methods to declare the custom behavior. We have overridden the onReceive() method, and we perform the main functionality of tracking the Wi-Fi device status in this method. We have registered this receiver in the Android manifest file for network status changes, and we will discuss that file in the next section. The onReceive() method is called only when the network status changes, so we first display a toast stating that the network connectivity status has changed.

It must be noted that inside a broadcast receiver we cannot pass this as the context parameter of Toast, as we do in an Activity, because the BroadcastReceiver class doesn't extend the Context class the way the Activity class does.

We have already notified the user that the network status has changed, but we have not yet said which change has occurred. At this point, our intent object becomes handy: it contains all the information and data of the network in the form of extras, which are stored in a Bundle object. We create a local Bundle reference and store the intent's extras in it by calling the getExtras() method. Along with it, we also store the no-connectivity extra in a boolean variable. EXTRA_NO_CONNECTIVITY is the lookup key for a boolean value that indicates a complete lack of network connectivity, that is, whether any network is available at all. If this value is true, no network is available.

After storing our required extras, we need to check whether they are available, so we compare the Bundle with null. If extras are available, we extract more network information from them. The Android system exposes the data of interest through constant string keys. We take the key for network information, EXTRA_NETWORK_INFO, store it in a string variable, and use it as the key parameter in the get() method of the Bundle. The Bundle.get() method returns an Object, which we need to typecast to the required class; since we are looking for network information, we cast it to a NetworkInfo object.

The ConnectivityManager.EXTRA_NETWORK_INFO extra was deprecated in API Level 14. Since NetworkInfo can vary based on the user ID (UID), an application should always obtain network information through the getActiveNetworkInfo() or getAllNetworkInfo() method.

Now that we have all our values and data of interest, we compare and check the data to find the connectivity status. We check whether this NetworkInfo data is null; if it is not, we check whether the network is connected by reading the value of the getState() method of NetworkInfo. NetworkInfo.State is an enum that represents the coarse-grained network state. If it is equal to NetworkInfo.State.CONNECTED, the phone is connected to some network. Remember that we still don't know which type of network we are connected to; we can find the type by calling the NetworkInfo.getTypeName() method, which returns Mobile or Wi-Fi in the respective cases.

The coarse-grained network state is used in apps more often than DetailedState. The difference between the two is that the coarse-grained state exposes only four values: CONNECTING, CONNECTED, DISCONNECTING, and DISCONNECTED, whereas DetailedState adds further values for more detail, such as IDLE, SCANNING, AUTHENTICATING, UNAVAILABLE, and FAILED, in addition to the four coarse-grained states.

The rest is an if-else block that checks the state of the network and shows the corresponding status toasts on the screen. Overall, we first extracted the extras from the intent, stored them in local variables, extracted the network info from the extras, checked the state, and finally displayed the info in the form of toasts. We will discuss the Android manifest file in the next section.

The AndroidManifest.xml file

As we have used a broadcast receiver in our application to detect the network connectivity status, it is necessary to register the broadcast receiver in the app. In our manifest file, we have performed two main tasks. First, we have added the permission to access the network state, android.permission.ACCESS_NETWORK_STATE. Second, we have registered our receiver in the app using the receiver tag and added the name of the class.

Also, we have added the intent filters. These filters define the purpose of the receiver, that is, what type of broadcasts it should receive from the system. We have used the filter action for detecting the network connectivity change broadcast. The following is the code implementation of the file:
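The manifest listing is not shown here, so the following is a minimal sketch of what the text describes; the package name and the receiver class name NetworkChangeReceiver are assumptions:

```xml
<!-- Minimal sketch; package and receiver class names are assumptions. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.networkstatus">

    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

    <application android:label="Network Change Status">
        <receiver android:name=".NetworkChangeReceiver">
            <intent-filter>
                <!-- Broadcast sent when network connectivity changes -->
                <action android:name="android.net.conn.CONNECTIVITY_CHANGE" />
            </intent-filter>
        </receiver>
    </application>
</manifest>
```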

Summarizing the details of the preceding app, we created a customized broadcast receiver, and defined our custom behavior of network change, that is, displaying toasts, and then we registered our receiver in the manifest file along with the declarations of the required permissions. The following screenshots show a simple demo of the app when turning the Wi-Fi on in the phone:

The Network Change Status app

In the preceding screenshot, we can see that when we turn the Wi-Fi on, the app displays a toast saying that the network status has changed, and after that toast, it displays the change; in our case, the Wi-Fi is connected. You might be wondering about the role of intents in this app; it would not have been possible without them. The first use of intents was in registering the receiver in the manifest file to filter for network status changes. The other use was in the receiver, when we received the update and wanted to know what had changed, so we extracted the data from the intent in the form of extras and used it for our purpose. We didn't create our own intents in this example; instead, we only used the intents provided by the system. In our next example, we will create our own intents and use them to open the Wi-Fi settings from our app.

Opening the Wi-Fi Settings app

Until now, we have only used intents for network and Wi-Fi purposes. In this example, we are going to create intent objects and use them in our app. In the previous app example, we detected the network status change of the phone and displayed it on the screen. In this example, we will add a button to the same app. On clicking or tapping the button, the app will open the Wi-Fi settings, and the user can turn the Wi-Fi on or off from there. As the user performs any action, the app will display the network status change on the screen. For the network status detection, we used the broadcast receiver and AndroidManifest.xml files. Now, let's open the same project and change our main activity and activity_main.xml files to add a button and its functionality. Let's see these two files one by one:

The activity_main.xml file

This file is a visual layout of our main activity file. We will add a button view in this XML file. The code implementation of the file is as follows:
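The layout listing is not shown here; a minimal sketch might look like the following, where only the btnWifiSettings ID comes from the text and the other attribute values are assumptions:

```xml
<!-- Minimal sketch; attributes other than the btnWifiSettings ID are assumptions. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <Button
        android:id="@+id/btnWifiSettings"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Wi-Fi Settings" />
</LinearLayout>
```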

We have added a button in the layout with the view ID of btnWifiSettings. We will use this ID to get the button view from the layout in our activity. Let's now see our main activity file that uses this layout as its visual content.

The main activity file

This file represents the main activity file, the launching point of the app. We will implement our button's core functionality in this file. The code implementation of the file is as follows:
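A minimal sketch of the activity described below could look like this; the class name MainActivity and the layout resource name are assumptions:

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.provider.Settings;
import android.view.View;
import android.widget.Button;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        Button button = (Button) findViewById(R.id.btnWifiSettings);
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // Opens the system screen that allows Wi-Fi configuration
                Intent intent = new Intent(Settings.ACTION_WIFI_SETTINGS);
                startActivity(intent);
            }
        });
    }
}
```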

As discussed many times, we extend our class from the Activity class and override its onCreate() method. After calling the super method, we first set our layout file (explained in the previous section) using the setContentView() method, passing the layout ID as the parameter. After setting the layout, we extract our Wi-Fi settings button from the layout by calling the findViewById() method. Remember, we set the button view's ID to btnWifiSettings, so we pass this ID as the argument. We store the returned reference in a local Button object. Finally, we set the View.OnClickListener of the button to perform our task on a button click: we pass an anonymous OnClickListener object to the button.setOnClickListener() method and override its onClick() method.

Until now, we have only performed some initial steps to set up our app. Now, let's focus on the task of opening the Wi-Fi settings. We create an Intent object and pass a constant action string to tell the intent what to start. We use the Settings.ACTION_WIFI_SETTINGS constant, which shows the settings that allow configuration of Wi-Fi. After creating the Intent object, we pass it to the startActivity() method to open the activity containing the Wi-Fi settings. It is that simple with no rocket science at all. When we run the app, we will see something similar to the following screenshots:

Opening the Wi-Fi Settings app

As seen from the preceding screenshot, when we click or tap the Wi-Fi Settings button, it will open the Wi-Fi settings screen of the Android phone. On changing the settings, such as turning on the Wi-Fi, it will display the toasts to show the updated changes and network status.

We have finished discussing the communication components using intents, in which we used Bluetooth and Wi-Fi via intents and saw how these can be used in various examples and applications. Now, we will discuss how the media components can be used via intents and what we can do for media components in the following sections.


Media components

The preceding section was all about communication components. A key difference between older phones and new smartphones is media capability, such as high-definition audio and video features, and the multimedia capabilities of mobile phones have become a significant consideration for many consumers. Fortunately, the Android system provides multimedia APIs for many features, such as playing and recording a wide range of image, audio, and video formats, both locally and streamed. Describing all the media components in detail is beyond the scope of this article; we will only discuss those media components that can be triggered, used, and accessed through intents. The components discussed in this section include using intents to take pictures, using intents to record video, speech recognition using intents, and the role of intents in text-to-speech conversion. The first three topics use intents to perform the actions, but the last topic, text-to-speech conversion, doesn't rely on intents completely. We will also develop sample applications to see the intents in action. Let's discuss these topics one by one in the following subsections.

Using intents to take pictures

Today, almost every phone has a digital camera component. The popularity of digital cameras embedded within mobile phones has caused their prices to drop along with their size. Android phones also include digital cameras, varying from 3.2 megapixels to 32 megapixels. From the development perspective, pictures can be taken via many different methods. The Android system provides APIs for camera control and pictures, but we will focus on the one method that uses intents. This is the easiest way to take pictures in Android development and requires no more than a few lines of code.

We will first create a layout with the image view and button. Then, in the Activity class, we will get the references to our views from the layout file and set the click listener of the button. On clicking the button, we will create the image capture intent and start another activity as a child activity. After getting the result, we will display the captured image in our image view.

So, with the basic empty Hello World project ready, we will change three files and add our code to them. The files are activity_main.xml, the main activity file, and AndroidManifest.xml. Let's explain the changes in each file one by one:

The activity_main.xml file

This file represents the visual layout for the file. We will add an ImageView tag to show the captured image and a Button tag to take a picture and trigger the camera.

The code implementation of the file is as follows:
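Since the layout listing is not reproduced here, the following is a minimal sketch; only the IDs and the relative positioning mentioned in the text come from the original, and the remaining attribute values are assumptions:

```xml
<!-- Minimal sketch; only the IDs and positioning come from the text. -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:id="@+id/imageView1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerHorizontal="true"
        android:src="@drawable/ic_launcher" />

    <Button
        android:id="@+id/btnTakePicture"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@id/imageView1"
        android:layout_centerHorizontal="true"
        android:text="Take Picture" />
</RelativeLayout>
```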

As you can see in the code, we have placed an ImageView tag in the relative layout with the ID of imageView1. This ID will be used in the main activity file to extract the view from the layout for use in the Java file. We have centered the view horizontally by assigning the value true to the android:layout_centerHorizontal attribute, and initially we have set our app's launcher icon as the default image of the image view. Below the image view, we have placed a button view; tapping it will start the camera. The button's ID is set to btnTakePicture, and it is positioned below the image view using the android:layout_below attribute. This relativity is the main advantage of relative layouts as compared to linear layouts. Now, let's have a look at the activity of the app that performs the main functionality and uses this layout as its visual part.

The main activity file

This file represents the main launching activity of the app. It uses the activity_main.xml file as the visual part, and it extends the Activity class. The code implementation of the file is as follows:
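The activity listing is missing here; a minimal sketch of what the text describes might look like the following, where the class name MainActivity and the value of TAKE_IMAGE_CODE are assumptions:

```java
import android.app.Activity;
import android.content.Intent;
import android.graphics.Bitmap;
import android.os.Bundle;
import android.provider.MediaStore;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;

public class MainActivity extends Activity {

    private static final int TAKE_IMAGE_CODE = 1; // assumed request code

    private ImageView takenImage;
    private Button imageButton;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        takenImage = (ImageView) findViewById(R.id.imageView1);
        imageButton = (Button) findViewById(R.id.btnTakePicture);
        imageButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // Ask Android to start a camera app to capture an image
                Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
                startActivityForResult(intent, TAKE_IMAGE_CODE);
            }
        });
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == TAKE_IMAGE_CODE && resultCode == RESULT_OK) {
            // The camera returns a small preview bitmap in the "data" extra
            Bitmap bitmap = (Bitmap) data.getExtras().get("data");
            takenImage.setImageBitmap(bitmap);
        }
    }
}
```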

We start our class by overriding the onCreate() method of the activity. We set the visual layout of the activity to the activity_main.xml layout by calling the setContentView() method. Now, as the layout is set, we can get references to the views in the layout file.

We create two fields in the class: takenImage of the ImageView class, used to show the captured image, and imageButton of the Button class, used to trigger the camera by clicking on it. The onClick() method is called when the button is tapped or clicked, so we define our camera-triggering code in it. In this method, we create an instance of the Intent class and pass the MediaStore.ACTION_IMAGE_CAPTURE constant in the constructor. This constant tells Android that the intent is for the purpose of image capture, and Android will start the camera on starting this intent. If the user has installed more than one camera app, Android will present a list of all valid camera apps, and the user can choose any of them to take the image.

After creating an intent instance, we pass this intent object in the startActivityForResult() method. In our picture-capturing app, clicking on the button will start another activity of the camera. And when we close the camera activity, it will come back to the original activity of our app and give us some result of the captured picture. So, to get the result in any activity, we have to override the onActivityResult() method. This method is called when the parent activity is started after the child activity is completed. When this method is called, it means that we have used the camera and are now back to our parent activity. If the result is successful, we can display the captured image in the image View.

First, we need to determine whether this method was called as a result of the camera request or of some other action. For this purpose, we compare the requestCode parameter of the method. Remember, when calling the startActivityForResult() method, we passed the TAKE_IMAGE_CODE constant as the other parameter; this is the request code to be compared.

After that, to check the result, we examine the resultCode parameter of the method. As we used this code for the camera picture intent, we compare our resultCode with the RESULT_OK constant. If both conditions succeed, we can conclude that we have received our image. So, we use the intent to get our image data by calling the getExtras().get() method. This gives us the data as an Object type, which we typecast to Bitmap to prepare it for the ImageView.

Finally, we call the setImageBitmap method to set the new bitmap to our image View. If you run the code, you will see an icon image and a button. After clicking on the button, the camera will be started. When you take the picture, the app will crash and shut down. You can see it in the following screenshots:

The app crashed after taking a picture

You might be wondering why the crash occurred. We forgot to mention one thing: whenever an app uses the camera, we have to add the uses-feature tag in the manifest file to declare that the app will use the camera feature. Let's see our Android manifest file to understand the uses-feature tag.

The AndroidManifest.xml file

This file defines all the settings and features to be used in our app. There is only one new thing that we haven't seen. The code implementation of the file is as follows:
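The manifest listing is not shown here; a minimal sketch consistent with the text might look like this, where the package and activity names are assumptions:

```xml
<!-- Minimal sketch; package and activity names are assumptions. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.camerademo">

    <!-- Declares that the app uses the device's camera -->
    <uses-feature android:name="android.hardware.camera" />

    <application android:label="Take Picture">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>
```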

You can see that we have added the uses-feature tag and assigned android.hardware.camera to the android:name property. This tag declares the camera usage in the app, and the Android OS gives our app access to the device's camera.

After adding this line in the manifest file and running the code, you will see something similar to the following screenshot if you have more than one camera app in your phone:

Taking pictures through the intents app

In the screenshot, you can see that the user is asked to choose the camera, and when a picture is taken, the image is shown in the app.

To summarize the code: we first created a layout with an image view and a button. Then, in the Activity class, we got the references to our views from the layout file and set the click listener of the button. After clicking on the button, we created the image capture intent and started another activity as the child activity. After getting the result, we displayed the captured image in our image view. It was as easy as a walk in the park. In the next section, we will see how we can record video using intents.

Using intents to record video

Until now, we have seen how to take pictures using intents. In this section, we will see how we can record video using intents. We will not discuss the whole project in this section; the procedure to record videos using intents is almost the same as taking pictures, with a few minor changes, and we will only discuss those changes here. Now, let's see how the app works to record video.

The first change is in our layout file: we removed the ImageView section and placed a VideoView tag instead. The following code implementation shows that tag:
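The tag is not reproduced here; a minimal sketch, where attribute values other than the ID are assumptions, could be:

```xml
<!-- The VideoView that replaces the ImageView; values other than the
     ID are assumptions. -->
<VideoView
    android:id="@+id/videoView1"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_centerHorizontal="true" />
```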

You can see that everything is the same as it was for ImageView. Now, as we have changed the image view to a video view in our layout, we have to change it in our activity as well. Just as we did for ImageView, we create a field object of VideoView and get its reference in the onCreate() method of the activity. The following code sample shows the VideoView field and its reference:
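A minimal sketch of those two lines, assuming the field name videoView and the layout ID videoView1:

```java
// Field declaration in the activity (the name is an assumption):
private VideoView videoView;

// In onCreate(), after setContentView():
videoView = (VideoView) findViewById(R.id.videoView1);
```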

Everything is the same, and we have already discussed it. Now, in our onClick() method, we will see how we send the intent that triggers the video recording. The code implementation to be put on the onClick() method to send an intent is as follows:
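A minimal sketch of the onClick() body described below; TAKE_VIDEO_CODE is an assumed request-code constant:

```java
// Inside onClick(): start a camera app in video-capture mode
Intent intent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
// 1 means high quality; 0 would mean low (MMS) quality
intent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 1);
startActivityForResult(intent, TAKE_VIDEO_CODE);
```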

You can see that we have created an intent object, and instead of passing MediaStore.ACTION_IMAGE_CAPTURE, we have passed MediaStore.ACTION_VIDEO_CAPTURE in the constructor of the intent. Also, we have put an extra in the intent by calling the putExtra() method, requesting high video quality by setting the MediaStore.EXTRA_VIDEO_QUALITY value to 1. Then, we pass the intent to the startActivityForResult() method again to start the camera activity.

The next change is in the onActivityResult() method, where we get the video from the intent. The following sample code shows how to get the video, pass it to the VideoView tag, and play it:
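A minimal sketch of that method, reusing the assumed TAKE_VIDEO_CODE constant and videoView field:

```java
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == TAKE_VIDEO_CODE && resultCode == RESULT_OK) {
        // The camera app returns a Uri referencing the recorded video
        Uri videoUri = data.getData();
        videoView.setVideoURI(videoUri);
        videoView.start();
    }
}
```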

In the case of taking a picture, we retrieved raw data from the intent, typecast it to Bitmap, and then set it on our ImageView. Here, in the case of recording a video, we only get the URI of the video. A Uri object is a reference to data on the mobile phone. We get the URI of the video and set it on our VideoView using the setVideoURI() method. Finally, we play the video by calling the VideoView.start() method.

From these sections, you can see how easy it is to use the intents to capture images or record videos. Through intents, we are using the already built-in camera or camera apps. If we want our own custom camera to capture images and videos, we have to use the Camera APIs of Android.

We can use the MediaPlayer class to play video, audio, and so on. The MediaPlayer class contains methods like start(), stop(), seekTo(), isLooping(), setVolume(), and much more. To record a video, we can use the MediaRecorder class. This class contains methods including start(), stop(), release(), setAudioSource(), setVideoSource(), setOutputFormat(), setAudioEncoder(), setVideoEncoder(), setOutputFile(), and much more.

When you are using the MediaRecorder APIs in your app, don't forget to add the android.permission.RECORD_AUDIO and android.permission.CAMERA permissions in your manifest file.

To take pictures without using intents, we can use the Camera class. This class includes the methods open(), release(), startPreview(), stopPreview(), takePicture(), and much more.

When you are using the Camera APIs in your app, don't forget to add the android.permission.CAMERA permission in your manifest file.

Until now, we have used visual media components for videos and pictures using intents. In the next section, we will use audio components of a phone using intents. We will see how we can use the speech recognition and text-to-speech supports using intents in the next sections.

Speech recognition using intents

Voice recognition on smartphones became a very big achievement, particularly for disabled users. Android introduced speech recognition in API Level 3, in Version 1.5, and supports voice input and speech recognition through the RecognizerIntent class. Android's default keyboard contains a button with a microphone icon that allows the user to speak instead of typing text; it uses the speech-recognition API for this purpose. The following screenshot shows the keyboard with the microphone button on it:

Android's default keyboard with the microphone button

In this section, we will create a sample application that has a button and a text field. After clicking on the button, Android's standard voice-input dialog will be displayed, and the user will be asked to speak something. The app will try to recognize whatever the user speaks and type it in the text field. We will start by creating an empty project in Android Studio or any other IDE, and we will modify two files in it. Let's start with our layout file in the next section.

The activity_main.xml file

This file represents the visual content of the app. We will add the text field and button view in this file. The code implementation of the file is as follows:
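A minimal sketch of the layout described below; only the textMultiLine input type and the btnRecognize ID come from the text, and the other values are assumptions:

```xml
<!-- Minimal sketch; only inputType and btnRecognize come from the text. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <EditText
        android:id="@+id/editText1"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:inputType="textMultiLine" />

    <Button
        android:id="@+id/btnRecognize"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Recognize" />
</LinearLayout>
```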

As you can see, we have placed an EditText field. We have set android:inputType to textMultiLine to type the text in multiple lines. Below the text field, we have added a Button view with an ID of btnRecognize. This button will be used to start the speech-recognition activity when it is tapped or clicked on. Now, let's discuss the main activity file.

The main activity file

This file represents the main activity of the project. The code implementation of the file is as follows:
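The listing is not reproduced here; a minimal sketch of the activity described below could look like this, where the class name, the editText1 ID, and the request-code value are assumptions:

```java
import java.util.ArrayList;
import java.util.Locale;

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognizerIntent;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;

public class MainActivity extends Activity implements View.OnClickListener {

    private static final int VOICE_REQUEST_CODE = 1; // assumed value

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Button btnRecognize = (Button) findViewById(R.id.btnRecognize);
        btnRecognize.setOnClickListener(this);
    }

    @Override
    public void onClick(View v) {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        // Required: which speech model the recognizer should use
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        // Optional extras that help the recognizer
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak now");
        intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 1);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.ENGLISH);
        startActivityForResult(intent, VOICE_REQUEST_CODE);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == VOICE_REQUEST_CODE && resultCode == RESULT_OK) {
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (results != null && !results.isEmpty()) {
                EditText editText = (EditText) findViewById(R.id.editText1);
                editText.setText(results.get(0));
            }
        }
    }
}
```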

As usual, we override the onCreate() method and get our button reference from the layout set by the setContentView() method. We set the button's listener to this class, and the activity implements OnClickListener along with overriding the onClick() method. In the onClick() method, we create an intent object and pass RecognizerIntent.ACTION_RECOGNIZE_SPEECH as the action string in the constructor. This constant tells Android that the intent is for speech recognition. Then, we add some extras to provide more information to Android about the intent and the speech recognition. The most important extra is RecognizerIntent.EXTRA_LANGUAGE_MODEL, which informs the recognizer about which speech model to use when recognizing speech; the recognizer uses it to fine-tune the results for more accuracy. This extra is required and must be provided when calling the speech-recognizer intent. We have passed the RecognizerIntent.LANGUAGE_MODEL_FREE_FORM model, a language model based on free-form speech recognition. We also have some optional extras that help the recognizer produce more accurate results. We have added the RecognizerIntent.EXTRA_PROMPT extra and passed a string value in it; this notifies the user that speech recognition has started.

Next, we add the RecognizerIntent.EXTRA_MAX_RESULTS extra and set its value to 1. Speech-recognition accuracy always varies, so the recognizer produces different candidate results with different accuracies and possibly different meanings. Through this extra, we can tell the recognizer how many results we are interested in. In our app, we set it to 1, which means the recognizer will provide only one result. There is no guarantee that this result will be accurate enough, which is why it is recommended to pass a value greater than 1; for a simple case, you can pass a value of up to 5. Remember, the greater the value you pass, the more time it will take to recognize.

Finally, we put our last optional extra, the language. We pass Locale.ENGLISH as the value of the RecognizerIntent.EXTRA_LANGUAGE extra. This tells the recognizer the language of the speech, so it doesn't have to detect the language itself, which results in more accurate speech recognition.

The speech-recognition engine may not be able to understand all the languages available in the Locale class. Also, it is not necessary that all the devices will support speech recognition.

After adding all the extra objects, we have ensured that our intent object is ready. We pass it in the startActivityForResult() method with requestCode as 1. When this method is called, a standard voice-recognition dialog is shown with the prompt message that we had given. After we finish speaking, our parent activity's onActivityResult() method is called. We first check whether requestCode is 1 or not so that we can be sure that this is our result of speech recognition. After that, we will check resultCode to see whether the result was okay or not. After successful results, we will get an array list of strings containing all the words recognized by the recognizer. We can get these words' lists by calling the getStringArrayListExtra() method and passing RecognizerIntent.EXTRA_RESULTS. This list is only returned when resultCode is okay; otherwise, we will get a null value. After wrapping up the speech-recognition stuff, we can now set the text value to the result. For that, we first extract the EditText view from the layout, and set our result to the value of the text field by calling the setText() method.

An active Internet connection is required for speech recognition. The speech-recognition process is executed on Google's servers: the Android phone takes the voice input and sends it to the servers, where it is processed for recognition; Google then sends the results back to the phone, which informs the user about them, completing the cycle.

If you run the project, you will see something similar to the following screenshots:

Speech recognition using intents

In the image, you can see that after clicking on the Recognize button, a standard voice-input dialog is shown. On speaking something, we will return back to our parent activity, and after recognizing the speech, it will print all the text in the text field.

Role of intents in text-to-speech conversion

In the previous section, we discussed how the Android system can recognize our speech and perform actions such as controlling the mobile phone via speech commands, and we developed a simple speech-to-text example using intents. This section is the opposite: we will discuss how the Android system can convert our text into a voice narration, which we can call text-to-speech conversion. Android introduced the Text-To-Speech (TTS) engine in Version 1.6, API Level 4. We can use this API to produce speech from within our application, thus allowing our app to talk with our users; if we add speech recognition as well, it is like talking with our application. Text-to-speech conversion requires preinstalled language packs, and due to the limited storage space on mobile phones, the phone will not necessarily have any language packs already installed. So, while creating any app using the text-to-speech engine, it is a good practice to check whether the language packs are installed.

We can't perform text-to-speech conversion itself through intents; for that, we use the text-to-speech engine, TTS. But there is a minor role for intents in text-to-speech conversion: they are used to check whether the language packs are preinstalled. So, any app that uses text-to-speech will first have to use intents to check the language packs' installation status. That's the role of intents in text-to-speech conversion. Let's look at the sample code for checking the language packs' installation state:
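A minimal sketch of that check; VAL_TTS_DATA is the request-code constant named in the text, and its value here is an assumption:

```java
// Assumed request-code value for the TTS data check
private static final int VAL_TTS_DATA = 99;

// Typically called from onCreate() of the activity:
Intent checkIntent = new Intent(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
startActivityForResult(checkIntent, VAL_TTS_DATA);
```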

The first thing we do for text-to-speech conversion is check the language packs. In the code, we create an intent object and pass the Engine.ACTION_CHECK_TTS_DATA constant, which tells the system that the intent will check the text-to-speech (TTS) data and language packs. We then pass the intent to the startActivityForResult() method along with the VAL_TTS_DATA constant used as requestCode. If the language packs are installed and everything is okay, we will get Engine.CHECK_VOICE_DATA_PASS as resultCode in the onActivityResult() method, and we can use text-to-speech conversion. Let's walk through the code of the onActivityResult() method:

So, we first check the requestCode we passed. Then, we compare resultCode with Engine.CHECK_VOICE_DATA_PASS; this constant tells us whether voice data is available. If voice data is available on the phone, we can perform our text-to-speech conversion; otherwise, we have to install the voice data first. You will be pleased to know that installing voice data is also very easy, and it uses intents for this purpose. The following code snippet shows how to install voice data using intents:
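A minimal sketch of the installation snippet described here:

```java
// Ask the system to install the TTS language-pack data
Intent installIntent = new Intent(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
startActivity(installIntent);
```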

We create an intent object and pass Engine.ACTION_INSTALL_TTS_DATA in the constructor. This constant tells Android that the intent is for the installation of the text-to-speech language pack data. We then pass the intent to the startActivity() method to start the installation. Once the language packs are available, we create an object of the TextToSpeech class and call its speak() method whenever we want to perform text-to-speech conversion. The following code implementation shows how to use the TextToSpeech object in the onActivityResult() method:

private TextToSpeech tts;

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == VAL_TTS_DATA) {
        if (resultCode == Engine.CHECK_VOICE_DATA_PASS) {
            // Voice data is available; initialize the TTS engine.
            tts = new TextToSpeech(this, new OnInitListener() {
                public void onInit(int status) {
                    if (status == TextToSpeech.SUCCESS) {
                        tts.setLanguage(Locale.US);
                        tts.setSpeechRate(1.1f);
                        tts.speak("Hello, I am writing book for Packt",
                                TextToSpeech.QUEUE_ADD, null);
                    }
                }
            });
        } else {
            // Voice data is missing; ask the system to install it.
            Intent installLanguage = new Intent(Engine.ACTION_INSTALL_TTS_DATA);
            startActivity(installLanguage);
        }
    }
}

As seen in the code, after confirming that the language data packs are installed, we create an instance of TextToSpeech and pass an anonymous OnInitListener object. Its onInit() method applies the initial settings of the TextToSpeech object: if the status is a success, we set the language and the speech rate, and finally call the speak() method. We pass this method a string, and Android reads that text aloud. Note that tts is declared as a field rather than a local variable so that the anonymous listener can legally reference it.

Concluding the whole topic: the role of intents in text-to-speech conversion is limited to checking for and installing voice-data packs. Intents don't contribute directly to the conversion itself; they only set up the prerequisites for it.

With text-to-speech conversion, we have finished the discussions on media components. In media components, we discussed taking pictures, recording videos, speech recognition, and text-to-speech conversion. In the next section, we will discuss motion components and see how intents play a role in these components.

Motion components

Motion components in an Android phone include many different types of sensors that perform many different tasks and actions. In this section, we will look at motion and position sensors such as the accelerometer, geomagnetic sensor, orientation sensor, and proximity sensor. All of these sensors relate to the motion and position of the Android phone, but we will discuss only those that are triggered through intents. There is only one such sensor: the proximity sensor. Let's discuss it in the following section.

Intents and proximity alerts

Before learning about the role of intents in proximity alerts, we will discuss what proximity alerts are and how these can be useful in various applications.

What are proximity alerts?

The proximity sensor lets an application determine how close the device is to an object. It's useful whenever your application should react as the phone's screen moves towards or away from something. For example, during an incoming call on an Android phone, placing the phone against the ear switches the screen off, and taking it away switches the screen back on automatically. This behavior uses proximity alerts to detect the distance between the ear and the device's proximity sensor. The following figure shows this in visual format:

Another example: when the phone has been idle for a while with its screen off, it can vibrate or raise a notification to remind us to check missed calls. This, too, can be done using proximity sensors.

Proximity alerts let your application set triggers that fire when the device moves within, or beyond, a set distance from a geographic location. We will not cover every detail of proximity alerts in this section, only some basic information and the role intents play in using them. For example, suppose we set a proximity alert for a given coverage area: we select a point as a latitude/longitude pair, a radius around that point in meters, and an expiry time for the alert. The alert then fires whenever the device crosses that boundary, whether it moves from outside the radius to inside it or from inside the radius to beyond it.

Role of intents in proximity alerts

When a proximity alert is triggered, it fires an intent. We use a PendingIntent object to specify the intent to be fired. Let's look at a code sample for the distance application we discussed in the earlier section:
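The original listing is not reproduced in the extracted text. Based on the description that follows, a sketch inside an Activity might look like this; the action string, coordinates, and radius are placeholder values, not values from the book:

```java
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.location.LocationManager;

// Inside your Activity subclass:
// Hypothetical action name for the alert's broadcast intent.
private static final String DISTANCE_PROXIMITY_ALERT =
        "com.example.DISTANCE_PROXIMITY_ALERT";

private void addDistanceProximityAlert() {
    LocationManager locationManager =
            (LocationManager) getSystemService(Context.LOCATION_SERVICE);

    // Sample values; set these to whatever your app needs.
    double latitude = 33.6844;
    double longitude = 73.0479;
    float radius = 1000f;   // meters
    long expiration = -1;   // -1 means the alert never expires

    // The intent that LocationManager will broadcast when the
    // device crosses the boundary.
    Intent intent = new Intent(DISTANCE_PROXIMITY_ALERT);
    PendingIntent proximityIntent =
            PendingIntent.getBroadcast(this, 0, intent, 0);

    locationManager.addProximityAlert(
            latitude, longitude, radius, expiration, proximityIntent);
}
```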

In the preceding code, we implement the very first step of using proximity alerts in our app: creating the alert, which is done through a PendingIntent. We define the alert's name as DISTANCE_PROXIMITY_ALERT, and then get the location manager service by calling the getSystemService() method of the current activity. We then set sample values for latitude, longitude, and radius, and set the expiration to infinity. Remember that these values can be set to anything, depending on the type of application you are creating.

Now comes the most important part: creating the proximity alert itself. We create the intent, passing our own alert name to the constructor. Then, we create a PendingIntent for a broadcast by calling the getBroadcast() method. Finally, we add the proximity alert to the location manager service by calling the addProximityAlert() method.

This code snippet only creates the alert and sets its initial values. Now, assume that we have completely finished our distance app. Whenever our device crosses the boundary we specified, in either direction, LocationManager detects the crossing and fires an intent carrying a Boolean extra under the key LocationManager.KEY_PROXIMITY_ENTERING. If its value is true, we have entered the boundary; if it is false, we have left it. To receive this intent, we create a broadcast receiver and perform the action there. The following code snippet shows a sample implementation of the receiver:

public class ProximityAlertReceiver extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        boolean isEntered = intent.getBooleanExtra(
                LocationManager.KEY_PROXIMITY_ENTERING, false);
        if (isEntered) {
            Toast.makeText(context, "Device has Entered!",
                    Toast.LENGTH_SHORT).show();
        } else {
            Toast.makeText(context, "Device has Left!",
                    Toast.LENGTH_SHORT).show();
        }
    }
}

In the code, you can see that we get the extra value stored under LocationManager.KEY_PROXIMITY_ENTERING using the getBooleanExtra() method, then display the appropriate toast. Quite easy, as you can see. But, like all receivers, this receiver will not work until it is registered, either in AndroidManifest.xml or in Java code. The Java code for registering the receiver is as follows:
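The original snippet is missing here; assuming the DISTANCE_PROXIMITY_ALERT action string used when the alert was created, the registration is two lines inside the Activity:

```java
import android.content.IntentFilter;

// Inside your Activity (e.g., in onCreate()):
// Register the receiver for the same action string used
// when the proximity alert was created.
IntentFilter filter = new IntentFilter(DISTANCE_PROXIMITY_ALERT);
registerReceiver(new ProximityAlertReceiver(), filter);
```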

There is nothing to explain here except that we are calling the registerReceiver() method of the Activity class.

In a nutshell, intents play a minor role in proximity alerts: they are used only to tell the Android OS which proximity alert has been added, when it fires, and what information it should carry so that developers can use it in their apps.


In this article, we discussed the common mobile components found in almost all Android phones. These components include the Wi-Fi component, Bluetooth, cellular, Global Positioning System, geomagnetic field, motion sensors, position sensors, and environmental sensors. Then, we discussed the role of intents with these components. To explain that role in more detail, we used intents for Bluetooth communication, turning Bluetooth on/off, making a device discoverable, turning Wi-Fi on/off, and opening Wi-Fi settings. We also saw how to take pictures, record videos, perform speech recognition, and do text-to-speech conversion via intents. In the end, we saw how to use the proximity sensor through intents.

About the Authors

Muhammad Usama bin Aftab

Muhammad Usama bin Aftab is a telecommunications engineer with a flair for programming. He has been working in the IT industry for the last two years, during which he has worked on Android development, AndEngine GLES 1 and 2, Starling, Adobe AIR, and Unity 3D. His two years of Android experience span both professional and freelance work. In June 2011, he started his career at a Silicon Valley-based company named Folio3 Pvt. Ltd., which guided him a lot and helped him discover various technologies alongside highly qualified professionals.

Wajahat Karim

Wajahat Karim is a software engineer with a keen interest in game development for mobile and Facebook platforms. He graduated from the NUST School of Electrical Engineering & Computer Sciences (SEECS), Islamabad, Pakistan, and has been working on games since the third year of his studies. He is skilled in many platforms, including the Android SDK, AndEngine GLES 1 and 2, Adobe Flash, Adobe Flex, Adobe AIR, Unity3D, and Game Maker, as well as computer graphics tools such as Adobe Photoshop CS5, Adobe Illustrator, Adobe Flash, 3D Studio Max, and Autodesk Maya 2012. After working on a Facebook game at WhiteRabbit Studios until September 2012, he joined a Silicon Valley-based company, Folio3 Pvt. Ltd., where he provides services in mobile games using Unity3D, Adobe Flash, and AndEngine. He also runs his own mobile app/game startup, AppSoul Studio (Pvt.) Ltd., in his spare time.
