
Chapter 1. Speech on Android Devices

Have you ever wanted to create voice-based apps that you could run on your own Android device; apps that you could talk to and that could talk back to you? This chapter provides an introduction to the use of speech on Android devices, using open-source APIs from Google for text-to-speech synthesis and speech recognition. Following a brief overview of the world of Voice User Interfaces (VUIs), the chapter outlines the components of an interactive voice application (or virtual personal assistant).

By the end of this chapter you should have a good understanding of what is required to create a voice-based app using freely available resources from Google.

Using speech on an Android device


Android devices provide built-in speech-to-text and text-to-speech capabilities. The following are some examples of speech-based apps on Android:

Speech-to-text

With speech-to-text, users of Android devices can dictate into any text box on the device where textual input is required, for example, e-mail, text messaging, and search. The keyboard contains a button with a microphone symbol and two letters indicating the current language input setting, which can be changed by the user. On pressing the microphone button, a window pops up asking the user to Speak Now. The spoken input is automatically transcribed into written text, and the user can then decide what to do with the transcription.

Accuracy rates for dictation on small devices have improved considerably, on the one hand due to the use of large-scale cloud-based resources for speech recognition, and on the other due to the fact that the device is usually held close to the user's mouth, so that a more reliable acoustic signal can be obtained. One of the main challenges for voice dictation is that the input is unpredictable—users can say literally anything—and so a large general vocabulary is required to cover all possible inputs. Other challenges include dealing with background noise, sloppy speech, and unfamiliar accents.
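Although dictation from the keyboard requires no programming, an app can request the same transcription service itself through the RecognizerIntent API, which is covered in detail in Chapter 3, Speech Recognition. The following is only a minimal sketch of the idea; the class name DictationActivity and the handling of the result are illustrative rather than taken from the book's own examples.

// A minimal sketch of launching the speech recognizer from an app.
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

public class DictationActivity extends Activity {

    private static final int REQUEST_SPEECH = 1;  // arbitrary request code

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Ask the recognizer to transcribe free-form speech.
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak now");
        startActivityForResult(intent, REQUEST_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK) {
            // The recognizer returns an N-best list of candidate transcriptions.
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            String bestMatch = results.get(0);
            // Do something with the transcribed text here.
        }
    }
}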

Text-to-speech

Text-to-speech (TTS) synthesis converts written text into spoken output, and a wide variety of applications can take advantage of it. For example, TalkBack, which is available through the Accessibility settings, uses TTS to help blind and visually impaired users by describing the items that are touched, selected, and activated. TalkBack can also be used to read a book aloud in the Google Play Books app. The TTS function is also available in the Kindle app for Android, and Google Maps uses it to give step-by-step driving instructions. There is a wide range of third-party apps that make use of TTS, and alternative TTS engines are also available.
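Android provides TTS to apps through the TextToSpeech class, which is the subject of Chapter 2, Text-to-Speech Synthesis. As a taste of what is to come, here is a minimal sketch (the class name SpeakActivity and the spoken phrase are purely illustrative) that initializes the engine and speaks once initialization has completed.

// A minimal sketch of Android's TextToSpeech API.
import android.app.Activity;
import android.os.Bundle;
import android.speech.tts.TextToSpeech;

public class SpeakActivity extends Activity implements TextToSpeech.OnInitListener {

    private TextToSpeech tts;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The engine initializes asynchronously; onInit is called when it is ready.
        tts = new TextToSpeech(this, this);
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            // QUEUE_FLUSH discards anything already queued and speaks immediately.
            tts.speak("Hello, I can talk", TextToSpeech.QUEUE_FLUSH, null);
        }
    }

    @Override
    protected void onDestroy() {
        // Release the engine when the activity goes away.
        if (tts != null) {
            tts.shutdown();
        }
        super.onDestroy();
    }
}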

Voice Search

Voice Search provides the same functionality on Android devices as the traditional Google Search except that instead of typing a query the user speaks it. Voice Search is available using the microphone in the Google Search widget. In Voice Search the recognized text is passed to the search engine and executed in the same way that a typed query is executed.

A new feature of Voice Search is that, in addition to returning a list of links, a spoken response to the query is returned. For example, in response to the question "How tall is the Eiffel tower?", the app replies, "The Eiffel tower is 324 meters tall." It is also possible to ask follow-up questions using pronouns, for example, "When was it built?". This additional functionality is made possible by combining Google's Knowledge Graph—a knowledge base used by Google—with its conversational search technology to provide a more conversational style of interaction.

Android Voice Actions

Android Voice Actions can also be accessed using the microphone in the Google Search widget. Voice Actions allow the user to control their device using voice commands. Voice Actions require input that matches a particular structure, as shown in the following table, based on Google's webpage: http://www.google.co.uk/intl/en_uk/mobile/voice-actions/. Note: items marked with * are optional, items in square brackets are placeholders to be filled in by the user, and the Example column shows what the user might actually say.

Voice Action        | Structure                                 | Example
Send text messages  | send text to [recipient] [message]*       | send text to Allison Miller Running late. I will be home around 9
Call businesses     | call [business name] [location]*          | call Soho Pizzeria London
View a map          | map of [address/city]                     | map of London
Search Google       | [your query]                              | pictures of Stonehenge at sunset
Get directions      | navigate to [address/city/business name]  | navigate to British Museum London or navigate to 24 Mill Street
Call contacts       | call [contact name] [phone type]*         | call Allison Miller home
Go to websites      | go to [website]                           | go to Wikipedia

The structures in Voice Actions allow them to be mapped onto actions that are available on the device. For example, the keyword call indicates a phone call, while the key phrase go to indicates a website to be launched. Additional processing is required to extract the parameters of the actions, such as the contact name or the website.
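To give a flavour of what this mapping involves, here is a highly simplified sketch in plain Java. It is not how Google implements Voice Actions; the action labels and the parsing strategy are invented purely for illustration.

// An illustrative keyword-to-action mapper for recognized utterances.
public class VoiceActionParser {

    public static String parse(String utterance) {
        String text = utterance.trim().toLowerCase();
        if (text.startsWith("call ")) {
            // Everything after the keyword is treated as the callee (contact or business).
            return "ACTION_CALL: " + text.substring("call ".length());
        } else if (text.startsWith("go to ")) {
            return "ACTION_OPEN_WEBSITE: " + text.substring("go to ".length());
        } else if (text.startsWith("navigate to ")) {
            return "ACTION_DIRECTIONS: " + text.substring("navigate to ".length());
        } else {
            // Anything that matches no structure falls through to a web search.
            return "ACTION_SEARCH: " + text;
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("call Allison Miller home"));
        System.out.println(parse("go to Wikipedia"));
    }
}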

Virtual Personal Assistants

One of the most exciting speech-based apps is the Virtual Personal Assistant (VPA), which performs a range of tasks such as finding information about local restaurants; carrying out commands involving apps on the device, for example, using speech to set the alarm or update the calendar; and engaging in general conversation. There are at least 20 VPAs available for Android devices (see the web page for this book), although the best-known VPA is Siri, which has been available on Apple's iOS since 2011. You can find examples of interactions with Siri that are similar to those performed by Android VPAs on Apple's website http://www.apple.com/uk/ios/siri/. Many VPAs, including Siri, have been created with a personality and an ability to respond in a humorous way to trick questions and dubious input, thus adding to their entertainment value. See examples at http://www.sirifunny.com as well as numerous video clips on YouTube.

It is worth mentioning that a number of technologies share some of the characteristics of VPAs as explained in the following:

Dialog systems, which have a long tradition in academic research, are based on the vision of developing systems that can communicate with humans in natural language (initially written text but more recently speech). The first systems were concerned with obtaining information, for example, flight times or stock quotes. The next generation enabled users to engage in some form of transaction, in banking or making a travel reservation, while more recent systems are being developed to assist in troubleshooting, for example, guiding a user who is having difficulty setting up some item of equipment. A wide range of techniques have been used to implement dialog systems, including rule-based and statistically-based dialog processing.

Voice User Interfaces (VUIs), which are similar to dialog systems but with the emphasis on commercial deployment. Here the focus has tended to be on systems for specific purposes, such as call routing, directory assistance, and transactional dialogs, for example, travel, hotel, flight, car rental, or bank balance enquiries. Many current VUIs have been designed using VoiceXML, a markup language based on XML. The VoiceXML scripts are then interpreted by a voice browser that also provides the required speech and telephony functions.

Chatbots, which have been used traditionally to simulate human conversation. The earliest chatbots go back to the 1960s with the famous ELIZA program written by Joseph Weizenbaum that simulated a Rogerian psychotherapist—often in a convincing way. More recently chatbots have been used in education, information retrieval, business, e-commerce, and in automated help desks. Chatbots use a sophisticated pattern-matching algorithm to match the user's input and to retrieve appropriate responses. Most chatbots have been text-based although increasingly speech-based chatbots are beginning to emerge (see further in Chapter 8, Dialogs with Virtual Personal Assistants).

Embodied conversational agents (ECAs) are computer-generated animated characters that combine facial expression, body stance, hand gestures, and speech to provide an enriched channel of communication. By enhancing the visual dimension of face-to-face interaction, embodied conversational agents can appear more trustworthy and believable, and also more interesting and entertaining. Embodied conversational agents have been used in applications such as interactive language learning, virtual training environments, virtual reality game shows, and interactive fiction and storytelling systems. Increasingly they are being used in e-commerce and e-banking to provide friendly and helpful automated help. See, for example, the agent Anna at the IKEA website http://www.ikea.com/gb/en/.

Virtual Personal Assistants differ from these technologies in that they allow users to use speech to perform many of the functions that are available on mobile devices, such as sending a text message, consulting and updating the calendar, or setting an alarm. They also provide access to web services, such as finding a restaurant, tracking a delivery, booking a flight, or using information services such as Knowledge Graph, Wolfram Alpha, or Wikipedia. Because they have access to contextual information on the device such as the user's location, time and date, contacts, and calendar, the VPA can provide information such as restaurant recommendations relevant to the user's location and preferences.
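One simple example of such contextual information is the device's location. The following sketch, which is not taken from the book's own examples, shows one way an app might read the last known location so that a VPA could tailor its recommendations; it assumes the app has been granted the ACCESS_FINE_LOCATION permission and that a GPS fix has previously been obtained.

// A minimal sketch of reading one piece of device context: the last known location.
import android.content.Context;
import android.location.Location;
import android.location.LocationManager;

public class ContextReader {

    public static String describeLocation(Context context) {
        LocationManager lm =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        Location loc = lm.getLastKnownLocation(LocationManager.GPS_PROVIDER);
        if (loc == null) {
            // No fix has been obtained yet, so there is no location context to use.
            return "location unknown";
        }
        return "latitude " + loc.getLatitude() + ", longitude " + loc.getLongitude();
    }
}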

Designing and developing a speech app


Speech app design shares many of the characteristics of software design in general, but there are also some aspects unique to voice interfaces—for example, speech recognition is always going to be less than 100 percent accurate, so spoken input is inherently less reliable than input through a GUI. Another issue is that, because speech is transient, especially on devices with no visual display, greater demands are put on the user's memory than with a GUI app.

There are many factors that contribute to the usability of a speech-based app. It is important to perform extensive use case analysis in order to determine the requirements of the system, looking at issues such as whether the app is to replace or complement an existing app; whether speech is appropriate as a medium for input/output; the type of service to be provided by the app; the types of user who will make use of the app; and the general deployment environment for the app.

Why Google speech?


The following are our reasons for using Google speech:

  • The proliferation of Android devices: Recent information on Android states that "Android had a worldwide smartphone market share of 75% during the third quarter of 2012, with 750 million devices activated in total and 1.5 million activations per day." (From http://www.idc.com/getdoc.jsp?containerId=prUS23771812, retrieved 09/07/2013).

  • The Android SDK is open source: The fact that the Android SDK is open source makes it more easily available for developers and enthusiasts to create apps, compared with some other operating systems. Anyone can develop their own apps using a free development environment such as Eclipse and then upload them to their Android device for their own personal use and enjoyment.

  • The Google Speech APIs: The Google Speech APIs are available for free for use on Android devices. This means that the Speech APIs are useful for developers wishing to try out speech without investing in expensive commercially available alternatives. As Google employs many of the top speech scientists, their speech APIs are comparable in performance to those on offer commercially.

Tip

You may also try…

Nuance NDEV Mobile, which supports a number of languages for text-to-speech synthesis and speech recognition as well as providing a PhoneGap plug-in to enable developers to implement their apps on different platforms (http://dragonmobile.nuancemobiledeveloper.com).

The AT&T Speech Mashup (http://www.research.att.com/projects/SpeechMashup/), which supports the development of speech-based apps and the use of W3C standard speech recognition grammars.

What is needed to create a Virtual Personal Assistant?


The following figure shows the various components required to build a speech-enabled VPA.

A basic requirement for a VPA is that it should be able to speak and to understand speech. Text-to-speech synthesis, which provides the ability to speak, is discussed in Chapter 2, Text-to-Speech Synthesis, while speech recognition is covered in Chapter 3, Speech Recognition. However, while these capabilities are fundamental for a voice-enabled assistant, they are not sufficient. The ability to engage in dialog and connect to web services and device functions is also required as the basis of personal assistance. To do these things a VPA requires the following:

  • A method for controlling the dialog, determining who should take the initiative in the dialog and what topics should be covered. In practice this can be simplified by having one-shot interactions in which the user simply speaks their query and the app responds. One-shot interactions are covered in Chapter 4, Simple Voice Interactions. System-directed dialogs, in which the app asks a series of questions (as in web-based form-filling, for example, to book a hotel or rent a car), are covered in Chapter 5, Form-filling Dialogs.

  • A method for interpreting the user's input once it has been recognized. This is the task of the Spoken Language Understanding component, which, among other things, provides a semantic interpretation representing the meaning of what the user said. Since in many commercial systems input is restricted to single words or phrases, the interpretation is relatively straightforward. Two different approaches will be illustrated in Chapter 6, Grammars for Dialog: how to create a hand-crafted grammar that covers the words and phrases that the user might say, and how to use statistical grammars to cover a wider range of inputs and to provide a more robust interpretation. A VPA should also offer other modalities when speech input and output are not possible or recognition performance is poor, and it should be able to use different languages, if required. These topics are covered in Chapter 7, Multilingual and Multimodal Dialogs.

  • Determining relevant actions and generating appropriate responses. These aspects of dialog management and response generation are described in Chapter 7, Multilingual and Multimodal Dialogs, and in Chapter 8, Dialogs with Virtual Personal Assistants.

Building on the basic technologies of text-to-speech synthesis and speech recognition, as presented in Chapter 2 and Chapter 3, Chapters 4-8 cover a range of techniques that will enable developers to take the basic technologies further and create speech-based apps using the Google speech APIs.
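To see how the two basic technologies fit together, the following sketch combines speech recognition and TTS in the simplest possible one-shot interaction: listen for a single utterance and speak it back. The class name OneShotAssistant is invented for illustration, and the "reply" is just an echo; the chapters that follow show how to replace it with genuine interpretation, dialog control, and response generation.

// A sketch of a one-shot interaction: listen for a query, then speak a reply.
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognizerIntent;
import android.speech.tts.TextToSpeech;
import java.util.ArrayList;

public class OneShotAssistant extends Activity implements TextToSpeech.OnInitListener {

    private static final int REQUEST_SPEECH = 1;
    private TextToSpeech tts;
    private boolean ttsReady = false;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        tts = new TextToSpeech(this, this);
        // Prompt the user and start listening for a single utterance.
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        startActivityForResult(intent, REQUEST_SPEECH);
    }

    @Override
    public void onInit(int status) {
        ttsReady = (status == TextToSpeech.SUCCESS);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK && ttsReady) {
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            // Echo the best hypothesis back; a real assistant would interpret it first.
            tts.speak("You said " + results.get(0), TextToSpeech.QUEUE_FLUSH, null);
        }
    }

    @Override
    protected void onDestroy() {
        if (tts != null) {
            tts.shutdown();
        }
        super.onDestroy();
    }
}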

Summary


This chapter has provided an introduction to speech technology on Android devices. We examined various types of speech app that are currently available on Android devices. We also looked at why we decided to focus on Google Speech APIs as tools for the developer. Finally we introduced the main technologies required to create a Virtual Personal Assistant. These technologies will be covered in the remaining chapters of this book.

In the next chapter we will introduce you to text-to-speech synthesis (TTS) and show how to use the Google TTS API to develop applications that speak.
