Augmented Reality for Android Application Development
Learn how to develop advanced Augmented Reality applications for Android with this book and ebook
In this article by Jens Grubert and Dr. Raphael Grasset, authors of the book Augmented Reality for Android Application Development, you will learn about the core concepts of Augmented Reality (AR).
A quick overview of AR concepts
As AR has become increasingly popular in the media over the last few years, several distorted notions of Augmented Reality have unfortunately evolved. Anything that somehow relates to the real world and involves some computing, such as standing in front of a shop and watching 3D models wear the latest fashions, gets labeled as AR. Augmented Reality emerged from research labs a few decades ago, and different definitions of AR have been produced. As more and more research fields (for example, computer vision, computer graphics, human-computer interaction, medicine, humanities, and art) have investigated AR as a technology, application, or concept, multiple overlapping definitions now exist. Rather than providing you with an exhaustive list of definitions, we will present the major concepts present in any AR application.
The term Augmented Reality itself contains the notion of reality. Augmenting generally refers to influencing one of your human sensory systems, such as vision or hearing, with additional information. This information is generally defined as digital or virtual and is produced by a computer. Current technology uses displays to overlay and merge physical information with digital information. To augment your hearing, modified headphones or earphones equipped with microphones can mix sound from your surroundings in real time with sound generated by your computer.
The TV screen at home is the ideal device to perceive virtual content, streamed from broadcasts or played from your DVD. Unfortunately, most common TV screens are not able to capture the real world and augment it. An Augmented Reality display needs to simultaneously show the real and virtual worlds.
One of the first display technologies for AR was produced by Ivan Sutherland in 1968 (named "The Sword of Damocles"). The system was rigidly mounted on the ceiling and used CRT screens and a transparent display to create the sensation of visually merging the real and the virtual.
Since then, we have seen different trends in AR display, going from static to wearable and handheld displays. One of the major trends is the usage of optical see-through (OST) technology. The idea is to still see the real world through a semitransparent screen and project some virtual content on the screen. The merging of the real and virtual worlds does not happen on the computer screen, but directly on the retina of your eye, as depicted in the following figure:
The other major trend in AR display is what we call video see-through (VST) technology. Here you perceive the world not directly, but through a video shown on a monitor. The video image is mixed with virtual content (as you would see in a movie) and sent to a standard display, such as your desktop screen, your mobile phone, or the upcoming generation of head-mounted displays, as shown in the following figure:
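The VST principle can be sketched in a few lines of code. The following is a minimal, hypothetical example (the class and method names are our own, not from any AR library): to compose one frame of a video see-through display, each virtual pixel is alpha-blended over the corresponding camera pixel before the result is sent to the screen.

```java
// Hypothetical sketch of the video see-through (VST) pipeline: each camera
// frame is combined with rendered virtual content before being displayed.
// Pixels are packed as 0xAARRGGBB integers. All names are illustrative.
public class VstCompositor {

    // Alpha-blend one virtual pixel over one camera pixel. The virtual
    // pixel's alpha decides how much it covers the real-world pixel.
    public static int blendPixel(int cameraPixel, int virtualPixel) {
        int alpha = (virtualPixel >>> 24) & 0xFF;
        int r = blendChannel(cameraPixel >> 16, virtualPixel >> 16, alpha);
        int g = blendChannel(cameraPixel >> 8, virtualPixel >> 8, alpha);
        int b = blendChannel(cameraPixel, virtualPixel, alpha);
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    private static int blendChannel(int cam, int virt, int alpha) {
        cam &= 0xFF;
        virt &= 0xFF;
        return (virt * alpha + cam * (255 - alpha)) / 255;
    }

    // One VST frame: overlay every virtual pixel onto the camera image.
    public static int[] composeFrame(int[] cameraFrame, int[] virtualLayer) {
        int[] out = new int[cameraFrame.length];
        for (int i = 0; i < cameraFrame.length; i++) {
            out[i] = blendPixel(cameraFrame[i], virtualLayer[i]);
        }
        return out;
    }
}
```

On a real Android device, the camera frame would come from the camera preview and the virtual layer from a 3D renderer such as OpenGL ES; here, plain ARGB integer arrays stand in for both.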
In this book, we will work on Android-driven mobile phones and, therefore, discuss only VST systems; the video camera used will be the one on the back of your phone.
Registration in 3D
With a display (OST or VST) in your hands, you are already able to superimpose content onto your view of the real world, as you see in TV broadcasts with text banners at the bottom of the screen. However, any virtual content (such as text or images) will remain fixed in its position on the screen. Because the superposition is static, your display acts as a head-up display (HUD), but not really as AR, as shown in the following figure:
Google Glass is an example of a HUD. While it uses a semitransparent screen like an OST display, the digital content remains in a static position.
AR needs to know more about real and virtual content. It needs to know where things are in space (registration) and follow where they are moving (tracking).
Registration is basically the idea of aligning virtual and real content in the same space. If you are into movies or sports, you will have noticed that 2D or 3D graphics are quite often superimposed onto scenes of the physical world. In ice hockey, the puck is often highlighted with a colored trail. In movies such as Walt Disney's TRON (1982), real and virtual elements are seamlessly blended. However, AR differs from those effects as it is based on all of the following aspects (proposed by Ronald T. Azuma in 1997):
- It's in 3D: In the early days of special effects, movies were edited manually to merge virtual visual effects with real content. A well-known example is Star Wars, where the lightsaber effects were painted in by hand, frame by frame, by hundreds of artists. Nowadays, more complex techniques support merging digital 3D content (such as characters or cars) with the video image (a process called match moving). AR inherently does this in a 3D space.
- The registration happens in real time: In a movie, everything is prerecorded and generated in a studio; you just play the media. In AR, everything happens in real time, so your application needs to merge reality and virtuality at every instant.
- It's interactive: In a movie, you only look passively at the scene from the position where it was shot. In AR, you can actively move around, forward, and backward, and turn your AR display, and you will still see an alignment between both worlds.
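To make the registration idea concrete, here is a deliberately simplified sketch (our own illustrative code, using a pinhole camera model with rotation omitted for brevity) of how a virtual point anchored in world space is re-projected into pixel coordinates. Re-running this projection with the current camera pose every frame is what keeps virtual content aligned with the real scene as you move:

```java
// Minimal sketch of 3D registration under a pinhole camera model.
// A virtual point is anchored in world space; each frame it is transformed
// into camera coordinates and projected to pixels. A real tracker would
// also supply a rotation matrix; only translation is shown here.
public class Registration {

    // world:     virtual point (x, y, z) in world units
    // cameraPos: camera position (x, y, z) in world units
    // f:         focal length in pixels
    // (cx, cy):  principal point (image center) in pixels
    // Returns the (u, v) pixel coordinates of the projected point.
    public static double[] projectPoint(double[] world, double[] cameraPos,
                                        double f, double cx, double cy) {
        // Transform the point into camera coordinates (translation only).
        double x = world[0] - cameraPos[0];
        double y = world[1] - cameraPos[1];
        double z = world[2] - cameraPos[2];
        // Perspective division: farther points land closer to the center.
        double u = cx + f * x / z;
        double v = cy + f * y / z;
        return new double[] { u, v };
    }
}
```

If the camera moves while the virtual point stays put in world space, the projected pixel position shifts accordingly, which is exactly the behavior that distinguishes registered AR content from a static HUD overlay.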
Interaction with the environment
Building a rich AR application requires interaction with the environment; otherwise, you end up with pretty 3D graphics that turn boring quite fast. AR interaction refers to selecting and manipulating digital and physical objects and navigating in the augmented scene. Rich AR applications allow you to, for example, use physical objects on your table to move virtual characters, use your hands to select floating virtual objects while walking down the street, or speak to a virtual agent appearing on your watch to arrange a meeting later in the day. We will look at how some of the standard mobile interaction techniques can also be applied to AR. We will also dig into specific techniques involving the manipulation of the real world.
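As a small illustration of one such interaction technique, the following hypothetical snippet (the names and the simplified geometry are our own, not from any AR toolkit) shows a common way to select a virtual object: cast a ray from the user's touch point into the scene and test it against the object's bounding sphere.

```java
// Illustrative sketch of ray-based selection ("picking") in an AR scene:
// a ray is cast from the touch point into 3D space and tested against a
// virtual object's bounding sphere. All names are our own.
public class Picking {

    // Returns true if a ray (origin o, normalized direction d) passes
    // within `radius` of the sphere centered at c.
    public static boolean rayHitsSphere(double[] o, double[] d,
                                        double[] c, double radius) {
        // Vector from the ray origin to the sphere center.
        double lx = c[0] - o[0], ly = c[1] - o[1], lz = c[2] - o[2];
        // Length of that vector projected onto the ray direction.
        double t = lx * d[0] + ly * d[1] + lz * d[2];
        if (t < 0) {
            return false; // the sphere is behind the ray origin
        }
        // Squared distance from the sphere center to the closest
        // point on the ray, via the Pythagorean theorem.
        double distSq = (lx * lx + ly * ly + lz * lz) - t * t;
        return distSq <= radius * radius;
    }
}
```

In a full application, the ray origin and direction would be derived from the touch coordinates and the current camera pose; the hit test itself stays as simple as this.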
In this article, we have covered the core concepts of Augmented Reality.
About the Authors:
Dr. Raphael Grasset is a senior researcher at the Institute for Computer Graphics and Vision. He was previously a senior researcher at the HIT Lab NZ and completed his Ph.D. in 2004. His main research interests include 3D interaction, computer-human interaction, augmented reality, mixed reality, visualization, and CSCW. His work is highly multidisciplinary; he has been involved in a large number of academic and industrial projects over the last decade. He is the author of more than 50 international publications, was previously a lecturer on Augmented Reality, and has supervised more than 50 students. He has more than 10 years of experience in Augmented Reality (AR) for a broad range of platforms (desktop, mobile, and the Web) and programming languages (C++, Python, and Java). He has contributed to the development of AR software libraries (ARToolKit, osgART, and Android AR), AR plugins (Esperient Creator and Google Sketchup), and has been involved in the development of numerous AR applications.
Jens Grubert is a researcher at the Graz University of Technology. He has received his Bakkalaureus (2008) and Dipl.-Ing. with distinction (2009) at Otto-von-Guericke University Magdeburg, Germany. As a research manager at Fraunhofer Institute for Factory Operation and Automation IFF, Germany, he conducted evaluations of industrial Augmented Reality systems until August 2010. He has been involved in several academic and industrial projects over the past years and is the author of more than 20 international publications. His current research interests include mobile interfaces for situated media and user evaluations for consumer-oriented Augmented Reality interfaces in public spaces. He has over four years of experience in developing mobile Augmented Reality applications. He initiated the development of a natural feature tracking system that is now commercially used for creating Augmented Reality campaigns. Furthermore, he is teaching university courses about Distributed Systems, Computer Graphics, Virtual Reality, and Augmented Reality.