Virtual Reality Blueprints

By Charles Palmer, John Williamson

About this book

Are you new to virtual reality? Do you want to create exciting interactive VR applications? There's no need to be daunted by the thought of building them: it's much easier than you think with this hands-on, project-based guide, which will take you through VR development essentials for desktop and mobile-based games and applications. Explore the three top platforms—Cardboard VR, Gear VR, and Oculus VR—to design immersive experiences from scratch.

You'll start by understanding the science-fiction roots of virtual reality and then build your first VR experience using Cardboard VR. You'll then delve into user interactions in virtual space with Google Cardboard before moving on to creating a virtual gallery with the Gear VR. After that, you will learn all about virtual movement, state machines, and spawning while you shoot zombies in the Oculus Rift headset. Finally, you'll construct a Carnival Midway, complete with two common games to entertain players.

Along the way, you will explore best practices for VR development, review game design tips, discuss methods for combating motion sickness, and identify alternate uses for VR applications.

Publication date:
February 2018


The Past, Present, and Future of VR

This book is designed to serve as a hands-on introduction to virtual reality, commonly known simply as VR. It includes a brief history of the technology, definitions of popular terms, and best practices to stave off motion sickness and ensure your trackers are working perfectly.

In the following chapters, you will begin creating your very own virtual worlds, which you can explore using your mobile device or Head Mounted Display (HMD). Unity 3D will be used for all projects. Unity is a flexible and powerful video game engine used to create some of your favorite video games, such as Hearthstone, Cities: Skylines, Kerbal Space Program, Cuphead, Superhot, and Monument Valley.

The Virtual Worlds you will be creating in Unity include:

  • Image Gallery: A Virtual Art Gallery/Museum where you get to decide what hangs on the walls in this Samsung Gear VR project.
  • Solar System: Ignore the rules of time and physics as you travel through a model of the solar system in Google Cardboard.
  • Zombie Shooter: The only good zombie is a dead one—wait, an undead one? Anyway, you'll shoot them in the Oculus Rift.
  • Carnival Games: The only thing missing from these realistic VR carnival games is the smell of fried pickles.

Additionally, you will cover Unity development topics, such as:

  • System requirements
  • Scripting in Unity
  • User interaction in VR
  • Building VR environments
  • Equirectangular images
  • Improving performance
  • The Samsung Gear VR workflow process
  • The Oculus Rift workflow process
  • Combating VR sickness

The history of virtual reality

The development of virtual reality has been driven by the confluence of three improvements to display technologies:

  • Field of View: the size of the area that we can see
  • Stereoscopic 3D: the depth cue from viewing the world from two different horizontally separated viewpoints
  • Interactivity: the ability to change the virtual environment in real time

In this chapter, we are going to trace the history of virtual reality and show how earlier designs have served as inspiration for today's devices, including a few older ideas that we have not quite duplicated with our current generation of VR hardware.

We will investigate the following:

  • 19th century panoramic paintings
  • Cycloramas and Sensoramas
  • NASA Moon Landing Simulators
  • Nintendo Powerglove
  • Hasbro Toaster

Static 2D images, whether paintings or photographs, are poor representations of reality. As such, people have striven to enhance their images, making them more realistic and immersive, ever since the first flickering shadows added motion to cave paintings 20,000 years ago. Simple puppets added motion, and then depth. More complex solutions followed: motion and audio effects were added to Greek and Roman temples to give a complete sensory experience. Doors would open without apparent human intervention, thunder would rumble overhead, and fountains would dance, all designed to create an augmented experience beyond simple static statues and paintings.


Through the looking glass

Perspective, a great example of the intersection between art and math, allowed us to accurately trace the world as we see it. Artists learned to mix paints to create the illusion of translucency in pigments. Magicians were able to use persistence of vision to create illusions and children's toys that would one day lead to moving pictures and to holograms of dead pop stars.

Magic lanterns are an extension of the Camera Obscura and can be traced back to the 17th century. Rather than using light from the sun, as a Camera Obscura did, an artificial light source was used: first candlelight, then limelight, then electricity. The light shone through a painted glass plate, projecting the image onto the wall, very much like a slide projector (or even a video projector). The images could be scrolled, giving the illusion of motion, or two images could be alternated, giving the illusion of animation. The simple technology of magic lanterns remained popular for over 200 years, up until the 1920s, when moving pictures finally dethroned them.


Making a static image dance

Zoetropes: Several designs and children's toys experimented with the idea of rapidly presenting a series of static images to create the illusion of motion. But not until 18-year-old William Lincoln presented his design to Milton Bradley did the idea take off. The trick was to place slits in the viewing device so that the images would be seen only briefly, not continuously. This persistence of vision, the fact that an image remains visible after it is removed, is what allows movies, a series of still images, to be seen in motion.


The bigger the better – panoramic paintings

Along the way, artists began to experiment with large-scale paintings that were so large they would eclipse the viewer's entire field of view, enhancing the illusion that one was not simply in a theater or gallery, but on location. The paintings would be curved in a semicircle to enhance the effect; the first may have been created in 1791 in England by Robert Barker. They proved so financially successful that over 120 different Panorama installations were reported between 1793 and 1863 in London alone. This wide field-of-view solution for drawing audiences would be repeated by Hollywood in the 1950s with Cinerama, a projection system that required three synchronized cameras and three synchronized projectors to create an ultra-wide field of view. It was improved on again by Circle Vision 360 and IMAX over the years.

For a brief period, these large-scale paintings were taken a step further with two enhancements. The first was to narrow the field of view to a fixed point, but to make the image very long—hundreds of meters long, in fact. This immense painting would be slowly unrolled before an audience, often through a window to add an extra layer of depth, typically with a narrating tour guide, giving the illusion that the viewers were looking out of a steamboat window, floating down the Mississippi (or the Nile), as their guide lectured about the flora and fauna drifting by.

As impressive as this technology was, it had first been used centuries earlier by the Chinese, though on a smaller scale.

A version of this mechanism was used in the 1899 Broadway production of Ben-Hur, which ran for 21 years and sold over 20 million tickets. The actual staging was a little different from what the advertising posters promised.

A giant screen of the Roman coliseum would scroll behind two chariots, pulled by live horses, while fans blew dust, giving the impression of being on the chariot track in the middle of the race. This technique would later be used in rear-screen projections for film special effects, and even to help land a man on the moon, as we shall see.

The next step was to make the painting rotate a full 360 degrees and place the viewer in the center. Later versions would add 3D sculptures, then animated lights and positional scripted audio, to further enhance the experience and blur the line between realism and 2D painting. Several of these can still be seen in the US, including the Battle of Gettysburg Cyclorama, in Gettysburg, Pennsylvania.


Stereoscopic viewers

Stereoscopic vision is an important evolutionary trait with a set of trade-offs. Nearly all land-based mammalian predators have forward-facing eyes for stereoscopic vision. This allows them to detect objects that may be camouflaged and helps them judge depth so they know where to attack. However, it gives predators a narrow field of view, leaving them unable to detect movement behind them. Nearly all prey animals have eyes on the sides of their heads for a far wider field of view, allowing them to detect predators to the side and rear.

Artificial stereoscopic images were first created in 1838, when Charles Wheatstone built his stereoscopic viewer from mirrors. Fortunately, the invention of the photographic camera occurred nearly simultaneously, as drawing two nearly identical images, separated by just 6.5 centimeters, is a very difficult task.

David Brewster improved on the design just 10 years later. But it was the poet and Dean of Harvard Medical School, Oliver Wendell Holmes Sr., whose 1861 redesign made it a phenomenon. His belief in stereo photography as an educational tool was such that he purposely did not patent the design. This lowered the cost and ensured it would be used as widely as possible. There was scarcely a Victorian parlor that did not have a Holmes stereo viewer. It is the design most frequently associated with stereoscopic antiques, and it remained the most popular until the View-Master took over in 1939.

Stereoscopic media requires that two, near-identical images be presented, one to each eye. However, each solution varies in the way they ensure the eye sees only the image intended for it: the left eye sees only the left image and the right eye sees only the right image.

In early stereoscopic displays, and in all VR HMDs today, this is done simply by using a separate lens directed at each eye, ensuring that each eye sees only the image intended for it. From the View-Master to the Vive, the same basic technique is used. This works perfectly for one viewer at a time.

But, if you want to show a 3D movie to a crowd, you need different techniques to ensure that there is no crosstalk. That is, you still want the left eye to see only the left image and the right eye only the right image. With 3D movies, there were a handful of different techniques, each with advantages and disadvantages.

The most affordable option is the one often used for 3D comic books: anaglyph 3D. Typically, this involves red/cyan lenses (though sometimes green/yellow). The left image is printed in red ink and the right image in cyan ink. The red lens blocks the cyan image, and the cyan lens blocks the red image. The effect does work, though there is always some crosstalk, where part of each image remains visible to the opposite eye.
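The channel-splitting idea behind anaglyph 3D can be sketched in a few lines of Python; this is a simplified illustration using NumPy, not any particular product's pipeline. The left view supplies the red channel and the right view supplies the green and blue (cyan) channels:

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine two H x W x 3 RGB views into one red/cyan anaglyph.

    The left view contributes only the red channel; the right view
    contributes the green and blue (cyan) channels, so red/cyan
    glasses route each view to the intended eye.
    """
    anaglyph = np.zeros_like(left)
    anaglyph[..., 0] = left[..., 0]    # red channel from the left eye's image
    anaglyph[..., 1] = right[..., 1]   # green channel from the right eye's image
    anaglyph[..., 2] = right[..., 2]   # blue channel from the right eye's image
    return anaglyph
```

In practice you would also convert each view to grayscale or rebalance colors first, since fully saturated colors in one channel cause retinal rivalry; the sketch above shows only the core channel routing.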

Polarized 3D allowed full-color images to be used, with glasses similar in form to anaglyph glasses but filtering by polarization rather than color. Light of one polarization passes through one lens, while orthogonally polarized light is blocked. This could even be used in print images, though at a significant cost increase over anaglyph 3D. Polarized 3D is the type most commonly used in 3D movies today.

Active shutters were, at first, mechanical shutters that would block each eye in sync with the movie, which would show only one eye's image at a time. Later, these mechanical shutters were replaced with liquid crystal display (LCD) shutters that block out most (but not all) light. This is the 3D technique used in Sega's SegaScope 3-D glasses, and it was also used in some IMAX 3D systems.

There were many other 3D techniques: volumetric displays built with lasers and mirrors or light arrays, Chromadepth, and holograms. But none were as successful as the techniques previously discussed.

Real, interactive holograms exist only in science fiction. The holograms of Tupac and Michael Jackson are actually based on a 19th-century magic trick called Pepper's Ghost, and are simply 2D images projected onto a pane of glass. The holographic images of the HoloLens are also not real holograms, as they too use a version of Pepper's Ghost, with images projected down into a set of semi-transparent optics.

Lenticular displays deserve a mention for two reasons. First, they allow the user to see 3D without glasses. Second, even though the technology has been around for at least 75 years, most people are familiar with glasses-free 3D because of the Nintendo 3DS. The display cuts the image into very thin, alternating vertical strips, one set for the left eye and one for the right. Each eye is prevented from seeing the other eye's strips through the use of a lens sheet or, in the closely related approach used by the 3DS, a parallax barrier.
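The strip-interleaving step can be sketched as follows. This is a simplified illustration in Python with NumPy; real displays also account for viewing distance, strip pitch, and the optics in front of the panel:

```python
import numpy as np

def interleave_columns(left, right):
    """Interleave two equal-sized views into alternating vertical strips.

    Even pixel columns come from the left view, odd columns from the
    right view; a lens sheet or parallax barrier then routes each set
    of strips to the matching eye.
    """
    assert left.shape == right.shape
    combined = left.copy()
    combined[:, 1::2] = right[:, 1::2]  # odd columns taken from the right view
    return combined
```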


Why stop at just sight and sound? – Smell-O-Vision and Sensorama

While this generation of VR has not (yet) added the olfactory sense to their set of outputs, this does not mean that older systems have not tried.

Movies tried to add the sense of smell in 1960 with the film Scent of Mystery. At specific times in the movie, smells were sprayed into the audience. Some theatergoers complained that the smells were overpowering, while others complained they could not smell them at all. But everyone agreed that the movie, even with Elizabeth Taylor, was not worth seeing, and the technology quietly faded away.

Morton Heilig built the Sensorama in 1962. Only a single viewer at a time could experience the short films, but the viewers were exposed to all the senses: Stereoscopic 3D, smells, vibration, wind in the hair, and stereo sound. Today, 4D movies at many major theme parks are its closest relatives.

Heilig did attempt to create a large audience version of his immersive films, which included elements from Sensorama and the Cyclorama. He called this the Telesphere. The large field of view, stereoscopic 3D images, and vibrations were designed to create an immersive experience.


Link Trainers and Apollo

World War One took aviation from flights of hundreds of meters to flights measured in hundreds of kilometers. Early flight trainers were no more than barrels on ropes. Edward Link saw the potential for growth in aviation and the need for trained pilots to fly these more complex aircraft. The complexity of new planes would require a new level of fidelity in training systems, and the demand for new pilots could not be met with existing training techniques.

This was brought to the forefront when 12 pilots were killed in training in less than three months. Link took his knowledge of building pump organs and created analog flight simulators designed to teach flight by instruments. There were no graphics of any kind and no scrolling landscapes, and the pilots were enclosed in a darkened covered cockpit. The trainers would respond accurately to the pilot's stick and rudder inputs and the little boxes would pitch and roll a few degrees. Link Trainers would add small stubby wings and a tail, making them look like the children's rides outside grocery stores in the 1950s, but over 500,000 pilots were trained with them.

For the Apollo program, true digital computers were used in simulators, but they were not powerful enough to display graphics; instead, they drove the same simple analog readouts found in the capsules. To simulate the view from the capsule, large three-dimensional models and paintings of the moon and space vehicles were built, and the model moon was scrolled under a closed-circuit TV camera.

This was not unlike the scrolling panoramic paintings used a hundred years earlier. The video feed from the camera was sent to a special infinity optical display system mounted in the simulator capsule, which had a wide field of view of 110 degrees. As the astronaut trained in the simulator, the movement of his joystick was fed into the position of the cameras, changing the images projected in real time. This system featured wide field of view and interactivity, but not stereoscopic 3D images (though the life-sized cockpit model they looked through would add binocular depth to the presentation).


Interactivity and True HMDs

In this section we will conduct a quick overview of the evolution of Head Mounted Displays, including how they displayed their images and what head tracking technologies they used.

1960 – Telesphere Mask

Morton Heilig patented one of the first functioning HMDs in 1960. While it did not have any head tracking and was designed exclusively for his stereoscopic 3D movies, the images from the patent look remarkably similar to our designs 50 years later.

1961 – Headsight

This was a remote-viewing HMD designed for safely inspecting dangerous situations, and it was also the first interactive HMD. The user could change the direction of the remote cameras by rotating their head, seeing the live video feed update in real time. This was one step closer to immersive environments and telepresence.

1965 – The Ultimate Display

Ivan Sutherland wrote about the Ultimate Display, a computer system that could simulate reality to the point where one could not differentiate between reality and simulation. The concept included haptic inputs and an HMD, making it the first complete definition of what Star Trek would call the Holodeck. We still do not have high-fidelity haptic feedback, though prototypes do exist.

The ultimate display would, of course, be a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal. With appropriate programming such a display could literally be the Wonderland into which Alice walked. - Ivan Sutherland

1968 – Teleyeglasses

Hugo Gernsback bridged the gap between science fiction and HMD design. He was a prolific publisher, producing over 50 hobbyist magazines on science, technology, and science fiction; the Hugo Awards for science fiction are named after him. While readers loved him, his reputation among contemporary writers was less than flattering.

Gernsback not only published science fiction, he himself wrote about futurism, from color television, to cities on the moon, to farming in space. In 1968, he debuted a wireless HMD, called the Teleyeglasses, constructed from twin cathode-ray tubes (CRTs) with rabbit-ear antennas.

1968 – Sword of Damocles

In 1968, Sutherland demonstrated an HMD with interactive computer graphics: the first true HMD. It was too large and heavy to be comfortably worn, so it was suspended from the ceiling. This gave it its name (Damocles had a sword suspended over his head by a single hair, to show him the perilous nature of those in power).

The computer-generated images were interactive: as the user turned their head, the images would update accordingly. But, given the computer processing power of the time, the images were simple white vector-line drawings against a black background. The rotation of the head was tracked electromechanically through gears (unlike today's HMDs, which use gyroscopes and light sensors), which no doubt added to the weight. Sutherland would go on to co-found Evans & Sutherland, a leading computer image processing company in the 1970s and 1980s.

1968 – The mother of all demos

While Douglas Engelbart's demo did not feature an HMD, it did contain a demo of virtually every system used in computers today: the mouse, the light pen, networking with audio and video, collaborative word processing, hypertext, and more. Fortunately, videos of the event are online and well worth a look.

1969 – Virtual Cockpit/Helmet Mounted Sight

Dr. Tom Furness began working on HMDs for the US Air Force in 1967, moving from Head-Up Displays, to Helmet-Mounted Displays, to Head Mounted Displays in 1969. At first, the idea was simply to take some of the mental load off the pilot and allow them to focus on the most important instruments. Later, this evolved into linking the pilot's head to the gun turret, allowing them to fire where they looked. The current F-35 Glass Cockpit HMD can be traced directly to his work. In a Glass Cockpit, or even a Glass Tank, the pilot or driver is able to see through their plane or tank via an HMD, giving them a complete, unobstructed view of the battlefield through sensors and cameras mounted in the hull. His Retinal Display Systems, which do away with pixels by writing directly on the eye with lasers, are possibly similar to the solution used by Magic Leap.

1969 – Artificial Reality

Myron Krueger was a virtual reality computer artist credited with coining the term Artificial Reality to describe several of his interactive, computer-powered art installations: GLOWFLOW, METAPLAY, PSYCHIC SPACE, and VIDEOPLACE. If you visited a hands-on science museum from the 1970s through the 1990s, you no doubt experienced variations of his video/computer interactions. Several web and phone camera apps now have similar features built in.

1992 – CAVE

Cave Automatic Virtual Environment (CAVE) is a self-referential acronym. It was the first collaborative space where multiple users could interact in virtual space. The systems used at least three, though sometimes more, stereoscopic 3D projectors covering at least three walls of a room, creating life-sized 3D computer images that the user could walk through. While stereoscopic 3D projectors are inexpensive today, at the time their extremely high cost, coupled with the cost of computers able to create realistic images in real time, meant CAVE systems were relegated to biomedical and automotive research facilities.

1987 – Virtual reality and VPL

Jaron Lanier coined (or popularized, depending on your sources) the term virtual reality. Lanier also designed and built the first complete, commercially available virtual reality system, which included the Dataglove and the EyePhone head-mounted display. The Dataglove would evolve into the Nintendo Powerglove. It was able to track hand gestures through a unique trait of fiber optic cable: if you scratch a fiber optic cable and shine light through it while it is straight, very little of the light will escape. But if you bend the scratched cable, light escapes, and the more you bend it, the more light escapes. This escaped light was measured and then used to calculate finger movements.
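In code, the mapping from measured light to finger pose might look like the following sketch. This is purely illustrative Python with a hypothetical per-finger calibration, not VPL's actual algorithm:

```python
def bend_angle(light_level, straight_level, full_bend_level, max_angle=90.0):
    """Map a photosensor reading to an estimated finger-bend angle.

    Assumes a simple linear relationship between escaped light and
    bend, calibrated per finger: `straight_level` is the reading with
    the finger straight, `full_bend_level` with it fully curled.
    """
    span = full_bend_level - straight_level
    t = (light_level - straight_level) / span   # 0.0 = straight, 1.0 = curled
    t = max(0.0, min(1.0, t))                   # clamp readings outside calibration
    return t * max_angle
```

A real glove would filter sensor noise and fit a per-finger response curve rather than assume linearity, but the calibrate-then-interpolate structure is the essential idea.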

While the first generation of VR tried to use natural, gesture-based (specifically hand-based) input, today's iteration of VR, for the most part, skips hand-based input (with the exception of Leap Motion and, to a very limited extent, HoloLens). My theory is that the new generation of VR developers grew up with a controller in their hands and is very comfortable with that input device, whereas the original set of VR designers had very little experience with controllers, which is why they felt the need to use natural input.

1989 – Nintendo Powerglove

The Nintendo Powerglove was an accessory designed to run on the Nintendo Entertainment System (NES). It used a set of three ultrasonic speakers, mounted to a TV, to track the location of the player's hand. In theory, the player could grab objects by making a fist. Tightening and relaxing the hand would change the measured electrical resistance, allowing the NES to register a fist or an open hand. Only two games were released for the system, though its cultural impact was far greater.


1990s – VR Explosion

In the 1990s, VR made its first foray into the mainstream. At least three PC HMDs made it to market and several console VR systems were built.

1991 – Virtuality Dactyl Nightmare

Virtuality was the first VR experience to be publicly available in video game arcades, as the price point of the computers required to draw even these simple 3D images was well beyond what any household could afford. These experiences were even networked together for multiplayer VR interaction.

1993 – SEGA VR glasses

Sega announced the Sega VR headset for the Sega Genesis console at the 1993 Consumer Electronics Show. The system included head tracking, stereo sound, and images drawn on two LCD screens. Sega built four games for the system, but a high price point and a lack of computing power meant the HMD never went into production.

1995 – VRML – Virtual Reality Modeling Language

VRML is a text file format designed to link 3D worlds together, much like Hyper Text Markup Language (HTML) links pages together. Vertices, edges, surface color, UV-mapped textures, and transparency for 3D polygons could be specified, and animated effects and audio could be triggered as well.
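A minimal VRML 2.0 world gives the flavor of the format. The snippet below defines a red cube that acts as a hyperlink to another world; the linked filename gallery.wrl is illustrative:

```
#VRML V2.0 utf8
# A red box that links to another world -- the "hyperlink between
# 3D worlds" idea described above.
Anchor {
  url "gallery.wrl"
  description "Enter the gallery"
  children Shape {
    appearance Appearance {
      material Material { diffuseColor 1 0 0 }   # red surface color
    }
    geometry Box { size 2 2 2 }                  # a 2 x 2 x 2 cube
  }
}
```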

1995 – Nintendo Virtual Boy

The Nintendo Virtual Boy did not have any head tracking at all; it was a stereoscopic 3D display system that ran proprietary games drawn in only two colors: red and black. While several dozen games were released for it, the system was difficult to use comfortably and the experience could not be shared. The Virtual Boy was only on the market for two years.

1995 – Hasbro Toaster

Hasbro, the toy manufacturer, had missed out on the console booms of the 80s and 90s and wanted into that very lucrative market. They designed a CD-ROM-based system powerful enough to run VR and built a prototype. After a great deal of money was invested, the system was never released, though the game Night Trap was built for it. As the game had been so expensive to produce, the developers salvaged what they could and released it for the Sega CD.

2013 – Oculus Rift

Palmer Luckey was able to leverage the never-ending demand for VR; the dramatic increase in PC processing power; the advent of low-cost, high-quality hobbyist hardware and circuit boards; the explosion of large, high-resolution smartphone displays, which are still the primary source of screens in HMDs; and the raw power and acceptance of crowdsourcing. This perfect storm of tech, money, ideas, enthusiasm, and execution allowed him to sell his company to Facebook for $2 billion.

Palmer deserves credit for re-creating the VR demand that has brought us Cardboard, Gear, Daydream, and HoloLens, which are all discussed in greater detail elsewhere in this book.

2014 – Google Cardboard

While Facebook invested $2 billion to bring Rift to the market, Google took a different approach. Google Cardboard is an HMD kit that contains two 40 mm focal distance lenses and a sheet of pre-folded die-cut cardboard. Rubber bands and Velcro strips are used to keep the device closed and support the user's mobile device. With a low price point, around $9.00, and simple design, VR finally made it into the hands of millions of users.

Since its initial release, knock-off brands have kept the price low and Google has developed educational materials for K-8 students across the U.S.

2015 – Samsung Gear VR

In 2005, Samsung obtained a patent to use a mobile device as a head-mounted display. This led to the release of the Gear VR in November of 2015. The Gear VR is designed to work with Samsung's flagship smartphones, with an integrated calibration wheel and trackpad for user interaction.

The Oculus-compatible Samsung device supports Motion-to-Photon (MTP) latency of less than 20 ms, an optimized hardware and kernel stack, and higher-resolution rendering for a 96-degree field of view on the first three models, widened to 101 degrees for the SM-R323 and beyond.

2018 – Magic Leap

Magic Leap is one of many unreleased HMDs, but, as it has the backing of Google to the tune of $2 billion and promises an AR experience far beyond that of the HoloLens, the system deserves to be mentioned, even though there is little to write about beyond some proof of concept videos.



This look at the scientific and technical history of virtual reality prepares us to build new VR experiences. In the next six chapters, we will provide tutorials on building four VR solutions. Each project is presented as a series of steps to illustrate the process through to completion. However, we want to stress that these projects are only a start; use them as a springboard for your own creativity.

About the Authors

  • Charles Palmer

    Charles Palmer is an associate professor and executive director at Harrisburg University. He oversees the design and development of new and emerging technologies, chairs the undergraduate Interactive Media program, and advises students on applied projects in AR/VR, game development, mobile computing, web design, social media, and gamification. He is also a happy husband, father, award-winning web designer, international speaker, and 3D printing enthusiast.

  • John Williamson

John Williamson has worked in VR since 1995. As a producer/designer, he has shipped over three dozen games (America's Army, Hawken, SAW, and Spec Ops) in nearly every genre (RTS, FPS, arcade, simulation, survival horror) on nearly every platform (iOS, Android, Wii, PlayStation, Xbox, web, PC, and VR). He is also an award-winning filmmaker and has taught game design at DigiPen and Harrisburg University. He now works in VR, creating immersive, high-consequence training applications for the US Air Force, Army, and NASA.
