The Immersive Metaverse Playbook for Business Leaders: A guide to strategic decision-making and implementation in the metaverse for improved products and services

By Irena Cronin and Robert Scoble

Paperback | Nov 2023 | 458 pages | 1st Edition

  • eBook: NZ$36.99 (list price NZ$52.99)
  • Paperback: NZ$65.99
  • Subscription: Free Trial

What do you get with Print?

  • Instant access to your digital eBook copy whilst your Print order is shipped
  • Paperback book shipped to your preferred address
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - Read whenever, wherever and however you want
  • AI Assistant (beta) to help accelerate your learning

The Immersive Metaverse Playbook for Business Leaders

The What and Why of the Metaverse

The term Metaverse has become widely known within a very short period of time. But what does it really refer to, and why should anyone be interested in it? Part of the answer is in explaining which relevant technologies came before the Metaverse and how the Metaverse improves on them. This is addressed in this chapter, along with a fuller explanation of what the Metaverse is and why we should care about it.

In this chapter, we’re going to cover the following main topics:

  • Origins of the Metaverse
  • What is the Metaverse?
  • Why is the Metaverse so important?

Origins of the Metaverse

All of a sudden, we have something called the Metaverse that is in the media and being talked about by seemingly everyone. The broad consensus is that it will be revolutionary for business and society. However, much of the conversation centers on trying to pin down what the Metaverse actually is, because it is not here today. Before defining the Metaverse, it helps to understand its origins; from there, what it is and why it is so eagerly anticipated becomes clearer.

Human beings want to communicate with each other in person and, when that is not possible, in ways that still convey their thoughts and, in many instances, their emotions. Whether it was writing on animal bones, stone, clay, papyrus, or, more recently, paper, or speaking on the telephone, humans have always found some way to communicate. The reasons to communicate range from the personal, such as catching up with close and not-so-close friends and family, to the non-personal and professional: complaining to a co-op board, appraising a work of art, negotiating to buy a house or building, buying clothes and food, applauding or disparaging a certain politician, giving instructions to an employee, and so on. The motivations to communicate are many.

Apart from the method itself, several major aspects of communication have changed between several thousand years ago and today: speed, ease, precision, and audience reach.

The most obvious change is the increased speed with which communication can happen. Speed has increased even for communicating in person. From coming together on foot or horseback to traveling by chariot, carriage, car, plane, or helicopter, the opportunities for in-person communication have multiplied, and with them the speed of making that communication happen. And that is just in-person communication. For communication at a distance, speed has increased tremendously: from messengers and postmen delivering letters on foot, by horse, and later by car and carrier plane, to telegrams, landline phone calls, email, cellphones, and videoconferencing.

With increased speed comes greater ease of communication, and that ease has encouraged more people to communicate more often. More frequent communication, in turn, allows for more precision of communicated intent. When letters were the dominant mode of remote communication, a receiver who misconstrued what the writer intended might simply stop corresponding, or, at best, a reply had to be written and sent, and so on. The ease of faster communication allows misconstrued messages to be corrected quickly, and this matters when communication can reach as many people as is digitally possible.

More precise, easier, and faster communication, together with the capability of large audience reach, is where we are today. Social media counts as a mode of communication, but its current formats do not allow for much flexibility or personalization. Outside of social media, the increased speed, ease, precision, and audience reach of current communications have made people more efficient and productive in both their personal and professional lives. Although it is commonly thought that the original motivation behind the Metaverse was to foster interoperability among different computer games, the current manifestation of the imagined Metaverse grew out of a need for improved and enhanced communication capabilities. And this improved and enhanced communication has the benefit of multiplying business opportunities. Use cases that exemplify what can be done in the Metaverse come later in this book, in Part 3.

To better understand how the Metaverse came about and its place in technology, it’s helpful to think of it as part of a paradigm shift; in this case, the fourth paradigm shift— spatial computing.

The Metaverse – part of the fourth paradigm

A technological paradigm shift is a change in the underlying principles that shape the development and use of technology in a society. A classic technological paradigm shift is the shift from horse and buggy to the automobile. Four technological paradigm shifts have been recognized in computing.

The first paradigm – the personal computer arrives

The shift from mainframe computers to personal computers (PCs) is considered the first technological paradigm shift. Mainframe computers were large, expensive, and complex machines that were primarily used by businesses, government agencies, and other organizations. They were typically housed in dedicated computer rooms and operated by trained technicians.

The IAS machine (also known as the Institute for Advanced Study computer) was an early computer built between 1946 and 1951 at the Institute for Advanced Study (IAS) in Princeton, New Jersey. It was one of the first electronic stored-program computers to be built and was used continually and productively until the late 1950s for a wide range of research projects, including pioneering work in numerical weather prediction.

In the 1970s, computer rooms were typically large, specialized spaces that housed mainframe computers and their associated equipment, such as the very large chip arrays from IBM's Advanced Computer Systems (ACS) project shown in Figure 1.1. These computers were much larger and more expensive than the PCs that became popular in the 1980s, and they required dedicated space with specialized cooling and electrical systems to operate. The computer room was usually a secure area that was accessible only to authorized personnel, and it was often monitored by technicians who were responsible for maintaining the computer equipment:

Figure 1.1 – A section of IBM’s 1968-era very large ACS circuit board with a 10 x 10 array of chip packages that were used to power one computer (source: Robert Scoble)

The Apple 1 was a PC released in 1976 by Apple Computer, Inc. It was a small, relatively inexpensive machine aimed at individuals and small groups in home or small office settings, it was one of the first PCs on the market, and users assembled the system themselves. The Apple 1 was powered by a MOS Technology 6502 microprocessor and had 4 KB of RAM, which could be expanded to 8 or 48 KB. It used a cassette tape to store data and programs, and it had a simple command-line interface for users to input commands.

Figure 1.2 – Steve Wozniak, co-founder of Apple Computer, stands with the Apple II that he helped develop, which is now in the Computer History Museum (source: Robert Scoble)

The development and widespread adoption of PCs represented a paradigm shift in the way that people used computers. Before the development of PCs, computers were large, expensive machines that were used primarily by large organizations, such as businesses, universities, and government agencies, to support the computing needs of hundreds or thousands of users. These computers were operated by specialized personnel and were typically accessed remotely through terminals or other devices.

In contrast, PCs are smaller, more affordable, and easier to use than mainframe computers. They can be used by individuals and small businesses and do not require specialized training to operate. The development of the microprocessor and the PC revolutionized the way people interacted with computers and made it possible for people to use computers for a wide range of tasks, from word processing and spreadsheet creation to internet browsing and gaming. The development of PCs was a key factor in the growth of the digital economy.

The second paradigm – graphical user interfaces

A graphical user interface (GUI) is a type of UI that allows users to interact with electronic devices through graphical icons and visual indicators, rather than text-based commands. GUIs are designed to make it easier and more intuitive for users to access and use computer programs and other electronic devices. They use visual elements, such as icons, menus, and buttons, to represent different options and functions, which users can access using a pointing device, such as a mouse or a touchpad.

The concept of a GUI was first introduced in the 1970s, but it was not until the 1980s that GUIs became widely adopted. The first GUI was developed at Xerox Palo Alto Research Center (PARC) in the 1970s, and it was used on the Xerox Alto, one of the first PCs. The Xerox Alto was the first computer to use a mouse-based input system, which made it possible to use a GUI to navigate and interact with the computer.

The first widely available PC to use a GUI was the Apple Macintosh, introduced in 1984, which helped to popularize the use of GUIs in PCs. In the following years, other companies, such as Microsoft, introduced their own GUI-based operating systems, and the use of GUIs became widespread in the PC market. Today, GUIs are the standard interface for most PCs and are widely used in a variety of electronic devices.

GUIs represented a paradigm shift in the way that people interact with computers because they made it much easier and more intuitive for users to access and use computer programs. Prior to the development of GUIs, computers used command-line interfaces, which required users to input commands using a keyboard. This was a time-consuming and error-prone process, and it was difficult for people who were not familiar with computers to learn how to use them.

The adoption of GUIs had a significant impact on the way that people use computers and has contributed to the widespread adoption of PCs. GUIs made it possible for people with little or no computer experience to use computers with ease, which has had a profound impact on many aspects of society, including education, business, and communication.

The third paradigm – mobile

The first mobile phones were developed in the late 1940s and 1950s, but they were large and expensive and were used by only a small number of people, such as wealthy individuals and businesses. The first commercially available handheld mobile phone was the Motorola DynaTAC 8000X, released in 1983. Over time, mobile phones became smaller, less expensive, and more widely available, and their use became more widespread.

The LG Prada (also known as the LG KE850) was a mobile phone released by LG Electronics in May 2007. It was one of the first phones to feature a touchscreen display and was widely considered to be a fashionable and high-end device.

The first iPhone, on the other hand, was released by Apple in June 2007. It was a revolutionary device that introduced a new type of UI based on a multi-touch screen and established the smartphone as a new category of device. The iPhone also had a number of features that set it apart from other mobile phones at the time, such as a high-resolution display, a digital camera, and the ability to access the internet and run a wide range of apps.

Overall, the LG Prada was an important early touchscreen phone, but the iPhone was a more significant and influential device that set the stage for the modern smartphone market:

Figure 1.3 – The first iPhone versus the Nokia N97; the first iPhone was released in June 2007 and the Nokia N97 was announced in December 2008 (source: Robert Scoble)

Apple is also widely credited with ending Nokia's dominance in mobile phones. Nokia was considered the mobile phone leader before the iPhone came out, yet it miscalculated the importance of the iPhone's innovations, assumed it would not need to do much to stay ahead, and went into a steady decline in the market.

The mobile phone has become a technological paradigm because it has fundamentally changed the way that people communicate and access information. Before the widespread adoption of mobile phones, people had to be physically present in a specific location to make phone calls or access information. With the advent of mobile phones, people are able to communicate and access information from anywhere at any time. This has had a profound impact on society and has led to the development of new industries and business models. Mobile phones have also had a major impact on the way that people interact with each other and with the world around them, and they have become an essential part of daily life for many people.

The fourth paradigm – spatial computing

Spatial computing refers to the use of technology to create an immersive, 3D digital environment that interacts with the physical world. It is a multidisciplinary field that combines computer science, engineering, design, and other areas to create an interactive experience that goes beyond traditional 2D screens. Spatial computing includes any technology that would be used to move about in a virtual or augmented 3D world. This includes virtual reality (VR), augmented reality (AR), mixed reality (MR), artificial intelligence (AI), computer vision (CV), and sensor technology, among others.

Spatial computing is considered the fourth paradigm because it represents a new way of interacting with technology that goes beyond traditional 2D screens and input methods. Applications of spatial computing include gaming, education, design, and industrial training, and it has emerging uses in many other industries such as healthcare, retail, and entertainment.

In 1987, Jaron Lanier coined the term VR. Lanier was a founder of VPL Research, a company that made early commercial VR headsets and wired gloves. There had been earlier attempts to make headsets that were either purely experimental or commercial failures, such as Morton Heilig's Telesphere Mask:

Figure 1.4 – Morton Heilig’s Telesphere Mask, a head-mounted display device patented in 1960 that commercially failed (source: United States Patent and Trademark Office (USPTO))

Others, such as a patent Apple filed in 2008 for a VR headset and a remote controller, described a product that was never produced. In 2012, the company Oculus VR was founded, and its VR headset, the Oculus Rift, became commercially available in 2016. Two years earlier, in 2014, Facebook had bought Oculus VR and started on the journey of creating more VR headset models. HTC and a couple of other players joined Oculus in creating competing VR headsets:

Figure 1.5 – A patent filed in 2008 by Apple for a VR headset and a remote controller that would use an iPhone’s screen as the headset’s primary display; the headset was never commercially made (source: USPTO)

The first functional AR headset was made in 1980 by Steve Mann and was called the EyeTap, a helmet-mounted device that displayed virtual information in front of the wearer's eye. Early AR headsets were not widely adopted due to limitations in technology and high cost. In the 2010s, advancements in technology, such as the development of smartphones and improved displays, led to a resurgence of interest in AR and the introduction of more advanced and affordable AR headsets, such as the Microsoft HoloLens and the Magic Leap One.

Spatial computing has many potential benefits, some of which include the following:

  • Immersive experience: Spatial computing allows for a more immersive and engaging experience for users, as it creates a 3D digital environment that interacts with the physical world. This allows for a more natural and intuitive way to interact with information and technology.
  • Enhanced productivity: Spatial computing can be used to create more efficient and effective ways of working, such as VR and AR tools for industrial training, design, and education. It can also improve remote collaboration by creating shared virtual spaces.
  • Improved accessibility: Spatial computing can be used to create more accessible experiences for users with disabilities, such as those who are visually impaired or have difficulty with fine motor skills.
  • New opportunities in various industries: Spatial computing has potential use cases in various industries such as healthcare, retail, and entertainment. For example, in healthcare, it can be used for training and surgeries, in retail for virtual shopping, and in entertainment for games and movies.
  • Increased convenience: Spatial computing can make it more convenient for users to access and interact with information, such as overlaying virtual instructions on real-world objects for repair or assembly.
  • Data visualization: Spatial computing can be used to create 3D visualizations of complex data, making it easier to understand and analyze.

Spatial computing is a key enabler for the Metaverse, providing technology that allows for the creation of immersive, 3D digital environments that can be used for socializing, entertainment, work, and many other use cases. Now that we have glanced through the history of the technology that led up to the Metaverse, let’s understand what the Metaverse actually is.

What is the Metaverse?

What the Metaverse promises to bring to communications is an improvement in speed, ease, precision, and audience reach. So, what is this Metaverse?

Here is writer Matthew Ball’s definition, which has gained some traction in the public eye:

The Metaverse is a massively scaled and interoperable network of real-time rendered 3D virtual worlds and environments which can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with a continuity of data, such as identity, history, entitlements, objects, communications, and payments.

Does this explain why the Metaverse is an improvement on other means of communication? Kind of…let’s unpack Ball’s definition:

  • Massively scaled and interoperable network

    This means that the Metaverse can handle a large number of people and their activities at the same time and that people can easily roam and operate between different environments and apps.

  • Real-time rendered 3D virtual worlds and environments

    Real-time rendered here means that images are dynamically produced and updated in near real time so that a person can interact or move around in a virtual environment without lag.

  • Experienced synchronously and persistently

    Synchronously means that people's interactions in the Metaverse happen simultaneously. Persistently means that whatever state the environment is in, including whatever has been updated in it, remains in that state unless and until the environment is actively changed.

  • Effectively unlimited number of users with an individual sense of presence, and with a continuity of data, such as identity, history, entitlements, objects, communications, and payments

    An individual sense of presence means that a person in the Metaverse experiences it almost as if they were there in a psychological sense.

My definition

The Metaverse allows many people to simultaneously come together and interact in real-time 3D virtual environments or worlds with the capability of retaining distinct and persistent digital identities, including the trail of each individual’s activities.

With my definition, how the Metaverse can provide improvements to communication is much clearer. Improvements to speed, ease, precision, and audience reach are all there.

Here is a breakdown:

  • Speed: Simultaneously come together and interact in real time speaks for itself.
  • Ease: Real-time and retaining distinct and persistent digital identities

    The ease of communication in real time is intuitively understood. Retaining distinct and persistent digital identities allows a person to return to the Metaverse and interact more easily each time because that person’s data is continuously being stored for that person’s use.

  • Precision: Real-time 3D virtual environments or worlds and retaining distinct and persistent digital identities

    Real-time 3D virtual environments or worlds bring about precision by providing real-time capability and by allowing people to view objects more realistically since they are in 3D. This could be immensely helpful in business scenarios, such as when a product is demoed or shown in 3D in the Metaverse.
  • Audience reach: Allows many people speaks for itself.

The definition of the Metaverse given here differs from its fictional version. The term Metaverse was coined by the author Neal Stephenson in his 1992 novel Snow Crash. In that novel, there was only one Metaverse, and people used VR headsets to go online and enter it to escape their dystopian reality. More recently, and inherent in the definition above, the idea that there will be many Metaverse manifestations or platforms has gained traction and is seen as the most viable outcome, given business limitations, profit motives, and antitrust concerns. Additionally, the Metaverse is envisioned as being accessed by AR headsets and glasses as well as VR headsets.

In addition to what is covered in this chapter, in this section, and under The fourth paradigm – spatial computing subheading, in-depth detail on AR and VR, including relevant companies and their products, can be found in Chapters 2 and 3, respectively, and Metaverse-supporting technologies are covered in Part 2, Key Technologies that Power the Metaverse. As an introduction, what follows is a short overview of the technologies generally needed to create, manage, and experience the Metaverse, including blockchain and other decentralized system technologies (more detail on these can be found in Chapter 6, The Different Types of Computing Technologies).

Basic technical needs

The following technologies are summarized here: game engines, other design software, AI, CV, payment processing systems, UIs, cloud computing, application programming interfaces (APIs), tracking and capture technologies, VR and MR headsets, and AR headsets and glasses.

Game engines

Game engines are software frameworks that provide developers with the tools to build video games more efficiently. These tools typically include things such as a rendering engine for graphics, a physics engine for simulating realistic movements, and support for input, audio, and networking. Some game engines also offer additional features such as level editors and animation tools. Game engines are designed to be flexible and reusable so that developers can build a variety of different types of games with them.
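
To make that division of labour concrete, here is a minimal, engine-agnostic sketch of the core loop that engines such as Unity or Unreal implement for you; the class and function names are illustrative and do not correspond to any real engine's API.

```python
import time

class Scene:
    """Holds the game objects; a real engine adds physics, audio, and networking."""
    def __init__(self):
        self.player_x = 0.0
        self.player_velocity = 0.0

def read_input():
    # A real engine polls keyboards, controllers, or XR hand tracking here.
    return {"move_right": True}

def update_physics(scene, inputs, dt):
    # The physics step advances the simulation by the elapsed time (dt).
    if inputs["move_right"]:
        scene.player_velocity = 2.0  # metres per second
    scene.player_x += scene.player_velocity * dt

def render(scene):
    # The rendering engine would draw the 3D scene; here we just print the state.
    print(f"player_x = {scene.player_x:.2f}")

def game_loop(duration_seconds=1.0, target_fps=60):
    scene = Scene()
    dt = 1.0 / target_fps
    for _ in range(int(duration_seconds * target_fps)):
        inputs = read_input()
        update_physics(scene, inputs, dt)
        render(scene)
        time.sleep(dt)  # keep the loop close to the target frame rate

if __name__ == "__main__":
    game_loop(duration_seconds=0.1)
```

The value a commercial engine adds is doing each of these steps at production quality, across platforms, so developers do not have to rebuild them from scratch.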

In addition to video games, game engines are also used for other types of interactive applications, such as the following:

  • VR and AR experiences
  • Simulation and training software
  • Interactive architectural and product visualizations
  • Educational software
  • GUI applications

Game engines are well suited to these types of applications because they provide a high-performance environment for rendering 3D graphics and handling user interaction.

Some well-known game engine companies include the following:

  • Unity Technologies: Unity is a cross-platform game engine that is widely used for building 2D and 3D games, as well as other interactive content. It is known for its ease of use and flexibility, and it has a large community of developers who contribute to its development.
  • Epic Games: Epic Games is the company behind Unreal Engine, a powerful game engine used for building high-quality 3D games. Unreal Engine is known for its advanced graphics capabilities and is used by many AAA game studios.
  • Crytek: Crytek developed CryEngine, a game engine that is used for building 3D games. CryEngine is known for its advanced graphics and its support for VR.
  • Open 3D Engine (O3DE): O3DE is a free and open source 3D game engine developed under the Open 3D Foundation, which is part of the Linux Foundation. The engine’s initial version came from an updated version of the Amazon Lumberyard engine, which was developed and contributed by Amazon Games.

Other design software – in-production 3D software and avatar creation

3D modeling software is a type of software that allows users to create three-dimensional digital models of objects or environments. These models can be used for a variety of purposes, such as creating 3D art, visualizing architectural plans, and designing products. Some common features of 3D modeling software include the ability to create and manipulate 3D shapes, apply textures and materials, and add lighting and other effects. There are many different 3D modeling software programs available, ranging from professional tools used by artists and designers to more beginner-friendly options that are accessible to hobbyists and students. Some examples of 3D modeling software include the following:

  • Autodesk 3ds Max: 3ds Max is a professional 3D modeling, animation, and rendering software that is widely used in the film, television, and gaming industries.
  • Autodesk Maya: Maya is a professional 3D modeling, animation, and rendering software that is used in a variety of industries, including film, television, and games.
  • Houdini: Houdini is a 3D animation and visual effects software that is used in the film and television industry to create complex effects and simulations.
  • Blender: Blender is a free, open source 3D modeling and animation software that is used in a variety of industries, including film, television, and games.
  • Cinema 4D: Cinema 4D is a professional 3D modeling, animation, and rendering software that is used in the film, television, and games industries.

Avatar creation software is software that allows users to create custom avatars or digital representations of themselves that could be used in the Metaverse. Some avatar creation software allows users to create avatars by selecting from a range of predefined options, such as hairstyles, facial features, and clothing. Other software allows users to create more detailed avatars by importing and manipulating 3D models or by using tools to sculpt and shape the avatar’s appearance. Some avatar creation software allows users to create a realistic 3D model of their own face or body, while others offer a more stylized or cartoonish approach.
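
As a small illustration of what a 3D model is at the file level, the sketch below writes a unit cube in the widely supported Wavefront OBJ format, which most of the packages listed above can import; the output filename is arbitrary.

```python
# Writes a unit cube as a Wavefront OBJ file: "v" lines are vertices,
# "f" lines are faces that index into the vertex list (1-based).
CUBE_VERTICES = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # bottom face corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # top face corners
]

CUBE_FACES = [  # quads, using 1-based vertex indices
    (1, 2, 3, 4),  # bottom
    (5, 6, 7, 8),  # top
    (1, 2, 6, 5),  # front
    (2, 3, 7, 6),  # right
    (3, 4, 8, 7),  # back
    (4, 1, 5, 8),  # left
]

def write_cube_obj(path="cube.obj"):
    with open(path, "w") as f:
        for x, y, z in CUBE_VERTICES:
            f.write(f"v {x} {y} {z}\n")
        for face in CUBE_FACES:
            f.write("f " + " ".join(str(i) for i in face) + "\n")

if __name__ == "__main__":
    write_cube_obj()
```

Professional tools work with far richer data (materials, rigs, animation), but every asset ultimately reduces to structured geometry like this.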

AI

AI has the potential to be used in a variety of applications within the Metaverse. The type of AI that is most promising for the Metaverse is generative AI (genAI).

GenAI refers to a type of AI that is able to generate new content or ideas based on a set of input data or rules. This can be achieved using techniques such as machine learning (ML), neural networks (NNs), and evolutionary algorithms.

Examples of genAI models include OpenAI's GPT family of large language models (which power ChatGPT), DALL-E 2, Stable Diffusion, Midjourney, Meta's Make-A-Video, and Google's DreamFusion (a text-to-3D image generator, still at the research stage).

In the context of the Metaverse, genAI could be used in a variety of ways, such as the following (a brief prompting sketch follows this list):

  • Generating virtual objects: GenAI could be used to create a wide range of virtual objects and assets for use in the Metaverse, such as buildings, furniture, vehicles, or clothing.
  • Designing virtual environments: GenAI could be used to design and generate virtual environments and landscapes for the Metaverse, such as cities, forests, or planets.
  • Generating virtual events: GenAI could be used to create and schedule virtual events and experiences within the Metaverse, such as concerts, festivals, or sports games.
  • Creating virtual characters: GenAI could be used to design and generate virtual characters or avatars for use in the Metaverse, such as non-player characters (NPCs) or virtual assistants that can converse with users naturally.
  • Generating virtual content: GenAI could be used to create a variety of virtual content for the Metaverse, such as music, videos, or games.
  • Personalization: GenAI could be used to create personalized experiences within the Metaverse. For example, an AI system could analyze a user’s preferences and generate customized virtual spaces or events that match their interests.
  • Automation: GenAI could be used to automate tasks and processes within the Metaverse, such as managing virtual assets or handling transactions.
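
To make the first of these items concrete, here is a deliberately simplified Python sketch of prompting a generative model for a virtual object specification. The generate_text function is a placeholder stub standing in for whichever genAI provider or SDK you actually use; it is not a real API.

```python
import json

def generate_text(prompt: str) -> str:
    """Placeholder for a call to a genAI provider.

    A real implementation would call that provider's SDK or REST API;
    here we return a canned response so the sketch runs on its own.
    """
    return json.dumps({
        "name": "mid-century armchair",
        "dimensions_m": {"width": 0.8, "depth": 0.85, "height": 0.95},
        "materials": ["walnut", "wool upholstery"],
        "style_tags": ["retro", "lounge"],
    })

def draft_virtual_object(description: str) -> dict:
    prompt = (
        "Return a JSON specification for a Metaverse furniture asset "
        f"matching this description: {description}. "
        "Include name, dimensions_m, materials, and style_tags."
    )
    return json.loads(generate_text(prompt))

if __name__ == "__main__":
    spec = draft_virtual_object("a comfortable 1950s-style reading chair")
    print(spec["name"], spec["dimensions_m"])
```

The point is the workflow, not the stub: a natural-language brief goes in, and a structured asset description comes out that downstream tools can turn into 3D content.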

CV

CV is a field of study that focuses on enabling computers to interpret and understand visual information from the world, such as images and videos. It involves a combination of computer science, electrical engineering, and AI to develop algorithms, models, and systems that can recognize and interpret visual data and make decisions based on that data.

Several technologies are used in CV, including the following (a short OpenCV sketch follows this list):

  • Image processing: Techniques for improving the quality of images, such as noise reduction and image enhancement, which make it easier for CV algorithms to interpret them
  • Feature extraction: Algorithms for identifying and extracting key features from images, such as edges, corners, and textures, which are used to identify objects and understand the scene
  • Deep learning (DL): Using NNs, specifically convolutional NNs (CNNs) that can learn to recognize patterns in images and videos
  • ML: Techniques for training models to recognize patterns in data, such as supervised learning (SL), unsupervised learning (UL), and reinforcement learning (RL)
  • Object detection: Algorithm to detect objects of a certain class in images or videos
  • Semantic segmentation: Algorithm that assigns a class label to each pixel in images
  • Motion analysis and object tracking: Algorithms that can track the movement of objects in a scene over time
  • 3D CV: Techniques for creating 3D models of scenes and objects from 2D images, such as structure from motion and stereo vision
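
As a taste of the image processing and feature extraction building blocks above, here is a minimal sketch using the open source OpenCV library (opencv-python); the input image path is a placeholder.

```python
import cv2  # pip install opencv-python

def detect_edges_and_corners(image_path="scene.jpg"):
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(f"Could not read {image_path}")

    # Image processing: convert to grayscale and reduce noise before analysis.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Feature extraction: edges (Canny) and corner points (Shi-Tomasi).
    edges = cv2.Canny(blurred, threshold1=100, threshold2=200)
    corners = cv2.goodFeaturesToTrack(blurred, maxCorners=100,
                                      qualityLevel=0.01, minDistance=10)

    n_corners = 0 if corners is None else len(corners)
    print(f"Edge pixels: {int((edges > 0).sum())}, corners found: {n_corners}")
    return edges, corners

if __name__ == "__main__":
    detect_edges_and_corners()
```

Edges and corners like these are the raw material that higher-level techniques, such as object detection and 3D reconstruction, build on.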

Specifically, for the Metaverse, CV can be used in many ways, such as for the following:

  • User identity and tracking: CV algorithms can be used to identify and track users within the Metaverse, and to recognize and respond to their gestures and movements. This could be used to create more immersive and interactive experiences and to enable new types of social interactions.
  • Environment and object recognition: CV algorithms can be used to recognize and understand the environment and objects within the Metaverse, and to respond to them in a realistic and believable way. This could be used to create more realistic virtual worlds and to enable new types of interactions with virtual objects.
  • Spatial mapping and navigation: CV can be used to map and understand the spatial layout of the Metaverse, and to enable users to navigate through it. This could be used to create more intuitive and user-friendly Metaverse experiences and to enable new types of virtual travel.
  • Human-computer interaction (HCI): CV can be used to enable more natural and intuitive forms of interaction with virtual worlds, such as hand and body tracking, facial recognition, and speech recognition.
  • AR: AR technology can be integrated with CV to overlay virtual elements onto the real world, providing more immersive and interactive experiences.

Overall, CV has a lot of potential to enable new and exciting experiences within the Metaverse, by enabling computers to understand and interpret visual information and to respond to it in real time.

Payment processing systems

Traditional payment systems can be used in the Metaverse in a few different ways. Here are some examples:

  • In-game currencies and microtransactions: Many Metaverse platforms allow users to purchase virtual currencies or items that can be used within a virtual world. These transactions can be processed using traditional payment systems such as credit cards, debit cards, and digital wallets.
  • Virtual real estate: Some Metaverse platforms allow users to purchase virtual real estate or other virtual assets, which can be treated as a form of investment. These transactions can be processed using traditional payment systems, and in some cases, virtual assets may be traded on secondary markets.
  • Subscriptions: Access to certain areas or services within the Metaverse may require a subscription. Traditional payment systems can be used to process these payments on a recurring basis.
  • Virtual goods and services: Virtual goods such as clothes, skins, and other accessories for avatars can be sold for real money. Similarly, virtual services such as tutoring or guiding in a virtual world can also be sold with traditional payment systems.

Next is a diagram of how a traditional payment processing system works. When a customer orders an item on a website using a credit or debit card, the payment goes through a gateway and a verification process before it goes from the card-issuing bank to the merchant:

Figure 1.6 – Traditional payment processing system

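The following sketch walks through the same steps as Figure 1.6 with simplified stub functions; no real gateway or banking API is called, and all names and the toy approval rule are illustrative only.

```python
def send_to_gateway(order):
    # A real payment gateway encrypts and forwards the card details.
    print(f"Gateway received order {order['id']} for NZ${order['amount']}")
    return {"order": order, "gateway_ref": "GW-0001"}

def verify_with_issuing_bank(gateway_request):
    # The card network asks the card-issuing bank to authorize the charge.
    approved = gateway_request["order"]["amount"] <= 500.00  # toy credit limit
    return {"approved": approved, **gateway_request}

def settle_to_merchant(authorization):
    # On approval, funds are captured and settled to the merchant's account.
    print("Payment settled to merchant" if authorization["approved"]
          else "Payment declined")

if __name__ == "__main__":
    order = {"id": "A123", "amount": 65.99, "card": "tokenized-card-ref"}
    settle_to_merchant(verify_with_issuing_bank(send_to_gateway(order)))
```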

While traditional payment systems can be used to facilitate transactions in the Metaverse, the Metaverse itself may introduce new and innovative forms of payment as well. For example, virtual currencies and blockchain technology could become increasingly important in the Metaverse economy.

The next diagram shows how a transaction on a blockchain typically works. Someone initiates a transaction, which is then broadcast to a network of computers known as nodes. The next step is validation; once the transaction passes, it is bundled into a data block. This block is in turn added to a chain of other blocks, a blockchain, which is then broadcast to the nodes, whereupon the transaction is entered into a decentralized digital ledger:

Figure 1.7 – Blockchain payment flow

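For intuition, here is a toy, single-node sketch of the block-forming step described above, using a hash chain; a real blockchain adds peer-to-peer broadcast, consensus-based validation, and digital signatures.

```python
import hashlib
import json
import time

def make_block(transactions, previous_hash):
    """Bundle validated transactions into a block linked to the previous one."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def append_block(chain, transactions):
    previous_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis case
    chain.append(make_block(transactions, previous_hash))
    return chain

if __name__ == "__main__":
    ledger = []  # in a real network, every node holds a copy of this ledger
    append_block(ledger, [{"from": "alice", "to": "bob", "amount": 10}])
    append_block(ledger, [{"from": "bob", "to": "carol", "amount": 4}])
    print(ledger[1]["previous_hash"] == ledger[0]["hash"])  # True: blocks are chained
```

Because each block stores the previous block's hash, tampering with any earlier transaction changes every hash after it, which is what makes the ledger tamper-evident.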

UI

A UI is the point of interaction between a user and a computer or other device. It refers to the way in which a user interacts with and controls the device, and it includes visual elements, such as buttons and icons, as well as non-visual elements, such as audio and haptic feedback. The goal of a UI is to make the device easy to use and understand, by providing a clear and consistent way for the user to interact with the device, and by providing clear and useful feedback to the user.

Some issues related to a UI for the Metaverse include the following:

  • Navigation: In a virtual world, users must be able to easily move around and explore the environment. This can be challenging to implement in a way that feels intuitive and natural.
  • Interaction: Users must be able to interact with virtual objects and other users in ways that are familiar and easy to understand.
  • Identity: Users must be able to express their identity in a way that is unique and meaningful in a virtual world, while also protecting their privacy.
  • Performance: The Metaverse should be able to perform well on a wide range of devices and networks while providing a smooth and responsive experience.
  • Scale: The Metaverse must be able to handle a large number of users and a wide range of activities while maintaining stability and security.
  • Accessibility: The UI for the Metaverse should be accessible to people with disabilities, the elderly, and non-technical users.
  • Safety: A virtual world should have safety features to protect users from harassment, bullying, or other forms of abuse.
  • Usability: The interface should be easy to use and understand, with minimal training required.

Cloud computing

Cloud computing plays an important role in the development and deployment of the Metaverse in the following ways:

  • Hosting: The Metaverse relies on servers to host a virtual world and all of its components, such as 3D models, textures, and animations. Cloud computing providers, such as Amazon Web Services (AWS), Azure, and Google Cloud Platform (GCP), offer a wide range of server resources that can be used to host the Metaverse.
  • Scalability: The Metaverse is expected to grow rapidly in terms of the number of users and the amount of content that is created. Cloud computing allows the Metaverse to scale up and down as needed, by allocating more or fewer resources as needed.
  • Security: Cloud computing providers offer a wide range of security features—such as encryption, authentication, and access control—that can be used to protect the Metaverse and its users.
  • Analytics and intelligence: Cloud computing services can provide advanced analytics and AI capabilities that can be used to understand user behavior, optimize performance, and improve the overall user experience.
  • Infrastructure as a Service (IaaS): Instead of building and managing the hardware and software infrastructure for the Metaverse, developers and companies can use a cloud service provider’s (CSP’s) infrastructure to build and deploy their own Metaverse experiences.
  • Platform as a Service (PaaS): Some companies offer a ready-made platform for building Metaverse experiences. Developers can build and run their experiences on that platform.

Overall, cloud computing is an important enabler for the development and deployment of the Metaverse. It allows developers and companies to focus on creating the best possible user experience, without having to worry about the underlying infrastructure and scaling concerns.
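
To make the scalability point concrete, here is a tiny sketch of the kind of scaling rule a Metaverse deployment might apply; it is expressed generically in Python rather than against any particular cloud provider's API, and the thresholds are made up.

```python
import math

def desired_server_count(concurrent_users: int,
                         users_per_server: int = 500,
                         min_servers: int = 2) -> int:
    """Scale out as load grows, but never drop below a small baseline."""
    needed = math.ceil(concurrent_users / users_per_server)
    return max(min_servers, needed)

if __name__ == "__main__":
    for users in (100, 5_000, 120_000):
        print(users, "users ->", desired_server_count(users), "servers")
```

In practice, a cloud provider's autoscaling service evaluates rules like this continuously and provisions or releases servers automatically.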

APIs

An API is a set of tools, protocols, and standards for building and integrating software applications. APIs allow different software applications to communicate and exchange data with one another. They provide a way for different pieces of software to talk to each other, and for different systems to share data and functionality.

APIs are a key technology used in the Metaverse for the following reasons:

  • Interoperability: Metaverse experiences are built by different creators and companies, and APIs allow them to communicate and interact with each other seamlessly. This enables users to move between different parts of the Metaverse without interruption and for different experiences to interact with one another in meaningful ways.
  • Data sharing: APIs can be used to share data between different parts of the Metaverse, such as user information, inventory, and transactions. This enables a more cohesive and integrated user experience across the Metaverse.
  • Third-party integration: APIs can be used to integrate external services, such as payments, identity management, and analytics, into the Metaverse. This enables Metaverse developers to leverage the capabilities of these services to enhance their experiences.
  • Development and automation: APIs can be used to automate certain functions and tasks, such as creating new assets or managing inventory, as well as to create tools for developers to work more efficiently.
  • Extending functionality: APIs can enable developers to access and extend the functionality of different components of the Metaverse, such as avatars, physics engines, or scripting engines.
  • Analytics and monitoring: APIs can be used to gather data on the usage, behavior, and performance of the Metaverse, which can be used for monitoring and improving Metaverse experiences.

APIs are a powerful technology that can be used to connect different parts of the Metaverse, enabling a more seamless and integrated user experience and creating opportunities for innovation and growth.
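
As an illustration of the data-sharing role described above, the sketch below calls a hypothetical avatar-inventory endpoint over HTTP using the Python requests library; the URL, token, and response fields are placeholders rather than any real Metaverse platform's API.

```python
import requests  # pip install requests

API_BASE = "https://api.example-metaverse.com/v1"  # placeholder URL
API_TOKEN = "YOUR_API_TOKEN"                        # placeholder credential

def get_avatar_inventory(avatar_id: str) -> list:
    """Fetch the items an avatar owns from a hypothetical inventory endpoint."""
    response = requests.get(
        f"{API_BASE}/avatars/{avatar_id}/inventory",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors instead of ignoring them
    return response.json().get("items", [])

if __name__ == "__main__":
    for item in get_avatar_inventory("avatar-123"):
        print(item)
```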

Tracking and capture technologies

Tracking and capture technologies are a group of tools and methods used to measure and record various aspects of the physical world and human behavior. These technologies are used to capture and track different types of data, such as movement, position, and expression, in order to provide a more realistic and immersive experience.

Here are some examples of tracking and capture technologies used in the Metaverse:

  • Head-mounted displays (HMDs), AR glasses, and hand controllers: These devices are used to track the position and movement of a user’s head and hands in a virtual world, allowing for a more natural and intuitive experience.
  • Motion capture: This technology is used to track the movement and posture of a user’s body, using sensors or cameras. The data is then used to control the movement of an avatar in a virtual world, providing a more realistic representation of the user.
  • Facial tracking and expression: This technology tracks the movements of the user’s face and uses this data to control the expression of the avatar in a virtual world.
  • Voice recognition: This technology is used to capture and interpret a user’s voice, allowing for natural speech-based interactions in a virtual world.
  • Eye tracking: This technology is used to track a user’s gaze, allowing for more realistic interactions with virtual objects and other users.
  • Haptic feedback: This technology is used to provide tactile feedback to the user, allowing them to feel like they are physically interacting with objects in a virtual world.

These technologies help create a more realistic and immersive experience for users in the Metaverse, allowing them to interact with virtual objects and other users in natural and intuitive ways. Additionally, the captured data can be used to improve performance, analyze user behavior, and even personalize the experience for the user. What is not covered here is brain-computer interfaces (BCIs) or capturing neuromotor impulses for tracking and capture purposes; these technologies will take at least a few decades and, as such, are out of the scope of this book, which is meant to help people within this decade.
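
To show what tracked data looks like once it reaches an application, here is a minimal sketch of a head-pose sample being applied to an avatar; the data structure is illustrative rather than any specific device SDK's format.

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    """One tracking sample: position in metres, orientation as a quaternion."""
    timestamp: float
    position: tuple[float, float, float]
    orientation: tuple[float, float, float, float]  # (x, y, z, w)

@dataclass
class Avatar:
    head_position: tuple[float, float, float] = (0.0, 0.0, 0.0)
    head_orientation: tuple[float, float, float, float] = (0.0, 0.0, 0.0, 1.0)

def apply_head_pose(avatar: Avatar, pose: HeadPose) -> None:
    # In a real client this runs every frame with data from the headset's sensors.
    avatar.head_position = pose.position
    avatar.head_orientation = pose.orientation

if __name__ == "__main__":
    avatar = Avatar()
    sample = HeadPose(timestamp=0.016, position=(0.0, 1.7, 0.0),
                      orientation=(0.0, 0.707, 0.0, 0.707))
    apply_head_pose(avatar, sample)
    print(avatar.head_position)
```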

VR and MR headsets

VR and MR (combined VR and AR) headsets are key technologies used in the Metaverse:

Figure 1.8 – The popular Oculus Quest VR headset being used by a child at school (source: Robert Scoble)

They provide the following:

  • Immersive experience: VR and MR headsets provide users with a fully immersive experience by blocking out the real world and creating a sense of presence in a virtual one.
  • Interaction: They can be used in conjunction with hand-held controllers, or even with hand and finger-tracking, to allow users to interact with virtual objects and other users in natural and intuitive ways.
  • Exploration: They allow users to explore virtual worlds in a more natural and intuitive way, providing a sense of freedom and presence in virtual space.
  • Social interaction: They can be used to socially interact in the Metaverse, allowing users to communicate with others in real time.
  • Training and education: They can be used to provide a safe and immersive environment for training and education.
  • Remote collaboration: They can be used to facilitate remote collaboration in fields such as architecture, design, and engineering.
  • Treatment: They have been used in the treatment of psychological disorders such as post-traumatic stress disorder (PTSD) and phobias, as well as in physical therapy and pain management.

Overall, VR and MR headsets are important technologies for viewing immersive and interactive experiences in the Metaverse, providing users with a sense of presence and realism in a virtual world.

Apple Vision Pro

On June 5, 2023, Apple announced its highly anticipated MR headset, which took over 7 years and a purported $40 billion to produce. This MR headset, the Apple Vision Pro (known before launch by the rumored name Reality Pro), is important validation that the immersive industry is viable and here to stay for the long term. By extension, it also validates the Metaverse.

More about the Apple Vision Pro and other VR and MR headsets can be found in Chapter 3, Where is Virtual Reality Heading?

AR headsets and glasses

AR headsets and glasses are technologies that can be used in the Metaverse, providing the following:

  • Enhanced reality: AR headsets and glasses superimpose virtual objects and information onto the real world, rather than completely blocking it out, providing a more enriched version of the real world that can be utilized to enhance various activities.
  • Navigation and information: They can be used to provide navigation and information in the Metaverse, such as showing users where to go, providing information about nearby virtual objects, and displaying notifications or messages.
  • Interaction: They can be paired with hand controllers or other input devices, to allow users to interact with virtual objects and other users in natural and intuitive ways.
  • Remote collaboration: They can be used to connect users remotely and collaborate on different tasks and projects, and they can work together in the same virtual space, even if they are in different physical locations.
  • Training and education: They can be used to create interactive and immersive training simulations that can enhance the learning experience and increase retention of the material.
  • Industrial and commercial uses: They can be used in a variety of industries and commercial settings, such as providing instructions and guidance for maintenance and repair, or for helping with customer service and sales.
  • Maintenance and repair: They can be used to assist in maintenance and repair tasks, by superimposing virtual instructions, information, and even guidance over the real world to help technicians, mechanics, and engineers.
  • Gaming: They can enhance the gaming experience by superimposing virtual characters, objects, and information over the real world, creating a more immersive and interactive gaming experience.
Figure 1.9 – The original Magic Leap AR headset (source: Robert Scoble)


Overall, AR headsets and glasses are technologies that can enhance the Metaverse experience by providing users with virtual objects and information that can interact with the real world, allowing for new possibilities for interaction, navigation, information, collaboration, and more.

More about AR headsets and glasses can be found in Chapter 2, Augmented Reality Status Quo.


Key benefits

  • Understand the metaverse and learn how augmented reality and virtual reality are integral to it
  • Get a solid understanding of core metaverse technologies
  • Become a metaverse business thought leader by learning from real-world use cases
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

“The metaverse” has become a widely known term within a very short time span. The Immersive Metaverse Playbook for Business Leaders explicitly explains what it really refers to and shows you how to plot your business road map using the metaverse. This book helps you understand the concept of the metaverse, along with the implementation of generative AI in it. You'll not only get to grips with the underlying concepts, but also take a closer look at key technologies that power the metaverse, enabling you to plan your business road map. The chapters include use cases on social interaction, work, entertainment, art, and shopping to help you make better decisions when it comes to metaverse product and service development. You’ll also explore the overall societal benefits and dangers related to issues such as privacy encroachment, technology addiction, and sluggishness. The concluding chapters discuss the future of AR and VR roles in the metaverse and the metaverse as a whole to enable you to make long-term business plans. By the end of this book, you'll be able to successfully invest, build, and market metaverse products and services that set you apart as a progressive technology leader.

Who is this book for?

If you are a C-suite technology and business executive, this book is for you. Investors, entrepreneurs, and other tech professionals will also find it beneficial. This book does not require any previous understanding of the metaverse or immersive technologies.

What you will learn

  • Get to grips with the concept of the metaverse, its origin, and its present state
  • Understand how AR and VR strategically fit into the metaverse
  • Delve into core technologies that power the metaverse
  • Dig into use cases that enable finer strategic decision-making
  • Understand the benefits and possible dangers of the metaverse
  • Plan further ahead by understanding the future of the metaverse
Estimated delivery fee (delivery to New Zealand)

Standard delivery 10 - 13 business days

NZ$20.95

Premium delivery 5 - 8 business days

NZ$74.95
(Includes tracking information)

Product Details

Publication date : Nov 30, 2023
Length: 458 pages
Edition : 1st
Language : English
ISBN-13 : 9781837632848




Frequently bought together

  • 50 Algorithms Every Programmer Should Know: NZ$73.99
  • The Immersive Metaverse Playbook for Business Leaders: NZ$65.99
  • Modern Generative AI with ChatGPT and OpenAI Models: NZ$73.99

Total: NZ$213.97

Table of Contents

21 Chapters
Part 1: The Reality of the AR/MR Metaverse
Chapter 1: The What and Why of the Metaverse
Chapter 2: Augmented Reality Status Quo
Chapter 3: Where Is Virtual Reality Heading?
Chapter 4: The Value of Using 3D Visuals to Interact
Part 2: Key Technologies That Power the Metaverse
Chapter 5: Understanding Perception Technologies
Chapter 6: The Different Types of Computing Technologies
Chapter 7: Where Are APIs Needed
Chapter 8: Making and Using 3D Models and Integrating 2D Content
Chapter 9: Understanding User Experience Design and User Interface
Part 3: Consumer and Enterprise Use Cases
Chapter 10: New Ways of Social Interaction
Chapter 11: Virtual and Onsite Work
Chapter 12: 3D and 2D Content Forms and Creation
Chapter 13: Retail Experiences
Part 4: Why Metaverse Redux?
Chapter 14: Benefits and Possible Dangers Reframed
Chapter 15: Future Vision
Index
Other Books You May Enjoy