
You're reading from  The Immersive Metaverse Playbook for Business Leaders

Product type: Book
Published in: Nov 2023
Publisher: Packt
ISBN-13: 9781837632848
Edition: 1st Edition
Authors (2):
Irena Cronin

Irena Cronin is the SVP of product for DADOS technology, which involves making an app for the Apple Vision Pro that offers data analytics and visualization. She is also the CEO of Infinite Retina, which provides research to help companies develop and implement AI, AR, and other new technologies for their businesses. Prior to this, she worked for several years as an equity research analyst and gained extensive experience in evaluating both public and private companies. Cronin has a joint MBA/MA from the University of Southern California and an MS with distinction in management and systems from New York University. She graduated with a BA from the University of Pennsylvania with a major in economics (summa cum laude).

Robert Scoble

Robert Scoble has coauthored four books on technology innovation, each written a decade before the technology in question went fully mainstream. He has interviewed thousands of entrepreneurs in the tech industry and has long kept his social media audiences up to date on what is happening inside the world of tech and the innovations it keeps producing. Robert currently tracks the AI industry and is the host of a new video show, Unaligned, where he interviews entrepreneurs from the thousands of AI companies he tracks as head of strategy for Infinite Retina.


What is the Metaverse?

The Metaverse promises to improve communications in four areas: speed, ease, precision, and audience reach. So, what is this Metaverse?

Here is writer Matthew Ball’s definition, which has gained some traction in the public eye:

The Metaverse is a massively scaled and interoperable network of real-time rendered 3D virtual worlds and environments which can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with a continuity of data, such as identity, history, entitlements, objects, communications, and payments.

Does this explain why the Metaverse is an improvement on other means of communication? Kind of…let’s unpack Ball’s definition:

  • Massively scaled and interoperable network

    This means that the Metaverse can handle a large number of people and their activities at the same time and that people can easily roam and operate between different environments and apps.

  • Real-time rendered 3D virtual worlds and environments

    Real-time rendered here means that images are dynamically produced and updated in near real time so that a person can interact or move around in a virtual environment without lag.

  • Experienced synchronously and persistently

    Synchronously means that people’s interactions in the Metaverse happen simultaneously. Persistently means that the environment, and whatever has been updated within it, remains in its current state unless and until it is actively changed.

  • Effectively unlimited number of users with an individual sense of presence, and with a continuity of data, such as identity, history, entitlements, objects, communications, and payments

    An individual sense of presence means that a person in the Metaverse experiences it almost as if they were there in a psychological sense.

My definition

The Metaverse allows many people to simultaneously come together and interact in real-time 3D virtual environments or worlds with the capability of retaining distinct and persistent digital identities, including the trail of each individual’s activities.

With my definition, how the Metaverse can provide improvements to communication is much clearer. Improvements to speed, ease, precision, and audience reach are all there.

Here is a breakdown:

  • Speed: Simultaneously come together and interact in real time speaks for itself.
  • Ease: Real-time and retaining distinct and persistent digital identities

    The ease of communication in real time is intuitively understood. Retaining distinct and persistent digital identities allows a person to return to the Metaverse and interact more easily each time because that person’s data is continuously being stored for that person’s use.

  • Precision: Real-time 3D virtual environments or worlds and retaining distinct and persistent digital identities

    Real-time 3D virtual environments or worlds bring about precision by providing real-time capability and by allowing people to view objects more realistically since they are in 3D. This could be immensely helpful in business scenarios, such as when a product is demoed or shown in 3D in the Metaverse.
  • Audience reach: Allows many people speaks for itself.

The definition of the Metaverse differs from its fictional version. The term was coined by the author Neal Stephenson in his 1992 novel Snow Crash. In that novel, there was only one Metaverse, and people used VR headsets to go online and enter it to escape their dystopian reality. More recently, and inherent in the definition, the idea that there will be many Metaverse manifestations or platforms has gained traction and is seen as the most viable outcome, given business limitations, profit motives, and antitrust concerns. Additionally, the Metaverse is envisioned to be accessed with both AR headsets and glasses and VR headsets.

In addition to what was covered in this chapter under the The fourth paradigm – spatial computing subheading, in-depth detail on AR and VR, including relevant companies and their products, can be found in Chapters 2 and 3, respectively, and Metaverse-supporting technologies are covered in Part 2, Key Technologies that Power the Metaverse. As an introduction, what follows is a short overview of the technologies generally needed to create, manage, and/or experience the Metaverse, including blockchain and other decentralized system technologies (more detail on these can be found in Chapter 6, The Different Types of Computing Technologies).

Basic technical needs

The following technologies are summarized here: game engines, other design software, AI, CV, payment processing systems, UIs, cloud computing, application programming interfaces (APIs), tracking and capture technologies, VR headsets, and AR headsets and glasses.

Game engines

Game engines are software frameworks that provide developers with the tools to build video games more efficiently. These tools typically include things such as a rendering engine for graphics, a physics engine for simulating realistic movements, and support for input, audio, and networking. Some game engines also offer additional features such as level editors and animation tools. Game engines are designed to be flexible and reusable so that developers can build a variety of different types of games with them.

In addition to video games, game engines are also used for other types of interactive applications, such as the following:

  • VR and AR experiences
  • Simulation and training software
  • Interactive architectural and product visualizations
  • Educational software
  • GUI applications

Game engines are well suited to these types of applications because they provide a high-performance environment for rendering 3D graphics and handling user interaction.

Some well-known game engine companies include the following:

  • Unity Technologies: Unity is a cross-platform game engine that is widely used for building 2D and 3D games, as well as other interactive content. It is known for its ease of use and flexibility, and it has a large community of developers who contribute to its development.
  • Epic Games: Epic Games is the company behind Unreal Engine, a powerful game engine used for building high-quality 3D games. Unreal Engine is known for its advanced graphics capabilities and is used by many AAA game studios.
  • Crytek: Crytek developed CryEngine, a game engine that is used for building 3D games. CryEngine is known for its advanced graphics and its support for VR.
  • Open 3D Engine (O3DE): O3DE is a free and open source 3D game engine developed under the Open 3D Foundation, a project of the Linux Foundation. The engine’s initial version was derived from an updated version of the Amazon Lumberyard engine, which was developed and contributed by Amazon Games.
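The division of labor a game engine provides (input handling, physics simulation at a fixed timestep, and rendering) can be sketched in a few lines of Python. Everything here, the MiniEngine class and its method names, is illustrative only and does not correspond to any real engine’s API:

```python
class MiniEngine:
    """A toy fixed-timestep loop showing how an engine separates input,
    physics, and rendering. All names are illustrative, not a real API."""

    def __init__(self, timestep=1.0 / 60.0):
        self.timestep = timestep      # physics advances at a fixed 60 Hz
        self.position = 0.0           # one-dimensional "world" state
        self.velocity = 1.0           # metres per second
        self.frames = []              # stands in for rendered frames

    def handle_input(self):
        pass  # a real engine would poll controllers or a keyboard here

    def update_physics(self, dt):
        self.position += self.velocity * dt  # integrate motion forward

    def render(self):
        # a real engine would draw the scene; we just record the state
        self.frames.append(round(self.position, 4))

    def run(self, num_frames):
        for _ in range(num_frames):   # the classic input/update/render loop
            self.handle_input()
            self.update_physics(self.timestep)
            self.render()

engine = MiniEngine()
engine.run(num_frames=6)              # simulate 0.1 s of game time
print(len(engine.frames), round(engine.position, 3))  # -> 6 0.1
```

Real engines run the same cycle, but with the physics step decoupled from the (variable) rendering rate so that simulation stays deterministic.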

Other design software – in-production 3D software and avatar creation

3D modeling software is a type of software that allows users to create three-dimensional digital models of objects or environments. These models can be used for a variety of purposes, such as creating 3D art, visualizing architectural plans, and designing products. Some common features of 3D modeling software include the ability to create and manipulate 3D shapes, apply textures and materials, and add lighting and other effects. There are many different 3D modeling software programs available, ranging from professional tools used by artists and designers to more beginner-friendly options that are accessible to hobbyists and students. Some examples of 3D modeling software include the following:

  • Autodesk 3ds Max: 3ds Max is a professional 3D modeling, animation, and rendering software that is widely used in the film, television, and gaming industries.
  • Autodesk Maya: Maya is a professional 3D modeling, animation, and rendering software that is used in a variety of industries, including film, television, and games.
  • Houdini: Houdini is a 3D animation and visual effects software that is used in the film and television industry to create complex effects and simulations.
  • Blender: Blender is a free, open source 3D modeling and animation software that is used in a variety of industries, including film, television, and games.
  • Cinema 4D: Cinema 4D is a professional 3D modeling, animation, and rendering software that is used in the film, television, and games industries.

Avatar creation software is software that allows users to create custom avatars or digital representations of themselves that could be used in the Metaverse. Some avatar creation software allows users to create avatars by selecting from a range of predefined options, such as hairstyles, facial features, and clothing. Other software allows users to create more detailed avatars by importing and manipulating 3D models or by using tools to sculpt and shape the avatar’s appearance. Some avatar creation software allows users to create a realistic 3D model of their own face or body, while others offer a more stylized or cartoonish approach.

AI

AI has the potential to be used in a variety of applications within the Metaverse. The type of AI that is most promising for the Metaverse is generative AI (genAI).

GenAI refers to a type of AI that is able to generate new content or ideas based on a set of input data or rules. This can be achieved using techniques such as machine learning (ML), neural networks (NNs), and evolutionary algorithms.

Examples of genAI models include the GPT series (which powers ChatGPT), DALL-E 2, Stable Diffusion, Midjourney, Meta’s Make-A-Video, and Google’s DreamFusion (a text-to-3D generator, still at the research stage).

In the context of the Metaverse, genAI could be used in a variety of ways, such as the following:

  • Generating virtual objects: GenAI could be used to create a wide range of virtual objects and assets for use in the Metaverse, such as buildings, furniture, vehicles, or clothing.
  • Designing virtual environments: GenAI could be used to design and generate virtual environments and landscapes for the Metaverse, such as cities, forests, or planets.
  • Generating virtual events: GenAI could be used to create and schedule virtual events and experiences within the Metaverse, such as concerts, festivals, or sports games.
  • Creating virtual characters: GenAI could be used to design and generate virtual characters or avatars for use in the Metaverse, such as non-player characters (NPCs) or virtual assistants that can converse with users naturally.
  • Generating virtual content: GenAI could be used to create a variety of virtual content for the Metaverse, such as music, videos, or games.
  • Personalization: GenAI could be used to create personalized experiences within the Metaverse. For example, an AI system could analyze a user’s preferences and generate customized virtual spaces or events that match their interests.
  • Automation: GenAI could be used to automate tasks and processes within the Metaverse, such as managing virtual assets or handling transactions.
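Production genAI systems are large neural networks, but the core idea of generating new content from patterns in input data can be illustrated with a far simpler generative model: a word-level Markov chain. This toy sketch is not how GPT-style models work internally; it only demonstrates the generative principle of sampling what plausibly comes next:

```python
import random

def build_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Walk the chain, sampling a learned successor at each step."""
    rng = random.Random(seed)  # fixed seed keeps the output reproducible
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:     # dead end: no observed successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = ("the metaverse is a network of virtual worlds "
          "the metaverse is persistent and the worlds are shared")
model = build_model(corpus)
print(generate(model, "the", 6))  # a new sequence stitched from learned pairs
```

Swapping the word-pair table for a neural network trained on billions of documents is, very loosely, the jump from this sketch to modern genAI.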

CV

Computer vision (CV) is a field of study that focuses on enabling computers to interpret and understand visual information from the world, such as images and videos. It combines computer science, electrical engineering, and AI to develop algorithms, models, and systems that can recognize and interpret visual data and make decisions based on it.

Several technologies are used in CV, including the following:

  • Image processing: Techniques for improving the quality of images, such as noise reduction and image enhancement, which make it easier for CV algorithms to interpret them
  • Feature extraction: Algorithms for identifying and extracting key features from images, such as edges, corners, and textures, which are used to identify objects and understand the scene
  • Deep learning (DL): Using NNs, specifically convolutional NNs (CNNs) that can learn to recognize patterns in images and videos
  • ML: Techniques for training models to recognize patterns in data, such as supervised learning (SL), unsupervised learning (UL), and reinforcement learning (RL)
  • Object detection: Algorithms that detect objects of a given class in images or videos
  • Semantic segmentation: Algorithms that assign a class label to each pixel in an image
  • Motion analysis and object tracking: Algorithms that can track the movement of objects in a scene over time
  • 3D CV: Techniques for creating 3D models of scenes and objects from 2D images, such as structure from motion and stereo vision
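As a concrete taste of feature extraction, the classic Sobel operator estimates horizontal and vertical intensity gradients; large gradient magnitudes mark edges. Here is a minimal pure-Python version run on a tiny synthetic image (real CV code would use an optimized library, but the arithmetic is the same):

```python
def sobel_edges(image):
    """Gradient magnitude via the Sobel operator for the interior pixels
    of a 2D grayscale image given as a list of lists."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(image), len(image[0])
    edges = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for dy in range(3):                    # convolve both kernels
                for dx in range(3):
                    pixel = image[y + dy - 1][x + dx - 1]
                    gx += gx_k[dy][dx] * pixel
                    gy += gy_k[dy][dx] * pixel
            edges[y][x] = (gx * gx + gy * gy) ** 0.5
    return edges

# A 5x5 image: dark left columns, bright right columns (a vertical edge)
img = [[0, 0, 255, 255, 255] for _ in range(5)]
edges = sobel_edges(img)
print(edges[2])  # strongest responses straddle the dark/bright boundary
```

Edge maps like this feed the higher-level steps above: detected edges and corners become the features from which objects and scenes are recognized.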

Specifically, for the Metaverse, CV can be used in many ways, such as for the following:

  • User identity and tracking: CV algorithms can be used to identify and track users within the Metaverse, and to recognize and respond to their gestures and movements. This could be used to create more immersive and interactive experiences and to enable new types of social interactions.
  • Environment and object recognition: CV algorithms can be used to recognize and understand the environment and objects within the Metaverse, and to respond to them in a realistic and believable way. This could be used to create more realistic virtual worlds and to enable new types of interactions with virtual objects.
  • Spatial mapping and navigation: CV can be used to map and understand the spatial layout of the Metaverse, and to enable users to navigate through it. This could be used to create more intuitive and user-friendly Metaverse experiences and to enable new types of virtual travel.
  • Human-computer interaction (HCI): CV can be used to enable more natural and intuitive forms of interaction with virtual worlds, such as hand and body tracking, facial recognition, and speech recognition.
  • AR: AR technology can be integrated with CV to overlay virtual elements onto the real world, providing more immersive and interactive experiences.

Overall, CV has a lot of potential to enable new and exciting experiences within the Metaverse, by enabling computers to understand and interpret visual information and to respond to it in real time.

Payment processing systems

Traditional payment systems can be used in the Metaverse in a few different ways. Here are some examples:

  • In-game currencies and microtransactions: Many Metaverse platforms allow users to purchase virtual currencies or items that can be used within a virtual world. These transactions can be processed using traditional payment systems such as credit cards, debit cards, and digital wallets.
  • Virtual real estate: Some Metaverse platforms allow users to purchase virtual real estate or other virtual assets, which can be treated as a form of investment. These transactions can be processed using traditional payment systems, and in some cases, virtual assets may be traded on secondary markets.
  • Subscriptions: Access to certain areas or services within the Metaverse may require a subscription. Traditional payment systems can be used to process these payments on a recurring basis.
  • Virtual goods and services: Virtual goods such as clothes, skins, and other accessories for avatars can be sold for real money. Similarly, virtual services such as tutoring or guiding in a virtual world can also be sold with traditional payment systems.

Next is a diagram of how a traditional payment processing system works. When a customer orders an item on a website using a credit or debit card, the payment goes through a gateway and a verification process before it goes from the card-issuing bank to the merchant:

Figure 1.6 – Traditional payment processing system

While traditional payment systems can be used to facilitate transactions in the Metaverse, the Metaverse itself may introduce new and innovative forms of payment as well. For example, virtual currencies and blockchain technology could become increasingly important in the Metaverse economy.

The next diagram shows how a transaction done on a blockchain typically works. Someone initiates a transaction, which is then broadcast to a network of computers called nodes. Next comes validation; once the transaction passes, it is grouped into a data block. This block is in turn added to a chain of existing blocks, a “blockchain.” The updated blockchain is then broadcast to the nodes, whereupon the transaction is entered into a decentralized digital ledger:

Figure 1.7 – Blockchain payment flow
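The blockchain flow just described can be sketched in a few lines: each block commits to the hash of its predecessor, so tampering with any recorded transaction invalidates every later link. This is a toy illustration only (no networking, consensus, or mining), using just Python’s standard library:

```python
import hashlib
import json

def block_hash(block):
    """SHA-256 over a canonical JSON encoding of the block's contents."""
    payload = {k: block[k] for k in ("transactions", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    """A validated batch of transactions becomes a block that commits
    to the hash of the previous block, forming the chain."""
    block = {"transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    """Each block's stored hash must match its contents, and each block
    must point at its predecessor's hash."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block([], prev_hash="0" * 64)]        # genesis block
chain.append(make_block([{"from": "alice", "to": "bob", "amount": 5}],
                        prev_hash=chain[-1]["hash"]))
print(chain_is_valid(chain))                        # -> True

chain[0]["transactions"].append("tampered entry")   # rewrite history...
print(chain_is_valid(chain))                        # -> False
```

The ledger’s tamper evidence comes entirely from this chaining of hashes; the decentralized part comes from every node holding and checking its own copy.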

UI

A UI is the point of interaction between a user and a computer or other device. It refers to the way in which a user interacts with and controls the device, and it includes visual elements, such as buttons and icons, as well as non-visual elements, such as audio and haptic feedback. The goal of a UI is to make the device easy to use and understand, by providing a clear and consistent way for the user to interact with the device, and by providing clear and useful feedback to the user.

Some issues related to a UI for the Metaverse include the following:

  • Navigation: In a virtual world, users must be able to easily move around and explore the environment. This can be challenging to implement in a way that feels intuitive and natural.
  • Interaction: Users must be able to interact with virtual objects and other users in ways that are familiar and easy to understand.
  • Identity: Users must be able to express their identity in a way that is unique and meaningful in a virtual world, while also protecting their privacy.
  • Performance: The Metaverse should be able to perform well on a wide range of devices and networks while providing a smooth and responsive experience.
  • Scale: The Metaverse must be able to handle a large number of users and a wide range of activities while maintaining stability and security.
  • Accessibility: The UI for the Metaverse should be accessible to people with disabilities, the elderly, and non-technical users.
  • Safety: A virtual world should have safety features to protect users from harassment, bullying, or other forms of abuse.
  • Usability: The interface should be easy to use and understand, with minimal training required.

Cloud computing

Cloud computing plays an important role in the development and deployment of the Metaverse in the following ways:

  • Hosting: The Metaverse relies on servers to host a virtual world and all of its components, such as 3D models, textures, and animations. Cloud computing providers, such as Amazon Web Services (AWS), Azure, and Google Cloud Platform (GCP), offer a wide range of server resources that can be used to host the Metaverse.
  • Scalability: The Metaverse is expected to grow rapidly in terms of the number of users and the amount of content that is created. Cloud computing allows the Metaverse to scale up and down as needed, by allocating more or fewer resources as needed.
  • Security: Cloud computing providers offer a wide range of security features—such as encryption, authentication, and access control—that can be used to protect the Metaverse and its users.
  • Analytics and intelligence: Cloud computing services can provide advanced analytics and AI capabilities that can be used to understand user behavior, optimize performance, and improve the overall user experience.
  • Infrastructure as a Service (IaaS): Instead of building and managing the hardware and software infrastructure for the Metaverse, developers and companies can use a cloud service provider’s (CSP’s) infrastructure to build and deploy their own Metaverse experiences.
  • Platform as a Service (PaaS): Some companies offer a ready-made platform for building Metaverse experiences. Developers can build and run their experiences on that platform.

Overall, cloud computing is an important enabler for the development and deployment of the Metaverse. It allows developers and companies to focus on creating the best possible user experience, without having to worry about the underlying infrastructure and scaling concerns.

APIs

An API is a set of tools, protocols, and standards for building and integrating software applications. APIs allow different software applications to communicate and exchange data with one another. They provide a way for different pieces of software to talk to each other, and for different systems to share data and functionality.

APIs are a key technology used in the Metaverse for the following reasons:

  • Interoperability: Metaverse experiences are built by different creators and companies, and APIs allow them to communicate and interact with each other seamlessly. This enables users to move between different parts of the Metaverse without interruption and for different experiences to interact with one another in meaningful ways.
  • Data sharing: APIs can be used to share data between different parts of the Metaverse, such as user information, inventory, and transactions. This enables a more cohesive and integrated user experience across the Metaverse.
  • Third-party integration: APIs can be used to integrate external services, such as payments, identity management, and analytics, into the Metaverse. This enables Metaverse developers to leverage the capabilities of these services to enhance their experiences.
  • Development and automation: APIs can be used to automate certain functions and tasks, such as creating new assets or managing inventory, as well as to create tools for developers to work more efficiently.
  • Extending functionality: APIs can enable developers to access and extend the functionality of different components of the Metaverse, such as avatars, physics engines, or scripting engines.
  • Analytics and monitoring: APIs can be used to gather data on the usage, behavior, and performance of the Metaverse, which can be used for monitoring and improving Metaverse experiences.

APIs are a powerful technology that can be used to connect different parts of the Metaverse, enabling a more seamless and integrated user experience and creating opportunities for innovation and growth.
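As a minimal illustration of data sharing through an API, here two hypothetical components, an inventory service and a wardrobe renderer, exchange avatar data through an agreed JSON contract. All names (avatar IDs, items, functions) are invented for the example; on a real Metaverse platform the service would sit behind an HTTP endpoint:

```python
import json

# A hypothetical inventory service's data store. A plain function in
# place of an HTTP endpoint keeps the focus on the shared JSON contract.
INVENTORY = {"avatar-42": [{"item": "jacket", "rarity": "rare"}]}

def get_inventory(avatar_id):
    """API handler: return an avatar's items as a JSON string."""
    items = INVENTORY.get(avatar_id, [])
    return json.dumps({"avatar_id": avatar_id, "items": items})

def render_wardrobe(api_response):
    """A second, independent application consumes the same contract."""
    data = json.loads(api_response)
    return [f"{entry['rarity']} {entry['item']}" for entry in data["items"]]

print(render_wardrobe(get_inventory("avatar-42")))  # -> ['rare jacket']
```

Because both sides agree only on the JSON shape, either component can be rewritten or hosted elsewhere without breaking the other, which is the interoperability point made above.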

Tracking and capture technologies

Tracking and capture technologies are a group of tools and methods used to measure and record various aspects of the physical world and human behavior. These technologies are used to capture and track different types of data, such as movement, position, and expression, in order to provide a more realistic and immersive experience.

Here are some examples of tracking and capture technologies used in the Metaverse:

  • Head-mounted displays (HMDs), AR glasses, and hand controllers: These devices are used to track the position and movement of a user’s head and hands in a virtual world, allowing for a more natural and intuitive experience.
  • Motion capture: This technology is used to track the movement and posture of a user’s body, using sensors or cameras. The data is then used to control the movement of an avatar in a virtual world, providing a more realistic representation of the user.
  • Facial tracking and expression: This technology tracks the movements of the user’s face and uses this data to control the expression of the avatar in a virtual world.
  • Voice recognition: This technology is used to capture and interpret a user’s voice, allowing for natural speech-based interactions in a virtual world.
  • Eye tracking: This technology is used to track a user’s gaze, allowing for more realistic interactions with virtual objects and other users.
  • Haptic feedback: This technology is used to provide tactile feedback to the user, allowing them to feel like they are physically interacting with objects in a virtual world.

These technologies help create a more realistic and immersive experience for users in the Metaverse, allowing them to interact with virtual objects and other users in natural and intuitive ways. Additionally, the captured data can be used to improve performance, analyze user behavior, and even personalize the experience for the user. Not covered here are brain-computer interfaces (BCIs) and the capture of neuromotor impulses for tracking purposes; these technologies are likely at least a few decades away and, as such, are out of the scope of this book, which is meant to help people within this decade.
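Raw tracking samples from sensors and cameras are noisy, so tracking pipelines filter them before driving an avatar. A moving average is the simplest such filter; the sketch below, with made-up hand positions, stands in for the far more sophisticated filtering real systems use:

```python
def smooth(samples, window=3):
    """Moving average over (x, y, z) position samples: a stand-in for the
    filtering real tracking pipelines apply to noisy sensor data."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]  # trailing window
        out.append(tuple(sum(axis) / len(chunk) for axis in zip(*chunk)))
    return out

# Jittery hand positions (metres) from a hypothetical controller
raw = [(0.0, 1.0, 0.0), (0.2, 1.1, 0.0), (0.1, 0.9, 0.0), (0.3, 1.0, 0.0)]
for sample in smooth(raw):
    print(tuple(round(v, 3) for v in sample))
```

The trade-off is latency: a wider window gives steadier motion but makes the avatar lag the user’s real hand, which is why production systems favor predictive filters over plain averages.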

VR and MR headsets

VR and MR (mixed reality, which combines VR and AR) headsets are key technologies used in the Metaverse:

Figure 1.8 – The popular Oculus Quest VR headset being used by a child at school (source: Robert Scoble)

They provide the following:

  • Immersive experience: VR headsets provide users with a fully immersive experience by blocking out the real world, while MR headsets blend virtual content with it; both create a sense of presence in a virtual environment.
  • Interaction: They can be used in conjunction with hand-held controllers, or even with hand and finger-tracking, to allow users to interact with virtual objects and other users in natural and intuitive ways.
  • Exploration: They allow users to explore virtual worlds in a more natural and intuitive way, providing a sense of freedom and presence in virtual space.
  • Social interaction: They can be used to socially interact in the Metaverse, allowing users to communicate with others in real time.
  • Training and education: They can be used to provide a safe and immersive environment for training and education.
  • Remote collaboration: They can be used to facilitate remote collaboration in fields such as architecture, design, and engineering.
  • Treatment: They have been used in the treatment of psychological disorders such as post-traumatic stress disorder (PTSD) and phobias, as well as in physical therapy and pain management.

Overall, VR and MR headsets are important technologies for viewing immersive and interactive experiences in the Metaverse, providing users with a sense of presence and realism in a virtual world.

Apple Vision Pro

On June 5, 2023, Apple announced its highly anticipated MR headset, which took over 7 years and a purported $40 billion to produce. This headset, the Apple Vision Pro, is important validation that the immersive industry is a viable one that is here to stay for the long term. By extension, it also validates the Metaverse.

More about the Apple Vision Pro and other VR and MR headsets can be found in Chapter 3, Where is Virtual Reality Heading?

AR headsets and glasses

AR headsets and glasses are technologies that can be used in the Metaverse, providing the following:

  • Enhanced reality: AR headsets and glasses superimpose virtual objects and information onto the real world, rather than completely blocking it out, providing a more enriched version of the real world that can be utilized to enhance various activities.
  • Navigation and information: They can be used to provide navigation and information in the Metaverse, such as showing users where to go, providing information about nearby virtual objects, and displaying notifications or messages.
  • Interaction: They can be paired with hand controllers or other input devices, to allow users to interact with virtual objects and other users in natural and intuitive ways.
  • Remote collaboration: They can be used to connect users remotely and collaborate on different tasks and projects, and they can work together in the same virtual space, even if they are in different physical locations.
  • Training and education: They can be used to create interactive and immersive training simulations that can enhance the learning experience and increase retention of the material.
  • Industrial and commercial uses: They can be used in a variety of industries and commercial settings, such as providing instructions and guidance for maintenance and repair, or for helping with customer service and sales.
  • Maintenance and repair: They can be used to assist in maintenance and repair tasks, by superimposing virtual instructions, information, and even guidance over the real world to help technicians, mechanics, and engineers.
  • Gaming: They can enhance the gaming experience by superimposing virtual characters, objects, and information over the real world, creating a more immersive and interactive gaming experience.

Figure 1.9 – The original Magic Leap AR headset (source: Robert Scoble)

Overall, AR headsets and glasses are technologies that can enhance the Metaverse experience by providing users with virtual objects and information that can interact with the real world, allowing for new possibilities for interaction, navigation, information, collaboration, and more.

More about AR headsets and glasses can be found in Chapter 2, Augmented Reality Status Quo.
