
Unreal Engine: Game Development from A to Z

By Nitish Misra, John P. Doran, Joanna Lee
About this book
Unreal Engine technology powers hundreds of games. This Learning Path will help you create great 2D and 3D games that are distributed across multiple platforms.

The first module, Learning Unreal Engine Game Development, starts with small, simple game ideas and playable projects. It begins by showing you the basics in the context of an individual game level, and then teaches you how to add details such as actors, animation, and effects to the game. This module aims to equip you with the confidence and skills to design and build your own games using Unreal Engine 4. By the end of this module, you will be able to put your own content into practice.

After getting familiar with Unreal Engine's core concepts, it's time for you to dive into the field of game development. In the second module, Unreal Engine Game Development Cookbook, we show you how to solve development problems using Unreal Engine, which you can work through as you build your own unique project. Every recipe provides step-by-step instructions, with explanations of how these features work, plus alternative approaches and research materials so you can learn even more. You will start by building out levels for your game, followed by recipes to help you create environments, place meshes, and implement your characters. By the end of this module, you will see how to create a health bar and main menu, and then get your game ready to be deployed and published.

The final step is to create your very own game that will keep mobile users hooked. This is what you'll learn in the third module, Learning Unreal Engine Android Game Development. Once you get the hang of things, you will start developing a game, graduating from movement and character control to AI and spawning. Once you've created your application, you will learn how to port and publish your game to the Google Play Store.
With this course, you will be inspired to come up with your own great ideas for your future game development projects.
Publication date:
August 2016
Publisher
Packt
ISBN
9781787123281

 

Part 1. Module 1

Learning Unreal Engine Game Development

A step-by-step guide that paves the way for developing fantastic games with Unreal Engine 4

 

Chapter 1. An Overview of Unreal Engine

First of all, thank you for picking up this book. I am sure you are excited to learn how to make your own game. In this chapter, I will run you through the different fundamental components in a game and what Unreal Engine 4 offers to help you make your dream game.

The following topics will be covered in this chapter:

  • What is in a game?
  • The history of Unreal Engine (UE)
  • How is game development done?
  • The components of UE and its editors

What goes into a game?

When you play a game, you can probably identify what needs to go into it. Take a simple PC shooting game: when you press the left mouse button, the gun fires. You see bullets flying, hear the sound of the gun, and look around to see whether you have hit anything. If you did hit something, for example a wall, the target receives some form of damage.

As game creators, we need to learn to break down what we see in a game to figure out what the game needs. A simple breakdown, without going into too much detail: link the mouse click to the firing of the bullets, play a sound file that sounds like a gun firing, display sparks (termed a particle effect) near the barrel of the gun, and show some visible damage on the target.

Bearing this example in mind, try visualizing and breaking any game down into its fundamental components. This will greatly help you in designing and creating a game level.

There is a lot going on behind the scenes when you play a game. With Unreal Engine, the interaction of these many components has already been designed; you only need to customize it for your own game. This is the huge time saver of using an engine to create a game.

What is a game engine?

A game engine provides you with tools and programs to help you customize and build a game; it gives you a head start in making your own. Unreal Engine is one of the more popular choices on the market currently, and it is free for anyone to use for development (royalties need to be paid only if your game makes a profit; visit https://www.unrealengine.com/custom-licensing for more information). Its popularity is mainly due to its extensive customizability, multiplatform capabilities, and the ability to create high-quality AAA games. If you intend to start a career in game development, this is definitely one of the engines you should start playing with and using to build your portfolio.

The history of Unreal Engine

Before explaining what this amazingly powerful game engine can do and how it works, let us take a short trip back into the past to see how UE came about and how it has evolved into what we have today.

If you are a gamer, you are probably familiar with the Unreal game series. Do you know how the first Unreal game was made? The engineers at Epic Games built an engine to help them create the very first Unreal game. Over the years, with the development of each generation of the Unreal game series, more and more functionality was added to the engine to aid in the development of the games. This, in turn, expanded UE's capabilities and improved the engine rapidly.

In 1998, the first version of UE made it possible to mod a first-person shooter. You could replace Unreal content with your own and tweak the behavior of non-player characters (NPCs), also known as bots (characters controlled by the computer through artificial intelligence), using UnrealScript. Multiplayer online features were then added to UE through the development of Unreal Tournament, an online game. This game also added the PlayStation 2 to the list of compatible platforms, in addition to the PC and Mac.

By 2002, UE had improved by leaps and bounds, bringing it into the next generation with the development of a particle system (a system to generate effects such as fog and smoke), static mesh tools (tools to manipulate objects), a physics engine (which allows interaction between objects, such as collisions), and Matinee (a tool to create cutscenes, which are brief, non-interactive movies). These improvements supported the development of Unreal Championship and Unreal Tournament 2003. The release of Unreal Championship also added the Xbox game console to the list, with multiplayer capabilities through Xbox Live.

The development of Epic's next game, Unreal II: The Awakening, edged UE forward with an animation system and overall improvements to the existing engine. Faster Internet speeds in the early 2000s also increased the demand for multiplayer online gaming. Unreal Tournament 2004 allowed players to engage in online battles with one another, bringing vehicles, large battlefields, and improvements in online network capabilities. In 2005, the release of Unreal Championship 2 on the Xbox reinforced UE's capabilities on that console. It also introduced a very important new feature: a third-person camera, which opened up greater possibilities in the types of games that could be created with the engine.

Gears of War, one of the most well-known franchises in the video games industry, pushed Epic Games to create and release the third version of its game engine, Unreal Engine 3, in 2006.

The improved graphics engine used DirectX 9/10 to allow more realistic characters and objects to be made. The introduction of Kismet, a visual scripting system, allowed game and level designers to create gameplay logic for more engaging combat without having to delve into writing code. Support for the Xbox 360 and PlayStation 3 was added, the light controls and materials were revamped, and UE3 gained a new physics engine. Gears of War 2, released in 2008, saw progressive improvements to UE3, and Gears of War: Judgment followed in 2013.

PC online gaming was also on the radar of Epic Games' developers. In 2009, Atlas Technology was released for use in conjunction with UE, allowing massively multiplayer online games (MMOGs) to be created.

The increasing demand for mobile gaming also pushed UE3 in the direction of supporting various mobile platforms. All these advancements and technological capabilities made UE3 the most popular version of Unreal Engine, and it is still very widely used today.

UE3 dominated the market for 8 years until UE4 came along. UE4 was launched in 2014 and introduced the biggest change yet by replacing Kismet with the new concept of Blueprint. We will discuss the features of UE4 in more detail later in the chapter.

Game development

Each game studio has its own set of processes to ensure the successful launch of its game. Game production typically goes through several stages before a game is launched: in general, a preproduction/planning stage, a production stage, and a postproduction stage. Most of the time is normally spent in the production stage.

Game development is an iterative process that starts with the birth of an idea. The idea must first be tested to see whether it is actually fun for the target audience, which is done by quickly prototyping a level. Iterating on this prototype until it becomes a fully fledged game can take anywhere from weeks to months to years.

The development team takes care of this iteration process. Everyone's contribution to the game throughout the development cycle directly affects the game and its success.

Development teams loosely consist of several specialized groups: artists (2D/3D modelers, animators), cinematic creators, sound designers, game designers, and programmers.

Artists

They create all the visible objects in the game, from menu buttons to the trees in a game level. Some artists specialize in 3D modeling, while others focus on animation. Artists make the game look beautiful and realistic. They have to learn how to import their images and models, which are normally created first in other software such as 3ds Max, Maya, or MODO, into UE4. They will most likely also need to use Blueprint to create certain custom behaviors for the game.

Cinematic creators

Many cinematic experts are also trained artists. They have a special eye and the creative skills to create short movie scenes and cutscenes. The Matinee tool in UE4 is what they will be using most of the time.

Sound designers

Sound designers have an acute sense of hearing and are mostly musically trained. They work in sound labs to create custom sounds and music for the game, and they are in charge of importing sound files into UE4 to be played at suitable moments in the game. When using UE4, they will spend most of their time in the Sound Cue Editor.

Game designers

Designers determine what happens in the game and what the game will be about. In the planning stage, most of their time is spent in discussions, presentations, and documentation. In the production stage, they oversee the game prototyping process to ensure that the level is created as designed. Designers very often spend their time in the Unreal Editor, customizing and fine-tuning the level.

Programmers

They are the group that looks into the technology and software the team needs to create the game. In preproduction, they are responsible for deciding which software programs are required and capable of creating the game, and they have to ensure that the different software packages used are compatible with one another. Programmers also write the code that makes the objects created by the artists come alive according to the designers' ideas; they program the rules and functionality of the game. Some programmers are also involved in tools development and research: they are not directly involved in creating the game but instead support the production pipeline. Games with cutting-edge graphics usually have a team of researchers optimizing the rendering and making the graphics more realistic. Programmers spend most of their time writing code, typically in Visual Studio using C++. They are also able to modify and extend the features of UE4 to support the needs of the game they are developing.

The components of Unreal Engine 4

Unreal Engine is made up of several components that work together to drive the game. Its massive system of tools and editors allows you to organize your assets and manipulate them to create the gameplay for your game.

Unreal Engine components include a sound engine, physics engine, graphics engine, input and the Gameplay framework, and an online module.

The sound engine

The sound engine is responsible for the music and sounds in the game. Its integration into Unreal allows you to play various sound files to set the mood and add realism to the game. Sounds have many uses in a game: ambient sounds play constantly in the background, while sound effects, repeated when needed or played one-off, are triggered by specific events in the game.

In a forest setting, you can have a combination of birds, wind, and rustling trees and leaves as the ambient sound. These individual sounds can be combined into a forest ambience that plays softly in the background whenever the game character is in the forest. Recurring sounds, such as footsteps, can be connected to the animation of the walking movement. One-time sound effects, such as the explosion of a particular building in the city, can be linked to an event trigger in the game. In Unreal, the triggering of sounds is implemented through assets known as Sound Cues.

The physics engine

In the real world, objects are governed by the laws of physics: they collide and are set in motion according to Newton's laws of motion, and attraction between objects obeys the law of gravity and Einstein's theory of general relativity. For objects in the game world to react as they would in real life, the same behavior has to be built through programming. Unreal's physics engine makes use of the PhysX engine, developed by NVIDIA, to perform calculations for lifelike physical interactions such as collisions and fluid dynamics. Having this advanced physics engine in place allows us to concentrate on making the game instead of spending time making objects interact with the game world correctly.

The graphics engine

For an image to show up, it has to be rendered onto your display (your PC monitor, TV, or mobile device). The graphics engine is responsible for this output: it takes in information about the entire scene, such as the color, texture, geometry, and shadow of each object, the lighting, and the viewpoint, and considers the cross-interaction of the factors that affect the overall color, light, shadow, and occlusion of the objects.

The graphics engine then performs massive calculations in the background using all this information before it can output the final pixel information to the screen. The power of a graphics engine affects how realistic your scene will look. Unreal's graphics engine is capable of photorealistic quality for your game. Its ability to optimize the scene and to process the huge number of calculations needed for real-time lighting allows users to create realistic objects in the game.

This engine can be used to create games for all platforms (PC, Xbox, PlayStation, and mobile devices). It supports DirectX 11/12, OpenGL, and JavaScript/WebGL rendering.

Input and the Gameplay framework

Unreal Engine includes an input system that converts key and button presses by the player into actions performed by the in-game character. This input system can be configured through the Gameplay framework. The Gameplay framework contains the functionality to track game progress and control the rules of the game. Heads-up displays (HUDs) and user interfaces (UIs) are part of the Gameplay framework, providing feedback to the player during the course of the game. Gameplay classes such as GameMode, GameState, and PlayerState set the rules and control the state of the game. In-game characters are controlled either by players (using the PlayerController class) or by AI (using the AIController class). In-game characters, whether controlled by the player or AI, derive from a base class known as the Pawn class. The Character class is a subclass of Pawn made specifically for vertically-oriented player representations, such as a human.

With the Gameplay framework and controllers in place, you have the flexibility to fully customize the player's behavior, as shown in the following figure:

Input and the Gameplay framework

Light and shadow

Light is a powerful tool in game creation. It can be used in many ways, such as to create the mood of a scene or to focus a player's attention on objects in the game. Unreal Engine 4 provides a set of basic lights that can be easily placed in your game level: Directional Light, Point Light, Spot Light, and Sky Light.

Directional Light emits parallel beams of light, Point Light emits light like a light bulb (radially outward from a single point in all directions), Spot Light emits light outward in a conical shape, and Sky Light mimics light from the sky shining down on the objects in the level:

Light and shadow
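The practical difference between these light types shows up in the lighting calculation itself. The sketch below contrasts a directional light (one fixed direction and intensity everywhere, like the sun) with a point light (direction and intensity depend on where the shaded point is). It is illustrative only, not Unreal's implementation.

```cpp
// Illustrative 3D vector; not an Unreal type.
struct V3 { float x, y, z; };

// Directional light: parallel rays, so every surface point sees the
// same incoming direction and the same intensity.
V3 directionalLightDir(const V3& lightDir) { return lightDir; }

// Point light: intensity falls off with the square of the distance
// from the bulb to the surface point (the inverse-square law).
float pointLightIntensity(const V3& lightPos, const V3& surfacePos,
                          float power) {
    float dx = lightPos.x - surfacePos.x;
    float dy = lightPos.y - surfacePos.y;
    float dz = lightPos.z - surfacePos.z;
    float distSq = dx * dx + dy * dy + dz * dz;
    return power / distSq;  // brighter up close, dimmer far away
}
```

This distance dependence is why point and spot lights cost more to evaluate than a single directional light, and why their radius matters for performance.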

Effective light design also creates realistic shadows for your game. The types of light you choose for a level affect both the mood and the time it takes to render the scene, which in turn affects your game's frames per second. In the game world, you can have two types of shadows: static and dynamic. Static shadows can be prebaked into the scene, which makes them quick to render. Dynamic shadows change during runtime and are more expensive to render. We will learn more about lights and shadows in Chapter 4, Material and Light.

Post-process effects

Post-process effects are effects applied at the end of the rendering pipeline to improve the quality of the scene. Unreal Engine 4 provides a very good selection of post-process effects that you can add to your level to accentuate the overall scene.

It offers full-scene high dynamic range rendering (HDRR). This allows bright objects to be very bright and dark objects to be very dark, while still retaining visible detail in both. (This is NVIDIA's stated motivation for HDR rendering.)
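HDR rendering implies a tone-mapping step: the scene is computed in an unbounded range of brightness values, then compressed into the displayable [0, 1] range while preserving detail at both extremes. The sketch below shows Reinhard's classic operator as one simple way to do this; Unreal's actual tonemapper is a more sophisticated filmic curve.

```cpp
// Reinhard tone mapping: compresses an unbounded HDR luminance into
// [0, 1). Small values pass through almost unchanged, while very
// bright values approach (but never reach) 1, so highlight detail is
// compressed rather than clipped. Illustrative, not Unreal's curve.
float reinhardToneMap(float hdrLuminance) {
    return hdrLuminance / (1.0f + hdrLuminance);
}
```

Because the mapping is monotonic, two scene points that differ in brightness still differ after tone mapping, which is exactly the "very bright yet still detailed" behavior described above.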

UE4's post-process effects include anti-aliasing using Temporal Anti-Aliasing (TAA), Bloom, Color Grading, Depth of Field, Eye Adaptation, Lens Flare, Post Process Materials, Scene Fringe, Screen Space Reflection, and Vignette. Although a game is often designed with its post-process effects in mind, users are normally given the option to turn them off, because these effects consume a considerable amount of additional resources in return for better visuals.

Artificial intelligence

If you are totally new to the concept of artificial intelligence (AI), it can be thought of as intelligence created by humans to mimic real life. AI gives objects a brain of sorts: the ability to think and to make decisions on their own.

Fundamentally, AI is made up of complex rule sets that help objects make decisions and perform their designed function or behavior. In games, NPCs are given some form of AI so that players can interact with them. For example, NPCs can be given the ability to find a sweet spot from which to attack; if attacked themselves, they will run, hide, and find a better position to fight back.
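At its simplest, such a rule set is just an ordered series of condition checks over what the NPC perceives. The sketch below encodes the attack/hide/flee example as plain rules; in UE4 this kind of logic is normally authored with Behavior Trees, and all names here are illustrative.

```cpp
// The actions an NPC can take, and what it currently perceives.
// Illustrative types, not Unreal classes.
enum class NpcAction { Attack, TakeCover, Flee, Patrol };

struct NpcSenses {
    bool  seesPlayer;
    bool  underAttack;
    float health;       // 0.0 (dead) .. 1.0 (full)
};

// A rule set: earlier rules take priority over later ones, so a badly
// hurt NPC flees even though it would otherwise take cover or attack.
NpcAction decide(const NpcSenses& s) {
    if (s.underAttack && s.health < 0.3f) return NpcAction::Flee;
    if (s.underAttack)                    return NpcAction::TakeCover;
    if (s.seesPlayer)                     return NpcAction::Attack;
    return NpcAction::Patrol;
}
```

Real game AI layers more machinery on top (pathfinding, perception, behavior trees), but the core is still prioritized rules mapping a perceived situation to an action.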

Unreal Engine 4 provides good basic AI and lays the foundation for you to customize and improve the AI of the NPCs in your game. More details on how AI is designed in Unreal Engine are discussed in Chapter 5, Animation and AI.

Online and multiplatform capabilities

Unreal Engine 4 offers the ability to create games for many platforms. A game created with Unreal Engine 4 can be ported to different platforms, such as the Web, iOS, Linux, Windows, and Android, and support for the Universal Windows Platform (UWP) will soon be added as well. The engine also has an online subsystem that lets games integrate functionality available on Xbox Live, Facebook, Steam, and so on.

The sound engine

The sound engine is responsible for the music and sounds in the game. Its integration into Unreal allows you to play various sound files to set the mood and add realism to the game. Sounds are used in many ways: ambient sounds play constantly in the background, while sound effects, whether repeated as needed or one-off, are triggered by specific events in the game.

In a forest setting, you can have a combination of bird calls, wind, and the rustling of trees and leaves as the ambient sound. These individual sounds can be combined into a single forest ambience that plays softly in the background whenever the game character is in the forest. Recurring sounds, such as footsteps, can be connected to the animation of the walking movement. One-time sound effects, such as the explosion of a particular building in the city, can be linked to an event trigger in the game. In Unreal, the triggering of sounds is implemented through cues known as Sound Cues.

The physics engine

In the real world, objects are governed by the laws of physics: they collide and are set in motion according to Newton's laws of motion, and attraction between objects obeys the law of gravity and Einstein's theory of general relativity. For objects in the game world to react as they do in real life, an equivalent system has to be built through programming. The Unreal physics engine uses the PhysX engine, developed by NVIDIA, to perform the calculations for lifelike physical interactions, such as collisions and fluid dynamics. Having this advanced physics engine in place lets us concentrate on making the game instead of spending time making objects interact correctly with the game world.
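The heart of any physics engine is integrating Newton's second law (F = ma) one small timestep at a time. Unreal and PhysX do far more (collision detection, constraints, sleeping), but every simulated rigid body is ultimately stepped by something like this sketch, whose names are illustrative rather than PhysX API.

```cpp
// A one-dimensional rigid body, for illustration only.
struct Body {
    float position;
    float velocity;
    float mass;
};

// One simulation step using semi-implicit Euler integration:
// update velocity from the applied force first (a = F / m),
// then move the body using the *new* velocity. This ordering is
// more stable than naive Euler and is common in game physics.
void step(Body& b, float force, float dt) {
    float accel = force / b.mass;
    b.velocity += accel * dt;
    b.position += b.velocity * dt;
}
```

Running this in a loop with a fixed `dt` (e.g. 1/60 s) is what makes objects accelerate, fall, and coast believably between collision events.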

The graphics engine

For an image to show up on screen, it has to be rendered onto your display monitor (such as your PC/TV or mobile devices) The graphics engine is responsible for the output on your display by taking in information about the entire scene such as color, texture, geometry, the shadow of an individual object and lighting, and the viewpoint of a scene, and consider the cross-interaction of the factors that affect the overall color, light, shadow, and occlusion of the objects.

The graphics engine then undergoes massive calculations in the background using all these information before it is able to output the final pixel information to the screen. The power of a graphics engine affects how realistic your scene will look. Unreal graphics engine has the capabilities to output photorealistic qualities for your game. Its ability to optimize the scene and to process huge amount calculations for real-time lighting allows users to create realistic objects in the game.

This engine can be used to create games for all platforms (PC, Xbox, PlayStation, and mobile devices). It supports DirectX 11/12, OpenGL, and JavaScript/WebGL rendering.

Input and the Gameplay framework

Unreal Engine consists of an input system that converts key and button presses by the player into actions performed by the in-game character. This input system can be configured through the Gameplay framework. The Gameplay framework contains the functionality to track game progress and control the rules of the game. Heads-up displays (HUDs)/user interfaces (UIs) are part of the Gameplay framework to provide feedback to the player during the course of the game. Gameplay classes such as GameMode, GameState, and PlayerState set the rules and control the state of the game. The in-game characters are controlled either by players (using the PlayerController class) or AI (using AIController class). In-game characters, whether controlled by the player or AI, are part of a base class known as the Pawn class. The Character class is a subset of the Pawn class, which is specifically made for vertically-oriented player representation, for example, a human.

With the Unreal Gameplay framework and controllers in place, it allows for full customization of the player's behavior and flexibility, as shown in the following figure:

Input and the Gameplay framework

Light and shadow

Light is a powerful tool in game creation. It can be used in many ways, such as to create the mood of a scene or focus a player's attention on objects in the game. Unreal Engine 4 provides a set of basic lights that could be easily placed in your game level. They are Directional Light, Point Light, Spot Light, and Sky Light.

Directional Light emits beams of parallel lights, Point Light emits light like a light bulb (from a single point radially outward in all directions), Spot Light emits light in a conical shape outwards, and Sky Light mimics light from the sky downwards on the objects in the level:

Light and shadow

The effective design of light also creates realistic shadows for your game. By choosing the types of light in the level, you can affect both the mood and time it takes to render the scene, which in turns affect the frames per second of your game. In the game world, you can have two types of shadows: static and dynamic. Static shadows can be prebaked into the scene and, which makes them quick to render. Dynamic shadows are changed during runtime and are more expensive to render. We will learn more about lights and shadows in Chapter 4, Material and Light.

Post-process effects

Post-process effects are effects that are added at the end to improve the quality of the scene. Unreal Engine 4 provides a very good selection of post-process effects, which you can add to your level to accentuate the overall scene.

It offers full scene high dynamic range rendering (HDRR). This allows objects that are bright to be very bright and dark to be very dark, but we are still able to see details in them. (This is NVDIA's motivation for HDR rendering.)

UE4 post-process effects include Anti-Aliasing using Temporal Anti-Aliasing (TXAA), Bloom, Color Grading, Depth of Field, Eye Adaptation, Lens Flare, Post Process Materials, Scene Fringe, Screen Space Reflection, and Vignette. Although a game is often designed with the post-process effects in mind, users are normally given the option to turn them off, if desired. This is because they often consume reasonable amount of additional resources in return for better visuals.

Artificial intelligence

If you are totally new to the concept of artificial intelligence (AI), it can be thought of as intelligence created by humans to mimic real life. Humans created AI to give objects a brain, the ability to think, and make decisions on their own.

Fundamentally, AI is made up of complex rule sets that help objects make decisions and perform their designed function/behavior. In games, NPCs are given some form of AI so that players can interact with them. For example, give NPCs the ability to find a sweet spot to attack. If being attacked, they will run, hide, and find a better position to fight back.

Unreal Engine 4 provides a good basic AI and lays the foundation for you to customize and improve the AI of the NPCs in your game. More details on how AI is designed in Unreal Engine will be discussed in Chapter 5, Animation and AI.

Online and multiplatform capabilities

Unreal Engine 4 offers the ability to create game for many platforms. If you create a game using Unreal Engine 4, it is portable into different platforms, such as Web, iOS, Linux, Windows, and Android. Also, Universal Windows Platform (UWP) will soon be added as well. It also has an online subsystem to provide games the ability to integrate functionalities that are available on Xbox Live, Facebook, Steam, and so on.

The physics engine

In the real world, objects are governed by the laws of physics. Objects collide and are set in motion according to Newton's laws of motion. Attraction between objects also obeys the law of gravity and Einstein's theory of general relativity. In the game world, for objects to react similarly to real life, it has to have the same system built through programming. Unreal physics engine makes use of the PhysX engine, developed by NVIDIA, to perform calculations for lifelike physical interactions, such as collisions and fluid dynamics. The presence of this advanced physics engine in place allows us to concentrate on making the game instead of spending time making objects interact with the game world correctly.

The graphics engine

For an image to show up on screen, it has to be rendered onto your display monitor (such as your PC/TV or mobile devices) The graphics engine is responsible for the output on your display by taking in information about the entire scene such as color, texture, geometry, the shadow of an individual object and lighting, and the viewpoint of a scene, and consider the cross-interaction of the factors that affect the overall color, light, shadow, and occlusion of the objects.

The graphics engine then undergoes massive calculations in the background using all these information before it is able to output the final pixel information to the screen. The power of a graphics engine affects how realistic your scene will look. Unreal graphics engine has the capabilities to output photorealistic qualities for your game. Its ability to optimize the scene and to process huge amount calculations for real-time lighting allows users to create realistic objects in the game.

This engine can be used to create games for all platforms (PC, Xbox, PlayStation, and mobile devices). It supports DirectX 11/12, OpenGL, and JavaScript/WebGL rendering.

Input and the Gameplay framework

Unreal Engine consists of an input system that converts key and button presses by the player into actions performed by the in-game character. This input system can be configured through the Gameplay framework. The Gameplay framework contains the functionality to track game progress and control the rules of the game. Heads-up displays (HUDs)/user interfaces (UIs) are part of the Gameplay framework to provide feedback to the player during the course of the game. Gameplay classes such as GameMode, GameState, and PlayerState set the rules and control the state of the game. The in-game characters are controlled either by players (using the PlayerController class) or AI (using AIController class). In-game characters, whether controlled by the player or AI, are part of a base class known as the Pawn class. The Character class is a subset of the Pawn class, which is specifically made for vertically-oriented player representation, for example, a human.

With the Unreal Gameplay framework and controllers in place, it allows for full customization of the player's behavior and flexibility, as shown in the following figure:

Input and the Gameplay framework

Light and shadow

Light is a powerful tool in game creation. It can be used in many ways, such as to create the mood of a scene or focus a player's attention on objects in the game. Unreal Engine 4 provides a set of basic lights that could be easily placed in your game level. They are Directional Light, Point Light, Spot Light, and Sky Light.

Directional Light emits beams of parallel lights, Point Light emits light like a light bulb (from a single point radially outward in all directions), Spot Light emits light in a conical shape outwards, and Sky Light mimics light from the sky downwards on the objects in the level:

Light and shadow

The effective design of light also creates realistic shadows for your game. By choosing the types of light in the level, you can affect both the mood and time it takes to render the scene, which in turns affect the frames per second of your game. In the game world, you can have two types of shadows: static and dynamic. Static shadows can be prebaked into the scene and, which makes them quick to render. Dynamic shadows are changed during runtime and are more expensive to render. We will learn more about lights and shadows in Chapter 4, Material and Light.

Post-process effects

Post-process effects are effects that are added at the end to improve the quality of the scene. Unreal Engine 4 provides a very good selection of post-process effects, which you can add to your level to accentuate the overall scene.

It offers full scene high dynamic range rendering (HDRR). This allows objects that are bright to be very bright and dark to be very dark, but we are still able to see details in them. (This is NVDIA's motivation for HDR rendering.)

UE4 post-process effects include Anti-Aliasing using Temporal Anti-Aliasing (TXAA), Bloom, Color Grading, Depth of Field, Eye Adaptation, Lens Flare, Post Process Materials, Scene Fringe, Screen Space Reflection, and Vignette. Although a game is often designed with the post-process effects in mind, users are normally given the option to turn them off, if desired. This is because they often consume reasonable amount of additional resources in return for better visuals.

Artificial intelligence

If you are totally new to the concept of artificial intelligence (AI), it can be thought of as intelligence created by humans to mimic real life. Humans created AI to give objects a brain, the ability to think, and make decisions on their own.

Fundamentally, AI is made up of complex rule sets that help objects make decisions and perform their designed function/behavior. In games, NPCs are given some form of AI so that players can interact with them. For example, give NPCs the ability to find a sweet spot to attack. If being attacked, they will run, hide, and find a better position to fight back.

Unreal Engine 4 provides a good basic AI and lays the foundation for you to customize and improve the AI of the NPCs in your game. More details on how AI is designed in Unreal Engine will be discussed in Chapter 5, Animation and AI.

Online and multiplatform capabilities

Unreal Engine 4 offers the ability to create game for many platforms. If you create a game using Unreal Engine 4, it is portable into different platforms, such as Web, iOS, Linux, Windows, and Android. Also, Universal Windows Platform (UWP) will soon be added as well. It also has an online subsystem to provide games the ability to integrate functionalities that are available on Xbox Live, Facebook, Steam, and so on.

The graphics engine

For an image to show up on screen, it has to be rendered onto your display monitor (such as your PC/TV or mobile devices) The graphics engine is responsible for the output on your display by taking in information about the entire scene such as color, texture, geometry, the shadow of an individual object and lighting, and the viewpoint of a scene, and consider the cross-interaction of the factors that affect the overall color, light, shadow, and occlusion of the objects.

The graphics engine then undergoes massive calculations in the background using all these information before it is able to output the final pixel information to the screen. The power of a graphics engine affects how realistic your scene will look. Unreal graphics engine has the capabilities to output photorealistic qualities for your game. Its ability to optimize the scene and to process huge amount calculations for real-time lighting allows users to create realistic objects in the game.

This engine can be used to create games for all platforms (PC, Xbox, PlayStation, and mobile devices). It supports DirectX 11/12, OpenGL, and JavaScript/WebGL rendering.

Input and the Gameplay framework

Unreal Engine consists of an input system that converts key and button presses by the player into actions performed by the in-game character. This input system can be configured through the Gameplay framework. The Gameplay framework contains the functionality to track game progress and control the rules of the game. Heads-up displays (HUDs)/user interfaces (UIs) are part of the Gameplay framework to provide feedback to the player during the course of the game. Gameplay classes such as GameMode, GameState, and PlayerState set the rules and control the state of the game. The in-game characters are controlled either by players (using the PlayerController class) or AI (using AIController class). In-game characters, whether controlled by the player or AI, are part of a base class known as the Pawn class. The Character class is a subset of the Pawn class, which is specifically made for vertically-oriented player representation, for example, a human.

With the Unreal Gameplay framework and controllers in place, it allows for full customization of the player's behavior and flexibility, as shown in the following figure:

Input and the Gameplay framework

Light and shadow

Light is a powerful tool in game creation. It can be used in many ways, such as to create the mood of a scene or focus a player's attention on objects in the game. Unreal Engine 4 provides a set of basic lights that could be easily placed in your game level. They are Directional Light, Point Light, Spot Light, and Sky Light.

Directional Light emits beams of parallel lights, Point Light emits light like a light bulb (from a single point radially outward in all directions), Spot Light emits light in a conical shape outwards, and Sky Light mimics light from the sky downwards on the objects in the level:

Light and shadow

The effective design of light also creates realistic shadows for your game. By choosing the types of light in the level, you can affect both the mood and time it takes to render the scene, which in turns affect the frames per second of your game. In the game world, you can have two types of shadows: static and dynamic. Static shadows can be prebaked into the scene and, which makes them quick to render. Dynamic shadows are changed during runtime and are more expensive to render. We will learn more about lights and shadows in Chapter 4, Material and Light.

Post-process effects

Post-process effects are effects that are added at the end to improve the quality of the scene. Unreal Engine 4 provides a very good selection of post-process effects, which you can add to your level to accentuate the overall scene.

It offers full scene high dynamic range rendering (HDRR). This allows objects that are bright to be very bright and dark to be very dark, but we are still able to see details in them. (This is NVDIA's motivation for HDR rendering.)

UE4 post-process effects include Anti-Aliasing using Temporal Anti-Aliasing (TXAA), Bloom, Color Grading, Depth of Field, Eye Adaptation, Lens Flare, Post Process Materials, Scene Fringe, Screen Space Reflection, and Vignette. Although a game is often designed with the post-process effects in mind, users are normally given the option to turn them off, if desired. This is because they often consume reasonable amount of additional resources in return for better visuals.

Artificial intelligence

If you are totally new to the concept of artificial intelligence (AI), it can be thought of as intelligence created by humans to mimic real life. Humans created AI to give objects a brain, the ability to think, and make decisions on their own.

Fundamentally, AI is made up of complex rule sets that help objects make decisions and perform their designed function/behavior. In games, NPCs are given some form of AI so that players can interact with them. For example, give NPCs the ability to find a sweet spot to attack. If being attacked, they will run, hide, and find a better position to fight back.

Unreal Engine 4 provides a good basic AI and lays the foundation for you to customize and improve the AI of the NPCs in your game. More details on how AI is designed in Unreal Engine will be discussed in Chapter 5, Animation and AI.

Online and multiplatform capabilities

Unreal Engine 4 offers the ability to create game for many platforms. If you create a game using Unreal Engine 4, it is portable into different platforms, such as Web, iOS, Linux, Windows, and Android. Also, Universal Windows Platform (UWP) will soon be added as well. It also has an online subsystem to provide games the ability to integrate functionalities that are available on Xbox Live, Facebook, Steam, and so on.

Input and the Gameplay framework

Unreal Engine consists of an input system that converts key and button presses by the player into actions performed by the in-game character. This input system can be configured through the Gameplay framework. The Gameplay framework contains the functionality to track game progress and control the rules of the game. Heads-up displays (HUDs)/user interfaces (UIs) are part of the Gameplay framework to provide feedback to the player during the course of the game. Gameplay classes such as GameMode, GameState, and PlayerState set the rules and control the state of the game. The in-game characters are controlled either by players (using the PlayerController class) or AI (using AIController class). In-game characters, whether controlled by the player or AI, are part of a base class known as the Pawn class. The Character class is a subset of the Pawn class, which is specifically made for vertically-oriented player representation, for example, a human.

With the Unreal Gameplay framework and controllers in place, it allows for full customization of the player's behavior and flexibility, as shown in the following figure:

Input and the Gameplay framework

Light and shadow

Light is a powerful tool in game creation. It can be used in many ways, such as to create the mood of a scene or focus a player's attention on objects in the game. Unreal Engine 4 provides a set of basic lights that could be easily placed in your game level. They are Directional Light, Point Light, Spot Light, and Sky Light.

Directional Light emits beams of parallel lights, Point Light emits light like a light bulb (from a single point radially outward in all directions), Spot Light emits light in a conical shape outwards, and Sky Light mimics light from the sky downwards on the objects in the level:

Light and shadow

The effective design of light also creates realistic shadows for your game. By choosing the types of light in the level, you can affect both the mood and time it takes to render the scene, which in turns affect the frames per second of your game. In the game world, you can have two types of shadows: static and dynamic. Static shadows can be prebaked into the scene and, which makes them quick to render. Dynamic shadows are changed during runtime and are more expensive to render. We will learn more about lights and shadows in Chapter 4, Material and Light.

Post-process effects

Post-process effects are effects that are added at the end to improve the quality of the scene. Unreal Engine 4 provides a very good selection of post-process effects, which you can add to your level to accentuate the overall scene.

It offers full scene high dynamic range rendering (HDRR). This allows objects that are bright to be very bright and dark to be very dark, but we are still able to see details in them. (This is NVDIA's motivation for HDR rendering.)

UE4 post-process effects include Anti-Aliasing using Temporal Anti-Aliasing (TXAA), Bloom, Color Grading, Depth of Field, Eye Adaptation, Lens Flare, Post Process Materials, Scene Fringe, Screen Space Reflection, and Vignette. Although a game is often designed with the post-process effects in mind, users are normally given the option to turn them off, if desired. This is because they often consume reasonable amount of additional resources in return for better visuals.

Artificial intelligence

If you are totally new to the concept of artificial intelligence (AI), it can be thought of as intelligence created by humans to mimic real life. Humans created AI to give objects a brain, the ability to think, and make decisions on their own.

Fundamentally, AI is made up of complex rule sets that help objects make decisions and perform their designed function/behavior. In games, NPCs are given some form of AI so that players can interact with them. For example, give NPCs the ability to find a sweet spot to attack. If being attacked, they will run, hide, and find a better position to fight back.

Unreal Engine 4 provides a good basic AI and lays the foundation for you to customize and improve the AI of the NPCs in your game. More details on how AI is designed in Unreal Engine will be discussed in Chapter 5, Animation and AI.

Online and multiplatform capabilities

Unreal Engine 4 offers the ability to create game for many platforms. If you create a game using Unreal Engine 4, it is portable into different platforms, such as Web, iOS, Linux, Windows, and Android. Also, Universal Windows Platform (UWP) will soon be added as well. It also has an online subsystem to provide games the ability to integrate functionalities that are available on Xbox Live, Facebook, Steam, and so on.

Light and shadow

Light is a powerful tool in game creation. It can be used in many ways, such as to create the mood of a scene or focus a player's attention on objects in the game. Unreal Engine 4 provides a set of basic lights that could be easily placed in your game level. They are Directional Light, Point Light, Spot Light, and Sky Light.

Directional Light emits beams of parallel lights, Point Light emits light like a light bulb (from a single point radially outward in all directions), Spot Light emits light in a conical shape outwards, and Sky Light mimics light from the sky downwards on the objects in the level:

Light and shadow

The effective design of light also creates realistic shadows for your game. By choosing the types of light in the level, you can affect both the mood and time it takes to render the scene, which in turns affect the frames per second of your game. In the game world, you can have two types of shadows: static and dynamic. Static shadows can be prebaked into the scene and, which makes them quick to render. Dynamic shadows are changed during runtime and are more expensive to render. We will learn more about lights and shadows in Chapter 4, Material and Light.

Post-process effects

Post-process effects are effects that are added at the end to improve the quality of the scene. Unreal Engine 4 provides a very good selection of post-process effects, which you can add to your level to accentuate the overall scene.

It offers full scene high dynamic range rendering (HDRR). This allows objects that are bright to be very bright and dark to be very dark, but we are still able to see details in them. (This is NVDIA's motivation for HDR rendering.)

UE4 post-process effects include Anti-Aliasing using Temporal Anti-Aliasing (TXAA), Bloom, Color Grading, Depth of Field, Eye Adaptation, Lens Flare, Post Process Materials, Scene Fringe, Screen Space Reflection, and Vignette. Although a game is often designed with the post-process effects in mind, users are normally given the option to turn them off, if desired. This is because they often consume reasonable amount of additional resources in return for better visuals.

Artificial intelligence

If you are totally new to the concept of artificial intelligence (AI), it can be thought of as intelligence created by humans to mimic real life. Humans created AI to give objects a brain, the ability to think, and make decisions on their own.

Fundamentally, AI is made up of complex rule sets that help objects make decisions and perform their designed function/behavior. In games, NPCs are given some form of AI so that players can interact with them. For example, give NPCs the ability to find a sweet spot to attack. If being attacked, they will run, hide, and find a better position to fight back.

Unreal Engine 4 provides a good basic AI and lays the foundation for you to customize and improve the AI of the NPCs in your game. More details on how AI is designed in Unreal Engine will be discussed in Chapter 5, Animation and AI.

Online and multiplatform capabilities

Unreal Engine 4 offers the ability to create game for many platforms. If you create a game using Unreal Engine 4, it is portable into different platforms, such as Web, iOS, Linux, Windows, and Android. Also, Universal Windows Platform (UWP) will soon be added as well. It also has an online subsystem to provide games the ability to integrate functionalities that are available on Xbox Live, Facebook, Steam, and so on.

Post-process effects

Post-process effects are effects that are added at the end to improve the quality of the scene. Unreal Engine 4 provides a very good selection of post-process effects, which you can add to your level to accentuate the overall scene.

It offers full-scene high dynamic range rendering (HDRR). This allows bright objects to be very bright and dark objects to be very dark while details remain visible in both. (This is NVIDIA's stated motivation for HDR rendering.)
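To illustrate the idea, a simple tone-mapping curve such as the classic Reinhard operator compresses unbounded HDR luminance into the displayable range while retaining detail at both ends. This is a generic sketch of the concept; UE4's own filmic tonemapper is considerably more sophisticated:

```cpp
#include <cmath>

// Reinhard tone mapping: compresses an unbounded HDR luminance value
// into the displayable [0, 1) range. Bright values approach but never
// clip to 1.0, so they stay distinguishable, while dark values keep
// most of their detail.
double ToneMapReinhard(double hdrLuminance)
{
    return hdrLuminance / (1.0 + hdrLuminance);
}
```

For example, a luminance of 10 maps to roughly 0.909 and a luminance of 100 to roughly 0.990: both are very bright on screen, yet still distinct rather than clipped to pure white.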

UE4 post-process effects include Anti-Aliasing using Temporal Anti-Aliasing (TAA), Bloom, Color Grading, Depth of Field, Eye Adaptation, Lens Flare, Post Process Materials, Scene Fringe, Screen Space Reflection, and Vignette. Although a game is often designed with its post-process effects in mind, users are normally given the option to turn them off, because these effects trade a considerable amount of additional resources for better visuals.

Artificial intelligence

If you are totally new to the concept of artificial intelligence (AI), it can be thought of as human-created intelligence that mimics real-life behavior. AI gives objects a brain of sorts: the ability to think and make decisions on their own.

Fundamentally, AI is made up of rule sets that help objects make decisions and perform their designed function or behavior. In games, NPCs are given some form of AI so that players can interact with them. For example, NPCs can be given the ability to find a good spot from which to attack; when attacked, they can run, hide, and find a better position from which to fight back.
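As a rough sketch of what such a rule set looks like in code (the names and thresholds here are hypothetical, not UE4 API):

```cpp
// A minimal, hypothetical decision rule set for an NPC.
enum class NpcAction { Attack, TakeCover, Flee };

// health is in [0, 1]; the 0.25 threshold is an illustrative choice.
NpcAction DecideAction(float health, bool underAttack, bool hasCover)
{
    if (health < 0.25f)
        return NpcAction::Flee;       // too hurt to keep fighting
    if (underAttack && hasCover)
        return NpcAction::TakeCover;  // reposition before fighting back
    return NpcAction::Attack;         // default: press the attack
}
```

Real game AI layers many such rules (often as behavior trees, which UE4 supports) so that the combined decisions appear intelligent.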

Unreal Engine 4 provides good basic AI and lays the foundation for you to customize and improve the AI of the NPCs in your game. How AI is designed in Unreal Engine will be discussed in more detail in Chapter 5, Animation and AI.

Online and multiplatform capabilities

Unreal Engine 4 offers the ability to create games for many platforms. A game created with Unreal Engine 4 is portable to the Web, iOS, Linux, Windows, and Android, and Universal Windows Platform (UWP) support is planned. The engine also has an online subsystem that lets games integrate functionality available on Xbox Live, Facebook, Steam, and so on.

Unreal Engine and its powerful editors

After learning about the different components of Unreal Engine, it is time to look at the various editors and how they provide the actual functionality for creating a game.

Unreal Editor

Unreal Engine has a number of editors that help in the creation of a game. By default, the Unreal Editor is the startup editor for Unreal Engine. It can be considered the main editor, providing access to other subsystems, such as the Material and Blueprint subsystems.

The Unreal Editor provides a visual interface made up of viewports and windows that enable you to import, organize, edit, and add behaviors and interactions to your game assets. The other subeditors/subsystems have specialized functions that let you control the details of an asset (how it looks and how it behaves).

The Unreal Editor, together with all the subsystems, is a great tool, especially for designers. It allows physical placement of assets and gives users the ability to control gameplay variables without having to change any code.

Material Editor

Shaders and Materials give objects their unique color and texture. Unreal Engine 4 makes use of physically-based shading. This material pipeline gives artists greater control over the look and feel of an object. Physically-based shading models the relationship between light and a surface in greater detail, binding two physical attributes (microsurface detail and reflectivity) to achieve the final look of the object.

In the past, much of the final look was achieved by tweaking values in the shader/material algorithms. In Unreal Engine 4, we can achieve high-quality content by adjusting the values fed into the light and shading algorithms, which produces more consistent and predictable results. More details about Shaders and Materials are provided in Chapter 4, Material and Light. The following screenshot shows the Material Editor in UE4:

Material Editor
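One building block common to physically-based shading models is the Fresnel term, which makes surfaces more reflective at grazing angles. The Schlick approximation below is a standard illustration of the idea; it is a sketch of the general technique, not UE4's actual shader code:

```cpp
#include <cmath>

// Schlick's approximation of the Fresnel reflectance term.
// F0 is the surface's base reflectivity at normal incidence
// (about 0.04 for most dielectrics); cosTheta is the cosine of
// the angle between the view direction and the surface normal.
// Reflectance rises toward 1.0 as the view grazes the surface.
double FresnelSchlick(double cosTheta, double F0)
{
    return F0 + (1.0 - F0) * std::pow(1.0 - cosTheta, 5.0);
}
```

Viewed head-on (cosTheta = 1), the surface shows only its base reflectivity F0; viewed edge-on (cosTheta = 0), it becomes mirror-like, which matches what physically-based pipelines reproduce.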

The Cascade particle system

The Cascade particle system provides extensive capabilities to design and create particle effects. Effects such as smoke, sparks, and fire can be created by designing the size, color, and texture of each particle and specifying how groups of these particles interact to mimic real-life behavior. The following screenshot shows the Cascade particle system in UE4:

The Cascade particle system

The Persona skeletal mesh animation

The Persona animation system lets you design and control the animation of a character's skeleton, skeletal mesh, and sockets. This tool can be used to preview a character's animations and to set up blending between key frames. Physics and collision properties can also be adjusted through the Physics Asset Tool (PhAT). The following screenshot shows the Persona animation system in UE4:

The Persona skeletal mesh animation

Landscape – building large outdoor worlds and foliage

To create large outdoor spaces, Unreal Engine provides sculpting and painting tools through the Landscape system. An efficient level of detail (LOD) system and careful memory utilization allow large-scale terrain shaping. There is also a Foliage editor for applying grass, snow, and sand to the outdoor environment.
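The core idea of an LOD system can be sketched in a few lines: pick a detail level for each chunk of terrain based on distance thresholds, so that distant geometry is rendered with far fewer polygons. This is illustrative only; real engines also weigh screen-space size and add hysteresis to avoid popping:

```cpp
#include <vector>

// Pick a level of detail from ascending distance thresholds.
// Returns 0 for the closest (most detailed) level; each threshold
// crossed moves to a coarser level.
int SelectLod(double distance, const std::vector<double>& thresholds)
{
    int lod = 0;
    for (double t : thresholds)
    {
        if (distance < t)
            break;
        ++lod;
    }
    return lod;
}
```

With thresholds of, say, 100, 500, and 2000 units, terrain 50 units away renders at full detail (LOD 0) while terrain 5000 units away drops to the coarsest level (LOD 3), keeping the triangle budget roughly constant across a huge world.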

Sound Cue Editor

Sound and music are controlled via the Sound Cue Editor. They are triggered to play via cues known as Sound Cues, and this editor gives you the ability to start, stop, repeat, and fade sounds in or out. The following screenshot shows the Sound Cue Editor in UE4:

Sound Cue Editor

Matinee Editor

The Matinee Editor toolset enables the creation of in-game cut scenes and movies. These short clips can be used to introduce the start of a game level, tell a story before the game begins, or even serve as a promotional video for the game. The following screenshot shows the Matinee Editor in UE4:

Matinee Editor

The Blueprint visual scripting system

The Blueprint system is a new feature in Unreal Engine; Unreal Engine 4 is the first version of the engine to include it. For those who are familiar with Unreal Engine 3, it can be thought of as an enhanced, combined version of the Unreal scripting system, Kismet, and the Prefab functionality. The Blueprint visual scripting system lets you extend code functionality using a visual scripting language (box-like flow diagrams joined with lines). This means you do not have to write or compile code in order to create, arrange, and customize the behavior and interactions of in-game objects. It also gives nonprogrammers (artists and designers) the ability to prototype or create a level quickly and manipulate gameplay without having to tackle the challenges of game programming. A handy feature of Blueprint is that you can create variables, much as in programming, by clicking on an object and selecting Create Variable. This expands what developers can do without wrestling with complex code.

To help developers debug Blueprint scripting logic, the sequence of events and property values can be seen visually on the flow diagrams as they execute. As when troubleshooting code, breakpoints can also be set to pause a Blueprint sequence. The following screenshot shows the Level Blueprint Editor in UE4:

The Blueprint visual scripting system

Unreal programming

Access to Unreal Engine's source code gives users the freedom to create almost anything they can dream of. The functionality of the base code can be extended and customized to provide whatever the game needs. Learning how Unreal Engine works from the inside can unlock its full potential in game creation.

Unreal Engine also incorporates very useful debugging features for programmers. One of them is the Hot Reload function, which enables changes in the C++ code to be reflected immediately in the game. To facilitate quick changes in code, Unreal Engine also includes Code View. Clicking on a function of an object in the Code View category takes you directly to the relevant code in Visual Studio, where you can make changes to the object.

Versioning and source control can be set up for game projects that include code changes.

Unreal objects

Actors are the base class of all gameplay objects in Unreal. To give Actors more properties and functionality, the Actor class is extended into various more specialized classes. In terms of programming, the Actor class acts as a container that holds specialized objects called Components. The combination of the functionality of its Components gives an Actor its unique properties.

A beginner's guide to the Unreal Editor

This is a quick overview of what we can do with the Unreal Editor. We will briefly touch on how we can use the various windows in the editor to create a game.

The start menu

When starting up Unreal Engine, you will first be brought to a menu window by default. This new start menu is simple and easy to navigate. It features a large tab that allows you to select which version of the game engine you want to launch and gives a clear overview of the projects you have created. It also provides access to Marketplace, a library of game samples created by others, which you can download (some free, some paid). The menu also carries the latest updates and news from Epic to keep developers abreast of the latest developments and changes. The following screenshot shows the start menu:

The start menu

Project Browser

After launching the desired version of Unreal Engine, the Unreal Project Browser pops up. This browser gives you the option to create game levels that have been pre-customized, meaning that you have a list of generic levels to start building your own game levels from. For those who are new to game making, this feature lets you dive straight into building various types of games quickly. You can have a first-person shooter level, a third-person game setup, or a 2D/3D side-scrolling platformer level, with either Blueprint or C++ as the base template. What is so awesome about the New Project tab is that it also allows you to select your target device (PC/mobile) and target image quality, and to choose whether the Unreal starter content is included in the project. The following screenshot shows the Project Browser:

Project Browser

Content Browser

When the Unreal Editor starts, there is a default layout of various windows and panels. One of them is the Content Browser. The Content Browser is a window where you can find all the content (game assets) that you have. It categorizes your assets into different folders such as Audio, Materials, Animations, Particle Effects, and so on. This window also has an Import button, which lets you bring game assets created in other software into the game. The following screenshot shows the default location of the Content Browser (outlined in green):

Content Browser

Toolbar

The Toolbar is a customizable ribbon that provides quick access to tools and editors. The default layout includes quick access to the Blueprint and Matinee editors. Quick play and launch game buttons are also part of the standard ribbon layout; they allow you to quickly view your creation in-game. The following screenshot shows the default Toolbar:

Toolbar

Viewport

The Viewport is the window to the game world, so what you see is what is in the game. If you have created a level using one of the options provided in the New Project menu, you will notice that the camera has been adjusted according to the settings of that pre-customized level. This is the window that you will use to place objects and move them around. When you click on the Play button in the toolbar, the Viewport window comes alive and allows you to interact with the game level. The following screenshot shows the Viewport window highlighted in the editor:

Viewport

Scene Outliner

The Scene Outliner contains the list of objects placed in the scene; it shows only what is currently loaded in the scene. You can create folders and give objects customized names (to help you identify them easily). It is also a quick way to group items so that you can select them and make changes in bulk. The following screenshot shows the Scene Outliner highlighted in the editor:

Scene Outliner

Modes

The Modes window gives you the power to create and place objects in the game world. You can select the type of activity you wish to carry out: Place, Paint, Landscape, Foliage, or Geometry Editing. Place is used to put objects into the game world. Paint allows you to paint the vertices and textures of objects. Landscape and Foliage are useful tools for making large-scale natural terrain in the game. Geometry Editing provides the tools to modify and edit objects. The highlighted area in the following screenshot shows the Modes window:

Modes

Summary

In this chapter, we covered introductory content about what a game engine is, specifically Unreal Engine 4 and its history. We also talked a little about how games are developed and various roles that exist in a game company to help create different components of a game. Then, we covered the different components of Unreal Engine and how we can use these different features to help us make our game. Lastly, we covered the different editors that are available to us to help us customize each of the components of the game.

In the upcoming chapters, we'll be going into the details of the functionalities and features of Unreal Engine 4. In the next chapter, you will be exposed to some basic functions in the Unreal Editor and start making your own game level.

 

Chapter 2. Creating Your First Level

In this chapter, you will create and run a simple level with the help of step-by-step instructions. Since the objective of this book is to equip you with the skills to confidently create your own game using Unreal Engine 4, and not simply to follow a list of steps to create a fixed example, I will provide as much additional information as possible that you can use to create your own game level as we learn the basic techniques.

In this chapter, we will cover the following topics:

  • How to control views and viewports
  • How to move, scale, and rotate objects in a level
  • How to use the BSP Box brush to create the ground and a wall using the Additive mode
  • How to carve a hole in a wall using the Subtractive mode of the BSP Box brush
  • How to add a simple Directional Light to a level to mimic sunlight
  • How to spawn a player who's facing the right direction on a map using Player Start
  • How to create the sky in your map using atmospheric fog
  • How to save the map you've created and set it as the default load up map for a project
  • How to add a material to the geometries you've created so that it looks realistic
  • How to duplicate BSP Brushes to help create things quickly
  • How to add props (which are also known as static meshes) to a room
  • How to concentrate light on important parts of a map using Lightmass Importance Volume

Exploring preconfigured levels

Before we create a level, it is good to have an idea of what levels look like in Unreal Engine 4. Unreal Engine 4 can load various types of game projects, each with a default playable level, straight from the Project Browser (which pops up immediately after you launch the engine). Personally, I really like this particular new feature of Unreal Engine 4, as it gives me a quick feel for the types of presets that are available, and I can easily select something as a base for the game level I want to create.

We will create a new map using one of the preset project types as the base for our first level.

Tip

How to quickly explore different project types

I normally click on the Play button on the toolbar after a project loads with its default level. The play function puts you in the game so that you can see what has been pre-created for you in the level.

Creating a new project

In this chapter, we will use the Blueprint First Person template to create our first game project.

The steps to create a new Blueprint First Person Project are as follows:

  1. Launch Unreal Engine 4.
  2. Select the New Project tab.
  3. Select Blueprint and then First Person.
  4. Choose a name and path for the project (or leave it as the default MyProject).
  5. Click on Create Project.

    Ensure that the With Starter Content option is selected.

    Creating a new project

On creation of the project, the default example level for Blueprint First Person will load. The following screenshot shows how the default level looks:

Creating a new project

Using the preset project type with the example level, the first thing you'll probably want to do is run the level and see what the default game level contains.

Navigating the viewport

Using the loaded example level, you should get yourself familiarized with the mouse and keyboard controls in order to navigate in the viewport. You might consider bookmarking this section until you can navigate the viewport to zoom in/out or view any object from all angles easily.

Views

Here is some quick information on the different views used in 3D modeling: the example map is loaded in the Perspective view by default. Besides the Perspective view, you can switch the viewport to the Top, Side, or Front view. The option to switch between these views is in the left-hand corner of the viewport. The following screenshot shows the location of the button you press to switch views:

Views

If you wish to see more than one view concurrently, navigate to Windows | Viewports and then select any of the viewports (the default viewport is Viewport 1).

The selected viewport number will pop up. You can drag and dock this Viewport window and add it to the default Viewport 1. The following screenshot shows Viewport 1 and Viewport 2 displayed at the same time (one in the Perspective view and the other in the Top view):

Views

Control keys

Here are some of the key presses to help you move around and view objects:

In the Perspective view:

  • Left-click + drag: This moves the camera forward and backward and rotates it from left to right
  • Right-click + drag: This rotates the viewport camera
  • Left-click + right-click + drag: This moves the camera up and down

In the Orthographic (Top, Front, and Side) views:

  • Left-click + drag: This creates a marquee selection box
  • Right-click + drag: This pans the viewport camera
  • Left-click + right-click + drag: This zooms the viewport camera in and out

For those of you who are familiar with games, you can use WASD to navigate the camera in the editor too.

WASD control in the Perspective view:

  • Any mouse click + W: This moves the camera forward
  • Any mouse click + A: This moves the camera to the left
  • Any mouse click + S: This moves the camera backward
  • Any mouse click + D: This moves the camera to the right

On selection of an object:

  • W: This displays the Translation tool
  • E: This displays the Rotation tool
  • R: This displays the Scale tool
  • F: This focuses the camera on the selected object
  • Alt + Shift + drag along the x/y/z axis: This duplicates an object and moves it along that axis

Creating a level from a new blank map

Now that you are familiar with the controls, you are ready to create a map on your own. In this chapter, we will go through how to build a basic room from scratch. To create a new map for your first person game, go to File | New Level…. The following screenshot shows you how to create a new level:

Creating a level from a new blank map

There are two options when creating a new level: Default and Empty Level. Select Empty Level to create a completely blank map. The following screenshot shows you the options that are available when creating a new level:

Creating a level from a new blank map

Do not be surprised that the viewport is empty. We will add objects to the level in the next few sections. The following screenshot shows what an empty level looks like in the Perspective view:

Creating a level from a new blank map

Creating the ground using the BSP Box brush

The BSP Box brush can be used to create rectangular objects in the map. The first thing to do when creating a level is to have a ground to stand on.

Before we begin, make sure the viewport is in the Perspective view. We will use this view for most of the level creation unless otherwise specified.

Go to the Modes window, click on BSP and then click and drag Box into the viewport. This is where you can find the Box brush:

Creating the ground using the BSP Box brush

Here, a Box brush has been successfully added to the viewport:

Creating the ground using the BSP Box brush

You have now successfully created your first object in the level. Next, we will resize this box so that it can act as the ground for the level.

Select the box that was just created, and go to Details | Brush Settings. Fill in the following values for X, Y, and Z. The following screenshot shows the values that need to be set:

Creating the ground using the BSP Box brush

When you have set the values correctly, the box should look like this:

Creating the ground using the BSP Box brush

Useful tip – selecting an object easily

To help you select objects in the level more easily, you can go to World Outliner (its default location is in the top right-hand corner of the editor), and you will see a full list of all the objects in the level. Click on the name of an object to select it and its details will also be displayed. This is a very useful way to help you select objects when you have many objects in the level. The following screenshot shows how World Outliner can be used to select the Box brush (which we've just created) in the level:

Useful tip – selecting an object easily

Useful tip – changing View Mode to aid visuals

If you have difficulties seeing the box, you can change View Mode to Unlit (the button is in the viewport that's next to the Perspective button). The following screenshot shows you how to change View Mode to Unlit:

Useful tip – changing View Mode to aid visuals


Adding light to a level

To help us see the level better, it is time to learn how to illuminate the level. To mimic ambient light from the sun, we will use Directional Light for the level.

In the same way as adding a BSP Box brush, we will go to Modes Window | Lights | Directional Light. Click and drag Directional Light into the Viewport window. The following screenshot zooms in on the Modes window, showing that the Directional Light item can be created by dragging it into the viewport:

Adding light to a level

For now, let's place the light just slightly above the BSP Box brush as shown in the following screenshot:

Adding light to a level

Useful tip – positioning objects in a level

To position an object in a level, we use the Transform tool to move objects in the x, y, and z directions. Select the object and press the W key to display the Transform tool. Three arrows will appear to extrude from the object. Click and hold the red arrow to move the object along the x axis, the green arrow to move it along the y axis, and the blue arrow to move it along the z axis.

To help you position the objects more accurately, you can also switch to the Top view when moving objects in the x and y directions, the Side view for adjustments in the y and z directions, and the Front view to adjust the x and z directions.

For those of you who want precise position control, you can use Details. Select the object to display details. Go to Transform | Location. You can select Relative or World position by clicking on the arrow next to Location. Change the X, Y, and Z values to move the object with more precision.
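For readers who like to see the arithmetic, here is a minimal standalone sketch (plain Python, not the Unreal API; the function name and values are hypothetical) of how a Relative location maps to a World location:

```python
# Hypothetical sketch (not Unreal API): a Relative location is an offset from
# a parent/reference point, while a World location is absolute in the level.

def to_world(parent_world, relative):
    """World location = parent's world location + relative offset."""
    return tuple(p + r for p, r in zip(parent_world, relative))

# A lamp placed 50 units above a table that sits at (100, 200, 0) in the world:
table_world = (100.0, 200.0, 0.0)
lamp_relative = (0.0, 0.0, 50.0)
print(to_world(table_world, lamp_relative))  # (100.0, 200.0, 50.0)
```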


Adding the sky to a level

After the addition of light to the level, we will proceed to add the sky to the level. Click on Modes | Visual | Atmospheric Fog. In a similar way to adding light and adding a Box BSP, click, hold, and drag this into the viewport. We are almost ready to take a first look at what we have just created. Hang in there.

Adding the sky to a level

Adding Player Start

For every game, you need to set where the player will spawn. Go to Modes | Basic | Player Start. Click, hold, and drag Player Start into the viewport.

This screenshot shows the Modes window with Player Start:

Adding Player Start

Place Player Start in the center of the ground or slightly above it as shown in the following screenshot:

Adding Player Start

Deselect Player Start by pressing the Esc key. The light blue arrow on Player Start indicates the direction the player will face when the game starts. To adjust this direction, rotate Player Start until the light blue arrow points where you want. Take a look at the following tip on how to rotate an object.

Useful tip – rotating objects in a level

To rotate an object in a level, we use the Rotate tool to rotate objects around the x (roll), y (pitch), and z (yaw) axes. Select the object and press the E key to display the Rotate tool. Three lines with box tips will appear to extrude from the object. Click and hold the red line to rotate the object around the x axis, the green line to rotate it around the y axis, and the blue line to rotate it around the z axis.

Another way to rotate an object more accurately is by controlling its rotation through the actual rotation values found under Details. (Select the object to be rotated to display its details). In the Transform tab, go to Rotation, and set the X, Y, and Z values to rotate the object. There is an arrow next to Rotation that you can click on to select if you want to adjust the rotation values for Relative or World. When you select to rotate an object using the Relative setting, the object will rotate relative to its current position. When the object is rotated using the World setting, it will be relative to the world's position.

If you want the player controller (as shown in the preceding screenshot) to have the light blue arrow facing inwards and away from you, you will need to rotate the player controller 180 degrees around the y axis. Enter Y as 180 under the Relative setting. The player controller will be rotated in the manner shown in this screenshot:

Useful tip – rotating objects in a level
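The effect of that 180-degree rotation about the y axis can be checked with a little standalone math (a Python sketch, not Unreal code; the helper name is ours). A direction pointing along +x ends up pointing along -x:

```python
import math

# Illustrative math (not Unreal API): a rotation about the y axis by angle a
# maps (x, y, z) -> (x*cos a + z*sin a, y, -x*sin a + z*cos a).

def rotate_about_y(v, degrees):
    a = math.radians(degrees)
    x, y, z = v
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

facing = (1.0, 0.0, 0.0)          # arrow pointing along +x
flipped = rotate_about_y(facing, 180)
# flipped is (approximately) (-1, 0, 0): the arrow now points along -x
```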


Viewing a level that's been created

We are now ready to view the simple level that we have just created.

Before viewing the level, click on the Build button, as shown in the following screenshot, to build the light, materials, and so on, needed for this level. This step ensures that light is properly rendered in the level.

Viewing a level that's been created

After building the level, click on the Play button, as shown in this screenshot, to view the level:

Viewing a level that's been created

The following screenshot shows how the level looks. Move the mouse up, down, left, and right to see the level. Use W, A, S, and D to move the character around the level. To return to the editor, press ESC.

Viewing a level that's been created

Saving a level

Navigate to File | Save As… and give the map you have just created a name. In our example here, I have saved it as Chapter2Level in the …/UnrealProjects/MyProject/Content/Maps path, where MyProject is the name of the project.

Configuring a map as a start level

After saving your new map, you may want to also set this project to load this map as the default map. You can have several maps linked to this project and load them at specific points in the game. For now, we want to replace the current Example_Map with the newly created map that we have. To do so, go to Edit | Project Settings. This opens up a page with configurable values for the project. Go to Game | Maps & Modes. Refer to the following screenshot to take a look at how Maps & Modes is selected.

Look under Default Maps and change both Game Default Map and Editor Default Map to the map that you have just saved. In my case, it will be Chapter2Level. Then, close the project settings. When you start the editor and run the game the next time, your new map will be loaded by default.

Configuring a map as a start level

Adding material to the ground

Now that we have created the ground, let us make the ground look more realistic by applying a material to it.

Go to Content Browser | Content | StarterContent | Materials. Type wood into the Filters box. The following screenshot shows the walnut polished material that we want to use for the ground's material:

Adding material to the ground

Click, hold, and drag M_Wood_Floor_Walnut_Polished into the viewport area and drop it on the top surface of the ground. The resulting effect should look like this:

Adding material to the ground

Adding a wall

Now we are ready to add walls to prevent the player from falling off the map; we will use the same BSP Box brush. As we have just applied a material in the previous step, you will need to clear this material selection by clicking on anything else in Content Browser. This prevents new geometries from being created using the same material.

Similar to creating the ground, go to Modes | BSP | Box. Click, hold, and drag into the viewport. Set the dimensions of the BSP box as X = 30, Y = 620, and Z = 280. To help us view and position the wall, use the controls to rotate the viewport. You can also use the different views to help position the wall onto the ground. Here, you can see how the wall should be positioned (note that I have panned the camera to view the level from a different angle):

Adding a wall

Duplicating a wall

Now duplicate the wall by first selecting the wall created in the earlier step. Make sure the Transform tool is displayed (if not, press W once when object is selected).

Click and hold one of the axes (the x axis, in this example) while holding down Alt + Shift as you drag the current wall in the x direction. You will notice another copy of the wall moving in this direction. Release the keys when the wall is in the right position. Use the normal translation controls to position the wall as shown here:

Duplicating a wall

Creating an opening for a door

The room is now almost complete. We will learn how to carve into a BSP Box brush to create an opening for a door.

Drag a new BSP Box brush into the map: X = 370, Y = 30, and Z = 280. Position this wall to seal one side of the room as shown in the following screenshot:

Creating an opening for a door

Until now, we have been using the Additive mode (the radio button that is selected) to create BSP Box brushes. To create an opening in the wall, we will create another BSP Box brush using the Subtractive mode. Ensure that you have selected it as shown in the following screenshot. Drag and drop the BSP Box brush into the viewport in the same manner as before. As for the dimensions of this brush, we will approximate it to the size of the door, where X = 115, Y = 30, and Z = 212.

Creating an opening for a door

When the Subtractive BSP Box brush is positioned correctly, it will look something like this:

Creating an opening for a door

To help you position the Subtractive BSP Box brush, you can switch to the Front view to place the door more or less in the center. The following screenshot shows the Front view with the Subtractive BSP Box brush selected:

Creating an opening for a door

Adding materials to the walls

To make the walls look more realistic, we will apply materials to them. Go to Content Browser | Content | StarterContent | Materials. Type Wall into the Filters box. Select M_Basic_Wall and drag it onto the surface of the wall with the door. Then, we will use a different material. Type Brick into the Filters box. Select M_Brick_Clay_New to apply to the inner surface of the two other walls.

Here, you can take a look at how the level looks in the Unlit mode after applying the materials mentioned previously:

Adding materials to the walls

Build the light before running the level again to see how the level looks now.

Sealing a room

For now, let's duplicate the wall with the door to seal the room. Click on the wall, hold down Alt + Shift, and drag it across to the other side of the room. The following screenshot shows how it looks when the room is sealed:

Sealing a room

Adding props or a static mesh to the room

Let's now add some objects to the empty room. Go to Content Browser | Content | StarterContent | Props. Find SM_Lamp_Ceiling and drag it into the room.

Adding props or a static mesh to the room

As we want to use a ceiling lamp prop as a floor lamp, you will need to rotate the lamp 180 degrees about the x axis. Set X = 180 degrees using the Relative mode. The following screenshot shows the rotated lamp positioned at one end of the room. Note that I have built the light and changed the view mode to Lit. Feel free to position the lamp anywhere to see how it looks.

Adding props or a static mesh to the room

Adding Lightmass Importance Volume

Since our room only takes up a small portion of the map, we can concentrate light on a small region by adding an item known as Lightmass Importance Volume to the map. The bounding box of the Lightmass Importance Volume tells the engine where light is needed in the map, so it should encompass the entire area of the map that has objects. Drag and drop Lightmass Importance Volume into the map. Here, you can see where to find the Lightmass Importance Volume:

Adding Lightmass Importance Volume

By default, the wireframe box that's been dropped (which is the Lightmass Importance Volume) is a cube. You will need to scale it to fit your room. With the Lightmass Importance Volume selected, press R to display the Scale tool. Use the x, y, and z axes to adjust the size of the box till it fits the level. The following screenshot shows the scaling of the box using the Scale tool:

Adding Lightmass Importance Volume

After scaling and translating the box to fit the level, the Lightmass Importance Volume should look something like what is shown in the following screenshot, where the wireframe box is large enough to fit the room inside it. The size of the wireframe for the Lightmass Importance Volume can be larger than the map.

Adding Lightmass Importance Volume

Applying finishing touches to a room

Our room is almost complete. You will have noticed that the door is currently just a hole in the wall. To make it look like a door, we need to add a door frame and a door as follows:

  1. Go to Content Browser | Content | StarterContent | Props.
  2. Click and drag SM_DoorFrame into the viewport.
  3. Adjust it to fit in the wall.

When done, it should look like what is shown in the following screenshot.

I've used different views, such as top, side, and front, to adjust the frame nicely to fit the door. You can adjust Snap Sizes for some fine-tuning.

Applying finishing touches to a room

Useful tip – using the drag snap grid

To help you move objects into position more accurately, you can make use of the snap grid button at the top of the viewport as shown in the following screenshot. Turning the drag snap grid on allows you to translate objects according to the grid size you select. Click on the mesh-like symbol to toggle snap grid on/off. The numbers displayed on the right are the minimum grid sizes by which an object will move when translated.

Useful tip – using the drag snap grid

I have also noticed that a portion of the floor is not textured yet. Use the same wood texture as you did previously to make sure that the ground is fully textured using M_Wood_Floor_Walnut_Polished.

Then, click and drag SM_Door into the viewport. Rotate the door and fit it into the door frame in the same manner as shown previously. Here, you can see how the door is in place:

Useful tip – using the drag snap grid


Summary

We have learned how to navigate the viewport and set up/save a new map in a new project. We also created our first room with a door using the BSP Box brush, added materials to texture walls and floors, and learned to place static objects to enhance the look of the room. Although it is still kind of empty right now, we will continue to work on it in the next few chapters and expand on this map. In the next chapter, we will spice up the level by adding some objects that we can interact with.

 

Chapter 3. Game Objects – More and Move

We created our first room in the Unreal Editor in Chapter 2, Creating Your First Level. In this chapter, we will cover some information about the structure of objects we have used to prototype the level in Chapter 2, Creating Your First Level. This is to ensure that you have a solid foundation in some important core concepts before moving forward. Then, we will progressively introduce various concepts to make the objects move upon a player's interaction.

In this chapter, we will cover the following topics:

  • BSP Brush
  • Static Mesh
  • Texture and Materials
  • Collision
  • Volumes
  • Blueprint

BSP Brush

We used the BSP Box Brush in Chapter 2, Creating Your First Level, extensively to create the ground and the walls.

BSP Brushes are the primary building blocks for level creation in game development. They are used for quickly prototyping levels, as we did in Chapter 2, Creating Your First Level.

In Unreal, BSP Brushes come in the form of primitives (box, sphere, and so on) and also predefined/custom shapes.

Background

BSP stands for binary space partitioning. The structure of a BSP tree allows spatial information to be accessed quickly for rendering, especially in 3D scenes made up of polygons. A scene is recursively divided into two until each node of the BSP tree contains only polygons that can be rendered in arbitrary order. A scene is then rendered by traversing the BSP tree from a given node (viewpoint).
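To make the idea concrete, here is a tiny standalone sketch, not Unreal's implementation: a one-dimensional "scene" of wall positions is split recursively, and traversal relative to a viewpoint yields a farthest-first draw order (the classic painter's algorithm):

```python
# Conceptual sketch of binary space partitioning (BSP), not Unreal's
# implementation. A 1D "scene" of wall positions is split recursively;
# rendering traverses the tree farthest-first relative to a viewpoint.

class BSPNode:
    def __init__(self, split=None, front=None, back=None, items=None):
        self.split = split        # splitting coordinate (None for a leaf)
        self.front = front        # subtree with items >= split
        self.back = back          # subtree with items < split
        self.items = items or []  # leaf payload

def build(items):
    """Recursively divide the scene in two until each leaf holds one item."""
    if len(items) <= 1:
        return BSPNode(items=items)
    items = sorted(items)
    mid = len(items) // 2
    return BSPNode(split=items[mid],
                   front=build(items[mid:]),
                   back=build(items[:mid]))

def back_to_front(node, viewpoint, out):
    """Append leaf items farthest from the viewpoint first."""
    if node.split is None:
        out.extend(node.items)
        return
    if viewpoint >= node.split:   # viewer on the front side: draw back first
        back_to_front(node.back, viewpoint, out)
        back_to_front(node.front, viewpoint, out)
    else:                         # viewer on the back side: draw front first
        back_to_front(node.front, viewpoint, out)
        back_to_front(node.back, viewpoint, out)

tree = build([3, 9, 1, 7, 5])
order = []
back_to_front(tree, 10, order)
print(order)  # [1, 3, 5, 7, 9]: farthest walls drawn first for a viewer at 10
```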

Since a scene is divided using the BSP principle, placing objects in the level can be viewed as cutting into the BSP partitions in the scene. Geometry Brushes use the Constructive Solid Geometry (CSG) technique to create polygon surfaces. CSG combines simple primitives or custom shapes using Boolean operators, such as union, subtraction, and intersection, to create complex shapes in the level.

So, the CSG technique is used to create the surfaces of objects in the level, and rendering the level is based on processing these surfaces using the BSP tree. This relationship has resulted in Geometry Brushes also being known as BSP Brushes, though, more accurately, they are CSG surfaces.
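The Boolean operators can be illustrated with a small standalone sketch (not Unreal's CSG code), modeling solids as point-membership tests; the box dimensions echo the wall and door brushes from Chapter 2, and the door's placement here is ours:

```python
# Conceptual sketch of Constructive Solid Geometry (CSG), the technique
# behind additive/subtractive brushes (not Unreal's implementation).
# Solids are point-membership tests combined with Boolean operators.

def box(lo, hi):
    """Axis-aligned box: a point is inside if it lies between lo and hi."""
    return lambda p: all(l <= c <= h for l, c, h in zip(lo, p, hi))

def union(a, b):     return lambda p: a(p) or b(p)
def subtract(a, b):  return lambda p: a(p) and not b(p)
def intersect(a, b): return lambda p: a(p) and b(p)

# A wall with a door-sized opening cut out, like the Subtractive brush:
wall = box((0, 0, 0), (370, 30, 280))       # X=370, Y=30, Z=280
door = box((130, 0, 0), (245, 30, 212))     # 115 wide, 212 tall
wall_with_opening = subtract(wall, door)

print(wall_with_opening((50, 15, 100)))   # True: solid part of the wall
print(wall_with_opening((180, 15, 100)))  # False: inside the doorway
```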

Brush type

BSP Brushes can either be additive or subtractive in nature. Additive brushes are like volumes that fill up the space. Additive brushes were used for the ground and the walls in our map in Chapter 2, Creating Your First Level.

Subtractive brushes can be used to form hollow spaces. These were used to create a hole in the wall in which to place a door and its frame in Chapter 2, Creating Your First Level.

Brush solidity

For additive brushes, there are three states a brush can be in: solid, semi-solid, or non-solid.

Since subtractive brushes create empty spaces, players are allowed to move freely within them. Subtractive brushes can only be solid brushes.

Refer to the following table for comparison of their properties:

  • Solid (additive and subtractive): This blocks both players and projectiles, and creates BSP cuts to the surrounding world geometry
  • Semi-solid (additive only): This blocks both players and projectiles, but does not cause BSP cuts to the surrounding world geometry
  • Non-solid (additive only): This does not block players or projectiles, and does not cause BSP cuts to the surrounding world geometry

Static Mesh

A Static Mesh is geometry made up of polygons. Looking more closely at what a mesh is made of, it is made up of edges connecting vertices.

A Static Mesh has vertices that cannot be animated. This means that you cannot animate part of the mesh and make it move relative to the rest, but the entire mesh can be translated, rotated, and scaled. The lamp and the door that we added in Chapter 2, Creating Your First Level, are examples of Static Meshes.

A higher-resolution mesh has more polygons, and therefore more vertices, than a lower-resolution mesh. It takes more time to render but is able to provide more detail in the object.
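As a rough standalone illustration (not Unreal's mesh format), a mesh can be stored as a vertex list plus triangles that index into it; indexing lets shared vertices and edges be stored only once:

```python
# Illustrative sketch (not Unreal's mesh format): a mesh as a vertex list
# plus index triangles. Resolution grows with both counts.

# A unit quad built from two triangles sharing an edge:
vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
triangles = [(0, 1, 2), (0, 2, 3)]  # each entry indexes three vertices

def edge_count(tris):
    """Count unique edges; shared edges are stored once thanks to indexing."""
    edges = set()
    for a, b, c in tris:
        for e in ((a, b), (b, c), (c, a)):
            edges.add(tuple(sorted(e)))
    return len(edges)

print(len(vertices), len(triangles), edge_count(triangles))  # 4 2 5
```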

Static Meshes are usually first created in external software programs, such as Maya or 3ds Max, and then imported into Unreal for placement in game maps.

The door, its frame, and the lamp that we added in Chapter 2, Creating Your First Level, are Static Meshes. Notice that these objects are not simple geometric shapes.

BSP Brush versus Static Mesh

In game development, many objects in the game are Static Meshes. Why is that so? Static Meshes are considered more efficient, especially for complex objects with many vertices, as they can be cached in video memory and drawn by the computer's graphics card. So, Static Meshes are preferred when creating objects as they have better render performance, even for complex objects. However, this does not mean that BSP Brushes do not have a role in creating games.

When a BSP Brush is simple, it can still be used without seriously impacting performance. A BSP Brush can be easily created in the Unreal Editor, hence it is very useful for quick prototyping by game/level designers. Simple BSP Brushes can be created and used as temporary placeholder objects while the actual Static Mesh is being modeled by the artists. The creation of a Static Mesh takes time, even more so for a highly detailed one. We will cover a little information about the Static Mesh creation pipeline later in this chapter, so we have an idea of the amount of work that needs to be done to get a Static Mesh into the game. So, the BSP Brush is great for early gameplay testing without having to wait for all the Static Meshes to be created.

Making Static Mesh movable

Let us open our saved map that we have created in Chapter 2, Creating Your First Level, and let us first save the level as a new Chapter3Level.

  1. Go to Content Browser | Content | StarterContent | Props, and search for SM_Chair, which is a standard Static Mesh prop. Click and drag it into our map.
  2. The chair we have in the level now is unmovable. You can quickly build and run the level to check it out. To make it movable, we need to change a couple of settings under the chair's details.
  3. First, ensure SM_Chair is selected, go to the Details tab. Go to Transform | Mobility, change it from Static to Movable. Take a look at the following screenshot, which describes how to make the chair movable:
    Making Static Mesh movable
  4. Next, we want the chair to be able to respond to us. Scroll down a little in the Details tab to change the Physics settings for the chair. Go to Details | Physics. Make sure the checkbox for Simulate Physics is checked. When this checkbox is checked, the Collision setting is automatically set to PhysicsActor. The following screenshot shows the Physics settings of the chair:
    Making Static Mesh movable

Let us now build and play the level. When you walk into the chair, you will be able to push it around. Just to note, the chair is still a Static Mesh; it is simply now movable.

Materials

In Chapter 2, Creating Your First Level, we selected a walnut polished material and applied it to the ground. This changed the simple dull ground into a brown polished wood floor. Using materials, we are able to change the look and feel of the objects.

The reason for a short introduction to materials here is that it is a concept we need before we can construct a Static Mesh. We already know that we need Static Meshes in the game, and we cannot rely only on the limited selection that we have in the default map package. We will need to know how to create our own Static Meshes, and we rely heavily on Materials to give Static Meshes their look and feel.

So, when do we apply Materials while creating our custom Static Mesh? Materials are applied to the Static Mesh during its creation process outside the editor, which we will cover in a later section of this chapter. For now, let us first learn how Materials are constructed in the editor.

Creating a Material in Unreal

To fully understand the concept of a Material, we need to break it down into its fundamental components. How a surface looks is determined by many factors, including color, presence of print/pattern/designs, reflectivity, transparency, and many more. These factors combine together to give the surface its unique look.

In Unreal Engine, we are able to create our very own material by using the Material Editor. Based on the explanation given earlier, a Material is determined by many factors and all these factors combine together to give the Material its own look and feel.

Unreal Engine offers a base Material node that has a list of customizable factors, which we can use to design our Material. By assigning different values to these factors, we can come up with our very own Material. Let us take a look at what is behind the scenes in a material that we used in Chapter 2, Creating Your First Level.

Go to Content Browser | Content | Starter Content | Materials and double-click on M_Brick_Clay_New. This opens up the Material Editor. The following screenshot shows the zoomed-in version of the base Material node for the brick clay material. You might notice that Base Color, Roughness, Normal, and Ambient Occlusion have inputs to the base M_Brick_Clay_New material node. These inputs make the brick wall look like a brick wall.

Creating a Material in Unreal

The inputs to these nodes can take on values from various sources. Take Base Color, for example: we can define the color using RGB values, or we can take the color from a texture input. Textures are images in formats such as .bmp, .jpg, and .png, which we can create using tools such as Photoshop or ZBrush.

We will talk more about the construction of materials a little later in this book. For now, let us just keep in mind that materials are applied to surfaces, and textures are what we can use, in combination, to give the materials their overall visual look.

Materials versus Textures

Notice that I have used both Materials and Textures in the previous section. The distinction often causes quite a bit of confusion for newcomers to game development. A Material is what we apply to surfaces, and it is made up of a combination of different textures. A Material takes on properties from its textures depending on what was specified, including color, transparency, and so on.

As explained earlier, Textures are simple images in formats such as .tga, .bmp, .jpg, .png, and so on.

Texture/UV mapping

Now, we understand that a custom material is made up of a combination of textures and material is applied onto surfaces to give the polygon meshes its identity and realism. The next question is how do we apply these numerous textures that come with the material onto the surfaces? Do we simply slap them onto the 3D object? There must be a predictable manner in which we paint these textures onto the surfaces. The method used is called Texture Mapping , which was pioneered by Edwin Catmull in 1974.

Texture mapping assigns pixels from a texture image to a point on the surface of the polygon. The texture image is called a UV texture map. The reason we are using UV as an alternative to the XY coordinates is because we are already using XY to describe the geometric space of the object. So the UV coordinates are the texture's XY coordinates, and it is solely used to determine how to paint a 3D surface.

How to create and use a Texture Map

We will first need to unwrap a mesh at its seams and lay it out flat in 2D. This 2D surface is then painted upon to create the texture. This painted texture (also known as Texture Map) will then be wrapped back around the mesh by assigning the UV coordinates of the texture on each face of the mesh. To help you better visualize, take a look at the following illustration:

As a result of this, shared vertices can have more than one set of UV coordinates assigned.

Multitexturing

To create a better appearance in surfaces, we can use multiple textures to create the eventual end result desired. This layering technique allows for many different textures to be created using different combinations of textures. More importantly, it gives the artists better control of details and/or lighting on a surface.

A special form of texture maps – Normal Maps

Normal Maps are a type of texture maps. They give the surfaces little bumps and dents. Normal Maps add the details to the surfaces without increasing the number of polygons. One very effective use of Normal Mapping is to generate Normal Maps from a high polygon 3D model and use it to texture the lower polygon model, which is also known as baking. We will discuss why we need the same 3D model with different number of polygons in the next section.

Normal maps are commonly stored as regular RGB images where the RGB components correspond to the X, Y, and Z coordinates, respectively, of the surface normal. The following image shows an example of a normal map taken from http://www.bricksntiles.com/textures/:

A special form of texture maps – Normal Maps

Creating a Material in Unreal

To fully understand the concept of a Material, we need to break it down into its fundamental components. How a surface looks is determined by many factors: its color, any prints, patterns, or designs on it, its reflectivity, its transparency, and so on. Together, these factors give the surface its unique look.

In Unreal Engine, we can create our very own Materials using the Material Editor, combining these same factors to give each Material its own look and feel.

Unreal Engine offers a base Material node with a list of customizable inputs that we can use to design our Material. By assigning different values to these inputs, we can come up with our very own Material. Let us take a look behind the scenes of a material that we used in Chapter 2, Creating Your First Level.

Go to Content Browser | Content | Starter Content | Materials and double-click on M_Brick_Clay_New. This opens up the Material Editor. The following screenshot shows a zoomed-in version of the base Material node for the brick clay material. You might notice that Base Color, Roughness, Normal, and Ambient Occlusion each receive inputs on the base M_Brick_Clay_New material node. These inputs are what make the brick wall look like a brick wall.

Creating a Material in Unreal

The inputs to these nodes can take on values from various sources. Take Base Color, for example: we can define the color using RGB values, or we can take the color from a texture input. Textures are images in formats such as .bmp, .jpg, .png, and so on, which we can create using tools such as Photoshop or ZBrush.

We will talk more about the construction of materials a little later in this book. For now, just keep in mind that materials are applied to surfaces, and textures are what we use, in combination, to give a material its overall visual look.

Materials versus Textures

Notice that I have used both the terms Materials and Textures in the previous section. The distinction often causes quite a bit of confusion for newcomers to game development. A Material is what we apply to surfaces, and it is made up of a combination of different textures. A Material takes on properties from its textures depending on what was specified, including color, transparency, and so on.

As explained earlier, Textures are simple images in formats such as .tga, .bmp, .jpg, .png, and so on.

Texture/UV mapping

We now understand that a custom material is made up of a combination of textures, and that materials are applied onto surfaces to give polygon meshes their identity and realism. The next question is: how do we apply the numerous textures that come with a material onto the surfaces? Do we simply slap them onto the 3D object? There must be a predictable manner in which we paint these textures onto the surfaces. The method used is called Texture Mapping, which was pioneered by Edwin Catmull in 1974.

Texture mapping assigns pixels from a texture image to points on the surface of a polygon. The texture image is called a UV texture map. The reason we use UV rather than XY coordinates is that XY is already being used to describe the geometric space of the object. So, the UV coordinates are the texture's own XY coordinates, and they are used solely to determine how to paint a 3D surface.
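
As a rough, engine-independent illustration (plain Python, not Unreal code), a UV pair in the range [0, 1] can be mapped to a concrete pixel, or texel, of a texture of any resolution:

```python
def uv_to_texel(u, v, width, height):
    """Map UV coordinates in [0, 1] to the nearest integer texel coordinates."""
    x = min(int(u * width), width - 1)   # clamp so u = 1.0 stays in range
    y = min(int(v * height), height - 1)
    return x, y

# On a 256x256 texture, the center of UV space lands at texel (128, 128),
# and the far corner clamps to the last texel (255, 255).
assert uv_to_texel(0.5, 0.5, 256, 256) == (128, 128)
assert uv_to_texel(1.0, 1.0, 256, 256) == (255, 255)
```

Because UVs are normalized, the same mapping works no matter what resolution the texture is authored at.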

How to create and use a Texture Map

We first need to unwrap the mesh at its seams and lay it out flat in 2D. This 2D surface is then painted on to create the texture. The painted texture (also known as a Texture Map) is then wrapped back around the mesh by assigning the UV coordinates of the texture to each face of the mesh. To help you visualize this, take a look at the following illustration:

As a result, vertices shared between faces can have more than one set of UV coordinates assigned.
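
For illustration, here is a minimal, hypothetical mesh layout (not an actual engine format) showing how a vertex on a seam ends up with two UV sets:

```python
# Six vertex positions forming two quads that share the edge between
# positions 1 and 2. UVs are stored per face corner, not per position.
positions = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (2, 0, 0), (2, 1, 0)]

# Each face lists (position_index, uv) pairs. Positions 1 and 2 lie on the
# UV seam: the left face maps them to u = 0.5, the right face to u = 0.6.
faces = [
    [(0, (0.0, 0.0)), (1, (0.5, 0.0)), (2, (0.5, 1.0)), (3, (0.0, 1.0))],
    [(1, (0.6, 0.0)), (4, (1.0, 0.0)), (5, (1.0, 1.0)), (2, (0.6, 1.0))],
]

def uv_sets(position_index):
    """Collect every distinct UV assigned to one vertex position."""
    return {uv for face in faces for i, uv in face if i == position_index}

assert len(uv_sets(1)) == 2  # seam vertex: two different UVs
assert len(uv_sets(0)) == 1  # non-seam vertex: a single UV
```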

Multitexturing

To improve the appearance of surfaces, we can layer multiple textures to build up the desired end result. This layering technique allows many different looks to be created from different combinations of textures. More importantly, it gives artists finer control over the detail and/or lighting of a surface.
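
As a simple sketch of this layering idea (engine-independent Python, illustrative only), two texture layers can be combined with a linear blend, where the weight controls how much of the detail layer shows through:

```python
def lerp_color(a, b, t):
    """Linearly interpolate between two RGB colors; t = 0 gives a, t = 1 gives b."""
    return tuple(round(x + (y - x) * t) for x, y in zip(a, b))

def blend_layers(base, detail, weight):
    """Blend two same-sized texture layers, pixel by pixel."""
    return [[lerp_color(p, q, weight) for p, q in zip(row_a, row_b)]
            for row_a, row_b in zip(base, detail)]

brick = [[(150, 60, 40)]]   # 1x1 "brick" layer
grime = [[(30, 30, 30)]]    # 1x1 "grime" layer
assert blend_layers(brick, grime, 0.5) == [[(90, 45, 35)]]
```

Real engines perform this per-pixel on the GPU, often with a painted mask texture supplying a different weight at every texel.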

A special form of texture maps – Normal Maps

Normal Maps are a type of texture map. They give surfaces little bumps and dents, adding detail without increasing the polygon count. One very effective use of normal mapping is to generate a Normal Map from a high-polygon 3D model and use it to texture a lower-polygon version of the same model, a process also known as baking. We will discuss why we need the same 3D model at different polygon counts in the next section.

Normal maps are commonly stored as regular RGB images where the RGB components correspond to the X, Y, and Z coordinates, respectively, of the surface normal. The following image shows an example of a normal map taken from http://www.bricksntiles.com/textures/:

A special form of texture maps – Normal Maps
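
The encoding just described can be sketched in a few lines: each 8-bit channel is remapped from [0, 255] to [-1, 1], and the result is renormalized (a Python illustration, not engine code):

```python
import math

def decode_normal(r, g, b):
    """Decode an 8-bit RGB texel into a unit-length surface normal (X, Y, Z)."""
    n = [c / 255.0 * 2.0 - 1.0 for c in (r, g, b)]
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# The typical flat "light blue" texel (128, 128, 255) decodes to a normal
# pointing almost straight out of the surface, which is why unperturbed
# areas of a normal map look uniformly blue.
nx, ny, nz = decode_normal(128, 128, 255)
assert abs(nx) < 0.01 and abs(ny) < 0.01 and nz > 0.99
```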

Level of detail

We create objects with varying levels of detail (LODs) to increase rendering efficiency. For objects that are close to the player, a high-LOD version, which has a higher polygon count, is rendered. For objects that are far away from the player, a simpler version of the object is rendered instead.

Artists can create the different LOD versions of a 3D object using automated LOD algorithms in their modeling software, or manually, by reducing the number of vertices, normals, and edges in the 3D model to produce a lower polygon count. When creating models at different LODs, note that we always start with the most detailed, highest-polygon model first and then reduce the polygon count to create the other LOD versions; it is much harder to work the other way around. Do remember to keep the UVs coherent when working with objects at different LODs. Currently, different LODs need to be lightmapped separately.
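
The distance-based switching described above can be sketched as follows; the threshold values here are made up purely for illustration:

```python
def select_lod(distance, thresholds):
    """Pick an LOD index from a camera distance.

    thresholds are ascending switch distances, e.g. [1000, 3000] means:
    LOD0 below 1000 units, LOD1 below 3000 units, LOD2 beyond that.
    """
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)

assert select_lod(500, [1000, 3000]) == 0   # close: full-detail mesh
assert select_lod(2000, [1000, 3000]) == 1  # mid-range: reduced mesh
assert select_lod(9000, [1000, 3000]) == 2  # far: lowest polygon count
```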

The following image is taken from http://renderman.pixar.com/view/level-of-detail and very clearly shows the polygon count based on the distance away from the camera:

Level of detail

Collisions

Objects in Unreal Engine have collision properties that can be modified to design the behavior of the object when it collides with another object.

In real life, collisions occur when two objects move and meet each other at a point of contact. Their individual object properties will determine what kind of collision we get, how they respond to the collision, and their path after the collision. This is what we try to achieve in the game world as well.

The following screenshot shows the collision properties available to an object in Unreal Engine 4:

Collisions

If you are still confused about the concept of collision, think of it this way: the Static Mesh gives an object its shape (how large it is, how wide it is, and so on), while the object's collision settings determine how it behaves when placed on a table in the level: whether it falls through the table or lies stationary on top of it.

Collision configuration properties

Let us go through some of the possible configurations in Unreal's Collision properties that we should get acquainted with.

Simulation Generates Hit Events

When an object has the Simulation Generates Hit Events flag checked, an event is raised whenever the object collides with something. This notification can be used to trigger other game actions based on the collision.

Generate Overlap Events

The Generate Overlap Events flag is similar to the Simulation Generates Hit Events flag, but when this flag is checked, an event is generated as soon as another object overlaps this one; no blocking collision is needed.

Collision Presets

The Collision Presets property contains a few frequently used settings that have been preconfigured for you. If you wish to create your own custom collision properties, set this to Custom.

Collision Enabled

The Collision Enabled property allows three different settings: No Collision, No Physics Collision, and Collision Enabled. No Physics Collision is selected when this object is used only for non-physical types of collision such as raycasts, sweeps, and overlaps. Collision Enabled is selected when physics collision is needed. No Collision is selected when absolutely no collision is wanted.

Object Type

Objects can be categorized into several groups: WorldStatic, WorldDynamic, Pawn, PhysicsBody, Vehicle, Destructible, and Projectile. The type selected determines how the object interacts with other objects as it moves.

Collision Responses

The Collision Responses option sets the values of all the Trace Responses and Object Responses properties beneath it in one step. When Block is selected for Collision Responses, all the properties under Trace Responses and Object Responses are also set to Block.

Trace Responses

The Trace Responses option affects how the object interacts with traces. Visibility and Camera are the two types of traces that you can choose to block, overlap, or ignore.

Object Responses

The Object Responses option affects how this object interacts with other object types. Remember the Object Type selection earlier? Object Type determines what type of object this is, and under this category, you can configure the collision response this object has to each of the different object types.

Collision hulls

For a collision to occur in Unreal Engine, hulls are used. To view an example of the collision hull for a Static Mesh, take a look at the light blue lines surrounding the cube in the following screenshot; it's a box collision hull:

Collision hulls

Hulls can be generated in Static Mesh Editor for static meshes. The following screenshot shows the menu options available for creating an auto-generated collision hull in Static Mesh Editor:

Collision hulls

Simple geometry objects can be combined and overlapped to form a simple hull. A simple hull or bounding box reduces the amount of calculation needed during a collision, so for complex objects, a generalized bounding box can be used to encompass the object. When creating a Static Mesh that has a complex shape, rather than a simple geometric one, refer to the Static Mesh creation pipeline section later in this chapter to learn how to create a suitable collision bounding box for it.
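
The cheapness of a simple bounding box comes from how little arithmetic an overlap test needs. Here is a sketch of the classic axis-aligned bounding box (AABB) test, in plain Python for illustration:

```python
def aabb_overlap(a_min, a_max, b_min, b_max):
    """True if two axis-aligned boxes, given by min/max corners, intersect.

    Two boxes overlap only if their intervals overlap on every axis,
    so the whole test is six comparisons.
    """
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

# A unit cube at the origin against a cube shifted half a unit along X: overlap.
assert aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0, 0), (1.5, 1, 1))
# Shifted two units away: no contact, so the cheap test rejects it immediately.
assert not aabb_overlap((0, 0, 0), (1, 1, 1), (2, 0, 0), (3, 1, 1))
```

Engines typically run a cheap test like this first and only fall back to the detailed per-polygon hull when the boxes actually touch.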

Interactions

When designing collisions, you will also need to decide what kind of interactions the object has and what it will interact with.

Block means the objects will collide, while Overlap means no blocking collision occurs. When a block or an overlap happens, the event can be flagged so that other actions can be taken as a result of the interaction. This allows you to build customized events into your game.

Note that for a block to actually occur, both objects must be set to Block, and they must be set to block the right types of objects too. If one is set to Block and the other to Overlap, the overlap will occur, but not the block. When objects move at high speed, both a block and an overlap can happen, but events can only be triggered on either the overlap or the block, not both. You can also set the blocking to ignore a particular type of object, for example, Pawn, which is the player.
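
The rule in the paragraph above can be captured by ordering the responses Ignore < Overlap < Block and taking the weaker of the two objects' settings (an illustrative sketch, not Unreal's actual implementation):

```python
# Ordered so that the "weaker" response always wins the comparison.
IGNORE, OVERLAP, BLOCK = 0, 1, 2

def resolve(response_a, response_b):
    """Effective interaction between two objects: the weaker of the two responses."""
    return min(response_a, response_b)

assert resolve(BLOCK, BLOCK) == BLOCK      # both block: a real collision
assert resolve(BLOCK, OVERLAP) == OVERLAP  # mixed: only an overlap occurs
assert resolve(BLOCK, IGNORE) == IGNORE    # either side ignores: nothing happens
```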

Collision configuration properties

Let us go through some of the possible configurations in Unreal's Collision properties that we should get acquainted with.

Simulation Generates Hit Events

When an object has the Simulation Generates Hit Events flag checked, an alert is raised when the object has a collision. This alert notification can be used to trigger the onset of other game actions based on this collision.

Generate Overlap Events

The Generate Overlap Events flag is similar to the Simulation Generates Hit Events flag, but when this flag is checked, in order to generate an event, all the object needs is to have another object to overlap with it.

Collision Presets

The Collision Presets property contains a few frequently used settings that have been preconfigured for you. If you wish to create your own custom collision properties, set this to Custom.

Collision Enabled

The Collision Enabled property allows three different settings: No Collision, No Physics Collision, and Collision Enabled. No Physics Collision is selected when this object is used only for non-physical types of collision such as raycasts, sweeps, and overlaps. Collision Enabled is selected when physics collision is needed. No Collision is selected when absolutely no collision is wanted.

Object Type

Objects can be categorized into several groups: WorldStatic, WorldDynamic, Pawn, PhysicsBody, Vehicle, Destructible, and Projectile. The type selected determines the interactions it takes on as it moves.

Collision Responses

The Collision Responses option sets the property values for all Trace and Object Responses that come with it. When Block is selected for Collision Responses, all the properties under Trace and Object Responses are also set to Block.

Trace Responses

The Trace Responses option affects how the object interacts with traces. Visibility and Camera are the two types of traces that you can choose to block, overlap, or ignore.

Object Responses

The Object Responses option affects how this object interacts with other object types. Remember the Object Type selection earlier? The Object Type property determines the type of object, and under this category, you can configure the collision response this object has with the different types of objects.

Collision hulls

For a collision to occur in Unreal Engine, hulls are used. To view an example of the collision hull for a Static Mesh, take a look at the light blue lines surrounding the cube in the following screenshot; it's a box collision hull:

Collision hulls

Hulls can be generated in Static Mesh Editor for static meshes. The following screenshot shows the menu options available for creating an auto-generated collision hull in Static Mesh Editor:

Collision hulls

Simple geometry objects can be combined and overlapped to form a simple hull. A simple hull/bounding box reduces the amount of calculation it needs during a collision. So for complex objects, a generalized bounding box can be used to encompass the object. When creating static mesh that has a complex shape, not a simple geometry type of object, you will need to refer to the Static Mesh creation pipeline section later on in the chapter to learn how to create a suitable collision bounding box for it.

Interactions

When designing collisions, you will also need to decide what kind of interactions the object has and what it will interact with.

To block means they will collide, and to overlap can mean that no collision will occur. When a block or an overlap happens, it is possible to flag the event so that other actions resulting from this interaction can be taken. This is to allow customized events, which you can have in game.

Note that for a block to actually occur, both objects must be set to Block and they must be set so that they block the right type of objects too. If one is set to block and the other to overlap, the overlap will occur but not the block. Block and overlap can happen when objects are moving at a high speed, but events can only be triggered on either overlap or block, not both. You can also set the blocking to ignore a particular type of object, for example, Pawn, which is the player.

Simulation Generates Hit Events

When an object has the Simulation Generates Hit Events flag checked, an alert is raised when the object has a collision. This alert notification can be used to trigger the onset of other game actions based on this collision.

Generate Overlap Events

The Generate Overlap Events flag is similar to the Simulation Generates Hit Events flag, but when this flag is checked, in order to generate an event, all the object needs is to have another object to overlap with it.

Collision Presets

The Collision Presets property contains a few frequently used settings that have been preconfigured for you. If you wish to create your own custom collision properties, set this to Custom.

Collision Enabled

The Collision Enabled property allows three different settings: No Collision, No Physics Collision, and Collision Enabled. No Physics Collision is selected when this object is used only for non-physical types of collision such as raycasts, sweeps, and overlaps. Collision Enabled is selected when physics collision is needed. No Collision is selected when absolutely no collision is wanted.

Object Type

Objects can be categorized into several groups: WorldStatic, WorldDynamic, Pawn, PhysicsBody, Vehicle, Destructible, and Projectile. The type selected determines the interactions it takes on as it moves.

Collision Responses

The Collision Responses option sets the property values for all Trace and Object Responses that come with it. When Block is selected for Collision Responses, all the properties under Trace and Object Responses are also set to Block.

Trace Responses

The Trace Responses option affects how the object interacts with traces. Visibility and Camera are the two types of traces that you can choose to block, overlap, or ignore.

Object Responses

The Object Responses option affects how this object interacts with other object types. Remember the Object Type selection earlier? The Object Type property determines the type of object, and under this category, you can configure the collision response this object has with the different types of objects.

Collision hulls

For a collision to occur in Unreal Engine, hulls are used. To view an example of the collision hull for a Static Mesh, take a look at the light blue lines surrounding the cube in the following screenshot; it's a box collision hull:

Collision hulls

Hulls can be generated in Static Mesh Editor for static meshes. The following screenshot shows the menu options available for creating an auto-generated collision hull in Static Mesh Editor:

Collision hulls

Simple geometry objects can be combined and overlapped to form a simple hull. A simple hull/bounding box reduces the amount of calculation it needs during a collision. So for complex objects, a generalized bounding box can be used to encompass the object. When creating static mesh that has a complex shape, not a simple geometry type of object, you will need to refer to the Static Mesh creation pipeline section later on in the chapter to learn how to create a suitable collision bounding box for it.

Interactions

When designing collisions, you will also need to decide what kind of interactions the object has and what it will interact with.

To block means they will collide, and to overlap can mean that no collision will occur. When a block or an overlap happens, it is possible to flag the event so that other actions resulting from this interaction can be taken. This is to allow customized events, which you can have in game.

Note that for a block to actually occur, both objects must be set to Block and they must be set so that they block the right type of objects too. If one is set to block and the other to overlap, the overlap will occur but not the block. Block and overlap can happen when objects are moving at a high speed, but events can only be triggered on either overlap or block, not both. You can also set the blocking to ignore a particular type of object, for example, Pawn, which is the player.

Generate Overlap Events

The Generate Overlap Events flag is similar to the Simulation Generates Hit Events flag, but when this flag is checked, in order to generate an event, all the object needs is to have another object to overlap with it.

Collision Presets

The Collision Presets property contains a few frequently used settings that have been preconfigured for you. If you wish to create your own custom collision properties, set this to Custom.

Collision Enabled

The Collision Enabled property allows three different settings: No Collision, No Physics Collision, and Collision Enabled. No Physics Collision is selected when this object is used only for non-physical types of collision such as raycasts, sweeps, and overlaps. Collision Enabled is selected when physics collision is needed. No Collision is selected when absolutely no collision is wanted.

Object Type

Objects can be categorized into several groups: WorldStatic, WorldDynamic, Pawn, PhysicsBody, Vehicle, Destructible, and Projectile. The type selected determines how the object interacts with other objects as it moves.

Collision Responses

The Collision Responses option sets the values of all the Trace Responses and Object Responses properties under it at once. For example, when Block is selected for Collision Responses, all the properties under Trace Responses and Object Responses are also set to Block.

Trace Responses

The Trace Responses option affects how the object interacts with traces. Visibility and Camera are the two types of traces that you can choose to block, overlap, or ignore.

Object Responses

The Object Responses option affects how this object interacts with other object types. Recall the Object Type property described earlier: it assigns a type to each object, and under this category, you can configure the collision response this object has with each of those types.
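As a rough sketch of how an Object Responses table behaves, the following self-contained C++ snippet pairs each object type name with a response and falls back to Block for unlisted types. The structure and names (FResponseProfile, ResponseTo) are our own illustration, not the engine's API:

```cpp
#include <map>
#include <string>

// Possible responses, from least to most permissive interaction.
enum class ECollisionResponse { Ignore, Overlap, Block };

// Illustrative model of a per-object collision profile: one response per
// object type, with Block as the default for types not listed.
struct FResponseProfile
{
    std::map<std::string, ECollisionResponse> Responses;

    ECollisionResponse ResponseTo(const std::string& ObjectType) const
    {
        const auto It = Responses.find(ObjectType);
        return It != Responses.end() ? It->second : ECollisionResponse::Block;
    }
};
```

A door configured this way could, say, overlap Pawns (so the player can walk through) while ignoring Projectiles entirely.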

Collision hulls

For a collision to occur in Unreal Engine, hulls are used. To view an example of the collision hull for a Static Mesh, take a look at the light blue lines surrounding the cube in the following screenshot; it's a box collision hull:

Collision hulls

Hulls for static meshes can be generated in the Static Mesh Editor. The following screenshot shows the menu options available for creating an auto-generated collision hull in the Static Mesh Editor:

Collision hulls

Static Mesh creation pipeline

The Static Mesh creation pipeline takes place outside of the editor, using 3D modeling tools such as Autodesk Maya and 3ds Max. Unreal Engine 4 can import files saved in the FBX 2013 format.

This creation pipeline is used mainly by the artists to create game objects for the project.

The actual steps and naming conventions for importing a Static Mesh into the editor are well documented on the Unreal Engine 4 documentation website. You may refer to https://docs.unrealengine.com/latest/INT/Engine/Content/FBX/StaticMeshes/index.html for more details.

Introducing volumes

Volumes are invisible regions placed in a level to perform a particular function, and they are used in conjunction with the objects in the level. Common uses include setting boundaries that prevent players from gaining access to an area, triggering events in the game, and changing how light is calculated within an area of the map using the Lightmass Importance Volume, as we did in Chapter 2, Creating Your First Level.

Here's a list of the different types of volumes that can be customized and used in Unreal Engine 4. Feel free to browse through them quickly for now and revisit them when we start using them later in the book. For this chapter, focus your attention on the Trigger Volume, as we will be using it in the examples later in this chapter.

Blocking Volume

The Blocking Volume can be used to prevent players, characters, and game objects from entering a certain area of the map. It is quite similar to the collision hulls described earlier and can be used in place of a Static Mesh collision hull; because these volumes are simpler in shape (block shapes), the collision response is easier to calculate. They can also quickly detect which objects overlap with them.

An example of the usage of the Blocking Volume is to prevent the player from walking across a row of low bushes. In this case, since the bushes are rather irregularly shaped but roughly form a straight line, like a hedge, an invisible Blocking Volume is a very good way of preventing the player from crossing them.

The following screenshot shows the properties for the Blocking Volume. We can change the shape and size of the volume under Brush Settings, and the Collision settings control collision events and can trigger other events via Blueprint. This is pretty much the basic configuration we will get for all the other volumes too.

Blocking Volume

Camera Blocking Volume

The Camera Blocking Volume works in the same way as the Blocking Volume but it is used specifically to block cameras. It is useful when you want to limit the player from exploring with the camera beyond a certain range.

Trigger Volume

The Trigger Volume is probably one of the most frequently used volumes. It is also the volume that we will be using to create events for the game level we have been working on. As the name implies, entering the volume can trigger events, and via Blueprint, we can create a variety of events for our game, such as moving an elevator or spawning NPCs.
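Conceptually, a trigger volume is just a region that fires an event on the frame something first enters it. The following self-contained C++ sketch models that behavior with an axis-aligned box and a callback; it is our own illustration (FTriggerBox, OnEnter, Update are invented names), not engine code:

```cpp
#include <functional>

// Illustrative model of a trigger volume: an axis-aligned box that fires a
// callback on the frame a tracked position first enters it (edge-triggered,
// like a Begin Overlap event). Names are our own, not engine API.
struct FTriggerBox
{
    float MinX, MinY, MinZ;
    float MaxX, MaxY, MaxZ;
    std::function<void()> OnEnter;
    bool bInside = false;

    bool Contains(float X, float Y, float Z) const
    {
        return X >= MinX && X <= MaxX &&
               Y >= MinY && Y <= MaxY &&
               Z >= MinZ && Z <= MaxZ;
    }

    // Call once per frame with the player's current position.
    void Update(float X, float Y, float Z)
    {
        const bool bNowInside = Contains(X, Y, Z);
        if (bNowInside && !bInside && OnEnter)
        {
            OnEnter();  // entered this frame: trigger the event
        }
        bInside = bNowInside;
    }
};
```

In the engine, the callback end of this idea is what you wire up in Blueprint, for example, to start an elevator or spawn an NPC.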

Nav Mesh Bounds Volume

The Nav Mesh Bounds Volume is used to indicate the space in which NPCs are able to navigate freely. NPCs could be enemies in the game that need some sort of pathfinding to get around the level on their own. The Nav Mesh Bounds Volume sets up the area of the game that they are able to walk through. This is important, as there could be obstacles such as rivers with bridges that they will need to use in order to get across to the other side (instead of walking straight into the river and possibly drowning).

Physics Volume

The Physics Volume is used to create areas in which the physics properties of the player and objects in the level change. An example would be altering the gravity within a spaceship once it reaches orbit: when the gravity changes in these areas, the player starts to move more slowly and float inside the ship. We can then turn the volume off when the ship comes back to Earth. The following screenshot shows the additional settings we get with the Physics Volume:

Physics Volume

Pain Causing Volume

The Pain Causing Volume is a very specialized volume used to damage players who enter it. It is a "milder" version of the Kill Z Volume. The reduction of health and the amount of damage per second are customizable according to your game's needs. The following screenshot shows the properties you can adjust to control how much pain to inflict on the player:
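The damage-per-second idea can be sketched in a few lines of self-contained C++. This is our own simplified model (FPainVolume and ApplyPain are invented names; only the damage-per-second concept comes from the engine setting):

```cpp
// Illustrative model of a pain-causing volume: while a player remains
// inside, health drains at a configurable rate, applied per frame tick.
// (Our own sketch; DamagePerSec mirrors the damage-per-second setting.)
struct FPainVolume
{
    float DamagePerSec = 10.0f;

    // Returns the player's health after spending DeltaSeconds inside.
    float ApplyPain(float Health, float DeltaSeconds) const
    {
        Health -= DamagePerSec * DeltaSeconds;
        return Health < 0.0f ? 0.0f : Health;  // clamp: health never negative
    }
};
```

Scaling the damage by the frame's delta time keeps the drain rate consistent regardless of frame rate, which is why the setting is expressed per second rather than per frame.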

Pain Causing Volume

Kill Z Volume

The Kill Z Volume is a very drastic volume that kills the player immediately upon entry. An example of its usage is to kill the player instantly when they fall off a high building. The following screenshot shows the Kill Z Volume properties that determine the point at which the player is killed:

Kill Z Volume

Level Streaming Volume

The Level Streaming Volume is used to load and display a streamed level while the player is within the volume. It generally fills the entire space where you want the level to be loaded. The reason we stream levels is to give players the illusion of a large open game level, when in fact the level is broken up into chunks for more efficient rendering. The following screenshot shows the properties that can be configured for the Level Streaming Volume:

Level Streaming Volume

Cull Distance Volume

The Cull Distance Volume allows objects within it to be culled. To cull means to select from a group; here, the Cull Distance Volume selects objects within the volume that should disappear (not be rendered) based on their distance from the camera. Tiny objects that are far from the camera cannot be seen anyway, so they can be culled when the camera is too far away from them. Using the Cull Distance Volume, you can decide on the distances and object sizes that you want to cull within a fixed space. Used effectively, this can greatly improve the performance of your game.

This might seem very similar to the idea of occlusion. Occlusion, however, is applied on an object-by-object basis, skipping the rendering of an object when it is hidden on screen, and is normally used for larger objects in the scene. The Cull Distance Volume can instead be applied over a large area of space, with conditions specifying whether or not objects are rendered.
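The size-and-distance rule can be sketched as a small lookup: each entry pairs a maximum object size with the camera distance beyond which such objects stop being rendered. The following self-contained C++ model is our own simplification (FCullRules, ShouldCull, and the exact lookup rule are invented for illustration, not engine behavior):

```cpp
#include <utility>
#include <vector>

// Illustrative model of cull-distance pairs: each entry maps a maximum
// object size to the camera distance beyond which such objects are not
// rendered; a cull distance of 0 means "never cull". The names and the
// exact lookup rule here are our own simplification.
struct FCullRules
{
    // (max object size, cull distance), sorted by ascending size.
    std::vector<std::pair<float, float>> Pairs;

    bool ShouldCull(float ObjectSize, float DistanceToCamera) const
    {
        for (const auto& Pair : Pairs)
        {
            if (ObjectSize <= Pair.first)  // first bucket big enough wins
            {
                return Pair.second > 0.0f && DistanceToCamera > Pair.second;
            }
        }
        return false;  // larger than every bucket: always rendered
    }
};
```

The design point is that small objects get short cull distances while large objects survive to much greater distances, which is exactly the trade-off the volume's settings let you tune.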

The following screenshot shows the configuration settings that are available to the Cull Distance Volume:

Cull Distance Volume

Audio Volume

The Audio Volume is used to mimic the real-life ambient sound changes that occur when you move from one place to another, especially between very different environments, such as walking into a clock shop from a busy street, or walking in and out of a restaurant with a live band playing in the background.

The volume is placed around the boundary of one of the areas, creating an artificial border that divides the space into interior and exterior. With this artificial boundary and the settings that come with the Audio Volume, sound artists are able to configure how sounds are played during the transition.

PostProcess Volume

The PostProcess Volume affects the overall scene using post-processing techniques. Post-processing effects include Bloom, Anti-Aliasing, and Depth of Field.

Lightmass Importance Volume

We used the Lightmass Importance Volume in Chapter 2, Creating Your First Level, to focus the lighting calculations on the section of the map that contains the objects. The volume should be sized to encompass your entire level.

Blocking Volume

The Blocking Volume can be used to prevent players/characters/game objects from entering a certain area of the map. It is quite similar to collision hull which we have described earlier and can be used in place of Static Mesh collision hull, as they are simpler in shapes (block shapes), hence easier to calculate the response of the collision. These volumes also have the ability to detect which objects overlap with themselves quickly.

An example of the usage of the Blocking Volume is to prevent the player from walking across a row of low bushes. In this case, since the bushes are rather irregularly shaped but are roughly forming a straight line, like a hedge, an invisible Blocking Volume would be a very good way of preventing the player from crossing the bushes.

The following screenshot shows the properties for the Blocking Volume. We can change the shape and size of the volume under Brush Settings. Collision events and triggers other events using Blueprint. This is pretty much the basic configuration we will get for all other volumes too.

Blocking Volume

Camera Blocking Volume

The Camera Blocking Volume works in the same way as the Blocking Volume but it is used specifically to block cameras. It is useful when you want to limit the player from exploring with the camera beyond a certain range.

Trigger Volume

The Trigger Volume is probably one of the most used volumes. This is also the volume which we would be using to create events for the game level that we have been working on. As the name implies, upon entering this volume, we can trigger events, and via Blueprint, we can create a variety of events for our game, such as moving an elevator or spawning NPCs.

Nav Mesh Bounds Volume

The Nav Mesh Bounds Volume is used to indicate the space in which NPCs are able to navigate freely. NPCs could be enemies in the game who need some sort of pathfinding method to get around the level on their own. The Nav Mesh Bounds Volume sets up the area of the game they are able to walk through. This is important, as there could be obstacles such as bridges that they will need to use in order to get across to the other side (instead of walking straight into the river and possibly drowning).
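
A rough analogy in Python: the nav bounds restrict pathfinding to cells marked walkable, and a search then finds a route only through those cells (all names here are illustrative; Unreal's navigation system does this for you):

```python
# Breadth-first search over a set of walkable grid cells. Cells outside
# the "nav bounds" simply never appear in the walkable set.
from collections import deque

def find_path(walkable, start, goal):
    """walkable: set of (x, y) cells. Returns a start->goal path or None."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:           # walk back to the start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in walkable and nxt not in came_from:
                came_from[nxt] = cur
                frontier.append(nxt)
    return None

# A "river" gap at x == 2, with a single "bridge" cell at (2, 1):
cells = {(x, y) for x in range(5) for y in range(3) if x != 2} | {(2, 1)}
print(find_path(cells, (0, 0), (4, 0)))   # the route detours over the bridge
```

Notice that the returned path must pass through the bridge cell, exactly as an NPC would cross a river via a bridge rather than walking straight in.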

Physics Volume

The Physics Volume is used to create areas in which the physics properties of players and objects in the level change. An example would be altering the gravity within a spaceship once it reaches orbit: with gravity changed in these areas, the player moves more slowly and floats around the ship. We can then turn this volume off when the ship comes back to Earth. The following screenshot shows the additional settings we get with the Physics Volume:

Physics Volume
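
The core idea can be sketched like this. Field names loosely mirror the volume's properties (Priority, gravity), but this is a minimal Python sketch, not engine code:

```python
# Sketch: an actor takes its gravity from the highest-priority physics
# volume it currently overlaps, falling back to the world default.
DEFAULT_GRAVITY_Z = -980.0   # Unreal's default gravity, in cm/s^2

class PhysicsVolume:
    def __init__(self, z_min, z_max, priority, gravity_z):
        self.z_min, self.z_max = z_min, z_max
        self.priority, self.gravity_z = priority, gravity_z

    def contains_z(self, z):
        return self.z_min <= z <= self.z_max

def gravity_for(z, volumes):
    overlapping = [v for v in volumes if v.contains_z(z)]
    if not overlapping:
        return DEFAULT_GRAVITY_Z
    return max(overlapping, key=lambda v: v.priority).gravity_z

ship_in_orbit = PhysicsVolume(1000, 2000, priority=1, gravity_z=-50.0)
print(gravity_for(500, [ship_in_orbit]))    # on the ground: normal gravity
print(gravity_for(1500, [ship_in_orbit]))   # in orbit: near weightless
```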

Pain Causing Volume

The Pain Causing Volume is a specialized volume that damages players upon entry; it is a "milder" version of the Kill Z Volume. The amount of damage dealt per second is customizable according to your game's needs. The following screenshot shows the properties you can adjust to control how much pain to inflict on the player:

Pain Causing Volume
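
The per-second damage behaviour can be sketched as below. The parameter names echo the volume's properties (damage per second, pain interval), but the function itself is an illustrative assumption, not Unreal API:

```python
# Sketch: while the player stands inside a pain-causing region, damage
# is applied at a fixed rate on each elapsed pain interval, and health
# is clamped so it never drops below zero.
def apply_pain(health, seconds_inside, damage_per_sec=5.0, pain_interval=1.0):
    ticks = int(seconds_inside / pain_interval)   # completed damage ticks
    return max(0.0, health - ticks * damage_per_sec * pain_interval)

print(apply_pain(100.0, 3.0))    # 3 ticks of 5 damage -> 85.0
print(apply_pain(10.0, 60.0))    # clamped at zero, never negative
```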

Kill Z Volume

The Kill Z Volume kills the player immediately upon entry, making it a very drastic volume. An example of its usage is killing a player the moment they fall off a tall building. The following screenshot shows the properties of the Kill Z Volume, which determine the point at which the player is killed:

Kill Z Volume
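
At its heart this is a single threshold check on the actor's height, sketched below (the threshold value is an illustrative choice):

```python
# Sketch of the Kill Z idea: any actor whose Z coordinate drops below
# the kill threshold is destroyed immediately.
KILL_Z = -1000.0   # illustrative threshold, in world units

def should_kill(actor_z, kill_z=KILL_Z):
    return actor_z < kill_z

print(should_kill(-1500.0))   # fell off the building -> True
print(should_kill(200.0))     # safely on a rooftop  -> False
```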

Level Streaming Volume

The Level Streaming Volume is used to load and display a level while you are within the volume, and it generally fills the entire space in which you want that level to be loaded. The reason we stream levels is to give players the illusion of one large, open game level when, in fact, the level is broken into chunks for more efficient rendering. The following screenshot shows the properties that can be configured for the Level Streaming Volume:

Level Streaming Volume
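
The chunking idea can be sketched as follows. This is purely illustrative (level names and 1D regions are assumptions); Unreal handles the actual asynchronous loading and unloading for you:

```python
# Sketch: each streaming volume maps a region to a sub-level, and only
# sub-levels whose region contains the player stay loaded. Regions
# overlap slightly so transitions between chunks are seamless.
volumes = {
    "Town":   (0, 100),     # (x_min, x_max) covered by the sub-level
    "Forest": (80, 200),    # overlaps Town for a seamless hand-off
    "Castle": (180, 300),
}

def loaded_levels(player_x):
    return {name for name, (lo, hi) in volumes.items() if lo <= player_x <= hi}

print(sorted(loaded_levels(50)))    # only the Town chunk is loaded
print(sorted(loaded_levels(90)))    # in the overlap: Forest streams in too
```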

Cull Distance Volume

The Cull Distance Volume allows objects within it to be culled. To cull means to select from a group: here, the volume selects objects that should disappear (that is, not be rendered) based on their distance from the camera. Tiny objects far from the camera are barely visible anyway, so they can safely be culled once the camera is far enough away. Using the Cull Distance Volume, you decide on the sizes of objects and the distances at which they are culled within a fixed space. Used effectively, this can greatly improve the performance of your game.

This might seem very similar to occlusion culling. Occlusion works object by object, skipping an object when it is hidden on screen, and is normally applied to larger objects in the scene. The Cull Distance Volume, in contrast, covers a large area of space and uses size and distance conditions to decide whether objects are rendered.

The following screenshot shows the configuration settings that are available to the Cull Distance Volume:

Cull Distance Volume
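
The size-and-distance rules can be sketched as a small lookup: smaller objects get shorter cull distances, and the rule whose size is closest to the object's size wins. The pair values below are illustrative, not engine defaults:

```python
# Sketch of size/distance culling rules: each pair is (object_size,
# cull_distance); a cull distance of 0 means "never cull".
cull_pairs = [(50, 2000), (200, 5000), (1000, 0)]

def cull_distance_for(object_size):
    # Pick the rule whose size is closest to this object's size.
    return min(cull_pairs, key=lambda p: abs(p[0] - object_size))[1]

def is_culled(object_size, distance_to_camera):
    d = cull_distance_for(object_size)
    return d > 0 and distance_to_camera > d

print(is_culled(40, 2500))     # tiny prop, far away -> True (culled)
print(is_culled(1200, 9000))   # huge rock: 0 means never culled -> False
```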

Audio Volume

The Audio Volume is used to mimic the real changes in ambient sound as one transitions from one place to another, especially between very different environments, such as walking into a clock shop from a busy street, or walking in and out of a restaurant with a live band playing in the background.

The volume is placed around the boundary of one of the areas, creating an artificial border that divides the space into interior and exterior. With this boundary and the settings that come with the Audio Volume, sound designers can configure how sounds are played during the transition.
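
The interior/exterior transition amounts to a crossfade driven by how far the listener has crossed the boundary, as in this illustrative sketch (the sound names and fade distance are assumptions):

```python
# Sketch: as the listener moves past the audio-volume boundary, the
# exterior ambience fades out while the interior bed fades in over a
# short transition distance.
def ambience_mix(depth_inside, fade_distance=100.0):
    """depth_inside: how far past the boundary the listener is (<= 0 means outside)."""
    t = max(0.0, min(1.0, depth_inside / fade_distance))   # clamp to [0, 1]
    return {"street": 1.0 - t, "clock_shop": t}

print(ambience_mix(-50.0))   # on the street: only street ambience
print(ambience_mix(50.0))    # halfway through the doorway: 0.5 / 0.5
```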

PostProcess Volume

The PostProcess Volume affects the overall scene using post-processing techniques. Post-processing effects include Bloom effects, Anti-Aliasing, and Depth of Field.

Lightmass Importance Volume

We have used Lightmass Importance Volume in Chapter 2, Creating Your First Level, to focus the light on the section of the map that has the objects in. The size of the volume should encompass your entire level.

Camera Blocking Volume

The Camera Blocking Volume works in the same way as the Blocking Volume but it is used specifically to block cameras. It is useful when you want to limit the player from exploring with the camera beyond a certain range.

Trigger Volume

The Trigger Volume is probably one of the most used volumes. This is also the volume which we would be using to create events for the game level that we have been working on. As the name implies, upon entering this volume, we can trigger events, and via Blueprint, we can create a variety of events for our game, such as moving an elevator or spawning NPCs.

Nav Mesh Bounds Volume

The Nav Mesh Bounds Volume is used to indicate the space in which NPCs are able to freely navigate around. NPCs could be enemies in the game who need some sort of path finding method to get around the level on their own. This Nav Mesh Bounds Volume will set up the area in the game that they are able to walk through. This is important as there could be obstacles such as bridges that they will need to use to in order get across to the other side (instead of walking straight into the river and possibly drowning).

Physics Volume

The Physics Volume is used to create areas in which the physics properties of the player/objects in the level change. An example of this would be altering the gravity within a space ship only when it reaches the orbit. When the gravity is changed in these areas, the player starts to move slower and float in the space ship. We can then turn this volume off when the ship comes back to earth. The following screenshot shows the additional settings we get from the Physics Volume:

Physics Volume

Pain Causing Volume

The Pain Causing Volume is a very specialized volume used to create damage to the players upon entry. It is a "milder" version of the Kill Z Volume. Reduction of health and the amount of damage per second are customizable, according to your game needs. The following screenshot shows the properties you can adjust to control how much pain to inflict on the player:

Pain Causing Volume

Kill Z Volume

We kill the player when it enters the Kill Z Volume. This is a very drastic volume that kills the player immediately. An example of its usage is to kill the player immediately when the player falls off a high building. The following screenshot shows the properties of Kill Z Volume to determine the point at which the player is killed:

Kill Z Volume

Level Streaming Volume

The Level Streaming Volume is used to display the levels when you are within the volume. It generally fills the entire space where you want the level to be loaded. The reason we need to stream levels is to give players an illusion that we have a large open game level, when in fact the level is broken up into chunks for more efficient rendering. The following screenshot shows the properties that can be configured for the Level Streaming Volume:

Level Streaming Volume

Cull Distance Volume

The Cull Distance Volume allows objects to be culled in the volume. The definition of cull is to select from a group. The Cull Distance Volume is used to select objects in the volume that need to disappear (or not rendered) based on the distance away from the camera. Tiny objects that are far away from the camera cannot be seen visibly. These objects can be culled if the camera is too far away from those objects. Using the Cull Distance Volume, you would be able to decide upon the distance and size of objects, which you want to cull within a fixed space. This can greatly improve performance of your game when used effectively.

This might seem very similar to the idea of occlusion. Occlusion is implemented by selecting object by object, when it is not rendered on screen. These are normally used for larger objects in the scene. Cull Distance Volume can be used over a large area of space and using conditions to specify whether or not the objects are rendered.

The following screenshot shows the configuration settings that are available to the Cull Distance Volume:

Cull Distance Volume

Audio Volume

The Audio Volume is used to mimic real ambient sound changes when one transits from one place to another, especially when transiting to and from very different environments, such as walking into a clock shop from a busy street, or walking in and out of a restaurant with a live band playing in the background.

The volume is placed surrounding the boundaries of one of the areas creating an artificial border dividing the spaces into interior and exterior. With this artificially created boundary and settings that come with this Audio Volume, sound artists are able to configure how sounds are played during this transition.

PostProcess Volume

The PostProcess Volume affects the overall scene using post-processing techniques. Post-processing effects include Bloom effects, Anti-Aliasing, and Depth of Field.

Lightmass Importance Volume

We have used Lightmass Importance Volume in Chapter 2, Creating Your First Level, to focus the light on the section of the map that has the objects in. The size of the volume should encompass your entire level.

Trigger Volume

The Trigger Volume is probably one of the most used volumes. This is also the volume which we would be using to create events for the game level that we have been working on. As the name implies, upon entering this volume, we can trigger events, and via Blueprint, we can create a variety of events for our game, such as moving an elevator or spawning NPCs.

Nav Mesh Bounds Volume

The Nav Mesh Bounds Volume is used to indicate the space in which NPCs are able to freely navigate around. NPCs could be enemies in the game who need some sort of path finding method to get around the level on their own. This Nav Mesh Bounds Volume will set up the area in the game that they are able to walk through. This is important as there could be obstacles such as bridges that they will need to use to in order get across to the other side (instead of walking straight into the river and possibly drowning).

Physics Volume

The Physics Volume is used to create areas in which the physics properties of the player/objects in the level change. An example of this would be altering the gravity within a space ship only when it reaches the orbit. When the gravity is changed in these areas, the player starts to move slower and float in the space ship. We can then turn this volume off when the ship comes back to earth. The following screenshot shows the additional settings we get from the Physics Volume:

Physics Volume

Pain Causing Volume

The Pain Causing Volume is a very specialized volume used to create damage to the players upon entry. It is a "milder" version of the Kill Z Volume. Reduction of health and the amount of damage per second are customizable, according to your game needs. The following screenshot shows the properties you can adjust to control how much pain to inflict on the player:

Pain Causing Volume

Kill Z Volume

We kill the player when it enters the Kill Z Volume. This is a very drastic volume that kills the player immediately. An example of its usage is to kill the player immediately when the player falls off a high building. The following screenshot shows the properties of Kill Z Volume to determine the point at which the player is killed:

Kill Z Volume

Level Streaming Volume

The Level Streaming Volume is used to display the levels when you are within the volume. It generally fills the entire space where you want the level to be loaded. The reason we need to stream levels is to give players an illusion that we have a large open game level, when in fact the level is broken up into chunks for more efficient rendering. The following screenshot shows the properties that can be configured for the Level Streaming Volume:

Level Streaming Volume

Cull Distance Volume

The Cull Distance Volume allows objects to be culled in the volume. The definition of cull is to select from a group. The Cull Distance Volume is used to select objects in the volume that need to disappear (or not rendered) based on the distance away from the camera. Tiny objects that are far away from the camera cannot be seen visibly. These objects can be culled if the camera is too far away from those objects. Using the Cull Distance Volume, you would be able to decide upon the distance and size of objects, which you want to cull within a fixed space. This can greatly improve performance of your game when used effectively.

This might seem very similar to the idea of occlusion. Occlusion is implemented by selecting object by object, when it is not rendered on screen. These are normally used for larger objects in the scene. Cull Distance Volume can be used over a large area of space and using conditions to specify whether or not the objects are rendered.

The following screenshot shows the configuration settings that are available to the Cull Distance Volume:

Cull Distance Volume

Audio Volume

The Audio Volume is used to mimic real ambient sound changes when one transits from one place to another, especially when transiting to and from very different environments, such as walking into a clock shop from a busy street, or walking in and out of a restaurant with a live band playing in the background.

The volume is placed surrounding the boundaries of one of the areas creating an artificial border dividing the spaces into interior and exterior. With this artificially created boundary and settings that come with this Audio Volume, sound artists are able to configure how sounds are played during this transition.

PostProcess Volume

The PostProcess Volume affects the overall scene using post-processing techniques. Post-processing effects include Bloom effects, Anti-Aliasing, and Depth of Field.

Lightmass Importance Volume

We have used Lightmass Importance Volume in Chapter 2, Creating Your First Level, to focus the light on the section of the map that has the objects in. The size of the volume should encompass your entire level.

Nav Mesh Bounds Volume

The Nav Mesh Bounds Volume is used to indicate the space in which NPCs are able to freely navigate around. NPCs could be enemies in the game who need some sort of path finding method to get around the level on their own. This Nav Mesh Bounds Volume will set up the area in the game that they are able to walk through. This is important as there could be obstacles such as bridges that they will need to use to in order get across to the other side (instead of walking straight into the river and possibly drowning).

Physics Volume

The Physics Volume is used to create areas in which the physics properties of the player/objects in the level change. An example of this would be altering the gravity within a space ship only when it reaches the orbit. When the gravity is changed in these areas, the player starts to move slower and float in the space ship. We can then turn this volume off when the ship comes back to earth. The following screenshot shows the additional settings we get from the Physics Volume:

Physics Volume

Pain Causing Volume

The Pain Causing Volume is a very specialized volume used to create damage to the players upon entry. It is a "milder" version of the Kill Z Volume. Reduction of health and the amount of damage per second are customizable, according to your game needs. The following screenshot shows the properties you can adjust to control how much pain to inflict on the player:

Pain Causing Volume

Kill Z Volume

We kill the player when it enters the Kill Z Volume. This is a very drastic volume that kills the player immediately. An example of its usage is to kill the player immediately when the player falls off a high building. The following screenshot shows the properties of Kill Z Volume to determine the point at which the player is killed:

Kill Z Volume

Level Streaming Volume

The Level Streaming Volume is used to display the levels when you are within the volume. It generally fills the entire space where you want the level to be loaded. The reason we need to stream levels is to give players an illusion that we have a large open game level, when in fact the level is broken up into chunks for more efficient rendering. The following screenshot shows the properties that can be configured for the Level Streaming Volume:

Level Streaming Volume

Cull Distance Volume

The Cull Distance Volume allows objects to be culled in the volume. The definition of cull is to select from a group. The Cull Distance Volume is used to select objects in the volume that need to disappear (or not rendered) based on the distance away from the camera. Tiny objects that are far away from the camera cannot be seen visibly. These objects can be culled if the camera is too far away from those objects. Using the Cull Distance Volume, you would be able to decide upon the distance and size of objects, which you want to cull within a fixed space. This can greatly improve performance of your game when used effectively.

This might seem very similar to the idea of occlusion. Occlusion is implemented by selecting object by object, when it is not rendered on screen. These are normally used for larger objects in the scene. Cull Distance Volume can be used over a large area of space and using conditions to specify whether or not the objects are rendered.

The following screenshot shows the configuration settings that are available to the Cull Distance Volume:

Cull Distance Volume

Audio Volume

The Audio Volume is used to mimic real ambient sound changes when one transits from one place to another, especially when transiting to and from very different environments, such as walking into a clock shop from a busy street, or walking in and out of a restaurant with a live band playing in the background.

The volume is placed surrounding the boundaries of one of the areas creating an artificial border dividing the spaces into interior and exterior. With this artificially created boundary and settings that come with this Audio Volume, sound artists are able to configure how sounds are played during this transition.

PostProcess Volume

The PostProcess Volume affects the overall scene using post-processing techniques. Post-processing effects include Bloom effects, Anti-Aliasing, and Depth of Field.

Lightmass Importance Volume

We have used Lightmass Importance Volume in Chapter 2, Creating Your First Level, to focus the light on the section of the map that has the objects in. The size of the volume should encompass your entire level.

Physics Volume

The Physics Volume is used to create areas in which the physics properties of the player/objects in the level change. An example of this would be altering the gravity within a space ship only when it reaches the orbit. When the gravity is changed in these areas, the player starts to move slower and float in the space ship. We can then turn this volume off when the ship comes back to earth. The following screenshot shows the additional settings we get from the Physics Volume:

Physics Volume

Pain Causing Volume

The Pain Causing Volume is a very specialized volume used to create damage to the players upon entry. It is a "milder" version of the Kill Z Volume. Reduction of health and the amount of damage per second are customizable, according to your game needs. The following screenshot shows the properties you can adjust to control how much pain to inflict on the player:

Pain Causing Volume

Kill Z Volume

We kill the player when it enters the Kill Z Volume. This is a very drastic volume that kills the player immediately. An example of its usage is to kill the player immediately when the player falls off a high building. The following screenshot shows the properties of Kill Z Volume to determine the point at which the player is killed:

Kill Z Volume

Level Streaming Volume

The Level Streaming Volume is used to display the levels when you are within the volume. It generally fills the entire space where you want the level to be loaded. The reason we need to stream levels is to give players an illusion that we have a large open game level, when in fact the level is broken up into chunks for more efficient rendering. The following screenshot shows the properties that can be configured for the Level Streaming Volume:

Level Streaming Volume

Cull Distance Volume

The Cull Distance Volume allows objects to be culled in the volume. The definition of cull is to select from a group. The Cull Distance Volume is used to select objects in the volume that need to disappear (or not rendered) based on the distance away from the camera. Tiny objects that are far away from the camera cannot be seen visibly. These objects can be culled if the camera is too far away from those objects. Using the Cull Distance Volume, you would be able to decide upon the distance and size of objects, which you want to cull within a fixed space. This can greatly improve performance of your game when used effectively.

This might seem very similar to the idea of occlusion. Occlusion is implemented by selecting object by object, when it is not rendered on screen. These are normally used for larger objects in the scene. Cull Distance Volume can be used over a large area of space and using conditions to specify whether or not the objects are rendered.

The following screenshot shows the configuration settings that are available to the Cull Distance Volume:

Cull Distance Volume

Audio Volume

The Audio Volume is used to mimic real ambient sound changes when one transits from one place to another, especially when transiting to and from very different environments, such as walking into a clock shop from a busy street, or walking in and out of a restaurant with a live band playing in the background.

The volume is placed surrounding the boundaries of one of the areas creating an artificial border dividing the spaces into interior and exterior. With this artificially created boundary and settings that come with this Audio Volume, sound artists are able to configure how sounds are played during this transition.

PostProcess Volume

The PostProcess Volume affects the overall scene using post-processing techniques. Post-processing effects include Bloom effects, Anti-Aliasing, and Depth of Field.

Lightmass Importance Volume

We have used Lightmass Importance Volume in Chapter 2, Creating Your First Level, to focus the light on the section of the map that has the objects in. The size of the volume should encompass your entire level.

Pain Causing Volume

The Pain Causing Volume is a very specialized volume used to create damage to the players upon entry. It is a "milder" version of the Kill Z Volume. Reduction of health and the amount of damage per second are customizable, according to your game needs. The following screenshot shows the properties you can adjust to control how much pain to inflict on the player:

Pain Causing Volume

Kill Z Volume

We kill the player when it enters the Kill Z Volume. This is a very drastic volume that kills the player immediately. An example of its usage is to kill the player immediately when the player falls off a high building. The following screenshot shows the properties of Kill Z Volume to determine the point at which the player is killed:

Kill Z Volume

Level Streaming Volume

The Level Streaming Volume is used to display the levels when you are within the volume. It generally fills the entire space where you want the level to be loaded. The reason we need to stream levels is to give players an illusion that we have a large open game level, when in fact the level is broken up into chunks for more efficient rendering. The following screenshot shows the properties that can be configured for the Level Streaming Volume:

Level Streaming Volume

Cull Distance Volume

The Cull Distance Volume allows objects to be culled in the volume. The definition of cull is to select from a group. The Cull Distance Volume is used to select objects in the volume that need to disappear (or not rendered) based on the distance away from the camera. Tiny objects that are far away from the camera cannot be seen visibly. These objects can be culled if the camera is too far away from those objects. Using the Cull Distance Volume, you would be able to decide upon the distance and size of objects, which you want to cull within a fixed space. This can greatly improve performance of your game when used effectively.

This might seem very similar to the idea of occlusion. Occlusion is implemented by selecting object by object, when it is not rendered on screen. These are normally used for larger objects in the scene. Cull Distance Volume can be used over a large area of space and using conditions to specify whether or not the objects are rendered.

The following screenshot shows the configuration settings that are available to the Cull Distance Volume:

Cull Distance Volume

Audio Volume

The Audio Volume is used to mimic real ambient sound changes when one transits from one place to another, especially when transiting to and from very different environments, such as walking into a clock shop from a busy street, or walking in and out of a restaurant with a live band playing in the background.

The volume is placed surrounding the boundaries of one of the areas creating an artificial border dividing the spaces into interior and exterior. With this artificially created boundary and settings that come with this Audio Volume, sound artists are able to configure how sounds are played during this transition.

PostProcess Volume

The PostProcess Volume affects the overall scene using post-processing techniques. Post-processing effects include Bloom effects, Anti-Aliasing, and Depth of Field.

Lightmass Importance Volume

We have used Lightmass Importance Volume in Chapter 2, Creating Your First Level, to focus the light on the section of the map that has the objects in. The size of the volume should encompass your entire level.

Kill Z Volume

We kill the player when it enters the Kill Z Volume. This is a very drastic volume that kills the player immediately. An example of its usage is to kill the player immediately when the player falls off a high building. The following screenshot shows the properties of Kill Z Volume to determine the point at which the player is killed:

Kill Z Volume

Level Streaming Volume

The Level Streaming Volume is used to display the levels when you are within the volume. It generally fills the entire space where you want the level to be loaded. The reason we need to stream levels is to give players an illusion that we have a large open game level, when in fact the level is broken up into chunks for more efficient rendering. The following screenshot shows the properties that can be configured for the Level Streaming Volume:

Level Streaming Volume

Cull Distance Volume

The Cull Distance Volume allows objects to be culled in the volume. The definition of cull is to select from a group. The Cull Distance Volume is used to select objects in the volume that need to disappear (or not rendered) based on the distance away from the camera. Tiny objects that are far away from the camera cannot be seen visibly. These objects can be culled if the camera is too far away from those objects. Using the Cull Distance Volume, you would be able to decide upon the distance and size of objects, which you want to cull within a fixed space. This can greatly improve performance of your game when used effectively.

This might seem very similar to the idea of occlusion. Occlusion is implemented by selecting object by object, when it is not rendered on screen. These are normally used for larger objects in the scene. Cull Distance Volume can be used over a large area of space and using conditions to specify whether or not the objects are rendered.

The following screenshot shows the configuration settings that are available to the Cull Distance Volume:

Cull Distance Volume

Audio Volume

The Audio Volume is used to mimic real ambient sound changes when one transits from one place to another, especially when transiting to and from very different environments, such as walking into a clock shop from a busy street, or walking in and out of a restaurant with a live band playing in the background.

The volume is placed surrounding the boundaries of one of the areas creating an artificial border dividing the spaces into interior and exterior. With this artificially created boundary and settings that come with this Audio Volume, sound artists are able to configure how sounds are played during this transition.

PostProcess Volume

The PostProcess Volume affects the overall scene using post-processing techniques. Post-processing effects include Bloom effects, Anti-Aliasing, and Depth of Field.

Lightmass Importance Volume

We have used Lightmass Importance Volume in Chapter 2, Creating Your First Level, to focus the light on the section of the map that has the objects in. The size of the volume should encompass your entire level.

Level Streaming Volume

The Level Streaming Volume is used to display the levels when you are within the volume. It generally fills the entire space where you want the level to be loaded. The reason we need to stream levels is to give players an illusion that we have a large open game level, when in fact the level is broken up into chunks for more efficient rendering. The following screenshot shows the properties that can be configured for the Level Streaming Volume:

Level Streaming Volume

Cull Distance Volume

The Cull Distance Volume allows objects to be culled in the volume. The definition of cull is to select from a group. The Cull Distance Volume is used to select objects in the volume that need to disappear (or not rendered) based on the distance away from the camera. Tiny objects that are far away from the camera cannot be seen visibly. These objects can be culled if the camera is too far away from those objects. Using the Cull Distance Volume, you would be able to decide upon the distance and size of objects, which you want to cull within a fixed space. This can greatly improve performance of your game when used effectively.

This might seem very similar to the idea of occlusion. Occlusion is implemented by selecting object by object, when it is not rendered on screen. These are normally used for larger objects in the scene. Cull Distance Volume can be used over a large area of space and using conditions to specify whether or not the objects are rendered.

The following screenshot shows the configuration settings that are available to the Cull Distance Volume:

Cull Distance Volume

Audio Volume

The Audio Volume is used to mimic real ambient sound changes when one transits from one place to another, especially when transiting to and from very different environments, such as walking into a clock shop from a busy street, or walking in and out of a restaurant with a live band playing in the background.

The volume is placed surrounding the boundaries of one of the areas creating an artificial border dividing the spaces into interior and exterior. With this artificially created boundary and settings that come with this Audio Volume, sound artists are able to configure how sounds are played during this transition.

PostProcess Volume

The PostProcess Volume affects the overall scene using post-processing techniques. Post-processing effects include Bloom effects, Anti-Aliasing, and Depth of Field.

Lightmass Importance Volume

We used a Lightmass Importance Volume in Chapter 2, Creating Your First Level, to focus the lighting computation on the section of the map that contains the objects. The volume should be sized to encompass your entire level.

Introducing Blueprint

The Unreal Editor offers the ability to create custom events for game levels through a visual scripting system. Before Unreal Engine 4, it was known as the Kismet system. In Unreal Engine 4, this system was revamped with more features and capabilities. The improved system was launched with the new name of Blueprint.

There are several types of Blueprint, including Class Blueprints, Data-Only Blueprints, and Level Blueprints. Of these, the Level Blueprint is the closest equivalent to what we used to know as Kismet.

Why do I need Blueprint? The simple answer is that through Blueprint, we are able to control gameplay without having to dive into the actual coding. This makes it convenient for non-programmers to design and modify the gameplay. So, it mainly benefits the game designers/artists who can configure the game through the Blueprint editor.

So, how can we use Blueprint, and what can we use it for? Blueprint is just like coding, but through an interface: you select, drag, and drop function nodes into the editor and link them up logically to evoke the desired response to specified scenarios in your game. Programmers will pick it up quickly, since Blueprint is in fact coding, just through a visual interface.

For the benefit of everyone who is new to Unreal Engine 4 and maybe programming as well, we will go through a basic example of how Level Blueprint works here and use that as an example to go through some basic programming concepts at the same time.

What will we be using Blueprint for? Blueprint has the capability to prototype, implement, or modify virtually any gameplay element. These gameplay elements determine how game objects are spawned, what gets spawned, where they are spawned, and under what conditions they are spawned. The game objects can include lights, cameras, player input, triggers, meshes, and character models. Blueprint can control the properties of these game objects dynamically to create countless gameplay scenarios. Examples of usage include altering the color of the lights when you enter a room in the game, triggering the door to shut behind you after entering the room while playing the sound effect of the door closing, spawning weapons randomly among three possible locations in the map, and so on.

In this chapter, we will focus on Level Blueprint first, since it is the most commonly used form of Blueprint.

Level Blueprint

Level Blueprint is a type of Blueprint that has influence over what happens in the level. Events that are created in this Blueprint affect what happens in the level, and are made specific to the situation by specifying the particular object it targets.

Feel free to jump to the next section first where we will go through a Blueprint example, so that we are able to understand Level Blueprint a little better.

The following screenshot shows a blank Level Blueprint. The most used window is Event Graph, which is in the center. Using different node types in Event Graph and linking them up appropriately creates responsive interactions within the game. The nodes come with variables, values, and other similar properties used in programming to control the game events graphically (without writing a single line of script or code).

Level Blueprint


Using the Trigger Volume to turn on/off light

We are now ready to use what we have learned to construct the next room for our game. We will duplicate the first room we have created in order to create our second room.

  1. Open the level that we created in Chapter 2, Creating Your First Level, (Chapter2_Level) and save it as a new level called Chapter3_Level.
  2. Select all the walls, the floor, the door, and the door frame.
  3. Hold down Alt + Shift and drag to duplicate the room.
  4. Place the duplicated room with the duplicated door aligned to the wall of the first room. Refer to the following screenshot to see how the walls are aligned from a Top view perspective:
    Using the Trigger Volume to turn on/off light
  5. Delete the back wall of the first room to link both the rooms.
  6. Delete all the doors to allow easy access to the second room.
  7. Move the standing lamp and chair to the side. Take a look at the following screenshot to understand how the rooms look at this point:
    Using the Trigger Volume to turn on/off light
  8. Rebuild the lights. The following screenshot shows the room correctly illuminated after building the lights:
    Using the Trigger Volume to turn on/off light
  9. Now, let us focus on working on the second room. We will create a narrower walkway using the second room that we have just created.
  10. Move the sidewalls closer to each other—about 30 cm from the previous sidewall towards the center. Refer to the next two screenshots for the Top and Perspective views after moving the sidewalls:
    Using the Trigger Volume to turn on/off light
    Using the Trigger Volume to turn on/off light
  11. Note that the Lightmass Importance Volume is not encompassing the entire level now. Increase the size of the volume to cover the whole level. Take a look at the following screenshot to see how to extend the size of the volume correctly:
    Using the Trigger Volume to turn on/off light
  12. Go to Content Browser | Props. Click and drop SM_Lamp_Wall into the level. Rotate the lamp if necessary so that it lies nicely on the side wall.
  13. Go to Modes | Lights. Click and drop a Point Light into the second room. Place it just above the light source on the wall light, which we added in the previous step. Take a look at the following screenshot to see the placement of the lamp and Point Light that we have just added:
    Using the Trigger Volume to turn on/off light
  14. Adjust the Point Light settings: Intensity = 1700.0. This is approximately the light intensity coming off a light bulb. The following screenshot shows the settings for the Point Light:
    Using the Trigger Volume to turn on/off light
  15. Next, go to Light Color and change the color of the light to #FF9084FF to set the mood of the level.
    Using the Trigger Volume to turn on/off light
  16. Now, let us rename the Point Light to WalkwayLight and the Wall Lamp prop to WallLamp.
  17. Select the Point Light and right-click to display the contextual menu. Go to Attach To and select WallLamp. This attaches the light to the prop so that when we move the prop, the light moves together. The following screenshot shows that WalkwayLight is linked to WallLamp:
    Using the Trigger Volume to turn on/off light
  18. Now, let us create a Trigger Volume. Go to Modes | Volumes. Click and drag the Trigger Volume into the level.
  19. Resize the volume to cover the entrance of the door dividing the two rooms. Refer to the next two screenshots on how to position the volume (Perspective view and Top view). Make sure that the volume covers the entire space of the door.
    Using the Trigger Volume to turn on/off light
    Using the Trigger Volume to turn on/off light
  20. Rename Trigger Volume to WalkwayLightTrigger.
  21. In order to use the Trigger Volume to turn the light on and off, we need to figure out which property from the Point Light controls this feature. Click on the Point Light (WalkwayLight) to display the properties of the light. Scroll down to Rendering and uncheck the property box for Visible. Notice that the light is now turned off. We want to keep the light turned off until we trigger it.
  22. So, the next step is to link the sequence of events up. This is done via Level Blueprint. We will need to trigger this change in property using the Trigger Volume we have created, and turn the light back on.
  23. With the Point Light still selected, go to the top ribbon and select Blueprints | Open Level Blueprint. This opens up the Level Blueprint window. Make sure that the Point Light (WalkwayLight) is still selected as shown in the following screenshot:
    Using the Trigger Volume to turn on/off light
  24. Right-click in the Event Graph of the Level Blueprint window to display what actions can be added to the Level Blueprint.
  25. Due to Level Blueprint's ability to guide what actions are possible, we can simply select Add Reference to WalkwayLight. This creates the WalkwayLight actor in Level Blueprint. The following screenshot shows the WalkwayLight actor correctly added in Blueprint:
    Using the Trigger Volume to turn on/off light
  26. You can keep the Level Blueprint window open, and go to the Trigger Volume we created in the level.
  27. Select the Trigger Volume (WalkwayLightTrigger), right-click and select Add Event and then OnActorBeginOverlap. The following screenshot shows how to add OnActorBeginOverlap in Level Blueprint:
    Using the Trigger Volume to turn on/off light
  28. To control a variable in the Point Light, we will click and drag on the tiny blue circle on the WalkwayLight node we added. This creates a blue line originating from the tiny blue circle. It also opens up a menu, where we can see what actions can be performed on the Point Light. Enter visi into the search bar to filter the options. Click on Set Visibility. The following screenshot shows how to add the Set Visibility function to the Point Light (WalkwayLight):
    Using the Trigger Volume to turn on/off light
  29. Check the New Visibility checkbox in the Set Visibility function. The following screenshot shows the configuration we want:
    Using the Trigger Volume to turn on/off light
  30. Now, we are ready to link the OnActorBeginOverlap event to the Set Visibility function. Click and drag the white triangular box from OnActorBeginOverlap and drop it on the white triangular box at the Set Visibility function. The following screenshot shows the event correctly linked up:
    Using the Trigger Volume to turn on/off light
  31. Now, build the level and play. Walk through the door from the first room to the second room. The light should turn on.

But what happens when you walk back into the first room? The light remains on, and nothing happens when you walk back into the second room either. In the next example, we will go through how you can toggle the light on and off as you walk in and out of the room. It is an alternative way to implement the control of the light, and I shall leave it as optional for you to try out.
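The Blueprint wiring above, an OnActorBeginOverlap event driving a Set Visibility node, maps directly onto an event/callback pattern. Here is a tiny standalone C++ analogy; `PointLight` and `TriggerVolume` are simplified stand-ins for the actors, not Unreal classes:

```cpp
#include <functional>

// Stand-in for the WalkwayLight actor; SetVisibility plays the role of
// the Set Visibility node.
struct PointLight {
    bool visible = false;                     // starts hidden, as in step 21
    void SetVisibility(bool v) { visible = v; }
};

// Stand-in for the WalkwayLightTrigger volume; the bound callback is the
// wire from OnActorBeginOverlap to Set Visibility in the Event Graph.
struct TriggerVolume {
    std::function<void()> OnActorBeginOverlap;
    void ActorEntered() {
        if (OnActorBeginOverlap) OnActorBeginOverlap();
    }
};
```

Binding `trigger.OnActorBeginOverlap = [&]{ light.SetVisibility(true); };` and then simulating an overlap with `trigger.ActorEntered()` turns the light on, just as walking through the doorway does in the level.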

Using Trigger Volume to toggle light on/off (optional)

The following steps use the Trigger Volume to toggle the light on and off:

  1. We need to replace the Set Visibility node in Event Graph. Click and drag the blue dot from Point Light (WalkwayLight) and drop it onto any blank space. This opens up the contextual menu. The following screenshot shows the contextual menu to place a new node from WalkwayLight:
    Using Trigger Volume to toggle light on/off (optional)
  2. Select Toggle Visibility. This creates an additional new node in Event Graph; we will need to rewire the links as per the following screenshot in order to link OnActorBeginOverlap to Toggle Visibility:
    Using Trigger Volume to toggle light on/off (optional)
  3. The last step is to delete the Set Visibility node, and we are ready to toggle the light on and off as we move in and out of the room. The following screenshot shows the final Event Graph we want. Compile and play the level to see how you can toggle the light on and off.
    Using Trigger Volume to toggle light on/off (optional)
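The only difference from the previous wiring is that Toggle Visibility flips the current state instead of setting it. As a standalone C++ analogy (again a simplified stand-in, not the engine class):

```cpp
// Stand-in for the WalkwayLight actor driven by a Toggle Visibility node.
struct ToggleableLight {
    bool visible = false;                          // light starts hidden
    void ToggleVisibility() { visible = !visible; } // each overlap inverts it
};
```

Each overlap event now inverts the state, so crossing the doorway turns the light on and crossing it again turns it off.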

Summary

We have covered a number of very important concepts about the objects that we use to populate our game world in Unreal Engine 4. We have broken one of the most common types of game object, the Static Mesh, into its most fundamental components in order to understand its construction. We have also compared two types of game objects (Static Meshes and BSP), how they are different, and why each has its place in a game. This will help you decide what kind of objects need to be created, and how, for your game level.

The chapter also briefly introduced textures and materials, how they are created, and how they are applied to meshes. We will go into more detail about Materials in the next chapter, so you might want to read Chapter 4, Material and Light, first before creating and applying materials to your newly created game objects. To help you optimize your game, this chapter also covered the mesh creation pipeline and the concept of LOD. For interactions to take place, we also needed to learn how objects interact and collide with one another in Unreal, and which object properties can be configured to allow different physics interactions.

This chapter also covered our first introduction to Blueprint, the visual scripting system of Unreal Engine 4. Through a simple Blueprint example, we learned how to turn the lights in our level on and off using one of the many useful volumes in Unreal, the Trigger Volume. In the next chapter, we will continue to build on the level we have created with more exciting materials and lights.

 

Chapter 4. Material and Light

In this chapter, we will learn in detail about the materials and the lights in Unreal Engine 4. We have grouped both Material and Light together in this chapter because how an object looks is largely determined by both—material and lighting.

Material is what we apply to the surface of an object, and it affects how the object looks in the game. Material/shader programming is an active research topic, as we constantly strive to improve rendering performance, seeking higher graphical detail, realism, and quality with limited CPU/GPU rendering power. Researchers in this area look for ways to make the models in a game look as real as possible, with as little computation and data as possible.

Lighting is also a very powerful tool in world creation, with many uses. Lights can create a mood for the level and, when used effectively, can focus attention on objects and guide players through your level. Light also creates shadow, and in a game level shadow must be computed artificially. Hence, we will also learn how to get shadows rendered appropriately for our game.

Materials

In the previous chapter, we briefly touched on what a material is and what a texture is. A texture is a simple image file, typically in a format such as .png or .tga. A material is a combination of different elements, including textures, that creates a surface property we apply to objects in the game. We have also briefly covered what UV coordinates are and how we use them to apply a 2D texture to the surface of a 3D object.

So far, we have only learned how to apply the materials that are available in the default Unreal Engine packages. In this chapter, we will dive deeper into how to create our own custom materials in Unreal Engine 4. Fundamentally, material creation falls within the scope of an artist. Special customized textures are sometimes hand painted by 2D artists using tools such as Photoshop, or taken from photographs of the exact objects we want, or of similar objects. Textures can also be tweaked from an existing texture collection to create the customized material needed for the 3D models. Due to the vast number of realistic textures needed, textures are sometimes generated algorithmically by programmers to allow more control over the final look. This is also an important research area for advancing materials in computer graphics.

Material manipulation here falls under the scope of a specialized group of programmers known as graphics programmers. They are sometimes also researchers who look into ways to better compress textures, improve rendering performance, and create special dynamic material manipulation.

The Material Editor

In Unreal Engine 4, material manipulation can be achieved using the Material Editor. What this editor offers is the ability to create material expressions. Material expressions work together to create an overall surface property for the material. You can think of them as mathematical formulas that add/multiply together to affect the properties of a material. The Material Editor makes it easy to edit/formulate material expressions to create customized material and provides the capability to quickly preview the changes in the game. Through Unreal's Blueprint capabilities and programming, we can also achieve dynamic manipulation of materials as needed by the game.

The rendering system

The rendering system in Unreal Engine 4 uses the DirectX 11 pipeline, which includes deferred shading, global illumination, lit translucency, and post-processing. Unreal Engine 4 has also started branching out to work with the newer DirectX 12 pipeline for Windows 10, and DirectX 12 capabilities will be made available to all developers.

Physical Based Shading Model

Unreal Engine 4 uses a Physically Based Shading Model (PBSM). This is a concept used in many modern game engines. It uses an approximation of what light actually does in order to give an object its look. Using this concept, we give values (0 to 1) to four properties: Base Color, Roughness, Metallic, and Specular, to approximate the visual properties of a surface.

For example, the bark of a tree trunk is normally brown, rough, and not very reflective. Based on what we know about how bark should look, we would probably set the metallic value low, the roughness high, and the base color to a brown, with a low specular value.

This makes material creation more intuitive: we describe the physical properties of the surface and let the engine work out how light reacts to it, instead of the old method of hand-tuning values to approximate how light should behave.

For those who are familiar with the old terms used to describe material properties, you can think of it as having Diffuse Color and Specular Power replaced by Base Color, Metallic, and Roughness.

The advantage of using a physically based shading model is that we can approximate material properties more accurately.
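One way to see why these inputs are intuitive is to look at how a metallic workflow typically derives its diffuse and specular colors from them. The sketch below follows the commonly published formulas for this workflow (diffuse fades out as Metallic rises; specular blends from a small dielectric reflectance toward the base color); the exact constants used inside the engine may differ:

```cpp
struct Color { float r, g, b; };

// Metals have no diffuse reflection, so the diffuse color fades to
// black as Metallic approaches 1:
//   diffuse = BaseColor * (1 - Metallic)
Color DiffuseColor(Color base, float metallic) {
    float k = 1.0f - metallic;
    return { base.r * k, base.g * k, base.b * k };
}

// The specular color blends from a small dielectric reflectance
// (scaled by the Specular input) toward the base color:
//   specular = lerp(0.08 * Specular, BaseColor, Metallic)
Color SpecularColor(Color base, float metallic, float specular) {
    float f0 = 0.08f * specular;    // dielectric reflectance at normal incidence
    return { f0 + (base.r - f0) * metallic,
             f0 + (base.g - f0) * metallic,
             f0 + (base.b - f0) * metallic };
}
```

For a pure metal (Metallic = 1), the diffuse term vanishes entirely and the base color drives the specular reflection instead, which matches how real metals behave.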

High Level Shading Language

The Material Editor enables visual scripting of the High Level Shading Language (HLSL) using a network of nodes and connections. Those who are completely new to the concept of shaders or HLSL should read the next section about shaders, DirectX, and HLSL first, so that you have a basic foundation in how the computer renders material information on screen. HLSL is a proprietary shading language developed by Microsoft; OpenGL has its own version, known as GLSL. HLSL is the programming language used to program the stages of the graphics pipeline. It uses variables similar to C and provides many intrinsic functions that are already written and available simply by calling them. HLSL shaders can be compiled at author time or at runtime, and set at runtime into the appropriate pipeline stage.

Getting started

To open the Material Editor in Unreal Engine 4, go to Content Browser | Material and double-click on any material asset. Alternatively, you can select a material asset, right-click to open the context menu and select Edit to view that asset in the Material Editor.

If you want to learn how to create a new material, you can try out the example, which is covered in the upcoming section.

Creating a simple custom material

We will continue to use the level we have created. Open Chapter3_Level and save it as a new level called Chapter4_Level to prevent overwriting what we completed at the end of the previous chapter.

To create a new Material asset in our game package, go to Content Browser | Material. With the Material folder selected, right-click to open the contextual menu and navigate to New Asset | Material. This creates the new material in the Material folder (we place assets in logical folders so that we can find game assets easily). Alternatively, you can go to Content Browser | New | Material.

Creating a simple custom material

Rename the new material to MyMaterial. The following screenshot shows the new MyMaterial correctly created:

Creating a simple custom material

Note that the thumbnail display for the new MyMaterial shows a grayed-out checkered material. This is the default material when no material has been applied.

To open the Material Editor to start designing our material, double-click on MyMaterial. The following screenshot shows the Material Editor with a blank new material. The spherical preview of the material shows up as black since no properties have been defined yet.

Creating a simple custom material

Let's start to define some properties for the MyMaterial node to create our very own unique material. Base Color, Metallic, and Roughness are the three values we will learn to configure first.

Base Color is defined by red, green, and blue values in the form of a vector. To set it, drag and drop Constant3Vector from MyPalette on the right-hand side into the main window where the MyMaterial node is. Alternatively, you can right-click to open the context menu and type vector into the search box to filter the list; click and select Constant3Vector to create the node. Double-click on the Constant3Vector to display the Color Picker window. The following screenshot shows the Constant3Vector setting we want to use to create a material for a red wall. (R = 0.4, G = 0.0, B = 0.0, H = 0.0, S = 1.0, V = 0.4):

Creating a simple custom material

Connect the Constant3Vector to the MyMaterial node as shown in the following screenshot by clicking and dragging from the small circle from the Constant3Vector node to the small circle next to the Base Color property in the MyMaterial node. This Constant3Vector node now provides the base color to the material. Notice how the spherical preview on the left updates to show the new color. If the color is not updated automatically, make sure that the Live Preview setting on the top ribbon is selected.

Creating a simple custom material

Now, let us set the Metallic value for the material. This property takes a numerical value from 0 to 1, where 1 represents a fully metallic surface. To create an input for a value, click and drag Constant from MyPalette, or right-click in the Material Editor to open the menu, type Constant into the search box to filter, and select Constant from the filtered list. To edit the value in the constant, click on the Constant node to display the Details window and fill in the value. The following screenshot shows how the material would look if Metallic is set to 1:

Creating a simple custom material

After seeing how the Metallic value affects the material, let us see what Roughness does. Roughness also takes a Constant value from 0 to 1, where 0 is completely smooth and makes the surface very reflective. The left-hand screenshot shows how the material looks when Roughness is set to 0, whereas the right-hand screenshot shows how the material will look when Roughness is set to 1:

Creating a simple custom material

We want to use this new material to texture the walls. So, we have set Metallic as 0.3 and Roughness as 0.7. The following screenshot shows the final settings we have for our first custom material:

Creating a simple custom material

Go to MyMaterial in Content Browser and duplicate MyMaterial. Rename it MyWall_Grey. Change the base color to gray using the following values as shown in the picker node for the Constant3Vector value for Base Color. (R = 0.185, G = 0.185, B = 0.185, H = 0.0, S = 0.0, V = 0.185):

Creating a simple custom material

The following screenshot shows the links for the MyWall_Grey node. (Metallic = 0.3, Roughness = 0.7):

Creating a simple custom material

Creating custom material using simple textures

To create a material using textures, we must first select a texture that is suitable. Textures can be created by artists or taken from photos of materials. For learning purposes, you can find suitable free source images from the Web, such as www.textures.com, and use them. Remember to check for conditions of usage and other license-related clauses, if you plan to publish it in a game.

There are two types of textures we need for a custom material using a simple texture. First, the actual texture that we want to use. For now, let us keep this selection simple and straightforward: select this texture based on its color, and it should have the overall look of what you want the material to be. Next, we need a normal map texture. As you may recall, a normal map controls the bumps on a surface and gives the material its grooves. These two textures work together to give you a realistic-looking material that you can use in your game.

In this example, we will create another wood texture that we will use to replace the wood texture from the default package that we have already applied in the room.

We will start by importing the textures that we need into Unreal Engine. Go to Content Browser | Textures. Then click on the Import button at the top. This opens up a window to browse to the location of your texture. Navigate to the folder where your texture is saved, select the texture, and click on Open. Note that if you are importing textures whose dimensions are not powers of two (256 x 256, 1024 x 1024, and so on), you will see a warning message. Non-power-of-two textures should be avoided because they use memory poorly. If you are importing the example images that I am using, they have already been converted to power-of-two sizes, so you will not see this warning on screen.
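The power-of-two check the importer performs can be expressed with a classic bit trick: a positive integer is a power of two exactly when it has a single bit set. A minimal standalone C++ sketch (the helper names are hypothetical, not editor API):

```cpp
// A texture dimension is a power of two when exactly one bit is set:
// n & (n - 1) clears the lowest set bit, leaving 0 only for powers of two.
bool IsPowerOfTwo(unsigned n) {
    return n != 0 && (n & (n - 1)) == 0;
}

// The importer would warn when either dimension fails the check.
bool TextureNeedsWarning(unsigned width, unsigned height) {
    return !IsPowerOfTwo(width) || !IsPowerOfTwo(height);
}
```

Power-of-two sizes matter because mipmap generation and GPU memory layout work cleanly when each dimension halves evenly down to 1.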

Import both T_Wood_Light and T_Wood_Light_N. T_Wood_Light will be used as the main texture, and T_Wood_Light_N is the normal map texture we will use for this wood.

Next, we follow the same steps to create a new material as in the previous example. Go to Content Browser | Material. With the Material folder selected, right-click to open the contextual menu and navigate to New Asset | Material. Rename the new material MyWood.

Now, instead of selecting Constant3Vector to provide values to the base color, we will use TextureSample. Go to MyPalette and type in Texture to filter the list. Select TextureSample, drag and drop it into the Material Editor. Click on the TextureSample node to display the Details panel, as shown in the following screenshot. On the Details panel, go to Material Expression Texture Base and click on the small arrow next to it. This opens up a popup with all the suitable assets that you can use. Scroll down to select T_Wood_Light.

Creating custom material using simple textures

Now, we have configured TextureSample with the wood texture that we have imported into the editor earlier. Connect TextureSample by clicking on the white hollow circle connector, dragging it and dropping it on the Base Color connector on the MyWood node.

Repeat the same steps to create a TextureSample node for the T_Wood_Light_N normal map texture and connect it to the Normal input for MyWood.

The following screenshot shows the settings that we want to have for MyWood. To have a little glossy feel for our wood texture, set Roughness to 0.2 by using a Constant node. (Recap: drag and drop a Constant node from MyPalette and set the value to 0.2, connect it to the Roughness input of MyWood.)

Creating custom material using simple textures

Using custom materials to transform the level

Using the custom materials that we have created in the previous two examples, we will replace the current materials that we have used.

The following screenshot shows the before and after look of the first room. Notice how the new custom materials have transformed the room into a modern looking room.

Using custom materials to transform the level

From the preceding screenshot, we also have added a Point Light and placed it onto the lamp prop, making it seem to be emitting light. The following screenshot shows the Point Light setting we have used (Light Intensity = 1000.0, Attenuation Radius = 1000.0):

Using custom materials to transform the level

Next, we added a ceiling to cover up the room. The ceiling of the wall uses the same box geometry as the rest of the walls. We have applied the M_Basic_Wall material onto it.

Then, we use the red wall material (MyMaterial) to replace the material on wall with the door frame. The gray wall material (MyWall_Grey) is used to replace the brick material for the walls at the side. The glossy wood material (MyWood) is used to replace the wooden floor material.

The rendering system

The rendering system in Unreal Engine 4 uses the DirectX 11 pipeline, which includes deferred shading, global illumination, lit translucency, and post-processing. Unreal Engine 4 has also started branching out to support the newer DirectX 12 pipeline on Windows 10, and DirectX 12 capabilities will be made available to all developers.

Physically Based Shading Model

Unreal Engine 4 uses a Physically Based Shading Model, a concept used in many modern game engines. It approximates how light interacts with a surface in order to give an object its visual properties. Using this concept, we assign values (from 0 to 1) to four properties: Base Color, Roughness, Metallic, and Specular, to approximate a surface's appearance.

For example, the bark of a tree trunk is normally brown, rough, and not very reflective. Based on what we know about how bark should look, we would probably set Metallic to a low value, Roughness to a high value, Specular to a low value, and Base Color to a shade of brown.
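Sketching this in code can make the mapping concrete. Here is a minimal C++ sketch (the struct and the exact numbers are our own illustration, not engine types or values from any real asset) of the bark described above as a set of clamped parameters:

```cpp
#include <algorithm>
#include <cassert>

// Minimal sketch of a physically based material description.
// Each scalar input is clamped to the [0, 1] range described in the text.
struct PBSMaterial {
    float baseColor[3]; // linear RGB, each channel in [0, 1]
    float metallic;     // 0 = non-metal (dielectric), 1 = pure metal
    float roughness;    // 0 = mirror-smooth, 1 = fully rough
    float specular;     // reflectance scale for non-metal surfaces
};

static float Clamp01(float v) { return std::clamp(v, 0.0f, 1.0f); }

// Tree bark: brown, rough, non-metallic, not very reflective.
PBSMaterial MakeBark() {
    PBSMaterial bark{};
    bark.baseColor[0] = 0.35f; // R: a brownish tint (illustrative values)
    bark.baseColor[1] = 0.20f; // G
    bark.baseColor[2] = 0.08f; // B
    bark.metallic  = Clamp01(0.05f); // low: bark is not metal
    bark.roughness = Clamp01(0.90f); // high: bark scatters light diffusely
    bark.specular  = Clamp01(0.10f); // low: barely reflective
    return bark;
}
```

The point is not the specific numbers but the workflow: you describe the surface, then pick parameter values that match the description.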

This makes material creation more intuitive: visual properties are governed by how light actually reacts with a surface, instead of the older approach of hand-tuning values to approximate how we think light should behave.

For those who are familiar with the older terms used to describe material properties, you can think of Diffuse Color and Specular Power as having been replaced by Base Color, Metallic, and Roughness.

The advantage of a physically based shading model is that material properties can be approximated with greater accuracy.

High Level Shading Language

The Material Editor enables visual scripting of the High Level Shading Language (HLSL) using a network of nodes and connections. Those who are completely new to the concept of shaders or HLSL should read the next section about shaders, DirectX, and HLSL first, so that you have a basic foundation in how the computer renders material information on the screen. HLSL is a proprietary shading language developed by Microsoft; OpenGL has its own equivalent, known as GLSL. HLSL is used to program the programmable stages of the graphics pipeline. Its variables are similar to those in C, and it provides many intrinsic functions that are already written and available for use simply by calling them. HLSL shaders can be compiled at author time or at runtime, and bound at runtime to the appropriate pipeline stage.
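To make the idea concrete, here is a short C++ sketch (not actual HLSL) of the kind of per-pixel arithmetic a basic diffuse pixel shader performs; in HLSL, the `dot` and `saturate` calls below would be built-in intrinsics, and the function would run once per pixel on the GPU:

```cpp
#include <algorithm>
#include <cassert>

struct Vec3 { float x, y, z; };

// C++ stand-ins for the HLSL 'dot' and 'saturate' intrinsics.
float dot3(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
float saturate(float v) { return std::clamp(v, 0.0f, 1.0f); }

// Lambertian diffuse term: light intensity falls off with the angle
// between the surface normal and the direction to the light, and is
// clamped to zero when the light is behind the surface.
float LambertDiffuse(const Vec3& normal, const Vec3& toLight) {
    return saturate(dot3(normal, toLight));
}
```

A surface lit head-on receives full intensity, a surface lit edge-on receives none; an HLSL shader expresses exactly this logic, just in its own syntax.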

Getting started

To open the Material Editor in Unreal Engine 4, go to Content Browser | Material and double-click on any material asset. Alternatively, you can select a material asset, right-click to open the context menu and select Edit to view that asset in the Material Editor.

If you want to learn how to create a new material, you can try out the example, which is covered in the upcoming section.

Creating a simple custom material

We will continue to use the levels we have created. Open Chapter3Level.umap and save it as Chapter4Level.umap so that we do not overwrite what we completed at the end of the previous chapter.

To create a new Material asset in our game package, go to Content Browser | Material. With the Material folder selected, right-click to open the contextual menu and navigate to New Asset | Material. This creates the new material in the Material folder (we want to place assets in logical folders so that we can find game assets easily). Alternatively, you can go to Content Browser | New | Material.

Creating a simple custom material

Rename the new material to MyMaterial. The following screenshot shows the new MyMaterial correctly created:

Creating a simple custom material

Note that the thumbnail display for the new MyMaterial shows a grayed-out checkered material. This is the default material when no material has been applied.

To open the Material Editor to start designing our material, double-click on MyMaterial. The following screenshot shows the Material Editor with a blank new material. The spherical preview of the material shows up as black since no properties have been defined yet.

Creating a simple custom material

Let's start to define some properties for the MyMaterial node to create our very own unique material. Base Color, Metallic, and Roughness are the three values we will learn to configure first.

Base Color is defined by red, green, and blue values in the form of a vector. To provide these values, we will drag and drop Constant3Vector from MyPalette on the right-hand side into the main window where the MyMaterial node is. Alternatively, you can right-click to open the context menu and type vector into the search box to filter the list. Click and select Constant3Vector to create the node. Double-click on the Constant3Vector to display the Color Picker window. The following screenshot shows the Constant3Vector settings we want to use to create a material for a red wall (R = 0.4, G = 0.0, B = 0.0, H = 0.0, S = 1.0, V = 0.4):

Creating a simple custom material
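The two sets of numbers in the Color Picker describe the same color: RGB and HSV are just two views of it. A small sketch of the standard RGB-to-HSV conversion confirms that (0.4, 0.0, 0.0) corresponds to hue 0, full saturation, and value 0.4:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct HSV { float h, s, v; }; // h in degrees [0, 360); s and v in [0, 1]

// Standard RGB (0-1 per channel) to HSV conversion.
HSV RgbToHsv(float r, float g, float b) {
    float maxc  = std::max({r, g, b});
    float minc  = std::min({r, g, b});
    float delta = maxc - minc;
    HSV out;
    out.v = maxc;                                  // value = brightest channel
    out.s = (maxc == 0.0f) ? 0.0f : delta / maxc;  // saturation
    if (delta == 0.0f)   out.h = 0.0f;             // achromatic (gray)
    else if (maxc == r)  out.h = 60.0f * std::fmod((g - b) / delta + 6.0f, 6.0f);
    else if (maxc == g)  out.h = 60.0f * ((b - r) / delta + 2.0f);
    else                 out.h = 60.0f * ((r - g) / delta + 4.0f);
    return out;
}
```

The same function explains the gray wall color used later in this chapter: equal channels (0.185 across R, G, and B) give zero saturation, so only the value remains.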

Connect the Constant3Vector to the MyMaterial node as shown in the following screenshot by clicking and dragging from the small circle on the Constant3Vector node to the small circle next to the Base Color property on the MyMaterial node. This Constant3Vector node now provides the base color to the material. Notice how the spherical preview on the left updates to show the new color. If the color does not update automatically, make sure that the Live Preview setting on the top ribbon is selected.

Creating a simple custom material

Now, let us set the Metallic value for the material. This property takes a numerical value from 0 to 1, where 1 means a fully metallic surface. To create an input for the value, click and drag Constant from MyPalette, or right-click in the Material Editor to open the menu, type Constant into the search box, and select Constant from the filtered list. To edit the value in the constant, click on the Constant node to display the Details window and fill in the value. The following screenshot shows how the material would look if Metallic is set to 1:

Creating a simple custom material

After seeing how the Metallic value affects the material, let us see what Roughness does. Roughness also takes a Constant value from 0 to 1, where 0 is completely smooth, making the surface very reflective. The left-hand screenshot shows how the material looks when Roughness is set to 0, whereas the right-hand screenshot shows how it looks when Roughness is set to 1:

Creating a simple custom material
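The glossy-versus-dull behavior can be sketched numerically. The following C++ toy (an arbitrary Blinn-Phong-style mapping, not the engine's actual shading math) maps roughness to a specular exponent: a smooth surface concentrates its highlight into a tight spot, so its intensity falls off sharply away from the mirror direction, while a rough surface's response stays broad and flat:

```cpp
#include <cassert>
#include <cmath>

// Toy illustration only: map roughness to a Blinn-Phong-style specular
// exponent (smooth -> large exponent -> tight, sharp highlight) and
// evaluate the highlight for a given cosine of the angle between the
// surface normal and the half-vector. The exponent mapping is an
// arbitrary choice made for this demonstration.
float HighlightIntensity(float roughness, float cosAngle) {
    float exponent = std::exp2((1.0f - roughness) * 10.0f); // 1 .. 1024
    return std::pow(cosAngle, exponent);
}
```

Dead-on the mirror direction, both surfaces return full intensity; even slightly off-axis, the smooth surface's highlight has already faded to almost nothing, which is what reads as glossiness.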

We want to use this new material to texture the walls, so we have set Metallic to 0.3 and Roughness to 0.7. The following screenshot shows the final settings for our first custom material:

Creating a simple custom material

Go to Content Browser and duplicate MyMaterial. Rename the copy MyWall_Grey. Change the base color to gray by setting the Constant3Vector connected to Base Color to the following values (R = 0.185, G = 0.185, B = 0.185, H = 0.0, S = 0.0, V = 0.185):

Creating a simple custom material

The following screenshot shows the links for the MyWall_Grey node. (Metallic = 0.3, Roughness = 0.7):

Creating a simple custom material

Creating custom material using simple textures

To create a material using textures, we must first select a suitable texture. Textures can be created by artists or taken from photos of materials. For learning purposes, you can find suitable free source images on the Web, for example at www.textures.com. Remember to check the conditions of use and other license-related clauses if you plan to use them in a published game.

There are two types of textures we need for a custom material based on a simple texture. The first is the actual texture we want to use. For now, let us keep this selection simple and straightforward: pick a texture whose color and overall appearance match what you want the material to look like. The second is a normal map. Recall that a normal map controls the apparent bumps and grooves on a surface. Together, these two textures give you a realistic-looking material that you can use in your game.
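Concretely, each texel of a normal map stores a direction rather than a color: the 0-1 channel values are remapped to the -1 to 1 range to recover a per-pixel surface normal. A minimal sketch of that unpacking step (ignoring the compressed normal formats an engine may actually use):

```cpp
#include <cassert>

struct Normal { float x, y, z; };

// A normal map stores a direction per texel, remapped into the [0, 1]
// color range for storage. Unpacking reverses that remap back to [-1, 1].
// The "flat" texel (0.5, 0.5, 1.0) unpacks to (0, 0, 1), a normal pointing
// straight out of the surface, which is why an unperturbed normal map
// looks uniformly bluish.
Normal UnpackNormal(float r, float g, float b) {
    return Normal{ r * 2.0f - 1.0f, g * 2.0f - 1.0f, b * 2.0f - 1.0f };
}
```

Texels that deviate from that flat blue tilt the normal, and the lighting responds as if the surface itself had grooves.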

In this example, we will create another wood texture that we will use to replace the wood texture from the default package that we have already applied in the room.

Here, we will start by importing the textures that we need into Unreal Engine. Go to Content Browser | Textures, then click on the Import button at the top. This opens a window to browse to the location of your texture. Navigate to the folder where your texture is saved, select the texture, and click on Open. Note that if you import textures whose dimensions are not powers of two (256 x 256, 1024 x 1024, and so on), you will see a warning message. Non-power-of-two textures should be avoided because they use memory inefficiently. If you are importing the example images that I am using, they have already been converted to power-of-two sizes, so you will not see this warning.

Import both T_Wood_Light and T_Wood_Light_N. T_Wood_Light will be used as the main texture, and T_Wood_Light_N is the normal map texture that we will use for this wood.

Next, we follow the same steps as in the previous example to create a new material. Go to Content Browser | Material. With the Material folder selected, right-click to open the contextual menu and navigate to New Asset | Material. Rename the new material MyWood.

Now, instead of selecting Constant3Vector to provide values to the base color, we will use TextureSample. Go to MyPalette and type Texture to filter the list. Select TextureSample and drag and drop it into the Material Editor. Click on the TextureSample node to display the Details panel, as shown in the following screenshot. In the Details panel, go to Material Expression Texture Base and click on the small arrow next to it. This opens a popup listing all the suitable assets you can use. Scroll down and select T_Wood_Light.

Creating custom material using simple textures

Now we have configured TextureSample with the wood texture that we imported into the editor earlier. Connect TextureSample by clicking on the white hollow circle connector, dragging it, and dropping it on the Base Color connector of the MyWood node.

Repeat the same steps to create a TextureSample node for the T_Wood_Light_N normal map texture and connect it to the Normal input for MyWood.

The following screenshot shows the settings that we want for MyWood. To give the wood a slightly glossy feel, set Roughness to 0.2 using a Constant node. (Recap: drag and drop a Constant node from MyPalette, set its value to 0.2, and connect it to the Roughness input of MyWood.)

Creating custom material using simple textures

Using custom materials to transform the level

Using the custom materials that we created in the previous two examples, we will now replace the materials currently applied in the level.

The following screenshot shows the before and after look of the first room. Notice how the new custom materials have transformed it into a modern-looking room.

Using custom materials to transform the level

As shown in the preceding screenshot, we have also added a Point Light and placed it on the lamp prop, making the lamp appear to emit light. The following screenshot shows the Point Light settings we used (Light Intensity = 1000.0, Attenuation Radius = 1000.0):

Using custom materials to transform the level
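As a rough sketch of what Attenuation Radius does (the falloff function below is a toy inverse-square curve, not the engine's actual falloff model), a point light's contribution fades with distance and is cut off entirely beyond the attenuation radius:

```cpp
#include <cassert>

// Toy point-light falloff: intensity decreases with the square of the
// distance and drops to zero beyond the attenuation radius. The engine's
// real falloff curve differs, but the role of Attenuation Radius is the
// same: past it, the light contributes nothing to the scene.
float PointLightContribution(float intensity, float distance,
                             float attenuationRadius) {
    if (distance >= attenuationRadius) return 0.0f;
    return intensity / (1.0f + distance * distance);
}
```

Keeping the radius as tight as the scene allows is also a performance habit: objects outside it never need to be lit by this light.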

Next, we added a ceiling to cover the room. The ceiling uses the same Box geometry as the walls, and we have applied the M_Basic_Wall material to it.

Then, we use the red wall material (MyMaterial) to replace the material on the wall with the door frame. The gray wall material (MyWall_Grey) replaces the brick material on the side walls, and the glossy wood material (MyWood) replaces the wooden floor material.


Using custom materials to transform the level

From the preceding screenshot, we also have added a Point Light and placed it onto the lamp prop, making it seem to be emitting light. The following screenshot shows the Point Light setting we have used (Light Intensity = 1000.0, Attenuation Radius = 1000.0):

Using custom materials to transform the level

Next, we added a ceiling to cover up the room. The ceiling of the wall uses the same box geometry as the rest of the walls. We have applied the M_Basic_Wall material onto it.

Then, we use the red wall material (MyMaterial) to replace the material on wall with the door frame. The gray wall material (MyWall_Grey) is used to replace the brick material for the walls at the side. The glossy wood material (MyWood) is used to replace the wooden floor material.

Getting started

To open the Material Editor in Unreal Engine 4, go to Content Browser | Material and double-click on any material asset. Alternatively, you can select a material asset, right-click to open the context menu and select Edit to view that asset in the Material Editor.

If you want to learn how to create a new material, you can try out the example, which is covered in the upcoming section.

Creating a simple custom material

We will continue to use the level we have created. Open Chapter3Level.umap and save it as Chapter4Level.umap so that we do not overwrite what we completed at the end of the previous chapter.

To create a new Material asset in our game package, go to Content Browser | Material. With the Material folder selected, right-click to open the contextual menu and navigate to New Asset | Material. This creates the new material in the Material folder (we want to place assets in logical folders so that we can find game assets easily). Alternatively, you can go to Content Browser | New | Material.

Creating a simple custom material

Rename the new material to MyMaterial. The following screenshot shows the new MyMaterial correctly created:

Creating a simple custom material

Note that the thumbnail display for the new MyMaterial shows a grayed-out checkered material. This is the default material when no material has been applied.

To open the Material Editor to start designing our material, double-click on MyMaterial. The following screenshot shows the Material Editor with a blank new material. The spherical preview of the material shows up as black since no properties have been defined yet.

Creating a simple custom material

Let's start to define some properties for the MyMaterial node to create our very own unique material. Base Color, Metallic, and Roughness are the three values we will learn to configure first.

Base Color is defined by red, green, and blue values in the form of a vector. To supply it, we will drag and drop Constant3Vector from MyPalette on the right-hand side into the main window where the MyMaterial node is. Alternatively, you can right-click to open the context menu and type vector into the search box to filter the list, then click Constant3Vector to create the node. Double-click on the Constant3Vector node to display the Color Picker window. The following screenshot shows the Constant3Vector setting we want to use to create a material for a red wall. (R = 0.4, G = 0.0, B = 0.0, H = 0.0, S = 1.0, V = 0.4):

Creating a simple custom material

Connect the Constant3Vector to the MyMaterial node as shown in the following screenshot by clicking and dragging from the small circle on the Constant3Vector node to the small circle next to the Base Color property on the MyMaterial node. The Constant3Vector node now provides the base color for the material. Notice how the spherical preview on the left updates to show the new color. If the color does not update automatically, make sure that the Live Preview setting on the top ribbon is selected.

Creating a simple custom material

Now, let us set the Metallic value for the material. This property takes a numerical value from 0 to 1, where 1 represents a fully metallic surface. To create an input for the value, click and drag Constant from MyPalette, or right-click in the Material Editor to open the menu, type Constant into the search box, and select Constant from the filtered list. To edit the value, click on the Constant node to display the Details window and fill in the value. The following screenshot shows how the material looks when Metallic is set to 1:

Creating a simple custom material

After seeing how the Metallic value affects the material, let us see what Roughness does. Roughness also takes a Constant value from 0 to 1, where 0 is completely smooth, making the surface very reflective, and 1 is completely rough, scattering reflections. The left-hand screenshot shows how the material looks when Roughness is set to 0, whereas the right-hand screenshot shows how it looks when Roughness is set to 1:

Creating a simple custom material

We want to use this new material to texture the walls, so we have set Metallic to 0.3 and Roughness to 0.7. The following screenshot shows the final settings for our first custom material:

Creating a simple custom material

Go to MyMaterial in Content Browser and duplicate it. Rename the copy MyWall_Grey. Change the base color to gray by setting the Constant3Vector node connected to Base Color to the following values in the Color Picker. (R = 0.185, G = 0.185, B = 0.185, H = 0.0, S = 0.0, V = 0.185):

Creating a simple custom material

The following screenshot shows the links for the MyWall_Grey node. (Metallic = 0.3, Roughness = 0.7):

Creating a simple custom material

Creating custom material using simple textures

To create a material using textures, we must first select a suitable texture. Textures can be created by artists or taken from photographs of real materials. For learning purposes, you can find suitable free source images on the Web, for example at www.textures.com. Remember to check the conditions of use and other license-related clauses if you plan to use the textures in a published game.

We need two textures for a custom material of this kind. The first is the main texture itself. For now, let us keep this selection simple and straightforward: choose a texture whose color and overall appearance match what you want the material to look like. The second is a normal map. Recall that a normal map controls the apparent bumps on a surface; it supplies the grooves in a material. Together, these two textures give you a realistic-looking material that you can use in your game.

In this example, we will create another wood texture that we will use to replace the wood texture from the default package that we have already applied in the room.

Here, we will start by importing the textures that we need into Unreal Engine. Go to Content Browser | Textures. Then click on the Import button at the top. This opens a window to browse to the location of your texture. Navigate to the folder where your texture is saved, select the texture, and click on Open. Note that if you import textures whose dimensions are not powers of two (256 x 256, 1024 x 1024, and so on), you will see a warning message. Non-power-of-two textures should be avoided because they use memory inefficiently. If you are importing the example images that I am using, they have already been converted to power-of-two sizes, so you will not see this warning on screen.

Import both T_Wood_Light and T_Wood_Light_N. T_Wood_Light will be used as the main texture, and T_Wood_Light_N is the normal map that we will use for this wood.

Next, we follow the same steps as in the previous example to create a new material. Go to Content Browser | Material. With the Material folder selected, right-click to open the contextual menu and navigate to New Asset | Material. Rename the new material MyWood.

Now, instead of selecting Constant3Vector to provide values to the base color, we will use TextureSample. Go to MyPalette and type in Texture to filter the list. Select TextureSample, drag and drop it into the Material Editor. Click on the TextureSample node to display the Details panel, as shown in the following screenshot. On the Details panel, go to Material Expression Texture Base and click on the small arrow next to it. This opens up a popup with all the suitable assets that you can use. Scroll down to select T_Wood_Light.

Creating custom material using simple textures

We have now configured TextureSample with the wood texture that we imported into the editor earlier. Connect TextureSample by clicking on its white hollow circle connector, dragging it, and dropping it on the Base Color connector of the MyWood node.

Repeat the same steps to create a TextureSample node for the T_Wood_Light_N normal map texture and connect it to the Normal input for MyWood.

The following screenshot shows the settings that we want for MyWood. To give our wood texture a slightly glossy feel, set Roughness to 0.2 using a Constant node. (Recap: drag and drop a Constant node from MyPalette, set its value to 0.2, and connect it to the Roughness input of MyWood.)

Creating custom material using simple textures

Using custom materials to transform the level

Using the custom materials that we have created in the previous two examples, we will now replace the materials currently used in the level.

The following screenshot shows the before and after look of the first room. Notice how the new custom materials have transformed it into a modern-looking room.

Using custom materials to transform the level

As the preceding screenshot shows, we have also added a Point Light and placed it on the lamp prop, making the lamp appear to emit light. The following screenshot shows the Point Light settings we have used (Light Intensity = 1000.0, Attenuation Radius = 1000.0):

Using custom materials to transform the level

Next, we added a ceiling to close off the room. The ceiling uses the same box geometry as the rest of the walls, and we have applied the M_Basic_Wall material to it.

Then, we used the red wall material (MyMaterial) to replace the material on the wall with the door frame. The gray wall material (MyWall_Grey) replaces the brick material on the side walls, and the glossy wood material (MyWood) replaces the wooden floor material.

Rendering pipeline

For an image to appear on the screen, the computer must first draw it. The sequence of steps that creates a 2D representation of a scene from both 2D and 3D data is known as the graphics or rendering pipeline. Hardware such as the central processing unit (CPU) and the graphics processing unit (GPU) calculates and manipulates the input data needed to draw the 3D scene.

As games are interactive and rely heavily on real-time rendering, the amount of data needed to render moving scenes is huge. Position, color, and all other display information must be calculated for each vertex of every triangle polygon, while also accounting for the effect of overlapping polygons, before anything can be displayed correctly on screen. Hence, it is crucial to make the best use of both CPU and GPU capabilities to process this data and deliver it to the screen on time. Continuous improvement in this area over the years has allowed better-quality images to be rendered at higher frame rates. Today, games should run at a minimum frame rate of 30 fps for players to have a reasonable gaming experience.

The rendering pipeline today uses a series of programmable shaders to manipulate information about an image before displaying it on the screen. We'll cover shaders and the Direct3D 11 graphics pipeline in more detail in the upcoming sections.

Shaders

Shaders can be thought of as small programs that tell the computer how an image should be drawn. Different shaders govern different properties of an image. For example, vertex shaders provide properties such as position, color, and UV coordinates for individual vertices; another important purpose of a vertex shader is to transform vertices from 3D coordinates into 2D screen space for display. Pixel shaders process pixels to provide color, z-depth, and alpha value information. The geometry shader processes data at the level of a primitive (triangle, line, or vertex).

An image's data is passed from one shader to the next for processing before it is finally output through a frame buffer.

Shaders are also used to incorporate post-processing effects such as Volumetric Lighting, HDR, and Bloom effects to accentuate images in a game.

The language in which shaders are programmed depends on the target environment. For Direct3D, the official language is the High-Level Shading Language (HLSL); for OpenGL, it is the OpenGL Shading Language (GLSL).

Since most shaders are coded for a GPU, the major GPU makers Nvidia and AMD have also developed their own technologies. Nvidia developed the Cg shading language (deprecated after version 3.1 in 2012), which could output both OpenGL and Direct3D shaders, and AMD developed the low-level Mantle API (used in some games released in 2014, such as Battlefield 4, and gaining popularity among developers at the time). Apple also released its own shading language, known as the Metal Shading Language, for iOS 8 in September 2014 to improve graphics performance on iOS. Khronos announced a next-generation graphics API, the successor to OpenGL, known as Vulkan in early 2015, which is strongly supported by member companies such as Valve Corporation.

The following image is taken from the Direct3D 11 graphics pipeline documentation on MSDN (http://msdn.microsoft.com/en-us/library/windows/desktop/ff476882(v=vs.85).aspx). It shows the stages that data flows through to generate real-time graphics for our game, known as the rendering pipeline state representation.

Shaders

The information here is taken from the Microsoft MSDN page. You can use the Direct3D 11 API to configure all of the stages. Stages such as the vertex, hull, domain, geometry, and pixel shaders (those with rounded rectangular blocks) are programmable using HLSL. The ability to configure this pipeline programmatically makes it flexible for game graphics rendering.

What each stage does is explained as follows:

  • Input-assembler: This stage supplies data (in the form of triangles, lines, and points) to the pipeline.
  • Vertex-shader: This stage processes vertices, performing operations such as transformation, skinning, and lighting. The number of vertices does not change in this stage.
  • Geometry-shader: This stage processes entire geometry primitives: triangles, lines, and single vertices for points.
  • Stream-output: This stage streams primitive data from the pipeline to memory on its way to the rasterizer.
  • Rasterizer: This stage clips primitives and prepares them for the pixel-shader.
  • Pixel-shader: Pixel manipulation is done here. Each pixel in the primitive is processed, for example, to determine its color.
  • Output-merger: This stage combines the various output data (pixel-shader values, depth, and stencil information) with the contents of the render target and depth/stencil buffers to generate the final pipeline result.
  • Hull-shader, tessellator, and domain-shader: These tessellation stages convert higher-order surfaces to triangles in preparation for rendering.

To help you better visualize what happens in each stage, the following image is a good illustration of a simplified rendering pipeline for vertices only. The image is taken from an old Cg tutorial. Note that different APIs have different pipelines, but they rely on similar basic rendering concepts (source: http://goanna.cs.rmit.edu.au/~gl/teaching/rtr&3dgp/notes/pipeline.html).

Shaders

Example flow of how graphics is displayed:

  • The CPU sends instructions (compiled shading language programs) and geometry data to the graphics processing unit (GPU) on the graphics card.
  • The data is passed into the vertex shader, where vertices are transformed.
  • If a geometry shader is active in the GPU, geometry changes are performed in the scene.
  • If a tessellation shader is active in the GPU, the geometry in the scene can be subdivided. The calculated geometry is triangulated (subdivided into triangles).
  • Triangles are broken down into fragments, and fragment quads are modified according to the fragment shader.
  • To create the feeling of depth, a z-buffer value is set for each fragment, which is then sent to the frame buffer for display.
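The flow above can be sketched as a conceptual toy in plain Python. This is not real GPU code (a real rasterizer fills triangle interiors and runs stages in parallel); the function names and the translation-only "vertex shader" here are illustrative assumptions:

```python
# Conceptual sketch of a simplified rendering pipeline (not real GPU code).
# Each "shader" is just a Python function applied at the matching stage.

def vertex_shader(vertex, offset):
    """Transform a vertex; here, a simple translation."""
    x, y = vertex
    return (x + offset[0], y + offset[1])

def rasterize(triangle):
    """Break a triangle into fragments; here, just its vertices as
    integer pixel positions (a real rasterizer fills the interior)."""
    return [(round(x), round(y)) for x, y in triangle]

def fragment_shader(fragment):
    """Assign a color to each fragment; here, a constant red."""
    return (fragment, (255, 0, 0))

def render(triangle, offset):
    transformed = [vertex_shader(v, offset) for v in triangle]  # vertex stage
    fragments = rasterize(transformed)                          # rasterizer
    return [fragment_shader(f) for f in fragments]              # fragment stage

pixels = render([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], offset=(2, 3))
print(pixels)  # [((2, 3), (255, 0, 0)), ((3, 3), (255, 0, 0)), ((2, 4), (255, 0, 0))]
```

The point of the sketch is the shape of the data flow: geometry goes in, each stage is a replaceable function, and colored fragments come out.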

APIs – DirectX and OpenGL

Both DirectX and OpenGL are collections of application programming interfaces (APIs) used for handling multimedia information in a computer. They are the two most common APIs used today for video cards.

DirectX was created by Microsoft to allow multimedia-related hardware, such as the GPU, to communicate with the Windows system. OpenGL is an open standard that can be used on many operating systems, including Mac OS.

The decision to program against DirectX or OpenGL depends on the operating system of the target machine.

DirectX

Unreal Engine 4 was first launched using DirectX 11. Following the announcement that DirectX 12 would ship with Windows 10, a DirectX 12 branch of Unreal Engine was created from version 4.4 to allow developers to start creating games with the new API.

An easy way to identify APIs that are part of DirectX is that their names all begin with Direct. For computer games, the APIs we are most concerned with are Direct3D, the graphics API for drawing high-performance 3D graphics in games, and DirectSound3D, which handles sound playback.

DirectX APIs are integral to creating high-performance 2D and 3D graphics on the Windows operating system. For example, DirectX 11 is supported on Windows Vista, Windows 7, and Windows 8.1. The latest version of DirectX can be installed through service pack updates, and DirectX 12 ships with Windows 10.

DirectX12

Direct3D 12 was announced in 2014 and has been vastly revamped from Direct3D 11 to provide significant performance improvements. A tech demo video for DirectX 12 is posted on the MSDN blog: http://channel9.msdn.com/Blogs/DirectX-Developer-Blog/DirectX-Techdemo.

(If you are unfamiliar with Direct3D 11 and have not read the Shaders section earlier, read that section before proceeding with the rest of the DirectX section.)

Pipeline state representation

If you recall, we looked at the programmable pipeline for Direct3D 11 in the Shaders section. The following image, repeated from that section (taken from MSDN), shows the series of programmable shaders:

Pipeline state representation

In Direct3D 11, each of the stages is configured independently, and each stage sets states on the hardware independently. Since many stages are capable of setting the same hardware state, this interdependency results in hardware mismatch overhead. The following image illustrates how hardware mismatch overhead happens:

Pipeline state representation

The driver normally records these states from the application (the game) first and waits until draw time, when it is ready to send them to the GPU. At draw time, these states are queried in a control loop before being translated into GPU code for the hardware, in order to render the correct scene for the game. This creates additional overhead to record and query all the states at draw time.

In Direct3D 12, some programmable stages are grouped to form a single object known as a pipeline state object (PSO), so that each hardware state is set only once by the entire group, preventing hardware mismatch overhead. These states can now be used directly, instead of spending resources computing the resulting hardware states before the draw call. This reduces draw call overhead, allowing more draw calls per frame. The PSO in use can still be changed dynamically based on whatever hardware-native instructions and states are required.

Pipeline state representation

Work submission

In Direct3D 11, work submission to the GPU is immediate. What is new in Direct3D 12 is that it uses command lists and bundles that contain the entire information needed to execute a particular workload.

Immediate work submission in Direct3D 11 means that information is passed to the GPU as a single stream of commands; because the full workload is not known up front, these commands are often deferred until the actual work can be done.

When work submission is grouped into a self-contained command list, the driver can precompute all the necessary GPU commands and then send the whole list to the GPU, making Direct3D 12 work submission a more efficient process. Additionally, a bundle can be thought of as a small list of commands grouped to create a particular object. When this object needs to be duplicated on screen, the bundle can be "played back" to recreate it, further reducing the computational time needed in Direct3D 12.
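The record-then-replay idea can be sketched conceptually in plain Python. This is not the real Direct3D 12 API (the class and command names here are hypothetical stand-ins); it only shows the shape of the pattern: commands are recorded once, bundles are replayed for duplicates, and the whole list is submitted in one go.

```python
# Conceptual sketch of Direct3D 12-style command recording (plain Python,
# not the real API): commands are recorded up front into lists and bundles,
# then the whole list is submitted at once instead of one call at a time.

class Bundle:
    """A small, reusable group of recorded commands."""
    def __init__(self):
        self.commands = []
    def record(self, command):
        self.commands.append(command)

class CommandList:
    def __init__(self):
        self.commands = []
    def record(self, command):
        self.commands.append(command)
    def play_bundle(self, bundle):
        # "Playing back" a bundle reuses its pre-recorded commands.
        self.commands.extend(bundle.commands)

def submit(command_list):
    """The driver receives the entire workload in one go."""
    return list(command_list.commands)

# Record the commands that draw one object once...
crate = Bundle()
crate.record("set_crate_material")
crate.record("draw_crate_mesh")

# ...then replay the bundle for every duplicate on screen.
frame = CommandList()
frame.record("clear_render_target")
frame.play_bundle(crate)
frame.play_bundle(crate)
print(submit(frame))
```

The design benefit mirrors the text: because the command list is self-contained, nothing needs to be deferred or re-derived at draw time.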

Resource access

In Direct3D 11, the game creates resource views and binds these views to slots at the shaders. The shaders then read data from these explicitly bound slots during a draw call. If the game wants to draw using different resources, it does so in the next draw call with a different view.

In Direct3D 12, you create resource views by using descriptor heaps. Each descriptor heap can be customized to link a specific shader to specific resources. This flexibility in designing descriptor heaps gives you full control over the resource usage pattern, fully utilizing modern hardware capabilities. You can also describe more than one indexed descriptor heap, making it easy to swap heaps to complete a single draw call.
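A descriptor heap can be pictured as an indexed table of resource views that a shader reads through. The following plain-Python sketch (the function and resource names are illustrative assumptions, not the real API) shows how swapping the bound heap changes which resources a draw uses without rebinding individual slots:

```python
# Conceptual sketch of descriptor heaps (plain Python, not the real API):
# a heap is an indexed table of resource views, and a draw call fetches
# resources through whichever heap is currently bound.

def make_heap(views):
    """Build a descriptor heap: index -> resource view."""
    return dict(enumerate(views))

def draw(heap, indices):
    """A 'shader' fetches its resources by descriptor index."""
    return [heap[i] for i in indices]

wood_heap  = make_heap(["wood_albedo", "wood_normal"])
metal_heap = make_heap(["metal_albedo", "metal_normal"])

# Swapping the bound heap changes the resources; the shader's
# indices (its usage pattern) stay exactly the same.
print(draw(wood_heap,  [0, 1]))   # ['wood_albedo', 'wood_normal']
print(draw(metal_heap, [0, 1]))   # ['metal_albedo', 'metal_normal']
```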

Lights

We briefly went through the types of light in Chapter 1, An Overview of Unreal Engine. Let us do a quick recap first. A Directional Light emits beams of parallel light. A Point Light emits light like a light bulb, from a single point radially outward in all directions. A Spot Light emits light outward in a conical shape, and a Sky Light mimics light from the sky shining down on the objects in the level.

In this chapter, we will learn how to use these basic lights to illuminate an interior area. We have already placed a Point Light in Chapter 2, Creating Your First Level, and learned how to adjust its intensity to 1700. Here in this chapter, we will learn more about the parameters that we can adjust with each type of light to create the lighting that we want.

Let us first view a level that has been illuminated using these Unreal lights. Load Chapter4Level_Prebuilt.umap, build and play the level, and look around. Click on the lights placed in the level and you will notice that most of the lights used are Point Lights or Spot Lights. These two types of light are quite common in interior lighting.

The next section will guide you to extend the level on your own. Alternatively, you can use the Chapter4Level_Prebuilt level to help you along in the creation of your own level since it does take a fair amount of time to create the entire level. If you wish to skip to the next section, feel free to simply use the prebuilt version of the map provided, and go through the other examples in this chapter using the prebuilt map as a reference. However, it will be a great opportunity to revise what you have learned in the previous chapters and extend the level on your own.

Before we embark on the optional exercise to extend the level, let us go through a few tutorial examples on how we can place and configure the different types of light.

Configuring a Point Light with more settings

Open Chapter4Level.umap and rename it Chapter4Level_PointLight.umap.

Go to Modes | Lights, then drag and drop a Point Light into the level. As a Point Light emits light equally in all directions from a single point, Attenuation Radius, Intensity, and Color are the three most commonly configured values for a Point Light.

Attenuation Radius

The following screenshot shows when the Point Light has its default Attenuation Radius of 1000. The radius of the three blue circles is based on the attenuation radius of the Point Light and is used to show its area of effect on the environment.

Attenuation Radius

The following screenshot shows when the attenuation radius is reduced to 500. In this situation, you probably cannot see any difference in the lighting since the radius is still larger than the room itself:

Attenuation Radius

Now, let us take a look at what happens when we make the radius much smaller. The following screenshot shows the difference in light brightness as the radius changes. The image on the left shows an attenuation radius of 500; the image on the right, a radius of 10.

Attenuation Radius

Intensity

Another setting for a Point Light is Intensity, which affects the brightness of the light. You can play around with the Intensity value to adjust the brightness. Before we decide what value to use for this field and how bright we want our light to be, you should be aware of another setting: Use Inverse Squared Falloff.

Use Inverse Squared Falloff

Point Lights and Spot Lights have physically based inverse squared falloff enabled by default. This setting is configurable as a checkbox found in the light's Details panel under Advanced. The following screenshot shows where this property is found in the Details panel:

Use Inverse Squared Falloff

Inverse squared falloff is a physical law describing how light intensity naturally fades over distance. With this setting enabled, intensity uses the same units as real-world lights: lumens. When inverse squared falloff is not used, intensity is just a unitless value.

In the previous chapter, where we added our first Point Light, we set its intensity to 1700. Because inverse squared falloff is used, this is equivalent to the brightness of a light bulb that emits 1700 lumens.

Color

To adjust the color of Point Light, go to Light | Color. The following screenshot shows how the color of the light can be adjusted by specifying the RGB values or using the color picker to select the desired color:

Color

Adding and configuring a Spot Light

Open Chapter4Level.umap and rename it Chapter4Level_SpotLight.umap. Go to Modes | Lights, drag and drop a Spot Light into the level.

The brightness, visible influence radius, and color of a Spot Light can be configured in the same way as for a Point Light, through the values of Intensity, Attenuation Radius, and Color.

Since a Point Light emits light in all directions while a Spot Light emits light from a single point outward in a conical shape with a direction, the Spot Light has additional configurable properties: the inner cone and outer cone angles.

Inner cone and outer cone angle

The outer cone and inner cone angles are specified in degrees. The following screenshot shows the light radius the Spot Light has when the outer cone angle is 20 (on the left) and 15 (on the right). The inner cone angle did not produce much visible difference in this screenshot, so very often its value is 0. However, the inner cone angle can be used to provide light in the center of the cone; this is more visible for lights with a wider spread and certain IES Profiles.

Inner cone and outer cone angle
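A common way spotlight cones work (a typical formulation, not necessarily Unreal's exact internal formula) is: full intensity inside the inner cone, no light outside the outer cone, and a smooth blend in between. A minimal sketch with a linear blend:

```python
# Sketch of a common spotlight cone falloff (not necessarily Unreal's
# exact formula): full brightness inside the inner cone, darkness outside
# the outer cone, and a linear blend in the region between the two.

def cone_attenuation(angle_deg, inner_deg, outer_deg):
    """Attenuation for a point at `angle_deg` off the spotlight axis."""
    if angle_deg <= inner_deg:
        return 1.0          # inside the inner cone: full intensity
    if angle_deg >= outer_deg:
        return 0.0          # outside the outer cone: no light
    # Blend from 1 down to 0 across the band between the two cones.
    return (outer_deg - angle_deg) / (outer_deg - inner_deg)

# Outer cone angle 20, inner cone angle 0 (as in the example above):
print(cone_attenuation(0, 0, 20))    # 1.0
print(cone_attenuation(10, 0, 20))   # 0.5
print(cone_attenuation(25, 0, 20))   # 0.0
```

With the inner cone at 0, the falloff starts immediately at the axis, which is why the inner cone angle often produces little visible difference; raising it widens the fully lit center of the cone.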

Using the IES Profile

Open Chapter4Level_PointLight.umap and rename it Chapter4Level_IESProfile.umap.

IES Light Profile is a file that contains information that describes how a light will look. This is created by light manufacturers and can be downloaded from the manufacturers' websites. These profiles could be used in architectural models to render scenes with realistic lighting. In the same way, the IES Profile information can be used in Unreal Engine 4 to render more realistic lights. IES Light Profiles can be applied to a Point Light or a Spot Light.

Downloading IES Light Profiles

IES Light Profiles can be downloaded from light manufacturers' websites. Here are a few that you can use:

Importing IES Profiles into the Unreal Engine Editor

From Content Browser, click on Import, as shown in the following screenshot:

Importing IES Profiles into the Unreal Engine Editor

I prefer to keep my files in a certain order, so I created a new folder called IESProfile with subfolders named after the manufacturers to better categorize all the imported light profiles.

Using IES Profiles

Continuing from the previous example, select the Spot Light on the right of the scene and make sure it is selected. Go to the Details panel and scroll down to the light's Light Profile section.

Then go to the Content Browser and open the IESProfile folder into which we imported the light profiles. Click on the profile you want, then drag and drop it onto the IES Texture of the Spot Light. Alternatively, select the profile, go back to the light's Details panel, and click on the arrow next to IES Texture to apply the profile to the Spot Light. In the following screenshot, I applied one of the profiles downloaded from the Panasonic website, labeled 144907.

Using IES Profiles

I reconfigured the Spot Light with Intensity = 1000, Attenuation Radius = 1000, Outer Cone Angle = 40, and Inner Cone Angle = 0.

Next, I deleted the other Spot Light and replaced it with a Point Light, setting Intensity = 1000 and Attenuation Radius = 1000. I also set Rotation-Y = -90 and then applied the same IES Profile to it. The following screenshot shows the difference when the same light profile is applied to a Spot Light and a Point Light. Note that the spread of the light from the Spot Light is reduced. This reinforces the concept that a Spot Light produces a cone of light with a direction, spreading outward from the point source, and the outer cone angle limits this spread. The Point Light emits light equally in all directions, so it does not attenuate the profile, allowing the full design of the light profile to be displayed on screen. Keep this in mind when deciding which type of light to apply an IES Light Profile to.

Using IES Profiles

Adding and configuring a Directional Light

Open Chapter4Level.umap and rename it Chapter4Level_DirectionalLight.umap.

We have already added a Directional Light into our level in Chapter 2, Creating Your First Level, and it provides parallel beams of light into the level.

Directional Light can also be used to light the level by controlling the direction of the sun. The screenshot on the left shows the Directional Light when the Atmosphere Sun Light checkbox is unchecked. The screenshot on the right shows the Directional Light when the Atmosphere Sun Light checkbox is checked. When the Atmosphere Sun Light checkbox is checked, you can control the direction of the sunlight by adjusting the rotation of Directional Light.

Adding and configuring a Directional Light

The following screenshot shows how this looks when Rotation-Y = 0. This looks like an early sunset scene:

Adding and configuring a Directional Light

Example – adding and configuring a Sky light

Open Chapter4Level_DirectionalLight.umap and rename it Chapter4Level_Skylight.umap.

In the previous example, we have added sunlight control in the Directional Light. Build and compile to see how the level now looks.

Now, let us add a Sky Light to the level by going to Modes | Lights and dragging Sky Light into the level. After adding a Sky Light, always remember to build and compile first in order to see its effect.

What does a Sky Light do? A Sky Light models the color and light coming from the sky and is used to light the external areas of the level. These areas look more realistic because the sky's color/light is reflected off their surfaces, instead of simple white or colored light.

The following screenshot shows the effect of a Sky Light. The left image shows the level without the Sky Light; the right one shows it with the Sky Light. Note that the walls now have a tinge of the sky's color.

Example – adding and configuring a Sky light

Static, stationary, or movable lights

After learning how to place and configure the different lights, we need to consider what kind of lights we need in the level. If you are new to the concept of light, you might want to briefly go through the useful light terms section to help in your understanding.

The following screenshot shows the Details panel where you can change a light to be static, stationary, or movable.

Static, stationary, or movable lights

Static and Stationary lights sound pretty similar. What is the difference? When should you use a Static light, and when should you use a Stationary light?

Common light/shadow definitions

The common light/shadow definitions are as follows:

  • Direct Light: This is the light that is present in the scene directly due to a light source.
  • Indirect Light: This is the light in the scene that is not directly from a light source. It is reflected light bouncing around and it comes from all sides.
  • Light Map: This is a data structure that stores the light/brightness information about an object. This makes the rendering of the object much quicker because we already know its color/brightness information in advance and it is not necessary to compute this during runtime.
  • Shadow Map: This is a technique used to create dynamic shadows. It fundamentally consists of two rendering passes; more passes can be added to render nicer shadows.
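To make the two shadow-mapping passes concrete, here is a toy sketch on a one-dimensional "scene". This is not engine code; the scene, names, and numbers are invented for illustration only:

```python
# Toy illustration of the two shadow-mapping passes on a 1-D "scene".
# A real renderer does this per pixel on the GPU using depth textures.

def build_shadow_map(light_pos, occluders):
    """Pass 1: record the distance from the light to the nearest occluder."""
    return min((abs(x - light_pos) for x in occluders), default=float("inf"))

def is_in_shadow(point, light_pos, shadow_map):
    """Pass 2: a point is shadowed if something sits closer to the light."""
    return abs(point - light_pos) > shadow_map

light = 0.0
occluders = [3.0]          # a wall 3 units from the light
depth = build_shadow_map(light, occluders)

print(is_in_shadow(2.0, light, depth))  # False: in front of the wall
print(is_in_shadow(5.0, light, depth))  # True: behind the wall
```

A real renderer performs pass 1 by rendering depth from the light's point of view into a texture, and pass 2 per pixel during shading; the extra passes mentioned above (filtering, for example) soften the result.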

Static Light

In a game, we always want the best performance, and a Static Light is an excellent option because it only needs to be precomputed once into a Light Map. A Static Light therefore has the lowest performance cost, but in exchange we cannot change how the light looks, move it, or have it interact with moving objects during gameplay (it cannot cast a shadow for a moving object passing within its influence). However, a Static Light can cast shadows on the existing static objects in the level within its radius of influence, which is based on the source radius of the light. In return for its low performance cost, a Static Light has quite a few limitations. Hence, Static Lights are commonly used in scenes targeted at devices with low computational power.

Stationary Light

A Stationary Light can be used when we do not need to move, rotate, or change the influence radius of the light during gameplay, but still want the ability to change its color and brightness. Indirect light and shadows are prebaked into the Light Map in the same way as for a Static Light, while direct shadows are stored within Shadow Maps.

A Stationary Light has a medium performance cost, as it can create static shadows on static objects through the use of distance field shadow maps. Completely dynamic light and shadows are often more than 20 times more expensive.

Movable Light

A Movable Light casts fully dynamic light and shadows for the scene. It should be used sparingly in the level, and only when absolutely necessary.

Exercise – extending your game level (optional)

Here are the steps I took to extend the current Level4 to the prebuilt version we have right now. They are by no means the only way to do it; I simply used Geometry Brushes to extend the level, for simplicity. The following screenshot shows one part of the extended level:

Exercise – extending your game level (optional)

Useful tips

Group items in the same area together when possible, and rename the entities to help you identify parts of the level more quickly. These simple extra steps can save time when using the editor to create a mock-up of a game level.

Guidelines

If you plan to extend the game level on your own, open and load Level4.umap, then save the map as Level4_MyPreBuilt.umap. You can also open a copy of the extended level to copy assets from, or use it as a quick reference.

Area expansion

We will start by extending the floor area of the level.

Part 1 – lengthening the current walkway

The short walkway was extended to form an L-shaped walkway. The dimensions of the extended portion are X1200 x Y340 x Z40.

BSPs needed (dimensions in X x Y x Z):

  • Ceiling: X1200 x Y400 x Z40
  • Floor: X1200 x Y400 x Z40
  • Left wall: X1570 x Y30 x Z280
  • Right wall: X1260 x Y30 x Z280

Part 2 – creating a big room (living and kitchen area)

The walkway leads to a big room at the end, which is the main living and kitchen area.

BSPs needed (dimensions in X x Y x Z):

  • Ceiling: X2000 x Y1600 x Z40
  • Floor: X2000 x Y1600 x Z40
  • The left wall dividing the big room and walkway (the wall closest to you as you enter the big room from the walkway): X30 x Y600 x Z340
  • The right wall dividing the big room and walkway: X30 x Y600 x Z340
  • The left wall of the big room (where the kitchen area is): X1200 x Y30 x Z340
  • The right wall of the big room (where the dining area is): X2000 x Y30 x Z340
  • The left wall to the door (the wall across the room as you enter from the walkway, where the window seats are): X30 x Y350 x Z340
  • The right wall to the door (the wall across the room as you enter from the walkway, where the long benches are): X30 x Y590 x Z340

Door area (consists of brick walls, door frames, and door):

  • Wall filler left: X30 x Y130 x Z340
  • Wall filler right: X30 x Y126 x Z340
  • Door x 2: X20 x Y116 x Z250
  • Side door frame x 2: X25 x Y4 x Z250
  • Horizontal door frame: X25 x Y242 x Z5
  • Side brick wall x 2: X30 x Y52 x Z340
  • Horizontal brick wall: X30 x Y242 x Z74

Part 3 – creating a small room along the walkway

To create the walkway to the small room, duplicate the same doorframe that we have created in the first room.

BSPs needed (dimensions in X x Y x Z):

  • Ceiling: X800 x Y600 x Z40
  • Floor: X800 x Y600 x Z40
  • Side wall x 2: X30 x Y570 x Z340
  • Opposite wall (wall with the windows): X740 x Y30 x Z340

Part 4 – Creating a den area in the big room

BSPs needed (dimensions in X x Y x Z):

  • Side wall x 2: X30 x Y620 x Z340
  • Wall with shelves: X740 x Y30 x Z340

Creating windows and doors

Now that we are done with rooms, we can work on the doors and windows.

Part 1 – creating large glass windows for the dining area

To create the windows, we use a subtractive Geometry Brush to create holes in the wall. First, create one of size X144 x Y30 x Z300 and place it right in the middle between the ceiling and ground. Duplicate this and convert it to an additive brush; adjust the size to X142 x Y4 x Z298.

Apply M_Metal_Copper to the frame and M_Glass to the additive brush that was just created. Now, group them and duplicate both brushes four times to create five windows. The screenshot of the dining area windows is shown as follows:
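As a quick sanity check, the glass pane must be slightly smaller than the hole cut by the subtractive brush on every axis, which the dimensions above satisfy. A throwaway sketch (the variable names are mine, not the editor's):

```python
# Dimensions (X, Y, Z) from the dining-area window: the subtractive brush
# cuts the hole, and the slightly smaller additive brush becomes the glass.
hole = (144, 30, 300)
glass = (142, 4, 298)

# The glass fits if it is strictly smaller than the hole on every axis.
fits = all(g < h for g, h in zip(glass, hole))
print(fits)  # True
```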

Part 1 – creating large glass windows for the dining area
Part 2 – creating an open window for the window seat

To create the window for the window seat area, create a subtractive geometry brush of size X50 x Y280 x Z220. For this window, we have a protruding ledge of X50 x Y280 x Z5 at the bottom of the window. Then, for the glass, we duplicate the subtractive brush, resize it to X4 x Y278 x Z216, convert it to an additive brush, and adjust it to fit.

Apply M_Metal_Brushed to the frame and M_Glass to the additive brush that was just created.

Part 2 – creating an open window for the window seat
Part 3 – creating windows for the room

For the room windows, create a subtractive brush of size X144 x Y40 x Z94. This is to create a hollow in the wall for the prop frame: SM_WindowFrame. Duplicate the subtractive brush and prop to create two windows for the room.

Part 4 – creating the main door area

For the main door area, we start by creating the doors and their frames, then the brick walls around the door, and lastly the remaining plain concrete wall.

We have two doors with frames, then some brick wall, before going back to the usual smooth walls. Here are the dimensions for creating this door area:

BSPs needed (dimensions in X x Y x Z):

  • Actual door x 2: X20 x Y116 x Z250
  • Side frame x 2: X25 x Y4 x Z250
  • Top frame: X25 x Y242 x Z5

Here are the dimensions for creating the area around the door:

BSPs needed (dimensions in X x Y x Z):

  • Brick wall side x 2: X30 x Y52 x Z340
  • Brick wall top: X30 x Y242 x Z74
  • Smooth wall left: X30 x Y126 x Z340
  • Smooth wall right: X30 x Y130 x Z360

Creating basic furniture

Let us build it part by part, as follows.

Part 1 – creating a dining table and placing chairs

For the dining table, we will build a wooden table with a tabletop of size X480 x Y160 x Z12 and two legs, each of size X20 x Y120 x Z70, placed 40 units in from the edges of the table. The material used is M_Wood_Walnut.

Then arrange eight chairs around the table using SM_Chair from the Props folder.

Part 2 – decorating the sitting area

There are two low tables in the middle and one long low table at the wall. Place three SM_Couch props from the Props folder around the low tables. Here are the dimensions for the larger table:

BSPs needed (dimensions in X x Y x Z):

  • Square top: X140 x Y140 x Z8
  • Leg x 2: X120 x Y12 x Z36

Here are the dimensions for the smaller table:

BSPs needed (dimensions in X x Y x Z):

  • Leg x 2: X120 x Y12 x Z36

Here are the dimensions for a low long table at the wall:

BSPs needed (dimensions in X x Y x Z):

  • Block: X100 x Y550 x Z100

Part 3 – creating the window seat area

Next to the open window, place a geometry box of size X120 x Y310 x Z100. This is to create a simplified seat by the window.

Part 4 – creating the Japanese seating area

The Japanese square table has a surface of size X200 x Y200 x Z8 and four short legs (each of size X20 x Y20 x Z36) placed close to the corners of the table.

To create a leg space under the table, I used a subtractive brush (X140 x Y140 x Z40) and placed it on the ground under the table. I used the corner of this subtractive brush as a guide as to where to place the short legs for the table.

Part 5 – creating the kitchen cabinet area

This is a simplified block prototype for the kitchen cabinet area. The following are the dimensions for the L-shaped area:

BSPs needed (material, dimensions in X x Y x Z):

  • Shorter L: cabinet under tabletop (M_Wood_Walnut): X140 x Y450 x Z100
  • Longer L: cabinet under tabletop (M_Wood_Walnut): X890 x Y140 x Z100
  • Shorter L: tabletop (M_Metal_Brushed_Nickel): X150 x Y450 x Z10
  • Longer L: tabletop (M_Metal_Brushed_Nickel): X900 x Y150 x Z10
  • Shorter L: hanging cabinet (M_Wood_Walnut): X100 x Y500 x Z100
  • Longer L: hanging cabinet (M_Wood_Walnut): X900 x Y100 x Z100

The following are the dimensions for the island area (hood):

BSPs needed (material, dimensions in X x Y x Z):

  • Hood (wooden area) (M_Wood_Walnut): X400 x Y75 x Z60
  • Hood (metallic area) (M_Metal_Chrome): X500 x Y150 x Z30

The following are the dimensions for the island area (table):

BSPs needed (material, dimensions in X x Y x Z):

  • Cabinet under the table (M_Wood_Walnut): X500 x Y150 x Z100
  • Tabletop (M_Metal_Chrome): X550 x Y180 x Z10
  • Sink (use a subtractive brush) (M_Metal_Chrome): X100 x Y80 x Z40
  • Stovetop (M_Metal_Burnished_Steel): X140 x Y100 x Z5

Configuring a Point Light with more settings

Open Chapter4Level.umap and rename it Chapter4Level_PointLight.umap.

Go to Modes | Lights, then drag and drop a Point Light into the level. As a Point Light emits light equally in all directions from a single point, Attenuation Radius, Intensity, and Color are the three most commonly configured values for a Point Light.

Attenuation Radius

The following screenshot shows the Point Light with its default Attenuation Radius of 1000. The radii of the three blue circles are based on the attenuation radius of the Point Light and show its area of effect on the environment.

Attenuation Radius

The following screenshot shows when the attenuation radius is reduced to 500. In this situation, you probably cannot see any difference in the lighting since the radius is still larger than the room itself:

Attenuation Radius

Now, let us take a look at what happens when we make the radius much smaller. The following screenshot shows the difference in light brightness as the radius changes. The image on the left shows the attenuation radius set to 500, and the one on the right shows it set to 10.

Attenuation Radius
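Why the room goes dark at a radius of 10 can be seen from how the radius windows the falloff. The formula below is the commonly cited windowed inverse-square falloff from Epic's physically based shading notes; treating it as exactly what the editor computes here is my assumption, not something stated in this chapter:

```python
# Windowed inverse-square falloff (an assumption, modeled on Epic's
# physically based shading notes, not taken from this book):
#   falloff = saturate(1 - (d/R)^4)^2 / (d^2 + 1)
# where d is the distance to the light and R the attenuation radius.

def falloff(distance, attenuation_radius):
    window = max(0.0, 1.0 - (distance / attenuation_radius) ** 4) ** 2
    return window / (distance ** 2 + 1.0)

# A wall 300 units away still receives light with a radius of 500...
print(falloff(300, 500) > 0)   # True
# ...but none at all once the radius shrinks to 10, which is why the
# room goes dark in the screenshot.
print(falloff(300, 10))        # 0.0
```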

Intensity

Another setting for a Point Light is Intensity, which affects the brightness of the light. You can play around with the Intensity value to adjust the brightness. Before deciding what value to use and how bright we want our light to be, you should be aware of another setting: Use Inverse Squared Falloff.

Use Inverse Squared Falloff

Point Lights and Spot Lights have physically based inverse squared falloff turned on by default. The setting is a checkbox found in the Light details under Advanced. The following screenshot shows where this property is found in the Details panel:

Use Inverse Squared Falloff

Inverse squared falloff is a physical law describing how light intensity naturally fades with the square of the distance from the source. With this setting enabled, Intensity uses the same units as real-world lights: lumens. When inverse squared falloff is not used, Intensity is just an arbitrary value.

In the previous chapter, where we added our first Point Light, we set Intensity to 1700. Because inverse squared falloff is used, this is equivalent to the brightness of a light bulb that emits 1700 lumens.
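As a sanity check on these units, standard photometry (not anything Unreal-specific) relates lumens, candela, and lux for an ideal point source:

```python
import math

# An ideal point source spreads its lumens over the full sphere of
# 4*pi steradians, and illuminance then falls off with the inverse
# square of the distance.

lumens = 1700
candela = lumens / (4 * math.pi)        # luminous intensity, ~135 cd
lux_at_1m = candela / 1.0 ** 2
lux_at_2m = candela / 2.0 ** 2

print(round(candela, 1))                # 135.3
print(lux_at_1m / lux_at_2m)            # 4.0: doubling distance quarters it
```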

Color

To adjust the color of a Point Light, go to Light | Color. The following screenshot shows how the color of the light can be adjusted by specifying the RGB values or by using the color picker to select the desired color:

Color

Adding and configuring a Spot Light

Open Chapter4Level.umap and rename it Chapter4Level_SpotLight.umap. Go to Modes | Lights, drag and drop a Spot Light into the level.

The brightness, visible influence radius, and color of a Spot Light can be configured in the same way as the Point Light through the value of Intensity, Attenuation Radius, and Color.

Since a Point Light emits light in all directions while a Spot Light emits light from a single point outwards in a cone with a direction, the Spot Light has additional configurable properties: the inner cone and outer cone angles.

Inner cone and outer cone angle

The outer and inner cone angles are measured in degrees. The following screenshot shows the light radius of the Spot Light when the outer cone angle = 20 (on the left) and 15 (on the right). The inner cone angle did not produce a very visible difference in the screenshot, so it is often left at 0. However, the inner cone angle can be used to provide a brighter core at the center of the cone; this is more visible for lights with a wider spread and for certain IES Profiles.

Inner cone and outer cone angle
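You can estimate the size of the lit spot on a wall from the outer cone angle with basic trigonometry; the 300-unit distance below is an arbitrary value for illustration, not one taken from the level:

```python
import math

# A cone of half-angle theta hitting a wall at distance d lights a disc
# of radius d * tan(theta); a smaller outer cone angle means a tighter spot.

def spot_radius(distance, outer_cone_angle_deg):
    return distance * math.tan(math.radians(outer_cone_angle_deg))

print(round(spot_radius(300, 20)))  # 109 units across
print(round(spot_radius(300, 15)))  # 80 units: the tighter spot on the right
```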

Using the IES Profile

Open Chapter4Level_PointLight.umap and rename it Chapter4Level_IESProfile.umap.

An IES Light Profile is a file that describes how light is distributed from a light source. These profiles are created by lighting manufacturers and can be downloaded from their websites. They are often used in architectural models to render scenes with realistic lighting, and the same IES Profile information can be used in Unreal Engine 4 to render more realistic lights. IES Light Profiles can be applied to Point Lights and Spot Lights.
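To demystify what such a file contains, here is a heavily simplified look at the IES (IESNA LM-63) text format. The sample and parser below are a toy sketch that assumes TILT=NONE and ignores most header fields; real profiles are larger and messier:

```python
# A minimal, made-up IES sample: a header, then a line of counts
# (fields 4 and 5 are the numbers of vertical and horizontal angles),
# then the angles themselves and a grid of candela values.
SAMPLE_IES = """IESNA:LM-63-2002
[MANUFAC] Example Co.
TILT=NONE
1 1000 1 3 1 1 2 0.1 0.1 0.1
1.0 1.0 60
0 45 90
0
1000 700 200
"""

def parse_ies(text):
    lines = text.strip().splitlines()
    start = next(i for i, l in enumerate(lines) if l.startswith("TILT=")) + 1
    counts = lines[start].split()
    n_vert, n_horiz = int(counts[3]), int(counts[4])
    numbers = " ".join(lines[start + 2:]).split()
    vert = [float(v) for v in numbers[:n_vert]]
    candela = [float(v) for v in numbers[n_vert + n_horiz:]]
    return vert, candela

angles, candela = parse_ies(SAMPLE_IES)
print(angles)   # [0.0, 45.0, 90.0]: brightest straight down, dim at 90
print(candela)  # [1000.0, 700.0, 200.0]
```

The engine turns this angle-to-candela table into a texture (the IES Texture slot we use below) that modulates the light's intensity per direction.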

Downloading IES Light Profiles

IES Light Profiles can be downloaded from light manufacturers' websites. Here are a few that you can use:

Importing IES Profiles into the Unreal Engine Editor

From Content Browser, click on Import, as shown in the following screenshot:

Importing IES Profiles into the Unreal Engine Editor

I prefer to keep my files organized, so I created a new folder called IESProfile, with subfolders named after the manufacturers, to categorize all the light profiles that were imported.

Using IES Profiles

Continuing from the previous example, select the right Spot Light in the scene. Then go to the Details panel and scroll down to the Light Profile section of the light.

Then go to the Content Browser and open the IESProfile folder into which we imported the light profiles. Click the profile you want, then drag and drop it onto the IES Texture slot of the Spot Light. Alternatively, select the profile, go back to the Details panel of the light, and click the arrow next to IES Texture to apply the profile to the Spot Light. In the following screenshot, I applied one of the profiles downloaded from the Panasonic website, labeled 144907.

Using IES Profiles

I reconfigured the Spot Light with Intensity = 1000, Attenuation Radius = 1000, Outer Cone Angle = 40, and Inner Cone Angle = 0.

Next, I deleted the other Spot Light and replaced it with a Point Light where I set Intensity = 1000 and Attenuation Radius = 1000. I also set the Rotation-Y = -90 and then applied the same IES Profile to it. The following screenshot shows the difference when the same light profile is applied to a Spot Light and a Point Light. Note that the spread of the light in the Spot Light is reduced. This reinforces the concept that a Spot Light provides a conical shaped light with a direction spreading from the point source outwards. The outer cone angle determines this spread. The point light emits light in all directions and equally out, so it did not attenuate the light profile settings allowing the full design of this light profile to be displayed on the screen. This is one thing to keep in mind while using the IES Light Profile and which types of light to use them on.

Using IES Profiles

Adding and configuring a Directional Light

Open Chapter4Level.umap and rename it Chapter4Level_DirectionalLight.umap.

We have already added a Directional Light into our level in Chapter 2, Creating Your First Level, and it provides parallel beams of light into the level.

Directional Light can also be used to light the level by controlling the direction of the sun. The screenshot on the left shows the Directional Light when the Atmosphere Sun Light checkbox is unchecked. The screenshot on the right shows the Directional Light when the Atmosphere Sun Light checkbox is checked. When the Atmosphere Sun Light checkbox is checked, you can control the direction of the sunlight by adjusting the rotation of Directional Light.

Adding and configuring a Directional Light

The following screenshot shows how this looks when Rotation-Y = 0. This looks like an early sunset scene:

Adding and configuring a Directional Light

Example – adding and configuring a Sky light

Open Chapter4Level_DirectionalLight.umap and rename it Chapter4Level_Skylight.umap.

In the previous example, we have added sunlight control in the Directional Light. Build and compile to see how the level now looks.

Now, let us add a Sky Light into the level by going to Modes | Lights and then clicking and dragging Sky Light into the level. When adding a Sky Light to the level, always remember to build and compile first in order to see the effect of the Sky Light.

What does a Sky Light do? Sky Light models the color/light from the sky and is used to light up the external areas of the level. So the external areas of the level look more realistic as the color/light is reflecting off the surfaces (instead of using simple white/colored light).

The following screenshot shows the effect of a Sky Light. The left image shows the Sky Light not in the level. The right one shows the Sky Light. Note that the walls now have a tinge of the color of the sky.

Example – adding and configuring a Sky light

Static, stationary, or movable lights

After learning how to place and configure the different lights, we need to consider what kind of lights we need in the level. If you are new to the concept of light, you might want to briefly go through the useful light terms section to help in your understanding.

The following screenshot shows the Details panel where you can change a light to be static, stationary, or movable.

Static, stationary, or movable lights

Static and Stationary light sounds pretty much similar. What is the difference? When do you want to use a Static light and when do you want to use a Stationary light?

Common light/shadow definitions

The common light/shadow definitions are as follows:

  • Direct Light: This is the light that is present in the scene directly due to a light source.
  • Indirect Light: This is the light in the scene that is not directly from a light source. It is reflected light bouncing around and it comes from all sides.
  • Light Map: This is a data structure that stores the light/brightness information about an object. This makes the rendering of the object much quicker because we already know its color/brightness information in advance and it is not necessary to compute this during runtime.
  • Shadow Map: This is a process created to make dynamic shadows. It is fundamentally made up of two passes to create shadows. More passes can be added to render nicer shadows.

Static Light

In a game, we always want to have the best performance, and Static Light will be an excellent option because a Static Light needs only to be precomputed once into a Light Map. So for a Static Light, we have the lowest performance cost but in exchange, we are unable to change how the light looks, move the light, and integrate the effect of this light with moving objects (which means it is unable to create a shadow for the moving object as it moves within the influence of the light) into the environment during gameplay. However, a Static Light can cast shadow on the existing stationary objects that are in the level within its influence of radius. The radius of influence is based on the source radius of the light. In return for low performance cost, a Static Light has quite a bit of limitation. Hence, Static Lights are commonly used in the creation of scenes targeted for devices with low computational power.

Stationary Light

Stationary Light can be used in situations when we do not need to move, rotate, or change the influence radius of the light during gameplay, but allow the light the capacity to change color and brightness. Indirect Light and shadows are prebaked in Light Map in the same way as Static Light. Direct Light shadows are stored within Shadow Maps.

Stationary Light is medium in performance cost as it is able to create static shadow on static objects through the use of distance field shadow maps. Completely dynamic light and shadows is often more than 20 times more intensive.

Movable Light

Movable Light is used to cast dynamic light and shadows for the scene. This should be used sparingly in the level, unless absolutely necessary.

Exercise – extending your game level (optional)

Here are the steps that I have taken to extend the current Level4 to the prebuilt version of what we have right now. They are by no means the only way to do it. I have simply used a Geometry Brush to extend the level here for simplicity. The following screenshot shows one part of the extended level:

Exercise – extending your game level (optional)

Useful tips

Group items in the same area together when possible and rename the entity to help you identify parts of the level more quickly. These simple extra steps can save time when using the editor to create a mock-up of a game level.

Guidelines

If you plan to extend the game level on your own, open and load Level4.umap. Then save map as Level4_MyPreBuilt.umap. You can also open a copy of the extended level to copy assets or use it as a quick reference.

Area expansion

We will start by extending the floor area of the level.

Part 1 – lengthening the current walkway

The short walkway was extended to form an L-shaped walkway. The dimensions of the extended portion are X1200 x Y340 x Z40.

BSPs needed

X

Y

Z

Ceiling

1200

400

40

Floor

1200

400

40

Left wall

1570

30

280

Right wall

1260

30

280

Part 2 – creating a big room (living and kitchen area)

The walkway leads to a big room at the end, which is the main living and kitchen area.

BSPs needed

X

Y

Z

Ceiling

2000

1600

40

Floor

2000

1600

40

The left wall dividing the big room and walkway (the wall closest to you as you enter the big room from the walkway)

30

600

340

The light wall dividing the big room and walkway (the wall closest to you as you enter the big room from the walkway)

30

600

340

The left wall of the big room (where the kitchen area is)

1200

30

340

The right wall of the big room (where the dining area is)

2000

30

340

The left wall to the door (the wall across the room as you enter from the walkway, where the window seats are)

30

350

340

The right wall to the door (the wall across the room as you enter from the walkway, where the long benches are)

30

590

340

Door area (consists of brick walls, door frames, and door)

Wall filler left

30

130

340

Wall filler right

30

126

340

Door x 2

20

116

250

Side door frame x 2

25

4

250

Horizontal door frame

25

242

5

Side brick wall x 2

30

52

340

Horizontal brick wall

30

242

74

Part 3 – creating a small room along the walkway

To create the walkway to the small room, duplicate the same doorframe that we have created in the first room.

BSPs needed

X

Y

Z

Ceiling

800

600

40

Floor

800

600

40

Side wall x 2

30

570

340

Opposite wall (wall with the windows)

740

30

340

Part 4 – Creating a den area in the big room

BSPs needed

X

Y

Z

Sidewall x 2

30

620

340

Wall with shelves

740

30

340

Creating windows and doors

Now that we are done with rooms, we can work on the doors and windows.

Part 1 – creating large glass windows for the dining area

To create the windows, we use a subtractive Geometry Brush to create holes in the wall. First, create one of size X144 x Y30 x Z300 and place it right in the middle between the ceiling and ground. Duplicate this and convert it to an additive brush; adjust the size to X142 x Y4 x Z298.

Apply M_Metal_Copper for the frame and M_Glass to the addition brush, which was just created. Now, group them and duplicate both the brushes four times to create five windows. The screenshot of the dining area windows is shown as follows:

Part 1 – creating large glass windows for the dining area

Part 2 – creating an open window for the window seat

To create the window for the window seat area, create a subtractive geometry brush of size X50 x Y280 x Z220. For this window, we have a protruding ledge of X50 x Y280 x Z5 at the bottom of the window. Then for the glass, we duplicate the subtractive brush of size X4 x Y278 x Z216, convert it to additive brush and adjust it to fit.

Apply M_Metal_Brushed for the frame and M_Glass to the addition brush that was just created.

Part 2 – creating an open window for the window seat

Part 3 – creating windows for the room

For the room windows, create a subtractive brush of size X144 x Y40 x Z94. This is to create a hollow in the wall for the prop frame: SM_WindowFrame. Duplicate the subtractive brush and prop to create two windows for the room.

Part 4 – creating the main door area

For the main door area, we start by creating the doors and its frame, then the brick walls around the door and lastly, the remaining concrete plain wall.

We have two doors with frames then some brick wall to augment before going back to the usual smooth walls. Here are the dimensions for creating this door area:

BSPs needed

X

Y

Z

Actual door x 2

20

116

250

Side frame x 2

25

4

250

Top frame

25

242

5

Here are the dimensions for creating the area around the door:

BSPs needed

X

Y

Z

Brick wall side x 2

30

52

340

Brick wall top

30

242

74

Smooth wall left

30

126

340

Smooth wall right

30

130

360

Creating basic furniture

Let us begin it part by part as follows.

Part 1 – creating a dining table and placing chairs

For the dining table, we will be customizing a wooden table with a table top of size X480 x Y160 x Z12 and two legs each of size X20 x Y120 x Z70 placed 40 from the edge of the table. Material used to texture is M_Wood_Walnut.

Then arrange eight chairs around the table using SM_Chair from the Props folder.

Part 2 – decorating the sitting area

There are two low tables in the middle and one low long table at the wall. Place three SM_Couch from the Props folder around the low tables. Here are the dimensions for the larger table:

BSPs needed

X

Y

Z

Square top

140

140

8

Leg x 2

120

12

36

Here are the dimensions for the smaller table:

BSPs needed

X

Y

Z

Leg x 2

120

12

36

Here are the dimensions for a low long table at the wall:

BSPs needed

X

Y

Z

Block

100

550

100

Part 3 – creating the window seat area

Next to the open window, place a geometry box of size X120 x Y310 x Z100. This is to create a simplified seat by the window.

Part 4 – creating the Japanese seating area

The Japanese square table with surface size X200 x Y200 x Z8 and 4 short legs, each of size X20 x Y20 x Z36) is placed close to the corner of the table.

To create a leg space under the table, I used a subtractive brush (X140 x Y140 x Z40) and placed it on the ground under the table. I used the corner of this subtractive brush as a guide as to where to place the short legs for the table.

Part 5 – creating the kitchen cabinet area

This is a simplified block prototype for the kitchen cabinet area. The following are the dimensions for L-shaped area:

BSPs needed

Material

X

Y

Z

Shorter L: cabinet under tabletop

M_Wood_Walnut

140

450

100

Longer L: cabinet under tabletop

M_Wood_Walnut

890

140

100

Shorter L: tabletop

M_Metal_Brushed_Nickel

150

450

10

Longer L: tabletop

M_Metal_Brushed_Nickel

900

150

10

Shorter L: hanging cabinet

M_Wood_Walnut

100

500

100

Longer L: hanging cabinet

M_Wood_Walnut

900

100

100

The following are the dimensions for the island area (hood):

BSPs needed

Material

X

Y

Z

Hood (wooden area)

M_Wood_Walnut

400

75

60

Hood (metallic area)

M_Metal_Chrome

500

150

30

The following are the dimensions for the island area (table):

BSPs needed

Material

X

Y

Z

Cabinet under the table

M_Wood_Walnut

500

150

100

Tabletop

M_Metal_Chrome

550

180

10

Sink (use a subtractive brush)

M_Metal_Chrome

100

80

40

Stovetop

M_Metal_Burnished_Steel

140

100

5

Attenuation Radius

The following screenshot shows when the Point Light has its default Attenuation Radius of 1000. The radius of the three blue circles is based on the attenuation radius of the Point Light and is used to show its area of effect on the environment.

Attenuation Radius

The following screenshot shows when the attenuation radius is reduced to 500. In this situation, you probably cannot see any difference in the lighting since the radius is still larger than the room itself:

Attenuation Radius

Now, let us take a look at what happens when we adjust the radius much smaller. The following screenshot shows the difference in light brightness when the radius changes. The image on the left is when the attenuation radius is set as 500 and the right when attenuation radius is set as 10.

Attenuation Radius

Intensity

Another setting for Point Light is Intensity. Intensity affects the brightness of the light. You can play around the Intensity value to adjust the brightness of the light. Before we determine what value to use for this field and how bright we want our light to be, you should be aware of another setting, Use Inverse Squared Falloff.

Use Inverse Squared Falloff

Point Lights and Spot Lights have physically based inverse squared falloff enabled by default. This setting is configurable as a checkbox found in the light's Details under Advanced. The following screenshot shows where this property is found in the Details panel:

Use Inverse Squared Falloff

Inverse squared falloff is the physical law describing how light intensity naturally fades with distance. When this setting is enabled, Intensity is expressed in the same unit as real-world lights: lumens. When inverse squared distance falloff is not used, Intensity becomes just a unitless value.

In the previous chapter, where we added our first Point Light, we set its intensity to 1700. Because inverse squared distance falloff is used, this is equivalent to the brightness of a light bulb that emits 1700 lumens.
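
To make the lumens connection concrete, the following sketch treats the Point Light as an ideal uniform emitter and applies the inverse square law. This is a physics illustration, not the engine's shading code; the `illuminance` helper is a name chosen here:

```python
import math

def illuminance(lumens, distance):
    # An ideal point source spreads its lumens over a sphere of area
    # 4 * pi * d^2, so light received per unit area falls off as 1/d^2.
    return lumens / (4 * math.pi * distance ** 2)

# The 1700-lumen bulb from the previous chapter:
near = illuminance(1700, 1.0)
far = illuminance(1700, 2.0)
# Doubling the distance quarters the received light: far == near / 4.
```

With Use Inverse Squared Falloff unchecked, no such relationship holds and Intensity acts as an arbitrary scale factor.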

Color

To adjust the color of the Point Light, go to Light | Color. The following screenshot shows how the color of the light can be adjusted by specifying RGB values or by using the color picker to select the desired color:

Color
Adding and configuring a Spot Light

Open Chapter4Level.umap and rename it Chapter4Level_SpotLight.umap. Go to Modes | Lights, then drag and drop a Spot Light into the level.

The brightness, visible influence radius, and color of a Spot Light can be configured in the same way as for the Point Light, through the Intensity, Attenuation Radius, and Color values.

Since a Point Light emits light in all directions, while a Spot Light emits light from a single point outward in a cone with a direction, the Spot Light has additional configurable properties: the inner cone angle and the outer cone angle.

Inner cone and outer cone angle

The outer cone angle and inner cone angle are both measured in degrees. The following screenshot shows the light radius of the Spot Light when the outer cone angle = 20 (on the left) and when the outer cone angle = 15 (on the right). The inner cone angle did not produce a very visible result in the screenshot, so it is often left at 0. However, the inner cone angle can be used to provide full-brightness light in the center of the cone; this is more visible for lights with a wider spread and with certain IES Profiles.

Inner cone and outer cone angle
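
The interplay of the two angles can be sketched as a blend: full brightness inside the inner cone, darkness outside the outer cone, and a smooth falloff in between. The function below is a generic spotlight falloff sketch using a smoothstep blend; it is an assumption for illustration, not UE4's exact formula:

```python
import math

def cone_attenuation(angle_deg, inner_deg, outer_deg):
    # Compare the cosine of the angle from the spotlight axis against the
    # cosines of the inner and outer cone angles, then smoothstep between them.
    cos_angle = math.cos(math.radians(angle_deg))
    cos_inner = math.cos(math.radians(inner_deg))
    cos_outer = math.cos(math.radians(outer_deg))
    t = (cos_angle - cos_outer) / max(cos_inner - cos_outer, 1e-6)
    t = min(max(t, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)  # smoothstep

# Outer cone 20 degrees, inner cone 0 (the common default mentioned above):
on_axis = cone_attenuation(0.0, 0.0, 20.0)    # 1.0: full brightness
edge = cone_attenuation(20.0, 0.0, 20.0)      # 0.0: cone boundary
outside = cone_attenuation(25.0, 0.0, 20.0)   # 0.0: outside the cone
```

With inner = 0 the light starts fading immediately off-axis; raising the inner angle keeps a plateau of full brightness in the center of the cone.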
Using the IES Profile

Open Chapter4Level_PointLight.umap and rename it Chapter4Level_IESProfile.umap.

An IES Light Profile is a file that describes how a light will look, that is, how its intensity is distributed around the source. Profiles are created by light manufacturers and can be downloaded from their websites. They are often used in architectural models to render scenes with realistic lighting. In the same way, IES Profile information can be used in Unreal Engine 4 to render more realistic lights. An IES Light Profile can be applied to a Point Light or a Spot Light.

Downloading IES Light Profiles

IES Light Profiles can be downloaded from light manufacturers' websites. Here are a few that you can use:

Importing IES Profiles into the Unreal Engine Editor

From the Content Browser, click on Import, as shown in the following screenshot:

Importing IES Profiles into the Unreal Engine Editor

I prefer to keep my files in a certain order, so I created a new folder called IESProfile with subfolders named after the manufacturers, to better categorize all the imported light profiles.

Using IES Profiles

Continuing from the previous example, select the Spot Light on the right in the scene. Go to the Details panel and scroll down to the Light Profile section of the light.

Then go to the Content Browser and open the IESProfile folder that we imported the light profiles into. Drag and drop the profile you want onto the IES Texture slot of the Spot Light. Alternatively, select the profile, go back to the light's Details panel, and click on the arrow next to IES Texture to apply the profile to the Spot Light. In the following screenshot, I applied one of the profiles downloaded from the Panasonic website, labeled 144907.

Using IES Profiles

I reconfigured the Spot Light with Intensity = 1000, Attenuation Radius = 1000, Outer Cone Angle = 40, and Inner Cone Angle = 0.

Next, I deleted the other Spot Light and replaced it with a Point Light, for which I set Intensity = 1000 and Attenuation Radius = 1000. I also set Rotation-Y = -90 and applied the same IES Profile to it. The following screenshot shows the difference when the same light profile is applied to a Spot Light and to a Point Light. Note that the spread of the light is reduced for the Spot Light. This reinforces the concept that a Spot Light provides a conical light with a direction, spreading outward from the point source, and the outer cone angle limits this spread. The Point Light emits light equally in all directions, so it does not clip the light profile, allowing the profile's full design to be displayed on screen. Keep this in mind when deciding which type of light to apply an IES Light Profile to.

Using IES Profiles
Adding and configuring a Directional Light

Open Chapter4Level.umap and rename it Chapter4Level_DirectionalLight.umap.

We have already added a Directional Light into our level in Chapter 2, Creating Your First Level, and it provides parallel beams of light into the level.

A Directional Light can also light the level by acting as the sun. The screenshot on the left shows the Directional Light with the Atmosphere Sun Light checkbox unchecked; the one on the right shows it checked. When the checkbox is checked, you can control the direction of the sunlight by adjusting the rotation of the Directional Light.

Adding and configuring a Directional Light

The following screenshot shows the result when Rotation-Y = 0, which resembles an early sunset scene:

Adding and configuring a Directional Light

Example – adding and configuring a Sky light

Open Chapter4Level_DirectionalLight.umap and rename it Chapter4Level_Skylight.umap.

In the previous example, we added sunlight control to the Directional Light. Build the lighting to see how the level now looks.

Now, let us add a Sky Light to the level by going to Modes | Lights and dragging Sky Light into the level. When adding a Sky Light, always remember to build the lighting first in order to see its effect.

What does a Sky Light do? A Sky Light models the color and light coming from the sky and is used to light the exterior areas of the level. The exterior areas then look more realistic, as the sky's color and light are reflected off surfaces (instead of using a simple white or colored light).

The following screenshot shows the effect of a Sky Light. The left image shows the level without the Sky Light; the right one shows it with the Sky Light. Note that the walls now have a tinge of the color of the sky.

Example – adding and configuring a Sky light
Static, stationary, or movable lights

After learning how to place and configure the different lights, we need to consider what kinds of lights we need in the level. If you are new to lighting concepts, you may want to briefly go through the common light/shadow definitions that follow to aid your understanding.

The following screenshot shows the Details panel where you can change a light to be static, stationary, or movable.

Static, stationary, or movable lights

Static and Stationary lights sound pretty similar. What is the difference? When should you use a Static Light, and when should you use a Stationary Light?

Common light/shadow definitions

The common light/shadow definitions are as follows:

  • Direct Light: This is the light that is present in the scene directly due to a light source.
  • Indirect Light: This is the light in the scene that is not directly from a light source. It is reflected light bouncing around and it comes from all sides.
  • Light Map: This is a data structure that stores the light/brightness information about an object. This makes the rendering of the object much quicker because we already know its color/brightness information in advance and it is not necessary to compute this during runtime.
  • Shadow Map: This is a technique for creating dynamic shadows. It fundamentally consists of two rendering passes; more passes can be added to render nicer shadows.

Static Light

In a game, we always want the best performance, and a Static Light is an excellent option because it needs to be precomputed only once, into a Light Map. A Static Light therefore has the lowest performance cost, but in exchange we cannot change how the light looks, move it, or have it affect moving objects during gameplay (it cannot cast a shadow for a moving object passing through its influence). It can, however, cast shadows on the existing stationary objects within its radius of influence, which is based on the light's source radius. In return for the low performance cost, a Static Light has quite a few limitations. Hence, Static Lights are commonly used for scenes targeting devices with low computational power.

Stationary Light

A Stationary Light can be used when we do not need to move, rotate, or change the influence radius of the light during gameplay, but still want the ability to change its color and brightness. Indirect light and shadows are prebaked into the Light Map in the same way as for a Static Light, while direct shadows are stored in Shadow Maps.

A Stationary Light has a medium performance cost, as it can create static shadows on static objects through the use of distance field shadow maps. Completely dynamic light and shadows are often more than 20 times more expensive.

Movable Light

A Movable Light casts fully dynamic light and shadows. It should be used sparingly in the level, and only where necessary.

Exercise – extending your game level (optional)

Here are the steps I took to extend the current Level4 into the prebuilt version we have right now. They are by no means the only way to do it; I simply used Geometry Brushes to extend the level, for simplicity. The following screenshot shows one part of the extended level:

Exercise – extending your game level (optional)

Useful tips

Group items in the same area together when possible, and rename entities to help you identify parts of the level more quickly. These simple extra steps can save time when using the editor to create a mock-up of a game level.

Guidelines

If you plan to extend the game level on your own, open and load Level4.umap, then save the map as Level4_MyPreBuilt.umap. You can also open a copy of the extended level to copy assets from or to use as a quick reference.

Area expansion

We will start by extending the floor area of the level.

Part 1 – lengthening the current walkway

The short walkway was extended to form an L-shaped walkway. The dimensions of the extended portion are X1200 x Y340 x Z40.

BSPs needed:

  • Ceiling: X1200 x Y400 x Z40
  • Floor: X1200 x Y400 x Z40
  • Left wall: X1570 x Y30 x Z280
  • Right wall: X1260 x Y30 x Z280

Part 2 – creating a big room (living and kitchen area)

The walkway leads to a big room at the end, which is the main living and kitchen area.

BSPs needed:

  • Ceiling: X2000 x Y1600 x Z40
  • Floor: X2000 x Y1600 x Z40
  • The left wall dividing the big room and walkway (the wall closest to you as you enter the big room from the walkway): X30 x Y600 x Z340
  • The right wall dividing the big room and walkway: X30 x Y600 x Z340
  • The left wall of the big room (where the kitchen area is): X1200 x Y30 x Z340
  • The right wall of the big room (where the dining area is): X2000 x Y30 x Z340
  • The left wall to the door (the wall across the room as you enter from the walkway, where the window seats are): X30 x Y350 x Z340
  • The right wall to the door (the wall across the room as you enter from the walkway, where the long benches are): X30 x Y590 x Z340

Door area (consists of brick walls, door frames, and door):

  • Wall filler left: X30 x Y130 x Z340
  • Wall filler right: X30 x Y126 x Z340
  • Door x 2: X20 x Y116 x Z250
  • Side door frame x 2: X25 x Y4 x Z250
  • Horizontal door frame: X25 x Y242 x Z5
  • Side brick wall x 2: X30 x Y52 x Z340
  • Horizontal brick wall: X30 x Y242 x Z74

Part 3 – creating a small room along the walkway

To create the doorway to the small room, duplicate the same door frame that we created in the first room.

BSPs needed:

  • Ceiling: X800 x Y600 x Z40
  • Floor: X800 x Y600 x Z40
  • Side wall x 2: X30 x Y570 x Z340
  • Opposite wall (wall with the windows): X740 x Y30 x Z340

Part 4 – Creating a den area in the big room

BSPs needed:

  • Side wall x 2: X30 x Y620 x Z340
  • Wall with shelves: X740 x Y30 x Z340

Creating windows and doors

Now that we are done with the rooms, we can work on the doors and windows.

Part 1 – creating large glass windows for the dining area

To create the windows, we use subtractive Geometry Brushes to cut holes in the wall. First, create one of size X144 x Y30 x Z300 and place it midway between the ceiling and the floor. Duplicate it and convert the duplicate to an additive brush; adjust its size to X142 x Y4 x Z298.

Apply M_Metal_Copper to the frame and M_Glass to the additive brush that was just created. Now group them and duplicate both brushes four times to create five windows. A screenshot of the dining area windows is shown as follows:

Part 1 – creating large glass windows for the dining area
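
The brush sizes above are chosen so the glass pane sits just inside the hole. Here is a quick sanity check of that arithmetic; the `clearance` helper is illustrative bookkeeping, not part of the editor workflow:

```python
def clearance(hole, pane):
    # Per-axis gap left over when the pane is placed inside the hole.
    return {axis: hole[axis] - pane[axis] for axis in hole}

hole = {"X": 144, "Y": 30, "Z": 300}   # subtractive brush cutting the wall
glass = {"X": 142, "Y": 4, "Z": 298}   # additive glass brush

gaps = clearance(hole, glass)
# {'X': 2, 'Y': 26, 'Z': 2}: a 1-unit rim around the glass in X and Z,
# with the thin pane recessed inside the 30-unit-deep wall opening.
```

A negative value here would mean the pane pokes out of the hole, so this kind of check catches typos in brush dimensions before you see them in the viewport.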

Part 2 – creating an open window for the window seat

To create the window for the window seat area, create a subtractive Geometry Brush of size X50 x Y280 x Z220. This window has a protruding ledge of X50 x Y280 x Z5 at its bottom. Then, for the glass, duplicate the subtractive brush, resize it to X4 x Y278 x Z216, convert it to an additive brush, and adjust it to fit.

Apply M_Metal_Brushed to the frame and M_Glass to the additive brush that was just created.

Part 2 – creating an open window for the window seat

Part 3 – creating windows for the room

For the room windows, create a subtractive brush of size X144 x Y40 x Z94. This creates a hollow in the wall for the window frame prop, SM_WindowFrame. Duplicate the subtractive brush and the prop to create two windows for the room.

Part 4 – creating the main door area

For the main door area, we start by creating the doors and their frames, then the brick walls around the door, and lastly the remaining plain concrete wall.

We have two doors with frames, then some brick wall, before going back to the usual smooth walls. Here are the dimensions for creating the door area:

BSPs needed:

  • Actual door x 2: X20 x Y116 x Z250
  • Side frame x 2: X25 x Y4 x Z250
  • Top frame: X25 x Y242 x Z5

Here are the dimensions for creating the area around the door:

BSPs needed:

  • Brick wall side x 2: X30 x Y52 x Z340
  • Brick wall top: X30 x Y242 x Z74
  • Smooth wall left: X30 x Y126 x Z340
  • Smooth wall right: X30 x Y130 x Z340

Creating basic furniture

Let us build the furniture part by part, as follows.

Part 1 – creating a dining table and placing chairs

For the dining table, we will customize a wooden table with a tabletop of size X480 x Y160 x Z12 and two legs, each of size X20 x Y120 x Z70, placed 40 units from the edges of the table. The material used to texture it is M_Wood_Walnut.

Then arrange eight chairs around the table using SM_Chair from the Props folder.

Part 2 – decorating the sitting area

There are two low tables in the middle and one long low table at the wall. Place three SM_Couch props from the Props folder around the low tables. Here are the dimensions for the larger table:

BSPs needed:

  • Square top: X140 x Y140 x Z8
  • Leg x 2: X120 x Y12 x Z36

Here are the dimensions for the smaller table:

BSPs needed:

  • Leg x 2: X120 x Y12 x Z36

Here are the dimensions for the long low table at the wall:

BSPs needed:

  • Block: X100 x Y550 x Z100

Part 3 – creating the window seat area

Next to the open window, place a geometry box of size X120 x Y310 x Z100 to create a simplified seat by the window.

Part 4 – creating the Japanese seating area

The Japanese square table has a surface of size X200 x Y200 x Z8 and four short legs, each of size X20 x Y20 x Z36, placed close to the corners of the table.

To create leg space under the table, I used a subtractive brush (X140 x Y140 x Z40) and placed it in the ground under the table. I used the corners of this subtractive brush as a guide for where to place the table's short legs.

Part 5 – creating the kitchen cabinet area

This is a simplified block prototype for the kitchen cabinet area. The following are the dimensions for the L-shaped area:

BSPs needed:

  • Shorter L: cabinet under tabletop (M_Wood_Walnut): X140 x Y450 x Z100
  • Longer L: cabinet under tabletop (M_Wood_Walnut): X890 x Y140 x Z100
  • Shorter L: tabletop (M_Metal_Brushed_Nickel): X150 x Y450 x Z10
  • Longer L: tabletop (M_Metal_Brushed_Nickel): X900 x Y150 x Z10
  • Shorter L: hanging cabinet (M_Wood_Walnut): X100 x Y500 x Z100
  • Longer L: hanging cabinet (M_Wood_Walnut): X900 x Y100 x Z100

The following are the dimensions for the island area (hood):

BSPs needed:

  • Hood, wooden area (M_Wood_Walnut): X400 x Y75 x Z60
  • Hood, metallic area (M_Metal_Chrome): X500 x Y150 x Z30

The following are the dimensions for the island area (table):

BSPs needed:

  • Cabinet under the table (M_Wood_Walnut): X500 x Y150 x Z100
  • Tabletop (M_Metal_Chrome): X550 x Y180 x Z10
  • Sink, use a subtractive brush (M_Metal_Chrome): X100 x Y80 x Z40
  • Stovetop (M_Metal_Burnished_Steel): X140 x Y100 x Z5

Intensity

Another setting for Point Light is Intensity. Intensity affects the brightness of the light. You can play around the Intensity value to adjust the brightness of the light. Before we determine what value to use for this field and how bright we want our light to be, you should be aware of another setting, Use Inverse Squared Falloff.

Use Inverse Squared Falloff

Point Lights and Spot Lights have physically based inverse squared falloff set on, as default. This setting is configurable as a checkbox found in the Light details under Advanced. The following screenshot shows where this property is found in the Details panel:

Use Inverse Squared Falloff

Inverse squared falloff is a physics law that describes how light intensity naturally fades over distance. When we have this setting, the units for intensity use the same units as the lights we have in the real world, in lumens. When inverse squared distance falloff is not used, intensity becomes just a value.

In the previous chapter where we have added our first Point Light, we have set intensity as 1700. This is equivalent to the brightness of a light bulb that has 1700 lumens because inverse squared distance falloff is used.

Color

To adjust the color of Point Light, go to Light | Color. The following screenshot shows how the color of the light can be adjusted by specifying the RGB values or using the color picker to select the desired color:

Color
Adding and configuring a Spot Light

Open Chapter4Level.umap and rename it Chapter4Level_SpotLight.umap. Go to Modes | Lights, drag and drop a Spot Light into the level.

The brightness, visible influence radius, and color of a Spot Light can be configured in the same way as the Point Light through the value of Intensity, Attenuation Radius, and Color.

Since Point Light has light emitting in all directions and a Spot Light emits light from a single point outwards in a conical shape with a direction, the Spot Light has additional properties such as inner cone and outer cone angle, which are configurable.

Inner cone and outer cone angle

The unit for the outer cone angle and inner cone angle is in degrees. The following screenshot shows the light radius that the spotlight has when the outer cone angle = 20 (on the left) and outer cone angle = 15 (on the right). The inner cone angle value did not produce much visible results in the screenshot, so very often the value is 0. However, the inner cone angle can be used to provide light in the center of the cone. This would be more visible for lights with a wider spread and certain IES Profiles.

Inner cone and outer cone angle
Using the IES Profile

Open Chapter4Level_PointLight.umap and rename it Chapter4Level_IESProfile.umap.

IES Light Profile is a file that contains information that describes how a light will look. This is created by light manufacturers and can be downloaded from the manufacturers' websites. These profiles could be used in architectural models to render scenes with realistic lighting. In the same way, the IES Profile information can be used in Unreal Engine 4 to render more realistic lights. IES Light Profiles can be applied to a Point Light or a Spot Light.

Downloading IES Light Profiles

IES Light Profiles can be downloaded from light manufacturers' websites. Here's a few that you can use:

Importing IES Profiles into the Unreal Engine Editor

From Content Browser, click on Import, as shown in the following screenshot:

Importing IES Profiles into the Unreal Engine Editor

I prefer to have my files in a certain order, hence I have created a new folder called IESProfile and created subfolders with the names of the manufacturers to better categorize all the light profiles that were imported.

Using IES Profiles

Continuing from the previous example, select the right Spot Light which we have in the scene and make sure it is selected. Go to the Details panel and scroll down to show the Light Profile of the light.

Then go to Content Browser and go to the IESProfile folder where we have imported the light profiles into. Click on one of the profiles that you want, drag and drop it on the IES Texture of the Spot Light. Alternatively, you can select the profile and go back to the Details panel of the Light and click on the arrow next to IES Texture to apply the profile on the Spot Light. In the following screenshot, I applied one of the profiles downloaded from the Panasonic website labeled 144907.

Using IES Profiles

I reconfigured the Spot Light with Intensity = 1000, Attenuation Radius = 1000, Outer Cone Angle = 40, and Inner Cone Angle = 0.

Next, I deleted the other Spot Light and replaced it with a Point Light where I set Intensity = 1000 and Attenuation Radius = 1000. I also set the Rotation-Y = -90 and then applied the same IES Profile to it. The following screenshot shows the difference when the same light profile is applied to a Spot Light and a Point Light. Note that the spread of the light in the Spot Light is reduced. This reinforces the concept that a Spot Light provides a conical shaped light with a direction spreading from the point source outwards. The outer cone angle determines this spread. The point light emits light in all directions and equally out, so it did not attenuate the light profile settings allowing the full design of this light profile to be displayed on the screen. This is one thing to keep in mind while using the IES Light Profile and which types of light to use them on.

Using IES Profiles
Adding and configuring a Directional Light

Open Chapter4Level.umap and rename it Chapter4Level_DirectionalLight.umap.

We have already added a Directional Light into our level in Chapter 2, Creating Your First Level, and it provides parallel beams of light into the level.

Directional Light can also be used to light the level by controlling the direction of the sun. The screenshot on the left shows the Directional Light when the Atmosphere Sun Light checkbox is unchecked. The screenshot on the right shows the Directional Light when the Atmosphere Sun Light checkbox is checked. When the Atmosphere Sun Light checkbox is checked, you can control the direction of the sunlight by adjusting the rotation of Directional Light.

Adding and configuring a Directional Light

The following screenshot shows how this looks when Rotation-Y = 0. This looks like an early sunset scene:

Adding and configuring a Directional Light

Example – adding and configuring a Sky light

Open Chapter4Level_DirectionalLight.umap and rename it Chapter4Level_Skylight.umap.

In the previous example, we have added sunlight control in the Directional Light. Build and compile to see how the level now looks.

Now, let us add a Sky Light into the level by going to Modes | Lights and then clicking and dragging Sky Light into the level. When adding a Sky Light to the level, always remember to build and compile first in order to see the effect of the Sky Light.

What does a Sky Light do? Sky Light models the color/light from the sky and is used to light up the external areas of the level. So the external areas of the level look more realistic as the color/light is reflecting off the surfaces (instead of using simple white/colored light).

The following screenshot shows the effect of a Sky Light. The left image shows the Sky Light not in the level. The right one shows the Sky Light. Note that the walls now have a tinge of the color of the sky.

Example – adding and configuring a Sky light
Static, stationary, or movable lights

After learning how to place and configure the different lights, we need to consider what kind of lights we need in the level. If you are new to the concept of light, you might want to briefly go through the useful light terms section to help in your understanding.

The following screenshot shows the Details panel where you can change a light to be static, stationary, or movable.

Static, stationary, or movable lights

Static and Stationary light sounds pretty much similar. What is the difference? When do you want to use a Static light and when do you want to use a Stationary light?

Common light/shadow definitions

The common light/shadow definitions are as follows:

  • Direct Light: This is the light that is present in the scene directly due to a light source.
  • Indirect Light: This is the light in the scene that is not directly from a light source. It is reflected light bouncing around and it comes from all sides.
  • Light Map: This is a data structure that stores the light/brightness information about an object. This makes the rendering of the object much quicker because we already know its color/brightness information in advance and it is not necessary to compute this during runtime.
  • Shadow Map: This is a process created to make dynamic shadows. It is fundamentally made up of two passes to create shadows. More passes can be added to render nicer shadows.

Static Light

In a game, we always want to have the best performance, and Static Light will be an excellent option because a Static Light needs only to be precomputed once into a Light Map. So for a Static Light, we have the lowest performance cost but in exchange, we are unable to change how the light looks, move the light, and integrate the effect of this light with moving objects (which means it is unable to create a shadow for the moving object as it moves within the influence of the light) into the environment during gameplay. However, a Static Light can cast shadow on the existing stationary objects that are in the level within its influence of radius. The radius of influence is based on the source radius of the light. In return for low performance cost, a Static Light has quite a bit of limitation. Hence, Static Lights are commonly used in the creation of scenes targeted for devices with low computational power.

Stationary Light

Stationary Light can be used in situations when we do not need to move, rotate, or change the influence radius of the light during gameplay, but allow the light the capacity to change color and brightness. Indirect Light and shadows are prebaked in Light Map in the same way as Static Light. Direct Light shadows are stored within Shadow Maps.

Stationary Light is medium in performance cost as it is able to create static shadow on static objects through the use of distance field shadow maps. Completely dynamic light and shadows is often more than 20 times more intensive.

Movable Light

Movable Light is used to cast dynamic light and shadows for the scene. This should be used sparingly in the level, unless absolutely necessary.

Exercise – extending your game level (optional)

Here are the steps that I have taken to extend the current Level4 to the prebuilt version of what we have right now. They are by no means the only way to do it. I have simply used a Geometry Brush to extend the level here for simplicity. The following screenshot shows one part of the extended level:

Exercise – extending your game level (optional)

Useful tips

Group items in the same area together when possible and rename the entity to help you identify parts of the level more quickly. These simple extra steps can save time when using the editor to create a mock-up of a game level.

Guidelines

If you plan to extend the game level on your own, open and load Level4.umap. Then save map as Level4_MyPreBuilt.umap. You can also open a copy of the extended level to copy assets or use it as a quick reference.

Area expansion

We will start by extending the floor area of the level.

Part 1 – lengthening the current walkway

The short walkway was extended to form an L-shaped walkway. The dimensions of the extended portion are X1200 x Y340 x Z40.

BSPs needed

X

Y

Z

Ceiling

1200

400

40

Floor

1200

400

40

Left wall

1570

30

280

Right wall

1260

30

280

Part 2 – creating a big room (living and kitchen area)

The walkway leads to a big room at the end, which is the main living and kitchen area.

BSPs needed

X

Y

Z

Ceiling

2000

1600

40

Floor

2000

1600

40

The left wall dividing the big room and walkway (the wall closest to you as you enter the big room from the walkway)

30

600

340

The light wall dividing the big room and walkway (the wall closest to you as you enter the big room from the walkway)

30

600

340

The left wall of the big room (where the kitchen area is)

1200

30

340

The right wall of the big room (where the dining area is)

2000

30

340

The left wall to the door (the wall across the room as you enter from the walkway, where the window seats are)

30

350

340

The right wall to the door (the wall across the room as you enter from the walkway, where the long benches are)

30

590

340

Door area (consists of brick walls, door frames, and door)

Wall filler left

30

130

340

Wall filler right

30

126

340

Door x 2

20

116

250

Side door frame x 2

25

4

250

Horizontal door frame

25

242

5

Side brick wall x 2

30

52

340

Horizontal brick wall

30

242

74

Part 3 – creating a small room along the walkway

To create the walkway to the small room, duplicate the same doorframe that we have created in the first room.

BSPs needed

X

Y

Z

Ceiling

800

600

40

Floor

800

600

40

Side wall x 2

30

570

340

Opposite wall (wall with the windows)

740

30

340

Part 4 – Creating a den area in the big room

BSPs needed

X

Y

Z

Sidewall x 2

30

620

340

Wall with shelves

740

30

340

Creating windows and doors

Now that we are done with rooms, we can work on the doors and windows.

Part 1 – creating large glass windows for the dining area

To create the windows, we use a subtractive Geometry Brush to create holes in the wall. First, create one of size X144 x Y30 x Z300 and place it right in the middle between the ceiling and ground. Duplicate this and convert it to an additive brush; adjust the size to X142 x Y4 x Z298.

Apply M_Metal_Copper for the frame and M_Glass to the addition brush, which was just created. Now, group them and duplicate both the brushes four times to create five windows. The screenshot of the dining area windows is shown as follows:

Part 1 – creating large glass windows for the dining area

Part 2 – creating an open window for the window seat

To create the window for the window seat area, create a subtractive Geometry Brush of size X50 x Y280 x Z220. This window has a protruding ledge of X50 x Y280 x Z5 at the bottom. Then, for the glass, duplicate the subtractive brush, resize it to X4 x Y278 x Z216, convert it to an additive brush, and adjust it to fit.

Apply M_Metal_Brushed to the frame and M_Glass to the additive brush that was just created.
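Notice that the glass panes above are cut slightly smaller than their openings so that they slot in with a little play. A small sketch totalling the per-axis clearance (the `clearance` helper is mine, not part of Unreal):

```python
# Check how much clearance each glass pane has inside its wall opening.
# Dimensions are taken from the two window constructions above.

def clearance(hole, pane):
    """Per-axis total gap between the pane and the hole (hole - pane)."""
    return tuple(h - p for h, p in zip(hole, pane))

# Dining-area window: subtractive hole vs additive glass pane (X, Y, Z)
dining = clearance((144, 30, 300), (142, 4, 298))
# Window-seat window: hole vs glass pane
seat = clearance((50, 280, 220), (4, 278, 216))

print(dining)  # (2, 26, 2): 1 unit of play on each side in X and Z
print(seat)    # (46, 2, 4)
```

The small axes (2 units total, 1 per side) are the pane-to-frame fit; the large axis in each tuple is simply the wall thickness minus the thin glass brush.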


Part 3 – creating windows for the room

For the room windows, create a subtractive brush of size X144 x Y40 x Z94. This creates a hollow in the wall for the window frame prop, SM_WindowFrame. Duplicate the subtractive brush and the prop to create two windows for the room.

Part 4 – creating the main door area

For the main door area, we start by creating the doors and their frames, then the brick walls around the door, and lastly the remaining plain concrete wall.

We have two doors with frames, then some brick wall, before going back to the usual smooth walls. Here are the dimensions for creating this door area:

| BSPs needed | X | Y | Z |
| --- | --- | --- | --- |
| Actual door x 2 | 20 | 116 | 250 |
| Side frame x 2 | 25 | 4 | 250 |
| Top frame | 25 | 242 | 5 |

Here are the dimensions for creating the area around the door:

| BSPs needed | X | Y | Z |
| --- | --- | --- | --- |
| Brick wall side x 2 | 30 | 52 | 340 |
| Brick wall top | 30 | 242 | 74 |
| Smooth wall left | 30 | 126 | 340 |
| Smooth wall right | 30 | 130 | 360 |

Creating basic furniture

Let us build the furniture part by part.

Part 1 – creating a dining table and placing chairs

For the dining table, we will customize a wooden table with a tabletop of size X480 x Y160 x Z12 and two legs, each of size X20 x Y120 x Z70, placed 40 units from the edge of the table. The material used is M_Wood_Walnut.
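When placing the leg brushes, it helps to work out their offsets from the tabletop's center rather than eyeballing them. A short sketch, assuming "placed 40 from the edge" means a 40-unit gap between the end of the tabletop and the near face of the leg (my reading, not spelled out in the text):

```python
# Work out where the two table legs sit relative to the tabletop center.
# Assumption (mine): "placed 40 from the edge" means a 40-unit gap
# between the end of the tabletop and the near face of the leg.

TOP_X = 480    # length of the tabletop along X
LEG_X = 20     # leg thickness along X
EDGE_GAP = 40  # gap between the table end and the leg (assumed reading)

# Distance from the table center to each leg's center; place one leg at
# +offset and the other at -offset along X.
leg_center_offset = TOP_X / 2 - EDGE_GAP - LEG_X / 2
print(leg_center_offset)  # 190.0
```

With the tabletop brush centered at the origin, the two legs then go at X = +190 and X = -190.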

Then arrange eight chairs around the table using SM_Chair from the Props folder.

Part 2 – decorating the sitting area

There are two low tables in the middle and one long, low table at the wall. Place three SM_Couch props from the Props folder around the low tables. Here are the dimensions for the larger table:

| BSPs needed | X | Y | Z |
| --- | --- | --- | --- |
| Square top | 140 | 140 | 8 |
| Leg x 2 | 120 | 12 | 36 |

Here are the dimensions for the smaller table:

| BSPs needed | X | Y | Z |
| --- | --- | --- | --- |
| Leg x 2 | 120 | 12 | 36 |

Here are the dimensions for the long, low table at the wall:

| BSPs needed | X | Y | Z |
| --- | --- | --- | --- |
| Block | 100 | 550 | 100 |

Part 3 – creating the window seat area

Next to the open window, place a geometry box of size X120 x Y310 x Z100. This creates a simplified seat by the window.

Part 4 – creating the Japanese seating area

The Japanese square table has a top of size X200 x Y200 x Z8 and four short legs (each of size X20 x Y20 x Z36) placed close to the corners of the table.

To create leg space under the table, I used a subtractive brush (X140 x Y140 x Z40) and placed it in the ground under the table. I then used the corners of this subtractive brush as a guide for where to place the short legs of the table.

Part 5 – creating the kitchen cabinet area

This is a simplified block prototype of the kitchen cabinet area. The following are the dimensions for the L-shaped area:

| BSPs needed | Material | X | Y | Z |
| --- | --- | --- | --- | --- |
| Shorter L: cabinet under tabletop | M_Wood_Walnut | 140 | 450 | 100 |
| Longer L: cabinet under tabletop | M_Wood_Walnut | 890 | 140 | 100 |
| Shorter L: tabletop | M_Metal_Brushed_Nickel | 150 | 450 | 10 |
| Longer L: tabletop | M_Metal_Brushed_Nickel | 900 | 150 | 10 |
| Shorter L: hanging cabinet | M_Wood_Walnut | 100 | 500 | 100 |
| Longer L: hanging cabinet | M_Wood_Walnut | 900 | 100 | 100 |

The following are the dimensions for the island area (hood):

| BSPs needed | Material | X | Y | Z |
| --- | --- | --- | --- | --- |
| Hood (wooden area) | M_Wood_Walnut | 400 | 75 | 60 |
| Hood (metallic area) | M_Metal_Chrome | 500 | 150 | 30 |

The following are the dimensions for the island area (table):

| BSPs needed | Material | X | Y | Z |
| --- | --- | --- | --- | --- |
| Cabinet under the table | M_Wood_Walnut | 500 | 150 | 100 |
| Tabletop | M_Metal_Chrome | 550 | 180 | 10 |
| Sink (use a subtractive brush) | M_Metal_Chrome | 100 | 80 | 40 |
| Stovetop | M_Metal_Burnished_Steel | 140 | 100 | 5 |
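A quick way to keep block-out dimensions like these honest is to check the relationships between them. In the cabinet tables above, each tabletop is slightly wider than the cabinet beneath it, and the cabinet height plus the tabletop height gives the finished counter height:

```python
# Quick check on the kitchen cabinet dimensions above: each tabletop
# slightly overhangs its base cabinet, and cabinet plus top gives the
# finished counter height.

CABINET_HEIGHT = 100  # Z of the cabinets under the tabletops
TOP_HEIGHT = 10       # Z of the tabletops

# (cabinet X, tabletop X) for the two sections of the L
sections = {"shorter L": (140, 150), "longer L": (890, 900)}

overhangs = {name: top - cab for name, (cab, top) in sections.items()}
counter_height = CABINET_HEIGHT + TOP_HEIGHT

print(overhangs)       # 10 units of overhang on each section
print(counter_height)  # 110
```

A consistent 10-unit overhang on both legs of the L keeps the counter profile uniform when the brushes meet at the corner.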

Use Inverse Squared Falloff

Point Lights and Spot Lights have physically based inverse squared falloff enabled by default. The setting is a checkbox found in the light's Details panel under Advanced. The following screenshot shows where this property is found in the Details panel:


Inverse squared falloff is the physical law describing how light intensity naturally fades over distance: the light arriving at a surface is proportional to 1/distance². With this setting enabled, intensity is measured in the same units as real-world lights, lumens. When inverse squared falloff is not used, intensity is just a unitless value.

In the previous chapter, where we added our first Point Light, we set the intensity to 1700. Because inverse squared falloff is used, this is equivalent to the brightness of a 1700-lumen light bulb.
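The 1/distance² relationship can be illustrated with a couple of lines of arithmetic (the `received_intensity` helper is mine, purely for illustration, not an engine function):

```python
# Illustration of inverse squared falloff: the light reaching a surface
# falls off with 1 / distance^2, so doubling the distance quarters the
# received intensity.

def received_intensity(source_intensity, distance):
    """Relative intensity at `distance` from a point source."""
    return source_intensity / distance ** 2

bulb = 1700  # the Point Light intensity used in the previous chapter

at_1 = received_intensity(bulb, 1.0)
at_2 = received_intensity(bulb, 2.0)
print(at_2 / at_1)  # 0.25: twice as far, a quarter of the light
```

This is why a physically based light feels like it dims quickly near the source and then tapers off slowly, unlike a light with a plain linear falloff.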

Color

To adjust the color of a Point Light, go to Light | Color. The following screenshot shows how the color of the light can be adjusted by specifying RGB values or by using the color picker to select the desired color:

Adding and configuring a Spot Light

Open Chapter4Level.umap and rename it Chapter4Level_SpotLight.umap. Go to Modes | Lights, then drag and drop a Spot Light into the level.

The brightness, visible influence radius, and color of a Spot Light can be configured in the same way as for a Point Light, through the Intensity, Attenuation Radius, and Color values.

Since a Point Light emits light in all directions while a Spot Light emits light from a single point outward in a cone with a direction, the Spot Light has additional configurable properties: the inner cone angle and the outer cone angle.

Inner cone and outer cone angle

Both the outer cone angle and the inner cone angle are specified in degrees. The following screenshot shows the light radius that the spotlight has with an outer cone angle of 20 (on the left) and 15 (on the right). The inner cone angle did not produce a very visible result in the screenshot, so the value is very often left at 0. However, the inner cone angle can be used to concentrate light in the center of the cone. This is more visible for lights with a wider spread and with certain IES Profiles.
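To build intuition for how the outer cone angle maps to a lit area, simple trigonometry gives the radius of the lit circle for a spotlight aimed straight down: r = h · tan(angle). The height value below is a hypothetical example, not from the text:

```python
# The outer cone angle controls how wide the lit circle is. For a
# spotlight aimed straight down from height h, the radius of the lit
# spot on the floor is r = h * tan(outer_cone_angle).

import math

def lit_radius(height, outer_cone_angle_deg):
    """Radius of the lit circle on the floor below the spotlight."""
    return height * math.tan(math.radians(outer_cone_angle_deg))

h = 300  # hypothetical mounting height in Unreal units
r20 = lit_radius(h, 20)  # outer cone angle from the left screenshot
r15 = lit_radius(h, 15)  # outer cone angle from the right screenshot
print(round(r20, 1), round(r15, 1))
```

Narrowing the cone from 20 to 15 degrees shrinks the lit radius by roughly a quarter at this height, which matches the visibly tighter pool of light in the right-hand screenshot.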

Using the IES Profile

Open Chapter4Level_PointLight.umap and rename it Chapter4Level_IESProfile.umap.

An IES Light Profile is a file that describes how light is distributed from a light source. These profiles are created by lighting manufacturers and can be downloaded from their websites. They are often used in architectural models to render scenes with realistic lighting. In the same way, IES Profile information can be used in Unreal Engine 4 to render more realistic lights. An IES Light Profile can be applied to a Point Light or a Spot Light.
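Under the hood, an IES profile is plain text: header keywords, a TILT line, and then a block of numbers listing angles and candela (intensity) values. The sketch below pulls the peak intensity out of a toy profile I wrote by hand; the field layout follows the IES LM-63 format as I understand it, and real manufacturer files are larger and vary in detail:

```python
# A toy look inside an IES (LM-63-style) profile. The sample below is
# hand-written for illustration; real manufacturer files are larger.

SAMPLE_IES = """IESNA:LM-63-1995
[TEST] hypothetical profile
[MANUFAC] Example Co
TILT=NONE
1 1000 1 3 1 1 2 0.1 0.1 0.1
1.0 1.0 0.0
0 45 90
0
200 120 10
"""

def peak_candela(text):
    """Return the largest candela value and its vertical angle."""
    lines = text.splitlines()
    tilt = next(i for i, l in enumerate(lines) if l.startswith("TILT"))
    numbers = " ".join(lines[tilt + 1:]).split()
    n_vertical = int(numbers[3])    # count of vertical angles
    n_horizontal = int(numbers[4])  # count of horizontal angles
    # Skip the 10 header numbers and the 3-value ballast line.
    rest = [float(x) for x in numbers[13:]]
    v_angles = rest[:n_vertical]
    candela = rest[n_vertical + n_horizontal:]
    best = max(range(len(candela)), key=lambda i: candela[i])
    return candela[best], v_angles[best % n_vertical]

print(peak_candela(SAMPLE_IES))  # brightest straight down (angle 0)
```

You never need to parse these files yourself to use them in Unreal, but seeing the angle/candela table explains why an imported profile can shape a light so precisely.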

Downloading IES Light Profiles

IES Light Profiles can be downloaded from light manufacturers' websites. Here are a few that you can use:

Importing IES Profiles into the Unreal Engine Editor

From Content Browser, click on Import, as shown in the following screenshot:


I prefer to keep my files organized, so I created a new folder called IESProfile, with subfolders named after the manufacturers, to categorize the imported light profiles.

Using IES Profiles

Continuing from the previous example, select the right Spot Light in the scene. Go to the Details panel and scroll down to the Light Profile section of the light.

Then, in the Content Browser, go to the IESProfile folder where we imported the light profiles. Click on the profile that you want, then drag and drop it onto the IES Texture slot of the Spot Light. Alternatively, select the profile, go back to the Details panel of the light, and click on the arrow next to IES Texture to apply the profile to the Spot Light. In the following screenshot, I applied one of the profiles downloaded from the Panasonic website, labeled 144907.


I reconfigured the Spot Light with Intensity = 1000, Attenuation Radius = 1000, Outer Cone Angle = 40, and Inner Cone Angle = 0.

Next, I deleted the other Spot Light and replaced it with a Point Light, where I set Intensity = 1000 and Attenuation Radius = 1000. I also set Rotation-Y = -90 and applied the same IES Profile to it. The following screenshot shows the difference when the same light profile is applied to a Spot Light and a Point Light. Note that the spread of the light from the Spot Light is reduced. This reinforces the concept that a Spot Light emits a cone of light with a direction spreading outward from the point source, and the outer cone angle clips the profile to that spread. The Point Light emits light equally in all directions, so it does not clip the profile, allowing the full design of the light profile to be displayed. Keep this in mind when deciding which type of light to apply an IES Light Profile to.

Adding and configuring a Directional Light

Open Chapter4Level.umap and rename it Chapter4Level_DirectionalLight.umap.

We already added a Directional Light to our level in Chapter 2, Creating Your First Level; it casts parallel beams of light into the level.

A Directional Light can also light the level as the sun, with a controllable direction. The screenshot on the left shows the Directional Light with the Atmosphere Sun Light checkbox unchecked; the screenshot on the right shows it checked. When the checkbox is checked, you can control the direction of the sunlight by adjusting the rotation of the Directional Light.


The following screenshot shows how this looks when Rotation-Y = 0. It resembles an early sunset scene:

Example – adding and configuring a Sky light

Open Chapter4Level_DirectionalLight.umap and rename it Chapter4Level_Skylight.umap.

In the previous example, we added sunlight control to the Directional Light. Build the lighting to see how the level now looks.

Now, let us add a Sky Light to the level by going to Modes | Lights and then dragging a Sky Light into the level. When adding a Sky Light, always remember to rebuild the lighting first in order to see its effect.

What does a Sky Light do? A Sky Light models the color and light coming from the sky and is used to light the exterior areas of the level. The exterior areas look more realistic because the sky's color/light reflects off their surfaces (instead of simple white/colored light).

The following screenshot shows the effect of a Sky Light. The left image shows the level without the Sky Light; the right one shows it with the Sky Light. Note that the walls now have a tinge of the sky's color.

Static, stationary, or movable lights

After learning how to place and configure the different lights, we need to consider what kind of lights we need in the level. If you are new to lighting, you might want to briefly go through the common light/shadow definitions that follow to help your understanding.

The following screenshot shows the Details panel where you can change a light to be static, stationary, or movable.


Static and Stationary lights sound pretty similar. What is the difference? When do you want to use a Static Light, and when do you want to use a Stationary Light?

Common light/shadow definitions

The common light/shadow definitions are as follows:

  • Direct Light: This is light that is present in the scene directly from a light source.
  • Indirect Light: This is light in the scene that does not come directly from a light source; it is reflected light that bounces around and arrives from all sides.
  • Light Map: This is a data structure that stores precomputed light/brightness information about an object. It makes rendering the object much quicker because its color/brightness is known in advance and does not need to be computed at runtime.
  • Shadow Map: This is a technique used to create dynamic shadows. It fundamentally involves two rendering passes; more passes can be added to render nicer shadows.
Static Light

In a game, we always want the best performance, and a Static Light is an excellent option because it needs to be precomputed only once into a Light Map. A Static Light has the lowest performance cost, but in exchange we cannot change how the light looks, move it, or have it interact with moving objects during gameplay (it cannot cast a shadow for a moving object as it passes through the light's influence). However, a Static Light can cast shadows on the existing stationary objects within its radius of influence, which is based on the source radius of the light. In return for the low performance cost, a Static Light has quite a few limitations; hence, Static Lights are commonly used for scenes targeted at devices with low computational power.

Stationary Light

A Stationary Light can be used when we do not need to move, rotate, or change the influence radius of the light during gameplay, but still want to be able to change its color and brightness. Indirect light and shadows are prebaked into the Light Map in the same way as for a Static Light, while direct shadows are stored in Shadow Maps.

A Stationary Light has a medium performance cost, as it can create static shadows on static objects through distance field shadow maps. Completely dynamic light and shadows are often more than 20 times more expensive.

Movable Light

A Movable Light casts fully dynamic light and shadows in the scene. It is the most expensive option and should be used sparingly in the level, only where necessary.

Exercise – extending your game level (optional)

Here are the steps I took to extend the current Level4 into the prebuilt version we have now. They are by no means the only way to do it; I simply used Geometry Brushes to extend the level, for simplicity. The following screenshot shows one part of the extended level:

Useful tips

Group items in the same area together when possible, and rename entities to help you identify parts of the level more quickly. These simple extra steps save time when using the editor to create a mock-up of a game level.

Guidelines

If you plan to extend the game level on your own, open and load Level4.umap, then save the map as Level4_MyPreBuilt.umap. You can also open a copy of the extended level to copy assets from, or use it as a quick reference.


Color

To adjust the color of Point Light, go to Light | Color. The following screenshot shows how the color of the light can be adjusted by specifying the RGB values or using the color picker to select the desired color:

Color
Adding and configuring a Spot Light

Open Chapter4Level.umap and rename it Chapter4Level_SpotLight.umap. Go to Modes | Lights, drag and drop a Spot Light into the level.

The brightness, visible influence radius, and color of a Spot Light can be configured in the same way as the Point Light through the value of Intensity, Attenuation Radius, and Color.

Since Point Light has light emitting in all directions and a Spot Light emits light from a single point outwards in a conical shape with a direction, the Spot Light has additional properties such as inner cone and outer cone angle, which are configurable.

Inner cone and outer cone angle

The unit for the outer cone angle and inner cone angle is in degrees. The following screenshot shows the light radius that the spotlight has when the outer cone angle = 20 (on the left) and outer cone angle = 15 (on the right). The inner cone angle value did not produce much visible results in the screenshot, so very often the value is 0. However, the inner cone angle can be used to provide light in the center of the cone. This would be more visible for lights with a wider spread and certain IES Profiles.

Inner cone and outer cone angle
Using the IES Profile

Open Chapter4Level_PointLight.umap and rename it Chapter4Level_IESProfile.umap.

IES Light Profile is a file that contains information that describes how a light will look. This is created by light manufacturers and can be downloaded from the manufacturers' websites. These profiles could be used in architectural models to render scenes with realistic lighting. In the same way, the IES Profile information can be used in Unreal Engine 4 to render more realistic lights. IES Light Profiles can be applied to a Point Light or a Spot Light.

Downloading IES Light Profiles

IES Light Profiles can be downloaded from light manufacturers' websites. Here's a few that you can use:

Importing IES Profiles into the Unreal Engine Editor

From Content Browser, click on Import, as shown in the following screenshot:

Importing IES Profiles into the Unreal Engine Editor

I prefer to have my files in a certain order, hence I have created a new folder called IESProfile and created subfolders with the names of the manufacturers to better categorize all the light profiles that were imported.

Using IES Profiles

Continuing from the previous example, select the right Spot Light which we have in the scene and make sure it is selected. Go to the Details panel and scroll down to show the Light Profile of the light.

Then go to Content Browser and go to the IESProfile folder where we have imported the light profiles into. Click on one of the profiles that you want, drag and drop it on the IES Texture of the Spot Light. Alternatively, you can select the profile and go back to the Details panel of the Light and click on the arrow next to IES Texture to apply the profile on the Spot Light. In the following screenshot, I applied one of the profiles downloaded from the Panasonic website labeled 144907.

Using IES Profiles

I reconfigured the Spot Light with Intensity = 1000, Attenuation Radius = 1000, Outer Cone Angle = 40, and Inner Cone Angle = 0.

Next, I deleted the other Spot Light and replaced it with a Point Light where I set Intensity = 1000 and Attenuation Radius = 1000. I also set the Rotation-Y = -90 and then applied the same IES Profile to it. The following screenshot shows the difference when the same light profile is applied to a Spot Light and a Point Light. Note that the spread of the light in the Spot Light is reduced. This reinforces the concept that a Spot Light provides a conical shaped light with a direction spreading from the point source outwards. The outer cone angle determines this spread. The point light emits light in all directions and equally out, so it did not attenuate the light profile settings allowing the full design of this light profile to be displayed on the screen. This is one thing to keep in mind while using the IES Light Profile and which types of light to use them on.

Using IES Profiles
Adding and configuring a Directional Light

Open Chapter4Level.umap and rename it Chapter4Level_DirectionalLight.umap.

We have already added a Directional Light into our level in Chapter 2, Creating Your First Level, and it provides parallel beams of light into the level.

Directional Light can also be used to light the level by controlling the direction of the sun. The screenshot on the left shows the Directional Light when the Atmosphere Sun Light checkbox is unchecked. The screenshot on the right shows the Directional Light when the Atmosphere Sun Light checkbox is checked. When the Atmosphere Sun Light checkbox is checked, you can control the direction of the sunlight by adjusting the rotation of Directional Light.

Adding and configuring a Directional Light

The following screenshot shows how this looks when Rotation-Y = 0. This looks like an early sunset scene:

Adding and configuring a Directional Light

Example – adding and configuring a Sky light

Open Chapter4Level_DirectionalLight.umap and rename it Chapter4Level_Skylight.umap.

In the previous example, we have added sunlight control in the Directional Light. Build and compile to see how the level now looks.

Now, let us add a Sky Light into the level by going to Modes | Lights and then clicking and dragging Sky Light into the level. When adding a Sky Light to the level, always remember to build and compile first in order to see the effect of the Sky Light.

What does a Sky Light do? Sky Light models the color/light from the sky and is used to light up the external areas of the level. So the external areas of the level look more realistic as the color/light is reflecting off the surfaces (instead of using simple white/colored light).

The following screenshot shows the effect of a Sky Light. The left image shows the Sky Light not in the level. The right one shows the Sky Light. Note that the walls now have a tinge of the color of the sky.

Example – adding and configuring a Sky light
Static, stationary, or movable lights

After learning how to place and configure the different lights, we need to consider what kind of lights we need in the level. If you are new to the concept of light, you might want to briefly go through the useful light terms section to help in your understanding.

The following screenshot shows the Details panel where you can change a light to be static, stationary, or movable.

Static, stationary, or movable lights

Static and Stationary light sounds pretty much similar. What is the difference? When do you want to use a Static light and when do you want to use a Stationary light?

Common light/shadow definitions

The common light/shadow definitions are as follows:

  • Direct Light: This is the light that is present in the scene directly due to a light source.
  • Indirect Light: This is the light in the scene that is not directly from a light source. It is reflected light bouncing around and it comes from all sides.
  • Light Map: This is a data structure that stores the light/brightness information about an object. This makes the rendering of the object much quicker because we already know its color/brightness information in advance and it is not necessary to compute this during runtime.
  • Shadow Map: This is a process created to make dynamic shadows. It is fundamentally made up of two passes to create shadows. More passes can be added to render nicer shadows.

Static Light

In a game, we always want to have the best performance, and Static Light will be an excellent option because a Static Light needs only to be precomputed once into a Light Map. So for a Static Light, we have the lowest performance cost but in exchange, we are unable to change how the light looks, move the light, and integrate the effect of this light with moving objects (which means it is unable to create a shadow for the moving object as it moves within the influence of the light) into the environment during gameplay. However, a Static Light can cast shadow on the existing stationary objects that are in the level within its influence of radius. The radius of influence is based on the source radius of the light. In return for low performance cost, a Static Light has quite a bit of limitation. Hence, Static Lights are commonly used in the creation of scenes targeted for devices with low computational power.

Stationary Light

Stationary Light can be used in situations when we do not need to move, rotate, or change the influence radius of the light during gameplay, but allow the light the capacity to change color and brightness. Indirect Light and shadows are prebaked in Light Map in the same way as Static Light. Direct Light shadows are stored within Shadow Maps.

Stationary Light is medium in performance cost as it is able to create static shadow on static objects through the use of distance field shadow maps. Completely dynamic light and shadows is often more than 20 times more intensive.

Movable Light

Movable Light is used to cast dynamic light and shadows for the scene. This should be used sparingly in the level, unless absolutely necessary.

Exercise – extending your game level (optional)

Here are the steps that I have taken to extend the current Level4 to the prebuilt version of what we have right now. They are by no means the only way to do it. I have simply used a Geometry Brush to extend the level here for simplicity. The following screenshot shows one part of the extended level:

Exercise – extending your game level (optional)

Useful tips

Group items in the same area together when possible and rename the entity to help you identify parts of the level more quickly. These simple extra steps can save time when using the editor to create a mock-up of a game level.

Guidelines

If you plan to extend the game level on your own, open and load Level4.umap. Then save map as Level4_MyPreBuilt.umap. You can also open a copy of the extended level to copy assets or use it as a quick reference.

Area expansion

We will start by extending the floor area of the level.

Part 1 – lengthening the current walkway

The short walkway was extended to form an L-shaped walkway. The dimensions of the extended portion are X1200 x Y340 x Z40.

BSPs needed:

  • Ceiling: X1200 x Y400 x Z40
  • Floor: X1200 x Y400 x Z40
  • Left wall: X1570 x Y30 x Z280
  • Right wall: X1260 x Y30 x Z280

Part 2 – creating a big room (living and kitchen area)

The walkway leads to a big room at the end, which is the main living and kitchen area.

BSPs needed:

  • Ceiling: X2000 x Y1600 x Z40
  • Floor: X2000 x Y1600 x Z40
  • Left wall dividing the big room and walkway (the wall closest to you as you enter the big room from the walkway): X30 x Y600 x Z340
  • Right wall dividing the big room and walkway: X30 x Y600 x Z340
  • Left wall of the big room (where the kitchen area is): X1200 x Y30 x Z340
  • Right wall of the big room (where the dining area is): X2000 x Y30 x Z340
  • Left wall to the door (the wall across the room as you enter from the walkway, where the window seats are): X30 x Y350 x Z340
  • Right wall to the door (the wall across the room as you enter from the walkway, where the long benches are): X30 x Y590 x Z340

Door area (consists of brick walls, door frames, and door):

  • Wall filler left: X30 x Y130 x Z340
  • Wall filler right: X30 x Y126 x Z340
  • Door x 2: X20 x Y116 x Z250
  • Side door frame x 2: X25 x Y4 x Z250
  • Horizontal door frame: X25 x Y242 x Z5
  • Side brick wall x 2: X30 x Y52 x Z340
  • Horizontal brick wall: X30 x Y242 x Z74

Part 3 – creating a small room along the walkway

To create the walkway to the small room, duplicate the doorframe that we created for the first room.

BSPs needed:

  • Ceiling: X800 x Y600 x Z40
  • Floor: X800 x Y600 x Z40
  • Side wall x 2: X30 x Y570 x Z340
  • Opposite wall (wall with the windows): X740 x Y30 x Z340

Part 4 – Creating a den area in the big room

BSPs needed:

  • Side wall x 2: X30 x Y620 x Z340
  • Wall with shelves: X740 x Y30 x Z340

Creating windows and doors

Now that we are done with rooms, we can work on the doors and windows.

Part 1 – creating large glass windows for the dining area

To create the windows, we use a subtractive Geometry Brush to cut holes in the wall. First, create one of size X144 x Y30 x Z300 and place it right in the middle between the ceiling and the ground. Duplicate this brush, convert the copy to an additive brush, and adjust its size to X142 x Y4 x Z298.

Apply M_Metal_Copper to the frame and M_Glass to the additive brush that was just created. Now group the two brushes and duplicate them four times to create five windows. The dining-area windows are shown in the following screenshot:
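The pane dimensions above are just the hole dimensions shrunk by one unit on each visible side so the glass seats inside the frame, plus the pane's own thickness on the Y axis. A tiny hypothetical helper showing that arithmetic (the book simply states the numbers directly):

```python
def pane_from_hole(hole_x, hole_z, thickness=4, inset=1):
    """Additive glass-pane size derived from a subtractive window hole:
    shrink width and height by `inset` per side so the pane seats inside
    the frame, and give the pane its own Y thickness. (Hypothetical
    helper for illustration only.)"""
    return (hole_x - 2 * inset, thickness, hole_z - 2 * inset)

assert pane_from_hole(144, 300) == (142, 4, 298)  # the dining-area pane
```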


Part 2 – creating an open window for the window seat

To create the window for the window seat area, create a subtractive Geometry Brush of size X50 x Y280 x Z220, with a protruding ledge of X50 x Y280 x Z5 at the bottom of the window. Then, for the glass, duplicate the subtractive brush, convert the copy to an additive brush of size X4 x Y278 x Z216, and adjust it to fit.

Apply M_Metal_Brushed to the frame and M_Glass to the additive brush that was just created.


Part 3 – creating windows for the room

For the room windows, create a subtractive brush of size X144 x Y40 x Z94. This is to create a hollow in the wall for the prop frame: SM_WindowFrame. Duplicate the subtractive brush and prop to create two windows for the room.

Part 4 – creating the main door area

For the main door area, we start by creating the doors and their frames, then the brick walls around the door, and lastly the remaining plain concrete wall.

We have two doors with frames, surrounded by some brick wall, before going back to the usual smooth walls. Here are the dimensions for creating this door area:

BSPs needed:

  • Actual door x 2: X20 x Y116 x Z250
  • Side frame x 2: X25 x Y4 x Z250
  • Top frame: X25 x Y242 x Z5

Here are the dimensions for creating the area around the door:

BSPs needed:

  • Brick wall side x 2: X30 x Y52 x Z340
  • Brick wall top: X30 x Y242 x Z74
  • Smooth wall left: X30 x Y126 x Z340
  • Smooth wall right: X30 x Y130 x Z360

Creating basic furniture

Let us build the furniture part by part.

Part 1 – creating a dining table and placing chairs

For the dining table, we will customize a wooden table with a tabletop of size X480 x Y160 x Z12 and two legs, each of size X20 x Y120 x Z70, placed 40 units from the edges of the table. The material used to texture it is M_Wood_Walnut.

Then arrange eight chairs around the table using SM_Chair from the Props folder.

Part 2 – decorating the sitting area

There are two low tables in the middle and one long low table at the wall. Place three SM_Couch props from the Props folder around the low tables. Here are the dimensions for the larger table:

BSPs needed:

  • Square top: X140 x Y140 x Z8
  • Leg x 2: X120 x Y12 x Z36

Here are the dimensions for the smaller table:

BSPs needed:

  • Leg x 2: X120 x Y12 x Z36

Here are the dimensions for a low long table at the wall:

BSPs needed:

  • Block: X100 x Y550 x Z100

Part 3 – creating the window seat area

Next to the open window, place a geometry box of size X120 x Y310 x Z100. This is to create a simplified seat by the window.

Part 4 – creating the Japanese seating area

The Japanese square table has a tabletop of size X200 x Y200 x Z8 and four short legs, each of size X20 x Y20 x Z36, placed close to the corners of the table.

To create leg space under the table, I used a subtractive brush (X140 x Y140 x Z40) and placed it on the ground under the table. I used the corners of this subtractive brush as a guide for where to place the table's short legs.

Part 5 – creating the kitchen cabinet area

This is a simplified block prototype of the kitchen cabinet area. The following are the dimensions for the L-shaped area:

BSPs needed (with materials):

  • Shorter L, cabinet under tabletop (M_Wood_Walnut): X140 x Y450 x Z100
  • Longer L, cabinet under tabletop (M_Wood_Walnut): X890 x Y140 x Z100
  • Shorter L, tabletop (M_Metal_Brushed_Nickel): X150 x Y450 x Z10
  • Longer L, tabletop (M_Metal_Brushed_Nickel): X900 x Y150 x Z10
  • Shorter L, hanging cabinet (M_Wood_Walnut): X100 x Y500 x Z100
  • Longer L, hanging cabinet (M_Wood_Walnut): X900 x Y100 x Z100

The following are the dimensions for the island area (hood):

BSPs needed (with materials):

  • Hood, wooden area (M_Wood_Walnut): X400 x Y75 x Z60
  • Hood, metallic area (M_Metal_Chrome): X500 x Y150 x Z30

The following are the dimensions for the island area (table):

BSPs needed (with materials):

  • Cabinet under the table (M_Wood_Walnut): X500 x Y150 x Z100
  • Tabletop (M_Metal_Chrome): X550 x Y180 x Z10
  • Sink, use a subtractive brush (M_Metal_Chrome): X100 x Y80 x Z40
  • Stovetop (M_Metal_Burnished_Steel): X140 x Y100 x Z5

Adding and configuring a Spot Light

Open Chapter4Level.umap and rename it Chapter4Level_SpotLight.umap. Go to Modes | Lights, drag and drop a Spot Light into the level.

The brightness, visible influence radius, and color of a Spot Light can be configured in the same way as for a Point Light, through the Intensity, Attenuation Radius, and Color values.
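As a side note on what Attenuation Radius means in practice: brightness falls off with distance and is forced to zero at the radius. The sketch below follows the commonly described UE4-style inverse-square falloff with a window function; the exact curve inside the engine may differ:

```python
def point_light_falloff(distance: float, attenuation_radius: float) -> float:
    """Inverse-square falloff, smoothly windowed to zero at the
    Attenuation Radius (a sketch of a UE4-style falloff curve)."""
    if distance >= attenuation_radius:
        return 0.0  # the Attenuation Radius hard-limits the light's reach
    # Window term: ~1 near the light, fading smoothly to 0 at the radius.
    window = (1.0 - (distance / attenuation_radius) ** 4) ** 2
    # Inverse-square term; the +1 avoids a singularity at distance 0.
    return window / (distance * distance + 1.0)

# Brightness drops with distance and is exactly zero at the radius.
assert point_light_falloff(50.0, 1000.0) > point_light_falloff(900.0, 1000.0) > 0.0
assert point_light_falloff(1000.0, 1000.0) == 0.0
```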

While a Point Light emits light in all directions, a Spot Light emits light from a single point outward in a conical shape with a direction. A Spot Light therefore has additional configurable properties: the inner cone angle and the outer cone angle.

Inner cone and outer cone angle

The outer and inner cone angles are specified in degrees. The following screenshot shows the light radius of the Spot Light when the outer cone angle = 20 (on the left) and when it = 15 (on the right). The inner cone angle did not produce a very visible result in this screenshot, so it is often left at 0. However, the inner cone angle can be used to keep a fully bright core in the center of the cone; this is more visible for lights with a wider spread and with certain IES Profiles.
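The two angles define a falloff band: full brightness inside the inner cone, fading to nothing at the outer cone. Here is a minimal sketch of a standard spotlight cone falloff (the engine's exact curve may differ):

```python
import math

def spot_cone_scale(angle_deg: float, inner_deg: float, outer_deg: float) -> float:
    """Brightness scale at angle_deg from the spotlight axis: 1.0 inside
    the inner cone, 0.0 outside the outer cone, and a smooth falloff in
    between. Assumes inner_deg < outer_deg."""
    cos_angle = math.cos(math.radians(angle_deg))
    cos_inner = math.cos(math.radians(inner_deg))
    cos_outer = math.cos(math.radians(outer_deg))
    t = (cos_angle - cos_outer) / (cos_inner - cos_outer)
    return min(1.0, max(0.0, t))

assert spot_cone_scale(0.0, 5.0, 20.0) == 1.0    # on-axis: full brightness
assert spot_cone_scale(25.0, 5.0, 20.0) == 0.0   # outside the outer cone
assert 0.0 < spot_cone_scale(15.0, 5.0, 20.0) < 1.0  # inside the falloff band
```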


Using the IES Profile

Open Chapter4Level_PointLight.umap and rename it Chapter4Level_IESProfile.umap.

An IES Light Profile is a file containing information that describes how a light will look. These profiles are created by light manufacturers and can be downloaded from their websites. They are often used in architectural models to render scenes with realistic lighting, and in the same way, Unreal Engine 4 can use the IES Profile information to render more realistic lights. IES Light Profiles can be applied to a Point Light or a Spot Light.
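Conceptually, an IES profile is a measured table of candela (intensity) values at sampled angles, and the renderer interpolates a value for the angle between the light's axis and each shaded point. The toy lookup below uses made-up candela values; the real IES (LM-63) file format carries much more structure than this:

```python
def sample_profile(angles, candelas, angle):
    """Linearly interpolate a candela value from an angle -> candela
    table, the core lookup a renderer performs when applying an IES
    profile. `angles` must be sorted in ascending order."""
    if angle <= angles[0]:
        return candelas[0]
    if angle >= angles[-1]:
        return candelas[-1]
    for i in range(len(angles) - 1):
        if angles[i] <= angle <= angles[i + 1]:
            t = (angle - angles[i]) / (angles[i + 1] - angles[i])
            return candelas[i] + t * (candelas[i + 1] - candelas[i])

# Hypothetical profile: bright on-axis, a dip, then a secondary lobe.
angles = [0.0, 30.0, 60.0, 90.0]
candelas = [1000.0, 200.0, 600.0, 0.0]
assert sample_profile(angles, candelas, 45.0) == 400.0  # halfway 200 -> 600
assert sample_profile(angles, candelas, 95.0) == 0.0    # clamped past 90
```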

Downloading IES Light Profiles

IES Light Profiles can be downloaded from light manufacturers' websites. Here are a few that you can use:

Importing IES Profiles into the Unreal Engine Editor

From Content Browser, click on Import, as shown in the following screenshot:


I prefer to keep my files in a certain order, so I created a new folder called IESProfile, with subfolders named after the manufacturers, to better categorize the imported light profiles.

Using IES Profiles

Continuing from the previous example, select the right-hand Spot Light in the scene. Go to the Details panel and scroll down to the Light Profile section of the light.

Then go to the IESProfile folder in the Content Browser, where we imported the light profiles. Drag and drop the profile you want onto the IES Texture slot of the Spot Light. Alternatively, select the profile in the Content Browser, go back to the light's Details panel, and click the arrow next to IES Texture to apply the profile to the Spot Light. In the following screenshot, I applied one of the profiles downloaded from the Panasonic website, labeled 144907.


I reconfigured the Spot Light with Intensity = 1000, Attenuation Radius = 1000, Outer Cone Angle = 40, and Inner Cone Angle = 0.

Next, I deleted the other Spot Light and replaced it with a Point Light with Intensity = 1000 and Attenuation Radius = 1000. I also set Rotation-Y = -90 and applied the same IES Profile to it. The following screenshot shows the difference when the same light profile is applied to a Spot Light and to a Point Light. Note that the spread of the light from the Spot Light is reduced. This reinforces the idea that a Spot Light produces a conical light with a direction, spreading outward from the point source, and that the outer cone angle clips this spread. The Point Light emits light equally in all directions, so it does not clip the profile, allowing the full design of the light profile to be displayed. Keep this in mind when deciding which type of light to apply an IES Light Profile to.


Adding and configuring a Directional Light

Open Chapter4Level.umap and rename it Chapter4Level_DirectionalLight.umap.

We have already added a Directional Light into our level in Chapter 2, Creating Your First Level, and it provides parallel beams of light into the level.

A Directional Light can also light the level as the sun. The screenshot on the left shows the Directional Light with the Atmosphere Sun Light checkbox unchecked; the one on the right shows it checked. With Atmosphere Sun Light checked, you can control the direction of the sunlight by adjusting the rotation of the Directional Light.
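Since a Directional Light casts parallel rays, its rotation alone defines the light vector for the whole level. The following sketch converts a pitch/yaw rotation into such a direction vector, using a common convention (negative pitch points the light downward); Unreal's own axis conventions may differ:

```python
import math

def light_direction(pitch_deg, yaw_deg):
    """Unit vector a Directional Light shines along, derived from a
    pitch/yaw rotation: pitch -90 shines straight down, pitch 0 grazes
    the horizon (a sunset-like angle)."""
    p, y = math.radians(pitch_deg), math.radians(yaw_deg)
    return (math.cos(p) * math.cos(y),
            math.cos(p) * math.sin(y),
            math.sin(p))

down = light_direction(-90.0, 0.0)
assert abs(down[2] + 1.0) < 1e-9   # noon-style sun: straight down
grazing = light_direction(0.0, 0.0)
assert abs(grazing[2]) < 1e-9      # horizon-level sun: no downward component
```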


The following screenshot shows how this looks when Rotation-Y = 0. This looks like an early sunset scene:


Example – adding and configuring a Sky light

Open Chapter4Level_DirectionalLight.umap and rename it Chapter4Level_Skylight.umap.

In the previous example, we added sun control to the Directional Light. Build and compile to see how the level now looks.

Now, let us add a Sky Light into the level by going to Modes | Lights and then clicking and dragging Sky Light into the level. When adding a Sky Light to the level, always remember to build and compile first in order to see the effect of the Sky Light.

What does a Sky Light do? A Sky Light models the color and light coming from the sky, and is used to light the external areas of the level. These areas look more realistic because the sky's color and light are reflected off their surfaces (instead of simple white or colored light).

The following screenshot shows the effect of a Sky Light: the left image shows the level without the Sky Light, and the right one shows it with the Sky Light. Note that the walls now have a tinge of the color of the sky.
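You can picture the tinting as a simple hemispherical ambient term: each surface picks up the sky's color in proportion to how much its normal faces the sky. A minimal sketch with illustrative values (a real Sky Light captures and convolves an actual sky image):

```python
def sky_ambient(normal_up, sky_color, intensity=1.0):
    """Ambient contribution of a hemispherical sky: a surface facing
    straight up (normal_up = 1) receives the full sky color, a vertical
    wall (normal_up = 0) half of it, a downward face none. normal_up is
    the vertical component of the unit surface normal."""
    weight = intensity * (normal_up + 1.0) / 2.0  # remap [-1, 1] -> [0, 1]
    return tuple(weight * c for c in sky_color)

sky = (0.4, 0.6, 1.0)  # illustrative bluish sky color
assert sky_ambient(1.0, sky) == (0.4, 0.6, 1.0)   # ceiling-facing surface: full tint
assert sky_ambient(0.0, sky) == (0.2, 0.3, 0.5)   # wall: half the sky tint
assert sky_ambient(-1.0, sky) == (0.0, 0.0, 0.0)  # floor-facing surface: untinted
```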


Static, stationary, or movable lights

After learning how to place and configure the different lights, we need to consider what kind of lights we need in the level. If you are new to lighting concepts, you might want to briefly go through the common light/shadow definitions later in this section to help your understanding.

The following screenshot shows the Details panel where you can change a light to be static, stationary, or movable.


Static and Stationary Lights sound pretty similar. What is the difference? When should you use a Static Light, and when should you use a Stationary Light?

Common light/shadow definitions

The common light/shadow definitions are as follows:

  • Direct Light: This is the light that is present in the scene directly due to a light source.
  • Indirect Light: This is the light in the scene that is not directly from a light source. It is reflected light bouncing around and it comes from all sides.
  • Light Map: This is a data structure that stores the light/brightness information about an object. This makes the rendering of the object much quicker because we already know its color/brightness information in advance and it is not necessary to compute this during runtime.
  • Shadow Map: This is a technique for creating dynamic shadows. It fundamentally consists of two rendering passes; more passes can be added to render nicer-looking shadows.
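The two passes mentioned above can be sketched on a toy 1D scene: pass one records, for each direction from the light, the depth of the nearest blocker; pass two compares every point's depth against that record. The scene values here are illustrative:

```python
def build_shadow_map(blockers, num_cells):
    """Pass 1: for each cell (direction from the light), record the
    depth of the nearest blocker, or infinity if the cell is clear.
    Each blocker is a (cell_index, depth) pair."""
    shadow_map = [float("inf")] * num_cells
    for cell, depth in blockers:
        shadow_map[cell] = min(shadow_map[cell], depth)
    return shadow_map

def in_shadow(shadow_map, cell, depth):
    """Pass 2: a point is shadowed if pass 1 recorded something nearer
    to the light in the same cell."""
    return depth > shadow_map[cell]

smap = build_shadow_map([(2, 5.0)], 8)  # one blocker at depth 5, cell 2
assert in_shadow(smap, 2, 9.0)          # behind the blocker: shadowed
assert not in_shadow(smap, 2, 3.0)      # in front of the blocker: lit
assert not in_shadow(smap, 4, 9.0)      # different direction: lit
```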
