A Primer to the Third Dimension
Welcome to Chapter One! Even if you have some previous knowledge of the systems in this chapter, solidifying the foundation of your knowledge will only help the advanced topics settle in more easily.
In this chapter we will cover topics such as:
- A good look into the parts that make up 3D game development
- Definitions of Unity terminology
- A tour through Unity’s interface
Coming around to 3D
We will go over the basics of working in 3D within this section. From coordinate systems to how a 3D model is rendered, we will stay at surface level to ensure that the foundations are firmly planted within you as you progress through this journey. By reading through this, you will gain a strong understanding of why Unity displays items the way it does.
3D coordinate systems are not all the same in each application! Unity uses a left-handed world coordinate system (Figure 1.1) with +Y facing upward. Looking at the image below, you can visualize the difference between left-handed and right-handed systems:
While we work within these coordinate systems, you will see positions of objects represented as an array of three values within parentheses, as follows:
(0, 100, 0)
This represents (X, Y, Z) respectively. This is a good habit to get into, as programming utilizes very similar syntax when writing out positions within scripts. An object's position, together with its rotation and scale, is commonly referred to as its "Transform".
Local Space versus World Space
You now understand the world coordinates X, Y, and Z, and that each of those coordinates starts at 0, represented as (0, 0, 0). In the image below (Figure 1.2), the point where the colored lines meet is that (0, 0, 0) in the world. The cube has its own transform, which encompasses the object's position, rotation, and scale:
The cube in the image (Figure 1.2) is at (1, 1.5, 2). This is called world space, as the item's Transform is represented through the world's coordinates starting from (0, 0, 0):
Now that we know the cube's transform is in relation to the world (0, 0, 0), we will now go over the parent-child relation, which describes local space. In the image above (Figure 1.3), the sphere is a child of the cube. The sphere's local position is (0, 1, 0) in relation to the cube. Interestingly, if you now move the cube, the sphere will follow, as it is only offset from the cube and its transform will remain (0, 1, 0) in relation to the cube.
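The parent-child math above can be sketched in a few lines. This is a plain Python illustration of the idea (ignoring rotation and scale, which also factor in inside Unity), not Unity code:

```python
# World vs. local space: a child's world position is its parent's
# world position plus the child's local offset (rotation and scale
# are ignored in this simplified sketch).
def local_to_world(parent_world, local_offset):
    px, py, pz = parent_world
    lx, ly, lz = local_offset
    return (px + lx, py + ly, pz + lz)

cube_world = (1, 1.5, 2)   # the cube from Figure 1.2
sphere_local = (0, 1, 0)   # the sphere's offset from the cube

print(local_to_world(cube_world, sphere_local))  # (1, 2.5, 2)

# Moving the cube moves the sphere with it; the local offset stays (0, 1, 0).
cube_world = (5, 0, 2)
print(local_to_world(cube_world, sphere_local))  # (5, 1, 2)
```

Notice that the sphere's local position never changed; only the parent's world position did.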
Vectors
Traditionally, a vector is a quantity that has more than one element. In a 3D setting, a Vector3 will look very similar to what we've worked with so far: (0, 0, 0) is a Vector3! Vectors are used in a great many solutions to game elements and logic. Developers will often normalize vectors so that the magnitude always equals 1. This makes the data very easy to work with, as 0 is the start, 0.5 is halfway, and 1 is the end of the vector.
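Normalization is simple enough to sketch by hand. Here is a plain Python illustration of the idea (in Unity, `Vector3.normalized` does this for you):

```python
import math

# Normalizing a vector: divide each component by the magnitude
# (length) so the result always has a magnitude of 1.
def normalize(v):
    x, y, z = v
    mag = math.sqrt(x * x + y * y + z * z)
    if mag == 0:
        return (0.0, 0.0, 0.0)   # the zero vector has no direction
    return (x / mag, y / mag, z / mag)

direction = normalize((0, 3, 4))   # magnitude of (0, 3, 4) is 5
print(direction)                   # (0.0, 0.6, 0.8)
```

The normalized vector keeps its direction but always has length 1, which is why it is so convenient for movement and interpolation logic.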
Cameras
Cameras are incredibly useful objects! They humbly show us their perspective, which allows our players to experience what we are trying to convey to them. As you may have guessed, a camera also has a transform, just like all game objects in the hierarchy. Cameras also have several parameters that can be changed to obtain different visual effects.
Different game elements and genres use cameras in different ways. Resident Evil used static cameras to give a sense of tension, not knowing what's outside the window or around the corner, while Tomb Raider pulled the camera in close as Lara moves through caverns, giving a sense of intimacy and emotional understanding as her face looks uncomfortable in tight spaces.
Cameras are essential to the experience you will be creating for your users. Take time to play with them and learn compositional concepts to maximize the emotional impact of the player's experience.
Faces, Edges, Vertices, and Meshes
3D objects are made up of multiple parts as seen in Figure 1.4. Vertices, represented by the green circles, are points in space relative to the world (0, 0, 0). Each object has a list of these vertices and their corresponding connections. Two vertices connected make an edge, represented with a red line. A face is made when either 3 or 4 edges connect to make a triangle or a quad. Sometimes quads are called a plane when not connected to any other faces. When all of these parts are together, you have a mesh:
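The vertex and triangle bookkeeping described above can be sketched in plain Python. Engines, Unity included, store meshes in essentially this shape: a vertex list plus a list of triangles holding indices into it:

```python
# A minimal mesh: a quad (plane) stored as a vertex list plus a
# triangle list, where each triangle is three indices into the
# vertex list.
vertices = [
    (0, 0, 0),  # 0
    (1, 0, 0),  # 1
    (1, 1, 0),  # 2
    (0, 1, 0),  # 3
]

# Two triangles share the diagonal edge (vertices 0 and 2) to form the quad.
triangles = [
    (0, 1, 2),
    (0, 2, 3),
]

# Derive the edge set: each triangle contributes 3 edges, and the
# shared diagonal is counted only once.
edges = set()
for a, b, c in triangles:
    for edge in ((a, b), (b, c), (c, a)):
        edges.add(tuple(sorted(edge)))

print(len(vertices), len(edges), len(triangles))  # 4 5 2
```

Four vertices, five edges, and two triangular faces: exactly the parts labeled in Figure 1.4, put together into one mesh.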
Materials, Textures, and Shaders
Now that you know what a mesh is composed of in all Digital Content Creation (DCC) tools, let's look into how Unity displays that mesh to you. At the very base level is a shader. Shaders can be thought of as small programs, with their own language, that help update the graphics pipeline so Unity can render the objects in your scene to your screen. You can think of the shader as a large template from which materials are created.
The next level up is materials. A material is a set of attributes, defined by the shader, that can be manipulated to change what the object looks like. Each render pipeline has its own shaders: Built-in, the Universal Render Pipeline (URP), and the High Definition Render Pipeline (HDRP). For this book, we are using the middle and most widely used of the three, URP.
Figure 1.5 shows an example of a material using URP's Lit shader. This allows us to manipulate surface options, inputs for that surface, and some advanced options. For now, let's just talk about the Base Map, the first item in the Surface Inputs section. The term "Base Map" is used here as a combination of "Diffuse/Albedo" and "Tint" together. Diffuse or Albedo is used to define the base color (Red) that will be applied to the surface, in this case, white. You can place a texture into this map by either dragging a texture onto the square (Green) to the left of the Base Map or clicking on the circle (Blue) between the box and the name. After that, you can tint the surface with the color if any adjustments are needed:
Figure 1.6 shows a simple example of what a cube would look like with a tint, texture, and the same texture with the tint changed. As we progress through the book, we will unlock more and more functions of materials, shaders and textures:
Textures can provide incredible detail for your 3D model. When creating a texture, the resolution is an important consideration. The first part of resolution that needs to be understood is "power of 2" sizes. The powers of 2 are as follows:
2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, etc
These numbers represent the pixel size for both width and height. There are cases where you may need to mix the sizes, which is fine as long as each dimension stays on the power-of-2 scale. Examples are:
256×256 1024×1024 256×1024 (This is less common to see, but is valid)
The second consideration of resolution is the size itself. The easiest way to work through this consideration is to think about how large the 3D object will be on your screen. If you have a 1920x1080 screen resolution, that is 1920 pixels wide by 1080 pixels tall. If the object in question is only going to take up 10% of the screen and will rarely be closer, you may consider a 256x256 texture. By contrast, if you are making an emotional, character-driven game where facial expressions matter, you may want a 4096x4096, or 4K, texture on just the face during those cut scenes.
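Checking whether a size is a power of 2 is a one-liner. This standalone Python sketch (not Unity code) also works through the 10%-of-screen rule of thumb from above:

```python
# Powers of 2 have exactly one bit set in binary, so
# n & (n - 1) clears that bit and leaves 0.
def is_power_of_two(n):
    return n > 0 and n & (n - 1) == 0

print(is_power_of_two(256))   # True
print(is_power_of_two(1000))  # False -- not on the power-of-2 scale

# Rule of thumb from above: an object filling ~10% of a 1920-pixel-wide
# screen covers roughly 0.10 * 1920 = 192 pixels across, so the next
# power of 2 up (256) is a reasonable texture width.
pixels_covered = 0.10 * 1920
print(pixels_covered)         # 192.0
```

Many engines and GPUs handle non-power-of-2 textures less efficiently (mipmapping and compression in particular), which is why sticking to this scale is the safe habit.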
Rigid Body Physics
Unity assumes that not every game object needs to be evaluated for physics every frame. Unity uses Nvidia's PhysX engine for its physics calculations. To get any calculated physics responses, the game object needs a Rigid Body component added.
By adding the Rigid Body component to the game object, you add some properties to it, seen in the inspector in Figure 1.7 below:
One Unity unit of mass is equal to 1 kg of mass. This affects the physics decisions upon collisions. Drag reduces the object's velocity over time, like friction. Angular Drag is similar, but constrained to rotational speed only. Use Gravity turns gravity on or off; it is standard Earth gravity, which is why the mass units make sense!
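To build intuition for what gravity and drag each do per physics step, here is a toy Python sketch. The drag formula is a simplified assumption for illustration, not PhysX's actual integrator:

```python
# Toy physics step: gravity accelerates the body downward, and drag
# bleeds velocity off toward zero each step. A simplified model only.
GRAVITY = -9.81    # standard Earth gravity, m/s^2 (+Y is up)
TIMESTEP = 0.02    # a typical fixed physics timestep, in seconds

def step(velocity_y, drag, use_gravity=True):
    if use_gravity:
        velocity_y += GRAVITY * TIMESTEP
    velocity_y *= max(0.0, 1.0 - drag * TIMESTEP)  # drag scales speed down
    return velocity_y

v = 0.0
for _ in range(50):              # simulate one second of free fall
    v = step(v, drag=0.0)
print(round(v, 2))               # -9.81 with no drag

v_dragged = 0.0
for _ in range(50):
    v_dragged = step(v_dragged, drag=1.0)
print(v_dragged > v)             # True: drag left it falling slower
```

With drag at 0 the body simply accelerates, while any positive drag pushes it toward a terminal velocity, which matches the behavior you will observe when tweaking these values on a Rigid Body.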
A thorough explanation of rigid body will be worked through in future chapters.
A gameobject with a rigid body but no collider will not fully utilize the physics, and with gravity turned on it will simply fall through the world. There are quite a few colliders to play with to best suit your game's needs:
You are also welcome to add multiple colliders, basic options seen in Figure 1.8 above, to an object to best suit the shape of the gameobject. It is very common to see colliders on empty gameobjects that are children of the primary object to allow easy transformation of the colliders. We will see this in practice later in this book.
Essential Unity Concepts
In the first section, we already went over some Unity concepts. We will go over them in a bit more detail here, as you've now read where several of them might be used. Unity takes a very modular approach to the items housed within its game development environment.
Assets
Unity treats every file as an asset. This includes 3D models, texture files, sprites, particle systems, and so on. Your project will have an Assets folder as the base folder to house all of your in-project items: textures, 3D models, particle systems, materials, shaders, animations, sprites, and the list goes on. As we add more to our project, the Assets folder should be organized and ready to grow. It is strongly recommended to keep your folder structure organized so that you or your team aren't wasting time trying to find that one texture that was left in a random folder by accident.
Scenes
A scene houses all of the gameplay logic, game objects, cinematics, and everything else which your game will reference to render or interact with.
Scenes are also used to cut gameplay into sections to bring down load times. Imagine trying to load every single asset of a modern game every time you started it up. It would take far too much precious gaming time.
GameObjects
Most assets that are referenced in a scene will be a GameObject (GO). There are some instances in which an asset can only be a component of a GO. The one common factor you will see with all gameobjects is that they have the Transform component. Remembering back to the 3D primer section above, we know that this is the world or local position, rotation, and scale of that game object. GOs can have a long list of components attached to provide functionality, or data to be used in scripts, so that mechanics can grow.
Components
Game objects have the ability to house multiple pieces of functionality attached as "components". Each component has its own unique properties. The entire list of components you can add is fairly extensive, as you can see in Figure 1.9 below:
Each of these sections has smaller subsections. We will go over quite a few of them throughout this book. When you add an asset to the scene hierarchy which may require components, Unity will add them by default. An example of this default action happening is when you drag a 3D mesh into the hierarchy, the gameobject will have a mesh renderer component attached to the object automatically.
Scripts
One component that is often used on game objects is the script. This is where all of the logic and mechanics will be built onto your gameobjects. Whether you want to change the color, jump, change the time of day, collect an item, and so on, you will need to add that logic into a script on the object.
In Unity, the primary language is C# (pronounced "C-Sharp"). This is a strongly typed programming language, meaning that a type must be assigned to any variable that is being manipulated.
We will be using scripts in a multitude of ways and I know you are excited to get right into coding, but first we need to get into other Unity standard processes.
Prefabs
Utilizing the modular, object-oriented nature of Unity, we can put together a grouping of items with default values set on their components, which can be instanced in the scene at any time while housing their own values. This grouping is called a prefab.
To make a prefab, you drag a game object from the hierarchy in the scene to the asset browser. This creates a new prefab and turns that gameobject into an instance of the newly created prefab. The gameobject will also turn blue by default in the hierarchy, as seen in Figure 1.10:
Packages
To take the modular components to a whole new level, Unity can take a package, with all of its dependencies, and export it out so you can bring it into other projects! Even better, you can sell your packages to other game developers on the Unity Asset Store!
Now that you have a solid foundation in 3D and Unity terms, let's open Unity up and go over the interface itself. The next section will look at the most common interface pieces of Unity.
The Unity Interface
The interface for Unity is separated into several major components. We will go over the scene (Figure 1.11, Red) and the items within its interface, as well as how to manipulate their properties in the Inspector (Figure 1.11, Orange). Then we will go into items which aren't active in the scene but are available to add in the project window (Figure 1.11, Yellow). Finally, we will go over the game view (Figure 1.11, Green) and the package manager (separate from the image below):
Scene View and Hierarchy
The scene view and hierarchy work in tandem. The hierarchy lists the game objects that the scene will render when the game plays. The scene view allows you to manipulate those game objects and their values in real time:
In Figure 1.12 above, there is a lot of information that can be seen right away. On the left, in the hierarchy, you can see that there are objects in the scene. These objects all have a transform which places them in the world. If you double-click an item, or click an item and then press 'F' with your mouse over the scene view, you will focus on that game object, centering it in the scene's viewport.
When you have an item selected, you will see a tool showing colored arrows at the object's pivot point, usually the center of the object. The tool allows you to position the gameobject in space. You can also position the object on a plane by selecting the little square between two axes.
In the upper right of Figure 1.12, you will see a camera gizmo. This little gizmo allows you to easily orient the viewport camera to the front, sides, top, or bottom, or to change between isometric and perspective cameras, with a single click.
Inspector
Now that you have seen the item in the scene, selected by left-clicking in the scene or the hierarchy, you may want to change some properties or add components to that gameobject. This is where the inspector comes into play.
To manipulate a game object's values, select the game object in the scene or hierarchy, and the inspector will update to show you the options available to change for that game object:
The inspector window in Figure 1.13 shows a good amount of information about the chosen item. From the top, the name is Cube, and the blue cube to the left denotes a prefab data type. You are able to make changes to the prefab itself by pressing the Open button just below the name. This will create a new scene view which shows only the prefab. When you make changes to the prefab, the change applies to all instances of that prefab in any scene that references it.
The transform component shows the position, rotation, and scale of the prefab in the scene.
The mesh filter holds the vertices, edges, and faces that make up the polygon mesh.
Below that is the mesh renderer. This component allows the rendering of the mesh filter's mesh. We can set the material here, along with other options that pertain to this item's specific lighting and probes, which we will cover in the lighting section of this book.
Below this are a collider and a rigid body. These work in tandem, helping this object react to physics in real time according to the settings available on the components.
Project Window
We've talked a lot about items in the scene and their properties, but where are they housed outside of the scene if they're only referenced items? The project window answers this question.
Here you will find assets that will be instanced in the scene or used as a component to fully realize the game you are building:
This window is the physical representation of the files the game objects are referenced from. All of the items in the Assets folder seen in Figure 1.14 are physically on your hard drive. Unity makes meta files that house all of the properties of the items.
The interesting thing about having the raw files in the project window is that you can make changes to them, and when you focus on the Unity project (click on the Unity app), it will readjust the meta files and reload the items in the scene. This lets you iterate on scripts and art faster!
We've looked at the gameobjects in the scene, placed them by manipulating their transforms, and know where the game objects were referenced from. Now we should look at the game view to see how the game itself looks.
Game View
The game view is similar to the scene view; however, it follows the rules that were built in the scene view. The game will automatically look through the Main Camera in the scene, unless otherwise defined:
You can see that this looks very similar to the scene window, but the top has different options. On the top left we can see the Display dropdown. This allows us to change cameras if we have multiple in the scene. The aspect ratio is to the right of that, which is helpful to look at so you can target certain devices. Scale, to the right of the aspect ratio, is helpful for quickly making the window larger or zooming in for debugging.
Maximize On Play will maximize the window on play to take advantage of the full screen. Mute Audio mutes the game's audio.
Stats gives a small overview of the statistics in the game view. Later on, during optimization, we will go through profiling, a much more in-depth way to look at what may be causing issues within the gameplay, including memory usage and other optimization opportunities:
Continuing on to the right is Gizmos. These are the items showing in the game view in Figure 1.16 which you might not want to see. In this menu, you are able to turn them off or on depending on your needs.
Package Manager
Your Unity ID will house the packages you've bought from the Unity Asset Store, as well as the packages you may have on your hard drive or GitHub! You can use the package manager to import these packages into your project. You can get to the package manager under Window → Package Manager, as seen in Figure 1.17 below:
After you open the package manager, you will initially be shown what packages are in the project. You can change the top left dropdown to see what is standard in Unity or what packages you have bought in the asset store:
By choosing Unity Registry, you will see a list of the Unity-tested packages that come free and are part of the Unity platform, available if you need them. When you click on a package on the left, you can read up on it in the documentation provided by the link on the right-hand side, labeled View documentation.
If you select In Project, it will show you what packages are already installed in the currently loaded project. This is helpful when you want to uninstall a package that may not be needed.
My Assets lists the assets that you, or the Unity ID associated with the project you are on, have previously bought.
Built-in packages come standard with any project. You may need to enable or disable a built-in package depending on what your needs will be. Explore them and disable what is not needed! A tidy project now leads to less optimization later!
Summary
Together we went over several key areas to begin your journey into game development. In this chapter, we laid the foundation for what is to come by going over some fundamental features of three primary topics:
We went over the coordinate system, which led into world and local space. This allowed us to talk about vectors, which are used to denote position, rotation, scale, and many other data points in development. We talked about the role of cameras and how they allow you to be the storyteller through their lens. We went through the facets of 3D meshes and how polygons are made up of vertices, edges, and faces. These meshes can then be colored by materials, which are driven by shaders and informed by textures. We then ended on the basics of rigid body physics and the collision detection which accompanies it. This was enough of the basics to allow us to get into Unity concepts.
To get into how Unity works, we needed to talk about the object-oriented nature of the application itself and understand common terminology. We started with the most fundamental of items, the asset, which encompasses all things in Unity. One primary asset is the scene, which houses and references all the gameobjects the game needs to run. Speaking of gameobjects, we dove into these heavily, defining what could be a gameobject and why it matters! Following this, we explained how gameobjects can house components with multiple properties, one of which is the script, which you can write in C# to create your own properties and logic for mechanics. After this, we brought up the basics of prefabs, which are referenced containers with predefined properties on each gameobject. Finally, there is the package, a Unity file which houses multiple items that you can place on the Asset Store as well as import into other projects.
To end this chapter, we went through a virtual tour of the Unity interface. We looked at the scene and hierarchy views and how to view and understand the parent-child relationships of the game objects. Then we went into the inspector to manipulate the gameobjects that needed changes. After this, we went into the project window to look at all the assets we have in the project. Then we covered the game view to show where the game logic takes place and how it differs from the scene window. Finally, we talked over the package manager, which holds the standard Unity packages as well as the ones you've bought from the Unity Asset Store.