In this chapter, we will start developing a 3D e-learning game. To illustrate the concept of e-learning in games, our game will teach players American state flags and trivia over the course of three levels. After defining e-learning games and how they relate to "traditional" video games, we will implement the core systems that control the main character of the game, define its abilities, and drive the camera that follows the player in our 3D world.
In this chapter, we will cover the following topics:
Understanding e-learning
Introducing our game—Geography Quest
Comprehending the three Cs
Creating our first scene
Developing the character system
Building character representation
Developing code for the camera
Developing code for the player controls
Broadly speaking, e-learning is the use of digital technology to facilitate learning. This could include Internet servers and web browsers to deliver course material online in an asynchronous way. It could include the use of embedded videos in an application that a user can review at his or her leisure in bite-sized chunks. For our purposes in this book, we will focus on the gamification of learning and the use of multimedia and game software to deliver our specific learning outcomes.
The reasons that gamification works in e-learning are varied and are supported by both traditional pedagogy and neurobiology. We list, in no particular order, some of the most compelling reasons as follows:
Immersion: Games that are immersive to the player naturally activate more meaningful learning pathways in the brain. This is because the brain stores and consolidates different types of information in different regions of the brain, based on their relevance. By tying in a strong cinematic experience to the delivery of learning outcomes, you can recruit these systems in the user's brain to learn and retain the material you want to deliver.
But how do we make our games immersive? From the body of knowledge in movie, TV, and consumer game development, there are many design features we could borrow. However, to pick two important ones, we know that good character development and camera work are large contributors to the immersion level of a story.
Character development occurs when the view or opinion of the main character changes in the eye of the player. This happens naturally in a story when the main character participates in a journey that changes or evolves his or her world view, stature, or status. This evolution almost always happens as a result of a problem that occurs in the story. We will borrow from this principle as we plan the obstacles for our player to overcome.
Cinematic camera work helps encourage immersion because the more interesting and dramatic the player's view of the world, the more actively the player engages with the story, and hence with the learning outcomes by association.
Along with cinematic camera work, we must be sure to balance the playability of the game. Ironically, it is often the case that the more playable the game camera is, the less cinematic it is!
Spatial learning: It is worth giving spatial learning a special mention despite its close association to immersion as a modality of learning. It is known that a specific area of the brain stores the mental map of where things are in your surroundings. Games that have a spatial navigation component to them naturally will recruit this part of the brain to facilitate learning.
Active learning: Instruction is passive and learning is active! Playing games that require levels of thought beyond passive observation are naturally more conducive to learning and retention. By using games that have challenges and puzzles, we force the player to participate in higher order thinking while manipulating the subject matter of the learning outcomes.
Reinforcement and conditioning: Psychologists and learning professionals know that, for a given scenario, positive reinforcement of good behavior increases the likelihood of eliciting the same good behavior the next time that scenario presents itself. Traditional game designers know this lesson very well: they reward the player both quantitatively (with points, items, power-ups, and in-game collectibles) and qualitatively, by inducing visceral reactions that feel good. These include on-screen particle effects, visually appealing cut scenes, explosions, sound effects, on-screen animation, and so on. Slot machine developers know this lesson well, as they play sounds and animations that elicit a feel-good response and reward payouts that condition the player to engage in the positive behavior of playing the game.
Emotional attachment: Games that build an emotional attachment in their players are more likely to garner active play and attention from their users. This results in higher retention of the learning objectives. But how do you engineer attachment into a design? One way is the use of avatars. It turns out that, as the player controls a character in the game, guides his or her actions, customizes his or her appearance, and otherwise invests time and energy in it, he or she may build an attachment to the avatar as it can become an extension of the player's self.
Cognitive flow: Have you ever participated in a task and lost track of time? Psychologists call this the state of flow, and it is known that in this heightened state of engagement, the brain is working at its best and learning potential is increased. We try and encourage the player to enter a state of flow in e-learning games by providing an immersive experience as well by asking the player to complete tasks that are challenging, interesting, and in scenarios with just enough emotional pressure or excitation to keep it interesting.
Safe practice environment: Video games and real-time simulations are good training vehicles because they are inherently safe. The player can practice a skill inside a game without any risk of bodily harm by repeating it in a virtual environment; this enables the player to experience freedom from physical repercussions and encourages exploration and active learning.
An astute reader may ask "What is the difference between e-learning games and consumer games?". This is a good question, which we would answer with "the learning outcomes themselves". A consumer game aims to teach the player how to play the game, how to master the mechanics, how to navigate the levels, and so on. An e-learning game uses the same design principles as consumer games, with the primary goal of achieving retention of the learning outcomes.
In our e-learning game, Geography Quest, we will follow the adventures of the player as a park ranger: cleaning up the park to find the missing flags, participating in a trivia challenge/race, and ultimately patrolling the park to help visitors with their questions. In each chapter, we not only build and extend our technology inside Unity3D to meet the design needs of this game, but we also apply the design considerations discussed earlier to develop compelling and effective e-learning content.
Our game will implement the following design features of an effective e-learning game:
Immersion
Spatial learning
Active learning
Reinforcement and conditioning
Emotional attachment
Cognitive flow
A safe practice environment
To design the software for the user experience in a 3D game, we can break the problem down into three systems: the camera, the character, and the controls. In this chapter, we will build the foundation of our e-learning game by developing the framework for these components:
Camera: This system is responsible for the virtual cinematography in the game. It ensures that the avatar is always on screen, that the relevant aspects of the 3D world are shown, and that this experience is achieved in a dynamic, interesting, and responsive way.
Character: This is the avatar itself. It is a 3D model of the personification of the player that is under direct user control. The character must represent the hero as well as possess the functional attributes necessary for the learning objectives.
Controls: This system refers to the control layer through which the user interacts with the game. The genre and context of the game can and should affect how this system behaves, and it is shaped by the hardware available to the user. There are potentially many different input devices we could program for; while we may encounter gamepads, touch pads, touchscreens, and motion tracking cameras on potential target PCs, we will focus our attention on the traditional keyboard and mouse for input in our example.
These three systems are tightly coupled and form the trinity of the core 3D gameplay experience. Throughout a normal video game development cycle, we as game programmers may find ourselves making multiple iterations on these three systems until they "feel right". This is normal and is to be expected; however, the impact of changes in one system on the other two should not be underestimated.
With these requirements in mind, let's build the framework:
Create a plane, positioned at (0, 0, 0), and name it ground.
Under Edit | Render Settings, go to the Skybox Material panel of the Inspector pane, and add one of the skybox materials from the skybox package.
The GameObject drop-down menu is where you can select different types of basic Unity3D objects to populate your world. Add a directional light to the scene from GameObject | Create Other, and place it at (0, 10, 0) for readability. Set its orientation to something like (50, 330, 0) to achieve a neat shading effect on the player capsule. In our world, the y axis will mean "in the air", and the x and z axes will correspond to the horizontal plane of the world.
Congratulations! You have created the testbed for this chapter. Now let's add the character system.
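If you prefer to drive the setup from code, the same testbed can be assembled at runtime. The following sketch is optional and not part of the chapter's editor workflow; the class name TestbedSetup is hypothetical, while the object names, positions, and angles mirror the editor steps above:

```csharp
using UnityEngine;

// Optional sketch: assemble the chapter's testbed scene from code.
public class TestbedSetup : MonoBehaviour
{
    void Start()
    {
        // Ground plane at the origin, named as in the editor steps.
        GameObject ground = GameObject.CreatePrimitive(PrimitiveType.Plane);
        ground.name = "ground";
        ground.transform.position = Vector3.zero;

        // Directional light, placed and oriented as in the editor steps.
        GameObject lightObj = new GameObject("Directional Light");
        Light light = lightObj.AddComponent<Light>();
        light.type = LightType.Directional;
        lightObj.transform.position = new Vector3(0.0f, 10.0f, 0.0f);
        lightObj.transform.eulerAngles = new Vector3(50.0f, 330.0f, 0.0f);
    }
}
```

Attaching this script to any object in an empty scene reproduces the ground and lighting; the skybox material must still be assigned under Edit | Render Settings.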

The character system is responsible for making the avatar of the game look and respond appropriately. It is crucial to get this right in an e-learning game because studies show that player attachment and engagement correlate to how well the player relates or personalizes with the hero. In later chapters, we will learn about how to do this with animation and player customization.
For now, our character system needs to allow coarse interactions with the environment (ground plane). To do this, we shall now create the following avatar capsule:

Let's build the avatar by performing the following steps:
From GameObject | Create Other, select Capsule, and place it at (0, 2.5, 0), as shown in the following screenshot. Name the capsule Player in the Inspector pane.
Create a cube in a similar fashion, and parent it to the capsule by dragging it onto the hero. Scale it to (0.5, 0.5, 2), and set its local position to (0, 1.5, 0.5). Name the cube object Hat.
Congratulations! You now have a representation of our hero in the game. The Hat object will serve as a visual cue for us in this chapter as we refine the controls and camera code.
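The same hero can also be built programmatically, which illustrates how parenting and local transforms work in code. This is an optional sketch; the class name HeroSetup is hypothetical, while the names, positions, and scales mirror the editor steps above:

```csharp
using UnityEngine;

// Optional sketch: build the hero capsule and its hat from code.
public class HeroSetup : MonoBehaviour
{
    void Start()
    {
        // The player capsule, placed above the ground plane.
        GameObject player = GameObject.CreatePrimitive(PrimitiveType.Capsule);
        player.name = "Player";
        player.transform.position = new Vector3(0.0f, 2.5f, 0.0f);

        // The hat cube, parented to the capsule; scale and position
        // are local, i.e., relative to the parent capsule.
        GameObject hat = GameObject.CreatePrimitive(PrimitiveType.Cube);
        hat.name = "Hat";
        hat.transform.parent = player.transform;
        hat.transform.localScale = new Vector3(0.5f, 0.5f, 2.0f);
        hat.transform.localPosition = new Vector3(0.0f, 1.5f, 0.5f);
    }
}
```

Note that setting localScale and localPosition after parenting keeps the hat's values expressed in the capsule's coordinate frame, which is exactly what dragging one object onto another in the Hierarchy does.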
In our 3D game, the main camera mode will follow a third-person algorithm. This means that it will follow the player from behind, trying to keep the player on screen and centered in view at all times. Before we start developing the camera, we need to think about the basic requirements of our game in order to be able to program the camera to achieve good cinematographic results. This list of requirements will grow over time; however, by considering the requirements early on, we build an extensible system throughout the course of this book by applying good system design in our software. In no particular order, we list the requirements of a good camera system as follows:
It needs to be able to track the hero at a pleasing distance and speed and in an organic way
It needs to be able to transition between tracking different objects in an appealing way
It needs to be able to frame objects in the field of view, in a cinematic and pleasing way
Starting with an initial camera and motion system based on the Unity3D examples, we will extend these over time, not only because it is instructive but also with the aim of making them our own. With these requirements in mind, let's build the camera code. First, let's consider some pseudocode for the algorithm.
The GameCam script is the class that we will attach to our MainCamera object; it will be responsible for the motion of our in-game camera and for tracking the player on screen. The following five steps describe our GameCam camera algorithm:
For every frame that our camera updates, if we have a valid trackObj GameObject reference, do the following:
Cache the facing angle and the height of the object we are tracking.
Cache the current facing angle and height of the camera (the GameObject that this script is attached to).
Linearly interpolate from the current facing angle to the desired facing angle according to a dampening factor.
Linearly interpolate from the current height to the desired height according to another dampening factor.
Place the camera behind the tracked object, at the interpolated angle and height, facing it so that the object of interest can be seen in view, as shown in the following screenshot:
Now let's implement this algorithm in C# code by performing the following steps:
Right-click on the Chapter1 assets folder and select Create New C# Script. Name it GameCam and add it to the Main Camera object.
Create a public GameObject reference called trackObj. This will point to the GameObject that this camera is tracking at any given time, as shown in the following code:
public GameObject trackObj;
Create the following four public float variables that will allow adjustment of the camera behavior in the object inspector. We will leave these uninitialized and then find working default values with the Inspector, as shown in the following code:
public float height;
public float desiredDistance;
public float heightDamp;
public float rotDamp;
Recall that the Update() loop of any GameObject gets called repeatedly while the game simulation is running, which makes this method a great candidate for our main camera logic. Hence, inside the Update() loop of this script, we will call a custom method, UpdateRotAndTrans(), which will contain the actual camera logic. This method will update the rotation (facing angle) and translation (position) of the camera in the world; this is how GameCam will accomplish the stated goal of moving in the world and tracking the player:
void Update() {
    UpdateRotAndTrans();
}
Above the Update() loop, let's implement the UpdateRotAndTrans() method as follows:
void UpdateRotAndTrans() {
    // to be filled in
}
Inside this method, step 1 of our algorithm is accomplished with a sanity check on trackObj. By checking for null and reporting an error to the debug log, we can make catching bugs much easier by looking at the console. This is shown in the following code:
if (trackObj) {
    // camera logic goes here
} else {
    Debug.Log("GameCamera: Error, trackObj invalid");
}
Step 2 of our algorithm is to store the desired rotation and height in two local float variables. In the case of the height, we offset the height of trackObj by the exposed variable height so that we can adjust for the specifics of the object (sometimes an object may have its transform origin at the ground plane, which would not look pleasing, so we need numbers to tweak), as shown in the following code:
float DesiredRotationAngle = trackObj.transform.eulerAngles.y;
float DesiredHeight = trackObj.transform.position.y + height;
We also need to store the camera's current rotation and height for processing in our algorithm. Note the simplified but similar code compared to the code in the previous step. Remember that the this pointer is implied if we don't explicitly place it in front of a component (such as transform):
float RotAngle = transform.eulerAngles.y;
float Height = transform.position.y;
Step 3 of our algorithm is where we do the actual LERP (linear interpolation) of the current and desired values for the y-axis rotation and height. Remember that linearly interpolating between two values with a parameter t returns the first value when t is 0, the second when t is 1, and a proportional blend in between.
Remember that Euler angles are the rotations about the cardinal axes, and the Euler y angle indicates the horizontal facing of the object. Since these values change over time, a smaller dampening value smooths the current rotation and height more, while a larger value tightens the interpolation.
Also note that we multiply heightDamp by Time.deltaTime in order to make the height interpolation frame rate independent, and instead dependent on elapsed time, as follows:
RotAngle = Mathf.LerpAngle(RotAngle, DesiredRotationAngle, rotDamp);
Height = Mathf.Lerp(Height, DesiredHeight, heightDamp * Time.deltaTime);
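As an aside, repeatedly applying Lerp with a damp * Time.deltaTime factor is only approximately frame rate independent, because the blend compounds differently at different frame rates. A common refinement, offered here as an optional variant rather than the chapter's approach, uses an exponential decay term instead:

```csharp
// Optional variant: exponential smoothing stays frame rate independent
// even when Update() runs at widely varying rates. The decay term
// 1 - exp(-damp * dt) compounds consistently regardless of step size.
float t = 1.0f - Mathf.Exp(-heightDamp * Time.deltaTime);
Height = Mathf.Lerp(Height, DesiredHeight, t);
```

For the frame rates typical of this project, the simpler form in the text works fine; the variant only matters if the frame rate fluctuates heavily.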
The fourth and last step in our GameCam algorithm is to compute the position of the camera. Now that we have an interpolated rotation and height, we will place the camera behind trackObj at the interpolated height and angle. To do this, we take the facing vector of trackObj and scale it by the negative value of desiredDistance to find a vector pointing in the opposite direction to trackObj; doing this requires us to convert eulerAngles to a Quaternion to simplify the math (we can do it with one API function!). Adding this to the trackObj position and setting the height gives the desired offset behind the object, as shown in the following code:
Quaternion CurrentRotation = Quaternion.Euler(0.0f, RotAngle, 0.0f);
Vector3 pos = trackObj.transform.position;
pos -= CurrentRotation * Vector3.forward * desiredDistance;
pos.y = Height;
transform.position = pos;
As a final step, we point the camera at the center of trackObj with the transform.LookAt() method so that the tracked object is always precisely in the middle of the field of view. Never losing the object you are tracking in a 3D game is critical!
transform.LookAt(trackObj.transform.position);
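For reference, here is the complete GameCam script assembled from the fragments above; nothing new is introduced, and the variable names follow the preceding steps:

```csharp
using UnityEngine;

// Complete GameCam script, assembled from the preceding steps.
// Attach to the Main Camera object and assign trackObj in the Inspector.
public class GameCam : MonoBehaviour
{
    public GameObject trackObj;   // the object this camera tracks

    // Tuning values; working defaults are set in the Inspector.
    public float height;
    public float desiredDistance;
    public float heightDamp;
    public float rotDamp;

    void Update()
    {
        UpdateRotAndTrans();
    }

    void UpdateRotAndTrans()
    {
        if (trackObj)
        {
            // Step 2: desired facing angle and height (offset by 'height').
            float DesiredRotationAngle = trackObj.transform.eulerAngles.y;
            float DesiredHeight = trackObj.transform.position.y + height;

            // Current facing angle and height of the camera itself.
            float RotAngle = transform.eulerAngles.y;
            float Height = transform.position.y;

            // Step 3: interpolate toward the desired values.
            RotAngle = Mathf.LerpAngle(RotAngle, DesiredRotationAngle, rotDamp);
            Height = Mathf.Lerp(Height, DesiredHeight, heightDamp * Time.deltaTime);

            // Step 4: place the camera behind the tracked object.
            Quaternion CurrentRotation = Quaternion.Euler(0.0f, RotAngle, 0.0f);
            Vector3 pos = trackObj.transform.position;
            pos -= CurrentRotation * Vector3.forward * desiredDistance;
            pos.y = Height;
            transform.position = pos;

            // Step 5: keep the tracked object centered in view.
            transform.LookAt(trackObj.transform.position);
        }
        else
        {
            Debug.Log("GameCamera: Error, trackObj invalid");
        }
    }
}
```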
Congratulations! We have now written our first camera class that can smoothly track a rotating and translating object. To test this class, let's set the following default values in the Inspector pane, as seen in the previous screenshot:
Track Obj: Set this to the Player object by dragging-and-dropping the object reference from the Hierarchy tab to the trackObj reference in the object inspector.
Height: Set this to 0.25. In general, the lower the camera, the more dramatic the effect, but the less playable the game will be (because the user can see less of the world on screen).
Desired Distance: Set this to 4. At this setting, we can see the character framed nicely on screen when it is both moving and standing still.
Rot Damp: Set this to 0.01. The smaller this value, the looser and more interesting the rotation effect; the larger this value, the tenser the spring in the interpolation.
Height Damp: Set this to 0.5. The smaller this value, the looser and more interesting the height blending effect.
Once the player controls are developed (refer to the next section), try experimenting with these values and see what happens.
Tip
Downloading the example code
You can download the example code files for all Packt books you have purchased via your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
The third system we need to implement is the controls, or how the character will respond to user input. As a first pass, we need to be able to move our player in the world, so we will implement walking forward, backward, left, and right. Luckily for us, Unity gives us an input system with axes, so we can write our control code once and it will work with any device that has an axis (such as a keyboard or joypad). Of course, the devil is in the details, and keyboard controls behave differently from joypads, so we will write our code for keyboard input as it is the most responsive and most ubiquitous device. Once this script is finished, its behavior in combination with the GameCam script will control how the player motion feels in the game.
The following steps, performed every frame the player updates, describe our PlayerControls algorithm:
Store the forward and right vectors of the current camera.
Store the raw axis input from the controller (keyboard or joystick). These values will range from -1.0 to 1.0, corresponding to full left or right, or full forward or backward. Note that if you use a joystick, the rate of change of these values will generally be much slower than with a keyboard, so the code that processes them must be adjusted accordingly.
Apply the raw input to the current camera basis vectors to compute a camera-relative target direction vector.
Interpolate the current movement vector towards the target vector, damping the rate of change of the movement vector, and store the result.
Compute the player's displacement with movement * moveSpeed and apply it to the player.
Rotate the player towards the current move direction vector.
Now let's implement this algorithm in C# code:
Right-click on the Chapter1 assets folder and select Create New C# Script. Name it PlayerControls.cs. Add this script to the Player GameObject by dragging-and-dropping it onto the object.
Add a CharacterController component to the player's GameObject as well. If Unity asks you whether you want to replace the existing collider, agree to the change.
Create a public Vector3 named moveDirection that will be used to store the current actual direction vector of the player. We initialize it to the zero vector by default as follows:
public Vector3 moveDirection = Vector3.zero;
Create three more float variables: rotateSpeed, moveSpeed, and speedSmoothing. The first two are coefficients of motion for rotation and translation, and the third is a factor that influences the smoothing of moveSpeed. Note that moveSpeed is private because it will only be computed as the result of the smoothing calculation between moveDirection and targetDirection, as shown in the following code:
public float rotateSpeed;
private float moveSpeed = 0.0f;
public float speedSmoothing = 10.0f;
Inside the Update() loop of this script, we will call a custom method called UpdateMovement(). This method will contain the code that actually reads input from the user and moves the player in the game, as shown in the following code:
void Update() {
    UpdateMovement();
}
Above the Update() loop, let's implement the UpdateMovement() method as follows:
void UpdateMovement() {
    // to be filled in
}
Inside this method, step 1 is accomplished by storing the horizontal projection of the camera's forward vector (we will derive the right vector from it in a moment), as follows:
Vector3 cameraForward = Camera.main.transform.TransformDirection(Vector3.forward);
We project onto the horizontal plane because we want the character's motion to be parallel to the horizontal plane rather than vary with the camera's pitch. We also use Normalize to ensure that the vector is well formed, as shown in the following code:
cameraForward.y = 0.0f;
cameraForward.Normalize();
Also, note the trick whereby we find the right vector by swapping the x and z components of the forward vector and negating the new z component. This is faster than extracting and transforming the right vector, but returns the same result, as shown in the following code:
Vector3 cameraRight = new Vector3(cameraForward.z, 0.0f, -cameraForward.x);
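To convince yourself that this shortcut is correct, note that (f.z, 0, -f.x) is exactly the cross product of the world up vector with f for any horizontal vector f. The following hypothetical check (not part of the script; you could run it temporarily inside any Unity method) illustrates the identity:

```csharp
// Hypothetical sanity check: the swap-and-negate trick equals
// Cross(up, forward) for a horizontal forward vector.
Vector3 f = new Vector3(0.6f, 0.0f, 0.8f);        // example forward vector
Vector3 viaTrick = new Vector3(f.z, 0.0f, -f.x);  // (0.8, 0, -0.6)
Vector3 viaCross = Vector3.Cross(Vector3.up, f);  // same result
Debug.Assert((viaTrick - viaCross).sqrMagnitude < 1e-6f);
```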
We store the raw axis values from Unity's Input class. Recall that this is the class that handles input for us, from which we can poll button and axis values. For v (which ranges from -1 to 1), the value corresponds to an amount of vertical displacement of the analog stick, joystick, or a keypress, as shown in the following code:
float v = Input.GetAxisRaw("Vertical");
For h (which also ranges from -1 to 1), the value corresponds to an amount of horizontal displacement of the analog stick, joystick, or a different keypress:
float h = Input.GetAxisRaw("Horizontal");
To see the key bindings, check the input class settings under Edit | Project Settings | Input. There, under the Axes field in the object inspector, we can see all of the defined axes in the input manager class, their bindings, their names, and their parameters.
We compute the target direction vector for the character as proportional to the user input (v, h). By transforming (v, h) into camera space, the result is a world space vector that holds a camera-relative motion vector, which we store in targetDirection as shown in the following code:
Vector3 targetDirection = h * cameraRight + v * cameraForward;
If this target vector is non-zero (when the user is moving, and hence v and h are non-zero), we update moveDirection by rotating it smoothly (and by a small magnitude) towards targetDirection. By doing this in every frame, the actual direction eventually approximates the target direction, even as targetDirection itself changes. We keep moveDirection normalized because our move speed calculation assumes a unit direction vector, as shown in the following code:
moveDirection = Vector3.RotateTowards(moveDirection, targetDirection, rotateSpeed * Mathf.Deg2Rad * Time.deltaTime, 1000);
moveDirection = moveDirection.normalized;
We smoothly LERP the speed of our character up and down, trailing the actual magnitude of the targetDirection vector. This creates an appealing effect that reduces jitter in the player's motion and is crucial when we are using keyboard controls, where the variance in the raw v and h data is at its highest, as shown in the following code:
float curSmooth = speedSmoothing * Time.deltaTime;
float targetSpeed = Mathf.Min(targetDirection.magnitude, 1.0f);
moveSpeed = Mathf.Lerp(moveSpeed, targetSpeed, curSmooth);
We compute the displacement vector for the player in this frame as moveDirection * moveSpeed (remember that moveSpeed is smoothly interpolated and moveDirection is smoothly rotated towards targetDirection). We scale the displacement by Time.deltaTime (the amount of real time that has elapsed since the last frame) so that our calculation is time dependent rather than frame rate dependent, as shown in the following code:
Vector3 displacement = moveDirection * moveSpeed * Time.deltaTime;
Then, we move the character by invoking the Move method on the CharacterController component of the player, passing the displacement vector as a parameter, as follows:
this.GetComponent<CharacterController>().Move(displacement);
Finally, we assign a rotation that faces along moveDirection to the rotation of the transform, as follows:
transform.rotation = Quaternion.LookRotation(moveDirection);
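For reference, here is the complete PlayerControls script assembled from the fragments above. The zero-vector guards are made explicit here because the algorithm description says to rotate only when the target vector is non-zero (and Quaternion.LookRotation is undefined for a zero vector); otherwise the code follows the preceding steps:

```csharp
using UnityEngine;

// Complete PlayerControls script, assembled from the preceding steps.
// Attach to the Player object alongside a CharacterController component.
public class PlayerControls : MonoBehaviour
{
    public Vector3 moveDirection = Vector3.zero;
    public float rotateSpeed;
    private float moveSpeed = 0.0f;
    public float speedSmoothing = 10.0f;

    void Update()
    {
        UpdateMovement();
    }

    void UpdateMovement()
    {
        // Step 1: horizontal projection of the camera's forward vector.
        Vector3 cameraForward =
            Camera.main.transform.TransformDirection(Vector3.forward);
        cameraForward.y = 0.0f;
        cameraForward.Normalize();

        // The right vector, via the swap-and-negate trick.
        Vector3 cameraRight =
            new Vector3(cameraForward.z, 0.0f, -cameraForward.x);

        // Step 2: raw axis input in the range -1..1.
        float v = Input.GetAxisRaw("Vertical");
        float h = Input.GetAxisRaw("Horizontal");

        // Step 3: camera-relative target direction.
        Vector3 targetDirection = h * cameraRight + v * cameraForward;

        // Step 4: rotate the current direction smoothly toward the target.
        if (targetDirection != Vector3.zero)
        {
            moveDirection = Vector3.RotateTowards(moveDirection, targetDirection,
                rotateSpeed * Mathf.Deg2Rad * Time.deltaTime, 1000);
            moveDirection = moveDirection.normalized;
        }

        // Smooth the speed toward the target magnitude.
        float curSmooth = speedSmoothing * Time.deltaTime;
        float targetSpeed = Mathf.Min(targetDirection.magnitude, 1.0f);
        moveSpeed = Mathf.Lerp(moveSpeed, targetSpeed, curSmooth);

        // Step 5: displacement, scaled by elapsed time, applied via Move.
        Vector3 displacement = moveDirection * moveSpeed * Time.deltaTime;
        GetComponent<CharacterController>().Move(displacement);

        // Step 6: face the direction of motion.
        if (moveDirection != Vector3.zero)
        {
            transform.rotation = Quaternion.LookRotation(moveDirection);
        }
    }
}
```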
Congratulations! You have now written your first player controls class that can read user input from multiple axes and use that to drive a rotating and translating character capsule. To test this class, let's set the following default values in the Inspector pane as seen in the previous screenshot:
Try experimenting with the following values and see what happens:
Rotate Speed: Set the default to 100. The higher the value, the faster the player will rotate when the horizontal axis is at full left or right.
Speed Smoothing: Set the default to 10. This value scales the interpolation factor for moveSpeed; lower values make the character's acceleration and deceleration more gradual, while higher values make the speed snap to its target more quickly.
Try experimenting with these values to understand their effect on the player's motion behavior.
Test your three Cs framework by debugging the game in the editor. Press play, and then adjust the parameters on the PlayerControls and GameCam scripts to tighten or loosen the controls to your liking. Once you have a set of parameters that works for you, make sure to save your scene and project.
We learned about the three Cs in core gameplay experience programming. We developed base systems for the character, controls, and camera that have parameters so that designers and programmers can adjust the look and feel of the game quickly and easily.
Going forward, we will build the code necessary to make our game interactive! We will learn how to program interactive objects in our game, and we will develop the technology for a mission system, which will allow the game designer to build missions and objectives, track their status, and give them to the user in a systemic way.