You're reading from Learning Game AI Programming with Lua by David Young (Packt, 1st Edition, Nov 2014, ISBN-13: 9781783281336). Reading level: Beginner.
Chapter 4. Mind Body Control

In this chapter, we will cover the following topics:

  • Attaching an animated mesh to an agent

  • Adding callbacks to the Lua animation state machine

  • Getting our soldier to shoot projectiles

  • Creating an agent that directly controls the animation playback

  • Creating an animation controller that handles the animation playback

  • Running agents through an obstacle course

Now that we've learned how to move agents based on steering forces and created a system to handle animated meshes, it's time to join both of these systems to build a visual representation of an animated, moving agent.

Going forward, we'll implement two different approaches to how the decision logic (the mind) and the visual appearance (the body) can be connected within the sandbox.

Creating a body


So far, we've hardcoded a lot of functionality directly inside our sandbox or agent Lua scripts. Now, instead of duplicating that logic in multiple places, we'll use the helper functions provided by src/demo_framework/script/Soldier.lua. The functions within the soldier Lua script are specialized and tuned to provide the correct steering weights and animation representation for creating a soldier going forward. The animation state machines found within the soldier script are a more fleshed-out representation of the same state machines that we created previously.

Creating a soldier

Now, we'll take a brief look at what functionalities the soldier script provides and how we'll go about using them.

The Soldier.SoldierStates and Soldier.WeaponStates tables provide a list of all the available animation states for both the soldier and the soldier's weapon animation state machines.
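For example, requesting one of these states from the soldier's ASM looks like this (mirroring the call used later in this chapter):

```lua
-- Request a soldier animation state by its table entry.
_soldierAsm:RequestState(
    Soldier.SoldierStates.STAND_RUN_FORWARD);
```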

The Soldier_AttachWeapon function will attach the soldier's weapon mesh...

Adding callbacks to the animation state machine


So far, our animation state machine could control animations and transitions; now, we'll extend the ASM to notify functions whenever a new state begins playing. This functionality is extremely useful for actions that require synchronization with animations, such as reloading and shooting.

Previously, we've implemented our animation state machine within the Sandbox.lua script. Now, you can either copy that implementation into a new AnimationStateMachine.lua file or take a look at the implementation provided by the sandbox.

Note

The sandbox will always load a demo's specific implementation of a Lua file instead of the file provided by default within the demo_framework project. This allows you to replace the default implementation of the AnimationStateMachine.lua script, for instance, without needing to change any other script references.

Handling callbacks

A callback is essentially another Lua function that is stored as a variable. A helper function...
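As an illustrative sketch only, a callback registry in plain Lua might look like the following; the function names here (AddStateCallback, NotifyStateBegin) are placeholders, not the sandbox's actual AnimationStateMachine API:

```lua
-- Callbacks are stored per state name and invoked when that state
-- begins playing. This is a minimal sketch, not the sandbox's code.
local _callbacks = {};

local function AddStateCallback(stateName, callback, data)
    if (_callbacks[stateName] == nil) then
        _callbacks[stateName] = {};
    end
    table.insert(
        _callbacks[stateName], { callback = callback, data = data });
end

-- Called by the ASM whenever a new state starts playing.
local function NotifyStateBegin(stateName)
    for _, entry in ipairs(_callbacks[stateName] or {}) do
        entry.callback(stateName, entry.data);
    end
end
```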

Getting our soldier to shoot


Before we use the Soldier_Shoot helper function provided by the sandbox, we should implement our soldier's shooting by hand. Overall, the process of shooting a bullet requires looking up the soldier's bone position and rotation, creating a physics representation for the bullet, attaching a particle system to the bullet, launching the projectile, and then handling the impact of the bullet with any other physics-simulated object.
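As a rough outline of these steps, consider the sketch below. Apart from the Animation bone lookups covered next, every name here (the "b_muzzle" bone name, CreateBullet, AttachParticle, LaunchProjectile, HandleBulletImpact) is a placeholder for illustration, not the sandbox's actual API:

```lua
-- Illustrative outline of the shooting steps; all names other than the
-- Animation bone lookups are placeholders, not real sandbox calls.
local function ShootBullet(sandbox, soldier)
    -- 1. Look up the muzzle bone's position and rotation
    --    ("b_muzzle" is a hypothetical bone name).
    local position = Animation.GetBonePosition(soldier, "b_muzzle");
    local rotation = Animation.GetBoneRotation(soldier, "b_muzzle");

    -- 2. Create a physics representation for the bullet.
    local bullet = CreateBullet(sandbox, position, rotation);

    -- 3. Attach a particle system so the bullet is visible in flight.
    AttachParticle(bullet, "bullet_trail");

    -- 4. Launch the projectile along the muzzle's facing direction.
    LaunchProjectile(bullet, rotation);

    -- 5. React when the bullet collides with another physics object.
    HandleBulletImpact(bullet);
end
```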

The bone position

Getting a bone position requires passing a sandbox object or mesh to the Animation.GetBonePosition function. GetBonePosition also works for any attached object that contains bones. This allows you to retrieve the position of a bone within the weapon while it is attached to the soldier, for example:

local position =
    Animation.GetBonePosition(sandboxObject, boneName);

The bone rotation

Bone rotation is exactly the same as getting the bone's position, except that it returns a vector that represents the rotation...
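Mirroring the position lookup, the call would look like the following; the function name Animation.GetBoneRotation is inferred from the parallel wording here, so treat it as an assumption:

```lua
-- Retrieve a bone's rotation from a sandbox object or mesh.
local rotation =
    Animation.GetBoneRotation(sandboxObject, boneName);
```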

Getting our soldier to run


Now that we have an animated agent, we can get the agent to run around the obstacle course while animating with the same steering techniques we used previously. First, we set the agent's path, which is provided by the SandboxUtilities_GetLevelPath function, and then we request the ASM to let the agent play the run_forward animation.

Setting a path through the obstacle course

You can set a path through the obstacle course as follows:

SoldierAgent.lua:

require "SandboxUtilities"

function Agent_Initialize(agent)

    ...

    _soldierAsm:RequestState(
        Soldier.SoldierStates.STAND_RUN_FORWARD);

    -- Assign the default level path and adjust the agent's speed 
    -- to match the soldier's steering scalars.
    agent:SetPath(SandboxUtilities_GetLevelPath());
    agent:SetMaxSpeed(agent:GetMaxSpeed() * 0.5);
end

Running the obstacle course

Actually getting our agent to move requires us to calculate the steering forces based on the set path and then apply these forces...
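A per-frame update along these lines might be sketched as follows; ForceToFollowPath and AgentUtilities_ApplyPhysicsSteeringForce are assumed helper names based on the steering work in earlier chapters, shown here for illustration only:

```lua
-- Hedged sketch of a steering update; helper names are assumptions.
function Agent_Update(agent, deltaTimeInMillis)
    local deltaTimeInSeconds = deltaTimeInMillis / 1000;

    -- Steer toward the path assigned in Agent_Initialize.
    local steeringForce = agent:ForceToFollowPath(1.0);

    -- Apply the force through the agent's physics representation.
    AgentUtilities_ApplyPhysicsSteeringForce(
        agent, steeringForce, deltaTimeInSeconds);
end
```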

Creating a brain


So far, we have a very basic binding between an agent and an animating mesh. Now, we are going to implement two different approaches that let the decision logic control the agent's animation state machines.

Approaches for mind body control

We will implement two main approaches. In the first, the agent has direct control over the ASM: the decision logic chooses exactly which state the ASM transitions to. In the second, the agent issues commands to another system that is responsible for figuring out which animations are appropriate to play for the agent.

Direct animation control


The first approach we'll implement is direct control over the ASM, as it is the simplest to understand and implement. While this gives the agent the advantage of knowing exactly what animation is playing on its body, the technique tends to scale poorly as more and more animation states are introduced. Although scaling becomes a problem, this approach allows for the finest-grained control over animation selection and the fastest response times from the agent.

As the agent must be responsible for animation selection, we'll create some basic actions that the agent can perform and represent as states:

DirectSoldierAgent.lua:

-- Supported soldier states.
local _soldierStates = {
    DEATH = "DEATH",
    FALLING = "FALLING",
    IDLE = "IDLE",
    MOVING = "MOVING",
    SHOOTING = "SHOOTING"
}

The death state

The first action we'll implement is the death state of the agent. As the ASM provides both a crouch and standing death variation, the action simply slows the...
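A hedged sketch of the death action is shown below; the death state names are assumed to follow the naming pattern of the Soldier.SoldierStates table, and the surrounding logic is illustrative rather than the book's exact implementation:

```lua
-- Illustrative death action: pick the death variation that matches
-- the soldier's current stance. State names are assumptions.
local function _ExecuteDeathState(agent)
    if (_soldierStance == _soldierStances.CROUCH) then
        _soldierAsm:RequestState(Soldier.SoldierStates.CROUCH_DEAD);
    else
        _soldierAsm:RequestState(Soldier.SoldierStates.STAND_DEAD);
    end
end
```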

A simple, finite state machine


Now that we've implemented the internals of each state, we can flesh out the basic FSM that controls when each state function is called. We'll begin by declaring some local variables and the flags for stance variation:

DirectSoldierAgent.lua:

local _soldier;
local _soldierAsm;
local _soldierStance;
local _soldierState;
local _weaponAsm;

-- Supported soldier stances.
local _soldierStances = {
    CROUCH = "CROUCH",
    STAND = "STAND"
};
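With these locals in place, the FSM's update can be a simple dispatch on the current state. The following is an illustrative sketch; the per-state update functions (_UpdateDeath and friends) are placeholder names for the state internals implemented earlier:

```lua
-- Illustrative FSM dispatch; the _Update* names are placeholders.
local _stateUpdates = {
    [_soldierStates.DEATH] = _UpdateDeath,
    [_soldierStates.FALLING] = _UpdateFalling,
    [_soldierStates.IDLE] = _UpdateIdle,
    [_soldierStates.MOVING] = _UpdateMoving,
    [_soldierStates.SHOOTING] = _UpdateShooting
};

function Agent_Update(agent, deltaTimeInMillis)
    local update = _stateUpdates[_soldierState];
    if (update) then
        update(agent, deltaTimeInMillis);
    end
end
```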

Initializing the agent

Initializing our direct control agent is very similar to the other agents we created previously. This time, we add the ASM callbacks during the initialization so that our soldier will shoot during the fire and crouch_fire animation states. The Soldier_Shoot function expects to receive a table that contains both the agent and soldier mesh in order to shoot projectiles and handle projectile collisions:

DirectSoldierAgent.lua:

function Agent_Initialize(agent)
    -- Initialize the soldier...
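The elided initialization above includes the callback hook-up; a hedged sketch of what that registration might look like follows. The registration function name (AddStateCallback) and the exact fire state names are assumptions based on the description:

```lua
-- Hedged sketch: register Soldier_Shoot to fire during the shooting
-- animation states. AddStateCallback is an assumed ASM function name.
local callbackData = { agent = agent, soldier = _soldier };
_soldierAsm:AddStateCallback(
    Soldier.SoldierStates.STAND_FIRE, Soldier_Shoot, callbackData);
_soldierAsm:AddStateCallback(
    Soldier.SoldierStates.CROUCH_FIRE, Soldier_Shoot, callbackData);
```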

Indirect animation control


Now that we've implemented direct ASM control from the agent's point of view, we're going to create a system that manages the ASM while taking commands from the agent. A layer of abstraction above the ASM helps separate the decision-making logic that resides in the agent from the low-level animation handling.

Take falling, for example: does it make sense for the agent to constantly track that it is falling, or would it be simpler if another system forced the agent to play a falling animation until the agent can interact with the environment again?

The system we'll be creating is called an animation controller. As animation controllers are very specific to the type of agent we create, you'll tend to create a new animation controller for each and every agent type.

The animation controller

Creating an animation controller will follow an object-oriented style that is similar to the ASM. First, we create a new function that creates variables for holding...
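A constructor in that style might be sketched as follows; the field and function names here are illustrative, not the book's exact implementation:

```lua
-- Illustrative controller constructor in the same object-oriented
-- style as the ASM; names are placeholders.
AnimationController = {};

function AnimationController.new(asm)
    local controller = {
        asm_ = asm,       -- the soldier's animation state machine
        commands_ = {}    -- queued high-level commands from the agent
    };

    -- Queue a command; the controller decides later which animation
    -- states are appropriate to satisfy it.
    function controller:QueueCommand(command)
        table.insert(self.commands_, command);
    end

    return controller;
end
```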

Running the obstacle course


Now that we've implemented both approaches to animation control, it's time to create both of these agent types within the obstacle course. As there's no actual agent decision-making logic, we'll be binding each different state to a keyboard hotkey so that we can influence what actions the agents perform. Here, we can see a few agents running the obstacle course:

Creating a direct control agent

As all the animation control for our direct control agents exists within the agent Lua script, all that remains is setting the soldier's state based on hotkeys. Note that as we have no dedicated change-state function, we handle state changes entirely within the hotkey handler itself:

DirectSoldierAgent.lua:

local function _IsNumKey(key, numKey)
    -- Match both numpad keys and numeric keys.
    return string.find(
        key, string.format("^[numpad_]*%d_key$", numKey));
end

function Agent_HandleEvent(agent, event)
    if (event.source == "keyboard" and event.pressed) then
        -- Ignore new state...
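The elided branch above maps number keys to soldier states; a hedged sketch of how that mapping might continue is shown below. The exact key-to-state assignments are illustrative, not the book's:

```lua
-- Illustrative continuation of the hotkey handler; the key-to-state
-- mapping here is an assumption.
        if (_IsNumKey(event.key, 1)) then
            _soldierState = _soldierStates.IDLE;
        elseif (_IsNumKey(event.key, 2)) then
            _soldierState = _soldierStates.MOVING;
        elseif (_IsNumKey(event.key, 3)) then
            _soldierState = _soldierStates.SHOOTING;
        elseif (_IsNumKey(event.key, 4)) then
            _soldierState = _soldierStates.DEATH;
        end
```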

Action latency


Now that we've created both approaches for animation handling, we'll take a look at the pros and cons of each implementation. Direct control gives our agents absolute control over animations and allows the decision logic to account for the cost of animation playback. While it might seem counter-intuitive to mix animation logic with decision logic, this allows a direct control agent to be in absolute control over the minimum amount of latency required to go from a decision to a visible action within the sandbox.

With the indirect animation controller taking control over the body, the agent now faces a new issue, which is action latency. Action latency is the time between when a command is queued and when it is acted upon. With our current setup of a fully connected ASM, this latency is minimized, as the ASM can transition directly from any state to the state a queued command requires. A fully connected ASM is not a typical representation of an ASM, though. For many game types, this responsiveness won't affect the gameplay...

Summary


With movement and animation working together, we've finally created an agent that resembles what we would consider game AI. Going forward, we'll expand when and how our agents perform actions to create a robust game AI capable of movement, shooting, interacting, and finally, death.

Now that we've implemented a basic soldier agent, we can take a step back and begin to analyze the environment that agents reside in. In the next chapter, we'll integrate navigation mesh generation and pathfinding into our environments and finally move away from the fixed paths we've been using so far.
