Chapter 9. Tactics

In this chapter, we will cover the following topics:

  • Generating layer-based influence maps

  • Drawing individual influence map layers

  • Learning about influence map modifiers

  • Manipulating and spreading influence values

  • Creating an influence map for agent occupancy

  • Creating a tactical influence map for potentially dangerous regions

So far, our agents have a limited number of data sources for spatial reasoning about their surroundings. Navigation meshes let agents determine all walkable positions, and perception gives them the ability to see and hear, but the lack of a fine-grained, quantized view of the sandbox prevents our agents from reasoning about other places of interest within it.

Influence maps


Influence maps provide a high-level understanding of the environment. This type of spatial data can come in many different forms, but all of them essentially break the environment down into quantifiable regions where additional data can be stored and evaluated. In particular, we'll be using a three-dimensional, grid-based representation for our influence maps, where each cell of the influence grid represents a three-dimensional cube of space. The sandbox stores influence map data in an internal three-dimensional array, so its memory footprint grows quickly as the dimensions of each grid cell shrink.
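To see how quickly the cell count grows, consider a quick back-of-the-envelope calculation; the sandbox dimensions below are illustrative only, not the actual defaults:

-- Approximate cell counts for a hypothetical 64m x 8m x 64m sandbox.
local width, height, depth = 64, 8, 64;

-- 2m wide, 1m tall cells: (64/2) * (8/1) * (64/2) = 8192 cells.
local coarseCells = (width / 2) * (height / 1) * (depth / 2);

-- Halving every cell dimension: (64/1) * (8/0.5) * (64/1) = 65536 cells.
local fineCells = (width / 1) * (height / 0.5) * (depth / 1);

print(coarseCells, fineCells);  -- 8192    65536, an eightfold increase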

With influence maps, we can easily perform tasks such as spacing out agent positions or moving agents toward friendly regions and away from dangerous regions within the sandbox. Previously, we would use a utility-based approach that would become performance-intensive as more agents were introduced into the sandbox. With a central data structure shared by all agents...

Constructing an influence map


Constructing an influence map requires configuration parameters as well as a navigation mesh on which to base the influence map. Instead of analyzing all geometry in the sandbox, we optimize by constructing grid cells along the navigation mesh itself. As the navigation mesh represents all walkable areas, the influence map can store spatial data for every region agents can path to.

A navigation mesh used to construct the influence map

While information about every possible area might be useful for certain tactical analyses, in practice the additional memory and performance costs take higher priority.

Note

As the influence map is based on the underlying navigation mesh, try changing the configuration of the navigation mesh to generate different influence map representations.

Configuration

Configuring the influence map boils down to specifying a cell width, cell height, and any boundary offsets. Even though the maximum size...
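A minimal configuration might look like the following; the table fields mirror those used in the SoldierTactics setup later in this chapter, and the specific values are only examples:

-- A minimal influence map configuration; values are illustrative.
local influenceMapConfig = {
    CellHeight = 1,
    CellWidth = 2,
    BoundaryMinOffset = Vector.new(0.18, 0, 0.35) };

-- Construct the influence map over the "default" navigation mesh.
Sandbox.CreateInfluenceMap(sandbox, "default", influenceMapConfig);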

Drawing influence maps


To display the current state of the influence map, you can call Sandbox.DrawInfluenceMap. The influenceMapLayer parameter that is passed in determines which of the 10 possible layers is drawn. As our influence maps support both positive and negative influences, three different colors are used to draw the resulting map:

Sandbox.DrawInfluenceMap(
    sandbox,
    influenceMapLayer,
    positiveInfluenceColor,
    neutralInfluenceColor,
    negativeInfluenceColor);

Each color passed to DrawInfluenceMap is a Lua table that represents the red, green, blue, and alpha properties of the color in the range of 0 to 1. In the earlier cases, the influence map was drawn with these settings:

Sandbox.DrawInfluenceMap(
    sandbox,
    0,
    { 0, 0, 1, 0.9 },
    { 0, 0, 0, 0.75 },
    { 1, 0, 0, 0.9 });

Note

Drawing the influence map only shows what the influence map looks like at that exact moment. The debug drawing of the influence map will not get updated on its own. Adding...
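One way to keep the debug view current is to redraw the map from the sandbox's update function; a minimal sketch using the same layer and colors as before:

function Sandbox_Update(sandbox, deltaTimeInMillis)
    -- Redraw layer 0 every update so the debug view stays current.
    Sandbox.DrawInfluenceMap(
        sandbox,
        0,
        { 0, 0, 1, 0.9 },
        { 0, 0, 0, 0.75 },
        { 1, 0, 0, 0.9 });
end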

Accessing influences


Grid cells are accessed by their world position. Typically, you can use an agent's own position to determine which cell it's in, or you can generate random points on the navigation mesh to pick a random grid cell. Once a cell has been selected, you can manually change the influence or retrieve the current influence.
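For example, either of the following positions can be used to select a grid cell; the agent:GetPosition() accessor is an assumption here, while Sandbox.RandomPoint appears later in this chapter:

-- Use an agent's current world position to select its cell.
local agentPosition = agent:GetPosition();

-- Or sample the navigation mesh to select a random cell.
local randomPosition = Sandbox.RandomPoint(sandbox, "default");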

Setting influences

To set an influence value, you can call Sandbox.SetInfluence, passing in the layer of the map to be affected, the world position whose cell should change, and the value itself. The influence map automatically clamps values that fall outside the supported negative-to-positive range:

Sandbox.SetInfluence(
    sandbox,
    influenceMapLayer,
    position,
    influenceValue);

The following screenshot illustrates what happens when you set a value directly on the influence map:

An influence map showing the positions of influence for all previous examples

Getting influences

Retrieving values from the influence map is just as simple as calling Sandbox.GetInfluence...
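A signature sketch follows, assuming GetInfluence mirrors SetInfluence minus the value argument; the exact signature is not confirmed by this section:

-- Read back the influence stored at a world position.
local influenceValue = Sandbox.GetInfluence(
    sandbox,
    influenceMapLayer,
    position);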

Clearing influences


To remove all current influences on a specific layer of the map, you can call Sandbox.ClearInfluenceMap. Influences on the map do not decay over time, so you must typically clear the influence map before calculating a new spread of influences:

Sandbox.ClearInfluenceMap(sandbox, influenceMapLayer);

Clearing the influence map resets every cell with a nonzero value back to zero.

A cleared influence map

Spreading influences


To actually spread the current influences to their maximum possible regions, you can call Sandbox.SpreadInfluenceMap. Based on how influences are configured to fall off over distance, the spread is calculated automatically. Spreading influences ahead of time quantizes the affected areas so that multiple systems can read the resulting spatial data without the overhead of calculating values on demand:

Sandbox.SpreadInfluenceMap(sandbox, influenceMapLayer);

The algorithm caps influence calculation at a maximum of 20 iterations, in case the configured falloff is so low that influence would otherwise propagate farther than 20 cells in any direction. Additionally, if the propagated influence drops below a minimum of one percent, the algorithm terminates early:

iterationMax = 20
minPropagation = 0.01
currentPropagation = 1 - falloff

for (iteration = 0; iteration < iterationMax; ++iteration) {
    UpdateInfluenceGrid(grid, layer, inertia, falloff)

    currentPropagation *= (1 - falloff)

    if (currentPropagation < minPropagation)
        break
}

Influence map layers


So far, we've talked about different layers of the influence map without showing how they can be useful. Currently, the red influences on the influence map are negative values, while the blue influences are positive values. When combined on a single influence map layer, the boundary where blue meets red becomes the neutral influence of 0. While this can be very useful, we might also want to see the total extent of the negative or the positive influences on their own.

Combined negative and positive influences

Using a separate layer on the influence map, we can see the full negative influence unaffected by positive influences.

Only negative influences spread to their maximum

We can also do this for the positive influences and use an additional layer that maps only positive values.

Only positive influences spread to their maximum
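One way to achieve this separation is to write each influence source into both the combined layer and a sign-specific layer. A minimal sketch follows; the layer indices are arbitrary examples, and dangerPosition and friendlyPosition are placeholder positions:

-- Layer assignments are illustrative; any of the 10 layers would work.
local combinedLayer = 0;
local negativeLayer = 1;
local positiveLayer = 2;

-- Write a negative influence to the combined and negative-only layers.
Sandbox.SetInfluence(sandbox, combinedLayer, dangerPosition, -1);
Sandbox.SetInfluence(sandbox, negativeLayer, dangerPosition, -1);

-- Write a positive influence to the combined and positive-only layers.
Sandbox.SetInfluence(sandbox, combinedLayer, friendlyPosition, 1);
Sandbox.SetInfluence(sandbox, positiveLayer, friendlyPosition, 1);

-- Each layer spreads independently, so the sign-specific layers show
-- the full extent of each influence, unaffected by the other sign.
Sandbox.SpreadInfluenceMap(sandbox, combinedLayer);
Sandbox.SpreadInfluenceMap(sandbox, negativeLayer);
Sandbox.SpreadInfluenceMap(sandbox, positiveLayer);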

Updating the influence map


So far, we've seen how to configure and spread influences in isolation. The actual update loop for the influence map combines these steps. Remember that clearing the influence map is necessary between spreads; otherwise, the resulting influences will become oversaturated:

function Sandbox_Update(sandbox, deltaTimeInMillis)
    Sandbox.ClearInfluenceMap(sandbox, 0);
    
    for i=1, 15 do
        Sandbox.SetInfluence(
            sandbox,
            0,
            Sandbox.RandomPoint(sandbox, "default"),
            1);
    end

    Sandbox.SpreadInfluenceMap(sandbox, 0);
end

While this simple update scheme works, recalculating the entire influence map every sandbox update is incredibly CPU-intensive. Typically, a slightly out-of-date influence map will not adversely affect tactical decision making based on influence map data. As the sandbox internally updates at a frequency independent of the frame rate, we can use an updateInterval...
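A minimal sketch of such interval-based updating follows; the one-second interval and the accumulator variable are illustrative assumptions:

local updateInterval = 1000;  -- milliseconds between recalculations
local timeAccumulator = 0;

function Sandbox_Update(sandbox, deltaTimeInMillis)
    timeAccumulator = timeAccumulator + deltaTimeInMillis;

    -- Skip the expensive recalculation until the interval elapses.
    if timeAccumulator < updateInterval then
        return;
    end
    timeAccumulator = 0;

    Sandbox.ClearInfluenceMap(sandbox, 0);

    for i = 1, 15 do
        Sandbox.SetInfluence(
            sandbox, 0, Sandbox.RandomPoint(sandbox, "default"), 1);
    end

    Sandbox.SpreadInfluenceMap(sandbox, 0);
end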

Soldier tactics


In order to manage multiple influence maps, we can create common initialize and update functions that wrap the update interval of different influence maps as well as the initial setup. As the SoldierTactics initialize function will be responsible for the influence map's configuration and construction, we can move the previous initialization code from Sandbox.lua:

SoldierTactics.lua:

SoldierTactics = {};
SoldierTactics.InfluenceMap = {};

function SoldierTactics_InitializeTactics(sandbox)
    -- Override the default influence map configuration.
    local influenceMapConfig = {
        CellHeight = 1,
        CellWidth = 2,
        BoundaryMinOffset = Vector.new(0.18, 0, 0.35) };

    -- Create the sandbox influence map.
    Sandbox.CreateInfluenceMap(
        sandbox, "default", influenceMapConfig);

    -- Initialize each layer of the influence map that has
    -- an initialization function.
    for key, value in pairs(SoldierTactics.InfluenceMap) do
        value.initializeFunction...
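A companion update wrapper might iterate the same table while respecting each layer's update interval. The following is only a sketch, since just the initialization is shown above; the updateFunction, updateInterval, and lastUpdateTime fields are assumed names:

-- A sketch of the matching update wrapper; field names are assumed.
function SoldierTactics_UpdateTactics(sandbox, deltaTimeInMillis)
    for key, value in pairs(SoldierTactics.InfluenceMap) do
        value.lastUpdateTime =
            (value.lastUpdateTime or 0) + deltaTimeInMillis;

        -- Only recalculate a layer once its interval has elapsed.
        if value.updateFunction ~= nil and
            value.updateInterval ~= nil and
            value.lastUpdateTime >= value.updateInterval then

            value.updateFunction(sandbox, value.lastUpdateTime);
            value.lastUpdateTime = 0;
        end
    end
end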

Scoring team influences


Now, we can create an influence map that represents each of the agent teams moving around the sandbox. From the sandbox's point of view, we will use perfect knowledge of the situation to acquire the location of each agent; that is, the system reads the immediate location of every agent, unlike the perception system, which uses last-seen locations. While this data is certainly useful for decision making, be aware that agents that query this influence layer are effectively cheating.

Initializing team influences

Initializing a team influence layer consists of setting the desired falloff and inertia of the influence layer. A 20 percent falloff as well as a 50 percent inertia works well based on a cell width of 2 meters for the default sandbox layout.

Finding useful ranges for falloff and inertia requires experimentation and is very application-specific. A 20 percent falloff and 50 percent inertia works well with the default sandbox...
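In code, that initialization might look like the following; the setter functions shown here are hypothetical stand-ins, as the actual configuration API isn't shown in this section:

-- Hypothetical setters; the real API may configure falloff and
-- inertia differently, for example through a layer config table.
local teamLayer = 0;
Sandbox.SetInfluenceMapFalloff(sandbox, teamLayer, 0.2);  -- 20 percent
Sandbox.SetInfluenceMapInertia(sandbox, teamLayer, 0.5);  -- 50 percent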

Scoring dangerous areas


The next influence map we'll create scores dangerous areas from a team-specific perspective. Using the events that the agents already send out for communication, we can set influence values based only on the information the team rightfully knows about, without resorting to the perfect-knowledge cheating used by the previous layer:

SoldierTactics.lua:

require "AgentSenses"

local eventHandlers = {};
local bulletImpacts = {};
local bulletShots = {};
local deadFriendlies = {};
local seenEnemies = {};

Tapping into agent events

Without modifying the existing event system, we can create simple event handlers to store, process, and prune any number of events the agents are already sending. As events are processed differently for the influence map, we store local copies of each event and manage the lifetime of events separately from how agents process them:

SoldierTactics.lua:

local function HandleBulletImpactEvent(sandbox, eventType, event)
    table.insert(
        bulletImpacts, { position = event.position,...
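Because these local tables grow with every event, managing event lifetimes might mean pruning entries older than some maximum age. The following sketch assumes each stored entry also records a timestamp field when it is inserted:

-- Remove stored events older than maxAgeInMillis; assumes each entry
-- was stored with a "timestamp" field at insertion time.
local function PruneEvents(events, currentTimeInMillis, maxAgeInMillis)
    for index = #events, 1, -1 do
        if (currentTimeInMillis - events[index].timestamp) >
            maxAgeInMillis then
            table.remove(events, index);
        end
    end
end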

Summary


New data sources for spatial analysis open many opportunities to create interesting behaviors, allowing agents to evaluate more than just a fixed perspective of the world. So far, we've created only a few types of influence maps and haven't even begun to use their data to drive decision making. Even though changing our current agents' logic is left to you, the examples shown so far already allow many interesting scenarios to play out.

Looking back, though, we've progressed quite far from the simple moving capsules we created in the earlier chapters. After adding system upon system to the sandbox, we can now manage AI animations, steering-based movements, agent behaviors, decision making, pathfinding, and a myriad of other AI facets. Even though we've only touched on each of the topics lightly, the sandbox is now yours to mold and expand, as you see fit. Every library the sandbox uses, as well as the sandbox itself, is open source. I eagerly look...
