Chapter 6. Sensors and Activities

In the previous chapters on pathfinding and behavior trees, we had AI characters moving through our game environments and changing states, but they didn't really react to anything. They knew about the navigation mesh and different points in the scene, but there was no way for them to sense other objects in the game and react to them. This chapter changes that; we will look at how to tag objects in the game so that our characters can sense and react to them.

In this chapter, you will learn about:

  • Sensors and tagging game objects so that they can be sensed

  • AI characters that use sensors in RAIN

  • Advanced configuration of sensors in RAIN

  • Having AI characters react to different objects and perform different activities once they are sensed

An overview of sensing


Part of good game AI is having the AI characters react to other parts of the game in a realistic way. For example, suppose an AI character is searching a scene for something, such as the player to attack or items to collect (as in this chapter's demo). We could use a simple proximity check: if the enemy is within 10 units of the player, it starts attacking. But what if the enemy isn't looking in the direction of the player and couldn't see or hear the player in real life? Having the enemy attack then is very unrealistic. We need to be able to set up more realistic and configurable sensors for our AI.
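For comparison, here is a minimal sketch of that distance-only check in plain Unity C#. The player field and attackRange value are assumptions for illustration; the rest of this chapter replaces this approach with RAIN's sensors.

    using UnityEngine;

    // A minimal sketch of the naive, distance-only check described above.
    // It ignores facing, walls, and hearing, which is exactly why it
    // feels unrealistic in practice.
    public class ProximityAttack : MonoBehaviour
    {
        public Transform player;          // target to check against (assumed reference)
        public float attackRange = 10f;   // the "10 units" from the example

        void Update()
        {
            if (Vector3.Distance(transform.position, player.position) <= attackRange)
            {
                Attack();
            }
        }

        void Attack()
        {
            Debug.Log("Attacking the player!");
        }
    }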

To set up senses for our characters, we will use RAIN's senses system. You might assume that we would use the standard methods for querying a scene in Unity, such as picking objects with Unity's ray casting methods. This works for simple cases, but RAIN has several advanced features to configure sensors for more realism...
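For reference, the "simple case" mentioned above might look roughly like the following line-of-sight test using Physics.Raycast. The field names and view distance here are assumptions for illustration; RAIN's configurable sensors and aspects, covered next, replace this kind of hand-rolled check.

    using UnityEngine;

    // A rough sketch of a single-raycast line-of-sight test, the "simple case"
    // that RAIN's sensor system improves on.
    public class SimpleLineOfSight : MonoBehaviour
    {
        public Transform target;         // e.g. the player or a gold piece (assumed reference)
        public float viewDistance = 10f;

        public bool CanSeeTarget()
        {
            Vector3 toTarget = target.position - transform.position;
            if (toTarget.magnitude > viewDistance)
                return false;

            // Cast a ray toward the target; if something else is hit first,
            // the view is blocked (for example, by the wall in the demo scene).
            RaycastHit hit;
            if (Physics.Raycast(transform.position, toTarget.normalized, out hit, viewDistance))
            {
                return hit.transform == target;
            }
            return false;
        }
    }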

Using senses with RAIN


For this demo, we will use RAIN 2.14 and have a ship that patrols a path, looks for pieces of gold, and picks them up. To start, we'll use a setup similar to that of the demo in Chapter 3, Behavior Trees. You can start from there or recreate it; we just need a ship, a wall, and a path, with the ground made a little larger and the objects spread out a little more.
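In the demo itself, detecting and collecting the gold will be driven by RAIN sensors and activities, but as a rough plain-Unity illustration of what "picking up" a gold piece means, a trigger-based sketch could look like the following. The Gold tag, the goldCollected counter, and trigger colliders on the gold pieces are all assumptions for this sketch, not part of the RAIN setup.

    using UnityEngine;

    // Hypothetical pickup sketch outside of RAIN: when the ship enters a gold
    // piece's trigger collider, the gold is counted and removed. One of the two
    // objects needs a Rigidbody for the trigger event to fire.
    public class GoldPickup : MonoBehaviour
    {
        public int goldCollected;

        void OnTriggerEnter(Collider other)
        {
            if (other.CompareTag("Gold"))
            {
                goldCollected++;
                Destroy(other.gameObject);  // remove the collected gold piece
            }
        }
    }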

Note

When changing the base geometry of your game levels, you need to regenerate the navigation mesh. This is done by selecting the Navigation Mesh object in your scene and clicking on the Generate NavMesh button.

The following image shows our basic setup, the starting point of our sensor demo:

We also need the ship's behavior tree to do nothing more than patrol the path. Set up this behavior as we did in Chapter 2, Patrolling, or, if you are using the behavior tree demo, delete the timer node functionality. The new behavior tree should look like the following screenshot:

This will be the starting point of the behavior...

Summary


In this chapter, we looked at how to set up sensors for our AI characters so that they can perceive the environment. We saw how to tag objects with aspects so that they are visible to our AI, and how to change a character's activities based on what it senses. We also discussed the different settings for sensors and how to tweak them. Sensors and aspects can make your game's AI more realistic, but they need to be carefully adjusted to give good results.

In the next chapter, we will build on our work with navigation and mind development to make our characters react more to their environments. Specifically, we will see how all of the AI we have used so far can be combined to make our AI characters adapt to different game events and to create more complex AI.
