Grasping Hammer

Packt
17 Feb 2014
14 min read
Terminology

In this article, I will be guiding you through many examples using Hammer. There are a handful of terms that will recur many times that you will need to know. It might be a good idea to bookmark this page so you can flip back and refresh your memory.

Brush

A brush is a piece of world geometry created with the block tool. Brushes make up the basic building blocks of a map and must be convex. A convex object's faces cannot see each other, while a concave object's faces can. Imagine you're lying on the pitched roof of a generic house. You wouldn't be able to see the other side of the roof because the profile of the house is a convex pentagon. If you moved the point of the roof down inside the house, you would be able to see the other side of the roof because the profile would then be concave. This can be seen in the following screenshot:

Got it? Good. Don't think you're limited by this; you can always create concave shapes out of more than one brush. Since we're talking about houses, brushes are used to create the walls, floor, ceiling, and roof. Brushes are usually geometrically very simple; upon creation, common brushes have six faces and eight vertices like a cube. The brushes are outlined in the following screenshot:

Not all brushes have six sides, however; the more advanced brushwork techniques can create brushes with (almost) as many sides as you want. You need to be careful while making complex brushes with the more advanced tools, though, because it can be quite easy to make an invalid solid. An invalid solid will cause errors during compilation and will make your map unplayable. A concave brush is an example of an invalid solid.

World brushes are completely static, or unmoving. If you want your brushes to have a special function, they need to be turned into entities.

Entity

An entity is anything in the game that has a special function.
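Incidentally, the convexity rule for brushes is easy to state in code. The following standalone JavaScript sketch (illustrative only; it has nothing to do with Hammer itself, and the house coordinates are made up) tests whether a 2D profile, such as the house silhouette described above, is convex by checking that every pair of consecutive edges turns the same way:

```javascript
// A 2D polygon (vertices listed in order) is convex when every turn
// between consecutive edges bends the same way, i.e., all the edge
// cross products share one sign.
function isConvex(points) {
  let sign = 0;
  const n = points.length;
  for (let i = 0; i < n; i++) {
    const [ax, ay] = points[i];
    const [bx, by] = points[(i + 1) % n];
    const [cx, cy] = points[(i + 2) % n];
    // Cross product of edge (a->b) with edge (b->c).
    const cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx);
    if (cross !== 0) {
      if (sign === 0) sign = Math.sign(cross);
      else if (Math.sign(cross) !== sign) return false; // turn flipped: concave
    }
  }
  return true;
}

// House profile with a pitched roof: a convex pentagon.
const house = [[0, 0], [4, 0], [4, 3], [2, 5], [0, 3]];
// Same house with the roof peak pushed down inside: concave.
const caved = [[0, 0], [4, 0], [4, 3], [2, 2], [0, 3]];

console.log(isConvex(house)); // true
console.log(isConvex(caved)); // false
```

The engine's restriction is the 3D version of the same idea: every face of a valid brush must lie on one side of the planes of all the other faces.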
Entities come in two flavors: brush-based and point-based. A brush-based entity, as you can probably guess, is created from a brush. Sliding doors and platform lifts are examples of brush-based entities. Point-based entities, on the other hand, are created at a point in space with the entity tool. Lights, models, sounds, and script events are all point-based entities. In the following figure, the models or props are highlighted:

World

The world is everything inside of a map that you create. Brushes, lights, triggers, sounds, models, and so on are all part of the world. They must all be contained within a sealed map made out of world brushes.

Void

The void is everything that isn't the world. The world must be sealed off from the void in order to function correctly when compiled and played in game. Imagine the void as outer space or a vacuum. World brushes seal off the world from the void. If there are any gaps in the world brushes (or if there are any entities floating in the void), this will create a leak, and the engine will not be able to discern what is the world and what is the void. If a leak exists in your map, the engine will not know what is supposed to be seen during the compile! The map will compile, but performance-reducing side effects such as bland lighting and excess rendered polygons will plague your map.

Settings

If at any point in your mapping experience Hammer doesn't seem to be operating the way you want it to, go to Tools | Options and see if there are any preferences you would like to change. You can customize general settings or options related to the 2D and 3D views. If you're coming from another editor, perhaps there's a setting that will make your Hammer experience similar to what you're used to.

Loading Hammer for the first time

You'll be opening Steam\steamapps\common\Half-Life 2\bin often, so you may want to create a desktop shortcut for easier access.
Run Hammer.bat from the bin folder to launch Valve Hammer Editor, Valve's map (level) creator, so you can start creating a map. Hammer will prompt you to choose a game configuration, so choose which game you want to map for. I will be using Half-Life 2: Episode Two in the following examples:

When you first open up Hammer, you will see a blank gray screen surrounded by a handful of menus, as shown in the following screenshot:

Like every other Microsoft Windows application, there is a main menu in the top-left corner of the screen that lets you open, save, or create a document. As far as Hammer is concerned, our documents are maps, and they are saved with the .vmf file extension. So let's open the File menu and load an example map so we can poke around a bit. Load the map titled Chapter2_Example.vmf.

The Hammer overview

In this section, we will learn to recognize the different areas of Hammer and what they do. Being familiar with your environment is important!

Viewports

There are four main windows, or viewports, in Hammer, as shown in the following screenshot:

By default, the top-left window is the 3D view or camera view. The top-right window is the top (x/y) view, the bottom-right window is the side (x/z) view, and the bottom-left window is the front (y/z) view. If you would like to change the layout of the windows, simply click on the top-left corner of any window to change what is displayed. In this article, I will be keeping the default layout, but that does not mean you have to! Set up Hammer any way you'd like. For instance, if you would prefer your 3D view to be larger, grab the cross at the middle of the four screens and drag to resize the areas.

The 3D window has a few special 3D display types, such as Ray-Traced Preview and Lightmap Grid. We will be learning more about these later, but for now, just know that the 3D Ray-Traced Preview simulates the way light is cast.
It does not mimic what you would actually see in-game, but it can be a good first step before compiling to see what your lighting may look like. The 3D Lighting Preview will open in a new window and will update every time a camera is moved or a light entity is changed. You cannot navigate directly in the lighting preview window, so you will need to use the 2D cameras to change the viewing perspective.

The Map toolbar

Located to the left of the screen, the Map toolbar holds all the mapping tools. You will probably use this toolbar the most. The tools will each be covered in depth later on, but here's a basic overview, as shown in the following screenshot, starting from the first tool:

The Selection Tool

The Selection Tool is pretty self-explanatory; use this tool to select objects. The hot key for this is Shift + S. This is the tool that you will probably use the most. This tool selects objects in the 3D and 2D views and also lets you drag selection boxes in the 2D views.

The Magnify Tool

The Magnify Tool zooms in and out in any view. You could also just use the mouse wheel if you have one, or the + and - keys on the numerical keypad for the 2D views. The hot key for the magnify tool is Shift + G.

The Camera Tool

The Camera Tool enables 3D view navigation and lets you place multiple cameras into the 2D views. Its hot key is Shift + C.

The Entity Tool

The Entity Tool places entities into the map. If you click in the 3D view, an entity is placed on the closest brush to the mouse cursor. If used in a 2D view, a crosshair will appear marking the origin of the entity, and pressing the Enter key will add it to the map at that origin. Which entity is placed is specified by the object bar. The hot key is Shift + E.

The Block Tool

The Block Tool creates brushes. Drag a box in any 2D view and hit the Enter key to create a brush within the bounds of the box. The object bar specifies which type of brush will be created.
The default is box, and the hot key for this is Shift + B.

The Texture Tool

The Texture Tool allows complete control over how you paint your brushes. For now, just know where it is and what it does; the hot key is Shift + A.

The Apply Current Texture Tool

Clicking on the Apply Current Texture icon will apply the selected texture to the selected brush or brushes.

The Decal Tool

The Decal Tool applies decals, little detail textures, to brushes; the hot key is Shift + D.

The Overlay Tool

The Overlay Tool is similar to the decal tool. However, overlays are a bit more powerful than decals. Shift + O is the hot key.

The Clipping Tool

The Clipping Tool lets you slice brushes into two or more pieces; the hot key is Shift + X.

The Vertex Manipulation Tool

The Vertex Manipulation Tool, or VM tool, allows you to move the individual vertices and edges of brushes any way you like. This is one of the most powerful tools you have in your toolkit! Using this tool improperly, however, is the easiest way to corrupt your map and ruin your day. Not to worry though; we'll learn about this in great detail later on. The hot key is Shift + V.

The selection mode bar

The selection mode bar (located at the top-right corner by default) lets you choose what you want to select. If Groups is selected, you will select an entire group of objects (if they were previously grouped) when you click on something. The Objects selection mode will only select individual objects within groups. Solids will only select solid objects.

The texture bar

Located just beneath the selection mode toolbar, the texture bar, as shown in the following screenshot, shows a thumbnail preview of your currently selected (active) texture and has two buttons that let you select or replace a texture. What a nifty tool, eh?

The filter control bar

The filter control bar controls your VisGroups (short for visual groups).
VisGroups separate your map objects into different categories, similar to layers in image editing software. To make your mapping life a bit easier, you can toggle the visibility of object groups. If, for example, you're trying to sculpt some terrain but keep getting your view blocked by tree models, you can just uncheck the Props box, as shown in the following screenshot, to hide all the trees! There are multiple automatically generated VisGroups, such as entities, displacements, and nodraws, that you can easily filter through. Don't think you're limited to this though; you can create your own VisGroup from any selection at any time.

The object bar

The object bar lets you control what type of brush you are creating with the brush tool. This is also where you turn brushes into brush-based entities and create and place prefabs, as shown in the following screenshot:

Navigating in 3D

You will be spending most of your time in the main four windows, so now let's get comfortable navigating in them, starting with the 3D viewport.

Looking around

Select the camera tool on the map tools bar. It's the third one down on the list and looks like a red 35 mm camera. Holding the left mouse button while in the 3D view will allow you to look around from a stationary point. Holding the right mouse button will allow you to pan left, right, up, and down in the 3D space. Holding both the left and right mouse buttons down together will allow you to move forward and backward as well as pan left and right. Scrolling the mouse wheel will move the camera forward and backward. Practice flying down the example map hallway. While looking down the hallway, hold the right mouse button to rise up through the grate and see the top of the map.

If you would prefer another method of navigating in 3D, you can use the W, S, A, and D keys to move around while the left mouse button is pressed.
Just like in your normal FPS game, W moves forward in the direction of the camera, S moves backwards, and A and D move left and right, respectively. You can also move the mouse to look around while moving. Being comfortable with the 3D view is necessary in order to become proficient in creating and scripting 3D environments. As with everything, practice makes perfect, so don't be discouraged if you find yourself hitting the wrong buttons.

Having the camera tool selected is not necessary to navigate in 3D. With any tool selected, hold the Space bar while the cursor is in the 3D window to activate 3D navigation mode. While holding the Space bar, navigation functions exactly as if the camera tool were selected. Releasing the Space bar restores normal functionality to the currently selected tool. This can be a huge time saver down the line when you're working on very technical object placements.

Multiple cameras

If you find yourself bouncing around between different areas of the map, or even changing angles near the same object, you can create multiple cameras and juggle between them. With the camera tool selected, hold Shift and drag a line with the left mouse button in any 2D viewport. The start of the line will be the camera origin, and the end of the line will be the camera's target. Whenever you create a new camera, the newly created camera becomes active and displays its view in the 3D viewport. To cycle between cameras, press the Page Up and Page Down keys, or click on a camera in any 2D view. Camera locations are stored in the map file when you save, so you don't have to worry about losing them when you exit. Pressing the Delete key with a camera selected will remove the active camera. If you delete the only camera, your view will snap to the origin (0, 0, 0), but you will still be able to look around and create other cameras. In essence, you will always have at least one camera in your map.
Selecting objects in the 3D viewport

To select an object in the 3D viewport, you must have the selection tool active, as shown in the following screenshot. A quick way to activate the selection tool is to hit the Esc key while any tool is selected, or use the Shift + S hot key. A selected brush or group of brushes will be highlighted in red with yellow edges, as shown in the following screenshot:

To deselect anything within the 3D window, click on any other brush, or on the background (void), or simply hit the Esc key. If you want to select an object behind another object, press and hold the left mouse button on the front object. This will cycle through all the objects that are located behind the cursor. You will be able to see the selected objects changing in the 2D and 3D windows about once per second. Simply release the mouse button to complete your selection. To select multiple brushes or objects in the 3D window, hold Ctrl and left-click on multiple brushes. Clicking on a selected object while Ctrl is held will deselect the object. If you've made a mistake choosing objects, you can undo your selections with Ctrl + Z or by navigating to Edit | Undo.

Downloading and Understanding Construct 2

Packt
26 Dec 2014
19 min read
In this article, Aryadi Subagio, the author of Learning Construct 2, introduces you to Construct 2, makes you familiar with the interface and terms that Construct 2 uses, and gives you a quick overview of the event system.

About Construct 2

Construct 2 is an authoring tool that makes the process of game development really easy. It can be used by a variety of people, from complete beginners in game development to experts who want to prototype quickly or even use Construct 2 to make games faster than ever. It is created by Scirra Ltd, a company based in London, and right now it runs on the Windows desktop platform, although you can export your games to multiple platforms.

Construct 2 is an HTML5-based game editor with enough features for people new to game development to make their first 2D game. Some of them are:

Multiple platforms to target: You can publish your game to desktop computers (PC, Mac, or Linux), to many mobile platforms (Android, iOS, BlackBerry, Windows Phone 8.0, Tizen, and more), and also on websites via HTML5. Also, if you have a developer's license, you can publish it on Nintendo's Wii U.

No programming language required: Construct 2 doesn't use any programming language that is difficult to understand; instead, it relies on its event system, which is really easy for anyone, even without coding experience, to jump into.

Built-in physics: Using Construct 2 means you don't need to worry about complicated physics functions; it's all built into Construct 2 and is easy to use!

Can be extended (extensible): Many plugins have been written by third-party developers to add new functionalities to Construct 2. Note that writing plugins is outside the scope of this book. If you have a JavaScript background and want to try your hand at writing plugins, you can access the JavaScript SDK and documentation at https://www.scirra.com/manual/15/sdk.
Special effects: There are a lot of built-in effects to make your game prettier!

You can use Construct 2 to create virtually all kinds of 2D games: platformers, endless runners, tower defense, casual, top-down shooters, and many more.

Downloading Construct 2

Construct 2 can be downloaded from Scirra's website (https://www.scirra.com/); just click on the download button to get started. At the time of writing this book, the latest stable version is r184, and this tutorial is written using this version. Another great thing about Construct 2 is that it is actively developed, and the developer frequently releases beta features to gather feedback and perform bug testing. There are two different builds of Construct 2: the beta build and the stable build. Which one to download depends on your preference when using Construct 2. If you like to get your hands on the latest features, then you should choose the beta build; just remember that beta builds often have bugs. If you want a more reliable version, then choose the stable build, but you won't be the first one to use the new features.

The installation process is really straightforward. You're free to skip this section if you like, because all you need to do is open the file and follow the instructions there. If you're installing a newer version of Construct 2, it will uninstall the older version automatically for you!

Navigating through Construct 2

Now that we have downloaded and installed Construct 2, we can start getting our hands dirty and make games with it! Not so fast, though. As Construct 2's interface is different compared to other game-making tools, we need to know how to use it. When you open Construct 2, you will see a start page as follows:

This start page is basically here to make it easier for you to return to your most recent projects, so if you just opened Construct 2, this will be empty.
What you need to pay attention to is the new project link on the left-hand side; click on it, and we'll start making games. Alternatively, you can click on File in the upper-left corner and then click on New. You'll see a selection of templates to start with, so understandably, this can be confusing if you don't know which one to pick. For now, just click on New empty project and then click on Open. Starting an empty project is good when you want to prototype your game.

What you see in the screenshot now is an empty layout, which is the place where we'll make our games. This also represents how your game will look. It might be confusing to navigate the first time you see this, but don't worry; I'll explain everything you need to know for now by describing it piece by piece.

The white part in the middle is the layout; because Construct 2 is a what-you-see-is-what-you-get kind of tool, this part represents how your game will look in the end. The layout is like your canvas; it's your workspace; it is where you design your levels, add your enemies, and place your floating coins. It is where you make your game. The take-home point here is that the layout size is not the same as the window size! The layout size can be bigger than the window size, but it can't be smaller, because the window size represents the actual game window. The dotted line is the border of the window size, so if you put a game object outside it, it won't be initially visible in the game unless you scroll towards it. In the preceding screenshot, only the red plane is visible to the player. Players don't see the green spaceship because it's outside the game window.

On the right-hand side, we have the Projects bar and the Objects bar. The Objects bar shows you all the objects that are used in the active layout. Note that the active layout is the one you are focused on right now; this means that, at this very instance, we only have one layout.
The Objects bar is empty because we haven't added any objects. The Projects bar helps in the structuring of your project, and it is organized as follows:

All layouts are stored in the Layouts folder. Event sheets are stored in the Event sheets folder. All objects that are used in the project are stored in the Object types folder. All created families are in the Families folder (a family is a feature of Construct 2). The Sounds folder contains sound effects and audio files, while the Music folder contains long background music. The difference between the two is that the contents of the Music folder are streamed, while the files inside the Sounds folder are downloaded completely before they are played. This means that if you put a long music track in the Sounds folder, it may take a few minutes before it plays, whereas in the Music folder it is streamed right away. However, that doesn't mean the music will be played immediately; it might need to buffer before playing. The Files folder contains other files that don't fit into the folders mentioned earlier, for example, icons. Although you can't rename or delete these folders, you can add subfolders inside them if you want.

On the left-hand side, we have the Properties bar. There are three kinds of properties: layout properties, project properties, and object properties. The information shown in the Properties bar depends on what you clicked last. There is a lot of information here, so I think it's best to explain it as we go ahead and make our game, but for now, you can click on any part of the Properties bar and look at the bottom part of it for help. I'll just explain a bit about some basic things in the project properties:

Name: This is your project's name; it doesn't have to be the same as the saved file's name. So, you can have the saved file as project_wing.capx and the project's name as Shadow wing.
Version: This is your game's version number; if you plan on releasing beta versions, make sure to change this first.

Description: Your game's short description; some application stores require you to fill this out before submitting.

ID: This is your game's unique identification; it comes in the com.companyname.gamename format, so your ID would be something like com.redstudio.shadowwing.

Creating game objects

To put it simply, everything in Construct 2 is a game object. This can range from the things that are visible on screen, for example, sprites, text, particles, and sprite fonts, to the things that are not visible but are still used in the game, such as arrays, dictionaries, the keyboard, the mouse, gamepads, and many more. To create a new game object, you can either double-click anywhere on a layout (not on another object already present), or you can right-click and select Insert new object. Doing either one of these will open an Insert New Object dialog, where you can select the object to be inserted. You can click on the Insert button or double-click on the object icon to insert it.

There are two kinds of objects here: objects that are inserted into the active layout and objects that are inserted into the entire project. Objects that are visible on the screen are inserted into the active layout, and objects that are not visible on the screen are inserted into the entire project. If you look closely, the objects are separated into a few categories, such as Data & Storage, Form controls, General, and so on. You should pay special attention to the objects in the Form controls category. As the technology behind Construct 2 is HTML5, and a Construct 2 game is basically a game made in JavaScript, objects such as the ones you see on web pages can be inserted into a Construct 2 game. These objects are the ones in the Form controls category. A special rule applies to these objects: we can't alter their layer order.
This means that these objects are always on top of any other objects in the game. We also can't export them to platforms other than web platforms. So, if you want to make a cross-platform game, it is advised not to use the Form controls objects.

For now, insert a sprite object by following these steps: After clicking on the Insert button, you will notice that your mouse cursor becomes a crosshair, and there's a floating label with the Layer 0 text. This is just a way for Construct 2 to tell you which layer you're adding your object to. Click your mouse to finally insert the object. Even if you add your object to the wrong layer, you can always move it later.

When adding any object with a visual representation on screen, such as a sprite or a tiled background, Construct 2 automatically opens up its image-editing window. You can draw an image here or simply load it from a file created using other software. Click on X in the top-right corner of the window to close the window when you have finished drawing. Don't worry; this won't delete your object or image.

Adding layers

Layers are a great way to manage your objects' visual hierarchy. You can also add some visual effects to your game using layers. By default, your Layers bar is located in the same place as the Projects bar. You'll see two tabs here: Projects and Layers. Click on the Layers tab to open the Layers bar. From here, you can add new layers and rename, delete, and even reorganize them to your liking. You can do this by clicking on the + icon a few times to add new layers, and after this, you can reorganize them by dragging a layer up or down. Just like with Adobe products, you can also toggle the visibility of all objects in the same layer to make it easier while you're developing games. If you don't want to change or edit all the objects in the same layer, which might be a background layer for instance, you can lock that layer.
Take a look at the following screenshot: There are two ways of referring to a layer: by its name (Layer 0, Layer 1, Layer 2, Layer 3) or by its index (0, 1, 2, 3). As you can see from the previous screenshot, the index of a layer changes as you move it up or down the layer hierarchy (the layer created first isn't always the one with index number 0). The layer with index 0 will always be at the bottom, and the one with the highest index will always be at the top, so remember this because it will come in handy when you make your games. The eye icon determines the visibility of the layer; alternatively, you can also check the checkbox beside each layer's name. Objects on an invisible layer won't be visible in Construct 2 but will still be visible when you play the game. The lock icon, beside the layer's name at the top, toggles whether a layer is locked; objects on locked layers can't be edited, moved, or selected.

What is an event system?

Construct 2 doesn't use a traditional programming language. Instead, it uses a unique style of programming called an event system. However, much like traditional programming languages, it works as follows: It executes commands from top to bottom. It executes commands at every tick. It has variables (global and local variables). It has a feature called functions, which work the same way as functions in a traditional programming language, without you having to go into the code.

An event system is used to control the objects in a layout. It can also be used to control the layout itself. An event system lives inside an event sheet; you can access it by clicking on the event sheet tab at the top of the layout.

Reading an event system

I hope I haven't scared you with the explanations of an event system. Please don't worry, because it's really easy! There are two components to an event system: an event and an action.
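Before looking at a real event sheet, it may help to see the idea mimicked outside Construct 2. The following plain-JavaScript sketch is an analogy only (Construct 2 events are built in the editor, not written as code, and every name here is invented): it scans a list of condition/action pairs from top to bottom on every tick, running the actions of each event whose condition holds.

```javascript
// Analogy for an event sheet: condition/action pairs scanned
// top to bottom on every tick.
const state = { enemyHealth: 1, enemyAlive: true, score: 0, bulletHit: false };

const eventSheet = [
  {
    // Event: a bullet has hit the enemy -> subtract health, consume the hit.
    condition: s => s.bulletHit,
    actions: [s => { s.enemyHealth -= 1; s.bulletHit = false; }]
  },
  {
    // Event: enemy health has reached zero -> destroy enemy, add score.
    condition: s => s.enemyAlive && s.enemyHealth <= 0,
    actions: [s => { s.enemyAlive = false; }, s => { s.score += 100; }]
  }
];

// One tick: check each event in order and run its actions if it holds.
function tick(s) {
  for (const ev of eventSheet) {
    if (ev.condition(s)) ev.actions.forEach(a => a(s));
  }
}

state.bulletHit = true; // something happened in the game...
tick(state);            // ...and the next tick reacts to it
console.log(state.score, state.enemyAlive); // 100 false
```

Because the sheet is scanned in order, the second event can fire on the very tick that health reaches zero. A trigger, described later, would sit outside this every-tick scan and run only at the moment its specific condition occurs.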
Events are things that occur in the game, and actions are the things that happen when there is an event. For a clearer understanding, take a look at the following screenshot, where the events are taken from one of my game projects: The first event, the one with number 12, is a bullet on collision with an enemy; when any bullet collides with any enemy, the actions on its right-hand side will be executed. In this case, it will subtract the enemy's health, destroy the bullet object, and create a new object for a damage effect. The next event, number 13, is what happens when an enemy's health drops below zero; the actions will destroy the enemy and add points to the score variable. Easy, right?

Take a look at how we created the redDamage object; it says on layer "Game". Every time we create a new object through an action, we also need to specify which layer it is created on. As mentioned earlier, we can refer to a layer by its name or by its index number, so either way is fine. However, I usually use the layer's name, just in case I need to rearrange the layer hierarchy later. If we used the layer's index (for example, 1) and later rearranged the layers so that index 1 referred to a different layer, we would end up creating objects in the wrong layer.

Earlier, I said that an event system executes commands from top to bottom. This is true except for one kind of event: a trigger. A trigger is an event that, instead of executing at every tick, waits for something to happen before it is executed. Triggers are events with a green arrow beside them (like the bullet on collision with enemy event shown earlier). As a result, unlike usual events, it doesn't matter where triggers are placed in the event system.

Writing events

Events are written on event sheets. When you create a new layout, you can choose to add a new event sheet to this new layout.
If you choose to add an event sheet, you can give it the same name as the layout or a different one. However, it is advised that you name the event sheet exactly the same as the layout to make it clear which event sheet is associated with which layout. We can only link one event sheet to a layout from its properties, so if we want to use more event sheets with a layout, we must include them in that event sheet.

To write an event, just perform the following steps: Click on the event sheet tab above the layout. You'll see an empty event sheet; to add events, simply click on the Add event link or right-click and select Add event. Note that from now on, I will refer to the action of adding a new step with words such as add event, add new event, or something similar. You'll see a new window with objects to create an event from; every time you add an event (or action), Construct 2 always shows you the objects you can add an event (or action) from. This prevents you from doing something impossible, for example, trying to modify the value of a local variable outside of its scope. I will explain local variables shortly.

Whether or not you have added an object, there will always be a System object to create an event from. This contains a list of events that you create directly from the game instead of from an object. Double-click on it, and you'll see a list of events you can create with the System object. There are a lot of events, and explaining them all would take a long time. For now, if you're curious, there is an explanation of each event in the upper part of the window. Next, scroll down and look for the Every x seconds event. Double-click on it, enter 1.0 second, and click on Done. You should have the following event:

To add an action to an event, just perform the following steps: Click on the Add action link beside an event. Click on an object you want to create an action from; for now, double-click on the System object.
Next, double-click on the Set layer background color action under the Layers & Layout category, change the three numbers inside the brackets to 100, 200, and 50, and click on the Done button. You should have the following event:

This action will change the background color of layer 0 to the one we set in the parameter, which is green. Also, because adding a screenshot every time I give you a code example would be troublesome, I will write my code examples as follows:

    System every 1.0 seconds | System Restart layout

The left-hand side of the code is the event, and the right-hand side is the action. I think this is pretty clear.

Creating a variable

I said that I was going to explain variables, and you might have noticed a global and local variables category when you added an action. A variable is like a glass or cup, but instead of water, it holds values. These values can be one of three types: Text, Number, or Boolean.

- Text: This type holds a value of letters, words, or a sentence. It can include numbers as well, but the numbers will be treated like part of the word.
- Number: This type holds numerical values and can't store any alphabetical value. The numbers are treated like numbers, which means that mathematical operations can be performed on them.
- Boolean: This type only holds one of two values: True or False. It is used to check whether a certain state of an object is true or false.

To create a global variable, just right-click in an event sheet and select Add global variable. After that, you'll see a new window to add a global variable. Here's how to fill in each field:

- Name: This is the name of the variable. No two variables can have the same name, and the name is case sensitive, which means exampleText is different from ExampleText.
- Type: This tells whether the variable is Text, Number, or Boolean. Only instance variables can have a Boolean type.
- Initial value: This is the variable's value when first created.
A Text type's value must be surrounded with quotes (" ").

- Description: This is an optional field; just in case the name isn't descriptive enough, additional explanation can be written here.

After clicking on the OK button, you have created your new variable! This variable has a global scope, which means that it can be accessed from anywhere within the project, while a local variable only has a limited scope and can be accessed from a certain place in the event sheet. I will cover local variables in depth later in the book.

You might have noticed that in the previous screenshot, the Static checkbox cannot be checked. This is because only local variables can be marked as static. One difference between global and local variables is that a local variable's value reverts to its initial value the next time the code is executed, while a global variable's value doesn't change until there's code that changes it. A static local variable retains its value just like a global variable.

All variables' values, both global and local, can be changed from events, except the ones that are constant. Constant variables always retain their initial value; they can never be changed. A constant variable can be used for a value that you don't want to accidentally overwrite later.

Summary

In this article, we learned about the features of Construct 2, its ease of use, and why it's perfect for people with no programming background. We learned about Construct 2's interface and how to create new layers in it. We know what objects are and how to create them. This article also introduced you to the event system and showed you how to write code in it. Now, you are ready to start making games with Construct 2!

Resources for Article:

Further resources on this subject: Building Mobile Apps [article], Introducing variables [article], HTML5 Game Development – A Ball-shooting Machine with Physics Engine [article]
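Construct 2's event sheets are visual, but the runtime behavior described above — non-trigger events checked from top to bottom at every tick, with their actions run while the condition holds — can be sketched in ordinary code. The following JavaScript is purely an illustration of that idea and is not Construct 2 code; every name in it (eventSheet, runTick, and so on) is made up for this sketch:

```javascript
// A made-up, minimal model of an event sheet: each event pairs a
// condition with a list of actions, mirroring the event/action split.
var score = 0;   // plays the role of a global variable
var ticks = 0;

var eventSheet = [
  // "every 2nd tick" -> add 10 to score (like a System condition)
  { condition: function () { return ticks % 2 === 0; },
    actions:   [function () { score += 10; }] }
];

// The runtime evaluates every (non-trigger) event top to bottom each tick.
function runTick() {
  ticks++;
  eventSheet.forEach(function (ev) {
    if (ev.condition()) {
      ev.actions.forEach(function (act) { act(); });
    }
  });
}

for (var i = 0; i < 4; i++) runTick();
console.log(score); // the condition matched on ticks 2 and 4 -> 20
```

A trigger, by contrast, would not live in this per-tick loop at all; it would be registered as a callback that fires only when its event occurs, which is why a trigger's position in the event sheet doesn't matter.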

HTML5 Games Development: Using Local Storage to Store Game Data
Packt
05 Sep 2011
8 min read
(For more resources on this subject, see here.)

The following screenshot shows the final result we will create through this article. So, let's get on with it:

Storing data by using HTML5 local storage

Imagine now we have published our CSS3 memory matching game (Code Download-Ch:3) and players are trying their best to perform well in the game. We want to show the players whether they played better or worse than the last time. We will save the latest score and inform players whether they are better or not this time by comparing the scores. They may feel proud when performing better. This may make them addicted and they may keep trying to get higher scores.

Creating a game over dialog

Before actually saving anything in the local storage, we need a game over screen. Imagine now we are playing the CSS3 memory matching game that we built and we successfully match and remove all the cards. Once we finish, a game over screen pops up and shows the time we took to complete the game.

Time for action – Creating a game over dialog with the elapsed played time

1. Open the CSS3 matching game folder as our working directory.
2. Download a background image from the following URL (we will use it as the background of the pop up): http://gamedesign.cc/html5games/popup_bg.jpg
3. Place the image in the images folder.
4. Open index.html in any text editor.
5. We will need a font for the game over pop up. Add the following font embedding CSS into the head section:

    <link href="http://fonts.googleapis.com/css?family=Orbitron:400,700" rel="stylesheet" type="text/css" >

6. Before the game section, we add a div named timer to show the elapsed playing time.
In addition, we add a new popup section containing the HTML markup of the pop-up dialog:

    <div id="timer">
      Elapsed time: <span id="elapsed-time">00:00</span>
    </div>
    <section id="game">
      <div id="cards">
        <div class="card">
          <div class="face front"></div>
          <div class="face back"></div>
        </div> <!-- .card -->
      </div> <!-- #cards -->
    </section> <!-- #game -->
    <section id="popup" class="hide">
      <div id="popup-bg">
      </div>
      <div id="popup-box">
        <div id="popup-box-content">
          <h1>You Won!</h1>
          <p>Your Score:</p>
          <p><span class='score'>13</span></p>
        </div>
      </div>
    </section>

We will now move on to the style sheet. As it is just for styling and not related to our logic yet, we can simply copy the matchgame.css file from matching_game_with_game_over in the code example bundle (Ch:3).

It is time to edit the game logic part. Open the html5games.matchgame.js file in an editor. In the jQuery ready function, we need a variable to store the elapsed time of the game. Then, we create a timer that counts the game time every second, as follows:

    $(function(){
        ...
        // reset the elapsed time to 0.
        matchingGame.elapsedTime = 0;
        // start the timer
        matchingGame.timer = setInterval(countTimer, 1000);
    });

Next, add a countTimer function, which will be executed every second.
It displays the elapsed seconds in minute and second format:

    function countTimer(){
        matchingGame.elapsedTime++;
        // calculate the minutes and seconds from the elapsed time
        var minute = Math.floor(matchingGame.elapsedTime / 60);
        var second = matchingGame.elapsedTime % 60;
        // add a padding 0 if the minute or second is less than 10
        if (minute < 10) minute = "0" + minute;
        if (second < 10) second = "0" + second;
        // display the elapsed time
        $("#elapsed-time").html(minute+":"+second);
    }

In the removeTookCards function, which we wrote earlier, add the following highlighted code that executes the game over logic after removing all the cards:

    function removeTookCards(){
        $(".card-removed").remove();
        // check if all cards are removed and show game over
        if ($(".card").length == 0) {
            gameover();
        }
    }

At last, we create the following gameover function. It stops the counting timer, displays the elapsed time in the game over pop up, and finally shows the pop up:

    function gameover(){
        // stop the timer
        clearInterval(matchingGame.timer);
        // set the score in the game over popup
        $(".score").html($("#elapsed-time").html());
        // show the game over popup
        $("#popup").removeClass("hide");
    }

Now, save all the files and open the game in a browser. Try finishing the memory matching game and the game over screen will pop up, as shown in the following screenshot:

What just happened?

We have used the CSS3 transition animation to show the game over pop up. We benchmark the score by the time a player took to finish the game.

Saving scores in the browser

Imagine now we are going to display how well the player played the last time. The game over screen includes the elapsed time of the previous game as the last score, alongside the current game score. Players can then see how well they did this time compared to last time.

Time for action – Saving the game score

First, we need to add a few markups in the popup section to display the last score.
Add the following HTML in the popup section in index.html:

    <p>
      <small>Last Score:
        <span class='last-score'>20</span>
      </small>
    </p>

Then, we open html5games.matchgame.js to modify some game logic in the gameover function. Add the following highlighted code in the gameover function. It loads the saved score from local storage and displays it as the last score. Then, it saves the current score into local storage:

    function gameover(){
        // stop the timer
        clearInterval(matchingGame.timer);
        // display the elapsed time in the game over popup
        $(".score").html($("#elapsed-time").html());
        // load the saved last score from local storage
        var lastElapsedTime = localStorage.getItem("last-elapsed-time");
        // convert the elapsed seconds into minute:second format
        // calculate the minutes and seconds from the elapsed time
        var minute = Math.floor(lastElapsedTime / 60);
        var second = lastElapsedTime % 60;
        // add a padding 0 if the minute or second is less than 10
        if (minute < 10) minute = "0" + minute;
        if (second < 10) second = "0" + second;
        // display the last elapsed time in the game over popup
        $(".last-score").html(minute+":"+second);
        // save the score into local storage
        localStorage.setItem("last-elapsed-time", matchingGame.elapsedTime);
        // show the game over popup
        $("#popup").removeClass("hide");
    }

It is now time to save all the files and test the game in the browser. When you finish the game for the first time, the last score should be 00:00. Then, try to finish the game a second time. The game over pop up will show the elapsed time of your previous play. The following screenshot shows the game over screen with the current and last scores:

What just happened?

We just built a basic scoring system that compares a player's score with his/her last score.

Storing and loading data with local storage

We can store data by using the setItem function of the localStorage object.
The following shows the usage of the function:

    localStorage.setItem(key, value);

In our example, we save the game's elapsed time as the score, using the key last-elapsed-time, with the following code:

    localStorage.setItem("last-elapsed-time", matchingGame.elapsedTime);

Complementary to setItem, we get the stored data by using the getItem function in the following way:

    localStorage.getItem(key);

The function returns the stored value of the given key. It returns null when trying to get a non-existent key. This can be used to check whether we have stored any data for a specific key.

The local storage saves string values

Local storage stores data in key-value pairs, and both the key and the value are strings. If we save numbers, Booleans, or any type other than string, the value will be converted into a string while saving. Usually, problems occur when we load a saved value from local storage: the loaded value is a string regardless of the type we saved. We need to explicitly parse the value into the correct type before using it. For example, if we save a floating-point number into local storage, we need to use the parseFloat function when loading it. The following code snippet shows how we can use parseFloat to retrieve a stored floating-point number:

    var score = 13.234;
    localStorage.setItem("game-score", score);
    // result: stored "13.234".
    var gameScore = localStorage.getItem("game-score");
    // result: get "13.234" into gameScore;
    gameScore = parseFloat(gameScore);
    // result: 13.234 floating value

In the preceding code snippet, the manipulation may be incorrect if we forget to convert gameScore from string to float. For instance, if we add 1 to gameScore without the parseFloat call, the result will be "13.2341" instead of 14.234, because the + operator concatenates strings. So, be sure to convert values from local storage to their correct types.

Size limitation of local storage

There is a size limitation on the data stored through localStorage for each domain.
This size limitation may differ slightly between browsers. Normally, the limit is 5 MB. If the limit is exceeded, the browser throws a QUOTA_EXCEEDED_ERR exception when setting a key-value pair into localStorage.

Treating the local storage object as an associative array

Besides using the setItem and getItem functions, we can treat the localStorage object as an associative array and access the stored entries by using square brackets. For instance, the following two versions of the code are equivalent.

Using setItem and getItem:

    localStorage.setItem("last-elapsed-time", elapsedTime);
    var lastElapsedTime = localStorage.getItem("last-elapsed-time");

Accessing localStorage as an array:

    localStorage["last-elapsed-time"] = elapsedTime;
    var lastElapsedTime = localStorage["last-elapsed-time"];
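Because localStorage is only available in a browser, the string-only behavior described above can be demonstrated with a small stand-in object that follows the same contract (setItem stores strings; getItem returns null for missing keys). The mock below is my own illustration — in a real page you would call window.localStorage directly:

```javascript
// Minimal stand-in for window.localStorage, for illustration only.
var store = {};
var localStorageMock = {
  setItem: function (key, value) { store[key] = String(value); },
  getItem: function (key) {
    return store.hasOwnProperty(key) ? store[key] : null;
  }
};

// Whatever type we pass in, the value is saved as a string.
localStorageMock.setItem("last-elapsed-time", 75);
var raw = localStorageMock.getItem("last-elapsed-time");
console.log(typeof raw);            // "string"
console.log(raw + 1);               // "751" -- string concatenation!
console.log(parseInt(raw, 10) + 1); // 76   -- parse first, then do math

// A missing key yields null -- handy for detecting the very first run.
console.log(localStorageMock.getItem("no-such-key")); // null
```

The null return for a missing key is exactly why the game can show 00:00 the first time it is played: nothing has been stored yet, so the load step finds no previous score.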

Prototyping Levels with Prototype
Packt
22 Sep 2015
13 min read
Level design 101 – planning

Now, just because we are going to be diving straight into Unity, I feel it's important to talk a little more about how level design is done in the game industry. While you may think a level designer will just jump into the editor and start building, the truth is you would normally need to do a ton of planning before you even open up your tool.

Generally, a level begins with an idea. This can come from anything; maybe you saw a really cool building, or a photo on the Internet gave you a certain feeling; maybe you want to teach the player a new mechanic. Turning this idea into a level is what a level designer does. Taking all of these ideas, the level designer will create a level design document, which will outline exactly what you're trying to achieve with the entire level from start to end.

This article is by John Doran, the author of Building FPS Games with Unity. A level design document will describe everything inside the level, listing all of the possible encounters, puzzles, and so on, which the player will need to complete, as well as any side quests that the player will be able to achieve. To prepare for this, you should include as many references as you can, with maps, images, and movies similar to what you're trying to achieve. If you're working with a team, making this document available on a website or wiki will be a great asset, so that you know exactly what is being done in the level, what the team can use in their levels, and how difficult their encounters can be. Generally, you'll also want a top-down layout of your level, done either on a computer or on graph paper, with a line showing the player's general route through the level and the encounters and missions planned out.

(For more resources related to this topic, see here.)

Of course, you don't want to be too tied down to your design document.
It will change as you playtest and work on the level, but the documentation process will help solidify your ideas and give you a firm basis to work from.

For those of you interested in seeing some level design documents, feel free to check out the example by Adam Reynolds (Level Designer on Homefront and Call of Duty: World at War) at http://wiki.modsrepository.com/index.php?title=Level_Design:_Level_Design_Document_Example.

If you want to learn more about level design, I'm a big fan of Beginning Game Level Design by John Feil (previously my teacher) and Marc Scattergood, published by Cengage Learning PTR. For more of an introduction to game design from scratch, check out Level Up!: The Guide to Great Video Game Design by Scott Rogers (Wiley) and The Art of Game Design by Jesse Schell. For some online resources, Scott has a neat GDC talk called Everything I Learned About Level Design I Learned from Disneyland, which can be found at http://mrbossdesign.blogspot.com/2009/03/everything-i-learned-about-game-design.html, and World of Level Design (http://worldofleveldesign.com/) is a good source to learn about level design, though it does not talk about Unity specifically.

In addition to a level design document, you can also create a game design document (GDD) that goes beyond the scope of just the level and includes story, characters, objectives, dialogue, concept art, level layouts, and notes about the game's content. However, it is something to do on your own.

Creating architecture overview

As a level designer, one of the most time-consuming parts of your job will be creating environments. There are many different ways out there to create levels. By default, Unity gives us some default meshes such as a Box, Sphere, and Cylinder. While it's technically possible to build a level this way, it could get really tedious very quickly. Next, I'm going to quickly go through the most popular options for building levels for games made in Unity before we jump into building a level of our own.
3D modeling software

A lot of the time, opening up a 3D modeling software package and building the architecture that way is what professional game studios will do. This gives you maximum freedom to create your environment and allows you to do exactly what you'd like to do, but it requires you to be proficient in that tool, be it Maya, 3ds Max, Blender (which can be downloaded for free at blender.org), or some other tool. Then, you just need to export your models and import them into Unity. Unity supports a lot of different formats for 3D models (the most commonly used are .obj and .fbx), but there are a lot of issues to consider. For some best practices when it comes to creating art assets, please visit http://blogs.unity3d.com/2011/09/02/art-assets-best-practice-guide/.

Constructing geometry with brushes

Constructive Solid Geometry (CSG), commonly referred to as brushes, is a tool artists/designers use to quickly block out pieces of a level from scratch. Using brushes inside the in-game level editor has been a common approach for artists/designers to create levels. Unreal Engine 4, Hammer, Radiant, and other professional game engines make use of this building structure, making it quite easy for people to create and iterate through levels quickly through a process called white-boxing, as it's very easy to make changes to the simple shapes. However, just like learning a modeling software tool, there can be a higher barrier to entry in creating complex geometry using a 3D application, but using CSG brushes provides a quick way to create shapes with ease.

Unity does not support building things like this by default, but there are several tools in the Unity Asset Store that allow you to do something like this. For example, sixbyseven studio has an extension called ProBuilder that adds this functionality to Unity, making it very easy to build out levels. The only possible downside is the fact that it does cost money, though it is worth every penny.
However, sixbyseven has kindly released a free version of their tools called Prototype, which we installed earlier. It contains everything we will need for this chapter, but it does not allow us to add custom textures or use some of the more advanced tools. We will be using ProBuilder later on in the book to polish the entire product. You can find out more information about ProBuilder at http://www.protoolsforunity3d.com/probuilder/.

Modular tilesets

Another way to generate architecture is through the use of "tiles" created by an artist. Similar to using Lego pieces, we can use these tiles to snap together walls and other objects to create a building. With creative uses of the tiles, you can create a large amount of content with just a minimal amount of assets. This is probably the easiest way to create a level, at the expense of not being able to create unique-looking buildings, since you only have a few pieces to work with. Titles such as Skyrim use this to a great extent to create their large world environments.

Mix and match

Of course, it's also possible to use a mixture of the preceding tools in order to combine their advantages. For example, you could use brushes to block out an area and then use a group of tiles called a tileset to replace the boxes with highly detailed models, which is what a lot of AAA studios do. In addition, we could initially place brushes to test our gameplay and then add in props to break up the repetitiveness of the levels, which is what we are going to be doing.

Creating geometry

The first thing we are going to do is to learn how we can create geometry, as described in the following steps:

From the top menu, go to File | New Scene. This will give us a fresh start to build our project.

Next, because we already have Prototype installed, let's create a cube by hitting Ctrl + K.

Right now, our Cube (with a name of pb-Cube-1562 or something similar) is placed at a Position of 2, -7, -2.
However, for simplicity's sake, I'm going to place it in the middle of the world. We can do this by setting its position to 0, 0, 0: left-click in the X position field, type 0, and then press Tab. Notice the cursor is now automatically in the Y field. Type in 0, press Tab again, and then, in the Z field, type 0 again. Alternatively, you can right-click on the Transform component and select Reset Position.

Next, we have to center the camera back onto our Cube object. We can do this by going over to the Hierarchy tab and double-clicking on the Cube object (or selecting it and then pressing F).

Now, to actually modify this cube, we are going to open up Prototype. We can do this by first selecting our Cube object, going to the Pb_Object component, and then clicking on the green Open Prototype button. Alternatively, you can also go to Tools | Prototype | Prototype Window. This is going to bring up a window much like the one displayed here. This new Prototype tab can be detached from the main Unity window or, if you drag the tab over into Unity, it can be "hooked" into place elsewhere, as the following screenshot shows, by dragging and dropping it to the right of the Hierarchy tab.

Next, select the Scene tab in the middle of the screen and press the G key to toggle into the Object/Geometry mode. Alternatively, you can also click on the Element button in the Scene tab. Unlike the default Object/Top Level mode, this will allow us to modify the cube directly to build upon it. For more information on the different modes, check out the Modes & Elements section at http://www.protoolsforunity3d.com/docs/probuilder/#buildingAndEditingGeometry.

You'll notice the top of the Prototype tab has three buttons. These stand for the selection type you currently want to use. The default is Vertex or Point mode, which will allow us to select individual parts to modify. The next is Edge and the last is Face.
Face is a good standard to use at this stage, because we only want to extend things out. Select the Face mode by either clicking on the button or pressing the H key twice until it says Editing Faces on the screen. Afterwards, select the box's right side. For a list of keyboard shortcuts included with Prototype/ProBuilder, check out http://www.protoolsforunity3d.com/docs/probuilder/#keyboardShortcuts.

Now, pull on the red handle to extend our brush outward. Easy enough.

Note that, by default, while pulling things out, it is done in 1-unit increments. This is nice when we are polishing our levels and trying to make things exactly where we want them, but right now, we are just prototyping. So, getting it out as quickly as possible is paramount to test whether it's enjoyable. To help with this, we can use a feature of Unity called unit snapping.

Undo the previous change we made by pressing Ctrl + Z. Then, move the camera over to the other side and select our longer face. Drag it 9 units out by holding down the Ctrl key (Command on Mac).

ProCore3D also has another tool called ProGrids, which has some advanced unit snapping functionality, but we are not going to be using it. For more information on it, check out http://www.protoolsforunity3d.com/progrids/. If you'd like to change the distance traveled while using unit snapping, set it using the Edit | Snap Settings… menu.

Next, drag both the sides out until the floor is 9 x 9 units. To make things easier to see, select the Directional Light object in our scene via the Hierarchy tab and reduce the Light component's Intensity to .5.

So, at this point, we have a nice looking floor. However, to create our room, we are first going to need to create our ceiling. Select the floor we have created and press Ctrl + D to duplicate the brush. Once completed, change back into the Object/Top Level editing mode and move the brush so that its Position is at 0, 4, 0.
Alternatively, you can click on the duplicated object and, from the Inspector tab, change the Position's Y value to 4.

Go back into the sub-selection mode by hitting H to return to the Faces mode. Then, hold down Ctrl and select all of the edges of our floor. Click on the Extrude button in the Prototype panel. This creates a new part on each of the four edges, which is by default .5 wide (change this by clicking on the + button on the edge). This adds additional edges and/or faces to our object.

Next, we are going to extrude again; but, rather than doing it from the menu, let's do it manually by selecting the tops of our newly created edges, holding down the Shift button, and dragging them up along the Y (green) axis. We then hold down Ctrl after starting the extrusion to have it snap appropriately to fit around our ceiling. Note that the box may not look right as soon as you let go, as Prototype needs time to compute lighting and materials, which it will mention in the bottom-right part of Unity.

Next, select Main Camera in the Hierarchy, hit W to switch to the Translate mode, and F to center the selection. Then, move our camera into the room. You'll notice it's completely dark due to the ceiling, but we can add a light to the world to fix that! Let's add a point light by going to GameObject | Light | Point Light and position it in the center of the room towards the ceiling (in my case, it was at 4.5, 2.5, 3.5). Then, increase the Range to 25 so that it lights the entire room.

Finally, add a player to see how he interacts. First, delete the Main Camera object from the Hierarchy, as we won't need it. Then, go into the Project tab and open up the Assets\UFPS\Base\Content\Prefabs\Players folder. Drag and drop the AdvancedPlayer prefab into the scene, moving it so that it doesn't collide with the walls, floor, or ceiling, a little higher than the ground, as shown in the following screenshot:

Next, save our level (Chapter 3_1_CreatingGeometry) and hit the Play button.
It may be a good idea for you to save your levels in such a way that you are able to go back and see what was covered in each section of each chapter, thus making things easier to find in the future. Again, remember that we can pull out a weapon by pressing the 1-5 keys.

With this, we now have a simple room that we can interact with!

Summary

In this article, we took on the role of a level designer who has been asked to create a level prototype to prove that our gameplay is solid. We used the free Prototype tool to help in this endeavor. In addition, we also learned some beginning level design.

Resources for Article:

Further resources on this subject: Unity Networking – The Pong Game [article], Unity 3.x Scripting-Character Controller versus Rigidbody [article], Animations in Cocos2d-x [article]

Playing with Physics
Packt
03 Jun 2015
26 min read
In this article by Maxime Barbier, author of the book SFML Blueprints, we will add physics to this game and turn it into a new one. By doing this, we will learn:

- What a physics engine is
- How to install and use the Box2D library
- How to pair the physics engine with SFML for the display
- How to add physics to the game

In this article, we will learn the magic of physics. We will also do some mathematics, but relax, it's for conversion only. Now, let's go!

(For more resources related to this topic, see here.)

A physics engine – késako?

We will speak about physics engines, but the first question is "what is a physics engine?", so let's explain it. A physics engine is a software or library that is able to simulate physics, for example, the Newton-Euler equations that describe the movement of a rigid body. A physics engine is also able to manage collisions, and some engines can deal with soft bodies and even fluids.

There are different kinds of physics engines, mainly categorized into real-time and non-real-time engines. The first kind is mostly used in video games or simulators, and the second is used in high-performance scientific simulation and in the creation of special effects in cinema and animation. As our goal is to use the engine in a video game, let's focus on real-time engines. Here again, there are two important types: one for 2D and the other for 3D. Of course, you can use a 3D engine in a 2D world, but it's preferable to use a 2D engine for optimization purposes. There are plenty of engines, but not all of them are open source.

3D physics engines

For 3D games, I advise you to use the Bullet physics library. It was integrated into the Blender software, and was used in the creation of some commercial games and also in the making of films. This is a really good engine written in C/C++ that can deal with rigid and soft bodies, fluids, collisions, forces… all that you need.
2D physics engines

As previously said, in a 2D environment you can use a 3D physics engine; you just have to ignore the depth (the Z axis). However, the most interesting option is to use an engine optimized for the 2D environment. There are several engines like this, and the most famous are Box2D and Chipmunk. Both of them are really good and neither is better than the other, but I had to make a choice, which was Box2D. I've made this choice not only because of its C++ API, which allows you to use overloading, but also because of the big community involved in the project.

Physics engine versus game engine

Do not mistake a physics engine for a game engine. A physics engine only simulates a physical world, without anything else. There are no graphics and no game logic, only physics simulation. A game engine, on the contrary, most of the time includes a physics engine paired with a rendering technology (such as OpenGL or DirectX), some predefined logic depending on the goal of the engine (RPG, FPS, and so on), and sometimes artificial intelligence. So as you can see, a game engine is more complete than a physics engine. The two best-known engines are Unity and Unreal Engine, which are both very complete. Moreover, they are free for non-commercial usage.

So why don't we directly use a game engine? This is a good question. Sometimes, it's better to use something that is already made instead of reinventing it. However, do we really need all the functionality of a game engine for this project? More importantly, what do we need it for? Let's see:

- A graphic output
- A physics engine that can manage collisions

Nothing else is required. So as you can see, using a game engine for this project would be like killing a fly with a bazooka. I hope that you have understood the aim of a physics engine, the differences between a game engine and a physics engine, and the reasons for the choices made for this project.

Using Box2D

As previously said, Box2D is a physics engine.
It has a lot of features, but the most important ones for the project are the following (taken from the Box2D documentation):

Collision: This functionality is very interesting as it allows our tetriminos to interact with each other
- Continuous collision detection
- Rigid bodies (convex polygons and circles)
- Multiple shapes per body

Physics: This functionality will allow a piece to fall down and more
- Continuous physics with the time of impact solver
- Joint limits, motors, and friction
- Fairly accurate reaction forces/impulses

As you can see, Box2D provides all that we need in order to build our game. There are a lot of other features usable with this engine, but they don't interest us right now, so I will not describe them in detail. However, if you are interested, you can take a look at the official website for more details on the Box2D features (http://box2d.org/about/). It's important to note that Box2D uses meters, kilograms, seconds, and radians for angles as units, while SFML uses pixels, seconds, and degrees, so we will need to make some conversions. I will come back to this later.

Preparing Box2D

Now that Box2D is introduced, let's install it. You will find the list of available versions on the Google Code project page at https://code.google.com/p/box2d/downloads/list. Currently, the latest stable version is 2.3. Once you have downloaded the source code (from the compressed file or using SVN), you will need to build it.

Install

Once you have successfully built your Box2D library, you will need to configure your system or IDE to find the Box2D library and headers. The newly built library can be found in the /path/to/Box2D/build/Box2D/ directory and is named libBox2D.a. On the other hand, the headers are located in the /path/to/Box2D/Box2D/ directory. If everything is okay, you will find a Box2D.h file in the folder.
On Linux, the following command adds Box2D to your system without requiring any configuration:

sudo make install

Pairing Box2D and SFML

Now that Box2D is installed and your system is configured to find it, let's build the physics "hello world": a falling square. It's important to note that Box2D uses meters, kilograms, seconds, and radians for angles as units, while SFML uses pixels, seconds, and degrees, so we will need to make some conversions. Converting radians to degrees or vice versa is not difficult, but pixels to meters… that is another story. In fact, there is no way to convert pixels to meters, unless the number of pixels per meter is fixed. This is the technique that we will use. So let's start by creating some utility functions. We should be able to convert radians to degrees, degrees to radians, meters to pixels, and finally pixels to meters. We will also need to fix the pixels-per-meter value. As we don't need any class for these functions, we will define them in a converter namespace. This results in the following code snippet:

namespace converter
{
    constexpr double PIXELS_PER_METERS = 32.0;
    constexpr double PI = 3.14159265358979323846;

    template<typename T>
    constexpr T pixelsToMeters(const T& x){return x/PIXELS_PER_METERS;};

    template<typename T>
    constexpr T metersToPixels(const T& x){return x*PIXELS_PER_METERS;};

    template<typename T>
    constexpr T degToRad(const T& x){return PI*x/180.0;};

    template<typename T>
    constexpr T radToDeg(const T& x){return 180.0*x/PI;}
}

As you can see, there is no difficulty here. We start by defining some constants and then the conversion functions. I've chosen to make the functions templates to allow the use of any numeric type. In practice, it will mostly be double or int. The conversion functions are also declared as constexpr to allow the compiler to calculate the value at compile time if possible (for example, with a constant as a parameter).
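Because the helpers are constexpr, we can even check them at compile time. A quick sanity check (the namespace body is repeated here so the snippet stands alone):

```cpp
#include <cassert>
#include <cmath>

// Repeating the converter helpers from above so this snippet stands alone.
constexpr double PIXELS_PER_METERS = 32.0;
constexpr double PI = 3.14159265358979323846;

template<typename T> constexpr T pixelsToMeters(const T& x) { return x / PIXELS_PER_METERS; }
template<typename T> constexpr T metersToPixels(const T& x) { return x * PIXELS_PER_METERS; }
template<typename T> constexpr T degToRad(const T& x)      { return PI * x / 180.0; }
template<typename T> constexpr T radToDeg(const T& x)      { return 180.0 * x / PI; }

// With 32 pixels per meter, a 64-pixel sprite is 2 meters wide in the
// physics world, and a 16-pixel half-box becomes 0.5 meters - the kind
// of value Box2D's SetAsBox() will expect for a 32x32-pixel square.
static_assert(pixelsToMeters(64.0) == 2.0, "64 px == 2 m");
static_assert(metersToPixels(0.5) == 16.0, "0.5 m == 16 px");
static_assert(degToRad(180.0) == PI, "180 deg == pi rad");
```

The static_assert lines cost nothing at runtime; they simply prove that the constexpr evaluation really does happen at compile time.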
It's interesting because we will use these functions a lot.

Box2D, how does it work?

Now that we can convert SFML units to Box2D units and vice versa, we can pair Box2D with SFML. But first, how exactly does Box2D work? Box2D works like most physics engines:

You start by creating an empty world with some gravity.
Then, you create some object patterns. Each pattern contains the shape of the object, its position, its type (static or dynamic), and some other characteristics such as its density, friction, and energy restitution.
You ask the world to create a new object defined by the pattern.
In each game loop, you have to update the physical world with a small step, just as we update our world in the games we've already made.

Because the physics engine does not display anything on the screen, we will need to loop over all the objects and display them ourselves. Let's start by creating a simple scene with two kinds of objects: a ground and squares. The ground will be fixed and the squares will not. The squares will be generated by a user event: a mouse click. This project is very simple, but the goal is to show you how to use Box2D and SFML together with a simple case study. A more complex one will come later. We will need three functionalities for this small project:

Create a shape
Display the world
Update/fill the world

Of course, there is also the initialization of the world and window. Let's start with the main function:

As always, we create a window for the display and we limit the FPS to 60. I will come back to this point with the displayWorld function.
We create the physical world from Box2D, with gravity as a parameter.
We create a container that will store all the physical objects, for memory-cleanup purposes.
We create the ground by calling the createBox function (explained just after).
Now it is time for the minimalist game loop:

Close event management
Create a box when the left button of the mouse is pressed

Finally, we clean the memory before exiting the program:

int main(int argc,char* argv[])
{
    sf::RenderWindow window(sf::VideoMode(800, 600, 32), "04_Basic");
    window.setFramerateLimit(60);
    b2Vec2 gravity(0.f, 9.8f);
    b2World world(gravity);
    std::list<b2Body*> bodies;
    bodies.emplace_back(book::createBox(world,400,590,800,20,b2_staticBody));

    while(window.isOpen()) {
        sf::Event event;
        while(window.pollEvent(event)) {
            if (event.type == sf::Event::Closed)
                window.close();
        }
        if (sf::Mouse::isButtonPressed(sf::Mouse::Left)) {
            int x = sf::Mouse::getPosition(window).x;
            int y = sf::Mouse::getPosition(window).y;
            bodies.emplace_back(book::createBox(world,x,y,32,32));
        }
        displayWorld(world,window);
    }

    for(b2Body* body : bodies) {
        delete static_cast<sf::RectangleShape*>(body->GetUserData());
        world.DestroyBody(body);
    }
    return 0;
}

For the moment, except for the Box2D world, nothing should surprise you, so let's continue with the box creation. This function is under the book namespace.
b2Body* createBox(b2World& world,int pos_x,int pos_y, int size_x,int size_y,b2BodyType type = b2_dynamicBody)
{
    b2BodyDef bodyDef;
    bodyDef.position.Set(converter::pixelsToMeters<double>(pos_x),
                         converter::pixelsToMeters<double>(pos_y));
    bodyDef.type = type;

    b2PolygonShape b2shape;
    b2shape.SetAsBox(converter::pixelsToMeters<double>(size_x/2.0),
                     converter::pixelsToMeters<double>(size_y/2.0));

    b2FixtureDef fixtureDef;
    fixtureDef.density = 1.0;
    fixtureDef.friction = 0.4;
    fixtureDef.restitution= 0.5;
    fixtureDef.shape = &b2shape;

    b2Body* res = world.CreateBody(&bodyDef);
    res->CreateFixture(&fixtureDef);

    sf::Shape* shape = new sf::RectangleShape(sf::Vector2f(size_x,size_y));
    shape->setOrigin(size_x/2.0,size_y/2.0);
    shape->setPosition(sf::Vector2f(pos_x,pos_y));

    if(type == b2_dynamicBody)
        shape->setFillColor(sf::Color::Blue);
    else
        shape->setFillColor(sf::Color::White);

    res->SetUserData(shape);

    return res;
}

This function contains a lot of new functionalities. Its goal is to create a rectangle of a specific size at a predefined position. The type of this rectangle is also set by the user (dynamic or static). Here again, let's explain the function step-by-step:

We create b2BodyDef. This object contains the definition of the body to create, so we set the position and its type. This position will be in relation to the gravity center of the object.
Then, we create b2Shape. This is the physical shape of the object, in our case, a box. Note that the SetAsBox() method doesn't take the same parameters as sf::RectangleShape: the parameters are half the size of the box. This is why we need to divide the values by two.
We create b2FixtureDef and initialize it. This object holds all the physical characteristics of the object such as its density, friction, restitution, and shape.
Then, we properly create the object in the physical world.
Now, we create the display of the object. This will be more familiar because we only use SFML. We create a rectangle and set its position, origin, and color.
As we need to associate an SFML object with the physical object in order to display it, we use a functionality of Box2D: the SetUserData() function. This function takes void* as a parameter and internally holds it, so we use it to keep track of our SFML shape.
Finally, the body is returned by the function. This pointer has to be stored to clean the memory later. This is the reason for the bodies container in main().

Now we have the capability to simply create a box and add it to the world. Let's render it to the screen. This is the goal of the displayWorld function:

void displayWorld(b2World& world,sf::RenderWindow& render)
{
    world.Step(1.0/60,int32(8),int32(3));
    render.clear();
    for (b2Body* body=world.GetBodyList(); body!=nullptr; body=body->GetNext())
    {
        sf::Shape* shape = static_cast<sf::Shape*>(body->GetUserData());
        shape->setPosition(converter::metersToPixels(body->GetPosition().x),
        converter::metersToPixels(body->GetPosition().y));
        shape->setRotation(converter::radToDeg<double>(body->GetAngle()));
        render.draw(*shape);
    }
    render.display();
}

This function takes the physical world and the window as parameters. Here again, let's explain it step-by-step:

We update the physical world. If you remember, we have set the frame rate to 60. This is why we use 1.0/60 as a parameter here. The two other parameters tune the solver precision only. In good code, the time step should not be hardcoded like this: we would use a clock to be sure that the value is always right. It has not been done here in order to focus on the important part: the physics.
We clear the screen, as usual.
Here is the new part: we loop over the bodies stored by the world and get back the SFML shapes.
We update each SFML shape with the information taken from the physical body and then draw it on the screen.
Finally, we display the result on the screen.

As you can see, it's not really difficult to pair SFML with Box2D, and it's not a pain to add it. However, we have to take care of the data conversions. This is the real trap. Pay attention to the precision required (int, float, double) and everything should be fine. Now that you have all the keys in hand, let's build a real game with physics.

Adding physics to a game

Now that Box2D has been introduced with a basic project, let's focus on the real one. We will modify our basic Tetris to get a Gravity-Tetris, alias Gravitris. The game controls will be the same as in Tetris, but the game engine will not be: we will replace the board with a real physics engine. With this project, we will reuse a lot of work previously done. As already said, the goal of some of our classes is to be reusable in any game using SFML. Here, this will be done without any difficulty, as you will see. The classes concerned are those that deal with user events (Action, ActionMap, ActionTarget), but also Configuration and ResourceManager. There are still some changes that will occur in the Configuration class, more precisely in the enums and initialization methods of this class, because we don't use the exact same sounds and events that were used in the Asteroid game. So we need to adjust them to our needs.
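One loose end from the displayWorld function above before we move on: the hardcoded 1.0/60 time step. A clock-driven, fixed-timestep accumulator is the usual fix. Here is a minimal sketch, independent of SFML and Box2D (the stepWorld callback stands in for world.Step(), and the class name is my own, not part of either library):

```cpp
#include <cassert>
#include <functional>

// Fixed-timestep accumulator: no matter how long each rendered frame
// took, the physics world is always advanced in constant-sized steps.
class FixedStepper {
public:
    explicit FixedStepper(double step) : _step(step), _accumulator(0.0) {}

    // frameTime is the elapsed time of the last frame in seconds.
    // Returns how many physics steps were taken for this frame.
    int advance(double frameTime, const std::function<void(double)>& stepWorld) {
        _accumulator += frameTime;
        int steps = 0;
        while (_accumulator >= _step) {
            stepWorld(_step);          // e.g. world.Step(_step, 8, 3)
            _accumulator -= _step;
            ++steps;
        }
        return steps;
    }

private:
    double _step;
    double _accumulator;
};
```

In the game loop, frameTime would typically come from sf::Clock::restart().asSeconds(); the physics then advances at a steady rate even when the frame rate dips below 60, and leftover time carries over to the next frame.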
Enough with explanations, let's do it with the following code: class Configuration {    public:        Configuration() = delete;        Configuration(const Configuration&) = delete;        Configuration& operator=(const Configuration&) = delete;               enum Fonts : int {Gui};        static ResourceManager<sf::Font,int> fonts;               enum PlayerInputs : int { TurnLeft,TurnRight, MoveLeft, MoveRight,HardDrop};        static ActionMap<int> playerInputs;               enum Sounds : int {Spawn,Explosion,LevelUp,};        static ResourceManager<sf::SoundBuffer,int> sounds;               enum Musics : int {Theme};        static ResourceManager<sf::Music,int> musics;               static void initialize();           private:        static void initTextures();        static void initFonts();        static void initSounds();        static void initMusics();        static void initPlayerInputs(); }; As you can see, the changes are in the enum, more precisely in Sounds and PlayerInputs. We change the values into more adapted ones to this project. We still have the font and music theme. Now, take a look at the initialization methods that have changed: void Configuration::initSounds() {    sounds.load(Sounds::Spawn,"media/sounds/spawn.flac");    sounds.load(Sounds::Explosion,"media/sounds/explosion.flac");    sounds.load(Sounds::LevelUp,"media/sounds/levelup.flac"); } void Configuration::initPlayerInputs() {    playerInputs.map(PlayerInputs::TurnRight,Action(sf::Keyboard::Up));    playerInputs.map(PlayerInputs::TurnLeft,Action(sf::Keyboard::Down));    playerInputs.map(PlayerInputs::MoveLeft,Action(sf::Keyboard::Left));    playerInputs.map(PlayerInputs::MoveRight,Action(sf::Keyboard::Right));  playerInputs.map(PlayerInputs::HardDrop,Action(sf::Keyboard::Space,    Action::Type::Released)); } No real surprises here. We simply adjust the resources to our needs for the project. As you can see, the changes are really minimalistic and easily done. 
This is the aim of all reusable modules or classes. Here is a piece of advice, however: keep your code as modular as possible. This will allow you to change a part very easily and also to import any generic part of your project into another one easily.

The Piece class

Now that we have the Configuration class done, the next step is the Piece class. This class will be the most modified one. Actually, as there is too much change involved, let's build it from scratch. A piece has to be considered as an ensemble of four squares that are independent from one another. This will allow us to split a piece at runtime. Each of these squares will be a different fixture attached to the same body, the piece. We will also need to add some force to a piece, especially to the current piece, which is controlled by the player. These forces can move the piece horizontally or rotate it. Finally, we will need to draw the piece on the screen. The result is the following code snippet:

constexpr int BOOK_BOX_SIZE = 32;
constexpr int BOOK_BOX_SIZE_2 = BOOK_BOX_SIZE / 2;

class Piece : public sf::Drawable
{
    public:
        Piece(const Piece&) = delete;
        Piece& operator=(const Piece&) = delete;

        enum TetriminoTypes {O=0,I,S,Z,L,J,T,SIZE};
        static const sf::Color TetriminoColors[TetriminoTypes::SIZE];

        Piece(b2World& world,int pos_x,int pos_y,TetriminoTypes type,float rotation);
        ~Piece();

        void update();
        void rotate(float angle);
        void moveX(int direction);
        b2Body* getBody()const;

    private:
        virtual void draw(sf::RenderTarget& target, sf::RenderStates states) const override;
        b2Fixture* createPart(int pos_x,int pos_y,TetriminoTypes type); ///< position is relative to the piece in the matrix coordinates (0 to 3)
        b2Body* _body;
        b2World& _world;
};

Some parts of the class don't change, such as the TetriminoTypes and TetriminoColors enums.
This is normal because we don't change any piece's shape or colors. The rest is still the same. The implementation of the class, on the other hand, is very different from the previous version. Let's see it:

Piece::Piece(b2World& world,int pos_x,int pos_y,TetriminoTypes type,float rotation) : _world(world)
{
    b2BodyDef bodyDef;
    bodyDef.position.Set(converter::pixelsToMeters<double>(pos_x),
    converter::pixelsToMeters<double>(pos_y));
    bodyDef.type = b2_dynamicBody;
    bodyDef.angle = converter::degToRad(rotation);
    _body = world.CreateBody(&bodyDef);

    switch(type)
    {
        case TetriminoTypes::O : {
            createPart(0,0,type); createPart(0,1,type);
            createPart(1,0,type); createPart(1,1,type);
        }break;
        case TetriminoTypes::I : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(2,0,type); createPart(3,0,type);
        }break;
        case TetriminoTypes::S : {
            createPart(0,1,type); createPart(1,1,type);
            createPart(1,0,type); createPart(2,0,type);
        }break;
        case TetriminoTypes::Z : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(1,1,type); createPart(2,1,type);
        }break;
        case TetriminoTypes::L : {
            createPart(0,1,type); createPart(0,0,type);
            createPart(1,0,type); createPart(2,0,type);
        }break;
        case TetriminoTypes::J : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(2,0,type); createPart(2,1,type);
        }break;
        case TetriminoTypes::T : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(1,1,type); createPart(2,0,type);
        }break;
        default:break;
    }
    _body->SetUserData(this);
    update();
}

The constructor is the most important method of this class. It initializes the physical body and adds each square to it by calling createPart(). Then, we set the user data to the piece itself.
This will allow us to navigate from the physics to SFML and vice versa. Finally, we synchronize the physical object with the drawable by calling the update() function:

Piece::~Piece()
{
    for(b2Fixture* fixture=_body->GetFixtureList();fixture!=nullptr;
    fixture=fixture->GetNext()) {
        sf::ConvexShape* shape = static_cast<sf::ConvexShape*>(fixture->GetUserData());
        fixture->SetUserData(nullptr);
        delete shape;
    }
    _world.DestroyBody(_body);
}

The destructor loops over all the fixtures attached to the body, destroys all the SFML shapes, and then removes the body from the world:

b2Fixture* Piece::createPart(int pos_x,int pos_y,TetriminoTypes type)
{
    b2PolygonShape b2shape;
    b2shape.SetAsBox(converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2),
    converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2)
    ,b2Vec2(converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2+(pos_x*BOOK_BOX_SIZE)), converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2+(pos_y*BOOK_BOX_SIZE))),0);

    b2FixtureDef fixtureDef;
    fixtureDef.density = 1.0;
    fixtureDef.friction = 0.5;
    fixtureDef.restitution= 0.4;
    fixtureDef.shape = &b2shape;

    b2Fixture* fixture = _body->CreateFixture(&fixtureDef);

    sf::ConvexShape* shape = new sf::ConvexShape((unsigned int) b2shape.GetVertexCount());
    shape->setFillColor(TetriminoColors[type]);
    shape->setOutlineThickness(1.0f);
    shape->setOutlineColor(sf::Color(128,128,128));
    fixture->SetUserData(shape);

    return fixture;
}

This method adds a square to the body at a specific place. It starts by creating a physical shape as the desired box and then adds it to the body. It also creates the SFML square that will be used for the display, and attaches it as user data to the fixture. We don't set the initial position because the constructor will do it.
void Piece::update()
{
    const b2Transform& xf = _body->GetTransform();

    for(b2Fixture* fixture = _body->GetFixtureList(); fixture != nullptr;
    fixture=fixture->GetNext()) {
        sf::ConvexShape* shape = static_cast<sf::ConvexShape*>(fixture->GetUserData());
        const b2PolygonShape* b2shape = static_cast<b2PolygonShape*>(fixture->GetShape());
        const uint32 count = b2shape->GetVertexCount();
        for(uint32 i=0;i<count;++i) {
            b2Vec2 vertex = b2Mul(xf,b2shape->m_vertices[i]);
            shape->setPoint(i,sf::Vector2f(converter::metersToPixels(vertex.x),
            converter::metersToPixels(vertex.y)));
        }
    }
}

This method synchronizes the position and rotation of all the SFML shapes with the physical position and rotation calculated by Box2D. Because each piece is composed of several parts (fixtures), we need to iterate through them and update them one by one.

void Piece::rotate(float angle)
{
    _body->ApplyTorque((float32)converter::degToRad(angle),true);
}

void Piece::moveX(int direction)
{
    _body->ApplyForceToCenter(b2Vec2(converter::pixelsToMeters(direction),0),true);
}

These two methods apply some force to the object to move or rotate it. We forward the job to the Box2D library.

b2Body* Piece::getBody()const {return _body;}

void Piece::draw(sf::RenderTarget& target, sf::RenderStates states) const
{
    for(const b2Fixture* fixture=_body->GetFixtureList();fixture!=nullptr; fixture=fixture->GetNext()) {
        sf::ConvexShape* shape = static_cast<sf::ConvexShape*>(fixture->GetUserData());
        if(shape)
            target.draw(*shape,states);
    }
}

This function draws the entire piece. However, because the piece is composed of several parts, we need to iterate over them and draw them one by one in order to display the entire piece. This is done by using the user data saved in the fixtures.
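The b2Mul(xf, vertex) call in update() is just a 2D rigid transform: rotate the local vertex by the body's angle, then translate by the body's position. Stripped of the Box2D types, the same math looks like this (a sketch for illustration, not Box2D's actual code):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

// Equivalent of b2Mul(b2Transform, b2Vec2): apply the body's rotation,
// then its translation, to a vertex given in the body's local frame.
Vec2 transformPoint(const Vec2& bodyPos, double bodyAngle, const Vec2& local) {
    const double c = std::cos(bodyAngle);
    const double s = std::sin(bodyAngle);
    return { bodyPos.x + c * local.x - s * local.y,
             bodyPos.y + s * local.x + c * local.y };
}
```

This is exactly what lets update() place each SFML vertex in world space before converting the coordinates from meters back to pixels.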
Summary

Since the usage of a physics engine has its own particularities, such as its units and game loop, we have learned how to deal with them. Finally, we learned how to pair Box2D with SFML, integrated our fresh knowledge into our existing Tetris project, and built a new, fun game.

Packt
20 Sep 2012
16 min read

XNA 4-3D: Getting the battle-tanks into the game world

Adding the tank model

For tank battles, we will be using a 3D model available for download from the App Hub website (http://create.msdn.com) in the Simple Animation code sample available at http://xbox.create.msdn.com/en-US/education/catalog/sample/simple_animation. Our first step will be to add the model to our content project in order to bring it into the game.

Time for action – adding the tank model

We can add the tank model to our project by following these steps:

Download the 7089_06_GRAPHICSPACK.ZIP file from the book's website and extract the contents to a temporary folder.
Select the .fbx file and the two .tga files from the archive and copy them to the Windows clipboard.
Switch to Visual Studio and expand the Tank BattlesContent (Content) project.
Right-click on the Models folder and select Paste to copy the files on the clipboard into the folder.
Right-click on engine_diff_tex.tga inside the Models folder and select Exclude From Project.
Right-click on turret_alt_diff_tex.tga inside the Models folder and select Exclude From Project.

What just happened?

Adding a model to our game is like adding any other type of content, though there are a couple of pitfalls to watch out for. Our model includes two image files (the .tga files, an image format commonly associated with 3D graphics files because the format is not encumbered by patents) that will provide texture maps for the tank's surfaces. Unlike the other textures we have used, we do not want to include them as part of our content project. Why not? The content processor for models will parse the .fbx file (an Autodesk file format used by several 3D modeling packages) at compile time and look for the textures it references in the directory the model is in. It will automatically process these into .xnb files that are placed in the output folder (Models, for our game).
If we were to also include these textures in our content project, the standard texture processor would convert the images just like it does with the textures we normally use. When the model processor comes along and tries to convert a texture, an .xnb file with the same name would already exist in the Models folder, causing compile-time errors. Incidentally, even though the images associated with our model are not included in our content project directly, they still get built by the content pipeline and stored in the output directory as .xnb files. They can be loaded just like any other Texture2D object with the Content.Load() method.

Free 3D modeling software

There are a number of freely available 3D modeling packages downloadable on the Web that you can use to create your own 3D content. Some of these include:

Blender: A free, open source 3D modeling and animation package. Feature rich, and very powerful. Blender can be found at http://www.blender.org.
Wings 3D: A free, open source 3D modeling package. It does not support animation, but includes many useful modeling features. Wings 3D can be found at http://wings3d.com.
Softimage Mod Tool: A modeling and animation package from Autodesk, available freely for non-commercial use. A version with a commercial-friendly license is also available to XNA Creator's Club members at http://usa.autodesk.com/adsk/servlet/pc/item?id=13571257&siteID=123112.

Building tanks

Now that the model is part of our project, we need to create a class that will manage everything about a tank. While we could simply load the model in our TankBattlesGame class, we need more than one tank, and duplicating all of the items necessary to handle both tanks does not make sense.

Time for action – building the Tank class

We can build the Tank class using the following steps:

Add a new class file called Tank.cs to the Tank Battles project.
Add the following using directives to the top of the Tank.cs class file:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

Add the following fields to the Tank class:

#region Fields
private Model model;
private GraphicsDevice device;

private Vector3 position;

private float tankRotation;
private float turretRotation;
private float gunElevation;

private Matrix baseTurretTransform;
private Matrix baseGunTransform;
private Matrix[] boneTransforms;
#endregion

Add the following properties to the Tank class:

#region Properties
public Vector3 Position
{
    get { return position; }
    set { position = value; }
}

public float TankRotation
{
    get { return tankRotation; }
    set { tankRotation = MathHelper.WrapAngle(value); }
}

public float TurretRotation
{
    get { return turretRotation; }
    set { turretRotation = MathHelper.WrapAngle(value); }
}

public float GunElevation
{
    get { return gunElevation; }
    set
    {
        gunElevation = MathHelper.Clamp(
            value,
            MathHelper.ToRadians(-90),
            MathHelper.ToRadians(0));
    }
}
#endregion

Add the Draw() method to the Tank class, as follows:

#region Draw
public void Draw(ArcBallCamera camera)
{
    model.Root.Transform = Matrix.Identity *
        Matrix.CreateScale(0.005f) *
        Matrix.CreateRotationY(TankRotation) *
        Matrix.CreateTranslation(Position);

    model.CopyAbsoluteBoneTransformsTo(boneTransforms);

    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (BasicEffect basicEffect in mesh.Effects)
        {
            basicEffect.World = boneTransforms[mesh.ParentBone.Index];
            basicEffect.View = camera.View;
            basicEffect.Projection = camera.Projection;
            basicEffect.EnableDefaultLighting();
        }
        mesh.Draw();
    }
}
#endregion

In the declarations area of the TankBattlesGame class, add a new List object to hold a list of Tank objects, as follows:

List<Tank> tanks = new List<Tank>();

Create a temporary tank so we can see it in action by adding the following to the end of the LoadContent() method of the TankBattlesGame class:

tanks.Add(
    new Tank(
        GraphicsDevice,
        Content.Load<Model>(@"Models\tank"),
        new Vector3(61, 40, 61)));

In the Draw() method of the TankBattlesGame class, add a loop to draw all of the Tank objects in the tanks list after the terrain has been drawn, as follows:

foreach (Tank tank in tanks)
{
    tank.Draw(camera);
}

Execute the game. Use your mouse to rotate and zoom in on the tank floating above the top of the central mountain in the scene, as shown in the following screenshot:

What just happened?

The Tank class stores the model that will be used to draw the tank in the model field. Just as with our terrain, we need a reference to the game's GraphicsDevice in order to draw our model when necessary. In addition to this information, we have fields (and corresponding properties) to represent the position of the tank, and the rotation angles of three components of the model. The first, TankRotation, determines the angle at which the entire tank is rotated. As the turret of the tank can rotate independently of the direction in which the tank itself is facing, we store the rotation angle of the turret in TurretRotation. Both TankRotation and TurretRotation contain code in their property setters to wrap their angles around if we go past a full circle in either direction. The last angle we want to track is the elevation angle of the gun attached to the turret. This angle can range from 0 degrees (pointing straight out from the side of the turret) to -90 degrees (pointing straight up). This angle is stored in the GunElevation property.
The last field added in step 3 is called boneTransforms, and is an array of matrices. We further define this array while defining the Tank class' constructor by creating an empty array with a number of elements equal to the number of bones in the model. But what exactly are bones? When a 3D artist creates a model, they can define joints that determine how the various pieces of the model are connected. This process is referred to as "rigging" the model, and a model that has been set up this way is sometimes referred to as "rigged for animation". The bones in the model are defined with relationships to each other, so that when a bone higher up in the hierarchy moves, all of the lower bones are moved in relation to it. Think for a moment of one of your fingers. It is composed of three distinct bones separated by joints. If you move the bone nearest to your palm, the other two bones move as well – they have to if your finger bones are going to stay connected! The same is true of the components in our tank. When the tank rotates, all of its pieces rotate as well. Rotating the turret moves the cannon, but has no effect on the body or the wheels. Moving the cannon has no effect on any other parts of the model, but it is hinged at its base, so that rotating the cannon joint makes the cannon appear to elevate up and down around one end instead of spinning around its center. We will come back to these bones in just a moment, but let's first look at the current Draw() method before we expand it to account for bone-based animation. Model.Root refers to the highest level bone in the model's hierarchy. Transforming this bone will transform the entire model, so our basic scaling, rotation, and positioning happen here. Notice that we are drastically scaling down the model of the tank, to a scale of 0.005f. The tank model is quite large in raw units, so we need to scale it to a size that is in line with the scale we used for our terrain. 
Next, we use the boneTransforms array we created earlier by calling the model's CopyAbsoluteBoneTransformsTo() method. This method calculates the resultant transforms for each of the bones in the model, taking into account all of the parent bones above it, and copies these values into the specified array. We then loop through each mesh in the model. A mesh is an independent piece of the model, representing a movable part. Each of these meshes can have multiple effects tied to it, so we loop through those as well, using an instance of BasicEffect created on the spot to render the meshes. In order to render each mesh, we establish the mesh's world location by looking up the mesh's parent bone transformation and storing it in the World matrix. We apply our View and Projection matrices just like before, and enable default lighting on the effect. Finally, we draw the mesh, which sends the triangles making up this portion of the model out to the graphics card.

The tank model

The tank model we are using is from the Simple Animation sample for XNA 4.0, available on Microsoft's MSDN website at http://xbox.create.msdn.com/en-US/education/catalog/sample/simple_animation.

Bringing things down to earth

You might have noticed that our tank is not actually sitting on the ground. In fact, we have set our terrain scaling so that the highest point in the terrain is at 30 units, while the tank is positioned at 40 units above the X-Z plane. Given an (X, Z) coordinate pair, we need to come up with a way to determine what height we should place our tank at, based on the terrain.

Time for action – terrain heights

To place our tank appropriately on the terrain, we first need to calculate the height of the terrain at the tank's position, and then place our tank there.
This is done in the following steps:

1. Add a helper method to the Terrain class to calculate the height based on a given coordinate as follows:

    #region Helper Methods
    public float GetHeight(float x, float z)
    {
        int xmin = (int)Math.Floor(x);
        int xmax = xmin + 1;
        int zmin = (int)Math.Floor(z);
        int zmax = zmin + 1;

        if (
            (xmin < 0) || (zmin < 0) ||
            (xmax > heights.GetUpperBound(0)) ||
            (zmax > heights.GetUpperBound(1)))
        {
            return 0;
        }

        Vector3 p1 = new Vector3(xmin, heights[xmin, zmax], zmax);
        Vector3 p2 = new Vector3(xmax, heights[xmax, zmin], zmin);
        Vector3 p3;

        if ((x - xmin) + (z - zmin) <= 1)
        {
            p3 = new Vector3(xmin, heights[xmin, zmin], zmin);
        }
        else
        {
            p3 = new Vector3(xmax, heights[xmax, zmax], zmax);
        }

        Plane plane = new Plane(p1, p2, p3);
        Ray ray = new Ray(new Vector3(x, 0, z), Vector3.Up);
        float? height = ray.Intersects(plane);

        return height.HasValue ? height.Value : 0f;
    }
    #endregion

2. In the LoadContent() method of the TankBattlesGame class, modify the statement that adds a tank to the battlefield to utilize the GetHeight() method as follows:

    tanks.Add(
        new Tank(
            GraphicsDevice,
            Content.Load<Model>(@"Models\tank"),
            new Vector3(61, terrain.GetHeight(61, 61), 61)));

3. Execute the game and view the tank, now placed on the terrain as shown in the following screenshot:

What just happened?

You might be tempted to simply grab the nearest (X, Z) coordinate from the heights[] array in the Terrain class and use that as the height for the tank. In fact, in many cases that might work. You could also average the four surrounding points and use that height, which would account for very steep slopes. The drawbacks of those approaches will not be entirely evident in Tank Battles, as our tanks are stationary. If the tanks were mobile, you would see the elevation of the tank jump jarringly between heights as the tank moved across the terrain, because each virtual square of terrain that the tank entered would have only one height.
In the GetHeight() method that we just saw, we take a different approach. Recall that the way our terrain is laid out, it grows along the positive X and Z axes. If we imagine looking down from a positive Y height onto our terrain, with an orientation where the X axis grows to the right and the Z axis grows downward, we would have something like the following:

As we discussed when we created our index buffer, our terrain is divided up into squares whose corners are exactly 1 unit apart. Unfortunately, these squares do not help us in determining the exact height of any given point, because each of the four corners of a square can theoretically have any height from 0 to 30 in the case of our terrain scale. Remember, though, that each square is divided into two triangles. The triangle is the basic unit of drawing for our 3D graphics. Each triangle is composed of three points, and we know that three points can be used to define a plane. We can use XNA's Plane class to represent the plane defined by an individual triangle on our terrain mesh. To do so, we just need to know which triangle we want to use to create the plane.

In order to determine this, we first get the (X, Z) coordinates (relative to the view in the preceding figure) of the upper-left corner of the square our point is located in. We determine this point by dropping any fractional part of the x and z coordinates and storing the values in xmin and zmin for later use. We check to make sure that the values we will be looking up in the heights[] array are valid (at least zero, and no greater than the highest index in each dimension of the array). An invalid value could occur if we ask for the height of a position that is outside the bounds of our map. Instead of crashing the game, we simply return a zero. This should not happen in our code, but it is better to account for the possibility than be surprised later. We define three points, represented as the Vector3 values p1, p2, and p3.
We can see right away that no matter which of the two triangles we pick, the (xmax, zmin) and (xmin, zmax) points will be included in our plane, so their values are set right away. To decide which of the final two points to use, we need to determine which side of the central dividing line the point we are looking for lies on. This actually turns out to be fairly simple to do for the squares we are using. If we eliminate the integer portion of our X and Z coordinates (leaving only the fractional part that tells us how far into the square we are), the sum of these two values will be less than or equal to the size of one grid square (1 in our case) if we are in the upper-left triangle. Otherwise, our point is in the lower-right triangle. The code if ((x - xmin) + (z - zmin) <= 1) performs this check, and sets the value of p3 to either (xmin, zmin) or (xmax, zmax) depending on the result.

Once we have our three points, we ask XNA to construct a Plane using them, and then we construct another new type of object we have not yet used – an object of the Ray class. A Ray has a base point, represented by a Vector3, and a direction – also represented by a Vector3. Think of a Ray as an infinitely long arrow that starts somewhere in our world and heads off in a given direction forever. In the case of the Ray we are using, the starting point is at zero on the Y axis, with the X and Z coordinates we passed into the method. We specify Vector3.Up as the direction the Ray is pointing in. Remember from the FPS camera that Vector3.Up has an actual value of (0, 1, 0), or pointing up along the positive Y axis. The Ray class has an Intersects() method that returns the distance from the origin point along the Ray where the Ray intersects a given Plane. We must assign the return value of this method to a float? instead of a normal float.
You may not be familiar with this notation, but the question mark at the end of the type specifies that the value is nullable—that is, it might contain a value, but it could also just contain a null value. In the case of the Ray.Intersects() method, the method will return null if the object of Ray class does not intersect the object of the Plane class at any point. This should never happen with our terrain height code, but we need to account for the possibility. When using a nullable float, we need to check to make sure that the variable actually has a value before trying to use it. In this case, we use the HasValue property of the variable. If it does have one, we return it. Otherwise we return a default value of zero.
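To see the whole algorithm in one place, here is a rough, language-agnostic sketch of the same triangle-picking and ray-plane logic. This is an illustration of the math only, not the XNA code: the plane intersection is done by hand with a cross product instead of the Plane and Ray classes, and the function name and grid representation are invented.

```python
import math

def get_height(heights, x, z):
    """heights is a 2D grid of terrain heights at integer (x, z) coordinates."""
    xmin, zmin = int(math.floor(x)), int(math.floor(z))
    xmax, zmax = xmin + 1, zmin + 1
    # Bounds check, mirroring the early return of 0 in GetHeight()
    if xmin < 0 or zmin < 0 or xmax > len(heights) - 1 or zmax > len(heights[0]) - 1:
        return 0.0
    # The two corners shared by both triangles of the square
    p1 = (xmin, heights[xmin][zmax], zmax)
    p2 = (xmax, heights[xmax][zmin], zmin)
    # Pick the third corner based on which side of the diagonal we are on
    if (x - xmin) + (z - zmin) <= 1:
        p3 = (xmin, heights[xmin][zmin], zmin)
    else:
        p3 = (xmax, heights[xmax][zmax], zmax)
    # Plane normal n = (p2 - p1) x (p3 - p1)
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    if ny == 0:  # degenerate plane; mirrors the null check on Intersects()
        return 0.0
    # Solve n . (x, y, z) = n . p1 for y: where a vertical ray hits the plane
    d = nx * p1[0] + ny * p1[1] + nz * p1[2]
    return (d - nx * x - nz * z) / ny

# On a slope where height equals the x coordinate, the interpolated
# height at x = 0.25 is 0.25
print(get_height([[0, 0], [1, 1]], 0.25, 0.25))  # 0.25
```

The vertical ray makes the final step cheap: since its direction is (0, 1, 0), intersecting it with the plane reduces to solving the plane equation for y.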
Importing 3D Formats into Away3D

Packt
31 May 2011
Away3D 3.6 Cookbook: over 80 practical recipes for creating stunning graphics and effects with the fascinating Away3D engine.

Introduction

The Away3D library contains a large set of 3D geometric primitives such as Cube, Sphere, Plane, and many more. Nevertheless, when we think of developing breathtaking and cutting-edge 3D applications, there is really no way to get it done without using more sophisticated models than just basic primitives. Therefore, we need to use external 3D modeling programs such as Autodesk 3DsMax and Maya, or Blender, to create complex models. The power of Away3D is that it allows us to import a wide range of 3D formats, for static meshes as well as for animations. Besides the models, a no less important part of the 3D world is textures. They are critical in making the model look good and in shaping the ultimate user experience. In this article, you will learn essential techniques to import different 3D formats into Away3D.

Exporting models from 3DsMax/Maya/Blender

You can export the following modeling formats from 3D programs: Obj (Wavefront), DAE (Collada), 3ds, Ase (ASCII), MD2, and Kmz. 3DsMax and Maya can natively export Obj, DAE, 3ds, and ASCII. One of the favorite 3D formats of Away3D developers is DAE (Collada), although it is not the best in terms of performance, because the file is basically an XML document, which becomes slow to parse when it contains a lot of data. The problem is that although 3DsMax and Maya have a built-in Collada exporter, the models it outputs do not work in Away3D. The workaround is to use open source Collada exporters such as ColladaMax/ColladaMaya or OpenCollada. The only difference between these two is the software versions they support.

Getting ready

Go to http://opencollada.org/download.html and download the OpenCollada plugin for the appropriate software (3DsMax or Maya). Go to http://sourceforge.net/projects/colladamaya/files/ and download the ColladaMax or ColladaMaya plugin.
Follow the instructions of the installation dialog of the plugin. The plugin will get installed automatically in the 3dsMax/Maya plugins directory (assuming that the software was installed into the default path).

How to do it...

3DsMax: Here is how to export Collada using the OpenCollada plugin in 3DsMax 2011. In order to export Collada (DAE) from 3DsMax, you should do the following:

1. In 3DsMax, go to File and click on Export or Export Selected (with the target model selected).
2. Select the OpenCOLLADA(*.DAE) format from the formats drop-down list.

ColladaMax export settings (currently 3DsMax 2009 and lower): ColladaMax export settings are almost the same as those of OpenCollada. The only difference you can see in the exporting interface is the lack of the Copy Images and Export user defined properties checkboxes. Select the checkboxes as shown in the previous screenshot:

- Relative paths: Makes sure the texture paths are relative.
- Normals: Exports the object's normals.
- Copy Images: Optional. If we select this option, the exporter outputs a folder with the related textures into the same directory as the exported object.
- Triangulate: In case some parts of the mesh consist of polygons with more than three corners, they get triangulated.
- Animation settings: Away3D supports bone animations from external assets. If you set up a bone animation and wish to export it, check Sample animation and set the begin and end frames for the animation span that you want to export from the 3DsMax animation timeline.

Maya: For showcase purposes, you can download a 30-day trial version of Autodesk Maya 2011. The installation process in Maya is slightly different:

1. Open Maya.
2. Go to the top menu bar and select Window. In the drop-down list, select Settings/Preferences; in the next drop-down list, select Plug-in manager. Now you should see the Plug-in Manager interface.
3. Click on the Browse button and navigate to the directory where you extracted the OpenCollada ZIP archive.
Select the COLLADAMaya.mll file and open it. Now you should see the OpenCollada plugin under the Other Registered Plugins category. Check the AutoLoad checkbox if you wish for the plugin to be loaded automatically the next time you start the program. After your model is ready for export, click on File | Export All or Export Selected. The export settings for ColladaMaya are the same as for 3DsMax.

How it works...

The Collada file is just another XML document, but with a different extension (.dae). When exporting a model in the Collada format, the exporter writes into the XML node tree all of the essential data describing the model structure, as well as animation data when one exports bone-based animated models. When deploying your DAE models to the web hosting directory, don't forget to change the .DAE extension to .XML. Forgetting will result in the file failing to load, because the .DAE extension is ignored by most servers by default.

There's more...

Besides Collada, you can also export OBJ, 3Ds, and ASE. Fortunately, for exporting these formats you don't need any third-party plugins, but only those already included in the software. Free programs such as Blender also serve as an alternative to expensive commercial software such as Maya or 3DsMax. Blender comes with an already built-in Collada exporter. Actually, it has two such exporters. At the time of this writing, these are 1.3 and 1.4. You should use 1.4, as 1.3 seems to output corrupted files that are not parsed in Away3D. The export process looks exactly like the one for 3dsMax:

1. Select your model.
2. Go to File, then Export. In the drop-down list of different formats, select Collada 1.4. The following interface opens:
3. Select Triangles, Only Export Selection (if you wish to export only the selected object), and Sample Animation.
4. Set the exporting destination path and click on Export and close. You are done.
Enemy and Friendly AIs

Packt
17 Dec 2014
In this article by Kyle D'Aoust, author of the book Unity Game Development Scripting, we will see how to create enemy and friendly AIs. (For more resources related to this topic, see here.)

Artificial Intelligence, also known as AI, is something that you'll see in every video game that you play. First-person shooters, real-time strategy, simulation, role-playing games, sports, puzzles, and so on, all have various forms of AI in both large and small systems. In this article, we'll be going over several topics that involve creating AI, including techniques, actions, pathfinding, animations, and the AI manager. Then, finally, we'll put it all together to create an AI package of our own. In this article, you will learn:

- What a finite state machine is
- What a behavior tree is
- How to combine two AI techniques for complex AI
- How to deal with internal and external actions
- How to handle outside actions that affect the AI
- How to play character animations
- What pathfinding is
- How to use a waypoint system
- How to use Unity's NavMesh pathfinding system
- How to combine waypoints and NavMesh for complete pathfinding

AI techniques

There are two very common techniques used to create AI: the finite state machine and the behavior tree. Depending on the game that you are making and the complexity of the AI that you want, the technique you use will vary. In this article, we'll utilize both techniques in our AI script to maximize the potential of our AI.

Finite state machines

Finite state machines are one of the most common AI systems used throughout computer programming. To define the term itself, a finite state machine is a system that controls an object which has a limited number of states to exist in. Some real-world examples of a finite state machine are traffic lights, televisions, and computers. Let's look at an example of a computer finite state machine to get a better understanding. A computer can be in various states.
To keep it simple, we will list three main states. These states are On, Off, and Active. The Off state is when the computer does not have power running through it, the On state is when it does, and the Active state is when someone is using the computer. Let's take a further look into our computer finite state machine and explore the functions of each of its states:

    State     Functions
    On        Can be used by anyone; can turn off the computer
    Off       Can turn on the computer; computer parts can be operated on
    Active    Can access the Internet and various programs; can communicate
              with other devices; can turn off the computer

Each state has its own functions. Some of the functions of each state affect each other, while some do not. The functions that do affect each other are the functions that control what state the finite state machine is in. If you press the power button on your computer, it will turn on and change the state of your computer to On. While the state of your computer is On, you can use the Internet and possibly some other programs, or communicate with other devices such as a router or printer. Doing so will change the state of your computer to Active. When you are using the computer, you can also turn it off through its software or by pressing the power button, therefore changing the state to Off.

In video games, you can use a finite state machine to create AI with a simple logic. You can also combine finite state machines with other types of AI systems to create a unique and perhaps more complex AI system. In this article, we will be using finite state machines as well as what is known as a behavior tree.

The behavior tree form of the AI system

A behavior tree is another kind of AI system that works in a very similar way to finite state machines. Actually, behavior trees are made up of finite state machines that work in a hierarchical system.
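Before we build the tree, the computer example above can be captured in a few lines of code. This is a generic sketch of the finite state machine concept, not code from the article's script; the state and event names are simply the ones from the example:

```python
# A minimal finite state machine: a table mapping (state, event) pairs
# to the next state. Anything not in the table leaves the state unchanged.

TRANSITIONS = {
    ("Off", "press_power"): "On",
    ("On", "press_power"): "Off",
    ("On", "use_program"): "Active",
    ("Active", "press_power"): "Off",
}

def step(state, event):
    return TRANSITIONS.get((state, event), state)

state = "Off"
for event in ["press_power", "use_program", "press_power"]:
    state = step(state, event)
print(state)  # Off
```

The table-driven approach keeps the legal transitions in one place, which is exactly the property that makes finite state machines easy to reason about and to nest inside a behavior tree.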
This system of hierarchy gives us great control over an individual finite state machine, and potentially many finite state machines, within the behavior tree, allowing us to have a complex AI system. Looking back at the table explaining a finite state machine, a behavior tree works the same way. Instead of states, you have behaviors, and in place of the state functions, you have various finite state machines that determine what is done while the AI is in a specific behavior. Let's take a look at the behavior tree that we will be using in this article to create our AI:

On the left-hand side, we have four behaviors: Idle, Guard, Combat, and Flee. To the right are the finite state machines that make up each of the behaviors. Idle and Flee only have one finite state machine each, while Guard and Combat have multiple. Within the Combat behavior, two of its finite state machines even have a couple of their own finite state machines. As you can see, this hierarchy-based system of finite state machines allows us to use a basic form of logic to create an even more complex AI system. At the same time, we are also getting a lot of control by separating our AI into various behaviors. Each behavior will run its own silo of code, oblivious to the other behaviors. The only time we want a behavior to notice another behavior is when an internal or external action occurs that forces the behavior of our AI to change.

Combining the techniques

In this article, we will take both of the AI techniques and combine them to create a great AI package. Our behavior tree will utilize finite state machines to run the individual behaviors, creating a unique and complex AI system. This AI package can be used for an enemy AI as well as a friendly AI.

Let's start scripting!

Now, let's begin scripting our AI! To start off, create a new C# file and name it AI_Agent. Upon opening it, delete any functions within the main class, leaving it empty.
Just after the using statements, add this enum to the script:

    public enum Behaviors {Idle, Guard, Combat, Flee};

This enum will be used throughout our script to determine what behavior our AI is in. Now let's add it to our class. It is time to declare our first variable:

    public Behaviors aiBehaviors = Behaviors.Idle;

This variable, aiBehaviors, will be the deciding factor of what our AI does. Its main purpose is to have its value checked and changed when needed. Let's create our first function, which will utilize one of this variable's purposes:

    void RunBehaviors()
    {
        switch(aiBehaviors)
        {
            case Behaviors.Idle:
                RunIdleNode();
                break;
            case Behaviors.Guard:
                RunGuardNode();
                break;
            case Behaviors.Combat:
                RunCombatNode();
                break;
            case Behaviors.Flee:
                RunFleeNode();
                break;
        }
    }

What this function does is check the value of our aiBehaviors variable in a switch statement. Depending on what the value is, it will then call a function to be used within that behavior. This function is actually going to be a finite state machine, which will decide what that behavior does at that point. Now, let's add another function to our script, which will allow us to change the behavior of our AI:

    void ChangeBehavior(Behaviors newBehavior)
    {
        aiBehaviors = newBehavior;
        RunBehaviors();
    }

As you can see, this function works very similarly to the RunBehaviors function. When this function is called, it will take a new Behaviors value and assign it to aiBehaviors. By doing this, we change the behavior of our AI. Now let's add the final step to running our behaviors; for now, these will be empty functions that act as placeholders for our internal and external actions. Add these functions to the script:

    void RunIdleNode()
    {
    }

    void RunGuardNode()
    {
    }

    void RunCombatNode()
    {
    }

    void RunFleeNode()
    {
    }

Each of these functions will run the finite state machines that make up the behaviors. These functions are essentially a middleman between the behavior and the behavior's action.
Using these functions is the beginning of having more control over our behaviors, something that can't be done with a simple finite state machine.

Internal and external actions

The actions of a finite state machine can be broken up into internal and external actions. Separating the actions into the two categories makes it easier to define what our AI does in any given situation. The separation is helpful in the planning phase of creating AI, but it can also help in the scripting part as well, since you will know what will and will not be called by other classes and GameObjects. Another way this separation is beneficial is that it eases the work of multiple programmers working on the same AI; each programmer could work on separate parts of the AI without as many conflicts.

External actions

External actions are functions and activities that are activated when objects outside of the AI object act upon the AI object. Some examples of external actions include being hit by a player, having a spell cast upon the player, falling from heights, losing the game by an external condition, communicating with external objects, and so on. The external actions that we will be using for our AI are:

- Changing its health
- Raising a stat
- Lowering a stat
- Killing the AI

Internal actions

Internal actions are the functions and activities that the AI runs within itself. Examples of these are patrolling a set path, attacking a player, running away from the player, using items, and so on. These are all actions that the AI will choose to do depending on a number of conditions. The internal actions that we will be using for our AI are:

- Patrolling a path
- Attacking a player
- Fleeing from a player
- Searching for a player

Scripting the actions

It's time to add some internal and external actions to the script.
First, be sure to add the using statement to the top of your script with the other using statements:

    using System.Collections.Generic;

Now, let's add some variables that will allow us to use the actions:

    public bool isSuspicious = false;
    public bool isInRange = false;
    public bool FightsRanged = false;
    public List<KeyValuePair<string, int>> Stats = new List<KeyValuePair<string, int>>();
    public GameObject Projectile;

The first three of our new variables are conditions to be used in finite state machines to determine what function should be called. Next, we have a list of KeyValuePair elements, which will hold the stats of our AI GameObject. The last variable is a GameObject, which is what we will use as a projectile for ranged attacks. Remember the empty middleman functions that we previously created? Now, with these new variables, we will be adding some code to each of them. Add this code so that the empty functions are now filled:

    void RunIdleNode()
    {
        Idle();
    }

    void RunGuardNode()
    {
        Guard();
    }

    void RunCombatNode()
    {
        if(FightsRanged)
            RangedAttack();
        else
            MeleeAttack();
    }

    void RunFleeNode()
    {
        Flee();
    }

Two of the three boolean variables we just created are being used as conditionals to call different functions, effectively creating finite state machines. Next, we will be adding the rest of our actions; these are what is being called by the middleman functions. Some of these functions will be empty placeholders, but will be filled later on in the article:

    void Idle()
    {
    }

    void Guard()
    {
        if(isSuspicious)
        {
            SearchForTarget();
        }
        else
        {
            Patrol();
        }
    }

    void Combat()
    {
        if(isInRange)
        {
            if(FightsRanged)
            {
                RangedAttack();
            }
            else
            {
                MeleeAttack();
            }
        }
        else
        {
            SearchForTarget();
        }
    }

    void Flee()
    {
    }

    void SearchForTarget()
    {
    }

    void Patrol()
    {
    }

    void RangedAttack()
    {
        GameObject newProjectile;
        newProjectile = Instantiate(Projectile, transform.position, Quaternion.identity) as GameObject;
    }

    void MeleeAttack()
    {
    }

In the Guard function, we check to see whether the AI notices the player or not.
If it does, then it will proceed to search for the player; if not, then it will continue to patrol along its path. In the Combat function, we first check to see whether the player is within attacking range; if not, then the AI searches again. If the player is within attacking range, we check to see whether the AI prefers attacking up close or from far away. For ranged attacks, we first create a new, temporary GameObject variable. Then, we set it to an instantiated clone of our Projectile GameObject. From here, the projectile will run its own scripts to determine what it does. This is how we allow our AI to attack the player from a distance. To finish off our actions, we have two more functions to add. The first one will be used to change the health of the AI, which is as follows:

    void ChangeHealth(int Amount)
    {
        if(Amount < 0)
        {
            if(!isSuspicious)
            {
                isSuspicious = true;
                ChangeBehavior(Behaviors.Guard);
            }
        }

        for(int i = 0; i < Stats.Capacity; i++)
        {
            if(Stats[i].Key == "Health")
            {
                int tempValue = Stats[i].Value;
                Stats[i] = new KeyValuePair<string, int>(Stats[i].Key, tempValue += Amount);

                if(Stats[i].Value <= 0)
                {
                    Destroy(gameObject);
                }
                else if(Stats[i].Value < 25)
                {
                    isSuspicious = false;
                    ChangeBehavior(Behaviors.Flee);
                }
                break;
            }
        }
    }

This function takes an int variable, which is the amount by which we want to change the health of the AI. The first thing we do is check to see if the amount is negative; if it is, then we make our AI suspicious and change the behavior accordingly. Next, we search for the health stat in our list and set its value to a new value that is affected by the Amount variable. We then check if the AI's health is at or below zero, in which case we kill it; if not, then we also check if its health is below 25. If the health is that low, we make our AI flee from the player. To finish off our actions, we have one last function to add. It will allow us to affect a specific stat of the AI.
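The ChangeHealth() decision flow just described can be sketched language-agnostically. This is an illustration, not the article's C# code: a list of (name, value) pairs stands in for List<KeyValuePair<string, int>>, and the function returns a behavior name instead of calling ChangeBehavior() or Destroy():

```python
def change_health(stats, amount):
    """stats is a list of (name, value) pairs. Returns the resulting
    behavior change: "Guard", "Flee", "Dead", or None."""
    behavior = "Guard" if amount < 0 else None  # taking damage makes the AI suspicious
    for i, (key, value) in enumerate(stats):
        if key == "Health":
            stats[i] = (key, value + amount)
            if stats[i][1] <= 0:
                return "Dead"   # the C# version calls Destroy(gameObject) here
            if stats[i][1] < 25:
                return "Flee"   # low health overrides the suspicious reaction
            break
    return behavior

stats = [("Health", 100)]
print(change_health(stats, -30))  # Guard
print(change_health(stats, -50))  # Flee
```

Note the ordering of the checks: death is tested before fleeing, so an AI that is reduced to zero health in one hit never gets a chance to run away, which matches the C# logic.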
These modifications will either add to or subtract from a stat. The modifications can be permanent, or restored at any time. For the following instance, the modifications will be permanent:

    void ModifyStat(string Stat, int Amount)
    {
        for(int i = 0; i < Stats.Capacity; i++)
        {
            if(Stats[i].Key == Stat)
            {
                int tempValue = Stats[i].Value;
                Stats[i] = new KeyValuePair<string, int>(Stats[i].Key, tempValue += Amount);
                break;
            }
        }

        if(Amount < 0)
        {
            if(!isSuspicious)
            {
                isSuspicious = true;
                ChangeBehavior(Behaviors.Guard);
            }
        }
    }

This function takes a string and an integer. The string is used to search for the specific stat that we want to affect, and the integer is how much we want to affect that stat by. It works in a very similar way to how the ChangeHealth function works, except that we first search for a specific stat. We also check to see if the amount is negative. This time, if it is negative, we change our AI's behavior to Guard. This seems to be an appropriate response for the AI after being hit by something that negated one of its stats!

Pathfinding

Pathfinding is how the AI will maneuver around the level. For our AI package, we will be using two different kinds of pathfinding: NavMesh and waypoints. The waypoint system is a common approach to creating paths for AI to move around the game level. To allow our AI to move through our level in an intelligent manner, we will use Unity's NavMesh component.

Creating paths using the waypoint system

Using waypoints to create paths is a common practice in game design, and it's simple too. To sum it up, you place objects or set locations around the game world; these are your waypoints. In the code, you place all of the waypoints that you created in a container of some kind, such as a list or an array. Then, starting at the first waypoint, you tell the AI to move to the next waypoint.
Once that waypoint has been reached, you send the AI off to another one, ultimately creating a system that iterates through all of the waypoints, allowing the AI to move around the game world through the set paths. Although using the waypoint system will grant our AI movement in the world, at this point it doesn't know how to avoid obstacles that it may come across. That is when you need to implement some sort of mesh navigation system so that the AI won't get stuck anywhere.

Unity's NavMesh system

The next step in creating AI pathfinding is to create a way for our AI to navigate through the game world intelligently, meaning that it does not get stuck anywhere. In just about every game out there that has a 3D-based AI, the world it inhabits has all sorts of obstacles. These obstacles could be plants, stairs, ramps, boxes, holes, and so on. To get our AI to avoid these obstacles, we will use Unity's NavMesh system, which is built into Unity itself.

Setting up the environment

Before we can start creating our pathfinding system, we need to create a level for our AI to move around in. To do this, I am just using Unity primitive models such as cubes and capsules. For the floor, create a cube, stretch it out, and squish it to make a rectangle. From there, clone it several times so that you have a large floor made up of cubes. Next, delete a bunch of the cubes and move some others around. This will create holes in our floor, which will be used and tested when we implement the NavMesh system. To make the floor easy to see, I've created a green material and assigned it to the floor cubes. After this, create a few more cubes: make one really long, make one shorter than the previous one but thicker, and use the last one as a ramp. I've created an intersection of the really long cube and the thick cube. Then, place the ramp towards the end of the thick cube, giving access to the top of the cubes.
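Before we bake the NavMesh, the iterate-through-waypoints idea described at the start of this section can be sketched in a few lines. This is a generic illustration rather than the Unity code we will write later: positions are 2D tuples, and the function name, speed, and arrive radius are all invented for the example:

```python
def patrol_step(position, waypoints, current, speed, arrive_radius=0.1):
    """Move position toward waypoints[current]; once close enough,
    advance to the next waypoint, wrapping around at the end."""
    tx, ty = waypoints[current]
    px, py = position
    dx, dy = tx - px, ty - py
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= arrive_radius:
        return position, (current + 1) % len(waypoints)
    scale = min(speed, dist) / dist  # never overshoot the waypoint
    return (px + dx * scale, py + dy * scale), current

# Walk back and forth between two waypoints, one step per frame
waypoints = [(0.0, 0.0), (4.0, 0.0)]
pos, cur = (0.0, 0.0), 1
for _ in range(10):
    pos, cur = patrol_step(pos, waypoints, cur, speed=1.0)
print(pos, cur)  # (0.0, 0.0) 1
```

Called once per frame, this produces exactly the behavior described above: the agent walks to each waypoint in turn and cycles forever. What it cannot do is route around an obstacle between two waypoints, which is the gap the NavMesh fills.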
Our final step in creating our test environment is to add a few waypoints for our AI. For testing purposes, create five waypoints in this manner: place one in each corner of the level and one in the middle. For the actual waypoints, use the capsule primitive. For each waypoint, add a rigid body component. Name the waypoints Waypoint1, Waypoint2, Waypoint3, and so on. The name is not all that important for our code; it just makes it easier to distinguish between waypoints in the Inspector. Here's what I made for my level:

Creating the NavMesh

Now, we will create the navigation mesh for our scene. The first thing we will do is select all of the floor cubes. In the menu tab in Unity, click on the Window option, and then click on the Navigation option at the bottom of the drop-down; this will open up the Navigation window. This is what you should be seeing right now:

By default, the OffMeshLink Generation option is not checked; be sure to check it. What this does is create links at the edges of the mesh, allowing it to communicate with any other OffMeshLink nearby, creating a singular mesh. This is a handy tool, since game levels typically use more than one mesh as a floor. The Scene filter will show only specific objects within the Hierarchy view list. Selecting all the objects will show all of your GameObjects. Selecting mesh renderers will only show GameObjects that have the mesh renderer component. Then, finally, if you select terrains, only terrains will be shown in the Hierarchy view list. The Navigation Layer drop-down will allow you to set the area as either walkable, not walkable, or jump accessible. Walkable areas typically refer to floors, ramps, and so on. Non-walkable areas refer to walls, rocks, and other various obstacles. Next, click on the Bake tab next to the Object tab. You should see information that looks like this:

For this article, I am leaving all the values at their defaults.
The Radius property is used to determine how close to the walls the navigation mesh will exist. Height determines how much vertical space is needed for the AI agent to be able to walk on the navigation mesh. Max Slope is the maximum angle that the AI is allowed to travel on for ramps, hills, and so on. The Step Height property is used to determine how high the AI can step up onto surfaces higher than the ground level.

For Generated Off Mesh Links, the properties are very similar to each other. The Drop Height value is the maximum distance the AI can intelligently drop down to another part of the navigation mesh. Jump Distance is the opposite of Drop Height; it determines how far the AI can jump up to another part of the navigation mesh. The Advanced options are to be used when you have a better understanding of the NavMesh component and want a little more out of it. Here, you can further tweak the accuracy of the NavMesh as well as create a Height Mesh to coincide with the navigation mesh.

Now that you know all the basics of the Unity NavMesh, let's go ahead and create our navigation mesh. At the bottom-right corner of the Navigation tab in the Inspector window, you should see two buttons: one that says Clear and the other that says Bake. Click on the Bake button now to create your new navigation mesh. Select the ramp and the thick cube that we created earlier. In the Navigation window, make sure that the OffMeshLink Generation option is not checked, and that Navigation Layer is set to Default. With the ramp and the thick cube still selected, reselect the floor cubes so that you have the floors, ramp, and thick wall selected. Bake the navigation mesh again to create a new one. This is what my scene looks like now with the navigation mesh: You should be able to see the newly generated navigation mesh overlaying the underlying mesh. This is what was created using the default Bake properties.
Changing the Bake properties will give you different results, which will come down to what kind of navigation mesh you want the AI to use. Now that we have a navigation mesh, let's create the code for our AI to utilize. First, we will code the waypoint system, and then we will code what is needed for the NavMesh system.

Adding our variables

To start our navigation system, we will need to add a few variables first. Place these with the rest of our variables:

public Transform[] Waypoints;
public int curWaypoint = 0;
bool ReversePath = false;
NavMeshAgent navAgent;
Vector3 Destination;
float Distance;

The first variable is an array of Transforms; this is what we will use to hold our waypoints. Next, we have an integer that is used to iterate through our Transform array. We have a bool variable, which will decide how we should navigate through the waypoints. The next three variables are more oriented towards the navigation mesh that we created earlier. The NavMeshAgent object is what we will reference when we want to interact with the navigation mesh. Destination will be the location that we want the AI to move towards. Distance is what we will use to check how far away we are from that location.

Scripting the navigation functions

Previously, we created many empty functions; some of these are dependent on pathfinding. Let's start with the Flee function. Add this code to replace the empty function:

void Flee()
{
    for(int fleePoint = 0; fleePoint < Waypoints.Length; fleePoint++)
    {
        Distance = Vector3.Distance(gameObject.transform.position, Waypoints[fleePoint].position);
        if(Distance > 10.00f)
        {
            Destination = Waypoints[fleePoint].position;
            navAgent.SetDestination(Destination);
            break;
        }
        else if(Distance < 2.00f)
        {
            ChangeBehavior(Behaviors.Idle);
        }
    }
}

What this for loop does is pick the first waypoint that is more than 10 units away. If it finds one, we set Destination to that waypoint's position and move the AI accordingly.
If the distance from the current waypoint is less than 2, we change the behavior to Idle. The next function that we will adjust is the SearchForTarget function. Add the following code to it, replacing its previous emptiness: void SearchForTarget(){Destination = GameObject.FindGameObjectWithTag("Player").transform.position;navAgent.SetDestination(Destination);Distance = Vector3.Distance(gameObject.transform.position, Destination);if(Distance < 10)   ChangeBehavior(Behaviors.Combat);} This function will now be able to search for a target, the Player target to be more specific. We set Destination to the player's current position, and then move the AI towards the player. When Distance is less than 10, we set the AI behavior to Combat. Now that our AI can run from the player as well as chase them down, let's utilize the waypoints and create paths for the AI. Add this code to the empty Patrol function: void Patrol(){Distance = Vector3.Distance(gameObject.transform.position, Waypoints[curWaypoint].position);if(Distance > 2.00f){   Destination = Waypoints[curWaypoint].position;   navAgent.SetDestination(Destination);}else{   if(ReversePath)   {     if(curWaypoint <= 0)     {       ReversePath = false;     }     else     {       curWaypoint--;       Destination = Waypoints[curWaypoint].position;     }   }   else   {     if(curWaypoint >= Waypoints.Length - 1)     {       ReversePath = true;     }     else     {       curWaypoint++;       Destination = Waypoints[curWaypoint].position;     }   }}} What Patrol will now do is check the Distance variable. If it is far from the current waypoint, we set that waypoint as the new destination of our AI. If the current waypoint is close to the AI, we check the ReversePath Boolean variable. When ReversePath is true, we tell the AI to go to the previous waypoint, going through the path in the reverse order. When ReversePath is false, the AI will go on to the next waypoint in the list of waypoints. 
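The back-and-forth iteration in Patrol is easy to get wrong, so here is the same ReversePath logic isolated as a tiny, engine-agnostic sketch (Python purely for illustration; the function name next_waypoint is mine, not part of the Unity script):

```python
def next_waypoint(cur, count, reverse):
    """Advance a waypoint index the way Patrol() does: walk forward to the
    last waypoint, flip into reverse, walk back to the first, flip again.
    On the step where the direction flips, the index stays put."""
    if reverse:
        if cur <= 0:
            return cur, False          # reached the start: resume forward travel
        return cur - 1, True
    if cur >= count - 1:
        return cur, True               # reached the end: start walking backward
    return cur + 1, False

# Walk a three-waypoint path for seven steps, starting at waypoint 0.
cur, rev = 0, False
visited = []
for _ in range(7):
    cur, rev = next_waypoint(cur, 3, rev)
    visited.append(cur)
print(visited)  # -> [1, 2, 2, 1, 0, 0, 1]
```

Note the repeated index at each end of the path: those are the flip steps, matching the C# branches that only toggle ReversePath without changing curWaypoint.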
With all of this completed, you now have an AI with pathfinding abilities. The AI can patrol a path set by waypoints and reverse the path when the end has been reached. We have also added abilities for the AI to search for the player as well as flee from the player.

Character animations

Animations are what bring the characters to life visually in the game. From basic animations to super realistic movements, all animations are important, and they are what ultimately convey the scripter's work to the player. Before we add animations to our AI, we first need to get a model mesh for it!

Importing the model mesh

For this article, I am using a model mesh that I got from the Unity Asset Store. To use the same model mesh that I am using, go to the Unity Asset Store and search for Skeletons Pack. It is a package of four skeleton model meshes that are fully textured, rigged, and animated. The asset itself is free and great to use. When you import the package into Unity, it will come with all four models as well as their textures, and an example scene named ShowCase. Open that scene and you should see the four skeletons. If you run the scene, you will see all the skeletons playing their idle animations. Choose the skeleton you want to use for your AI; I chose skeletonDark for mine. Click on the drop-down list of your skeleton in the Hierarchy window, and then on the Bip01 drop-down list. Then, select the magicParticle object. For our AI, we will not need it, so delete it from the Hierarchy window. Create a new prefab in the Project window and name it Skeleton. Now select the skeleton that you want to use from the Hierarchy window and drag it onto the newly created prefab. This will now be the model that you will use for this article. In your AI test scene, drag and drop the Skeleton prefab onto the scene. I have placed mine towards the center of the level, near the waypoint in the middle.
In the Inspector window, you will be able to see the Animation component full of animations for the model. Now, we will need to add a few components to our skeleton. Go to the Components menu on the top of the Unity window, select Navigation, and then select NavMesh Agent. Doing this will allow the skeleton to utilize the NavMesh we created earlier. Next, go into the Components menu again and click on Capsule Collider as well as Rigidbody. Your Inspector window should now look like this after adding the components: Your model now has all the necessary components needed to work with our AI script. Scripting the animations To script our animations, we will take a simple approach to it. There won't be a lot of code to deal with, but we will spread it out in various areas of our script where we need to play the animations. In the Idle function, add this line of code: animation.Play("idle"); This simple line of code will play the idle animation. We use animation to access the model's animation component, and then use the Play function of that component to play the animation. The Play function can take the name of the animation to call the correct animation to be played; for this one, we call the idle animation. In the SearchForTarget function, add this line of code to the script: animation.Play("run"); We access the same function of the animation component and call the run animation to play here. Add the same line of code to the Patrol function as well, since we will want to use that animation for that function too. In the RangedAttack and MeleeAttack functions, add this code: animation.Play("attack"); Here, we call the attack animation. If we had a separate animation for ranged attacks, we would use that instead, but since we don't, we will utilize the same animation for both attack types. With this, we finished coding the animations into our AI. It will now play those animations when they are called during gameplay. 
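Since several behaviors share clips (run for both searching and patrolling, attack for both attack types), one option is to keep the behavior-to-clip mapping in a single table rather than scattering the clip names across the script. A hypothetical sketch of that idea (Python for illustration only; in the actual script you would still call Unity's animation.Play with the looked-up name):

```python
# Hypothetical behavior-to-clip table mirroring the Play() calls above.
CLIPS = {
    "Idle": "idle",
    "SearchForTarget": "run",
    "Patrol": "run",
    "RangedAttack": "attack",
    "MeleeAttack": "attack",
}

def clip_for(behavior):
    """Return the animation clip name for a behavior, defaulting to idle."""
    return CLIPS.get(behavior, "idle")

print(clip_for("Patrol"))  # -> run
```

This keeps every clip name in one place, so renaming a clip in the model only requires one edit.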
Putting it all together

To wrap up our AI package, we will now finish up the script and add it to the skeleton.

Final coding touches

At the beginning of our AI script, we created some variables that we have yet to properly assign. We will do that in the Start function. We will also add the Update function to run our AI code. Add these functions to the bottom of the class:

void Start()
{
    navAgent = GetComponent<NavMeshAgent>();

    Stats.Add(new KeyValuePair<string, int>("Health", 100));
    Stats.Add(new KeyValuePair<string, int>("Speed", 10));
    Stats.Add(new KeyValuePair<string, int>("Damage", 25));
    Stats.Add(new KeyValuePair<string, int>("Agility", 25));
    Stats.Add(new KeyValuePair<string, int>("Accuracy", 60));
}

void Update()
{
    RunBehaviors();
}

In the Start function, we first assign the navAgent variable by getting the NavMeshAgent component from the GameObject. Next, we add new KeyValuePair entries to Stats, filling it with the stats we defined. The Update function calls the RunBehaviors function. This is what will keep the AI running; it will run the correct behavior as long as the AI is active.

Filling out the inspector

To complete the AI package, we will need to add the script to the skeleton, so drag the script onto the skeleton in the Hierarchy window. In the Size property of the waypoints array, type the number 5 and open up the drop-down list. Starting with Element 0, drag each of the waypoints into the empty slots. For the projectile, create a sphere GameObject and make it a prefab. Now, drag it onto the empty slot next to Projectile. Finally, set the AI Behaviors to Guard. This will make it so that when you start the scene, your AI will be patrolling. The Inspector window of the skeleton should look something like this: Your AI is now ready for gameplay! To make sure everything works, we will need to do some playtesting.

Playtesting

A great way to playtest the AI is to play the scene in every behavior.
Start off with Guard, then run it in Idle, Combat, and Flee. For different outputs, try adjusting some of the variables in the NavMesh Agent component, such as Speed, Angular Speed, and Stopping Distance. Try mixing your waypoints around so the path is different. Summary In this article, you learned how to create an AI package. We explored a couple of techniques to handle AI, such as finite state machines and behavior trees. Then, we dived into AI actions, both internal and external. From there, we figured out how to implement pathfinding with both a waypoint system and Unity's NavMesh system. Finally, we topped the AI package off with animations and put everything together, creating our finalized AI. Resources for Article: Further resources on this subject: Getting Started – An Introduction to GML [article] Animations in Cocos2d-x [article] Components in Unity [article]
Mission Running in EVE Online
Packt | 17 Oct 2012 | 14 min read
Mission types

Missions in EVE fall into five general categories: Courier, Kill, Mining, Trade, and Special missions. Courier, Kill, Mining, and Trade missions are further categorized into five different levels: level I through level V. Let's take a closer look at each category to see what each type of mission entails.

Courier missions

Courier missions are simply delivery missions where you move items from one station to another. The type of items can range from common items found on the market to mission-specific cargo containers. The size and quantity of items to move can vary greatly; they can either be moved using whatever ship you are currently flying or may require the use of an industrial ship. While the ISK reward for courier missions is a bit on the low side, the fact that they have no negative standing impact on opposing factions is a huge positive. This, added to the fact that courier missions are quite easy and can often be completed very quickly, means they are generally worth running. Every so often you will receive a courier mission that takes you into Low-Sec space, and with the risks involved in Low-Sec, it is best to decline these missions. You are able to decline one mission every four hours per agent without taking a negative standing hit with that agent. Be very careful when declining missions, since if your standing falls too low with an agent you will no longer be able to receive mission offers from that agent.

Kill missions

By far the most common, the most profitable, and, let's be honest, the most fun missions to run are kill missions. The only thing better than flying beautiful spaceships is seeing beautiful spaceships blow up. There are currently two types of kill missions. In one, you warp to a set of coordinates in space where you engage NPC targets, and in the other you warp to an acceleration gate which takes you into a series of pockets all connected by more acceleration gates.
Think of this second type of kill mission like a dungeon with multiple rooms, in which you fight NPC targets in each of the rooms. Kill missions are a great way to make ISK, especially when you are able to access higher-level agents. However, as you can guess, kill missions also get more and more difficult the higher the level of the agent. Another thing that you need to be very careful about when running kill missions is that you run the risk of having negative standing impact on factions opposed to the faction you are running missions for. For example, if you were running missions for the Caldari state and you completed a mission that had you destroy ships belonging to or friendly with the Gallente Federation, then you would lose standing with the Gallente. A great way to negate the standing loss is to run missions for as many agents as you can find and to decline any mission that will have you attack the ships of another empire. But remember you can only decline one mission per agent every four hours, hence having multiple agents. Agents and how they work Now would be a great time to take a closer look at the agent system of EVE. Each agent works in a specific division of a NPC corporation. The division that the agent works in determines the type of missions you will receive (more on this later), and the NPC corporation that the agent works for determines which faction your standing rewards will affect. An agent's information sheet is shown in the following image: As you can see, the agent Aariwa Tenanochi works for the Home Guard in their Security division and is a level 1 agent. Aariwa can be found in the Nonni system at the Home Guard Assembly Plant located at planet III Moon 1. You can also see that the only requirement to receive missions from Aariwa is that you meet his standing requirements. Doing the tutorial missions will allow you to access all level 1 agents for your chosen faction. All agents are rated level 1 to level 5. 
The higher the level of the agent, the harder their missions are, and the harder the mission, the bigger and better the reward. The difficulty of each agent level can best be described as follows:

Level 1: Very Easy. Designed for new players to learn about mission running and PvE in general. Frigate or destroyer for kill missions, short range and low cost items for courier and trade missions, and small amounts of low grade ore or minerals for mining missions.

Level 2: Easy. Still designed for beginner players, but will require you to start thinking tactically. Destroyer or cruiser for kill missions, slightly longer range and higher cost items for courier and trade missions, and higher amounts of low grade ore or minerals for mining missions.

Level 3: Medium. You will need to have a firm understanding of how your ship works and have solid tactics for it. Battlecruiser or Heavy Assault Cruiser for kill missions, much longer range and higher cost items for courier and trade missions, and large amounts of low grade ore or minerals for mining missions.

Level 4: Hard. You will need to understand your ship inside out, have solid tactics, and a solid fit for your ship. Battleship for kill missions, very long range and very costly items for courier and trade missions, and very large amounts of low grade ore or minerals for mining missions.

Level 5: Very Hard. Same as level 4 agents, but only found in Low-Sec systems and comes with all the risks of Low-Sec. It is a good idea to only do these missions in a fleet with a group of people.

The standing requirements to access the different levels of agents are as follows:

Level 1: no standing required
Level 2: 1.00
Level 3: 3.00
Level 4: 5.00
Level 5: 7.00

While you can use larger ships to run lower level kill missions, it is generally not a good idea.
First, there can be ship limitations for a mission that do not allow you to use your larger ship, and second, while your larger ship will be able to handle the incoming damage much more easily, the larger weapons that your ship uses will have a harder time hitting the smaller targets. And since ammo costs ISK, it cuts into your profit per mission. I generally like to use the cheapest ship I can get away with when running missions. This ensures that if I ever get scanned down and killed by player pirates, my loss is minimal.

Understanding your foe

The biggest key to success when running kill missions is understanding who your targets will be and how they like to fight. Whenever you are offered a kill mission by an agent, you can find out which faction your target belongs to from the mission description, either in the text of the description or from an emblem representing your target's faction. It is important to know which faction your target belongs to because then you will know what kind of damage to expect from your target. In the mission offer shown in the following image you can see that your targets for this mission are Rogue Drones: There are four types of damage, EM, Thermal, Kinetic, and Explosive, and each faction has a preference toward specific damage types. Once you know what kind of damage to expect, you can tailor the defense of your ship for maximum protection against that damage. Knowing which faction your target belongs to will also tell you which type of damage you should be dealing to maximize your damage output. Each faction has weaknesses toward specific damage types. Along with what types of damage to defend against and what types of damage to utilize, knowing which faction your target belongs to will also tell you what kind of special tactic ships you may encounter in your mission.
For example, the Caldari like to use ECM to jam your targeting systems, and the Amarr use energy neutralizers to drain your ship's capacitor, leaving you unable to use weapons or even to warp away. The following table shows all the factions and the damage types to defend against and to use:

Guristas: defend Kinetic, Thermal; deal Kinetic, Thermal
Serpentis: defend Thermal, Kinetic; deal Thermal
Blood Raider: defend EM, Thermal; deal EM, Thermal
Sansha's Nation: defend EM, Thermal; deal EM, Thermal
Angel Cartel: defend Explosive, Kinetic, Thermal, EM; deal Explosive
Mordu's Legion: defend Kinetic, Thermal, Explosive, EM; deal Thermal, Kinetic
Mercenary: defend Kinetic, Thermal; deal Thermal, Kinetic
Republic Fleet: defend Explosive, Thermal, Kinetic, EM; deal Explosive, Kinetic
Caldari Navy: defend Kinetic, Thermal; deal Kinetic, Thermal
Amarr Navy: defend EM, Thermal, Kinetic; deal EM, Thermal
Federation Navy: defend Thermal, Kinetic; deal Kinetic, Thermal
Rogue Drones: defend Explosive, Kinetic, EM, Thermal; deal EM
Thukker Tribe: defend Explosive, Thermal; deal EM
CONCORD: defend EM, Thermal, Kinetic, Explosive; deal Explosive, Kinetic
Equilibrium of Mankind: defend Kinetic, Thermal; deal Kinetic, EM

You will most likely have noticed that several factions use the same damage types but listed in different orders. The damage types are listed this way because the damage output or weakness is not divided evenly. The Serpentis pirate faction, for example, prefers a combination of Kinetic and Thermal damage, with a higher percentage of Kinetic damage than Thermal damage.

What ship to use to maximize earnings

The first question that most new mission runners ask is "What race of ships should I use?" While each race has its own advantages and disadvantages and all are very good choices, in my opinion Gallente drone ships are the best ships to use for mission running, from a pure ISK-making standpoint. Gallente drone ships are the best when it comes to mission running because of the lack of ammo costs and the versatility they offer.
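As an aside before we pick a hull: the damage table above is exactly the kind of data worth encoding once instead of memorizing. A hypothetical helper a mission runner might script (Python for illustration; only a few rows are copied from the table, and the function name is made up):

```python
# A few rows copied from the faction damage table above:
# faction -> (damage types to defend against, damage types to deal).
DAMAGE_TABLE = {
    "Guristas": (["Kinetic", "Thermal"], ["Kinetic", "Thermal"]),
    "Serpentis": (["Thermal", "Kinetic"], ["Thermal"]),
    "Blood Raider": (["EM", "Thermal"], ["EM", "Thermal"]),
    "Angel Cartel": (["Explosive", "Kinetic", "Thermal", "EM"], ["Explosive"]),
}

def fit_advice(faction):
    """Summarize what to tank and what to shoot for a mission faction."""
    defend, deal = DAMAGE_TABLE[faction]
    return f"Tank {'/'.join(defend)}; shoot {'/'.join(deal)}"

print(fit_advice("Serpentis"))  # -> Tank Thermal/Kinetic; shoot Thermal
```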
With drones being your primary method of dealing damage you do not have to worry about the cost associated with needing a large supply of ammo like the Caldari or Minmatar. What about the Amarr you say? They don't need ammo. While it is true that the Amarr also do not require ammo, the heavy capacitor drain of lasers limits the amount of defense your ship can have. Drone ships do not need to fit weapons in their high slots and therefore you can fit your ship with the maximum amount of defense possible. Drone ships can also utilize the speed and range of drones to their advantage. By using modules such as the Drone Link Augmentor you can increase the range of your drones so that you can comfortably sit outside the attack range of your targets and let your drones do all the work. Being outside of the attack range also means that you are outside the range of electronic warfare, so it's a win-win situation. The best feature of drone ships is the ability to carry different types of drones. Light Drones for frigate-sized targets, Medium Drones for cruiser-sized targets, and Heavy Drones for battleship-sized targets. You can even carry Electronic Warfare Drones that can jam your targets, drain their capacitors, or even provide more defense for your ship by way of shield or armor repairs. How should I fit my ship? The second most common question is "How should I have my ship fitted for mission running?" Ship fitting is very subjective and almost an art form in itself. I can easily go on for pages about the different concepts behind ship fittings and why one way is better than another but there will always be people that will disagree because what works for one person does not necessarily work for you. The best thing to do is to visit websites such as eve.battleclinic.com/browse_loadouts.php to get ideas on how ship fittings should work and to use 3rd party software such as Python Fitting Assistant to come up with fittings that suit your style of play the best. 
When browsing fittings on BattleClinic, it is a good idea to make sure the filters at the bottom right are set for the most recent expansions. You wouldn't want to use a completely out-of-date fitting that will only get you killed.

Basic tactics

When you start a kill mission, you will likely be faced with one of two scenarios. In the first, as soon as you come out of warp, targets go on the offensive and immediately start attacking you. In the second, more favorable scenario, you warp onto the field to find multiple groups of targets in a semi-passive state. No matter the scenario you come across, the following few simple tactics will ensure everything goes that much smoother:

Zoom out. I know it's hard because your ship is so pretty, but zoom the camera out. You need to be able to maintain a high level of situational awareness at all times, and you can't do that if you're zoomed in on your ship all the time.

Finish each wave or group of targets before engaging other targets. This will ensure that you will never be too outnumbered and will be able to handle the incoming damage more easily.

Kill the tacklers first. Tacklers are ships that use electronic warfare to slow your ship or to prevent you from warping away. Since flying out of range or warping away is the best way of escape when things go bad, killing off the tacklers first will give you the best chance to escape intact.

Kill the largest targets first. Taking out the largest targets first gives you two critical advantages: you are taking a lot of the damage against you off the field, and larger ships are much easier to hit.

Save structures for last. If part of your mission is to shoot or destroy a structure, save that for last. Most of the time, firing on a structure will cause all the hostiles to attack you at once and may even spawn stationary defenses. This tactic does not apply to defensive towers, such as missile and tackling towers.
You should always kill these towers as soon as possible.

Mining missions

Mining missions come in two flavors. The first will have you travel to a set of coordinates given to you by your agent, mine a specific amount of a specific type of ore, and then return to your agent with the ore you have mined. The second will simply have you supply your agent with an amount of ore or mineral. For the second type of mining mission, you can either mine and refine the ore yourself or purchase it from the market. The ISK reward for mining missions is really bad, and in general you would make more ISK if you simply spent the time mining, but once again, having no negative standing impact on opposing factions is a huge plus. So if you are a career miner, it can be worth it for you to run these missions for the standings gain. After all, you will have to increase your standing in order to maximize your refine yield. It is best to only accept the missions for which you already have the requested ore or mineral in stock and to decline the rest.

Trade missions

Trade missions are simple missions that require you to provide items to your agent at a specific station. These items can either be made by you or purchased off the market. Like courier and mining missions, the only upside to trade missions is that they can be completed without any negative standing impact on opposing factions. But with the high amount of ISK needed to complete these missions and the time involved, it is best to avoid them. If you do choose to run these missions, check whether the required item is on sale at the destination station. If it is, you can often purchase it and complete the mission without ever leaving your current station.
Materials with Ogre 3D
Packt | 25 Nov 2010 | 7 min read
OGRE 3D 1.7 Beginner's Guide: Create real-time 3D applications using OGRE 3D from scratch. Easy-to-follow introduction to OGRE 3D. Create exciting 3D applications using OGRE 3D. Create your own scenes and monsters, play with the lights and shadows, and learn to use plugins. Get challenged to be creative and make fun and addictive games on your own. A hands-on do-it-yourself approach with over 100 examples.

Creating a white quad

We will use this to create a sample quad that we can experiment with.

Time for action – creating the quad

We will start with an empty application and insert the code for our quad into the createScene() function:

Begin with creating the manual object:

Ogre::ManualObject* manual = mSceneMgr->createManualObject("Quad");
manual->begin("BaseWhiteNoLighting", RenderOperation::OT_TRIANGLE_LIST);

Create four points for our quad:

manual->position(5.0, 0.0, 0.0);
manual->textureCoord(0,1);
manual->position(-5.0, 10.0, 0.0);
manual->textureCoord(1,0);
manual->position(-5.0, 0.0, 0.0);
manual->textureCoord(1,1);
manual->position(5.0, 10.0, 0.0);
manual->textureCoord(0,0);

Use indices to describe the quad:

manual->index(0);
manual->index(1);
manual->index(2);
manual->index(0);
manual->index(3);
manual->index(1);

Finish the manual object and convert it to a mesh:

manual->end();
manual->convertToMesh("Quad");

Create an instance of the entity and attach it to the scene using a scene node:

Ogre::Entity* ent = mSceneMgr->createEntity("Quad");
Ogre::SceneNode* node = mSceneMgr->getRootSceneNode()->createChildSceneNode("Node1");
node->attachObject(ent);

Compile and run the application. You should see a white quad.

What just happened?

We used our knowledge to create a quad and attach to it a material that simply renders everything in white. The next step is to create our own material.

Creating our own material

Always rendering everything in white isn't exactly exciting, so let's create our first material.
Time for action – creating a material

Now, we are going to create our own material using the white quad we created.

Change the material name in the application from BaseWhiteNoLighting to MyMaterial1:

manual->begin("MyMaterial1", RenderOperation::OT_TRIANGLE_LIST);

Create a new file named Ogre3DBeginnersGuide.material in the media\materials\scripts folder of our Ogre3D SDK. Write the following code into the material file:

material MyMaterial1
{
  technique
  {
    pass
    {
      texture_unit
      {
        texture leaf.png
      }
    }
  }
}

Compile and run the application. You should see a white quad with a plant drawn onto it.

What just happened?

We created our first material file. In Ogre 3D, materials can be defined in material files. To be able to find our material files, we need to put them in a directory listed in resources.cfg, like the one we used. We could also give the path to the file directly in code using the ResourceManager. To use the material defined in the material file, we just had to use its name during the begin call of the manual object. The interesting part is the material file itself.

Materials

Each material starts with the keyword material, the name of the material, and then an open curly bracket. To end the material, use a closed curly bracket; this technique should be very familiar to you by now. Each material consists of one or more techniques; a technique describes a way to achieve the desired effect. Because there are a lot of different graphics cards with different capabilities, we can define several techniques, and Ogre 3D goes from top to bottom and selects the first technique that is supported by the user's graphics card. Inside a technique, we can have several passes. A pass is a single rendering of your geometry. For most of the materials we are going to create, we only need one pass. However, some more complex materials might need two or three passes, so Ogre 3D enables us to define several passes per technique. In this pass, we only define a texture unit.
A texture unit defines one texture and its properties. This time, the only property we define is the texture to be used. We use leaf.png as the image for our texture. This texture comes with the SDK and is in a folder that gets indexed by resources.cfg, so we can use it without any work on our side.

Have a go hero – creating another material

Create a new material called MyMaterial2 that uses Water02.jpg as its image.

Texture coordinates take two

There are different strategies used when texture coordinates are outside the 0 to 1 range. Now, let's create some materials to see them in action.

Time for action – preparing our quad

We are going to use the quad from the previous example with the leaf texture material:

Change the texture coordinates of the quad from the range 0 to 1 to 0 to 2. The quad code should then look like this:

    manual->position(5.0, 0.0, 0.0);
    manual->textureCoord(0,2);
    manual->position(-5.0, 10.0, 0.0);
    manual->textureCoord(2,0);
    manual->position(-5.0, 0.0, 0.0);
    manual->textureCoord(2,2);
    manual->position(5.0, 10.0, 0.0);
    manual->textureCoord(0,0);

Now compile and run the application. Just as before, we will see a quad with a leaf texture, but this time we will see the texture four times.

What just happened?

We simply changed our quad to have texture coordinates that range from zero to two. This means that Ogre 3D needs to use one of its strategies to render texture coordinates that are larger than 1. The default mode is wrap; each value over 1 is wrapped to be between zero and one. The following diagram shows this effect and how the texture coordinates are wrapped: outside the corners, we see the original texture coordinates, and inside the corners, we see the values after wrapping. For better understanding, the diagram also shows the four texture repetitions with their implicit texture coordinates. We have seen how our texture gets wrapped using the default texture wrapping mode.
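The wrap mode's arithmetic can be sketched in a couple of lines. This is a hypothetical helper written in Python, not Ogre code: every coordinate keeps only its fractional part.

```python
import math

def wrap(u):
    """Reduce a texture coordinate to the [0, 1) range, the way the
    default 'wrap' addressing mode does."""
    return u - math.floor(u)

# Coordinates from 0 to 2 tile the texture twice along each axis:
for u in (0.0, 0.5, 1.25, 2.0, -0.25):
    print(u, "->", wrap(u))
```

Note that this also handles negative coordinates: -0.25 wraps to 0.75, so the texture tiles seamlessly in both directions.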
Our plant texture shows the effect pretty well, but it doesn't show the usefulness of this technique. Let's use another texture to see the benefits of the wrapping mode.

Using the wrapping mode with another texture

Time for action – adding a rock texture

For this example, we are going to use another texture; otherwise, we wouldn't see the effect of this texture mode:

Create a new material similar to the previous one, except change the used texture to terr_rock6.jpg:

    material MyMaterial3
    {
      technique
      {
        pass
        {
          texture_unit
          {
            texture terr_rock6.jpg
          }
        }
      }
    }

Change the used material from MyMaterial1 to MyMaterial3:

    manual->begin("MyMaterial3", RenderOperation::OT_TRIANGLE_LIST);

Compile and run the application. You should see a quad covered in a rock texture.

What just happened?

This time, the quad seems to be covered in one single texture. We don't see any obvious repetitions like we did with the plant texture. The reason for this is that, as we already know, the wrap mode repeats the texture. This texture was created in such a way that, at its left end, the texture starts again with its right side, and the same is true for the lower end. This kind of texture is called seamless: the left and right sides of the texture fit perfectly together, and the same goes for the upper and lower parts. If this weren't the case, we would see visible seams where the texture repeats.
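The technique-selection rule from the Materials section (Ogre 3D walks a material's techniques from top to bottom and uses the first one the graphics card supports) can be modeled in a few lines. This is a conceptual Python sketch; the function and the names are invented, not Ogre API:

```python
# Conceptual model of Ogre 3D's technique selection: walk the
# material's techniques in order and return the first one the
# hardware supports.
def pick_technique(techniques, supported):
    """techniques: ordered list of technique names in the material.
    supported: set of technique names the graphics card can run."""
    for technique in techniques:
        if technique in supported:
            return technique
    return None  # no usable technique for this card

# A material listing a fancy shader technique first, then a plain one:
material = ["normal_mapped", "plain_textured"]
print(pick_technique(material, {"plain_textured"}))  # an older card falls back
```

This is why you order techniques from most to least demanding: capable hardware gets the best look, while older cards quietly fall through to a simpler pass.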
Packt
07 Apr 2014
8 min read

Making an entity multiplayer-ready

(For more resources related to this topic, see here.)

Understanding the dataflow of Lua entities in a multiplayer environment

When using your own Lua entities in a multiplayer environment, you need to make sure everything your entity does on one of the clients is also triggered on all other clients. Let's take a light switch as an example. If one of the players turns on the light switch, the switch should also be flipped on all other clients. Each client connected to the game has an instance of that light switch in their level. The CryENGINE network implementation already handles all the work involved in linking these individual instances together using network entity IDs. Each light switch can contact its own instances on all connected clients and call its functions over the network. All you need to do is use the functionality that is already there. One way of implementing the light switch functionality is to turn on the switch in the entity as soon as the OnUsed() event is triggered and then send a message to all other clients in the network to also turn on their lights. This might work for something as simple as a switch, but can soon get messy when the entity becomes more complex. Ping times and message orders can lead to inconsistencies if two players try to flip the light switch at the same time. The representation of the process would look like the following diagram:

Not so good – the light switch entity could trigger its own switch on all network instances of itself

Doing it this way, with the clients notifying each other, can cause many problems. In a more stable solution, these kinds of events are usually run through the server. The server entity—let's call it the master entity—determines the state of the entities across the network at all times and distributes that state throughout the network.
This could be visualized as shown in the following diagram:

Better – the light switch entity calls the server, which distributes the event to all clients

In the light switch scenario mentioned earlier, the light switch entity would send an event to the server light switch entity first. Then, the server entity would call each light switch entity, including the original sender, to turn on their lights. It is important to understand that the entity that received the event originally does nothing else but inform the server about the event. The actual light is not turned on until the server calls back to all entities with the request to do so. The aforementioned dataflow works in single player as well, as CryENGINE will just pretend that the local machine is both the client and the server. This way, you will not have to make adjustments or add extra code to your entity to check whether it is single player or multiplayer. In a multiplayer environment with a server and multiple clients, it is important to set the script up so that it acts properly and the correct functions are called on either the client or the server. The first step to achieve this is to add a Client and a Server table to the entity script using the following code:

    Client = {},
    Server = {},

With this addition, our script table looks like the following code snippet:

    Testy = {
      Properties = {
        fileModel = "",
        Physics = {
          bRigidBody = 1,
          bRigidBodyActive = 1,
          Density = -1,
          Mass = -1,
        },
      },
      Client = {},
      Server = {},
      Editor = {
        Icon = "User.bmp",
      },
    }

Now, we can go ahead and modify the functions so that they work properly in multiplayer. We do this by adding the Client and Server subtables to our script. This way, the network system will be able to identify the Client/Server functions on the entity.

The Client/Server functions

The Client/Server functions are defined within your entity script by using the respective subtables that we previously defined in the entity table.
Let's update our script and add a simple function that outputs a debug text into the console on each client. In order for everything to work properly, we first need to update our OnInit() function and make sure it gets called on the server properly. Simply add a Server subtable to the function so that it looks like the following code snippet:

    function Testy.Server:OnInit()
      self:OnReset();
    end;

This way, our OnReset() function will still be called properly. Now, we can add a new function that outputs a debug text for us. Let's keep it simple and just make it output a console log using the CryENGINE Log function, as shown in the following code snippet:

    function Testy.Client:PrintLogOutput(text)
      Log(text);
    end;

This function simply prints some text into the CryENGINE console. Of course, you can add more sophisticated code at this point to be executed on the client. Please also note the Client subtable in the function definition that tells the engine that this is a client function. In the next step, we have to add a way to trigger this function so that we can test the behavior properly. There are many ways of doing this, but to keep things simple, we will use the OnHit() callback function that is automatically triggered when the entity is hit by something; for example, a bullet. This way, we can test our script easily by just shooting at our entity. The OnHit() callback function is quite simple. All it needs to do in our case is call our PrintLogOutput() function, or rather request the server to call it. For this purpose, we add another function to be called on the server that calls our PrintLogOutput() function. Again, please note that we are using the Client subtable of the entity to catch the hit that happens on the client.
Our two new functions should look as shown in the following code snippet:

    function Testy.Client:OnHit(user)
      self.server:SvRequestLogOutput("My Text!");
    end

    function Testy.Server:SvRequestLogOutput(text)
      self.allClients:PrintLogOutput(text);
    end

We now have two new functions: one is a client function calling a server function, and the other is a server function calling the actual function on all the clients.

The Remote Method Invocation definitions

As a last step, before we are finished, we need to expose our entity and its functions to the network. We can do this by adding a table within the root of our entity script that defines the necessary Remote Method Invocations (RMIs). The Net.Expose table will expose our entity and its functions to the network so that they can be called remotely, as shown in the following code snippet:

    Net.Expose {
      Class = Testy,
      ClientMethods = {
        PrintLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING },
      },
      ServerMethods = {
        SvRequestLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING },
      },
      ServerProperties = {
      },
    };

Each RMI is defined by providing a function name, a set of RMI flags, and additional parameters. The first RMI flag is an order flag and defines the order of the network packets. You can choose between the following options:

    UNRELIABLE_ORDERED
    RELIABLE_ORDERED
    RELIABLE_UNORDERED

These flags tell the engine whether the order of the packets is important or not. The attachment flag defines at what time the RMI is attached during the serialization process of the network. This parameter can be one of the following flags:

    PRE_ATTACH: This flag attaches the RMI before game data serialization.
    POST_ATTACH: This flag attaches the RMI after game data serialization.
    NO_ATTACH: This flag is used when it is not important whether the RMI is attached before or after the game data serialization.
    FAST: This flag performs an immediate transfer of the RMI without waiting for a frame update.
This flag is very CPU intensive and should be avoided if possible. The Net.Expose table we just added defines which functions will be exposed on the client and the server and gives us access to the following three subtables:

    allClients
    otherClients
    server

With these subtables, we can now call functions either on the server or on the clients. You can use the allClients subtable to call a function on all clients, or the otherClients subtable to call it on all clients except your own. At this point, the entity table of our script should look as follows:

    Testy = {
      Properties = {
        fileModel = "",
        Physics = {
          bRigidBody = 1,
          bRigidBodyActive = 1,
          Density = -1,
          Mass = -1,
        },
      },
      Client = {},
      Server = {},
      Editor = {
        Icon = "User.bmp",
        ShowBounds = 1,
      },
    }

    Net.Expose {
      Class = Testy,
      ClientMethods = {
        PrintLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING },
      },
      ServerMethods = {
        SvRequestLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING },
      },
      ServerProperties = {
      },
    };

This defines our entity and its network exposure. With our latest updates, the rest of our script with all its functions should look as follows:

    function Testy.Server:OnInit()
      self:OnReset();
    end;

    function Testy:OnReset()
      local props = self.Properties;
      if (not EmptyString(props.fileModel)) then
        self:LoadObject(0, props.fileModel);
      end;
      EntityCommon.PhysicalizeRigid(self, 0, props.Physics, 0);
      self:DrawSlot(0, 1);
    end;

    function Testy:OnPropertyChange()
      self:OnReset();
    end;

    function Testy.Client:PrintLogOutput(text)
      Log(text);
    end;

    function Testy.Client:OnHit(user)
      self.server:SvRequestLogOutput("My Text!");
    end

    function Testy.Server:SvRequestLogOutput(text)
      self.allClients:PrintLogOutput(text);
    end

With these functions added to our entity, everything should be ready to go, and you can test the behavior in game mode. When the entity is being shot at, the OnHit() function requests the log output from the server, and the server calls the actual function on all clients.
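The round trip just described (client hit, server request, broadcast to every client) can be modeled in a few lines of plain Python. This is purely conceptual; the class and method names are invented for illustration and none of this is CryENGINE API:

```python
# Conceptual model of the server-routed event flow: a client never
# flips its own switch directly; it asks the server, and the server
# calls back to every connected client, including the sender.
class LightSwitch:
    def __init__(self, server):
        self.server = server
        self.light_on = False

    def on_used(self):
        # Step 1: only inform the server about the event.
        self.server.request_toggle()

    def set_light(self, state):
        # Step 3: the actual state change, driven by the server.
        self.light_on = state

class Server:
    def __init__(self):
        self.clients = []
        self.light_on = False

    def request_toggle(self):
        # Step 2: the server owns the state and distributes it.
        self.light_on = not self.light_on
        for client in self.clients:
            client.set_light(self.light_on)

server = Server()
clients = [LightSwitch(server) for _ in range(3)]
server.clients = clients

clients[0].on_used()  # one player flips the switch...
print([c.light_on for c in clients])  # ...and every client agrees
```

Because every state change goes through one authority, two players toggling at nearly the same time can never leave the clients disagreeing about whether the light is on.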
Summary

In this article, we learned how to make an entity ready for a multiplayer environment by understanding the dataflow of Lua entities, working with the Client/Server functions, and exposing our entities to the network using the Remote Method Invocation definitions.

Resources for Article:

Further resources on this subject: CryENGINE 3: Terrain Sculpting [Article] CryENGINE 3: Breaking Ground with Sandbox [Article] CryENGINE 3: Fun Physics [Article]
Packt
01 Mar 2011
8 min read

Animating in Panda3D

Panda3D 1.6 Game Engine Beginner's Guide — Create your own computer game with this 3D rendering and game development framework

The first and only guide to building a finished game using Panda3D
Learn about tasks that can be used to handle changes over time
Respond to events like keyboard key presses, mouse clicks, and more
Take advantage of Panda3D's built-in shaders and filters to decorate objects with gloss, glow, and bump effects
Follow a step-by-step, tutorial-focused process that matches the development process of the game, with plenty of screenshots and thoroughly explained code for easy pick up

Actors and Animations

An Actor is a kind of object in Panda3D that adds more functionality to a static model. Actors can include joints within them. These joints have parts of the model tied to them and are rotated and repositioned by animations to make the model move and change. Actors are stored in .egg and .bam files, just like models. Animation files include information on the position and rotation of joints at specific frames in the animation. They tell the Actor how to posture itself over the course of the animation. These files are also stored in .egg and .bam files.

Time for action – loading Actors and Animations

Let's load up an Actor with an animation and start it playing to get a feel for how this works:

Open a blank document in Notepad++ and save it as Anim_01.py in the Chapter09 folder.

We need a few imports to start with.
Put these lines at the top of the file:

    import direct.directbase.DirectStart
    from pandac.PandaModules import *
    from direct.actor.Actor import Actor

We won't need a lot of code for our class' __init__ method, so let's just plow through it here:

    class World:
        def __init__(self):
            base.disableMouse()
            base.camera.setPos(0, -5, 1)
            self.setupLight()
            self.kid = Actor("../Models/Kid.egg",
                {"Walk" : "../Animations/Walk.egg"})
            self.kid.reparentTo(render)
            self.kid.loop("Walk")
            self.kid.setH(180)

The next thing we want to do is steal our setupLight() method from the Track class and paste it into this class:

    def setupLight(self):
        primeL = DirectionalLight("prime")
        primeL.setColor(VBase4(.6,.6,.6,1))
        self.dirLight = render.attachNewNode(primeL)
        self.dirLight.setHpr(45,-60,0)
        render.setLight(self.dirLight)

        ambL = AmbientLight("amb")
        ambL.setColor(VBase4(.2,.2,.2,1))
        self.ambLight = render.attachNewNode(ambL)
        render.setLight(self.ambLight)
        return

Lastly, we need to instantiate the World class and call the run() method:

    w = World()
    run()

Save the file and run it from the command prompt to see our loaded model with an animation playing on it, as depicted in the following screenshot:

What just happened?

Now, we have an animated Actor in our scene, slowly looping through a walk animation. We made that happen with only three lines of code:

    self.kid = Actor("../Models/Kid.egg",
        {"Walk" : "../Animations/Walk.egg"})
    self.kid.reparentTo(render)
    self.kid.loop("Walk")

The first line creates an instance of the Actor class. Unlike with models, we don't need to use a method of loader. The Actor class constructor takes two arguments: the first is the filename for the model that will be loaded. This file may or may not contain animations in it. The second argument is for loading additional animations from separate files. It's a dictionary of animation names and the files that they are contained in. The names in the dictionary don't need to correspond to anything; they can be any string.
    myActor = Actor(modelPath,
        {NameForAnim1 : Anim1Path,
         NameForAnim2 : Anim2Path,
         etc})

The names we give animations when the Actor is created are important because we use those names to control the animations. For instance, the last line calls the method loop() with the name of the walking animation as its argument. If the reference to the Actor is removed, the animations will be lost. Make sure not to remove the reference to the Actor until both the Actor and its animations are no longer needed.

Controlling animations

Since we're talking about the loop() method, let's start discussing some of the different controls for playing and stopping animations. There are four basic methods we can use:

play("AnimName"): This method plays the animation once from beginning to end.

loop("AnimName"): This method is similar to play, but the animation doesn't stop when it's over; it replays again from the beginning.

stop() or stop("AnimName"): This method, if called without an argument, stops all the animations currently playing on the Actor. If called with an argument, it only stops the named animation. Note that the Actor will remain in whatever posture they are in when this method is called.

pose("AnimName", FrameNumber): This method places the Actor in the pose dictated by the supplied frame without playing the animation.

We have some more advanced options as well. Firstly, we can provide optional fromFrame and toFrame arguments to play or loop to restrict the animation to specific frames:

    myActor.play("AnimName", fromFrame = FromFrame, toFrame = toFrame)

We can provide both arguments, or just one of them. For the loop() method, there is also the optional argument restart, which can be set to 0 or 1. It defaults to 1, which means the animation restarts from the beginning. If given a 0, it will start looping from the current frame. We can also use the getNumFrames("AnimName") and getCurrentFrame("AnimName") methods to get more information about a given animation.
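The frame-restricted looping described above can be sketched numerically. This is illustrative Python, not Panda3D internals; loop_frame is an invented helper that shows how a looping animation stays inside a fromFrame/toFrame window:

```python
def loop_frame(current, step, from_frame, to_frame):
    """Advance `step` frames inside [from_frame, to_frame], wrapping
    around like a looping animation; a negative step plays in reverse."""
    span = to_frame - from_frame + 1
    return from_frame + (current - from_frame + step) % span

# A 12-frame loop restricted to frames 0..11:
print(loop_frame(10, 5, 0, 11))   # wraps past the end -> 3
print(loop_frame(0, -1, 0, 11))   # reverse play wraps to the end -> 11
print(loop_frame(24, 2, 24, 47))  # a clip restricted to frames 24..47 -> 26
```

The modulo over the window's span is the whole trick: whatever the step, the frame can never leave the restricted range.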
The getCurrentAnim() method returns a string that tells us which animation is currently playing on the Actor. The final method in our list of basic animation controls sets the speed of the animation:

    myActor.setPlayRate(1.5, "AnimName")

The setPlayRate() method takes two arguments. The first is the new play rate, and it should be expressed as a multiplier of the original frame rate. If we feed in .5, the animation will play half as fast. If we feed in 2, the animation will play twice as fast. If we feed in -1, the animation will play at its normal speed, but in reverse.

Have a go hero – basic animation controls

Experiment with the various animation control methods we've discussed to get a feel for how they work. Load the Stand and Thoughtful animations from the animations folder as well, and use player input or delayed tasks to switch between animations and change frame rates. Once we're comfortable with what we've gone over so far, we'll move on.

Animation blending

Actors aren't limited to playing a single animation at a time. Panda3D is advanced enough to offer us a very handy functionality called blending. To explain blending, it's important to understand that an animation is really a series of offsets to the basic pose of the model. They aren't absolutes; they are changes from the original. With blending turned on, Panda3D can combine these offsets.

Time for action – blending two animations

We'll blend two animations together to see how this works. Open Anim_01.py in the Chapter09 folder. We need to load a second animation to be able to blend.
Change the line where we create our Actor to look like the following code:

    self.kid = Actor("../Models/Kid.egg",
        {"Walk" : "../Animations/Walk.egg",
         "Thoughtful" : "../Animations/Thoughtful.egg"})

Now, we just need to add this code above the line where we start looping the Walk animation:

    self.kid.enableBlend()
    self.kid.setControlEffect("Walk", 1)
    self.kid.setControlEffect("Thoughtful", 1)

Resave the file as Anim_02.py and run it from the command prompt.

What just happened?

Our Actor is now performing both animations to their full extent at the same time. This is possible because we made the call to the self.kid.enableBlend() method and then set the amount of effect each animation would have on the model with the self.kid.setControlEffect() method. We can turn off blending later on by using the self.kid.disableBlend() method, which will return the Actor to the state where playing or looping a new animation stops any previous animations. Using the setControlEffect() method, we can alter how much each animation controls the model. The numeric argument we pass to setControlEffect() represents a percentage of the animation's offset that will be applied, with 1 being 100%, 0.5 being 50%, and so on. When blending animations together, the look of the final result depends a great deal on the model and animations being used. Much of the work needed to achieve a good result depends on the artist. Blending works well for transitioning between animations. In this case, it can be handy to use Tasks to dynamically alter the effect animations have on the model over time. Honestly, though, the result we got with blending is pretty unpleasant. Our model is hardly walking at all, and he looks like he has a nervous twitch or something.
This is because both animations are affecting the entire model at full strength, so the Walk and Thoughtful animations are fighting for control over the arms, legs, and everything else, and what we end up with is a combination of both animations' offsets. Furthermore, it's important to understand that when blending is enabled, every animation with a control effect higher than 0 will always be affecting the model, even if the animation isn't currently playing. The only way to remove an animation's influence is to set the control effect to 0. This obviously can cause problems when we want to play an animation that moves the character's legs and another animation that moves his arms at the same time, without having them screw with each other. For that, we have to use subparts.
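The arithmetic behind blending is just a weighted sum of per-animation offsets. A conceptual Python sketch (not Panda3D code; the joint offsets are invented numbers):

```python
def blend_offsets(offsets, weights):
    """Combine per-animation joint offsets by their control effects.
    offsets: {anim_name: offset}, weights: {anim_name: effect 0..1}."""
    return sum(offsets[name] * weights.get(name, 0.0) for name in offsets)

# Two animations pulling the same joint in different directions:
offsets = {"Walk": 10.0, "Thoughtful": -4.0}  # degrees of joint rotation

print(blend_offsets(offsets, {"Walk": 1.0, "Thoughtful": 1.0}))  # both at full strength: 6.0
print(blend_offsets(offsets, {"Walk": 1.0, "Thoughtful": 0.0}))  # Thoughtful silenced: 10.0
```

With both effects at 1, neither animation wins; the joint lands on a compromise pose, which is exactly the "nervous twitch" we saw. Fading one weight toward 0 over time is what makes blending useful for transitions.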
Packt
18 Feb 2015
8 min read

Animating a Game Character

In this article by Claudio Scolastici, author of the book Unity 2D Game Development Cookbook, we will cover the following recipes:

Creating an animation tree
Dealing with transitions

(For more resources related to this topic, see here.)

Now that we have imported the necessary graphic assets for a prototype, we can approach its actual building in Unity, starting by making an animation set for our character. Unity implements an easy-to-approach, though quite powerful, animation system called Mecanim. Mecanim is a proprietary tool of Unity in which the animation clips belonging to a character are represented as boxes connected by directional arrows. Boxes represent states, which you can simply think of as idle, walk, run...you get the idea. Arrows, on the other hand, represent the transitions between the states, which are responsible for actually blending between one animation clip and the next. Thanks to transitions, we can make characters that pass smoothly, for example, from a walking animation into a running one. The control of transitions is achieved through parameters: variables belonging to different types that are stored in the character animator and are used to define and check the conditions that trigger an animation clip. The types available are common in programming and scripting languages: int, float, and bool. A distinctive type implemented in Mecanim is the trigger, which is useful when you want a transition to be triggered as an all-or-nothing event. By the way, an animator is a built-in component of Unity, strictly connected with the Mecanim system, which is represented as a panel in the Unity interface. Inside this panel, the so-called animation tree of a character is actually built up and the control parameters for the transitions are set and linked to the clips. Time for an image to help you better understand what we are talking about!
The following picture shows an example of an animator of a standard game character:

As you can see, there are four states connected by transitions that configure the logic of the flow between one state and another. Inside these arrows, the parameters and their reference values to actually trigger the animations are stored. With Mecanim, it's quite easy to build the animation tree of a character and create the logic that determines the conditions for the animations to be played. One example is to use a float variable to blend between a walking and a running cycle, having the speed of the character as the control parameter. Using a trigger or a boolean variable to add a jumping animation to the character is another fairly common example. These are the subjects of our following two recipes, starting with trigger-based blending.

Creating the animation tree

In this recipe, we show you how to add animation clips to the animator component of a game object (our game character). This being done, we will be able to set the transitions between the clips and create the logic for the animations to be correctly played.

Getting ready

First of all, we need a set of animation clips, imported in Unity and configured in Inspector. Before we proceed, be sure you have these four animation clips imported into your Unity project as FBX files: Char@Idle, Char@Run, Char@Jump, and Char@Walk.

How to do it...

The first operation is to create a folder to store the Animator Controller. From the project panel, select the Assets folder and create a new folder for the Animator Controller. Name this folder Animators. In the Animators folder, create a new Animator Controller by navigating to Create | Animator Controller, as shown in the following screenshot: Name the asset Character_Animator, or any other name you like. Double-click on Character_Animator to open the Animator panel in Unity.
Refer to the following screenshot; you should have an empty grid with a single magenta box called Any State:

Access the Models/Animations folder and select Char@Idle. Expand its hierarchy to access the actual animation clip named Idle; animation clips are represented by small play icons. Refer to the following screenshot for more clarity: Now drag the clip into the Animator window. The clip should turn into a box inside the panel (colored in orange to represent that). Being the first clip imported into the Animator window, it is assumed to be the default animation for the character. That's exactly what we want! Repeat this operation with the clip named Jump, taken from the Char@Jump FBX file. The following screenshot shows what should appear in the Animator window:

How it works...

By dragging animation clips from the project panel into the Animator editor, Mecanim creates a logic state for each of them. As states, the clips are available to connect through transitions, and the animation tree of the character can come to life. With the Idle and Jump animations added to the Animator window, we can define the logic to control the conditions to switch between them. In the following recipe, we create the transition to blend between these two animation clips.

Dealing with transitions

In this recipe, we create and set up the transition for the character to switch between the Idle and Jump animation clips. For this task, we also need a parameter, which we will call bJump, to trigger the jump animation through code.

Getting ready

We will build on the previous recipe. Have the Animator window open, and be ready to follow our instructions.

How to do it...

As you move to the Animator panel in Unity, you should see an orange box representing the Idle animation, from our previous recipe. If it is not, right-click on it, and from the menu, select Set As Default.
You can refer to the following screenshot:

Right-click on the Idle clip and select Make Transition from the menu, as shown in the following screenshot: Drag the arrow that appears onto the Jump clip and click to create the transition. It should appear in the Inspector window, to the right of the Animator window. Check the following screenshot to see whether you did it right: Now that we have the transition, we need a parameter to switch between Idle and Jump. We use a boolean type for this, so we first need to create it. In the bottom-left corner of the Animator window, click on the small +, and from the menu that appears, select Bool, as shown in the following screenshot: Name the newly created parameter bJump (the "b" stands for the boolean type; it's a good habit to create meaningful variable names). Click on the white arrow representing the transition to access its properties in Inspector. There, a visual representation of the transition between the two clips is available. By checking the Conditions section in Inspector, you can see that the transition is right now controlled by Exit Time, meaning that the Jump clip will be played only after the Idle clip has finished playing. The 0.97 value tells us that the transition actually blends between the two clips for the last 3 percent of the idle animation. For your reference, you can adjust this value if you want to blend a bit more or a bit less. Please refer to the following screenshot: As we want our bJump parameter to control the transition, we need to replace the Exit Time condition with the bJump parameter. We do that by clicking on the drop-down menu on Exit Time and selecting bJump from the menu, as shown in the following screenshot: Note that it is possible to add or remove conditions by acting on the small + and - buttons in the interface if you need extra conditions to control one single transition. For now, we just want to be sure that the Atomic option is not flagged in the Inspector panel.
The Atomic flag interrupts an animation, even if it has not finished playing yet. We don't want that to happen; when the character jumps, the animation must get to its end before playing any other clip. The following screenshot highlights the options we just mentioned:

How it works...

We made our first transition with Mecanim and used a boolean variable called bJump to control it. It is now possible to link bJump to an event, for example, pressing the spacebar to trigger the Jump animation clip.

Summary

There was a time when building games was a cumbersome and almost exclusive activity, as you needed to program your own game engine or pay a good amount of money to license one. Thanks to Unity, creating video games today is still a cumbersome activity, though less exclusive and expensive! With this article, we aim to provide you with a detailed guide to approach the development of an actual 2D game with Unity. As it is a complex process that requires several operations to be performed, we will do our best to support you at every step by providing all the relevant information to help you successfully make games with Unity.

Resources for Article:

Further resources on this subject: 2D Twin-stick Shooter [article] Components in Unity [article] Introducing the Building Blocks for Unity Scripts [article]
Moving the Space Pod Using Touch

Packt
18 Aug 2014
10 min read
This article, written by Frahaan Hussain, Arutosh Gurung, and Gareth Jones, authors of the book Cocos2d-x Game Development Essentials, will cover how to set up touch events within our game. So far, the game has had no user interaction from a gameplay perspective. This article will rectify this by adding touch controls in order to move the space pod and avoid the asteroids.

(For more resources related to this topic, see here.)

The topics that will be covered in this article are as follows:
- Implementing touch
- Single-touch
- Multi-touch
- Using touch locations
- Moving the spaceship when touching the screen

There are two main routes to detect touch provided by Cocos2d-x:
- Single-touch: This method detects a single touch event at any given time, which is what will be implemented in the game, as it is sufficient for most gaming circumstances
- Multi-touch: This method provides the functionality that detects multiple touches simultaneously; this is great for pinching and zooming; for example, the Angry Birds game uses this technique

Though single-touch will be the approach that the game will incorporate, multi-touch will also be covered in this article so that you are aware of how to use it in future games.

The general process for setting up touches

The general process of setting up touch events, be it single or multi-touch, is as follows:
1. Declare the touch functions.
2. Declare a listener to listen for touch events.
3. Assign touch functions to the appropriate touch events:
   - When the touch has begun
   - When the touch has moved
   - When the touch has ended
4. Implement the touch functions.
5. Add the appropriate game logic/code for when touch events have occurred.

Single-touch events

Single-touch events can be detected at any given time, and for many games, including this one, that is sufficient.
Follow these steps to implement single-touch events in a scene:

1. Declare the touch functions in the GameScene.h file as follows:

bool onTouchBegan(cocos2d::Touch *touch, cocos2d::Event *event);
void onTouchMoved(cocos2d::Touch *touch, cocos2d::Event *event);
void onTouchEnded(cocos2d::Touch *touch, cocos2d::Event *event);
void onTouchCancelled(cocos2d::Touch *touch, cocos2d::Event *event);

This is what the GameScene.h file will look like:

The previous functions do the following:
- The onTouchBegan function detects when a single touch has occurred. It returns a Boolean value: true means the event is swallowed by the node, and false means it will keep on propagating.
- The onTouchMoved function detects when the touch moves.
- The onTouchEnded function detects when the touch event has ended, essentially when the user has lifted their finger.
- The onTouchCancelled function detects when a touch event has ended, but not by the user; for example, due to a system alert. The general practice is to call the onTouchEnded method to run the same code, as it can be considered the same event for most games.
2. Declare a Boolean variable in the GameScene.h file, which will be true while the screen is being touched and false when it isn't, and also declare a float variable to keep track of the position being touched:

bool isTouching;
float touchPosition;

This is how it will look in the GameScene.h file:

3. Add the following code in the init() method of GameScene.cpp:

auto listener = EventListenerTouchOneByOne::create();
listener->setSwallowTouches(true);
listener->onTouchBegan = CC_CALLBACK_2(GameScreen::onTouchBegan, this);
listener->onTouchMoved = CC_CALLBACK_2(GameScreen::onTouchMoved, this);
listener->onTouchEnded = CC_CALLBACK_2(GameScreen::onTouchEnded, this);
listener->onTouchCancelled = CC_CALLBACK_2(GameScreen::onTouchCancelled, this);
this->getEventDispatcher()->addEventListenerWithSceneGraphPriority(listener, this);
isTouching = false;
touchPosition = 0;

This is how it will look in the GameScene.cpp file:

There is quite a lot of new code in the previous snippet, so let's run through it line by line:
- The first statement declares and initializes a listener for a single touch
- The second statement makes the listener swallow touches, preventing layers underneath the touched location from detecting them
- The third statement assigns our onTouchBegan method to the onTouchBegan listener
- The fourth statement assigns our onTouchMoved method to the onTouchMoved listener
- The fifth statement assigns our onTouchEnded method to the onTouchEnded listener
- The sixth statement assigns our onTouchCancelled method to the onTouchCancelled listener
- The seventh statement adds the touch listener to the event dispatcher so that the events can be detected
- The eighth statement sets the isTouching variable to false, as the player won't be touching the screen initially when the game starts
- The final statement initializes the touchPosition variable to 0

4. Implement the touch functions inside the GameScene.cpp file:

bool GameScreen::onTouchBegan(cocos2d::Touch *touch, cocos2d::Event *event)
{
    isTouching = true;
    touchPosition = touch->getLocation().x;
    return true;
}

void GameScreen::onTouchMoved(cocos2d::Touch *touch, cocos2d::Event *event)
{
    // not used for this game
}

void GameScreen::onTouchEnded(cocos2d::Touch *touch, cocos2d::Event *event)
{
    isTouching = false;
}

void GameScreen::onTouchCancelled(cocos2d::Touch *touch, cocos2d::Event *event)
{
    onTouchEnded(touch, event);
}

The following is what the GameScene.cpp file will look like:

Let's go over the touch functions that have been implemented previously:
- The onTouchBegan method sets the isTouching variable to true, as the user is now touching the screen, and stores the starting touch position
- The onTouchMoved function isn't used in this game, but it has been implemented so that you are aware of the steps for implementing it (as an extra task, you can implement touch movement so that when the user moves his/her finger from one side to the other, the space pod changes direction)
- The onTouchEnded method sets the isTouching variable to false, as the user is no longer touching the screen
- The onTouchCancelled method calls the onTouchEnded method, as a touch event has essentially ended

If the game were to be run now, the space pod wouldn't move, as the movement code hasn't been implemented yet. It will be implemented within the update() method to move left when the user touches the left half of the screen and move right when the user touches the right half of the screen.
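Before wiring the movement into update(), the move-and-clamp arithmetic can be isolated as a small pure function to see how the numbers work out. This is a sketch only; movePod and its parameters are illustrative names, not part of the Cocos2d-x API.

```cpp
#include <cassert>
#include <cmath>

// Sketch of the update() movement logic described above: move the pod
// toward the touched half of the screen at half the screen width per
// second, then clamp so half the sprite never leaves the screen.
// All names here are illustrative, not part of the Cocos2d-x API.
float movePod(float x, float touchX, float screenW, float halfSprite, float dt)
{
    float step = 0.50f * screenW * dt;        // distance moved this frame
    if (touchX < screenW / 2)
    {
        x -= step;                            // left half touched: move left
        if (x < halfSprite)
            x = halfSprite;                   // clamp at the left edge
    }
    else
    {
        x += step;                            // right half touched: move right
        if (x > screenW - halfSprite)
            x = screenW - halfSprite;         // clamp at the right edge
    }
    return x;
}
```

With a hypothetical 480-pixel-wide screen and a 32-pixel-wide sprite, a touch on the left half moves the pod 24 pixels per 0.1-second frame, and the pod stops once its centre is 16 pixels from the edge.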
5. Add the following code at the end of the update() method:

// check if the screen is being touched
if (true == isTouching)
{
    // check which half of the screen is being touched
    if (touchPosition < visibleSize.width / 2)
    {
        // move the space pod left
        playerSprite->setPositionX(playerSprite->getPosition().x - (0.50 * visibleSize.width * dt));
        // check to prevent the space pod from going off the screen (left side)
        if (playerSprite->getPosition().x <= 0 + (playerSprite->getContentSize().width / 2))
        {
            playerSprite->setPositionX(playerSprite->getContentSize().width / 2);
        }
    }
    else
    {
        // move the space pod right
        playerSprite->setPositionX(playerSprite->getPosition().x + (0.50 * visibleSize.width * dt));
        // check to prevent the space pod from going off the screen (right side)
        if (playerSprite->getPosition().x >= visibleSize.width - (playerSprite->getContentSize().width / 2))
        {
            playerSprite->setPositionX(visibleSize.width - (playerSprite->getContentSize().width / 2));
        }
    }
}

The following is how this will look after adding the code:

The preceding code performs the following steps:
1. Checks whether the screen is being touched.
2. Checks which side of the screen is being touched.
3. Moves the player left or right.
4. Checks whether the player is going off the screen and, if so, stops them from moving.
5. Repeats the process until the screen is no longer being touched.

This section covered how to set up single-touch events and implement them within the game to be able to move the space pod left and right.

Multi-touch events

Multi-touch is set up in a similar manner: declare the functions and create a listener to actively listen for touch events. Follow these steps to implement multi-touch in a scene:

1. Firstly, the multi-touch feature needs to be enabled in the AppController.mm file, which is located within the ios folder.
To do so, add the following line of code below the viewController.view = eaglView; line:

[eaglView setMultipleTouchEnabled: YES];

The following is what the AppController.mm file will look like:

2. Declare the touch functions within the game scene header file (the functions do the same thing as the single-touch equivalents, but enable multiple touches to be detected simultaneously):

void onTouchesBegan(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
void onTouchesMoved(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
void onTouchesEnded(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
void onTouchesCancelled(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);

The following is what the header file will look like:

3. Add the following code in the init() method of the scene's .cpp file to listen for the multi-touch events. This uses the EventListenerTouchAllAtOnce class, which allows multiple touches to be detected at once:

auto listener = EventListenerTouchAllAtOnce::create();
listener->onTouchesBegan = CC_CALLBACK_2(GameScreen::onTouchesBegan, this);
listener->onTouchesMoved = CC_CALLBACK_2(GameScreen::onTouchesMoved, this);
listener->onTouchesEnded = CC_CALLBACK_2(GameScreen::onTouchesEnded, this);
listener->onTouchesCancelled = CC_CALLBACK_2(GameScreen::onTouchesCancelled, this);
this->getEventDispatcher()->addEventListenerWithSceneGraphPriority(listener, this);

The following is how this will look:

4. Implement the following multi-touch functions inside the scene's .cpp file:

void GameScreen::onTouchesBegan(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
{
    CCLOG("Multi-touch BEGAN");
}

void GameScreen::onTouchesMoved(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
{
    for (int i = 0; i < touches.size(); i++)
    {
        CCLOG("Touch %i: %f", i, touches[i]->getLocation().x);
    }
}

void GameScreen::onTouchesEnded(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
{
    CCLOG("MULTI TOUCHES HAVE ENDED");
}

void GameScreen::onTouchesCancelled(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
{
    CCLOG("MULTI TOUCHES HAVE BEEN CANCELLED");
}

The following is how this will look:

The multi-touch functions just print out a log stating that they have occurred, except that when touches are moved, their respective x positions are logged. This section covered how to implement the core foundations for multi-touch events so that they can be used for features such as zooming (for example, zooming into a scene in the Clash Of Clans game) and panning. Multi-touch wasn't incorporated within the game as it wasn't needed, but this section is a good starting point for implementing it in future games.

Summary

This article covered how to set up touch listeners to detect touch events for single-touch and multi-touch. We incorporated single-touch within the game to be able to move the space pod left or right, depending on which half of the screen was being touched. Multi-touch wasn't used, as the game didn't require it, but its implementation was shown so that it can be used in future projects.

Resources for Article: Further resources on this subject:
- Cocos2d: Uses of Box2D Physics Engine [article]
- Cocos2d-x: Installation [article]
- Thumping Moles for Fun [article]

Managing and Displaying Information

Packt
17 Sep 2013
37 min read
(For more resources related to this topic, see here.)

In order to realize these goals, in this article we'll be doing the following:
- Displaying a countdown timer on the screen
- Configuring fonts
- Creating a game attribute to count lives
- Using graphics to display information
- Counting collected actors
- Keeping track of the levels

Prior to continuing with the development of our game, let's take a little time out to review what we have achieved so far, and also to consider some of the features that our game will need before it can be published.

A review of our progress

The gameplay mechanics are now complete; we have a controllable character in the form of a monkey, and we have some platforms for the monkey to jump on and traverse the scene. We have also introduced some enemy actors, the croc and the snake, and we have Aztec statues falling from the sky to create obstacles for the monkey. Finally, we have the fruit, all of which must be collected by the monkey in order to successfully complete the level.

With regards to the scoring elements of the game, we're currently keeping track of a countdown timer (displayed in the debug console), which causes the scene to completely restart when the monkey runs out of time. When the monkey collides with an enemy actor, the scene is not reloaded, but the monkey is sent back to its starting point in the scene, and the timer continues to count down.

Planning ahead – what else does our game need?

With the gameplay mechanics working, we need to consider what our players will expect to happen when they have completed the task of collecting all the fruits. As mentioned in the introduction to this article, our plan is to create additional, more difficult levels for the player to complete! We also need to consider what will happen when the game is over; either when the player has succeeded in collecting all the fruits, or when the player has failed to collect the fruits in the allocated time.
The solution that we'll be implementing in this game is to display a message to advise the player of their success or failure, and to provide options for the player to either return to the main menu, or, if the task was completed successfully, continue to the next level within the game. We need to implement a structure so that the game can keep track of information, such as how many lives the player has left and which level of the game is currently being played. Let's put some information on the screen so that our players can keep track of the countdown timer.

Displaying a countdown timer on the screen

We created a new scene behavior called Score Management, which contains the Decrement Countdown event, shown as follows:

Currently, as we can see in the previous screenshot, this event decrements the Countdown attribute by a value of 1, every second. We also have a debug print instruction that displays the current value of Countdown in the debug console to help us, as game developers, keep track of the countdown. However, players of the game cannot see the debug console, so we need to provide an alternative means of displaying the amount of time that the player has to complete the level. Let's see how we can display that information on the screen for players of our game.

Time for action – displaying the countdown timer on the screen

1. Ensure that the Score Management scene behavior is visible: click on the Dashboard tab, select Scene Behaviors, and double-click on the Score Management icon in the main panel.
2. Click + Add Event | Basics | When Drawing.
3. Double-click on the title of the new Drawing event, and rename it to Display Countdown.
4. Click on the Drawing section button in the instruction block palette.
5. Drag a draw text anything at (x: 0 y: 0) block into the orange when drawing event block in the main panel.
6. Enter the number 10 into the x: textbox and also enter 10 into the y: textbox.
7. Click on the drop-down arrow in the textbox after draw text and select Text | Basics. Then click on the text & text block.
8. In the first textbox of the green … & … block, enter the text COUNTDOWN: (all uppercase, followed by a colon).
9. In the second textbox, after the & symbol, click on the drop-down arrow and select Basics, then click on the anything as text block.
10. Click on the drop-down arrow in the … as text block, and select Number | Attributes | Countdown.
11. Ensure that the new Display Countdown event looks like the following screenshot:
12. Test the game.

What just happened?

When the game is played, we can now see, in the upper-left corner of the screen, a countdown timer that represents the value of the Countdown attribute as it is decremented each second.

First, we created a new Drawing event, which we renamed to Display Countdown, and then we added a draw text anything at (x: 0 y: 0) block, which is used to display the specified text in the required location on the screen. We set both the x: and y: coordinates for displaying the drawn text to 10 pixels, that is, 10 pixels from the left-hand side of the screen, and 10 pixels from the top of the screen.

The next task was to add some text blocks that enabled us to display an appropriate message along with the value of the Countdown attribute. The text & text block enables us to concatenate, or join together, two separate pieces of text. The Countdown attribute is a number, so we used the anything as text block to convert the value of the Countdown attribute to text to ensure that it will be displayed correctly when the game is being played. In practice, we could have just located the Countdown attribute block in the Attributes section of the palette, and then dragged it straight into the text & text block. However, it is best practice to correctly convert attributes to the appropriate type, as required by the instruction block.
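The concatenation and conversion performed by the text & text and anything as text blocks amount to the following, sketched here as a hypothetical C++ helper (hudLabel is an illustrative name, not part of Stencyl's API):

```cpp
#include <cassert>
#include <string>

// Sketch of the "text & text" plus "anything as text" blocks: the numeric
// attribute is converted to text first, then joined to the label.
// hudLabel is an illustrative helper, not part of Stencyl's API.
std::string hudLabel(const std::string& label, int value)
{
    return label + std::to_string(value);   // "anything as text", then "&"
}
```

For example, a Countdown value of 60 produces the on-screen text "COUNTDOWN: 60".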
In our case, the number attribute is being converted to text because it is being used in the text concatenation instruction block. If we needed to use a text value in a calculation, we would convert it to a number using an anything as number block.

Configuring fonts

We can see, when testing the game, that the font we have used is not very interesting; it's a basic font that doesn't really suit the style of the game! Stencyl allows us to specify our own fonts, so our next step is to import a font to use in our game.

Time for action – specifying a font for use in our game

Before proceeding with the following steps, we need to locate the fonts-of-afrikaAfritubu.TTF file. Place the file in a location where it can easily be located, and continue with the following steps:

1. In the Dashboard tab, click on Fonts.
2. In the main panel, click on the box containing the words This game contains no Fonts. Click here to create one.
3. In the Name textbox of the Create New… dialog box, type HUD Font and click on the Create button.
4. In the left-hand panel, click on the Choose… button next to the Font selector.
5. Locate the file Afritubu.TTF and double-click on it to open it. Note that the main panel shows a sample of the new font.
6. In the left-hand panel, change the Size option to 25.
7. Important: save the game!
8. Return to the Display Countdown event in the Score Management scene behavior.
9. In the instruction block palette, click on the Drawing section button and then the Styles category button.
10. Drag the set current font to Font block above the draw text block in the when drawing event.
11. Click on the Font option in the set current font to Font block, and select Choose Font from the pop-up menu.
12. Double-click on the HUD Font icon in the Choose a Font… dialog box.
13. Test the game. Observe the countdown timer at the upper-left corner of the game.

What just happened?
We can see that the countdown timer is now being displayed using the new font that we have imported into Stencyl, as shown in the following screenshot:

The first step was to create a new blank font in the Stencyl dashboard and to give it a name (we chose HUD Font), and then we imported the font file from a folder on our hard disk. Once we had imported the font file, we could see a sample of the font in the main panel. We then increased the size of the font using the Size option in the left-hand panel. That's all we needed to do in order to import and configure a new font in Stencyl! However, before progressing, we saved the game to ensure that the newly imported font will be available for the next steps.

With our new font ready to use, we needed to apply it to our countdown text in the Display Countdown behavior. So, we opened up the behavior and inserted the set current font to Font style block. The final step was to specify which font we wanted to use, by clicking on the Font option in the font style block, and choosing the new font, HUD Font, which we configured in the earlier steps.

Heads-Up Display (HUD) is often used in games to describe either text or graphics that is overlaid on the main game graphics to provide the player with useful information.

Using font files in Stencyl

Stencyl can use any TrueType font that we have available on our hard disk (files with the extension TTF); many thousands of fonts are available to download from the Internet free of charge, so it's usually possible to find a font that suits the style of any game that we might be developing. Fonts are often subject to copyright, so be careful to read any licensing agreements that are provided with the font file, and only download font files from reliable sources.

Have a go hero

When we imported the font into Stencyl, we specified a new font size of 25, but it is a straightforward process to modify further aspects of the font style, such as the color and other effects.
Click on the HUD Font tab to view the font settings (or reopen the Font Editor from the Dashboard tab) and experiment with the font size, color, and other effects to find an appropriate style for the game. Take this opportunity to learn more about the different effects that are available, referring to Stencyl's online help if required. Remember to test the game to ensure that any changes are effective and the text is not difficult to read!

Creating a game attribute to count lives

Currently, our game never ends. As soon as the countdown reaches zero, the scene is restarted, or, when the monkey collides with an enemy actor, the monkey is repositioned at the starting point in the scene. There is no way for our players to lose the game! In some genres of game, the player will never be completely eliminated; effectively, the same game is played forever. But in a platform game such as ours, the player typically will have a limited number of chances, or lives, to complete the required task. In order to resolve our problem of having a never-ending game, we need to keep track of the number of lives available to our player. So let's start to implement that feature right now by creating a game attribute called Lives!

Time for action – creating a Lives game attribute

1. Click on the Settings icon on the Stencyl toolbar at the top of the screen.
2. In the left-hand panel of the Game Settings dialog box, click on the Attributes option.
3. Click on the green Create New button.
4. In the Name textbox, type Lives.
5. In the Category textbox, change the word Default to Scoring.
6. In the Type section, ensure that the currently selected option is Number.
7. Change Initial Value to 3.
8. Click on OK to confirm the configuration. We'll leave the Game Settings dialog box open, so that we can take a closer look.

What just happened?

We have created a new game attribute called Lives.
If we look at the rightmost panel of the Game Settings dialog box that we left open on the screen, we can see that we have created a new heading entitled SCORING, and underneath the heading there is a label icon entitled Lives, as shown in the following screenshot:

The Lives item is a new game attribute that can store a number. The category name of SCORING that we created is not used within the game; we can't access it with the instruction blocks. It is there purely as a memory aid for the game developer when working with game attributes. When many game attributes are used in a game, it can become difficult to remember exactly what they are for, so being able to place them under specific headings can be helpful.

Using game attributes

The attributes we have used so far, such as the Countdown attribute that we created in the Score Management behavior, lose their values as soon as a different scene is loaded, or when the current scene is reloaded. Some game developers may refer to these attributes as local attributes, because they belong to the behavior in which they were created. Losing its value is fine when the attribute is just being used within the current scene; for example, we don't need to keep track of the countdown timer outside of the Jungle scene, because the countdown is reset each time the scene is loaded. However, sometimes we need to keep track of values across several scenes within a game, and this is when game attributes become very useful. Game attributes work in a very similar manner to local attributes. They store values that can be accessed and modified, but the main difference is that game attributes keep their values even when a different scene is loaded. Currently, the issue of losing attribute values when a scene is reloaded is not important to us, because our game only has one scene.
However, when our players succeed in collecting all the fruits, we want the next level to be started without resetting the number of lives. So we need the number of lives to be remembered when the next scene is loaded. We've created a game attribute called Lives, so let's put it to good use. Time for action – decrementing the number of lives If the Game Settings dialog box is still open, click on OK to close it. Open the Manage Player Collisions actor behavior. Click on the Collides with Enemies event in the left-hand panel. Click on the Attributes section button in the palette. Click on the Game Attributes category button. Locate the purple set Lives to 0 block under the Number Setters subcategory and drag it into the orange when event so that it appears above the red trigger event RestartLevel in behavior Health for Self block. Click on the drop-down arrow in the set Lives to … block and select 0 - 0 in the Math section. In the left textbox of the … - … block, click on the drop-down arrow and select Game Attributes | Lives. In the right-hand textbox, enter the digit 1. Locate the print anything block in the Flow section of the palette, under the Debug category, and drag it below the set Lives to Lives – 1 block. In the print … block, click on the drop-down arrow and select Text | Basics | text & text. In the first empty textbox, type Lives remaining: (including the colon). Click on the drop-down arrow in the second textbox and select Basics | anything as text. In the … as text block, click on the drop-down arrow and select Number | Game Attributes | Lives. Ensure that the Collides with Enemies event looks like the following screenshot: Test the game; make the monkey collide with an enemy actor, such as the croc, and watch the debug console! What just happened? 
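The difference between a local attribute and a game attribute can be sketched in C++: a game attribute behaves like game-wide storage that survives scene reloads, while a local attribute is reset along with its scene. This is an illustrative model only, not how Stencyl is actually implemented.

```cpp
#include <cassert>

// Illustrative model: a game attribute lives in game-wide storage and
// keeps its value across scene reloads; a local attribute belongs to a
// scene behavior and is reset every time the scene is (re)loaded.
struct GameAttributes
{
    static int lives;        // the "Lives" game attribute
};
int GameAttributes::lives = 3;

struct JungleScene
{
    int countdown = 60;      // local attribute: reset on every reload
};
```

Reloading the scene constructs a fresh JungleScene (countdown back to 60), but GameAttributes::lives keeps whatever value it had.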
We have modified the Collides with Enemies event in the Manage Player Collisions behavior so that it decrements the number of lives by one when the monkey collides with an enemy actor, and the new value of Lives is shown in the debug console. This was achieved by using the purple game attribute setter and getter blocks to set the value of the Lives game attribute to its current value minus one. For example, if the value of Lives is 3 when the event occurs, Lives will be set to 3 minus 1, which is 2! The print … block was then used to display a message in the console, advising how many lives the player has remaining. We used the text & text block to join the text Lives remaining: together with the current value of the Lives game attribute. The anything as text block converts the numeric value of Lives to text to ensure that it will display correctly. Currently, the value of the Lives attribute will continue to decrease below 0, and the monkey will always be repositioned at its starting point. So our next task is to make something happen when the value of the Lives game attribute reaches 0! No more click-by-click steps! From this point onwards, click-by-click steps to modify behaviors and to locate and place each instruction block will not be specified! Instead, an overview of the steps will be provided, and a screenshot of the completed event will be shown towards the end of each Time for action section. The search facility, at the top of the instruction block palette, can be used to locate the required instruction block; simply click on the search box and type any part of the text that appears in the required block, then press the Enter key on the keyboard to display all the matching blocks in the block palette. Time for action – detecting when Lives reaches zero Create a new scene called Game Over — Select the Dashboard tab, select Scenes, and then select Click here to create a new Scene. Leave all the settings at their default configuration and click on OK. 
Close the tab for the newly created scene. Open the Manage Player Collisions behavior and click on the Collides with Enemies event to display the event's instruction blocks. Insert a new if block under the existing print block. Modify the if block to if Lives < 0. Move the existing block, trigger event RestartLevel in behavior Health for Self into the if Lives > 0 block. Insert an otherwise block below the if Lives > 0 block. Insert a switch to Scene and Crossfade for 0 secs block inside the otherwise block. Click on the Scene option in the new block, then click on Choose Scene and select the Game Over scene. Change the secs textbox to 0 (zero). Ensure that our modified Collides with Enemies event now looks like the following screenshot: Test the game; make the monkey run into an enemy actor, such as the croc, three times. What just happened? We have modified the Collides with Enemies event so that the value of the Lives game attribute is tested after it has been decremented, and the game will switch to the Game Over scene if the number of lives remaining is less than zero. If the value of Lives is greater than zero, the RestartLevel event in the monkey's Health behavior is triggered. However, if the value of Lives is not greater than zero, the instruction in the otherwise block will be executed, and this switches to the (currently blank) Game Over scene that we have created. If we review all the instructions in the completed Collides with Enemies event, and write them in English, the statement will be: When the monkey collides with an enemy, reduce the value of Lives by one and print the new value to the debug console. Then, if the value of Lives is more than zero, trigger the RestartLevel event in the monkey's Health behavior, otherwise switch to the Game Over scene. 
Before continuing, we should note that the Game Over scene has been created as a temporary measure to ensure that, as we are in the process of developing the game, it's immediately clear to us (the developer) that the monkey has run out of lives.

Have a go hero

Change the Countdown attribute value to 30: open the Jungle scene, click on the Behaviors button, then select the Score Management behavior in the left panel to see the attributes for this behavior.

The following tasks in this Have a go hero session are optional; failure to attempt them will not affect future tutorials, but it is a great opportunity to put some of our newly learned skills into practice! In the section Time for action – displaying the countdown timer on the screen, we learned how to display the value of the countdown timer on the screen during gameplay. Using the skills that we have acquired in this article, try to complete the following tasks:
- Update the Score Management behavior to display the number of lives at the upper-right corner of the screen, by adding some new instruction blocks to the Display Counter event.
- Rename the Display Counter event to Display HUD.
- Remove the print Countdown block from the Decrement Countdown event, also found in the Score Management behavior. Right-click on the instruction block and review the options available in the pop-up menu!
- Remove the print Lives remaining: & Lives as text instruction block from the Collides with Enemies event in the Manage Player Collisions behavior.

Removing debug instructions

Why did we remove the debug print … blocks in the previous Have a go hero session? Originally, we added the debug blocks to assist us in monitoring the values of the Countdown attribute and the Lives game attribute during the development process. Now that we have updated the game to display the required information on the screen, the debug blocks are redundant!
While it would not necessarily cause a problem to leave the debug blocks where they are, it is best practice to remove any instruction blocks that are no longer in use. Also, during development, excessive use of debug print blocks can have an impact on the performance of the game, so it's a good idea to remove them as soon as is practical.

Using graphics to display information

We are currently displaying two on-screen pieces of information for players of our game: the countdown timer and the number of lives available. However, providing too much textual information for players can be distracting for them, so we need to find an alternative method of displaying some of the information that the player needs during gameplay. Rather than using text to advise the player how much time they have remaining to complete the level, we're going to display a timer bar on the screen.

Time for action – displaying a timer bar

1. Open the Score Management scene behavior and click on the Display HUD event.
2. In the when drawing event, right-click on the blue block that draws the text for the countdown timer and select Activate / Deactivate from the pop-up menu. Note that the block becomes faded.
3. Locate the draw rect at (x: 0 y: 0) with (w: 0 h: 0) instruction block in the palette, and insert it at the bottom of the when drawing event.
4. Click on the draw option in the newly inserted block and change it to fill.
5. Set both the x: and y: textboxes to 10.
6. Set the width (w:) to Countdown x 10.
7. Set the height (h:) to 10.
8. Ensure that the draw text … block and the fill rect at … block in the Display HUD event appear as shown in the following screenshot (the draw text LIVES: … block may look different if the earlier Have a go hero section was attempted):
9. Test the game!

What just happened?

We have created a timer bar that displays the amount of time remaining for the player to collect the fruit, and the timer bar reduces in size with the countdown!
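The geometry of the bar is simple enough to express as a small Python sketch. Stencyl's real drawing is done by the visual fill rect block; the function name here is invented for illustration:

```python
# Illustrative sketch of the timer-bar rectangle. Stencyl draws this with
# a "fill rect" block; timer_bar_rect is not a real Stencyl function.

SCALE = 10  # multiply the countdown so the bar is wide enough to see

def timer_bar_rect(countdown):
    """Return (x, y, width, height) for the filled rectangle."""
    return (10, 10, countdown * SCALE, 10)

print(timer_bar_rect(30))  # a full 30-second countdown -> (10, 10, 300, 10)
```

As the countdown ticks from 30 down to 0, the width shrinks from 300 pixels to nothing, which is exactly the shrinking-bar effect seen in the game.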
First, in the Display HUD event, we deactivated, or disabled, the block that was drawing the textual countdown message, because we no longer want the text message to be displayed on the screen.

The next step was to insert a draw rect … block configured to create a filled rectangle at the upper-left corner of the screen, with a width equal to the value of the Countdown timer multiplied by 10. If we had not multiplied the value of the countdown by 10, the timer bar would be very small and difficult to see (try it)! We'll be making some improvements to the timer bar later in this article.

Activating and deactivating instruction blocks

When we deactivate an instruction block, as we did in the Display HUD event, it no longer functions; it's completely ignored! However, the block remains in place, shown slightly faded, and if required, it can easily be re-enabled by right-clicking on it and selecting the Activate / Deactivate option.

Being able to activate and deactivate instruction blocks without deleting them is a useful feature; it enables us to try out new instructions, such as our timer bar, without having to completely remove blocks that we might want to use in the future. If, for example, we decided that we didn't want to use the timer bar, we could deactivate it and reactivate the draw text … block! Deactivated instruction blocks have no impact on the performance of a game; they are completely ignored during the game compilation process.

Have a go hero

The tasks in this Have a go hero session are optional; failure to attempt them will not affect future tutorials.
Referring to Stencyl's online help at www.stencyl.com/help/ if required, try to make the following improvements to the timer bar:

- Specify a more visually appealing color for the rectangle
- Make it thicker (height) so that it is easier to see when playing the game
- Consider drawing a black border (known as a stroke) around the rectangle
- Try to make the timer bar reduce in size smoothly, rather than in big steps

Ask an independent tester for feedback about the changes, and then modify the bar based on the feedback. To view suggested modifications together with comments, review the Display HUD event in the downloadable files that accompany this article.

Counting collected actors

With the number of lives being monitored and displayed for the player, and the timer bar in place, we now need to create some instructions that will enable our game to keep track of how many of the fruit actors have been collected, and to carry out the appropriate action when there is no fruit left to collect.

Time for action – counting the fruit

1. Open the Score Management scene behavior and create a new number attribute (not a game attribute) with the configuration shown in the following screenshot (in the block palette, click on Attributes, and then click on Create an Attribute…).
2. Add a new when created event and rename it to Initialize Fruit Required.
3. Add the required instruction blocks to the new when created event, so the Initialize Fruit Required event appears as shown in the following screenshot, carefully checking the numbers and text in each of the blocks' textboxes:

Note that the red of group block in the set actor value … block cannot be found in the palette; it has been dragged into place from the orange for each … of group Collectibles block.

4. Test the game and look for the Fruit required message in the debug console.

What just happened?

Before we can create the instructions to determine if all the fruit have been collected, we need to know how many fruit actors there are to collect.
So we have created a new event that stores that information for us in a number attribute called Fruit Required, and displays it in the debug console.

We have created a for each … of group Collectibles block. This very useful looping block will repeat the instructions placed inside it for each member of the specified group that can be found in the current scene. We have specified the Collectibles group, and the instruction that we have placed inside the new loop is increment Fruit Required by 1. When the loop has completed, the value of the Fruit Required attribute is displayed in the debug console using a print … block.

When constructing new events, it's good practice to insert print … blocks so we can be confident that the instructions achieve the results that we are expecting. When we are happy that the results are as expected, perhaps after carrying out further testing, we can remove the debug printing from our event.

We have also introduced a new type of block that can set a value for an actor; in this case, we have set actor value Collected for … of group to false. This block ensures that each of the fruit actors has a value of Collected that is set to false each time the scene is loaded; remember that this instruction is inside the for each … loop, so it is being carried out for every Collectible group member in the current scene.

Where did the actor's Collected value come from? Well, we just invented it! The set actor value … block allows us to create an arbitrary value for an actor at any time. We can also retrieve that value at any time with a corresponding get actor value … block, and we'll be doing just that when we check to see if a fruit actor has been collected in the next section, Time for action – detecting when all the fruit are collected.
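The Initialize Fruit Required event can be sketched in ordinary code. Stencyl expresses all of this as visual blocks; in the Python below, actors are modelled as dictionaries, and the function name and dictionary keys are invented for illustration:

```python
# Python sketch of the "Initialize Fruit Required" event. The group test
# stands in for Stencyl's "for each ... of group Collectibles" block.

def initialize_fruit_required(scene_actors):
    fruit_required = 0
    for actor in scene_actors:                  # for each actor in the scene
        if actor["group"] == "Collectibles":
            fruit_required += 1                 # increment Fruit Required by 1
            actor["Collected"] = False          # set actor value Collected to false
    print("Fruit required:", fruit_required)    # runs once, after the loop
    return fruit_required

jungle = [{"group": "Collectibles"}, {"group": "Enemies"},
          {"group": "Collectibles"}]
initialize_fruit_required(jungle)  # prints: Fruit required: 2
```

Because the counting and the flag initialization happen inside the loop, while the print happens after it, the count is correct for any number of fruit actors placed in the scene.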
Translating our instructions into English results in the following statement:

For each actor in the Collectibles group that can be found in this scene, add the value 1 to the Fruit Required attribute and also set the actor's Collected value to false. Finally, print the result in the debug console.

Note that the print … block has been placed after the for each … loop, so the message will not be printed for each fruit actor; it will appear just once, after the loop has completed!

If we wish to prove to ourselves that the loop is counting correctly, we can edit the Jungle scene and add as many fruit actors as we wish. When we test the game, we can see that the number of fruit actors in the scene is correctly displayed in the debug console. We have designed a flexible set of instructions that can be used in any scene with any number of fruit actors, and which does not require us (as the game designer) to manually configure the number of fruit actors to be collected in that scene! Once again, we have made life easier for our future selves!

Now that we have the attribute containing the number of fruit to be collected at the start of the scene, we can create the instructions that will respond when the player has successfully collected them all.

Time for action – detecting when all fruits have been collected

1. Create a new scene called Level Completed, with a Background Color of yellow. Leave all the other settings at their default configuration.
2. Close the tab for the newly created scene.
3. Return to the Score Management scene behavior, and create a new custom event by clicking on + Add Event | Advanced | Custom Event.
4. In the left-hand panel, rename the custom event to Fruit Collected.
5. Add the required instruction blocks to the new Fruit Collected event, so it appears as shown in the following screenshot, again carefully checking the parameters in each of the textboxes:

Note that there is no space in the when FruitCollected happens custom event name.
Save the game and open the Manage Player Collisions actor behavior.

Modify the Collides with Collectibles event so it appears as shown in the following screenshot. The changes are as follows:

- A new if get actor value Collected for … of group = false block has been inserted.
- The existing blocks have been moved into the new if … block.
- A set actor value Collected for … of group to true block has been inserted above the grow … block.
- A trigger event FruitCollected in behavior Score Management for this scene block has been inserted above the do after 0.5 seconds block.
- An if … of group is alive block has been inserted into the do after 0.5 seconds block, and the existing kill … of group block has been moved inside the newly added if … block.

Test the game; collect several pieces of fruit, but not all of them! Examine the contents of the debug console; it may be necessary to scroll the console horizontally to read the messages. Continue to test the game, but this time collect all the fruit actors.

What just happened?

We have created a new Fruit Collected event in the Score Management scene behavior, which switches to a new scene when all the fruit actors have been collected, and we have also modified the Collides with Collectibles event in the Manage Player Collisions actor behavior in order to count how many pieces of fruit remain to be collected.

When testing the game, we can see that each time a piece of fruit is collected, the new value of the Fruit Required attribute is displayed in the debug console, and when all the fruit actors have been collected, the yellow Level Completed scene is displayed.

The first step was to create a blank Level Completed scene, which will be switched to when all the fruit actors have been collected. As with the Game Over scene that we created earlier in this article, it is a temporary scene that enables us to easily determine, for testing purposes, when the task of collecting the fruit has been completed successfully.
We then created a new custom event called Fruit Collected in the Score Management scene behavior. This custom event waits for the FruitCollected event trigger to occur, and when that trigger is received, the Fruit Required attribute is decremented by 1 and its new value is displayed in the debug console. A test is then carried out to determine if the value of the Fruit Required attribute is equal to zero, and if it is, the bright yellow, temporary Level Completed scene will be displayed!

Our final task was to modify the Collides with Collectibles event in the Manage Player Collisions actor behavior. We inserted an if … block to test the collectible actor's Collected value; remember that we initialized this value to false in the previous section, Time for action – counting the fruit. If the Collected value for the fruit actor is still false, then it hasn't been collected yet, and the instructions contained within the if … block will be carried out.

Firstly, the fruit actor's Collected value is set to true, which ensures that this event cannot occur again for the same piece of fruit. Next, the FruitCollected custom event in the Score Management scene behavior is triggered. Following that, the do after 0.5 seconds block is executed, and the fruit actor is killed.

We have also added an if … of group is alive check that is carried out before the collectible actor is killed. Because we are killing the actor after a delay of 0.5 seconds, it's good practice to ensure that the actor still exists before we try to kill it! In some games, it may be possible for the actor to be killed by other means during that very short 0.5-second delay, and if we try to kill an actor that does not exist, a runtime error may occur, that is, an error that happens while the game is being played.
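The two events work together, and their combined logic can be sketched as follows. As before, Stencyl expresses this with visual blocks; the function names, the state dictionary, and the returned strings below are invented for illustration:

```python
# Combined sketch of the modified "Collides with Collectibles" event and
# the "FruitCollected" custom event. All names here are illustrative.

def fruit_collected(state):
    state["fruit_required"] -= 1               # decrement Fruit Required
    print("Fruit required:", state["fruit_required"])
    if state["fruit_required"] == 0:
        return "level_completed"               # switch to the Level Completed scene
    return None

def collides_with_collectible(state, fruit):
    if not fruit["Collected"]:                 # only if not yet collected
        fruit["Collected"] = True              # flag it so this runs only once
        return fruit_collected(state)          # trigger the custom event
    return None                                # already collected: do nothing

state = {"fruit_required": 2}
apple = {"Collected": False}
collides_with_collectible(state, apple)   # counts the apple once
collides_with_collectible(state, apple)   # ignored: flag already set
```

Setting the flag before triggering the custom event is what guarantees that each fruit is counted exactly once, no matter how many collision events fire.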
This may result in a technical error message being displayed to the player, and the game cannot continue; this is extremely frustrating for players, and they are unlikely to try to play our game again!

Preventing multiple collisions from being detected

A very common problem experienced by game designers who are new to Stencyl occurs when a collision between two actors is repeatedly detected. When two actors collide, all collision events that have been created with the purpose of responding to that collision will be triggered repeatedly until the collision stops occurring, that is, when the two actors are no longer touching. If, for example, we need to update the value of an attribute when a collision occurs, the attribute might be updated dozens or even hundreds of times in only a few seconds!

In our game, we want collisions between the monkey actor and any single fruit actor to cause only a single update to the Fruit Required attribute. This is why we created the actor value Collected for each fruit actor, and this value is initialized to false, not collected, by the Initialize Fruit Required event in the Score Management scene behavior.

When the Collides with Collectibles event in the Manage Player Collisions actor behavior is triggered, a test is carried out to determine if the fruit actor has already been collected, and if it has, no further instructions are carried out. If we did not have this test, the FruitCollected custom event would be triggered numerous times, and therefore the Fruit Required attribute would be decremented numerous times, causing its value to reach zero almost instantly; all because the monkey collided with a single fruit actor!

Using a Boolean value of True or False to carry out a test in this manner is often referred to by developers as using a flag or Boolean flag.
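A quick sketch makes the failure mode concrete. Collision events fire on every frame while two actors overlap, so without the guard, one touch decrements the counter once per frame. The Python below is an illustration only; the function name, frame count, and dictionaries are invented:

```python
# Demonstration of why the Boolean flag matters: the same collision
# handler run for 60 overlapping frames, with and without the guard.

def on_collision(state, fruit, use_flag):
    if use_flag and fruit.get("Collected"):
        return                                 # flag set: ignore repeats
    if use_flag:
        fruit["Collected"] = True
    state["fruit_required"] -= 1

unguarded = {"fruit_required": 5}
for _frame in range(60):                       # overlapping for ~1 second
    on_collision(unguarded, {}, use_flag=False)

guarded = {"fruit_required": 5}
fruit = {}
for _frame in range(60):
    on_collision(guarded, fruit, use_flag=True)

print(unguarded["fruit_required"], guarded["fruit_required"])  # -55 4
```

Without the flag, the counter plunges far past zero in a single second of contact; with it, exactly one fruit is counted.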
Note that, rather than utilizing an actor value to record whether or not a fruit actor has been collected, we could have created a new attribute and initialized and updated it in the same way that we initialized and updated the actor value. However, this would have required more effort to configure, and there is no perceptible impact on performance when using actor values in this manner. Some Stencyl users never use actor values (preferring to always use attributes instead); however, this is purely a matter of preference, and it is at the discretion of the game designer which method to use.

In order to demonstrate what happens when the actor value Collected is not used to determine whether or not a fruit actor has been collected, we can simply deactivate the set actor value Collected for … of group to true instruction block in the Collides with Collectibles event. After deactivating the block, run the game with the debug console open, and allow the monkey to collide with a single fruit actor. The Fruit Required attribute will instantly be decremented multiple times, causing the level to be completed after colliding with only one fruit actor! Remember to reactivate the set actor value … block before continuing!

Keeping track of the levels

As discussed in the introduction to this article, we're going to be adding an additional level to our game, so we'll need a method for keeping track of a player's progress through the game's levels.

Time for action – adding a game attribute to record the level

1. Create a new number game attribute called Level, with the configuration shown in the following screenshot:
2. Save the game.

What just happened?

We have created a new game attribute called Level, which will be used to record the current level of the game.
A game attribute is being used here because we need to access this value in other scenes within our game; local attributes have their values reset whenever a scene is loaded, whereas game attributes retain their values regardless of whether or not a different scene has been loaded.

Fixing the never-ending game!

We've finished designing and creating the gameplay for the Monkey Run game, and the scoring behaviors are almost complete. However, there is an anomaly with the management of the Lives game attribute. The monkey correctly loses a life when it collides with an enemy actor, but currently, when the countdown expires, the monkey is simply repositioned at the start of the level, and the countdown starts again from the beginning! If we leave the game as it is, the player will have an unlimited number of attempts to complete the level; that's not much of a challenge!

Have a go hero

- (Recommended) In the Countdown expired event, which is found in the Health actor behavior, modify the test for the countdown so that it checks for the countdown timer being exactly equal to zero, rather than the current test, which is for the Countdown attribute being less than 1. We only want the ShowAngel event to be triggered once, when the countdown equals exactly zero!
- (Recommended) Update the game so that the Show Angel event manages the complete process of losing a life, that is, either when a collision occurs between the monkey and an enemy, or when the countdown timer expires. A single event should deduct a life and restart the level.
- (Optional) If we look carefully, we can see that the countdown timer bar starts to grow backwards when the player runs out of time! Update the Display HUD event in the Score Management scene behavior, so that the timer bar is only drawn when the countdown is greater than zero.

There are many different ways to implement the above modifications, so take some time and plan the recommended modifications!
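The reasoning behind the first recommended fix can be sketched in code. With a test of "less than 1", the trigger fires again on every tick once the countdown goes negative; testing for exactly zero fires it once. The Python below is an illustration only; the function name and the event list are invented:

```python
# Sketch of the recommended fix: fire the "Show Angel" trigger only when
# the countdown is exactly zero, not whenever it is below 1.

triggered = []

def decrement_countdown(countdown):
    countdown -= 1
    if countdown == 0:            # was "< 1", which keeps firing at -1, -2, ...
        triggered.append("ShowAngel")
    return countdown

c = 3
for _tick in range(5):
    c = decrement_countdown(c)

print(triggered)  # ['ShowAngel'] -- the event fires exactly once
```

With the original "< 1" test, the same five ticks would have appended the event three times (at 0, -1, and -2), deducting several lives for a single expired countdown.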
Test the game thoroughly to ensure that lives are reduced correctly and the level restarts as expected, both when the monkey collides with an enemy and when the countdown expires. It would certainly be a good idea to review the download file for this session, compare it with your own solutions, and review each event carefully, along with the accompanying comment blocks. There are some useful tips in the example file, so do take the time to have a look!

Summary

Although our game doesn't look vastly different from when we started this article, we have made some very important changes.

First, we implemented a text display to show the countdown timer, so that players of our game can see how much time they have remaining to complete the level. We also imported and configured a font, and used the new font to make the countdown display more visually appealing.

We then implemented a system for tracking the number of lives that the player has left, and this was our first introduction to learning how game attributes can store information that can be carried across scenes.

The most visible change that we implemented in this article was to introduce a timer bar that reduces in size as the countdown decreases. Although very few instruction blocks were required to create the timer bar, the results are very effective, and are less distracting for the player than having to repeatedly look to the top of the screen to read a text display.

The main challenge for players of our game is to collect all the fruit actors in the allocated time, so we created an initialization event to count the number of fruit actors in the scene. Again, this event has been designed to be reusable, as it will always correctly count the fruit actors in any scene. We also implemented the instructions to test when there are no more fruit actors to be collected, so the player can be taken to the next level in the game when they have completed the challenge.
A very important skill that we learned while implementing these instructions was to use actor values as Boolean flags to ensure that collisions are counted only once. Finally, we created a new game attribute to keep track of our players' progress through the different levels in the game.