
How-To Tutorials - 3D Game Development


CryENGINE 3: Breaking Ground with Sandbox

Packt
23 Oct 2012
13 min read
The majority of games created using the CryENGINE SDK have historically been first-person shooters containing a mix of sandbox and directed gameplay. If you have gone so far as to purchase a book on the use of the CryENGINE 3 SDK, then I am certain that you have had some kind of idea for a game, or even improvements to existing games, that you might want to make. It has been my professional experience that if you have any of these ideas and want to share or sell them, ideas presented in a playable format, even in early prototype form, are far more effective and convincing than any PowerPoint presentation or 100-page design document.

Reducing, reusing, recycling

Good practice when creating prototypes and smaller-scale games, especially if you lack the expertise to create certain assets and code, is to reduce, reuse, and recycle. To break down what I mean:

- Reduce the amount of new assets and new code you need to make
- Reuse existing assets and code in new and unique ways
- Recycle the sample assets and code provided, and then convert them for your own uses

Developing with CryENGINE out of the box

As mentioned earlier, the CryENGINE 3 SDK has a huge number of out-of-the-box features for creating games. Let's begin by following a few simple steps to make our first game world. Before proceeding with this example, it's important to understand the features it is displaying: the level we will have created by the end of this article will not be a full, playable game, but rather a unique creation of yours, constructed using the first major features we will need in our game. It will provide an environment into which we can design gameplay.

With the ultimate goal of this article being to create our own level with the core features immediately available to us, keep in mind that these examples are oriented to complement a first-person shooter rather than other genres. The first-person shooter genre is quite well defined, as new games come out every year within it, so it should be fairly easy for any developer to follow these examples. In my career, I have seen that you can indeed accomplish a good cross section of different games with the CryENGINE 3 SDK. However, the third- and first-person genres are significantly easier to create immediately with the example content and features available right out of the box.

For the designers: This article is truly a must-have for designers working with the engine. That said, I would highly recommend that all users of Sandbox know how to use these features, as they are the principal features typically used within most levels of the different types of games made in the CryENGINE.

Time for action - creating a new level

Let's follow a few simple steps to create our own level:

1. Start the Editor.exe application.
2. Select File | New. This will present you with a New Level dialog box that allows you to adjust some principal properties of your masterpiece to come. The following screenshot shows the properties available in New Level.
3. Name this new level Book_Example_1. The name that you choose here will identify this level for loading later, and will also be used for the folder and .cry file of the same name.
4. In the Terrain section of the dialog box, set Heightmap Resolution to 1024x1024, and Meters Per Unit to 1.
5. Click on OK and your new level will begin to load. This should occur relatively fast, but will depend on your computer's specifications. You will know the level has been loaded when you see Ready in the status bar.
You will also see an ocean stretching out infinitely and some terrain slightly underneath the water. Maneuver your camera so that you have a good, overall view of the map you will create, as seen in the following screenshot.

What just happened?

Congratulations! You now have an empty level to mold and modify at your will. Before moving on, let's talk a little about the properties that we just set, as they are fundamental properties of levels within CryENGINE. It is important to understand these because, depending on the type of game you are creating, you may need bigger or smaller maps, or you may not need terrain at all.

Using the right Heightmap Resolution

When we created the new level, we chose a Heightmap Resolution of 1024x1024. To explain this further: each pixel on the heightmap has a certain grey level. This pixel gets applied to the terrain polygons and, depending on the level of grey, will move the polygon on the terrain to a certain height. This is called displacement. Heightmaps always have values varying from full white to full black, where full white is maximum displacement and full black is minimum or no displacement. The higher the resolution of the heightmap, the more pixels are available to represent different features on it. You can thus achieve more definition and a more accurate geometrical representation of your heightmap using higher resolutions. The settings range from the smallest resolution of 128x128 all the way to the largest supported resolution of 8192x8192. The following screenshot shows the difference between high-resolution and low-resolution heightmaps.

Scaling your level with Meters Per Unit

If the Heightmap Resolution parameter is examined in terms of pixel size, then this setting can also be viewed as a Meters Per Pixel parameter. This means that each pixel of the heightmap will be represented by so many meters. For example, if a level uses 4 Meters Per Unit, then each pixel on the generated heightmap will measure 4 meters in length and width on the level. Even though Meters Per Unit can be used to increase the size of your level, it will decrease the fidelity of the heightmap. You will notice that attempting to smooth out the terrain may be difficult, since this value sets a wider minimum triangle size. Keep in mind that you can adjust the unit size even after the map has been created. This is done through the terrain editor, which we will discuss shortly.

Calculating the real-world size of the terrain

The expected size of the terrain can easily be calculated before making the map, because the equation is not complicated:

Heightmap Resolution x Meters Per Unit = Final Terrain Dimensions

For example:

- (128x128) x 2m = 256x256m
- (512x512) x 8m = 4096x4096m
- (1024x1024) x 2m = 2048x2048m

Using or not using terrain

In most cases, levels in CryENGINE will use some amount of the terrain. The terrain itself is a highly optimized system with levels of dynamic tessellation, which adjusts the density of polygons depending on their distance from the camera. Dynamic tessellation keeps the areas of the terrain closer to the camera more defined and the areas further away less defined, as the number of terrain polygons on screen has a significant impact on the performance of the level.
In some cases, however, the terrain can be expensive in terms of performance, and if the game is set in an environment like space or interior corridors and rooms, it might make sense to disable the terrain. Disabling the terrain in these cases will save an immense amount of memory and speed up level loading and runtime performance. In this particular example we will use the terrain, but should you wish to disable it, simply go to the second tab in the RollupBar (usually called the environment tab) and set the ShowTerrainSurface parameter to false, as shown in the following screenshot.

Time for action - creating your own heightmap

You must have created a new map to follow this example. Having sufficiently beaten the terrain system to death through explanation, let's get on with what we are most interested in: creating our own heightmap to use for our game.

1. As discussed in the previous example, you should now see a flat plane of terrain slightly submerged beneath the ocean.
2. At the top of the Sandbox interface, in the main toolbar, you will find a menu selection called Terrain; open this. The following screenshot shows the options available in the Terrain menu.
3. As we want to adjust the terrain, select the Edit Terrain option. This will open the Terrain Editor window, which is shown in the following screenshot.

You can zoom in and pan this window to further inspect areas within the map. Click-and-drag using the right mouse button to pan the view, and use the mouse wheel to zoom in and out. The Terrain Editor window has a multitude of options that can be used to manipulate the heightmap of your level. Before we start painting anything, we should first set the maximum height of the map to something more manageable:

1. Click on Modify.
2. Click on Set Max Height.
3. Set your Max Terrain Height to 256. Note that the terrain height is measured in meters.

Having now set the Max Height parameter, we are ready to paint!

Using a second monitor: This is a good time to take advantage of a second monitor, should you have one, as you can leave the perspective view on your primary monitor and view the changes made in the Terrain Editor on your second monitor in real time.

On the right-hand side of the Terrain Editor, you will see a rollout menu named Terrain Brush. We will first use this to flatten a section of the level. Change the Brush Settings to Flatten, and set the following values:

- Outside Radius = 100
- Inside Radius = 100
- Hardness = 1
- Height = 20

NOTE: You can sample the terrain height in the Terrain Editor or the viewport using the Control shortcut when the flatten brush is selected.

Now paint over the top half of the map. This will flatten the entire upper half of the terrain to 20 meters in height. You will end up with something like the following screenshot, where the dark portion represents the terrain; since it is relatively low compared to our max height, it will appear black.

Note that, by default, the water is set to a height of 16 meters. Since we flattened our terrain to a height of 20 meters, we have a 4-meter difference from the terrain to the water in the center of the map. In the perspective viewport, this will look like a steep cliff going into the water. At the location where the terrain meets the water, it would make sense to turn this into a beach, as it's the most natural way to combine terrain and water. To do this, we will smooth the hard edge of the terrain along the water.
As this is to become our beach area, let's now use the smooth tools to make it passable by the player. Change the Type of brush to Smooth and set the following parameters:

- Outside Radius = 50
- Hardness = 1

I find it significantly easier to gauge the effects of the smooth brush in the perspective viewport. Paint the southern edge of the terrain, which will become our beach. It might be difficult to see the effects of the smooth brush in the Terrain Editor alone, so I recommend using the perspective viewport to paint your beach.

Now that we have what will be our beach, let's sculpt some background terrain. Select the Rise/Lower brush and set the following parameters:

- Outside Radius = 75
- Inside Radius = 50
- Hardness = 0.8
- Height = 1

Before painting, set the Noise Settings for the brush: check Enable Noise, and also set:

- Scale = 5
- Frequency = 25

Paint the outer edges of the terrain while keeping an eye on the perspective viewport to judge the actual height of the mountain-type structure this creates. You can see the results in the Terrain Editor and perspective view, as seen in the following screenshots.

This is a good time to use the shortcut to switch to the smooth brush while painting the terrain: while in the perspective view, switch to the smooth brush using the Shift shortcut. A good technique is to use the Rise/Lower brush and click only a few times, then use Shift to switch to the smooth brush and smooth the same area multiple times. This will give you some nice terrain variation, which will serve us nicely when we go to texture it.

Don't forget the player's perspective: Remember to switch to game mode periodically to inspect your terrain from the player's level. It is often the case that we get caught up in the appearance of a map from our point of view while building it, rather than from the point of view of the player, which is paramount if our game is to be enjoyable to anyone playing it.

Save this map as Book_Example_1_no_color.cry.

What just happened?

In this particular example, we used one of the three different techniques for creating heightmaps within the CryENGINE Sandbox:

1. The first technique, which we performed here, is manually painting the heightmap with a brush directly in Sandbox.
2. The second technique, which we will explore later, is generating procedural terrain using the tools provided in Sandbox.
3. Finally, the third technique is to import a previously created heightmap from another program.

You now have a level with some terrain that looks somewhat like a beach, a flat land area, and some mountains. This is a great place to start for any outdoor map, as it allows us to use some powerful out-of-the-box engine features like the water and the terrain. Having the mountains surround the map also encourages the illusion of more terrain beyond them.

Have a go hero - using additional brush settings

With the settings we just explored, try to add some more terrain variation into the map to customize it further, as per your game's needs. Try using different settings for the brushes we explored previously. You could try adding some islands out in the water off the coast of your beach, or some hills on the flat portion of the map. Use the Inside Radius and Outside Radius settings, which define the brush's falloff: the inner area has the strongest effect and the outer area the least.
To create steeper hills or mountains, set the Inside Radius and Outside Radius to be relatively similar in size. To get a shallower and smoother hill, set the Inside Radius and Outside Radius further apart. Finally, try using the Hardness, which acts like the pressure applied to a brush by a painter on a canvas. A good way to explain this: if the Hardness is set to 1, then one click will produce the desired height; if it is set to 0.01, it will take 100 clicks to achieve an identical result. You can save these variations into different .cry files should you wish to do so.

Introducing the Building Blocks for Unity Scripts

Packt
11 Oct 2013
15 min read
Using the term method instead of function

You are constantly going to see the words function and method used everywhere as you learn Unity. In Unity, the words function and method mean the same thing. Since you are studying C#, and C# is an Object-Oriented Programming (OOP) language, I will use the word "method" throughout this article, just to be consistent with C# guidelines. It makes sense to learn the correct terminology for C#. Also, UnityScript and Boo are OOP languages; the authors of the Scripting Reference probably should have used the word method instead of function in all documentation. From now on I'm going to use the words method or methods in this article, and when I refer to the functions shown in the Scripting Reference, I'm going to use the word method instead, just to be consistent throughout this article.

Understanding what a variable does in a script

What is a variable? Technically, it's a tiny section of your computer's memory that will hold any information you put there. While a game runs, it keeps track of where the information is stored, the value kept there, and the type of the value. However, for this article, all you need to know is how a variable works in a script. It's very simple. What's usually in a mailbox, besides air? Well, usually there's nothing, but occasionally there is something in it. Sometimes there's money (a paycheck), bills, a picture from Aunt Mabel, a spider, and so on. The point is that what's in a mailbox can vary. Therefore, let's call each mailbox a variable instead.

Naming a variable

Using the picture of the country mailboxes, if I asked you to see what is in the mailbox, the first thing you'd ask is: which one? If I said the Smith mailbox, or the brown mailbox, or the round mailbox, you'd know exactly which mailbox to open to retrieve what is inside. Similarly, in scripts, you have to give your variables unique names. Then I can ask you what's in the variable named myNumber, or whatever cool name you might use.

A variable name is just a substitute for a value

As you write a script and make a variable, you are simply creating a placeholder or a substitute for the actual information you want to use. Look at the following simple math equation:

2 + 9 = 11

Simple enough. Now try the following equation:

11 + myNumber = ???

There is no answer to this yet. You can't add a number and a word. Going back to the mailbox analogy, write the number 9 on a piece of paper and put it in the mailbox named myNumber. Now you can solve the equation. What's the value in myNumber? The value is 9. So now the equation looks normal:

11 + 9 = 20

The myNumber variable is nothing more than a named placeholder to store some data (information). So anywhere you would like the number 9 to appear in your script, just write myNumber, and the number 9 will be substituted. Although this example might seem silly at first, variables can store all kinds of data that is much more complex than a simple number. This is just a simple example to show you how a variable works.

Time for action - creating a variable and seeing how it works

Let's see how this actually works in our script. Don't be concerned about the details of how to write this, just make sure your script is the same as the script shown in the next screenshot.

1. In the Unity Project panel, double-click on LearningScript.
2. In MonoDevelop, write lines 6, 11, and 13 from the next screenshot (a reconstruction follows below).
3. Save the file.
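The screenshot itself isn't reproduced here, so the following is a minimal sketch of what the script being described might look like. Only the variable (line 6) and the two equations (lines 11 and 13) are given by the text; the Debug.Log calls and the public modifier are my assumptions (a public variable is what later lets you change myNumber in the Inspector):

    using UnityEngine;
    using System.Collections;

    public class LearningScript : MonoBehaviour
    {
        // Line 6 in the book's screenshot: a named placeholder storing 9
        public int myNumber = 9;

        void Start ()
        {
            // Line 11: a plain equation with no variable involved
            Debug.Log(2 + 9);

            // Line 13: myNumber is substituted with the value it stores
            Debug.Log(11 + myNumber);
        }
    }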
To make this script work, it has to be attached to a GameObject. Currently, in our State Machine project we only have one GameObject, the Main Camera. This will do nicely, since this script doesn't affect the Main Camera in any way; the script simply runs by virtue of being attached to a GameObject.

1. Drag LearningScript onto the Main Camera.
2. Select Main Camera so that it appears in the Inspector panel.
3. Verify that LearningScript is attached.
4. Open the Unity Console panel to view the output of the script.
5. Click on Play.

The preceding steps are shown in the following screenshot.

What just happened?

In the Console panel is the result of our equations. As you can see, the equation on line 13 worked by substituting the number 9 for the myNumber variable.

Time for action - changing the number 9 to a different number

Since myNumber is a variable, the value it stores can vary. If we change what is stored in it, the answer to the equation will change too. Follow the ensuing steps:

1. Stop the game and change 9 to 19.
2. Notice that when you restart the game, the answer will be 30.

What just happened?

You learned that a variable works by a simple process of substitution. There's nothing more to it than that. We didn't get into the details of the wording used to create myNumber, or the types of variables you can create, but that wasn't the intent. This was just to show you how a variable works. It just holds data so you can use that data elsewhere in your script.

Have a go hero - changing the value of myNumber

In the Inspector panel, try changing the value of myNumber to some other value, even a negative value. Notice the change in the answer in the Console.

Using a method in a script

Methods are where the action is and where the tasks are performed. Great, that's really nice to know, but what is a method?

What is a method?

When we write a script, we are making lines of code that the computer is going to execute, one line at a time. As we write our code, there will be things we want our game to execute more than once. For example, we can write code that adds two numbers. Suppose our game needs to add those two numbers together a hundred different times during the game. So you say, "Wow, I have to write the same code a hundred times that adds two numbers together. There has to be a better way." Let a method take away your typing pain. You just have to write the code to add two numbers once, and then give this chunk of code a name, such as AddTwoNumbers(). Now, every time our game needs to add two numbers, don't write the code over and over; just call the AddTwoNumbers() method.

Time for action - learning how a method works

We're going to edit LearningScript again. In the following screenshot, there are a few lines of code that look strange. We are not going to get into the details of what they mean in this article; right now, I am just showing you a method's basic structure and how it works:

1. In MonoDevelop, select LearningScript for editing.
2. Edit the file so that it looks exactly like the following screenshot.
3. Save the file.

What's in this script file?

In the previous screenshot, lines 6 and 7 will look familiar to you; they are variables, just as you learned in the previous section. There are two of them this time. These variables store the numbers that are going to be added. Line 16 may look very strange to you. Don't concern yourself right now with how this works. Just know that it's a line of code that lets the script know when the Return/Enter key is pressed (see the sketch below).
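Again, the actual screenshot isn't available here, so this sketch shows what the described script plausibly looks like, with comments marking the line numbers the text refers to. The Input.GetKeyDown(KeyCode.Return) call standing in for line 16 is my assumption; the book only says that line detects the Return/Enter key:

    using UnityEngine;
    using System.Collections;

    public class LearningScript : MonoBehaviour
    {
        // Lines 6 and 7: the two numbers that will be added
        public int number1 = 2;
        public int number2 = 9;

        void Update ()
        {
            // Line 16: fires once when the Return/Enter key is pressed
            if (Input.GetKeyDown(KeyCode.Return))
                AddTwoNumbers();   // line 17: the method call
        }

        void AddTwoNumbers ()      // line 20: the method definition
        {                          // line 21: opening curly-brace
            // Line 22: add the variables and display the answer
            Debug.Log(number1 + number2);
        }                          // line 23: closing curly-brace
    }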
Press the Return/Enter key when you want to add the two numbers together. Line 17 is where the AddTwoNumbers() method gets called into action. In fact, that's exactly how to describe it: this line of code calls the method. Lines 20, 21, 22, and 23 make up the AddTwoNumbers() method. Don't be concerned about the code details yet; I just want you to understand how calling a method works.

Method names are substitutes too

You learned that a variable is a substitute for the value it actually contains. Well, a method is no different. Take a look at line 20 from the previous screenshot:

    void AddTwoNumbers ()

AddTwoNumbers() is the name of the method. Like a variable, AddTwoNumbers() is nothing more than a named placeholder in memory, but this time it stores some lines of code instead. So anywhere we would like to use the code of this method in our script, we just write AddTwoNumbers(), and the code will be substituted. Line 21 has an opening curly-brace and line 23 has a closing curly-brace. Everything between the two curly-braces is the code that is executed when this method is called in our script. Look at line 17 from the previous screenshot:

    AddTwoNumbers();

The method named AddTwoNumbers() is called. This means that the code between the curly-braces is executed. It's like having all of the code of the method right there on line 17. Of course, this AddTwoNumbers() method has only one line of code to execute, but a method could have many lines of code. Line 22 is the action part of this method, the part between the curly-braces. This line of code adds the two variables together and displays the answer in the Unity Console.

Then, follow the ensuing steps:

1. Go back to Unity and have the Console panel showing.
2. Now click on Play.

What just happened?

Oh no! Nothing happened! Actually, as you sit there looking at the blank Console panel, the script is running perfectly, just as we programmed it. Line 16 in the script is waiting for you to press the Return/Enter key. Press it now. And there you go! The following screenshot shows you the result of adding two variables together that contain the numbers 2 and 9.

Line 16 waited for you to press the Return/Enter key. When you did, line 17 executed, which called the AddTwoNumbers() method. This allowed the code block of the method, line 22, to add the values stored in the variables number1 and number2.

Have a go hero - changing the output of the method

While Unity is in Play mode, select the Main Camera so its Components show in the Inspector. In the Inspector panel, locate Learning Script and its two variables. Change the values, currently 2 and 9, to different values. Make sure to click your mouse in the Game panel so it has focus, then press the Return/Enter key again. You will see the result of the new addition in the Console.

You just learned how a method works to allow a specific block of code to be called to perform a task. We didn't get into any of the wording details of methods here; this was just to show you fundamentally how they work.

Introducing the class

The class plays a major role in Unity. In fact, what Unity does with a class is a little piece of magic when Unity creates Components. You just learned about variables and methods. These two items are the building blocks used to build Unity scripts. The term script is used everywhere in discussions and documents. Look it up in the dictionary and it can be generally described as written text. Sure enough, that's what we have.
However, since we aren't just writing a screenplay or passing a note to someone, we need to learn the actual terms used in programming. Unity calls the code it creates a C# script. However, people like me have to teach you some basic programming skills and tell you that a script is really a class. In the previous section about methods, we created a class (script) called LearningScript. It contained a couple of variables and a method. The main concept or idea of a class is that it's a container of data, stored in variables, and of methods that process that data in some fashion. Because I don't want to constantly write class (script), I will be using the word script most of the time. However, I will also be using class when getting more specific with C#. Just remember that a script is a class that is attached to a GameObject. The State Machine classes will not be attached to any GameObjects, so I won't be calling them scripts.

By using a little Unity magic, a script becomes a Component

While working in Unity, we wear the following two hats:

- A Game-Creator hat
- A Scripting (programmer) hat

When we wear our Game-Creator hat, we will be developing our Scene, selecting GameObjects, and viewing Components; just about anything except writing our scripts. When we put our Scripting hat on, our terminology changes as follows:

- We're writing code in scripts using MonoDevelop
- We're working with variables and methods

The magic happens when you put your Game-Creator hat back on and attach your script to a GameObject. Wave the magic wand (ZAP!) and the script file is now called a Component, and the public variables of the script are now the properties you can see and change in the Inspector panel.

A more technical look at the magic

A script is like a blueprint or a written description. In other words, it's just a single file in a folder on our hard drive. We can see it right there in the Projects panel. It can't do anything just sitting there. When we tell Unity to attach it to a GameObject, we haven't created another copy of the file; all we've done is tell Unity we want the behaviors described in our script to be a Component of the GameObject. When we click on the Play button, Unity loads the GameObject into the computer's memory. Since the script is attached to a GameObject, Unity also has to make a place in the computer's memory to store a Component as part of the GameObject. The Component has the capabilities specified in the script (blueprint) we created.

Even more Unity magic

There's some more magic you need to be aware of: scripts inherit from MonoBehaviour. For beginners to Unity, studying C# inheritance isn't a subject you need to learn in any great detail, but you do need to know that each Unity script uses inheritance. We see the code in every script that will be attached to a GameObject. In LearningScript, the code is on line 4:

    public class LearningScript : MonoBehaviour

The colon and the last word of that code mean that the LearningScript class is inheriting behaviors from the MonoBehaviour class. This simply means that the MonoBehaviour class is making a few of its variables and methods available to the LearningScript class. It's no coincidence that the variables and methods inherited look just like some of the code we saw in the Unity Scripting Reference. The following are the two inherited behaviors in LearningScript:

- Line 9: void Start ()
- Line 14: void Update ()

The magic is that you don't have to call these methods; Unity calls them automatically.
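As a sketch, stripped down to just that inherited pair (the comments are mine, not the book's), the script skeleton looks like this:

    using UnityEngine;

    public class LearningScript : MonoBehaviour
    {
        // Line 9: Unity calls Start once, before the first frame
        void Start ()
        {
        }

        // Line 14: Unity calls Update every frame while the game runs
        void Update ()
        {
        }
    }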
So the code you place in these methods gets executed automatically.

Have a go hero - finding Start and Update in the Scripting Reference

Try a search on the Scripting Reference for Start and Update to learn when each method is called by Unity and how often. Also search for MonoBehaviour. This will show you that since our script inherits from MonoBehaviour, we are able to use the Start() and Update() methods.

Components communicating using the Dot Syntax

Our script has variables to hold data, and our script has methods to allow tasks to be performed. I now want to introduce the concept of communicating with other GameObjects and the Components they contain. Communication between one GameObject's Components and another GameObject's Components using Dot Syntax is a vital part of scripting; it's what makes interaction possible. We need to communicate with other Components or GameObjects to be able to use the variables and methods in other Components.

What's with the dots?

When you look at code written by others, you'll see words with periods separating them. What the heck is that? It looks complicated, doesn't it? The following is an example from the Unity documentation:

    transform.position.x

Don't concern yourself with what the preceding code means, as that comes later; I just want you to see the dots. That's called the Dot Syntax. The following is another example. It's the fictitious address of my house:

    USA.Vermont.Essex.22MyStreet

Looks funny, doesn't it? That's because I used the syntax (grammar) of C# instead of the post office. However, I'll bet that if you look closely, you can easily figure out how to find my house.

Summary

This article introduced you to the basic concepts of variables, methods, and the Dot Syntax. These building blocks are used to create scripts and classes. Understanding how these building blocks work is critical so that you don't feel lost later. We discovered that a variable name is a substitute for the value it stores; a method name is a substitute for a block of code; and when a script or class is attached to a GameObject, it becomes a Component. The Dot Syntax is just like an address to locate GameObjects and Components. With these concepts under your belt, we can proceed to learn the details of the sentence structure, the grammar, and the syntax used to work with variables, methods, and the Dot Syntax.

Let There be Light!

Packt
04 Oct 2013
8 min read
Basic light sources

You use lights to give a scene brightness, ambience, and depth. Without light, everything looks flat and dull. Use additional light sources to even out lighting and to set up interior scenes. In Unity, lights are components of GameObjects. The different kinds of light sources are as follows:

- Directional lights: These lights are commonly used to mimic the sun. Their position is irrelevant, as only orientation matters. Every architectural scene should have at least one main Directional light. When you only need to light up an interior room, they are trickier to use, as they tend to brighten the whole scene, but they help get some light through the windows and inside the project. We'll see a few use cases in the next few sections.
- Point lights: These lights are easy to use, as they emit light in all directions. Try to minimize their Range so they don't spill light into other places. In most scenes, you'll need several of them to balance out dark spots and corners and to even out the overall lighting.
- Spot lights: These lights only emit light into a cone and are good for simulating interior light fixtures. They cast a distinct, bright circular light spot, so use them to highlight something.
- Area lights: These are the most advanced lights, as they allow a light source to be given an actual rectangular size. This results in smoother lights and shadows, but their effect is only visible when baking, and they require a pro-license. They are good for simulating light panels or the effect of light coming in through a window. In the free version, you can simulate them using multiple Spot or Directional lights.

Shadows

Most current games support some form of shadows. They can be pre-calculated or rendered in real time. Pre-calculation means that the effect of shadows and lighting is calculated in advance and rendered onto an additional material layer. It only makes sense for objects that don't move in the scene. Real-time shadows are rendered using the GPU, but can be computationally expensive and should only be used for dynamic lighting. You might be familiar with real-time shadows from applications such as SketchUp and recent versions of ArchiCAD or Revit. Ideally, both techniques are combined: the overall lighting of the scene (for example, buildings, streets, interiors, and so on) is pre-calculated and baked into texture maps, while additional real-time shadows are used on the moving characters. Unity can blend both types of shadows to simulate dynamic lighting in large scenes. Some of these techniques, however, are only supported in the pro-version.

Real-time shadows

Imagine we want to create a sun or shadow study of a building. This is best appreciated in real time and by looking from the outside. We will use the same model as we did in the previous article, but load it in a separate scene. We want to have a light object acting as a sun, a spherical object acting as a visual clue to where the sun is positioned, and to link them together to control the rotations in an easy way. The steps to achieve this are as follows:

1. Add a Directional light, name it SunLight, and choose the Shadow Type. Hard shadows are more sharply defined and are the best choice in this example, whereas Soft shadows look smoother and are better suited to subtle mood lighting.
2. Add an empty GameObject by navigating to GameObject | Create Empty, position it in the center of the scene, and name it ORIGIN.
3. Create a Sphere GameObject by navigating to GameObject | Create Other | Sphere and name it VisualSun. Make it a child of ORIGIN by dragging the VisualSun name in the Hierarchy tab onto the ORIGIN name.
4. Give it a bright, yellow material, using a Self-Illumin/Diffuse shader. Deactivate Cast Shadows and Receive Shadows on the Mesh Renderer component.
5. After you have placed the VisualSun as a child of the origin object, reset the position of the Sphere to 0 for X, Y, and Z. It now sits in the same place as its parent. Even if you move the parent, its local position stays at X=0, Y=0, and Z=0, which makes it convenient for placement relative to its parent object.
6. Alter the Z-position to define an offset from the origin, for example, -20 units. The negative Z will facilitate the SunLight orientation in the next step.
7. The SunLight can be dragged onto the VisualSun and its local position reset to zero as well. When all rotations are also zero, it emits light down the Z-axis and thus straight at the origin.
8. If you want a nice glow effect, you can add a Halo by navigating to Components | Effects | Halo, then to SunLight, and setting a suitable Size.

We now have a hierarchic structure of the origin, the visible sphere, and the Directional light, accentuated by the halo. We can adjust this assembly by rotating the origin around. Rotating around the Y-axis defines the orientation (azimuth) of the sun, whereas a rotation around the X-axis defines its height above the horizon. With these two rotations, we can position the sun wherever we want.

Lightmapping

Real-time lighting is computationally very expensive. If you don't have the latest hardware, it might not even be supported. Or you might avoid it for a mobile app, where hardware resources are limited. It is possible to pre-calculate the lighting of a scene and bake it onto the geometry as textures. This process is called Lightmapping; for more information on it visit: http://docs.unity3d.com/Documentation/Manual/Lightmapping.html

While the actual calculations are rather complex, the process in Unity is made easy thanks to the integrated Beast Lightmapping. There are a few things you need to set up properly:

1. First, ensure that any object that needs to be baked is set to Static. Each GameObject has a static toggle, right next to the Name property. Activate this for all models and light objects that will not move in the Scene.
2. Secondly, ensure that all geometry has a second set of texture coordinates, called UV2 coordinates in Unity. Default GameObjects have these set up, but for imported models they usually need to be added. Luckily for us, this is automated when Generate Lightmap UVs is activated in the model import settings, given earlier in Quick Walk Around Your Design, in the section entitled Controlling the import settings.

If all lights and meshes are static and UV2 coordinates are calculated, you are ready to go. Open the Lightmapping dialog by navigating to Window | Lightmapping and dock it somewhere convenient. There are several settings, but we start with a basic setup that consists of the following steps:

1. Usually a Single Lightmap suffices. Dual Lightmaps can look better, but require the deferred rendering method that is only supported in Unity Pro.
2. Choose the Quality High mode. Quality Low gives jagged edges and is only used for quick testing.
3. Activate Ambient Occlusion as a quick additional rendering step that darkens corners and occluded areas, such as where objects touch.
This adds a real sense of depth and is highly recommended. Set both sliders somewhere in the middle and leave the distance at 0.1 to control how far the system will look to detect occlusions.

Start with a fairly low Resolution, such as 5 or 10 texels per world unit. This defines how detailed the calculated Lightmap texture is when compared to the geometry. Look at the Scene view to see a checkered overlay while adjusting Lightmapping settings. For final results, increase this to 40 or 50 to give more detail to the shadows, at the cost of longer rendering times.

There are additional settings for which Unity Pro is required, such as Sky Light and Bounced Lighting. They both add to the realism of the lighting, so they are highly recommended for architectural visualization if you have the pro-version. On the Object sub-tab, you can also tweak the shadow calculation settings for individual lights. By increasing the radius, you get a smoother shadow edge, at the cost of longer rendering times. If you increase the radius, you should also increase the number of samples, which helps reduce the noise that sampling adds. This is shown in the following screenshot.

Now you can go on and click Bake Scene. It can take quite some time for large models and fine resolutions. Check the blue time indicator on the right side of the status bar (you can continue working in Unity in the meantime). After the calculations are finished, the model is reloaded with the new texture, and baked shadows are visible in the Scene and Game views, as shown in the following screenshot.

Beware that to bake a Scene, it needs to be saved and given a name, as Unity places the calculated Lightmap textures in a subfolder with the same name as the Scene.

Summary

In this article we learned about the use of different light sources and shadows. To avoid the heavy burden of real-time shadows, we discussed the use of the Lightmapping technique to bake lights and shadows onto the model, from within Unity.

Miscellaneous Gameplay Features

Packt
01 Mar 2013
13 min read
How to have a sprinting player use up energy

Torque 3D's Player class has three main modes of movement over land: sprinting, running, and crouching. Some games are designed to allow a player to sprint as much as they want, perhaps with other limitations while sprinting; this is the default method of sprinting in the Torque 3D templates. Other game designs allow the player to sprint only in short bursts before the player becomes "tired". In this recipe, we will learn how to set up the Player class such that sprinting uses up a pool of energy that slowly recharges over time; when that energy is depleted, the player is no longer able to sprint.

How to do it...

We are about to modify a PlayerData Datablock instance so that sprinting drains the player's energy:

1. Open your player's Datablock in a text editor, such as Torsion. The Torque 3D templates have the DefaultPlayerData Datablock in art/datablocks/player.cs.
2. Find the sprinting section of the Datablock instance and make the following changes:

    sprintForce = 4320;
    sprintEnergyDrain = 0.6;  // Sprinting now drains energy
    minSprintEnergy = 10;     // Minimum energy to sprint
    maxSprintForwardSpeed = 14;
    maxSprintBackwardSpeed = 8;
    maxSprintSideSpeed = 6;
    sprintStrafeScale = 0.25;
    sprintYawScale = 0.05;
    sprintPitchScale = 0.05;
    sprintCanJump = true;

3. Start up the game and have the player sprint. Sprinting should now be possible for about 5.5 seconds before the player falls back to a run. If the player stops sprinting for about 7.5 seconds, their energy will be fully recharged and they will be able to sprint again.

How it works...

The maxEnergy property on the PlayerData Datablock instance determines the maximum amount of energy a player has. All of Torque 3D's templates set it to a value of 60. This energy may be used for a number of different activities (such as jet jumping), and even certain weapons may draw from it. By setting the sprintEnergyDrain property on the PlayerData Datablock instance to a value greater than zero, the player's energy will be drained every tick (about one thirty-second of a second) by that amount. When the player's energy reaches zero, they may no longer sprint and revert back to running.

Using our previous example, we have a sprintEnergyDrain of 0.6 units per tick, which works out to 19.2 units per second. Given that our DefaultPlayerData maxEnergy property is 60 units, we should run out of sprint energy in 3.125 seconds. However, we were able to sprint for about 5.5 seconds in our example before running out of energy. Why is this?

A second PlayerData property affects energy use over time: rechargeRate. This property determines how much energy is restored to the player per tick, and is set to 0.256 units in DefaultPlayerData. When we take both the sprintEnergyDrain and rechargeRate properties into account, we end up with an effective drain of (0.6 - 0.256) = 0.344 units per tick while sprinting. Assuming the player begins with the maximum amount of energy allowed by DefaultPlayerData, this works out to 60 units / (0.344 units per tick x 32 ticks per second) = 5.45 seconds of sprinting.

The final PlayerData property that affects sprinting is minSprintEnergy. This property determines the minimum player energy level required before being able to sprint.
When this property is greater than zero, a player may continue to sprint until their energy reaches zero, but cannot sprint again until they have regained at least a minSprintEnergy amount of energy.

There's more...

Let's continue our discussion of player sprinting and energy use.

Balancing energy drain versus recharge rate

With everything set up as described previously, every tick the sprinting player's energy pool is reduced by the value of the sprintEnergyDrain property of PlayerData and increased by the value of the rechargeRate property. This means that in order for the player's energy to actually drain, the sprintEnergyDrain property must be greater than the rechargeRate property. As a player's energy may be used for other gameplay elements (such as jet jumping or weapons fire), we may sometimes forget this relationship while tuning the rechargeRate property and end up breaking a player's ability to sprint (or making them sprint far too long).

Modifying other sprint limitations

The way the DefaultPlayerData Datablock instance is set up in all of Torque 3D's templates, there are already limitations placed on sprinting without making use of an energy drain. These include not being able to rotate the player as fast as when running, and limited strafing ability. Making sprinting rely on the amount of energy a player has is often enough of a limitation, and the other default limitations may be removed or reduced. In the end, it depends on the type of game we are making.

To change how much the player is allowed to rotate while sprinting, we modify the sprintYawScale and sprintPitchScale properties of PlayerData. These two properties represent the fraction of rotation allowed while sprinting compared with running, and both default to 0.05. To change how much the player is allowed to strafe while sprinting, we modify the sprintStrafeScale property of PlayerData. This property is the fraction of the strafing movement allowed while running, and defaults to 0.25.

Disabling sprint

During a game we may want to disable a player's sprinting ability. Perhaps they are too injured, or are carrying too heavy a load. To allow or disallow sprinting for a specific player, we call the following Player class method on the server:

    Player.allowSprinting( allow );

In the previous code, the allow parameter is set to true to allow a player the ability to sprint, and to false to not allow a player to sprint at all. This method is used by the standard weapon mounting system in scripts/server/weapon.cs to disable sprinting: if the ShapeBaseImageData Datablock instance for the weapon has a dynamic property sprintDisallowed set to true, the player may not sprint while holding that weapon. The DeployableTurretImage Datablock instance makes use of this by not allowing the player to sprint while holding a turret.

Enabling and disabling air control

Air control is a fictitious force used by a number of games that allows a player to control their trajectory while falling or jumping in the air. Instead of just falling or jumping and hoping for the best, this allows the player to change course as necessary, and trades realism for playability. We can find this type of control in first-person shooters, platformers, and adventure games. In this recipe we will learn how to enable or disable air control for a player, as well as limit its effect while in use.

How to do it...
We are about to modify a PlayerData Datablock instance to enable complete air control:

1. Open your player's Datablock in a text editor, such as Torsion. The Torque 3D templates have the DefaultPlayerData Datablock instance in art/datablocks/player.cs.
2. Find the section of the Datablock instance that contains the airControl property and make the following change:

    jumpForce = "747";
    jumpEnergyDrain = 0;
    minJumpEnergy = 0;
    jumpDelay = "15";

    // Set to maximum air control
    airControl = 1.0;

3. Start up the game and jump the player off a building or a sand dune. While in the air, press one of the standard movement keys: W, A, S, or D. We now have full trajectory control of the player while they are in the air, as if they were running.

How it works...

If the player is not in contact with any surface and is not swimming, the airControl property of PlayerData is multiplied against the player's requested direction of travel. This multiplication only happens along the world's XY plane and does not affect vertical motion. Setting the airControl property of PlayerData to a value of 0 will disable all air control. Setting the airControl property to a value greater than 1 will cause the player to move faster in the air than they can run.

How to jump jet

In game terms, a jump jet is often a backpack, a helicopter hat, or a similar device that a player wears, which provides a short thrust upwards and often uses up a limited energy source. This allows a player to reach a height they normally could not, jump a canyon, or otherwise get out of danger or reach a reward. In this recipe we will learn how to allow a player to jump jet.

Getting ready

We will be making TorqueScript changes in a project based on the Torque 3D Full template, using the Empty Terrain level. If you haven't already, use the Torque Project Manager (Project Manager.exe) to create a new project from the Full template. It will be found under the My Projects directory. Then start up your favorite script editor, such as Torsion, and let's get going!

How to do it...

We are going to modify the player's Datablock instance to allow for jump jetting and adjust how the user triggers the jump jet:

1. Open the art/datablocks/player.cs file in your text editor.
2. Find the DefaultPlayerData Datablock instance and, just below the section on jumping and air control, add the following code:

    // Jump jet
    jetJumpForce = 500;
    jetJumpEnergyDrain = 3;
    jetMinJumpEnergy = 10;

3. Open scripts/main.cs and make the following addition to the onStart() function:

    function onStart()
    {
       // Change the jump jet trigger to match a regular jump
       $player::jumpJetTrigger = 2;

       // The core does initialization which requires some of
       // the preferences to be loaded... so do that first.
       exec( "./client/defaults.cs" );
       exec( "./server/defaults.cs" );

       Parent::onStart();
       echo("\n--------- Initializing Directory: scripts ---------");

       // Load the scripts that start it all...
       exec("./client/init.cs");
       exec("./server/init.cs");

       // Init the physics plugin.
       physicsInit();

       // Start up the audio system.
       sfxStartup();

       // Server gets loaded for all sessions, since clients
       // can host in-game servers.
       initServer();

       // Start up in either client, or dedicated server mode
       if ($Server::Dedicated)
          initDedicated();
       else
          initClient();
    }

4. Start our Full template game and load the Empty Terrain level.
5. Hold down the Space bar to cause the player to fly straight up for a few seconds. The player will then fall back to the ground.
Once the player has regained enough energy, it will be possible to jump jet again.

How it works...

The only property that is required for jump jetting to work is the jetJumpForce property of the PlayerData Datablock instance. This property determines the amount of continuous force applied to the player object to send them flying up in the air. It takes some trial and error to determine what force works best.

Other useful Datablock properties are jetJumpEnergyDrain and jetMinJumpEnergy. These two PlayerData properties make jump jetting use up a player's energy. When the energy runs out, the player may no longer jump jet until enough energy has recharged. The jetJumpEnergyDrain property is how much energy per tick is drained from the player's energy pool, and the jetMinJumpEnergy property is the minimum amount of energy the player needs in their energy pool before they can jump jet again. Please see the How to have a sprinting player use up energy recipe for more information on managing a player's energy use.

Another change we made in our previous example is to define which move input trigger number will cause the player to jump jet. This is defined using the global $player::jumpJetTrigger variable. By default, this is set to trigger 1, which is usually the same as the right mouse button. However, all of the Torque 3D templates make use of the right mouse button for view zooming (as defined in scripts/client/default.bind.cs). In our previous example, we modified the global $player::jumpJetTrigger variable to use trigger 2, which is usually the same as for regular jumping, as defined in scripts/client/default.bind.cs:

    function jump(%val)
    {
       // Touch move trigger 2
       $mvTriggerCount2++;
    }

    moveMap.bind( keyboard, space, jump );

This means that we now have jump jetting working off the same key binding as regular jumping, which is the Space bar. Holding down the Space bar will now cause the player to jump jet, unless they do not have enough energy to do so. Without enough energy, the player will just do a regular jump with their legs.

There's more...

Let's continue our discussion of using a jump jet.

Jump jet animation sequence

If the shape used by the Player object has a Jet animation sequence defined, it will play while the player is jump jetting. This sequence plays instead of all other action sequences. The hierarchy, or order, of action sequences that the Player class uses to determine which action sequence to play is as follows:

1. Jump jetting
2. Falling
3. Swimming
4. Running (known internally as the stand pose)
5. Crouching
6. Prone
7. Sprinting

Disabling jump jetting

During a game we may no longer want to allow a player to jump jet. Perhaps they have run out of fuel, or they have removed the device that allowed them to jump jet. To allow or disallow jump jetting for a specific player, we call the following Player class method on the server:

    Player.allowJetJumping( allow );

In the previous code, the allow parameter is set to true to allow a player to jump jet, and to false to not allow them to jump jet at all.

More control over the jump jet

The PlayerData Datablock instance has some additional properties to fine-tune a player's jump jet capability. The first is the jetMaxJumpSpeed property. This property determines the maximum vertical speed at which the player may use their jump jet. If the player is moving upwards faster than this, they may not engage their jump jet. The second is the jetMinJumpSpeed property.
This property is the minimum vertical speed of the player before a speed multiplier is applied. If the player's vertical speed is between jetMinJumpSpeed and jetMaxJumpSpeed, the applied jump jet speed is scaled up by a relative amount. This helps ensure that the jump jet will always make the player move faster than their current speed, even if the player's current vertical speed is the result of some other event (such as being thrown by an explosion).

Summary

These recipes will help you fully utilize the gameplay features of Torque 3D and make your game more interesting and powerful. The tips and tricks mentioned in the recipes will surely help in making the game more real, more fun to play, and much more intriguing.

Using the Mannequin editor

Packt
07 Sep 2015
14 min read
In this article, Richard Marcoux, Chris Goodswen, Riham Toulan, and Sam Howels, the authors of the book CRYENGINE Game Development Blueprints, will take us through animation in CRYENGINE. In the past, animation states were handled by a tool called Animation Graph. This was akin to Flow Graph, but handled animations and transitions for all animated entities, and unfortunately reduced any transitions or variation in the animations to a spaghetti graph. Thankfully, we now have Mannequin! This is an animation system in which the methods by which animation states are handled are all dealt with behind the scenes; all we need to take care of are the animations themselves. In Mannequin, an animation and its associated data is known as a fragment. Any extra detail that we might want to add (such as animation variation, styles, or effects) can be very simply layered on top of the fragment in the Mannequin editor. While complex and detailed results can be achieved with all manner of first- and third-person animation in Mannequin, for level design we're only really interested in the basic fragments we want our NPCs to play as part of flavor and readability within level scripting. Before we look at generating some new fragments, we'll start by looking at how we can add detail to an existing fragment: triggering a flare particle as part of our flare firing animation.

Getting familiar with the interface

First things first, let's open Mannequin! Go to View | Open View Pane | Mannequin Editor. This is initially quite a busy view pane, so let's get our bearings on what's important to our work. You may want to drag and adjust the sizes of the windows to better see the information displayed. In the top left, we have the Fragments window. This lists all the fragments in the game that pertain to the currently loaded preview. Let's look at what this means for us when editing fragment entries.

The preview workflow

A preview is a complete list of fragments that pertains to a certain type of animation. For example, the default preview loaded is sdk_playerpreview1p.xml, which contains all the first-person fragments used in the SDK. You can browse the list of fragments in this window to get an idea of what this means: everything from climbing ladders to sprinting is defined as a fragment. However, we're interested in the NPC animations. To change the currently loaded preview, go to File | Load Preview Setup and pick sdk_humanpreview.xml. This is the XML file that contains all the third-person animations for human characters in the SDK. Once this is loaded, your fragment list should update to display a larger list of fragments usable by AI. This is shown in the following screenshot.

If you don't want to perform this step every time you load Mannequin, you can change the default preview setup for the editor in the preferences. Go to Tools | Preferences | Mannequin | General and change the Default Preview File setting to the XML of your choice.

Working with fragments

Now that we have the correct preview populating our fragment list, let's find our flare fragment. In the box with <FragmentID Filter> in it, type flare and press Enter. This will filter down the list, leaving you with the fireFlare fragment we used earlier. You'll see that the fragment is composed of a tree. Expanding this tree one level brings us to the tag. A tag in Mannequin is a method of choosing animations within a fragment based on a game condition.
For example, in the player preview we were in earlier, the begin_reload fragment has two tags: one for SDKRifle and one for SDKShotgun. Depending on the weapon selected by the player, the game applies a different tag and consequently picks a different animation. This allows animators to group together animations of the same type that are required in different situations. As there are no differing scenarios of this type for our fireFlare fragment, it simply has a <default> tag. This is shown in the following screenshot:

Inside this tag, we can see there's one fragment entry: Option 1. These are the possible variations that Mannequin will choose from when the fragment is chosen and the required tags are applied. We only have one variation within fireFlare, but other fragments in the human preview (for example, IA_talkFunny) offer extra entries to add variety to AI actions. To load this entry for further editing, double-click Option 1. Let's get to adding that flare!

Adding effects to fragments

After loading the fragment entry, the Fragment Editor window has now updated. This is the main window in the center of Mannequin, and it comprises a preview window to view the animation and a list of all the available layers and details we can add. The main piece of information currently visible here is the animation itself, shown in AnimLayer under FullBody3P.

At the bottom of the Fragment Editor window, some buttons are available that are useful for editing and previewing the fragment. These include a play/pause toggle (along with a playspeed dropdown) and a jump-to-start button. You are also able to zoom in and out of the timeline with the mouse wheel, and scrub the timeline by click-dragging the red timeline marker around the fragment. These controls are similar to the Track View cinematics tool and should be familiar if you've utilized it in the past.

Procedural layers

Here, we are able to add our particle effect to the animation fragment. To do this, we need to add ProcLayer (procedural layer) to the FullBody3P section. The ProcLayer runs parallel to the AnimLayer and is where any extra layers of detail that fragments can contain are specified, from removing character collision to attaching props. For our purposes, we need to add a particle effect clip. To do this, double-click on the timeline within the ProcLayer. This will spawn a blank proc clip for us to categorize. Select this clip, and Procedural Clip Properties on the right-hand side of the Fragment Editor window will be populated with a list of parameters. All we need to do now is change the type of this clip from None to ParticleEffect. This is editable in the Type dropdown list. This should present us with a ParticleEffect proc clip visible in the ProcLayer alongside our animation, as shown in the following screenshot:

Now that we have our proc clip loaded with the correct type, we need to specify the effect. The SDK has a couple of flare effects in the particle libraries (searchable by going to RollupBar | Objects Tab | Particle Entity); I'm going to pick explosions.flare.a. To apply this, select the proc clip and paste your chosen effect name into the Effect parameter. If you now scrub through the fragment, you should see the particle effect trigger! However, the effect currently fires from the base of the character in the wrong direction. We need to align the effect to the weapon of the enemy. Thankfully, the ParticleEffect proc clip already has support for this in its properties. In the Reference Bone parameter, enter weapon_bone and hit Enter.
The weapon_bone is the generic bone name that characters' weapons are attached to, and as such it is a good bet for any case where we require effects or objects to be placed in a character's weapon position. Scrubbing through the fragment again, the effect will now fire from the weapon hand of the character. If we ever need to find out bone names, there are a few ways to access this information within the editor. Hovering over the character in the Mannequin previewer will display the bone name. Alternatively, in the Character Editor (we'll go into the details later), you can scroll down in the Rollup window on the right-hand side, expand Debug Options, and tick ShowJointNames. This will display the names of all bones over the character in the previewer.

With the particle attached, we can now ensure that the timing of the particle effect matches the animation. To do this, you can click-and-drag the proc clip around the timeline; around 1.5 seconds seems to match the timings for this animation. With the effect timed correctly, we now have a fully functioning fireFlare fragment! Try testing out the setup we made earlier with this change. We should now have a far more polished-looking event.

The previewer in Mannequin shares the same viewport controls as the perspective view in Sandbox. You can use this to zoom in and look around to gain a better view of the animation preview.

The final thing we need to do is save our changes to the Mannequin databases! To do this, go to File | Save Changes. When the list of changed files is displayed, press Save. Mannequin will then tell you that you're editing data from the .pak files. Click Yes to this prompt and your data will be saved to your project. The resulting changed database files will appear in GameSDK\Animations\Mannequin\ADB, and they should be distributed with your project if you package it for release.

Adding a new fragment

Now that we know how to add some effects feedback to existing fragments, let's look at making a new fragment to use as part of our scripting. This is useful to know if you have animators on your project and you want to get their assets in game quickly to hook up to your content. In our humble SDK project, we can effectively simulate this, as there are a few animations that ship with the SDK that have no corresponding fragment. Now we'll see how to browse the raw animation assets themselves, before adding them to a brand new Mannequin fragment.

The Character Editor window

Let's open the Character Editor. Apart from being used for editing characters and their attachments in the engine, this is a really handy way to browse the library of available animation assets and preview them in a viewport. To open the Character Editor, go to View | Open View Pane | Character Editor.

On some machines, the expense of rendering two scenes at once (that is, the main viewport and the viewports in the Character Editor or Mannequin Editor) can cause both to drop to a fairly sluggish frame rate. If you experience this, either close one of the other view panes you have on the screen or, if you have it tabbed to other panes, simply select another tab. You can also open the Mannequin Editor or the Character Editor without a level loaded, which allows for better performance and minimal load times to edit content.

Similar to Mannequin, the Character Editor will initially look quite overwhelming. The primary aspects to focus on are the Animations window in the top-left corner and the Preview viewport in the middle.
In the Filter option in the Animations window, we can enter search terms to narrow down the list of animations. An example of an animation that hasn't yet been turned into a Mannequin fragment is the stand_tac_callreinforcements_nw_3p_01 animation. You can find this by entering reinforcements into the search filter.

Selecting this animation will update the debug character in the Character Editor viewport so that it starts to play the chosen animation. You can see this specific animation is a one-shot wave and might be useful as another trigger for enemy reinforcements further on in our scripting. Let's turn this into a fragment! We need to make sure we don't forget this animation, though; right-click on the animation and click Copy. This will copy the name to the clipboard for future reference in Mannequin. The animation can also be dragged and dropped into Mannequin manually to achieve the same result.

Creating fragment entries

With our animation located, let's get back to Mannequin and set up our fragment. Ensuring that we're still in the sdk_humanpreview.xml preview setup, take another look at the Fragments window in the top left of Mannequin. You'll see there are two rows of buttons: the top row controls the creation and editing of fragment entries (the animation options we looked at earlier), and the second row covers the adding and editing of fragment IDs themselves: the top-level fragment names. This is where we need to start.

Press the New ID button on the second row of buttons to bring up the New FragmentID Name dialog. Here, we need to add a name that conforms to the prefixes we discussed earlier. As this is an action, make sure you add IA_ (interest action) as the prefix for the name you choose; otherwise, it won't appear in the fragment browser in the Flow Graph. Once our fragment is named, we'll be presented with the Mannequin FragmentID Editor. For the most part, we won't need to worry about these options, but it's useful to be aware of how they might be useful (and don't worry, these can be edited after creation).

The main parameters to note are the Scope options. These control which elements of the character are controlled by the fragment. By default, all these boxes are ticked, which means that our fragment will take control of each ticked aspect of the character. An example of where we might want to change this would be the character LookAt control. If we wanted an NPC to look at another entity in the world as part of a scripted sequence (using the AI:LookAt Flow Graph node), it would not be possible with the current settings. This is because the LookPose and Looking scopes are controlled by the fragment. If we wanted to control this via Flow Graph, these would need to be unticked, freeing up the look scopes for scripted control.

With scopes covered, press OK at the bottom of the dialog box to continue adding our callReinforcements animation! We now have a fragment ID created in our Fragments window, but it has no entries! With our new fragment selected, press the New button on the first row of buttons to add an entry. This will automatically add itself under the <default> tag, which is the desired behavior, as our fragment will be tag-agnostic for the moment. This has now created a blank fragment in the Fragment Editor.

Adding the AnimLayer

This is where our animation from earlier comes in. Right-click on the FullBody3P track in the editor and go to Add Track | AnimLayer. As we did previously with our effect on the ProcLayer, double-click on the AnimLayer to add a new clip.
This will create our new Anim Clip, with some red None markup to signify the lack of animation. Now all we need to do is select the clip, go to the Anim Clip Properties, and paste in our animation name by double-clicking the Animation parameter.

The Animation parameter has a helpful browser that will allow you to search for animations; simply click on the browse icon in the parameter entry section. It lacks the previewer found in the Character Editor, but it can be a quick way to find animation candidates by name within Mannequin.

With our animation finally loaded into a fragment, we should now have a fragment setup that displays a valid animation name on the AnimLayer. Clicking on Play will now play our reinforcements wave animation!

Once we save our changes, all we need to do now is load our fragment in an AISequence:Animation node in Flow Graph. This can be done by repeating the steps outlined earlier. This time, our new fragment should appear in the fragment dialog.

Summary

Mannequin is a very powerful tool for handling animations in CRYENGINE, and we have looked at how to get started with it.
That's One Fancy Hammer!

Packt
13 Jan 2014
8 min read
Introducing Unity 3D

Unity 3D is a new piece of technology that strives to make life better and easier for game developers. Unity is a game engine, or a game authoring tool, that enables creative folks like you to build video games. By using Unity, you can build video games more quickly and easily than ever before. In the past, building games required an enormous stack of punch cards, a computer that filled a whole room, and a burnt sacrificial offering to an ancient god named Fortran. Today, instead of spanking nails into boards with your palm, you have Unity. Consider it your hammer: a new piece of technology for your creative tool belt.

Unity takes over the world

We'll be distilling our game development dreams down to small, bite-sized nuggets instead of launching into any sweepingly epic open-world games. The idea here is to focus on something you can actually finish instead of getting bogged down in an impossibly ambitious opus. When you're finished, you can publish these games on the Web, Mac, or PC. The team behind Unity 3D is constantly working on packages and export options for other platforms. At the time of this writing, Unity could additionally create games that can be played on the iPhone, iPod, iPad, Android devices, Xbox Live Arcade, PS3, and Nintendo's WiiWare service. Each of these tools is an add-on to the core Unity package, and comes at an additional cost. As we're focusing on what we can do without breaking the bank, we'll stick to the core Unity 3D program for the remainder of this article.

The key is to start with something you can finish, and then, for each new project that you build, to add small pieces of functionality that challenge you and expand your knowledge. Any successful plan for world domination begins by drawing a territorial border in your backyard.

Browser-based 3D – welcome to the future

Unity's primary and most astonishing selling point is that it can deliver a full 3D game experience right inside your web browser. It does this with the Unity Web Player: a free plugin that embeds and runs Unity content on the Web.

Time for action – install the Unity Web Player

Before you dive into the world of Unity games, download the Unity Web Player. Much the same way the Flash player runs Flash-created content, the Unity Web Player is a plugin that runs Unity-created content in your web browser.

Go to http://unity3D.com.

Click on the install now! button to install the Unity Web Player.

Click on Download Now!

Follow all of the on-screen prompts until the Web Player has finished installing.

Welcome to Unity 3D!

Now that you've installed the Web Player, you can view content created with the Unity 3D authoring tool in your browser.

What can I build with Unity?

In order to fully appreciate how fancy this new hammer is, let's take a look at some projects that other people have created with Unity. While these games may be completely out of our reach at the moment, let's find out how game developers have pushed this amazing tool to its very limits.

FusionFall

The first stop on our whirlwind Unity tour is FusionFall, a Massively Multiplayer Online Role-Playing Game (MMORPG). You can find it at fusionfall.com. You may need to register to play, but it's definitely worth the extra effort! FusionFall was commissioned by the Cartoon Network television franchise, and takes place in a re-imagined, anime-style world where popular Cartoon Network characters are all grown up.
Darker, more sophisticated versions of the Powerpuff Girls, Dexter, Foster and his imaginary friends, and the kids from Codename: Kids Next Door run around battling a slimy green alien menace.

Completely hammered

FusionFall is a very big and very expensive high-profile game that helped draw a lot of attention to the then-unknown Unity game engine when it was released. As a tech demo, it's one of the very best showcases of what your new technological hammer can really do! FusionFall has real-time multiplayer networking, chat, quests, combat, inventory, NPCs (non-player characters), basic AI (artificial intelligence), name generation, avatar creation, and costumes. And that's just a highlight of the game's feature set. This game packs a lot of depth.

Should we try to build FusionFall?

At this point, you might be thinking to yourself, "Heck YES! FusionFall is exactly the kind of game I want to create with Unity, and this article is going to show me how!" Unfortunately, a step-by-step guide to creating a game the size and scope of FusionFall would likely require its own flatbed truck to transport, and you'd need a few friends to help you turn each enormous page. It would take you the rest of your life to read, and on your deathbed, you'd finally realize the grave error that you had made in ordering it online in the first place, despite having qualified for free shipping.

Here's why: check out the game credits link on the FusionFall website: http://www.fusionfall.com/game/credits.php. This page lists all of the people involved in bringing the game to life. Cartoon Network enlisted the help of an experienced Korean MMO developer called Grigon Entertainment. There are over 80 names on that credits list! Clearly, only two courses of action are available to you:

Build a cloning machine and make 79 copies of yourself. Send each of those copies to school to study various disciplines, including marketing, server programming, and 3D animation. Then spend a year building the game with your clones. Keep track of who's who by using a sophisticated armband system.

Give up now because you'll never make the game of your dreams.

Another option

Before you do something rash and abandon game development for farming, let's take another look at this. FusionFall is very impressive, and it might look a lot like the game that you've always dreamed of making. This article is not about crushing your dreams. It's about dialing down your expectations, putting those dreams in an airtight jar, and taking baby steps. Confucius said: "A journey of a thousand miles begins with a single step." I don't know much about the man's hobbies, but if he was into video games, he might have said something similar about them: creating a game with a thousand awesome features begins by creating a single, less feature-rich game. So, let's put the FusionFall dream in an airtight jar and come back to it when we're ready. We'll take a look at some smaller Unity 3D game examples and talk about what it took to build them.

Off-Road Velociraptor Safari

No tour of Unity 3D games would be complete without a trip to Blurst.com, the game portal owned and operated by indie game developer Flashbang Studios. In addition to hosting games by other indie game developers, Flashbang has packed Blurst with its own slate of kooky content, including Off-Road Velociraptor Safari. (Note: Flashbang Studios is constantly toying around with ways to distribute and sell its games.
At the time of this writing, Off-Road Velociraptor Safari could be played for free only as a Facebook game. If you don't have a Facebook account, try playing another one of the team's creations, like Minotaur China Shop or Time Donkey.)

In Off-Road Velociraptor Safari, you play a dinosaur in a pith helmet and a monocle driving a jeep equipped with a deadly spiked ball on a chain (just like in the archaeology textbooks). Your goal is to spin around in your jeep doing tricks and murdering your fellow dinosaurs (obviously).

For many indie game developers and reviewers, Off-Road Velociraptor Safari was their first introduction to Unity. Some reviewers said that they were stunned that a fully 3D game could play in the browser. Other reviewers were a little bummed that the game was sluggish on slower computers. We'll talk about optimization a little later, but it's not too early to keep performance in mind as you start out.

Fewer features, more promise

If you play Off-Road Velociraptor Safari and some of the other games on the Blurst site, you'll get a better sense of what you can do with Unity without a team of experienced Korean MMO developers. The game has 3D models, physics (code that controls how things move around somewhat realistically), collisions (code that detects when things hit each other), music, and sound effects. Just like FusionFall, the game can be played in the browser with the Unity Web Player plugin. Flashbang Studios also sells downloadable versions of its games, demonstrating that Unity can produce standalone executable game files too.

Maybe we should build Off-Road Velociraptor Safari?

Right then! We can't create FusionFall just yet, but we can surely create a tiny game like Off-Road Velociraptor Safari, right? Well... no. Again, this article isn't about crushing your game development dreams. But the fact remains that Off-Road Velociraptor Safari took five supremely talented and experienced guys eight weeks to build on full-time hours, and they've been tweaking and improving it ever since. Even a game like this, which may seem quite small in comparison to a full-blown MMO like FusionFall, is a daunting challenge for a solo developer. Put it in a jar up on the shelf, and let's take a look at something you'll have more success with.
Installation of Ogre 3D

Packt
09 Feb 2011
6 min read
OGRE 3D 1.7 Beginner's Guide

Downloading and installing Ogre 3D

The first step we need to take is to install and configure Ogre 3D.

Time for action – downloading and installing Ogre 3D

We are going to download the Ogre 3D SDK and install it so that we can work with it later.

Go to http://www.ogre3d.org/download/sdk.

Download the appropriate package. If you need help picking the right package, take a look at the next What just happened section.

Copy the installer to a directory you would like your OgreSDK to be placed in.

Double-click on the installer; this will start a self-extractor.

You should now have a new folder in your directory with a name similar to OgreSDK_vc9_v1-7-1. Open this folder. It should look similar to the following screenshot:

What just happened?

We just downloaded the appropriate Ogre 3D SDK for our system. Ogre 3D is a cross-platform render engine, so there are a lot of different packages for these different platforms. After downloading, we extracted the Ogre 3D SDK.

Different versions of the Ogre 3D SDK

Ogre supports many different platforms, and because of this, there are a lot of different packages we can download. Ogre 3D has several builds for Windows, one for Mac OS X, and one Ubuntu package. There is also a package for MinGW and one for the iPhone. If you like, you can download the source code and build Ogre 3D by yourself. This article will focus on the Windows pre-built SDK and how to configure your development environment. If you want to use another operating system, you can look at the Ogre 3D Wiki, which can be found at http://www.ogre3d.org/wiki. The wiki contains detailed tutorials on how to set up your development environment for many different platforms.

Exploring the SDK

Before we begin building the samples which come with the SDK, let's take a look at the SDK itself. We will look at the structure the SDK has on a Windows platform. On Linux or Mac OS, the structure might look different. First, we open the bin folder. There we will see two folders, namely, debug and release. The same is true for the lib directory. The reason is that the Ogre 3D SDK comes with debug and release builds of its libraries and dynamic-linked/shared libraries. This makes it possible to use the debug build during development, so that we can debug our project. When we finish the project, we link our project against the release build to get the full performance of Ogre 3D.

When we open either the debug or release folder, we will see many dll files, some cfg files, and two executables (exe). The executables are for content creators to update their content files to the new Ogre version, and are therefore not relevant for us. OgreMain.dll is the most important DLL. It is the compiled Ogre 3D source code we will load later. All DLLs with Plugin_ at the start of their name are Ogre 3D plugins we can use with Ogre 3D. Ogre 3D plugins are dynamic libraries which add new functionality to Ogre 3D using the interfaces Ogre 3D offers. This can be practically anything, but often it is used to add features like better particle systems or new scene managers. The Ogre 3D community has created many more plugins, most of which can be found in the wiki. The SDK simply includes the most generally used plugins. The DLLs with RenderSystem_ at the start of their name are, surely not surprisingly, wrappers for the different render systems that Ogre 3D supports. In this case, these are Direct3D9 and OpenGL.
In addition to these two systems, Ogre 3D also has Direct3D10, Direct3D11, and OpenGL ES (OpenGL for Embedded Systems) render systems.

Besides the executables and the DLLs, we have the cfg files. cfg files are config files that Ogre 3D can load at startup. Plugins.cfg simply lists all plugins Ogre 3D should load at startup. These are typically the Direct3D and OpenGL render systems and some additional scene managers. quakemap.cfg is a config file needed when loading a level in the Quake3 map format. We don't need this file, but a sample does. resources.cfg contains a list of all resources, like a 3D mesh, a texture, or an animation, which Ogre 3D should load during startup. Ogre 3D can load resources from the file system or from a ZIP file. When we look at resources.cfg, we will see lines like the following:

    Zip=../../media/packs/SdkTrays.zip
    FileSystem=../../media/thumbnails

Zip= means that the resource is in a ZIP file, and FileSystem= means that we want to load the contents of a folder. resources.cfg makes it easy to load new resources or change the path to resources, so it is often used to load resources, especially by the Ogre samples. Speaking of samples, the last cfg file in the folder is samples.cfg. We don't need to use this cfg file. Again, it's a simple list with all the Ogre samples to load for the SampleBrowser. But we don't have a SampleBrowser yet, so let's build one.

The Ogre 3D samples

Ogre 3D comes with a lot of samples, which show all the different render effects and techniques Ogre 3D can do. Before we start working on our application, we will take a look at the samples to get a first impression of Ogre's capabilities.

Time for action – building the Ogre 3D samples

To get a first impression of what Ogre 3D can do, we will build the samples and take a look at them.

Go to the Ogre3D folder.

Open the Ogre3d.sln solution file.

Right-click on the solution and select Build Solution. Visual Studio should now start building the samples. This might take some time, so get yourself a cup of tea until the compile process is finished.

If everything went well, go into the Ogre3D/bin folder.

Execute the SampleBrowser.exe. You should see the following on your screen:

Try the different samples to see all the nice features Ogre 3D offers.

What just happened?

We built the Ogre 3D samples using our own Ogre 3D SDK. After this, we are sure to have a working copy of Ogre 3D.
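As a quick reference for the cfg files discussed above, a Plugins.cfg from the Windows SDK looks roughly like the following. Treat the exact plugin list as an illustrative sketch rather than the file's verbatim contents, since it varies between SDK versions:

    # Plugins.cfg: plugins that Ogre 3D loads at startup
    # Folder in which the plugin DLLs are searched for
    PluginFolder=.
    # Render system wrappers
    Plugin=RenderSystem_Direct3D9
    Plugin=RenderSystem_GL
    # Commonly bundled feature plugins
    Plugin=Plugin_ParticleFX
    Plugin=Plugin_OctreeSceneManager

Commenting out one of the Plugin= lines is the usual way to stop Ogre from loading that render system or feature at startup.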
Mesh animation

Packt
18 Oct 2013
5 min read
Using animated models is not very different from using normal models. There are essentially two types of animation to consider (in addition to manually changing the position of a mesh's geometry in Three.js). If all you need is to smoothly transition properties between different values (for example, changing the rotation of a door in order to animate it opening), you can use the Tween.js library at https://github.com/sole/tween.js to do so instead of animating the mesh itself. Jerome Etienne has a nice tutorial on doing this at http://learningthreejs.com/blog/2011/08/17/tweenjs-for-smooth-animation/.

Morph animation

Morph animation stores animation data as a sequence of positions. For example, if you had a cube with a shrink animation, your model could hold the positions of the vertices of the cube at full size and at the shrunk size. The animation then consists of interpolating between those states during each rendering or keyframe; in the linear case, each rendered vertex is simply v = (1 - t) * v_start + t * v_end, with t advancing over the course of the transition. The data representing each state can hold either vertex targets or face normals. To use morph animation, the easiest approach is to use a THREE.MorphAnimMesh class, which is a subclass of the normal mesh. In the following example, the highlighted lines should only be included if the model uses normals:

    var loader = new THREE.JSONLoader();
    loader.load('model.js', function(geometry) {
      var material = new THREE.MeshLambertMaterial({
        color: 0x000000,
        morphTargets: true,
        morphNormals: true,
      });
      if (geometry.morphColors && geometry.morphColors.length) {
        var colorMap = geometry.morphColors[0];
        for (var i = 0; i < colorMap.colors.length; i++) {
          geometry.faces[i].color = colorMap.colors[i];
        }
        material.vertexColors = THREE.FaceColors;
      }
      geometry.computeMorphNormals();
      var mesh = new THREE.MorphAnimMesh(geometry, material);
      mesh.duration = 5000; // in milliseconds
      scene.add(mesh);
      morphs.push(mesh);
    });

The first thing we do is set our material to be aware that the mesh will be animated, with the morphTargets property and optionally the morphNormals property. Next, we check whether colors will change during the animation, and set the mesh faces to their initial color if so (if you know your model doesn't have morphColors, you can leave out that block). Then the normals are computed (if we have them) and our MorphAnimMesh animation is created. We set the duration value of the full animation, and finally store the mesh in the global morphs array so that we can update it during our physics loop:

    for (var i = 0; i < morphs.length; i++) {
      morphs[i].updateAnimation(delta);
    }

Under the hood, the updateAnimation method just changes which set of positions in the animation the mesh should be interpolating between. By default, the animation will start immediately and loop indefinitely. To stop animating, just stop calling updateAnimation.

Skeletal animation

Skeletal animation moves a group of vertices in a mesh together by making them follow the movement of a bone. This is generally easier to design because artists only have to move a few bones instead of potentially thousands of vertices. It's also typically less memory-intensive for the same reason.
To use skeletal animation, use a THREE.SkinnedMesh class, which is also a subclass of the normal mesh:

    var loader = new THREE.JSONLoader();
    loader.load('model.js', function(geometry, materials) {
      for (var i = 0; i < materials.length; i++) {
        materials[i].skinning = true;
      }
      var material = new THREE.MeshFaceMaterial(materials);
      THREE.AnimationHandler.add(geometry.animation);
      var mesh = new THREE.SkinnedMesh(geometry, material, false);
      scene.add(mesh);
      var animation = new THREE.Animation(mesh, geometry.animation.name);
      animation.interpolationType = THREE.AnimationHandler.LINEAR;
      // or CATMULLROM for cubic splines (ease-in-out)
      animation.play();
    });

The model we're using in this example already has materials, so unlike in the morph animation example, we have to change the existing materials instead of creating a new one. For skeletal animation, we have to enable skinning, which refers to how the materials are wrapped around the mesh as it moves. We use the THREE.AnimationHandler utility to track where we are in the current animation, and a THREE.SkinnedMesh to properly handle our model's bones. Then we use the mesh to create a new THREE.Animation and play it. The animation's interpolationType determines how the mesh transitions between states. If you want cubic spline easing (slow, then fast, then slow), use THREE.AnimationHandler.CATMULLROM instead of the LINEAR easing. We also need to update the animation in our physics loop:

    THREE.AnimationHandler.update(delta);

It is possible to use both skeletal and morph animations at the same time. In this case, the best approach is to treat the animation as skeletal and manually update the mesh's morphTargetInfluences array, as demonstrated in examples/webgl_animation_skinning_morph.html in the Three.js project.

Summary

This article explained how to manage external assets such as 3D models, how to animate them, and how to add detail to your worlds.
CryENGINE 3: Sandbox Basics

Packt
21 Jun 2011
9 min read
CryENGINE 3 Cookbook

Over 100 recipes written by Crytek developers for creating AAA games using the technology that created Crysis 2

Placing the objects in the world

Placing objects is a simple task; however, basic terrain snapping is not explained to most new developers. It is common for them to ask why, when dragging and dropping an object into the world, they cannot see the object. This section will teach you the easiest way to place an object into your map by using the Follow Terrain method.

Getting ready

Have My_Level open inside of Sandbox (after completing either the Terrain sculpting or the Generating a procedural terrain recipe).

Review the Navigating a level with the Sandbox Camera recipe to get familiar with the Perspective View.

Have the Rollup Bar open and ready.

Make sure you have the EditMode ToolBar open (right-click the top main ToolBar and tick EditMode ToolBar).

How to do it...

First, select the Follow Terrain button.

Then open the Objects tab within the Rollup Bar.

Now, from the Brushes browser, select any object you wish to place down (for example, defaults/box).

You may either double-click the object, or drag-and-drop it onto the Perspective View.

Move your mouse anywhere where there is visible terrain, and then click once more to confirm the position you wish to place it in.

How it works...

The Follow Terrain tool is a simple tool that allows the pivot of the object to match the exact height of the terrain in that location. This is best seen on objects that have a pivot point close to or near their bottom.

There's more...

You can also follow terrain and snap to objects. This method is very similar to the Follow Terrain method, except that it also includes objects when placing or moving your selected object. This method does not work on non-physicalized objects.

Refining the object placement

After placing objects in the world with just Follow Terrain or Snapping to Objects, you might find that you need to adjust the position, rotation, or scale of the object. In this recipe, we will show you the basics of how to do so, along with a few hotkey shortcuts to make the process a little faster. This works with any object that is placed in your level, from Entities to Solids.

Getting ready

Have My_Level open inside of Sandbox.

Review the Navigating a level with the Sandbox Camera recipe to get familiar with the Perspective View.

Make sure you have the EditMode ToolBar open (right-click on the top main ToolBar and tick EditMode ToolBar).

Place any object in the world.

How to do it...

In this recipe, we will call your object (the one whose location you wish to refine) Box for ease of reference.

Select Box. After selecting Box, you should see a three-axis widget on it, which represents each axis in 3D space. By default, these axes align to the world:

Y = Forward
X = Right
Z = Up

To move the Box in the world space and change its position, proceed with the following steps:

Click on the Select and Move icon in the EditMode ToolBar (1 for the keyboard shortcut).

Click on the X arrow and drag your mouse up and down relative to the arrow's direction. Releasing the mouse button will confirm the location change.

You may move objects either on a single axis, or on two at once by clicking and dragging on the plane that is adjacent to any two axes: X + Y, X + Z, or Y + Z.

To rotate an object, do the following:

Select Box (if you haven't done so already).

Click on the Select and Rotate icon in the EditMode ToolBar (2 for the keyboard shortcut).
Click on the Z arrow (it now has a sphere at the end of it) and drag your mouse from side to side to roll the object relative to the axis. Releasing the mouse button will confirm the rotation change. You cannot rotate an object along multiple axes at once.

To scale an object, do the following:

Select Box (if you haven't done so already).

Click on the Select and Scale icon in the EditMode ToolBar (3 for the keyboard shortcut).

Click on the CENTER box and drag your mouse up and down to scale on all three axes at once. Releasing the mouse button will confirm the scale change.

It is possible to scale on just one axis or two axes; however, this is highly discouraged, as non-uniform scaling will result in broken physical meshes for that object. If you require an object to be scaled up, we recommend you only scale uniformly on all three axes!

There's more...

Here are some additional ways to manipulate objects within the world.

Local position and rotation

To make position or rotation refinement a bit easier, you might want to change how the widget positions or rotates your object by aligning it relative to the object's pivot. To do this, use the drop-down menu in the EditMode ToolBar and select Local. This is called Local Direction. This setup might help you position your object after you have rotated it.

Grid and angle snaps

To aid in the positioning of non-organic objects, such as buildings or roads, you may wish to turn on the Snap to Grid option. Turning this feature on will allow you to move the object on a grid (currently relative to its location). To change the grid spacing, click the drop-down arrow next to the number (grid spacing is in meters). Angle Snaps is found immediately to the right of Grid Snaps. Turning this feature on will allow you to rotate an object in five-degree increments.

Ctrl + Shift + Click

Even though it is just a hotkey, many developers find this shortcut extremely handy for the initial placement of objects. It allows you to move the object quickly to any point on any physical surface relative to your Perspective View.

Utilizing the layers for multiple developer collaboration

A common question about the CryENGINE is how one developer can work on the same level as another at the same time. The answer is: Layers. In this recipe, we will show you how to utilize the layer system not only for your own organization, but also to set up external layers for other developers to work on in parallel.

Getting ready

Have My_Level open inside of Sandbox.

Review the Navigating a level with the Sandbox Camera recipe to get familiar with the Perspective View.

Have the Rollup Bar open and ready.

Review the Placing the objects in the world recipe (place at least two objects).

How to do it...

For this recipe, we will assume that you have your own repository for your project, or some other means to send your work to others in your team.

First, start by placing down two objects on the map. For the sake of the recipe, we shall refer to them as Box1 and Box2.

After you've placed both boxes, open the Rollup Bar and bring up the Layers tab.

Create a new layer by clicking the New Layer button (paper with a + symbol).

A New Layer dialog box will appear. Give it the following parameters:

Name = ActionBubble_01
Visible = True
External = True
Frozen = False
Export To Game = True

Now select Box1 and open the Objects tab within the Rollup Bar.
From here, you will see the main rollup of this object, with values such as Name, Helper Size, MTL, and Minimal Spec. In this rollup, you will also see a button for layers (it should be labelled Main). Clicking on that button will show you a list of all other available layers. Clicking on another layer that is not highlighted will move this object to that layer (do this now by clicking on ActionBubble_01).

Now save your level by clicking File | Save.

Next, in your build folder, go to the following location: ...\Game\Levels\My_Level. From here, you will notice a new folder called Layers. Inside that folder, you will see ActionBubble_01.lyr. This is the layer that your other developers will work on. In order for them to be able to do so, you must first commit My_Level.cry and the Layers folder to your repository (it is easiest to commit the entire folder).

After doing so, you may now have your other developer make changes to that layer by moving Box1 to another location. Then have them save the map.

Have them commit only the ActionBubble_01.lyr to the repository.

Once you have retrieved it from the updated repository, you will notice that Box1 has moved after you re-open My_Level.cry in the Editor with the latest layer.

How it works...

External layers are the key to this whole process. Once a .cry file has been saved to reference an external layer, it will access the data inside of those layers upon loading the level in Sandbox. It is good practice to assign a map owner who will take care of the .cry file. As this is the master file, only one person should be in charge of maintaining it by creating new layers if necessary.

There's more...

Here is a list of limitations on what external layers can hold.

External layer limitations

Even though any entity/object you place in your level can be placed into external layers, it is important to note that there are some items that cannot be placed inside of these layers. Here is a list of the common items that are solely owned by the .cry file:

Terrain
Heightmap
Unit Size
Max Terrain Height
Holes
Textures
Vegetation
Environment Settings (unless forced through Game Logic)
Ocean Height
Time of Day Settings (unless forced through Game Logic)
Baked AI Markup (the owner of the .cry file must regenerate AI if new markup is created on external layers)
Minimap Markers
Away3D: Detecting Collisions

Packt
02 Jun 2011
6 min read
Away3D 3.6 Cookbook

Over 80 practical recipes for creating stunning graphics and effects with the fascinating Away3D engine

Introduction

In this article, you are going to learn how to check intersection (collision) between 3D objects.

Detecting collisions between objects in Away3D

This recipe will teach you the fundamentals of collision detection between objects in 3D space. We are going to learn how to perform a few types of intersection tests. These tests can hardly be called collision detection in their physical meaning, as we are not going to deal here with any simulation of collision reaction between two bodies. Instead, the goal of the recipe is to understand the collision tests from a mathematical point of view. Once you are familiar with intersection test techniques, the road to creating physical collision simulations is much shorter.

There are many types of intersection tests in mathematics. These include some simple tests, such as AABB (axially aligned bounding box) or Sphere - Sphere, and more complex ones, such as Triangle - Triangle, Ray - Plane, Line - Plane, and more. Here, we will cover only those which we can achieve using built-in Away3D functionality. These are AABB and AABS (axially aligned bounding sphere) intersections, as well as Ray-AABS and the more complex Ray-Triangle. The rest of the methods are outside the scope of this article, and you can learn about applying them from various 3D math resources.

Getting ready

Set up an Away3D scene in a new file extending AwayTemplate. Give the class the name CollisionDemo.

How to do it...

In the following example, we perform an intersection test between two spheres based on their bounding box volumes. You can move one of the spheres along X and Y with the arrow keys onto the second sphere. When the objects overlap, the intersected (static) sphere glows with a red color.

AABB test: CollisionDemo.as

    package {
      public class CollisionDemo extends AwayTemplate {
        private var _objA:Sphere;
        private var _objB:Sphere;
        private var _matA:ColorMaterial;
        private var _matB:ColorMaterial;
        private var _gFilter:GlowFilter=new GlowFilter();

        public function CollisionDemo() {
          super();
          _cam.z=-500;
        }

        override protected function initMaterials() : void{
          _matA=new ColorMaterial(0xFF1255);
          _matB=new ColorMaterial(0x00FF11);
        }

        override protected function initGeometry() : void{
          _objA=new Sphere({radius:30,material:_matA});
          _objB=new Sphere({radius:30,material:_matB});
          _view.scene.addChild(_objA);
          _view.scene.addChild(_objB);
          _objB.ownCanvas=true;
          _objA.debugbb=true;
          _objB.debugbb=true;
          _objA.transform.position=new Vector3D(-80,0,400);
          _objB.transform.position=new Vector3D(80,0,400);
        }

        override protected function initListeners() : void{
          super.initListeners();
          stage.addEventListener(KeyboardEvent.KEY_DOWN,onKeyDown);
        }

        override protected function onEnterFrame(e:Event) : void{
          super.onEnterFrame(e);
          if(AABBTest()){
            _objB.filters=[_gFilter];
          }else{
            _objB.filters=[];
          }
        }

        private function AABBTest():Boolean{
          if(_objA.parentMinX>_objB.parentMaxX||_objB.parentMinX>_objA.parentMaxX){
            return false;
          }
          if(_objA.parentMinY>_objB.parentMaxY||_objB.parentMinY>_objA.parentMaxY){
            return false;
          }
          if(_objA.parentMinZ>_objB.parentMaxZ||_objB.parentMinZ>_objA.parentMaxZ){
            return false;
          }
          return true;
        }

        private function onKeyDown(e:KeyboardEvent):void{
          switch(e.keyCode){
            case 38:_objA.moveUp(5);
              break;
            case 40:_objA.moveDown(5);
              break;
            case 37:_objA.moveLeft(5);
              break;
            case 39:_objA.moveRight(5);
              break;
            case 65:_objA.rotationZ-=3;
              break;
            case 83:_objA.rotationZ+=3;
              break;
            default:
          }
        }
      }
    }

In this screenshot, the green sphere's bounding box has a red glow while it is being intersected by the red sphere's bounding box:

How it works...

Testing intersection between two AABBs is really simple. First, we need to acquire the boundaries of each object on each axis. The box boundaries for each axis of any Object3D are defined by a minimum value and a maximum value for that axis. So let's look at the AABBTest() method. Axis boundaries are defined by parentMin and parentMax for each axis, which are accessible on any object extending Object3D. You can see that Object3D also has minX, minY, minZ and maxX, maxY, maxZ. These properties define the bounding box boundaries too, but in object space, and therefore they aren't helpful in AABB tests between two objects.

So, in order for a given bounding box to intersect the bounding box of another object, three conditions have to be met:

The minimal X coordinate of each of the objects should be less than the maximum X of the other.
The minimal Y coordinate of each of the objects should be less than the maximum Y of the other.
The minimal Z coordinate of each of the objects should be less than the maximum Z of the other.

If one of the conditions is not met for either of the two AABBs, there is no intersection. The preceding algorithm is expressed in the AABBTest() function:

    private function AABBTest():Boolean{
      if(_objA.parentMinX>_objB.parentMaxX||_objB.parentMinX>_objA.parentMaxX){
        return false;
      }
      if(_objA.parentMinY>_objB.parentMaxY||_objB.parentMinY>_objA.parentMaxY){
        return false;
      }
      if(_objA.parentMinZ>_objB.parentMaxZ||_objB.parentMinZ>_objA.parentMaxZ){
        return false;
      }
      return true;
    }

As you can see, if all of the conditions we listed previously are met, the execution will skip all the return false blocks and the function will return true, which means an intersection has occurred.

There's more...

Now let's take a look at the rest of the methods for collision detection, which are AABS-AABS, Ray-AABS, and Ray-Triangle.

AABS test

The intersection test between two bounding spheres is even simpler to perform than the one between AABBs. The algorithm works as follows: if the distance between the centers of two spheres is less than the sum of their radii, then the objects intersect. Piece of cake, isn't it? Let's implement it in code.

The AABS collision algorithm gives us the best performance. While there are many other, even more sophisticated, approaches, try to use this test if you are not after extreme precision. (Most casual games can live with this approximation.)

First, let's switch the debugging mode of _objA and _objB to bounding spheres. In the last application we built, go to the initGeometry() function and change:

    _objA.debugbb=true;
    _objB.debugbb=true;

To:

    _objA.debugbs=true;
    _objB.debugbs=true;

Next, we add the function which implements the algorithm we described previously to the class:

    private function AABSTest():Boolean{
      var dist:Number=Vector3D.distance(_objA.position,_objB.position);
      if(dist<=(_objA.radius+_objB.radius)){
        return true;
      }
      return false;
    }

Finally, we add the call to the method inside onEnterFrame():

    if(AABSTest()){
      _objB.filters=[_gFilter];
    }else{
      _objB.filters=[];
    }

Each time AABSTest() returns true, the intersected sphere is highlighted with a red glow:
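The logic of both tests is independent of Away3D. As a rough, language-neutral restatement, here is a minimal Python sketch of the same math (the function names and tuple-based vectors are mine, not part of the Away3D API), which can be handy for checking the conditions on their own:

    def aabb_overlap(min_a, max_a, min_b, max_b):
        # Boxes overlap only if their extents overlap on every axis.
        return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i] for i in range(3))

    def spheres_overlap(center_a, radius_a, center_b, radius_b):
        # Spheres overlap if the distance between centers is at most the
        # sum of the radii; compare squared values to avoid a square root.
        d2 = sum((center_a[i] - center_b[i]) ** 2 for i in range(3))
        return d2 <= (radius_a + radius_b) ** 2

    # Unit boxes offset by half a unit on X overlap; distant spheres do not.
    print(aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0, 0), (1.5, 1, 1)))  # True
    print(spheres_overlap((0, 0, 0), 1.0, (3.0, 0, 0), 1.0))             # False

Note that the squared-distance comparison is a small optimization over the ActionScript version above, which calls Vector3D.distance() and therefore pays for a square root on every frame.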
Building a Simple Boat

Packt
25 Aug 2014
15 min read
It's time to get out your hammers, saws, and tape measures, and start building something. In this article, by Gordon Fisher, the author of Blender 3D Basics Beginner's Guide Second Edition, you're going to put your knowledge of building objects to practical use, as well as your knowledge of using the 3D View, to build a boat. It's a simple but good-looking and water-tight craft that has three seats, as shown in the next screenshot. You will learn about the following topics:

Using box modeling to convert a cube into a boat
Employing box modeling's power methods: extrusion and subdividing edges
Joining objects together into a single object
Adding materials to an object
Using a texture for greater detail

Turning a cube into a boat with box modeling

You are going to turn the default Blender cube into an attractive boat, similar to the one shown in the following screenshot. First, you should know a little bit about boats. The front is called the bow, and is pronounced the same as bowing to the Queen. The rear is called the stern or the aft. The main body of the boat is the hull, and the top of the hull is the gunwale, pronounced "gunnel".

You will be using a technique called box modeling to make the boat. Box modeling is a very standard method of modeling. As you might expect from the name, you start out with a box and sculpt it like a piece of clay to make whatever you want. There are three methods that you will use in most instances of box modeling: extrusion, subdividing edges, and moving (translating) vertices, edges, and faces.

Using extrusion, the most powerful tool for box modeling

Extrusion is similar to turning dough into noodles by pushing it through a die. When you extrude an edge, Blender pushes out the edge and connects it to the old edge with a face. While extruding a face, the face gets pushed out and gets connected to the old edges by new faces.

Time for action – extruding to make the inside of the hull

The first step here is to create an inside for the hull. You will extrude the face without moving it, and shrink it a bit. This will create the basis for the gunwale:

Create a new file and zoom into the default cube.

Select Wireframe from the Viewport Shading menu on the header.

Press the Tab key to go to Edit Mode. Choose Face Selection mode from the header. It is the orange parallelogram.

Select the top face with the RMB.

Press the E key to extrude the face, then immediately press Enter. Move the mouse away from the cube.

Press the S key to scale the face with the mouse. While you are scaling it, press Shift + Ctrl, and scale it to 0.9. Watch the scaling readout in the 3D View header.

Press the NumPad 1 key to change to the Front view, and press the 5 key on the NumPad to change to the Ortho view.

Move the cursor to a place a little above the top of the cube. Press E, and Blender will create a new face and let you move it up or down. Move it down. When you are close to the bottom, press the Ctrl + Shift buttons, and move it down until the readout on the 3D View header is 1.9. Click the LMB to release the face. It will look like the following screenshot:

What just happened?

You just created a simple hull for your boat. It's going to look better, but at least you got the thickness of the hull established. Pressing the E key extrudes the face, making a new face and sides that connect the new face with the edges used by the old face. You pressed Enter immediately after the E key the first time so that the new face wouldn't get moved.
Then, you scaled it down a little to establish the thickness of the hull. Next, you extruded the face again. As you watched the readout, did you notice that it said D: -1.900 (1.900) normal? When you extrude a face, Blender is automatically set up to move the face along its normal, so that you can move it in or out and keep it parallel to the original location.

For your reference, the 4909_05_making the hull1.blend file, which has been included in the download pack, has the first extrusion. The 4909_05_making the hull2.blend file has the extrusion moved down. The 4909_05_making the hull3.blend file has the bottom and sides evened out.

Using normals in 3D modeling

What is a normal? The normal is an unseen ray that is perpendicular to a face. This is illustrated in the following image by the red line:

Blender has many uses for the normal:

It lets Blender extrude a face and keep the extruded face in the same orientation as the face it was extruded from.
It keeps the sides straight and tells Blender in which direction a face is pointing.
Blender can also use the normal to calculate how much light a particular face receives from a given lamp, and in which direction lights are pointed.

Modeling tip

If you create a 3D model and it seems perfect, except that there is an unexplained hole where a face should have been, you may have a normal that faces backwards. To help you, Blender can display the normals for you.

Time for action – displaying normals

Displaying the normals does not affect the model, but sometimes it can help you in your modeling to see which way your faces are pointing:

Press Ctrl + MMB and use the mouse to zoom out so that you can see the whole cube.

In the 3D View, press N to get the Properties Panel. Scroll down in the Properties Panel until you get to the Mesh Display subpanel.

Go down to where it says Normals. There are two buttons like the edge select and face select buttons in the 3D View header. Click on the button with a cube and an orange rhomboid, as outlined in the next screenshot (the Face Select button), to choose displaying the normals of the faces.

Beside the Face Select button, there is a place where you can adjust the displayed size of the normals, as shown in the following screenshot. The displayed normals are the blue lines. Set Normals Size to 0.8. In the following image, I used the cube as it was just before you made the last extrusion so that it displays the normals a little better.

Press the MMB, use the mouse to rotate your view of the cube, and look at the normals.

Click on the Face Select button in the Mesh Display subpanel again to turn off the normals display.

What just happened?

To see the normals, you opened up the Properties Panel and instructed Blender to display them. They are displayed as little blue lines, and you can display them at whatever size works best for you. Normals themselves have no length, just a direction, so changing this setting does not affect the model. It's there for your use when you need to analyze problems with the appearance of your model. Once you saw them, you turned them off.

For your reference, the 4909_05_displaying normals.blend file has been included in the download pack. It has the cube with the first extrusion, and the normal display turned on.

Planning what you are going to make

It always helps to have an idea in mind of what you want to build. You don't have to get out caliper micrometers and measure every last little detail of something you want to model, but you should at least have some pictures as reference, or an idea of the actual dimensions of the object that you are trying to model. There are many ways to get these dimensions, and we are going to use several of them as we build our boats.

Choosing which units to model in

I went on the Internet and found the dimensions of a small jon boat for fishing. You are not going to copy it exactly, but knowing what size it should be will make the proportions that you choose more convincing. As it happened, it was an American boat, and the size was given in feet and inches. Blender supports three kinds of units for measuring distance: Blender units, Metric units, and Imperial units. Blender units are not tied to any specific real-world measurement as Metric and Imperial units are.

To change the units of measurement, go to the Properties window, to the right of the 3D View window, as shown in the following image, and choose the Scene button. It shows a light, a sphere, and a cylinder. In the following image, it's highlighted in blue. In the second subpanel, the Units subpanel lets you select which units you prefer. However, rather than choosing between Metric or Imperial, I decided to leave the default settings as they were. As the measurements that I found were Imperial measurements, I decided to interpret them as Blender measurements, equating 1 foot to 1 Blender unit, and each inch to 0.083 Blender units. If I have an Imperial measurement that is expressed in inches, I just divide it by 12 to get the correct number in Blender units. The boat I found on the Internet is 9 feet and 10 inches long, 56 inches wide at the top, 44 inches wide at the bottom, and 18 inches high. I converted these to decimal Blender units: 9.830 long, 4.666 wide at the top, 3.666 wide at the bottom, and 1.500 high.

Time for action – making reference objects

One of the simplest ways to see what size your boat should be is to have boxes of the proper size to use as guides. So now, you will make some of these boxes (a scripted version of the same setup appears after these steps):

In the 3D View window, press the Tab key to get into Object Mode. Press A to deselect the boat.

Press the NumPad 3 key to get the side view. Make sure you are in Ortho view. Press the 5 key on the NumPad if needed.

Press Shift + A and choose Mesh and then Cube from the menu. You will use this as a reference block for the size of the boat.

In the 3D View window Properties Panel, in the Transform subpanel at the top, click on the Dimensions button, and change the dimensions of the reference block to 4.666 in the X direction, 9.83 in the Y direction, and 1.5 in the Z direction. You can use the Tab key to go from X to Y to Z, and press Enter when you are done.

Move the mouse over the 3D View window, and press Shift + D to duplicate the block. Then press Enter.

Press the NumPad 1 key to get the front view.

Press G and then Z to move this block down, so its top is in the lower half of the first one.

Press S, then X, then the number 0.79, and then Enter. This will scale it to 79 percent along the X axis. Look at the readout. It will show you what is happening. This will represent the width of the boat at the bottom of the hull.

Press the MMB and rotate the view to see what it looks like.
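If you ever need to rebuild these guides, the same reference blocks can be created from Blender's Python console. The following is a minimal sketch of the steps above using the bpy API; the object names are my own, and exact operator options vary slightly between Blender versions:

    import bpy

    # Block for the hull's top: 56 in / 12 = 4.666 BU wide,
    # 9.83 BU long, and 18 in / 12 = 1.5 BU high.
    bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 0.0))
    top_block = bpy.context.active_object
    top_block.name = "RefBlock_Top"
    top_block.dimensions = (4.666, 9.83, 1.5)

    # Duplicate it, drop it down, and scale it to 79 percent on X
    # for the 44-inch width at the bottom of the hull.
    bpy.ops.object.duplicate()
    bottom_block = bpy.context.active_object
    bottom_block.name = "RefBlock_Bottom"
    bottom_block.location.z -= 1.0
    bottom_block.scale.x *= 0.79

Setting dimensions directly is equivalent to typing the values into the Transform subpanel, and scaling afterwards mirrors the S, X, 0.79 step.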
Time for action – making reference objects

One of the simplest ways to see what size your boat should be is to have boxes of the proper size to use as guides. So now, you will make some of these boxes:

1. In the 3D View window, press the Tab key to get into Object Mode.
2. Press A to deselect the boat.
3. Press the NumPad 3 key to get the side view. Make sure you are in Ortho view; press the 5 key on the NumPad if needed.
4. Press Shift + A and choose Mesh and then Cube from the menu. You will use this as a reference block for the size of the boat.
5. In the 3D View window Properties Panel, in the Transform subpanel at the top, click on the Dimensions button, and change the dimensions of the reference block to 4.666 in the X direction, 9.83 in the Y direction, and 1.5 in the Z direction. You can use the Tab key to go from X to Y to Z, and press Enter when you are done.
6. Move the mouse over the 3D View window, and press Shift + D to duplicate the block. Then press Enter.
7. Press the NumPad 1 key to get the front view.
8. Press G and then Z to move this block down, so its top is in the lower half of the first one.
9. Press S, then X, then the number 0.79, and then Enter. This will scale it to 79 percent along the X axis. Look at the readout; it will show you what is happening. This will represent the width of the boat at the bottom of the hull.
10. Press the MMB and rotate the view to see what it looks like.

What just happened?

To make accurate models, it helps to have references. For this boat that you are building, you don't need to copy another boat exactly, and the basic dimensions are enough. You got out of Edit Mode and deselected the boat so that you could work on something else without affecting the boat. Then, you made a cube and scaled it to the dimensions of the boat at the top of the hull, to use as a reference block. You then copied the reference block and scaled the copy down in X for the width of the boat at the bottom of the hull, as shown in the following image.

Reference objects, like reference blocks and reference spheres, are handy tools. They are easy to make and have a lot of uses.

For your reference, the 4909_05_making reference objects.blend file has been included in the download pack. It has the cube and the two reference blocks.
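Incidentally, the same two reference blocks can be produced in script form. The following bpy sketch is a convenience, not part of the book's workflow; it assumes Blender's Python console, and the -1.0 Z location for the second block is only approximate, since the text positions it by eye:

import bpy

# First reference block: the hull dimensions at the top.
bpy.ops.mesh.primitive_cube_add()
top_block = bpy.context.active_object
top_block.dimensions = (4.666, 9.83, 1.5)   # width, length, height in Blender units

# Second block: 79 percent as wide, for the bottom of the hull,
# moved down so its top sits in the lower half of the first block.
bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, -1.0))
bottom_block = bpy.context.active_object
bottom_block.dimensions = (4.666 * 0.79, 9.83, 1.5)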
Sizing the boat to the reference blocks

Now that the reference blocks have been made, you can use them to guide you when making the boat.

Time for action – making the boat the proper length

Now that you've made the reference blocks the right size, it's time to make the boat the same dimensions as the blocks:

1. Change to the side view by pressing the NumPad 3 key.
2. Press Ctrl + MMB and use the mouse to zoom in until the reference blocks fill almost all of the 3D View. Press Shift + MMB and use the mouse to re-center the reference blocks.
3. Select the boat with the RMB. Press the Tab key to go into Edit Mode, and then choose the Vertex Select mode button from the 3D View header.
4. Press A to deselect all vertices. Then, select the boat's vertices on the right-hand side of the 3D View. Press B to use the border select, press C to use the circle select mode, or press Ctrl + LMB for the lasso select.
5. When the vertices are selected, press G and then Y, and move the vertices to the right with the mouse until they are lined up with the right-hand side of the reference blocks. Press the LMB to drop the vertices in place.
6. Press A to deselect all the vertices, select the boat's vertices on the left-hand side of the 3D View, and move them to the left until they are lined up with the left-hand side of the reference blocks, as shown in the following image.

What just happened?

You made sure that the screen was properly set up for working by getting into the side view in the Ortho mode. Next, you selected the boat, got into Edit Mode, and got ready to move the vertices. Then, you made the boat the proper length by moving the vertices so that they lined up with the reference blocks.

For your reference, the 4909_05_proper length.blend file has been included in the download pack. It has the bow and stern properly sized.

Time for action – making the boat the proper width and height

Making the boat the right length was pretty easy. Setting the width and height requires a few more steps, but the method is very similar:

1. Press the NumPad 1 key to change to the front view. Use Ctrl + MMB to zoom into the reference blocks. Use Shift + MMB to re-center the boat so that you can see all of it.
2. Press A to deselect all the vertices, and using any method, select all of the vertices on the left of the 3D View.
3. Press G and then X to move the left-side vertices in X until they line up with the wider reference block, as shown in the following image. Press the LMB to release the vertices.
4. Press A to deselect all the vertices. Select only the right-hand vertices, with a method different from the one you used to select the left-hand vertices. Then, press G and then X to move them in X until they line up with the right side of the wider reference block. Press the LMB when they are in place.
5. Deselect all the vertices. Select only the top vertices, and press G and then Z to move them in the Z direction until they line up with the top of the wider reference block.
6. Deselect all the vertices. Now, select only the bottom vertices, and press G and then Z to move them in the Z direction until they line up with the bottom of the wider reference block, as shown in the following image.
7. Deselect all the vertices. Next, select only the bottom vertices on the left. Press G and then X to move them in X until they line up with the narrower reference block. Then, press the LMB.
8. Deselect all the vertices, and select only the bottom vertices on the right. Press G and then X to move them in the X axis until they line up with the narrower reference block, as shown in the following image. Press the LMB to release them.
9. Press the NumPad 3 key to switch to the side view again. Use Ctrl + MMB to zoom out if you need to.
10. Press A to deselect all the vertices. Select only the bottom vertices on the right, as in the following illustration. You are going to make this the stern end of the boat. Press G and then Y to move them left in the Y axis just a little bit, so that the stern is not completely straight up and down. Press the LMB to release them.
11. Now, select only the bottom vertices on the left, as highlighted in the following illustration. Make this the bow end of the boat. Move them right in the Y axis just a little bit. Go a bit further than the stern, so that the angle is similar to the right side, as shown here, maybe about 1.3 or 1.4. It's your call.

What just happened?

You used the reference blocks to guide yourself in moving the vertices into the shape of a boat. You adjusted the width and the height, and angled the hull. Finally, you angled the stern and the bow. It floats, but it's still a bit boxy.

For your reference, the 4909_05_proper width and height1.blend file has been included in the download pack. It has both sides aligned with the wider reference block. The 4909_05_proper width and height2.blend file has the bottom vertices aligned to the narrower reference block. The 4909_05_proper width and height3.blend file has the bow and stern finished.
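As an aside, each of those grab-and-move operations has a one-line scripted counterpart. A minimal sketch, assuming vertices are already selected in Edit Mode; exact operator keywords vary a little between Blender versions:

import bpy

# The scripted equivalent of pressing G, then Z, then typing -0.3:
bpy.ops.transform.translate(value=(0.0, 0.0, -0.3))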
Using Cameras

Packt
16 Aug 2013
11 min read
(For more resources related to this topic, see here.)

Creating a picture-in-picture effect

Having more than one viewport displayed can be useful in many situations. For example, you might want to show simultaneous events going on in different locations, or maybe you want to have a separate window for hot-seat multiplayer games. Although you could do it manually by adjusting the Normalized Viewport Rect parameters on your camera, this recipe includes a series of extra preferences to make it more independent of the user's display configuration.

Getting ready

For this recipe, we have prepared a package named basicLevel containing a scene. The package is in the 0423_02_01_02 folder.

How to do it...

To create a picture-in-picture display, just follow these steps:

1. Import the basicLevel package into your Unity project.
2. In the Project view, open basicScene, inside the folder 02_01_02. This is a basic scene featuring a directional light, a camera, and some geometry.
3. Add a Camera to the scene through the Create drop-down menu on top of the Hierarchy view, as shown in the following screenshot.
4. Select the camera you have created and, in the Inspector view, set its Depth to 1.
5. In the Project view, create a new C# script and rename it PictureInPicture.
6. Open your script and replace everything with the following code:

using UnityEngine;

public class PictureInPicture : MonoBehaviour
{
    public enum HorizontalAlignment { left, center, right };
    public enum VerticalAlignment { top, middle, bottom };
    public enum ScreenDimensions { pixels, screen_percentage };

    public HorizontalAlignment horizontalAlignment = HorizontalAlignment.left;
    public VerticalAlignment verticalAlignment = VerticalAlignment.top;
    public ScreenDimensions dimensionsIn = ScreenDimensions.pixels;
    public int width = 50;
    public int height = 50;
    public float xOffset = 0f;
    public float yOffset = 0f;
    public bool update = true;

    private int hsize, vsize, hloc, vloc;

    void Start()
    {
        AdjustCamera();
    }

    void Update()
    {
        if (update)
            AdjustCamera();
    }

    void AdjustCamera()
    {
        // Interpret width/height as pixels or as a percentage of the screen
        if (dimensionsIn == ScreenDimensions.screen_percentage)
        {
            hsize = Mathf.RoundToInt(width * 0.01f * Screen.width);
            vsize = Mathf.RoundToInt(height * 0.01f * Screen.height);
        }
        else
        {
            hsize = width;
            vsize = height;
        }

        // Horizontal placement; the offset is always a percentage of the screen width
        if (horizontalAlignment == HorizontalAlignment.left)
            hloc = Mathf.RoundToInt(xOffset * 0.01f * Screen.width);
        else if (horizontalAlignment == HorizontalAlignment.right)
            hloc = Mathf.RoundToInt((Screen.width - hsize) - (xOffset * 0.01f * Screen.width));
        else
            hloc = Mathf.RoundToInt(((Screen.width * 0.5f) - (hsize * 0.5f)) - (xOffset * 0.01f * Screen.width));

        // Vertical placement; the offset is always a percentage of the screen height
        if (verticalAlignment == VerticalAlignment.top)
            vloc = Mathf.RoundToInt((Screen.height - vsize) - (yOffset * 0.01f * Screen.height));
        else if (verticalAlignment == VerticalAlignment.bottom)
            vloc = Mathf.RoundToInt(yOffset * 0.01f * Screen.height);
        else
            vloc = Mathf.RoundToInt(((Screen.height * 0.5f) - (vsize * 0.5f)) - (yOffset * 0.01f * Screen.height));

        camera.pixelRect = new Rect(hloc, vloc, hsize, vsize);
    }
}

In case you haven't noticed, we are not achieving percentages by dividing numbers by 100, but rather by multiplying them by 0.01. The reason behind that is performance: computer processors are faster at multiplying than dividing.

7. Save your script and attach it to the new camera that you created previously.
8. Uncheck the new camera's Audio Listener component and change some of the PictureInPicture parameters: change Horizontal Alignment to Right, Vertical Alignment to Top, and Dimensions In to pixels.
Leave XOffset and YOffset as 0, and change Width to 400 and Height to 200, as shown below.

9. Play your scene. The new camera's viewport should be visible on the top right of the screen.

How it works...

Our script resizes and positions the new camera's viewport (through its pixelRect) according to the user's preferences.

There's more...

The following are some aspects of your picture-in-picture you could change.

Making the picture-in-picture proportional to the screen's size
If you change the Dimensions In option to screen_percentage, the viewport size will be based on the actual screen's dimensions instead of pixels.

Changing the position of the picture-in-picture
Vertical Alignment and Horizontal Alignment can be used to change the viewport's origin. Use them to place it where you wish.

Preventing the picture-in-picture from updating on every frame
Leave the Update option unchecked if you don't plan to change the viewport position in running mode. It's a good idea to leave it checked when testing, and then uncheck it once the position has been decided and set up.

See also

The Displaying a mini-map recipe.

Switching between multiple cameras

Choosing from a variety of cameras is a common feature in many genres: race sims, sports sims, tycoon/strategy, and many others. In this recipe, we will learn how to give players the ability to choose from among many cameras using their keyboard.

Getting ready

In order to follow this recipe, we have prepared a package containing a basic level named basicScene. The package is in the folder 0423_02_01_02.

How to do it...

To implement switchable cameras, follow these steps:

1. Import the basicLevel package into your Unity project.
2. In the Project view, open basicScene from the 02_01_02 folder. This is a basic scene featuring a directional light, a camera, and some geometry.
3. Add two more cameras to the scene. You can do it through the Create drop-down menu on top of the Hierarchy view. Rename them cam1 and cam2.
4. Change the cam2 camera's position and rotation so it won't be identical to cam1.
5. Create an Empty game object by navigating to Game Object | Create Empty. Then, rename it Switchboard.
6. In the Inspector view, disable the Camera and Audio Listener components of both cam1 and cam2.
7. In the Project view, create a new C# script. Rename it CameraSwitch and open it in your editor.
8. Open your script and replace everything with the following code (SwitchCamera is public here so that other scripts can call it too, as suggested later in this recipe):

using UnityEngine;

public class CameraSwitch : MonoBehaviour
{
    public GameObject[] cameras;
    public string[] shortcuts;
    public bool changeAudioListener = true;

    void Update()
    {
        // Check each shortcut key and switch to its camera when released
        for (int i = 0; i < cameras.Length; i++)
        {
            if (Input.GetKeyUp(shortcuts[i]))
                SwitchCamera(i);
        }
    }

    public void SwitchCamera(int index)
    {
        // Enable only the chosen camera (and its Audio Listener, if requested)
        for (int i = 0; i < cameras.Length; i++)
        {
            bool enable = (i == index);
            if (changeAudioListener)
                cameras[i].GetComponent<AudioListener>().enabled = enable;
            cameras[i].camera.enabled = enable;
        }
    }
}

9. Attach CameraSwitch to the Switchboard game object.
10. In the Inspector view, set both Cameras and Shortcuts size to 3. Then, drag the scene cameras into the Cameras slots, and type 1, 2, and 3 into the Shortcuts text fields, as shown in the next screenshot.
11. Play your scene and test your cameras.

How it works...

The script is very straightforward. All it does is capture the key pressed and enable its respective camera (and its Audio Listener, in case the Change Audio Listener option is checked).

There's more...
Here are some ideas on how you could try twisting this recipe a bit.

Using a single enabled camera
A different approach to the problem would be keeping all the secondary cameras disabled and assigning their position and rotation to the main camera via a script (you would need to make a copy of the main camera and add it to the list, in case you wanted to save its transform settings).

Triggering the switch from other events
You could also change your camera from another game object's script by using a line of code such as the one shown here:

GameObject.Find("Switchboard").GetComponent<CameraSwitch>().SwitchCamera(1);

See also

The Making an inspect camera recipe.

Customizing the lens flare effect

As anyone who has played a game set in an outdoor environment in the last 15 years can tell you, the lens flare effect is used to simulate the incidence of bright lights over the player's field of view. Although it has become a bit overused, it is still very much present in all kinds of games. In this recipe, we will create and test our own lens flare texture.

Getting ready

In order to continue with this recipe, it's strongly recommended that you have access to image editor software such as Adobe Photoshop or GIMP. The source for the lens texture created in this recipe can be found in the 0423_02_03 folder.

How to do it...

To create a new lens flare texture and apply it to the scene, follow these steps:

1. Import Unity's Character Controller package by navigating to Assets | Import Package | Character Controller. Do the same for the Light Flares package.
2. In the Hierarchy view, use the Create button to add a Directional Light to your scene.
3. Select your camera and add a Mouse Look component by accessing the Component | Camera Control | Mouse Look menu option.
4. In the Project view, locate the Sun flare (inside Standard Assets | Light Flares), duplicate it, and rename it to MySun, as shown in the following screenshot.
5. In the Inspector view, click Flare Texture to reveal the base texture's location in the Project view. It should be a texture named 50mmflare.
6. Duplicate the texture and rename it My50mmflare.
7. Right-click My50mmflare and choose Open. This should open the file (actually a .psd) in your image editor. If you're using Adobe Photoshop, you might see the guidelines for the texture, as shown here.
8. To create the light rings, create new Circle shapes and add different Layer Effects such as Gradient Overlay, Stroke, Inner Glow, and Outer Glow.
9. Recreate the star-shaped flares by editing the originals or by drawing lines and blurring them.
10. Save the file and go back to the Unity Editor.
11. In the Inspector view, select MySun, and set Flare Texture to My50mmflare.
12. Select Directional Light and, in the Inspector view, set Flare to MySun.
13. Play the scene and move your mouse around. You will be able to see the lens flare as the camera faces the light.

How it works...

We have used Unity's built-in lens flare texture as a blueprint for our own. Once applied, the lens flare texture will be displayed when the player looks in the approximate direction of the light.

There's more...

Flare textures can use different layouts and parameters for each element. In case you want to learn more about the lens flare effect, check out Unity's documentation at http://docs.unity3d.com/Documentation/Components/class-LensFlare.html.

Making textures from screen content

If you want your game or player to take in-game snapshots and apply them as a texture, this recipe will show you how.
This can be very useful if you plan to implement an in-game photo gallery or display a snapshot of a past key moment at the end of a level (race games and stunt sims use this feature a lot).

Getting ready

In order to follow this recipe, please import the basicTerrain package, available in the 0423_02_04_05 folder, into your project. The package includes a basic terrain and a camera that can be rotated via the mouse.

How to do it...

To create textures from screen content, follow these steps:

1. Import the Unity package and open the 02_04_05 scene.
2. We need to create a script. In the Project view, click on the Create drop-down menu and choose C# Script. Rename it ScreenTexture and open it in your editor.
3. Open your script and replace everything with the following code:

using UnityEngine;
using System.Collections;

public class ScreenTexture : MonoBehaviour
{
    public int photoWidth = 50;
    public int photoHeight = 50;
    public int thumbProportion = 25;
    public Color borderColor = Color.white;
    public int borderWidth = 2;

    private Texture2D texture;
    private Texture2D border;
    private int screenWidth;
    private int screenHeight;
    private int frameWidth;
    private int frameHeight;
    private bool shoot = false;

    void Start()
    {
        screenWidth = Screen.width;
        screenHeight = Screen.height;
        // The capture frame is a percentage of the screen size
        frameWidth = Mathf.RoundToInt(screenWidth * photoWidth * 0.01f);
        frameHeight = Mathf.RoundToInt(screenHeight * photoHeight * 0.01f);
        texture = new Texture2D(frameWidth, frameHeight, TextureFormat.RGB24, false);
        // A 1x1 texture, stretched to draw the frame's border lines
        border = new Texture2D(1, 1, TextureFormat.ARGB32, false);
        border.SetPixel(0, 0, borderColor);
        border.Apply();
    }

    void Update()
    {
        if (Input.GetKeyUp(KeyCode.Mouse0))
            StartCoroutine(CaptureScreen());
    }

    void OnGUI()
    {
        // Top, bottom, left, and right border lines of the capture frame
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f) - borderWidth * 2, ((screenHeight * 0.5f) - (frameHeight * 0.5f)) - borderWidth, frameWidth + borderWidth * 2, borderWidth), border, ScaleMode.StretchToFill);
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f) - borderWidth * 2, (screenHeight * 0.5f) + (frameHeight * 0.5f), frameWidth + borderWidth * 2, borderWidth), border, ScaleMode.StretchToFill);
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f) - borderWidth * 2, (screenHeight * 0.5f) - (frameHeight * 0.5f), borderWidth, frameHeight), border, ScaleMode.StretchToFill);
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) + (frameWidth * 0.5f), (screenHeight * 0.5f) - (frameHeight * 0.5f), borderWidth, frameHeight), border, ScaleMode.StretchToFill);

        if (shoot)
        {
            // Draw the last snapshot as a thumbnail in the top-left corner
            GUI.DrawTexture(new Rect(10, 10, frameWidth * thumbProportion * 0.01f, frameHeight * thumbProportion * 0.01f), texture, ScaleMode.StretchToFill);
        }
    }

    IEnumerator CaptureScreen()
    {
        // Wait until rendering has finished before reading pixels
        yield return new WaitForEndOfFrame();
        texture.ReadPixels(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f), (screenHeight * 0.5f) - (frameHeight * 0.5f), frameWidth, frameHeight), 0, 0);
        texture.Apply();
        shoot = true;
    }
}

4. Save your script and apply it to the Main Camera game object.
5. In the Inspector view, change the values of the Screen Texture component: set Photo Width and Photo Height to 25 and Thumb Proportion to 75, as shown here.
6. Play the scene. You will be able to take a snapshot of the screen (and have it displayed on the top-left corner) by clicking the mouse button.

How it works...

Clicking the mouse triggers a function that reads the pixels within the specified rectangle and applies them to a texture that is drawn by the GUI.

There's more...

Apart from displaying the texture as a GUI element, you could use it in other ways.
Applying your texture to a material
You can apply your texture to an existing object's material by adding a line similar to the following at the end of the CaptureScreen function:

GameObject.Find("MyObject").renderer.material.mainTexture = texture;

Using your texture as a screenshot
You can encode your texture as a PNG image file and save it. Check out Unity's documentation on this feature at http://docs.unity3d.com/Documentation/ScriptReference/Texture2D.EncodeToPNG.html.
Physics with Bullet

Packt
13 Aug 2014
7 min read
In this article by Rickard Eden, author of jMonkeyEngine 3.0 Cookbook, we will learn how to use physics in games by means of a physics engine. This article contains the following recipes:

- Creating a pushable door
- Building a rocket engine
- Ballistic projectiles and arrows
- Handling multiple gravity sources
- Self-balancing using RotationalLimitMotors

(For more resources related to this topic, see here.)

Using physics in games has become very common and accessible, thanks to open source physics engines such as Bullet. jMonkeyEngine supports both the Java-based jBullet and native Bullet in a seamless manner. jBullet is a Java-based library with JNI bindings to the original Bullet, which is based on C++. jMonkeyEngine is supplied with both of these, and they can be used interchangeably by replacing the libraries in the classpath; no coding change is required. Use jme3-libraries-physics for the implementation of jBullet and jme3-libraries-physics-native for Bullet. In general, Bullet is considered to be faster and is full featured.

Physics can be used for almost anything in games, from tin cans that can be kicked around to character animation systems. In this article, we'll try to reflect the diversity of these implementations.

Creating a pushable door

Doors are useful in games. Visually, it is more appealing not to have holes in the walls, but doors for the players to pass through. Doors can be used to obscure the view and hide what's behind them for a surprise later. By extension, they can also be used to dynamically hide geometries and increase performance. There is also a gameplay aspect, where doors are used to open new areas to the player and give a sense of progression.

In this recipe, we will create a door that can be opened by pushing it, using a HingeJoint class. This door consists of the following three elements:

- Door object: This is the visible object
- Attachment: This is the fixed end of the joint, around which the hinge swings
- Hinge: This defines how the door should move

Getting ready

Simply following the steps in this recipe won't give us anything testable. Since the camera has no physics, the door will just sit there and we will have no way to push it. If you have made any of the recipes that use the BetterCharacterControl class, you will already have a suitable test bed for the door. If not, jMonkeyEngine's TestBetterCharacter example can also be used.

How to do it...

This recipe consists of two sections. The first will deal with the actual creation of the door and the functionality to open it. This will be made in the following six steps:

1. Create a new RigidBodyControl object called attachment with a small BoxCollisionShape. The CollisionShape should normally be placed inside the wall where the player can't run into it. It should have a mass of 0, to prevent it from being affected by gravity. We move it some distance away and add it to the physicsSpace instance, as shown in the following code snippet:

attachment.setPhysicsLocation(new Vector3f(-5f, 1.52f, 0f));
bulletAppState.getPhysicsSpace().add(attachment);

2. Now, create a Geometry called doorGeometry with a Box shape with dimensions that are suitable for a door, as follows:

Geometry doorGeometry = new Geometry("Door", new Box(0.6f, 1.5f, 0.1f));

3. Similarly, create a RigidBodyControl instance with the same dimensions and a mass of 1; add it as a control to the doorGeometry class first, and then add it to the physicsSpace of bulletAppState.
The following code snippet shows you how to do this:

RigidBodyControl doorPhysicsBody = new RigidBodyControl(new BoxCollisionShape(new Vector3f(0.6f, 1.5f, 0.1f)), 1);
bulletAppState.getPhysicsSpace().add(doorPhysicsBody);
doorGeometry.addControl(doorPhysicsBody);

4. Now, we're going to connect the two with a HingeJoint. Create a new HingeJoint instance called joint, as follows:

HingeJoint joint = new HingeJoint(attachment, doorPhysicsBody, new Vector3f(0f, 0f, 0f), new Vector3f(-1f, 0f, 0f), Vector3f.UNIT_Y, Vector3f.UNIT_Y);

5. Then, we set the limit for the rotation of the door, as follows:

joint.setLimit(-FastMath.HALF_PI - 0.1f, FastMath.HALF_PI + 0.1f);

6. Finally, add the joint to physicsSpace:

bulletAppState.getPhysicsSpace().add(joint);

Now, we have a door that can be opened by walking into it. It is primitive but effective. Normally, you want doors in games to close after a while. However, here, once it is opened, it remains open. In order to implement an automatic closing mechanism, perform the following steps:

1. Create a new class called DoorCloseControl extending AbstractControl.
2. Add a HingeJoint field called joint along with a setter for it, and a float field called timeOpen.
3. In the controlUpdate method, we get the hinge angle from the HingeJoint and store it in a float variable called angle, as follows:

float angle = joint.getHingeAngle();

4. If the angle deviates a little from zero, we should increase timeOpen using tpf. Otherwise, timeOpen should be reset to 0, as shown in the following code snippet:

if (angle > 0.1f || angle < -0.1f) timeOpen += tpf;
else timeOpen = 0f;

5. If timeOpen is more than 5, we begin by checking whether the door is still open. If it is, we define a speed opposite in sign to the angle and enable the door's motor to make it move in the opposite direction of its angle, as follows:

if (timeOpen > 5) {
    float speed = angle > 0 ? -0.9f : 0.9f;
    joint.enableMotor(true, speed, 0.1f);
    spatial.getControl(RigidBodyControl.class).activate();
}

6. If timeOpen is less than 5, we should set the speed of the motor to 0:

joint.enableMotor(true, 0, 1);

7. Now, we can create a new DoorCloseControl instance in the main class, attach it to the doorGeometry class, and give it the same joint we used previously in the recipe, as follows:

DoorCloseControl doorControl = new DoorCloseControl();
doorControl.setHingeJoint(joint);
doorGeometry.addControl(doorControl);

How it works...

The attachment RigidBodyControl has no mass and will thus not be affected by external forces such as gravity. This means it will stick to its place in the world. The door, however, has mass and would fall to the ground if the attachment didn't keep it up. The HingeJoint class connects the two and defines how they should move in relation to each other. Using Vector3f.UNIT_Y means the rotation will be around the y axis. We set the limit of the joint to be a little more than half pi in each direction. This means it will open almost 100 degrees to either side, allowing the player to step through.

When we try this out, there may be some flickering as the camera passes through the door. To get around this, there are some tweaks that can be applied. We can change the collision shape of the player: making the collision shape bigger will result in the player hitting the wall before the camera gets close enough to clip through. This has to be done considering other constraints in the physics world. You can also consider changing the near clip distance of the camera. Decreasing it will allow things to get closer to the camera before they are clipped through.
This might have implications for the camera's projection. One thing that will not work is making the door thicker, since the triangles on the side closest to the player are the ones that are clipped through. Making the door thicker will move them even closer to the player.

In DoorCloseControl, we consider the door to be open if hingeAngle deviates a little from 0. We don't use 0 because we can't control the exact rotation of the joint; instead, we use a rotational force to move it. This is what we do with joint.enableMotor. Once the door has been open for more than five seconds, we tell it to move in the opposite direction. When it's close to 0, we set the desired movement speed to 0. Simply turning off the motor, in this case, would cause the door to keep moving until it is stopped by an external force. Once we enable the motor, we also need to call activate() on the RigidBodyControl or it will not move.
User Interface

Packt
23 Sep 2015
10 min read
This article, written by John Doran, the author of the Unreal Engine Game Development Cookbook, covers the following recipes:

- Creating a main menu
- Animating a menu

(For more resources related to this topic, see here.)

In order to create a good game project, you need to be able to communicate information to the player. To do this, we need to create a user interface (UI), which will allow us to display information such as the player's health, inventory, and so on. Inside Unreal 4, we use the Slate UI framework to create user interfaces; however, it's a very complex system. To make things easier for end users, Unreal also released the Unreal Motion Graphics (UMG) UI Designer, which is a visual UI authoring tool with a much easier workflow. This is what we will be using in this article. For more information on Slate, refer to https://docs.unrealengine.com/latest/INT/Programming/Slate/index.html.

Creating a main menu

A main menu can serve as an introduction to your game and is a great place for us to discuss some additional things that UMG has, such as Texts and Buttons. We'll also learn how we can make buttons do things. Let's spend some time to see just how easy it is to create one! For more information on the client-server model, refer to https://en.wikipedia.org/wiki/Client%E2%80%93server_model.

How to do it…

To create the main menu, follow these steps:

1. Create a new level by going to File | New Level and selecting Empty Level.
2. Next, inside the Content Browser tab, go to our UI folder, then to Add New | User Interface | Widget Blueprint, and give it a name of MainMenu. Double-click on it to open the editor.
3. In this menu, we are going to have the title of the game and then a series of buttons the player can press. From the Palette tab, open up the Common section and drag and drop a Button onto the middle of the screen.
4. Select the button and change its Size X to 400 and Size Y to 80. We will also rename the button to Play Game.
5. Drag and drop a Text object onto the Play Game button and you should see it snap on to the button as a child. Under Content, change Text to Play Game. From here, under Appearance, change the color of the button to black and change the Font size to 32.
6. From the Hierarchy tab, select the Play Game button and copy and paste it to create a duplicate. Move the button down, rename it to Quit Game, and change its Text content as well.
7. Move both of the objects so that they're on the bottom part of the HUD, slightly above and side by side, as shown in the following image.
8. Lastly, we'll want to set our pivots and anchors accordingly. When you select either the Quit Game or Play Game buttons, you may notice a sun-like widget that displays the Anchors of the object (known as the Anchor Medallion). In our case, open Anchors from the Details panel and click on the bottom-center option.
9. Now that we have the buttons created, we want them to actually do something when we click on them. Select the Play Game button and, from the Details tab, scroll down until you see the Events component. There should be a series of big green + buttons. Click on the green button beside OnClicked.
10. Next, it will take us to the Event Graph with the appropriate event created for us. To the right of the event, right-click and create an Open Level action. Under Level Name, put in whatever level you like (for example, StarterMap) and then connect the output of the OnClicked action to the input of the Open Level action.
11. To the right of that, create a Remove from Parent action to make sure that the menu doesn't stay on screen when we leave. Finally, create a Get Player Controller action and, to the right of it, a Set Show Mouse Cursor action, which should be disabled so that the mouse will no longer be visible when we leave the menu. (Drag Return Value from the Get Player Controller action to create a new node and search for the mouse cursor action.)
12. Now, go back to the Designer view and select the Quit Game button. Click on its OnClicked button as well and, to the right of this one, create a Quit Game action, connecting the output of the OnClicked action to the input of the Quit Game action.
13. Lastly, as a bit of polish, let's add our game's title to the screen. Drag and drop another Text object onto the scene, this time with its Anchor at the top-center. From here, change Position X to 0 and Position Y to 176. Change Alignment in the X axis to .5 and check the Size to Content option for it to automatically resize. Set the Content component's Text property to the game's name (in my case, Game Name). Under the Appearance component, set the Font size to 93 and set Justification to Center.

There are a number of other styling options that you may wish to use when developing your HUDs. For more information about them, refer to https://docs.unrealengine.com/latest/INT/Engine/UMG/UserGuide/Styling/index.html.

14. Compile the menu and save it. Now we need to actually have the widget show up. To do so, open up the Level Blueprint by going to Blueprints | Open Level Blueprint and create an Event BeginPlay event. Then, to the right of it, right-click and create a Create Widget action. From the dropdown under Class, select MainMenu and connect the arrow from Event BeginPlay to the input of Create MainMenu_C Widget.
15. After this, click and drag the output arrow and create an Add to Viewport action. Then, connect the Return Value of our Create Widget action to the Target of the Add to Viewport action.
16. Lastly, we also want to display the player's cursor on the screen so the buttons can be used. To do this, right-click and select Get Player Controller. Then, from its Return Value, create a Set Show Mouse Cursor action, set to enabled. Connect the output of the Add to Viewport action to the input of the Show Mouse Cursor action.
17. Compile, save, and run the project!

With this, our menu is completed! We can quit the game without any problem, and pressing the Play Game button will start our level!

Animating a menu

You may have created a menu or UI element at some point, but rather than having it static and non-moving, let's spend some time looking at how we can animate menus by having them fly in and out, or animating them in some other way. This will help add to the polish of the title, as well as enable players to notice things more easily as they move in.

Getting ready

Before we start working on this, we need to have a project created and set up. Do the previous recipe all the way to completion.

How to do it…

1. Open up the MainMenu blueprint once more and, from the bottom-left in the Animations tab, click on the +Animation button and give the new animation a name of MenuFlyIn.
2. Select the newly created animation and you should see the window on the right-hand side brighten up. Next, click on the Auto Key toggle to have the animation editor automatically set keys that are appropriate for our implementation.
3. If it's not there already, move the timeline bar (the white line with two orange ends on the top and bottom) to the 0.00 mark on the animation timeline. Next, select the Game Name object and, under Color and Opacity, open it and change the A (alpha) value to 0.
4. Now move the timeline bar to the 1.00 mark and then open the color again and set the A value to 1. You'll notice a transition, going from completely transparent text to a fully shown one. This is a good start. Let's have the buttons fly in after the text appears.
5. Next, move the timeline bar to the 2.00 mark and select the Play Game button. From the Details tab, you'll notice that there are new + icons to the left of the variables. Clicking one saves that value for use in the animation. Click on the + icon by the Position Y value.

If you use your scroll wheel while inside the dark grey portion of the timeline bar (where the keyframe numbers are displayed), it zooms in and out. This can be quite useful when you create more complex animations.

6. Now move the timeline bar to the 1.00 mark and move the Play Game button off the screen. By doing the animation in this way, we first save where we want the element to end up, and then go back in time to set where it comes from.
7. Do the same animation for the Quit Game button.
8. Now that our animation is created, let's make it play when the menu appears. Click on the Graph button and, from the MyBlueprint tab under the Graphs section, double-click on the Event Construct event, which is called as soon as we add the menu to the scene. Grab the pin on the end of it and create a Play Animation action.
9. Drag and drop a MenuFlyIn animation into the scene and select Get. Connect its output pin to the In Animation property of the Play Animation action.
10. Now that we have the animation play when we create the menu, let's have it play in reverse when we leave the menu. Select the Play Animation and Menu Fly In nodes and copy them. Then move to the OnClicked (Play Game) action.
11. Drag the OnClicked event over to the left and remove its original connection to the Open Level action by holding down Alt and clicking. Now paste (Ctrl + V) the new nodes and connect the output of OnClicked (Play Game) to the input of Play Animation. Then change Play Mode to Reverse.
12. To the right of this, create a Delay action. For the Duration, we want it to wait as long as the animation runs, so from the Menu Fly In variable, create another pin and add a Get End Time action. Connect the Return Value of Get End Time to the Duration input of the Delay action.
13. Connect the output of the Play Animation action to the input of the Delay action, and the Completed output of the Delay action to the input of the Open Level action.
14. Now we need to do the same for the OnClicked (Quit Game) event.
15. Now compile, save, and run the game!

Our menu is now completed, and we've learned how animation works inside UMG! For more examples of using UMG for animation, refer to https://docs.unrealengine.com/latest/INT/Engine/UMG/UserGuide/Animation/index.html.

Summary

This article gave you some insight into Slate and the UMG Editor, used to create a number of UI elements and an animated main menu to tie your whole game together. We created a main menu and also learned how to make buttons do things. We then spent some time looking at how we can animate menus by having them fly in and out.
Resources for Article:

Further resources on this subject:

- The Blueprint Class [article]
- Adding Fog to Your Games [article]
- Overview of Unreal Engine 4 [article]
Blender 2.5: modeling a basic humanoid character

Packt
01 Jul 2011
14 min read
Mission Briefing

Our objective is to create a basic model of a humanoid, with all the major parts included and correctly shaped: head, arms, torso, legs, and feet will be defined. We won't be creating fine details of the model, but we will definitely pay attention to the process and the mindset necessary to achieve our goal.

What Does It Do?

We'll start by creating a very simple (and ugly) base mesh that we can tweak later to get a nice finished model. From a single cube, we will be creating an entire model of a basic humanoid character, and take the opportunity to follow our own "feelings" to create the finished model.

Why Is It Awesome?

This project will help you learn some good points that will be handy when working on future projects (even complex ones). First of all, we'll learn a basic procedure for applying the box modeling technique to create a base mesh. We'll then learn that our models don't have to look nice all the time to ensure a proper result; instead, we must have a proper understanding of where we are heading, to avoid getting lost along the way. Finally, we'll learn to separate the complexity of a modeling task into two different parts, using the best tools for the job each time (thus having a more enjoyable time and a lot of creative freedom).

The brighter side of this project will be working with the sculpting tools, since they give us a very cool way of tweaking meshes without having to handle individual vertices, edges, or faces. This advantage constitutes an added value for our workflow: we can separate the boring technical parts of modeling (mostly extruding and defining topology) from the actual fine tweaking of the form. Moreover, if we have the possibility of using the sculpt tools with a pen tablet, the modeling experience will be greatly improved and will feel extremely intuitive.

Your Hotshot Objectives

Although this project is not really complex, we will separate it into seven tasks, to make it easier to follow. They are:

1. Creating the base mesh
2. Head
3. Arms
4. Torso
5. Legs
6. Feet and final tweaks
7. Scene setup

Creating the Base Mesh

Let's begin our project by creating the mesh that will be further tweaked to get our final model. For this project we'll apply a methodology (avoiding overly complicated, unintelligible, written descriptions) that will give us some freedom and allow us to explore our creativity without the rigidity of having a strict blueprint to follow. There's a warning, though: our model will look ugly most of the time. This is because in the initial building process we're not going to put so much emphasis on how it looks, but on the structure of the mesh. Having said that, let's start with the first task of the project.

Prepare for Lift Off

Fire up Blender, delete the default lamp, set the name of the default cube to character (from the Item panel, Properties sidebar), and save the file as character.blend in the project's directory.

Engage Thrusters

First, we need to set the character object up with a Mirror modifier, so that we only need to work on one side of the character while the other side gets created automatically as we work. Select the character object, switch to Edit Mode, and then switch to Front View (View | Front); then add a new edge loop running vertically by using the Loop Cut and Slide tool. Make sure that the new edge loop is not moved from the center of the cube, so that it separates the cube into two mirrored sides.
Now set the viewport shading to wireframe (press the Z key), select the vertices on the left-hand side of the cube, and delete them (press the X key). Then let's switch back to Object Mode, go to the Modifiers tab in the Properties Editor, and add a new Mirror modifier to the character object. On the settings panel for the Mirror modifier, let's enable the Clipping option. This will leave us with the object set up according to our needs.
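For reference, the same mirror setup can also be done in script form. A minimal bpy sketch, assuming the half-cube is the active object; use_clip corresponds to the Clipping option in the UI:

import bpy

obj = bpy.context.active_object
mirror = obj.modifiers.new(name="Mirror", type='MIRROR')
mirror.use_clip = True   # keeps center vertices from crossing the mirror plane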
Switch to Edit Mode for the character object and then to Face Select mode. Select the upper face of the mesh, extrude it (E key), and then move the extrusion 1 unit upwards along the Z axis. Now perform a second extrusion, this time on the face that remains selected from the previous one, and move it 1 unit upwards too; this will leave us with three sections (the lowest one is the biggest).

Follow along by switching to Right View (View | Right), extrude again, press Escape, and then move the extrusion 0.2 units upwards (press the G key, Z key, then type 0.2). With the upper face selected, let's scale it down by a factor of 0.3 (S key, then type 0.3) and then move it by 0.6 units along the Y axis (G key, Y key, then type 0.6). Continue by extruding again and moving the extrusion 0.5 units upwards (G key, Z key, then type 0.5). Then add another extrusion, moving it up by 0.1 units (G key, Z key, then type 0.1). With the last extrusion selected, perform a scale operation by a factor of 1.5 (S key, then type 1.5). Right after that, extrude again and move the extrusion 1.5 units upwards (G key, Z key, then type 1.5). Now let's rotate the view freely, so that the face of the last extrusion that faces the front is selectable; select it and move it -0.5 units along the Y axis (press the G key, Y key, then type -0.5). Let's take a look at a screenshot to make sure that we are on the right path:

Note the (fairly noticeable) shapes showing the neck area, the head, and the torso of our model. Take a look at the face on the model's side from where we'll later extrude the arm.

Now let's switch to Front View (View | Front), then select the upper face on the side of the torso of the model, extrude it, press Escape, and move it 0.16 units along the X axis (G key, X key, then type 0.16). Continue by scaling it down by a factor of 0.75 (S key, then type 0.75) and moving it up by 0.07 units (press the G key, Z key, then type 0.07). Then switch to Right View (View | Right) and move it 0.2 units along the Y axis (press the G key, Y key, then type 0.2). This will give us the starting point to extrude the arm.

Switch to Front View (View | Front) and perform another extrusion (having selected the face that remains selected from the previous extrusion), press Escape, and then move it 0.45 units along the X axis (press the G key, X key, then type 0.45). Then let's switch to Edge Select Mode, deselect all the edges that could be selected (Select | Select/Deselect All), rotate the view to be able to select any of the horizontal edges of the last extrusion, and then select the upper horizontal edge of the last extrusion; move it -0.16 units along the X axis (G key, X key, then type -0.16). Right after that, let's select the lower horizontal edge of the last extrusion and move it 0.66 units upwards (G key, Z key, then type 0.66). Finalize this tweak by selecting the last two edges that we worked with and moving them -0.15 units along the X axis (press the G key, X key, then type -0.15).

Let's also select the lower edge of the first extrusion that we made for the arm and move it 0.14 units along the X axis (press the G key, X key, then type 0.14). Since this process is a bit tricky, let's use a screenshot to help us ensure that we are performing it correctly:

The only reason to perform this weird tweaking of the base mesh is to ensure a proper topology (internal mesh structure) for the shoulder when the model is finished. Let's remember to take a look at the shoulder of the finished model and compare it with the previous screenshot to understand it.

Make sure to only select the face shown selected in the previous screenshot and switch back to Front View (View | Front) to work on the arms. Extrude the selected face, press Escape, and then move it by 1.6 units along the X axis (press the G key, X key, then type 1.6). Then scale it down by a factor of 0.75 (press the S key, then type 0.75) and move it up 0.07 units (press the G key, Z key, then type 0.07). Continue by performing a second extrusion, press Escape, and then move it 1.9 units along the X axis (press the G key, X key, then type 1.9). Then let's perform a scale constrained to the Y axis, this time by a factor of 0.5 (press the S key, Y key, then type 0.5).

To perform some tweaks, let's switch to Top View (View | Top) and move the selected face 0.17 units along the Y axis (press the G key, Y key, then type 0.17). To model the simple shape that we will create for the hand, let's make sure that we have selected the rightmost face of the last extrusion, extrude it, and move it 0.25 units along the X axis (press the G key, X key, then type 0.25). Then perform a second extrusion and move it 0.25 units along the X axis as well, and finish the extrusions by adding a last one, this time moving it 0.6 units along the X axis (press the G key, X key, then type 0.6).

For the thumb, let's select the face pointing forwards in the second-last extrusion, extrude it, and move the extruded face to the right and down (remember we are in Top View) so that it looks well shaped with the rest of the hand. For this we can perform a rotation of the selected face to orient it better. To finish the hand, let's select the faces forming the thumb and the one between the thumb and the other "fingers", and move them -0.12 units along the Y axis (press the G key, Y key, then type -0.12). Also select the two faces on the other side of the hand and move them 0.08 units along the Y axis (press the G key, Y key, then type 0.08). The following screenshot should be very helpful in following the process:

Now it's time to model the legs of our character. For that, let's pan the 3D View to get the lower face visible, select it, extrude it, and move it -0.4 units (press the G key, Z key, then type -0.4). Now switch to Edge Select Mode, select the rightmost edge of the face we just extruded down, and move it -0.85 units along the X axis (G key, X key, then type -0.85). To extrude the thigh, let's first switch to Face Select Mode, select the face that runs diagonally after we moved the edge in the previous step, then switch to Front View (View | Front), extrude the face, press Escape, and then apply a scale operation along the Z axis by a factor of 0 (press the S key, Z key, then type 0), to get it looking entirely flat. With the face from the last extrusion selected, let's move it -0.8 units along the Z axis (press the G key, Z key, then type -0.8).
Right after that, let's scale the selected face by a factor of 1.28 along the X axis (press the S key, X key, then type 1.28) and move it 0.06 units along the X axis (press the G key, X key, then type 0.06). Now switch to Right View (View | Right), scale it by a factor of 0.8 along the Y axis (press the S key, Y key, then type 0.8), and then move it -0.12 units along the Y axis (press the G key, Y key, then type -0.12). Perform another extrusion, then press Escape and move it -2.2 units along the Z axis (press the G key, Z key, then type -2.2). To give it a better form, let's now scale the selected face by a factor of 0.8 along the Y axis (press the S key, Y key, then type 0.8) and move it 0.05 units along the Y axis (press the G key, Y key, then type 0.05). To complete the thigh, let's switch to Front View (View | Front), scale it by a factor of 0.7 along the X axis (press the S key, X key, then type 0.7), and then move it -0.18 units along the X axis (press the G key, X key, then type -0.18).

Right after the thigh, let's continue working on the leg. Make sure that the face from the tip of the previous extrusion is selected, extrude it, press Escape, then move it -2.3 units along the Z axis (press the G key, Z key, then type -2.3). Then let's switch to Right View (View | Right), scale it by a factor of 0.7 along the Y axis (press the S key, Y key, then type 0.7), and move it -0.02 units along the Y axis (press the G key, Y key, then type -0.02).

Now we just need to create the feet, by extruding the face selected previously and moving it -0.6 units along the Z axis (press the G key, Z key, then type -0.6). Then select the face of the last extrusion that faces the front, extrude it, press Escape, and move it -1.9 units along the Y axis. As a final touch, let's switch to Edge Select mode, then select the upper horizontal edge of the last extrusion and move it -0.3 units along the Z axis (press the G key, Z key, then type -0.3). Let's take a look at a couple of screenshots showing how our model should look by now:

In the previous screenshot, we can see the front part, whereas the back side of the model is seen in the next one. Let's take a couple of minutes to inspect the screenshots and compare them to our actual model, to be entirely sure that we have the correct mesh now. Notice that our model isn't looking especially nice yet; that's because we've just worked on creating the mesh. The actual form will be worked out in the coming tasks.

Objective Complete - Mini Debriefing

In this task we performed the very first step of our modeling process: creating the base mesh to work with. In order to avoid overly complicated written explanations, we are using a modeling process that leaves the actual "shaping" for later, so we only worked out the topology of our mesh and laid out some simple foundations, such as general proportions. The good thing about this approach is that we put in effort where it is really required, saving some time and enjoying the process a lot more.

Classified Intel

There are two main methods for modeling: poly-to-poly modeling and box modeling. The poly-to-poly method is about working with very localized (often detailed) geometry, paying attention to how each polygon is laid out in the model. The box modeling method is about constructing the general form very fast, by using the extrude operation, while paying attention to general aspects such as proportions, deferring the detailed tweaks for later. In this project we apply the box modeling method.
We just worked out a very simple mesh, mostly by performing extrusions and very simple tweaks. Our main concern while doing this task was to keep proportions correct, forgetting about the fine details of the "form" that we are working out. The next tasks of this project will be about using Blender's sculpting tools to ease the tweaking job a lot, getting a very nice model in the end without having to tweak individual vertices!
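To close, here is a flavor of what those box-modeling steps look like when scripted. A minimal bpy sketch, assuming you are in Edit Mode with a face selected; it is roughly the equivalent of pressing E, moving 1 unit along Z, then pressing S and typing 0.3:

import bpy

# Extrude the selected face(s) and move the extrusion 1 unit up along Z.
bpy.ops.mesh.extrude_region_move(
    TRANSFORM_OT_translate={"value": (0.0, 0.0, 1.0)}
)

# Scale the freshly extruded face(s) down, like S followed by 0.3.
bpy.ops.transform.resize(value=(0.3, 0.3, 0.3))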