
How-To Tutorials - Game Development

370 Articles

Using Specular in Unity

Packt
16 Aug 2013
14 min read
(For more resources related to this topic, see here.)

The specularity of an object's surface simply describes how shiny it is. These types of effects are often referred to as view-dependent effects in the Shader world, because in order to achieve a realistic Specular effect in your Shaders, you need to include the direction from which the camera or user is viewing the object's surface. Specular requires one more component to achieve its visual believability: the light direction. By combining these two directions, or vectors, we end up with a hotspot or highlight on the surface of the object, halfway between the view direction and the light direction. This halfway direction is called the half vector, and it is something new we are going to explore in this article, along with customizing our Specular effects to simulate metallic and cloth Specular surfaces.

Utilizing Unity3D's built-in Specular type

Unity has already provided us with a Specular function we can use in our Shaders: the BlinnPhong Specular lighting model. It is one of the more basic and efficient forms of Specular, which you can still find used in a lot of games even today. Since it is already built into the Unity Surface Shader language, we thought it best to start with it and build from there. You can also find an example in the Unity reference manual, but we will go into a bit more depth and explain where the data is coming from and why it works the way it does. This will give you a good grounding in setting up Specular, so that we can build on that knowledge in the later recipes in this article.

Getting ready

Let's start by carrying out the following:

1. Create a new Shader and give it a name.
2. Create a new Material, give it a name, and assign the new Shader to its shader property.
3. Create a sphere object and place it roughly at the world center.
4. Finally, create a directional light to cast some light onto our object.

When your assets have been set up in Unity, you should have a scene that resembles the following screenshot.

How to do it…

1. Begin by adding the necessary properties to the Shader's Properties block.
2. We then need to add the matching variables to the CGPROGRAM block, so that we can use the data from our new properties inside our Shader's CGPROGRAM block. Notice that we don't need to declare the _SpecColor property as a variable. This is because Unity has already created this variable for us in the built-in Specular model. All we need to do is declare it in our Properties block and it will pass the data along to the surf() function.
3. Our Shader now needs to be told which lighting model we want to use to light our model. You have seen the Lambert lighting model and how to make your own lighting model, but we haven't seen the BlinnPhong lighting model yet. So, let's add BlinnPhong to our #pragma statement.
4. We then need to modify our surf() function to complete the effect. A reconstructed sketch of the whole Shader follows.
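The code listings for these steps were not captured in this extract. As a stand-in, here is a minimal sketch of what the completed Surface Shader might look like; the property names _MainTint, _MainTex, and _SpecPower are assumptions rather than the book's exact choices:

    Shader "CookbookShaders/BuiltInSpecular" {
        Properties {
            _MainTint ("Diffuse Tint", Color) = (1,1,1,1)
            _MainTex ("Base (RGB)", 2D) = "white" {}
            _SpecColor ("Specular Color", Color) = (1,1,1,1)
            _SpecPower ("Specular Power", Range(0.1, 1)) = 0.5
        }
        SubShader {
            Tags { "RenderType" = "Opaque" }
            LOD 200

            CGPROGRAM
            #pragma surface surf BlinnPhong

            // _SpecColor is deliberately NOT redeclared here; Unity's
            // built-in BlinnPhong lighting model already defines it.
            sampler2D _MainTex;
            float4 _MainTint;
            float _SpecPower;

            struct Input {
                float2 uv_MainTex;
            };

            void surf (Input IN, inout SurfaceOutput o) {
                half4 c = tex2D(_MainTex, IN.uv_MainTex);
                o.Albedo = c.rgb * _MainTint.rgb;
                // The built-in model reads these two members to
                // shape and scale the highlight
                o.Specular = _SpecPower;
                o.Gloss = 1.0;
                o.Alpha = c.a;
            }
            ENDCG
        }
        FallBack "Diffuse"
    }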
How it works…

This basic Specular is a great starting point when you are prototyping your Shaders: you can get a lot accomplished in terms of writing the core functionality of the Shader without having to worry about the basic lighting functions. Unity has provided us with a lighting model that has already taken on the task of creating your Specular lighting for you. If you look into the UnityCG.cginc file, found in your Unity install directory under the Data folder, you will notice that there are Lambert and BlinnPhong lighting models available for you to use. The moment you compile your Shader with #pragma surface surf BlinnPhong, you are telling the Shader to utilize the BlinnPhong lighting function in the UnityCG.cginc file, so that we don't have to write that code over and over again. With your Shader compiled and no errors present, you should see a result similar to the following screenshot.

Creating a Phong Specular type

The most basic and performance-friendly Specular type is the Phong Specular effect. It is the calculation of the light direction reflecting off the surface, compared with the user's view direction. It is a very common Specular model, used in many applications from games to movies. While it isn't the most realistic in terms of accurately modeling the reflected Specular, it gives a great approximation that performs well in most situations. Plus, if your object is further away from the camera and a very accurate Specular isn't needed, this is a great way to provide a Specular effect on your Shaders. In this article, we will cover how to implement the per-vertex version of the Phong Specular effect, and also see how to implement the per-pixel version using some new parameters in the Surface Shader's Input struct. We will see the difference and discuss when and why to use these two different implementations for different situations.

Getting ready

Create a new Shader, Material, and object, and give them appropriate names so that you can find them later. Finally, attach the Shader to the Material and the Material to the object. To finish off your new scene, create a new directional light so that we can see our Specular effect as we code it.

How to do it…

1. You might be seeing a pattern at this point, but we always like to start with the most basic part of the Shader-writing process: the creation of properties. So, let's add the new properties to the Shader.
2. We then have to make sure we add the corresponding variables to our CGPROGRAM block, inside our SubShader block.
3. Now we have to add our custom lighting model to the Shader's SubShader block, so that we can compute our own Phong Specular. Don't worry if it doesn't make sense at this point; we will cover each line of code in the next section.
4. Finally, we have to tell the CGPROGRAM block that it needs to use our custom lighting function instead of one of the built-in ones. We do this by changing the #pragma statement, as in the sketch that follows.
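Here too the listings did not survive extraction. A minimal reconstruction of the whole recipe, assuming the property names _MainTint, _SpecularColor, and _SpecPower, might look like this:

    Shader "CookbookShaders/PhongSpecular" {
        Properties {
            _MainTint ("Diffuse Tint", Color) = (1,1,1,1)
            _MainTex ("Base (RGB)", 2D) = "white" {}
            _SpecularColor ("Specular Color", Color) = (1,1,1,1)
            _SpecPower ("Specular Power", Range(0.1, 30)) = 3
        }
        SubShader {
            Tags { "RenderType" = "Opaque" }

            CGPROGRAM
            // The name after "surf" must match the lighting function
            // name, minus its "Lighting" prefix
            #pragma surface surf Phong

            sampler2D _MainTex;
            float4 _MainTint;
            float4 _SpecularColor;
            float _SpecPower;

            struct Input {
                float2 uv_MainTex;
            };

            // A view-dependent custom lighting function
            inline fixed4 LightingPhong (SurfaceOutput s, fixed3 lightDir, half3 viewDir, fixed atten)
            {
                // Diffuse term: 1 facing the light, -1 facing away
                float diff = dot(s.Normal, lightDir);

                // Reflection vector: bend the normal towards the light
                float3 reflectionVector = normalize(2.0 * s.Normal * diff - lightDir);

                // Compare the reflection vector with the view direction
                float spec = pow(max(0, dot(reflectionVector, viewDir)), _SpecPower);
                float3 finalSpec = _SpecularColor.rgb * spec;

                fixed4 c;
                c.rgb = (s.Albedo * _LightColor0.rgb * max(0, diff) +
                         _LightColor0.rgb * finalSpec) * atten;
                c.a = s.Alpha;
                return c;
            }

            void surf (Input IN, inout SurfaceOutput o) {
                o.Albedo = _MainTint.rgb;
                o.Alpha = _MainTint.a;
            }
            ENDCG
        }
        FallBack "Diffuse"
    }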
The following screenshot demonstrates the result of our custom Phong lighting model, using our own custom reflection vector.

How it works…

Let's break down the lighting function by itself, as the rest of the Shader should be pretty familiar to you at this point. We start by using a lighting function signature that gives us the view direction. Remember that Unity has given you a set of lighting function signatures that you can use, but in order to use them correctly you have to provide the same arguments they specify. Refer to the following, or go to http://docs.unity3d.com/Documentation/Components/SL-SurfaceShaderLighting.html:

Not view dependent: half4 Lighting<NameYouChoose> (SurfaceOutput s, half3 lightDir, half atten);
View dependent: half4 Lighting<NameYouChoose> (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten);

In our case, we are writing a Specular Shader, so we need the view-dependent lighting function structure; this is the form used by the LightingPhong declaration in the preceding sketch, and it tells the Shader that we want to create our own view-dependent lighting function. Always make sure that your lighting function name is the same in your lighting function declaration and in the #pragma statement, or Unity will not be able to find your lighting model.

The lighting function begins by declaring the usual Diffuse component: dotting the vertex normal with the light direction, or light vector. This gives us a value of 1 when a normal on the model faces the light, and a value of -1 when it faces away from the light direction. We then calculate the reflection vector by taking the vertex normal, scaling it by 2.0 and by the diff value, and then subtracting the light direction from it. This has the effect of bending the normal towards the light; as a vertex normal points away from the light, it is forced to look back at the light. Refer to the following screenshot for a more visual representation. The script that produces this debug effect is included at the book's support page at www.packtpub.com/support. Then all we have left to do is create the final spec value and color. To do this, we dot the reflection vector with the view direction and raise the result to a power of _SpecPower. Finally, we multiply the _SpecularColor.rgb value over the spec value to get our final Specular highlight. The following screenshot displays the final result of our Phong Specular calculation, isolated out in the Shader.

Creating a BlinnPhong Specular type

Blinn is another, more efficient way of calculating and estimating specularity. It is done by getting the half vector from the view direction and the light direction. It was brought into the world of Cg by a man named Jim Blinn, who found that it was much more efficient to get the half vector instead of calculating our own reflection vectors; it cut down on both code and processing time. If you look at the built-in BlinnPhong lighting model included in the UnityCG.cginc file, you will notice that it uses the half vector as well, hence the name BlinnPhong. It is just a simpler version of the full Phong calculation.

Getting ready

This time, instead of creating a whole new scene, let's use the objects and scene we already have, and create a new Shader and Material named BlinnPhong. Once you have a new Shader, double-click on it to launch MonoDevelop, so that we can start editing it.

How to do it…

1. First, we need to add our own properties to the Properties block, so that we can control the look of the Specular highlight.
2. Then, we need to make sure that we have created the corresponding variables inside our CGPROGRAM block, so that we can access the data from our Properties block inside our SubShader.
3. Now it's time to create the custom lighting model that will process our Diffuse and Specular calculations.
4. To complete the Shader, we need to tell our CGPROGRAM block to use our custom lighting model rather than a built-in one, by modifying the #pragma statement. A reconstructed sketch of the lighting function follows.
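As with the previous recipes, only the prose survived extraction. A plausible sketch of the custom half-vector lighting function, reusing the assumed _SpecularColor and _SpecPower names and dropping into the same Shader skeleton as the Phong sketch above, is:

    inline fixed4 LightingCustomBlinnPhong (SurfaceOutput s, fixed3 lightDir, half3 viewDir, fixed atten)
    {
        // The half vector: the normalized sum of the view
        // direction and the light direction
        float3 halfVector = normalize(lightDir + viewDir);

        float diff = max(0, dot(s.Normal, lightDir));

        // Dot the normal with the half vector instead of building
        // a full reflection vector
        float nh = max(0, dot(s.Normal, halfVector));
        float3 spec = pow(nh, _SpecPower) * _SpecularColor.rgb;

        fixed4 c;
        c.rgb = (s.Albedo * _LightColor0.rgb * diff +
                 _LightColor0.rgb * spec) * atten;
        c.a = s.Alpha;
        return c;
    }

    // ...and the matching directive:
    #pragma surface surf CustomBlinnPhong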
The following screenshot demonstrates the results of our BlinnPhong lighting model.

How it works…

The BlinnPhong Specular is almost exactly like the Phong Specular, except that it is more efficient: it uses less code to achieve almost the same effect. You will find this approach used nine times out of ten in today's modern Shaders, as it is easier to code and lighter on Shader performance. Instead of calculating our own reflection vector, we simply get the vector halfway between the view direction and the light direction, essentially simulating the reflection vector. It has actually been found that this approach is more physically accurate than the previous one, but we thought it necessary to show you all the possibilities. So, to get the half vector, we simply add the view direction and the light direction together and normalize the result, as in the halfVector line of the preceding sketch. Then we dot the vertex normal with that new half vector to get our main Specular value. After that, we raise it to a power of _SpecPower and multiply it by the Specular color variable. It's much lighter on the code and much lighter on the math, but it still gives us a nice Specular highlight that will work in a lot of real-time situations.

Masking Specular with textures

Now that we have seen how to create a Specular effect for our Shaders, let's look at ways in which we can modify our Specular and gain more artistic control over its final visual quality. In this next recipe, we will look at how we can use textures to drive our Specular and Specular power attributes. The technique of using Specular textures is seen in most modern game development pipelines, because it allows the 3D artists to control the final visual effect on a per-pixel basis. This gives us a way to have a matte surface and a shiny surface all in one Shader; or we can drive the width of the Specular, or the Specular power, with another texture, to have one surface with a broad Specular highlight and another with a very sharp, tiny highlight. There are many effects one can achieve by mixing Shader calculations with textures, and giving artists the ability to control their Shader's final visual effect is key to an efficient pipeline. Let's see how we can use textures to drive our Specular lighting models. This recipe will introduce you to some new concepts, such as creating your own Input struct, and learning how the data is passed around from the Output struct, to the lighting function, to the Input struct, and to the surf() function. Understanding the flow of data between these core Surface Shader elements is central to a successful Shader pipeline.

Getting ready

We will need a new Shader, Material, and another object to apply our Shader and Material to. With the Shader and Material connected and assigned to your object in the scene, double-click the Shader to bring it up in MonoDevelop. We will also need a Specular texture to use. Any texture will do, as long as it has some nice variation in colors and patterns. The following screenshot shows the textures we are using for this recipe.

How to do it…

1. First, let's populate our Properties block with some new properties.
2. We then need to add the corresponding variables to the SubShader, just after the #pragma statement, so that we can access the data from our Properties block.
3. Now we have to add our own custom Output struct, placed just after the variables in the SubShader block. This will allow us to store more data for use between our surf() function and our lighting model. Don't worry if this doesn't make sense just yet; we will cover the finer details of this Output struct in the next section.
4. Just after the Output struct we just entered, we need to add our custom lighting model. In this case, it is a custom lighting model called LightingCustomPhong.
5. In order for our custom lighting model to work, we have to tell the SubShader block which lighting model we want to use, by adding it to the #pragma statement.
6. Since we are going to use a texture to modify the values of our base Specular calculation, we need to store another set of UVs for that texture specifically. This is done inside the Input struct, by placing the word uv in front of the name of the variable that holds the texture. Enter the Input struct just after your custom lighting model.
7. To finish off the Shader, we modify our surf() function so that it passes the texture information to our lighting model function, letting us use the pixel values of the texture to modify our Specular values in the lighting model function. A consolidated reconstruction of these pieces follows this recipe.

The following screenshot shows the result of masking our Specular calculations with a color texture and its channel information. We now have a nice variation in Specular over the entire surface, instead of just a global value for the Specular.
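Since the individual listings were not preserved in this extract, here is one possible reconstruction of the recipe's core pieces. The struct members and the _SpecularMask texture name are assumptions chosen for illustration:

    // Inside the Properties block:
    _MainTex ("Base (RGB)", 2D) = "white" {}
    _SpecularMask ("Specular Texture", 2D) = "white" {}
    _SpecularColor ("Specular Color", Color) = (1,1,1,1)
    _SpecPower ("Specular Power", Range(0.1, 30)) = 3

    CGPROGRAM
    #pragma surface surf CustomPhong

    sampler2D _MainTex;
    sampler2D _SpecularMask;
    float4 _SpecularColor;
    float _SpecPower;

    // Custom Output struct: carries the specular mask from surf()
    // into the lighting function
    struct SurfaceCustomOutput {
        fixed3 Albedo;
        fixed3 Normal;
        fixed3 Emission;
        fixed3 SpecularColor;   // filled from the mask texture
        half Specular;
        fixed Gloss;
        fixed Alpha;
    };

    inline fixed4 LightingCustomPhong (SurfaceCustomOutput s, fixed3 lightDir, half3 viewDir, fixed atten)
    {
        float diff = dot(s.Normal, lightDir);
        float3 reflectionVector = normalize(2.0 * s.Normal * diff - lightDir);

        // The per-pixel mask scales the highlight's color and intensity
        float spec = pow(max(0, dot(reflectionVector, viewDir)), _SpecPower);
        float3 finalSpec = s.SpecularColor * spec * _SpecularColor.rgb;

        fixed4 c;
        c.rgb = (s.Albedo * _LightColor0.rgb * max(0, diff) +
                 _LightColor0.rgb * finalSpec) * atten;
        c.a = s.Alpha;
        return c;
    }

    // The "uv" prefix asks Unity to fill these with each texture's UVs
    struct Input {
        float2 uv_MainTex;
        float2 uv_SpecularMask;
    };

    void surf (Input IN, inout SurfaceCustomOutput o) {
        float4 c = tex2D(_MainTex, IN.uv_MainTex);
        float4 mask = tex2D(_SpecularMask, IN.uv_SpecularMask);

        o.Albedo = c.rgb;
        o.SpecularColor = mask.rgb;   // hand the mask to the lighting model
        o.Alpha = c.a;
    }
    ENDCG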

Introducing the Building Blocks for Unity Scripts

Packt
11 Oct 2013
15 min read
(For more resources related to this topic, see here.)

Using the term method instead of function

You are constantly going to see the words function and method used everywhere as you learn Unity, and in Unity they truly mean the same thing. Since you are studying C#, and C# is an Object-Oriented Programming (OOP) language, I will use the word "method" throughout this article, just to be consistent with C# guidelines. It makes sense to learn the correct terminology for C#. Also, UnityScript and Boo are OOP languages; the authors of the Scripting Reference probably should have used the word method instead of function in all documentation. From now on I'm going to use the words method or methods in this article, and when I refer to the functions shown in the Scripting Reference, I'm going to use the word method instead, just to be consistent throughout this article.

Understanding what a variable does in a script

What is a variable? Technically, it's a tiny section of your computer's memory that will hold any information you put there. While a game runs, it keeps track of where the information is stored, the value kept there, and the type of that value. However, for this article, all you need to know is how a variable works in a script, and it's very simple. What's usually in a mailbox, besides air? Well, usually there's nothing, but occasionally there is something in it. Sometimes there's money (a paycheck), bills, a picture from aunt Mabel, a spider, and so on. The point is that what's in a mailbox can vary. Therefore, let's call each mailbox a variable instead.

Naming a variable

Using the picture of the country mailboxes, if I asked you to see what is in the mailbox, the first thing you'd ask is: which one? If I said the Smith mailbox, or the brown mailbox, or the round mailbox, you'd know exactly which mailbox to open to retrieve what is inside. Similarly, in scripts, you have to give your variables unique names. Then I can ask you what's in the variable named myNumber, or whatever cool name you might use.

A variable name is just a substitute for a value

As you write a script and make a variable, you are simply creating a placeholder, a substitute for the actual information you want to use. Look at the following simple math equation:

2 + 9 = 11

Simple enough. Now try the following equation:

11 + myNumber = ???

There is no answer to this yet; you can't add a number and a word. Going back to the mailbox analogy, write the number 9 on a piece of paper and put it in the mailbox named myNumber. Now you can solve the equation: what's the value in myNumber? The value is 9. So now the equation looks normal:

11 + 9 = 20

The myNumber variable is nothing more than a named placeholder to store some data (information). So anywhere you would like the number 9 to appear in your script, just write myNumber, and the number 9 will be substituted. Although this example might seem silly at first, variables can store all kinds of data that is much more complex than a simple number. This is just a simple example to show you how a variable works.

Time for action – creating a variable and seeing how it works

Let's see how this actually works in our script. Don't be concerned about the details of how to write this, just make sure your script is the same as the script shown in the next screenshot.

1. In the Unity Project panel, double-click on LearningScript.
2. In MonoDevelop, write lines 6, 11, and 13 from the next screenshot.
3. Save the file.
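The screenshot itself is not reproduced in this extract. Based on the line numbers referenced here and in the discussion below, the script plausibly looked something like this sketch (treat the exact line positions and log text as assumptions):

    using UnityEngine;
    using System.Collections;

    public class LearningScript : MonoBehaviour
    {
        public int myNumber = 9;          // roughly line 6: the variable

        void Start ()
        {
            Debug.Log(2 + 9);             // roughly line 11: plain numbers
            Debug.Log(11 + myNumber);     // roughly line 13: substitution at work
        }
    }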
To make this script work, it has to be attached to a GameObject. Currently, in our State Machine project we only have one GameObject, the Main Camera. This will do nicely, since this script doesn't affect the Main Camera in any way; the script simply runs by virtue of being attached to a GameObject.

1. Drag LearningScript onto the Main Camera.
2. Select Main Camera so that it appears in the Inspector panel.
3. Verify whether LearningScript is attached.
4. Open the Unity Console panel to view the output of the script.
5. Click on Play.

The preceding steps are shown in the following screenshot.

What just happened?

In the Console panel is the result of our equations. As you can see, the equation on line 13 worked by substituting the number 9 for the myNumber variable.

Time for action – changing the number 9 to a different number

Since myNumber is a variable, the value it stores can vary. If we change what is stored in it, the answer to the equation will change too. Follow the ensuing steps:

1. Stop the game and change 9 to 19.
2. Notice that when you restart the game, the answer will be 30.

What just happened?

You learned that a variable works by a simple process of substitution. There's nothing more to it than that. We didn't get into the details of the wording used to create myNumber, or the types of variables you can create, but that wasn't the intent. This was just to show you how a variable works. It just holds data so you can use that data elsewhere in your script.

Have a go hero – changing the value of myNumber

In the Inspector panel, try changing the value of myNumber to some other value, even a negative value. Notice the change in the answer in the Console.

Using a method in a script

Methods are where the action is and where the tasks are performed. Great, that's really nice to know, but what is a method?

What is a method?

When we write a script, we are making lines of code that the computer is going to execute, one line at a time. As we write our code, there will be things we want our game to execute more than once. For example, we can write code that adds two numbers. Suppose our game needs to add those two numbers together a hundred different times during the game. So you say, "Wow, I have to write the same code a hundred times that adds two numbers together. There has to be a better way." Let a method take away your typing pain. You just have to write the code to add two numbers once, and then give this chunk of code a name, such as AddTwoNumbers(). Now, every time our game needs to add two numbers, don't write the code over and over; just call the AddTwoNumbers() method.

Time for action – learning how a method works

We're going to edit LearningScript again. In the screenshot referenced below, there are a few lines of code that look strange. We are not going to get into the details of what they mean in this article; that comes later, in Getting into the Details of Methods. Right now, I am just showing you a method's basic structure and how it works:

1. In MonoDevelop, select LearningScript for editing.
2. Edit the file so that it looks exactly like the screenshot (a sketch of which follows).
3. Save the file.

What's in this script file?

In the previous screenshot, lines 6 and 7 will look familiar to you; they are variables, just as you learned in the previous section. There are two of them this time. These variables store the numbers that are going to be added. Line 16 may look very strange to you. Don't concern yourself right now with how this works. Just know that it's a line of code that lets the script know when the Return/Enter key is pressed.
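Again, the screenshot is missing from this extract; here is a sketch of what the edited script plausibly contained, with the book's line numbers noted in comments (the exact positions are assumptions):

    using UnityEngine;
    using System.Collections;

    public class LearningScript : MonoBehaviour
    {
        public int number1 = 2;           // line 6: first variable
        public int number2 = 9;           // line 7: second variable

        void Update ()
        {
            // Line 16: detect when the Return/Enter key is pressed
            if (Input.GetKeyDown(KeyCode.Return))
            {
                AddTwoNumbers();          // line 17: calling the method
            }
        }

        // Lines 20 to 23: the method definition
        void AddTwoNumbers ()
        {
            Debug.Log(number1 + number2); // line 22: the action part
        }
    }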
Press the Return/Enter key when you want to add the two numbers together. Line 17 is where the AddTwoNumbers() method gets called into action; in fact, that's exactly how to describe it: this line of code calls the method. Lines 20, 21, 22, and 23 make up the AddTwoNumbers() method. Don't be concerned about the code details yet. I just want you to understand how calling a method works.

Method names are substitutes too

You learned that a variable is a substitute for the value it actually contains. Well, a method is no different. Take a look at line 20 from the previous screenshot:

void AddTwoNumbers ()

AddTwoNumbers() is the name of the method. Like a variable, AddTwoNumbers() is nothing more than a named placeholder in the memory, but this time it stores some lines of code instead. So anywhere we would like to use the code of this method in our script, we just write AddTwoNumbers(), and the code will be substituted. Line 21 has an opening curly-brace and line 23 has a closing curly-brace. Everything between the two curly-braces is the code that is executed when this method is called in our script. Look at line 17 from the previous screenshot:

AddTwoNumbers();

The method named AddTwoNumbers() is called. This means that the code between the curly-braces is executed. It's like having all of the code of the method right there on line 17. Of course, this AddTwoNumbers() method only has one line of code to execute, but a method could have many lines of code. Line 22 is the action part of this method, the part between the curly-braces. This line of code adds the two variables together and displays the answer in the Unity Console. Then, follow the ensuing steps:

1. Go back to Unity and have the Console panel showing.
2. Now click on Play.

What just happened?

Oh no! Nothing happened! Actually, as you sit there looking at the blank Console panel, the script is running perfectly, just as we programmed it. Line 16 in the script is waiting for you to press the Return/Enter key. Press it now. And there you go! The following screenshot shows you the result of adding two variables together that contain the numbers 2 and 9. Line 16 waited for you to press the Return/Enter key. When you did, line 17 executed, which called the AddTwoNumbers() method. This allowed the code block of the method, line 22, to add the values stored in the variables number1 and number2.

Have a go hero – changing the output of the method

While Unity is in Play mode, select the Main Camera so its Components show in the Inspector. In the Inspector panel, locate Learning Script and its two variables. Change the values, currently 2 and 9, to different values. Make sure to click your mouse in the Game panel so it has focus, then press the Return/Enter key again. You will see the result of the new addition in the Console. You just learned how a method allows a specific block of code to be called to perform a task. We didn't get into any of the wording details of methods here; this was just to show you fundamentally how they work.

Introducing the class

The class plays a major role in Unity. In fact, what Unity does with a class is a little piece of magic when Unity creates Components. You just learned about variables and methods. These two items are the building blocks used to build Unity scripts. The term script is used everywhere in discussions and documents. Look it up in the dictionary and it can be generally described as written text. Sure enough, that's what we have.
However, since we aren't just writing a screenplay or passing a note to someone, we need to learn the actual terms used in programming. Unity calls the code it creates a C# script. However, people like me have to teach you some basic programming skills and tell you that a script is really a class. In the previous section about methods, we created a class (script) called LearningScript. It contained a couple of variables and a method. The main concept or idea of a class is that it's a container of data, stored in variables, and of methods that process that data in some fashion. Because I don't want to constantly write class (script), I will be using the word script most of the time. However, I will also use class when getting more specific with C#. Just remember that a script is a class that is attached to a GameObject. The State Machine classes will not be attached to any GameObjects, so I won't be calling them scripts.

By using a little Unity magic, a script becomes a Component

While working in Unity, we wear the following two hats: a Game-Creator hat and a Scripting (programmer) hat. When we wear our Game-Creator hat, we will be developing our Scene, selecting GameObjects, and viewing Components; just about anything except writing our scripts. When we put our Scripting hat on, our terminology changes as follows: we're writing code in scripts using MonoDevelop, and we're working with variables and methods. The magic happens when you put your Game-Creator hat back on and attach your script to a GameObject. Wave the magic wand, ZAP, and the script file is now called a Component, and the public variables of the script are now the properties you can see and change in the Inspector panel.

A more technical look at the magic

A script is like a blueprint or a written description. In other words, it's just a single file in a folder on our hard drive. We can see it right there in the Project panel. It can't do anything just sitting there. When we tell Unity to attach it to a GameObject, we haven't created another copy of the file; all we've done is tell Unity we want the behaviors described in our script to be a Component of the GameObject. When we click on the Play button, Unity loads the GameObject into the computer's memory. Since the script is attached to a GameObject, Unity also has to make a place in the computer's memory to store a Component as part of the GameObject. The Component has the capabilities specified in the script (blueprint) we created.

Even more Unity magic

There's some more magic you need to be aware of: scripts inherit from MonoBehaviour. For beginners to Unity, studying C# inheritance isn't a subject you need to learn in any great detail, but you do need to know that each Unity script uses inheritance. We see the code in every script that will be attached to a GameObject. In LearningScript, the code is on line 4:

public class LearningScript : MonoBehaviour

The colon and the last word of that line mean that the LearningScript class is inheriting behaviors from the MonoBehaviour class. This simply means that the MonoBehaviour class is making a few of its variables and methods available to the LearningScript class. It's no coincidence that the variables and methods inherited look just like some of the code we saw in the Unity Scripting Reference. The following are the two inherited behaviors in LearningScript:

Line 9: void Start ()
Line 14: void Update ()

The magic is that you don't have to call these methods; Unity calls them automatically.
So the code you place in these methods gets executed automatically.

Have a go hero – finding Start and Update in the Scripting Reference

Try a search in the Scripting Reference for Start and Update to learn when each method is called by Unity and how often. Also search for MonoBehaviour. This will show you that, since our script inherits from MonoBehaviour, we are able to use the Start() and Update() methods.

Components communicating using the Dot Syntax

Our script has variables to hold data, and our script has methods to allow tasks to be performed. I now want to introduce the concept of communicating with other GameObjects and the Components they contain. Communication between one GameObject's Components and another GameObject's Components using Dot Syntax is a vital part of scripting; it's what makes interaction possible. We need to communicate with other Components or GameObjects to be able to use the variables and methods in other Components.

What's with the dots?

When you look at code written by others, you'll see words with periods separating them. What the heck is that? It looks complicated, doesn't it? The following is an example from the Unity documentation:

transform.position.x

Don't concern yourself with what the preceding code means, as that comes later; I just want you to see the dots. That's called the Dot Syntax. The following is another example. It's the fictitious address of my house:

USA.Vermont.Essex.22MyStreet

Looks funny, doesn't it? That's because I used the syntax (grammar) of C# instead of the post office. However, I'll bet if you look closely, you can easily figure out how to find my house.

Summary

This article introduced you to the basic concepts of variables, methods, and the Dot Syntax. These building blocks are used to create scripts and classes. Understanding how these building blocks work is critical so that you don't feel lost later. We discovered that a variable name is a substitute for the value it stores; a method name is a substitute for a block of code; and when a script or class is attached to a GameObject, it becomes a Component. The Dot Syntax is just like an address, used to locate GameObjects and Components. With these concepts under your belt, we can proceed to learn the details of the sentence structure, the grammar, and the syntax used to work with variables, methods, and the Dot Syntax.

Resources for Article:

Further resources on this subject:
Debugging Multithreaded Applications as Singlethreaded in C# [Article]
Simplifying Parallelism Complexity in C# [Article]
Unity Game Development: Welcome to the 3D world [Article]

Let There be Light!

Packt
04 Oct 2013
8 min read
(For more resources related to this topic, see here.)

Basic light sources

You use lights to give a scene brightness, ambience, and depth; without light, everything looks flat and dull. Use additional light sources to even out lighting and to set up interior scenes. In Unity, lights are components of GameObjects. The different kinds of light sources are as follows:

Directional lights: These lights are commonly used to mimic the sun. Their position is irrelevant, as only their orientation matters. Every architectural scene should have at least one main Directional light. When you only need to light up an interior room, they are trickier to use, as they tend to brighten the whole scene, but they help get some light through the windows and into the project. We'll see a few use cases in the next few sections.

Point lights: These lights are easy to use, as they emit light in every direction. Try to minimize their Range, so they don't spill light into other places. In most scenes, you'll need several of them to balance out dark spots and corners and to even out the overall lighting.

Spot lights: These lights only emit light in a cone and are good for simulating interior light fixtures. They cast a distinct, bright circular spot of light, so use them to highlight something.

Area lights: These are the most advanced lights, as they allow a light source to be given an actual rectangular size. This results in smoother lights and shadows, but their effect is only visible when baking, and they require a pro-license. They are good for simulating light panels or the effect of light coming in through a window. In the free version, you can simulate them using multiple Spot or Directional lights.

Shadows

Most current games support some form of shadows. They can be pre-calculated or rendered in real time. Pre-calculation implies that the effect of shadows and lighting is calculated in advance and rendered onto an additional material layer; it only makes sense for objects that don't move in the scene. Real-time shadows are rendered using the GPU, but can be computationally expensive and should only be used for dynamic lighting. You might be familiar with real-time shadows from applications such as SketchUp and recent versions of ArchiCAD or Revit. Ideally, both techniques are combined: the overall lighting of the scene (for example, buildings, streets, interiors, and so on) is pre-calculated and baked into texture maps, while additional real-time shadows are used on the moving characters. Unity can blend both types of shadows to simulate dynamic lighting in large scenes. Some of these techniques, however, are only supported in the pro-version.

Real-time shadows

Imagine we want to create a sun or shadow study of a building. This is best appreciated in real time and by looking from the outside. We will use the same model as we did in the previous article, but load it in a separate scene. We want a light object acting as a sun, a spherical object acting as a visual clue to where the sun is positioned, and a link between them to control the rotations in an easy way. The steps to achieve this are as follows:

1. Add a Directional light, name it SunLight, and choose the Shadow Type. Hard shadows are more sharply defined and are the best choice in this example, whereas Soft shadows look smoother and are better suited for subtle mood lighting.
2. Add an empty GameObject by navigating to GameObject | Create Empty, position it at the center of the scene, and name it ORIGIN.
3. Create a Sphere GameObject by navigating to GameObject | Create Other | Sphere and name it VisualSun. Make it a child of ORIGIN by dragging the VisualSun name in the Hierarchy tab onto the ORIGIN name. Give it a bright, yellow material, using a Self-Illumin/Diffuse Shader, and deactivate Cast Shadows and Receive Shadows on the Mesh Renderer component.
4. After you have placed the VisualSun as a child of the origin object, reset the position of the Sphere to 0 for X, Y, and Z. It now sits in the same place as its parent. Even if you move the parent, its local position stays at X=0, Y=0, and Z=0, which makes it convenient for placement relative to its parent object. Alter the Z-position to define an offset from the origin, for example -20 units; the negative Z will facilitate the SunLight orientation in the next step.
5. The SunLight can be dragged onto the VisualSun and its local position reset to zero as well. When all rotations are also zero, it emits light down the Z-axis and thus straight at the origin.
6. If you want a nice glow effect, you can add a Halo by navigating to Components | Effects | Halo, then to SunLight, and setting a suitable Size.

We now have a hierarchic structure of the origin, the visible sphere, and the Directional light, accentuated by the halo. We can adjust this assembly by rotating the origin: rotating around the Y-axis defines the orientation of the sun, whereas a rotation around the X-axis defines the azimuth. With these two rotations, we can position the sun wherever we want.
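The article drives these rotations by hand in the editor, but a small script can make the sun study repeatable. The following helper is a hypothetical addition, not part of the original text: attached to ORIGIN, it sweeps the Y-rotation to animate the sun across a day while the X-rotation sets the sun's height:

    using UnityEngine;

    public class SunStudy : MonoBehaviour
    {
        public float azimuth = 45f;           // degrees around the X-axis
        public float dayLengthSeconds = 30f;  // seconds per full sun cycle

        float orientation;                    // degrees around the Y-axis

        void Update ()
        {
            // Sweep one full revolution every dayLengthSeconds
            orientation += (360f / dayLengthSeconds) * Time.deltaTime;
            transform.rotation = Quaternion.Euler(azimuth, orientation, 0f);
        }
    }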
Lightmapping

Real-time lighting is computationally very expensive. If you don't have the latest hardware, it might not even be supported; or you might avoid it for a mobile app, where hardware resources are limited. It is possible to pre-calculate the lighting of a scene and bake it onto the geometry as textures. This process is called Lightmapping; for more information on it, visit http://docs.unity3d.com/Documentation/Manual/Lightmapping.html. While the actual calculations are rather complex, the process in Unity is made easy thanks to the integrated Beast Lightmapping. There are a few things you need to set up properly:

1. First, ensure that any object that needs to be baked is set to Static. Each GameObject has a static-toggle, right next to the Name property. Activate this for all models and light objects that will not move in the Scene.
2. Secondly, ensure that all geometry has a second set of texture coordinates, called UV2 coordinates in Unity. Default GameObjects have those set up, but for imported models they usually need to be added. Luckily for us, this is automated when Generate Lightmap UVs is activated in the model import settings, given earlier in Quick Walk Around Your Design, in the section entitled Controlling the import settings.

If all lights and meshes are static and UV2 coordinates are calculated, you are ready to go. Open the Lightmapping dialog by navigating to Window | Lightmapping and dock it somewhere convenient. There are several settings, but we start with a basic setup that consists of the following steps:

1. Usually a Single Lightmap suffices. Dual Lightmaps can look better, but require the deferred rendering method that is only supported in Unity Pro.
2. Choose the Quality High mode. Quality Low gives jagged edges and is only used for quick testing.
3. Activate Ambient Occlusion as a quick additional rendering step that darkens corners and occluded areas, such as where objects touch. This adds a real sense of depth and is highly recommended. Set both sliders somewhere in the middle and leave the distance at 0.1, to control how far the system will look to detect occlusions.
4. Start with a fairly low Resolution, such as 5 or 10 texels per world unit. This defines how detailed the calculated Lightmap texture is when compared to the geometry. Look at the Scene view to get a checkered overlay, visible when adjusting Lightmapping settings. For final results, increase this to 40 or 50, to give more detail to the shadows at the cost of longer rendering times.

There are additional settings for which Unity Pro is required, such as Sky Light and Bounced Lighting. They both add to the realism of the lighting, so they are highly recommended for architectural visualization if you have the pro-version. On the Object sub-tab, you can also tweak the shadow calculation settings for individual lights. By increasing the radius, you get a smoother shadow edge, at the cost of longer rendering times. If you increase the radius, you should also increase the number of samples, which helps reduce the noise that gets added with sampling. This is shown in the following screenshot.

Now you can go on and click Bake Scene. It can take quite some time for large models and fine resolutions; check the blue time indicator on the right side of the status bar (you can continue working in Unity in the meantime). After the calculations are finished, the model is reloaded with the new texture, and baked shadows are visible in the Scene and Game views, as shown in the following screenshot. Beware that to bake a Scene, it needs to be saved and given a name, as Unity places the calculated Lightmap textures in a subfolder with the same name as the Scene.

Summary

In this article we learned about the use of different light sources and shadows. To avoid the heavy burden of real-time shadows, we discussed using the Lightmapping technique to bake lights and shadows onto the model, from within Unity.

Resources for Article:

Further resources on this subject:
Unity Game Development: Welcome to the 3D world [Article]
Introduction to Game Development Using Unity 3D [Article]
Unity 3-0 Enter the Third Dimension [Article]

Miscellaneous Gameplay Features

Packt
01 Mar 2013
13 min read
(For more resources related to this topic, see here.)

How to have a sprinting player use up energy

Torque 3D's Player class has three main modes of movement over land: sprinting, running, and crouching. Some game designs allow a player to sprint as much as they want, perhaps with other limitations while sprinting; this is the default method of sprinting in the Torque 3D templates. Other game designs allow the player to sprint only in short bursts before the player becomes "tired". In this recipe, we will learn how to set up the Player class such that sprinting uses up a pool of energy that slowly recharges over time; when that energy is depleted, the player is no longer able to sprint.

How to do it...

We are about to modify a PlayerData Datablock instance so that sprinting uses up the player's energy:

1. Open your player's Datablock in a text editor, such as Torsion. The Torque 3D templates have the DefaultPlayerData Datablock instance in art/datablocks/player.cs.
2. Find the sprinting section of the Datablock instance and make the following changes:

    sprintForce = 4320;
    sprintEnergyDrain = 0.6;   // Sprinting now drains energy
    minSprintEnergy = 10;      // Minimum energy to sprint
    maxSprintForwardSpeed = 14;
    maxSprintBackwardSpeed = 8;
    maxSprintSideSpeed = 6;
    sprintStrafeScale = 0.25;
    sprintYawScale = 0.05;
    sprintPitchScale = 0.05;
    sprintCanJump = true;

3. Start up the game and have the player sprint. Sprinting should now be possible for about 5.5 seconds before the player falls back to a run. If the player stops sprinting for about 7.5 seconds, their energy will be fully recharged and they will be able to sprint again.

How it works...

The maxEnergy property on the PlayerData Datablock instance determines the maximum amount of energy a player has; all of Torque 3D's templates set it to a value of 60. This energy may be used for a number of different activities (such as jet jumping), and even certain weapons may draw from it. By setting the sprintEnergyDrain property on the PlayerData Datablock instance to a value greater than zero, the player's energy will be drained by that amount every tick (about one-thirty-second of a second). When the player's energy reaches zero, they may no longer sprint and revert back to running. Using our previous example, a sprintEnergyDrain of 0.6 units per tick works out to 19.2 units per second. Given that our DefaultPlayerData maxEnergy property is 60 units, we should run out of sprint energy in 3.125 seconds. However, we were able to sprint for about 5.5 seconds in our example before running out of energy. Why is this?

A second PlayerData property affects energy use over time: rechargeRate. This property determines how much energy is restored to the player per tick, and is set to 0.256 units in DefaultPlayerData. When we take both the sprintEnergyDrain and rechargeRate properties into account, we end up with an effective drain of (0.6 - 0.256) = 0.344 units per tick while sprinting. Assuming the player begins with the maximum amount of energy allowed by DefaultPlayerData, this works out to 60 units / (0.344 units per tick * 32 ticks per second), or about 5.45 seconds.

The final PlayerData property that affects sprinting is minSprintEnergy. This property determines the minimum player energy level required before being able to sprint. When this property is greater than zero, it means that a player may continue to sprint until their energy is zero, but cannot sprint again until they have regained a minSprintEnergy amount of energy.

There's more...

Let's continue our discussion of player sprinting and energy use.

Balance energy drain versus recharge rate

With everything set up as described previously, every tick the player is sprinting, their energy pool is reduced by the value of the sprintEnergyDrain property of PlayerData and increased by the value of the rechargeRate property. This means that in order for the player's energy to actually drain, the sprintEnergyDrain property must be greater than the rechargeRate property. As a player's energy may be used for other gameplay elements (such as jet jumping or weapons fire), sometimes we may forget this relationship while tuning the rechargeRate property, and end up breaking a player's ability to sprint (or make them sprint far too long). The console helper sketched below makes this relationship concrete.
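This hypothetical TorqueScript helper (not part of the original recipe) prints how long a player can sprint with the current DefaultPlayerData values, using the same arithmetic as above:

    // Print the sprint duration implied by the datablock's values.
    function printSprintDuration()
    {
       %drain = DefaultPlayerData.sprintEnergyDrain - DefaultPlayerData.rechargeRate;
       if (%drain <= 0)
       {
          echo("Energy recharges as fast as sprinting drains it: unlimited sprint.");
          return;
       }

       // The engine processes 32 ticks per second
       %seconds = DefaultPlayerData.maxEnergy / (%drain * 32);
       echo("Sprint lasts roughly " @ %seconds @ " seconds.");
    }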
Modifying other sprint limitations

The way the DefaultPlayerData Datablock instance is set up in all of Torque 3D's templates, there are already limitations placed on sprinting without making use of an energy drain. These include not being able to rotate the player as quickly as when running, and limited strafing ability. Making sprinting rely on the amount of energy a player has is often enough of a limitation, and the other default limitations may be removed or reduced; in the end, it depends on the type of game we are making. To change how much the player is allowed to rotate while sprinting, we modify the sprintYawScale and sprintPitchScale properties of the PlayerData Datablock instance. These two properties represent the fraction of rotation allowed while sprinting compared with running, and both default to 0.05. To change how much the player is allowed to strafe while sprinting, we modify the sprintStrafeScale property of the PlayerData Datablock instance. This property is the fraction of the strafing movement allowed while running, and defaults to 0.25.

Disabling sprint

During a game we may want to disable a player's sprinting ability. Perhaps they are too injured, or are carrying too heavy a load. To allow or disallow sprinting for a specific player, we call the following Player class method on the server:

    Player.allowSprinting( allow );

In the previous code, the allow parameter is set to true to allow a player to sprint, and to false to not allow a player to sprint at all. This method is used by the standard weapon-mounting system in scripts/server/weapon.cs to disable sprinting. If the ShapeBaseImageData Datablock instance for the weapon has a dynamic property of sprintDisallowed set to true, the player may not sprint while holding that weapon. The DeployableTurretImage Datablock instance makes use of this by not allowing the player to sprint while holding a turret.

Enabling and disabling air control

Air control is a fictitious force used by a number of games that allows a player to control their trajectory while falling or jumping in the air. Instead of just falling or jumping and hoping for the best, it allows the player to change course as necessary, trading realism for playability. We can find this type of control in first-person shooters, platformers, and adventure games. In this recipe, we will learn how to enable or disable air control for a player, as well as limit its effect while in use.

How to do it...
We are about to modify a PlayerData Datablock instance to enable complete air control:

1. Open your player's Datablock in a text editor, such as Torsion. The Torque 3D templates have the DefaultPlayerData Datablock instance in art/datablocks/player.cs.
2. Find the section of the Datablock instance that contains the airControl property and make the following change:

    jumpForce = "747";
    jumpEnergyDrain = 0;
    minJumpEnergy = 0;
    jumpDelay = "15";

    // Set to maximum air control
    airControl = 1.0;

3. Start up the game and jump the player off a building or a sand dune. While in the air, press one of the standard movement keys: W, A, S, and D. We now have full trajectory control of the player while they are in the air, as if they were running.

How it works...

If the player is not in contact with any surface and is not swimming, the airControl property of PlayerData is multiplied against the player's requested direction of travel. This multiplication only happens along the world's XY plane and does not affect vertical motion. Setting the airControl property of PlayerData to a value of 0 disables all air control. Setting the airControl property to a value greater than 1 causes the player to move faster in the air than they can run.

How to jump jet

In game terms, a jump jet is often a backpack, a helicopter hat, or a similar device that a player wears, which provides a short thrust upwards and often uses up a limited energy source. This allows a player to reach a height they normally could not, jump a canyon, or otherwise get out of danger or reach a reward. In this recipe, we will learn how to allow a player to jump jet.

Getting ready

We will be making TorqueScript changes in a project based on the Torque 3D Full template, using the Empty Terrain level. If you haven't already, use the Torque Project Manager (Project Manager.exe) to create a new project from the Full template. It will be found under the My Projects directory. Then start up your favorite script editor, such as Torsion, and let's get going!

How to do it...

We are going to modify the player's Datablock instance to allow for jump jetting, and adjust how the user triggers the jump jet:

1. Open the art/datablocks/player.cs file in your text editor.
2. Find the DefaultPlayerData Datablock instance and, just below the section on jumping and air control, add the following code:

    // Jump jet
    jetJumpForce = 500;
    jetJumpEnergyDrain = 3;
    jetMinJumpEnergy = 10;

3. Open scripts/main.cs and make the following addition to the onStart() function:

    function onStart()
    {
       // Change the jump jet trigger to match a regular jump
       $player::jumpJetTrigger = 2;

       // The core does initialization which requires some of
       // the preferences to be loaded... so do that first.
       exec( "./client/defaults.cs" );
       exec( "./server/defaults.cs" );

       Parent::onStart();
       echo("\n--------- Initializing Directory: scripts ---------");

       // Load the scripts that start it all...
       exec("./client/init.cs");
       exec("./server/init.cs");

       // Init the physics plugin.
       physicsInit();

       // Start up the audio system.
       sfxStartup();

       // Server gets loaded for all sessions, since clients
       // can host in-game servers.
       initServer();

       // Start up in either client, or dedicated server mode
       if ($Server::Dedicated)
          initDedicated();
       else
          initClient();
    }

4. Start our Full template game and load the Empty Terrain level. Hold down the Space bar to cause the player to fly straight up for a few seconds. The player will then fall back to the ground.
Once the player has regained enough energy, it will be possible to jump jet again.

How it works...

The only property that is required for jump jetting to work is the jetJumpForce property of the PlayerData Datablock instance. This property determines the continuous force applied to the player object to send them flying up into the air. It takes some trial and error to determine what force works best. Other useful Datablock properties are jetJumpEnergyDrain and jetMinJumpEnergy. These two PlayerData properties make jump jetting use up a player's energy; when the energy runs out, the player may no longer jump jet until enough energy has recharged. The jetJumpEnergyDrain property is how much energy per tick is drained from the player's energy pool, and the jetMinJumpEnergy property is the minimum amount of energy the player needs in their energy pool before they can jump jet again. Please see the How to have a sprinting player use up energy recipe for more information on managing a player's energy use.

Another change we made in our previous example is to define which move input trigger number will cause the player to jump jet. This is defined using the global $player::jumpJetTrigger variable. By default, this is set to trigger 1, which is usually the same as the right mouse button. However, all of the Torque 3D templates make use of the right mouse button for view zooming (as defined in scripts/client/default.bind.cs). In our previous example, we modified the global $player::jumpJetTrigger variable to use trigger 2, which is usually the same as for regular jumping, as defined in scripts/client/default.bind.cs:

    function jump(%val)
    {
       // Touch move trigger 2
       $mvTriggerCount2++;
    }
    moveMap.bind( keyboard, space, jump );

This means that we now have jump jetting working off the same key binding as regular jumping, which is the Space bar. Holding down the Space bar will now cause the player to jump jet, unless they do not have enough energy to do so; without enough energy, the player will just do a regular jump with their legs.

There's more...

Let's continue our discussion of using a jump jet.

Jump jet animation sequence

If the shape used by the Player object has a Jet animation sequence defined, it will play while the player is jump jetting. This sequence plays instead of all other action sequences. The hierarchy, or order, of action sequences that the Player class uses to determine which one to play is as follows:

1. Jump jetting
2. Falling
3. Swimming
4. Running (known internally as the stand pose)
5. Crouching
6. Prone
7. Sprinting

Disabling jump jetting

During a game we may no longer want to allow a player to jump jet. Perhaps they have run out of fuel, or they have removed the device that allowed them to jump jet. To allow or disallow jump jetting for a specific player, we call the following Player class method on the server:

    Player.allowJetJumping( allow );

In the previous code, the allow parameter is set to true to allow a player to jump jet, and to false to not allow them to jump jet at all.

More control over the jump jet

The PlayerData Datablock instance has some additional properties to fine-tune a player's jump jet capability. The first is the jetMaxJumpSpeed property. This property determines the maximum vertical speed at which the player may use their jump jet; if the player is moving upwards faster than this, they may not engage their jump jet. The second is the jetMinJumpSpeed property.
This property is the minimum vertical speed of the player before a speed multiplier is applied. If the player's vertical speed is between jetMinJumpSpeed and jetMaxJumpSpeed, the applied jump jet speed is scaled up by a relative amount. This helps ensure that the jump jet will always make the player move faster than their current speed, even if the player's current vertical speed is the result of some other event (such as being thrown by an explosion).

Summary

These recipes will help you fully utilize Torque 3D's gameplay features and make your game more interesting and powerful. The tips and tricks mentioned in the recipes will surely help in making your game more real, more fun to play, and much more intriguing.

Resources for Article:

Further resources on this subject:
Creating and Warping 3D Text with Away3D 3.6 [Article]
Retopology in 3ds Max [Article]
Applying Special Effects in 3D Game Development with Microsoft Silverlight 3: Part 1 [Article]

Metal API: Get closer to the bare metal with Metal API

Packt
16 Feb 2016
7 min read
The Metal framework supports 3D graphics rendering and other data computing commands. Metal is used in game development to reduce CPU overhead. In this article, we'll cover:

- CPU/GPU framework levels
- Graphics pipeline overview

(For more resources related to this topic, see here.)

The Apple Metal API and the graphics pipeline

One of the rules, if not the golden rule, of modern video game development is to keep our games running constantly at 60 frames per second or greater. When developing for VR devices and applications, this is of even more importance, as dropped frame rates can lead to a sickening, game-ending experience for the player. In the past, being lean was the name of the game; hardware limitations restricted not only how much could be drawn to the screen but also how much memory a game could hold. This limited the number of scenes, characters, effects, and levels. Game development was built more with an engineering mindset, so developers made things work with what little they had. Many of the games on 8-bit systems and earlier had levels and characters that differed only because of elaborate sprite slicing and recoloring. Over time, advances in hardware, particularly in GPUs, allowed for richer graphical experiences. This led to the advent of computation-heavy 3D models, real-time lighting, robust shaders, and other effects that we can use to give our games an even greater player experience, all while trying to stuff it into that precious 0.016666-second (60 Hz) window. To get everything out of the hardware, and to resolve the clash between a designer's desire to create the best-looking experience and the engineering reality of hardware limitations in even today's CPUs/GPUs, Apple developed the Metal API.

CPU/GPU framework levels

Metal is what's known as a low-level GPU API. When we build our games on the iOS platform, there are different levels between the machine code in our GPU/CPU hardware and what we use to design our games. This goes for any piece of computer hardware we work with, be it Apple's or others'. For example, on the CPU side of things, at the very base of it all is the machine code. The next level up is the assembly language of the chipset. Assembly language differs based on the CPU chipset and allows the programmer to be as detailed as choosing the individual registers to swap data in and out of in the processor. Just a few lines of a for loop in C/C++ would take a good number of lines to code in assembly. The benefit of working in the lower levels of code is that we can make our games run much faster. However, most mid-to-upper-level languages/APIs are made to work well enough that this isn't a necessity anymore. Game developers continued to code in assembly well after the very early days of game development. In the late 1990s, the game developer Chris Sawyer created his game, RollerCoaster Tycoon™, almost entirely in x86 assembly language! Assembly can be a great challenge for any enthusiastic developer who loves to tinker with the inner workings of computer hardware. Moving up the chain, we have where C/C++ code would be, and just above that is where we'd find Swift and Objective-C code. Languages such as Ruby and JavaScript, which some developers can use in Xcode, are yet another level up. That covers the CPU; now on to the GPU. The Graphics Processing Unit (GPU) is the coprocessor that works with the CPU to perform the calculations for the visuals we see on the screen.
The following diagram shows the GPU, the APIs that work with the GPU, and possible iOS games that can be made based on which framework/API is chosen. Like the CPU, the lowest level is the processor's machine code. To work as close to the GPU's machine code as possible, many developers would use Silicon Graphics' OpenGL API. For mobile devices, such as the iPhone and iPad, it would be the OpenGL subset, OpenGL ES. Apple provides a helper framework/library for OpenGL ES named GLKit. GLKit helps simplify some of the shader logic and lessens the manual work that goes into working with the GPU at this level. For many game developers, this was originally practically the only option for making 3D games for the iOS device family, though iOS's Core Graphics, Core Animation, and UIKit frameworks were perfectly fine for simpler games. Not too long into the lifespan of the iOS device family, third-party frameworks came into play that were aimed at game development. Using OpenGL ES as its base, and thus sitting directly one level above it, is the Cocos2D framework. This was actually the framework used in the original release of Rovio's Angry Birds™ series of games back in 2009. Eventually, Apple realized how important gaming was for the success of the platform and made their own game-centric frameworks, that is, the SpriteKit and SceneKit frameworks. They too, like Cocos2D/3D, sit directly above OpenGL ES. When we made SKSpriteNode or SCNNode objects in our Xcode projects, up until the introduction of Metal, OpenGL operations were used to draw these objects in the update/render cycle behind the scenes. As of iOS 9, SpriteKit and SceneKit use Metal's rendering pipeline to process graphics to the screen. If the device is older, they revert to OpenGL ES as the underlying graphics API.

Graphics pipeline overview

Let's take a look at the graphics pipeline to get an idea, at least at an upper level, of what the GPU is doing during a single rendered frame. We can imagine the graphical data of our games being divided into two main categories:

- Vertex data: This is the position information of where on the screen this data can be rendered. Vector/vertex data can be expressed as points, lines, or triangles. Remember the old saying about video game graphics: "everything is a triangle." All of those polygons in a game are just a collection of triangles via their point/vector positions. The GPU's Vertex Processing Unit (VPU) handles this data.
- Rendering/pixel data: Controlled by the GPU's Rasterizer, this is the data that tells the GPU how the objects, positioned by the vertex data, will be colored/shaded on the screen. For example, this is where color channels, such as RGB and alpha, are handled. In short, it's the pixel data and what we actually see on the screen.

Here's a diagram showing the graphics pipeline overview: The graphics pipeline is the sequence of steps it takes to have our data rendered to the screen. The previous diagram is a simplified example of this process. Here are the main sections that can make up the pipeline:

- Buffer objects: These are known as Vertex Buffer Objects in OpenGL and are of the class MTLBuffer in the Metal API. These are the objects we create in our code that are sent from the CPU to the GPU for primitive processing. These objects contain data such as positions, normal vectors, alphas, colors, and more (a minimal sketch of creating one follows this list).
- Primitive processing: These are the steps in the GPU that take our buffer objects, break down the various vertex and rendering data in those objects, and then draw this information to the frame buffer, which is the screen output we see on the device.
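To make the buffer objects idea concrete, here is a minimal, hypothetical Swift sketch, not taken from this article's project, that wraps a triangle's vertex positions in an MTLBuffer using current Swift API naming. The vertex values and empty buffer options are illustrative assumptions:

    import Metal

    // Acquire the GPU device (Metal requires a Metal-capable device)
    let device = MTLCreateSystemDefaultDevice()!

    // Interleaved x, y, z positions for a single triangle
    let vertexData: [Float] = [
         0.0,  0.5, 0.0,
        -0.5, -0.5, 0.0,
         0.5, -0.5, 0.0
    ]

    // Copy the CPU-side array into a GPU-visible buffer object
    let vertexBuffer = device.makeBuffer(
        bytes: vertexData,
        length: vertexData.count * MemoryLayout<Float>.size,
        options: []
    )

This buffer would then be bound to a render command encoder during the draw phase, which is where primitive processing takes over.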
Before we go over the steps of primitive processing in Metal, we should first understand the history and basics of shaders.

Summary

This article gave us a working knowledge of CPU/GPU framework levels and the graphics pipeline. We also learned that Apple developed the Metal API to overcome hardware limitations in even today's CPUs/GPUs. To learn more about iOS game development, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

- iOS Game Development By Example: https://www.packtpub.com/game-development/ios-game-development-example
- Sparrow iOS Game Framework Beginner's Guide: https://www.packtpub.com/game-development/sparrow-ios-game-framework-beginner%E2%80%99s-guide

Resources for Article:

Further resources on this subject:
- Android and iOS Apps Testing at a Glance [article]
- Signing up to be an iOS developer [article]
- Introduction to GameMaker: Studio [article]

Using the Mannequin editor

Packt
07 Sep 2015
14 min read
In this article, Richard Marcoux, Chris Goodswen, Riham Toulan, and Sam Howels, the authors of the book CRYENGINE Game Development Blueprints, will take us through animation in CRYENGINE. In the past, animation states were handled by a tool called Animation Graph. This is akin to Flow Graph, but it handled animations and transitions for all animated entities, and unfortunately reduced any transitions or variation in the animations to a spaghetti graph. Thankfully, we now have Mannequin! This is an animation system where the methods by which animation states are handled are all dealt with behind the scenes—all we need to take care of are the animations themselves. In Mannequin, an animation and its associated data is known as a fragment. Any extra detail that we might want to add (such as animation variation, styles, or effects) can be very simply layered on top of the fragment in the Mannequin editor. While complex and detailed results can be achieved with all manner of first and third person animation in Mannequin, for level design we're only really interested in the basic fragments we want our NPCs to play for flavor and readability within level scripting. Before we look at generating some new fragments, we'll start off by looking at how we can add detail to an existing fragment—triggering a flare particle as part of our flare firing animation.

(For more resources related to this topic, see here.)

Getting familiar with the interface

First things first, let's open Mannequin! Go to View | Open View Pane | Mannequin Editor. This is initially quite a busy view pane, so let's get our bearings on what's important to our work. You may want to drag and adjust the sizes of the windows to better see the information displayed. In the top left, we have the Fragments window. This lists all the fragments in the game that pertain to the currently loaded preview. Let's look at what this means for us when editing fragment entries.

The preview workflow

A preview is a complete list of fragments that pertains to a certain type of animation. For example, the default preview loaded is sdk_playerpreview1p.xml, which contains all the first person fragments used in the SDK. You can browse the list of fragments in this window to get an idea of what this means—everything from climbing ladders to sprinting is defined as a fragment. However, we're interested in the NPC animations. To change the currently loaded preview, go to File | Load Preview Setup and pick sdk_humanpreview.xml. This is the XML file that contains all the third person animations for human characters in the SDK. Once this is loaded, your fragment list should update to display a larger list of fragments usable by AI. This is shown in the following screenshot:

If you don't want to perform this step every time you load Mannequin, you can change the default preview setup for the editor in the preferences. Go to Tools | Preferences | Mannequin | General and change the Default Preview File setting to the XML of your choice.

Working with fragments

Now that we have the correct preview populating our fragment list, let's find our flare fragment. In the box with <FragmentID Filter> in it, type flare and press Enter. This will filter down the list, leaving you with the fireFlare fragment we used earlier. You'll see that the fragment is composed of a tree. Expanding this tree one level brings us to the tag. A tag in Mannequin is a method of choosing animations within a fragment based on a game condition.
For example, in the player preview we were in earlier, the begin_reload fragment has two tags: one for SDKRifle and one for SDKShotgun. Depending on the weapon selected by the player, it applies a different tag and consequently picks a different animation. This allows animators to group together animations of the same type that are required in different situations. For our fireFlare fragment, as there are no differing scenarios of this type, it simply has a <default> tag. This is shown in the following screenshot:

Inside this tag, we can see there's one fragment entry: Option 1. These are the possible variations that Mannequin will choose from when the fragment is chosen and the required tags are applied. We only have one variation within fireFlare, but other fragments in the human preview (for example, IA_talkFunny) offer extra entries to add variety to AI actions. To load this entry for further editing, double-click Option 1. Let's get to adding that flare!

Adding effects to fragments

After loading the fragment entry, the Fragment Editor window has now updated. This is the main window in the center of Mannequin, and it comprises a preview window to view the animation and a list of all the available layers and details we can add. The main piece of information currently visible here is the animation itself, shown in AnimLayer under FullBody3P.

At the bottom of the Fragment Editor window, some buttons are available that are useful for editing and previewing the fragment. These include a play/pause toggle (along with a playspeed dropdown) and a jump-to-start button. You are also able to zoom in and out of the timeline with the mouse wheel, and scrub the timeline by click-dragging the red timeline marker around the fragment. These controls are similar to the Track View cinematics tool and should be familiar if you've used it in the past.

Procedural layers

Here, we are able to add our particle effect to the animation fragment. To do this, we need to add a ProcLayer (procedural layer) to the FullBody3P section. The ProcLayer runs parallel to the AnimLayer and is where any extra layers of detail that fragments can contain are specified, from removing character collision to attaching props. For our purposes, we need to add a particle effect clip. To do this, double-click on the timeline within the ProcLayer. This will spawn a blank proc clip for us to categorize. Select this clip, and the Procedural Clip Properties on the right-hand side of the Fragment Editor window will be populated with a list of parameters. All we need to do now is change the type of this clip from None to ParticleEffect, which is editable in the Type dropdown list. This should present us with a ParticleEffect proc clip visible in the ProcLayer alongside our animation, as shown in the following screenshot:

Now that we have our proc clip loaded with the correct type, we need to specify the effect. The SDK has a couple of flare effects in the particle libraries (searchable by going to RollupBar | Objects Tab | Particle Entity); I'm going to pick explosions.flare.a. To apply this, select the proc clip and paste your chosen effect name into the Effect parameter. If you now scrub through the fragment, you should see the particle effect trigger! However, currently the effect fires from the base of the character in the wrong direction. We need to align the effect to the weapon of the enemy. Thankfully, the ParticleEffect proc clip already has support for this in its properties. In the Reference Bone parameter, enter weapon_bone and hit Enter.
The weapon_bone is the generic bone name that characters' weapons are attached to, and as such it is a good bet for any cases where we require effects or objects to be placed in a character's weapon position. Scrubbing through the fragment again, the effect will now fire from the weapon hand of the character. If we ever need to find out bone names, there are a few ways to access this information within the editor. Hovering over the character in the Mannequin previewer will display the bone name. Alternatively, in the Character Editor (we'll go into the details later), you can scroll down in the Rollup window on the right-hand side, expand Debug Options, and tick ShowJointNames. This will display the names of all bones over the character in the previewer. With the particle attached, we can now ensure that the timing of the particle effect matches the animation. To do this, you can click-and-drag the proc clip around the timeline—around 1.5 seconds seems to match the timing for this animation. With the effect timed correctly, we now have a fully functioning fireFlare fragment! Try testing out the setup we made earlier with this change. We should now have a far more polished-looking event.

The previewer in Mannequin shares the same viewport controls as the perspective view in Sandbox. You can use this to zoom in and look around to gain a better view of the animation preview.

The final thing we need to do is save our changes to the Mannequin databases! To do this, go to File | Save Changes. When the list of changed files is displayed, press Save. Mannequin will then tell you that you're editing data from the .pak files. Click Yes to this prompt and your data will be saved to your project. The resulting changed database files will appear in GameSDK\Animations\Mannequin\ADB, and they should be distributed with your project if you package it for release.

Adding a new fragment

Now that we know how to add some effects feedback to existing fragments, let's look at making a new fragment to use as part of our scripting. This is useful to know if you have animators on your project and you want to get their assets in game quickly to hook up to your content. In our humble SDK project, we can effectively simulate this, as there are a few animations that ship with the SDK that have no corresponding fragment. Now, we'll see how to browse the raw animation assets themselves, before adding them to a brand new Mannequin fragment.

The Character Editor window

Let's open the Character Editor. Apart from being used for editing characters and their attachments in the engine, this is a really handy way to browse the library of available animation assets and preview them in a viewport. To open the Character Editor, go to View | Open View Pane | Character Editor. On some machines, the expense of rendering two scenes at once (that is, the main viewport and the viewports in the Character Editor or Mannequin Editor) can cause both to drop to a fairly sluggish frame rate. If you experience this, either close one of the other view panes you have on screen or, if you have it tabbed with other panes, simply select another tab. You can also open the Mannequin Editor or the Character Editor without a level loaded, which allows for better performance and minimal load times when editing content. Similar to Mannequin, the Character Editor will initially look quite overwhelming. The primary aspects to focus on are the Animations window in the top-left corner and the Preview viewport in the middle.
In the Filter option in the Animations window, we can enter search terms to narrow down the list of animations. An example of an animation that hasn't yet been turned into a Mannequin fragment is the stand_tac_callreinforcements_nw_3p_01 animation. You can find this by entering reinforcements into the search filter:

Selecting this animation will update the debug character in the Character Editor viewport so that they start to play the chosen animation. You can see that this specific animation is a one-shot wave and might be useful as another trigger for enemy reinforcements further into our scripting. Let's turn this into a fragment! We need to make sure we don't forget this animation, though; right-click on the animation and click Copy. This will copy the name to the clipboard for future reference in Mannequin. The animation can also be dragged and dropped into Mannequin manually to achieve the same result.

Creating fragment entries

With our animation located, let's get back to Mannequin and set up our fragment. Ensuring that we're still in the sdk_humanpreview.xml preview setup, take another look at the Fragments window in the top left of Mannequin. You'll see there are two rows of buttons: the top row controls creation and editing of fragment entries (the animation options we looked at earlier). The second row covers adding and editing of fragment IDs themselves: the top-level fragment name. This is where we need to start. Press the New ID button on the second row of buttons to bring up the New FragmentID Name dialog. Here, we need to add a name that conforms to the prefixes we discussed earlier. As this is an action, make sure you add IA_ (interest action) as the prefix for the name you choose; otherwise, it won't appear in the fragment browser in the Flow Graph. Once our fragment is named, we'll be presented with the Mannequin FragmentID Editor. For the most part, we won't need to worry about these options, but it's useful to be aware of how they might be useful (and don't worry, these can be edited after creation). The main parameters to note are the Scope options. These control which elements of the character are controlled by the fragment. By default, all these boxes are ticked, which means that our fragment will take control of each ticked aspect of the character. An example of where we might want to change this would be the character LookAt control. If we want to get an NPC to look at another entity in the world as part of a scripted sequence (using the AI:LookAt Flow Graph node), it would not be possible with the current settings. This is because the LookPose and Looking scopes are controlled by the fragment. If we wanted to control this via Flow Graph, these would need to be unticked, freeing up the look scopes for scripted control. With scopes covered, press OK at the bottom of the dialog box to continue adding our callReinforcements animation! We now have a fragment ID created in our Fragments window, but it has no entries! With our new fragment selected, press the New button on the first row of buttons to add an entry. This will automatically add itself under the <default> tag, which is the desired behavior, as our fragment will be tag-agnostic for the moment. This has now created a blank fragment in the Fragment Editor.

Adding the AnimLayer

This is where our animation from earlier comes in. Right-click on the FullBody3P track in the editor and go to Add Track | AnimLayer. As we did previously with our effect on the ProcLayer, double-click on the AnimLayer to add a new clip.
This will create our new Anim Clip, with some red None markup to signify the missing animation. Now, all we need to do is select the clip, go to the Anim Clip Properties, and paste in our animation name by double-clicking the Animation parameter.

The Animation parameter has a helpful browser that will allow you to search for animations—simply click on the browse icon in the parameter entry section. It lacks the previewer found in the Character Editor but can be a quick way to find animation candidates by name within Mannequin.

With our animation finally loaded into a fragment, we should now have a fragment setup that displays a valid animation name on the AnimLayer. Clicking on Play will now play our reinforcements wave animation!

Once we save our changes, all we need to do now is load our fragment in an AISequence:Animation node in Flow Graph. This can be done by repeating the steps outlined earlier. This time, our new fragment should appear in the fragment dialog.

Summary

Mannequin is a very powerful tool for handling animations in CRYENGINE, and we have looked at how to get started with it.

Resources for Article:

Further resources on this subject:
- Making an entity multiplayer-ready [article]
- Creating and Utilizing Custom Entities [article]
- CryENGINE 3: Breaking Ground with Sandbox [article]

Cardboard is Virtual Reality for Everyone

Packt
11 Apr 2016
22 min read
In this article, by Jonathan Linowes and Matt Schoen, authors of the book Cardboard VR Projects for Android, we introduce and define Google Cardboard.

(For more resources related to this topic, see here.)

Welcome to the exciting new world of virtual reality! We're sure that, as an Android developer, you want to jump right in and start building cool stuff that can be viewed using Google Cardboard. Your users can then just slip their smartphone into a viewer and step into your virtual creations. Let's take an outside-in tour of VR, Google Cardboard, and its Android SDK to see how they all fit together.

Why is it called Cardboard?

It all started in early 2014 when Google employees David Coz and Damien Henry, in their spare time, built a simple and cheap stereoscopic viewer for their Android smartphone. They designed a device that can be constructed from ordinary cardboard, plus a couple of lenses for your eyes and a mechanism to trigger a button "click." The viewer is literally made from cardboard. They wrote software that renders a 3D scene with a split screen: one view for the left eye, and another view, with an offset, for the right eye. Peering through the device, you get a real sense of 3D immersion in the computer-generated scene. It worked! The project was then proposed and approved as a "20% project" (where employees may dedicate one day a week to innovations), funded, and joined by other employees. In fact, Cardboard worked so well that Google decided to go forward, taking the project to the next level and releasing it to the public a few months later at Google I/O 2014. Since its inception, Google Cardboard has been accessible to hackers, hobbyists, and professional developers alike. Google open sourced the viewer design so that anyone can download the schematics and make their own, from a pizza box or from whatever they have lying around. One can even go into business selling precut kits directly to consumers. An assembled Cardboard viewer is shown in the following image:

The Cardboard project also includes a software development kit (SDK) that makes it easy to build VR apps. Google has released continuous improvements to the software, including both a native Java SDK as well as a plugin for the Unity 3D game engine (https://unity3d.com/). Since the release of Cardboard, a huge number of applications have been developed and made available on the Google Play Store. At Google I/O 2015, Version 2.0 introduced an upgraded design, improved software, and support for Apple iOS. Google Cardboard has rapidly evolved in the eyes of the market from an almost laughable toy into a serious new media device for certain types of 3D content and VR experiences. Google's own Cardboard demo app has been downloaded millions of times from the Google Play Store. The New York Times distributed about a million Cardboard viewers with its November 8 Sunday issue back in 2015. Cardboard is useful for viewing 360-degree photos and playing low-fidelity 3D VR games. It is universally accessible to almost anyone because it runs on any Android or iOS smartphone. For developers who are integrating 3D VR content directly into Android apps, Google Cardboard is a way of experiencing virtual reality that is here to stay.

A gateway to VR

Even in the very short time it's been available, this generation of consumer virtual reality, whether Cardboard or Oculus Rift, has demonstrated itself to be instantly compelling, immersive, entertaining, and "a game changer" for just about everyone who tries it.
Google Cardboard is especially easy to access, with a very low barrier to use. All you need is a smartphone, a low-cost Cardboard viewer (as low as $5 USD), and free apps downloaded from Google Play (or the Apple App Store for iOS). Google Cardboard has been called a gateway to VR, perhaps in reference to marijuana as a "gateway drug" to more dangerous illicit drug abuse? We can play with this analogy for a moment, however decadent. Perhaps Cardboard will give you a small taste of VR's potential. You'll want more. And then more again. This will help you fulfill your desire for better, faster, more intense, and more immersive virtual experiences that can only be found in higher-end VR devices. At this point, perhaps there'll be no turning back; you're addicted! Yet as a Rift user, I still also enjoy Cardboard. It's quick. It's easy. It's fun. And it really does work, provided I run apps that are appropriately designed for the device. I brought a Cardboard viewer in my backpack when visiting my family for the holidays. Everyone enjoyed it a lot. Many of my relatives didn't even get past the standard Google Cardboard demo app, especially its 360-degree photo viewer. That was engaging enough to entertain them for a while. Others jumped to a game or two or more. They wanted to keep playing and try new experiences. Perhaps it's just the novelty. Or perhaps it's the nature of this new medium. The point is that Google Cardboard provides an immersive experience that's enjoyable, useful, and very easily accessible. In short, it is amazing. Then, show them an HTC Vive or Oculus Rift. Holy cow! That's really, REALLY amazing! We're not here to talk about the higher-end VR devices, except to contrast them with Cardboard and to keep things in perspective. Once you try desktop VR, is it hard to "go back" to mobile VR? Some folks say so. But that's almost silly. The fact is that they're really separate things. As discussed earlier, desktop VR comes with much higher processing power and other high-fidelity features, whereas mobile VR is limited to your smartphone. If one were to try and directly port a desktop VR app to a mobile device, there's a good chance that you'd be disappointed. It's best to think of each as a separate medium. Just like a desktop application or a console game is different from, but similar to, a mobile one. The design criteria may be similar but different. The technologies are similar but different. The user expectations are similar but different. Mobile VR may be similar to desktop VR, but it's different. To emphasize how different Cardboard is from desktop VR devices, it's worth pointing out that Google has written its own manufacturer specifications and guidelines:

"Do not include a headstrap with your viewer. When the user holds the Cardboard with their hands against the face, their head rotation speed is limited by the torso rotational speed (which is much slower than the neck rotational speed). This reduces the chance of "VR sickness" caused by rendering/IMU latency and increases the immersiveness in VR."

The implication is that Cardboard apps should be designed for shorter, simpler, and somewhat stationary experiences. Let's now consider the other ways in which Cardboard is a gateway to VR. We predict that Android will continue to grow as a primary platform for virtual reality into the future. More and more technology will get crammed into smartphones.
And this technology will include features designed specifically with VR in mind:

- Faster processors and mobile GPUs
- Higher resolution screens
- Higher precision motion sensors
- Optimized graphics pipelines
- Better software
- Tons more VR apps

Mobile VR will not give way to desktop VR; it may even eventually replace it. Furthermore, maybe soon we'll see dedicated mobile VR headsets that have the guts of a smartphone built in, without the cost of a wireless communications contract. No need to use your own phone. No more getting interrupted while in VR by an incoming call or notification. No more rationing battery life in case you need to receive an important call or otherwise use your phone. These dedicated VR devices will likely be Android-based.

The value of low-end VR

Meanwhile, Android and Google Cardboard are here today: on our phones, in our pockets, in our homes, at the office, and even in our schools. Google Expeditions, for example, is Google's educational program for Cardboard (https://www.google.com/edu/expeditions/), which allows K-12 school children to take virtual field trips to "places a school bus can't," as they say: "around the globe, on the surface of Mars, on a dive to coral reefs, or back in time." The kits include Cardboard viewers and Android phones for each child in a classroom, plus an Android tablet for the teacher. They're connected over a network. The teacher can then guide students on virtual field trips, provide enhanced content, and create learning experiences that go way beyond a textbook or classroom video, as shown in the following image:

The entire Internet can be considered a worldwide publishing and media distribution network. It's a web of hyperlinked pages, text, images, music, video, JSON data, web services, and much more. It's also teeming with 360-degree photos and videos. There's also an ever-growing amount of three-dimensional content and virtual worlds. Would you consider writing an Android app today that doesn't display images? Probably not. There's a good chance that your app also needs to support sound files, videos, or other media. So, pay attention: three-dimensional Cardboard-enabled content is coming quickly. You might be interested in reading this article now because VR looks fun. But soon enough, it may be a customer-driven requirement for your next app.
Some examples of popular types of Cardboard apps include:

- 360-degree photo viewing, for example, Google's Cardboard demo (https://play.google.com/store/apps/details?id=com.google.samples.apps.cardboarddemo) and Cardboard Camera (https://play.google.com/store/apps/details?id=com.google.vr.cyclops)
- Video and cinema viewing, for example, a Cardboard theatre (https://play.google.com/store/apps/details?id=it.couchgames.apps.cardboardcinema)
- Roller coasters and thrill rides, for example, VR Roller Coaster (https://play.google.com/store/apps/details?id=com.frag.vrrollercoaster)
- Cartoonish 3D games, for example, Lamber VR (https://play.google.com/store/apps/details?id=com.archiactinteractive.LfGC&hl=en_GB)
- First person shooter games, for example, Battle 360 VR (https://play.google.com/store/apps/details?id=com.oddknot.battle360vr)
- Creepy scary stuff, for example, Sisters (https://play.google.com/store/apps/details?id=com.otherworld.Sisters)
- Educational experiences, for example, Titans of Space (https://play.google.com/store/apps/details?id=com.drashvr.titansofspacecb&hl=en_GB)
- Marketing experiences, for example, Volvo Reality (https://play.google.com/store/apps/details?id=com.volvo.volvoreality)

And much more. Thousands more. The most popular ones have had hundreds of thousands of downloads (the Cardboard demo app itself has millions of downloads).

Cardware!

Let's take a look at the variety of Cardboard devices that are available. There's a lot of variety. Obviously, the original Google design is actually made from cardboard, and manufacturers have followed suit, offering cardboard Cardboards directly to consumers—brands such as Unofficial Cardboard, DODOCase, and IAmCardboard were among the first. Google provides the specifications and schematics free of charge (refer to https://www.google.com/get/cardboard/manufacturers/). The basic viewer design consists of an enclosure body, two lenses, and an input mechanism. The Works with Google Cardboard certification program indicates that a given viewer product meets the Google standards and works well with Cardboard apps. The viewer enclosure may be constructed from any material: cardboard, plastic, foam, aluminum, and so on. It should be lightweight and do a pretty good job of blocking ambient light. The lenses (I/O 2015 Edition) are 34 mm diameter aspherical single lenses with an 80-degree circular FOV (field of view) and other specified parameters. The input trigger ("clicker") can be one of several alternative mechanisms. The simplest is none, where the user must touch the smartphone screen directly with their finger to trigger a click. This may be inconvenient, since the phone is sitting inside the viewer enclosure, but it works; plenty of viewers just include a hole to stick your finger through. Alternatively, the original Cardboard utilized a small ring magnet attached to the outside of the viewer. The user can slide the magnet, and this movement is sensed by the phone's magnetometer and recognized by the software as a "click." This design is not always reliable, because the location of the magnetometer varies among phones. Also, using this method, it is harder to detect a "press and hold" interaction, which means that there is only one type of user input "event" available to your application. Lastly, Version 2.0 introduced a button input constructed from a conductive "strip" and "pillow" glued to a cardboard-based "hammer."
When the button is pressed, the user's body charge is transferred onto the smartphone screen, as if they'd directly touched the screen with a finger. This clever solution avoids the unreliable magnetometer approach and instead uses the phone's native touchscreen input, albeit indirectly. It is also worth mentioning at this point that, since your smartphone supports Bluetooth, it's possible to use a handheld Bluetooth controller with your Cardboard apps. This is not part of the Cardboard specifications and requires some extra configuration: the use of a third-party input handler or controller support built into the app. A mini Bluetooth controller is shown in the following image:

Cardboard viewers are not necessarily made out of cardboard. Plastic viewers can get relatively costly. While they are sturdier than cardboard, they fundamentally have the same design (assembled). Some devices include adjustable lenses, for the distance of the lenses from the screen, and/or the distance between your eyes (IPD, or inter-pupillary distance). The Zeiss VR One, Homido, and Sunnypeak devices were among the first to become popular. Some manufacturers have gone out of the box (pun intended) with innovations that are not necessarily compliant with Google's specifications but provide capabilities beyond the Cardboard design. A notable example is the Wearality viewer (http://www.wearality.com/), which includes an exclusive patented 150-degree field of view (FOV) double Fresnel lens. It's so portable that it folds up like a pair of sunglasses. The Wearality viewer is shown in the following image:

Configuring your Cardboard viewer

With such a variety of Cardboard devices, and variations in lens distance, field of view, distortion, and so on, Cardboard apps must be configured to a specific device's attributes. Google provides a solution for this as well. Each Cardboard viewer comes with a unique QR code and/or NFC chip, which you scan to configure the software for that device. If you're interested in calibrating your own device or customizing your parameters, check out the profile generator tools at https://www.google.com/get/cardboard/viewerprofilegenerator/. To configure your phone for a specific Cardboard viewer, open the standard Google Cardboard app and select the Settings icon in the center bottom section of the screen, as shown in the following image:

Then, point the camera at the QR code for your particular Cardboard viewer:

Your phone is now configured for the specific Cardboard viewer parameters.

Developing apps for Cardboard

At the time of writing this article, Google provides two SDKs for Cardboard:

- Cardboard SDK for Android (https://developers.google.com/cardboard/android)
- Cardboard SDK for Unity (https://developers.google.com/cardboard/unity)

Let's consider the Unity option first.

Using Unity

Unity (http://unity3d.com/) is a popular fully featured 3D game engine that supports building your games for a wide gamut of platforms, from PlayStation and Xbox, to Windows and Mac (and Linux!), to Android and iOS. Unity consists of many separate tools integrated into a powerful engine under a unified visual editor. There are tools for graphics, physics, scripting, networking, audio, animation, UI, and more. It includes advanced computer graphics rendering, shading, textures, particles, and lighting, with all kinds of options for optimizing performance and fine-tuning the quality of your graphics for both 2D and 3D.
If that's not enough, Unity hosts a huge Asset Store teeming with models, scripts, tools, and other assets created by its large community of developers. The Cardboard SDK for Unity provides a plugin package that you can import into the Unity Editor, containing prefabs (premade objects), C# scripts, and other assets. The package gives you what you need in order to add a stereo camera to your virtual 3D scene and build your projects to run as Cardboard apps on Android (and iOS). If you're interested in learning more about using Unity to build VR applications for Cardboard, check out another book by Packt Publishing, Unity Virtual Reality Projects by Jonathan Linowes (https://www.packtpub.com/game-development/unity-virtual-reality-projects).

Going native

So, why not just use Unity for Cardboard development? Good question. It depends on what you're trying to do. Certainly, if you need all the power and features of Unity for your project, it's the way to go. But at what cost? With great power comes great responsibility (says Uncle Ben Parker). It is quick to learn but takes a lifetime to master (says the Go master). Seriously though, Unity is a powerful engine that may be overkill for many applications. To take full advantage of it, you may require additional expertise in modeling, animation, level design, graphics, and game mechanics. Cardboard applications built with Unity are bulky. An empty Unity scene built for Android generates an .apk file of at least 23 megabytes. In contrast, a simple native Cardboard application's .apk is under one megabyte. Along with this large app size comes a long loading time, possibly more than several seconds. It impacts memory usage and battery use. Unless you've paid for a Unity Android license, your app always starts with the Made With Unity splash screen. These may not be acceptable constraints for you. In general, the closer you are to the metal, the better performance you'll eke out of your application. When you write directly for Android, you have direct access to the features of the device, more control over memory and other resources, and more opportunities for customization and optimization. This is why native mobile apps tend to trump mobile web apps. Lastly, one of the best reasons to develop with native Android and Java may be the simplest: you're anxious to build something now! If you're already an Android developer, then just use what you already know and love! Take the straightest path from here to there. If you're familiar with Android development, then Cardboard development will come naturally. Using the Cardboard SDK for Android, you can program in Java, perhaps using an IDE (integrated development environment) such as Android Studio (which is based on IntelliJ). As we'll see throughout this article, your Cardboard Android app is like other Android apps, including a manifest, resources, and Java code. As with any Android app, you will implement a MainActivity class, but yours will extend CardboardActivity and implement CardboardView.StereoRenderer. Your app will utilize OpenGL ES 2.0 graphics, shaders, and 3D matrix math. It will be responsible for updating the display on each frame, that is, re-rendering your 3D scene based on the direction the user is looking at that particular slice of time. It is particularly important in VR, but also in any 3D graphics context, to render a new frame as quickly as the display allows, usually at 60 FPS.
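To give a feel for the shape of such an app, here is a minimal, hypothetical skeleton of that MainActivity, following the general pattern the SDK encourages rather than any specific project. The CardboardActivity and CardboardView.StereoRenderer names are from the Cardboard SDK; the layout and view IDs (activity_main, cardboard_view) and the empty method bodies are placeholder assumptions:

    import com.google.vrtoolkit.cardboard.*;
    import javax.microedition.khronos.egl.EGLConfig;

    public class MainActivity extends CardboardActivity
            implements CardboardView.StereoRenderer {

        @Override
        protected void onCreate(android.os.Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);
            // Hook the stereo view up to this renderer
            CardboardView cardboardView =
                    (CardboardView) findViewById(R.id.cardboard_view);
            cardboardView.setRenderer(this);
            setCardboardView(cardboardView);
        }

        @Override
        public void onNewFrame(HeadTransform headTransform) {
            // Update scene state from the latest head orientation
        }

        @Override
        public void onDrawEye(Eye eye) {
            // Render the scene once per eye, using that eye's
            // view and projection transforms
        }

        @Override
        public void onFinishFrame(Viewport viewport) { }

        @Override
        public void onSurfaceChanged(int width, int height) { }

        @Override
        public void onSurfaceCreated(EGLConfig config) {
            // Compile shaders and set up OpenGL ES state here
        }

        @Override
        public void onRendererShutdown() { }
    }

The SDK calls onNewFrame once per frame and onDrawEye twice (once per eye), which is where the side-by-side stereo rendering described below happens.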
Your app will handle user input via the Cardboard trigger and/or gaze-based control. That's what your app needs to do. However, there are still more nitty-gritty details that must be handled to make VR work. As noted in the Google Cardboard SDK guide (https://developers.google.com/cardboard/android/), the SDK simplifies many of these common VR development tasks, including the following:

- Lens distortion correction
- Head tracking
- 3D calibration
- Side-by-side rendering
- Stereo geometry configuration
- User input event handling

Functions are provided in the SDK to handle these tasks for you. Building and deploying your applications for development, debugging, profiling, and eventually publishing on Google Play also follow the same Android workflows you may already be familiar with. That's cool. Of course, there's more to building an app than simply following an example. We'll take a look at techniques, for example, for using data-driven geometric models, for abstracting shaders and OpenGL ES API calls, and for building user interface elements such as menus and icons. On top of all this, there are important suggested best practices for making your VR experiences work and for avoiding common mistakes.

An overview of VR best practices

More and more is being discovered and written each day about the dos and don'ts of designing and developing for VR. Google provides a couple of resources to help developers build great VR experiences, including the following:

- Designing for Google Cardboard is a best practices document that helps you focus on overall usability as well as avoid common VR pitfalls (http://www.google.com/design/spec-vr/designing-for-google-cardboard/a-new-dimension.html).
- Cardboard Design Lab is a Cardboard app that directly illustrates the principles of designing for VR, which you can explore in Cardboard itself. At Vision Summit 2016, the Cardboard team announced that they have released the source (Unity) project for developers to examine and extend (https://play.google.com/store/apps/details?id=com.google.vr.cardboard.apps.designlab and https://github.com/googlesamples/cardboard-unity/tree/master/Samples/CardboardDesignLab).

VR motion sickness is a real symptom and concern for virtual reality, caused in part by a lag in screen updates, or latency, when you're moving your head. Your brain expects the world around you to change exactly in sync with your actual motion. Any perceptible delay can make you feel uncomfortable, to say the least, and possibly nauseous. Latency can be reduced by rendering each frame faster and maintaining the recommended frames per second. Desktop VR apps are held to the high standard of 90 FPS, enabled by custom HMD screens. On mobile devices, the screen hardware often limits the refresh rate to 60 FPS, or in the worst case, 30 FPS. There are additional causes of VR motion sickness and other user discomforts, which can be mitigated by following these design guidelines:

- Always maintain head tracking. If the virtual world seems to freeze or pause, this may cause users to feel ill.
- Display user interface elements, such as titles and buttons, in 3D virtual space. If rendered in 2D, they'll seem to be "stuck to your face" and you will feel uncomfortable.
- When transitioning between scenes, fade to black. Cut scenes are very disorienting, and fading to white might be uncomfortably bright for your users.
- Users should remain in control of their movement within the app. Something about initiating camera motion yourself helps reduce motion sickness.
- Avoid acceleration and deceleration. As humans, we feel acceleration but not constant velocity. If you are moving the camera inside the app, keep a constant velocity. Rollercoasters are fun, but even in real life they can make you feel sick.
- Keep your users grounded. Being a virtual floating point in space can make you feel sick, whereas feeling like you're standing on the ground or sitting in a cockpit provides a sense of stability.
- Maintain a reasonable distance from the eye for UI elements, such as buttons and reticle cursors. If too close, the user may have to look cross-eyed and can experience eye strain. Some items that are too close may not converge at all and cause "double vision."

Building applications for virtual reality also differs from building conventional Android ones in other ways, such as the following:

- When transitioning from a 2D application into VR, it is recommended that you provide a headset icon for the user to tap, as shown in the following image:
- To exit VR, the user can hit the back button in the system bar (if present) or the home button. The Cardboard sample apps use a "tilt-up" gesture to return to the main menu, which is a fine approach if you want to allow a "back" input without forcing the user to remove the phone from the device.
- Make sure that you build your app to run in fullscreen mode (and not in Android's Lights Out mode).
- Do not perform any API calls that will present the user with a 2D dialog box. The user would be forced to remove the phone from the viewer to respond.
- Provide audio and haptic (vibration) feedback to convey information and indicate that user input is recognized by the app.

So, let's say that you've got your awesome Cardboard app done and ready to publish. Now what? There's a line you can put in the AndroidManifest file that marks the app as a Cardboard app.
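As a sketch of what that looks like, the marker is an intent category on your main activity. The activity name here is a placeholder, and you should verify the category name against the current SDK documentation:

    <activity android:name=".MainActivity">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
            <!-- Marks this activity as a Cardboard app so Google's
                 Cardboard app listings can discover it -->
            <category android:name="com.google.intent.category.CARDBOARD" />
        </intent-filter>
    </activity>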
Google's Cardboard app includes a Google Play store browser used to find Cardboard apps. Then, just publish it as you would any normal Android application.

Summary

In this article, we started by defining Google Cardboard and saw how it fits into the spectrum of consumer virtual reality devices. We then contrasted Cardboard with higher-end VR, such as the Oculus Rift, HTC Vive, and PlayStation VR, making the case for low-end VR as a separate medium in its own right. We talked a bit about developing for Cardboard and considered why (and why not) to use the Unity 3D game engine versus writing a native Android app in Java with the Cardboard SDK. And lastly, we took a quick survey of many design considerations for developing for VR, including ways to avoid motion sickness and tips for integrating Cardboard with Android apps in general.

Resources for Article:

Further resources on this subject:
- VR Build and Run [article]
- vROps – Introduction and Architecture [article]
- Designing and Building a vRealize Automation 6.2 Infrastructure [article]

That's One Fancy Hammer!

Packt
13 Jan 2014
8 min read
(For more resources related to this topic, see here.)

Introducing Unity 3D

Unity 3D is a new piece of technology that strives to make life better and easier for game developers. Unity is a game engine, or a game authoring tool, that enables creative folks like you to build video games. By using Unity, you can build video games more quickly and easily than ever before. In the past, building games required an enormous stack of punch cards, a computer that filled a whole room, and a burnt sacrificial offering to an ancient god named Fortran. Today, instead of spanking nails into boards with your palm, you have Unity. Consider it your hammer—a new piece of technology for your creative tool belt.

Unity takes over the world

We'll be distilling our game development dreams down to small, bite-sized nuggets instead of launching into any sweepingly epic open-world games. The idea here is to focus on something you can actually finish instead of getting bogged down in an impossibly ambitious opus. When you're finished, you can publish these games on the Web, Mac, or PC. The team behind Unity 3D is constantly working on packages and export options for other platforms. At the time of this writing, Unity could additionally create games that can be played on the iPhone, iPod, iPad, Android devices, Xbox Live Arcade, PS3, and Nintendo's WiiWare service. Each of these tools is an add-on to the core Unity package and comes at an additional cost. As we're focusing on what we can do without breaking the bank, we'll stick to the core Unity 3D program for the remainder of this article. The key is to start with something you can finish, and then, for each new project that you build, to add small pieces of functionality that challenge you and expand your knowledge. Any successful plan for world domination begins by drawing a territorial border in your backyard.

Browser-based 3D – welcome to the future

Unity's primary and most astonishing selling point is that it can deliver a full 3D game experience right inside your web browser. It does this with the Unity Web Player—a free plugin that embeds and runs Unity content on the Web.

Time for action – install the Unity Web Player

Before you dive into the world of Unity games, download the Unity Web Player. Much the same way the Flash player runs Flash-created content, the Unity Web Player is a plugin that runs Unity-created content in your web browser.

1. Go to http://unity3D.com.
2. Click on the install now! button to install the Unity Web Player.
3. Click on Download Now!
4. Follow all of the on-screen prompts until the Web Player has finished installing.

Welcome to Unity 3D! Now that you've installed the Web Player, you can view content created with the Unity 3D authoring tool in your browser.

What can I build with Unity?

In order to fully appreciate how fancy this new hammer is, let's take a look at some projects that other people have created with Unity. While these games may be completely out of our reach at the moment, let's find out how game developers have pushed this amazing tool to its very limits.

FusionFall

The first stop on our whirlwind Unity tour is FusionFall—a Massively Multiplayer Online Role-Playing Game (MMORPG). You can find it at fusionfall.com. You may need to register to play, but it's definitely worth the extra effort! FusionFall was commissioned by the Cartoon Network television franchise, and takes place in a re-imagined, anime-style world where popular Cartoon Network characters are all grown up.
Darker, more sophisticated versions of the Powerpuff Girls, Dexter, Foster and his imaginary friends, and the kids from Codename: Kids Next Door run around battling a slimy green alien menace.

Completely hammered

FusionFall is a very big and very expensive high-profile game that helped draw a lot of attention to the then-unknown Unity game engine when the game was released. As a tech demo, it's one of the very best showcases of what your new technological hammer can really do! FusionFall has real-time multiplayer networking, chat, quests, combat, inventory, NPCs (non-player characters), basic AI (artificial intelligence), name generation, avatar creation, and costumes. And that's just a highlight of the game's feature set. This game packs a lot of depth.

Should we try to build FusionFall?

At this point, you might be thinking to yourself, "Heck YES! FusionFall is exactly the kind of game I want to create with Unity, and this article is going to show me how!" Unfortunately, a step-by-step guide to creating a game the size and scope of FusionFall would likely require its own flatbed truck to transport, and you'd need a few friends to help you turn each enormous page. It would take you the rest of your life to read, and on your deathbed, you'd finally realize the grave error that you had made in ordering it online in the first place, despite having qualified for free shipping. Here's why: check out the game credits link on the FusionFall website (http://www.fusionfall.com/game/credits.php). This page lists all of the people involved in bringing the game to life. Cartoon Network enlisted the help of an experienced Korean MMO developer called Grigon Entertainment. There are over 80 names on that credits list! Clearly, only two courses of action are available to you:

1. Build a cloning machine and make 79 copies of yourself. Send each of those copies to school to study various disciplines, including marketing, server programming, and 3D animation. Then spend a year building the game with your clones. Keep track of who's who by using a sophisticated armband system.
2. Give up now because you'll never make the game of your dreams.

Another option

Before you do something rash and abandon game development for farming, let's take another look at this. FusionFall is very impressive, and it might look a lot like the game that you've always dreamed of making. This article is not about crushing your dreams. It's about dialing down your expectations, putting those dreams in an airtight jar, and taking baby steps. Confucius said: "A journey of a thousand miles begins with a single step." I don't know much about the man's hobbies, but if he was into video games, he might have said something similar about them—creating a game with a thousand awesome features begins by creating a single, less feature-rich game. So, let's put the FusionFall dream in an airtight jar and come back to it when we're ready. We'll take a look at some smaller Unity 3D game examples and talk about what it took to build them.

Off-Road Velociraptor Safari

No tour of Unity 3D games would be complete without a trip to Blurst.com—the game portal owned and operated by indie game developer Flashbang Studios. In addition to hosting games by other indie game developers, Flashbang has packed Blurst with its own slate of kooky content, including Off-Road Velociraptor Safari. (Note: Flashbang Studios is constantly toying with ways to distribute and sell its games.
At the time of this writing, Off-Road Velociraptor Safari could be played for free only as a Facebook game. If you don't have a Facebook account, try playing another one of the team's creations, like Minotaur China Shop or Time Donkey). In Off-Road Velociraptor Safari, you play a dinosaur in a pith helmet and a monocle driving a jeep equipped with a deadly spiked ball on a chain (just like in the archaeology textbooks). Your goal is to spin around in your jeep doing tricks and murdering your fellow dinosaurs (obviously). For many indie game developers and reviewers, Off-Road Velociraptor Safari was their first introduction to Unity. Some reviewers said that they were stunned that a fully 3D game could play in the browser. Other reviewers were a little bummed that the game was sluggish on slower computers. We'll talk about optimization a little later, but it's not too early to keep performance in mind as you start out. Fewer features, more promise If you play Off-Road Velociraptor Safari and some of the other games on the Blurst site, you'll get a better sense of what you can do with Unity without a team of experienced Korean MMO developers. The game has 3D models, physics (code that controls how things move around somewhat realistically), collisions (code that detects when things hit each other), music, and sound effects. Just like FusionFall, the game can be played in the browser with the Unity Web Player plugin. Flashbang Studios also sells downloadable versions of its games, demonstrating that Unity can produce standalone executable game files too. Maybe we should build Off-Road Velociraptor Safari? Right then! We can't create FusionFall just yet, but we can surely create a tiny game like Off-Road Velociraptor Safari, right? Well... no. Again, this article isn't about crushing your game development dreams. But the fact remains that Off-Road Velociraptor Safari took five supremely talented and experienced guys eight weeks of full-time work to build, and they've been tweaking and improving it ever since. Even a game like this, which may seem quite small in comparison to a full-blown MMO like FusionFall, is a daunting challenge for a solo developer. Put it in a jar up on the shelf, and let's take a look at something you'll have more success with.

Installation of Ogre 3D
Packt, 09 Feb 2011, 6 min read

OGRE 3D 1.7 Beginner's Guide Downloading and installing Ogre 3D The first step we need to take is to install and configure Ogre 3D. Time for action – downloading and installing Ogre 3D We are going to download the Ogre 3D SDK and install it so that we can work with it later. Go to http://www.ogre3d.org/download/sdk. Download the appropriate package. If you need help picking the right package, take a look at the next What just happened section. Copy the installer to a directory you would like your OgreSDK to be placed in. Double-click on the installer; this will start a self-extractor. You should now have a new folder in your directory with a name similar to OgreSDK_vc9_v1-7-1. Open this folder. It should look similar to the following screenshot: What just happened? We just downloaded the appropriate Ogre 3D SDK for our system. Ogre 3D is a cross-platform render engine, so there are a lot of different packages for these different platforms. After downloading, we extracted the Ogre 3D SDK. Different versions of the Ogre 3D SDK Ogre supports many different platforms, and because of this, there are a lot of different packages we can download. Ogre 3D has several builds for Windows, one for MacOSX, and one Ubuntu package. There is also a package for MinGW and for the iPhone. If you like, you can download the source code and build Ogre 3D by yourself. This article will focus on the Windows pre-built SDK and how to configure your development environment. If you want to use another operating system, you can look at the Ogre 3D Wiki, which can be found at http://www.ogre3d.org/wiki. The wiki contains detailed tutorials on how to set up your development environment for many different platforms. Exploring the SDK Before we begin building the samples which come with the SDK, let's take a look at the SDK. We will look at the structure the SDK has on a Windows platform. On Linux or MacOS the structure might look different. First, we open the bin folder. There we will see two folders, namely, debug and release. The same is true for the lib directory. The reason is that the Ogre 3D SDK comes with debug and release builds of its libraries and dynamic-linked/shared libraries. This makes it possible to use the debug build during development, so that we can debug our project. When we finish the project, we link our project against the release build to get the full performance of Ogre 3D. When we open either the debug or release folder, we will see many dll files, some cfg files, and two executables (exe). The executables are for content creators to update their content files to the new Ogre version, and therefore are not relevant for us. The OgreMain.dll is the most important DLL. It is the compiled Ogre 3D source code we will load later. All DLLs with Plugin_ at the start of their name are Ogre 3D plugins we can use with Ogre 3D. Ogre 3D plugins are dynamic libraries, which add new functionality to Ogre 3D using the interfaces Ogre 3D offers. This can be practically anything, but often it is used to add features like better particle systems or new scene managers. The Ogre 3D community has created many more plugins, most of which can be found in the wiki. The SDK simply includes the most generally used plugins. The DLLs with RenderSystem_ at the start of their name are, surely not surprisingly, wrappers for different render systems that Ogre 3D supports. In this case, these are Direct3D9 and OpenGL. 
Additional to these two systems, Ogre 3D also has a Direct3D10, Direct3D11, and OpenGL ES(OpenGL for Embedded System) render system. Besides the executables and the DLLs, we have the cfg files. cfg files are config files that Ogre 3D can load at startup. Plugins.cfg simply lists all plugins Ogre 3D should load at startup. These are typically the Direct3D and OpenGL render systems and some additional SceneManagers. quakemap.cfg is a config file needed when loading a level in the Quake3 map format. We don't need this file, but a sample does. resources.cfg contains a list of all resources, like a 3D mesh, a texture, or an animation, which Ogre 3D should load during startup. Ogre 3D can load resources from the file system or from a ZIP file. When we look at resources.cfg, we will see the following lines: Zip=../../media/packs/SdkTrays.zip FileSystem=../../media/thumbnails Zip= means that the resource is in a ZIP file and FileSystem= means that we want to load the contents of a folder. resources.cfg makes it easy to load new resources or change the path to resources, so it is often used to load resources, especially by the Ogre samples. Speaking of samples, the last cfg file in the folder is samples.cfg. We don't need to use this cfg file. Again, it's a simple list with all the Ogre samples to load for the SampleBrowser. But we don't have a SampleBrowser yet, so let's build one. The Ogre 3D samples Ogre 3D comes with a lot of samples, which show all the kinds of different render effects and techniques Ogre 3D can do. Before we start working on our application, we will take a look at the samples to get a first impression of Ogre's capabilities. Time for action – building the Ogre 3D samples To get a first impression of what Ogre 3D can do, we will build the samples and take a look at them. Go to the Ogre3D folder. Open the Ogre3d.sln solution file. Right-click on the solution and select Build Solution. Visual Studio should now start building the samples. This might take some time, so get yourself a cup of tea until the compile process is finished. If everything went well, go into the Ogre3D/bin folder. Execute the SampleBrowser.exe. You should see the following on your screen: Try the different samples to see all the nice features Ogre 3D offers. What just happened? We built the Ogre 3D samples using our own Ogre 3D SDK. After this, we are sure to have a working copy of Ogre 3D.  
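Before moving on, it may help to see a slightly fuller picture of the resources.cfg format quoted earlier. The following is only an illustrative sketch: the SdkTrays.zip and thumbnails entries come from the SDK's own file, while the group names in square brackets and the extra paths are made-up examples, so compare it against the resources.cfg that ships with your SDK:

# resources.cfg (illustrative sketch, not the SDK's exact file)
[Essential]
Zip=../../media/packs/SdkTrays.zip
[General]
FileSystem=../../media/thumbnails
FileSystem=../../media/models
Zip=../../media/packs/materials.zip

Each bracketed section names a resource group, and each line below it tells Ogre 3D whether that entry should be loaded from a ZIP archive (Zip=) or from a plain folder (FileSystem=).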

What's Your Input?
Packt, 20 Aug 2014, 7 min read

This article by Venita Pereira, the author of the book Learning Unity 2D Game Development by Example, teaches us all about the various input types and states of a game. We will then go on to learn how to create buttons and the game controls by using code snippets for input detection. "Computers are finite machines; when given the same input, they always produce the same output." – Greg M. Perry, Sams Teach Yourself Beginning Programming in 24 Hours (For more resources related to this topic, see here.) Overview The list of topics that will be covered in this article is as follows: Input versus output Input types Output types Input Manager Input detection Buttons Game controls Input versus output We will be looking at exactly what both input and output in games entail. We will look at their functions, importance, and differentiations. Input in games Input may not seem a very important part of a game at first glance, but in fact it is very important, as input in games involves how the player will interact with the game. All the controls in our game, such as moving, special abilities, and so forth, depend on what controls and game mechanics we would like in our game and the way we would like them to function. Most games have the standard control setup of moving your character. This is to help usability, because if players are already familiar with the controls, then the game is more accessible to a much wider audience. This is particularly noticeable with games of the same genre and platform. For instance, endless runner games usually make use of the tilt mechanic which is made possible by the features of the mobile device. However, there are variations and additions to the pre-existing control mechanics; for example, many other endless runners make use of the simple swipe mechanic, and there are those that make use of both. When designing our games, we can be creative and unique with our controls, thereby innovating a game, but the controls still need to be intuitive for our target players. When first designing our game, we need to know who our target audience of players includes. If we would like our game to be played by young children, for instance, then we need to ensure that they are able to understand, learn, and remember the controls. Otherwise, instead of enjoying the game, they will get frustrated and stop playing it entirely. As an example, a young player may hold a touchscreen device with their fingers over the screen, thereby preventing the input from working correctly depending on whether the game was first designed to take this into account and support this. Different audiences of players interact with a game differently. Likewise, if a player is more familiar with the controls on a specific device, then they may struggle with different controls. It is important to create prototypes to test the input controls of a game thoroughly. Developing a well-designed input system that supports usability and accessibility will make our game more immersive. Output in games Output is the direct opposite of input; it provides the necessary information to the player. However, output is just as essential to a game as input. It provides feedback to the player, letting them know how they are doing. Output lets the player know whether they have done an action correctly or they have done something wrong, how they have performed, and their progression in the form of goals/missions/objectives. Without feedback, a player would feel lost. 
The player would potentially see the game as being unclear, buggy, or even broken. For certain types of games, output forms the heart of the game. The input in a game gets processed by the game to provide some form of output, which then provides feedback to the player, helping them learn from their actions. This is the cycle of the game's input-output system. The following diagram represents the cycle of input and output: Input types There are many different input types that we can utilize in our games. These various input types can form part of the exciting features that our games have to offer. The following image displays the different input types: The most widely used input types in games include the following: Keyboard: Key presses from a keyboard are supported by Unity and can be used as input controls in PC games as well as games on any other device that supports a keyboard. Mouse: Mouse clicks, motion (of the mouse), and coordinates are all inputs that are supported by Unity. Game controller: This is an input device that generally includes buttons (including shoulder and trigger buttons), a directional pad, and analog sticks. The game controller input is supported by Unity. Joystick: A joystick has a stick that pivots on a base that provides movement input in the form of direction and angle. It also has a trigger, throttle, and extra buttons. It is commonly used in flight simulation games to simulate the control device in an aircraft's cockpit and other simulation games that simulate controlling machines, such as trucks and cranes. Modern game controllers make use of a variation of joysticks known as analog sticks and are therefore treated as the same class of input device as joysticks by Unity. Joystick input is supported by Unity. Microphone: This provides audio input commands for a game. Unity supports basic microphone input. For greater fidelity, a third-party audio recognition tool would be required. Camera: This provides visual input for a game using image recognition. Unity has webcam support to access RGB data, and for more advanced features, third-party tools would be required. Touchscreen: This provides multiple touch inputs from the player's finger presses on the device's screen. This is supported by Unity. Accelerometer: This provides the proper acceleration force at which the device is moved and is supported by Unity. Gyroscope: This provides the orientation of the device as input and is supported by Unity. GPS: This provides the geographical location of the device as input and is supported by Unity. Stylus: Stylus input is similar to touchscreen input in that you use a stylus to interact with the screen; however, it provides greater precision. The latest version of Unity supports the Android stylus. Motion controller: This provides the player's motions as input. Unity does not support this, and therefore, third-party tools would be required. Output types The main output types in games are as follows: Visual output Audio output Controller vibration Unity supports all three. Visual output The Head-Up Display (HUD) is the gaming term for the game's Graphical User Interface (GUI) that provides all the essential information as visual output to the player as well as feedback and progress to the player as shown in the following image: HUD, viewed June 22, 2014, http://opengameart.org/content/golden-ui Other visual output includes images, animations, particle effects, and transitions. 
Audio Audio is what can be heard through an audio output, such as a speaker, to provide feedback that supports and emphasizes the visual output and, therefore, increases immersion. The following image displays a speaker: Speaker, viewed June 22, 2014, http://pixabay.com/en/loudspeaker-speakers-sound-music-146583/ Controller vibration Controller vibration provides feedback for instances where the player collides with an object or environmental feedback for earthquakes to provide even more immersion as in the following image: Having a game that is designed to provide output meaningfully not only makes it clearer and more enjoyable, but can truly bring the world to life, making it truly engaging for the player.
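The input-process-output cycle described in this article can be summarized in a few lines of pseudocode. The sketch below is written in plain Python purely for illustration; it is not Unity code, and the function names are invented for the example:

def game_loop(read_input, update_state, present_output):
    # read_input gathers events from any of the input types listed above
    # update_state is where the game processes those events
    # present_output delivers the visual, audio, and vibration feedback
    state = {"quit": False}
    while not state["quit"]:
        events = read_input()
        state = update_state(state, events)
        present_output(state)

Every engine hides a loop of roughly this shape; the input types and output types discussed here are simply the two ends of it.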

Mesh animation
Packt, 18 Oct 2013, 5 min read

(For more resources related to this topic, see here.) Using animated models is not very different from using normal models. There are essentially two types of animation to consider (in addition to manually changing the position of a mesh's geometry in Three.js). If all you need is to smoothly transition properties between different values—for example, changing the rotation of a door in order to animate it opening—you can use the Tween.js library at https://github.com/sole/tween.js to do so instead of animating the mesh itself. Jerome Etienne has a nice tutorial on doing this at http://learningthreejs.com/blog/2011/08/17/tweenjs-for-smooth-animation/.

Morph animation

Morph animation stores animation data as a sequence of positions. For example, if you had a cube with a shrink animation, your model could hold the positions of the vertices of the cube at full size and at the shrunk size. Then animation would consist of interpolating between those states during each rendering or keyframe. The data representing each state can hold either vertex targets or face normals. To use morph animation, the easiest approach is to use a THREE.MorphAnimMesh class, which is a subclass of the normal mesh. In the following example, the highlighted lines should only be included if the model uses normals:

var loader = new THREE.JSONLoader();
loader.load('model.js', function(geometry) {
  var material = new THREE.MeshLambertMaterial({
    color: 0x000000,
    morphTargets: true,
    morphNormals: true,
  });
  if (geometry.morphColors && geometry.morphColors.length) {
    var colorMap = geometry.morphColors[0];
    for (var i = 0; i < colorMap.colors.length; i++) {
      geometry.faces[i].color = colorMap.colors[i];
    }
    material.vertexColors = THREE.FaceColors;
  }
  geometry.computeMorphNormals();
  var mesh = new THREE.MorphAnimMesh(geometry, material);
  mesh.duration = 5000; // in milliseconds
  scene.add(mesh);
  morphs.push(mesh);
});

The first thing we do is set our material to be aware that the mesh will be animated with the morphTargets properties and optionally with morphNormal properties. Next, we check whether colors will change during the animation, and set the mesh faces to their initial color if so (if you know your model doesn't have morphColors, you can leave out that block). Then the normals are computed (if we have them) and our MorphAnimMesh animation is created. We set the duration value of the full animation, and finally store the mesh in the global morphs array so that we can update it during our physics loop:

for (var i = 0; i < morphs.length; i++) {
  morphs[i].updateAnimation(delta);
}

Under the hood, the updateAnimation method just changes which set of positions in the animation the mesh should be interpolating between. By default, the animation will start immediately and loop indefinitely. To stop animating, just stop calling updateAnimation.

Skeletal animation

Skeletal animation moves a group of vertices in a mesh together by making them follow the movement of a bone. This is generally easier to design because artists only have to move a few bones instead of potentially thousands of vertices. It's also typically less memory-intensive for the same reason. 
To use skeletal animation, use a THREE.SkinnedMesh class, which is a subclass of the normal mesh:

var loader = new THREE.JSONLoader();
loader.load('model.js', function(geometry, materials) {
  for (var i = 0; i < materials.length; i++) {
    materials[i].skinning = true;
  }
  var material = new THREE.MeshFaceMaterial(materials);
  THREE.AnimationHandler.add(geometry.animation);
  var mesh = new THREE.SkinnedMesh(geometry, material, false);
  scene.add(mesh);
  var animation = new THREE.Animation(mesh, geometry.animation.name);
  animation.interpolationType = THREE.AnimationHandler.LINEAR; // or CATMULLROM for cubic splines (ease-in-out)
  animation.play();
});

The model we're using in this example already has materials, so unlike in the morph animation examples, we have to change the existing materials instead of creating a new one. For skeletal animation, we have to enable skinning, which refers to how the materials are wrapped around the mesh as it moves. We use the THREE.AnimationHandler utility to track where we are in the current animation and a THREE.SkinnedMesh utility to properly handle our model's bones. Then we use the mesh to create a new THREE.Animation and play it. The animation's interpolationType determines how the mesh transitions between states. If you want cubic spline easing (slow then fast then slow), use THREE.AnimationHandler.CATMULLROM instead of the LINEAR easing. We also need to update the animation in our physics loop:

THREE.AnimationHandler.update(delta);

It is possible to use both skeletal and morph animations at the same time. In this case, the best approach is to treat the animation as skeletal and manually update the mesh's morphTargetInfluences array as demonstrated in examples/webgl_animation_skinning_morph.html in the Three.js project.

Summary

This article explained how to manage external assets such as 3D models, how to add details to your worlds, and how to animate those models using morph and skeletal animation.

Resources for Article:
Further resources on this subject:
Introduction to Game Development Using Unity 3D [Article]
Basics of Exception Handling Mechanism in JavaScript Testing [Article]
2D game development with Monkey [Article]

Editing the UV islands
Packt, 10 Aug 2015, 10 min read

In this article by Enrico Valenza, the author of Blender 3D Cookbook, we are going to join the two UV islands' halves together, in order to improve the final look of the texturing; we are also going to modify, if possible, a little of the island proportions in order to obtain a more regular flow of the UV vertices, and fix the distortions. We are going to use the pin tool, which is normally used in conjunction with the Live Unwrap tool. (For more resources related to this topic, see here.) Getting ready First, we'll try to recalculate the unwrap of some of the islands by modifying the seams of the mesh. Before we start, though, let's see if we can improve some of the visibility of the UV islands in the UV/Image Editor: Put the mouse cursor in the UV/Image Editor window and press the N key. In the Properties sidepanel that appears by pressing the N key on the right-hand side of the window, go to the Display subpanel and click on the Black or White button (depending on your preference) under the UV item. Check also the Smooth item box. Also, check the Stretch item, which even though it was made for a different purpose, can increase the visibility of the islands a lot. Press N again to get rid of the Properties sidepanel. All these options enabled should make the islands more easily readable in the UV/Image Editor window: The UV islands made more easily readable by the enabled items How to do it… Now we can start with the editing; initially, we are going to freeze the islands that we don't want to modify because their unwrap is either satisfactory, or we will deal with it later. So, perform the following steps: Press A to select all the islands, then by putting the mouse pointer on the two pelvis island halves and pressing Shift + L, multi-deselect them; press the P key to pin the remaining selected UV islands and then A to deselect everything: To the right-hand side, the pinned UV islands Zoom in on the islands of the pelvis, select both the left and right outer edge-loops, as shown in the following left image, and press P to pin them. Go to the 3D view and clear only the front part of the median seam on the pelvis. To do this, start to clear the seam from the front edges, go down and stop where it crosses the horizontal seam that passes the bottom part of the groin and legs, and leave the back part of the vertical median seam still marked: Pinning the extreme vertices in the UV/Image Editor, and editing the seam on the mesh Go into Face selection mode and select all the faces of the pelvis; put the mouse pointer in the 3D view and press U | Unwrap (alternatively, go into the UV/Image Editor and press E): Unwrapping again with the pinning and a different seam The island will keep the previous position because of the pinned edges, and is now unwrapped as one single piece (with the obvious exception of the seam on the back). We won't modify the pelvis island any further, so select all its vertices and press P to pin all of them and then deselect them. Press A in the 3D view to select all the faces of the mesh and make all the islands visible in the UV/Image Editor. Note that they are all pinned at the moment, so just select the vertices you want to unpin (Alt + P) in the islands of the tongue and inner mouth. 
Then, clear the median seam in the corresponding pieces on the mesh, and press E again: Re-unwrapping the tongue and inner mouth areas Select the UV vertices of the resulting islands and unpin them all; next, pin just one vertex at the top of the islands and one at the bottom, and unwrap again. This will result in a more organically distributed unwrap of the parts: Re-unwrapping again with a different pinning Select all the faces of the mesh, and then all the islands in the UV/Image Editor window. Press Ctrl + A to average their relative size and adjust their position in the default tile space: The rearranged UV islands Now, let's work on the head piece that, as in every character, should be the most important and well-finished piece. At the moment, the face is made using two separate islands; although this won't be visible in the final textured rendering of our character, it's always better, if possible, to join them in order to have a single piece, especially in the front mesh faces. Due to the elongated snout of the character, if we were to unwrap the head as a single piece simply without the median seam, we wouldn't get a nice evenly mapped result, so we must divide the whole head into more pieces. Actually, we can take advantage of the fact that the Gidiosaurus is wearing a helmet and that most of the head will be covered by it; this allows us to easily split the face from the rest of the mesh, hiding the seams under the helmet. Go into Edge selection mode and mark the seams, dividing the face from the cranium and neck as shown in the following screenshots. Select the crossing edge-loops, and then clear the unnecessary parts: New seams for the character's head part 1 Also clear the median seam in the upper face part, and under the seam on the bottom jaw, leaving it only on the front mandible and on the back of the cranium and neck: New seams for the character's head part 2 Go in the Face selection mode and select only the face section of the mesh, and then press E to unwrap. The new unwrap comes upside down, so select all the UV vertices and rotate the island by 180 degrees: The character's face unwrapped Select the cranium/neck section on the mesh and repeat the process: The rest of the head mesh unwrapped as a whole piece Now, select all the faces of the mesh and all the islands in the UV/Image Editor, and press Ctrl + A to average their reciprocal size. Once again, adjust the position of the islands inside the UV tile (Ctrl + P to automatically pack them inside the available space, and then tweak their position, rotation, and scale): The character's UV islands packed inside the default U0/V0 tile space How it works… Starting from the UV unwrap, we improved some of the islands by joining together the halves representing common mesh parts. When doing this, we tried to retain the already good parts of the unwrap by pinning the UV vertices that we didn't want to modify; this way, the new unwrap process was forced to calculate the position of the unpinned vertices using the constraints of the pinned ones (pelvis, tongue, and inner mouth). In other cases, we totally cleared the old seams on the model and marked new ones, in order to have a completely new unwrap of the mesh part (the head), we also used the character furniture (such as the armor) to hide the seams (which in any case, won't be visible at all). 
There's more… At this point, looking at the UV/Image Editor window containing the islands, it's evident that if we want to keep several parts in proportion to each other, some of the islands are a little too small to give a good amount of detail when texturing; for example, the Gidiosaurus's face. A technique for a good unwrap that is the current standard in the industry is UDIM UV Mapping, which means U-Dimension; basically, after the usual unwrap, the islands are scaled bigger and placed outside the default U0/V0 tile space. Look at the following screenshots, showing the Blender UV/Image Editor window:   The default U0/V0 tile space and the possible consecutive other tile spaces On the left-hand side, you can see, highlighted with red lines, the single UV tile that at present is the standard for Blender, which is identified by the UV coordinates 0 and 0: that is, U (horizontal) = 0 and V (vertical) = 0. Although not visible in the UV/Image Editor window, all the other possible consecutive tiles can be identified by the corresponding UV coordinates, as shown on the right-hand side of the preceding screenshot (again, highlighted with red lines). So, adjacent to the tile U0/V0, we can have the row with the tiles U1/V0, U2/V0, and so on, but we can also go upwards: U0/V1, U1/V1, U2/V1, and so on. To help you identify the tiles, Blender will show you the amount of pixels and also the number of tiles you are moving the islands in the toolbar of the UV/Image Editor window. In the following screenshot, the arm islands have been moved horizontally (on the negative x axis) by -3072.000 pixels; this is correct because that's exactly the X size of the grid image. In fact, in the toolbar of the UV/Image Editor window, while moving the islands we can read D: -3072.000 (pixels) and (inside brackets) 1.0000 (tile) along X; effectively, 3072 pixels = 1 tile.   Moving the arm islands to the U1/V0 tile space When moving UV islands from tile to tile, remember to check that the Constrain to Image Bounds item in the UVs menu on the toolbar of the UV/Image Editor window is disabled; also, enabling the Normalized item inside the Display subpanel under the N key Properties sidepanel of the same editor window will display the UV coordinates from 0.0 to 1.0, rather than in pixels. More, pressing the Ctrl key while moving the islands will constrain the movement to intervals, making it easy to translate them to exactly 1 tile space. Because at the moment Blender doesn't support the UDIM UV Mapping standard, simply moving an island outside the default U0/V0 tile, for example to U1/V0, will repeat the image you loaded in the U0/V0 tile and on the faces associated with the moved islands. To solve this, it's necessary, after moving the islands, to assign a different material, if necessary with its own different image textures, to each group of vertices/faces associated with each tile space. So, if you shared your islands over 4 tiles, you need to assign 4 different materials to your object, and each material must load the proper image texture. The goal of this process is obviously to obtain bigger islands mapped with bigger texture images, by selecting all the islands, scaling them bigger together using the largest ones as a guide, and then tweaking their position and distribution. 
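To make the tile arithmetic concrete, here is a tiny illustrative helper in Python. It is not part of Blender's API and the function names are invented; it only restates the relationships described above, where the whole-number part of a UV coordinate identifies the tile and, with the grid image used here, one tile spans 3072 pixels:

from math import floor

def tile_indices(u, v):
    # 0.0-1.0 falls in U0/V0, 1.0-2.0 in U1/V0, and so on
    return int(floor(u)), int(floor(v))

def pixels_to_tiles(pixels, tile_size_px=3072):
    # converts a translation given in pixels into a number of tiles
    return pixels / tile_size_px

print(tile_indices(1.25, 0.40))   # (1, 0) -> the U1/V0 tile
print(pixels_to_tiles(-3072.0))   # -1.0  -> exactly one tile along negative X

This is the same bookkeeping the UV/Image Editor readout performs when it reports a move both in pixels and in tiles.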
One last thing: it is also better to unwrap the corneas and eyes (which are separate objects from the Gidiosaurus body mesh) and add their islands to the tiles where you put the face, mouth, teeth, and so on (use the Draw Other Objects tool in the View menu of the UV/Image Editor window to also show the UV islands of the other nonjoined unwrapped objects):   UV islands unwrapped, following the UDIM UV Mapping standard In our case, we assigned the Gidiosaurus body islands to 5 different tiles, U0/V0, U1/V0, U2/V0, U0/V1, and U1/V1, so we'll have to assign 5 different materials. Note that for exposition purposes only, in the preceding screenshot, you can see the cornea and eye islands together with the Gidiosaurus body islands because I temporarily joined the objects; however, it's usually better to maintain the eyes and corneas as separate objects from the main body. Summary In this article, we saw how we can work with UV islands. Resources for Article: Further resources on this subject: Working with Blender [article] Blender Engine : Characters [article] Blender 2.5: Rigging the Torso [article]

CryENGINE 3: Sandbox Basics
Packt, 21 Jun 2011, 9 min read

  CryENGINE 3 Cookbook Over 100 recipes written by Crytek developers for creating AAA games using the technology that created Crysis 2 Placing the objects in the world Placing objects is a simple task; however, basic terrain snapping is not explained to most new developers. It is common to ask why, when dragging and dropping an object into the world, they cannot see the object. This section will teach you the easiest ways to place an object into your map by using the Follow Terrain method. Getting ready Have My_Level open inside of Sandbox (after completing either of the Terrain sculpting or Generating a procedural terrain recipes). Review the Navigating a level with the Sandbox Camera recipe to get familiar with the Perspective View. Have the Rollup Bar open and ready. Make sure you have the EditMode ToolBar open (right-click the top main ToolBar and tick EditMode ToolBar). How to do it... First select the Follow Terrain button. Then open the Objects tab within the Rollup Bar. Now from the Brushes browser, select any object you wish to place down (for example, defaults/box). You may either double-click the object, or drag-and-drop it onto the Perspective View. Move your mouse anywhere where there is visible terrain and then click once more to confirm the position you wish to place it in. How it works... The Follow Terrain tool is a simple tool that allows the pivot of the object to match the exact height of the terrain in that location. This is best seen on objects that have a pivot point close to or near the bottom of them. There's more... You can also follow terrain and snap to objects. This method is very similar to the Follow Terrain method, except that this also includes objects when placing or moving your selected object. This method does not work on non-physicalized objects.   Refining the object placement After placing the objects in the world with just the Follow Terrain or Snapping to Objects, you might find that you will need to adjust the position, rotation, or scale of the object. In this recipe, we will show you the basics of how you might be able to do so along with a few hotkey shortcuts to make this process a little faster. This works with any object that is placed in your level, from Entities to Solids. Getting ready Have My_Level open inside of Sandbox Review the Navigating a level with the Sandbox Camera recipe to get familiar with the Perspective View. Make sure you have the EditMode ToolBar open (right-click on the top main ToolBar and tick EditMode ToolBar). Place any object in the world. How to do it... In this recipe, we will call your object (the one whose location you wish to refine) Box for ease of reference. Select Box. After selecting Box, you should see a three axis widget on it, which represents each axis in 3D space. By default, these axes align to the world: Y = Forward X = Right Z = Up To move the Box in the world space and change its position, proceed with the following steps: Click on the Select and Move icon in the EditMode ToolBar (1 for the keyboard shortcut). Click on the X arrow and drag your mouse up and down relative to the arrow's direction. Releasing the mouse button will confirm the location change. You may move objects either on a single axis, or two at once by clicking and dragging on the plane that is adjacent to any two axes: X + Y, X + Z, or Y + Z. To rotate an object, do the following: Select Box (if you haven't done so already). Click on the Select and Rotate icon in the EditMode ToolBar (2 for the keyboard shortcut). 
Click on the Z arrow (it now has a sphere at the end of it) and drag your mouse from side to side to roll the object relative to the axis. Releasing the mouse button will confirm the rotation change. You cannot rotate an object along multiple axes. To scale an object, do the following: Select Box (if you haven't done so already). Click on the Select and Scale icon in the EditMode ToolBar (3 for the keyboard shortcut). Click on the CENTER box and drag your mouse up and down to scale on all three axes at once. Releasing the mouse button will confirm the scale change. It is possible to scale on just one axis or two axes; however, this is highly discouraged as Non-Uniform Scaling will result in broken physical meshes for that object. If you require an object to be scaled up, we recommend you only scale uniformly on all three axes! There's more... Here are some additional ways to manipulate objects within the world. Local position and rotation To make position or rotation refinement a bit easier, you might want to try changing how the widget will position or rotate your object by changing it to align itself relative to the object's pivot. To do this, there is a drop-down menu in the EditMode ToolBar that will have the option to select Local. This is called Local Direction. This setup might help to position your object after you have rotated it. Grid and angle snaps To aid in positioning of non-organic objects, such as buildings or roads, you may wish to turn on the Snap to Grid option. Turning this feature on will allow you to move the object on a grid (currently relative to its location). To change the grid spacing, click the drop-down arrow next to the number to change the spacing (grid spacing is in meters). Angle Snaps is found immediately to the right of the Grid Snaps. Turning this feature on will allow you to rotate an object by every five degrees. Ctrl + Shift + Click Even though it is a Hotkey, to many developers this hotkey is extremely handy for initial placement of objects. It allows you to move the object quickly to any point on any physical surface relative to your Perspective View.   Utilizing the layers for multiple developer collaboration A common question that is usually asked about the CryENGINE is how does one developer work on the same level as another at the same time. The answer is—Layers. In this recipe, we will show you how you may be able to utilize the layer system for not only your own organization, but to set up external layers for other developers to work on in parallel. Getting ready Have My_Level open inside of Sandbox. Review the Navigating a level with the Sandbox Camera to get familiar with the Perspective View. Have the Rollup Bar open and ready. Review the Placing the objects in the world (place at least two objects) recipe. How to do it... For this recipe, we will assume that you have your own repository for your project or some means to send your work to others in your team. First, start by placing down two objects on the map. For the sake of the recipe, we shall refer to them as Box1 and Box2. After you've placed both boxes, open the Rollup Bar and bring up the Layers tab. Create a new layer by clicking the New Layer button (paper with a + symbol). A New Layer dialog box will appear. Give it the following parameters: Name = ActionBubble_01 Visible = True External = True Frozen = False Export To Game = True Now select Box1 and open the Objects tab within the Rollup Bar. 
From here, you will see the main rollup of this object, with values such as Name, Helper Size, MTL, and Minimal Spec. But also in this rollup you will see a button for layers (it should be labelled as Main). Clicking on that button will show you a list of all other available layers. Clicking again on another layer that is not highlighted will move this object to that layer (do this now by clicking on ActionBubble_01). Now save your level by clicking File | Save. Now in your build folder, go to the following location: ...\Game\Levels\My_Level. From here you will notice a new folder called Layers. Inside that folder, you will see ActionBubble_01.lyr. This layer shall be the layer that your other developers will work on. In order for them to be able to do so, you must first commit My_Level.cry and the Layers folder to your repository (it is easiest to commit the entire folder). After doing so, you may now have your other developer make changes to that layer by moving Box1 to another location. Then have them save the map. Have them commit only the ActionBubble_01.lyr to the repository. Once you have retrieved it from the updated repository, you will notice that Box1 will have moved after you have re-opened My_Level.cry in the Editor with the latest layer. How it works... External layers are the key to this whole process. Once a .cry file has been saved to reference an external layer, it will access the data inside of those layers upon loading the level in Sandbox. It is good practice to assign a Map owner who will take care of the .cry file. As this is the master file, only one person should be in charge of maintaining it by creating new layers if necessary. There's more... Here is a list of limitations of what external layers cannot hold. External layer limitations Even though any entity/object you place in your level can be placed into external layers, it is important to note that there are some items that cannot be placed inside of these layers. Here is a list of the common items that are solely owned by the .cry file:

Terrain Heightmap, Unit Size, Max Terrain Height, Holes, and Textures
Vegetation
Environment Settings (unless forced through Game Logic)
Ocean Height
Time of Day Settings (unless forced through Game Logic)
Baked AI Markup (The owner of the .cry file must regenerate AI if new markup is created on external layers)
Minimap Markers

Away3D: Detecting Collisions
Packt, 02 Jun 2011, 6 min read

Away3D 3.6 Cookbook Over 80 practical recipes for creating stunning graphics and effects with the fascinating Away3D engine

Introduction

In this article, you are going to learn how to check intersection (collision) between 3D objects.

Detecting collisions between objects in Away3D

This recipe will teach you the fundamentals of collision detection between objects in 3D space. We are going to learn how to perform a few types of intersection tests. These tests can hardly be called collision detection in their physical meaning, as we are not going to deal here with any simulation of collision reaction between two bodies. Instead, the goal of the recipe is to understand the collision tests from a mathematical point of view. Once you are familiar with intersection test techniques, the road to creating physical collision simulations is much shorter. There are many types of intersection tests in mathematics. These include some simple tests such as AABB (axially aligned bounding box) or Sphere - Sphere, and more complex ones such as Triangle - Triangle, Ray - Plane, Line - Plane, and more. Here, we will cover only those which we can achieve using built-in Away3D functionality. These are AABB and AABS (axially aligned bounding sphere) intersections, as well as Ray-AABS and the more complex Ray-Triangle. The rest of the methods are outside of the scope of this article and you can learn about applying them from various 3D math resources.

Getting ready

Set up an Away3D scene in a new file extending AwayTemplate. Give the class the name CollisionDemo.

How to do it...

In the following example, we perform an intersection test between two spheres based on their bounding box volumes. You can move one of the spheres along X and Y with the arrow keys onto the second sphere. When the objects overlap, the intersected (static) sphere glows with a red color.

AABB test: CollisionDemo.as

package {
  public class CollisionDemo extends AwayTemplate {
    private var _objA:Sphere;
    private var _objB:Sphere;
    private var _matA:ColorMaterial;
    private var _matB:ColorMaterial;
    private var _gFilter:GlowFilter = new GlowFilter();

    public function CollisionDemo() {
      super();
      _cam.z = -500;
    }

    override protected function initMaterials():void {
      _matA = new ColorMaterial(0xFF1255);
      _matB = new ColorMaterial(0x00FF11);
    }

    override protected function initGeometry():void {
      _objA = new Sphere({radius:30, material:_matA});
      _objB = new Sphere({radius:30, material:_matB});
      _view.scene.addChild(_objA);
      _view.scene.addChild(_objB);
      _objB.ownCanvas = true;
      _objA.debugbb = true;
      _objB.debugbb = true;
      _objA.transform.position = new Vector3D(-80, 0, 400);
      _objB.transform.position = new Vector3D(80, 0, 400);
    }

    override protected function initListeners():void {
      super.initListeners();
      stage.addEventListener(KeyboardEvent.KEY_DOWN, onKeyDown);
    }

    override protected function onEnterFrame(e:Event):void {
      super.onEnterFrame(e);
      if (AABBTest()) {
        _objB.filters = [_gFilter];
      } else {
        _objB.filters = [];
      }
    }

    private function AABBTest():Boolean {
      if (_objA.parentMinX > _objB.parentMaxX || _objB.parentMinX > _objA.parentMaxX) {
        return false;
      }
      if (_objA.parentMinY > _objB.parentMaxY || _objB.parentMinY > _objA.parentMaxY) {
        return false;
      }
      if (_objA.parentMinZ > _objB.parentMaxZ || _objB.parentMinZ > _objA.parentMaxZ) {
        return false;
      }
      return true;
    }

    private function onKeyDown(e:KeyboardEvent):void {
      switch (e.keyCode) {
        case 38: _objA.moveUp(5); break;
        case 40: _objA.moveDown(5); break;
        case 37: _objA.moveLeft(5); break;
        case 39: _objA.moveRight(5); break;
        case 65: _objA.rotationZ -= 3; break;
        case 83: _objA.rotationZ += 3; break;
        default:
      }
    }
  }
}

In this screenshot, the green sphere bounding box has a red glow while it is being intersected by the red sphere's bounding box:

How it works...

Testing intersections between two AABBs is really simple. First, we need to acquire the boundaries of the object for each axis. The box boundaries for each axis of any Object3D are defined by a minimum value and a maximum value for that axis. So let's look at the AABBTest() method. Axis boundaries are defined by parentMin and parentMax for each axis, which are accessible for each object extending Object3D. You can see that Object3D also has minX, minY, minZ and maxX, maxY, maxZ. These properties define the bounding box boundaries too, but in object space, and therefore aren't helpful in AABB tests between two objects. So in order for a given bounding box to intersect the bounding box of another object, three conditions have to be met for each of them:

The minimal X coordinate of each of the objects should be less than the maximum X of the other.
The minimal Y coordinate of each of the objects should be less than the maximum Y of the other.
The minimal Z coordinate of each of the objects should be less than the maximum Z of the other.

If one of the conditions is not met for either of the two AABBs, there is no intersection. The preceding algorithm is expressed in the AABBTest() function:

private function AABBTest():Boolean {
  if (_objA.parentMinX > _objB.parentMaxX || _objB.parentMinX > _objA.parentMaxX) {
    return false;
  }
  if (_objA.parentMinY > _objB.parentMaxY || _objB.parentMinY > _objA.parentMaxY) {
    return false;
  }
  if (_objA.parentMinZ > _objB.parentMaxZ || _objB.parentMinZ > _objA.parentMaxZ) {
    return false;
  }
  return true;
}

As you can see, if all of the conditions we listed previously are met, the execution will skip all the return false blocks and the function will return true, which means the intersection has occurred.

There's more...

Now let's take a look at the rest of the methods for collision detection, which are AABS-AABS, Ray-AABS, and Ray-Triangle.

AABS test

The intersection test between two bounding spheres is even simpler to perform than between AABBs. The algorithm works as follows: if the distance between the centers of two spheres is less than the sum of their radii, then the objects intersect. Piece of cake, isn't it? Let's implement it within the code. The AABS collision algorithm gives us the best performance. While there are many other, even more sophisticated approaches, try to use this test if you are not after extreme precision. (Most casual games can live with this approximation.) First, let's switch the debugging mode of _objA and _objB to bounding spheres. In the last application we built, go to the initGeometry() function and change:

_objA.debugbb = true;
_objB.debugbb = true;

To:

_objA.debugbs = true;
_objB.debugbs = true;

Next, we add the function to the class which implements the algorithm we described previously:

private function AABSTest():Boolean {
  var dist:Number = Vector3D.distance(_objA.position, _objB.position);
  if (dist <= (_objA.radius + _objB.radius)) {
    return true;
  }
  return false;
}

Finally, we add the call to the method inside onEnterFrame():

if (AABSTest()) {
  _objB.filters = [_gFilter];
} else {
  _objB.filters = [];
}

Each time AABSTest() returns true, the intersected sphere is highlighted with a red glow:
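The Ray-AABS test mentioned in the introduction is not shown in this excerpt, but the mathematics behind it is straightforward. The following is a minimal sketch of a ray-sphere test written in plain Python rather than ActionScript; it uses no Away3D API, and the function name and parameters are invented for illustration:

def ray_intersects_sphere(origin, direction, center, radius):
    # direction is assumed to be a normalized 3-component vector
    lx = center[0] - origin[0]
    ly = center[1] - origin[1]
    lz = center[2] - origin[2]
    # project the origin-to-center vector onto the ray direction
    tca = lx * direction[0] + ly * direction[1] + lz * direction[2]
    # squared distance from the sphere center to the infinite line through the ray
    d2 = (lx * lx + ly * ly + lz * lz) - tca * tca
    if d2 > radius * radius:
        return False  # the line misses the sphere entirely
    # tca < 0 means the center lies behind the ray origin;
    # the ray still hits if the origin itself is inside the sphere
    return tca >= 0 or (lx * lx + ly * ly + lz * lz) <= radius * radius

The Ray-Triangle test builds on the same idea: the ray is first intersected with the triangle's plane, and the hit point is then checked against the triangle's edges.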

Building a Simple Boat
Packt, 25 Aug 2014, 15 min read

It's time to get out your hammers, saws, and tape measures, and start building something. In this article, by Gordon Fisher, the author of Blender 3D Basics Beginner's Guide Second Edition, you're going to put your knowledge of building objects to practical use, as well as your knowledge of using the 3D View to build a boat. It's a simple but good-looking and water-tight craft that has three seats, as shown in the next screenshot. You will learn about the following topics: Using box modeling to convert a cube into a boat Employing box modeling's power methods, extrusion, and subdividing edges Joining objects together into a single object Adding materials to an object Using a texture for greater detail (For more resources related to this topic, see here.) Turning a cube into a boat with box modeling You are going to turn the default Blender cube into an attractive boat, similar to the one shown in the following screenshot. First, you should know a little bit about boats. The front is called the bow, and is pronounced the same as bowing to the Queen. The rear is called the stern or the aft. The main body of the boat is the hull, and the top of the hull is the gunwale, pronounced gunnel. You will be using a technique called box modeling to make the boat. Box modeling is a very standard method of modeling. As you might expect from the name, you start out with a box and sculpt it like a piece of clay to make whatever you want. There are three methods that you will use in most of the instances for box modeling: extrusion, subdividing edges, and moving, or translating vertices, edges, and faces. Using extrusion, the most powerful tool for box modeling Extrusion is similar to turning dough into noodles, by pushing them through a die.  Blender pushed out the edge and connected it to the old edge with a face. While extruding a face, the face gets pushed out and gets connected to the old edges by new faces. Time for action – extruding to make the inside of the hull The first step here is to create an inside for the hull. You will extrude the face without moving it, and shrink it a bit. This will create the basis for the gunwale: Create a new file and zoom into the default cube. Select Wireframe from the Viewport Shading menu on the header. Press the Tab key to go to Edit Mode. Choose Face Selection mode from the header. It is the orange parallelogram. Select the top face with the RMB. Press the E key to extrude the face, then immediately press Enter. Move the mouse away from the cube. Press the S key to scale the face with the mouse. While you are scaling it, press Shift + Ctrl, and scale it to 0.9. Watch the scaling readout in the 3D View header. Press the NumPad 1 key to change to the Front view and press the 5 key on the NumPad to change to the Ortho view. Move the cursor to a place a little above the top of the cube. Press E, and Blender will create a new face and let you now move it up or down. Move it down. When you are close to the bottom, press the Ctrl + Shift buttons, and move it down until the readout on the 3D View header is 1.9. Click the LMB to release the face. It will look like the following screenshot: What just happened? You just created a simple hull for your boat. It's going to look better, but at least you got the thickness of the hull established. Pressing the E key extrudes the face, making a new face and sides that connect the new face with the edges used by the old face. You pressed Enter immediately after the E key the first time, so that the new face wouldn't get moved. 
Then, you scaled it down a little to establish the thickness of the hull. Next, you extruded the face again. As you watched the readout, did you notice that it said D: -1.900 (1.900) normal? When you extrude a face, Blender is automatically set up to move the face along its normal, so that you can move it in or out, and keep it parallel with the original location. For your reference, the 4909_05_making the hull1.blend file, which has been included in the download pack, has the first extrusion. The 4909_05_making the hull2.blend file has the extrusion moved down. The 4909_05_making the hull3.blend file has the bottom and sides evened out. Using normals in 3D modeling What is a normal? The normal is an unseen ray that is perpendicular to a face. This is illustrated in the following image by the red line: Blender has many uses for the normal: It lets Blender extrude a face and keep the extruded face in the same orientation as the face it was extruded from This also keeps the sides straight and tells Blender in which direction a face is pointing Blender can also use the normal to calculate how much light a particular face receives from a given lamp, and in which direction lights are pointed Modeling tip If you create a 3D model and it seems perfect except that there is this unexplained hole where a face should have been, you may have a normal that faces backwards. To help you, Blender can display the normals for you. Time for action – displaying normals Displaying the normal does not affect the model, but sometimes it can help you in your modeling to see which way your faces are pointing: Press Ctrl + MMB and use the mouse to zoom out so that you can see the whole cube. In the 3D View, press N to get the Properties Panel. Scroll down in the Properties Panel until you get to the Mesh Display subpanel. Go down to where it says Normals. There are two buttons like the edge select and face select buttons in the 3D View header. Click on the button with a cube and an orange rhomboid, as outlined in the next screenshot, the Face Select button, to choose selecting the normals of the faces. Beside the Face Select button, there is a place where you can adjust the displayed size of the normal, as shown in the following screenshot. The displayed normals are the blue lines. Set Normals Size to 0.8. In the following image, I used the cube as it was just before you made the last extrusion so that it displays the normals a little better. Press the MMB, use the mouse to rotate your view of the cube, and look at the normals. Click on the Face Select button in the Mesh Display subpanel again to turn off the normals display. What just happened? To see the normals, you opened up the Properties Panel and instructed Blender to display them. They are displayed as little blue lines, and you can create them in whatever size that works best for you. Normals, themselves, have no length, just a direction. So, changing this setting does not affect the model. It's there for your use when you need to analyze the problems with the appearance of your model. Once you saw them, you turned them off. For your reference, the 4909_05_displaying normals.blend file has been included in the download pack. It has the cube with the first extrusion, and the normal display turned on. Planning what you are going to make It always helps to have an idea in mind of what you want to build. 
You don't have to get out caliper micrometers and measure every last little detail of something you want to model, but you should at least have some pictures as reference, or an idea of the actual dimensions of the object that you are trying to model. There are many ways to get these dimensions, and we are going to use several of these as we build our boats. Choosing which units to model in I went on the Internet and found the dimensions of a small jon boat for fishing. You are not going to copy it exactly, but knowing what size it should be will make the proportions that you choose more convincing. As it happened, it was an American boat, and the size was given in feet and inches. Blender supports three kinds of units for measuring distance: Blender units, Metric units, and Imperial units. Blender units are not tied to any specific measurement in the real world as Metric and Imperial units are. To change the units of measurement, go to the Properties window, to the right of the 3D View window, as shown in the following image, and choose the Scene button. It shows a light, a sphere, and a cylinder. In the following image, it's highlighted in blue. In the second subpanel, the Units subpanel lets you select which units you prefer. However, rather than choosing between Metric or Imperial, I decided to leave the default settings as they were. As the measurements that I found were Imperial measurements, I decided to interpret the Imperial measurements as Blender measurements, equating 1 foot to 1 Blender unit, and each inch as 0.083 Blender units. If I have an Imperial measurement that is expressed in inches, I just divide it by 12 to get the correct number in Blender units. The boat I found on the Internet is 9 feet and 10 inches long, 56 inches wide at the top, 44 inches wide at the bottom, and 18 inches high. I converted them to decimal Blender units or 9.830 long, 4.666 wide at the top, 3.666 wide at the bottom, and 1.500 high. Time for action – making reference objects One of the simplest ways to see what size your boat should be is to have boxes of the proper size to use as guides. So now, you will make some of these boxes: In the 3D View window, press the Tab key to get into Object Mode. Press A to deselect the boat. Press the NumPad 3 key to get the side view. Make sure you are in Ortho view. Press the 5 key on the NumPad if needed. Press Shift + A and choose Mesh and then Cube from the menu. You will use this as a reference block for the size of the boat. In the 3D View window Properties Panel, in the Transform subpanel, at the top, click on the Dimensions button, and change the dimensions for the reference block to 4.666 in the X direction, 9.83 in the Y direction, and 1.5 in the Z direction. You can use the Tab key to go from X to Y to Z, and press Enter when you are done. Move the mouse over the 3D View window, and press Shift + D to duplicate the block. Then press Enter. Press the NumPad 1 key to get the front view. Press G and then Z to move this block down, so its top is in the lower half of the first one. Press S, then X, then the number 0.79, and then Enter. This will scale it to 79 percent along the X axis. Look at the readout. It will show you what is happening. This will represent the width of the boat at the bottom of the hull. Press the MMB and rotate the view to see what it looks like. What just happened? To make accurate models, it helps to have references. 
What just happened?

To make accurate models, it helps to have references. For this boat that you are building, you don't need to copy another boat exactly; the basic dimensions are enough. You got out of Edit Mode and deselected the boat so that you could work on something else without affecting it. Then, you made a cube and scaled it to the dimensions of the boat at the top of the hull, to use as a reference block. You then copied the reference block and scaled the copy down in X for the width of the boat at the bottom of the hull, as shown in the following image:

Reference objects, like reference blocks and reference spheres, are handy tools. They are easy to make and have a lot of uses.

For your reference, the 4909_05_making reference objects.blend file has been included in the download pack. It has the cube and the two reference blocks.

Sizing the boat to the reference blocks

Now that the reference blocks have been made, you can use them to guide you when making the boat.

Time for action – making the boat the proper length

Now that you've made the reference blocks the right size, it's time to make the boat the same dimensions as the blocks:

1. Change to the side view by pressing the NumPad 3 key.
2. Press Ctrl + MMB and the mouse to zoom in, until the reference blocks fill almost all of the 3D View. Press Shift + MMB and the mouse to re-center the reference blocks.
3. Select the boat with the RMB.
4. Press the Tab key to go into Edit Mode, and then choose the Vertex Select mode button from the 3D View header.
5. Press A to deselect all vertices. Then, select the boat's vertices on the right-hand side of the 3D View. Press B to use the border select, press C to use the circle select mode, or press Ctrl + LMB for the lasso select.
6. When the vertices are selected, press G and then Y, and move the vertices to the right with the mouse until they line up with the right-hand side of the reference blocks. Press the LMB to drop the vertices in place.
7. Press A to deselect all the vertices, select the boat's vertices on the left-hand side of the 3D View, and move them to the left until they line up with the left-hand side of the reference blocks, as shown in the following image:

What just happened?

You made sure that the screen was properly set up for working by getting into the side view in the Ortho mode. Next, you selected the boat, got into Edit Mode, and got ready to move the vertices. Then, you made the boat the proper length by moving the vertices so that they lined up with the reference blocks.

For your reference, the 4909_05_proper length.blend file has been included in the download pack. It has the bow and stern properly sized.
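Eyeballing vertices against the blocks works well in practice, but the same alignment can also be done numerically. The following is a hedged sketch using the bmesh module; it assumes the boat is the active object, that you are in Edit Mode, and that the boat is centered on the origin in its own local coordinates:

import bmesh, bpy

me = bpy.context.active_object.data
bm = bmesh.from_edit_mesh(me)

half_length = 9.83 / 2.0          # half of the reference block's Y dimension
for v in bm.verts:
    if v.co.y > 0:
        v.co.y = half_length      # snap the stern-end vertices
    elif v.co.y < 0:
        v.co.y = -half_length     # snap the bow-end vertices

bmesh.update_edit_mesh(me)        # push the changes back to the mesh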
Time for action – making the boat the proper width and height

Making the boat the right length was pretty easy. Setting the width and height requires a few more steps, but the method is very similar:

1. Press the NumPad 1 key to change to the front view. Use Ctrl + MMB to zoom into the reference blocks. Use Shift + MMB to re-center the boat so that you can see all of it.
2. Press A to deselect all the vertices, and using any method, select all of the vertices on the left of the 3D View.
3. Press G and then X to move the left-side vertices in X, until they line up with the wider reference block, as shown in the following image. Press the LMB to release the vertices.
4. Press A to deselect all the vertices. Select only the right-hand vertices, with a method different from the one you used to select the left-hand vertices. Then, press G and then X to move them in X, until they line up with the right side of the wider reference block. Press the LMB when they are in place.
5. Deselect all the vertices. Select only the top vertices, and press G and then Z to move them in the Z direction, until they line up with the top of the wider reference block.
6. Deselect all the vertices. Now, select only the bottom vertices, and press G and then Z to move them in the Z direction, until they line up with the bottom of the wider reference block, as shown in the following image:
7. Deselect all the vertices. Next, select only the bottom vertices on the left. Press G and then X to move them in X, until they line up with the narrower reference block. Then, press the LMB.
8. Finally, deselect all the vertices, and select only the bottom vertices on the right. Press G and then X to move them in the X axis, until they line up with the narrower reference block, as shown in the following image. Press the LMB to release them:
9. Press the NumPad 3 key to switch to the side view again. Use Ctrl + MMB to zoom out if you need to.
10. Press A to deselect all the vertices. Select only the bottom vertices on the right, as in the following illustration. You are going to make this the stern end of the boat. Press G and then Y to move them left in the Y axis just a little bit, so that the stern is not completely straight up and down. Press the LMB to release them.
11. Now, select only the bottom vertices on the left, as highlighted in the following illustration. Make this the bow end of the boat. Move them right in the Y axis just a little bit. Go a bit further than you did for the stern, so that the angle is similar, as shown here — maybe about 1.3 or 1.4. It's your call.

What just happened?

You used the reference blocks to guide yourself in moving the vertices into the shape of a boat. You adjusted the width and the height, and angled the hull. Finally, you angled the stern and the bow. It floats, but it's still a bit boxy.

For your reference, the 4909_05_proper width and height1.blend file has been included in the download pack. It has both sides aligned with the wider reference block. The 4909_05_proper width and height2.blend file has the bottom vertices aligned to the narrower reference block. The 4909_05_proper width and height3.blend file has the bow and stern finished.
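The stern and bow angles can be sketched in code as well. The snippet below, again a bmesh illustration under the same assumptions as before, finds the bottom row of vertices and shifts them in Y; the 0.5 and 1.3 offsets are only placeholders, since the text leaves the exact amounts to your judgment:

import bmesh, bpy

me = bpy.context.active_object.data
bm = bmesh.from_edit_mesh(me)

bottom_z = min(v.co.z for v in bm.verts)
for v in bm.verts:
    if abs(v.co.z - bottom_z) < 1e-4:   # bottom row of vertices only
        if v.co.y > 0:
            v.co.y -= 0.5               # nudge the stern in a little
        else:
            v.co.y += 1.3               # pull the bow in a bit further

bmesh.update_edit_mesh(me)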