
How-To Tutorials - Game Development

370 Articles

Your first Unity project

Packt
15 Jun 2017
11 min read
In this article by Tommaso Lintrami, the author of the book Unity 2017 Game Development Essentials - Third Edition, we will see that when starting out in game development, one of the best ways to learn the various parts of the discipline is to prototype your idea. Unity excels at assisting you with this, with its visual scene editor and public member variables that form settings in the Inspector. In this article, you will learn about:

- Creating a new project in Unity
- Working with GameObjects in the Scene view and Hierarchy

Unity comes in two main forms: a standard, free download and a paid Pro developer license. We'll stick to using features that users of the standard free edition have access to. If you're launching Unity for the very first time, you'll be presented with a Unity demonstration project. While this is useful for studying the best practices behind high-end projects, some of its assets and scripting may feel daunting if you're starting out, so we'll leave it behind and start from scratch. Take the following steps to set up your Unity project:

1. In Unity, go to File | New Project and you will be presented with the Project Wizard (the following screenshot shows the Mac version). From here, select the NEW tab and the 3D project type. If at any time you wish to launch Unity and be taken directly to the Project Wizard, simply launch the Unity Editor application and immediately hold the Alt key (Mac and Windows). This can be set as the default launch behavior in the Unity preferences.
2. Click the Set button and choose where you would like to save your new Unity project folder on your hard drive. The new project has been named UGDE after this book, and stored on the desktop for easy access.
3. The Project Wizard also offers the ability to import many asset packages into your new project, provided by Unity Technologies free for use in your game development. Comprising scripts, ready-made objects, and other artwork, these packages are a useful way to get started on various types of new project. You can also import these packages at any time from the Assets menu within Unity, by selecting Import Package and choosing from the list of available packages, and you can import a package from anywhere on your hard drive by choosing the Custom Package option there. This import method is also used to share assets with others, and when receiving assets you have downloaded through the Asset Store (see Window | Asset Store to view this part of Unity later).
4. From the list of packages to be imported, select the following (as shown in the previous image): Characters, Cameras, Effects, TerrainAssets, Environment.
5. When you are happy with your selection, choose Create Project at the bottom of the dialog window. Unity will then create your new project, and you will see progress bars representing the import of the selected packages.

A basic prototyping environment

To create a simple environment for prototyping some game mechanics, we'll begin with a basic series of objects with which we can introduce gameplay that allows the player to aim and shoot at a wall of primitive cubes. When complete, your prototyping environment will feature a floor comprised of a cube primitive, a main camera through which to view the 3D world, and a point light set up to highlight the area where our gameplay will be introduced.
It will look something like the following screenshot:

Setting the scene

As all new scenes come with a Main Camera object by default, we'll begin by adding a floor for our prototyping environment. On the Hierarchy panel, click the Create button, and from the drop-down menu, choose Cube. The items listed in this drop-down menu can also be found in the GameObject | Create Other top menu. You will now see an object in the Hierarchy panel called Cube. Select it and press Return (Mac)/F2 (Windows), or double-click the object name slowly (both platforms), to rename it; type in Floor and press Return (both platforms) to confirm the change.

For consistency's sake, we will begin our creation at world zero, the center of the 3D environment we are working in. To ensure that the floor cube you just added is at this position, make sure it is still selected in the Hierarchy panel, then check the Transform component on the Inspector panel and ensure that the Position values for X, Y, and Z are all 0. If not, set them all to zero, either by typing them in or by clicking the cog icon to the right of the component and selecting Reset Position from the pop-out menu. Next, we'll turn the cube into a floor by stretching it out in the X and Z axes: in the X and Z values under Scale in the Transform component, type a value of 100, leaving Y at a value of 1.

Adding simple lighting

Now we will highlight part of our prototyping floor by adding a point light. Select the Create button on the Hierarchy panel (or go to GameObject | Create Other) and choose Point Light. Position the new point light at (0, 20, 0) using the Position values in the Transform component, so that it sits 20 units above the floor. You will notice that the floor is out of range of the light, so expand the range by dragging the yellow dot handles that intersect the outline of the point light in the Scene view until the Range value shown in the Light component in the Inspector reaches around 40 and the light lights up part of the floor object. Bear in mind that most components and the visual editing tools in the Scene view are inextricably linked, so altering values such as Range in the Light component in the Inspector will update the visual display in the Scene view as you type, and the values become constant as soon as you press Return to confirm them.

Another brick in the wall

Now let's make a wall of cubes that we can launch a projectile at. We'll do this by creating a single master brick, adding components as necessary, and then duplicating it until our wall is complete.

Building the master brick

In order to create a template for all of our bricks, we'll start by creating a master object, something to create clones of. This is done as follows:

1. Click the Create button at the top of the Hierarchy, and select Cube. Position it at (0, 1, 0) using the Position values in the Transform component on the Inspector. Then focus your view on this object by ensuring it is still selected in the Hierarchy, hovering your cursor over the Scene view, and pressing F.
2. Add physics to your Cube object by choosing Component | Physics | Rigidbody from the top menu. Your object is now a Rigidbody: it has mass and gravity, and it reacts realistically to other objects through the physics engine in the 3D world.
3. Finally, we'll color this object by creating a material. Materials are a way of applying color and imagery to our 3D geometry.
To make a new one, go to the Create button on the Project panel and choose Material from the drop-down menu. Press Return (Mac) or F2 (Windows) to rename this asset to Red instead of the default name New Material. You can also right-click in the Materials folder of the Project panel and choose Create | Material, or use the editor's main menu: Assets | Create | Material.

With this material selected, the Inspector shows its properties. Click on the color block to the right of Main Color [see image label 1] to open the Color Picker [see image label 2]. This will differ in appearance depending on whether you are using Mac or Windows. Simply choose a shade of red and close the window; the Main Color block should now have been updated. To apply this material, drag it from the Project panel and drop it either onto the cube as seen in the Scene view, or onto the name of the object in the Hierarchy. The material is then applied to the Mesh Renderer component of the object and appears after the object's other components in the Inspector. Most importantly, your cube should now be red! Note that adjusting settings using the preview of this material on any object will edit the original asset, as the preview is simply a link to the asset itself, not a newly editable instance.

Now that our cube has a color and physics applied through the Rigidbody component, it is ready to be duplicated and act as one brick in a wall of many. Before we do that, let's have a quick look at the physics in action. With the cube still selected, set the Y Position value to 15 and the X Rotation value to 40 in the Transform component in the Inspector. Press Play at the top of the Unity interface and you should see the cube fall and then settle, having landed at an angle. The shortcut for Play is Ctrl + P on Windows and Command + P on Mac. Press Play again to stop testing; do not press Pause, as this only temporarily halts the test, and changes made to the scene thereafter will not be saved. Set the Y Position value for the cube back to 1, and set the X Rotation back to 0. Now that we know our brick behaves correctly, let's start creating a row of bricks to form our wall.

And snap!—It's a row

To help you position objects, Unity allows you to snap to specific increments when dragging; these increments can be redefined by going to Edit | Snap Settings. To use snapping, hold down Command (Mac) or Ctrl (Windows) when using the Translate tool (W) to move objects in the Scene view. So, to start building the wall, duplicate the cube brick we already have using the shortcut Command + D (Mac) or Ctrl + D (Windows), then drag the red axis handle while holding the snapping key. This snaps one unit at a time by default, so snap-move your cube one unit in the X axis so that it sits next to the original cube, as shown in the following screenshot. Repeat this procedure of duplicating and snap-dragging until you have a row of 10 cubes in a line. This is the first row of bricks; to simplify building the rest, we will group this row under an empty object and then duplicate the parent.

Vertex snapping

The basic snapping technique used here works well because our cubes have a generic scale of 1, but when positioning more detailed, irregularly shaped objects, you should use vertex snapping instead. To do this, ensure that the Translate tool is selected and hold down V on the keyboard. Now hover your cursor over a vertex point on your selected object and drag to any other vertex of another object to snap to it.
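If you prefer to set up the master brick from code, the editor steps above translate directly into the Unity scripting API. The following is a minimal illustrative sketch, not from the book itself; it assumes the built-in Standard shader is available for the material:

    using UnityEngine;

    public class MasterBrickSetup : MonoBehaviour
    {
        void Start()
        {
            // Create the cube primitive that will serve as the master brick.
            GameObject brick = GameObject.CreatePrimitive(PrimitiveType.Cube);
            brick.name = "Brick";
            brick.transform.position = new Vector3(0f, 1f, 0f);

            // Add physics, mirroring Component | Physics | Rigidbody.
            brick.AddComponent<Rigidbody>();

            // Create a red material, mirroring the Red asset made in the Project panel.
            Material red = new Material(Shader.Find("Standard"));
            red.color = Color.red;
            brick.GetComponent<MeshRenderer>().material = red;
        }
    }

Attaching this to any object in the scene and pressing Play produces the same red, physics-enabled cube that the editor steps create by hand.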
Grouping and duplicating with empty objects

Create an empty object by choosing GameObject | Create Empty from the top menu, then position it at (4.5, 0.5, -1) using the Transform component in the Inspector. Rename it from the default name GameObject to CubeHolder. Now select all of the cube objects in the Hierarchy by selecting the top one, holding the Shift key, and then selecting the last. Drag this list of cubes in the Hierarchy onto the empty object named CubeHolder to make it their parent object. The Hierarchy should now look like this:

You'll notice that the parent empty object now has an arrow to the left of its object title, meaning you can expand and collapse it. To save space in the Hierarchy, click the arrow now to hide all of the child objects, and then re-select the CubeHolder.

Now that we have a complete row made and parented, we can simply duplicate the parent object and use snap-dragging to lift a whole new row up in the Y axis. Use the duplicate shortcut (Command/Ctrl + D) as before, then select the Translate tool (W) and use the snap-drag technique outlined earlier (hold Command on Mac, Ctrl on Windows) to lift the new row by 1 unit in the Y axis by pulling the green axis handle. Repeat this procedure to create eight rows of bricks in all, one on top of the other. It should look something like the following screenshot. Note that in the image, all CubeHolder row objects are selected in the Hierarchy.

Summary

In this article, you became familiar with the basics of using the Unity interface and working with GameObjects.
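As a recap of the duplicate-and-parent workflow, here is an illustrative sketch (an assumption for demonstration, not the book's code) that builds the full wall in one go, reusing the hypothetical master-brick setup shown earlier:

    using UnityEngine;

    public class WallBuilder : MonoBehaviour
    {
        public GameObject brickPrefab;   // assign the master brick here
        public int columns = 10;
        public int rows = 8;

        void Start()
        {
            // Empty parent object, mirroring GameObject | Create Empty.
            GameObject holder = new GameObject("CubeHolder");

            // Duplicate the brick in 1-unit steps, as snap-dragging does by hand.
            for (int y = 0; y < rows; y++)
            {
                for (int x = 0; x < columns; x++)
                {
                    Vector3 pos = new Vector3(x, 1f + y, 0f);
                    GameObject brick = Instantiate(brickPrefab, pos, Quaternion.identity);
                    brick.transform.parent = holder.transform;
                }
            }
        }
    }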


Working with Unity Variables to script powerful Unity 2017 games

Amarabha Banerjee
23 May 2018
12 min read
In this tutorial, you will learn how to work with the different variables available in the Unity 2017 platform. We will show you how to use these variables through use cases in order to script powerful Unity games. This article is an excerpt from the book Learning C# by Developing Games with Unity 2017, written by Micael DaGraca and Greg Lukosek.

Unity, the most popular game engine of our generation, is a preferred choice among game developers due to the flexibility it provides for coding and scripting a game in C#. To understand and leverage the power of C# in your games, it is utterly necessary to get a proper understanding of how C# coding works. We are going to show you exactly that in the sections below.

Writing C# statements properly

When you do normal writing, it's in the form of a sentence, with a period used to end the sentence. When you write a line of code, it's called a statement, with a semicolon used to end the statement. This is necessary because the console reads the code one line at a time, and the semicolon tells the console that the line of code is over and that it can jump to the next line. (This happens so fast that it looks like the computer is reading all of the lines at the same time, but it isn't.) When we start learning how to code, forgetting this detail is very common, so don't forget to check for this error if the code isn't working.

The code for a C# statement does not have to be on a single line, as shown in the following example:

    public int number1 = 2;

The statement can be on several lines. Whitespace and carriage returns are ignored, so, if you really want to, you can write it as follows:

    public
    int
    number1
    = 2
    ;

However, I do not recommend writing your code like this, because code formatted like the preceding example is terrible to read. Nevertheless, there will be times when you'll have to write statements that are longer than one line. Unity won't care; it just needs to see the semicolon at the end.

Understanding component properties in Unity's Inspector

GameObjects have components that make them behave in a certain way. For instance, select Main Camera and look at the Inspector panel. One of the components is the Camera. Without that component, it would cease being a camera. It would still be a GameObject in your scene, just no longer a functioning camera.

Variables become component properties

When we refer to components, we are basically referring to the available functions of a GameObject. For example, the human body has many functions, such as talking, moving, and observing. Now let's say that we want the human body to move faster. What is the function linked to that action? Movement. So, in order to make the body move faster, we would need to create a script that had access to the movement component, and we would then use that to make the body move faster. Just as in real life, different GameObjects can have different components; for example, the Camera component can only be accessed from a camera. There are plenty of components that already exist, created by Unity's programmers, but we can also write our own. This means that all the properties we see in the Inspector are just variables of some type. They simply store data that will be used by some method.
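To make the component/variable relationship concrete, here is a small illustrative script (an assumption for demonstration, not from the book): its public field shows up as an editable property in the Inspector, and it reaches another component through GetComponent. It assumes it is attached to a GameObject with a Camera component, such as the Main Camera:

    using UnityEngine;

    public class CameraZoom : MonoBehaviour
    {
        // A public variable becomes an editable property in the Inspector.
        public float targetFieldOfView = 40f;

        void Update()
        {
            // Access the Camera component attached to the same GameObject
            // and use the stored data to drive its behavior.
            Camera cam = GetComponent<Camera>();
            cam.fieldOfView = Mathf.Lerp(cam.fieldOfView, targetFieldOfView, Time.deltaTime);
        }
    }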
Unity changes script and variable names slightly

When we create a script, one of the first things we need to do is give it a name, and it's always good practice to use a name that identifies the content of the script. For example, if we are creating a script that is used to control the player movement, ideally that would be the name of the script. The best practice is to write playerMovement, where the first word is uncapitalized and the second one is capitalized. This is the standard way Unity developers name scripts and variables.

Now let's say that we created a script named playerMovement. After assigning that script to a GameObject, we'll see in the Inspector panel that Unity adds a space to separate the words of the name: Player Movement. Unity makes this modification to variable names too; for example, a variable named number1 is shown as Number 1, and number2 as Number 2. Unity capitalizes the first letter as well. These changes improve readability in the Inspector.

Changing a property's value in the Inspector panel

There are two situations in which you can modify a property value:

- During the Play mode
- During the development stage (not in the Play mode)

When you are in the Play mode, you will see your changes take effect immediately, in real time. This is great when you're experimenting and want to see the results. Write down any changes that you want to keep, because when you stop the Play mode, any changes you made will be lost.

When you are in the development mode, changes that you make to the property values will be saved by Unity. This means that if you quit Unity and start it again, the changes will be retained. Of course, you won't see the effect of your changes until you click Play. The changes that you make to the property values in the Inspector panel do not modify your script. The only way your script can be changed is by you editing it in the script editor (MonoDevelop). The values shown in the Inspector panel override any values you might have assigned in your script. If you want to undo the changes you've made in the Inspector panel, you can reset the values to the defaults assigned in your script. Click on the cog icon (the gear) on the far right of the component script, and then select Reset, as shown in the following screenshot.

Displaying public variables in the Inspector panel

You might still be wondering what the word public at the beginning of a variable statement means:

    public int number1 = 2;

We mentioned it before: it means that the variable will be visible and accessible. It will be visible as a property in the Inspector panel so that you can manipulate the value stored in the variable. The word also means that it can be accessed from other scripts using the dot syntax.

Private variables

Not all variables need to be public. If there's no need for a variable to be changed in the Inspector panel or accessed from other scripts, it doesn't make sense to clutter the Inspector panel with needless properties. In the LearningScript, perform the following steps:

1. Change line 6 to this: private int number1 = 2;
2. Then change line 7 to the following: int number2 = 9;
3. Save the file.
4. In Unity, select Main Camera.

You will notice in the Inspector panel that both properties, Number 1 and Number 2, are gone.

Line 6: private int number1 = 2;

The preceding line explicitly states that the number1 variable has to be private. Therefore, the variable is no longer a property in the Inspector panel.
It is now a private variable for storing data.

Line 7: int number2 = 9;

The number2 variable is no longer visible as a property either, but you didn't specify it as private. If you don't explicitly state whether a variable will be public or private, the variable will implicitly be private in C# by default. It is good coding practice to explicitly state whether a variable will be public or private. So now, when you click Play, the script works exactly as it did before; you just can't manipulate the values manually in the Inspector panel anymore.

Naming Unity variables properly

As we explored previously, naming a script or variable is a very important step. It won't change the way the code runs, but it will help us stay organized, and, by using best practices, we avoid errors and save time spent trying to find the piece of code that isn't working. Always use meaningful names to store your variables. If you don't do that, six months down the line you will be lost. I'm going to exaggerate here a bit to make a point. Let's say you name a variable as shown in this code:

    public bool areRoadConditionsPerfect = true;

That's a descriptive name; in other words, you know what it means just by reading the variable. So, 10 years from now, when you look at that name, you'll know exactly what it means. Now suppose that instead of areRoadConditionsPerfect, you had named this variable as shown in the following code:

    public bool perfect = true;

Sure, you know what perfect is, but would you know that it refers to perfect road conditions? I know that right now you'll understand it because you just wrote it, but six months down the line, after writing hundreds of other scripts for all sorts of different projects, you'll look at this word and wonder what you meant. You'll have to read several lines of the code you wrote to try to figure it out. You may look at the code and wonder who in their right mind would write such terrible code. So, take your time to write descriptive code that even a stranger can look at and understand. Believe me, in six months, or probably less, you will be that stranger. Using meaningful names for variables and methods is helpful not only for you but also for any other game developer who will read your code. Whether or not you work in a team, you should always write easy-to-read code.

Beginning variable names with lowercase

You should begin a variable name with a lowercase letter because it helps distinguish between a class name and a variable name in your code. There are some other guides in the C# documentation as well, but we don't need to worry about them at this stage. Component names (class names) begin with an uppercase letter. For example, it's easy to know that Transform is a class and transform is a variable. There are, of course, exceptions to this general rule, and every programmer has a preferred way of using lowercase, uppercase, and perhaps an underscore to begin a variable name. In the end, you will have to decide upon a naming convention that you like. If you read the Unity forums, you will notice that there are some heated debates on naming variables. In this book, I will show you my preferred way, but you can use whatever is more comfortable for you.

Using multiword variable names

Let's use the same example again:

    public bool areRoadConditionsPerfect = true;

You can see that the variable name is actually four words squeezed together.
Since variable names can only be one word, begin the first word with lowercase and then capitalize the first letter of every additional word. This greatly helps create descriptive names that the viewer is still able to read. There's a term for this: it's called camel casing. I have already mentioned that, for public variables, Unity's Inspector will separate each word and capitalize the first one. Go ahead: add the previous statement to the LearningScript and see what Unity does with it in the Inspector panel.

Declaring a variable and its type

Every variable that we want to use in a script must be declared in a statement. What does that mean? Well, before Unity can use a variable, we have to tell Unity about it first. What are we supposed to tell Unity about the variable? There are only three absolute requirements for declaring a variable, and they are as follows:

- We have to specify the type of data that the variable can store
- We have to provide a name for the variable
- We have to end the declaration statement with a semicolon

The following is the syntax we use to declare a variable:

    typeOfData nameOfTheVariable;

Let's use one of the LearningScript variables as an example; the following is how we declare a variable with the bare minimum requirements:

    int number1;

This is what we have:

- Requirement #1 is the type of data that number1 can store, which in this case is an int, meaning an integer
- Requirement #2 is the name, which is number1
- Requirement #3 is the semicolon at the end

The second requirement, naming a variable, has already been discussed, as has the third, ending a statement with a semicolon. The first requirement, specifying the type of data, will be covered next. The following is what we know about this bare minimum declaration as far as Unity is concerned:

- There's no public modifier, which means it's private by default
- It won't appear in the Inspector panel or be accessible from other scripts
- The value stored in number1 defaults to zero

We discussed working with Unity 2017 variables and how you can start using them to create fun-filled games effectively. If you liked this article, be sure to go through the book Learning C# by Developing Games with Unity 2017 to create exciting games with C# and Unity 2017.
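Pulling the declaration, naming, and visibility rules together, a LearningScript along the lines discussed above might look like the following. This is a hedged reconstruction for illustration, not the book's exact listing:

    using UnityEngine;

    public class LearningScript : MonoBehaviour
    {
        // Public: visible in the Inspector (as "Number 1") and accessible from other scripts.
        public int number1 = 2;

        // Explicitly private: hidden from the Inspector and from other scripts.
        private int number2 = 9;

        // No modifier: implicitly private in C#; an int defaults to zero.
        int number3;

        void Start()
        {
            // Descriptive, camel-cased names make the intent readable.
            bool areRoadConditionsPerfect = true;
            Debug.Log(number1 + number2 + number3 + " " + areRoadConditionsPerfect);
        }
    }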


Exploring Shaders and Effects

Packt
14 Jul 2016
5 min read
In this article by Jamie Dean, the author of the book Mastering Unity Shaders and Effects, we will use transparent shaders and atmospheric effects to present the volatile conditions of the planet Ridley VI from the surface. In this article, we will cover the following topics:

- Exploring the difference between the Cutout, Transparent, and Fade Rendering Modes
- Implementing and adjusting Unity's fog effect in the scene

Creating the dust cloud material

The surface of Ridley VI is made inhospitable by dangerous nitrogen storms. In our game scene, these are represented by dust cloud planes situated near the surface. We need to set up the materials for these clouds with the following steps:

1. In the Project panel, click on the PACKT_Materials folder to view its contents in the Assets panel.
2. In the Assets panel, right-click on an empty area and choose Create | Material.
3. Rename the material dustCloud.
4. In the Hierarchy panel, click to select the dustcloud object. The object's properties will appear in the Inspector.
5. Drag the dustCloud material from the Assets panel onto the Materials field in the Mesh Renderer property visible in the Inspector.

Next, we will set the texture map of the material:

1. Reselect the dustCloud material by clicking on it in the Assets panel.
2. Lock the Inspector by clicking on the small lock icon in the top-right corner of the panel. Locking the Inspector allows you to keep the focus on an asset while you are hooking up an associated asset in your project.
3. In the Project panel, click on the PACKT_Textures folder.
4. Locate the strato texture map and drag it into the dustCloud material's Albedo texture slot in the Inspector.

The texture map contains four atlassed variations of the cloud effect, so we need to adjust how much of the whole texture is shown in the material. In the Inspector, set the Tiling Y value to 0.25. This ensures that only a quarter of the complete height of the texture is used in the material.

The texture map also contains opacity data. To use this in our material, we need to adjust the Rendering Mode. The Rendering Mode of the Standard Shader allows us to specify the opaque nature of a surface. Most often, scene objects are Opaque: objects behind them are blocked by them and are not visible through their surface. The next option is Cutout. This is used for surfaces containing areas of full opacity and full transparency, such as leaves on a tree or a chain-link fence; the opacity is basically on or off for each pixel in the texture. Fade allows objects to have cutout areas alongside partially transparent pixels. The Transparent option is suitable for truly transparent surfaces such as windows, glass, and some types of plastic. When specular is used with a transparent material, it is applied over the whole surface, making it unsuitable for cutout effects.

Comparison of Standard Shader transparency types

The Fade Rendering Mode is the best option for our dustCloud material: we want the cloud objects to be cut out so that the edges of the quads the material is applied to are not visible, and we want the surface to be partially transparent so that other dustcloud quads are visible behind them, blending the effect. At the top of the material properties in the Inspector, click on the Rendering Mode drop-down menu and set it to Fade:

Transparent dustCloud material applied

The dust clouds should now be visible, with their opacity reading correctly as shown in the preceding figure.
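These Rendering Mode settings can also be changed from script. There is no single public API call for this on the built-in Standard shader; the drop-down drives a set of shader properties and keywords. The following sketch reproduces the Fade configuration that Unity's own Standard shader inspector applies; treat the property names as assumptions that hold for the built-in Standard shader only:

    using UnityEngine;
    using UnityEngine.Rendering;

    public static class StandardShaderModes
    {
        // Switch a Standard-shader material to the Fade rendering mode.
        public static void SetFade(Material mat)
        {
            mat.SetFloat("_Mode", 2f);  // 2 corresponds to Fade in the drop-down
            mat.SetInt("_SrcBlend", (int)BlendMode.SrcAlpha);
            mat.SetInt("_DstBlend", (int)BlendMode.OneMinusSrcAlpha);
            mat.SetInt("_ZWrite", 0);   // transparent surfaces do not write depth
            mat.DisableKeyword("_ALPHATEST_ON");
            mat.EnableKeyword("_ALPHABLEND_ON");
            mat.DisableKeyword("_ALPHAPREMULTIPLY_ON");
            mat.renderQueue = (int)RenderQueue.Transparent;
            // Show only a quarter of the atlassed texture, as set in the Inspector.
            mat.mainTextureScale = new Vector2(1f, 0.25f);
        }
    }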
In the next step, we will add some further environmental effects to the scene.

Adding fog to the scene

In this step, we will add fog to the scene. Fog can be set to fade out distant background elements to reduce the amount of scenery that needs to be rendered. It can be colored, allowing us to blend elements together and give our scene some depth.

1. If the Lighting tab is not already visible in the Unity project, activate it from the menu bar by navigating to Window | Lighting. Dock the Lighting panel if necessary.
2. Scroll to the bottom to locate the Fog properties group.
3. Check the checkbox next to Fog to enable it. You will see that fog is added to the environment in the Scene view, as shown in the following figure. The default values do not quite match what we need in the planet surface environment:

Unity's default fog effect

4. Click within the color swatch next to Fog Color to define the color value. When the color picker appears over the main Unity interface, type the hex code E8BE80FF into the Hex Color field near the bottom, as shown in the following screenshot:

Fog effect color selection

This defines the yellow-orange color that is appropriate for our planet's atmosphere.

5. Set the Fog Mode to Exponential Squared to give the fog the appearance of becoming thicker in the distance.
6. Increase the fog by raising the density value to 0.05:

Adjusted fog blended with dust cloud transparencies

Our dust cloud objects are now blended with the fog, as shown in the preceding image.

Summary

In this article, we took a closer look at material Rendering Modes and how transparent effects can be implemented in a scene. We further explored real-time environmental effects by creating dust clouds that fade in and out using atlassed textures. We then set up an environmental fog effect using Unity's built-in tools.
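For reference, the same fog setup can be applied at runtime through the RenderSettings API. This is a minimal sketch of the values chosen above:

    using UnityEngine;

    public class AtmosphereSetup : MonoBehaviour
    {
        void Start()
        {
            RenderSettings.fog = true;
            // E8BE80FF: the yellow-orange atmosphere color picked in the Inspector.
            RenderSettings.fogColor = new Color32(0xE8, 0xBE, 0x80, 0xFF);
            // Exponential Squared fog thickens with the square of the distance.
            RenderSettings.fogMode = FogMode.ExponentialSquared;
            RenderSettings.fogDensity = 0.05f;
        }
    }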


Using a collider-based system

Packt
17 Feb 2016
10 min read
In this article by Jorge Palacios, the author of the book Unity 5.x Game AI Programming Cookbook, you will learn how to implement agent awareness using a mixed approach that builds on the previously learned sensory-level algorithms.

Seeing using a collider-based system

This is probably the easiest way to simulate vision. We take a collider, be it a mesh or a Unity primitive, and use it as the tool to determine whether an object is inside the agent's vision range or not.

Getting ready

It's important to have a collider component attached to the same game object as the script in this recipe, as well as for the other collider-based algorithms in this chapter. In this case, it's recommended that the collider be a pyramid-based one in order to simulate a vision cone. The fewer the polygons, the faster it will run in the game.

How to do it…

We will create a component that is able to see enemies nearby by performing the following steps:

1. Create the Visor component, declaring its member variables. It is important to add the corresponding tags in Unity's configuration:

    using UnityEngine;
    using System.Collections;

    public class Visor : MonoBehaviour
    {
        public string tagWall = "Wall";
        public string tagTarget = "Enemy";
        public GameObject agent;
    }

2. Implement the function for initializing the game object in case the component is already assigned to it:

    void Start()
    {
        if (agent == null)
            agent = gameObject;
    }

3. Declare the function for checking collisions every frame, and build it in the following steps:

    public void OnCollisionStay(Collision coll)
    {
        // next steps here
    }

4. Discard the collision if it is not a target:

    string tag = coll.gameObject.tag;
    if (!tag.Equals(tagTarget))
        return;

5. Get the game object's position and compute its direction from the Visor:

    GameObject target = coll.gameObject;
    Vector3 agentPos = agent.transform.position;
    Vector3 targetPos = target.transform.position;
    Vector3 direction = targetPos - agentPos;

6. Compute its length and create a new ray to be shot soon:

    float length = direction.magnitude;
    direction.Normalize();
    Ray ray = new Ray(agentPos, direction);

7. Cast the created ray and retrieve all the hits:

    RaycastHit[] hits;
    hits = Physics.RaycastAll(ray, length);

8. Check for any wall between the visor and the target. If there is none, we can proceed to call our functions or trigger the behaviors we have developed:

    int i;
    for (i = 0; i < hits.Length; i++)
    {
        GameObject hitObj;
        hitObj = hits[i].collider.gameObject;
        tag = hitObj.tag;
        if (tag.Equals(tagWall))
            return;
    }
    // TODO
    // target is visible
    // code your behaviour below

How it works…

The collider component checks every frame to know whether it is colliding with any game object in the scene. We leverage the optimizations in Unity's scene graph and engine, and focus only on how to handle valid collisions. After checking whether a target object is inside the vision range represented by the collider, we cast a ray in order to check whether it is really visible or whether there is a wall in between.

Hearing using a collider-based system

In this recipe, we will emulate the sense of hearing by developing two entities: a sound emitter and a sound receiver. It is based on the principles proposed by Millington for simulating a hearing system, and uses the power of Unity colliders to detect receivers near an emitter.

Getting ready

As with the other recipes based on colliders, we will need collider components attached to every object to be checked, and rigid body components attached to either emitters or receivers.
How to do it…

We will create the SoundReceiver class for our agents and the SoundEmitter class for things such as alarms:

1. Create the class for the SoundReceiver object:

    using UnityEngine;
    using System.Collections;

    public class SoundReceiver : MonoBehaviour
    {
        public float soundThreshold;
    }

2. Define the function for our own behavior to handle the reception of sound:

    public virtual void Receive(float intensity, Vector3 position)
    {
        // TODO
        // code your own behavior here
    }

3. Now, let's create the class for the SoundEmitter object:

    using UnityEngine;
    using System.Collections;
    using System.Collections.Generic;

    public class SoundEmitter : MonoBehaviour
    {
        public float soundIntensity;
        public float soundAttenuation;
        public GameObject emitterObject;
        private Dictionary<int, SoundReceiver> receiverDic;
    }

4. Initialize the list of nearby receivers, and emitterObject in case the component is attached directly:

    void Start()
    {
        receiverDic = new Dictionary<int, SoundReceiver>();
        if (emitterObject == null)
            emitterObject = gameObject;
    }

5. Implement the function for adding new receivers to the list when they enter the emitter bounds:

    public void OnCollisionEnter(Collision coll)
    {
        SoundReceiver receiver;
        receiver = coll.gameObject.GetComponent<SoundReceiver>();
        if (receiver == null)
            return;
        int objId = coll.gameObject.GetInstanceID();
        receiverDic.Add(objId, receiver);
    }

6. Also, implement the function for removing receivers from the list when they are out of reach:

    public void OnCollisionExit(Collision coll)
    {
        SoundReceiver receiver;
        receiver = coll.gameObject.GetComponent<SoundReceiver>();
        if (receiver == null)
            return;
        int objId = coll.gameObject.GetInstanceID();
        receiverDic.Remove(objId);
    }

7. Define the function for emitting sound waves to nearby agents:

    public void Emit()
    {
        GameObject srObj;
        Vector3 srPos;
        float intensity;
        float distance;
        Vector3 emitterPos = emitterObject.transform.position;
        // next step here
    }

8. Compute the sound attenuation for every receiver:

    foreach (SoundReceiver sr in receiverDic.Values)
    {
        srObj = sr.gameObject;
        srPos = srObj.transform.position;
        distance = Vector3.Distance(srPos, emitterPos);
        intensity = soundIntensity;
        intensity -= soundAttenuation * distance;
        if (intensity < sr.soundThreshold)
            continue;
        sr.Receive(intensity, emitterPos);
    }

How it works…

The collider triggers help register agents in the list of receivers assigned to an emitter. The sound emission function then takes into account each agent's distance from the emitter in order to decrease the sound's intensity, using the concept of sound attenuation.

There is more…

We can develop a more flexible algorithm by defining different types of walls that affect sound intensity.
It works by casting rays and adding up their attenuation values:

1. Create a dictionary to store wall types as strings (using tags) and their corresponding attenuation:

    public Dictionary<string, float> wallTypes;

2. Reduce the sound intensity this way:

    intensity -= GetWallAttenuation(emitterPos, srPos);

3. Define the function called in the previous step:

    public float GetWallAttenuation(Vector3 emitterPos, Vector3 receiverPos)
    {
        // next steps here
    }

4. Compute the values needed for the ray cast:

    float attenuation = 0f;
    Vector3 direction = receiverPos - emitterPos;
    float distance = direction.magnitude;
    direction.Normalize();

5. Cast the ray and retrieve the hits:

    Ray ray = new Ray(emitterPos, direction);
    RaycastHit[] hits = Physics.RaycastAll(ray, distance);

6. For every wall type found via tags, add up its value (stored in the dictionary):

    int i;
    for (i = 0; i < hits.Length; i++)
    {
        GameObject obj;
        string tag;
        obj = hits[i].collider.gameObject;
        tag = obj.tag;
        if (wallTypes.ContainsKey(tag))
            attenuation += wallTypes[tag];
    }
    return attenuation;

Smelling using a collider-based system

Smelling can be simulated by computing the collision between an agent and odor particles scattered throughout the game level.

Getting ready

In this collider-based recipe, we will need collider components attached to every object to be checked; the sense of smell is simulated by computing collisions between an agent and odor particles.

How to do it…

We will develop the scripts needed to represent odor particles and agents able to smell:

1. Create the particle's script and define its member variables for computing its lifespan:

    using UnityEngine;
    using System.Collections;

    public class OdorParticle : MonoBehaviour
    {
        public float timespan;
        private float timer;
    }

2. Implement the Start function for proper validation:

    void Start()
    {
        if (timespan < 0f)
            timespan = 0f;
        timer = timespan;
    }

3. Implement the timer and destroy the object at the end of its life cycle:

    void Update()
    {
        timer -= Time.deltaTime;
        if (timer < 0f)
            Destroy(gameObject);
    }

4. Create the class for representing the sniffer agent:

    using UnityEngine;
    using System.Collections;
    using System.Collections.Generic;

    public class Smeller : MonoBehaviour
    {
        private Vector3 target;
        private Dictionary<int, GameObject> particles;
    }

5. Initialize the dictionary for storing odor particles:

    void Start()
    {
        particles = new Dictionary<int, GameObject>();
    }

6. Add the colliding objects that have the odor-particle component attached to the dictionary:

    public void OnCollisionEnter(Collision coll)
    {
        GameObject obj = coll.gameObject;
        OdorParticle op;
        op = obj.GetComponent<OdorParticle>();
        if (op == null)
            return;
        int objId = obj.GetInstanceID();
        particles.Add(objId, obj);
        UpdateTarget();
    }

7. Release the odor particles from the local dictionary when they are out of the agent's range or are destroyed:

    public void OnCollisionExit(Collision coll)
    {
        GameObject obj = coll.gameObject;
        int objId = obj.GetInstanceID();
        bool isRemoved;
        isRemoved = particles.Remove(objId);
        if (!isRemoved)
            return;
        UpdateTarget();
    }

8. Create the function for computing the odor centroid according to the current elements in the dictionary:

    private void UpdateTarget()
    {
        Vector3 centroid = Vector3.zero;
        foreach (GameObject p in particles.Values)
        {
            Vector3 pos = p.transform.position;
            centroid += pos;
        }
        // Average the summed positions so that target is a true centroid.
        if (particles.Count > 0)
            centroid /= particles.Count;
        target = centroid;
    }

9. Implement the function for retrieving the odor centroid, if any:
    public Vector3? GetTargetPosition()
    {
        if (particles.Keys.Count == 0)
            return null;
        return target;
    }

How it works…

Just like the hearing recipe based on colliders, we use the trigger colliders to register odor particles in an agent's perception (implemented using a dictionary). When a particle is added or removed, the odor centroid is computed. However, we implement a function to retrieve that centroid because, when no odor particle is registered, the internal centroid position is not updated.

There is more…

The particle emission logic is left to be implemented according to our game's needs; it basically instantiates odor-particle prefabs. It is also recommended to attach rigid body components to the agents, because odor particles are prone to being instantiated massively, reducing the game's performance.

Seeing using a graph-based system

We will now start a recipe oriented toward using graph-based logic in order to simulate senses. Again, we will start by developing the sense of vision.

Getting ready

It is important to grasp the chapter regarding path finding in order to understand the inner workings of the graph-based recipes.

How to do it…

We will just implement a new file:

1. Create the class for handling vision:

    using UnityEngine;
    using System.Collections;
    using System.Collections.Generic;

    public class VisorGraph : MonoBehaviour
    {
        public int visionReach;
        public GameObject visorObj;
        public Graph visionGraph;
    }

2. Validate the visor object:

    void Start()
    {
        if (visorObj == null)
            visorObj = gameObject;
    }

3. Define and start building the function needed to detect the visibility of a given set of nodes:

    public bool IsVisible(int[] visibilityNodes)
    {
        int vision = visionReach;
        int src = visionGraph.GetNearestVertex(visorObj);
        HashSet<int> visibleNodes = new HashSet<int>();
        Queue<int> queue = new Queue<int>();
        queue.Enqueue(src);
    }

4. Implement a breadth-first search algorithm:

    while (queue.Count != 0)
    {
        if (vision == 0)
            break;
        vision--;
        int v = queue.Dequeue();
        List<int> neighbours = visionGraph.GetNeighbors(v);
        foreach (int n in neighbours)
        {
            if (visibleNodes.Contains(n))
                continue;
            queue.Enqueue(n);
            visibleNodes.Add(n);
        }
    }

5. Compare the given set of nodes with the set of nodes reached by the vision system:

    foreach (int vn in visibilityNodes)
    {
        if (visibleNodes.Contains(vn))
            return true;
    }

6. Return false if there is no match between the two sets of nodes:

    return false;

How it works…

The recipe uses the breadth-first search algorithm to discover nodes within the agent's vision reach, and then compares this set of nodes with the set of nodes where the agents reside.

Summary

In this article, we explained some algorithms for simulating senses and agent awareness.
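As a usage sketch of the hearing recipe above, a concrete receiver only needs to override Receive. The class below is an illustrative assumption, not part of the book's code:

    using UnityEngine;

    // A hypothetical guard agent that turns toward any sound loud enough to notice.
    public class GuardReceiver : SoundReceiver
    {
        public override void Receive(float intensity, Vector3 position)
        {
            // The emitter already subtracted attenuation, so intensity is what reaches us.
            Debug.Log("Heard a sound of intensity " + intensity);
            transform.LookAt(position);
        }
    }

Calling Emit() on a SoundEmitter (for example, when an alarm is triggered) then notifies every registered GuardReceiver in range.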


Implementing Unity 2017 Game Audio [Tutorial]

Amarabha Banerjee
11 Jul 2018
11 min read
Background music and audio effects play a big role in determining any game's success or failure. Creating engaging game audio, importing audio from other sources, and working with and customizing audio FX clips to fit the game flow are vital tasks for any game developer. In this article, we are going to discuss how to create, customize, and use third-party audio in Unity games. This article is a part of the book titled Unity 2017 2D Game Development Projects written by Lauren S. Ferro and Francesco Sapio.

Basics of audio and sound FX in Unity

Adding sound in Unity is simple enough, but you can implement it better if you understand how sound travels. While this is extremely important in 3D games because of the added third dimension, it is also quite important in 2D games, just in a slightly different way. Before we discuss the differences, let's first learn how sound works with a quick physics lesson.

Listening to the physics behind sound

What we hear is not just music, sound effects (FX), and ambient background noise. Sound is a longitudinal, mechanical (vibrating) wave. These "waves" can pass through different mediums (for example, air, water, your desk), but not through a vacuum; therefore, no one will hear your screams in space. Sound is a variation in pressure. A region of increased pressure on a sound wave is called a compression (or condensation). A region of decreased pressure on a sound wave is called a rarefaction (or dilation). You can see this concept illustrated in the following image:

The density of certain materials, such as glass and plastic, allows a certain amount of light to pass through them, and influences how the light behaves when it does, such as bending/refracting (that is, the index of refraction). Various materials (for example, liquids, solids, and gases) have a similar effect when it comes to letting sound waves pass. Some materials allow sound to pass easily, while others dampen it; this is why sound studios/booths are made of materials chosen to remove things such as echoes. Screaming underwater that there is a shark won't be as loud as screaming from your kitchen to tell everyone dinner is ready.

Another thing to consider is what is known as the Doppler Effect. The Doppler Effect is an increase (or decrease) in the frequency of sound (and other things, such as light or ripples in water) as the source and the person/player move toward (or away from) each other. A simple example is an emergency vehicle passing by you: the sound of the siren is different before it reaches you, when it is near you, and once it has passed you, because of the sudden change in pitch as it passes. This is visualized in the following image:

So, what is the point of knowing this when it comes to developing games? Well, it is particularly important when creating games, more so in 3D, in relation to the many ways that sounds are heard by players. For example, imagine that you're nearing a creek, but there are dense bushes, large pine trees, and rugged terrain. From where the player stands in the game world, that creek is going to sound very different than it would on a completely flat plane free of any vegetation.
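For reference, the idealized Doppler shift described above is usually written as follows (a standard textbook formulation, not taken from the book), for a source and receiver moving directly toward each other:

    f' = f * (v + v_r) / (v - v_s)

where f is the emitted frequency, f' the perceived frequency, v the speed of sound in the medium, v_r the speed of the receiver toward the source, and v_s the speed of the source toward the receiver. As the vehicle passes, the signs of v_r and v_s effectively flip, which produces the sudden drop in pitch.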
When it comes to 2D games, this is not as important because we are working without depth (the z axis), but similar principles apply when players are navigating a top-down environment and come near a point of interest: you don't want that sound to be as loud when the player is far away as it is when they are up close. Within the context of 2D and 3D sound, Unity has a parameter for this exact thing, called Spatial Blend. We will discuss it more in the Audio Source section.

There are several ways that you can create audio within Unity, from importing your own/downloaded sounds to recording them live. Like images, Unity can import most standard audio file formats (AIFF, WAV, MP3, and Ogg) as well as tracker modules (for example, short instrument samples): .xm, .mod, .it, and .s3m.

Importing audio

Importing audio into Unity follows the same process as importing any other type of asset. We will cover the basics of what you need to know in the following sections.

Audio Listener

Have you heard the saying, "If a tree falls in a forest and no one is there to hear it, does it still make a sound?" Well, in Unity, if there is nothing to hear your audio, then the answer is no. This is because Unity has a component called an Audio Listener, which works like a microphone. To locate the Audio Listener, click the Main Camera and then look over at the Inspector; it should be located near the bottom, as in the following image:

If, for some reason, it isn't there, you can always add it: click the button titled Add Component, type Audio Listener, and select it (click it) in the list, as in the following image:

The important thing to remember is that the Audio Listener is the location where sound is heard, which is why it is typically placed on the Main Camera, though it can also be placed on a Player. A single scene can only have one Audio Listener, so it's best to experiment with the placement that works best for your game. An Audio Listener works together with an Audio Source and must have one to work.

Audio Source

The Audio Source is where the sound comes from. This can be many different objects within a scene, as well as background music and sound FX. The Audio Source has several parameters; we will briefly discuss the main ones below. To see more information about all the parameters, you can check out the official Unity documentation by visiting the link or scanning the QR code: https://docs.unity3d.com/2017.2/Documentation/Manual/class-AudioSource.html

You may be wondering why we have a slider for Spatial Blend instead of a checkbox. This is because we need to fade between 2D and 3D, and there is a good reason for this. Imagine that you're in a game and you're looking at a screen on a computer. In this case, your camera is going to be fixated on whatever is on the screen, which could be checking an inventory or even entering nuclear codes. In any case, you will want the sound being emitted from the screen to be the focal audio, so the Spatial Blend slider is going to be closer to 2D; you may still want ambient background noises incorporated into the experience, but if you are closer to 2D, the sound will be the same in both speakers (or headphones). The closer you slide toward 3D, the more the volume will depend on the proximity of the Audio Listener to the Audio Source.
Sliding toward 3D also allows things such as the Doppler Effect to be more noticeable, as the sound takes place in 3D space; there are specific settings for these things as well.

Choosing sounds for background and FX

When it comes to picking the right kind of music for your game, just as with the aesthetics, you need to think about what kind of "mood" you're trying to create. Is it a somber or an uplifting mood? Are you ironically contrasting the graphics (for example, happy) with gloomy music? There is really no right or wrong when it comes to your musical selection, as long as you can communicate to the player what they are supposed to feel, at least in general. For this game, I have provided you with some example "moods" that you can apply. Of course, you're welcome to choose sounds other than these that are more to your liking!

All the sounds that we will use are from the Free Sound website: https://freesound.org. You will need to create an account to download them, but it's free, and there are many great sounds that you can use when creating games. That said, if you intend to create games for commercial purposes, please make sure that you check the Terms and Conditions on Free Sound so that you're not violating any of them. Each track has its own attribution license, including licenses for commercial use, so always check! For this project, we're going to stick with the "Happy" version, but I encourage you to experiment!

Happy
- Collecting Angel Cakes: Chime sound (https://freesound.org/people/jgreer/sounds/333629/)
- Being attacked by the enemy: Cat Purr/Twit4.wav (https://freesound.org/people/steffcaffrey/sounds/262309/)
- Collecting health: correct (https://freesound.org/people/ertfelda/sounds/243701/)
- Collecting bonuses: Signal-Ring 1 (https://freesound.org/people/Vendarro/sounds/399315/)
- Background: Kirmes_Orgel_004_2_Rosamunde.mp3 (https://freesound.org/people/bilwiss/sounds/24720/)

Sad
- Collecting Angel Cakes: Glass Tap (https://freesound.org/people/Unicornaphobist/sounds/262958/)
- Being attacked by the enemy: musicbox1.wav (https://freesound.org/people/sandocho/sounds/17700/)
- Collecting health: chime.wav (https://freesound.org/people/Psykoosiossi/sounds/398661/)
- Collecting bonuses: short metallic hit (https://freesound.org/people/waveplay/sounds/366400/)
- Background: improvised chill 8 (https://freesound.org/people/waveplay/sounds/238529/)

Retro
- Collecting Angel Cakes: TF_Buzz.flac (https://freesound.org/people/copyc4t/sounds/235652/)
- Being attacked by the enemy: Game Die (https://freesound.org/people/josepharaoh99/sounds/364929/)
- Collecting health: galanghee.wav (https://freesound.org/people/metamorphmuses/sounds/91387/)
- Collecting bonuses: SW05.WAV (https://freesound.org/people/mad-monkey/sounds/66684/)
- Background: Angel-techno pop music loop (https://freesound.org/people/frankum/sounds/387410/)

Not everyone can hear well, or at all, so it pays to keep this in mind when you're developing games that may rely on audio to provide feedback to players. While subtitles can make dialogue more accessible, sound FX can be a little trickier. Therefore, when it comes to implementing audio, think about how you could complement it, even if the effect that you're trying to achieve with sound is subtle. For example, if you play a "bleep" for every item collected, perhaps you could associate it with a slight glow or flash of color. The choice is up to you, but it's something to keep in mind.
On the other end of the spectrum, those who can hear might also want to turn the sounds off. We've all played that game (or several) that begins to become irritating, so make sure that you check this while you're playtesting. You don't want an awesome game to suck because your audio is intolerable and there is no option to TURN THE SOUND OFF! You've been warned.

Integrating background music in our game

Once you choose the music that best suits the feel you want to create for your game, import both the sounds and the music into the project. If you want, you can create two folders for them, SoundFX and Music respectively. Now, in our scene, we need to do the following:

1. Create an empty game object (by clicking GameObject | Create Empty) and rename it Background Music.
2. Attach an Audio Source component (in the Inspector, click Add Component | Audio | Audio Source).
3. Drag and drop the music we decided on/downloaded into the AudioClip variable, and check the Loop option so the background music never stops. Also check that Play On Awake is enabled (it should be by default), so the music starts playing as soon as the game starts.
4. Hit Play to start the game.
5. Lastly, adjust the volume depending on the music you chose. This may require a bit of playtesting (remember to set the value after exiting play mode, because settings adjusted during play mode are not kept).

In the end, this is how the component should look (in the image, I chose the happy theme music and set a Volume of 0.1):

Here in this article, we have shown you how to incorporate audio effects and background music in Unity games. If you liked this article, then check out the complete book Unity 2017 2D Game Development Projects.
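The same setup can also be done from code. The following sketch is an illustrative assumption that mirrors the steps above (the clip assignment and the 0.1 volume are the values chosen in the Inspector); it is not from the book:

    using UnityEngine;

    public class BackgroundMusic : MonoBehaviour
    {
        public AudioClip musicClip;   // assign the downloaded track in the Inspector

        void Start()
        {
            AudioSource source = gameObject.AddComponent<AudioSource>();
            source.clip = musicClip;
            source.loop = true;          // the background music never stops
            source.playOnAwake = true;
            source.spatialBlend = 0f;    // fully 2D: same volume in both speakers
            source.volume = 0.1f;        // adjust to taste while playtesting
            source.Play();
        }
    }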

Beyond Grading

Packt
16 Jan 2014
5 min read
(For more resources related to this topic, see here.)

Kudos

As the final look of the frame is achieved during the compositing stage, there will always be numerous occasions where more render passes are required to finalize the image. This results in extra 3D renders, along with more time and money. Also, a few indispensable effects that give life to an image, such as lens effects (defocus, glare, and motion blur), are render intensive. Blender Compositor provides alternate procedures for these effects, without having to go back to 3D renders. A well-planned CG pipeline can always provide sufficient data to be able to use these techniques during the compositing stage.

Relighting

Relighting is a compositing technique that is used to add extra light information not existing in the received 3D render information. This process facilitates additional creative tweaks in compositing. Though this technique can only provide light without considering shadowing information, additional procedures can provide a convincing approach to this limitation.

The Normal node

Relighting in Blender can be performed using the Normal node. The following screenshot shows the relighting workflow to add a cool light from screen right. The illustration uses a Hue Saturation Value node to attain the fake light color. Alternatively, any grading node can be used for a similar effect. The technique is to use the Dot output of the Normal node as the factor input for any grade node.

The following screenshot shows relighting with a cyan light from the top using the Normal node:

The light direction can be modified by left-clicking and dragging on the diffused sphere thumbnail image provided on the node. This fake lighting works great when used for secondary light highlights. However, as seen on the vertical brick in the preceding screenshot, light leaks can be encountered, since shadowing is not considered. This can often spoil the fun. A quick fix for this is to use the Ambient Occlusion information to occlude the unwanted areas.

The following screenshot illustrates the workflow of using the Ambient Occlusion pass along with the normal pass to resolve the light leak issue. The technique is to multiply the Dot output of the Normal node with the Ambient Occlusion info from the rendered image using Mix or Math nodes.

As can be observed in the following screenshot, the blue light leaks on the inside parts of the vertical brick are minimized by the Ambient Occlusion information. This solution works as long as relighting is not the primary lighting for the scene.

Another issue that can be encountered while using the Normal node is negative values. These values will affect the non-light areas, leading to an unwanted effect. The procedure to curb these unwanted values is to clamp them from the Dot output of the Normal node to zero, before using it as a mask input to grade nodes. The following screenshot illustrates the issue with negative values. All pixels that have an over-saturated orange color are a result of negative values.

The following screenshot shows the workflow to clamp the negative values from the dot information of a normal pass. A Map Value node is connected between the grade node and Normal node, with the Use Minimum option on. This makes sure that only negative values are clamped to zero and all other values are unchanged.
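If you'd rather build this node setup procedurally, the same Normal/AO relight can be wired up with Blender's Python API. The following is a rough sketch of the idea (mine, not from the book); the node type identifiers are standard, but socket names such as 'AO' and 'Fac' can vary between Blender versions and require the Ambient Occlusion pass to be enabled, so treat them as assumptions:

import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# Rendered passes (the AO pass must be enabled on the render layer)
render = tree.nodes.new('CompositorNodeRLayers')

# Normal node: its Dot output drives the fake light
normal = tree.nodes.new('CompositorNodeNormal')

# Multiply Dot by the AO pass to occlude the light leaks
occlude = tree.nodes.new('CompositorNodeMath')
occlude.operation = 'MULTIPLY'

# Any grade node works; here, a Hue Saturation Value node tints the light
grade = tree.nodes.new('CompositorNodeHueSat')

tree.links.new(normal.outputs['Dot'], occlude.inputs[0])
tree.links.new(render.outputs['AO'], occlude.inputs[1])
tree.links.new(occlude.outputs['Value'], grade.inputs['Fac'])
tree.links.new(render.outputs['Image'], grade.inputs['Image'])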
The Fresnel effect

The Fresnel option available in shader parameters is used to modify the reflection intensity, based on the viewing angle, to simulate a metallic behavior. After 3D rendering, altering this property requires rerendering. Blender provides an alternate method to build and modify the Fresnel effect in compositing, using the Normal node. The following screenshot illustrates the Fresnel workflow.

In this procedure, the Dot output of a Normal node is connected to a Map Range node, and the To Min/To Max values are tweaked to obtain a black-and-white mask, as shown in the screenshot. A Math node is then connected to clamp the mask to the 0-1 range. The 3D combined render output is rebuilt using the diffuse, specular, and reflection passes from the 3D render. While rebuilding, the mask created using the Normal node should be applied to the factor input of the reflection Add node. This results in applying reflection only to the white areas of the mask, thereby exhibiting the Fresnel effect. A similar technique can be used to add edge highlights, using the mask as a factor input to the grade nodes.

Summary

This article dealt with advanced compositing techniques beyond grading. These techniques emphasize alternate methods in Blender Compositing for some specific 3D render requirements, which can save lots of render time, thereby also saving budget when making a CG film.

Resources for Article:

Further resources on this subject:
Introduction to Blender 2.5: Color Grading [article]
Blender Engine : Characters [article]
Managing Blender Materials [article]

Finding Your Way

Packt
21 Sep 2015
19 min read
This article by Ray Barrera, the author of Unity AI Game Programming Second Edition, covers the following topics:

A* Pathfinding algorithm
A custom A* Pathfinding implementation

(For more resources related to this topic, see here.)

A* Pathfinding

We'll implement the A* algorithm in a Unity environment using C#. The A* Pathfinding algorithm is widely used in games and interactive applications, even though there are other algorithms, such as Dijkstra's algorithm, because of its simplicity and effectiveness.

Revisiting the A* algorithm

Let's review the A* algorithm again before we proceed to implement it in the next section. First, we'll need to represent the map in a traversable data structure. While many structures are possible, for this example, we will use a 2D grid array. We'll implement the GridManager class later to handle this map information. Our GridManager class will keep a list of the Node objects that are basically tiles in a 2D grid. So, we need to implement that Node class to handle things such as node type (whether it's a traversable node or an obstacle), cost to pass through, cost to reach the goal node, and so on.

We'll have two variables to store the nodes that have been processed and the nodes that we have to process. We'll call them the closed list and the open list, respectively. We'll implement that list type in the PriorityQueue class. And then, finally, the following A* algorithm will be implemented in the AStar class. Let's take a look at it:

1. We begin at the starting node and put it in the open list.
2. As long as the open list has some nodes in it, we'll perform the following processes:
   1. Pick the first node from the open list and keep it as the current node. (This is assuming that we've sorted the open list and the first node has the least cost value, which will be mentioned at the end of the code.)
   2. Get the neighboring nodes of this current node that are not obstacle types, such as a wall or canyon that can't be passed through.
   3. For each neighbor node, check if this neighbor node is already in the closed list. If not, we'll calculate the total cost (F) for this neighbor node using the following formula: F = G + H. In the preceding formula, G is the total cost from the previous node to this node and H is the total cost from this node to the final target node.
   4. Store this cost data in the neighbor node object. Also, store the current node as the parent node as well. Later, we'll use this parent node data to trace back the actual path.
   5. Put this neighbor node in the open list.
   6. Sort the open list in ascending order, ordered by the total cost to reach the target node.
   7. If there are no more neighbor nodes to process, put the current node in the closed list and remove it from the open list.
   8. Go back to step 2.

Once you have completed this process, your current node should be in the target goal node position, but only if there's an obstacle-free path to reach the goal node from the start node. If it is not at the goal node, there's no available path to the target node from the current node position. If there's a valid path, all we have to do now is trace back from the current node's parent node until we reach the start node again. This will give us a path list of all the nodes that we chose during our pathfinding process, ordered from the target node to the start node. We then just reverse this path list, since we want to know the path from the start node to the target goal node.

This is a general overview of the algorithm we're going to implement in Unity using C#. So let's get started.
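Before moving to the C# implementation, here is a compact, self-contained Python sketch of the same loop (my own illustration, not from the book) that you can run to see the bookkeeping in action; the grid, helper functions, and names are all hypothetical:

import math

def a_star(start, goal, obstacles, cols, rows):
    def heuristic(a, b):
        return math.dist(a, b)  # straight-line distance, as in the book

    def neighbors(n):
        x, y = n
        steps = [(x, y - 1), (x, y + 1), (x - 1, y), (x + 1, y)]
        return [(i, j) for (i, j) in steps
                if 0 <= i < cols and 0 <= j < rows and (i, j) not in obstacles]

    open_list, closed_list, parent = [start], set(), {start: None}
    g = {start: 0.0}
    f = {start: heuristic(start, goal)}
    while open_list:
        open_list.sort(key=lambda n: f[n])  # cheapest estimated total first
        node = open_list.pop(0)
        if node == goal:                    # trace parents, then reverse
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for nb in neighbors(node):
            if nb in closed_list:
                continue
            g[nb] = g[node] + heuristic(node, nb)
            f[nb] = g[nb] + heuristic(nb, goal)
            parent[nb] = node
            if nb not in open_list:
                open_list.append(nb)
        closed_list.add(node)
    return None  # no obstacle-free path

print(a_star((0, 0), (3, 3), {(1, 1), (2, 1)}, 4, 4))

Like the book's version, this sketch simply overwrites a neighbor's costs whenever it is revisited, which keeps the logic short at the cost of strict optimality.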
Implementation

We'll implement the preliminary classes that were mentioned before, such as the Node, GridManager, and PriorityQueue classes. Then, we'll use them in our main AStar class.

Implementing the Node class

The Node class will handle each tile object in our 2D grid, representing the map, as shown in the following Node.cs file:

using UnityEngine;
using System.Collections;
using System;

public class Node : IComparable {
    public float nodeTotalCost;
    public float estimatedCost;
    public bool bObstacle;
    public Node parent;
    public Vector3 position;

    public Node() {
        this.estimatedCost = 0.0f;
        this.nodeTotalCost = 1.0f;
        this.bObstacle = false;
        this.parent = null;
    }

    public Node(Vector3 pos) {
        this.estimatedCost = 0.0f;
        this.nodeTotalCost = 1.0f;
        this.bObstacle = false;
        this.parent = null;
        this.position = pos;
    }

    public void MarkAsObstacle() {
        this.bObstacle = true;
    }

The Node class has properties such as the cost values (G and H), flags to mark whether it is an obstacle, its position, and its parent node. The nodeTotalCost is G, which is the movement cost from the starting node to this node so far, and the estimatedCost is H, which is the total estimated cost from this node to the target goal node. We also have two simple constructor methods and a wrapper method to set whether this node is an obstacle. Then, we implement the CompareTo method, as shown in the following code:

    public int CompareTo(object obj) {
        Node node = (Node)obj;
        //Negative value means object comes before this in the sort order.
        if (this.estimatedCost < node.estimatedCost) return -1;
        //Positive value means object comes after this in the sort order.
        if (this.estimatedCost > node.estimatedCost) return 1;
        return 0;
    }
}

This method is important. Our Node class inherits from IComparable because we want to override this CompareTo method. If you can recall what we discussed in the previous algorithm section, you'll notice that we need to sort our list of node arrays based on the total estimated cost. The ArrayList type has a method called Sort. This method basically looks for this CompareTo method, implemented inside the object (in this case, our Node objects) from the list. So, we implement this method to sort the node objects based on our estimatedCost value. The IComparable.CompareTo method, which is a .NET framework feature, can be found at http://msdn.microsoft.com/en-us/library/system.icomparable.compareto.aspx.

Establishing the priority queue

The PriorityQueue class is a short and simple class to make the handling of the nodes' ArrayList easier, as shown in the following PriorityQueue.cs class:

using UnityEngine;
using System.Collections;

public class PriorityQueue {
    private ArrayList nodes = new ArrayList();

    public int Length {
        get { return this.nodes.Count; }
    }

    public bool Contains(object node) {
        return this.nodes.Contains(node);
    }

    public Node First() {
        if (this.nodes.Count > 0) {
            return (Node)this.nodes[0];
        }
        return null;
    }

    public void Push(Node node) {
        this.nodes.Add(node);
        this.nodes.Sort();
    }

    public void Remove(Node node) {
        this.nodes.Remove(node);
        //Ensure the list is sorted
        this.nodes.Sort();
    }
}

The preceding code listing should be easy to understand. One thing to notice is that after adding or removing a node from the nodes' ArrayList, we call the Sort method. This will call the Node object's CompareTo method and will sort the nodes accordingly by the estimatedCost value.

Setting up our grid manager

The GridManager class handles all the properties of the grid, representing the map.
We'll keep a singleton instance of the GridManager class, as we need only one object to represent the map, as shown in the following GridManager.cs file:

using UnityEngine;
using System.Collections;

public class GridManager : MonoBehaviour {
    private static GridManager s_Instance = null;

    public static GridManager instance {
        get {
            if (s_Instance == null) {
                s_Instance = FindObjectOfType(typeof(GridManager)) as GridManager;
                if (s_Instance == null)
                    Debug.Log("Could not locate a GridManager " +
                        "object.\nYou have to have exactly " +
                        "one GridManager in the scene.");
            }
            return s_Instance;
        }
    }

We look for the GridManager object in our scene and, if found, we keep it in our s_Instance static variable:

    public int numOfRows;
    public int numOfColumns;
    public float gridCellSize;
    public bool showGrid = true;
    public bool showObstacleBlocks = true;
    private Vector3 origin = new Vector3();
    private GameObject[] obstacleList;
    public Node[,] nodes { get; set; }

    public Vector3 Origin {
        get { return origin; }
    }

Next, we declare all the variables we'll need to represent our map, such as the number of rows and columns, the size of each grid tile, some Boolean variables to visualize the grid and obstacles, and a place to store all the nodes present in the grid, as shown in the following code:

    void Awake() {
        obstacleList = GameObject.FindGameObjectsWithTag("Obstacle");
        CalculateObstacles();
    }

    // Find all the obstacles on the map
    void CalculateObstacles() {
        nodes = new Node[numOfColumns, numOfRows];
        int index = 0;
        for (int i = 0; i < numOfColumns; i++) {
            for (int j = 0; j < numOfRows; j++) {
                Vector3 cellPos = GetGridCellCenter(index);
                Node node = new Node(cellPos);
                nodes[i, j] = node;
                index++;
            }
        }
        if (obstacleList != null && obstacleList.Length > 0) {
            //For each obstacle found on the map, record it in our list
            foreach (GameObject data in obstacleList) {
                int indexCell = GetGridIndex(data.transform.position);
                int col = GetColumn(indexCell);
                int row = GetRow(indexCell);
                nodes[row, col].MarkAsObstacle();
            }
        }
    }

We look for all the game objects with an Obstacle tag and put them in our obstacleList property. Then we set up our nodes' 2D array in the CalculateObstacles method. First, we just create the normal node objects with default properties. Just after that, we examine our obstacleList, convert each position into row-column data, and update the nodes at that index to be obstacles. The GridManager class has a couple of helper methods to traverse the grid and get the grid cell data. The following are some of them, with a brief description of what they do. The implementation is simple, so we won't go into the details.
The GetGridCellCenter method returns the position of the grid cell in world coordinates from the cell index, as shown in the following code:

    public Vector3 GetGridCellCenter(int index) {
        Vector3 cellPosition = GetGridCellPosition(index);
        cellPosition.x += (gridCellSize / 2.0f);
        cellPosition.z += (gridCellSize / 2.0f);
        return cellPosition;
    }

    public Vector3 GetGridCellPosition(int index) {
        int row = GetRow(index);
        int col = GetColumn(index);
        float xPosInGrid = col * gridCellSize;
        float zPosInGrid = row * gridCellSize;
        return Origin + new Vector3(xPosInGrid, 0.0f, zPosInGrid);
    }

The GetGridIndex method returns the grid cell index in the grid from the given position:

    public int GetGridIndex(Vector3 pos) {
        if (!IsInBounds(pos)) {
            return -1;
        }
        pos -= Origin;
        int col = (int)(pos.x / gridCellSize);
        int row = (int)(pos.z / gridCellSize);
        return (row * numOfColumns + col);
    }

    public bool IsInBounds(Vector3 pos) {
        float width = numOfColumns * gridCellSize;
        float height = numOfRows * gridCellSize;
        return (pos.x >= Origin.x && pos.x <= Origin.x + width &&
            pos.z <= Origin.z + height && pos.z >= Origin.z);
    }

The GetRow and GetColumn methods return the row and column data of the grid cell from the given index:

    public int GetRow(int index) {
        int row = index / numOfColumns;
        return row;
    }

    public int GetColumn(int index) {
        int col = index % numOfColumns;
        return col;
    }

Another important method is GetNeighbours, which is used by the AStar class to retrieve the neighboring nodes of a particular node:

    public void GetNeighbours(Node node, ArrayList neighbors) {
        Vector3 neighborPos = node.position;
        int neighborIndex = GetGridIndex(neighborPos);
        int row = GetRow(neighborIndex);
        int column = GetColumn(neighborIndex);
        //Bottom
        int leftNodeRow = row - 1;
        int leftNodeColumn = column;
        AssignNeighbour(leftNodeRow, leftNodeColumn, neighbors);
        //Top
        leftNodeRow = row + 1;
        leftNodeColumn = column;
        AssignNeighbour(leftNodeRow, leftNodeColumn, neighbors);
        //Right
        leftNodeRow = row;
        leftNodeColumn = column + 1;
        AssignNeighbour(leftNodeRow, leftNodeColumn, neighbors);
        //Left
        leftNodeRow = row;
        leftNodeColumn = column - 1;
        AssignNeighbour(leftNodeRow, leftNodeColumn, neighbors);
    }

    void AssignNeighbour(int row, int column, ArrayList neighbors) {
        if (row != -1 && column != -1 && row < numOfRows && column < numOfColumns) {
            Node nodeToAdd = nodes[row, column];
            if (!nodeToAdd.bObstacle) {
                neighbors.Add(nodeToAdd);
            }
        }
    }

First, we retrieve the neighboring nodes of the current node in all four directions: left, right, top, and bottom. Then, inside the AssignNeighbour method, we check the node to see whether it's an obstacle. If it's not, we push that neighbor node to the referenced array list, neighbors.
The next method is a debug aid method to visualize the grid and obstacle blocks:

    void OnDrawGizmos() {
        if (showGrid) {
            DebugDrawGrid(transform.position, numOfRows, numOfColumns, gridCellSize, Color.blue);
        }
        Gizmos.DrawSphere(transform.position, 0.5f);
        if (showObstacleBlocks) {
            Vector3 cellSize = new Vector3(gridCellSize, 1.0f, gridCellSize);
            if (obstacleList != null && obstacleList.Length > 0) {
                foreach (GameObject data in obstacleList) {
                    Gizmos.DrawCube(GetGridCellCenter(
                        GetGridIndex(data.transform.position)), cellSize);
                }
            }
        }
    }

    public void DebugDrawGrid(Vector3 origin, int numRows, int numCols, float cellSize, Color color) {
        float width = (numCols * cellSize);
        float height = (numRows * cellSize);
        // Draw the horizontal grid lines
        for (int i = 0; i < numRows + 1; i++) {
            Vector3 startPos = origin + i * cellSize * new Vector3(0.0f, 0.0f, 1.0f);
            Vector3 endPos = startPos + width * new Vector3(1.0f, 0.0f, 0.0f);
            Debug.DrawLine(startPos, endPos, color);
        }
        // Draw the vertical grid lines
        for (int i = 0; i < numCols + 1; i++) {
            Vector3 startPos = origin + i * cellSize * new Vector3(1.0f, 0.0f, 0.0f);
            Vector3 endPos = startPos + height * new Vector3(0.0f, 0.0f, 1.0f);
            Debug.DrawLine(startPos, endPos, color);
        }
    }
}

Gizmos can be used to draw visual debugging and setup aids inside the editor scene view. The OnDrawGizmos method is called every frame by the engine. So, if the debug flags, showGrid and showObstacleBlocks, are checked, we just draw the grid with lines and the obstacles with cubes. We won't go through the DebugDrawGrid method, which is quite simple. You can learn more about gizmos in the Unity reference documentation at http://docs.unity3d.com/Documentation/ScriptReference/Gizmos.html.

Diving into our A* implementation

The AStar class is the main class that will utilize the classes we have implemented so far. You can go back to the algorithm section if you want to review this. We start with our openList and closedList declarations, which are of the PriorityQueue type, as shown in the AStar.cs file:

using UnityEngine;
using System.Collections;

public class AStar {
    public static PriorityQueue closedList, openList;

Next, we implement a method called HeuristicEstimateCost to calculate the cost between two nodes. The calculation is simple. We just find the direction vector between the two by subtracting one position vector from another. The magnitude of this resultant vector gives the direct distance from the current node to the goal node:

    private static float HeuristicEstimateCost(Node curNode, Node goalNode) {
        Vector3 vecCost = curNode.position - goalNode.position;
        return vecCost.magnitude;
    }

Next, we have our main FindPath method:

    public static ArrayList FindPath(Node start, Node goal) {
        openList = new PriorityQueue();
        openList.Push(start);
        start.nodeTotalCost = 0.0f;
        start.estimatedCost = HeuristicEstimateCost(start, goal);
        closedList = new PriorityQueue();
        Node node = null;

We initialize our open and closed lists. Starting with the start node, we put it in our open list.
Then we start processing our open list:

        while (openList.Length != 0) {
            node = openList.First();
            //Check if the current node is the goal node
            if (node.position == goal.position) {
                return CalculatePath(node);
            }
            //Create an ArrayList to store the neighboring nodes
            ArrayList neighbours = new ArrayList();
            GridManager.instance.GetNeighbours(node, neighbours);
            for (int i = 0; i < neighbours.Count; i++) {
                Node neighbourNode = (Node)neighbours[i];
                if (!closedList.Contains(neighbourNode)) {
                    float cost = HeuristicEstimateCost(node, neighbourNode);
                    float totalCost = node.nodeTotalCost + cost;
                    float neighbourNodeEstCost = HeuristicEstimateCost(neighbourNode, goal);
                    neighbourNode.nodeTotalCost = totalCost;
                    neighbourNode.parent = node;
                    neighbourNode.estimatedCost = totalCost + neighbourNodeEstCost;
                    if (!openList.Contains(neighbourNode)) {
                        openList.Push(neighbourNode);
                    }
                }
            }
            //Push the current node to the closed list
            closedList.Push(node);
            //and remove it from openList
            openList.Remove(node);
        }
        if (node.position != goal.position) {
            Debug.LogError("Goal Not Found");
            return null;
        }
        return CalculatePath(node);
    }

This code implementation resembles the algorithm that we have previously discussed, so you can refer back to it if you are not clear on certain things:

1. Get the first node of our openList. Remember, our openList of nodes is sorted every time a new node is added, so the first node is always the node with the least estimated cost to the goal node.
2. Check whether the current node is already at the goal node. If so, exit the while loop and build the path array.
3. Create an array list to store the neighboring nodes of the current node being processed. Use the GetNeighbours method to retrieve the neighbors from the grid.
4. For every node in the neighbors array, check whether it's already in closedList. If not, calculate the cost values, update the node properties with the new cost values as well as the parent node data, and put it in openList.
5. Push the current node to closedList and remove it from openList. Go back to step 1.

If there are no more nodes in openList, our current node should be at the target node, provided there's a valid path available. Then, we just call the CalculatePath method with the current node parameter:

    private static ArrayList CalculatePath(Node node) {
        ArrayList list = new ArrayList();
        while (node != null) {
            list.Add(node);
            node = node.parent;
        }
        list.Reverse();
        return list;
    }
}

The CalculatePath method traces through each node's parent node object and builds an array list. It gives an array list with nodes from the target node to the start node. Since we want a path array from the start node to the target node, we just call the Reverse method. So, this is our AStar class. We'll write a test script in the following code to test all this and then set up a scene to use them in.

Implementing a TestCode class

This class will use the AStar class to find the path from the start node to the goal node, as shown in the following TestCode.cs file:

using UnityEngine;
using System.Collections;

public class TestCode : MonoBehaviour {
    private Transform startPos, endPos;
    public Node startNode { get; set; }
    public Node goalNode { get; set; }
    public ArrayList pathArray;
    GameObject objStartCube, objEndCube;
    private float elapsedTime = 0.0f;
    //Interval time between pathfinding
    public float intervalTime = 1.0f;

First, we set up the variables that we'll need to reference.
The pathArray is to store the nodes array returned from the AStar FindPath method:

    void Start() {
        objStartCube = GameObject.FindGameObjectWithTag("Start");
        objEndCube = GameObject.FindGameObjectWithTag("End");
        pathArray = new ArrayList();
        FindPath();
    }

    void Update() {
        elapsedTime += Time.deltaTime;
        if (elapsedTime >= intervalTime) {
            elapsedTime = 0.0f;
            FindPath();
        }
    }

In the Start method, we look for objects with the Start and End tags and initialize our pathArray. We'll be trying to find our new path at every interval that we set to our intervalTime property, in case the positions of the start and end nodes have changed. Then, we call the FindPath method:

    void FindPath() {
        startPos = objStartCube.transform;
        endPos = objEndCube.transform;
        startNode = new Node(GridManager.instance.GetGridCellCenter(
            GridManager.instance.GetGridIndex(startPos.position)));
        goalNode = new Node(GridManager.instance.GetGridCellCenter(
            GridManager.instance.GetGridIndex(endPos.position)));
        pathArray = AStar.FindPath(startNode, goalNode);
    }

Since we implemented our pathfinding algorithm in the AStar class, finding a path has now become a lot simpler. First, we take the positions of our start and end game objects. Then, we create new Node objects using the helper methods of GridManager and GetGridIndex to calculate their respective row and column index positions inside the grid. Once we get this, we just call the AStar.FindPath method with the start node and goal node and store the returned array list in the local pathArray property. Next, we implement the OnDrawGizmos method to draw and visualize the path found:

    void OnDrawGizmos() {
        if (pathArray == null)
            return;
        if (pathArray.Count > 0) {
            int index = 1;
            foreach (Node node in pathArray) {
                if (index < pathArray.Count) {
                    Node nextNode = (Node)pathArray[index];
                    Debug.DrawLine(node.position, nextNode.position, Color.green);
                    index++;
                }
            }
        }
    }
}

We loop through our pathArray and use the Debug.DrawLine method to draw the lines connecting the nodes from the pathArray. With this, we'll be able to see a green line connecting the nodes from start to end, forming a path, when we run and test our program.

Setting up our sample scene

We are going to set up a scene that looks something similar to the following screenshot:

A sample test scene

We'll have a directional light, the start and end game objects, a few obstacle objects, a plane entity to be used as ground, and two empty game objects in which we put our GridManager and TestAStar scripts. This is our scene hierarchy:

The scene Hierarchy

Create a bunch of cube entities and tag them as Obstacle. We'll be looking for objects with this tag when running our pathfinding algorithm.

The Obstacle node

Create a cube entity and tag it as Start.

The Start node

Then, create another cube entity and tag it as End.

The End node

Now, create an empty game object and attach the GridManager script. Set the name as GridManager, because we use this name to look for the GridManager object from our script. Here, we can set up the number of rows and columns for our grid as well as the size of each tile.

The GridManager script

Testing all the components

Let's hit the play button and see our A* Pathfinding algorithm in action. By default, once you play the scene, Unity will switch to the Game view. Since our pathfinding visualization code is written for debug drawing in the editor view, you'll need to switch back to the Scene view or enable Gizmos to see the path found.
Found path one

Now, try to move the start or end node around in the scene using the editor's movement gizmo (not in the Game view, but the Scene view).

Found path two

You should see the path updated accordingly, dynamically in real time, if there's a valid path from the start node to the target goal node. You'll get an error message in the console window if there's no path available.

Summary

In this article, we learned how to implement our own simple A* Pathfinding system. To attain this, we first implemented the Node class and established the priority queue. Then, we moved on to setting up the grid manager. After that, we dove in deeper by implementing a TestCode class and setting up our sample scene. Finally, we tested all the components.

Resources for Article:

Further resources on this subject:
Saying Hello to Unity and Android [article]
Enemy and Friendly AIs [article]
Customizing skin with GUISkin [article]

Microsoft’s Xbox team at E3 2019: Project Scarlett, AI-powered Flight Simulator, Keanu Reeves in Cyberpunk 2077, and more

Bhagyashree R
11 Jun 2019
6 min read
On Sunday at E3 2019, Microsoft made some really big announcements that had the audience screaming. These included the release date of Project Scarlett, the Xbox One's successor; more than 60 game trailers; Keanu Reeves taking the stage to promote Cyberpunk 2077; and much more.

E3, which stands for Electronic Entertainment Expo, is one of the biggest gaming events of the year. Its official dates are June 11-13; however, these dates are just for the shows happening at the Los Angeles Convention Center. The press conferences were held on June 8 and 9. Along with hosting the world premieres of several computer and video games, this event also showcases new hardware and software products that take the gaming experience to the next level.

Here are some of the highlights from Microsoft's press conference:

Project Scarlett will arrive in fall 2020 with Halo Infinite

Rumors have been going around about the next generation of Xbox since December last year. Putting all these rumors to rest, Microsoft officially announced that Project Scarlett is planned to release during fall next year. The tech giant further shared that the next big upcoming space war game, Halo Infinite, will launch alongside Project Scarlett.

According to Microsoft, we can expect this new device to be four times more powerful than Xbox One X. It includes a custom-designed CPU based on AMD's Zen 2 and Radeon RDNA architecture. It supports 8K gaming, framerates of 120fps, and ray tracing. The device will also include a non-mechanical SSD hard drive, enabling faster game loads than its older mechanical hard drives.

https://youtu.be/-ktN4bycj9s

xCloud will open for public trials in October, one month ahead of Google's Stadia

After giving a brief live demonstration of its upcoming xCloud game streaming service in March, Microsoft announced that it will be available to the public in October this year. This announcement seems to be a direct response to Google's Stadia, which was revealed in March and will make its public debut in November. Along with sharing the release date, the tech giant also gave E3 attendees the first hands-on trial of the service.

At the event, Xbox chief Phil Spencer said, "Two months ago we connected all Xbox developers to Project xCloud. Today, we invite those of you here at E3 for our first public hands-on of Project xCloud. To experience the freedom to play right here at the show."

Microsoft built xCloud to provide gamers with a new way to play Xbox games where the gamers decide how and when they want to play. With xCloud Console Streaming, you will be able to "turn your Xbox One into your own personal and free xCloud server." It will enable you to stream your entire Xbox One library, including games from Xbox Game Pass, to any device of your choice.

https://twitter.com/Xbox/status/1137833126959280128

Xbox Elite 2 Wireless Controller to reach you on November 4th for $179.99

Microsoft announced the launch of the Xbox Elite Wireless Controller Series 2, which it says is a totally re-engineered version of the previous Elite controller. It is open for pre-orders now and will be available on November 4th in 24 countries, priced at $179.99.

The controller's new adjustable-tension thumbsticks provide improved precision, and shorter hair-trigger locks enable you to fire faster. The device includes USB-C support, Bluetooth, and a rechargeable battery that lasts for up to 40 hours per charge. Along with all these updates, it also allows you to do limitless customizations with the Xbox Accessories app on Xbox One and Windows 10 PCs.
https://youtu.be/SYVw0KqQiOI

Cyberpunk 2077 featuring Keanu Reeves to release on April 16th, 2020

Last year, CD Projekt Red, the creator of Cyberpunk 2077, said that E3 2019 would be its "most important E3" ever, and we cannot agree more. Keanu Reeves, aka John Wick himself, came to announce the release date of Cyberpunk 2077, which is April 16th, 2020. The trailer of the game ended with the biggest surprise for the audience: the appearance of Reeves as a character apparently named "Mr. Fusion."

The crowd went wild as soon as Reeves took to the stage to promote Cyberpunk 2077. When the actor said that walking in the streets of Cyberpunk 2077 will be breathtaking, a guy from the crowd yelled, "you're breathtaking." To which Reeves kindly replied:

https://twitter.com/Xbox/status/1137854943006605312

The guy from the crowd was YouTuber Peter Sark, who shared on Twitter that "Keanu Reeves just announced to the world that I'm breathtaking."

https://twitter.com/petertheleader/status/1137846108305014784

CD Projekt Red is now giving him a free collector's edition copy of the game, which is amazing! For everyone else, don't be upset, as you can also pre-order Cyberpunk 2077's physical and collector's editions from the official website. Though, unlike xCloud, attendees will not be able to get a hands-on trial, they will still be able to see the demo presentation. The demo is happening at the South Hall in the LA Convention Center, booth 1023, on June 11-13th.

The new Microsoft Flight Simulator is powered by Azure cloud AI

Microsoft showcased a new installment of its long-running Microsoft Flight Simulator series. Powered by Azure cloud artificial intelligence and satellite data, this updated simulator is capable of rendering amazingly real visuals. Though not many details have been shared, its trailer shows stunning real-time 4K footage of lifelike landscapes and aircraft. Have a look at it yourself!

https://youtu.be/ReDDgFfWlS4

Though this simulator has been PC-only in the past, the newly updated simulator is coming to Xbox One and will also be available via Xbox Game Pass. The specific release dates are unknown, but they're expected to be out next year.

Double Fine joins Xbox Game Studios

At the event, Tim Schafer, the founder of Double Fine, shared that his company has now joined Microsoft's ever-growing gaming studio. Double Fine Productions is the studio behind games like Psychonauts, Brutal Legend, and Broken Age. He jokingly said, "For the last 19 years, we've been independent. Then Microsoft came to us and said, 'What if we gave you a bunch of money.' And I said 'OK, yeah.'"

Schafer posted another video on YouTube explaining what this means for the company's existing commitments. He shared that Psychonauts 2 will be provided to crowdfunders on the platforms they chose, but going forward the company will focus on "Xbox, Game Pass, and PC."

https://youtu.be/uR9yKz2C3dY

These were just a few key announcements from the event. To know more, you can watch the Microsoft keynote on YouTube:

https://www.youtube.com/watch?v=zeYQ-kPF0iQ

12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Microsoft introduces Service Mesh Interface (SMI) for interoperability across different service mesh technologies

Scripting for Animation in Maya

Packt
02 Aug 2016
28 min read
This article, written by Adrian Herbez, author of Maya Programming with Python Cookbook, will cover various recipes related to animating objects with scripting:

Querying animation data
Working with animation layers
Copying animation from one object to another
Setting keyframes
Creating expressions via script

(For more resources related to this topic, see here.)

In this article, we'll be looking at how to use scripting to create animation and set keyframes. We'll also see how to work with animation layers and create expressions from code.

Querying animation data

In this example, we'll be looking at how to retrieve information about animated objects, including which attributes are animated and both the location and value of keyframes. Although this script is unlikely to be useful by itself, knowing the number, time, and values of keyframes is sometimes a prerequisite for more complex animation tasks.

Getting ready

To get the most out of this script, you'll need to have an object with some animation curves defined. Either load up a scene with animation or skip ahead to the recipe on setting keyframes.

How to do it...

Create a new file and add the following code:

import maya.cmds as cmds

def getAnimationData():
    objs = cmds.ls(selection=True)
    obj = objs[0]
    animAttributes = cmds.listAnimatable(obj)
    for attribute in animAttributes:
        numKeyframes = cmds.keyframe(attribute, query=True, keyframeCount=True)
        if (numKeyframes > 0):
            print("---------------------------")
            print("Found ", numKeyframes, " keyframes on ", attribute)
            times = cmds.keyframe(attribute, query=True, index=(0,numKeyframes), timeChange=True)
            values = cmds.keyframe(attribute, query=True, index=(0,numKeyframes), valueChange=True)
            print('frame#, time, value')
            for i in range(0, numKeyframes):
                print(i, times[i], values[i])
            print("---------------------------")

getAnimationData()

If you select an object with animation curves and run the script, you should see a readout of the time and value for each keyframe on each animated attribute. For example, if we had a simple bouncing ball animation with the following curves:

We would see something like the following output in the script editor:

---------------------------
('Found ', 2, ' keyframes on ', u'|bouncingBall.translateX')
frame#, time, value
(0, 0.0, 0.0)
(1, 190.0, 38.0)
---------------------------
---------------------------
('Found ', 20, ' keyframes on ', u'|bouncingBall.translateY')
frame#, time, value
(0, 0.0, 10.0)
(1, 10.0, 0.0)
(2, 20.0, 8.0)
(3, 30.0, 0.0)
(4, 40.0, 6.4000000000000004)
(5, 50.0, 0.0)
(6, 60.0, 5.120000000000001)
(7, 70.0, 0.0)
(8, 80.0, 4.096000000000001)
(9, 90.0, 0.0)
(10, 100.0, 3.276800000000001)
(11, 110.0, 0.0)
(12, 120.0, 2.6214400000000011)
(13, 130.0, 0.0)
(14, 140.0, 2.0971520000000008)
(15, 150.0, 0.0)
(16, 160.0, 1.6777216000000008)
(17, 170.0, 0.0)
(18, 180.0, 1.3421772800000007)
(19, 190.0, 0.0)
---------------------------

How it works...

We start out by grabbing the selected object, as usual. Once we've done that, we'll iterate over all the keyframeable attributes, determine if they have any keyframes and, if they do, run through the times and values. To get the list of keyframeable attributes, we use the listAnimatable command:

objs = cmds.ls(selection=True)
obj = objs[0]
animAttributes = cmds.listAnimatable(obj)

This will give us a list of all the attributes on the selected object that can be animated, including any custom attributes that have been added to it.
If you were to print out the contents of the animAttributes array, you would likely see something like the following:

|bouncingBall.rotateX
|bouncingBall.rotateY
|bouncingBall.rotateZ

Although the bouncingBall.rotateX part likely makes sense, you may be wondering about the | symbol. This symbol is used by Maya to indicate hierarchical relationships between nodes in order to provide fully qualified node and attribute names. If the bouncingBall object was a child of a group named ballGroup, we would see this instead:

|ballGroup|bouncingBall.rotateX

Every such fully qualified name will contain at least one pipe (|) symbol, as we see in the first, nongrouped example, but there can be many more—one for each additional layer of hierarchy. While this can lead to long strings for attribute names, it allows Maya to make use of objects that may have the same name, but under different parts of a larger hierarchy (to have control objects named handControl for each hand of a character, for example).

Now that we have a list of all of the possibly animated attributes for the object, we'll next want to determine if there are any keyframes set on each of them. To do this, we can use the keyframe command in the query mode:

for attribute in animAttributes:
    numKeyframes = cmds.keyframe(attribute, query=True, keyframeCount=True)

At this point, we have a variable (numKeyframes) that will be greater than zero for any attribute with at least one keyframe. Getting the total number of keyframes on an attribute is only one of the things that the keyframe command can do; we'll also use it to grab the time and value for each of the keyframes. To do this, we'll call it two more times, both in the query mode—once to get the times and once to get the values:

times = cmds.keyframe(attribute, query=True, index=(0,numKeyframes), timeChange=True)
values = cmds.keyframe(attribute, query=True, index=(0,numKeyframes), valueChange=True)

These two lines are identical in everything except what type of information we're asking for. The important thing to note here is the index flag, which is used to tell Maya which keyframes we're interested in. The command requires a two-element argument representing the first (inclusive) and last (exclusive) index of keyframes to examine. So, if we had 20 keyframes in total, we would pass in (0,20), which would examine the keys with indices from 0 to 19.

The flags we're using to get the values likely look a bit odd—both valueChange and timeChange might lead you to believe that we would be getting relative values, rather than absolute. However, when used in the previously mentioned manner, the command will give us what we want—the actual time and value for each keyframe, as they appear in the graph editor.

If you want to query information on a single keyframe, you still have to pass in a pair of values—just use the index that you're interested in twice—to get the fourth frame, for example, use (3,3).

At this point, we have two arrays—the times array, which contains the time value for each keyframe, and the values array, which contains the actual attribute values. All that's left is to print out the information that we've found:

print('frame#, time, value')
for i in range(0, numKeyframes):
    print(i, times[i], values[i])

There's more...

Using the indices to get data on keyframes is an easy way to run through all of the data for a curve, but it's not the only way to specify a range. The keyframe command can also accept time values.
If we wanted to know how many keyframes existed on a given attribute between frame 1 and frame 100, for example, we could do the following:

numKeyframes = cmds.keyframe(attributeName, query=True, time=(1,100), keyframeCount=True)

Also, if you find yourself with highly nested objects and need to extract just the object and attribute names, you may find Python's built-in split function helpful. You can call split on a string to have Python break it up into a list of parts. By default, Python will break up the input string by spaces, but you can specify a particular string or character to split on. Assume that you have a string like the following:

|group4|group3|group2|group1|ball.rotateZ

Then, you could use split to break it apart based on the | symbol. It would give you a list, and using -1 as an index would give you just ball.rotateZ. Putting that into a function that can be used to extract the object/attribute names from a full string would be easy, and it would look something like the following:

def getObjectAttributeFromFull(fullString):
    parts = fullString.split("|")
    return parts[-1]

Using it would look something like this:

inputString = "|group4|group3|group2|group1|ball.rotateZ"
result = getObjectAttributeFromFull(inputString)
print(result) # outputs "ball.rotateZ"

Working with animation layers

Maya offers the ability to create multiple layers of animation in a scene, which can be a good way to build up complex animation. The layers can then be independently enabled or disabled, or blended together, granting the user a great deal of control over the end result. In this example, we'll be looking at how to examine the layers that exist in a scene, and we'll build a script that ensures we have a layer of a given name. For example, we might want to create a script that would add additional randomized motion to the rotations of selected objects without overriding their existing motion. To do this, we would want to make sure that we had an animation layer named randomMotion, which we could then add keyframes to.

How to do it...

Create a new script and add the following code:

import maya.cmds as cmds

def makeAnimLayer(layerName):
    baseAnimationLayer = cmds.animLayer(query=True, root=True)
    foundLayer = False
    if (baseAnimationLayer != None):
        childLayers = cmds.animLayer(baseAnimationLayer, query=True, children=True)
        if (childLayers != None) and (len(childLayers) > 0):
            if layerName in childLayers:
                foundLayer = True
    if not foundLayer:
        cmds.animLayer(layerName)
    else:
        print('Layer ' + layerName + ' already exists')

makeAnimLayer("myLayer")

Run the script, and you should see an animation layer named myLayer appear in the Anim tab of the channel box.

How it works...

The first thing that we want to do is to find out if there is already an animation layer with the given name present in the scene. To do this, we start by grabbing the name of the root animation layer:

baseAnimationLayer = cmds.animLayer(query=True, root=True)

In almost all cases, this should return one of two possible values—either BaseAnimation or (if there aren't any animation layers yet) Python's built-in None value.
We'll want to create a new layer in either of the following two possible cases:

There are no animation layers yet
There are animation layers, but none with the target name

In order to make testing for the above a bit easier, we first create a variable to hold whether or not we've found an animation layer and set it to False:

foundLayer = False

Now we need to check whether it's true that both animation layers exist and one of them has the given name. First off, we check that there was, in fact, a base animation layer:

if (baseAnimationLayer != None):

If this is the case, we want to grab all the children of the base animation layer and check to see whether any of them have the name we're looking for. To grab the children animation layers, we'll use the animLayer command again, again in the query mode:

childLayers = cmds.animLayer(baseAnimationLayer, query=True, children=True)

Once we've done that, we'll want to see if any of the child layers match the one we're looking for. We'll also need to account for the possibility that there were no child layers (which could happen if animation layers were created then later deleted, leaving only the base layer):

if (childLayers != None) and (len(childLayers) > 0):
    if layerName in childLayers:
        foundLayer = True

If there were child layers and the name we're looking for was found, we set our foundLayer variable to True. If the layer wasn't found, we create it. This is easily done by using the animLayer command one more time, with the name of the layer we're trying to create:

if not foundLayer:
    cmds.animLayer(layerName)

Finally, we finish off by printing a message if the layer was found, to let the user know.

There's more...

Having animation layers is great, in that we can make use of them when creating or modifying keyframes. However, we can't actually add animation to layers without first adding the objects in question to the animation layer. Let's say that we had an object named bouncingBall, and we wanted to set some keyframes on its translateY attribute, in the bounceLayer animation layer. The actual command to set the keyframe(s) would look something like this:

cmds.setKeyframe("bouncingBall.translateY", value=yVal, time=frame, animLayer="bounceLayer")

However, this would only work as expected if we had first added the bouncingBall object to the bounceLayer animation layer. To do it, we could use the animLayer command in the edit mode, with the addSelectedObjects flag. Note that because the flag operates on the currently selected objects, we would need to first select the object we want to add:

cmds.select("bouncingBall", replace=True)
cmds.animLayer("bounceLayer", edit=True, addSelectedObjects=True)

Adding the object will, by default, add all of its animatable attributes. You can also add specific attributes, rather than entire objects. For example, if we only wanted to add the translateY attribute to our animation layer, we could do the following:

cmds.animLayer("bounceLayer", edit=True, attribute="bouncingBall.translateY")
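Putting these pieces together, a small sketch (mine, not from the book) that makes sure a layer exists, adds an object to it, and then keys an attribute on that layer might look like the following; the object and layer names are hypothetical, and the exists query flag on animLayer is an assumption worth verifying against your Maya version's documentation:

import maya.cmds as cmds

def keyOnLayer(obj, attribute, value, frame, layerName):
    # Create the animation layer if it doesn't already exist
    # (assumes animLayer supports the 'exists' query flag)
    if not cmds.animLayer(layerName, query=True, exists=True):
        cmds.animLayer(layerName)
    # The object must be a member of the layer before keys can land on it
    cmds.select(obj, replace=True)
    cmds.animLayer(layerName, edit=True, addSelectedObjects=True)
    # Finally, set the keyframe on the requested layer
    cmds.setKeyframe("{0}.{1}".format(obj, attribute), value=value,
                     time=frame, animLayer=layerName)

keyOnLayer("bouncingBall", "translateY", 5.0, 10, "bounceLayer")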
Copying animation from one object to another

In this example, we'll create a script that will copy all of the animation data on one object to one or more additional objects, which could be useful to duplicate motion across a range of objects.

Getting ready

For the script to work, you'll need an object with some keyframes set. Either create some simple animation or skip ahead to the example on creating keyframes with script, later in this article.

How to do it...

Create a new script and add the following code:

import maya.cmds as cmds

def getAttName(fullname):
    parts = fullname.split('.')
    return parts[-1]

def copyKeyframes():
    objs = cmds.ls(selection=True)
    if (len(objs) < 2):
        cmds.error("Please select at least two objects")
    sourceObj = objs[0]
    animAttributes = cmds.listAnimatable(sourceObj)
    for attribute in animAttributes:
        numKeyframes = cmds.keyframe(attribute, query=True, keyframeCount=True)
        if (numKeyframes > 0):
            cmds.copyKey(attribute)
            for obj in objs[1:]:
                cmds.pasteKey(obj, attribute=getAttName(attribute), option="replace")

copyKeyframes()

Select the animated object, shift-select at least one other object, and run the script. You'll see that all of the objects have the same motion.

How it works...

The very first part of our script is a helper function that we'll be using to strip the attribute name off a full object name/attribute name string. More on it later. Now on to the bulk of the script. First off, we run a check to make sure that the user has selected at least two objects. If not, we'll display a friendly error message to let the user know what they need to do:

objs = cmds.ls(selection=True)
if (len(objs) < 2):
    cmds.error("Please select at least two objects")

The error command will also stop the script from running, so if we're still going, we know that we had at least two objects selected. We'll set the first one selected to be our source object. We could just as easily use the second-selected object, but that would mean using the first selected object as the destination, limiting us to a single target:

sourceObj = objs[0]

Now we're ready to start copying animation, but first, we'll need to determine which attributes are currently animated, through a combination of finding all the attributes that can be animated and checking each one to see whether there are any keyframes on it:

animAttributes = cmds.listAnimatable(sourceObj)
for attribute in animAttributes:
    numKeyframes = cmds.keyframe(attribute, query=True, keyframeCount=True)

If we have at least one keyframe for the given attribute, we move forward with the copying:

if (numKeyframes > 0):
    cmds.copyKey(attribute)

The copyKey command will cause the keyframes for a given object to be temporarily held in memory. If used without any additional flags, it will grab all of the keyframes for the specified attribute, exactly what we want in this case. If we wanted only a subset of the keyframes, we could use the time flag to specify a range. We're passing in each of the values that were returned by the listAnimatable function. These will be full names (both object name and attribute). That's fine for the copyKey command, but will require a bit of additional work for the paste operation.

Since we're copying the keys onto a different object than the one that we copied them from, we'll need to separate out the object and attribute names. For example, our attribute value might be something like this:

|group1|bouncingBall.rotateX

From this, we'll want to trim off just the attribute name (rotateX), since we're getting the object name from the selection list. To do this, we created a simple helper function that takes a full-length object/attribute name and returns just the attribute name. That's easy enough to do by just breaking the name/attribute string apart on the "."
and returning the last element, which in this case is the attribute:

def getAttName(fullname):
    parts = fullname.split('.')
    return parts[-1]

Python's split function breaks apart the string into an array of strings, and using a negative index will count back from the end, with -1 giving us the last element. Now we can actually paste our keys. We'll run through all the remaining selected objects, starting with the second, and paste our copied keyframes:

for obj in objs[1:]:
    cmds.pasteKey(obj, attribute=getAttName(attribute), option="replace")

Note that we're using the nature of Python's for loops to make the code a bit more readable. Rather than using an index, as would be the case in most other languages, we can just use the for x in y construction. In this case, obj will be a temporary variable, scoped to the for loop, that takes on the value of each item in the list. Also note that instead of passing in the entire list, we use objs[1:] to indicate the entire list, starting at index 1 (the second element). The colon allows us to specify a subrange of the objs list, and leaving the right-hand side blank will cause Python to include all the items to the end of the list.

We pass in the name of the object (from our original selection) and the attribute (stripped from the full name/attribute string via our helper function), and we use option="replace" to ensure that the keyframes we're pasting in replace anything that's already there.

Original animation (top). Here, we see the result of pasting keys with the default settings (left) and with the replace option (right). Note that the default results still contain the original curves, just pushed to later frames.

If we didn't include the option flag, Maya would default to inserting the pasted keyframes while moving any keyframes already present forward in the timeline.

There's more...

There are a lot of other options for the option flag, each of which handles possible conflicts between the keys you're pasting and the ones that may already exist in a slightly different way. Be sure to have a look at the built-in documentation for the pasteKey command for more information.

Another, and perhaps better, option to control how pasted keys interact with existing ones is to paste the new keys into a separate animation layer. For example, if we wanted to make sure that our pasted keys end up in an animation layer named extraAnimation, we could modify the call to pasteKey as follows:

cmds.pasteKey(objs[i], attribute=getAttName(attribute), option="replace", animLayer="extraAnimation")

Note that if there is no animation layer named extraAnimation present, Maya will fail to copy the keys. See the section on working with animation layers for more information on how to query existing layers and create new ones.

Setting keyframes

While there are certainly a variety of ways to get things to move in Maya, the vast majority of motion is driven by keyframes. In this example, we'll be looking at how to create keyframes with code by making that old animation standby—a bouncing ball.

Getting ready

The script we'll be creating will animate the currently selected object, so make sure that you have an object—either the traditional sphere or something else you'd like to make bounce.

How to do it...
Create a new file and add the following code:

import maya.cmds as cmds

def setKeyframes():
    objs = cmds.ls(selection=True)
    obj = objs[0]
    yVal = 0
    xVal = 0
    frame = 0
    maxVal = 10
    for i in range(0, 20):
        frame = i * 10
        xVal = i * 2
        if i % 2 == 1:
            yVal = 0
        else:
            yVal = maxVal
            maxVal *= 0.8
        cmds.setKeyframe(obj + '.translateY', value=yVal, time=frame)
        cmds.setKeyframe(obj + '.translateX', value=xVal, time=frame)

setKeyframes()

Run the preceding script with an object selected and trigger playback. You should see the object move up and down.

How it works...

In order to get our object to bounce, we'll need to set keyframes such that the object alternates between a Y-value of zero and an ever-decreasing maximum, so that the animation mimics the way a falling object loses velocity with each bounce. We'll also make it move forward along the x-axis as it bounces.

We start by grabbing the currently selected object and setting a few variables to make things easier to read as we run through our loop. Our yVal and xVal variables will hold the current values that we want to set the position of the object to. We also have a frame variable to hold the current frame and a maxVal variable, which will be used to hold the Y-value of the object's current height. This example is sufficiently simple that we don't really need separate variables for frame and the attribute values, but setting things up this way makes it easier to swap in more complex math or logic to control where keyframes get set and to what value. This gives us the following:

yVal = 0
xVal = 0
frame = 0
maxVal = 10

The bulk of the script is a single loop, in which we set keyframes on both the X and Y positions. For the xVal variable, we'll just be multiplying a constant value (in this case, 2 units). We'll do the same thing for our frame. For the yVal variable, we'll want to alternate between an ever-decreasing value (for the successive peaks) and zero (for when the ball hits the ground).

To alternate between zero and non-zero, we'll check to see whether our loop variable is divisible by two. One easy way to do this is to take the value modulo (%) 2. This will give us the remainder when the value is divided by two, which will be zero in the case of even numbers and one in the case of odd numbers. For odd values, we'll set yVal to zero, and for even ones, we'll set it to maxVal. To make sure that the ball bounces a little less each time, we set maxVal to 80% of its current value each time we make use of it. Putting all of that together gives us the following loop:

for i in range(0, 20):
    frame = i * 10
    xVal = i * 2
    if (i % 2) == 1:
        yVal = 0
    else:
        yVal = maxVal
        maxVal *= 0.8

Now we're finally ready to actually set keyframes on our object. This is easily done with the setKeyframe command. We'll need to specify the following three things:

The attribute to keyframe (object name and attribute)
The time at which to set the keyframe
The actual value to set the attribute to

In this case, this ends up looking like the following:

cmds.setKeyframe(obj + '.translateY', value=yVal, time=frame)
cmds.setKeyframe(obj + '.translateX', value=xVal, time=frame)

And that's it! A proper bouncing ball (or other object) animated with pure code.

There's more...

By default, the setKeyframe command will create keyframes with both the in tangent and out tangent set to spline. That's fine for a lot of things, but it will result in overly smooth animation for something that's supposed to be striking a hard surface.
We can improve our bounce animation by keeping smooth tangents for the keyframes when the object reaches its maximum height, but setting the tangents at its minimum to be linear. This will give us a nice sharp change every time the ball strikes the ground. To do this, all we need to do is set both the inTangentType and outTangentType flags to linear, as follows:

cmds.setKeyframe(obj + '.translateY', value=yVal, time=frame,
                 inTangentType="linear", outTangentType="linear")

To make sure that we only have linear tangents when the ball hits the ground, we could set up a variable to hold the tangent type, and set it to one of two values in much the same way that we set the yVal variable. This would end up looking like this:

tangentType = "auto"
for i in range(0, 20):
    frame = i * 10
    if i % 2 == 1:
        yVal = 0
        tangentType = "linear"
    else:
        yVal = maxVal
        tangentType = "spline"
        maxVal *= 0.8
    cmds.setKeyframe(obj + '.translateY', value=yVal, time=frame,
                     inTangentType=tangentType, outTangentType=tangentType)

Creating expressions via script
While most animation in Maya is created manually, it can often be useful to drive attributes directly via script, especially for mechanical objects or background items. One way to approach this is through Maya's expression editor. In addition to creating expressions via the expression editor, it is also possible to create expressions with scripting, in a beautiful example of code-driven code. In this example, we'll be creating a script that can be used to create a sine wave-based expression to smoothly alter a given attribute between two values.
Note that expressions cannot actually use Python code directly; they require the code to be written in the MEL syntax. But this doesn't mean that we can't use Python to create expressions, which is what we'll do in this example.
Getting ready
Before we dive into the script, we'll first need to have a good handle on the kind of expression we'll be creating. There are a lot of different ways to approach expressions, but in this instance, we'll keep things relatively simple and tie the attribute to a sine wave based on the current time.
Why a sine wave? Sine waves are great because they alter smoothly between two values, with a nice easing into and out of both the minimum and the maximum. While the minimum and maximum values range from -1 to 1, it's easy enough to alter the output to move between any two numbers we want. We'll also make things a bit more flexible by setting up the expression to rely on a custom speed attribute that can be used to control the rate at which the attribute animates. The end result will be a value that varies smoothly between any two numbers at a user-specified (and keyframeable) rate.
How to do it...
Create a new script and add the following code (note that we pass the speed argument to addAttr as the attribute's default value, so that the argument actually takes effect):

import maya.cmds as cmds

def createExpression(att, minVal, maxVal, speed):
    objs = cmds.ls(selection=True)
    obj = objs[0]
    cmds.addAttr(obj, longName="speed", shortName="speed", min=0,
                 defaultValue=speed, keyable=True)
    amplitude = (maxVal - minVal) / 2.0
    offset = minVal + amplitude
    baseString = "{0}.{1} = ".format(obj, att)
    sineClause = '(sin(time * ' + obj + '.speed)'
    valueClause = ' * ' + str(amplitude) + ' + ' + str(offset) + ')'
    expressionString = baseString + sineClause + valueClause
    cmds.expression(string=expressionString)

createExpression('translateY', 5, 10, 1)

How it works...
The first thing we do is add a speed attribute to our object.
We'll be sure to make it keyable for later animation:

cmds.addAttr(obj, longName="speed", shortName="speed", min=0,
             defaultValue=speed, keyable=True)

It's generally a good idea to include at least one keyframeable attribute when creating expressions. While math-driven animation is certainly a powerful technique, you'll likely still want to be able to alter the specifics. Giving yourself one or more keyframeable attributes is an easy way to do just that.
Now we're ready to build up our expression. But first, we'll need to understand exactly what we want; in this case, a value that smoothly varies between two extremes, with the ability to control its speed. We can easily build an expression to do that using the sine function, with the current time as the input. Here's what it looks like in a general form:

animatedValue = (sin(time * S) * M) + O;

Where:
- S is a value that will either speed up (if greater than 1) or slow down (if less) the rate at which the input to the sine function changes
- M is a multiplier to alter the overall range through which the value changes
- O is an offset to ensure that the minimum and maximum values are correct

You can also think about it visually: S will cause our wave to stretch or shrink along the horizontal (time) axis, M will expand or contract it vertically, and O will move the entire shape of the curve either up or down.
S is already taken care of; it's our newly created speed attribute. M and O will need to be calculated, based on the fact that sine functions always produce values ranging from -1 to 1. The overall range of values should run from our minVal to our maxVal, so you might think that M should be equal to (maxVal - minVal). However, since it gets applied to both -1 and 1, this would leave us with double the desired change. So, the final value we want is instead (maxVal - minVal)/2. We store that in our amplitude variable as follows:

amplitude = (maxVal - minVal) / 2.0

Next up is the offset value O. We want to move our graph such that the minimum and maximum values are where they should be. It might seem like that would mean just adding our minVal, but if we left it at that, our output would dip below the minimum for 50% of the time (anytime the sine function is producing negative output). To fix it, we set O to (minVal + M), or in the case of our script:

offset = minVal + amplitude

This way, we move the zero position of the wave to be midway between our minVal and maxVal, which is exactly what we want.
To make things clearer, let's look at the different parts we're tacking onto sin(), and the way they affect the minimum and maximum values the expression will output. We'll assume that the end result we're looking for is a range from 0 to 4.

Expression                     Additional component         Minimum       Maximum
sin(time)                      None (raw sine function)     -1            1
sin(time * speed)              Multiply input by "speed"    -1 (faster)   1 (faster)
sin(time * speed) * 2          Multiply output by 2         -2            2
(sin(time * speed) * 2) + 2    Add 2 to output              0             4

Note that 2 = (4 - 0)/2 and 2 = 0 + 2. Here's what the preceding progression looks like when graphed:
Four steps in building up an expression to vary an attribute from 0 to 4 with a sine function.
Okay, now that we have the math locked down (the quick check below confirms the numbers), we're ready to translate it into Maya's expression syntax.
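As a quick sanity check (plain Python, independent of Maya, and not part of the recipe), we can confirm that the amplitude and offset formulas reproduce the 0-to-4 range from the table:

import math

minVal, maxVal = 0.0, 4.0
amplitude = (maxVal - minVal) / 2.0   # 2.0
offset = minVal + amplitude           # 2.0

# sin() spans [-1, 1], so the expression spans
# [offset - amplitude, offset + amplitude] = [0, 4].
low = math.sin(-math.pi / 2) * amplitude + offset
high = math.sin(math.pi / 2) * amplitude + offset
print(low, high)  # 0.0 4.0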
If we wanted an object named myBall to animate along Y with the previous values, we would want to end up with:

myBall.translateY = (sin(time * myBall.speed) * 5) + 12;

This would work as expected if entered into Maya's expression editor, but we want to make sure that we have a more general-purpose solution that can be used with any object and any values. That's straightforward enough and just requires building up the preceding string from various literals and variables, which is what we do in the next few lines:

baseString = "{0}.{1} = ".format(obj, att)
sineClause = '(sin(time * ' + obj + '.speed)'
valueClause = ' * ' + str(amplitude) + ' + ' + str(offset) + ')'
expressionString = baseString + sineClause + valueClause

I've broken up the string creation into a few different lines to make things clearer, but it's by no means necessary. The key idea here is that we're switching back and forth between literals (sin(time *, .speed, and so on) and variables (obj, att, amplitude, and offset) to build the overall string. Note that we have to wrap numbers in the str() function to keep Python from complaining when we combine them with strings.
At this point, we have our expression string ready to go. All that's left is to actually add it to the scene as an expression, which is easily done with the expression command:

cmds.expression(string=expressionString)

And that's it! We will now have an attribute that varies smoothly between any two values.
There's more...
There are tons of other ways to use expressions to drive animation, and all sorts of simple mathematical tricks that can be employed. For example, you can easily get a value to move smoothly to a target value, with a nice easing-in to the target, by running this every frame:

animatedAttribute = animatedAttribute + (targetValue - animatedAttribute) * 0.2;

This will add 20% of the current difference between the target and the current value to the attribute, which will move it towards the target. Since the amount that is added is always a percentage of the current difference, the per-frame effect reduces as the value approaches the target, providing an ease-in effect.
If we were to combine this with some code to randomly choose a new target value, we would end up with an easy way to, say, animate the heads of background characters to randomly look in different directions (maybe to provide a stadium crowd). Assuming that we had added custom attributes for targetX, targetY, and targetZ to an object named myCone, that would end up looking something like the following:

if (frame % 20 == 0) {
    myCone.targetX = rand(time) * 360;
    myCone.targetY = rand(time) * 360;
    myCone.targetZ = rand(time) * 360;
}
myCone.rotateX = myCone.rotateX + (myCone.targetX - myCone.rotateX) * 0.2;
myCone.rotateY = myCone.rotateY + (myCone.targetY - myCone.rotateY) * 0.2;
myCone.rotateZ = myCone.rotateZ + (myCone.targetZ - myCone.rotateZ) * 0.2;

Note that we're using the modulo (%) operator to do something (setting the target) only when the frame is an even multiple of 20. We're also using the current time as the seed value for the rand() function to ensure that we get different results as the animation progresses.
The previously mentioned example is how the code would look if we entered it directly into Maya's expression editor; note the MEL-style (rather than Python) syntax. Generating this code via Python would be a bit more involved than our sine wave example, but would use all the same principles: building up a string from literals and variables, then passing that string to the expression command. One possible shape for such a generator is sketched below.
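To make that concrete, here is one way such a generator might look (a hedged sketch under our own assumptions, not the book's code: the function name, the 20-frame interval, and the 0.2 easing factor are simply carried over from the example above, and the target attributes are assumed to already exist on the object):

import maya.cmds as cmds

def createRandomLookExpression(obj, interval=20, scale=360):
    # Build the MEL expression line by line, exactly as we did
    # for the sine wave: literals plus variables.
    lines = ['if (frame %% %d == 0) {' % interval]
    for axis in 'XYZ':
        lines.append('    %s.target%s = rand(time) * %d;' % (obj, axis, scale))
    lines.append('}')
    for axis in 'XYZ':
        lines.append(
            '%s.rotate%s = %s.rotate%s + (%s.target%s - %s.rotate%s) * 0.2;'
            % ((obj, axis) * 4))
    cmds.expression(string='\n'.join(lines))

createRandomLookExpression('myCone')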
Summary
In this article, we primarily discussed scripting and animation using Maya.
Resources for Article:
Further resources on this subject:
Introspecting Maya, Python, and PyMEL [article]
Discovering Python's parallel programming tools [article]
Mining Twitter with Python – Influence and Engagement [article]

Minecraft Java team are open sourcing some of Minecraft's code as libraries

Sugandha Lahoti
08 Oct 2018
2 min read
Stockholm's Minecraft Java team are open sourcing some of Minecraft's code as libraries for game developers. Developers can now use them to improve their Minecraft mods, use them for their own projects, or help improve pieces of the Minecraft Java engine. The team will open up different libraries gradually. These libraries are open source and MIT licensed. For now, they have open sourced two libraries: Brigadier and DataFixerUpper.
Brigadier
The first library, Brigadier, takes strings of text entered into Minecraft and turns them into an actual function that the game will perform. Basically, if you enter something like /give Dinnerbone sticks in the game, it goes internally into Brigadier, which breaks it down into pieces and tries to figure out what the user is asking the game to do. Nathan Adams, a Java developer, hopes that giving the Minecraft community access to Brigadier can make it "extremely user-friendly one day." Brigadier has been available for a week now. It has already seen improvements in the code and the readme doc.
DataFixerUpper
Another important library of the Minecraft game engine, DataFixerUpper, is also being open sourced. Whenever a developer adds a new feature into Minecraft, they have to change the way level data and save files are stored. DataFixerUpper converts these older data formats to the one the game should currently be using. Also in consideration for open sourcing is the Blaze3D library, which is a complete rewrite of the render engine for Minecraft 1.14.
You can check out the announcement on the Minecraft website. You can also download Brigadier and DataFixerUpper.
Minecraft is serious about global warming, adds a new (spigot) plugin to allow changes in climate mechanics.
Learning with Minecraft Mods
A Brief History of Minecraft Modding

Are you looking at transitioning from being a developer to manager? Here are some leadership roles to consider

Packt Editorial Staff
04 Jul 2019
6 min read
What does the phrase "a manager" really mean anyway? It means different things to different people, and is often applied loosely to positions that really match an analyst-level profile. Although common, the term is worth defining, especially in the context of software development. This article is an excerpt from the book The Successful Software Manager written by an internationally experienced IT manager, Herman Fung. The book is a comprehensive and practical guide to managing software developers and software customers, and explores the process of deciding what software needs to be built, not how to build it. In this article, we'll look into aspects you must be aware of before making the move to become a manager in the software industry.
A simple distinction I once used to illustrate the difference between an analyst and a manager is that while an analyst identifies, collects, and analyzes information, a manager uses this analysis to make decisions, or more accurately, is responsible and accountable for the decisions they make.
The structure of software companies is now enormously diverse and varies a lot from one to another, which has an obvious impact on how the manager's role and responsibilities are defined; these will be unique to each company. Even within the same company, the role is subject to change from time to time, as the company itself changes. Broadly speaking, a manager within software development can be classified into three categories, as we will now discuss:
Team Leader/Manager
This role is often a lead developer who also doubles up as the team spokesperson and single point of contact. They'll typically be the most senior and knowledgeable member of a small group of developers who work on the same project, product, and technology. There is often a direct link between each developer in the team and their code, which means the team manager has a direct responsibility to ensure the product as a whole works. Usually, the team manager is also asked to fulfill people management duties, such as performance reviews and appraisals, and day-to-day HR responsibilities.
Development/Delivery Manager
This person could be either a techie or a non-techie. They will have a good understanding of the requirements, design, code, and end product. They will run workshops and huddles to facilitate better overall teamwork and delivery. This role may include setting up visual aids, such as team/project charts or boards. In a matrix management model, where developers and other experts are temporarily asked to work in project teams, the development manager will not be responsible for HR and people management duties.
Project Manager
This person is most probably a non-techie, but there are exceptions, and this could be a distinct advantage on certain projects. Most importantly, a project manager will be process-driven and output-focused, and will concentrate on distributing tasks to individuals. They are not expected to jump in to solve technical problems, but they are responsible for ensuring that the proper resources are available, while managing expectations. Specifically, they take part in managing the project budget, timeline, and risks. They should also be aware of the political landscape and management agenda within the organization to be able to navigate through them. The project manager ensures the project follows the required methodology or process framework mandated by the Project Management Office (PMO).
They will not have people-management responsibilities for project team members.
Agile practitioner
As with all roles in today's world of tech, these categories will vary and overlap. They can even be held by the same person, which is becoming an increasingly common arrangement. They are also constantly evolving, which exemplifies the need to learn and grow continually, regardless of your role or position. If you are a true Agile practitioner, you may take issue with these generalized categories (Team Leader, Development Manager, and Project Manager), and you'd be right to do so! These categories are most applicable to an organization that practises the traditional Waterfall model. Without diving into the everlasting Waterfall vs Agile debate, let's just say that these are the categories that transcend any methodology. Even if they're not referred to by these names, they are the roles that need to be performed, to varying degrees, at various times. For completeness, it is worth noting one role specific to Agile: the scrum master.
Scrum master
A scrum master is a role often compared, rightly or wrongly, with that of the project manager. The key difference is that their focus is on facilitation and coaching, instead of organizing and control. This difference is as much a mindset as it is a strict practice, and these are often referred to as the attributes of Servant Leadership. I believe a good scrum master will show traits of a good project manager at various times, and vice versa. This is especially true in ensuring that there is clear communication at all times and that the team stays focused on delivering together.
Yet, as we look back at all these roles, it's worth remembering that with the advent of new disciplines such as big data, blockchain, artificial intelligence, and machine learning, there are new categories and opportunities to move from a developer role into a management position, for example, as an algorithm manager or data manager.
Transitioning, growing, progressing, or simply changing from a developer to a manager is a wonderfully rewarding journey that is unique to everyone. After clarifying what being a "modern manager" really means, and the broad categories applicable in software development (Team/Development/Project/Agile), the overarching and often key consideration for developers is whether the move means they will be managing people and writing less code.
In this article, we looked into the different leadership roles that are available to developers in their career progression plans. Develop crucial skills to enhance your performance and advance your career with The Successful Software Manager written by Herman Fung.
"Developers don't belong on a pedestal, they're doing a job like everyone else" – April Wensel on toxic tech culture and Compassionate Coding [Interview]
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
'I code in my dreams too', say developers in Jetbrains State of Developer Ecosystem 2019 Survey

Animating Sprites

Packt
03 Aug 2016
10 min read
In this article by Abdelrahman Saher and Francesco Sapio, authors of the book Unity 5.x 2D Game Development Blueprints, we will learn how to create and play animations for the player character, and see how Unity controls the player and other elements in the game. The following is what we will go through:
- Animating sprites
- Integrating animations into animators
- Continuing our platform game
Animating sprites
Creating and using animation for sprites is a bit easier than other parts of the development stage. By using animations and the tools to animate our game, we have the ability to breathe some life into it. Let's start by creating a running animation for our player. There are two ways of creating animations in Unity: automatic clip creation and manual clip creation.
Automatic clip creation
This is the recommended method for creating 2D animations. Here, Unity is able to create the entire animation for you with a single click. If you navigate in the Project panel to Platformer Pack | Player | p1_walk, you can find an animation sheet as a single file, p1_walk.png, and a folder of PNG images, one for each frame of the animation. We will use the latter. The reason for this is that the single sprite sheet will not work perfectly as it is, since it is not optimized for Unity.
In the Project panel, create a new folder and rename it Animations. Then, select all the PNG images in Platformer Pack | Player | p1_walk | PNG and drop them in the Hierarchy panel:
A new window will appear, giving us the possibility to save them as a new animation in a folder of our choice. Let's save the animation in our new folder titled Animations as WalkAnim:
After saving the animation, look in the Project panel next to the animation file. Now, there is another asset with the name of one of the dropped sprites. This is an Animator Controller and, as the name suggests, it is used to control the animation. Let's rename it PlayerAnimator so that we can distinguish it later on. In the Hierarchy panel, a game object has been automatically created with the original name of our controller. If we select it, the Inspector should look like the following:
You can always add an Animator component to a game object by clicking on Add Component | Miscellaneous | Animator.
As you can see, below the Sprite Renderer component there is an Animator component. This component will control the animation for the player and is usually accessed through a custom script to change the animations. For now, drag and drop the new controller PlayerAnimator on to our Player object.
Manual clip creation
Now, we also need a jump animation for our character. However, since we only have one sprite for the player jumping, we will manually create the animation clip for it. To achieve this, select the Player object in the Hierarchy panel and open the Animation window from Window | Animation. The Animation window will appear, as shown in the screenshot below:
As you can see, our animation WalkAnim is already selected. To create a new animation clip, click on where the text WalkAnim is. As a result, a dropdown menu appears, from which you can select Create New Clip. Save the new animation in the Animations folder as JumpAnim. On the right, you can find the animation timeline. Select the Platformer Pack/Player folder in the Project panel. Drag and drop the sprite p1_jump on to the timeline. You can see that the timeline for the animation has changed.
In fact, it now contains the jumping animation, even if it is made out of only one sprite. Finally, save what we have done so far.
The Animation window's features are best used to fine-tune an animation, or even to merge two or more animations into one.
Now the Animations folder should look like this in the Project panel:
By selecting the WalkAnim file, you will be able to see the Preview panel, which is located at the bottom of the Inspector when an object that may contain animation is selected. To test the animation, drag the Player object and drop it in the Preview panel and hit play:
In the Preview panel, you can check out your animations without having to test them directly from code. You can easily select the desired animation, then drag the game object with the corresponding Animator Controller and drop it in the Preview panel.
The Animator
In order to display an animation on a game object, you will be using both Animator Components and Animator Controllers. These two work hand in hand to control the animation of any animated object that you might have, and are described below:
- The Animator Controller uses a state machine to manage the animation states and the transitions between one another, almost like a flow chart of animations.
- The Animator Component uses an Animator Controller to define which animation clips to use and applies them on the game object when needed. It also controls the blending and the transitions between them.
Let's start modifying our controller to make it right for our character animations. Click on the Player and then open the Animator window from Window | Animator. We should see something like this:
This is a state machine, although it is automatically generated. To move around the grid, hold the middle mouse button and drag. First, let's understand how all the different kinds of nodes work:
- Entry node (marked green): It is used when transitioning into a state machine, provided the required conditions are met.
- Exit node (marked red): It is used to exit a state machine when the conditions have been changed or completed. By default, it is not present, as there isn't one in the previous image.
- Default node (marked orange): It is the default state of the Animator and is automatically transitioned to from the entry node.
- Sub-state nodes (marked grey): They are also called custom nodes. They are typically used to represent a state for an object where an event will occur (in our case, an animation will be played).
- Transitions (arrows): They allow state machines to switch between one another by setting the conditions that will be used by the Animator to decide which state will be activated.
To keep things organized, let's reorder the nodes in the grid. Drag the three sub-states to just under the Entry node. Order them from left to right as WalkAnim, New Animation, and JumpAnim. Then, right-click on New Animation and choose Set as Layer Default State. Now, our Animator window should look like the following:
To edit a node, we need to select it and modify it as needed in the Inspector. So, select New Animation; the Inspector should look like the screenshot below:
Here, we have access to all the properties of the state (or node) New Animation. Let's change its name to Idle. Next is the speed of the state, which controls how fast its animation will be played. Then we have Motion, which refers to the animation clip that will be used for this state.
After we have changed the name, save the scene; this is what everything should look like now:
We can test what we have done so far by hitting play. As we can see in the Game view, the character is not animated. This is because the character is always in the Idle state and there are no transitions to let him change state. While the game is in runtime, we can see in the Animator window that the Idle state is running.
Stop the game, and right-click on the WalkAnim node in the Animator window. Select Set as Layer Default State from the menu. As a result, the walking animation will be played automatically at the beginning of the game. If we press the play button again, we can see that the walk animation is played, as shown in the screenshot below:
You can experiment with the other states of the Animator. For example, you can try to set JumpAnim as the default animation, or even tweak the speed of each state to see how they will be affected.
Now that we know the basics of how the Animator works, let's stop the playback and revert the default state to the Idle state. To be able to connect our states together, we need to create transitions. To achieve this, right-click on the Idle state and select Make Transition, which turns the mouse cursor into an arrow. By clicking on other states, we can connect them with a transition. In our case, click on the WalkAnim state to make a transition from the Idle state to the WalkAnim state. The Animator window should look like the following:
If we click on the arrow, we have access to its properties in the Inspector, as shown in the following screenshot:
The main properties that we might want to change are:
- Name (optional): We can assign a name to the transition. This is useful to keep everything organized and easy to access. In this case, let's name this transition Start Walking.
- Has Exit Time: Whether or not the animation should be played to the end before exiting its state when the conditions are no longer being met.
- Conditions: The conditions that should be met for the transition to take place.
Let's try adding a condition and see what happens:
When we try to create a condition for our transition, the message Parameter does not exist in Controller appears next to it, which means that we need to add a parameter to be used for our condition. To create a parameter, switch to Parameters in the top left of the Animator window and add a new float using the + button; name it PlayerSpeed, as shown in the following screenshot:
Any parameters that are created in the Animator are usually changed from code, and those changes affect the state of the animation. In the following screenshot, we can see the PlayerSpeed parameter on the left side:
Now that we have created a parameter, let's head back to the transition. Click the dropdown button next to the condition we created earlier and choose the parameter PlayerSpeed. After choosing the parameter, another option appears next to it. You can choose either Greater or Less, which means that the transition will happen when this parameter is greater than X or less than X, respectively. Don't worry, that X will be changed by our code later on. For now, choose Greater and set the value to 1, which means that when the player's speed is more than one, the walk animation starts playing. You can test what we have done so far and change the PlayerSpeed parameter at runtime; a small script for driving it is sketched below.
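Since the Animator on its own never changes PlayerSpeed, it is usually driven from a script. The following is a minimal sketch of how that might look (the PlayerSpeed name matches the parameter we just created, but the assumption that the player moves via a Rigidbody2D is ours):

using UnityEngine;

public class PlayerAnimationDriver : MonoBehaviour
{
    private Animator animator;
    private Rigidbody2D body;

    void Awake()
    {
        animator = GetComponent<Animator>();
        body = GetComponent<Rigidbody2D>();
    }

    void Update()
    {
        // Feed the horizontal speed into the Animator so the
        // Idle -> WalkAnim transition fires once it exceeds 1.
        animator.SetFloat("PlayerSpeed", Mathf.Abs(body.velocity.x));
    }
}

Attached to the Player object alongside the Animator, a script like this would start the walk animation as soon as the character moves fast enough.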
Summary
This wraps up everything that we will cover in this article. So far, we have added animations to our character, to be played according to the player's controls.
Resources for Article:
Further resources on this subject:
Animations in Cocos2d-x [Article]
Adding Animations [Article]
Bringing Your Game to Life with AI and Animations [Article]

Adding Fog to Your Games

Packt
21 Sep 2015
8 min read
In this article by Muhammad A. Moniem, author of the book Unreal Engine Lighting and Rendering Essentials, we cannot speak about rendering without mentioning one of the oldest (but most important) rendering features since the rise of 3D rendering. Fog effects have always been an essential part of any rendering engine, regardless of the main goal of that engine. In games, however, this feature is a must-have, not only because of the ambiance and feel it gives to the game, but also because it minimizes the draw distance while rendering large and open areas, which is great performance-wise! Fog effects can be used for many purposes, from adding ambiance to the world, to setting a global mood (perhaps scary), to simulating a real environment, or even to distracting the players. By the end of this little article, you'll be able to:
- Understand both the fog types in Unreal Engine
- Understand the difference between the two fog types
- Master all the parameters that control the fog types
Having said this, let's get started!
The fog types
Unreal Engine provides the user with two varieties of fog; each has its own set of parameters to modify and produces a different result. The two supported fog types are as follows:
- The Atmospheric Fog
- The Exponential Height Fog
The Atmospheric Fog
The Atmospheric Fog gives an approximation of light scattering through a planetary atmosphere. It is the best fog method to use with a natural environment scene, such as a landscape scene. One of the core features of this fog is that it gives your directional light a sun disc effect.
Adding it to your game
By adding an actor from the Visual Effects section of the Modes panel, or from the actor's context menu by right-clicking on the scene view, you can place the Atmospheric Fog in your level directly. In the Visual Effects submenu of the Modes panel, you can find both the fog types listed.
In order to control the quality of the final visual look of the newly inserted fog, you will have to tweak the properties attached to the actor:
- Sun Multiplier: This is an overall multiplier for the directional light's brightness. Increasing this value will brighten not only the fog color, but the sky color as well.
- Fog Multiplier: This is a multiplier that affects only the fog color (it does not affect the directional light).
- Density Multiplier: This is a fog density multiplier (it does not affect the directional light).
- Density Offset: This controls the fog opacity.
- Distance Scale: This is a distance factor that is compared to the Unreal unit scale. This value is most effective for a very small world. As the world size increases, you will need to increase this value too, as larger values cause changes in the fog attenuation to take place faster.
- Altitude Scale: This is the scale along the z axis.
- Distance Offset: This is the distance offset, calculated in km, used to manage large distances.
- Ground Offset: This is an offset for the sea level. (Normally, the sea level is 0, and as the fog system does not work for regions below the sea level, you need to make sure that all the terrain remains above this value in order to guarantee that the fog works.)
- Start Distance: This is the distance from the camera lens at which the fog will start.
- Sun Disk Scale: This is the size of the sun disk. Keep in mind that this can't be 0; there used to be an option to disable the sun disk, but in order to keep things realistic, Epic decided to remove that option and keep the sun disk, while giving you the chance to make it as small as possible.
- Precompute Params: The properties included in this group require recomputation of the precomputed texture data:
  - Density Height: This is the fog density decay height controller. The lower the value, the denser the fog will be; the higher the value, the less scatter the fog will have.
  - Max Scattering Num: This sets a limit on the number of scattering calculations.
  - Inscatter Altitude Sample Number: This is the number of different altitudes at which the inscatter color can be sampled.
The Exponential Height Fog
This type of fog has its own unique requirement. While the Atmospheric Fog can be added anytime or anywhere and it works, the Exponential Height Fog requires a special type of map with low and high bounds, as its mechanic creates more density in the low places of a map and less density in the high places. Between both these areas, there is a smooth transition. One of the most interesting features of the Exponential Height Fog is that it has two fog colors: one for the hemisphere facing the dominant directional light, and another color for the opposite hemisphere.
Adding it to your game
As mentioned earlier, adding this fog type from the same Visual Effects section of the Modes panel is very simple. You can select the Exponential Height Fog actor and drag and drop it into the scene. As you can see, even the icon implies the high and low places relative to the sea level.
In order to control the final visual look of the newly inserted fog, you will have to tweak the properties attached to the actor:
- Fog Density: This is the global density controller of the fog.
- Fog Inscattering Color: This is the inscattering color for the fog (the primary color). In the following image, you can see how different values work:
- Fog Height Falloff: This is the height density controller, which controls how the density increases as the height decreases.
- Fog Max Opacity: This controls the maximum opacity of the fog. A value of 0 means the fog will be invisible.
- Start Distance: This is the distance from the camera at which the fog will start.
- Directional Inscattering Exponent: This controls the size of the directional inscattering cone. The higher the value, the clearer the vision you get; the lower the value, the denser the fog.
- Directional Inscattering Start Distance: This controls the start distance from the viewer of the directional inscattering.
- Directional Inscattering Color: This sets the color for directional inscattering, which is used to approximate inscattering from a directional light.
- Visible: This controls the fog's visibility.
- Actor Hidden in Game: This enables or disables the fog in the game (it will not affect the editing mode).
- Editor Billboard Scale: This is the scale of the billboard components in the editor.
The animated fog
As with almost anything else in Unreal Engine, you can animate the fog. Some parts of the engine are super responsive to the animation system, while other parts have limited access; the fog falls into the latter camp, so only some of its values can be animated. You can use different ways and methods to animate values at runtime or even during edit mode.
The color
The height fog color can be changed at runtime using the LinearColor Property Track in the Matinee Editor. By performing the following steps, you can change the height fog color in the game:
1. Create a new Matinee Actor.
2. Open the newly created actor in the Matinee Editor.
3. Create a Height Fog Actor.
4. Create a group in Matinee.
5. Attach the Height Fog Actor from the scene to the group created in the previous step.
6. Create a linear color property track in the group.
7. Choose either FogInscatteringColor or DirectionalInscatteringColor to control its value (using two colors is an advantage of this fog type, remember!).
8. Add keyframes to the track, and set the color for them.
Animating the Exponential Height Fog
In order to animate the Exponential Height Fog, you can use one of the following two ways:
- Use Matinee to animate the Exponential Height Fog Actor values
- Use a timeline node in the Level Blueprint to control the Exponential Height Fog Actor values
Summary
In this article, you learned about fog effects, the supported types in the Unreal Editor, their different parameters, and how to use each of the fog types. Now, it is recommended that you go directly to your editor, start adding some fog, and play with its values. Even better if you can start animating the parameters as mentioned earlier. Don't just try it in Edit mode; sometimes the results are different when you hit play, and even more different when you cook a build, so feel free to build any level you make into an executable and check the results.
Resources for Article:
Further resources on this subject:
Exploring and Interacting with Materials using Blueprints [article]
Creating a Brick Breaking Game [article]
The Unreal Engine [article]

Creating the ice and snow materials

Packt
24 Dec 2013
8 min read
Getting ready
We will create the ice and the snow using a single material, and mix them using a new technique. Select the iceberg mesh and add a new material to it.
How to do it...
We will now see the steps required to create the ice as well as the snow material.
Creating ice
The following are the steps to create the ice:
1. Add a Glossy BSDF and a Glass BSDF and mix them using a Mix Shader node. Let's put Glossy in the first socket and Glass in the second one.
2. As input for the color of both BSDFs, we will use a color Mix node with Color1 as white and Color2 as RGB 0.600, 1.000, 0.760.
3. As input for the Fac value of the color Mix node, we will use a Voronoi Texture node with the Generated coordinates, Intensity mode, and Scale 100. Invert the color output using an Invert node and plug it into the Fac value of the color Mix node.
4. As the input for Roughness of the Glossy BSDF, we will use the Layer Weight node's Facing output with a Blend value of 0.800. Then we will plug this into a ColorRamp node and set the color stops as shown in the following screenshot. The first color stop is HSV 0.000, 0.000, 0.090 and the second one is HSV 0.000, 0.000, 0.530. Remember to plug the ColorRamp node into the Glossy BSDF roughness socket.
5. Finally, set the Glass BSDF node's IOR to 1.310 and Roughness to 0.080.
6. Now we will create the Fac input for the Mix Shader node of the two BSDFs: add a Noise Texture node using the Generated coordinates, with a Scale of 130, Detail of 1, and Distortion of 0.500. Plug this into a ColorRamp node and set the color stops as shown in the following screenshot:
7. Let's now add a Subsurface Scattering node. Set the mode to Compatible, the Scale to 10.000, and the Radius to 0.070, 0.070, 0.100. As a color input, let's add another color Mix node with Color1 as RGB 0.780, 0.960, 1.000 and Color2 as RGB 0.320, 0.450, 0.480. The Fac input for this color Mix node will be the same as for the color Mix node of the Glass and Glossy BSDFs.
8. Now mix the SSS node with the mix of the other two BSDFs, using an Add Shader node.
9. Now, we will create the normals for the shader. Add three Image Texture nodes. In the first one, load the IceScratches.jpg file. We will use the Generated coordinates with a Scale of XYZ 20.000, 20.000, 20.000. Set the projection mode to Box and the Blend to 0.500. In the second Image Texture node, load the ice_snow_DISP.png file, using the UV coordinates. Finally, load the ice_snow_NRM.png file in the third Image Texture node, again using the UV coordinates.
10. Now let's mix IceScratches.jpg with the ice_snow_DISP.png texture, using a color Mix node with the displacement texture in the Color1 socket and the scratches texture in the Color2 socket. Set the Fac value to 0.100.
11. Plug the mix of the textures into the Height socket of a Bump node, and then plug the ice_snow_NRM.png texture into the Color socket of a Normal Map node. Finally, plug this one into the Normal socket of the Bump node. Set the Normal Map node's Strength to 0.050, the Bump node's Strength to 0.500, and the Distance to 1.000. Plug the Bump node into all of the BSDFs we have added so far.
12. Frame every node we created and label the frame ICE.
Creating snow
The nodes we will add now will still be within the same material, but outside the ICE group we just created:
1. Add a Glossy and a Subsurface Scattering BSDF node. Mix them using a Mix Shader node with 20 percent influence from the Glossy BSDF node. Set both the colors to white.
2. Also, set the SSS Scale to 3.000 and the Radius to 0.400, 0.400, 0.450, and set the Glossy mode to GGX with a Roughness of 0.600.
3. Add a Noise Texture node and set the Scale to 2000.000, Detail to 2.000, and Distortion to 0.000. We will use the Generated coordinates for this texture.
4. Connect the Fac output of the Noise Texture node to the Height socket of a Bump node, and set the Strength to 0.200 and the Distance to 1.000.
5. Connect the Normal output of the Bump node to the Normal input of the Subsurface Scattering BSDF and Glossy BSDF nodes. (The Noise Texture node has no Normal output, so the Bump node is the one driving the normals here.)
6. Now let's mix the Mix Shader node of the Subsurface Scattering BSDF and Glossy BSDF nodes with an Emission shader, using another Mix Shader node.
7. Add a new Noise Texture node, this time with a Scale of 2500.000, Detail of 2.000, and Distortion of 0.000.
8. Connect the Fac output of this Noise Texture node to the Color input of a Gamma node with a Gamma value of 8.000, and then connect the Color output of the Gamma node to the Fac input of a ColorRamp node. We will set up the color stops as seen in the next screenshot.
9. Connect the ColorRamp node's Color socket to the Fac socket of the previous Mix Shader node. Remember to use the Color output of the ColorRamp node.
10. Frame all these nodes and label the frame SNOW.
Mixing ice and snow
1. Add a Geometry node (Add | Input) and a Normal node (Add | Vectors). Connect the Normal output from the Geometry node to the Normal input of the Normal node.
2. Now connect the Dot output of the Normal node to the first socket of a Math node, set the mode to Multiply, and set the second value to 2.000.
3. Add a Mix Shader node and connect the ICE frame to the first Shader socket and the SNOW frame to the second one.
4. Finally, connect the output of the Multiply node to the Fac value of the Mix Shader node.
How it works...
Let's see the most interesting points of this material in detail.
For the ice material, we used a Voronoi Texture node to create a pattern for the surface color. Then we mixed the Glass and Glossy BSDF nodes using a Noise Texture node, to simulate both the more and the less transparent areas: for example, the ice may be less transparent due to some part of it being covered with snow, differences in the purity of the water, or the thickness of the ice. Then we mixed the two BSDFs with a Subsurface Scattering node to simulate the dispersion of the lighting inside the ice. Note that we used the ColorRamp node quite often in order to fine-tune the various mixes and inputs.
The snow material is quite similar in concept, but is missing the refractive part of the ice. Here we did something else: we used a Noise Texture node with some tweaking (Gamma and ColorRamp) to make it really contrasted, and used it to mix an Emission shader into the rest of the material. This will create small emission dots over the snow surface that we will use in compositing to create the flakes.
It is really interesting how we mixed the two materials. We wanted the snow to be placed only on the flat surfaces of the iceberg, while we wanted the slopes to be just ice. To obtain this effect, we extracted the normal information of the mesh and used it to understand which part of the mesh is facing upward. We can imagine the normals working like sunlight falling on the surface of the earth: half of the sphere is in darkness, and half is hit by the light, and we can decide from which direction the light hits the surface. Now imagine the same principle applied to the shape of our mesh.
In this way we can create a white mask on the areas that are hit by the normal sphere direction. With the Normal node, we can orient this effect wherever we want. The default position of the sphere is exactly what we needed: the parts of the mesh that are facing upward are made white, while the rest of the mesh is black. Turning the sphere around will change the direction of the resulting ramp accordingly. The sphere is maybe not the best way to set this kind of thing up, as it lacks a bit of precision (as with the setting of the sky), but this will probably change in the future with something that allows more precise settings. Finally, we used the Multiply math node to scale the value coming from the Normal node, increasing the contrast of the mask.
There's more...
The normal method we just saw in this article is not the only way of mixing materials. Some time ago, two new plugins were developed. The first one allows us to obtain the same results we got in this article by creating a weight map or a vertex paint based on the slope of the mesh, while the second creates the same based on the altitude. This opens up many possibilities, not only in terms of material creation, but also for the distribution of particle systems! The first plugin, along with some instructions, can be downloaded from the following link:
http://blenderthings.blogspot.nl/2013/09/height-to-vertex-weights-blender-addon.html
See also
In the following link, Andrew Price teaches us how to create a different kind of ice material; for example, a material that is more suitable for ice cubes. Surely worth a watch!
http://www.blenderguru.com/videos/how-to-create-realistic-ice/

The Game World

Packt
23 Feb 2016
39 min read
In this article, we will cover the basics of creating immersive areas where players can walk around and interact, as well as some of the techniques used to manage those areas. This article will give you some practical tips and tricks for the spritesheet system introduced with Unity 4.3 and how to get it to work for you. Lastly, we will also have a cursory look at how shaders work in the 2D world and the considerations you need to keep in mind when using them. However, we won't be implementing shaders, as that could be another book in itself. The following is the list of topics that will be covered in this article:
- Working with environments
- Looking at sprite layers
- Handling multiple resolutions
- An overview of parallaxing and effects
- Shaders in 2D – an overview
Backgrounds and layers
Now that we have our hero in play, it would be nice to give him a place to live and walk around, so let's set up the home town and decorate it.
Firstly, we are going to need some more assets. So, from the asset pack you downloaded earlier, grab the following assets from the Environments pack, place them in the Assets/Sprites/Environment folder, and name them as follows:
- Name the ENVIRONMENTS/STEAMPUNK/background01.png file Assets/Sprites/Environment/background01
- Name the ENVIRONMENTS/STEAMPUNK/environmentalAssets.png file Assets/Sprites/Environment/environmentalAssets
- Name the ENVIRONMENTS/FANTASY/environmentalAssets.png file Assets/Sprites/Environment/environmentalAssets2
To slice or not to slice
It is always better to pack many of the same images on to a single asset/atlas and then use the Sprite Editor to define the regions on that texture for each sprite, as long as all the sprites on that sheet are going to get used in the same scene. The reason for this is that when Unity tries to draw to the screen, it needs to send the images to be drawn to the graphics card; if there are many images to send, this can take some time, whereas if it is just one image, it is a lot simpler and more performant with only one file to send.
There needs to be a balance; too large an image and the upload to the graphics card can take up too many resources; too many individual images and you have the same problem. The basic rule of thumb is as follows:
- If the background is a full screen background or large image, then keep it separate.
- If you have many images and all are for the same scene, then put them into a spritesheet/atlas.
- If you have many images but all are for different scenes, then group them as best you can: common items on one sheet and scene-specific items on different sheets. You'll have several spritesheets to use.
You basically want to keep as much stuff together as makes sense, and not send unnecessary images that won't get used to the graphics card. Find your balance.
The town background
First, let's add a background for the town using the Assets/Sprites/Environment/background01 texture. It is shown in the following screenshot:
With the background asset, we don't need to do anything else other than ensure that it has been imported as a sprite (in case your project is still in 3D mode), as shown in the following screenshot:
The town buildings
For the steampunk environmental assets (Assets/Sprites/Environment/environmentalAssets) that are shown in the following screenshot, we need a bit more work; once these assets are imported, change the Sprite Mode to Multiple and load up the Sprite Editor using the Sprite Editor button.
Next, click on the Slice button, leave the settings at their default options, and then click on the Slice button in the new window, as shown in the following screenshot:
Click on Apply and close the Sprite Editor. You will have four new sprite textures available, as seen in the following screenshot:
The extra scenery
We saw what happens when you use a grid-type split on a spritesheet and when the automatic split works well, so what about when it doesn't go so well? If we look at the Fantasy environment pack (Assets/Sprites/Environment/environmentalAssets2), we will see the following:
After you have imported it and run the Split in the Sprite Editor, you will notice that one of the sprites does not get detected very well; altering the automatic split settings in this case doesn't help, so we need to do some manual manipulation, as shown in the following screenshot:
In the previous screenshot, you can see that just two of the rocks in the top-right sprite have been identified by the splicing routine. To fix this, just delete one of the selections and then expand the other manually using the selection points in the corner of the selection box (after clicking on the sprite box). Here's how it will look before the correction:
After correction, you should see something like the following screenshot:
This gives us some nice additional assets to scatter around our town and give it a more homely feel, as shown in the following screenshot:
Building the scene
So, now that we have some nice assets to build with, we can start building our first town.
Adding the town background
Returning to the scene view, you should see the following:
If, however, we add our town background texture (Assets/Sprites/Backgrounds/Background.png) to the scene by dragging it to either the project hierarchy or the scene view, you will end up with the following:
Be sure to set the background texture position appropriately once you add it to the scene; in this case, be sure the position of the transform is centered in the view at X = 0, Y = 0, Z = 0. Unity does have a tendency to set the position relative to where your 3D view is at the time of adding it, which is almost never where you want it.
Our player has vanished! The reason for this is simple: Unity's sprite system has an ordering system that comes in two parts.
Sprite sorting layers
Sorting Layers (Edit | Project Settings | Tags and Layers) are collections of sprites that are bulked together to form a single group. Layers can be configured to be drawn in a specific order on the screen, as shown in the following screenshot:
Sprite sorting order
Sprites within an individual layer can be sorted, allowing you to control the draw order of sprites within that layer. The sprite Inspector is used for this purpose, as shown in the following screenshot:
Sprite Sorting Layers should not be confused with Unity's rendering layers. Layers are a separate functionality used to control whether groups of game objects are drawn or managed together, whereas Sorting Layers control the draw order of sprites in a scene.
So the reason our player is no longer seen is that it is behind the background. As they are both in the same layer and have the same sort order, they are simply drawn in the order that they appear in the project hierarchy.
Updating the scene Sorting Layers
To sort out our scene's draw order, let's organize our sprite rendering by adding some sprite Sorting Layers.
So, open up the Tags and Layers inspector pane as shown in the following screenshot (by navigating to Edit | Project settings | Tags and Layers), and add the following Sorting Layers:
- Background
- Player
- Foreground
- GUI
You can reorder the layers underneath the default anytime by selecting a row and dragging it up and down the sprite's Sorting Layers list.
With the layers set up, we can now configure our game objects accordingly. So, set the Sorting Layer on our background01 sprite to the Background layer, as shown in the following screenshot:
Then, update the PlayerSprite layer to Player; our character will now be displayed in front of the background.
You can just keep both objects on the same layer and set the Sort Order value appropriately, keeping the background at a Sort Order value of 0 and the player at 10, which will draw the player in front. However, as you add more items to the scene, things will get tricky quickly, so it is better to group them into layers accordingly.
Now when we return to the scene, our hero is happily displayed, but he is seen hovering in the middle of our village. So let's fix that next by simply changing his position transform in the Inspector window. Setting the Y position transform to -2 will place our hero nicely in the middle of the street (provided you have set the pivot for the player sprite to bottom), as shown in the following screenshot:
Feel free at this point to also add some more background elements, such as trees and buildings, to fill out the scene using the environment assets we imported earlier.
Working with the camera
If you try and move the player left and right at the moment, our hero happily bobs along. However, you will quickly notice that we run into a problem: the hero soon disappears from the edge of the screen. To solve this, we need to make the camera follow the hero.
When creating new scripts to implement something, remember that just about every game that has been made with Unity has most likely implemented either the same thing or something similar. Most just get on with it, but others, and the Unity team themselves, are keen to share their scripts to solve these challenges. So in most cases, we will have something to work from. Don't just start a script from scratch (unless it is a very small one to solve a tiny issue) if you can help it; here are some resources to get you started:
- Unity sample projects: http://Unity3d.com/learn/tutorials/projects
- Unity Patterns: http://unitypatterns.com/
- Unity wiki scripts section: http://wiki.Unity3d.com/index.php/Scripts (also check other stuff for detail)
Once you become more experienced, it is better to just use these scripts as a reference and try to create your own and improve on them, unless they are from a maintained library such as https://github.com/nickgravelyn/UnityToolbag.
To make the camera follow the player, we'll take the script from the Unity 2D sample and modify it to fit our game. This script is nice because it also includes a Mario-style buffer zone, which allows the player to move without moving the camera until they reach the edge of the screen.
Create a new script called FollowCamera in the Assets/Scripts folder, remove the Start and Update functions, and then add the following properties:

using UnityEngine;

public class FollowCamera : MonoBehaviour
{
    // Distance in the x axis the player can move before the
    // camera follows.
    public float xMargin = 1.5f;

    // Distance in the y axis the player can move before the
    // camera follows.
    public float yMargin = 1.5f;

    // How smoothly the camera catches up with its target movement in the x axis.
    public float xSmooth = 1.5f;

    // How smoothly the camera catches up with its target movement in the y axis.
    public float ySmooth = 1.5f;

    // The maximum x and y coordinates the camera can have.
    public Vector2 maxXAndY;

    // The minimum x and y coordinates the camera can have.
    public Vector2 minXAndY;

    // Reference to the player's transform.
    public Transform player;
}

The variables are all commented to explain their purpose, but we'll cover each as we use them.

First off, we need to get the player object's position so that we can track the camera to it, by discovering it from the object it is attached to. This is done by adding the following code in the Awake function (note that we check the found object for null before taking its transform; calling .transform on a null result would throw an exception before the check could run):

    void Awake()
    {
        // Setting up the reference.
        var playerObject = GameObject.Find("Player");
        if (playerObject == null)
        {
            Debug.LogError("Player object not found");
            return;
        }
        player = playerObject.transform;
    }

An alternative to discovering the player this way is to simply assign the public player property in the editor. There is no right or wrong way; it's just your preference. It is also a good practice to add some element of debugging to let you know if there is a problem in the scene with a missing reference, else all you will see are errors such as "object not initialized" or "variable was null".

Next, we need a couple of helper methods to check whether the player has moved near the edge of the camera's bounds, as defined by the margin variables. The following code uses the settings defined earlier to control how far the player can stray before the camera follows:

    bool CheckXMargin()
    {
        // Returns true if the distance between the camera and the
        // player in the x axis is greater than the x margin.
        return Mathf.Abs(transform.position.x - player.position.x) > xMargin;
    }

    bool CheckYMargin()
    {
        // Returns true if the distance between the camera and the
        // player in the y axis is greater than the y margin.
        return Mathf.Abs(transform.position.y - player.position.y) > yMargin;
    }

To finish this script, we need to check each frame whether the player is close to the edge and update the camera's position accordingly. We also need to check whether the camera bounds have reached the edge of the screen and not move it beyond them.

Comparing Update, FixedUpdate, and LateUpdate

There is usually a lot of debate about which update method should be used within a Unity game. To put it simply, the FixedUpdate method is called on a regular, fixed-interval basis throughout the lifetime of the game and is generally used for physics and time-sensitive code. The Update method, however, is called once per rendered frame, and as the time taken to draw a frame varies (due to the number of objects to be drawn and so on), the Update call ends up being fairly irregular.

For more detail on the difference between Update and FixedUpdate, see the Unity Learn tutorial video at http://unity3d.com/learn/tutorials/modules/beginner/scripting/update-and-fixedupdate.

As the player is being moved by the physics system, it is better to update the camera in the FixedUpdate method:

    void FixedUpdate()
    {
        // By default, the target x and y coordinates of the camera
        // are its current x and y coordinates.
        float targetX = transform.position.x;
        float targetY = transform.position.y;

        // If the player has moved beyond the x margin...
        if (CheckXMargin())
        {
            // ...the target x coordinate should be a Lerp between the
            // camera's current x position and the player's current x position.
            targetX = Mathf.Lerp(transform.position.x, player.position.x,
                xSmooth * Time.fixedDeltaTime);
        }

        // If the player has moved beyond the y margin...
        if (CheckYMargin())
        {
            // ...the target y coordinate should be a Lerp between the
            // camera's current y position and the player's current y position.
            targetY = Mathf.Lerp(transform.position.y, player.position.y,
                ySmooth * Time.fixedDeltaTime);
        }

        // The target x and y coordinates should not be larger than the
        // maximum or smaller than the minimum.
        targetX = Mathf.Clamp(targetX, minXAndY.x, maxXAndY.x);
        targetY = Mathf.Clamp(targetY, minXAndY.y, maxXAndY.y);

        // Set the camera's position to the target position with
        // the same z component.
        transform.position = new Vector3(targetX, targetY, transform.position.z);
    }

As they say, every game is different, and how the camera acts can be different for every game. In a lot of cases, the camera should be updated in the LateUpdate method after all drawing, updating, and physics are complete. This, however, can be a double-edged sword if you rely on math calculations that are affected in the FixedUpdate method, such as Lerp. It all comes down to tweaking your camera system to work the way you need it to.

Once the script is saved, just attach it to the Main Camera element by dragging the script onto it, or by adding a script component to the camera and selecting the script. Finally, we just need to configure the script and the camera to fit our game size as follows: set the orthographic Size of the camera to 2.7, and set Min X and Max X to -5 and 5 respectively.

The perils of resolution

When dealing with cameras, there is always one thing that will trip us up as soon as we try to build for another platform: resolution. By default, the Unity player in the editor runs in the Free Aspect mode, as shown in the following screenshot:

The Aspect mode (from the Aspect drop-down) can be changed to represent the resolutions supported by each platform you can target. The following is what you get when you switch your build target to each platform:

To change the build target, go into your project's Build Settings by navigating to File | Build Settings or by pressing Ctrl + Shift + B, then select a platform and click on the Switch Platform button. This is shown in the following screenshot:

When you change the Aspect drop-down to view in one of these resolutions, you will notice how the aspect ratio of what is drawn to the screen changes by either stretching or compressing the visible area. If you run the editor player in full screen by clicking on the Maximize on Play button and then clicking on the play icon, you will see this change more clearly. Alternatively, you can run your project on a target device to see the proper perspective output.

The reason I bring this up here is that if you used fixed bounds settings for your camera or game objects, then these values may not work for every resolution, leaving your settings out of range or (in most cases) undersized. One way to cope, sketched below, is to vary those values per platform at compile time.
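Here's a minimal hedged sketch of that idea; UNITY_ANDROID is a real Unity-defined symbol, but the class name and the bounds values are made up for this example:

using UnityEngine;

// A minimal sketch, assuming made-up bounds values per platform.
public class PlatformCameraBounds : MonoBehaviour
{
#if UNITY_ANDROID
    // Narrower mobile aspect: pull the horizontal bound in (assumed value).
    public float maxX = 4.5f;
#else
    // The desktop value we configured earlier.
    public float maxX = 5f;
#endif
}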
As the sketch shows, you can handle this by altering the settings for each build, using compiler directives such as #if UNITY_METRO to force the default depending on the build (in this example, Windows 8). To read more about compiler directives, check the Unity documentation at http://docs.unity3d.com/Manual/PlatformDependentCompilation.html.

A better FollowCamera script

If you are only targeting one device/resolution, or your background scrolls indefinitely, then the preceding manual approach works fine. However, if you want it to be a little more dynamic, then we need to know what resolution we are working in and how much space our character has to travel. We will perform the following steps to do this:

1. We will change the min and max variables to private, as we no longer need to configure them in the Inspector window. The code is as follows:

    // The maximum x and y coordinates the camera can have.
    private Vector2 maxXAndY;

    // The minimum x and y coordinates the camera can have.
    private Vector2 minXAndY;

2. To work out how much space is available in our town, we need to interrogate the rendering size of our background sprite. So, in the Awake function, we add the following lines of code:

    // Get the bounds for the background texture - world size
    // (make sure the name matches your background object in the scene)
    var backgroundBounds = GameObject.Find("background")
        .GetComponent<Renderer>().bounds;

3. In the Awake function, we work out our resolution and viewable space by interrogating the viewport on the camera and converting it to the same coordinate type as the sprite. This is done using the following code:

    // Get the viewable bounds of the camera in world coordinates
    var cam = GetComponent<Camera>();
    var camTopLeft = cam.ViewportToWorldPoint(new Vector3(0, 0, 0));
    var camBottomRight = cam.ViewportToWorldPoint(new Vector3(1, 1, 0));

4. Finally, in the Awake function, we update the min and max values using the texture size and the camera's real-world bounds. This is done using the following lines of code:

    // Automatically set the min and max values
    minXAndY.x = backgroundBounds.min.x - camTopLeft.x;
    maxXAndY.x = backgroundBounds.max.x - camBottomRight.x;

In the end, the type of game you are making will decide which pattern works best for your specific implementation.

Transitioning and bounds

So our camera follows our player, but our hero can still walk off the screen and keep going forever, so let us stop that from happening.

Towns with borders

As you saw in the preceding section, you can use Unity's camera logic to figure out where things are on the screen. You can also do more complex ray testing to check where things are, but I find that overly complex unless you depend on that level of interaction. The simpler answer is just to use the native Box2D physics system to keep things in the scene. This might seem like overkill, but the 2D physics system is very fast and fluid, and it is simple to use.

Once we add the physics components, Rigidbody 2D (to apply physics) and a Box Collider 2D (to detect collisions), to the player, we can make use of these components straight away by adding some additional collision objects to stop the player running off. To do this, and to keep things organized, we will add three empty game objects (either by navigating to GameObject | Create Empty, or by pressing Ctrl + Shift + N) to the scene (one parent and two children) to manage these collision points, as shown in the following screenshot:

I've named them WorldBounds (parent) and LeftBorder and RightBorder (children) for reference; if you prefer to build this setup from script, a sketch follows.
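This is a rough hedged sketch that mirrors the editor steps we're about to walk through; all names and values are taken from those steps, and the builder class itself is mine:

using UnityEngine;

// A hedged sketch: building the WorldBounds hierarchy from code
// instead of in the editor.
public class WorldBoundsBuilder : MonoBehaviour
{
    void Start()
    {
        var parent = new GameObject("WorldBounds").transform;
        MakeBorder("LeftBorder", -5f, parent);
        MakeBorder("RightBorder", 5f, parent);
    }

    GameObject MakeBorder(string name, float x, Transform parent)
    {
        var go = new GameObject(name);
        go.transform.parent = parent;
        go.transform.position = new Vector3(x, 0f, 0f);
        var box = go.AddComponent<BoxCollider2D>();
        box.size = new Vector2(1f, 5f); // Y of 5 to span the scene height
        return go;
    }
}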
Next, we will position each of the child game objects at the left- and right-hand sides of the screen, as shown in the following screenshot:

Next, we will add a Box Collider 2D to each border game object and increase its height, just to ensure that it works for the entire height of the scene. I've set the Y value to 5 for effect, as shown in the following screenshot:

The end result should look like the following screenshot, with the two new colliders highlighted in green:

Alternatively, you could have just created one of the children, added the box collider, duplicated it (by navigating to Edit | Duplicate or by pressing Ctrl + D), and moved it. If you have to create multiples of the same thing, this is a handy tip to remember.

If you run the project now, our hero can no longer escape this town on his own. However, as we want to let him leave, we can add a script to the new border game objects so that when the hero reaches the end of the town, he can leave.

Journeying onwards

Now that we have collision zones on our town's borders, we can hook into them with a script that activates when the hero approaches. Create a new C# script called NavigationPrompt, clear its contents, and populate it with the following code:

using UnityEngine;

public class NavigationPrompt : MonoBehaviour
{
    bool showDialog;

    void OnCollisionEnter2D(Collision2D col)
    {
        showDialog = true;
    }

    void OnCollisionExit2D(Collision2D col)
    {
        showDialog = false;
    }
}

The preceding code gives us the framework of a collision detection script that sets a flag on and off if the character interacts with whatever the script is attached to, provided that object has a physics collision component. Without one, this script would do nothing, but it won't cause an error either.

Next, we will do something with the flag and display some GUI when it is set. So, add the following extra function to the preceding script:

    void OnGUI()
    {
        if (showDialog)
        {
            // Layout start
            GUI.BeginGroup(new Rect(Screen.width / 2 - 150, 50, 300, 250));

            // The menu background box
            GUI.Box(new Rect(0, 0, 300, 250), "");

            // Information text
            GUI.Label(new Rect(15, 10, 300, 68), "Do you want to travel?");

            // Player wants to leave this location
            if (GUI.Button(new Rect(55, 100, 180, 40), "Travel"))
            {
                showDialog = false;

                // The following line is commented out for now
                // as we have nowhere to go :D
                //Application.LoadLevel(1);
            }

            // Player wants to stay at this location
            if (GUI.Button(new Rect(55, 150, 180, 40), "Stay"))
            {
                showDialog = false;
            }

            // Layout end
            GUI.EndGroup();
        }
    }

The function itself is very simple and only runs if the showDialog flag was set to true by the collision detection. It does the following:

1. Sets up a dialog window region with some text and two buttons.
2. One button asks if the player wants to travel, which would load the next area (commented out for now, as we only have one scene), and closes the dialog.
3. The other button simply closes the dialog if the hero didn't actually want to leave. As we haven't stopped the player from moving, the player can also do this by just moving away.
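When a second scene does exist, the commented-out line can be restored. Application.LoadLevel accepts either a build index or a scene name, and the target scene must be listed in File | Build Settings; a hedged sketch of the Travel branch, where the scene name is hypothetical:

    // Player wants to leave this location
    if (GUI.Button(new Rect(55, 100, 180, 40), "Travel"))
    {
        showDialog = false;
        // Hypothetical scene name; add the scene in Build Settings first
        Application.LoadLevel("TheWorld");
    }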
If you now add the NavigationPrompt script to the two world border (LeftBorder and RightBorder) game objects, you get the following simple UI whenever the player collides with the edges of our world:

We can further enhance this by tagging or naming our borders to indicate a destination. I prefer tagging, as it does not interfere with how my scene looks in the project hierarchy; also, I can control what tags are available and prevent accidental mistyping.

To tag a game object, simply select a Tag using the drop-down list in the Inspector when you select the game object in the scene or project. This is shown in the following screenshot:

If you haven't set up your tags yet, or just wish to add a new one, select Add Tag in the drop-down menu; this will open up the Tags and Layers window of the Inspector. Alternatively, you can call up this window by navigating to Edit | Project Settings | Tags and Layers in the menu. It is shown in the following screenshot:

You can only edit or change user-defined tags. There are several other tags that are system defined. You can use these as well; you just cannot change, remove, or edit them. These include Player, Respawn, Finish, Editor Only, Main Camera, and GameController.

As you can see from the preceding screenshot, I have entered two new tags called The Cave and The World, which are the two main exit points from our town.

Unity also adds an extra, empty item to the end of arrays in the editor. This helps when you want to add more items; it can be annoying when you want a fixed size, but it is meant to help. When the project runs, the correct count of items will be exposed.

Once these are set up, just return to the Inspector for the two borders, and set the right one to The World and the left one to The Cave.

Now, I was quite specific in how I named these tags, as we can now reuse them in the script both to aid navigation and to tell the player where they are going. To do this, simply update the "Do you want to travel?" line to the following:

    // Information text
    GUI.Label(new Rect(15, 10, 300, 68), "Do you want to travel to " +
        this.tag + "?");

Here, we have simply appended the name of the destination we set in the tag to the dialog presented to the user. Now we get a more personal message, as shown in the following screenshot:

Planning for the larger picture

For small games, the preceding implementation is fine; however, if you are planning a larger world with a large number of interactions, you may want more complex decisions that prevent the player from continuing unless they are ready. As the following diagram shows, there are several paths the player can take, and in some cases there is only one way through.

Now, we could just build up the logic for each of these individually, as shown in the diagram, but it is better to build a separate navigation system so that we have everything in one place; it's just easier to manage that way. This separation is a fundamental part of any good game design. Keeping the logic and game functionality separate makes the game easier to maintain in the future, especially when you need to take internationalization into account (but we will learn more about that later).

So, we'll change to using a manager to handle all the world/scene transitions, and simplify the tag names we use as they won't need to be displayed. The Cave will be renamed as just Cave, and we will get the text to display from the navigation manager instead of the tag.
So, by separating the core decision-making functionality out of the prompt script, we can build the core manager for navigation. Its primary job is to maintain where a character can travel and information about each destination.

First, we'll update the tags we created earlier to the simpler identities that we can use in our navigation manager (update The Cave to Cave and The World to World).

Next, we'll create a new C# script called NavigationManager in our Assets/Scripts folder, and then replace its contents with the following lines of code:

using System.Collections.Generic;
using UnityEngine; // needed once the LoadLevel call below is restored

public static class NavigationManager
{
    public static Dictionary<string, string> RouteInformation =
        new Dictionary<string, string>()
    {
        { "World", "The big bad world" },
        { "Cave", "The deep dark cave" },
    };

    public static string GetRouteInfo(string destination)
    {
        return RouteInformation.ContainsKey(destination) ?
            RouteInformation[destination] : null;
    }

    public static bool CanNavigate(string destination)
    {
        return true;
    }

    public static void NavigateTo(string destination)
    {
        // The following line is commented out for now
        // as we have nowhere to go :D
        //Application.LoadLevel(destination);
    }
}

Notice the ? and : operators in the following statement:

    RouteInformation.ContainsKey(destination) ?
        RouteInformation[destination] : null;

This is the C# conditional (ternary) operator, and it is effectively shorthand for the following:

    if (RouteInformation.ContainsKey(destination))
    {
        return RouteInformation[destination];
    }
    else
    {
        return null;
    }

Shorter, neater, and much nicer, don't you think? For more information, see the MSDN C# page at http://bit.ly/csharpconditionaloperator.

The script is very basic for now, but contains the following key elements that can be expanded to meet the design goals of your game:

- RouteInformation: A static list of all the possible destinations in the game, held in a dictionary. This is a core part of the manager, as it knows everywhere you can travel in the game, all in one place.
- GetRouteInfo: A simple, controlled function to interrogate the destination list. In this example, we just return the text to be displayed in the prompt, which allows for more detailed descriptions than we could fit in tags. You could also use this to provide alternate prompts depending on what the player is carrying, or whether they have a lit torch, for example.
- CanNavigate: A test to see whether navigation is possible. If you are going to limit a player's travel, you need a way to test whether they can move, allowing the logic in your game to make alternate choices if the player cannot. You could use a different system for this by placing some sort of block in front of a destination to limit choice (as used in the likes of Zelda), such as an NPC or a rock. As this is only an example, we can always travel, and you can add logic to control it if you wish.
- NavigateTo: A function to instigate navigation. Once a player can travel, you can control exactly what happens in the game: does navigation cause the next scene to load straight away (as in the script currently), or does the current scene fade out and a traveling screen show before fading the next level in? Granted, this does nothing at present, as we have nowhere to travel to.

You will notice that this script is different from the other scripts used so far, as it is a static class.
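Because the class is static, any script can query it directly, with no instance or reference required. A quick usage fragment using the manager we just wrote:

    // Anywhere in the game, no instance needed:
    string info = NavigationManager.GetRouteInfo("Cave"); // "The deep dark cave"

    if (NavigationManager.CanNavigate("Cave"))
    {
        NavigationManager.NavigateTo("Cave");
    }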
Being static means the class sits in the background, only exists once in the game, and is accessible from anywhere. This pattern is useful for fixed information that isn't attached to anything; it just sits in the background waiting to be queried. Later, we will cover more advanced types and classes to handle more complicated scenarios.

With this class in place, we just need to update our previous script (and the tags) to make use of this new manager. Update the NavigationPrompt script as follows:

1. Update the collision function to only show the prompt if we can travel:

    void OnCollisionEnter2D(Collision2D col)
    {
        // Only allow the player to travel if allowed
        if (NavigationManager.CanNavigate(this.tag))
        {
            showDialog = true;
        }
    }

2. When the dialog shows, display the more detailed destination text provided by the manager for the intended destination:

    // Dialog detail - updated to get better detail
    GUI.Label(new Rect(15, 10, 300, 68), "Do you want to travel to " +
        NavigationManager.GetRouteInfo(this.tag) + "?");

3. If the player wants to travel, let the manager start the travel process:

    // Player wants to leave this location
    if (GUI.Button(new Rect(55, 100, 180, 40), "Travel"))
    {
        showDialog = false;
        NavigationManager.NavigateTo(this.tag);
    }

The functionality I've shown here is very basic, and it is intended to make you think about how you would implement it for your own game. With so many possibilities available, I could fill several articles on this subject alone.

Backgrounds and active elements

A slightly more advanced option when building game worlds is to add a level of immersive depth to the scene. Having a static image to show the village looks good, especially when you start adding houses and NPCs to the mix, but to really make it shine you should layer the background and add additional active elements to liven it up. We won't add them to the sample project at this time, but it is worth experimenting with them in your own projects (or try adding them to this one); it is a worthwhile effect to look into.

Parallaxing

If we look at the 2D sample provided by Unity, the background is split into several panes, each layered on top of one another and each moving at a different speed when the player moves around. There are also other elements such as clouds, birds, buses, and taxis driving/flying around, as shown in the following screenshot:

Implementing these effects is technically very easy; you just need to have the art assets available. There are several scripts in the wiki I described earlier, but the one in Unity's own 2D sample is the best I've seen. To see the script, just download the Unity Projects: 2D Platformer asset from https://www.assetstore.unity3d.com/en/#!/content/11228, and check out the BackgroundParallax script in the Assets/Scripts folder.
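Before we look at what the sample script does, here is a stripped-down hedged sketch of the core parallax idea; the class and field names are mine, not the sample's. Attach one to each background layer:

using UnityEngine;

// Minimal parallax sketch: move this layer by a fraction of the
// camera's movement each frame.
public class SimpleParallax : MonoBehaviour
{
    public Transform cam;        // usually the main camera's transform
    public float factor = 0.5f;  // 0 = layer never moves, 1 = moves with the camera

    private Vector3 lastCamPos;

    void Start()
    {
        if (cam == null)
        {
            cam = Camera.main.transform;
        }
        lastCamPos = cam.position;
    }

    void LateUpdate()
    {
        Vector3 delta = cam.position - lastCamPos;
        transform.position += new Vector3(delta.x * factor, 0f, 0f);
        lastCamPos = cam.position;
    }
}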
The BackgroundParallax script in the platformer sample implements the following:

- An array of background images, layered correctly in the scene (which is why the script does not just discover the background sprites)
- A scaling factor to control how much the background moves in relation to the camera target
- A reducing factor to offset how much each layer moves, so that the layers don't all move as one (or else, what would be the point? It might as well be a single image)
- A smoothing factor so that each background moves smoothly with the target and doesn't jump around

Implementing this same model in your game would be fairly simple, provided you have texture assets that could support it. Just replicate the structure used in the platformer 2D sample and add the script. Remember to update the FollowCamera script, however, so that it can still discover the size of the main area from the base background.

Foreground objects

The other thing you can do to liven up your game is to add random foreground objects that float across your scene independently. These don't collide with anything and have nothing to do with the game itself; they are just eye candy to make your game look awesome.

The process to add these is also fairly simple, but it requires some more advanced Unity features such as coroutines, which we are not going to cover here, so we will come back to these later. In short, if you examine the BackgroundPropSpawner.cs script from the preceding Unity platformer 2D sample, it performs the following steps:

1. Create/instantiate an object to spawn.
2. Set a random position and direction for the object to travel.
3. Update the object over its lifetime.
4. Once it's out of the scene, destroy or hide it.
5. Wait for a time, and then start again.

This allows the objects to run on their own without impacting the gameplay itself, and just adds that extra bit of depth. In some cases, I've seen particle effects used as well, but they are used sparingly.

Shaders and 2D

Believe it or not, all 2D elements (even in their default state) are drawn using a shader, albeit a specially written shader designed to light and draw the sprite in a very specific way. If you look at the player sprite in the Inspector, you will see that it uses a special Material called Sprites-Default, as shown in the following screenshot:

This section is purely meant to highlight all the shading options you have in the 2D system. Shaders have not changed much in this update, except for the addition of some 2D global lighting found in the default sprite shader. For more detail on shaders in general, I suggest a dedicated Unity shader book such as https://www.packtpub.com/game-development/unity-shaders-and-effects-cookbook.

Clicking on the button next to the Material field will bring up the material selector, which also shows the two other built-in default materials, as shown in the following screenshot:

However, selecting either of these will render your sprite invisible, as they require a texture and lighting to work; they won't inherit from the Sprite Renderer texture. You can override this by creating your own material and assigning alternate sprite-style shaders.

To create a new material, just select the Assets/Materials folder (this is not crucial, but it means we create the material in a sensible place in our project folder structure), then right-click and select Create | Material. Alternatively, do the same using the project view's Edit menu option, as shown in the following screenshot:
This gives us a basic default Diffuse shader, which is fine for basic 3D objects. However, we also have two default sprite-rendering shaders available. Selecting the Shader dropdown gives us the screen shown in the following screenshot:

These shaders have the following two very specific purposes:

- Default: This shader inherits its texture from the Sprite Renderer texture and draws the sprite as is. This is very basic functionality, just enough to draw the sprite (it contains its own static lighting).
- Diffuse: This shader also inherits the Sprite Renderer texture, but it requires an external light source, as it does not contain any lighting; this has to be applied separately. It is a slightly more advanced shader, which includes offsets and other functions.

Creating one of these materials and applying it to the Sprite Renderer of a sprite will override its default constrained behavior. This opens up some additional shader options in the Inspector, as shown in the following screenshot:

These options include the following:

- Sprite Texture: Although changing the Tiling and Offset values causes a warning to appear, they still perform a function (even though the displayed value resets).
- Tint: This option allows you to change the default light tint of the rendered sprite. It is useful for creating differently colored objects from the same sprite.
- Pixel snap: This option makes the rendered sprite crisper but narrows the drawn area. It is a trial-and-error feature (see the following sections for more information).

Achieving pixel perfection in your game in Unity can be a challenge due to the number of factors that can affect it, such as the camera view size, whether the image texture is a Power Of Two (POT) size, and the import settings for the image. It is basically a game of trial and error until you are happy with the result.

If you are feeling adventurous, you can extend these default shaders (although this is out of the scope of this article). The full code for these shaders can be found at http://Unity3d.com/unity/download/archive. If you are writing your own shaders, though, be sure to add some lighting to the scene; otherwise, they are just going to appear dark and unlit. Only the default sprite shader is automatically lit by Unity. Alternatively, you can use the default sprite shader as a base to create your new custom shader and retain the basic 2D lighting.

Another worthy tip is to check out the latest version of the Unity samples (beta) pack. In it, they have added logic to keep two sets of shaders in your project, one for mobile and one for desktop, plus a script that will swap them out at runtime depending on the platform. This is very cool; check it out on the Asset Store at https://www.assetstore.unity3d.com/#/content/14474 and read the full review of the pack at http://darkgenesis.zenithmoon.com/unity3dsamplesbeta-anoverview/.
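As a small illustration of driving this from code (useful for the "randomly colored rocks" idea in the next section), here is a hedged sketch; the class name is mine, and note that Sprites/Diffuse needs a light in the scene to show up:

using UnityEngine;

// A hedged sketch: assigning the built-in Sprites/Diffuse shader
// and tinting the sprite from code.
public class SpriteTint : MonoBehaviour
{
    void Start()
    {
        var sr = GetComponent<SpriteRenderer>();
        sr.material = new Material(Shader.Find("Sprites/Diffuse"));
        sr.color = new Color(1f, 0.6f, 0.6f); // the same Tint the Inspector exposes
    }
}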
Going further

If you are the adventurous sort, try expanding your project to add the following:

- Add some buildings to the town
- Set up some entry points for a building and work them into your navigation system, for example, a shop
- Add some rocks to the scene and color each differently using a manual material; maybe even add a script to randomly set the color in the shader instead of creating several materials
- Add a new scene for the cave using another environment background, and get the player to travel between the two scenes

Summary

This certainly has been a very busy article just to add a background to our scene, but working out how each scene will work is a crucial design element for the entire game; you have to pick a pattern that works for you and your end result, because changing it later can be very detrimental (and a lot of work).

In this article, we covered the following topics:

- Some more practice with the Sprite Editor and sprite slicer, including some tips and tricks for when it doesn't work (or you want to do it yourself)
- Some camera tips, tricks, and scripts
- An overview of sprite layers and sprite sorting
- Defining boundaries in scenes
- Scene navigation management and planning levels in your game
- Some basics of how shaders work for 2D

For learning Unity 2D from the basics, you can refer to https://www.packtpub.com/game-development/learning-unity-2d-game-development-example.

Further resources on this subject:

- Build a First Person Shooter [article]
- Let's Get Physical – Using GameMaker's Physics System [article]
- Using the Tiled map editor [article]