
How-To Tutorials - Game Development

Using Cameras

Packt
16 Aug 2013
11 min read
Creating a picture-in-picture effect

Having more than one viewport displayed can be useful in many situations. For example, you might want to show simultaneous events going on in different locations, or you might want a separate window for hot-seat multiplayer games. Although you could do it manually by adjusting the Normalized Viewport Rect parameters on your camera, this recipe includes a series of extra preferences to make it more independent of the user's display configuration.

Getting ready

For this recipe, we have prepared a package named basicLevel containing a scene. The package is in the 0423_02_01_02 folder.

How to do it...

To create a picture-in-picture display, follow these steps:

1. Import the basicLevel package into your Unity project.
2. In the Project view, open basicScene, inside the folder 02_01_02. This is a basic scene featuring a directional light, a camera, and some geometry.
3. Add a Camera to the scene through the Create drop-down menu on top of the Hierarchy view.
4. Select the camera you have created and, in the Inspector view, set its Depth to 1.
5. In the Project view, create a new C# script and rename it PictureInPicture.
6. Open your script and replace everything with the following code:

```csharp
using UnityEngine;

public class PictureInPicture : MonoBehaviour {
    public enum HorizontalAlignment {left, center, right};
    public enum VerticalAlignment {top, middle, bottom};
    public HorizontalAlignment horizontalAlignment = HorizontalAlignment.left;
    public VerticalAlignment verticalAlignment = VerticalAlignment.top;
    public enum ScreenDimensions {pixels, screen_percentage};
    public ScreenDimensions dimensionsIn = ScreenDimensions.pixels;
    public int width = 50;
    public int height = 50;
    public float xOffset = 0f;
    public float yOffset = 0f;
    public bool update = true;
    private int hsize, vsize, hloc, vloc;

    void Start () {
        AdjustCamera();
    }

    void Update () {
        if (update)
            AdjustCamera();
    }

    void AdjustCamera () {
        // Viewport size, either in raw pixels or as a percentage of the screen
        if (dimensionsIn == ScreenDimensions.screen_percentage) {
            hsize = Mathf.RoundToInt(width * 0.01f * Screen.width);
            vsize = Mathf.RoundToInt(height * 0.01f * Screen.height);
        } else {
            hsize = width;
            vsize = height;
        }

        // Horizontal placement; offsets are a percentage of the screen width
        if (horizontalAlignment == HorizontalAlignment.left) {
            hloc = Mathf.RoundToInt(xOffset * 0.01f * Screen.width);
        } else if (horizontalAlignment == HorizontalAlignment.right) {
            hloc = Mathf.RoundToInt((Screen.width - hsize) - (xOffset * 0.01f * Screen.width));
        } else {
            hloc = Mathf.RoundToInt(((Screen.width * 0.5f) - (hsize * 0.5f)) - (xOffset * 0.01f * Screen.width));
        }

        // Vertical placement; offsets are a percentage of the screen height
        if (verticalAlignment == VerticalAlignment.top) {
            vloc = Mathf.RoundToInt((Screen.height - vsize) - (yOffset * 0.01f * Screen.height));
        } else if (verticalAlignment == VerticalAlignment.bottom) {
            vloc = Mathf.RoundToInt(yOffset * 0.01f * Screen.height);
        } else {
            vloc = Mathf.RoundToInt(((Screen.height * 0.5f) - (vsize * 0.5f)) - (yOffset * 0.01f * Screen.height));
        }

        camera.pixelRect = new Rect(hloc, vloc, hsize, vsize);
    }
}
```

In case you haven't noticed, we are not achieving percentages by dividing numbers by 100, but rather by multiplying them by 0.01. The reason behind that is performance: computer processors are faster at multiplying than dividing.
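Before moving on, the placement arithmetic in AdjustCamera can be sanity-checked outside Unity. The following is a quick Python sketch (not Unity code) of the same math, with the screen resolution as an illustrative assumption:

```python
# Python sketch of the AdjustCamera arithmetic: given screen size,
# viewport size, and alignment, compute the viewport rect in pixels.
# (0, 0) is the bottom-left corner, as in Unity's Camera.pixelRect.

def pip_rect(screen_w, screen_h, width, height, h_align="right",
             v_align="top", x_off=0.0, y_off=0.0, percent=False):
    """Return (x, y, w, h) of the picture-in-picture viewport."""
    w = round(width * 0.01 * screen_w) if percent else width
    h = round(height * 0.01 * screen_h) if percent else height
    if h_align == "left":
        x = round(x_off * 0.01 * screen_w)
    elif h_align == "right":
        x = round((screen_w - w) - x_off * 0.01 * screen_w)
    else:  # center
        x = round((screen_w * 0.5) - (w * 0.5) - x_off * 0.01 * screen_w)
    if v_align == "top":
        y = round((screen_h - h) - y_off * 0.01 * screen_h)
    elif v_align == "bottom":
        y = round(y_off * 0.01 * screen_h)
    else:  # middle
        y = round((screen_h * 0.5) - (h * 0.5) - y_off * 0.01 * screen_h)
    return (x, y, w, h)

# The recipe's settings (400x200 pixels, Right/Top, no offsets) on an
# assumed 1920x1080 screen:
print(pip_rect(1920, 1080, 400, 200))  # (1520, 880, 400, 200)
```

Running this shows the viewport hugging the top-right corner, which is exactly what the recipe produces in the Game view.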
7. Save your script and attach it to the new camera that you created previously.
8. Uncheck the new camera's Audio Listener component and change some of the PictureInPicture parameters: set Horizontal Alignment to Right, Vertical Alignment to Top, and Dimensions In to pixels. Leave XOffset and YOffset at 0, and change Width to 400 and Height to 200.
9. Play your scene. The new camera's viewport should be visible at the top right of the screen.

How it works...

Our script resizes and positions the camera's viewport (through its pixelRect property) according to the user's preferences.

There's more...

The following are some aspects of your picture-in-picture you could change.

Making the picture-in-picture proportional to the screen's size

If you change the Dimensions In option to screen_percentage, the viewport size will be based on the screen's actual dimensions instead of pixels.

Changing the position of the picture-in-picture

Vertical Alignment and Horizontal Alignment can be used to change the viewport's origin. Use them to place it where you wish.

Preventing the picture-in-picture from updating on every frame

Leave the Update option unchecked if you don't plan to change the viewport position at runtime. It's a good idea to leave it checked while testing, and then uncheck it once the position has been decided and set up.

See also

The Displaying a mini-map recipe.

Switching between multiple cameras

Choosing from a variety of cameras is a common feature in many genres: racing sims, sports sims, tycoon/strategy games, and many others. In this recipe, we will learn how to give players the ability to choose from among several cameras using the keyboard.

Getting ready

In order to follow this recipe, we have prepared a package containing a basic level named basicScene. The package is in the folder 0423_02_01_02.

How to do it...

To implement switchable cameras, follow these steps:

1. Import the basicLevel package into your Unity project.
2. In the Project view, open basicScene from the 02_01_02 folder. This is a basic scene featuring a directional light, a camera, and some geometry.
3. Add two more cameras to the scene through the Create drop-down menu on top of the Hierarchy view. Rename them cam1 and cam2.
4. Change the cam2 camera's position and rotation so it won't be identical to cam1.
5. Create an Empty game object by navigating to Game Object | Create Empty. Then, rename it Switchboard.
6. In the Inspector view, disable the Camera and Audio Listener components of both cam1 and cam2.
7. In the Project view, create a new C# script. Rename it CameraSwitch and open it in your editor.
8. Replace everything with the following code:

```csharp
using UnityEngine;

public class CameraSwitch : MonoBehaviour {
    public GameObject[] cameras;
    public string[] shortcuts;
    public bool changeAudioListener = true;

    void Update () {
        for (int i = 0; i < cameras.Length; i++) {
            if (Input.GetKeyUp(shortcuts[i]))
                SwitchCamera(i);
        }
    }

    void SwitchCamera (int index) {
        for (int i = 0; i < cameras.Length; i++) {
            // Enable only the selected camera (and its Audio Listener,
            // if that option is checked); disable all the others.
            bool active = (i == index);
            if (changeAudioListener) {
                cameras[i].GetComponent<AudioListener>().enabled = active;
            }
            cameras[i].camera.enabled = active;
        }
    }
}
```

9. Attach CameraSwitch to the Switchboard game object.
10. In the Inspector view, set both Cameras and Shortcuts sizes to 3. Then, drag the scene cameras into the Cameras slots, and type 1, 2, and 3 into the Shortcuts text fields.
11. Play your scene and test your cameras.

How it works...

The script is very straightforward. All it does is capture the pressed key and enable its respective camera (and its Audio Listener, in case the Change Audio Listener option is checked).

There's more...

Here are some ideas on how you could try twisting this recipe a bit.
Using a single enabled camera

A different approach to the problem would be to keep all the secondary cameras disabled and assign their position and rotation to the main camera via a script (you would need to make a copy of the main camera and add it to the list, in case you wanted to save its transform settings).

Triggering the switch from other events

You could also change your camera from another game object's script by using a line of code such as the one shown here:

```csharp
GameObject.Find("Switchboard").GetComponent<CameraSwitch>().SwitchCamera(1);
```

See also

The Making an inspect camera recipe.

Customizing the lens flare effect

As anyone who has played a game set in an outdoor environment in the last 15 years can tell you, the lens flare effect is used to simulate the incidence of bright lights over the player's field of view. Although it has become a bit overused, it is still very much present in all kinds of games. In this recipe, we will create and test our own lens flare texture.

Getting ready

In order to continue with this recipe, it's strongly recommended that you have access to image editing software such as Adobe Photoshop or GIMP. The source for the lens texture created in this recipe can be found in the 0423_02_03 folder.

How to do it...

To create a new lens flare texture and apply it to the scene, follow these steps:

1. Import Unity's Character Controller package by navigating to Assets | Import Package | Character Controller. Do the same for the Light Flares package.
2. In the Hierarchy view, use the Create button to add a Directional Light to your scene.
3. Select your camera and add a Mouse Look component by accessing the Component | Camera Control | Mouse Look menu option.
4. In the Project view, locate the Sun flare (inside Standard Assets | Light Flares), duplicate it, and rename it MySun.
5. In the Inspector view, click Flare Texture to reveal the base texture's location in the Project view.
6. It should be a texture named 50mmflare. Duplicate it and rename it My50mmflare.
7. Right-click My50mmflare and choose Open. This should open the file (actually a .psd) in your image editor. If you're using Adobe Photoshop, you might see the guidelines for the texture.
8. To create the light rings, create new Circle shapes and add different Layer Effects such as Gradient Overlay, Stroke, Inner Glow, and Outer Glow.
9. Recreate the star-shaped flares by editing the originals or by drawing lines and blurring them.
10. Save the file and go back to the Unity Editor.
11. In the Inspector view, select MySun and set Flare Texture to My50mmflare.
12. Select Directional Light and, in the Inspector view, set Flare to MySun.
13. Play the scene and move your mouse around. You will be able to see the lens flare as the camera faces the light.

How it works...

We have used Unity's built-in lens flare texture as a blueprint for our own. Once applied, the lens flare texture is displayed when the player looks in the approximate direction of the light.

There's more...

Flare textures can use different layouts and parameters for each element. In case you want to learn more about the Lens Flare effect, check out Unity's documentation at http://docs.unity3d.com/Documentation/Components/class-LensFlare.html.

Making textures from screen content

If you want your game or player to take in-game snapshots and apply them as a texture, this recipe will show you how. This can be very useful if you plan to implement an in-game photo gallery or display a snapshot of a past key moment at the end of a level (racing games and stunt sims use this feature a lot).

Getting ready

In order to follow this recipe, please import the basicTerrain package, available in the 0423_02_04_05 folder, into your project. The package includes a basic terrain and a camera that can be rotated via the mouse.

How to do it...
To create textures from screen content, follow these steps:

1. Import the Unity package and open the 02_04_05 scene.
2. We need to create a script. In the Project view, click on the Create drop-down menu and choose C# Script. Rename it ScreenTexture and open it in your editor.
3. Replace everything with the following code:

```csharp
using UnityEngine;
using System.Collections;

public class ScreenTexture : MonoBehaviour {
    public int photoWidth = 50;
    public int photoHeight = 50;
    public int thumbProportion = 25;
    public Color borderColor = Color.white;
    public int borderWidth = 2;
    private Texture2D texture;
    private Texture2D border;
    private int screenWidth;
    private int screenHeight;
    private int frameWidth;
    private int frameHeight;
    private bool shoot = false;

    void Start () {
        screenWidth = Screen.width;
        screenHeight = Screen.height;
        frameWidth = Mathf.RoundToInt(screenWidth * photoWidth * 0.01f);
        frameHeight = Mathf.RoundToInt(screenHeight * photoHeight * 0.01f);
        texture = new Texture2D(frameWidth, frameHeight, TextureFormat.RGB24, false);
        border = new Texture2D(1, 1, TextureFormat.ARGB32, false);
        border.SetPixel(0, 0, borderColor);
        border.Apply();
    }

    void Update () {
        if (Input.GetKeyUp(KeyCode.Mouse0))
            StartCoroutine(CaptureScreen());
    }

    void OnGUI () {
        // Top border of the capture frame
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f) - borderWidth * 2, ((screenHeight * 0.5f) - (frameHeight * 0.5f)) - borderWidth, frameWidth + borderWidth * 2, borderWidth), border, ScaleMode.StretchToFill);
        // Bottom border
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f) - borderWidth * 2, (screenHeight * 0.5f) + (frameHeight * 0.5f), frameWidth + borderWidth * 2, borderWidth), border, ScaleMode.StretchToFill);
        // Left border
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f) - borderWidth * 2, (screenHeight * 0.5f) - (frameHeight * 0.5f), borderWidth, frameHeight), border, ScaleMode.StretchToFill);
        // Right border
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) + (frameWidth * 0.5f), (screenHeight * 0.5f) - (frameHeight * 0.5f), borderWidth, frameHeight), border, ScaleMode.StretchToFill);
        // Thumbnail of the last capture, in the top-left corner
        if (shoot) {
            GUI.DrawTexture(new Rect(10, 10, frameWidth * thumbProportion * 0.01f, frameHeight * thumbProportion * 0.01f), texture, ScaleMode.StretchToFill);
        }
    }

    IEnumerator CaptureScreen () {
        yield return new WaitForEndOfFrame();
        texture.ReadPixels(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f), (screenHeight * 0.5f) - (frameHeight * 0.5f), frameWidth, frameHeight), 0, 0);
        texture.Apply();
        shoot = true;
    }
}
```

4. Save your script and apply it to the Main Camera game object.
5. In the Inspector view, change the values of the Screen Texture component, setting Photo Width and Photo Height to 25 and Thumb Proportion to 75.
6. Play the scene. You will be able to take a snapshot of the screen (and have it displayed in the top-left corner) by clicking the mouse button.

How it works...

Clicking the mouse triggers a function that reads the pixels within the specified rectangle and applies them to a texture that is drawn by the GUI.

There's more...

Apart from displaying the texture as a GUI element, you could use it in other ways.

Applying your texture to a material

You can apply your texture to an existing object's material by adding a line similar to `GameObject.Find("MyObject").renderer.material.mainTexture = texture;` at the end of the CaptureScreen function.

Using your texture as a screenshot

You can encode your texture as a PNG image file and save it. Check out Unity's documentation on this feature at http://docs.unity3d.com/Documentation/ScriptReference/Texture2D.EncodeToPNG.html.
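The capture geometry in this recipe reduces to two pieces of arithmetic: a frame that is a photoWidth/photoHeight percentage of the screen, centered, and a thumbnail that is a thumbProportion percentage of that frame. The following is a Python sketch (not Unity code) of that math, with the screen resolution as an illustrative assumption:

```python
# Python sketch of ScreenTexture's geometry: the ReadPixels rectangle is a
# percentage of the screen, centered, and the thumbnail scales that frame.

def capture_frame(screen_w, screen_h, photo_w=25, photo_h=25):
    """Return (x, y, w, h) of the centered capture rectangle."""
    frame_w = round(screen_w * photo_w * 0.01)
    frame_h = round(screen_h * photo_h * 0.01)
    x = (screen_w * 0.5) - (frame_w * 0.5)
    y = (screen_h * 0.5) - (frame_h * 0.5)
    return (x, y, frame_w, frame_h)

def thumb_size(frame_w, frame_h, thumb_proportion=75):
    """Size of the on-screen thumbnail of the captured frame."""
    return (frame_w * thumb_proportion * 0.01,
            frame_h * thumb_proportion * 0.01)

# The recipe's values (25% frame, 75% thumbnail) on an assumed 1080p screen:
x, y, w, h = capture_frame(1920, 1080)
print((x, y, w, h))      # (720.0, 405.0, 480, 270)
print(thumb_size(w, h))  # (360.0, 202.5)
```

In other words, a quarter-size frame is captured from the middle of the screen, and the thumbnail shown in the corner is three quarters of that frame's size.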


Spacecraft – Adding Details

Packt
31 Dec 2015
6 min read
In this article by Christopher Kuhn, the author of the book Blender 3D Incredible Machines, we'll model our spacecraft. As we do so, we'll cover a few new tools and techniques and apply things in different ways to create a final, complex model:

- Do it yourself: completing the body
- Building the landing gear

We'll work through the spacecraft one section at a time, adding the details.

Do it yourself – completing the body

Next, let's take a look at the key areas that we have left to model. The bottom of the ship and the sensor suite (on the nose) are good opportunities to practice on your own. They use identical techniques to the areas of the ship that we've already done. Go ahead and see what you can do! For the record, here's what I ended up doing with the sensor suite, and here's what I did with the bottom. You can see that I copied the circular piece that was at the top of the engine area.

One of the nice things about a project like this is that you can start to copy parts from one area to another. It's unlikely that both the top and bottom of the ship would be shown in the same render (or shot), so you can probably get away with borrowing quite a bit. Even if you did see them simultaneously, it's not unreasonable to think that a ship would have more than one of certain components. Of course, this is just a way to make things quicker (and easier). If you'd like everything to be 100% original, you're certainly free to do so.

Building the landing gear

We'll do the landing struts together, but you can feel free to finish off the actual skids yourself. I kept mine pretty simple compared to the other parts of the ship.

Once you've got the skid plate done, make sure it is a separate object (if it's not already). We're going to use a neat trick to finish this up. Make a copy of the landing gear part and move it to the rear section (or the front, if you have modeled the rear).
Then, under your mesh tab, assign both of these objects the same mesh data. Now, whenever you make a change to one of them, the change will carry over to the other as well. Of course, you could just model one and then duplicate it, but sometimes it's nice to see how the part will look in multiple locations. For instance, the cutouts are slightly different between the front and back of the ship; as you model the part, you'll want to make sure that it fits both areas.

The first detail that we'll add is a mounting bracket for our struts to go on. Then, we'll add a small cylinder (at this point, the large one is just a placeholder) and rotate it just a bit. From this, it's pretty easy to create a rear mounting piece. Once you've done this, go ahead and add a shock absorber for the front (leave room for the springs, which we'll add next).

To create the spring, we'll start with a small (12-sided) circle. We make it so small because, just like the cable reel on the grappling gun, there will naturally be a lot of geometry, and we want to keep the polygon count as low as possible. Then, in Edit Mode, move the whole circle away from its original center point. Having done this, you can now add a Screw modifier. Right away, you'll see the effect.

There are a couple of settings you'll want to make note of here. The Screw value controls the vertical gap, or pitch, of your spring, while the Angle and Steps values control the number of turns and the smoothness, respectively. Go ahead and play with these until you're happy. Then, move and scale your spring into position. Once it's the way you like it, apply the Screw modifier (but don't join it to the shock absorber just yet).

None of my existing materials seemed right for the spring, so I went ahead and added one that I called Blue Plastic.

At this point, we have a bit of a problem: we want to join the spring to the landing gear, but we can't.
The landing gear has an Edge Split modifier with a Split Angle value of 30, and the spring has a value of 46. If we join them right now, the smooth edges on the spring will become sharp. We don't want this.

Instead, we'll go to our shock absorber. Using the Select menu, we'll pick the Sharp Edges option. By default, it will select all edges with an angle of 30 degrees or higher. Once you do this, go ahead and mark these edges as sharp.

Because all the thirty-degree angles are marked sharp, we no longer need the Edge Angle option on our Edge Split modifier. You can disable it by unchecking it, and the landing gear remains exactly the same. Now, you can join the spring to it without a problem.

Of course, this does mean that when you create new edges in your landing gear, you'll now have to mark them as sharp yourself. Alternatively, you can keep the Edge Angle option selected and just turn it up to 46 degrees, whichever you prefer.

Next, we'll just pull in the ends of our spring a little, so they don't stick out. Maybe we'll duplicate it: after all, this is a big, heavy vehicle, so maybe it needs multiple shock absorbers. This is a good place to leave our landing gear for now.

Summary

In this article, we finished modeling our spaceship's landing gear. We used a few new tools within Blender, but mostly, we focused on workflow and technique.
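The sharp-edge selection used in the landing gear boils down to a simple angle test between adjacent face normals. The following is a small Python sketch of that test (plain math, not Blender's API), using made-up normals for illustration:

```python
# Python sketch of the "Select Sharp Edges" rule: an edge counts as sharp
# when the angle between the normals of its two adjacent faces meets the
# threshold (30 degrees by default, 46 for the spring in this article).
import math

def edge_angle_degrees(n1, n2):
    """Angle between two unit face normals, in degrees."""
    dot = sum(a * b for a, b in zip(n1, n2))
    dot = max(-1.0, min(1.0, dot))  # clamp against float drift
    return math.degrees(math.acos(dot))

def is_sharp(n1, n2, threshold=30.0):
    return edge_angle_degrees(n1, n2) >= threshold

flat = ((0, 0, 1), (0, 0, 1))    # coplanar faces: 0 degrees, stays smooth
corner = ((0, 0, 1), (1, 0, 0))  # a 90-degree corner

print(is_sharp(*flat))                     # False
print(is_sharp(*corner))                   # True at the default 30 degrees
print(is_sharp(*corner, threshold=46.0))   # still True at the spring's 46
```

This is why disabling the Edge Angle option changes nothing once the edges are marked: every edge the 30-degree test would have caught has already been tagged sharp explicitly.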


Replacing 2D Sprites with 3D Models

Packt
21 Sep 2015
21 min read
In this article by Maya Posch, author of the book Mastering AndEngine Game Development, we look at replacing 2D sprites with 3D models. When using a game engine that limits itself to handling scenes in two dimensions, it seems obvious that you would use two-dimensional images, better known as sprites. After all, you won't need that third dimension, right? It is when you get into more advanced games and scenes that you notice that with animations, and also with the usage of existing assets, there are many advantages to using a three-dimensional model in a two-dimensional scene. In this article, we will cover these topics:

- Using 3D models directly with AndEngine
- Loading 3D models within an AndEngine game

Why 3D in a 2D game makes sense

The reasons we want to use 3D models in our 2D scene include the following:

- Recycling of assets: You can use the same models as used for a 3D engine project, as well as countless others.
- Broader base of talent: You'll be able to use a 3D modeler for your 2D game, as good sprite artists are rare.
- Ease of animation: Good animation with sprites is hard. With 3D models, you can use various existing utilities to get smooth animations with ease.

As for the final impact it has on the game's looks, it's no silver bullet, but it should ease development somewhat. The quality of the models, the animations produced, and the way they are integrated into a scene will determine the final look.

2D and 3D compared

In short:

| 2D sprite                         | 3D model                              |
| Defined using a 2D grid of pixels | Defined using vertices in a 3D grid   |
| Only a single front view          | Rotatable to observe any desired side |
| Resource-efficient                | Resource-intensive                    |

A sprite is an image, or, if it's animated, a series of images. Within the boundaries of its resolution (for example, 64 x 64 pixels), the individual pixels make up the resulting image. This is a proven low-tech method, and it has been in use since the earliest video games.
Even the first 3D games, such as Wolfenstein 3D and Doom, used sprites instead of models, as the former are easy to implement and require very few resources to render. With the available memory and processing capabilities of video consoles and personal computers until the later part of the 1990s, sprites were everywhere. It wasn't until the appearance of dedicated vertex graphics processors for consumer systems from companies such as 3dfx, Nvidia, and ATI that sprites would be largely replaced by vertex (3D) models. This is not to say that 3D models were totally new by then, of course. The technology had been in commercial use since the 1970s, when it was used for movie CGI and engineering in particular.

In essence, both sprites and models are a representation of the same object; it's just that one contains more information than the other. Once rendered on the screen, the resulting image contains roughly the same amount of data.

The biggest difference between sprites and models is the total amount of information that they can contain. For a sprite, there is no side or back. A model, on the other hand, has information about every part of its surface. It can be rotated in front of a camera to obtain a rendering of each of those orientations. A sprite is thus equivalent to a single orientation of a model.

Dealing with the third dimension

The first question that is likely to come to mind when it is suggested to use 3D models in what is advertised as a 2D engine is whether or not this will make the game engine into a 3D engine. The brief answer here is "No." The longer answer is that despite the presence of these models, the engine's camera and other features are not aware of this third dimension, and so they will not be able to deal with it. It's not unlike the ray-casting engine employed by titles such as Wolfenstein 3D, which always operated in a horizontal plane and, by default, was not capable of tilting the camera to look up or down.
This does imply that AndEngine could be turned into a 3D engine if all of its classes were adapted to deal with another dimension. We're not going that far here, however. All that we are interested in right now is integrating 3D model support into the existing framework. For this, we need a number of things. The most important one is to be able to load these models. The second is to render them in such a way that we can use them within the AndEngine framework.

As we explored earlier, the way to integrate 3D models into a 2D scene is to realize that a model is just a very large collection of possible sprites. What we need is a camera that we can orient relative to the model, similar to how the camera in a 3D engine works. We can then display the model from that orientation. Any further manipulations, such as scaling and scene-wide transformations, are performed on the model's camera configuration. The model itself is only manipulated to obtain a new orientation or a new frame of an animation.

Setting up the environment

We first need to load the model from our resources into memory. For this, we require logic that fetches the file, parses it, and produces output we can use in the following step of rendering an orientation of the model. To load the model, we can either write the logic for it ourselves or use an existing library. The latter approach is generally preferred, unless you have special needs that are not yet covered by an existing library. As we have no such special needs, we will use an existing library.

Our choice here is the Open Asset Import Library, or Assimp for short. It can import numerous 3D model files in addition to other kinds of resource files, which we'll find useful later on. Assimp is written in C++, which means that we will be using it as a native library (.a or .so). To accomplish this, we first need to obtain its source code and compile it for Android.
The main Assimp site can be found at http://assimp.sf.net/, and the Git repository is at https://github.com/assimp/assimp. From the latter, we obtain the current source for Assimp and put it into a folder called assimp. We can easily obtain the Assimp source by either downloading an archive file containing the full repository or by using the Git client (from http://git-scm.com/) and cloning the repository using the following command in an empty folder (the assimp folder mentioned):

```shell
git clone https://github.com/assimp/assimp.git
```

This will create a local copy of the remote Git repository. An advantage of this method is that we can easily keep our local copy up to date with the Assimp project's version simply by pulling any changes.

As Assimp uses CMake for its build system, we will also need to obtain the CMake version for Android from http://code.google.com/p/android-cmake/. Android-CMake contains the toolchain file that we will need to set up cross-compilation from our host system to Android/ARM. Assuming that we put Android-CMake into the android-cmake folder, we can then find this toolchain file under android-cmake/toolchain/android.toolchain.cmake.

We now need to make sure the following environment variable is properly set:

- ANDROID_NDK: This points to the root folder where the Android NDK is placed.

At this point, we can use either the command-line CMake tool or the cross-platform CMake GUI. We choose the latter for sheer convenience. Unless you are quite familiar with the workings of CMake, the GUI tool can make the experience significantly more intuitive, not to mention faster and more automated. Any commands we use in the GUI tool will, however, easily translate to the command-line tool. The first thing we do after opening the CMake GUI utility is specify the location of the source (the assimp source folder) and the output folder for the CMake-generated files.
For the latter path, we will create a new folder called buildandroid inside the Assimp source folder and specify it as the build folder. We now need to set a variable inside the CMake GUI:

- CMAKE_MAKE_PROGRAM: This variable specifies the path to the Make executable. For Linux/BSD, use GNU Make or similar; for Windows, use MinGW Make.

Next, we will want to click on the Configure button, where we can set the type of Make files generated as well as specify the location of the toolchain file. For the Make file type, you will generally want to pick Unix Makefiles on Linux or similar and MinGW Makefiles on Windows. Next, pick the option that allows you to specify the cross-compile toolchain file, and select this file inside the Android-CMake folder as detailed earlier. After this, the CMake GUI should output "Configuring done".

What has happened now is that the toolchain file that we linked to has configured CMake to use the NDK's compiler, which targets ARM, and has set other configuration options. If we want, we can change some options here, such as the following:

- CMAKE_BUILD_TYPE: We can specify the type of build we want here, using the Debug and Release strings.
- ASSIMP_BUILD_STATIC_LIB: This is a Boolean value. Setting it to true (or checking the box in the GUI) will generate only a library file for static linking and no .so file.

Whether we want to build statically or not depends on our ultimate goals and distribution details. As static linking of external libraries is quite convenient and also reduces the total file size on a platform that is generally already strapped for space, it seems obvious to link statically. The resulting .a library for a release build should be in the order of 16 megabytes, while a debug build is about 68 megabytes. When linking the final application, only those parts of the library that we use will be included in our application, shrinking the total file size once more.
We are now ready to click on the Generate button, which should produce a "Generating done" output. If you get an error along the lines of "Could not uniquely determine machine name for compiler", you should look at the paths used by CMake and check whether they exist. For the NDK toolchain on Windows, for example, the path may contain a windows part, whereas the NDK only has a folder called windows-x86_64.

If we look into the buildandroid folder after this, we can see that CMake has generated a Makefile and additional relevant files. We only need the central Make file in the buildandroid folder, however. In a terminal window, we navigate to this folder and execute the following command:

```shell
make
```

This should start the execution of the Make files that CMake generated and result in a proper build. At the end of this compilation sequence, we should have a library file called libassimp.a in assimp/libs/armeabi-v7a/.

For our project, we need this library and the Assimp include files. We can find the latter under assimp/include/assimp. We copy the folder with the include files to our project's /jni folder. The .a library is placed in the /jni folder as well. As this is a relatively simple NDK project, a simple file structure is fine. For a more complex project, we would want to have a separate /jni/libs folder, or something similar.

Importing a model

The Assimp library provides conversion tools for reading resource files, such as those for 3D mesh models, and provides a generic format on the application's side. For a 3D mesh file, Assimp provides us with an aiScene object that contains all the meshes and related data as described by the imported file.

After importing a model, we need to read the sets of data that we require for rendering. These are the types of data:

- Vertices (positions)
- Normals
- Texture mapping (UV)
- Indices

Vertices might be obvious; they are the positions of points between which the lines of basic geometric shapes are drawn.
Usually, three vertices are used to form a triangular face, which forms the basic shape unit for a model. Normals indicate the orientation of the vertex. We have one normal per vertex. Texture mapping is provided using so-called UV coordinates. Each vertex has a UV coordinate if texture mapping information is provided with the model. Finally, indices are values provided per face, indicating which vertices should be used. This is essentially a compression technique, allowing the faces to define the vertices that they will use so that shared vertices have to be defined only once. During the drawing process, these indices are used by OpenGL to find the vertices to draw. We start off our importer code by first creating a new file called assimpImporter.cpp in the /jni folder. We require the following includes:

#include "assimp/Importer.hpp"  // C++ importer interface
#include "assimp/scene.h"       // output data structure
#include "assimp/postprocess.h" // post processing flags

// for native asset manager
#include <sys/types.h>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>

The Assimp includes give us access to the central Importer object, which we'll use for the actual import process, and the scene object for its output. The postprocess include contains various flags and presets for post-processing information to be used with Importer, such as triangulation. The remaining includes are meant to give us access to the Android Asset Manager API. The model file is stored inside the /assets folder, which, once packaged as an APK, is only accessible during runtime via this API, whether in Java or in native code. Moving on, we will be using a single function in our native code to perform the importing and processing.
As usual, we have to first declare a C-style interface so that when our native library gets compiled, our Java code can find the function in the library: extern "C" { JNIEXPORT jboolean JNICALL Java_com_nyanko_andengineontour_MainActivity_getModelData(JNIEnv* env, jobject obj, jobject model, jobject assetManager, jstring filename); }; The JNIEnv* parameter and the first jobject parameter are standard in an NDK/JNI function, with the former being a handy pointer to the current JVM environment, offering a variety of utility functions. Our own parameters are the following: model assetManager filename The model is a basic Java class with getters/setters for the arrays of vertex, normal, UV and index data of which we create an instance and pass a reference via the JNI. The next parameter is the Asset Manager instance that we created in the Java code. Finally, we obtain the name of the file that we are supposed to load from the assets containing our mesh. One possible gotcha in the naming of the function we're exporting is that of underscores. Within the function name, no underscores are allowed, as underscores are used to indicate to the NDK what the package name and class names are. Our getModelData function gets parsed as being in the MainActivity class of the package com.nyanko.andengineontour. If we had tried to use, for example, get_model_data as the function name, it would have tried to find function data in the model class of the com.nyanko.andengineontour.get package. Next, we can begin the actual importing process. 
First, we define the aiScene instance, that will contain the imported scene, and the arrays for the imported data, as well as the Assimp Importer instance: const aiScene* scene = 0; jfloat* vertexArray; jfloat* normalArray; jfloat* uvArray; jshort* indexArray; Assimp::Importer importer; In order to use a Java string in native code, we have to use the provided method to obtain a reference via the env parameter: const char* utf8 = env->GetStringUTFChars(filename, 0); if (!utf8) { return JNI_FALSE; } We then create a reference to the Asset Manager instance that we created in Java: AAssetManager* mgr = AAssetManager_fromJava(env, assetManager); if (!mgr) { return JNI_FALSE; } We use this to obtain a reference to the asset we're looking for, being the model file: AAsset* asset = AAssetManager_open(mgr, utf8, AASSET_MODE_UNKNOWN); if (!asset) { return JNI_FALSE; } Finally, we release our reference to the filename string before moving on to the next stage: env->ReleaseStringUTFChars(filename, utf8); With access to the asset, we can now read it from the memory. While it is, in theory, possible to directly read a file from the assets, you will have to write a new I/O manager to allow Assimp to do this. This is because asset files, unfortunately, cannot be passed as a standard file handle reference on Android. For smaller models, however, we can read the entire file from the memory and pass this data to the Assimp importer. 
First, we get the size of the asset, create an array to store its contents, and read the file in it: int count = (int) AAsset_getLength(asset); char buf[count + 1]; if (AAsset_read(asset, buf, count) != count) { return JNI_FALSE; } Finally, we close the asset reference: AAsset_close(asset); We are now done with the asset manager and can move on to the importing of this model data: const aiScene* scene = importer.ReadFileFromMemory(buf, count, aiProcessPreset_TargetRealtime_Fast); if (!scene) { return JNI_FALSE; } The importer has a number of possible ways to read in the file data, as mentioned earlier. Here, we read from a memory buffer (buf) that we filled in earlier with the count parameter, indicating the size in bytes. The last parameter of the import function is the post-processing parameters. Here, we use the aiProcessPreset_TargetRealtime_Fast preset, which performs triangulation (converting non-triangle faces to triangles), and other sensible presets. The resulting aiScene object can contain multiple meshes. In a complete importer, you'd want to import all of them into a loop. We'll just look at importing the first mesh into the scene here. First, we get the mesh: aiMesh* mesh = scene->mMeshes[0]; This aiMesh object contains all of the information on the data we're interested in. First, however, we need to create our arrays: int vertexArraySize = mesh->mNumVertices * 3; int normalArraySize = mesh->mNumVertices * 3; int uvArraySize = mesh->mNumVertices * 2; int indexArraySize = mesh->mNumFaces * 3; vertexArray = new float[vertexArraySize]; normalArray = new float[normalArraySize]; uvArray = new float[uvArraySize]; indexArray = new jshort[indexArraySize]; For the vertex, normal, and texture mapping (UV) arrays, we use the number of vertices as defined in the aiMesh object as normal, and the UVs are defined per vertex. The former two have three components (x, y, z) and the UVs have two (x, y). 
Finally, indices are defined per vertex of the face, so we use the face count from the mesh multiplied by the three vertices of each (triangulated) face. All things but indices use floats for their components. The jshort type is a short integer type defined by the NDK. It's generally a good idea to use the NDK types for values that are sent to and from the Java side. Reading the data from the aiMesh object to the arrays is fairly straightforward:

for (unsigned int i = 0; i < mesh->mNumVertices; i++) {
    aiVector3D pos = mesh->mVertices[i];
    vertexArray[3 * i + 0] = pos.x;
    vertexArray[3 * i + 1] = pos.y;
    vertexArray[3 * i + 2] = pos.z;

    aiVector3D normal = mesh->mNormals[i];
    normalArray[3 * i + 0] = normal.x;
    normalArray[3 * i + 1] = normal.y;
    normalArray[3 * i + 2] = normal.z;

    aiVector3D uv = mesh->mTextureCoords[0][i];
    uvArray[2 * i + 0] = uv.x;
    uvArray[2 * i + 1] = uv.y;
}

for (unsigned int i = 0; i < mesh->mNumFaces; i++) {
    const aiFace& face = mesh->mFaces[i];
    indexArray[3 * i + 0] = face.mIndices[0];
    indexArray[3 * i + 1] = face.mIndices[1];
    indexArray[3 * i + 2] = face.mIndices[2];
}

To access the correct part of the array to write to, we use an index that multiplies the number of elements per entry (floats or shorts) by the current iteration, plus an offset to reach the next available slot. Doing things this way instead of pointer incrementation has the benefit that we do not have to reset the array pointer after we're done writing. There! We have now read in all of the data that we want from the model. Next is arguably the hardest part of using the NDK: passing data via the JNI. This involves quite a lot of reference magic and type-matching, which can be rather annoying and lead to confusing errors. To make things as easy as possible, we used the generic Java class instance so that we already had an object to put our data into from the native side.
We still have to find the methods in this class instance, however, using what is essentially Java reflection:

jclass cls = env->GetObjectClass(model);
if (!cls) {
    return JNI_FALSE;
}

The first goal is to get a jclass reference. For this, we use the jobject model variable, as it already contains our instantiated class instance:

jmethodID setVA = env->GetMethodID(cls, "setVertexArray", "([F)V");
jmethodID setNA = env->GetMethodID(cls, "setNormalArray", "([F)V");
jmethodID setUA = env->GetMethodID(cls, "setUvArray", "([F)V");
jmethodID setIA = env->GetMethodID(cls, "setIndexArray", "([S)V");

We then obtain the method references for the setters in the class as jmethodID variables. The parameters in each call are the class reference we created, the name of the method, and its signature: a float array ([F) parameter (a short array, [S, for the index setter) and a void (V) return type. Finally, we create our native Java arrays to pass back via the JNI:

jfloatArray jvertexArray = env->NewFloatArray(vertexArraySize);
env->SetFloatArrayRegion(jvertexArray, 0, vertexArraySize, vertexArray);
jfloatArray jnormalArray = env->NewFloatArray(normalArraySize);
env->SetFloatArrayRegion(jnormalArray, 0, normalArraySize, normalArray);
jfloatArray juvArray = env->NewFloatArray(uvArraySize);
env->SetFloatArrayRegion(juvArray, 0, uvArraySize, uvArray);
jshortArray jindexArray = env->NewShortArray(indexArraySize);
env->SetShortArrayRegion(jindexArray, 0, indexArraySize, indexArray);

This code uses the env JNIEnv* reference to create the Java arrays and allocate memory for them in the JVM. Finally, we call the setter functions in the class to set our data. These essentially call the methods on the Java class inside the JVM, providing the parameter data as Java types:

env->CallVoidMethod(model, setVA, jvertexArray);
env->CallVoidMethod(model, setNA, jnormalArray);
env->CallVoidMethod(model, setUA, juvArray);
env->CallVoidMethod(model, setIA, jindexArray);

We only have to return JNI_TRUE now, and we're done.
Building our library To build our code, we write the Android.mk and Application.mk files. Next, we go to the top level of our project in a terminal window and execute the ndk-build command. This will compile the code and place a library in the /libs folder of our project, inside a folder that indicates the CPU architecture it was compiled for. For further details on the ndk-build tool, you can refer to the official documentation at https://developer.android.com/ndk/guides/ndk-build.html. Our Android.mk file looks as follows: LOCAL_PATH := $(call my-dir) include $(CLEAR_VARS) LOCAL_MODULE := libassimp LOCAL_SRC_FILES := libassimp.a include $(PREBUILT_STATIC_LIBRARY) include $(CLEAR_VARS) LOCAL_MODULE := assimpImporter #LOCAL_MODULE_FILENAME := assimpImporter LOCAL_SRC_FILES := assimpImporter.cpp LOCAL_LDLIBS := -landroid -lz -llog LOCAL_STATIC_LIBRARIES := libassimp libgnustl_static include $(BUILD_SHARED_LIBRARY) The only things worthy of notice here are the inclusion of the Assimp library we compiled earlier and the use of the gnustl_static library. Since we only have a single native library in the project, we don't have to share the STL library. So, we link it with our library. Finally, we have the Application.mk file: APP_PLATFORM := android-9 APP_STL := gnustl_static There's not much to see here beyond the required specification of the STL runtime that we wish to use and the Android revision we are aiming for. After executing the build command, we are ready to build the actual application that performs the rendering of our model data. Summary With our code added, we can now load 3D models from a variety of formats, import it into our application, and create objects out of them, which we can use together with AndEngine. As implemented now, we essentially have an embedded rendering pipeline for 3D assets that extends the basic AndEngine 2D rendering pipeline. 
This provides a solid platform for the next stages in extending these basics even further to provide the texturing, lighting, and physics effects that we need to create an actual game.
Physics with Bullet

Packt
13 Aug 2014
7 min read
In this article by Rickard Eden, author of jMonkeyEngine 3.0 Cookbook, we will learn how to use physics in games using different physics engines. This article contains the following recipes: Creating a pushable door, Building a rocket engine, Ballistic projectiles and arrows, Handling multiple gravity sources, and Self-balancing using RotationalLimitMotors. (For more resources related to this topic, see here.) Using physics in games has become very common and accessible, thanks to open source physics engines, such as Bullet. jMonkeyEngine supports both the Java-based jBullet and native Bullet in a seamless manner. jBullet is a Java-based library with JNI bindings to the original C++-based Bullet. jMonkeyEngine is supplied with both of these, and they can be used interchangeably by replacing the libraries in the classpath. No coding change is required. Use jme3-libraries-physics for the jBullet implementation and jme3-libraries-physics-native for Bullet. In general, Bullet is considered to be faster and is full-featured. Physics can be used for almost anything in games, from tin cans that can be kicked around to character animation systems. In this article, we'll try to reflect the diversity of these implementations.

Creating a pushable door

Doors are useful in games. Visually, it is more appealing not to have holes in the walls, but doors for the players to pass through. Doors can be used to obscure the view and hide what's behind them for a surprise later. By extension, they can also be used to dynamically hide geometry and improve performance. There is also a gameplay aspect, where doors are used to open new areas to the player and give a sense of progression. In this recipe, we will create a door that can be opened by pushing it, using a HingeJoint class.
This door consists of the following three elements: Door object: This is a visible object Attachment: This is the fixed end of the joint around which the hinge swings Hinge: This defines how the door should move Getting ready Simply following the steps in this recipe won't give us anything testable. Since the camera has no physics, the door will just sit there and we will have no way to push it. If you have made any of the recipes that use the BetterCharacterControl class, we will already have a suitable test bed for the door. If not, jMonkeyEngine's TestBetterCharacter example can also be used. How to do it... This recipe consists of two sections. The first will deal with the actual creation of the door and the functionality to open it. This will be made in the following six steps: Create a new RigidBodyControl object called attachment with a small BoxCollisionShape. The CollisionShape should normally be placed inside the wall where the player can't run into it. It should have a mass of 0, to prevent it from being affected by gravity. We move it some distance away and add it to the physicsSpace instance, as shown in the following code snippet: attachment.setPhysicsLocation(new Vector3f(-5f, 1.52f, 0f)); bulletAppState.getPhysicsSpace().add(attachment); Now, create a Geometry class called doorGeometry with a Box shape with dimensions that are suitable for a door, as follows: Geometry doorGeometry = new Geometry("Door", new Box(0.6f, 1.5f, 0.1f)); Similarly, create a RigidBodyControl instance with the same dimensions, that is, 1 in mass; add it as a control to the doorGeometry class first and then add it to physicsSpace of bulletAppState. The following code snippet shows you how to do this: RigidBodyControl doorPhysicsBody = new RigidBodyControl(new BoxCollisionShape(new Vector3f(.6f, 1.5f, .1f)), 1); bulletAppState.getPhysicsSpace().add(doorPhysicsBody); doorGeometry.addControl(doorPhysicsBody); Now, we're going to connect the two with HingeJoint. 
Create a new HingeJoint instance called joint, as follows: new HingeJoint(attachment, doorPhysicsBody, new Vector3f(0f, 0f, 0f), new Vector3f(-1f, 0f, 0f), Vector3f.UNIT_Y, Vector3f.UNIT_Y); Then, we set the limit for the rotation of the door and add it to physicsSpace as follows: joint.setLimit(-FastMath.HALF_PI - 0.1f, FastMath.HALF_PI + 0.1f); bulletAppState.getPhysicsSpace().add(joint); Now, we have a door that can be opened by walking into it. It is primitive but effective. Normally, you want doors in games to close after a while. However, here, once it is opened, it remains opened. In order to implement an automatic closing mechanism, perform the following steps: Create a new class called DoorCloseControl extending AbstractControl. Add a HingeJoint field called joint along with a setter for it and a float variable called timeOpen. In the controlUpdate method, we get hingeAngle from HingeJoint and store it in a float variable called angle, as follows: float angle = joint.getHingeAngle(); If the angle deviates a bit more from zero, we should increase timeOpen using tpf. Otherwise, timeOpen should be reset to 0, as shown in the following code snippet: if(angle > 0.1f || angle < -0.1f) timeOpen += tpf; else timeOpen = 0f; If timeOpen is more than 5, we begin by checking whether the door is still open. If it is, we define a speed to be the inverse of the angle and enable the door's motor to make it move in the opposite direction of its angle, as follows: if(timeOpen > 5) { float speed = angle > 0 ? 
-0.9f : 0.9f; joint.enableMotor(true, speed, 0.1f); spatial.getControl(RigidBodyControl.class).activate(); } If timeOpen is less than 5, we should set the speed of the motor to 0: joint.enableMotor(true, 0, 1); Now, we can create a new DoorCloseControl instance in the main class, attach it to the doorGeometry class, and give it the same joint we used previously in the recipe, as follows: DoorCloseControl doorControl = new DoorCloseControl(); doorControl.setHingeJoint(joint); doorGeometry.addControl(doorControl); How it works... The attachment RigidBodyControl has no mass and will thus not be affected by external forces such as gravity. This means it will stick to its place in the world. The door, however, has mass and would fall to the ground if the attachment didn't keep it up with it. The HingeJoint class connects the two and defines how they should move in relation to each other. Using Vector3f.UNIT_Y means the rotation will be around the y axis. We set the limit of the joint to be a little more than half PI in each direction. This means it will open almost 100 degrees to either side, allowing the player to step through. When we try this out, there may be some flickering as the camera passes through the door. To get around this, there are some tweaks that can be applied. We can change the collision shape of the player. Making the collision shape bigger will result in the player hitting the wall before the camera gets close enough to clip through. This has to be done considering other constraints in the physics world. You can consider changing the near clip distance of the camera. Decreasing it will allow things to get closer to the camera before they are clipped through. This might have implications on the camera's projection. One thing that will not work is making the door thicker, since the triangles on the side closest to the player are the ones that are clipped through. Making the door thicker will move them even closer to the player. 
In DoorCloseControl, we consider the door to be open if hingeAngle deviates a bit more from 0. We don't use 0 because we can't control the exact rotation of the joint. Instead we use a rotational force to move it. This is what we do with joint.enableMotor. Once the door is open for more than five seconds, we tell it to move in the opposite direction. When it's close to 0, we set the desired movement speed to 0. Simply turning off the motor, in this case, will cause the door to keep moving until it is stopped by an external force. Once we enable the motor, we also need to call activate() on RigidBodyControl or it will not move.
Creating weapons for your game using UnrealScript

Packt
23 Apr 2013
18 min read
(For more resources related to this topic, see here.) Creating a gun that fires homing missiles UDK already has a homing rocket launcher packaged with the dev kit (UTWeap_ RocketLauncher). The problem however, is that it isn't documented well; it has a ton of excess code only necessary for multiplayer games played over a network, and can only lock on when you have loaded three rockets. We're going to change all of that, and allow our homing weapon to lock onto a pawn and fire any projectile of our choice. We also need to change a few functions, so that our weapon fires from the correct location and uses the pawn's rotation and not the camera's. Getting ready As I mentioned earlier, our main weapon for this article will extend from the UTWeap_ ShockRifle, as that gun offers a ton of great base functionality which we can build from. Let's start by opening your IDE and creating a new weapon called MyWeapon, and have it extend from UTWeap_ShockRifle as shown as follows: class MyWeapon extends UTWeap_ShockRifle; How to do it... We need to start by adding all of the variables that we'll be needing for our lock on feature. There are quite a few here, but they're all commented in pretty great detail. Much of this code is straight from UDK's rocket launcher, that is why it looks familiar. In this recipe, we'll be creating a base weapon which extends from one of the Unreal Tournament's most commonly used weapons, the shock rifle, and base all of our weapons from that. I've gone ahead and removed an unnecessary information, added comments, and altered functionality so that we can lock onto pawns with any weapon, and fire only one missile while doing so. 
/********************************************************* Weapon lock on support********************************************************//** Class of the rocket to use when seeking */var class<UTProjectile> SeekingRocketClass;/** The frequency with which we will check for a lock */var(Locking) float LockCheckTime;/** How far out should we be considering actors for a lock */var float LockRange;/** How long does the player need to target an actor to lock on toit*/var(Locking) float LockAcquireTime;/** Once locked, how long can the player go without painting theobject before they lose the lock */var(Locking) float LockTolerance;/** When true, this weapon is locked on target */var bool bLockedOnTarget;/** What "target" is this weapon locked on to */var Actor LockedTarget;var PlayerReplicationInfo LockedTargetPRI;/** What "target" is current pending to be locked on to */var Actor PendingLockedTarget;/** How long since the Lock Target has been valid */var float LastLockedOnTime;/** When did the pending Target become valid */var float PendingLockedTargetTime;/** When was the last time we had a valid target */var float LastValidTargetTime;/** angle for locking for lock targets */var float LockAim;/** angle for locking for lock targets when on Console */var float ConsoleLockAim;/** Sound Effects to play when Locking */var SoundCue LockAcquiredSound;var SoundCue LockLostSound;/** If true, weapon will try to lock onto targets */var bool bTargetLockingActive;/** Last time target lock was checked */var float LastTargetLockCheckTime; With our variables in place, we can now move onto the weapon's functionality. The InstantFireStartTrace() function is the same function we added in our weapon. It allows our weapon to start its trace from the correct location using the GetPhysicalFireStartLoc() function. function. 
As mentioned before, this simply grabs the rotation of the weapon's muzzle flash socket, and tells the weapon to fire projectiles from that location, using the socket's rotation. The same goes for GetEffectLocation(), which is where our muzzle flash will occur. The v in vector for the InstantFireStartTrace() function is not capitalized. The reason being that vector is actually of struct type, and not a function, and that is standard procedure in UDK. /********************************************************* Overriden to use GetPhysicalFireStartLoc() instead of* Instigator.GetWeaponStartTraceLocation()* @returns position of trace start for instantfire()********************************************************/simulated function vector InstantFireStartTrace(){return GetPhysicalFireStartLoc();}/********************************************************* Location that projectiles will spawn from. Works for secondaryfire on* third person mesh********************************************************/simulated function vector GetPhysicalFireStartLoc(optional vectorAimDir){Local SkeletalMeshComponent AttachedMesh;local vector SocketLocation;Local TutorialPawn TutPawn;TutPawn = TutorialPawn(Owner);AttachedMesh = TutPawn.CurrentWeaponAttachment.Mesh;/** Check to prevent log spam, and the odd situation winwhich a cast to type TutPawn can fail */if (TutPawn != none){AttachedMesh.GetSocketWorldLocationAndRotation(MuzzleFlashSocket, SocketLocation);}return SocketLocation;}/********************************************************* Overridden from UTWeapon.uc* @return the location + offset from which to spawn effects(primarily tracers)********************************************************/simulated function vector GetEffectLocation(){Local SkeletalMeshComponent AttachedMesh;local vector SocketLocation;Local TutorialPawn TutPawn;TutPawn = TutorialPawn(Owner);AttachedMesh = TutPawn.CurrentWeaponAttachment.Mesh;if (TutPawn != 
none){AttachedMesh.GetSocketWorldLocationAndRotation(MuzzleFlashSocket, SocketLocation);}MuzzleFlashSocket, SocketLocation);return SocketLocation;} Now we're ready to dive into the parts of code that are applicable to the actual homing of the weapon. Let's start by adding our debug info, which allows us to troubleshoot any issues we may have along the way. ********************************************************** Prints debug info for the weapon********************************************************/simulated function GetWeaponDebug( out Array<String> DebugInfo ){Super.GetWeaponDebug(DebugInfo);DebugInfo[DebugInfo.Length] = "Locked:"@bLockedOnTarget@LockedTarget@LastLockedontime@(WorldInfo.TimeSeconds-LastLockedOnTime);DebugInfo[DebugInfo.Length] ="Pending:"@PendingLockedTarget@PendingLockedTargetTime@WorldInfo.TimeSeconds;} Here we are simply stating which target our weapon is currently locked onto, in addition to the pending target. It does this by grabbing the variables we've listed before, after they've returned from their functions, which we'll add in the next part. We need to have a default state for our weapon to begin with, so we mark it as inactive. /********************************************************* Default state. Go back to prev state, and don't use our* current tick********************************************************/auto simulated state Inactive{ignores Tick;simulated function BeginState(name PreviousStateName){Super.BeginState(PreviousStateName);// not looking to lock onto a targetbTargetLockingActive = false;// Don't adjust our target lockAdjustLockTarget(None);} We ignore the tick which tells the weapon to stop updating any of its homing functions. Additionally, we tell it not to look for an active target or adjust its current target, if we did have one at the moment. 
While on the topic of states, if we finish our current one, then it's time to move onto the next: /********************************************************* Finish current state, & prepare for the next one********************************************************/simulated function EndState(Name NextStateName){Super.EndState(NextStateName);// If true, weapon will try to lock onto targetsbTargetLockingActive = true;}} If our weapon is destroyed or we are destroyed, then we want to prevent the weapon from continuing to lock onto a target. /********************************************************* If the weapon is destroyed, cancel any target lock********************************************************/simulated event Destroyed(){// Used to adjust the LockTarget.AdjustLockTarget(none);//Calls the previously defined Destroyed functionsuper.Destroyed();} Our next chunk of code is pretty large, but don't let it intimidate you. Take your time and read it through to have a thorough understanding of what is occurring. When it all boils down, the CheckTargetLock() function verifies that we've actually locked onto our target. We start by checking that we have a pawn, a player controller, and that we are using a weapon which can lock onto a target. We then check if we can lock onto the target, and if it is possible, we do it. At the moment we only have the ability to lock onto pawns. 
/****************************************************************** Have we locked onto our target?****************************************************************/function CheckTargetLock(){local Actor BestTarget, HitActor, TA;local UDKBot BotController;local vector StartTrace, EndTrace, Aim, HitLocation,HitNormal;local rotator AimRot;local float BestAim, BestDist;if((Instigator == None)||(Instigator.Controller ==None)||(self != Instigator.Weapon) ){return;}if ( Instigator.bNoWeaponFiring)// TRUE indicates that weapon firing is disabled for thispawn{// Used to adjust the LockTarget.AdjustLockTarget(None);// "target" is current pending to be locked on toPendingLockedTarget = None;return;}// We don't have a targetBestTarget = None;BotController = UDKBot(Instigator.Controller);// If there is BotController...if ( BotController != None ){// only try locking onto bot's targetif((BotController.Focus != None) &&CanLockOnTo(BotController.Focus) ){// make sure bot can hit itBotController.GetPlayerViewPoint( StartTrace, AimRot );Aim = vector(AimRot);if((Aim dot Normal(BotController.Focus.Location -StartTrace)) > LockAim ){HitActor = Trace(HitLocation, HitNormal,BotController.Focus.Location, StartTrace, true,,,TRACEFLAG_Bullet);if((HitActor == None)||(HitActor == BotController.Focus) ){// Actor being looked atBestTarget = BotController.Focus;}}}} Immediately after that, we do a trace to see if our missile can hit the target, and check for anything that may be in the way. If we determine that we can't hit our target then it's time to start looking for a new one. else{// Trace the shot to see if it hits anyoneInstigator.Controller.GetPlayerViewPoint( StartTrace, AimRot );Aim = vector(AimRot);// Where our trace stopsEndTrace = StartTrace + Aim * LockRange;HitActor = Trace(HitLocation, HitNormal, EndTrace, StartTrace,true,,, TRACEFLAG_Bullet);// Check for a hitif((HitActor == None)||!CanLockOnTo(HitActor) ){/** We didn't hit a valid target? 
Controller attempts to pick a good target */BestAim = ((UDKPlayerController(Instigator.Controller)!=None)&&UDKPlayerController(Instigator.Controller).bConsolePlayer) ? ConsoleLockAim : LockAim;BestDist = 0.0;TA = Instigator.Controller.PickTarget(class'Pawn', BestAim, BestDist, Aim, StartTrace,LockRange);if ( TA != None && CanLockOnTo(TA) ){/** Best target is the target we've locked */BestTarget = TA;}}// We hit a valid targetelse{// Best Target is the one we've done a trace onBestTarget = HitActor;}} If we have a possible target, then we note its time mark for locking onto it. If we can lock onto it, then start the timer. The timer can be adjusted in the default properties and determines how long we need to track our target before we have a solid lock. // If we have a "possible" target, note its time markif ( BestTarget != None ){LastValidTargetTime = WorldInfo.TimeSeconds;// If we're locked onto our best targetif ( BestTarget == LockedTarget ){/** Set the LLOT to the time in seconds since level began play */LastLockedOnTime = WorldInfo.TimeSeconds;} Once we have a good target, it should turn into our current one, and start our lock on it. If we've been tracking it for enough time with our crosshair (PendingLockedTargetTime), then lock onto it. else{if ( LockedTarget != None&&((WorldInfo.TimeSeconds - LastLockedOnTime >LockTolerance)||!CanLockOnTo(LockedTarget)) ){// Invalidate the current locked TargetAdjustLockTarget(None);}/** We have our best target, see if they should become our current target. Check for a new pending lock */if (PendingLockedTarget != BestTarget){PendingLockedTarget = BestTarget;PendingLockedTargetTime =((Vehicle(PendingLockedTarget) != None)&&(UDKPlayerController(Instigator.Controller)!=None)&&UDKPlayerController(Instigator.Controller).bConsolePlayer)? 
WorldInfo.TimeSeconds + 0.5*LockAcquireTime: WorldInfo.TimeSeconds + LockAcquireTime;}/** Otherwise check to see if we have been tracking the pending lock long enough */else if (PendingLockedTarget == BestTarget&& WorldInfo.TimeSeconds >= PendingLockedTargetTime ){AdjustLockTarget(PendingLockedTarget);LastLockedOnTime = WorldInfo.TimeSeconds;PendingLockedTarget = None;PendingLockedTargetTime = 0.0;}}} Otherwise, if we can't lock onto our current or our pending target, then cancel our current target, along with our pending target. else{if ( LockedTarget != None&&((WorldInfo.TimeSeconds - LastLockedOnTime > LockTolerance)||!CanLockOnTo(LockedTarget)) ){// Invalidate the current locked TargetAdjustLockTarget(None);}// Next attempt to invalidate the Pending Targetif ( PendingLockedTarget != None&&((WorldInfo.TimeSeconds - LastValidTargetTime >LockTolerance)||!CanLockOnTo(PendingLockedTarget)) ){// We are not pending another target to lock ontoPendingLockedTarget = None;}}} That was quite a bit to digest. Don't worry, because the functions from here on out are pretty simple and straightforward. As with most other classes, we need a Tick() function to check for something in each frame. Here, we'll be checking whether or not we have a target locked in each frame, as well as setting our LastTargetLockCheckTime to the number of seconds passed during game-time. /********************************************************* Check target locking with each update********************************************************/event Tick( Float DeltaTime ){if ( bTargetLockingActive && ( WorldInfo.TimeSeconds >LastTargetLockCheckTime + LockCheckTime ) ){LastTargetLockCheckTime = WorldInfo.TimeSeconds;// Time, in seconds, since level began playCheckTargetLock();// Checks to see if we are locked on a target}} As I mentioned earlier, we can only lock onto pawns. Therefore, we need a function to check whether or not our target is a pawn. 
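The pending-lock bookkeeping above boils down to a small state machine: a candidate target must stay under the crosshair for LockAcquireTime seconds before it becomes a lock, and an existing lock survives for up to LockTolerance seconds after the target is lost. As a language-neutral illustration (Python rather than UnrealScript, with the class and values invented for the sketch), the core timing looks like this:

```python
# Hypothetical constants mirroring the tutorial's defaultproperties values.
LOCK_ACQUIRE_TIME = 0.3   # seconds the crosshair must stay on a target
LOCK_TOLERANCE = 0.8      # grace period before an existing lock is dropped

class LockState:
    def __init__(self):
        self.locked_target = None
        self.pending_target = None
        self.pending_since = 0.0
        self.last_locked_time = 0.0

    def update(self, best_target, now):
        """Mimics the pending-lock branch of CheckTargetLock()."""
        if best_target is None:
            # Drop a stale lock once LockTolerance has elapsed
            if self.locked_target and now - self.last_locked_time > LOCK_TOLERANCE:
                self.locked_target = None
            self.pending_target = None
            return
        if best_target == self.locked_target:
            # Still on the locked target: refresh the lock timestamp
            self.last_locked_time = now
        elif best_target != self.pending_target:
            # New candidate: start the acquire timer
            self.pending_target = best_target
            self.pending_since = now
        elif now - self.pending_since >= LOCK_ACQUIRE_TIME:
            # Tracked long enough: promote pending target to a full lock
            self.locked_target = best_target
            self.last_locked_time = now
            self.pending_target = None
```

Note that the real function also tracks a separate LastValidTargetTime and halves the acquire time for vehicle targets on console; the sketch keeps only the core timing.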
/********************************************************* Given an potential target TA determine if we can lock on to it.By* default, we can only lock on to pawns.********************************************************/simulated function bool CanLockOnTo(Actor TA){if ( (TA == None) || !TA.bProjTarget || TA.bDeleteMe ||(Pawn(TA) == None) || (TA == Instigator) ||(Pawn(TA).Health <= 0) ){return false;}return ( (WorldInfo.Game == None) ||!WorldInfo.Game.bTeamGame || (WorldInfo.GRI == None) ||!WorldInfo.GRI.OnSameTeam(Instigator,TA) );} Once we have a locked target we need to trigger a sound, so that the player is aware of the lock. The whole first half of this function simply sets two variables to not have a target, and also plays a sound cue to notify the player that we've lost track of our target. /********************************************************* Used to adjust the LockTarget.********************************************************/function AdjustLockTarget(actor NewLockTarget){if ( LockedTarget == NewLockTarget ){// No need to updatereturn;}if (NewLockTarget == None){// Clear the lockif (bLockedOnTarget){// No targetLockedTarget = None;// Not locked onto a targetbLockedOnTarget = false;if (LockLostSound != None && Instigator != None &&Instigator.IsHumanControlled() ){// Play the LockLostSound if we lost track of thetargetPlayerController(Instigator.Controller).ClientPlaySound(LockLostSound);}}}else{// Set the lockbLockedOnTarget = true;LockedTarget = NewLockTarget;LockedTargetPRI = (Pawn(NewLockTarget) != None) ?Pawn(NewLockTarget).PlayerReplicationInfo : None;if ( LockAcquiredSound != None && Instigator != None &&Instigator.IsHumanControlled() ){PlayerController(Instigator.Controller).ClientPlaySound(LockAcquiredSound);}}} Once it looks like everything has checked out we can fire our ammo! 
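Before moving on, notice that the checks inside CanLockOnTo() reduce to a single predicate: the target exists, is a live pawn, isn't us, and isn't a teammate in a team game. A rough Python equivalent, with dictionary fields standing in for pawn properties (the field names here are invented for the sketch; the real code also tests bProjTarget and bDeleteMe):

```python
def can_lock_on_to(target, instigator, team_game=False):
    """Return True when a target is a valid thing to lock onto."""
    # Must exist, be a pawn, and still be alive
    if target is None or not target.get("is_pawn") or target.get("health", 0) <= 0:
        return False
    # Never lock onto ourselves
    if target is instigator:
        return False
    # In team games, never lock onto a teammate
    if team_game and target.get("team") == instigator.get("team"):
        return False
    return True
```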
We're just setting everything back to 0 at this point, as our projectile is seeking our target, so it's time to start over and see whether we will use the same target or find another one. /********************************************************* Everything looks good, so fire our ammo!********************************************************/simulated function FireAmmunition(){Super.FireAmmunition();AdjustLockTarget(None);LastValidTargetTime = 0;PendingLockedTarget = None;LastLockedOnTime = 0;PendingLockedTargetTime = 0;} With all of that out of the way, we can finally work on firing our projectile, or in our case, our missile. ProjectileFile() tells our missile to go after our currently locked target, by setting the SeekTarget variable to our currently locked target. /********************************************************* If locked on, we need to set the Seeking projectile's* LockedTarget.********************************************************/simulated function Projectile ProjectileFire(){local Projectile SpawnedProjectile;SpawnedProjectile = super.ProjectileFire();if (bLockedOnTarget &&UTProj_SeekingRocket(SpawnedProjectile) != None){/** Go after the target we are currently lockedonto */UTProj_SeekingRocket(SpawnedProjectile).SeekTarget =LockedTarget;}return SpawnedProjectile;} Really though, our projectile could be anything at this point. We need to tell our weapon to actually use our missile (or rocket, they are used interchangeably) which we will define in our defaultproperties block. 
/********************************************************* We override GetProjectileClass to swap in a Seeking Rocket if weare* locked on.********************************************************/function class<Projectile> GetProjectileClass(){// if we're locked on...if (bLockedOnTarget){// use our homing rocketreturn SeekingRocketClass;}// Otherwise...else{// Use our default projectilereturn WeaponProjectiles[CurrentFireMode];}} If we don't have a SeekingRocketClass class defined, then we just use the currently defined projectile from our CurrentFireMode array. The last part of this class involves the defaultproperties block. This is the same thing we saw in our Camera class. We're setting our muzzle flash socket, which is used for not only firing effects, but also weapon traces, to actually use our muzzle flash socket. defaultproperties{// Forces the secondary fire projectile to fire fromthe weapon attachment */MuzzleFlashSocket=MuzzleFlashSocket} Our MyWeapon class is complete. We don't want to clog our defaultproperties block and we have some great base functionality, so from here on out our weapon classes will generally be only changes to the defaultproperties block. Simplicity! Create a new class called MyWeapon_HomingRocket. Have it extend from MyWeapon. class MyWeapon_HomingRocket extends MyWeapon; In our defaultproperties block, let's add our skeletal and static meshes. We're just going to keep using the shock rifle mesh. Although it's not necessary to do this, as we're already a child class of (that is, inheriting from) UTWeap_ShockRifle, I still want you to see where you would change the mesh if you ever wanted to. 
defaultproperties{// Weapon SkeletalMeshBegin Object class=AnimNodeSequence Name=MeshSequenceAEnd Object// Weapon SkeletalMeshBegin Object Name=FirstPersonMeshSkeletalMesh=SkeletalMesh'WP_ShockRifle.Mesh.SK_WP_ShockRifle_1P'AnimSets(0)=AnimSet'WP_ShockRifle.Anim.K_WP_ShockRifle_1P_Base'Animations=MeshSequenceARotation=(Yaw=-16384)FOV=60.0End Object// PickupMeshBegin Object Name=PickupMeshSkeletalMesh=SkeletalMesh'WP_ShockRifle.Mesh.SK_WP_ShockRifle_3P'End Object// Attachment classAttachmentClass=class'UTGameContent.UTAttachment_ShockRifle' Next, we want to declare the type of projectile, the type of damage it does, and the frequency at which it can be fired. Moreover, we want to declare that each shot fired will only deplete one round from our inventory. We can declare how much ammo the weapon starts with too. // Defines the type of fire for each modeWeaponFireTypes(0)=EWFT_InstantHitWeaponFireTypes(1)=EWFT_ProjectileWeaponProjectiles(1)=class'UTProj_Rocket'// Damage typesInstantHitDamage(0)=45FireInterval(0)=+1.0FireInterval(1)=+1.3InstantHitDamageTypes(0)=class'UTDmgType_ShockPrimary'InstantHitDamageTypes(1)=None// Not an instant hit weapon, so set to "None"// How much ammo will each shot use?ShotCost(0)=1ShotCost(1)=1// # of ammo gun should start withAmmoCount=20// Initial ammo count if weapon is lockedLockerAmmoCount=20// Max ammo countMaxAmmoCount=40 Our weapon will use a number of sounds that we didn't previously need, such as locking onto a pawn, as well as losing lock. So let's add those now. 
// Sound effectsWeaponFireSnd[0] =SoundCue'A_Weapon_ShockRifle.Cue.A_Weapon_SR_FireCue'WeaponFireSnd[1]=SoundCue'A_Weapon_RocketLauncher.Cue.A_Weapon_RL_Fire_Cue'WeaponEquipSnd=SoundCue'A_Weapon_ShockRifle.Cue.A_Weapon_SR_RaiseCue'WeaponPutDownSnd=SoundCue'A_Weapon_ShockRifle.Cue.A_Weapon_SR_LowerCue'PickupSound=SoundCue'A_Pickups.Weapons.Cue.A_Pickup_Weapons_Shock_Cue'LockAcquiredSound=SoundCue'A_Weapon_RocketLauncher.Cue.A_Weapon_RL_SeekLock_Cue'LockLostSound=SoundCue'A_Weapon_RocketLauncher.Cue.A_Weapon_RL_SeekLost_Cue' We won't be the only one to use this weapon, as bots will be picking it up during Deathmatch style games as well. Therefore, we want to declare some logic for the bots, such as how strongly they will desire it, and whether or not they can use it for things like sniping. // AI logicMaxDesireability=0.65 // Max desireability for botsAIRating=0.65CurrentRating=0.65bInstantHit=false // Is it an instant hit weapon?bSplashJump=false// Can a bot use this for splash damage?bRecommendSplashDamage=true// Could a bot snipe with this?bSniping=false// Should it fire when the mouse is released?ShouldFireOnRelease(0)=0// Should it fire when the mouse is released?ShouldFireOnRelease(1)=0 We need to create an offset for the camera too, otherwise the weapon wouldn't display correctly as we switch between first and third person cameras. // Holds an offset for spawning projectile effectsFireOffset=(X=20,Y=5)// Offset from view center (first person)PlayerViewOffset=(X=17,Y=10.0,Z=-8.0) Our homing properties section is the bread and butter of our class. This is where you'll alter the default values for anything to do with locking onto pawns. // Homing properties/** angle for locking for locktargets when on Console */ConsoleLockAim=0.992/** How far out should we be before considering actors fora lock? 
*/LockRange=9000// Angle for locking, for lockTargetLockAim=0.997// How often we check for lockLockChecktime=0.1// How long does player need to hover over actor to lock?LockAcquireTime=.3// How close does the trace need to be to the actual targetLockTolerance=0.8SeekingRocketClass=class'UTProj_SeekingRocket' Animations are an essential part of realism, so we want the camera to shake when firing a weapon, in addition to an animation for the weapon itself. // camera anim to play when firing (for camera shakes)FireCameraAnim(1)=CameraAnim'Camera_FX.ShockRifle.C_WP_ShockRifle_Alt_Fire_Shake'// Animation to play when the weapon is firedWeaponFireAnim(1)=WeaponAltFire While we're on the topic of visuals, we may as well add the flashes at the muzzle, as well as the crosshairs for the weapon. // Muzzle flashesMuzzleFlashPSCTemplate=WP_ShockRifle.Particles.P_ShockRifle_MF_AltMuzzleFlashAltPSCTemplate=WP_ShockRifle.Particles.P_ShockRifle_MF_AltMuzzleFlashColor=(R=200,G=120,B=255,A=255)MuzzleFlashDuration=0.33MuzzleFlashLightClass=class'UTGame.UTShockMuzzleFlashLight'CrossHairCoordinates=(U=256,V=0,UL=64,VL=64)LockerRotation=(Pitch=32768,Roll=16384)// CrosshairIconCoordinates=(U=728,V=382,UL=162,VL=45)IconX=400IconY=129IconWidth=22IconHeight=48/** The Color used when drawing the Weapon's Name on theHUD */WeaponColor=(R=160,G=0,B=255,A=255) Since weapons are part of a pawn's inventory, we need to declare which slot this weapon will fall into (from one to nine). // InventoryInventoryGroup=4 // The weapon/inventory set, 0-9GroupWeight=0.5 // position within inventory group.(used by prevweapon and nextweapon) Our final piece of code has to do with rumble feedback with the Xbox gamepad. This is not only used on consoles, but also it is generally reserved for it. /** Manages the waveform data for a forcefeedback device,specifically for the xbox gamepads. 
*/Begin Object Class=ForceFeedbackWaveformName=ForceFeedbackWaveformShooting1Samples(0)=(LeftAmplitude=90,RightAmplitude=40,LeftFunction=WF_Constant,RightFunction=WF_LinearDecreasing,Duration=0.1200)End Object// controller rumble to play when firingWeaponFireWaveForm=ForceFeedbackWaveformShooting1} All that's left to do is to add the weapon to your pawn's default inventory. You can easily do this by adding the following line to your TutorialGame class's defaultproperties block: defaultproperties{DefaultInventory(0)=class'MyWeapon_HomingRocket'} Load up your map with a few bots on it, hold your aiming reticule over it for a brief moment and when you hear the lock sound, fire away! How it works... To keep things simple we extend from UTWeap_ShockRifle. This gave us a great bit of base functionality to work from. We created a MyWeapon class which offers not only everything that the shock rifle does, but also the ability to lock onto targets. When we aim our target reticule over an enemy bot, it checks for a number of things. First, it verifies that it is an enemy and also whether or not the target can be reached. It does this by drawing a trace and returns any actors which may fall in our weapon's path. If all of these things check out, then it begins to lock onto our target after we've held the reticule over the enemy for a set period of time. We then fire our projectile, which is either the weapon's firing mode, or in our case, a rocket. We didn't want to clutter the defaultproperties block for MyWeapon; so we create a child class called MyWeapon_HomingRocket that makes use of all the functionality and only changes the defaultproperties block, which will influence the weapon's aesthetics, sound effects, and even some functionality with the target lock.
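One detail worth quantifying from the homing properties: LockAim=0.997 and ConsoleLockAim=0.992 are cosine thresholds, because the code compares them against the dot product of the normalized view direction and the direction to the target. Converting them to angles (a quick Python check; the helper name is ours) shows how narrow the lock cone really is:

```python
import math

def lock_cone_half_angle_degrees(lock_aim):
    """Convert a dot-product (cosine) threshold to a cone half-angle."""
    return math.degrees(math.acos(lock_aim))

pc_cone = lock_cone_half_angle_degrees(0.997)       # roughly 4.4 degrees
console_cone = lock_cone_half_angle_degrees(0.992)  # roughly 7.3 degrees
```

So console players get a noticeably wider cone (about 7.3 versus 4.4 degrees of half-angle), which is why the code selects ConsoleLockAim when bConsolePlayer is set.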
Packt
23 Sep 2015
10 min read

User Interface

This article, written by John Doran, the author of the Unreal Engine Game Development Cookbook, covers the following recipes: Creating a main menu Animating a menu (For more resources related to this topic, see here.) In order to create a good game project, you need to be able to communicate information to the player. To do this, we need to create a user interface (UI), which will allow us to display information such as the player's health, inventory, and so on. Inside Unreal 4, we use the Slate UI framework to create user interfaces; however, it's a very complex system. To make things easier for end users, Unreal also released the Unreal Motion Graphics (UMG) UI Designer, which is a visual UI authoring tool with a much easier workflow. This is what we will be using in this article. For more information on Slate, refer to https://docs.unrealengine.com/latest/INT/Programming/Slate/index.html. Creating a main menu A main menu can serve as an introduction to your game and is a great place for us to discuss some additional things that UMG has, such as Texts and Buttons. We'll also learn how we can make buttons do things. Let's spend some time to see just how easy it is to create one! How to do it… Let's build the menu step by step: Create a new level by going to File | New Level and select Empty Level. Next, inside the Content Browser tab, go to our UI folder, then to Add New | User Interface | Widget Blueprint, and give it a name of MainMenu. Double-click on it to open the editor. In this menu, we are going to have the title of the game and then a series of buttons the player can press: From the Palette tab, open up the Common section and drag and drop a Button onto the middle of the screen. Select the button and change its Size X to 400 and Size Y to 80. We will also rename the button to Play Game. 
Drag and drop a Text object onto the Play Game button and you should see it snap on to the button as a child. Under Content, change Text to Play Game. From here under Appearance, change the color of the button to black and change the Font size to 32. From the Hierarchy tab, select the Play Game button and copy and paste it to create a duplicate. Move the button down, rename it to Quit Game, and change its Text content as well. Move both of the objects so that they're on the bottom part of the HUD, slightly above and side by side, as shown in the following image: Lastly, we'll want to set our pivots and anchors accordingly. When you select either the Quit Game or Play Game buttons, you may notice a sun-like looking widget that displays the Anchors of the object (known as the Anchor Medallion). In our case, open Anchors from the Details panel and click on the bottom-center option. Now that we have the buttons created, we want them to actually do something when we click on them. Select the Play Game button and from the Details tab, scroll down until you see the Events component. There should be a series of big green + buttons. Click on the green button beside OnClicked. Next, it will take us to the Event Graph with the appropriate event created for us. To the right of the event, right-click and create an Open Level action. Under Level Name, put in whatever level you like (for example, StarterMap) and then connect the output of the OnClicked action to the input of the Open Level action. To the right of that, create a Remove from Parent action to make sure that the menu doesn't stay on screen when we leave it. Finally, create a Get Player Controller action and to the right of it a Set Show Mouse Cursor action, which should be disabled, so that the mouse cursor is hidden once gameplay starts; we only need it visible while in the menu. (Drag Return Value from the Get Player Controller action to create a new node and search for the mouse cursor action.) 
Now, go back to the Designer button and then select the Quit Game button. Click on its OnClicked button as well and, to the right of this one, create a Quit Game action and connect the output of the OnClicked action to the input of the Quit Game action. Lastly, as a bit of polish, let's add our game's title to the screen. Drag and drop another Text object onto the scene, this time with Anchor at the top-center. From here, change Position X to 0 and Position Y to 176. Change Alignment in the X axis to .5 and check the Size to Content option for it to automatically resize. Set the Content component's Text property to the game's name (in my case, Game Name). Under the Appearance component, set the Font size to 93 and set Justification to Center. There are a number of other styling options that you may wish to use when developing your HUDs. For more information about it, refer to https://docs.unrealengine.com/latest/INT/Engine/UMG/UserGuide/Styling/index.html. Compile the menu, and save it. Now we need to actually have the widget show up. To do so, we'll need to take the same steps as we did earlier. Open up Level Blueprint by going to Blueprints | Open Level Blueprint and create an Event BeginPlay event. Then, to the right of this, right-click and create a Create Widget action. From the dropdown under Class, select MainMenu and connect the arrow from Event Begin Play to the input of Create MainMenu_C Widget. After this, click and drag the output arrow and create an Add to Viewport action. Then, connect Return Value of our Create Widget action to Target of the Add to Viewport action. Now lastly, we also want to display the player's cursor on the screen so that it can click the buttons. To do this, right-click and select Get Player Controller. Then, from its Return Value, create a Set Show Mouse Cursor action, this time enabled. Connect the output of the Add to Viewport action to the input of the Show Mouse Cursor action. Compile, save, and run the project! With this, our menu is completed! 
We can quit the game without any problem, and pressing the Play Game button will start our level! Animating a menu You may have created a menu or UI element at some point, but rather than having it static and non-moving, let's spend some time looking at how we can animate the menus by having them fly in and out or animating them in some way. This will help add to the polish of the title as well as enable players to notice things easier as they move in. Getting ready Before we start working on this, we need to have a project created and set up. Do the previous recipe all the way to completion. How to do it… Open up the MainMenu blueprint once more and from the bottom-left in the Animations tab, click on the +Animation button and give the new animation a name of MenuFlyIn. Select the newly created animation and you should see the window on the right-hand side brighten up. Next, click on the Auto Key toggle to have the animation editor automatically set keys that are appropriate for our implementation. If it's not there already, move the timeline bar (the white line with two orange ends on the top and bottom) to the 0.00 mark on the animation timeline. Next, select the Game Name object and under Color and Opacity, open it and change the A (alpha) value to 0. Now move the timeline bar to the 1.00 mark and then open the color again and set the A value to 1. You'll notice a transition—going from a completely transparent text to a fully shown one. This is a good start. Let's have the buttons fly in after the text appears. Next, move the Time bar to the 2.00 mark and select the Play Game button. Now from the Details tab, you'll notice that under the variables, there are new + icons to the left of variables. This value will save the value for use in the animations. Click on the + icon by the Position Y value. If you use your scroll wheel while inside the dark grey portion of the timeline bar (where the keyframe numbers are displayed), it zooms in and out. 
This can be quite useful when you create more complex animations. Now move the Time bar to the 1.00 mark and move the Play Game button off the screen. By doing the animation in this way, we are saving where we want it to be first at the end, and then going back in time to do the animations. Do the same animation for the Quit Game button. Now that our animation is created, let's make it in a way so that when the object starts, this animation is played. Click on the Graph button and from the MyBlueprint tab under the Graphs section, double-click on the Event Construct event, which is called as soon as we add the menu to the scene. Grab the pin on the end of it and create a Play Animation action. Drag and drop a MenuFlyIn animation into the scene and select Get. Connect its output pin to the In Animation property of the Play Animation action. Now that we have the animation work when we create the menu, let's have it play when we leave the menu. Select the Play Animation and Menu Fly In variables and copy them. Then move to the OnClicked (Play Game) action. Drag the OnClicked event over to the left and remove its original connection to the Open Level action by holding down Alt and clicking. Now paste (Ctrl + V) the new objects and connect the out pin of OnClicked (Play Game) to the input of Play Animation. Now change Play Mode to Reverse. To the right of this, create a Delay action. For the Duration variable, we want it to wait as long as the animation is, so from the Menu Fly In variable, create another pin and create a Get End Time action. Connect Return Value of Get End Time to the input of the Delay action. Connect the output of the Play Animation action to the input of the Delay action and the Completed output of the Delay action to the input of the Open Level action. Now we need to do the same for the OnClicked (Quit Game) event. Now compile, save, and run the game! Our menu is now completed and we've learned about how animation works inside UMG! 
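Conceptually, each animation track we keyed stores (time, value) keyframes and interpolates between them as the timeline plays; the Game Name fade we set up (A=0 at the 0.00 mark, A=1 at 1.00) behaves roughly like the linear sketch below. This is purely illustrative Python, not engine code, and UMG's default key interpolation can be curved rather than strictly linear:

```python
def sample_track(keys, t):
    """Linearly interpolate a value from (time, value) keyframes,
    clamping outside the keyed range - roughly what an animation
    track does for the alpha channel we keyed."""
    keys = sorted(keys)
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# The Game Name fade-in keyed in the recipe
title_alpha = [(0.0, 0.0), (1.0, 1.0)]
```

Playing the animation in Reverse, as we do when leaving the menu, simply samples the same track with time running backwards.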
For more examples of using UMG for animation, refer to https://docs.unrealengine.com/latest/INT/Engine/UMG/UserGuide/Animation/index.html. Summary This article gave you some insight on Slate and the UMG Editor to create a number of UI elements and an animated main menu to tie your whole game together. We created a main menu and also learned how to make buttons do things. We spent some time looking at how we can animate menus by having them fly in and out.
Packt
01 Jul 2011
14 min read

Blender 2.5: modeling a basic humanoid character

Mission Briefing Our objective is to create a basic model of a humanoid, with all the major parts included and correctly shaped: head, arms, torso, legs, and feet will be defined. We won't be creating fine details of the model, but we will definitely pay attention to the process and the mindset necessary to achieve our goal. What Does It Do? We'll start by creating a very simple (and ugly) base mesh that we can tweak later to get a nice finished model. From a single cube, we will be creating an entire model of a basic humanoid character, and take the opportunity to follow our own "feelings" to create the finished model. Why Is It Awesome? This project will help you learn some good points that will be handy when working on future projects (even in complex projects). First of all, we'll learn a basic procedure for applying the box modeling technique to create a base mesh. We'll then learn that our models don't have to look nice all the time to ensure a proper result, instead we must have a proper understanding of where we are heading to avoid getting lost along the way. Finally, we'll learn to separate the complexity of a modeling task into two different parts, using the best tools for the job each time (thus having a more enjoyable time and very good freedom to creative). The brighter side of this project will be working with the sculpting tools, since they give us a very cool way of tweaking meshes without having to handle individual vertices, edges, or faces. This advantage constitutes an added value for our workflow: we can separate the boring technical parts of modeling (mostly extruding and defining topology) from the actual fine tweaking of the form. Moreover, if we have the possibility of using the sculpt tools with a pen tablet, the modeling experience will be greatly improved and will feel extremely intuitive. Your Hotshot Objectives Although this project is not really complex, we will separate it into seven tasks, to make it easier to follow. 
They are: Creating the base mesh Head Arms Torso Legs Feet and final tweaks Scene setup Creating the Base Mesh Let's begin our project by creating the mesh that will be further tweaked to get our final model. For this project we'll apply a methodology (avoiding overly complicated, unintelligible, written descriptions) that will give us some freedom and allow us to explore our creativity without the rigidity of having a strict blueprint to follow. There's a warning, though: our model will look ugly most of the time. This is because in the initial building process we're not going to put so much emphasis on how it looks but on the structure of the mesh. Having said that, let's start with the first task of the project. Prepare for Lift Off Fire up Blender, delete the default lamp, set the name of the default cube to character(from the Item panel, Properties sidebar) and save the file as character.blend in the project's directory. Engage Thrusters First, we need to set the character object with a Mirror modifier, so that we only need to work on one side of the character while the other side gets created automatically as we work. Select the character object, switch to Edit Mode and then switch to Front View (View | Front), then add a new edge loop running vertically by using the Loop Cut and Slide tool. Make sure that the new edge loop is not moved from the center of the cube, so that it separates the cube into two mirrored sides. Now set the viewport shading to wireframe (press the Z key), select the vertices on the left-hand side of the cube, and delete them (press the X key). Now let's switch back to Object Mode, then go to the Modifiers tab in the Properties Editor and add a new Mirror Modifier to the character object. On the settings panel for the Mirror Modifier, let's enable the Clipping option. This will leave us with the object set up according to our needs. Switch to Edit Mode for the character object and then to Face Select mode. 
Select the upper face of the mesh, extrude it (E key) and then move the extrusion 1 unit upwards, along the Z axis. Now perform a second extrusion, this time on the face that remains selected from the previous one, and move it 1 unit upwards too; this will leave us with three sections (the lowest one is the biggest). Follow along by switching to Right View (View | Right), extrude again, press Escape, and then move the extrusion 0.2 units upwards (press the G key, Z key, then type 0.2). With the upper face selected, let's scale it down by a factor of 0.3 (S key, then type 0.3) and then move it by 0.6 units along the Y axis (G key, Y key, then type 0.6). Continue by extruding again and moving the extrusion 0.5 units upwards (G key, Z key, then type 0.5). Then add another extrusion, moving it up by 0.1 units (G key, Z key, then type 0.1). With the last extrusion selected, perform a scale operation, by a factor of 1.5 (S key, then type 1.5). Right after that, extrude again and move the extrusion 1.5 units upwards (G key, Z key, then type 1.5). Now let's rotate the view freely, so that the face of the last extrusion that faces the front is selectable, select it and move it -0.5 units along the Y axis (press the G key, Y key, then type -0.5).Let's take a look at a screenshot to make sure that we are on the right path: Note the (fairly noticeable) shapes showing the neck area, the head, and the torso of our model. Take a look at the face on the model's side from where we'll later extrude the arm. Now let's switch to Front View (View | Front), then select the upper face on the side of the torso of the model, extrude it, press Escape, and move it 0.16 units along the X axis (G key, X key, then type 0.16). Continue by scaling it down by a factor of 0.75 (S key, then type 0.75) and move it up by 0.07 units (press the G key, Z key, then type 0.07). Then switch to Right View (View | Right) and move it 0.2 units along the Y axis (press the G key, Y key, then type 0.2). 
This will give us the starting point to extrude the arm. Switch to Front View (View | Front) and perform another extrusion (having selected the face that remains selected by the previous extrusion), press Escape, and then move it 0.45 units along the X axis (press the G key, X key, then type 0.45). Then let's switch to Edge Select Mode, deselect all the edges that could be selected (Select | Select/Deselect All), rotate the view to be able to select any of the horizontal edges of the last extrusion, and then select the upper horizontal edge of the last extrusion; then move it -0.16 units along the X axis (G key, X key, then type -0.16). Right after that, let's select the lower horizontal edge of the last extrusion and move it 0.66 units upwards (G key, Z key, then type 0.66). Finalize this tweak by selecting the last two edges that we worked with and move them -0.15 units along the X axis (press the G key, X key, then type -0.15). Let's also select the lower edge of the first extrusion that we made for the arm and move it 0.14 units along the X axis (press the G key, X key, then type 0.14). Since this process is a bit tricky, let's use a screenshot, to help us ensure that we are performing it correctly: The only reason to perform this weird tweaking of the base mesh is to ensure a proper topology (internal mesh structure) for the shoulder when the model is finished. Let's remember to take a look at the shoulder of the finished model and compare it with the previous screenshot to understand it. Make sure to only select the face shown selected in the previous screenshot and switch back to Front View (View | Front) to work on the arms. Extrude the selected face, press Escape, and then move it by 1.6 units along the X axis (press the G key, X key, then type 1.6). Then scale it down by a factor of 0.75 (press the S key, then type 0.75) and move it up 0.07 units (press the G key, Z key, then type 0.07). 
Continue by performing a second extrusion, press Escape and then move it 1.9 units along the X axis (press the G key, X key, then type 1.9). Then let's perform a scale constrained to the Y axis, this time by a factor of 0.5 (press the S key, Y key, then type 0.5). To perform some tweaks, let's switch to Top View (View | Top) and move the selected face 0.17 units along the Y axis (press the G key, Y key, then type 0.17). To model the simple shape that we will create for the hand, let's make sure that we have selected the rightmost face from the last extrusion, extrude it, and move it 0.25 units along the X axis (press the G key, X key, then type 0.25). Then perform a second extrusion and move it 0.25 units along the X axis as well, and finish the extrusions by adding a last one, this time moving it 0.6 units along the X axis (press the G key, X key, then type 0.6). For the thumb, let's select the face pointing forwards in the second-last extrusion, extrude it, and move the extruded face to the right and down (remember we are in Top View) so that it looks well shaped with the rest of the hand. For this we can perform a rotation of the selected face to orient it better. To finish the hand, let's select the faces forming the thumb and the one between the thumb and the other "fingers", and move them -0.12 units along the Y axis (press the G key, Y key, then type -0.12). Also select the two faces on the other side of the face and move them 0.08 units along the Y axis (press the G key, Y key, then type 0.08). The following screenshot should be very helpful to follow the process: Now it's time to model the legs of our character. For that, let's pan the 3D View to get the lower face visible, select it, extrude it, and move it -0.4 units (press the G key, Z key, then type -0.4). Now switch to Edge Select Mode, select the rightmost edge of the face we just extruded down, and move it -0.85 units along the X axis (G key, X key, then type -0.85). 
To extrude the thigh, let's first switch to Face Select Mode, select the face that runs diagonally after we moved the edge in the previous step, then switch to Front View (View | Front), extrude the face, press Escape, and then apply a scale operation along the Z axis by a factor of 0 (press the S key, Z key, then type 0), to get it looking entirely flat. With the face from the last extrusion selected, let's move it -0.8 units along the Z axis (press the G key, Z key, then type -0.8). Right after that, let's scale the selected face by a factor of 1.28 along the X axis (press the S key, X key, then type 1.28) and move it 0.06 units along the X axis (press the G key, X key, then type 0.06). Now switch to Right View (View | Right), scale it by a factor of 0.8 (press the S key, Y key, then type 0.8), and then move it -0.12 units along the Y axis (press the G key, Y key, then type -0.12). Perform another extrusion, then press Escape and move it -2.2 units along the Z axis (press the G key, Z key, then type -2.2). To give it a better form, let's now scale the selected face by a factor of 0.8 along the Y axis (press the S key, Y key, then type 0.8) and move it 0.05 units along the Y axis (press the G key, Y key, then type 0.05). To complete the thigh, let's switch to Front View (View | Front), scale it by a factor of 0.7 along the X axis (press the S key, X key, then type 0.7), and then move it -0.18 units along the X axis (press the G key, X key, then type -0.18). Right after the thigh, let's continue working on the leg. Make sure that the face from the tip of the previous extrusion is selected, extrude it, press Escape, then move it -2.3 units along the Z axis (press the G key, Z key, then type -2.3). Then let's switch to Right View (View | Right), scale it by a factor of 0.7 along the Y axis (press the S key, Y key, then type 0.7), and move it -0.02 units along the Y axis (press the G key, Y key, then type -0.02). 
Now we just need to create the feet by extruding the face selected previously and moving it -0.6 units along the Z axis (press the G key, Z key, then type -0.6). Then select the face of the last extrusion that faces the front, extrude it, press Escape, and move it -1.9 units along the Y axis (press the G key, Y key, then type -1.9). As a final touch, let's switch to Edge Select Mode, then select the upper horizontal edge of the last extrusion and move it -0.3 units along the Z axis (press the G key, Z key, then type -0.3). Let's take a look at a couple of screenshots showing how our model should look by now: In the first screenshot, we can see the front of the model, whereas the back side is seen in the next one. Let's take a couple of minutes to inspect the screenshots and compare them to our actual model, to be entirely sure that we have the correct mesh now. Notice that our model isn't looking especially nice yet; that's because we've only worked on creating the mesh, and the actual form will be worked out in the coming tasks.

Objective Complete - Mini Debriefing

In this task we performed the very first step of our modeling process: creating the base mesh to work with. In order to avoid overly complicated written explanations, we are using a modeling process that leaves the actual "shaping" for later, so we only worked out the topology of our mesh and laid out some simple foundations such as general proportions. The good thing about this approach is that we put in effort where it is really required, saving some time and enjoying the process a lot more.

Classified Intel

There are two main methods for modeling: poly-to-poly modeling and box modeling. The poly-to-poly method is about working with very localized (often detailed) geometry, paying attention to how each polygon is laid out in the model.
The box modeling method is about constructing the general form very fast, by using the extrude operation, while paying attention to general aspects such as proportions, deferring the detailed tweaks for later. In this project we apply the box modeling method. We just worked out a very simple mesh, mostly by performing extrusions and very simple tweaks. Our main concern while doing this task was to keep proportions correct, forgetting about the fine details of the "form" that we are working out. The next tasks of this project will be about using Blender's sculpting tools to ease the tweaking job a lot, getting a very nice model in the end without having to tweak individual vertices!
Getting Started

Packt
26 Dec 2012
6 min read
(For more resources related to this topic, see here.)

System requirements

Before we take a look at how to download and install ShiVa3D, it might be a good idea to see if your system can handle it. The minimum requirements for the ShiVa3D editor are as follows:

Microsoft Windows XP and above, or Mac OS with Parallels
Intel Pentium IV 2 GHz or AMD Athlon XP 2600+
512 MB of RAM
3D accelerated graphics card with 64 MB RAM and 1440 x 900 resolution
Network interface

In addition to the minimum requirements, the following suggestions will give you the best experience:

Intel Core Duo 1.8 GHz or AMD Athlon 64 X2 3600+
1024 MB of RAM
Modern 3D accelerated graphics card with 256 MB RAM and 1680 x 1050 resolution
Sound card

Downloading ShiVa3D

Head over to http://www.stonetrip.com and get a copy of ShiVa3D Web Edition. Currently, there is a download link on the home page. Once you get to the Download page, enter your email address and click on the Download button. If everything goes right, you will be prompted for a save location; save it in a place that will be easy to find later. That's it for the download, but you may want to take a second to look around Stonetrip's website. There are links to the documentation, forum, wiki, and news updates. It will be well worth your time to become familiar with the site now, since you will be using it frequently.

Installing ShiVa3D

Assuming your computer meets the minimum requirements, installation should be pretty easy. Simply find the installation file that you downloaded and run it. I recommend sticking with the default settings. If you do have issues getting it installed, it is most likely due to a technical problem, so head on over to the forums, and we will be more than glad to lend a helping hand.

The ShiVa editor

Several different applications were installed if you accepted the default installation choices.
The only one we are going to worry about for most of this book is the ShiVa Web Edition editor, so go ahead and open it now. By default, ShiVa opens with a project named Samples loaded. You can tell by looking at the lower right-hand quadrant of the screen in the Data Explorer; the root folder is named Samples, as shown in the following screenshot: This is actually a nice place to start, because there are all sorts of samples that we can play with. We'll come back to those once we have had a chance to make our own game. We will cover the editor in more detail later, but for now it is important to notice that the default layout has four sections: Attributes Editor, Game Editor, Scene Viewer, and Data Explorer. Each of these sections represents a module within the editor. The Data Explorer window, for example, gives us access to all of the resources that can be used in our project, such as materials, models, fonts, and so on.

Creating a project

A project is the way by which we can group games that share the same resources. To create a new project, click on Main | Projects in the upper left-hand corner of the screen. The project window will open, as shown in the following screenshot: In this window, we can see the Samples project along with its path. The green light next to the name indicates that Samples is the project currently loaded into the editor. If there were other projects listed, they would have red lights beside their names. The steps for creating a new project are as follows:

Click on the Add button to create a new project.
Navigate to the location we want for our project, then right-click in the explorer area and select New | Folder.
Name the folder IntroToShiva, highlight the folder, and click on Select. The project window will now show that our new project has the green light and the Samples project has a red light.
Click on the Close button to finish.

Notice that the root folder in the Data Explorer window now says IntroToShiva.
Creating a game

Games are exactly what you would think they are, and it's time we created ours. The steps for creating our own game are as follows:

Go to the Game Editor window in the lower left-hand corner and click on Game | Create.
A window will pop up asking for the game name. We will be creating a game in which the player must fly a spaceship through a tunnel or cave and avoid obstacles, so let's call the game CaveRunner.
Click on the OK button, and the bottom half of our editor should look like the following screenshot:

Notice that there is now some information displayed in the Game Editor window, and the Data Explorer window shows the CaveRunner game in the Games folder. A game is simply the empty husk of what we are really trying to build. Next, we will begin building out our game by adding a scene.

Making a scene

We can think of a scene as a level in a game; it is the stage upon which we place our objects so that the player can interact with them. We can create a scene by performing the following steps:

Click on Edit | Scene | Create in the Game Editor window.
Name the scene Level1 and click on the OK button.

The new scene is created and opened for immediate use, as shown in the following screenshot: We can tell Level1 is open because the Game Editor window has switched to the Scenes tab and Level1 now has a green check mark next to it; we can also see a grid in the Scene Viewer window. Additionally, the scene information is displayed in the upper left-hand corner of the Scene Viewer window and the Scene tag says Level1. So we were able to get a scene created, but it is sadly empty; it's not much of a level in even the worst of games. If we want this game to be worth playing, we had better add something interesting. Let's start by importing a ship.
Animations in Cocos2d-x

Packt
23 Sep 2014
24 min read
In this article, created by Siddharth Shekhar, the author of Learning Cocos2d-x Game Development, we will learn different tools that can be used to animate the character. Then, using these animations, we will create a simple state machine that will automatically check whether the hero is falling or is being boosted up into the air, and depending on the state, the character will be animated accordingly. We will cover the following in this article:

Animation basics
TexturePacker
Creating a spritesheet for the player
Creating and coding the enemy animation
Creating the skeletal animation
Coding the player walk cycle

(For more resources related to this topic, see here.)

Animation basics

First of all, let's understand what animation is. An animation is made up of different images that are played in a certain order and at a certain speed; for example, movies run images at 30 fps or 24 fps, depending on whether the format is NTSC or PAL. When you pause a movie, you are actually seeing an individual image of that movie, and if you play the movie in slow motion, you will see the individual frames that combine to create the full movie. In games, while making animations, we will do the same thing: add frames and run them at a certain speed. We will control the images to play in a particular sequence and at a particular interval through code. For an animation to be smooth, you should have at least 24 images or frames being played per second, which is known as frames per second (FPS). Each of the images in the animation is called a frame. Let's take the example of a simple walk cycle. A full walk cycle consists of 24 frames. You might say that it is a lot of work, and for sure it is, but the good news is that these 24 frames can be broken down into keyframes, which are the important images that give the illusion of the character walking. The more frames you add between these keyframes, the smoother the animation will be.
The keyframes for a walk cycle are the Contact, Down, Pass, and Up positions. For mobile games, as we would like to get away with as little work as possible, instead of having all 24 frames, some games use just the 4 keyframes to create a walk animation and then speed up the animation so that the player is not able to see the missing frames. So overall, if you are making a walk cycle for your character, you will create eight images in total: four frames for each side. For a stylized walk cycle, you can even get away with fewer frames. For the animation in the game, we will create images that we will cycle through to create two sets of animation: an idle animation, which will be played when the player is moving down, and a boost animation, which will be played when the player is boosted up into the air. Creating animation in games is done using two methods. The most popular form of animation is called spritesheet animation, and the other is called skeletal animation.

Spritesheet animation

Spritesheet animation is when you keep all the frames of the animation in a single file, accompanied by a data file that holds the name and location of each of the frames. This is very similar to the BitmapFont. The following is the spritesheet we will be using in the game. For the boost and idle animations, each of the frames for the corresponding animation will be stored in an array and made to loop at a particular predefined speed. The top four images are the frames for the boost animation. Whenever the player taps on the screen, the animation will cycle through these four images, appearing as if the player is boosted up because of the jetpack. The bottom four images are for the idle animation, played when the player is dropping down due to gravity. In this animation, the character will look as if she is blinking, and the flames from the jetpack are reduced and made to look as if they are swaying in the wind.
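The looping described above, frames stored in an array and cycled at a predefined speed, boils down to a little modular arithmetic. Here is an engine-agnostic Python sketch of the idea; the frame filenames are invented for illustration:

```python
def frame_for_time(frames, frame_delay, t):
    """Return the frame to show at time t (seconds) for a looping
    animation where each frame stays on screen for frame_delay seconds."""
    index = int(t / frame_delay) % len(frames)
    return frames[index]

# Hypothetical filenames for the four boost frames.
boost = ["boost_1.png", "boost_2.png", "boost_3.png", "boost_4.png"]

# With a 0.25-second delay, the four-frame loop repeats once per second.
print(frame_for_time(boost, 0.25, 0.0))  # boost_1.png
print(frame_for_time(boost, 0.25, 0.3))  # boost_2.png
print(frame_for_time(boost, 0.25, 1.1))  # boost_1.png (the loop has wrapped)
```

Cocos2d-x does exactly this bookkeeping for you; we only supply the frame array and the delay, as we will see in the enemy animation code later.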
Skeletal animation

Skeletal animation is relatively new and is used in games such as Rayman Origins that have loads and loads of animations. This is a more powerful way of making animations for 2D games, as it gives the developer a lot of flexibility to create animations that are fast to produce and test. In the case of spritesheet animations, if you had to change a single frame of the animation, the whole spritesheet would have to be recreated, causing delay; imagine having to rework 3000 frames of animations in your game. If each frame was hand painted, it would take a lot of time to produce the individual images, causing delay in production time, not to mention the effort and time spent redrawing images. The other problem is device memory. If you are making a game for the PC, it would be fine, but in the case of mobiles, where memory is limited, spritesheet animation is not a viable option unless cuts are made to the design of the game. So, how does skeletal animation work? In the case of skeletal animation, each item to be animated is stored in a separate spritesheet along with a data file for the locations of the individual images for each body part and object to be animated, and another data file is generated that positions and rotates the individual items for each frame of the animation. To make this clearer, look at the spritesheet for the same character created with skeletal animation: Here, each part of the body and each object to be animated is a separate image, unlike the method used in spritesheet animation, where the whole character is redrawn for each frame of animation.

TexturePacker

To create a spritesheet animation, you will first have to create the individual frames in Photoshop, Illustrator, GIMP, or any other image editing software. I have already made them and have each of the images for the individual frames ready. Next, you will have to use a software to create spritesheets from the images.
TexturePacker is a very popular software that is used by industry professionals to create spritesheets. You can download it from https://www.codeandweb.com/. These are the same guys who made PhysicsEditor, which we used to make shapes for Box2D. You can use the trial version of this software. While downloading, choose the version that is compatible with your operating system. Fortunately, TexturePacker is available for all the major operating systems, including Linux. Refer to the following screenshot to check out the steps to use TexturePacker: Once you have downloaded TexturePacker, you have three options: you can click to try the full version for a week, you can purchase the license, or you can click on the essential version to use in the trial version. In the trial version, some of the professional features are disabled, so I recommend trying the professional features for a week. Once you click the option, you should see the following interface: TexturePacker has three panels; let's start from the right. The right-hand side panel displays the names of all the images that you select to create the spritesheet. The center panel is a preview window that shows how the images are packed. The left-hand side panel gives you options to set where the packed texture and data file will be published to, and to decide the maximum size of the packed image. The Layout section gives a lot of flexibility to set up the individual images in TexturePacker, and then you have the advanced section. Let's look at some of the key items on the panel on the left.

The display section

The display section consists of the following options:

Data Format: As we saw earlier, each export creates a spritesheet that has a collection of images, plus a data file that keeps track of the positions on the spritesheet. The data format usually changes depending upon the framework or engine.
In TexturePacker, you can select the framework that you are using to develop the game, and TexturePacker will create a data file format that is compatible with the framework. If you look at the drop-down menu, you can see a lot of popular frameworks and engines in the list, such as 2DToolkit, OGRE, Cocos2d, Corona SDK, LibGDX, Moai, Sparrow/Starling, SpriteKit, and Unity. You can also create a regular JSON file if you wish. JavaScript Object Notation (JSON) is similar to XML in that it is used to store and retrieve data; it is a collection of name/value pairs used for data interchange.

Data file: This is the location where you want the exported file to be placed.

Texture format: Usually, this is set to .png, but you can select the one that is most convenient. Apart from PNG, you also have PVR, which is used so that people cannot view the image readily, and which also provides image compression.

Png OPT file: This is used to set the quality of PNG images.

Image format: This sets the RGB format to be used; usually, you would want this to be set at the default value.

AutoSD: If you are going to create images for different resolutions, this option allows you to create resources for the different resolutions you are developing the game for, without the need to go into the graphics software, shrink the images, and pack them again for each resolution.

Content protection: This protects the image and data file with an encryption key so that people can't steal spritesheets from the game file.

The Geometry section

The Geometry section consists of the following options:

Max size: You can specify the maximum width and height of the spritesheet, depending upon the framework. Usually, frameworks allow up to 4096 x 4096, but it mostly depends on the device.

Fixed size: If you want the spritesheet to be an exact size, use this option.
Size constraint: Some frameworks require the spritesheet dimensions to be a power of two (POT), for example, 32 x 32, 64 x 64, 256 x 256, and so on. If this is the case, you need to select the size accordingly. For Cocos2d, you can choose any size.

Scale: This is used to scale the image up or down.

The Layout section

The Layout section consists of the following options:

Algorithm: This is the algorithm that will be used to make sure that the images you select to create the spritesheet are packed in the most efficient way. If you are using the pro version, choose MaxRects, but if you are using the essential version, you will have to choose Basic.

Border Padding / Shape Padding: Border padding is the gap between the border of the spritesheet and the images it surrounds. Shape padding is the padding between the individual images of the spritesheet. If you find that the images overlap while playing the animation in the game, you might want to increase these values.

Trim: This removes the extra transparent (alpha) area surrounding an image, which would unnecessarily increase the size of the spritesheet.

Advanced features

The following are some miscellaneous options in TexturePacker:

Texture path: This appends the path of the texture file at the beginning of the texture name.

Clean transparent pixels: This sets the transparent pixels' color to #000.

Trim sprite names: This removes the extension from the names of the sprites (.png and .jpg), so while calling for the name of the frame, you will not have to use extensions.

Creating a spritesheet for the player

Now that we understand the different items in the TextureSettings panel of TexturePacker, let's create our spritesheet for the player animation from the individual frames provided in the Resources folder. Open up the folder in the system and select all the images for the player, which contain the idle and boost frames. There will be four images for each animation.
Select all eight images and click-and-drag them to the Sprites panel, which is the right-most panel of TexturePacker. Once you have all the images on the Sprites panel, the preview panel at the center will show a preview of the spritesheet that will be created: Now, on the TextureSettings panel, for the Data format option, select cocos2d. Then, in the Data file option, click on the folder icon on the right, select the location where you would like to place the data file, and give the name as player_anim. Once selected, you will see that the Texture file location also auto-populates with the same location. The data file will have a format of .plist, and the texture file will have an extension of .png. The .plist format stores data in a markup language similar to XML. Although it is more common on Mac, you can use this data type independent of the platform you use while developing the game with Cocos2d-x. Keep the rest of the settings the same. Save the file by clicking on the save icon on the top, choosing a location where the data and spritesheet files are kept; this way, you can access them easily the next time you want to make modifications to the spritesheet. Now, click on the Publish button and you will see two files, player_anim.plist and player_anim.png, in the location you specified in the Data file and Texture file options. Copy and paste these two files into the Resources folder of the project so that we can use them to create the player states.

Creating and coding enemy animation

Now, let's create a similar spritesheet and data file for the enemy. All the required files for the enemy frames are provided in the Resources folder. So, once you create the spritesheet for the enemy, it should look something like the following screenshot. Don't worry if the images are shown in the wrong sequence; just make sure that the files are numbered correctly from 1 to 4 and are in the sequence the animation needs to be played in.
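Whether for the player or the enemy, what Publish writes into the .plist is an ordinary XML property list that maps each frame name to its coordinates on the sheet, which is exactly what the engine looks up at runtime. Here is a hedged Python sketch using the standard plistlib module to peek inside such a file; the frame names and the trimmed-down field set below are invented for illustration, so check them against your own exported file:

```python
import plistlib

# A trimmed-down stand-in for a TexturePacker cocos2d-format .plist.
# A real export has more per-frame fields; the "frames" dictionary
# keyed by image name is the part the engine looks up.
sample = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>frames</key>
    <dict>
        <key>player_idle_1.png</key>
        <dict><key>frame</key><string>{{2,2},{85,121}}</string></dict>
        <key>player_idle_2.png</key>
        <dict><key>frame</key><string>{{89,2},{85,121}}</string></dict>
    </dict>
</dict>
</plist>"""

data = plistlib.loads(sample)
for name, info in sorted(data["frames"].items()):
    print(name, info["frame"])
```

Inspecting the file like this is a quick way to confirm that every frame you expect to animate actually made it into the published sheet, with the exact name your code will request.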
Now, place the enemy_anim.png spritesheet and data file in the Resources folder of the project directory, and add the following lines of code in the Enemy.cpp file to animate the enemy:

    //enemy animation
    CCSpriteBatchNode* spritebatch = CCSpriteBatchNode::create("enemy_anim.png");
    CCSpriteFrameCache* cache = CCSpriteFrameCache::sharedSpriteFrameCache();
    cache->addSpriteFramesWithFile("enemy_anim.plist");

    this->createWithSpriteFrameName("enemy_idle_1.png");
    this->addChild(spritebatch);

    //idle animation
    CCArray* animFrames = CCArray::createWithCapacity(4);

    char str1[100] = {0};
    for(int i = 1; i <= 4; i++)
    {
        sprintf(str1, "enemy_idle_%d.png", i);
        CCSpriteFrame* frame = cache->spriteFrameByName(str1);
        animFrames->addObject(frame);
    }

    CCAnimation* idleanimation = CCAnimation::createWithSpriteFrames(animFrames, 0.25f);
    this->runAction(CCRepeatForever::create(CCAnimate::create(idleanimation)));

This is very similar to the code for the player. The only difference is that for the enemy, instead of calling the functions on the hero, we call them on the same class. So, now if you build and run the game, you should see the enemy being animated. The following is the screenshot from the updated code. You can now see the flames from the booster engine of the enemy. Sadly, he doesn't have a boost animation, but his feet swing in the air. Now that we have mastered the spritesheet animation technique, let's see how to create a simple animation using the skeletal animation technique.

Creating the skeletal animation

Using this technique, we will create a very simple player walk cycle. For this, there is a software called Spine by Esoteric Software, which is a widely used professional tool for creating skeletal animations for 2D games.
The software can be downloaded from the company's website at http://esotericsoftware.com/spine-purchase: There are three versions of the software available: the trial, essential, and professional versions. Although the majority of the features of the professional version are available in the essential version, it doesn't have ghosting, meshes, free-form deformation, skinning, and IK pinning, which is in the beta stage. The inclusion of these features does speed up the animation process and certainly takes a lot of manual work away from the animator or illustrator. To learn more about these features, visit the website and hover the mouse over them to get a better understanding of what they do. You can follow along by downloading the trial version, which can be done by clicking the Download trial link on the website. Spine is available for all platforms, including Windows, Mac, and Linux, so download it for the OS of your choice. On Mac, after downloading and running the software, you will be asked to install X11; alternatively, you can download and install it from http://xquartz.macosforge.org/landing/. After downloading and installing the plugin, you can open Spine. Once the software is up and running, you should see the following window: Now, create a new project by clicking on the Spine icon on the top left. As we can see in the screenshot, we are now in the SETUP mode, where we set up the character. On the Tree panel on the right-hand side, in the Hierarchy pane, select the Images folder. After selecting the folder, you will be able to select the path where the individual files for the player are located. Navigate to the player_skeletal_anim folder where all the images are present.
Once selected, you will see the panel populate with the images that are present in the folder, namely the following:

bookGame_player_Lleg
bookGame_player_Rleg
bookGame_player_bazooka
bookGame_player_body
bookGame_player_hand
bookGame_player_head

Now drag-and-drop all the files from the Images folder onto the scene. Don't worry if the images are not in the right order. In the Draw Order dropdown in the Hierarchy panel, you can drag-and-drop the different items to make them draw in the order that you want them to be displayed. Once reordered, move the individual images on the screen to the appropriate positions: You can move the images around by clicking on the translate button at the bottom of the screen. If you hover over the buttons, you can see their names. We will now start creating the bones that we will use to animate the character. At the bottom of the Tools section, click on the Create button. You should now see the cursor change to the bone creation icon. Before you create a bone, you always have to select the bone that will be its parent. In this case, we select the root bone that is in the center of the character. Click on it and drag downwards while holding the Shift key at the same time. Click-and-drag downwards up to the end of the blue dress of the character; make sure that the blue dress is highlighted. Now release the mouse button. The end point of this bone will be used as the hip joint, from where the leg bones will be created for the character. Now select the end of the newly created bone, which you made in the last step, and click-and-drag downwards again, holding Shift at the same time, to make a bone that goes all the way to the end of the leg. With the leg still highlighted, release the mouse button.
To create the bone for the other leg, again create a new bone starting from the end of the first bone at the hip joint, and while the other leg is selected, release the mouse button to create that leg's bone. Now, we will create a bone for the hand. Select the root node (the node in the middle of the character) while holding Shift again, and draw a bone to the hand while the hand is highlighted. Create a bone for the head by again selecting the root node. Draw a bone from the root node to the head while holding Shift, and release the mouse button once you are near the ear of the character and the head is highlighted. You will notice that we never created a bone for the bazooka. For the bazooka, we will make the hand the parent bone, so that when the hand gets rotated, the bazooka also rotates along with it. Click on the bazooka node in the Hierarchy panel (not the image) and drag it to the hand node in the skeleton list. You can rotate each of the bones to check whether it is rotating properly. If not, you can move either the bones or the images around by locking one of them in place so that you can move or rotate the other freely; to do this, click either the bones or the images button under Compensate at the bottom of the screen. The following is a screenshot that shows my setup. You can use it to follow along and create the bones to get a more satisfying animation. To animate the character, click on the SETUP button on the top and the layout will change to ANIMATE. You will see that a new timeline has appeared at the bottom. Click on the Animations tab in the Hierarchy panel and rename the animation from animation to runCycle by double-clicking on it. We will use the timeline to animate the character. Click on the Dopesheet icon at the bottom. This will show all the keyframes that we have made for the animation. As we have not created any, the dopesheet is empty.
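Before we start keying poses, a quick aside on why parenting the bazooka to the hand (as we did above) works: a child bone inherits its parent's rotation. Spine's real transform math also handles scale and arbitrarily deep hierarchies, but the core idea can be sketched in a few lines of Python, with made-up positions:

```python
import math

def rotate(point, degrees):
    """Rotate a 2D point around the origin."""
    r = math.radians(degrees)
    x, y = point
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))

def world_position(parent_pos, parent_rot, local_offset):
    """Place a child given its parent's world position and rotation."""
    ox, oy = rotate(local_offset, parent_rot)
    return (parent_pos[0] + ox, parent_pos[1] + oy)

# Hypothetical numbers: hand bone at (2, 3), bazooka held 1 unit
# along the hand's axis.
print(world_position((2, 3), 0, (1, 0)))   # (3.0, 3.0)
print(world_position((2, 3), 90, (1, 0)))  # (2.0, 4.0)
```

Rotating the hand bone by 90 degrees swings the bazooka from (3, 3) to (2, 4) even though the bazooka has no keyframes of its own; that is all the parenting relationship is doing.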
To create our first keyframe, we will click on the legs and rotate both the bones so that it reflects the contact pose of the walk cycle. Now to set a keyframe, click on the orange-colored key icon next to Rotate in the Transform panel at the bottom of the screen. Click on the translate key, as we will be changing the translation as well later. Once you click on it, the dopesheet will show the bones that you just rotated and also show what changes you made to the bone. Here, we rotated the bone, so you will see Rotation under the bones, and as we clicked on the translate key, it will show the Translate also. Now, frame 24 is the same as frame 0. So, to create the keyframe at frame 24, drag the timeline scrubber to frame 24 and click on the rotate and translate keys again. To set the keyframe at the middle where the contact pose happens but with opposite legs, rotate the legs to where the opposite leg was and select the keys to create a keyframe. For frames 6 and 18, we will keep the walk cycle very simple, so just raise the character above by selecting the root node, move it up in the y direction and click the orange key next to the translate button in the Transform panel at the bottom. Remember that you have to click it once in frame 6 and then move the timeline scrubber to frame 18, move the character up again, and click on the key again to create keyframes for both frames 6 and 18. Now the dopesheet should look as follow: Now to play the animation in a loop, click on the Repeat Animation button to the right of the Play button and then on the Play button. You will see the simple walk animation we created for the character. Next, we will export the data required to create the animation in Cocos2d-x. First, we will export the data for the animation. Click on the Spine button on top and select Export. The following window should pop up. 
Select JSON and choose the directory in which you would want to save the file to and click on Export: That is not all; we have to create a spritesheet and data file just as we created one in texture packer. There is an inbuilt tool in Spine to create a packed spritesheet. Again, click on the Spine icon and this time select Texture Packer. Here, in the input directory, select the Images folder from where we imported all the images initially. For the output directory, select the location to where the PNG and data files should be saved to. If you click on the settings button, you will see that it looks very similar to what we saw in TexturePacker. Keep the default values as they are. Click on Pack and give the name as player. This will create the .png and .atlas files, which are the spritesheet and data file, respectively: You have three files instead of the two in TexturePacker. There are two data files and an image file. While exporting the JSON file, if you didn't give it a name, you can rename the file manually to player.json just for consistency. Drag the player.atlas, player.json, and player.png files into the project folder. Finally, we come to the fun part where we actually use the data files to animate the character. For testing, we will add the animations to the HelloWorldScene.cpp file and check the result. Later, when we add the main menu, we will move it there so that it shows as soon as the game is launched. 
Coding the player walk cycle

If you want to test the animations in the current project itself, first add the following to the HelloWorldScene.h file to include the Spine header:

#include <spine/spine-cocos2dx.h>

Then create a variable named skeletonNode of the CCSkeletonAnimation type:

extension::CCSkeletonAnimation* skeletonNode;

Next, we initialize the skeletonNode variable in the HelloWorldScene.cpp file:

    skeletonNode = extension::CCSkeletonAnimation::createWithFile("player.json", "player.atlas", 1.0f);
    skeletonNode->addAnimation("runCycle", true, 0, 0);
    skeletonNode->setPosition(ccp(visibleSize.width/2, skeletonNode->getContentSize().height/2));
    addChild(skeletonNode);

Here, we pass the two data files into the createWithFile() function of CCSkeletonAnimation. Then we start playback with addAnimation(), giving it the animation name we used when we created the animation in Spine, which is runCycle. We next set the position of the skeletonNode right above the bottom of the screen, and finally add it to the display list.

Now, if you build and run the project, you will see the player animated forever in a loop at the bottom of the screen.

On the left, we have the animation we created using TexturePacker from CodeAndWeb, and in the middle, we have the animation created using Spine from Esoteric Software. Both techniques have their advantages, and the choice also depends on the type and scale of the game you are making. If you have a small number of animations in your game and you have good artists, you could use regular spritesheet animations. If you have a lot of animations or don't have good animators on your team, Spine makes the animation process a lot less cumbersome.
Either way, both tools in professional hands can create very good animations that will give life to the characters in the game, and therefore give a lot of character to the game itself.

Summary

This article took a very brief look at animations and how to create an animated character in the game using two of the most popular animation techniques used in games. We also looked at FSMs and at how we can create a simple state machine between two states to make the animation change according to the state of the player at that moment.

Resources for Article:

Further resources on this subject:
Moving the Space Pod Using Touch [Article]
Sprites [Article]
Cocos2d-x: Installation [Article]
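The summary mentions driving animation changes with a simple two-state FSM. The book's implementation is in C++, but the core idea can be sketched in a few lines of Python; the state names (idle/run) and clip names here are illustrative, not taken from the book:

```python
class AnimationFSM:
    """Minimal two-state machine: switches the active animation clip
    whenever the state changes, and is a no-op otherwise."""

    def __init__(self):
        self.state = "idle"               # current state name
        self.active_animation = "idleCycle"
        # map each state to the animation clip it should play
        self.transitions = {"idle": "idleCycle", "run": "runCycle"}

    def set_state(self, new_state):
        if new_state == self.state:
            return False                  # nothing to do
        if new_state not in self.transitions:
            raise ValueError(f"unknown state: {new_state}")
        self.state = new_state
        self.active_animation = self.transitions[new_state]
        return True                       # caller restarts the clip here

fsm = AnimationFSM()
fsm.set_state("run")
print(fsm.active_animation)  # runCycle
```

In a real game loop, the `True` return is where you would call the runtime's play/restart function for the new clip, so a repeated state never interrupts the clip that is already playing.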
Collision Detection and Physics in Panda3D Game Development

Packt
30 Mar 2011
12 min read
Panda3D 1.7 Game Developer's Cookbook — Over 80 recipes for developing 3D games with Panda3D, a full-scale 3D game engine.

In a video game, the game world or level defines the boundaries within which the player is allowed to interact with the game environment. But how do we enforce these boundaries? How do we keep the player from running through walls? This is where collision detection and response come into play.

Collision detection and response not only allow us to keep players from passing through the level boundaries, but also are the basis for many forms of interaction. For example, lots of actions in games are started when the player hits an invisible collision mesh, called a trigger, which initiates a scripted sequence as a response to the player entering its boundaries.

Simple collision detection and response form the basis for nearly all forms of interaction in video games. It's responsible for keeping the player within the level, for crates being pushable, for telling if and where a bullet hit the enemy. What if we could add some extra magic to the mix to make our games even more believable, immersive, and entertaining? Let's think again about pushing crates around: What happens if the player pushes a stack of crates? Do they just move like they have been glued together, or will they start to tumble and eventually topple over? This is where we add physics to the mix to make things more interesting, realistic, and dynamic.

In this article, we will take a look at the various collision detection and physics libraries that the Panda3D engine allows us to work with. Putting in some extra effort, we will also see that it is not very hard to integrate a physics engine that is not part of the Panda3D SDK.

Using the built-in collision detection system

Not all problems concerning world and player interaction need to be handled by a fully fledged physics API—sometimes a much more basic and lightweight system is just enough for our purposes.
This is why in this recipe we dive into the collision handling system that is built into the Panda3D engine.

Getting ready

This recipe relies upon the project structure created in Setting up the game structure (code download-Ch:1), Setting Up Panda3D and Configuring Development Tools.

How to do it...

Let's go through this recipe's tasks:

Open Application.py and add the import statements as well as the constructor of the Application class:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import *
import random

class Application(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        self.cam.setPos(0, -50, 10)
        self.setupCD()
        self.addSmiley()
        self.addFloor()
        taskMgr.add(self.updateSmiley, "UpdateSmiley")

Next, add the method that initializes the collision detection system:

    def setupCD(self):
        base.cTrav = CollisionTraverser()
        base.cTrav.showCollisions(render)
        self.notifier = CollisionHandlerEvent()
        self.notifier.addInPattern("%fn-in-%in")
        self.accept("frowney-in-floor", self.onCollision)

Next, implement the method for adding the frowney model to the scene:

    def addSmiley(self):
        self.frowney = loader.loadModel("frowney")
        self.frowney.reparentTo(render)
        self.frowney.setPos(0, 0, 10)
        self.frowney.setPythonTag("velocity", 0)
        col = self.frowney.attachNewNode(CollisionNode("frowney"))
        col.node().addSolid(CollisionSphere(0, 0, 0, 1.1))
        col.show()
        base.cTrav.addCollider(col, self.notifier)

The following methods will add a floor plane to the scene and handle the collision response:

    def addFloor(self):
        floor = render.attachNewNode(CollisionNode("floor"))
        floor.node().addSolid(CollisionPlane(Plane(Vec3(0, 0, 1), Point3(0, 0, 0))))
        floor.show()

    def onCollision(self, entry):
        vel = random.uniform(0.01, 0.2)
        self.frowney.setPythonTag("velocity", vel)

Add this last piece of code.
This will make the frowney model bounce up and down:

    def updateSmiley(self, task):
        vel = self.frowney.getPythonTag("velocity")
        z = self.frowney.getZ()
        self.frowney.setZ(z + vel)
        vel -= 0.001
        self.frowney.setPythonTag("velocity", vel)
        return task.cont

Hit the F6 key to launch the program.

How it works...

We start off by adding some setup code that calls the other initialization routines. We also add the task that will update the smiley's position.

In the setupCD() method, we initialize the collision detection system. To be able to find out which scene objects collided and issue the appropriate responses, we create an instance of the CollisionTraverser class and assign it to base.cTrav. The variable name is important, because this way, Panda3D will automatically update the CollisionTraverser every frame. The engine checks if a CollisionTraverser was assigned to that variable and will automatically add the required tasks to Panda3D's update loop. Additionally, we enable debug drawing, so collisions are being visualized at runtime. This will overlay a visualization of the collision meshes the collision detection system uses internally.

In the last lines of setupCD(), we instantiate a collision handler that sends a message using Panda3D's event system whenever a collision is detected. The method call addInPattern("%fn-in-%in") defines the pattern for the name of the event that is created when a collision is encountered the first time. %fn will be replaced by the name of the object that bumps into another object that goes by the name that will be inserted in the place of %in. Take a look at the event handler that is added below to get an idea of what these events will look like.

After the code for setting up the collision detection system is ready, we add the addSmiley() method, where we first load the model and then create a new collision node, which we attach to the model's node so it is moved around together with the model.
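The "%fn-in-%in" pattern expansion described above is easy to mirror in plain Python. This sketch is not Panda3D's actual implementation, just an illustration of how the collider and collidee names produce the event string the recipe listens for:

```python
def collision_event_name(pattern, from_name, into_name):
    """Expand a Panda3D-style event pattern: %fn is the 'from' collider,
    %in is the 'into' object it hit."""
    return pattern.replace("%fn", from_name).replace("%in", into_name)

# The recipe registers the pattern "%fn-in-%in" and then listens for the
# resulting event name to react to the frowney hitting the floor.
event = collision_event_name("%fn-in-%in", "frowney", "floor")
print(event)  # frowney-in-floor
```

This is why the recipe's accept() call subscribes to the literal string "frowney-in-floor": it is the pattern with both placeholders filled in.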
We also add a sphere collision shape, defined by its local center coordinates and radius. This is the shape that defines the boundaries; the collision system will test against it to determine whether two objects have touched. To complete this step, we register our new collision node with the collision traverser and configure it to use the collision handler that sends events as a collision response.

Next, we add an infinite floor plane and add the event handling method for reacting on collision notifications. Although the debug visualization shows us a limited rectangular area, this plane actually has an unlimited width and height. In our case, this means that at any given x- and y-coordinate, objects will register a collision when any point on their bounding volume reaches a z-coordinate of 0.

It's also important to note that the floor is not registered as a collider here. This is contrary to what we did for the frowney model and guarantees that the model will act as the collider, and the floor will be treated as the collidee when a contact between the two is encountered.

While the onCollision() method makes the smiley model go up again, the code in updateSmiley() constantly drags it downwards. Setting the velocity tag on the frowney model to a positive or negative value, respectively, does this in these two methods. We can think of that as forces being applied. Whenever we encounter a collision with the ground plane, we add a one-shot bounce to our model. But what goes up must come down, eventually. Therefore, we continuously add a gravity force by decreasing the model's velocity every frame.

There's more...

This sample only touched a few of the features of Panda3D's collision system. The following sections are meant as an overview to give you an impression of what else is possible. For more details, take a look into Panda3D's API reference.
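The bounce-and-gravity behavior described above — a per-frame velocity decrement plus a one-shot upward kick on floor contact — is a simple Euler integration that can be sketched independently of Panda3D. The gravity step mirrors the recipe's 0.001 per-frame decrement; the bounce value is fixed here instead of random so the result is predictable:

```python
def simulate_bounce(z=10.0, vel=0.0, gravity=0.001, bounce=0.15, frames=200):
    """Mimic the recipe's update loop: move by vel each frame, decrement
    vel by gravity, and reset vel upwards when the sphere reaches z=0."""
    for _ in range(frames):
        z += vel
        vel -= gravity
        if z <= 0:           # contact with the infinite floor plane
            z = 0.0
            vel = bounce     # the one-shot "bounce" the event handler applies
    return z, vel

z, vel = simulate_bounce()
print(z >= 0)  # True: the floor contact keeps the model from falling through
```

Running this for more frames shows the same endless up-and-down cycle the sample produces: gravity always wins until the next contact resets the velocity.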
Collision Shapes

In the sample code, we used CollisionPlane and CollisionSphere, but there are several more shapes available:

CollisionBox: A simple rectangular shape. Crates, boxes, and walls are example usages for this kind of collision shape.

CollisionTube: A cylinder with rounded ends. This type of collision mesh is often used as a bounding volume for first and third person game characters.

CollisionInvSphere: This shape can be thought of as a bubble that contains objects, like a fish bowl. Everything that is outside the bubble is reported to be colliding. A CollisionInvSphere may be used to delimit the boundaries of a game world, for example.

CollisionPolygon: This collision shape is formed from a set of vertices, and allows for the creation of freeform collision meshes. This kind of shape is the most complex to test for collisions, but also the most accurate one. Whenever polygon-level collision detection is important, when doing hit detection in a shooter for example, this collision mesh comes in handy.

CollisionRay: This is a line that, starting from one point, extends to infinity in a given direction. Rays are usually shot into a scene to determine whether one or more objects intersect with them. This can be used for various tasks like finding out if a bullet shot in the given direction hit a target, or simple AI tasks like finding out whether a bot is approaching a wall.

CollisionLine: Like CollisionRay, but stretches to infinity in both directions.

CollisionSegment: This is a special form of ray that is limited by two end points.

CollisionParabola: Another special type of ray that is bent. The flying curves of ballistic objects are commonly described as parabolas. Naturally, we would use this kind of ray to find collisions for bullets, for example.
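The CollisionRay use case above — shooting a ray into the scene to test whether it hits a target — boils down to a ray-sphere intersection test. Here is a standalone sketch of that math in plain Python; it illustrates the geometry, not Panda3D's internal implementation:

```python
def ray_hits_sphere(origin, direction, center, radius):
    """Return True if a ray from `origin` along the normalized `direction`
    intersects the sphere at `center` with the given `radius`."""
    # Vector from the ray origin to the sphere center
    oc = [c - o for o, c in zip(origin, center)]
    # Project oc onto the ray direction: distance along the ray to the
    # point of closest approach
    t = sum(a * b for a, b in zip(oc, direction))
    if t < 0:
        return False  # the sphere is behind the ray origin
    # Squared distance from the sphere center to that closest point
    closest = [o + t * d for o, d in zip(origin, direction)]
    dist_sq = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist_sq <= radius * radius

# A "bullet" fired along +y from the origin hits a unit sphere at (0, 5, 0)
print(ray_hits_sphere((0, 0, 0), (0, 1, 0), (0, 5, 0), 1.0))  # True
```

Engines run exactly this kind of test (against each candidate collision solid) when you add a CollisionRay to a traverser and ask which objects it intersects.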
Collision Handlers

Just as is the case with collision shapes, we only used CollisionHandlerEvent in this recipe's sample program, even though there are several more collision handler classes available:

CollisionHandlerPusher: This collision handler automatically keeps the collider out of intersecting vertical geometry, like walls.

CollisionHandlerFloor: Like CollisionHandlerPusher, but works in the horizontal plane.

CollisionHandlerQueue: A very simple handler. All it does is add any intersecting objects to a list.

PhysicsCollisionHandler: This collision handler should be used in connection with Panda3D's built-in physics engine. Whenever a collision is found by this collision handler, the appropriate response is calculated by the simple physics engine that is built into the engine.

Using the built-in physics system

Panda3D has a built-in physics system that treats its entities as simple particles with masses to which forces may be applied. This physics system is a great deal simpler than a fully featured rigid body one, but it is still enough for cheaply, quickly, and easily creating some nice and simple physics effects.

Getting ready

To be prepared for this recipe, please first follow the steps found in Setting up the game structure (code download-Ch:1). Also, the collision detection system of Panda3D will be used, so reading up on it in Using the built-in collision detection system might be a good idea!

How to do it...
The following steps are required to work with Panda3D's built-in physics system:

Edit Application.py and add the required import statements as well as the constructor of the Application class:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import *
from panda3d.physics import *

class Application(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        self.cam.setPos(0, -50, 10)
        self.setupCD()
        self.setupPhysics()
        self.addSmiley()
        self.addFloor()

Next, add the methods for initializing the collision detection and physics systems to the Application class:

    def setupCD(self):
        base.cTrav = CollisionTraverser()
        base.cTrav.showCollisions(render)
        self.notifier = CollisionHandlerEvent()
        self.notifier.addInPattern("%fn-in-%in")
        self.notifier.addOutPattern("%fn-out-%in")
        self.accept("smiley-in-floor", self.onCollisionStart)
        self.accept("smiley-out-floor", self.onCollisionEnd)

    def setupPhysics(self):
        base.enableParticles()
        gravNode = ForceNode("gravity")
        render.attachNewNode(gravNode)
        gravityForce = LinearVectorForce(0, 0, -9.81)
        gravNode.addForce(gravityForce)
        base.physicsMgr.addLinearForce(gravityForce)

Next, implement the method for adding a model and physics actor to the scene:

    def addSmiley(self):
        actor = ActorNode("physics")
        actor.getPhysicsObject().setMass(10)
        self.phys = render.attachNewNode(actor)
        base.physicsMgr.attachPhysicalNode(actor)
        self.smiley = loader.loadModel("smiley")
        self.smiley.reparentTo(self.phys)
        self.phys.setPos(0, 0, 10)
        thrustNode = ForceNode("thrust")
        self.phys.attachNewNode(thrustNode)
        self.thrustForce = LinearVectorForce(0, 0, 400)
        self.thrustForce.setMassDependent(1)
        thrustNode.addForce(self.thrustForce)
        col = self.smiley.attachNewNode(CollisionNode("smiley"))
        col.node().addSolid(CollisionSphere(0, 0, 0, 1.1))
        col.show()
        base.cTrav.addCollider(col, self.notifier)

Add this last piece of source code, which adds the floor plane to the scene, to Application.py:

    def addFloor(self):
        floor = render.attachNewNode(CollisionNode("floor"))
        floor.node().addSolid(CollisionPlane(Plane(Vec3(0, 0, 1), Point3(0, 0, 0))))
        floor.show()

    def onCollisionStart(self, entry):
        base.physicsMgr.addLinearForce(self.thrustForce)

    def onCollisionEnd(self, entry):
        base.physicsMgr.removeLinearForce(self.thrustForce)

Start the program by pressing F6.

How it works...

After adding the mandatory libraries and initialization code, we proceed to the code that sets up the collision detection system. Here we register event handlers for when the smiley starts or stops colliding with the floor. The calls involved in setupCD() are very similar to the ones used in Using the built-in collision detection system.

Instead of moving the smiley model in our own update task, we use the built-in physics system to calculate new object positions based on the forces applied to them. In setupPhysics(), we call base.enableParticles() to fire up the physics system. We also attach a new ForceNode to the scene graph, so all physics objects will be affected by the gravity force. We also register the force with base.physicsMgr, which is automatically defined when the physics engine is initialized and ready.

In the first couple of lines in addSmiley(), we create a new ActorNode, give it a mass, attach it to the scene graph, and register it with the physics manager class. The graphical representation, which is the smiley model in this case, is then added to the physics node as a child so it will be moved automatically as the physics system updates. We also add a ForceNode to the physics actor. This acts as a thruster that applies a force that pushes the smiley upwards whenever it intersects the floor. As opposed to the gravity force, the thruster force is set to be mass dependent. This means that no matter how heavy we set the smiley to be, it will always be accelerated at the same rate by the gravity force.
The thruster force, on the other hand, would need to be more powerful if we increased the mass of the smiley. The last step when adding a smiley is adding its collision node and shape, which leads us to the last methods added in this recipe, where we add the floor plane and define that the thruster should be enabled when the collision starts, and disabled when the objects’ contact phase ends.
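The distinction above — gravity accelerating every body equally while the mass-dependent thrust must be made stronger for heavier bodies — can be checked with a few lines of plain Python. This is an illustration of the two force modes, not Panda3D's internal solver:

```python
def acceleration(force, mass, mass_dependent):
    """A mass-dependent force yields a = F/m, so heavier bodies accelerate
    less; a mass-independent force, like the recipe's gravity, applies the
    same acceleration regardless of mass."""
    return force / mass if mass_dependent else force

# Gravity (mass-independent): the same acceleration for any smiley mass
print(acceleration(-9.81, 10, False) == acceleration(-9.81, 100, False))  # True

# Thrust (mass-dependent): a heavier smiley is accelerated less by the
# same 400-unit force, so the force would need to grow with the mass
print(acceleration(400, 10, True) > acceleration(400, 100, True))  # True
```

This is exactly why the recipe can tune the smiley's mass freely without touching gravity, but would have to raise the thrust value to keep the bounce height if the mass were increased.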
Introducing variables

Packt
24 Jun 2014
6 min read
(For more resources related to this topic, see here.)

In order to store data, you have to use the right kind of variables. We can think of variables as boxes, and what you put in these boxes depends on what type of box it is. In most native programming languages, you have to declare a variable and its type.

Number variables

Let's go over some of the major types of variables. The first type is number variables. These variables store numbers and not letters. That means, if you tried to put in a name, let's say "John Bura", then the app simply won't work.

Integer variables

There are numerous different types of number variables. Integer variables, called Int variables, can be positive or negative whole numbers—you cannot have a decimal at all. So, you could put -1 as an integer variable but not 1.2.

Real variables

Real variables can be positive or negative, and they can be decimal numbers. A real variable can be 1.0, -40.4, or 100.1, for instance. There are other kinds of number variables as well; they are used in more specific situations. For the most part, integer and real variables are the ones you need to know—make sure you don't get them mixed up. If you were to run an app with this kind of mismatch, chances are it won't work.

String variables

There is another kind of variable that is really important. This type of variable is called a string variable. String variables are variables that comprise letters or words. This means that if you want to record a character's name, then you will have to use a string variable. In most programming languages, string variables have to be in quotes, for example, "John Bura". The quote marks tell the computer that the characters within are actually strings that the computer can use.

When you put a number 1 into a string, is it a real number 1 or is it just a fake number? It's a fake number, because strings are not numbers—they are strings. Even though the string shows the number 1, it isn't actually the number 1.
Strings are meant to display characters, and numbers are meant to do math. Strings are not meant to do math—they just hold characters. If you tried to do math with a string, it wouldn't work (except in JavaScript, which we will talk about shortly). Strings shouldn't be used for calculations—they are meant to hold and display characters. If we have a string "1", it will be recorded as a character rather than an integer that can be used for calculations.

Boolean variables

The last main type of variable that we need to talk about is Boolean variables. Boolean variables are either true or false, and they are very important when it comes to games. They are used where there can only be two options. The following are some examples of Boolean variables:

isAlive
isShooting
isInAppPurchaseCompleted
isConnectedToInternet

Most of these variables start off with the word is. This is usually done to signify that the variable that we are using is a Boolean. When you make games, you tend to use a lot of Boolean variables because there are so many states that game objects can be in. Often, these states have only two options, and the best thing to do is use a Boolean. Sometimes, you need to use an integer instead of a Boolean; usually, 0 equals false and 1 equals true.

Other variables

When it comes to game production, there are a lot of specific variables that differ from environment to environment. Sometimes, there are GameObject variables, and there can also be a whole bunch of more specific variables.

Declaring variables

If you want to store any kind of data in variables, you have to declare them first. In the backend of Construct 2, there are a lot of variables that are already declared for you. This means that Construct 2 takes out the work of declaring variables.
The variables that are taken care of for you include the following:

Keyboard
Mouse position
Mouse angle
Type of web browser

Writing variables in code

When we use Construct 2, a lot of the backend busywork has already been done for us. So, how do we declare variables in code? Usually, variables are declared at the top of the coding document, as shown in the following code:

Int score;
Real timescale = 1.2;
Bool isDead;
Bool isShooting = false;
String name = "John Bura";

Let's take a look at all of them. The type of variable is listed first. In this case, we have the Int, Real, Bool (Boolean), and String variables. Next, we have the name of the variable. If you look carefully, you can see that certain variables have an = (equals sign) and some do not. When we have a variable with an equals sign, we initialize it. This means that we set the information in the variable right away. Sometimes, you need to do this and at other times, you do not. For example, a score does not need to be initialized because we are going to change the score as the game progresses. As you already know, you can initialize a Boolean variable to either true or false—these are the only two states a Boolean variable can be in. You will also notice that there are quotes around the string variable.

Let's take a look at some examples that won't work:

Int score = -1.2;
Bool isDead = "false";
String name = John Bura;

There is something wrong with all these examples. First of all, the Int variable cannot be a decimal. Second, the Bool variable has quotes around it. Lastly, the String variable has no quotes. In most environments, this will cause the program to not work. However, in HTML5 or JavaScript, the variable is changed to fit the situation.

Summary

In this article, we learned about the different types of variables and even looked at a few correct and incorrect variable declarations. If you are making a game, get used to making and setting lots of variables.
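The type mismatches discussed in this article can be demonstrated in any language. This Python sketch is illustrative only (Construct 2 handles types for you behind the scenes); it shows that the string "1" is not the number 1, and how each kind of value is distinguished:

```python
score = -1            # integer: whole numbers only
timescale = 1.2       # real (float): decimals allowed
is_dead = False       # Boolean: only True or False
name = "John Bura"    # string: characters in quotes

# The string "1" is a character, not a number you can do math with
print("1" == 1)        # False: a string is never equal to an integer
print(int("1") + 1)    # 2: you must convert the string to a number first

# Each value carries its own type
print(type(score).__name__, type(timescale).__name__,
      type(is_dead).__name__, type(name).__name__)
# int float bool str
```

Dynamically typed languages like Python or JavaScript still keep these categories apart at runtime; the difference from the declarations above is only that the type is attached to the value instead of being written next to the variable name.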
The best part is that Construct 2 makes handling variables really easy.

Resources for Article:

Further resources on this subject:
2D game development with Monkey [article]
Microsoft XNA 4.0 Game Development: Receiving Player Input [article]
Flash Game Development: Making of Astro-PANIC! [article]
Skinning a character

Packt
21 Apr 2014
6 min read
(For more resources related to this topic, see here.)

Our world in 5000 AD is incomplete without our mutated human being, Mr. Green. Our Mr. Green is a rigged model, exported from Blender. All famous 3D games, from Counter Strike to World of Warcraft, use skinned models to give the most impressive real-world model animations and kinematics. Hence, our learning now has to evolve to load Mr. Green and add the same quality of animation to our game. We will start our study of character animation by discussing the skeleton, which is the base of character animation, upon which a body and its motion are built. Then, we will learn about skinning, how the bones of the skeleton are attached to the vertices, and then understand its animations. In this article, we will cover the basics of a character's skeleton, the basics of skinning, and some aspects of loading a rigged JSON model.

Understanding the basics of a character's skeleton

A character's skeleton is a posable framework of bones. These bones are connected by articulated joints, arranged in a hierarchical data structure. The skeleton is generally not rendered and is used as an invisible armature to position and orient a character's skin.

The joints are used for relative movement within the skeleton. They are represented by 4 x 4 linear transformation matrices (combinations of rotation, translation, and scale). The character skeleton is set up using only simple rotational joints, as they are sufficient to model the joints of real animals. Every joint has limited degrees of freedom (DOFs). DOFs are the possible ranges of motion of an object. For instance, an elbow joint has one rotational DOF and a shoulder joint has three DOFs, as the shoulder can rotate along three perpendicular axes. Individual joints usually have one to six DOFs. Refer to the link http://en.wikipedia.org/wiki/Six_degrees_of_freedom to understand different degrees of freedom.

A joint local matrix is constructed for each joint.
This matrix defines the position and orientation of each joint and is relative to the joint above it in the hierarchy. The local matrices are used to compute the world space matrices of the joints, using the process of forward kinematics. The world space matrix is used to render the attached geometry and is also used for collision detection.

The digital character skeleton is analogous to the real-world skeleton of vertebrates. However, the bones of our digital human character do not have to correspond to the actual bones; it will depend on the level of detail of the character you require. For example, you may or may not require cheek bones to animate facial expressions. Skeletons are not just used to animate vertebrates but also mechanical parts such as doors or wheels.

Comprehending the joint hierarchy

The topology of a skeleton is a tree or an open directed graph. The joints are connected up in a hierarchical fashion to the selected root joint. The root joint has no parent and is represented in the model JSON file with a parent value of -1. All skeletons are kept as open trees without any closed loops. This restriction, though, does not prevent kinematic loops.

Each node of the tree represents a joint, also called a bone. We use both terms interchangeably. For example, the shoulder is a joint and the upper arm is a bone, but the transformation matrix of both objects is the same. So mathematically, we would represent it as a single component with three DOFs. The amount of rotation of the shoulder joint will be reflected by the upper arm's bone. The following figure shows a simple robotic bone hierarchy:

Understanding forward kinematics

Kinematics is a mathematical description of a motion without the underlying physical forces. Kinematics describes the position, velocity, and acceleration of an object. We use kinematics to calculate the position of an individual bone of the skeleton structure (the skeleton pose). Hence, we will limit our study to position and orientation.
The skeleton is purely a kinematic structure. Forward kinematics is used to compute the world space matrix of each bone from its DOF values. Inverse kinematics is used to calculate the DOF values from the position of the bone in the world.

Let's dive a little deeper into forward kinematics and study a simple case of a bone hierarchy that starts from the shoulder, moves to the elbow, and finally to the wrist. Each bone/joint has a local transformation matrix, this.modelMatrix. This local matrix is calculated from the bone's position and rotation. Let's say the model matrices of the wrist, elbow, and shoulder are this.modelMatrix_wrist, this.modelMatrix_elbow, and this.modelMatrix_shoulder respectively. The world matrix is the transformation matrix that will be used by shaders as the model matrix, as it denotes the position and rotation in world space.

The world matrix for the wrist will be:

this.worldMatrix_wrist = this.worldMatrix_elbow * this.modelMatrix_wrist

The world matrix for the elbow will be:

this.worldMatrix_elbow = this.worldMatrix_shoulder * this.modelMatrix_elbow

If you look at the preceding equations, you will realize that to calculate the exact location of the wrist in world space, we first need to calculate the position of the elbow in world space. To calculate the position of the elbow, we first need to calculate the position of the shoulder. We need to calculate the world space coordinates of the parent first in order to calculate those of its children. Hence, we use depth-first tree traversal to traverse the complete skeleton tree, starting from its root node. A depth-first traversal begins by calculating the modelMatrix of the root node and traverses down through each of its children. A child node is visited and subsequently all of its children are traversed. After all the children are visited, control is transferred back to the parent node. We calculate the world matrix by concatenating the joint parent's world matrix and its local matrix.
The process of calculating a local matrix from the DOF values, and then a world matrix from the parent's world matrix, is defined as forward kinematics. Let's now define some important terms that we will often use:

Joint DOFs: A movable joint's motion can generally be described by six DOFs (three for position and three for rotation). DOF is a general term:

this.position = vec3.fromValues(x, y, z);
this.quaternion = quat.fromValues(x, y, z, w);
this.scale = vec3.fromValues(1, 1, 1);

We use quaternion rotations to store rotational transformations to avoid issues such as gimbal lock. The quaternion holds the DOF values for rotation around the x, y, and z axes.

Joint offset: Joints have a fixed offset position in the parent node's space. When we skin a joint, we change the position of each joint to match the mesh. This new fixed position acts as a pivot point for the joint movement. The pivot point of an elbow is at a fixed location relative to the shoulder joint. This position is denoted by a position vector in the joint's local matrix and is stored in the m31, m32, and m33 indices of the matrix. The offset matrix also holds initial rotational values.
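As a companion to the quaternion DOFs above, the standard conversion from a unit quaternion (x, y, z, w) to a 3x3 rotation matrix can be sketched as follows. This is a Python illustration of the math that libraries such as glMatrix perform internally when building a local matrix; it is not code from the book:

```python
import math

def quat_to_matrix(x, y, z, w):
    """Standard conversion of a unit quaternion (x, y, z, w) to a 3x3 rotation matrix."""
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ]

# A 90-degree rotation about the z axis: q = (0, 0, sin(45 deg), cos(45 deg))
s, c = math.sin(math.pi / 4), math.cos(math.pi / 4)
m = quat_to_matrix(0.0, 0.0, s, c)

# The first column of m is the image of the x axis; it should land on the y axis
vx = [row[0] for row in m]
print([round(v, 3) + 0.0 for v in vx])  # [0.0, 1.0, 0.0] (+0.0 normalizes -0.0)
```

Unlike Euler angles, composing such rotations never collapses a rotational axis, which is the gimbal lock problem the text mentions.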
Packt
01 Feb 2016
19 min read

Scenes and Menus

In this article by Siddharth Shekar, author of the book Cocos2d Cross-Platform Game Development Cookbook, Second Edition, we will cover the following recipes: Adding level selection scenes Scrolling level selection scenes (For more resources related to this topic, see here.) Scenes are the building blocks of any game. Generally, in any game, you have the main menu scene from which you can navigate to different scenes, such as GameScene, OptionsScene, and CreditsScene. In each of these scenes, you have menus. Similarly, in MainScene, there is a play button that is part of a menu that, when pressed, takes the player to GameScene, where the gameplay code runs. Adding level selection scenes In this section, we will take a look at how to add a level selection scene in which you will have buttons for each level you want to play; if you select one, that particular level will load up. Getting ready To create a level selection screen, you will need a custom sprite that shows a background image for the button and text showing the level number. We will create these buttons first. Once the button sprites are created, we will create a new scene that we will populate with the background image, the name of the scene, the array of buttons, and the logic to change the scene to the particular level. How to do it... We will create a new Cocoa Touch class with CCSprite as the parent class and call it LevelSelectionBtn. Then, we will open up the LevelSelectionBtn.h file and add the following lines of code in it:

#import "CCSprite.h"

@interface LevelSelectionBtn : CCSprite
-(id)initWithFilename:(NSString *) filename
  StartlevelNumber:(int)lvlNum;
@end

We will create a custom init function; into this, we will pass the filename of the image that will form the base of the button, and an integer that will be used to display the level number on top of the base button image. This is all that is required for the header class.
In the LevelSelectionBtn.m file, we will add the following lines of code:

#import "LevelSelectionBtn.h"

@implementation LevelSelectionBtn

-(id)initWithFilename:(NSString *) filename StartlevelNumber:(int)lvlNum;
{
  if (self = [super initWithImageNamed:filename]) {
    CCLOG(@"Filename: %@ and levelNUmber: %d", filename, lvlNum);
    CCLabelTTF *textLabel = [CCLabelTTF labelWithString:[NSString stringWithFormat:@"%d",lvlNum ] fontName:@"AmericanTypewriter-Bold" fontSize: 12.0f];
    textLabel.position = ccp(self.contentSize.width / 2, self.contentSize.height / 2);
    textLabel.color = [CCColor colorWithRed:0.1f green:0.45f blue:0.73f];
    [self addChild:textLabel];
  }
  return self;
}
@end

In our custom init function, we first log the values to verify that we are passing the correct data in. Then, we create a text label by converting the integer to a string. The label is placed at the center of the base image by dividing the content size of the image by half to get the center. As the base button image and the text are both white, the color of the text is changed to blue so that the text is actually visible. Finally, we add the text to the current class. This is all for the LevelSelectionBtn class. Next, we will create LevelSelectionScene, in which we will add the sprite buttons and the logic for when a button is pressed. So, we will now create a new class, LevelSelectionScene, and in the header file, we will add the following lines:

#import "CCScene.h"

@interface LevelSelectionScene : CCScene{
  NSMutableArray *buttonSpritesArray;
}
+(CCScene*)scene;
@end

Note that apart from the usual code, we also created an NSMutableArray called buttonSpritesArray, which will be used later in the code.
Next, in the LevelSelectionScene.m file, we will add the following:

#import "LevelSelectionScene.h"
#import "LevelSelectionBtn.h"
#import "GameplayScene.h"

@implementation LevelSelectionScene

+(CCScene*)scene{
  return[[self alloc]init];
}

-(id)init{
  if(self = [super init]){
    CGSize winSize = [[CCDirector sharedDirector]viewSize];

    //Add Background Image
    CCSprite* backgroundImage = [CCSprite spriteWithImageNamed:@"Bg.png"];
    backgroundImage.position = CGPointMake(winSize.width/2, winSize.height/2);
    [self addChild:backgroundImage];

    //add text heading for file
    CCLabelTTF *mainmenuLabel = [CCLabelTTF labelWithString:@"LevelSelectionScene" fontName:@"AmericanTypewriter-Bold" fontSize:36.0f];
    mainmenuLabel.position = CGPointMake(winSize.width/2, winSize.height * 0.8);
    [self addChild:mainmenuLabel];

    //initialize array
    buttonSpritesArray = [NSMutableArray array];

    int widthCount = 5;
    int heightCount = 5;
    float spacing = 35.0f;
    float halfWidth = winSize.width/2 - (widthCount-1) * spacing * 0.5f;
    float halfHeight = winSize.height/2 + (heightCount-1) * spacing * 0.5f;
    int levelNum = 1;

    for(int i = 0; i < heightCount; ++i){
      float y = halfHeight - i * spacing;
      for(int j = 0; j < widthCount; ++j){
        float x = halfWidth + j * spacing;
        LevelSelectionBtn* lvlBtn = [[LevelSelectionBtn alloc] initWithFilename:@"btnBG.png" StartlevelNumber:levelNum];
        lvlBtn.position = CGPointMake(x,y);
        lvlBtn.name = [NSString stringWithFormat:@"%d",levelNum];
        [self addChild:lvlBtn];
        [buttonSpritesArray addObject: lvlBtn];
        levelNum++;
      }
    }
  }
  return self;
}

Here, we will add the background image and heading text for the scene and initialize the NSMutableArray.
We will then create six new variables, as follows:

widthCount: This is the number of columns we want to have
heightCount: This is the number of rows we want
spacing: This is the distance between each of the sprite buttons so that they don't overlap
halfWidth: This is the distance along the x axis from the center of the screen to the upper-left position of the first sprite button that will be placed
halfHeight: This is the distance along the y axis from the center to the upper-left position of the first sprite button that will be placed
levelNum: This is a counter with an initial value of 1. It is incremented each time a button is created and is used to show the text in the button sprite.

In the double loop, we will get the x and y coordinates of each of the button sprites. First, to get the y position, we subtract the spacing multiplied by the i counter from halfHeight. As the value of i is initially 0, the y value remains the same as halfHeight for the topmost row. Then, for the x value of the position, we add the spacing multiplied by the j counter to halfWidth, so each time the x position is incremented by the spacing. After getting the x and y position, we will create a new LevelSelectionBtn sprite, passing in the btnBG.png image and the value of levelNum to create the button sprite. We will set the position to the x and y values we calculated earlier. To refer to the button by number, we will assign the name of the sprite, which is the same as the number of the level; so, we will convert levelNum to a string and pass in the value. Then, the button is added to the scene, and it is also added to the array we created globally, as we will need to cycle through the buttons later. Finally, we will increment the value of levelNum. However, we have still not added any interactivity to the sprite buttons so that when one is pressed, it loads the required level.
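The grid arithmetic above can be checked in isolation. Here is a Python sketch of the same double loop (the article's code is Objective-C); the 568x320 point view size is just an assumed example:

```python
# The grid math from the double loop, in isolation. The 568x320 view size
# is an assumed example; the article queries it from CCDirector at runtime.
win_w, win_h = 568.0, 320.0
width_count, height_count, spacing = 5, 5, 35.0

# Position of the top-left button's center
half_width = win_w / 2 - (width_count - 1) * spacing * 0.5
half_height = win_h / 2 + (height_count - 1) * spacing * 0.5

positions = []
level_num = 1
for i in range(height_count):        # rows, top to bottom
    y = half_height - i * spacing
    for j in range(width_count):     # columns, left to right
        x = half_width + j * spacing
        positions.append((level_num, x, y))
        level_num += 1

print(len(positions))     # 25
print(positions[0])       # (1, 214.0, 230.0) -- top-left button
print(positions[12][1:])  # (284.0, 160.0) -- button 13 sits at the screen center
```

Note how the middle button of the 5x5 grid lands exactly on the screen center, confirming that the halfWidth/halfHeight offsets center the whole grid.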
To add touch interactivity, we will use the touchBegan function built right into Cocos2d. We will create more complex interfaces later, but for now, we will use the basic touchBegan function. In the same file, we will add the following code right between the init function and @end:

-(void)touchBegan:(CCTouch *)touch withEvent:(CCTouchEvent *)event{
  CGPoint location = [touch locationInNode:self];
  for (CCSprite *sprite in buttonSpritesArray)
  {
    if (CGRectContainsPoint(sprite.boundingBox, location)){
      CCLOG(@" you have pressed: %@", sprite.name);
      CCTransition *transition = [CCTransition transitionCrossFadeWithDuration:0.20];
      [[CCDirector sharedDirector]replaceScene:[[GameplayScene alloc]initWithLevel:sprite.name] withTransition:transition];
      self.userInteractionEnabled = false;
    }
  }
}

The touchBegan function will be called each time we touch the screen. Once we touch the screen, it gets the location of the touch and stores it in a variable called location. Then, using a for-in loop, we loop through all the button sprites we added to the array. Using the CGRectContainsPoint function, we check whether the location we pressed is inside the rect of any of the sprites in the loop. We then log out which button number was clicked on, so that we can be sure that the right level is loaded. A crossfade transition is created, and the current scene is swapped with GameplayScene, passing in the name of the sprite that was clicked on. Finally, we have to set the userInteractionEnabled Boolean to false so that the current class stops listening for touches. We also need to enable this Boolean when the class is created, so we will add the following line of code as highlighted in the init function:

    if(self = [super init]){
      self.userInteractionEnabled = TRUE;
      CGSize winSize = [[CCDirector sharedDirector]viewSize];

How it works…
So, we are done with the LevelSelectionScene class, but we still need to add a button in MainScene to open LevelSelectionScene. In MainScene, we will add the following lines in the init function, in which we will add menuBtn and a function to be called once the button is clicked on, as highlighted here:

        CCButton *playBtn = [CCButton buttonWithTitle:nil
          spriteFrame:[CCSpriteFrame frameWithImageNamed:@"playBtn_normal.png"]
          highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@"playBtn_pressed.png"]
          disabledSpriteFrame:nil];
        [playBtn setTarget:self selector:@selector(playBtnPressed:)];

        CCButton *menuBtn = [CCButton buttonWithTitle:nil
          spriteFrame:[CCSpriteFrame frameWithImageNamed:@"menuBtn.png"]
          highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@"menuBtn.png"]
          disabledSpriteFrame:nil];
        [menuBtn setTarget:self selector:@selector(menuBtnPressed:)];

        CCLayoutBox * btnMenu;
        btnMenu = [[CCLayoutBox alloc] init];
        btnMenu.anchorPoint = ccp(0.5f, 0.5f);
        btnMenu.position = CGPointMake(winSize.width/2, winSize.height * 0.5);
        btnMenu.direction = CCLayoutBoxDirectionVertical;
        btnMenu.spacing = 10.0f;
        [btnMenu addChild:menuBtn];
        [btnMenu addChild:playBtn];
        [self addChild:btnMenu];

Don't forget to include the menuBtn.png file from the resources folder of the project, otherwise you will get a build error.
Next, also add in the menuBtnPressed function, which will be called once menuBtn is pressed and released, as follows:

-(void)menuBtnPressed:(id)sender{
  CCLOG(@"menu button pressed");
  CCTransition *transition = [CCTransition transitionCrossFadeWithDuration:0.20];
  [[CCDirector sharedDirector]replaceScene:[[LevelSelectionScene alloc]init] withTransition:transition];
}

Now, the MainScene should look similar to the following: Click on the menu button below the play button, and you will be able to see LevelSelectionScreen in all its glory. Now, click on any of the buttons to open up the gameplay scene displaying the number that you clicked on. In this case, I clicked on button number 18, which is why it shows 18 in the gameplay scene when it loads. Scrolling level selection scenes If your game has, say, 20 levels, it is okay to have one single level selection scene to display all the level buttons; but what if you have more? In this section, we will modify the previous section's code, create a node, and customize the class to create a scrollable level selection scene. Getting ready We will create a new class called LevelSelectionLayer, inheriting from CCNode, and move all the content we added in LevelSelectionScene to it. This is done so that we can have a separate class and instantiate it as many times as we want in the game. How to do it... In the LevelSelectionLayer.h file, we will change the code to the following:

#import "CCNode.h"

@interface LevelSelectionLayer : CCNode {
  NSMutableArray *buttonSpritesArray;
}
-(id)initLayerWith:(NSString *)filename
  StartlevelNumber:(int)lvlNum
  widthCount:(int)widthCount
  heightCount:(int)heightCount
  spacing:(float)spacing;
@end

We changed the init function so that instead of hardcoding the values, we can create a more flexible level selection layer.
In the LevelSelectionLayer.m file, we will add the following:

#import "LevelSelectionLayer.h"
#import "LevelSelectionBtn.h"
#import "GameplayScene.h"

@implementation LevelSelectionLayer

- (void)onEnter{
  [super onEnter];
  self.userInteractionEnabled = YES;
}

- (void)onExit{
  [super onExit];
  self.userInteractionEnabled = NO;
}

-(id)initLayerWith:(NSString *)filename StartlevelNumber:(int)lvlNum widthCount:(int)widthCount heightCount:(int)heightCount spacing:(float)spacing{
  if(self = [super init]){
    CGSize winSize = [[CCDirector sharedDirector]viewSize];
    self.contentSize = winSize;
    buttonSpritesArray = [NSMutableArray array];
    float halfWidth = self.contentSize.width/2 - (widthCount-1) * spacing * 0.5f;
    float halfHeight = self.contentSize.height/2 + (heightCount-1) * spacing * 0.5f;
    int levelNum = lvlNum;
    for(int i = 0; i < heightCount; ++i){
      float y = halfHeight - i * spacing;
      for(int j = 0; j < widthCount; ++j){
        float x = halfWidth + j * spacing;
        LevelSelectionBtn* lvlBtn = [[LevelSelectionBtn alloc] initWithFilename:filename StartlevelNumber:levelNum];
        lvlBtn.position = CGPointMake(x,y);
        lvlBtn.name = [NSString stringWithFormat:@"%d",levelNum];
        [self addChild:lvlBtn];
        [buttonSpritesArray addObject: lvlBtn];
        levelNum++;
      }
    }
  }
  return self;
}

-(void)touchBegan:(CCTouch *)touch withEvent:(CCTouchEvent *)event{
  CGPoint location = [touch locationInNode:self];
  CCLOG(@"location: %f, %f", location.x, location.y);
  CCLOG(@"touched");
  for (CCSprite *sprite in buttonSpritesArray)
  {
    if (CGRectContainsPoint(sprite.boundingBox, location)){
      CCLOG(@" you have pressed: %@", sprite.name);
      CCTransition *transition = [CCTransition transitionCrossFadeWithDuration:0.20];
      [[CCDirector sharedDirector]replaceScene:[[GameplayScene alloc]initWithLevel:sprite.name] withTransition:transition];
    }
  }
}
@end

The major changes are
highlighted here. The first is that we add and remove the touch functionality using the onEnter and onExit functions. The other major change is that we set the contentSize value of the node to winSize. Also, while specifying the upper-left coordinate of the button, we did not use winSize for the center but the contentSize of the node. Let's move to LevelSelectionScene now; we will add the following code:

#import "CCScene.h"

@interface LevelSelectionScene : CCScene{
  int layerCount;
  CCNode *layerNode;
}
+(CCScene*)scene;
@end

In the header file, we added two global variables: The layerCount variable keeps track of the total layers/nodes you add. The layerNode variable is an empty node added for convenience, so that we can add all the layer nodes to it and move it back and forth instead of moving each layer node individually. Next, in the LevelSelectionScene.m file, we will add the following:

#import "LevelSelectionScene.h"
#import "LevelSelectionBtn.h"
#import "GameplayScene.h"
#import "LevelSelectionLayer.h"

@implementation LevelSelectionScene

+(CCScene*)scene{
  return[[self alloc]init];
}

-(id)init{
  if(self = [super init]){
    CGSize winSize = [[CCDirector sharedDirector]viewSize];
    layerCount = 1;

    //Basic CCSprite - Background Image
    CCSprite* backgroundImage = [CCSprite spriteWithImageNamed:@"Bg.png"];
    backgroundImage.position = CGPointMake(winSize.width/2, winSize.height/2);
    [self addChild:backgroundImage];

    CCLabelTTF *mainmenuLabel = [CCLabelTTF labelWithString:@"LevelSelectionScene" fontName:@"AmericanTypewriter-Bold" fontSize:36.0f];
    mainmenuLabel.position = CGPointMake(winSize.width/2, winSize.height * 0.8);
    [self addChild:mainmenuLabel];

    //empty node
    layerNode = [[CCNode alloc]init];
    [self addChild:layerNode];

    int widthCount = 5;
    int heightCount = 5;
    float spacing = 35;

    for(int i=0; i<3; i++){
      LevelSelectionLayer* lsLayer = [[LevelSelectionLayer
alloc]initLayerWith:@"btnBG.png"         StartlevelNumber:widthCount * heightCount * i + 1         widthCount:widthCount         heightCount:heightCount         spacing:spacing];       lsLayer.position = ccp(winSize.width * i, 0);       [layerNode addChild:lsLayer];     }     CCButton *leftBtn = [CCButton buttonWithTitle:nil       spriteFrame:[CCSpriteFrame frameWithImageNamed:@"left.png"]       highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@"left.png"]       disabledSpriteFrame:nil];     [leftBtn setTarget:self selector:@selector(leftBtnPressed:)];     CCButton *rightBtn = [CCButton buttonWithTitle:nil       spriteFrame:[CCSpriteFrame frameWithImageNamed:@"right.png"]       highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@"right.png"]       disabledSpriteFrame:nil];     [rightBtn setTarget:self selector:@selector(rightBtnPressed:)];     CCLayoutBox * btnMenu;     btnMenu = [[CCLayoutBox alloc] init];     btnMenu.anchorPoint = ccp(0.5f, 0.5f);     btnMenu.position = CGPointMake(winSize.width * 0.5, winSize.height * 0.2);     btnMenu.direction = CCLayoutBoxDirectionHorizontal;     btnMenu.spacing = 300.0f;     [btnMenu addChild:leftBtn];     [btnMenu addChild:rightBtn];     [self addChild:btnMenu z:4];   }   return self; } -(void)rightBtnPressed:(id)sender{   CCLOG(@"right button pressed");   CGSize  winSize = [[CCDirector sharedDirector]viewSize];   if(layerCount >=0){     CCAction* moveBy = [CCActionMoveBy actionWithDuration:0.20       position:ccp(-winSize.width, 0)];     [layerNode runAction:moveBy];     layerCount--;   } } -(void)leftBtnPressed:(id)sender{   CCLOG(@"left button pressed");   CGSize  winSize = [[CCDirector sharedDirector]viewSize];   if(layerCount <=0){     CCAction* moveBy = [CCActionMoveBy actionWithDuration:0.20       position:ccp(winSize.width, 0)];     [layerNode runAction:moveBy];     layerCount++;   } } @end How it works... The important piece of the code is highlighted. 
Apart from adding the usual background and text, we will initialize layerCount to 1 and initialize the empty layerNode variable. Next, we will create a for loop, in which we will add the three level selection layers by passing in the starting level number of each selection layer, the btnBG.png image, the width count, the height count, and the spacing between each of the buttons. Also, note how the layers are positioned at a width's distance from each other. The first one is visible to the player. The consecutive layers are added off screen, similarly to how we placed the second image offscreen while creating the parallax effect. Then, each level selection layer is added to layerNode as a child. We will also create the left-hand side and right-hand side buttons so that we can move layerNode to the left and right once they are clicked on. We will create two functions called leftBtnPressed and rightBtnPressed, in which we will add functionality for when the left-hand side or right-hand side button gets pressed. First, let's look at the rightBtnPressed function. Once the button is pressed, we will log it out. Next, we will get the size of the window. We will then check whether the value of layerCount is greater than or equal to zero, which is true as we set the value to 1. We will create a moveBy action, in which we give the negative window width for the movement in the x direction and 0 for the movement in the y direction, as we want the movement to be only in the x direction and not y; lastly, we will pass in a duration of 0.20f. The action is then run on layerNode and the layerCount value is decremented. In the leftBtnPressed function, the opposite is done to move the layer in the opposite direction. Run the game to see the change in LevelSelectionScene. As you can't go left, pressing the left button won't do anything. However, if you press the right button, you will see that the layer scrolls to show the next set of buttons.
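The layerCount guards in rightBtnPressed and leftBtnPressed can be modeled as a tiny state machine to see why scrolling stops at the first and last page. Here is a Python sketch (the article's code is Objective-C; the 568-point window width is an assumed example):

```python
# The layerCount guards from rightBtnPressed/leftBtnPressed, modeled as a
# tiny state machine; 568 is an assumed window width, and offset_x tracks
# the x position that the moveBy actions accumulate on layerNode.
WIN_W = 568

class Pager:
    def __init__(self):
        self.layer_count = 1   # initialized to 1 in the scene's init
        self.offset_x = 0

    def right_pressed(self):
        if self.layer_count >= 0:      # guard from rightBtnPressed
            self.offset_x -= WIN_W     # scroll one screen to the left
            self.layer_count -= 1

    def left_pressed(self):
        if self.layer_count <= 0:      # guard from leftBtnPressed
            self.offset_x += WIN_W     # scroll one screen to the right
            self.layer_count += 1

p = Pager()
p.left_pressed()    # blocked: we are already on the first page
p.right_pressed()   # page 1 -> page 2
p.right_pressed()   # page 2 -> page 3
p.right_pressed()   # blocked: no fourth page
print(p.offset_x // WIN_W)  # -2 -- exactly two screens of scrolling
```

With three layers, layer_count cycles through 1, 0, and -1, so at most two right presses (and two compensating left presses) are ever honored.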
Summary In this article, we learned about adding level selection scenes and scrolling level selection scenes in Cocos2d. Resources for Article: Further resources on this subject: Getting started with Cocos2d-x [article] Dragging a CCNode in Cocos2D-Swift [article] Run Xcode Run [article]
Packt
13 Feb 2017
4 min read

Instance and Devices

In this article by Pawel Lapinski, the author of the book Vulkan Cookbook, we will learn how to destroy a logical device, destroy a Vulkan Instance, and then release the Vulkan Loader library. (For more resources related to this topic, see here.) Destroying a logical device When we are done and want to quit the application, we should clean up after ourselves. Even though all the resources should be destroyed automatically by the driver when the Vulkan Instance is destroyed, we should do this explicitly in the application to follow good programming practice. Resources should be released in the order reverse to the order of their creation. In this article, the logical device was the last created object, so it will be destroyed first. How to do it… Take the handle of the created logical device that was stored in a variable of type VkDevice named logical_device. Call vkDestroyDevice( logical_device, nullptr ), providing the logical_device variable as the first argument and nullptr as the second. For safety reasons, assign the VK_NULL_HANDLE value to the logical_device variable. How it works… The implementation of logical device destruction is very straightforward:

if( logical_device ) {
  vkDestroyDevice( logical_device, nullptr );
  logical_device = VK_NULL_HANDLE;
}

First, we check whether the logical device handle is valid; we shouldn't destroy objects that weren't created. Then, we destroy the device with a vkDestroyDevice() function call and assign the VK_NULL_HANDLE value to the variable in which the logical device handle was stored. We do this just in case--if there is some kind of mistake in our code, we won't destroy the same object twice. Remember that when we destroy a logical device, we can't use device-level functions acquired from it.
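The reverse-order rule generalizes to any set of resources. Here is a minimal Python sketch (illustrative only, not Vulkan code) of a LIFO cleanup stack that releases resources in reverse creation order:

```python
# A LIFO cleanup stack: each created resource pushes its teardown, and
# popping at shutdown releases everything in reverse creation order.
# Illustrative only -- resource names are stand-ins, not Vulkan calls.
cleanup_stack = []
log = []

def create(name):
    log.append("create " + name)
    cleanup_stack.append(lambda: log.append("destroy " + name))

create("instance")
create("logical_device")

while cleanup_stack:        # last created, first destroyed
    cleanup_stack.pop()()

print(log)
# ['create instance', 'create logical_device',
#  'destroy logical_device', 'destroy instance']
```

Stack-based cleanup guarantees that a dependent object (the logical device) is always gone before the object it was created from (the instance), which is exactly the ordering this recipe follows by hand.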
See also Creating a logical device Destroying a Vulkan Instance After all other resources are destroyed, we can destroy the Vulkan Instance. How to do it… Take the handle of the created Vulkan Instance object stored in a variable of type VkInstance named instance. Call vkDestroyInstance( instance, nullptr ), providing the instance variable as the first argument and nullptr as the second argument. For safety reasons, assign the VK_NULL_HANDLE value to the instance variable. How it works… Before we can close the application, we should make sure the created resources are released. The Vulkan Instance is destroyed with the following code:

if( instance ) {
  vkDestroyInstance( instance, nullptr );
  instance = VK_NULL_HANDLE;
}

See also Creating a Vulkan Instance Releasing a Vulkan Loader library Libraries that are loaded dynamically must be explicitly closed (released). To be able to use Vulkan in our application, we opened the Vulkan Loader (the vulkan-1.dll library on Windows or the libvulkan.so.1 library on Linux). So, before we can close the application, we should free it. How to do it… On the Windows operating systems family: Take the variable of type HMODULE named vulkan_library, in which the handle of the loaded Vulkan Loader was stored. Call FreeLibrary( vulkan_library ), providing the vulkan_library variable as the only argument. For safety reasons, assign the nullptr value to the vulkan_library variable. On the Linux operating systems family: Take the variable of type void* named vulkan_library, in which the handle of the loaded Vulkan Loader was stored. Call dlclose( vulkan_library ), providing the vulkan_library variable as the only argument. For safety reasons, assign the nullptr value to the vulkan_library variable. How it works… On the Windows operating systems family, dynamic libraries are opened using the LoadLibrary() function. Such libraries must be closed (released) by calling the FreeLibrary() function, to which a handle of a previously opened library must be provided.
On the Linux operating systems family, dynamic libraries are opened using the dlopen() function. Such libraries must be closed (released) by calling the dlclose() function, to which a handle of a previously opened library must be provided.

#if defined _WIN32
FreeLibrary( vulkan_library );
#elif defined __linux
dlclose( vulkan_library );
#endif
vulkan_library = nullptr;

See also Connecting with a Vulkan Loader library Summary In this article, we learned how to destroy a logical device, destroy a Vulkan Instance, and release the Vulkan Loader library. Resources for Article: Further resources on this subject: Introducing an Android platform [article] Get your Apps Ready for Android N [article] Drawing and Drawables in Android Canvas [article]

Packt
23 May 2016
13 min read

Bang Bang – Let's Make It Explode

In this article by Justin Plowman, author of the book 3D Game Design with Unreal Engine 4 and Blender, we started with a basic level that, when it comes right down to it, is simply two rooms connected by a hallway with a simple crate. From humble beginnings our game has grown, as have our skills. Our simple cargo ship leads the player to a larger space station level. This level includes scripted events to move the story along and a game asset that looks great and animates. However, we are not done. How do we end our journey? We blow things up, that's how! In this article we will cover the following topics: Using class blueprints to bring it all together Creating an explosion using sound effects Adding particle effects (For more resources related to this topic, see here.) Creating a class blueprint to tie it all together We begin with the first step to any type of digital destruction: creation. We have created a disturbing piece of ancient technology. The Artifact stands as a long-forgotten terror weapon of another age, somehow brought forth by an unknown power. But we know the truth. That unknown power is us, and we are about to import all that we need to implement the Artifact on the deck of our space station. Players beware! Take a look at the end result. To get started, we will need to import the Artifact body, the Tentacle, and all of the texture maps from Substance Painter. Let's start with exporting the main body of the Artifact. In Blender, open our file with the complete Artifact. The FBX file format will allow us to export both the completed 3D model and the animations we created, all in a single file. Select the Artifact only. Since it is now bound to the skeleton we created, the bones and the geometry should all be one piece. Now press Alt+S to reset the scale of our game asset. Doing this will make sure that we won't have any weird scaling problems when we import the Artifact into Unreal. Head to the File menu and select Export.
Choose FBX as our file format. On the first tab of the export menu, select the check box for Selected Objects. This will make sure that we get just the Artifact and not the Tentacle. On the Geometries tab, change the Smoothing option to Faces. Name your file and click Export! Alright, we now have the latest version of the Artifact exported out as an FBX. With all the different processes described in its pages, it makes a great reference! Time to bring up Unreal. Open the game engine and load our space station level. It's been a while since we've taken a look at it and there is no doubt in my mind that you've probably thought of improvements and new sections you would love to add. Don't forget them! Just set them aside for now. Once we get our game assets in there and make them explode, you will have plenty of time to add on. Time to import into Unreal! Before we begin importing our pieces, let's create a folder to hold our custom assets. Click on the Content folder in the Content Browser and then right click. At the top of the menu that appears, select New Folder and name it CustomAssets. It's very important not to use spaces or special characters (besides the underscore). Select our new folder and click Import. Select the Artifact FBX file. At the top of the Import menu, make sure Import as Skeletal and Import Mesh are selected. Now click the small arrow at the bottom of the section to open the advanced options. Lastly, turn on the check box to tell Unreal to use T0 As Reference Pose. A Reference Pose is the starting point for any animations associated with a skeletal mesh. Next, take a look at the Animation section of the menu. Turn on Import Animations to tell Unreal to bring in our open animation for the Artifact. Once all that is done, it's time to click Import! Unreal will create a Skeletal Mesh, an Animation, a Physics Asset, and a Skeleton asset for the Artifact. 
Together, these pieces make up a fully functioning skeletal mesh that can be used within our game. Take a moment and repeat the process for the Tentacle, again being careful to make sure to export only selected objects from Blender. Next, we need to import all of our texture maps from Substance Painter. Locate the export folder for Substance Painter. By default it is located in the Documents folder, under Substance Painter, and finally under Export. Head back into Unreal and then bring up the Export folder from your computer's task bar. Click and drag each of the texture maps we need into the Content Browser. Unreal will import them automatically. Time to set them all up as a usable material! Right click in the Content Browser and select Material from the Create Basic Asset section of the menu. Name the material Artifact_MAT. This will open a Material Editor window. Creating Materials and Shaders for video games is an art form all its own. Here I will talk about creating Materials in basic terms but I would encourage you to check out the Unreal documentation and open up some of the existing materials in the Starter Content folder and begin exploring this highly versatile tool. So we need to add our textures to our new Material. An easy way to add texture maps to any Material is to click and drag them from the Content Browser into the Material Editor. This will create a node called a Texture Sample which can plug into the different sockets on the main Material node. Now to plug in each map. Drag a wire from each of the white connections on the right side of each Texture Sample to its appropriate slot on the main Material node. The Metallic and Roughness texture sample will be plugged into two slots on the main node. Let's preview the result. Back in the Content Browser, select the Artifact. Then in the Preview Window of the Material Editor, select the small button the far right that reads Set the Preview Mesh based on the current Content Browser selection. 
The material has come out just a bit too shiny. The large amount of shine given off by the material is called the specular highlight, and it is controlled by the Specular connection on the main Material node. If we check the documentation, we can see that this part of the node accepts a value between 0 and 1. How might we supply one? The Material Editor has a Constant node that allows us to input a number and then plug it in wherever we may need it. This will work perfectly! Search for a Constant in the search box of the Palette, located on the right side of the Material Editor. Drag it into the area with your other nodes, wire it into the Specular connection, and head over to the Details panel. In the Value field, try different values between 0 and 1 and preview the result. I ended up using 0.1. Save your changes.

Time to try it out on our Artifact! Double-click on the Skeletal Mesh to open the Skeletal Mesh editor window. On the left-hand side, look for the LOD0 section of the menu. This section has an option to add a material (I have highlighted it in the image above). Head back to the Content Browser and select our Artifact_MAT material, then click the small arrow in the LOD0 box to apply the selection to the Artifact. How does it look? Too shiny? Not shiny enough? Feel free to adjust the Constant node in the material until you get the result you want.

When you are happy, repeat the process for the Tentacle. You will import it as a static mesh (since it doesn't have any animations) and create a material for it from the texture maps you created in Substance Painter.

Now we will use a Class Blueprint for final assembly. Class Blueprints are a form of standalone Blueprint that allows us to combine art assets with programming in an easy and, most importantly, reusable package. For example, the player is a class blueprint, as it combines the player character skeletal mesh with blueprint code that helps the player move around.
So how and when might we use class blueprints instead of just putting the code in the level blueprint? The level blueprint is great for anything that is specific to just that level, such as volcanoes on a lava-based level or spaceships in the background of our space station level. Class blueprints work great for building objects that are self-contained and repeatable, such as doors, enemies, or power-ups. These types of items are used frequently and have a place in several levels of a game.

Let's create a class blueprint for the Artifact. Click on the Blueprints button and select New Empty Blueprint Class. This will open the Pick Parent Class menu. Since we are creating a prop and not something that the player needs to control directly, select the Actor parent class. The next screen will ask us to name our new class and pick a location to save it. I chose to save it in my CustomAssets folder and named it Artifact_Blueprint.

Welcome to the Class Blueprint Editor. Similar to other editor windows within Unreal, the Class Blueprint Editor has both a Details panel and a Palette. However, there is a panel that is new to us: the Components panel. It contains a list of the art that makes up a class blueprint. These components are the various pieces that make up the whole object. For our Artifact, this includes the main piece itself, any number of tentacles, and a collision box. Other components that can be added include particle emitters, audio, and even lights.

Let's add the Artifact. In the Components section, click the Add Component button and select Skeletal Mesh from the drop-down list. You can find it in the Common section. This adds a blank Skeletal Mesh to the Viewport and the Components list. With it selected, check out the Details panel. In the Mesh section is an area to assign the skeletal mesh you wish it to be. Back in the Content Browser, select the Artifact.
Lastly, back in the Details panel of the Blueprint Editor, click the small arrow next to the Skeletal Mesh option to assign the Artifact. It should now appear in the viewport.

Back to the Components list. Let's add a Collision Box. Click Add Component and select Collision Box from the Collision section of the menu. Click it and, in the Details panel, increase the Box Extents to a size that would allow the player to enter within its bounds. I used 180 for x, y, and z. Repeat the last few steps and add the Tentacles to the Artifact using the Add Component menu; this time we will use the Static Mesh option. The design calls for 3, but add more if you like.

Time to give this class blueprint a bit of programming. We want the player to be able to walk up to the Artifact and press the E key to open it. Previously we used a Gate to control the flow of information through the blueprint. However, Gates don't function the same way within class blueprints, so we require a slightly different approach.

The first step in the process is to use the Enable Input and Disable Input nodes to allow the player to use input keys while they are within our box collision. Using the search box located within our Palette, grab an Enable Input and a Disable Input. Now we need to add our trigger events. Click on the Box variable within the Variables section of the My Blueprint panel. This changes the Details panel to display a list of all the Events that can be created for this component. Click the + button next to the OnComponentBeginOverlap and the OnComponentEndOverlap events. Connect the OnComponentBeginOverlap event to the Enable Input node and the OnComponentEndOverlap event to the Disable Input node.

Next, create an event for the player pressing the E key by searching for it and dragging it in from the Palette. To that, we will add a Do Once node. This node works similarly to a Gate in that it restricts the flow of information through the network, but it does allow the action to happen once before closing.
This will make it so the player can press E to open the Artifact, but the animation will only play once. Without it, a player could press E as many times as they want, playing the animation over and over again. It's fun for a while, since it makes the Artifact look like a mouth trying to eat you, but it's not our original intention (I might have spent some time pressing it repeatedly and laughing hysterically). Do Once can be easily found in the Palette.

Lastly, we will need a Play Animation node. There are two versions, so be sure to grab this node from the Skeletal Mesh section of your search so that its target is Skeletal Mesh Component. Connect the input E event to the Do Once node and the Do Once node to the Play Animation node.

One last thing to complete this sequence: we need to set the target and the animation to play on the Play Animation node. The target will be our Skeletal Mesh component, so click on the Artifact component in the Components list, drag it into the Blueprint window, and plug it into the Target pin on our Play Animation node. Then click the drop-down under the New Anim to Play option on the Play Animation node and select our animation of the Artifact opening.

We're done! Let's save all of our files and test this out. Drag the Artifact into our Space Station and position it in the Importer/Export Broker's shop. Build the level and then drop in and test. Did it open? Does it need more tentacles? Debug and refine it until it is exactly what you want.

Summary

This article provides an overview of using class blueprints, creating an explosion, and adding particle effects.

Resources for Article:

Further resources on this subject:
Dynamic Graphics [article]
Lighting basics [article]
VR Build and Run [article]