
How-To Tutorials - Game Development


Overview of Physics Bodies and Physics Materials

Packt
30 Sep 2015
14 min read
In this article by Katax Emperor and Devin Sherry, authors of the book Unreal Engine Physics Essentials, we will take a deeper look at Physics Bodies in Unreal Engine 4, along with some of the detailed properties available to these assets. In addition, we will provide an overview of Physical Materials. For the purposes of this article, we will continue to work with Unreal Engine 4 and the Unreal_PhyProject. Let's begin by discussing Physics Bodies in Unreal Engine 4.

Physics Bodies – an overview

There are multiple ways to create Physics Bodies, most of which we have covered up to this point, so we will not go into much detail about their creation. We can have Static Meshes react as Physics Bodies by checking the Simulate Physics property of the asset when it is placed in our level. We can also create Physics Bodies by creating Physics Assets and Skeletal Meshes, which automatically have the properties of physics by default. Lastly, Shape Components in blueprints, such as spheres, boxes, and capsules, will automatically gain the properties of a Physics Body if they are set for any sort of collision, overlap, or other physics simulation events. As always, remember to ensure that our asset has collision applied to it before attempting to simulate physics or establish Physics Bodies; otherwise, the simulation will not work.

When we work with the Physics properties of Static Meshes, or of any other asset we attempt to simulate physics with, we will see a handful of parameters in the Details panel that we can change in order to produce the desired effect. Let's break down these properties:

- Simulate Physics: This parameter allows you to enable or disable physics simulation for the selected asset. When this option is unchecked, the asset remains static; once enabled, we can edit the Physics Body properties for additional customization.
- Auto Weld: When this property is set to True and the asset is attached to a parent object, such as in a blueprint, the two bodies are merged into a single rigid body. Physics settings, such as collision profiles and body settings, are then determined by the Root Component.
- Start Awake: This parameter determines whether the selected asset will simulate physics at the start, once it is spawned, or at a later time. We can change this parameter from the level and actor blueprints.
- Override Mass: When this property is checked and set to True, we can freely change the Mass of our asset in kilograms (kg). Otherwise, the Mass in Kg parameter is set to a default value based on a computation between the applied physical material and the mass scale value.
- Mass in Kg: This parameter determines the mass of the selected asset in kilograms. This is important when you work with physics objects of different sizes and want them to react to forces appropriately.
- Locked Axis: This parameter allows you to lock the physical movement of our object along a specified axis. We can lock the default axes as specified in Project Settings, lock physical movement along the individual X, Y, and Z axes, lock none of the axes in either translation or rotation, or customize each axis individually with the Custom option.
- Enable Gravity: This parameter determines whether the force of gravity is applied to the object. The force of gravity can be altered in the World Settings properties of the level or in the Physics section of the Engine properties in Project Settings.
- Use Async Scene: This property enables the use of the asynchronous physics scene for the specified object. By default, we cannot edit this property; to do so, we must navigate to Project Settings, then to the Physics section, and under the advanced Simulation tab we will find the Enable Async Scene parameter. In an asynchronous scene, objects such as Destructible actors are simulated, whereas a synchronous scene is where classic physics tasks, such as a falling crate, take place.
- Override Walkable Slope on Instance: This parameter determines whether we can customize an object's walkable slope. In general, we would use this parameter for our player character; it enables customizing how steep a slope an object can walk on, controlled specifically by the Walkable Slope Angle and Walkable Slope Behavior parameters.
- Override Max Depenetration Velocity: This parameter allows you to customize the Max Depenetration Velocity of the selected physics body.
- Center of Mass Offset: This property allows you to specify a vector offset for the selected object's center of mass from the calculated location. Being able to know, and even modify, the center of mass of our objects can be very useful when you work with sensitive physics simulations (such as flight).
- Sleep Family: This parameter allows you to control the set of values that the physics object uses when entering sleep mode, that is, when the object is moving and slowly coming to a stop. The SF Sensitive option contains values with a lower sleep threshold; this is best used for objects that can move very slowly or for improved physics simulations (such as billiards). The SF Normal option contains values with a higher sleep threshold, and objects will come to a stop more abruptly once in motion, as compared to SF Sensitive.
- Mass Scale: This parameter scales the mass of our object by multiplying it by a scalar value. The lower the number, the lower the mass of the object; the larger the number, the larger the mass. This property can be used in conjunction with the Mass in Kg parameter to further customize the mass of the object.
- Angular Damping: This property is a modifier of the drag force applied to the object in order to reduce angular movement, that is, the rotation of the object. We will go into more detail regarding Angular Damping later.
- Linear Damping: This property is used to simulate the different types of friction that can assist in the game world. This modifier adds a drag force to reduce linear movement, that is, the translation of the object. We will go into more detail regarding Linear Damping later.
- Max Angular Velocity: This parameter limits the maximum angular velocity of the selected object in order to prevent it from rotating at high rates. By increasing this value, the object will spin at very high speeds once it is impacted by an outside force strong enough to reach the Max Angular Velocity value. By decreasing this value, the object will not rotate as fast and will come to a halt much faster, depending on the angular damping applied.
- Position Solver Iteration Count: This parameter sets the physics body's solver iteration count for its position; the solver iteration count is responsible for periodically checking the physics body's position. Increasing this value is more CPU intensive, but better stabilized.
- Velocity Solver Iteration Count: This parameter sets the physics body's solver iteration count for its velocity. As with the position solver, increasing this value is more CPU intensive, but better stabilized.

Now that we have discussed all the different parameters available to Physics Bodies in Unreal Engine 4, feel free to play around with these values in order to obtain a stronger grasp of what each property controls and how it affects the physical properties of the object. As there are a handful of properties, we will not go into detailed examples of each; the best way to learn more is to experiment with these values. However, we will work through various examples of physics bodies in order to explore Physics Damping and Friction.
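Many of these Details panel settings also have runtime equivalents on UPrimitiveComponent. As a hedged illustration (not from the book), a component could be configured from C++ roughly as follows; the function name is hypothetical, and the calls mirror the panel options discussed above:

```cpp
// A minimal sketch (assumes UE4 4.x). "Mesh" is any static mesh
// component we want to simulate physics with.
#include "Components/StaticMeshComponent.h"

void ConfigurePhysicsBody(UStaticMeshComponent* Mesh)
{
    Mesh->SetSimulatePhysics(true);              // Simulate Physics
    Mesh->SetEnableGravity(true);                // Enable Gravity
    Mesh->SetMassOverrideInKg(NAME_None, 50.f);  // Override Mass + Mass in Kg
    Mesh->SetLinearDamping(0.1f);                // Linear Damping
    Mesh->SetAngularDamping(0.5f);               // Angular Damping
    Mesh->WakeRigidBody();                       // wake the body, compare Start Awake
}
```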
Physical Materials – an overview

Physical Materials are assets used to define the response of a physics body when it dynamically interacts with the game world. When you first create a Physical Material, you are presented with a set of default values that are identical to those of the default Physical Material applied to all physics objects.

To create a Physical Material, let's navigate to Content Browser and select the Content folder so that it is highlighted. From here, we can right-click on the Content folder and select the New Folder option to create a new folder for our Physical Materials; name this new folder PhysicalMaterials. Now, in the PhysicalMaterials folder, right-click on an empty area of Content Browser, navigate to the Physics section, and select Physical Material. Make sure to name this new asset PM_Test. Double-click on the new Physical Material asset to open Generic Asset Editor, and we should see a set of values that we can edit in order to make our physics objects behave in certain ways. Let's take a few minutes to break down each of these properties:

- Friction: This parameter controls how easily objects can slide on this surface. The lower the friction value, the more slippery the surface; the higher the value, the less slippery. For example, ice would have a Friction value of 0.05, whereas a Friction value of 1 would cause the object not to slip much once moved.
- Friction Combine Mode: This parameter controls how friction is computed when multiple materials interact. This property is important when it comes to interactions between multiple physical materials and how we want these calculations to be made. Our choices are Average, Minimum, Maximum, and Multiply.
- Override Friction Combine Mode: This parameter allows you to set the Friction Combine Mode parameter here, instead of using the one found in the Project Settings | Engine | Physics section.
- Restitution: This parameter controls how bouncy the surface is. The higher the value, the bouncier the surface becomes.
- Density: This parameter is used in conjunction with the shape of the object to calculate its mass properties. The higher the number, the heavier the object becomes (in grams per cubic centimeter).
- Raise Mass to Power: This parameter adjusts the way in which mass increases as the object gets larger. It is applied to the mass calculated for a solid object; in actuality, larger objects tend not to be solid and become more like shells (such as a vehicle). The values are clamped to 1 or less.
- Destructible Damage Threshold Scale: This parameter scales the damage threshold for the destructible objects that this physical material is applied to.
- Surface Type: This parameter describes what type of real-world surface we are trying to imitate for our project. We can edit these values by navigating to the Project Settings | Physics | Physical Surface section.
- Tire Friction Scale: This parameter is used as the overall tire friction scalar for every type of tire and is multiplied by the parent values of the tire.
- Tire Friction Scales: This parameter is almost identical to the Tire Friction Scale parameter, but it looks for a Tire Type data asset to associate with. Tire Types can be created through the use of Data Assets by right-clicking in Content Browser | Miscellaneous | Data Asset | Tire Type.

Now that we have briefly discussed how to create Physical Materials and what their properties are, let's take a look at how to apply Physical Materials to our physics bodies. In FirstPersonExampleMap, we can select any of the physics body cubes throughout the level, and in the Details panel under Collision, we will find the Phys Material Override parameter. It is here that we can apply our Physical Material to the cube and view how it reacts to our game world. For the sake of an example, let's return to the Physical Material, PM_Test, that we created earlier, change the Friction property from 0.7 to 0.2, and save it. With this change in place, let's select a physics body cube in FirstPersonExampleMap and apply PM_Test to the Phys Material Override parameter of the object. Now, if we play the game, we will see that the cube we applied PM_Test to slides more once shot by the player than it did with a Friction value of 0.7. We can also apply this Physical Material to the floor mesh in FirstPersonExampleMap to see how it affects the other physics bodies in our game world. From here, feel free to play around with the Physical Material parameters to see how they affect the physics bodies in our game world.
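The same override can also be applied from code. The following is a hedged sketch (not from the book) of assigning a Physical Material to a component at runtime; the function name is hypothetical, and PhysMat stands in for a loaded asset such as PM_Test:

```cpp
// A minimal sketch (assumes UE4 4.x): the C++ equivalent of setting
// the Phys Material Override parameter in the Details panel.
#include "Components/StaticMeshComponent.h"
#include "PhysicalMaterials/PhysicalMaterial.h"

void ApplyMaterialOverride(UStaticMeshComponent* Mesh, UPhysicalMaterial* PhysMat)
{
    Mesh->SetPhysMaterialOverride(PhysMat);
}
```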
Lastly, let's briefly discuss how to apply Physical Materials to normal Materials, Material Instances, and Skeletal Meshes. To apply a Physical Material to a normal material, we first need to either create or open an already created material in Content Browser. To create a material, just right-click on an empty area of Content Browser and select Material from the drop-down menu. Double-click on the Material to open Material Editor, and we will see the parameter for Phys Material under the Physical Material section of the Details panel, in the bottom-left of Material Editor.

To apply a Physical Material to a Material Instance, we first need to create the Material Instance by navigating to Content Browser and right-clicking on an empty area to bring up the context drop-down menu. Under the Materials & Textures section, we will find an option for Material Instance. Double-click on this option to open Material Instance Editor. Under the Details panel in the top-left corner of this editor, we will find an option to apply Phys Material under the General section.

Lastly, to apply a Physical Material to a Skeletal Mesh, we need to either create or open an already created Physics Asset that contains a Skeletal Mesh. In the First Person Shooter Project template, we can find TutorialTPP_PhysicsAsset under the Engine Content folder. If the Engine Content folder is not visible by default in Content Browser, we simply need to navigate to View Options in the bottom-right corner of Content Browser and check the Show Engine Content parameter. Under the Engine Content folder, we can navigate to the Tutorial folder and then to the TutorialAssets folder to find the TutorialTPP_PhysicsAsset asset. Double-click on this asset to open Physical Asset Tool. Now, we can click on any of the body parts found on the Skeletal Mesh to highlight it. Once it is highlighted, we can view the option for Simple Collision Physical Material in the Details panel under the Physics section. Here, we can apply any of our Physical Materials to this body part.

Summary

In this article, we discussed what Physics Bodies are and how they function in Unreal Engine 4. Moreover, we looked at the properties involved in Physics Bodies and how these properties can affect the behavior of these bodies in the game. Additionally, we briefly discussed Physical Materials: how to create them and how their properties affect a body's behavior in the game. We then reviewed how to apply Physical Materials to static meshes, materials, material instances, and skeletal meshes. Now that we have a stronger understanding of how Physics Bodies work in the context of angular and linear velocities, momentum, and the application of damping, we can move on and explore in detail how Physical Materials work and how they are implemented.

Controls and player movement

Packt
09 Feb 2015
17 min read
In this article by Miguel DeQuadros, author of the book GameSalad Essentials, you will learn how to create your playable character. We are going to cover a couple of different control schemes, depending on the platform you want to release your awesome game on. We will deal with some keyboard controls, mouse controls, and even some touch controls for mobile devices.

Let's go into our project and open up level 1. First we are going to add some gravity to the scene. In the scene editor, click on the Scene button in the Inspector window. In the Attributes window, expand the Gravity drop-down and change the Y value to 900.

Now, on to our actor! We have already created our player actor in the Library, so let's drag him into our level. Once he's placed exactly where you want him to be, double-click on him to open up the actor editor. Instead of clicking the lock button, click the Edit Prototype... button in the upper left-hand corner of the screen, just above the actor's image. By doing this, we edit the main actor within the Library; so instead of having to program each actor in every scene, we just drag in the main actor from the Library and boom! It's programmed.

For our player actor, we are going to keep things super organized because, honestly, he's going to have a lot of behaviors, and even more so if you decide to continue after we are done. Click the Create Group button, right beside the Create Rule button. This creates a group into which we will put all our keyboard-control-based behaviors, so let's name it Keyboard Controls by double-clicking the name of the group.

Now let's start creating the rules for each button that will control our character. Each rule will be Actor receives event | key | (whichever key you want) | is down. To select the key, simply press the Keyboard button inside the rule, and it will open up a virtual keyboard; all you have to do is click the key you want to use. For our first rule, let's do Actor receives event | key | left | is down. Because our sprite has been created with Kevin facing the right side, we have to flip him whenever the player presses the left arrow key. So, we drag in a Change Attribute behavior and change it to Change Attribute: self.Graphics.Flip Horizontally to true. Then drag in an Animate behavior for the walking animation, and drag in the corresponding walking images for our character.

Now let's get Kevin moving! For each Move rule you create, drag in an Accelerate behavior, change it to the corresponding direction you want him to move in, and change the Acceleration value to 500. Again, you can play around with these movement values to whatever suits your style. When you create the Move Right rule, don't forget to change the Flip Horizontally attribute to false so that he flips back to normal when the right key is pressed. Test out the level to see if he's moving.

Oops! Did he fall through the platforms? We need to fix that! We are going to do this the super easy way. Click on the home button and then the Actors tab. In the bottom-left corner of the screen you will find a + button; click on it. This adds a new group of actors, or tags; name it Platforms. Now all you have to do is drag each platform actor you want the character to collide against into this section. Now, when we create our Collision behavior, we only have to create one for the whole set of platforms, instead of one per actor. Time saved, money made! Plus, it keeps things more organized.

Let's go back to our Kevin actor and, outside of the controls group, drag in a Collide behavior and change it to Bounce when colliding with | actor with tag: | Platforms. Now, whenever he collides with an actor that you dragged into the Platforms section, he will stop. Test it out to see how it works. Bounce! Bounce! Bounce! We need to adjust a few things in the Physics section of the Kevin actor and also of the Platform actors. Open up one of the Platform actors (it doesn't matter which one, because we are going to change them all); in the Attributes window, open up the Physics drop-down and change the Bounciness value to 0. Or, if you want it to be a rubbery surface, you can leave it at 1, or even set it to 100, whatever you like. If you want a slippery platform, change the Friction to a value lower than one, such as 0.1. Do the same with our Kevin actor: change his Bounciness to 0, his Density to 100, and his Friction to 50, and check (if not already checked) the Fixed Rotation and Movable options. Now test it out. Everything should work perfectly! If you set the Platform's Friction value much higher, our actor won't move very well, which would be good for super sticky platforms.

We are now going to work on jumping. You want him to jump only when he is colliding with the ground; otherwise, the player could keep pressing the jump button and he would just continue to fly, which would be cool, but it wouldn't make the levels very challenging now, would it? Let's go back to editing our Kevin actor. We need to create a new attribute within the actor itself, so in the Attributes window, click on the + button, select Boolean, name it OnGround, and leave it unchecked.

Create a new rule, and change it to Actor receives event | overlaps or collides | with actor with tag: | Platforms. Then drag in another rule, change it to Attribute | self.Motion Linear Velocity.Y | < 1, then click on the + button within the rule to create another condition, and change it to Attribute | self.Motion Linear Velocity.Y | > -1. Whoa there, cowboy! What does this mean? Let's break it down. This detects the actor's up and down motion, so only when he is moving slower than 1 and faster than -1 do we change the attribute. Why do this? It's a bit of a fail-safe, so that when the actor jumps and the player keeps holding the jump key, he doesn't keep jumping. If we didn't add these conditions, the player could jump at times when he shouldn't. Within that rule, drag in a Change Attribute behavior, and change it to Change Attribute: | self.OnGround to 1 (1 being true, 0 being false; or you can type in true or false).

Now, inside the Keyboard Controls group, create a new rule and name it Jumping. Change the rule to Actor receives event | key | space | is down, and we need to check another condition, so click on the + button inside the rule and change the new condition to Attribute | self.OnGround is true. This checks not only that the space bar is down, but also that he is colliding with the ground. Drag a Change Attribute behavior into this Jumping rule, and change it to self.OnGround to 0, which, you guessed it, turns off the OnGround condition so the player can't keep jumping! Now drag in a Timer behavior, change it to For | 0.1 seconds, then drag an Accelerate behavior inside the Timer, and change the Direction to 90 and the Acceleration to 5000.
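GameSalad expresses all of this visually, but the underlying logic is easier to see written out. Here is a minimal pseudocode sketch of the ground check and jump (Python-style and purely illustrative; GameSalad itself involves no code, and all names here are made up):

```python
# Illustrative pseudocode mirroring the rules built above.
JUMP_ACCELERATION = 5000
VELOCITY_THRESHOLD = 1  # fail-safe band around zero vertical speed

def update(player):
    # Rule: overlaps or collides with tag "Platforms",
    # and vertical speed is between -1 and 1.
    if player.touching("Platforms") and abs(player.velocity_y) < VELOCITY_THRESHOLD:
        player.on_ground = True

    # Jumping rule: jump key is down AND the actor is on the ground.
    if player.key_down("space") and player.on_ground:
        player.on_ground = False  # prevents repeat jumps in mid-air
        player.accelerate(direction=90, amount=JUMP_ACCELERATION, duration=0.1)
```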
Again, play around with these values: if you want him to jump a little higher or lower, just make the timing slower or faster. If you want to slow him down a bit, such as his walking and falling speed (which is highly recommended), go to the Attributes section in the Actor window, and under Motion change the Max Speed (I ended up changing it to 500), then click on Apply Max Speed. If you set too low a value, he will pause before he jumps, because the combined speed of walking and jumping exceeds the max speed, so GameSalad throttles him down, and that doesn't look right. For the finishing touch, let's drag in a Control Camera behavior so the camera moves with Kevin (don't forget to change the camera's bounds in the scene!). Now Kevin should be shooting around the level like a maniac! If you want to use mobile controls, let's go through some details. If not, skip ahead to the next heading.

Mobile controls

Mobile controls in GameSalad are relatively simple to implement. They involve creating a non-movable layer within the scene and creating actors to act as buttons on screen. Let's see what we're talking about. Go into your scene. Under the Inspector window, click on the Scene button, then the Layers tab. Click on the + button to add a new layer, rename it to Buttons, and uncheck scrollable. If you have a hard time renaming the new layer by double-clicking on it, simply click on the layer and press Enter or the return key; you will now be able to rename it. Whether or not this is a bug, we will find out in future releases.

In each button, you'll want to uncheck the Movable option in the Physics section so the button doesn't fall off the screen when you click on play. Drag the buttons into your scene, laid out the way you want them. We are now going to create three new game attributes, each corresponding to a button being pressed. Go to our scene, click on the Game button in the Inspector window, and under the Attributes tab add three new Boolean attributes: LeftButton, RightButton, and JumpButton.

Now let's start editing each button in the Library, not in the scene. We'll start with the left arrow button. Create a new rule, name it Button Pressed, and change it to Actor receives event | touch | is pressed. Inside that rule, drag in a Change Attribute behavior and change it to game.LeftButton | to true; then expand the Otherwise drop-down in the rule, copy and paste the previous Change Attribute behavior into the Otherwise section, and change it from true to false. This way, any time the player isn't touching that button, the LeftButton attribute is false. Easy! Do the same for each of the other buttons, changing each attribute accordingly. If you want, you can even change the color or image of the actor when it's pressed so you can actually see which button you are pressing; simply adding three Change Attribute behaviors to the rule and changing the colors to 1 when touched and 0 when not will highlight the button on touch. It's not required, but hey, it looks a lot better!

Let's go back into our Kevin actor, copy and paste the Keyboard Controls group, and rename the copy to Touch Controls. Now we have to change each rule from Actor receives event | key to Attribute | game.LeftButton | is true (or whichever button is being pressed). Test it out to see if everything works. If it does, awesome! If not, go back and make sure everything was entered correctly. Don't forget, when it comes to changing attributes, you can't simply type in game.LeftButton; you have to click on the … button beside the Attribute, then click on Game, then click LeftButton.

Now that Kevin is alive, let's make him armed and dangerous. Go back to the Scene button within the scene editor, and under the Layers tab click on the Background layer to start creating actors in that layer. If you forget to do this, GameSalad will automatically start creating actors in the last selected layer, which can be a fast way of creating objects, but annoying if you forget.

Attacking

Pretty much any game out there nowadays has some kind of shooting or attacking in it. Whether you're playing NHL, NFL, or Portal 2, there is some sort of attack or shot that you can make, and Kevin will locate a special weapon in the first level.

The Inter Dimensional Gravity Disruptor

I created both the weapon and the projectile that will be launched from the Inter Dimensional Gravity Disruptor. For the gun actor, don't forget to uncheck the Movable option in the Physics drop-down to prevent the gun from moving around. Now let's place the gun somewhere near the end of the level. For now, let's create a new game-wide attribute: within the scene, click on the Game button in the Inspector window and open the Attributes tab. Now, click on the + button to add a new attribute. It's going to be a Boolean, and we will name it GunCollected. When Kevin collides with the collectable gun at the end of the level, this Boolean switches to true to show it's been collected; then we will do some trickery that will allow us to use multiple weapons. We are going to add two more new game-wide attributes, both Integers; name one KevX and the other KevY. I'll explain these in a minute.

Open up the Kevin actor in the Inspector window, create a new rule, and change it to Actor receives event | overlaps or collides | with | actor of type | Inter-Dimensional-Gravity-Disruptor, or whatever you want to name it; then drag a Change Attribute behavior into the rule and change it to game.GunCollected | to true. Finally, outside of this rule, drag in two Constrain Attribute behaviors and change them to the following:

game.KevX | to | self.position.x
game.KevY | to | self.position.y

The Constrain Attribute behavior does exactly what it sounds like: it constantly changes the attribute, constraining it, whereas Change Attribute does it once. That's all we need to do within our Kevin actor. Now let's go back to our gun actor inside the Inspector and create a new rule; change it to Attribute | game.GunCollected | is true. Inside that rule, drag in two Constrain Attribute behaviors and change them to the following:

self.position.x | to | game.KevX
self.position.y | to | game.KevY + 2

Test out your game now and see if the gun follows Kevin when he collects it. If we were to use a Change Attribute instead of a Constrain Attribute, the gun would quickly pop to his position once, and that's it. You could work around this by putting the Change Attribute behaviors inside a Timer behavior that changes the attributes every 0.0001 seconds, but when you test it, the gun will follow quite choppily, and that doesn't look good.

We now want the gun to flip at the same time Kevin does, so let's create two rules the same way we did with Kevin: when the left arrow key is pressed, change the flipped attribute to true, and when the right arrow key is pressed, change the flipped attribute to false. This should happen only when the GunCollected attribute is true; otherwise, the gun will be flipping before you even collect it. So create these rules within the GunCollected is true rule. Now the gun will flip with Kevin! Keep in mind that I drew the gun image on top of the Kevin sprite and saved it positioned that way; this way the gun lines up automatically and you don't have to finagle its position in the behaviors.

Now, for the pew pew. Open up our GravRound actor within the Inspector window, add in a Move behavior, and make it pretty fast; I used 500. Change the Density, Friction, and Bounciness all to zero. Open up our Gun actor. We are going to create a new rule for Shoot Left; it will have three conditions, as follows:

Attribute | game.GunCollected | is true
Actor receives event | key | (whatever shoot button you want) | is down
Attribute | self.Graphics.Flip Horizontally | is true

Now drag a Spawn Actor behavior into this rule, select the GravRound actor, and set Direction: 180, Position: -25. Copy and paste this rule and change it so that the flip condition is false, the Direction of the Spawn Actor is 0.0, and the Position is 25. Test it out to make sure he's shooting in the right directions. When developing for a while, it's easy to get tired and miss things. Don't forget to create a new button for shooting if you are creating a mobile version of the game; shooting with touch controls is done exactly the same way as walking and jumping with touch buttons.

Melee weapons

If you want to create a game with melee weapons, I'll quickly take you through the process. Melee weapons can be made in much the same way as shooting a bullet. You can have the blade or object as a separate actor positioned relative to the player, and when the player presses the attack button, have the player and the melee weapon animate at the same time. This way, when the melee weapon collides with the enemy, it will be more accurate. If you want to have the player draw the weapon in hand, simply animate the actor and detect if there is a collision. This will be a little less accurate, as an enemy could run into the player while he is animating and get hurt. What you can do instead is have the player spawn an invisible actor to detect the melee weapon collisions more accurately.

I'll quickly go through this with you. When the player presses the attack button and the actor isn't flipped (so he'll be facing the right end of the screen), animate accordingly, and then spawn the detector actor in front of the player. Then have the detector actor destroy itself automatically when the player has finished animating. Also, throw in a Move behavior, especially when the attack includes a downward swipe; for this example, you'll want the detector to follow the edge of the blade. This way, all you have to do is detect collisions between the detector actor and the enemies. It is far more accurate to do things this way; in fact, you can even handle collisions with walls. Let's say the actor punches a wall and the detector collides with the block: spawn some particles for debris and change the image of the block into a cracked image. There are a lot of cool things you can do with melee weapons and with ranged weapons. You can even give wall pieces a sort of health bar, which is really just a set number of times a bullet or sword has to hit them before they are destroyed.

Kevin is now bouncing around our level, running like a silly little boy, and now he can shoot things up.

Summary

We had an unmovable, boring character (albeit a good-looking one). We discussed how to get him moving using keyboard and mobile touch controls. We then discussed how to get our character to collect a weapon, have that weapon follow him around, and then start shooting it. What are we going to talk about next, you ask? Well, we are going to give Kevin health, lives, and power-ups. We'll discuss an inventory system, scoring, and some more gameplay mechanics like pausing and such. Relax for a little bit; take a nice break. If the weather is as nice out there as it is at the time of my writing, go enjoy the lovely sun. I can't stress enough the importance of taking a break when it comes to game development. There have been many times when I've been programming for hours on end, and I end up making a mistake, and because I'm so tired, I can't find the problem. As soon as I take a rest, the problem is easy to fix and I'm more alert.

Running a simple game using Pygame

Packt
29 Mar 2013
5 min read
How to do it...

Imports: First we will import the required Pygame modules. If Pygame is installed properly, we should get no errors:

```python
import pygame, sys
from pygame.locals import *
```

Initialization: We will initialize Pygame by creating a display of 400 by 300 pixels and setting the window title to Hello World!:

```python
pygame.init()
screen = pygame.display.set_mode((400, 300))
pygame.display.set_caption('Hello World!')
```

The main game loop: Games usually have a game loop, which runs forever until, for instance, a quit event occurs. In this example, we will only set a label with the text Hello World at coordinates (100, 100). The text has a font size of 19, a red color, and falls back to the default font:

```python
while True:
    sys_font = pygame.font.SysFont("None", 19)
    rendered = sys_font.render('Hello World', 0, (255, 100, 100))
    screen.blit(rendered, (100, 100))

    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            sys.exit()

    pygame.display.update()
```

We get a window showing the Hello World text as the end result. The following is the complete code for the Hello World example:

```python
import pygame, sys
from pygame.locals import *

pygame.init()
screen = pygame.display.set_mode((400, 300))
pygame.display.set_caption('Hello World!')

while True:
    sysFont = pygame.font.SysFont("None", 19)
    rendered = sysFont.render('Hello World', 0, (255, 100, 100))
    screen.blit(rendered, (100, 100))

    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            sys.exit()

    pygame.display.update()
```

How it works... It might not seem like much, but we learned a lot in this recipe. The functions we used are summarized in the following table:

| Function | Description |
| --- | --- |
| pygame.init() | Performs the initialization and needs to be called before any other Pygame functions are called. |
| pygame.display.set_mode((400, 300)) | Creates a so-called Surface object to draw on. We give this function a tuple representing the width and height of the surface. |
| pygame.display.set_caption('Hello World!') | Sets the window title to a specified string value. |
| pygame.font.SysFont("None", 19) | Creates a system font from a comma-separated list of fonts (in this case none) and a font size parameter. |
| sysFont.render('Hello World', 0, (255, 100, 100)) | Draws text on a surface. The second parameter indicates whether anti-aliasing is used. The last parameter is a tuple representing the RGB values of a color. |
| screen.blit(rendered, (100, 100)) | Draws one surface onto another at the given position. |
| pygame.event.get() | Gets a list of Event objects. Events represent some special occurrence in the system, such as a user quitting the game. |
| pygame.quit() | Cleans up resources used by Pygame. Call this function before exiting the game. |
| pygame.display.update() | Refreshes the surface. |
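One small improvement worth noting: the loop above re-creates the font object on every frame. A hedged sketch of a slightly cleaner version (same standard Pygame API, with the font hoisted out of the loop and a frame cap added) could look like this:

```python
import pygame, sys
from pygame.locals import *

pygame.init()
screen = pygame.display.set_mode((400, 300))
pygame.display.set_caption('Hello World!')

# Create the font and the rendered text once, not once per frame.
sys_font = pygame.font.SysFont("None", 19)
rendered = sys_font.render('Hello World', 0, (255, 100, 100))
clock = pygame.time.Clock()  # limits how fast the loop spins

while True:
    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            sys.exit()

    screen.blit(rendered, (100, 100))
    pygame.display.update()
    clock.tick(60)  # cap the loop at roughly 60 frames per second
```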
Drawing with Pygame

Before we start creating cool games, we need an introduction to the drawing functionality of Pygame. As we noticed in the previous section, in Pygame we draw on Surface objects. There is a myriad of drawing options: different colors, rectangles, polygons, lines, circles, ellipses, animation, and different fonts.

How to do it...

The following steps will help you explore the different drawing options you can use with Pygame:

Imports: We will need the NumPy library to randomly generate RGB values for the colors, so we will add an extra import for that:

```python
import numpy
```

Initializing colors: Generate four tuples containing three RGB values each with NumPy:

```python
colors = numpy.random.randint(0, 255, size=(4, 3))
```

Then define the white color as a variable:

```python
WHITE = (255, 255, 255)
```

Set the background color: We can make the whole screen white with the following code:

```python
screen.fill(WHITE)
```

Drawing a circle: Draw a circle in the center of the window using the first color we generated:

```python
pygame.draw.circle(screen, colors[0], (200, 200), 25, 0)
```

Drawing a line: To draw a line, we need a start point and an end point. We will use the second random color and give the line a thickness of 3:

```python
pygame.draw.line(screen, colors[1], (0, 0), (200, 200), 3)
```

Drawing a rectangle: When drawing a rectangle, we are required to specify a color, the coordinates of the upper-left corner of the rectangle, and its dimensions:

```python
pygame.draw.rect(screen, colors[2], (200, 0, 100, 100))
```

Drawing an ellipse: You might be surprised to discover that drawing an ellipse requires parameters similar to those of a rectangle. The parameters actually describe an imaginary rectangle drawn around the ellipse:

```python
pygame.draw.ellipse(screen, colors[3], (100, 300, 100, 50), 2)
```

The resulting window shows a circle, line, rectangle, and ellipse using random colors. The code for the drawing demo is as follows:

```python
import pygame, sys
from pygame.locals import *
import numpy

pygame.init()
screen = pygame.display.set_mode((400, 400))
pygame.display.set_caption('Drawing with Pygame')

colors = numpy.random.randint(0, 255, size=(4, 3))
WHITE = (255, 255, 255)

# Make the screen white
screen.fill(WHITE)

# Circle in the center of the window
pygame.draw.circle(screen, colors[0], (200, 200), 25, 0)

# Half diagonal from the upper-left corner to the center
pygame.draw.line(screen, colors[1], (0, 0), (200, 200), 3)

pygame.draw.rect(screen, colors[2], (200, 0, 100, 100))
pygame.draw.ellipse(screen, colors[3], (100, 300, 100, 50), 2)

while True:
    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            sys.exit()
    pygame.display.update()
```

Summary

Here we saw how to create a basic game to get us started. The game demonstrated fonts and screen management in the time-honored tradition of Hello World examples. The drawing section taught us how to draw basic shapes such as rectangles, ovals, circles, lines, and others. We also learned important information about colors and color management.

C++, SFML, Visual Studio, and Starting the first game

Packt
19 Aug 2016
15 min read
In this article by John Horton, author of the book Beginning C++ Game Programming, we will waste no time in getting you started on your journey to writing great games for the PC, using C++ and the OpenGL-powered SFML. We will learn absolutely everything we need to have the first part of our first game up and running. Here is what we will do now:

- Find out about the games we will build
- Learn a bit about C++
- Explore SFML and its relationship with C++
- Look at the software, Visual Studio
- Set up a game development environment
- Create a reusable project template, which will save a lot of time
- Plan and prepare for the first game project, Timber!!!
- Write the first C++ code and make a runnable game that draws a background

The games

The journey will be mainly smooth, as we will learn the fundamentals of the super-fast C++ language a step at a time and then put the new knowledge to use, adding cool features to the three games we are building.

Timber!!!

The first game is an addictive, fast-paced clone of the hugely successful Timberman (http://store.steampowered.com/app/398710/). Our game, Timber!!!, will allow us to be introduced to all the C++ basics while building a genuinely playable game.

The Walking Zed

Next we will build a frantic zombie survival-shooter, not unlike the Steam hit Over 9000 Zombies (http://store.steampowered.com/app/273500/). The player will have a machine gun and the ability to gather resources and build defenses. All this will take place in a randomly generated, scrolling world. To achieve this we will learn about object-oriented programming and how it enables us to have a large code base (lots of code) that is easy to write and maintain. Expect exciting features like hundreds of enemies, rapid-fire weaponry, and directional sound.

Thomas gets a real friend

The third game will be a stylish and challenging single-player and co-op puzzle platformer. It is based on the very popular game Thomas was Alone (http://store.steampowered.com/app/220780/). Expect to learn cool topics like particle effects, OpenGL shaders, and multiplayer networking.

If you want to play any of the games now, you can do so from the download bundle in the Runnable Games folder. Just double-click on the appropriate .exe file. Let's get started by introducing C++, Visual Studio, and SFML!

C++

One question you might have is: why use C++ at all? C++ is fast, very fast. What makes this so is the fact that the code we write is directly translated into machine-executable instructions. These instructions, together, are what makes the game. The executable game is contained within a .exe file which the player can simply double-click to run. There are a few steps in the process. First, the pre-processor looks to see if any other code needs to be included within our own and adds it if necessary. Next, all the code is compiled into object files by the compiler program. Finally, a third program called the linker joins all the object files into the executable file which is our game. In addition, C++ is well established as well as being extremely up-to-date. C++ is an Object Oriented Programming (OOP) language, which means we can write and organize our code in a proven way that makes our games efficient and manageable. Most of this other code that I referred to, you might be able to guess, is SFML, and we will find out more about SFML in just a minute. The pre-processor, compiler, and linker programs I have just mentioned are all part of the Visual Studio Integrated Development Environment (IDE).

Microsoft Visual Studio

Visual Studio hides away the complexity of pre-processing, compiling, and linking. It wraps it all up into the press of a button. In addition to this, it provides a slick user interface for us to type our code and to manage what will become a large selection of code files and other project assets as well. While there are advanced versions of Visual Studio that cost hundreds of dollars, we will be able to build all three of our games in the free Express 2015 for Desktop version.

SFML

Simple Fast Media Library (SFML) is not the only C++ library for games. It is possible to make an argument for using other libraries, but SFML comes through in first place, for me, every time. Firstly, it is written using object-oriented C++. Perhaps the biggest benefit is that all modern C++ programming uses OOP. Every C++ beginner's guide I have ever read uses and teaches OOP. OOP is the future (and the now) of coding in almost all languages, in fact. So why, if you're learning C++ from the beginning, would you want to do it any other way? SFML has a module (code) for just about anything you would ever want to do in a 2D game. SFML works using OpenGL, which can also make 3D games. OpenGL is the de facto free-to-use graphics library for games; when you use SFML, you are automatically using OpenGL. SFML drastically simplifies:

- 2D graphics and animation, including scrolling game worlds
- Sound effects and music playback, including high-quality directional sound
- Online multiplayer features

The same code can be compiled and linked on all major desktop operating systems, and soon mobile as well! Extensive research has not uncovered any more suitable way to build 2D games for PC, even for expert developers, and especially if you are a beginner and want to learn C++ in a fun gaming environment.

Setting up the development environment

Now that you know a bit more about how we will be making games, it is time to set up a development environment so we can get coding.

What about Mac and Linux?

The games that we make can be built to run on Windows, Mac, and Linux! The code we use will be identical for each. However, each version does need to be compiled and linked on the platform for which it is intended, and Visual Studio will not be able to help us with Mac and Linux. Although, I guess, if you are an enthusiastic Mac or Linux user and you are comfortable with your operating system, the vast majority of the challenge you will encounter will be in the initial setup of the development environment, SFML, and the first project.

For Linux, read this for an overview: http://www.sfml-dev.org/tutorials/2.0/start-linux.php

For Linux, read this for step-by-step instructions: http://en.sfml-dev.org/forums/index.php?topic=9808.0

On Mac, read this tutorial as well as the linked-out articles: http://www.edparrish.net/common/sfml-osx.html

Installing Visual Studio Express 2015 for Desktop

Installing Visual Studio can be almost as simple as downloading a file and clicking a few buttons. It will help us, however, if we carefully run through exactly how we do this. For this reason, I will walk through the installation process a step at a time.

The Microsoft Visual Studio site says you need 5 GB of hard disk space. From experience, however, I would suggest you need at least 10 GB of free space. In addition, these figures are slightly ambiguous. If you are planning to install on a secondary hard drive, you will still need at least 5 GB on the primary hard drive, because no matter where you choose to install Visual Studio, it will need this space too. To summarize this ambiguous situation: it is essential to have a full 10 GB of space on the primary hard disk if you will be installing Visual Studio to that primary hard disk. On the other hand, make sure you have 5 GB on the primary hard disk as well as 10 GB on the secondary if you intend to install to a secondary hard disk. Yep, stupid, I know!

1. The first thing you need is a Microsoft account and the login details. If you have a Hotmail or MSN email address, then you already have one. If not, you can sign up for a free one here: https://login.live.com/.
2. Visit this link: https://www.visualstudio.com/en-us/downloads/download-visual-studio-vs.aspx. Click on Visual Studio 2015, then Express 2015 for desktop, then the Download button.
3. Wait for the short download to complete and then run the downloaded file. Now you just need to follow the on-screen instructions. However, make a note of the folder where you choose to install Visual Studio. If you want to do things exactly the same as me, then create a new folder called Visual Studio 2015 on your preferred hard disk and install to this folder. This whole process could take a while, depending on the speed of your Internet connection.
4. When installation finishes, click on Launch and enter your Microsoft account login details.

Now we can turn to SFML.

Setting up SFML

This short tutorial will step through downloading the SFML files, which allow us to include the functionality contained in the library, as well as the files, called DLL files, that will enable us to link our compiled object code to the SFML compiled object code.

1. Visit this link on the SFML website: http://www.sfml-dev.org/download.php. Click on the button that says Latest Stable Version. By the time you read this guide, the actual latest version will almost certainly have changed. That doesn't matter, as long as you do the next step just right.
2. We want to download the 32-bit version for Visual C++ 2014. This might sound counter-intuitive, because we have just installed Visual Studio 2015 and you probably (most commonly) have a 64-bit PC. The reason we choose this download is that Visual C++ 2014 is part of Visual Studio 2015 (Visual Studio does more than C++), and we will be building games in 32-bit so they run on both 32- and 64-bit machines.
3. When the download completes, create a folder at the root of the same drive where you installed Visual Studio and name it SFML. Also create another folder at the root of the drive where you installed Visual Studio and call it Visual Studio Stuff. We will store all kinds of Visual Studio related things here, so Stuff seems like a good name.
4. Now, ready for all the projects we will soon be making, create a new folder inside Visual Studio Stuff. Name the new folder Projects.
5. Finally, unzip the SFML download. Do this on your desktop. When unzipping is complete, you can delete the zip folder. You will be left with a single folder on your desktop. Its name will reflect the version of SFML that you downloaded; mine is called SFML-2.3.2-windows-vc14-32-bit. Your folder name will likely reflect a more recent version. Double-click this folder to see the contents, then double-click again into the next folder (mine is called SFML-2.3.2). Copy the entire contents of this folder and paste/drag all of it into the SFML folder you created in step 3. I will refer to this folder simply as your SFML folder.

Now we are ready to start using C++ and SFML in Visual Studio.

Creating a reusable project template

As setting up a project is a fairly fiddly process, we will create a project and then save it as a Visual Studio template. This will save us quite a significant amount of work each time we start a new game. So if you find the next tutorial a little tedious, rest assured that you will never need to do this again.

1. In the New Project window, click the little drop-down arrow next to Visual C++ to reveal more options, then click Win32, and then click Win32 Console Application.
2. Now, at the bottom of the New Project window, type HelloSFML in the Name: field.
3. Next, browse to the Visual Studio Stuff\Projects folder that we created in the previous tutorial. This will be the location where all our project files will be kept. All templates are based on an actual project, so we will have a project called HelloSFML, but the only thing we will do with it is make a template from it.
4. When you have completed the steps above, click OK. In the Application Settings window, check the box for Console application, leave the other options at their defaults, and click Finish. Visual Studio will create the new project.

Next we will add some fairly intricate and important project settings. This is the laborious part, but as we will create a template, we will only need to do this once. What we need to do is tell Visual Studio, or more specifically the code compiler that is part of Visual Studio, where to find a special type of code file from SFML. The special type of file I am referring to is a header file. Header files are the files that define the format of the SFML code, so that when we use the SFML code, the compiler knows how to handle it. Note that the header files are distinct from the main source code files, and they are contained in files with the .hpp file extension. All this will become clearer when we eventually start adding our own header files in the second project. In addition, we need to tell Visual Studio where it can find the SFML library files.

From the Visual Studio main menu, select Project | HelloSFML properties. In the resulting HelloSFML Property Pages window, take the following steps:

1. Select All Configurations from the Configuration: drop-down.
2. Select C/C++ and then General from the left-hand menu.
3. Locate the Additional Include Directories edit box and type the drive letter where your SFML folder is located, followed by \SFML\include. The full path to type, if you located your SFML folder on your D drive, is D:\SFML\include. Vary your path if you installed SFML to a different drive.
4. Click Apply to save your configuration so far.

Now, still in the same window, perform these next steps:

1. Select Linker and then General.
2. Find the Additional Library Directories edit box and type the drive letter where your SFML folder is, followed by \SFML\lib. So the full path to type, if you located your SFML folder on your D drive, is D:\SFML\lib. Vary your path if you installed SFML to a different drive.
3. Click Apply to save your configuration so far.

Finally for this stage, still in the same window, perform these steps:

1. Switch the Configuration: drop-down to Debug, as we will be running and testing our games in debug mode.
2. Select Linker and then Input.
3. Find the Additional Dependencies edit box and click into it at the far left-hand side. Now copy and paste/type the following at the indicated place: sfml-graphics-d.lib;sfml-window-d.lib;sfml-system-d.lib;sfml-network-d.lib;sfml-audio-d.lib;. Again, be REALLY careful to place the cursor exactly and not to overwrite any of the text that is already there.
4. Click OK.

Let's make the template of our HelloSFML project so we never have to do this slightly mind-numbing task again. Creating a reusable project template is really easy. In Visual Studio, select File | Export Template…. Then, in the Export Template Wizard window, make sure the Project template option is selected and that the HelloSFML project is selected for the From which project do you want to create a template option. Click Next and then Finish. Phew, that's it! Next time we create a project, I'll show you how to do it from this template.
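Before moving on, it can be worth checking that the include and library paths actually work. The following is a minimal smoke-test main.cpp for that purpose; it is not from the article, but uses the standard SFML 2.x API and should show a window with a green circle if the project is configured correctly:

```cpp
// A minimal SFML smoke test (a sketch, not the Timber!!! code).
#include <SFML/Graphics.hpp>

int main()
{
    // Open an 800x600 window titled "HelloSFML".
    sf::RenderWindow window(sf::VideoMode(800, 600), "HelloSFML");

    sf::CircleShape shape(100.f);
    shape.setFillColor(sf::Color::Green);

    while (window.isOpen())
    {
        // Drain the event queue; close the window when asked.
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();
        }

        window.clear();
        window.draw(shape);
        window.display();
    }

    return 0;
}
```

If the program builds but fails to launch with missing-DLL errors, the SFML .dll files from your SFML\bin folder typically need to sit alongside the executable.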
Let's build Timber!!!

Summary

In this article we learned that configuring an IDE to use a C++ library can be a bit awkward and lengthy. We also noted that the concept of classes and objects is well known to be slightly awkward for people new to coding.
Introduction to HLSL language

Packt
28 Jun 2013
8 min read
(For more resources related to this topic, see here.)

Distance/height-based fog

Distance/height-based fog is an approximation of the fog you would normally see outdoors. Even on the clearest of days, you should be able to see some fog far in the distance. The main benefit of adding the fog effect is that it helps the viewer estimate how far away different elements in the scene are, based on the amount of fog covering them. In addition to the realism this effect adds, it has the further benefit of hiding the end of the visible range: without fog to cover the far plane, it becomes easy to notice when far scene elements are clipped by the camera's far plane. By tuning the height of the fog, you can also add a darker atmosphere to your scene, as demonstrated by the following image:

This recipe will demonstrate how distance/height-based fog can be added to our deferred directional light calculation. See the How it works… section for details about adding the effect to other elements of your rendering code.

Getting ready

We will be passing additional fog-specific parameters to the directional light's pixel shader through a new constant buffer. The reason for separating the fog values into their own constant buffer is to allow the same parameters to be used by any other shader that takes fog into account. To create the new constant buffer, use the following buffer descriptor:

    Constant buffer descriptor parameter    Value
    Usage                                   D3D11_USAGE_DYNAMIC
    BindFlags                               D3D11_BIND_CONSTANT_BUFFER
    CPUAccessFlags                          D3D11_CPU_ACCESS_WRITE
    ByteWidth                               48

The rest of the descriptor fields should be set to zero. All the fog calculations will be handled in the deferred directional light pixel shader.
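If it helps to see the descriptor in code form, the following C++ sketch (my own, not taken from the book's framework; it assumes a valid ID3D11Device* named device and omits error handling) creates the buffer exactly as the table describes:

    #include <d3d11.h>

    // Sketch: create the 48-byte fog constant buffer described above.
    // 'device' is assumed to be a valid ID3D11Device* created elsewhere.
    D3D11_BUFFER_DESC fogBufferDesc = {};            // zero the remaining fields
    fogBufferDesc.Usage          = D3D11_USAGE_DYNAMIC;
    fogBufferDesc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
    fogBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    fogBufferDesc.ByteWidth      = 48;               // three float4 registers (c0-c2)

    ID3D11Buffer* pFogCB = nullptr;
    HRESULT hr = device->CreateBuffer(&fogBufferDesc, nullptr, &pFogCB);

Because the buffer is dynamic with CPU write access, it would be updated each frame with Map/Unmap (D3D11_MAP_WRITE_DISCARD) before being bound to slot b2 with PSSetConstantBuffers.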
How to do it...

Our new fog constant buffer is declared in the pixel shader as follows:

    cbuffer cbFog : register( b2 )
    {
        float3 FogColor          : packoffset( c0 );
        float  FogStartDepth     : packoffset( c0.w );
        float3 FogHighlightColor : packoffset( c1 );
        float  FogGlobalDensity  : packoffset( c1.w );
        float3 FogSunDir         : packoffset( c2 );
        float  FogHeightFalloff  : packoffset( c2.w );
    }

The helper function used for calculating the fog is as follows:

    float3 ApplyFog(float3 originalColor, float eyePosY, float3 eyeToPixel)
    {
        float pixelDist = length( eyeToPixel );
        float3 eyeToPixelNorm = eyeToPixel / pixelDist;

        // Find the distance from the fog starting depth to the pixel
        float fogDist = max(pixelDist - FogStartDepth, 0.0);

        // Distance-based fog intensity
        float fogHeightDensityAtViewer = exp( -FogHeightFalloff * eyePosY );
        float fogDistInt = fogDist * fogHeightDensityAtViewer;

        // Height-based fog intensity
        float eyeToPixelY = eyeToPixel.y * ( fogDist / pixelDist );
        float t = FogHeightFalloff * eyeToPixelY;
        const float thresholdT = 0.01;
        float fogHeightInt = abs( t ) > thresholdT ?
            ( 1.0 - exp( -t ) ) / t : 1.0;

        // Combine both factors to get the final factor
        float fogFinalFactor = exp( -FogGlobalDensity * fogDistInt * fogHeightInt );

        // Find the sun highlight and use it to blend the fog color
        float sunHighlightFactor = saturate(dot(eyeToPixelNorm, FogSunDir));
        sunHighlightFactor = pow(sunHighlightFactor, 8.0);
        float3 fogFinalColor = lerp(FogColor, FogHighlightColor, sunHighlightFactor);

        return lerp(fogFinalColor, originalColor, fogFinalFactor);
    }

The ApplyFog function takes the color without fog, along with the camera height and the vector from the camera to the pixel the color belongs to, and returns the pixel color with fog.

To add fog to the deferred directional light, change the directional entry point to the following code:

    float4 DirLightPS(VS_OUTPUT In) : SV_TARGET
    {
        // Unpack the GBuffer
        float2 uv = In.Position.xy; //In.UV.xy;
        SURFACE_DATA gbd = UnpackGBuffer_Loc(int3(uv, 0));

        // Convert the data into the material structure
        Material mat;
        MaterialFromGBuffer(gbd, mat);

        // Reconstruct the world position
        float2 cpPos = In.UV.xy * float2(2.0, -2.0) - float2(1.0, -1.0);
        float3 position = CalcWorldPos(cpPos, gbd.LinearDepth);

        // Get the AO value
        float ao = AOTexture.Sample(LinearSampler, In.UV);

        // Calculate the light contribution
        float4 finalColor;
        finalColor.xyz = CalcAmbient(mat.normal, mat.diffuseColor.xyz) * ao;
        finalColor.xyz += CalcDirectional(position, mat);
        finalColor.w = 1.0;

        // Apply the fog to the final color
        float3 eyeToPixel = position - EyePosition;
        finalColor.xyz = ApplyFog(finalColor.xyz, EyePosition.y, eyeToPixel);

        return finalColor;
    }

With this change, we apply the fog on top of the lit pixel's color and return it to the light accumulation buffer.

How it works…

Fog is probably the first volumetric effect that was implemented using a programmable pixel shader, as those became commonly supported by GPUs. Originally, fog was implemented in hardware (the fixed pipeline) and only took distance into account. As GPUs became more powerful, the hardware distance-based fog was replaced by a programmable version that also takes into account things such as height and the effect of the sun.

In reality, fog is just particles in the air that absorb and reflect light. A ray of light traveling from a position in the scene towards the camera interacts with the fog particles and gets changed based on those interactions. The further this ray has to travel before it reaches the camera, the larger the chance is that it will get either partially or fully absorbed. In addition to absorption, a ray traveling in a different direction may get reflected towards the camera and add to the intensity of the original ray. Based on the amount of particles in the air and the distance a ray has to travel, the light reaching our camera may contain more reflection and less of the original ray, which leads to a homogenous color we perceive as fog.

The parameters used in the fog calculation are:

FogColor: The fog's base color (this color's brightness should match the overall intensity so it won't get blown out by the bloom)
FogStartDepth: The distance from the camera at which the fog starts to blend in
FogHighlightColor: The color used for highlighting pixels whose pixel-to-camera vector is close to parallel with the camera-to-sun vector
FogGlobalDensity: Density factor for the fog (the higher this is, the denser the fog will be)
FogSunDir: Normalized sun direction
FogHeightFalloff: Height falloff value (the higher this value, the lower the height at which the fog disappears will be)

When tuning the fog values, make sure the ambient colors match the fog. This type of fog is designed for outdoor environments, so you should probably disable it when lighting interiors. You may have noticed that the fog requires the sun direction. We already store the inverted sun direction for the directional light calculation; you can remove that value from the directional light constant buffer and use the fog vector instead to avoid duplicating values.

This recipe implements the fog using the exponential function. The reason for using the exponential function is its asymptote on the negative side of its graph.
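Transcribing the code above into a single expression (nothing new here, just the shader math written out, with G = FogGlobalDensity, k = FogHeightFalloff, d = fogDist, y_eye = eyePosY, and Δy = eyeToPixelY), the blend factor computed by ApplyFog is:

$$ f_{\text{fog}} = \exp\left(-G \cdot d\,e^{-k\,y_{\text{eye}}} \cdot \frac{1 - e^{-k\,\Delta y}}{k\,\Delta y}\right) $$

The output color is then lerp(fogFinalColor, originalColor, f_fog), that is, f_fog times the original color plus (1 - f_fog) times the fog color. The factor equals 1 (no fog) when d = 0 and decays towards 0 as the fogged distance grows, while the height term (1 - e^{-kΔy})/(kΔy) tends to 1 for nearly horizontal rays, which is exactly the case the thresholdT guard in the code handles.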
Our fog implementation uses that asymptote to blend the fog in from the starting distance. As a reminder, the exponential function graph is as follows:

The ApplyFog function starts off by finding the distance our ray traveled in the fog (fogDist). In order to take the fog's height into account, we also look at the lowest height between the camera and the pixel we apply the fog to, which we then use to find how far our ray travels vertically inside the fog (eyeToPixelY). Both distance values are negated and multiplied by the fog density to be used as the exponent. The reason we negate the distance values is because it's more convenient to use the negative side of the exponential function's graph, which is limited to the range 0 to 1. As the function equals 1 when the exponent is 0, we have to invert the results (stored in fogDistInt and fogHeightInt). At this point, we have one factor for the height, which gets larger the further the ray travels vertically into the fog, and a factor that gets larger the further the ray travels in the fog in any direction. By multiplying both factors with each other, we get the combined fog effect on the ray: the higher the result is, the more the original ray got absorbed and light got reflected towards the camera in its direction (this is stored in fogFinalFactor).

Before we can compute the final color value, we need to find the fog's color based on the camera and sun directions. We assume that the sun's intensity is high enough that more of its light rays get reflected towards the camera when the camera direction and sun direction are close to parallel. We use the dot product between the two to determine the angle and narrow the result by raising it to the power of 8 (the result is stored in sunHighlightFactor). The result is used to lerp between the fog's base color and the fog color highlighted by the sun.

Finally, we use the fog factor to linearly interpolate between the input color and the fog color. The resulting color is then returned from the helper function and stored into the light accumulation buffer. As you can see, the changes to the directional light entry point are very minor, as most of the work is handled inside the helper function ApplyFog. Adding the fog calculation to the rest of the deferred and forward light sources should be pretty straightforward. One thing to take into consideration is that fog also has to be applied to scene elements that don't get lit, like the sky or emissive elements. Again, all you have to do is call ApplyFog to get the final color with the fog effect.

Summary

In this article, we learned how to apply a fog effect and add atmosphere to our scenes.

Resources for Article:

Further resources on this subject:
Creating and Warping 3D Text with Away3D 3.6 [Article]
3D Vector Drawing and Text with Papervision3D: Part 1 [Article]
3D Vector Drawing and Text with Papervision3D: Part 2 [Article]
The Exciting Features of HaxeFlixel

Packt
02 Nov 2015
4 min read
This article by Jeremy McCurdy, the author of the book Haxe Game Development Essentials, uncovers the exciting features of HaxeFlixel. When getting into cross-platform game development, it's often difficult to pick the best tool. There are a lot of engines and languages out there to do it, but when creating 2D games, one of the best options is HaxeFlixel. HaxeFlixel is a game engine written in the Haxe language and powered by the OpenFL framework. Haxe is a cross-platform language and compiler that allows you to write code and have it run on a multitude of platforms. OpenFL is a framework that expands the Haxe API and gives you easy, uniform ways to handle things such as rendering and audio across different platforms.

Here's a rundown of what we'll look at:

Core features
  Display
  Audio
  Input
Other useful features
  Multiplatform support
  Advanced user interface support
  Visual effects

(For more resources related to this topic, see here.)

Core features

HaxeFlixel is a 2D game engine, originally based on the Flash game engine Flixel. So, what makes it awesome? Let's start with the basic things you need: display, audio, and input.

Display

In HaxeFlixel, most visual elements are represented by objects using the FlxSprite class. This can be anything from spritesheet animations to shapes drawn through code, which provides you with a simple and consistent way of working with visual elements. An example of how FlxSprite objects are used is sketched after this section.

You can handle things such as layering by using the FlxGroup class, which does what its name implies: it groups things together. The FlxGroup class can also be used for collision detection (checking whether objects from group A hit objects from group B), and it acts as an object pool for better memory management. It's really versatile without feeling bloated.

Everything visual is displayed by using the FlxCamera class. As the name implies, it's a game camera. It allows you to do things such as scrolling, fullscreen visual effects, and zooming in or out of the scene.
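As a quick illustration of the Display section (this snippet is mine, not from the book, and the asset path is a placeholder), a FlxSprite can either be loaded from an image or generated entirely in code, and both are added to the current state the same way:

    import flixel.FlxSprite;
    import flixel.FlxState;
    import flixel.util.FlxColor;

    class PlayState extends FlxState
    {
        override public function create():Void
        {
            super.create();

            // A sprite loaded from an image asset (path is a placeholder)
            var player = new FlxSprite(50, 50);
            player.loadGraphic("assets/images/player.png");
            add(player);

            // A sprite drawn in code: a 32x32 red rectangle
            var box = new FlxSprite(150, 50);
            box.makeGraphic(32, 32, FlxColor.RED);
            add(box);
        }
    }

Layering then comes almost for free: add sprites to FlxGroup instances and add the groups to the state in the order you want them drawn.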
Audio

Sound effects and music are handled using a simple but effective sound frontend. It allows you to play sound effects and loop music clips with easy function calls. You can also manage the volume on a per-sound basis, via global volume controls, or a mix of both.

Input

HaxeFlixel supports many methods of input: mouse, touch, keyboard, or gamepad. This allows you to support players on every platform easily. On desktop platforms, you can easily customize the mouse cursor without the need to write special functionality. The built-in gamepad support covers mappings for the following controllers:

Xbox
PS3
PS4
OUYA
Logitech

Other useful features

HaxeFlixel has a bunch of other cool features that make it a solid choice as a game engine. Among these are multiplatform support, advanced user interface support, and visual effects.

Multi-platform support

HaxeFlixel can be built for many different platforms. Much of this comes from it being built on OpenFL and its stellar cross-platform support. You can build desktop games that will work natively on Windows, Mac, and Linux. You can build mobile games for Android and iOS with relative ease. You can also target the Web by using Flash or the experimental support for HTML5.

Advanced user interface support

By using the flixel-ui add-on library, you can create complex game user interfaces, which you can define and set up using XML configuration files. The flixel-ui library gives you access to a lot of different control types, such as 9-sliced images, check/toggle buttons, text input, tabs, and drop-down menus. You can even localize UI text into different languages by using the firetongue Haxe library.

Visual effects

Another add-on is the effects library. It allows you to warp and distort sprites by using the FlxGlitchSprite and FlxWaveSprite classes. You can also add trails to objects by using the FlxTrail class. Aside from the add-on library, HaxeFlixel also has built-in support for 2D particle effects, camera effects such as screen flashes and fades, and screen shake for added impact.

Summary

In this article, we discussed several features of HaxeFlixel. This includes the core features of display, audio, and input. We also covered the additional features of multiplatform support, advanced user interface support, and visual effects.

Resources for Article:

Further resources on this subject:
haXe 2: The Dynamic Type and Properties [article]
Being Cross-platform with haXe [article]
haXe 2: Using Templates [article]
Getting Ready to Fight

Packt
08 Sep 2016
15 min read
In this article by Ashley Godbold, author of the book Mastering Unity 2D Game Development, Second Edition, we will start out by laying the main foundation for the battle system of our game. We will create the Heads Up Display (HUD) as well as design the overall logic of the battle system. The following topics will be covered in this article:

Creating a state manager to handle the logic behind a turn-based battle system
Working with Mecanim in the code
Exploring RPG UI
Creating the game's HUD

(For more resources related to this topic, see here.)

Setting up our battle state manager

The most unique and important part of a turn-based battle system is the turns. Controlling the turns is incredibly important, and we will need something to handle the logic behind the actual turns for us. We'll accomplish this by creating a battle state machine.

The battle state manager

Starting back in our BattleScene, we need to create a state machine using all of Mecanim's handy features. Although we will still only be using a fraction of its functionality with the RPG sample, I advise you to investigate and read more about its capabilities.

Navigate to Assets\Animation\Controllers and create a new Animator Controller called BattleStateMachine, and then we can begin putting together the battle state machine. The following screenshot shows you the states, transitions, and properties that we will need:

As shown in the preceding screenshot, we have created eight states to control the flow of a battle, with two Boolean parameters to control its transitions. The transitions are defined as follows:

From Begin_Battle to Intro:
  BattleReady set to true
  Has Exit Time set to false (deselected)
  Transition Duration set to 0

From Intro to Player_Move:
  Has Exit Time set to true
  Exit Time set to 0.9
  Transition Duration set to 2

From Player_Move to Player_Attack:
  PlayerReady set to true
  Has Exit Time set to false
  Transition Duration set to 0

From Player_Attack to Change_Control:
  PlayerReady set to false
  Has Exit Time set to false
  Transition Duration set to 2

From Change_Control to Enemy_Attack:
  Has Exit Time set to true
  Exit Time set to 0.9
  Transition Duration set to 2

From Enemy_Attack to Player_Move:
  BattleReady set to true
  Has Exit Time set to false
  Transition Duration set to 2

From Enemy_Attack to Battle_Result:
  BattleReady set to false
  Has Exit Time set to false
  Transition Duration set to 2

From Battle_Result to Battle_End:
  Has Exit Time set to true
  Exit Time set to 0.9
  Transition Duration set to 5

Summing up, what we have built is a steady flow of battle, which can be summarized as follows:

The battle begins and we show a little introductory clip to tell the player about the battle.
Once the player has control, we wait for them to finish their move.
We then perform the player's attack and switch control over to the enemy AI.
If there are any enemies left, they get to attack the player (if they are not too scared and have not run away).
If the battle continues, we switch back to the player, otherwise we show the battle result.
We show the result for five seconds (or until the player hits a key), and then finish the battle and return the player to the world, together with whatever loot and experience they gained.

This is just a simple flow, which can be extended as much as you want, and as we continue, you will see all the points where you could expand it.

With our animator state machine created, we now just need to attach it to our battle manager so that it will be available when the battle runs. The following are the steps to do this:

Open up BattleScene.
Select the BattleManager game object in the project Hierarchy and add an Animator component to it.
Now drag the BattleStateMachine animator controller we just created into the Controller property of the Animator component.

The preceding steps attached our new battle state machine to our battle engine. Now, we just need to be able to reference the BattleStateMachine Mecanim state machine from the BattleManager script. To do so, open up the BattleManager script in Assets\Scripts and add the following variable to the top of the class:

    private Animator battleStateManager;

Then, to capture the configured Animator in our BattleManager script, we add the following to an Awake function placed before the Start function:

    void Awake()
    {
        battleStateManager = GetComponent<Animator>();
        if (battleStateManager == null)
        {
            Debug.LogError("No battleStateMachine Animator found.");
        }
    }

We have to assign it this way because all the functionality to integrate the Animator Controller is built into the Animator component. We cannot simply attach the controller directly to the BattleManager script and use it. Now that it's all wired up, let's start using it.

Getting to the state manager in the code

Now that we have our state manager running in Mecanim, we just need to be able to access it from the code. However, at first glance, there is a barrier to achieving this. The reason being that the Mecanim system uses hashes (integer ID keys for objects), not strings, to identify states within its engine (still not clear why, but probably for performance reasons). To access the states in Mecanim, Unity provides a hashing algorithm to help you, which is fine for one-off checks but a bit of an overhead when you need per-frame access.

You can check whether a state's name matches a specific string using the following:

    GetCurrentAnimatorStateInfo(0).IsName("Thing you're checking")

But there is no way to store the name of the current state in a variable. A simple solution to this is to generate and cache all the state hashes when we start and then use the cache to talk to the Mecanim engine.

First, let's remove the placeholder code for the old enum state machine. Remove the following code from the top of the BattleManager script:

    enum BattlePhase
    {
        PlayerAttack,
        EnemyAttack
    }
    private BattlePhase phase;

Also, remove the following line from the Start method:

    phase = BattlePhase.PlayerAttack;

There is still a reference in the Update method for our buttons, but we will update that shortly; feel free to comment it out now if you wish, but don't delete it.

Now, to begin working with our new state machine, we need a replica of the available states we have defined in our Mecanim state machine. For this, we just need an enumeration using the same names (you can create this either as a new C# script or simply place it in the BattleManager class) as follows:

    public enum BattleState
    {
        Begin_Battle,
        Intro,
        Player_Move,
        Player_Attack,
        Change_Control,
        Enemy_Attack,
        Battle_Result,
        Battle_End
    }

It may seem strange to have a duplicate of your states in the state machine and in the code; however, at the time of writing, it is necessary. Mecanim does not expose the names of the states outside of the engine other than through hashes. You can either use this approach and make it dynamic, or extract the state hashes and store them in a dictionary for use. Mecanim makes the managing of state machines very simple under the hood and is extremely powerful; much better than trawling through code every time you want to update the state machine.
Next, we need a location to cache the hashes the state machine needs and a property to keep the current state so that we don't constantly query the engine for a hash. So, add a new using statement at the beginning of the BattleManager class as follows:

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

Then, add the following variables to the top of the BattleManager class:

    private Dictionary<int, BattleState> battleStateHash = new Dictionary<int, BattleState>();
    private BattleState currentBattleState;

Finally, we just need to integrate the animator state machine we have created. So, create a new GetAnimationStates method in the BattleManager class as follows:

    void GetAnimationStates()
    {
        foreach (BattleState state in (BattleState[])System.Enum.GetValues(typeof(BattleState)))
        {
            battleStateHash.Add(Animator.StringToHash(state.ToString()), state);
        }
    }

This simply generates a hash for the corresponding animation state in Mecanim and stores the resultant hashes in a dictionary that we can use without having to calculate them at runtime when we need to talk to the state machine.

Sadly, there is no way at runtime to gather this information from Mecanim, as it is only available in the editor. You could gather the hashes from the animator and store them in a file to avoid this, but it won't save you much.

To complete this, we just need to call the new method in the Start function of the BattleManager script by adding the following:

    GetAnimationStates();

Now that we have our states, we can use them in our running game to control both the logic that is applied and the GUI elements that are drawn to the screen. Now add the Update function to the BattleManager class as follows:

    void Update()
    {
        currentBattleState = battleStateHash[battleStateManager.GetCurrentAnimatorStateInfo(0).shortNameHash];

        switch (currentBattleState)
        {
            case BattleState.Intro:
                break;
            case BattleState.Player_Move:
                break;
            case BattleState.Player_Attack:
                break;
            case BattleState.Change_Control:
                break;
            case BattleState.Enemy_Attack:
                break;
            case BattleState.Battle_Result:
                break;
            case BattleState.Battle_End:
                break;
            default:
                break;
        }
    }

The preceding code gets the current state from the animator state machine once per frame and then sets up a choice (switch statement) for what can happen based on the current state. (Remember, it is the state machine that decides which state follows which in the Mecanim engine, not nasty nested if statements everywhere in code.)

Now we are going to update the functionality that turns our GUI button on and off. Update the line of code in the Update method we wrote as follows:

    if (phase == BattlePhase.PlayerAttack) {

so that it now reads:

    if (currentBattleState == BattleState.Player_Move) {

This will make it so that the buttons are now only visible when it is time for the player to perform his/her move. With these in place, we are ready to start adding in some battle logic.

Starting the battle

As it stands, the state machine is waiting at the Begin_Battle state for us to kick things off. Obviously, we want to do this when we are ready and all the pieces on the board are in place. When the Battle scene we added starts, we load up the player and randomly spawn in a number of enemies using a co-routine function called SpawnEnemies. So, only when all the dragons are ready and waiting to be chopped down do we want to kick things off.
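The SpawnEnemies co-routine itself was written earlier in the book and isn't reproduced in this excerpt. As a rough sketch of its shape only (the enemy count, prefab handling, and timing here are placeholders, not the book's actual implementation), it looks something like this:

    // Sketch only: the book's real SpawnEnemies differs in its details.
    // The important part for the next step is the for loop at its core.
    IEnumerator SpawnEnemies()
    {
        int enemyCount = Random.Range(1, 4); // placeholder: 1 to 3 dragons

        for (int i = 0; i < enemyCount; i++)
        {
            // Instantiate an enemy prefab and position it in the scene here
            yield return null; // let a frame pass between spawns
        }

        // The next step adds a line here, just after the loop ends
    }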
To tell the state machine to start the battle, we simply add the following line just after the end of the for loop in the SpawnEnemies IEnumerator co-routine function:

    battleStateManager.SetBool("BattleReady", true);

Now, when everything is in place, the battle will finally begin.

Introductory animation

When the battle starts, we are going to display a little battle introductory image that states who the player is going to be fighting against. We'll have it slide into the scene and then slide out. You can do all sorts of interesting stuff with this introductory animation, like animating the individual images, but I'll leave that up to you to play with. Can't have all the fun now, can I?

Start by creating a new Canvas and renaming it IntroCanvas so that we can distinguish it from the canvas that will hold our buttons. At this point, since we are adding a second canvas to the scene, we should probably rename ours to something that is easier for you to identify. It's a matter of preference, but I like to use different canvases for different UI elements; for example, one for the HUD, one for pause menus, one for animations, and so on. You can put them all on a single canvas and use Panels and CanvasGroup components to distinguish between them; it's really up to you.

As a child of the new IntroCanvas, create a Panel with the properties shown in the following screenshot. Notice that the Image object's Color property is set to black with the alpha set to about half:

Now add, as children of the Panel, two UI Images and a UI Text. Name the first image PlayerImage and set its properties as shown in the following screenshot. Be sure to set Preserve Aspect to true:

Name the second image EnemyImage and set its properties as shown in the following screenshot:

For the text, set the properties as shown in the following screenshot:

Your Panel should now appear as mine did in the image at the beginning of this section. Now let's give this Panel its animation. With the Panel selected, select the Animation tab. Now hit the Create button. Save the animation as IntroSlideAnimation in the Assets/Animation/Clips folder.

At the 0:00 frame, set the Panel's X position to 600, as shown in the following screenshot:

Now, at the 0:45 frame, set the Panel's X position to 0. Place the playhead at the 1:20 frame and set the Panel's X position to 0 there as well, by selecting Add Key, as shown in the following screenshot:

Create the last frame at 2:00 by setting the Panel's X position to -600.

When the Panel slides in, it does this annoying bounce thing instead of staying put. We need to fix this by adjusting the animation curve. Select the Curves tab:

When you select the Curves tab, you should see something like the following:

The reason for the bounce is the wiggle that occurs between the two center keyframes. To fix this, right-click on the two center points on the curve, represented by red dots, and select Flat, as shown in the following screenshot:

After you do so, the curve should be constant (flat) in the center, as shown in the following screenshot:

The last thing we need to do to connect this to our BattleStateManager is to adjust the properties of the Panel's Animator. With the Panel selected, select the Animator tab. You should see something like the following:

Right now, the animation immediately plays when the scene is entered. However, since we want this to tie in with our BattleStateManager and only begin playing in the Intro state, we do not want this to be the default animation.
Create an empty state within the Animator and set it as the default state. Name this state OutOfFrame. Now make a Trigger parameter called Intro. Set the transition between the two states so that it has the following properties:

The last things we want to do before we move on are to make it so this animation does not loop, rename the new Animator, and place it in the correct subfolder. In the project view, select IntroSlideAnimation from the Assets/Animation/Clips folder and deselect Loop Time. Rename the Panel's Animator to VsAnimator and move it to the Assets/Animation/Controllers folder.

Currently, the Panel is appearing right in the middle of the screen at all times, so go ahead and set the Panel's X position to 600 to get it out of the way.

Now we can access this in our BattleManager script. Currently, the state machine pauses at the Intro state for a few seconds; let's have our Panel animation pop in. Add the following variable declarations to our BattleManager script:

    public GameObject introPanel;
    Animator introPanelAnim;

And add the following to the Awake function:

    introPanelAnim = introPanel.GetComponent<Animator>();

Now add the following to the case line of the Intro state in the Update function:

    case BattleState.Intro:
        introPanelAnim.SetTrigger("Intro");
        break;

For this to work, we have to drag and drop the Panel into the Intro Panel slot in the BattleManager Inspector.

As the battle is now in progress and control is being passed to the player, we need some interaction from the user. Currently, the player can run away, but that's not at all interesting. We want our player to be able to fight! So, let's design a graphical user interface that will allow her to attack those adorable, but super mean, dragons.

Summary

Getting the battle right based on the style of your game is very important, as it is where the player will spend the majority of their time. Keep the player engaged and try to make each battle different in some way, as repetitiveness is a tricky problem to solve and you don't want to bore the player. Think about different attacks your player can perform that possibly strengthen as the player strengthens.

In this article, you covered the following:

Setting up the logic of our turn-based battle system
Working with state machines in the code
Different RPG UI overlays
Setting up the HUD of our game so that our player can do more than just run away

Resources for Article:

Further resources on this subject:
Customizing an Avatar in Flash Multiplayer Virtual Worlds [article]
Looking Good – The Graphical Interface [article]
The Vertex Functions [article]
Looking Back, Looking Forward

Packt
09 Feb 2015
24 min read
In this article by Simon Jackson, author of the book Unity 3D UI Essentials, we will see how the new Unity UI has long been sought by developers; it has been announced and re-announced over several years, and now it is finally here. The new UI system is truly awesome and (more importantly for a lot of developers on a shoestring budget) it's free.

(For more resources related to this topic, see here.)

As we start to look forward to the new UI system, it is very important to understand the legacy GUI system (which still exists for backwards compatibility) and all it has to offer, so you can fully understand just how powerful and useful the new system is. It's crucial to have this understanding, especially since most tutorials will still speak of the legacy GUI system (so you know what on earth they are talking about). With an understanding of the legacy system, you will then peer over the diving board and walk through a 10,000-foot view of the new system, so you get a feel of what to expect from the rest of this book. The following is the list of topics that will be covered in this article:

A look back into what legacy Unity GUI is
Tips, tricks, and an understanding of legacy GUI and what it has done for us
A high-level overview of the new UI features

State of play

You may not expect it, but the legacy Unity GUI has evolved over time, adding new features and improving performance. However, because it has kept evolving based on its original implementation, it has been hampered by many constraints and the ever-pressing need to remain backwards compatible (just look at Windows, which even today has to cater for programs written in BASIC (http://en.wikipedia.org/wiki/BASIC)). Not to say the old system is bad; it's just not as evolved as some of the newer features being added to the Unity 4.x and Unity 5.x series, which are based on newer and more enhanced designs, and more importantly, a new core.

The main drawback of the legacy GUI system is that it is only drawn in screen space (drawn on the screen instead of within it), on top of any 3D elements or drawing in your scenes. This is fine if you want menus or overlays in your title, but if you want to integrate it further within your 3D scene, then it is a lot more difficult.

For more information about world space and screen space, see this Unity Answers article (http://answers.unity3d.com/questions/256817/about-world-space-and-local-space.html).

So before we can understand how good the new system is, we first need to get to grips with where we are coming from. (If you are already familiar with the legacy GUI system, feel free to skip over this section.)

A point of reference

Throughout this book, we will refer to the legacy GUI simply as GUI. When we talk about the new system, it will be referred to as UI or Unity UI, just so you don't get mixed up when reading. When looking around the Web (or even in the Unity support forums), you may hear about or see references to uGUI, which was the development codename for the new Unity UI system.

GUI controls

The legacy GUI controls provide basic and stylized controls for use in your titles. All legacy GUI controls are drawn during the GUI rendering phase from the built-in OnGUI method. In the sample that accompanies this title, there are examples of all the controls in the Assets\BasicGUI.cs script.

For GUI controls to function, a camera in the scene must have the GUILayer component attached to it. It is there by default on any Camera in a scene, so for most of the time you won't notice it.
However, if you have removed it, then you will have to add it back for the GUI to work. The component is just the hook for the OnGUI delegate handler, to ensure it is called each frame.

Like the Update method in scripts, the OnGUI method can be called several times per frame if rendering is slowing things down. Keep this in mind when building your own legacy GUI scripts.

The controls that are available in the legacy GUI are:

Label
Texture
Button
Text fields (single/multiline and password variant)
Box
Toolbars
Sliders
ScrollView
Window

So let's go through them in more detail.

All the following code is implemented in the sample project in the basic GUI script located in the Assets\Scripts folder of the downloadable code. To experiment yourself, create a new project, scene, and script, placing the code for each control in the script, and attach the script to the camera (by dragging it from the project view onto the Main Camera GameObject in the scene hierarchy). You can then either run the project or adorn the class in the script with the [ExecuteInEditMode] attribute to see it in the game view.

The Label control

Most GUI systems start with a Label control; this simply provides a stylized control to display read-only text on the screen. It is initiated by including the following OnGUI method in your script:

    void OnGUI()
    {
        GUI.Label(new Rect(25, 15, 100, 30), "Label");
    }

This results in the following on-screen display:

The Label control supports altering its font settings through the use of the guiText GameObject property (guiText.font) or GUIStyles. To set guiText.font in your script, you would simply apply the following, either in the Awake/Start functions or before drawing the next section of text you want drawn in another font:

    public Font myFont = new Font("arial");
    guiText.font = myFont;

You can also set the myFont property in the Inspector using an imported font.

The Label control forms the basis for all controls that display text, and as such, all other controls inherit from it and have the same behaviors for styling the displayed text.

The Label control also supports using a Texture for its contents, but not both text and a texture at the same time. However, you can layer Labels and other controls on top of each other to achieve the same effect (controls are drawn implicitly in the order they are called), for example:

    public Texture2D myTexture;

    void Start()
    {
        myTexture = new Texture2D(125, 15);
    }

    void OnGUI()
    {
        // Draw a texture
        GUI.Label(new Rect(125, 15, 100, 30), myTexture);
        // Draw some text on top of the texture using a label
        GUI.Label(new Rect(125, 15, 100, 30), "Text overlay");
    }

You can override the order in which controls are drawn by setting GUI.depth = /*<depth number>*/; in between calls; however, I would advise against this unless you have a desperate need.

The texture will then be drawn to fit the dimensions of the Label field; by default, it scales on the shortest dimension appropriately. This too can be altered using GUIStyle to alter the fixed width and height, or even its stretch characteristics.

Texture drawing

Not specifically a control in itself, the GUI framework also gives you the ability to simply draw a Texture to the screen. Granted, there is little difference between using the DrawTexture function and a Label with a texture or any other control (just another facet of the evolution of the legacy GUI).
This is, in effect, the same as the previous Label control, but it only draws a texture, for example:

    public Texture2D myTexture;

    void Start()
    {
        myTexture = new Texture2D(125, 15);
    }

    void OnGUI()
    {
        GUI.DrawTexture(new Rect(325, 15, 100, 15), myTexture);
    }

Note that in all the examples providing a Texture, I have provided a basic template to initialize an empty texture. In reality, you would be assigning a proper texture to be drawn.

You can also provide scaling and alpha blending values when drawing the texture to make it better fit in the scene, including the ability to control the aspect ratio that the texture is drawn in. A warning though: when you scale the image, it affects the rendering properties for that image under the legacy GUI system. Scaling the image can also affect its drawn position; you may have to offset the position it is drawn at sometimes. For example:

    public Texture2D myTexture;

    void Start()
    {
        myTexture = new Texture2D(125, 15);
    }

    void OnGUI()
    {
        GUI.DrawTexture(new Rect(325, 15, 100, 15), myTexture,
            ScaleMode.ScaleToFit, true, 0.5f);
    }

This will do its best to draw the source texture within the drawn area, with alpha blending (true) and an aspect ratio of 0.5. Feel free to play with these settings to get your desired effect.

This is useful in certain scenarios in your game when you want a simple way to draw a full-screen image on top of all the 3D/2D elements, such as a pause or splash screen. However, like the Label control, it does not receive any input events (see the Button control for that).

There is also a variant of the DrawTexture function called DrawTextureWithTexCoords. This allows you to pick not only where you want the texture drawn on the screen, but also which part of the source texture you want to draw, for example:

    public Texture2D myTexture;

    void Start()
    {
        myTexture = new Texture2D(125, 15);
    }

    void OnGUI()
    {
        GUI.DrawTextureWithTexCoords(new Rect(325, 15, 100, 15),
            myTexture, new Rect(10, 10, 50, 5));
    }

This will pick a region from the source texture (myTexture) 50 pixels wide by 5 pixels high, starting from position 10, 10 on the texture. It will then draw this to fit the Rect region specified.

However, the DrawTextureWithTexCoords function cannot perform scaling; it can only perform alpha blending! It will simply draw the selected texture region to fit the size specified in the initial Rect.

DrawTextureWithTexCoords has also been used to draw individual sprites using the legacy GUI, which has no notion of what a sprite is.

The Button control

Unity also provides a Button control, which comes in two variants. The basic Button control only supports a single click, whereas RepeatButton supports holding down the button.
They are both instantiated the same way, by using an if statement to capture when the button is clicked, as shown in the following script:

    void OnGUI()
    {
        if (GUI.Button(new Rect(25, 40, 120, 30), "Button"))
        {
            // The button has been clicked; holding does nothing
        }
        if (GUI.RepeatButton(new Rect(170, 40, 170, 30), "RepeatButton"))
        {
            // The button has been clicked or is held down
        }
    }

Each will result in a simple button on screen as follows:

The controls also support using a Texture for the button content by providing a texture value for the second parameter, as follows:

    public Texture2D myTexture;

    void Start()
    {
        myTexture = new Texture2D(125, 15);
    }

    void OnGUI()
    {
        if (GUI.Button(new Rect(25, 40, 120, 30), myTexture)) { }
    }

Like the Label, the font of the text can be altered using GUIStyle or by setting the guiText property of the GameObject. It also supports using textures in the same way as the Label. The easiest way to look at this control is that it is a Label that can be clicked.

The Text control

Just as there is a need to display text, there is also a need for the user to be able to enter text, and the legacy GUI provides the following controls to do just that:

    Control        Description
    TextField      This is a basic text box. It supports a single line of text and no new
                   lines (although, if the text contains end-of-line characters, it will
                   draw the extra lines down).
    TextArea       This is an extension of TextField that supports entering multiple
                   lines of text; new lines will be added when the user hits the enter key.
    PasswordField  This is a variant of TextField; however, it won't display the value
                   entered, replacing each character with a replacement character.
                   Note that the password itself is still stored in text form, and if you
                   are storing users' passwords, you should encrypt/decrypt the actual
                   password when using it. Never store passwords in plain text.

Using the TextField control is simple; it returns the final state of the value that has been entered, and you have to pass that same variable as a parameter for the current text to be displayed. To use the controls, you apply them in script as follows:

    string textString1 = "Some text here";
    string textString2 = "Some more text here";
    string textString3 = "Even more text here";

    void OnGUI()
    {
        textString1 = GUI.TextField(new Rect(25, 100, 100, 30), textString1);
        textString2 = GUI.TextArea(new Rect(150, 100, 200, 75), textString2);
        textString3 = GUI.PasswordField(new Rect(375, 100, 90, 30), textString3, '*');
    }

A note about strings in Unity scripts: strings are immutable. Every time you change their value, a new string is created in memory, so having the textString variables declared at the class level is a lot more memory efficient. If you declare them in the OnGUI method, they will generate garbage (wasted memory) in each frame. Worth keeping in mind.

When displayed, the textbox by default looks like this:

As with the Label and Button controls, the font of the text displayed can be altered using either a GUIStyle or the guiText GameObject property. Note that text will overflow within the field if it is too large for the display area, but it will not be drawn outside the TextField dimensions. The same goes for multiple lines.

The Box control

In the midst of all the controls is a general-purpose control that seemingly can be used for a variety of purposes. Generally, it's used for drawing a background/shaded area behind all other controls.
The Box control implements most of the features mentioned in the preceding controls (Label, Texture, and Text) in a single control with the same styling and layout options. It supports text and images as the other controls do. You can draw it with its own content as follows:

    void OnGUI()
    {
        GUI.Box(new Rect(350, 350, 100, 130), "Settings");
    }

This gives you the following result:

Alternatively, you can use it to decorate the background of other controls, for example:

    private string textString = "Some text here";

    void OnGUI()
    {
        GUI.Box(new Rect(350, 350, 100, 130), "Settings");
        GUI.Label(new Rect(360, 370, 80, 30), "Label");
        textString = GUI.TextField(new Rect(360, 400, 80, 30), textString);
        if (GUI.Button(new Rect(360, 440, 80, 30), "Button")) { }
    }

Note that using the Box control does not affect any controls you draw upon it; it is drawn as a completely separate control. This statement will be made clearer when you look at the Layout controls later in this article. The example will draw the box background and the Label, Text, and Button controls on top of the box area, and looks like this:

The Box control can be useful to highlight groups of controls or to provide a simple background (alternatively, you can use an image instead of just text and color). As with the other controls, the Box control supports styling using GUIStyle.

The Toggle/checkbox control

If checking on / checking off is your thing, then the legacy GUI also has a checkbox control for you, useful for those times when you need to visualize the on/off state of an option.

Like the TextField control, you pass the variable that manages the Toggle state as a parameter, and it returns the new value (if it changes), so it is applied in code as follows:

    bool blnToggleState = false;

    void OnGUI()
    {
        blnToggleState = GUI.Toggle(new Rect(25, 150, 250, 30),
            blnToggleState, "Toggle");
    }

This results in the following on-screen result:

As with the Label and Button controls, the font of the text displayed can be altered using either a GUIStyle or the guiText GameObject property.

Toolbar panels

The basic GUI controls also come with some very basic automatic layout panels. The first handles an arrangement of buttons; however, these buttons are grouped and only one can be selected at a time. As with other controls, the style of the buttons can be altered using a GUIStyles definition, including fixing the width of the buttons and their spacing.

There are two layout options available; these are:

The Toolbar control
The SelectionGrid control

The Toolbar control

The Toolbar control arranges buttons in a horizontal pattern (vertical is not supported).

Note that it is possible to fake a vertical toolbar by using a selection grid with just one item per row. See the SelectionGrid section later in this article for more details.

As with other controls, the Toolbar returns the index of the currently selected button in the toolbar. The buttons are also the same as the base Button control, so it also offers options to use either text or images for the button content. The RepeatButton control, however, is not supported.
To implement the toolbar, you specify an array of the content you wish to display in each button and the integer value that controls the selected button, as follows:

    private int toolbarInt;
    private string[] toolbarStrings;

    void Start()
    {
        toolbarInt = 0;
        toolbarStrings = new string[] { "Toolbar1", "Toolbar2", "Toolbar3" };
    }

    void OnGUI()
    {
        toolbarInt = GUI.Toolbar(new Rect(25, 200, 200, 30),
            toolbarInt, toolbarStrings);
    }

The main difference from the preceding controls is that you have to pass the currently selected index value into the control, and it then returns the new value. The Toolbar, when drawn, looks as follows:

As stated, you can also pass an array of textures, and they will be displayed instead of the text content.

The SelectionGrid control

The SelectionGrid control is a customization of the standard Toolbar control; it is able to arrange the buttons in a grid layout and resize the buttons appropriately to fill the target area.

In code, SelectionGrid is used in a very similar format to the Toolbar code shown previously, for example:

    private int selectionGridInt;
    private string[] selectionStrings;

    void Start()
    {
        selectionGridInt = 0;
        selectionStrings = new string[] { "Grid 1", "Grid 2", "Grid 3", "Grid 4" };
    }

    void OnGUI()
    {
        selectionGridInt = GUI.SelectionGrid(new Rect(250, 200, 200, 60),
            selectionGridInt, selectionStrings, 2);
    }

The main difference between SelectionGrid and Toolbar in code is that with SelectionGrid you can specify the number of items in a single row, and the control will automatically lay out the buttons accordingly (unless overridden using GUIStyle). The preceding code will result in the following layout:

On their own, they provide a little more flexibility than just using buttons alone.

The Slider/Scrollbar controls

When you need to control a range in your game's GUI, or add a handle to control moving properties between two values, like moving an object around in your scene, this is where the Slider and Scrollbar controls come in. They provide two similar out-of-the-box implementations that give a scrollable region and a handle to control the value behind the control. Here they are presented side by side:

The slimmer Slider and chunkier Scrollbar controls can both work in either horizontal or vertical modes and have presets for the minimum and maximum values allowed.

Slider control code

In code, the Slider control is represented as follows:

    private float fltSliderValue = 0.5f;

    void OnGUI()
    {
        fltSliderValue = GUI.HorizontalSlider(new Rect(25, 250, 100, 30),
            fltSliderValue, 0.0f, 10.0f);
        fltSliderValue = GUI.VerticalSlider(new Rect(150, 250, 25, 50),
            fltSliderValue, 10.0f, 0.0f);
    }

Scrollbar control code

In code, the Scrollbar control is represented as follows:

    private float fltScrollerValue = 0.5f;

    void OnGUI()
    {
        fltScrollerValue = GUI.HorizontalScrollbar(new Rect(25, 285, 100, 30),
            fltScrollerValue, 1.0f, 0.0f, 10.0f);
        fltScrollerValue = GUI.VerticalScrollbar(new Rect(200, 250, 25, 50),
            fltScrollerValue, 1.0f, 10.0f, 0.0f);
    }

Like Toolbar and SelectionGrid, you are required to pass in the current value, and the control returns the updated value. However, unlike all the other controls, you actually have two style points, so you can style the bar and the handle separately, giving you a little more freedom and control over the look and feel of the sliders.

Normally, you would only use the Slider control; the main reason the Scrollbar is available is that it forms the basis for the next control, the ScrollView control.
The ScrollView control

The last of the displayable controls is the ScrollView control, which gives you masking abilities over GUI elements with optional horizontal and vertical scrollbars. Simply put, it allows you to define an area for controls that is larger than the window it is viewed through on the screen, for example:

Example list implementation using a ScrollView control.

Here we have a list with many items that go beyond the drawable area of the ScrollView control. We then have the two scrollbars to move the content of the scroll viewer up/down and left/right to change the view. The background content is hidden behind a viewable mask that is the width and height of the ScrollView control's main window.

Styling the control is a little different, as there is no base style for it; it relies on the currently set default GUISkin. You can however set separate GUIStyles for each of the scrollbars, but only for the whole scrollbar, not its individual parts (bar and handle). If you don't specify styles for each scrollbar, they will revert to the base GUIStyle.

Implementing a ScrollView is fairly easy, for example:

1. Define the visible area along with a virtual background layer where the controls will be drawn, using a BeginScrollView function.
2. Draw your controls in the virtual area. Any GUI drawing between the ScrollView calls is drawn within the scroll area. Note that 0,0 in the ScrollView is the top-left of the ScrollView area, not the top-left corner of the screen.
3. Complete it by closing the control with the EndScrollView function.

For example, the previous example view was created with the following code:

    private Vector2 scrollPosition = Vector2.zero;
    private bool blnToggleState = false;

    void OnGUI()
    {
        scrollPosition = GUI.BeginScrollView(
            new Rect(25, 325, 300, 200),
            scrollPosition,
            new Rect(0, 0, 400, 400));

        for (int i = 0; i < 20; i++)
        {
            // Add new line items to the background
            addScrollViewListItem(i, "I'm listItem number " + i);
        }

        GUI.EndScrollView();
    }

    // Simple function to draw each list item, a label and checkbox
    void addScrollViewListItem(int i, string strItemName)
    {
        GUI.Label(new Rect(25, 25 + (i * 25), 150, 25), strItemName);
        blnToggleState = GUI.Toggle(
            new Rect(175, 25 + (i * 25), 100, 25),
            blnToggleState, "");
    }

In this code, we define a simple function (addScrollViewListItem) to draw a list item (consisting of a label and checkbox). We then begin the ScrollView control by passing the visible area (the first Rect parameter), the starting scroll position, and finally the virtual area behind the control (the second Rect parameter). Then we use that to draw 20 list items inside the virtual area of the ScrollView control using our helper function, before finishing off and closing the control with the EndScrollView command.

If you want to, you can also nest ScrollView controls within each other.

The ScrollView control also has some actions to control its use, like the ScrollTo command. This command will move the visible area to the coordinates of the virtual layer, bringing it into focus. (The coordinates for this are based on the top-left position of the virtual layer; 0,0 being top-left.) To use the ScrollTo function on a ScrollView, you must use it within the Begin and End ScrollView commands.
For example, we can add a button in the ScrollView to jump to the top-left of the virtual area:

    public Vector2 scrollPosition = Vector2.zero;

    void OnGUI()
    {
        scrollPosition = GUI.BeginScrollView(
            new Rect(10, 10, 100, 50),
            scrollPosition,
            new Rect(0, 0, 220, 10));

        if (GUI.Button(new Rect(120, 0, 100, 20), "Go to Top Left"))
            GUI.ScrollTo(new Rect(0, 0, 100, 20));

        GUI.EndScrollView();
    }

You can also additionally turn the scrollbars on either side of the control on or off by specifying the BeginScrollView statement's alwaysShowHorizontal and alwaysShowVertical properties; these are highlighted here in an updated GUI.BeginScrollView call:

    Vector2 scrollPosition = Vector2.zero;
    bool ShowVertical = false;   // turn off vertical scrollbar
    bool ShowHorizontal = false; // turn off horizontal scrollbar

    void OnGUI()
    {
        scrollPosition = GUI.BeginScrollView(
            new Rect(25, 325, 300, 200),
            scrollPosition,
            new Rect(0, 0, 400, 400),
            ShowHorizontal,
            ShowVertical);
        GUI.EndScrollView();
    }

Rich Text Formatting

Now, having just plain text everywhere would not look that great and would likely force developers to create images for all the text on their screens (granted, a fair few still do so for effect). However, Unity does provide a way to enable richer text display using a style akin to HTML wherever you specify text on a control (only for label and display purposes; getting it to work with input fields is not recommended). In this HTML style of writing text, we have the following tags we can use to liven up the text displayed.

The <b> tag gives text a Bold format, for example:

    The <b>quick</b> brown <b>Fox</b> jumped over the <b>lazy Frog</b>

This would result in: The quick brown Fox jumped over the lazy Frog (with the tagged words drawn in bold).

The <i> tag will give text an Italic format, for example:

    The <b><i>quick</i></b> brown <b>Fox</b> <i>jumped</i> over the <b>lazy Frog</b>

This would result in: The quick brown Fox jumped over the lazy Frog (with the tagged words italicized).

As you can probably guess, the <size> tag will alter the Size of the text it surrounds. For reference, the default size for the font is set by the font itself. For example:

    The <b><i>quick</i></b> <size=50>brown <b>Fox</b></size> <i>jumped</i> over the <b>lazy Frog</b>

This would result in the words brown and Fox being drawn at size 50.

Lastly, you can specify different colors for text surrounded by the <color> tag. The color itself is denoted using its 8-digit RGBA hex value, for example:

    The <b><i>quick</i></b> <size=50><color=#a52a2aff>brown</color> <b>Fox</b></size> <i>jumped</i> over the <b>lazy Frog</b>

Note that the color is defined using normal RGBA color space notation (http://en.wikipedia.org/wiki/RGBA_color_space) in hexadecimal form with two characters per channel, for example, RRGGBBAA. The color property also supports the shorter RGB color space, which is the same notation but without the A (alpha) component, for example, RRGGBB. The preceding code would result in the word brown being drawn in the color brown. (If you are reading this in print, the previous word brown is in the color brown.)

You can also use a color name to reference it, but the palette is quite limited; for more details, see the Rich Text manual reference page at http://docs.unity3d.com/Manual/StyledText.html.
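If you want to try these tags with the controls covered earlier, bear in mind that tag parsing is controlled by the richText flag of the GUIStyle in use. A minimal example (mine, not from the book) looks like this:

    void OnGUI()
    {
        // Copy the default label style and make sure rich text is enabled
        GUIStyle richStyle = new GUIStyle(GUI.skin.label);
        richStyle.richText = true;

        GUI.Label(new Rect(25, 15, 450, 40),
            "The <b>quick</b> <color=#a52a2aff>brown</color> <i>Fox</i>",
            richStyle);
    }

Passing the style explicitly keeps the behavior predictable regardless of what the current GUISkin's defaults happen to be.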
For text meshes, there are two additional tags:

<material></material>
<quad></quad>

These only apply when associated with an existing mesh. The material is one of the materials assigned to the mesh, which is accessed using the mesh index number (the array of materials applied to the mesh). When applied to a quad, you can also specify a size, position (x, y), width, and height for the text. The text mesh isn't well documented and is mentioned here only for reference; as we delve deeper into the new UI system, we will find much better ways of achieving this.

Summary

So now we have a deep appreciation of the past and a glimpse into the future. One thing you might realize is that there is no one-stop shop when it comes to Unity; each feature has its pros and cons, and each has its uses. It all comes down to what you want to achieve and the way you want to achieve it; never dismiss anything just because it's old. In this article, we covered the legacy GUI controls. The new UI system promises to be a fun ride, and once we are through that, we can move on to actually building some UI and then placing it in your game scenes in weird and wonderful ways. Now stop reading this and turn the page already!!

Resources for Article: Further resources on this subject: What's Your Input? [article] Parallax scrolling [article] Unity Networking – The Pong Game [article]

Buildbox 2 Game Development: peek-a-boo

Packt
23 Sep 2016
20 min read
In this article, Ty Audronis, author of the book Buildbox 2 Game Development, teaches the reader the Buildbox 2 game development environment by example. The following excerpts from the book should help you gain an understanding of the teaching style and the feel of the book. The largest example we give is a game called Ramblin' Rover (a motocross-style game that uses everything from the most basic to the most advanced features of Buildbox). Let's take a quick look. (For more resources related to this topic, see here.)

Making the Rover Jump

As we've mentioned before, we're making a hybrid game. That is, it's a combination of a motocross game, a platformer, and a side-scrolling shooter game. Our initial rover will not be able to shoot at anything (we'll save this feature for the next upgraded rover that anyone can buy with in-game currency). But this rover will need to jump in order to make the game more fun. As we know, NASA has never made a rover for Mars that jumps. But if they did do this, how would they do it? The surface of Mars is a combination of dust and rocks, so the surface conditions vary greatly in both traction and softness. One viable way is to make the rover move in the same way a spacecraft manoeuvres (using little gas jets). And since the gravity on Mars is lower than that on Earth, this seems legit enough to include it in our game.

While in our Mars Training Ground world, open the character properties for Training Rover. Drag the animated PNG sequence located in our Projects/RamblinRover/Characters/Rover001-Jump folder (a small four-frame animation) into the Jump Animation field. Now we have an animation of a jump-jet firing when we jump. We just need to make our rover actually jump. Your Properties window should look like the following screenshot:

The preceding screenshot shows the relevant sections of the character's properties window

We're now going to revisit the Character Gameplay Settings section. Scroll the Properties window all the way down to this section. Here's where we actually configure a few settings in order to make the rover jump. The previous screenshot shows the section as we're going to set it up. You can configure your settings similarly. The first setting we are considering is Jump Force. You may notice that the vertical force is set to 55. Since our gravity is -20 in this world, we need enough force to not only counteract the gravity, but also to give us a decent height (about half the screen). A good rule is to make our Jump Force at least 2x our Gravity.

Next is Jump Counter. We've set it to 1. By default, it's set to 0. This actually means infinity. When Jump Counter is set to 0, there is no limit to how many times a player can use the jump boost… They could effectively ride the top of the screen using the jump boost, such as with a flappy-bird control. So, we set it to 1 in order to limit the jumps to one at a time. There is also a strange oddity with Buildbox that we can exploit with this. The jump counter resets only after the rover hits the ground. But, there's a funny thing… The rover itself never actually touches the ground (unless it crashes), only the wheels do. There is one other way to reset the jump counter: by doing a flip. What this means is that once players use their jump up, the only way to reset it is to do a flip-trick off a ramp. Add a level of difficulty and excitement to the game using a quirk of the development software!
We could trick the software into believing that the character is simply close enough to the ground to reset the counter by increasing Ground Threshold to the distance that the body is from the ground when the wheels have landed. But why do this? It's kind of cool that a player has to do a trick to reset the jump jets. Finally, let's untick the Jump From Ground checkbox. Since we're using jets for our boost, it makes sense that the driver could activate them while in the air. Plus, as we've already said, the body never meets the ground. Again, we could raise the ground threshold, but let's not (for the reasons stated previously). Awesome! Go ahead and give it a try by previewing the level. Try jumping on the small ramp that we created, which is used to get on top of our cave. Now, instead of barely clearing it, the rover will easily clear it, and the player can then reset the counter by doing a flip off the big ramp on top.

Making a Game Over screen

This exercise will show you how to make some connections and new nodes using Game Mind Map. The first thing we're going to want is an event listener to sense when a character dies. It sounds complex, and if we were coding a game, this would take several lines of code to accomplish. In Buildbox, it's a simple drag-and-drop method. If you double-click on the Game Field UI node, you'll be presented with the overlay for the UI and controls during gameplay. Since this is a basic template, you are actually presented with a blank screen. This template is for you to play around with on a computer, so no controls are on the screen. Instead, it is assumed that you would use keyboard controls to play the demo game. This is why the screen looks blank:

There are some significant differences between the UI editor and the World editor. You'll notice that the Character tab from the asset library is missing, and there is a timeline editor at the bottom. We'll get into how to use this timeline later. For now, let's keep things simple and add our Game Over sensor. If you expand the Logic tab in the asset library, you'll find the Event Observer object. You can drag this object anywhere onto the stage. It doesn't even have to be in the visible window (the dark area in the center of the stage). So long as it's somewhere on the stage, the game can use this logic asset. If you do put it on the visible area of the stage, don't worry; it's an invisible asset, and won't show in your game. While the Event Observer is selected on the stage, you'll notice that its properties pop up in the properties window (on the right side of the screen). By default, the Game Over type of event is selected. But if you select this drop-down menu, you'll notice a ton of different event types that this logic asset can handle. Let's leave all of the properties at their default values (except the name; change this to Game Over) and go back to Game Mind Map (the top-left button):

Do you notice anything different? The Game Field UI node now has a Game Over output. Now, we just need a place to send this output. Right-click on the blank space of the grid area. Now you can either create a new world or a new UI. Select Add New UI and you'll see a new green node titled New UI1. This new UI will be your Game Over screen when a character dies. Before we can use this new node, it needs to be connected to the Game Over output of Game Field UI. This process is exceedingly simple.
Just hold down your left mouse button on the Game Over output's dark dot, and drag it to the New UI1's Load dark dot (on the left side of the New UI1 node). Congratulations, you've just created your first connected node. We're not done yet, though. We need to make this Game Over screen link back to restart the game. First, by selecting the New UI1 node, change its name using the parameters window (on the right of the screen) to Game Over UI. Make sure you hit your Enter key; this will commit the changed name. Now double-click on the Game Over UI node so we can add some elements to the screen. You can't have a Game Over screen without the words Game Over, so let's add some text. So, we've pretty much completed the game field (except for some minor items that we'll address quite soon). But believe it or not, we're only halfway there! In this article, we're going to finally create our other two rovers, and we'll test and tweak our scenes with them. We'll set up all of our menus, information screens, and even a coin shop where we can use in-game currency to buy the other two rovers, or even use some real-world currency to short-cut and buy more in-game currency. And speaking of monetization, we'll set up two different types of advertising from multiple providers to help us make some extra cash. Or, in the coin-shop, players can pay a modest fee to remove all advertising! Ready? Well, here we go! We got a fever, and the only cure is more rovers! So now that we've created other worlds, we definitely need to set up some rovers that are capable of traversing them. Let's begin with the optimal rover for Gliese. This one is called the K.R.A.B.B. (no, it doesn't actually stand for anything…but the rover looks like a crab, and acronyms look more military-like). Go ahead and drag all of the images in the Rover002-Body folder as characters. Don't worry about the error message. This just tells you that only one character can be on the stage at a time. The software still loads this new character into the library, and that's all we really want at this time anyway. Of course, drag the images in the Rover002-Jump folder to the Jump Animation field, and the LaserShot.png file to the Bullet Animation field. Set up your K.R.A.B.B. with the following settings: For Collision Shape, match this: In the Asset Library, drag the K.R.A.B.B. above the Mars Training Rover. This will make it the default rover. Now, you can test your Gliese level (by soloing each scene) with this rover to make sure it's challenging, yet attainable. You'll notice some problems with the gun destroying ground objects, but we'll solve that soon enough. Now, let's do the same with Rover 003. This one uses a single image for the Default Animation, but an image sequence for the jump. We'll get to the bullet for this one in a moment, but set it up as follows: Collision Shape should look as follows: You'll notice that a lot of the settings are different on this character, and you may wonder what the advantage of this is (since it doesn't lean as much as the K.R.A.B.B.). Well, it's a tank, so the damage it can take will be higher (which we'll set up shortly), and it can do multiple jumps before recharging (five, to be exact). This way, this rover can fly using flappy-bird style controls for short distances. It's going to take a lot more skill to pilot this rover, but once mastered, it'll be unstoppable. Let's move onto the bullet for this rover. 
Click on the Edit button (the little pencil icon) inside Bullet Animation (once you've dragged the missile.png file into the field), and let's add a flame trail. Set up a particle emitter on the missile, and position it as shown in the following screenshots: The image on the left shows the placement of the missile and the particle emitter. On the right, you can see the flame setup. You may wonder why it is pointed in the opposite direction. This will actually make the flames look more realistic (as if they're drifting behind the missile).

Preparing graphic assets for use in Buildbox

Okay, so as I said before, the only graphic assets that Buildbox can use are PNG files. If this was just a simple tutorial on how to make Ramblin' Rover, we could leave it there. But it's not just that. Ramblin' Rover is just an example of how a game is made, but we want to give you all of the tools and base knowledge you need to create all of your own games from scratch. Even if you don't create your own graphic assets, you need to be able to tell anybody creating them for you how you want them. And more importantly, you need to know why. Graphics are absolutely the most important thing in developing a game. After all, you saw how just some eyes and sneakers made a cute character that people would want to see. Graphics create your world. They create characters that people want to succeed. Most importantly, graphics create the feel of your game, and differentiate it from other games on the market.

What exactly is a PNG file?

Anybody remember GIF files? No, not animated GIFs that you see on most chat-rooms and on Facebook (although they are related). Back in the 1990s, a still-frame GIF file was the best way to have a graphics file with a transparent background. GIFs can be used for animation, and can have a number of different purposes. However, GIFs were clunky. How so? Well, they had a type of compression known as lossy. This just means that when compressed, information was lost, and artifacts and noise could pop up and be present. Furthermore, GIFs used indexed colors. This means that anywhere from 2 to 256 colors could be used, and that's why you see something known as banding in GIF imagery. Banding is where something in real life goes from dark to light because of lighting and shadows. In real life, it's a smooth transition known as a gradient. With indexed colors, banding can occur when these various shades are outside of the index. In this case, the colors of these pixels are quantized (or snapped) to the nearest color in the index. The images here show a noisy and banded GIF (left) versus the original picture (right):

So, along came PNGs (Portable Network Graphics is what the name stands for). Originally, the PNG format was what a program called Macromedia Fireworks used to save projects. Now, the same software is called Adobe Fireworks and is part of the Creative Cloud. Fireworks would cut up a graphics file into a table or image map and make areas of the image clickable via hyperlink for HTML web files. PNGs were still not widely supported by web browsers, so it would export the final web files as GIFs or JPEGs. But somewhere along the line, someone realized that the PNG image itself was extremely bandwidth-efficient. So, in the 2000s, PNGs started to see some support on browsers. Up until around 2008, though, Microsoft's Internet Explorer still did not support PNGs with transparency, so some strange CSS hacks needed to be done to utilize them.
Today, though, the PNG file is the most widely used network-based image file. It's lossless, has great transparency, and is extremely efficient. PNGs are very widely used, and this is probably why Buildbox restricts compatibility to this format. Remember, Buildbox can export for multiple mobile and desktop platforms. Alright, so PNGs are great and very compatible. But there are multiple flavours of PNG files. So, what differentiates them?

What do bit-ratings mean?

When dealing with bit-ratings, you have to understand that when you hear 8-bit image and 24-bit image, it may be talking about two different types of rating, or even exactly the same type of image. Confused? Good, because when dealing with a graphics professional to create your assets, you're going to have to be a lot more specific, so let's give you a brief education in this. Your typical image is 8 bits per channel (8 bpc), or 24 bits total (because there are three channels: red, green, and blue). This is also what they mean by a 16.7 million-color image. The math is pretty simple. A bit is either 0 or 1. 8 bits may look something like 01100110. This means that there are 256 possible combinations on that channel. Why? Because to calculate the number of possibilities, you take the number of possible values per slot to the power of the number of slots. 0 or 1; that's 2 possibilities, and 8 bits is 8 slots. 2x2x2x2x2x2x2x2 (2 to the 8th power) is 256. To combine colors on a pixel, you multiply the possibilities, such as 256x256x256 (which is 16.7 million). This is how they know that there are 16.7 million possible colors in an 8 bpc or 24-bit image. So saying 8-bit may mean per channel or overall. This is why it's extremely important to add the word "channel" if that's what you mean. Finally, there is a fourth channel called alpha. The alpha channel is the transparency channel. So when you're talking about a 24-bit PNG with transparency, you're really talking about a 32-bit image. Why is this important to know? This is because some graphics programs (such as Photoshop) have 24-bit PNG as an option with a checkbox for transparency. But some other programs (such as the 3D software we used, called Lightwave) have an option for a 24-bit PNG and a 32-bit PNG. This is essentially the same as the Photoshop options, but with different names. By understanding what these bits per channel are and what they do, you can navigate your image-creating software's options better. So, what's an 8-bit PNG, and why is it so important to differentiate it from an 8-bit per channel PNG (or 24-bit PNG)? It is because an 8-bit PNG is highly compressed. Much like a GIF, it uses indexed colors. It also uses a great algorithm to "dither" or blend the colors to fill them in to avoid banding. 8-bit PNG files are extremely efficient on resources (that is, they are much smaller files), but they still look good, unless they have transparency. Because they are so highly compressed, the alpha channel is included in the 8 bits. So, if you use 8-bit PNG files for objects that require transparency, they will end up with a white ghosting effect around them and look terrible on screen, much like a weather report where the weather reporter's green screen is bad.

So, the rule is…

So, what all this means to you is pretty simple. For objects that require transparency channels, always use 24-bit PNG files with transparency (also called 8 bits per channel, or 32-bit images). For objects that have no transparency (such as block-shaped obstacles and objects), use 8-bit PNG files.
By following this rule, you'll keep your game looking great while avoiding bloating your project files. In the end, Buildbox repacks all of the images in your project into atlases (which we'll cover later) that are 32 bit. However, it's always a good practice to stay lean. If you were a Buildbox 1.x user, you may remember that Buildbox had some issues with DPI (dots per inch) between the standard 72 and 144 on retina displays. This issue is a thing of the past with Buildbox 2.

Image sequences

Think of a film strip. It's just a sequence of still images known as frames. Your standard United States film runs at 24 frames per second (well, really 23.976, but let's just round up for our purposes). Also, in the US, television runs at 30 frames per second (again, 29.97, but whatever…let's round up). Remember that each image in our sequence is a full image with all of the resources associated with it. We can quite literally cut our necessary resources in half by cutting this to 15 frames per second (fps). If you open the content you downloaded, and navigate to Projects/RamblinRover/Characters/Rover001-Body, you'll see that the images are named Rover001-body_001.png, Rover001-body_002.png and so on. The final number indicates the number that should play in the sequence (first 001, then 002, and so on). The animation is really just the satellite dish rotating, and the scanner light in the window rotating as well. But what you'll really notice is that this animation is loopable. All loopable means is that the animation can loop (play over and over again) without you noticing a bump in the footage (the final frame leads seamlessly back to the first). If you're not creating these animations yourself, you'll need to make sure to specify to your graphics professional to make these animations loopable at 15 fps. They should understand exactly what you mean, and if they don't…you may consider finding a new animator.

Recommended software for graphics assets

For the purposes of context (now that you understand more about graphics and Buildbox), a bit of reinforcement couldn't hurt. A key piece of graphics software is the Adobe Creative Cloud subscription (http://www.adobe.com/CreativeCloud). Given its bang for the buck, it just can't be beaten. With it, you'll get Photoshop (which can be used for all graphics assets from your game's icon to obstacles and other objects), Illustrator (which is great for navigational buttons), After Effects (very useful for animated image sequences), Premiere Pro (a video editing application for marketing videos from screen-captured gameplay), and Audition (for editing all your sound). You may also want some 3D software, such as Lightwave, 3D Studio Max, or Maya. This can greatly improve the ability to make characters, enemies, and to create still renders for menus and backgrounds. Most of the assets in Ramblin' Rover were created with the 3D software Lightwave. There are free options for all of these tools. However, there are not nearly as many tutorials and resources available on the web to help you learn and create using these. One key thing to remember when using free software: if it's free…you're the product. In other words, some benefits come with paid software, such as better support, and being part of the industry standard. Free software seems to be in a perpetual state of "beta testing." If using free software, read your End User License Agreement (known as a EULA) very carefully.
Some software may require you to credit them in some way for the privilege of using their software for profit. They may even lay claim to part of your profits. Okay, let's get to actually using our graphics in Ramblin' Rover…

Summary

See? It's not that tough to follow. By using plain-English explanations combined with demonstrations of some significant and intricate processes, you'll be taken on a journey meant to stimulate your imagination and educate you on how to use the software. Along with the book come the complete project files and assets to help you follow along the entire way through the build process. You'll be making your own games in no time!

Resources for Article: Further resources on this subject: What Makes a Game a Game? [article] Alice 3: Controlling the Behavior of Animations [article] Building a Gallery Application [article]

Blizzard comes under fire after banning pro-player for expressing support for Hong Kong protests

Sugandha Lahoti
10 Oct 2019
6 min read
Update: The article has now been updated to include Blizzard's press release about relaxing the ban on the pro player.

Blizzard has been under fire since last weekend after the game publisher issued a year-long ban to a Hearthstone player who expressed support for the Hong Kong protestors during a competition live stream. The incident occurred on Sunday when Ng “Blitzchung” Wai Chung voiced support for the protesters in Hong Kong in a post-game interview. Blitzchung said, “Liberate Hong Kong. Revolution of our age!” The ban is effective from October 5th and forbids Blitzchung from participating in any tournaments for an entire year. Blizzard is also withholding any prize money he would have earned from competing in the tournament. Blizzard has also terminated its contract with the two casters who were interviewing the competitor.

Explaining the reason behind this ban, Blizzard issued a statement: “Per the competition rule, players aren’t allowed to do anything that brings [them] into public disrepute, offends a portion or group of the public, or otherwise damages [Blizzard’s] image. While we stand by one’s right to express individual thoughts and opinions, players and other participants that elect to participate in our esports competitions must abide by the official competition rules.”

Game players, US politicians, and Blizzard employees are outraged

After the ban of the Hearthstone pro, Blizzard found itself on the receiving end of a major backlash from video game players, US politicians, and Blizzard employees. On Tuesday, a small group of Blizzard employees walked out of work to protest the company’s actions. The demonstration featured about 12-30 employees from multiple departments, who gathered around the Orc warrior statue in the center of the company’s main campus in Irvine, California. The Daily Beast spoke with a few employees. “The action Blizzard took against the player was pretty appalling but not surprising,” said a longtime Blizzard employee. “Blizzard makes a lot of money in China, but now the company is in this awkward position where we can’t abide by our values.” “I’m disappointed,” another current Blizzard employee said. “We want people all over the world to play our games, but no action like this can be made with political neutrality.”

US Senators Marco Rubio and Ron Wyden also chastised the actions of Blizzard on Twitter. “Blizzard shows it is willing to humiliate itself to please the Chinese Communist Party,” Senator Wyden tweeted. “No American company should censor calls for freedom to make a quick buck.” “Recognize what’s happening here,” Senator Rubio said on Twitter. “People who don’t live in #China must either self-censor or face dismissal & suspensions. China using access to the market as leverage to crush free speech globally. Implications of this will be felt long after everyone in U.S. politics today is gone.”

https://twitter.com/marcorubio/status/1181556058659135488

Blizzard’s own forums and subreddits were also bombarded with angry messages denouncing the ban. The r/Blizzard subreddit went down for a few hours on Tuesday after the board was drowned with posts calling for players to boycott Blizzard and its games like World of Warcraft, Overwatch, and Hearthstone. On its Hearthstone board, a redditor, Hinz97, said in a post, “I play [Hearthstone] everyday, I climbed to Legend several times. I spent more than $10k. As a [Hong Konger], I quit [Hearthstone] without consideration.” “I’ve been playing since beta. Good riddance,” Redditor UltimaterializerX said.
“Blizzard CLEARLY only cares about the Chinese market. The censorship of art was bad enough. The censorship of human life is indefensible. Finding videos of what’s going on in Hong Kong is easy and I suggest everyone do so. It’s Tiananmen Square all over again.”

https://twitter.com/Espsilverfire2/status/1182001007976423424

Mark Kern, Team Lead for Vanilla World of Warcraft, tweeted, “This hurts. But until Blizzard reverses their decision on @blitzchungHS, I am giving up playing Classic WoW, which I helped make and helped convince Blizzard to relaunch. There will be no Mark of Kern guild after all.”

Fortnite creator Epic Games released a statement saying that it will not ban players or content creators for political speech: “Epic supports everyone’s right to express their views on politics and human rights. We wouldn’t ban or punish a Fortnite player or content creator for speaking on these topics.”

https://twitter.com/TimSweeneyEpic/status/1181933071760789504

Blizzard has not yet responded to this development or lifted the ban. The Hong Kong protests began in June, and the tech industry has now been caught in the middle of the China-Hong Kong political tussle. In August, Chinese state-run media agencies were caught buying advertisements and promoted tweets on Twitter and Facebook to portray Hong Kong protestors and their pro-democracy demonstrations as violent. Following this revelation, Twitter banned 936 accounts managed by the Chinese state; Facebook removed seven Pages, three Groups, and five Facebook accounts involved in coordinated inauthentic behavior; and Google shut down 210 YouTube channels. More recently, Apple, under pressure from the Chinese government, banned a protest safety app that helps people track the locations of the Hong Kong police, which angered many; a day later, amid the protests, Apple brought it back to the iOS Store. Yesterday, according to Quartz investigations editor John Keefe, Apple reportedly removed the Quartz application from the App Store at the request of the Chinese government. Quartz has been covering the Hong Kong protests in detail and has been blocked across all of mainland China.

Update as of Oct 11: After four days of mounting public pressure, Blizzard Entertainment published a press release partially relaxing the ban on the professional player who expressed support for the Hong Kong protestors during a competition live stream. The one-year ban on Ng "blitzchung" has since been changed to a six-month suspension. Additionally, the two Chinese broadcasters who had been fired are now put on a six-month suspension from their jobs. Blizzard President J. Allen Brack also clarified that the company was not under the influence of China. "The specific views expressed by blitzchung were NOT a factor in the decision we made," Brack wrote. "I want to be clear: our relationships in China had no influence on our decision."

Apple bans HKmap.live, a Hong Kong protest safety app, from the iOS Store as it makes people ‘evade law enforcement’. Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests. Telegram faces massive DDoS attack; suspects link to the ongoing Hong Kong protests

OpenGL 4.0: Using Uniform Blocks and Uniform Buffer Objects

Packt
03 Aug 2011
10 min read
If your OpenGL/GLSL program involves multiple shader programs that use the same uniform variables, one has to manage the variables separately for each program. Uniform blocks were designed to ease the sharing of uniform data between programs. In this article by David Wolff, author of OpenGL 4.0 Shading Language Cookbook, we will create a buffer object for storing the values of all the uniform variables, and bind the buffer to the uniform block. Then when changing programs, the same buffer object need only be re-bound to the corresponding block in the new program. (For more resources on this subject, see here.)

Uniform locations are generated when a program is linked, so the locations of the uniforms may change from one program to the next. The data for those uniforms may have to be re-generated and applied to the new locations. A uniform block is simply a group of uniform variables defined within a syntactical structure known as a uniform block. For example, in this recipe, we'll use the following uniform block:

uniform BlobSettings {
  vec4 InnerColor;
  vec4 OuterColor;
  float RadiusInner;
  float RadiusOuter;
};

This defines a block with the name BlobSettings that contains four uniform variables. With this type of block definition, the variables within the block are still part of the global scope and do not need to be qualified with the block name. The buffer object used to store the data for the uniforms is often referred to as a uniform buffer object. We'll see that a uniform buffer object is simply just a buffer object that is bound to a certain location. For this recipe, we'll use a simple example to demonstrate the use of uniform buffer objects and uniform blocks. We'll draw a quad (two triangles) with texture coordinates, and use our fragment shader to fill the quad with a fuzzy circle. The circle is a solid color in the center, but at its edge, it gradually fades to the background color, as shown in the following image.

Getting ready

Start with an OpenGL program that draws two triangles to form a quad. Provide the position at vertex attribute location 0, and the texture coordinate (0 to 1 in each direction) at vertex attribute location 1 (see: Sending data to a shader using per-vertex attributes and vertex buffer objects). We'll use the following vertex shader:

#version 400

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexTexCoord;

out vec3 TexCoord;

void main()
{
  TexCoord = VertexTexCoord;
  gl_Position = vec4(VertexPosition, 1.0);
}

The fragment shader contains the uniform block, and is responsible for drawing our fuzzy circle:

#version 400

in vec3 TexCoord;
layout (location = 0) out vec4 FragColor;

uniform BlobSettings {
  vec4 InnerColor;
  vec4 OuterColor;
  float RadiusInner;
  float RadiusOuter;
};

void main() {
  float dx = TexCoord.x - 0.5;
  float dy = TexCoord.y - 0.5;
  float dist = sqrt(dx * dx + dy * dy);
  FragColor = mix( InnerColor, OuterColor,
                   smoothstep( RadiusInner, RadiusOuter, dist ) );
}

The uniform block is named BlobSettings. The variables within this block define the parameters of our fuzzy circle. The variable OuterColor defines the color outside of the circle. InnerColor is the color inside of the circle.
RadiusInner is the radius defining the part of the circle that is a solid color (inside the fuzzy edge): the distance from the center of the circle to the inner edge of the fuzzy boundary. RadiusOuter is the outer edge of the fuzzy boundary of the circle (when the color is equal to OuterColor). The code within the main function computes the distance of the texture coordinate to the center of the quad located at (0.5, 0.5). It then uses that distance to compute the color by using the smoothstep function. This function provides a value that smoothly varies between 0.0 and 1.0 when the value of the third argument is between the values of the first two arguments. Otherwise, it returns 0.0 or 1.0 depending on whether it is less than the first or greater than the second, respectively. The mix function is then used to linearly interpolate between InnerColor and OuterColor based on the value returned by the smoothstep function.

How to do it...

In the OpenGL program, after linking the shader program, use the following steps to send data to the uniform block in the fragment shader:

Get the index of the uniform block using glGetUniformBlockIndex.

GLuint blockIndex = glGetUniformBlockIndex(programHandle, "BlobSettings");

Allocate space for the buffer to contain the data for the uniform block. We get the size of the block using glGetActiveUniformBlockiv.

GLint blockSize;
glGetActiveUniformBlockiv(programHandle, blockIndex,
    GL_UNIFORM_BLOCK_DATA_SIZE, &blockSize);
GLubyte * blockBuffer = (GLubyte *) malloc(blockSize);

Query for the offset of each variable within the block. To do so, we first find the index of each variable within the block.

// Query for the offsets of each block variable
const GLchar *names[] = { "InnerColor", "OuterColor",
                          "RadiusInner", "RadiusOuter" };
GLuint indices[4];
glGetUniformIndices(programHandle, 4, names, indices);
GLint offset[4];
glGetActiveUniformsiv(programHandle, 4, indices,
    GL_UNIFORM_OFFSET, offset);

Place the data into the buffer at the appropriate offsets.

GLfloat outerColor[] = {0.0f, 0.0f, 0.0f, 0.0f};
GLfloat innerColor[] = {1.0f, 1.0f, 0.75f, 1.0f};
GLfloat innerRadius = 0.25f, outerRadius = 0.45f;
memcpy(blockBuffer + offset[0], innerColor, 4 * sizeof(GLfloat));
memcpy(blockBuffer + offset[1], outerColor, 4 * sizeof(GLfloat));
memcpy(blockBuffer + offset[2], &innerRadius, sizeof(GLfloat));
memcpy(blockBuffer + offset[3], &outerRadius, sizeof(GLfloat));

Create the OpenGL buffer object and copy the data into it.

GLuint uboHandle;
glGenBuffers( 1, &uboHandle );
glBindBuffer( GL_UNIFORM_BUFFER, uboHandle );
glBufferData( GL_UNIFORM_BUFFER, blockSize, blockBuffer,
    GL_DYNAMIC_DRAW );

Bind the buffer object to the uniform block.

glBindBufferBase( GL_UNIFORM_BUFFER, blockIndex, uboHandle );

How it works...

Phew! This seems like a lot of work! However, the real advantage comes when using multiple programs where the same buffer object can be used for each program. Let's take a look at each step individually. First, we get the index of the uniform block by calling glGetUniformBlockIndex, then we query for the size of the block by calling glGetActiveUniformBlockiv. After getting the size, we allocate a temporary buffer named blockBuffer to hold the data for our block. The layout of data within a uniform block is implementation dependent, and implementations may use different padding and/or byte alignment. So, in order to accurately lay out our data, we need to query for the offset of each variable within the block. This is done in two steps.
First, we query for the index of each variable within the block by calling glGetUniformIndices. This accepts an array of variable names (third argument) and returns the indices of the variables in the array indices (fourth argument). Then we use the indices to query for the offsets by calling glGetActiveUniformsiv. When the fourth argument is GL_UNIFORM_OFFSET, this returns the offset of each variable in the array pointed to by the fifth argument. This function can also be used to query for the size and type; however, in this case we choose not to do so, to keep the code simple (albeit less general). The next step involves filling our temporary buffer blockBuffer with the data for the uniforms at the appropriate offsets. Here we use the standard library function memcpy to accomplish this. Now that the temporary buffer is populated with the data with the appropriate layout, we can create our buffer object and copy the data into the buffer object. We call glGenBuffers to generate a buffer handle, and then bind that buffer to the GL_UNIFORM_BUFFER binding point by calling glBindBuffer. The space is allocated within the buffer object and the data is copied when glBufferData is called. We use GL_DYNAMIC_DRAW as the usage hint here, because uniform data may be changed somewhat often during rendering. Of course, this is entirely dependent on the situation. Finally, we associate the buffer object with the uniform block by calling glBindBufferBase. This function binds to an index within a buffer binding point. Certain binding points are also so-called "indexed buffer targets". This means that the target is actually an array of targets, and glBindBufferBase allows us to bind to one index within the array.

There's more...

If the data for a uniform block needs to be changed at some later time, one can call glBufferSubData to replace all or part of the data within the buffer. If you do so, don't forget to first bind the buffer to the generic binding point GL_UNIFORM_BUFFER.
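As a brief sketch of that update path (reusing the uboHandle and offset array from the steps above; the new color value is just an arbitrary example), updating only the inner color might look something like this:

// Bind to the generic binding point before updating
glBindBuffer( GL_UNIFORM_BUFFER, uboHandle );

// Overwrite just the InnerColor portion of the buffer,
// using the offset queried earlier for that variable
GLfloat newInnerColor[] = {0.9f, 0.2f, 0.2f, 1.0f};
glBufferSubData( GL_UNIFORM_BUFFER, offset[0],
                 4 * sizeof(GLfloat), newInnerColor );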
Using an instance name with a uniform block

A uniform block can have an optional instance name. For example, with our BlobSettings block, we could have used the instance name Blob, as shown here:

uniform BlobSettings {
  vec4 InnerColor;
  vec4 OuterColor;
  float RadiusInner;
  float RadiusOuter;
} Blob;

In this case, the variables within the block are placed within a namespace qualified by the instance name. Therefore, our shader code needs to refer to them prefixed with the instance name. For example:

FragColor = mix( Blob.InnerColor, Blob.OuterColor,
                 smoothstep( Blob.RadiusInner, Blob.RadiusOuter, dist ) );

Additionally, we need to qualify the variable names within the OpenGL code when querying for variable indices. The OpenGL specification says that they must be qualified with the block name (BlobSettings). However, my tests using the ATI Catalyst (10.8) drivers required me to use the instance name (Blob).

Using layout qualifiers with uniform blocks

Since the layout of the data within a uniform buffer object is implementation dependent, it required us to query for the variable offsets. However, one can avoid this by asking OpenGL to use the standard layout std140. This is accomplished by using a layout qualifier when declaring the uniform block. For example:

layout( std140 ) uniform BlobSettings {
  ...
};

The std140 layout is described in detail within the OpenGL specification document (available at http://www.opengl.org). Other options for the layout qualifier that apply to uniform block layouts include packed and shared. The packed qualifier simply states that the implementation is free to optimize memory in whatever way it finds necessary (based on variable usage or other criteria). With the packed qualifier, we still need to query for the offsets of each variable. The shared qualifier guarantees that the layout will be consistent between multiple programs and program stages, provided that the uniform block declaration does not change. If you are planning to use the same buffer object between multiple programs and/or program stages, it is a good idea to use the shared option. There are two other layout qualifiers that are worth mentioning: row_major and column_major. These define the ordering of data within the matrix type variables within the uniform block. One can use multiple qualifiers for a block. For example, to define a block with both the row_major and shared qualifiers, we would use the following syntax:

layout( row_major, shared ) uniform BlobSettings {
  ...
};

Summary

This article covered the topic of Using Uniform Blocks and Uniform Buffer Objects.

Further resources on this subject: OpenGL 4.0: Building a C++ Shader Program Class [Article] Tips and Tricks for Getting Started with OpenGL and GLSL 4.0 [Article] The Basics of GLSL 4.0 Shaders [Article] GLSL 4.0: Using Subroutines to Select Shader Functionality [Article] GLSL 4.0: Discarding Fragments to Create a Perforated Look [Article]

Cooking cupcakes towers

Packt
03 Jan 2017
6 min read
In this article by Francesco Sapio, author of the book Getting Started with Unity 2D Game Development - Second Edition, we will see how to create our towers. This is not an easy task, but by the end we will have acquired a lot of scripting skills. (For more resources related to this topic, see here.)

What a cupcake tower does

First of all, it's useful to write down what we want to achieve and define what exactly a cupcake tower is supposed to do. The best way is to write down a list, to have a clear idea of what we are trying to achieve:

A cupcake tower is able to detect pandas within a certain range.
A cupcake tower shoots a different kind of projectile according to its typology against the pandas within a certain range. Furthermore, among this range, it uses a policy to decide which panda to shoot.
There is a reload time, before the cupcake tower is able to shoot again.
The cupcake tower can be upgraded (into a bigger cupcake!), increasing its stats and therefore changing its appearance.

Scripting the cupcake tower

There are a lot of things to implement. Let's start by creating a new script and naming it CupcakeTowerScript. As we already mentioned for the Projectile Script, in this article we implement the main logic, but of course there is always space to improve.

Shooting at pandas

Even if we don't have enemies yet, we can already start to program the behavior of the cupcake towers to shoot at the enemies. In this article we will learn a bit about using Physics to detect objects within a range. Let's start by defining four variables. The first three are public, so we can set them in the Inspector; the last one is private, since we only need it to check how much time has elapsed. In particular, the first three variables store the parameters of our tower: the projectile prefab, its range, and its reload time. We can write the following:

public float rangeRadius; //Maximum distance that the Cupcake Tower can shoot
public float reloadTime; //Time before the Cupcake Tower is able to shoot again
public GameObject projectilePrefab; //Projectile type that is fired from the Cupcake Tower

private float elapsedTime; //Time elapsed from the last time the Cupcake Tower has shot

Now, in the Update() function we need to check if enough time has elapsed in order to shoot. This can be easily done by using an if statement. In any case, at the end, the elapsed time should be increased:

void Update () {
  if (elapsedTime >= reloadTime) {
    //Rest of the code
  }
  elapsedTime += Time.deltaTime;
}

Within the if statement, we need to reset the elapsed time, so as to be able to shoot the next time. Then, we need to check whether there are any game objects within its range.

if (elapsedTime >= reloadTime) {
  //Reset elapsed Time
  elapsedTime = 0;

  //Find all the gameObjects with a collider within the range of the Cupcake Tower
  Collider2D[] hitColliders = Physics2D.OverlapCircleAll(transform.position, rangeRadius);

  //Check if there is at least one gameObject found
  if (hitColliders.Length != 0) {
    //Rest of the code
  }
}
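One detail worth knowing: Physics2D.OverlapCircleAll returns every collider inside the circle, which is why the code that follows filters by tag. As an alternative sketch, assuming you had placed the pandas on a dedicated layer named "Enemies" (a layer name chosen for this example, not something set up in the book), the physics engine could do the filtering for us:

//Only colliders on the "Enemies" layer are returned, so no tag check is needed
int enemyMask = LayerMask.GetMask("Enemies");
Collider2D[] hitColliders =
    Physics2D.OverlapCircleAll(transform.position, rangeRadius, enemyMask);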
If there are enemies within range, we need to decide a policy about which enemy the tower should target. There are different ways to do this, and different strategies that the tower itself could choose. Here, we are going to implement one where the nearest enemy to the tower is the one targeted. To implement this policy, we need to loop over all the game objects that we have found in range, check if they actually are enemies and, using distances, pick the nearest one. To achieve this, write the following code inside the previous if statement:

if (hitColliders.Length != 0) {
  //Loop over all the gameObjects to identify the closest to the Cupcake Tower
  float min = int.MaxValue;
  int index = -1;
  for (int i = 0; i < hitColliders.Length; i++) {
    if (hitColliders[i].tag == "Enemy") {
      float distance = Vector2.Distance(hitColliders[i].transform.position, transform.position);
      if (distance < min) {
        index = i;
        min = distance;
      }
    }
  }
  if (index == -1)
    return;

  //Rest of the code
}

Once we have got the target, we need to get the direction that the tower will use to throw the projectile. So, let's write this:

//Get the direction of the target
Transform target = hitColliders[index].transform;
Vector2 direction = (target.position - transform.position).normalized;

Finally, we need to instantiate a new Projectile, and assign to it the direction of the enemy, as follows:

//Create the Projectile
GameObject projectile = GameObject.Instantiate(projectilePrefab, transform.position, Quaternion.identity) as GameObject;
projectile.GetComponent<ProjectileScript>().direction = direction;

Instantiating game objects is usually slow, and it should be avoided. However, for learning purposes we can live with that. And that is it for shooting at the enemies.

Upgrading the cupcake tower, making it even tastier

In order to create a function to upgrade the tower, we first need to define a variable to store the actual level of the tower:

public int upgradeLevel; //Level of the Cupcake Tower

Then, we need an array with all the Sprites for the different upgrades, like the following:

public Sprite[] upgradeSprites; //Different sprites for the different levels of the Cupcake Tower

Finally, we can create our Upgrade function. We need to upgrade the graphics and increase the stats. Feel free to tweak these values as you prefer. However, don't forget to increase the level of the tower as well as to assign the new sprite. At the end, you should have something like the following:

public void Upgrade() {
  rangeRadius += 1f;
  reloadTime -= 0.5f;
  upgradeLevel++;
  GetComponent<SpriteRenderer>().sprite = upgradeSprites[upgradeLevel];
}

Save the script, and for now we are done with it.
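While we are in the script, a small debugging aid can help when tuning rangeRadius in the Inspector. This is an optional sketch of our own, not part of the book's script: Unity calls OnDrawGizmosSelected in the editor, so we can draw the tower's range as a wire circle whenever the tower is selected in the Scene view:

//Draws the shooting range in the Scene view when the tower is selected
void OnDrawGizmosSelected() {
  Gizmos.color = Color.magenta;
  Gizmos.DrawWireSphere(transform.position, rangeRadius);
}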
A pre-cooked cupcake tower through Prefabs

As we have done with the Sprinkle, we need to do something similar for the cupcake tower. In the Prefabs folder in the Project panel, create a new Prefab by right-clicking and then navigating to Create | Prefab. Name it SprinklesCupcakeTower. Now, drag and drop the Sprinkles_Cupcake_Tower_0 from the Graphics/towers folder (within the cupcake_tower_sheet-01 file) into the Scene view. Attach the CupcakeTowerScript to the object by navigating to Add Component | Script | CupcakeTowerScript. The Inspector should look like the following:

We need to assign the Pink_Sprinkle_Projectile_Prefab to the Projectile Prefab variable. Then, we need to assign the different Sprites for the upgrades. In particular, we can use Sprinkles_Cupcake_Tower_* (replacing the * with the level of the cupcake tower) from the same sheet as before. Don't worry too much about the other parameters of the tower, like the range radius or the reload time, since we will see how to balance the game later on. At the end, this is what we should see:

The last step is to drag this game object onto the prefab. As a result, our cupcake tower is ready.

Summary

In this article we covered the topic of creating a cupcake tower and scripting it.

Resources for Article: Further resources on this subject: Animating a Game Character [article] What's Your Input? [article] Components in Unity [article]

Cross-platform Building

Packt
10 Aug 2015
11 min read
In this article by Karan Sequeira, author of the book Cocos2d-x Game Development Blueprints, we'll leverage the awesome aspect of Cocos2d-x to build one of our games on Android and Windows Phone 8! (For more resources related to this topic, see here.)

Setting up the environment for Android

At this point in the timeline of technological evolution, Android needs no introduction. This mobile operating system was acquired by Google, and it has reached far and wide across the globe. It is now one of the top choices for application developers and game developers. With octa-core CPUs and ever-powerful GPUs, the sheer power offered by Android devices is a motivating factor!

While setting up the environment for Android, you have more choices than for any other mobile development platform. Your workstation could be running any of the three major operating systems (Windows, Mac OS, or Linux) and you would be able to build for Android just fine. Since Android is not fussy about its build environment, developers mostly choose their work environment based on which other platforms they will be developing for. As such, you might choose to build for Android on a machine running Mac OS, since you would be able to build for iOS and Android on the same machine. The same applies to a machine running Windows: you would be able to build for both Android and Windows Phone, although building for Windows Phone 8 requires you to have at least Windows 8 installed. We will discuss more on that later.

Let's begin by listing the various software required to set up the environment for Android.

Java Development Kit 7+

Since you already know that Java is the programming language used within the Android SDK, you must ensure that you have the environment set up to compile and run Java files. So go ahead and download the Java Development Kit (JDK) version 7 or later. You can download and install a Standard Edition (SE) version from the page available at the following link:

http://www.oracle.com/technetwork/java/javase/downloads/index.html

Mac OS comes with the JDK installed, and as such, you won't have to follow this step if you're setting up your development environment on a Mac.

The Android SDK

Once you've downloaded the JDK, it's time to download the Android SDK from the following URL:

http://developer.android.com/sdk/index.html

If you're installing the Android SDK on Windows, a custom installer is provided that will take care of downloading and setting up the required parts of the Android SDK for you. For other operating systems, you can choose to download the respective archive files and extract them at the location of your choice.

Eclipse or the ADT bundle

Eclipse is the most commonly used IDE when it comes to Android application development. You can choose to download a standard Eclipse IDE for Java developers and then install the ADT plugin into Eclipse, or you can download the ADT bundle, which is a specialized version of Eclipse with the ADT plugin preinstalled. At the time of writing this article, the Android developer site had already deprecated ADT in favor of Android Studio. As such, we will choose the former approach for setting up our environment in Eclipse. You can download and install the standard Eclipse IDE for Java Developers for your specific machine from the following URL:

http://www.eclipse.org/downloads/

ADT plugin for Eclipse

Once you've downloaded Eclipse, you must now install a custom plugin for Eclipse: Android Development Tools (ADT).
Visit the following URL and follow the detailed instructions that will help you install the ADT plugin into Eclipse:

http://developer.android.com/sdk/installing/installing-adt.html

Once you've followed the instructions on the preceding page, you will need to inform Eclipse about the location of the Android SDK that you downloaded earlier. So, open up the Preferences page for Eclipse and fill in the location where you've placed the Android SDK in the Android section. With that done, we can now fire up the SDK Manager to install a few more necessary pieces of software. To launch the Android SDK Manager, select Android SDK Manager from the Windows menu in Eclipse. The resultant window should look something like this:

By default, you will see a whole lot of packages selected, out of which Android SDK Platform-tools and Android SDK Build-tools are necessary. From the rest, you must select at least one of the target Android platforms. An additional package will be required if your target environment is Windows: Google USB Driver. It is located under the Extras list. I would suggest skipping downloading the documentation and samples. If you already have an Android device, I would go one step further and suggest you skip downloading the system images as well. However, if you don't have an Android device, you will need at least one system image so that you can at least test on an emulator. Once you've chosen from the various platforms needed, proceed to install the packages and you get a window like this:

Now, you must select Accept License and click on the Install button to install the respective packages. Once these packages have been installed, you have to add their locations to the path variable on your respective machines. For Windows, modify your path variable (go to Properties | Advanced Settings | Environment Variables) to include the following:

;E:\Android\android-sdk\platform-tools

For Mac OS, you can add the following line to the .bash_profile file found under the home directory:

export PATH=$PATH:/Android/android-sdk/platform-tools/

The preceding line can also be added to the .bashrc file found under the home directory on your Linux machine. At this point, you can use Eclipse for Android development.

Installing Cygwin for Windows

Developers working on Linux can skip this step as most Linux distributions come with the make utility. Also, developers working on Mac OS may download Xcode from the Mac App Store, which will install the make utility on their respective Macs. We need to install Cygwin on Windows specifically for the GNU make utility. So, go to the following URL and download the installer for Cygwin:

http://www.cygwin.com/install.html

Once you've run the .exe file that you downloaded and get a window like this, click on the Next button:

The next window will ask how you would like to install the required packages. Here, select the Install from Internet option and click on Next:

The next window will ask where you would like to install Cygwin. I'd recommend leaving it at the default value unless you have a reason to change it. Proceed by clicking on Next. In the next window, you will be asked to specify a path where the installation can download the files it requires. You can fill in a suitable path of your choice in the box and click on Next. In the next window, you will be asked to specify your Internet connection. Leave it at the Direct Connection option and click on Next. In the next window, you will be asked to select a mirror location from where to download the installation files.
Here, select the site that is geographically closest to you and click on Next. In the window that follows, expand the Devel section and search for make: The GNU version of the 'make' utility. Click on the Skip option to select this package. The version of the make utility that will be installed is now displayed in place of Skip. Your window should look something like this:

You can now go ahead and click the Next button to begin the download and installation of the required packages. The window should look something like this:

Once all the packages have been downloaded, click on Finish to close the installation. Now that we have the make utility installed, we can go ahead and download the Android NDK, which will actually build our entire C++ code base.

The Android NDK

To download the Android NDK for your respective development machine, navigate to the following URL:

https://developer.android.com/tools/sdk/ndk/index.html

Unzip the downloaded archive and place it in the same location as the Android SDK. We must now add an environment variable named NDK_ROOT that points to the root of the Android NDK. For Windows, add a new user variable NDK_ROOT with the location of the Android NDK on your filesystem as its value. You can do this by going to Properties | Advanced Settings | Environment Variables. Once you've done that, the Environment Variables window should look something like this:

I'm sure you noticed the value of the NDK_ROOT variable in the previous screenshot. The value of this variable is given in Unix style and depends on the Cygwin environment, since it will be accessed within a Cygwin bash shell while executing the build script for each Android project. Mac OS and Linux users can add the following line to their .bash_profile and .bashrc files, respectively:

export NDK_ROOT=/Android/android-ndk-r10

We have now successfully completed setting up the environment to build our Cocos2d-x games on Android. To test this, open up a Cygwin bash terminal (for Windows) or a standard terminal (for Mac OS or Linux) and navigate to the Cocos2d-x test bed located inside the samples folder of your Cocos2d-x source. Now, navigate to the proj.android folder and run the build_native.sh file. This is what my Cygwin bash terminal looks like on a Windows 7 machine:

If you've followed the aforementioned instructions correctly, the build_native.sh script will then go on to compile the C++ source files required by the TestCpp project and will result in a single shared object (.so) file in the libs folder within the proj.android folder.
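As a quick sketch of that test run, the commands inside the Cygwin bash terminal would look something like the following; the drive letter and install path here are examples from my setup, and yours will differ:

# Cygwin exposes Windows drives under /cygdrive; adjust the path to your install
cd /cygdrive/e/cocos2d-x-2.2.5/samples/Cpp/TestCpp/proj.android
./build_native.sh

On Mac OS or Linux, the same two steps apply in a standard terminal, just with your native filesystem path in place of the /cygdrive one.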
It should look something like this:

As you can see, there are a few errors with the project. If you look at the Problems view (Window | Show View | Problems) located on the bottom half of Eclipse, you might see something like this:

All these errors are due to the fact that the Android project for our game depends on Cocos2d-x's Android project for Android-specific functionality: things such as the actual OpenGL surface where everything is rendered, the music player, accelerometer functionality, and many more. So let's import the Android project for Cocos2d-x located inside the following path in your Cocos2d-x source bundle:

cocos2d-x-2.2.5\cocos2dx\platform\android

You can import it the same way you imported TestCpp. Once the project has been imported, it will be titled libcocos2dx in Package Explorer. Now, select Clean... from the Project menu. You will notice that when the clean operation has finished, the TestCpp project's dependency on libcocos2dx is taken care of and the project builds error-free.

Running the tests on Android

Running the tests is as simple as right-clicking on the TestCpp project in Package Explorer and selecting Run As | Android Application. It might take a bit more time running on an emulator as compared to an actual device, but ultimately you will have something like this:

Summary

In this article, you learned which software components are needed to set up your workstation to build and run an Android native application. You also set up an Android Virtual Device and ran the Cocos2d-x test bed application on it.

Resources for Article:

Further resources on this subject:
• Run Xcode Run [article]
• Creating Games with Cocos2d-x is Easy and 100 percent Free [article]
• Creating Cool Content [article]

The basics of GLSL 4.0 shaders

Packt
10 Aug 2011
11 min read
Shaders were first introduced into OpenGL in version 2.0, introducing programmability into the formerly fixed-function OpenGL pipeline. Shaders are implemented using the OpenGL Shading Language (GLSL). The GLSL is syntactically similar to C, which should make it easier for experienced OpenGL programmers to learn. Due to the nature of this text, I won't present a thorough introduction to GLSL here. Instead, if you're new to GLSL, reading through these recipes should help you to learn the language by example. If you are already comfortable with GLSL, but don't have experience with version 4.0, you'll see how to implement these techniques utilizing the newer API. However, before we jump into GLSL programming, let's take a quick look at how vertex and fragment shaders fit within the OpenGL pipeline.

Vertex and fragment shaders

In OpenGL version 4.0, there are five shader stages: vertex, geometry, tessellation control, tessellation evaluation, and fragment. In this article, we'll focus only on the vertex and fragment stages. Shaders replace parts of the OpenGL pipeline. More specifically, they make those parts of the pipeline programmable. The following block diagram shows a simplified view of the OpenGL pipeline with only the vertex and fragment shaders installed.

Vertex data is sent down the pipeline and arrives at the vertex shader via shader input variables. The vertex shader's input variables correspond to vertex attributes (see Sending data to a shader using per-vertex attributes and vertex buffer objects). In general, a shader receives its input via programmer-defined input variables, and the data for those variables comes either from the main OpenGL application or previous pipeline stages (other shaders). For example, a fragment shader's input variables might be fed from the output variables of the vertex shader. Data can also be provided to any shader stage using uniform variables (see Sending data to a shader using uniform variables). These are used for information that changes less often than vertex attributes (for example, matrices, light position, and other settings). The following figure shows a simplified view of the relationships between input and output variables when there are two shaders active (vertex and fragment).

The vertex shader is executed once for each vertex, possibly in parallel. The data corresponding to the vertex position must be transformed into clip coordinates and assigned to the output variable gl_Position before the vertex shader finishes execution. The vertex shader can send other information down the pipeline using shader output variables. For example, the vertex shader might also compute the color associated with the vertex. That color would be passed to later stages via an appropriate output variable. Between the vertex and fragment shaders, the vertices are assembled into primitives, clipping takes place, and the viewport transformation is applied (among other operations). The rasterization process then takes place and the polygon is filled (if necessary). The fragment shader is executed once for each fragment (pixel) of the polygon being rendered (typically in parallel). Data provided from the vertex shader is (by default) interpolated in a perspective-correct manner and provided to the fragment shader via shader input variables. The fragment shader determines the appropriate color for the pixel and sends it to the frame buffer using output variables. The depth information is handled automatically.
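Before looking at the lighting recipes, it may help to see the smallest possible example of this input/output plumbing. The following pass-through pair is an illustrative sketch, not one of this article's recipes; the VertexColor attribute and the MVP uniform are assumed to be supplied by the application. The vertex shader forwards a per-vertex color and writes gl_Position, and the fragment shader writes the interpolated color to the framebuffer.

// Vertex shader: forwards the position and a per-vertex color.
#version 400
layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexColor;   // hypothetical attribute
out vec3 Color;                              // interpolated, then read by the fragment shader
uniform mat4 MVP;                            // assumed to be set by the application
void main()
{
    Color = VertexColor;
    gl_Position = MVP * vec4(VertexPosition, 1.0);
}

// Fragment shader: writes the interpolated color to the framebuffer.
#version 400
in vec3 Color;
layout (location = 0) out vec4 FragColor;
void main()
{
    FragColor = vec4(Color, 1.0);
}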
Replicating the old fixed functionality

Programmable shaders give us tremendous power and flexibility. However, in some cases we might just want to re-implement the basic shading techniques that were used in the default fixed-function pipeline, or perhaps use them as a basis for other shading techniques. Studying the basic shading algorithm of the old fixed-function pipeline can also be a good way to get started when learning about shader programming. In this article, we'll look at the basic techniques for implementing shading similar to that of the old fixed-function pipeline. We'll cover the standard ambient, diffuse, and specular (ADS) shading algorithm, the implementation of two-sided rendering, and flat shading. In the next article, we'll also see some examples of other GLSL features such as functions, subroutines, and the discard keyword.

Implementing diffuse, per-vertex shading with a single point light source

One of the simplest shading techniques is to assume that the surface exhibits purely diffuse reflection. That is to say that the surface is one that appears to scatter light in all directions equally, regardless of direction. Incoming light strikes the surface and penetrates slightly before being re-radiated in all directions. Of course, the incoming light interacts with the surface before it is scattered, causing some wavelengths to be fully or partially absorbed and others to be scattered. A typical example of a diffuse surface is a surface that has been painted with a matte paint. The surface has a dull look with no shine at all. The following image shows a torus rendered with diffuse shading.

The mathematical model for diffuse reflection involves two vectors: the direction from the surface point to the light source (s), and the normal vector at the surface point (n). The vectors are represented in the following diagram.

The amount of incoming light (or radiance) that reaches the surface is partially dependent on the orientation of the surface with respect to the light source. The physics of the situation tells us that the amount of radiation that reaches a point on a surface is maximal when the light arrives along the direction of the normal vector, and zero when the light is perpendicular to the normal. In between, it is proportional to the cosine of the angle between the direction towards the light source and the normal vector. So, since the dot product is proportional to the cosine of the angle between two vectors, we can express the amount of radiation striking the surface as the product of the light intensity and the dot product of s and n:

I = Ld (s · n)

where Ld is the intensity of the light source, and the vectors s and n are assumed to be normalized. You may recall that the dot product of two unit vectors is equal to the cosine of the angle between them.

As stated previously, some of the incoming light is absorbed before it is re-emitted. We can model this interaction by using a reflection coefficient (Kd), which represents the fraction of the incoming light that is scattered. This is sometimes referred to as the diffuse reflectivity, or the diffuse reflection coefficient. The diffuse reflectivity becomes a scaling factor for the incoming radiation, so the intensity of the outgoing light can be expressed as follows:

I = Kd Ld (s · n)

Because this model depends only on the direction towards the light source and the normal to the surface, not on the direction towards the viewer, we have a model that represents uniform (omnidirectional) scattering.
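Combining the two relations, and anticipating the clamping that the vertex shader below applies with the max function, the complete per-vertex expression can be written as:

    I = K_d \, L_d \, \max(\mathbf{s} \cdot \mathbf{n},\ 0)

Treating I, K_d, and L_d as RGB triples, this is evaluated component-wise, exactly as described next.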
In this recipe, we'll evaluate this equation at each vertex in the vertex shader and interpolate the resulting color across the face. In this and the following recipes, light intensities and material reflectivity coefficients are represented by 3-component (RGB) vectors. Therefore, the equations should be treated as component-wise operations, applied to each of the three components separately. Luckily, the GLSL will make this nearly transparent because the needed operators will operate component-wise on vector variables.

Getting ready

Start with an OpenGL application that provides the vertex position in attribute location 0, and the vertex normal in attribute location 1 (see Sending data to a shader using per-vertex attributes and vertex buffer objects). The OpenGL application should also provide the standard transformation matrices (projection, modelview, and normal) via uniform variables. The light position (in eye coordinates), Kd, and Ld should also be provided by the OpenGL application via uniform variables. Note that Kd and Ld are of type vec3. We can use a vec3 to store an RGB color as well as a vector or point.

How to do it...

To create a shader pair that implements diffuse shading, use the following code:

Use the following code for the vertex shader:

#version 400

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;

out vec3 LightIntensity;

uniform vec4 LightPosition; // Light position in eye coords.
uniform vec3 Kd;            // Diffuse reflectivity
uniform vec3 Ld;            // Light source intensity

uniform mat4 ModelViewMatrix;
uniform mat3 NormalMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 MVP;           // Projection * ModelView

void main()
{
    // Convert normal and position to eye coords
    vec3 tnorm = normalize( NormalMatrix * VertexNormal );
    vec4 eyeCoords = ModelViewMatrix * vec4(VertexPosition, 1.0);
    vec3 s = normalize(vec3(LightPosition - eyeCoords));

    // The diffuse shading equation
    LightIntensity = Ld * Kd * max( dot( s, tnorm ), 0.0 );

    // Convert position to clip coordinates and pass along
    gl_Position = MVP * vec4(VertexPosition, 1.0);
}

Use the following code for the fragment shader:

#version 400

in vec3 LightIntensity;

layout( location = 0 ) out vec4 FragColor;

void main()
{
    FragColor = vec4(LightIntensity, 1.0);
}

Compile and link both shaders within the OpenGL application, and install the shader program prior to rendering. See Tips and Tricks for Getting Started with OpenGL and GLSL 4.0 for details about compiling, linking, and installing shaders.

How it works...

The vertex shader does all of the work in this example. The diffuse reflection is computed in eye coordinates by first transforming the normal vector using the normal matrix, normalizing it, and storing the result in tnorm. Note that the normalization here may not be necessary if your normal vectors are already normalized and the normal matrix does not do any scaling.

The normal matrix is typically the inverse transpose of the upper-left 3x3 portion of the model-view matrix. We use the inverse transpose because normal vectors transform differently than the vertex position. For a more thorough discussion of the normal matrix, and the reasons why, see any introductory computer graphics textbook. (A good choice would be Computer Graphics with OpenGL by Hearn and Baker.) If your model-view matrix does not include any non-uniform scalings, then one can use the upper-left 3x3 of the model-view matrix in place of the normal matrix to transform your normal vectors.
However, if your model-view matrix does include (uniform) scalings, you'll still need to (re)normalize your normal vectors after transforming them.

The next step converts the vertex position to eye (camera) coordinates by transforming it via the model-view matrix. Then we compute the direction towards the light source by subtracting the vertex position from the light position and storing the result in s.

Next, we compute the scattered light intensity using the equation described above and store the result in the output variable LightIntensity. Note the use of the max function here. If the dot product is less than zero, then the angle between the normal vector and the light direction is greater than 90 degrees. This means that the incoming light is coming from inside the surface. Since such a situation is not physically possible (for a closed mesh), we use a value of 0.0. However, you may decide that you want to properly light both sides of your surface, in which case the normal vector needs to be reversed for those situations where the light is striking the back side of the surface (see Implementing two-sided shading).

Finally, we convert the vertex position to clip coordinates by multiplying by the model-view-projection matrix (which is projection * view * model) and store the result in the built-in output variable gl_Position:

gl_Position = MVP * vec4(VertexPosition,1.0);

The subsequent stage of the OpenGL pipeline expects that the vertex position will be provided in clip coordinates in the output variable gl_Position. This variable does not directly correspond to any input variable in the fragment shader, but is used by the OpenGL pipeline in the primitive assembly, clipping, and rasterization stages that follow the vertex shader. It is important that we always provide a valid value for this variable.

Since LightIntensity is an output variable from the vertex shader, its value is interpolated across the face and passed into the fragment shader. The fragment shader then simply assigns the value to the output fragment.

There's more...

Diffuse shading is a technique that models only a very limited range of surfaces. It is best used for surfaces that have a "matte" appearance. Additionally, with the technique used above, the dark areas may look a bit too dark. In fact, those areas that are not directly illuminated are completely black. In real scenes, there is typically some light that has been reflected about the room that brightens these surfaces. In the following recipes, we'll look at ways to model more surface types, as well as provide some light for those dark parts of the surface.
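As an aside on the normal matrix discussed above: GLSL 4.0's built-in inverse() and transpose() functions make it possible to derive the normal matrix from the model-view matrix directly in the vertex shader. The following is only a sketch of the idea; computing an inverse per vertex is wasteful, which is why the recipes pass NormalMatrix in as a uniform computed once by the application.

// Hypothetical alternative to the NormalMatrix uniform:
// derive it from the model-view matrix inside the vertex shader.
mat3 normalMatrix = transpose(inverse(mat3(ModelViewMatrix)));
vec3 tnorm = normalize(normalMatrix * VertexNormal);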

Let's Get Physical – Using GameMaker's Physics System

Packt
02 Nov 2015
28 min read
In this article by Brandon Gardiner and Julián Rojas Millán, authors of the book GameMaker Cookbook, we'll cover the following topics:

• Creating objects that use physics
• Alternating the gravity
• Applying a force via magnets
• Creating a moving platform
• Making a rope

(For more resources related to this topic, see here.)

The majority of video games are ruled by physics in one way or another. 2D platformers require coded movement and jump physics. Shooters, both 2D and 3D, use ballistic calculators that vary in sophistication to calculate whether you shot that guy or missed him and he's still coming to get you. Even Pong used rudimentary physics to calculate the ball's trajectory after bouncing off of a paddle or wall. The next time you play a 3D shooter or action-adventure game, check whether you see the logo for Havok, a physics engine used in over 500 games since it was introduced in 2000. The point is that physics, however complex, is important in video games. GameMaker comes with its own engine that can be used to recreate physics-based sandbox games, such as The Incredible Machine, or even puzzle games, such as Cut the Rope or Angry Birds. Let's take a look at how elements of these games can be accomplished using GameMaker's built-in physics engine.

Physics engine 101

In order to use GameMaker's physics engine, we first need to set it up. Let's create and test some basic physics before moving on to something more complicated.

Gravity and force

One of the things that we learned with regards to GameMaker physics was to create our own simplistic gravity. Now that we've set up gravity using the physics engine, let's see how we can bend it according to our requirements.

Physics in the environment

GameMaker's physics engine allows you to choose not only which objects are affected by external forces but also how they are affected. Let's take a look at how this can be applied to create environmental objects in your game.

Advanced physics-based objects

Many platforming games, going all the way back to Pitfall!, have used objects such as a rope as a gameplay feature. Pitfall!, mind you, uses static rope objects to help the player avoid crocodiles, but many modern games use dynamic ropes and chains, among other things, to create a more immersive and challenging experience.

Creating objects that use physics

There's a trend in video games where developers create products that are less games than play areas; worlds and simulators in which a player may or may not be given an objective and it wouldn't matter either way. These games can take on a life of their own; Minecraft is essentially a virtual game of building blocks and yet has become a genre of its own, literally making its creator, Markus Persson (also known as Notch), a billionaire in the process. While it is difficult to create, the fun in games such as Minecraft is designed by the player. If you give a player a set of tools or objects to play with, you may end up seeing an outcome you hadn't initially thought of, and that's a good thing. The reason why I have mentioned all of this is to show you how this relates to GameMaker and what we can do with it. In a sense, GameMaker is a lot like Minecraft. It is a set of tools, such as the physics engine we're about to use, that the user can employ if he/she desires (of course, within limits), in order to create something funny or amazing or both. What you do with these tools is up to you, but you have to start somewhere.
Let's take a look at how to build a simple physics simulator.

Getting ready

The first thing you'll need is a room. Seems simple enough, right? Well, it is. One difference, however, is that you'll need to enable physics before we begin. With the room open, click on the Physics tab and make sure that the box marked Room is Physics World is checked. After this, we'll need some sprites and objects. For sprites, you'll need a circle, triangle, and two squares, each of a different color. The circle is for obj_ball. The triangle is for obj_poly. One of the squares is for obj_box, while the other is for obj_ground. You'll also need four objects without sprites: obj_staticParent, obj_dynamicParent, obj_button, and obj_control.

How to do it

1. Open obj_staticParent and add two collision events: one with itself and one with obj_dynamicParent.
2. In each of the collision events, drag and drop a comment from the Control tab to the Actions box.
3. In each comment, write Collision.
4. Close obj_staticParent and repeat steps 1-3 for obj_dynamicParent.
5. In obj_dynamicParent, click on Add Event, and then click on Other and select Outside Room.
6. From the Main1 tab, drag and drop Destroy Instance in the Actions box. Select Applies to Self.
7. Open obj_ground and set the parent to obj_staticParent.
8. Add a create event with a code block containing the following code:

var fixture = physics_fixture_create();
physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
physics_fixture_set_density(fixture, 0);
physics_fixture_set_restitution(fixture, 0.2);
physics_fixture_set_friction(fixture, 0.5);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);

9. Open the room that you created and start placing instances of obj_ground around it to create platforms, stairs, and so on. This is how mine looked:

10. Open obj_ball and set the parent to obj_dynamicParent.
11. Add a create event and enter the following code:

var fixture = physics_fixture_create();
physics_fixture_set_circle_shape(fixture, sprite_get_width(spr_ball) / 2);
physics_fixture_set_density(fixture, 0.25);
physics_fixture_set_restitution(fixture, 1);
physics_fixture_set_friction(fixture, 0.5);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);

12. Repeat steps 10 and 11 for obj_box, but use this code:

var fixture = physics_fixture_create();
physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
physics_fixture_set_density(fixture, 0.5);
physics_fixture_set_restitution(fixture, 0.2);
physics_fixture_set_friction(fixture, 0.01);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);

13. Repeat steps 10 and 11 for obj_poly, but use this code:

var fixture = physics_fixture_create();
physics_fixture_set_polygon_shape(fixture);
physics_fixture_add_point(fixture, 0, -(sprite_height / 2));
physics_fixture_add_point(fixture, sprite_width / 2, sprite_height / 2);
physics_fixture_add_point(fixture, -(sprite_width / 2), sprite_height / 2);
physics_fixture_set_density(fixture, 0.01);
physics_fixture_set_restitution(fixture, 0.1);
physics_fixture_set_linear_damping(fixture, 0.5);
physics_fixture_set_angular_damping(fixture, 0.01);
physics_fixture_set_friction(fixture, 0.5);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);

14. Open obj_control and add a create event using the following code:

globalvar shape_select;
globalvar shape_output;
shape_select = 0;

15. Add a Step event and add the following code to a code block:

if mouse_check_button(mb_left) && alarm[0] < 0 && !place_meeting(x, y, obj_button)
{
    instance_create(mouse_x, mouse_y, shape_output);
    alarm[0] = 5;
}
if mouse_check_button_pressed(mb_right)
{
    shape_select += 1;
}

16. Now, add an event to alarm[0] and give it a comment stating Set Timer.
17. Place an instance of obj_control in the room that you created, but make sure that it is placed at the coordinates (0, 0).
18. Open obj_button and add a step event. Drag a code block to the Actions tab and input the following code:

if shape_select > 2
{
    shape_select = 0;
}
if shape_select = 0
{
    sprite_index = spr_ball;
    shape_output = obj_ball;
}
if shape_select = 1
{
    sprite_index = spr_box;
    shape_output = obj_box;
}
if shape_select = 2
{
    sprite_index = spr_poly;
    shape_output = obj_poly;
}

Once these steps are completed, you can test your physics environment. Use the right mouse button to select the shape you would like to create, and use the left mouse button to create it. Have fun!

How it works

While not overly complicated, there is a fair amount of activity in this recipe. Let's take a quick look at the room itself. When you created this room, you checked the box for Room is Physics World. This does exactly what it says it does; it enables physics in the room. If you have any physics-enabled objects in a room that is not a physics world, errors will occur. In the same menu, you have the gravity settings (which are vector-based) and pixels to meters, which sets the scale of objects in the room. This setting is important as it controls how each object is affected by the coded physics. YoYo Games based GameMaker's physics on the real world (as they should), and so GameMaker needs to know how many meters are represented by each pixel. The higher the number, the larger the world in the room. If you place an object in two different rooms with different pixel to meter settings, even though the objects have the same settings, GameMaker will apply physics to them differently because it views them as being of differing size and weight.
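As an aside, the same world settings can also be expressed in code. Here is a sketch, assuming a controller object's Room Start event; physics_world_create should only be needed when the room was not flagged as a physics world in the editor, and the scale value below is just the editor's default:

physics_world_create(0.1);    // each pixel represents 0.1 metres (the default scale)
physics_world_gravity(0, 10); // the default vector-based gravity, straight down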
Let's take a look at the objects in this simulation. Firstly, you have two parent objects: one static and the other dynamic. The static object is the only parent to one object: obj_ground. The reason for this is that static objects are not affected by outside forces in a physics world, that is, the room you built. Because of this, the ground pieces are able to ignore gravity and forces applied by other objects that collide with them. Now, neither obj_staticParent nor obj_dynamicParent contains any physics code; we saved this for our other objects. We use our parent objects to govern our collision groups using two objects instead of coding collisions in each object. So, we use drag and drop collision blocks to ensure that any children can collide with instances of one another and with themselves. Why did you drag comment blocks into these collision events? We did this so that GameMaker doesn't ignore them; the contents of each comment block are irrelevant. Also, the dynamic parent has an event that destroys any instance of its children that ends up outside the room. The reason for this is simply to save memory. Otherwise, each object, even those off-screen, will be accounted for in calculations at every step, and this will slow everything down and eventually crash the program.

Now, as we're using physics-enabled objects, let's see how each one differs from the others. When working with the object editor, you may have noticed the checkbox labelled Uses Physics. This checkbox will automatically set up the basic physics code within the selected object, but only if you're using the drag and drop method of programming. If you click on it, you'll see a new menu with basic collision options as well as several values and associated options:

• Density: Density in GameMaker works exactly as it does in real life. An object with a high density will be much heavier and harder to move via force than a low-density object of the same size. Think of how far you can kick an empty cardboard box versus how far you can kick a cardboard box full of bricks, assuming that you don't break your foot.

• Restitution: Restitution essentially governs an object's bounciness. A higher restitution will cause an object to bounce like a rubber ball, whereas a lower restitution will cause an object to bounce like a box of bricks, as mentioned in the previous example.

• Collision group: Collision grouping tells GameMaker how certain objects react with one another. By default, all physics objects are set to collision group 0. This means that they will not collide with other objects without a specific collision event. Assigning a positive number to this setting will cause the object in question to collide with all other objects in the same collision group, regardless of collision events. Assigning a negative number will prevent the object from colliding with any objects in that group. I don't recommend that you use collision groups unless absolutely necessary, as it takes a great deal of memory to work properly.

• Linear damping: Linear damping works a lot like air friction in real life. This setting affects the velocity (momentum) of objects in motion over time. Imagine a military shooter where thrown grenades don't arc, they just keep soaring through the air. We don't need this. This is what rockets are for.
• Angular damping: Angular damping is similar to linear damping, but it only affects an object's rotation. This setting keeps objects from spinning forever. Have you ever ridden the Teacup ride at Disneyland? If so, you will know that angular damping is a good thing.

• Friction: Friction also works in a similar way to linear damping, but it affects an object's momentum as it collides with another object or surface. If you want to create icy surfaces in a platformer, friction is your friend.

We didn't use this menu in this recipe, but we did set and modify these settings through code. First, in each of the objects, we set them to use physics and then declared their shapes and collision masks. We started by declaring the fixture variable because, as you can see, it is part of each of the functions we used, and typing fixture is easier than typing physics_fixture_create() every time. The fixture variable that we bind to the object is what is actually being affected by forces and other physics objects, so we must set its shape and properties in order to tell GameMaker how it should react.

In order to set the fixture's shape, we use physics_fixture_set_circle_shape, physics_fixture_set_box_shape, and physics_fixture_set_polygon_shape. These functions define the collision mask associated with the object in question. In the case of the circle, we got the radius from half the width of the sprite, whereas for the box, we found the outer edges using half the width and half the height. GameMaker then uses this information to create a collision mask to match the sprite from which the information was gathered.

When creating a fixture from a more complex sprite, you can either use the aforementioned methods to approximate a mask, or you can create a more complex shape using a polygon like we did for the triangle. You'll notice that the code to create the triangle fixture had extra lines. This is because polygons require you to map each point on the shape you're trying to create. You can map three to eight points by telling GameMaker where each one is situated in relation to the center of the image (0, 0). One very important detail is that you cannot create a concave shape; this will result in an error. Every fixture you create must have a convex shape. The only way to create a concave fixture is to actually create multiple fixtures in the same object. If you were to take the code for the triangle, duplicate all of it in the same code block, and alter the coordinates for each point in the duplicated code, you could create concave shapes. For example, you can use two rectangles to make an L shape, as sketched below. This can only be done using a polygon fixture, as it is the only fixture that allows you to code the position of individual points.

Once you've coded the shape of your fixture, you can begin to code its attributes. I've described what each physics option does, and you've coded and tested them using the instructions mentioned earlier. Now, take a look at the values for each setting. The ball object has a higher restitution than the rest; did you notice how it bounced? The box object has a very low friction; it slides around on platforms as though it is made of ice. The triangle has very low density and angular damping; it is easily knocked around by the other objects and spins like crazy. You can change how objects react to forces and collisions by changing one or more of these values. I definitely recommend that you play around with these settings to see what you can come up with.
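Here is a minimal sketch of that L-shape idea: two convex, box-like polygon fixtures bound to the same instance in one create event. The point coordinates are illustrative and assume a 64 x 64 px sprite with a centered origin; the other properties can be set on each fixture exactly as in the recipes above.

// Vertical bar of the L (left column).
var fixA = physics_fixture_create();
physics_fixture_set_polygon_shape(fixA);
physics_fixture_add_point(fixA, -32, -32);
physics_fixture_add_point(fixA,   0, -32);
physics_fixture_add_point(fixA,   0,  32);
physics_fixture_add_point(fixA, -32,  32);
physics_fixture_set_density(fixA, 0.5);
physics_fixture_bind(fixA, id);
physics_fixture_delete(fixA);

// Horizontal foot of the L (bottom-right block).
var fixB = physics_fixture_create();
physics_fixture_set_polygon_shape(fixB);
physics_fixture_add_point(fixB,  0,  0);
physics_fixture_add_point(fixB, 32,  0);
physics_fixture_add_point(fixB, 32, 32);
physics_fixture_add_point(fixB,  0, 32);
physics_fixture_set_density(fixB, 0.5);
physics_fixture_bind(fixB, id);
physics_fixture_delete(fixB);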
Remember how the ground objects are static? Notice how we still had to code them? Well, that's because they still interact with other objects, but in an almost opposite fashion. Since we set the object's density to 0, GameMaker more or less views this as an object that is infinitely dense; it cannot be moved by outside forces or collisions. It can, however, affect other objects. We don't have to set the angular and linear damping values simply because the ground doesn't move. We do, however, have to set the restitution and friction levels because we need to tell GameMaker how other objects should react when they come in contact with the ground. Do you want to make a rubber wall to bounce a player off? Set the restitution to a higher level. Do you want to make that icy patch we talked about? Then, you need to lower the friction. These are some fun settings to play around with, so try it out.

Alternating gravity

Gravity can be a harsh mistress; if you've ever fallen from a height, you will understand what I mean. I often think it would be great if we could somehow lessen gravity's hold on us, but then I wonder what it would be like if we could just reverse it altogether! Imagine flipping a switch and then walking on the ceiling! I, for one, think that it would be great. However, since we don't have the technology to do it in real life, I'll have to settle for doing it in video games.

Getting ready

For this recipe, let's simplify things and use the physics environment that we created in the previous recipe.

How to do it

1. In obj_control, open the code block in the create event.
2. Add the following code:

physics_world_gravity(0, -10);

That's it! Test the environment and see what happens when you create your physics objects.

How it works

GameMaker's physics world gravity is vector-based. This means that you simply need to change the values of x and y in order to change how gravity works in a particular room. If you take a look at the Physics tab in the room editor, you'll see that there are values under x and y. The default value is 0 for x and 10 for y. When we added this code to the control object's create event, we changed the value of y to -10, which means that it will flow in the opposite direction. You can change the direction through a full 360 degrees by altering both x and y, and you can change the gravity's strength by raising and lowering the values.

There's more

Alternating the gravity's flow can be a lot of fun in a platformer. Several games have explored this in different ways. Your character can change the gravity by hitting a switch in a game, the player can change it by pressing a button, or you can just give specific areas different gravity settings. One way such a switch might look is sketched below. Play around with this and see what you can create.
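A minimal sketch of a gravity switch, assuming a flag initialized in obj_control's create event and an arbitrary choice of key:

// In obj_control's create event (assumed flag):
grav_flipped = false;

// In obj_control's step event ("G" is an arbitrary key binding):
if keyboard_check_pressed(ord("G"))
{
    grav_flipped = !grav_flipped;
    if (grav_flipped)
    {
        physics_world_gravity(0, -10); // objects now "fall" toward the ceiling
    }
    else
    {
        physics_world_gravity(0, 10);  // back to the default downward pull
    }
}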
Applying force via magnets

Remember playing with magnets in a science class when you were a kid? It was fun back then, right? Well, it's still fun; powerful magnets make a great gift for your favorite office worker. What about virtual magnets, though? Are they still fun? The answer is yes. Yes, they are.

Getting ready

Once again, we're simply going to modify our existing physics environment in order to add some new functionality.

How to do it

1. In obj_control, open the code block in the step event.
2. Add the following code:

if keyboard_check(vk_space)
{
    with (obj_dynamicParent)
    {
        var dir = point_direction(x, y, mouse_x, mouse_y);
        physics_apply_force(x, y, lengthdir_x(30, dir), lengthdir_y(30, dir));
    }
}

Once you close the code block, you can test your new magnet. Add some objects, hold down the spacebar, and see what happens.

How it works

Applying a force to a physics-enabled object in GameMaker will add a given value to the direction, rotation, and speed of the said object. Force can be used to gradually propel an object in a given direction or, through a little math, as in this case, draw objects nearer. What we're doing here is that while the spacebar is held down, any objects in the vicinity are drawn to the magnet (in this case, your mouse). In order to accomplish this, we first declare that the following code needs to act on obj_dynamicParent, as opposed to acting on the control object where the code resides. We then set the value of a dir variable to the point_direction of the mouse, as it relates to any child of obj_dynamicParent. From there, we can begin to apply force.

With physics_apply_force, the first two values represent the x and y coordinates of the object to which the force is being applied. Since the object(s) in question is/are not static, we simply set the coordinates to whatever value they have at the time. The other two values are used in tandem to calculate the direction in which the object will travel and the force propelling it in Newtons. We get these values, in this instance, by calculating the lengthdir for both x and y. The lengthdir finds the x or y value of a point at a given length (we used 30) at a given angle (we used dir, which represents point_direction, which finds the angle at which the mouse's coordinates lie). If you want to increase the power of the magnet, increase the length value.
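Negating the same force turns the magnet into a repulsor. A quick sketch of the idea, with the key binding again an arbitrary assumption:

// Hypothetical repulsor: identical to the magnet, but the force is negated,
// pushing every dynamic object away from the mouse while B is held.
if keyboard_check(ord("B"))
{
    with (obj_dynamicParent)
    {
        var dir = point_direction(x, y, mouse_x, mouse_y);
        physics_apply_force(x, y, -lengthdir_x(30, dir), -lengthdir_y(30, dir));
    }
}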
Creating a moving platform

We've now seen both static and dynamic physics objects in GameMaker, but what happens when we want the best of both worlds? Let's take a look at how to create a platform that can move and affect other objects via collisions but is immune to said collisions.

Getting ready

Again, we'll be using our existing physics environment, but this time we'll need a new object. Create a sprite that is 128 px wide by 32 px high and assign it to an object called obj_platform. Also, create another object called obj_kinematicParent but don't give it a sprite. Add collision events to obj_staticParent, obj_dynamicParent, and itself. Make sure that there is a comment in each event.

How to do it

1. In obj_platform, add a create event. Drag a code block to the actions box and add the following code:

var fixture = physics_fixture_create();
physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
physics_fixture_set_density(fixture, 0);
physics_fixture_set_restitution(fixture, 0.2);
physics_fixture_set_friction(fixture, 0.5);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);
phy_speed_x = 5;

2. Add a Step event with a code block containing the following code:

if (x < 64) or (x > room_width - 64)
{
    phy_speed_x = phy_speed_x * -1;
}

3. Place an instance of obj_platform in the room, slightly higher than the highest instance of obj_ground.

Once this is done, you can go ahead and test it. Try dropping various objects on the platform and see what happens!

How it works

Kinematic objects in GameMaker's physics world are essentially static objects that can move. While the platform has a density of 0, it also has a speed of 5 along the x axis. You'll notice that we didn't just use speed equal to 5, as this would not have the desired effect in a physics world. The code in the step event simply causes the platform to remain within a set boundary by multiplying its current horizontal speed by -1. Any static object to which a movement is applied automatically becomes a kinematic object.

Making a rope

Is there anything more useful than a rope? I mean, besides your computer, your phone, or even this book. Probably a lot of things, but that doesn't make a rope any less useful. Ropes and chains are also useful in games. Some games, such as Cut the Rope, have based their entire gameplay structure around them. Let's see how we can create ropes and chains in GameMaker.

Getting ready

For this recipe, you can either continue using the physics environment that we've been working with, or you can simply start from scratch. If you've gone through the rest of this chapter, you should be fairly comfortable with setting up physics objects. I completed this recipe with a fresh .gmx file. Before we begin, go ahead and set up obj_dynamicParent and obj_staticParent with collision events for one another. Next, you'll need to create the obj_ropeHome, obj_rope, obj_block, and obj_ropeControl objects. The sprite for obj_rope can simply be a 4 px wide by 16 px high box, while obj_ropeHome and obj_block can be 32 px squares. obj_ropeControl needs to use the same sprite as obj_rope, but with the y origin set to 0. obj_ropeControl should also be invisible. As for parenting, obj_rope should be a child of obj_dynamicParent; obj_ropeHome and obj_block should be children of obj_staticParent; and obj_ropeControl does not require any parent at all. As always, you'll also need a room in which to place your objects.

How to do it

1. Open obj_ropeHome and add a create event. Place a code block in the actions box and add the following code:

var fixture = physics_fixture_create();
physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
physics_fixture_set_density(fixture, 0);
physics_fixture_set_restitution(fixture, 0.2);
physics_fixture_set_friction(fixture, 0.5);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);

2. In obj_rope, add a create event with a code block. Enter the following code:

var fixture = physics_fixture_create();
physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
physics_fixture_set_density(fixture, 0.25);
physics_fixture_set_restitution(fixture, 0.01);
physics_fixture_set_linear_damping(fixture, 0.5);
physics_fixture_set_angular_damping(fixture, 1);
physics_fixture_set_friction(fixture, 0.5);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);

3. Open obj_ropeControl and add a create event. Drag a code block to the actions box and enter the following code:

setLength = image_yscale - 1;
ropeLength = 16;
rope1 = instance_create(x, y, obj_ropeHome);
rope2 = instance_create(x, y, obj_rope);
physics_joint_revolute_create(rope1, rope2, rope1.x, rope1.y, 0, 0, 0, 0, 0, 0, 0);
repeat (setLength)
{
    ropeLength += 16;
    rope1 = rope2;
    rope2 = instance_create(x, y + ropeLength, obj_rope);
    physics_joint_revolute_create(rope1, rope2, rope1.x, rope1.y, 0, 0, 0, 0, 0, 0, 0);
}
4. In obj_block, add a create event. Place a code block in the actions box and add the following code:

var fixture = physics_fixture_create();
physics_fixture_set_circle_shape(fixture, sprite_get_width(spr_ropeHome) / 2);
physics_fixture_set_density(fixture, 0);
physics_fixture_set_restitution(fixture, 0.01);
physics_fixture_set_friction(fixture, 0.5);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);

5. Now, add a step event with the following code in a code block:

phy_position_x = mouse_x;
phy_position_y = mouse_y;

6. Place an instance of obj_ropeControl anywhere in the room. This will be the starting point of the rope. You can place multiple instances of the object if you wish.
7. For every instance of obj_ropeControl you place in the room, use the bounding box to stretch it to however long you wish. This will determine the length of your rope.
8. Place a single instance of obj_block in the room.

Once you've completed these steps, you can go ahead and test them.

How it works

This recipe may seem somewhat complicated, but it's really not. What you're doing here is taking multiple instances of the same physics-enabled object and stringing them together. Since you're using instances of the same object, you only have to code one and the rest will follow. Once again, our collisions are handled by our parent objects. This way, you don't have to set collisions for each object. Also, setting the physical properties of each object is done exactly as we have done in previous recipes. By setting the density of obj_ropeHome and obj_block to 0, we're ensuring that they are not affected by gravity or collisions, but they can still collide with other objects and affect them. In this case, we set the physics coordinates of obj_block to those of the mouse so that, when testing, you can use it to collide with the rope, moving it.

The most complex code takes place in the create event for obj_ropeControl. Here, we not only define how many sections of rope or chain will be used, but we also define how they are connected. To begin, the y scale of the control object is measured in order to determine how many instances of obj_rope are required. Based on how far you stretched obj_ropeControl in the room, the rope will be longer (more instances) or shorter (fewer instances). We then set a variable (ropeLength) to the size of the sprite used for obj_rope. This will be used later to tell GameMaker where each instance of obj_rope should be so that we can connect them in a line.

Next, we create the object that will hold the rope, obj_ropeHome. This is a static object that will not move, no matter how much the rope moves. It is connected to the first instance of obj_rope via a revolute joint. In GameMaker, a revolute joint is used in several ways: it can act as part of a motor, moving pistons; it can act as a joint on a ragdoll body; and, in this case, it acts as the connection between instances of obj_rope. A revolute joint allows the programmer to code its angle and torque, but for our purposes this isn't necessary. We declared the objects that are connected via the joint as well as the anchor location, but the other values remain null.

Once the rope holder (obj_ropeHome) and initial joint are set up, we can automate the creation of the rest. Using the repeat function, we can tell GameMaker to repeat a block of code a set number of times. In this case, this number is derived from how many instances of obj_rope can fit within the distance between the y origin of obj_ropeControl and the point to which you stretched it.
We subtract 1 from this number as GameMaker would otherwise calculate too many instances in order to cover the distance in its entirety. The code that will be repeated does a few things at once. First, it increases the value of the ropeLength variable by 16 for each instance that is calculated. Then, GameMaker changes the value of rope1 (which initially held the instance of obj_ropeHome) to that of rope2 (the most recent instance of obj_rope). The rope2 variable is then re-established to create a new instance of obj_rope, with the new value of ropeLength moving its coordinates directly below those of the previous instance, thus creating a chain. This process is repeated until the set length of the overall rope is reached.

There's more

Each section of a rope is a physics object and acts in the physics world. By changing the physics settings when initially creating the rope sections, you can see how they react to collisions. How far and how quickly the rope moves when pushed by another object is very much related to the difference between their densities. If you make the rope denser than the object colliding with it, the rope will move very little. If you reverse these values, you can cause the rope to flail about wildly. Play around with the settings and see what happens, but when placing a rope or chain in a game, you really must consider what the rope and other objects are made of. It wouldn't seem right for a lead chain to be sent flailing about by a collision with a pillow, now would it?

Summary

This article introduces the physics system and demonstrates how GameMaker handles gravity, friction, and so on. Learn how to implement this system to make more realistic games.

Resources for Article:

Further resources on this subject:
• Getting to Know LibGDX [article]
• HTML5 Game Development – A Ball-shooting Machine with Physics Engine [article]
• Introducing GameMaker [article]