
How-To Tutorials - Game Development


Using Google's offerings

Packt | 22 Dec 2014 | 8 min read
In this article by Juwal Bose, author of the book LibGDX Game Development Essentials, we will learn how to use the features that Google has to offer. Google provides AdMob (Google Mobile Ads) to display ads and monetize our game, Google Analytics to track basic app data, and Google Play services to implement and track global leaderboards and achievements. Before we start implementing all of these, we need to ensure the following points:

- Use the SDK manager to update to the latest Android SDK tools
- Download and install Google Play services via the SDK manager

Interfacing platform-specific code

This article deals with an Android project, and much of what we will do is specific to that platform. We need a way to detect the currently running platform to decide whether or not to invoke these features. Hence, we add a new public boolean variable, isAndroid, to the ThrustCopter class, which is false by default. We can detect the ApplicationType using the following code in the create method:

    switch (Gdx.app.getType()) {
    case Android:
        isAndroid = true;
        break;
    case Desktop:
        break;
    case WebGL:
        break;
    case iOS:
        break;
    default:
    }

Now, we can check whether the game is running on an Android device using the following code:

    if (game.isAndroid) {
        ...
    }

From the core project, we need to call the Android main class to invoke Android-specific code. We enable this using a new interface created in the core project: IActivityRequestHandler. Then, we make sure our AndroidLauncher main class implements this interface as follows:

    public class AndroidLauncher extends AndroidApplication implements IActivityRequestHandler {
    ...
    initialize(new ThrustCopter(this), config);

Note that we are passing this as a parameter to ThrustCopter, which provides a reference to the implemented interface. As this is Android-specific, the start classes of other platforms can pass null as the argument, since we will only use this parameter on the Android platform. In the ThrustCopter class, we save the reference under the name handler, as shown in the following code:

    public ThrustCopter(IActivityRequestHandler IARH) {
        handler = IARH;
    ...

Visit https://github.com/libgdx/libgdx/wiki/Interfacing-with-platform-specific-code for more information.

Implementing Google Analytics tracking

The default implementation of Google Analytics automatically provides the following information about your app: the number of users and sessions, session duration, operating systems, device models, and geography. To start off, we need to create a Google Analytics property and app view. Go ahead and start using Google Analytics by accessing it at https://www.google.com/analytics/web/?hl=en. Create a new account, select Mobile app, and fill in the details. Once all the details are entered, click on Get Tracking ID to generate a new tracking ID; the tracking ID is unique for each account. The Google Analytics version may change in the future, which means the way it is integrated may also change. Check out the Google developers portal for details at https://developers.google.com/analytics/devguides/collection/android/v4/.
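For reference, the core-project interface might look like the following minimal sketch. The article introduces its methods one at a time (setTrackerScreenName in this section, and showAds later in the AdMob section), so treat this exact shape as an illustrative assumption rather than the verbatim definition from the book's source:

    // A minimal sketch of IActivityRequestHandler; the method set is assumed
    // from the calls made to it later in this article.
    public interface IActivityRequestHandler {
        void setTrackerScreenName(String path); // report a screen view to Analytics
        void showAds(boolean show);             // toggle the AdMob banner's visibility
    }

The Android launcher implements these methods, while the desktop, iOS, and HTML start classes simply pass null to ThrustCopter and never have them invoked.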
The AndroidManifest file needs the following permissions, and minSdkVersion should be set to 9, as follows:

    <uses-sdk android:minSdkVersion="9" android:targetSdkVersion="23" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

Copy the library project at <android-sdk>/extras/google/google_play_services/libproject/google-play-services_lib/ to the location where you maintain your Android app projects. Import the library project into your Eclipse workspace: click on File, select Import, select Android, click on Existing Android Code Into Workspace, and browse to the copy of the library project to import it. This step is important for all of the Google-related services that we are about to integrate.

We need to refer to this library project from our Thrust Copter-android project. Right-click on the Thrust Copter-android project and select Properties. Select the Android section, which will display a blank Library section on the right. Click on Add... to select our library project and add it as a reference.

Adding tracker configuration files

We can provide configuration files to create Analytics trackers. Usually, we need only one tracker, commonly called the global tracker, to report the basic analytics data. We add the global_tracker.xml file to the res/xml folder in the Android project; copy this file from the source provided. Update the ga_trackingId section with the new tracking ID you got when creating your application entry on the Google Analytics site. The screenName section consists of the different scenes that will be tracked; we added the MenuScene and ThrustCopterScene classes to it. This needs to be changed for each game as follows:

    <screenName name="com.csharks.thrustcopter.ThrustCopterScene">Thrust Copter Game</screenName>
    <screenName name="com.csharks.thrustcopter.MenuScene">Thrust Copter Menu</screenName>

Once the tracker XML file is in place, add the following element to the application part of the Android Manifest:

    <meta-data
        android:name="com.google.android.gms.analytics.globalConfigResource"
        android:resource="@xml/global_tracker" />

We need to access the tracker and report activity start, activity stop, and scene changes. This is done with the following declaration in the AndroidLauncher class:

    Tracker globalTracker;

Then, add the following code within the onCreate method:

    GoogleAnalytics analytics = GoogleAnalytics.getInstance(this);
    globalTracker = analytics.newTracker(R.xml.global_tracker);

Now, we move on to reporting. We added a new function to the IActivityRequestHandler interface called setTrackerScreenName(String path), which needs to be implemented as well:

    @Override
    protected void onStart() {
        super.onStart();
        GoogleAnalytics.getInstance(this).reportActivityStart(this);
    }

    @Override
    public void onStop() {
        super.onStop();
        GoogleAnalytics.getInstance(this).reportActivityStop(this);
    }

    @Override
    public void setTrackerScreenName(String path) {
        globalTracker.setScreenName(path);
        globalTracker.send(new HitBuilders.AppViewBuilder().build());
    }

We also need to report screen names when we switch scenes. We do this within the constructors of MenuScene and ThrustCopterScene as follows:

    if (game.isAndroid) {
        game.handler.setTrackerScreenName("com.csharks.thrustcopter.MenuScene");
    }

It's time to test whether everything is working.
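A quick testing aid: the Google Analytics v4 client can be made verbose so that individual hits show up in logcat. For the v4 SDK, this is typically done through an adb property; this is an assumption about the tooling rather than something the project itself configures, so verify it against your SDK version:

    adb shell setprop log.tag.GAv4 DEBUG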
Connect your Android device and run the Android project on it. You should see the analytics reporting show up in logcat. Once we have significant data, we can access the Google Analytics web interface to analyze how the game is being played by the masses.

Adding Google Mobile Ads

Legacy AdMob is being renamed Google Mobile Ads, which is now linked with Google AdSense. First, we need to set up AdMob to serve ads by visiting https://www.google.com/ads/admob/index.html. Click on the Monetize section and use the Add your app manually option to set up a new banner ad. This will allocate a new AdMob ad unit ID.

The Ads API is also part of the Google Play services platform that we have already integrated into our Android project. We have already added the necessary permissions to AndroidManifest, but we need to add the following as well:

    <!-- This meta-data tag is required to use Google Play services. -->
    <meta-data android:name="com.google.android.gms.version"
        android:value="@integer/google_play_services_version" />

    <!-- Include the AdActivity configChanges and theme. -->
    <activity android:name="com.google.android.gms.ads.AdActivity"
        android:configChanges="keyboard|keyboardHidden|orientation|screenLayout|uiMode|screenSize|smallestScreenSize"
        android:theme="@android:style/Theme.Translucent" />

AdMob needs its own view, whereas LibGDX creates its own view when initializing. A typical way for the two to coexist is to display our game view fullscreen with the ad view overlaid on top; we will use a RelativeLayout to arrange both views. We need to replace the initialize method with the initializeForView method, which skips some of the window setup, so we specify those flags manually. The onCreate method of the AndroidLauncher class has the following new code:

    requestWindowFeature(Window.FEATURE_NO_TITLE);
    getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
        WindowManager.LayoutParams.FLAG_FULLSCREEN);
    getWindow().clearFlags(WindowManager.LayoutParams.FLAG_FORCE_NOT_FULLSCREEN);

    RelativeLayout layout = new RelativeLayout(this);
    View gameView = initializeForView(new ThrustCopter(this), config);
    layout.addView(gameView);

    // Add the AdMob view.
    RelativeLayout.LayoutParams adParams =
        new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.WRAP_CONTENT,
            RelativeLayout.LayoutParams.WRAP_CONTENT);
    adParams.addRule(RelativeLayout.ALIGN_PARENT_TOP);
    adParams.addRule(RelativeLayout.CENTER_HORIZONTAL);

    adView = new AdView(this);
    adView.setAdSize(AdSize.BANNER);
    adView.setAdUnitId(AD_UNIT_ID);
    startAdvertising();
    layout.addView(adView, adParams);
    setContentView(layout);

The startAdvertising function is as follows:

    private void startAdvertising() {
        AdRequest adRequest = new AdRequest.Builder().build();
        adView.loadAd(adRequest);
    }

The IActivityRequestHandler interface has a new method, showAds(boolean show), that toggles the visibility of the adView instance. The method is implemented as follows:

    @Override
    public void showAds(boolean show) {
        handler.sendEmptyMessage(show ? SHOW_ADS : HIDE_ADS);
    }

Here, handler, which is used to access adView from the thread that created it, is initialized as follows:

    private final int SHOW_ADS = 1;
    private final int HIDE_ADS = 0;

    protected Handler handler = new Handler()
    {
        @Override
        public void handleMessage(Message msg) {
            switch (msg.what) {
                case SHOW_ADS:
                {
                    adView.setVisibility(View.VISIBLE);
                    break;
                }
                case HIDE_ADS:
                {
                    adView.setVisibility(View.GONE);
                    break;
                }
            }
        }
    };

For more information, visit https://github.com/libgdx/libgdx/wiki/Admob-in-libgdx. Alternatively, runOnUiThread can be used instead of the Handler and handleMessage approach. Now, we can show ads in the menu and hide them when we switch to the game.

Summary

In this article, you learned how to handle platform-specific code, and you also learned how to use Google Play services to integrate AdMob and Analytics.


Enemy and Friendly AIs

Packt | 17 Dec 2014 | 29 min read
In this article by Kyle D'Aoust, author of the book Unity Game Development Scripting, we will see how to create enemy and friendly AIs.

Artificial Intelligence, also known as AI, is something you'll see in every video game that you play. First-person shooters, real-time strategy, simulation, role-playing games, sports, puzzles, and so on all have various forms of AI, in both large and small systems. In this article, we'll go over several topics involved in creating AI, including techniques, actions, pathfinding, animations, and the AI manager. Then, finally, we'll put it all together to create an AI package of our own.

In this article, you will learn:

- What a finite state machine is
- What a behavior tree is
- How to combine two AI techniques for complex AI
- How to deal with internal and external actions
- How to handle outside actions that affect the AI
- How to play character animations
- What pathfinding is
- How to use a waypoint system
- How to use Unity's NavMesh pathfinding system
- How to combine waypoints and NavMesh for complete pathfinding

AI techniques

There are two very common techniques used to create AI: the finite state machine and the behavior tree. Depending on the game that you are making and the complexity of the AI that you want, the technique you use will vary. In this article, we'll utilize both techniques in our AI script to maximize the potential of our AI.

Finite state machines

Finite state machines are one of the most common AI systems used throughout computer programming. Defined simply, a finite state machine is a system that controls an object that has a limited number of states to exist in. Some real-world examples of finite state machines are traffic lights, televisions, and computers. Let's look at an example of a computer finite state machine to get a better understanding.

A computer can be in various states. To keep it simple, we will list three main states: On, Off, and Active. The Off state is when the computer does not have power running through it, the On state is when it does, and the Active state is when someone is using it. Let's take a further look into our computer finite state machine and explore the functions of each of its states:

    State  | Functions
    On     | Can be used by anyone; can turn off the computer
    Off    | Can turn on the computer; computer parts can be operated on
    Active | Can access the Internet and various programs; can communicate with other devices; can turn off the computer

Each state has its own functions. Some of the functions of each state affect each other, while some do not. The functions that do affect each other are the ones that control what state the finite state machine is in. If you press the power button on your computer, it turns on and changes the state of your computer to On. While the state of your computer is On, you can use the Internet and other programs, or communicate with devices such as a router or printer; doing so changes the state of your computer to Active. When you are using the computer, you can also turn it off through its software or by pressing the power button, changing the state to Off.

In video games, you can use a finite state machine to create AI with simple logic. You can also combine finite state machines with other types of AI systems to create a unique and perhaps more complex AI system.
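To make the state-and-transition idea concrete, here is the computer example expressed as a minimal code sketch. It is written in Java purely for illustration (it reads almost identically to the C# we will write shortly), and the class and method names are ours, not part of the AI package built in this article:

    // A minimal finite state machine for the computer example above.
    public class ComputerFsm {
        enum State { OFF, ON, ACTIVE }

        private State state = State.OFF;

        // The power button turns the machine on from Off,
        // and shuts it down from On or Active.
        public void pressPowerButton() {
            state = (state == State.OFF) ? State.ON : State.OFF;
        }

        // Using a program only has an effect while the machine is on.
        public void useProgram() {
            if (state == State.ON) state = State.ACTIVE;
        }

        public State currentState() {
            return state;
        }
    }

The important property is that every transition is explicit: the only way to reach Active is through On, just as described in the table above.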
In this article, we will be using finite state machines as well as what is known as a behavior tree.

The behavior tree form of the AI system

A behavior tree is another kind of AI system that works in a very similar way to finite state machines. In fact, behavior trees are made up of finite state machines that work in a hierarchical system. This hierarchy gives us great control over an individual finite state system, and potentially many of them, within the behavior tree, allowing us to have a complex AI system.

Looking back at the table explaining a finite state machine, a behavior tree works the same way. Instead of states, you have behaviors, and in place of the state functions, you have the various finite state machines that determine what is done while the AI is in a specific behavior. The behavior tree we will be using in this article has four behaviors: Idle, Guard, Combat, and Flee, each made up of its own finite state machines. Idle and Flee have only one finite state machine each, while Guard and Combat have multiple; within the Combat behavior, two of its finite state machines even contain a couple of finite state machines of their own. As you can see, this hierarchy-based system of finite state machines allows us to use a basic form of logic to create an even more complex AI system.

At the same time, we also get a lot of control by separating our AI into various behaviors. Each behavior runs its own silo of code, oblivious to the other behaviors. The only time we want a behavior to take notice of another behavior is when an internal or external action occurs that forces the behavior of our AI to change.

Combining the techniques

In this article, we will take both of these AI techniques and combine them to create a great AI package. Our behavior tree will utilize finite state machines to run the individual behaviors, creating a unique and complex AI system that can be used for an enemy AI as well as a friendly AI.

Let's start scripting!

Now, let's begin scripting our AI! To start off, create a new C# file and name it AI_Agent. Upon opening it, delete any functions within the main class, leaving it empty. Just after the using statements, add this enum to the script:

    public enum Behaviors {Idle, Guard, Combat, Flee};

This enum will be used throughout our script to determine what behavior our AI is in. Now let's declare our first variable in the class:

    public Behaviors aiBehaviors = Behaviors.Idle;

This variable, aiBehaviors, will be the deciding factor in what our AI does. Its main purpose is to have its value checked and changed when needed. Let's create our first function, which utilizes one of this variable's purposes:

    void RunBehaviors()
    {
        switch(aiBehaviors)
        {
        case Behaviors.Idle:
            RunIdleNode();
            break;
        case Behaviors.Guard:
            RunGuardNode();
            break;
        case Behaviors.Combat:
            RunCombatNode();
            break;
        case Behaviors.Flee:
            RunFleeNode();
            break;
        }
    }

This function checks the value of our aiBehaviors variable in a switch statement. Depending on the value, it then calls a function to be used within that behavior. Each of those functions is, in effect, a finite state machine that decides what the behavior does at that point.
Now, let's add another function to our script, which will allow us to change the behavior of our AI:

    void ChangeBehavior(Behaviors newBehavior)
    {
        aiBehaviors = newBehavior;
        RunBehaviors();
    }

As you can see, this function works very similarly to the RunBehaviors function. When called, it takes a new Behaviors value and assigns it to aiBehaviors, thereby changing the behavior of our AI. Now let's add the final step to running our behaviors; for now, these will be empty functions that act as placeholders for our internal and external actions. Add these functions to the script:

    void RunIdleNode()
    {
    }

    void RunGuardNode()
    {
    }

    void RunCombatNode()
    {
    }

    void RunFleeNode()
    {
    }

Each of these functions will run the finite state machines that make up the behaviors. They are essentially a middleman between the behavior and the behavior's actions. Using these functions is the beginning of having more control over our behaviors, something that can't be done with a simple finite state machine.

Internal and external actions

The actions of a finite state machine can be broken up into internal and external actions. Separating the actions into the two categories makes it easier to define what our AI does in any given situation. The separation is helpful in the planning phase of creating AI, but it also helps in the scripting phase, since you will know what will and will not be called by other classes and GameObjects. Another benefit of this separation is that it eases the work of multiple programmers working on the same AI; each programmer can work on separate parts of the AI with fewer conflicts.

External actions

External actions are functions and activities that are activated when objects outside of the AI object act upon it. Some examples of external actions include being hit by a player, having a spell cast upon the AI, falling from a height, losing the game by an external condition, and communicating with external objects. The external actions that we will be using for our AI are:

- Changing its health
- Raising a stat
- Lowering a stat
- Killing the AI

Internal actions

Internal actions are the functions and activities that the AI runs within itself. Examples of these are patrolling a set path, attacking a player, running away from the player, and using items. These are all actions that the AI will choose to do depending on a number of conditions. The internal actions that we will be using for our AI are:

- Patrolling a path
- Attacking a player
- Fleeing from a player
- Searching for a player

Scripting the actions

It's time to add some internal and external actions to the script. First, be sure to add this using statement at the top of your script with the other using statements:

    using System.Collections.Generic;

Now, let's add some variables that will allow us to use the actions:

    public bool isSuspicious = false;
    public bool isInRange = false;
    public bool FightsRanged = false;
    public List<KeyValuePair<string, int>> Stats = new List<KeyValuePair<string, int>>();
    public GameObject Projectile;

The first three of our new variables are conditions to be used in finite state machines to determine which function should be called. Next, we have a list of KeyValuePair entries, which will hold the stats of our AI GameObject. The last variable is a GameObject, which we will use as a projectile for ranged attacks. Remember the empty middleman functions that we created earlier?
Now, with these new variables, we will add some code to each of them. Fill in the previously empty functions as follows:

    void RunIdleNode()
    {
        Idle();
    }

    void RunGuardNode()
    {
        Guard();
    }

    void RunCombatNode()
    {
        if(FightsRanged)
            RangedAttack();
        else
            MeleeAttack();
    }

    void RunFleeNode()
    {
        Flee();
    }

Two of the three Boolean variables we just created are being used as conditionals to call different functions, effectively creating finite state machines. Next, we will add the rest of our actions; these are what the middleman functions call. Some of these functions will be empty placeholders for now, but they will be filled in later in the article:

    void Idle()
    {
    }

    void Guard()
    {
        if(isSuspicious)
        {
            SearchForTarget();
        }
        else
        {
            Patrol();
        }
    }

    void Combat()
    {
        if(isInRange)
        {
            if(FightsRanged)
            {
                RangedAttack();
            }
            else
            {
                MeleeAttack();
            }
        }
        else
        {
            SearchForTarget();
        }
    }

    void Flee()
    {
    }

    void SearchForTarget()
    {
    }

    void Patrol()
    {
    }

    void RangedAttack()
    {
        GameObject newProjectile;
        newProjectile = Instantiate(Projectile, transform.position, Quaternion.identity) as GameObject;
    }

    void MeleeAttack()
    {
    }

In the Guard function, we check whether the AI has noticed the player. If it has, it proceeds to search for the player; if not, it continues to patrol along its path. In the Combat function, we first check whether the player is within attacking range; if not, the AI searches again. If the player is within attacking range, we check whether the AI prefers attacking up close or from far away. For ranged attacks, we first create a new, temporary GameObject variable and set it to an instantiated clone of our Projectile GameObject. From here, the projectile runs its own scripts to determine what it does. This is how we allow our AI to attack the player from a distance.

Next, we will add a function that changes the health of the AI:

    void ChangeHealth(int Amount)
    {
        if(Amount < 0)
        {
            if(!isSuspicious)
            {
                isSuspicious = true;
                ChangeBehavior(Behaviors.Guard);
            }
        }

        for(int i = 0; i < Stats.Count; i++)
        {
            if(Stats[i].Key == "Health")
            {
                int tempValue = Stats[i].Value;
                Stats[i] = new KeyValuePair<string, int>(Stats[i].Key, tempValue += Amount);

                if(Stats[i].Value <= 0)
                {
                    Destroy(gameObject);
                }
                else if(Stats[i].Value < 25)
                {
                    isSuspicious = false;
                    ChangeBehavior(Behaviors.Flee);
                }
                break;
            }
        }
    }

This function takes an int parameter, the amount by which we want to change the AI's health. The first thing we do is check whether the amount is negative; if it is, we make our AI suspicious and change the behavior accordingly. Next, we search for the Health stat in our list and set its value to a new value adjusted by the Amount parameter. We then check whether the AI's health has dropped to zero or below, in which case we kill it; otherwise, we check whether its health is below 25. If the health is that low, we make our AI flee from the player.

To finish off our actions, we have one last function to add. It will allow us to affect a specific stat of the AI. These modifications will either add to or subtract from a stat, and they can be permanent or restored at any time.
For the following instance, the modifications will be permanent:

    void ModifyStat(string Stat, int Amount)
    {
        for(int i = 0; i < Stats.Count; i++)
        {
            if(Stats[i].Key == Stat)
            {
                int tempValue = Stats[i].Value;
                Stats[i] = new KeyValuePair<string, int>(Stats[i].Key, tempValue += Amount);
                break;
            }
        }

        if(Amount < 0)
        {
            if(!isSuspicious)
            {
                isSuspicious = true;
                ChangeBehavior(Behaviors.Guard);
            }
        }
    }

This function takes a string and an integer. The string is used to search for the specific stat that we want to affect, and the integer is how much we want to affect that stat by. It works in a very similar way to the ChangeHealth function, except that we first search for a specific stat. We also check whether the amount is negative; this time, if it is, we change our AI's behavior to Guard. This seems to be an appropriate response for the AI after being hit by something that lowered one of its stats!

Pathfinding

Pathfinding is how the AI maneuvers around the level. For our AI package, we will be using two different kinds of pathfinding: waypoints and NavMesh. The waypoint system is a common approach to creating paths for AI to move around the game level, while Unity's NavMesh component allows our AI to move through the level in an intelligent manner.

Creating paths using the waypoint system

Using waypoints to create paths is a common practice in game design, and it's simple too. To sum it up, you place objects or set locations around the game world; these are your waypoints. In the code, you place all of the waypoints you created in a container of some kind, such as a list or an array. Then, starting at the first waypoint, you tell the AI to move to the next waypoint. Once that waypoint has been reached, you send the AI off to the one after it, ultimately creating a system that iterates through all of the waypoints, allowing the AI to move around the game world along set paths.

Although the waypoint system grants our AI movement in the world, at this point it doesn't know how to avoid obstacles it may come across. That is when you need to implement some sort of mesh navigation system so that the AI won't get stuck anywhere.

Unity's NavMesh system

The next step in creating AI pathfinding is to create a way for our AI to navigate through the game world intelligently, meaning that it does not get stuck anywhere. In just about every game out there with a 3D-based AI, the world the AI inhabits has all sorts of obstacles: plants, stairs, ramps, boxes, holes, and so on. To get our AI to avoid these obstacles, we will use Unity's NavMesh system, which is built into Unity itself.

Setting up the environment

Before we can start creating our pathfinding system, we need to create a level for our AI to move around in. To do this, I am just using Unity primitive models such as cubes and capsules. For the floor, create a cube, stretch it out, and flatten it to make a rectangle. From there, clone it several times so that you have a large floor made up of cubes. Next, delete a bunch of the cubes and move some others around; this creates holes in our floor, which will be used and tested when we implement the NavMesh system. To make the floor easy to see, I created a green material and assigned it to the floor cubes. After this, create a few more cubes: make one really long, make one shorter than the previous one but thicker, and reserve the last one to be used as a ramp.
I've created an intersection of the really long cube and the thick cube, and then placed the ramp towards the end of the thick cube, giving access to the top of the cubes. Our final step in creating our test environment is to add a few waypoints for our AI. For testing purposes, create five waypoints: place one in each corner of the level and one in the middle. For the actual waypoints, use the capsule primitive, and add a Rigidbody component to each one. Name the waypoints Waypoint1, Waypoint2, Waypoint3, and so on. The names are not important to our code; they just make it easier to distinguish between waypoints in the Inspector.

Creating the NavMesh

Now, we will create the navigation mesh for our scene. The first thing to do is select all of the floor cubes. In the menu bar in Unity, click on the Window option, and then click on the Navigation option at the bottom of the dropdown; this will open up the Navigation window.

By default, the OffMeshLink Generation option is not checked; be sure to check it. This creates links at the edges of the mesh, allowing it to communicate with any other OffMeshLink nearby and form a single mesh. It is a handy option, since game levels typically use more than one mesh as a floor.

The Scene filter simply limits which objects are shown in the Hierarchy view list: selecting all objects shows all of your GameObjects, selecting mesh renderers shows only GameObjects that have the Mesh Renderer component, and selecting terrains shows only terrains.

The Navigation Layer dropdown allows you to set an area as walkable, not walkable, or jump accessible. Walkable areas typically refer to floors, ramps, and so on; non-walkable areas refer to walls, rocks, and other various obstacles.

Next, click on the Bake tab next to the Object tab. For this article, I am leaving all the values at their defaults. The Radius property determines how close to the walls the navigation mesh will exist. Height determines how much vertical space is needed for the AI agent to be able to walk on the navigation mesh. Max Slope is the maximum angle that the AI is allowed to travel on for ramps, hills, and so on. The Step Height property determines how high the AI can step up onto surfaces above the ground level.

For Generated Off Mesh Links, the properties work in a similar way. The Drop Height value is the maximum distance the AI can intelligently drop down to another part of the navigation mesh, and Jump Distance is its counterpart, determining how far the AI can jump up to another part of the navigation mesh.

The Advanced options are to be used once you have a better understanding of the NavMesh component and want a little more out of it. Here, you can further tweak the accuracy of the NavMesh as well as create a Height Mesh to coincide with the navigation mesh.

Now that you know the basics of the Unity NavMesh, let's go ahead and create our navigation mesh. At the bottom-right corner of the Navigation tab in the Inspector window, you should see two buttons: one that says Clear and one that says Bake. Click on the Bake button now to create your new navigation mesh. Then select the ramp and the thick cube that we created earlier.
In the Navigation window, make sure that the OffMeshLink Generation option is not checked and that Navigation Layer is set to Default. Then reselect the floor cubes as well, so that you have the floors, ramp, and thick wall selected, and bake the navigation mesh again to create a new one. You should now be able to see the newly generated navigation mesh overlaying the underlying geometry. This is what the default Bake properties produce; changing them will give you different results, and the right settings come down to what kind of navigation mesh you want the AI to use.

Now that we have a navigation mesh, let's create the code for our AI to utilize it. First, we will code the waypoint system, and then we will code what is needed for the NavMesh system.

Adding our variables

To start our navigation system, we need to add a few variables. Place these with the rest of our variables:

    public Transform[] Waypoints;
    public int curWaypoint = 0;
    bool ReversePath = false;
    NavMeshAgent navAgent;
    Vector3 Destination;
    float Distance;

The first variable is an array of Transforms, which we will use to hold our waypoints. Next, we have an integer used to iterate through the Transform array, and a bool that decides in which direction we navigate through the waypoints. The next three variables are oriented towards the navigation mesh we created earlier: the NavMeshAgent object is what we reference when we want to interact with the navigation mesh, Destination is the location we want the AI to move towards, and Distance is what we use to check how far away we are from that location.

Scripting the navigation functions

Previously, we created many empty functions; some of them depend on pathfinding. Let's start with the Flee function. Add this code to replace the empty function:

    void Flee()
    {
        for(int fleePoint = 0; fleePoint < Waypoints.Length; fleePoint++)
        {
            Distance = Vector3.Distance(gameObject.transform.position, Waypoints[fleePoint].position);

            if(Distance > 10.00f)
            {
                Destination = Waypoints[fleePoint].position;
                navAgent.SetDestination(Destination);
                break;
            }
            else if(Distance < 2.00f)
            {
                ChangeBehavior(Behaviors.Idle);
            }
        }
    }

This for loop picks the first waypoint that is more than 10 units away and sets it as the AI's destination, moving the AI accordingly. If the AI is within 2 units of a waypoint, it has reached safety, so we change the behavior to Idle.

The next function we will adjust is the SearchForTarget function. Add the following code to it, replacing its previous emptiness:

    void SearchForTarget()
    {
        Destination = GameObject.FindGameObjectWithTag("Player").transform.position;
        navAgent.SetDestination(Destination);
        Distance = Vector3.Distance(gameObject.transform.position, Destination);

        if(Distance < 10)
            ChangeBehavior(Behaviors.Combat);
    }

This function searches for a target; the Player, to be more specific. We set Destination to the player's current position, and then move the AI towards the player. When Distance is less than 10, we set the AI behavior to Combat. Now that our AI can run from the player as well as chase the player down, let's utilize the waypoints and create paths for the AI.
Add this code to the empty Patrol function:

    void Patrol()
    {
        Distance = Vector3.Distance(gameObject.transform.position, Waypoints[curWaypoint].position);

        if(Distance > 2.00f)
        {
            Destination = Waypoints[curWaypoint].position;
            navAgent.SetDestination(Destination);
        }
        else
        {
            if(ReversePath)
            {
                if(curWaypoint <= 0)
                {
                    ReversePath = false;
                }
                else
                {
                    curWaypoint--;
                    Destination = Waypoints[curWaypoint].position;
                }
            }
            else
            {
                if(curWaypoint >= Waypoints.Length - 1)
                {
                    ReversePath = true;
                }
                else
                {
                    curWaypoint++;
                    Destination = Waypoints[curWaypoint].position;
                }
            }
        }
    }

Patrol starts by checking the Distance variable. If the AI is far from the current waypoint, we set that waypoint as the AI's new destination. If the current waypoint is close to the AI, we check the ReversePath Boolean: when ReversePath is true, we send the AI to the previous waypoint, traversing the path in reverse order; when it is false, the AI moves on to the next waypoint in the list.

With all of this completed, you now have an AI with pathfinding abilities. The AI can patrol a path set by waypoints and reverse the path when the end has been reached. We have also given the AI the ability to search for the player and to flee from the player.

Character animations

Animations are what bring the characters of a game to life visually. From basic movements to super-realistic motion, all animations are important, and they are how the player actually sees the results of a scripter's work. Before we add animations to our AI, we first need to get a model mesh for it!

Importing the model mesh

For this article, I am using a model mesh that I got from the Unity Asset Store. To use the same mesh, go to the Unity Asset Store and search for Skeletons Pack. It is a package of four skeleton model meshes that are fully textured, rigged, and animated. The asset itself is free and great to use. When you import the package into Unity, it comes with all four models, their textures, and an example scene named ShowCase. Open that scene and you should see the four skeletons; run the scene and you will see them all playing their idle animations.

Choose the skeleton you want to use for your AI; I chose skeletonDark for mine. Expand your skeleton in the Hierarchy window, then expand Bip01 and select the magicParticle object. Our AI will not need it, so delete it from the Hierarchy window.

Create a new prefab in the Project window and name it Skeleton. Now select the skeleton that you want to use in the Hierarchy window and drag it onto the newly created prefab; this will be the model you use for this article. In your AI test scene, drag and drop the Skeleton prefab into the scene. I placed mine towards the center of the level, near the middle waypoint. In the Inspector window, you will see the model's Animation component, full of animations.

Now, we need to add a few components to our skeleton. Go to the Component menu at the top of the Unity window, select Navigation, and then select NavMesh Agent; this allows the skeleton to utilize the NavMesh we created earlier. Then go into the Component menu again and add a Capsule Collider as well as a Rigidbody.
With these components added, your model now has everything it needs to work with our AI script.

Scripting the animations

To script our animations, we will take a simple approach. There won't be a lot of code to deal with, but we will spread it across the various areas of our script where we need to play animations. In the Idle function, add this line of code:

    animation.Play("idle");

This line plays the idle animation. We use animation to access the model's Animation component, and then use that component's Play function. Play takes the name of an animation and plays it; here, we call the idle animation. In the SearchForTarget function, add this line to the script:

    animation.Play("run");

We access the same Play function and call the run animation here. Add the same line to the Patrol function as well, since we want to use that animation there too. In the RangedAttack and MeleeAttack functions, add this code:

    animation.Play("attack");

Here, we call the attack animation. If we had a separate animation for ranged attacks, we would use it instead, but since we don't, we utilize the same animation for both attack types. With this, we have finished coding the animations into our AI; it will now play them when they are called during gameplay.

Putting it all together

To wrap up our AI package, we will now finish the script and add it to the skeleton.

Final coding touches

At the beginning of our AI script, we created some variables that we have yet to properly assign. We will do that in the Start function. We will also add the Update function to run our AI code. Add these functions to the bottom of the class:

    void Start()
    {
        navAgent = GetComponent<NavMeshAgent>();

        Stats.Add(new KeyValuePair<string, int>("Health", 100));
        Stats.Add(new KeyValuePair<string, int>("Speed", 10));
        Stats.Add(new KeyValuePair<string, int>("Damage", 25));
        Stats.Add(new KeyValuePair<string, int>("Agility", 25));
        Stats.Add(new KeyValuePair<string, int>("Accuracy", 60));
    }

    void Update()
    {
        RunBehaviors();
    }

In the Start function, we first assign the navAgent variable by getting the NavMeshAgent component from the GameObject. Next, we add new KeyValuePair entries to the Stats list, filling it with the stats we created. The Update function calls RunBehaviors; this is what keeps the AI running, executing the correct behavior as long as the AI is active.

Filling out the Inspector

To complete the AI package, we need to add the script to the skeleton, so drag it onto the skeleton in the Hierarchy window. In the Size property of the Waypoints array, type 5 and open the drop-down list. Starting with Element 0, drag each of the waypoints into the empty slots. For the projectile, create a sphere GameObject and make it a prefab, then drag it onto the empty slot next to Projectile. Finally, set AI Behaviors to Guard, so that your AI will be patrolling when you start the scene.

Your AI is now ready for gameplay! To make sure everything works, we will need to do some playtesting.

Playtesting

A great way to playtest the AI is to play the scene in every behavior: start off with Guard, then run it in Idle, Combat, and Flee.
For different outputs, try adjusting some of the variables in the NavMesh Agent component, such as Speed, Angular Speed, and Stopping Distance. Also try mixing your waypoints around so that the path is different.

Summary

In this article, you learned how to create an AI package. We explored a couple of techniques for handling AI, finite state machines and behavior trees, and then dived into AI actions, both internal and external. From there, we implemented pathfinding with both a waypoint system and Unity's NavMesh system. Finally, we topped the AI package off with animations and put everything together, creating our finalized AI.


Using the Leap Motion Controller with Arduino

Packt | 19 Nov 2014 | 18 min read
This article by Brandon Sanders, the author of the book Mastering Leap Motion, focuses on what he specializes in: hardware. While normal applications are all fine and good, he finds it much more gratifying when a program he writes has an impact in the physical world.

One of the most popular hobbyist hardware solutions, as I'm sure you know, is the Arduino. This cute little blue board from Italy brought the power of microcontrollers to the masses. Throughout this article, we're going to work on integrating a Leap Motion Controller with an Arduino board via a simplistic program; the end goal is to make the built-in LED on an Arduino board blink either slower or faster depending on how far a user's hand is from the Leap. While this is a relatively simple task, it's a great way to demonstrate how you can connect something like the Leap to an external piece of hardware. From there, it's only a hop, skip, and jump to controlling robots and other cool things with the Leap!

This project will follow the client-server model of programming: we'll be writing a simple Java server, which will be run from a computer, and a C++ client, which will run on an Arduino board connected to that computer. The server will be responsible for retrieving Leap Motion input and sending it to the client, while the client will be responsible for making an LED blink based on the data received from the server.

Before we begin, I'd like to note that you can download the completed (and working) project from GitHub at https://github.com/Mizumi/Mastering-Leap-Motion-Chapter-9-Project-Leapduino.

A few things you'll need

Before you begin working on this tutorial, there are a few things you're going to need:

- A computer (for obvious reasons)
- A Leap Motion Controller
- An Arduino of some kind. This tutorial is based around the Uno model, but other similar models like the Mega should work just as well.
- A USB cable to connect your Arduino to your computer
- Optionally, the Eclipse IDE (this tutorial will assume you're using Eclipse for the sake of readability and instruction)

Setting up the environment

First off, you're going to need a copy of the Leap Motion SDK so that you can add the requisite library JAR files and DLLs to the project. If you don't already have it, you can get a copy of the SDK from https://www.developer.leapmotion.com/downloads/.

Next, you're going to need the Java Simple Serial Connector (JSSC) library and the Arduino IDE. You can download the library JAR file for JSSC from GitHub at https://github.com/scream3r/java-simple-serial-connector/releases. Once the download completes, extract the JAR file from the downloaded ZIP folder and store it somewhere safe; you'll need it later on in this tutorial.

You can then proceed to download the Arduino IDE from the official website at http://arduino.cc/en/Main/Software. If you're on Windows, you can download a Windows installer file which will automagically install the entire IDE on to your computer. On the other hand, Mac and Linux users will need to download .zip or .tgz files, extract them manually, and run the executable binary from the extracted folder contents.

Setting up the project

To set up our project, perform the following steps. The first thing we're going to do is create a new Java project. This can be easily achieved by opening up Eclipse (to reiterate for the third time, this tutorial will assume you're using Eclipse) and heading over to File -> New -> Java Project.
You will then be greeted by a project creation wizard, where you'll be prompted to choose a name for the project (I used Leapduino). Click on the Finish button when you're done.

My current development environment is based around the Eclipse IDE for Java Developers, which can be found at http://www.eclipse.org/downloads. The instructions that follow use Eclipse nomenclature and jargon, but they will still be usable if you're working with something else (like NetBeans).

Once the project is created, navigate to it in the Package Explorer window and perform the following actions:

- Create a new package for the project by right-clicking on the src folder of your project in the Package Explorer and then navigating to New | Package in the resulting menu. You can name it whatever you like; I personally called mine com.mechakana.tutorials.
- Add three files to the newly created package: Leapduino.java, LeapduinoListener.java, and RS232Protocol.java. To create a new file, simply right-click on the package and then navigate to New | Class.
- Create a new folder in your project by right-clicking on the project name in the Package Explorer and then navigating to New | Folder; for the purposes of this tutorial, name it Leapduino.
- Add one file to the newly created folder: Leapduino.ino. This file will contain all of the code that we're going to upload to the Arduino.

With all of our files created, we need to add the libraries to the project. Create a new folder at the root directory of your project, called lib. Within the lib folder, place the jssc.jar file that you downloaded earlier, along with the LeapJava.jar file from the Leap Motion SDK. Then, add the appropriate Leap.dll and LeapJava.dll files for your platform to the root of your project.

Finally, you'll need to modify your Java build path to link the LeapJava.jar and jssc.jar files to your project. This can be achieved by right-clicking on your project in the Package Explorer and navigating to Build Path… | Configure Build Path…. From there, go to the Libraries tab, click on Add JARs…, and select the two aforementioned JAR files (LeapJava.jar and jssc.jar). And you're done; now to write some code!

Writing the Java side of things

With everything set up and ready to go, we can start writing some code. First off, we're going to write the RS232Protocol class, which will allow our application to communicate with any Arduino board connected to the computer via a serial (RS-232) connection. This is where the JSSC library will come into play, allowing us to quickly and easily write code that would otherwise be quite lengthy (and not fun).

Fun fact: RS-232 is a standard for serial communications and transmission of data. There was a time when it was a common feature on a personal computer, used for modems, printers, mice, hard drives, and so on. With time, though, Universal Serial Bus (USB) technology replaced RS-232 for many of those roles. Despite this, today's industrial machines, scientific equipment, and (of course) robots still make heavy use of this protocol due to its light weight and ease of use; the Arduino is no exception!
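Before writing the class, it can be handy to confirm which serial port your Arduino actually shows up on; you'll need that name later when we hardcode it into the main class. The following minimal sketch uses JSSC's SerialPortList utility to enumerate ports; it is an illustrative aside rather than part of the tutorial's project files:

    import jssc.SerialPortList;

    // Prints every serial port JSSC can see. An Arduino typically appears
    // as something like COM4 on Windows or /dev/ttyACM0 on Linux.
    public class ListPorts {
        public static void main(String[] args) {
            for (String portName : SerialPortList.getPortNames()) {
                System.out.println(portName);
            }
        }
    }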
Go ahead and open up the RS232Protocol.java file which we created earlier, and enter the following:

    package com.mechakana.tutorials;

    import jssc.SerialPort;
    import jssc.SerialPortEvent;
    import jssc.SerialPortEventListener;
    import jssc.SerialPortException;

    public class RS232Protocol
    {
        //Serial port we're manipulating.
        private SerialPort port;

        //Class: RS232Listener
        public class RS232Listener implements SerialPortEventListener
        {
            public void serialEvent(SerialPortEvent event)
            {
                //Check if data is available.
                if (event.isRXCHAR() && event.getEventValue() > 0)
                {
                    try
                    {
                        int bytesCount = event.getEventValue();
                        System.out.print(port.readString(bytesCount));
                    }
                    catch (SerialPortException e) { e.printStackTrace(); }
                }
            }
        }

        //Member Function: connect
        public void connect(String newAddress)
        {
            try
            {
                //Set up a connection.
                port = new SerialPort(newAddress);

                //Open the new port and set its parameters.
                port.openPort();
                port.setParams(38400, 8, 1, 0);

                //Attach our event listener.
                port.addEventListener(new RS232Listener());
            }
            catch (SerialPortException e) { e.printStackTrace(); }
        }

        //Member Function: disconnect
        public void disconnect()
        {
            try { port.closePort(); }
            catch (SerialPortException e) { e.printStackTrace(); }
        }

        //Member Function: write
        public void write(String text)
        {
            try { port.writeBytes(text.getBytes()); }
            catch (SerialPortException e) { e.printStackTrace(); }
        }
    }

All in all, RS232Protocol is a simple class; there really isn't a whole lot to talk about here! However, I'd love to point your attention to one interesting part of the class:

    public class RS232Listener implements SerialPortEventListener
    {
        public void serialEvent(SerialPortEvent event) { /*code*/ }
    }

You might have found it rather odd that we didn't create a function for reading from the serial port; we only created a function for writing to it. This is because we've opted to utilize an event listener, the nested RS232Listener class. Under normal operating conditions, this class's serialEvent function will be called and executed every single time new information is received from the port. When this happens, the function will print all of the incoming data out to the user's screen. Isn't that nifty?

Moving on, our next class is a familiar one: LeapduinoListener, a simple Listener implementation. This class represents the meat of our program, receiving Leap Motion tracking data and then sending it over our serial port to the connected Arduino. Go ahead and open up LeapduinoListener.java and enter the following code:

    package com.mechakana.tutorials;

    import com.leapmotion.leap.*;

    public class LeapduinoListener extends Listener
    {
        //Serial port that we'll be using to communicate with the Arduino.
        private RS232Protocol serial;

        //Constructor
        public LeapduinoListener(RS232Protocol serial)
        {
            this.serial = serial;
        }

        //Member Function: onInit
        public void onInit(Controller controller)
        {
            System.out.println("Initialized");
        }

        //Member Function: onConnect
        public void onConnect(Controller controller)
        {
            System.out.println("Connected");
        }

        //Member Function: onDisconnect
        public void onDisconnect(Controller controller)
        {
            System.out.println("Disconnected");
        }

        //Member Function: onExit
        public void onExit(Controller controller)
        {
            System.out.println("Exited");
        }

        //Member Function: onFrame
        public void onFrame(Controller controller)
        {
            //Get the most recent frame.
            Frame frame = controller.frame();

            //Verify a hand is in view.
            if (frame.hands().count() > 0)
            {
                //Get some hand tracking data (the hand's height above the controller).
                int hand = (int) (frame.hands().frontmost().palmPosition().getY());

                //Send the hand height to the Arduino.
                serial.write(String.valueOf(hand));

                //Give the Arduino some time to process our data.
                try { Thread.sleep(30); }
                catch (InterruptedException e) { e.printStackTrace(); }
            }
        }
    }

In this class, we've got the basic Leap Motion API onInit, onConnect, onDisconnect, onExit, and onFrame functions. Our onFrame function is fairly straightforward: we get the most recent frame, verify a hand is within view, retrieve its y-axis coordinate (its height above the Leap Motion Controller), and then send it off to the Arduino via our instance of the RS232Protocol class (which gets assigned during initialization). The remaining functions simply print text out to the console telling us when the Leap has initialized, connected, disconnected, and exited, respectively.

And now, for our final class on the Java side of things: Leapduino! This class is a super basic main class that simply initializes the RS232Protocol class and the LeapduinoListener; that's it! Without further ado, go on ahead and open up Leapduino.java and enter the following code:

    package com.mechakana.tutorials;

    import com.leapmotion.leap.Controller;

    public class Leapduino
    {
        //Main
        public static final void main(String args[])
        {
            //Initialize serial communications.
            RS232Protocol serial = new RS232Protocol();
            serial.connect("COM4");

            //Initialize the Leapduino listener.
            LeapduinoListener leap = new LeapduinoListener(serial);
            Controller controller = new Controller();
            controller.addListener(leap);
        }
    }

Like all of the classes so far, there isn't a whole lot to say here. That said, there is one line that you must absolutely be aware of, since it can change depending on how your Arduino is connected:

    serial.connect("COM4");

Depending on which port Windows chose for your Arduino when it connected to your computer (more on that next), you will need to modify the COM4 value in the above line to match the port your Arduino is on. Examples of values you'll probably use are COM3, COM4, and COM5.

And with that, the Java side of things is complete. If you run this project right now, most likely all you'll see will be two lines of output: Initialized and Connected. If you want to see anything else happen, you'll need to move on to the next section and get the Arduino side of things working.

Writing the Arduino side of things

With our Java coding done, it's time to write some good old C++ for the Arduino. If you were able to use the Windows installer for Arduino, simply navigate to the Leapduino.ino file in your Eclipse project explorer and double-click on it. If you had to extract the entire Arduino IDE and store it somewhere instead of running a simple Windows installer, navigate to it and launch the Arduino.exe file. From there, select File | Open, navigate to the Leapduino.ino file on your computer, and double-click on it.

You will now be presented with the Arduino IDE, a minimalistic and straightforward text editor and compiler for the Arduino microcontrollers. On the top left of the IDE, you'll find two circular buttons: the check mark verifies (compiles) your code to make sure it works, and the arrow deploys your code to the Arduino board connected to your computer.
Writing the Arduino side of things

With our Java coding done, it's time to write some good-old C++ for the Arduino. If you were able to use the Windows installer for Arduino, simply navigate to the Leapduino.ino file in your Eclipse project explorer and double click on it. If you had to extract the entire Arduino IDE and store it somewhere instead of running a simple Windows installer, navigate to it and launch the Arduino.exe file. From there, select File | Open, navigate to the Leapduino.ino file on your computer and double click on it. You will now be presented with a screen similar to the one here:

This is the wonderful Arduino IDE—a minimalistic and straightforward text editor and compiler for the Arduino microcontrollers. On the top left of the IDE, you'll find two circular buttons: the check mark verifies (compiles) your code to make sure it works, and the arrow deploys your code to the Arduino board connected to your computer. On the bottom of the IDE, you'll find the compiler output console (the black box), and on the very bottom right you'll see a line of text telling you which Arduino model is connected to your computer, and on what port (I have an Arduino Uno on COM4 in the preceding screenshot). As is typical for many IDEs and text editors, the big white area in the middle is where your code will go. So without further ado, let's get started with writing some code! Input all of the text shown here into the Arduino IDE:

//Most Arduino boards have an LED pre-wired to pin 13.
int led = 13;

//Current LED state. LOW is off and HIGH is on.
int ledState = LOW;

//Blink rate in milliseconds.
long blinkRate = 500;

//Last time the LED was updated.
long previousTime = 0;

//Function: setup
void setup()
{
 //Initialize the built-in LED (assuming the Arduino board has one)
 pinMode(led, OUTPUT);

 //Start a serial connection at a baud rate of 38,400.
 Serial.begin(38400);
}

//Function: loop
void loop()
{
 //Get the current system time in milliseconds.
 unsigned long currentTime = millis();

 //Check if it's time to toggle the LED on or off.
 if (currentTime - previousTime >= blinkRate)
 {
    previousTime = currentTime;

    if (ledState == LOW) ledState = HIGH;
    else ledState = LOW;

    digitalWrite(led, ledState);
 }

 //Check if there is serial data available.
 if (Serial.available())
 {
    //Wait for all data to arrive.
    delay(20);

    //Our data.
    String data = "";

    //Iterate over all of the available data and compound it into a string.
    while (Serial.available())
      data += (char) (Serial.read());

    //Set the blink rate based on our newly-read data.
    blinkRate = abs(data.toInt() * 2);

    //A blink rate lower than 30 milliseconds won't really be perceptible by a human.
    if (blinkRate < 30) blinkRate = 30;

    //Echo the data.
    Serial.println("Leapduino Client Received:");
    Serial.println("Raw Leap Data: " + data + " | Blink Rate (MS): " + blinkRate);
 }
}

Now, let's go over the contents. The first few lines are basic global variables, which we'll be using throughout the program (the comments do a good job of describing them, so we won't go into much detail here). The first function, setup, is the Arduino's equivalent of a constructor; it's called only once, when the Arduino is first turned on. Within the setup function, we initialize the built-in LED (most Arduino boards have an LED pre-wired to pin 13) on the board. We then initialize serial communications at a baud rate of 38,400 bits per second—this will allow our board to communicate with the computer later on.

Fun fact

The baud rate (abbreviated as Bd in some diagrams) is the unit for symbol rate or modulation rate in symbols or pulses per second. Simply put, on serial ports, the baud rate controls how many bits a serial port can send per second—the higher the number, the faster a serial port can communicate. For example, at 38,400 baud with the common 8N1 framing (one start bit, eight data bits, and one stop bit—ten bits per byte), you can move roughly 3,840 bytes per second. The question is, why don't we set a ridiculously high rate? Well, the higher you go with the baud rate, the more likely it is for there to be data loss—and we all know data loss just isn't good. For many applications, though, a baud rate of 9,600 to 38,400 bits per second is sufficient.

Moving on to the second function, loop is the main function in any Arduino program, which is repeatedly called while the Arduino is turned on. Due to this functionality, many programs will treat any code within this function as if it were inside a while (true) loop.
In loop, we start off by getting the current system time (in milliseconds) and then comparing it to our ideal blink rate for the LED. If the time elapsed since our last blink exceeds the ideal blink rate, we'll go ahead and toggle the LED on or off accordingly. We then proceed to check if any data has been received over the serial port. If it has, we'll proceed to wait for a brief period of time, 20 milliseconds, to make sure all data has been received. At that point, our code will proceed to read in all of the data, parse it for an integer (which will be our new blink rate), and then echo the data back out to the serial port for diagnostics purposes. As you can see, an Arduino program (or sketch, as they are formally known) is quite simple. Why don't we test it out? Deploying and testing the application With all of the code written, it's time to deploy the Arduino side of things to the, well, Arduino. The first step is to simply open up your Leapduino.ino file in the Arduino IDE. Once that's done, navigate to Tools | Board and select the appropriate option for your Arduino board. In my case, it's an Arduino Uno. At this point, you'll want to verify that you have an Arduino connected to your computer via a USB cable—after all, we can't deploy to thin air! At this point, once everything is ready, simply hit the Deploy button in the top-left of the IDE, as seen here: If all goes well, you'll see the following output in the console after 15 or so seconds: And with that, your Arduino is ready to go! How about we test it out? Keeping your Arduino plugged into your computer, go on over to Eclipse and run the project we just made. Once it's running, try moving your hand up and down over your Leap Motion controller; if all goes well, you'll see the following output from within the console in Eclipse: All of that data is coming directly from the Arduino, not your Java program; isn't that cool? Now, take a look at your Arduino while you're doing this; you should notice that the built-in LED (circled in the following image, labelled L on the board itself) will begin to blink slower or faster depending on how close your hand gets to the Leap. Circled in red: the built-in L LED on an Arduino Uno, wired to pin 13 by default. With this, you've created a simple Leap Motion application for use with an Arduino. From here, you could go on to make an Arduino-controlled robotic arm driven by coordinates from the Leap, or maybe an interactive light show. The possibilities are endless, and this is just the (albeit extremely, extremely simple) tip of the iceberg. Summary In this article, you had a lengthy look at some things you can do with the Leap Motion Controller and hardware such as Arduino. If you have any questions, I encourage you to contact me directly at brandon@mechakana.com. You can also visit my website, http://www.mechakana.com, for more technological goodies and tutorials. Resources for Article: Further resources on this subject: Major SDK components [Article] 2D Twin-stick Shooter [Article] What's Your Input? [Article]

2D Twin-stick Shooter

Packt
11 Nov 2014
21 min read
This article, written by John P. Doran, the author of Unity Game Development Blueprints, teaches us how to use Unity to prepare a well-formed game. It also gives people experienced in this field a chance to prepare some great stuff. (For more resources related to this topic, see here.)

The shoot 'em up genre of games is one of the earliest kinds of games. In shoot 'em ups, the player character is a single entity fighting a large number of enemies. They are typically played with a top-down perspective, which is perfect for 2D games. Shoot 'em up games also exist in many categories, based upon their design elements. Elements of a shoot 'em up were first seen in the 1961 Spacewar! game. However, the concept wasn't popularized until 1978 with Space Invaders. The genre was quite popular throughout the 1980s and 1990s and went in many different directions, including bullet hell games, such as the titles of the Touhou Project. The genre has recently gone through a resurgence with games such as Bizarre Creations' Geometry Wars: Retro Evolved, which is more famously known as a twin-stick shooter.

Project overview

Over the course of this article, we will be creating a 2D multidirectional shooter game similar to Geometry Wars. In this game, the player controls a ship. This ship can move around the screen using the keyboard and shoot projectiles in the direction that the mouse points at. Enemies and obstacles will spawn towards the player, and the player will avoid/shoot them. This article will also serve as a refresher on a lot of the concepts of working in Unity and give an overview of the recent addition of native 2D tools into Unity.

Your objectives

This project will be split into a number of tasks. It will be a simple step-by-step process from beginning to end. Here is the outline of our tasks:

Setting up the project
Creating our scene
Adding in player movement
Adding in shooting functionality
Creating enemies
Adding GameController to spawn enemy waves
Particle systems
Adding in audio
Adding in points, score, and wave numbers
Publishing the game

Prerequisites

Before we start, we will need to get the latest Unity version, which you can always get by going to http://unity3d.com/unity/download/ and downloading it there: At the time of writing this article, the version is 4.5.3, but this project should work in future versions with minimal changes. Next, download the Chapter1.zip package provided with this article and unzip it. Inside the Chapter1 folder, there are a number of things, including an Assets folder, which will have the art, sound, and font files you'll need for the project as well as the Chapter_1_Completed.unitypackage (this is the complete article package that includes the entire project for you to work with). I've also added in the complete game exported (TwinstickShooter Exported) as well as the entire project zipped up in the TwinstickShooter Project.zip file.

Setting up the project

At this point, I have assumed that you have Unity freshly installed and have started it up. With Unity started, go to File | New Project. Select a Project Location of your choice somewhere on your hard drive, and ensure you have Setup defaults for set to 2D. Once completed, select Create. At this point, we will not need to import any packages, as we'll be making everything from scratch. It should look like the following screenshot: From there, if you see the Welcome to Unity pop up, feel free to close it out as we won't be using it.
At this point, you will be brought to the general Unity layout, as follows: Again, I'm assuming you have some familiarity with Unity before reading this article; if you would like more information on the interface, please visit http://docs.unity3d.com/Documentation/Manual/LearningtheInterface.html. Keeping your Unity project organized is incredibly important. As your project moves from a small prototype to a full game, more and more files will be introduced to your project. If you don't start organizing from the beginning, you'll keep planning to tidy it up later on, but as deadlines keep coming, things may get quite out of hand. This organization becomes even more vital when you're working as part of a team, especially if your team is telecommuting. Differing project structures across different coders/artists/designers is an awful mess to find yourself in. Setting up a project structure at the start and sticking to it will save you countless minutes of time in the long run and only takes a few seconds, which is what we'll be doing now. Perform the following steps: Click on the Create drop-down menu below the Project tab in the bottom-left side of the screen. From there, click on Folder, and you'll notice that a new folder has been created inside your Assets folder. After the folder is created, you can type in the name for your folder. Once done, press Enter for the folder to be created. We need to create folders for the following directories:      Animations      Prefabs      Scenes      Scripts      Sprites If you happen to create a folder inside another folder, you can simply drag-and-drop it from the left-hand side toolbar. If you need to rename a folder, simply click on it once and wait, and you'll be able to edit it again. You can also use Ctrl + D to duplicate a folder if it is selected. Once you're done with the aforementioned steps, your project should look something like this: Creating our scene Now that we have our project set up, let's get started with creating our player: Double-click on the Sprites folder. Once inside, go to your operating system's browser window, open up the Chapter 1/Assets folder that we provided, and drag the playerShip.png file into the folder to move it into our project. Once added, confirm that the image is Sprite by clicking on it and confirming from the Inspector tab that Texture Type is Sprite. If it isn't, simply change it to that, and then click on the Apply button. Have a look at the following screenshot: If you do not want to drag-and-drop the files, you can also right-click within the folder in the Project Browser (bottom-left corner) and select Import New Asset to select a file from a folder to bring it in. The art assets used for this tutorial were provided by Kenney. To see more of their work, please check out www.kenney.nl. Next, drag-and-drop the ship into the scene (the center part that's currently dark gray). Once completed, set the position of the sprite to the center of the Screen (0, 0) by right-clicking on the Transform component and then selecting Reset Position. Have a look at the following screenshot: Now, with the player in the world, let's add in a background. Drag-and-drop the background.png file into your Sprites folder. After that, drag-and-drop a copy into the scene. 
If you put the background on top of the ship, you'll notice that currently the background is in front of the player (Unity puts newly added objects on top of previously created ones if their position on the Z axis is the same; this is commonly referred to as the z-order), so let's fix that.

Objects on the same Z axis without a sorting layer are considered to be equal in terms of draw order; so just because a scene looks a certain way this time, when you reload the level, it may look different. The only way to guarantee that an object is drawn in front of another one in 2D space is to give the objects different Z values or to use sorting layers.

Select your background object, and go to the Sprite Renderer component from the Inspector tab. Under Sorting Layer, select Add Sorting Layer. After that, click on the + icon for Sorting Layers, and then give Layer 1 a name, Background. Now, create a sorting layer for Foreground and GUI. Have a look at the following screenshot: Now, place the player ship on the Foreground layer and the background on the Background layer by selecting each object in turn and setting its Sorting Layer property via the drop-down menu. Now, if you play the game, you'll see that the ship is in front of the background, as follows:

At this point, we can just duplicate our background a number of times to create our full background by selecting the object in the Hierarchy, but that is tedious and time-consuming. Instead, we can create all of the duplicates by either using code or creating a tileable texture. For our purposes, we'll just create a texture. Delete the background sprite by left-clicking on the background object in the Hierarchy tab on the left-hand side and then pressing the Delete key. Then select the background sprite in the Project tab, change Texture Type in the Inspector tab to Texture, and click on Apply. Now let's create a 3D cube by selecting Game Object | Create Other | Cube from the top toolbar. Change the object's name from Cube to Background. In the Transform component, change the Position to (0, 0, 1) and the Scale to (100, 100, 1).

If you are using Unity 4.6 you will need to go to Game Object | 3D Object | Cube to create the cube.

Since our camera is at 0, 0, -10 and the player is at 0, 0, 0, putting the object at position 0, 0, 1 will put it behind all of our sprites. By creating a 3D object and scaling it, we are making it really large, much larger than the player's monitor. If we scaled a sprite, it would be one really large image with pixelation, which would look really bad. By using a 3D object, the texture that is applied to the faces of the 3D object is repeated, and since the image is tileable, it looks like one big continuous image. Remove the Box Collider by right-clicking on it and selecting Remove Component. Next, we will need to create a material for our background to use. To do so, under the Project tab, select Create | Material, and name the material BackgroundMaterial. Under the Shader property, click on the drop-down menu, and select Unlit | Texture. Click on the Texture box on the right-hand side, and select the background texture. Once completed, set the Tiling property's x and y to 25. Have a look at the following screenshot:

In addition to just selecting from the menu, you can also drag-and-drop the background texture directly onto the Texture box, and it will set the property.

Tiling tells Unity how many times the image should repeat in the x and y positions, respectively. Finally, go back to the Background object in Hierarchy. Under the Mesh Renderer component, open up Materials by left-clicking on the arrow, and change Element 0 to our BackgroundMaterial material. Consider the following screenshot: Now, when we play the game, you'll see that we now have a complete background that tiles properly. If you ever want to apply these same settings from a script instead of the Inspector, the short sketch below shows how.
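The following is a minimal sketch of that idea; it is not part of the original project, and the BackgroundSetup class name and playerRenderer field are my own inventions. It assumes it is attached to our Background cube and that the player ship's SpriteRenderer is assigned in the Inspector:

using UnityEngine;

// Illustrative sketch: applies the same settings we just made by hand.
public class BackgroundSetup : MonoBehaviour
{
    // Drag the playerShip's Sprite Renderer here in the Inspector.
    public SpriteRenderer playerRenderer;

    void Start()
    {
        // Same as picking Foreground from the Sorting Layer drop-down.
        playerRenderer.sortingLayerName = "Foreground";

        // Same as setting the material's Tiling x and y values to 25.
        GetComponent<Renderer>().material.mainTextureScale = new Vector2(25.0f, 25.0f);
    }
}

Note that accessing material at runtime creates an instance of the material for this object, which is fine here since only the background uses it.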
Scripting 101

In Unity, the behavior of game objects is controlled by the different components that are attached to them in a form of association called composition. These components are things that we can add and remove at any time to create much more complex objects. If you want to do anything that isn't already provided by Unity, you'll have to write it on your own through a process we call scripting. Scripting is an essential element in all but the simplest of video games. Unity allows you to code in either C#, Boo, or UnityScript, a language designed specifically for use with Unity and modelled after JavaScript. For this article, we will use C#. C# is an object-oriented programming language—an industry-standard language similar to Java or C++. The majority of plugins from the Asset Store are written in C#, and code written in C# can port to other platforms, such as mobile, with very minimal code changes. C# is also a strongly typed language, which means that if there is any issue with the code, it will be identified within Unity and will stop you from running the game until it's fixed. This may seem like a hindrance, but when working with code, I very much prefer to write correct code and solve problems before they escalate to something much worse.

Implementing player movement

Now, at this point, we have a great-looking game, but nothing at all happens. Let's change that now using our player. Perform the following steps: Right-click on the Scripts folder you created earlier, click on Create, and select the C# Script label. Once you click on it, a script will appear in the Scripts folder, and it should already have focus and should be asking you to type a name for the script—call it PlayerBehaviour. Double-click on the script in Unity, and it will open MonoDevelop, which is an open source integrated development environment (IDE) that is included with your Unity installation. After MonoDevelop has loaded, you will be presented with the C# stub code that was created automatically for you by Unity when you created the C# script. Let's break down what's currently there before we replace some of it with new code. At the top, you will see two lines:

using UnityEngine;
using System.Collections;

The engine knows that if we refer to a class that isn't located inside this file, then it has to reference the files within these namespaces for the referenced class before giving an error. We are currently using two namespaces. The UnityEngine namespace contains interfaces and class definitions that let MonoDevelop know about all the addressable objects inside Unity. The System.Collections namespace contains interfaces and classes that define various collections of objects, such as lists, queues, bit arrays, hash tables, and dictionaries. We will be using a list, so we will change the line to the following:

using System.Collections.Generic;

The next line you'll see is:

public class PlayerBehaviour : MonoBehaviour {

You can think of a class as a kind of blueprint for creating a new component type that can be attached to GameObjects, the objects inside our scenes that start out with just a Transform and then have components added to them. The sketch below shows what this composition looks like from code.
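The following is a small, purely illustrative sketch (the CompositionExample name is mine, not part of our project) of how components are added to and fetched from a GameObject at runtime—the same thing the editor's Add Component button does for us:

using UnityEngine;

// Illustrative only: a GameObject is built up from components (composition).
public class CompositionExample : MonoBehaviour
{
    void Start()
    {
        // Every GameObject starts out with a Transform we can read directly.
        Debug.Log("Starting position: " + transform.position);

        // Add a component at runtime, just like Add Component in the editor.
        Rigidbody2D body = gameObject.AddComponent<Rigidbody2D>();
        body.gravityScale = 0.0f;

        // Fetch a component that was attached earlier, if there is one.
        SpriteRenderer sprite = GetComponent<SpriteRenderer>();
        if (sprite != null)
        {
            sprite.color = Color.white;
        }
    }
}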
When Unity created our C# stub code, it took care of this naming for us; we can see the result, as our file is called PlayerBehaviour and the class is also called PlayerBehaviour. Make sure that your .cs file and the name of the class match, as they must be the same to enable the script component to be attached to a game object. Next up is the : MonoBehaviour part of the class definition. The : symbol signifies that we inherit from a particular class; in this case, we'll use MonoBehaviour. All behavior scripts must inherit from MonoBehaviour directly or indirectly by being derived from it.

Inheritance is the idea of having an object be based on another object or class, using the same implementation. With this in mind, all the functions and variables that existed inside the MonoBehaviour class will also exist in the PlayerBehaviour class, because PlayerBehaviour is MonoBehaviour. For more information on the MonoBehaviour class and all the functions and properties it has, check out http://docs.unity3d.com/ScriptReference/MonoBehaviour.html.

Directly after this line, we will want to add some variables to help us with the project. Variables are pieces of data that we wish to hold on to for one reason or another, typically because they will change over the course of a program, and we will do different things based on their values. Add the following code under the class definition:

// Movement modifier applied to directional movement.
public float playerSpeed = 2.0f;

// What the current speed of our player is
private float currentSpeed = 0.0f;

/*
 * Allows us to have multiple inputs and supports keyboard,
 * joystick, etc.
 */
public List<KeyCode> upButton;
public List<KeyCode> downButton;
public List<KeyCode> leftButton;
public List<KeyCode> rightButton;

// The last movement that we've made
private Vector3 lastMovement = new Vector3();

Between the variable definitions, you will notice comments to explain what each variable is and how we'll use it. To write a comment, you can simply add a // to the beginning of a line, and everything after it is commented out so that the compiler/interpreter won't see it. If you want to write something that is longer than one line, you can use /* to start a comment, and everything inside will be commented until you write */ to close it. It's always a good idea to do this in your own coding endeavors for anything that doesn't make sense at first glance.

For those of you working on your own projects in teams, there is an additional form of commenting that Unity supports, which may make your life much nicer: XML comments. They take up more space than the comments we are using, but also document your code for you. For a nice tutorial about that, check out http://unitypatterns.com/xml-comments/.

In our game, the player may want to move up using either the arrow keys or the W key. You may even want to use something else. Rather than restricting the player to just having one button, we will store all the possible ways to go up, down, left, or right in their own container. To do this, we are going to use a list, which is a holder for multiple objects that we can add or remove while the game is being played. For more information on lists, check out http://msdn.microsoft.com/en-us/library/6sh2ey19(v=vs.110).aspx

One of the things you'll notice is the public and private keywords before the variable type. These are access modifiers that dictate who can and cannot use these variables. The public keyword means that any other class can access that property, while private means that only this class will be able to access this variable. Here, currentSpeed is private because we want our current speed not to be modified or set anywhere else. There is also a useful middle ground, shown in the sketch below.
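This is standard Unity behavior, though the AccessExample class itself is just an illustration of mine: the SerializeField attribute keeps a field private in code while still exposing it in the Inspector, and HideInInspector does the reverse for public fields:

using UnityEngine;

// Illustrative sketch of how access modifiers interact with the Inspector.
public class AccessExample : MonoBehaviour
{
    // Visible in the Inspector, accessible from any class.
    public float visibleAndPublic = 1.0f;

    // Not visible in the Inspector; only this class can touch it.
    private float hiddenAndPrivate = 2.0f;

    // Private to code, but still shown in the Inspector for tweaking.
    [SerializeField]
    private float privateButTweakable = 3.0f;

    // Public to code, but hidden from the Inspector.
    [HideInInspector]
    public float publicButHidden = 4.0f;

    void Start()
    {
        // Use the fields so the compiler doesn't warn about unused variables.
        Debug.Log(visibleAndPublic + hiddenAndPrivate + privateButTweakable + publicButHidden);
    }
}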
But, you'll notice something interesting with the public variables that we've created. Go back into the Unity project and drag-and-drop the PlayerBehaviour script onto the playerShip object.

Before going back to the Unity project though, make sure that you save your PlayerBehaviour script. Not saving is a very common mistake made by people working with MonoDevelop.

Have a look at the following screenshot: You'll notice now that the public variables that we created are located inside the Inspector for the component. This means that we can actually set those variables inside the Inspector without having to modify the code, allowing us to tweak values in our code very easily, which is a godsend for many game designers. You may also notice that the names have changed to be more readable. This is because of the naming convention that we are using, with each word starting with a capital letter. This convention is called CamelCase (more specifically headlessCamelCase). Now change the Size of each of the Button variables to 2, and fill in the Element 0 value with the appropriate arrow and Element 1 with W for up, A for left, S for down, and D for right. When this is done, it should look something like the following screenshot:

Now that we have our variables set, go back to MonoDevelop for us to work on the script some more. The line after that is a function definition for a method called Start; it isn't a user method but one that belongs to MonoBehaviour. Where variables are data, functions are the things that modify and/or use that data. Functions are self-contained modules of code (enclosed within braces, { and }) that accomplish a certain task. The nice thing about using a function is that once a function is written, it can be used over and over again. Functions can be called from inside other functions:

void Start () {
}

Start is only called once in the lifetime of the behavior when the game starts and is typically used to initialize data.

If you're used to other programming languages, you may be surprised that initialization of an object is not done using a constructor function. This is because the construction of objects is handled by the editor and does not take place at the start of gameplay as you might expect. If you attempt to define a constructor for a script component, it will interfere with the normal operation of Unity and can cause major problems with the project.

However, for this behavior, we will not need to use the Start function. Perform the following steps: Delete the Start function and its contents. The next function that we see included is the Update function. Also inherited from MonoBehaviour, this function is called for every frame that the component exists in and for each object that it's attached to. We want to update our player ship's rotation and movement every frame. Inside the Update function (between { and }), put the following lines of code:

// Rotate player to face mouse
Rotation();

// Move the player's body
Movement();

Here, I called two functions, but these functions do not exist, because we haven't created them yet. Let's do that now!
Below the Update function, and before the } that closes the class, put the following function:

// Will rotate the ship to face the mouse.
void Rotation()
{
    // We need to tell where the mouse is relative to the
    // player
    Vector3 worldPos = Input.mousePosition;
    worldPos = Camera.main.ScreenToWorldPoint(worldPos);

    /*
     * Get the differences from each axis (stands for
     * deltaX and deltaY)
     */
    float dx = this.transform.position.x - worldPos.x;
    float dy = this.transform.position.y - worldPos.y;

    // Get the angle between the two objects
    float angle = Mathf.Atan2(dy, dx) * Mathf.Rad2Deg;

    /*
     * The transform's rotation property uses a Quaternion,
     * so we need to convert the angle in a Vector
     * (The Z axis is for rotation for 2D).
     */
    Quaternion rot = Quaternion.Euler(new Vector3(0, 0, angle + 90));

    // Assign the ship's rotation
    this.transform.rotation = rot;
}

Now if you comment out the Movement line and run the game, you'll notice that the ship will rotate to face the mouse. Have a look at the following screenshot: Below the Rotation function, we now need to add in our Movement function with the following code:

// Will move the player based off of keys pressed
void Movement()
{
    // The movement that needs to occur this frame
    Vector3 movement = new Vector3();

    // Check for input
    movement += MoveIfPressed(upButton, Vector3.up);
    movement += MoveIfPressed(downButton, Vector3.down);
    movement += MoveIfPressed(leftButton, Vector3.left);
    movement += MoveIfPressed(rightButton, Vector3.right);

    /*
     * If we pressed multiple buttons, make sure we're only
     * moving the same length.
     */
    movement.Normalize();

    // Check if we pressed anything
    if (movement.magnitude > 0)
    {
        // If we did, move in that direction
        currentSpeed = playerSpeed;
        this.transform.Translate(movement * Time.deltaTime * playerSpeed, Space.World);
        lastMovement = movement;
    }
    else
    {
        // Otherwise, move in the direction we were going
        this.transform.Translate(lastMovement * Time.deltaTime * currentSpeed, Space.World);

        // Slow down over time
        currentSpeed *= .9f;
    }
}

Now, inside this function, I've called another function named MoveIfPressed, so we'll need to add that in as well. Below this function, add in the following function:

/*
 * Will return the movement if any of the keys are pressed,
 * otherwise it will return (0,0,0)
 */
Vector3 MoveIfPressed(List<KeyCode> keyList, Vector3 Movement)
{
    // Check each key in our list
    foreach (KeyCode element in keyList)
    {
        if (Input.GetKey(element))
        {
            /*
             * It was pressed so we leave the function
             * with the movement applied.
             */
            return Movement;
        }
    }

    // None of the keys were pressed, so we don't need to move
    return Vector3.zero;
}

Now, save your file and move back into Unity. Save your current scene as Chapter_1.unity by going to File | Save Scene. Make sure to save the scene to our Scenes folder we created earlier. Run the game by pressing the play button. Have a look at the following screenshot: Now you'll see that we can move using the arrow keys or the W A S D keys, and our ship will rotate to face the mouse. Great!

Summary

This article walked through the beginnings of a 2D twin-stick shooter and helped familiarize you with the game development features in Unity. Resources for Article: Further resources on this subject: Components in Unity [article] Customizing skin with GUISkin [article] What's Your Input? [article]

Scaling friendly font rendering with distance fields

Packt
28 Oct 2014
8 min read
This article by David Saltares Márquez and Alberto Cejas Sánchez, the authors of Libgdx Cross-platform Game Development Cookbook, describes how we can generate a distance field font and render it in Libgdx. As a bitmap font is scaled up, it becomes blurry due to linear interpolation. It is possible to tell the underlying texture to use the nearest filter, but the result will be pixelated. Additionally, until now, if you wanted big and small pieces of text using the same font, you would have had to export it twice at different sizes. The output texture gets bigger rather quickly, and this is a memory problem. (For more resources related to this topic, see here.)

Distance field fonts are a technique that enables us to scale monochromatic textures without losing out on quality, which is pretty amazing. It was first published by Valve (Half Life, Team Fortress…) in 2007. It involves an offline preprocessing step and a very simple fragment shader when rendering, but the results are great and there is very little performance penalty. You also get to use smaller textures! In this article, we will cover the entire process of how to generate a distance field font and how to render it in Libgdx.

Getting ready

For this, we will load the data/fonts/oswald-distance.fnt and data/fonts/oswald.fnt files. To generate the fonts, Hiero is needed, so download the latest Libgdx package from http://libgdx.badlogicgames.com/releases and unzip it. Make sure the sample projects are in your workspace. Please visit the link https://github.com/siondream/libgdx-cookbook to download the sample projects which you will need.

How to do it…

First, we need to generate a distance field font with Hiero. Then, a special fragment shader is required to finally render scaling-friendly text in Libgdx.

Generating distance field fonts with Hiero

Open up Hiero from the command line. Linux and Mac users only need to replace semicolons with colons and back slashes with forward slashes:

java -cp gdx.jar;gdx-natives.jar;gdx-backend-lwjgl.jar;gdx-backend-lwjgl-natives.jar;extensions\gdx-tools\gdx-tools.jar com.badlogic.gdx.tools.hiero.Hiero

Select the font using either the System or File options. This time, you don't need a really big size; the point is to generate a small texture and still be able to render text at high resolutions, maintaining quality. We have chosen 32 this time. Remove the Color effect, and add a white Distance field effect. Set the Spread effect; the thicker the font, the bigger this value should be. For Oswald, 4.0 seems to be a sweet spot. To cater to the spread, you need to set a matching padding. Since this will make the characters render further apart, you need to counterbalance this by setting the X and Y values to twice the negative padding. Finally, set the Scale to be the same as the font size. Hiero will struggle to render the charset, which is why we wait until the end to set this property. Generate the font by going to File | Save BMFont files (text).... The following is the Hiero UI showing a font texture with a Distance field effect applied to it:

Distance field fonts shader

We cannot use the distance field texture to render text for obvious reasons—it is blurry! A special shader is needed to get the information from the distance field and transform it into the final, smoothed result. The vertex shader, found in data/fonts/font.vert, is simple. The magic takes place in the fragment shader, found in data/fonts/font.frag and explained later.
First, we sample the alpha value from the texture for the current fragment and call it distance. Then, we use the smoothstep() function to obtain the actual fragment alpha. If distance is between 0.5-smoothing and 0.5+smoothing, Hermite interpolation will be used. If the distance is greater than 0.5+smoothing, the function returns 1.0, and if the distance is smaller than 0.5-smoothing, it will return 0.0. The code is as follows:

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform sampler2D u_texture;

varying vec4 v_color;
varying vec2 v_texCoord;

const float smoothing = 1.0/128.0;

void main()
{
   float distance = texture2D(u_texture, v_texCoord).a;
   float alpha = smoothstep(0.5 - smoothing, 0.5 + smoothing, distance);
   gl_FragColor = vec4(v_color.rgb, alpha * v_color.a);
}

The smoothing constant determines how hard or soft the edges of the font will be. Feel free to play around with the value and render fonts at different sizes to see the results. You could also make it a uniform and configure it from the code.

Rendering distance field fonts in Libgdx

Let's move on to DistanceFieldFontSample.java, where we have two BitmapFont instances: normalFont (pointing to data/fonts/oswald.fnt) and distanceFont (pointing to data/fonts/oswald-distance.fnt). This will help us illustrate the difference between the two approaches. Additionally, we have a ShaderProgram instance for our previously defined shader. In the create() method, we instantiate both the fonts and the shader normally:

normalFont = new BitmapFont(Gdx.files.internal("data/fonts/oswald.fnt"));
normalFont.setColor(0.0f, 0.56f, 1.0f, 1.0f);
normalFont.setScale(4.5f);

distanceFont = new BitmapFont(Gdx.files.internal("data/fonts/oswald-distance.fnt"));
distanceFont.setColor(0.0f, 0.56f, 1.0f, 1.0f);
distanceFont.setScale(4.5f);

fontShader = new ShaderProgram(Gdx.files.internal("data/fonts/font.vert"), Gdx.files.internal("data/fonts/font.frag"));

if (!fontShader.isCompiled()) {
   Gdx.app.error(DistanceFieldFontSample.class.getSimpleName(), "Shader compilation failed:\n" + fontShader.getLog());
}

We need to make sure that the texture our distanceFont just loaded is using linear filtering:

distanceFont.getRegion().getTexture().setFilter(TextureFilter.Linear, TextureFilter.Linear);

Remember to free up resources in the dispose() method, and let's get on with render(). First, we render some text with the regular font using the default shader, and right after this, we do the same with the distance field font using our awesome shader:

batch.begin();
batch.setShader(null);
normalFont.draw(batch, "Distance field fonts!", 20.0f, VIRTUAL_HEIGHT - 50.0f);

batch.setShader(fontShader);
distanceFont.draw(batch, "Distance field fonts!", 20.0f, VIRTUAL_HEIGHT - 250.0f);
batch.end();

The results are pretty obvious; it is a huge win in memory and quality at a very small cost in GPU time. Try increasing the font size even more and be amazed at the results! You might have to slightly tweak the smoothing constant in the shader code though:
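As mentioned earlier, you could turn the smoothing constant into a uniform and configure it from code. The following is a minimal sketch of that idea; the u_smoothing name is my own choice, and it assumes you replace the const declaration in font.frag with uniform float u_smoothing;:

// Inside render(), with the font shader bound through the batch.
batch.begin();
batch.setShader(fontShader);

// Push the smoothing value; smaller values give harder edges.
fontShader.setUniformf("u_smoothing", 1.0f / 128.0f);

distanceFont.draw(batch, "Distance field fonts!", 20.0f, VIRTUAL_HEIGHT - 250.0f);
batch.end();

This way, you can experiment with different smoothing values per text size without touching the shader file again.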
How it works…

Let's explain the fundamentals behind this technique. However, for a thorough explanation, we recommend that you read the original paper by Chris Green from Valve (http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf). A distance field is a derived representation of a monochromatic texture. For each pixel in the output, the generator determines whether the corresponding one in the original is colored or not. Then, it examines its neighborhood to determine the 2D distance, in pixels, to the nearest pixel with the opposite state. Once the distance is calculated, it is mapped to a [0, 1] range, with 0 being the maximum negative distance and 1 being the maximum positive distance. A value of 0.5 indicates the exact edge of the shape. The following figure illustrates this process:

Within Libgdx, the BitmapFont class uses SpriteBatch to render text normally, only this time, it is using a texture with a Distance field effect applied to it. The fragment shader is responsible for performing a smoothing pass. If the alpha value for this fragment is higher than 0.5, it can be considered in; it will be out in any other case: This produces a clean result.

There's more…

We have applied distance fields to text, but we have also mentioned that they can work with monochromatic images. It is simple; you need to generate a low resolution distance field transform. Luckily enough, Libgdx comes with a tool that does just this. Open a command-line window, access your Libgdx package folder and enter the following command:

java -cp gdx.jar;gdx-natives.jar;gdx-backend-lwjgl.jar;gdx-backend-lwjgl-natives.jar;extensions\gdx-tools\gdx-tools.jar com.badlogic.gdx.tools.distancefield.DistanceFieldGenerator

The distance field generator takes the following parameters:

--color: This parameter is in hexadecimal RGB format; the default is ffffff
--downscale: This is the factor by which the original texture will be downscaled
--spread: This is the edge scan distance, expressed in terms of the input

Take a look at this example:

java […] DistanceFieldGenerator --color ff0000 --downscale 32 --spread 128 texture.png texture-distance.png

Alternatively, you can use the gdx-smart-font library to handle scaling. It is a simpler but a bit more limited solution (https://github.com/jrenner/gdx-smart-font).

Summary

In this article, we have covered the entire process of how to generate a distance field font and how to render it in Libgdx. Further resources on this subject: Cross-platform Development - Build Once, Deploy Anywhere [Article] Getting into the Store [Article] Adding Animations [Article]

Animation and Unity3D Physics

Packt
27 Oct 2014
5 min read
In this article, written by K. Aava Rani, author of the book Learning Unity Physics, you will learn to use Physics in animation creation. We will see that several animations can be handled easily by Unity3D's Physics, and during development, you will find that working with the combination of Physics and animation in Unity3D is both easy and very interesting. We are going to cover the following topics:

Interpolate and Extrapolate
The Cloth component and its uses in animation

(For more resources related to this topic, see here.)

Developing simple and complex animations

As mentioned earlier, you will learn how to handle and create simple and complex animations using Physics, for example, creating a rope animation and a hanging ball. Let's start with the Physics properties of a Rigidbody component, which help in syncing animation.

Interpolate and Extrapolate

Unity3D offers a way that its Rigidbody component can help in the syncing of animation. Using the interpolation and extrapolation properties, we sync animations. Interpolation is not only for animation; it also works with Rigidbody. Let's see in detail how interpolation and extrapolation work: Create a new scene and save it. Create a Cube game object and apply Rigidbody on it. Look at the Inspector panel shown in the following screenshot. On clicking Interpolate, a drop-down list that contains three options will appear, which are None, Interpolate, and Extrapolate. By selecting any one of them, we can apply the feature. In interpolation, the position of an object is calculated by the current update time, moving it backwards one Physics update delta time.

Delta time or delta timing is a concept used among programmers in relation to frame rate and time. For more details, check out http://docs.unity3d.com/ScriptReference/Time-deltaTime.html.

If you look closely, you will observe that there are at least two Physics updates, which are as follows:

Ahead of the chosen time
Behind the chosen time

Unity interpolates between these two updates to get a position for the update position. So, we can say that the interpolation is actually lagging behind one Physics update. The second option is Extrapolate, which is used for extrapolation. In this case, Unity predicts the future position of the object. Although this does not show any lag, an incorrect prediction sometimes causes a visual jitter. These modes can also be set from a script, as the short sketch below shows.
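The following is a minimal sketch of that idea (the InterpolationSetup class name is my own); it uses Unity's Rigidbody.interpolation property, which accepts the same three options as the drop-down list:

using UnityEngine;

// Minimal sketch: sets the interpolation mode of a Rigidbody from code.
public class InterpolationSetup : MonoBehaviour
{
    void Start()
    {
        Rigidbody body = GetComponent<Rigidbody>();

        // Same as picking Interpolate from the drop-down in the Inspector.
        body.interpolation = RigidbodyInterpolation.Interpolate;

        // The alternatives are RigidbodyInterpolation.None and
        // RigidbodyInterpolation.Extrapolate.
    }
}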
One more important component that is widely used to animate cloth is the Cloth component. Here, you will learn about its properties and how to use it.

The Cloth component

To make animation easy, Unity provides an interactive component called Cloth. In the GameObject menu, you can directly create the Cloth game object. Have a look at the following screenshot: Unity also provides Cloth components in its Physics section. To apply this, let's look at an example: Create a new scene and save it. Create a Plane game object. (We can also create a Cloth game object.) Navigate to Component | Physics and choose InteractiveCloth. As shown in the following screenshot, you will see the following properties in the Inspector panel:

Let's have a look at the properties one by one. Blending Stiffness and Stretching Stiffness define the blending and stretching stiffness of the Cloth, while Damping defines the damp motion of the Cloth. Using the Thickness property, we decide the thickness of the Cloth, which ranges from 0.001 to 10,000. If we enable the Use Gravity property, gravity will affect the Cloth simulation. Similarly, if we enable Self Collision, it allows the Cloth to collide with itself. For a constant or random acceleration, we apply the External Acceleration and Random Acceleration properties, respectively. World Velocity Scale decides how much the character's movement in the world affects the Cloth vertices; the higher the value, the more the character's movement affects the cloth. World Acceleration works similarly.

The Interactive Cloth component depends on the Cloth Renderer component. Lots of Cloth components in a game reduce the performance of the game. To simulate clothing on characters, we use the Skinned Cloth component.

Important points while using the Cloth component

The following are the important points to remember while using the Cloth component: Cloth simulation will not generate tangents. So, if you are using a tangent-dependent shader, the lighting will look wrong for a Cloth component that has been moved from its initial position. We cannot directly change the transform of a moving Cloth game object; this is not supported. Disabling the Cloth before changing the transform is supported. The SkinnedCloth component works together with SkinnedMeshRenderer to simulate clothing on a character. As shown in the following screenshot, we can apply Skinned Cloth: As you can see in the following screenshot, there are different properties that we can use to get the desired effect: We can disable or enable the Skinned Cloth component at any time to turn it on or off.

Summary

In this article, you learned how interpolation and extrapolation work. We also looked at the Cloth component and its uses in animation with an example. Resources for Article: Further resources on this subject: Animations in Cocos2d-x [article] Unity Networking – The Pong Game [article] The physics engine [article]

Animations in Cocos2d-x

Packt
23 Sep 2014
24 min read
In this article, created by Siddharth Shekhar, the author of Learning Cocos2d-x Game Development, we will learn different tools that can be used to animate the character. Then, using these animations, we will create a simple state machine that will automatically check whether the hero is falling or is being boosted up into the air, and depending on the state, the character will be animated accordingly. We will cover the following in this article:

Animation basics
TexturePacker
Creating a spritesheet for the player
Creating and coding the enemy animation
Creating the skeletal animation
Coding the player walk cycle

(For more resources related to this topic, see here.)

Animation basics

First of all, let's understand what animation is. An animation is made up of different images that are played in a certain order and at a certain speed, for example, movies that run images at 30 fps or 24 fps, depending on which format they are in, NTSC or PAL. When you pause a movie, you are actually seeing an individual image of that movie, and if you play the movie in slow motion, you will see the frames or images that make up the full movie. In games, while making animations, we will do the same thing: add frames and run them at a certain speed. We will control, through code, the sequence in which the images play and the interval between them. For an animation to be "smooth", you should have at least 24 images or frames being played in a second, which is known as frames per second (FPS). Each of the images in the animation is called a frame.

Let's take the example of a simple walk cycle. Each walk cycle should be of 24 frames. You might say that it is a lot of work, and for sure it is, but the good news is that these 24 frames can be broken down into keyframes, which are important images that give the illusion of the character walking. The more frames you add between these keyframes, the smoother the animation will be. The keyframes for a walk cycle are the Contact, Down, Pass, and Up positions. For mobile games, as we would like to get away with as little work as possible, instead of having all the 24 frames, some games use just the 4 keyframes to create a walk animation and then speed up the animation so that the player is not able to see the missing frames. So overall, if you are making a walk cycle for your character, you will create eight images, or four frames for each side. For a stylized walk cycle, you can even get away with a lesser number of frames.

For the animation in the game, we will create images that we will cycle through to create two sets of animation: an idle animation, which will be played when the player is moving down, and a boost animation, which will get played when the player is boosted up into the air. Creating animation in games is done using two methods. The most popular form of animation is called spritesheet animation and the other is called skeletal animation.

Spritesheet animation

Spritesheet animation is when you keep all the frames of the animation in a single file, accompanied by a data file that will have the name and location of each of the frames. This is very similar to the BitmapFont. The following is the spritesheet we will be using in the game. For the boost and idle animations, each of the frames for the corresponding animation will be stored in an array and made to loop at a particular predefined speed. The top four images are the frames for the boost animation.
Whenever the player taps on the screen, the animation will cycle through these four images appearing as if the player is boosted up because of the jetpack. The bottom four images are for the idle animation when the player is dropping down due to gravity. In this animation, the character will look as if she is blinking and the flames from the jetpack are reduced and made to look as if they are swaying in the wind. Skeletal animation Skeletal animation is relatively new and is used in games such as Rayman Origins that have loads and loads of animations. This is a more powerful way of making animations for 2D games as it gives a lot of flexibility to the developer to create animations that are fast to produce and test. In the case of spritesheet animations, if you had to change a single frame of the animation, the whole spritesheet would have to be recreated causing delay; imagine having to rework 3000 frames of animations in your game. If each frame was hand painted, it would take a lot of time to produce the individual images causing delay in production time, not to mention the effort and time in redrawing images. The other problem is device memory. If you are making a game for the PC, it would be fine, but in the case of mobiles where memory is limited, spritesheet animation is not a viable option unless cuts are made to the design of the game. So, how does skeletal animation work? In the case of skeletal animation, each item to be animated is stored in a separate spritesheet along with the data file for the locations of the individual images for each body part and object to be animated, and another data file is generated that positions and rotates the individual items for each of the frames of the animation. To make this clearer, look at the spritesheet for the same character created with skeletal animation: Here, each part of the body and object to be animated is a separate image, unlike the method used in spritesheet animation where, for each frame of animation, the whole character is redrawn. TexturePacker To create a spritesheet animation, you will have to initially create individual frames in Photoshop, Illustrator, GIMP or any other image editing software. I have already made it and have each of the images for the individual frames ready. Next, you will have to use a software to create spritesheets from images. TexturePacker is a very popular software that is used by industry professionals to create spritesheets. You can download it from https://www.codeandweb.com/. These are the same guys who made PhysicsEditor, which we used to make shapes for Box2D. You can use the trial version of this software. While downloading, choose the version that is compatible with your operating system. Fortunately, TexturePacker is available for all the major operating systems, including Linux. Refer to the following screenshot to check out the steps to use TexturePacker: Once you have downloaded TexturePacker, you have three options: you can click to try the full version for a week, or you can purchase the license, or click on the essential version to use in the trial version. In the trial version, some of the professional features are disabled, so I recommend trying the professional features for a week. Once you click the option, you should see the following interface: Texture packer has three panels; let's start from the right. The right-hand side panel will display the names of all the images that you select to create the spritesheet. 
The center panel is a preview window that shows how the images are packed. The left-hand side panel gives you options to store the packed texture and data file to be published to and decide the maximum size of the packed image. The Layout section gives a lot of flexibility to set up the individual images in TexturePacker, and then you have the advanced section. Let's look at some of the key items on the panel on the left.

The display section

The display section consists of the following options:

Data Format: As we saw earlier, each export creates a spritesheet that has a collection of images, plus a data file that keeps track of the positions of those images on the spritesheet. The data format usually changes depending upon the framework or engine. In TexturePacker, you can select the framework that you are using to develop the game, and TexturePacker will create a data file format that is compatible with the framework. If you look at the drop-down menu, you can see a lot of popular frameworks and engines in the list, such as 2DToolkit, OGRE, Cocos2d, Corona SDK, LibGDX, Moai, Sparrow/Starling, SpriteKit, and Unity. You can also create a regular JSON file if you wish. JavaScript Object Notation (JSON) is similar to an XML file that is used to store and retrieve data. It is a collection of name and value pairs used for data interchange.

Data file: This is the location where you want the exported file to be placed.

Texture format: Usually, this is set to .png, but you can select the one that is most convenient. Apart from PNG, you also have PVR, which is used so that people cannot view the image readily and also provides image compression.

Png OPT file: This is used to set the quality of PNG images.

Image format: This sets the RGB format to be used; usually, you would want this to be set at the default value.

AutoSD: If you are going to create images for different resolutions, this option allows you to create resources for each of the resolutions you are developing the game for, without the need to go into the graphics software, shrink the images, and pack them again for every resolution.

Content protection: This protects the image and data file with an encryption key so that people can't steal spritesheets from the game file.

The Geometry section

The Geometry section consists of the following options:

Max size: You can specify the maximum width and height of the spritesheet depending upon the framework. Usually, frameworks allow up to 4096 x 4096, but it mostly depends on the device.

Fixed size: If you want a fixed size instead, you will go with this option.

Size constraint: Some frameworks prefer the spritesheet dimensions to be a power of 2 (POT), for example, 32x32, 64x64, 256x256, and so on. If this is the case, you need to select the size accordingly. For Cocos2d, you can choose any size.

Scale: This is used to scale up or scale down the image.

The Layout section

The Layout section consists of the following options:

Algorithm: This is the algorithm that will be used to make sure that the images you select to create the spritesheet are packed in the most efficient way. If you are using the pro version, choose MaxRects, but if you are using the essential version, you will have to choose Basic.

Border Padding / Shape Padding: Border padding is the gap between the border of the spritesheet and the images inside it. Shape padding is the padding between the individual images of the spritesheet.
If you find that the images are getting overlapped while playing the animation in the game, you might want to increase these values to avoid overlapping.

Trim: This removes the extra transparent pixels surrounding the image, which would unnecessarily increase the image size of the spritesheet.

Advanced features

The following are some miscellaneous options in TexturePacker:

Texture path: This appends the path of the texture file at the beginning of the texture name
Clean transparent pixels: This sets the transparent pixels color to #000
Trim sprite names: This will remove the extension from the names of the sprites (.png and .jpg), so while calling for the name of the frame, you will not have to use extensions

Creating a spritesheet for the player

Now that we understand the different items in the TextureSettings panel of TexturePacker, let's create our spritesheet for the player animation from the individual frames provided in the Resources folder. Open up the folder in the system and select all the images for the player that contain the idle and boost frames. There will be four images for each animation. Select all eight images and click-and-drag all the images to the Sprites panel, which is the right-most panel of TexturePacker. Once you have all the images on the Sprites panel, the preview panel at the center will show a preview of the spritesheet that will be created:

Now, on the TextureSettings panel, for the Data format option, select cocos2d. Then, in the Data file option, click on the folder icon on the right and select the location where you would like to place the data file, and give the name as player_anim. Once selected, you will see that the Texture file location also auto populates with the same location. The data file will have a format of .plist and the texture file will have an extension of .png.

The .plist format creates data in a markup language similar to XML. Although it is more common on Mac, you can use this data type independent of the platform you use while developing the game using Cocos2d-x.

Keep the rest of the settings the same. Save the file by clicking on the save icon on the top to a location where the data and spritesheet files are saved. This way, you can access them easily the next time if you want to make the same modifications to the spritesheet. Now, click on the Publish button and you will see two files, player_anim.plist and player_anim.png, in the location you specified in the Data file and Texture file options. Copy and paste these two files in the Resources folder of the project so that we can use these files to create the player states; the sketch below shows roughly how they might be loaded.
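Here is a rough sketch of how these two files could be used in code to build and loop one of the player's animations. The frame names player_idle_1.png to player_idle_4.png and the hero pointer are my assumptions for illustration—use the names your frames actually carry in player_anim.plist. The enemy code in the next section follows exactly the same pattern:

//Load the player spritesheet and register its frames with the cache.
CCSpriteBatchNode* spritebatch = CCSpriteBatchNode::create("player_anim.png");
CCSpriteFrameCache* cache = CCSpriteFrameCache::sharedSpriteFrameCache();
cache->addSpriteFramesWithFile("player_anim.plist");

//Gather the four idle frames (assumed names) into an array.
CCArray* animFrames = CCArray::createWithCapacity(4);
char str[100] = {0};
for (int i = 1; i <= 4; i++)
{
    sprintf(str, "player_idle_%d.png", i);
    animFrames->addObject(cache->spriteFrameByName(str));
}

//Loop the idle animation on the player, one frame every 0.25 seconds.
CCAnimation* idleAnimation = CCAnimation::createWithSpriteFrames(animFrames, 0.25f);
hero->runAction(CCRepeatForever::create(CCAnimate::create(idleAnimation)));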
Now, place the enemy_anim.png spritesheet and data file in the Resources folder in the directory and add the following lines of code in the Enemy.cpp file to animate the enemy:

    // Enemy animation: load the spritesheet and register its frames
    CCSpriteBatchNode* spritebatch = CCSpriteBatchNode::create("enemy_anim.png");
    CCSpriteFrameCache* cache = CCSpriteFrameCache::sharedSpriteFrameCache();
    cache->addSpriteFramesWithFile("enemy_anim.plist");

    // Initialize this sprite with the first idle frame
    this->initWithSpriteFrameName("enemy_idle_1.png");
    this->addChild(spritebatch);

    // Idle animation: collect the four idle frames
    CCArray* animFrames = CCArray::createWithCapacity(4);
    char str1[100] = {0};
    for(int i = 1; i <= 4; i++)
    {
        sprintf(str1, "enemy_idle_%d.png", i);
        CCSpriteFrame* frame = cache->spriteFrameByName(str1);
        animFrames->addObject(frame);
    }

    // Play the frames in a loop, 0.25 seconds per frame
    CCAnimation* idleanimation = CCAnimation::createWithSpriteFrames(animFrames, 0.25f);
    this->runAction(CCRepeatForever::create(CCAnimate::create(idleanimation)));

This is very similar to the code for the player. The only difference is that for the enemy, instead of calling the functions on the hero, we call them on the same class. Note that initWithSpriteFrameName() is used here rather than the static creator createWithSpriteFrameName(), since we are initializing the current object (assuming, as the original code implies, that the Enemy class derives from CCSprite). So, if you now build and run the game, you should see the enemy being animated. The following is a screenshot from the updated code. You can now see the flames from the enemy's booster engine. Sadly, he doesn't have a boost animation, but his feet swing in the air. Now that we have mastered the spritesheet animation technique, let's see how to create a simple animation using the skeletal animation technique.

Creating the skeletal animation

Using this technique, we will create a very simple player walk cycle. For this, there is a tool called Spine by Esoteric Software, which is a widely used professional tool for creating skeletal animations for 2D games. The software can be downloaded from the company's website at http://esotericsoftware.com/spine-purchase:

There are three versions of the software available: the trial, essential, and professional versions. Although the majority of the professional version's features are available in the essential version, it doesn't have ghosting, meshes, free-form deformation, skinning, and IK pinning, which is in beta. These features do speed up the animation process and take a lot of manual work off the animator or illustrator. To learn more about these features, visit the website and hover the mouse over each of them for a better understanding of what it does. You can follow along by downloading the trial version, available via the Download trial link on the website. Spine is available for all platforms, including Windows, Mac, and Linux, so download it for the OS of your choice. On Mac, after downloading and running the software, it will ask to install X11; alternatively, you can download and install it from http://xquartz.macosforge.org/landing/. After downloading and installing the plugin, you can open Spine. Once the software is up and running, you should see the following window:

Now, create a new project by clicking on the Spine icon in the top left. As we can see in the screenshot, we are now in SETUP mode, where we set up the character. In the Tree panel on the right-hand side, in the Hierarchy pane, select the Images folder. After selecting the folder, you can choose the path where the individual files for the player are located. Navigate to the player_skeletal_anim folder, where all the images are present.
Once selected, you will see the panel populate with the images present in the folder, namely the following:

bookGame_player_Lleg
bookGame_player_Rleg
bookGame_player_bazooka
bookGame_player_body
bookGame_player_hand
bookGame_player_head

Now drag-and-drop all the files from the Images folder onto the scene. Don't worry if the images are not in the right order. In the Draw Order dropdown in the Hierarchy panel, you can drag-and-drop the different items to make them draw in the order you want them displayed. Once reordered, move the individual images on the screen to their appropriate positions:

You can move the images around by clicking on the translate button at the bottom of the screen. If you hover over the buttons, you can see their names. We will now start creating the bones that we will use to animate the character. In the panel at the bottom, in the Tools section, click on the Create button. You should now see the cursor change to the bone creation icon. Before you create a bone, you must always select the bone that will be its parent. In this case, we select the root bone in the center of the character. Click on it, hold the Shift key, and drag downwards to the end of the character's blue dress, making sure that the blue dress is highlighted; then release the mouse button. The end point of this bone will be used as the hip joint, from where the leg bones will be created for the character. Now select the end of the bone you just made, hold Shift again, and click-and-drag downwards to make a bone that goes all the way to the end of the leg. With the leg still highlighted, release the mouse button. To create the bone for the other leg, again start a new bone from the end of the first bone at the hip joint, and release the mouse button while the other leg is highlighted. Now, we will create a bone for the hand. Select the root node (the node in the middle of the character), hold Shift, and draw a bone to the hand while the hand is highlighted. Create a bone for the head by again selecting the root node: holding Shift, draw a bone from the root node towards the head, and release the mouse button once you are near the character's ear and the head is highlighted. You will notice that we never created a bone for the bazooka. For the bazooka, we will make the hand its parent bone so that when the hand rotates, the bazooka rotates along with it. Click on the bazooka node in the Hierarchy panel (not the image) and drag it onto the hand node in the skeleton list. You can rotate each of the bones to check whether it rotates properly. If not, you can move either the bones or the images around by locking one of them in place so that you can move or rotate the other freely; do this by clicking either the bones or the images button in the compensate section at the bottom of the screen. The following screenshot shows my setup. You can use it to follow along and create the bones to get a more satisfying animation. To animate the character, click on the SETUP button at the top and the layout will change to ANIMATE. You will see that a new timeline has appeared at the bottom. Click on the Animations tab in Hierarchy and rename the animation from animation to runCycle by double-clicking on it. We will use the timeline to animate the character.
Click on the Dopesheet icon at the bottom. This shows all the keyframes that we have made for the animation; as we have not created any yet, the dopesheet is empty. To create our first keyframe, rotate both leg bones so that they reflect the contact pose of the walk cycle. Now, to set a keyframe, click on the orange key icon next to Rotate in the Transform panel at the bottom of the screen. Click on the translate key as well, as we will also be changing the translation later. Once you click on them, the dopesheet shows the bones you just rotated along with the changes you made to each bone. Here, we rotated the bones, so you will see Rotation under the bones, and as we also clicked on the translate key, Translate is shown too. Frame 24 is the same as frame 0, so to create the keyframe at frame 24, drag the timeline scrubber to frame 24 and click on the rotate and translate keys again. To set the keyframe in the middle, where the contact pose happens with the legs swapped, rotate each leg to where the opposite leg was and click the keys to create a keyframe. For frames 6 and 18, we will keep the walk cycle very simple: just raise the character by selecting the root node, move it up in the y direction, and click the orange key next to the translate button in the Transform panel at the bottom. Remember that you have to click it once at frame 6, then move the timeline scrubber to frame 18, move the character up again, and click on the key again, creating keyframes for both frames 6 and 18. The dopesheet should now look as follows:

Now, to play the animation in a loop, click on the Repeat Animation button to the right of the Play button, and then on the Play button. You will see the simple walk animation we created for the character. Next, we will export the data required to recreate the animation in Cocos2d-x. First, we export the animation data. Click on the Spine button at the top and select Export. The following window should pop up. Select JSON, choose the directory you would like to save the file to, and click on Export:

That is not all; we also have to create a spritesheet and data file, just as we did in TexturePacker. There is a built-in tool in Spine to create a packed spritesheet. Again, click on the Spine icon, and this time select Texture Packer. Here, for the input directory, select the Images folder from which we imported all the images initially. For the output directory, select the location where the PNG and data files should be saved. If you click on the settings button, you will see that it looks very similar to what we saw in TexturePacker. Keep the default values as they are. Click on Pack and give the name player. This creates the .png and .atlas files, which are the spritesheet and data file, respectively:

You now have three files instead of the two produced by TexturePacker: two data files and an image file. If you didn't give the JSON file a name while exporting it, you can rename it manually to player.json for consistency. Drag the player.atlas, player.json, and player.png files into the project folder. Finally, we come to the fun part, where we actually use the data files to animate the character. For testing, we will add the animations to the HelloWorldScene.cpp file and check the result. Later, when we add the main menu, we will move the code there so that the animation shows as soon as the game is launched.
Coding the player walk cycle

If you want to test the animations in the current project itself, add the following include to the HelloWorldScene.h file first:

    #include <spine/spine-cocos2dx.h>

Then, in the same header, create a variable named skeletonNode of the CCSkeletonAnimation type:

    extension::CCSkeletonAnimation* skeletonNode;

Next, we initialize the skeletonNode variable in the HelloWorldScene.cpp file:

    skeletonNode = extension::CCSkeletonAnimation::createWithFile("player.json", "player.atlas", 1.0f);
    skeletonNode->addAnimation("runCycle", true, 0, 0);
    skeletonNode->setPosition(ccp(visibleSize.width/2, skeletonNode->getContentSize().height/2));
    addChild(skeletonNode);

Here, we pass the two data files into the createWithFile() function of CCSkeletonAnimation. Then, we start it with addAnimation(), giving it the animation name we chose when we created the animation in Spine, which is runCycle. We next set the position of skeletonNode; we place it right above the bottom of the screen. Finally, we add skeletonNode to the display list. Now, if you build and run the project, you will see the player animating in a loop at the bottom of the screen:

On the left, we have the animation we created using TexturePacker from CodeAndWeb, and in the middle, we have the animation created using Spine from Esoteric Software. Both techniques have their advantages, and the choice also depends on the type and scale of the game that you are making. Depending on this, you can choose the tool that is better tuned to your needs. If your game has a small number of animations and you have good artists, you could use regular spritesheet animations. If you have a lot of animations or don't have good animators on your team, Spine makes the animation process a lot less cumbersome. Either way, both tools in professional hands can create very good animations that bring the characters in the game to life and therefore give a lot of character to the game itself.

Summary

This article took a brief look at animations and how to create an animated character in a game using two of the most popular animation techniques used in games. We also looked at FSMs and how to create a simple state machine between two states, making the animation change according to the state of the player at that moment.

Resources for Article:

Further resources on this subject:
Moving the Space Pod Using Touch [Article]
Sprites [Article]
Cocos2d-x: Installation [Article]

The physics engine

Packt
04 Sep 2014
9 min read
In this article by Martin Varga, the author of Learning AndEngine, we will look at the physics in AndEngine. (For more resources related to this topic, see here.)

AndEngine uses the Android port of the Box2D physics engine. Box2D is very popular in games, including the most popular ones such as Angry Birds, and many game engines and frameworks use Box2D to simulate physics. It is free, open source, written in C++, and available on multiple platforms. AndEngine offers a Java wrapper API for the C++ Box2D backend, and therefore no prior C++ knowledge is required to use it. Box2D can simulate 2D rigid bodies. A rigid body is a simplification of a solid body with no deformations. Such objects do not exist in reality, but if we limit ourselves to bodies moving much slower than the speed of light, we can treat solid bodies as rigid. Box2D uses real-world units and works with physics terms. A position in a scene in AndEngine is defined in pixel coordinates, whereas in Box2D it is defined in meters. AndEngine uses a pixel-to-meter conversion ratio; the default value is 32 pixels per meter.

Basic terms

Box2D works with something we call a physics world. There are bodies and forces in the physics world. Every body in the simulation has the following basic properties:

Position
Orientation
Mass (in kilograms)
Velocity (in meters per second)
Angular velocity (in radians per second)

Forces are applied to bodies, and the following Newton's laws of motion apply:

The first law, "An object that is not moving or moving with constant velocity will stay that way until a force is applied to it", can be tweaked a bit
The second law, "Force is equal to mass multiplied by acceleration", is especially important for understanding what will happen when we apply force to different objects
The third law, "For every action, there is an equal and opposite reaction", is a bit flexible when using different types of bodies

Body types

There are three different body types in Box2D, and each one is used for a different purpose. The body types are as follows:

Static body: This doesn't have velocity, and forces do not apply to a static body. If another body collides with a static body, the static body will not move. Static bodies do not collide with other static and kinematic bodies. Static bodies usually represent walls, floors, and other immobile things. In our case, they will represent platforms which don't move.

Kinematic body: This has velocity, but forces don't apply to it. If a kinematic body is moving and a dynamic body collides with it, the kinematic body will continue in its original direction. Kinematic bodies also do not collide with other static and kinematic bodies. Kinematic bodies are useful for creating moving platforms, which is exactly how we are going to use them.

Dynamic body: A dynamic body has velocity, and forces apply to it. Dynamic bodies are the closest to real-world bodies, and they collide with all types of bodies. We are going to use a dynamic body for our main character.

It is important to understand the consequences of choosing each body type. When we define gravity in Box2D, it will pull all dynamic bodies in the direction of the gravitational acceleration, but static bodies will remain still, and kinematic bodies will either remain still or keep moving in their set direction as if there were no gravity.

Fixtures

Every body is composed of one or more fixtures.
Each fixture has the following four basic properties:

Shape: In Box2D, fixtures can be circles, rectangles, and polygons
Density: This determines the mass of the fixture
Friction: This plays a major role in body interactions
Elasticity: This is sometimes called restitution and determines how bouncy the object is

There are also special properties of fixtures, such as filters and filter categories, and a single Boolean property called sensor.

Shapes

The position of fixtures and their shapes in the body determine the overall shape, mass, and center of gravity of the body. The upcoming figure is an example of a body that consists of three fixtures. The fixtures do not need to connect. They are part of one body, which means their positions relative to each other will not change. The red dot represents the body's center of gravity. The green rectangle is a static body and the other three shapes are part of a dynamic body. Gravity pulls the whole body down, but the square will not fall.

Density

Density determines how heavy the fixtures are. Because Box2D is a two-dimensional engine, we can imagine all objects to be one meter deep. In fact, it doesn't matter as long as we are consistent. There are two bodies, each with a single circle fixture, in the following figure. The left circle is exactly twice as big as the right one, but the right one has double the density of the first one. The triangle is a static body and the rectangle and the circles are dynamic, creating a simple scale. When the simulation is run, the scales are balanced.

Friction

Friction defines how slippery a surface is. A body can consist of multiple fixtures with different friction values. When two bodies collide, the final friction is calculated from the point of collision based on the colliding fixtures. Friction can be given a value between 0 and 1, where 0 means completely frictionless and 1 means super strong friction. Let's say we have a slope which is made of a body with a single fixture that has a friction value of 0.5, as shown in the following figure:

The other body consists of a single square fixture. If its friction is 0, the body slides very fast all the way down. If the friction is more than 0, it still slides, but slows down gradually. If the value is more than 0.25, it still slides but does not reach the end. Finally, with friction close to 1, the body will not move at all.

Elasticity

The coefficient of restitution is the ratio between the speeds before and after a collision; for simplicity, we can call this material property elasticity. In the following figure, there are three circles and a rectangle representing a floor with restitution 0, which means not bouncy at all. The circles have restitutions (from left to right) of 1, 0.5, and 0. When this simulation is started, the three balls fall at the same speed and touch the floor at the same time. However, after the first bounce, the first one moves upwards and climbs all the way back to its initial position. The middle one bounces a little and keeps bouncing less and less until it stops. The right one does not bounce at all. The following figure shows the situation after the first bounce:

Sensor

When we need a fixture that detects collisions but is otherwise not affected by them, and doesn't affect other fixtures and bodies, we use a sensor. A goal line in a top-down 2D air hockey game is a good example of a sensor: we want it to detect the disc passing through, but we don't want it to prevent the disc from entering the goal.
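To make these properties concrete in code, the following is a minimal sketch of one dynamic body with a single circle fixture. AndEngine's own wrapper is a Java API, so as a neutral illustration this sketch uses Unity's 2D physics instead, which is also backed by Box2D; all class and property names here are Unity's, not AndEngine's, and the numbers are arbitrary. In AndEngine itself, the equivalent shorthand is PhysicsFactory.createFixtureDef(density, elasticity, friction), whose result is passed in when creating a body.

    using UnityEngine;

    // A minimal sketch: one dynamic body with a single circle fixture.
    // Density, friction, and elasticity (restitution) map directly onto
    // Box2D's fixture properties, whichever wrapper exposes them.
    public class BouncyBall : MonoBehaviour
    {
        void Awake()
        {
            // The body: dynamic, so gravity and forces affect it.
            // Static and Kinematic are the other two body types.
            var body = gameObject.AddComponent<Rigidbody2D>();
            body.bodyType = RigidbodyType2D.Dynamic;

            // The fixture: a circle shape with a given density.
            var fixture = gameObject.AddComponent<CircleCollider2D>();
            fixture.radius = 0.5f;
            fixture.density = 2f;
            body.useAutoMass = true; // mass is computed from fixture density

            // Friction and restitution live on a physics material.
            var material = new PhysicsMaterial2D("ball");
            material.friction = 0.5f;   // 0 = frictionless, 1 = very strong friction
            material.bounciness = 0.8f; // restitution: 1 bounces back to full height
            fixture.sharedMaterial = material;

            // A sensor detects collisions without reacting to them physically:
            // fixture.isTrigger = true;
        }
    }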
The physics world

The physics world is the whole simulation, including all bodies with their fixtures, gravity, and other settings that influence the performance and quality of the simulation. Tweaking the physics world settings is important for large simulations with many objects. These settings include the number of steps performed per second and the number of velocity and position iterations per step. The most important setting is gravity, which is determined by a vector of gravitational acceleration. Gravity in Box2D is simplified, but for the purpose of games, it is usually enough. Box2D works best when simulating a relatively small scene where objects are at most a few tens of meters big. To simulate, for example, a planet's (radial) gravity, we would have to implement our own gravitational force and turn the built-in Box2D gravity off.

Forces and impulses

Both forces and impulses are used to make a body move. Gravity is nothing but the constant application of a force. While it is possible to set the position and velocity of a body in Box2D directly, that is not the right way to do it, because it makes the simulation unrealistic. To move a body properly, we need to apply a force or an impulse to it. These two things are almost the same. While forces are added to all the other forces and change the body's velocity over time, impulses change the body's velocity immediately. In fact, an impulse is defined as a force applied over time. We can imagine a foam ball falling from the sky: when wind starts blowing from the left, the ball slowly changes its trajectory. An impulse is more like a tennis racket that hits the ball in flight and changes its trajectory immediately. There are two types of forces and impulses: linear and angular. Linear makes the body move left, right, up, and down, and angular makes the body spin around its center. Angular force is called torque. Linear forces and impulses are applied at a given point, and they have different effects depending on that point's position. The following figure shows a simple body with two fixtures and quite high friction, something like a carton box on a carpet. First, we apply a force to the center of the large square fixture. When the force is applied, the body simply moves along the ground to the right a little. This is shown in the following figure:

Second, we try to apply the force to the upper-right corner of the large box. This is shown in the following figure:

Using the same force at a different point, the body topples over to the right side. This is shown in the following figure:
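The difference between a force and an impulse is easy to see in code. The following is again a minimal sketch using Unity's Box2D-backed 2D physics as a stand-in for any Box2D wrapper (AndEngine's Java Body class exposes the matching applyForce, applyLinearImpulse, and applyTorque calls); the numbers and the Kick() event are illustrative assumptions.

    using UnityEngine;

    // A minimal sketch contrasting forces and impulses on a dynamic body.
    public class PushTheBox : MonoBehaviour
    {
        Rigidbody2D body;

        void Awake()
        {
            body = GetComponent<Rigidbody2D>();
        }

        void FixedUpdate()
        {
            // A force accumulates over time, like wind on a foam ball:
            // applied every physics step, it bends the trajectory gradually.
            body.AddForce(new Vector2(10f, 0f), ForceMode2D.Force);
        }

        void Kick() // hypothetical event, e.g. called when the player taps
        {
            // An impulse changes velocity immediately, like a racket
            // hitting the ball in flight.
            body.AddForce(new Vector2(0f, 5f), ForceMode2D.Impulse);

            // Torque is the angular force: it spins the body on its center.
            body.AddTorque(1f);

            // The same linear force applied away from the center of gravity
            // both moves and topples the body, as in the box example above.
            body.AddForceAtPosition(new Vector2(10f, 0f),
                                    body.position + new Vector2(0f, 0.5f));
        }
    }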

Components in Unity

Packt
26 Aug 2014
13 min read
In this article by Simon Jackson, author of Mastering Unity 2D Game Development, we will have a walkthrough of the new 2D system and other new features. We will then understand some of the Unity components deeply. We will then dig into animation and its components. (For more resources related to this topic, see here.)

Unity 4.3 improvements

Unity 4.3 was not just about the new 2D system; there is also a host of other improvements and features in this release. The major highlights of Unity 4.3 are covered in the following sections.

Improved Mecanim performance

Mecanim is a powerful tool for both 2D and 3D animations. In Unity 4.3, there have been many improvements and enhancements, including a new game object optimizer that ensures objects are more tightly bound to their skeletal systems and removes unnecessary transform holders, thus making Mecanim animations lighter and smoother. Refer to the following screenshot:

In Unity 4.3, Mecanim also adds greater control over blending animations together, allowing the addition of curves for smooth transitions, and it now also includes events that can be hooked into at every step.

The Windows Phone API improvements and Windows 8.1 support

Unity 4.2 introduced Windows Phone and Windows 8 support; since then, things have been going wild, especially since Microsoft threw its support behind the movement and offered free licensing to existing Pro owners. Refer to the following screenshot:

Unity 4.3 builds solidly on the v4 foundations by bringing additional platform support and closing some more gaps between the existing platforms. Some of the advantages are as follows:

The emulator is now fully supported with Windows Phone (new x86 phone build)
It has more orientation support, which allows even the splash screens to rotate properly and enables pixel-perfect display
It has trial application APIs for both Phone and Windows 8
It has improved sensors and location support

On top of this, with the recent release of Windows 8.1, Unity 4.3 now also fully supports Windows 8.1; additionally, Unity 4.5.3 will introduce support for Windows Phone 8.1 and universal projects.

Dynamic Nav Mesh (Pro version only)

If you have only been using the free version of Unity till now, you will not be aware of what a Nav Mesh agent is. Nav Meshes are invisible meshes that are created for your 3D environment at build time to simplify pathfinding and navigation for movable entities. Refer to the following screenshot:

You can, of course, create simplified models for your environment and use them in your scenes; however, every time you change your scene, you need to update your navigation model. Nav Meshes simply remove this overhead. Nav Meshes are crucial, especially in larger environments where collision and navigation calculations can make the difference between your game running well or not. Unity 4.3 improves this by allowing more runtime changes to the dynamic Nav Mesh, letting you destroy parts of your scene that alter the walkable parts of your terrain. Nav Mesh calculations are also now multithreaded to give an even better speed boost to your game. There have also been many other under-the-hood fixes and tweaks.

Editor updates

The Unity editor received a host of updates in Unity 4.3 to improve its performance and usability, as you can see in the following demo screenshot. Granted, most of the improvements are behind the scenes.
The improved Unity Editor GUI with huge improvements

The editor refactored a lot of the scripting features of the platform, primarily to reduce the code complexity required for many scripting components, such as by unifying parts of the API into single components. For example, the LookLikeControls and LookLikeInspector options have been unified into a single LookLike function, which allows easier creation of editor GUI components. Further simplification of the programmable editor interface is an ongoing task, and a lot of headway is made in each release. Additionally, the keyboard controls have been tweaked to ensure that navigation works in a uniform way and the sliders/fields work more consistently.

MonoDevelop 4.01

Besides the editor features, one of the biggest enhancements has to be the upgrade of the MonoDevelop editor (http://monodevelop.com/), which Unity supports and ships with. The outdated bundled version has been a long-running complaint among developers, simply because of the brand-new features available in later editions. Refer to the following screenshot:

MonoDevelop isn't made by Unity; it's an open source initiative run by Xamarin and hosted on GitHub (https://github.com/mono/monodevelop) for all willing developers to contribute and submit fixes to. Although the current stable release is 4.2.1, Unity is not fully up to date. Hopefully, this recent upgrade means that Unity can keep more in line with future versions of this free tool. Sadly, this doesn't mean that Unity has been upgraded from the modified V2 version of the Mono compiler (http://www.mono-project.com/Main_Page) it uses to the current V3 branch, most likely due to reduced platform support in the later versions of Mono.

Movie textures

Movie textures are not exactly a new feature in Unity, as they have been available for some time on platforms such as Android and iOS. However, in Unity 4.3, they were made available for both the new Windows 8 and Windows Phone platforms. This adds even more functionality to these platforms that was missing in the initial Unity 4.2 release, where support for them was first introduced. Refer to the following screenshot:

With movie textures now added to the platform, other streaming features are also available; for example, webcam (or a built-in camera, in this case) and microphone support were also added.

Understanding components

Components in Unity are the building blocks of any game; almost everything you use or apply will end up as a component on a GameObject inspector in a scene. Until you build your project, Unity doesn't know which components will be in the final game when your code actually runs (there is some magic applied in the editor). So, these components are not actually attached to your GameObject inspector but rather linked to it.

Accessing components using a shortcut

Now, in the previous Unity example, we added some behind-the-scenes trickery to enable you to reference a component without first discovering it. We did this via shortcuts added to the MonoBehaviour class that the game object's scripts inherit from.
You can access the components with the help of the following code:

    this.renderer.collider.attachedRigidbody.angularDrag = 0.2f;

What Unity then does behind the scenes for you is convert the preceding code to the following:

    var renderer = this.GetComponent<Renderer>();
    var collider = renderer.GetComponent<Collider>();
    var rigidBody = collider.GetComponent<Rigidbody>();
    rigidBody.angularDrag = 0.2f;

The preceding code is also the same as executing the following:

    GetComponent<Renderer>().GetComponent<Collider>().GetComponent<Rigidbody>().angularDrag = 0.2f;

Now, while this is functional and working, it isn't very performant or even a best practice, as it creates variables and destroys them each time you use them; it also calls GetComponent for each component every time you access them. Using GetComponent in the Start or Awake methods isn't too bad, as they are only called once when the script is loaded; however, if you do this on every frame in the Update method, or even worse, in FixedUpdate methods, the problem multiplies. This is not to say you can't do it; you just need to be aware of the potential cost.

A better way to use components – referencing

Now, every programmer knows that they have to worry about garbage and exactly how much memory they should allocate to objects for the entire lifetime of the game. To improve on the preceding shortcut code, we simply need to manually maintain references to the components we want to change or affect on a particular object. So, instead of the preceding code, we could simply use the following:

    Rigidbody myScriptRigidBody;

    void Awake()
    {
        var renderer = this.GetComponent<Renderer>();
        var collider = renderer.GetComponent<Collider>();
        myScriptRigidBody = collider.GetComponent<Rigidbody>();
    }

    void Update()
    {
        myScriptRigidBody.angularDrag = 0.2f * Time.deltaTime;
    }

This way, the Rigidbody object that we want to affect is discovered only once (when the script awakes); then, we can just update it through the reference each time a value needs to be changed, instead of rediscovering it every time.

An even better way

Now, it has been pointed out (by those who like to test such things) that even the GetComponent call isn't as fast as it could be, because it uses C# generics to determine what type of component you are asking for (it's a two-step process: first, you determine the type, and then you get the component). However, there is another overload of the GetComponent function in which, instead of using generics, you just supply the type (thereby removing the need to discover it). To do this, we simply use the following code instead of the preceding GetComponent<>:

    myScriptRigidBody = (Rigidbody2D)GetComponent(typeof(Rigidbody2D));

The code is slightly longer and arguably only gives you a marginal increase, but if you need every byte of processing power, it is worth keeping in mind.

If you are using the "." shortcut to access components, I recommend that you change that practice now. In Unity 5, these shortcuts are being removed. There will, however, be a tool built into the project's importer to upgrade any scripts that use the shortcuts for you. This is not a huge task, just something to be aware of; act now if you can!

Animation components

All of the animation in the new 2D system in Unity uses the new Mecanim system (introduced in Version 4) for design and control, which, once you get used to it, is very simple and easy to use.
It is broken up into three main parts: animation controllers, animation clips, and animator components.

Animation controllers

Animation controllers are simply state machines that are used to control when an animation should be played and how often, including the conditions that control the transitions between each state. In the new 2D system, there must be at least one controller per animation for it to play, and controllers can contain many animations, as you can see here with three states and transition lines between them:

Animation clips

Animation clips are the heart of the animation system and have come very far from their previous implementation in Unity. Previously, clips were used just to hold the crafted animations of 3D models, with a limited ability to tweak them for use on a complete 3D model:

The new animation dope sheet system (as shown in the preceding screenshot) is very advanced; in fact, it now tracks almost every change in the inspector for sprites, allowing you to animate just about everything. You can even control which sprite from a spritesheet is used for each frame of the animation. The preceding screenshot shows a three-frame sprite animation and a modified x position modifier for the middle image, giving a hopping effect to the sprite as it runs. This ability of the dope sheet system means there is less burden on the shoulders of art designers to craft complex animations, as the animation system itself can be used to produce great effects. Sprites don't have to be picked from the same spritesheet to be animated. They can come from individual textures or be picked from any spritesheet you have imported.

The Animator component

To use the new animation prepared in a controller, you need to apply it to a game object in the scene. This is done through the Animator component, as shown here:

The only property we actually care about in 2D is the Controller property. This is where we attach the controller we just created. The other properties only apply to 3D humanoid models, so we can ignore them for 2D. For more information about the complete 3D Mecanim system, refer to the Unity Learn guide at http://unity3d.com/learn/tutorials/modules/beginner/animation. Animation is just one of the uses of the Mecanim system.

Setting up animation controllers

So, to start creating animations, you first need an animation controller in order to define your animation clips. As stated before, this is just a state machine that controls the execution of animations, even if there is only one animation. In this case, the controller runs the selected animation for as long as it's told to. If you are browsing around the components that can be added to a game object, you will come across the Animation component, which takes a single animation clip as a parameter. This is the legacy animation system, kept for backward compatibility only. Any new animation clip created and set on this component will not work; it will simply generate a console log item stating The AnimationClip used by the Animation component must be marked as Legacy. So, in Unity 4.3 onwards, just avoid this. Creating an animation controller is just as easy as any other game object. In the Project view, simply right-click on the view and select Create | Animator Controller. Opening the new animation will show you the blank animator controller in the Mecanim state manager window, as shown in the following screenshot:

There is a lot of functionality in the Mecanim state engine, which is largely outside the scope of this article, though a short scripting sketch follows.
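Driving a controller from code is simple enough to show here. The following is a minimal, hedged sketch; the state name runCycle and the bool parameter isRunning are assumptions for illustration and must match whatever names you use in your own controller:

    using UnityEngine;

    // A minimal sketch of controlling Mecanim from a script.
    public class PlayerAnimation : MonoBehaviour
    {
        Animator animator;

        void Awake()
        {
            // Cache the reference once, using the typeof overload
            // discussed earlier.
            animator = (Animator)GetComponent(typeof(Animator));
        }

        void Update()
        {
            // Play a named state directly...
            if (Input.GetKeyDown(KeyCode.Space))
            {
                animator.Play("runCycle"); // assumed state name
            }

            // ...or let the controller's transitions do the work by
            // driving a parameter, assuming a bool named "isRunning".
            animator.SetBool("isRunning", Input.GetKey(KeyCode.RightArrow));
        }
    }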
For more depth, check out dedicated books on the subject, such as Unity 4 Character Animation with Mecanim, Jamie Dean, Packt Publishing. If you have any existing clips, you can just drag them to the Mecanim controller's Edit window; alternatively, you can select them in the Project view, right-click on them, and select From selected clip under Create. However, we will cover more of this later in practice. Once you have a controller, you can add it to any game object in your project by clicking on Add Component in the inspector or by navigating to Component | Create and Miscellaneous | Animator and selecting it. Then, you can select your new controller as the Controller property of the Animator. Alternatively, you can just drag your new controller onto the game object you wish to add it to.

Clips in a controller are bound to the spritesheet texture of the object the controller is attached to. Changing or removing this texture will prevent the animation from being displayed correctly; however, it will appear as if it is still running. So, with a controller in place, let's add some animation to it.

Summary

In this article, we did a detailed analysis of the new 2D features added in Unity 4.3. Then we gave an overview of all the main Unity components.

Resources for Article:

Further resources on this subject:
Parallax scrolling [article]
What's Your Input? [article]
Unity 3-0 Enter the Third Dimension [article]

Building a Simple Boat

Packt
25 Aug 2014
15 min read
It's time to get out your hammers, saws, and tape measures, and start building something. In this article, by Gordon Fisher, the author of Blender 3D Basics Beginner's Guide Second Edition, you're going to put your knowledge of building objects to practical use, as well as your knowledge of using the 3D View, to build a boat. It's a simple but good-looking and watertight craft that has three seats, as shown in the next screenshot. You will learn about the following topics:

Using box modeling to convert a cube into a boat
Employing box modeling's power methods, extrusion, and subdividing edges
Joining objects together into a single object
Adding materials to an object
Using a texture for greater detail

(For more resources related to this topic, see here.)

Turning a cube into a boat with box modeling

You are going to turn the default Blender cube into an attractive boat, similar to the one shown in the following screenshot. First, you should know a little bit about boats. The front is called the bow, and is pronounced the same as bowing to the Queen. The rear is called the stern or the aft. The main body of the boat is the hull, and the top of the hull is the gunwale, pronounced gunnel. You will be using a technique called box modeling to make the boat. Box modeling is a very standard method of modeling. As you might expect from the name, you start out with a box and sculpt it like a piece of clay to make whatever you want. There are three methods that you will use in most instances of box modeling: extrusion, subdividing edges, and moving, or translating, vertices, edges, and faces.

Using extrusion, the most powerful tool for box modeling

Extrusion is similar to turning dough into noodles by pushing it through a die. When you extrude an edge, Blender pushes out the edge and connects it to the old edge with a new face. When you extrude a face, the face gets pushed out and gets connected to the old edges by new faces.

Time for action – extruding to make the inside of the hull

The first step here is to create an inside for the hull. You will extrude the face without moving it, and shrink it a bit. This will create the basis for the gunwale:

Create a new file and zoom into the default cube. Select Wireframe from the Viewport Shading menu on the header. Press the Tab key to go to Edit Mode. Choose Face Selection mode from the header; it is the orange parallelogram. Select the top face with the RMB.

Press the E key to extrude the face, then immediately press Enter. Move the mouse away from the cube. Press the S key to scale the face with the mouse. While you are scaling it, press Shift + Ctrl, and scale it to 0.9. Watch the scaling readout in the 3D View header.

Press the NumPad 1 key to change to the Front view and press the 5 key on the NumPad to change to the Ortho view.

Move the cursor to a place a little above the top of the cube. Press E, and Blender will create a new face and let you move it up or down. Move it down. When you are close to the bottom, press the Ctrl + Shift buttons, and move it down until the readout on the 3D View header is 1.9. Click the LMB to release the face. It will look like the following screenshot:

What just happened?

You just created a simple hull for your boat. It's going to look better, but at least you got the thickness of the hull established. Pressing the E key extrudes the face, making a new face and sides that connect the new face with the edges used by the old face. You pressed Enter immediately after the E key the first time so that the new face wouldn't get moved.
Then, you scaled it down a little to establish the thickness of the hull. Next, you extruded the face again. As you watched the readout, did you notice that it said D: -1.900 (1.900) normal? When you extrude a face, Blender is automatically set up to move the face along its normal, so that you can move it in or out and keep it parallel with its original location. For your reference, the 4909_05_making the hull1.blend file, which has been included in the download pack, has the first extrusion. The 4909_05_making the hull2.blend file has the extrusion moved down. The 4909_05_making the hull3.blend file has the bottom and sides evened out.

Using normals in 3D modeling

What is a normal? The normal is an unseen ray that is perpendicular to a face. This is illustrated in the following image by the red line:

Blender has many uses for the normal:

It lets Blender extrude a face and keep the extruded face in the same orientation as the face it was extruded from
It keeps the sides straight and tells Blender in which direction a face is pointing
Blender can also use the normal to calculate how much light a particular face receives from a given lamp, and in which direction lights are pointed

Modeling tip

If you create a 3D model and it seems perfect except that there is an unexplained hole where a face should be, you may have a normal that faces backwards. To help you, Blender can display the normals for you.

Time for action – displaying normals

Displaying the normals does not affect the model, but sometimes it can help you in your modeling to see which way your faces are pointing:

Press Ctrl + MMB and use the mouse to zoom out so that you can see the whole cube. In the 3D View, press N to get the Properties Panel. Scroll down in the Properties Panel until you get to the Mesh Display subpanel. Go down to where it says Normals. There are two buttons like the edge select and face select buttons in the 3D View header. Click on the button with a cube and an orange rhomboid, as outlined in the next screenshot, the Face Select button, to choose selecting the normals of the faces.

Beside the Face Select button, there is a place where you can adjust the displayed size of the normals, as shown in the following screenshot. The displayed normals are the blue lines. Set Normals Size to 0.8. In the following image, I used the cube as it was just before you made the last extrusion, as it displays the normals a little better.

Press the MMB, use the mouse to rotate your view of the cube, and look at the normals. Click on the Face Select button in the Mesh Display subpanel again to turn off the normals display.

What just happened?

To see the normals, you opened up the Properties Panel and instructed Blender to display them. They are displayed as little blue lines, and you can display them at whatever size works best for you. Normals, themselves, have no length, just a direction, so changing this setting does not affect the model. They are there for your use when you need to analyze problems with the appearance of your model. Once you saw them, you turned their display off. For your reference, the 4909_05_displaying normals.blend file has been included in the download pack. It has the cube with the first extrusion, and the normal display turned on.
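Incidentally, the normal Blender displays can be computed directly from a face's corners. The following is a small worked formula for the mathematically curious, not something Blender asks you to do by hand; the vertex labels are illustrative:

    \mathbf{n} = \frac{(\mathbf{v}_2 - \mathbf{v}_1) \times (\mathbf{v}_3 - \mathbf{v}_1)}{\lVert (\mathbf{v}_2 - \mathbf{v}_1) \times (\mathbf{v}_3 - \mathbf{v}_1) \rVert}

Here, v1, v2, and v3 are three corners of the face taken in counter-clockwise order: the cross product of the two edge vectors is perpendicular to the face, and dividing by its length gives the unit normal. If the corners wind clockwise instead, the normal points the opposite way; that is the backwards-facing normal responsible for the unexplained holes mentioned in the modeling tip above.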
Planning what you are going to make

It always helps to have an idea in mind of what you want to build. You don't have to get out caliper micrometers and measure every last little detail of something you want to model, but you should at least have some pictures as reference, or an idea of the actual dimensions of the object that you are trying to model. There are many ways to get these dimensions, and we are going to use several of them as we build our boats.

Choosing which units to model in

I went on the Internet and found the dimensions of a small jon boat for fishing. You are not going to copy it exactly, but knowing what size it should be will make the proportions that you choose more convincing. As it happened, it was an American boat, and the size was given in feet and inches. Blender supports three kinds of units for measuring distance: Blender units, Metric units, and Imperial units. Blender units are not tied to any specific measurement in the real world as Metric and Imperial units are. To change the units of measurement, go to the Properties window, to the right of the 3D View window, as shown in the following image, and choose the Scene button. It shows a light, a sphere, and a cylinder. In the following image, it's highlighted in blue. The second subpanel, the Units subpanel, lets you select which units you prefer. However, rather than choosing between Metric and Imperial, I decided to leave the default settings as they were. As the measurements that I found were Imperial measurements, I decided to interpret the Imperial measurements as Blender measurements, equating 1 foot to 1 Blender unit, and each inch to 0.083 Blender units. If I have an Imperial measurement that is expressed in inches, I just divide it by 12 to get the correct number in Blender units. The boat I found on the Internet is 9 feet and 10 inches long, 56 inches wide at the top, 44 inches wide at the bottom, and 18 inches high. I converted these to decimal Blender units: 9.830 long, 4.666 wide at the top, 3.666 wide at the bottom, and 1.500 high.

Time for action – making reference objects

One of the simplest ways to see what size your boat should be is to have boxes of the proper size to use as guides. So now, you will make some of these boxes:

In the 3D View window, press the Tab key to get into Object Mode. Press A to deselect the boat. Press the NumPad 3 key to get the side view. Make sure you are in Ortho view; press the 5 key on the NumPad if needed. Press Shift + A and choose Mesh and then Cube from the menu. You will use this as a reference block for the size of the boat.

In the 3D View window's Properties Panel, in the Transform subpanel at the top, click on the Dimensions button, and change the dimensions of the reference block to 4.666 in the X direction, 9.83 in the Y direction, and 1.5 in the Z direction. You can use the Tab key to go from X to Y to Z, and press Enter when you are done.

Move the mouse over the 3D View window, and press Shift + D to duplicate the block. Then press Enter. Press the NumPad 1 key to get the front view. Press G and then Z to move this block down, so its top is in the lower half of the first one. Press S, then X, then the number 0.79, and then Enter. This will scale it to 79 percent along the X axis. Look at the readout; it will show you what is happening. This block will represent the width of the boat at the bottom of the hull. Press the MMB and rotate the view to see what it looks like.

What just happened?

To make accurate models, it helps to have references.
For this boat that you are building, you don't need to copy another boat exactly, and the basic dimensions are enough. You got out of Edit Mode and deselected the boat so that you could work on something else without affecting the boat. Then, you made a cube and scaled it to the dimensions of the boat at the top of the hull, to use as a reference block. You then copied the reference block and scaled the copy down in X for the width of the boat at the bottom of the hull, as shown in the following image:

Reference objects, like reference blocks and reference spheres, are handy tools. They are easy to make and have a lot of uses. For your reference, the 4909_05_making reference objects.blend file has been included in the download pack. It has the cube and the two reference blocks.

Sizing the boat to the reference blocks

Now that the reference blocks have been made, you can use them to guide you when making the boat.

Time for action – making the boat the proper length

Now that you've made the reference blocks the right size, it's time to make the boat the same dimensions as the blocks:

Change to the side view by pressing the NumPad 3 key. Press Ctrl + MMB and use the mouse to zoom in until the reference blocks fill almost all of the 3D View. Press Shift + MMB and use the mouse to re-center the reference blocks.

Select the boat with the RMB. Press the Tab key to go into Edit Mode, and then choose the Vertex Select mode button from the 3D View header. Press A to deselect all vertices. Then, select the boat's vertices on the right-hand side of the 3D View. Press B to use the border select, or press C to use the circle select mode, or press Ctrl + LMB for the lasso select. When the vertices are selected, press G and then Y, and move the vertices to the right with the mouse until they are lined up with the right-hand side of the reference blocks. Press the LMB to drop the vertices in place.

Press A to deselect all the vertices, select the boat's vertices on the left-hand side of the 3D View, and move them to the left until they are lined up with the left-hand side of the reference blocks, as shown in the following image:

What just happened?

You made sure that the screen was properly set up for working by getting into the side view in the Ortho mode. Next, you selected the boat, got into Edit Mode, and got ready to move the vertices. Then, you made the boat the proper length by moving the vertices so that they lined up with the reference blocks. For your reference, the 4909_05_proper length.blend file has been included in the download pack. It has the bow and stern properly sized.

Time for action – making the boat the proper width and height

Making the boat the right length was pretty easy. Setting the width and height requires a few more steps, but the method is very similar:

Press the NumPad 1 key to change to the front view. Use Ctrl + MMB to zoom into the reference blocks. Use Shift + MMB to re-center the boat so that you can see all of it.

Press A to deselect all the vertices, and using any method, select all of the vertices on the left of the 3D View. Press G and then X to move the left-side vertices in X until they line up with the wider reference block, as shown in the following image. Press the LMB to release the vertices.

Press A to deselect all the vertices. Select only the right-hand vertices, with a method different from the one you used to select the left-hand vertices. Then, press G and then X to move them in X until they line up with the right side of the wider reference block.
Press the LMB when they are in place. Deselect all the vertices. Select only the top vertices, and press G and then Z to move them in the Z direction until they line up with the top of the wider reference block. Deselect all the vertices. Now, select only the bottom vertices, and press G and then Z to move them in the Z direction until they line up with the bottom of the wider reference block, as shown in the following image:

Deselect all the vertices. Next, select only the bottom vertices on the left. Press G and then X to move them in X until they line up with the narrower reference block. Then, press the LMB. Finally, deselect all the vertices, and select only the bottom vertices on the right. Press G and then X to move them in the X axis until they line up with the narrower reference block, as shown in the following image. Press the LMB to release them:

Press the NumPad 3 key to switch to the Side view again. Use Ctrl + MMB to zoom out if you need to. Press A to deselect all the vertices. Select only the bottom vertices on the right, as in the following illustration. You are going to make this the stern end of the boat. Press G and then Y to move them left in the Y axis just a little bit, so that the stern is not completely straight up and down. Press the LMB to release them.

Now, select only the bottom vertices on the left, as highlighted in the following illustration. Make this the bow end of the boat. Move them right in the Y axis just a little bit. Go a bit further than the stern, so that the angle is similar to the right side, maybe about 1.3 or 1.4. It's your call.

What just happened?

You used the reference blocks to guide yourself in moving the vertices into the shape of a boat. You adjusted the width and the height, and angled the hull. Finally, you angled the stern and the bow. It floats, but it's still a bit boxy. For your reference, the 4909_05_proper width and height1.blend file has been included in the download pack. It has both sides aligned with the wider reference block. The 4909_05_proper width and height2.blend file has the bottom vertices aligned to the narrower reference block. The 4909_05_proper width and height3.blend file has the bow and stern finished.

What's Your Input?

Packt
20 Aug 2014
7 min read
This article by Venita Pereira, the author of the book Learning Unity 2D Game Development by Example, teaches us all about the various input types and states of a game. We will then go on to learn how to create buttons and the game controls by using code snippets for input detection.

"Computers are finite machines; when given the same input, they always produce the same output." – Greg M. Perry, Sams Teach Yourself Beginning Programming in 24 Hours

(For more resources related to this topic, see here.)

Overview

The list of topics that will be covered in this article is as follows:

Input versus output
Input types
Output types
Input Manager
Input detection
Buttons
Game controls

Input versus output

We will be looking at exactly what both input and output in games entail: their functions, their importance, and the differences between them.

Input in games

Input may not seem a very important part of a game at first glance, but it is in fact very important, as input in games determines how the player will interact with the game. All the controls in our game, such as moving and special abilities, depend on which controls and game mechanics we would like in our game and the way we would like them to function. Most games have the standard control setup of moving your character. This helps usability, because if players are already familiar with the controls, the game is accessible to a much wider audience. This is particularly noticeable in games of the same genre and platform. For instance, endless runner games usually make use of the tilt mechanic made possible by the features of the mobile device. However, there are variations of and additions to the pre-existing control mechanics; for example, many other endless runners make use of the simple swipe mechanic, and some make use of both. When designing our games, we can be creative and unique with our controls, thereby innovating a game, but the controls still need to be intuitive for our target players. When first designing our game, we need to know whom our target audience of players includes. If we would like our game to be played by young children, for instance, then we need to ensure that they are able to understand, learn, and remember the controls. Otherwise, instead of enjoying the game, they will get frustrated and stop playing it entirely. As an example, a young player may hold a touchscreen device with their fingers over the screen, thereby preventing the input from working correctly, depending on whether the game was designed to take this into account and support it. Different audiences of players interact with a game differently. Likewise, if a player is more familiar with the controls on a specific device, they may struggle with different controls. It is important to create prototypes to test the input controls of a game thoroughly. Developing a well-designed input system that supports usability and accessibility will make our game more immersive.

Output in games

Output is the direct opposite of input; it provides the necessary information to the player. However, output is just as essential to a game as input. It provides feedback to the player, letting them know how they are doing. Output lets the player know whether they have performed an action correctly or done something wrong, how they have performed, and their progression in the form of goals, missions, and objectives. Without feedback, a player would feel lost.
The player would potentially see the game as unclear, buggy, or even broken. For certain types of games, output forms the heart of the game.

The input in a game gets processed by the game to produce some form of output, which in turn provides feedback to the player, helping them learn from their actions. This is the cycle of the game's input-output system.

Input types

There are many different input types that we can utilize in our games, and they can form part of the exciting features that our games have to offer. The most widely used input types in games include the following:

- Keyboard: Key presses from a keyboard are supported by Unity and can be used as input controls in PC games as well as games on any other device that supports a keyboard.
- Mouse: Mouse clicks, motion (of the mouse), and coordinates are all inputs that are supported by Unity.
- Game controller: This is an input device that generally includes buttons (including shoulder and trigger buttons), a directional pad, and analog sticks. Game controller input is supported by Unity.
- Joystick: A joystick has a stick that pivots on a base and provides movement input in the form of direction and angle. It also has a trigger, a throttle, and extra buttons. It is commonly used in flight simulation games to simulate the control device in an aircraft's cockpit, and in other simulation games that involve controlling machines such as trucks and cranes. Modern game controllers use a variation of the joystick known as analog sticks and are therefore treated as the same class of input device as joysticks by Unity. Joystick input is supported by Unity.
- Microphone: This provides audio input commands for a game. Unity supports basic microphone input; for greater fidelity, a third-party audio recognition tool would be required.
- Camera: This provides visual input for a game using image recognition. Unity has webcam support to access RGB data; for more advanced features, third-party tools would be required.
- Touchscreen: This provides multiple touch inputs from the player's finger presses on the device's screen. This is supported by Unity.
- Accelerometer: This provides the acceleration force measured as the device is moved, and is supported by Unity.
- Gyroscope: This provides the orientation of the device as input and is supported by Unity.
- GPS: This provides the geographical location of the device as input and is supported by Unity.
- Stylus: Stylus input is similar to touchscreen input in that you use a stylus to interact with the screen; however, it provides greater precision. The latest version of Unity supports the Android stylus.
- Motion controller: This provides the player's motions as input. Unity does not support this, and therefore third-party tools would be required.

A minimal detection sketch for a few of these follows.
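Several of the supported types above reduce to one-line checks in a Unity script. The following C# sketch is our own illustration, not from the book; the class name and logged messages are arbitrary, but the UnityEngine calls (Input.GetKeyDown, Input.GetMouseButtonDown, Input.GetTouch, Input.acceleration) are the standard API:

    using UnityEngine;

    public class InputProbe : MonoBehaviour
    {
        void Update()
        {
            // Keyboard
            if (Input.GetKeyDown(KeyCode.Space))
                Debug.Log("Space pressed");

            // Mouse (button 0 is the left button)
            if (Input.GetMouseButtonDown(0))
                Debug.Log("Click at " + Input.mousePosition);

            // Touchscreen
            if (Input.touchCount > 0)
                Debug.Log("Touch at " + Input.GetTouch(0).position);

            // Accelerometer (tilt)
            Vector3 tilt = Input.acceleration;
            Debug.Log("Tilt: " + tilt);
        }
    }

Attached to any GameObject in a scene, this component simply logs whichever of these inputs it sees each frame.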
Output types

The main output types in games are as follows:

- Visual output
- Audio output
- Controller vibration

Unity supports all three.

Visual output

The Head-Up Display (HUD) is the gaming term for the game's Graphical User Interface (GUI). It provides all the essential information as visual output to the player, as well as feedback and progress. (Image: HUD, viewed June 22, 2014, http://opengameart.org/content/golden-ui)

Other visual output includes images, animations, particle effects, and transitions.

Audio

Audio is what can be heard through an audio output, such as a speaker. It provides feedback that supports and emphasizes the visual output and therefore increases immersion. (Image: Speaker, viewed June 22, 2014, http://pixabay.com/en/loudspeaker-speakers-sound-music-146583/)

Controller vibration

Controller vibration provides physical feedback, for instance when the player collides with an object, or environmental feedback such as the rumble of an earthquake, adding even more immersion.

A game that is designed to provide output meaningfully is not only clearer and more enjoyable, but can truly bring its world to life, making it genuinely engaging for the player.

Save for later

Moving the Space Pod Using Touch

Packt
18 Aug 2014
10 min read
This article, written by Frahaan Hussain, Arutosh Gurung, and Gareth Jones, authors of the book Cocos2d-x Game Development Essentials, covers how to set up touch events within our game. So far, the game has had no user interaction from a gameplay perspective. This article will rectify this by adding touch controls in order to move the space pod and avoid the asteroids.

(For more resources related to this topic, see here.)

The topics that will be covered in this article are as follows:

- Implementing touch
- Single-touch
- Multi-touch
- Using touch locations
- Moving the spaceship when touching the screen

There are two main routes to detect touch provided by Cocos2d-x:

- Single-touch: This method detects a single touch event at any given time. It is what will be implemented in the game, as it is sufficient for most gaming circumstances.
- Multi-touch: This method detects multiple touches simultaneously; this is great for pinching and zooming. The Angry Birds game, for example, uses this technique.

Though single-touch is the approach that the game will incorporate, multi-touch will also be covered in this article so that you are aware of how to use it in future games.

The general process for setting up touches

The general process of setting up touch events, be it single-touch or multi-touch, is as follows:

1. Declare the touch functions.
2. Declare a listener to listen for touch events.
3. Assign the touch functions to the appropriate touch events:
   - When the touch has begun
   - When the touch has moved
   - When the touch has ended
4. Implement the touch functions.
5. Add the appropriate game logic/code for when touch events have occurred.

Single-touch events

Single-touch events can be detected at any given time, and for many games this is sufficient, as it is for this game. Follow these steps to implement single-touch events in a scene:

1. Declare the touch functions in the GameScene.h file as follows:

    bool onTouchBegan(cocos2d::Touch *touch, cocos2d::Event *event);
    void onTouchMoved(cocos2d::Touch *touch, cocos2d::Event *event);
    void onTouchEnded(cocos2d::Touch *touch, cocos2d::Event *event);
    void onTouchCancelled(cocos2d::Touch *touch, cocos2d::Event *event);

The previous functions do the following:

- The onTouchBegan function detects when a single touch has occurred, and it returns a Boolean value. This should be true if the event is swallowed by the node, and false if the event should keep propagating.
- The onTouchMoved function detects when the touch moves.
- The onTouchEnded function detects when the touch event has ended, essentially when the user has lifted their finger.
- The onTouchCancelled function detects when a touch event has ended, but not by the user; for example, because of a system alert. The general practice is to have it call the onTouchEnded method to run the same code, as the two can be considered the same event for most games.
2. Declare a Boolean variable in the GameScene.h file, which will be true while the screen is being touched and false when it isn't, and also declare a float variable to keep track of the position being touched:

    bool isTouching;
    float touchPosition;

3. Add the following code in the init() method of GameScene.cpp:

    auto listener = EventListenerTouchOneByOne::create();
    listener->setSwallowTouches(true);
    listener->onTouchBegan = CC_CALLBACK_2(GameScreen::onTouchBegan, this);
    listener->onTouchMoved = CC_CALLBACK_2(GameScreen::onTouchMoved, this);
    listener->onTouchEnded = CC_CALLBACK_2(GameScreen::onTouchEnded, this);
    listener->onTouchCancelled = CC_CALLBACK_2(GameScreen::onTouchCancelled, this);
    this->getEventDispatcher()->addEventListenerWithSceneGraphPriority(listener, this);
    isTouching = false;
    touchPosition = 0;

There is quite a lot of new code in the previous snippet, so let's run through it line by line:

- The first statement declares and initializes a listener for a single touch.
- The second statement makes the listener swallow touches, preventing layers underneath from detecting them.
- The third statement assigns our onTouchBegan method to the onTouchBegan listener event.
- The fourth statement assigns our onTouchMoved method to the onTouchMoved listener event.
- The fifth statement assigns our onTouchEnded method to the onTouchEnded listener event.
- The sixth statement assigns our onTouchCancelled method to the onTouchCancelled listener event.
- The seventh statement registers the touch listener with the event dispatcher so the events can be detected.
- The eighth statement sets the isTouching variable to false, as the player won't be touching the screen when the game starts.
- The final statement initializes the touchPosition variable to 0.

4. Implement the touch functions inside the GameScene.cpp file:

    bool GameScreen::onTouchBegan(cocos2d::Touch *touch, cocos2d::Event *event)
    {
        isTouching = true;
        touchPosition = touch->getLocation().x;
        return true;
    }

    void GameScreen::onTouchMoved(cocos2d::Touch *touch, cocos2d::Event *event)
    {
        // not used for this game
    }

    void GameScreen::onTouchEnded(cocos2d::Touch *touch, cocos2d::Event *event)
    {
        isTouching = false;
    }

    void GameScreen::onTouchCancelled(cocos2d::Touch *touch, cocos2d::Event *event)
    {
        onTouchEnded(touch, event);
    }

Let's go over the touch functions that have just been implemented:

- The onTouchBegan method sets the isTouching variable to true, as the user is now touching the screen, and stores the starting touch position.
- The onTouchMoved function isn't used in this game, but it has been implemented so that you are aware of the steps for implementing it (as an extra task, you can use it to steer the space pod by dragging a finger across the screen; see the sketch at the end of this section).
- The onTouchEnded method sets the isTouching variable to false, as the user is no longer touching the screen.
- The onTouchCancelled method calls the onTouchEnded method, as a touch event has essentially ended.

If the game were run now, the space pod wouldn't move, as the movement code hasn't been implemented yet. It will be implemented within the update() method to move the pod left when the user touches the left half of the screen and right when the user touches the right half of the screen.
5. Add the following code at the end of the update() method:

    // check if the screen is being touched
    if (isTouching)
    {
        // check which half of the screen is being touched
        if (touchPosition < visibleSize.width / 2)
        {
            // move the space pod left
            playerSprite->setPositionX(playerSprite->getPositionX() - (0.50 * visibleSize.width * dt));

            // prevent the space pod from going off the screen (left side)
            if (playerSprite->getPositionX() <= 0 + (playerSprite->getContentSize().width / 2))
            {
                playerSprite->setPositionX(playerSprite->getContentSize().width / 2);
            }
        }
        else
        {
            // move the space pod right
            playerSprite->setPositionX(playerSprite->getPositionX() + (0.50 * visibleSize.width * dt));

            // prevent the space pod from going off the screen (right side)
            if (playerSprite->getPositionX() >= visibleSize.width - (playerSprite->getContentSize().width / 2))
            {
                playerSprite->setPositionX(visibleSize.width - (playerSprite->getContentSize().width / 2));
            }
        }
    }

The preceding code performs the following steps:

1. Checks whether the screen is being touched.
2. Checks which side of the screen is being touched.
3. Moves the player left or right.
4. Checks whether the player is going off the screen and, if so, stops them from moving further.
5. Repeats the process until the screen is no longer being touched.

This section covered how to set up single-touch events and implement them within the game to move the space pod left and right.
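As a sketch of the extra task mentioned earlier, steering by dragging rather than by holding a screen half, onTouchMoved only needs to keep touchPosition up to date while the finger moves. This is our own illustration, not part of the book's recipe:

    void GameScreen::onTouchMoved(cocos2d::Touch *touch, cocos2d::Event *event)
    {
        // Track the finger as it drags, so the pod follows the touch
        // rather than reading only the initial touch position.
        touchPosition = touch->getLocation().x;
    }

Since the update() method already reads touchPosition every frame, no other change is needed for the pod to follow a dragging finger.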
Multi-touch events

Multi-touch is set up in a similar manner: declare the functions, then create a listener to actively listen for touch events. Follow these steps to implement multi-touch in a scene:

1. Firstly, the multi-touch feature needs to be enabled in the AppController.mm file, which is located within the ios folder. To do so, add the following code line below the viewController.view = eaglView; line:

    [eaglView setMultipleTouchEnabled: YES];

2. Declare the touch functions within the game scene header file (the functions do the same thing as their single-touch equivalents, but handle multiple touches that can be detected simultaneously):

    void onTouchesBegan(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
    void onTouchesMoved(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
    void onTouchesEnded(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
    void onTouchesCancelled(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);

3. Add the following code in the init() method of the scene.cpp file to listen for the multi-touch events. It uses the EventListenerTouchAllAtOnce class, which allows multiple touches to be detected at once:

    auto listener = EventListenerTouchAllAtOnce::create();
    listener->onTouchesBegan = CC_CALLBACK_2(GameScreen::onTouchesBegan, this);
    listener->onTouchesMoved = CC_CALLBACK_2(GameScreen::onTouchesMoved, this);
    listener->onTouchesEnded = CC_CALLBACK_2(GameScreen::onTouchesEnded, this);
    listener->onTouchesCancelled = CC_CALLBACK_2(GameScreen::onTouchesCancelled, this);
    this->getEventDispatcher()->addEventListenerWithSceneGraphPriority(listener, this);

4. Implement the following multi-touch functions inside scene.cpp:

    void GameScreen::onTouchesBegan(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
    {
        CCLOG("Multi-touch BEGAN");
    }

    void GameScreen::onTouchesMoved(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
    {
        for (int i = 0; i < touches.size(); i++)
        {
            CCLOG("Touch %i: %f", i, touches[i]->getLocation().x);
        }
    }

    void GameScreen::onTouchesEnded(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
    {
        CCLOG("MULTI TOUCHES HAVE ENDED");
    }

    void GameScreen::onTouchesCancelled(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
    {
        CCLOG("MULTI TOUCHES HAVE BEEN CANCELLED");
    }

The multi-touch functions just print out a log stating that they have occurred, except when touches are moved, in which case each touch's x position is logged.

This section covered how to implement the core foundations for multi-touch events so that they can be used for features such as zooming (for example, zooming into a scene in the Clash of Clans game) and panning. Multi-touch wasn't incorporated within the game as it wasn't needed, but this section is a good starting point for implementing it in future games.

Summary

This article covered how to set up touch listeners to detect touch events for single-touch and multi-touch. We incorporated single-touch within the game to be able to move the space pod left or right, depending on which half of the screen was being touched. Multi-touch wasn't used as the game didn't require it, but its implementation was shown so that it can be used in future projects.

Resources for Article:

Further resources on this subject:

- Cocos2d: Uses of Box2D Physics Engine [article]
- Cocos2d-x: Installation [article]
- Thumping Moles for Fun [article]

Save for later

Physics with Bullet

Packt
13 Aug 2014
7 min read
In this article by Rickard Eden, author of jMonkeyEngine 3.0 Cookbook, we will learn how to use physics in games using different physics engines. This article contains the following recipes:

- Creating a pushable door
- Building a rocket engine
- Ballistic projectiles and arrows
- Handling multiple gravity sources
- Self-balancing using RotationalLimitMotors

(For more resources related to this topic, see here.)

Using physics in games has become very common and accessible, thanks to open source physics engines such as Bullet. jMonkeyEngine supports both the Java-based jBullet and native Bullet in a seamless manner. jBullet is a pure Java port of the original C++-based Bullet, while the native version binds to Bullet itself. jMonkeyEngine is supplied with both, and they can be used interchangeably by replacing the libraries in the classpath; no code change is required. Use jme3-libraries-physics for the jBullet implementation and jme3-libraries-physics-native for native Bullet. In general, native Bullet is considered to be faster and is fully featured.

Physics can be used for almost anything in games, from tin cans that can be kicked around to character animation systems. In this article, we'll try to reflect the diversity of these implementations.

Creating a pushable door

Doors are useful in games. Visually, it is more appealing not to have holes in the walls, but doors for the players to pass through. Doors can be used to obscure the view and hide what's behind them for a surprise later. By extension, they can also be used to dynamically hide geometries and improve performance. There is also a gameplay aspect, where doors are used to open new areas to the player and give a sense of progression.

In this recipe, we will create a door that can be opened by pushing it, using the HingeJoint class. This door consists of the following three elements:

- Door object: This is the visible object
- Attachment: This is the fixed end of the joint, around which the hinge swings
- Hinge: This defines how the door should move

Getting ready

Simply following the steps in this recipe won't give us anything testable. Since the camera has no physics, the door will just sit there and we will have no way to push it. If you have made any of the recipes that use the BetterCharacterControl class, you will already have a suitable test bed for the door. If not, jMonkeyEngine's TestBetterCharacter example can also be used.

How to do it...

This recipe consists of two sections. The first deals with the actual creation of the door and the functionality to open it, in the following steps:

1. Create a new RigidBodyControl object called attachment with a small BoxCollisionShape. The CollisionShape should normally be placed inside the wall where the player can't run into it. It should have a mass of 0 to prevent it from being affected by gravity. We move it some distance away and add it to the physicsSpace instance, as shown in the following code snippet (the construction itself is not shown in the excerpt; the box size in the first line is an assumed small value):

    RigidBodyControl attachment = new RigidBodyControl(new BoxCollisionShape(new Vector3f(0.2f, 0.2f, 0.2f)), 0f);
    attachment.setPhysicsLocation(new Vector3f(-5f, 1.52f, 0f));
    bulletAppState.getPhysicsSpace().add(attachment);

2. Now, create a Geometry object called doorGeometry with a Box shape of dimensions suitable for a door, as follows:

    Geometry doorGeometry = new Geometry("Door", new Box(0.6f, 1.5f, 0.1f));

3. Similarly, create a RigidBodyControl instance with the same dimensions and a mass of 1; add it as a control to the doorGeometry object first, and then add it to the physicsSpace of bulletAppState.
The following code snippet shows you how to do this:

    RigidBodyControl doorPhysicsBody = new RigidBodyControl(new BoxCollisionShape(new Vector3f(.6f, 1.5f, .1f)), 1);
    bulletAppState.getPhysicsSpace().add(doorPhysicsBody);
    doorGeometry.addControl(doorPhysicsBody);

4. Now, we're going to connect the two with a HingeJoint. Create a new HingeJoint instance called joint, as follows:

    HingeJoint joint = new HingeJoint(attachment, doorPhysicsBody, new Vector3f(0f, 0f, 0f), new Vector3f(-1f, 0f, 0f), Vector3f.UNIT_Y, Vector3f.UNIT_Y);

5. Then, we set the limit for the rotation of the door and add the joint to physicsSpace as follows:

    joint.setLimit(-FastMath.HALF_PI - 0.1f, FastMath.HALF_PI + 0.1f);
    bulletAppState.getPhysicsSpace().add(joint);

Now we have a door that can be opened by walking into it. It is primitive but effective. Normally, you want doors in games to close after a while; here, once it is opened, it remains open. In order to implement an automatic closing mechanism, perform the following steps:

1. Create a new class called DoorCloseControl extending AbstractControl.
2. Add a HingeJoint field called joint, along with a setter for it, and a float variable called timeOpen.
3. In the controlUpdate method, we get the hinge angle from the HingeJoint and store it in a float variable called angle, as follows:

    float angle = joint.getHingeAngle();

4. If the angle deviates a bit from zero, we increase timeOpen using tpf. Otherwise, timeOpen is reset to 0, as shown in the following code snippet:

    if (angle > 0.1f || angle < -0.1f) timeOpen += tpf;
    else timeOpen = 0f;

5. If timeOpen is more than 5, we begin by checking whether the door is still open. If it is, we define a speed with the opposite sign of the angle and enable the door's motor to make it move back toward its closed position, as follows:

    if (timeOpen > 5) {
        float speed = angle > 0 ? -0.9f : 0.9f;
        joint.enableMotor(true, speed, 0.1f);
        spatial.getControl(RigidBodyControl.class).activate();
    }

6. If timeOpen is less than 5, we set the speed of the motor to 0:

    joint.enableMotor(true, 0, 1);

7. Now, we can create a new DoorCloseControl instance in the main class, attach it to the doorGeometry object, and give it the same joint we used previously in the recipe, as follows:

    DoorCloseControl doorControl = new DoorCloseControl();
    doorControl.setHingeJoint(joint);
    doorGeometry.addControl(doorControl);
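Pieced together, the whole control class might look like the following sketch. The fields, thresholds, and motor values are exactly those from the steps above; the import list and the empty controlRender body are our assumptions about the surrounding boilerplate:

    import com.jme3.bullet.control.RigidBodyControl;
    import com.jme3.bullet.joints.HingeJoint;
    import com.jme3.renderer.RenderManager;
    import com.jme3.renderer.ViewPort;
    import com.jme3.scene.control.AbstractControl;

    public class DoorCloseControl extends AbstractControl {

        private HingeJoint joint;
        private float timeOpen = 0f;

        public void setHingeJoint(HingeJoint joint) {
            this.joint = joint;
        }

        @Override
        protected void controlUpdate(float tpf) {
            // Track how long the door has deviated from its closed angle.
            float angle = joint.getHingeAngle();
            if (angle > 0.1f || angle < -0.1f) timeOpen += tpf;
            else timeOpen = 0f;

            if (timeOpen > 5) {
                // Drive the motor back toward angle 0 and wake the body up.
                float speed = angle > 0 ? -0.9f : 0.9f;
                joint.enableMotor(true, speed, 0.1f);
                spatial.getControl(RigidBodyControl.class).activate();
            } else {
                joint.enableMotor(true, 0, 1);
            }
        }

        @Override
        protected void controlRender(RenderManager rm, ViewPort vp) {
            // Nothing to render for this control.
        }
    }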
How it works...

The attachment RigidBodyControl has no mass and will thus not be affected by external forces such as gravity. This means it will stay in its place in the world. The door, however, has mass and would fall to the ground if the attachment didn't hold it up. The HingeJoint class connects the two and defines how they should move in relation to each other. Using Vector3f.UNIT_Y means the rotation will be around the y axis. We set the limit of the joint to a little more than half pi in each direction. This means it will open almost 100 degrees to either side, allowing the player to step through.

When we try this out, there may be some flickering as the camera passes through the door. To get around this, there are some tweaks that can be applied. We can change the collision shape of the player: making the collision shape bigger will result in the player hitting the wall before the camera gets close enough to clip through. This has to be done while considering other constraints in the physics world. You can also consider changing the near clip distance of the camera: decreasing it will allow things to get closer to the camera before they are clipped. This might have implications for the camera's projection. One thing that will not work is making the door thicker, since the triangles on the side closest to the player are the ones that are clipped through; making the door thicker will move them even closer to the player.

In DoorCloseControl, we consider the door to be open if hingeAngle deviates more than a little from 0. We don't compare with exactly 0 because we can't control the exact rotation of the joint; instead we use a rotational force to move it. This is what we do with joint.enableMotor. Once the door has been open for more than five seconds, we tell it to move in the opposite direction. When it's close to 0, we set the desired movement speed to 0. Simply turning off the motor in this case would cause the door to keep moving until it is stopped by an external force. Once we enable the motor, we also need to call activate() on the RigidBodyControl or it will not move.
Save for later

Introspecting Maya, Python, and PyMEL

Packt
23 Jul 2014
8 min read
(For more resources related to this topic, see here.)

Maya and Python are both excellent and elegant tools that can together achieve amazing results. And while it may be tempting to dive in and start wielding this power, it is prudent to understand some basic things first.

In this article, we will look at Python as a language, Maya as a program, and PyMEL as a framework. We will begin by briefly going over how to use the standard Python interpreter, the Maya Python interpreter, the Script Editor in Maya, and your Integrated Development Environment (IDE) or text editor in which you will do the majority of your development. Our goal for the article is to build a small library that can easily link us to documentation about Python and PyMEL objects. Building this library will illuminate how Maya, Python, and PyMEL are designed, and demonstrate why PyMEL is superior to maya.cmds. We will use the powerful technique of type introspection to teach us more about Maya's node-based design than any Hypergraph or static documentation can.

Creating your library

There are generally three different modes you will be developing in while programming Python in Maya: using the mayapy interpreter to evaluate short bits of code and explore ideas, using your Integrated Development Environment to work on the bulk of the code, and using Maya's Script Editor to help iterate and test your work. In this section, we'll start learning how to use all three tools to create a very simple library.

Using the interpreter

The first thing we must do is find your mayapy interpreter. It should be next to your Maya executable, named mayapy or mayapy.exe. It is a Python interpreter that can run Python code as if it were being run in a normal Maya session. When you launch it, it will start up the interpreter in interactive mode, which means you enter commands and it gives you results, interactively. The >>> and ... characters in code blocks indicate something you should enter at the interactive prompt; the code listing in the article and your prompt should look basically the same. In later listings, long output lines will be elided with ... to save on space.

Start a mayapy process by double-clicking it or calling it from the command line, and enter the following code:

    >>> print 'Hello, Maya!'
    Hello, Maya!
    >>> def hello():
    ...     return 'Hello, Maya!'
    ...
    >>> hello()
    'Hello, Maya!'

The first statement prints a string, which shows up under the prompting line. The second statement is a multiline function definition. The ... indicates the line is part of the preceding line. The blank line following the ... indicates the end of the function. For brevity, we will leave out empty ... lines in other code listings. After we define our hello function, we invoke it. It returns the string "Hello, Maya!", which is printed out beneath the invocation.

Finding a place for our library

Now, we need to find a place to put our library file. In order for Python to load the file as a module, it needs to be on some path where Python can find it. We can see all available paths by looking at the path list on the sys module:

    >>> import sys
    >>> for p in sys.path:
    ...     print p
    C:\Program Files\Autodesk\Maya2013\bin\python26.zip
    C:\Program Files\Autodesk\Maya2013\Python\DLLs
    C:\Program Files\Autodesk\Maya2013\Python\lib
    C:\Program Files\Autodesk\Maya2013\Python\lib\plat-win
    C:\Program Files\Autodesk\Maya2013\Python\lib\lib-tk
    C:\Program Files\Autodesk\Maya2013\bin
    C:\Program Files\Autodesk\Maya2013\Python
    C:\Program Files\Autodesk\Maya2013\Python\lib\site-packages

A number of paths will print out; I've replicated what's on my Windows system, but yours will almost definitely be different. Unfortunately, the default paths don't give us a place to put custom code. They are application installation directories, which we should not modify. Instead, we should be doing our coding outside of all the application installation directories. In fact, it's a good practice to avoid editing anything in the application installation directories entirely.

Choosing a development root

Let's decide where we will do our coding. To be concise, I'll choose C:\mayapybook\pylib to house all of our Python code, but it can be anywhere. You'll need to choose something appropriate if you are on OS X or Linux; we will use ~/mayapybook/pylib as our path on these systems, but I'll refer only to the Windows path except where more clarity is needed.

Create the development root folder, and inside of it create an empty file named minspect.py. Now, we need to get C:\mayapybook\pylib onto Python's sys.path so it can be imported. The easiest way to do this is to use the PYTHONPATH environment variable. From a Windows command line you can run the following to add the path, and ensure it worked:

    > set PYTHONPATH=%PYTHONPATH%;C:\mayapybook\pylib
    > mayapy.exe
    >>> import sys
    >>> 'C:\\mayapybook\\pylib' in sys.path
    True
    >>> import minspect
    >>> minspect
    <module 'minspect' from '...minspect.py'>

The following are the equivalent commands on OS X or Linux:

    $ export PYTHONPATH=$PYTHONPATH:~/mayapybook/pylib
    $ mayapy
    >>> import sys
    >>> '~/mayapybook/pylib' in sys.path
    True
    >>> import minspect
    >>> minspect
    <module 'minspect' from '.../minspect.py'>

There are actually a number of ways to get your development root onto Maya's path. The option presented here (using environment variables before starting Maya or mayapy) is just one of the more straightforward choices, and it works for mayapy as well as normal Maya. Calling sys.path.append('C:\\mayapybook\\pylib') inside your userSetup.py file, for example, would work for Maya but not mayapy (you would need to use maya.standalone.initialize to register user paths, as we will do later).

Using set or export to set environment variables only works for the current process and any new children. If you want it to work for unrelated processes, you may need to modify your global or user environment. Each OS is different, so you should refer to your operating system's documentation or a Google search. Some possibilities are setx from the Windows command line, editing /etc/environment in Linux, or editing /etc/launchd.conf on OS X. If you are in a studio environment and don't want to make changes to people's machines, you should consider an alternative such as using a script that sets up PYTHONPATH and then launches Maya, instead of launching the maya executable directly.
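For reference, the userSetup.py alternative mentioned above is only a couple of lines. This is a sketch; userSetup.py is Maya's standard startup hook, but remember the caveat that plain Maya runs it while mayapy does not:

    # userSetup.py -- Maya runs this automatically at startup (mayapy does not)
    import sys
    sys.path.append('C:\\mayapybook\\pylib')  # our development root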
Creating a function in your IDE

Now it is time to use our IDE to do some programming. We'll start by turning the path-printing code we wrote at the interactive prompt into a function in our file. Open C:\mayapybook\pylib\minspect.py in your IDE and type the following code:

    import sys

    def syspath():
        print 'sys.path:'
        for p in sys.path:
            print '    ' + p

Save the file, and bring up your mayapy interpreter. If you've closed down the one from the last session, make sure C:\mayapybook\pylib (or whatever you are using as your development root) is present on your sys.path, or the following code will not work! See the preceding section for making sure your development root is on your sys.path.

    >>> import minspect
    >>> reload(minspect)
    <module 'minspect' from '...minspect.py'>
    >>> minspect.syspath()
    sys.path:
        C:\Program Files\Autodesk\Maya2013\bin\python26.zip
        C:\Program Files\Autodesk\Maya2013\Python\DLLs
        C:\Program Files\Autodesk\Maya2013\Python\lib
        C:\Program Files\Autodesk\Maya2013\Python\lib\plat-win
        C:\Program Files\Autodesk\Maya2013\Python\lib\lib-tk
        C:\Program Files\Autodesk\Maya2013\bin
        C:\Program Files\Autodesk\Maya2013\Python
        C:\Program Files\Autodesk\Maya2013\Python\lib\site-packages

First, we import the minspect module. It may already be imported if this was an old mayapy session; that is fine, as importing an already-imported module is fast in Python and causes no side effects. We then use the reload function, which we will explore in the next section, to make sure the most up-to-date code is loaded. Finally, we call the syspath function, and its output is printed. Your actual paths will likely vary.

Reloading code changes

It is very common as you develop that you'll make changes to some code and want to immediately try out the changed code without restarting Maya or mayapy. You can do that with Python's built-in reload function. The reload function takes a module object and reloads it from disk so that the new code will be used. When we jump between our IDE and the interactive interpreter (or the Maya application) as we did earlier, we will usually reload the code to see the effect of our changes. I will usually write out the import and reload lines, but occasionally will only mention them in the text preceding the code.

Keep in mind that reload is not a magic bullet. When you are dealing with simple data and functions as we are here, it is usually fine. But as you start building class hierarchies, decorators, and other things that have dependencies or state, the situation can quickly get out of control. Always test your code in a fresh version of Maya before declaring it done, to be sure it does not have some lingering defect hidden by reloading. Though once you are a master Pythonista, you can ignore these warnings and figure out how to reload just about anything!
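To close the loop on the introduction's goal, a library that links us to documentation, here is a toy version of the idea in the same Python 2 style. The function name and the search URL scheme are our own choices for illustration; the article's real minspect library grows into something richer:

    def pyhelp(obj):
        """Print the fully qualified type name of obj and a
        documentation search URL for it (an illustrative sketch)."""
        typ = type(obj)
        print '%s.%s' % (typ.__module__, typ.__name__)
        print 'http://docs.python.org/2/search.html?q=' + typ.__name__

Calling a function like this on a plain list would print __builtin__.list and a search link for list; pointed at a PyMEL node, the same type introspection reveals PyMEL's class names and modules.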

Save for later

Customizing skin with GUISkin

Packt
22 Jul 2014
8 min read
(For more resources related to this topic, see here.)

Prepare for lift off

We will begin by creating a new project in Unity. Let's start our project by performing the following steps:

1. First, create a new project and name it MenuInRPG. Click on the Create new Project button.
2. Next, import the assets package by going to Assets | Import Package | Custom Package...; choose Chapter2Package.unityPackage, which we just downloaded, and then click on the Import button in the pop-up window. Wait until it's done, and you will see the MenuInRPGGame and SimplePlatform folders in the Project window.
3. Next, click on the arrow in front of the SimplePlatform folder to bring up the drop-down options, and you will see the Scenes folder and the SimplePlatform_C# and SimplePlatform_JS scenes.
4. Next, double-click on the SimplePlatform_C# (for a C# user) or SimplePlatform_JS (for a Unity JavaScript user) scene to open the scene that we will work on in this project. When you double-click on either of the SimplePlatform scenes, Unity will display a pop-up window asking whether you want to save the current scene or not. As we want to use the SimplePlatform scene, just click on the Don't Save button to open it.
5. Then, go to the MenuInRPGGame/Resources/UI folder and click on the first file to make sure that the Texture Type and Format fields are set correctly. Why do we set it up in this way? Because we want the UI graphics to look as close to the source image as possible. We set the Format field to Truecolor, which makes the image larger in size than Compress, but shows the true colors of the UI graphics.
6. Next, we need to set up the Layers and Tags configurations; for this, go to Edit | Project Settings | Tags and set them as follows:

   Tags:
   - Element 0: UI
   - Element 1: Key
   - Element 2: RestartButton
   - Element 3: Floor
   - Element 4: Wall
   - Element 5: Background
   - Element 6: Door

   Layers:
   - User Layer: Background
   - User Layer: Level
   - User Layer: UI

7. At last, we will save this scene in the MenuInRPGGame/Scenes folder and name it MenuInRPG, by going to File | Save Scene as... and then saving it.

Engage thrusters

Now we are ready to create a GUI skin; for this, perform the following steps:

1. Let's create a new GUISkin object by going to Assets | Create | GUISkin; we will see New GUISkin in our Project window. Name the GUISkin object MenuSkin.
2. Then, click on MenuSkin and go to its Inspector window. You will see many properties here, but don't be afraid, because this is the main key to creating custom graphics for our UI:
   - Font is the base font for the GUI skin.
   - From Box to Scroll View, each property is a GUIStyle, which is used for creating our custom UI.
   - The Custom Styles property is an array of GUIStyle elements that we can set up to apply extra styles.
   - Settings holds the setups for the entire GUI.
3. Next, we will set up a new font style for our menu UI: go to the Font line in the Inspector view, click the circle icon, and select the Federation Kalin font. Now you have set up the base font for the GUISkin.
4. Next, click on the arrow in front of the Box line to bring up a drop-down list.
We will see all of this style's properties. For more information about these properties, visit http://unity3d.com/support/documentation/Components/class-GUISkin.html.

Name is basically the name of this style, which by default is box (the default style of GUI.Box). Next, we will assign our custom UI graphics to this GUISkin:

1. Click on the arrow in front of Normal to bring up the drop-down list, and you will see two parameters: Background and Text Color. Click on the circle icon on the right-hand side of the Background line to bring up the Select Texture2D window and choose the boxNormal texture, or drag the boxNormal texture from the MenuInRPG/Resources/UI folder and drop it into the Background slot. We can also find the texture by typing boxNormal into the search bar of the Select Texture2D window or the Project view.
2. Then, under the Text Color line, we leave the color as the default (because we don't need any text to be shown in this style) and repeat the previous step for On Normal, again using the boxNormal texture.
3. Next, click on the arrow in front of Active, and under Background choose the boxActive texture; repeat this step for On Active.
4. Then, go to each property in the Box style and set the following parameters:

   - Border: Left: 14, Right: 14, Top: 14, Bottom: 14
   - Padding: Left: 6, Right: 6, Top: 6, Bottom: 6

   For the other properties of this style, we leave the default values.

5. Next, we go to the following properties in the MenuSkin inspector and set them as follows:

   Label:
   - Normal | Text Color: R: 27, G: 95, B: 104, A: 255

   Window:
   - Normal | Background: myWindow
   - On Normal | Background: myWindow
   - Border: Left: 27, Right: 27, Top: 55, Bottom: 96
   - Padding: Left: 30, Right: 30, Top: 60, Bottom: 30

   Horizontal Scrollbar:
   - Normal | Background: horScrollBar
   - Border: Left: 4, Right: 4, Top: 4, Bottom: 4

   Horizontal Scrollbar Thumb:
   - Normal | Background: horScrollBarThumbNormal
   - Hover | Background: horScrollBarThumbHover
   - Border: Left: 4, Right: 4, Top: 4, Bottom: 4

   Horizontal Scrollbar Left Button:
   - Normal | Background: arrowLNormal
   - Hover | Background: arrowLHover
   - Fixed Width: 14
   - Fixed Height: 15

   Horizontal Scrollbar Right Button:
   - Normal | Background: arrowRNormal
   - Hover | Background: arrowRHover
   - Fixed Width: 14
   - Fixed Height: 15

   Vertical Scrollbar:
   - Normal | Background: verScrollBar
   - Border: Left: 4, Right: 4, Top: 4, Bottom: 4
   - Padding: Left: 0, Right: 0, Top: 0, Bottom: 0

   Vertical Scrollbar Thumb:
   - Normal | Background: verScrollBarThumbNormal
   - Hover | Background: verScrollBarThumbHover
   - Border: Left: 4, Right: 4, Top: 4, Bottom: 4

   Vertical Scrollbar Up Button:
   - Normal | Background: arrowUNormal
   - Hover | Background: arrowUHover
   - Fixed Width: 16
   - Fixed Height: 14

   Vertical Scrollbar Down Button:
   - Normal | Background: arrowDNormal
   - Hover | Background: arrowDHover
   - Fixed Width: 16
   - Fixed Height: 14

We have finished setting up the default styles. Now we will go to the Custom Styles property and create our custom GUIStyle to use for this menu: go to Custom Styles and, under Size, change the value to 6. We will then see Element 0 to Element 5. Next, we go to the first element, Element 0; under Name, type Tab Button, and we will see Element 0 change to Tab Button.
Set it as follows:

Tab Button (or Element 0):
- Name: Tab Button
- Normal | Background: tabButtonNormal; Text Color: R: 27, G: 62, B: 67, A: 255
- Hover | Background: tabButtonHover; Text Color: R: 211, G: 166, B: 9, A: 255
- Active | Background: tabButtonActive; Text Color: R: 27, G: 62, B: 67, A: 255
- On Normal | Background: tabButtonActive; Text Color: R: 27, G: 62, B: 67, A: 255
- Border: Left: 12, Right: 12, Top: 12, Bottom: 4
- Padding: Left: 6, Right: 6, Top: 6, Bottom: 4
- Font Size: 14
- Alignment: Middle Center
- Fixed Height: 31

For each Text Color value, we can also use the eyedropper tool next to the color box to copy the same color.

We have finished our first style, but we still have five styles left, so let's carry on with Element 1 using the following settings:

Exit Button (or Element 1):
- Name: Exit Button
- Normal | Background: buttonCloseNormal
- Hover | Background: buttonCloseHover
- Fixed Width: 26
- Fixed Height: 22

The following settings create the style for Element 2:

Text Item (or Element 2):
- Name: Text Item
- Normal | Text Color: R: 27, G: 95, B: 104, A: 255
- Alignment: Middle Left
- Word Wrap: checked

For Element 3, use these settings:

Text Amount (or Element 3):
- Name: Text Amount
- Normal | Text Color: R: 27, G: 95, B: 104, A: 255
- Alignment: Middle Right
- Word Wrap: checked

The following settings create Selected Item:

Selected Item (or Element 4):
- Name: Selected Item
- Normal | Text Color: R: 27, G: 95, B: 104, A: 255
- Hover | Background: itemSelectHover; Text Color: R: 27, G: 95, B: 104, A: 255
- Active | Background: itemSelectHover; Text Color: R: 27, G: 95, B: 104, A: 255
- On Normal | Background: itemSelectActive; Text Color: R: 27, G: 95, B: 104, A: 255
- Border: Left: 6, Right: 6, Top: 6, Bottom: 6
- Margin: Left: 2, Right: 2, Top: 2, Bottom: 2
- Padding: Left: 4, Right: 4, Top: 4, Bottom: 4
- Alignment: Middle Center
- Word Wrap: checked

To create the Disabled Click style, use the following settings:

Disabled Click (or Element 5):
- Name: Disabled Click
- Normal | Background: itemSelectNormal; Text Color: R: 27, G: 95, B: 104, A: 255
- Border: Left: 6, Right: 6, Top: 6, Bottom: 6
- Margin: Left: 2, Right: 2, Top: 2, Bottom: 2
- Padding: Left: 4, Right: 4, Top: 4, Bottom: 4
- Alignment: Middle Center
- Word Wrap: checked

And now, we have finished this step.
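To see what all of this configuration buys us at runtime, here is a minimal C# sketch of how such a skin is used from an OnGUI call. The component and the strings it draws are our own example (only the style names match the ones defined above), but GUI.skin and GUISkin.GetStyle are the standard Unity calls for this:

    using UnityEngine;

    public class MenuSkinExample : MonoBehaviour
    {
        public GUISkin menuSkin; // assign the MenuSkin asset in the Inspector

        void OnGUI()
        {
            // Default styles (Box, Window, scrollbars...) now come from MenuSkin.
            GUI.skin = menuSkin;

            // Custom styles are looked up by the names typed into Custom Styles.
            if (GUILayout.Button("Items", menuSkin.GetStyle("Tab Button")))
                Debug.Log("Items tab pressed");

            GUILayout.Label("Health Potion", menuSkin.GetStyle("Text Item"));
            GUILayout.Label("x 3", menuSkin.GetStyle("Text Amount"));
        }
    }

Because the lookup is by name, a typo in a Custom Styles name only surfaces at runtime, which is a good reason to keep the style names short and consistent.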