
How-To Tutorials - Game Development


Using Google's offerings

Packt
22 Dec 2014
8 min read
In this article by Juwal Bose, author of the book LibGDX Game Development Essentials, we will learn how to use the features that Google has to offer. Google provides AdMob (Google Mobile Ads) to display ads and monetize our game. Google Analytics can be used to track basic app data, and Google Play services can be used to implement and track global leaderboards and achievements. Before we start implementing all of these, we need to ensure the following points:

- Use the SDK manager to update to the latest Android SDK tools
- Download and install Google Play services via the SDK manager

Interfacing platform-specific code

This chapter deals with an Android project, and much of what we will do is specific to that platform. We need a way to detect the currently running platform to decide whether or not to invoke these features. Hence, we add a new public Boolean variable, isAndroid, to the ThrustCopter class, which is false by default. We can detect the ApplicationType using the following code in the create method:

```java
switch (Gdx.app.getType()) {
case Android:
    isAndroid = true;
    break;
case Desktop:
    break;
case WebGL:
    break;
case iOS:
    break;
default:
}
```

Now, we can check whether the game is running on an Android device using the following code:

```java
if (game.isAndroid) {
    ...
}
```

From the core project, we need to call the Android main class to invoke Android-specific code. We enable this using a new interface created in the core project: IActivityRequestHandler. Then, we make sure our AndroidLauncher main class implements this interface as follows:

```java
public class AndroidLauncher extends AndroidApplication implements IActivityRequestHandler {
    ...
    initialize(new ThrustCopter(this), config);
```

Note that we are passing this as a parameter to ThrustCopter, which provides a reference to the implemented interface. As this is Android-specific, the start classes of the other platforms can pass null as the argument, since we will only use this parameter on the Android platform. In the ThrustCopter class, we save the reference under the name handler, as shown in the following code:

```java
public ThrustCopter(IActivityRequestHandler IARH) {
    handler = IARH;
    ...
```

Visit https://github.com/libgdx/libgdx/wiki/Interfacing-with-platform-specific-code for more information.
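The article uses IActivityRequestHandler without ever showing its declaration. Based only on the two methods invoked later in the text (setTrackerScreenName and showAds), a minimal sketch of the core-project interface might look like this; any member beyond those two methods would be an assumption:

```java
// Hypothetical sketch of the core-project interface; the article only
// names setTrackerScreenName and showAds, so anything else is assumed.
public interface IActivityRequestHandler {
    // Reports the current screen name to Google Analytics
    void setTrackerScreenName(String path);
    // Shows or hides the AdMob banner overlay
    void showAds(boolean show);
}
```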
Implementing Google Analytics tracking

The default implementation of Google Analytics automatically provides the following information about your app: the number of users and sessions, session duration, operating systems, device models, and geography. To start off, we need to create a Google Analytics property and app view. Start using Google Analytics by accessing it at https://www.google.com/analytics/web/?hl=en. Create a new account, select Mobile app, and fill in the details. Once all the details are entered, click on Get Tracking ID to generate a new tracking ID; the tracking ID will be unique for each account. The Google Analytics version may change in the future, which means the way it is integrated may also change. Check the Google developers portal for details at https://developers.google.com/analytics/devguides/collection/android/v4/.

The AndroidManifest file needs the following permissions, and the minSdkVersion attribute should be set to 9:

```xml
<uses-sdk android:minSdkVersion="9" android:targetSdkVersion="23" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
```

Copy the library project at <android-sdk>/extras/google/google_play_services/libproject/google-play-services_lib/ to the location where you maintain your Android app projects. Import the library project into your Eclipse workspace: click on File, select Import, select Android, click on Existing Android Code Into Workspace, and browse to the copy of the library project to import it. This step is important for all the Google-related services that we are about to integrate. We need to refer to this library project from our Thrust Copter-android project. Right-click on the Thrust Copter-android project and select Properties. Select the Android section, which will display a blank Library section to the right. Click on Add... to select our library project and add it as a reference.

Adding tracker configuration files

We can provide configuration files to create Analytics trackers. Usually, we need only one tracker, commonly called the global tracker, to report the basic analytics data. We add the global_tracker.xml file to the res/xml folder in the Android project; copy this file from the source provided. Update the ga_trackingId section with the new tracking ID you got when creating your application entry on the Google Analytics site. The screenName section consists of the different scenes that will be tracked. We added the MenuScene and ThrustCopterScene classes to the screenName section. This needs to be changed for each game, as follows:

```xml
<screenName name="com.csharks.thrustcopter.ThrustCopterScene">Thrust Copter Game</screenName>
<screenName name="com.csharks.thrustcopter.MenuScene">Thrust Copter Menu</screenName>
```

Once the tracker XML file is in place, add the following element to the application part of the Android manifest:

```xml
<meta-data
    android:name="com.google.android.gms.analytics.globalConfigResource"
    android:resource="@xml/global_tracker" />
```

We need to access the tracker and report activity start, stop, and scene changes. First, declare the tracker in the AndroidLauncher class:

```java
Tracker globalTracker;
```

Then, add the following code within the onCreate method:

```java
GoogleAnalytics analytics = GoogleAnalytics.getInstance(this);
globalTracker = analytics.newTracker(R.xml.global_tracker);
```

Now, we move on to reporting. We added a new function to the IActivityRequestHandler interface called setTrackerScreenName(String path), which needs to be implemented as well:

```java
@Override
protected void onStart() {
    super.onStart();
    GoogleAnalytics.getInstance(this).reportActivityStart(this);
}

@Override
public void onStop() {
    super.onStop();
    GoogleAnalytics.getInstance(this).reportActivityStop(this);
}

@Override
public void setTrackerScreenName(String path) {
    globalTracker.setScreenName(path);
    globalTracker.send(new HitBuilders.AppViewBuilder().build());
}
```

We also need to report screen names when we switch scenes. We do this within the constructors of MenuScene and ThrustCopterScene, as follows:

```java
if (game.isAndroid) {
    game.handler.setTrackerScreenName("com.csharks.thrustcopter.MenuScene");
}
```

It's time to test whether everything is working.
Connect your Android device and run the Android project on it. You should see the analytics reporting show up in logcat. Once we have significant data, we can access the Google Analytics web interface to analyze how the game is being played by the masses.

Adding Google Mobile Ads

Legacy AdMob is being renamed Google Mobile Ads, which is now linked with Google AdSense. First, we need to set up AdMob to serve ads by visiting https://www.google.com/ads/admob/index.html. Click on the Monetize section and use the Add your app manually option to set up a new banner ad. This will allot a new AdMob ad unit ID. The Ads API is also part of the Google Play services platform that we have already integrated into our Android project. We have already added the necessary permissions to AndroidManifest, but we need to add the following as well:

```xml
<!-- This meta-data tag is required to use Google Play Services. -->
<meta-data android:name="com.google.android.gms.version"
    android:value="@integer/google_play_services_version" />
<!-- Include the AdActivity configChanges and theme. -->
<activity android:name="com.google.android.gms.ads.AdActivity"
    android:configChanges="keyboard|keyboardHidden|orientation|screenLayout|uiMode|screenSize|smallestScreenSize"
    android:theme="@android:style/Theme.Translucent" />
```

AdMob needs its own view, whereas LibGDX creates its own view when initializing. A typical way for the two to coexist is to keep our game view in fullscreen with the ad view overlaid on top. We will use a RelativeLayout to arrange both views, and we need to replace the initialize method with the initializeForView method, which lacks some functionality that we must specify manually. The onCreate method of the AndroidLauncher class has the following new code:

```java
requestWindowFeature(Window.FEATURE_NO_TITLE);
getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
    WindowManager.LayoutParams.FLAG_FULLSCREEN);
getWindow().clearFlags(WindowManager.LayoutParams.FLAG_FORCE_NOT_FULLSCREEN);

RelativeLayout layout = new RelativeLayout(this);
View gameView = initializeForView(new ThrustCopter(this), config);
layout.addView(gameView);

// Add the AdMob view
RelativeLayout.LayoutParams adParams =
    new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.WRAP_CONTENT,
        RelativeLayout.LayoutParams.WRAP_CONTENT);
adParams.addRule(RelativeLayout.ALIGN_PARENT_TOP);
adParams.addRule(RelativeLayout.CENTER_HORIZONTAL);

adView = new AdView(this);
adView.setAdSize(AdSize.BANNER);
adView.setAdUnitId(AD_UNIT_ID);
startAdvertising();
layout.addView(adView, adParams);
setContentView(layout);
```

The startAdvertising function is as follows:

```java
private void startAdvertising() {
    AdRequest adRequest = new AdRequest.Builder().build();
    adView.loadAd(adRequest);
}
```

The IActivityRequestHandler interface has a new method, showAds(boolean show), that toggles the visibility of the adView instance. The method is implemented as follows:

```java
@Override
public void showAds(boolean show) {
    handler.sendEmptyMessage(show ? SHOW_ADS : HIDE_ADS);
}
```

Here, handler, which is used to access adView from the thread that created it, is initialized as follows:

```java
private final int SHOW_ADS = 1;
private final int HIDE_ADS = 0;

protected Handler handler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
            case SHOW_ADS: {
                adView.setVisibility(View.VISIBLE);
                break;
            }
            case HIDE_ADS: {
                adView.setVisibility(View.GONE);
                break;
            }
        }
    }
};
```

For more information, visit https://github.com/libgdx/libgdx/wiki/Admob-in-libgdx. Alternatively, runOnUiThread can be used instead of the handleMessage approach, as sketched below.
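A minimal sketch of that runOnUiThread variant, reusing the adView field and showAds signature from the article; the article confirms only that runOnUiThread can replace the Handler, so the body shown here is an assumption:

```java
// Hedged sketch: replaces the Handler with runOnUiThread.
// adView and the showAds signature come from the article; the rest is assumed.
@Override
public void showAds(final boolean show) {
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            // View visibility must be changed on the UI thread
            adView.setVisibility(show ? View.VISIBLE : View.GONE);
        }
    });
}
```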
Now, we can show the ads in the menu and hide them when we switch to the game.

Summary

In this article, you learned how to handle platform-specific code, and how to use Google Play services to integrate AdMob and Analytics.

Resources for Article:

Further resources on this subject:
- Scaling friendly font rendering with distance fields [article]
- Sparrow iOS Game Framework - The Basics of Our Game [article]
- Animations in Cocos2d-x [article]


Starting Ogre 3D

Packt
25 Nov 2010
7 min read
OGRE 3D 1.7 Beginner's Guide: create real-time 3D applications using OGRE 3D from scratch.

Introduction

Up until now, the ExampleApplication class has started and initialized Ogre 3D for us; now we are going to do it ourselves.

Time for action – starting Ogre 3D

This time we are working on a blank sheet.

1. Start with an empty code file, include Ogre3D.h, and create an empty main function:

```cpp
#include "Ogre\Ogre.h"

int main(void)
{
    return 0;
}
```

2. Create an instance of the Ogre 3D Root class; this class needs the name of the plugin configuration file:

```cpp
Ogre::Root* root = new Ogre::Root("plugins_d.cfg");
```

3. If the config dialog can't be shown or the user cancels it, close the application:

```cpp
if (!root->showConfigDialog())
{
    return -1;
}
```

4. Create a render window:

```cpp
Ogre::RenderWindow* window = root->initialise(true, "Ogre3D Beginners Guide");
```

5. Next, create a new scene manager:

```cpp
Ogre::SceneManager* sceneManager = root->createSceneManager(Ogre::ST_GENERIC);
```

6. Create a camera and name it Camera:

```cpp
Ogre::Camera* camera = sceneManager->createCamera("Camera");
camera->setPosition(Ogre::Vector3(0, 0, 50));
camera->lookAt(Ogre::Vector3(0, 0, 0));
camera->setNearClipDistance(5);
```

7. With this camera, create a viewport and set the background color to black:

```cpp
Ogre::Viewport* viewport = window->addViewport(camera);
viewport->setBackgroundColour(Ogre::ColourValue(0.0, 0.0, 0.0));
```

8. Now, use this viewport to set the aspect ratio of the camera:

```cpp
camera->setAspectRatio(Ogre::Real(viewport->getActualWidth()) /
    Ogre::Real(viewport->getActualHeight()));
```

9. Finally, tell the root to start rendering:

```cpp
root->startRendering();
```

10. Compile and run the application; you should see the normal config dialog and then a black window. This window can't be closed by pressing Escape because we haven't added key handling yet. You can close the application by pressing CTRL+C in the console from which the application was started.

What just happened?

We created our first Ogre 3D application without the help of ExampleApplication. Because we aren't using ExampleApplication any longer, we had to include Ogre3D.h, which was previously included by ExampleApplication.h. Before we can do anything with Ogre 3D, we need a root instance. The root class manages the higher levels of Ogre 3D, creates and saves the factories used for creating other objects, loads and unloads the needed plugins, and a lot more. We gave the root instance one parameter: the name of the file that defines which plugins to load. The following is the complete signature of the constructor:

```cpp
Root(const String& pluginFileName = "plugins.cfg",
     const String& configFileName = "ogre.cfg",
     const String& logFileName = "Ogre.log")
```

Besides the name of the plugin configuration file, the constructor also takes the names of the Ogre configuration file and the log file. We needed to change the first filename because we are using the debug version of our application and therefore want to load the debug plugins. The default value is plugins.cfg, which is correct for the release folder of the Ogre 3D SDK, but our application runs in the debug folder, where the file is named plugins_d.cfg.
ogre.cfg contains the settings for starting the Ogre application that we selected in the config dialog. This saves the user from making the same changes every time he/she starts our application: with this file, Ogre 3D can remember the choices and use them as defaults for the next start. The file is created if it doesn't exist, so we don't append an _d to the filename and can use the default; the same is true for the log file.

Using the root instance, we let Ogre 3D show the config dialog to the user in step 3. When the user cancels the dialog or anything goes wrong, we return -1, and with this the application closes. Otherwise, we create a new render window and a new scene manager in steps 4 and 5. Using the scene manager, we create a camera, and with the camera we create the viewport; then, using the viewport, we calculate the aspect ratio for the camera. After creating all the requirements, we tell the root instance to start rendering, so our result becomes visible. (The book shows a diagram here of which object was needed to create each other object.)

Adding resources

We have now created our first Ogre 3D application, which doesn't need ExampleApplication. But one important thing is missing: we haven't loaded and rendered a model yet.

Time for action – loading the Sinbad mesh

We have our application; now let's add a model.

1. After setting the aspect ratio and before starting the rendering, add the zip archive containing the Sinbad model to our resources:

```cpp
Ogre::ResourceGroupManager::getSingleton().addResourceLocation(
    "../../Media/packs/Sinbad.zip", "Zip");
```

2. We don't want to index more resources at the moment, so index all added resources now:

```cpp
Ogre::ResourceGroupManager::getSingleton().initialiseAllResourceGroups();
```

3. Now create an instance of the Sinbad mesh and add it to the scene:

```cpp
Ogre::Entity* ent = sceneManager->createEntity("Sinbad.mesh");
sceneManager->getRootSceneNode()->attachObject(ent);
```

4. Compile and run the application; you should see Sinbad in the middle of the screen.

What just happened?

We used the ResourceGroupManager to index the zip archive containing the Sinbad mesh and texture files, and once that was done, we told it to load the data with the createEntity() call in step 3.

Using resources.cfg

Adding a new line of code for each zip archive or folder we want to load is a tedious task, and we should try to avoid it. The ExampleApplication used a configuration file called resources.cfg in which each folder or zip archive was listed, and all the content was loaded using this file. Let's replicate this behavior.

Time for action – using resources.cfg to load our models

Using our previous application, we are now going to parse resources.cfg.
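Before parsing it, it helps to recall the file's shape. A resources.cfg is a plain INI-style file: each bracketed section names a resource group, each key is a location type (Zip or FileSystem), and each value is a path. The section name and paths below are illustrative assumptions, not copied from the book's file:

```ini
[General]
FileSystem=../../Media/models
Zip=../../Media/packs/Sinbad.zip
```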
1. Replace the loading of the zip archive with an instance of a config file pointing at resources_d.cfg:

```cpp
Ogre::ConfigFile cf;
cf.load("resources_d.cfg");
```

2. First, get the iterator, which goes over each section of the config file:

```cpp
Ogre::ConfigFile::SectionIterator sectionIter = cf.getSectionIterator();
```

3. Define three strings to save the data we are going to extract from the config file, and iterate over each section:

```cpp
Ogre::String sectionName, typeName, dataname;
while (sectionIter.hasMoreElements())
{
```

4. Get the name of the section:

```cpp
    sectionName = sectionIter.peekNextKey();
```

5. Get the settings contained in the section and, at the same time, advance the section iterator; also create an iterator for the settings themselves:

```cpp
    Ogre::ConfigFile::SettingsMultiMap *settings = sectionIter.getNext();
    Ogre::ConfigFile::SettingsMultiMap::iterator i;
```

6. Iterate over each setting in the section:

```cpp
    for (i = settings->begin(); i != settings->end(); ++i)
    {
```

7. Use the iterator to get the name and the type of the resources:

```cpp
        typeName = i->first;
        dataname = i->second;
```

8. Use the resource name, type, and section name to add it to the resource index:

```cpp
        Ogre::ResourceGroupManager::getSingleton().addResourceLocation(
            dataname, typeName, sectionName);
```

9. Compile and run the application, and you should see the same scene as before.
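For reference, the fragments above assemble into one loop; this is the same code joined together, with the initialiseAllResourceGroups() call from the previous section repeated at the end, since the added locations still need indexing:

```cpp
// The parsing steps above, assembled into a single loop
Ogre::ConfigFile cf;
cf.load("resources_d.cfg");

Ogre::ConfigFile::SectionIterator sectionIter = cf.getSectionIterator();
Ogre::String sectionName, typeName, dataname;
while (sectionIter.hasMoreElements())
{
    sectionName = sectionIter.peekNextKey();
    Ogre::ConfigFile::SettingsMultiMap* settings = sectionIter.getNext();
    Ogre::ConfigFile::SettingsMultiMap::iterator i;
    for (i = settings->begin(); i != settings->end(); ++i)
    {
        typeName = i->first;
        dataname = i->second;
        Ogre::ResourceGroupManager::getSingleton().addResourceLocation(
            dataname, typeName, sectionName);
    }
}
// Index the added locations, as in the previous section
Ogre::ResourceGroupManager::getSingleton().initialiseAllResourceGroups();
```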


Adding Finesse to Your Game

Packt
21 Oct 2013
7 min read
Adding a background

There is still a lot of black in the background, and as the game has a space theme, let's add some stars. We'll do this by adding a sphere that we can map the stars texture onto, so click on Game Object | Create Other | Sphere, and position it at X: 0, Y: 0, Z: 0. We also need to set the size to X: 100, Y: 100, Z: 100. Drag the stars texture, located at Textures/stars, onto the new sphere that we created in our scene. That was simple, wasn't it? Unity has added the texture to a material that appears on the outside of our sphere, while we need it to show on the inside. To fix this, we are going to reverse the triangle order, flip the normal map, and flip the UV map with C# code. Right-click on the Scripts folder, click on Create, and select C# Script. A script will appear in the Scripts folder; it should already have focus and be asking you to type a name for it. Call it SkyDome. Double-click on the script in Unity and it will open in MonoDevelop. Edit the Start method, as shown in the following code:

```csharp
void Start () {
    // Get a reference to the mesh
    MeshFilter baseMeshFilter = transform.GetComponent("MeshFilter") as MeshFilter;
    Mesh mesh = baseMeshFilter.mesh;

    // Reverse triangle winding
    int[] triangles = mesh.triangles;
    int numpolies = triangles.Length / 3;
    for (int t = 0; t < numpolies; t++)
    {
        int tribuffer = triangles[t * 3];
        triangles[t * 3] = triangles[(t * 3) + 2];
        triangles[(t * 3) + 2] = tribuffer;
    }

    // Readjust uv map for inner sphere projection
    Vector2[] uvs = mesh.uv;
    for (int uvnum = 0; uvnum < uvs.Length; uvnum++)
    {
        uvs[uvnum] = new Vector2(1 - uvs[uvnum].x, uvs[uvnum].y);
    }

    // Readjust normals for inner sphere projection
    Vector3[] norms = mesh.normals;
    for (int normalsnum = 0; normalsnum < norms.Length; normalsnum++)
    {
        norms[normalsnum] = -norms[normalsnum];
    }

    // Copy local built-in arrays back to the mesh
    mesh.uv = uvs;
    mesh.triangles = triangles;
    mesh.normals = norms;
}
```

The breakdown of the code is as follows:

- Get the mesh of the sphere.
- Reverse the way the triangles are drawn. Each triangle has three indexes in the array; this script just swaps the first and last index of each triangle.
- Adjust the X position of the UV map coordinates.
- Flip the normals of the sphere.
- Apply the new values of the reversed triangles, adjusted UV coordinates, and flipped normals to the sphere.

Click and drag this script onto your sphere GameObject and test your scene. You should now see something like the following screenshot:

Adding extra levels

Now that the game is looking better, we can add some more content to it. Luckily, the jagged array we created earlier easily supports adding more levels. Levels can be any size, even with variable column heights per row. Double-click on the Sokoban script in the Project panel and switch over to MonoDevelop.
Find the levels array and modify it to be as follows:

```csharp
// Create the top array, this will store the level arrays
int[][][] levels =
{
    // Create the level array, this will store the row arrays
    new int [][] {
        // Create all row arrays, these will store column data
        new int[] {1,1,1,1,1,1,1,1},
        new int[] {1,0,0,1,0,0,0,1},
        new int[] {1,0,3,3,0,3,0,1},
        new int[] {1,0,0,1,0,1,0,1},
        new int[] {1,0,0,1,3,1,0,1},
        new int[] {1,0,0,2,2,2,2,1},
        new int[] {1,0,0,1,0,4,1,1},
        new int[] {1,1,1,1,1,1,1,1}
    },
    // Create a new level
    new int [][] {
        new int[] {1,1,1,1,0,0,0,0},
        new int[] {1,0,0,1,1,1,1,1},
        new int[] {1,0,2,0,0,3,0,1},
        new int[] {1,0,3,0,0,2,4,1},
        new int[] {1,1,1,0,0,1,1,1},
        new int[] {0,0,1,1,1,1,0,0}
    },
    // Create a new level
    new int [][] {
        new int[] {1,1,1,1,1,1,1,1},
        new int[] {1,4,0,1,2,2,2,1},
        new int[] {1,0,0,3,3,0,0,1},
        new int[] {1,0,3,0,0,0,1,1},
        new int[] {1,0,0,1,1,1,1},
        new int[] {1,0,0,1},
        new int[] {1,1,1,1}
    }
};
```

The preceding code gives us two extra levels, bringing the total to three. The layout of the arrays is still very visual, and you can see the level layout just by looking at them. Our BuildLevel, CheckIfPlayerIsAttempingToMove, and MovePlayer methods only work on the first level at the moment; let's update them to always use the player's current level. We'll have to store which level the player is currently on and use that level at all times, incrementing the value when a level is finished. As we want this value to persist between plays, we'll use the PlayerPrefs object that Unity provides for saving player data. Before we get the value, we need to check that it is actually set and exists; otherwise, we could see some odd results. Start by declaring the variable at the top of the Sokoban script as follows:

```csharp
int currentLevel;
```

Next, we'll get the value of the current level from the PlayerPrefs object and store it in the Awake method. Add the following code to the top of your Awake method:

```csharp
if (PlayerPrefs.HasKey("currentLevel")) {
    currentLevel = PlayerPrefs.GetInt("currentLevel");
} else {
    currentLevel = 0;
    PlayerPrefs.SetInt("currentLevel", currentLevel);
}
```

Here we check whether we already have a value stored in the PlayerPrefs object; if we do, we use it, and if we don't, we set currentLevel to 0 and save it to the PlayerPrefs object. To fix the methods mentioned earlier, click on Search | Replace. A new window will appear. Type levels[0] in the top box and levels[currentLevel] in the bottom one, and then click on All.

Level complete detection

It's all well and good having three levels, but without a mechanism to move between them they are useless. We are going to add a check to see whether the player has finished a level; if they have, we increment the level counter and load the next level in the array. We only need to do the check at the end of every move; doing it every frame would be redundant. We'll write the following method first and then explain it:

```csharp
// If this method returns true then we have finished the level
bool haveFinishedLevel () {
    // Initialise the counter for how many crates are on goal tiles
    int cratesOnGoalTiles = 0;
    // Loop through all the rows in the current level
    for (int i = 0; i < levels[currentLevel].Length; i++) {
        // Get the tile ID for the column and pass it to the switch statement
        for (int j = 0; j < levels[currentLevel][i].Length; j++) {
            switch (levels[currentLevel][i][j]) {
                case 5:
                    // Do we have a match for a crate on goal tile ID?
                    // If so, increment the counter
                    cratesOnGoalTiles++;
                    break;
                default:
                    break;
            }
        }
    }
    // Check if the cratesOnGoalTiles variable is the same as the
    // amountOfCrates we set when building the level
    if (amountOfCrates == cratesOnGoalTiles) {
        return true;
    } else {
        return false;
    }
}
```
In the BuildLevel method, we increment the amountOfCrates variable whenever we instantiate a crate. We can therefore check whether the number of crates on goal tiles is the same as amountOfCrates; if it is, we know we have finished the current level. The for loops iterate through the current level's rows and columns, and we know that 5 in the array represents a crate on a goal tile. The method returns a Boolean based on whether we have finished the level.

Now let's add the call to the method. The logical place is inside the MovePlayer method, so go ahead and add a call just after the pCol += tCol; statement. As the method returns true or false, we'll use it in an if statement, as shown in the following code:

```csharp
// Check if we have finished the level
if (haveFinishedLevel()) {
    Debug.Log("Finished");
}
```

The Debug.Log method will do for now; let's check whether it's working. The solution for level one is on YouTube at http://www.youtube.com/watch?v=K5SMwAJrQM8&hd=1. Click on the play icon at the top-middle of the Unity screen and copy the sequence of moves in the video (or solve it yourself); when all the crates are on the goal tiles, you'll see Finished in the Console panel.
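The article stops at logging. Purely as a hypothetical sketch of the next step, here is what advancing to the next level might look like; only currentLevel, levels, and PlayerPrefs come from the article, while the wrap-around rule and the BuildLevel call are assumptions:

```csharp
// Hypothetical continuation: advance to the next level when finished.
if (haveFinishedLevel()) {
    // Move to the next level, wrapping back to the first when done (assumed rule)
    currentLevel = (currentLevel + 1) % levels.Length;
    // Persist progress between plays, as in the Awake method
    PlayerPrefs.SetInt("currentLevel", currentLevel);
    PlayerPrefs.Save();
    // Rebuild the scene for the new level (BuildLevel's exact signature is assumed)
    BuildLevel();
}
```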
Summary

The game now has some structure in the form of levels that you can complete, and it is easily expandable. If you wanted to take a break from the article, now would be a great time to create and add some levels to the game, and maybe add some extra sound effects. All this hard work is for nothing if you can't make any money though, isn't it?

Resources for Article:

Further resources on this subject:
- Introduction to Game Development Using Unity 3D [Article]
- Flash Game Development: Making of Astro-PANIC! [Article]
- Unity Game Development: Interactions (Part 1) [Article]


Animation features in Unity 5

Packt
05 Aug 2015
16 min read
In this article by Valera Cogut, author of the book Unity 5 for Android Essentials, you will learn about the new Mecanim animation features and the awesome new audio features in Unity 5.

New Mecanim animation features in Unity 5

Unity 5 contains some awesome new possibilities for the Mecanim animation system. Let's look at the shiny new features in Unity 5.

State machine behavior

You can now inherit your classes from StateMachineBehaviour in order to attach them to your Mecanim animation states. This class has the following very important callbacks:

- OnStateEnter
- OnStateUpdate
- OnStateExit
- OnStateMove
- OnStateIK

StateMachineBehaviour scripts behave like MonoBehaviour scripts: you can attach them to as many objects as you wish. You can also use this solution with or without any animation at all.
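The article doesn't include a listing for this, so here is a minimal sketch of such a behavior, assuming it is attached to a state in an Animator Controller; the class name and log messages are illustrative only:

```csharp
using UnityEngine;

// Minimal sketch of a StateMachineBehaviour attached to an Animator state.
public class ExampleStateBehaviour : StateMachineBehaviour
{
    // Called on the first frame the state is entered
    public override void OnStateEnter(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        Debug.Log("Entered state: " + stateInfo.shortNameHash);
    }

    // Called on every frame while the state is active
    public override void OnStateUpdate(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        // Per-frame logic for this state goes here
    }

    // Called on the last frame before the state exits
    public override void OnStateExit(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        Debug.Log("Exited state");
    }
}
```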
State machine transitions

Unity 5 introduced an awesome new feature for the Mecanim animation system known as state machine transitions, which lets you construct a higher level of abstraction. In addition, entry and exit nodes were created: with these two additional nodes on a StateMachine, you can branch your start or finish state depending on your special conditions and requirements. The following mixes of transitions are possible: StateMachine to StateMachine, State to StateMachine, and State to State. You can also reorder your layers or parameters in the new UI, which allows this through a very simple and useful drag-and-drop method.

Asset creation API

One more awesome possibility introduced in Unity 5 is the use of editor scripts to programmatically create assets such as layers, controllers, states, StateMachines, and blend trees. You can use either the high-level API provided and maintained by the Unity engine, or the low-level API, where you manage all your assets manually. You can find out more about both API versions on the Unity documentation pages.

Direct blend tree

Another new feature is the new BlendTree type known as direct. It provides direct mapping from animator parameters to the weights of the BlendTree children.

The possibilities of Unity 5 have also been enhanced with two more useful features for the Mecanim animation system:

- The camera can scale, orbit, and pan
- You can access your parameters at runtime

Programmatically creating assets with the Unity 5 API

The following code snippets are self-explanatory, pretty simple, and straightforward. I list them here as a very useful reminder.

Creating the controller

To create a controller, you can use the following code:

```csharp
var animatorController = UnityEditor.Animations.AnimatorController.CreateAnimatorControllerAtPath(
    "Assets/Your/Folder/Name/state_machine_transitions.controller");
```

Adding parameters

To add parameters to the controller, you can use this code:

```csharp
animatorController.AddParameter("Parameter1", UnityEditor.Animations.AnimatorControllerParameterType.Trigger);
animatorController.AddParameter("Parameter2", UnityEditor.Animations.AnimatorControllerParameterType.Trigger);
animatorController.AddParameter("Parameter3", UnityEditor.Animations.AnimatorControllerParameterType.Trigger);
```

Adding state machines

To add state machines, you can use the following code:

```csharp
var sm1 = animatorController.layers[0].stateMachine;
var sm2 = sm1.AddStateMachine("sm2");
var sm3 = sm1.AddStateMachine("sm3");
```

Adding states

To add states, you can use the code given here:

```csharp
var s1 = sm2.AddState("s1");
var s2 = sm3.AddState("s2");
var s3 = sm3.AddState("s3");
```

Adding transitions

To add transitions, you can use the following code:

```csharp
var exitTransition = s1.AddExitTransition();
exitTransition.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter1");
exitTransition.duration = 0;

var transition1 = sm2.AddAnyStateTransition(s1);
transition1.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter2");
transition1.duration = 0;

var transition2 = sm3.AddEntryTransition(s2);
transition2.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter3");
sm3.AddEntryTransition(s3);
sm3.defaultState = s2;

var exitTransition2 = s3.AddExitTransition();
exitTransition2.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter3");
exitTransition2.duration = 0;

var smt = sm1.AddStateMachineTransition(sm2, sm3);
smt.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter2");
sm2.AddStateMachineTransition(sm1, sm3);
```

Going deeper into new audio features

Let's start with the amazing new Audio Mixer possibilities. In Unity 5, you can do true submixing of audio: for example, grouping the different sound categories required in a game (the article illustrates this with a figure). You can now mix different sound collections within categories and tune up volume control and effects once, in a single place, which saves a lot of time and effort. This awesome new audio feature in Unity 5 allows you to create a fantastic mood and atmosphere for your game. Each Audio Mixer can have a hierarchy of AudioGroups, so the Audio Mixer can not only do a lot of useful things but also mix different sound groups in one place. Different audio effects are applied sequentially in each AudioGroup.

Now you're getting closer to the amazing, awesome, and shiny new audio features in Unity 5! Previously, processing samples directly in your scripts was possible only through the OnAudioFilterRead callback and had to be handled entirely in code; Unity now also supports custom plugins for creating different effects. With these innovations, building your own synthesizer in Unity 5 has become much easier and more flexible.

Mood transitions

As mentioned earlier, the mood of the game can be controlled with the mix of sounds. This can be achieved by bringing in new stems of music or ambient sound. Another common way to accomplish it is to transition between states of the mix.
A very effective way of taking the mood where you want it to go is to change the section volumes of the mix and transition between different states of the effect parameters. This is built on the Audio Mixer's ability to define snapshots: a snapshot captures the state of all parameters in the Audio Mixer. Everything from effect wet levels to AudioGroup volume levels can be captured and blended between. You can even create a complex mixture of states across a whole collection of snapshots in your game, opening up all kinds of possibilities and goals. Imagine setting all of these things up without having to write a single line of script code.
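When you do want to drive such a transition from a script, AudioMixerSnapshot.TransitionTo is the Unity 5 call for blending to a snapshot. In this hedged sketch, the mixer layout, the snapshot names, and the two-second blend time are all assumptions:

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Hedged sketch of a snapshot-based mood transition; the snapshot
// fields ("calm", "combat") are illustrative, not from the article.
public class MoodTransition : MonoBehaviour
{
    public AudioMixerSnapshot calmSnapshot;
    public AudioMixerSnapshot combatSnapshot;

    // Blend every mixer parameter toward the combat snapshot over two seconds
    public void EnterCombat()
    {
        combatSnapshot.TransitionTo(2.0f);
    }

    public void LeaveCombat()
    {
        calmSnapshot.TransitionTo(2.0f);
    }
}
```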
Physics and particle system effects in Unity 5

Physics for 2D and 3D in Unity is very similar, because both engines use the same concepts, such as rigidbodies, joints, and colliders. However, Box2D has more features than Unity's 2D physics engine. It is not a problem to mix 2D and 3D physics engines (built-in, custom, or third-party) in Unity, so Unity provides an easy development path for your innovative games and applications. If you need some real-life physics in your project, you should not write your own library, framework, or engine unless you have specific requirements; instead, try existing physics engines, libraries, or frameworks, which already offer many features.

Let's start our introduction to Unity's built-in physics engine. If you need to place an object under the management of Unity's built-in physics, you just need to attach the Rigidbody component to that object. After that, your object can collide with other entities in its world, and gravity will have an effect on it; in other words, the Rigidbody will be simulated physically. In your scripts, you can move any of your Rigidbodies by adding vector forces to them. It is not recommended to move the Transform component of a non-kinematic Rigidbody, because it will not collide correctly with other items; instead, apply forces and torque to your Rigidbody. A Rigidbody can also be used to develop cars, using wheel colliders and some scripts that apply forces. Furthermore, a Rigidbody is not only for vehicles; you can use it for any other physics task, such as airplanes or robots, with various scripts for applying forces and with joints. The most useful way to utilize a Rigidbody is in collaboration with some of the primitive colliders built into Unity, such as BoxCollider and SphereCollider. Next, here are two things to remember about Rigidbody:

- In your object's hierarchy, you must never have a child and its parent each carrying a Rigidbody component at the same time
- It is not recommended to scale a Rigidbody's parent object

The Rigidbody component is one of the most important and fundamental components of physics in Unity: it activates physics calculations on the attached object. If you need your object to react to collisions (for example, billiard balls colliding with each other and scattering in different directions), you must also attach a Collider component to your GameObject. Once a Rigidbody component is attached to your object, the object is moved by the physics engine, and I recommend that you do not move it by changing the position or rotation in its Transform component.

If you need some way to move your object, you should apply the various forces acting on the object, so that the Unity physics engine assumes all obligations for calculating collisions and moving dynamic objects. In some situations, however, a Rigidbody component is needed, but the object must be moved only by changing the position or rotation properties in its Transform component; that is, the object moves through your script or through an animation, without the Rigidbody calculating collisions and motion physics. To solve this problem, you should activate its IsKinematic property. Sometimes a combination of these two modes is required, with IsKinematic switched on and off; you can create this symbiosis by changing the IsKinematic parameter directly in your code or in your animation. Changing the IsKinematic property very often from your code or animation can cause performance overhead, so use it carefully and only when you really need it.

A kinematic Rigidbody is defined by the IsKinematic toggle option. If a Rigidbody is kinematic, the object will not be affected by collisions, gravity, or forces. There is a Rigidbody component for the 3D physics engine and an analogous Rigidbody2D for the 2D physics engine. A kinematic Rigidbody can interact with other non-kinematic Rigidbodies; when using kinematic Rigidbodies, you translate their position and rotation values through the Transform component from your scripts or animations. When there is a collision between a kinematic and a non-kinematic Rigidbody, the kinematic object will properly wake up the non-kinematic Rigidbody, and it will also apply friction to the non-kinematic Rigidbody if that object rests on top of it. Let's list some possible usage examples of kinematic Rigidbodies:

- There are situations when you need your objects to be under physics management, but sometimes controlled explicitly from your scripts or animations. For example, you can attach Rigidbodies to the bones of your animated character and connect them with joints in order to use the entity as a ragdoll. While you control the character through Unity's animation system, you should enable the IsKinematic checkbox; when you need the hero to be affected by the built-in physics engine (for example, when the hero is hit), you should disable it.
- You may need a moving item that can push other items, yet is not pushed by them itself. If you have a moving platform and you need to place some Rigidbody objects on top of it, you ought to enable the IsKinematic checkbox rather than simply attaching a collider without a Rigidbody.
- You may need to enable the IsKinematic property on an animated Rigidbody object that has a genuine Rigidbody follower connected through one of the available joints.

Earlier, I mentioned the collider; now is the time to discuss this component in more detail. In Unity, for the physics engine to calculate collisions, you must specify a geometric shape for your object by attaching a Collider component. In most cases, the collider does not have to be the same shape as your mesh with its many polygons.
It is therefore desirable to use simple colliders, which will significantly improve your performance; with more complex geometric shapes, you risk significantly increasing the computing time for physics collisions. Simple colliders in Unity are known as primitive colliders: BoxCollider, BoxCollider2D, SphereCollider, CircleCollider2D, and CapsuleCollider. Nothing forbids you from combining different primitive colliders to create a more realistic geometric shape that the physics engine can handle very quickly compared to a MeshCollider, so to improve your performance, use primitive colliders wherever possible. You can also attach different primitive colliders to child objects; they will change position and rotation with the parent's Transform component. The Rigidbody component, however, must be attached only to the root GameObject in the hierarchy of your entity.

Unity provides a MeshCollider component for 3D physics and a PolygonCollider2D component for 2D physics. The MeshCollider component uses your object's mesh for its geometric shape. A PolygonCollider2D can be edited directly in Unity, so you can create any 2D geometry for your 2D physics computations. In order for collisions between different mesh colliders to be detected, you must enable the Convex property. You will certainly sacrifice performance for more accurate physics calculations, but if you strike the right balance between quality and performance, you can achieve good results with the proper approach.

Objects are static when they have a Collider component without a Rigidbody component. You should not move or rotate them by changing properties in their Transform component, because that leaves a heavy imprint on performance: the physics engine must recalculate many polygons of various objects for correct collisions and raycasts. Dynamic objects are those that have a Rigidbody component. Static objects (with a Collider component and without a Rigidbody component) can interact with dynamic objects (with both Collider and Rigidbody components), but unlike dynamic objects, they will not be moved by collisions.

Rigidbodies can also sleep in order to increase performance. Unity lets you control sleep on a Rigidbody directly in code using the following functions:

- Rigidbody.IsSleeping()
- Rigidbody.Sleep()
- Rigidbody.WakeUp()

There are two related variables defined in the physics manager; you can open the physics manager from the Unity menu at Edit | Project Settings | Physics:

- Rigidbody.sleepVelocity: The default value is 0.14. This is the lower limit of linear velocity (from zero to infinity) below which objects will sleep.
- Rigidbody.sleepAngularVelocity: The default value is 0.14. This is the lower limit of angular velocity (from zero to infinity) below which objects will sleep.

Rigidbodies awaken when:

- Another Rigidbody collides with the sleeping Rigidbody
- Another Rigidbody is connected through a joint
- A property of the Rigidbody is adjusted
- Force vectors are added

A kinematic Rigidbody can wake other sleeping Rigidbodies, while static objects (a Collider component without a Rigidbody component) cannot wake your sleeping Rigidbodies.
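A tiny sketch of that sleep API in use; the component itself, the idea of forcing an initial sleep, and the logging are illustrative assumptions, while the three Rigidbody calls are exactly the ones listed above:

```csharp
using UnityEngine;

// Hedged sketch: monitor and control the sleep state of a Rigidbody.
public class SleepMonitor : MonoBehaviour
{
    Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
        // Force the body to start asleep so it costs nothing until touched
        rb.Sleep();
    }

    void Update()
    {
        if (rb.IsSleeping())
        {
            // A sleeping Rigidbody is skipped by the solver until woken
            Debug.Log("Body is asleep");
        }
    }
}
```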
The PhysX physics engine integrated into Unity works well on mobile devices, but mobile devices certainly have far fewer resources than powerful desktops. Let's look at a few points for optimizing the physics engine in Unity:

- First of all, note that you can adjust the Fixed Timestep parameter in the time manager in order to reduce the cost of physics updates. A smaller timestep means more frequent updates and more accurate physics in your game or application, but you pay for it in processing time, which can greatly increase CPU overhead; a larger timestep reduces that cost at the expense of accuracy. The Maximum Allowed Timestep indicates how much time may be spent on physics processing in the worst case. The total processing time for physics depends on the awake rigidbodies and colliders in the scene, as well as on the complexity of the colliders.
- Unity provides physical materials for setting properties such as friction and elasticity. For example, a piece of ice in your game may have very low or zero friction (the minimum value), while a jumping ball may have very high friction, or one (the maximum value), as well as very high elasticity. You should play with the settings of your physical materials for different objects and choose the solution that is most suitable for you and best for your performance.
- Triggers do not require a lot of processing by the physics engine and can greatly help to improve your performance. Triggers are useful in situations where, for example, you need to define areas near lights that turn on automatically in the evening or at night when the player is inside the trigger zone; in other words, within the geometric shape of its collider, which you can design as you wish.

Unity triggers let you write three callbacks, which are called when your object enters the trigger, while it stays inside the trigger, and when it leaves the trigger, as sketched below. In each of these functions you can run the necessary instructions: for example, turn the light on when entering the trigger zone and turn it off when exiting. It is important to know that in Unity, a static object (an object without a Rigidbody component) will not cause your callbacks to fire on entering the trigger zone if the trigger does not itself contain a Rigidbody component; in other words, at least one of the two objects must have a Rigidbody component so that your callbacks are not ignored. In the case of two triggers, there should likewise be at least one object with a Rigidbody component attached so that your callbacks are not ignored. Remember that when two objects both have Rigidbody and Collider components and at least one of them is a trigger, the trigger callbacks will be called, not the collision callbacks. I would also like to point out that your callbacks will be called for each object involved in the collision or trigger zone. You can also directly control whether your collider is a trigger by setting the isTrigger flag to true or false in your code, and of course you can mix both options in order to obtain the best performance. All collision callbacks are called only if at least one of the two interacting rigidbodies is not kinematic.
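A minimal sketch of those three trigger callbacks, reusing the article's own example of a light that switches on inside a trigger zone; the class name, the field, and the exact behavior are assumptions:

```csharp
using UnityEngine;

// Hedged sketch of the three trigger callbacks described above; the lamp
// example follows the article's light-in-a-trigger-zone scenario.
public class LightTriggerZone : MonoBehaviour
{
    public Light lamp;

    // Called once when another collider enters this trigger zone
    void OnTriggerEnter(Collider other)
    {
        lamp.enabled = true;
    }

    // Called every physics step while the collider stays inside
    void OnTriggerStay(Collider other)
    {
        // Could track how long the player has been in the zone
    }

    // Called once when the collider leaves the trigger zone
    void OnTriggerExit(Collider other)
    {
        lamp.enabled = false;
    }
}
```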
Summary

This article covered the new Mecanim animation features in Unity 5. You were introduced to the awesome new audio features in Unity 5, and we also covered many useful performance details for Unity's built-in physics and particle systems.

Resources for Article:

Further resources on this subject:
- Speeding up Gradle builds for Android [article]
- Saying Hello to Unity and Android [article]
- Learning NGUI for Unity [article]


Fun with Sprites – Sky Defense

Packt
25 Mar 2015
35 min read
This article is written by Roger Engelbert, the author of Cocos2d-x by Example: Beginner's Guide - Second Edition. Time to build our second game! This time, you will get acquainted with the power of actions in Cocos2d-x. I'll show you how an entire game can be built just by running the various action commands contained in Cocos2d-x to make your sprites move, rotate, scale, fade, blink, and so on. You can also use actions to animate your sprites using multiple images, like in a movie. So let's get started.

In this article, you will learn:

- How to optimize the development of your game with sprite sheets
- How to use bitmap fonts in your game
- How easy it is to implement and run actions
- How to scale, rotate, swing, move, and fade out a sprite
- How to load multiple .png files and use them to animate a sprite
- How to create a universal game with Cocos2d-x

The game – sky defense

Meet our stressed-out city of...your name of choice here. It's a beautiful day when suddenly the sky begins to fall. There are meteors rushing toward the city, and it is your job to keep it safe. The player in this game can tap the screen to start growing a bomb. When the bomb is big enough to be activated, the player taps the screen again to detonate it. Any nearby meteor will explode into a million pieces. The bigger the bomb, the bigger the detonation, and the more meteors can be taken out by it. But the bigger the bomb, the longer it takes to grow it. It's not just bad news coming down, though. There are also health packs dropping from the sky, and if you allow them to reach the ground, you'll recover some of your energy.

The game settings

This is a universal game. It is designed for the iPad retina screen and will be scaled down to fit all the other screens. The game will be played in landscape mode, and it will not need to support multitouch.

The start project

The command line I used was:

```
cocos new SkyDefense -p com.rengelbert.SkyDefense -l cpp -d /Users/rengelbert/Desktop/SkyDefense
```

In Xcode, you must set the Devices field in Deployment Info to Universal, and the Device Family field is set to Universal. In RootViewController.mm, the supported interface orientation is set to Landscape. The game we are going to build requires only one class, GameLayer.cpp, and you will find that the interface for this class already contains all the information it needs. Some of the more trivial logic is already in place in the implementation file as well, but I'll go over it as we work on the game.

Adding screen support for a universal app

Now things get a bit more complicated as we add support for smaller screens in our universal game, as well as some of the most common Android screen sizes. So open AppDelegate.cpp.
Inside the applicationDidFinishLaunching method, we now have the following code:

```cpp
auto screenSize = glview->getFrameSize();
auto designSize = Size(2048, 1536);
glview->setDesignResolutionSize(designSize.width, designSize.height, ResolutionPolicy::EXACT_FIT);

std::vector<std::string> searchPaths;
if (screenSize.height > 768) {
    searchPaths.push_back("ipadhd");
    director->setContentScaleFactor(1536 / designSize.height);
} else if (screenSize.height > 320) {
    searchPaths.push_back("ipad");
    director->setContentScaleFactor(768 / designSize.height);
} else {
    searchPaths.push_back("iphone");
    director->setContentScaleFactor(380 / designSize.height);
}

auto fileUtils = FileUtils::getInstance();
fileUtils->setSearchPaths(searchPaths);
```

Once again, we tell our GLView object (our OpenGL view) that we designed the game for a certain screen size (the iPad retina screen), and once again, we want our game screen to resize to match the screen on the device (ResolutionPolicy::EXACT_FIT). Then we determine where to load our images from, based on the device's screen size. We have art for iPad retina, then for the regular iPad, which is shared by iPhone retina, and for the regular iPhone. We end by setting the scale factor based on the designed target.

Adding background music

Still inside AppDelegate.cpp, we load the sound files we'll use in the game, including background.mp3 (courtesy of Kevin MacLeod from incompetech.com), which we load through the command:

```cpp
auto audioEngine = SimpleAudioEngine::getInstance();
audioEngine->preloadBackgroundMusic(fileUtils->fullPathForFilename("background.mp3").c_str());
```

We end by setting the effects' volume down a tad:

```cpp
//lower playback volume for effects
audioEngine->setEffectsVolume(0.4f);
```

For background music volume, you must use setBackgroundMusicVolume. If you create some sort of volume control in your game, these are the calls you would make to adjust the volume based on the user's preference.

Initializing the game

Now back to GameLayer.cpp. If you take a look inside our init method, you will see that the game initializes by calling three methods: createGameScreen, createPools, and createActions. We create all our screen elements inside the first method, then create object pools so we don't instantiate any sprites inside the main loop, and we create all the main actions used in our game inside the createActions method. As soon as the game initializes, we start playing the background music, with its should-loop parameter set to true:

```cpp
SimpleAudioEngine::getInstance()->playBackgroundMusic("background.mp3", true);
```

We once again store the screen size for future reference, and we'll use a _running Boolean for game states. If you run the game now, you should only see the background image.

Using sprite sheets in Cocos2d-x

A sprite sheet is a way to group multiple images together in one image file. In order to texture a sprite with one of these images, you must have the information of where in the sprite sheet that particular image is found (its rectangle). Sprite sheets are often organized in two files: the image file and a data file that describes where in the image you can find the individual textures. I used TexturePacker to create these files for the game. You can find them inside the ipad, ipadhd, and iphone folders inside Resources. There is a sprite_sheet.png file for the image and a sprite_sheet.plist file that describes the individual frames inside the image.
This is what the sprite_sheet.png file looks like:

Batch drawing sprites

In Cocos2d-x, sprite sheets can be used in conjunction with a specialized node called SpriteBatchNode. This node can be used whenever you wish to display multiple sprites that share the same source image inside the same node. So you could have multiple instances of a Sprite class that use a bullet.png texture, for instance. And if the source image is a sprite sheet, you can have multiple instances of sprites displaying as many different textures as you could pack inside your sprite sheet. With SpriteBatchNode, you can substantially reduce the number of calls during the rendering stage of your game, which will help when targeting less powerful systems, though not noticeably on more modern devices. Let me show you how to create a SpriteBatchNode.

Time for action – creating SpriteBatchNode

Let's begin implementing the createGameScreen method in GameLayer.cpp. Just below the lines that add the bg sprite, we instantiate our batch node:

```cpp
void GameLayer::createGameScreen() {
    //add bg
    auto bg = Sprite::create("bg.png");
    ...
    SpriteFrameCache::getInstance()->addSpriteFramesWithFile("sprite_sheet.plist");
    _gameBatchNode = SpriteBatchNode::create("sprite_sheet.png");
    this->addChild(_gameBatchNode);
```

In order to create the batch node from a sprite sheet, we first load all the frame information described by the sprite_sheet.plist file into SpriteFrameCache, and then we create the batch node with the sprite_sheet.png file, which is the source texture shared by all sprites added to this batch node. (The background image is not part of the sprite sheet, so it's added separately before we add _gameBatchNode to GameLayer.) Now we can start putting things inside _gameBatchNode. First, the city:

```cpp
for (int i = 0; i < 2; i++) {
    auto sprite = Sprite::createWithSpriteFrameName("city_dark.png");
    sprite->setAnchorPoint(Vec2(0.5, 0));
    sprite->setPosition(Vec2(_screenSize.width * (0.25f + i * 0.5f), 0));
    _gameBatchNode->addChild(sprite, kMiddleground);

    sprite = Sprite::createWithSpriteFrameName("city_light.png");
    sprite->setAnchorPoint(Vec2(0.5, 0));
    sprite->setPosition(Vec2(_screenSize.width * (0.25f + i * 0.5f),
        _screenSize.height * 0.1f));
    _gameBatchNode->addChild(sprite, kBackground);
}
```

Then the trees:

```cpp
//add trees
for (int i = 0; i < 3; i++) {
    auto sprite = Sprite::createWithSpriteFrameName("trees.png");
    sprite->setAnchorPoint(Vec2(0.5f, 0.0f));
    sprite->setPosition(Vec2(_screenSize.width * (0.2f + i * 0.3f), 0));
    _gameBatchNode->addChild(sprite, kForeground);
}
```

Notice that here we create sprites by passing a sprite frame name. The IDs for these frame names were loaded into SpriteFrameCache through our sprite_sheet.plist file. The screen so far is made up of two instances of city_dark.png tiling at the bottom of the screen, and two instances of city_light.png also tiling. One needs to appear on top of the other, and for that we use the enumerated values declared in GameLayer.h:

```cpp
enum {
    kBackground,
    kMiddleground,
    kForeground
};
```

We use the addChild(Node, zOrder) method to layer our sprites on top of each other, using different values for their z order. So, for example, when we later add three sprites showing the trees.png sprite frame, we add them on top of all previous sprites using the highest z value found in the enumerated list, which is kForeground. Why go through the trouble of tiling the images and not using one large image instead, or combining some of them with the background image?
Because I wanted to pack the greatest number of images possible into a single sprite sheet, and to keep that sprite sheet as small as possible, in order to illustrate all the clever ways you can use and optimize sprite sheets. This is not necessary in this particular game.

What just happened?

We began creating the initial screen for our game. We are using a SpriteBatchNode to contain all the sprites that use images from our sprite sheet. So SpriteBatchNode behaves as any node does: as a container. And we can layer individual sprites inside the batch node by manipulating their z order.

Bitmap fonts in Cocos2d-x

The Cocos2d-x Label class has a static create method that uses bitmap images for the characters. The bitmap image we are using here was created with the program GlyphDesigner, and in essence, it works just as a sprite sheet does. As a matter of fact, Label extends SpriteBatchNode, so it behaves just like a batch node. You have images for all the individual characters you'll need packed inside a PNG file (font.png), and then a data file (font.fnt) describing where each character is. The following screenshot shows what the font sprite sheet looks like for our game:

The difference between Label and a regular SpriteBatchNode class is that the data file also feeds the Label object information on how to write with this font. In other words, how to space out the characters and lines correctly.

The Label objects we are using in the game are instantiated with the name of the data file and their initial string value:

_scoreDisplay = Label::createWithBMFont("font.fnt", "0");

And the value for the label is changed through the setString method:

_scoreDisplay->setString("1000");

Just as with every other image in the game, we also have different versions of font.fnt and font.png in our Resources folders, one for each screen resolution. FileUtils will once again do the heavy lifting of finding the correct file for the correct screen. So now let's create the labels for our game.

Time for action – creating bitmap font labels

Creating a bitmap font is somewhat similar to creating a batch node. Continuing with our createGameScreen method, add the following lines for the score label:

_scoreDisplay = Label::createWithBMFont("font.fnt", "0");
_scoreDisplay->setAnchorPoint(Vec2(1,0.5));
_scoreDisplay->setPosition(Vec2(_screenSize.width * 0.8f, _screenSize.height * 0.94f));
this->addChild(_scoreDisplay);

And then add a label to display the energy level, and set its horizontal alignment to Right:

_energyDisplay = Label::createWithBMFont("font.fnt", "100%", TextHAlignment::RIGHT);
_energyDisplay->setPosition(Vec2(_screenSize.width * 0.3f, _screenSize.height * 0.94f));
this->addChild(_energyDisplay);

Add the following lines for an icon that appears next to the _energyDisplay label:

auto icon = Sprite::createWithSpriteFrameName("health_icon.png");
icon->setPosition(Vec2(_screenSize.width * 0.15f, _screenSize.height * 0.94f));
_gameBatchNode->addChild(icon, kBackground);

What just happened?

We just created our first bitmap font object in Cocos2d-x. Now let's finish creating our game's sprites.

Time for action – adding the final screen sprites

The last sprites we need to create are the clouds, the bomb and shockwave, and our game state messages. Back in the createGameScreen method, add the clouds to the screen:
for (int i = 0; i < 4; i++) {
   float cloud_y = i % 2 == 0 ? _screenSize.height * 0.4f : _screenSize.height * 0.5f;
   auto cloud = Sprite::createWithSpriteFrameName("cloud.png");
   cloud->setPosition(Vec2(_screenSize.width * 0.1f + i * _screenSize.width * 0.3f, cloud_y));
   _gameBatchNode->addChild(cloud, kBackground);
   _clouds.pushBack(cloud);
}

Create the _bomb sprite, which players will grow by tapping the screen:

_bomb = Sprite::createWithSpriteFrameName("bomb.png");
_bomb->getTexture()->generateMipmap();
_bomb->setVisible(false);

auto size = _bomb->getContentSize();

//add sparkle inside bomb sprite
auto sparkle = Sprite::createWithSpriteFrameName("sparkle.png");
sparkle->setPosition(Vec2(size.width * 0.72f, size.height * 0.72f));
_bomb->addChild(sparkle, kMiddleground, kSpriteSparkle);

//add halo inside bomb sprite
auto halo = Sprite::createWithSpriteFrameName("halo.png");
halo->setPosition(Vec2(size.width * 0.4f, size.height * 0.4f));
_bomb->addChild(halo, kMiddleground, kSpriteHalo);
_gameBatchNode->addChild(_bomb, kForeground);

Then create the _shockWave sprite that appears after the _bomb goes off:

_shockWave = Sprite::createWithSpriteFrameName("shockwave.png");
_shockWave->getTexture()->generateMipmap();
_shockWave->setVisible(false);
_gameBatchNode->addChild(_shockWave);

Finally, add the two messages that appear on the screen, one for our intro state and one for our gameover state:

_introMessage = Sprite::createWithSpriteFrameName("logo.png");
_introMessage->setPosition(Vec2(_screenSize.width * 0.5f, _screenSize.height * 0.6f));
_introMessage->setVisible(true);
this->addChild(_introMessage, kForeground);

_gameOverMessage = Sprite::createWithSpriteFrameName("gameover.png");
_gameOverMessage->setPosition(Vec2(_screenSize.width * 0.5f, _screenSize.height * 0.65f));
_gameOverMessage->setVisible(false);
this->addChild(_gameOverMessage, kForeground);

What just happened?

There is a lot of new information regarding sprites in the previous code, so let's go over it carefully:

We started by adding the clouds. We put the sprites inside a vector so we can move the clouds later. Notice that they are also part of our batch node.

Next comes the bomb sprite and our first new call:

_bomb->getTexture()->generateMipmap();

With this, we are telling the framework to create antialiased copies of this texture in diminishing sizes (mipmaps), since we are going to scale it down later. This is optional, of course; sprites can be resized without first generating mipmaps, but if you notice loss of quality in your scaled sprites, you can fix that by creating mipmaps for their texture.

The texture must have so-called POT dimensions (powers of 2: 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, and so on). Textures in OpenGL must always be sized this way; when they are not, Cocos2d-x will do one of two things: it will either resize the texture in memory, adding transparent pixels until the image reaches a POT size, or stop the execution on an assert. With textures used for mipmaps, the framework will stop execution for non-POT textures.

I add the sparkle and the halo sprites as children of the _bomb sprite. This will use the container characteristic of nodes to our advantage. When I grow the bomb, all its children will grow with it. Notice too that I use a third parameter to addChild for halo and sparkle:

_bomb->addChild(halo, kMiddleground, kSpriteHalo);

This third parameter is an integer tag from yet another enumerated list declared in GameLayer.h.
I can use this tag to retrieve a particular child from a sprite as follows:

auto halo = (Sprite *) _bomb->getChildByTag(kSpriteHalo);

We now have our game screen in place. Next come object pools.

Time for action – creating our object pools

The pools are just vectors of objects. And here are the steps to create them:

Inside the createPools method, we first create a pool for meteors:

void GameLayer::createPools() {
   int i;
   _meteorPoolIndex = 0;
   for (i = 0; i < 50; i++) {
      auto sprite = Sprite::createWithSpriteFrameName("meteor.png");
      sprite->setVisible(false);
      _gameBatchNode->addChild(sprite, kMiddleground, kSpriteMeteor);
      _meteorPool.pushBack(sprite);
   }

Then we create an object pool for health packs:

   _healthPoolIndex = 0;
   for (i = 0; i < 20; i++) {
      auto sprite = Sprite::createWithSpriteFrameName("health.png");
      sprite->setVisible(false);
      sprite->setAnchorPoint(Vec2(0.5f, 0.8f));
      _gameBatchNode->addChild(sprite, kMiddleground, kSpriteHealth);
      _healthPool.pushBack(sprite);
   }

We'll use the corresponding pool index to retrieve objects from the vectors as the game progresses.

What just happened?

We now have a vector of invisible meteor sprites and a vector of invisible health sprites. We'll use their respective pool indices to retrieve these from the vectors as needed, as you'll see in a moment. But first we need to take care of actions and animations.

With object pools, we reduce the number of instantiations during the main loop, and we never have to destroy anything that can be reused. But if you do need to remove a child from a node, use ->removeChild, or ->removeChildByTag if a tag is present.

Actions in a nutshell

If you remember, a node stores information about its position, scale, rotation, visibility, and opacity. And in Cocos2d-x, there is an Action class to change each one of these values over time, in effect animating these transformations.

Actions are usually created with a static create method. The majority of these actions are time-based, so the first parameter you usually pass to an action is its duration. So, for instance:

auto fadeout = FadeOut::create(1.0f);

This creates a fadeout action that will take one second to complete. You can run it on a sprite, or node, as follows:

mySprite->runAction(fadeout);

Cocos2d-x has an incredibly flexible system that allows us to create any combination of actions and transformations to achieve any effect we desire. You may, for instance, choose to create an action sequence (Sequence) that contains more than one action; or you can apply easing effects (EaseIn, EaseOut, and so on) to your actions. You can choose to repeat an action a certain number of times (Repeat) or forever (RepeatForever); and you can add callbacks to functions you want called once an action is completed (usually inside a Sequence action).

Time for action – creating actions with Cocos2d-x

Creating actions with Cocos2d-x is a very simple process. Inside our createActions method, we will instantiate the actions we can use repeatedly in our game. Let's create our first actions:

void GameLayer::createActions() {
   //swing action for health drops
   auto easeSwing = Sequence::create(
      EaseInOut::create(RotateTo::create(1.2f, -10), 2),
      EaseInOut::create(RotateTo::create(1.2f, 10), 2),
      nullptr);//mark the end of a sequence with a nullptr
   _swingHealth = RepeatForever::create( (ActionInterval *) easeSwing );
   _swingHealth->retain();

Actions can be combined in many different forms.
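As a quick, self-contained illustration of combining actions (this snippet is not part of the game code, and mySprite is a hypothetical sprite), you could run a move and a fade in parallel with Spawn, then chain a fade back in with Sequence:

//hypothetical example: a parallel Spawn wrapped in a Sequence
auto move = MoveBy::create(1.0f, Vec2(100, 0));
auto fade = FadeOut::create(1.0f);
auto both = Spawn::create(move, fade, nullptr);      //run simultaneously
auto chain = Sequence::create(both, FadeIn::create(0.5f), nullptr);
mySprite->runAction(chain);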
In the createActions code we just added, the retained _swingHealth action is a RepeatForever action wrapping a Sequence that will rotate the health sprite first one way, then the other, with EaseInOut wrapping each RotateTo action. RotateTo takes 1.2 seconds to rotate the sprite first to -10 degrees and then to 10. And the easing has a value of 2, which I suggest you experiment with to get a sense of what it means visually.

Next we add three more actions:

//action sequence for shockwave: fade out, callback when done
_shockwaveSequence = Sequence::create(
   FadeOut::create(1.0f),
   CallFunc::create(std::bind(&GameLayer::shockwaveDone, this)),
   nullptr);
_shockwaveSequence->retain();

//action to grow bomb
_growBomb = ScaleTo::create(6.0f, 1.0);
_growBomb->retain();

//action to rotate sprites
auto rotate = RotateBy::create(0.5f, -90);
_rotateSprite = RepeatForever::create( rotate );
_rotateSprite->retain();

First, another Sequence. This will fade out the sprite and call the shockwaveDone function, which is already implemented in the class and turns the _shockWave sprite invisible when called. The last one is a RepeatForever action of a RotateBy action. In half a second, the sprite running this action will rotate -90 degrees, and will do that again and again.

What just happened?

You just got your first glimpse of how to create actions in Cocos2d-x and how the framework allows for all sorts of combinations to accomplish any effect. It may be hard at first to read through a Sequence action and understand what's happening, but the logic is easy to follow once you break it down into its individual parts.

But we are not done with the createActions method yet. Next come sprite animations.

Animating a sprite in Cocos2d-x

The key thing to remember is that an animation is just another type of action, one that changes the texture used by a sprite over a period of time. In order to create an animation action, you need to first create an Animation object. This object will store all the information regarding the different sprite frames you wish to use in the animation, the length of the animation in seconds, and whether it loops or not. With this Animation object, you then create an Animate action. Let's take a look.

Time for action – creating animations

Animations are a specialized type of action that require a few extra steps. Inside the same createActions method, add the lines for the two animations we have in the game. First, we start with the animation that shows an explosion when a meteor reaches the city. We begin by loading the frames into an Animation object:

auto animation = Animation::create();
int i;
for(i = 1; i <= 10; i++) {
   auto name = String::createWithFormat("boom%i.png", i);
   auto frame = SpriteFrameCache::getInstance()->getSpriteFrameByName(name->getCString());
   animation->addSpriteFrame(frame);
}

Then we use the Animation object inside an Animate action:

animation->setDelayPerUnit(1 / 10.0f);
animation->setRestoreOriginalFrame(true);
_groundHit = Sequence::create(
   MoveBy::create(0, Vec2(0, _screenSize.height * 0.12f)),
   Animate::create(animation),
   CallFuncN::create(CC_CALLBACK_1(GameLayer::animationDone, this)),
   nullptr);
_groundHit->retain();

The same steps are repeated to create the other explosion animation, used when the player hits a meteor or a health pack.
animation = Animation::create();
for(int i = 1; i <= 7; i++) {
   auto name = String::createWithFormat("explosion_small%i.png", i);
   auto frame = SpriteFrameCache::getInstance()->getSpriteFrameByName(name->getCString());
   animation->addSpriteFrame(frame);
}

animation->setDelayPerUnit(0.5 / 7.0f);
animation->setRestoreOriginalFrame(true);
_explosion = Sequence::create(
   Animate::create(animation),
   CallFuncN::create(CC_CALLBACK_1(GameLayer::animationDone, this)),
   nullptr);
_explosion->retain();

What just happened?

We created two instances of a very special kind of action in Cocos2d-x: Animate. Here is what we did:

First, we created an Animation object. This object holds the references to all the textures used in the animation. The frames were named in such a way that they could easily be concatenated inside a loop (boom1, boom2, boom3, and so on). There are 10 frames for the first animation and seven for the second.

The textures (or frames) are SpriteFrame objects we grab from SpriteFrameCache, which, as you remember, contains all the information from the sprite_sheet.plist data file. So the frames are in our sprite sheet. Then, when all frames are in place, we determine the delay of each frame by dividing the total number of seconds we want the animation to last by the total number of frames.

The setRestoreOriginalFrame method is important here. If we set setRestoreOriginalFrame to true, then the sprite will revert to its original appearance once the animation is over. For example, if I have an explosion animation that will run on a meteor sprite, then by the end of the explosion animation, the sprite will revert to displaying the meteor texture.

Time for the actual action. Animate receives the Animation object as its parameter. (In the first animation, we shift the position of the sprite just before the explosion appears, so there is an extra MoveBy method.) And in both instances, I make a call to an animationDone callback already implemented in the class. It makes the calling sprite invisible:

void GameLayer::animationDone (Node* pSender) {
   pSender->setVisible(false);
}

We could have used the same method for both callbacks (animationDone and shockwaveDone), as they accomplish the same thing. But I wanted to show you both a callback that receives the node that made the call as an argument, and one that does not. Respectively, these are CallFuncN and CallFunc, and they were used inside the action sequences we just created.

Time to make our game tick!

Okay, we have our main elements in place and are ready to add the final bit of logic to run the game. But how will everything work?

We will use a system of countdowns to add new meteors and new health packs, as well as a countdown that will incrementally make the game harder to play. On touch, the player will start the game if the game is not running, and will add bombs and explode them during gameplay. An explosion creates a shockwave. On update, we will check for collisions between our _shockWave sprite (if visible) and all our falling objects. And that's it. Cocos2d-x will take care of all the rest through our created actions and callbacks!

So let's implement our touch events first.

Time for action – handling touches

Time to bring the player to our party. We'll start by implementing our onTouchBegan method.
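One note before we dive in: onTouchBegan is only fired if a touch listener has been registered for the layer, typically inside init. That setup code is not part of this excerpt, but with the Cocos2d-x 3.x event dispatcher it would look roughly like this sketch:

//assumed setup (not shown in this excerpt): register a single-touch listener
auto listener = EventListenerTouchOneByOne::create();
listener->onTouchBegan = CC_CALLBACK_2(GameLayer::onTouchBegan, this);
_eventDispatcher->addEventListenerWithSceneGraphPriority(listener, this);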
We'll begin by handling the two game states, intro and game over:

bool GameLayer::onTouchBegan (Touch * touch, Event * event) {
   //if game not running, we are seeing either intro or gameover
   if (!_running) {
      //if intro, hide intro message
      if (_introMessage->isVisible()) {
         _introMessage->setVisible(false);
      //if game over, hide game over message
      } else if (_gameOverMessage->isVisible()) {
         SimpleAudioEngine::getInstance()->stopAllEffects();
         _gameOverMessage->setVisible(false);
      }
      this->resetGame();
      return true;
   }

Here we check to see if the game is not running. If not, we check to see if any of our messages are visible. If _introMessage is visible, we hide it. If _gameOverMessage is visible, we stop all current sound effects and hide the message as well. Then we call a method called resetGame, which will reset all the game data (energy, score, and countdowns) to their initial values, and set _running to true.

Next, we handle the touch itself. Note that our onTouchBegan signature already receives a single Touch object (in older versions of the engine, you would receive a Set of touches and grab one with ->anyObject(); with a one-by-one touch listener, that step is no longer necessary), so we just make sure it is valid:

if (touch) {
   //if bomb already growing...
   if (_bomb->isVisible()) {
      //stop all actions on bomb, halo and sparkle
      _bomb->stopAllActions();
      auto child = (Sprite *) _bomb->getChildByTag(kSpriteHalo);
      child->stopAllActions();
      child = (Sprite *) _bomb->getChildByTag(kSpriteSparkle);
      child->stopAllActions();

      //if bomb is the right size, then create shockwave
      if (_bomb->getScale() > 0.3f) {
         _shockWave->setScale(0.1f);
         _shockWave->setPosition(_bomb->getPosition());
         _shockWave->setVisible(true);
         _shockWave->runAction(ScaleTo::create(0.5f, _bomb->getScale() * 2.0f));
         _shockWave->runAction(_shockwaveSequence->clone());
         SimpleAudioEngine::getInstance()->playEffect("bombRelease.wav");
      } else {
         SimpleAudioEngine::getInstance()->playEffect("bombFail.wav");
      }
      _bomb->setVisible(false);
      //reset hits with shockwave, so we can count combo hits
      _shockwaveHits = 0;
   //if no bomb currently on screen, create one
   } else {
      Point tap = touch->getLocation();
      _bomb->stopAllActions();
      _bomb->setScale(0.1f);
      _bomb->setPosition(tap);
      _bomb->setVisible(true);
      _bomb->setOpacity(50);
      _bomb->runAction(_growBomb->clone());

      auto child = (Sprite *) _bomb->getChildByTag(kSpriteHalo);
      child->runAction(_rotateSprite->clone());
      child = (Sprite *) _bomb->getChildByTag(kSpriteSparkle);
      child->runAction(_rotateSprite->clone());
   }
}

If _bomb is visible, it means it's already growing on the screen. So on touch, we call stopAllActions() on the bomb, and we call stopAllActions() on its children, which we retrieve through our tags:

child = (Sprite *) _bomb->getChildByTag(kSpriteHalo);
child->stopAllActions();
child = (Sprite *) _bomb->getChildByTag(kSpriteSparkle);
child->stopAllActions();

If _bomb is the right size, we start our _shockWave. If it isn't, we play a bomb failure sound effect; there is no explosion and _shockWave is not made visible.

If we have an explosion, then the _shockWave sprite is set to 10 percent of scale. It's placed at the same spot as the bomb, and we run a couple of actions on it: we grow the _shockWave sprite to twice the scale the bomb was when it went off, and we run a copy of the _shockwaveSequence that we created earlier.

Finally, if no _bomb is currently visible on screen, we create one. And we run clones of previously created actions on the _bomb sprite and its children.
When _bomb grows, its children grow with it. But when the children rotate, the bomb does not: a parent changes its children, but the children do not change their parent.

What just happened?

We just added part of the core logic of the game. It is with touches that the player creates and explodes bombs to stop meteors from reaching the city. Now we need to create our falling objects. But first, let's set up our countdowns and our game data.

Time for action – starting and restarting the game

Let's add the logic to start and restart the game. First, write the implementation for resetGame:

void GameLayer::resetGame(void) {
   _score = 0;
   _energy = 100;

   //reset timers and "speeds"
   _meteorInterval = 2.5;
   _meteorTimer = _meteorInterval * 0.99f;
   _meteorSpeed = 10;//in seconds to reach ground
   _healthInterval = 20;
   _healthTimer = 0;
   _healthSpeed = 15;//in seconds to reach ground

   _difficultyInterval = 60;
   _difficultyTimer = 0;

   _running = true;

   //reset labels
   _energyDisplay->setString(std::to_string((int) _energy) + "%");
   _scoreDisplay->setString(std::to_string((int) _score));
}

Next, add the implementation of stopGame:

void GameLayer::stopGame() {
   _running = false;

   //stop all actions currently running
   int i;
   int count = (int) _fallingObjects.size();

   for (i = count-1; i >= 0; i--) {
      auto sprite = _fallingObjects.at(i);
      sprite->stopAllActions();
      sprite->setVisible(false);
      _fallingObjects.erase(i);
   }
   if (_bomb->isVisible()) {
      _bomb->stopAllActions();
      _bomb->setVisible(false);
      auto child = _bomb->getChildByTag(kSpriteHalo);
      child->stopAllActions();
      child = _bomb->getChildByTag(kSpriteSparkle);
      child->stopAllActions();
   }
   if (_shockWave->isVisible()) {
      _shockWave->stopAllActions();
      _shockWave->setVisible(false);
   }
   if (_ufo->isVisible()) {
      _ufo->stopAllActions();
      _ufo->setVisible(false);
      auto ray = _ufo->getChildByTag(kSpriteRay);
      ray->stopAllActions();
      ray->setVisible(false);
   }
}

What just happened?

With these methods we control gameplay. We start the game with default values through resetGame(), and we stop all actions with stopGame(). Already implemented in the class is the method that makes the game more difficult as time progresses. If you take a look at that method (increaseDifficulty), you will see that it reduces the interval between meteors and reduces the time it takes for meteors to reach the ground.

All we need now is the update method to run the countdowns and check for collisions.

Time for action – updating the game

We already have the code that updates the countdowns inside update. If it's time to add a meteor or a health pack, we do it. If it's time to make the game more difficult to play, we do that too.

It is possible to use an action for these timers: a Sequence containing a DelayTime action and a callback. But there are advantages to using these countdowns: it's easier to reset them and to change them, and we can take them right into our main loop. So it's time to add our main loop.
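To make the countdown pattern concrete, here is a minimal sketch of what one of these timers looks like inside update; this is an illustration of the pattern described above, written against the member names set in resetGame, not a verbatim excerpt from GameLayer.cpp:

//sketch of one countdown inside update(float dt)
_meteorTimer += dt;
if (_meteorTimer > _meteorInterval) {
   _meteorTimer = 0;
   this->resetMeteor(); //grab the next meteor from the pool
}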
What the main loop needs to do next is check for collisions, so add the following code:

if (_shockWave->isVisible()) {
   count = (int) _fallingObjects.size();
   for (i = count-1; i >= 0; i--) {
      auto sprite = _fallingObjects.at(i);
      diffx = _shockWave->getPositionX() - sprite->getPositionX();
      diffy = _shockWave->getPositionY() - sprite->getPositionY();
      if (pow(diffx, 2) + pow(diffy, 2) <= pow(_shockWave->getBoundingBox().size.width * 0.5f, 2)) {
         sprite->stopAllActions();
         sprite->runAction( _explosion->clone());
         SimpleAudioEngine::getInstance()->playEffect("boom.wav");
         if (sprite->getTag() == kSpriteMeteor) {
            _shockwaveHits++;
            _score += _shockwaveHits * 13 + _shockwaveHits * 2;
         }
         //play sound
         _fallingObjects.erase(i);
      }
   }
   _scoreDisplay->setString(std::to_string(_score));
}

If _shockWave is visible, we check the distance between it and each sprite in the _fallingObjects vector. If we hit any meteors, we increase the value of the _shockwaveHits property so we can award the player for multiple hits.

Next we move the clouds:

//move clouds
for (auto sprite : _clouds) {
   sprite->setPositionX(sprite->getPositionX() + dt * 20);
   if (sprite->getPositionX() > _screenSize.width + sprite->getBoundingBox().size.width * 0.5f)
      sprite->setPositionX(-sprite->getBoundingBox().size.width * 0.5f);
}

I chose not to use a MoveTo action for the clouds to show you the amount of code that can be replaced by a simple action. If not for Cocos2d-x actions, we would have to implement logic to move, rotate, swing, scale, and explode all our sprites!

And finally:

if (_bomb->isVisible()) {
   if (_bomb->getScale() > 0.3f) {
      if (_bomb->getOpacity() != 255)
         _bomb->setOpacity(255);
   }
}

We give the player an extra visual cue that a bomb is ready to explode by changing its opacity.

What just happened?

The main loop is pretty straightforward when you don't have to worry about updating individual sprites, as our actions take care of that for us. We pretty much only need to run collision checks between our sprites, and determine when it's time to throw something new at the player.

So now the only thing left to do is grab the meteors and health packs from the pools when their timers are up. Let's get right to it.
Time for action – retrieving objects from the pool

We just need to use the correct index to retrieve the objects from their respective vector. To retrieve meteor sprites, we'll use the resetMeteor method:

void GameLayer::resetMeteor(void) {
   //if too many objects on screen, return
   if (_fallingObjects.size() > 30) return;

   auto meteor = _meteorPool.at(_meteorPoolIndex);
   _meteorPoolIndex++;
   if (_meteorPoolIndex == _meteorPool.size())
      _meteorPoolIndex = 0;

   int meteor_x = rand() % (int) (_screenSize.width * 0.8f) + _screenSize.width * 0.1f;
   int meteor_target_x = rand() % (int) (_screenSize.width * 0.8f) + _screenSize.width * 0.1f;

   meteor->stopAllActions();
   meteor->setPosition(Vec2(meteor_x, _screenSize.height + meteor->getBoundingBox().size.height * 0.5f));

   //create action
   auto rotate = RotateBy::create(0.5f, -90);
   auto repeatRotate = RepeatForever::create( rotate );
   auto sequence = Sequence::create(
      MoveTo::create(_meteorSpeed, Vec2(meteor_target_x, _screenSize.height * 0.15f)),
      CallFunc::create(std::bind(&GameLayer::fallingObjectDone, this, meteor)),
      nullptr);

   meteor->setVisible( true );
   meteor->runAction(repeatRotate);
   meteor->runAction(sequence);
   _fallingObjects.pushBack(meteor);
}

We grab the next available meteor from the pool, then we pick a random start and end x value for its MoveTo action. The meteor starts at the top of the screen and will move to the bottom towards the city, but the x value is randomly picked each time.

We rotate the meteor inside a RepeatForever action, and we use Sequence to move the sprite to its target position and then call back fallingObjectDone when the meteor has reached its target. We finish by adding the new meteor we retrieved from the pool to the _fallingObjects vector so we can check collisions with it.

The method to retrieve the health sprites (resetHealth) is pretty much the same, except that the _swingHealth action is used instead of the rotate action. You'll find that method already implemented in GameLayer.cpp.

What just happened?

So in resetGame we set the timers, and we update them in the update method. We use these timers to add meteors and health packs to the screen by grabbing the next available one from their respective pool, and then we proceed to run collisions between an exploding bomb and these falling objects.

Notice that in both resetMeteor and resetHealth we don't add new sprites if too many are on screen already:

if (_fallingObjects.size() > 30) return;

This way the game does not get ridiculously hard, and we never run out of unused objects in our pools.

And the very last bit of logic in our game is our fallingObjectDone callback, called when either a meteor or a health pack has reached the ground, at which point it awards or punishes the player for letting sprites through. When you take a look at that method inside GameLayer.cpp, you will notice how we use ->getTag() to quickly ascertain which type of sprite we are dealing with (the one calling the method):

if (pSender->getTag() == kSpriteMeteor) {

If it's a meteor, we take energy away from the player, play a sound effect, and run the explosion animation: a clone of the _groundHit action we retained earlier, so that we don't need to repeat all that logic every time we run this action. If the item is a health pack, we increase the energy or give the player some points, play a nice sound effect, and hide the sprite.

Play the game!

We've been coding like mad, and it's finally time to run the game.
But first, don't forget to release all the items we retained. In GameLayer.cpp, add our destructor method:

GameLayer::~GameLayer () {
   //release all retained actions
   CC_SAFE_RELEASE(_growBomb);
   CC_SAFE_RELEASE(_rotateSprite);
   CC_SAFE_RELEASE(_shockwaveSequence);
   CC_SAFE_RELEASE(_swingHealth);
   CC_SAFE_RELEASE(_groundHit);
   CC_SAFE_RELEASE(_explosion);
   CC_SAFE_RELEASE(_ufoAnimation);
   CC_SAFE_RELEASE(_blinkRay);

   _clouds.clear();
   _meteorPool.clear();
   _healthPool.clear();
   _fallingObjects.clear();
}

The actual game screen will now look something like this:

Now, let's take this to Android.

Time for action – running the game in Android

Follow these steps to deploy the game to Android:

This time, there is no need to alter the manifest because the default settings are the ones we want. So, navigate to proj.android and then to the jni folder, and open the Android.mk file in a text editor. Edit the lines in LOCAL_SRC_FILES to read as follows:

LOCAL_SRC_FILES := hellocpp/main.cpp \
                   ../../Classes/AppDelegate.cpp \
                   ../../Classes/GameLayer.cpp

Follow the instructions from the HelloWorld and AirHockey examples to import the game into Eclipse. Save it and run your application. This time, you can try out different screen sizes if you have the devices.

What just happened?

You just ran a universal app on Android. And nothing could have been simpler.

Summary

In my opinion, after nodes and all their derived objects, actions are the second best thing about Cocos2d-x. They are time savers and can quickly spice things up in any project with professional-looking animations. And I hope that, with the examples found in this article, you will be able to create any action you need with Cocos2d-x.

Resources for Article:

Further resources on this subject:

Animations in Cocos2d-x [article]
Moving the Space Pod Using Touch [article]
Cocos2d-x: Installation [article]
Advanced Effects using Blender Particle System

Packt
18 Aug 2010
6 min read
(For more resources on Blender, see here.)

The list above might be a bit daunting to some users, but don't worry: I will discuss as much as I can (and bear with me when I ramble a lot), and hopefully I'll succeed in imparting as much information as possible, so that when you're done reading this you can proudly say "I know Particle System!", just like Neo said in The Matrix: "I know Kung-Fu".

Unlike the previous articles I've written, where I solely used one version of Blender through the entirety of the process, this time we might switch between the legacy Blender 2.4* and the recently-developed Blender 2.5*. The reason for this is that some Particle System features that we have been happily using in Blender 2.4* aren't merged into Blender 2.5* yet, making them unusable for the moment. I guess that is reasonable enough, since Blender 2.5* is still undergoing heavy development and is still in beta stage. But who knows, maybe at this time of writing they are already being developed, or already merged. So, in line with that, here are the basic requirements for you to get going:

Blender 2.49b (http://www.blender.org/download/get-blender/)
Blender 2.53 (http://www.blender.org/development/release-logs/blender-250/)
Basic Blender Particle System Knowledge (refer to http://www.packtpub.com/article/getting-started-with-blender-particle-system-1 for some info)
lots and lots of patience!

And just as a bonus, we decided to provide you with the .blend files for all of our examples illustrated here. So hop on!

Disintegration Effect

The disintegration effect has been a common and very popular visual effect in feature movies and advertisements, and is simply an eye candy. Often, it starts with an object in its full, original form; after a while, it dissipates and disappears as though it were made of dust. You can see this effect in one of the tests I did before, here: http://vimeo.com/6763010. Much of the inspiration came from Daniel (aka NionsChannel on YouTube), who has some really nice effects on his list.

The basic requirements for achieving this kind of effect are: a suitable particle system, a highly subdivided mesh, and a force field. With that said, let's go ahead and start tinkering, shall we?

Fire up Blender 2.49b and delete the default Cube (if any).

(Deleting the Default Cube)

Next, add or model the object of your choice. For the purposes of this tutorial, let's add a simple UV Sphere with 256 Segments and 256 Rings; however, if your machine can't handle the high subdivision levels, you can lower them to your liking.

NOTE: The higher the number of subdivisions you set, the finer and more seamless the "shards" will be. Additionally, you can always go to Edit Mode and press W > Subdivide to subdivide your mesh accordingly, or add a Subsurf modifier and apply it afterwards.

(Adding a UV Sphere)

(Highly Subdivided UV Sphere)

After the UV Sphere has been added, proceed to Edit Mode and check, over at the header, the number of faces it has. We'll use this as a base for the number of particles that we'll be adding later on for the actual simulation.

(UV Sphere Face Count)

While in Edit Mode and with all the vertices selected, press W, then choose Set Smooth to smooth out the geometry shading.
Now go back to Object Mode and proceed to Object (F7) in the Buttons Window, then to the Particle Buttons, and click on Add New under the Particle System tab to add a new particle system.

(Adding a New Particle System)

Rename the just-added particle system to something more relevant, like "disintegration". Then, in the Amount input, we'll change the default 1000 to the number of faces our UV Sphere currently has (that's the reason we checked a while back in Edit Mode). So, in this case, type in 65536. Each particle will then correspond to one face of our UV Sphere.

Next, change the End value to something shorter than the default 100. Let's try 40 for this example, which means all of the 65536 particles will be emitted within 40 frames. At the default rate of 25 frames per second, this means all those particles will be emitted in less than 2 seconds, which is what we want here.

Next is the Life value, which should be set to something longer than the default 50, which is a little too short for our simulation. Let's set Life to 150; this will make our particles stay in our simulation area longer and not disappear earlier than expected.

In the "Emit From:" panel, enable Random and Even, then leave the other defaults as they are. Finally, alter the values in the Physics tab and see which ones you are satisfied with. Check the screenshot below for some reference.

(Particle System Settings)

The next part is the icing on the cake, where we'll be adding a force field to generate the particle system's motion, as though it were affected by real-world effects like wind, turbulence, and so on. With your cursor centered on your UV Sphere, add an Empty, name it "force", and make sure the object rotation is cleared (ALT R) such that the local z-axis is oriented along the world z-axis.

The UV Sphere and the Empty ("force") should be in the same layer for the following effect to work.

(Empty "force" Added)

After adding our Empty object, we need to tell Blender how this object will affect our particle system. We'll do this by adding force values to this object. Forces in Blender act as external effectors for physics systems, which includes our particle system. You'll see what I mean in a while.

Let's select the UV Sphere object and add a new Material Datablock to the object.

(Adding a New Material to the Sphere)

After adding a new material datablock, you can go ahead and tweak the material and shader settings the way you want, just like I did mine (see the screenshot):

(Adding Material to the Sphere)

With the Sphere still selected, head over to the Texture buttons under Shading (F5) and add a new texture slot.

(Adding a New Texture)

Next, choose Clouds as the Texture Type, increase the Noise Size and Noise Depth accordingly, and leave the Noise Basis at the default Blender Original; this will ensure a better distinction for the form that our particle system will exhibit later on. And last but not least, increase the Contrast of the texture, which will exaggerate the shape of our particle form later on.

(Cloud Texture Settings)
Introduction to the Editing Operators in Blender

Packt
29 Mar 2011
6 min read
Blender 2.5 Materials and Textures Cookbook

Over 80 great recipes to create life-like Blender objects

For editing tasks, the 3D View has some features to help you in your work. These commands are easily executable from the Object/Mesh operators panel (T Key with the mouse cursor over the 3D View Editor). Here you have different options, depending on whether you are in Object Mode or Edit Mode. You can work in different Modes in Blender (Object Mode, Edit Mode, Sculpt Mode, Pose Mode, and so on), and accordingly you will be able to deal with different properties and operators. For modeling, you will use Edit Mode and Object Mode. (See illustration 1)

Illustration 1: The Mode Menu

Changing Mode in Blender modifies your workflow and offers you different options/properties according to the chosen mode.

Differences between Object and Edit Mode

To understand the differences between Object and Edit Mode easily, I'll make a comparison. In Blender, an Object is the physical representation of anything, and the Mesh is the material that it is made up of. If you have a plasticine puppet on the table, and you look at it, move it, or throw it in the trash, then your puppet is in Object Mode. If you want to modify it by stretching its legs, adding more plasticine, or something similar, then you need to manipulate what it is made of. In Blender, this action is done on the Mesh, in Edit Mode. It is important that you grasp this idea, so that you always know which Mode you are working in.

So, you are in the 3D View Editor and in Object Mode (by default). If you have the default user scene with a cube, just select the cube (selected by default) with RMB (Right Mouse Button) and delete it with X Key. The purpose of this article is not to create the whole model, but to introduce you to the operators used for creating it. I will begin by telling you how to start a single model.

Practical Exercise: Go to the Top view in the 3D View Editor by pressing 7 Key on the Numpad. Take care that you are not in the Persp view; you can check this in the top-left corner of the editor. If you are in "Top Persp", press 5 Key on the Numpad to go to "Top Ortho". If you were there from the start, just skip this step.

We'll add our first plane to the scene. This plane will become our main character at the end. To add this plane, just press Shift+A or click on Add | Plane in the top menu of the Info Editor. (See illustrations 2 and 3)

Illustration 2: Add Menu. Floating

The Add Object Menu is accessible within the 3D View Editor at any moment by pressing Shift+A, resulting in a floating menu.

Illustration 3: Add Menu

The Add Object Menu is also accessible from the Add option within the Info Editor (at the top by default).

Once we select Plane from that menu, we have a plane in our scene, in the 3D View Editor, in Top View. This plane is currently in Object Mode. You can check this in the Mode Menu we have seen before.

Now go to the Right view by pressing 3 Key on the Numpad, then press R Key to rotate the plane, enter -90, and press Enter after that. Then go to the Front view by pressing 1 Key on the Numpad.

We are now going to apply some very interesting Modifiers to the plane to help us in the modeling stage. We have our first plane in a vertical position, in Front View and in Object Mode. Press Tab Key to enter Edit Mode and W Key to Subdivide the plane. A floating menu appears after pressing W Key; this is the Specials Menu, with very interesting options. At the moment, Subdivide is the one we are interested in.
(See illustration 4)

Illustration 4: Specials Menu

The Specials Menu helps you in the editing stage with interesting operators like Subdivide, Merge, Remove Doubles, and so on. We will use some of these operators in future steps, so keep an eye on this panel.

Well, back to the model. We have our plane in the 3D View Editor, in Edit Mode, with Subdivide recently applied. Notice that you will have a plane subdivided into four little planes by default. The amount of subdivision can be modified in the Subdivide panel below the Mesh operators (if collapsed, press T Key with the mouse cursor over the 3D View Editor), as we have seen previously. There you have a Number of Cuts value for Subdivide. To keep on target, we will go with the default one.

So we now have a plane subdivided into four little ones. By default, all vertices are selected; to deselect them, press A Key. Now our plane has no vertices selected. The next step is to set up a Mirror Modifier to help us model just the left half (the right half for us, because we are looking at our model in Front View). With all vertices deselected, press B Key to activate Border Select and drag a box over the left side of the plane with LMB (Left Mouse Button). (See illustration 5)

Illustration 5: Border Select

To select or deselect vertices, edges, or faces in Edit Mode, you can use Border Select (B Key) or Circle Select (C Key). With the first, you drag a box to select. The second is a circle that selects by clicking and dragging, as in painting editors. The diameter of Circle Select can be adjusted with MMB (Middle Mouse Button).

With the left side of the plane selected, press X Key to delete those vertices. The Delete Menu will then offer you different delete options; in our case, just the Vertices option. We now have only the right side of the plane, to which we will apply the Mirror Modifier. For that, we must know at this point where to find modifiers. To manage different operators or actions on our objects or meshes, we have the Buttons, which will open the appropriate Property Editor according to our needs. (See illustration 6)

Illustration 6: Buttons Selector

The Buttons open the appropriate Property Editor.
OpenGL 4.0: Building a C++ Shader Program Class

Packt
03 Aug 2011
5 min read
OpenGL 4.0 Shading Language Cookbook

Over 60 highly focused, practical recipes to maximize your OpenGL Shading Language use

Getting ready

There's not much to prepare for with this one; you just need a build environment that supports C++. Also, I'll assume that you are using GLM for matrix and vector support. If not, just leave out the functions involving the GLM classes. The reader would benefit from the previous articles on Tips and Tricks for Getting Started with OpenGL and GLSL 4.0 and OpenGL 4.0: Using Uniform Blocks and Uniform Buffer Objects.

How to do it...

We'll use the following header file for our C++ class:

namespace GLSLShader {
   enum GLSLShaderType { VERTEX, FRAGMENT, GEOMETRY, TESS_CONTROL, TESS_EVALUATION };
};

class GLSLProgram
{
private:
   int handle;
   bool linked;
   string logString;

   int getUniformLocation(const char * name );
   bool fileExists( const string & fileName );

public:
   GLSLProgram();

   bool compileShaderFromFile( const char * fileName, GLSLShader::GLSLShaderType type );
   bool compileShaderFromString( const string & source, GLSLShader::GLSLShaderType type );
   bool link();
   void use();

   string log();

   int getHandle();
   bool isLinked();

   void bindAttribLocation( GLuint location, const char * name );
   void bindFragDataLocation( GLuint location, const char * name );

   void setUniform(const char *name, float x, float y, float z);
   void setUniform(const char *name, const vec3 & v);
   void setUniform(const char *name, const vec4 & v);
   void setUniform(const char *name, const mat4 & m);
   void setUniform(const char *name, const mat3 & m);
   void setUniform(const char *name, float val );
   void setUniform(const char *name, int val );
   void setUniform(const char *name, bool val );

   void printActiveUniforms();
   void printActiveAttribs();
};

The techniques involved in the implementation of these functions are covered in previous recipes. We'll discuss some of the design decisions in the next section.

How it works...

The state stored within a GLSLProgram object includes the handle to the OpenGL shader program object (handle), a Boolean variable indicating whether or not the program has been successfully linked (linked), and a string for storing the most recent log produced by a compile or link action (logString).

The two private functions are utilities used by other public functions. The getUniformLocation function is used by the setUniform functions to find the location of a uniform variable, and the fileExists function is used by compileShaderFromFile to check for file existence.

The constructor simply initializes linked to false, handle to zero, and logString to the empty string. The variable handle will be initialized by calling glCreateProgram when the first shader is compiled.

The compileShaderFromFile and compileShaderFromString functions attempt to compile a shader of the given type (the type is provided as the second argument). They create the shader object, load the source code, and then attempt to compile the shader. If successful, the shader object is attached to the OpenGL program object (by calling glAttachShader) and a value of true is returned. Otherwise, the log is retrieved and stored in logString, and a value of false is returned.

The link function simply attempts to link the program by calling glLinkProgram. It then checks the link status, and if successful, sets the variable linked to true and returns true. Otherwise, it gets the program log (by calling glGetProgramInfoLog), stores it in logString, and returns false.
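To make that flow concrete, here is a sketch of how link() could be implemented from the description above. Treat it as one possible implementation consistent with the class definition, not the book's exact listing:

// One possible implementation of link(), following the description above
bool GLSLProgram::link()
{
    if( linked ) return true;       // already linked, nothing to do
    if( handle <= 0 ) return false; // no program object yet

    glLinkProgram( handle );

    int status = 0;
    glGetProgramiv( handle, GL_LINK_STATUS, &status );
    if( GL_FALSE == status ) {
        // Retrieve the program log and store it in logString
        int length = 0;
        glGetProgramiv( handle, GL_INFO_LOG_LENGTH, &length );
        logString = "";
        if( length > 0 ) {
            char * log = new char[length];
            int written = 0;
            glGetProgramInfoLog( handle, length, &written, log );
            logString = log;
            delete [] log;
        }
        return false;
    }
    linked = true;
    return true;
}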
The use function simply calls glUseProgram if the program has already been successfully linked; otherwise, it does nothing. The log function returns the contents of logString, which should contain the log of the most recent compile or link action.

The functions getHandle and isLinked are simply "getter" functions that return the handle to the OpenGL program object and the value of the linked variable. The functions bindAttribLocation and bindFragDataLocation are wrappers around glBindAttribLocation and glBindFragDataLocation. Note that these functions should only be called prior to linking the program.

The setUniform overloaded functions are straightforward wrappers around the appropriate glUniform functions. Each of them calls getUniformLocation to query for the variable's location before calling the glUniform function.

Finally, the printActiveUniforms and printActiveAttribs functions are useful mainly for debugging purposes. They simply display a list of the active uniforms/attributes to standard output.

The following is a simple example of the use of the GLSLProgram class:

GLSLProgram prog;
if( ! prog.compileShaderFromFile("myshader.vert", GLSLShader::VERTEX)) {
   printf("Vertex shader failed to compile!\n%s", prog.log().c_str());
   exit(1);
}
if( ! prog.compileShaderFromFile("myshader.frag", GLSLShader::FRAGMENT)) {
   printf("Fragment shader failed to compile!\n%s", prog.log().c_str());
   exit(1);
}

// Possibly call bindAttribLocation or bindFragDataLocation here...

if( ! prog.link() ) {
   printf("Shader program failed to link!\n%s", prog.log().c_str());
   exit(1);
}

prog.use();

prog.printActiveUniforms();
prog.printActiveAttribs();

prog.setUniform("ModelViewMatrix", matrix);
prog.setUniform("LightPosition", 1.0f, 1.0f, 1.0f);
...

Summary

This article covered the topic of building a C++ shader program class.

Further resources on this subject:

OpenGL 4.0: Using Uniform Blocks and Uniform Buffer Objects [Article]
Tips and Tricks for Getting Started with OpenGL and GLSL 4.0 [Article]
The Basics of GLSL 4.0 Shaders [Article]
GLSL 4.0: Using Subroutines to Select Shader Functionality [Article]
GLSL 4.0: Discarding Fragments to Create a Perforated Look [Article]
Audio Playback

Packt
04 Sep 2013
17 min read
(For more resources related to this topic, see here.)

Understanding FMOD

One of the main reasons why I chose FMOD for this book is that it contains two separate APIs: the FMOD Ex Programmer's API, for low-level audio playback, and FMOD Designer, for high-level data-driven audio. This will allow us to cover game audio programming at different levels of abstraction without having to use entirely different technologies. Besides that reason, FMOD is also an excellent piece of software, with several advantages for game developers:

License: It is free for non-commercial use, and has reasonable licenses for commercial projects.
Cross-platform: It works across an impressive number of platforms. You can run it on Windows, Mac, Linux, Android, iOS, and on most of the modern video game consoles by Sony, Microsoft, and Nintendo.
Supported formats: It has native support for a huge range of audio file formats, which saves you the trouble of having to include other external libraries and decoders.
Programming languages: Not only can you use FMOD with C and C++, there are also bindings available for other programming languages, such as C# and Python.
Popularity: It is extremely popular, being widely considered the industry standard nowadays. It was used in games such as BioShock, Crysis, Diablo 3, Guitar Hero, StarCraft II, and World of Warcraft. It is also used to power several popular game engines, such as Unity3D and CryEngine.
Features: It is packed with features, covering everything from simple audio playback, streaming, and 3D sound, to interactive music, DSP effects, and low-level audio programming.

Installing FMOD Ex Programmer's API

Installing a C++ library can be a bit daunting at first. The good side is that once you have done it for the first time, the process is usually the same for every other library. Here are the steps that you should follow if you are using Microsoft Visual Studio:

Download the FMOD Ex Programmer's API from http://www.fmod.org and install it to a folder that you can remember, such as C:\FMOD.
Create a new empty project, and add at least one .cpp file to it. Then, right-click on the project node in the Solution Explorer, and select Properties from the list. For all the steps that follow, make sure that the Configuration option is set to All Configurations.
Navigate to C/C++ | General, and add C:\FMOD\api\inc to the list of Additional Include Directories (entries are separated by semicolons).
Navigate to Linker | General, and add C:\FMOD\api\lib to the list of Additional Library Directories.
Navigate to Linker | Input, and add fmodex_vc.lib to the list of Additional Dependencies.
Navigate to Build Events | Post-Build Event, and add xcopy /y "C:\FMOD\api\fmodex.dll" "$(OutDir)" to the Command Line list.
Include the <fmod.hpp> header file from your code.

Creating and managing the audio system

Everything that happens inside FMOD is managed by a class named FMOD::System, which we must start by instantiating with the FMOD::System_Create() function:

FMOD::System* system;
FMOD::System_Create(&system);

Notice that the function returns the system object through a parameter. You will see this pattern every time one of the FMOD functions needs to return a value, because they all reserve the regular return value for an error code. We will discuss error checking in a bit, but for now let us get the audio engine up and running.
Now that we have a system object instantiated, we also need to initialize it by calling the init() method:

system->init(100, FMOD_INIT_NORMAL, 0);

The first parameter specifies the maximum number of channels to allocate. This controls how many sounds you are able to play simultaneously. You can choose any number for this parameter, because the system performs some clever priority management behind the scenes and distributes the channels using the available resources. The second and third parameters customize the initialization process, and you can usually leave them as shown in the example.

Many features that we will use work properly only if we update the system object every frame. This is done by calling the update() method from inside your game loop:

system->update();

You should also remember to shut down the system object before your game ends, so that it can dispose of all resources. This is done by calling the release() method:

system->release();

Loading and streaming audio files

One of the greatest things about FMOD is that you can load virtually any audio file format with a single method call. To load an audio file into memory, use the createSound() method:

FMOD::Sound* sound;
system->createSound("sfx.wav", FMOD_DEFAULT, 0, &sound);

To stream an audio file from disk without having to store it in memory, use the createStream() method:

FMOD::Sound* stream;
system->createStream("song.ogg", FMOD_DEFAULT, 0, &stream);

Both methods take the path of the audio file as the first parameter, and return a pointer to an FMOD::Sound object through the fourth parameter, which you can use to play the sound. The paths in the previous examples are relative to the application path. If you are running these examples in Visual Studio, make sure that you copy the audio files into the output folder (for example, using a post-build event such as xcopy /y "$(ProjectDir)*.ogg" "$(OutDir)").

The choice between loading and streaming is mostly a tradeoff between memory and processing power. When you load an audio file, all of its data is uncompressed and stored in memory, which can take up a lot of space, but the computer can play it without much effort. Streaming, on the other hand, barely uses any memory, but the computer has to access the disk constantly and decode the audio data on the fly.

Another difference (in FMOD at least) is that when you stream a sound, you can only have one instance of it playing at any time. This limitation exists because there is only one decode buffer per stream. Therefore, for sound effects that have to be played multiple times simultaneously, you have to either load them into memory or open multiple concurrent streams. As a rule of thumb, streaming is great for music tracks, voice cues, and ambient tracks, while most sound effects should be loaded into memory.

The second and third parameters allow us to customize the behavior of the sound. There are many different options available, but the following list summarizes the ones we will be using the most.
Using FMOD_DEFAULT is equivalent to combining the first option of each of these categories:

FMOD_LOOP_OFF and FMOD_LOOP_NORMAL: These modes control whether the sound should only play once, or loop once it reaches the end
FMOD_HARDWARE and FMOD_SOFTWARE: These modes control whether the sound should be mixed in hardware (better performance) or software (more features)
FMOD_2D and FMOD_3D: These modes control whether to use 3D sound

We can combine multiple modes using the bitwise OR operator (for instance, FMOD_DEFAULT | FMOD_LOOP_NORMAL | FMOD_SOFTWARE). We can also tell the system to stream a sound even when we are using the createSound() method, by setting the FMOD_CREATESTREAM flag. In fact, the createStream() method is simply a shortcut for this.

When we do not need a sound anymore (or at the end of the game), we should dispose of it by calling the release() method of the sound object. We should always release the sounds we create, regardless of the audio system also being released.

sound->release();

Playing sounds

With the sounds loaded into memory or prepared for streaming, all that is left is telling the system to play them using the playSound() method:

FMOD::Channel* channel;
system->playSound(FMOD_CHANNEL_FREE, sound, false, &channel);

The first parameter selects in which channel the sound will play. You should usually let FMOD handle it automatically, by passing FMOD_CHANNEL_FREE as the parameter.

The second parameter is a pointer to the FMOD::Sound object that you want to play.

The third parameter controls whether the sound should start in a paused state, giving you a chance to modify some of its properties without the changes being audible. If you set this to true, you will also need to use the next parameter so that you can unpause it later.

The fourth parameter is an output parameter that returns a pointer to the FMOD::Channel object in which the sound will play. You can use this handle to control the sound in multiple ways, which will be the main topic of the next chapter. You can ignore this last parameter if you do not need any control over the sound, and simply pass in 0 in its place. This can be useful for non-looping, one-shot sounds.

system->playSound(FMOD_CHANNEL_FREE, sound, false, 0);

Checking for errors

So far, we have assumed that every operation will always work without errors. However, in a real scenario, there is room for a lot to go wrong. For example, we could try to load an audio file that does not exist.

In order to report errors, every function and method in FMOD has a return value of type FMOD_RESULT, which will only be equal to FMOD_OK if everything went right. It is up to the user to check this value and react accordingly:

FMOD_RESULT result = system->init(100, FMOD_INIT_NORMAL, 0);
if (result != FMOD_OK) {
   // There was an error, do something about it
}

For starters, it would be useful to know what the error was. However, since FMOD_RESULT is an enumeration, you will only see a number if you try to print it. Fortunately, there is a function called FMOD_ErrorString() inside the fmod_errors.h header file which will give you a complete description of the error.

You might also want to create a helper function to simplify the error checking process.
For instance, the following function will check for errors, print a description of the error to the standard output, and exit the application: #include <iostream> #include <fmod_errors.h> void ExitOnError(FMOD_RESULT result) { if (result != FMOD_OK) { std::cout << FMOD_ErrorString(result) << std::endl; exit(-1); } } You could then use that function to check for any critical errors that should cause the application to abort: ExitOnError(system->init(100, FMOD_INIT_NORMAL, 0)); The initialization process described earlier also assumes that everything will go as planned, but a real game should be prepared to deal with any errors. Fortunately, there is a template provided in the FMOD documentation which shows you how to write a robust initialization sequence. It is a bit long to cover here, so I urge you to refer to the file named Getting started with FMOD for Windows.pdf inside the documentation folder for more information. For clarity, all of the code examples will continue to be presented without error checking, but you should always check for errors in a real project. Project 1: building a simple audio manager In this project, we will be creating a SimpleAudioManager class that combines everything that was covered in this chapter. Creating a wrapper for an underlying system that exposes only the operations we need is known as the façade design pattern, and is very useful in order to keep things nice and simple. Since we have not seen how to manipulate sound yet, do not expect this class to be powerful enough to be used in a complex game. Its main purpose will be to let you load and play one-shot sound effects with very little code (which could in fact be enough for very simple games). It will also free you from the responsibility of dealing with sound objects directly (and having to release them) by allowing you to refer to any loaded sound by its filename. The following is an example of how to use the class: SimpleAudioManager audio; audio.Load("explosion.wav"); audio.Play("explosion.wav"); From an educational point of view, what is perhaps even more important is that you use this exercise as a way to get some ideas on how to adapt the technology to your needs. It will also form the basis of the next chapters in the book, where we will build systems that are more complex. Class definition Let us start by examining the class definition: #include <string> #include <map> #include <fmod.hpp> typedef std::map<std::string, FMOD::Sound*> SoundMap; class SimpleAudioManager { public: SimpleAudioManager(); ~SimpleAudioManager(); void Update(float elapsed); void Load(const std::string& path); void Stream(const std::string& path); void Play(const std::string& path); private: void LoadOrStream(const std::string& path, bool stream); FMOD::System* system; SoundMap sounds; }; From browsing through the list of public class members, it should be easy to deduce what it is capable of doing: The class can load audio files (given a path) using the Load() method The class can stream audio files (given a path) using the Stream() method The class can play audio files (given a path) using the Play() method (provided that they have been previously loaded or streamed) There is also an Update() method and a constructor/destructor pair to manage the sound system The private class members, on the other hand, can tell us a lot about the inner workings of the class: At the core of the class is an instance of FMOD::System responsible for driving the entire sound engine.
The class initializes the sound system in the constructor, and releases it in the destructor. Sounds are stored inside an associative container, which allows us to search for a sound given its file path. For this purpose, we will be relying on one of the C++ Standard Template Library (STL) associative containers, the std::map class, as well as the std::string class for storing the keys. Looking up a string key is a bit inefficient (compared to an integer, for example), but it should be fast enough for our needs. An advantage of having all the sounds stored in a single container is that we can easily iterate over them and release them from the class destructor. Since the code for loading and streaming audio files is almost the same, the common functionality has been extracted into a private method called LoadOrStream(), to which Load() and Stream() delegate all of the work. This prevents us from repeating the code needlessly. Initialization and destruction Now, let us walk through the implementation of each of these methods. First we have the class constructor, which is extremely simple, as the only thing that it needs to do is initialize the system object. SimpleAudioManager::SimpleAudioManager() { FMOD::System_Create(&system); system->init(100, FMOD_INIT_NORMAL, 0); } Updating is even simpler, consisting of a single method call: void SimpleAudioManager::Update(float elapsed) { system->update(); } The destructor, on the other hand, needs to take care of releasing the system object, as well as all the sound objects that were created. This process is not that complicated though. First, we iterate over the map of sounds, releasing each one in turn, and clearing the map at the end. The syntax might seem a bit strange if you have never used an STL iterator before, but all it means is to start at the beginning of the container, and keep advancing until we reach its end. Then we finish off by releasing the system object as usual. SimpleAudioManager::~SimpleAudioManager() { // Release every sound object and clear the map SoundMap::iterator iter; for (iter = sounds.begin(); iter != sounds.end(); ++iter) iter->second->release(); sounds.clear(); // Release the system object system->release(); system = 0; } Loading or streaming sounds Next in line are the Load() and Stream() methods, but let us examine the private LoadOrStream() method first. This method takes the path of the audio file as a parameter, and checks if it has already been loaded (by querying the sound map). If the sound has already been loaded, there is no need to do it again, so the method returns. Otherwise, the file is loaded (or streamed, depending on the value of the second parameter) and stored in the sound map under the appropriate key.
void SimpleAudioManager::LoadOrStream(const std::string& path, bool stream) { // Ignore call if sound is already loaded if (sounds.find(path) != sounds.end()) return; // Load (or stream) file into a sound object FMOD::Sound* sound; if (stream) system->createStream(path.c_str(), FMOD_DEFAULT, 0, &sound); else system->createSound(path.c_str(), FMOD_DEFAULT, 0, &sound); // Store the sound object in the map using the path as key sounds.insert(std::make_pair(path, sound)); } With the previous method in place, both the Load() and the Stream() methods can be trivially implemented as follows: void SimpleAudioManager::Load(const std::string& path) { LoadOrStream(path, false); } void SimpleAudioManager::Stream(const std::string& path) { LoadOrStream(path, true); } Playing sounds Finally, there is the Play() method, which works the other way around. It starts by checking if the sound has already been loaded, and does nothing if the sound is not found in the map. Otherwise, the sound is played using the default parameters. void SimpleAudioManager::Play(const std::string& path) { // Search for a matching sound in the map SoundMap::iterator sound = sounds.find(path); // Ignore call if no sound was found if (sound == sounds.end()) return; // Otherwise play the sound system->playSound(FMOD_CHANNEL_FREE, sound->second, false, 0); } We could have tried to automatically load the sound in the case when it was not found. In general, this is not a good idea, because loading a sound is a costly operation, and we do not want that happening during a critical gameplay section where it could slow the game down. Instead, we should stick to having separate load and play operations. A note about the code samples Although this is a book about audio, all the samples need an environment to run on. In order to keep the audio portion of the samples as clear as possible, we will also be using the Simple and Fast Multimedia Library 2.0 (SFML) (http://www.sfml-dev.org). This library can very easily take care of all the miscellaneous tasks, such as window creation, timing, graphics, and user input, which you will find in any game. For example, here is a complete sample using SFML and the SimpleAudioManager class. It creates a new window, loads a sound, runs a game loop at 60 frames per second, and plays the sound whenever the user presses the space key. #include <SFML/Window.hpp> #include "SimpleAudioManager.h" int main() { sf::Window window(sf::VideoMode(320, 240), "AudioPlayback"); sf::Clock clock; // Place your initialization logic here SimpleAudioManager audio; audio.Load("explosion.wav"); // Start the game loop while (window.isOpen()) { // Only run approx 60 times per second float elapsed = clock.getElapsedTime().asSeconds(); if (elapsed < 1.0f / 60.0f) continue; clock.restart(); sf::Event event; while (window.pollEvent(event)) { // Handle window events if (event.type == sf::Event::Closed) window.close(); // Handle user input if (event.type == sf::Event::KeyPressed && event.key.code == sf::Keyboard::Space) audio.Play("explosion.wav"); } // Place your update and draw logic here audio.Update(elapsed); } // Place your shutdown logic here return 0; } Summary In this article, we have seen some of the advantages of using the FMOD audio engine.
We saw how to install the FMOD Ex Programmer's API in Visual Studio, how to initialize, manage, and release the FMOD sound system, how to load or stream an audio file of any type from disk, how to play a sound that has been previously loaded by FMOD, how to check for errors in every FMOD function, and how to create a simple audio manager that encapsulates the act of loading and playing audio files behind a simple interface.
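As a final illustration that ties all of these pieces together, here is a minimal end-to-end sketch. It is not taken from the original text: the file names are placeholders, error checking is omitted for brevity, and the paused start simply demonstrates the third parameter of playSound() discussed earlier:

#include <fmod.hpp>

int main()
{
    // Create and initialize the sound system
    FMOD::System* system = 0;
    FMOD::System_Create(&system);
    system->init(100, FMOD_INIT_NORMAL, 0);

    // Load a short effect into memory; stream a looping music track from disk
    FMOD::Sound* effect = 0;
    FMOD::Sound* music = 0;
    system->createSound("explosion.wav", FMOD_DEFAULT, 0, &effect);
    system->createStream("song.ogg", FMOD_DEFAULT | FMOD_LOOP_NORMAL, 0, &music);

    // Start the music paused so its properties could be adjusted silently,
    // then unpause it through the channel handle
    FMOD::Channel* musicChannel = 0;
    system->playSound(FMOD_CHANNEL_FREE, music, true, &musicChannel);
    musicChannel->setPaused(false);

    // Fire-and-forget playback; we pass 0 because we do not need the channel
    system->playSound(FMOD_CHANNEL_FREE, effect, false, 0);

    // A real game loop would call this once per frame
    system->update();

    // Release the sounds first, then the system itself
    effect->release();
    music->release();
    system->release();
    return 0;
}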
Environmental Effects in 3D Graphics with XNA Game Studio 4.0

Packt
16 Dec 2010
10 min read
3D Graphics with XNA Game Studio 4.0 A step-by-step guide to adding the 3D graphics effects used by professionals to your XNA games. Improve the appearance of your games by implementing the same techniques used by professionals in the game industry Learn the fundamentals of 3D graphics, including common 3D math and the graphics pipeline Create an extensible system to draw 3D models and other effects, and learn the skills to create your own effects and animate them We will look at a technique called region growing to add plants and trees to the terrain's surface, and finish by combining the terrain with our sky box, water, and billboarding effects to create a mountain scene: Building a terrain from a heightmap A heightmap is a 2D image that stores, in each pixel, the height of the corresponding point on a grid of vertices. The pixel values, once normalized to the 0 to 1 range, are multiplied by the maximum height of the terrain to get the final height of each vertex. We build a terrain out of vertices and indices as a large rectangular grid with the same number of vertices as the number of pixels in the heightmap. Let's start by creating a new Terrain class. This class will keep track of everything needed to render our terrain: textures, the effect, vertex and index buffers, and so on. public class Terrain { VertexPositionNormalTexture[] vertices; // Vertex array VertexBuffer vertexBuffer; // Vertex buffer int[] indices; // Index array IndexBuffer indexBuffer; // Index buffer float[,] heights; // Array of vertex heights float height; // Maximum height of terrain float cellSize; // Distance between vertices on x and z axes int width, length; // Number of vertices on x and z axes int nVertices, nIndices; // Number of vertices and indices Effect effect; // Effect used for rendering GraphicsDevice GraphicsDevice; // Graphics device to draw with Texture2D heightMap; // Heightmap texture } The constructor will initialize many of these values: public Terrain(Texture2D HeightMap, float CellSize, float Height, GraphicsDevice GraphicsDevice, ContentManager Content) { this.heightMap = HeightMap; this.width = HeightMap.Width; this.length = HeightMap.Height; this.cellSize = CellSize; this.height = Height; this.GraphicsDevice = GraphicsDevice; effect = Content.Load<Effect>("TerrainEffect"); // 1 vertex per pixel nVertices = width * length; // (Width-1) * (Length-1) cells, 2 triangles per cell, 3 indices per // triangle nIndices = (width - 1) * (length - 1) * 6; vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(VertexPositionNormalTexture), nVertices, BufferUsage.WriteOnly); indexBuffer = new IndexBuffer(GraphicsDevice, IndexElementSize.ThirtyTwoBits, nIndices, BufferUsage.WriteOnly); } Before we can generate any normals or indices, we need to know the dimensions of our grid. We know that the width and length are simply the width and height of our heightmap, but we need to extract the height values from the heightmap.
We do this with the getHeights() function: private void getHeights() { // Extract pixel data Color[] heightMapData = new Color[width * length]; heightMap.GetData<Color>(heightMapData); // Create heights[,] array heights = new float[width, length]; // For each pixel for (int y = 0; y < length; y++) for (int x = 0; x < width; x++) { // Get color value (0 - 255) float amt = heightMapData[y * width + x].R; // Scale to (0 - 1) amt /= 255.0f; // Multiply by max height to get final height heights[x, y] = amt * height; } } This will initialize the heights[,] array, which we can then use to build our vertices. When building vertices, we simply lay out a vertex for each pixel in the heightmap, spaced according to the cellSize variable. Note that this will create (width – 1) * (length – 1) "cells"—each with two triangles: The function that does this is as shown: private void createVertices() { vertices = new VertexPositionNormalTexture[nVertices]; // Calculate the position offset that will center the terrain at (0, 0, 0) Vector3 offsetToCenter = -new Vector3(((float)width / 2.0f) * cellSize, 0, ((float)length / 2.0f) * cellSize); // For each pixel in the image for (int z = 0; z < length; z++) for (int x = 0; x < width; x++) { // Find position based on grid coordinates and height in // heightmap Vector3 position = new Vector3(x * cellSize, heights[x, z], z * cellSize) + offsetToCenter; // UV coordinates range from (0, 0) at grid location (0, 0) to // (1, 1) at grid location (width, length) Vector2 uv = new Vector2((float)x / width, (float)z / length); // Create the vertex vertices[z * width + x] = new VertexPositionNormalTexture( position, Vector3.Zero, uv); } } When we create our terrain's index buffer, we need to lay out two triangles for each cell in the terrain. All we need to do is find the indices of the vertices at each corner of each cell, and create the triangles by specifying those indices in clockwise order for two triangles. For example, to create the triangles for the first cell in the preceding screenshot, we would specify the triangles as [0, 1, 4] and [4, 1, 5]. private void createIndices() { indices = new int[nIndices]; int i = 0; // For each cell for (int x = 0; x < width - 1; x++) for (int z = 0; z < length - 1; z++) { // Find the indices of the corners int upperLeft = z * width + x; int upperRight = upperLeft + 1; int lowerLeft = upperLeft + width; int lowerRight = lowerLeft + 1; // Specify upper triangle indices[i++] = upperLeft; indices[i++] = upperRight; indices[i++] = lowerLeft; // Specify lower triangle indices[i++] = lowerLeft; indices[i++] = upperRight; indices[i++] = lowerRight; } } The last thing we need to calculate for each vertex is the normals. Because we are creating the terrain from scratch, we will need to calculate all of the normals based only on the height data that we are given. This is actually much easier than it sounds: to calculate the normals we simply calculate the normal of each triangle of the terrain and add that normal to each vertex involved in the triangle. Once we have done this for each triangle, we simply normalize again, averaging the influences of each triangle connected to each vertex. 
private void genNormals() { // For each triangle for (int i = 0; i < nIndices; i += 3) { // Find the position of each corner of the triangle Vector3 v1 = vertices[indices[i]].Position; Vector3 v2 = vertices[indices[i + 1]].Position; Vector3 v3 = vertices[indices[i + 2]].Position; // Cross the vectors between the corners to get the normal Vector3 normal = Vector3.Cross(v1 - v2, v1 - v3); normal.Normalize(); // Add the influence of the normal to each vertex in the // triangle vertices[indices[i]].Normal += normal; vertices[indices[i + 1]].Normal += normal; vertices[indices[i + 2]].Normal += normal; } // Average the influences of the triangles touching each // vertex for (int i = 0; i < nVertices; i++) vertices[i].Normal.Normalize(); } We'll finish off the constructor by calling these functions in order and then setting the vertices and indices that we created into their respective buffers: createVertices(); createIndices(); genNormals(); vertexBuffer.SetData<VertexPositionNormalTexture>(vertices); indexBuffer.SetData<int>(indices); Now that we've created the framework for this class, let's create the TerrainEffect.fx effect. This effect will, for the moment, be responsible for some simple directional lighting and texture mapping. We'll need a few effect parameters: float4x4 View; float4x4 Projection; float3 LightDirection = float3(1, -1, 0); float TextureTiling = 1; texture2D BaseTexture; sampler2D BaseTextureSampler = sampler_state { Texture = <BaseTexture>; AddressU = Wrap; AddressV = Wrap; MinFilter = Anisotropic; MagFilter = Anisotropic; }; The TextureTiling parameter will determine how many times our texture is repeated across the terrain's surface—simply stretching it across the terrain would look bad because it would need to be stretched to a very large size. "Tiling" it across the terrain will look much better. We will need a very standard vertex shader: struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float3 Normal : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, mul(View, Projection)); output.Normal = input.Normal; output.UV = input.UV; return output; } The pixel shader is also very standard, except that we multiply the texture coordinates by the TextureTiling parameter. This works because the texture sampler's address mode is set to "wrap", and thus the sampler will simply wrap the texture coordinates past the edge of the texture, creating the tiling effect. 
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float light = dot(normalize(input.Normal), normalize(LightDirection)); light = clamp(light + 0.4f, 0, 1); // Simple ambient lighting float3 tex = tex2D(BaseTextureSampler, input.UV * TextureTiling); return float4(tex * light, 1); } The technique definition is the same as our other effects: technique Technique1 { pass Pass1 { VertexShader = compile vs_2_0 VertexShaderFunction(); PixelShader = compile ps_2_0 PixelShaderFunction(); } } In order to use the effect with our terrain, we'll need to add a few more member variables to the Terrain class: Texture2D baseTexture; float textureTiling; Vector3 lightDirection; These values will be set from the constructor: public Terrain(Texture2D HeightMap, float CellSize, float Height, Texture2D BaseTexture, float TextureTiling, Vector3 LightDirection, GraphicsDevice GraphicsDevice, ContentManager Content) { this.baseTexture = BaseTexture; this.textureTiling = TextureTiling; this.lightDirection = LightDirection; // etc... Finally, we can simply set these effect parameters along with the View and Projection parameters in the Draw() function: effect.Parameters["BaseTexture"].SetValue(baseTexture); effect.Parameters["TextureTiling"].SetValue(textureTiling); effect.Parameters["LightDirection"].SetValue(lightDirection); Let's now add the terrain to our game. We'll need a new member variable in the Game1 class: Terrain terrain; We'll need to initialize it in the LoadContent() method: terrain = new Terrain(Content.Load<Texture2D>("terrain"), 30, 4800, Content.Load<Texture2D>("grass"), 6, new Vector3(1, -1, 0), GraphicsDevice, Content); Finally, we can draw it in the Draw() function: terrain.Draw(camera.View, camera.Projection); Multitexturing Our terrain looks pretty good as it is, but to make it more believable the texture applied to it needs to vary—snow and rocks at the peaks, for example. To do this, we will use a technique called multitexturing, which uses the red, blue, and green channels of a texture as a guide as to where to draw textures that correspond to those channels. For example, sand may correspond to red, snow to blue, and rock to green. Adding snow would then be as simple as painting blue onto the areas of this "texture map" that correspond with peaks on the heightmap. We will also have one extra texture that fills in the area where no colors have been painted onto the texture map—grass, for example. To begin with, we will need to modify our texture parameters on our effect from one texture to five: the texture map, the base texture, and the three color channel mapped textures. 
texture RTexture; sampler RTextureSampler = sampler_state { texture = <RTexture>; AddressU = Wrap; AddressV = Wrap; MinFilter = Anisotropic; MagFilter = Anisotropic; }; texture GTexture; sampler GTextureSampler = sampler_state { texture = <GTexture>; AddressU = Wrap; AddressV = Wrap; MinFilter = Anisotropic; MagFilter = Anisotropic; }; texture BTexture; sampler BTextureSampler = sampler_state { texture = <BTexture>; AddressU = Wrap; AddressV = Wrap; MinFilter = Anisotropic; MagFilter = Anisotropic; }; texture BaseTexture; sampler BaseTextureSampler = sampler_state { texture = <BaseTexture>; AddressU = Wrap; AddressV = Wrap; MinFilter = Anisotropic; MagFilter = Anisotropic; }; texture WeightMap; sampler WeightMapSampler = sampler_state { texture = <WeightMap>; AddressU = Clamp; AddressV = Clamp; MinFilter = Linear; MagFilter = Linear; }; Second, we need to update our pixel shader to draw these textures onto the terrain: float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float light = dot(normalize(input.Normal), normalize( LightDirection)); light = clamp(light + 0.4f, 0, 1); float3 rTex = tex2D(RTextureSampler, input.UV * TextureTiling); float3 gTex = tex2D(GTextureSampler, input.UV * TextureTiling); float3 bTex = tex2D(BTextureSampler, input.UV * TextureTiling); float3 base = tex2D(BaseTextureSampler, input.UV * TextureTiling); float3 weightMap = tex2D(WeightMapSampler, input.UV); float3 output = clamp(1.0f - weightMap.r - weightMap.g - weightMap.b, 0, 1); output *= base; output += weightMap.r * rTex + weightMap.g * gTex + weightMap.b * bTex; return float4(output * light, 1); } We'll need to add a way to set these values to the Terrain class: public Texture2D RTexture, BTexture, GTexture, WeightMap; All we need to do now is set these values to the effect in the Draw() function: effect.Parameters["RTexture"].SetValue(RTexture); effect.Parameters["GTexture"].SetValue(GTexture); effect.Parameters["BTexture"].SetValue(BTexture); effect.Parameters["WeightMap"].SetValue(WeightMap); To use multitexturing in our game, we'll need to set these values in the Game1 class: terrain.WeightMap = Content.Load<Texture2D>("weightMap"); terrain.RTexture = Content.Load<Texture2D>("sand"); terrain.GTexture = Content.Load<Texture2D>("rock"); terrain.BTexture = Content.Load<Texture2D>("snow");
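Painting the weight map by hand in an image editor gives the most control, but nothing stops you from generating one from the height data instead. The following sketch is not part of the original example (the GenerateWeightMap() helper and its 70 percent snow line are hypothetical), but it shows one way to fade snow into the blue channel above a given relative altitude, using the heights[,] array and other members the Terrain class already stores:

// Hypothetical method for the Terrain class: it relies on the existing
// private members heights, width, length, height, and GraphicsDevice
public Texture2D GenerateWeightMap()
{
    Color[] data = new Color[width * length];
    for (int z = 0; z < length; z++)
        for (int x = 0; x < width; x++)
        {
            // Relative height of this vertex, in the 0 to 1 range
            float h = heights[x, z] / height;
            // Fade snow in between 70% and 100% of the maximum height
            float snow = MathHelper.Clamp((h - 0.7f) / 0.3f, 0, 1);
            data[z * width + x] = new Color(0, 0, (int)(snow * 255));
        }
    Texture2D map = new Texture2D(GraphicsDevice, width, length);
    map.SetData<Color>(data);
    return map;
}

In Game1, you could then assign terrain.WeightMap = terrain.GenerateWeightMap() instead of loading the hand-painted texture, leaving the red and green channels free for rules of your own (painting rock onto steep slopes, for example).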
Character Head Modeling in Blender: Part 2

Packt
29 Sep 2009
5 min read
Modeling: the ear Ask just about any beginning modeler (and many experienced ones) and they'll tell you that the ear is a challenge! There are so many turns and folds in the human ear that it poses a modeling nightmare. But, that being said, it is also an excellent exercise in clean modeling. The ear alone, once successfully tackled, will make you a better modeler all around. The way we are going to go about this is much the same way we got started with the edgeloops: Position your 3D Cursor at the center of the ear from both the Front and the Side views Add a new plane with Spacebar > Add > Plane Extrude along the outer shape of the ear We are working strictly from the Side View for the first bit. Use the same process of extruding and moving to do the top, inside portion of the ear: Watch your topology closely; it can become very messy, very fast! Continue for the bottom: The next step is to rotate your view around with your MMB to a nice angle and Extrude out along the X-axis: Select the main loop of the ear E > Region Before placing the new faces, hit X to lock the movement to the X-axis. From here it's just a matter of shaping the ear by moving vertices around to get the proper depth and definition on the ear. It will also save you some time editing if you: Select the whole ear by hovering your mouse over it and hitting L Hit R > Z to rotate along the Z-axis Then do the same along the Y-axis, R > Y This will just better position the ear. Connecting the ear to the head can be a bit of a challenge, because the ear contains far more vertices than the neighboring parts of the head. This can be solved by using some clever modeling techniques. Let's start by extruding in the outside edge of the ear to create the back side: Now is where it gets tricky; it's best to just follow the screenshot: You will notice that I have used the direction of my edges coming in from the eye to increase my face count, thus making it easier to connect the ear. One of the general rules of thumb when it comes to good topology is to stay away from triangles. We want to keep our mesh comprised of strictly quads, or faces with four sides to them. Once again, we can use the same techniques seen before, and some of the tricks we just used on the ear, to connect the back of the ear to the head: You will notice that I have disabled the mirror modifier's display while in Edit Mode; this makes working on the inside of the head much easier. This can be done via the modifier panel. Final: tweaking And that's it! After connecting the ear to the head, the model is essentially finished. At this point it is a good idea to give your whole model the once-over, checking it out from all different angles, perspective vs. orthographic modes, and so on. If you find yourself needing to tweak the proportions (you almost always do), a really easy way to do it is by using the Proportional Editing tool, which can be accessed by hitting O. This allows you to move the mesh around with a fall-off, basically a magnet, such that anything within the radius will move with your selection. Here is the final model: Conclusion Thank you all for reading this and I hope you have found it helpful in your head modeling endeavours. At this point, the best thing you can do is...do it all over again! Repetition in any kind of modeling always helps, but it's particularly true with head modeling. Also, always use references to help you along. You may hear some people telling you not to use references, that it makes your work stale and unoriginal.
This is absolutely not true (assuming you're not just copying down the image and calling it your own...). References are an excellent resource for everything from proportion, to perspective, to anatomy, and more. If used properly, they will show in your work; they really do help. From here, just keep hacking away at it. Thanks for reading and best of luck! Happy blending!
Advanced Lighting in 3D Graphics with XNA Game Studio 4.0

Packt
22 Dec 2010
9 min read
Implementing a point light with HLSL A point light is just a light that shines equally in all directions around itself (like a light bulb) and falls off over a given distance: In this case, a point light is simply modeled as a directional light that will slowly fade to darkness over a given distance. To achieve a linear attenuation, we would simply divide the distance between the light and the object by the attenuation distance, invert the result (subtract from 1), and then multiply the lambertian lighting with the result. This would cause an object directly next to the light source to be fully lit, and an object at the maximum attenuation distance to be completely unlit. However, in practice, we will raise the result of the division to a given power before inverting it to achieve a more exponential falloff: Katt = 1 – (d / a)^f In the previous equation, Katt is the brightness scalar that we will multiply the lighting amount by, d is the distance between the vertex and light source, a is the distance at which the light should stop affecting objects, and f is the falloff exponent that determines the shape of the curve. We can implement this easily with HLSL and a new Material class. The new Material class is similar to the material for a directional light, but specifies a light position rather than a light direction. For the sake of simplicity, the effect we will use will not calculate specular highlights, so the material does not include a "specularity" value. It also includes new values, LightAttenuation and LightFalloff, which specify the distance at which the light is no longer visible and what power to raise the division to.
public class PointLightMaterial : Material { public Vector3 AmbientLightColor { get; set; } public Vector3 LightPosition { get; set; } public Vector3 LightColor { get; set; } public float LightAttenuation { get; set; } public float LightFalloff { get; set; } public PointLightMaterial() { AmbientLightColor = new Vector3(.15f, .15f, .15f); LightPosition = new Vector3(0, 0, 0); LightColor = new Vector3(.85f, .85f, .85f); LightAttenuation = 5000; LightFalloff = 2; } public override void SetEffectParameters(Effect effect) { if (effect.Parameters["AmbientLightColor"] != null) effect.Parameters["AmbientLightColor"].SetValue( AmbientLightColor); if (effect.Parameters["LightPosition"] != null) effect.Parameters["LightPosition"].SetValue(LightPosition); if (effect.Parameters["LightColor"] != null) effect.Parameters["LightColor"].SetValue(LightColor); if (effect.Parameters["LightAttenuation"] != null) effect.Parameters["LightAttenuation"].SetValue( LightAttenuation); if (effect.Parameters["LightFalloff"] != null) effect.Parameters["LightFalloff"].SetValue(LightFalloff); } } The new effect has parameters to reflect those values: float4x4 World; float4x4 View; float4x4 Projection; float3 AmbientLightColor = float3(.15, .15, .15); float3 DiffuseColor = float3(.85, .85, .85); float3 LightPosition = float3(0, 0, 0); float3 LightColor = float3(1, 1, 1); float LightAttenuation = 5000; float LightFalloff = 2; texture BasicTexture; sampler BasicTextureSampler = sampler_state { texture = <BasicTexture>; }; bool TextureEnabled = true; The vertex shader output struct now includes a copy of the vertex's world position that will be used to calculate the light falloff (attenuation) and light direction. struct VertexShaderInput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 UV : TEXCOORD0; float3 Normal : TEXCOORD1; float4 WorldPosition : TEXCOORD2; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; float4 worldPosition = mul(input.Position, World); float4 viewPosition = mul(worldPosition, View); output.Position = mul(viewPosition, Projection); output.WorldPosition = worldPosition; output.UV = input.UV; output.Normal = mul(input.Normal, World); return output; } Finally, the pixel shader calculates the light much the same way that the directional light did, but uses a per-vertex light direction rather than a global light direction. It also determines how far along the attenuation value the vertex's position is and darkens it accordingly. 
The texture, ambient light, and diffuse color are calculated as usual: float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float3 diffuseColor = DiffuseColor; if (TextureEnabled) diffuseColor *= tex2D(BasicTextureSampler, input.UV).rgb; float3 totalLight = float3(0, 0, 0); totalLight += AmbientLightColor; float3 lightDir = normalize(LightPosition - input.WorldPosition); float diffuse = saturate(dot(normalize(input.Normal), lightDir)); float d = distance(LightPosition, input.WorldPosition); float att = 1 - pow(clamp(d / LightAttenuation, 0, 1), LightFalloff); totalLight += diffuse * att * LightColor; return float4(diffuseColor * totalLight, 1); } We can now achieve the result shown earlier using the following scene setup from the Game1 class: models.Add(new CModel(Content.Load<Model>("teapot"), new Vector3(0, 60, 0), Vector3.Zero, new Vector3(60), GraphicsDevice)); models.Add(new CModel(Content.Load<Model>("ground"), Vector3.Zero, Vector3.Zero, Vector3.One, GraphicsDevice)); Effect simpleEffect = Content.Load<Effect>("PointLightEffect"); models[0].SetModelEffect(simpleEffect, true); models[1].SetModelEffect(simpleEffect, true); PointLightMaterial mat = new PointLightMaterial(); mat.LightPosition = new Vector3(0, 1500, 1500); mat.LightAttenuation = 3000; models[0].Material = mat; models[1].Material = mat; camera = new FreeCamera(new Vector3(0, 300, 1600), MathHelper.ToRadians(0), // No yaw MathHelper.ToRadians(5), // Pitched up 5 degrees GraphicsDevice); Implementing a spot light with HLSL A spot light is similar in theory to a point light—in that it fades out after a given distance. However, the fading is not done around the light source, but is based on the angle between the vector from the light to the vertex and the light's actual direction. If the angle is larger than the light's "cone angle", we will not light the vertex. Katt = (dot(p - lp, ld) / cos(a))^f In the previous equation, Katt is still the scalar that we will multiply our diffuse lighting with, p is the position of the vertex, lp is the position of the light, ld is the direction of the light, a is the cone angle, and f is the falloff exponent. (Note that the material class shown next passes half of its ConeAngle value, converted to radians, to the effect, so a ConeAngle of 30 degrees corresponds to cos(a) of roughly 0.97.)
Our new spot light material reflects these values: public class SpotLightMaterial : Material { public Vector3 AmbientLightColor { get; set; } public Vector3 LightPosition { get; set; } public Vector3 LightColor { get; set; } public Vector3 LightDirection { get; set; } public float ConeAngle { get; set; } public float LightFalloff { get; set; } public SpotLightMaterial() { AmbientLightColor = new Vector3(.15f, .15f, .15f); LightPosition = new Vector3(0, 3000, 0); LightColor = new Vector3(.85f, .85f, .85f); ConeAngle = 30; LightDirection = new Vector3(0, -1, 0); LightFalloff = 20; } public override void SetEffectParameters(Effect effect) { if (effect.Parameters["AmbientLightColor"] != null) effect.Parameters["AmbientLightColor"].SetValue( AmbientLightColor); if (effect.Parameters["LightPosition"] != null) effect.Parameters["LightPosition"].SetValue(LightPosition); if (effect.Parameters["LightColor"] != null) effect.Parameters["LightColor"].SetValue(LightColor); if (effect.Parameters["LightDirection"] != null) effect.Parameters["LightDirection"].SetValue(LightDirection); if (effect.Parameters["ConeAngle"] != null) effect.Parameters["ConeAngle"].SetValue( MathHelper.ToRadians(ConeAngle / 2)); if (effect.Parameters["LightFalloff"] != null) effect.Parameters["LightFalloff"].SetValue(LightFalloff); } } Now we can create a new effect that will render a spot light. We will start by copying the point light's effect and making the following changes to the second block of effect parameters: float3 AmbientLightColor = float3(.15, .15, .15); float3 DiffuseColor = float3(.85, .85, .85); float3 LightPosition = float3(0, 5000, 0); float3 LightDirection = float3(0, -1, 0); float ConeAngle = 90; float3 LightColor = float3(1, 1, 1); float LightFalloff = 20; Finally, we can update the pixel shader to perform the lighting calculations: float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float3 diffuseColor = DiffuseColor; if (TextureEnabled) diffuseColor *= tex2D(BasicTextureSampler, input.UV).rgb; float3 totalLight = float3(0, 0, 0); totalLight += AmbientLightColor; float3 lightDir = normalize(LightPosition - input.WorldPosition); float diffuse = saturate(dot(normalize(input.Normal), lightDir)); // (dot(p - lp, ld) / cos(a))^f float d = dot(-lightDir, normalize(LightDirection)); float a = cos(ConeAngle); float att = 0; if (a < d) att = 1 - pow(clamp(a / d, 0, 1), LightFalloff); totalLight += diffuse * att * LightColor; return float4(diffuseColor * totalLight, 1); } If we were to then set up the material as follows and use our new effect, we would see the following result: SpotLightMaterial mat = new SpotLightMaterial(); mat.LightDirection = new Vector3(0, -1, -1); mat.LightPosition = new Vector3(0, 3000, 2700); mat.LightFalloff = 200; Drawing multiple lights Now that we can draw one light, the natural question to ask is how to draw more than one light. Well this, unfortunately, is not simple. There are a number of approaches—the easiest of which is to simply loop through a certain number of lights in the pixel shader and sum a total lighting value. Let's create a new shader based on the directional light effect that we created in the last chapter to do just that. We'll start by copying that effect, then modifying some of the effect parameters as follows. 
Notice that instead of a single light direction and color, we instead have an array of three of each, allowing us to draw up to three lights: #define NUMLIGHTS 3 float3 DiffuseColor = float3(1, 1, 1); float3 AmbientColor = float3(0.1, 0.1, 0.1); float3 LightDirection[NUMLIGHTS]; float3 LightColor[NUMLIGHTS]; float SpecularPower = 32; float3 SpecularColor = float3(1, 1, 1); Second, we need to update the pixel shader to do the lighting calculations one time per light: float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { // Start with diffuse color float3 color = DiffuseColor; // Texture if necessary if (TextureEnabled) color *= tex2D(BasicTextureSampler, input.UV); // Start with ambient lighting float3 lighting = AmbientColor; float3 normal = normalize(input.Normal); float3 view = normalize(input.ViewDirection); // Perform lighting calculations per light for (int i = 0; i < NUMLIGHTS; i++) { float3 lightDir = normalize(LightDirection[i]); // Add lambertian lighting lighting += saturate(dot(lightDir, normal)) * LightColor[i]; float3 refl = reflect(lightDir, normal); // Add specular highlights lighting += pow(saturate(dot(refl, view)), SpecularPower) * SpecularColor; } // Calculate final color float3 output = saturate(lighting) * color; return float4(output, 1); } We now need a new Material class to work with this shader: public class MultiLightingMaterial : Material { public Vector3 AmbientColor { get; set; } public Vector3[] LightDirection { get; set; } public Vector3[] LightColor { get; set; } public Vector3 SpecularColor { get; set; } public MultiLightingMaterial() { AmbientColor = new Vector3(.1f, .1f, .1f); LightDirection = new Vector3[3]; LightColor = new Vector3[] { Vector3.One, Vector3.One, Vector3.One }; SpecularColor = new Vector3(1, 1, 1); } public override void SetEffectParameters(Effect effect) { if (effect.Parameters["AmbientColor"] != null) effect.Parameters["AmbientColor"].SetValue(AmbientColor); if (effect.Parameters["LightDirection"] != null) effect.Parameters["LightDirection"].SetValue(LightDirection); if (effect.Parameters["LightColor"] != null) effect.Parameters["LightColor"].SetValue(LightColor); if (effect.Parameters["SpecularColor"] != null) effect.Parameters["SpecularColor"].SetValue(SpecularColor); } } If we wanted to implement the three directional light systems found in the BasicEffect class, we would now just need to copy the light direction values over to our shader: Effect simpleEffect = Content.Load<Effect>("MultiLightingEffect"); models[0].SetModelEffect(simpleEffect, true); models[1].SetModelEffect(simpleEffect, true); MultiLightingMaterial mat = new MultiLightingMaterial(); BasicEffect effect = new BasicEffect(GraphicsDevice); effect.EnableDefaultLighting(); mat.LightDirection[0] = -effect.DirectionalLight0.Direction; mat.LightDirection[1] = -effect.DirectionalLight1.Direction; mat.LightDirection[2] = -effect.DirectionalLight2.Direction; mat.LightColor = new Vector3[] { new Vector3(0.5f, 0.5f, 0.5f), new Vector3(0.5f, 0.5f, 0.5f), new Vector3(0.5f, 0.5f, 0.5f) }; models[0].Material = mat; models[1].Material = mat;
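Since the material is a plain C# object, nothing stops you from changing its values every frame; the new values reach the shader when SetEffectParameters is next applied during drawing. As a small illustration that is not in the original text (it assumes mat is kept as a field in Game1), the following Update snippet swings the first light around the Y axis:

// Hypothetical animation: orbit the first directional light around the Y axis
float angle = (float)gameTime.TotalGameTime.TotalSeconds;
mat.LightDirection[0] = new Vector3(
    (float)Math.Sin(angle), // X component sweeps around the circle
    -1,                     // Keep the light angled downwards
    (float)Math.Cos(angle));

This kind of tweak is also a cheap way to verify that each of the three lights is really contributing to the final image.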
Event-driven Programming

Packt
06 Feb 2015
22 min read
In this article by Alan Thorn, author of the book Mastering Unity Scripting, we will cover the following topics: Events Event management (For more resources related to this topic, see here.) The Update events for MonoBehaviour objects seem to offer a convenient place for executing code that should perform regularly over time, spanning multiple frames, and possibly multiple scenes. When creating sustained behaviors over time, such as artificial intelligence for enemies or continuous motion, it may seem that there are almost no alternatives to filling an Update function with many if and switch statements, branching your code in different directions depending on what your objects need to do at the current time. But, when the Update events are seen this way, as a default place to implement prolonged behaviors, it can lead to severe performance problems for larger and more complex games. On deeper analysis, it's not difficult to see why this would be the case. Typically, games are full of so many behaviors, and there are so many things happening at once in any one scene, that implementing them all through the Update functions is simply unfeasible. Consider the enemy characters alone: they need to know when the player enters and leaves their line of sight, when their health is low, when their ammo has expired, when they're standing on harmful terrain, when they're taking damage, when they're moving or not, and lots more. On thinking initially about this range of behaviors, it seems that all of them require constant and continuous attention, because enemies should always know, instantly, when changes in these properties occur as a result of player input. That is, perhaps, the main reason why the Update function seems to be the most suitable place in these situations, but there are better alternatives, namely, event-driven programming. By seeing your game and your application in terms of events, you can make considerable savings in performance. This article then considers the issue of events and how to manage them game wide. Events Game worlds are fully deterministic systems; in Unity, the scene represents a shared 3D Cartesian space and timeline inside which finite GameObjects exist. Things only happen within this space when the game logic and code permit them to. For example, objects can only move when there is code somewhere that tells them to do so, and under specific conditions, such as when the player presses specific buttons on the keyboard. Notice from the example that behaviors are not simply random but are interconnected; objects move only when keyboard events occur. There is an important connection established between the actions, where one action entails another. These connections or linkages are referred to as events; each unique connection being a single event. Events are not active but passive; they represent moments of opportunity but not action in themselves, such as a key press, a mouse click, an object entering a collider volume, the player being attacked, and so on. These are examples of events and none of them say what the program should actually do, but only the kind of scenario that just happened. Event-driven programming starts with the recognition of events as a general concept and comes to see almost every circumstance in a game as an instantiation of an event; that is, as an event situated in time, not just an event concept but as a specific event that happens at a specific time.
Understanding game events like these is helpful because all actions in a game can then be seen as direct responses to events as and when they happen. Specifically, events are connected to responses; an event happens and triggers a response. Further, the response can go on to become an event that triggers further responses and so on. In other words, the game world is a complete, integrated system of events and responses. Once the world is seen this way, the question then arises as to how it can help us improve performance over simply relying on the Update functions to move behaviors forward on every frame. The method is simply to find ways of reducing the frequency of events. Now, stated in this way, it may sound like a crude strategy, but it's important. To illustrate, let's consider the example of an enemy character firing a weapon at the player during combat. Throughout the gameplay, the enemy will need to keep track of many properties. Firstly, their health, because when it runs low the enemy should seek out medical kits and aids to restore their health again. Secondly, their ammo, because when it runs low the enemy should seek to collect more. The enemy will also need to make reasoned judgments about when to fire at the player, such as only when they have a clear line of sight. Now, by simply thinking about this scenario, we've already identified some connections between actions that might be treated as events. But before taking this consideration further, let's see how we might implement this behavior using an Update function, as shown in the following code sample 4-1. Then, we'll look at how events can help us improve on that implementation: // Update is called once per frame void Update () { //Check enemy health //Are we dead? if(Health <= 0) { //Then perform die behaviour Die(); return; } //Check for health low if(Health <= 20) { //Health is low, so find first-aid RunAndFindHealthRestore(); return; } //Check ammo //Have we run out of ammo? if(Ammo <= 0) { //Then find more SearchMore(); return; } //Health and ammo are fine. Can we see player? If so, shoot if(HaveLineOfSight) { FireAtPlayer(); } } The preceding code sample 4-1 shows a heavy Update function filled with lots of condition checking and responses. In essence, the Update function attempts to merge event handling and response behaviors into one, and the result is an unnecessarily expensive process. If we think about the event connections between these different processes (the health and ammo checks), we see how the code could be refactored more neatly. For example, ammo only changes on two occasions: when a weapon is fired or when new ammo is collected. Similarly, health only changes on two occasions: when an enemy is successfully attacked by the player or when an enemy collects a first-aid kit. In the first case, there is a reduction, and in the latter case, an increase. Since these are the only times when the properties change (the events), these are the only points where their values need to be validated.
See the following code sample 4-2 for a refactored enemy, which includes C# properties and a much reduced Update function: using UnityEngine; using System.Collections; public class EnemyObject : MonoBehaviour { //------------------------------------------------------- //C# accessors for private variables public int Health { get{return _health;} set { //Clamp health between 0-100 _health = Mathf.Clamp(value, 0, 100); //Check if dead if(_health <= 0) { OnDead(); return; } //Check health and raise event if required if(_health <= 20) { OnHealthLow(); return; } } } //------------------------------------------------------- public int Ammo { get{return _ammo;} set { //Clamp ammo between 0-50 _ammo = Mathf.Clamp(value,0,50); //Check if ammo empty if(_ammo <= 0) { //Call expired event OnAmmoExpired(); return; } } } //------------------------------------------------------- //Internal variables for health and ammo private int _health = 100; private int _ammo = 50; //------------------------------------------------------- // Update is called once per frame void Update () { } //------------------------------------------------------- //This event is called when health is low void OnHealthLow() { //Handle event response here } //------------------------------------------------------- //This event is called when enemy is dead void OnDead() { //Handle event response here } //------------------------------------------------------- //Ammo run out event void OnAmmoExpired() { //Handle event response here } //------------------------------------------------------- } The enemy class in the code sample 4-2 has been refactored to an event-driven design, where properties such as Ammo and Health are validated not inside the Update function but on assignment. From here, events are raised wherever appropriate based on the newly assigned values. By adopting an event-driven design, we introduce performance optimization and cleanliness into our code; we reduce the excess baggage and value checks found in the Update function in the code sample 4-1, and instead we only allow value-specific events to drive our code, knowing they'll be invoked only at the relevant times. Event management Event-driven programming can make our lives a lot easier. But no sooner do we accept events into the design than we come across a string of new problems that require a thorough resolution. Specifically, we saw in the code sample 4-2 how C# properties for health and ammo are used to validate and detect relevant changes and then to raise events (such as OnDead) where appropriate. This works fine in principle, at least when the enemy must be notified about events that happen to itself. However, what if an enemy needed to know about the death of another enemy or needed to know when a specified number of other enemies had been killed?
Now, of course, thinking about this specific case, we could go back to the enemy class in the code sample 4-2 and amend it to call an OnDead event not just for the current instance but for all other enemies using functions such as SendMessage. But this doesn't really solve our problem in the general sense. In fact, let's state the ideal case straight away: we want every object to optionally listen for every type of event and to be notified about them as and when they happen, just as easily as if the event had happened to them. So the question that we face now is how to code an optimized system that allows easy event management like this. In short, we need an EventManager class that allows objects to listen to specific events. This system relies on three central concepts, as follows: Event Listener: A listener refers to any object that wants to be notified about an event when it happens, even its own events. In practice, almost every object will be a listener for at least one event. An enemy, for example, may want notifications about low health and low ammo among others. In this case, it's a listener for at least two separate events. Thus, whenever an object expects to be told when an event happens, it becomes a listener. Event Poster: In contrast to listeners, when an object detects that an event has occurred, it must announce or post a public notification about it that allows all other listeners to be notified. In the code sample 4-2, the enemy class detects the Ammo and Health events using properties and then calls the internal events, if required. But to be a true poster in this sense, we require the object to raise events at a global level. Event Manager: Finally, there's an overarching singleton Event Manager object that persists across levels and is globally accessible. This object effectively links listeners to posters. It accepts notifications of events sent by posters and then immediately dispatches the notifications to all appropriate listeners in the form of events. Starting event management with interfaces The first or original entity in the event handling system is the listener—the thing that should be notified about specific events as and when they happen. Potentially, a listener could be any kind of object or any kind of class; it simply expects to be notified about specific events. In short, the listener will need to register itself with the Event Manager as a listener for one or more specific events. Then, when the event actually occurs, the listener should be notified directly by a function call. So, technically, the listener raises a type-specificity issue for the Event Manager: how should the manager invoke an event on the listener if the listener could potentially be an object of any type? Of course, this issue can be worked around, as we've seen, using either SendMessage or BroadcastMessage. Indeed, there are event handling systems freely available online, such as NotificationCenter, that rely on these functions. However, we'll avoid these functions and use interfaces and polymorphism instead, as both SendMessage and BroadcastMessage rely heavily on reflection. Specifically, we'll create an interface from which all listener objects derive. More information on the freely available NotificationCenter (C# version) is available from the Unity wiki at http://wiki.unity3d.com/index.php?title=CSharpNotificationCenter. In C#, an interface is like a hollow abstract base class.
Like a class, an interface brings together a collection of methods and functions into a single template-like unit. But, unlike a class, an interface only allows you to define function prototypes such as the name, return type, and arguments for a function. It doesn't let you define a function body. The reason being that an interface simply defines the total set of functions that a derived class will have. The derived class may implement the functions however necessary, and the interface simply exists so that other objects can invoke the functions via polymorphism without knowing the specific type of each derived class. This makes interfaces a suitable candidate to create a Listener object. By defining a Listener interface from which all objects will be derived, every object has the ability to be a listener for events. The following code sample 4-3 demonstrates a sample Listener interface: 01 using UnityEngine; 02 using System.Collections; 03 //----------------------------------------------------------- 04 //Enum defining all possible game events 05 //More events should be added to the list 06 public enum EVENT_TYPE {GAME_INIT, 07                                GAME_END, 08                                 AMMO_EMPTY, 09                                 HEALTH_CHANGE, 10                                 DEAD}; 11 //----------------------------------------------------------- 12 //Listener interface to be implemented on Listener classes 13 public interface IListener 14 { 15 //Notification function invoked when events happen 16 void OnEvent(EVENT_TYPE Event_Type, Component Sender,    Object Param = null); 17 } 18 //----------------------------------------------------------- The following are the comments for the code sample 4-3: Lines 06-10: This enumeration should define a complete list of all possible game events that could be raised. The sample code lists only five game events: GAME_INIT, GAME_END, AMMO_EMPTY, HEALTH_CHANGE, and DEAD. Your game will presumably have many more. You don't actually need to use enumerations for encoding events; you could just use integers. But I've used enumerations to improve event readability in code. Lines 13-17: The Listener interface is defined as IListener using the C# interfaces. It supports just one event, namely OnEvent. This function will be inherited by all derived classes and will be invoked by the manager whenever an event occurs for which the listener is registered. Notice that OnEvent is simply a function prototype; it has no body. More information on C# interfaces can be found at http://msdn.microsoft.com/en-us/library/ms173156.aspx. Using the IListener interface, we now have the ability to make a listener from any object using only class inheritance; that is, any object can now declare itself as a listener and potentially receive events. For example, a new MonoBehaviour component can be turned into a listener with the following code sample 4-4. This code uses multiple inheritance, that is, it inherits from two classes. 
More information on multiple inheritance can be found at http://www.dotnetfunda.com/articles/show/1185/multiple-inheritance-in-csharp:

using UnityEngine;
using System.Collections;

public class MyCustomListener : MonoBehaviour, IListener
{
    // Use this for initialization
    void Start () {}

    // Update is called once per frame
    void Update () {}

    //---------------------------------------
    //Implement OnEvent function to receive Events
    public void OnEvent(EVENT_TYPE Event_Type, Component Sender, object Param = null)
    {
    }
    //---------------------------------------
}

Creating an EventManager

Any object can now be turned into a listener, as we've seen. But the listeners must still register themselves with a manager object of some kind; it is the duty of the manager to call the events on the listeners when the events actually happen. Let's now turn to the manager itself and its implementation details. The manager class will be called EventManager, as shown in the following code sample 4-5. This class, being a persistent singleton object, should be attached to an empty GameObject in the scene, where it will be directly accessible to every other object through a static Instance property. More on this class and its usage is considered in the subsequent comments:

001 using UnityEngine;
002 using System.Collections;
003 using System.Collections.Generic;
004 //-----------------------------------
005 //Singleton EventManager to send events to listeners
006 //Works with IListener implementations
007 public class EventManager : MonoBehaviour
008 {
009     #region C# properties
010     //-----------------------------------
011     //Public access to instance
012     public static EventManager Instance
013     {
014         get{return instance;}
015         set{}
016     }
017     #endregion
018
019     #region variables
020     //Notifications Manager instance (singleton design pattern)
021     private static EventManager instance = null;
022
023     //Array of listeners (all objects registered for events)
024     private Dictionary<EVENT_TYPE, List<IListener>> Listeners = new Dictionary<EVENT_TYPE, List<IListener>>();
025     #endregion
026 //-----------------------------------------------------------
027     #region methods
028     //Called at start-up to initialize
029     void Awake()
030     {
031         //If no instance exists, then assign this instance
032         if(instance == null)
033         {
034             instance = this;
035             DontDestroyOnLoad(gameObject);
036         }
037         else
038             DestroyImmediate(this);
039     }
040 //-----------------------------------------------------------
041     /// <summary>
042     /// Function to add listener to array of listeners
043     /// </summary>
044     /// <param name="Event_Type">Event to Listen for</param>
045     /// <param name="Listener">Object to listen for event</param>
046     public void AddListener(EVENT_TYPE Event_Type, IListener Listener)
047     {
048         //List of listeners for this event
049         List<IListener> ListenList = null;
050
051         //Check existing event type key. If exists, add to list
052         if(Listeners.TryGetValue(Event_Type, out ListenList))
053         {
054             //List exists, so add new item
055             ListenList.Add(Listener);
056             return;
057         }
058
059         //Otherwise create new list as dictionary key
060         ListenList = new List<IListener>();
061         ListenList.Add(Listener);
062         Listeners.Add(Event_Type, ListenList);
063     }
064 //-----------------------------------------------------------
065     /// <summary>
066     /// Function to post event to listeners
067     /// </summary>
068     /// <param name="Event_Type">Event to invoke</param>
069     /// <param name="Sender">Object invoking event</param>
070     /// <param name="Param">Optional argument</param>
071     public void PostNotification(EVENT_TYPE Event_Type, Component Sender, object Param = null)
072     {
073         //Notify all listeners of an event
074
075         //List of listeners for this event only
076         List<IListener> ListenList = null;
077
078         //If no event exists, then exit
079         if(!Listeners.TryGetValue(Event_Type, out ListenList))
080             return;
081
082         //Entry exists. Now notify appropriate listeners
083         for(int i=0; i<ListenList.Count; i++)
084         {
085             if(!ListenList[i].Equals(null))
086                 ListenList[i].OnEvent(Event_Type, Sender, Param);
087         }
088     }
089 //-----------------------------------------------------------
090     //Remove event from dictionary, including all listeners
091     public void RemoveEvent(EVENT_TYPE Event_Type)
092     {
093         //Remove entry from dictionary
094         Listeners.Remove(Event_Type);
095     }
096 //-----------------------------------------------------------
097     //Remove all redundant entries from the Dictionary
098     public void RemoveRedundancies()
099     {
100         //Create new dictionary
101         Dictionary<EVENT_TYPE, List<IListener>> TmpListeners = new Dictionary<EVENT_TYPE, List<IListener>>();
102
103         //Cycle through all dictionary entries
104         foreach(KeyValuePair<EVENT_TYPE, List<IListener>> Item in Listeners)
105         {
106             //Cycle all listeners, remove null objects
107             for(int i = Item.Value.Count-1; i>=0; i--)
108             {
109                 //If null, then remove item
110                 if(Item.Value[i].Equals(null))
111                     Item.Value.RemoveAt(i);
112             }
113
114             //If items remain in list, then add to tmp dictionary
115             if(Item.Value.Count > 0)
116                 TmpListeners.Add(Item.Key, Item.Value);
117         }
118
119         //Replace listeners object with new dictionary
120         Listeners = TmpListeners;
121     }
122 //-----------------------------------------------------------
123     //Called on scene change. Clean up dictionary
124     void OnLevelWasLoaded()
125     {
126         RemoveRedundancies();
127     }
128 //-----------------------------------------------------------
129     #endregion
130 }

More information on the OnLevelWasLoaded event can be found at http://docs.unity3d.com/ScriptReference/MonoBehaviour.OnLevelWasLoaded.html.

The following are the comments for the code sample 4-5:

Line 003: Notice the addition of the System.Collections.Generic namespace, giving us access to additional mono classes, including the Dictionary class. This class will be used throughout the EventManager class. In short, the Dictionary class is a special kind of 2D array that allows us to store a database of values based on key-value pairing. More information on the Dictionary class can be found at http://msdn.microsoft.com/en-us/library/xfhwa508%28v=vs.110%29.aspx.

Line 007: The EventManager class is derived from MonoBehaviour and should be attached to an empty GameObject in the scene, where it will exist as a persistent singleton.

Line 024: A private member variable Listeners is declared using a Dictionary class. This structure maintains a hash-table array of key-value pairs, which can be looked up and searched like a database. The key-value pairing for the EventManager class takes the form of EVENT_TYPE and List<IListener>. In short, this means that a list of event types can be stored (such as HEALTH_CHANGE), and for each type there could be none, one, or more listeners that should be notified when the event occurs. In effect, the Listeners member is the primary data structure on which the EventManager relies to maintain who is listening for what.

Lines 029-039: The Awake function is responsible for the singleton functionality, that is, for making the EventManager class a singleton object that persists across scenes.

Lines 046-063: The AddListener method of EventManager should be called by a Listener object once for each event it should listen for. The method accepts two arguments: the event to listen for (Event_Type) and a reference to the listener object itself (derived from IListener), which should be notified if and when the event happens. The AddListener function is responsible for accessing the Listeners dictionary and generating a new key-value pair to store the connection between the event and the listener.

Lines 071-088: The PostNotification function can be called by any object, whether a listener or not, whenever an event is detected. When called, the EventManager cycles through all matching entries in the dictionary, searching for all listeners connected to the current event, and notifies them by invoking the OnEvent method through the IListener interface.

Lines 098-127: The final methods of the EventManager class are responsible for maintaining the data integrity of the Listeners structure when a scene change occurs and the EventManager class persists. Although the EventManager class persists across scenes, the listener objects themselves in the Listeners variable may not; they may get destroyed on scene changes. If so, scene changes will invalidate some listeners, leaving the EventManager with invalid entries. Thus, the RemoveRedundancies method is called to find and eliminate all invalid entries. The OnLevelWasLoaded event is invoked automatically by Unity whenever a scene change occurs.
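One maintenance caveat worth flagging: in Unity 5.4 and later, OnLevelWasLoaded is deprecated in favor of the SceneManager.sceneLoaded callback. A minimal sketch of the equivalent hook follows, assuming the methods are added to the EventManager class and the using directive to the top of its file:

using UnityEngine.SceneManagement;

//Subscribe and unsubscribe with the component's lifetime
void OnEnable()
{
    SceneManager.sceneLoaded += OnSceneLoaded;
}

void OnDisable()
{
    SceneManager.sceneLoaded -= OnSceneLoaded;
}

//Called by Unity after each scene load; same clean-up as before
void OnSceneLoaded(Scene scene, LoadSceneMode mode)
{
    RemoveRedundancies();
}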
#region and #endregion

The two preprocessor directives #region and #endregion (in combination with the code folding feature) can be highly useful for improving the readability of your code and the speed with which you can navigate the source file. They add organization and structure to your source code without affecting its validity or execution. Effectively, #region marks the top of a code block and #endregion marks the end. Once a region is marked, it becomes foldable, that is, collapsible in the MonoDevelop code editor, provided the code folding feature is enabled. Collapsing a region of code is useful for hiding it from view, allowing you to concentrate on reading the areas relevant to your needs, as shown in the following screenshot:

Enabling code folding in MonoDevelop

To enable code folding in MonoDevelop, select Options in Tools from the application menu. This displays the Options window. From here, choose the General tab in the Text Editor option and click on Enable code folding as well as Fold #regions by default.

Using EventManager

Now, let's see how to put the EventManager class to work in a practical context, from the perspective of listeners and posters in a single scene. First, to listen for an event (any event), a listener must register itself with the EventManager singleton instance. Typically, this happens once, at the earliest opportunity, such as in the Start function. Do not use the Awake function; it is reserved for an object's internal initialization, as opposed to functionality that reaches out beyond the current object to the states and setup of others. See the following code sample 4-6 and notice that it relies on the Instance static property to retrieve a reference to the active EventManager singleton:

//Called at start-up
void Start()
{
    //Add myself as listener for health change events
    EventManager.Instance.AddListener(EVENT_TYPE.HEALTH_CHANGE, this);
}

Having registered listeners for one or more events, objects can then post notifications to EventManager as events are detected, as shown in the following code sample 4-7 (the private _health backing field is assumed to be declared in the same class):

//Backing field for the property
private int _health;

public int Health
{
    get{return _health;}
    set
    {
        //Clamp health between 0-100
        _health = Mathf.Clamp(value, 0, 100);

        //Post notification - health has been changed
        EventManager.Instance.PostNotification(EVENT_TYPE.HEALTH_CHANGE, this, _health);
    }
}

Finally, after a notification is posted for an event, all the associated listeners are updated automatically through EventManager. Specifically, EventManager will call the OnEvent function of each listener, giving listeners the opportunity to parse the event data and respond where needed, as shown in the following code:

//Called when events happen
public void OnEvent(EVENT_TYPE Event_Type, Component Sender, object Param = null)
{
    //Detect event type
    switch(Event_Type)
    {
        case EVENT_TYPE.HEALTH_CHANGE:
            OnHealthChange(Sender, (int)Param);
            break;
    }
}
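Putting the pieces together, a complete, minimal listener might look like the following sketch. The class name HealthReadout and the Debug.Log response are hypothetical; everything else relies only on the IListener and EventManager code above:

using UnityEngine;

//Hypothetical listener: logs every health change it is told about
public class HealthReadout : MonoBehaviour, IListener
{
    void Start()
    {
        //Register once, at start-up, for the events we care about
        EventManager.Instance.AddListener(EVENT_TYPE.HEALTH_CHANGE, this);
    }

    public void OnEvent(EVENT_TYPE Event_Type, Component Sender, object Param = null)
    {
        if (Event_Type == EVENT_TYPE.HEALTH_CHANGE)
            Debug.Log(Sender.name + " health is now " + (int)Param);
    }
}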
Summary

This article focused on the manifold benefits that an event-driven framework, applied consistently through the EventManager class, brings to your applications. In implementing such a manager, we were able to rely on either interfaces or delegates, and either method is powerful and extensible. Specifically, we saw how easy it is to pile more and more functionality into an Update function, and how doing so can lead to severe performance issues; it is better to analyze the connections between pieces of functionality and refactor them into an event-driven framework. Essentially, events are the raw material of event-driven systems. They represent a necessary connection between one action (the cause) and another (the response). To manage events, we created the EventManager class: an integrated system that links posters to listeners. It receives notifications from posters about events as and when they happen, and then immediately dispatches a function call to all listeners registered for that event.

Resources for Article:

Further resources on this subject:
Customizing skin with GUISkin [Article]
2D Twin-stick Shooter [Article]
Components in Unity [Article]
Working with Away3D Cameras

Packt
06 Jun 2011
10 min read
Away3D 3.6 Cookbook

Over 80 practical recipes for creating stunning graphics and effects with the fascinating Away3D engine

Cameras are an absolutely essential part of the 3D world of computer graphics. In fact, no real-time 3D engine can exist without having a camera object. Cameras are our eyes into the 3D world. Away3D has a decent set of cameras, which, at the time of writing, consists of the Camera3D, TargetCamera3D, HoverCamera3D, and SpringCam classes. Although they share similar base features, each one has some additional functionality that makes it different.

Creating an FPS controller

There are different scenarios where you may wish to control the camera in first person, as in FPS video games. Basically, we want to move and rotate our camera in any horizontal direction, driven by the combination of the x and y rotations from the user's mouse and by keyboard key input. In this recipe, you will learn how to develop such a class from scratch, which you can then reuse in subsequent projects wherever FPS behavior is needed.

Getting ready

Set up a basic Away3D scene extending AwayTemplate and give it the name FPSDemo. Then, create one more class, which should extend Sprite, and give it the name FPSController.

How to do it...

The FPSController class encapsulates all the functionality of the FPS camera. It receives a reference to the scene camera and applies the FPS behavior "behind the curtain". The FPSDemo class is a basic Away3D scene setup where we are going to test our FPSController. Note that the original listing omits its import statements; the ones added at the top here are assumptions based on the Away3D 3.6 and Flash Player 10 package layout, so adjust them to your project:

FPSController.as

package utils
{
    //Imports assumed; adjust to your Away3D 3.6 package layout
    import away3d.core.base.Object3D;
    import flash.display.Sprite;
    import flash.display.Stage;
    import flash.events.KeyboardEvent;
    import flash.events.MouseEvent;
    import flash.geom.Vector3D;
    import flash.ui.Keyboard;

    public class FPSController extends Sprite
    {
        private var _stg:Stage;
        private var _camera:Object3D;
        private var _moveLeft:Boolean=false;
        private var _moveRight:Boolean=false;
        private var _moveForward:Boolean=false;
        private var _moveBack:Boolean=false;
        private var _controllerHeigh:Number;
        private var _camSpeed:Number=0;
        private static const CAM_ACCEL:Number=2;
        private var _camSideSpeed:Number=0;
        private static const CAM_SIDE_ACCEL:Number=2;
        private var _forwardLook:Vector3D=new Vector3D();
        private var _sideLook:Vector3D=new Vector3D();
        private var _camTarget:Vector3D=new Vector3D();
        private var _oldPan:Number=0;
        private var _oldTilt:Number=0;
        private var _pan:Number=0;
        private var _tilt:Number=0;
        private var _oldMouseX:Number=0;
        private var _oldMouseY:Number=0;
        private var _canMove:Boolean=false;
        private var _gravity:Number;
        private var _jumpSpeed:Number=0;
        private var _jumpStep:Number;
        private var _defaultGrav:Number;
        private static const GRAVACCEL:Number=1.2;
        private static const MAX_JUMP:Number=100;
        private static const FRICTION_FACTOR:Number=0.75;
        private static const DEGStoRADs:Number = Math.PI / 180;

        public function FPSController(camera:Object3D, stg:Stage, height:Number=20, gravity:Number=5, jumpStep:Number=5)
        {
            _camera=camera;
            _stg=stg;
            _controllerHeigh=height;
            _gravity=gravity;
            _defaultGrav=gravity;
            _jumpStep=jumpStep;
            init();
        }

        private function init():void{
            _camera.y=_controllerHeigh;
            addListeners();
        }

        private function addListeners():void{
            _stg.addEventListener(MouseEvent.MOUSE_DOWN, onMouseDown, false, 0, true);
            _stg.addEventListener(MouseEvent.MOUSE_UP, onMouseUp, false, 0, true);
            _stg.addEventListener(KeyboardEvent.KEY_DOWN, onKeyDown, false, 0, true);
            _stg.addEventListener(KeyboardEvent.KEY_UP, onKeyUp, false, 0, true);
        }

        private function onMouseDown(e:MouseEvent):void{
            _oldPan=_pan;
            _oldTilt=_tilt;
            _oldMouseX=_stg.mouseX+400;
            _oldMouseY=_stg.mouseY-300;
            _canMove=true;
        }

        private function onMouseUp(e:MouseEvent):void{
            _canMove=false;
        }

        private function onKeyDown(e:KeyboardEvent):void{
            switch(e.keyCode)
            {
                case 65: _moveLeft = true; break;
                case 68: _moveRight = true; break;
                case 87: _moveForward = true; break;
                case 83: _moveBack = true; break;
                case Keyboard.SPACE:
                    if(_camera.y<MAX_JUMP+_controllerHeigh){
                        _jumpSpeed=_jumpStep;
                    }else{
                        _jumpSpeed=0;
                    }
                    break;
            }
        }

        private function onKeyUp(e:KeyboardEvent):void{
            switch(e.keyCode)
            {
                case 65: _moveLeft = false; break;
                case 68: _moveRight = false; break;
                case 87: _moveForward = false; break;
                case 83: _moveBack = false; break;
                case Keyboard.SPACE: _jumpSpeed=0; break;
            }
        }

        public function walk():void{
            _camSpeed *= FRICTION_FACTOR;
            _camSideSpeed *= FRICTION_FACTOR;
            if(_moveForward){ _camSpeed+=CAM_ACCEL; }
            if(_moveBack){ _camSpeed-=CAM_ACCEL; }
            if(_moveLeft){ _camSideSpeed-=CAM_SIDE_ACCEL; }
            if(_moveRight){ _camSideSpeed+=CAM_SIDE_ACCEL; }
            if (_camSpeed < 2 && _camSpeed > -2){
                _camSpeed=0;
            }
            if (_camSideSpeed < 0.05 && _camSideSpeed > -0.05){
                _camSideSpeed=0;
            }
            _forwardLook=_camera.transform.deltaTransformVector(new Vector3D(0,0,1));
            _forwardLook.normalize();
            _camera.x+=_forwardLook.x*_camSpeed;
            _camera.z+=_forwardLook.z*_camSpeed;
            _sideLook=_camera.transform.deltaTransformVector(new Vector3D(1,0,0));
            _sideLook.normalize();
            _camera.x+=_sideLook.x*_camSideSpeed;
            _camera.z+=_sideLook.z*_camSideSpeed;
            _camera.y+=_jumpSpeed;
            if(_canMove){
                _pan = 0.3*(_stg.mouseX+400 - _oldMouseX) + _oldPan;
                _tilt = -0.3*(_stg.mouseY-300 - _oldMouseY) + _oldTilt;
                if (_tilt > 70){ _tilt = 70; }
                if (_tilt < -70){ _tilt = -70; }
            }
            var panRADs:Number=_pan*DEGStoRADs;
            var tiltRADs:Number=_tilt*DEGStoRADs;
            _camTarget.x = 100*Math.sin(panRADs)*Math.cos(tiltRADs) + _camera.x;
            _camTarget.z = 100*Math.cos(panRADs)*Math.cos(tiltRADs) + _camera.z;
            _camTarget.y = 100*Math.sin(tiltRADs) + _camera.y;
            if(_camera.y>_controllerHeigh){
                _gravity*=GRAVACCEL;
                _camera.y-=_gravity;
            }
            if(_camera.y<=_controllerHeigh){
                _camera.y=_controllerHeigh;
                _gravity=_defaultGrav;
            }
            _camera.lookAt(_camTarget);
        }
    }
}

Now let's put it to work in the main application:

FPSDemo.as

package
{
    //Imports assumed; adjust to your Away3D 3.6 package layout
    import away3d.core.base.Object3D;
    import away3d.core.utils.Cast;
    import away3d.loaders.Max3DS;
    import away3d.materials.BitmapMaterial;
    import flash.events.Event;
    import flash.geom.Vector3D;
    import utils.FPSController;

    public class FPSDemo extends AwayTemplate
    {
        [Embed(source="assets/buildings/CityScape.3ds",mimeType="application/octet-stream")]
        private var City:Class;

        [Embed(source="assets/buildings/CityScape.png")]
        private var CityTexture:Class;

        private var _cityModel:Object3D;
        private var _fpsWalker:FPSController;

        public function FPSDemo()
        {
            super();
        }

        override protected function initGeometry():void{
            parse3ds();
        }

        private function parse3ds():void{
            var max3ds:Max3DS=new Max3DS();
            _cityModel=max3ds.parseGeometry(City);
            _view.scene.addChild(_cityModel);
            _cityModel.materialLibrary.getMaterial("bakedAll [Plane0").material=new BitmapMaterial(Cast.bitmap(new CityTexture()));
            _cityModel.scale(3);
            _cityModel.x=0;
            _cityModel.y=0;
            _cityModel.z=700;
            _cityModel.rotate(Vector3D.X_AXIS,-90);
            _cam.z=-1000;
            _fpsWalker=new FPSController(_cam,stage,20,12,250);
        }

        override protected function onEnterFrame(e:Event):void{
            super.onEnterFrame(e);
            _fpsWalker.walk();
        }
    }
}

How it works...

The FPSController class looks a tad scary, but only at first glance. First, we pass the following arguments into the constructor:

camera: A Camera3D reference (Camera3D, by the way, is the most appropriate camera type for FPS behavior).
stg: A reference to the Flash stage, because we are going to assign listeners to it from within the class.
height: The camera's distance from the ground. We assume here that the ground is at 0,0,0.
gravity: The gravity force for the jump.
jumpStep: The jump altitude.
Next, we define listeners for the mouse UP and DOWN states, as well as keyboard events registering input from the A, W, D, and S keys, so that we can move the FPSController in four different directions. In the onMouseDown() event handler, we store the previous pan, tilt, mouseX, and mouseY values in the _oldPan, _oldTilt, _oldMouseX, and _oldMouseY variables at the moment the mouse is pressed. This is a widely used technique; we need this trick in order to get a smooth, continuous transformation of the camera each time we start moving the FPSController. In the onKeyDown() and onKeyUp() methods, we switch the flags that tell the main movement code (which we will see shortly) in which direction the camera should move for each key press. The only different part is the block of code inside the Keyboard.SPACE case. This code activates the jump behavior when the space key is pressed: on SPACE, the camera's jumpSpeed (zero by default) receives the incremented _jumpStep value, and, provided the camera has not already reached the maximum jump altitude defined by MAX_JUMP, this is added to the camera's ground height.

Now it is the walk() function's turn. This method is supposed to be called on each frame in the main class:

_camSpeed *= FRICTION_FACTOR;
_camSideSpeed *= FRICTION_FACTOR;

The two preceding lines slow down, or in other words apply friction to, the forward and side movements. Without friction, it would take a long time for the controller to come to a complete stop after each movement, because the velocity decreases very slowly due to the easing. Next, we want to accelerate the movements in order to get a more realistic result. Here is the acceleration implementation for the four possible walk directions:

if(_moveForward){ _camSpeed+=CAM_ACCEL; }
if(_moveBack){ _camSpeed-=CAM_ACCEL; }
if(_moveLeft){ _camSideSpeed-=CAM_SIDE_ACCEL; }
if(_moveRight){ _camSideSpeed+=CAM_SIDE_ACCEL; }

The problem is that because we slow the movement down by repeatedly multiplying the current speed when applying the drag, the speed value never actually reaches zero. So here we define a range of values close to zero and reset the forward and side speeds to 0 as soon as they enter this range:

if (_camSpeed < 2 && _camSpeed > -2){
    _camSpeed=0;
}
if (_camSideSpeed < 0.05 && _camSideSpeed > -0.05){
    _camSideSpeed=0;
}

Now we need the ability to move the camera in the direction it is looking. To achieve this, we transform the forward vector, which represents the forward look of the camera, into camera space, denoted by the _camera transformation matrix. We use the deltaTransformVector() method because we only need the rotation portion of the matrix, dropping the translation part:

_forwardLook=_camera.transform.deltaTransformVector(new Vector3D(0,0,1));
_forwardLook.normalize();
_camera.x+=_forwardLook.x*_camSpeed;
_camera.z+=_forwardLook.z*_camSpeed;

Here we do pretty much the same as before, but for the sideways movement, transforming the side vector by the camera's matrix:

_sideLook=_camera.transform.deltaTransformVector(new Vector3D(1,0,0));
_sideLook.normalize();
_camera.x+=_sideLook.x*_camSideSpeed;
_camera.z+=_sideLook.z*_camSideSpeed;
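For readers who prefer to see the linear algebra, the two blocks above can be sketched in math form. deltaTransformVector() applies only the rotational part R of the camera's transform matrix, and only the x and z components of the result are used (here v_f and v_s stand for _camSpeed and _camSideSpeed):

\mathbf{f} = R\,(0,0,1)^{\top}, \qquad \mathbf{s} = R\,(1,0,0)^{\top}

\begin{pmatrix} x \\ z \end{pmatrix} \leftarrow \begin{pmatrix} x \\ z \end{pmatrix} + v_f \begin{pmatrix} f_x \\ f_z \end{pmatrix} + v_s \begin{pmatrix} s_x \\ s_z \end{pmatrix}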
We also have to acquire the base values for the rotations from the mouse movement: _pan drives the horizontal rotation (from mouseX) and _tilt drives the vertical rotation (from mouseY):

if(_canMove){
    _pan = 0.3*(_stg.mouseX+400 - _oldMouseX) + _oldPan;
    _tilt = -0.3*(_stg.mouseY-300 - _oldMouseY) + _oldTilt;
    if (_tilt > 70){
        _tilt = 70;
    }
    if (_tilt < -70){
        _tilt = -70;
    }
}

We also limit the tilt so that the controller cannot rotate too far down into the ground or, conversely, too far up into the zenith. Notice that this entire block is wrapped in the _canMove Boolean flag, which is set to true only while the mouse button is down. We do this to prevent rotation when the user is not interacting with the controller. Finally, we need to incorporate the camera's local rotations into the movement process, so that you can rotate the camera view while moving:

var panRADs:Number=_pan*DEGStoRADs;
var tiltRADs:Number=_tilt*DEGStoRADs;
_camTarget.x = 100*Math.sin(panRADs)*Math.cos(tiltRADs) + _camera.x;
_camTarget.z = 100*Math.cos(panRADs)*Math.cos(tiltRADs) + _camera.z;
_camTarget.y = 100*Math.sin(tiltRADs) + _camera.y;

And the last thing is applying the gravity force each time the controller jumps up:

if(_camera.y>_controllerHeigh){
    _gravity*=GRAVACCEL;
    _camera.y-=_gravity;
}
if(_camera.y<=_controllerHeigh){
    _camera.y=_controllerHeigh;
    _gravity=_defaultGrav;
}

Here we first check whether the camera's y-position is still greater than its default height, which means that the camera is currently in the "air". If true, we apply gravity acceleration to the gravity value because, as we know, a falling body accelerates over time in real life. In the second statement, we check whether the camera has reached its default height. If true, we reset the camera to its default y-position and also reset the gravity property, as it has grown significantly from the repeated acceleration during the last jump. To test it in a real application, we instantiate the FPSController class. Here is how it is done in FPSDemo.as:

_fpsWalker=new FPSController(_cam,stage,20,12,250);

We pass it our scene's Camera3D instance and the rest of the parameters that were discussed previously. The last thing to do is to set the walk() method to be called on each frame:

override protected function onEnterFrame(e:Event):void{
    super.onEnterFrame(e);
    _fpsWalker.walk();
}

Now you can start developing the Away3D version of Unreal Tournament!
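A closing footnote on the look-target computation in walk(): the three _camTarget lines are a standard spherical-to-Cartesian conversion, placing the target on a sphere of radius r = 100 around the camera position C, with pan angle \varphi and tilt angle \theta in radians:

T_x = C_x + r \sin\varphi \cos\theta, \qquad T_y = C_y + r \sin\theta, \qquad T_z = C_z + r \cos\varphi \cos\theta

The subsequent lookAt(_camTarget) call then orients the camera toward that point.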

Introduction to Blender 2.5: Color Grading

Packt
11 Nov 2010
11 min read
Blender 2.5 Lighting and Rendering

Bring your 3D world to life with lighting, compositing, and rendering:

Render spectacular scenes with realistic lighting in any 3D application using interior and exterior lighting techniques
Give an amazing look to 3D scenes by applying light rigs and shadow effects
Apply color effects to your scene by changing the World and Lamp color values
A step-by-step guide with practical examples that help add dimensionality to your scene

I would like to thank a few people who have made this all possible; I wouldn't be inspired to do this now without their great aid:

To Francois Tarlier (http://www.francois-tarlier.com), for patiently bearing with my questions, for sharing his thoughts on color grading with Blender, and for developing the features that make these things possible in Blender. A clear example would be the addition of the Color Balance node in Blender 2.5's Node Compositor (which I couldn't live without).
To Matt Ebb (http://mke3.net/), for creating tools to make Blender's Compositor better and for supporting the efforts of those doing the same.
And lastly, to Stu Maschwitz (http://www.prolost.com), for his amazing tips and tricks on color grading.

Now, for some explanation. Color grading is usually defined as the process of altering and/or enhancing the colors of a motion picture or a still image. Traditionally, this happens by altering the subject photo-chemically (color timing) in a laboratory. But with modern tools and techniques, color grading can now be achieved digitally, with software like Apple's Final Cut Pro, Adobe's After Effects, and Red Giant Software's Magic Bullet Looks. Luckily, the latest version of Blender supports color grading through a selection of nodes that process our input accordingly. However, I really want to stress here that it often doesn't matter what tools you use; it all depends on how crafty and artistic you are, regardless of whatever features your application has.

Strictly speaking, color grading is related to, but distinct from, color correction: color correction deals mainly with the "correctional" aspects (white balancing, temperature changes, and so on), rather than the deliberate alterations achieved with color grading. With color grading, we can give a motion picture or still image a different mood or time of day, fake lens filters and distortions, highlight part of an image via bright spotting, remove red-eye effects, denoise an image, add glares, and a lot more.

The operations mentioned above can be grouped into four major categories, namely:

Color Balancing
Contrasting
Stylization
Material Variation Compensation

With Color Balancing, we are trying to fix tint errors and unwanted colorization that occurred during hardware post-production, something that can happen when recording the data into, say, a camera's memory right after it has been internally processed. It can also be applied to fix white balance errors that were overlooked while shooting or recording. These are, however, not hard rules that are followed all the time; we can also use color balancing simply to correct the tones of an image or frame so that, for example, human skin looks natural with respect to the scene it is in.

Contrasting deals with how subjects are emphasized with respect to the scene they are located in. It can also refer to vibrance and high dynamic range imaging.
It can also be a general method of "popping out" necessary details present in a frame.

Stylization refers to effects added on top of the original footage or image after color correction, balancing, and so on have been applied. Some examples would be a dreamy effect, day-to-night conversion, a retro effect, sepia, and many more.

And last but not least is Material Variation Compensation. Often, as artists, there will come a point when, after hours and hours of waiting for your renders to finish, you realize at the last minute that something is just not right with how the materials are set up. If you're on a tight deadline, rerendering the entire sequence or frame is not an option. Thankfully, though not in every case, we can compensate with color grading techniques, specifically telling Blender to adjust just the portion of the image that looks wrong, saving us a ton of time that rerendering would have cost.

Given how vast a topic color grading is, I can only lead you through the introductory steps to get you started and give you a basis for your own experiments. To get a view of what we could possibly discuss, you can check some of the videos I've done here:

http://vimeo.com/13262256
http://vimeo.com/13995077

And for those of you interested in presets, Francois Tarlier has provided some on this page: http://code.google.com/p/ft-projects/downloads/list.

Outlining some of the aspects that we'll go through in Part 1 of this article, here's a list of the things we will be doing:

Loading Image Files in the Compositor
Loading Sequence Files in the Compositor
Loading Movie Files in the Compositor
Contrasting with Color Curves
Colorizing with Color Curves
Color Correcting with Color Curves

And before we start, here are some prerequisites that you should have:

Latest Blender 2.5 version (grab one from http://www.graphicall.org or from the latest svn updates)
Movies, Footages, Animations (check http://www.stockfootageforfree.com for free stock footages)
Still Images
Intermediate Blender skill level

Initialization

With all the prerequisites met, and before we get our hands dirty, there are some things we need to do. Fire up Blender 2.5 and you'll notice that, by default, Blender starts with a splash screen; on its upper right-hand portion, you can see the Blender version number and the revision number.

(Blender 2.5 Initial Startup Screen)

After we have ensured we have the right version (and revision number) of Blender, it's time to set up our scenes and screens to match our ideal workflow later on. Before starting any color grading session, make sure you have a clear plan of what you want to achieve with your footage and images. This way, you can eliminate the guesswork and save a lot of time in the process. The next step is to make sure we are in the proper screen for color grading. You'll see in the menu bar at the top that we are using the "Default" screen, which is useful for general-purpose Blender workflows like modeling, lighting, and shading setup. To harness Blender's intuitive interface, we'll go ahead and change this screen to something more obvious and useful.
(Screen Selection Menu)

Click the button on the left of the screen selection menu and you'll see a list of screens to choose from. For this purpose, we'll choose "Compositing". After enabling the screen, you'll notice that Blender's default layout changes to something more varied, though not dramatically so.

(Choosing the Compositing Screen)

The Compositing screen enables us to work seamlessly on color grading in that, by default, it has everything we need to start our session. The Compositing screen has the Node Editor on top, the UV/Image Editor on the lower left-hand side, and the 3D View on the lower right-hand side. In the far right corner, matching the height of these three windows, is the Properties Window; lastly (and less obvious) is the Timeline Window, just below the Properties Window in the far lower right corner of your screen. Since we won't be digging much into Blender's 3D aspect here, we can ignore the lower right view (3D View), or better yet, merge the UV/Image Editor into the 3D View so that the UV/Image Editor takes up most of the lower half of the screen (as seen below). You could also merge the Properties Window and the Timeline Window so that the only thing present on the far right-hand side is the Properties Window.

(Merging the Screen Windows)

(Merged Screens)

Under the Node Editor Window, click on and enable Use Nodes. This tells Blender that we'll be using the node system in conjunction with the settings we'll be enabling later on.

(Enabling "Use Nodes")

After clicking on Use Nodes, you'll notice nodes appearing in the Node Editor Window, namely the Render Layer and Composite nodes. This is one good hint that Blender now recognizes the nodes as part of its rendering process. But that's not enough yet. In the far right window (Properties Window), look for the Shading and Post Processing tabs under Render. If you can't see some parts, just scroll until you do.

(Locating the Shading and Post Processing Tabs)

Under the Shading tab, disable all checkboxes except for Texture. This will ensure that we don't get any odd output later on. It will also simplify the error debugging process, if we do encounter errors.

(Disabling Shading Options)

Next, let's proceed to the Post Processing tab and disable Sequencer. Then let's make sure that Compositing is enabled and checked.

(Disabling Post Processing Options)

That's it for now, but we'll come back to the Properties Window whenever necessary. Let's move our attention back to the Node Editor Window above. The same keyboard shortcuts apply here as in the 3D Viewport. To review, here are the shortcuts we might find helpful while working in the Node Editor Window:

Select Node: Right Mouse Button
Confirm: Left Mouse Button
Zoom In: Mouse Wheel Up / CTRL + Mouse Wheel Drag
Zoom Out: Mouse Wheel Down / CTRL + Mouse Wheel Drag
Pan Screen: Middle Mouse Drag
Move Node: G
Box Selection: B
Delete Node: X
Make Links: F
Cut Links: CTRL + Left Mouse Button
Hide Node: H
Add Node: SHIFT A
Toggle Full Screen: SHIFT SPACE

Now, let's select the Render Layer node and delete it. We won't be needing it, since we're not working directly with Blender's internal render layer system yet; we'll be focusing solely on loading images and footage for grading work. Select the Composite node and move it to the far right, just to get it out of view for now.
(Deleting the Render Layer Node and Moving the Composite Node)

Loading image files in the compositor

Blender's Node Compositor can load pretty much any image format you have. Most of the time, you'll want to work with the JPG, PNG, TIFF, or EXR file formats. Choose what you prefer, but be aware of each image format's compression characteristics. For most of my compositing tasks, I commonly use PNG: it is a lossless format, so even after processing it a few times, it retains its original quality, without the compression artifacts you would get with a lossy format such as JPG. However, if you really want to push your compositing project and use data such as the z-buffer (depth), you'll be well served by EXR, which is one of the best formats out there, though it creates huge file sizes depending on the settings you use. Play around and see which one you are most comfortable with. For ease, we'll load JPG images for now.

With the Node Editor Window active, left-click somewhere on an empty space on the left side; imagine placing an imaginary cursor there with the left mouse button. This tells Blender where to place the node we'll be adding. Next, press SHIFT A. This brings up the Add menu. Choose Input, then click on Image.

(Adding an Image Node)

Often, when you have the Composite node selected before performing this action, Blender will automatically connect and link the newly added node to the Composite node. If not, you can connect the Image node's image output socket to the Composite node's image input socket.

(Image Node Connected to Composite Node)

To load images into the Compositor, simply click on Open on the Image node; this brings up a file browser. Once you've chosen the desired image, double-left-click the image, or single-click it and then click Open. After that is done, you'll notice the Image node's and the Composite node's previews change accordingly.

(Image Loaded in the Compositor)

This image is now ready for compositing work.
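As a final, hedged aside for the mathematically inclined: the Color Balance node mentioned in the introduction operates, in its Offset/Power/Slope mode, on each channel with the standard ASC CDL transfer function (its Lift/Gamma/Gain mode is a remapping of the same idea):

\text{out} = \left(\max(0,\ \text{in} \cdot s + o)\right)^{p}

where s, o, and p are the slope, offset, and power values you set on the node. Keeping this formula in mind helps predict what the node will do before you drag any color wheels.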