
How-To Tutorials - Game Development

370 Articles

Retopology in 3ds Max

Packt
06 Nov 2012
6 min read
(For more resources related to this topic, see here.)

High poly model import

Different applications are biased toward different file formats and may therefore have different import procedures. The Send To functionality works between Mudbox and 3ds Max (which is possible as both are Autodesk products); this is essentially an .fbx transfer. If you are using ZBrush, you will want to get used to the equivalent GoZ transfer feature. Note that GoZ must be run from ZBrush to 3ds Max before it can go in the other direction. GoZ also works with Sculptris, the free mini-modeler from Pixologic (who make ZBrush too), which is available at http://www.pixologic.com/sculptris/.

In the following example, we'll export a model made from a sphere directly from Sculptris (so it needs retopology to get a clean base mesh). We'll export it as an .obj and import it into 3ds Max in order to show a few of the idiosyncrasies of this situation. With a model that is sculpted from a primitive base, such as a sphere or box, there are no meaningful texture coordinates, so it would be impossible to paint the model. Although many sculpting programs, including Sculptris, do automapping, the results are seldom optimal.

Importing a model into Sculptris

The following steps detail how to open a model in Sculptris:

Install Sculptris Alpha 6 and run it. Note that the default scene is a sphere made of triangle faces that dynamically subdivide where you paint. Use the brush tools to experiment with this for a while.

Click on Open, browse the provided content for this article, and open Packt3dsMax\Chapter 9\Creature.sc1. The file format .sc1 is native to Sculptris. To get this model to work in 3ds Max, you will need to choose Export and save it instead as Sculptris.obj.

Importing the Sculptris.obj mesh in 3ds Max

Having exported a model from Sculptris, we'll move on to see how to bring this file into 3ds Max. The importing part is fairly easy. In 3ds Max, choose File | Import and browse to Sculptris.obj, the mesh you just exported from Sculptris. You could also try the example .obj called Packt3dsMax\Chapter 9\RetopoBullStart.obj.

The import options you set matter a lot. You will need to make sure that the options Import as single mesh and Import as Editable Poly are on. This makes sure that the symmetrical object employed in the Sculptris scene (actually a separate mesh that conforms to the model) doesn't prevent the import. While importing, you should also swap the Normals radio button from the From SM group to Auto Smooth, to avoid the triangulated mesh looking faceted. A model begun in Sculptris won't contain any smoothing information when sent to 3ds Max and will come in faceted if you don't choose Auto Smooth. Another way to do the same thing after importing is to apply a Smooth modifier. The Auto Smooth value should be 90 or so, to reduce the likelihood of any sharp creases.

Finally, once the model is imported into the 3ds Max scene, move it in the Z direction so it is standing on the ground plane, and make sure its pivot is at 0,0,0. This can be done by choosing Edit | Transform Toolbox and clicking on Origin in the Pivot section.
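If you prefer to script those last two steps, the following MAXScript sketch does the same thing. This is our illustration, not from the book; it assumes the imported mesh is the current selection:

    obj = selection[1]       -- the imported Sculptris mesh
    obj.pos.z -= obj.min.z   -- stand the model on the ground plane (Z = 0)
    obj.pivot = [0,0,0]      -- same result as Transform Toolbox > Pivot > Origin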
Note that the model's edges are all tiny triangles. This is a result of the way Sculptris adds detail to a model. Retopology will help us get a quad-based model to continue working from. The idea of retopology is to build up a new, nicely constructed model on top of the high-resolution model, which serves as a guide surface.

If you are curious, apply an Unwrap UVW modifier to the model and see how its UV mapping looks. Probably a bit scary. A high-resolution model such as this one (250K polys) is virtually impossible to UV map by hand, at least not quickly. So we need to simplify the model.

If you can't see the Ribbon, go to Customize | Show UI | Show Ribbon, or press the icon in the main toolbar. Then click on the Freeform tab. With the creature mesh selected, click on Grid in the Freeform tab. This specifies the source to which we'll conform the new mesh that we're going to generate next. We don't want to conform to the grid, so change this to Draw On: Surface and then assign the source mesh using the Pick button below the Surface button, shown in the following screenshot. Each time you relaunch 3ds Max to keep working on the retopology, you'll have to reassign the high-resolution mesh as the source surface in the same way. You could also use Draw On: Selection, which would be handy if the source was, in fact, a bunch of different meshes.

There is an Offset value you can adjust so that the mesh you'll generate next will sit slightly above the source mesh. This can help reduce frustration, as the lower-resolution mesh is likely to sink in places within the more curvy, high-resolution mesh. If you're just starting out, try leaving the setting alone and see how it turns out. An additional way to help see what you are doing is to apply a semitransparent material or a low Visibility value to the high-resolution model (or press Alt + X while it is selected).

Next, in a nested part of the Ribbon, we have to set a new object or model to work on (one that doesn't exist yet). Click on the PolyDraw rollout at the bottom of the Freeform tab. Having expanded PolyDraw, click on the New Object button and we're ready to start retopologizing. I would strongly suggest raising the Min Distance value in the PolyDraw section, so that when you create the first polygons they aren't too small. When using the Strips brush, I usually set Min Distance to around 25-35, but it depends on the model scale and the level of detail you want. Just as with modeling, when you retopologize it is best to move from large forms to small details.

The object will be called something like Box001, an Editable Poly beginning in the Vertex mode. You can rename it to Retopo or something more memorable. Turn on the Strips mode and make sure Edged Faces is toggled on (F4) so you can see the high-resolution model's center line. Starting at the head, draw a strip of polygons along the symmetry line so that there's an edge on either side. As this model is symmetrical, we only have to work on half of it. If you hold the mouse over the Strips mode icon, you'll get a tool tip that explains how Strips are made, and if you press Y, you can watch a video demo, albeit one drawing on the Grid.

Note that the size of the polygons, as you draw, is determined by the Min Distance value under PolyDraw. Bear in mind that apart from the Min Distance value, the size of the polygons drawn also depends on the current viewport zoom. This is handy because when working on tighter detail, you'll tend to zoom in closer to the source mesh.


3D Animation Techniques with XNA Game Studio 4.0

Packt
14 Jan 2011
3 min read
Object animation

We will first look at the animation of objects as a whole. The most common ways to animate an object are rotation and translation (movement). We will begin by creating a class that will interpolate position and rotation values between two extremes over a given amount of time. We could also have it interpolate between two scaling values, but it is very uncommon for an object to change size in a smooth manner during gameplay, so we will leave it out for simplicity's sake.

The ObjectAnimation class has a number of parameters—starting and ending position and rotation values, a duration over which to interpolate those values, and a Boolean indicating whether the animation should loop or just remain at the end value after the duration has passed:

    public class ObjectAnimation
    {
        Vector3 startPosition, endPosition, startRotation, endRotation;
        TimeSpan duration;
        bool loop;
    }

We will also store the amount of time that has elapsed since the animation began, and the current position and rotation values:

    TimeSpan elapsedTime = TimeSpan.FromSeconds(0);

    public Vector3 Position { get; private set; }
    public Vector3 Rotation { get; private set; }

The constructor will initialize these values:

    public ObjectAnimation(Vector3 StartPosition, Vector3 EndPosition,
        Vector3 StartRotation, Vector3 EndRotation, TimeSpan Duration, bool Loop)
    {
        this.startPosition = StartPosition;
        this.endPosition = EndPosition;
        this.startRotation = StartRotation;
        this.endRotation = EndRotation;
        this.duration = Duration;
        this.loop = Loop;
        Position = startPosition;
        Rotation = startRotation;
    }

Finally, the Update() function takes the amount of time that has elapsed since the last update and updates the position and rotation values accordingly:

    public void Update(TimeSpan Elapsed)
    {
        // Update the time
        this.elapsedTime += Elapsed;

        // Determine how far along the duration value we are (0 to 1)
        float amt = (float)elapsedTime.TotalSeconds / (float)duration.TotalSeconds;

        if (loop)
            while (amt > 1) // Wrap the time if we are looping
                amt -= 1;
        else // Clamp to the end value if we are not
            amt = MathHelper.Clamp(amt, 0, 1);

        // Update the current position and rotation
        Position = Vector3.Lerp(startPosition, endPosition, amt);
        Rotation = Vector3.Lerp(startRotation, endRotation, amt);
    }

As a simple example, we'll create an animation (in the Game1 class) that rotates our spaceship in a circle over a few seconds. We'll also have it move the model up and down for demonstration's sake:

    ObjectAnimation anim;

We initialize it in the constructor:

    models.Add(new CModel(Content.Load<Model>("ship"),
        Vector3.Zero, Vector3.Zero, new Vector3(0.25f), GraphicsDevice));

    anim = new ObjectAnimation(new Vector3(0, -150, 0), new Vector3(0, 150, 0),
        Vector3.Zero, new Vector3(0, -MathHelper.TwoPi, 0),
        TimeSpan.FromSeconds(10), true);

We update it as follows:

    anim.Update(gameTime.ElapsedGameTime);
    models[0].Position = anim.Position;
    models[0].Rotation = anim.Rotation;
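If you do want the scaling interpolation we left out, the same Lerp-with-wrap pattern extends directly. The following is a minimal sketch, not part of the book's ObjectAnimation class; the ScaleAnimation name is ours:

    using System;
    using Microsoft.Xna.Framework;

    // Sketch only: the interpolation logic above, applied to scale.
    public class ScaleAnimation
    {
        Vector3 startScale, endScale;
        TimeSpan duration;
        bool loop;
        TimeSpan elapsedTime = TimeSpan.FromSeconds(0);

        public Vector3 Scale { get; private set; }

        public ScaleAnimation(Vector3 StartScale, Vector3 EndScale,
            TimeSpan Duration, bool Loop)
        {
            startScale = StartScale;
            endScale = EndScale;
            duration = Duration;
            loop = Loop;
            Scale = startScale;
        }

        public void Update(TimeSpan Elapsed)
        {
            elapsedTime += Elapsed;
            float amt = (float)elapsedTime.TotalSeconds / (float)duration.TotalSeconds;
            if (loop)
                while (amt > 1) // Wrap the time if we are looping
                    amt -= 1;
            else // Clamp to the end value if we are not
                amt = MathHelper.Clamp(amt, 0, 1);
            Scale = Vector3.Lerp(startScale, endScale, amt);
        }
    }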


Adding Bodies to the World

Packt
11 Dec 2012
4 min read
(For more resources related to this topic, see here.)

Creating a fixture

A fixture is used to bind the shape to a body, and to define its material by setting density, friction, and restitution. The first step is to create the fixture:

    var fixtureDef:b2FixtureDef = new b2FixtureDef();
    fixtureDef.shape = circleShape;

Once we have created the fixture with the constructor, we assign the previously created shape using the shape property. Finally, we are ready to add the ball to the world:

    var theBall:b2Body = world.CreateBody(bodyDef);
    theBall.CreateFixture(fixtureDef);

b2Body is the body itself: the physical, concrete body that has been created using the bodyDef attribute. To recap, use the following steps when you want to place a body in the world:

i. Create a body definition, which will hold body information such as its position.
ii. Create a shape, which is how the body will look.
iii. Create a fixture to attach the shape to the body definition.
iv. Create the body itself in the world using the fixture.

Once you know the importance of each step, adding bodies to your Box2D world will be easy and fun (a reusable helper wrapping these four steps is sketched at the end of this article).

Back to our project. The following is how the class should look now:

    package {
        import flash.display.Sprite;
        import flash.events.Event;
        import Box2D.Dynamics.*;
        import Box2D.Collision.*;
        import Box2D.Collision.Shapes.*;
        import Box2D.Common.Math.*;

        public class Main extends Sprite {
            private var world:b2World;
            private var worldScale:Number = 30;

            public function Main() {
                world = new b2World(new b2Vec2(0, 9.81), true);
                var bodyDef:b2BodyDef = new b2BodyDef();
                bodyDef.position.Set(320/worldScale, 30/worldScale);
                var circleShape:b2CircleShape;
                circleShape = new b2CircleShape(25/worldScale);
                var fixtureDef:b2FixtureDef = new b2FixtureDef();
                fixtureDef.shape = circleShape;
                var theBall:b2Body = world.CreateBody(bodyDef);
                theBall.CreateFixture(fixtureDef);
                addEventListener(Event.ENTER_FRAME, updateWorld);
            }

            private function updateWorld(e:Event):void {
                world.Step(1/30, 10, 10);
                world.ClearForces();
            }
        }
    }

Time to save the project and test it. Ready to see your first Box2D body in action? Run the movie! OK, it did not display anything. Before you throw this article away, let me tell you that Box2D only simulates the physics world; it does not display anything. This means your body is alive and kicking in your Box2D world; it's just that you can't see it.

Creating a box shape

Let's perform the following steps:

First, the body and fixture definitions can be reassigned to define our new body. This way, we don't need to declare another bodyDef variable; we just reuse the one we used for the creation of the sphere, changing its position:

    bodyDef.position.Set(320/worldScale, 470/worldScale);

Now the body definition is located at the horizontal center, close to the bottom of the screen. To create a polygon shape, we will use the b2PolygonShape class:

    var polygonShape:b2PolygonShape = new b2PolygonShape();

This way we create a polygon shape in the same way we created the circle shape earlier. Polygon shapes must follow some restrictions, but at the moment, because we only need an axis-aligned box, the SetAsBox method is all we need.

    polygonShape.SetAsBox(320/worldScale, 10/worldScale);

The method requires two arguments: the half-width and the half-height of the box. In the end, our new polygon shape will have its center at pixel (320, 470), and it will have a width of 640 pixels and a height of 20 pixels—just what we need to create a floor.
Now we change the shape attribute of the fixture definition, attaching the new polygon shape:

    fixtureDef.shape = polygonShape;

Finally, we can create the world body and embed the fixture in it, just like we did with the sphere:

    var theFloor:b2Body = world.CreateBody(bodyDef);
    theFloor.CreateFixture(fixtureDef);

The following is how your Main function should look now:

    public function Main() {
        world = new b2World(new b2Vec2(0, 9.81), true);
        var bodyDef:b2BodyDef = new b2BodyDef();
        bodyDef.position.Set(320/worldScale, 30/worldScale);
        var circleShape:b2CircleShape;
        circleShape = new b2CircleShape(25/worldScale);
        var fixtureDef:b2FixtureDef = new b2FixtureDef();
        fixtureDef.shape = circleShape;
        var theBall:b2Body = world.CreateBody(bodyDef);
        theBall.CreateFixture(fixtureDef);
        bodyDef.position.Set(320/worldScale, 470/worldScale);
        var polygonShape:b2PolygonShape = new b2PolygonShape();
        polygonShape.SetAsBox(320/worldScale, 10/worldScale);
        fixtureDef.shape = polygonShape;
        var theFloor:b2Body = world.CreateBody(bodyDef);
        theFloor.CreateFixture(fixtureDef);
        var debugDraw:b2DebugDraw = new b2DebugDraw();
        var debugSprite:Sprite = new Sprite();
        addChild(debugSprite);
        debugDraw.SetSprite(debugSprite);
        debugDraw.SetDrawScale(worldScale);
        debugDraw.SetFlags(b2DebugDraw.e_shapeBit);
        debugDraw.SetFillAlpha(0.5);
        world.SetDebugDraw(debugDraw);
        addEventListener(Event.ENTER_FRAME, updateWorld);
    }

Test the movie and you'll see the floor:
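As promised, here is a hedged sketch of a helper that wraps the four body-creation steps into a single call. The function name and signature are ours, not from the book; it uses only the Box2D calls shown above:

    function createCircleBody(world:b2World, xPix:Number, yPix:Number,
                              radiusPix:Number, worldScale:Number):b2Body {
        // Step i: the body definition holds the position
        var bodyDef:b2BodyDef = new b2BodyDef();
        bodyDef.position.Set(xPix/worldScale, yPix/worldScale);
        // Step ii: the shape is how the body will look
        var circleShape:b2CircleShape = new b2CircleShape(radiusPix/worldScale);
        // Step iii: the fixture attaches the shape to the body
        var fixtureDef:b2FixtureDef = new b2FixtureDef();
        fixtureDef.shape = circleShape;
        // Step iv: create the body in the world and bind the fixture
        var body:b2Body = world.CreateBody(bodyDef);
        body.CreateFixture(fixtureDef);
        return body;
    }

With this in place, the ball in Main() could be created with a single line such as createCircleBody(world, 320, 30, 25, worldScale).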


OpenSceneGraph: advanced scene graph components

Packt
22 Dec 2010
12 min read
Creating billboards in a scene

In the 3D world, a billboard is a 2D image that is always facing a designated direction. Applications can use billboard techniques to create many kinds of special effects, such as explosions, flares, sky, clouds, and trees. In fact, any object can be treated as a billboard with itself cached as the texture, when viewed from a distance. Thus, billboarding is one of the most popular techniques, widely used in computer games and real-time visual simulation programs.

The osg::Billboard class is used to represent a list of billboard objects in a 3D scene. It is derived from osg::Geode, and can orient all of its children (osg::Drawable objects) to face the viewer's viewpoint. It has an important method, setMode(), that determines the rotation behavior and must be set to one of the following enumerations:

POINT_ROT_EYE: All drawables are rotated about the viewer position, with the object coordinate Z axis constrained to the window coordinate Y axis.
POINT_ROT_WORLD: Drawables are rotated about the viewer directly, from their original orientation to the current eye direction in world space.
AXIAL_ROT: Drawables are rotated about an axis specified by setAxis().

Every drawable in the osg::Billboard node should have a pivot point position, which is specified via the overloaded addDrawable() method, for example:

    billboard->addDrawable( child, osg::Vec3(1.0f, 0.0f, 0.0f) );

All drawables also need a unified initial front face orientation, which is used for computing rotation values. The initial orientation is set by the setNormal() method, and each newly-added drawable must ensure that its front face orientation is in the same direction as this normal value; otherwise the billboard results may be incorrect.

Time for action – creating banners facing you

The prerequisite for implementing billboards in OSG is to create one or more quad geometries first. These quads are then managed by the osg::Billboard class, which forces all child drawables to automatically rotate around a specified axis, or face the viewer. This is done by presetting a unified normal value and rotating each billboard according to the normal and the current rotation axis or viewing vector. We will create two banks of OSG banners, arranged in a V, to demonstrate the use of billboards in OSG. No matter where the viewer is and how he manipulates the scene camera, the front faces of the banners face the viewer all the time. This feature can then be used to represent textured trees and particles in user applications.

Include the necessary headers:

    #include <osg/Billboard>
    #include <osg/Texture2D>
    #include <osgDB/ReadFile>
    #include <osgViewer/Viewer>

Create the quad geometry directly from the osg::createTexturedQuadGeometry() function. Every generated quad is of the same size and origin point, and uses the same image file. Note that the osg256.png file can be found in the data directory of your OSG installation path, but it requires the osgdb_png plugin for reading image data.
    osg::Geometry* createQuad()
    {
        osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
        osg::ref_ptr<osg::Image> image = osgDB::readImageFile( "Images/osg256.png" );
        texture->setImage( image.get() );

        osg::ref_ptr<osg::Geometry> quad = osg::createTexturedQuadGeometry(
            osg::Vec3(-0.5f, 0.0f,-0.5f),
            osg::Vec3(1.0f, 0.0f, 0.0f),
            osg::Vec3(0.0f, 0.0f, 1.0f) );

        osg::StateSet* ss = quad->getOrCreateStateSet();
        ss->setTextureAttributeAndModes( 0, texture.get() );
        return quad.release();
    }

In the main entry, we first create the billboard node and set the mode to POINT_ROT_EYE. That is, the drawable will rotate to face the viewer and keep its Z axis upright in the rendering window. The default normal setting of the osg::Billboard class is the negative Y axis, so rotating it to the viewing vector will show the quads on the XOZ plane in the best appearance:

    osg::ref_ptr<osg::Billboard> geode = new osg::Billboard;
    geode->setMode( osg::Billboard::POINT_ROT_EYE );

Now let's create the banner quads and arrange them in a V formation:

    osg::Geometry* quad = createQuad();
    for ( unsigned int i=0; i<10; ++i )
    {
        float id = (float)i;
        geode->addDrawable( quad, osg::Vec3(-2.5f+0.2f*id, id, 0.0f) );
        geode->addDrawable( quad, osg::Vec3( 2.5f-0.2f*id, id, 0.0f) );
    }

All quad textures' backgrounds are automatically cleared because of the alpha test, which is performed internally in the osgdb_png plugin. That means we have to set the correct rendering order of all the drawables to ensure that the entire process works properly:

    osg::StateSet* ss = geode->getOrCreateStateSet();
    ss->setRenderingHint( osg::StateSet::TRANSPARENT_BIN );

It's time for us to start the viewer, as there are no important steps left to create and render billboards:

    osgViewer::Viewer viewer;
    viewer.setSceneData( geode.get() );
    return viewer.run();

Try navigating in the scene graph: you will find that the billboard's children are always rotating to face the viewer, but the images' Y directions never change (they point to the window's Y coordinate all along). Replace the mode POINT_ROT_EYE with POINT_ROT_WORLD and see if there is any difference.

What just happened?

The basic usage of billboards in an OSG scene graph is shown in this example, but it can still be further improved. All the banner geometries here are created with the createQuad() function, which means that the same quad and the same texture are reallocated at least 20 times! The object-sharing mechanism would certainly be an optimization here. Unfortunately, it is not clever enough to add the same drawable object to osg::Billboard with different positions, which could cause the node to work improperly. What we could do is create multiple quad geometries that share the same texture object. This will greatly reduce the video card's texture memory occupancy and the rendering load.

Another possible issue is that somebody may require loaded nodes to be rendered as billboards, not only drawables. A node can consist of different kinds of child nodes, and is much richer than a basic shape or geometry mesh. OSG also provides the osg::AutoTransform class, which automatically rotates an object's children to be aligned with screen coordinates.
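The texture-sharing idea can be sketched as follows. This is our illustration rather than the book's code; it reuses the headers and values from the example above, building one bank of banners with a distinct geometry per drawable but a single texture object:

    osg::Billboard* createSharedTextureBillboard()
    {
        // One texture object, shared by every quad below
        osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
        osg::ref_ptr<osg::Image> image = osgDB::readImageFile( "Images/osg256.png" );
        texture->setImage( image.get() );

        osg::ref_ptr<osg::Billboard> geode = new osg::Billboard;
        geode->setMode( osg::Billboard::POINT_ROT_EYE );

        for ( unsigned int i=0; i<10; ++i )
        {
            // A distinct geometry per drawable avoids the shared-drawable
            // positioning problem; only the texture is reused.
            osg::ref_ptr<osg::Geometry> quad = osg::createTexturedQuadGeometry(
                osg::Vec3(-0.5f, 0.0f,-0.5f),
                osg::Vec3(1.0f, 0.0f, 0.0f),
                osg::Vec3(0.0f, 0.0f, 1.0f) );
            quad->getOrCreateStateSet()->setTextureAttributeAndModes( 0, texture.get() );

            float id = (float)i;
            geode->addDrawable( quad.get(), osg::Vec3(-2.5f+0.2f*id, id, 0.0f) );
        }
        return geode.release();
    }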
Have a go hero – planting massive trees on the ground

Billboards are widely used for simulating massive numbers of trees and plants. One or more tree pictures with transparent backgrounds are applied to quads of different sizes, and then added to the billboard node. These trees will automatically face the viewer or, to be more realistic, rotate about an axis as if their branches and leaves are always at the front. Now let's try to create some simple billboard trees. We only need to prepare a nice enough image.

Creating texts

Text is one of the most important components in all kinds of virtual reality programs. It is used everywhere—for displaying stats on the screen, labeling 3D objects, logging, and debugging. Texts always have at least one font to specify the typeface and qualities, as well as other parameters, including size, alignment, layout (left-to-right or right-to-left), and resolution, to determine their display behavior. OpenGL doesn't directly support loading fonts and displaying texts in 3D space, but OSG provides full support for rendering high-quality texts and configuring different text attributes, which makes it much easier to develop related applications.

The osgText library actually implements all font and text functionalities. It requires the osgdb_freetype plugin to work properly. This plugin can load and parse TrueType fonts with the help of FreeType, a famous third-party dependency. After that, it returns an osgText::Font instance, which is made up of a complete set of texture glyphs. The entire process can be described with the osgText::readFontFile() function.

The osgText::TextBase class is the pure base class of all OSG text types. It is derived from osg::Drawable, but doesn't support display lists by default. Its subclass, osgText::Text, is used to manage flat characters in world coordinates. Important methods include setFont(), setPosition(), setCharacterSize(), and setText(), each of which is easy to understand and use, as shown in the following example.

Time for action – writing descriptions for the Cessna

This time we are going to display a Cessna in the 3D space and provide descriptive texts in front of the rendered scene. A heads-up display (HUD) camera can be used here, which is rendered after the main camera, and only clears the depth buffer for directly updating texts to the frame buffer. The HUD camera will then render its child nodes in a way that is always visible.

Include the necessary headers:

    #include <osg/Camera>
    #include <osgDB/ReadFile>
    #include <osgText/Font>
    #include <osgText/Text>
    #include <osgViewer/Viewer>

The osgText::readFontFile() function is used for reading a suitable font file, for instance, an undistorted TrueType font. The OSG data paths (specified with OSG_FILE_PATH) and the Windows system path will be searched to see if the specified file exists:

    osg::ref_ptr<osgText::Font> g_font = osgText::readFontFile("fonts/arial.ttf");

Create a standard HUD camera and set a 2D orthographic projection matrix for the purpose of drawing 3D texts in two dimensions. The camera should not receive any user events, and should never be affected by any parent transformations. These are guaranteed by the setAllowEventFocus() and setReferenceFrame() methods:

    osg::Camera* createHUDCamera( double left, double right, double bottom, double top )
    {
        osg::ref_ptr<osg::Camera> camera = new osg::Camera;
        camera->setReferenceFrame( osg::Transform::ABSOLUTE_RF );
        camera->setClearMask( GL_DEPTH_BUFFER_BIT );
        camera->setRenderOrder( osg::Camera::POST_RENDER );
        camera->setAllowEventFocus( false );
        camera->setProjectionMatrix( osg::Matrix::ortho2D(left, right, bottom, top) );
        return camera.release();
    }

The text is created by a separate global function, too.
It defines a font object describing every character's glyph, the size and position parameters in the world space, and the content of the text. In the HUD text implementation, texts should always align with the XOY plane:

    osgText::Text* createText( const osg::Vec3& pos, const std::string& content, float size )
    {
        osg::ref_ptr<osgText::Text> text = new osgText::Text;
        text->setFont( g_font.get() );
        text->setCharacterSize( size );
        text->setAxisAlignment( osgText::TextBase::XY_PLANE );
        text->setPosition( pos );
        text->setText( content );
        return text.release();
    }

In the main entry, we create a new osg::Geode node and add multiple text objects to it. These introduce the leading features of a Cessna. Of course, you can add your own explanations about this type of monoplane by using additional osgText::Text drawables:

    osg::ref_ptr<osg::Geode> textGeode = new osg::Geode;
    textGeode->addDrawable( createText(
        osg::Vec3(150.0f, 500.0f, 0.0f),
        "The Cessna monoplane", 20.0f) );
    textGeode->addDrawable( createText(
        osg::Vec3(150.0f, 450.0f, 0.0f),
        "Six-seat, low-wing and twin-engined", 15.0f) );

The node including all texts should be added to the HUD camera. To ensure that the texts won't be affected by OpenGL normals and lights (they are textured geometries, after all), we have to disable lighting for the camera node:

    osg::Camera* camera = createHUDCamera(0, 1024, 0, 768);
    camera->addChild( textGeode.get() );
    camera->getOrCreateStateSet()->setMode(
        GL_LIGHTING, osg::StateAttribute::OFF );

The last step is to add the Cessna model and the camera to the scene graph, and start the viewer as usual:

    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild( osgDB::readNodeFile("cessna.osg") );
    root->addChild( camera );

    osgViewer::Viewer viewer;
    viewer.setSceneData( root.get() );
    return viewer.run();

In the rendering window, you will see two lines of text over the Cessna model. No matter how you translate, rotate, or scale the view matrix, the HUD texts will never be covered. Thus, users can always read the most important information directly, without looking away from their usual perspectives.

What just happened?

To build the example code with CMake or other native compilers, you should add the osgText library as a dependency, and include the osgParticle, osgShadow, and osgFX libraries. Here we specify the font from the arial.ttf file. This is a default font in most Windows and UNIX systems, and can also be found in OSG data paths. As you can see, this kind of font offers developers highly precise displayed characters, regardless of font size settings. This is because the outlines of TrueType fonts are made of mathematical line segments and Bezier curves, which means they are vector fonts. Bitmap (raster) fonts don't have such features and may sometimes look ugly when resized. Disable setFont() here to force osgText to use a default 12x12 bitmap font. Can you figure out the difference between these two fonts?

Have a go hero – using wide characters to support more languages

The setText() method of osgText::Text accepts std::string variables directly. It also accepts wide characters as the input argument. For example:

    wchar_t* wstr = …;
    text->setText( wstr );

This makes it possible to support multiple languages, for instance, Chinese and Japanese characters.
Now, try obtaining a sequence of wide characters either by defining them directly or converting from multi-byte characters, and apply them to the osgText::Text object, to see if the language that you are interested in can be rendered. Please note that the font should also be changed to support the corresponding language.
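As a minimal sketch of the wide-character path described above (the font file name and the two-character literal are our illustrative choices, not the book's):

    osgText::Text* createWideText()
    {
        // Any TrueType font containing CJK glyphs will do; simhei.ttf is one example.
        osg::ref_ptr<osgText::Font> font = osgText::readFontFile( "fonts/simhei.ttf" );
        osg::ref_ptr<osgText::Text> text = new osgText::Text;
        text->setFont( font.get() );
        text->setCharacterSize( 20.0f );
        const wchar_t* wstr = L"\u4E2D\u6587";  // wide-character input ("Chinese")
        text->setText( wstr );                  // the wchar_t* overload shown above
        return text.release();
    }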


Nodes

Packt
19 Aug 2015
18 min read
In this article by Samanyu Chopra, author of the book iOS Game Development By Example, we will study nodes, which play an important role in understanding the tree structure of a game. Further, we will discuss the types of nodes in Sprite Kit and their uses in detail. (For more resources related to this topic, see here.)

All you need to know about nodes

We have discussed many things about nodes so far. Almost everything you make in a game with Sprite Kit is a node. Scenes that we present to the view are instances of the SKScene class, which is a subclass of the SKEffectNode class, which is itself a subclass of the SKNode class; indirectly, SKScene is a subclass of SKNode. As a game follows the node tree formation, a scene acts as the root node and the other nodes are used as its children. It should be remembered that although SKNode is the base class for the nodes you see in a scene, it does not draw anything itself. It only provides some basic features to its subclass nodes. All the visual content we see in a Sprite Kit game is drawn by the appropriate SKNode subclasses.

The following are some subclasses of the SKNode class, which are used for different behaviors in a Sprite Kit-based game:

SKSpriteNode: This class is used to instantiate a textured sprite in the game.
SKVideoNode: This class is used to play video content in a scene.
SKLabelNode: This class is used to draw labels in a game, with many customizing options, such as font type, font size, font color, and so on.
SKShapeNode: This class is used to make a shape based on a path at run time, for example, drawing a line or making a drawing game.
SKEmitterNode: This class is used for emitting particle effects in a scene, with many options, such as position, number of particles, color, and so on.
SKCropNode: This class is basically used for cropping its child nodes using a mask. Using this, you can selectively block areas of a layer.
SKEffectNode: SKEffectNode is the parent of the SKScene class and a subclass of the SKNode class. It is used for applying image filters to its children.
SKLightNode: The SKLightNode class is used to make light and shadow effects in a scene.
SKFieldNode: This is a useful feature of Sprite Kit. You can give a portion of the scene physical properties; for example, in a space game, a black hole that attracts the things that are nearby.

So, these are the basic subclasses of SKNode which are used frequently in Sprite Kit. SKNode provides some basic properties to its subclasses, which are used to view a node inside a scene, such as:

position: This sets up the position of a node in a scene
xScale: This scales the width of a node
yScale: This scales the height of a node
zRotation: This facilitates the rotation of a node in a clockwise or anticlockwise direction
frame: The node content's bounding rectangle, without accounting for its children

We know that the SKNode class does not draw anything by itself. So, what is the use of it? Well, we can use SKNode instances to manage our other nodes in different layers separately, or we can use them to manage different nodes in the same layer. Let's take a look at how we can do this.

Using the SKNode object in the game

Now, we will discover what the various aspects of SKNode are used for. Say you have to make a body from different sprite parts, like a car. You can make it from sprites for the wheels and the body.
The wheels and body of a car run in synchronization with each other, so it makes sense to control their action together rather than manage each part separately. This can be done by adding them as children of an SKNode class object and updating this node to control the activity of the car.

The SKNode class object can also be used for layering purposes in a game. Suppose we have three layers in our game: a foreground layer, which holds the foreground sprites, a middle layer, which holds the middle sprites, and a background layer, which holds the background sprites. If we want a parallax effect in our game, we would otherwise have to update each sprite's position separately; instead, we can make three SKNode objects, one per layer, and add the sprites to their respective nodes. Now we only have to update these three nodes' positions, and the sprites will update their positions automatically (a sketch of this follows).
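Here is a hedged sketch of that layering idea, written in the same Swift style as the article's listings; the class name, image names, and scroll speeds are ours, not from the book:

    import SpriteKit

    class ParallaxScene: SKScene {
        let backgroundLayer = SKNode()
        let middleLayer = SKNode()
        let foregroundLayer = SKNode()

        override func didMoveToView(view: SKView) {
            addChild(backgroundLayer)
            addChild(middleLayer)
            addChild(foregroundLayer)
            // Sprites go into their layer, not directly into the scene
            backgroundLayer.addChild(SKSpriteNode(imageNamed: "hills"))
            middleLayer.addChild(SKSpriteNode(imageNamed: "trees"))
            foregroundLayer.addChild(SKSpriteNode(imageNamed: "bushes"))
        }

        override func update(currentTime: NSTimeInterval) {
            // Distant layers scroll more slowly than near ones
            backgroundLayer.position.x -= 0.5
            middleLayer.position.x -= 1.0
            foregroundLayer.position.x -= 2.0
        }
    }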
The SKNode class can also be used to make checkpoints in a game: hidden nodes that trigger some event when a player crosses them, such as a level end, a bonus, or a death trap. We can remove or add a whole subtree inside a node and perform the necessary functions on it, such as rotating, scaling, positioning, and so on.

Since we can use SKNode objects as checkpoints in the game, it is important to be able to recognize them in your scene. So, how do we do that? The SKNode class provides a property for this. Let's find out more about it.

Recognition of a node

The SKNode class provides a name property to recognize the correct node. It takes a string as a parameter. You can either search for a node by its name or use one of the two methods provided by SKNode, which are as follows:

func childNodeWithName(name:String) -> SKNode: This function takes the name string as a parameter, and if it finds a node with the given name, it returns that node; otherwise it returns nil. If there is more than one node sharing the same name, it returns the first node found in the search.
func enumerateChildNodesWithName(name:String, usingBlock:((SKNode!, UnsafeMutablePointer<ObjCBool>)->Void)!): When you need all the nodes sharing the same name, use this function. It takes the name and a block as parameters. In usingBlock, you are given two parameters: one matching node, and a pointer of type Boolean to stop the enumeration.

In our game, if you remember, we used the name property inside PlayButton to recognize the node when a user taps on it. It's a very useful property for finding the desired node. So, let's have a quick look at the other properties and methods of the SKNode class.

Initializing a node

There are two initializers to make an instance of SKNode. Both are available in iOS 8.0 or later:

convenience init(fileNamed filename: String): This initializer is used for making a node by loading an archive file from the main bundle. For this, you have to pass a file name with an sks extension in the main bundle.
init(): This is used to make a simple node without any parameters. It is useful for layering purposes in a game.

As we have already discussed the positioning of a node, let's discuss some functions and properties that are used to build a node tree.

Building a node tree

SKNode provides some functions and properties to work with a node tree. Following are some of the functions:

addChild(node:SKNode): This is a very common function and is mostly used to build a node tree structure. We have already used it to add nodes to scenes.
insertChild(node:SKNode, atIndex index: Int): This is used when you have to insert a child at a specific position in the children array.
removeFromParent(): This simply removes a node from its parent.
removeAllChildren(): This is used when you have to clear all the children of a node.
removeChildrenInArray(nodes:[AnyObject]!): This takes an array of SKNode objects and removes them from the receiving node.
inParentHierarchy(parent:SKNode) -> Bool: This takes an SKNode object to check whether it is a parent of the receiving node, and returns a Boolean value accordingly.

There are some useful properties used in a node tree, as follows:

children: This is a read-only property. It contains the receiving node's children in an array.
parent: This is also a read-only property. It contains a reference to the parent of the receiving node; if there is none, it returns nil.
scene: This too is a read-only property. If the node is embedded in a scene, it contains a reference to the scene; otherwise it is nil.

In a game, we need to perform specific tasks on a node, such as changing its position from one point to another, changing sprites in a sequence, and so on. These tasks are done using actions on the node. Let's talk about them now.

Actions on a node tree

Actions are required for specific tasks in a game. For this, the SKNode class provides some basic functions, which are as follows:

runAction(action:SKAction!): This function takes an SKAction class object as a parameter and performs the action on the receiving node.
runAction(action:SKAction!, completion block: (() -> Void)!): This function takes an SKAction class object and a completion block as parameters. When the action completes, it calls the block.
runAction(action:SKAction, withKey key:String!): This function takes an SKAction class object and a unique key to identify the action, and performs it on the receiving node.
actionForKey(key:String) -> SKAction?: This takes a string key as a parameter and returns the associated SKAction object for that key identifier, if it exists; otherwise it returns nil.
hasActions() -> Bool: This returns true if the node has any executing actions, or else false.
removeAllActions(): This function removes all actions from the receiving node.
removeActionForKey(key:String): This takes a string name as a key and removes the action associated with that key, if it exists.

Some useful properties to control these actions are as follows:

speed: This is used to speed up or slow down the action motion. The default value is 1.0, which runs at normal speed; with increasing value, the speed increases.
paused: This Boolean value determines whether an action on the node should be paused or resumed.

Sometimes, we need to change a point's coordinate system relative to a node inside a scene. The SKNode class provides two functions to interchange a point's coordinate system with respect to a node in a scene. Let's talk about them.

Coordinate system of a node

We can convert a point with respect to the coordinate system of any node tree. The functions to do that are as follows:

convertPoint(point:CGPoint, fromNode node: SKNode) -> CGPoint: This takes a point in another node's coordinate system and that node as its parameters, and returns the point converted into the receiving node's coordinate system.
convertPoint(point:CGPoint, toNode node:SKNode) -> CGPoint: This takes a point in the receiving node's coordinate system and another node in the node tree as its parameters, and returns the same point converted into the other node's coordinate system.

We can also determine whether a point is inside a node's area or not:

containsPoint(p:CGPoint) -> Bool: This returns a Boolean value according to whether the point lies inside or outside of the receiving node's bounding box.
nodeAtPoint(p:CGPoint) -> SKNode: This returns the deepest descendant node that intersects the point. If there is none, it returns the receiver node.
nodesAtPoint(p:CGPoint) -> [AnyObject]: This returns an array of all the SKNode objects in the subtree that intersect the point. If no nodes intersect the point, an empty array is returned.

Apart from these, the SKNode class provides some other functions and properties too. Let's talk about them.

Other functions and properties

Some other functions and properties of the SKNode class are as follows:

intersectsNode(node:SKNode) -> Bool: As the name suggests, this returns a Boolean value according to whether the receiving node intersects the node passed as the parameter.
physicsBody: This is a property of the SKNode class. The default value is nil, which means that the node will not take part in any physics simulation in the scene. If it contains a physics body, the node will change its position and rotation in accordance with the physics simulation in the scene.
userData: NSMutableDictionary?: The userData property is used to store data for a node in dictionary form. We can store position, rotation, and many custom data sets about the node inside it.
constraints: [AnyObject]?: This contains an array of SKConstraint objects applied to the receiving node. Constraints are used to limit the position or rotation of a node inside a scene.
reachConstraints: SKReachConstraints?: This is basically used to set restricted values for the receiving node by making an SKReachConstraints object, for example, to make joints move in a human body.
Node blending modes: The SKNode class declares an enum, SKBlendMode, of the int type to blend the receiving node's color using source and destination pixel colors. The constants used for this are as follows:
Alpha: This blends source and destination colors by multiplying the source alpha value.
Add: This adds the source and destination colors.
Subtract: This subtracts the source color from the destination color.
Multiply: This multiplies the source color by the destination color.
MultiplyX2: This multiplies the source color by the destination color, and then doubles the resulting color.
Screen: This multiplies the inverted source and destination colors respectively, and then inverts the final result color.
Replace: This replaces the destination color with the source color.
calculateAccumulatedFrame() -> CGRect: We know that a node does not draw anything by itself, but if a node has descendants that draw content, we may need to know the overall frame size of that node. This function calculates the frame that contains the content of the receiver node and all of its descendants.
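The following hedged sketch (our own names, in the same Swift style as the article's listings) ties a few of these APIs together: finding nodes by name, running a keyed action on them, and converting a position between coordinate systems:

    import SpriteKit

    class DemoScene: SKScene {
        override func didMoveToView(view: SKView) {
            let enemy = SKSpriteNode(color: UIColor.redColor(),
                                     size: CGSize(width: 32, height: 32))
            enemy.name = "enemy"
            addChild(enemy)

            // Run a looping patrol action on every node named "enemy"
            enumerateChildNodesWithName("enemy") { node, stop in
                let up = SKAction.moveByX(0, y: 50, duration: 1.0)
                let patrol = SKAction.sequence([up, up.reversedAction()])
                node.runAction(SKAction.repeatActionForever(patrol), withKey: "patrol")
            }

            // Convert the enemy's position into a layer's coordinate system
            let layer = SKNode()
            addChild(layer)
            let posInLayer = layer.convertPoint(enemy.position, fromNode: self)
            println("Enemy in layer space: \(posInLayer)")
        }
    }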
Now, we are ready to see some basic SKNode subclasses in action. The classes we are going to discuss are as follows:

SKLabelNode
SKCropNode
SKShapeNode
SKEmitterNode
SKLightNode
SKVideoNode

To study these classes, we are going to create six different SKScene subclasses in our project, so that we can learn them separately. Now, having learned about nodes in detail, we can proceed further to utilize the concept of nodes in a game.

Creating subclasses for our Platformer game

With a theoretical understanding of nodes, one wonders how this concept is helpful in developing a game. To understand the development of a game using the concept of nodes, we now go ahead with writing and executing code for our Platformer game. Create the subclasses of the different nodes in Xcode, following the given steps:

From the main menu, select New File | Swift | Save As | NodeMenuScene.swift. Make sure Platformer is ticked as the target. Now choose Create, open the file, and make the NodeMenuScene class by subclassing SKScene.

Following the same steps, make the CropScene, ShapeScene, ParticleScene, LightScene, and VideoNodeScene files, respectively.

Open the GameViewController.swift file and replace the viewDidLoad function by typing out the following code:

    override func viewDidLoad() {
        super.viewDidLoad()
        let menuscene = NodeMenuScene()
        let skview = view as SKView
        skview.showsFPS = true
        skview.showsNodeCount = true
        skview.ignoresSiblingOrder = true
        menuscene.scaleMode = .ResizeFill
        menuscene.anchorPoint = CGPoint(x: 0.5, y: 0.5)
        menuscene.size = view.bounds.size
        skview.presentScene(menuscene)
    }

In this code, we just called our NodeMenuScene class from the GameViewController class. Now, it's time to add some code to the NodeMenuScene class.

NodeMenuScene

Open the NodeMenuScene.swift file and type in the code as shown next. Do not worry about the length of the code; as this code is for creating the node menu screen, most of the functions are similar to creating buttons:

    import Foundation
    import SpriteKit

    let BackgroundImage = "BG"
    let FontFile = "Mackinaw1"

    let sKCropNode = "SKCropNode"
    let sKEmitterNode = "SKEmitterNode"
    let sKLightNode = "SKLightNode"
    let sKShapeNode = "SKShapeNode"
    let sKVideoNode = "SKVideoNode"

    class NodeMenuScene: SKScene {

        let transitionEffect = SKTransition.flipHorizontalWithDuration(1.0)
        var labelNode: SKNode?
        var backgroundNode: SKNode?

        override func didMoveToView(view: SKView) {
            backgroundNode = getBackgroundNode()
            backgroundNode!.zPosition = 0
            self.addChild(backgroundNode!)
            labelNode = getLabelNode()
            labelNode?.zPosition = 1
            self.addChild(labelNode!)
        }

        func getBackgroundNode() -> SKNode {
            var bgnode = SKNode()
            var bgSprite = SKSpriteNode(imageNamed: "BG")
            bgSprite.xScale = self.size.width/bgSprite.size.width
            bgSprite.yScale = self.size.height/bgSprite.size.height
            bgnode.addChild(bgSprite)
            return bgnode
        }

        func getLabelNode() -> SKNode {
            var labelNode = SKNode()

            var cropnode = SKLabelNode(fontNamed: FontFile)
            cropnode.fontColor = UIColor.whiteColor()
            cropnode.name = sKCropNode
            cropnode.text = sKCropNode
            cropnode.position = CGPointMake(
                CGRectGetMinX(self.frame) + cropnode.frame.width/2,
                CGRectGetMaxY(self.frame) - cropnode.frame.height)
            labelNode.addChild(cropnode)

            var emitternode = SKLabelNode(fontNamed: FontFile)
            emitternode.fontColor = UIColor.blueColor()
            emitternode.name = sKEmitterNode
            emitternode.text = sKEmitterNode
            emitternode.position = CGPointMake(
                CGRectGetMinX(self.frame) + emitternode.frame.width/2,
                CGRectGetMidY(self.frame) - emitternode.frame.height/2)
            labelNode.addChild(emitternode)

            var lightnode = SKLabelNode(fontNamed: FontFile)
            lightnode.fontColor = UIColor.whiteColor()
            lightnode.name = sKLightNode
            lightnode.text = sKLightNode
            lightnode.position = CGPointMake(
                CGRectGetMaxX(self.frame) - lightnode.frame.width/2,
                CGRectGetMaxY(self.frame) - lightnode.frame.height)
            labelNode.addChild(lightnode)

            var shapetnode = SKLabelNode(fontNamed: FontFile)
            shapetnode.fontColor = UIColor.greenColor()
            shapetnode.name = sKShapeNode
            shapetnode.text = sKShapeNode
            shapetnode.position = CGPointMake(
                CGRectGetMaxX(self.frame) - shapetnode.frame.width/2,
                CGRectGetMidY(self.frame) - shapetnode.frame.height/2)
            labelNode.addChild(shapetnode)

            var videonode = SKLabelNode(fontNamed: FontFile)
            videonode.fontColor = UIColor.blueColor()
            videonode.name = sKVideoNode
            videonode.text = sKVideoNode
            videonode.position = CGPointMake(
                CGRectGetMaxX(self.frame) - videonode.frame.width/2,
                CGRectGetMinY(self.frame))
            labelNode.addChild(videonode)

            return labelNode
        }

        var once: Bool = true

        override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
            if !once { return }
            for touch: AnyObject in touches {
                let location = touch.locationInNode(self)
                let node = self.nodeAtPoint(location)
                if node.name == sKCropNode {
                    once = false
                    var scene = CropScene()
                    scene.anchorPoint = CGPointMake(0.5, 0.5)
                    scene.scaleMode = .ResizeFill
                    scene.size = self.size
                    self.view?.presentScene(scene, transition: transitionEffect)
                } else if node.name == sKEmitterNode {
                    once = false
                    var scene = ParticleScene()
                    scene.anchorPoint = CGPointMake(0.5, 0.5)
                    scene.scaleMode = .ResizeFill
                    scene.size = self.size
                    self.view?.presentScene(scene, transition: transitionEffect)
                } else if node.name == sKLightNode {
                    once = false
                    var scene = LightScene()
                    scene.scaleMode = .ResizeFill
                    scene.size = self.size
                    scene.anchorPoint = CGPointMake(0.5, 0.5)
                    self.view?.presentScene(scene, transition: transitionEffect)
                } else if node.name == sKShapeNode {
                    once = false
                    var scene = ShapeScene()
                    scene.scaleMode = .ResizeFill
                    scene.size = self.size
                    scene.anchorPoint = CGPointMake(0.5, 0.5)
                    self.view?.presentScene(scene, transition: transitionEffect)
                } else if node.name == sKVideoNode {
                    once = false
                    var scene = VideoNodeScene()
                    scene.scaleMode = .ResizeFill
                    scene.size = self.size
                    scene.anchorPoint = CGPointMake(0.5, 0.5)
                    self.view?.presentScene(scene, transition: transitionEffect)
                }
            }
        }
    }

We will get the node menu screen from the previous code (the screen obtained when we execute the NodeMenuScene.swift file). In the preceding code, after the import statements, we defined some String variables.
We are going to use these variables as label names in the scene. We also added our font name as a string variable. Inside this class, we made two node references: one for the background, and the other for the labels we are going to use in this scene. We are using these two nodes to make layers in our game. It is best to categorize the nodes in a scene, so that we can optimize the code. We also make an SKTransition object reference for the flip horizontal effect; you can use other transition effects too.

Inside the didMoveToView() function, we just get each node, add it to our scene, and set its z position. Now, if we look at the getBackgroundNode() function, we can see that we make a node with an SKNode class instance and a background with an SKSpriteNode class instance, then add the background to the node and return it. If you look at the syntax of this function, you will see -> SKNode; it means that this function returns an SKNode object. The same goes for the getLabelNode() function. It also returns a node containing all the SKLabelNode class objects. We have given a font and a name to these labels and set their positions on the screen. The SKLabelNode class is used to make labels in Sprite Kit, with many customizable options.

In the touchesBegan() function, we get the information about which label was touched, and we then present the appropriate scene with a transition. With this, we have created a scene with a transition effect. By tapping on each button, you can see the transition effect.

Summary

In this article, we learned about nodes in detail. We discussed many properties and functions of the SKNode class of Sprite Kit, along with their usage. Also, we discussed the building of a node tree, and actions on a node tree. Now we are familiar with the major subclasses of SKNode, namely SKLabelNode, SKCropNode, SKShapeNode, SKEmitterNode, SKLightNode, and SKVideoNode, along with their implementation in our game.


Introduction to 3D Design using AutoCAD

Packt
06 Jun 2013
10 min read
(For more resources related to this topic, see here.)

The Z coordinate

3D is all about the third coordinate, Z. In 2D, we only care about the X and Y axes and never use the Z axis; most of the time, we don't even use coordinates, just the top-twenty AutoCAD commands, the Ortho tool, and so on. But in 3D, the correct use of coordinates can substantially accelerate our work. We will first briefly cover how to enter points by coordinates and how this extrapolates to the third dimension.

Absolute coordinates

The location of all entities in AutoCAD is related to a coordinate system. Any coordinate system is characterized by an origin and positive directions for the X and Y axes. The Z axis is obtained directly from the X and Y axes by the right-hand rule: if we rotate the right hand from the X axis to the Y axis, the thumb indicates the positive Z direction.

When prompting for a point, besides specifying it in the drawing area with a pointing device such as a mouse, we can enter coordinates using the keyboard. The format for absolute Cartesian coordinates, related to the origin, is defined by the values of the three orthogonal coordinates X, Y, and Z, separated by commas:

    X coordinate, Y coordinate, Z coordinate

The Z coordinate can be omitted. For instance, if we define a point with the absolute coordinates 30,20,10, this means 30 from the origin in the X direction, 20 in the Y direction, and 10 in the Z direction.

Relative coordinates

Frequently, we want to specify a point in coordinates, but one that is related to the previous point. The format for relative Cartesian coordinates is defined by the at symbol (@), followed by increment values in the three directions, separated by commas:

    @X increment, Y increment, Z increment

Of course, one or more increments can be 0, and the Z increment can be omitted. For instance, if we define a point with the relative coordinates @0,20,10, this means, in relation to the previous point, 0 in the X, 20 in the Y, and 10 in the Z direction.

Point filters

When we want to specify a point but decompose it step by step, that is, separate its coordinates based on different locations, we may use filters. When prompted for a point, we access filters by typing .X, .Y, or .Z for individual coordinates, or .XY, .YZ, or .ZX for pairs of coordinates. Another way is from the osnap menu (Ctrl + mouse right-click), and then Point Filters. AutoCAD requests the remaining coordinates until the point definition is complete.

Imagine that we want to specify a point, for instance, the center of a circle, where its X coordinate is given by the midpoint of an edge, its Y coordinate by the midpoint of another edge, and finally its Z coordinate by any point on a top face. Assuming that the Midpoint osnap is predefined, the dialog should be:

    Command: CIRCLE
    Specify center point for circle or [3P/2P/Ttr (tan tan radius)]: .X
    of midpoint of edge
    (need YZ): .Y
    of midpoint of edge
    (need Z): any point on top face
    Specify radius of circle or [Diameter]: value
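To see both entry formats at once, here is a minimal transcript using the example values above; the resulting line runs from (30,20,10) to (30,40,20):

    Command: LINE
    Specify first point: 30,20,10
    Specify next point or [Undo]: @0,20,10
    Specify next point or [Undo]: Enter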
Workspaces

AutoCAD comes with several workspaces. It's up to each of us to choose a workspace based on the classic environment or the ribbon. To change workspaces, we can pick the workspace switching button on the status bar. There are other ways to access this command, such as the workspaces list on the Quick Access Toolbar (title bar), the Workspaces toolbar, or typing WSCURRENT, but the access shown is consistent among all versions and always available.

Classic environment

The classic environment is based on the toolbars and the menu bar and doesn't use the ribbon. AutoCAD comes with the AutoCAD Classic workspace, but it's very simple to adapt it and display the toolbars suitable for 3D. The advantages of using this environment are speed and consistency. To show another toolbar, we right-click over any toolbar and choose it. Typically, we want to have the following toolbars visible besides Standard and Layers: Layers II, Modeling, Solid Editing, and Render.

Ribbon environment

Since the 2009 version, AutoCAD also allows for a ribbon-based environment. Normally, this environment uses neither toolbars nor the menu bar. AutoCAD comes with two ribbon workspaces, namely 3D Basics and 3D Modeling, the first being less useful than the second. The advantages are that we have consistency with other software, commands are divided into panels and tabs, the ribbon can be collapsed to a single line, and it includes some commands not available on the toolbars. The disadvantage is that, as it's a dynamic environment, we frequently have to activate other panels to access commands, and some important commands and functions are not always visible.

When modeling in 3D, the visibility of the layers list is almost mandatory. We may add this list to the Quick Access Toolbar by applying the CUI command or by right-clicking above the command icon we want to add. Another way is to pull the Layers panel to the drawing area, thus making it permanently visible.

Layers, transparency, and other properties

When we are modeling in AutoCAD, the ability to control object properties is essential. After some hours spent on a new 3D model, we can have hundreds of objects that overlap and obscure the model's visibility. Here are the most important properties.

Layers

If a correct application of layers is fundamental in 2D, in 3D it assumes extreme importance. Each type of 3D object should be in a proper layer, thus allowing us to control its properties:

Name: A good piece of advice is to not mix 2D and 3D objects in the same layers. So, layers for 3D objects must be easily identified, for instance, by adding a 3D prefix.
Freeze/Thaw: In 3D, the density of screen information can be huge, so freezing and thawing layers is a permanent process. It's better to freeze layers than to turn them off, because objects on frozen layers are not processed (for instance, when regenerating or counting for ZOOM Extents), thus accelerating the 3D process.
Lock/Unlock: It's quite annoying to notice, at an advanced phase of our project, that our walls moved and caused several errors. If we need that information visible, the best way to avoid these errors is to lock layers.
Color: A good, logical color palette assigned to our layers can improve our understanding while modeling.
Transparency: If we want to see through walls or other objects during the creation process, we may give a value between 0 and 90 percent to the layers' transparency.

Last but not least, the best and easiest way to assign rendering materials to objects is by layer, so this is another good reason to apply a correct and detailed layer scheme.

Transparency

Transparency, as a property for layers or for objects, has been available since version 2011. Besides its utility for layers, it can also be applied directly to objects. For instance, we may have a layer called 3D-SLAB and just want to see through the upper slab. We can change an object's transparency with PROPERTIES (Ctrl + 1).
Visibility

Another recent improvement in AutoCAD is the ability to hide or to isolate objects without changing layer properties. We select the objects to hide or to isolate (all objects not selected are hidden) and right-click on them. On the cursor menu, we choose Isolate and then:

Isolate Objects: All objects not selected are made invisible, using the ISOLATEOBJECTS command

Hide Objects: The selected objects are made invisible, using the HIDEOBJECTS command

End Object Isolation: All objects are turned on, using the UNISOLATEOBJECTS command

There is a small lamp icon on the status bar, the second icon from the right. If the lamp is red, it means that there are hidden objects; if it is yellow, all objects are visible. Shown in the following image is the application of transparency and hidden objects to the left wall and the upper slab:

Auxiliary tools

AutoCAD is very precise software, and the correct application of these auxiliary tools is a key factor for good projects. All users should be familiar with at least the Ortho and Osnap tools. Following is the application of auxiliary tools in 3D projects, complemented with the first exercise.

OSNAP, ORTHO, POLAR, and OTRACK auxiliary tools

Let's start with object snapping, probably the most frequently used tool for precision. Every time AutoCAD prompts for a point, we can access predefined object snaps (also known as osnaps) if the OSNAP button on the status bar is on. To toggle it, we only have to click on the OSNAP button or press F3. If we want an individual osnap, we can, among other ways, type its first three letters (for instance, MID for midpoint) or use the osnap menu (Ctrl + right-click). Osnaps work everywhere in 3D (which is great), and especially useful is the Extension osnap mode, which allows us to specify a point at a given distance along the direction of any edge. But what if we want to specify the projection of 3D points onto the working XY plane? Easy! If the OSNAPZ variable is set to 1, all specified points are projected onto the plane. This variable is not saved, and 0 is assigned as the initial value. More great news is that ORTHO (F8) and POLAR (F10) work in 3D. That is, we can specify points by directing the cursor along the Z axis and assigning distances. Lots of @ spared, no? OTRACK (F11), used to derive points from predefined osnaps, also works along the Z-axis direction. We pause over an osnap and can assign a distance along a specific direction or just obtain a crossing.

3DOsnap tool

Starting with Version 2011, AutoCAD allows us to specify 3D object snaps. Here too, we can access predefined 3D osnaps by keeping 3DOSNAP (F4) on, or we can access them individually. There are osnaps for vertices, midpoints on edges, centers of faces, knots (spline points), points perpendicular to faces, and points nearest to faces.
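System variables such as OSNAPZ are changed directly at the command line. For example, forcing all picked points onto the working XY plane is a one-variable change:

Command: OSNAPZ
Enter new value for OSNAPZ <0>: 1

Pressing F4 toggles 3DOSNAP in the same spirit; remember that OSNAPZ resets to 0 in each new session, since it is not saved.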
Exercise 1.1

Using the LINE command, coordinates, and auxiliary tools, let's create a cabinet skeleton. All dimensions are in meters, and we start from the lower-left corner. The ORTHO or POLAR button must be on, along with the OTRACK and OSNAP buttons with Endpoint and Midpoint predefined. As in 2D, rotating the mouse wheel forward zooms in and rotating it backward zooms out, both relative to the cursor position. To orbit around the model, we hold down Shift and the wheel simultaneously; the cursor changes to two small ellipses, and then we drag the mouse to orbit around the model. Visualization is the subject of the next article.

We run the LINE command at any point, lock the X direction (POLAR or ORTHO), and assign the distance:

Command: LINE
Specify first point: any point
Specify next point or [Undo]: 0.6

We lock the Z direction and assign the distance:

Specify next point or [Undo]: 0.7

The best way to specify this point is with relative coordinates:

Specify next point or [Close/Undo]: @-0.3,0,0.4

We lock the Z direction and assign the distance:

Specify next point or [Close/Undo]: 0.7

The best way to close the left polygon is to pause over the first point, move the cursor up to find the crossing with Polar or Ortho coming from the last point, and apply the Close option to close the polygon:

Specify next point or [Close/Undo]: point with OTRACK
Specify next point or [Close/Undo]: C

We copy all lines 1 meter in the Y direction:

Command: COPY
Select objects: Specify opposite corner: 6 found
Select objects: Enter
Current settings: Copy mode = Multiple
Specify base point or [Displacement/mOde] <Displacement>: point
Specify second point or [Array] <use first point as displacement>: 1
Specify second point or [Array/Exit/Undo] <Exit>: Enter

We complete the cabinet skeleton by drawing lines between endpoints:

Command: LINE
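To finish, the LINE command is repeated between each pair of corresponding corners of the two polygons, picking both points with the Endpoint osnap. One such connection, as a hedged continuation of the transcript above (the picked points are our own example):

Specify first point: endpoint of a corner on the front polygon
Specify next point or [Undo]: endpoint of the matching corner on the copied polygon
Specify next point or [Undo]: Enter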
Video Editing in Blender using Video Sequence Editor: Part 2

Packt
19 Feb 2010
7 min read
Video Sequence Editor (The Main Dish)

Since we've always been using Blender's default screen for modeling, setting up materials, or node compositing, let's deviate for a moment and make use of Blender's screen features to jump from one preset to another, which is a very useful tool in my opinion. Moving your attention over to Blender's main menu (located at the very top of the window), you'll notice the drop-down menu with the prefix SR: at the beginning. This is Blender's screen system, which can come in handy anytime you want to switch to any preset or customized view, quickly! You can click on the box itself to edit the name of the screen you currently have, or you can use the dropdown button to either add a new screen or choose from the existing ones. Right now, we have no need to create a new, customized screen, since the presets already serve us well. Clicking the drop-down button, you'll be presented with different screen names, of which we will be selecting the fourth one, labeled 4-Sequence. Instantly after confirming your selection, your Blender screen will be warped to yet another spaceship-like interface. Don't worry though, we'll get used to it soon enough.

Changing Screen Layouts

On the upper left-hand corner, we have our IPO window, which is used to add refined and custom controls over the behavior of our strips/inputs; on its right is the preview window; in the middle is the VSE editor; below it is the Timeline; and lastly, at the bottom, is the buttons window.

Sequence Screen Layout

For this part of the article, we'll only be delving into some of these parts, namely the Preview, VSE Editor, Timeline, and the Buttons Window. I could've just said "everything except the IPO Window". So before we actually try to add in our videos, there are things we need to do: hover your mouse pointer over the Timeline Window and press SHIFT+T to bring up the Time Value option and choose Seconds. This will help us later on to read our video lengths with seconds as the unit and not frames, which will become clearer as we go on. Next, click the Sequencer button under the Scene (F10) menu. This will enable us to see options for our videos later on.

Timeline and Button Options

The next thing we need to do is add our videos (at long last) into the VSE Editor and finally get to editing them. To do this, move your mouse pointer over to the VSE Editor and press spacebar > Movie or Movie + Audio (HD), or click Add > Movie or Movie + Audio (HD) on the menu header. This will then lead you into Blender's file browser, where you can locate your videos. If you want to load your videos simultaneously, you can right-click, hold, and drag over the videos to select them, and then click the load button. They will then be automatically concatenated, each having its own individual video strip. But right now, I only want to load them one at a time so we can focus on editing them separately and not worry about the other strips floating around and messing with our view. Once the video/s are loaded into the VSE window and are selected, you'll notice the Sequencer Buttons Window populated with options. Normally, we will see four (4) tabs, namely: Edit, Input, Filter, and Proxy. Let's leave the default settings for now as they work great as they are (but you can always doodle around with the buttons and settings and see how they work; try to experiment!).
Basically, I will introduce you to some of Blender's video editing capabilities, such as: cutting, duplicating, transition effects, artistic glows, and speed controls. Discussing the full extent and features of this editor might take a whole new article or two, so right now I'll only lead you through the basic concepts and let you and your imagination, with a lot of experimentation, take you where you want to go. First off, cutting video strips. Often, you want to delete parts of your video or move a section of it to a certain time in your collection; this is where cutting comes in handy. In the Blender VSE world, you can cut a video by clicking over or scrubbing to the frame where you want to start your cut and pressing K for a hard cut or SHIFT+K for a soft cut. Once this operation is successful, you'll notice your strip change appearance as a result of the cut, and however many cuts you made, that's how many sub-strips you have, which can individually be moved (G) or deleted (X or the Delete key).

Cutting, Moving, and Deleting Strips

Once you have made the necessary cuts, you can always arrange your strips by moving them beside each other or with gaps, depending on what you want to achieve. Additionally, you can scrub your videos by clicking and dragging your mouse over the VSE window (with the green vertical line as your current frame marker), or you can also click and drag over the Timeline Window. As you scrub your videos, you'll notice a live preview of what's happening in the Preview Window on the upper right-hand corner of your screen. Another cool trick with the VSE is adding markers to label parts of your animation or videos. Markers are also a way of identifying events in your timeline as they happen, so you won't lose track of what has occurred on that frame in time. You can add markers to your VSE Window where your current frame marker (the vertical green line) is positioned by pressing CTRL+ALT+M or by clicking Marker > Add Marker on the menu, and add a label to it by pressing CTRL+M or by clicking Marker > (Re)Name Marker. These markers will also appear on your Timeline Window.

Adding and (Re)Naming Markers

Next is duplicating strips. Sometimes in your video editing endeavors, you may want to repeat parts of the video for more emphasis or even just for artistic purposes. Luckily, duplicating strips in Blender's VSE is just as easy as selecting the strip(s) you wish to duplicate and pressing SHIFT+D or clicking Strip > Duplicate in the menu.

Duplicating Strips

Now we discuss transition effects, which are one of the nicest things video editing has ever offered. In this part, we'll try adding some simple transition effects from within the VSE to add subtlety and variation to our strips. Like any other video editing application, it requires you to have at least two video/image strips to create the transition. We do this in Blender by selecting the first strip with right-click, then adding a second strip to the selection with shift right-click. This way, you're telling Blender from which video to which video the transition will occur. Say you have selected video A first and then shift-selected video B next; if we now add the transition effect, it will happen from video A to video B and not vice versa. The simplest transition that we can add now is the Gamma Cross, which simply takes the first strip and fades it into the second one, and so on. Do this by selecting two strips and pressing spacebar, then clicking Gamma Cross, or by clicking Add > Effect > Gamma Cross.
With its default settings, when you now scrub through your strips or use the timeline, you'll notice that in between the two strips is a blend of both. Moving either of the video strips will automatically update the length of the Gamma Cross.
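Everything we just did by hand (loading strips and adding a Gamma Cross) can also be scripted. This article targets Blender 2.4x, where the UI above applies; purely as a hedged sketch for readers on a modern Blender (2.80+), whose Python API differs from anything shown here, the equivalent setup might look like this, with the file paths, channel numbers, and 25-frame overlap invented for illustration:

import bpy

scene = bpy.context.scene
scene.sequence_editor_create()            # ensure the scene has a sequence editor
seqs = scene.sequence_editor.sequences

# Two movie strips on their own channels (paths are placeholders)
a = seqs.new_movie("clipA", "//clipA.mp4", channel=1, frame_start=1)
b = seqs.new_movie("clipB", "//clipB.mp4", channel=2,
                   frame_start=a.frame_final_end - 25)   # 25-frame overlap

# Gamma Cross over the overlap, fading from strip A into strip B
seqs.new_effect("fade", 'GAMMA_CROSS', channel=3,
                frame_start=b.frame_start, frame_end=a.frame_final_end,
                seq1=a, seq2=b)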

DirectX graphics diagnostic

Packt
19 Sep 2013
6 min read
(For more resources related to this topic, see here.) Debugging a captured frame is usually a real challenge in comparison with debugging C++ code. We are dealing with hundreds of thousands of pixels being produced, and in addition, there might be several functions being processed by the GPU. Typically, in modern games, a frame is constructed in several passes, and many post-process rendering effects are applied to the final result to increase the quality of the frame. All these processes make it quite difficult to find out why a specific pixel is drawn with an unexpected color during debugging! Visual Studio 2012 comes with a series of tools that intend to assist game developers. The new DirectX graphics diagnostics tools are a set of development tools integrated with Microsoft Visual Studio 2012, which can help us to analyze and debug the frames captured from a Direct3D application. Some of the functionality of these tools comes from the PIX for Windows tool, which is a part of the DirectX SDK. Please note that the DirectX graphics diagnostics tools are not supported by Visual Studio 2012 Express at the time of writing this article. In this article, we are going to walk through a complete example that shows how to use graphics diagnostics to capture and analyze the graphics information of a frame. Open the final project of this article, DirectX Graphics Diagnostics, and let's see what is going on with the GPU.

Intel Graphics Performance Analyzer (Intel GPA) is another suite of graphics analysis and optimization tools that supports Windows Store applications. At the time of writing this article, the final release of this suite (Intel GPA 2013 R2) is able to analyze Windows 8 Store applications, but tracing the captured frames is not supported yet. Also, Nvidia Nsight™ Visual Studio Edition 3.1 is another option, which supports Visual Studio 2012 and Direct3D 11.1 for debugging, profiling, and tracing heterogeneous compute and graphics applications.

Capturing the frame

To start debugging the application, press Alt + F5 or select the Start Diagnostics command from Debug | Graphics | Start Diagnostics, as shown in the following screenshot: You can capture graphics information from the application in two ways. The first way is to use Visual Studio to manually capture the frame while it is running, and the second way is to use the programmatic capture API. The latter is useful when the application is going to run on a computer that does not have Visual Studio installed, or when you would like to capture the graphics information from Windows RT devices. In the first way, when the application starts, press the Print Screen key (Prt Scr). In the second way, to prepare the application to use programmatic capture, you need to use the CaptureCurrentFrame API. So, make sure to add the following header to the pch.h file:

#include <vsgcapture.h>

For Windows Store applications, the location of the temp directory is specific to each user and application, and can be found at C:\Users\username\AppData\Local\Packages\package family name\TempState. Now you can capture your frame by calling the g_pVsgDbg->CaptureCurrentFrame() function. By default, the name of the captured file is default.vsglog. Remember, do not start the graphics diagnostics when using the programmatic capture API; just run or debug your application.
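Putting the programmatic path together, a minimal sketch could look like the following. The vsgcapture.h header and g_pVsgDbg->CaptureCurrentFrame() are the pieces named above; the method name and the frame-counter condition are our own illustration:

// pch.h: including this header enables programmatic capture.
#include <vsgcapture.h>

// Somewhere in the render loop, after the scene has been drawn
// (OnFramePresented is a hypothetical hook in your own game class):
void Game::OnFramePresented(int frameIndex)
{
    // Capture only the frame we are interested in; the condition is illustrative.
    if (frameIndex == 120)
    {
        // Writes default.vsglog into the app's TempState folder.
        g_pVsgDbg->CaptureCurrentFrame();
    }
}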
The Graphics Experiment window

After a frame is captured, it is displayed in Visual Studio as Graphics Experiment.vsglog. Each captured frame will be added to the Frame List and is presented as a thumbnail image at the bottom of the Graphics Experiment tab. This logfile contains all the information needed for debugging and tracing. As you can see in the following screenshot, there are three subwindows: the left one displays the captured frames; the right one, which is named Graphics Events List, shows the list of all DirectX events; and finally, the Graphics Pixel History subwindow in the middle is responsible for displaying the activities of the selected pixel in the running frame: Let's start with the Graphics Pixel History subwindow. As you can see in the preceding screenshot, we selected one of the pixels on the spaceship model. Now let us take a look at the Graphics Pixel History subwindow of that pixel, as shown in the following screenshot: The preceding screenshot shows how this pixel has been modified by each DirectX event; first it is initialized with a specific color, then it is changed to blue by the ClearRenderTargetView function, and after this, it is changed to the color of our model by drawing indexed primitives. Open the collapsed DrawIndexed function to see what really happens in the Vertex Shader and Pixel Shader pipelines. The following screenshot shows the information about each of the vertices: The input layout of the vertex buffer is VertexPositionNormalTangentColorTexture. Here you can see each vertex value of the model's triangle. Now, we would like to debug the Pixel Shader for this pixel, so just press the green triangular icon to start debugging. As you can see in the following screenshot, when the debug process is started, Visual Studio navigates to the source code of the Pixel Shader: Now you can easily debug the Pixel Shader code of the specific pixel in the DrawIndexed stage. You can also right-click on any pixel of the captured frame and select Graphics Object Table to check the Direct3D object's data. The following screenshot shows the Graphics Events List subwindow. Draw calls in this list are typically the more important events: one icon marks a draw event, another marks an event that occurred before the captured frame, and a third marks the user-defined event markers or groups that can be defined inside the application code. In this example, we mark an event (Start rendering the model) before rendering the model and mark another event (The model has been rendered) after the model is rendered. You can create these events by using the ID3DUserDefinedAnnotation::BeginEvent, ID3DUserDefinedAnnotation::EndEvent, and ID3DUserDefinedAnnotation::SetMarker interfaces.

Summary

In this article, you have learned about DirectX graphics diagnostics: how to capture a frame and how to work with the Graphics Experiment window.

Resources for Article: Further resources on this subject: Getting Started with GameSalad [Article], 2D game development with Monkey [Article], Making Money with Your Game [Article]

Introduction to Game Development Using Unity 3D

Packt
24 Sep 2010
9 min read
Unity 3D Game Development by Example Beginner's Guide: a seat-of-your-pants manual for building fun, groovy little games quickly. Read more about this book. (For more resources on this subject, see here.)

Technology is a tool. It helps us accomplish amazing things, hopefully more quickly, more easily, and more amazingly than if we hadn't used the tool. Before we had newfangled steam-powered hammering machines, we had hammers. And before we had hammers, we had the painful process of smacking a nail into a board with our bare hands. Technology is all about making our lives better and easier. And less painful.

Introducing Unity 3D

Unity 3D is a new piece of technology that strives to make life better and easier for game developers. Unity is a game engine, or a game authoring tool, that enables creative folks like you to build video games. By using Unity, you can build video games more quickly and easily than ever before. In the past, building games required an enormous stack of punch cards, a computer that filled a whole room, and a burnt sacrificial offering to an ancient god named Fortran. Today, instead of spanking nails into boards with your palm, you have Unity. Consider it your hammer: a new piece of technology for your creative tool belt.

Unity takes over the world

We'll be distilling our game development dreams down to small, bite-sized nuggets instead of launching into any sweepingly epic open-world games. The idea here is to focus on something you can actually finish instead of getting bogged down in an impossibly ambitious opus. When you're finished, you can publish these games on the Web, Mac, or PC. The team behind Unity 3D is constantly working on packages and export options for other platforms. At the time of this writing, Unity could additionally create games that can be played on the iPhone, iPod, iPad, Android devices, Xbox Live Arcade, PS3, and Nintendo's WiiWare service. Each of these tools is an add-on to the core Unity package, and comes at an additional cost. As we're focusing on what we can do without breaking the bank, we'll stick to the core Unity 3D program for the remainder of this article. The key is to start with something you can finish, and then, for each new project that you build, to add small pieces of functionality that challenge you and expand your knowledge. Any successful plan for world domination begins by drawing a territorial border in your backyard.

Browser-based 3D? Welcome to the future

Unity's primary and most astonishing selling point is that it can deliver a full 3D game experience right inside your web browser. It does this with the Unity Web Player: a free plugin that embeds and runs Unity content on the Web.

Time for action – install the Unity Web Player

Before you dive into the world of Unity games, download the Unity Web Player. Much the same way the Flash player runs Flash-created content, the Unity Web Player is a plugin that runs Unity-created content in your web browser.

Go to http://unity3D.com.
Click on the install now! button to install the Unity Web Player.
Click on Download Now!
Follow all of the on-screen prompts until the Web Player has finished installing.

Welcome to Unity 3D!

Now that you've installed the Web Player, you can view content created with the Unity 3D authoring tool in your browser.

What can I build with Unity?

In order to fully appreciate how fancy this new hammer is, let's take a look at some projects that other people have created with Unity.
While these games may be completely out of our reach at the moment, let's find out how game developers have pushed this amazing tool to its very limits. FusionFall The first stop on our whirlwind Unity tour is FusionFall—a Massively Multiplayer Online Role-Playing Game (MMORPG). You can find it at fusionfall.com. You may need to register to play, but it's definitely worth the extra effort! FusionFall was commissioned by the Cartoon Network television franchise, and takes place in a re-imagined, anime-style world where popular Cartoon Network characters are all grown up. Darker, more sophisticated versions of the Powerpuff Girls, Dexter, Foster and his imaginary friends, and the kids from Codename: Kids Next Door run around battling a slimy green alien menace. Completely hammered FusionFall is a very big and very expensive high-profile game that helped draw a lot of attention to the then-unknown Unity game engine when the game was released. As a tech demo, it's one of the very best showcases of what your new technological hammer can really do! FusionFall has real-time multiplayer networking, chat, quests, combat, inventory, NPCs (non-player characters), basic AI (artificial intelligence), name generation, avatar creation, and costumes. And that's just a highlight of the game's feature set. This game packs a lot of depth. Should we try to build FusionFall? At this point, you might be thinking to yourself, "Heck YES! FusionFall is exactly the kind of game I want to create with Unity, and this article is going to show me how!" Unfortunately, a step-by-step guide to creating a game the size and scope of FusionFall would likely require its own flatbed truck to transport, and you'd need a few friends to help you turn each enormous page. It would take you the rest of your life to read, and on your deathbed, you'd finally realize the grave error that you had made in ordering it online in the first place, despite having qualified for free shipping. Here's why: check out the game credits link on the FusionFall website: http://www.fusionfall.com/game/credits.php. This page lists all of the people involved in bringing the game to life. Cartoon Network enlisted the help of an experienced Korean MMO developer called Grigon Entertainment. There are over 80 names on that credits list! Clearly, only two courses of action are available to you: Build a cloning machine and make 79 copies of yourself. Send each of those copies to school to study various disciplines, including marketing, server programming, and 3D animation. Then spend a year building the game with your clones. Keep track of who's who by using a sophisticated armband system. Give up now because you'll never make the game of your dreams. Another option Before you do something rash and abandon game development for farming, let's take another look at this. FusionFall is very impressive, and it might look a lot like the game that you've always dreamed of making. This article is not about crushing your dreams. It's about dialing down your expectations, putting those dreams in an airtight jar, and taking baby steps. Confucius said: "A journey of a thousand miles begins with a single step." I don't know much about the man's hobbies, but if he was into video games, he might have said something similar about them—creating a game with a thousand awesome features begins by creating a single, less feature-rich game. So, let's put the FusionFall dream in an airtight jar and come back to it when we're ready. 
We'll take a look at some smaller Unity 3D game examples and talk about what it took to build them. Off-Road Velociraptor Safari No tour of Unity 3D games would be complete without a trip to Blurst.com—the game portal owned and operated by indie game developer Flashbang Studios. In addition to hosting games by other indie game developers, Flashbang has packed Blurst with its own slate of kooky content, including Off-Road Velociraptor Safari. (Note: Flashbang Studios is constantly toying around with ways to distribute and sell its games. At the time of this writing, Off-Road Velociraptor Safari could be played for free only as a Facebook game. If you don't have a Facebook account, try playing another one of the team's creations, like Minotaur China Shop or Time Donkey). In Off-Road Velociraptor Safari, you play a dinosaur in a pith helmet and a monocle driving a jeep equipped with a deadly spiked ball on a chain (just like in the archaeology textbooks). Your goal is to spin around in your jeep doing tricks and murdering your fellow dinosaurs (obviously). For many indie game developers and reviewers, Off-Road Velociraptor Safari was their first introduction to Unity. Some reviewers said that they were stunned that a fully 3D game could play in the browser. Other reviewers were a little bummed that the game was sluggish on slower computers. We'll talk about optimization a little later, but it's not too early to keep performance in mind as you start out. Fewer features, more promise If you play Off-Road Velociraptor Safari and some of the other games on the Blurst site, you'll get a better sense of what you can do with Unity without a team of experienced Korean MMO developers. The game has 3D models, physics (code that controls how things move around somewhat realistically), collisions (code that detects when things hit each other), music, and sound effects. Just like FusionFall, the game can be played in the browser with the Unity Web Player plugin. Flashbang Studios also sells downloadable versions of its games, demonstrating that Unity can produce standalone executable game files too. Maybe we should build Off-Road Velociraptor Safari? Right then! We can't create FusionFall just yet, but we can surely create a tiny game like Off-Road Velociraptor Safari, right? Well... no. Again, this article isn't about crushing your game development dreams. But the fact remains that Off-Road Velociraptor Safari took five supremely talented and experienced guys eight weeks to build on full-time hours, and they've been tweaking and improving it ever since. Even a game like this, which may seem quite small in comparison to full-blown MMO like FusionFall, is a daunting challenge for a solo developer. Put it in a jar up on the shelf, and let's take a look at something you'll have more success with.

Components in Unity

Packt
26 Aug 2014
13 min read
In this article by Simon Jackson, author of Mastering Unity 2D Game Development, we will have a walkthrough of the new 2D system and other new features. We will then understand some of the Unity components deeply. We will then dig into animation and its components. (For more resources related to this topic, see here.)

Unity 4.3 improvements

Unity 4.3 was not just about the new 2D system; there are also a host of other improvements and features in this release. The major highlights of Unity 4.3 are covered in the following sections.

Improved Mecanim performance

Mecanim is a powerful tool for both 2D and 3D animations. In Unity 4.3, there have been many improvements and enhancements, including a new game object optimizer that ensures objects are more tightly bound to their skeletal systems and removes unnecessary transform holders, thus making Mecanim animations lighter and smoother. Refer to the following screenshot: In Unity 4.3, Mecanim also adds greater control for blending animations together, allowing the addition of curves for smooth transitions, and it now also includes events that can be hooked into at every step.

The Windows Phone API improvements and Windows 8.1 support

Unity 4.2 introduced Windows Phone and Windows 8 support; since then, things have been going wild, especially since Microsoft has thrown its support behind the movement and offered free licensing to existing Pro owners. Refer to the following screenshot: Unity 4.3 builds solidly on the v4 foundations by bringing additional platform support, and it closes some more gaps between the existing platforms. Some of the advantages are as follows:

The emulator is now fully supported with Windows Phone (new x86 phone build)
It has more orientation support, which allows even the splash screens to rotate properly, enabling pixel-perfect display
It has trial application APIs for both Phone and Windows 8
It has improved sensor and location support

On top of this, with the recent release of Windows 8.1, Unity 4.3 now also fully supports Windows 8.1; additionally, Unity 4.5.3 will introduce support for Windows Phone 8.1 and universal projects.

Dynamic Nav Mesh (Pro version only)

If you have only been using the free version of Unity till now, you will not be aware of what a Nav Mesh agent is. Nav Meshes are invisible meshes that are created for your 3D environment at build time to simplify pathfinding and navigation for movable entities. Refer to the following screenshot: You can, of course, create simplified models for your environment and use them in your scenes; however, every time you change your scene, you need to update your navigation model. Nav Meshes simply remove this overhead. Nav Meshes are crucial, especially in larger environments where collision and navigation calculations can make the difference between your game running well or not. Unity 4.3 has improved this by allowing more runtime changes to the dynamic Nav Mesh, allowing you to destroy parts of your scene that alter the walkable parts of your terrain. Nav Mesh calculations are also now multithreaded to give an even better speed boost to your game. Also, there have been many other under-the-hood fixes and tweaks.

Editor updates

The Unity editor received a host of updates in Unity 4.3 to improve the performance and usability of the editor, as you can see in the following demo screenshot. Granted, most of the improvements are behind the scenes.
The improved Unity Editor GUI with huge improvements The editor refactored a lot of the scripting features on the platform, primarily to reduce the code complexity required for a lot of scripting components, such as unifying parts of the API into single components. For example, the LookLikeControls and LookLikeInspector options have been unified into a single LookLike function, which allows easier creation of the editor GUI components. Further simplification of the programmable editor interface is an ongoing task and a lot of headway is being made in each release. Additionally, the keyboard controls have been tweaked to ensure that the navigation works in a uniform way and the sliders/fields work more consistently. MonoDevelop 4.01 Besides the editor features, one of the biggest enhancements has to be the upgrade of the MonoDevelop editor (http://monodevelop.com/), which Unity supports and is shipped with. This has been a long running complaint for most developers simply due to the brand new features in the later editions. Refer to the following screenshot: MonoDevelop isn't made by Unity; it's an open source initiative run by Xamarin hosted on GitHub (https://github.com/mono/monodevelop) for all the willing developers to contribute and submit fixes to. Although the current stable release is 4.2.1, Unity is not fully up to date. Hopefully, this recent upgrade will mean that Unity can keep more in line with the future versions of this free tool. Sadly, this doesn't mean that Unity has yet been upgraded from the modified V2 version of the Mono compiler (http://www.mono-project.com/Main_Page) it uses to the current V3 branch, most likely, due to the reduced platform and the later versions of the Mono support. Movie textures Movie textures is not exactly a new feature in Unity as it has been available for some time for platforms such as Android and iOS. However, in Unity 4.3, it was made available for both the new Windows 8 and Windows Phone platforms. This adds even more functionality to these platforms that were missing in the initial Unity 4.2 release where this feature was introduced. Refer to the following screenshot: With movie textures now added to the platform, other streaming features are also available, for example, webcam (or a built-in camera in this case) and microphone support were also added. Understanding components Components in Unity are the building blocks of any game; almost everything you will use or apply will end up as a component on a GameObject inspector in a scene. Until you build your project, Unity doesn't know which components will be in the final game when your code actually runs (there is some magic applied in the editor). So, these components are not actually attached to your GameObject inspector but rather linked to them. Accessing components using a shortcut Now, in the previous Unity example, we added some behind-the-scenes trickery to enable you to reference a component without first discovering it. We did this by adding shortcuts to the MonoBehavior class that the game object inherits from. 
You can access the components with the help of the following code:

this.renderer.collider.attachedRigidbody.angularDrag = 0.2f;

What Unity then does behind the scenes for you is convert the preceding code to the following code:

var renderer = this.GetComponent<Renderer>();
var collider = renderer.GetComponent<Collider>();
var ridgedBody = collider.GetComponent<Rigidbody>();
ridgedBody.angularDrag = 0.2f;

The preceding code is also the same as executing the following code:

GetComponent<Renderer>().GetComponent<Collider>().GetComponent<Rigidbody>().angularDrag = 0.2f;

Now, while this is functional and working, it isn't very performant or even a best practice, as it creates variables and destroys them each time you use them; it also calls GetComponent for each component every time you access them. Using GetComponent in the Start or Awake methods isn't too bad, as they are only called once when the script is loaded; however, if you do this on every frame in the Update method, or even worse, in FixedUpdate methods, the problem multiplies. That's not to say you can't; you just need to be aware of the potential cost of doing so.

A better way to use components – referencing

Now, every programmer knows that they have to worry about garbage and exactly how much memory they should allocate to objects for the entire lifetime of the game. To improve on the preceding shortcut code, we simply need to manually maintain references to the components we want to change or affect on a particular object. So, instead of the preceding code, we could simply use the following code:

Rigidbody myScriptRigidBody;

void Awake()
{
    var renderer = this.GetComponent<Renderer>();
    var collider = renderer.GetComponent<Collider>();
    myScriptRigidBody = collider.GetComponent<Rigidbody>();
}

void Update()
{
    myScriptRigidBody.angularDrag = 0.2f * Time.deltaTime;
}

This way, the Rigidbody object that we want to affect is discovered only once (when the script awakes); then, we can just use the stored reference each time a value needs to be changed instead of discovering it every time.

An even better way

Now, it has been pointed out (by those who like to test such things) that even the GetComponent call isn't as fast as it should be, because it uses C# generics to determine what type of component you are asking for (it's a two-step process: first, you determine the type, and then you get the component). However, there is another overload of the GetComponent function in which, instead of using generics, you just supply the type (therefore removing the need to discover it). To do this, we simply use the following code instead of the preceding GetComponent<>:

myScriptRigidBody = (Rigidbody)GetComponent(typeof(Rigidbody));

The code is slightly longer and arguably only gives you a marginal increase, but if you need to use every byte of processing power, it is worth keeping in mind.

If you are using the "." shortcut to access components, I recommend that you change that practice now. In Unity 5, these shortcuts are being removed. There will, however, be a tool built into the project's importer to upgrade any scripts that use the shortcuts. This is not a huge task, just something to be aware of; act now if you can!
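Putting the referencing advice together, a complete (if contrived) component might look like this; the class name and the drag value are our own, while the calls are exactly those discussed above:

using UnityEngine;

public class DragTuner : MonoBehaviour
{
    // Cached once in Awake and reused every frame; no per-frame GetComponent calls.
    private Rigidbody cachedBody;

    void Awake()
    {
        // The non-generic overload skips the generic type-lookup step.
        cachedBody = (Rigidbody)GetComponent(typeof(Rigidbody));
    }

    void Update()
    {
        cachedBody.angularDrag = 0.2f * Time.deltaTime;
    }
}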
Animation components

All of the animation in the new 2D system in Unity uses the new Mecanim system (introduced in Version 4) for design and control, which, once you get used to it, is very simple and easy to use. It is broken up into three main parts: animation controllers, animation clips, and animator components.

Animation controllers

Animation controllers are simply state machines that are used to control when an animation should be played and how often, including what conditions control the transitions between each state. In the new 2D system, there must be at least one controller per animation for it to play, and controllers can contain many animations, as you can see here with three states and transition lines between them:

Animation clips

Animation clips are the heart of the animation system and have come very far from their previous implementation in Unity. Clips used to just hold the crafted animations of 3D models, with a limited ability to tweak them for use on a complete 3D model: The new animation dope sheet system (as shown in the preceding screenshot) is very advanced; in fact, it now tracks almost every change in the inspector for sprites, allowing you to animate just about everything. You can even control which sprite from a spritesheet is used for each frame of the animation. The preceding screenshot shows a three-frame sprite animation and a modified x position modifier for the middle image, giving a hopping effect to the sprite as it runs. This ability of the dope sheet system places less burden on the shoulders of art designers to craft complex animations, as the animation system itself can be used to produce great effects. Sprites don't have to be picked from the same spritesheet to be animated. They can come from individual textures or be picked from any spritesheet you have imported.

The Animator component

To use the new animation prepared in a controller, you need to apply it to a game object in the scene. This is done through the Animator component, as shown here: The only property we actually care about in 2D is the Controller property. This is where we attach the controller we just created. The other properties only apply to 3D humanoid models, so we can ignore them for 2D. For more information about the complete 3D Mecanim system, refer to the Unity Learn guide at http://unity3d.com/learn/tutorials/modules/beginner/animation. Animation is just one of the uses of the Mecanim system.

Setting up animation controllers

So, to start creating animations, you first need an animation controller in order to define your animation clips. As stated before, this is just a state machine that controls the execution of animations, even if there is only one animation. In this case, the controller runs the selected animation for as long as it's told to. If you are browsing around the components that can be added to a game object, you will come across the Animation component, which takes a single animation clip as a parameter. This is the legacy animation system, kept for backward compatibility only. Any new animation clip created and set on this component will not work; it will simply generate a console log item stating The AnimationClip used by the Animation component must be marked as Legacy. So, from Unity 4.3 onwards, just avoid this. Creating an animation controller is just as easy as creating any other asset. In the Project view, simply right-click on the view and select Create | Animator Controller. Opening the new controller will show you the blank animator controller in the Mecanim state manager window, as shown in the following screenshot: There is a lot of functionality in the Mecanim state engine, which is largely outside the scope of this article.
Check out more dedicated books on this subject, such as Unity 4 Character Animation with Mecanim, Jamie Dean, Packt Publishing. If you have any existing clips, you can just drag them into the Mecanim controller's Edit window; alternatively, you can select them in the Project view, right-click on them, and select From selected clip under Create. However, we will cover more of this later in practice. Once you have a controller, you can add it to any game object in your project by clicking on Add Component in the inspector or by navigating to Component | Create and Miscellaneous | Animator and selecting it. Then, you can select your new controller as the Controller property of the animator. Alternatively, you can just drag your new controller onto the game object you wish to add it to. Clips in a controller are bound to the spritesheet texture of the object the controller is attached to. Changing or removing this texture will prevent the animation from being displayed correctly, although it will appear as if it's still running. So, with a controller in place, let's add some animation to it.

Summary

In this article, we did a detailed analysis of the new 2D features added in Unity 4.3. Then we gave an overview of the main Unity components.

Resources for Article: Further resources on this subject: Parallax scrolling [article], What's Your Input? [article], Unity 3-0 Enter the Third Dimension [article]
Using Basic Projectiles

Packt
27 Apr 2015
22 min read
"Flying is learning how to throw yourself at the ground and miss."                                                                                              – Douglas Adams In this article by Michael Haungs, author of the book Creative Greenfoot, we will create a simple game using basic movements in Greenfoot. Actors in creative Greenfoot applications, such as games and animations, often have movement that can best be described as being launched. For example, a soccer ball, bullet, laser, light ray, baseball, and firework are examples of this type of object. One common method of implementing this type of movement is to create a set of classes that model real-world physical properties (mass, velocity, acceleration, friction, and so on) and have game or simulation actors inherit from these classes. Some refer to this as creating a physics engine for your game or simulation. However, this course of action is complex and often overkill. There are often simple heuristics we can use to approximate realistic motion. This is the approach we will take here. In this article, you will learn about the basics of projectiles, how to make an object bounce, and a little about particle effects. We will apply what you learn to a small platform game that we will build up over the course of this article. Creating realistic flying objects is not simple, but we will cover this topic in a methodical, step-by-step approach, and when we are done, you will be able to populate your creative scenarios with a wide variety of flying, jumping, and launched objects. It's not as simple as Douglas Adams makes it sound in his quote, but nothing worth learning ever is. (For more resources related to this topic, see here.) Cupcake Counter It is beneficial to the learning process to discuss topics in the context of complete scenarios. Doing this forces us to handle issues that might be elided in smaller, one-off examples. In this article, we will build a simple platform game called Cupcake Counter (shown in Figure 1). We will first look at a majority of the code for the World and Actor classes in this game without showing the code implementing the topic of this article, that is, the different forms of projectile-based movement. Figure 1: This is a screenshot of Cupcake Counter How to play The goal of Cupcake Counter is to collect as many cupcakes as you can before being hit by either a ball or a fountain. The left and right arrow keys move your character left and right and the up arrow key makes your character jump. You can also use the space bar key to jump. After touching a cupcake, it will disappear and reappear randomly on another platform. Balls will be fired from the turret at the top of the screen and fountains will appear periodically. The game will increase in difficulty as your cupcake count goes up. The game requires good jumping and avoiding skills. Implementing Cupcake Counter Create a scenario called Cupcake Counter and add each class to it as they are discussed. The CupcakeWorld class This subclass of World sets up all the actors associated with the scenario, including a score. It is also responsible for generating periodic enemies, generating rewards, and increasing the difficulty of the game over time. 
The following is the code for this class: import greenfoot.*; import java.util.List;   public class CupcakeWorld extends World { private Counter score; private Turret turret; public int BCOUNT = 200; private int ballCounter = BCOUNT; public int FCOUNT = 400; private int fountainCounter = FCOUNT; private int level = 0; public CupcakeWorld() {    super(600, 400, 1, false);    setPaintOrder(Counter.class, Turret.class, Fountain.class,    Jumper.class, Enemy.class, Reward.class, Platform.class);    prepare(); } public void act() {    checkLevel(); } private void checkLevel() {    if( level > 1 ) generateBalls();    if( level > 4 ) generateFountains();    if( level % 3 == 0 ) {      FCOUNT--;      BCOUNT--;      level++;    } } private void generateFountains() {    fountainCounter--;    if( fountainCounter < 0 ) {      List<Brick> bricks = getObjects(Brick.class);      int idx = Greenfoot.getRandomNumber(bricks.size());      Fountain f = new Fountain();      int top = f.getImage().getHeight()/2 + bricks.get(idx).getImage().getHeight()/2;      addObject(f, bricks.get(idx).getX(),      bricks.get(idx).getY()-top);      fountainCounter = FCOUNT;    } } private void generateBalls() {    ballCounter--;    if( ballCounter < 0 ) {      Ball b = new Ball();      turret.setRotation(15 * -b.getXVelocity());      addObject(b, getWidth()/2, 0);      ballCounter = BCOUNT;    } } public void addCupcakeCount(int num) {    score.setValue(score.getValue() + num);    generateNewCupcake(); } private void generateNewCupcake() {    List<Brick> bricks = getObjects(Brick.class);    int idx = Greenfoot.getRandomNumber(bricks.size());    Cupcake cake = new Cupcake();    int top = cake.getImage().getHeight()/2 +    bricks.get(idx).getImage().getHeight()/2;    addObject(cake, bricks.get(idx).getX(),    bricks.get(idx).getY()-top); } public void addObjectNudge(Actor a, int x, int y) {    int nudge = Greenfoot.getRandomNumber(8) - 4;    super.addObject(a, x + nudge, y + nudge); } private void prepare(){    // Add Bob    Bob bob = new Bob();    addObject(bob, 43, 340);    // Add floor    BrickWall brickwall = new BrickWall();    addObject(brickwall, 184, 400);    BrickWall brickwall2 = new BrickWall();    addObject(brickwall2, 567, 400);    // Add Score    score = new Counter();    addObject(score, 62, 27);    // Add turret    turret = new Turret();    addObject(turret, getWidth()/2, 0);    // Add cupcake    Cupcake cupcake = new Cupcake();    addObject(cupcake, 450, 30);    // Add platforms    for(int i=0; i<5; i++) {      for(int j=0; j<6; j++) {        int stagger = (i % 2 == 0 ) ? 24 : -24;        Brick brick = new Brick();        addObjectNudge(brick, stagger + (j+1)*85, (i+1)*62);      }    } } } Let's discuss the methods in this class in order. First, we have the class constructor CupcakeWorld(). After calling the constructor of the superclass, it calls setPaintOrder() to set the actors that will appear in front of other actors when displayed on the screen. The main reason why we use it here, is so that no actor will cover up the Counter class, which is used to display the score. Next, the constructor method calls prepare() to add and place the initial actors into the scenario. Inside the act() method, we will only call the function checkLevel(). As the player scores points in the game, the level variable of the game will also increase. The checkLevel() function will change the game a bit according to its level variable. 
When our game first starts, no enemies are generated, and the player can easily get the cupcake (the reward). This gives the player a chance to get accustomed to jumping on platforms. As the cupcake count goes up, balls and fountains will be added. As the level continues to rise, checkLevel() reduces the delay between creating balls (BCOUNT) and fountains (FCOUNT). The level variable of the game is increased in the addCupcakeCount() method. The generateFountains() method adds a Fountain actor to the scenario. The rate at which we create fountains is controlled by the delay variable fountainCounter. After the delay, we create a fountain on a randomly chosen Brick (the platforms in our game). The getObjects() method returns all of the actors of a given class presently in the scenario. We then use getRandomNumber() to randomly choose a number between 0 and one less than the number of Brick actors, which we use as an index. Next, we use addObject() to place the new Fountain object on the randomly chosen Brick object. Generating balls using the generateBalls() method is a little easier than generating fountains. All balls are created in the same location as the turret at the top of the screen and sent from there with a randomly chosen trajectory. The rate at which we generate new Ball actors is defined by the delay variable ballCounter. Once we create a Ball actor, we rotate the turret based on its x velocity. By doing this, we create the illusion that the turret is aiming and then firing the Ball actor. Last, we place the newly created Ball actor into the scenario using the addObject() method. The addCupcakeCount() method is called by the actor representing the player (Bob) every time the player collides with Cupcake. In this method, we increase the score and then call generateNewCupcake() to add a new Cupcake actor to the scenario. The generateNewCupcake() method is very similar to generateFountains(), except for the lack of a delay variable, and it randomly places a Cupcake on one of the bricks instead of a Fountain actor. In all of our previous scenarios, we used a prepare() method to add actors to the scenario. The major difference between this prepare() method and the previous ones is that we use the addObjectNudge() method instead of addObject() to place our platforms. The addObjectNudge() method simply adds a little randomness to the placement of the platforms, so that every new game is a little different. The random variation in the platforms will cause the Ball actors to have different bounce patterns and require the player to jump and move a bit more carefully. In the call to addObjectNudge(), you will notice that we used the numbers 85 and 62. These are simply numbers that spread the platforms out appropriately, and they were discovered through trial and error. I created a blue gradient background to use for the image of CupcakeWorld.

Enemies

In Cupcake Counter, all of the actors that can end the game if collided with are subclasses of the Enemy class. Using inheritance is a great way to share code and reduce redundancy for a group of similar actors. However, we will often create class hierarchies in Greenfoot solely for polymorphism. Polymorphism refers to the ability of a class in an object-oriented language to take on many forms. We are going to use it so that our player actor only has to check for collision with an Enemy class and not every specific type of Enemy, such as Ball or RedBall.
Also, by coding this way, we are making it very easy to add code for additional enemies, and if we find that our enemies have redundant code, we can easily move that code into our Enemy class. In other words, we are making our code extensible and maintainable. Here is the code for our Enemy class: import greenfoot.*;   public abstract class Enemy extends Actor { } The Ball class extends the Enemy class. Since Enemy is solely used for polymorphism, the Ball class contains all of the code necessary to implement bouncing and an initial trajectory. Here is the code for this class: import greenfoot.*;   public class Ball extends Enemy { protected int actorHeight; private int speedX = 0; public Ball() {    actorHeight = getImage().getHeight();    speedX = Greenfoot.getRandomNumber(8) - 4;    if( speedX == 0 ) {      speedX = Greenfoot.getRandomNumber(100) < 50 ? -1 : 1;    } } public void act() {    checkOffScreen(); } public int getXVelocity() {    return speedX; } private void checkOffScreen() {    if( getX() < -20 || getX() > getWorld().getWidth() + 20 ) {      getWorld().removeObject(this);    } else if( getY() > getWorld().getHeight() + 20 ) {      getWorld().removeObject(this);    } } } The implementation of Ball is missing the code to handle moving and bouncing. As we stated earlier, we will go over all the projectile-based code after providing the code we are using as the starting point for this game. In the Ball constructor, we randomly choose a speed in the x direction and save it in the speedX instance variable. We have included one accessory method to return the value of speedX (getXVelocity()). Last, we include checkOffScreen() to remove Ball once it goes off screen. If we do not do this, we would have a form of memory leak in our application because Greenfoot will continue to allocate resources and manage any actor until it is removed from the scenario. For the Ball class, I choose to use the ball.png image, which comes with the standard installation of Greenfoot. In this article, we will learn how to create a simple particle effect. Creating an effect is more about the use of a particle as opposed to its implementation. In the following code, we create a generic particle class, Particles, that we will extend to create a RedBall particle. We have organized the code in this way to easily accommodate adding particles in the future. Here is the code: import greenfoot.*;   public class Particles extends Enemy { private int turnRate = 2; private int speed = 5; private int lifeSpan = 50; public Particles(int tr, int s, int l) {    turnRate = tr;    speed = s;    lifeSpan = l;    setRotation(-90); } public void act() {    move();    remove(); } private void move() {    move(speed);    turn(turnRate); } private void remove() {    lifeSpan--;    if( lifeSpan < 0 ) {      getWorld().removeObject(this);    } } } Our particles are implemented to move up and slightly turn each call of the act() method. A particle will move lifeSpan times and then remove itself. As you might have guessed, lifeSpan is another use of a delay variable. The turnRate property can be either positive (to turn slightly right) or negative (to turn slightly left). We only have one subclass of Particles, RedBall. This class supplies the correct image for RedBall, supplies the required input for the Particles constructor, and then scales the image according to the parameters scaleX and scaleY. 
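The act() method above only removes off-screen balls; as the next paragraph notes, the movement and bouncing code is deferred until later. Purely as a preview of the kind of heuristic involved, a rough sketch might look like the following. This is our own illustration, not the book's final implementation; speedY, GRAVITY, and the Brick probe below are assumptions:

// A sketch only: gravity plus a simple vertical bounce off bricks,
// to be called from act() before checkOffScreen().
private int speedY = 0;                  // vertical velocity (assumption)
private static final int GRAVITY = 1;    // downward acceleration per frame

private void moveAndBounce() {
    speedY += GRAVITY;                                   // fall a bit faster each frame
    setLocation(getX() + speedX, getY() + speedY);
    // Probe just below the ball for a platform; reverse direction on impact.
    Actor brick = getOneObjectAtOffset(0, actorHeight / 2, Brick.class);
    if (brick != null && speedY > 0) {
        speedY = -speedY;
    }
}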
Here's the implementation:

import greenfoot.*;

public class RedBall extends Particles {
  public RedBall(int tr, int s, int l, int scaleX, int scaleY) {
    super(tr, s, l);
    getImage().scale(scaleX, scaleY);
  }
}

For RedBall, I used the Greenfoot-supplied image red-draught.png.

Fountains

In this game, fountains add a unique challenge. After reaching level five (see the World class CupcakeWorld), Fountain objects will be generated and randomly placed in the game. Figure 2 shows a fountain in action. A Fountain object continually spurts RedBall objects into the air like water from a fountain.

Figure 2: This is a close-up of a Fountain object in the game Cupcake Counter

Let's take a look at the code that implements the Fountain class:

import greenfoot.*;
import java.awt.Color;

public class Fountain extends Actor {
  private int lifespan = 75;
  private int startDelay = 100;
  private GreenfootImage img;

  public Fountain() {
    img = new GreenfootImage(20,20);
    img.setColor(Color.blue);
    img.setTransparency(100);
    img.fill();
    setImage(img);
  }

  public void act() {
    if( --startDelay == 0 ) wipeView();
    if( startDelay < 0 ) createRedBallShower();
  }

  private void wipeView() {
    img.clear();
  }

  private void createRedBallShower() {
  }
}

The constructor for Fountain creates a new blue, semitransparent square and sets that to be its image. We start with a blue square to give the player of the game a warning that a fountain is about to erupt. Since fountains are randomly placed at any location, it would be unfair to just drop one on our player and instantly end the game. This is also why RedBall is a subclass of Enemy and Fountain is not. It is safe for the player to touch the blue square. The startDelay delay variable is used to pause for a short amount of time, then remove the blue square (using the function wipeView()), and then start the RedBall shower (using the createRedBallShower() function). We can see this in the act() method.

Turrets

In the game, there is a turret in the top-middle of the screen that shoots purple bouncy balls at the player. It is shown in Figure 1. Why do we use a bouncy-ball shooting turret? Because this is our game and we can! The implementation of the Turret class is very simple. Most of the functionality of rotating the turret and creating a Ball to shoot is handled by CupcakeWorld in the generateBalls() method already discussed. The main purpose of this class is to just draw the initial image of the turret, which consists of a black circle for the base of the turret and a black rectangle to serve as the cannon. Here is the code:

import greenfoot.*;
import java.awt.Color;

public class Turret extends Actor {
  private GreenfootImage turret;
  private GreenfootImage gun;
  private GreenfootImage img;

  public Turret() {
    turret = new GreenfootImage(30,30);
    turret.setColor(Color.black);
    turret.fillOval(0,0,30,30);

    gun = new GreenfootImage(40,40);
    gun.setColor(Color.black);
    gun.fillRect(0,0,10,35);

    img = new GreenfootImage(60,60);
    img.drawImage(turret, 15, 15);
    img.drawImage(gun, 25, 30);
    img.rotate(0);

    setImage(img);
  }
}

We previously talked about the GreenfootImage class and how to use some of its methods to do custom drawing. One new function we introduced is drawImage(). This method allows you to draw one GreenfootImage into another. This is how you compose images, and we used it to create our turret from a rectangle image and a circle image.

Rewards

We create a Reward class for the same reason we created an Enemy class.
We are setting ourselves up to easily add new rewards in the future. Here is the code:

import greenfoot.*;

public abstract class Reward extends Actor {
}

The Cupcake class is a subclass of the Reward class and represents the object on the screen the player is constantly trying to collect. However, cupcakes have no actions to perform or state to keep track of; therefore, its implementation is simple:

import greenfoot.*;

public class Cupcake extends Reward {
}

When creating this class, I set its image to be muffin.png. This is an image that comes with Greenfoot. Even though the name of the image is a muffin, it still looks like a cupcake to me.

Jumpers

The Jumper class is a class that will allow all subclasses of it to jump when pressing either the up arrow key or the spacebar. At this point, we just provide a placeholder implementation:

import greenfoot.*;

public abstract class Jumper extends Actor {
  protected int actorHeight;

  public Jumper() {
    actorHeight = getImage().getHeight();
  }

  public void act() {
    handleKeyPresses();
  }

  protected void handleKeyPresses() {
  }
}

The next class we are going to present is the Bob class. The Bob class extends the Jumper class and then adds functionality to let the player move it left and right. Here is the code:

import greenfoot.*;

public class Bob extends Jumper {
  private int speed = 3;
  private int animationDelay = 0;
  private int frame = 0;
  private GreenfootImage[] leftImages;
  private GreenfootImage[] rightImages;
  private int actorWidth;
  private static final int DELAY = 3;

  public Bob() {
    super();

    rightImages = new GreenfootImage[5];
    leftImages = new GreenfootImage[5];

    for( int i=0; i<5; i++ ) {
      rightImages[i] = new GreenfootImage("images/Dawson_Sprite_Sheet_0" + Integer.toString(3+i) + ".png");
      leftImages[i] = new GreenfootImage(rightImages[i]);
      leftImages[i].mirrorHorizontally();
    }

    actorWidth = getImage().getWidth();
  }

  public void act() {
    super.act();
    checkDead();
    eatReward();
  }

  private void checkDead() {
    Actor enemy = getOneIntersectingObject(Enemy.class);
    if( enemy != null ) {
      endGame();
    }
  }

  private void endGame() {
    Greenfoot.stop();
  }

  private void eatReward() {
    Cupcake c = (Cupcake) getOneIntersectingObject(Cupcake.class);
    if( c != null ) {
      CupcakeWorld rw = (CupcakeWorld) getWorld();
      rw.removeObject(c);
      rw.addCupcakeCount(1);
    }
  }

  // Called by superclass
  protected void handleKeyPresses() {
    super.handleKeyPresses();

    if( Greenfoot.isKeyDown("left") ) {
      if( canMoveLeft() ) { moveLeft(); }
    }
    if( Greenfoot.isKeyDown("right") ) {
      if( canMoveRight() ) { moveRight(); }
    }
  }

  private boolean canMoveLeft() {
    if( getX() < 5 ) return false;
    return true;
  }

  private void moveLeft() {
    setLocation(getX() - speed, getY());
    if( animationDelay % DELAY == 0 ) {
      animateLeft();
      animationDelay = 0;
    }
    animationDelay++;
  }

  private void animateLeft() {
    setImage( leftImages[frame++] );
    frame = frame % 5;
    actorWidth = getImage().getWidth();
  }

  private boolean canMoveRight() {
    if( getX() > getWorld().getWidth() - 5 ) return false;
    return true;
  }

  private void moveRight() {
    setLocation(getX() + speed, getY());
    if( animationDelay % DELAY == 0 ) {
      animateRight();
      animationDelay = 0;
    }
    animationDelay++;
  }

  private void animateRight() {
    setImage( rightImages[frame++] );
    frame = frame % 5;
    actorWidth = getImage().getWidth();
  }
}

Like CupcakeWorld, this class is substantial.
We will discuss each method it contains sequentially. First, the constructor's main duty is to set up the images for the walking animation. The images came from www.wikia.com and were supplied, in the form of a sprite sheet, by the user Mecha Mario. A direct link to the sprite sheet is http://smbz.wikia.com/wiki/File:Dawson_Sprite_Sheet.PNG. Note that I manually copied and pasted the images I used from this sprite sheet using my favorite image editor.

Free Internet resources

Unless you are also an artist or a musician in addition to being a programmer, you are going to be hard pressed to create all of the assets you need for your Greenfoot scenario. If you look at the credits for AAA video games, you will see that the artists and musicians actually equal or even outnumber the programmers. Luckily, the Internet comes to the rescue. There are a number of websites that supply legally free assets you can use. For example, the website I used to get the images for the Bob class supplies free content under the Creative Commons Attribution-Share Alike License 3.0 (Unported) (CC-BY-SA) license. It is very important that you check the licensing used for any asset you download off the Internet and follow those user agreements carefully. In addition, make sure that you fully credit the source of your assets. For games, you should include a Credits screen to cite all the sources for the assets you used. The following are some good sites for free, online assets:

www.wikia.com
newgrounds.com
http://incompetech.com
opengameart.org
untamed.wild-refuge.net/rpgxp.php

Next, we have the act() method. It first calls the act() method of its superclass. It needs to do this so that we get the jumping functionality that is supplied by the Jumper class. Then, we call checkDead() and eatReward(). The checkDead() method ends the game if this instance of the Bob class touches an enemy, and eatReward() adds one to our score, by calling the CupcakeWorld method addCupcakeCount(), every time it touches an instance of the Cupcake class.

The rest of the class implements moving left and right. The main method for this is handleKeyPresses(). As in act(), the first thing we do is call handleKeyPresses() contained in the Jumper superclass. This runs the code in Jumper that handles the spacebar and up arrow key presses. The key to handling key presses is the Greenfoot method isKeyDown() (see the following information box). We use this method to check if the left arrow or right arrow keys are presently being pressed. If so, we check whether or not the actor can move left or right using the methods canMoveLeft() and canMoveRight(), respectively. If the actor can move, we then call either moveLeft() or moveRight().

Handling key presses in Greenfoot

The second tutorial explains how to control actors with the keyboard. To refresh your memory, we are going to present some information on the keyboard control here. The primary method we use in implementing keyboard control is isKeyDown(). This method provides a simple way to check whether a certain key is being pressed. Here is an excerpt from Greenfoot's documentation:

public static boolean isKeyDown(java.lang.String keyName)

Check whether a given key is currently pressed down.

Parameters: keyName: This is the name of the key to check.
Returns: true if the key is down.

Using isKeyDown() is easy. The ease of capturing and using input is one of the major strengths of Greenfoot.
Here is example code that will pause the execution of the game if the "p" key is pressed:

if( Greenfoot.isKeyDown("p") ) {
  Greenfoot.stop();
}

Next, we will discuss canMoveLeft(), moveLeft(), and animateLeft(). The canMoveRight(), moveRight(), and animateRight() methods mirror their functionality and will not be discussed. The sole purpose of canMoveLeft() is to prevent the actor from walking off the left-hand side of the screen. The moveLeft() method moves the actor using setLocation() and then animates the actor to look as though it is moving to the left-hand side. It uses a delay variable to make the walking speed look natural (not too fast). The animateLeft() method sequentially displays the walking-left images.

Platforms

The game contains several platforms that the player can jump or stand on. The platforms perform no actions and only serve as placeholders for images. We use inheritance to simplify collision detection. Here is the implementation of Platform:

import greenfoot.*;

public class Platform extends Actor {
}

Here's the implementation of BrickWall:

import greenfoot.*;

public class BrickWall extends Platform {
}

Here's the implementation of Brick:

import greenfoot.*;

public class Brick extends Platform {
}

You should now be able to compile and test Cupcake Counter. Make sure you handle any typos or other errors you introduced while inputting the code. For now, you can only move left and right.

Summary

We have created a simple game using basic movements in Greenfoot.

Resources for Article:

Further resources on this subject:

A Quick Start Guide to Scratch 2.0 [article]
Games of Fortune with Scratch 1.4 [article]
Cross-platform Development - Build Once, Deploy Anywhere [article]
Setting Up Panda3D and Configuring Development Tools

Packt
14 Apr 2011
7 min read
Panda3D 1.7 Game Developer's Cookbook

Panda3D is a very powerful and feature-rich game engine that comes with a lot of features needed for creating modern video games. Using Python as a scripting language to interface with the low-level programming libraries makes it easy to quickly create games, because this layer of abstraction neatly hides many of the complexities of handling assets, hardware resources, or graphics rendering, for example. This also allows simple games and prototypes to be created very quickly and keeps the code needed for getting things going to a minimum.

Panda3D is a complete game engine package. This means that it is not just a collection of game programming libraries with a nice Python interface, but also includes all the supplementary tools for previewing, converting, and exporting assets as well as packing game code and data for redistribution. Delivering such tools is a very important aspect of a game engine that helps with increasing the productivity of a development team.

The Panda3D engine is a very nice set of building blocks needed for creating entertainment software, scaling nicely to the needs of hobbyists, students, and professional game development teams. Panda3D is known to have been used in projects ranging from one-shot experimental prototypes to full-scale commercial MMORPG productions like Toontown Online or Pirates of the Caribbean Online.

Before you are able to start a new project and use all the powerful features provided by Panda3D to their fullest, though, you need to prepare your working environment and tools. By the end of this article, you will have a strong set of programming tools at hand, as well as the knowledge of how to configure Panda3D to your future projects' needs.

Downloading and configuring NetBeans to work with Panda3D

When writing code, having the right set of tools at hand and feeling comfortable when using them is very important. Panda3D uses Python for scripting and there are plenty of good integrated development environments available for this language like IDLE, Eclipse, or Eric. Of course, Python code can be written using the excellent Vim or Emacs editors too. Tastes do differ, and every programmer has his or her own preferences when it comes to this decision.

To make things easier and have a uniform working environment, however, we are going to use the free NetBeans IDE for developing Python scripts. This choice was made out of pure preference and one of the many great alternatives might be used as well for following through the recipes in this article, but may require different steps for the initial setup and getting samples to run.

In this recipe we will install and configure the NetBeans integrated development environment to suit our needs for developing games with Panda3D using the Python programming language.

Getting ready

Before beginning, be sure to download and install Panda3D. To download the engine SDK and tools, go to www.panda3d.org/download.php:

The Panda3D Runtime for End-Users is a prebuilt redistributable package containing a player program and a browser plugin. These can be used to easily run packaged Panda3D games.

Under Snapshot Builds, you will be able to find daily builds of the latest version of the Panda3D engine. These are to be handled with care, as they are not meant for production purposes.

Finally, the link labeled Panda3D SDK for Developers is the one you need to follow to retrieve a copy of the Panda3D development kit and tools.
This will always take you to the latest release of Panda3D, which at this time is version 1.7.0. This version was marked as unstable by the developers but has been working in a stable way for this article. This version also added a great amount of interesting features, like the web browser plugin, an advanced shader and graphics pipeline, or built-in shadow effects, which really are worth a try.

Click the link that says Panda3D SDK for Developers to reach the page shown in the following screenshot:

Here you can select one of the SDK packages for the platforms that Panda3D is available on. This article assumes a setup of NetBeans on Windows, but most of the samples should work on these alternative platforms too, as most of Panda3D's features have been ported to all of these operating systems.

To download and install the Panda3D SDK, click the Panda3D SDK 1.7.0 link at the top of the page and download the installer package. Launch the program and follow the installation wizard, always choosing the default settings. In this and all of the following recipes we'll assume the install path to be C:\Panda3D-1.7.0, which is the default installation location. If you chose a different location, it might be a good idea to note the path and be prepared to adapt the presented file and folder paths to your needs!

How to do it...

Follow these steps to set up your Panda3D game development environment:

1. Point your web browser to netbeans.org and click the prominent Download FREE button:
2. Ignore the big table showing all kinds of different versions on the following page and scroll down. Click the link that says JDK with NetBeans IDE Java SE bundle.
3. This will take you to the following page as shown here. Click the Downloads link to the right to proceed.
4. You will find yourself at another page, as shown in the screenshot. Select Windows in the Platform dropdown menu and tick the checkbox to agree to the license agreement. Click the Continue button to proceed.
5. Follow the instructions on the next page. Click the file name to start the download.
6. Launch the installer and follow the setup wizard.
7. Once installed, start the NetBeans IDE.
8. In the main toolbar click Tools | Plugins.
9. Select the tab that is labeled Available Plugins.
10. Browse the list until you find Python and tick the checkbox next to it:
11. Click Install. This will start a wizard that downloads and installs the necessary features for Python development.
12. At the end of the installation wizard you will be prompted to restart the NetBeans IDE, which will finish the setup of the Python feature.
13. Once NetBeans reappears on your screen, click Tools | Python Platforms.
14. In the Python Platform Manager window, click the New button and browse for the file C:\Panda3D-1.7.0\python\ppython.exe.
15. Select Python 2.6.4 from the platforms list and click the Make Default button. Your settings should now reflect the ones shown in the following screenshot:
16. Finally we select the Python Path tab and once again, compare your settings to the screenshot:
17. Click the Close button and you are done!

How it works...

In the preceding steps we configured NetBeans to use the Python runtime that drives the Panda3D engine and, as we can see, it is very easy to install and set up our working environment for Panda3D.

There's more...

Different from other game engines, Panda3D follows an interesting approach in its internal architecture. While the more common approach is to embed a scripting runtime into the game engine's executable, Panda3D uses the Python runtime as its main executable.
The engine modules handling such things as loading assets, rendering graphics, or playing sounds are implemented as native extension modules. These are loaded by Panda3D's custom Python interpreter as needed when we use them in our script code. Essentially, the architecture of Panda3D turns the hierarchy between native code and the scripting runtime upside down. While in other game engines, native code initiates calls to the embedded scripting runtime, Panda3D shifts the direction of program flow. In Panda3D, the Python runtime is the core element of the engine that lets script code initiate calls into native programming libraries. To understand Panda3D, it is important to understand this architectural decision. Whenever we start the ppython executable, we start up the Panda3D engine. If you ever get into a situation where you are compiling your own Panda3D runtime from source code, don't forget to revisit steps 13 to 17 of this recipe to configure NetBeans to use your custom runtime executable!
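To see the whole toolchain in action, you can run a minimal script through the ppython interpreter we just registered. The following is a hedged sketch in the spirit of Panda3D's own introductory examples; it assumes the environment model that ships with the SDK's sample assets, and any model file of your own will do just as well:

from direct.showbase.ShowBase import ShowBase

class MyApp(ShowBase):
    def __init__(self):
        # ShowBase boots the engine: window, scene graph, asset loader
        ShowBase.__init__(self)
        # Load a model and attach it to the scene graph root
        self.scene = self.loader.loadModel("models/environment")
        self.scene.reparentTo(self.render)

app = MyApp()
app.run()

If a window opens and renders the model, then NetBeans, the Panda3D SDK, and the custom Python runtime are wired together correctly.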
Cross-platform Development - Build Once, Deploy Anywhere

Packt
01 Oct 2013
19 min read
The demo application – how the projects work together

Take a look at the following diagram to understand and familiarize yourself with the configuration pattern that all of your Libgdx applications will have in common:

What you see here is a compact view of four projects. The demo project to the very left contains the shared code that is referenced (that is, added to the build path) by all the other platform-specific projects. The main class of the demo application is MyDemo.java. However, looking at it from a more technical view, it is not the main class where an application gets started by the operating system; those classes will be referred to as Starter Classes from now on. Notice that Libgdx uses the term "Starter Class" to distinguish between these two types of main classes in order to avoid confusion. We will cover everything related to the topic of Starter Classes in a moment.

While taking a closer look at all these directories in the preceding screenshot, you may have spotted that there are two assets folders: one in the demo-desktop project and another one in demo-android. This brings us to the question, where should you put all the application's assets?

The demo-android project plays a special role in this case. In the preceding screenshot, you see a subfolder called data, which contains an image named libgdx.png, and it also appears in the demo-desktop project in the same place. Just remember to always put all of your assets into the assets folder under the demo-android project. The reason behind this is that the Android build process requires direct access to the application's assets folder. During its build process, a Java source file, R.java, will automatically be generated under the gen folder. It contains special information for Android about the available assets. It would be the usual way to access assets through Java code if you were explicitly writing an Android application. However, in Libgdx, you will want to stay platform-independent as much as possible and access any resource such as assets only through methods provided by Libgdx.

You may wonder how the other platform-specific projects will be able to access the very same assets without having to maintain several copies per project. Needless to say that this would require you to keep all copies manually synchronized each time the assets change. Luckily, this problem has already been taken care of by the generator as follows:

The demo-desktop project uses a linked resource, a feature by Eclipse, to add existing files or folders to other places in a workspace. You can check this out by right-clicking on the demo-desktop project, then navigating to Properties | Resource | Linked Resources and clicking on the Linked Resources tab.

The demo-html project requires another approach, since Google Web Toolkit (GWT) has a different build process compared to the other projects. There is a special file, GwtDefinition.gwt.xml, that allows you to set the asset path by setting the configuration property gdx.assetpath to the assets folder of the Android project. Notice that it is good practice to use relative paths such as ../demo-android/assets so that the reference does not get broken in case the workspace is moved from its original location.

Take this advice as a precaution to protect you and maybe your fellow developers too from wasting precious time on something that can be easily avoided by using the right setup right from the beginning.
The following is the code listing for GwtDefinition.gwt.xml from demo-html:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE module PUBLIC "-//Google Inc.//DTD Google Web Toolkit trunk//EN"
  "http://google-web-toolkit.googlecode.com/svn/trunk/distro-source/core/src/gwt-module.dtd">
<module>
  <inherits name='com.badlogic.gdx.backends.gdx_backends_gwt' />
  <inherits name='MyDemo' />
  <entry-point class='com.packtpub.libgdx.demo.client.GwtLauncher' />
  <set-configuration-property name="gdx.assetpath" value="../demo-android/assets" />
</module>

Backends

Libgdx makes use of several other libraries to interface the specifics of each platform in order to provide cross-platform support for your applications. Generally, a backend is what enables Libgdx to access the corresponding platform functionalities when one of the abstracted (platform-independent) Libgdx methods is called. For example, drawing an image to the upper-left corner of the screen, playing a sound file at a volume of 80 percent, or reading and writing from/to a file. Libgdx currently provides the following three backends:

LWJGL (Lightweight Java Game Library)
Android
JavaScript/WebGL

As already mentioned in Introduction to Libgdx and Project Setup, there will also be an iOS backend in the near future.

LWJGL (Lightweight Java Game Library)

LWJGL (Lightweight Java Game Library) is an open source Java library originally started by Caspian Rychlik-Prince to ease game development in terms of accessing the hardware resources on desktop systems. In Libgdx, it is used for the desktop backend to support all the major desktop operating systems, such as Windows, Linux, and Mac OS X.

For more details, check out the official LWJGL website at http://www.lwjgl.org/.

Android

Google frequently releases and updates their official Android SDK. This represents the foundation for Libgdx to support Android in the form of a backend. There is an API Guide available which explains everything the Android SDK has to offer for Android developers. You can find it at http://developer.android.com/guide/components/index.html.

WebGL

WebGL support is one of the latest additions to the Libgdx framework. This backend uses the GWT to translate Java code into JavaScript and SoundManager2 (SM2), among others, to add a combined support for HTML5, WebGL, and audio playback. Note that this backend requires a WebGL-capable web browser to run the application.

You might want to check out the official website of GWT: https://developers.google.com/web-toolkit/.
You might want to check out the official website of SM2: http://www.schillmania.com/projects/soundmanager2/.
You might want to check out the official website of WebGL: http://www.khronos.org/webgl/.
There is also a list of unresolved issues you might want to check out at https://github.com/libgdx/libgdx/blob/master/backends/gdx-backends-gwt/issues.txt.

Modules

Libgdx provides six core modules that allow you to access the various parts of the system your application will run on. What makes these modules so great for you as a developer is that they provide you with a single Application Programming Interface (API) to achieve the same effect on more than just one platform. This is extremely powerful because you can now focus on your own application and you do not have to bother with the specialties that each platform inevitably brings, including the nasty little bugs that may require tricky workarounds.
This is all going to be transparently handled in a straightforward API which is categorized into logic modules and is globally available anywhere in your code, since every module is accessible as a static field in the Gdx class. Naturally, Libgdx does always allow you to create multiple code paths for per-platform decisions. For example, you could conditionally increase the level of detail in a game when run on the desktop platform, since desktops usually have a lot more computing power than mobile devices.

The application module

The application module can be accessed through Gdx.app. It gives you access to the logging facility, a method to shutdown gracefully, persist data, query the Android API version, query the platform type, and query the memory usage.

Logging

Libgdx employs its own logging facility. You can choose a log level to filter what should be printed to the platform's console. The default log level is LOG_INFO. You can use a settings file and/or change the log level dynamically at runtime using the following code line:

Gdx.app.setLogLevel(Application.LOG_DEBUG);

The available log levels are:

LOG_NONE: This prints no logs. The logging is completely disabled.
LOG_ERROR: This prints error logs only.
LOG_INFO: This prints error and info logs.
LOG_DEBUG: This prints error, info, and debug logs.

To write an info, debug, or an error log to the console, use the following listings:

Gdx.app.log("MyDemoTag", "This is an info log.");
Gdx.app.debug("MyDemoTag", "This is a debug log.");
Gdx.app.error("MyDemoTag", "This is an error log.");

Shutting down gracefully

You can tell Libgdx to shutdown the running application. The framework will then stop the execution in the correct order as soon as possible and completely de-allocate any memory that is still in use, freeing both Java and the native heap. Use the following listing to initiate a graceful shutdown of your application:

Gdx.app.exit();

You should always do a graceful shutdown when you want to terminate your application. Otherwise, you will risk creating memory leaks, which is a really bad thing. On mobile devices, memory leaks will probably have the biggest negative impact due to their limited resources.

Persisting data

If you want to persist your data, you should use the Preferences class. It is merely a dictionary or a hash map data type which stores multiple key-value pairs in a file. Libgdx will create a new preferences file on the fly if it does not exist yet. You can have several preference files using unique names in order to split up data into categories. To get access to a preference file, you need to request a Preferences instance by its filename as follows:

Preferences prefs = Gdx.app.getPreferences("settings.prefs");

To write a (new) value, you have to choose a key under which the value should be stored. If this key already exists in a preferences file, it will be overwritten. Do not forget to call flush() afterwards to persist the data, or else all the changes will be lost.

prefs.putInteger("sound_volume", 100); // volume @ 100%
prefs.flush();

Persisting data needs a lot more time than just modifying values in memory (without flushing). Therefore, it is always better to modify as many values as possible before a final flush() method is executed.

To read back a certain value from a preferences file, you need to know the corresponding key. If this key does not exist, it will be set to the default value.
You can optionally pass your own default value as the second argument (for example, in the following listing, 50 is for the default sound volume):

int soundVolume = prefs.getInteger("sound_volume", 50);

Querying the Android API Level

On Android, you can query the Android API Level, which allows you to handle things differently for certain versions of the Android OS. Use the following listing to find out the version:

Gdx.app.getVersion();

On platforms other than Android, the version returned is always 0.

Querying the platform type

You may want to write a platform-specific code where it is necessary to know the current platform type. The following example shows how it can be done:

switch (Gdx.app.getType()) {
  case Desktop:
    // Code for Desktop application
    break;
  case Android:
    // Code for Android application
    break;
  case WebGL:
    // Code for WebGL application
    break;
  default:
    // Unhandled (new?) platform application
    break;
}

Querying memory usage

You can query the system to find out its current memory footprint of your application. This may help you find excessive memory allocations that could lead to application crashes. The following functions return the amount of memory (in bytes) that is in use by the corresponding heap:

long memUsageJavaHeap = Gdx.app.getJavaHeap();
long memUsageNativeHeap = Gdx.app.getNativeHeap();

Graphics module

The graphics module can be accessed either through Gdx.getGraphics() or by using the shortcut variable Gdx.graphics.

Querying delta time

Query Libgdx for the time span between the current and the last frame in seconds by calling Gdx.graphics.getDeltaTime().

Querying display size

Query the device's display size returned in pixels by calling Gdx.graphics.getWidth() and Gdx.graphics.getHeight().

Querying the FPS (frames per second) counter

Query a built-in frame counter provided by Libgdx to find the average number of frames per second by calling Gdx.graphics.getFramesPerSecond().

Audio module

The audio module can be accessed either through Gdx.getAudio() or by using the shortcut variable Gdx.audio.

Sound playback

To load sounds for playback, call Gdx.audio.newSound(). The supported file formats are WAV, MP3, and OGG. There is an upper limit of 1 MB for decoded audio data. Consider the sounds to be short effects like bullets or explosions so that the size limitation is not really an issue.

Music streaming

To stream music for playback, call Gdx.audio.newMusic(). The supported file formats are WAV, MP3, and OGG.

Input module

The input module can be accessed either through Gdx.getInput() or by using the shortcut variable Gdx.input. In order to receive and handle input properly, you should always implement the InputProcessor interface and set it as the global handler for input in Libgdx by calling Gdx.input.setInputProcessor().

Reading the keyboard/touch/mouse input

Query the system for the last x or y coordinate in the screen coordinates where the screen origin is at the top-left corner by calling either Gdx.input.getX() or Gdx.input.getY().

To find out if the screen is touched either by a finger or by mouse, call Gdx.input.isTouched().
To find out if the mouse button is pressed, call Gdx.input.isButtonPressed().
To find out if the keyboard is pressed, call Gdx.input.isKeyPressed().

Reading the accelerometer

Query the accelerometer for its value on the x axis by calling Gdx.input.getAccelerometerX(). Replace the X in the method's name with Y or Z to query the other two axes. Be aware that there will be no accelerometer present on a desktop, so Libgdx always returns 0.
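To tie a few of these polling calls together, here is a hedged sketch of a per-frame update method; the InputPollingExample class and its playerX field are made up for illustration, but every Gdx call shown is part of the modules described above:

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input;

public class InputPollingExample {
  private float playerX = 0; // hypothetical player position

  // called once per frame, for example from ApplicationListener.render()
  public void update() {
    float delta = Gdx.graphics.getDeltaTime();

    if (Gdx.input.isTouched()) {
      // ease the player towards the last touch/click position,
      // scaled by frame time so the speed is frame-rate independent
      playerX += (Gdx.input.getX() - playerX) * delta;
    }

    if (Gdx.input.isKeyPressed(Input.Keys.ESCAPE)) {
      Gdx.app.exit(); // request a graceful shutdown
    }
  }
}

Polling like this is handy for continuous movement; for discrete events such as single key taps, the InputProcessor interface mentioned above is usually the better fit.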
Starting and canceling vibrator

On Android, you can let the device vibrate by calling Gdx.input.vibrate(). A running vibration can be cancelled by calling Gdx.input.cancelVibrate().

Catching Android soft keys

You might want to catch Android's soft keys to add an extra handling code for them. If you want to catch the back button, call Gdx.input.setCatchBackKey(true). If you want to catch the menu button, call Gdx.input.setCatchMenuKey(true).

On a desktop where you have a mouse pointer, you can tell Libgdx to catch it so that you get a permanent mouse input without having the mouse ever leave the application window. To catch the mouse cursor, call Gdx.input.setCursorCatched(true).

The files module

The files module can be accessed either through Gdx.getFiles() or by using the shortcut variable Gdx.files.

Getting an internal file handle

You can get a file handle for an internal file by calling Gdx.files.internal(). An internal file is relative to the assets folder on the Android and WebGL platforms. On a desktop, it is relative to the root folder of the application.

Getting an external file handle

You can get a file handle for an external file by calling Gdx.files.external(). An external file is relative to the SD card on the Android platform. On a desktop, it is relative to the user's home folder. Note that this is not available for WebGL applications.

The network module

The network module can be accessed either through Gdx.getNet() or by using the shortcut variable Gdx.net.

HTTP GET and HTTP POST

You can make HTTP GET and POST requests by calling either Gdx.net.httpGet() or Gdx.net.httpPost().

Client/server sockets

You can create client/server sockets by calling either Gdx.net.newClientSocket() or Gdx.net.newServerSocket().

Opening a URI in a web browser

To open a Uniform Resource Identifier (URI) in the default web browser, call Gdx.net.openURI(URI).

Libgdx's Application Life-Cycle and Interface

The Application Life-Cycle in Libgdx is a well-defined set of distinct system states. The list of these states is pretty short: create, resize, render, pause, resume, and dispose.

Libgdx defines an ApplicationListener interface that contains six methods, one for each system state. The following code listing is a copy that is directly taken from Libgdx's sources. For the sake of readability, all comments have been stripped.

public interface ApplicationListener {
  public void create ();
  public void resize (int width, int height);
  public void render ();
  public void pause ();
  public void resume ();
  public void dispose ();
}

All you need to do is implement these methods in your main class of the shared game code project. Libgdx will then call each of these methods at the right time. The following diagram visualizes the Libgdx's Application Life-Cycle:

Note that a full and dotted line basically has the same meaning in the preceding figure. They both connect two consecutive states and have a direction of flow indicated by a little arrowhead on one end of the line. A dotted line additionally denotes a system event.

When an application starts, it will always begin with create(). This is where the initialization of the application should happen, such as loading assets into memory and creating an initial state of the game world. Subsequently, the next state that follows is resize(). This is the first opportunity for an application to adjust itself to the available display size (width and height) given in pixels.

Next, Libgdx will handle system events.
If no event has occurred in the meanwhile, it is assumed that the application is (still) running. The next state would be render(). This is where a game application will mainly do two things:

Update the game world model
Draw the scene on the screen using the updated game world model

Afterwards, a decision is made upon which the platform type is detected by Libgdx. On a desktop or in a web browser, the displaying application window can be resized virtually at any time. Libgdx compares the last and current sizes on every cycle so that resize() is only called if the display size has changed. This makes sure that the running application is able to accommodate a changed display size.

Now the cycle starts over by handling (new) system events once again. Another system event that can occur during runtime is the exit event. When it occurs, Libgdx will first change to the pause() state, which is a very good place to save any data that would be lost otherwise after the application has terminated. Subsequently, Libgdx changes to the dispose() state where an application should do its final clean-up to free all the resources that it is still using.

This is also almost true for Android, except that pause() is an intermediate state that is not directly followed by a dispose() state at first. Be aware that this event may occur anytime during application runtime while the user has pressed the Home button or if there is an incoming phone call in the meanwhile. In fact, as long as the Android operating system does not need the occupied memory of the paused application, its state will not be changed to dispose(). Moreover, it is possible that a paused application might receive a resume system event, which in this case would change its state to resume(), and it would eventually arrive at the system event handler again.

Starter Classes

A Starter Class defines the entry point (starting point) of a Libgdx application. They are specifically written for a certain platform. Usually, these kinds of classes are very simple and mostly consist of not more than a few lines of code to set certain parameters that apply to the corresponding platform. Think of them as a kind of boot-up sequence for each platform. Once booting has finished, the Libgdx framework hands over control from the Starter Class (for example, the demo-desktop project) to your shared application code (for example, the demo project) by calling the different methods from the ApplicationListener interface that the MyDemo class implements. Remember that the MyDemo class is where the shared application code begins.

We will now take a look at each of the Starter Classes that were generated during the project setup.

Running the demo application on a desktop

The Starter Class for the desktop application is called Main.java. The following listing is Main.java from demo-desktop:

package com.packtpub.libgdx.demo;

import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;

public class Main {
  public static void main(String[] args) {
    LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();
    cfg.title = "demo";
    cfg.useGL20 = false;
    cfg.width = 480;
    cfg.height = 320;

    new LwjglApplication(new MyDemo(), cfg);
  }
}

In the preceding code listing, you see the Main class, a plain Java class without the need to implement an interface or inherit from another class. Instead, a new instance of the LwjglApplication class is created.
This class provides a couple of overloaded constructors to choose from. Here, we pass a new instance of the MyDemo class as the first argument to the constructor. Optionally, an instance of the LwjglApplicationConfiguration class can be passed as the second argument. The configuration class allows you to set every parameter that is configurable for a Libgdx desktop application. In this case, the window title is set to demo and the window's width and height is set to 480 by 320 pixels. This is all you need to write and configure a Starter Class for a desktop.

Let us try to run the application now. To do this, right-click on the demo-desktop project in Project Explorer in Eclipse and then navigate to Run As | Java Application. Eclipse may ask you to select the Main class when you do this for the first time. Simply select the Main class and also check that the correct package name (com.packtpub.libgdx.demo) is displayed next to it. The desktop application should now be up and running on your computer. If you are working on Windows, you should see a window that looks as follows:

Summary

In this article, we learned about Libgdx and how all the projects of an application work together. We covered Libgdx's backends, modules, and Starter Classes. Additionally, we covered what the Application Life Cycle and corresponding interface are, and how they are meant to work.

Resources for Article:

Further resources on this subject:

Panda3D Game Development: Scene Effects and Shaders [Article]
Microsoft XNA 4.0 Game Development: Receiving Player Input [Article]
Introduction to Game Development Using Unity 3D [Article]
How to Create an OpenSceneGraph Application

Packt
07 Apr 2011
11 min read
OpenSceneGraph 3.0: Beginner's Guide

Constructing your own projects

To build an executable program from your own source code, a platform-dependent solution or makefile is always required. At the beginning of this article, we are going to introduce another way to construct platform-independent projects with the CMake system, by which means we are able to focus on interacting with the code and ignore the painstaking compiling and building process.

Time for action – building applications with CMake

Before constructing your own project with CMake scripts, it could be helpful to keep the headers and source files together in an empty directory first. The second step is to create a CMakeLists.txt file using any text editor, and then start writing some simple CMake build rules.

The following code will implement a project with additional OSG headers and dependency libraries. Please enter them into the newly-created CMakeLists.txt file:

cmake_minimum_required( VERSION 2.6 )
project( MyProject )

find_package( OpenThreads )
find_package( osg )
find_package( osgDB )
find_package( osgUtil )
find_package( osgViewer )

macro( config_project PROJNAME LIBNAME )
    include_directories( ${${LIBNAME}_INCLUDE_DIR} )
    target_link_libraries( ${PROJNAME} ${${LIBNAME}_LIBRARY} )
endmacro()

add_executable( MyProject main.cpp )
config_project( MyProject OPENTHREADS )
config_project( MyProject OSG )
config_project( MyProject OSGDB )
config_project( MyProject OSGUTIL )
config_project( MyProject OSGVIEWER )

We have only added a main.cpp source file here, which is made up of the "Hello World" example and will be compiled to generate an executable file named MyProject. This small project depends on five major OSG components. All of these configurations can be modified to meet certain requirements and different user applications.

Next, start cmake-gui and drag your CMakeLists.txt into the GUI. You may not be familiar with the CMake scripts to be executed, at present. However, the CMake wiki will be helpful for further understanding: http://www.cmake.org/Wiki/CMake.

Create and build a Visual Studio solution or a makefile. The only point is that you have to ensure that your CMake software version is equal to or greater than 2.6, and make sure you have the OSG_ROOT environment variable set. Otherwise, the find_package() macro may not be able to find OSG installations correctly. The following image shows the unexpected errors encountered because OSG headers and libraries were not found in the path indicated by OSG_ROOT (or the variable was just missed):

Note that there is no INSTALL project in the Visual Studio solution, or any make install command to run at this time, because we don't write such CMake scripts for post-build installations. You could just run the executable file in the build directory directly.

What just happened?

CMake provides easy-to-read commands to automatically find dependencies for user projects. It will check preset directories and environment variables to see if there are any headers and libraries for the required package. The environment variable OSG_ROOT (OSG_DIR is OK, too) will facilitate in looking for OSG under Windows and UNIX, as CMake will first search for valid paths defined in it, and check if there are OSG prebuilt headers and libraries existing in these paths.

Have a go hero – testing with different generators

Just try a series of tests to generate your project, using Visual Studio, MinGW, and the UNIX gcc compiler.
You will find that CMake is a convenient tool for building binary files from source code on different platforms. Maybe this is also a good start to learning programming in a multi-platform style.

Using a root node

Now we are going to write some code and build it with a self-created CMake script. We will again make a slight change to the frequently-used "Hello World" example.

Time for action – improving the "Hello World" example

The included headers, <osgDB/ReadFile> and <osgViewer/Viewer>, do not need to be modified. We only add a root variable that provides the runtime access to the Cessna model and assign it to the setSceneData() method.

In the main entry, record the Cessna model with a variable named root:

osg::ref_ptr<osg::Node> root = osgDB::readNodeFile("cessna.osg");
osgViewer::Viewer viewer;
viewer.setSceneData( root.get() );
return viewer.run();

Build and run it at once:

You will see no difference between this example and the previous "Hello World". So what actually happened?

What just happened?

In this example, we introduced two new OSG classes: osg::ref_ptr<> and osg::Node. The osg::Node class represents the basic element of a scene graph. The variable root stands for the root node of a Cessna model, which is used as the scene data to be visualized. Meanwhile, an instance of the osg::ref_ptr<> class template is created to manage the node object. It is a smart pointer, which provides additional features for the purpose of efficient memory management.

Understanding memory management

In a typical programming scenario, the developer should create a pointer to the root node, which directly or indirectly manages all other child nodes of the scene graph. In that case, the application will traverse the scene graph and delete each node and its internal data carefully when they no longer need to be rendered. This process is tiresome and error-prone, leaving developers to debug dozens of bad trees and wild pointers, because they can never know how many other objects still keep a pointer to the one being deleted. However, without writing the management code, data segments occupied by all scene nodes will never be deleted, which will lead to unexpected memory leaks. This is why memory management is important in OSG programming.

A basic concept of memory management always involves two topics:

Allocation: Providing the memory needed by an object, by allocating the required memory block.
Deallocation: Recycling the allocated memory for reuse, when its data is no longer used.

Some modern languages, such as C#, Java, and Visual Basic, use a garbage collector to free memory blocks that are unreachable from any program variables. That means storing the number of objects reaching a memory block, and deallocating the memory when the number decrements to zero.

The standard C++ approach does not work in such a way, but we can mimic it by means of a smart pointer, which is defined as an object that acts like a pointer, but is much smarter in the management of memory. For example, the boost library provides the boost::shared_ptr<> class template to store pointers to dynamically allocated related objects.

ref_ptr<> and Referenced classes

Fortunately, OSG also provides a native smart pointer, osg::ref_ptr<>, for the purpose of automatic garbage collection and deallocation. To make it work properly, OSG also provides the osg::Referenced class to manage reference-counted memory blocks, which is used as the base class of any classes that may serve as the template argument.
The osg::ref_ptr<> class template re-implements a number of C++ operators as well as member functions, and thus provides convenient methods to developers. Its main components are as follows:

get(): This public method returns the managed pointer, for instance, the osg::Node* pointer if you are using osg::Node as the template argument.
operator*(): This is actually a dereference operator, which returns the l-value at the pointer address, for instance, the osg::Node& reference variable.
operator->() and operator=(): These operators allow a user application to use osg::ref_ptr<> as a normal pointer. The former calls member functions of the managed object, and the latter replaces the current managed pointer with a new one.
operator==(), operator!=(), and operator!(): These operators help to compare smart pointers, or check if a certain pointer is invalid. An osg::ref_ptr<> object with a NULL value assigned, or without any assignment, is considered invalid.
valid(): This public method returns true if the managed pointer is not NULL. The expression some_ptr.valid() is equivalent to some_ptr!=NULL if some_ptr is defined as a smart pointer.
release(): This public method is useful when returning the managed address from a function.

The osg::Referenced class is the pure base class of all elements in a scene graph, such as nodes, geometries, rendering states, and any other allocatable scene objects. The osg::Node class actually inherits from osg::Referenced indirectly. This is the reason why we program as follows:

osg::ref_ptr<osg::Node> root;

The osg::Referenced class contains an integer number to handle the memory block allocated. The reference count is initialized to 0 in the class constructor, and will be increased by 1 if the osg::Referenced object is referred to by an osg::ref_ptr<> smart pointer. On the contrary, the number will be decreased by 1 if the object is removed from a certain smart pointer. The object itself will be automatically destroyed when it is no longer referenced by any smart pointers.

The osg::Referenced class provides three main member methods:

The public method ref() increases the referenced counting number by 1.
The public method unref() decreases the referenced counting number by 1.
The public method referenceCount() returns the value of the current referenced counting number, which is useful for code debugging.

These methods could also work for classes that are derived from osg::Referenced. Note that it is very rarely necessary to call ref() or unref() directly in user programs, because doing so means that the reference count is managed manually and may conflict with what the osg::ref_ptr<> is going to do. Otherwise, OSG's internal garbage collecting system will get the wrong number of smart pointers in use and even crash when managing memory blocks in an improper way.

Collecting garbage: why and how

Here are some reasons for using smart pointers and the garbage collection system in programming:

Fewer bugs: Using smart pointers means the automatic initialization and cleanup of pointers. No dangling pointers will be created because they are always reference-counted.
Efficient management: Objects will be reclaimed as soon as they are no longer referenced, which gives more available memory to applications with limited resources.
Easy to debug: We can easily obtain the referenced counting number and other information on objects, and then apply other optimizations and experiments.

For instance, a scene graph tree is composed of a root node and multiple levels of child nodes.
Assuming that all children are managed with osg::ref_ptr<>, user applications may only keep the pointer to the root node. As is illustrated by the following image, the operation of deleting the root node pointer will cause a cascading effect that will destroy the whole node hierarchy:

Each node in the example scene graph is managed by its parent, and will automatically be unreferenced during the deletion of the parent node. This node, if no longer referenced by any other nodes, will be destroyed immediately, and all of its children will be freed up. The entire scene graph will finally be cleaned without worries after the last group node or leaf node is deleted. The process is really convenient and efficient, isn't it?

Please make sure the OSG smart pointer can work for you: use a class derived from osg::Referenced as the osg::ref_ptr<> template argument, and correctly assign newly-allocated objects to smart pointers. A smart pointer can be used either as a local variable, a global variable, or a class member variable, and will automatically decrease the referenced counting number when reassigned to another object or moved out of the smart pointer's declaration scope.

It is strongly recommended that user applications always use smart pointers to manage their scenes, but there are still some issues that need special attention:

osg::Referenced and its derivatives should be created from the heap only. They cannot be used as local variables because class destructors are declared protected internally for safety. For example:

osg::ref_ptr<osg::Node> node = new osg::Node; // this is legal
osg::Node node; // this is illegal!

A regular C++ pointer is still workable temporarily. But user applications should remember to assign it to osg::ref_ptr<> or add it to a scene graph element (almost all OSG scene classes use smart pointers to manage child objects) in the end, as it is always the safest approach.

osg::Node* tmpNode = new osg::Node; // this is OK
...
osg::ref_ptr<osg::Node> node = tmpNode; // Good finish!

Don't play with reference cycles, as the garbage collecting mechanism cannot handle them. A reference cycle means that an object refers to itself directly or indirectly, which leads to an incorrect calculation of the referenced counting number.

The scene graph shown in the following image contains two kinds of reference cycles, which are both invalid. The node Child 1.1 directly adds itself as a child node and will form a dead cycle while traversing its children, because it is the child of itself, too! The node Child 2.2, which also makes a reference cycle indirectly, will cause the same problem while running:

Now let's have a better grasp of the basic concepts of memory management, through a very simple example.
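The excerpt ends just before that example; as a hedged stand-in, a short program in the same spirit makes the reference counting visible, using the referenceCount() debugging helper described above:

#include <osg/Node>
#include <osg/ref_ptr>
#include <iostream>

int main()
{
    osg::ref_ptr<osg::Node> node = new osg::Node;
    std::cout << node->referenceCount() << std::endl;     // prints 1

    {
        osg::ref_ptr<osg::Node> another = node;
        std::cout << node->referenceCount() << std::endl; // prints 2
    } // 'another' goes out of scope and unreferences the node

    std::cout << node->referenceCount() << std::endl;     // prints 1 again
    return 0;
}

When the last smart pointer releases the node, the count reaches zero and OSG destroys the object automatically; no explicit delete ever appears in user code.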
Data-driven Design

Packt
10 Jul 2013
21 min read
(For more resources related to this topic, see here.)

Loading XML files

I have chosen to use XML files because they are so easy to parse. We are not going to write our own XML parser; rather, we will use an open source library called TinyXML. TinyXML was written by Lee Thomason and is available under the zlib license from http://sourceforge.net/projects/tinyxml/.

Once downloaded, the only setup we need to do is to include a few of the files in our project:

- tinyxmlerror.cpp
- tinyxmlparser.cpp
- tinystr.cpp
- tinystr.h
- tinyxml.cpp
- tinyxml.h

Also, at the top of tinyxml.h, add this line of code:

#define TIXML_USE_STL

By doing this we ensure that we are using the STL versions of the TinyXML functions. We can now go through a little of how an XML file is structured. It's actually fairly simple, and we will only give a brief overview to help you get up to speed with how we will use it.

Basic XML structure

Here is a basic XML file:

<?xml version="1.0" ?>
<ROOT>
    <ELEMENT>
    </ELEMENT>
</ROOT>

The first line of the file defines the format of the XML file. The second line is our root element; everything else is a child of this element. The third line is the first child of the root element. Now let's look at a slightly more complicated XML file:

<?xml version="1.0" ?>
<ROOT>
    <ELEMENTS>
        <ELEMENT>Hello,</ELEMENT>
        <ELEMENT> World!</ELEMENT>
    </ELEMENTS>
</ROOT>

As you can see, we have now added children to the first child element. You can nest as many children as you like, but without a good structure your XML file may become very hard to read. If we were to parse the above file, here are the steps we would take:

1. Load the XML file.
2. Get the root element, <ROOT>.
3. Get the first child of the root element, <ELEMENTS>.
4. For each child <ELEMENT> of <ELEMENTS>, get the content.
5. Close the file.

Another useful XML feature is the use of attributes. Here is an example:

<ROOT>
    <ELEMENTS>
        <ELEMENT text="Hello,"/>
        <ELEMENT text=" World!"/>
    </ELEMENTS>
</ROOT>

We have now stored the text we want in an attribute named text. When this file is parsed, we would now grab the text attribute for each element and store that instead of the content between the <ELEMENT></ELEMENT> tags. This is especially useful for us, as we can use attributes to store lots of different values for our objects. So let's look at something closer to what we will use in our game:

<?xml version="1.0" ?>
<STATES>

    <!--The Menu State-->
    <MENU>
        <TEXTURES>
            <texture filename="button.png" ID="playbutton"/>
            <texture filename="exit.png" ID="exitbutton"/>
        </TEXTURES>
        <OBJECTS>
            <object type="MenuButton" x="100" y="100" width="400"
            height="100" textureID="playbutton"/>
            <object type="MenuButton" x="100" y="300" width="400"
            height="100" textureID="exitbutton"/>
        </OBJECTS>
    </MENU>

    <!--The Play State-->
    <PLAY>
    </PLAY>

    <!-- The Game Over State -->
    <GAMEOVER>
    </GAMEOVER>

</STATES>

This is slightly more complex. We define each state in its own element, and within this element we have objects and textures with various attributes. These attributes can be loaded in to create the state. With this knowledge of XML you can easily create your own file structures, if what we cover within this book is not to your needs.
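Putting the five parsing steps from earlier into practice with TinyXML might look like the following minimal sketch; the filename hello.xml is just a placeholder for the Hello/World file above, saved to disk:

#include <iostream>
#include "tinyxml.h"

bool parseHelloWorld()
{
    // 1. load the XML file
    TiXmlDocument xmlDoc;
    if(!xmlDoc.LoadFile("hello.xml")) // placeholder filename
    {
        std::cerr << xmlDoc.ErrorDesc() << "\n";
        return false;
    }

    // 2. get the root element, <ROOT>
    TiXmlElement* pRoot = xmlDoc.RootElement();

    // 3. get the first child of the root element, <ELEMENTS>
    TiXmlElement* pElements = pRoot->FirstChildElement();

    // 4. for each child <ELEMENT> of <ELEMENTS>, get the content
    for(TiXmlElement* e = pElements->FirstChildElement(); e != NULL;
        e = e->NextSiblingElement())
    {
        std::cout << e->GetText() << "\n"; // prints "Hello," then " World!"
    }

    // 5. the file is closed when xmlDoc goes out of scope
    return true;
}

This is essentially the shape of the state parser we will build later in this article.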
Implementing Object Factories

We are now armed with a little XML knowledge, but before we move forward we are going to take a look at object factories. An object factory is a class that is tasked with the creation of our objects. Essentially, we tell the factory the object we would like it to create, and it goes ahead and creates a new instance of that object and then returns it. We can start by looking at a rudimentary implementation (assuming ObjectID is an enumeration of all the types we know about):

GameObject* GameObjectFactory::createGameObject(ObjectID id)
{
    switch(id)
    {
        case PLAYER:
            return new Player();
        case ENEMY:
            return new Enemy();
        // lots more object types
        default:
            return NULL;
    }
}

This function is very simple. We pass in an ID for the object and the factory uses a big switch statement to look it up and return the correct object. It is not a terrible solution, but also not a particularly good one: the factory needs to know about every type it creates, and maintaining the switch statement for many different objects would be extremely tedious. We want this factory not to care which type we ask for. It shouldn't need to know all of the specific types we want it to create. Luckily this is something that we can definitely achieve.

Using Distributed Factories

Through the use of distributed factories we can make a generic object factory that will create any of our types. Distributed factories allow us to dynamically maintain the types of objects we want our factory to create, rather than hard-coding them into a function (like in the preceding simple example). The approach we will take is to have the factory contain a std::map that maps a string (the type of our object) to a small class called Creator, whose only purpose is the creation of a specific object. We will register a new type with the factory using a function that takes a string (the ID) and a Creator class, and adds them to the factory's map. We are going to start with the base class for all the Creator types. Create GameObjectFactory.h and declare this class at the top of the file:

#include <string>
#include <map>
#include "GameObject.h"

class BaseCreator
{
public:
    virtual GameObject* createGameObject() const = 0;
    virtual ~BaseCreator() {}
};

We can now go ahead and create the rest of our factory and then go through it piece by piece.

class GameObjectFactory
{
public:
    bool registerType(std::string typeID, BaseCreator* pCreator)
    {
        std::map<std::string, BaseCreator*>::iterator it =
        m_creators.find(typeID);

        // if the type is already registered, do nothing
        if(it != m_creators.end())
        {
            delete pCreator;
            return false;
        }

        m_creators[typeID] = pCreator;
        return true;
    }

    GameObject* create(std::string typeID)
    {
        std::map<std::string, BaseCreator*>::iterator it =
        m_creators.find(typeID);

        if(it == m_creators.end())
        {
            std::cout << "could not find type: " << typeID << "\n";
            return NULL;
        }

        BaseCreator* pCreator = (*it).second;
        return pCreator->createGameObject();
    }

private:
    std::map<std::string, BaseCreator*> m_creators;
};

This is quite a small class but it is actually very powerful.
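Before dissecting it piece by piece, here is a quick sketch of how it might be used once a type has been registered (PlayerCreator is defined a little later in this article, and the string IDs are arbitrary choices):

GameObjectFactory factory;

// hand the factory a creator for the "Player" string ID
factory.registerType("Player", new PlayerCreator());

GameObject* pPlayer = factory.create("Player"); // returns a new Player
GameObject* pBad = factory.create("Dragon");    // not registered, returns NULL

The factory never needs to know what a Player actually is; it only maps strings to creators.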
We will cover each part separately, starting with the std::map member, m_creators:

std::map<std::string, BaseCreator*> m_creators;

This map holds the important elements of our factory; the functions of the class essentially either add to or remove from this map. This becomes apparent when we look at the registerType function:

bool registerType(std::string typeID, BaseCreator* pCreator)

This function takes the ID we want to associate the object type with (as a string), and the creator object for that class. The function then attempts to find the type using the std::map::find function:

std::map<std::string, BaseCreator*>::iterator it = m_creators.find(typeID);

If the type is found, then it is already registered. The function then deletes the passed-in pointer and returns false:

if(it != m_creators.end())
{
    delete pCreator;
    return false;
}

If the type is not already registered, then it can be assigned to the map, and true is returned:

m_creators[typeID] = pCreator;
return true;

As you can see, the registerType function is actually very simple; it is just a way to add types to the map. The create function is very similar:

GameObject* create(std::string typeID)
{
    std::map<std::string, BaseCreator*>::iterator it =
    m_creators.find(typeID);

    if(it == m_creators.end())
    {
        std::cout << "could not find type: " << typeID << "\n";
        return NULL;
    }

    BaseCreator* pCreator = (*it).second;
    return pCreator->createGameObject();
}

The function looks for the type in the same way as registerType does, but this time it checks whether the type was not found (as opposed to found). If the type is not found we return NULL, and if the type is found then we use the Creator object for that type to return a new instance of it as a pointer to GameObject.

It is worth noting that the GameObjectFactory class should probably be a singleton. We won't cover how to make it a singleton in this article. Try implementing it yourself, or see how it is implemented in the source code download.

Fitting the factory into the framework

With our factory now in place, we can start altering our GameObject classes to use it. Our first step is to ensure that we have a Creator class for each of our objects. Here is one for Player:

class PlayerCreator : public BaseCreator
{
    GameObject* createGameObject() const
    {
        return new Player();
    }
};

This can be added to the bottom of the Player.h file. Any object we want the factory to create must have its own Creator implementation. Another addition we must make is to move LoaderParams from the constructor to its own function called load. This stops us needing to pass the LoaderParams object to the factory itself. We will put the load function into the GameObject base class, as we want every object to have one.

class GameObject
{
public:
    virtual void draw()=0;
    virtual void update()=0;
    virtual void clean()=0;

    // new load function
    virtual void load(const LoaderParams* pParams)=0;

protected:
    GameObject() {}
    virtual ~GameObject() {}
};

Each of our derived classes will now need to implement this load function. The SDLGameObject class will now look like this:

SDLGameObject::SDLGameObject() : GameObject()
{
}

void SDLGameObject::load(const LoaderParams *pParams)
{
    m_position = Vector2D(pParams->getX(),pParams->getY());
    m_velocity = Vector2D(0,0);
    m_acceleration = Vector2D(0,0);
    m_width = pParams->getWidth();
    m_height = pParams->getHeight();
    m_textureID = pParams->getTextureID();
    m_currentRow = 1;
    m_currentFrame = 1;
    m_numFrames = pParams->getNumFrames();
}

Our objects that derive from SDLGameObject can use this load function as well; for example, here is the Player::load function:

Player::Player() : SDLGameObject()
{
}

void Player::load(const LoaderParams *pParams)
{
    SDLGameObject::load(pParams);
}

This may seem a bit pointless, but it actually saves us from having to pass LoaderParams around everywhere. Without it, we would need to pass LoaderParams through the factory's create function, which would then in turn pass it through to the Creator object. We have eliminated the need for this by having a specific function that handles parsing our loading values. This will make more sense once we start parsing our states from a file.
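Creating and initializing an object is now a two-step process: create through the factory, then load from parameters. A hedged sketch, using the singleton form of the factory that appears later in this article, with made-up position and texture values:

// assumes "Player" was registered with a PlayerCreator beforehand;
// the numeric values and texture ID here are purely illustrative
GameObject* pPlayer = TheGameObjectFactory::Instance()->create("Player");
pPlayer->load(new LoaderParams(100, 100, 128, 82, "playertexture", 1));

Because load is part of the GameObject interface, the calling code never needs to know the concrete type it is initializing.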
We have another issue that needs rectifying: we have two classes with extra parameters in their constructors (MenuButton and AnimatedGraphic). Both classes take an extra parameter as well as LoaderParams. To combat this we will add these values to LoaderParams and give them default values.

LoaderParams(int x, int y, int width, int height, std::string textureID,
int numFrames, int callbackID = 0, int animSpeed = 0) :
m_x(x), m_y(y), m_width(width), m_height(height),
m_textureID(textureID), m_numFrames(numFrames),
m_callbackID(callbackID), m_animSpeed(animSpeed)
{
}

In other words, if a parameter is not passed in, then the default value will be used (0 in both cases). Rather than passing in a function pointer as MenuButton did, we are using callbackID to decide which callback function to use within a state. We can now start using our factory and parsing our states from an XML file.

Parsing states from an XML file

The file we will be parsing is the following (test.xml in the source code downloads):

<?xml version="1.0" ?>
<STATES>
    <MENU>
        <TEXTURES>
            <texture filename="assets/button.png" ID="playbutton"/>
            <texture filename="assets/exit.png" ID="exitbutton"/>
        </TEXTURES>
        <OBJECTS>
            <object type="MenuButton" x="100" y="100" width="400"
            height="100" textureID="playbutton" numFrames="0"
            callbackID="1"/>
            <object type="MenuButton" x="100" y="300" width="400"
            height="100" textureID="exitbutton" numFrames="0"
            callbackID="2"/>
        </OBJECTS>
    </MENU>
    <PLAY>
    </PLAY>
    <GAMEOVER>
    </GAMEOVER>
</STATES>

We are going to create a new class that parses our states for us, called StateParser. The StateParser class has no data members; it is to be used once in the onEnter function of a state and then discarded when it goes out of scope. Create a StateParser.h file and add the following code:

#include <iostream>
#include <string>
#include <vector>
#include "tinyxml.h"

class GameObject;

class StateParser
{
public:
    bool parseState(const char* stateFile, std::string stateID,
        std::vector<GameObject*> *pObjects,
        std::vector<std::string> *pTextureIDs);

private:
    void parseObjects(TiXmlElement* pStateRoot,
        std::vector<GameObject*> *pObjects);
    void parseTextures(TiXmlElement* pStateRoot,
        std::vector<std::string> *pTextureIDs);
};

We have three functions here, one public and two private. The parseState function takes the filename of an XML file as a parameter, along with the current stateID value, a pointer to a std::vector of GameObject* for that state, and a pointer to a std::vector of texture ID strings for that state.
The StateParser.cpp file will define this function:

bool StateParser::parseState(const char *stateFile, string stateID,
vector<GameObject *> *pObjects, std::vector<std::string> *pTextureIDs)
{
    // create the XML document
    TiXmlDocument xmlDoc;

    // load the state file
    if(!xmlDoc.LoadFile(stateFile))
    {
        cerr << xmlDoc.ErrorDesc() << "\n";
        return false;
    }

    // get the root element
    TiXmlElement* pRoot = xmlDoc.RootElement();

    // pre declare the states root node
    TiXmlElement* pStateRoot = 0;

    // get this states root node and assign it to pStateRoot
    for(TiXmlElement* e = pRoot->FirstChildElement(); e != NULL;
        e = e->NextSiblingElement())
    {
        if(e->Value() == stateID)
        {
            pStateRoot = e;
        }
    }

    // pre declare the texture root
    TiXmlElement* pTextureRoot = 0;

    // get the root of the texture elements
    for(TiXmlElement* e = pStateRoot->FirstChildElement(); e != NULL;
        e = e->NextSiblingElement())
    {
        if(e->Value() == string("TEXTURES"))
        {
            pTextureRoot = e;
        }
    }

    // now parse the textures
    parseTextures(pTextureRoot, pTextureIDs);

    // pre declare the object root node
    TiXmlElement* pObjectRoot = 0;

    // get the root node and assign it to pObjectRoot
    for(TiXmlElement* e = pStateRoot->FirstChildElement(); e != NULL;
        e = e->NextSiblingElement())
    {
        if(e->Value() == string("OBJECTS"))
        {
            pObjectRoot = e;
        }
    }

    // now parse the objects
    parseObjects(pObjectRoot, pObjects);
    return true;
}

There is a lot of code in this function, so it is worth covering in some depth. We will note the corresponding part of the XML file along with the code we use to obtain it. The first part of the function attempts to load the XML file that is passed into the function:

// create the XML document
TiXmlDocument xmlDoc;

// load the state file
if(!xmlDoc.LoadFile(stateFile))
{
    cerr << xmlDoc.ErrorDesc() << "\n";
    return false;
}

If the XML loading fails, it displays an error to let you know what happened. Next we must grab the root node of the XML file:

// get the root element
TiXmlElement* pRoot = xmlDoc.RootElement(); // <STATES>

The rest of the nodes in the file are all children of this root node. We must now get the root node of the state we are currently parsing; let's say we are looking for MENU:

// pre declare the states root node
TiXmlElement* pStateRoot = 0;

// get this states root node and assign it to pStateRoot
for(TiXmlElement* e = pRoot->FirstChildElement(); e != NULL;
    e = e->NextSiblingElement())
{
    if(e->Value() == stateID)
    {
        pStateRoot = e;
    }
}

This piece of code goes through each direct child of the root node and checks whether its name is the same as stateID. Once it finds the correct node, it assigns it to pStateRoot. We now have the root node of the state we want to parse:

<MENU> // the state's root node

Now that we have a pointer to the root node of our state, we can start to grab values from it. First we want to load the textures from the file, so we look for the <TEXTURES> node using the children of the pStateRoot object we found before:

// pre declare the texture root
TiXmlElement* pTextureRoot = 0;

// get the root of the texture elements
for(TiXmlElement* e = pStateRoot->FirstChildElement(); e != NULL;
    e = e->NextSiblingElement())
{
    if(e->Value() == string("TEXTURES"))
    {
        pTextureRoot = e;
    }
}

Once the <TEXTURES> node is found, we can pass it into the private parseTextures function (which we will cover a little later):

parseTextures(pTextureRoot, pTextureIDs);

The function then moves on to searching for the <OBJECTS> node and, once found, it passes it into the private parseObjects function.
We also pass in the pObjects parameter:

// pre declare the object root node
TiXmlElement* pObjectRoot = 0;

// get the root node and assign it to pObjectRoot
for(TiXmlElement* e = pStateRoot->FirstChildElement(); e != NULL;
    e = e->NextSiblingElement())
{
    if(e->Value() == string("OBJECTS"))
    {
        pObjectRoot = e;
    }
}

parseObjects(pObjectRoot, pObjects);
return true;

At this point our state has been parsed. We can now cover the two private functions, starting with parseTextures:

void StateParser::parseTextures(TiXmlElement* pStateRoot,
std::vector<std::string> *pTextureIDs)
{
    for(TiXmlElement* e = pStateRoot->FirstChildElement(); e != NULL;
        e = e->NextSiblingElement())
    {
        string filenameAttribute = e->Attribute("filename");
        string idAttribute = e->Attribute("ID");
        pTextureIDs->push_back(idAttribute); // push into list

        TheTextureManager::Instance()->load(filenameAttribute,
        idAttribute, TheGame::Instance()->getRenderer());
    }
}

This function gets the filename and ID attributes from each of the texture values in this part of the XML:

<TEXTURES>
    <texture filename="button.png" ID="playbutton"/>
    <texture filename="exit.png" ID="exitbutton"/>
</TEXTURES>

It then adds them to TextureManager:

TheTextureManager::Instance()->load(filenameAttribute, idAttribute,
TheGame::Instance()->getRenderer());

The parseObjects function is quite a bit more complicated. It creates objects using our GameObjectFactory class and reads from this part of the XML file:

<OBJECTS>
    <object type="MenuButton" x="100" y="100" width="400" height="100"
    textureID="playbutton" numFrames="0" callbackID="1"/>
    <object type="MenuButton" x="100" y="300" width="400" height="100"
    textureID="exitbutton" numFrames="0" callbackID="2"/>
</OBJECTS>

The parseObjects function is defined like so:

void StateParser::parseObjects(TiXmlElement *pStateRoot,
std::vector<GameObject *> *pObjects)
{
    for(TiXmlElement* e = pStateRoot->FirstChildElement(); e != NULL;
        e = e->NextSiblingElement())
    {
        int x, y, width, height, numFrames, callbackID, animSpeed;
        string textureID;

        e->Attribute("x", &x);
        e->Attribute("y", &y);
        e->Attribute("width",&width);
        e->Attribute("height", &height);
        e->Attribute("numFrames", &numFrames);
        e->Attribute("callbackID", &callbackID);
        e->Attribute("animSpeed", &animSpeed);

        textureID = e->Attribute("textureID");

        GameObject* pGameObject = TheGameObjectFactory::Instance()
        ->create(e->Attribute("type"));

        pGameObject->load(new LoaderParams(x, y, width, height,
        textureID, numFrames, callbackID, animSpeed));

        pObjects->push_back(pGameObject);
    }
}

First we get any values we need from the current node. Since XML files are pure text, we cannot simply grab ints or floats from the file; TinyXML has functions with which you can pass in the variable you want set along with the attribute name. For example:

e->Attribute("x", &x);

This sets the variable x to the value contained within the attribute "x". Next comes the creation of a GameObject* instance using the factory:

GameObject* pGameObject =
TheGameObjectFactory::Instance()->create(e->Attribute("type"));

We pass in the value from the type attribute and use that to create the correct object from the factory. After this we must use the load function of GameObject to set our desired values, using the values loaded from the XML file:

pGameObject->load(new LoaderParams(x, y, width, height, textureID,
numFrames, callbackID, animSpeed));

And finally we push pGameObject into the pObjects array, which is actually a pointer to the current state's object vector:

pObjects->push_back(pGameObject);
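One caveat worth flagging about TinyXML's behavior here: the Attribute(name, &value) overload leaves the variable untouched when the attribute is missing, so the ints above would remain uninitialized if an <object> tag omitted, say, animSpeed. Initializing them defensively is cheap:

// safe defaults in case an <object> tag omits an attribute
int x = 0, y = 0, width = 0, height = 0;
int numFrames = 0, callbackID = 0, animSpeed = 0;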
Loading the menu state from an XML file

We now have most of our state-loading code in place and can make use of it in the MenuState class. First we must do a little legwork and set up a new way of assigning the callbacks to our MenuButton objects, since this is not something we could pass in from an XML file. The approach we will take is to give any object that wants to make use of a callback an attribute named callbackID in the XML file. Other objects do not need this value, and LoaderParams will use the default value of 0. The MenuButton class will make use of this value and pull it from its LoaderParams, like so:

void MenuButton::load(const LoaderParams *pParams)
{
    SDLGameObject::load(pParams);
    m_callbackID = pParams->getCallbackID();
    m_currentFrame = MOUSE_OUT;
}

The MenuButton class will also need two other functions, one to set the callback function and another to return its callback ID:

void setCallback(void(*callback)()) { m_callback = callback; }
int getCallbackID() { return m_callbackID; }

Next we must create a function to set callbacks. Any state that uses objects with callbacks will need an implementation of this function. The most likely states to have callbacks are menu states, so we will rename our MenuState class to MainMenuState and make MenuState an abstract class that extends from GameState. The class will declare a function that sets the callbacks for any items that need it, and it will also have a vector of Callback objects as a member; this will be used within the setCallbacks function for each state.

class MenuState : public GameState
{
protected:
    typedef void(*Callback)();
    virtual void setCallbacks(const std::vector<Callback>& callbacks) = 0;

    std::vector<Callback> m_callbacks;
};

The MainMenuState class (previously MenuState) will now derive from this MenuState class:

#include "MenuState.h"
#include "GameObject.h"

class MainMenuState : public MenuState
{
public:
    virtual void update();
    virtual void render();

    virtual bool onEnter();
    virtual bool onExit();

    virtual std::string getStateID() const { return s_menuID; }

private:
    virtual void setCallbacks(const std::vector<Callback>& callbacks);

    // callback functions for menu items
    static void s_menuToPlay();
    static void s_exitFromMenu();

    static const std::string s_menuID;

    std::vector<GameObject*> m_gameObjects;
};

Because MainMenuState now derives from MenuState, it must of course declare and define the setCallbacks function. We are now ready to use our state parsing to load the MainMenuState class. Our onEnter function will now look like this:

bool MainMenuState::onEnter()
{
    // parse the state
    StateParser stateParser;
    stateParser.parseState("test.xml", s_menuID, &m_gameObjects,
    &m_textureIDList);

    m_callbacks.push_back(0); // callbackID 0 is unused; IDs start from 1
    m_callbacks.push_back(s_menuToPlay);
    m_callbacks.push_back(s_exitFromMenu);

    // set the callbacks for menu items
    setCallbacks(m_callbacks);

    std::cout << "entering MenuState\n";
    return true;
}

We create a state parser and then use it to parse the current state. We push any callbacks into the m_callbacks array inherited from MenuState.
Now we need to define the setCallbacks function:

void MainMenuState::setCallbacks(const std::vector<Callback>& callbacks)
{
    // go through the game objects
    for(int i = 0; i < m_gameObjects.size(); i++)
    {
        // if they are of type MenuButton then assign a callback
        // based on the id passed in from the file
        if(dynamic_cast<MenuButton*>(m_gameObjects[i]))
        {
            MenuButton* pButton =
            dynamic_cast<MenuButton*>(m_gameObjects[i]);
            pButton->setCallback(callbacks[pButton->getCallbackID()]);
        }
    }
}

We use dynamic_cast to check whether the object is a MenuButton type; if it is, then we do the actual cast and use the object's callbackID as the index into the callbacks vector to assign the correct function. While this method of assigning callbacks could be seen as not very extendable, and could possibly be better implemented, it does have a redeeming feature: it allows us to keep our callbacks inside the state they will be called from. This means that we won't need a huge header file with all of the callbacks in it.

One last alteration we need is to add a list of texture IDs to each state, so that we can clear all of the textures that were loaded for that state. Open up GameState.h and we will add a protected variable:

protected:
    std::vector<std::string> m_textureIDList;

We pass this into the state parser in onEnter, and then we can clear any used textures in the onExit function of each state, like so:

// clear the texture manager
for(int i = 0; i < m_textureIDList.size(); i++)
{
    TheTextureManager::Instance()->
    clearFromTextureMap(m_textureIDList[i]);
}

Before we start running the game, we need to register our MenuButton type with the GameObjectFactory. Open up Game.cpp, and in the Game::init function we can register the type:

TheGameObjectFactory::Instance()->registerType("MenuButton",
new MenuButtonCreator());

We can now run the game and see our fully data-driven MainMenuState.
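To see the payoff of this design, consider what a brand new object type would cost us. As a hedged sketch (Enemy here is hypothetical, echoing the earlier switch-statement example), it is one Creator, one registration call, and one new line in test.xml — with no changes to the factory or the parser at all:

// at the bottom of Enemy.h, assuming an Enemy class derived
// from SDLGameObject exists (hypothetical)
class EnemyCreator : public BaseCreator
{
    GameObject* createGameObject() const
    {
        return new Enemy();
    }
};

// in Game::init, alongside the MenuButton registration
TheGameObjectFactory::Instance()->registerType("Enemy",
new EnemyCreator());

And in a state's <OBJECTS> block (illustrative values):

<object type="Enemy" x="200" y="200" width="32" height="32"
textureID="enemytexture" numFrames="1"/>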