
How-To Tutorials - Game Development

370 Articles

Creating and Warping 3D Text with Away3D 3.6

Packt
11 Feb 2011
7 min read
Away3D 3.6 Essentials

The external library, swfvector, is contained in the wumedia package. More information about the swfvector library can be found at http://code.google.com/p/swfvector/. This library was not developed as part of the Away3D engine, but it has been integrated since versions 2.4 and 3.4 so that Away3D can create and display text 3D objects within the scene.

Embedding fonts

Creating a text 3D object in Away3D requires a source SWF file with an embedded font. To accommodate this, we will create a very simple application using the Fonts class below. This class embeds a single TrueType font called Vera Sans from the Vera.ttf file. When compiled, the resulting SWF file can then be referenced by our Away3D application, allowing the embedded font to be accessed.

When embedding fonts using the Flex 4 SDK, you may need to set the embedAsCFF property to false, like this:

    [Embed(mimeType="application/x-font", source="Vera.ttf",
        fontName="Vera Sans", embedAsCFF=false)]

This is due to the new way fonts can be embedded with the latest versions of the Flex SDK. You can find more information on the embedAsCFF property at http://help.adobe.com/en_US/flex/using/WS2db454920e96a9e51e63e3d11c0bf6320a-7fea.html.

    package
    {
        import flash.display.Sprite;

        public class Fonts extends Sprite
        {
            [Embed(mimeType="application/x-font", source="Vera.ttf",
                fontName="Vera Sans")]
            public var VeraSans:Class;
        }
    }

The font used here is Bitstream Vera, which can be freely distributed and can be obtained from http://www.gnome.org/fonts/. However, not all fonts can be freely redistributed, so be mindful of the copyright or license restrictions that may be imposed by a particular font.

Displaying text in the scene

Text 3D objects are represented by the TextField3D class from the away3d.primitives package. Creating a text 3D object requires two steps:

1. Extracting the fonts that were embedded inside a separate SWF file.
2. Creating a new TextField3D object.

Let's create an application called FontDemo that creates a 3D text field and adds it to the scene.

    package
    {

We import the TextField3D class, making it available within our application.

        import away3d.primitives.TextField3D;

The VectorText class will be used to extract the fonts from the embedded SWF file.

        import wumedia.vector.VectorText;

        public class FontDemo extends Away3DTemplate
        {

The Fonts.swf file was created by compiling the Fonts class above. We want to embed this SWF file as raw data, so we specify the MIME type to be application/octet-stream.

            [Embed(source="Fonts.swf", mimeType="application/octet-stream")]
            protected var Fonts:Class;

            public function FontDemo()
            {
                super();
            }

            protected override function initEngine():void
            {
                super.initEngine();

Before any TextField3D objects can be created, we need to extract the fonts from the embedded SWF file. This is done by calling the static extractFont() function in the VectorText class and passing in a new instance of the embedded SWF file. Because we specified the MIME type of the embedded file to be application/octet-stream, a new instance of the class is created as a ByteArray.

                VectorText.extractFont(new Fonts());
            }

            protected override function initScene():void
            {
                super.initScene();
                this.camera.z = 0;

Here we create the new instance of the TextField3D class. The first parameter is the font name, which corresponds to the font name included in the embedded SWF file. The TextField3D constructor also takes an init object, whose parameters are listed in the next table.
                var text:TextField3D = new TextField3D("Vera Sans",
                    {
                        text: "Away3D Essentials",
                        align: VectorText.CENTER,
                        z: 300
                    });
                scene.addChild(text);
            }
        }
    }

The following table shows the init object parameters accepted by the TextField3D constructor. When the application is run, the scene will contain a single 3D object that has been created to spell out the words "Away3D Essentials", formatted using the supplied font. At this point, the text 3D object can be transformed and interacted with just like any other 3D object.

3D text materials

You may be aware of applying bitmap materials to the surface of a 3D object according to their UV coordinates. The default UV coordinates defined by a TextField3D object generally do not allow bitmap materials to be applied in a useful manner. However, simple colored materials like WireframeMaterial, WireColorMaterial, and ColorMaterial can be applied to a TextField3D object.

Extruding 3D text

By default, a text 3D object has no depth (although it is visible from both sides). One of the extrusion classes, called TextExtrusion, can be used to create an additional 3D object that takes the shape of a text 3D object and extends it into the third dimension. When combined, the TextExtrusion and TextField3D objects can be used to create the appearance of a solid block of text. The FontExtrusionDemo class in the following code snippet gives an example of this process:

    package
    {
        import away3d.containers.ObjectContainer3D;
        import away3d.extrusions.TextExtrusion;
        import away3d.primitives.TextField3D;

        import flash.events.Event;

        import wumedia.vector.VectorText;

        public class FontExtrusionDemo extends Away3DTemplate
        {
            [Embed(source="Fonts.swf", mimeType="application/octet-stream")]
            protected var Fonts:Class;

The TextField3D object and the extrusion 3D object are both added as children of an ObjectContainer3D object, referenced by the container property.

            protected var container:ObjectContainer3D;

The text property will reference the TextField3D object used to display the 3D text.

            protected var text:TextField3D;

The extrusion property will reference the TextExtrusion object used to give the 3D text some depth.

            protected var extrusion:TextExtrusion;

            public function FontExtrusionDemo()
            {
                super();
            }

            protected override function initEngine():void
            {
                super.initEngine();
                this.camera.z = 0;
                VectorText.extractFont(new Fonts());
            }

            protected override function initScene():void
            {
                super.initScene();
                text = new TextField3D("Vera Sans",
                    {
                        text: "Away3D Essentials",
                        align: VectorText.CENTER
                    });

The TextExtrusion constructor takes a reference to the TextField3D object (or any other Mesh object). It also accepts an init object, which we have used to specify the depth of the 3D text and to make both sides of the extruded mesh visible.

                extrusion = new TextExtrusion(text,
                    {
                        depth: 10,
                        bothsides: true
                    });

The ObjectContainer3D object is created, supplying the TextField3D and TextExtrusion 3D objects that were created above as children. The initial position of the ObjectContainer3D object is set to 300 units down the positive end of the Z-axis.

                container = new ObjectContainer3D(text, extrusion, { z: 300 });

The container is then added as a child of the scene.

                scene.addChild(container);
            }

            protected override function onEnterFrame(event:Event):void
            {
                super.onEnterFrame(event);

The container is slowly rotated around its Y-axis by modifying the rotationY property in every frame.
In previous examples, we simply incremented the rotation property without any regard for when the value became larger than 360 degrees. After all, rotating a 3D object by 180 or 540 degrees has the same overall effect. But in this case, we do want to keep the value of the rotationY property between 0 and 360, so we can easily test whether the rotation falls within a given range. To do this, we use the mod (%) operator.

                container.rotationY = (container.rotationY + 1) % 360;

Z-sorting issues can arise due to the fact that the TextExtrusion and TextField3D objects are so closely aligned. This issue results in parts of the TextField3D or TextExtrusion 3D objects showing through where it is obvious that they should be hidden. To solve this problem, we can force the sorting order of the 3D objects. Here we assign a positive value to the TextField3D object's screenZOffset property to force it to be drawn behind the TextExtrusion object when the container has been rotated between 90 and 270 degrees around the Y-axis; when the container is rotated like this, the TextField3D object is at the back of the scene. Otherwise, the TextField3D object is drawn in front by assigning a negative value to the screenZOffset property.

                if (container.rotationY > 90 && container.rotationY < 270)
                    text.screenZOffset = 10;
                else
                    text.screenZOffset = -10;
            }
        }
    }

The result of the FontExtrusionDemo application is shown in the following image:


The Ogre 3D scene graph

Packt
20 Dec 2010
13 min read
Creating a scene node

We will learn how to create a new scene node and attach our 3D model to it.

How to create a scene node with Ogre 3D

We will follow these steps:

1. In the old version of our code, we had the following two lines in the createScene() function:

    Ogre::Entity* ent = mSceneMgr->createEntity("MyEntity","Sinbad.mesh");
    mSceneMgr->getRootSceneNode()->attachObject(ent);

2. Replace the last line with the following:

    Ogre::SceneNode* node = mSceneMgr->createSceneNode("Node1");

3. Then add the following two lines; the order of these two lines is irrelevant for the resulting scene:

    mSceneMgr->getRootSceneNode()->addChild(node);
    node->attachObject(ent);

4. Compile and start the application.

What just happened?

We created a new scene node named Node1. Then we added the scene node to the root scene node. After this, we attached our previously created 3D model to the newly created scene node so it would be visible.

How to work with the RootSceneNode

The call mSceneMgr->getRootSceneNode() returns the root scene node. This scene node is a member variable of the scene manager. When we want something to be visible, we need to attach it to the root scene node or to a node that is a child or descendant of it. In short, there needs to be a chain of child relations from the root node to the node; otherwise it won't be rendered.

As the name suggests, the root scene node is the root of the scene, so the entire scene will be, in some way, attached to the root scene node. Ogre 3D uses a so-called scene graph to organize the scene. This graph is like a tree: it has one root, the root scene node, and each node can have children. We already used this characteristic when we called mSceneMgr->getRootSceneNode()->addChild(node); there we added the created scene node as a child to the root. Directly afterwards, we added another kind of child to the scene node with node->attachObject(ent). Here, we added an entity to the scene node.

We have two different kinds of objects we can add to a scene node. Firstly, there are other scene nodes, which can be added as children and can have children themselves. Secondly, there are entities that we want rendered. Entities aren't children and can't have children themselves. They are data objects associated with the node and can be thought of as the leaves of the tree. There are a lot of other things we can add to a scene, like lights, particle systems, and so on. We will later learn what these things are and how to use them; right now, we only need entities. Our current scene graph looks like this:

The first thing we need to understand is what a scene graph is and what it does. A scene graph is used to represent how different parts of a scene are related to each other in 3D space.

3D space

Ogre 3D is a 3D rendering engine, so we need to understand some basic 3D concepts. The most basic construct in 3D is a vector, which is represented by an ordered triple (x,y,z). Each position in 3D space can be represented by such a triple using the Euclidean coordinate system for three dimensions. It is important to know that there are different kinds of coordinate systems in 3D space. The only difference between the systems is the orientation of the axes and the positive rotation direction. There are two systems that are widely used, namely, the left-handed and the right-handed versions. In the following image, we see both systems: on the left side, we see the left-handed version; and on the right side, we see the right-handed one.
Source: http://en.wikipedia.org/wiki/File:Cartesian_coordinate_system_handedness.svg

The names left- and right-handed are based on the fact that the orientation of the axes can be reconstructed using the left and right hand. The thumb is the x-axis, the index finger the y-axis, and the middle finger the z-axis. We need to hold our hands so that we have a ninety-degree angle between thumb and index finger and also between middle and index finger. When using the right hand, we get a right-handed coordinate system; when using the left hand, we get the left-handed version.

Ogre uses the right-handed system, but rotates it so that the positive part of the x-axis points right and the negative part of the x-axis points left. The y-axis points up and the z-axis points out of the screen; this is known as the y-up convention. This sounds irritating at first, but we will soon learn to think in this coordinate system. The website http://viz.aset.psu.edu/gho/sem_notes/3d_fundamentals/html/3d_coordinates.html contains a rather good picture-based explanation of the different coordinate systems and how they relate to each other.

Scene graph

A scene graph is one of the most used concepts in graphics programming. Simply put, it's a way to store information about a scene. We already discussed that a scene graph has a root and is organized like a tree, but we didn't touch on the most important function of a scene graph. Each node of a scene graph has a list of its children as well as a transformation in 3D space. The transformation is composed of three aspects, namely, the position, the rotation, and the scale. The position is a triple (x,y,z), which obviously describes the position of the node in the scene. The rotation is stored using a quaternion, a mathematical concept for storing rotations in 3D space, but we can think of rotations as a single floating point value for each axis, describing how the node is rotated, using radians as units. Scaling is quite easy; again, it uses a triple (x,y,z), and each part of the triple is simply the factor to scale the corresponding axis with.

The important thing about a scene graph is that the transformation is relative to the parent of the node. If we modify the orientation of the parent, the children will also be affected by this change. When we move the parent 10 units along the x-axis, all children will also be moved by 10 units along the x-axis. The final orientation of each child is computed using the orientation of all its parents. This fact will become clearer with the next diagram. The position of MyEntity in this scene will be (10,0,0) and MyEntity2 will be at (10,10,20). Let's try this in Ogre 3D.

Pop quiz – finding the position of scene nodes

Look at the following tree and determine the end positions of MyEntity and MyEntity2:

1. MyEntity(60,60,60) and MyEntity2(0,0,0)
2. MyEntity(70,50,60) and MyEntity2(10,-10,0)
3. MyEntity(60,60,60) and MyEntity2(10,10,10)

Setting the position of a scene node

Now, we will try to create the setup of the scene from the diagram before the previous image.

Time for action – setting the position of a scene node

1. Add this new line after the creation of the scene node:

    node->setPosition(10,0,0);

2. To create a second entity, add this line at the end of the createScene() function:

    Ogre::Entity* ent2 = mSceneMgr->createEntity("MyEntity2","Sinbad.mesh");

3. Then create a second scene node:

    Ogre::SceneNode* node2 = mSceneMgr->createSceneNode("Node2");

4. Add the second node to the first one:

    node->addChild(node2);

5. Set the position of the second node:

    node2->setPosition(0,10,20);

6. Attach the second entity to the second node:

    node2->attachObject(ent2);

7. Compile the program and you should see two instances of Sinbad:

What just happened?

We created a scene which matches the preceding diagram. The first new function we used was at step 1. As is easily guessed, the function setPosition(x,y,z) sets the position of the node to the given triple. Keep in mind that this position is relative to the parent. We wanted MyEntity2 to be at (10,10,20); because we added node2, which holds MyEntity2, to a scene node which was already at the position (10,0,0), we only needed to set the position of node2 to (0,10,20). When both positions combine, MyEntity2 ends up at (10,10,20).

Pop quiz – playing with scene nodes

We have the scene node node1 at (0,20,0) and a child scene node node2, which has an entity attached to it. If we want the entity to be rendered at (10,10,10), at which position would we need to set node2?

1. (10,10,10)
2. (10,-10,10)
3. (-10,10,-10)

Have a go hero – adding a Sinbad

Add a third instance of Sinbad and let it be rendered at the position (10,10,30).

Rotating a scene node

We already know how to set the position of a scene node. Now, we will learn how to rotate a scene node and another way to modify the position of a scene node.

Time for action – rotating a scene node

We will use the previous code, but create completely new code for the createScene() function.

1. Remove all code from the createScene() function.

2. First create an instance of Sinbad.mesh and then create a new scene node. Set the position of the scene node to (10,10,0), then attach the entity to the node, and add the node to the root scene node as a child:

    Ogre::Entity* ent = mSceneMgr->createEntity("MyEntity","Sinbad.mesh");
    Ogre::SceneNode* node = mSceneMgr->createSceneNode("Node1");
    node->setPosition(10,10,0);
    mSceneMgr->getRootSceneNode()->addChild(node);
    node->attachObject(ent);

3. Again, create a new instance of the model, along with a new scene node, and set the position to (10,0,0):

    Ogre::Entity* ent2 = mSceneMgr->createEntity("MyEntity2","Sinbad.mesh");
    Ogre::SceneNode* node2 = mSceneMgr->createSceneNode("Node2");
    node->addChild(node2);
    node2->setPosition(10,0,0);

4. Now add the following two lines to rotate the model and attach the entity to the scene node:

    node2->pitch(Ogre::Radian(Ogre::Math::HALF_PI));
    node2->attachObject(ent2);

5. Do the same again, but this time use the function yaw instead of the function pitch, and the translate function instead of the setPosition function:

    Ogre::Entity* ent3 = mSceneMgr->createEntity("MyEntity3","Sinbad.mesh");
    Ogre::SceneNode* node3 = mSceneMgr->createSceneNode("Node3");
    node->addChild(node3);
    node3->translate(20,0,0);
    node3->yaw(Ogre::Degree(90.0f));
    node3->attachObject(ent3);

6. And the same again with roll instead of yaw or pitch:

    Ogre::Entity* ent4 = mSceneMgr->createEntity("MyEntity4","Sinbad.mesh");
    Ogre::SceneNode* node4 = mSceneMgr->createSceneNode("Node4");
    node->addChild(node4);
    node4->setPosition(30,0,0);
    node4->roll(Ogre::Radian(Ogre::Math::HALF_PI));
    node4->attachObject(ent4);

7. Compile and run the program, and you should see the following screenshot:

What just happened?

We repeated the code we had before four times and always changed some small details. The first repeat is nothing special.
It is just the code we had before, and this instance of the model will be our reference model to see what happens to the other three instances we made afterwards.

In step 4, we added the following additional line:

    node2->pitch(Ogre::Radian(Ogre::Math::HALF_PI));

The function pitch(Ogre::Radian(Ogre::Math::HALF_PI)) rotates a scene node around the x-axis. As said before, this function expects a radian as its parameter and we used half of pi, which means a rotation of ninety degrees.

In step 5, we replaced the function call setPosition(x,y,z) with translate(x,y,z). The difference between setPosition(x,y,z) and translate(x,y,z) is that setPosition sets the position—no surprises here. translate, on the other hand, adds the given values to the position of the scene node, so it moves the node relative to its current position. If a scene node has the position (10,20,30) and we call setPosition(30,20,10), the node will then have the position (30,20,10). On the other hand, if we call translate(30,20,10), the node will have the position (40,40,40). It's a small, but important, difference. Both functions are useful in the right circumstances: when we want to place a node at a particular point in the scene, we use the setPosition(x,y,z) function; when we want to move a node that is already positioned in the scene, we use translate(x,y,z).

Also, we replaced pitch(Ogre::Radian(Ogre::Math::HALF_PI)) with yaw(Ogre::Degree(90.0f)). The yaw() function rotates the scene node around the y-axis. Instead of Ogre::Radian(), we used Ogre::Degree(). Of course, pitch and yaw still need a radian to work with; however, Ogre 3D offers the class Degree(), which has a cast operator so the compiler can automatically cast it into a Radian(). Therefore, the programmer is free to use radians or degrees to rotate scene nodes. The mandatory use of these classes makes sure that it's always clear which unit is being used, preventing confusion and possible sources of error.

Step 6 introduces the last of the three different rotate functions a scene node has, namely, roll(). This function rotates the scene node around the z-axis. Again, we could use roll(Ogre::Degree(90.0f)) instead of roll(Ogre::Radian(Ogre::Math::HALF_PI)).

The program, when run, shows a non-rotated model and all three possible rotations. The left model isn't rotated, the model to the right of the left model is rotated around the x-axis, the model to the left of the right model is rotated around the y-axis, and the right model is rotated around the z-axis. Each of these instances shows the effect of a different rotate function. In short, pitch() rotates around the x-axis, yaw() around the y-axis, and roll() around the z-axis. We can use either Ogre::Degree(degree) or Ogre::Radian(radian) to specify how much we want to rotate.

Pop quiz – rotating a scene node

Which are the three functions to rotate a scene node?

1. pitch, yawn, roll
2. pitch, yaw, roll
3. pitching, yaw, roll

Have a go hero – using Ogre::Degree

Remodel the code we wrote for the previous section in such a way that each occurrence of Ogre::Radian is replaced with an Ogre::Degree and vice versa, and the rotation stays the same.

Scaling a scene node

We have already covered two of the three basic operations we can use to manipulate our scene graph. Now it's time for the last one, namely, scaling.

Time for action – scaling a scene node

Once again, we start with the same code block we used before.

1. Remove all code from the createScene() function and insert the following code block:

    Ogre::Entity* ent = mSceneMgr->createEntity("MyEntity","Sinbad.mesh");
    Ogre::SceneNode* node = mSceneMgr->createSceneNode("Node1");
    node->setPosition(10,10,0);
    mSceneMgr->getRootSceneNode()->addChild(node);
    node->attachObject(ent);

2. Again, create a new entity:

    Ogre::Entity* ent2 = mSceneMgr->createEntity("MyEntity2","Sinbad.mesh");

3. Now we use a function that creates the scene node and adds it automatically as a child. Then we do the same thing we did before:

    Ogre::SceneNode* node2 = node->createChildSceneNode("node2");
    node2->setPosition(10,0,0);
    node2->attachObject(ent2);

4. Now, after the setPosition() function, call the following line to scale the model:

    node2->scale(2.0f,2.0f,2.0f);

5. Create a new entity:

    Ogre::Entity* ent3 = mSceneMgr->createEntity("MyEntity3","Sinbad.mesh");

6. Now we call the same function as in step 3, but with an additional parameter:

    Ogre::SceneNode* node3 = node->createChildSceneNode("node3",Ogre::Vector3(20,0,0));

7. After the function call, insert this line to scale the model:

    node3->scale(0.2f,0.2f,0.2f);

8. Compile the program and run it, and you should see the following image:


Photo Compositing with The GIMP: Part 1

Packt
07 Dec 2009
7 min read
Building on my previous GIMP article, Creating Pseudo-3D Imagery with GIMP, you learned how to do some basic selection manipulation, gradient application, faking depth of field, and so on. In line with that, I'm following it up with a new article very much related to the concepts discussed there, but we'll raise the bar a bit by taking a glimpse at compositing, where we'll use an existing image or photograph and later blend our 2-dimensional element seamlessly into that picture. So if you haven't yet read "Creating Pseudo-3D Imagery with GIMP", I highly suggest you do so, since almost all the major concepts we'll tackle here are based on that article. But if you already have an idea of how to do what's implied here, then you're good to go.

If you have been following my advice lately, this might feel cliché to you, but you can't blame me if I say "Always plan what you have to do!", right? There you go, another useful and a tad overused piece of advice. Just to give you an overview, this article will teach you how to:

1. Add 2-dimensional elements to photos or any other image you wish to
2. Apply effects to better enhance the composition
3. Plan out your scenes well

However, this guide doesn't teach you how to pick the right color combination, nor does it help you shoot great photographs. Hopefully though, by the end of your reading, you'll be able to apply the concepts with no hassle and get more comfortable with them each time you do.

Some of you might be a bit daunted by the title alone of this article, especially those of you most inclined toward specialized compositing software, but as much as I would want to make use of those applications, I'm much more comfortable exploring what GIMP is capable of, not only as a simple drawing application but as a minor compositing app as well. The concepts I present here, though, are just basic representations of what compositing actually is, and in this context we'll only be focusing on still images as reference and output throughout this article. If you want to do compositing on a series of images, an animation, or a movie, I highly suggest GIMP's 3D partner, Blender. OK, promotion set aside, let's head back to the topic at hand.

To give you an idea (because I believe, and I'm positive you do too, that pictures speak louder than words), here's what we should have by the end of this article. Yours probably won't match it exactly, but it should be fairly close, and I'll try my best to be as guiding as possible. So let's hop on!

Heart and Sphere Composited with GIMP

Compose, Compose, Compose!

Yup, you read it thrice, I did too, don't worry. So what's the fuss about composing anyway? The answer is pretty straightforward. Just as a song is written through composition, a photo or image works much the same way. Without proper composition, your image will never come to life. By composition, I mean a proper mix of colors, framing, lighting, and so on. This is one of the hardest obstacles any artist or photographer might face. It will either ruin a majestic idea, or it will turn your doodle into a wonderful creation where you can almost hear the melody of your lines' rhythm through your senses (wow, that was almost a mouthful!). Whichever tool you're comfortable using, what matters is that you can easily turn your ideas into something fruitful rather than worrying about how to work your way around the tool.
That's probably one reason I stuck with using GIMP: not only am I confident it can deliver anything I can think of in 2D, but, more importantly, I am comfortable using it, which in my opinion is a very important thing when it comes to design. Just like the way I wrote this article, composition comes into play (or had you already guessed?). Without the drafts and planning I made, I don't believe I could have finished writing even a paragraph of this one.

To start off the process, we'll use a photograph I shot just for this article (in an attempt to recreate the first image I showed you). If you don't want to follow this article exactly, you can grab a sample photo from Google Images or from Stock Exchange (www.sxc.hu); just be sure to credit the owner and respect whatever conditions or licenses the image carries.

Photo to work on

Photo Enhancement

Honestly, the photo we have is already decent enough to work with, but let's try making it better so we won't have to go back and adjust it later on. First, let's open our image and do some primary color correction to it, just in case you're the type who thinks "something has got to be better, always". Go ahead and fire up our tool of choice (GIMP in this case) and open the image (as you can see below).

Opening the image in GIMP

With our photo active in our canvas, on the layer it sits on (the only layer you see in the Layers window by default), right-click on the image, select Colors, then choose Levels. Adjusting the image's color levels is one good way to fix color cast problems and to edit the range of your colors non-destructively (extreme cases excluded). Another great option is the Curves tool, which lets you manipulate your image in much the same way as Levels. But again, for the sake of this tutorial, we'll use the Levels tool since it's easier and faster to work with. You can see a screenshot below of the Levels tool that we'll be using in a while.

Levels Tool

One nifty feature of the Levels tool is the Auto function, which (you guessed it right again!) automates the color adjustment of our image based on GIMP's histogram reading and graph analysis. Often it makes the task easier for you, but it might also ruin your image. Nothing beats your own visual judgment, so if you're not content with what Auto leveling gives you, go ahead and move the sliders you see in the window. Normally, I only adjust the Value channel of the image to correct its overall brightness and contrast without altering the overall color mood of the photo. But if you weren't lucky enough to set the color balance on your camera the moment you shot the photo, or the image in front of you has too much of a color cast, you can freely choose the other color channels (Red, Green, and Blue respectively) from the drop-down menu. You can see screenshots below of how I adjusted the photo we loaded.

Value Level Adjustment

RGB Color Level Choices

That's basically all we need to do to enhance our photo (or you could go ahead and repeat the process a few more times to get the feel you want). If you want a safer way of editing (just in case you run out of undo steps), duplicate the base layer that holds your image and work on the duplicate instead of the original; then you can simply switch its visibility on and off to see the changes you've made so far.


Introduction to Color Theory and Lighting Basics in Blender

Packt
14 Apr 2011
7 min read
Basic color theory

To fully understand how light works, we need a basic understanding of what color is and how different colors interact with each other. The study of this phenomenon is known as color theory.

What is color?

When light comes in contact with an object, the object absorbs a certain amount of that light. The rest is reflected into the eye of the viewer in the form of color. The easiest way to visualize colors and their relationships is in the form of a color wheel.

Primary colors

There are millions of colors, but only three that cannot be created through color mixing: red, yellow, and blue. These are known as primary colors, and they are used to create the other colors on the color wheel through a process known as color mixing. Through color mixing, we get other "sets" of colors, including secondary and tertiary colors.

Secondary colors

Secondary colors are created when two primary colors are mixed together. For example, mixing red and blue makes purple, red and yellow make orange, and blue and yellow make green.

Tertiary colors

It's natural to assume that, because mixing two primary colors creates a secondary color, mixing two secondary colors would create a tertiary color. Surprisingly, this isn't the case. A tertiary color is, in fact, the result of mixing a primary and a secondary color together. This gives us the remainder of the color wheel:

- Red-orange
- Orange-yellow
- Chartreuse
- Turquoise
- Indigo
- Violet-red

Color relationships

There are other relationships between colors that we should know about before we start using Blender. The first is complementary colors. Complementary colors are colors that sit across from each other on the color wheel; for example, red and green are complements. Complementary colors are especially useful for creating contrast in an image, because mixing them together darkens the hue. In a computer program, mixing perfect complements results in black, while mixing complements in a more traditional medium such as oil pastels results in more of a dark brown hue. In both situations, though, the complements are used to create a darker value. Be wary of using complementary colors in computer graphics: if complementary colors mix accidentally, the result is black artifacts in images or animations.

The other color relationship we should be aware of is analogous colors. Analogous colors are colors found next to each other on the color wheel. For example, red, red-orange, and orange are analogous. Here's the kicker: red, orange, and yellow can be analogous as well. A good rule to follow is that as long as the colors don't span more than one primary color on the color wheel, they're most likely considered analogous.

Color temperature

Understanding color temperature is an essential step in understanding how lights work; at the very least, it helps us understand why certain lights emit the colors they do. No light source emits a constant light wavelength. Even the sun, although considered a constant light source, is filtered by the atmosphere to varying degrees based on the time of day, changing its perceived color. Color temperature is typically measured in kelvin (K), and spans a color range from a red to a blue hue, as in the image below:

Real world, real lights

So how is color applicable beyond a two-dimensional color wheel? In the real world, our eyes perceive color because light from the sun, which contains all colors in the visible color spectrum, is reflected off objects in our field of vision.
As light hits an object, some wavelengths are absorbed, while the rest are reflected. Those reflected rays are what determine the color we perceive that particular object to be. Of course, the sun isn't the only source of light we have. There are many different types of natural and artificial light sources, each with its own unique properties. The most common types of light sources we may try to simulate in Blender include:

- Candlelight
- Incandescent light
- Fluorescent light
- Sunlight
- Skylight

Candlelight

Candlelight is a source of light as old as time. It has been used for thousands of years and is still used today in many cases. The color temperature of a candle's light is about 1500 K, giving it a warm red-orange hue. Candlelight also has a tendency to create really high contrast between lit and unlit areas in a room, which makes for a very successful dramatic effect.

Incandescent light bulbs

When most people hear the term "light bulb", the incandescent light bulb immediately comes to mind. It's also known as a tungsten-halogen light bulb. It's your typical household light bulb, burning at approximately 2800 K-3200 K. This color temperature still falls within the orange-yellow part of the spectrum, but it is noticeably brighter than the light of a candle.

Fluorescent light bulbs

Fluorescent lights are an alternative to incandescent bulbs. Also known as mercury vapor lights, fluorescents burn at a color temperature range of 3500 K-5900 K, allowing them to emit a color anywhere between a yellow and a white hue. They're commonly used to light a large area effectively, such as a warehouse, school hallway, or even a conference room.

The sun and the sky

Now let's take a look at some natural sources of light! The most obvious example is the sun. The sun burns at a color temperature of approximately 5500 K, giving it its bright white color. We rarely use pure white as a light's color in 3D, though, because it makes the scene look too artificial. Instead, we may choose a color that best suits the scene at hand. For example, if we are lighting a desert scene, we may choose a beige color to simulate light bouncing off the sand. But even so, this still doesn't produce an entirely realistic effect. This is where the next source of light comes in: the sky. The sky can produce an entire array of colors, from deep purple to orange to bright blue, covering a color temperature range of 6000 K-20,000 K. That's a huge range! We can really use this to our advantage in our 3D scenes; the color of the sky can have the final say in what the mood of your scene ends up being.

Chromatic adaptation

What is chromatic adaptation? We're all more familiar with this process than you may realize. As light changes, the color we perceive from the world around us changes. To accommodate those changes, our eyes adjust what we see to something we're more familiar with (or what our brains would consider normal). When working in 3D you have to keep this in mind, because even though your 3D scene may be physically lit correctly, it may not look natural, since the computer renders the final image objectively, without the chromatic adaptation that we, as humans, are used to. Take this image for example. In the top image, the second card from the left appears to be a stronger shade of pink than the corresponding card in the bottom picture. Believe it or not, they are the exact same color, but because of the red hue of the second photo, our brains change how we perceive that image.


Ogre 3D: Fixed Function Pipeline and Shaders

Packt
25 Nov 2010
13 min read
OGRE 3D 1.7 Beginner's Guide: Create real-time 3D applications using OGRE 3D from scratch

- Easy-to-follow introduction to OGRE 3D
- Create exciting 3D applications using OGRE 3D
- Create your own scenes and monsters, play with the lights and shadows, and learn to use plugins
- Get challenged to be creative and make fun and addictive games on your own
- A hands-on, do-it-yourself approach with over 100 examples

Introduction

The Fixed Function Pipeline is the rendering pipeline on the graphics card that produces those nice shiny pictures we love looking at. As the prefix Fixed suggests, there isn't a lot of freedom for the developer to manipulate the Fixed Function Pipeline. We can tweak some parameters using the material files, but nothing fancy. That's where shaders can help fill the gap. Shaders are small programs that can be loaded onto the graphics card and then function as a part of the rendering process. They can be thought of as little programs written in a C-like language with a small, but powerful, set of functions. With shaders, we can almost completely control how our scene is rendered and also add a lot of new effects that weren't possible with only the Fixed Function Pipeline.

Render Pipeline

To understand shaders, we first need to understand how the rendering process works as a whole. When rendering, each vertex of our model is translated from local space into camera space, and then each triangle gets rasterized. This means the graphics card calculates how to represent the model in an image. These image parts are called fragments. Each fragment is then processed and manipulated. We could apply a specific part of a texture to a fragment to texture our model, or we could simply assign it a color when rendering a model in only one color. After this processing, the graphics card tests whether the fragment is covered by another fragment that is nearer to the camera, or whether it is the fragment nearest to the camera. If it is nearest, the fragment gets displayed on the screen. On newer hardware, this test can occur before the processing of the fragment, which can save a lot of computation time if most of the fragments won't be seen in the end result. The following is a very simplified graph showing the pipeline:

With almost each new graphics card generation, new shader types were introduced. It began with vertex and pixel/fragment shaders. The task of the vertex shader is to transform the vertices into camera space and, if needed, modify them in any way, like when doing animations completely on the GPU. The pixel/fragment shader gets the rasterized fragments and can apply a texture to them or manipulate them in other ways, for example, for lighting models with per-pixel accuracy.

Time for action – our first shader application

Let's write our first vertex and fragment shaders:

1. In our application, we only need to change the material that is used. Change it to MyMaterial13, and also remove the second quad:

    manual->begin("MyMaterial13", RenderOperation::OT_TRIANGLE_LIST);

2. Now we need to create this material in our material file. First, we are going to define the fragment shader.
Ogre 3D needs five pieces of information about the shader:

- The name of the shader
- The language in which it is written
- The source file in which it is stored
- The name of the shader's main function
- The profiles we want the shader to be compiled against

All this information should be in the material file:

    fragment_program MyFragmentShader1 cg
    {
        source Ogre3DBeginnersGuideShaders.cg
        entry_point MyFragmentShader1
        profiles ps_1_1 arbfp1
    }

3. The vertex shader needs the same parameters, but we also have to define a parameter that is passed from Ogre 3D to our shader. This contains the matrix that we will use for transforming our quad into camera space:

    vertex_program MyVertexShader1 cg
    {
        source Ogre3DBeginnersGuideShaders.cg
        entry_point MyVertexShader1
        profiles vs_1_1 arbvp1

        default_params
        {
            param_named_auto worldViewMatrix worldviewproj_matrix
        }
    }

4. The material itself just uses the vertex and fragment shader names to reference them:

    material MyMaterial13
    {
        technique
        {
            pass
            {
                vertex_program_ref MyVertexShader1
                {
                }
                fragment_program_ref MyFragmentShader1
                {
                }
            }
        }
    }

5. Now we need to write the shaders themselves. Create a file named Ogre3DBeginnersGuideShaders.cg in the media/materials/programs folder of your Ogre 3D SDK. Each shader looks like a function. One difference is that we can use the out keyword to mark a parameter as an outgoing parameter instead of the default incoming parameter. The out parameters are used by the rendering pipeline for the next rendering step. The out parameters of a vertex shader are processed and then passed into the pixel shader as in parameters. The out parameter of a pixel shader is used to create the final render result. Remember to use the correct name for the function; otherwise, Ogre 3D won't find it.

6. Let's begin with the fragment shader because it's easier:

    void MyFragmentShader1(out float4 color: COLOR)

7. The fragment shader will return the color blue for every pixel we render:

    {
        color = float4(0,0,1,0);
    }

8. That's all for the fragment shader; now we come to the vertex shader. The vertex shader has three parameters—the position of the vertex, the translated position of the vertex as an out variable, and a uniform variable for the matrix we are using for the translation:

    void MyVertexShader1(
        float4 position : POSITION,
        out float4 oPosition : POSITION,
        uniform float4x4 worldViewMatrix)

9. Inside the shader, we use the matrix and the incoming position to calculate the outgoing position:

    {
        oPosition = mul(worldViewMatrix, position);
    }

10. Compile and run the application. You should see our quad, this time rendered in blue.

What just happened?

Quite a lot happened here; we will start with step 2. Here we defined the fragment shader we are going to use. Ogre 3D needs five pieces of information for a shader. We define a fragment shader with the keyword fragment_program, followed by the name we want the fragment program to have, then a space, and at the end, the language in which the shader will be written. As with ordinary programs, shader code was originally written in assembly; in the early days, programmers had to write shader code in assembly because there was no other language to use. But, as with general programming languages, high-level shader languages soon arrived to ease the pain of writing shader code. At the moment, there are three different languages that shaders can be written in: HLSL, GLSL, and CG. The shader language HLSL is used by DirectX, and GLSL is the language used by OpenGL. CG was developed by NVIDIA in cooperation with Microsoft and is the language we are going to use.
Shaders written in these languages are compiled into their respective assembly code when our application starts up. Shaders written in HLSL can only be used with DirectX, and GLSL shaders only with OpenGL, but CG can compile to both DirectX and OpenGL shader assembly code; that's the reason why we are using it to be truly cross-platform.

That's two of the five pieces of information that Ogre 3D needs. The other three are given in the curly brackets. The syntax is like a property file: first the key and then the value. One key we use is source, followed by the file where the shader is stored. We don't need to give the full path; just the filename will do, because Ogre 3D scans our directories and only needs the filename to find the file. Another key we are using is entry_point, followed by the name of the function we are going to use for the shader. In the code file, we created a function called MyFragmentShader1 and we are giving Ogre 3D this name as the entry point for our fragment shader. This means that each time we need the fragment shader, this function is called. The function has only one parameter, out float4 color : COLOR. The prefix out signals that this parameter is an out parameter, meaning we will write a value into it, which will be used by the render pipeline later on. The type of this parameter is called float4, which is simply an array of four float values. For colors, we can think of it as a tuple (r,g,b,a), where r stands for red, g for green, b for blue, and a for alpha: the typical tuple used to describe colors. After the name of the parameter, we have : COLOR. In CG, this is called a semantic, describing what the parameter is used for in the context of the render pipeline. The :COLOR semantic tells the render pipeline that this is a color. In combination with the out keyword and the fact that this is a fragment shader, the render pipeline can deduce that this is the color we want our fragment to have.

The last piece of information we supply uses the keyword profiles with the values ps_1_1 and arbfp1. To understand this, we need to talk a bit about the history of shaders. With each generation of graphics cards, a new generation of shaders has been introduced. What started as fairly simple C-like languages without even if conditions are now really complex and powerful programming languages. Right now, there are several different shader versions, each with a unique function set, and Ogre 3D needs to know which of these versions we want to use. ps_1_1 means pixel shader version 1.1 and arbfp1 means fragment program version 1. We need both profiles because ps_1_1 is a DirectX-specific function set and arbfp1 is a function subset for OpenGL. We say we are cross-platform, but sometimes we need to define values for both platforms. All profiles can be found at http://www.ogre3d.org/docs/manual/manual_18.html. That's all that is needed to define the fragment shader in our material file.

In step 3, we defined our vertex shader. This part is very similar to the fragment shader definition code; the main difference is the default_params block. This block defines parameters that are given to the shader at runtime. param_named_auto defines a parameter that is automatically passed to the shader by Ogre 3D. After this key, we need to give the parameter a name, and after that, the keyword for the value we want it to have. We name the parameter worldViewMatrix; any other name would also work. The value we want it to have has the key worldviewproj_matrix.
This key tells Ogre 3D that we want our parameter to have the value of the WorldViewProjection matrix. This matrix is used for transforming vertices from local space into camera space. A list of all keyword values can be found at http://www.ogre3d.org/docs/manual/manual_23.html#SEC128. How we use these values will be seen shortly.

Step 4 used the work we did before. As always, we defined our material with one technique and one pass; we didn't define a texture unit but instead used the keyword vertex_program_ref. After this keyword, we need to put the name of a vertex program we defined; in our case, this is MyVertexShader1. If we wanted, we could have put some more parameters into the definition, but we didn't need to, so we just opened and closed the block with curly brackets. The same is true for fragment_program_ref.

Writing a shader

Now that we have defined all the necessary things in our material file, let's look at the shader code itself. Step 6 defines the function head with the parameter we discussed before, so we won't go deeper here. Step 7 defines the function body; for this fragment shader, the body is extremely simple. We create a new float4 tuple (0,0,1,0), which describes the color blue, and assign this color to our out parameter color. The effect is that everything rendered with this material will be blue. There isn't more to the fragment shader, so let's move on to the vertex shader.

Step 8 defines the function header. The vertex shader has three parameters: two are marked as positions using CG semantics, and the other parameter is a 4x4 float matrix named worldViewMatrix. Before its type definition, there is the keyword uniform. Each time our vertex shader is called, it gets a new vertex as the position parameter input, calculates the position of this new vertex, and saves it in the oPosition parameter. This means that with each call, the parameter changes. This isn't true for the worldViewMatrix. The keyword uniform denotes parameters that are constant over one draw call. When we render our quad, the worldViewMatrix doesn't change, while the rest of the parameters are different for each vertex processed by our vertex shader. Of course, in the next frame, the worldViewMatrix will probably have changed.

Step 9 creates the body of the vertex shader. In the body, we multiply the vertex that we got by the matrix to translate the vertex into camera space. This translated vertex is saved in the out parameter to be processed by the rendering pipeline. We will look more closely into the render pipeline after we have experimented with shaders a bit more.

Texturing with shaders

We have painted our quad blue, but we would like to use the previous texture.

Time for action – using textures in shaders

1. Create a new material named MyMaterial14. Also create two new shaders named MyFragmentShader2 and MyVertexShader2. Remember to copy the fragment and vertex program definitions into the material file, and add a texture unit with the rock texture to the material:

    texture_unit
    {
        texture terr_rock6.jpg
    }

2. We need to add two new parameters to our fragment shader. The first is a two-tuple of floats for the texture coordinates; we therefore use the semantic that marks the parameter as the first set of texture coordinates we are using. The other new parameter is of type sampler2D, which is another name for a texture. Because the texture doesn't change on a per-fragment basis, we mark it as uniform.
This keyword indicates that the parameter value comes from outside the CG program and is set by the rendering environment, in our case, by Ogre 3D:

    void MyFragmentShader2(float2 uv : TEXCOORD0,
        out float4 color : COLOR,
        uniform sampler2D texture)

3. In the fragment shader, replace the color assignment with the following line:

    color = tex2D(texture, uv);

4. The vertex shader also needs some new parameters—one float2 for the incoming texture coordinates and one float2 for the outgoing texture coordinates. Both use TEXCOORD0 because one is the incoming and the other the outgoing TEXCOORD0:

    void MyVertexShader2(
        float4 position : POSITION,
        out float4 oPosition : POSITION,
        float2 uv : TEXCOORD0,
        out float2 oUv : TEXCOORD0,
        uniform float4x4 worldViewMatrix)

5. In the body, we calculate the outgoing position of the vertex:

    oPosition = mul(worldViewMatrix, position);

6. For the texture coordinates, we assign the incoming value to the outgoing value:

    oUv = uv;

7. Remember to change the material that is used in the application code, and then compile and run it. You should see the quad with the rock texture.
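The excerpt doesn't repeat the application-side change for this last step, but based on the earlier MyMaterial13 call it is presumably a one-line switch of the material name passed to the manual object. A minimal sketch, assuming the same manual object and quad-building code used earlier:

    // Sketch: point the manual object at the new textured material.
    // Assumes the same manual object and quad code used for MyMaterial13.
    manual->begin("MyMaterial14", RenderOperation::OT_TRIANGLE_LIST);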


HTML5 Game Development – A Ball-shooting Machine with Physics Engine

Packt
07 Jul 2014
12 min read
Mission briefing

In this article, we focus on the physics engine. We will build a basketball court where the player needs to shoot the ball into the hoop. A player shoots the ball by keeping the mouse button pressed and then releasing it. The direction is visualized by an arrow, and the power is proportional to the duration of the mouse press-and-hold. There are obstacles between the ball and the hoop. The player either avoids the obstacles or makes use of them to put the ball into the hoop. Finally, we use CreateJS to draw the physics world onto the canvas.

You may visit http://makzan.net/html5-games/ball-shooting-machine/ to play a dummy game in order to get a better understanding of what we will be building throughout this article. The following screenshot shows a player shooting the ball towards the hoop, with a power indicator:

Why is it awesome?

When we build games without a physics engine, we create our own game loop and reposition each game object in every frame. For instance, if we move a character to the right, we manage the position and movement speed ourselves. Imagine that we are coding ball-throwing logic now. We need to keep track of several variables. We have to calculate the x and y velocity based on the time and force applied. We also need to take gravity into account, not to mention the different angles and materials we need to consider while calculating the bounce between two objects.

Now, let's think of a physics world. We just define how objects interact, and all the collisions happen automatically. It is similar to a real-world game: we focus on defining the rules, and the world handles everything else. Take basketball as an example. We define the height of the hoop, the size of the ball, and the distance of the three-point line. Then, the players just need to throw the ball. We never worry about the flight parabola or the bounce off the board; space itself takes care of them by applying the laws of physics.

This is exactly what happens in a simulated physics world; it allows us to apply physics properties to game objects. The objects are affected by gravity, and we can apply forces to them, making them collide with each other. With the help of the physics engine, we can focus on defining the game-play rules and the relationships between the objects. Without the need to worry about collision and movement, we save time to explore different game plays, and we can then elaborate and develop the most promising prototypes further, as we like. We define the position of the hoop and the ball. Then, we apply an impulse force to the ball in the x and y dimensions. The engine handles everything in between. Finally, we get an event trigger if the ball passes through the hoop. It is worth noting that some blockbuster games are also made with a physics engine, including Angry Birds, Cut the Rope, and Where's My Water?.

Your Hotshot objectives

We will divide the article into the following eight tasks:

1. Creating the simulated physics world
2. Shooting a ball
3. Handling collision detection
4. Defining levels
5. Launching a bar with power
6. Adding a cross obstacle
7. Visualizing graphics
8. Choosing a level

Mission checklist

We create a project folder that contains the index.html file and the scripts and styles folders. Inside the scripts folder, we create three files: physics.js, view.js, and game.js. The physics.js file is the most important file in this article.
It contains all the logic related to the physics world, including creating level objects, spawning dynamic balls, applying force to the objects, and handling collisions. The view.js file is a helper for the view logic, including the scoreboard and the ball-shooting indicator. The game.js file, as usual, is the entry point of the game. It also manages the levels and coordinates between the physics world and the view.

Preparing the vendor files

We also need a vendors folder that holds the third-party libraries. This includes the CreateJS suite (EaselJS, MovieClip, TweenJS, PreloadJS) and Box2D. Box2D is the physics engine that we are going to use in this article. We need to download the engine code from https://code.google.com/p/box2dweb/. It is a port from ActionScript to JavaScript. We need the Box2dWeb-2.1.a.3.min.js file, or its nonminified version for debugging. We put this file in the vendors folder.

Box2D is an open source physics-simulation engine that was created by Erin Catto. It was originally written in C++. Later, it was ported to ActionScript because of the popularity of Flash games, and then it was ported to JavaScript. There are different versions of these ports. The one we are using is called Box2DWeb, which was ported from the ActionScript version of Box2D 2.1. Using an older version may cause issues. Also, it will be difficult to find help online because most developers have switched to 2.1.

Creating a simulated physics world

Our first task is to create a simulated physics world and put two objects inside it.

Prepare for lift off

In the index.html file, the core part is the game section. We have two canvas elements in this game. The debug-canvas element is for the Box2D engine and canvas is for the CreateJS library:

<section id="game" class="row">
  <canvas id="debug-canvas" width="480" height="360"></canvas>
  <canvas id="canvas" width="480" height="360"></canvas>
</section>

We prepare a dedicated file for all the physics-related logic. We prepare the physics.js file with the following code:

;(function(game, cjs, b2d){
  // code here later
}).call(this, game, createjs, Box2D);

Engage thrusters

The following steps create the physics world as the foundation of the game:

The Box2D classes are put in different modules. We will need to reference some common classes as we go along. We use the following code to create an alias for these Box2D classes:

// alias
var b2Vec2 = Box2D.Common.Math.b2Vec2
  , b2AABB = Box2D.Collision.b2AABB
  , b2BodyDef = Box2D.Dynamics.b2BodyDef
  , b2Body = Box2D.Dynamics.b2Body
  , b2FixtureDef = Box2D.Dynamics.b2FixtureDef
  , b2Fixture = Box2D.Dynamics.b2Fixture
  , b2World = Box2D.Dynamics.b2World
  , b2MassData = Box2D.Collision.Shapes.b2MassData
  , b2PolygonShape = Box2D.Collision.Shapes.b2PolygonShape
  , b2CircleShape = Box2D.Collision.Shapes.b2CircleShape
  , b2DebugDraw = Box2D.Dynamics.b2DebugDraw
  , b2MouseJointDef = Box2D.Dynamics.Joints.b2MouseJointDef
  , b2RevoluteJointDef = Box2D.Dynamics.Joints.b2RevoluteJointDef
  ;

We prepare a variable that states how many pixels define 1 meter in the physics world. We also define a Boolean that determines whether we need to render the debug draw:

var pxPerMeter = 30; // 30 pixels = 1 meter. Box2D uses meters and we use pixels.
var shouldDrawDebug = false;

All the physics methods will be put into the game.physics object.
We create this literal object before we code our logic:

var physics = game.physics = {};

The first method in the physics object creates the world:

physics.createWorld = function() {
  var gravity = new b2Vec2(0, 9.8);
  this.world = new b2World(gravity, /*allow sleep= */ true);

  // create two temporary bodies
  var bodyDef = new b2BodyDef;
  var fixDef = new b2FixtureDef;

  bodyDef.type = b2Body.b2_staticBody;
  bodyDef.position.x = 100/pxPerMeter;
  bodyDef.position.y = 100/pxPerMeter;
  fixDef.shape = new b2PolygonShape();
  fixDef.shape.SetAsBox(20/pxPerMeter, 20/pxPerMeter);
  this.world.CreateBody(bodyDef).CreateFixture(fixDef);

  bodyDef.type = b2Body.b2_dynamicBody;
  bodyDef.position.x = 200/pxPerMeter;
  bodyDef.position.y = 100/pxPerMeter;
  this.world.CreateBody(bodyDef).CreateFixture(fixDef);
  // end of temporary code
}

The update method is the game loop's tick event for the physics engine. It calculates the world step and refreshes the debug draw. The world step advances the physics world. We'll discuss it later:

physics.update = function() {
  this.world.Step(1/60, 10, 10);
  if (shouldDrawDebug) {
    this.world.DrawDebugData();
  }
  this.world.ClearForces();
};

Before we can refresh the debug draw, we need to set it up. We pass a canvas reference to the Box2D debug draw instance and configure the drawing settings:

physics.showDebugDraw = function() {
  shouldDrawDebug = true;

  // set up debug draw
  var debugDraw = new b2DebugDraw();
  debugDraw.SetSprite(document.getElementById("debug-canvas").getContext("2d"));
  debugDraw.SetDrawScale(pxPerMeter);
  debugDraw.SetFillAlpha(0.3);
  debugDraw.SetLineThickness(1.0);
  debugDraw.SetFlags(b2DebugDraw.e_shapeBit | b2DebugDraw.e_jointBit);
  this.world.SetDebugDraw(debugDraw);
};

Let's move to the game.js file. We define the game-starting logic that sets up the EaselJS stage and Ticker. It creates the world and sets up the debug draw. The tick method calls the physics.update method:

;(function(game, cjs){
  game.start = function() {
    cjs.EventDispatcher.initialize(game); // allow the game object to listen for and dispatch custom events.
    game.canvas = document.getElementById('canvas');
    game.stage = new cjs.Stage(game.canvas);
    cjs.Ticker.setFPS(60);
    cjs.Ticker.addEventListener('tick', game.stage); // adding game.stage to the ticker makes stage.update get called automatically.
    cjs.Ticker.addEventListener('tick', game.tick); // gameloop
    game.physics.createWorld();
    game.physics.showDebugDraw();
  };

  game.tick = function(){
    if (cjs.Ticker.getPaused()) { return; } // run when not paused
    game.physics.update();
  };

  game.start();
}).call(this, game, createjs);

After these steps, we should have a result as shown in the following screenshot. It is a physics world with two bodies. One body stays in position and the other one falls to the bottom.

Objective complete – mini debriefing

We have defined our first physics world with one static object and one dynamic object that falls to the bottom. A static object is an object that is not affected by gravity or any other forces. On the other hand, a dynamic object is affected by all the forces.

Defining gravity

In reality, we have gravity on every planet. It's the same in the Box2D world. We need to define gravity for the world. This is a ball-shooting game, so we will follow the rules of gravity on Earth. We use 0 for the x-axis and 9.8 for the y-axis. It is worth noting that we do not need to use the 9.8 value.
For instance, we can set a smaller gravity value to simulate other planets in space—maybe even the moon; or, we can set the gravity to zero to create a top-down view of the ice hockey game, where we apply force to the puck and benefit from the collision. Debug draw The physics engine focuses purely on the mathematical calculation. It doesn't care about how the world will be presented finally, but it does provide a visual method in order to make the debugging easier. This debug draw is very useful before we use our graphics to represent the world. We won't use the debug draw in production. Actually, we can decide how we want to visualize this physics world. We have learned two ways to visualize the game. The first way is by using the DOM objects and the second one is by using the canvas drawing method. We will visualize the world with our graphics in later tasks. Understanding body definition and fixture definition In order to define objects in the physics world, we need two definitions: a body definition and fixture definition. The body is in charge of the physical properties, such as its position in the world, taking and applying force, moving speed, and the angular speed when rotating. We use fixtures to handle the shape of the object. The fixture definition also defines the properties on how the object interacts with others while colliding, such as friction and restitution. Defining shapes Shapes are defined in a fixture. The two most common shapes in Box2D are rectangle and circle. We define a rectangle with the SetAsBox function by providing half of its width and height. Also, the circle shape is defined by the radius. It is worth noting that the position of the body is at the center of the shape. It is different from EaselJS in that the default origin point is set at the top-left corner. Pixels per meter When we define the dimension and location of the body, we use meter as a unit. That's because Box2D uses metric for calculation to make the physics behavior realistic. But we usually calculate in pixels on the screen. So, we need to convert between pixels on the screen and meters in the physics world. That's why we need the pxPerMeter variable here. The value of this variable might change from project to project. The update method In the game tick, we update the physics world. The first thing we need to do is take the world to the next step. Box2D calculates objects based on steps. It is the same as we see in the physical world when a second is passed. If a ball is falling, at any fixed time, the ball is static with the property of the falling velocity. In the next millisecond, or nanosecond, the ball falls to a new position. This is exactly how steps work in the Box2D world. In every single step, the objects are static with their physics properties. When we go a step further, Box2D takes the properties into consideration and applies them to the objects. This step takes three arguments. The first argument is the time passed since the last step. Normally, it follows the frame-per-second parameter that we set for the game. The second and the third arguments are the iteration of velocity and position. This is the maximum iterations Box2D tries when resolving a collision. Usually, we set them to a low value. The reason we clear the force is because the force will be applied indefinitely if we do not clear it. That means the object keeps receiving the force on each frame until we clear it. Normally, clearing forces on every frame will make the objects more manageable. 
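Before moving on, here is a small sketch, not taken from the article, that ties together the shape, fixture, and pixels-per-meter ideas above: it spawns a circle-shaped dynamic body for the ball using the same aliases and conversion variable. The method name createBall, the radius, and the material values are illustrative assumptions, not names defined by this project.

physics.createBall = function(xPx, yPx) {
  // Illustrative sketch only; the createBall name and the values are assumptions.
  var ballRadiusPx = 10;

  var bodyDef = new b2BodyDef();
  bodyDef.type = b2Body.b2_dynamicBody;
  // Convert screen pixels into meters, as Box2D works in meters.
  bodyDef.position.x = xPx / pxPerMeter;
  bodyDef.position.y = yPx / pxPerMeter;

  var fixDef = new b2FixtureDef();
  // A circle shape is defined by its radius (in meters); the body position
  // is the center of the shape.
  fixDef.shape = new b2CircleShape(ballRadiusPx / pxPerMeter);
  fixDef.density = 1.0;     // contributes to the body's mass
  fixDef.friction = 0.5;    // rubbing against other fixtures
  fixDef.restitution = 0.6; // bounciness on collision

  var body = this.world.CreateBody(bodyDef);
  body.CreateFixture(fixDef);
  return body;
};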
Classified intel

We often need to represent a 2D vector in the physics world. Box2D uses b2Vec2 for this purpose. Like b2Vec2, quite a lot of the Box2D functions and classes we use are modularized into namespaces, which is why we alias the most common classes to keep our code shorter.
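To make the b2Vec2 idea concrete, here is a rough sketch of how an impulse could be applied to shoot the ball. This only illustrates the later "Shooting a ball" task, it is not the article's implementation, and the shootBall name and its parameters are assumptions.

physics.shootBall = function(ballBody, power, angleRadians) {
  // Illustrative sketch only. Build the impulse vector from the chosen
  // power and direction; the y component is negative because the y-axis
  // points down in this world, so a negative y pushes the ball upwards.
  var impulse = new b2Vec2(power * Math.cos(angleRadians),
                           -power * Math.sin(angleRadians));
  // Apply the impulse at the body's center of mass so the ball does not spin.
  ballBody.ApplyImpulse(impulse, ballBody.GetWorldCenter());
};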

Customizing an Avatar in Flash Multiplayer Virtual Worlds

Packt
27 Aug 2010
5 min read
(For more resources on Flash, see here.) Customizing your avatar A Flash virtual world is a social community in which players interact with each other and have their own identity. Virtual world usually lets a user decide the avatar's appearance by choosing the combination of different styles and colors. Customizing different styles Each part of the avatar will have different styles and shapes to form different combinations of the appearance of the avatar. Thanks to the timeline and movie clip features in Flash, we can put different styles of each part within the movie clip. For example, the following screenshot shows the head movie clip with different head styles placed frame by frame and we can use gotoAndStop to display the style we want. Customizing the color ActionScript supports changing the color transform for a given movie clip. It supports not only color tint but also applying color filter and detailed RGB transformation. We will use the simple color tint to change the color of the avatar. As the color transform is applying to the whole movie clip, we cannot simply tint the avatar movie clip because that will make the whole avatar tint to one solid color. In order to tint a partial part of the movie clip, we specifically create a movie clip in each part and name it color_area. We later program the ActionScript to change all movie clip names with color_area to the customized color. Adding customization to avatar class We are going to change the style and color by ActionScript in avatar class. We need to import the ColorTransform class in flash.geom package to change the color with ActionScript. import flash.geom.ColorTransform; We need several instance variables to hold the styles and color state. public const totalStyles:Number = 3;public var currentColor:Number = 0x704F4C;public var currentStyle:Number = 1; We wrap the whole block of color transform code into one function. The color transform adds RGB color transformation to the target movie clip. We only use colorTransform to tint the color here but it also supports percentage transform that adds partial color to the target movie clip. We will apply the color transform to the color area inside the head of the avatar in 4 directions. public function changeColor(newColor:Number = 0x000000):void { currentColor = newColor; for each(var avatar:MovieClip in _directionArray){ var avatarColor:ColorTransform = new ColorTransform(); avatarColor.color = newColor; avatar.head.color_area.transform.colorTransform = avatarColor; } } We modified the color by using color transform and used timeline to style the avatar style. Every frame in the head movie clip represents a style with its color tint area. We display the new style by changing the current frame of the avatar movie clip. It is also necessary to change the color again after switching the style because every style contains its own color area. public function changeStyle(styleNumber:int):void { for each(var avatar:MovieClip in _directionArray){ /* display the giving style in all parts of avatar*/ avatar.head.gotoAndStop(styleNumber); avatar.body.gotoAndStop(styleNumber); avatar.lefthand.gotoAndStop(styleNumber); avatar.righthand.gotoAndStop(styleNumber); /* need to apply the color again after changing the style */ var avatarColor:ColorTransform = new ColorTransform(); avatarColor.color = currentColor; avatar.head.color_area.transform.colorTransform = avatarColor; } currentStyle = styleNumber; } The purpose of the avatar class is to control the appearance of the avatar. 
We just implemented the direction, color, and style switching methods and it is now ready for customization panel to use. Designing a customization panel Avatars in virtual worlds and games often provide players with different kinds of customization. Some games allow users to customize the whole body with lots of options while some games may only provide two to three basic customizations. The layout design of the customization panel is often based on the number of options. There are two common customization panel layouts in the market. One layout displays arrows for a user to select next and previous styles. The other one displays a thumbnail view of the options within the same category. The arrows selection layout is suitable for an avatar that contains limited parts for customization. There may be only two to four categories and not many options in each category. Players can easily loop through different style combinations and choose their favorite one using this layout. The following avatar customization screenshot from the 2D Online RPG called Dragon Fable uses the arrows selection layout: The thumbnail view layout is suitable for avatars that can be highly customized. There are often many categories to customize and each category provides a lot of options for players to choose. Some virtual worlds even provide micro modification so that players can adjust details on the chosen style such as the distance between the eyes. Players do not need to iterate the large amount of styles and can quickly choose a style option among them with the thumbnail view. The following screenshot is an online Mii editor. Mii is the avatar system in the Nintendo Wii console. This is an online clone of the Mii avatar customization. It allows a large amount of avatar customization by the thumbnail view layout with extended features such as scaling and moving the elements.

Creating Games with Cocos2d-x is Easy and 100 percent Free

Packt
01 Apr 2015
5 min read
In this article by Raydelto Hernandez, the author of the book Building Android games with Cocos2d-x, we will talk about the Cocos2d-x game engine, which is widely used to create Android games. The launch of the Apple App Store back in 2008 leveraged the reach capacity of indie game developers who since its occurrence are able to reach millions of users and compete with large companies, outperforming them in some situations. This reality led the trend of creating reusable game engines, such as Cocos2d-iPhone, which is written natively using Objective-C by the Argentine iPhone developer, Ricardo Quesada. Cocos2d-iPhone allowed many independent developers to reach the top charts of downloads. (For more resources related to this topic, see here.) Picking an existing game engine is a smart choice for indies and large companies since it allows them to focus on the game logic rather than rewriting core features over and over again. Thus, there are many game engines out there with all kinds of licenses and characteristics. The most popular game engines for mobile systems right now are Unity, Marmalade, and Cocos2d-x; the three of them have the capabilities to create 2D and 3D games. Determining which one is the best in terms of ease of use and availability of tools may be debatable, but there is one objective fact, which we can mention that could be easily verified. Among these three engines, Cocos2d-x is the only one that you can use for free no matter how much money you make using it. We highlighted in this article's title that Cocos2d-x is completely free. This was emphasized because the other two frameworks also allow some free usage; nevertheless, both of these at some point require a payment for the usage license. In order to understand why Cocos2d-x is still free and open source, we need to understand how this tool was born. Ricardo, an enthusiastic Python programmer, often participated in game creation challenges that required participants to develop games from scratch within a week. Back in those days, Ricardo and his team rewrote the core engine for each game until they came up with the idea of creating a framework to encapsulate core game capabilities. These capabilities could be used on any two-dimensional game to make it open source, so contributions could be received worldwide. This is why Cocos2d was originally written for fun. With the launch of the first iPhone in 2007, Ricardo led the development of the port of the Cocos2d Python framework to the iPhone platform using its native language, Objective-C. Cocos2d-iPhone quickly became popular among indie game developers, some of them turning into Appillionaires, as Chris Stevens called these individuals and enterprises that made millions of dollars during the App Store bubble period. This phenomenon made game development companies look at this framework created by hobbyists as a tool to create their products. Zynga was one of the first big companies to adopt Cocos2d as their framework to deliver their famous Farmville game to iPhone in 2009. This company has been trading on NASDAQ since 2011 and has more than 2,000 employees. In July 2010, a C++ port of the Cocos2d iPhone called Cocos2d-x, was written in China with the objective of taking the power of this framework to other platforms, such as the Android operating system, which by that time was gaining market share at a spectacular rate. 
In 2011, this Cocos2d port was acquired by Chukong Technologies, the third largest mobile game development company in China, who later hired the original Cocos2d-IPhone author to join their team. Today, Cocos2d-x-based games dominate the top grossing charts of Google Play and the App Store, especially in Asia. Recognized companies and leading studios, such as Konami, Zynga, Bandai Namco, Wooga, Disney Mobile, and Square Enix are using Cocos2d-x in their games. Currently, there are 400,000 developers working on adding new functionalities and making this framework as stable as possible. These include engineers from Google, ARM, Intel, BlackBerry, and Microsoft who officially support the ports of their products, such as Windows Phone, Windows, Windows Metro Interface, and they're planning to support Cocos2d-x for the Xbox in this year. Cocos2d-x is a very straightforward engine that requires a little learning to grasp it. I teach game development courses at many universities using this framework; during the first week, the students are capable of creating a game with the complexity of the famous title Doodle Jump. This can be easily achieved because the framework provides us all the single components that are required for our game, such as physics, audio handling, collision detection, animation, networking, data storage, user input, map rendering, scene transitions, 3D rendering, particle systems rendering, font handling, menu creation, displaying forms, threads handling, and so on. This abstracts us from the low-level logic and allows us to focus on the game logic. Summary In conclusion, if you are willing to learn how to develop games for mobile platforms, I strongly recommend you to learn and use the Cocos2d-x framework because it is easy to use, is totally free, is an open source. This means that you can better understand it by reading its source, you could modify it if needed, and you have the warranty that you will never be forced to pay a license fee if your game becomes a hit. Another big advantage of this framework is its highly available documentation, including the Packt Publishing collection of Cocos2d-x game development books. Resources for Article: Further resources on this subject: Moving the Space Pod Using Touch [article] Why should I make cross-platform games? [article] Animations in Cocos2d-x [article]

What is Lumion?

Packt
20 Dec 2013
9 min read
(For more resources related to this topic, see here.)

Why use Lumion?

The short answer is that Lumion is easy to use and the final product is of a good quality. The long answer is that every construction project needs technical drawings and documents. Although this technical information is fine for a construction crew, usually the client has no idea what a CAD plan means. They can have an idea of where the kitchen or the living room will be, but translating that 2D information to 3D is not always easy in the client's mind. This can be an issue if we need to give a presentation or if we are trying to sell something that is not built yet. And truth be told, an image sells more than words. That's where Lumion comes in. Lumion is the fastest way to render high quality still pictures and videos, and it makes it easy to import our 3D models from any 3D modeling software, such as SketchUp, AutoCAD, Revit, ArchiCAD, and 3ds Max, and create a scene in minutes. So, Lumion 3D is a distinct architectural visualization software not only because it is faster to render, but also because it is very user friendly and intuitive. Another reason why we can use Lumion to create architectural visualizations is that we can get a good idea of how our project will look in natural surroundings at any time of the day or season, and this in just a few minutes. Now, if you are an architect, you doubtless want to enhance your project's characteristics in the best way possible. Lumion can help you achieve this in hours instead of the inevitable days and weeks of rendering time. The following screenshot is an example of what you can get with Lumion in just a few minutes:

However, this tool is not exclusively meant for architects. For example, if you are an interior designer, you may want to present how the textures, furniture, and colors would look at different angles, places, moods, and light conditions. Lumion provides nice interior visualization with furniture, good lighting, and realistic textures. In conclusion, Lumion is a great tool that improves the process of creating a building, a piece of art, or an architectural project. The time we need to get those results is less in comparison to other solutions such as using 3ds Max and V-Ray.

What can we get from Lumion?

Asking what Lumion can give us is a double-edged question. Looking at the previous screenshots, we can get an idea of the final result. The final quality depends only on your creativity and time. I have seen amazing videos created with Lumion, but you may need a touch of other software to create eye-catching compositions. Now, the package that we get with Lumion is another thing. You already know that we can easily create beautiful still images and videos, but we need to bear in mind that Lumion is not a tool designed to create photo-realistic renders. Nevertheless, we can get so much from this application that you will forget about photo-realistic renders. Lumion is a powerful and advanced 3D engine that allows us to work with heavier models, and we can make our scene come alive with the click-and-drag technique.
To do this, Lumion comes with a massive library where we can find: 2409 models of trees, transports, exterior and interior models, and people 103 ambient, people, and machine sounds 518 types of materials and textures 28 types of landscape In addition to this extensive collection, there are more features that we can add; we can include realistic water in our scene (oceans, rivers, pools, waterfalls, and fountains), we can sculpt the landscape to adapt to our needs, and we can add rain, snow, fog, wind, and animate objects, and we can add camera effects. You just need a blank 3D model; import and start working because, as you can see, Lumion is well equipped with almost everything we need to create architectural visualisations. Lumion's 3D interface Now that we know what we can do with Lumion and the content available, we will take some time to explore Lumion and get our hands dirty. In my opinion and experience, it is much easier to learn something if at the same time we apply what we are learning. So, in the next section we are going to explore the Lumion interface with the menus and different settings. But to do that we will use a small tutorial as a quick start. By doing this, we will explore Lumion and at the same time see how easy it is to produce great results. We will see that Lumion is easier to learn and more accessible than other software. So go ahead and fire up Lumion and let's have a quick tour before we start working with it. I am going to explain to you what each tab does, to help you see how you can do simple tasks, such as saving and loading a scene, changing the settings, and creating a new scene. Let's start with the first tab that Lumion shows us. A look into the New tab On startup, Lumion goes straight to the New tab. The New tab, as the name indicates, is a good place to start when you want to create a new scene. We can create a new scene based on a few presets or just create an empty scene. We can choose from Night, Sunset, Sunny Day, Flatlands, Island, Lake, Desert, Cold Climate, and an Empty scene. I found these presets as a quick help to cut some time, because in the end everything we get from these presets, we can create in a few minutes. So, there is nothing special about them. When you start Lumion, this will be the first thing you will see: The nine presets you can find on the New tab Now, we will finally see the Objects menu and the following is what this menu looks like: The Objects menu Here is where the fun starts. We have at our disposal eight categories of objects and more, such as Nature, Transport, Sound, Effects, Indoor, People and Animals, Outdoor, and Lights and special objects. Each of these menus has subcategories. If you are working with the Lumion PRO version, you can choose from more than 2000 models. Even if you don't have this version, cheer up! You can still import your own models and textures. It is really simple to add a model. First, we need to select the category we want to use. So in this case, click on the Nature button. Now that we have this category selected, click on the thumbnail above the Place button and a new window with the Nature library will appear. We don't have just trees, we have grass, plants, flowers, rocks, and clusters. Now let me show you one trick. Click on the Grass tab and select Rough_Grass1_RT. Now that we are back to the Build mode, press the Ctrl key and click on the ground. We are randomly adding 10 copies of the object, which in this case is really handy. 
So, after playing a little with Lumion, we can get something like the following screenshot: Our scene after adding some trees, grass, and animals Just think, it took me about 30 minutes to create something like this. Now imagine what you can do. Let's save our scene and turn our attention to the right-hand side of the Lumion 3D interface, where we can find the menus as shown in the following screenshot: The Build mode button Starting from the top of the preceding screenshot we can see the blue rectangle with a question mark. If we put our mouse cursor over this rectangle, we can see a quick help guideline for our interface. The next button informs us that we are in the Build mode and if, for example, you are working in the Photo or Video mode, this button lets you go back to the scene. Lumion materials Lumion helps us with this important step by offering many types of materials: 518 materials that are ready to use. You may need to do some adjustments, but the major hard work was already done for you. The materials that we can assign to our model are as follows: Wood: 45 materials Wood floor : 67 materials Brick: 32 materials Tiles: 99 materials Ground: 39 materials Concrete: 43 materials Carpet: 20 materials Misc: 109 materials Asphalt: 12 materials Metal: 47 materials This is one of the reasons why it is so easy to create still images and videos with Lumion. Everything is set up for you, including parameters, details, and textures. However, we may need to do some minor adjustments, and for that, it is important to understand or at least have a basic notion of what each setting does. So, let's have a quick look at how we can configure materials in Lumion. The Landscape material The best way to explain this material is by showing you an example. So, let's say that along with the model, you also created a terrain like the one you can see in the following screenshot: The house along with a terrain model Import the terrain if needed, and add a new material to this terrain. Go to the Custom menu and click on the Landscape material. The Landscape material allows you to seamlessly blend or merge parts of the model with the landscape. Make sure that the terrain intersects with the ground so that it can be perfectly blended. The following screenshot shows this Landscape material applied to my 3D terrain: The terrain model merged with the landscape Adding this material not only allows you to use this cool feature, but as you can see in the picture, we can also start painting soil types of the landscape in the imported terrain. The other two materials that I want to introduce you to are the Standard and Water materials. The Standard material is a simple material without any textures or settings, and we can use this material to start something from scratch. The Water material can have several applications, but perhaps, the most common one is, for example, pools. Summary This article helped you in starting with Lumion, and gave you a taste of how easily and quickly you can get great images and videos. In particular, you have learned the basic steps to save and load scenes, import models, add materials, change the terrain and weather, and create photos and videos. You also learned how to use and configure the prebuilt materials in Lumion and found out how to use the Landscape material to create a terrain. Resources for Article: Further resources on this subject: The Spotfire Architecture Overview [Article] augmentedTi: The application architecture [Article] The architecture of JavaScriptMVC [Article]

Designing an Avatar in Flash Multiplayer Virtual Worlds

Packt
27 Aug 2010
9 min read
(For more resources on Flash, see here.) Designing an avatar Avatar is very important in a virtual world because most of the features are designed around avatars. Users interact with each other via their avatars, they explore the virtual world via avatars, and they complete challenges to level up their avatars. An avatar is composited by graphics and animation. The avatar graphics are its looks. It is not a static image but a collection of images to display the directions and appearance. There are different approaches of drawing the avatar graphics depending on the render methods and how many directions and animations the avatar needs. Animations represent different actions of the avatar. The most basic animation is walking. Other animations such as hand waving and throwing objects are also common. There will be different animation sets for different virtual world designs. A fighting topic virtual world will probably contain a collection of fighting animation sets. A hunting topic virtual world will contain animations of collection items and using hunting tools. Determining the direction numbers of avatars' views Isometric tile is composed by diamond shapes with four-edge connection to the other tiles. It is not hard to imagine that every avatar in the isometric view may face towards four directions. They are the north east, south east, south west, and north west. However, sometimes using only these four directions may not be enough; some game designs may require the avatar to face the user or walk to the other isometric tile a cross the diamond corner. In this case, eight directions are required. The direction number of the avatars affects the artwork drawing directly. Just imagine that we are building a virtual world where players can fight with each other. How many animations are there for an avatar to fight? Say, five sets of animations. How many directions can the avatar faces? 4? 8? Or even 12? For example, we are now talking about five sets of animations with 8 directions of each avatar. That's already 40 animations for only one avatar. We may design the virtual world to have 12 kinds of avatars and each avatar to have different clothes for customization. The graphics workload keeps increasing when only one of these aspects increases. That's why I often consider different approaches that reduce the graphic workload of the avatars. Take four directions as an example. In most cases, we have very similar animations when the avatar is facing south-east and south-west. And the animation of north-east and north-west are similar too. Therefore, it is a common technique that mirrors the animation of west side into east side. It can be easily done in Flash by just changing the x-axis of the scaling property to between -1 and 1. This property results in the avatar flipping from one side to another side. For a 4-directions animation set, only 2 directions need to be drawn. In an 8-directions animation set, only 5 directions need to be drawn. Next, we will discuss the rendering methods that will conclude how the amount of directions, animations, and customization affect the graphic workload. Rendering avatars in Flash virtual world There are different approaches to render avatars in Flash virtual world. Each rendered method comes with both advantages and disadvantages. Some methods take more time to draw with fancy outlook while others may take more time to program. It is important to decide which rendering methods of the avatar are required in predevelopment stage. 
It will be much more difficult to change the rendering method after the project is in development. We will discuss different rendering methods and the pros and cons of them. Drawing an avatar using vector animation It is convenient to use the Flash native vector drawing for avatar because every drawing can be done within the Flash. The output can be cute and cartoon style. One advantage of using vector is that color customization is easy to implement by using the native ActionScript color transform. We can easily assign different colors to different parts of the avatar without extra graphic drawing. Another advantage of using vector animation is that we can scale up and down the avatars whenever needed. It is useful when we need to zoom in or out of the map and the avatars in the virtual world. The following graph shows the comparison of scaling up a vector and bitmap graphic: The disadvantage is that we need to draw the animation of every part of the avatar in every direction frame by frame. Flash tweening can help but the workload is heavier than other methods. We can prerender the animations or control them by ActionScript in methods discussed later. In vector animation, every animation is hand-drawn and thus any late modification on the avatar design can cost quite a lot of workload. There may not be too many directions of the avatars meaning the rotation of the avatars will not be very smooth. Rendering avatars using bitmap sprite sheet Sprite sheet is a graphics technique that is used in almost all game platforms. Sprite sheet is a large bitmap file that contains every frame of animation. Bitmap data from each frame is masked and rendered to the screen. A Flash developer may think that there is a timeline with frame one on the top left and counting the frame from left to right in each row from top to bottom. This technique is useful when the avatar graphic designer has experience in other game platforms. Another advantage of using bitmap data is faster rendering than vector in Flash player. The other advantage is the sprite sheet can be rendered from 3D software. For example, we can make an avatar model in Maya (http://autodesk.com/maya) or 3Ds Max (http://autodesk.com/3dsmax) with animations set up. Then we set up eight cameras with orthographic perspective. The orthographic perspective ensures the rendered image fits the isometric world. After setting up the scene, just render the whole animation with eight different cameras and we will get all the bitmap files of the avatar. The benefit is that the rendering process is automatic so that we can reduce the workload a lot. Later if we want to modify the character, we only need to modify it in the 3D software and render it again. One big disadvantage of using sprite sheet is the file size. The sprite sheets are in bitmap format and one set of animation can cost up to several hundred kilobytes. The file size can be very large when there are many animations and many more bitmaps for switching styles of the avatar. The other disadvantage is that changing color is quite difficult. Unlike vector rendering where color replacement can be done by ActionScript, we need to replace another bitmap data to change the color. That means every available color doubles the file size. Rendering avatars using real-time 3D engine We described how to use 3D software to prerender graphics of the avatars in the previous section. 
Instead of prerendering the graphics into 2D bitmap, we can integrate a Flash 3D engine to render the 3D model into isometric view in real time. Real-time 3D rendering is the next trend of Flash. There are several 3D engines available in the market that support rendering complex 3D models with animations. Papervision3D (http://blog.papervision3d.org/) and Away3D (http://away3d.com/) are two examples among them. The advantage of using 3D rendering in isometric is that the rotation of avatars can be very smooth. Also different textures can share the same model and different models can share the same animation skeleton. Thanks to this great graphic reusability, 3D rendering virtual world can create different combinations of avatar appearance and animations without adding extra graphic workload in development. However, one disadvantage of using 3D rendering is the Flash player performance. The latest version of Flash player is 10.1 at the time of writing. The following screenshots show that the CPU resources usage is very high when rendering the isometric 3D environment with three avatars on screen: Rendering avatars using 2D bone skeleton Bone skeleton used to be an uncommon method to render avatar. What it does is creates an animated skeleton and then glues different parts of body together onto the skeleton. It is somehow similar to the skeleton and mesh relationship in 3-D software but in two dimensions instead. A lot of mathematics is needed to calculate the position and rotation of each part of the body and make the implementation difficult. Thanks to the introduction of bone tool and inverse kinematics in Flash CS4, this technique is becoming more mature and easier to be used in the Flash world. Adobe has posted a tutorial about using bone tool to create a 2D character (http://www.adobe.com/devnet/flash/articles/character_animation_ik.html). The following screenshot shows another bone skeleton example from gotoAndPlay demonstrating how to glue the parts into a walking animation. The post can be found in this link: http://www.gotoandplay.it/_articles/2007/04/skeletal_animation.php The advantage of using 2D bone skeleton is that animations are controlled by ActionScript. The reusing of animations means this technique fits those game designs that require many animations. A dancing virtual world that requires a lot of different unique animations is one example that may need this technique. One disadvantage is that the large amount of mathematic calculation for the animations makes it difficult to implement. Every rendering methods has its own advantages and disadvantages and not one of the methods fits all type of games. It is the game designer's job to decide a suitable rendering method for a game or virtual world project. Therefore, it is important to know their limitations and consider thoughtfully before getting started with development. We can take a look at how other Flash virtual worlds render avatars by checking the showcase of the SmartFoxServer (http://www.smartfoxserver.com/showcase/).
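The sprite-sheet bookkeeping described above, with frame one at the top left and frames counted left to right, row by row, is the same in any environment. As a rough illustration outside of Flash, the following JavaScript sketch computes the source rectangle for a given frame index and draws it with the HTML5 canvas API; the function name, image, and frame dimensions are assumptions made for the example.

// Illustrative sketch of sprite-sheet frame lookup (not from the article).
// sheetImage holds frames of frameW x frameH pixels, laid out left to
// right and top to bottom.
function drawFrame(ctx, sheetImage, frameIndex, frameW, frameH, destX, destY) {
  var columns = Math.floor(sheetImage.width / frameW);
  var sx = (frameIndex % columns) * frameW;           // column within the sheet
  var sy = Math.floor(frameIndex / columns) * frameH; // row within the sheet
  ctx.drawImage(sheetImage, sx, sy, frameW, frameH, destX, destY, frameW, frameH);
}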

WebGL: Animating a 3D scene

Packt
20 Jun 2012
10 min read
We will discuss the following topics:

Global versus local transformations
Matrix stacks and using them to perform animation
Using JavaScript timers to do time-based animation
Parametric curves

Global transformation allows us to create two different kinds of cameras. Once we have applied the camera transform to all the objects in the scene, each one of them could update its position; representing, for instance, targets that are moving in a first-person shooting game, or the position of other competitors in a car racing game. This can be achieved by modifying the current Model-View transform for each object. However, if we modified the Model-View matrix, how could we make sure that these modifications do not affect other objects? After all, we only have one Model-View matrix, right? The solution to this dilemma is to use matrix stacks.

Matrix stacks

A matrix stack provides a way to apply local transforms to individual objects in our scene while keeping the global transform (camera transform) coherent for all of them. Let's see how it works. Each rendering cycle (each call to the draw function) requires calculating the scene matrices to react to camera movements. We are going to update the Model-View matrix for each object in our scene before passing the matrices to the shading program (as attributes). We do this in three steps as follows:

1. Once the global Model-View matrix (camera transform) has been calculated, we proceed to save it in a stack. This step will allow us to recover the original matrix once we have applied any local transforms.
2. Calculate an updated Model-View matrix for each object in the scene. This update consists of multiplying the original Model-View matrix by a matrix that represents the rotation, translation, and/or scaling of each object in the scene. The updated Model-View matrix is passed to the program and the respective object then appears in the location indicated by its local transform.
3. We recover the original matrix from the stack and then we repeat steps 1 to 3 for the next object that needs to be rendered.

The following diagram shows this three-step procedure for one object:

Animating a 3D scene

To animate a scene is nothing else than applying the appropriate local transformations to objects in it. For instance, if we have a cone and a sphere and we want to move them, each one of them will have a corresponding local transformation that will describe its location, orientation, and scale. In the previous section, we saw that matrix stacks allow recovering the original Model-View transform so we can apply the correct local transform for the next object to be rendered. Knowing how to move objects with local transforms and matrix stacks, the question that needs to be addressed is: When? If we calculated the position that we want to give to the cone and the sphere of our example every time we called the draw function, this would imply that the animation rate would be dependent on how fast our rendering cycle goes. A slower rendering cycle would produce choppy animations and a too fast rendering cycle would create the illusion of objects jumping from one side to the other without smooth transitions. Therefore, it is important to make the animation independent from the rendering cycle. There are a couple of JavaScript elements that we can use to achieve this goal: the requestAnimFrame function and JavaScript timers.

requestAnimFrame function

The window.requestAnimFrame() function is currently being implemented in HTML5-WebGL enabled Internet browsers.
This function is designed such that it calls the rendering function (whatever function we indicate) in a safe way only when the browser/tab window is in focus. Otherwise, there is no call. This saves precious CPU, GPU, and memory resources. Using the requestAnimFrame function, we can obtain a rendering cycle that goes as fast as the hardware allows and, at the same time, is automatically suspended whenever the window is out of focus. If we used requestAnimFrame to implement our rendering cycle, we could then use a JavaScript timer that fires periodically, calculating the elapsed time and updating the animation time accordingly. However, the function is a feature that is still in development. To check on the status of the requestAnimFrame function, please refer to the following URL: Mozilla Developer Network

JavaScript timers

We can use two JavaScript timers to isolate the rendering rate from the animation rate. The rendering rate is controlled by the class WebGLApp. This class invokes the draw function, defined in our page, periodically using a JavaScript timer. Unlike the requestAnimFrame function, JavaScript timers keep running in the background even when the page is not in focus. This is not optimal for your computer, given that you are allocating resources to a scene that you are not even looking at. To mimic some of the intelligent behavior that requestAnimFrame provides for this purpose, we can use the onblur and onfocus events of the JavaScript window object. Let's see what we can do; for each action, we list the goal and the method:

Pause the rendering
Goal: To stop the rendering until the window is in focus.
Method: Clear the timer by calling clearInterval in the window.onblur function.

Slow the rendering
Goal: To reduce resource consumption, but make sure that the 3D scene keeps evolving even if we are not looking at it.
Method: Clear the current timer by calling clearInterval in the window.onblur function and create a new timer with a more relaxed interval (higher value).

Resume the rendering
Goal: To activate the 3D scene at full speed when the browser window recovers its focus.
Method: Start a new timer with the original render rate in the window.onfocus function.

By reducing the JavaScript timer rate or clearing the timer, we can handle hardware resources more efficiently. In WebGLApp you can see how the onblur and onfocus events have been used to control the rendering timer as described previously.

Timing strategies

In this section, we will create the second JavaScript timer that will allow us to control the animation. As previously mentioned, a second JavaScript timer provides independence between how fast your computer can render frames and how fast we want the animation to go. We have called this property the animation rate. However, before moving forward you should know that there is a caveat when working with timers: JavaScript is not a multi-threaded language. This means that if several asynchronous events occur at the same time (blocking events), the browser will queue them for later execution. Each browser has a different mechanism to deal with blocking event queues. There are two blocking event-handling alternatives for the purpose of developing an animation timer.

Animation strategy

The first alternative is to calculate the elapsed time inside the timer callback.
The pseudo-code looks like the following:

var initialTime = undefined;
var elapsedTime = undefined;
var animationRate = 30; // 30 ms

function animate(deltaT){
  // calculate object positions based on deltaT
}

function onFrame(){
  elapsedTime = (new Date).getTime() - initialTime;
  if (elapsedTime < animationRate) return; // come back later
  animate(elapsedTime);
  initialTime = (new Date).getTime();
}

function startAnimation(){
  initialTime = (new Date).getTime();
  setInterval(onFrame, animationRate);
}

Doing so, we can guarantee that the animation time is independent from how often the timer callback is actually executed. If there are big delays (due to other blocking events) this method can result in dropped frames. This means the objects' positions in our scene will be immediately moved to the current position that they should be in according to the elapsed time (between consecutive animation timer callbacks), and the intermediate positions are ignored. The motion on screen may jump, but often a dropped animation frame is an acceptable loss in a real-time application, for instance, when we move one object from point A to point B over a given period of time. However, if we were using this strategy when shooting a target in a 3D shooting game, we could quickly run into problems. Imagine that you shoot a target and then there is a delay; next thing you know, the target is no longer there! Notice that in this case, where we need to calculate a collision, we cannot afford to miss frames, because the collision could occur in any of the frames that we would otherwise drop without analyzing. The following strategy solves that problem.

Simulation strategy

There are several applications, such as the shooting game example, where we need all the intermediate frames to assure the integrity of the outcome; for example, when working with collision detection, physics simulations, or artificial intelligence for games. In this case, we need to update the objects' positions at a constant rate. We do so by directly calculating the next position for the objects inside the timer callback:

var animationRate = 30; // 30 ms
var deltaPosition = 0.1;

function animate(deltaP){
  // calculate object positions based on deltaP
}

function onFrame(){
  animate(deltaPosition);
}

function startAnimation(){
  setInterval(onFrame, animationRate);
}

This may lead to frozen frames when there is a long list of blocking events, because the objects' positions would not be updated in time.

Combined approach: animation and simulation

Generally speaking, browsers are really efficient at handling blocking events and in most cases the performance would be similar regardless of the chosen strategy. Deciding whether to calculate the elapsed time or the next position in timer callbacks will then depend on your particular application. Nonetheless, there are some cases where it is desirable to combine both animation and simulation strategies. We can create a timer callback that calculates the elapsed time and updates the animation as many times as required per frame. The pseudocode looks like the following:

var initialTime = undefined;
var elapsedTime = undefined;
var animationRate = 30; // 30 ms
var deltaPosition = 0.1;

function animate(delta){
  // calculate object positions based on delta
}

function onFrame(){
  elapsedTime = (new Date).getTime() - initialTime;
  if (elapsedTime < animationRate) return; // come back later!
  var steps = Math.floor(elapsedTime / animationRate);
  while(steps > 0){
    animate(deltaPosition);
    steps -= 1;
  }
  initialTime = (new Date).getTime();
}

function startAnimation(){
  initialTime = (new Date).getTime();
  setInterval(onFrame, animationRate);
}

You can see from the preceding code snippet that the animation will always update at a fixed rate, no matter how much time elapses between frames. If the app is running at 60 Hz, the animation will update once every other frame; if the app runs at 30 Hz, the animation will update once per frame; and if the app runs at 15 Hz, the animation will update twice per frame. The key is that by always moving the animation forward a fixed amount it is far more stable and deterministic. The following diagram shows the responsibilities of each function in the call stack for the combined approach:

This approach can cause issues if, for whatever reason, an animation step actually takes longer to compute than the fixed step, but if that is occurring, you really ought to simplify your animation code or put out a recommended minimum system spec for your application.

Web Workers: Real multithreading in JavaScript

You may want to know that if performance is really critical to you and you need to ensure that a particular update loop always fires at a consistent rate, then you could use Web Workers. Web Workers is an API that allows web applications to spawn background processes running scripts in parallel to their main page. This allows for thread-like operation with message-passing as the coordination mechanism. You can find the Web Workers specification at the following URL: W3C Web Workers
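As a rough sketch of that idea, and not code from the book, a dedicated worker could drive a fixed-rate update loop and post its results back to the page; the file name simulation-worker.js, the message format, and the helper functions are assumptions.

// Main page: spawn the worker and receive simulation updates.
var worker = new Worker('simulation-worker.js'); // assumed file name
worker.onmessage = function(event) {
  // event.data carries whatever state the worker computed.
  applySimulationState(event.data); // assumed application callback
};

// simulation-worker.js: run the update loop at a fixed rate,
// independently of the page's rendering loop.
var animationRate = 30; // ms
setInterval(function() {
  var state = stepSimulation(); // assumed simulation function
  postMessage(state);           // send the new state to the main page
}, animationRate);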

Creating Convincing Images with Blender Internal Renderer-part1

Packt
20 Oct 2009
9 min read
Throughout the years that have passed since the emergence of Computer Graphics, many aspiring artists tried convincingly to recreate the real world through works of applied art, some of which include oil painting, charcoal painting, matte painting, and even the most basic ones like pastel and/or crayon drawings has already made it through the artistic universe of realism. Albeit the fact that recreating the real world is like reinventing the wheel (which some artists might argue with), it is not an easy task to involve yourself into. It takes a lot of practice, perseverance, and personality to make it through.  But one lesson I have learned from the art world is to consciously and subconsciously observe the world around you. Pay attention to details. Observe how a plant behaves at different environmental conditions, look how a paper's texture is changed when wet, or probably observe how water in a river distorts the underlying objects. These are just some of the things that you can observe around you, and there are a million more or even an infinite number of things you can observe in your lifetime. In the advent of 3D as part of the numerous studies involved in Computer Graphics, a lot of effort has been made into developing tools and applications that emulate real-world environment.  It has become an unstated norm that the more realistic looking an image is, the greater the impact it has on viewers.  That, in turn, is partly true, but the real essence into creating stunning images is to know how it would look beautiful amidst the criteria that are present.  It is not a general requirement that all your images must look hyper realistic, you just have to know and judge how it would look good, after all that's what CG is all about.  And believe it or not, cheating the eye is an essential tool of the trade. In 3D rendering context, there are a number of ways on how to achieve realism in your scenes, but intuitively, the use of external renderers and advanced raytracers does help a lot in the setup and makes the creation process a bit lighter as compared to manually setting up lights, shaders, etc.  But that comes at a rendering time tradeoff.  Unfortunately though, I won't be taking you to the steps on how to setup your scenes for use in external renderers, but instead I'll walk you through the steps on how to achieve slightly similar effects as to that of externals with the use of the native renderer or the internal renderer as some might call it. Hopefully in this short article, I can describe to you numerous ways on how to achieve good-looking and realistic images with some nifty tools, workarounds from within Blender and use the Blender Internal Renderer to achieve these effects. So, let's all get a cup of tea, a comfortable couch, and hop in! On a nutshell, what makes an image look real? Shading, Materials, Shadows, Textures, Transparency, Reflection, Refraction, Highlights, Contrast, Color Balance, DoF, Lens Effects, Geometry (bevels), Subtlety, Environment, Staging, Composite Nodes, Story.. Before Anything Else... Beyond anything that will be discussed here, nothing beats a properly planned and well-imagined scene.  I cannot stress enough how important it is to begin everything with deep and careful planning.  Be it just a ball on a table or a flying scaled bear with a head of a tarsier and legs that of a mouse (?), it is very vital to plan beforehand.  
Believe me, once you've planned everything right, you're almost done with your work (something I didn't believe myself until I gave it a try). And of course, with your own touch of artistic flavor, a simple scene could turn out to be the grandest one history has ever seen. This article does not teach you how to model the subjects for your scene, nor does it detail the concepts behind lighting (which is an article of its own and probably beyond my knowledge), nor does it teach you "the way" to do things. Instead, it leads you through a process that will help you better understand your scene and the concepts behind it. I will also lead you through a series of steps using the same scene we set up at the beginning, and hopefully, by the end of the day, we will have achieved something that brings together what has been discussed here so far.

I have blabbered too much already, haven't I? Okay, on to the real thing. Before you begin the following steps, it is a must (it really, really is) to grab your copy of Blender over at http://www.blender.org/download/get-blender/. The version I used for this tutorial is 2.49a (which should be the latest one being offered at Blender.org as of this writing).

Scene Setup
As with every historical and memorable piece, setting something up in your scene is a vital part of your 3D journey. I couldn't imagine a 3D artist handing in a work with a blank scene; hyper-minimal, I might say. To start off, fire up Blender (or your favorite 3D app, for that matter) and get your scene ready with your models, objects, subjects, or whatever you call them; just get them into the scene so we have something to look at for now. In the images below (finally, a graphic one!), you can see a sample scene I've set up and a quick render of that scene. The first image shows my scene with the model, two spheres, a plane, a lamp, and a camera. The second image shows the rendered version.

You'll notice that the image looks dull and lifeless; that is because it lacks the visual elements necessary for a convincing scene. The current setup is all at its defaults, with the objects having no material data other than the premade ones assigned by Blender, and the lamp settings left as they were by default.

Shading and Materials
To address these issues, we first need to identify what needs correcting. The first thing we might want to do is add some initial materials to our objects, so we can clearly distinguish their roles in the scene and add some life to the somewhat dry set we have here. To do so, select one object at a time and add a material. Let's first select the main character of the scene (or any subject you wish, for that matter) by clicking RMB (right mouse button) on the character object; then, under the Buttons Window, select Shading (F5), click the Material buttons tab, and click "Add New" to add a new material to our object.

Adding a New Material

After doing so, more options will show up, and this is where the real fun begins. The only thing we'll be doing for now is adding some basic color and shading to our objects, just so we can deviate from the standard gray default. You'll notice in the image below that I've edited quite a few options. That's all we want for now; let's leave the other settings as they are and get back to them as soon as we need to.
Character Initial Material Settings

Big Sphere Initial Material Settings

Small Sphere Initial Material Settings

Ground Initial Material Settings

If we do a test render now, here's how it looks:

Render With Colors

Still not so convincing, but we have managed to add some variety to our scene compared to the initial render. Looking at this latest render, you'll notice that the character and the two spheres still seem to be floating in space, with no interaction whatsoever with the ground plane below them. Another issue is the lack of diffuse color on some parts of the objects, which leaves them pitch black; that doesn't look good at all, since we're trying to achieve a well-lit, natural environment as much as possible.

A quick and easy solution to this issue is to enable Ambient Occlusion under the World settings. This tells Blender to create a fake global illumination effect, as though you had added a bunch of lights to create the occlusion. The result is similar to adding a dome of spot lights, each with a low energy level, creating a subtle AO effect. You could build such a light dome by hand, but for the purposes of this article we'll settle for Ambient Occlusion, since it is faster to set up and eliminates the need for further tweaking.

We access the AO (Ambient Occlusion) settings via the World buttons tab under the Shading (F5) menu, then the Amb Occ subtab. Activate Ambient Occlusion, then click Use Falloff and change the default strength of 1.00 to 0.70; doing so creates further diffusion in the darker areas that are hidden from the occlusion process. Next, click Pixel Cache. I don't know exactly what it does technically, but what I know from experience is that it speeds up the occlusion calculation.

Ambient Occlusion Settings

Below, you can see the effect of AO applied to the scene. Notice the subtle diffusion of color and shadows, and the interaction between the objects and the ground plane created by the occlusion. So far we've only used a single lamp as a fill light, but later on we'll add further light sources to create a better effect.

Render with Ambient Occlusion

Whew, we've been busy, haven't we? So far we have created a scene and a rendered image that gives us a better view of what it's going to look like. Next stop, we'll create a base light setup to add shadows and better-looking diffusion. On we go!


Ordering the Buildings in Flash Virtual Worlds

Packt
16 Aug 2010
2 min read
(For more resources on Flash, see here.)

Ordering the buildings
The buildings are not well placed on the map. They overlap with each other in a very strange way. That is because we are viewing the 3D isometric world on a 2D screen with the wrong ordering. When we view a scene in 3D perspective, closer objects should block the view of the objects behind them. The buildings in the preceding image do not obey this real-world rule, which causes the strange overlapping. We are going to solve this problem in the next section.

Ordering the movie clips in Flash
In Flash, every movie clip has its own depth. The depth is called the z-order or z-index of the movie clip. A movie clip with a bigger z-order number is higher and covers movie clips with a lower z-order when they overlap. By swapping their z-order, we can rearrange how the movie clips overlap and create the correct ordering of the isometric buildings.

Determining an object's location and view
According to our tile-based isometric map, an object located at larger x and y coordinates is in front of an object located at smaller x and y coordinates. We can thus compare the isometric x and y coordinates to determine which object is in front. There is a special case when all the buildings occupy square footprints. In this situation, the z-order of the movie clips can be determined simply by comparing their y positions.
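The article's own code is ActionScript, but the ordering rule itself is language-agnostic. Below is a minimal illustrative sketch of the idea written in C++ (the Building structure and its fields are hypothetical, not taken from the article): buildings are sorted by the sum of their isometric coordinates, so the object furthest back receives the lowest z-order and the nearest one ends up drawn on top.

  #include <algorithm>
  #include <cstdio>
  #include <vector>

  // Hypothetical record of a building and its isometric tile coordinates.
  struct Building
  {
      const char* name;
      int isoX;
      int isoY;
  };

  int main()
  {
      std::vector<Building> buildings = { { "tower", 3, 1 },
                                          { "house", 1, 2 },
                                          { "barn",  2, 4 } };

      // A smaller isoX + isoY means the building is further away, so it is
      // drawn first (lowest z-order); larger sums end up covering it.
      std::sort( buildings.begin(), buildings.end(),
                 []( const Building& a, const Building& b )
                 { return a.isoX + a.isoY < b.isoX + b.isoY; } );

      for ( std::size_t depth = 0; depth < buildings.size(); ++depth )
      {
          std::printf( "z-order %zu: %s\n", depth, buildings[depth].name );
      }
      return 0;
  }

In Flash, the same comparison would decide which of two overlapping movie clips should have its z-order swapped upwards.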

Sparrow iOS Game Framework - The Basics of Our Game

Packt
25 Jun 2014
10 min read
(For more resources related to this topic, see here.)

Taking care of cross-device compatibility
When developing an iOS game, we need to know which devices to target. Besides the obvious technical differences between the iOS devices, there are two factors we need to actively take care of: screen size and the texture size limit. Let's take a closer look at how to deal with both.

Understanding the texture size limit
Every graphics card has a limit on the maximum texture size it can display. If a texture is bigger than the texture size limit, it can't be loaded and will appear black on the screen. The texture size limit is a square with power-of-two dimensions, such as 1024 x 1024 or 2048 x 2048 pixels. The textures we load don't need to have power-of-two dimensions, and they don't have to be square either; however, it is best practice for a texture to have power-of-two dimensions. The limit holds for big images as well as for a bunch of small images packed into one big image, commonly referred to as a sprite sheet. Take a look at the following sample sprite sheet to see how it's structured:

How to deal with different screen sizes
While the screen size is always measured in pixels, the iOS coordinate system is measured in points. The screen size of an iPhone 3GS is 320 x 480 pixels and also 320 x 480 points. On an iPhone 4, the screen size is 640 x 960 pixels, but it is still 320 x 480 points. So, in this case, each point represents four pixels: two in width and two in height. A 100-point wide rectangle will be 200 pixels wide on an iPhone 4 and 100 pixels wide on an iPhone 3GS. It works similarly for devices with larger displays, such as the iPhone 5: instead of 480 points, its screen is 568 points tall.

Scaling the viewport
Let's explain the term viewport first: the viewport is the visible portion of the complete screen area. We need to be clear about which devices we want our game to run on. We take the biggest resolution we want to support and scale it down to smaller resolutions. This is the easiest option, but it might not lead to the best results; touch areas and the user interface scale down as well. Apple recommends that touch areas be at least 40 points square, so depending on the user interface, some elements might get scaled down so much that they become harder to touch. Take a look at the following screenshot, where we choose the iPad Retina resolution (2048 x 1536 pixels) as our biggest resolution and scale down all display objects on the screen for the iPad resolution (1024 x 768 pixels):

Scaling is a popular option for non-iOS environments, especially for PC and Mac games that support resolutions from 1024 x 600 pixels up to full HD. Sparrow and the iOS SDK provide some mechanisms that facilitate handling Retina and non-Retina iPad devices without the need to scale the whole viewport.

Black borders
Some games in the past were designed for a 4:3 display but then made to run on a widescreen device with more screen space. The options were either to scale the 4:3 image to widescreen, which distorts the whole screen, or to put black borders on either side of the screen to maintain the original aspect ratio. Showing black borders is now considered bad practice, especially when there are so many games out there that scale quite well across different screen sizes and platforms.
Showing non-interactive screen space
If our pirate game is multiplayer, we may have one player on an iPad and another on an iPhone 5. The player with the iPad has a bigger screen and more screen space to maneuver their ship. The worst case would be if the iPad player could move their ship outside the visual range of the iPhone player, which would be a serious advantage for the iPad player. Luckily for us, we don't require competitive multiplayer functionality. Still, we need to keep a consistent screen space for players to move their ship in, for game balance purposes; we wouldn't want to tie the difficulty level to the device someone is playing on. Let's compare the previous screenshot to the black border example. Instead of the ugly black borders, we just show more of the background. In some cases, it's also possible to move some user interface elements to the areas that are not visible on other devices. However, we need to consider whether we want to keep the same user experience across devices and whether moving these elements will result in a disadvantage for users who don't have this extra screen space on their devices.

Rearranging screen elements
Rearranging screen elements is probably the most time-intensive and sophisticated way of solving this issue. In this example, we have a big user interface at the top of the screen in portrait mode. Now, if we were to leave it like this in landscape mode, the top of the screen would be just the user interface, leaving very little room for the game itself. In this case, we have to be deliberate about which elements we need to see on the screen and which elements are using up too much screen estate. Screen real estate (or screen estate) is the amount of space available on a display for an application or a game to provide output. We then have to reposition them, cut them up into smaller pieces, or both. The most prominent example of this technique is Candy Crush (a popular trending game) by King. While this concept applies particularly to device rotation, that does not mean it can't be used for universal applications.

Choosing the best option
None of these options are mutually exclusive. For our purposes, we are going to show non-interactive screen space, and if things get complicated, we might also resort to rearranging screen elements depending on our needs.

Differences between various devices
Let's take a look at the differences in screen size and texture size limit between the different iOS devices:

Device | Screen size (in pixels) | Texture size limit (in pixels)
iPhone 3GS | 480 x 320 | 2048 x 2048
iPhone 4 (including iPhone 4S) and iPod Touch 4th generation | 960 x 640 | 2048 x 2048
iPhone 5 (including iPhone 5C and iPhone 5S) and iPod Touch 5th generation | 1136 x 640 | 2048 x 2048
iPad 2 | 1024 x 768 | 2048 x 2048
iPad (3rd and 4th generations) and iPad Air | 2048 x 1536 | 4096 x 4096
iPad Mini | 1024 x 768 | 4096 x 4096

Utilizing the iOS SDK
Both the iOS SDK and Sparrow can aid us in creating a universal application. Universal application is the term for apps that target more than one device, especially apps that target both the iPhone and iPad device families. The iOS SDK provides a handy mechanism for loading files for specific devices. Let's say we are developing an iPhone application and we have an image called my_amazing_image.png. If we load this image on our devices, it will get loaded, no questions asked.
However, if it's not a universal application, we can only scale the application using the regular scale button on iPad and iPhone Retina devices. This button appears at the bottom-right of the screen. If we want to target iPad, we have two options:

The first option is to load the image as is. The device will scale the image. Depending on the image quality, the scaled image may look bad. In this case, we also need to consider that the device's CPU will do all the scaling work, which might result in some slowdown depending on the app's complexity.

The second option is to add an extra image for iPad devices. This one will use the ~ipad suffix, for example, my_amazing_image~ipad.png. When loading the required image, we still use the filename my_amazing_image.png. The iOS SDK will automatically detect the different sizes of the image supplied and use the correct one for the device.

Beginning with Xcode 5 and iOS 7, it is possible to use asset catalogs. Asset catalogs can contain a variety of images grouped into image sets, and image sets contain all the images for the targeted devices. Asset catalogs don't require files with suffixes any more, but they can only be used for splash images and application icons; we can't use asset catalogs for textures we load with Sparrow.

The following table shows which suffix is needed for which device:

Device | Retina | File suffix
iPhone 3GS | No | None
iPhone 4 (including iPhone 4S) and iPod Touch (4th generation) | Yes | @2x, @2x~iphone
iPhone 5 (including iPhone 5C and iPhone 5S) and iPod Touch (5th generation) | Yes | -568h@2x
iPad 2 | No | ~ipad
iPad (3rd and 4th generations) and iPad Air | Yes | @2x~ipad
iPad Mini | No | ~ipad

How does this affect the graphics we wish to display? Say the non-Retina image is 128 pixels in width and 128 pixels in height; the Retina image, the one with the @2x suffix, will then be exactly double the size of the non-Retina image, that is, 256 pixels in width and 256 pixels in height.

Retina and iPad support in Sparrow
Sparrow supports all the filename suffixes shown in the previous table, and there is a special case for iPad devices, which we will take a closer look at now. When we take a look at AppDelegate.m in our game's source, note the following line:

[_viewController startWithRoot:[Game class] supportHighResolutions:YES doubleOnPad:YES];

The first parameter, supportHighResolutions, tells the application to load Retina images (with the @2x suffix) if they are available. The doubleOnPad parameter is the interesting one: if this is set to true, Sparrow will use the @2x images for iPad devices, so we don't need to create a separate set of images for iPad but can use the Retina iPhone images for the iPad application. In this case, the width and height are 512 and 384 points respectively. If we are targeting iPad Retina devices, Sparrow introduces the @4x suffix, which requires larger images and leaves the coordinate system at 512 x 384 points.

App icons and splash images
If we are talking about images of different sizes for the actual game content, app icons and splash images are also required in different sizes. Splash images (also referred to as launch images) are the images that show up while the application loads. The iOS naming scheme applies to these images as well, so for Retina iPhone devices such as iPhone 4, we will name the image Default@2x.png, and for iPhone 5 devices, we will name it Default-568h@2x.png.
For the correct size of app icons, take a look at the following table:

Device | Retina | App icon size
iPhone 3GS | No | 57 x 57 pixels
iPhone 4 (including iPhone 4S) and iPod Touch 4th generation | Yes | 120 x 120 pixels
iPhone 5 (including iPhone 5C and iPhone 5S) and iPod Touch 5th generation | Yes | 120 x 120 pixels
iPad 2 | No | 76 x 76 pixels
iPad (3rd and 4th generation) and iPad Air | Yes | 152 x 152 pixels
iPad Mini | No | 76 x 76 pixels

The bottom line
The more devices we want to support, the more graphics we need, which directly increases the application file size, of course. Adding iPad support to our application is not a simple task, but Sparrow does some groundwork. One thing we should keep in mind though: if we are only targeting iOS 7.0 and higher, we don't need to include non-Retina iPhone images any more. Using @2x and @4x will be enough in this case, as support for non-Retina devices will soon end.

Summary
This article deals with setting up our game to work on iPhone, iPod Touch, and iPad in the same manner.

Resources for Article:
Further resources on this subject:
Mobile Game Design [article]
Bootstrap 3.0 is Mobile First [article]
New iPad Features in iOS 6 [article]


Organizing a Virtual Filesystem

Packt
23 Nov 2013
13 min read
(For more resources related to this topic, see here.)

Files are the building blocks of any computer system. This article deals with portable handling of read-only application resources, and provides recipes to store application data. Let us briefly consider the problems covered in this article. The first one is access to application data files. Application data for desktop operating systems often resides in the same folder as the executable file. With Android, things get a little more complicated: the application files are packaged in the .apk file, and we simply cannot use the standard fopen()-like functions, or the std::ifstream and std::ofstream classes. The second problem results from the different rules for filenames and paths. Windows and Linux-based systems use different path separator characters and provide different low-level file access APIs. The third problem comes from the fact that file I/O operations can easily become the slowest part of the whole application, and user experience suffers if interaction lags are involved. To avoid delays, we should perform the I/O on a separate thread and handle the results of the read operation on yet another thread. To implement this, we have all the tools required. We start with abstract I/O interfaces, implement a portable .zip archive handling approach, and proceed to asynchronous resource loading.

Abstracting file streams
File I/O APIs differ slightly between Windows and Android (POSIX) operating systems, and we have to hide these differences behind a consistent set of C++ interfaces.

Getting ready
Please make sure you are familiar with the UNIX concepts of files and memory mapping. Wikipedia may be a good start (http://en.wikipedia.org/wiki/Memory-mapped_file).

How to do it...
From now on, our programs will read input data using the following simple interface. The base class iObject is used to add an intrusive reference counter to instances of this class:

  class iIStream: public iObject
  {
  public:
    virtual void   Seek( const uint64 Position ) = 0;
    virtual uint64 BlockRead( void* Buf, const uint64 Size ) = 0;
    virtual bool   Eof() const = 0;
    virtual uint64 GetSize() const = 0;
    virtual uint64 GetPos() const = 0;

The following are a few methods that take advantage of memory-mapped files:

    virtual const ubyte* MapStream() const = 0;
    virtual const ubyte* MapStreamFromCurrentPos() const = 0;
  };

This interface supports both memory-mapped access, using the MapStream() and MapStreamFromCurrentPos() member functions, and sequential access, with the BlockRead() and Seek() methods. To write some data to storage, we use an output stream interface, as follows (again, the base class iObject is used to add a reference counter):

  class iOStream: public iObject
  {
  public:
    virtual void   Seek( const uint64 Position ) = 0;
    virtual uint64 GetFilePos() const = 0;
    virtual uint64 Write( const void* B, const uint64 Size ) = 0;
  };

The Seek(), GetFileSize(), GetFilePos(), and filename-related methods of the iIStream interface can be implemented in a single class called FileMapper:

  class FileMapper: public iIStream
  {
  public:
    explicit FileMapper( clPtr<iRawFile> File );
    virtual ~FileMapper();

    virtual std::string GetVirtualFileName() const { return FFile->GetVirtualFileName(); }
    virtual std::string GetFileName() const { return FFile->GetFileName(); }

Read a continuous block of data from this stream and return the number of bytes actually read:

    virtual uint64 BlockRead( void* Buf, const uint64 Size )
    {
      uint64 RealSize = ( Size > GetBytesLeft() ) ?
        GetBytesLeft() : Size;

Return zero if we have already read everything:

      if ( RealSize == 0 ) { return 0; }

      memcpy( Buf, ( FFile->GetFileData() + FPosition ),
              static_cast<size_t>( RealSize ) );

Advance the current position and return the number of copied bytes:

      FPosition += RealSize;
      return RealSize;
    }

    virtual void Seek( const uint64 Position ) { FPosition = Position; }
    virtual uint64 GetFileSize() const { return FFile->GetFileSize(); }
    virtual uint64 GetFilePos() const { return FPosition; }
    virtual bool Eof() const { return ( FPosition >= GetFileSize() ); }
    virtual const ubyte* MapStream() const { return FFile->GetFileData(); }
    virtual const ubyte* MapStreamFromCurrentPos() const { return ( FFile->GetFileData() + FPosition ); }

  private:
    clPtr<iRawFile> FFile;
    uint64 FPosition;
  };

The FileMapper uses the following iRawFile interface to abstract the data access:

  class iRawFile: public iObject
  {
  public:
    iRawFile() {};
    virtual ~iRawFile() {};

    void SetVirtualFileName( const std::string& VFName );
    void SetFileName( const std::string& FName );
    std::string GetVirtualFileName() const;
    std::string GetFileName();

    virtual const ubyte* GetFileData() const = 0;
    virtual uint64 GetFileSize() const = 0;

  protected:
    std::string FFileName;
    std::string FVirtualFileName;
  };

Along with the trivial GetFileName() and SetFileName() methods implemented here, in the following recipes we implement the GetFileData() and GetFileSize() methods.

How it works...
The iIStream::BlockRead() method is useful when handling non-seekable streams. For the fastest access possible, we use memory-mapped files, implemented in the following recipe. The MapStream() and MapStreamFromCurrentPos() methods provide convenient access to memory-mapped files: they return a pointer to the memory where your file, or a part of it, is mapped. The iOStream::Write() method works similarly to the standard ofstream::write() function. Refer to the project 1_AbstractStreams for the full source code of this and the following recipe.

There's more...
An important problem when programming for multiple platforms, in our case Windows and Linux-based Android, is the conversion of filenames. We define the following PATH_SEPARATOR constant, using OS-specific macros, to determine the path separator character:

  #if defined( _WIN32 )
    const char PATH_SEPARATOR = '\\';
  #else
    const char PATH_SEPARATOR = '/';
  #endif

The following simple function (std::replace requires the <algorithm> header) helps us make sure we use valid filenames for our operating system:

  inline std::string Arch_FixFileName( const std::string& VName )
  {
    std::string s( VName );
    std::replace( s.begin(), s.end(), '\\', PATH_SEPARATOR );
    std::replace( s.begin(), s.end(), '/', PATH_SEPARATOR );
    return s;
  }

See also
Implementing portable memory-mapped files
Working with in-memory files

Implementing portable memory-mapped files
Modern operating systems provide a powerful mechanism called memory-mapped files. In short, it allows us to map the contents of a file into the application address space. In practice, this means we can treat files as ordinary arrays and access them using C pointers.

Getting ready
To understand the implementation of the interfaces from the previous recipe, we recommend reading about memory mapping. An overview of this mechanism's implementation in Windows can be found on the MSDN page at http://msdn.microsoft.com/en-us/library/ms810613.aspx. To find out more about memory mapping on POSIX systems, refer to the mmap() function documentation.

How to do it...
In Windows, memory-mapped files are created using the CreateFileMapping() and MapViewOfFile() API calls. Android uses the mmap() function, which works pretty much the same way. Here we declare the RawFile class implementing the iRawFile interface. RawFile holds a pointer to a memory-mapped file and its size:

  ubyte* FFileData;
  uint64 FSize;

For the Windows version, we use two handles for the file and the memory-mapping object, and for Android, we use only the file handle:

  #ifdef _WIN32
    HANDLE FMapFile;
    HANDLE FMapHandle;
  #else
    int FFileHandle;
  #endif

We use the following function to open the file and create the memory mapping:

  bool RawFile::Open( const string& FileName, const string& VirtualFileName )
  {

At first, we need to obtain a valid file descriptor associated with the file:

  #ifdef OS_WINDOWS
    FMapFile = (void*)CreateFileA( FFileName.c_str(), GENERIC_READ,
      FILE_SHARE_READ, NULL, OPEN_EXISTING,
      FILE_ATTRIBUTE_NORMAL | FILE_FLAG_RANDOM_ACCESS, NULL );
  #else
    FFileHandle = open( FileName.c_str(), O_RDONLY );
    if ( FFileHandle == -1 )
    {
      FFileData = NULL;
      FSize = 0;
    }
  #endif

Using the file descriptor, we can create a file mapping. Here we omit error checks for the sake of clarity. However, the example in the supplementary materials contains more error checks:

  #ifdef OS_WINDOWS
    FMapHandle = (void*)CreateFileMapping( (HANDLE)FMapFile, NULL,
      PAGE_READONLY, 0, 0, NULL );
    FFileData = (Lubyte*)MapViewOfFile( (HANDLE)FMapHandle, FILE_MAP_READ, 0, 0, 0 );
    DWORD dwSizeLow = 0, dwSizeHigh = 0;
    dwSizeLow = ::GetFileSize( FMapFile, &dwSizeHigh );
    FSize = ( (uint64)dwSizeHigh << 32 ) | (uint64)dwSizeLow;
  #else
    struct stat FileInfo;
    fstat( FFileHandle, &FileInfo );
    FSize = static_cast<uint64>( FileInfo.st_size );
    FFileData = (Lubyte*)mmap( NULL, FSize, PROT_READ, MAP_PRIVATE, FFileHandle, 0 );
    close( FFileHandle );
  #endif
    return true;
  }

The correct deinitialization function closes all the handles:

  bool RawFile::Close()
  {
  #ifdef OS_WINDOWS
    if ( FFileData ) UnmapViewOfFile( FFileData );
    if ( FMapHandle ) CloseHandle( (HANDLE)FMapHandle );
    CloseHandle( (HANDLE)FMapFile );
  #else
    if ( FFileData ) munmap( (void*)FFileData, FSize );
  #endif
    return true;
  }

The main functions of the iRawFile interface, GetFileData() and GetFileSize(), have trivial implementations here:

  virtual const ubyte* GetFileData() { return FFileData; }
  virtual uint64 GetFileSize() { return FSize; }

How it works...
To use the RawFile class, we create an instance and wrap it into a FileMapper class instance:

  clPtr<RawFile> F = new RawFile();
  F->Open("SomeFileName");
  clPtr<FileMapper> FM = new FileMapper(F);

The FM object can be used with any function supporting the iIStream interface. The hierarchy of all our iRawFile implementations is shown in the following figure:

Implementing file writers
Quite frequently, our application might want to store some of its data on the disk. Another typical use case we have already encountered is the downloading of some file from the network into a memory buffer. Here, we implement two variations of the iOStream interface for the ordinary and in-memory files.

How to do it...
Let us derive the FileWriter class from the iOStream interface. We add the Open() and Close() member functions on top of the iOStream interface and carefully implement the Write() operation.
Our output stream implementation does not use memory-mapped files; it uses ordinary file descriptors, as shown in the following code:

  class FileWriter: public iOStream
  {
  public:
    FileWriter(): FPosition( 0 ) {}
    virtual ~FileWriter() { Close(); }

    bool Open( const std::string& FileName )
    {
      FFileName = FileName;

We split the Android-specific and Windows-specific code paths using defines (note that on the POSIX side, O_CREAT requires a file permission mode, 0644 here):

  #ifdef _WIN32
      FMapFile = CreateFile( FFileName.c_str(), GENERIC_WRITE,
        FILE_SHARE_READ, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL );
      return !( FMapFile == (void*)INVALID_HANDLE_VALUE );
  #else
      FMapFile = open( FFileName.c_str(), O_WRONLY | O_CREAT, 0644 );
      FPosition = 0;
      return !( FMapFile == -1 );
  #endif
    }

The same technique is used in the other methods. The difference between the two operating systems is trivial, so we decided to keep everything inside a single class and separate the code using defines:

    void Close()
    {
  #ifdef _WIN32
      CloseHandle( FMapFile );
  #else
      if ( FMapFile != -1 ) { close( FMapFile ); }
  #endif
    }

    virtual std::string GetFileName() const { return FFileName; }
    virtual uint64 GetFilePos() const { return FPosition; }

    virtual void Seek( const uint64 Position )
    {
  #ifdef _WIN32
      SetFilePointerEx( FMapFile,
        *reinterpret_cast<const LARGE_INTEGER*>( &Position ),
        NULL, FILE_BEGIN );
  #else
      if ( FMapFile != -1 ) { lseek( FMapFile, Position, SEEK_SET ); }
  #endif
      FPosition = Position;
    }

However, things may get more complex if you decide to support more operating systems. It can be a good refactoring exercise.

    virtual uint64 Write( const void* Buf, const uint64 Size )
    {
  #ifdef _WIN32
      DWORD written;
      WriteFile( FMapFile, Buf, DWORD( Size ), &written, NULL );
  #else
      if ( FMapFile != -1 ) { write( FMapFile, Buf, Size ); }
  #endif
      FPosition += Size;
      return Size;
    }

  private:
    std::string FFileName;
  #ifdef _WIN32
    HANDLE FMapFile;
  #else
    int FMapFile;
  #endif
    uint64 FPosition;
  };

How it works...
Now we can also present an implementation of iOStream that stores everything in a memory block. To store arbitrary data in a memory block, we declare the Blob class, as shown in the following code:

  class Blob: public iObject
  {
  public:
    Blob();
    virtual ~Blob();

Set the blob data pointer to some external memory block:

    void SetExternalData( void* Ptr, size_t Sz );

Direct access to the data inside this blob:

    void* GetData();
    …

Get the current size of the blob:

    size_t GetSize() const;

Check if this blob is responsible for managing the dynamic memory it uses:

    bool OwnsData() const;
    …

Increase the size of the blob and add more data to it. This method is very useful in a network downloader:

    bool AppendBytes( void* Data, size_t Size );
    …
  };

There are lots of other methods in this class. You can find the full source code in the Blob.h file.
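Before moving on to MemFileWriter, here is a brief usage sketch of how the Blob class might be driven by a network downloader. Only the methods listed above are used; the callback names and the printf-based progress report are illustrative assumptions, and the snippet presumes the Blob and clPtr headers from the book's sources:

  #include <cstdio>

  // Hypothetical callback invoked for every chunk received from the network.
  void OnChunkReceived( const clPtr<Blob>& Out, const void* Data, size_t Size )
  {
    // Grow the blob and append the freshly downloaded bytes at its end.
    Out->AppendBytes( const_cast<void*>( Data ), Size );
  }

  // Hypothetical progress report: GetSize() tells how much has accumulated so far.
  void PrintProgress( const clPtr<Blob>& Out )
  {
    std::printf( "downloaded %u bytes\n", (unsigned)Out->GetSize() );
  }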
We use this Blob class and declare the MemFileWriter class, which implements our iOStream interface, in the following way:

  class MemFileWriter: public iOStream
  {
  public:
    MemFileWriter( clPtr<Blob> Container );

Change the absolute position inside the file where new data will be written:

    virtual void Seek( const uint64 Position )
    {
      if ( Position > FContainer->GetSize() )
      {

Check if we are allowed to resize the blob:

        if ( Position > FMaxSize - 1 ) { return; }

And try to resize it:

        if ( !FContainer->SafeResize( static_cast<size_t>( Position ) + 1 ) )
        { return; }
      }
      FPosition = Position;
    }

Write data at the current position of this file:

    virtual uint64 Write( const void* Buf, const uint64 Size )
    {
      uint64 ThisPos = FPosition;

Ensure there is enough space:

      Seek( ThisPos + Size );
      if ( FPosition + Size > FMaxSize ) { return 0; }
      void* DestPtr = (void*)( &( ( (ubyte*)( FContainer->GetData() ) )[ThisPos] ) );

Write the actual data:

      memcpy( DestPtr, Buf, static_cast<size_t>( Size ) );
      return Size;
    }

  private:
    …
  };

We omit the trivial implementations of the GetFileName(), GetFilePos(), GetMaxSize(), SetContainer(), GetContainer(), and SetMaxSize() member functions, along with the field declarations. You will find their full source code in the code bundle of the book.

See also
Working with in-memory files

Working with in-memory files
Sometimes it is very convenient to be able to treat arbitrary in-memory, runtime-generated data as if it were in a file. As an example, let's consider using a JPEG image downloaded from a photo hosting site as an OpenGL texture. We do not need to save it into the internal storage first, as it is a waste of CPU time. We also do not want to write separate code for loading images from memory. Since we have our abstract iIStream and iRawFile interfaces, we just implement the latter to support memory blocks as the data source.

Getting ready
In the previous recipes, we already used the Blob class, which is a simple wrapper around a void* buffer.

How to do it...
Our iRawFile interface consists of two methods: GetFileData() and GetFileSize(). We just delegate these calls to an instance of Blob:

  class ManagedMemRawFile: public iRawFile
  {
  public:
    ManagedMemRawFile(): FBlob( NULL ) {}
    virtual const ubyte* GetFileData() const { return (const ubyte*)FBlob->GetData(); }
    virtual uint64 GetFileSize() const { return FBlob->GetSize(); }
    void SetBlob( const clPtr<Blob>& Ptr ) { FBlob = Ptr; }
  private:
    clPtr<Blob> FBlob;
  };

Sometimes it is useful to avoid the overhead of using a Blob object, and for such cases we provide another class, MemRawFile, which holds a raw pointer to a memory block and optionally takes care of the memory allocation:

  class MemRawFile: public iRawFile
  {
  public:
    virtual const ubyte* GetFileData() const { return (const ubyte*)FBuffer; }
    virtual uint64 GetFileSize() const { return FBufferSize; }
    void CreateFromString( const std::string& InString );
    void CreateFromBuffer( const void* Buf, uint64 Size );
    void CreateFromManagedBuffer( const void* Buf, uint64 Size );
  private:
    bool FOwnsBuffer;
    const void* FBuffer;
    uint64 FBufferSize;
  };

How it works...
We use MemRawFile as an adapter for a memory block extracted from a .zip file, and ManagedMemRawFile as the container for data downloaded from photo sites.
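To close the recipe, here is a short sketch (using only the classes and methods shown above; CreateFromString() is declared in MemRawFile, though its body lives in the book's code bundle) of how an in-memory buffer can be read through the same iIStream interface as a file on disk:

  // Wrap a runtime-generated string so it behaves like a read-only file.
  clPtr<MemRawFile> RawData = new MemRawFile();
  RawData->CreateFromString( "Hello, in-memory file!" );

  // FileMapper exposes the buffer through iIStream, so the same loading
  // code works for disk files, .zip entries, and memory blocks alike.
  clPtr<FileMapper> Stream = new FileMapper( RawData );

  char Buf[6] = { 0 };
  Stream->BlockRead( Buf, 5 );   // Buf now contains "Hello"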