
How-To Tutorials - Game Development

Away3D: Detecting Collisions

Packt
02 Jun 2011
6 min read
Away3D 3.6 Cookbook
Over 80 practical recipes for creating stunning graphics and effects with the fascinating Away3D engine

Introduction

In this article, you are going to learn how to check intersection (collision) between 3D objects.

Detecting collisions between objects in Away3D

This recipe will teach you the fundamentals of collision detection between objects in 3D space. We are going to learn how to perform a few types of intersection tests. These tests can hardly be called collision detection in the physical sense, as we are not going to deal here with any simulation of the collision reaction between two bodies. Instead, the goal of the recipe is to understand the collision tests from a mathematical point of view. Once you are familiar with intersection test techniques, the road to creating physical collision simulations is much shorter.

There are many types of intersection tests in mathematics. These include simple tests such as AABB (axially aligned bounding box) and Sphere-Sphere, and more complex ones such as Triangle-Triangle, Ray-Plane, Line-Plane, and more. Here, we will cover only those which we can achieve using built-in Away3D functionality: AABB and AABS (axially aligned bounding sphere) intersections, as well as Ray-AABS and the more complex Ray-Triangle. The rest of the methods are outside the scope of this article, and you can learn about applying them from various 3D math resources.

Getting ready

Set up an Away3D scene in a new file extending AwayTemplate, and name the class CollisionDemo.

How to do it...

In the following example, we perform an intersection test between two spheres based on their bounding box volumes. You can move one of the spheres along X and Y with the arrow keys onto the second sphere. When the objects overlap, the intersected (static) sphere glows red.
AABB test: CollisionDemo.as

package {
    public class CollisionDemo extends AwayTemplate {
        private var _objA:Sphere;
        private var _objB:Sphere;
        private var _matA:ColorMaterial;
        private var _matB:ColorMaterial;
        private var _gFilter:GlowFilter=new GlowFilter();

        public function CollisionDemo() {
            super();
            _cam.z=-500;
        }

        override protected function initMaterials():void{
            _matA=new ColorMaterial(0xFF1255);
            _matB=new ColorMaterial(0x00FF11);
        }

        override protected function initGeometry():void{
            _objA=new Sphere({radius:30,material:_matA});
            _objB=new Sphere({radius:30,material:_matB});
            _view.scene.addChild(_objA);
            _view.scene.addChild(_objB);
            _objB.ownCanvas=true;
            _objA.debugbb=true;
            _objB.debugbb=true;
            _objA.transform.position=new Vector3D(-80,0,400);
            _objB.transform.position=new Vector3D(80,0,400);
        }

        override protected function initListeners():void{
            super.initListeners();
            stage.addEventListener(KeyboardEvent.KEY_DOWN,onKeyDown);
        }

        override protected function onEnterFrame(e:Event):void{
            super.onEnterFrame(e);
            if(AABBTest()){
                _objB.filters=[_gFilter];
            }else{
                _objB.filters=[];
            }
        }

        private function AABBTest():Boolean{
            if(_objA.parentMinX>_objB.parentMaxX||_objB.parentMinX>_objA.parentMaxX){
                return false;
            }
            if(_objA.parentMinY>_objB.parentMaxY||_objB.parentMinY>_objA.parentMaxY){
                return false;
            }
            if(_objA.parentMinZ>_objB.parentMaxZ||_objB.parentMinZ>_objA.parentMaxZ){
                return false;
            }
            return true;
        }

        private function onKeyDown(e:KeyboardEvent):void{
            switch(e.keyCode){
                case 38:_objA.moveUp(5); break;
                case 40:_objA.moveDown(5); break;
                case 37:_objA.moveLeft(5); break;
                case 39:_objA.moveRight(5); break;
                case 65:_objA.rotationZ-=3; break;
                case 83:_objA.rotationZ+=3; break;
                default:
            }
        }
    }
}

In this screenshot, the green sphere's bounding box has a red glow while it is being intersected by the red sphere's bounding box:

How it works...

Testing intersections between two AABBs is really simple. First, we need to acquire the boundaries of each object for each axis.
The box boundaries for each axis of any Object3D are defined by a minimum and a maximum value for that axis. So let's look at the AABBTest() method. Axis boundaries are defined by parentMin and parentMax for each axis, which are accessible on any object extending Object3D. You can see that Object3D also has minX, minY, minZ and maxX, maxY, maxZ. These properties define the bounding box boundaries too, but in object space, and therefore aren't helpful in AABB tests between two objects.

So in order for a given bounding box to intersect the bounding box of another object, three conditions have to be met for each of them:

The minimal X coordinate of each of the objects should be less than the maximal X of the other.
The minimal Y coordinate of each of the objects should be less than the maximal Y of the other.
The minimal Z coordinate of each of the objects should be less than the maximal Z of the other.

If one of the conditions is not met for either of the two AABBs, there is no intersection. The preceding algorithm is expressed in the AABBTest() function:

private function AABBTest():Boolean{
    if(_objA.parentMinX>_objB.parentMaxX||_objB.parentMinX>_objA.parentMaxX){
        return false;
    }
    if(_objA.parentMinY>_objB.parentMaxY||_objB.parentMinY>_objA.parentMaxY){
        return false;
    }
    if(_objA.parentMinZ>_objB.parentMaxZ||_objB.parentMinZ>_objA.parentMaxZ){
        return false;
    }
    return true;
}

As you can see, if all of the conditions we listed previously are met, the execution skips all the return false blocks and the function returns true, which means an intersection has occurred.

There's more...

Now let's take a look at the rest of the methods for collision detection: AABS-AABS, Ray-AABS, and Ray-Triangle.

AABS test

The intersection test between two bounding spheres is even simpler to perform than the AABB test. The algorithm works as follows: if the distance between the centers of two spheres is less than the sum of their radii, then the objects intersect. Piece of cake, isn't it?
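The separating-interval logic above is language-neutral. Here is a minimal Python sketch of the same test; the tuple layout and the sample boxes are illustrative stand-ins for Away3D's parentMin/parentMax properties, not part of the Away3D API:

```python
# Minimal sketch of the AABB overlap test described above.
# Each box is (min_x, min_y, min_z, max_x, max_y, max_z) in parent space.

def aabb_overlap(a, b):
    """Return True if two axis-aligned bounding boxes intersect."""
    for axis in range(3):
        # If one box's minimum exceeds the other's maximum on any axis,
        # a separating plane exists and there is no intersection.
        if a[axis] > b[axis + 3] or b[axis] > a[axis + 3]:
            return False
    return True

box_a = (-110, -30, 370, -50, 30, 430)  # radius-30 sphere at (-80, 0, 400)
box_b = (50, -30, 370, 110, 30, 430)    # radius-30 sphere at (80, 0, 400)
print(aabb_overlap(box_a, box_b))       # False: separated along X
print(aabb_overlap(box_a, (-60, -30, 370, 0, 30, 430)))  # True: boxes overlap
```

Note that a single failing axis is enough to reject the pair, which is why the function returns as early as possible.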
Let's implement it in code. The AABS collision algorithm gives us the best performance. While there are many other, more sophisticated approaches, try to use this test if you are not after extreme precision (most casual games can live with this approximation).

First, let's switch the debugging mode of _objA and _objB to bounding spheres. In the last application we built, go to the initGeometry() function and change:

_objA.debugbb=true;
_objB.debugbb=true;

To:

_objA.debugbs=true;
_objB.debugbs=true;

Next, we add the function to the class which implements the algorithm we described previously:

private function AABSTest():Boolean{
    var dist:Number=Vector3D.distance(_objA.position,_objB.position);
    if(dist<=(_objA.radius+_objB.radius)){
        return true;
    }
    return false;
}

Finally, we add the call to the method inside onEnterFrame():

if(AABSTest()){
    _objB.filters=[_gFilter];
}else{
    _objB.filters=[];
}

Each time AABSTest() returns true, the intersected sphere is highlighted with a red glow:
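In any language, the bounding-sphere test boils down to one distance comparison. A quick Python illustration (the positions and radii are made-up values, not Away3D API calls):

```python
import math

def spheres_intersect(center_a, radius_a, center_b, radius_b):
    """Bounding-sphere test: the spheres intersect iff the distance
    between their centers does not exceed the sum of the radii."""
    dist = math.dist(center_a, center_b)
    return dist <= radius_a + radius_b

# Two radius-30 spheres whose centers are 160 units apart: no intersection.
print(spheres_intersect((-80, 0, 400), 30, (80, 0, 400), 30))  # False
# Move the first center to within 60 units and they touch/overlap.
print(spheres_intersect((25, 0, 400), 30, (80, 0, 400), 30))   # True
```

The single square root per pair (hidden inside the distance call) is what makes this the cheapest of the tests covered here.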


Importing 3D Formats into Away3D

Packt
31 May 2011
5 min read
Away3D 3.6 Cookbook
Over 80 practical recipes for creating stunning graphics and effects with the fascinating Away3D engine

Introduction

The Away3D library contains a large set of 3D geometric primitives such as Cube, Sphere, Plane, and many more. Nevertheless, when we think of developing breathtaking and cutting-edge 3D applications, there is really no way to get it done without using more sophisticated models than basic primitives. Therefore, we need to use external 3D modeling programs such as Autodesk 3DsMax, Maya, or Blender to create complex models. The power of Away3D is that it allows us to import a wide range of 3D formats, for static meshes as well as for animations. Besides the models, a no less important part of the 3D world is textures. They are critical in making the model look good and strongly influence the ultimate user experience. In this article, you will learn essential techniques for importing different 3D formats into Away3D.

Exporting models from 3DsMax/Maya/Blender

You can export the following model formats from 3D programs: Obj (Wavefront), DAE (Collada), 3ds, Ase (ASCII), MD2, and Kmz. 3DsMax and Maya can natively export Obj, DAE, 3ds, and ASCII. One of the favorite 3D formats of Away3D developers is DAE (Collada), although it is not the best in terms of performance, because the file is basically an XML document, which becomes slow to parse when it contains a lot of data. The problem is that although 3DsMax and Maya have a built-in Collada exporter, the models it outputs do not work in Away3D. The workaround is to use open source Collada exporters such as ColladaMax/ColladaMaya or OpenCollada. The only difference between these two is the software versions they support.

Getting ready

Go to http://opencollada.org/download.html and download the OpenCollada plugin for the appropriate software (3DsMax or Maya). Go to http://sourceforge.net/projects/colladamaya/files/ and download the ColladaMax or ColladaMaya plugin.
Follow the instructions in the plugin's installation dialog. The plugin will get installed automatically in the 3dsMax/Maya plugins directory (assuming the software was installed into the default path).

How to do it...

3DsMax: Here is how to export Collada using the OpenCollada plugin in 3DsMax 2011. In order to export Collada (DAE) from 3DsMax, you should do the following:

In 3DsMax, go to File and click on Export or Export Selected (with the target model selected).
Select the OpenCOLLADA(*.DAE) format from the formats drop-down list.

ColladaMax export settings (currently 3DsMax 2009 and lower): ColladaMax export settings are almost the same as those of OpenCollada. The only difference you can see in the exporting interface is the lack of the Copy Images and Export user defined properties checkboxes. Select the checkboxes as shown in the previous screenshot.

Relative paths: Makes sure the texture paths are relative.
Normals: Exports the object's normals.
Copy Images: Optional. If we select this option, the exporter outputs a folder with the related textures into the same directory as the exported object.
Triangulate: In case some parts of the mesh consist of polygons with more than three sides, they get triangulated.
Animation settings: Away3D supports bones animations from external assets. If you set up a bones animation and wish to export it, check Sample animation and set the begin and end frame for the animation span that you want to export from the 3DsMax animation timeline.

Maya: For showcase purposes, you can download a 30-day trial version of Autodesk Maya 2011. The installation process in Maya is slightly different:

Open Maya.
Go to the top menu bar and select Window.
In the drop-down list, select Settings/Preferences; in the new drop-down list, select Plug-in Manager. Now you should see the Plug-in Manager interface.
Click on the Browse button and navigate to the directory where you extracted the OpenCollada ZIP archive.
Select the COLLADAMaya.mll file and open it. Now you should see the OpenCollada plugin under the Other Registered Plugins category. Check the AutoLoad checkbox if you wish for the plugin to be loaded automatically the next time you start the program. After your model is ready for export, click on File | Export All or Export Selected. The export settings for ColladaMaya are the same as for 3DsMax.

How it works...

The Collada file is just another XML document, but with a different extension (.dae). When exporting a model in the Collada format, the exporter writes into the XML node tree all the essential data describing the model structure, as well as animation data when one exports bone-based animated models. When deploying your DAE models to the web hosting directory, don't forget to change the .DAE extension to .XML. Forgetting to do so will result in the file failing to load, because the .DAE extension is ignored by most servers by default.

There's more...

Besides Collada, you can also export OBJ, 3Ds, and ASE. Fortunately, for exporting these formats you don't need any third-party plugins, only those already included in the software. Free programs such as Blender also serve as an alternative to expensive commercial software such as Maya or 3DsMax. Blender comes with a built-in Collada exporter. Actually, it has two such exporters; at the time of this writing, these are 1.3 and 1.4. You should use 1.4, as 1.3 seems to output corrupted files that are not parsed in Away3D. The export process looks exactly like the one for 3dsMax:

Select your model.
Go to File, then Export.
In the drop-down list of different formats, select Collada 1.4. The following interface opens:
Select Triangles, Only Export Selection (if you wish to export only the selected object), and Sample Animation.
Set the exporting destination path and click on Export and close. You are done.
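Because a Collada file is ordinary XML, you can sanity-check an exported file with any XML parser before uploading it. A hedged Python sketch; the tiny inline document and the geometry id are illustrative, not taken from a real exporter:

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for an exported Collada file; a real .dae from
# OpenCollada has the same XML shape with far more data inside.
sample_dae = """<?xml version="1.0" encoding="utf-8"?>
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1">
  <library_geometries>
    <geometry id="turretMesh" name="turret"/>
  </library_geometries>
</COLLADA>"""

root = ET.fromstring(sample_dae)
# A quick validity check: the root element of a Collada 1.4 file is COLLADA.
tag = root.tag.split('}')[-1]  # strip the XML namespace prefix
print(tag)  # COLLADA
```

If the parse fails or the root tag is wrong, the exporter likely produced a corrupted file, which is exactly the symptom described above for Blender's 1.3 exporter.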


Blender 2.5: Detailed Render of the Earth from Space

Packt
25 May 2011
10 min read
Blender 2.5 HOTSHOT
Challenging and fun projects that will push your Blender skills to the limit

Our purpose is to create a very detailed view of the earth from space. By detailed, we mean that it includes land, oceans, and clouds, and not only their color and specular reflection, but also the roughness they seem to have when seen from space. For this project, we are going to perform some work with textures and get them properly set up for our needs (and also for Blender's way of working).

What Does It Do?

We will create a nice image of the earth resembling the beautiful pictures taken from orbit, showing the sun rising over the rim of the planet. For this, we will need to work carefully with some textures, set up a basic scene, and create a fairly complex setup of nodes for compositing the final result. In our final image, we will get very nice effects, such as the volumetric effect of the atmosphere that we can see around the rim, the strong highlight of the sun when rising over the rim of the earth, and the very calm, bluish look of the dark part of the earth when lit by the moon.

Why Is It Awesome?

With this project, we are going to understand how important it is to have good textures to work with. Having the right textures for the job saves lots of time when producing a high-quality rendered image. Not only are we going to work with some very good textures that are freely available on the Internet, but we are also going to perform some hand tweaking to get them tuned exactly as we need them. This way we can also learn how much time can be saved by just doing some preprocessing on the textures to create finalized maps that will be fed directly to the material, without having to resort to complex tricks that would only cause us headaches. One of the nicest aspects of this project is that we are going to see how far we can take a very simple scene by using the compositor in Blender.
We are definitely going to learn some useful tricks for compositing.

Your Hotshot Objectives

This project will be tackled in five parts:

Preprocessing the textures
Object setup
Lighting setup
Compositing preparation
Compositing

Mission Checklist

The key to the success of our project is getting the right set of quality images at a sufficiently high resolution. Let's go to www.archive.org and search for www.oera.net/How2.htm on the 'wayback machine'. Choose the snapshot from the Apr 18, 2008 link. Click on the image titled Texture maps of the Earth and Planets. Once there, let's download these images:

Earth texture natural colors
Earth clouds
Earth elevation/bump
Earth water/land mask

Remember to save the high-resolution version of the images, and put them in the tex folder inside the project's main folder. We will also need to use Gimp to perform the preprocessing of the textures, so let's make sure to have it installed. We'll be working with version 2.6.

Preprocessing the Textures

The textures we downloaded are quite good, both in resolution and in the way they clearly separate each aspect of the shading of the earth. There is a catch, though: using the clouds, elevation, and water/land textures as they are will cause us a lot of headaches inside Blender. So let's perform some basic preprocessing to get finalized, separated maps for each channel of the shader that will be created.

Engage Thrusters

For each one of the textures that we're going to work on, let's make sure to close the previous one, to avoid mixing up the wrong textures.

Clouds Map

Drag the EarthClouds_2500x1250.jpg image from the tex folder into the empty window of Gimp to get it loaded. Now locate the Layers window, right-click on the thumbnail of the Background layer, and select the entry labeled Add Layer Mask... from the menu. In the dialog box, select the Grayscale copy of layer option. Once the mask is added to the layer, the black part of the texture should look transparent.
If we take a look at the image after adding the mask, we'll notice the clouds seem to have too much transparency. To solve this, we will perform some adjustment directly on the mask of the layer. Go to the Layers window and click on the thumbnail of the mask (the one on the right-hand side) to make it active (its border should become white). Then go to the main window (the one containing the image) and go to Colors | Curves.... In the Adjust Color Curves dialog, add two control points and get the curve shown in the next screenshot. The purpose of this curve is to make the light gray pixels of the mask lighter and the dark ones darker; the strong slope between the two control points will make the border of the mask sharper. Make sure that the Value channel is selected and click on OK. Now let's take a look at the image and see how strong the contrast of the image is and how well defined the clouds are now.

Finally, let's go to Image | Mode | RGB to set the internal data format for the image to a safe format (thus avoiding the risk of Blender getting confused by it). Now we only need to go to File | Save A Copy... and save it as EarthClouds.png in the tex folder of the project. In the dialogs asking for confirmation, make sure to tell Gimp to apply the layer mask (click on Export in the first dialog). For the settings of the PNG file, we can use the default values. Let's close the current image in Gimp and get the main window empty in order to start working on the next texture.

Specular Map

Let's start by dragging the image named EarthMask_2500x1250.jpg onto the main window of Gimp to get it open. Then drag the image EarthClouds_2500x1250.jpg over the previous one to get it added as a separate layer in Gimp. Now, we need to make sure that the images are correctly aligned. To do this, let's go to View | Zoom | 4:1 (400%), to be able to move the layer with pixel precision easily.
Now go to the bottom right-hand corner of the window and click-and-drag the four-arrows icon until the part of the image shown in the viewport is one of the corners. Once we are looking at the right place, let's go to the Toolbox and activate the Move tool. Finally, we just need to drag the clouds layer so that its corner exactly matches the corner of the water/land image. Then let's switch to another zoom level by going to View | Zoom | 1:4 (25%). Now let's go to the Layers window, select the EarthClouds layer, and set its blending mode to Multiply (Mode drop-down, above the layers list). Now we just need to go to the main window and select Colors | Invert. Finally, let's switch the image to RGB mode by going to Image | Mode | RGB and we are done with the processing. Remember to save the image as EarthSpecMap.jpg in the tex folder of the project and close it in Gimp.

The purpose of creating this specular map is to correctly mix the specularity of the ocean (full) with that of the clouds above the ocean (null). This way, we get correct specularity, both in the ocean and in the clouds. If we just used the water/land mask to control specularity, then the clouds above the ocean would have specular reflection, which is wrong.

Bump Map

The bump map controls the roughness of the material; this one is very important, as it adds a lot of detail to the final render without having to create actual geometry to represent it. First, drag the EarthElevation_2500x1250.jpg to the main window of Gimp to get it open. Then let's drag the EarthClouds_2500x1250.jpg image over the previous one, so that it gets loaded as a layer above the first one. Now zoom in by going to View | Zoom | 4:1 (400%). Drag the image so that you are able to see one of its corners and use the Move tool to get the clouds layer exactly matching the elevation layer. Then switch back to a wider view by going to View | Zoom | 1:4 (25%). Now it's time to add a mask to the clouds layer.
Right-click on the clouds layer and select the Add Layer Mask... entry from the menu. Then select the Grayscale copy of layer option in the dialog box and click Add. What we have thus far is a map that defines how intense the roughness of the surface will be at each point. But there is a problem: the clouds are as bright as, or even brighter than, the Andes and the Himalayas, which means the render process will distort them quite a lot. Since we know that the intensity of the roughness on the clouds must be less, let's perform another step to get the map corrected accordingly.

Let's select the left thumbnail of the clouds layer (the color channel of the layer), then go to the main window and open the color levels using the Levels tool by going to Colors | Levels.... In the Output Levels part of the dialog box, let's change the value 255 (on the right-hand side) to 66 and then click on OK. Now we have a map that clearly gives a stronger value to the highest mountains on earth than to the clouds, which is exactly what we needed. Finally, we just need to change the image mode to RGB (Image | Mode | RGB) and save it as EarthBumpMap.jpg in the tex folder of the project.

Notice that we are mixing the bump maps of the clouds and the mountains. The reason for this is that working with separate bump maps would get us into a very tricky situation inside Blender; definitely, working with a single bump map is way easier than trying to mix two or more. Now we can close Gimp, since we will work exclusively within Blender from now on.

Objective Complete - Mini Debriefing

This part of the project was just a preparation of the textures. We must create these new textures for three reasons:

To give the clouds' texture a proper alpha channel; this will save us trouble when working with it in Blender.
To control the spec map properly in the regions where there are clouds, as the clouds must not have specular reflection.
To create a single, unified bump map for the whole planet. This will save us lots of trouble when controlling the Normal channel of the material in Blender.

Notice that we are using the term "bump map" to refer to a texture that will be used to control the "normal" channel of the material. The reason for not calling it a "normal map" is that a normal map is a special kind of texture that isn't coded in grayscale, like our current texture is.
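The Output Levels adjustment used on the bump map above (mapping white down to 66) is plain arithmetic: every gray value is linearly rescaled so that 255 becomes the new maximum. A small Python sketch of that rescaling, with pixel values chosen only for illustration:

```python
def scale_output_levels(value, out_min=0, out_max=66):
    """Rescale a 0-255 gray value into [out_min, out_max], the way a
    Levels tool's Output Levels sliders compress the tonal range."""
    return round(out_min + (value / 255) * (out_max - out_min))

# A bright cloud pixel is knocked down well below mountain-peak values...
print(scale_output_levels(255))  # 66
# ...while already-dark lowland values barely change.
print(scale_output_levels(20))   # 5
```

After this pass, the brightest cloud can never out-bump the highest mountain, which is the whole point of the correction.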


Blender 2.5: Simulating Manufactured Materials

Packt
21 Apr 2011
10 min read
Blender 2.5 Materials and Textures Cookbook
Over 80 great recipes to create life-like Blender objects

Creating metals

Of all material surfaces, metals must be one of the most popular to appear in 3D design projects. Metals tend to be visually pleasing, with brightly colored surfaces that gleam when polished. They also exhibit fascinating surface detail due to oxidization and age-related weathering. Being malleable, these surfaces dent and scratch to display their human interaction. All these qualities make man-made metal objects great subjects for designing outstanding material and texture surfaces within Blender.

It is possible in Blender to design metal surfaces using quite simple material setups. Although it may seem logical to create complex node-based solutions to capture all the complexity apparent within a metal surface, the standard Blender material arrangement can achieve all that is necessary to represent almost any metal. Metals have their own set of unique criteria that need application within a material simulation. These include:

Wide specularity, due to the nature of metals being polished or dulled by interaction
Unique bump maps, either representing the construction and/or resulting from interaction
Reflection: metals, more than many other surfaces, can display reflection. Normally, this can be simulated by careful use of the specular settings but, occasionally, we will need to have other objects and environments reflected in a metal surface.

Blender has a vast array of tools to help you simulate almost any metal surface. Some of these mimic real-world metal tooling effects, like anisotropic blend types to simulate brushed metal surfaces, or blurred reflections sometimes seen on sandblasted metal surfaces. All these techniques, while producing realistic metal effects, tend to be very render intensive.
We will work with some of the simpler tools in Blender to not only produce realistic results but also conserve memory usage and render times. We will start with a simple but pleasing copper surface. Copper has the unique ability to be used in everything from building materials, through cooking, to money. Keeping with a building theme, we will create a copper turret material of the type of large copper usage that might be seen on anything from a fairy castle to a modern-day embellishment of a corporate building. One of the pleasant features of such a large structural use of copper is its surface color. A predominantly brown/orange color, when new, changes to a complementary color, light green/blue, when oxidized. This oxidization also varies the specularity of the surface and, in combination with its man-made plated construction, creates a very pleasing material.

Getting ready

To prepare for this recipe, you will need to create a simple mesh to represent a copper-plated, turret-like roof. You can be as extravagant as you wish in designing an interesting shape. Give the mesh a few curves, and variations in scale, so that you can see how the textures deform to the shape. The overall scale of this should be about 2.5 times larger than the default cube, and about 1.5 times in width at its widest point. If you would prefer to use the same mesh as used in the recipe, you can download it as a pre-created blendfile from the Packtpub website. If you create a turret-like object yourself, ensure that all the normals are facing outwards. You can do this by selecting all of the vertices in edit mode, and then clicking on Normals / Recalculate in the Tools Shelf. Also, set the surface shading to Smooth in the same menu. Depending on how many vertices you use to create your mesh, you may want to add a Sub-surface modifier to ensure that the model renders with a nice smooth surface, on which we will create the copper-plating material simulation.
In the scene used in the example blendfile, three lights have been used:

A Sun type lamp at location X 7.321, Y 1.409, Z 11.352, with a color of white and Energy of 1.00. However, it should only be set to provide specular lighting. It is positioned to create a nice specular reflection on the curved part of the turret.
A Point type lamp at X 9.286, Y -3.631, Z 5.904, with a color of white and Energy of 1.00.
A Hemi type lamp at location X -9.208, Y 6.059, Z 5.904, with a color of R 1.00, G 0.97, B 0.66 and an Energy of 1.00.

These will help simulate daylight and a nice specular reflection, as you might see on a bright day. Now would be a good time to save your work. If you have downloaded the pre-created blendfile, or produced one yourself, save it with an incremented filename as copper-turret-01.blend.

It will also be necessary for you to download three images that will provide a color map, a bump map, and a specular map for the plated surface of our turret. They are simple grayscale images that are relatively easy to create in a paint package. Essentially, one image is a tiled collection of metal plates with some surface detail, and another is derived from it as a higher-contrast version; this will be used as a specularity map. The third has the same outline as each tile edge, but with simple blends from black to white; this will provide a bump map to give the general slope of each metal plate. All three are available separately for download from the Packtpub website as:

Chapt-02/textures/plating.png
Chapt-02/textures/plating-bump-1.png
Chapt-02/textures/plating-spec-pos.png

Once downloaded, save these files into a textures subdirectory below where you have saved the blendfile.

How to do it...

We are going to create the effect of plating on the turret object, tiling an image around its surface to make it look as though it has been fashioned by master coppersmiths decades ago. Open the copper-turret-01.blend.
This file currently has no materials or textures associated with it. With your turret mesh selected, create a new material in the Materials panel. Name your new material copper-roof. Change the Diffuse color to R 1.00, G 0.50, B 0.21. You can use the default diffuse shading type, Lambert. Set the Specular color to R 1.00, G 0.93, B 0.78 and the type to Wardiso, with Intensity 0.534 and Slope 0.300. That's the general color set for our material; we now need to create some textures to add the magic.

Move over to the Texture panel and select the first texture slot. Create a new texture of type Image or Movie, and name it color-map. From the Image tab, open the image plating.png that should be in the textures subfolder where you saved the blendfile. This is a grayscale image composed from a number of photographs, with grime maps applied within a paint package. Each plate has been scaled and repositioned to produce a random-looking but tileable texture. Creating such textures is not a quick process. However, the time spent in producing a good image will make your materials look so much better.

Under the Mapping tab, select Coordinates of type Generated and Projection of type Tube. Under Image Mapping, select Extension / Repeat, and set the Repeat values to X 3 and Y 2. This will repeat the texture three times around the circumference of the turret and twice along its height. In the Influence tab, select Diffuse/Color and set it to 0.500. Also, set Geometry/Normal to 5.00. Finally, select Blend type Multiply, RGB to Intensity, and set the color to a nice bright orange with R 0.94, G 0.56, and B 0.00.

Save your work as copper-turret-02.blend, and perform a test render. If necessary, you can perform a partial render of just one area of your camera view by using the SHIFT+B shortcut and dragging a border around just an area of the camera view. An orange-dashed border will show what area of the image will be rendered.
If you also set the Crop selector in the Render panel under Dimensions, it will render only that bordered area and not the black un-rendered portion. You should see that both the color and bump have produced a subtle change in the appearance of the copper turret simulation. However, the bump map is all rather even, with each plate looking as though they are all the same thickness, rather than one laid on top of another. Time to employ another bump map to create that overlapped look.

With the turret object selected, move to the Texture panel and, in the next free texture slot, create a new texture of type Image or Movie, and name it plate-bumps. In the Image tab, open the image plating-bump-1.png. Under the Image Mapping tab, select Extension of type Repeat and set the Repeat to X 3, Y 2. In the Mapping tab, ensure the Coordinates are set to Generated and the Projection to Tube. Finally, under the Influence tab, have only Geometry/Normal set, with a value of 10.000. Save your work, naming the file copper-turret-03.blend, and perform another test render. Renders of this model will be quite quick, so don't be afraid to render regularly to examine your progress.

Your work should now have a more pleasing, sloped, tiled copper look. However, the surface is still a little dull. Let us add some weather-beaten damage to help bind the images tiled on the surface to the structure below. With the turret object selected, choose the next free texture slot in the Texture panel. Create a new texture of type Clouds and name it beaten-bumps. In the Clouds tab, set Grayscale and Noise/Hard, and set the Basis to Blender Original with Size 0.11 and Depth 6. Under the Mapping tab, set the Coordinates to Generated and Projection to Tube. Below Projection, change the X, Y, Z to Z, Y, X. Finally, under the Influence tab, select only Geometry/Normal and set it to -0.200. Save your work again, incrementing the filename to copper-turret-04.blend.
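The Repeat X 3, Y 2 tiling used in the texture slots above amounts to wrapping the image around the cylinder and tiling it three times around and twice along the height. A rough Python sketch of that coordinate math; this is a simplified model of a tube projection for intuition, not Blender's actual implementation:

```python
import math

def tube_repeat_uv(x, y, z, repeat_x=3, repeat_y=2):
    """Map a point on a unit-height cylinder (z in [0, 1)) to tiled UV
    coordinates, roughly as a Generated/Tube mapping with Repeat does."""
    angle = math.atan2(y, x)                        # position around the circumference
    u = (angle / (2 * math.pi)) % 1.0 * repeat_x % 1.0
    v = (z * repeat_y) % 1.0                        # tile twice along the height
    return u, v

# A point one third of the way around the tube lands back near u = 0,
# because the image repeats three times around the circumference.
u, v = tube_repeat_uv(math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3), 0.25)
print(round(v, 3))  # 0.5: a quarter of the height, tiled twice
```

This is why seams only stay invisible if the image is tileable: each wrap of the texture must butt cleanly against the next copy.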
A test render at this point will not produce an enormous difference from the previous render, but the effect is there. If you examine each stage render of the recipe so far, you will see the subtle but important changes the textures have made.

How it works...

Creating metal surfaces in 3D packages like Blender will almost always require a photographic image to map the man-made nature of the material. Images can add color, bump, or normal maps, as well as specular variety, to show these man-made structures. Because metals can have so much variety in their surface appearance, more than one map will be required. In our example, we used three images that were created in a paint package. They have been designed to give a tileable texture so that the effect can be repeated across the surface without producing discernible repeats. Producing such images can be time-consuming, but a good image map will make your materials much more believable. Occasionally, it will be possible to combine color, bump, and specularity maps into a single image, but try to avoid this as it will undoubtedly lead to unnatural-looking metals.

Sometimes, the simplest of bump maps can make all the difference to a material. In the middle image shown previously, we see a series of simple blends marking the high and low points of overlapping copper plates. It's working in a very similar way to the recipe on slate roof tiles; however, it is also being used in conjunction with the plating image that supplies the color and just a little bump. We have also supplied a third bump map using a procedural texture, Clouds. Procedurals have the effect of creating random variation across a surface, so here it is used to help tie everything together and break any repeats formed by the tiled images. Using multiple bump maps is an extremely efficient way of adding subtle detail to any material, and here you can almost see the builders of this turret leaning against it to hammer down the rivets.
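Conceptually, stacking several bump sources (tiled plate bumps, overlap blends, procedural clouds) behaves like a weighted sum of height maps. The sketch below is a plain-Python illustration of that idea only, not Blender's actual normal-perturbation code; the function name combine_bumps and the sample values are invented for the example, with weights loosely echoing the Normal influence values used above.

```python
def combine_bumps(height_maps, weights):
    """Blend several height maps into one by weighted sum,
    roughly how stacked bump textures accumulate influence."""
    if len(height_maps) != len(weights):
        raise ValueError("one weight per height map")
    combined = [0.0] * len(height_maps[0])
    for hmap, weight in zip(height_maps, weights):
        for i, height in enumerate(hmap):
            combined[i] += weight * height
    return combined

# Plate bump, overlap blend, and procedural noise as tiny 1D samples.
plates = [0.2, 0.8, 0.2, 0.8]
overlap = [0.0, 0.5, 1.0, 0.5]
clouds = [0.3, 0.1, 0.4, 0.2]

# A negative weight inverts a bump's contribution, as with the -0.200
# influence used for the beaten-bumps texture.
result = combine_bumps([plates, overlap, clouds], [5.0, 10.0, -0.2])
```

Note how the negative weight lets one map carve into the relief the others build up.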

ZBrush FAQs

Packt
20 Apr 2011
3 min read
ZBrush 4 Sculpting for Games: Beginner's Guide. Sculpt machines, environments, and creatures for your game development projects.

Q: Why do we use ZBrush and why is it so widely used in the game and film industry?

A: ZBrush is very good for creating highly detailed models in a very short time. This may sound trivial, but it is very sought-after: if you have seen the amazing detail on some creatures in Avatar (film), The Lord of the Rings (film), or Gears of War (game), you'll know how much this adds to the experience. Without the possibilities of ZBrush, we wouldn't be able to achieve such an incredible level of detail that looks almost real, like a detailed close-up of an arm. But apart from creating hyper-realistic models in games or films, ZBrush also focuses on making model creation easier and more lifelike. For these reasons, it essentially tries to mimic working with real clay, which is easy to understand. So it's all about adding and removing digital clay, which is quite a fun and intuitive way of creating 3D models.

Q: Where can one get more information on ZBrush?

A: Now that you're digging into ZBrush, these websites are worth a visit:

http://www.pixologic.com. As the developers of ZBrush, this site features many customer stories, tutorials, and most interestingly the turntable gallery, where you can rotate freely around ZBrush models from others.
http://www.ZBrushcentral.com. The main forum with answers for all ZBrush-related questions and a nice "top-row gallery".
http://www.ZBrush.info. This is a wiki, hosted by Pixologic, containing the online documentation for ZBrush.

Q: What are the most important hotkeys in ZBrush?

A: The following are some of the most important hotkeys in ZBrush:

To Rotate your model, left-click anywhere on an unoccupied area of the canvas and drag the mouse.
To Move your model, hold Alt while left-clicking anywhere on an unoccupied area of the canvas and drag the mouse.
To Scale your model, press Alt while left-clicking anywhere on an unoccupied area of the canvas and begin moving. Then release the Alt key while keeping the mouse button pressed, and drag.

Q: What is the difference between 2D, 2.5D, and 3D images in ZBrush?

A: 2D digital images are a flat representation of color, consisting of pixels, where each pixel holds color information. In contrast, 3D models, as the name says, can hold three-dimensional information. A 2.5D image stores color information like an image, but additionally knows how far away the pixels in the image are from the viewer and in which direction they are pointing. With this information you can, for example, change the lighting in your 2.5D image without having to repaint it, which can be a real time-saver. To make this even clearer, the next list shows some of the actions we can perform, depending on whether we're working in 2D, 2.5D, or 3D:

3D – Rotation, deformation, lighting
2.5D – Deformation, lighting, pixel-based effects
2D – Pixel-based effects

A pixel-based effect, for example, could be the contrast brush or the glow brush, which can't be applied to a 3D model.

Q: How can we switch between 2.5D and 3D mode?

A: We can switch between 2.5D and 3D mode by using the Edit button.
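The relighting advantage of 2.5D can be illustrated with a toy data structure. The class name DepthPixel and the Lambert-style shading below are my own illustration of the idea, not ZBrush's internal pixol format: because each pixel stores a surface normal alongside its color, a new light direction can be applied without repainting anything.

```python
class DepthPixel:
    """Toy 2.5D pixel: color plus depth and a surface normal.
    (An illustration of the concept only, not ZBrush's format.)"""
    def __init__(self, color, depth, normal):
        self.color = color      # (r, g, b) channels, each 0..1
        self.depth = depth      # distance from the viewer
        self.normal = normal    # unit vector the surface points towards

def relight(pixel, light_dir):
    """Lambert-style shading from the stored normal; no repainting needed."""
    facing = max(0.0, sum(n * l for n, l in zip(pixel.normal, light_dir)))
    return tuple(c * facing for c in pixel.color)

p = DepthPixel((1.0, 0.5, 0.2), depth=4.0, normal=(0.0, 0.0, 1.0))
lit_front = relight(p, (0.0, 0.0, 1.0))  # light aimed at the surface
lit_side = relight(p, (1.0, 0.0, 0.0))   # grazing light leaves it dark
```

A plain 2D image has no stored normal, so the same relighting operation would be impossible without repainting; that is the time-saver the answer above describes.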

Panda3D game development: scene effects and shaders

Packt
20 Apr 2011
8 min read
While brilliant gameplay is the key to a fun and successful game, it is essential to deliver beautiful visuals to provide a pleasing experience and immerse the player in the game world. The looks of many modern productions are massively dominated by all sorts of visual magic to create the jaw-dropping visual density that is soaked up by players with joy and makes them feel connected to the action and the gameplay they are experiencing. The appearance of your game matters a lot to its reception by players. Therefore it is important to know how to leverage your technology to get the best possible looks out of it. This is why this article will show you how Panda3D allows you to create great looking games using lights, shaders, and particles.

Adding lights and shadows in Panda3D

Lights and shadows are very important techniques for producing a great presentation. Proper scene lighting sets the mood and also adds depth to an otherwise flat-looking scene, while shadows add more realism and, more importantly, root the shadow-casting objects to the ground, destroying the impression of models floating in mid-air. This recipe will show you how to add lights to your game scenes and make objects cast shadows to boost your visuals.

Getting ready

You need to create the setup presented in Setting up the game structure before proceeding, as this recipe continues and builds upon this base code.

How to do it...
This recipe consists of these tasks:

Add the following code to Application.py:

    from direct.showbase.ShowBase import ShowBase
    from direct.actor.Actor import Actor
    from panda3d.core import *

    class Application(ShowBase):
        def __init__(self):
            ShowBase.__init__(self)

            self.panda = Actor("panda", {"walk": "panda-walk"})
            self.panda.reparentTo(render)
            self.panda.loop("walk")

            cm = CardMaker("plane")
            cm.setFrame(-10, 10, -10, 10)
            plane = render.attachNewNode(cm.generate())
            plane.setP(270)

            self.cam.setPos(0, -40, 6)

            ambLight = AmbientLight("ambient")
            ambLight.setColor(Vec4(0.2, 0.1, 0.1, 1.0))
            ambNode = render.attachNewNode(ambLight)
            render.setLight(ambNode)

            dirLight = DirectionalLight("directional")
            dirLight.setColor(Vec4(0.1, 0.4, 0.1, 1.0))
            dirNode = render.attachNewNode(dirLight)
            dirNode.setHpr(60, 0, 90)
            render.setLight(dirNode)

            pntLight = PointLight("point")
            pntLight.setColor(Vec4(0.8, 0.8, 0.8, 1.0))
            pntNode = render.attachNewNode(pntLight)
            pntNode.setPos(0, 0, 15)
            self.panda.setLight(pntNode)

            sptLight = Spotlight("spot")
            sptLens = PerspectiveLens()
            sptLight.setLens(sptLens)
            sptLight.setColor(Vec4(1.0, 0.0, 0.0, 1.0))
            sptLight.setShadowCaster(True)
            sptNode = render.attachNewNode(sptLight)
            sptNode.setPos(-10, -10, 20)
            sptNode.lookAt(self.panda)
            render.setLight(sptNode)

            render.setShaderAuto()

Start the program with the F6 key. You will see the following scene:

How it works...

As we can see when starting our program, the panda is lit by multiple lights, casting shadows onto itself and the ground plane. Let's see how we achieved this effect. After setting up the scene containing our panda and a ground plane, one of each possible light type is added to the scene. The general pattern we follow is to create new light instances before adding them to the scene using the attachNewNode() method. Finally, the light is turned on with setLight(), which causes the calling object and all of its children in the scene graph to receive light.
We use this to make the point light affect only the panda but not the ground plane. Shadows are very simple to enable and disable by using the setShadowCaster() method, as we can see in the code that initializes the spotlight. The line render.setShaderAuto() enables the shader generator, which causes the lighting to be calculated per pixel. Additionally, the shader generator needs to be enabled for shadows to work. If this line is removed, lighting will look coarser and no shadows will be visible at all.

Watch the number of lights you are adding to your scene! Every light that contributes to the scene adds additional computation cost, which will hit you if you intend to use hundreds of lights in a scene. Always try to detect the nearest lights in the level to use for lighting and disable the rest to save performance.

There's more...

In the sample code, we add several types of lights with different properties, which may need some further explanation.

Ambient light sets the base tone of a scene. It has no position or direction; the light color is just added to all surface colors in the scene, which prevents unlit parts of the scene from appearing completely black. You shouldn't set the ambient color to very high intensities, as this will decrease the effect of other lights and make the scene appear flat and washed out.

Directional lights do not have a position, as only their orientation counts. This light type is generally used to simulate sunlight: it comes from a general direction and affects all light-receiving objects equally.

A point light illuminates the scene from a point of origin from which light spreads towards all directions. You can think of it as a (very abstract) light bulb.

Spotlights, just like the headlights of a car or a flashlight, create a cone of light that originates from a given position and points towards a direction. The way the light spreads is determined by a lens, just like the viewing frustum of a camera.
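The advice to enable only the nearest lights can be sketched as a simple distance sort. This is a plain-Python stand-in for illustration; in Panda3D you would compare NodePath positions and call setLight()/clearLight() on the winners and losers accordingly.

```python
def nearest_lights(object_pos, lights, max_active):
    """Return the max_active lights closest to object_pos.
    lights is a list of (name, (x, y, z)) tuples."""
    def dist_sq(light):
        _, (x, y, z) = light
        ox, oy, oz = object_pos
        # Squared distance avoids a needless sqrt for ordering.
        return (x - ox) ** 2 + (y - oy) ** 2 + (z - oz) ** 2
    return sorted(lights, key=dist_sq)[:max_active]

lights = [("far", (100.0, 0.0, 0.0)),
          ("near", (2.0, 0.0, 0.0)),
          ("mid", (10.0, 0.0, 0.0))]
active = nearest_lights((0.0, 0.0, 0.0), lights, max_active=2)
# active holds the "near" and "mid" lights
```

Running a selection like this once per frame (or every few frames) keeps the per-pixel lighting cost bounded no matter how many lights the level contains.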
Using light ramps

The lighting system of Panda3D allows you to pull off some additional tricks to create dramatic effects with scene lights. In this recipe, you will learn how to use light ramps to modify the effect lights have on the models and actors in your game scenes.

Getting ready

In this recipe we will extend the code created in Adding lights and shadows, found earlier in this article. Please review that recipe before proceeding if you haven't done so yet.

How to do it...

Light ramps can be used like this:

Open Application.py and add and modify the existing code as shown:

    from direct.showbase.ShowBase import ShowBase
    from direct.actor.Actor import Actor
    from panda3d.core import *
    from direct.interval.IntervalGlobal import *

    class Application(ShowBase):
        def __init__(self):
            ShowBase.__init__(self)

            self.panda = Actor("panda", {"walk": "panda-walk"})
            self.panda.reparentTo(render)
            self.panda.loop("walk")

            cm = CardMaker("plane")
            cm.setFrame(-10, 10, -10, 10)
            plane = render.attachNewNode(cm.generate())
            plane.setP(270)

            self.cam.setPos(0, -40, 6)

            ambLight = AmbientLight("ambient")
            ambLight.setColor(Vec4(0.3, 0.2, 0.2, 1.0))
            ambNode = render.attachNewNode(ambLight)
            render.setLight(ambNode)

            dirLight = DirectionalLight("directional")
            dirLight.setColor(Vec4(0.3, 0.9, 0.3, 1.0))
            dirNode = render.attachNewNode(dirLight)
            dirNode.setHpr(60, 0, 90)
            render.setLight(dirNode)

            pntLight = PointLight("point")
            pntLight.setColor(Vec4(3.9, 3.9, 3.8, 1.0))
            pntNode = render.attachNewNode(pntLight)
            pntNode.setPos(0, 0, 15)
            self.panda.setLight(pntNode)

            sptLight = Spotlight("spot")
            sptLens = PerspectiveLens()
            sptLight.setLens(sptLens)
            sptLight.setColor(Vec4(1.0, 0.4, 0.4, 1.0))
            sptLight.setShadowCaster(True)
            sptNode = render.attachNewNode(sptLight)
            sptNode.setPos(-10, -10, 20)
            sptNode.lookAt(self.panda)
            render.setLight(sptNode)

            render.setShaderAuto()

            self.activeRamp = 0
            toggle = Func(self.toggleRamp)
            switcher = Sequence(toggle, Wait(3))
            switcher.loop()

        def toggleRamp(self):
            if self.activeRamp == 0:
                render.setAttrib(LightRampAttrib.makeDefault())
            elif self.activeRamp == 1:
                render.setAttrib(LightRampAttrib.makeHdr0())
            elif self.activeRamp == 2:
                render.setAttrib(LightRampAttrib.makeHdr1())
            elif self.activeRamp == 3:
                render.setAttrib(LightRampAttrib.makeHdr2())
            elif self.activeRamp == 4:
                render.setAttrib(LightRampAttrib.makeSingleThreshold(0.1, 0.3))
            elif self.activeRamp == 5:
                render.setAttrib(LightRampAttrib.makeDoubleThreshold(0, 0.1, 0.3, 0.8))
            self.activeRamp += 1
            if self.activeRamp > 5:
                self.activeRamp = 0

Press F6 to start the sample and see it switch through the available light ramps as shown in this screenshot:

How it works...

The original lighting equation that Panda3D uses to calculate the final screen color of a lit pixel limits color intensities to values within a range from zero to one. By using light ramps we are able to go beyond these limits, or even define our own, to create dramatic effects just like the ones we can see in the sample program. In the sample code, we increase the lighting intensity and add a method that switches between the available light ramps, beginning with LightRampAttrib.makeDefault(), which sets the default clamping thresholds for the lighting calculations.

Then, the high dynamic range ramps are enabled one after another. These light ramps allow a higher range of color intensities that go beyond the standard range between zero and one. These high intensities are then mapped back into the displayable range, allocating different portions of it to displaying brightness. By using makeHdr0(), we allocate a quarter of the displayable range to brightness values that are greater than one. With makeHdr1() it is a third, and with makeHdr2() we cause Panda3D to use half of the color range for overly bright values. This doesn't come without side effects, though.
By increasing the range used for high intensities, we are decreasing the range of color intensities available for displaying colors that are within the limits of 0 and 1, thus losing contrast and making the scene look grey and washed out. Finally, with the makeSingleThreshold() and makeDoubleThreshold() methods, we are able to create very interesting lighting effects. With a single threshold, lighting values below the given limit will be ignored, while anything that exceeds the threshold will be set to the intensity given in the second parameter of the method. The double threshold system works analogously to the single threshold, but lighting intensity will be normalized to one of two possible values, depending on which of the two thresholds was exceeded.
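The two threshold ramps can be mimicked in a few lines. This is a conceptual sketch of the thresholding behavior described above, not Panda3D's exact implementation; in particular, whether the boundary value itself counts as "exceeded" is an assumption here.

```python
def single_threshold(intensity, threshold, level):
    """Light below the threshold is ignored; anything above it
    is set to the given level."""
    return level if intensity >= threshold else 0.0

def double_threshold(intensity, t0, level0, t1, level1):
    """Normalize intensity to one of two levels, depending on
    which of the two thresholds was exceeded."""
    if intensity >= t1:
        return level1
    if intensity >= t0:
        return level0
    return 0.0

# Mirroring makeSingleThreshold(0.1, 0.3): any lit value >= 0.1 becomes 0.3.
dim = single_threshold(0.05, 0.1, 0.3)    # below the limit: ignored
bright = single_threshold(0.5, 0.1, 0.3)  # above the limit: set to 0.3

# Mirroring makeDoubleThreshold(0, 0.1, 0.3, 0.8).
low = double_threshold(0.2, 0, 0.1, 0.3, 0.8)
high = double_threshold(0.6, 0, 0.1, 0.3, 0.8)
```

Collapsing lighting to a couple of flat levels like this is what gives threshold ramps their cartoon-like, cel-shaded appearance.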

Introduction to Color Theory and Lighting Basics in Blender

Packt
14 Apr 2011
7 min read
Basic color theory

To fully understand how light works, we need a basic understanding of what color is and how different colors interact with each other. The study of this phenomenon is known as color theory.

What is color?

When light comes in contact with an object, the object absorbs a certain amount of that light. The rest is reflected into the eye of the viewer in the form of color. The easiest way to visualize colors and their relations is in the form of a color wheel.

Primary colors

There are millions of colors, but there are only three colors that cannot be created through color mixing: red, yellow, and blue. These are known as primary colors, and they are used to create the other colors on the color wheel through a process known as color mixing. Through color mixing, we get other "sets" of colors, including secondary and tertiary colors.

Secondary colors

Secondary colors are created when two primary colors are mixed together. For example, mixing red and blue makes purple, red and yellow make orange, and blue and yellow make green.

Tertiary colors

It's natural to assume that, because mixing two primary colors creates a secondary color, mixing two secondary colors would create a tertiary color. Surprisingly, this isn't the case. A tertiary color is, in fact, the result of mixing a primary and a secondary color together. This gives us the remainder of the color wheel:

Red-orange
Orange-yellow
Chartreuse
Turquoise
Indigo
Violet-red

Color relationships

There are other relationships between colors that we should know about before we start using Blender. The first is complementary colors. Complementary colors are colors that sit across from each other on the color wheel. For example, red and green are complements. Complementary colors are especially useful for creating contrast in an image, because mixing them together darkens the hue.
In a computer program, mixing perfect complements together will result in black, but mixing complements in a more traditional medium such as oil pastels results in more of a dark brown hue. In both situations, though, the complements are used to create a darker value. Be wary of using complementary colors in computer graphics: if complementary colors mix accidentally, the result will be black artifacts in images or animations.

The other color relationship that we should be aware of is analogous colors. Analogous colors are colors found next to each other on the color wheel. For example, red, red-orange, and orange are analogous. Here's the kicker: red, orange, and yellow can be analogous as well. A good rule to follow is that as long as you don't span more than one primary color on the color wheel, the colors are most likely considered analogous.

Color temperature

Understanding color temperature is an essential step in understanding how lights work; at the very least, it helps us understand why certain lights emit the colors they do. No light source emits a constant light wavelength. Even the sun, although considered a constant light source, is filtered by the atmosphere to various degrees based on the time of day, changing its perceived color. Color temperature is typically measured in degrees Kelvin (K), and spans a color range from a red to a blue hue, like in the image below:

Real world, real lights

So how is color applicable beyond a two-dimensional color wheel? In the real world, our eyes perceive color because light from the sun, which contains all colors in the visible color spectrum, is reflected off of objects in our field of vision. As light hits an object, some wavelengths are absorbed, while the rest are reflected. Those reflected rays determine the color we perceive that particular object to be. Of course, the sun isn't the only source of light we have.
There are many different types of natural and artificial light sources, each with its own unique properties. The most common types of light sources we may try to simulate in Blender include:

Candlelight
Incandescent light
Fluorescent light
Sunlight
Skylight

Candlelight

Candlelight is a source of light as old as time. It has been used for thousands of years and is still used today in many cases. The color temperature of a candle's light is about 1500 K, giving it a warm red-orange hue. Candlelight also has a tendency to create really high contrast between lit and unlit areas in a room, which creates a very successful dramatic effect.

Incandescent light bulbs

When most people hear the term "light bulb", the incandescent light bulb immediately comes to mind. It's also known as a tungsten-halogen light bulb. It's your typical household light bulb, burning at approximately 2800 K-3200 K. This color temperature still falls within the orange-yellow part of the spectrum, but it is noticeably brighter than the light of a candle.

Fluorescent light bulbs

Fluorescent lights are an alternative to incandescents. Also known as mercury vapor lights, fluorescents burn at a color temperature range of 3500 K-5900 K, allowing them to emit a color anywhere between a yellow and a white hue. They're commonly used to light a large area effectively, such as a warehouse, school hallway, or even a conference room.

The sun and the sky

Now let's take a look at some natural sources of light! The most obvious example is the sun. The sun burns at a color temperature of approximately 5500 K, giving it its bright white color. We rarely use pure white as a light's color in 3D, though, as it makes your scene look too artificial. Instead, we may choose to use a color that best suits the scene at hand. For example, if we are lighting a desert scene, we may choose a beige color to simulate light bouncing off the sand.
But even so, this still doesn't produce an entirely realistic effect. This is where the next source of light comes in: the sky. The sky can produce an entire array of colors from deep purple to orange to bright blue. It produces a color temperature range of 6000 K-20,000 K. That's a huge range! We can really use this to our advantage in our 3D scenes; the color of the sky can have the final say in what the mood of your scene ends up being.

Chromatic adaptation

What is chromatic adaptation? We're all more familiar with this process than you may realize. As light changes, the color we perceive from the world around us changes. To accommodate those changes, our eyes adjust what we see to something we're more familiar with (or what our brains would consider normal). When working in 3D you have to keep this in mind, because even though your 3D scene may be physically lit correctly, it may not look natural: the computer renders the final image objectively, without the chromatic adaptation that we, as humans, are used to.

Take this image for example. In the top image, the second card from the left appears to be a stronger shade of pink than the corresponding card in the bottom picture. Believe it or not, they are the exact same color, but because of the red hue of the second photo, our brains change how we perceive that image.
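The color temperatures quoted in this article can be turned into approximate light colors by interpolating between a few anchor hues. The sketch below is illustrative only: the RGB anchor values are assumptions chosen to match the descriptions above (red-orange candlelight, near-white sunlight, blue sky), not measured blackbody data.

```python
# (kelvin, (r, g, b)) anchors with 0..1 channels; illustrative values.
ANCHORS = [
    (1500.0, (1.00, 0.55, 0.20)),   # candlelight: warm red-orange
    (5500.0, (1.00, 1.00, 1.00)),   # sunlight: near white
    (20000.0, (0.60, 0.75, 1.00)),  # deep blue sky
]

def temperature_to_rgb(kelvin):
    """Linearly interpolate an approximate hue for a color temperature.
    Values outside the anchor range are clamped to the nearest anchor."""
    if kelvin <= ANCHORS[0][0]:
        return ANCHORS[0][1]
    if kelvin >= ANCHORS[-1][0]:
        return ANCHORS[-1][1]
    for (k0, c0), (k1, c1) in zip(ANCHORS, ANCHORS[1:]):
        if k0 <= kelvin <= k1:
            t = (kelvin - k0) / (k1 - k0)
            return tuple(a + (b - a) * t for a, b in zip(c0, c1))
```

Feeding a lamp's quoted temperature (say, 3000 K for an incandescent bulb) through a mapping like this gives a plausible starting color that you can then tint by eye to suit the scene's mood.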

Setting Up Panda3D and Configuring Development Tools

Packt
14 Apr 2011
7 min read
Panda3D 1.7 Game Developer's Cookbook

Panda3D is a very powerful and feature-rich game engine that comes with a lot of features needed for creating modern video games. Using Python as a scripting language to interface with the low-level programming libraries makes it easy to quickly create games, because this layer of abstraction neatly hides many of the complexities of handling assets, hardware resources, or graphics rendering. This also allows simple games and prototypes to be created very quickly and keeps the code needed for getting things going to a minimum.

Panda3D is a complete game engine package. This means that it is not just a collection of game programming libraries with a nice Python interface, but also includes all the supplementary tools for previewing, converting, and exporting assets as well as packing game code and data for redistribution. Delivering such tools is a very important aspect of a game engine that helps with increasing the productivity of a development team. The Panda3D engine is a very nice set of building blocks for creating entertainment software, scaling nicely to the needs of hobbyists, students, and professional game development teams. Panda3D is known to have been used in projects ranging from one-shot experimental prototypes to full-scale commercial MMORPG productions like Toontown Online or Pirates of the Caribbean Online.

Before you are able to start a new project and use all the powerful features provided by Panda3D to their fullest, though, you need to prepare your working environment and tools. By the end of this article, you will have a strong set of programming tools at hand, as well as the knowledge of how to configure Panda3D to your future projects' needs.

Downloading and configuring NetBeans to work with Panda3D

When writing code, having the right set of tools at hand and feeling comfortable when using them is very important.
Panda3D uses Python for scripting, and there are plenty of good integrated development environments available for this language, like IDLE, Eclipse, or Eric. Of course, Python code can be written using the excellent Vim or Emacs editors too. Tastes do differ, and every programmer has his or her own preferences when it comes to this decision. To make things easier and have a uniform working environment, however, we are going to use the free NetBeans IDE for developing Python scripts. This choice was made out of pure preference; any of the many great alternatives might be used as well for following through the recipes in this article, but may require different steps for the initial setup and getting samples to run. In this recipe we will install and configure the NetBeans integrated development environment to suit our needs for developing games with Panda3D using the Python programming language.

Getting ready

Before beginning, be sure to download and install Panda3D. To download the engine SDK and tools, go to www.panda3d.org/download.php:

The Panda3D Runtime for End-Users is a prebuilt redistributable package containing a player program and a browser plugin. These can be used to easily run packaged Panda3D games. Under Snapshot Builds, you will be able to find daily builds of the latest version of the Panda3D engine. These are to be handled with care, as they are not meant for production purposes. Finally, the link labeled Panda3D SDK for Developers is the one you need to follow to retrieve a copy of the Panda3D development kit and tools. This will always take you to the latest release of Panda3D, which at this time is version 1.7.0. This version was marked as unstable by the developers but has been working in a stable way for this article. This version also added a great number of interesting features, like the web browser plugin, an advanced shader and graphics pipeline, and built-in shadow effects, which really are worth a try.
Click the link that says Panda3D SDK for Developers to reach the page shown in the following screenshot. Here you can select one of the SDK packages for the platforms that Panda3D is available on. This article assumes a setup of NetBeans on Windows, but most of the samples should work on the alternative platforms too, as most of Panda3D's features have been ported to all of these operating systems. To download and install the Panda3D SDK, click the Panda3D SDK 1.7.0 link at the top of the page and download the installer package. Launch the program and follow the installation wizard, always choosing the default settings. In this and all of the following recipes we'll assume the install path to be C:\Panda3D-1.7.0, which is the default installation location. If you chose a different location, it might be a good idea to note the path and be prepared to adapt the presented file and folder paths to your needs!

How to do it...

Follow these steps to set up your Panda3D game development environment:

Point your web browser to netbeans.org and click the prominent Download FREE button.
Ignore the big table showing all kinds of different versions on the following page and scroll down. Click the link that says JDK with NetBeans IDE Java SE bundle. This will take you to the following page as shown here. Click the Downloads link to the right to proceed.
You will find yourself at another page, as shown in the screenshot. Select Windows in the Platform dropdown menu and tick the checkbox to agree to the license agreement. Click the Continue button to proceed.
Follow the instructions on the next page. Click the file name to start the download.
Launch the installer and follow the setup wizard.
Once installed, start the NetBeans IDE.
In the main toolbar click Tools | Plugins.
Select the tab that is labeled Available Plugins.
Browse the list until you find Python and tick the checkbox next to it.
Click Install.
This will start a wizard that downloads and installs the necessary features for Python development. At the end of the installation wizard you will be prompted to restart the NetBeans IDE, which will finish the setup of the Python feature. Once NetBeans reappears on your screen, click Tools | Python Platforms. In the Python Platform Manager window, click the New button and browse for the file C:\Panda3D-1.7.0\python\ppython.exe. Select Python 2.6.4 from the platforms list and click the Make Default button. Your settings should now reflect the ones shown in the following screenshot. Finally, select the Python Path tab and once again compare your settings to the screenshot. Click the Close button and you are done!

How it works...

In the preceding steps we configured NetBeans to use the Python runtime that drives the Panda3D engine, and as we can see, it is very easy to install and set up our working environment for Panda3D.

There's more...

Unlike other game engines, Panda3D follows an interesting approach in its internal architecture. While the more common approach is to embed a scripting runtime into the game engine's executable, Panda3D uses the Python runtime as its main executable. The engine modules handling such things as loading assets, rendering graphics, or playing sounds are implemented as native extension modules. These are loaded by Panda3D's custom Python interpreter as needed when we use them in our script code. Essentially, the architecture of Panda3D turns the hierarchy between native code and the scripting runtime upside down. While in other game engines native code initiates calls to the embedded scripting runtime, Panda3D shifts the direction of program flow: the Python runtime is the core element of the engine that lets script code initiate calls into native programming libraries. To understand Panda3D, it is important to understand this architectural decision.
Whenever we start the ppython executable, we start up the Panda3D engine. If you ever get into a situation where you are compiling your own Panda3D runtime from source code, don't forget to revisit steps 13 to 17 of this recipe to configure NetBeans to use your custom runtime executable!
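Since the engine lives inside its own interpreter, a quick sanity check from script code can tell you whether you launched via ppython or a stock Python. This is a hypothetical heuristic for illustration only; it simply inspects the interpreter's file name and is not an official Panda3D API.

```python
import os
import sys

def running_under_ppython():
    """Heuristic check: Panda3D's custom interpreter is typically
    named ppython (ppython.exe on Windows)."""
    executable = os.path.basename(sys.executable).lower()
    return executable.startswith("ppython")

if not running_under_ppython():
    print("Warning: this does not look like Panda3D's ppython interpreter")
```

A check like this at the top of a script can save confusion when engine imports mysteriously fail because the wrong interpreter was configured in the IDE.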

Flash Game Development: Making of Astro-PANIC!

Packt
11 Apr 2011
10 min read
Flash Game Development by Example: Build 10 classic Flash games and learn game development along the way.

Astro-PANIC! was released as an all-machine-language Commodore 64 game to be typed in from the February 1984 issue of COMPUTE!'s Gazette magazine. At that time there weren't any blogs with source code to download or copy/paste into your projects, so the only way to learn from other programmers was buying computer magazines and typing the example code into your computer. Since I suppose you have never played this game, I recommend you play it a bit at http://www.freewebarcade.com/game/astro-panic/.

Defining game design

Here are the rules to design our Astro-PANIC! prototype:

The player controls a spaceship with the mouse, being able to move it horizontally at the bottom of the screen.
At each level, a given number of enemy spaceships appear and roam around the stage at a constant speed in a constant direction.
Enemies cannot leave the stage, and they will bounce inside it as they touch stage edges.
Enemies don't shoot, and the only way they can kill the player is by touching the spaceship.
The player can only have one bullet on stage at any time, and hitting an enemy with the bullet will destroy it.
Destroying all enemies means passing the level, and at each level the number of enemies and their speed increases.

These are the basic rules. We'll add some minor improvements during the design of the game itself, but before we start drawing the graphics, keep in mind we'll design something with the look and feel of old coin-op monitors, with bright glowing graphics.

Creating the game and drawing the graphics

Create a new file (File | New). Then, from the New Document window, select ActionScript 3.0. Set its properties as width: 640 px, height: 480 px, background color: #000000 (black), and frame rate: 60. Also define the Document Class as Main and save the file as astro-panic.fla.
Though 30 frames per second is the ideal choice for smooth animations, we will use 60 frames per second to create a very fast paced game. There are three actors in this game: the player-controlled spaceship, the bullet and the enemy. In astro-panic.fla, create three new Movie Clip symbols and call them spaceship_mc for the spaceship, bullet_mc for the bullet, and enemy_mc for the enemy. Set them all as exportable for ActionScript. Leave all other settings at their default values, just like you did in previous article on Tetris. From left to right: The spaceship (spaceship_mc), the bullet (bullet_mc), and the enemy (enemy_mc). I made all assets with the shape of a circle. The spaceship is half a circle with a radius of 30 pixels, the bullet is a circle with a 4 pixels radius, and the enemy is a circle with a radius of 25 pixels. All of them have the registration point in their centers, and enemy_mc has a dynamic text field in it called level. If you've already met dynamic text fields during the making of Minesweeper it won't be a problem to add it. At the moment I am writing a couple of zeros to test how the dynamic text field fits in the enemy shape. Now we are ready to code. Adding and controlling the spaceship As usual we know we are going to use classes to manage both enter frame and mouse click events, so we'll import all the required classes immediately. The spaceship is controlled with the mouse, but can only move along x-axis. Without closing astro_panic.fla, create a new file and from New Document window select ActionScript 3.0 Class. Save this file as Main.as in the same path you saved astro_panic.fla. 
Then write: package { import flash.display.Sprite; import flash.events.Event; import flash.events.MouseEvent; public class Main extends Sprite { private var spaceship:spaceship_mc; public function Main() { placeSpaceship(); addEventListener(Event.ENTER_FRAME,onEnterFrm); } private function placeSpaceship():void { spaceship=new spaceship_mc(); addChild(spaceship); spaceship.y=479; } private function onEnterFrm(e:Event):void { spaceship.x=mouseX; if (spaceship.x<30) { spaceship.x=30; } if (spaceship.x>610) { spaceship.x=610; } } }} At this point you should know everything about the concept behind this script. placeSpaceship is the function which constructs, adds to the Display List, and places the spaceship_mc DisplayObject called spaceship. In the onEnterFrm function we simply set the spaceship's x position to the mouse's x position. We don't want the spaceship to hide in a corner, so it won't be able to follow the mouse if it gets too close to the stage edges. Test the movie and move the mouse. Your spaceship will follow it, while being bound to the ground. Now we should give the spaceship an old arcade look. Adding a glow filter AS3 allows us to dynamically apply a wide range of filters to DisplayObjects on the fly. We'll add a glow filter to simulate the glowing pixels of old arcade monitors. The flash.filters.GlowFilter class lets us apply a glow effect to DisplayObjects. First, we need to import it. import flash.display.Sprite;import flash.events.Event;import flash.events.MouseEvent;import flash.filters.GlowFilter; Now we can simply create a new variable to construct a GlowFilter object. Change placeSpaceship this way: private function placeSpaceship():void { ... var glow:GlowFilter=new GlowFilter(0x00FFFF,1,6,6,2,2); spaceship.filters=new Array(glow);} We specify the color as 0x00FFFF (cyan, to match the spaceship), the alpha (1 = full opacity), and the amount of horizontal and vertical blur (both 6).
I want you to notice that I used 6 for horizontal and vertical blur because I like the effect I achieve with that value. If you are planning to use a lot of filters, remember that values that are a power of 2 (such as 4 and 8, but not 6) render more quickly than other values. The remaining two arguments are the strength, which determines the spread of the filter (if you use Photoshop, it's something like the spread and size of the glow filter you can apply to layers), and the quality. Quality can range from 1 to 15, but values higher than 3 may affect performance, and the same final effect can be achieved by playing with the blur values. Finally the filter is added. spaceship.filters=new Array(glow); The DisplayObject's filters property expects an array with all the filters you want to associate with the DisplayObject. In our case, we are adding only one filter, but we have to include it in the array anyway. Test the movie and you will see your spaceship glow. In the previous picture, you can see the difference between the spaceship without and with the glow effect applied. Now your spaceship is ready to fire. Making the spaceship fire Nobody would face an alien invasion with a harmless spaceship, so we are going to make it fire. We need a variable to manage the bullet_mc DisplayObject, and since the spaceship can fire only one bullet at a time, we need another variable to tell us if the spaceship is already firing. If it's firing, it cannot fire. If it's not firing, it can fire. Add two new class-level variables: private var spaceship:spaceship_mc;private var isFiring:Boolean=false;private var bullet:bullet_mc; isFiring is the Boolean variable that we'll use to determine if the spaceship is firing; false means it's not firing. bullet will represent the bullet itself.
The player will be able to fire with a mouse click, so a listener is needed in the Main function: public function Main() { placeSpaceship(); addEventListener(Event.ENTER_FRAME,onEnterFrm); stage.addEventListener(MouseEvent.CLICK,onMouseCk);} Now every time the player clicks the mouse, the onMouseCk function is called. This is the function: private function onMouseCk(e:MouseEvent):void { if (! isFiring) { placeBullet(); isFiring=true; }} It's very easy: if isFiring is false (the spaceship isn't already firing), the placeBullet function is called to physically place a bullet, then isFiring is set to true because now the spaceship is firing. The placeBullet function itself isn't complex: private function placeBullet():void { bullet=new bullet_mc(); addChild(bullet); bullet.x=spaceship.x; bullet.y=430; var glow:GlowFilter=new GlowFilter(0xFF0000,1,6,6,2,2); bullet.filters=new Array(glow);} It's very similar to the placeSpaceship function: the bullet is created, added to the Display List, placed on screen, and a red glow effect is added. The only thing worth explaining is the concept behind the x and y properties: bullet.x=spaceship.x; Setting the bullet's x property equal to the spaceship's x property places the bullet exactly where the spaceship is at the moment of firing. bullet.y=430; 430 is a good y value to make the bullet seem as if it were just fired from the turret. Test the movie, and you will be able to fire a bullet with a mouse click. For the moment, the bullet remains static at the point where we fired it. Making the bullet fly To make the bullet fly, we have to define its speed and move it upwards. Then we'll remove it once it leaves the stage and reset isFiring to false to let the player fire again. Add a constant to the class-level variables: private const BULLET_SPEED:uint=5;private var spaceship:spaceship_mc;private var isFiring:Boolean=false;private var bullet:bullet_mc; BULLET_SPEED is the number of pixels the bullet will fly at each frame.
We won't manage upgrades or power-ups, so we can say its value will never change. That's why it's defined as a constant. To manage bullet movement, we need to add some lines at the end of the onEnterFrm function. You may wonder why we are managing both the spaceship and the bullet inside the same class rather than creating a separate class for each one. You'll discover why when you manage the enemies' movement, later in this article. Meanwhile, add this code to the onEnterFrm function. private function onEnterFrm(e:Event):void { ... if (isFiring) { bullet.y-=BULLET_SPEED; if (bullet.y<0) { removeChild(bullet); bullet=null; isFiring=false; } }} The new code is executed only if isFiring is true, and we are sure we have a bullet on stage when isFiring is true. bullet.y-=BULLET_SPEED; moves the bullet upward by BULLET_SPEED pixels. if (bullet.y<0) { ... } This if statement checks whether the bullet's y property is less than 0, which means the bullet flew off the screen. In this case we physically remove the bullet from the game with removeChild(bullet);bullet=null; and we give the player the ability to fire again with isFiring=false; Test the movie and fire: now your bullets will fly until they reach the top of the stage, and then you will be able to fire again. Since nobody wants to fire for the sake of firing, we'll add some enemies to shoot down.
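The firing rules above (one bullet at a time, constant upward speed, reset when the bullet leaves the stage) are easy to verify outside Flash. Below is a minimal Python sketch of the same state machine; the Shooter class and its names are invented for illustration and are not part of the ActionScript project:

```python
BULLET_SPEED = 5   # pixels per frame, mirroring the AS3 constant
STAGE_TOP = 0      # y coordinate of the top of the stage

class Shooter:
    """One-bullet-at-a-time firing logic, as in the Main class."""
    def __init__(self):
        self.is_firing = False   # plays the role of isFiring
        self.bullet_y = None     # plays the role of bullet.y

    def fire(self, start_y=430):
        # Only one bullet may exist on stage at any time.
        if not self.is_firing:
            self.is_firing = True
            self.bullet_y = start_y

    def update(self):
        # Called once per frame, like the enter-frame handler.
        if self.is_firing:
            self.bullet_y -= BULLET_SPEED
            if self.bullet_y < STAGE_TOP:
                self.bullet_y = None      # removeChild(bullet); bullet=null
                self.is_firing = False    # the player may fire again

s = Shooter()
s.fire()
s.fire()   # ignored: a bullet is already on stage
frames = 0
while s.is_firing:
    s.update()
    frames += 1
print(frames)   # 87: the 430-pixel flight at 5 px per frame
```

At 60 frames per second, those 87 frames are roughly a second and a half of flight time before the player may fire again.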

How to Create an OpenSceneGraph Application

Packt
07 Apr 2011
11 min read
OpenSceneGraph 3.0: Beginner's Guide Constructing your own projects To build an executable program from your own source code, a platform-dependent solution or makefile is always required. At the beginning of this article, we are going to introduce another way to construct platform-independent projects with the CMake system, which lets us focus on interacting with the code and ignore the painstaking compiling and building process. Time for action – building applications with CMake Before constructing your own project with CMake scripts, it could be helpful to keep the headers and source files together in an empty directory first. The second step is to create a CMakeLists.txt file using any text editor, and then start writing some simple CMake build rules. The following code will implement a project with additional OSG headers and dependency libraries. Please enter it into the newly-created CMakeLists.txt file: cmake_minimum_required( VERSION 2.6 ) project( MyProject ) find_package( OpenThreads ) find_package( osg ) find_package( osgDB ) find_package( osgUtil ) find_package( osgViewer ) macro( config_project PROJNAME LIBNAME ) include_directories( ${${LIBNAME}_INCLUDE_DIR} ) target_link_libraries( ${PROJNAME} ${${LIBNAME}_LIBRARY} ) endmacro() add_executable( MyProject main.cpp ) config_project( MyProject OPENTHREADS ) config_project( MyProject OSG ) config_project( MyProject OSGDB ) config_project( MyProject OSGUTIL ) config_project( MyProject OSGVIEWER ) We have only added a main.cpp source file here, which is made up of the "Hello World" example and will be compiled to generate an executable file named MyProject. This small project depends on five major OSG components. All of these configurations can be modified to meet certain requirements and different user applications. Next, start cmake-gui and drag your CMakeLists.txt into the GUI. You may not be familiar with the CMake scripts to be executed yet.
However, the CMake wiki will be helpful for further understanding: http://www.cmake.org/Wiki/CMake. Create and build a Visual Studio solution or a makefile. The only point is that you have to ensure that your CMake version is equal to or greater than 2.6, and make sure you have the OSG_ROOT environment variable set. Otherwise, the find_package() macro may not be able to find OSG installations correctly. The following image shows the unexpected errors encountered because OSG headers and libraries were not found in the path indicated by OSG_ROOT (or the variable was simply missing): Note that there is no INSTALL project in the Visual Studio solution, or any make install command to run at this time, because we didn't write CMake scripts for post-build installation. You can just run the executable file in the build directory directly. What just happened? CMake provides easy-to-read commands to automatically find dependencies for user projects. It will check preset directories and environment variables to see if there are any headers and libraries for the required package. The environment variable OSG_ROOT (OSG_DIR works, too) helps CMake look for OSG under Windows and UNIX, as CMake will first search the valid paths defined in it and check whether prebuilt OSG headers and libraries exist in those paths. Have a go hero – testing with different generators Just try a series of tests to generate your project, using Visual Studio, MinGW, and the UNIX gcc compiler. You will find that CMake is a convenient tool for building binary files from source code on different platforms. Maybe this is also a good start to learning programming in a multi-platform style. Using a root node Now we are going to write some code and build it with a self-created CMake script. We will again make a slight change to the frequently-used "Hello World" example.
Time for action – improving the "Hello World" example The included headers, <osgDB/ReadFile> and <osgViewer/Viewer>, do not need to be modified. We only add a root variable that provides runtime access to the Cessna model, and pass it to the setSceneData() method. In the main entry, record the Cessna model with a variable named root: osg::ref_ptr<osg::Node> root = osgDB::readNodeFile("cessna.osg"); osgViewer::Viewer viewer; viewer.setSceneData( root.get() ); return viewer.run(); Build and run it at once: You will see no difference between this example and the previous "Hello World". So what actually happened? What just happened? In this example, we introduced two new OSG classes: osg::ref_ptr<> and osg::Node. The osg::Node class represents the basic element of a scene graph. The variable root stands for the root node of a Cessna model, which is used as the scene data to be visualized. Meanwhile, an instance of the osg::ref_ptr<> class template is created to manage the node object. It is a smart pointer, which provides additional features for the purpose of efficient memory management. Understanding memory management In a typical programming scenario, the developer should create a pointer to the root node, which directly or indirectly manages all other child nodes of the scene graph. In that case, the application will have to traverse the scene graph and carefully delete each node and its internal data when they no longer need to be rendered. This process is tiresome and error-prone, involving the debugging of broken trees and wild pointers, because developers can never know how many other objects still keep a pointer to the one being deleted. However, without writing the management code, data segments occupied by scene nodes will never be deleted, which will lead to unexpected memory leaks. This is why memory management is important in OSG programming.
A basic concept of memory management always involves two topics: Allocation: Providing the memory needed by an object, by allocating the required memory block. Deallocation: Recycling the allocated memory for reuse, when its data is no longer used. Some modern languages, such as C#, Java, and Visual Basic, use a garbage collector to free memory blocks that are unreachable from any program variables. That is, they track the number of references to each memory block and deallocate the memory when the count drops to zero. The standard C++ approach does not work in such a way, but we can mimic it by means of a smart pointer, which is defined as an object that acts like a pointer, but is much smarter in the management of memory. For example, the boost library provides the boost::shared_ptr<> class template to store pointers to dynamically allocated objects. ref_ptr<> and Referenced classes Fortunately, OSG also provides a native smart pointer, osg::ref_ptr<>, for the purpose of automatic garbage collection and deallocation. To make it work properly, OSG also provides the osg::Referenced class to manage reference-counted memory blocks, which is used as the base class of any class that may serve as the template argument. The osg::ref_ptr<> class template re-implements a number of C++ operators as well as member functions, and thus provides convenient methods to developers. Its main components are as follows: get(): This public method returns the managed pointer, for instance, the osg::Node* pointer if you are using osg::Node as the template argument. operator*(): This is actually a dereference operator, which returns an l-value at the pointer address, for instance, the osg::Node& reference variable. operator->() and operator=(): These operators allow a user application to use osg::ref_ptr<> as a normal pointer. The former calls member functions of the managed object, and the latter replaces the current managed pointer with a new one.
operator==(), operator!=(), and operator!(): These operators help to compare smart pointers, or check if a certain pointer is invalid. An osg::ref_ptr<> object with a NULL value assigned, or without any assignment, is considered invalid. valid(): This public method returns true if the managed pointer is not NULL. The expression some_ptr.valid() is equivalent to some_ptr!=NULL if some_ptr is defined as a smart pointer. release(): This public method is useful when returning the managed address from a function. The osg::Referenced class is the pure base class of all elements in a scene graph, such as nodes, geometries, rendering states, and any other allocatable scene objects. The osg::Node class actually inherits from osg::Referenced indirectly. This is the reason why we program as follows: osg::ref_ptr<osg::Node> root; The osg::Referenced class contains an integer counter for the memory block allocated. The reference count is initialized to 0 in the class constructor, and will be increased by 1 if the osg::Referenced object is referred to by an osg::ref_ptr<> smart pointer. On the contrary, the number will be decreased by 1 if the object is removed from a certain smart pointer. The object itself will be automatically destroyed when it is no longer referenced by any smart pointers. The osg::Referenced class provides three main member methods: The public method ref() increases the reference count by 1 The public method unref() decreases the reference count by 1 The public method referenceCount() returns the value of the current reference count, which is useful for code debugging These methods also work for classes that are derived from osg::Referenced. Note that it is very rarely necessary to call ref() or unref() directly in user programs, because doing so means the reference count is managed manually and may conflict with what osg::ref_ptr<> is going to do.
Otherwise, OSG's internal garbage collecting system will get the wrong number of smart pointers in use and even crash when managing memory blocks in an improper way. Collecting garbage: why and how Here are some reasons for using smart pointers and the garbage collection system in programming: Fewer bugs: Using smart pointers means the automatic initialization and cleanup of pointers. No dangling pointers will be created because they are always reference-counted. Efficient management: Objects will be reclaimed as soon as they are no longer referenced, which gives more available memory to applications with limited resources. Easy to debug: We can easily obtain the referenced counting number and other information on objects, and then apply other optimizations and experiments. For instance, a scene graph tree is composed by a root node and multiple levels of child nodes. Assuming that all children are managed with osg::ref_ptr<>, user applications may only keep the pointer to the root node. As is illustrated by the following image, the operation of deleting the root node pointer will cause a cascading effect that will destroy the whole node hierarchy: Each node in the example scene graph is managed by its parent, and will automatically be unreferenced during the deletion of the parent node. This node, if no longer referenced by any other nodes, will be destroyed immediately, and all of its children will be freed up. The entire scene graph will finally be cleaned without worries after the last group node or leaf node is deleted. The process is really convenient and efficient, isn't it? Please make sure the OSG smart pointer can work for you, and use a class derived from osg::Referenced as the osg::ref_ptr<> template argument, and correctly assign newly-allocated objects to smart pointers. 
A smart pointer can be used either as a local variable, a global variable, or a class member variable, and will automatically decrease the referenced counting number when reassigned to another object or moved out of the smart pointer's declaration scope. It is strongly recommended that user applications always use smart pointers to manage their scenes, but there are still some issues that need special attention: osg::Referenced and its derivatives should be created from the heap only. They cannot be used as local variables because class destructors are declared protected internally for safety. For example: osg::ref_ptr<osg::Node> node = new osg::Node; // this is legal osg::Node node; // this is illegal! A regular C++ pointer is still workable temporarily. But user applications should remember to assign it to osg::ref_ptr<> or add it to a scene graph element (almost all OSG scene classes use smart pointers to manage child objects) in the end, as it is always the safest approach. osg::Node* tmpNode = new osg::Node; // this is OK ... osg::ref_ptr<osg::Node> node = tmpNode; // Good finish! Don't play with reference cycles, as the garbage collecting mechanism cannot handle it. A reference cycle means that an object refers to itself directly or indirectly, which leads to an incorrect calculation of the referenced counting number. The scene graph shown in the following image contains two kinds of reference cycles, which are both invalid. The node Child 1.1 directly adds itself as the child node and will form a dead cycle while traversing to its children, because it is the child of itself, too! The node Child 2.2, which also makes a reference cycle indirectly, will cause the same problem while running: Now let's have a better grasp of the basic concepts of memory management, through a very simple example.  
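The counting scheme described above can be mimicked in a few lines of Python. This is only a conceptual sketch: the class names below are invented, Python normally does its own reference counting, and the real osg::Referenced deletes the object rather than setting a flag.

```python
class Referenced:
    """Toy analogue of osg::Referenced: an object with a reference count."""
    def __init__(self):
        self._count = 0
        self.destroyed = False

    def ref(self):
        self._count += 1

    def unref(self):
        self._count -= 1
        if self._count == 0:
            self.destroyed = True   # stands in for 'delete this'

    def reference_count(self):
        return self._count

class RefPtr:
    """Toy analogue of osg::ref_ptr<>: ref() on acquire, unref() on release."""
    def __init__(self, target=None):
        self._target = None
        self.assign(target)

    def assign(self, target):
        # Mirrors operator=(): reference the new object, unreference the old.
        if target is not None:
            target.ref()
        if self._target is not None:
            self._target.unref()
        self._target = target

    def valid(self):
        return self._target is not None

node = Referenced()
p1, p2 = RefPtr(node), RefPtr(node)
print(node.reference_count())   # 2
p1.assign(None)
p2.assign(None)
print(node.destroyed)           # True: the last reference is gone
```

The cascading cleanup of a scene graph follows from exactly this rule: when the root's count reaches zero it releases its children, whose counts then reach zero in turn.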
ZBrush 4: how to model a creature with ZSketch

Packt
06 Apr 2011
4 min read
What the creature looks like The Brute is a crossbreed between a harmless emu and a wild forest bear. It is a roaming rogue living in the deepest forests. Only a few people have seen it and survived, so it's said to be between three and eight meters high. Despite its size, it combines strength and agility in a dangerous way. It is said that it hides its rather cute-looking head with trophies of its victims. ZSketching a character In this workflow, we can think of our ZSpheres as a skeleton we can place our virtual clay onto. So we try to build the armature not as thick as the whole arm, but as thick as the underlying bone would be. With that in mind, let's get started. Time for action – creating the basic armature with ZSpheres Let's say the art director comes to your desk and shows you a concept of a monster to be created for the game project that you're working on. As always, there's little to no time for this task. Don't panic; just sketch it with ZSketch in no time. Let's see how this works: Pick a new ZSphere and align its rotation by holding Shift. Set your draw size down to 1. Activate Symmetry on the X-axis. The root ZSphere can't be deleted without deleting everything else, so the best place for it would be in the pelvis area. Placing the cursor on the line of symmetry will create a single ZSphere—this is indicated by the cursor turning green. Start creating the armature, or skeleton, from the root ZSphere, working from the pelvis up to the head, as shown in the next screenshot. Similar to the human spine, it roughly follows an S-curve: Continue by adding the shoulders. A little trick is to start the clavicle bone a bit lower at the spine, which gives a more natural motion in the shoulder area. Add the arms, with the fingers as one ZSphere plus the thumbs; we'll refine them later.
The arms should be lowered and bent so that we're able to judge the overall proportions better, as the next image shows: This "skeleton" will also be used for moving or posing our model, so we'll try to place ZSpheres where our virtual joints would be, for example, at the elbow joint. Add the hips, stretching out from the pelvis, and continue with the legs. Try to bend the legs a bit (which looks more natural) as shown in the next screenshot. Finally, add the foot as one ZSphere for the creature to stand on: Now we have all the basic features of the armature ready. Let's check the concept again to get our character's proportions right. Because our character is more of a compact, bulky build, we have to shorten his legs and neck a bit. Make sure to check the perspective view, too. Inside any game engine, characters will be viewed in perspective. We can also set the focal angle under Draw | FocalAngle. The default value is 50. Switching perspective off helps when comparing lengths. Add another ZSphere in the belly area to better control its mass, even if it looks embarrassing. To make him look less like Anubis, you may want to lower the top-most ZSphere a bit, so it will fit the horns. Our revised armature could now look like this with perspective enabled: With the overall proportions done, let's move on to details, starting with the toes. Insert another ZSphere next to the heels and continue by adding the toes, including the tiny fourth toe, as shown in the next screenshot: With larger ZSpheres, we can better judge the mass of the foot. But because we need a thinner, bone-like structure, let's scale them down once we're done. Be careful to scale the ZSpheres, and not the Link spheres in-between them. This keeps them in place while scaling, as shown in the next image:

Collision Detection and Physics in Panda3D Game Development

Packt
30 Mar 2011
12 min read
Panda3D 1.7 Game Developer's Cookbook Over 80 recipes for developing 3D games with Panda3D, a full-scale 3D game engine In a video game, the game world or level defines the boundaries within which the player is allowed to interact with the game environment. But how do we enforce these boundaries? How do we keep the player from running through walls? This is where collision detection and response come into play. Collision detection and response not only allow us to keep players from passing through the level boundaries, but also are the basis for many forms of interaction. For example, lots of actions in games are started when the player hits an invisible collision mesh, called a trigger, which initiates a scripted sequence as a response to the player entering its boundaries. Simple collision detection and response form the basis for nearly all forms of interaction in video games. It’s responsible for keeping the player within the level, for crates being pushable, for telling if and where a bullet hit the enemy. What if we could add some extra magic to the mix to make our games even more believable, immersive, and entertaining? Let’s think again about pushing crates around: What happens if the player pushes a stack of crates? Do they just move like they have been glued together, or will they start to tumble and eventually topple over? This is where we add physics to the mix to make things more interesting, realistic, and dynamic. In this article, we will take a look at the various collision detection and physics libraries that the Panda3D engine allows us to work with. Putting in some extra effort, we will also see that it is not very hard to integrate a physics engine that is not part of the Panda3D SDK. Using the built-in collision detection system Not all problems concerning world and player interaction need to be handled by a fully fledged physics API—sometimes a much more basic and lightweight system is just enough for our purposes. 
This is why in this recipe we dive into the collision handling system that is built into the Panda3D engine. Getting ready This recipe relies upon the project structure created in Setting up the game structure (code download-Ch:1), Setting Up Panda3D and Configuring Development Tools. How to do it... Let’s go through this recipe’s tasks: Open Application.py and add the include statements as well as the constructor of the Application class: from direct.showbase.ShowBase import ShowBase from panda3d.core import * import random class Application(ShowBase): def __init__(self): ShowBase.__init__(self) self.cam.setPos(0, -50, 10) self.setupCD() self.addSmiley() self.addFloor() taskMgr.add(self.updateSmiley, "UpdateSmiley") Next, add the method that initializes the collision detection system: def setupCD(self): base.cTrav = CollisionTraverser() base.cTrav.showCollisions(render) self.notifier = CollisionHandlerEvent() self.notifier.addInPattern("%fn-in-%in") self.accept("frowney-in-floor", self.onCollision) Next, implement the method for adding the frowney model to the scene: def addSmiley(self): self.frowney = loader.loadModel("frowney") self.frowney.reparentTo(render) self.frowney.setPos(0, 0, 10) self.frowney.setPythonTag("velocity", 0) col = self.frowney.attachNewNode(CollisionNode("frowney")) col.node().addSolid(CollisionSphere(0, 0, 0, 1.1)) col.show() base.cTrav.addCollider(col, self.notifier) The following methods will add a floor plane to the scene and handle the collision response: def addFloor(self): floor = render.attachNewNode(CollisionNode("floor")) floor.node().addSolid(CollisionPlane(Plane(Vec3(0, 0, 1), Point3(0, 0, 0)))) floor.show() def onCollision(self, entry): vel = random.uniform(0.01, 0.2) self.frowney.setPythonTag("velocity", vel) Add this last piece of code. 
This will make the frowney model bounce up and down: def updateSmiley(self, task): vel = self.frowney.getPythonTag("velocity") z = self.frowney.getZ() self.frowney.setZ(z + vel) vel -= 0.001 self.frowney.setPythonTag("velocity", vel) return task.cont Hit the F6 key to launch the program: How it works... We start off by adding some setup code that calls the other initialization routines. We also add the task that will update the smiley’s position. In the setupCD() method, we initialize the collision detection system. To be able to find out which scene objects collided and issue the appropriate responses, we create an instance of the CollisionTraverser class and assign it to base.cTrav. The variable name is important, because this way, Panda3D will automatically update the CollisionTraverser every frame. The engine checks if a CollisionTraverser was assigned to that variable and will automatically add the required tasks to Panda3D’s update loop. Additionally, we enable debug drawing, so collisions are being visualized at runtime. This will overlay a visualization of the collision meshes the collision detection system uses internally. In the last lines of setupCD(), we instantiate a collision handler that sends a message using Panda3D’s event system whenever a collision is detected. The method call addInPattern(“%fn-in-%in”) defines the pattern for the name of the event that is created when a collision is encountered the first time. %fn will be replaced by the name of the object that bumps into another object that goes by the name that will be inserted in the place of %in. Take a look at the event handler that is added below to get an idea of what these events will look like. After the code for setting up the collision detection system is ready, we add the addSmiley() method, where we first load the model and then create a new collision node, which we attach to the model’s node so it is moved around together with the model. 
We also add a sphere collision shape, defined by its local center coordinates and radius. This is the shape that defines the boundaries; the collision system will test against it to determine whether two objects have touched. To complete this step, we register our new collision node with the collision traverser and configure it to use the collision handler that sends events as a collision response. Next, we add an infinite floor plane and add the event handling method for reacting on collision notifications. Although the debug visualization shows us a limited rectangular area, this plane actually has an unlimited width and height. In our case, this means that at any given x- and y-coordinate, objects will register a collision when any point on their bounding volume reaches a z-coordinate of 0. It’s also important to note that the floor is not registered as a collider here. This is contrary to what we did for the frowney model and guarantees that the model will act as the collider, and the floor will be treated as the collidee when a contact between the two is encountered. While the onCollision() method makes the smiley model go up again, the code in updateSmiley() constantly drags it downwards. Setting the velocity tag on the frowney model to a positive or negative value, respectively, does this in these two methods. We can think of that as forces being applied. Whenever we encounter a collision with the ground plane, we add a one-shot bounce to our model. But what goes up must come down, eventually. Therefore, we continuously add a gravity force by decreasing the model’s velocity every frame. There’s more... This sample only touched a few of the features of Panda3D’s collision system. The following sections are meant as an overview to give you an impression of what else is possible. For more details, take a look into Panda3D’s API reference. 
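The interplay between updateSmiley() (a constant downward pull) and onCollision() (an upward impulse at the floor) can be reproduced with plain numbers, without Panda3D. The following Python sketch is a hypothetical stand-in that uses a fixed bounce impulse instead of the recipe's random.uniform(0.01, 0.2), so the result is deterministic:

```python
GRAVITY = 0.001   # velocity lost each frame, as in updateSmiley()
BOUNCE = 0.1      # upward impulse on contact; the recipe uses
                  # random.uniform(0.01, 0.2) instead

def step(z, vel):
    """One frame: move, apply gravity, bounce off the floor plane z = 0."""
    z += vel
    vel -= GRAVITY
    if z <= 0:        # stand-in for the "frowney-in-floor" event
        vel = BOUNCE
    return z, vel

z, vel = 10.0, 0.0    # the frowney starts 10 units above the floor
lowest = highest = z
for _ in range(20000):
    z, vel = step(z, vel)
    lowest, highest = min(lowest, z), max(highest, z)
print(lowest <= 0)                    # True: the model reached the floor
print(-1 < lowest and highest < 11)   # True: it bounces but stays bounded
```

Because the impulse is applied only on contact while gravity acts every frame, the model settles into a steady bounce rather than drifting away, which is exactly the behaviour the recipe produces on screen.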
Collision Shapes

In the sample code, we used CollisionPlane and CollisionSphere, but there are several more shapes available:

- CollisionBox: A simple rectangular shape. Crates, boxes, and walls are example usages for this kind of collision shape.
- CollisionTube: A cylinder with rounded ends. This type of collision mesh is often used as a bounding volume for first and third person game characters.
- CollisionInvSphere: This shape can be thought of as a bubble that contains objects, like a fish bowl. Everything that is outside the bubble is reported to be colliding. A CollisionInvSphere may be used to delimit the boundaries of a game world, for example.
- CollisionPolygon: This collision shape is formed from a set of vertices, and allows for the creation of freeform collision meshes. This kind of shape is the most complex to test for collisions, but also the most accurate one. Whenever polygon-level collision detection is important, when doing hit detection in a shooter for example, this collision mesh comes in handy.
- CollisionRay: This is a line that, starting from one point, extends to infinity in a given direction. Rays are usually shot into a scene to determine whether one or more objects intersect with them. This can be used for various tasks like finding out if a bullet shot in the given direction hit a target, or simple AI tasks like finding out whether a bot is approaching a wall.
- CollisionLine: Like CollisionRay, but stretches to infinity in both directions.
- CollisionSegment: This is a special form of ray that is limited by two end points.
- CollisionParabola: Another special type of ray that is bent. The flying curves of ballistic objects are commonly described as parabolas. Naturally, we would use this kind of ray to find collisions for bullets, for example.
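Under the hood, shape pairs like ray and sphere are tested with standard analytic geometry. As an illustration only (this is not Panda3D's actual implementation, and `ray_hits_sphere` is a hypothetical helper), a ray-sphere test can be sketched in plain Python:

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return True if a ray from `origin` along a normalized
    `direction` intersects the sphere at `center` with `radius`."""
    # Vector from the ray origin to the sphere center.
    oc = [c - o for c, o in zip(center, origin)]
    # Distance along the ray to the point closest to the center.
    t = sum(a * b for a, b in zip(oc, direction))
    if t < 0:
        # Sphere center lies behind the ray; only hit if origin is inside.
        return math.sqrt(sum(x * x for x in oc)) <= radius
    # Closest point on the ray to the sphere center.
    closest = [o + t * d for o, d in zip(origin, direction)]
    dist = math.sqrt(sum((c - p) ** 2 for c, p in zip(center, closest)))
    return dist <= radius

print(ray_hits_sphere((0, 0, 0), (0, 1, 0), (0, 5, 0), 1))  # True: ray passes through
print(ray_hits_sphere((0, 0, 0), (0, 1, 0), (3, 5, 0), 1))  # False: sphere is off to the side
```

The same closest-point idea generalizes to segments and tubes, which is one reason these shapes are cheap to test compared to CollisionPolygon.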
Collision Handlers

Just like with collision shapes, we only used CollisionHandlerEvent in our sample program, even though there are several more collision handler classes available:

- CollisionHandlerPusher: This collision handler automatically keeps the collider out of intersecting vertical geometry, like walls.
- CollisionHandlerFloor: Like CollisionHandlerPusher, but works in the horizontal plane.
- CollisionHandlerQueue: A very simple handler. All it does is add any intersecting objects to a list.
- PhysicsCollisionHandler: This collision handler should be used in connection with Panda3D's built-in physics engine. Whenever a collision is found by this collision handler, the appropriate response is calculated by the simple physics engine built into Panda3D.

Using the built-in physics system

Panda3D has a built-in physics system that treats its entities as simple particles with masses to which forces may be applied. This physics system is a great deal simpler than a fully featured rigid body one, but it is still enough for cheaply, quickly, and easily creating some nice and simple physics effects.

Getting ready

To be prepared for this recipe, please first follow the steps found in Setting up the game structure (code download-Ch:1). Also, the collision detection system of Panda3D will be used, so reading up on it in Using the built-in collision detection system might be a good idea!

How to do it...
The following steps are required to work with Panda3D's built-in physics system:

Edit Application.py and add the required import statements as well as the constructor of the Application class:

    from direct.showbase.ShowBase import ShowBase
    from panda3d.core import *
    from panda3d.physics import *

    class Application(ShowBase):
        def __init__(self):
            ShowBase.__init__(self)
            self.cam.setPos(0, -50, 10)
            self.setupCD()
            self.setupPhysics()
            self.addSmiley()
            self.addFloor()

Next, add the methods for initializing the collision detection and physics systems to the Application class:

    def setupCD(self):
        base.cTrav = CollisionTraverser()
        base.cTrav.showCollisions(render)
        self.notifier = CollisionHandlerEvent()
        self.notifier.addInPattern("%fn-in-%in")
        self.notifier.addOutPattern("%fn-out-%in")
        self.accept("smiley-in-floor", self.onCollisionStart)
        self.accept("smiley-out-floor", self.onCollisionEnd)

    def setupPhysics(self):
        base.enableParticles()
        gravNode = ForceNode("gravity")
        render.attachNewNode(gravNode)
        gravityForce = LinearVectorForce(0, 0, -9.81)
        gravNode.addForce(gravityForce)
        base.physicsMgr.addLinearForce(gravityForce)

Next, implement the method for adding a model and physics actor to the scene:

    def addSmiley(self):
        actor = ActorNode("physics")
        actor.getPhysicsObject().setMass(10)
        self.phys = render.attachNewNode(actor)
        base.physicsMgr.attachPhysicalNode(actor)
        self.smiley = loader.loadModel("smiley")
        self.smiley.reparentTo(self.phys)
        self.phys.setPos(0, 0, 10)
        thrustNode = ForceNode("thrust")
        self.phys.attachNewNode(thrustNode)
        self.thrustForce = LinearVectorForce(0, 0, 400)
        self.thrustForce.setMassDependent(1)
        thrustNode.addForce(self.thrustForce)
        col = self.smiley.attachNewNode(CollisionNode("smiley"))
        col.node().addSolid(CollisionSphere(0, 0, 0, 1.1))
        col.show()
        base.cTrav.addCollider(col, self.notifier)

Add this last piece of source code, which adds the floor plane to the scene, to Application.py:

    def addFloor(self):
        floor = render.attachNewNode(CollisionNode("floor"))
        floor.node().addSolid(CollisionPlane(Plane(Vec3(0, 0, 1), Point3(0, 0, 0))))
        floor.show()

    def onCollisionStart(self, entry):
        base.physicsMgr.addLinearForce(self.thrustForce)

    def onCollisionEnd(self, entry):
        base.physicsMgr.removeLinearForce(self.thrustForce)

Start the program by pressing F6:

How it works...

After adding the mandatory libraries and initialization code, we proceed to the code that sets up the collision detection system. Here we register event handlers for when the smiley starts or stops colliding with the floor. The calls involved in setupCD() are very similar to the ones used in Using the built-in collision detection system.

Instead of moving the smiley model in our own update task, we use the built-in physics system to calculate new object positions based on the forces applied to them. In setupPhysics(), we call base.enableParticles() to fire up the physics system. We attach a new ForceNode to the scene graph, so all physics objects will be affected by the gravity force, and we register that force with base.physicsMgr, which is automatically defined when the physics engine is initialized and ready.

In the first couple of lines of addSmiley(), we create a new ActorNode, give it a mass, attach it to the scene graph, and register it with the physics manager class. The graphical representation, which is the smiley model in this case, is then added to the physics node as a child so it will be moved automatically as the physics system updates. We also add a ForceNode to the physics actor. This acts as a thruster that applies a force pushing the smiley upwards whenever it intersects the floor. As opposed to the gravity force, the thruster force is set to be mass dependent. Gravity, being mass independent, accelerates the smiley at the same rate no matter how heavy we set it to be.
The thruster force, on the other hand, would need to be made more powerful if we increased the mass of the smiley. The last step when adding a smiley is adding its collision node and shape, which leads us to the final methods of this recipe, where we add the floor plane and define that the thruster should be enabled when the collision starts and disabled when the objects' contact phase ends.
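The difference between mass-dependent and mass-independent forces can be sketched numerically. This is a simplified model, not Panda3D's actual integrator, and the `acceleration` helper is hypothetical: a mass-independent force contributes the same acceleration to every body, while a mass-dependent force is scaled down by the body's mass:

```python
def acceleration(force, mass, mass_dependent):
    # Mass-dependent forces are divided by the body's mass;
    # mass-independent forces act directly as accelerations.
    return force / mass if mass_dependent else force

gravity = -9.81   # applied mass-independent: same acceleration for any mass
thrust = 400.0    # applied mass-dependent: heavier bodies accelerate less

light, heavy = 10.0, 100.0
# Gravity accelerates both bodies equally:
print(acceleration(gravity, light, False) == acceleration(gravity, heavy, False))  # True
# The same thrust accelerates the heavy body less:
print(acceleration(thrust, light, True) > acceleration(thrust, heavy, True))       # True
```

This mirrors the recipe: gravity bounces a smiley of any mass at the same rate, while the thruster would have to be made stronger for a heavier smiley.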
Packt
29 Mar 2011
3 min read

Introduction to the Editing Operators in Blender: A Sequel

Blender 2.5 Materials and Textures Cookbook: Over 80 great recipes to create life-like Blender objects

Well, ready to go with a simple mouth structure? Then select the two lower vertices as in the picture below. We'll extrude them in the Z axis and then again in the Y axis, finishing with a scale. First, just select the two lower vertices. (See illustration 12)

Illustration 12: Modeling Stage. After all past actions you should have something similar to this picture. Use the known select commands to select the two lower vertices.

For manipulating the 3D View and checking your model from any perspective you want, use MMB (Middle Mouse Button). This shows your model from any perspective, allowing you to move around it.

Also, you have probably noticed that in the latest picture I have disabled the Transform Manipulator (those green, red, and blue arrows). In the modeling stage it is not essential once you get used to the commands, but it will help you in other areas such as animation. The Transform Manipulator's function is to help with the Location, Rotation, and Scale procedures while manipulating data (objects, meshes, vertices, edges, etc.). You can disable/enable the Transform Manipulator from the buttons next to the Orientation selector (Global by default). Another way to do that is with Ctrl + Space bar. (See illustration 13)

Illustration 13: Transform Manipulator Buttons. You can manage the transformation mode with different manipulators. From left to right the buttons are: Enable/Disable manipulators view, Translate manipulator mode, Rotate manipulator mode, and Scale manipulator mode.

Now extrude the selected bit of mesh in the Z axis: press E Key, then Z, to extrude along the right axis. Make it small, because this will be the starting point for extruding the lower side of the mouth and the jaw.
When you have done that, extrude again two times in the Y axis along the negative direction, in other words pointing toward the direction from which we started modeling. So press E Key and Y (negative direction), and repeat it again. Make those extrusions coincide with the upper edges. After that, just scale down a little with S Key. (See illustration 14)

Illustration 14: Modeling Stage. After all past actions you should have something similar to this picture.

After these steps, we have something similar (more or less) to our model's nose and mouth, and you have learned all the basic commands for successful and creative modeling: all you need to know at this point to develop interesting characters in your projects.

Let's complete the jaw now. Select the vertices as shown in the illustration below. (See illustration 15) Go to Right View by pressing 3 Key on the Numpad and extrude them in the Z axis: press E Key to extrude and Z to tell Blender the right direction to extrude (doing so is useful to extrude in a straight line along an axis, but it is not always required; that depends on the model). After that you should rotate the selected elements, so press R Key to rotate, and you should have something similar to illustration 18. (See illustration 18)

Illustration 15: Modeling Stage. After all past actions you should have something similar to this picture. Select these four vertices to extrude the mouth and the jaw.

Illustration 16: Modeling Stage. After all past actions you should have something similar to this picture. Previous selection after the rotation action.
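Conceptually, an axis-constrained extrude duplicates the selected vertices and offsets the copies along a single axis. A rough pure-Python sketch of that idea follows; the `extrude_along_axis` helper is hypothetical and only illustrates the geometry, it is not Blender's implementation:

```python
def extrude_along_axis(vertices, axis, amount):
    """Duplicate `vertices` ((x, y, z) tuples) and shift the copies
    along one axis: 0 = X, 1 = Y, 2 = Z."""
    new_verts = []
    for v in vertices:
        moved = list(v)
        moved[axis] += amount  # offset only the constrained axis
        new_verts.append(tuple(moved))
    return new_verts

# Extrude two vertices along negative Y, as in the mouth step above.
print(extrude_along_axis([(0, 0, 0), (1, 0, 0)], 1, -0.5))
# [(0, -0.5, 0), (1, -0.5, 0)]
```

In Blender the new vertices also stay connected to the originals by new edges and faces; this sketch only shows the axis-constrained offset that E followed by Z or Y performs.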
Packt
29 Mar 2011
6 min read

Introduction to the Editing Operators in Blender

Blender 2.5 Materials and Textures Cookbook: Over 80 great recipes to create life-like Blender objects

For editing tasks, the 3D View has some features to help you in your work. These commands are easily executable from the Object/Mesh operators panel (T Key with the mouse cursor over the 3D View Editor). Here you have different options depending on whether you are in Object Mode or Edit Mode. You can work in different Modes in Blender (Object Mode, Edit Mode, Sculpt Mode, Pose Mode, etc.) and, accordingly, you will be able to deal with different properties and operators. For modeling, you will use Edit Mode and Object Mode. (See illustration 1)

Illustration 1: The Mode Menu. Changing Mode in Blender modifies your workflow and offers you different options/properties according to the chosen mode.

Differences between Object and Edit Mode

To understand the differences between Object and Edit Mode easily, I'll make a comparison. In Blender, an Object is the physical representation of anything, and a Mesh is the material it is made of. If you have a plasticine puppet on the table, and you can see it, move it, and throw it in the trash, then your puppet is in Object Mode. If you want to modify it by stretching its legs, adding more plasticine, or something similar, then you need to manipulate it. This action is done on the Mesh, in Edit Mode in Blender. It is important that you get this idea so you understand at every moment the Mode you are working in.

So, you are in the 3D View Editor and Object Mode (by default). If you have the default user scene with a cube, just select the cube (selected by default) with RMB (Right Mouse Button) and delete it with X Key. The purpose of this article is not to create the whole model, but to introduce the operators used for creating it. I will begin by telling you how to start a single model.

Practical Exercise: Go to the Top view in the 3D View Editor by pressing 7 Key on the Numpad. Take care that you are not in the Persp view.
You can check this in the top left corner of the editor. If you are in "Top Persp", press 5 Key on the Numpad to go to "Top Ortho". If you were there from the start, just skip this step.

We'll add our first plane to the scene. This plane will become our main character at the end. To add this plane, just press Shift+A or click on Add | Plane in the top menu of the Info Editor. (See illustrations 2 and 3)

Illustration 2: Add Menu, Floating. The Add Object Menu is accessible within the 3D View Editor at any moment by pressing Shift+A, resulting in a floating menu.

Illustration 3: Add Menu. The Add Object Menu is also accessible from the Add option within the Info Editor (at the top by default).

Once we select Plane from that menu, we have a plane in our scene, in the 3D View Editor, in Top View. This plane is currently in Object Mode. You can check this in the Mode Menu we have seen before. Now go to Right View by pressing 3 Key on the Numpad, then press R Key to rotate the plane and enter -90. Press Enter after that. Then go to Front View by pressing 1 Key on the Numpad. We are now going to apply some very interesting Modifiers to the plane to help us in the modeling stage.

We have our first plane in a vertical position, in Front View and in Object Mode. Press Tab Key to enter Edit Mode and W Key to Subdivide the plane. A floating menu appears after pressing W Key. This is the Specials Menu, with very interesting options. At the moment, Subdivide is the one we are interested in. (See illustration 4)

Illustration 4: Specials Menu. The Specials Menu helps you in the editing stage with interesting operators like Subdivide, Merge, Remove Doubles, etc. We will use some of these operators in future steps, so keep an eye on this panel.

Well, back to the model. We have our plane in the 3D View Editor, in Edit Mode, with Subdivide recently applied. Notice that you will have a plane subdivided into four little planes by default.
The amount of subdivision can be modified in the Subdivide panel below the Mesh operators (if collapsed, press T Key with the mouse cursor over the 3D View Editor), as we have seen previously: there you have a Subdivide Number of Cuts setting. To stay on target, we will go with the default. So we have a plane subdivided into four little ones. By default all vertices are selected; to deselect them all, press A Key. Now our plane has no vertex selected.

The next step is to set up a Mirror Modifier to help us model just the left half (the right half for us, because we are looking at our model in Front View). With all vertices deselected, press B Key to activate Border Select and drag a square selecting the left side of the plane with LMB (Left Mouse Button). (See illustration 5)

Illustration 5: Border Select. To select or deselect vertices, edges, or faces in Edit Mode you can use Border Select (B Key) or Circle Select (C Key). The first requires you to drag a square to select. The second is a circle that selects by clicking and dragging, as in painting editors. The diameter of Circle Select can be adjusted with MMB (Middle Mouse Button).

With the left side of the plane selected, press X Key to delete those vertices. The Delete Menu will then offer you different delete options; in our case, just the Vertices option. We now have only the right side of the plane, to which we will apply the Mirror Modifier. For that, we must know where to find the modifiers. To manage different operators or actions on our objects or meshes, we have the Buttons that open the right Property Editor according to our needs. (See illustration 6)

Illustration 6: Buttons Selector. Buttons open the right Property Editor.
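Conceptually, the Mirror Modifier generates, for every vertex of the edited half, a twin reflected across the chosen axis. In plain Python terms (an illustration of the idea, not Blender's implementation; `mirror_x` is a hypothetical helper):

```python
def mirror_x(vertices):
    """Return the original (x, y, z) vertices plus copies mirrored
    across the X axis (x negated), as the Mirror Modifier does
    conceptually when mirroring along X."""
    return vertices + [(-x, y, z) for (x, y, z) in vertices]

# Two vertices of the "right half" gain mirrored twins on the left.
right_half = [(1.0, 0.0, 0.0), (1.0, 0.0, 1.0)]
print(mirror_x(right_half))
# [(1.0, 0.0, 0.0), (1.0, 0.0, 1.0), (-1.0, 0.0, 0.0), (-1.0, 0.0, 1.0)]
```

This is why deleting the left half first is safe: every edit made to the remaining half is reproduced symmetrically on the mirrored side.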
Packt
25 Mar 2011
10 min read

Flash Game Development: Creation of a Complete Tetris Game

Tetris features shapes called tetrominoes: geometric shapes composed of four square blocks connected orthogonally that fall from the top of the playing field. Once a tetromino touches the ground, it lands and cannot be moved anymore, becoming part of the ground itself, and a new tetromino falls from the top of the game field, usually a 10x20-tile vertical rectangle. The player can move the falling tetromino horizontally and rotate it by 90 degrees to create horizontal lines of blocks. When a line is created, it disappears and any block above the deleted line falls down. If the stacked tetrominoes reach the top of the game field, it's game over.

Defining game design

This time I won't talk about the game design itself, since Tetris is a well-known game, and as you read this article you should be used to dealing with game design. By the way, there is something really important about this game you need to know before you start reading this article: you won't draw anything in the Flash IDE. That is, you won't manually draw tetrominoes, the game field, or any other graphic assets. Everything will be generated on the fly using AS3 drawing methods. Tetris is the best game for learning how to draw with AS3, as it features blocks, blocks, and only blocks. Moreover, although the game won't include new programming features, its principles make Tetris the hardest game of the entire book. Survive Tetris and you will have the skills to create the next games, focusing more on new features and techniques rather than on programming logic.

Importing classes and declaring first variables

The first thing we need to do, as usual, is set up the project and define the main class and function, as well as prepare the game field. Create a new file (File | New), then from the New Document window select ActionScript 3.0.
Set its properties as follows: width to 400 px, height to 480 px, background color to #333333 (a dark gray), and frame rate to 30 (quite useless anyway, since there aren't animations, but you can add an animated background on your own). Also, define the Document Class as Main and save the file as tetris.fla.

Without closing tetris.fla, create a new file and from the New Document window select ActionScript 3.0 Class. Save this file as Main.as in the same path where you saved tetris.fla. Then write:

    package {
        import flash.display.Sprite;
        import flash.utils.Timer;
        import flash.events.TimerEvent;
        import flash.events.KeyboardEvent;
        public class Main extends Sprite {
            private const TS:uint=24;
            private var fieldArray:Array;
            private var fieldSprite:Sprite;
            public function Main() {
                // tetris!!
            }
        }
    }

We already know we have to interact with the keyboard to move, drop, and rotate tetrominoes, and we have to deal with timers to manage the falling delay, so I have already imported all the needed libraries. Then, there are some declarations to discuss:

    private const TS:uint=24;

TS is the size, in pixels, of the tiles representing the game field. It's a constant, as it won't change its value during the game, and its value is 24. With 20 rows of tiles, the height of the whole game field will be 24x20 = 480 pixels, as tall as our movie.

    private var fieldArray:Array;

fieldArray is the array that will numerically represent the game field.

    private var fieldSprite:Sprite;

fieldSprite is the DisplayObject that will graphically render the game field. Let's use it to add some graphics.

Drawing the game field background

Nobody wants to see an empty black field, so we are going to add some graphics. As said, during the making of this game we won't use any drawn movie clip, so every graphic asset will be generated by pure ActionScript.

The idea: Draw a set of squares to represent the game field.
The development: Add this line to the Main function:

    public function Main() {
        generateField();
    }

then write the generateField function this way:

    private function generateField():void {
        fieldArray = new Array();
        fieldSprite=new Sprite();
        addChild(fieldSprite);
        fieldSprite.graphics.lineStyle(0,0x000000);
        for (var i:uint=0; i<20; i++) {
            fieldArray[i]=new Array();
            for (var j:uint=0; j<10; j++) {
                fieldArray[i][j]=0;
                fieldSprite.graphics.beginFill(0x444444);
                fieldSprite.graphics.drawRect(TS*j,TS*i,TS,TS);
                fieldSprite.graphics.endFill();
            }
        }
    }

Test the movie and you will see: the 20x10 game field has been rendered on the stage in a lighter gray. I could have used constants to define values like 20 and 10, but I am leaving that to you at the end of the article. Let's see what happened:

    fieldArray = new Array();
    fieldSprite=new Sprite();
    addChild(fieldSprite);

These lines just construct the fieldArray array and the fieldSprite DisplayObject, then add it to the stage, as you have already seen a million times.

    fieldSprite.graphics.lineStyle(0,0x000000);

This line introduces a new world called the Graphics class. This class contains a set of methods that allow you to draw vector shapes on Sprites. The lineStyle method sets a line style that you will use for your drawings. It accepts a big list of arguments, but at the moment we'll focus on the first two of them. The first argument is the thickness of the line, in points. I set it to 0 because I wanted it as thin as a hairline, but valid values are 0 to 255. The second argument is the hexadecimal color value of the line, in this case black.

Hexadecimal uses sixteen distinct symbols to represent the numbers from 0 to 15. Numbers from zero to nine are represented with 0-9, just as in the decimal numeral system, while values from ten to fifteen are represented by the letters A-F. That's how colors are represented in most common paint software and on the web. You can create hexadecimal numbers by preceding them with 0x.
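A hexadecimal color value packs the red, green, and blue channels into one number: each pair of hex digits is one channel. The arithmetic can be checked in Python, where the 0x notation works the same way as in ActionScript:

```python
color = 0xFFA500  # orange: FF red, A5 green, 00 blue

# Shift and mask to pull out each 8-bit channel.
red = (color >> 16) & 0xFF
green = (color >> 8) & 0xFF
blue = color & 0xFF

print(red, green, blue)  # 255 165 0
```

So 0x000000 (all channels zero) is black, and 0x444444 is a dark gray with all three channels equal.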
Also notice that the lineStyle method, like all Graphics class methods, isn't applied directly on the DisplayObject itself but as a method of its graphics property.

    for (var i:uint=0; i<20; i++) {
        ...
    }

The remaining lines are made up of the classic pair of for loops, initializing the fieldArray array in the same way you already initialized all other array-based games, and drawing the 200 (20x10) rectangles that will form the game field.

    fieldSprite.graphics.beginFill(0x444444);

The beginFill method is similar to lineStyle, as it sets the fill color that you will use for your drawings. It accepts two arguments: the color of the fill (a dark gray in this case) and the opacity (alpha). Since I did not specify the alpha, it takes the default value of 1 (full opacity).

    fieldSprite.graphics.drawRect(TS*j,TS*i,TS,TS);

With a line and a fill style, we are ready to draw some squares with the drawRect method, which draws a rectangle. The four arguments represent, respectively, the x and y positions relative to the registration point of the parent DisplayObject (fieldSprite, which happens to be at 0,0 in this case), and the width and height of the rectangle. All values are in pixels.

    fieldSprite.graphics.endFill();

The endFill method applies a fill to everything you drew after calling the beginFill method. This way we draw a square with a side of TS pixels on each for iteration. At the end of both loops, we'll have 200 squares on the stage, forming the game field.

Drawing a better game field background

Tetris background game fields are often represented as a checkerboard, so let's try to obtain the same result.

The idea: Once we have defined two different colors, we will paint even squares with one color and odd squares with the other color.
The development: We have to modify the way the generateField function renders the background:

    private function generateField():void {
        var colors:Array=new Array("0x444444","0x555555");
        fieldArray = new Array();
        fieldSprite=new Sprite();
        addChild(fieldSprite);
        fieldSprite.graphics.lineStyle(0,0x000000);
        for (var i:uint=0; i<20; i++) {
            fieldArray[i]=new Array();
            for (var j:uint=0; j<10; j++) {
                fieldArray[i][j]=0;
                fieldSprite.graphics.beginFill(colors[(j%2+i%2)%2]);
                fieldSprite.graphics.drawRect(TS*j,TS*i,TS,TS);
                fieldSprite.graphics.endFill();
            }
        }
    }

We define an array of colors and play with the modulo operator to fill the squares with alternating colors and make the game field look like a chessboard grid. The core of the script lies in this line:

    fieldSprite.graphics.beginFill(colors[(j%2+i%2)%2]);

which plays with modulo to draw a checkerboard. Test the movie and you will see: now the game field looks better.

Creating the tetrominoes

The concept behind the creation of representable tetrominoes is the hardest part of the making of this game. Unlike the previous games you made, such as Snake, which feature actors of the same width and height (in Snake the head is the same size as the tail), in Tetris every tetromino has its own width and height. Moreover, every tetromino but the square one is not symmetrical, so its size is going to change when the player rotates it. How can we manage a tile-based game with tiles of different widths and heights?

The idea: Since tetrominoes are made of four squares connected orthogonally (that is, forming right angles), we can split each tetromino into a set of tiles and include them in an array. The easiest way is to fit each tetromino into a 4x4 array; although most of them would fit in smaller arrays, it's good to have a standard size.
Something like this:

Every tetromino has its own name, based on the alphabet letter it resembles, and its own color, according to The Tetris Company (TTC), the company that currently owns the trademark of the game Tetris. Just for your information, TTC sues every Tetris clone whose name is somehow similar to "Tetris", so if you are going to create and market a Tetris clone, you should call it something like "Crazy Bricks" rather than "Tetriz". Anyway, following the previous picture, from left to right and from top to bottom, the "official" names and colors for tetrominoes are:

- I: cyan (0x00FFFF)
- T: purple (0xAA00FF)
- L: orange (0xFFA500)
- J: blue (0x0000FF)
- Z: red (0xFF0000)
- S: green (0x00FF00)
- O: yellow (0xFFFF00)

The development: First, add two new class-level variables:

    private const TS:uint=24;
    private var fieldArray:Array;
    private var fieldSprite:Sprite;
    private var tetrominoes:Array = new Array();
    private var colors:Array=new Array();

The tetrominoes array is the four-dimensional array containing all tetromino information, while the colors array will store their colors. Now add a new function call to the Main function:

    public function Main() {
        generateField();
        initTetrominoes();
    }

The initTetrominoes function will initialize the tetromino-related arrays.
    private function initTetrominoes():void {
        // I
        tetrominoes[0]=[[[0,0,0,0],[1,1,1,1],[0,0,0,0],[0,0,0,0]],
                        [[0,1,0,0],[0,1,0,0],[0,1,0,0],[0,1,0,0]]];
        colors[0]=0x00FFFF;
        // T
        tetrominoes[1]=[[[0,0,0,0],[1,1,1,0],[0,1,0,0],[0,0,0,0]],
                        [[0,1,0,0],[1,1,0,0],[0,1,0,0],[0,0,0,0]],
                        [[0,1,0,0],[1,1,1,0],[0,0,0,0],[0,0,0,0]],
                        [[0,1,0,0],[0,1,1,0],[0,1,0,0],[0,0,0,0]]];
        colors[1]=0xAA00FF;
        // L
        tetrominoes[2]=[[[0,0,0,0],[1,1,1,0],[1,0,0,0],[0,0,0,0]],
                        [[1,1,0,0],[0,1,0,0],[0,1,0,0],[0,0,0,0]],
                        [[0,0,1,0],[1,1,1,0],[0,0,0,0],[0,0,0,0]],
                        [[0,1,0,0],[0,1,0,0],[0,1,1,0],[0,0,0,0]]];
        colors[2]=0xFFA500;
        // J
        tetrominoes[3]=[[[1,0,0,0],[1,1,1,0],[0,0,0,0],[0,0,0,0]],
                        [[0,1,1,0],[0,1,0,0],[0,1,0,0],[0,0,0,0]],
                        [[0,0,0,0],[1,1,1,0],[0,0,1,0],[0,0,0,0]],
                        [[0,1,0,0],[0,1,0,0],[1,1,0,0],[0,0,0,0]]];
        colors[3]=0x0000FF;
        // Z
        tetrominoes[4]=[[[0,0,0,0],[1,1,0,0],[0,1,1,0],[0,0,0,0]],
                        [[0,0,1,0],[0,1,1,0],[0,1,0,0],[0,0,0,0]]];
        colors[4]=0xFF0000;
        // S
        tetrominoes[5]=[[[0,0,0,0],[0,1,1,0],[1,1,0,0],[0,0,0,0]],
                        [[0,1,0,0],[0,1,1,0],[0,0,1,0],[0,0,0,0]]];
        colors[5]=0x00FF00;
        // O
        tetrominoes[6]=[[[0,1,1,0],[0,1,1,0],[0,0,0,0],[0,0,0,0]]];
        colors[6]=0xFFFF00;
    }

The colors array is easy to understand: it's just an array with the hexadecimal value of each tetromino's color. tetrominoes is a four-dimensional array. It's the first time you have seen such a complex array, but don't worry: it's no more difficult than the two-dimensional arrays you've been dealing with since the creation of Minesweeper. Tetrominoes are coded into the array this way:

- tetrominoes[n] contains the arrays with all the information about the n-th tetromino. These arrays represent the various rotations, the four rows, and the four columns.
- tetrominoes[n][m] contains the arrays with all the information about the n-th tetromino in the m-th rotation. These arrays represent the four rows and the four columns.
- tetrominoes[n][m][o] contains the array with the four elements of the n-th tetromino in the m-th rotation in the o-th row.
- tetrominoes[n][m][o][p] is the p-th element of the array representing the o-th row in the m-th rotation of the n-th tetromino. Such an element can be 0 if it's an empty space, or 1 if it's part of the tetromino.

There isn't much more to explain, as it's just a series of data entry. Let's add our first tetromino to the field.
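The four-level indexing can be illustrated in Python with the I tetromino; the nested lists below mirror the ActionScript arrays above, with one inner list per rotation state:

```python
# tetrominoes[n][m][o][p]: n-th piece, m-th rotation, o-th row, p-th column.
I_piece = [
    [[0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]],  # horizontal
    [[0, 1, 0, 0], [0, 1, 0, 0], [0, 1, 0, 0], [0, 1, 0, 0]],  # vertical
]

rotation = 1                    # second rotation state (vertical)
print(I_piece[rotation][0][1])  # 1: this cell is part of the piece
print(I_piece[rotation][0][0])  # 0: this cell is empty space

# Cycling through rotation states wraps around with modulo:
next_rotation = (rotation + 1) % len(I_piece)
print(next_rotation)            # 0
```

Because each piece stores only the rotation states it actually needs (two for I, Z, and S, four for T, L, and J, one for O), the modulo by the list length keeps the rotation index valid for every piece.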