
How-To Tutorials - 3D Game Development

115 Articles

Character Head Modeling in Blender: Part 2

Packt
29 Sep 2009
5 min read
Modeling: the ear

Ask just about any beginning modeler (and many experienced ones) and they'll tell you that the ear is a challenge! There are so many turns and folds in the human ear that it poses a modeling nightmare. That said, it is also an excellent exercise in clean modeling: once successfully tackled, the ear alone will make you a better modeler all around.

We are going to approach it much the same way we got started with the edge loops:

- Position your 3D Cursor at the center of the ear from both the Front and the Side views
- Add a new plane with Spacebar > Add > Plane
- Extrude along the outer shape of the ear

We are working strictly from the Side View for the first bit. Use the same process of extruding and moving to do the top, inside portion of the ear. Watch your topology closely; it can become very messy, very fast! Continue for the bottom.

The next step is to rotate your view around with your MMB to a nice angle and extrude out along the X-axis:

- Select the main loop of the ear
- E > Region
- Before placing the new faces, hit X to lock the movement to the X-axis

From here it's just a matter of shaping the ear by moving vertices around to get the proper depth and definition. It will also save you some editing time if you:

- Select the whole ear by hovering your mouse over it and hitting L
- Hit R > Z to rotate along the Z-axis
- Then do the same along the Y-axis, R > Y

This will better position the ear.

Connecting the ear to the head can be a bit of a challenge, because it is made up of far more vertices than the neighboring parts of the head. This can be solved with some clever modeling techniques. Let's start by extruding in the outside edge of the ear to create the back side. Now is where it gets tricky; it's best to just follow the screenshot. You will notice that I have used the direction of my edges coming in from the eye to increase my face count, making it easier to connect the ear. One of the general rules of thumb for good topology is to stay away from triangles: we want to keep our mesh comprised strictly of quads, faces with four sides. Once again, we can use the same techniques seen before, plus some of the tricks we just used on the ear, to connect the back of the ear to the head. You will also notice that I have disabled the mirror modifier's display while in Edit Mode, which makes working on the inside of the head much easier; this can be done via the modifier panel.

Final: tweaking

And that's it! After connecting the ear to the head, the model is essentially finished. At this point it is a good idea to give your whole model the once-over, checking it out from all different angles, in perspective vs. orthographic modes, and so on. If you find yourself needing to tweak the proportions (you almost always do), a really easy way to do it is with the Proportional Editing tool, accessed by hitting O. This moves the mesh around with a falloff (basically a magnet), so that anything within the radius moves with your selection. Here is the final model:

Conclusion

Thank you all for reading this and I hope you have found it helpful in your head modeling endeavours. At this point, the best thing you can do is...do it all over again! Repetition in any kind of modeling always helps, but it's particularly true with head modeling. Also, always use references to help you along. You may hear some people telling you not to use references, that it makes your work stale and unoriginal.
This is absolutely not true (assuming you're not just copying the image outright and calling it your own). References are an excellent resource for everything from proportion to perspective to anatomy. Used properly, they will show in your work; they really do help. From here, just keep hacking away at it. Thanks for reading, best of luck, and happy blending!

If you have read this article you may be interested to view:

- Modeling, Shading, Texturing, Lighting, and Compositing a Soda Can in Blender 2.49: Part 1
- Modeling, Shading, Texturing, Lighting, and Compositing a Soda Can in Blender 2.49: Part 2
- Creating an Underwater Scene in Blender - Part 1
- Creating an Underwater Scene in Blender - Part 2
- Creating an Underwater Scene in Blender - Part 3
- Creating Convincing Images with Blender Internal Renderer - Part 1
- Creating Convincing Images with Blender Internal Renderer - Part 2
- Textures in Blender
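As a footnote to the tutorial above: the mirror-modifier toggle and the Proportional Editing tool it mentions can also be driven from Python. The sketch below is not part of the original article (which uses Blender 2.4x menus); it shows the rough equivalent with the modern bpy API, and the "Mirror" modifier name is only the default assumption.

# A minimal sketch of the two Edit Mode conveniences mentioned above,
# written against the modern bpy API (the article itself uses Blender 2.4x,
# so treat this as an approximate, version-dependent equivalent).
import bpy

obj = bpy.context.active_object  # assumes the head mesh is the active object

# Hide the mirror modifier's result while in Edit Mode, as described above.
# "Mirror" is the default modifier name; adjust if yours differs.
mirror = obj.modifiers.get("Mirror")
if mirror:
    mirror.show_in_editmode = False

# Enable Proportional Editing (the "O" toggle) and pick a smooth falloff,
# so tweaks to the proportions drag nearby vertices along with the selection.
ts = bpy.context.scene.tool_settings
ts.use_proportional_edit = True
ts.proportional_edit_falloff = 'SMOOTH'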


Advanced Lighting in 3D Graphics with XNA Game Studio 4.0

Packt
22 Dec 2010
9 min read
3D Graphics with XNA Game Studio 4.0: a step-by-step guide to adding the 3D graphics effects used by professionals to your XNA games.

- Improve the appearance of your games by implementing the same techniques used by professionals in the game industry
- Learn the fundamentals of 3D graphics, including common 3D math and the graphics pipeline
- Create an extensible system to draw 3D models and other effects, and learn the skills to create your own effects and animate them

Implementing a point light with HLSL

A point light is simply a light that shines equally in all directions around itself (like a light bulb) and falls off over a given distance. In this case, a point light is modeled as a directional light that slowly fades to darkness over a given distance. To achieve a linear attenuation, we would divide the distance between the light and the object by the attenuation distance, invert the result (subtract it from 1), and then multiply the Lambertian lighting by the result. This would cause an object directly next to the light source to be fully lit, and an object at the maximum attenuation distance to be completely unlit. In practice, however, we raise the result of the division to a given power before inverting it, to achieve a more exponential falloff:

K_att = 1 - (d / a)^f

In this equation, K_att is the brightness scalar that we multiply the lighting amount by, d is the distance between the vertex and the light source, a is the distance at which the light should stop affecting objects, and f is the falloff exponent that determines the shape of the curve.

We can implement this easily with HLSL and a new Material class. The new Material class is similar to the material for a directional light, but specifies a light position rather than a light direction. For the sake of simplicity, the effect we will use does not calculate specular highlights, so the material does not include a "specularity" value. It does include two new values, LightAttenuation and LightFalloff, which specify the distance at which the light is no longer visible and the power to raise the division to.
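To get a feel for the curve, the short sketch below evaluates the attenuation formula for a few sample distances. It is written in Python purely as a scratchpad (the chapter's own code is C# and HLSL), and the attenuation distance and falloff values are just illustrative defaults.

# Quick numeric check of K_att = 1 - (d / a)^f.
def attenuation(d, a=5000.0, f=2.0):
    x = min(max(d / a, 0.0), 1.0)   # clamp the ratio, as the pixel shader does
    return 1.0 - x ** f

for d in (0, 1250, 2500, 3750, 5000):
    print(f"d = {d:>4}: K_att = {attenuation(d):.3f}")
# With f = 2 the light stays near full brightness close to the source and
# drops off steeply toward the attenuation distance; f = 1 gives a linear fade.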
public class PointLightMaterial : Material
{
    public Vector3 AmbientLightColor { get; set; }
    public Vector3 LightPosition { get; set; }
    public Vector3 LightColor { get; set; }
    public float LightAttenuation { get; set; }
    public float LightFalloff { get; set; }

    public PointLightMaterial()
    {
        AmbientLightColor = new Vector3(.15f, .15f, .15f);
        LightPosition = new Vector3(0, 0, 0);
        LightColor = new Vector3(.85f, .85f, .85f);
        LightAttenuation = 5000;
        LightFalloff = 2;
    }

    public override void SetEffectParameters(Effect effect)
    {
        if (effect.Parameters["AmbientLightColor"] != null)
            effect.Parameters["AmbientLightColor"].SetValue(AmbientLightColor);
        if (effect.Parameters["LightPosition"] != null)
            effect.Parameters["LightPosition"].SetValue(LightPosition);
        if (effect.Parameters["LightColor"] != null)
            effect.Parameters["LightColor"].SetValue(LightColor);
        if (effect.Parameters["LightAttenuation"] != null)
            effect.Parameters["LightAttenuation"].SetValue(LightAttenuation);
        if (effect.Parameters["LightFalloff"] != null)
            effect.Parameters["LightFalloff"].SetValue(LightFalloff);
    }
}

The new effect has parameters to reflect those values:

float4x4 World;
float4x4 View;
float4x4 Projection;

float3 AmbientLightColor = float3(.15, .15, .15);
float3 DiffuseColor = float3(.85, .85, .85);
float3 LightPosition = float3(0, 0, 0);
float3 LightColor = float3(1, 1, 1);
float LightAttenuation = 5000;
float LightFalloff = 2;

texture BasicTexture;
sampler BasicTextureSampler = sampler_state { texture = <BasicTexture>; };
bool TextureEnabled = true;

The vertex shader output struct now includes a copy of the vertex's world position, which will be used to calculate the light falloff (attenuation) and light direction.

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float3 Normal : TEXCOORD1;
    float4 WorldPosition : TEXCOORD2;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    output.WorldPosition = worldPosition;
    output.UV = input.UV;
    output.Normal = mul(input.Normal, World);

    return output;
}

Finally, the pixel shader calculates the light much the same way the directional light did, but uses a per-vertex light direction rather than a global light direction. It also determines how far along the attenuation distance the vertex's position is and darkens it accordingly.
The texture, ambient light, and diffuse color are calculated as usual:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 diffuseColor = DiffuseColor;

    if (TextureEnabled)
        diffuseColor *= tex2D(BasicTextureSampler, input.UV).rgb;

    float3 totalLight = float3(0, 0, 0);
    totalLight += AmbientLightColor;

    float3 lightDir = normalize(LightPosition - input.WorldPosition);
    float diffuse = saturate(dot(normalize(input.Normal), lightDir));

    float d = distance(LightPosition, input.WorldPosition);
    float att = 1 - pow(clamp(d / LightAttenuation, 0, 1), LightFalloff);

    totalLight += diffuse * att * LightColor;

    return float4(diffuseColor * totalLight, 1);
}

We can now achieve the image above using the following scene setup from the Game1 class:

models.Add(new CModel(Content.Load<Model>("teapot"),
    new Vector3(0, 60, 0), Vector3.Zero, new Vector3(60), GraphicsDevice));

models.Add(new CModel(Content.Load<Model>("ground"),
    Vector3.Zero, Vector3.Zero, Vector3.One, GraphicsDevice));

Effect simpleEffect = Content.Load<Effect>("PointLightEffect");

models[0].SetModelEffect(simpleEffect, true);
models[1].SetModelEffect(simpleEffect, true);

PointLightMaterial mat = new PointLightMaterial();
mat.LightPosition = new Vector3(0, 1500, 1500);
mat.LightAttenuation = 3000;

models[0].Material = mat;
models[1].Material = mat;

camera = new FreeCamera(new Vector3(0, 300, 1600),
    MathHelper.ToRadians(0),  // Yaw
    MathHelper.ToRadians(5),  // Pitch
    GraphicsDevice);

Implementing a spot light with HLSL

A spot light is similar in theory to a point light, in that it fades out after a given distance. However, the fading is not done around the light source; it is instead based on the angle between the light's actual direction and the direction from the light to the object. If that angle is larger than the light's "cone angle", we do not light the vertex:

K_att = (dot(p - lp, ld) / cos(a))^f

In this equation, K_att is still the scalar that we multiply our diffuse lighting by, p is the position of the vertex, lp is the position of the light, ld is the direction of the light, a is the cone angle, and f is the falloff exponent.
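The cone test is easier to see with concrete numbers. The following Python scratchpad (not part of the book's C#/HLSL code) mirrors the comparison the spot light pixel shader makes between the cosine of the half cone angle and the cosine of the angle to the vertex; the positions and angles used are illustrative only.

# Mirrors the spot light attenuation as implemented in the pixel shader.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def spot_attenuation(vertex_pos, light_pos, light_dir, cone_angle_deg, falloff):
    # Direction from the light toward the vertex (the shader's -lightDir).
    to_vertex = normalize(tuple(p - l for p, l in zip(vertex_pos, light_pos)))
    d = dot(to_vertex, normalize(light_dir))        # cos(angle to the vertex)
    a = math.cos(math.radians(cone_angle_deg / 2))  # cos(half cone angle)
    if d <= a:                                      # outside the cone: unlit
        return 0.0
    return 1.0 - min(max(a / d, 0.0), 1.0) ** falloff

# A vertex directly below a downward-pointing light sits inside the cone...
print(spot_attenuation((0, 0, 0), (0, 3000, 0), (0, -1, 0), 30, 20))
# ...while one far off to the side falls outside it and receives no light.
print(spot_attenuation((5000, 0, 0), (0, 3000, 0), (0, -1, 0), 30, 20))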
Our new spot light material reflects these values:

public class SpotLightMaterial : Material
{
    public Vector3 AmbientLightColor { get; set; }
    public Vector3 LightPosition { get; set; }
    public Vector3 LightColor { get; set; }
    public Vector3 LightDirection { get; set; }
    public float ConeAngle { get; set; }
    public float LightFalloff { get; set; }

    public SpotLightMaterial()
    {
        AmbientLightColor = new Vector3(.15f, .15f, .15f);
        LightPosition = new Vector3(0, 3000, 0);
        LightColor = new Vector3(.85f, .85f, .85f);
        ConeAngle = 30;
        LightDirection = new Vector3(0, -1, 0);
        LightFalloff = 20;
    }

    public override void SetEffectParameters(Effect effect)
    {
        if (effect.Parameters["AmbientLightColor"] != null)
            effect.Parameters["AmbientLightColor"].SetValue(AmbientLightColor);
        if (effect.Parameters["LightPosition"] != null)
            effect.Parameters["LightPosition"].SetValue(LightPosition);
        if (effect.Parameters["LightColor"] != null)
            effect.Parameters["LightColor"].SetValue(LightColor);
        if (effect.Parameters["LightDirection"] != null)
            effect.Parameters["LightDirection"].SetValue(LightDirection);
        if (effect.Parameters["ConeAngle"] != null)
            effect.Parameters["ConeAngle"].SetValue(
                MathHelper.ToRadians(ConeAngle / 2));
        if (effect.Parameters["LightFalloff"] != null)
            effect.Parameters["LightFalloff"].SetValue(LightFalloff);
    }
}

Now we can create a new effect that will render a spot light. We will start by copying the point light's effect and making the following changes to the second block of effect parameters:

float3 AmbientLightColor = float3(.15, .15, .15);
float3 DiffuseColor = float3(.85, .85, .85);
float3 LightPosition = float3(0, 5000, 0);
float3 LightDirection = float3(0, -1, 0);
float ConeAngle = 90;
float3 LightColor = float3(1, 1, 1);
float LightFalloff = 20;

Finally, we can update the pixel shader to perform the lighting calculations:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 diffuseColor = DiffuseColor;

    if (TextureEnabled)
        diffuseColor *= tex2D(BasicTextureSampler, input.UV).rgb;

    float3 totalLight = float3(0, 0, 0);
    totalLight += AmbientLightColor;

    float3 lightDir = normalize(LightPosition - input.WorldPosition);
    float diffuse = saturate(dot(normalize(input.Normal), lightDir));

    // (dot(p - lp, ld) / cos(a))^f
    float d = dot(-lightDir, normalize(LightDirection));
    float a = cos(ConeAngle);
    float att = 0;

    if (a < d)
        att = 1 - pow(clamp(a / d, 0, 1), LightFalloff);

    totalLight += diffuse * att * LightColor;

    return float4(diffuseColor * totalLight, 1);
}

If we were to then set up the material as follows and use our new effect, we would see the following result:

SpotLightMaterial mat = new SpotLightMaterial();
mat.LightDirection = new Vector3(0, -1, -1);
mat.LightPosition = new Vector3(0, 3000, 2700);
mat.LightFalloff = 200;

Drawing multiple lights

Now that we can draw one light, the natural question is how to draw more than one. This, unfortunately, is not simple. There are a number of approaches; the easiest is to simply loop through a certain number of lights in the pixel shader and sum a total lighting value. Let's create a new shader based on the directional light effect from the last chapter to do just that. We'll start by copying that effect, then modifying some of the effect parameters as follows.
Notice that instead of a single light direction and color, we now have an array of three of each, allowing us to draw up to three lights:

#define NUMLIGHTS 3

float3 DiffuseColor = float3(1, 1, 1);
float3 AmbientColor = float3(0.1, 0.1, 0.1);
float3 LightDirection[NUMLIGHTS];
float3 LightColor[NUMLIGHTS];
float SpecularPower = 32;
float3 SpecularColor = float3(1, 1, 1);

Second, we need to update the pixel shader to do the lighting calculations once per light:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Start with diffuse color
    float3 color = DiffuseColor;

    // Texture if necessary
    if (TextureEnabled)
        color *= tex2D(BasicTextureSampler, input.UV);

    // Start with ambient lighting
    float3 lighting = AmbientColor;

    float3 normal = normalize(input.Normal);
    float3 view = normalize(input.ViewDirection);

    // Perform lighting calculations per light
    for (int i = 0; i < NUMLIGHTS; i++)
    {
        float3 lightDir = normalize(LightDirection[i]);

        // Add lambertian lighting
        lighting += saturate(dot(lightDir, normal)) * LightColor[i];

        float3 refl = reflect(lightDir, normal);

        // Add specular highlights
        lighting += pow(saturate(dot(refl, view)), SpecularPower) * SpecularColor;
    }

    // Calculate final color
    float3 output = saturate(lighting) * color;

    return float4(output, 1);
}

We now need a new Material class to work with this shader:

public class MultiLightingMaterial : Material
{
    public Vector3 AmbientColor { get; set; }
    public Vector3[] LightDirection { get; set; }
    public Vector3[] LightColor { get; set; }
    public Vector3 SpecularColor { get; set; }

    public MultiLightingMaterial()
    {
        AmbientColor = new Vector3(.1f, .1f, .1f);
        LightDirection = new Vector3[3];
        LightColor = new Vector3[] { Vector3.One, Vector3.One, Vector3.One };
        SpecularColor = new Vector3(1, 1, 1);
    }

    public override void SetEffectParameters(Effect effect)
    {
        if (effect.Parameters["AmbientColor"] != null)
            effect.Parameters["AmbientColor"].SetValue(AmbientColor);
        if (effect.Parameters["LightDirection"] != null)
            effect.Parameters["LightDirection"].SetValue(LightDirection);
        if (effect.Parameters["LightColor"] != null)
            effect.Parameters["LightColor"].SetValue(LightColor);
        if (effect.Parameters["SpecularColor"] != null)
            effect.Parameters["SpecularColor"].SetValue(SpecularColor);
    }
}

If we wanted to implement the three-directional-light system found in the BasicEffect class, we would now just need to copy the light direction values over to our shader:

Effect simpleEffect = Content.Load<Effect>("MultiLightingEffect");

models[0].SetModelEffect(simpleEffect, true);
models[1].SetModelEffect(simpleEffect, true);

MultiLightingMaterial mat = new MultiLightingMaterial();

BasicEffect effect = new BasicEffect(GraphicsDevice);
effect.EnableDefaultLighting();

mat.LightDirection[0] = -effect.DirectionalLight0.Direction;
mat.LightDirection[1] = -effect.DirectionalLight1.Direction;
mat.LightDirection[2] = -effect.DirectionalLight2.Direction;

mat.LightColor = new Vector3[] {
    new Vector3(0.5f, 0.5f, 0.5f),
    new Vector3(0.5f, 0.5f, 0.5f),
    new Vector3(0.5f, 0.5f, 0.5f)
};

models[0].Material = mat;
models[1].Material = mat;


Working with Away3D Cameras

Packt
06 Jun 2011
10 min read
Away3D 3.6 Cookbook: over 80 practical recipes for creating stunning graphics and effects with the fascinating Away3D engine.

Cameras are an absolutely essential part of the 3D world of computer graphics. In fact, no real-time 3D engine can exist without a camera object: cameras are our eyes into the 3D world. Away3D has a decent set of cameras which, at the time of writing, consists of the Camera3D, TargetCamera3D, HoverCamera3D, and SpringCam classes. Although they share the same base features, each one adds some functionality of its own.

Creating an FPS controller

There are different scenarios in which you want first-person control of the camera, such as in FPS video games. Basically, we want to move and rotate the camera in any horizontal direction, driven by the combination of the x and y rotation of the mouse and by keyboard input. In this recipe, you will learn how to develop such a class from scratch, which you can then reuse in later projects wherever FPS behavior is needed.

Getting ready

Set up a basic Away3D scene extending AwayTemplate and name it FPSDemo. Then create one more class, extending Sprite, and name it FPSController.

How to do it...

The FPSController class encapsulates all the functionality of the FPS camera. It receives a reference to the scene camera and applies the FPS behavior "behind the curtain". The FPSDemo class is a basic Away3D scene setup where we are going to test our FPSController:

FPSController.as

package utils
{
    // Flash player imports; the Object3D import path is as in Away3D 3.6 and
    // may need adjusting for other engine versions.
    import flash.display.Sprite;
    import flash.display.Stage;
    import flash.events.KeyboardEvent;
    import flash.events.MouseEvent;
    import flash.geom.Vector3D;
    import flash.ui.Keyboard;
    import away3d.core.base.Object3D;

    public class FPSController extends Sprite
    {
        private var _stg:Stage;
        private var _camera:Object3D;
        private var _moveLeft:Boolean=false;
        private var _moveRight:Boolean=false;
        private var _moveForward:Boolean=false;
        private var _moveBack:Boolean=false;
        private var _controllerHeigh:Number;
        private var _camSpeed:Number=0;
        private static const CAM_ACCEL:Number=2;
        private var _camSideSpeed:Number=0;
        private static const CAM_SIDE_ACCEL:Number=2;
        private var _forwardLook:Vector3D=new Vector3D();
        private var _sideLook:Vector3D=new Vector3D();
        private var _camTarget:Vector3D=new Vector3D();
        private var _oldPan:Number=0;
        private var _oldTilt:Number=0;
        private var _pan:Number=0;
        private var _tilt:Number=0;
        private var _oldMouseX:Number=0;
        private var _oldMouseY:Number=0;
        private var _canMove:Boolean=false;
        private var _gravity:Number;
        private var _jumpSpeed:Number=0;
        private var _jumpStep:Number;
        private var _defaultGrav:Number;
        private static const GRAVACCEL:Number=1.2;
        private static const MAX_JUMP:Number=100;
        private static const FRICTION_FACTOR:Number=0.75;
        private static const DEGStoRADs:Number = Math.PI / 180;

        public function FPSController(camera:Object3D, stg:Stage,
            height:Number=20, gravity:Number=5, jumpStep:Number=5)
        {
            _camera=camera;
            _stg=stg;
            _controllerHeigh=height;
            _gravity=gravity;
            _defaultGrav=gravity;
            _jumpStep=jumpStep;
            init();
        }

        private function init():void
        {
            _camera.y=_controllerHeigh;
            addListeners();
        }

        private function addListeners():void
        {
            _stg.addEventListener(MouseEvent.MOUSE_DOWN, onMouseDown,false,0,true);
            _stg.addEventListener(MouseEvent.MOUSE_UP, onMouseUp,false,0,true);
            _stg.addEventListener(KeyboardEvent.KEY_DOWN, onKeyDown,false,0,true);
            _stg.addEventListener(KeyboardEvent.KEY_UP, onKeyUp,false,0,true);
        }

        private function onMouseDown(e:MouseEvent):void
        {
            _oldPan=_pan;
            _oldTilt=_tilt;
            _oldMouseX=_stg.mouseX+400;
            _oldMouseY=_stg.mouseY-300;
            _canMove=true;
        }

        private function onMouseUp(e:MouseEvent):void
        {
            _canMove=false;
        }

        private function onKeyDown(e:KeyboardEvent):void
        {
            switch(e.keyCode)
            {
                case 65:_moveLeft = true;break;
                case 68:_moveRight = true;break;
                case 87:_moveForward = true;break;
                case 83:_moveBack = true;break;
                case Keyboard.SPACE:
                    if(_camera.y<MAX_JUMP+_controllerHeigh){
                        _jumpSpeed=_jumpStep;
                    }else{
                        _jumpSpeed=0;
                    }
                    break;
            }
        }

        private function onKeyUp(e:KeyboardEvent):void
        {
            switch(e.keyCode)
            {
                case 65:_moveLeft = false;break;
                case 68:_moveRight = false;break;
                case 87:_moveForward = false;break;
                case 83:_moveBack = false;break;
                case Keyboard.SPACE:_jumpSpeed=0;break;
            }
        }

        public function walk():void
        {
            _camSpeed *= FRICTION_FACTOR;
            _camSideSpeed *= FRICTION_FACTOR;

            if(_moveForward){_camSpeed+=CAM_ACCEL;}
            if(_moveBack){_camSpeed-=CAM_ACCEL;}
            if(_moveLeft){_camSideSpeed-=CAM_SIDE_ACCEL;}
            if(_moveRight){_camSideSpeed+=CAM_SIDE_ACCEL;}

            if (_camSpeed < 2 && _camSpeed > -2){
                _camSpeed=0;
            }
            if (_camSideSpeed < 0.05 && _camSideSpeed > -0.05){
                _camSideSpeed=0;
            }

            _forwardLook=_camera.transform.deltaTransformVector(new Vector3D(0,0,1));
            _forwardLook.normalize();
            _camera.x+=_forwardLook.x*_camSpeed;
            _camera.z+=_forwardLook.z*_camSpeed;

            _sideLook=_camera.transform.deltaTransformVector(new Vector3D(1,0,0));
            _sideLook.normalize();
            _camera.x+=_sideLook.x*_camSideSpeed;
            _camera.z+=_sideLook.z*_camSideSpeed;

            _camera.y+=_jumpSpeed;

            if(_canMove){
                _pan = 0.3*(_stg.mouseX+400 - _oldMouseX) + _oldPan;
                _tilt = -0.3*(_stg.mouseY-300 - _oldMouseY) + _oldTilt;
                if (_tilt > 70){
                    _tilt = 70;
                }
                if (_tilt < -70){
                    _tilt = -70;
                }
            }

            var panRADs:Number=_pan*DEGStoRADs;
            var tiltRADs:Number=_tilt*DEGStoRADs;

            _camTarget.x = 100*Math.sin(panRADs) * Math.cos(tiltRADs) + _camera.x;
            _camTarget.z = 100*Math.cos(panRADs) * Math.cos(tiltRADs) + _camera.z;
            _camTarget.y = 100*Math.sin(tiltRADs) + _camera.y;

            if(_camera.y>_controllerHeigh){
                _gravity*=GRAVACCEL;
                _camera.y-=_gravity;
            }
            if(_camera.y<=_controllerHeigh){
                _camera.y=_controllerHeigh;
                _gravity=_defaultGrav;
            }

            _camera.lookAt(_camTarget);
        }
    }
}

Now let's put it to work in the main application:

FPSDemo.as

package
{
    // Flash player imports; Away3D imports (Object3D, Max3DS, BitmapMaterial,
    // Cast) are omitted here because their package paths vary between versions.
    import flash.events.Event;
    import flash.geom.Vector3D;
    import utils.FPSController;

    public class FPSDemo extends AwayTemplate
    {
        [Embed(source="assets/buildings/CityScape.3ds",
            mimeType="application/octet-stream")]
        private var City:Class;

        [Embed(source="assets/buildings/CityScape.png")]
        private var CityTexture:Class;

        private var _cityModel:Object3D;
        private var _fpsWalker:FPSController;

        public function FPSDemo()
        {
            super();
        }

        override protected function initGeometry():void
        {
            parse3ds();
        }

        private function parse3ds():void
        {
            var max3ds:Max3DS=new Max3DS();
            _cityModel=max3ds.parseGeometry(City);
            _view.scene.addChild(_cityModel);

            _cityModel.materialLibrary.getMaterial("bakedAll [Plane0").material=
                new BitmapMaterial(Cast.bitmap(new CityTexture()));

            _cityModel.scale(3);
            _cityModel.x=0;
            _cityModel.y=0;
            _cityModel.z=700;
            _cityModel.rotate(Vector3D.X_AXIS,-90);
            _cam.z=-1000;

            _fpsWalker=new FPSController(_cam,stage,20,12,250);
        }

        override protected function onEnterFrame(e:Event):void
        {
            super.onEnterFrame(e);
            _fpsWalker.walk();
        }
    }
}

How it works...

The FPSController class looks a tad scary, but only at first glance. First we pass the following arguments into the constructor:

- camera: the Camera3D reference (Camera3D, by the way, is the most appropriate camera for FPS use).
- stg: a reference to the Flash stage, because we are going to assign listeners to it from within the class.
- height: the camera's distance from the ground. We assume here that the ground is at 0,0,0.
- gravity: the gravity force for the jump.
- jumpStep: the jump altitude.
Next we define listeners for the mouse UP and DOWN states, as well as keyboard events for registering input from the A, W, D, and S keys, so that we can move the FPSController in four different directions.

In the onMouseDown() event handler, we store the pan, tilt, mouseX, and mouseY values at the moment the mouse is pressed in the _oldPan, _oldTilt, _oldMouseX, and _oldMouseY variables. This is a widely used technique; we need it in order to get a nice, continuous transformation of the camera each time we start moving the FPSController.

In the onKeyUp() and onKeyDown() methods, we switch the flags that tell the main movement code (which we will see shortly) which way the camera should be moved for each key press. The only part that is different here is the block of code inside the Keyboard.SPACE case, which activates the jump behavior when the spacebar is pressed. On SPACE, the camera's jumpSpeed (zero by default) receives the _jumpStep value and this, as long as the camera has not already reached the maximum jump altitude defined by MAX_JUMP, is added to the camera's ground height.

Now it's the walk() function's turn. This method is meant to be called on each frame in the main class:

_camSpeed *= FRICTION_FACTOR;
_camSideSpeed *= FRICTION_FACTOR;

The two preceding lines slow down, or in other words apply friction to, the front and side movements. Without friction, it would take a long time for the controller to stop completely after each movement, because the velocity decreases very slowly due to the easing.

Next we want to accelerate the movements in order to get a more realistic result. Here is the acceleration implementation for the four possible walk directions:

if(_moveForward){_camSpeed+=CAM_ACCEL;}
if(_moveBack){_camSpeed-=CAM_ACCEL;}
if(_moveLeft){_camSideSpeed-=CAM_SIDE_ACCEL;}
if(_moveRight){_camSideSpeed+=CAM_SIDE_ACCEL;}

The problem is that because we slow the movement down by continuously scaling the current speed when applying the drag, the speed value never actually becomes zero. So we define a range of values closest to zero and reset the side and front speeds to 0 as soon as they enter this range:

if (_camSpeed < 2 && _camSpeed > -2){
    _camSpeed=0;
}
if (_camSideSpeed < 0.05 && _camSideSpeed > -0.05){
    _camSideSpeed=0;
}

Now we need the ability to move the camera in the direction it is looking. To achieve this we have to transform the forward vector, which represents the forward look of the camera, into the camera space denoted by the _camera transformation matrix. We use the deltaTransformVector() method because we only need the rotation/scale portion of the matrix, dropping the translation part:

_forwardLook=_camera.transform.deltaTransformVector(new Vector3D(0,0,1));
_forwardLook.normalize();
_camera.x+=_forwardLook.x*_camSpeed;
_camera.z+=_forwardLook.z*_camSpeed;

Here we make pretty much the same change, but for the sideways movement, transforming the side vector by the camera's matrix:

_sideLook=_camera.transform.deltaTransformVector(new Vector3D(1,0,0));
_sideLook.normalize();
_camera.x+=_sideLook.x*_camSideSpeed;
_camera.z+=_sideLook.z*_camSideSpeed;

We also have to acquire base values for the rotations from the mouse movement.
_pan is for the horizontal (x-axis) rotation and _tilt is for the vertical (y-axis) rotation:

if(_canMove){
    _pan = 0.3*(_stg.mouseX+400 - _oldMouseX) + _oldPan;
    _tilt = -0.3*(_stg.mouseY-300 - _oldMouseY) + _oldTilt;
    if (_tilt > 70){
        _tilt = 70;
    }
    if (_tilt < -70){
        _tilt = -70;
    }
}

We also limit the y-rotation so that the controller cannot rotate too far down into the ground or, conversely, too far up into the zenith. Notice that this entire block is wrapped in the _canMove Boolean flag, which is set to true only when the mouse DOWN event is dispatched. We do this to prevent rotation when the user isn't interacting with the controller.

Finally, we need to incorporate the camera's local rotations into the movement process, so that you can rotate the camera view while moving:

var panRADs:Number=_pan*DEGStoRADs;
var tiltRADs:Number=_tilt*DEGStoRADs;
_camTarget.x = 100*Math.sin(panRADs) * Math.cos(tiltRADs) + _camera.x;
_camTarget.z = 100*Math.cos(panRADs) * Math.cos(tiltRADs) + _camera.z;
_camTarget.y = 100*Math.sin(tiltRADs) + _camera.y;

And the last thing is applying the gravity force each time the controller jumps up:

if(_camera.y>_controllerHeigh){
    _gravity*=GRAVACCEL;
    _camera.y-=_gravity;
}
if(_camera.y<=_controllerHeigh){
    _camera.y=_controllerHeigh;
    _gravity=_defaultGrav;
}

Here we first check whether the camera's y-position is still greater than its resting height, which means that the camera is in the "air". If so, we apply gravity acceleration to the gravity value because, as we know, a falling body in real life constantly accelerates over time. In the second statement, we check whether the camera has reached its default height. If so, we reset the camera to its default y-position and also reset the gravity property, as it has grown significantly from the acceleration added during the last jump.

To test it in a real application, we should create an instance of the FPSController class. Here is how it is done in FPSDemo.as:

_fpsWalker=new FPSController(_cam,stage,20,12,250);

We pass it our scene's Camera3D instance and the rest of the parameters discussed previously. The last thing to do is to have the walk() method called on each frame:

override protected function onEnterFrame(e:Event) : void{
    super.onEnterFrame(e);
    _fpsWalker.walk();
}

Now you can start developing the Away3D version of Unreal Tournament!
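The pan/tilt target math above is worth seeing with numbers. This short Python scratchpad (not ActionScript, and not part of the recipe) reproduces the same spherical-coordinate placement: the controller puts a look-at target on a small sphere of radius 100 around the camera, so changing pan and tilt swings the target and lookAt() does the rest.

# Pan 0 / tilt 0 looks straight down +Z; pan 90 looks down +X; tilt lifts the target.
import math

def look_target(cam_x, cam_y, cam_z, pan_deg, tilt_deg, radius=100.0):
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = radius * math.sin(pan) * math.cos(tilt) + cam_x
    z = radius * math.cos(pan) * math.cos(tilt) + cam_z
    y = radius * math.sin(tilt) + cam_y
    return x, y, z

print(look_target(0, 20, 0, 0, 0))    # (0.0, 20.0, 100.0)
print(look_target(0, 20, 0, 90, 0))   # (100.0, 20.0, ~0.0)
print(look_target(0, 20, 0, 0, 45))   # (0.0, ~90.7, ~70.7)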


Introduction to Blender 2.5: Color Grading

Packt
11 Nov 2010
11 min read
Blender 2.5 Lighting and Rendering: bring your 3D world to life with lighting, compositing, and rendering.

- Render spectacular scenes with realistic lighting in any 3D application using interior and exterior lighting techniques
- Give an amazing look to 3D scenes by applying light rigs and shadow effects
- Apply color effects to your scene by changing the World and Lamp color values
- A step-by-step guide with practical examples that help add dimensionality to your scene

I would like to thank a few people who have made this all possible and without whose great aid I wouldn't be doing this now: Francois Tarlier (http://www.francois-tarlier.com), for patiently bearing with my questions, for sharing his thoughts on color grading with Blender, and for developing the things that make all of this possible in Blender (a clear example being the addition of the Color Balance node to Blender 2.5's Node Compositor, which I couldn't live without); Matt Ebb (http://mke3.net/), for creating tools to make Blender's Compositor better and for supporting that effort; and lastly, Stu Maschwitz (http://www.prolost.com), for his amazing tips and tricks on color grading.

Now, for some explanation. Color grading is usually defined as the process of altering and/or enhancing the colors of a motion picture or a still image. Traditionally, this happens by altering the subject photo-chemically (color timing) in a laboratory. With modern tools and techniques, however, color grading can now be achieved digitally, in software like Apple's Final Cut Pro, Adobe's After Effects, and Red Giant Software's Magic Bullet Looks. Luckily, the latest version of Blender supports color grading through a plethora of nodes that process our input accordingly. I really want to stress, though, that it often doesn't matter what tools you use; it all depends on how crafty and artistic you are, regardless of whatever features your application has.

Color grading is related to color correction in some ways; strictly speaking, however, color correction deals mainly with the "correctional" aspect (white balancing, temperature changes, and so on) rather than the deliberate alterations achieved with color grading. With color grading, we can give a motion picture or still image a different mood or time of day, fake lens filters and distortions, highlight part of an image via bright spotting, remove red-eye effects, denoise an image, add glares, and a lot more. The things mentioned above can be grouped into four major categories, namely:

- Color Balancing
- Contrasting
- Stylization
- Material Variation Compensation

With Color Balancing, we are trying to fix tint errors and colorizations that occurred during hardware post-production, something that can happen when recording the data into, say, a camera's memory right after it has been internally processed. Sometimes this is also applied to fix white balance errors that were overlooked while shooting or recording. These are not hard rules that apply all the time, though; we can also use color balancing simply to correct the tones of an image or frame so that human skin looks more natural with respect to the scene it is in.

Contrasting deals with how subjects are emphasized with respect to the scene they are in. It can also refer to vibrance and high dynamic range imaging.
It can also be a general method of "popping out" the necessary details present in a frame.

Stylization refers to effects added on top of the original footage or image after color correction, balancing, and so on have been applied. Some examples would be a dreamy effect, day-to-night conversion, a retro effect, sepia, and many more.

And last but not least is Material Variation Compensation. Often, as artists, there comes a point when, after hours and hours of waiting for your renders to finish, you realize at the last minute that something is just not right with how the materials are set up. If you're on a tight deadline, rerendering the entire sequence or frame is not an option. Thankfully, though not always, we can compensate by using color grading techniques to tell Blender to adjust just the portion of an image that looks wrong, saving us a ton of time compared to rerendering.

Color grading is a vast topic, however, so I can only lead you through the introductory steps to get you started and give you a basis for your own experiments. To get a sense of what we could possibly discuss, you can check some of the videos I've done here:

http://vimeo.com/13262256
http://vimeo.com/13995077

And for those of you interested in presets, Francois Tarlier has provided some on this page: http://code.google.com/p/ft-projects/downloads/list.

Outlining some of the aspects that we'll go through in Part 1 of this article, here's a list of the things we will be doing:

- Loading Image Files in the Compositor
- Loading Sequence Files in the Compositor
- Loading Movie Files in the Compositor
- Contrasting with Color Curves
- Colorizing with Color Curves
- Color Correcting with Color Curves

And before we start, here are some prerequisites that you should have:

- The latest Blender 2.5 version (grab one from http://www.graphicall.org or build from the latest SVN updates)
- Movies, footage, animations (check http://www.stockfootageforfree.com for free stock footage)
- Still images
- An intermediate Blender skill level

Initialization

With all the prerequisites met, and before we get our hands dirty, there are some things we need to do. Fire up Blender 2.5 and you'll notice that, by default, Blender starts with a splash screen; on its upper right-hand portion you can see the Blender version number and the revision number. As much as possible, you want a revision number similar to the one we'll be using here, or better yet, a newer one. This will ensure that the tools we'll be using are up to date, bug-free, and possibly feature-pumped.

(Blender 2.5 Initial Startup Screen)

After we have ensured we have the right version (and revision number) of Blender, it's time to set up our scenes and screens to match our ideal workflow later on. Before starting any color grading session, make sure you have a clear plan of what you want to achieve with your footage and images. This way you can eliminate the guesswork and save a lot of time in the process.

The next step is to make sure we are in the proper screen for color grading. You'll see in the menu bar at the top that we are using the "Default" screen. This is useful for general-purpose Blender workflows like modeling, lighting, and shading setup. To harness Blender's intuitive interface, we'll go ahead and change this screen to something more obvious and useful.
(Screen Selection Menu)

Click the button on the left of the screen selection menu and you'll see a list of screens to choose from. For our purpose, we'll choose "Compositing". After enabling the screen, you'll notice that Blender's default layout changes to something more varied, though not dramatically so.

(Choosing the Compositing Screen)

The Compositing screen lets us work seamlessly on color grading because, by default, it has everything we need to start our session: the Node Editor on top, the UV/Image Editor on the lower left-hand side, and the 3D View on the lower right-hand side. In the far right corner, matching the height of these three windows, is the Properties Window, and lastly (but not so obviously) the Timeline Window sits just below the Properties Window, in the far lower right corner of your screen.

Since we won't be digging much into Blender's 3D side here, we can ignore the lower right view (the 3D View), or better yet, merge the UV/Image Editor into the 3D View so that the UV/Image Editor covers most of the lower half of the screen (as seen below). You could also merge the Properties Window and the Timeline Window so that the only thing present on the far right-hand side is the Properties Window.

(Merging the Screen Windows)

(Merged Screens)

Under the Node Editor Window, click on and enable Use Nodes. This tells Blender that we'll be using the node system in conjunction with the settings we'll enable later on.

(Enabling "Use Nodes")

After clicking on Use Nodes, you'll notice nodes start appearing in the Node Editor Window, namely the Render Layer and Composite nodes. This is a good hint that Blender now recognizes the nodes as part of its rendering process. But that's not enough yet. In the far right window (the Properties Window), look for the Shading and Post Processing tabs under Render. If you can't see some parts, just scroll until you do.

(Locating the Shading and Post Processing Tabs)

Under the Shading tab, disable all checkboxes except for Texture. This ensures that we won't get any odd output later on, and it will also spare us some debugging if we run into errors.

(Disabling Shading Options)

Next, let's proceed to the Post Processing tab and disable Sequencer. Then let's make sure that Compositing is enabled and checked.

(Disabling Post Processing Options)

That's it for now, but we'll get back to the Properties Window whenever necessary. Let's turn our attention back to the Node Editor Window above. The same keyboard shortcuts apply here as in the 3D Viewport. To review, here are the shortcuts we might find helpful while working in the Node Editor Window:

Select Node - Right Mouse Button
Confirm - Left Mouse Button
Zoom In - Mouse Wheel Up / CTRL + Mouse Wheel Drag
Zoom Out - Mouse Wheel Down / CTRL + Mouse Wheel Drag
Pan Screen - Middle Mouse Drag
Move Node - G
Box Selection - B
Delete Node - X
Make Links - F
Cut Links - CTRL + Left Mouse Button
Hide Node - H
Add Node - SHIFT A
Toggle Full Screen - SHIFT SPACE

Now, let's select the Render Layer node and delete it. We won't be needing it, since we're not working directly with Blender's internal render layer system yet; we'll be focusing solely on loading images and footage for grading work. Select the Composite node and move it to the far right, just to get it out of view for now.
(Deleting the Render Layer Node and Moving the Composite Node)

Loading image files in the compositor

Blender's Node Compositor can load pretty much any image format you have. Most of the time you will probably only want to work with the JPG, PNG, TIFF, and EXR file formats, but choose what you prefer; just be aware of each format's compression characteristics. For most of my compositing tasks I commonly use PNG: being lossless, it retains its original quality even after being processed a few times, rather than accumulating compression artifacts the way a JPG file does. However, if you really want to push your compositing project and use data such as the z-buffer (depth), you'll be well served by EXR, which is one of the best formats out there, though it creates huge files depending on your settings. Play around and see which one is most comfortable for you. For ease, we'll load JPG images for now.

With the Node Editor Window active, left-click somewhere on an empty space on the left side; think of it as placing an imaginary cursor there with the left mouse button. This tells Blender where to place the node we'll be adding. Next, press SHIFT A to bring up the Add menu, choose Input, then click on Image.

(Adding an Image Node)

Most often, if you have the Composite node selected before performing this action, Blender will automatically connect and link the newly added node to the Composite node. If not, you can connect the Image node's image output socket to the Composite node's image input socket.

(Image Node Connected to Composite Node)

To load images into the Compositor, simply click on Open on the Image node, which brings up a file browser. Once you've chosen the desired image, double left-click on it, or single-click and then click Open. After that is done, you'll notice the Image node's and the Composite node's previews change accordingly.

(Image Loaded in the Compositor)

This image is now ready for compositing work.
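For readers who prefer to script their setup, the same node arrangement can be built with Blender's Python API. This sketch is not from the article (which does everything through the UI and targets an early 2.5 build, so property names may differ in your version), and the image path is made up for illustration.

# Recreate the setup above with bpy: enable nodes, drop the Render Layers node,
# add an Image input node, and wire it into the Composite node.
import bpy

scene = bpy.context.scene
scene.use_nodes = True                      # the "Use Nodes" toggle
scene.render.use_compositing = True         # Post Processing > Compositing on
scene.render.use_sequencer = False          # Post Processing > Sequencer off

tree = scene.node_tree

# Remove the default Render Layers node; keep the Composite node.
for node in list(tree.nodes):
    if node.type == 'R_LAYERS':
        tree.nodes.remove(node)

composite = next(n for n in tree.nodes if n.type == 'COMPOSITE')

image_node = tree.nodes.new("CompositorNodeImage")
image_node.image = bpy.data.images.load("/path/to/footage.jpg")  # hypothetical path
tree.links.new(image_node.outputs["Image"], composite.inputs["Image"])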


3D Animation Techniques with XNA Game Studio 4.0

Packt
14 Jan 2011
3 min read
Object animation

We will first look at the animation of objects as a whole. The most common ways to animate an object are rotation and translation (movement). We will begin by creating a class that interpolates a position and rotation value between two extremes over a given amount of time. We could also have it interpolate between two scaling values, but it is very uncommon for an object to change size smoothly during gameplay, so we will leave that out for simplicity's sake.

The ObjectAnimation class has a number of parameters: starting and ending position and rotation values, a duration over which to interpolate those values, and a Boolean indicating whether the animation should loop or simply remain at the end value after the duration has passed:

public class ObjectAnimation
{
    Vector3 startPosition, endPosition, startRotation, endRotation;
    TimeSpan duration;
    bool loop;
}

We will also store the amount of time that has elapsed since the animation began, and the current position and rotation values:

TimeSpan elapsedTime = TimeSpan.FromSeconds(0);

public Vector3 Position { get; private set; }
public Vector3 Rotation { get; private set; }

The constructor will initialize these values:

public ObjectAnimation(Vector3 StartPosition, Vector3 EndPosition,
    Vector3 StartRotation, Vector3 EndRotation, TimeSpan Duration, bool Loop)
{
    this.startPosition = StartPosition;
    this.endPosition = EndPosition;
    this.startRotation = StartRotation;
    this.endRotation = EndRotation;
    this.duration = Duration;
    this.loop = Loop;

    Position = startPosition;
    Rotation = startRotation;
}

Finally, the Update() function takes the amount of time that has elapsed since the last update and updates the position and rotation values accordingly:

public void Update(TimeSpan Elapsed)
{
    // Update the time
    this.elapsedTime += Elapsed;

    // Determine how far along the duration value we are (0 to 1)
    float amt = (float)elapsedTime.TotalSeconds / (float)duration.TotalSeconds;

    if (loop)
        while (amt > 1) // Wrap the time if we are looping
            amt -= 1;
    else // Clamp to the end value if we are not
        amt = MathHelper.Clamp(amt, 0, 1);

    // Update the current position and rotation
    Position = Vector3.Lerp(startPosition, endPosition, amt);
    Rotation = Vector3.Lerp(startRotation, endRotation, amt);
}

As a simple example, we'll create an animation (in the Game1 class) that rotates our spaceship in a circle over a few seconds. We'll also have it move the model up and down for demonstration's sake:

ObjectAnimation anim;

We initialize it in the constructor:

models.Add(new CModel(Content.Load<Model>("ship"),
    Vector3.Zero, Vector3.Zero, new Vector3(0.25f), GraphicsDevice));

anim = new ObjectAnimation(new Vector3(0, -150, 0), new Vector3(0, 150, 0),
    Vector3.Zero, new Vector3(0, -MathHelper.TwoPi, 0),
    TimeSpan.FromSeconds(10), true);

We update it as follows:

anim.Update(gameTime.ElapsedGameTime);

models[0].Position = anim.Position;
models[0].Rotation = anim.Rotation;
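The wrap-versus-clamp behavior in Update() is the whole trick, so here is a tiny Python scratchpad (not XNA code) that walks the same logic for the sample 10-second animation; the numbers are only there to show how the interpolation factor behaves past the end of the duration.

# Wrap the factor when looping, clamp it otherwise, then lerp the Y position.
def lerp(a, b, t):
    return a + (b - a) * t

def animate(elapsed, duration, loop):
    amt = elapsed / duration
    if loop:
        while amt > 1:             # wrap the time if we are looping
            amt -= 1
    else:
        amt = min(max(amt, 0), 1)  # clamp to the end value if we are not
    return lerp(-150, 150, amt)    # the sample animation's Y position

for t in (2.5, 7.5, 12.5):         # seconds into a 10-second animation
    print(t, animate(t, 10, loop=True), animate(t, 10, loop=False))
# At 12.5 s the looping version has wrapped back to 25% of the way through
# (Y = -75), while the non-looping version stays pinned at the end value, 150.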


Introduction to Game Development Using Unity 3D

Packt
24 Sep 2010
9 min read
Unity 3D Game Development by Example Beginner's Guide: a seat-of-your-pants manual for building fun, groovy little games quickly.

Technology is a tool. It helps us accomplish amazing things, hopefully more quickly, more easily, and more amazingly than if we hadn't used the tool. Before we had newfangled steam-powered hammering machines, we had hammers. And before we had hammers, we had the painful process of smacking a nail into a board with our bare hands. Technology is all about making our lives better and easier. And less painful.

Introducing Unity 3D

Unity 3D is a new piece of technology that strives to make life better and easier for game developers. Unity is a game engine, or game authoring tool, that enables creative folks like you to build video games. By using Unity, you can build video games more quickly and easily than ever before. In the past, building games required an enormous stack of punch cards, a computer that filled a whole room, and a burnt sacrificial offering to an ancient god named Fortran. Today, instead of spanking nails into boards with your palm, you have Unity. Consider it your hammer: a new piece of technology for your creative tool belt.

Unity takes over the world

We'll be distilling our game development dreams down to small, bite-sized nuggets instead of launching into any sweepingly epic open-world games. The idea here is to focus on something you can actually finish instead of getting bogged down in an impossibly ambitious opus. When you're finished, you can publish these games on the Web, Mac, or PC. The team behind Unity 3D is constantly working on packages and export options for other platforms. At the time of this writing, Unity could additionally create games that can be played on the iPhone, iPod, iPad, Android devices, Xbox Live Arcade, PS3, and Nintendo's WiiWare service. Each of these is an add-on to the core Unity package and comes at an additional cost. As we're focusing on what we can do without breaking the bank, we'll stick to the core Unity 3D program for the remainder of this article. The key is to start with something you can finish, and then, for each new project that you build, to add small pieces of functionality that challenge you and expand your knowledge. Any successful plan for world domination begins by drawing a territorial border in your backyard.

Browser-based 3D? Welcome to the future

Unity's primary and most astonishing selling point is that it can deliver a full 3D game experience right inside your web browser. It does this with the Unity Web Player, a free plugin that embeds and runs Unity content on the Web.

Time for action – install the Unity Web Player

Before you dive into the world of Unity games, download the Unity Web Player. Much the same way the Flash player runs Flash-created content, the Unity Web Player is a plugin that runs Unity-created content in your web browser.

1. Go to http://unity3D.com.
2. Click on the install now! button to install the Unity Web Player.
3. Click on Download Now!
4. Follow all of the on-screen prompts until the Web Player has finished installing.

Welcome to Unity 3D! Now that you've installed the Web Player, you can view content created with the Unity 3D authoring tool in your browser.

What can I build with Unity?

In order to fully appreciate how fancy this new hammer is, let's take a look at some projects that other people have created with Unity.
While these games may be completely out of our reach at the moment, let's find out how game developers have pushed this amazing tool to its very limits.

FusionFall

The first stop on our whirlwind Unity tour is FusionFall, a Massively Multiplayer Online Role-Playing Game (MMORPG). You can find it at fusionfall.com. You may need to register to play, but it's definitely worth the extra effort! FusionFall was commissioned by the Cartoon Network television franchise, and takes place in a re-imagined, anime-style world where popular Cartoon Network characters are all grown up. Darker, more sophisticated versions of the Powerpuff Girls, Dexter, Foster and his imaginary friends, and the kids from Codename: Kids Next Door run around battling a slimy green alien menace.

Completely hammered

FusionFall is a very big and very expensive high-profile game that helped draw a lot of attention to the then-unknown Unity game engine when it was released. As a tech demo, it's one of the very best showcases of what your new technological hammer can really do! FusionFall has real-time multiplayer networking, chat, quests, combat, inventory, NPCs (non-player characters), basic AI (artificial intelligence), name generation, avatar creation, and costumes. And that's just a highlight of the game's feature set. This game packs a lot of depth.

Should we try to build FusionFall?

At this point, you might be thinking to yourself, "Heck YES! FusionFall is exactly the kind of game I want to create with Unity, and this article is going to show me how!" Unfortunately, a step-by-step guide to creating a game the size and scope of FusionFall would likely require its own flatbed truck to transport, and you'd need a few friends to help you turn each enormous page. It would take you the rest of your life to read, and on your deathbed, you'd finally realize the grave error that you had made in ordering it online in the first place, despite having qualified for free shipping.

Here's why: check out the game credits link on the FusionFall website: http://www.fusionfall.com/game/credits.php. This page lists all of the people involved in bringing the game to life. Cartoon Network enlisted the help of an experienced Korean MMO developer called Grigon Entertainment. There are over 80 names on that credits list! Clearly, only two courses of action are available to you:

- Build a cloning machine and make 79 copies of yourself. Send each of those copies to school to study various disciplines, including marketing, server programming, and 3D animation. Then spend a year building the game with your clones. Keep track of who's who by using a sophisticated armband system.
- Give up now because you'll never make the game of your dreams.

Another option

Before you do something rash and abandon game development for farming, let's take another look at this. FusionFall is very impressive, and it might look a lot like the game that you've always dreamed of making. This article is not about crushing your dreams. It's about dialing down your expectations, putting those dreams in an airtight jar, and taking baby steps. Confucius said: "A journey of a thousand miles begins with a single step." I don't know much about the man's hobbies, but if he was into video games, he might have said something similar about them: creating a game with a thousand awesome features begins by creating a single, less feature-rich game. So, let's put the FusionFall dream in an airtight jar and come back to it when we're ready.
We'll take a look at some smaller Unity 3D game examples and talk about what it took to build them.

Off-Road Velociraptor Safari

No tour of Unity 3D games would be complete without a trip to Blurst.com, the game portal owned and operated by indie game developer Flashbang Studios. In addition to hosting games by other indie game developers, Flashbang has packed Blurst with its own slate of kooky content, including Off-Road Velociraptor Safari. (Note: Flashbang Studios is constantly toying around with ways to distribute and sell its games. At the time of this writing, Off-Road Velociraptor Safari could be played for free only as a Facebook game. If you don't have a Facebook account, try playing another one of the team's creations, like Minotaur China Shop or Time Donkey.)

In Off-Road Velociraptor Safari, you play a dinosaur in a pith helmet and a monocle driving a jeep equipped with a deadly spiked ball on a chain (just like in the archaeology textbooks). Your goal is to spin around in your jeep doing tricks and murdering your fellow dinosaurs (obviously).

For many indie game developers and reviewers, Off-Road Velociraptor Safari was their first introduction to Unity. Some reviewers said that they were stunned that a fully 3D game could play in the browser. Other reviewers were a little bummed that the game was sluggish on slower computers. We'll talk about optimization a little later, but it's not too early to keep performance in mind as you start out.

Fewer features, more promise

If you play Off-Road Velociraptor Safari and some of the other games on the Blurst site, you'll get a better sense of what you can do with Unity without a team of experienced Korean MMO developers. The game has 3D models, physics (code that controls how things move around somewhat realistically), collisions (code that detects when things hit each other), music, and sound effects. Just like FusionFall, the game can be played in the browser with the Unity Web Player plugin. Flashbang Studios also sells downloadable versions of its games, demonstrating that Unity can produce standalone executable game files too.

Maybe we should build Off-Road Velociraptor Safari?

Right then! We can't create FusionFall just yet, but we can surely create a tiny game like Off-Road Velociraptor Safari, right? Well... no. Again, this article isn't about crushing your game development dreams. But the fact remains that Off-Road Velociraptor Safari took five supremely talented and experienced guys eight weeks to build on full-time hours, and they've been tweaking and improving it ever since. Even a game like this, which may seem quite small in comparison to a full-blown MMO like FusionFall, is a daunting challenge for a solo developer. Put it in a jar up on the shelf, and let's take a look at something you'll have more success with.

Setting Up Panda3D and Configuring Development Tools

Packt
14 Apr 2011
7 min read
  Panda3D 1.7 Game Developer's Cookbook Panda3D is a very powerful and feature-rich game engine that comes with a lot of features needed for creating modern video games. Using Python as a scripting language to interface with the low-level programming libraries makes it easy to quickly create games because this layer of abstraction neatly hides many of the complexities of handling assets, hardware resources, or graphics rendering, for example. This also allows simple games and prototypes to be created very quickly and keeps the code needed for getting things going to a minimum. Panda3D is a complete game engine package. This means that it is not just a collection of game programming libraries with a nice Python interface, but also includes all the supplementary tools for previewing, converting, and exporting assets as well as packing game code and data for redistribution. Delivering such tools is a very important aspect of a game engine that helps with increasing the productivity of a development team. The Panda3D engine is a very nice set of building blocks needed for creating entertainment software, scaling nicely to the needs of hobbyists, students, and professional game development teams. Panda3D is known to have been used in projects ranging from one-shot experimental prototypes to full-scale commercial MMORPG productions like Toontown Online or Pirates of the Caribbean Online. Before you are able to start a new project and use all the powerful features provided by Panda3D to their fullest, though, you need to prepare your working environment and tools. By the end of this article, you will have a strong set of programming tools at hand, as well as the knowledge of how to configure Panda3D to your future projects' needs. Downloading and configuring NetBeans to work with Panda3D When writing code, having the right set of tools at hand and feeling comfortable when using them is very important. Panda3D uses Python for scripting and there are plenty of good integrated development environments available for this language like IDLE, Eclipse, or Eric. Of course, Python code can be written using the excellent Vim or Emacs editors too. Tastes do differ, and every programmer has his or her own preferences when it comes to this decision. To make things easier and have a uniform working environment, however, we are going to use the free NetBeans IDE for developing Python scripts. This choice was made out of pure preference and one of the many great alternatives might be used as well for following through the recipes in this article, but may require different steps for the initial setup and getting samples to run. In this recipe we will install and configure the NetBeans integrated development environment to suit our needs for developing games with Panda3D using the Python programming language. Getting ready Before beginning, be sure to download and install Panda3D. To download the engine SDK and tools, go to www.panda3d.org/download.php: The Panda3D Runtime for End-Users is a prebuilt redistributable package containing a player program and a browser plugin. These can be used to easily run packaged Panda3D games. Under Snapshot Builds, you will be able to find daily builds of the latest version of the Panda3D engine. These are to be handled with care, as they are not meant for production purposes. Finally, the link labeled Panda3D SDK for Developers is the one you need to follow to retrieve a copy of the Panda3D development kit and tools. 
This will always take you to the latest release of Panda3D, which at the time of this writing is version 1.7.0. This version was marked as unstable by the developers but has been working in a stable way for this article. It also added a great number of interesting features, like the web browser plugin, an advanced shader and graphics pipeline, and built-in shadow effects, which really are worth a try. Click the link that says Panda3D SDK for Developers to reach the page shown in the following screenshot:

Here you can select one of the SDK packages for the platforms that Panda3D is available on. This article assumes a setup of NetBeans on Windows, but most of the samples should work on the alternative platforms too, as most of Panda3D's features have been ported to all of these operating systems. To download and install the Panda3D SDK, click the Panda3D SDK 1.7.0 link at the top of the page and download the installer package. Launch the program and follow the installation wizard, always choosing the default settings. In this and all of the following recipes we'll assume the install path to be C:\Panda3D-1.7.0, which is the default installation location. If you chose a different location, it might be a good idea to note the path and be prepared to adapt the presented file and folder paths to your needs!

How to do it...
Follow these steps to set up your Panda3D game development environment:

Point your web browser to netbeans.org and click the prominent Download FREE button.
Ignore the big table showing all kinds of different versions on the following page and scroll down. Click the link that says JDK with NetBeans IDE Java SE bundle.
This will take you to the following page as shown here. Click the Downloads link to the right to proceed.
You will find yourself at another page, as shown in the screenshot. Select Windows in the Platform dropdown menu and tick the checkbox to agree to the license agreement. Click the Continue button to proceed.
Follow the instructions on the next page. Click the file name to start the download.
Launch the installer and follow the setup wizard.
Once installed, start the NetBeans IDE.
In the main toolbar click Tools | Plugins.
Select the tab that is labeled Available Plugins. Browse the list until you find Python and tick the checkbox next to it.
Click Install. This will start a wizard that downloads and installs the necessary features for Python development.
At the end of the installation wizard you will be prompted to restart the NetBeans IDE, which will finish the setup of the Python feature.
Once NetBeans reappears on your screen, click Tools | Python Platforms.
In the Python Platform Manager window, click the New button and browse for the file C:\Panda3D-1.7.0\python\ppython.exe.
Select Python 2.6.4 from the platforms list and click the Make Default button. Your settings should now reflect the ones shown in the following screenshot:
Finally, we select the Python Path tab and once again, compare your settings to the screenshot:
Click the Close button and you are done!

How it works...
In the preceding steps we configured NetBeans to use the Python runtime that drives the Panda3D engine, and as we can see, it is very easy to install and set up our working environment for Panda3D.

There's more...
Unlike other game engines, Panda3D follows an interesting approach in its internal architecture. While the more common approach is to embed a scripting runtime into the game engine's executable, Panda3D uses the Python runtime as its main executable. 
The engine modules handling such things as loading assets, rendering graphics, or playing sounds are implemented as native extension modules. These are loaded by Panda3D's custom Python interpreter as needed when we use them in our script code. Essentially, the architecture of Panda3D turns the hierarchy between native code and the scripting runtime upside down. While in other game engines, native code initiates calls to the embedded scripting runtime, Panda3D shifts the direction of program flow. In Panda3D, the Python runtime is the core element of the engine that lets script code initiate calls into native programming libraries. To understand Panda3D, it is important to understand this architectural decision. Whenever we start the ppython executable, we start up the Panda3D engine. If you ever get into a situation where you are compiling your own Panda3D runtime from source code, don't forget to revisit steps 13 to 17 of this recipe to configure NetBeans to use your custom runtime executable!
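To make that relationship concrete, here is a minimal sketch of the kind of script you would feed to ppython once the setup above is complete. It follows the standard Panda3D "hello world" pattern; the class name is made up for this example, and models/environment is simply one of the sample assets bundled with the SDK:

from direct.showbase.ShowBase import ShowBase

class HelloPanda(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        # Load a sample model that ships with the SDK and attach it to the scene graph
        scene = self.loader.loadModel("models/environment")
        scene.reparentTo(self.render)
        scene.setScale(0.25)
        scene.setPos(-8, 42, 0)

app = HelloPanda()
app.run()

Saving this as hello.py and launching it with C:\Panda3D-1.7.0\python\ppython.exe hello.py (or running it from NetBeans once the platform is configured) starts the engine, because the ppython interpreter itself is Panda3D's entry point.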

Cross-platform Development - Build Once, Deploy Anywhere

Packt
01 Oct 2013
19 min read
(For more resources related to this topic, see here.) The demo application – how the projects work together Take a look at the following diagram to understand and familiarize yourself with the configuration pattern that all of your Libgdx applications will have in common: What you see here is a compact view of four projects. The demo project to the very left contains the shared code that is referenced (that is, added to the build path) by all the other platform-specific projects. The main class of the demo application is MyDemo.java. However, looking at it from a more technical view, the main class where an application gets started by the operating system, which will be referred to as Starter Classes from now on. Notice that Libgdx uses the term "Starter Class" to distinguish between these two types of main classes in order to avoid confusion. We will cover everything related to the topic of Starter Classes in a moment. While taking a closer look at all these directories in the preceding screenshot, you may have spotted that there are two assets folders: one in the demo-desktop project and another one in demo-android. This brings us to the question, where should you put all the application's assets? The demo-android project plays a special role in this case. In the preceding screenshot, you see a subfolder called data, which contains an image named libgdx.png, and it also appears in the demo-desktop project in the same place. Just remember to always put all of your assets into the assets folder under the demo-android project. The reason behind this is that the Android build process requires direct access to the application's assets folder. During its build process, a Java source file, R.java, will automatically be generated under the gen folder. It contains special information for Android about the available assets. It would be the usual way to access assets through Java code if you were explicitly writing an Android application. However, in Libgdx, you will want to stay platform-independent as much as possible and access any resource such as assets only through methods provided by Libgdx. You may wonder how the other platform-specific projects will be able to access the very same assets without having to maintain several copies per project. Needless to say that this would require you to keep all copies manually synchronized each time the assets change. Luckily, this problem has already been taken care of by the generator as follows: The demo-desktop project uses a linked resource, a feature by Eclipse, to add existing files or folders to other places in a workspace. You can check this out by right-clicking on the demo-desktop project then navigating to Properties | Resource | Linked Resources and clicking on the Linked Resources tab. The demo-html project requires another approach since Google Web Toolkit ( GWT ) has a different build process compared to the other projects. There is a special file GwtDefinition.gwt.xml that allows you to set the asset path by setting the configuration property gdx.assetpath, to the assets folder of the Android project. Notice that it is good practice to use relative paths such as ../demo-android/assets so that the reference does not get broken in case the workspace is moved from its original location. Take this advice as a precaution to protect you and maybe your fellow developers too from wasting precious time on something that can be easily avoided by using the right setup right from the beginning. 
The following is the code listing for GwtDefinition.gwt.xml from demo-html : <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE module PUBLIC "-//Google Inc.//DTD Google Web Toolkit trunk//EN" "http://google-web-toolkit.googlecode.com/svn/trunk/ distro-source/core/src/gwt-module.dtd"> <module> <inherits name='com.badlogic.gdx.backends.gdx_backends_gwt' /> <inherits name='MyDemo' /> <entry-point class='com.packtpub.libgdx.demo.client.GwtLauncher' /> <set-configuration-property name="gdx.assetpath" value="../demo-android/assets" /> </module> Backends Libgdx makes use of several other libraries to interface the specifics of each platform in order to provide cross-platform support for your applications. Generally, a backend is what enables Libgdx to access the corresponding platform functionalities when one of the abstracted (platform-independent) Libgdx methods is called. For example, drawing an image to the upper-left corner of the screen, playing a sound file at a volume of 80 percent, or reading and writing from/to a file. Libgdx currently provides the following three backends: LWJGL (Lightweight Java Game Library) Android JavaScript/WebGL As already mentioned in Introduction to Libgdx and Project Setup , there will also be an iOS backend in the near future. LWJGL (Lightweight Java Game Library) LWJGL ( Lightweight Java Game Library ) is an open source Java library originally started by Caspian Rychlik-Prince to ease game development in terms of accessing the hardware resources on desktop systems. In Libgdx, it is used for the desktop backend to support all the major desktop operating systems, such as Windows, Linux, and Mac OS X. For more details, check out the official LWJGL website at http://www.lwjgl.org/. Android Google frequently releases and updates their official Android SDK. This represents the foundation for Libgdx to support Android in the form of a backend. There is an API Guide available which explains everything the Android SDK has to offer for Android developers. You can find it at http://developer.android.com/guide/components/index.html. WebGL WebGL support is one of the latest additions to the Libgdx framework. This backend uses the GWT to translate Java code into JavaScript and SoundManager2 ( SM2 ), among others, to add a combined support for HTML5, WebGL, and audio playback. Note that this backend requires a WebGL-capable web browser to run the application. You might want to check out the official website of GWT: https://developers.google.com/web-toolkit/. You might want to check out the official website of SM2: http://www.schillmania.com/projects/soundmanager2/. You might want to check out the official website of WebGL: http://www.khronos.org/webgl/. There is also a list of unresolved issues you might want to check out at https://github.com/libgdx/libgdx/blob/master/backends/gdx-backends-gwt/issues.txt. Modules Libgdx provides six core modules that allow you to access the various parts of the system your application will run on. What makes these modules so great for you as a developer is that they provide you with a single Application Programming Interface ( API ) to achieve the same effect on more than just one platform. This is extremely powerful because you can now focus on your own application and you do not have to bother with the specialties that each platform inevitably brings, including the nasty little bugs that may require tricky workarounds. 
This is all going to be transparently handled in a straightforward API which is categorized into logic modules and is globally available anywhere in your code, since every module is accessible as a static field in the Gdx class. Naturally, Libgdx does always allow you to create multiple code paths for per-platform decisions. For example, you could conditionally increase the level of detail in a game when run on the desktop platform, since desktops usually have a lot more computing power than mobile devices. The application module The application module can be accessed through Gdx.app. It gives you access to the logging facility, a method to shutdown gracefully, persist data, query the Android API version, query the platform type, and query the memory usage. Logging Libgdx employs its own logging facility. You can choose a log level to filter what should be printed to the platform's console. The default log level is LOG_INFO. You can use a settings file and/or change the log level dynamically at runtime using the following code line: Gdx.app.setLogLevel(Application.LOG_DEBUG); The available log levels are: LOG_NONE: This prints no logs. The logging is completely disabled. LOG_ERROR: This prints error logs only. LOG_INFO: This prints error and info logs. LOG_DEBUG: This prints error, info, and debug logs. To write an info, debug, or an error log to the console, use the following listings: Gdx.app.log("MyDemoTag", "This is an info log."); Gdx.app.debug("MyDemoTag", "This is a debug log."); Gdx.app.error("MyDemoTag", "This is an error log."); Shutting down gracefully You can tell Libgdx to shutdown the running application. The framework will then stop the execution in the correct order as soon as possible and completely de-allocate any memory that is still in use, freeing both Java and the native heap. Use the following listing to initiate a graceful shutdown of your application: Gdx.app.exit(); You should always do a graceful shutdown when you want to terminate your application. Otherwise, you will risk creating memory leaks, which is a really bad thing. On mobile devices, memory leaks will probably have the biggest negative impact due to their limited resources. Persisting data If you want to persist your data, you should use the Preferences class. It is merely a dictionary or a hash map data type which stores multiple key-value pairs in a file. Libgdx will create a new preferences file on the fly if it does not exist yet. You can have several preference files using unique names in order to split up data into categories. To get access to a preference file, you need to request a Preferences instance by its filename as follows: Preferences prefs = Gdx.app.getPreferences("settings.prefs"); To write a (new) value, you have to choose a key under which the value should be stored. If this key already exists in a preferences file, it will be overwritten. Do not forget to call flush() afterwards to persist the data, or else all the changes will be lost. prefs.putInteger("sound_volume", 100); // volume @ 100% prefs.flush(); Persisting data needs a lot more time than just modifying values in memory (without flushing). Therefore, it is always better to modify as many values as possible before a final flush() method is executed. To read back a certain value from a preferences file, you need to know the corresponding key. If this key does not exist, it will be set to the default value. 
You can optionally pass your own default value as the second argument (for example, in the following listing, 50 is for the default sound volume): int soundVolume = prefs.getInteger("sound_volume", 50); Querying the Android API Level On Android, you can query the Android API Level, which allows you to handle things differently for certain versions of the Android OS. Use the following listing to find out the version: Gdx.app.getVersion(); On platforms other than Android, the version returned is always 0. Querying the platform type You may want to write a platform-specific code where it is necessary to know the current platform type. The following example shows how it can be done: switch (Gdx.app.getType()) { case Desktop: // Code for Desktop application break; case Android: // Code for Android application break; case WebGL: // Code for WebGL application break; default: // Unhandled (new?) platform application break; } Querying memory usage You can query the system to find out its current memory footprint of your application. This may help you find excessive memory allocations that could lead to application crashes. The following functions return the amount of memory (in bytes) that is in use by the corresponding heap: long memUsageJavaHeap = Gdx.app.getJavaHeap(); long memUsageNativeHeap = Gdx.app.getNativeHeap(); Graphics module The graphics module can be accessed either through Gdx.getGraphics() or by using the shortcut variable Gdx.graphics. Querying delta time Query Libgdx for the time span between the current and the last frame in seconds by calling Gdx.graphics.getDeltaTime(). Querying display size Query the device's display size returned in pixels by calling Gdx.graphics.getWidth() and Gdx.graphics.getHeight(). Querying the FPS (frames per second) counter Query a built-in frame counter provided by Libgdx to find the average number of frames per second by calling Gdx.graphics.getFramesPerSecond(). Audio module The audio module can be accessed either through Gdx.getAudio() or by using the shortcut variable Gdx.audio. Sound playback To load sounds for playback, call Gdx.audio.newSound(). The supported file formats are WAV, MP3, and OGG. There is an upper limit of 1 MB for decoded audio data. Consider the sounds to be short effects like bullets or explosions so that the size limitation is not really an issue. Music streaming To stream music for playback, call Gdx.audio.newMusic(). The supported file formats are WAV, MP3, and OGG. Input module The input module can be accessed either through Gdx.getInput() or by using the shortcut variable Gdx.input. In order to receive and handle input properly, you should always implement the InputProcessor interface and set it as the global handler for input in Libgdx by calling Gdx.input.setInputProcessor(). Reading the keyboard/touch/mouse input Query the system for the last x or y coordinate in the screen coordinates where the screen origin is at the top-left corner by calling either Gdx.input.getX() or Gdx.input.getY(). To find out if the screen is touched either by a finger or by mouse, call Gdx.input.isTouched() To find out if the mouse button is pressed, call Gdx.input.isButtonPressed() To find out if the keyboard is pressed, call Gdx.input.isKeyPressed() Reading the accelerometer Query the accelerometer for its value on the x axis by calling Gdx.input.getAccelerometerX(). Replace the X in the method's name with Y or Z to query the other two axes. Be aware that there will be no accelerometer present on a desktop, so Libgdx always returns 0. 
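As a quick illustration of the input module, here is a minimal sketch of an input handler. It extends Libgdx's InputAdapter convenience class, which provides empty implementations of the InputProcessor interface; the class name and log tag are made up for this example:

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input;
import com.badlogic.gdx.InputAdapter;

public class MyDemoInputHandler extends InputAdapter {
    @Override
    public boolean keyDown (int keycode) {
        if (keycode == Input.Keys.ESCAPE) {
            Gdx.app.exit(); // graceful shutdown, as described above
            return true;    // event was handled
        }
        return false;       // let another handler deal with it
    }

    @Override
    public boolean touchDown (int x, int y, int pointer, int button) {
        Gdx.app.debug("MyDemoTag", "touch down at " + x + "/" + y);
        return true;
    }
}

It would be registered once, typically in create(), with Gdx.input.setInputProcessor(new MyDemoInputHandler());.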
Starting and canceling vibrator On Android, you can let the device vibrate by calling Gdx.input.vibrate(). A running vibration can be cancelled by calling Gdx.input.cancelVibrate(). Catching Android soft keys You might want to catch Android's soft keys to add an extra handling code for them. If you want to catch the back button, call Gdx.input.setCatchBackKey(true). If you want to catch the menu button, call Gdx.input.setCatchMenuKey(true). On a desktop where you have a mouse pointer, you can tell Libgdx to catch it so that you get a permanent mouse input without having the mouse ever leave the application window. To catch the mouse cursor, call Gdx.input.setCursorCatched(true). The files module The files module can be accessed either through Gdx.getFiles() or by using the shortcut variable Gdx.files. Getting an internal file handle You can get a file handle for an internal file by calling Gdx.files.internal(). An internal file is relative to the assets folder on the Android and WebGL platforms. On a desktop, it is relative to the root folder of the application. Getting an external file handle You can get a file handle for an external file by calling Gdx.files.external(). An external file is relative to the SD card on the Android platform. On a desktop, it is relative to the user's home folder. Note that this is not available for WebGL applications. The network module The network module can be accessed either through Gdx.getNet() or by using the shortcut variable Gdx.net. HTTP GET and HTTP POST You can make HTTP GET and POST requests by calling either Gdx.net.httpGet() or Gdx.net.httpPost(). Client/server sockets You can create client/server sockets by calling either Gdx.net.newClientSocket() or Gdx.net.newServerSocket(). Opening a URI in a web browser To open a Uniform Resource Identifier ( URI ) in the default web browser, call Gdx.net.openURI(URI). Libgdx's Application Life-Cycle and Interface The Application Life-Cycle in Libgdx is a well-defined set of distinct system states. The list of these states is pretty short: create, resize, render, pause, resume, and dispose. Libgdx defines an ApplicationListener interface that contains six methods, one for each system state. The following code listing is a copy that is directly taken from Libgdx's sources. For the sake of readability, all comments have been stripped. public interface ApplicationListener { public void create (); public void resize (int width, int height); public void render (); public void pause (); public void resume (); public void dispose (); } All you need to do is implement these methods in your main class of the shared game code project. Libgdx will then call each of these methods at the right time. The following diagram visualizes the Libgdx's Application Life-Cycle: Note that a full and dotted line basically has the same meaning in the preceding figure. They both connect two consecutive states and have a direction of flow indicated by a little arrowhead on one end of the line. A dotted line additionally denotes a system event. When an application starts, it will always begin with create(). This is where the initialization of the application should happen, such as loading assets into memory and creating an initial state of the game world. Subsequently, the next state that follows is resize(). This is the first opportunity for an application to adjust itself to the available display size (width and height) given in pixels. Next, Libgdx will handle system events. 
If no event has occurred in the meanwhile, it is assumed that the application is (still) running. The next state would be render(). This is where a game application will mainly do two things: Update the game world model Draw the scene on the screen using the updated game world model Afterwards, a decision is made upon which the platform type is detected by Libgdx. On a desktop or in a web browser, the displaying application window can be resized virtually at any time. Libgdx compares the last and current sizes on every cycle so that resize() is only called if the display size has changed. This makes sure that the running application is able to accommodate a changed display size. Now the cycle starts over by handling (new) system events once again. Another system event that can occur during runtime is the exit event. When it occurs, Libgdx will first change to the pause() state, which is a very good place to save any data that would be lost otherwise after the application has terminated. Subsequently, Libgdx changes to the dispose() state where an application should do its final clean-up to free all the resources that it is still using. This is also almost true for Android, except that pause() is an intermediate state that is not directly followed by a dispose() state at first. Be aware that this event may occur anytime during application runtime while the user has pressed the Home button or if there is an incoming phone call in the meanwhile. In fact, as long as the Android operating system does not need the occupied memory of the paused application, its state will not be changed to dispose(). Moreover, it is possible that a paused application might receive a resume system event, which in this case would change its state to resume(), and it would eventually arrive at the system event handler again. Starter Classes A Starter Class defines the entry point (starting point) of a Libgdx application. They are specifically written for a certain platform. Usually, these kinds of classes are very simple and mostly consist of not more than a few lines of code to set certain parameters that apply to the corresponding platform. Think of them as a kind of boot-up sequence for each platform. Once booting has finished, the Libgdx framework hands over control from the Starter Class (for example, the demo-desktop project) to your shared application code (for example, the demo project) by calling the different methods from the ApplicationListener interface that the MyDemo class implements. Remember that the MyDemo class is where the shared application code begins. We will now take a look at each of the Starter Classes that were generated during the project setup. Running the demo application on a desktop The Starter Class for the desktop application is called Main.java. The following listing is Main.java from demo-desktop : package com.packtpub.libgdx.demo; import com.badlogic.gdx.backends.lwjgl.LwjglApplication; import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration; public class Main { public static void main(String[] args) { LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration(); cfg.title = "demo"; cfg.useGL20 = false; cfg.width = 480; cfg.height = 320; new LwjglApplication(new MyDemo(), cfg); } } In the preceding code listing, you see the Main class, a plain Java class without the need to implement an interface or inherit from another class. Instead, a new instance of the LwjglApplication class is created. 
This class provides a couple of overloaded constructors to choose from. Here, we pass a new instance of the MyDemo class as the first argument to the constructor. Optionally, an instance of the LwjglApplicationConfiguration class can be passed as the second argument. The configuration class allows you to set every parameter that is configurable for a Libgdx desktop application. In this case, the window title is set to demo and the window's width and height is set to 480 by 320 pixels. This is all you need to write and configure a Starter Class for a desktop. Let us try to run the application now. To do this, right-click on the demo-desktop project in Project Explorer in Eclipse and then navigate to Run As | Java Application. Eclipse may ask you to select the Main class when you do this for the first time. Simply select the Main class and also check that the correct package name ( com.packtpub.libgdx.demo ) is displayed next to it. The desktop application should now be up and running on your computer. If you are working on Windows, you should see a window that looks as follows: Summary In this article, we learned about Libgdx and how all the projects of an application work together. We covered Libgdx's backends, modules, and Starter Classes. Additionally, we covered what the Application Life Cycle and corresponding interface are, and how they are meant to work. Resources for Article: Further resources on this subject: Panda3D Game Development: Scene Effects and Shaders [Article] Microsoft XNA 4.0 Game Development: Receiving Player Input [Article] Introduction to Game Development Using Unity 3D [Article]
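Before moving on, the following sketch ties the life-cycle discussion together by showing what a shared-code class such as MyDemo could look like. The clear color and log messages are purely illustrative; the method set is exactly the ApplicationListener interface shown earlier:

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL10;

public class MyDemo implements ApplicationListener {
    @Override public void create () {
        Gdx.app.log("MyDemoTag", "create(): load assets and build the game world here");
    }

    @Override public void resize (int width, int height) {
        Gdx.app.log("MyDemoTag", "resize(): " + width + "x" + height);
    }

    @Override public void render () {
        // update the game world using Gdx.graphics.getDeltaTime(), then draw it;
        // here we only clear the screen to black
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    }

    @Override public void pause () { /* save state that would otherwise be lost */ }
    @Override public void resume () { /* reload anything dropped while paused */ }
    @Override public void dispose () { /* free all remaining resources */ }
}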

The Ogre 3D scene graph

Packt
20 Dec 2010
13 min read
Creating a scene node We will learn how to create a new scene node and attach our 3D model to it. How to create a scene node with Ogre 3D We will follow these steps: In the old version of our code, we had the following two lines in the createScene() function: Ogre::Entity* ent = mSceneMgr->createEntity("MyEntity","Sinbad.mesh"); mSceneMgr->getRootSceneNode()->attachObject(ent); Replace the last line with the following: Ogre::SceneNode* node = mSceneMgr->createSceneNode("Node1"); Then add the following two lines; the order of those two lines is irrelevant forthe resulting scene: mSceneMgr->getRootSceneNode()->addChild(node); node->attachObject(ent); Compile and start the application. What just happened? We created a new scene node named Node 1. Then we added the scene node to the root scene node. After this, we attached our previously created 3D model to the newly created scene node so it would be visible. How to work with the RootSceneNode The call mSceneMgr->getRootSceneNode() returns the root scene node. This scene node is a member variable of the scene manager. When we want something to be visible, we need to attach it to the root scene node or a node which is a child or a descendent in any way. In short, there needs to be a chain of child relations from the root node to the node; otherwise it won't be rendered. As the name suggests, the root scene node is the root of the scene. So the entire scene will be, in some way, attached to the root scene node. Ogre 3D uses a so-called scene graph to organize the scene. This graph is like a tree, it has one root, the root scene node, and each node can have children. We already have used this characteristic when we called mSceneMgr->getRootSceneNode()->addChild(node);. There we added the created scene node as a child to the root. Directly afterwards, we added another kind of child to the scene node with node->attachObject(ent);. Here, we added an entity to the scene node. We have two different kinds of objects we can add to a scene node. Firstly, we have other scene nodes, which can be added as children and have children themselves. Secondly, we have entities that we want rendered. Entities aren't children and can't have children themselves. They are data objects which are associated with the node and can be thought of as leaves of the tree. There are a lot of other things we can add to a scene, like lights, particle systems, and so on. We will later learn what these things are and how to use them. Right now, we only need entities. Our current scene graph looks like this: The first thing we need to understand is what a scene graph is and what it does. A scene graph is used to represent how different parts of a scene are related to each other in 3D space. 3D space Ogre 3D is a 3D rendering engine, so we need to understand some basic 3D concepts. The most basic construct in 3D is a vector, which is represented by an ordered triple (x,y,z). Each position in a 3D space can be represented by such a triple using the Euclidean coordination system for three dimensions. It is important to know that there are different kinds of coordinate systems in 3D space. The only difference between the systems is the orientation of the axis and the positive rotation direction. There are two systems that are widely used, namely, the left-handed and the right-handed versions. In the following image, we see both systems—on the left side, we see the left-handed version; and on the right side, we see the right-handed one. 
Source:http://en.wikipedia.org/wiki/File:Cartesian_coordinate_system_handedness.svg The names left-and right-handed are based on the fact that the orientation of the axis can be reconstructed using the left and right hand. The thumb is the x-axis, the index finger the y-axis, and the middle finger the z-axis. We need to hold our hands so that we have a ninety-degree angle between thumb and index finger and also between middle and index finger. When using the right hand, we get a right-handed coordination system. When using the left hand, we get the left-handed version. Ogre uses the right-handed system, but rotates it so that the positive part of the x-axis is pointing right and the negative part of the x-axis points to the left. The y-axis is pointing up and the z-axis is pointing out of the screen and it is known as the y-up convention. This sounds irritating at first, but we will soon learn to think in this coordinate system. The website http://viz.aset.psu.edu/gho/sem_notes/3d_fundamentals/html/3d_coordinates.html contains a rather good picture-based explanation of the different coordination systems and how they relate to each other. Scene graph A scene graph is one of the most used concepts in graphics programming. Simply put, it's a way to store information about a scene. We already discussed that a scene graph has a root and is organized like a tree. But we didn't touch on the most important function of a scene graph. Each node of a scene graph has a list of its children as well as a transformation in the 3D space. The transformation is composed of three aspects, namely, the position, the rotation, and the scale. The position is a triple (x,y,z), which obviously describes the position of the node in the scene. The rotation is stored using a quaternion, a mathematical concept for storing rotations in 3D space, but we can think of rotations as a single floating point value for each axis, describing how the node is rotated using radians as units. Scaling is quite easy; again, it uses a triple (x,y,z), and each part of the triple is simply the factor to scale the axis with. The important thing about a scene graph is that the transformation is relative to the parent of the node. If we modify the orientation of the parent, the children will also be affected by this change. When we move the parent 10 units along the x-axis, all children will also be moved by 10 units along the x-axis. The final orientation of each child is computed using the orientation of all parents. This fact will become clearer with the next diagram. The position of MyEntity in this scene will be (10,0,0) and MyEntity2 will be at (10,10,20). Let's try this in Ogre 3D. Pop quiz – finding the position of scene nodes Look at the following tree and determine the end positions of MyEntity and MyEntity2: MyEntity(60,60,60) and MyEntity2(0,0,0) MyEntity(70,50,60) and MyEntity2(10,-10,0) MyEntity(60,60,60) and MyEntity2(10,10,10) Setting the position of a scene node Now, we will try to create the setup of the scene from the diagram before the previous image. Time for action – setting the position of a scene node Add this new line after the creation of the scene node: node->setPosition(10,0,0); To create a second entity, add this line at the end of the createScene() function: Ogre::Entity* ent2 = mSceneMgr->createEntity("MyEntity2","Sinbad. 
mesh"); Then create a second scene node: Ogre::SceneNode* node2 = mSceneMgr->createSceneNode("Node2"); Add the second node to the first one: node->addChild(node2); Set the position of the second node: node2->setPosition(0,10,20); Attach the second entity to the second node: node2->attachObject(ent2); Compile the program and you should see two instances of Sinbad: What just happened? We created a scene which matches the preceding diagram. The first new function we used was at step 1. Easily guessed, the function setPosition(x,y,z) sets the position of the node to the given triple. Keep in mind that this position is relative to the parent. We wanted MyEntity2 to be at (10,10,20), because we added node2, which holds MyEntity2, to a scene node which already was at the position (10,0,0). We only needed to set the position of node2 to (0,10,20). When both positions combine, MyEntity2 will be at (10,10,20). Pop quiz – playing with scene nodes We have the scene node node1 at (0,20,0) and we have a child scene node node2, which has an entity attached to it. If we want the entity to be rendered at (10,10,10), at which position would we need to set node2? (10,10,10) (10,-10,10) (-10,10,-10) Have a go hero – adding a Sinbad Add a third instance of Sinbad and let it be rendered at the position (10,10,30). Rotating a scene node We already know how to set the position of a scene node. Now, we will learn how to rotate a scene node and another way to modify the position of a scene node. Time for action – rotating a scene node We will use the previous code, but create completely new code for the createScene() function. Remove all code from the createScene() function. First create an instance of Sinbad.mesh and then create a new scene node. Set the position of the scene node to (10,10,0), at the end attach the entity to the node, and add the node to the root scene node as a child: Ogre::Entity* ent = mSceneMgr->createEntity("MyEntity","Sinbad. mesh"); Ogre::SceneNode* node = mSceneMgr->createSceneNode("Node1"); node->setPosition(10,10,0); mSceneMgr->getRootSceneNode()->addChild(node); node->attachObject(ent); Again, create a new instance of the model, also a new scene node, and set the position to (10,0,0): Ogre::Entity* ent2 = mSceneMgr->createEntity("MyEntity2","Sinbad. mesh"); Ogre::SceneNode* node2 = mSceneMgr->createSceneNode("Node2"); node->addChild(node2); node2->setPosition(10,0,0); Now add the following two lines to rotate the model and attach the entity to the scene node: node2->pitch(Ogre::Radian(Ogre::Math::HALF_PI)); node2->attachObject(ent2); Do the same again, but this time use the function yaw instead of the function pitch and the translate function instead of the setPosition function: Ogre::Entity* ent3 = mSceneMgr->createEntity("MyEntity3","Sinbad. mesh"); Ogre::SceneNode* node3 = mSceneMgr->createSceneNode("Node3",); node->addChild(node3); node3->translate(20,0,0); node3->yaw(Ogre::Degree(90.0f)); node3->attachObject(ent3); And the same again with roll instead of yaw or pitch: Ogre::Entity* ent4 = mSceneMgr->createEntity("MyEntity4","Sinbad. mesh"); Ogre::SceneNode* node4 = mSceneMgr->createSceneNode("Node4"); node->addChild(node4); node4->setPosition(30,0,0); node4->roll(Ogre::Radian(Ogre::Math::HALF_PI)); node4->attachObject(ent4); Compile and run the program, and you should see the following screenshot: What just happened? We repeated the code we had before four times and always changed some small details. The first repeat is nothing special. 
It is just the code we had before and this instance of the model will be our reference model to see what happens to the other three instances we made afterwards. In step 4, we added one following additional line: node2->pitch(Ogre::Radian(Ogre::Math::HALF_PI)); The function pitch(Ogre::Radian(Ogre::Math::HALF_PI)) rotates a scene node around the x-axis. As said before, this function expects a radian as parameter and we used half of pi, which means a rotation of ninety degrees. In step 5, we replaced the function call setPosition(x,y,z) with translate(x,y,z). The difference between setPosition(x,y,z) and translate(x,y,z) is that setPosition sets the position—no surprises here. translate adds the given values to the position of the scene node, so it moves the node relatively to its current position. If a scene node has the position (10,20,30) and we call setPosition(30,20,10), the node will then have the position (30,20,10). On the other hand, if we call translate(30,20,10), the node will have the position (40,40,40). It's a small, but important, difference. Both functions can be useful if used in the correct circumstances, like when we want to position in a scene, we would use the setPosition(x,y,z) function. However, when we want to move a node already positioned in the scene, we would use translate(x,y,z). Also, we replaced pitch(Ogre::Radian(Ogre::Math::HALF_PI))with yaw(Ogre::Degree(90.0f)). The yaw() function rotates the scene node around the y-axis. Instead of Ogre::Radian(), we used Ogre::Degree(). Of course, Pitch and yaw still need a radian to be used. However, Ogre 3D offers the class Degree(), which has a cast operator so the compiler can automatically cast into a Radian(). Therefore, the programmer is free to use a radian or degree to rotate scene nodes. The mandatory use of the classes makes sure that it's always clear which is used, to prevent confusion and possible error sources. Step 6 introduces the last of the three different rotate function a scene node has, namely, roll(). This function rotates the scene node around the z-axis. Again, we could use roll(Ogre::Degree(90.0f)) instead of roll(Ogre::Radian(Ogre::Math::HALF_PI)). The program when run shows a non-rotated model and all three possible rotations. The left model isn't rotated, the model to the right of the left model is rotated around the x-axis, the model to the left of the right model is rotated around the y-axis, and the right model is rotated around the z-axis. Each of these instances shows the effect of a different rotate function. In short, pitch() rotates around the x-axis, yaw() around the y-axis, and roll() around the z-axis. We can either use Ogre::Degree(degree) or Ogre::Radian(radian) to specify how much we want to rotate. Pop quiz – rotating a scene node Which are the three functions to rotate a scene node? pitch, yawn, roll pitch, yaw, roll pitching, yaw, roll Have a go hero – using Ogre::Degree Remodel the code we wrote for the previous section in such a way that each occurrence of Ogre::Radian is replaced with an Ogre::Degree and vice versa, and the rotation is still the same. Scaling a scene node We already have covered two of the three basic operations we can use to manipulate our scene graph. Now it's time for the last one, namely, scaling. Time for action – scaling a scene node Once again, we start with the same code block we used before. Remove all code from the createScene() function and insert the following code block: Ogre::Entity* ent = mSceneMgr->createEntity("MyEntity","Sinbad. 
mesh"); Ogre::SceneNode* node = mSceneMgr->createSceneNode("Node1"); node->setPosition(10,10,0); mSceneMgr->getRootSceneNode()->addChild(node); node->attachObject(ent); Again, create a new entity: Ogre::Entity* ent2 = mSceneMgr->createEntity("MyEntity2","Sinbad. mesh"); Now we use a function that creates the scene node and adds it automatically as a child. Then we do the same thing we did before: Ogre::SceneNode* node2 = node->createChildSceneNode("node2"); node2->setPosition(10,0,0); node2->attachObject(ent2); Now, after the setPosition() function, call the following line to scale the model: node2->scale(2.0f,2.0f,2.0f); Create a new entity: Ogre::Entity* ent3 = mSceneMgr->createEntity("MyEntity3","Sinbad. mesh"); Now we call the same function as in step 3, but with an additional parameter: Ogre::SceneNode* node3 = node->createChildSceneNode("node3",Ogre:: Vector3(20,0,0)); After the function call, insert this line to scale the model: node3->scale(0.2f,0.2f,0.2f); Compile the program and run it, and you should see the following image:

Introduction to Color Theory and Lighting Basics in Blender

Packt
14 Apr 2011
7 min read
Basic color theory To fully understand how light works, we need to have a basic understanding of what color is and how different colors interact with each other. The study of this phenomenon is known as color theory. What is color? When light comes in contact with an object, the object absorbs a certain amount of that light. The rest is reflected into the eye of the viewer in the form of color. The easiest way to visualize colors and their relations is in the form of a color wheel. Primary colors There are millions of colors, but there are only three colors that cannot be created through color mixing—red, yellow, and blue. These colors are known as primary colors, which are used to create the other colors on the color wheel through a process known as color mixing. Through color mixing, we get other "sets" of colors, including secondary and tertiary colors. Secondary colors Secondary colors are created when two primary colors are mixed together. For example, mixing red and blue makes purple, red and yellow make orange, and blue and yellow make green. Tertiary colors It's natural to assume that, because mixing two primary colors creates a secondary color, mixing two secondary colors would create a tertiary color. Surprisingly, this isn't the case. A tertiary color is, in fact, the result of mixing a primary and secondary color together. This gives us the remainder of the color wheel: Red-orange Orange-yellow Chartreuse Turquoise Indigo Violet-red Color relationships There are other relationships between colors that we should know about before we start using Blender. The first is complimentary colors. Complimentary colors are colors that are across from each other on the color wheel. For example, red and green are compliments. Complimentary colors are especially useful for creating contrast in an image, because mixing them together darkens the hue. In a computer program, mixing perfect compliments together will result in black, but mixing compliments in a more traditional medium such as oil pastels results in more of a dark brown hue. In both situations, though, the compliments are used to create a darker value. Be wary of using complimentary colors in computer graphics—if complimentary colors mix accidentally, it will result in black artifacts in images or animations. The other color relationship that we should be aware of is analogous colors. Analogous colors are colors found next to each other on the color wheel. For example, red, red-orange, and orange are analogous. Here's the kicker—red, orange, and yellow can also be analogous as well. A good rule to follow is as long as you don't span more than one primary color on the color wheel, they're most likely considered analogous colors. Color temperature Understanding color temperature is an essential step in understanding how lights work—at the very least, it helps us understand why certain lights emit the colors they do. No light source emits a constant light wavelength. Even the sun, although considered a constant light source, is filtered by the atmosphere to various degrees based on the time of the day, changing its perceived color. Color temperature is typically measured in degrees Kelvin (°K), and has a color range from a red to blue hue, like in the image below: Real world, real lights So how is color applicable beyond a two-dimensional color wheel? In the real world, our eyes perceive color because light from the sun—which contains all colors in the visible color spectrum—is reflected off of objects in our field of vision. 
As light hits an object, some wavelengths are absorbed, while the rest are reflected. Those reflected rays are what determine the color we perceive that particular object to be. Of course, the sun isn't the only source of light we have. There are many different types of natural and artificial light sources, each with its own unique properties. The most common types of light sources we may try to simulate in Blender include:

Candlelight
Incandescent light
Fluorescent light
Sunlight
Skylight

Candlelight
Candlelight is a source of light as old as time. It has been used for thousands of years and is still used today in many cases. The color temperature of a candle's light is about 1500 K, giving it a warm red-orange hue. Candlelight also has a tendency to create really high contrast between lit and unlit areas in a room, which creates a very successful dramatic effect.

Incandescent light bulbs
When most people hear the term "light bulb", the incandescent light bulb immediately comes to mind. It's also known as a tungsten-halogen light bulb. It's your typical household light bulb, burning at approximately 2800 K-3200 K. This color temperature still allows it to fall within the orange-yellow part of the spectrum, but it is noticeably brighter than the light of a candle.

Fluorescent light bulbs
Fluorescent lights are an alternative to incandescent bulbs. Also known as mercury vapor lights, fluorescents burn at a color temperature range of 3500 K-5900 K, allowing them to emit a color anywhere between a yellow and a white hue. They're commonly used when lighting a large area effectively, such as a warehouse, school hallway, or even a conference room.

The sun and the sky
Now let's take a look at some natural sources of light! The most obvious example is the sun. The sun burns at a color temperature of approximately 5500 K, giving it its bright white color. We rarely use pure white as a light's color in 3D, though—it makes your scene look too artificial. Instead, we may choose to use a color that best suits the scene at hand. For example, if we are lighting a desert scene, we may choose to use a beige color to simulate light bouncing off the sand. But even so, this still doesn't produce an entirely realistic effect. This is where the next source of light comes in—the sky. The sky can produce an entire array of colors from deep purple to orange to bright blue. It produces a color temperature range of 6000 K-20,000 K. That's a huge range! We can really use this to our advantage in our 3D scenes—the color of the sky can have the final say in what the mood of your scene ends up being.

Chromatic adaptation
What is chromatic adaptation? We're all more familiar with this process than we may realize. As light changes, the color we perceive from the world around us changes. To accommodate for those changes, our eyes adjust what we see to something we're more familiar with (or what our brains would consider normal). When working in 3D you have to keep this in mind, because even though your 3D scene may be physically lit correctly, it may not look natural because the computer renders the final image objectively, without the chromatic adaptation that we, as humans, are used to. Take this image for example. In the top image, the second card from the left appears to be a stronger shade of pink than the corresponding card in the bottom picture. Believe it or not, they are the exact same color, but because of the red hue of the second photo, our brains change how we perceive that image.

Ogre 3D: Fixed Function Pipeline and Shaders

Packt
25 Nov 2010
13 min read
  OGRE 3D 1.7 Beginner's Guide Create real time 3D applications using OGRE 3D from scratch Easy-to-follow introduction to OGRE 3D Create exciting 3D applications using OGRE 3D Create your own scenes and monsters, play with the lights and shadows, and learn to use plugins Get challenged to be creative and make fun and addictive games on your own A hands-on do-it-yourself approach with over 100 examples Introduction Fixed Function Pipeline is the rendering pipeline on the graphics card that produces those nice shiny pictures we love looking at. As the prefix Fixed suggests, there isn't a lot of freedom to manipulate the Fixed Function Pipeline for the developer. We can tweak some parameters using the material files, but nothing fancy. That's where shaders can help fill the gap. Shaders are small programs that can be loaded onto the graphics card and then function as a part of the rendering process. These shaders can be thought of as little programs written in a C-like language with a small, but powerful, set of functions. With shaders, we can almost completely control how our scene is rendered and also add a lot of new effects that weren't possible with only the Fixed Function Pipeline. Render Pipeline To understand shaders, we need to first understand how the rendering process works as a whole. When rendering, each vertex of our model is translated from local space into camera space, then each triangle gets rasterized. This means, the graphics card calculates how to represent the model in an image. These image parts are called fragments. Each fragment is then processed and manipulated. We could apply a specific part of a texture to this fragment to texture our model or we could simply assign it a color when rendering a model in only one color. After this processing, the graphics card tests if the fragment is covered by another fragment that is nearer to the camera or if it is the fragment nearest to the camera. If this is true, the fragment gets displayed on the screen. In newer hardware, this step can occur before the processing of the fragment. This can save a lot of computation time if most of the fragments won't be seen in the end result. The following is a very simplified graph showing the pipeline: With almost each new graphics card generation, new shader types were introduced. It began with vertex and pixel/fragment shaders. The task of the vertex shader is to transform the vertices into camera space, and if needed, modify them in any way, like when doing animations completely on the GPU. The pixel/fragment shader gets the rasterized fragments and can apply a texture to them or manipulate them in other ways, for example, for lighting models with an accuracy of a pixel. Time for action – our first shader application Let's write our first vertex and fragment shaders: In our application, we only need to change the used material. Change it to MyMaterial13. Also remove the second quad: manual->begin("MyMaterial13", RenderOperation::OT_TRIANGLE_LIST); Now we need to create this material in our material file. First, we are going to define the fragment shader. 
Ogre 3D needs five pieces of information about the shader: The name of the shader In which language it is written In which source file it is stored How the main function of this shader is called In what profile we want the shader to be compiled All this information should be in the material file: fragment_program MyFragmentShader1 cg { source Ogre3DBeginnersGuideShaders.cg entry_point MyFragmentShader1 profiles ps_1_1 arbfp1 } The vertex shader needs the same parameter, but we also have to define a parameter that is passed from Ogre 3D to our shader. This contains the matrix that we will use for transforming our quad into camera space: vertex_program MyVertexShader1 cg { source Ogre3DBeginnerGuideShaders.cg entry_point MyVertexShader1 profiles vs_1_1 arbvp1 default_params { param_named_auto worldViewMatrix worldviewproj_matrix } } The material itself just uses the vertex and fragment shader names to reference them: material MyMaterial13 { technique { pass { vertex_program_ref MyVertexShader1 { } fragment_program_ref MyFragmentShader1 { } } } } Now we need to write the shader itself. Create a file named Ogre3DBeginnersGuideShaders.cg in the mediamaterialsprograms folder of your Ogre 3D SDK. Each shader looks like a function. One difference is that we can use the out keyword to mark a parameter as an outgoing parameter instead of the default incoming parameter. The out parameters are used by the rendering pipeline for the next rendering step. The out parameters of a vertex shader are processed and then passed into the pixel shader as an in parameter. The out parameter from a pixel shader is used to create the final render result. Remember to use the correct name for the function; otherwise, Ogre 3D won't find it. Let's begin with the fragment shader because it's easier: void MyFragmentShader1(out float4 color: COLOR) The fragment shader will return the color blue for every pixel we render: { color = float4(0,0,1,0); } That's all for the fragment shader; now we come to the vertex shader. The vertex shader has three parameters—the position for the vertex, the translated position of the vertex as an out variable, and as a uniform variable for the matrix we are using for the translation: void MyVertexShader1( float4 position : POSITION, out float4 oPosition : POSITION, uniform float4x4 worldViewMatrix) Inside the shader, we use the matrix and the incoming position to calculate the outgoing position: { oPosition = mul(worldViewMatrix, position); } Compile and run the application. You should see our quad, this time rendered in blue. What just happened? Quite a lot happened here; we will start with step 2. Here we defined the fragment shader we are going to use. Ogre 3D needs five pieces of information for a shader. We define a fragment shader with the keyword fragment_program, followed by the name we want the fragment program to have, then a space, and at the end, the language in which the shader will be written. As for programs, shader code was written in assembly and in the early days, programmers had to write shader code in assembly because there wasn't another language to be used. But also, as with general programming language, soon there came high-level programming to ease the pain of writing shader code. At the moment, there are three different languages that shaders can be written in: HLSL, GLSL, and CG. The shader language HLSL is used by DirectX and GLSL is the language used by OpenGL. CG was developed by NVidia in cooperation with Microsoft and is the language we are going to use. 
These languages are compiled at application startup into their respective assembly code. So shaders written in HLSL can only be used with DirectX, and GLSL shaders only with OpenGL. But CG can compile to both DirectX and OpenGL shader assembly code; that's the reason we are using it to stay truly cross-platform. That's two of the five pieces of information that Ogre 3D needs. The other three are given in the curly brackets. The syntax is like a property file—first the key and then the value. One key we use is source, followed by the file where the shader is stored. We don't need to give the full path; just the filename will do, because Ogre 3D scans our directories and only needs the filename to find the file. Another key we are using is entry_point, followed by the name of the function we are going to use for the shader. In the code file, we created a function called MyFragmentShader1 and we are giving Ogre 3D this name as the entry point for our fragment shader. This means that each time we need the fragment shader, this function is called. The function has only one parameter, out float4 color : COLOR. The prefix out signals that this parameter is an out parameter, meaning we will write a value into it, which will be used by the render pipeline later on. The type of this parameter is called float4, which simply is an array of four float values. For colors, we can think of it as a tuple (r,g,b,a) where r stands for red, g for green, b for blue, and a for alpha: the typical tuple used to describe colors. After the name of the parameter comes : COLOR. In CG, this is called a semantic, describing what the parameter is used for in the context of the render pipeline. The semantic :COLOR tells the render pipeline that this is a color. In combination with the out keyword and the fact that this is a fragment shader, the render pipeline can deduce that this is the color we want our fragment to have. The last piece of information we supply uses the keyword profiles with the values ps_1_1 and arbfp1. To understand this, we need to talk a bit about the history of shaders. With each generation of graphics cards, a new generation of shaders has been introduced. What started as fairly simple C-like languages without even IF conditions have grown into really complex and powerful programming languages. Right now, there are several different shader versions, each with its own function set. Ogre 3D needs to know which of these versions we want to use. ps_1_1 means pixel shader version 1.1 and arbfp1 means fragment program version 1. We need both profiles because ps_1_1 is a DirectX-specific function set and arbfp1 is a function subset for OpenGL. We say we are cross-platform, but sometimes we still need to define values for both platforms. All profiles can be found at http://www.ogre3d.org/docs/manual/manual_18.html. That's all that is needed to define the fragment shader in our material file.

In step 3, we defined our vertex shader. This part is very similar to the fragment shader definition code; the main difference is the default_params block. This block defines parameters that are given to the shader at runtime. param_named_auto defines a parameter that is automatically passed to the shader by Ogre 3D. After this key, we need to give the parameter a name, and after that, the keyword for the value we want it to have. We name the parameter worldViewMatrix; any other name would also work, and the value we want it to have has the key worldviewproj_matrix.
This key tells Ogre 3D that we want our parameter to have the value of the WorldViewProjection matrix. This matrix is used for transforming vertices from local into camera space. A list of all keyword values can be found at http://www.ogre3d.org/docs/manual/manual_23.html#SEC128. How we use these values will be seen shortly. Step 4 used the work we did before. As always, we defined our material with one technique and one pass; we didn't define a texture unit, but used the keyword vertex_program_ref. After this keyword, we need to put the name of a vertex program we defined; in our case, this is MyVertexShader1. If we wanted, we could have put some more parameters into the definition, but we didn't need to, so we just opened and closed the block with curly brackets. The same is true for fragment_program_ref.

Writing a shader

Now that we have defined all the necessary things in our material file, let's write the shader code itself. Step 6 defines the function head with the parameter we discussed before, so we won't go deeper here. Step 7 defines the function body; for this fragment shader, the body is extremely simple. We create a new float4 tuple, (0,0,1,0), which describes the color blue, and assign it to our out parameter color. The effect is that everything rendered with this material will be blue. There isn't more to the fragment shader, so let's move on to the vertex shader. Step 8 defines the function header. The vertex shader has three parameters—two are marked as positions using CG semantics, and the third is a 4x4 matrix of float values named worldViewMatrix. Before this parameter's type definition, there is the keyword uniform. Each time our vertex shader is called, it gets a new vertex as the position parameter input, calculates the position of this new vertex, and saves it in the oPosition parameter. This means that with each call, the parameter changes. This isn't true for the worldViewMatrix. The keyword uniform denotes parameters that are constant over one draw call. When we render our quad, the worldViewMatrix doesn't change, while the rest of the parameters are different for each vertex processed by our vertex shader. Of course, in the next frame, the worldViewMatrix will probably have changed. Step 9 creates the body of the vertex shader. In the body, we multiply the incoming vertex position with the worldViewMatrix to get the vertex translated into camera space. This translated vertex is saved in the out parameter to be processed by the rendering pipeline. We will look more closely into the render pipeline after we have experimented with shaders a bit more.

Texturing with shaders

We have painted our quad in blue, but we would like to use the previous texture.

Time for action – using textures in shaders

Create a new material named MyMaterial14. Also create two new shaders named MyFragmentShader2 and MyVertexShader2. Remember to copy the fragment and vertex program definitions into the material file. Add a texture unit with the rock texture to the material: texture_unit { texture terr_rock6.jpg } We need to add two new parameters to our fragment shader. The first is a float2 for the texture coordinates; we also use a semantic to mark this parameter as the first set of texture coordinates we are using. The other new parameter is of type sampler2D, which is another name for a texture. Because the texture doesn't change on a per-fragment basis, we mark it as uniform.
This keyword indicates that the parameter value comes from outside the CG program and is set by the rendering environment, in our case, by Ogre 3D: void MyFragmentShader2(float2 uv : TEXCOORD0, out float4 color : COLOR, uniform sampler2D texture) In the fragment shader, replace the color assignment with the following line: color = tex2D(texture, uv); The vertex shader also needs some new parameters—one float2 for the incoming texture coordinates and one float2 as the outgoing texture coordinates. Both are our TEXCOORD0 because one is the incoming and the other is the outgoing TEXCOORD0: void MyVertexShader2( float4 position : POSITION, out float4 oPosition : POSITION, float2 uv : TEXCOORD0, out float2 oUv : TEXCOORD0, uniform float4x4 worldViewMatrix) In the body, we calculate the outgoing position of the vertex: oPosition = mul(worldViewMatrix, position); For the texture coordinates, we assign the incoming value to the outgoing value: oUv = uv; Remember to change the used material in the application code, and then compile and run it. You should see the quad with the rock texture.
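To make the role of worldviewproj_matrix a little more concrete, here is a small CPU-side sketch in Python/NumPy of the math the vertex shader performs. This is not Ogre code: the matrices are hypothetical stand-ins built with the usual column-vector, OpenGL-style conventions, and the single mul(worldViewMatrix, position) in the shader corresponds to the combined matrix multiply below.

import numpy as np

def translation(tx, ty, tz):
    # World and view transforms are plain 4x4 affine matrices.
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

def perspective(fov_y_deg, aspect, near, far):
    # Standard OpenGL-style perspective matrix (column vectors).
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = (2.0 * far * near) / (near - far)
    m[3, 2] = -1.0
    return m

world = translation(0.0, 0.0, 0.0)        # model placed at the origin
view = translation(0.0, 0.0, -10.0)       # camera pulled 10 units back
proj = perspective(45.0, 16.0 / 9.0, 0.1, 100.0)

world_view_proj = proj @ view @ world     # what Ogre binds to worldviewproj_matrix

vertex = np.array([1.0, 1.0, 0.0, 1.0])   # one corner of the quad, w = 1
clip = world_view_proj @ vertex           # the shader's mul(worldViewMatrix, position)
ndc = clip[:3] / clip[3]                  # perspective divide, done later in the pipeline
print(ndc)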

Creating Convincing Images with Blender Internal Renderer: Part 1

Packt
20 Oct 2009
9 min read
Throughout the years that have passed since the emergence of Computer Graphics, many aspiring artists tried convincingly to recreate the real world through works of applied art, some of which include oil painting, charcoal painting, matte painting, and even the most basic ones like pastel and/or crayon drawings has already made it through the artistic universe of realism. Albeit the fact that recreating the real world is like reinventing the wheel (which some artists might argue with), it is not an easy task to involve yourself into. It takes a lot of practice, perseverance, and personality to make it through.  But one lesson I have learned from the art world is to consciously and subconsciously observe the world around you. Pay attention to details. Observe how a plant behaves at different environmental conditions, look how a paper's texture is changed when wet, or probably observe how water in a river distorts the underlying objects. These are just some of the things that you can observe around you, and there are a million more or even an infinite number of things you can observe in your lifetime. In the advent of 3D as part of the numerous studies involved in Computer Graphics, a lot of effort has been made into developing tools and applications that emulate real-world environment.  It has become an unstated norm that the more realistic looking an image is, the greater the impact it has on viewers.  That, in turn, is partly true, but the real essence into creating stunning images is to know how it would look beautiful amidst the criteria that are present.  It is not a general requirement that all your images must look hyper realistic, you just have to know and judge how it would look good, after all that's what CG is all about.  And believe it or not, cheating the eye is an essential tool of the trade. In 3D rendering context, there are a number of ways on how to achieve realism in your scenes, but intuitively, the use of external renderers and advanced raytracers does help a lot in the setup and makes the creation process a bit lighter as compared to manually setting up lights, shaders, etc.  But that comes at a rendering time tradeoff.  Unfortunately though, I won't be taking you to the steps on how to setup your scenes for use in external renderers, but instead I'll walk you through the steps on how to achieve slightly similar effects as to that of externals with the use of the native renderer or the internal renderer as some might call it. Hopefully in this short article, I can describe to you numerous ways on how to achieve good-looking and realistic images with some nifty tools, workarounds from within Blender and use the Blender Internal Renderer to achieve these effects. So, let's all get a cup of tea, a comfortable couch, and hop in! On a nutshell, what makes an image look real? Shading, Materials, Shadows, Textures, Transparency, Reflection, Refraction, Highlights, Contrast, Color Balance, DoF, Lens Effects, Geometry (bevels), Subtlety, Environment, Staging, Composite Nodes, Story.. Before Anything Else... Beyond anything that will be discussed here, nothing beats a properly planned and well-imagined scene.  I cannot stress enough how important it is to begin everything with deep and careful planning.  Be it just a ball on a table or a flying scaled bear with a head of a tarsier and legs that of a mouse (?), it is very vital to plan beforehand.  
Believe me, once you've planned everything right, you're almost done with your work (which I didn't believe then until I did give it a try).  And of course, with your touch of artistic flavors, a simple scene could just be the grandest one that history has ever seen. This article, by any means, does not technically teach you how to model subjects for your scene nor does it detail the concepts behind lighting (which is an article on its own and probably beyond my knowledge) nor does it teach you “the way” to do things but instead it will lead you through a process by which you will be able to understand your scene better and the concepts behind. I would also be leading you through a series of steps using the same scene we've setup from the beginning and hopefully by the end of the day, we could achieve something that comprises what has been discussed here so far. I have blabbered too much already, haven't I? Yeah.  Ok, on to the real thing. Before you begin the proceeding steps, it is a must (it really really is) to go grab your copy of Blender over at http://www.blender.org/download/get-blender/. The version I used for this tutorial is 2.49a (which should be the latest one being offered at Blender.org [as of this writing]). Scene Setup With every historical and memorable piece, it is a vital part of your 3d journey to setup something on your scene.  I couldn't imagine how a 3D artist could pass on a work with a blank animated scene, hyper minimal I might say. To start off, fire up Blender or your favorite 3D App for that matter and get your scene ready with your models, objects, subjects, or whatever you call them, just get them there inside your scene so we could have something to look at for now, won't we? On the image below (finally, a graphic one!), you could see a sample scene I've setup and a quick render of the said scene. The first image shows my scene with the model, two spheres, a plane, a lamp, and a camera. The second image shows the rendered version.   You'll notice that the image looks dull and lifeless, that is because it lacks the proper visual elements necessary for creating a convincing scene.  The current setup is all set to default, with the objects having no material data but just the premade ones set by Blender and the light’s settings set as they were by default. Shading and Materials To address some issues, we need to identify first what needs to be corrected.  The first thing we might want to do is to add some initial materials to the objects we have just so we could clearly distinguish their roles in the scene and to add some life to the somewhat dry set we have here.  To do so, select one object at a time and add a material. Let’s first select the main character of the scene (or any subject you wish for that matter) by clicking RMB (Right Mouse Button) on the character object, then under the Buttons Window, select Shading (F5), then click the Material Buttons tab, and click on “Add New” to add a new material to our object. Adding a New Material After doing so, more options will show up and this is where the real fun begins. The only thing we’ll be doing for now is to add some basic color and shading to our objects just so we could deviate from the standard gray default.  You’ll notice on the image below that I’ve edited quite a few options.  That’s what we only want for now, let’s leave the other settings as they are and we’ll get back to it as soon as we need to. 
Character Initial Material Settings   Big Sphere Initial Material Settings Small sphere Initial Material Settings   Ground Initial Material Settings If we do a test render now, here’s how it will look like: Render With Colors Still not so convincing, but somehow we managed to add a level of variety to our scene as compared to the initial render we’ve made.  Looking at the latest render we did, you’ll notice that the character with the two spheres still seem to be floating in space, creating no interaction whatsoever with the ground plane below it. Another thing would be the lack of diffuse color on some parts of the objects, thus creating a pitch black color which, as in this case, doesn’t seem to look good at all since we’re trying to achieve a well-lit, natural environment as much as possible. A quick and easy solution to this issue would be to enable Ambient Occlusion under the World Settings tab. This will tell Blender to create a fake global illumination effect as though you have added a bunch of lights to create the occlusion.  This would be a case similar to adding a dome of spot lights, with each light having a low energy level, creating a subtle AO effect. But for the purposes of this article, we’d be settling for Ambient Occlusion since it is faster to setup and eliminates the additional need for further tweaking. We access the AO (Ambient Occlusion) menu via the World Buttons tab under Shading (F5) menu then clicking the Amb Occ subtab.  Activate Ambient Occlusion then click on Use Falloff and edit the default strength of 1.00 to 0.70, doing so will create further diffusion on darker areas that have been hidden from the occlusion process.  Next would be to click Pixel Cache, I don’t know much technically what this does but what I know from experience is this speeds up the occlusion calculation.   Ambient Occlusion Settings Below, you’ll see the effects of AO as applied to the scene.  Notice the subtle diffusion of color and shadows and the interaction of the objects and the plane ground through the occlusion process.  So far we’ve only used a single lamp as fill light, but later on we’ll be adding further light sources to create a better effect. Render with Ambient Occlusion Whew, we’ve been doing something lately, haven’t we? So far what we did was to create a scene and a render image that will give us a better view of what it’s going to look like.  Next stop, we’ll be creating a base light setup to further create shadows and better looking diffusion. Soon we go!
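For readers who prefer to script these steps, here is a rough Python sketch of the same ideas. It targets the Python API of later Blender 2.7x releases rather than the 2.49 build used in this article (whose Python API is entirely different), and the property names are written from memory, so treat them as assumptions to verify in Blender's console rather than a definitive recipe.

import bpy

def make_simple_material(name, rgb):
    # A plain Blender Internal material with a flat diffuse color.
    mat = bpy.data.materials.new(name)
    mat.diffuse_color = rgb            # (r, g, b) floats in 2.7x
    mat.specular_intensity = 0.2       # tame the default highlight
    return mat

# Hypothetical stand-ins for the colors chosen in the screenshots above.
character_mat = make_simple_material("character", (0.8, 0.4, 0.25))
ground_mat = make_simple_material("ground", (0.55, 0.52, 0.48))

obj = bpy.context.object               # whatever is currently selected
if obj is not None and obj.type == 'MESH':
    obj.data.materials.append(character_mat)

# Ambient Occlusion with falloff, mirroring the World settings above.
light_settings = bpy.context.scene.world.light_settings
light_settings.use_ambient_occlusion = True
light_settings.use_falloff = True      # property names from memory; verify in the console
light_settings.falloff_strength = 0.7  # the 1.00 -> 0.70 tweak described above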

Unity Game Development: Welcome to the 3D world

Packt
27 Oct 2009
21 min read
Getting to grips with 3D Let's take a look at the crucial elements of 3D worlds, and how Unity lets you develop games in the third dimension. Coordinates If you have worked with any 3D artworking application before, you'll likely be familiar with the concept of the Z-axis. The Z-axis, in addition to the existing X for horizontal and Y for vertical, represents depth. In 3D applications, you'll see information on objects laid out in X, Y, Z format—this is known as the Cartesian coordinate method. Dimensions, rotational values, and positions in the 3D world can all be described in this way. In any documentation of 3D, you'll see such information written with parenthesis, shown as follows: (10, 15, 10) This is mostly for neatness, and also due to the fact that in programming, these values must be written in this way. Regardless of their presentation, you can assume that any sets of three values separated by commas will be in X, Y, Z order. Local space versus World space A crucial concept to begin looking at is the difference between Local space and World space. In any 3D package, the world you will work in is technically infinite, and it can be difficult to keep track of the location of objects within it. In every 3D world, there is a point of origin, often referred to as zero, as it is represented by the position (0,0,0). All world positions of objects in 3D are relative to world zero. However, to make things simpler, we also use Local space (also known as Object space) to define object positions in relation to one another. Local space assumes that every object has its own zero point, which is the point from which its axis handles emerge. This is usually the center of the object, and by creating relationships between objects, we can compare their positions in relation to one another. Such relationships, known as parent-child relationships, mean that we can calculate distances from other objects using Local space, with the parent object's position becoming the new zero point for any of its child objects. Vectors You'll also see 3D vectors described in Cartesian coordinates. Like their 2D counterparts, 3D vectors are simply lines drawn in the 3D world that have a direction and a length. Vectors can be moved in world space, but remain unchanged themselves. Vectors are useful in a game engine context, as they allow us to calculate distances, relative angles between objects, and the direction of objects. Cameras Cameras are essential in the 3D world, as they act as the viewport for the screen. Having a pyramid-shaped field of vision, cameras can be placed at any point in the world, animated, or attached to characters or objects as part of a game scenario. With adjustable Field of Vision (FOV), 3D cameras are your viewport on the 3D world. In game engines, you'll notice that effects such as lighting, motion blurs, and other effects are applied to the camera to help with game simulation of a person's eye view of the world—you can even add a few cinematic effects that the human eye will never experience, such as lens flares when looking at the sun! Most modern 3D games utilize multiple cameras to show parts of the game world that the character camera is not currently looking at—like a 'cutaway' in cinematic terms. Unity does this with ease by allowing many cameras in a single scene, which can be scripted to act as the main camera at any point during runtime. 
Multiple cameras can also be used in a game to control the rendering of particular 2D and 3D elements separately as part of the optimization process. For example, objects may be grouped in layers, and cameras may be assigned to render objects in particular layers. This gives us more control over individual renders of certain elements in the game. Polygons, edges, vertices, and meshes In constructing 3D shapes, all objects are ultimately made up of interconnected 2D shapes known as polygons. On importing models from a modelling application, Unity converts all polygons to polygon triangles. Polygon triangles (also referred to as faces) are in turn made up of three connected edges. The locations at which these vertices meet are known as points or vertices. By knowing these locations, game engines are able to make calculations regarding the points of impact, known as collisions, when using complex collision detection with Mesh Colliders, such as in shooting games to detect the exact location at which a bullet has hit another object. By combining many linked polygons, 3D modelling applications allow us to build complex shapes, known as meshes. In addition to building 3D shapes, the data stored in meshes can have many other uses. For example, it can be used as surface navigational data by making objects in a game, by following the vertices. In game projects, it is crucial for the developer to understand the importance of polygon count. The polygon count is the total number of polygons, often in reference to a model, but also in reference to an entire game level. The higher the number of polygons, the more work your computer must do to render the objects onscreen. This is why, in the past decade or so, we've seen an increase in the level of detail from early 3D games to those of today—simply compare the visual detail in a game, such as Id's Quake (1996) with the details seen in a game, such as Epic's Gears Of War (2006). As a result of faster technology, game developers are now able to model 3D characters and worlds for games that contain a much higher polygon count and this trend will inevitably continue. Materials, textures, and shaders Materials are a common concept to all 3D applications, as they provide the means to set the visual appearance of a 3D model. From basic colors to reflective image-based surfaces, materials handle everything. Starting with a simple color and the option of using one or more images—known as textures—in a single material, the material works with the shader, which is a script in charge of the style of rendering. For example, in a reflective shader, the material will render reflections of surrounding objects, but maintain its color or the look of the image applied as its texture. In Unity, the use of materials is easy. Any materials created in your 3D modelling package will be imported and recreated automatically by the engine and created as assets to use later. You can also create your own materials from scratch, assigning images as texture files, and selecting a shader from a large library that comes built-in. You may also write your own shader scripts, or implement those written by members of the Unity community, giving you more freedom for expansion beyond the included set. Crucially, when creating textures for a game in a graphics package such as Photoshop, you must be aware of the resolution. Game textures are expected to be square, and sized to a power of 2. 
This means that numbers should run as follows: 128 x 128 256 x 256 512 x 512 1024 x 1024 Creating textures of these sizes will mean that they can be tiled successfully by the game engine. You should also be aware that the larger the texture file you use, the more processing power you'll be demanding from the player's computer. Therefore, always remember to try resizing your graphics to the smallest power of 2 dimensions possible, without sacrificing too much in the way of quality. Rigid Body physics For developers working with game engines, physics engines provide an accompanying way of simulating real-world responses for objects in games. In Unity, the game engine uses Nvidia's PhysX engine, a popular and highly accurate commercial physics engine. In game engines, there is no assumption that an object should be affected by physics—firstly because it requires a lot of processing power, and secondly because it simply doesn't make sense. For example, in a 3D driving game, it makes sense for the cars to be under the influence of the physics engine, but not the track or surrounding objects, such as trees, walls, and so on—they simply don't need to be. For this reason, when making games, a Rigid Body component is given to any object you want under the control of the physics engine. Physics engines for games use the Rigid Body dynamics system of creating realistic motion. This simply means that instead of objects being static in the 3D world, they can have the following properties: Mass Gravity Velocity Friction As the power of hardware and software increases, rigid body physics is becoming more widely applied in games, as it offers the potential for more varied and realistic simulation. Collision detection While more crucial in game engines than in 3D animation, collision detection is the way we analyze our 3D world for inter-object collisions. By giving an object a Collider component, we are effectively placing an invisible net around it. This net mimics its shape and is in charge of reporting any collisions with other colliders, making the game engine respond accordingly. For example, in a ten-pin bowling game, a simple spherical collider will surround the ball, while the pins themselves will have either a simple capsule collider, or for a more realistic collision, employ a Mesh collider. On impact, the colliders of any affected objects will report to the physics engine, which will dictate their reaction, based on the direction of impact, speed, and other factors. In this example, employing a mesh collider to fit exactly to the shape of the pin model would be more accurate but is more expensive in processing terms. This simply means that it demands more processing power from the computer, the cost of which is reflected in slower performance—hence the term expensive. Essential Unity concepts Unity makes the game production process simple by giving you a set of logical steps to build any conceivable game scenario. Renowned for being non-game-type specific, Unity offers you a blank canvas and a set of consistent procedures to let your imagination be the limit of your creativity. By establishing its use of the Game Object (GO) concept, you are able to break down parts of your game into easily manageable objects, which are made of many individual Component parts. By making individual objects within the game and introducing functionality to them with each component you add, you are able to infinitely expand your game in a logical progressive manner. 
Component parts in turn have variables—essentially settings to control them with. By adjusting these variables, you'll have complete control over the effect that Component has on your object. Let's take a look at a simple example. The Unity way If I wished to have a bouncing ball as part of a game, then I'd begin with a sphere. This can quickly be created from the Unity menus, and will give you a new Game Object with a sphere mesh (a net of a 3D shape), and a Renderer component to make it visible. Having created this, I can then add a Rigid body. A Rigidbody (Unity refers to most two-word phrases as a single word term) is a component which tells Unity to apply its physics engine to an object. With this comes mass, gravity, and the ability to apply forces to the object, either when the player commands it or simply when it collides with another object. Our sphere will now fall to the ground when the game runs, but how do we make it bounce? This is simple! The collider component has a variable called Physic Material—this is a setting for the Rigidbody, defining how it will react to other objects' surfaces. Here we can select Bouncy, an available preset, and voila! Our bouncing ball is complete, in only a few clicks. This streamlined approach for the most basic of tasks, such as the previous example, seems pedestrian at first. However, you'll soon find that by applying this approach to more complex tasks, they become very simple to achieve. Here is an overview of those key Unity concepts plus a few more. Assets These are the building blocks of all Unity projects. From graphics in the form of image files, through 3D models and sound files, Unity refers to the files you'll use to create your game as assets. This is why in any Unity project folder all files used are stored in a child folder named Assets. Scenes In Unity, you should think of scenes as individual levels, or areas of game content (such as menus). By constructing your game with many scenes, you'll be able to distribute loading times and test different parts of your game individually. Game Objects When an asset is used in a game scene, it becomes a new Game Object—referred to in Unity terms—especially in scripting—using the contracted term "GameObject". All GameObjects contain at least one component to begin with, that is, the Transform component. Transform simply tells the Unity engine the position, rotation, and scale of an object—all described in X, Y, Z coordinate (or in the case of scale, dimensional) order. In turn, the component can then be addressed in scripting in order to set an object's position, rotation, or scale. From this initial component, you will build upon game objects with further components adding required functionality to build every part of any game scenario you can imagine. Components Components come in various forms. They can be for creating behavior, defining appearance, and influencing other aspects of an object's function in the game. By 'attaching' components to an object, you can immediately apply new parts of the game engine to your object. Common components of game production come built-in with Unity, such as the Rigidbody component mentioned earlier, down to simpler elements such as lights, cameras, particle emitters, and more. To build further interactive elements of the game, you'll write scripts, which are treated as components in Unity. Scripts While being considered by Unity to be Components, scripts are an essential part of game production, and deserve a mention as a key concept. 
You can write scripts in JavaScript, but you should be aware that Unity offers you the opportunity to write in C# and Boo (a derivative of the Python language) also. I've chosen to demonstrate Unity with JavaScript, as it is a functional programming language, with a simple to follow syntax that some of you may already have encountered in other endeavors such as Adobe Flash development in ActionScript or in using JavaScript itself for web development. Unity does not require you to learn how the coding of its own engine works or how to modify it, but you will be utilizing scripting in almost every game scenario you develop. The beauty of using Unity scripting is that any script you write for your game will be straightforward enough after a few examples, as Unity has its own built-in Behavior class—a set of scripting instructions for you to call upon. For many new developers, getting to grips with scripting can be a daunting prospect, and one that threatens to put off new Unity users who are simply accustomed to design only. I will introduce scripting one step at a time, with a mind to showing you not only the importance, but also the power of effective scripting for your Unity games. To write scripts, you'll use Unity's standalone script editor. On Mac, this is an application called Unitron, and on PC, Uniscite. These separate applications can be found in the Unity application folder on your PC or Mac and will be launched any time you edit a new script or an existing one. Amending and saving scripts in the script editor will immediately update the script in Unity. You may also designate your own script editor in the Unity preferences if you wish. Prefabs Unity's development approach hinges around the GameObject concept, but it also has a clever way to store objects as assets to be reused in different parts of your game, and then 'spawned' or 'cloned' at any time. By creating complex objects with various components and settings, you'll be effectively building a template for something you may want to spawn multiple instances of, with each instance then being individually modifiable. Consider a crate as an example—you may have given the object in the game a mass, and written scripted behaviors for its destruction; chances are you'll want to use this object more than once in a game, and perhaps even in games other than the one it was designed for. Prefabs allow you to store the object, complete with components and current configuration. Comparable to the MovieClip concept in Adobe Flash, think of prefabs simply as empty containers that you can fill with objects to form a data template you'll likely recycle. The interface The Unity interface, like many other working environments, has a customizable layout. Consisting of several dockable spaces, you can pick which parts of the interface appear where. Let's take a look at a typical Unity layout: As the previous image demonstrates (PC version shown), there are five different elements you'll be dealing with: Scene [1]—where the game is constructed Hierarchy [2]—a list of GameObjects in the scene Inspector [3]—settings for currently selected asset/object Game [4]—the preview window, active only in play mode Project [5]—a list of your project's assets, acts as a library The Scene window and Hierarchy The Scene window is where you will build the entirety of your game project in Unity. This window offers a perspective (full 3D) view, which is switchable to orthographic (top down, side on, and front on) views. 
This acts as a fully rendered 'Editor' view of the game world you build. Dragging an asset to this window will make it an active game object. The Scene view is tied to the Hierarchy, which lists all active objects in the currently open scene in ascending alphabetical order. The Scene window is also accompanied by four useful control buttons, as shown in the previous image. Accessible from the keyboard using keys Q, W, E, and R, these keys perform the following operations: The Hand tool [Q]: This tools allows navigation of the Scene window. By itself, it allows you to drag around in the Scene window to pan your view. Holding down Alt with this tool selected will allow you to rotate your view, and holding the Command key (Apple) or Ctrl key (PC) will allow you to zoom. Holding the Shift key down also will speed up both of these functions. The Translate tool [W]: This is your active selection tool. As you can completely interact with the Scene window, selecting objects either in the Hierarchy or Scene means you'll be able to drag the object's axis handles in order to reposition them. The Rotate tool [E]: This works in the same way as Translate, using visual 'handles' to allow you to rotate your object around each axis. The Scale tool [R]: Again, this tool works as the Translate and Rotate tools do. It adjusts the size or scale of an object using visual handles. Having selected objects in either the Scene or Hierarchy, they immediately get selected in both. Selection of objects in this way will also show the properties of the object in the Inspector. Given that you may not be able to see an object you've selected in the Hierarchy in the Scene window, Unity also provides the use of the F key, to focus your Scene view on that object. Simply select an object from the Hierarchy, hover your mouse cursor over the Scene window, and press F. The Inspector Think of the Inspector as your personal toolkit to adjust every element of any game object or asset in your project. Much like the Property Inspector concept utilized by Adobe in Flash and Dreamweaver, this is a context-sensitive window. All this means is that whatever you select, the Inspector will change to show its relevant properties—it is sensitive to the context in which you are working. The Inspector will show every component part of anything you select, and allow you to adjust the variables of these components, using simple form elements such as text input boxes, slider scales, buttons, and drop-down menus. Many of these variables are tied into Unity's drag-and-drop system, which means that rather than selecting from a drop-down menu, if it is more convenient, you can drag-and-drop to choose settings. This window is not only for inspecting objects. It will also change to show the various options for your project when choosing them from the Edit menu, as it acts as an ideal space to show you preferences—changing back to showing component properties as soon as you reselect an object or asset. In this screenshot, the Inspector is showing properties for a target object in the game. The object itself features two components—Transform and Animation. The Inspector will allow you to make changes to settings in either of them. Also notice that to temporarily disable any component at any time—which will become very useful for testing and experimentation—you can simply deselect the box to the left of the component's name. 
Likewise, if you wish to switch off an entire object at a time, then you may deselect the box next to its name at the top of the Inspector window. The Project window The Project window is a direct view of the Assets folder of your project. Every Unity project is made up of a parent folder, containing three subfolders—Assets, Library, and while the Unity Editor is running, a Temp folder. Placing assets into the Assets folder means you'll immediately be able to see them in the Project window, and they'll also be automatically imported into your Unity project. Likewise, changing any asset located in the Assets folder, and resaving it from a third-party application, such as Photoshop, will cause Unity to reimport the asset, reflecting your changes immediately in your project and any active scenes that use that particular asset. It is important to remember that you should only alter asset locations and names using the Project window—using Finder (Mac) or Windows Explorer (PC) to do so may break connections in your Unity project. Therefore, to relocate or rename objects in your Assets folder, use Unity's Project window instead. The Project window is accompanied by a Create button. This allows the creation of any assets that can be made within Unity, for example, scripts, prefabs, and materials. The Game window The Game window is invoked by pressing the Play button and acts as a realistic test of your game. It also has settings for screen ratio, which will come in handy when testing how much of the player's view will be restricted in certain ratios, such as 4:3 (as opposed to wide) screen resolutions. Having pressed Play, it is crucial that you bear in mind the following advice: In play mode, the adjustments you make to any parts of your game scene are merely temporary—it is meant as a testing mode only, and when you press Play again to stop the game, all changes made during play mode will be undone. This can often trip up new users, so don't forget about it! The Game window can also be set to Maximize when you invoke play mode, giving you a better view of the game at nearly fullscreen—the window expands to fill the interface. It is worth noting that you can expand any part of the interface in this way, simply by hovering over the part you wish to expand and pressing the Space bar. Summary Here we have looked at the key concepts, you'll need to understand for developing games with Unity. Due to space constraints, I cannot cover everything in depth, as 3D development is a vast area of study. With this in mind, I strongly recommend you to continue to read more on the topics discussed in this article, in order to supplement your study of 3D development. Each individual piece of software you encounter will have its own dedicated tutorials and resources dedicated to learning it. If you wish to learn 3D artwork to complement your work in Unity, I recommend that you familiarize yourself with your chosen package, after researching the list of tools that work with the Unity pipeline and choosing which one suits you best.
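Before moving on, here is a small Python/NumPy illustration of the Local space versus World space idea from earlier in this article. The numbers are arbitrary; the point is that the parent's transform acts as the child's new origin, which is exactly what Unity's parent-child relationships give you for free.

import numpy as np

def yaw_matrix(degrees):
    # Rotation about the vertical (Y) axis, the usual "turn left/right".
    a = np.radians(degrees)
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

# Parent object: position and rotation in World space.
parent_position = np.array([10.0, 0.0, 5.0])
parent_rotation = yaw_matrix(90.0)

# Child object: position expressed in the parent's Local space.
child_local = np.array([0.0, 1.0, 2.0])

# World position = parent position + (parent rotation applied to the local offset).
child_world = parent_position + parent_rotation @ child_local
print(child_world)              # roughly [12, 1, 5] for a 90-degree yaw

# A vector between two objects has a direction and a length, independent of origin.
offset = child_world - parent_position
print(np.linalg.norm(offset))   # distance from parent to child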

Creating Convincing Images with Blender Internal Renderer: Part 2

Packt
20 Oct 2009
9 min read
Textures

In your journey as a 3D artist, you might have encountered several (if not all) astounding works of art. On close inspection, you'll notice that we rarely see them without textures. That is because textures are one of the most important aspects of 3D, even though this doesn't apply to everything. Adding textures to your characters, props, environment, and so on will add more to the aesthetic quality of your image than you might believe. There are a number of ways to add texture to your objects in 3D, such as UV mapping techniques, projections, and 2D painting. Which you choose depends entirely on what kind of render you are trying to achieve. For the sake of this article, we'll try to achieve some nice-looking textures without having to worry about the complex tasks involved, and to do that we'll be using the ever famous and useful procedural textures to create a seamless, continuous-looking texture mapped over the surface of our models. More information about Procedural Textures can be found at http://www.blender.org/development/release-logs/blender-233/procedural-textures/.

Now let's add some textures, shall we? Let's select the character model in our scene, then go to the Texture tab on the rightmost part of the Material Buttons window and click Add New to add a new texture.

Adding a New Texture

After adding a new texture, additional windows appear, allowing us to further modify how the newly added texture will affect our material. Name this first texture "bump"; the mapping options can be seen below.

Bump Texture Mapping Settings

Bump Texture Settings

Add another texture below the "bump" texture and call it "stain". The settings can be seen below.

Stain Texture Mapping Settings

Stain Texture Settings

We could have added more overlaying textures, but this will do for now, just so we can see how the textures have affected our material so far. Rendering now leads us to the image below.

Dirtier And Better :)

This might be a good time to change our framing and staging so we can look at the scene from a better perspective. After changing the camera angle, increasing the ground plane's scale, and making some adjustments to the spheres, I achieved something like this:

New Camera Angle

For an even better interaction within the scene, we will adjust some material settings to simulate hard and reflective surfaces. It's a little unfair to give our main character some good materials while neglecting the other objects we have. So let's go on and add some decent materials as replacements for the initial materials that both spheres had before. Select the larger sphere and edit its current material so that it matches the settings seen in the image below. You'll notice I added a Color Ramp to each of the materials; this gives the color a slight transition, as would be seen in the natural world, in addition to the diffuse color it already has. The vital part of the shading process for the spheres is the reflectivity and mirror options, as you can see in the following table:

              Ray Mirror   Fresnel
Green Sphere  0.12         0.76
Blue Sphere   0.21         0.99

Green Sphere Material Settings

Blue Sphere Material Settings

Our render would now look like this:

Reflections to Simulate Mirror Effect and Smoothness

To nearly finalize this part, we now deal with adding a texture to the world and varying the colors that affect the Occlusion effect.
To do so, let’s first change the Horizon and Zenith color of our World and change the Ambient Occlusion diffuse energy to the color we’ve just set by changing from “Plain” to “Sky Color”, as seen below. World Settings Rendering now will lead us to: New World Settings Render Notice the subtle difference between the previous render and the latest one where the slight bluish hue is more distinguishable. And then lastly, since we've already added some decent reflective material over to our spheres, it would be best if we can also see some environment being reflected over, to add to the already existent character as one of the objects being reflected. To do this, we're going to add a texture to our World.  This is one nifty tool in simulating an environment since we don't have to do the hard work in manually creating the objects that are going to be reflected.  Not only does it save us a lot of time but also the ease by which we can alter these environment is already a big advantage that we have at our hands. So to do this, let's go ahead and go to our Shading (F5) and select World Buttons.  Scroll to the far left side and you'll see tabs labeled “Texture and Input” and “Map To”, both of these tabs are essential in setting up our World texture so pay close attention to them. Below is an image that further shows you what we need to set up (sorry for the sudden theme change). World Texture You might have already guessed what we should do next, if not, I'll continue on.  After heading over to the “Texture and Input” and “Map To” tabs, let's first focus on what's active by default, that is, “Texture and Input”.  In this part, we'll only need a few things to get started.  First is to click “Add New” to add a new texture datablock to our blender scene, after which, let's edit the name of our texture and name it “environment”, then change the coordinates from “View” to “AngMap” to use a 360 degree angular coordinate, you'll see why in awhile. Adding a World Texture After applying these initial settings, we'll go ahead and proceed to the actual texturing process, which, as far as the World is concerned is just a very quick process.  I suppose you're still on the same Buttons window that we're on last time.  Click on the Texture button or press F6 on your function keys. Bam! Another set of Windows.  You'll see here that the texture we named “environment” awhile back is now reflected over to one of the texture slots, just like what we previously did with texturing the character we have.  But this time, instead of choosing procedural textures like Clouds, Voronoi, Noise, etc., we'll now be dealing with an image texture, as in our case, an HDRi (High Dynamic Range Image).  Our purpose in using an HDR image is to simulate the wide range of intensity levels (brightness and darkness) that is seen in reality and apply these settings over to our world, thus reflected upon by our objects.  As in our case, we'll be using high dynamic range images as light probes which are oriented 360 degrees and that's the very reason why we chose “AngMap” as our World texture coordinate. More info about HDRi can be seen at http://en.wikipedia.org/wiki/High_dynamic_range_imaging and you can download Light Probe Images over at Paul Debevec's collection at http://www.debevec.org/Probes.  Save your downloaded light probe images somewhere you can easily identify them with.  I couldn't stress enough how file organization can greatly help you in your career.  
You could just imagine how frustrating it is to find assets among a thousand you already have, without properly placing them in their right places, this counts for every project you have as well . So to open up our Light Probe Image as texture to our World, click the drop down menu and choose “Image” as your texture type.  This tells Blender to use an image instead of the default procedural textures.  Then head to the far right side to locate the Image tab with a Load button on it.  Let's skip the Map Image tab for now. Image as Texture Type Loading an Image Texture Browse over at your downloaded HDR image (which should have an extension of .hdr) and confirm.  Now that the image is loaded, let's leave the default settings as they are since we wouldn't be using them that much.  You'll see on the far left Preview just how wonderful looking our image is.  But rendering your scene right now would yield to nothing but the same previous render we've had.  So if you're itching to get this image right at our scene (which I am too), go back to your World Settings and head over to the “Map To” tab just beside “Texture and Input” then deselect “Blend” and select “Hori” instead.  Kabam! Now we're all set! World Texture Mapping options And now, the moment we've all been eagerly waiting for, the Render! Yup, go ahead and render and it would (luckily) look like the image below. Render with HDRi Environment Then finally, on the next and last part of this article, we'll look on how we can even further add realism to our scene by simulating camera lenses and further enhancing the tone of the image with Composite Nodes.  
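As a closing aside, if you ever need to rebuild this texture setup by script, the sketch below outlines the same two ideas: a procedural texture mapped to Nor, and an angular-mapped HDR probe mapped to the world's horizon. It uses the Python API of later Blender 2.7x releases rather than 2.49, the file path is a hypothetical placeholder, and the slot property names are written from memory, so verify them in Blender's console before relying on them.

import bpy

# A procedural clouds texture added to a material and mapped to Nor (bump).
mat = bpy.data.materials.get("character") or bpy.data.materials.new("character")
bump_tex = bpy.data.textures.new("bump", type='CLOUDS')
slot = mat.texture_slots.add()
slot.texture = bump_tex
slot.use_map_color_diffuse = False
slot.use_map_normal = True             # the "Nor" toggle in the Map To panel
slot.normal_factor = 0.5

# An angular-mapped light probe driving the world's horizon color.
world = bpy.context.scene.world
env_tex = bpy.data.textures.new("environment", type='IMAGE')
env_tex.image = bpy.data.images.load("//probes/my_probe.hdr")  # hypothetical path
wslot = world.texture_slots.add()
wslot.texture = env_tex
wslot.texture_coords = 'ANGMAP'        # the AngMap coordinate choice described above
wslot.use_map_blend = False            # deselect "Blend"
wslot.use_map_horizon = True           # select "Hori" instead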

Creating Pseudo-3D Imagery with GIMP: Part 2

Packt
21 Oct 2009
6 min read
The next step would be to play around with Layer Modes which, I believe, is one of the most exciting aspects of graphic design. Let's leave layer “lower shine” for awhile and let's get back to layer “upper shine”, and change its layer mode to make it look more appealing and following a scheme in accordance to the color of the sphere. To do this, let's select “upper shine” layer on the Layers Window and right above the Opacity Slider is a dropdown menu containing lots of interesting layer modes, each having its own distinct advantages. You can play around and choose whichever suits your vision the best. I chose Overlay for that matter. Ever since I've started GIMPing, this layer mode has been my best friend for a couple of years already, it works like a charm most of the time.  I wonder though why on some applications, applying the overlay layer mode does different results.  As in the case of Photoshop, the closest I could get with GIMP's overlay is the Screen mode.  You've got to play around a bit and see which works best for you. Do the same thing for the “lower shine” layer, choosing Overlay as the layer mode.  Then, whenever you see fit, you can duplicate the layers to achieve a multiplied effect of the mode. I did that because it felt that something was still missing in the luminous aspect of the shine. So I selected each layer, duplicated them both and voila. To duplicate a layer, you can either right click on the layer name and choose Duplicate Layer from the choices or just press the Duplicate Button on the bottom part of the Layers Window. Duplicated Layers Next, we'll add additional highlights to better emulate specular reflections. And again, we're exploiting the Ellipse Select Tool and another new technique called Feathering. I don't know exactly the definition of feathering in CG, but as far as my experience goes, feathering is a technique from sets of tools where you soften the edges of a selection creating a subtle transition and blurred edges. Create a new layer at the top of the layer stack and call it “blurred shine”, then give it a Layer Fill Type of Transparency, just like what we did with the previous layers.  And with layer 'blurred shine” active, let's create an elliptical selection on the upper left hand part of the sphere, just where the sharp shine has been cast. Creating the Specular Selection With the selection active, right click on the Image Window and choose Select > Feather, then input a value for the feather and the unit to be used. I used 50 pixels. You might have noticed now that the selection seemed to have become smaller, and that's alright, that means you've done it right.  And with the marching ants still active, grab the Bucket Fill Tool over at the Toolbox Window or press SHIFT + B to activate it. Change your foreground color to something close to white or simply pure white, then with the Bucket Fill Tool active, click on the active selection. Tadaaaa! You just created a replica of a specular highlight, though not so close enough. What's great about feathering selections as opposed to applying a blur filter is that you only blur the selection border and not the entire selection. So, say, you have a picture of yourself and you wanted your face fade out smoothly on a vast landscape that you have photographed. 
Simply create a selection around your face, then apply a Feather to that selection, invert the selection and delete the outer parts, thus leaving only your face and the landscape behind (supposing you have your picture on a separate layer above the landscape layer.) Feathering the Selection Applying the Color with the Bucket Fill Whew, that was pretty quick, isn't it? I hope you agree with me on that.  If so, let's create another one, though smaller and placed just on the left of the blurred shine.  Create a new layer for this new blurred shine and name it “small blurred shine”.  Follow the same procedure for the feathering and color-filling. I used the same feather value for the smaller selection (even though it obviously is smaller), just so it almost affects the center of the selection, blurring the whole selection already, which is what I like for this part. And then, just like what we did with the upper and lower shine respectively, we'll change the Layer Modes to Overlay and duplicate the layers as we see fit.  Doing so results in this image: Blurred Shines Overlay Our sphere now looks a lot better than it had been when we first added its color. However, the shading still looks a bit flat and volumeless. To deal with that, we'll simulate the strength with which the light diffused our sphere object, creating deeper shadows on the opposite side of the light source. Duplication of layers is not only a matter of multiplying the effects of layer effects or such, but it can also be a good way to trace your changes, or better yet, as safe backups where working on the duplicate doesn't affect the original one and you can go back each time to the untouched layer anytime you want to see the differences that have been made.  But be careful though, the more layers and contents of each layer you have, the more computing memory will be consumed and will eventually cause a system slow down. Let's select the “sphere” layer and duplicate in once.  Automatically, the duplicate layer which is now named “sphere copy” becomes the active layer.  Right Click on “sphere copy” layer and choose Alpha to Selection to create a selection out of the fully opaque sphere. Next step is to shrink the selection such that we create a smaller elliptical selection inside the sphere.  To do this, right click on the Image Window and choose Select > Shrink.  Then on the pop up window that appears, type in an appropriate value for the shrinking. I chose 50 pixels. Shrinking the Selection Shrinked Selection Remember how we moved the selection last time? I believe you do. To translate/move our selection, grab the Ellipse Select Tool and activate the selection by clicking on it (clicking the middle portion of the selection makes this easier) until you see your cursor change into crossed arrows, this means you have just activated the move tool for the selection. And since the light is coming from the upper left direction, we would want to move the selection over to the location where the specular reflections are and where the lightest shading is.  Thats because later on, we'll be using this same selection to create shadows on the opposite side of the shade.  Now go ahead and drag the selection over to the upper left portion of the sphere.
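For anyone who would rather script this shine-building routine than repeat it by hand, here is a rough GIMP Python-Fu sketch of the same steps: a new layer set to Overlay, an elliptical selection, a feather, and a white fill. It assumes the GIMP 2.10 Python-Fu constant names (older 2.8 builds use different ones, such as OVERLAY_MODE and FOREGROUND_FILL), and the coordinates in the usage comment are made-up placeholders.

# Run from Filters > Python-Fu > Console in GIMP 2.10; names from the 2.10 PDB.
from gimpfu import *

def add_blurred_shine(image, x, y, w, h, feather_px=50):
    # New transparent layer at the top of the stack, blended with Overlay.
    layer = gimp.Layer(image, "blurred shine", image.width, image.height,
                       RGBA_IMAGE, 100, LAYER_MODE_OVERLAY)
    pdb.gimp_image_insert_layer(image, layer, None, -1)

    # Elliptical selection, softened with a feather before filling.
    pdb.gimp_image_select_ellipse(image, CHANNEL_OP_REPLACE, x, y, w, h)
    pdb.gimp_selection_feather(image, feather_px)

    # Fill the feathered selection with white, then clean up.
    pdb.gimp_context_set_foreground((255, 255, 255))
    pdb.gimp_edit_fill(layer, FILL_FOREGROUND)
    pdb.gimp_selection_none(image)
    pdb.gimp_displays_flush()

# Example usage (placeholder coordinates):
#   image = gimp.image_list()[0]
#   add_blurred_shine(image, 120, 90, 180, 140)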