
How-To Tutorials - Game Development

370 Articles

Mobile Game Design

Packt
15 Nov 2013
12 min read
The basic game design process

The game design process shares many stages with any other type of software design: identify what you want the game to do, define how it does it, find someone to program it, then test and fix the hell out of it until it does what you expect it to do. Let's discuss these stages in a bit more detail.

- Find an idea: Unless you are one of the lucky few who start with an idea, sitting there staring at a blank piece of paper, trying to force an idea out of your blank slate of a brain, may feel like trying to give birth when you're not pregnant: lots of effort with no payoff. Getting the right idea can be the hardest part of the entire design process, and it usually takes several brainstorming sessions to arrive at a good gameplay idea. If you get stuck and feel like you're pondering too much, stop trying to be creative; go for a walk, watch a movie, read a book, or play a (gasp!) video game! Give the subconscious mind some space to percolate something cool up to the surface.
- Rough concept document: Once you have an idea for a game firmly embedded in your consciousness, it's time to write it down. This sounds simple, and at this stage it should be. Write down the highlights of your idea: what the fun parts are, how one wins, what gets in the way of winning, how the player overcomes the obstacles to winning, and who you imagine would like to play this game.
- Storyboarding: The best way to test an idea is, well, to test it! Use pen and paper to create storyboards of your game and try to play it out on paper. Storyboards can save a lot of (expensive) programming time by eliminating unsuccessful ideas early and by working through interface organization on the cheap. The goal of storyboarding is to get something on paper that at least somewhat resembles the game you imagine in your head, and it can range from very basic sketches, also called wireframes, to detailed schematics in Axure. Either way, try to capture as many elements in the sketch as possible. The following figure represents the sketch of the double-jump mechanic for a mobile platformer made by one of the authors.
- Prototype: Once you have concrete proof that your idea is good, invest some time and resources to create a playable demo that focuses on the action(s) the player will perform most during gameplay. It should have nothing extra, such as fancy graphics and sound effects. It should include any pertinent actions that the action in question relies on, and vice versa; for example, if a previous action contributes to the action being tested, include it in the prototype. The question the prototype should answer is: do I still like my initial idea? While prototyping, it is acceptable to use existing assets scavenged from the net, other projects, and so on. Just be aware of the subtle risk of the project becoming inadvertently associated with those assets, especially if they are high quality. For example, one of the authors was working on a simple (but clever!) real-time strategy game for the Game Boy Advance. It was decided to add a storyline to support the gameplay, which included a cast of characters. Instead of immediately creating original art for these characters, the team used the art from a defunct epic RPG project.
The problem was that the quality of this placeholder art was so high (done by a world-class fantasy/sci-fi artist) that when it was time to do the final art for the game, the art the in-house artist produced just wasn't up to the team's expectations. The project didn't have enough money in the budget to hire the world-renowned artist, so both the team and the client (Nintendo) felt that the art was second rate, even though it was appropriate for the game being made. The project was later cancelled, but not necessarily due to the art. The following screenshot shows an adventure title prototype made by one of the authors with GameMaker Studio, using assets taken from the Zelda saga.

- Test it: Once you have a working prototype, it is time to submit your idea to the public. Get a variety of people in to test your game like crazy. Include team members, former testers (if any), and fresh testers. Have people play often, get initial reactions as well as studied responses, and collect all the data you can. Fix the issues that emerge from those testing sessions and be ready to discard anything that doesn't really fit the gameplay experience you had in mind. This can be a tough decision, especially for an element that the designer or design team has grown attached to. A good rule of thumb: if an element is on its third attempt at being fixed, cut it if it still doesn't pass. By then it is taking up too much of the project's resources.
- Refine the design document: As implemented features pass the tests, and the test, fix, or discard cycle is repeated on all the main features of your game, take the changes that were implemented during prototyping and update the design document to reflect them. By the end of this process, you will have a design document that describes what you will build for your final product. You can read an interesting article on Gamasutra about the layout of one such document, intended for a mobile team of developers, at http://www.gamasutra.com/blogs/JasonBakker/20090604/84211/A_GDD_Template_for_the_Indie_Developer.php. Please note that this does not mean there won't be more changes! Hopefully it means there won't be any major changes, but be prepared for plenty of minor ones.
- End the preproduction: Once you have a clear idea of what your gameplay will be and a detailed document about what needs to be done, it is time to approach game production by creating the programming, graphics, audio, and interface of your game. As you work towards the realization of the final product, continue using the evaluation procedures implemented during the prototyping process. Continually ask "is this fun for my target audience?" and don't fall into the trap of "well, that's how I've always done it". Constantly question the design and/or its implementation. If it's fun, leave it alone. If not, change it, no matter how late it is in the development process. Remember, you only have one chance to make a good first impression.

When is the design really done?

By now you have reached the realization that a project is never complete; you're simply done with it. No doubt there are many things you'd like to change, remove, or add, but you've run out of time, money, or both. Make sure all those good ideas are recorded somewhere. It is a good idea to gather the team after release and, over snacks and refreshments, capture what the team members would change. This is good for team morale as well as being a good practice to follow.
Mobile design constraints

There are a few less obvious design considerations, based on the player's play behavior with a mobile device. Under what circumstances do players use mobile devices to play games? Usually they are waiting for something else to happen: waiting to board the bus, waiting to get off the bus, waiting in line, waiting in the waiting room, and so on. This affects several aspects of game design, as we will show in the following sections.

Play time

The most obvious design limitation is play time. The player should have a satisfying play experience in three minutes or less. A satisfying play experience usually means accomplishing a goal within the context of the game. A good point of reference is reaching a save point. If save points are placed about two and a half minutes of an average player's ability apart, the average player will never lose more than a couple of minutes of progress. For example, let's say an average player is waiting for a bus. She plays for three minutes and hits a save point. The bus comes one minute later, so the player stops playing and loses one minute of game progress (assuming there is no pause feature).

Game depth

Generally speaking, mobile games tend not to have much longevity when compared to titles such as Dragon Age or Fallout 3. There are several reasons for this, the most obvious one being the (usually) simple mechanics mobile games are built around. We don't mean that players cannot play Fruit Ninja or Angry Birds for a total of 60 hours or so, but it's not very likely that the average casual player will spend even 10 hours to unfold the story that may be told in a mobile game. At five hours of total gameplay, the player must in fact complete 120 two-and-a-half-minute save segments. At 50 hours of total gameplay, the player must complete 1,200 of them. Are you sure your gameplay is sustainable over 1,200 save points?

Mobile environment

Mobile games are frequently played outdoors, in crowded, noisy, and even "shifting" or "scuffling" environments. Such factors must be considered while designing a mobile game. Does direct sunlight prevent players from understanding what's happening on the screen? Does a barking dog prevent the players from hearing important game instructions? Does the gameplay require control finesse and pixel precision to perform actions? If the answer to any of these questions is yes, you should iterate a little more on your design, because these are all factors which could sink the success of your product.

Smartphones

Smartphones are still phones, after all. It is thus necessary that mobile games can handle unexpected events which may occur while playing on a phone: incoming calls and messages, automatic updates, and automatic power management utilities that activate alarms. You surely don't want your players to lose their progress due to an incoming call. Pause and auto-save features are thus mandatory design requirements of any successful mobile game.
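As a rough illustration of this last point (a sketch of ours, not from the original text), a Unity component can listen for the application being interrupted and write progress out immediately; the checkpoint field and the PlayerPrefs key below are placeholders standing in for your own save data:

using UnityEngine;

// Sketch only: save progress whenever the app is interrupted (incoming
// call, home button, screen lock). The checkpoint value and the
// PlayerPrefs key are placeholders for your own save data.
public class AutoSave : MonoBehaviour
{
    public int lastCheckpoint; // updated by your game at each save point

    void OnApplicationPause(bool paused)
    {
        if (paused)
            Save(); // the app is being backgrounded; write progress now
    }

    void OnApplicationQuit()
    {
        Save(); // some platforms quit without sending a pause event first
    }

    void Save()
    {
        PlayerPrefs.SetInt("lastCheckpoint", lastCheckpoint);
        PlayerPrefs.Save();
    }
}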
Single player versus multiplayer

Multiplayer is generally much more fun than single player, no question. But how can you set up a multiplayer game in a two-and-a-half-minute window? For popular multiplayer titles it is possible, thanks to a turn-based, asynchronous play model where one player submits a move in the two-and-a-half-minute window and the opponent later responds with a move of their own. Very popular titles like Ruzzle, Hero Academy, or Skulls of the Shogun do exactly that, but keep in mind that supporting asynchronous gameplay requires servers, which cost money, and complex networking routines, which must be programmed. Are these extra difficulties worth their costs?

The mobile market

The success of any commercial project cannot be achieved in disregard of its reference market, and mobile games are no exception. We, the authors, believe that if you are reading this article, you are aware that the mobile market is evolving rapidly. The Newzoo market research trend report for the games industry for 2012 states that there are more than 500 million mobile gamers in the world, that around 175 million of them pay for games, and that the mobile market was worth 9 billion dollars in 2012 (source: http://www.newzoo.com/insights/placing-mobile-games-in-perspective-of-the-total-games-market-free-mobile-trend-report/). The following screenshot represents the numbers of the mobile gaming market in 2012 reported by Newzoo.

As Juniper Research, a market intelligence firm, states, "smartphones and tablets are going to be primary devices for gamers to make in-app purchases in the future. Juniper projects 64.1 billion downloads of game apps to mobile devices in 2017, compared to the 21 billion downloaded in 2012." (source: http://www.gamesindustry.biz/articles/2013-04-30-mobile-to-be-primary-hardware-for-gaming-by-2016). Even handheld consoles, such as Nintendo's 3DS or Sony's PS Vita, are suffering from the competition of mobile phones and tablets, thanks to the improvements in mobile hardware and the quality of games.

With regard to market share, a study by Strategy Analytics (source: http://www.strategyanalytics.com/default.aspx?mod=reportabstractviewer&a0=8437) shows that Android was the leading platform in Q1 2013, with 64 percent of all handset sales. Japan is the only market where iOS is in the lead; though, as Apple is fond of pointing out, iOS users generally spend more money than Android users. All the data tell us that the positive trend in mobile device growth will continue for several years and that, with almost one billion mobile devices in the world, the mobile market cannot be ignored by game developers. Android is growing faster than Apple, but Apple is still the most lucrative market for mobile apps and games. Microsoft phones and tablets, on the other hand, have not shown growth comparable to that of iOS and Android. So the question is: how can an indie team enter this market and have a chance of success?

Summary

In this article we discussed best practices for designing mobile games that have a chance to emerge in the highly competitive mobile market. We discussed the factors that come into play while designing games for the mobile platform, from hardware and design limitations to the characteristics of the mobile market and the most successful mobile business models.


Introduction to HLSL in 3D Graphics with XNA Game Studio 4.0

Packt
21 Dec 2010
16 min read
3D Graphics with XNA Game Studio 4.0 is a step-by-step guide to adding the 3D graphics effects used by professionals to your XNA games:

- Improve the appearance of your games by implementing the same techniques used by professionals in the game industry
- Learn the fundamentals of 3D graphics, including common 3D math and the graphics pipeline
- Create an extensible system to draw 3D models and other effects, and learn the skills to create your own effects and animate them

Getting started

The vertex shader and pixel shader are contained in the same code file, called an Effect. The vertex shader is responsible for transforming geometry from object space into screen space, usually using the world, view, and projection matrices. The pixel shader's job is to calculate the color of every pixel onscreen. It is given information about the geometry visible at whatever point onscreen it is being run for, and takes into account lighting, texturing, and so on. For your convenience, I've provided the starting code for this article here:

public class Game1 : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;

    List<CModel> models = new List<CModel>();
    Camera camera;

    MouseState lastMouseState;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";

        graphics.PreferredBackBufferWidth = 1280;
        graphics.PreferredBackBufferHeight = 800;
    }

    // Called when the game should load its content
    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);

        models.Add(new CModel(Content.Load<Model>("ship"),
            new Vector3(0, 400, 0), Vector3.Zero, new Vector3(1f),
            GraphicsDevice));

        models.Add(new CModel(Content.Load<Model>("ground"),
            Vector3.Zero, Vector3.Zero, Vector3.One, GraphicsDevice));

        camera = new FreeCamera(new Vector3(1000, 500, -2000),
            MathHelper.ToRadians(153), // Turned around 153 degrees
            MathHelper.ToRadians(5),   // Pitched up 5 degrees
            GraphicsDevice);

        lastMouseState = Mouse.GetState();
    }

    // Called when the game should update itself
    protected override void Update(GameTime gameTime)
    {
        updateCamera(gameTime);

        base.Update(gameTime);
    }

    void updateCamera(GameTime gameTime)
    {
        // Get the new keyboard and mouse state
        MouseState mouseState = Mouse.GetState();
        KeyboardState keyState = Keyboard.GetState();

        // Determine how much the camera should turn
        float deltaX = (float)lastMouseState.X - (float)mouseState.X;
        float deltaY = (float)lastMouseState.Y - (float)mouseState.Y;

        // Rotate the camera
        ((FreeCamera)camera).Rotate(deltaX * .005f, deltaY * .005f);

        Vector3 translation = Vector3.Zero;

        // Determine in which direction to move the camera
        if (keyState.IsKeyDown(Keys.W)) translation += Vector3.Forward;
        if (keyState.IsKeyDown(Keys.S)) translation += Vector3.Backward;
        if (keyState.IsKeyDown(Keys.A)) translation += Vector3.Left;
        if (keyState.IsKeyDown(Keys.D)) translation += Vector3.Right;

        // Move 4 units per millisecond, independent of frame rate
        translation *= 4 * (float)gameTime.ElapsedGameTime.TotalMilliseconds;

        // Move the camera
        ((FreeCamera)camera).Move(translation);

        // Update the camera
        camera.Update();

        // Update the mouse state
        lastMouseState = mouseState;
    }

    // Called when the game should draw itself
    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);

        foreach (CModel model in models)
            if (camera.BoundingVolumeIsInView(model.BoundingSphere))
                model.Draw(camera.View, camera.Projection,
                    ((FreeCamera)camera).Position);
        base.Draw(gameTime);
    }
}

Assigning a shader to a model

In order to draw a model with XNA, it needs to have an instance of the Effect class assigned to it. Recall from the first chapter that each ModelMeshPart in a Model has its own Effect. This is because each ModelMeshPart may need to have a different appearance; one ModelMeshPart may, for example, make up the armor on a soldier while another may make up the head. If the two used the same effect (shader), we could end up with a very shiny head or a very dull piece of armor. Instead, XNA gives us the option to assign every ModelMeshPart a unique effect. In order to draw our models with our own effects, we need to replace the BasicEffect of every ModelMeshPart with our own effect loaded from the content pipeline. For now, we won't worry about the fact that each ModelMeshPart can have its own effect; we'll just be assigning one effect to an entire model. Later, however, we will add more functionality to allow different effects on each part of a model.

However, before we start replacing the instances of BasicEffect assigned to our models, we need to extract some useful information from them, such as which texture and color to use for each ModelMeshPart. We will store this information in a new class that each ModelMeshPart will keep a reference to using its Tag property:

public class MeshTag
{
    public Vector3 Color;
    public Texture2D Texture;
    public float SpecularPower;

    public Effect CachedEffect = null;

    public MeshTag(Vector3 Color, Texture2D Texture, float SpecularPower)
    {
        this.Color = Color;
        this.Texture = Texture;
        this.SpecularPower = SpecularPower;
    }
}

This information will be extracted using a new function in the CModel class:

private void generateTags()
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
            if (part.Effect is BasicEffect)
            {
                BasicEffect effect = (BasicEffect)part.Effect;

                MeshTag tag = new MeshTag(effect.DiffuseColor,
                    effect.Texture, effect.SpecularPower);

                part.Tag = tag;
            }
}

This function will be called along with buildBoundingSphere() in the constructor:

...
buildBoundingSphere();
generateTags();
...

Notice that the MeshTag class has a CachedEffect variable that is not currently used. We will use this value as a place to store a reference to an effect that we want to be able to restore to the ModelMeshPart on demand. This is useful when we want to draw a model using a different effect temporarily, without having to completely reload the model's effects afterwards. The functions that allow us to do this are as shown:

// Store references to all of the model's current effects
public void CacheEffects()
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
            ((MeshTag)part.Tag).CachedEffect = part.Effect;
}

// Restore the effects referenced by the model's cache
public void RestoreEffects()
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
            if (((MeshTag)part.Tag).CachedEffect != null)
                part.Effect = ((MeshTag)part.Tag).CachedEffect;
}
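To make the intent of this pair of functions concrete, here is a hypothetical usage sketch (ours, not from the book): drawing every model with a temporary effect and then putting the originals back. The depthEffect variable is an assumed Effect loaded elsewhere; only CacheEffects, SetModelEffect (shown below), RestoreEffects, and Draw come from the CModel class.

// Hypothetical usage: temporarily draw all models with another effect,
// then restore each ModelMeshPart's original effect.
foreach (CModel model in models)
{
    model.CacheEffects();                    // remember the current effects
    model.SetModelEffect(depthEffect, true); // assumed Effect loaded elsewhere
    model.Draw(camera.View, camera.Projection,
        ((FreeCamera)camera).Position);
    model.RestoreEffects();                  // put the originals back
}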
We are now ready to start assigning effects to our models. We will look at this in more detail in a moment, but it is worth noting that every Effect has a dictionary of effect parameters. These are variables that the Effect takes into account when performing its calculations—the world, view, and projection matrices, or colors and textures, for example. We modify a number of these parameters when assigning a new effect, so that each ModelMeshPart can inform the effect of its specific properties:

public void SetModelEffect(Effect effect, bool CopyEffect)
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
        {
            Effect toSet = effect;

            // Copy the effect if necessary
            if (CopyEffect)
                toSet = effect.Clone();

            MeshTag tag = ((MeshTag)part.Tag);

            // If this ModelMeshPart has a texture, set it to the effect
            if (tag.Texture != null)
            {
                setEffectParameter(toSet, "BasicTexture", tag.Texture);
                setEffectParameter(toSet, "TextureEnabled", true);
            }
            else
                setEffectParameter(toSet, "TextureEnabled", false);

            // Set our remaining parameters to the effect
            setEffectParameter(toSet, "DiffuseColor", tag.Color);
            setEffectParameter(toSet, "SpecularPower", tag.SpecularPower);

            part.Effect = toSet;
        }
}

// Sets the specified effect parameter to the given effect, if it
// has that parameter
void setEffectParameter(Effect effect, string paramName, object val)
{
    if (effect.Parameters[paramName] == null)
        return;

    if (val is Vector3)
        effect.Parameters[paramName].SetValue((Vector3)val);
    else if (val is bool)
        effect.Parameters[paramName].SetValue((bool)val);
    else if (val is Matrix)
        effect.Parameters[paramName].SetValue((Matrix)val);
    else if (val is Texture2D)
        effect.Parameters[paramName].SetValue((Texture2D)val);
}

The CopyEffect parameter of this function is very important. If we specify false—telling the CModel not to copy the effect per ModelMeshPart—any changes made to the effect will be reflected everywhere else the effect is used. This is a problem if we want each ModelMeshPart to have a different texture, or if we want to use the same effect on multiple models. Instead, we can specify true to have the CModel copy the effect for each mesh part so that each part can set its own effect parameters.

Finally, we need to update the Draw() function to handle Effects other than BasicEffect:

public void Draw(Matrix View, Matrix Projection, Vector3 CameraPosition)
{
    // Calculate the base transformation by combining
    // translation, rotation, and scaling
    Matrix baseWorld = Matrix.CreateScale(Scale)
        * Matrix.CreateFromYawPitchRoll(Rotation.Y, Rotation.X, Rotation.Z)
        * Matrix.CreateTranslation(Position);

    foreach (ModelMesh mesh in Model.Meshes)
    {
        Matrix localWorld = modelTransforms[mesh.ParentBone.Index]
            * baseWorld;

        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            Effect effect = meshPart.Effect;

            if (effect is BasicEffect)
            {
                ((BasicEffect)effect).World = localWorld;
                ((BasicEffect)effect).View = View;
                ((BasicEffect)effect).Projection = Projection;
                ((BasicEffect)effect).EnableDefaultLighting();
            }
            else
            {
                setEffectParameter(effect, "World", localWorld);
                setEffectParameter(effect, "View", View);
                setEffectParameter(effect, "Projection", Projection);
                setEffectParameter(effect, "CameraPosition", CameraPosition);
            }
        }

        mesh.Draw();
    }
}

Creating a simple effect

We will create our first effect now and assign it to our models so that we can see the result. To begin, right-click on the content project, choose Add New Item, and select Effect File. Call it something like SimpleEffect.fx. The code for the new file is as follows.
Don't worry, we'll go through each piece in a moment:

float4x4 World;
float4x4 View;
float4x4 Projection;

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4x4 viewProjection = mul(View, Projection);

    output.Position = mul(worldPosition, viewProjection);

    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return float4(.5, .5, .5, 1);
}

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_1_1 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

To assign this effect to the models in our scene, we need to first load it in the game's LoadContent() function, then use the SetModelEffect() function to assign the effect to each model. Add the following to the end of the LoadContent function:

Effect simpleEffect = Content.Load<Effect>("SimpleEffect");

models[0].SetModelEffect(simpleEffect, true);
models[1].SetModelEffect(simpleEffect, true);

If you were to run the game now, you would notice that the models appear both flat and gray. This is the correct behavior, as the effect doesn't yet contain the code necessary to do anything else. After we break down each piece of the shader, we will add some more exciting behavior.

Let's begin at the top. The first three lines in this effect are its effect parameters. These three should be familiar to you—they are the world, view, and projection matrices (in HLSL, float4x4 is the equivalent of XNA's Matrix class). There are many types of effect parameters, and we will see more later.

float4x4 World;
float4x4 View;
float4x4 Projection;

The next few lines are where we define the structures used in the shaders. In this case, the two structs are VertexShaderInput and VertexShaderOutput. As you might guess, these two structs are used to send input into the vertex shader and retrieve the output from it. The data in the VertexShaderOutput struct is then interpolated between vertices and sent to the pixel shader. This way, when we access the Position value in the pixel shader for a pixel that sits between two vertices, we will get the actual position of that location instead of the position of one of the two vertices. In this case, the input and output are very simple: just the position of the vertex before and after it has been transformed using the world, view, and projection matrices:

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
};

You may note that the members of these structs are a little different from the properties of a class in C#, in that they must also include what are called semantics. Microsoft's definition of shader semantics is as follows (http://msdn.microsoft.com/en-us/library/bb509647%28VS.85%29.aspx):

A semantic is a string attached to a shader input or output that conveys information about the intended use of a parameter.

Basically, we need to specify what we intend to do with each member of our structs so that the graphics card can correctly map the vertex shader's outputs to the pixel shader's inputs. For example, in the previous code, we use the POSITION0 semantic to tell the graphics card that this value is the one that holds the position at which to draw the vertex. The next few lines are the vertex shader itself.
Basically, we are just multiplying the input (object space, or untransformed) vertex position by the world, view, and projection matrices (the mul function is part of HLSL and is used to multiply matrices and vectors) and returning that value in a new instance of the VertexShaderOutput struct:

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4x4 viewProjection = mul(View, Projection);

    output.Position = mul(worldPosition, viewProjection);

    return output;
}

The next bit of code makes up the pixel shader. It accepts a VertexShaderOutput struct as its input (which is passed from the vertex shader), and returns a float4—equivalent to XNA's Vector4 class, in that it is basically a set of four floating point (decimal) numbers. We use the COLOR0 semantic for our return value to let the pipeline know that this function is returning the final pixel color. In this case, we are using those numbers to represent the red, green, blue, and transparency values, respectively, of the pixel that we are shading. In this extremely simple pixel shader, we are just returning the color gray (.5, .5, .5), so any pixel covered by the model we are drawing will be gray (like in the previous screenshot).

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return float4(.5, .5, .5, 1);
}

The last part of the shader is the shader definition. Here, we tell the graphics card which vertex and pixel shader versions to use (every graphics card supports a different set, but in this case we are using vertex shader 1.1 and pixel shader 2.0) and which functions in our code make up the vertex and pixel shaders:

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_1_1 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

Texture mapping

Let's now improve our shader by allowing it to render the textures each ModelMeshPart has assigned. As you may recall, the SetModelEffect function in the CModel class attempts to set the texture of each ModelMeshPart to its respective effect. However, it does so only if it finds the BasicTexture parameter on the effect. Let's add this parameter to our effect now (under the world, view, and projection parameters):

texture BasicTexture;

We need one more parameter in order to draw textures on our models, and that is an instance of a sampler. The sampler is used by HLSL to retrieve the color of the pixel at a given position in a texture—which will be useful later on in our pixel shader, where we will need to retrieve the pixel from the texture corresponding to the point on the model we are shading:

sampler BasicTextureSampler = sampler_state
{
    texture = <BasicTexture>;
};

A third effect parameter will allow us to turn texturing on and off:

bool TextureEnabled = false;

Every model that has a texture should also have what are called texture coordinates. The texture coordinates are basically two-dimensional coordinates called UV coordinates that range from (0, 0) to (1, 1) and that are assigned to every vertex in the model. These coordinates correspond to the point on the texture that should be drawn onto that vertex. A UV coordinate of (0, 0) corresponds to the top-left of the texture and (1, 1) corresponds to the bottom-right. The texture coordinates allow us to wrap two-dimensional textures onto the three-dimensional surfaces of our models.
We need to include the texture coordinates in the input and output of the vertex shader, and add the code to pass the UV coordinates through the vertex shader to the pixel shader:

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4x4 viewProjection = mul(View, Projection);

    output.Position = mul(worldPosition, viewProjection);
    output.UV = input.UV;

    return output;
}

Finally, we can use the texture sampler, the texture coordinates (also called UV coordinates), and HLSL's tex2D function to retrieve the texture color corresponding to the pixel we are drawing on the model:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 output = float3(1, 1, 1);

    if (TextureEnabled)
        output *= tex2D(BasicTextureSampler, input.UV);

    return float4(output, 1);
}

If you run the game now, you will see that the textures are properly drawn onto the models.

Texture sampling

The problem with texture sampling is that we are rarely able to simply copy each pixel from a texture directly onto the screen, because our models bend and distort the texture due to their shape. Textures are distorted further by the transformations we apply to our models—rotation and other transformations. This means that we almost always have to calculate the approximate position in a texture to sample from and return that value, which is what HLSL's sampler2D does for us. There are a number of considerations to make when sampling. How we sample from our textures can have a big impact on both our game's appearance and its performance. More advanced sampling (or filtering) algorithms look better but slow down the game. Mip mapping refers to the use of multiple sizes of the same texture. These multiple sizes are calculated before the game is run and stored in the same texture, and the graphics card will swap them out on the fly, using a smaller version of the texture for objects in the distance, and so on. Finally, the address mode that we use when sampling will affect how the graphics card handles UV coordinates outside the (0, 1) range. For example, if the address mode is set to "clamp", the UV coordinates will be clamped to (0, 1). If the address mode is set to "wrap", the coordinates will be wrapped through the texture repeatedly. This can be used to create a tiling effect on terrain, for example. For now, because we are drawing so few models, we will use anisotropic filtering. We will also enable mip mapping and set the address mode to "wrap":

sampler BasicTextureSampler = sampler_state
{
    texture = <BasicTexture>;
    MinFilter = Anisotropic; // Minification Filter
    MagFilter = Anisotropic; // Magnification Filter
    MipFilter = Linear;      // Mip-mapping
    AddressU = Wrap;         // Address Mode for U Coordinates
    AddressV = Wrap;         // Address Mode for V Coordinates
};

This will give our models a nice, smooth appearance in the foreground and a uniform appearance in the background.
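As an aside from us (not from the original text), XNA 4.0 also exposes filtering and addressing options on the C# side through the SamplerState class; a rough counterpart of the settings above, if you ever need to configure them on the graphics device instead of inside the .fx file, might look like this:

// Rough C# counterpart of the sampler_state block above (an assumption
// on our part; the .fx settings remain the authoritative ones here).
SamplerState anisotropicWrap = new SamplerState
{
    Filter = TextureFilter.Anisotropic,   // covers min/mag filtering
    AddressU = TextureAddressMode.Wrap,
    AddressV = TextureAddressMode.Wrap
};
GraphicsDevice.SamplerStates[0] = anisotropicWrap;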


Creating a Text Logo in Blender

Packt
30 Nov 2009
3 min read
Here is the final image we will be creating:

Let us begin! We are going to begin with the default settings in Blender, as you can see below.

Creating the Background

To create the background we are going to add a plane and then make a series of holes in it. This will then act as the basis for our entire background when we replicate the plane with two array modifiers and a mirror modifier. Go ahead and:

- Add a plane from top view by hitting Spacebar > Add > Mesh > Plane
- Subdivide that plane by hitting W > Subdivide Multi > 3 divisions

This will give us a grid that we can punch a few holes in with relative ease. Next, go ahead and select the vertices shown below. Then:

- Press X > Delete Faces to delete the selected faces

Next, select the inside edges of the upper-left hole by clicking on one of the edges with Alt + RMB. You may then hit E > Esc to extrude and cancel the transform that extruding activates. Next, you can hit Ctrl + Shift + S > 1 for the To Sphere command; this will modify the extruded vertices into a perfect circle. Check out the result below.

From here we can duplicate this circle and the surrounding faces into place over all the other holes, such that we have a mesh that will repeat without any gaps. Think of it as a tileable texture, but in mesh form! As you will surely notice, on the bottom left and bottom right you will only duplicate half of the circle. After duplicating each piece and moving it into place, it will be necessary to remove all the duplicate vertices:

- Select everything with A
- Press W > Remove Doubles

Moving on, before we can replicate our pattern we need to move it such that the bottom-left corner is at the center point of our grid. If you used the default size for the plane, then you can simply select everything and hold down Ctrl while moving it to lock it to the grid. Now, for our final background, we want the holes in our mesh to have some depth. To do this, all we need to do is select each of the inner circles and extrude them down along the Z-axis, as you can see in the image below.

Now is where things begin to get really fun! We are going to add two array modifiers to replicate our pattern. The first array will repeat the pattern along the X-axis to the right, and the second array will replicate the pattern down along the Y-axis. We will then use a mirror modifier along the X and Y axes to duplicate the whole pattern across those axes. First:

- Go to the Editing Buttons and click on Add Modifier > Array
- Increase the count to 10
- Click Merge
- Add a second Array and change the count to 3
- Click Merge
- Change the X Offset to 0 and the Y Offset to 1.0

This will leave you with 1/4 of our final pattern. To complete it:

- Add a Mirror Modifier
- Click Y in addition to the default X; this will mirror it both up and across the central axis
- Add a Subsurf modifier to smooth out the mesh
- Select everything with A and then press W > Set Smooth

Setting the mesh to smooth will likely cause some normal issues (black spots), in which case you need to hit Ctrl + N > Recalculate Normals while everything is selected.


Saying Hello to Unity and Android

Packt
04 May 2015
51 min read
Welcome to the wonderful world of mobile game development. Whether you are still looking for the right development kit or have already chosen one, this article will be very important. In this article by Tom Finnegan, the author of Learning Unity Android Game Development, we explore the various features that come with choosing Unity as your development environment and Android as the target platform. Through comparison with its major competitors, we will see why Unity and Android stand at the top of the pile. Following this, we will examine how Unity and Android work together. Finally, the development environment will be set up and we will create a simple Hello World application to test whether everything is set up correctly.

In this article, we will cover the following topics:

- Major Unity features
- Major Android features
- Unity licensing options
- Installing the JDK
- Installing the Android software development kit (SDK)
- Installing Unity 3D
- Installing Unity Remote

Understanding what makes Unity great

Perhaps the greatest feature of Unity is how open-ended it is. Nearly all game engines currently on the market are limited in what one can build with them. It makes perfect sense, but it can limit the capabilities of a team. The average game engine has been highly optimized for creating a specific game type. This is great if all you plan on making is the same game again and again. It can be quite frustrating when one is struck with inspiration for the next great hit, only to find that the game engine can't handle it and everyone has to retrain in a new engine or double the development time to make the game engine capable.

Unity does not suffer from this problem. The developers of Unity have worked very hard to optimize every aspect of the engine without limiting what types of games can be made using it. Everything from simple 2D platformers to massive online role-playing games is possible in Unity. A development team that just finished an ultra-realistic first-person shooter can turn right around and make 2D fighting games without having to learn an entirely new system.

Being so open-ended does, however, bring a drawback. There are no default tools that are optimized for building the perfect game. To combat this, Unity grants the ability to create any tool one can imagine, using the same scripting that creates the game. On top of that, there is a strong community of users who have supplied a wide selection of tools and pieces, both free and paid, that can be quickly plugged in and used. This results in a large selection of available content that is ready to jump-start you on your way to the next great game.

When many prospective users look at Unity, they think that, because it is so cheap, it is not as good as an expensive AAA game engine. This is simply not true. Throwing more money at the game engine is not going to make a game any better. Unity supports all of the fancy shaders, normal maps, and particle effects that you could want. The best part is that nearly all of the fancy features you could want are included in the free version of Unity, and 90 percent of the time you do not even need to use the Pro-only features.

One of the greatest concerns when selecting a game engine, especially for the mobile market, is how much girth it will add to the final build size. Most game engines are quite hefty. With Unity's code stripping, the final build size of the project becomes quite small.
Code stripping is the process by which Unity removes every extra little bit of code from the compiled libraries. A blank project compiled for Android that utilizes full code stripping ends up being around 7 megabytes.

Perhaps one of the coolest features of Unity is its multi-platform compatibility. With a single project, one can build for several different platforms. This includes the ability to simultaneously target mobiles, PCs, and consoles. This allows you to focus on real issues, such as handling inputs, resolution, and performance. In the past, if a company desired to deploy their product on more than one platform, they had to nearly double the development costs in order to essentially reprogram the game. Every platform did, and still does, run on its own logic and language. Thanks to Unity, game development has never been simpler. We can develop games using simple and fast scripting, letting Unity handle the complex translation to each platform.

Unity – the best among the rest

There are of course several other options for game engines. Two major ones that come to mind are cocos2d and Unreal Engine. While both are excellent choices, you will find them to be a little lacking in certain respects.

The engine of Angry Birds, cocos2d, could be a great choice for your next mobile hit. However, as the name suggests, it is pretty much limited to 2D games. A game can look great in it, but if you ever want that third dimension, it can be tricky to add it to cocos2d; you may need to select a new game engine. A second major problem with cocos2d is how bare-bones it is. Any tool for building or importing assets needs to be created from scratch, or it needs to be found. Unless you have the time and experience, this can seriously slow down development.

Then there is the staple of major game development, Unreal Engine. This game engine has been used successfully by developers for many years, bringing great games to the world, Unreal Tournament and Gears of War not the least among them. These are both, however, console and computer games, which is the fundamental problem with the engine. Unreal is a very large and powerful engine. Only so much optimization can be done on it for mobile platforms. It has always had the same problem: it adds a lot of girth to a project and its final build. The other major issue with Unreal is its rigidity as a first-person shooter engine. While it is technically possible to create other types of games in it, such tasks are long and complex. A strong working knowledge of the underlying system is a must before achieving such a feat.

All in all, Unity definitely stands strong amidst game engines. And these are not the only great reasons for choosing Unity for game development. Unity projects can look just as great as AAA titles. The overhead and girth in the final build are small, and this is very important when working on mobile platforms. The system's potential is open enough to allow you to create any type of game that you might want, where other engines tend to be limited to a single type of game. In addition, should your needs change at any point in the project's life cycle, it is very easy to add, remove, or change your choice of target platforms.

Understanding what makes Android great

With over 30 million devices in the hands of users, why would you not choose the Android platform for your next mobile hit? Apple may have been the first out of the gate with their iPhone sensation, but Android is definitely a step ahead when it comes to smartphone technology.
One of its best features is its blatant ability to be opened up so that you can take a look at how the phone works, both physically and technically. One can swap out the battery and upgrade the micro SD card on nearly all Android devices, should the need arise. Plugging the phone into a computer does not have to be a huge ordeal; it can simply function as removable storage media.

From the point of view of the cost of development as well, the Android market is superior. Other mobile app stores require an annual registration fee of about 100 dollars. Some also have a limit on the number of devices that can be registered for development at one time. The Google Play market has a one-time registration fee of 25 dollars, and there is no concern about how many Android devices or what type of Android devices you are using for development. One of the drawbacks of some of the other mobile development kits is that you have to pay an annual registration fee before you have access to the SDK. With some, registration and payment are required before you can even view their documentation. Android is much more open and accessible. Anybody can download the Android SDK for free. The documentation and forums are completely viewable without having to pay any fee. This means development for Android can start earlier, with device testing being a part of it from the very beginning.

Understanding how Unity and Android work together

As Unity handles projects and assets in a generic way, there is no need to create multiple projects for multiple target platforms. This means that you could easily start development with the free version of Unity and target personal computers. Then, at a later date, you can switch targets to the Android platform with the click of a button. Perhaps, shortly after your game is launched, it takes the market by storm and there is a great call to bring it to other mobile platforms. With just another click of the button, you can easily target iOS without changing anything in your project.

Most systems require a long and complex set of steps to get your project running on a device. However, once your device is set up and recognized by the Android SDK, a single button click will allow Unity to build your application, push it to a device, and start running it. There is nothing that has caused more headaches for some developers than trying to get an application onto a device. Unity makes this simple.

With the addition of a free Android application, Unity Remote, it is simple and easy to test mobile inputs without going through the whole build process. While developing, there is nothing more annoying than waiting five minutes for a build every time you need to test a minor tweak, especially in the controls and interface. After the first dozen little tweaks, the build time starts to add up. Unity Remote makes it simple and easy to test everything without ever having to hit the Build button.

These are the big three reasons why Unity works well with Android:

- Generic projects
- A one-click build process
- Unity Remote

We could, of course, come up with several more great ways in which Unity and Android work together. However, these three are the major time and money savers. You could have the greatest game in the world, but if it takes 10 times longer to build and test, what is the point?

Differences between the Pro and Basic versions of Unity

Unity comes with two licensing options, Pro and Basic, which can be found at https://store.unity3d.com.
If you are not quite ready to spend the 3,000 dollars that is required to purchase a full Unity Pro license with the Android add-on, there are other options. Unity Basic is free and comes with a 30-day free trial of Unity Pro. This trial is full and complete, as if you had purchased Unity Pro, the only downside being a watermark in the bottom-right corner of your game stating Demo Use Only. It is also possible to upgrade your license at a later date. Where Unity Basic comes with mobile options for free, Unity Pro requires the purchase of Pro add-ons for each of the mobile platforms.

An overview of license comparison

License comparisons can be found at http://unity3d.com/unity/licenses. This section will cover the specific differences between Unity Android Pro and Unity Android Basic. We will explore what the features are and how useful each one is in the following sections.

NavMeshes, pathfinding, and crowd simulation

This feature is Unity's built-in pathfinding system. It allows characters to find their way from point to point around your game. Just bake your navigation data in the editor and let Unity take over at runtime. Until recently, this was a Unity Pro-only feature. Now the only part of it that is limited in Unity Basic is the use of off-mesh links. The only time you are going to need them is when you want your AI characters to be able to jump across and otherwise navigate around gaps.

LOD support

LOD (short for level of detail) lets you control how complex a mesh is, based on its distance from the camera. When the camera is close to an object, you can render a complex mesh with a bunch of detail in it. When the camera is far from that object, you can render a simple mesh, because all that detail is not going to be seen anyway. Unity Pro provides a built-in system to manage this. However, this is another system that could be created in Unity Basic. Whether or not you are using the Pro version, this is an important feature for game efficiency. By rendering less complex meshes at a distance, everything can be rendered faster, leaving more room for awesome gameplay.
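To make that last point concrete, here is a minimal sketch of the kind of distance-based mesh swap that could be scripted in Unity Basic (our illustration, not Unity Pro's built-in LOD system; the two meshes and the switch distance are assumptions you would assign yourself):

using UnityEngine;

// Illustration only: a tiny distance-based LOD switcher.
// detailedMesh, simpleMesh, and switchDistance are placeholders
// that you would assign in the Inspector.
public class SimpleLOD : MonoBehaviour
{
    public Mesh detailedMesh;
    public Mesh simpleMesh;
    public float switchDistance = 50f;

    private MeshFilter meshFilter;

    void Start()
    {
        meshFilter = GetComponent<MeshFilter>();
    }

    void Update()
    {
        float distance = Vector3.Distance(
            Camera.main.transform.position, transform.position);

        // Show the cheap mesh when the camera is far away.
        meshFilter.sharedMesh =
            distance > switchDistance ? simpleMesh : detailedMesh;
    }
}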
The audio filter

Audio filters allow you to add effects to audio clips at runtime. Perhaps you created gravel footstep sounds for your character. Your character is running, and we can hear the footsteps just fine, when suddenly they enter a tunnel and a solar flare hits, causing a time warp and slowing everything down. Audio filters would allow us to warp the gravel footstep sounds to sound as if they were coming from within a tunnel and were slowed by a time warp. Of course, you could also just have the audio guy create a new set of tunnel gravel footsteps in the time warp sounds, although this might double the amount of audio in your game and limit how dynamic we can be with it at runtime. We either are or are not playing the time warp footsteps. Audio filters would allow us to control how much the time warp is affecting our sounds.

Video playback and streaming

When dealing with complex or high-definition cut scenes, being able to play videos becomes very important. Including them in a build, especially with a mobile target, can require a lot of space. This is where the streaming part of this feature comes in. This feature not only lets us play videos but also lets us stream a video from the Internet. There is, however, a drawback to this feature. On mobile platforms, the video has to go through the device's built-in video-playing system. This means that the video can only be played in fullscreen and cannot be used as a texture for effects such as moving pictures on a TV model. Theoretically, you could break your video into individual pictures for each frame and flip through them at runtime, but this is not recommended, for build size and video quality reasons.

Fully-fledged streaming with asset bundles

Asset bundles are a great feature provided by Unity Pro. They allow you to create extra content and stream it to users without ever requiring an update to the game. You could add new characters, levels, or just about any other content you can think of. Their only drawback is that you cannot add more code. The functionality cannot change, but the content can. This is one of the best features of Unity Pro.

The 100,000 dollar turnover

This one isn't so much a feature as it is a guideline. According to Unity's End User License Agreement, the Basic version of Unity cannot be licensed by any group or individual that made 100,000 dollars in the previous fiscal year. This basically means that if you make a bunch of money, you have to buy Unity Pro. Of course, if you are making that much money, you can probably afford it without an issue. This is the view of Unity, at least, and the reason why there is a 100,000 dollar turnover clause.

Mecanim – IK Rigs

Unity's new animation system, Mecanim, supports many exciting new features, one of which is IK (short for inverse kinematics). If you are unfamiliar with the term, IK allows one to define the target point of an animation and let the system figure out how to get there. Imagine you have a cup sitting on a table and a character that wants to pick it up. You could animate the character to bend over and pick it up; but what if the character is slightly to the side? Or any number of other slight offsets that a player could cause, completely throwing off your animation? It is simply impractical to animate for every possibility. With IK, it hardly matters that the character is slightly off. We just define the goal point for the hand and leave the animation of the arm to the IK system. It calculates how the arm needs to move in order to get the hand to the cup. Another fun use is making characters look at interesting things as they walk around a room: a guard could track the nearest person, the player's character could look at things that they can interact with, or a tentacle monster could lash out at the player without all the complex animation. This will be an exciting one to play with.
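As a rough sketch of what the cup example above might look like in a script (ours, not the book's code; it assumes an Animator with an IK Pass enabled on the layer and a target Transform assigned in the Inspector):

using UnityEngine;

// Sketch only: use Mecanim IK to make the right hand reach for a target.
// "target" is an assumed Transform (the cup in the example above).
public class ReachForTarget : MonoBehaviour
{
    public Transform target;
    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    // Called by the Animator on layers that have IK Pass enabled.
    void OnAnimatorIK(int layerIndex)
    {
        if (target == null)
            return;

        animator.SetIKPositionWeight(AvatarIKGoal.RightHand, 1f);
        animator.SetIKPosition(AvatarIKGoal.RightHand, target.position);

        // Have the character look at what it is reaching for, too.
        animator.SetLookAtWeight(1f);
        animator.SetLookAtPosition(target.position);
    }
}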
Mecanim – sync layers and additional curves

Sync layers, inside Mecanim, allow us to keep multiple sets of animation states in time with each other. Say you have a soldier that you want to animate differently based on how much health he has. When he is at full health, he walks around briskly. After a little damage to his health, the walk becomes more of a trudge. If his health is below half, a limp is introduced into his walk, and when he is almost dead he crawls along the ground. With sync layers, we can create one animation state machine and duplicate it to multiple layers. By changing the animations and syncing the layers, we can easily transition between the different animations while maintaining the state machine.

The additional curves feature is simply the ability to add curves to your animation. This means we can control various values with the animation. For example, in the game world, when a character picks up its feet for a jump, gravity will pull them down almost immediately. By adding an extra curve to that animation, in Unity, we can control how much gravity is affecting the character, allowing them to actually be in the air when jumping. This is a useful feature for controlling such values alongside the animations, but you could just as easily create a script that holds and controls the curves.

The custom splash screen

Though pretty self-explanatory, it is perhaps not immediately evident why this feature is specified, unless you have worked with Unity before. When an application that is built in Unity initializes on any platform, it displays a splash screen. In Unity Basic, this will always be the Unity logo. By purchasing Unity Pro, you can substitute any image you want for the Unity logo.

Real-time spot/point and soft shadows

Lights and shadows add a lot to the mood of a scene. This feature allows us to go beyond blob shadows and use realistic-looking shadows. This is all well and good if you have the processing power for it. However, most mobile devices do not. This feature should also never be used for static scenery; instead, use static lightmaps, which is what they are for. However, if you can find a good balance between simple needs and quality, this could be the feature that creates the difference between an alright game and an awesome one. If you absolutely must have real-time shadows, the directional light supports them and is the fastest of the lights to calculate. It is also the only type of light available to Unity Basic that supports real-time shadows.

HDR and tone mapping

HDR (short for high dynamic range) and tone mapping allow us to create more realistic lighting effects. Standard rendering uses values from zero to one to represent how much of each color in a pixel is on. This does not allow for a full spectrum of lighting options to be explored. HDR lets the system use values beyond this range and processes them using tone mapping to create better effects, such as a bright morning room or the bloom from a car window reflecting the sun. The downside of this feature is in the processor. The device can still only handle values between zero and one, so converting them takes time. Additionally, the more complex the effect, the more time it takes to render it. It would be surprising to see this used well on handheld devices, even in a simple game. Maybe the modern tablets could handle it.

Light probes

Light probes are an interesting little feature. When placed in the world, light probes figure out how an object should be lit. Then, as a character walks around, they tell it how to be shaded. The character is, of course, lit by the lights in the scene, but there are limits on how many lights can shade an object at once. Light probes do all the complex calculations beforehand, allowing for better shading at runtime. Again, however, there are concerns about processing power. Too little power and you won't get a good effect; too much and there will be no processing power left for playing the game.

Lightmapping with global illumination and area lights

All versions of Unity support lightmaps, allowing for the baking of complex static shadows and lighting effects. With the addition of global illumination and area lights, you can add another touch of realism to your scenes. However, every version of Unity also lets you import your own lightmaps. This means that you could use some other program to render the lightmaps and import them separately.

Static batching

This feature speeds up the rendering process.
Instead of spending time grouping objects for faster rendering on every frame, static batching lets the system save the groups generated beforehand. Reducing the number of draw calls is a powerful step towards making a game run faster, and that is exactly what this feature does.

Render-to-texture effects

This is a fun feature, but one of limited use. It allows you to use the output of a camera in your game as a texture. In its simplest form, that texture could be put onto a mesh and act as a surveillance camera. You could also do some custom post-processing, such as draining the color from the world as the player loses health. However, this option can become very processor-intensive.

Fullscreen post-processing effects

This is another processor-intensive feature that probably will not make it into your mobile game. It does, however, let you add some very cool effects to your scene, such as motion blur when the player is moving really fast, or a vortex effect to warp the scene as the ship passes through a warped section of space. One of the best effects is using bloom to give things a neon-like glow.

Occlusion culling

This is another great optimization feature. The standard camera system renders everything within the camera's view frustum, the view space. Occlusion culling lets us set up volumes in the space our camera can enter; these volumes are used to calculate what the camera can actually see from those locations. If there is a wall in the way, what is the point of rendering everything behind it? Occlusion culling works this out and stops the camera from rendering anything behind that wall.

Deferred rendering

If you want the best-looking game possible, with highly detailed lighting and shadows, this feature is of interest to you. Deferred rendering is a multi-pass process for calculating your game's light and shadow detail. It is, however, an expensive process and requires a decent graphics card to get the most out of it. Unfortunately, that puts it a little outside our use for mobile games.

Stencil buffer access

Custom shaders can use the stencil buffer to create special effects by selectively rendering over specific pixels. It is similar to how one might use an alpha channel to selectively render parts of a texture.

GPU skinning

This is a processing and rendering method by which the calculations for how a character or object deforms, when using a skeleton rig, are handed to the graphics card rather than the central processor. It is significantly faster to render objects this way. However, it is only supported on DirectX 11 and OpenGL ES 3.0, leaving it a bit out of reach for our mobile games.

Navmesh – dynamic obstacles and priority

This feature works in conjunction with the pathfinding system. In scripts, we can dynamically set obstacles, and characters will find their way around them. Being able to set priorities means that different types of characters can take different kinds of objects into consideration when finding their way around. For example, a soldier must go around the barricades to reach his target; the tank, however, could just crash through, should the player desire.

Native code plugin support

If you have a custom set of code in the form of a Dynamic Link Library (DLL), this is the Unity Pro feature you need. Without it, native plugins cannot be accessed by Unity for use with your game.

Profiler and GPU profiling

This is a very useful feature.
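As a small, hedged example of putting it to work, you can bracket a section of your own code with profiler samples so it shows up by name in the CPU profiler. In the Unity versions covered here the Profiler class lives in UnityEngine (later versions moved it to UnityEngine.Profiling), and EnemySpawner and ExpensiveWork are illustrative names only.

```csharp
using UnityEngine;

public class EnemySpawner : MonoBehaviour
{
    void Update()
    {
        // Everything between BeginSample and EndSample is reported
        // as its own named entry in the CPU profiler.
        Profiler.BeginSample("EnemySpawner.ExpensiveWork");
        ExpensiveWork();
        Profiler.EndSample();
    }

    void ExpensiveWork()
    {
        // Placeholder for the code you actually want to measure.
    }
}
```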
The profiler provides a wealth of information about how much load your game puts on the processor. With this information, we can get right down into the nitty-gritties and determine exactly how long a script takes to process.

Script access to the asset pipeline

This is an alright feature. With full access to the pipeline, there is a lot of custom processing that can be done on assets and builds. The full range of possibilities is beyond the scope of this article, but as a simple example, you could write an import script that tints all of the imported textures slightly blue.

Dark skin

This is an entirely cosmetic feature, and its point and purpose are questionable. However, if a smooth, dark look for the editor is what you desire, this is the feature you want. There is an option in the editor to switch back to the color scheme used in Unity Basic, so whatever floats your boat goes.

Setting up the development environment

Before we can create the next great game for Android, we need to install a few programs. In order to make the Android SDK work, we will first install the Java Development Kit (JDK). Then we will install the Android SDK, followed by Unity and an optional code editor. To make sure everything is set up correctly, we will connect to our devices and look at some special strategies for when a device is tricky. Finally, we will install Unity Remote, a program that will become invaluable in your mobile development.

Installing the JDK

Android's development language of choice is Java; so, to develop for it, we need a copy of the Java SE Development Kit on our computer. The process of installing the JDK is given in the following steps: The latest version of the JDK can be downloaded from http://www.oracle.com/technetwork/java/javase/downloads/index.html. Open the site in a web browser and you will see the screen shown in the following screenshot: Select Java Platform (JDK) from the available versions and you will be brought to a page that contains the license agreement and allows you to select the type of file you wish to download. Accept the license agreement and select the appropriate Windows version from the list at the bottom. If you are unsure about which version to choose, Windows x86 is usually a safe choice. Once the download is completed, run the new installer. After a system scan, click on Next two times, let the JDK initialize, and then click on the Next button one more time to install the JDK to the default location. It is as good there as anywhere else, so once it is installed, hit the Close button. We have just finished installing the JDK. We need this so that our Android development kit will work. Luckily, the installation process for this keystone is short and sweet.

Installing the Android SDK

In order to actually develop and connect to our devices, we need to install the Android SDK. Having the SDK installed fulfills two primary requirements: first, it makes sure that we have the bulk of the latest drivers for recognizing devices; second, it gives us the Android Debug Bridge (ADB), the system used for actually connecting to and interacting with a device. The process of installing the Android SDK is given in the following steps: The latest version of the Android SDK can be found at http://developer.android.com/sdk/index.html, so open a web browser and go to the given site. Once there, scroll to the bottom and find the SDK Tools Only section.
This is where we can get just the SDK, which we need to make Android games with Unity, without dealing with the fancy fluff of the Android Studio. We need to select the .exe package with (Recommended) underneath it (as shown in the following screenshot): You will then be sent to a Terms and Conditions page. Read it if you prefer, but agree to it to continue. Then hit the Download button to start downloading the installer. Once it has finished downloading, start it up. Hit the first Next button and the installer will try to find an appropriate version of the JDK. You will come to a page that will notify you about not finding the JDK if you do not have it installed. If you skipped ahead and do not have the JDK installed, hit the Visit java.oracle.com button in the middle of the page and go back to the previous section for guidance on installing it. If you do have it, continue with the process. Hitting Next again will bring you to a page that will ask you about the person for whom you are installing the SDK . Select Install for anyone using this computer because the default install location is easier to get to for later purposes. Hit Next twice, followed by Install to install the SDK to the default location. Once this is done, hit Next and Finish to complete the installation of the Android SDK Manager. If Android SDK Manager does not start right away, start it up. Either way, give it a moment to initialize. The SDK Manager makes sure that we have the latest drivers, systems, and tools for developing with the Android platform. However, we have to actually install them first (which can be done from the following screen): By default, the SDK manager should select a number of options to install. If not, select the latest Android API (Android L (API 20) as of the time of writing this article), Android Support Library and Google USB Driver found in Extras. Be absolutely sure that Android SDK Platform-tools is selected. This will be very important later. It actually includes the tools that we need to connect to our device. Once everything is selected, hit Install packages at the bottom-right corner. The next screen is another set of license agreements. Every time a component is installed or updated through the SDK manager, you have to agree to the license terms before it gets installed. Accept all of the licenses and hit Install to start the process. You can now sit back and relax. It takes a while for the components to be downloaded and installed. Once this is all done, you can close it out. We have completed the process, but you should occasionally come back to it. Periodically checking the SDK manager for updates will make sure that you are using the latest tools and APIs. The installation of the Android SDK is now finished. Without it, we would be completely unable to do anything on the Android platform. Aside from the long wait to download and install components, this was a pretty easy installation. Installing Unity 3D Perform the following steps to install Unity: The latest version of Unity can be found at http://www.unity3d.com/unity/download. As of the time of writing this article, the current version is 5.0. Once it is downloaded, launch the installer and click on Next until you reach the Choose Components page, as shown in the following screenshot: Here, we are able to select the features of Unity installation:      Example Project: This is the current project built by Unity to show off some of its latest features. 
If you want to jump in early and take a look at what a complete Unity game can look like, leave this checked.      Unity Development Web Player: This is required if you plan on developing browser applications with Unity. As this article is focused on Android development, it is entirely optional. It is, however, a good one to check. You never know when you may need a web demo and since it is entirely free to develop for the web using Unity, there is no harm in having it.      MonoDevelop: It is a wise choice to leave this option unchecked. There is more detail in the next section, but it will suffice for now to say that it just adds an extra program for script editing that is not nearly as useful as it should be. Once you have selected or deselected your desired options, hit Next. If you wish to follow this article, note that we will uncheck MonoDevelop and leave the rest checked. Next is the location of installation. The default location works well, so hit Install and wait. This will take a couple of minutes, so sit back, relax, and enjoy your favorite beverage. Once the installation is complete, the option to run Unity will be displayed. Leave it checked and hit Finish. If you have never installed Unity before, you will be presented with a license activation page (as shown in the following screenshot): While Unity does provide a feature-rich, free version, in order to follow the entirety of this article, one is required to make use of some of the Unity Pro features. At https://store.unity3d.com, you have the ability to buy a variety of licenses. Once they are purchased, you will receive an e-mail containing your new license key. Enter that in the provided text field. If you are not ready to make a purchase, you have two alternatives. We will go over how to reset your license in the Building a simple application section later in the article. The alternatives are as follows:      The first alternative is that you can check the Activate the free version of Unity checkbox. This will allow you to use the free version of Unity. As discussed earlier, there are many reasons to choose this option. The most notable at the moment is cost.      Alternatively, you can select the Activate a free 30-day trial of Unity Pro option. Unity offers a fully functional, one-time installation and a free 30-day trial of  Unity Pro. This trial also includes the Android Pro add-on. Anything produced during the 30 days is completely yours, as if you had purchased a full Unity Pro license. They want you to see how great it is, so you will come back and make a purchase. The downside is that the Trial Version watermark will be constantly displayed at the corner of the game. After the 30 days, Unity will revert to the free version. This is a great option, should you choose to wait before making a purchase. Whatever your choice is, hit OK once you have made it. The next page simply asks you to log in with your Unity account. This will be the same account that you used to make your purchase. Just fill out the fields and hit OK. If you have not yet made a purchase, you can hit Create Account and have it ready for when you do make a purchase. The next page is a short survey on your development interests. Fill it out and hit OK or scroll straight to the bottom and hit Not right now. Finally, there is a thank you page. Hit Start using Unity. After a short initialization, the project wizard will open and we can start creating the next great game. However, there is still a bunch of work to do to connect the development device. 
So for now, hit the X button in the top-right corner to close the project wizard. We will cover how to create a new project in the Building a simple application section later on. We just completed installing Unity 3D. We also had to make a choice about licenses. The alternatives, though, will have a few shortcomings. You will either not have full access to all of the features or be limited to the length of the trial period while making due with a watermark in your games. The optional code editor Now a choice has to be made about code editors. Unity comes with a system called MonoDevelop. It is similar in many respects to Visual Studio. And like Visual Studio, it adds many extra files and much girth to a project, all of which it needs to operate. All this extra girth makes it take an annoying amount of time to start up, before one can actually get to the code. Technically, you can get away with a plain text editor, as Unity doesn't really care. This article recommends using Notepad++, which is found at http://notepad-plus-plus.org/download. It is free to use and it is essentially Notepad with code highlighting. There are several fancy widgets and add-ons for Notepad++ that add even greater functionality to it, but they are not necessary for following this article. If you choose this alternative, installing Notepad++ to the default location will work just fine. Connecting to a device Perhaps the most annoying step in working with Android devices is setting up the connection to your computer. Since there are so many different kinds of devices, it can get a little tricky at times just to have the device recognized by your computer. A simple device connection The simple device connection method involves changing a few settings and a little work in the command prompt. It may seem a little scary, but if all goes well you will be connected to your device shortly: The first thing you need to do is turn on the phone's Developer options. In the latest version of Android, these have been hidden. Go to your phone's settings page and find the About phone page. Next, you need to find the Build number information slot and tap it several times. At first, it will appear to do nothing, but it will shortly display that you need to press the button a few more times to activate the Developer options. The Android team did this so that the average user does not accidentally make changes. Now go back to your settings page and there should be a new Developer options page; select it now. This page controls all of the settings you might need to change while developing your applications. The only checkbox we are really concerned with checking right now is USB debugging. This allows us to actually detect our device from the development environment. If you are using Kindle, be sure to go into Security and turn on Enable ADB as well. There are several warning pop-ups that are associated with turning on these various options. They essentially amount to the same malicious software warnings associated with your computer. Applications with immoral intentions can mess with your system and get to your private information. All these settings need to be turned on if your device is only going to be used for development. However, as the warnings suggest, if malicious applications are a concern, turn them off when you are not developing. Next, open a command prompt on your computer. This can be done most easily by hitting your Windows key, typing cmd.exe, and then hitting Enter. We now need to navigate to the ADB commands. 
If you did not install the SDK to the default location, replace the path in the following commands with the path where you installed it. If you are running a 32-bit version of Windows and installed the SDK to the default location, type the following in the command prompt:

```
cd c:\program files\android\android-sdk\platform-tools
```

If you are running a 64-bit version, type the following in the command prompt:

```
cd c:\program files (x86)\android\android-sdk\platform-tools
```

Now, connect your device to your computer, preferably using the USB cable that came with it. Wait for your computer to finish recognizing the device; there should be a Device drivers installed type of message pop-up when it is done. The following command lets us see which devices are currently connected and recognized by the ADB system. Emulated devices will show up as well. Type the following in the command prompt:

```
adb devices
```

After a short pause for processing, the command prompt will display a list of attached devices along with their unique IDs. If this list now contains your device, congratulations! You have a developer-friendly device. If it is not completely developer-friendly, there is one more thing that you can try before things get tricky. Go to the top of your device and open your system notifications. There should be one that looks like the USB symbol. Selecting it will open the connection settings. There are a few options here, and by default Android connects the device as a Media Device. We need to connect our device as a Camera; the reason is the connection method used. Usually, this will allow your computer to connect.

We have completed our first attempt at connecting to our Android devices. For most, this should be all that you need to connect to your device. For some, this process is not quite enough. The next little section covers solutions for connecting trickier devices.

For trickier devices, there are a few general things that we can try; if these steps fail to connect your device, you may need to do some special research. Start by typing the following commands. These will restart the connection system and display the list of devices again:

```
adb kill-server
adb start-server
adb devices
```

If you are still not having any luck, try the following commands. These force an update and restart the connection system:

```
cd ../tools
android update adb
cd ../platform-tools
adb kill-server
adb start-server
adb devices
```

If your device is still not showing up, you have one of the most annoying and tricky devices. Check the manufacturer's website for data syncing and management programs. If you have had your device for quite some time, you have probably been prompted to install this more than once. If you have not already done so, install the latest version even if you never plan on using it. The point is to obtain the latest drivers for your device, and this is the easiest way. Restart the connection system again using the first set of commands and cross your fingers! If you are still unable to connect, the best professional recommendation that can be made is to google for the solution to your problem. Conducting a search for your device's brand with adb at the end should turn up a step-by-step tutorial specific to your device in the first couple of results. Another excellent resource for finding out all about the nitty-gritties of Android devices can be found at http://www.xda-developers.com/.
Some of the devices that you will encounter while developing will not connect easily. We just covered some quick steps and managed to connect these devices. If we could have covered the processes for every device, we would have. However, the variety of devices is just too large and the manufacturers keep making more. Unity Remote Unity Remote is a great application created by the Unity team. It allows developers to connect their Android-powered devices to the Unity Editor and provide mobile inputs for testing. This is a definite must for any aspiring Unity and Android developer. If you are using a non-Amazon device, acquiring Unity Remote is quite easy. At the time of writing this article, it could be found on Google Play at https://play.google.com/store/apps/details?id=com.unity3d.genericremote. It is free and does nothing but connects your Android device to the Unity Editor, so the app permissions are negligible. In fact, there are currently two versions of Unity Remote. To connect to Unity 4.5 and later versions, we must use Unity Remote 4. If, however, you like the ever-growing Amazon market or seek to target Amazon's line of Android devices, adding Unity Remote will become a little trickier. First, you need to download a special Unity Package from the Unity Asset Store. It can be found at https://www.assetstore.unity3d.com/en/#!/content/18106. You will need to import the package into a fresh project and build it from there. Import the package by going to the top of Unity, navigate to Assets | Import Package | Custom Package, and then navigate to where you saved it. In the next section, we will build a simple application and put it on our device. After you have imported the package, follow along from the step where we open the Build Settings window, replacing the simple application with the created APK. Building a simple application We are now going to create a simple Hello World application. This will familiarize you with the Unity interface and how to actually put an application on your device. Hello World To make sure everything is set up properly, we need a simple application to test with and what better to do that with than a Hello World application? To build the application, perform the following steps: The first step is pretty straightforward and simple: start Unity. If you have been following along so far, once this is done you should see a screen resembling the next screenshot. As the tab might suggest, this is the screen through which we open our various projects. Right now, though, we are interested in creating one; so, select New Project from the top-right corner and we will do just that: Use the Project name* field to give your project a name; Ch1_HelloWorld fits well for a project name. Then use the three dots to the right of the Location* field to choose a place on your computer to put the new project. Unity will create a new folder in this location, based on the project name, to store your project and all of its related files: For now, we can ignore the 3D and 2D buttons. These let us determine the defaults that Unity will use when creating a new scene and importing new assets. We can also ignore the Asset packages button. This lets you select from the bits of assets and functionality that is provided by Unity. They are free for you to use in your projects. Hit the Create Project button, and Unity will create a brand-new project for us. 
The following screenshot shows the windows of the Unity Editor: The default layout of Unity contains a decent spread of windows that are needed to create a game:      Starting from the left-hand side, Hierarchy contains a list of all the objects that currently exist in our scene. They are organized alphabetically and are grouped under parent objects.      Next to this is the Scene view. This window allows us to edit and arrange objects in the 3D space. In the top left-hand side, there are two groups of buttons. These affect how you can interact with the Scene view.      The button on the far left that looks like a hand lets you pan around when you click and drag with the mouse.      The next button, the crossed arrows, lets you move objects around. Its behavior and the gizmo it provides will be familiar if you have made use of any modeling programs.      The third button changes the gizmo to rotation. It allows you to rotate objects.      The fourth button is for scale. It changes the gizmo as well.      The fifth button lets you adjust the position and the scale based on the bounding box of the object and its orientation relative to how you are viewing it.      The second to last button toggles between Pivot and Center. This will change the position of the gizmo used by the last three buttons to be either at the pivot point of the selected object, or at the average position point of all the selected objects.      The last button toggles between Local and Global. This changes whether the gizmo is orientated parallel with the world origin or rotated with the selected object.      Underneath the Scene view is the Game view. This is what is currently being rendered by any cameras in the scene. This is what the player will see when playing the game and is used for testing your game. There are three buttons that control the playback of the Game view in the upper-middle section of the window.      The first is the Play button. It toggles the running of the game. If you want to test your game, press this button.      The second is the Pause button. While playing, pressing this button will pause the whole game, allowing you to take a look at the game's current state.      The third is the Step button. When paused, this button will let you progress through your game one frame at a time.      On the right-hand side is the Inspector window. This displays information about any object that is currently selected.      In the bottom left-hand side is the Project window. This displays all of the assets that are currently stored in the project.      Behind this is Console. It will display debug messages, compile errors, warnings, and runtime errors. At the top, underneath Help, is an option called Manage License.... By selecting this, we are given options to control the license. The button descriptions cover what they do pretty well, so we will not cover them in more detail at this point. The next thing we need to do is connect our optional code editor. At the top, go to Edit and then click on Preferences..., which will open the following window: By selecting External Tools on the left-hand side, we can select other software to manage asset editing. If you do not want to use MonoDevelop, select the drop-down list to the right of External Script Editor and navigate to the executable of Notepad++, or any other code editor of your choice. Your Image application option can also be changed here to Adobe Photoshop or any other image-editing program that you prefer, in the same way as the script editor. 
If you installed the Android SDK to the default location, do not worry about it. Otherwise, click on Browse... and find the android-sdk folder.

Now, for the actual creation of this application, right-click inside your Project window. From the menu that pops up, select Create and then C# Script. Type in a name for the new script (HelloWorld will work well) and hit Enter twice: once to confirm the name and once to open it.

In this article, this will be a simple Hello World application. Unity supports C#, JavaScript, and Boo as scripting languages; for consistency, this article will be using C#. If you wish to use JavaScript for your scripts instead, copies of all of the projects can be found with the other resources for this article, under a _JS suffix for JavaScript. Every script that is going to be attached to an object extends the functionality of the MonoBehaviour class. JavaScript does this automatically, but C# scripts must define it explicitly. However, as you can see from the default code in the script, we do not have to worry about setting this up initially; it is done automatically. Extending the MonoBehaviour class lets our scripts access various values of the game object, such as its position, and lets the system automatically call certain functions during specific events in the game, such as the Update cycle and the GUI rendering.

For now, we will delete the Start and Update functions that Unity insists on including in every new script. Replace them with a bit of code that simply renders the words Hello World in the top-left corner of the screen; once that is in place, you can close the script and return to Unity:

```csharp
public void OnGUI() {
    GUILayout.Label("Hello World");
}
```

Drag the HelloWorld script from the Project window and drop it on the Main Camera object in the Hierarchy window. Congratulations! You have just added your first bit of functionality to an object in Unity. If you select Main Camera in the Hierarchy, the Inspector will display all of the components attached to it; at the bottom of the list is your brand-new HelloWorld script.

Before we can test it, we need to save the scene. To do this, go to File at the top and select Save Scene. Give it the name HelloWorld and hit Save. A new icon will appear in your Project window, indicating that you have saved the scene. You are now free to hit the Play button in the upper-middle section of the editor and witness the magic of Hello World.

We now get to build the application. At the top, select File and then click on Build Settings.... By default, the target platform is PC. Under Platform, select Android and hit Switch Platform in the bottom-left corner of the Build Settings window. Underneath the Scenes In Build box, there is a button labeled Add Current; click on it to add our currently opened scene to the build. Only scenes that are in this list and checked will be added to the final build of your game. The scene with the number zero next to it will be the first scene that is loaded when the game starts.

There is one last group of things to change before we can hit the Build button. Select Player Settings... at the bottom of the Build Settings window. The Inspector window will open Player Settings (shown in the following screenshot) for the application. From here, we can change the splash screen, icon, screen orientation, and a handful of other technical options: At the moment, there are only a few options that we care about. At the top, Company Name is the name that will appear under the information about the application.
Product Name is the name that will appear underneath the icon on your Android device. You can largely set these to anything you want, but they do need to be set immediately. The important setting is Bundle Identifier, underneath Other Settings and Identification. This is the unique identifier that singles out your application from all other applications on the device. The format is com.CompanyName.ProductName, and it is a good practice to use the same company name across all of your products. For this article, we will be using com.TomPacktAndBegin.Ch1.HelloWorld for Bundle Identifier and opt to use an extra dot (period) for the organization. Go to File and then click on Save again. Now you can hit the Build button in the Build Settings window. Pick a location to save the file, and a file name ( Ch1_HelloWorld.apk works well). Be sure to remember where it is and hit Save. If during the build process Unity complains about where the Android SDK is, select the android-sdk folder inside the location where it was installed. The default would be C:\Program Files\Android\android-sdk for a 32-bit Windows system and C:\Program Files (x86)\Android\android-sdk for a 64-bit Windows system. Once loading is done, which should not be very long, your APK will have been made and we are ready to continue. We are through with Unity for this article. You can close it down and open a command prompt. Just as we did when we were connecting our devices, we need to navigate to the platform-tools folder in order to connect to our device. If you installed the SDK to the default location, use:      For a 32-bit Windows system: cd c:\program files\android\android-sdk\platform-tools      For a 64-bit Windows system: cd c:\program files (x86)\android\android-sdk\platform-tools Double-check to make sure that the device is connected and recognized by using the following command: adb devices Now we will install the application. This command tells the system to install an application on the connected device. The -r indicates that it should override if an application is found with the same Bundle Identifier as the application we are trying to install. This way you can just update your game as you develop, rather than uninstalling before installing the new version each time you need to make an update. The path to the .apk file that you wish to install is shown in quotes as follows: adb install -r "c:\users\tom\desktop\packt\book\ch1_helloworld.apk" Replace it with the path to your APK file; capital letters do not matter, but be sure to have all the correct spacing and punctuations. If all goes well, the console will display an upload speed when it has finished pushing your application to the device and a success message when it has finished the installation. The most common causes for errors at this stage are not being in the platform-tools folder when issuing commands and not having the correct path to the .apk file, surrounded by quotes. Once you have received your success message, find the application on your phone and start it up. Now, gaze in wonder at your ability to create Android applications with the power of Unity. We have created our very first Unity and Android application. Admittedly, it was just a simple Hello World application, but that is how it always starts. This served very well for double-checking the device connection and for learning about the build process without all the clutter from a game. If you are looking for a further challenge, try changing the icon for the application. 
It is a fairly simple procedure that you will undoubtedly want to perform as your game develops. How to do this was mentioned earlier in this section, but, as a reminder, take a look at Player Settings. Also, you will need to import an image. Take a look under Assets, in the menu bar, to know how to do this. Summary There were a lot of technical things in this article. First, we discussed the benefits and possibilities when using Unity and Android. That was followed by a whole lot of installation; the JDK, the Android SDK, Unity 3D, and Unity Remote. We then figured out how to connect to our devices through the command prompt. Our first application was quick and simple to make. We built it and put it on a device. Resources for Article: Further resources on this subject: What's Your Input? [article] That's One Fancy Hammer! [article] Saying Hello to Unity and Android [article]
The physics engine

Packt
04 Sep 2014
9 min read
In this article by Martin Varga, the author of Learning AndEngine, we will look at the physics in AndEngine. (For more resources related to this topic, see here.) AndEngine uses the Android port of the Box2D physics engine. Box2D is very popular in games, including the most popular ones such as Angry Birds, and many game engines and frameworks use Box2D to simulate physics. It is free, open source, and written in C++, and it is available on multiple platforms. AndEngine offers a Java wrapper API for the C++ Box2D backend, and therefore, no prior C++ knowledge is required to use it. Box2D can simulate 2D rigid bodies. A rigid body is a simplification of a solid body with no deformations. Such objects do not exist in reality, but if we limit the bodies to those moving much slower than the speed of light, we can say that solid bodies are also rigid. Box2D uses real-world units and works with physics terms. A position in a scene in AndEngine is defined in pixel coordinates, whereas in Box2D, it is defined in meters. AndEngine uses a pixel to meter conversion ratio. The default value is 32 pixels per meter. Basic terms Box2D works with something we call a physics world. There are bodies and forces in the physics world. Every body in the simulation has the following few basic properties: Position Orientation Mass (in kilograms) Velocity (in meters per second) Torque (or angular velocity in radians per second) Forces are applied to bodies and the following Newton's laws of motion apply: The first law, An object that is not moving or moving with constant velocity will stay that way until a force is applied to it, can be tweaked a bit The second law, Force is equal to mass multiplied by acceleration, is especially important to understand what will happen when we apply force to different objects The third law, For every action, there is an equal and opposite reaction, is a bit flexible when using different types of bodies Body types There are three different body types in Box2D, and each one is used for a different purpose. The body types are as follows: Static body: This doesn't have velocity and forces do not apply to a static body. If another body collides with a static body, it will not move. Static bodies do not collide with other static and kinematic bodies. Static bodies usually represent walls, floors, and other immobile things. In our case, they will represent platforms which don't move. Kinematic body: This has velocity, but forces don't apply to it. If a kinematic body is moving and a dynamic body collides with it, the kinematic body will continue in its original direction. Kinematic bodies also do not collide with other static and kinematic bodies. Kinematic bodies are useful to create moving platforms, which is exactly how we are going to use them. Dynamic body: A dynamic body has velocity and forces apply to it. Dynamic bodies are the closest to real-world bodies and they collide with all types of bodies. We are going to use a dynamic body for our main character. It is important to understand the consequences of choosing each body type. When we define gravity in Box2D, it will pull all dynamic bodies to the direction of the gravitational acceleration, but static bodies will remain still and kinematic bodies will either remain still or keep moving in their set direction as if there was no gravity. Fixtures Every body is composed of one or more fixtures. 
Each fixture has the following four basic properties: Shape: In Box2D, fixtures can be circles, rectangles, and polygons Density: This determines the mass of the fixture Friction: This plays a major role in body interactions Elasticity: This is sometimes called restitution and determines how bouncy the object is There are also special properties of fixtures such as filters and filter categories and a single Boolean property called sensor. Shapes The position of fixtures and their shapes in the body determine the overall shape, mass, and the center of gravity of the body. The upcoming figure is an example of a body that consists of three fixtures. The fixtures do not need to connect. They are part of one body, and that means their positions relative to each other will not change. The red dot represents the body's center of gravity. The green rectangle is a static body and the other three shapes are part of a dynamic body. Gravity pulls the whole body down, but the square will not fall. Density Density determines how heavy the fixtures are. Because Box2D is a two-dimensional engine, we can imagine all objects to be one meter deep. In fact, it doesn't matter as long as we are consistent. There are two bodies, each with a single circle fixture, in the following figure. The left circle is exactly twice as big as the right one, but the right one has double the density of the first one. The triangle is a static body and the rectangle and the circles are dynamic, creating a simple scale. When the simulation is run, the scales are balanced. Friction Friction defines how slippery a surface is. A body can consist of multiple fixtures with different friction values. When two bodies collide, the final friction is calculated from the point of collision based on the colliding fixtures. Friction can be given a value between 0 and 1, where 0 means completely frictionless and 1 means super strong friction. Let's say we have a slope which is made of a body with a single fixture that has a friction value of 0.5, as shown in the following figure: The other body consists of a single square fixture. If its friction is 0, the body slides very fast all the way down. If the friction is more than 0, then it would still slide, but slow down gradually. If the value is more than 0.25, it would still slide but not reach the end. Finally, with friction close to 1, the body will not move at all. Elasticity The coefficient of restitution is a ratio between the speeds before and after a collision, and for simplicity, we can call the material property elasticity. In the following figure, there are three circles and a rectangle representing a floor with restitution 0, which means not bouncy at all. The circles have restitutions (from left to right) of 1, 0.5, and 0. When this simulation is started, the three balls will fall with the same speed and touch the floor at the same time. However, after the first bounce, the first one will move upwards and climb all the way to the initial position. The middle one will bounce a little and keep bouncing less and less until it stops. The right one will not bounce at all. The following figure shows the situation after the first bounce: Sensor When we need a fixture that detects collisions but is otherwise not affected by them and doesn't affect other fixtures and bodies, we use a sensor. A goal line in a 2D air hockey top-down game is a good example of a sensor. We want it to detect the disc passing through, but we don't want it to prevent the disc from entering the goal. 
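To tie the fixture properties together, here is a hedged sketch of how a dynamic body with a single box fixture is typically created through AndEngine's Box2D extension. The class and method names come from the AndEngine physics-box2d extension; the physicsWorld and playerSprite arguments are assumed to have been created elsewhere during scene setup, and the numeric values are arbitrary examples.

```java
import org.andengine.entity.sprite.Sprite;
import org.andengine.extension.physics.box2d.PhysicsConnector;
import org.andengine.extension.physics.box2d.PhysicsFactory;
import org.andengine.extension.physics.box2d.PhysicsWorld;

import com.badlogic.gdx.physics.box2d.Body;
import com.badlogic.gdx.physics.box2d.BodyDef.BodyType;
import com.badlogic.gdx.physics.box2d.FixtureDef;

public class PhysicsSetupExample {

    public static Body attachDynamicBody(final PhysicsWorld physicsWorld,
                                          final Sprite playerSprite) {
        // Density 1.0, elasticity (restitution) 0.2, friction 0.5.
        final FixtureDef fixtureDef = PhysicsFactory.createFixtureDef(1.0f, 0.2f, 0.5f);

        // A dynamic body reacts to gravity and collides with all other body types.
        final Body body = PhysicsFactory.createBoxBody(
                physicsWorld, playerSprite, BodyType.DynamicBody, fixtureDef);

        // Keep the sprite's position and rotation in sync with the simulated body.
        physicsWorld.registerPhysicsConnector(
                new PhysicsConnector(playerSprite, body, true, true));
        return body;
    }
}
```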
The physics world The physics world is the whole simulation including all bodies with their fixtures, gravity, and other settings that influence the performance and quality of the simulation. Tweaking the physics world settings is important for large simulations with many objects. These settings include the number of steps performed per second and the number of velocity and position interactions per step. The most important setting is gravity, which is determined by a vector of gravitational acceleration. Gravity in Box2D is simplified, but for the purpose of games, it is usually enough. Box2D works best when simulating a relatively small scene where objects are a few tens of meters big at most. To simulate, for example, a planet's (radial) gravity, we would have to implement our own gravitational force and turn the Box2D built-in gravity off. Forces and impulses Both forces and impulses are used to make a body move. Gravity is nothing else but a constant application of a force. While it is possible to set the position and velocity of a body in Box2D directly, it is not the right way to do it, because it makes the simulation unrealistic. To move a body properly, we need to apply a force or an impulse to it. These two things are almost the same. While forces are added to all the other forces and change the body velocity over time, impulses change the body velocity immediately. In fact, an impulse is defined as a force applied over time. We can imagine a foam ball falling from the sky. When the wind starts blowing from the left, the ball will slowly change its trajectory. Impulse is more like a tennis racket that hits the ball in flight and changes its trajectory immediately. There are two types of forces and impulses: linear and angular. Linear makes the body move left, right, up, and down, and angular makes the body spin around its center. Angular force is called torque. Linear forces and impulses are applied at a given point, which will have different effects based on the position. The following figure shows a simple body with two fixtures and quite high friction, something like a carton box on a carpet. First, we apply force to the center of the large square fixture. When the force is applied, the body simply moves on the ground to the right a little. This is shown in the following figure: Second, we try to apply force to the upper-right corner of the large box. This is shown in the following figure: Using the same force at a different point, the body will be toppled to the right side. This is shown in the following figure:
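To close out the discussion of forces and impulses, the following fragment shows roughly how they are applied through the Box2D Body API that AndEngine exposes. Treat it as a sketch only: the body is assumed to be a dynamic body created as shown earlier, the numbers are arbitrary, and depending on the bundled Box2D version these calls may take an extra wake parameter.

```java
import com.badlogic.gdx.math.Vector2;
import com.badlogic.gdx.physics.box2d.Body;

public class ForceExample {

    // Gradually pushes the body to the right, like wind acting on a foam ball.
    public static void applyWind(final Body body) {
        body.applyForce(new Vector2(5.0f, 0.0f), body.getWorldCenter());
    }

    // Changes the velocity immediately, like a tennis racket hitting the ball.
    public static void hitWithRacket(final Body body) {
        body.applyLinearImpulse(new Vector2(0.0f, 8.0f), body.getWorldCenter());
    }

    // Torque spins the body around its center instead of moving it.
    public static void spin(final Body body) {
        body.applyTorque(2.0f);
    }
}
```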
Introspecting Maya, Python, and PyMEL

Packt
23 Jul 2014
8 min read
(For more resources related to this topic, see here.) Maya and Python are both excellent and elegant tools that can together achieve amazing results. And while it may be tempting to dive in and start wielding this power, it is prudent to understand some basic things first. In this article, we will look at Python as a language, Maya as a program, and PyMEL as a framework. We will begin by briefly going over how to use the standard Python interpreter, the Maya Python interpreter, the Script Editor in Maya, and your Integrated Development Environment (IDE) or text editor in which you will do the majority of your development. Our goal for the article is to build a small library that can easily link us to documentation about Python and PyMEL objects. Building this library will illuminate how Maya, Python and PyMEL are designed, and demonstrate why PyMEL is superior to maya.cmds. We will use the powerful technique of type introspection to teach us more about Maya's node-based design than any Hypergraph or static documentation can. Creating your library There are generally three different modes you will be developing in while programming Python in Maya: using the mayapy interpreter to evaluate short bits of code and explore ideas, using your Integrated Development Environment to work on the bulk of the code, and using Maya's Script Editor to help iterate and test your work. In this section, we'll start learning how to use all three tools to create a very simple library. Using the interpreter The first thing we must do is find your mayapy interpreter. It should be next to your Maya executable, named mayapy or mayapy.exe. It is a Python interpreter that can run Python code as if it were being run in a normal Maya session. When you launch it, it will start up the interpreter in interactive mode, which means you enter commands and it gives you results, interactively. The >>> and ... characters in code blocks indicate something you should enter at the interactive prompt; the code listing in the article and your prompt should look basically the same. In later listings, long output lines will be elided with ... to save on space. Start a mayapy process by double clicking or calling it from the command line, and enter the following code: >>> print 'Hello, Maya!' Hello, Maya! >>> def hello(): ... return 'Hello, Maya!' ... >>> hello() 'Hello, Maya!' The first statement prints a string, which shows up under the prompting line. The second statement is a multiline function definition. The ... indicates the line is part of the preceding line. The blank line following the ... indicates the end of the function. For brevity, we will leave out empty ... lines in other code listings. After we define our hello function, we invoke it. It returns the string "Hello, Maya!", which is printed out beneath the invocation. Finding a place for our library Now, we need to find a place to put our library file. In order for Python to load the file as a module, it needs to be on some path where Python can find it. We can see all available paths by looking at the path list on the sys module. >>> import sys >>> for p in sys.path: ... 
print p C:Program FilesAutodeskMaya2013binpython26.zip C:Program FilesAutodeskMaya2013PythonDLLs C:Program FilesAutodeskMaya2013Pythonlib C:Program FilesAutodeskMaya2013Pythonlibplat-win C:Program FilesAutodeskMaya2013Pythonliblib-tk C:Program FilesAutodeskMaya2013bin C:Program FilesAutodeskMaya2013Python C:Program FilesAutodeskMaya2013Pythonlibsite-packages A number of paths will print out; I've replicated what's on my Windows system, but yours will almost definitely be different. Unfortunately, the default paths don't give us a place to put custom code. They are application installation directories, which we should not modify. Instead, we should be doing our coding outside of all the application installation directories. In fact, it's a good practice to avoid editing anything in the application installation directories entirely. Choosing a development root Let's decide where we will do our coding. To be concise, I'll choose C:mayapybookpylib to house all of our Python code, but it can be anywhere. You'll need to choose something appropriate if you are on OSX or Linux; we will use ~/mayapybook/pylib as our path on these systems, but I'll refer only to the Windows path except where more clarity is needed. Create the development root folder, and inside of it create an empty file named minspect.py. Now, we need to get C:mayapybookpylib onto Python's sys.path so it can be imported. The easiest way to do this is to use the PYTHONPATH environment variable. From a Windows command line you can run the following to add the path, and ensure it worked: > set PYTHONPATH=%PYTHONPATH%;C:mayapybookpylib > mayapy.exe >>> import sys >>> 'C:\mayapybook\pylib' in sys.path True >>> import minspect >>> minspect <module 'minspect' from '...minspect.py'> The following is the equivalent commands on OSX or Linux: $ export PYTHONPATH=$PYTHONPATH:~/mayapybook/pylib $ mayapy >>> import sys >>> '~/mayapybook/pylib' in sys.path True >>> import minspect >>> minspect <module 'minspect' from '.../minspect.py'> There are actually a number of ways to get your development root onto Maya's path. The option presented here (using environment variables before starting Maya or mayapy) is just one of the more straightforward choices, and it works for mayapy as well as normal Maya. Calling sys.path.append('C:\mayapybook\pylib') inside your userSetup.py file, for example, would work for Maya but not mayapy (you would need to use maya.standalone.initialize to register user paths, as we will do later). Using set or export to set environment variables only works for the current process and any new children. If you want it to work for unrelated processes, you may need to modify your global or user environment. Each OS is different, so you should refer to your operating system's documentation or a Google search. Some possibilities are setx from the Windows command line, editing /etc/environment in Linux, or editing /etc/launchd.conf on OS X. If you are in a studio environment and don't want to make changes to people's machines, you should consider an alternative such as using a script to launch Maya which will set up the PYTHONPATH, instead of launching the maya executable directly. Creating a function in your IDE Now it is time to use our IDE to do some programming. We'll start by turning the path printing code we wrote at the interactive prompt into a function in our file. 
Open C:mayapybookpylibminspect.py in your IDE and type the following code: import sys def syspath(): print 'sys.path:' for p in sys.path: print ' ' + p Save the file, and bring up your mayapy interpreter. If you've closed down the one from the last session, make sure C:mayapybookpylib (or whatever you are using as your development root) is present on your sys.path or the following code will not work! See the preceding section for making sure your development root is on your sys.path. >>> import minspect >>> reload(minspect) <module 'minspect' from '...minspect.py'> >>> minspect.syspath() C:Program FilesAutodeskMaya2013binpython26.zip C:Program FilesAutodeskMaya2013PythonDLLs C:Program FilesAutodeskMaya2013Pythonlib C:Program FilesAutodeskMaya2013Pythonlibplat-win C:Program FilesAutodeskMaya2013Pythonliblib-tk C:Program FilesAutodeskMaya2013bin C:Program FilesAutodeskMaya2013Python C:Program FilesAutodeskMaya2013Pythonlibsite-packages First, we import the minspect module. It may already be imported if this was an old mayapy session. That is fine, as importing an already-imported module is fast in Python and causes no side effects. We then use the reload function, which we will explore in the next section, to make sure the most up-to-date code is loaded. Finally, we call the syspath function, and its output is printed. Your actual paths will likely vary. Reloading code changes It is very common as you develop that you'll make changes to some code and want to immediately try out the changed code without restarting Maya or mayapy. You can do that with Python's built-in reload function. The reload function takes a module object and reloads it from disk so that the new code will be used. When we jump between our IDE and the interactive interpreter (or the Maya application) as we did earlier, we will usually reload the code to see the effect of our changes. I will usually write out the import and reload lines, but occasionally will only mention them in text preceding the code. Keep in mind that reload is not a magic bullet. When you are dealing with simple data and functions as we are here, it is usually fine. But as you start building class hierarchies, decorators, and other things that have dependencies or state, the situation can quickly get out of control. Always test your code in a fresh version of Maya before declaring it done to be sure it does not have some lingering defect hidden by reloading. Though once you are a master Pythonista you can ignore these warnings and figure out how to reload just about anything!
Blender 2.5: Simulating Manufactured Materials

Packt
21 Apr 2011
10 min read
Blender 2.5 Materials and Textures Cookbook Over 80 great recipes to create life-like Blender objects Creating metals Of all material surfaces, metals must be one of the most popular to appear in 3D design projects. Metals tend to be visually pleasing with brightly colored surfaces that will gleam when polished. They also exhibit fascinating surface detail due to oxidization and age-related weathering. Being malleable, these surfaces will dent and scratch to display their human interaction. All these issues mean that man-made metal objects are great objects to design outstanding material and texture surfaces within Blender. It is possible in Blender to design metal surfaces using quite simple material setups. Although it may seem logical to create complex node-based solutions to capture all the complexity apparent within a metal surface, the standard Blender material arrangement can achieve all that is necessary to represent almost any metal. Metals have their own set of unique criteria that need application within a material simulation. These include: Wide specularity due to the nature of metals being polished or dulled by interaction Unique bump maps, either representing the construction, and/ or as a result of interaction Reflection – metals, more than many other surfaces, can display reflection. Normally, this can be simulated by careful use of the specular settings in simulation but, occasionally, we will need to have other objects and environments reflected in a metal surface. Blender has a vast array of tools to help you simulate almost any metal surface. Some of these mimic real-world metal tooling effects like anisotropic blend types to simulate brushed metal surfaces, or blurred reflections sometimes seen on sandblasted metal surfaces. All these techniques, while producing realistic metal effects, tend to be very render intensive. We will work with some of the simpler tools in Blender to not only produce realistic results but also conserve memory usage and render times. We will start with a simple but pleasing copper surface. Copper has the unique ability to be used in everything from building materials, through cooking, to money. Keeping up with a building theme, we will create a copper turret material of the type of large copper usage that might be seen on anything from a fairy castle to a modern-day embellishment of a corporate building. One of the pleasant features of such a large structural use of copper is its surface color. A brown/orange predominant color, when new, is changed to a complementary color, light green/blue when oxidized. This oxidization also varies the specularity of its surface and in combination with its man-made construction, using plating creates a very pleasing material. Getting ready To prepare for this recipe, you will need to create a simple mesh to represent a copper-plated turret-like roof. You can be as extravagant as you wish in designing an interesting shape. Give the mesh a few curves, and variations in scale, so that you can see how the textures deform to the shape. The overall scale of this should be about 2.5 times larger than the default cube and about 1.5 times in width at its widest point. If you would prefer to use the same mesh as used in the recipe, you can download it as a pre-created blendfile from the Packtpub website. If you create a turret-like object yourself, ensure that all the normals are facing outwards. 
You can do this by selecting all of the vertices in edit mode, and then clicking on Normals | Recalculate in the Tools Shelf. Also, set the surface shading to Smooth in the same menu. Depending on how many vertices you use to create your mesh, you may want to add a Sub-surface modifier to ensure that the model renders with a nice smooth surface on which we will create the copper-plating material simulation. In the scene used in the example blendfile, three lights have been used. A Sun type lamp at location X 7.321, Y 1.409, Z 11.352 with a color of white and Energy of 1.00. However, it should only be set to provide specular lighting. It is positioned to create a nice specular reflection off the curved part of the turret. A Point lamp type set at X 9.286, Y -3.631, Z 5.904 with a color of white and Energy of 1.00. A Hemi type lamp at location X -9.208, Y 6.059, Z 5.904 with a color of R 1.00, G 0.97, B 0.66 and an Energy of 1.00. These will help simulate daylight and a nice specular reflection as you might see on a bright day. Now would be a good time to save your work. If you have downloaded the pre-created blendfile, or produced one yourself, save it with an incremented filename as copper-turret-01.blend. It will also be necessary for you to download three images that will provide a color map, a bump map, and a specular map for the plated surface of our turret. They are simple grayscale images that are relatively easily created in a paint package. Essentially, one image is a tiled collection of metal plates with some surface detail, and the second is derived from it by increasing the contrast; this will be used as a specularity map. The third has the same outline as each tile edge but with simple blends from black to white, and will provide a bump map to give the general slope of each metal plate. All three are available for download separately from the Packtpub website as: Chapt-02/textures/plating.png, Chapt-02/textures/plating-bump-1.png, and Chapt-02/textures/plating-spec-pos.png. Once downloaded, save these files into a textures subdirectory below where you have saved the blendfile.
How to do it... We are going to create the effect of plating on the turret object, tiling an image around its surface to make it look as though it has been fashioned by master coppersmiths decades ago. Open copper-turret-01.blend. This file currently has no materials or textures associated with it. With your turret mesh selected, create a new material in the Materials panel. Name your new material copper-roof. Change the Diffuse color to R 1.00, G 0.50, B 0.21. You can use the default diffuse shading type, Lambert. Set the Specular color to R 1.00, G 0.93, B 0.78 and the type to Wardiso with Intensity 0.534 and Slope 0.300. That's the general color set for our material; we now need to create some textures to add the magic. Move over to the Texture panel and select the first texture slot. Create a new texture of type Image or Movie, and name it color-map. From the Image tab, open the image plating.png that should be in the textures subfolder where you saved the blendfile. This is a grayscale image composed from a number of photographs with grime maps applied within a paint package. Each plate has been scaled and repositioned to produce a random-looking but tileable texture. Creating such textures is not a quick process; however, the time spent in producing a good image will make your materials look so much better.
Under the Mapping tab, select Coordinates of type Generated and Projection of type Tube. Under Image Mapping, select Extension of type Repeat, and set the Repeat values to X 3 and Y 2. This will repeat the texture three times around the circumference of the turret and two times up its height. In the Influence tab, select Diffuse/Color and set it to 0.500. Also, set Geometry/Normal to 5.00. Finally, select Blend type Multiply, RGB to Intensity, and set the color to a nice bright orange with R 0.94, G 0.56, and B 0.00. Save your work as copper-turret-02.blend, and perform a test render. If necessary, you can perform a partial render of just one area of your camera view by using the Shift + B shortcut and dragging a border around just that area of the camera view. An orange-dashed border will show what area of the image will be rendered. If you also set the Crop selector in the Render panel under Dimensions, it will only render that bordered area and not the black un-rendered portion. You should see that both the color and bump have produced a subtle change in the appearance of the copper turret simulation. However, the bump map is all rather even, with each plate looking as though it is the same thickness as its neighbors rather than one laid on top of another. Time to employ another bump map to create that overlapped look. With the turret object selected, move to the Texture panel and, in the next free texture slot, create a new texture of type Image or Movie, and name it plate-bumps. In the Image tab, open the image plating-bump-1.png. Under the Image Mapping tab, select Extension of type Repeat and set the Repeat to X 3, Y 2. In the Mapping tab, ensure the Coordinates are set to Generated and the Projection to Tube. Finally, under the Influence tab, have only Geometry/Normal set, with a value of 10.000. Save your work, naming the file copper-turret-03.blend, and perform another test render. Renders of this model will be quite quick, so don't be afraid to render regularly to examine your progress. Your work should now have a more pleasing sloped, tiled copper look. However, the surface is still a little dull. Let us add some weather-beaten damage to help bind the images tiled on the surface to the structure below. With the turret object selected, choose the next free texture slot in the Texture panel. Create a new texture of type Clouds and name it beaten-bumps. In the Clouds tab, set Grayscale and Noise/Hard, and set the Basis to Blender Original with Size 0.11 and Depth 6. Under the Mapping tab, set the Coordinates to Generated and the Projection to Tube. Below Projection, change the X, Y, Z mapping to Z, Y, X. Finally, under the Influence tab, have only Geometry/Normal set, with a value of -0.200. Save your work again, incrementing the filename to copper-turret-04.blend. A test render at this point will not produce an enormous difference from the previous render, but the effect is there. If you examine each stage render of the recipe so far, you will see the subtle but important changes the textures have made.
They have been designed to give a tileable texture so that the effect can be repeated across the surface without producing discernible repeats. Producing such images can be time-consuming, but a good image map will make your materials much more believable. Occasionally, it will be possible to combine color, bump, and specularity maps into a single image, but try to avoid this as it will undoubtedly lead to unnatural-looking metals. Sometimes, the simplest of bump maps can make all the difference to a material. In the middle image shown previously, we see a series of simple blends marking the high and low points of overlapping copper plates. It works in a very similar way to the recipe on slate roof tiles. However, it is also being used in conjunction with the plating image that supplies the color and just a little bump. We have also supplied a third bump map using a procedural texture, Clouds. Procedurals have the effect of creating random variation across a surface, so here it is used to help tie together and break up any repeats formed by the tiled images. Using multiple bump maps is an extremely efficient way of adding subtle detail to any material, and here you can almost see the builders of this turret leaning against it to hammer down the rivets.
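For readers who prefer to script repetitive material setups, the following is a minimal Python (bpy) sketch, not part of the original recipe, that mirrors the first texture slot described above. It assumes a Blender 2.5/2.6 build, that plating.png sits in a textures folder next to the blend file, and that the turret is the active object; names such as copper-roof and color-map simply follow the recipe, and the RNA identifiers (for example 'WARDISO', 'ORCO', 'TUBE') are assumed from the 2.5/2.6 API and worth double-checking in the Python console.

import bpy

# Build the copper-roof material with the recipe's diffuse/specular settings.
mat = bpy.data.materials.new("copper-roof")
mat.diffuse_color = (1.0, 0.50, 0.21)
mat.specular_color = (1.0, 0.93, 0.78)
mat.specular_shader = 'WARDISO'
mat.specular_intensity = 0.534
mat.specular_slope = 0.3

# Load the plating image and tile it 3 x 2 around the turret.
img = bpy.data.images.load(bpy.path.abspath("//textures/plating.png"))
tex = bpy.data.textures.new("color-map", type='IMAGE')
tex.image = img
tex.extension = 'REPEAT'
tex.repeat_x = 3
tex.repeat_y = 2

# First texture slot: Generated coordinates, Tube projection,
# influencing diffuse color (0.5) and normal (5.0), multiplied over orange.
slot = mat.texture_slots.add()
slot.texture = tex
slot.texture_coords = 'ORCO'      # shown as 'Generated' in the UI (assumed identifier)
slot.mapping = 'TUBE'
slot.use_map_color_diffuse = True
slot.diffuse_color_factor = 0.5
slot.use_map_normal = True
slot.normal_factor = 5.0
slot.blend_type = 'MULTIPLY'
slot.use_rgb_to_intensity = True
slot.color = (0.94, 0.56, 0.0)

# Attach the material to the selected turret mesh.
bpy.context.object.data.materials.append(mat)

The remaining slots (plate-bumps and beaten-bumps) would follow the same pattern, using the influence values listed in the steps above.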

ZBrush 4: how to model a creature with ZSketch

Packt
06 Apr 2011
4 min read
What the creature looks like The Brute is a crossbreed between a harmless emu and a wild forest bear. It is a roaming rogue living in the deepest forests. Only a few people have seen it and survived, so it's said to be between three and eight meters high. Despite its size, it combines strength and agility in a dangerous way. It is said that it hides its rather cute-looking head with trophies of its victims. ZSketching a character In this workflow, we can think of our ZSpheres as a skeleton we can place our virtual clay onto. So we try to build the armature, not as thick as the whole arm, but as thick as the underlying bone would be. With that in mind, let's get started. Time for action – creating the basic armature with ZSpheres Let's say the art director comes to your desk and shows you a concept of a monster to be created for the game project that you're working on. As always, there's little to no time for this task. Don't panic; just sketch it with ZSketch in no time. Let's see how this works: Pick a new ZSphere and align its rotation by holding Shift. Set your draw size down to 1. Activate Symmetry on the X-axis. The root ZSphere can't be deleted without deleting everything else, so the best place for it would be in the pelvis area. Placing the cursor on the line of symmetry will create a single ZSphere—this is indicated by the cursor turning green. Start creating the armature, or skeleton, from the root ZSphere, working up from the pelvis to the head, as shown in the next screenshot. Similar to the human spine, it roughly follows an S-curve: Continue by adding the shoulders. A little trick is to start the clavicle bone a bit lower at the spine, which gives a more natural motion in the shoulder area. Add the arms, with the fingers as one ZSphere plus the thumbs; we'll refine them later. The arms should be lowered and bent so that we're able to judge the overall proportions better, as the next image shows: This "skeleton" will also be used for moving or posing our model, so we'll try to place ZSpheres where our virtual joints would be, for example, at the elbow joint. Add the hips, stretching out from the pelvis, and continue with the legs. Try to bend the legs a bit (which looks more natural) as shown in the next screenshot. Finally, add the foot as one ZSphere for the creature to stand on: Now we have all the basic features of the armature ready. Let's check the concept again to get our character's proportions right. Because our character is more of a compact, bulky build, we have to shorten his legs and neck a bit. Make sure to check the perspective view, too. Inside any game engine, characters will be viewed in perspective. We can also set the focal angle under Draw | Focal Angle. The default value is 50. Switching perspective off helps when comparing lengths. Add another ZSphere in the belly area to better control its mass, even if it looks embarrassing. To make him look less like Anubis, you may want to lower the top-most ZSphere a bit, so it will fit the horns. Our revised armature could now look like this with perspective enabled: With the overall proportions done, let's move on to the details, starting with the toes. Insert another ZSphere next to the heels and continue by adding the toes, including the tiny fourth toe, as shown in the next screenshot: With larger ZSpheres, we can better judge the mass of the foot. But because we need a thinner, bone-like structure, let's scale them down once we're done. Be careful to scale the ZSpheres, and not the Link spheres in-between them. 
This keeps them in place while scaling, as shown in the next image:

Developing and Deploying Virtual World Environment for Flash Multiplayer

Packt
16 Aug 2010
7 min read
(For more resources on Flash, see here.) Comparing SmartFoxServer Lite, Basic, and Pro SmartFoxServer is a commercial product by gotoAndPlay(). There are three package options of SmartFoxServer. They are Lite version, Basic version, and Pro version. The demo license of the SmartFoxServer provides full features with 20 concurrent users maximum without time limitation. We will use the demo license to build the entire virtual world throughout the article. SmartFoxServer Lite The Lite version was the original SmartFoxServer since 2004. The maximum concurrent connection is limited to 50. It supports some core features like message passing, server-side user/room variables, and dynamic room creation. However, the lack of ActionScript 3.0greatly limits the performance and functionality. Moreover, it is being updated slowly so that many new features from Basic and Pro version are missing in Lite version. When we compare the version number of the three options, we will know that Lite version is developing at a slow pace. The version of SmartFoxServer Pro is 1.6.6 at the time of writing. The Basic version is 1.5.9 and the Lite version is only 0.9.1. Because of the slow update, not supporting ActionScript 3 and lack of features, it is not recommended to use Lite version in production. SmartFoxServer Basic SmartFoxServer Basic supports ActionScript 3 and a bunch of advanced features such as administration panel, game room spectators, and moderators. The administration panel lets moderators configure the zones, rooms, and users when the server is running. However, the lack of server-side extension support limits the customizability of the socket server. It also means that all logic must reside on the client side. This raises a security issue that the client may alter the logic to cheat. The Basic version provides enough features to build a Flash virtual world in small scale that does not require high security. If you need a specific server logic and room management or want to put logic in server side to prevent client-side cheating, Pro version is the choice. SmartFoxServer Pro There is a long list of features that are supported in Pro version. There are three features amongst all that distinguish the Pro version, they are: Server-side extension that modifies the server behavior JSON/Raw data protocol message passing Direct database connection Modifying the behavior of server Server-side extension is some server logic that developers can program to modify the default behavior of the internal event handler and add server-side functions to extend the server for specific usage. For example, we may want to override the "user lost" event so that we can save the user properties, telling others that someone is disconnected and something else. In this case, we can write a function in server-side extension to handle all these things when the user lost, instead of running the default behavior that was provided by SmartFoxServer. The SmartFoxServer is written in Java. Therefore the native support language of server-side extension is Java. In order to reduce the development difficulties, SmartFoxServer supports Python and ActionScript as a server-side extension. The support of ActionScript makes it much more convenient for most Flash developers to develop the server-side extension without even knowing Java. Please note that the version of ActionScript supported in server-side extension is ActionScript 1, instead of ActionScript 3. Take a look at the following code snippet on a server-side extension. 
The functions in server-side extensions are often similar to this one. They come with arguments identifying which user is calling the command and from which room. In this snippet there is a command called getSomething; it uses the provided command parameters to build the result and returns it to the corresponding user.

function handleRequest(cmd, params, user, fromRoom) {
  var response = {};
  switch (cmd) {
    case "getSomething":
      var cpu = params['cpuType'];
      response.something = "A Powerful Computer with CPU " + cpu;
      // send the response back to the client.
      _server.sendResponse(response, -1, null, [user]);
      break;
  }
}

JSON/Raw data protocol JSON (http://www.json.org) is a light-weight text-based data-interchange format. It is designed for both humans and machines to read and write data easily. For example, we can format a list of users and their information with the following JSON code:

{"users": [
  { "name": "Steve", "level": 12, "position": { "x": 6, "y": 7 } },
  { "name": "John", "level": 5, "position": { "x": 26, "y": 12 } }
]}

The default data protocol supported by SmartFoxServer Lite and Basic is XML. The Pro version added support for JSON and a raw data protocol, making it possible to compress the data transferred between clients and server. The messages exchanged between clients and server become much shorter, which means the transmission is much faster. Take an example of a client sending data to a server with the different protocols. We are trying to fetch some data from the server, and this is what the same command looks like in each protocol. XML: <dataObj><var n='name' t='s'>extension</var><var n='cmd' t='s'>getSomething</var><obj t='o' o='param'><var n='cpuType' t='n'>8</var></obj></dataObj> The length of this command is 148 bytes. JSON: {"b":{"p":{"cpuType":8},"r":1,"c":"getSomething","x":"extension"},"t":"xt"} The length of this command is 75 bytes. Raw Data: %xt%extension%getSomething%8% The length of this command is 29 bytes. Comparing the bytes used to send a command over the network, XML is two times the size of the JSON version and five times the raw protocol. These may look like differences of only a few bytes that hardly matter on a broadband Internet connection. However, it is a must to consider every byte sent over the network, because in real applications we are not talking about just 29 bytes versus 148 bytes. Imagine there are 2000 players in the virtual world, sending similar commands every second. We are now talking about 2.4 Mbit/s versus 500 Kbit/s, and this rough estimate already ignores those commands that fetch a long list of results, for example, a long list of items owned by the player. The raw protocol format takes fewer bytes to represent the command because it does not contain the field names of the data. All parameters are position-dependent. In the preceding command, the first parameter stands for an extension message and the second stands for the command name. Other command-specific parameters follow these two. The raw protocol is position-dependent in its parameters while JSON is not. It is recommended to use the JSON protocol in most cases and the raw data protocol in real-time interaction parts. Also, we should state clearly in code comments what each parameter stands for, because others cannot get the field information from the raw data. Accessing the database directly Flash does not provide any database access functions. Flash applications always connect to a database via a server-side technique. 
The Pro version of SmartFoxServer provides direct database connectivity in server-side extension. The Flash virtual world will call a function in sever-side extension and it will handle the database connection for the Flash. As the database connectivity is handled in server-side extension, Basic and Lite version does not contain this handy feature. We have to wrap the database access in other server-side technique, such as PHP, to connect database in Basic and Lite version. The two graphs compare the architecture of the database access in SmartFoxServer Pro, Basic, and Lite.
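To make the bandwidth argument above concrete, here is a small, hypothetical Python sketch (not part of the original article) that measures the three encodings of the getSomething request shown earlier and projects the traffic for the 2000-player example. The message strings are copied from the comparison above; the per-player rate of one message per second is the same rough assumption the article uses.

# Compare the size of the same request in the three protocols.
xml_msg = ("<dataObj><var n='name' t='s'>extension</var>"
           "<var n='cmd' t='s'>getSomething</var>"
           "<obj t='o' o='param'><var n='cpuType' t='n'>8</var></obj></dataObj>")
json_msg = '{"b":{"p":{"cpuType":8},"r":1,"c":"getSomething","x":"extension"},"t":"xt"}'
raw_msg = "%xt%extension%getSomething%8%"

players = 2000  # hypothetical concurrent users, one message per second each
for label, msg in (("XML", xml_msg), ("JSON", json_msg), ("Raw", raw_msg)):
    size = len(msg.encode("utf-8"))
    mbit_per_s = size * players * 8 / 1000000.0
    print("%-4s %3d bytes per message -> %.2f Mbit/s for %d players" % (label, size, mbit_per_s, players))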

Material nodes in Cycles

Packt
25 Jul 2013
12 min read
(For more resources related to this topic, see here.) Getting Ready In the description of the following steps, I'll assume that you are starting with a brand new Blender with the default factory settings; if not, start Blender and just click on the File menu item to the top main header bar to select Load Factory Settings from the pop-up menu. In the upper menu bar, switch from Blender Render to Cycles Render (hovering with the mouse on this button shows the Engine to use for rendering label). Now split the 3D view into two horizontal rows and change the upper one in to the Node Editor window by selecting the menu item from the Editor type button at the left-hand corner of the bottom bar of the window itself. The Node Editor window is, in fact, the window we will use to build our shader by mixing the nodes (it's not the only way, actually, but we'll see this later). Put the mouse cursor in the 3D view and add a plane under the cube (press Shift + A and navigate to Mesh | Plane). Enter edit mode (press Tab), scale it 3.5 times bigger (press S, digit 3.5, and hit Enter) and go out of edit mode (press Tab). Now, move the plane one Blender unit down (press G, then Z, digit -1, and then hit Enter). Go to the little icon (Viewport Shading) showing a sphere in the bottom bar of the 3D view and click on it. A menu showing different options appears (Bounding Box, Wireframe, Solid, Texture, Material, and Rendered). Select Rendered from the top of the list and watch your cube being rendered in real time in the 3D viewport. Now, you can rotate or/and translate the view or the cube itself and the view gets updated in real time (the speed of the update is only restricted by the complexity of the scene and by the computing power of your CPU or of your graphic card). Let's learn more about this: Select Lamp in the Outliner (by default, a Point lamp). Go to the Object Data window under the Properties panel on the right-hand side of the interface. Under the Nodes tab, click on Use Nodes to activate a node system for the selected light in the scene; this node system is made by an Emission closure connected to a Lamp Output node. Go to the Strength item, which is set to 100.000 by default, and start to increase the value—as the intensity of the Lamp increases, you can see the cube and the plane rendered in the viewport getting more and more bright, as shown in the following screenshot: How to do it... We just prepared the scene and had a first look at one of the more appreciated features of Cycles: the real-time rendered preview. Now let's start with the object's materials: Select the cube to assign the shader to, by left-clicking on the item in the Outliner, or also by right-clicking directly on the object in the Rendered viewport (but be aware that in Rendered mode, the object selection outline usually around the mesh is not visible because, obviously, it's not renderable). Go to the Material window under the Properties panel: even if with the default Factory Settings selected, the cube has already a default material assigned (as you can precisely see by navigating to Properties | Material | Surface). In any case, you need to click on the Use Nodes button under the Surface tab to activate the node system; or else, by checking the Use Nodes box in the header of the Node Editor window. 
As you check the Use Nodes box, the content of the Surface tab changes showing that a Diffuse BSDF shader has been assigned to the cube and that, accordingly, two linked nodes have appeared inside the Node Editor window: the Diffuse BSDF shader itself is already connected to the Surface input socket of a Material Output node (the Volume input socket does nothing at the moment, it's there in anticipation of a volumetric feature on the to-do list, and we'll see the Displacement socket later). Put the mouse cursor in the Node Editor window and by scrolling the mouse wheel, zoom in to the Diffuse BSDF node. Left-click on the Color rectangle: a color wheel appears, where you can select a new color to change the shader color by clicking on the wheel or by inserting the RGB values (and take note that there are also a color sampler and the Alpha channel value, although the latter, in this case, doesn't have any visible effect on the object material's color): The cube rendered in the 3D preview changes its material's color in real time. You can even move the cursor in the color wheel and watch the rendered object switching the colors accordingly. Set the object's color to a greenish color by setting its RGB values to 0.430, 0.800, and 0.499 respectively. Go to the Material window and, under the Surface tab, click on the Surface button, which at the moment is showing the Diffuse BSDF item. From the pop-up menu, select the Glossy BSDF shader item. The node now also changes in the Node Editor window and so does accordingly the cube's material in the Rendered preview, as shown here: Note that although we just switched a shader node with a different one, the color we set in the former one has been kept also in the new one; actually, this happens for all the values that can be kept from one node to a different one. Now, because in the real world a material having a 100 percent matte or reflective surface could hardly exist, a more correct basic Cycles material should be made by mixing the Diffuse BSDF and the Glossy BSDF shaders blended together by a Mix Shader node, in turn connected to the Material Output node. In the Material window, under the Surface tab, click again on the Surface button that is now showing the Glossy BSDF item and replace it back with a Diffuse BSDF shader. Put the mouse pointer in the Node Editor window and, by pressing Shift + A on the keyboard, make a pop-up menu appear with several items. Move the mouse pointer on the Shader item, it shows one more pop-up where all the shader options are collected. Select one of these shader menu items, for example, the Glossy BSDF item. The shader node is now added to the Node Editor window, although not connected to anything yet (in fact, it's not visible in the Material window but is visible only in the Node Editor window); the new nodes appear already selected. Again press Shift + A in the Node Editor window and this time add a Mix Shader node. Press G to move it on the link connecting the Diffuse BSDF node to the Material Output node (you'll probably need to first adjust the position of the two nodes to make room between them). The Mix Shader node gets automatically pasted in between, the Diffuse node output connected to the first Shader input socket, as shown in the following screenshot: Left-click with the mouse on the green dot output of the Glossy BSDF shader node and grab the link to the second input socket of the Mix Shader node. Release the mouse button now and see the nodes being connected. 
Because the blending Fac (factor) value of the Mix Shader node is set by default to 0.500, the two shader components, Diffuse and Glossy, are now showing on the cube's surface in equal parts, that is, each one at 50 percent. Left-click on the Fac slider with the mouse and slide it to 0.000. The cube's surface is now showing only the Diffuse component, because the Diffuse BSDF shader is connected to the first Shader input socket that is corresponding to a value set to 0.000. Slide the Fac slider value to 1.000 and the surface is now showing only the Glossy BSDF shader component, which is, in fact, connected to the second Shader input socket corresponding to a value set to 1.000. Set the Fac value to 0.800. The cube is now reflecting on its sides, even if blurred, the white plane, because we have a material that is reflective at 80 percent, matte at 20 percent, and so on: Lastly, select the plane, go to the Material window and click on the New button to assign a default whitish material. How it works... So, in its minimal form, a Cycles material is made by a closure (a node shader) connected to the Material Output node; by default, for a new material, the node shader is the Diffuse BSDF with RGB color set to 0.800, and the result is a matte whitish material (with the Roughness value at 0.000 actually corresponding to a Lambert shader). The Diffuse BSDF node can be replaced by any other one of the available shader list. For example, by a Glossy BSDF shader as in the former cube scene, which produces a totally mirrored surface material. As we have seen, the Node Editor window is not the only way to build the materials; in the Properties panel on the right-hand side of the UI, we have access to the Material window, which is usually divided as follows: The material name, user, and the datablock tab The Surface tab, including in a vertical ordered column only the shader nodes added in the Node Editor window and already connected to each other The Displacement tab, which we'll see later The Settings tab, where we can set the object color as seen in the viewport in not-rendered mode (Viewport Color), the material Pass Index, and a Multiple Importance Sample option The Material window not only reflects what we do in the Node Editor window and changes accordingly to it (and vice versa), but also can be used to change the values, to easily switch the closures themselves and to some extent to connect them to the other nodes. The Material and the Node Editor windows are so mutual that there is no prevalence in which one to use to build a material; both can be used individually or combined, depending on preferences or practical utility. In some cases, it can be very handy to switch a shader from the Surface tab under Material on the right (or a texture from the Texture window as well, but we'll see textures later), leaving untouched all the settings and the links in the node's network. There is no question, by the way, that the Material window can become pretty complex and confusing as a material network grows more and more in complexity, while the graphic appearance of the Node Editor window shows the same network in a much more clear and comprehensible way. There's more... Looking at the Rendered viewport, you'll notice that the image is now quite noisy and that there are white dots in certain areas of the image; these are the infamous fireflies, caused mainly by transparent, luminescent, or glossy surfaces. Actually, they have been introduced in our render by the glossy component. 
Follow these steps to avoid them: Go to the Render window under the Properties panel. In the Sampling tab, set Samples to 100 both for Preview and Render (they are set to 10 by default). Set the Clamp value to 1.00 (it's set to 0.00 by default). Go to the Light Paths tab and set the Filter Glossy value to 1.00 as well. The resulting rendered image, as shown here, is now a lot smoother and noise free: Save the blend file in an appropriate location on your hard drive with a name such as start_01.blend. Samples set to 10 by default are obviously not enough to give a noiseless image, but are good for a fast preview. We could also leave the Preview samples at their default and increase only the Render value, accepting longer rendering times but getting a clean image only for the final render (which can be started, as in BI, by pressing the F12 key). By using the Clamp value, we can cut the energy of the light. Internally, Blender converts the image color space to linear. It then re-converts it to RGB, that is, from 0 to 255, for the output. A value of 1.00 in linear space means that all the image values are now included inside a range starting from 0 and reaching a maximum of 1, and that values bigger than 1 are not possible, which usually avoids the fireflies problem. Clamp values higher than 1.00 can start to lower the general lighting intensity of the scene. The Filter Glossy value is exactly what the name says, a filter that blurs the glossy reflections on the surface to reduce noise. Be aware that even with the same samples, the Rendered preview does not always correspond exactly to the final render, with regard to both the noise and the fireflies. This is mainly because the preview-rendered 3D window and the final rendered image usually have very different sizes, and artifacts visible in the final rendered image may not show in a smaller preview-rendered window. Summary In this article we learned how to build a basic Cycles material, add textures, and use lamps or light-emitting objects. Also, we learned how to successfully create a simple scene. Resources for Article: Further resources on this subject: Advanced Effects using Blender Particle System [Article] Learn Baking in Blender [Article] Character Head Modeling in Blender: Part 2 [Article]
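As a recap of the node arrangement built by hand in this recipe, the same Diffuse plus Glossy setup blended by a Mix Shader can be expressed through Cycles' Python API. The following is a minimal sketch, not from the original article, assuming Cycles is the active render engine and the cube is the active object; the identifiers ShaderNodeBsdfGlossy and ShaderNodeMixShader are the standard bpy names for the Glossy BSDF and Mix Shader nodes.

import bpy

# Create a node-based material; use_nodes adds a Diffuse BSDF and Material Output.
mat = bpy.data.materials.new("cube_material")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

diffuse = nodes["Diffuse BSDF"]
diffuse.inputs["Color"].default_value = (0.430, 0.800, 0.499, 1.0)  # greenish color from the recipe

glossy = nodes.new("ShaderNodeBsdfGlossy")
mix = nodes.new("ShaderNodeMixShader")
output = nodes["Material Output"]

links.new(diffuse.outputs["BSDF"], mix.inputs[1])   # first Shader socket (Fac = 0.0 side)
links.new(glossy.outputs["BSDF"], mix.inputs[2])    # second Shader socket (Fac = 1.0 side)
links.new(mix.outputs["Shader"], output.inputs["Surface"])
mix.inputs["Fac"].default_value = 0.8               # 80 percent glossy, 20 percent diffuse

# Assign the material to the active object (the cube).
bpy.context.object.data.materials.append(mat)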

Blender 2.5: Rigging the Torso

Packt
22 Jun 2011
9 min read
Blender 2.5 Character Animation Cookbook 50 great recipes for giving soul to your characters by building high-quality rigs This article is about our character's torso: we're going to see how to create hips, a spine, and a neck. Aside from what you'll learn from here, it's important for you to take a look at how some of those rigs were built. You'll see some similarities, but also some new ideas to apply to your own characters. It's pretty rare to see two rigs built the exact same way. How to create a stretchy spine A human spine, also called vertebral column, is a bony structure that consists of several vertebrae (24 or 33, if you consider the pelvic region). It acts as our main axis and allows us a lot of flexibility to bend forward, sideways, and backward. And why is this important to know? That number of vertebrae is something useful for us riggers. Not that we're going to create all those tiny bones to make our character's spine look real, but that information can be used within Blender. You can subdivide one physical bone for up to 32 logical segments (that can be seen in the B-Bone visualization mode), and this bone will make a curved deformation based on its parent and child bones. That allows us to get pretty good deformations on our character's spine while keeping the number of bones to a minimum. This is good to get a realistic deformation, but in animation we often need the liberty to squash and stretch our character: and this is needed not only in cartoony animations, but to emphasize realistic poses too. We're going to see how to use some constraints to achieve that. We're going to talk about just the spine, without the pelvic region. The latter needs a different setup which is out of the scope of this article. How to do it... Open the file 002-SpineStretch.blend from support files. It's a mesh with some bones already set for the limbs, as you can see in the next screenshot. There's no weight painting yet, because it's waiting for you to create the stretchy spine. Select the armature and enter into its Edit Mode (Tab). Go to side view (Numpad 3); make sure the 3D cursor is located near the character's back, in the line of what would be his belly button. Press Shift + A to add a new bone. Move its tip to a place near the character's eyes. Go to the Properties window, under the Object Data tab, and switch the armature's display mode to B-Bone. You'll see that this bone you just created is a bit fat, let's make it thinner using the B-Bone scale tool (Ctrl + Alt + S). With the bone still selected, press (W) and select Subdivide. Do the same to the remaining bones so we end up with five bones. Still, in side view, you can select and move (G) the individual joints to best fit the mesh, building that curved shape common in a human spine, ending with a bone to serve as the head, as seen in the next screenshot: Name these bones as D_Spine1, D_Spine2, D_Spine3, D_Neck, and D_Head. You may think just five bones aren't enough to build a good spine. And here's when the great rigging tools in Blender come to help us. Select the D_Neck bone, go to the Properties window, under the Bone tab and increase the value of Segments in the Deform section to 6. You will not notice any difference yet. Below the Segments field there are the Ease In and Ease Out sliders. These control the amount of curved deformation on the bone at its base and its tip, respectively, and can range from 0 (no curve) to 2. Select the next bone below in the chain (D_Spine3) and change its Segments value to 8. 
Do the same to the remaining bones below, with values of 8 and 6, respectively. To see the results, go out of Edit Mode (Tab). You should end up with a nice curvy spine as seen in the following screenshot: Since these bones are already set to deform the mesh, we could just add some shapes to them and move our character's torso to get a nice spine movement. But that's not enough for us, since we also want the ability to make this character stretch. Go back into Edit Mode, select the bones in this chain, press Shift + W, and select No Scale. This will make sure that the stretching of the parent bone will not be transferred to its children. This can also be accomplished under the Properties window, by disabling the Inherit Scale option of each bone. Still in Edit Mode, select all the spine bones and duplicate (Shift + D) them. Press Esc to make them stay at the same location of the original chain, followed by Ctrl + Alt + S to make them fatter (to allow us to distinguish both chains). When in Pose Mode, these bones would also appear subdivided, which can make our view quite cluttered. Change back the Segments property of each bone to 1 and disable their deform property on the same panel under the Properties Window. Name these new bones as Spine1, Spine2, Spine3, Neck, and Head, go out of Edit Mode (Tab) and you should have something that looks similar to the next screenshot: Now let's create the appropriate constraints. Enter in Pose Mode (Ctrl + Tab), select the bone Spine1, hold Shift, and select D_Spine1. Press Shift + Ctrl + C to bring up the Constraints menu. Select the Copy Location constraint. This will make the deformation chain move when you move the Spine_1 bone. The Copy Location constraint here is added because there is no pelvic bone in this example, since it's creation involves a different approach which we'll see in the next recipe, Rigging the pelvis. With the pelvic bone below the first spinal bone, its location will drive the location of the rest of the chain, since it will be the chain's root bone. Thus, this constraint won't be needed with the addition of the pelvis. Make sure that you check out our next recipe, dedicated to creating the pelvic bone. With those bones still selected, bring up the Constraints menu again and select the Stretch To constraint. You'll see that the deformation chain will seem to disappear, but don't panic. Go to the Properties Panel, under the Bone Constraints tab and look for the Stretch To constraint you have just created. Change the value of the Head or Tail slider to 1, so the constraint would be evaluated considering the tip of the Spine_1 bone instead of its base. Things will look different now, but not yet correct. Press the Reset button to recalculate the constraints and make things look normal again. This constraint will cause the first deformation bone to be stretched when you scale (S) the Spine_1 bone. Try it and see the results. The following screenshot shows the constraint values: This constraint should be enough for stretching, and we may think it could replace the Copy Rotation constraint. That's not true, since the StretchTo constraint does not apply rotations on the bone's longitudinal Y axis. So, let's add a Copy Rotation constraint. On the 3D View, with the Spine1 and D_Spine1 selected (in that order, that's important!), press Ctrl + Shift + C and choose the Copy Rotation constraint. Since the two bones have the exact same size and position in 3D space, you don't need to change any of the constraint's settings. 
You should add the Stretch To and Copy Rotation constraints to the remaining controller bones exactly the same way you did with the D_Spine1 bone in steps 9 to 12. As the icing on the cake, disable the X and Z scaling transformation on the controller bones. Select each, go to the Transform Panel (N), and press the lock button near the X and Z sliders under Scale. Now, when you select any of these controller bones and press S, the scale is just applied on their Y axis, making the deforming ones stretch properly. Remember that the controller bones also work as expected when rotated (R). The next screenshot shows the locking applied: Enter into Edit Mode (Tab), select the Shoulder.L bone, hold Shift, and select both Shoulder.R and Spine3 (in this order; that's important). Press Ctrl + P and choose Keep Offset to make both shoulder controllers children of the Spine3 bone and disable its scale inheriting either through Shift + W or the Bone tab on the Properties panel. When you finish setting these constraints and applying the rig to the mesh through weight painting, you can achieve something stretchy, as you can see in the next screenshot: The file 002-SpineStretch-complete.blend has this complete recipe, for your reference in case of doubts. How it works... When creating spine rigs in Blender, there's no need to create lots of bones, since Blender allows us to logically subdivide each one to get soft and curved deformations. The amount of curved deformation can also be controlled through the Ease In and Ease Out sliders, and it also works well with stretching. When you scale a bone on its local Y axis in Pose Mode, it doesn't retain its volume, thus the mesh deformed by it would be scaled without the stretching feeling. You must create controller bones to act as targets to the Stretch To constraint, so when they're scaled, the constrained bones will stretch and deform the mesh with its volume preserved. There's more... You should notice that the spine controllers will be hidden inside the character's body when you turn off the armature's X-Ray property. Therefore, you need to create some custom shapes for these controller bones in order to make your rig more usable.
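If you prefer to add the constraints from a script rather than through the Constraints menu, the following is a rough Python (bpy) sketch, not part of the original recipe. It assumes the armature is the active object in Pose Mode and that the bones are named as above (D_Spine1 and Spine1); the same calls would simply be repeated for the remaining spine bones.

import bpy

# Add Stretch To and Copy Rotation constraints to one deform bone.
arm = bpy.context.object            # the armature object
pbone = arm.pose.bones["D_Spine1"]

stretch = pbone.constraints.new('STRETCH_TO')
stretch.target = arm
stretch.subtarget = "Spine1"        # controller bone
stretch.head_tail = 1.0             # evaluate at the controller's tip, as in the recipe

copy_rot = pbone.constraints.new('COPY_ROTATION')
copy_rot.target = arm
copy_rot.subtarget = "Spine1"

# Lock X and Z scaling on the controller so S only stretches it along Y.
ctrl = arm.pose.bones["Spine1"]
ctrl.lock_scale[0] = True           # X
ctrl.lock_scale[2] = True           # Z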

Cocos2d: Working with Sprites

Packt
13 Dec 2011
6 min read
  (For more resources on Cocos2d, see here.)   Drawing sprites The most fundamental task in 2D game development is drawing a sprite. Cocos2d provides the user with a lot of flexibility in this area. In this recipe we will cover drawing sprites using CCSprite, spritesheets, CCSpriteFrameCache, and CCSpriteBatchNode. We will also go over mipmapping. In this recipe we see a scene with Alice from Through The Looking Glass. Getting ready Please refer to the project RecipeCollection01 for the full working code of this recipe. How to do it... Execute the following code: @implementation Ch1_DrawingSprites-(CCLayer*) runRecipe { /*** Draw a sprite using CCSprite ***/ CCSprite *tree1 = [CCSprite spriteWithFile:@"tree.png"]; //Position the sprite using the tree base as a guide (y anchor point = 0)[tree1 setPosition:ccp(20,20)]; tree1.anchorPoint = ccp(0.5f,0); [tree1 setScale:1.5f]; [self addChild:tree1 z:2 tag:TAG_TREE_SPRITE_1]; /*** Load a set of spriteframes from a PLIST file and draw one byname ***/ //Get the sprite frame cache singleton CCSpriteFrameCache *cache = [CCSpriteFrameCachesharedSpriteFrameCache]; //Load our scene sprites from a spritesheet [cache addSpriteFramesWithFile:@"alice_scene_sheet.plist"]; //Specify the sprite frame and load it into a CCSprite CCSprite *alice = [CCSprite spriteWithSpriteFrameName:@"alice.png"]; //Generate Mip Maps for the sprite [alice.texture generateMipmap]; ccTexParams texParams = { GL_LINEAR_MIPMAP_LINEAR, GL_LINEAR, GL_CLAMP_TO_EDGE, GL_CLAMP_TO_EDGE }; [alice.texture setTexParameters:&texParams]; //Set other information. [alice setPosition:ccp(120,20)]; [alice setScale:0.4f]; alice.anchorPoint = ccp(0.5f,0); //Add Alice with a zOrder of 2 so she appears in front of othersprites [self addChild:alice z:2 tag:TAG_ALICE_SPRITE]; //Make Alice grow and shrink. 
[alice runAction: [CCRepeatForever actionWithAction: [CCSequence actions:[CCScaleTo actionWithDuration:4.0f scale:0.7f], [CCScaleTo actionWithDuration:4.0f scale:0.1f], nil] ] ]; /*** Draw a sprite CGImageRef ***/ UIImage *uiImage = [UIImage imageNamed: @"cheshire_cat.png"]; CGImageRef imageRef = [uiImage CGImage]; CCSprite *cat = [CCSprite spriteWithCGImage:imageRef key:@"cheshire_cat.png"]; [cat setPosition:ccp(250,180)]; [cat setScale:0.4f]; [self addChild:cat z:3 tag:TAG_CAT_SPRITE]; /*** Draw a sprite using CCTexture2D ***/ CCTexture2D *texture = [[CCTextureCache sharedTextureCache]addImage:@"tree.png"]; CCSprite *tree2 = [CCSprite spriteWithTexture:texture]; [tree2 setPosition:ccp(300,20)]; tree2.anchorPoint = ccp(0.5f,0); [tree2 setScale:2.0f]; [self addChild:tree2 z:2 tag:TAG_TREE_SPRITE_2]; /*** Draw a sprite using CCSpriteFrameCache and CCTexture2D ***/ CCSpriteFrame *frame = [CCSpriteFrame frameWithTexture:texturerect:tree2.textureRect]; [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFrame:frame name:@"tree.png"]; CCSprite *tree3 = [CCSprite spriteWithSpriteFrame:[[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"tree.png"]]; [tree3 setPosition:ccp(400,20)]; tree3.anchorPoint = ccp(0.5f,0); [tree3 setScale:1.25f]; [self addChild:tree3 z:2 tag:TAG_TREE_SPRITE_3]; /*** Draw sprites using CCBatchSpriteNode ***/ //Clouds CCSpriteBatchNode *cloudBatch = [CCSpriteBatchNodebatchNodeWithFile:@"cloud_01.png" capacity:10]; [self addChild:cloudBatch z:1 tag:TAG_CLOUD_BATCH]; for(int x=0; x<10; x++){ CCSprite *s = [CCSprite spriteWithBatchNode:cloudBatchrect:CGRectMake(0,0,64,64)]; [s setOpacity:100]; [cloudBatch addChild:s]; [s setPosition:ccp(arc4random()%500-50, arc4random()%150+200)]; } //Middleground Grass int capacity = 10; CCSpriteBatchNode *grassBatch1 = [CCSpriteBatchNodebatchNodeWithFile:@"grass_01.png" capacity:capacity]; [self addChild:grassBatch1 z:1 tag:TAG_GRASS_BATCH_1]; for(int x=0; x<capacity; x++){ CCSprite *s = [CCSprite spriteWithBatchNode:grassBatch1rect:CGRectMake(0,0,64,64)]; [s setOpacity:255]; [grassBatch1 addChild:s]; [s setPosition:ccp(arc4random()%500-50, arc4random()%20+70)]; } //Foreground Grass CCSpriteBatchNode *grassBatch2 = [CCSpriteBatchNodebatchNodeWithFile:@"grass_01.png" capacity:10]; [self addChild:grassBatch2 z:3 tag:TAG_GRASS_BATCH_2]; for(int x=0; x<30; x++){ CCSprite *s = [CCSprite spriteWithBatchNode:grassBatch2rect:CGRectMake(0,0,64,64)]; [s setOpacity:255]; [grassBatch2 addChild:s]; [s setPosition:ccp(arc4random()%500-50, arc4random()%40-10)]; } /*** Draw colored rectangles using a 1px x 1px white texture ***/ //Draw the sky using blank.png [self drawColoredSpriteAt:ccp(240,190) withRect:CGRectMake(0,0,480,260) withColor:ccc3(150,200,200) withZ:0]; //Draw the ground using blank.png [self drawColoredSpriteAt:ccp(240,30)withRect:CGRectMake(0,0,480,60) withColor:ccc3(80,50,25) withZ:0]; return self;}-(void) drawColoredSpriteAt:(CGPoint)position withRect:(CGRect)rectwithColor:(ccColor3B)color withZ:(float)z { CCSprite *sprite = [CCSprite spriteWithFile:@"blank.png"]; [sprite setPosition:position]; [sprite setTextureRect:rect]; [sprite setColor:color]; [self addChild:sprite]; //Set Z Order [self reorderChild:sprite z:z];}@end How it works... This recipe takes us through most of the common ways of drawing sprites: Creating a CCSprite from a file: First, we have the simplest way to draw a sprite. 
This involves using the CCSprite class method as follows: +(id)spriteWithFile:(NSString*)filename; This is the most straightforward way to initialize a sprite and is adequate for many situations. Other ways to load a sprite from a file: After this, we will see examples of CCSprite creation using UIImage/CGImageRef, CCTexture2D, and a CCSpriteFrame instantiated using a CCTexture2D object. CGImageRef support allows you to tie Cocos2d into other frameworks and toolsets. CCTexture2D is the underlying mechanism for texture creation. Loading spritesheets using CCSpriteFrameCache: Next, we will see the most powerful way to use sprites, the CCSpriteFrameCache class. Introduced in Cocos2d-iPhone v0.99, the CCSpriteFrameCache singleton is a cache of all sprite frames. Using a spritesheet and its associated PLIST file we can load multiple sprites into the cache. From here we can create CCSprite objects with sprites from the cache: +(id)spriteWithSpriteFrameName:(NSString*)filename; Mipmapping: Mipmapping allows you to scale a texture or to zoom in or out of a scene without aliasing your sprites. When we scale Alice down to a small size, aliasing will inevitably occur. With mipmapping turned on, Cocos2d dynamically generates lower resolution textures to smooth out any pixelation at smaller scales. Go ahead and comment out the following lines: [alice.texture generateMipmap]; ccTexParams texParams = { GL_LINEAR_MIPMAP_LINEAR, GL_LINEAR,GL_CLAMP_TO_EDGE, GL_CLAMP_TO_EDGE }; [alice.texture setTexParameters:&texParams]; Now you should see this pixelation as Alice gets smaller. Drawing many derivative sprites with CCSpriteBatchNode: The CCSpriteBatchNode class, added in v0.99.5, introduces an efficient way to draw and re-draw the same sprite over and over again. A batch node is created with the following method: CCSpriteBatchNode *cloudBatch = [CCSpriteBatchNodebatchNodeWithFile:@"cloud_01.png" capacity:10]; Then, you create as many sprites as you want using the follow code: CCSprite *s = [CCSprite spriteWithBatchNode:cloudBatchrect:CGRectMake(0,0,64,64)]; [cloudBatch addChild:s]; Setting the capacity to the number of sprites you plan to draw tells Cocos2d to allocate that much space. This is yet another tweak for extra efficiency, though it is not absolutely necessary that you do this. In these three examples we draw 10 randomly placed clouds and 60 randomly placed bits of grass. Drawing colored rectangles: Finally, we have a fairly simple technique that has a variety of uses. By drawing a sprite with a blank 1px by 1px white texture and then coloring it and setting its textureRect property we can create very useful colored bars: CCSprite *sprite = [CCSprite spriteWithFile:@"blank.png"];[sprite setTextureRect:CGRectMake(0,0,480,320)];[sprite setColor:ccc3(255,128,0)]; In this example we have used this technique to create very simple ground and sky backgrounds.  

Blender 3D 2.49: Quick Start

Packt
15 Sep 2010
9 min read
(For more resources on Blender, see here.) Interface One of the most important parts of any software is the interface, and with Blender, it is no different. But the Blender interface is unique, because it's all based on OpenGL graphics built in real-time that can be redesigned any way we want. Because of that, we can say that Blender has a default interface that can be customized any way we want. It's even possible to zoom all the items in menus and buttons. Let's take a look at the interface: (move cursor over image to enlarge) The default interface of Blender is divided into: 3D View: This is the section of the interface where you visualize all your objects and manipulate them. If you are in the modeling process, this window should always be visible. Buttons Window: Here we will find almost all the tools and menus, with options to set up features like modifiers, materials, textures, and lights. We can change the options available in this window with several small icons that change the buttons with specific tasks like materials, shading, editing, and others. Those buttons will reflect the active panel in Blender, for example, when we choose materials (F5 key). The Buttons window will then only show options related to materials. Header: All windows in Blender have a header, even if it's not visible at the time we create the window. The content of the header can change, depending on the window type. For example, in the header for the 3D View, we find options related to visualization, object manipulation, and selection. Menus: These menus work just like in any other application, with options to save files, import, and export models. Depending on the window type selected, the contents of the menu may differ. Scene Selector: We can create various scenes in Blender, and this selector allows us to choose and create these scenes. Because we will be modeling and dealing with scenery, the Scene selector will be an important tool for us. These parts make up the default interface of Blender, but we can change all aspects of the interface. There are even some modified screens, adapted to some common tasks with Blender, for us to choose. To access these modified screen sets, we must click on the selector located to the left of Scene Selector: There are screen sets prepared to work with Animation, Model, Material, Sequence, and Scripting. Each of these sets has a different interface organization, optimized for its specific task. A nice way to switch between these sets is with a keyboard shortcut, which is Ctrl plus left arrow or right arrow. Try this shortcut, and you will switch between sets very quickly. If you make any changes in the interface of Blender and want to overwrite the default interface, just press Ctrl + U, and your current interface will become the new default. In this way, every time Blender is started, your new interface will be shown. The same option can be reached in the File menu with the option named Save Default Settings. To restore the standard default interface, just use the option Load Factory Settings in the File menu. Windows and menus Blender has a lot of different windows that can do a lot of nice things. Two of the most common windows are the 3D View and the Buttons Window, but there are a lot more. With the Window type selector, we can choose among several types, such as File Browser, Text Editor, Timeline, and others. 
The Window type selector is always located in the left corner of each window, as shown in the following screenshot: Let's see what the function of each window is: Scripts Window: This window groups some nice scripts written in Python to add some extra tools and functionalities to Blender. It works much like plugins in other 3D Packages. There are scripts to help in a lot of different tasks like modeling, animation, and importing models. Some of these scripts are very helpful to architectural modeling such as Geom Tool and Bridge Faces. For instance, we can create a city space with only a few mouse clicks using a script named Discombobulator. In most cases, the scripts will appear in the right place in the Blender menus. Use this window only if you want to browse all scripts available in your Blender Scripts folder. To run a script, just select any script from the Scripts menu. File Browser: With this window, we can browse the files of a specific folder. This window appears automatically when we save or open a file. Image Browser: Here we can view the image files in a specific folder. This window is very useful to search for image files like .jpg, .png, and others. Node Editor and Compositing Nodes: With this window, it's possible to build node sets and create complex materials and textures. Buttons Window: We already have talked about this window, but it's nice to remember that after the 3D View, this is one of the most important windows, because here we can set options for almost any tool or functionality in Blender. This is the window responsible for several tools and functions in Blender, such as lights, materials, textures, and object properties. Outliner: This window shows us a list of the objects in your scene, and lists the relations among them. Here we can see if an object is related to some other object in a hierarchical way. In this window, we can easily hide and select objects, which may be useful for complex scenes. User Preferences: As the name suggests, here we can change Blender configurations, such as file paths, themes, Auto Save, and other options. Text Editor: This window allows us to write and open text files to make comments and place notes. We can open and run Python scripts here also. Audio Window: Here we can open and compose audio files in sequences. It works much like the Video Sequence Editor, but for audio files. Timeline: That's the place where we create animation. This window gives us nice tools to add key frames and build animations. Video Sequence Editor: Here we can build and composite images and video files. It's a very nice window that can replace a video editor in some ways. We can easily create a complex animation with a lot of shots and sequence them together with this window. And, we can use the Node Editor to create complex compositions and effects. UV/Image Editor: With this window, we can edit and set up images and textures. There is even a paint application, with which we can edit and make adjustments in textures and maps. This is a very important window for us, because a lot of the texture work we will be using will involve the use of UV Textures that require a lot of adjustments in the UV/Image Editor. NLA Editor: Here we can visualize and set up non-linear animations. This window is related more to animations and key frame visualization. A non-linear animation means that we can create small blocks of motions, which can be arranged any way we like, including copying and positioning those blocks into sequences. 
In Blender, these blocks are named strips. Because the editor is non-linear, we can erase and rearrange strips without breaking the animation; in a linear animation system, any change at the beginning of the animation would demand a full reconstruction of the animation by the artist.

Action Editor: This window has options to set up actions related to character animation.

Ipo Curve Editor: In this window, we can create and set up animations in a more visual way, using curves. It's possible to add, edit, and delete keyframes here. Even animations that don't require much character or object deformation work, like the ones we will be creating, still require a fair amount of curve setup to look good.

Now we know what each of these windows does. Some of them will be very important for your visualization tasks, such as the Buttons Window and the Scripts Window.

Multiple windows

A great feature in Blender is the ability to split the interface and use several window types at the same time. The way to do this is very simple: right-click on the border of an existing window to access a small menu with the option to split it. When you place the mouse cursor on the border of a window, the cursor changes into a double arrow; right-click and choose Split Area from the menu, as shown in the next screenshot, and a division will be created. There are two kinds of divisions we can create:

Vertical: Click on the upper or lower border of a window to create a vertical division.

Horizontal: Click on the right or left border of a window to create a horizontal division.

After choosing Split Area, just place the mouse cursor where you want the division to be created and left-click to confirm.

Merge windows

It's possible to merge two different windows with the same menu. An option named Join Areas appears when we right-click on the border between two windows. After choosing it, a big arrow shows which window will be removed, while the base of the arrow marks the window that will take its place. When you have chosen which windows should be joined, left-click to confirm. Only windows that share an entire border can be joined; windows that share only part of a border can't be joined directly, so we must rearrange them first.
Unity Game Development: Interactions (Part 1)

Packt
18 Nov 2009
8 min read
To detect physical interactions between game objects, the most common method is to use a Collider component, an invisible net that surrounds an object's shape and is in charge of detecting collisions with other objects. The act of detecting and retrieving information from these collisions is known as collision detection. Not only can we detect when two colliders interact, we can also pre-empt a collision and perform many other useful tasks with a technique called ray casting. A ray is, put simply, an invisible (non-rendered) vector line between two points in 3D space, and it too can be used to detect an intersection with a game object's collider. Ray casting can also retrieve lots of other useful information, such as the length of the ray (and therefore distance) and the point of impact at the end of the line.

In the given example, a ray cast in the forward direction from our character is demonstrated. In addition to a direction, a ray can be given a specific length, or allowed to cast until it finds an object.

Over the course of the article, we will work with the outpost model. Because this asset has been animated for us, the animation of the outpost's door opening and closing is ready to be triggered once the model is placed into our scene. This can be done with either collision detection or ray casting, and we will explore what you need to do to implement either approach. Let's begin by looking at collision detection and when it may be appropriate to use ray casting instead of, or in addition to, collision detection.

Exploring collisions

When objects collide in any game engine, information about the collision event becomes available. By recording a variety of information at the moment of impact, the game engine can respond in a realistic manner. For example, in a game involving physics, if an object falls to the ground from a height, the engine needs to know which part of the object hit the ground first. With that information, it can correctly and realistically control the object's reaction to the impact. Of course, Unity handles these kinds of collisions and stores the information on your behalf; you only have to retrieve it in order to do something with it.

In the example of opening a door, we would need to detect collisions between the player character's collider and a collider on or near the door. It would make little sense to detect collisions elsewhere, as we would likely need to trigger the door's animation when the player is near enough to walk through it, or to expect it to open for them. As a result, we would check for collisions between the player character's collider and the door's collider. However, we would need to extend the depth of the door's collider so that the player character's collider did not need to be pressed up against the door in order to trigger a collision, as shown in the following illustration.

The problem with extending the depth of the collider is that the interaction becomes unrealistic. In the example of our door, the extended collider protruding from the visual surface of the door means we would bump into an invisible surface that stops our character in their tracks. Although we could use this collision to trigger the opening of the door through animation, the initial bump into the extended collider would seem unnatural to the player and detract from their immersion in the game.
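To make the collision-based approach concrete, here is a minimal sketch of how it might be wired up, assuming a C# script attached to a collider placed just in front of the door and marked as a trigger. The "Player" tag, the "dooropen" clip name, and the use of a trigger rather than a solid, extended collider are illustrative assumptions, not the article's exact implementation.

using UnityEngine;

// A sketch only: attach to a trigger collider sitting just in front of the door.
public class DoorTrigger : MonoBehaviour
{
    public Animation doorAnimation;   // the outpost's Animation component, assigned in the Inspector

    // Called by Unity when another collider enters this trigger volume.
    void OnTriggerEnter(Collider other)
    {
        // React only to the player character's collider.
        if (other.CompareTag("Player"))
        {
            doorAnimation.Play("dooropen");   // hypothetical clip name on the outpost
        }
    }
}

Note that OnTriggerEnter fires only when the collider has Is Trigger enabled, and trigger events require a Rigidbody or a CharacterController on at least one of the objects involved. Using a trigger also means the player passes through the volume rather than bumping into it, which softens, though does not remove, the invisible-wall problem described above.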
So while collision detection will work perfectly well between the player character's collider and the door's collider, there are drawbacks that call for us, as creative game developers, to look for a more intuitive approach. This is where ray casting comes in.

Ray casting

While we can detect collisions between the player character's collider and a collider that fits the door object, a more appropriate method may be to check when the player character is facing the door we expect to open and is within a certain distance of it. This can be done by casting a ray from the player in its forward direction and restricting the ray's length. This means that when approaching the door, the player needn't walk right up to it, or bump into an extended collider, in order for it to be detected. It also ensures that the player cannot walk up to the door facing away from it and still open it; with ray casting, they must be facing the door in order to use it, which makes sense.

In common usage, ray casting is used where collision detection is simply too imprecise to respond correctly. For example, reactions that need to occur with frame-by-frame precision may happen too quickly for a collision to take place at all. In this instance, we need to detect preemptively whether a collision is likely to occur, rather than detecting the collision itself. Let's look at a practical example of this problem.

The frame miss

In the example of a gun in a 3D shooter game, ray casting is used to predict the impact of a gunshot when the gun is fired. Because of the speed of an actual bullet, simulating its flight path toward a target is very difficult to represent visually in a way that would satisfy and make sense to the player. This is down to the frame-based nature of the way games are rendered. A real bullet takes only a tiny amount of time to reach its target, so little that an observer could say it happens instantly, which means that even when rendering over 25 frames of our game per second, the bullet would need to reach its target within only a few frames.

Suppose a bullet is fired from a gun and, to be realistic, has to move at a speed of 500 feet per second. If the frame rate is 25 frames per second, the bullet moves 20 feet per frame. The problem is that a person is only about 2 feet wide, so a bullet simulated in 20-foot steps will very likely skip straight over enemies standing, say, 5 or 25 feet away, targets that should have been hit. This is where prediction comes into play.

Predictive collision detection

Instead of checking for a collision with an actual bullet object, we find out in advance whether a fired bullet would hit its target. By casting a ray forward from the gun object (using its forward direction) on the same frame that the player presses the fire button, we can immediately check which objects intersect the ray. We can do this because rays are evaluated instantly. Think of them like a laser pointer: when you switch on the laser, you do not see the light travel forward because it moves at the speed of light; to us it simply appears. Rays work in the same way, so whenever the player in a ray-based shooting game presses fire, a ray is cast in the direction they are aiming. With this ray, we can retrieve information about the collider that is hit, and by identifying that collider, the game object itself can be addressed and scripted to behave accordingly.
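As a rough sketch of what this might look like, and not the article's own code, the following C# script could sit on the gun object and cast a ray along its forward direction on the frame the fire button is pressed. The "Fire1" input name and the 100-unit range are placeholder assumptions.

using UnityEngine;

// A minimal sketch, assuming a C# script attached to the gun object.
public class GunRaycast : MonoBehaviour
{
    public float range = 100f;   // illustrative maximum shot distance

    void Update()
    {
        // On the frame the fire button is pressed, cast the ray immediately.
        if (Input.GetButtonDown("Fire1"))
        {
            RaycastHit hit;
            if (Physics.Raycast(transform.position, transform.forward, out hit, range))
            {
                // hit.collider identifies what was struck; hit.point is the point of impact.
                Debug.Log("Hit " + hit.collider.name + " at " + hit.point);
            }
        }
    }
}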
Even detailed information, such as the point of impact, can be returned and used to shape the resulting reaction, for example, causing the enemy to recoil in a particular direction. In our shooting game example, we would likely invoke scripting to kill or physically repel the enemy whose collider the ray hits, and thanks to the immediacy of rays, we can do this on the very next frame after the ray intersects the enemy's collider. This gives the effect of a real gunshot, because the reaction is registered immediately. It is also worth noting that shooting games often use the otherwise invisible rays to render brief visible lines to help with aim and give the player visual feedback; do not confuse these lines with the ray casts themselves, as the rays are simply used as a path for line rendering.

Adding the outpost

Before we begin to use both collision detection and ray casting to open the door of our outpost, we'll need to introduce it to the scene. To begin, drag the outpost model from the Project panel to the Scene view and drop it anywhere. Bear in mind you cannot position it as you drag-and-drop; this is done once you have dropped the model (that is, let go of the mouse). Once the outpost is in the Scene, you'll notice its name has also appeared in the Hierarchy panel and that it has automatically become selected. Now you're ready to position and scale it!
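For completeness, here is a sketch of how the ray-casting approach to the door itself might look, again assuming a C# script, this one attached to the player character. The "outpostDoor" tag, the 5-unit range, and the "dooropen" clip name are placeholder assumptions rather than values from the article.

using UnityEngine;

// A minimal sketch, assuming a C# script attached to the player character.
public class DoorRaycaster : MonoBehaviour
{
    public Animation doorAnimation;   // the outpost's Animation component, assigned in the Inspector
    public float doorRange = 5f;      // hypothetical distance at which the door reacts

    void Update()
    {
        RaycastHit hit;

        // Cast a ray from the player's position along its forward direction,
        // limited to doorRange units, and see what (if anything) it hits.
        if (Physics.Raycast(transform.position, transform.forward, out hit, doorRange))
        {
            // React only when the ray hits the door's collider and the clip isn't already playing.
            if (hit.collider.CompareTag("outpostDoor") && !doorAnimation.isPlaying)
            {
                doorAnimation.Play("dooropen");   // hypothetical clip name on the outpost
            }
        }
    }
}

Because the ray is short and follows the player's facing direction, the door opens only when the player is close to it and looking at it, which is exactly the behaviour the ray-casting approach is meant to provide.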

Blender Engine: Characters

Packt
09 Nov 2012
4 min read
(For more resources related to this topic, see here.)

An example — save the whale!

The game can increase its difficulty as we develop our world with different environments, and we can always extend the character's capabilities with new keyboard controls. Obviously, this is just an example. Feel free to change the game, remake it, create a completely different character, or aim for gameplay from another genre. There are thousands of possibilities, and it's fine if you deviate from our idea. What matters is that you have a clear design before you start building your game library. That's all.

How to create a library

If we start working with the Blender Game Engine (BGE), we need a library of all of the objects we use in our game, from the basic character down to the smallest details, such as the display of our enemies' health levels. On the Internet, we can find plenty of 3D objects that can be useful for our game. Let's make sure we use free models and read the instructions that come with each model. Do not forget to credit the author of each object that you download.

Time for action — downloading models from the Internet

Let's go to one of the repositories for Blender, which can be found at http://www.opengameart.org/, and try to search for what is closest to our character. Write sea in the Search box, and choose 3D Art for Art Type, as shown in the following screenshot.

We have some interesting options: a shark, seaweed, and some icebergs to select from. Click on the thumbnail with the name ICEBERGS IN 3D. At the bottom of the page, you will find the downloadable .blend file. Click on it to start the download; remember to use the right mouse button (RMB) before the download begins.

Now, let's try web pages with libraries that offer 3D models in other formats. An example of a very extensive library is http://sketchup.google.com/3dwarehouse/. Write trawler in the Search box, and choose the one you like. In our case, we decided to go for the Google 3D model with the title Trawler boat, 28'. Click on the Download model button and save the file on your hard disk, in a folder of your game.

What just happened?

We have searched the Internet for 3D models, which allows us to start a library of game objects for Blender. Whether they are .blend files (Blender's native format) or another 3D model format, you can import them and work with them. Don't download models that you will not use; the libraries on the Internet grow every day, and we don't need to save every model we like.

Remember that before downloading and using a model, we need to check that it has a free license. If you are releasing your project under some other free and/or open source license, there could be licensing conflicts depending on the license the art is released under. It is your responsibility to verify the compatibility of the art license with the license you are using.

Importing other files into Blender

Before an imported mesh can be used, some scene and mesh prepping in Blender is usually required; this basically cleans up the imported model. Google SketchUp is another free 3D software option. You can build models from scratch, and you can upload or download what you need, as you have seen; people all over the world share what they've made on the Google 3D Warehouse. It's our turn to do the same. Download the program from http://sketchup.com and install it. You can uninstall it later.
Open the boat file in SketchUp, click on Save as, and export the 3D model using the COLLADA format. The *.dae COLLADA format is a common, cross-platform file format that can be imported directly into Blender.