How-To Tutorials - Game Development

370 Articles

Unity 2018.2: Unity's second release of the year!

Sugandha Lahoti
12 Jul 2018
4 min read
It has only been two months since the release of Unity 2018.1, and Unity is back with its second release of the year. Unity 2018.2 builds on the features of Unity 2018.1 such as the Scriptable Render Pipeline (SRP), Shader Graph, and the Entity Component System. It also adds support for managed code debugging on iOS and Android, along with the final release of 64-bit (ARM64) support for Android devices. Let us look at the features in detail.

Scriptable Render Pipeline improvements

As mentioned above, Unity 2018.2 builds on the Scriptable Render Pipeline introduced in 2018.1. This version adds two features:

The SRP batcher: a new Unity engine inner loop for speeding up CPU rendering without compromising GPU performance. It works with the High Definition Render Pipeline (HDRP) and Lightweight Render Pipeline (LWRP), with PC DirectX 11, Metal, and PlayStation 4 currently supported.

Scriptable shader variant stripping: this can manage the number of shader variants generated without affecting iteration time or maintenance complexity, which leads to a dramatic reduction in player build time and data size.

Performance optimizations in the Lightweight and High Definition Render Pipelines

Unity 2018.2 improves the performance of the Lightweight Render Pipeline (LWRP) with optimized tile utilization. This feature adjusts the number of load-and-store operations to tiles in order to optimize the memory use of mobile GPUs. It also shades lights in batches, which reduces overdraw and draw calls.

Unity 2018.2 also brings better high-end visual quality to the High Definition Render Pipeline (HDRP). Improvements include volumetrics, glossy planar reflections, geometric specular AA, proxy screen-space reflection and refraction, mesh decals, and shadow masks.

Improvements in the C# Job System, Entity Component System, and Burst compiler

Unity 2018.2 introduces new reactive system samples in the Entity Component System (ECS) to let developers respond when component state changes and emulate event-driven behavior. Burst compiling for ECS is now available on all editor platforms (Windows, Mac, Linux), and game developers will be able to build AOT for standalone players (desktop, PS4, Xbox, iOS, and Android). The C# Job System allows developers to take full advantage of the multicore processors currently available and write parallel code without worrying about the usual pitfalls of multithreaded programming.

Updates to Shader Graph

Shader Graph, announced as a preview package in Unity 2018.1, allows developers to build shaders visually. Unity 2018.2 adds further improvements, including High Definition Render Pipeline (HDRP) support, manual modification of vertex positions, editing of the reference name for a property, editable paths for graphs, Texture 2D and 3D arrays, and more.

Texture Mipmap Streaming

Game developers can now stream texture mipmaps into memory on demand to reduce the texture memory requirements of a Unity application. This feature speeds up initial load time, gives developers more control, and is simple to enable and manage.

Particle System improvements

Unity 2018.2 brings seven major improvements to the Particle System:

Support for eight UVs, to use more custom data.
MinMaxCurve and MinMaxGradient types in custom scripts, to match the style used by the Particle System UI.
Particle Systems now convert colors into linear space, when appropriate, before uploading them to the GPU.
Two new modes in the Shape module to emit from a sprite or SpriteRenderer component.
Two new APIs for baking the geometry of a Particle System into a mesh.
Show Only Selected (also known as Solo Mode) added to the Play/Restart/Stop controls.
Shaders that use separate alpha textures can now be used with particles, while using sprites in the Texture Sheet Animation module.

Unity Hub

Unity Hub (v1.0) is a new tool, to be released soon, designed to streamline onboarding and setup for all users. It is a centralized location to manage all Unity projects, simplifying how developers find, download, and manage Unity Editor licenses and add-on components. Hub 1.0 will ship with:

Project templates
Custom install locations
The ability to add Asset Store packages to new projects
Modifiable project build targets
Editor components that can be added post-installation

There are additional features such as Vulkan support for the Editor on Windows and Linux, and improvements to the Progressive Lightmapper, 2D games, the SVG importer, and more. Unity 2018.2 will also support .java and .cpp source files as plugins in a Unity project (a minimal sketch of such a plugin appears at the end of this article), along with updates to Cinematics and the Unity core engine.

In total, there are 183 improvements and 1426 fixes in the Unity 2018.2 release. Refer to the release notes to view the full list of new features, improvements, and fixes.

Put your game face on! Unity 2018.1 is now available
Unity plugins for augmented reality application development
Unity 2D & 3D game kits simplify Unity game development for beginners
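The release notes only mention .cpp source plugin support in passing, so here is a minimal, illustrative sketch of what such a plugin source file could look like. The function name and the C# declaration in the comment are assumptions made for this example, not something taken from the release notes; the general pattern (C-linkage exports reached from C# through P/Invoke) is the standard way scripts call into native plugin code.

// A minimal sketch of a .cpp source file added to a Unity project as a plugin.
// Functions are exported with C linkage so that C# scripts can reach them
// through P/Invoke.
extern "C"
{
    // Hypothetical example function. From C# it could be declared as:
    //   [DllImport("__Internal")] static extern float AddNumbers(float a, float b);
    // ("__Internal" applies where the plugin code is linked statically into the player.)
    float AddNumbers(float a, float b)
    {
        return a + b;
    }
}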


Textures in Blender

Packt
22 Oct 2009
10 min read
Procedural Textures vs. Bitmap Textures

Blender has basically two types of textures: procedural textures and bitmap textures. Each one has positive and negative points; which one is best will depend on your project's needs.

Procedural: This kind of texture is generated by the software at rendering time, just like vector lines. This means that it won't depend on any type of image file. The best thing about this type of texture is that it is resolution independent, so we can set the texture to be rendered at high resolutions with minimum loss of quality. The negative point about this kind of texture is that it's harder to get realistic textures with it.

Bitmap: To use this kind of texture, we need an image file, such as a JPEG, PNG, or TGA file. The good thing about these textures is that we can achieve very realistic materials with them quickly. On the other hand, we must find the texture file before using it. And there is more: if you are creating a high resolution render, the texture file must be big.

Texture Library

Do you remember the way we organized materials? We can do exactly the same thing with textures. Besides setting names and storing the Blender files to import and use again later, collecting bitmap textures is another important point. Even if you don't start right away, it's important to know where to look for textures, so here is a small list of websites that provide free texture downloads:

http://www.blender-textures.org
http://www.cgtextures.com
http://blender-archi.tuxfamily.org/textures

Applying Textures

To use a texture, we must apply a material to an object, and then use the texture with this material; we always use the texture inside a material. For instance, to make a plane that simulates a marble floor, we have to use a texture and set up how the surface reacts to light, which can give the surface a proper look of marble.

To do that, we must use the texture panel, which is located right next to the materials button. We can use a keyboard shortcut to open this panel: just hit F6. There is also a way to add a texture in the material panel, with a menu called Texture, but the best way to get all the options is to add the texture on the texture panel.

On this panel, we will be able to see a lot of buttons, which represent the texture channels. Each one of these channels can hold a texture, and the final texture will be a mix of all the channels. If we have a texture in channel 1 and another texture in channel 2, these textures will be blended and represented in the material. Before adding a new texture, we must select a channel by clicking on it. Usually the first channel is selected, but if you want to use another one, just click on that channel. When the channel is selected, just click the Add New button to add a new texture.

The texture controls are very similar to the material controls. We can set a name for the texture at the top, or erase it if we don't want it anymore. With the selector, we can also choose a previously created texture: just click and select.

Now comes the fun part. Having added a texture, we have to choose a texture type. To do that, we click on the texture type combo box. There are a lot of textures, but most of them are procedural textures and we won't use them much. The only texture type that isn't procedural is the Image type. We can use textures like Clouds and Wood to create some effects and give surfaces a more complex look, or even create a grass texture with some dirt on it. But most of the time, the texture type we will be using is the Image type.

Each texture has its own set of parameters to determine how it will look on the object. If we add a Wood texture, it will show its configuration parameters on the right. If we choose Clouds as the texture type, the parameters shown on the right will be completely different. The Image texture type is no different: this kind of texture has its own setup. This is the control panel:

To show how to set up a texture, let's use an image file that represents a wood floor and a plane. We can apply the texture to this plane and set up how it's going to look, testing all the parameters. The first thing to do is assign a material to the plane, and add a texture to this material. We choose the Image option as the texture type, which will show the configuration options for this kind of texture. To apply the image as a texture to the plane, just click on the Load button, situated in the Image menu. When we hit this button, we will be able to select the image file. Locate the image file and the texture will be applied.

If we want more control over how this texture is organized and placed on the plane, we need to learn how the controls work. Every time you make any change to the setup of a texture, the change will be shown in the preview window; use it often to evaluate your changes. Here is a list of what some of the buttons do for the texture:

UseAlpha: If the texture has an alpha channel, we have to press this button for Blender to calculate the channel. An image has an alpha channel when some kind of transparency is stored in the image. For instance, a .png file with a transparent background has an alpha channel. We can use this to create a texture with a logo for a bottle, or to add an image of a tree or person to a plane.

Rot90: With this option we can rotate the texture by 90 degrees.

Repeat: Every texture must be distributed across the object surface, and repeating the texture in lines and columns is the default way to do that.

Extended: If this button is pressed, the texture will be adjusted to fit the whole object surface area.

Clip: With this option, the texture will be cropped and we will be able to show only a part of it. To adjust which parts of the texture will be displayed, use the Min/Max X/Y options.

Xrepeat / Yrepeat: These options determine how many times a texture is repeated, with the Repeat option turned on.

Normal Map: If the texture will be used to create normal maps, press this button. These are textures used to change the face normals of an object.

Still: With this button selected, we determine that the image used as a texture is a still image. This option is marked by default.

Movie: If you have to use a movie file as a texture, press this button. This is very useful if we need to make something like a theatre projection screen or a TV screen.

Sequence: We can use a sequence of images as a texture too; just press this button. It works the same way as with a movie file.

There are a few more parameters, like the Reload button: if your texture file changes in any way, we must press this button for the changes to be picked up by Blender. The X button erases the texture; use it if you need to select another image file.

When we add a texture to any material, an external link to this file is created. This link can be absolute or relative. When we add a texture called "wood.png", which is located at the root of the main hard disk, such as C:, a link to this texture will be created like this: "C:\wood.png". Every time you open this file, the software will look for that file at that exact place. This is an absolute link, but we can use a relative link as well. For instance, when we add a texture located in the same folder as our scene, a relative link will be created. Every time we use an absolute link and we have to move the ".blend" file to another computer, the texture file must go with it.

To embed the image file in the .blend file, just press the gift-package icon. To save all the textures used in a scene, just open the File menu and use the Pack Data option. It will embed all the texture files in the source .blend file.

Mapping

Every time we add a texture to any object, we must choose a mapping type to set up how the texture will be applied to the object. For instance, if we have a wall and apply a wood texture, it must be placed like wallpaper. But for cylindrical or spherical objects, or even walls, we have to set things up in a way that makes the texture adapt to the topology of the surface, to avoid effects such as a stretched texture.

To set this up, we use the mapping options, which are located in the Map Input menu. In this menu, we can choose between four basic mapping types: Cube, Sphere, Flat, and Tube. If you have a wall, choose the option that matches the topology of the model; in this case, the best choice is Cube. Another important option here is the UV button, which allows us to use another very powerful type of texturing, based on UV mapping.

Normal Map

This is a special and useful type of texture that can change the normals of surfaces. If we have a floor and a texture of ceramic tiles, the surface can be represented with the smaller details of that tiling using this kind of map. It's almost like modeling the tiles, but everything is created using just a normal map. To use this kind of texture, we must turn on the Nor button in the Map To menu. When this button is turned on, we can use the Nor slider to determine the intensity of the normal displacement. It works based on the pixel color of the texture: with white pixels, the normals are not affected, and with black pixels, the normals are fully translated.

If you want to optimize the normal mapping, using a texture prepared for this purpose is highly recommended. Some texture libraries even have this type of normal map ready for use; they can be called bump maps too. Here is an example of how we can use them. We take a stone texture and a tiled texture with a white background and black lines. The stone texture is applied to the floor, and the tiled texture is used to create the tiling for the floor. The setup for that is really simple: just apply the texture in a lower channel, and turn off the Col button for this channel. Turn on the Nor button, and this texture will affect only the normals and not the material color. Any image can be used as a normal map, but we will always get better results with a greyscale image prepared to be used as a normal map. Now, just set the Nor intensity with the slider, and see the render.

Turning buttons on with positive and negative values: Some of the buttons in the Map To menu can be turned on with positive and negative values. For instance, the Nor option can be turned on with one click. If we click on it again, the Nor text will turn yellow; this means that Nor is inverted, with negative values. Some other buttons may present the same option.


Normal maps

Packt
19 Jan 2017
12 min read
In this article by Raimondas Pupius, the author of the book Mastering SFML Game Development, we will learn about normal maps and specular maps.

Lighting can be used to create visually complex and breathtaking scenes. One of the massive benefits of having a lighting system is the ability it provides to add extra details to your scene, which wouldn't have been possible otherwise. One way of doing so is using normal maps.

Mathematically speaking, the word "normal" in the context of a surface is simply a directional vector that is perpendicular to the said surface. Consider the following illustration: In this case, the normal is facing up, because that's the direction perpendicular to the plane. How is this helpful? Well, imagine you have a really complex model with many vertices; it'd be extremely taxing to render said model because of all the geometry that would need to be processed with each frame. A clever trick to work around this, known as normal mapping, is to take the information of all of those vertices and save it in a texture that looks similar to this one:

It probably looks extremely funky, especially if you are viewing it in grayscale, but try not to think of this in terms of colors, but directions. The red channel of a normal map encodes the -x and +x values. The green channel does the same for -y and +y values, and the blue channel is used for -z and +z. Looking back at the previous image now, it's easier to confirm which direction each individual pixel is facing. Using this information on geometry that's completely flat would still allow us to light it in such a way that it looks like it has all of that detail; yet it would still remain flat and light on performance. These normal maps can be hand-drawn or simply generated using software such as Crazybump. Let's see how all of this can be done in our game engine.

Implementing normal map rendering

In the case of maps, implementing normal map rendering is extremely simple. We already have all the material maps integrated and ready to go, so at this point it's simply a matter of sampling the texture of the tile-sheet normals:

void Map::Redraw(sf::Vector3i l_from, sf::Vector3i l_to) {
  ...
  if (renderer->UseShader("MaterialPass")) {
    // Material pass.
    auto shader = renderer->GetCurrentShader();
    auto textureName = m_tileMap.GetTileSet().GetTextureName();
    auto normalMaterial = m_textureManager->
      GetResource(textureName + "_normal");
    for (auto x = l_from.x; x <= l_to.x; ++x) {
      for (auto y = l_from.y; y <= l_to.y; ++y) {
        for (auto layer = l_from.z; layer <= l_to.z; ++layer) {
          auto tile = m_tileMap.GetTile(x, y, layer);
          if (!tile) { continue; }
          auto& sprite = tile->m_properties->m_sprite;
          sprite.setPosition(
            static_cast<float>(x * Sheet::Tile_Size),
            static_cast<float>(y * Sheet::Tile_Size));
          // Normal pass.
          if (normalMaterial) {
            shader->setUniform("material", *normalMaterial);
            renderer->Draw(sprite, &m_normals[layer]);
          }
        }
      }
    }
  }
  ...
}

The process is exactly the same as drawing a normal tile to a diffuse map, except that here we have to provide the material shader with the texture of the tile-sheet normal map. Also note that we're now drawing to a normal buffer texture. The same is true for drawing entities:

void S_Renderer::Draw(MaterialMapContainer& l_materials,
  Window& l_window, int l_layer)
{
  ...
  if (renderer->UseShader("MaterialPass")) {
    // Material pass.
    auto shader = renderer->GetCurrentShader();
    auto textures = m_systemManager->
      GetEntityManager()->GetTextureManager();
    for (auto &entity : m_entities) {
      auto position = entities->GetComponent<C_Position>(
        entity, Component::Position);
      if (position->GetElevation() < l_layer) { continue; }
      if (position->GetElevation() > l_layer) { break; }
      C_Drawable* drawable = GetDrawableFromType(entity);
      if (!drawable) { continue; }
      if (drawable->GetType() != Component::SpriteSheet) { continue; }
      auto sheet = static_cast<C_SpriteSheet*>(drawable);
      auto name = sheet->GetSpriteSheet()->GetTextureName();
      auto normals = textures->GetResource(name + "_normal");
      // Normal pass.
      if (normals) {
        shader->setUniform("material", *normals);
        drawable->Draw(&l_window,
          l_materials[MaterialMapType::Normal].get());
      }
    }
  }
  ...
}

We attempt to obtain a normal texture through the texture manager; if one is found, it is drawn to the normal map material buffer. Dealing with particles isn't much different from what we've seen already, except for one little detail:

void ParticleSystem::Draw(MaterialMapContainer& l_materials,
  Window& l_window, int l_layer)
{
  ...
  if (renderer->UseShader("MaterialValuePass")) {
    // Material pass.
    auto shader = renderer->GetCurrentShader();
    for (size_t i = 0; i < container->m_countAlive; ++i) {
      if (l_layer >= 0) {
        if (positions[i].z < l_layer * Sheet::Tile_Size) { continue; }
        if (positions[i].z >= (l_layer + 1) * Sheet::Tile_Size) { continue; }
      } else if (positions[i].z < Sheet::Num_Layers * Sheet::Tile_Size) {
        continue;
      }
      // Normal pass.
      shader->setUniform("material", sf::Glsl::Vec3(0.5f, 0.5f, 1.f));
      renderer->Draw(drawables[i],
        l_materials[MaterialMapType::Normal].get());
    }
  }
  ...
}

As you can see, we're actually using the material value shader in order to give the particles static normals, which always point towards the camera. A normal map buffer should look something like this after you render all the normal maps to it:

Changing the lighting shader

Now that we have all of this information, let's actually use it when calculating the illumination of the pixels inside the light pass shader:

uniform sampler2D LastPass;
uniform sampler2D DiffuseMap;
uniform sampler2D NormalMap;
uniform vec3 AmbientLight;
uniform int LightCount;
uniform int PassNumber;

struct LightInfo {
  vec3 position;
  vec3 color;
  float radius;
  float falloff;
};

const int MaxLights = 4;
uniform LightInfo Lights[MaxLights];

void main()
{
  vec4 pixel = texture2D(LastPass, gl_TexCoord[0].xy);
  vec4 diffusepixel = texture2D(DiffuseMap, gl_TexCoord[0].xy);
  vec4 normalpixel = texture2D(NormalMap, gl_TexCoord[0].xy);
  vec3 PixelCoordinates =
    vec3(gl_FragCoord.x, gl_FragCoord.y, gl_FragCoord.z);
  vec4 finalPixel = gl_Color * pixel;
  vec3 viewDirection = vec3(0, 0, 1);
  if(PassNumber == 1) { finalPixel *= vec4(AmbientLight, 1.0); } // IF FIRST PASS ONLY!
  vec3 N = normalize(normalpixel.rgb * 2.0 - 1.0);
  for(int i = 0; i < LightCount; ++i) {
    vec3 L = Lights[i].position - PixelCoordinates;
    float distance = length(L);
    float d = max(distance - Lights[i].radius, 0);
    L /= distance;
    float attenuation = 1 / pow(d/Lights[i].radius + 1, 2);
    attenuation = (attenuation - Lights[i].falloff) / (1 - Lights[i].falloff);
    attenuation = max(attenuation, 0);
    float normalDot = max(dot(N, L), 0.0);
    finalPixel += (diffusepixel *
      ((vec4(Lights[i].color, 1.0) * attenuation))) * normalDot;
  }
  gl_FragColor = finalPixel;
}

First, the normal map texture needs to be passed to the shader as well as sampled, which is where the first two highlighted lines of code come in.
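As a side note, the channel encoding described earlier can be made concrete with a few lines of C++. This helper is not part of the book's engine, just an illustration of how a sampled 8-bit texel maps to the direction it stores:

#include <SFML/Graphics.hpp>
#include <SFML/System/Vector3.hpp>
#include <cmath>

// Decode an 8-bit-per-channel normal map texel into a unit direction vector.
// Each channel in the range 0..255 maps to -1..+1.
sf::Vector3f DecodeNormal(const sf::Color& texel)
{
    sf::Vector3f n(texel.r / 255.f * 2.f - 1.f,
                   texel.g / 255.f * 2.f - 1.f,
                   texel.b / 255.f * 2.f - 1.f);
    float length = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (length > 0.f) { n /= length; } // Normalize so only the direction remains.
    return n;
}

// Example: the "flat" value (128, 128, 255), the 8-bit equivalent of the
// (0.5, 0.5, 1.0) constant used for particles, decodes to roughly (0, 0, 1),
// a normal pointing straight at the camera.

This is exactly what the normalize(normalpixel.rgb * 2.0 - 1.0) line above does, only with floating-point color values in the 0 to 1 range.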
Once this is done, for each light we're drawing on the screen, the normal directional vector is calculated. This is done by first making sure that it can go into the negative range and then normalizing it. A normalized vector only represents a direction. Since the color values range from 0 to 255, negative values cannot be represented directly, which is why we first bring them into the right range by multiplying them by 2.0 and subtracting 1.0. A dot product is then calculated between the normal vector and the normalized L vector, which now represents the direction from the light to the pixel. How much a pixel is lit up by a specific light is directly contingent upon the dot product. A dot product is an algebraic operation that takes two vectors and produces a scalar equal to the product of their magnitudes and the cosine of the angle between them; for the unit vectors used here it is clamped to the range 0.0 to 1.0 and essentially represents how closely aligned they are. We use this property to light pixels less and less, given greater and greater angles between their normals and the light. Finally, the dot product is used again when calculating the final pixel value: the entire influence of the light is multiplied by it, which allows every pixel to be drawn differently, as if it had some underlying geometry that was pointing in a different direction.

The last thing left to do now is to pass the normal map buffer to the shader in our C++ code:

void LightManager::RenderScene() {
  ...
  if (renderer->UseShader("LightPass")) {
    // Light pass.
    ...
    shader->setUniform("NormalMap",
      m_materialMaps[MaterialMapType::Normal]->getTexture());
    ...
  }
  ...
}

This effectively enables normal mapping and gives us beautiful results such as this: the leaves, the character, and pretty much everything in this image now look like they have definition, ridges, and crevices; everything is lit as if it had geometry, although it's paper-thin. Note the lines around each tile in this particular instance. This is one of the main reasons why normal maps for pixel art, such as tile sheets, shouldn't be automatically generated; the generator can sample the tiles adjacent to each tile and incorrectly add bevelled edges.

Specular maps

While normal maps provide us with the possibility to fake how bumpy a surface is, specular maps allow us to do the same with the shininess of a surface. This is what the same segment of the tile sheet we used as an example for a normal map looks like in a specular map: it's not as complex as a normal map, since it only needs to store one value: the shininess factor. We can leave it up to each light to decide how much shine it will cast upon the scenery by letting it have its own values:

struct LightBase {
  ...
  float m_specularExponent = 10.f;
  float m_specularStrength = 1.f;
};

Adding support for specularity

Similar to normal maps, we need to use the material pass shader to render to a specularity buffer texture:

void Map::Redraw(sf::Vector3i l_from, sf::Vector3i l_to) {
  ...
  if (renderer->UseShader("MaterialPass")) {
    // Material pass.
    ...
    auto specMaterial = m_textureManager->GetResource(
      textureName + "_specular");
    for (auto x = l_from.x; x <= l_to.x; ++x) {
      for (auto y = l_from.y; y <= l_to.y; ++y) {
        for (auto layer = l_from.z; layer <= l_to.z; ++layer) {
          ...
          // Normal pass.
          // Specular pass.
          if (specMaterial) {
            shader->setUniform("material", *specMaterial);
            renderer->Draw(sprite, &m_speculars[layer]);
          }
        }
      }
    }
  }
  ...
}

Once again we attempt to obtain the specularity texture; if it is found, it is passed down to the material pass shader. The same is true when rendering entities:

void S_Renderer::Draw(MaterialMapContainer& l_materials,
  Window& l_window, int l_layer)
{
  ...
  if (renderer->UseShader("MaterialPass")) {
    // Material pass.
    ...
    for (auto &entity : m_entities) {
      ...
      // Normal pass.
      // Specular pass.
      if (specular) {
        shader->setUniform("material", *specular);
        drawable->Draw(&l_window,
          l_materials[MaterialMapType::Specular].get());
      }
    }
  }
  ...
}

Particles, on the other hand, also use the material value pass shader:

void ParticleSystem::Draw(MaterialMapContainer& l_materials,
  Window& l_window, int l_layer)
{
  ...
  if (renderer->UseShader("MaterialValuePass")) {
    // Material pass.
    auto shader = renderer->GetCurrentShader();
    for (size_t i = 0; i < container->m_countAlive; ++i) {
      ...
      // Normal pass.
      // Specular pass.
      shader->setUniform("material", sf::Glsl::Vec3(0.f, 0.f, 0.f));
      renderer->Draw(drawables[i],
        l_materials[MaterialMapType::Specular].get());
    }
  }
}

For now, we don't want any of them to be specular at all. This can obviously be tweaked later on, but the important thing is that we have that functionality available and yielding results, such as the following:

This specularity texture needs to be sampled inside the light-pass shader, just like a normal texture. Let's see what this involves.

Changing the lighting shader

Just as before, a uniform sampler2D needs to be added to sample the specularity of a particular fragment:

uniform sampler2D LastPass;
uniform sampler2D DiffuseMap;
uniform sampler2D NormalMap;
uniform sampler2D SpecularMap;
uniform vec3 AmbientLight;
uniform int LightCount;
uniform int PassNumber;

struct LightInfo {
  vec3 position;
  vec3 color;
  float radius;
  float falloff;
  float specularExponent;
  float specularStrength;
};

const int MaxLights = 4;
uniform LightInfo Lights[MaxLights];
const float SpecularConstant = 0.4;

void main()
{
  ...
  vec4 specularpixel = texture2D(SpecularMap, gl_TexCoord[0].xy);
  vec3 viewDirection = vec3(0, 0, 1); // Looking at positive Z.
  ...
  for(int i = 0; i < LightCount; ++i){
    ...
    float specularLevel = 0.0;
    specularLevel = pow(max(0.0, dot(reflect(-L, N), viewDirection)),
      Lights[i].specularExponent * specularpixel.a) * SpecularConstant;
    vec3 specularReflection = Lights[i].color * specularLevel *
      specularpixel.rgb * Lights[i].specularStrength;
    finalPixel += (diffusepixel *
      ((vec4(Lights[i].color, 1.0) * attenuation)) +
      vec4(specularReflection, 1.0)) * normalDot;
  }
  gl_FragColor = finalPixel;
}

We also need to add the specular exponent and strength to each light's struct, as they are now part of it. Once the specular pixel is sampled, we need to set up the direction of the camera as well; since that's static, we can leave it as is in the shader. The specularity of the pixel is then calculated by taking into account the dot product between the pixel's normal and the light, the color of the specular pixel itself, and the specular strength of the light. Note the use of a specular constant in the calculation. This is a value that can and should be tweaked in order to obtain the best results, as 100% specularity rarely ever looks good.

Then, all that's left is to make sure the specularity texture is also sent to the light-pass shader, in addition to each light's specular exponent and strength values:

void LightManager::RenderScene() {
  ...
  if (renderer->UseShader("LightPass")) {
    // Light pass.
    ...
    shader->setUniform("SpecularMap",
      m_materialMaps[MaterialMapType::Specular]->getTexture());
    ...
    for (auto& light : m_lights) {
      ...
      shader->setUniform(id + ".specularExponent", light.m_specularExponent);
      shader->setUniform(id + ".specularStrength", light.m_specularStrength);
      ...
    }
  }
}

The result may not be visible right away, but upon closer inspection of a moving light stream, we can see that correctly mapped surfaces have a glint that moves around with the light. While this is nearly perfect, there's still some room for improvement.

Summary

Lighting is a very powerful tool when used right. Different aspects of a material may be emphasized depending on the setup of the game level, additional levels of detail can be added without too much overhead, and the overall aesthetics of the project will be lifted to new heights. The full version of Mastering SFML Game Development offers all of this and more by not only utilizing normal and specular maps, but also using 3D shadow-mapping techniques to create omni-directional point light shadows that breathe new life into the game world.

Further resources on this subject:

Common Game Programming Patterns [article]
Sprites in Action [article]
Warfare Unleashed: Implementing Gameplay [article]


Sprites in Action

Packt
20 Feb 2015
6 min read
In this article by Milcho G. Milchev, author of the book SFML Essentials, we will see how we can use SFML to create a customized animation using a sequence of images. We will also see how SFML renders an animation.

Animation exists in many forms. The traditional approach to animation is drawing a sequence of images which differ slightly from each other, and showing them on a screen one after the other. Even though this approach is still widely used, there are more elegant alternatives. For example, drawing (or modelling in 3D) only the limbs of a character and then animating how they move relative to time is a technique that saves a lot of time for artists. It also creates smoother results, because not every frame of the animation has to be redrawn. In this book, we are going to explore only the traditional approach, since it is the simpler solution for programmers, and in many cases it is enough to bring life to any sprite.

The setup

As we established earlier, the traditional approach involves a set of images that need to change over time. For our example, we will use a crystal, which rotates around its centre. Typically, an animation is kept in a single file (a sprite sheet), where each frame of the animation is stored, and in most cases each frame is the same size: the size of the object. In our example, the sprite is 32 x 32 pixels and has eight frames, which play for one second. Here is what the sprite sheet looks like:

The following screenshot shows our animation setup in code:

First of all, note that we are using the AssetManager class to load our sprite sheet. The next line sets the texture rectangle of the sprite to target the first image in our sprite sheet. Here is what this means in terms of the sprite sheet texture:

Next, we will move this texture rectangle once in a while to simulate a rotating crystal. In the previous code, we set the number of frames to eight (as many as there are in the sprite sheet), and set the time of the animation to one second in total, which means that each frame stays for about 0.125 seconds (the animation duration divided by the number of frames) at a time. We know what we want to do now, so let's do it:

In the code, we first measure the delta time since the last frame and add it to the accumulated time. The last two lines of the code actually do all the work. The first one looks intimidating at first glance, but it is simply a way to choose the correct frame, based on how much time has passed and how long the animation is. The formula timeAsSeconds / animationDuration gives us the time relative to the animation duration. So let's say that 0.4 seconds have passed and our animation duration is 1 second. This leaves us with 0.4 seconds in local animation time. Multiply this 0.4 seconds by the number of frames, and we get the following result:

0.4 * 8 = 3.2

This tells us which frame we should be on at the moment, and how long we have been there. The current frame index is the whole part of 3.2 (which is three), and the fractional part (0.2) is how long we have been on that frame. In this case, we are only interested in the current frame, so we take that by casting the whole expression to int. This rounds the number down if the number is positive (which it always is in this case). The last part, % frameNum, is there to restart the animation when it goes beyond its last frame.
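Since the code for this example originally appeared as screenshots that are not reproduced here, the following is a minimal, self-contained reconstruction of the setup and update logic described above. It assumes SFML 2.x; the book's AssetManager class is replaced by a direct sf::Texture load, and crystal.png is a placeholder file name.

#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(256, 256), "Rotating crystal");

    // The article loads the sprite sheet through its AssetManager class;
    // loading the texture directly keeps this sketch self-contained.
    sf::Texture sheet;
    if (!sheet.loadFromFile("crystal.png")) // placeholder file name
        return 1;

    sf::Sprite sprite(sheet);
    const sf::Vector2i spriteSize(32, 32);      // each frame is 32 x 32 pixels
    const int   frameNum = 8;                   // eight frames in the sheet
    const float animationDuration = 1.f;        // the full loop plays in one second
    sprite.setTextureRect(sf::IntRect(0, 0, spriteSize.x, spriteSize.y));

    sf::Clock frameClock;
    float accumulatedTime = 0.f;

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();
        }

        // Measure the delta time since the last frame and accumulate it.
        accumulatedTime += frameClock.restart().asSeconds();

        // Pick the current frame: scale elapsed time to animation time,
        // multiply by the frame count, truncate, and wrap with %.
        int animFrame = static_cast<int>(
            (accumulatedTime / animationDuration) * frameNum) % frameNum;

        // Move the texture rectangle to the chosen frame (frames lie on the x axis).
        sprite.setTextureRect(sf::IntRect(
            animFrame * spriteSize.x, 0, spriteSize.x, spriteSize.y));

        window.clear();
        window.draw(sprite);
        window.display();
    }
    return 0;
}

The last two statements of the loop body are the ones the text walks through: the cast-and-modulo expression picks the frame, and setTextureRect moves the sampled region along the x axis.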
So, in the case where 2.3 seconds have passed, we have the following result:

2.3 * 8 = 18.4

We do not have a 19th frame to show, so we show the frame which corresponds to that in our local scale [0…7]. In this case:

18 / 8 = 2 (remainder 2)

Since the % operator takes the remainder of a division, we are left to show the frame with index two, which is the third frame. (We start counting from zero as programmers, remember?)

The last line of the code sets the texture rectangle to the current frame. The process is quite straightforward: since we only have frames on the x axis, we do not need to worry about the y coordinate of the rectangle, and so we set it to zero. The x is computed by animFrame * spriteSize.x, which multiplies the current frame by the width of the frame. In this case, the current frame is two and the frame's width is 32, so we get:

2 * 32 = 64

Here is what the texture rectangle will look like:

The last thing we need to do is render the sprite inside the render frame and we are done. If everything goes smoothly, we should have a rotating crystal on the screen with eight frames. With this technique, we can animate sprites of all kinds, no matter how many frames they have or how long the animation is. There are problems with the current approach though: the code looks messy, and it is only useful for a single animation. What if we want multiple animations for a sprite (rotating the crystal in a vertical direction as well), and we want to be able to switch between them? Currently, we would have to duplicate all our code for each animation and each animated sprite. In the next section, we will talk about how to avoid these issues by building a fully featured animation system that requires as little code duplication as possible.

Summary

Sprite animations seem quite easy now, don't they? Just keep in mind that there is a lot more to explore when it comes to animation. Not only are there different techniques for doing them, but also perfecting what we've developed so far might take some time. Fortunately, what we have so far will work as is in the majority of cases, so I would say that you are pretty much set to go. If you want to dig deeper, buy the book SFML Essentials, which, in a simple step-by-step fashion, uses the SFML library to create realistic-looking animations and to develop 2D and 3D games.

Further resources on this subject:

VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [article]
Translating a File in SDL Trados Studio [article]
Adding Finesse to Your Game [article]


The Zombie Attacks!

Packt
24 Sep 2015
9 min read
In this article by Jamie Dean, author of the book Unity Character Animation with Mecanim: RAW, we will demonstrate the process of importing and animating a rigged character in Unity. We will cover:

Starting a blank Unity project and importing the necessary packages
Importing a rigged character model in the FBX format and adjusting the import settings

Typically, an enemy character such as this will have a series of different animation sequences, which will be imported separately or together from a 3D package. In this case, our animation sequences are included in separate files. We will begin by creating the Unity project.

Setting up the project

Before we start exploring the animation workflow with Mecanim's tools, we need to set up the Unity project:

1. Create a new project within Unity by navigating to File | New Project....
2. When prompted, choose an appropriate name and location for the project.
3. In the Unity - Project Wizard dialog that appears, check the relevant boxes for the Character Controller.unityPackage and Scripts.unityPackage packages.
4. Click on the Create button. It may take a few minutes for Unity to initialize.
5. When the Unity interface appears, import the PACKT_cawm package by navigating to Assets | Import Package | Custom Package.... The Import package... window will appear.
6. Navigate to the location where you unzipped the project files, select the unity package, and click on Open. The assets package will take a little time to decompress.
7. When the Importing Package checklist appears, click on the Import button in the bottom-right of the window.

Once the assets have finished importing, you will start with a default blank scene.

Importing our enemy

Now, it is time to import our character model:

1. Minimize Unity.
2. Navigate to the location where you unzipped the project files. Double-click on the Models folder to view its contents.
3. Double-click on the zombie_m subfolder to view its contents. The folder contains an FBX file containing the rigged male zombie model and a separate subfolder containing the associated textures.
4. Open Unity and resize the window so that both Unity and the zombie_m folder contents are visible.
5. In Unity, click on the Assets folder in the Project panel.
6. Drag the zombie_m FBX asset into the Assets panel to import it. Because the FBX file contains a normal map, a window will pop up asking if you want to set this file's import settings to read it correctly.
7. Click on the Fix Now button.

FBX files can contain embedded bitmap textures, which can be imported with the model. This will create subfolders containing the materials and textures within the folder where the model has been imported. Leaving the materials and textures as subfolders of the model will make them difficult to find within the project. The zombie model and two folders should now be visible in the FBX_Imports folder in the Assets panel. In the next step, we will move the imported material and texture assets into the appropriate folders in the Unity project.

Organizing the material and textures

The material and textures associated with the zombie_m model are currently located within the FBX_Imports folder. We will move these into different folders to organize them within the hierarchy of our project:

1. Double-click on the Materials folder and drag the material asset contained within it into the PACKT_Materials folder in the Project panel.
2. Return to the FBX_Imports folder by clicking on its title at the top of the Assets panel interface.
3. Double-click on the textures folder. This will be named to be consistent with the model.
4. Drag the two bitmap textures into the PACKT_Textures folder in the Project panel.
5. Return to the FBX_Imports folder and delete the two empty subfolders.

The moved material and textures will still be linked to the model. We will make sure of this by instancing it in the current empty scene:

6. Drag the zombie_m asset into the Hierarchy panel. It may not be immediately visible within the Scene view due to the default import scale settings. We will take care of this in the next step.

Adjusting the import scale

Unity's import settings can be adjusted to account for the different tools commonly used to create 2D and 3D assets. Import settings are adjusted in the Inspector panel, which appears on the right of the Unity interface by default:

1. Click on the zombie_m game object within the Hierarchy panel. This will bring up the file's import settings in the Inspector panel.
2. Click on the Model tab.
3. In the Scale Factor field, highlight the current number and type 1. The character model has been modeled to scale in meters to make it compatible with Unity's units. All 3D software applications have their own native scale; Unity does a pretty good job of accommodating all of them, but it often helps to know which software was used to create an asset.
4. Scroll down until the Materials settings are visible.
5. Uncheck the Import Materials checkbox. Now that we have our textures and materials organized within the project, we want to make sure they are not continuously imported into the same folder as the model.
6. Leave the remaining Model import settings at their default values. We will be discussing these later on in the article, when we demonstrate the animation import.
7. Click on the Apply button. You may need to scroll down within the Inspector panel to see this.

The zombie_m character should now be visible in the Scene view. This character model is a medium resolution model (4410 triangles) and has a single 1024 x 1024 albedo texture and separate 1024 x 1024 specular and normal maps. The character has been rigged with a basic skeleton; the rigging process is essential if the model is to be animated.

We need to save our progress before we go any further:

1. Save the scene by navigating to File | Save Scene as....
2. Choose an appropriate filename for the scene.
3. Click on the Save button.

Despite the fact that we have only added a single game object to the default scene, there are more steps that we will need to take to set up the character, and it will be convenient for us to save the current setup in case anything goes wrong.

In character animation, there are looping and single-shot animation sequences. Some animation sequences, such as walk, run, and idle, are usually seamless loops designed to play back-to-back without the player being aware of where they start and end. Other sequences (typically shooting, hitting, being injured, or dying) are often single-shot animations, which do not need to loop. We will start with this kind, and discuss looping animation sequences later in the article.

In order to use Mecanim's animation tools, we need to set up the character's Avatar so that the character's hierarchy of bones is recognized and can be used correctly within Unity.

Adjusting the rig import settings and creating the Avatar

Now that we have imported the model, we will need to adjust the import settings so that the character functions correctly within our scene. Select zombie_m in the Assets panel. The asset's import settings should become visible within the Inspector panel; this settings rollout contains three tabs: Model, Rig, and Animations. Since we have already adjusted the Scale Factor within the Model import settings, we will move on to the Rig import settings, where we can define what kind of skeleton our character has.

Choosing the appropriate rig import settings

Mecanim has three options for importing rigged models: Legacy, Generic, and Humanoid. It also has a None option, which should be applied to models that are not intended to be animated.

Legacy format was previously the only option for importing skeletal animation in Unity. It is not possible to retarget animation sequences between models using Legacy, and setting up functioning state machines requires quite a bit of scripting. It is still a useful tool for importing models with fewer animation sequences and for simple mechanical animations. Legacy format animations are not compatible with Mecanim.

Generic is one of the new animation formats that are compatible with Mecanim's animator controllers. It does not have the full functionality of Mecanim's character animation tools. Animation sequences imported with the Generic format cannot be retargeted and are best used for quadrupeds, mechanical devices, and pretty much anything except a character with two arms and two legs.

The Humanoid animation type allows the full use of Mecanim's powerful toolset. It requires a minimum of 15 bones, and assumes that your rig is roughly human shaped, with a pair of arms and legs. It can accommodate many more intermediary joints and some basic facial animation. One of the greatest benefits of using the Humanoid type is that it allows animation sequences to be retargeted or adapted to work with different rigs. For instance, you may have a detailed player character model with a full skeletal rig (including finger and toe joints); maybe you want to reuse this character's idle sequence with a background character that is much less detailed and has a simpler arrangement of bones. Mecanim makes it possible to reuse purpose-built motion sequences and even to create usable sequences from motion capture data.

Now that we have introduced these three rig types, we need to choose the appropriate setting for our imported zombie character, which in this case is Humanoid:

1. In the Inspector panel, click on the Rig tab.
2. Set the Animation Type field to Humanoid to suit our character skeleton type.
3. Leave Avatar Definition set to Create From This Model.
4. Optimize Game Objects can be left checked.
5. Click on the Apply button to save the settings and transfer all of the changes that you have made to the instance in the scene.

The Humanoid animation type is the only one that supports retargeting. So, if you are importing animations that are not unique and will be used for multiple characters, it is a good idea to use this setting.

Summary

In this article, we covered the major steps involved in animating a premade character using the Mecanim system in Unity. We started with the FBX import settings for the model and the rig, and covered the creation of the Avatar by defining the bones in the Avatar Definition settings.

Further resources on this subject:

Adding Animations [article]
2D Twin-stick Shooter [article]
Skinning a character [article]


Implementing lighting & camera effects in Unity 2018

Amarabha Banerjee
04 May 2018
12 min read
Today, we will explore lighting & camera effects in Unity 2018. We will start with cameras to include perspectives, frustums, and Skyboxes. Next, we will learn a few uses of multiple cameras to include mini-maps. We will also cover the different types of lighting, explore reflection probes, and conclude with a look at shadows. Working with cameras Cameras render scenes so that the user can view them. Think about the hidden complexity of this statement. Our games are 3D, but people playing our games view them on 2D displays such as television, computer monitors, or mobile devices. Fortunately for us, Unity makes implementing cameras easy work. Cameras are GameObjects and can be edited using transform tools in the Scene view as well as in the Inspector panel. Every scene must have at least one camera. In fact, when a new scene is created, Unity creates a camera named Main Camera. As you will see later in this chapter, a scene can have multiple cameras. In the Scene view, cameras are indicated with a white camera silhouette, as shown in the following screenshot: When we click our Main Camera in the Hierarchy panel, we are provided with a Camera Preview in the Scene view. This gives us a preview of what the camera sees as if it were in game mode. We also have access to several parameters via the Inspector panel. The Camera component in the Inspector panel is shown here: Let's look at each of these parameters with relation to our Cucumber Beetle game: The Clear Flags parameter lets you switch between Skybox, Solid Color, Depth Only, and Don't Clear. The selection here informs Unity which parts of the screen to clear. We will leave this setting as Skybox. You will learn more about Skyboxes later in this chapter. The Background parameter is used to set the default background fill (color) of your game world. This will only be visible after all game objects have been rendered and if there is no Skybox. Our Cucumber Beetle game will have a Skybox, so this parameter can be left with the default color. The Culling Mask parameter allows you to select and deselect the layers you want the camera to render. The default selection options are Nothing, Everything, Default, TransparentFX, Ignore Raycast, Water, and UI. For our game, we will select Everything. If you are not sure which layer a game object is associated with, select it and look at the Layer parameter in the top section of the Inspector panel. There you will see the assigned layer. You can easily change the layer as well as create your own unique layers. This gives you finite rendering control. The Projection parameter allows you to select which projection, perspective or orthographic, you want for your camera. We will cover both of those projections later in this chapter. When perspective projection is selected, we are given access to the Field of View parameter. This is for the width of the camera's angle of view. The value range is 1-179°. You can use the slider to change the values and see the results in the Camera Preview window. When orthographic projection is selected, an additional Size parameter is available. This refers to the viewport size. For our game, we will select perspective projection with the Field of View set to 60. The Clipping Planes parameters include Near and Far. These settings set the closest and furthest points, relative to the camera, that rendering will happen at. For now, we will leave the default settings of 0.3 and 1000 for the Near and Far parameters, respectively. 
The Viewport Rect parameter has four components – X, Y, W, and H – that determine where the camera will be drawn on the screen. As you would expect, the X and Y components refer to horizontal and vertical positions, and the W and H components refer to width and height. You can experiment with these values and see the changes in the Camera Preview. For our game, we will leave the default settings. The Depth parameter is used when we implement more than one camera. We can set a value here to determine the camera's priority in relation to others. Larger values indicate a higher priority. The default setting is sufficient for our game. The Rendering Path parameter defines what rendering methods our camera will use. The options are Use Graphics Settings, Forward, Deferred, Legacy Vertex Lit, and Legacy Deferred (light prepass). We will use the Use Graphics Settings option for our game, which also uses the default setting. The Target Texture parameter is not something we will use in our game. When a render texture is set, the camera is not able to render to the screen. The Occlusion Culling parameter is a powerful setting. If enabled, Unity will not render objects that are occluded, or not seen by the camera. An example would be objects inside a building. If the camera can currently only see the external walls of the building, then none of the objects inside those walls can be seen. So, it makes sense to not render those. We only want to render what is absolutely necessary to help ensure our game has smooth gameplay and no lag. We will leave this as enabled for our game. The Allow HDR parameter is a checkbox that toggles a camera's High Dynamic Range (HDR) rendering. We will leave the default setting of enabled for our game. The Allow MSAA parameter is a toggle that determines whether our camera will use a Multisample Anti-Aliasing (MSAA) render target. MSAA is a computer graphics optimization technique and we want this enabled for our game. Understanding camera projections There are two camera projections used in Unity: perspective and orthographic. With perspective projection, the camera renders a scene based on the camera angle, as it exists in the scene. Using this projection, the further away an object is from the camera, the smaller it will be displayed. This mimics how we see things in the real world. Because of the desire to produce realistic games, or games that approximate the realworld, perspective projection is the most commonly used in modern games. It is also what we will use in our Cucumber Beetle game. The other projection is orthographic. An orthographic perspective camera renders a scene uniformly without any perspective. This means that objects further away will not be displayed smaller than objects closer to the camera. This type of camera is commonly used for top-down games and is the default camera projection used in 2D and Unity's UI system. We will use perspective projection for our Cucumber Beetle game. Orientating your frustum When a camera is selected in the Hierarchy view, its frustum is visible in the Scene view. A frustum is a geometric shape that looks like a pyramid that has had its top cut off, as illustrated here: The near, or top, plane is parallel to its base. The base is also referred to as the far plane. The frustum's shape represents the viable region of your game. Only objects in that region are rendered. Using the camera object in Scene view, we can change our camera's frustum shape. 
Creating a Skybox When we create game worlds, we typically create the ground, buildings, characters, trees, and other game objects. What about the sky? By default, there will be a textured blue sky in your Unity game projects. That sky is sufficient for testing but does not add to an immersive gaming experience. We want a bit more realism, such as clouds, and that can be accomplished by creating a Skybox. A Skybox is a six-sided cube visible to the player beyond all other objects. So, when a player looks beyond your objects, what they see is your Skybox. As we said, Skyboxes are six-sided cubes, which means you will need six separate images that can essentially be clamped to each other to form the cube. The following screenshot shows the Default Skybox that Unity projects start with as well as the completed Custom Skybox you will create in this section: Perform the following steps to create a Skybox: In the Project panel, create a Skybox subfolder in the Assets folder. We will use this folder to store our textures and materials for the Skybox. Drag the provided six Skybox images, or your own, into the new Skybox folder. Ensure the Skybox folder is selected in the Project panel. From the top menu, select Assets | Create | Material. In the Project panel, name the material Skybox. With the Skybox material selected, turn your attention to the Inspector panel. Select the Shader drop-down menu and select SkyBox | 6 Sided. Use the Select button for each of the six images and navigate to the images you added in step 2. Be sure to match the appropriate texture to the appropriate cube face. For example, the SkyBox_Front texture matches the Front[+Z] cube face on the Skybox Material. In order to assign our new Skybox to our scene, select Window | Lighting | Settings from the top menu. This will bring up the Lighting settings dialog window. In the Lighting settings dialog window, click on the small circle to the right of the Skybox Material input field. Then, close the selection window and the Lighting window. Refer to the following screenshot: You will now be able to see your Skybox in the Scene view. When you click on the Camera in the Hierarchy panel, you will also see the Skybox as it will appear from the camera's perspective. Be sure to save your scene and your project. Using multiple cameras Our Unity games must have a least one camera, but we are not limited to using just one. As you will see we will attach our main camera, or primary camera, to our player character. It will be as if the camera is following the character around the game environment. This will become the eyes of our character. We will play the game through our character's view. A common use of a second camera is to create a mini-map that can be seen in a small window on top of the game display. These mini-maps can be made to toggle on and off or be permanent/fixed display components. Implementations might consist of a fog-of-war display, a radar showing enemies, or a global top-down view of the map for orientation purposes. You are only limited by your imagination. Another use of multiple cameras is to provide the player with the ability to switch between third-person and first-person views. You will remember that the first-person view puts the player's arms in view, while in the third-person view, the player's entire body is visible. We can use two cameras in the appropriate positions to support viewing from either camera. 
In a game, you might make this a toggle, such as with the C keyboard key, that switches from one camera to the other. Depending on what is happening in the game, the player might enjoy this ability.

Some single-player games feature multiple playable characters. Giving the player the ability to switch between these characters gives them greater control over the game strategy. To achieve this, we would need to have cameras attached to each playable character and then give the player the ability to swap characters. We would do this through scripting. This is a pretty advanced implementation of multiple characters.

Another use of multiple cameras is adding specialty views in a game. These specialty views might include looking through a door's peep-hole, looking through binoculars at the top of a skyscraper, or even looking through a periscope. We can attach cameras to objects and change their viewing parameters to create unique camera uses in our games. We are only limited by our own game designs and imagination.

We can also use cameras as cameras. That's right! We can use the camera game object to simulate actual in-game cameras. One example is implementing security cameras in a prison-escape game.

Working with lighting

In the previous sections, we explored the uses of cameras for Unity games. Just like in the real world, cameras need lights to show us objects. In Unity games, we use multiple lights to illuminate the game environment. In Unity, we have both dynamic lighting techniques as well as light baking options for better performance. We can add numerous light sources throughout our scenes and selectively enable or disable an object's ability to cast or receive shadows. This level of specificity gives us tremendous opportunity to create realistic game scenes.

Perhaps the secret behind Unity's ability to so realistically render light and shadows is that Unity models the actual behavior of lights and shadows. Real-time global illumination gives us the ability to instantiate multiple lights in each scene, each with the ability to directly or indirectly impact objects in the scene that are within range of the light sources.

We can also add and manipulate ambient light in our game scenes. This is often done with Skyboxes, a tri-colored gradient, or even a single color. Each new scene in Unity has default ambient lighting, which we can control by editing the values in the Lighting window. In that window, you have access to the following settings:

Environment
Real-time Lighting
Mixed Lighting
Lightmapping Settings
Other Settings
Debug Settings

No changes to these are required for our game at this time. We have already set the environmental lighting to our Skybox.

When we create our scenes in Unity, we have three options for lighting. We can use real-time dynamic light, use the baked lighting approach, or use a mixture of the two. Our games perform more efficiently with baked lighting, compared to real-time dynamic lighting, so if performance is a concern, try using baked lighting where you can.

To summarize, we have discussed how to create interesting lighting and camera effects using Unity 2018. This article is an extract from the book Getting Started with Unity 2018 written by Dr. Edward Lavieri. This book will help you create fun-filled real-world games with Unity 2018.

Further resources on this subject:
Game Engine Wars: Unity vs Unreal Engine
Unity plugins for augmented reality application development
How to create non-player Characters (NPC) with Unity 2018
Getting started with Cocos2d-x
Packt
19 Oct 2015
11 min read
In this article written by Akihiro Matsuura, author of the book Cocos2d-x Cookbook, we're going to install Cocos2d-x and set up the development environment. The following topics will be covered in this article:

Installing Cocos2d-x
Using the cocos command
Building the project with Xcode
Building the project with Eclipse

Cocos2d-x is written in C++, so it can be built for any platform. Because it is open source, we are free to read the source code of the game framework; Cocos2d-x is not a black box, and this proves to be a big advantage for us when we use it. Cocos2d-x version 3, which supports C++11, was only recently released. It also supports 3D and has improved rendering performance.

(For more resources related to this topic, see here.)

Installing Cocos2d-x

Getting ready

To follow this recipe, you need to download the zip file from the official site of Cocos2d-x (http://www.cocos2d-x.org/download). In this article we've used version 3.4, which was the latest stable version available.

How to do it...

Unzip your file to any folder. This time, we will install it in the user's home directory. For example, if the user name is syuhari, then the install path is /Users/syuhari/cocos2d-x-3.4. We call it COCOS_ROOT. The following steps will guide you through the process of setting up Cocos2d-x:

1. Open the terminal.
2. Change the directory in the terminal to COCOS_ROOT, using the following command: $ cd ~/cocos2d-x-v3.4
3. Run setup.py, using the following command: $ ./setup.py
4. The terminal will ask you for NDK_ROOT. Enter the NDK_ROOT path.
5. The terminal will then ask you for ANDROID_SDK_ROOT. Enter the ANDROID_SDK_ROOT path.
6. Finally, the terminal will ask you for ANT_ROOT. Enter the ANT_ROOT path.
7. After the execution of the setup.py command, you need to execute the following command to apply the system variables: $ source ~/.bash_profile

Open the .bash_profile file, and you will find that setup.py shows how to set each path in your system. You can view the .bash_profile file using the cat command: $ cat ~/.bash_profile

We now verify whether Cocos2d-x has been installed correctly. Open the terminal and run the cocos command without parameters: $ cocos

If you can see a window like the following screenshot, you have successfully completed the Cocos2d-x install process.

How it works...

Let's take a look at what we did throughout the above recipe. You can install Cocos2d-x by just unzipping it; setup.py only sets up the cocos command and the paths needed for the Android build in the environment. Installing Cocos2d-x is very easy and simple. If you want to install a different version of Cocos2d-x, you can do that too. To do so, you need to follow the same steps given in this recipe, but for the different version.

There's more...

Setting up the Android environment is a bit more involved. If you want to start developing with Cocos2d-x right away, you can postpone the Android settings and come back to them when you are ready to run on Android. In that case, you don't have to install the Android SDK, NDK, and Apache Ant yet, and when you run setup.py, you can just press Enter without entering a path for each question.

Using the cocos command

The next step is using the cocos command. It is a cross-platform tool with which you can create a new project, build it, run it, and deploy it. The cocos command works for all Cocos2d-x supported platforms, and you don't need to use an IDE if you don't want to. In this recipe, we take a look at this command and explain how to use it.

How to do it...
You can see the cocos command's help by executing it with the --help parameter, as follows: $ cocos --help

We then move on to generating our new project. We create a new Cocos2d-x project with the cocos new command, as shown here:

$ cocos new MyGame -p com.example.mygame -l cpp -d ~/Documents/

The result of this command is shown in the following screenshot. The argument after the new parameter is the project name. The other parameters denote the following:

MyGame is the name of your project.
-p is the package name for Android. This is the application ID in the Google Play store, so you should use a reverse domain name to keep it unique.
-l is the programming language used for the project. You should use "cpp" because we will use C++.
-d is the location in which to generate the new project. This time, we generate it in the user's Documents directory.

You can look up these options using the following command: $ cocos new --help

Congratulations, you have generated your new project. The next step is to build and run it using the cocos command.

Compiling the project

If you want to build and run for iOS, you need to execute the following command: $ cocos run -s ~/Documents/MyGame -p ios

The parameters are explained as follows:

-s is the directory of the project. This can be an absolute path or a relative path.
-p denotes which platform to run on. If you want to run on Android, you use -p android. The available options are ios, android, win32, mac, and linux.

You can run cocos run --help for more detailed information. The result of this command is shown in the following screenshot:

You can now build and run iOS applications with Cocos2d-x. However, you will have to wait for a while if this is your first time building an iOS application, because the whole Cocos2d-x library has to be compiled during a clean or first build.

How it works...

The cocos command can create a new project and build it. You should use the cocos command when you want to create a new project. Of course, you can also build with Xcode or Eclipse; they can be easier to use when you develop and debug.

There's more...

The cocos run command has other parameters. They are the following:

--portrait will set the project to portrait orientation. This option has no argument.
--ios-bundleid will set the bundle ID for the iOS project. However, it is not difficult to set it later.

The cocos command also includes some other commands, which are as follows:

The compile command: This command is used to build a project. The following patterns are useful parameters. You can see all parameters and options if you execute the cocos compile [-h] command. cocos compile [-h] [-s SRC_DIR] [-q] [-p PLATFORM] [-m MODE]
The deploy command: This command only takes effect when the target platform is android. It will re-install the specified project to the Android device or simulator. cocos deploy [-h] [-s SRC_DIR] [-q] [-p PLATFORM] [-m MODE]

The run command performs the compile and deploy commands and then runs the application.

Building the project with Xcode

Getting ready

Before building the project with Xcode, you require Xcode with an iOS developer account to test on a physical device. However, you can also test on an iOS simulator. If you have not installed Xcode, you can get it from the Mac App Store. Once you have installed it, get it activated.

How to do it...

Open your project from Xcode. You can open your project by double-clicking on the file placed at: ~/Documents/MyGame/proj.ios_mac/MyGame.xcodeproj.
Build and Run with Xcode

You should select an iOS simulator or a real device on which you want to run your project.

How it works...

If this is your first time building, it will take a long time, but continue with confidence; it only happens on the first build. You can develop your game faster if you develop and debug it using Xcode rather than Eclipse.

Building the project with Eclipse

Getting ready

You must finish the first recipe before you begin this step. If you have not done so yet, you will also need to install Eclipse.

How to do it...

1. Setting up NDK_ROOT:
Open the Eclipse preferences.
Open C++ | Build | Environment.
Click on Add and set the new variable: the name is NDK_ROOT, and the value is the NDK_ROOT path.
2. Importing your project into Eclipse:
Open the File menu and click on Import.
Go to Android | Existing Android Code into Workspace.
Click on Next.
Import the project into Eclipse from ~/Documents/MyGame/proj.android.
3. Importing the Cocos2d-x library into Eclipse:
Perform the same import steps as in Step 2.
Import the cocos2d lib project from ~/Documents/MyGame/cocos2d/cocos/platform/android/java.
4. Build and Run:
Click on the Run icon.
The first time, Eclipse asks you to select a way to run your application. Select Android Application and click on OK, as shown in the following screenshot:

If you have connected an Android device to your Mac, you can run your game on your real device or on an emulator. The following screenshot shows it running on a Nexus 5.

If you added .cpp files to your project, you have to modify the Android.mk file at ~/Documents/MyGame/proj.android/jni/Android.mk. This file is needed for the NDK build, and this fix is required whenever you add files. The original Android.mk would look as follows:

LOCAL_SRC_FILES := hellocpp/main.cpp ../../Classes/AppDelegate.cpp ../../Classes/HelloWorldScene.cpp

If you added the TitleScene.cpp file, you have to modify it as shown in the following code:

LOCAL_SRC_FILES := hellocpp/main.cpp ../../Classes/AppDelegate.cpp ../../Classes/HelloWorldScene.cpp ../../Classes/TitleScene.cpp

The preceding example shows the case where you add the TitleScene.cpp file. If you add other files as well, you need to list every added file.

How it works...

You get lots of errors when importing your project into Eclipse, but don't panic. After importing the cocos2d-x library, the errors soon disappear. Importing the library sets the path of the NDK, so Eclipse can compile the C++ code. After you modify the C++ code, run your project in Eclipse. Eclipse automatically compiles the C++ code and the Java code, and then runs the application.

It is a tedious task to fix Android.mk every time you add C++ files. The following is the original Android.mk:

LOCAL_SRC_FILES := hellocpp/main.cpp ../../Classes/AppDelegate.cpp ../../Classes/HelloWorldScene.cpp
LOCAL_C_INCLUDES := $(LOCAL_PATH)/../../Classes

The following is a customized Android.mk that adds C++ files automatically:

CPP_FILES := $(shell find $(LOCAL_PATH)/../../Classes -name *.cpp)
LOCAL_SRC_FILES := hellocpp/main.cpp
LOCAL_SRC_FILES += $(CPP_FILES:$(LOCAL_PATH)/%=%)
LOCAL_C_INCLUDES := $(shell find $(LOCAL_PATH)/../../Classes -type d)

The first line collects the C++ files under the Classes directory into the CPP_FILES variable. The second and third lines add those C++ files to the LOCAL_SRC_FILES variable, and the last line adds every subdirectory of Classes to LOCAL_C_INCLUDES. By doing so, C++ files will be compiled automatically by the NDK. If you need to compile a file with an extension other than .cpp, you will need to add it manually.

There's more...
If you want to build the C++ code with the NDK manually, you can use the following command: $ ./build_native.py

This script is located at ~/Documents/MyGame/proj.android. It uses ANDROID_SDK_ROOT and NDK_ROOT internally. If you want to see its options, run ./build_native.py --help.

Summary

Cocos2d-x is an open source, cross-platform game engine, which is free and mature. It can publish games for mobile devices and desktops, including iPhone, iPad, Android, Kindle, Windows, and Mac. The book Cocos2d-x Cookbook focuses on using version 3.4, which was the latest version of Cocos2d-x available at the time of writing. We focus on iOS and Android development, and we'll be using a Mac because we need it to develop iOS applications.

Resources for Article:

Further resources on this subject:
CREATING GAMES WITH COCOS2D-X IS EASY AND 100 PERCENT FREE [Article]
Dragging a CCNode in Cocos2D-Swift [Article]
COCOS2D-X: INSTALLATION [Article]
Hours 1-12: Your Quest Begins!
Packt
03 Apr 2012
4 min read
Dealing with the Game Jam "theme"

Virtually every Jam requires that you try to make a game that fits a theme. This is either a surprise word that the moderators came up with or one that has been voted upon earlier. The theme for a Jam is typically announced immediately before it begins. The anticipation and surprise give the start of the event extra excitement and serve as a means to inspire the participants, in the same way that the "secret ingredient" is used in the TV show Iron Chef. Once the theme word or words have been announced, digest it for a while. Some suggestions for coming up with great game concepts based on the theme are as follows:

Take a walk
Listen to music
Mull over ideas away from the computer
Come back home and sketch your idea
Visualize the game being played before touching the keyboard
Talk about the theme over dinner with a friend
Sleep on it and start in the morning

Use this theme word as the genesis for your creative spark. Let it inspire you to think outside your normal comfort zone. Don't get discouraged if you think the theme isn't something you like: any game concept can be easily manipulated to fit a theme. Add one subtle reference and whatever game you'd hoped to make is still a possibility.

Haiku Time:
Many ideas.
They all seem to fit the theme!
Must I choose just one?

Games that tend to win Game Jam competitions often make use of the theme word to find endless material for humor. One very strange statistical anomaly is that in most Game Jams, these three themes always score well in the voting stages: evolution, kittens, and fishing. Time and time again they are "always a bridesmaid, never a bride" and tend to be in the top ten, rather than the chosen theme. In Ludum Dare, for example, the "evolution" theme has been in the top ten almost a dozen times over the last six or seven years. When will "the evolution of kitten fishing" finally be the theme of a Game Jam?

What the experts say: Chevy Ray Johnston

A great way to come up with an idea to fit the theme is to write down the first five things that come to mind, then toss 'em. Those are the ideas everybody else is already thinking of and/or making. If I could give one piece of advice to newcomers, it would be to make a really simple game, and spend all your time polishing it like crazy! Really polished games are addictive, impressive, and always popular. Visual polish of some sort always seems to give games a boost-up in votes in compos, and makes them more likely to be clicked on by judges (especially in short Jams, where 90% of the games have little to no graphics). But unless you just care about winning, don't sacrifice a fun or engaging and interesting game just to make it look pretty. The best thing about Game Jams is the ridiculous shortcuts and solutions developers come up with to solve design problems in such a short time span. I hope that in the future, Game Jams will see more people developing not just video games, but other types of games as well, and creative things in general. I'm talking about writing Jams, board Game Jams, card Game Jams, and tech Jams where people get together and try to solve technological problems with the same mindset and ambition. Jams are great.

Chevy Ray Johnston is the author of many games including Fat Wizard and Skullpogo, the creator of the FlashPunk game engine (which is frequently used in Game Jams), and a two-time winner of Ludum Dare with the games FleeBuster and Beacon.
Twitter: http://twitter.com/chevyray
Flashpunk: http://flashpunk.net/
Google+: https://plus.google.com/103872388664329802170

An example of a winning entry

Let's take an example theme and see what it might inspire you to create. Take "ESCAPE," the theme of Ludum Dare 21. The winning entry was a game where you had to run away from an alien death beam in various platform-style obstacle courses. Try it out: Flee Buster by Chevy Ray Johnston. Other notable entrants created puzzle games where you had to escape from a mansion, jail, or dungeon, or a reverse-pinball game where you were the ball trying to get past the bottom. The possibilities are endless. The key qualities that the top ten entries all had were:

Humor
Simple gameplay
Simple graphics
Easy to pick up and play (simple controls)
How to model a character's head in Blender Part 1
Packt
29 Sep 2009
5 min read
In this two-part tutorial by Jonathan Williamson, we are going to look at how to model a character head in Blender. Along with basic modeling tools, we will also focus heavily on good topology and how to create a clean mesh that will deform well during animation. This tutorial will take you through the whole process, from setting up a background image as a reference, to laying out the topology, to tweaking the final model proportions and mesh structure.

First Steps: workspace and references

Before we get started with Blender character modeling, the first thing we want to do is make our workspace more efficient. The way I like to do this is to simply split my view down the center, putting the resulting left viewport in front view (numpad 1) and the right viewport in side view (numpad 3). If you're not familiar with how to split your view, please reference this short video tutorial. This allows us to work from both sides of the model at the same time without having to switch our view constantly. It also gives us more views of the model to help with accuracy and proportion.

Now that we have our workspace set up, let's go ahead and bring in our background image for reference. Today we are going to use a simple, rough drawing of mine that has a front and side view. Anytime you are working from references (which should be almost always!), try to get as many angles as possible. This is particularly important when we work from photo references. Here is the drawing:

To place the reference into the background of your workspace:

Go to View > Background Image.
Click Use and Load to navigate to your image. Do this with both viewports.

The next step is to adjust the X and Y positions to line up your image; it's best to align the center of the head in both views with the Central Axis point (where the three axes meet).

Modeling: mirroring and structure

With our workspace and references set up, it's time to start modeling. The first thing we want to do is add a mirror modifier to the default cube so that we only have to work on one side of the model; anything we do will be mirrored across the central axis. But, before we do that, we need to add a central loop of vertices to our cube and delete one half. This way we don't mirror our cube on top of itself. You can do this by:

Going into Edit Mode with Tab.
Hitting Control + R to activate the Loop Cut tool.
Cutting a new loop, vertically, along the cube by clicking the MMB when you see the purple line with your mouse hovered over the cube.
From the Front View, making sure everything is deselected with A, then selecting the left-most vertices and hitting X > Delete Vertices.

The last thing we need to do before we start modeling is to add a mirror modifier for symmetry:

Go to the Edit Buttons (F9) and click Add Modifier > Mirror.
Click Do Clipping.

We are now ready to really get down to business!

Modeling: edgeloop structure

The single most important thing to remember while modeling a character head is the structure of your mesh. This is referred to as "topology." Edgeloops, or continuous lines or circles of edges, are the primary concern with topology. Proper edgeloops allow your model to deform well during animation; they also make tweaking and detailing your model much easier! To get started:

Select the back side of the cube.
Hit X > Delete Vertices.

We do this because we want to work from a single face. What we are going to be doing is laying out a series of edgeloops to map out the structure we want for the mesh.
Let's start at the chin by moving our remaining face with G to line up with the reference in both views. Due to the variations in our drawing, it is going to be necessary to compensate between the views from time to time. What we are now going to do is use the Extrude tool to lay out our loops. To do this:

Select the two outside vertices with Shift + RMB.
Hit E > Only Edges to extrude.

Extruding will automatically place you into grab mode, which allows you to place the newly created edge where you want it; in this case, along the jaw bone. You can use Rotate, Scale, and Grab to help you position the edges. When you're done, you should have something like this:

We can continue using this same technique to get the following for the top of the head:

As you can see, we are starting to define the structure of the mesh and the shape of the head, much as a traditional artist would use reference lines to sketch out a head. Before we go too much further, we need to go ahead and map out the eye, as it is one of the most important areas of the head, and its topology is essential to the rest of the mesh. To do this we are going to add a circle from the Front View:

From the Front View, left-click in the center of the eye to position the 3D Cursor.
Hit Spacebar > Add > Mesh > Circle.
Use 8 Vertices and a Radius of 0.500.

Next, you want to use your translate tools (grab, rotate, scale) to position each of the vertices to fit the shape of the eye socket. Now, with everything selected (A):

Hit E > Only Edges.
Then immediately hit S to scale in.

Use this same technique around the nose and the mouth. That's it for the structure. This will allow us to connect all the areas without having to worry so much about getting the topology right, as we have just laid out the major areas.
Limits of Game Data Analysis
Packt
20 Nov 2013
7 min read
(For more resources related to this topic, see here.)

Which game analytics should be used

This section will focus on the role that data should take in your production process. As a studio, the first step is to identify your needs and to choose the goals you will attribute to game analytics.

Game analytics as a tool

Firstly, it is important to understand that game analytics are a tool, which means they can serve several purposes. You can use them for marketing, science, sociological studies, and so on. Following this statement, you will need different tools and different approaches to reach your goal. As this article has tried to highlight, tools are chosen according to problems, whether the choice concerns technology or analysis. You must not choose a tool because it is said to be the best-performing tool ever made, or because it is fashionable. Instead, you must choose a tool because it is the most efficient one for your needs. Try to answer the following questions:

What are the long-term uses I plan for game analytics? Is it simply reporting the Key Performance Indicators, or is it building a user-centric framework for deep analysis?
What are the types and the level of skills of the people who will work on it? Do I have all of the skills, from data scientists to game analysts, or do I need to choose a solution that will compensate for gaps in a particular field?
How much data will be collected? How do I plan to deal with possible peaks in traffic?
How do I adapt the timing of reporting and analysis to the rhythm of production on my project? Do I split them weekly or monthly?
What are the main goals of my process? Do I want to build a predictive model (for example, based on correlations) in order to define the next acquisition campaign I will run? Do I want to increase the monetization rate of the current player base? Do I want to perform A/B testing?

And the list goes on.

Game analytics must serve your team

Secondly, it is important to ensure that the use of game analytics serves your team as a whole. The team should not have any disagreements about the long-term objectives that you have chosen. Game analytics must accompany those objectives and especially improve them, but the general objective should remain the same. Given the current state of the field, withdrawing the "human touch" from the design process entirely and listening only to data would be a mistake. That's why the game analytics process should be thought through the prism of your own team and, therefore, should be presented as a new tool. This will help them to make good decisions for the game.

The best example of democratizing the "game analytics way of thinking" inside your team is certainly A/B testing. If you experience debates about particular features in the game, instead of taking part, you can propose to use A/B tests for some of those features. Following this, there are no particular limits to the use of the tool. A game designer can test different balancing of the game's virtual economy, and an artist can experiment with different graphic styles.

When starting, focus your attention on simple practices

If you are new to the field, the following list may help you to start defining your first objectives. It contains most of the typical uses for online games, especially free-to-play games:

Producing KPIs on a weekly or monthly basis, according to your needs.
These KPIs will help you to orient the upcoming development of your game and to anticipate the return on investment of your acquisition campaigns.
Identifying whether some of the steps of your tutorial phase are poorly designed; for example, if you have a sudden player loss at a particular step of your tutorial.
In the same vein, tracking the loss of players at each level is also very useful to improve the general balancing of your game, especially the progress curve and the difficulty. This topic is more important if part of your business model is based on purchasable goods, which can increase the progression rate of the player.
You can evaluate which area and which purchasable goods of your game are generating the best income.
You can perform A/B testing on particular key features of your game in order to see which ones are the most efficient.

What game analytics should not be used for

On the other hand, there are a few limits that you need to know before using methods and processes from game analytics.

Keep away from numbers

You must always be careful about the fact that numbers represent a given situation at a given instant in time. For this reason, predictive models must always be revised and improved; they should never be considered the perfect truth. In order for the process to be efficient, it is quite important to keep research on the data inside the structure defined by the initial goals. Otherwise, you might split your efforts and no actionable insights would be identified. In other words, numbers must remain in their place. They are a tool in the hands of a human subject, and they should not become an obsession. Try to reason about whether they make any sense and whether you are asking the right question.

Practices that need to be avoided

As mentioned in the previous section, if you are new to this field, be aware of the following situations:

Data cannot dictate the full content of your next update. If that is the case, you should first re-evaluate the general intention behind your product and talk with the game designer.
When starting, try to avoid complex questions that involve factors external to the game, even if they seem crucial to you. For example, trying to understand why people stopped playing your game over a long period of time is usually impossible. Old players might stop playing because another game came out or because they just got bored. Data cannot work miracles at this level of engagement.
Data must not take too much prominence in the creative process. There are human intentions and ideas first, and only then does the data come in to verify and improve the potential success of those intentions.
Data must not slow down the performance of the game. One of the common methods to avoid this is to send the data when the player logs in or logs out, and not at each click or each action.

Summary

This is the end of this article, and the most important thing you need to remember about game analytics in general is the importance of defining your objectives. The reason why you choose this tool instead of another (and this article has tried to list as many of them as possible, from data mining to pure analysis) is because it fits your needs as closely as possible. This statement is true at every stage of the reflection process that surrounds game analytics, from the choice of the storage solution to the type of analysis you want to perform.
The rise of a fully-connected video game industry offers developers the opportunity to change the way they create games, but there is no doubt that this tool has not yet reached full maturity. Therefore, even if the benefits of game analytics are great, be prepared to make mistakes as well, and keep your own process open to criticism from your team.

Resources for Article:

Further resources on this subject:
Flash 10 Multiplayer Game: Game Interface Design [Article]
GNU Octave: Data Analysis Examples [Article]
HTML5 Games Development: Using Local Storage to Store Game Data [Article]
Ragdoll Physics
Packt
19 Feb 2016
5 min read
In this article we will learn how to apply Ragdoll physics to a character.

(For more resources related to this topic, see here.)

Applying Ragdoll physics to a character

Action games often make use of Ragdoll physics to simulate the character's body reacting to being knocked unconscious by a hit or explosion. In this recipe, we will learn how to set up and activate Ragdoll physics on our character whenever she steps on a landmine object. We will also use the opportunity to reset the character's position and animations a number of seconds after that event has occurred.

Getting ready

For this recipe, we have prepared a Unity Package named Ragdoll, containing a basic scene that features an animated character and two prefabs, already placed into the scene, named Landmine and Spawnpoint. The package can be found inside the 1362_07_08 folder.

How to do it...

To apply ragdoll physics to your character, follow these steps:

1. Create a new project and import the Ragdoll Unity Package. Then, from the Project view, open the mecanimPlayground level. You will see the animated book character and two discs: Landmine and Spawnpoint.
2. First, let's set up our Ragdoll. Access the GameObject | 3D Object | Ragdoll... menu and the Ragdoll wizard will pop up. Assign the transforms as follows:
Root: mixamorig:Hips
Left Hips: mixamorig:LeftUpLeg
Left Knee: mixamorig:LeftLeg
Left Foot: mixamorig:LeftFoot
Right Hips: mixamorig:RightUpLeg
Right Knee: mixamorig:RightLeg
Right Foot: mixamorig:RightFoot
Left Arm: mixamorig:LeftArm
Left Elbow: mixamorig:LeftForeArm
Right Arm: mixamorig:RightArm
Right Elbow: mixamorig:RightForeArm
Middle Spine: mixamorig:Spine1
Head: mixamorig:Head
Total Mass: 20
Strength: 50
3. From the Project view, create a new C# Script named RagdollCharacter.cs.
4. Open the script and add the following code:

using UnityEngine;
using System.Collections;

public class RagdollCharacter : MonoBehaviour
{
    void Start()
    {
        DeactivateRagdoll();
    }

    public void ActivateRagdoll()
    {
        gameObject.GetComponent<CharacterController>().enabled = false;
        gameObject.GetComponent<BasicController>().enabled = false;
        gameObject.GetComponent<Animator>().enabled = false;

        foreach (Rigidbody bone in GetComponentsInChildren<Rigidbody>())
        {
            bone.isKinematic = false;
            bone.detectCollisions = true;
        }

        foreach (Collider col in GetComponentsInChildren<Collider>())
        {
            col.enabled = true;
        }

        StartCoroutine(Restore());
    }

    public void DeactivateRagdoll()
    {
        gameObject.GetComponent<BasicController>().enabled = true;
        gameObject.GetComponent<Animator>().enabled = true;

        transform.position = GameObject.Find("Spawnpoint").transform.position;
        transform.rotation = GameObject.Find("Spawnpoint").transform.rotation;

        foreach (Rigidbody bone in GetComponentsInChildren<Rigidbody>())
        {
            bone.isKinematic = true;
            bone.detectCollisions = false;
        }

        foreach (CharacterJoint joint in GetComponentsInChildren<CharacterJoint>())
        {
            joint.enableProjection = true;
        }

        foreach (Collider col in GetComponentsInChildren<Collider>())
        {
            col.enabled = false;
        }

        gameObject.GetComponent<CharacterController>().enabled = true;
    }

    IEnumerator Restore()
    {
        yield return new WaitForSeconds(5);
        DeactivateRagdoll();
    }
}

5. Save and close the script.
6. Attach the RagdollCharacter.cs script to the book Game Object. Then, select the book character and, from the top of the Inspector view, change its tag to Player.
7. From the Project view, create a new C# Script named Landmine.cs.
Open the script and add the following code:

using UnityEngine;
using System.Collections;

public class Landmine : MonoBehaviour
{
    public float range = 2f;
    public float force = 2f;
    public float up = 4f;
    private bool active = true;

    void OnTriggerEnter(Collider collision)
    {
        if (collision.gameObject.tag == "Player" && active)
        {
            active = false;
            StartCoroutine(Reactivate());
            collision.gameObject.GetComponent<RagdollCharacter>().ActivateRagdoll();
            Vector3 explosionPos = transform.position;
            Collider[] colliders = Physics.OverlapSphere(explosionPos, range);
            foreach (Collider hit in colliders)
            {
                if (hit.GetComponent<Rigidbody>())
                    hit.GetComponent<Rigidbody>().AddExplosionForce(force, explosionPos, range, up);
            }
        }
    }

    IEnumerator Reactivate()
    {
        yield return new WaitForSeconds(2);
        active = true;
    }
}

Save and close the script. Attach the script to the Landmine Game Object.

Play the scene. Using the WASD keyboard control scheme, direct the character to the Landmine Game Object. Colliding with it will activate the character's Ragdoll physics and apply an explosion force to it. As a result, the character will be thrown a considerable distance and will no longer be in control of its body movements, akin to a ragdoll.

How it works...

Unity's Ragdoll Wizard assigns the Collider, Rigidbody, and Character Joint components to the selected transforms. In conjunction, those components make ragdoll physics possible. However, those components must be disabled whenever we want our character to be animated and controlled by the player. In our case, we switch those components on and off using the RagdollCharacter script and its two functions, ActivateRagdoll() and DeactivateRagdoll(), the latter including instructions to re-spawn our character in the appropriate place.

For testing purposes, we have also created the Landmine script, which calls the RagdollCharacter script's ActivateRagdoll() function. It also applies an explosion force to our ragdoll character, throwing it away from the explosion site.

There's more...

Instead of resetting the character's transform settings, you could have destroyed its gameObject and instantiated a new one over the respawn point using Tags. For more information on this subject, check Unity's documentation at: http://docs.unity3d.com/ScriptReference/GameObject.FindGameObjectsWithTag.html.

Summary

In this article we learned how to apply Ragdoll physics to a character. We also learned how to set up the Ragdoll for the game's character. To learn more, please refer to the following books:

Learning Unity 2D Game Development by Example: https://www.packtpub.com/game-development/learning-unity-2d-game-development-example
Unity Game Development Blueprints: https://www.packtpub.com/game-development/unity-game-development-blueprints
Getting Started with Unity: https://www.packtpub.com/game-development/getting-started-unity

Resources for Article:

Further resources on this subject:
Using a collider-based system [article]
Looking Back, Looking Forward [article]
The Vertex Functions [article]

A Simple Pathfinding Algorithm for a Maze
Packt
23 Jul 2015
10 min read
In this article, Mário Kašuba, author of the book Lua Game Development Cookbook, explains that maze pathfinding can be used effectively in many types of games, such as side-scrolling platform games or top-down, gauntlet-like games. The point is to find the shortest viable path from one point on the map to another. This can be used for moving NPCs and players as well.

(For more resources related to this topic, see here.)

Getting ready

This article will use a simple maze environment to find a path starting at the start point and ending at the exit point. You can either prepare one by yourself or let the computer create one for you. A map will be represented by a 2D-map structure where each cell will consist of a cell type and cell connections. The cell type values are as follows:

0 means a wall
1 means an empty cell
2 means the start point
3 means the exit point

Cell connections will use a bitmask value to get information about which cells are connected to the current cell. The following diagram contains cell connection bitmask values with their respective positions:

Now, a quite common problem in programming is how to implement an efficient data structure for 2D maps. Usually, this is done either with a relatively large one-dimensional array or with an array of arrays. All these arrays have a specified static size, so map dimensions are fixed. The problem arises when you use a simple 1D array and you need to change the map size during gameplay, or when the map size should be unlimited. This is where map cell indexing comes into play. Often you can use this formula to compute the cell index from 2D map coordinates:

local index = x + y * map_width
map[index] = value

There's nothing wrong with this approach when the map size is definite. However, changing the map size would invalidate the whole data structure, as the map_width variable would change its value. A solution to this is to use indexing that's independent of the map size. This way you can ensure consistent access to all elements even if you resize the 2D map. You can use some kind of hashing algorithm that packs map cell coordinates into one value that can be used as a unique key. One way to accomplish this is to use the Cantor pairing function, which is defined for two input coordinates as:

cantor(k1, k2) = 0.5 * (k1 + k2) * (k1 + k2 + 1) + k2

For example, cantor(3, 4) = 0.5 * 7 * 8 + 4 = 32. Index value distribution is shown in the following diagram:

The Cantor pairing function ensures that there are no key collisions no matter what coordinates you use. What's more, it can be trivially extended to support three or more input coordinates. To illustrate the usage of the Cantor pairing function for more dimensions, its primitive form will be defined as a function cantor(k1, k2), where k1 and k2 are input coordinates. The pairing function for three dimensions will look like this:

local function cantor3D(k1, k2, k3)
  return cantor(cantor(k1, k2), k3)
end

Keep in mind that the Cantor pairing function always returns one integer value. With a higher number of dimensions, you'll soon get very large values in the results. This may pose a problem because the Lua language can offer 52 bits for integer values. For example, for 2D coordinates (83114015, 11792250) you'll get a value 0x000FFFFFFFFFFFFF that still fits into a 52-bit integer value without rounding errors. Larger coordinates will return inaccurate values and subsequently you'd get key collisions. Value overflow can be avoided by dividing large maps into smaller ones, where each one uses the full address space that Lua numbers can offer.
You can use another coordinate to identify submaps. This article will use a specialized data structure for the 2D map, with the Cantor pairing function for internal cell indexing. You can use the following code to prepare this type of data structure:

function map2D(defaultValue)
  local t = {}
  -- Cantor pair function
  local function cantorPair(k1, k2)
    return 0.5 * (k1 + k2) * ((k1 + k2) + 1) + k2
  end
  setmetatable(t, {
    __index = function(_, k)
      if type(k)=="table" then
        local i = rawget(t, cantorPair(k[1] or 1, k[2] or 1))
        return i or defaultValue
      end
    end,
    __newindex = function(_, k, v)
      if type(k)=="table" then
        rawset(t, cantorPair(k[1] or 1, k[2] or 1), v)
      else
        rawset(t, k, v)
      end
    end,
  })
  return t
end

The maze generator as well as the pathfinding algorithm will need a stack data structure.

How to do it…

This section is divided into two parts, where each one solves very similar problems from the perspective of the maze generator and the maze solver.

Maze generation

You can either load a maze from a file or generate a random one. The following steps will show you how to generate a unique maze. First, you'll need to grab a maze generator library from the GitHub repository with the following command:

git clone https://github.com/soulik/maze_generator

This maze generator uses the depth-first approach with backtracking. You can use it in the following steps. First, you'll need to set up maze parameters such as the maze size and the entry and exit points:

local mazeGenerator = require 'maze'

local maze = mazeGenerator {
  width = 50,
  height = 25,
  entry = {x = 2, y = 2},
  exit = {x = 30, y = 4},
  finishOnExit = false,
}

The final step is to iteratively generate the maze map until it's finished or a certain step count is reached. The number of steps should always be one order of magnitude greater than the total number of maze cells, mainly due to backtracking. Note that it's not necessary for each maze to connect the entry and exit points in this case.

for i = 1, 12500 do
  local result = maze.generate()
  if result == 1 then
    break
  end
end

Now you can access each maze cell with the maze.map variable in the following manner:

local cell = maze.map[{x, y}]
local cellType = cell.type
local cellConnections = cell.connections

Maze solving

This article will show you how to use a modified Trémaux's algorithm, which is based on depth-first search and path marking. This method guarantees finding a path to the exit point if there is one. It relies on two pieces of information in each step: the current position and the neighbouring cells. The algorithm will use three state variables: the current position, a set of visited cells, and the current path from the starting point:

local currentPosition = {maze.entry.x, maze.entry.y}
local visitedCells = map2D(false)
local path = stack()

The whole maze-solving process will be placed into one loop. This algorithm is always finite, so you can use an infinite while loop.

-- A placeholder for neighbours function that will be defined later
local neighbours

-- testing function for passable cells
local cellTestFn = function(cell, position)
  return (cell.type >= 1) and (not visitedCells[position])
end

-- include starting point into path
visitedCells[currentPosition] = true
path.push(currentPosition)

while true do
  local currentCell = maze.map[currentPosition]
  -- is current cell an exit point?
if currentCell and    (currentCell.type == 3 or currentCell.type == 4) then    break else    -- have a look around and find viable cells    local possibleCells = neighbours(currentPosition, cellTestFn)    if #possibleCells > 0 then      -- let's try the first available cell      currentPosition = possibleCells[1]      visitedCells[currentPosition] = true      path.push(currentPosition)    elseif not path.empty() then      -- get back one step      currentPosition = path.pop()    else      -- there's no solution      break    end end end This fairly simple algorithm uses the neighbours function to obtain a list of cells that haven't been visited yet: -- A shorthand for direction coordinates local neighbourLocations = { [0] = {0, 1}, [1] = {1, 0}, [2] = {0, -1}, [3] = {-1, 0}, }   local function neighbours(position0, fn) local neighbours = {} local currentCell = map[position0] if type(currentCell)=='table' then    local connections = currentCell.connections    for i=0,3 do      -- is this cell connected?      if bit.band(connections, 2^i) >= 1 then        local neighbourLocation = neighbourLocations[i]        local position1 = {position0[1] + neighbourLocation[1],         position0[2] + neighbourLocation[2]}        if (position1[1]>=1 and position1[1] <= maze.width and         position1[2]>=1 and position1[2] <= maze.height) then          if type(fn)=="function" then            if fn(map[position1], position1) then              table.insert(neighbours, position1)            end          else            table.insert(neighbours, position1)          end        end      end    end end return neighbours end When this algorithm finishes, a valid path between entry and exit points is stored in the path variable represented by the stack data structure. The path variable will contain an empty stack if there's no solution for the maze. How it works… This pathfinding algorithm uses two main steps. First, it looks around the current maze cell to find cells that are connected to the current maze cell with a passage. This will result in a list of possible cells that haven't been visited yet. In this case, the algorithm will always use the first available cell from this list. Each step is recorded in the stack structure, so in the end, you can reconstruct the whole path from the exit point to the entry point. If there are no maze cells to go, it will head back to the previous cell from the stack. The most important is the neighbours function, which determines where to go from the current point. It uses two input parameters: current position and a cell testing function. It looks around the current cell in four directions in clockwise order: up, right, down, and left. There must be a passage from the current cell to each surrounding cell; otherwise, it'll just skip to the next cell. Another step determines whether the cell is within the rectangular maze region. Finally, the cell is passed into the user-defined testing function, which will determine whether to include the current cell in a list of usable cells. The maze cell testing function consists of a simple Boolean expression. It returns true if the cell has a correct cell type (not a wall) and hasn't been visited yet. A positive result will lead to inclusion of this cell to a list of usable cells. Note that even if this pathfinding algorithm finds a path to the exit point, it doesn't guarantee that this path is the shortest possible. 
Summary

We have learned how pathfinding works in games with a simple maze. With a pathfinding algorithm, you can create intelligent game opponents that won't jump into a lava lake at the first opportunity.

Resources for Article:

Further resources on this subject:
Mesh animation [article]
Getting into the Store [article]
Creating a Direct2D game window class [article]

Turn Your Life into a Gamified Experience with Unity
Packt
14 Nov 2016
29 min read
In this article by Lauren S. Ferro, author of the book Gamification with Unity 5.x, we will look into creating a gamified experience with Unity. In a world full of work, chores, and dull things, we all must find the time to play. We must allow ourselves to be immersed in enchanted worlds of fantasy and to explore faraway and uncharted exotic islands that form mysterious worlds. We may also find hidden treasure while confronting and overcoming some of our worst fears. As we enter these utopian and dystopian worlds, mesmerized by the magic of games, we realize anything and everything is possible and all that we have to do is imagine.

Have you ever wondered what Gamification is? Join us as we dive into the weird and wonderful world of gamifying real-life experiences, where you will learn all about game design, motivation, prototyping, and bringing all your knowledge together to create an awesome application. Each chapter in this book is designed to guide you through the process of developing your own gamified application, from the initial idea to getting it ready and then published. The following is just a taste of what to expect from the journey that this book will take you on.

(For more resources related to this topic, see here.)

Not just pixels and programming

The origins of gaming have an interesting and ancient history, stemming as far back as the ancient Egyptians with the game Senet; and long after the reign of the great Egyptian kings, games remained a way to demonstrate strength and stamina for the ancient Greeks and Romans. However, as time elapsed, games have not only developed from the marble pieces of Senet or the glittering swords of battle, they have also adapted to changes in the medium: from stone to paper, and from paper to technology. We have seen physical games (such as tabletop and card games) give way to games that require us to physically move our characters using our bodies and peripherals (PlayStation Move and WiiMote) in order to interact with the gaming environment (Wii Sports and Heavy Rain). So, now we not only have the ability to create 3D virtual worlds with virtual reality, but we can also enter these worlds and have them enter ours with augmented reality. Therefore, it is important to remember that, as the following image of Dungeons and Dragons shows, games don't have to take a digital form; they can also be physical:

Dungeons and Dragons board with figurines and dice

Getting contextual

At the beginning of designing a game or game-like experience, designers need to consider the context in which the experience is to take place. Context is an important consideration because it may influence the design and development of the game (such as hardware, resources, and target group). The way in which a designer may create a game-like experience varies. For example, a game-like experience aimed at encouraging students to submit assessments on time will be designed differently from one promoting customer loyalty. In this way, the designer becomes more context aware and, as a result, is more likely to keep the context in view during the design process.

Education: Games can be educational, and they may be designed specifically to teach or to have elements of learning entwined into them to support learning materials. Depending on the type of learning game, it may target formal educational environments (educational institutions) or informal ones (learning a language for a business trip).
Therefore, if you are thinking about creating an educational game, you might need to think about these considerations in more detail.
Business: Maybe your intention is to get your employees to arrive on time, or to finish reports in the afternoon rather than right before they go home. Designing content for use within a business context targets situations that occur within the workplace. It can include objectives such as increasing employee productivity (individual/group).
Personal: Getting personal with game-like applications can relate specifically to creating experiences to achieve personal objectives. These may include personal development, personal productivity, organization, and so on. Ultimately, only one person maintains these experiences; however, other social elements, such as leaderboards and group challenges, can bring others into the personal experience as well.
Game: If it is not just educational, business, or personal development, chances are that you want to create a game to be a portal into lustrous worlds of wonder or to pass time on the evening commute home. Pure gaming contexts have no personal objectives (other than to overcome challenges, of course).

Who is our application targeting and where do they come from?

Understanding the user is one of the most important considerations for any approach to be successful. User considerations not only include the demographics of the user (for example, who they are and where they are from), but also the aim of the experience, the objectives that you aim to achieve, and the outcomes that those objectives lead to. In this book, this section considers the real-life consequences that your application/game will have on its audience. For example, will a loyalty application encourage people to engage with your products/store in the areas that you're targeting it toward? Therefore, we will explore ways that your application can obtain demographic data in Unity.

Are you creating a game to teach Spanish to children, teenagers, or adults? This will change the way that you need to think about your audience. For example, children tend to be users who are encouraged to play by their parents, teenagers tend to be a bit more autonomous but may still be influenced by their parents, and adults are usually completely autonomous. This can influence the amount and the type of feedback that you can give and how often.
Where is your audience from? For example, are you creating an application for a global reward program or a local one? This will affect whether you incorporate things like localization features so that the application adapts to your audience automatically, or whether this is handled in the design itself.
What kind of devices does your audience use? Do they live in an area where they have access to a stable Internet connection? Do they need a powerful system to run your game or application? Chances are, if the answer is yes to the latter question, you should probably take a look at how you will optimize your application.

What is game design?

Many types of games exist, and so do design approaches. There are different ways that you can design and implement games. Now, let's take a brief look at how games are made, and more importantly, what they are made of:

Generating ideas: This involves thinking about the story that we want to tell, or a trip that we may want the player to go on. At this stage, we're just getting everything out of our head and onto the paper.
What is game design?

Many types of games exist, and so do design approaches; there are different ways to design and implement games. Let's take a brief look at how games are made and, more importantly, what they are made of:

Generating ideas: This involves thinking about the story that we want to tell, or a trip that we may want the player to go on. At this stage, we're just getting everything out of our head and onto paper. Everything and anything should be written down; the stranger and more abstract the idea, the better. It's important at this stage not to feel trapped by whether an idea is suitable. Often, the first few ideas that we create are the worst, and the great stuff comes from iterating on all the ideas that we put down at this stage. Talk about your ideas with friends and family; online forums are also a great place to get feedback on your initial concepts. One of the first things that any aspiring game designer can do is look at what is already out there. A lot is learned when we succeed or fail, especially why and how. Therefore, at this stage, you will want to do a bit of research about what you are designing. For instance, if you're designing an application to teach English, you should not only look at similar applications that are already out there but also at how English is actually taught, even in an educational environment. While you are generating ideas, it is also useful to think about the technology and materials that you will use along the way. Which game engine is better suited for your game's direction? Do you need to purchase licenses if you intend to make your game commercial? Answering these kinds of questions early can save many headaches later on when you have your concept ready to go, especially if you will need to learn how to use the software, as some tools have steep learning curves.

Defining your idea: Concept art is not just a beautiful piece of art that we see when a game is created; it can be rough, messy, and downright simple, as long as it communicates the idea. It also communicates the design of the game's space and how a player may interact with it and even traverse it. Concept design is an art in itself and includes concepts for environments, characters, puzzles, and even the quest itself. We take the ideas that we had during idea generation and flesh them out. We begin to refine them, to see what works and what doesn't. Again, get feedback. The importance of feedback is vital. When you design games, you often get caught up; you are so immersed in your ideas that they make sense to you. You feel you have sorted out every detail (at least for the most part). However, you aren't designing for you, you are designing for your audience, and getting an outsider's opinion can be crucial and may even offer a perspective that you would not necessarily have thought of. This stage also includes the story. A game without a story is like a life without existence. What kind of story do you want your player to be a part of? Can they control it, or is it set in stone? Who are the characters? The answers to these questions will breathe soul into your ideas. While you design your story, keep referring to the concept that you created, the atmosphere, the characters, and the type of environment that you envision. Some other aspects of your game that you will need to consider at this stage are as follows:

How will your players learn how to play your game?

How will the game progress? This may include introducing different abilities, challenges, levels, and so on. Here is where you will need to observe the flow of the game. Too much happening and you have a recipe for chaos; not enough and your player will get bored.

How many players do you envision playing your game, even if you intend to include a co-op or online mode?

What are the main features that will be in your game?

How will you market your game? Will there be an online blog that documents the stages of development?
Will it include interviews with different members of the team? Will there be different content tailored for each network (for example, Twitter, Facebook, Instagram, and so on)?

Bringing it together: This involves thinking about how all your ideas will come together and how they will work, or won't. Think of this stage as creating a painting. You may have all the pieces, but you need to know how to use them to create the piece of art. Some brushes (for example, story and characters) work better with some paints (for example, game elements and mechanics), and so on. This stage is about bringing your ideas and concepts into reality. It features design processes such as the following:

Storyboards that give an overview of how the story and the gameplay evolve throughout the game.

Character design sheets that outline the characteristics of your characters and how they fit into the story.

Game User Interfaces (GUIs) that provide information to the player during gameplay. This may include elements such as progress bars, points, and items that they will collect along the way.

Prototyping: This is where things get real…well, relatively. It may be something as simple as a piece of paper or something as complex as a 3D model. You then begin to create the environments or the levels that your player will explore. As you develop your world, you take your content and populate the levels. Prototyping is where we take what was in our head and sketched out on paper and use it to sculpt the gameful beast. The main purpose of this stage is to see how everything works, or doesn't. For example, the fantastic idea of a huge mech-warrior with flames shooting out of an enormous gun on its back was perhaps not the fantastic idea it seemed on paper, at least not in the intended part of the game. Rapid prototyping is fast and rough. Remember when you were in school and you had things such as glue, scissors, pens, and pencils? Well, that is what you will need for this. It gets the game to a functioning point before you spend tireless hours in a game engine trying to create it. A few bad rapid prototypes early on can save a lot of time compared to a single digital one. Lastly, rapid prototyping isn't just for the preliminary prototyping phase; it can also be used before you add any new features to your game once it's already set up.

Iteration: This is to the game what an iron is to a creased shirt. You want your game to be on point, and iterating gets it to that stage. For instance, that awesome mech-warrior that you created for the first level was perhaps better suited as the final boss. Iteration is about fine-tuning the game, that is, tweaking it so that it not only flows better overall, but also plays better.

Playtesting: This is the most important part of the whole process once you have your game at a relatively functional level. The main concept here is to playtest, playtest, and playtest. The importance of this stage cannot be emphasized enough. More often than not, games are buggy when finally released, with problems and issues that could have been avoided during this stage. As a result, players lose interest and reviews are full of frustration and disappointment, which, let's face it, we don't want after hours and hours of blood, sweat, and tears. The key here is not only to playtest your game but also to do it in multiple ways and on multiple devices with a range of different people. If you release your game on PC, test it on a high-performance machine and a low-performance one.
The same process should be applied to mobile devices (phones and tablets) and operating systems.

Evaluate: Evaluate your game based on the playtesting. Iterating, playtesting, and evaluating are three steps that you will go through on a regular basis, especially as you implement a new feature or tweak an existing one. This cycle is important. You wouldn't buy a car that has had parts added without being tested first, so why should a player buy a game with untested features?

Build: Build your game and get it ready for distribution, be it on CD or online as a digital download.

Publish: Publish your game! Your baby has come of age and is ready to be released into the wild, where it will be a portal for players around the world to enter the world that you (and your team) created from scratch.

Getting gamified

When we merge everyday objectives with games, we create gamified experiences. The aim of these experiences is to improve something about ourselves in ways that are ideally more motivating than how we perceive them in real life. For example, think of something that you find it difficult to stay motivated with. This may be anything from managing your finances, to learning a new language, or even exercising. Now, if you make a deal with yourself to buy a new dress once you finish managing your finances, or to go on a trip once you have learned a new language, you are turning the experience into a game. The rules are simply to finish the task; the condition of finishing it results in a reward: in the preceding example, either the dress or the trip. The fundamental thing to remember is that gamified experiences aim to make ordinary tasks extraordinary and enjoyable for the player.

Games, gaming, and game-like experiences give rise to many opportunities for us to play or even escape reality. To finish this brief exploration of game design, we must realize that games are not solely about sitting in front of the TV, playing on the computer, or being glued to the seat transfixed on a digital character dodging bullets. The use of game mechanics to make a task more engaging and fun is defined as "gamification." Gamification relates to games, not play; and while the term has become popular, the concept is not entirely new. Think about loyalty cards, not just frequent flyer mile programs, but maybe even at your local butcher or café. Do you get a discount after a certain number of purchases? For example, maybe the tenth coffee is free. Various reward schemes have been in place for a long time: giving children a reward for completing household chores or for good behavior, and awarding "gold stars" for academic excellence, is gamification. Some social activities, such as Scouts, utilize gamification as part of their procedures: Scouts learn new skills and cooperate, and by doing so they achieve status and receive badges of honor that demonstrate levels of competency. Gamification has become a favorable approach to "engaging" clients with new and exciting design schemes to maintain interest and promote a more enjoyable and ideally "fun" product. The product in question does not have to be digital. Therefore, gamification can exist both in a physical realm (as mentioned before with the awarding of gold stars) and in a more prominent digital sense (for example, badge and point reward systems) as an effective way to motivate and engage users.
Some common examples of gamification include the following:

Loyalty programs: Each time you engage with the company in a particular way, such as buying certain products or a certain amount of them, you are rewarded. These rewards can include additional products, points toward items, discounts, and even free items.

School house points: A pastime that some of us may remember, especially fans of Harry Potter: each time you do the right thing, such as following the school rules, you get some points. Alternatively, if you do the wrong thing, you lose points.

Scouts: Scouting rewards levels of competency with badges and ranks. The more skilled you are, the more badges you collect and wear, and ultimately, the faster you work your way up the hierarchy.

Rewarding in general: This is often associated with rules, and these rules determine whether or not you get a reward. Eat your vegetables and you will get dessert; do your math homework and you will get to play. Both have winning conditions.

Tests: As horrifying as it might sound, tests can be considered a game. For example, we're on a quest to learn about history. Each assignment you get is like a task, preparing you for the final battle: the exam. At the end of all these assessments, you get a score or a grade that indicates your progress as you pass from one concept to the next. Ultimately, your final exam determines your rank among your peers and whether or not you make it to the next level (that being anywhere from your year level to a university). It may also be worth noting that, just as in games, you have those trying to work the system, searching for glitches that they can exploit. However, just as in games, they too are eventually kicked.

One last thing to remember when you design anything targeted toward kids is that they can be a lot more perceptive than we sometimes give them credit for. Therefore, if you "disguise" educational content with gameplay, it is likely that they will see through it. It's the same with adults; they know that they are monitoring their health or spending habits, and it's your job to make it a little less painful. Therefore, be upfront, be transparent, and cut through the "disguise." Of course, kids don't want to be asked to "play a game about maths," but they will be more interested in "going on adventures to beat the evil dragon with trigonometry." The same goes for adults; creating an awesome character that can be upgraded to a level-80 warrior for remembering to take out the trash, keep hydrated, and eat healthier is a lot better than telling them that this is a "fun" application to become a better person.

There is no I in Team

Working on our own can be good; sometimes working with others can be better! However, the problem with working in a team is that we're not all equal. Some of us are driven by the project, with the aim of getting the best possible outcome, whereas others are driven by fame, reward, money, and the list goes on. If you have ever worked on a group project in school, then you know exactly what it's like. Agile gamification is, to put it simply, about getting teams to work better together. Often, large, complex projects encounter a wide range of problems, from keeping on top of schedules and reconciling different perspectives to undefined roles and a lack of overall motivation. Agile frameworks in this context are associated with the term Scrum, which describes an overall framework used to formalize software development projects.
The Scrum process works as follows:

The owner of the product creates a wish list known as the product backlog. Once sprint planning begins, members of the team (between 3 and 9 people) take sections from the top of the product backlog. Sprint planning involves the following:

Listing all of the items that need to be completed for the project (in a story format: who, what, and why). This list needs to be prioritized.

Estimating each task relative to the others (using the Fibonacci system).

Planning the work sprint (1-2 weeks long, but less than 1 month) and working toward a demo.

Making the work visible using a storyboard that contains the following sections: To do, Doing, and Done. Items begin in the To do section; once they have begun, they move to the Doing section; and once they are completed, they are put in the Done section. The idea is that the team works through the tasks in the burn down chart. Ideally, the number of points that the sprint began with (in terms of tasks to be done) decreases each day as you get closer to finishing the sprint.

The team engages in daily meetings (preferably standing up) run by the Sprint/Scrum Master. These meetings discuss what was done, what is planned to be done during the day, any issues that have come up or might come up, and how improvements can be made.

The sprint ends with a demonstration of the product's basic (working) features. During this stage, feedback is provided by the product owner as to whether or not they are happy with what has been done, the direction it is going in, and how it relates to the remaining parts of the project. At this stage, the owner may ask you to improve it, iterate on it, and so forth, in the next sprint.

Lastly, the idea is to get the team together and to review the development of the project as a whole: what went well, what didn't go so well, and what areas of improvement can be used to make the next Scrum better?

Next, the team decides how to implement each section. They meet each day not only to assess the overall progress made on each section but also to ensure that the work will be achieved within the time frame. Throughout the process, the team leader, known as the Scrum/Sprint Master, has the job of ensuring that the team stays focused and completes sections of the product backlog on time. Once the sprint is finished, the work should be at a level where it can be shipped, sold to the customer, or at least shown to a stakeholder. At the end of the sprint, the team and the Scrum/Sprint Master assess the completed work and determine whether it is at an acceptable level. If the work is approved, the next sprint begins. Just as in the first sprint, the team chooses another chunk of the product backlog and begins the process again.

An overview of the Scrum process

However, in the modern world, Scrum has been adopted and applied to a range of different contexts outside of software development. As a result, it has gone through some iterations, including gamification. Agile gamification, as it is more commonly known, takes the concept of Scrum and turns it into a playful experience.

Adding an element of fun to agile frameworks

To turn the concept of Scrum into something a bit more interesting, and at the same time boost the overall motivation of your team, certain parts of it can be transformed with game elements. For example, you can implement a leaderboard in which each task that a team member completes (and completes on time) is worth a certain number of points, as in the sketch that follows.
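The following is a minimal, engine-agnostic C# sketch of such a points-and-leaderboard scheme. The class name, the on-time bonus, and the point values are hypothetical choices for illustration, not anything prescribed by Scrum or by the book.

```csharp
// A minimal sketch of the idea above: tasks completed award points, completing
// them on time awards more, and the leaderboard is just members sorted by score.
using System.Collections.Generic;
using System.Linq;

public class SprintLeaderboard
{
    private readonly Dictionary<string, int> scores = new Dictionary<string, int>();

    // Award points for a finished task; finishing on time doubles the reward.
    public void CompleteTask(string member, int storyPoints, bool onTime)
    {
        int earned = storyPoints * (onTime ? 2 : 1);
        int current;
        scores.TryGetValue(member, out current); // current stays 0 for new members
        scores[member] = current + earned;
    }

    // Members ordered from highest to lowest score, ready to display on a board.
    public IEnumerable<KeyValuePair<string, int>> Ranking()
    {
        return scores.OrderByDescending(entry => entry.Value);
    }
}
```

However the scoring is tuned, the important design choice is that it rewards the behavior you actually want (delivering on time), not just raw activity.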
By the end of the sprint, the team member with the most points may obtain a reward, such as a bonus in their next pay or an extended lunch break. It is also possible to make the burn down chart a bit more exciting by placing various bonuses on objectives that are met within a certain time frame or at a certain point during the burn down, as a result giving team members an added incentive to get things delivered on time. In addition, to ensure that quality standards are maintained, Scrum/Sprint Masters can provide additional rewards if there is little or no negative feedback regarding things such as quality or the overall cohesiveness of the output from the sprint. An example of a gamified framework can be seen in the image below. While setting up a DuoLingo Classroom account, users are presented with various game elements (for example, a progress bar) and a checklist to ensure that everything that needs to be completed is done.

Playtesting

This is one of the most important parts of your game design. In fact, you cannot expect to have a great game without it. Playtesting is not just about checking whether your game works or whether there are bugs; it is also about finding out what people really think about it before you put it out into the world. In some cases, playtesting can make the difference between succeeding or failing epically. Consider this scenario: you have spent the last year, your blood, sweat, and tears, and even your soul to create something fantastic. You probably think it's the best thing out there. Then, after you release it, you realize that only half the game was balanced, or worse, only half of it was interesting. At this stage, you will feel pretty down, but all of this could have been avoided if you had taken the time to get some feedback. As humans, we don't necessarily like to hear our greatest love being criticized, especially if we have committed so much of our lives to it. However, the thing to keep in mind is that this stage shapes the final details. Playtesting is not meant only for the final stages, when your game is close to being finished. It should happen at each stage, even when you have just completed a basic prototype. During these stages, it does not have to be large-scale testing; it can be done by a few colleagues, friends, or even family members who can give you an idea of whether or not you're heading in the right direction. Of course, the other important thing to keep in mind is that the people testing your game should be as close as possible to, if not exactly, your target audience. For instance, imagine that you're creating a gamified application to encourage people to take medication on a regular basis; it is not ideal to test it with people who do not take medication. Sure, they may be able to give general feedback on things such as user interface elements or interaction, but in terms of its effectiveness, you're better off taking the time to recruit more specific people.

Iterating

After we have done all the playtesting, it is time to plan another development cycle. In fact, the work of tuning your application doesn't stop after the first tests. On the contrary, it goes through many iterations. The iteration cycle starts with the planning stage, which includes brainstorming, organizing the work (as we saw, for instance, with Scrum), and so on.
In the next phase, development, we actually create the application, as we did in the previous chapter. Then there is playtesting, which we saw earlier in this chapter. In this latter stage, we tune and tweak values and fix bugs in our application. Afterward, we iterate through the whole cycle again by re-entering the planning stage. Here, we will need to plan the next iteration: what should be kept, what should be done better, and what should be removed. All these decisions should be based on what we have collected in the playtesting stage. The cycle is well represented in the following diagram as a spiral that goes on and on through the process:

The point of mentioning this now is that after you finish playtesting your game, you will need to repeat the stages that we have been through previously. You will have to modify your design; you may even need to redesign things again. So, it is better to think of this as upgrading your design rather than as a tedious and repetitive process.

When to stop?

In theory, there is no stopping; the more iterations, the better the application will be. Usually, the iterations stop when the application is good enough for your standards, or when external constraints, such as the market or deadlines, don't allow you to perform any more iterations. The question of when to stop is tricky, and the answer really depends on many factors. You will need to take into account the resources needed to perform another iteration and your time constraints. Of course, remember that your final goal is to deliver a quality product to your audience, and each iteration is a step closer.

Taking in the view with dashboards

Overviews, summaries, and simplicity make life easier. Dashboards are a great way of keeping a lot of information relatively concise and contained, without being too overwhelming to the player. Of course, if players want more detailed information, perhaps statistics about their accuracy since they began, they will have the ability to get it. So, what exactly is a dashboard? A dashboard is a central hub for viewing all of your progress, achievements, points, and rewards. If we take a look at the following screenshot, we can get a rough idea of the kind of information that dashboards display. The image on the left is the dashboard for Memrise and displays the current language course, in this case German; the player's achievements and streak; and the progress that they are making in the course. On the right is the dashboard for DuoLingo. Similar to Memrise, it also features information about daily streaks, the amount of time committed, and the strength of each category learned for the new language, in this case Italian. Just by looking at these dashboards, the player can get a very quick idea of how well or badly they are doing.

Different dashboards: Memrise (left) and DuoLingo (right)

Different approaches to dashboards can encourage different behaviors, depending on the data displayed and how it is displayed. For example, you can have a dashboard that more dominantly provides reflective information, such as progress bars and points. Others can take a more social approach by displaying the player's rank among friends and comparing their statistics to others who are also engaged with the application. Some dashboards may even suggest friends who have elements in common, such as the language being learned. The design of a dashboard can be as simple or as complicated as the designer decides, but typically a less-is-more approach is better.
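If you are building such a dashboard with Unity's UI system, a minimal sketch might look like the following. The progressFraction and points fields are hypothetical stand-ins for whatever your application actually tracks, and the Slider and Text references are assumed to be wired up in the Inspector.

```csharp
// Minimal Unity UI sketch: pushing progress and points into a dashboard view.
using UnityEngine;
using UnityEngine.UI;

public class DashboardView : MonoBehaviour
{
    public Slider progressBar;   // assign a UI Slider (range 0..1) in the Inspector
    public Text pointsLabel;     // assign a UI Text in the Inspector

    // Hypothetical sources of the values shown on the dashboard.
    public float progressFraction = 0.4f; // 0..1, for example course completion
    public int points = 1250;

    private void Update()
    {
        progressBar.value = progressFraction;
        pointsLabel.text = points.ToString();
    }
}
```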
Summary

Everything that we discussed in this chapter is just a taste of what the book offers. Each aspect of the design process is explained in more detail, giving you not only the information but also the practical skills that you can use to build upon and develop any gamified application from start to finish. If you want to find out about gamification, how to use it, and, more importantly, how to implement it in Unity, then this book is a great foundation to get you going. In particular, you will learn how to apply all these concepts in Unity and create gamified experiences. Furthermore, the book will guide you through creating a gamified application starting from the basic pieces, with a particular focus on your audience and your goals. Learning about the uses of gamification does not have to stop with this book. In fact, there are many ways in which you can develop the knowledge that you have gained and apply it to other tasks. Other Packt books, such as the Unity UI Cookbook by Francesco Sapio, which you can obtain at https://www.packtpub.com/game-development/unity-ui-cookbook, feature a range of recipes for implementing different UI elements that can even be featured in your dashboard. In fact, UIs are key to the development of gamified experiences and applications. The main thing is that you continue to learn, adapt, and apply your knowledge in many different types of contexts.

Resources for Article:

Further resources on this subject: Buildbox 2 Game Development: peek-a-boo [article] Customizing the Player Character [article] Sprites in Action [article]
Global Illumination
Packt
17 Jun 2015
16 min read
In this article by Volodymyr Gerasimov, the author of the book Building Levels in Unity, you will see the two types of lighting that you need to take into account if you want to create well-lit levels: direct and indirect. Direct light is light that comes straight from the source. Indirect light is created by light bouncing off the affected area at a certain angle with variable intensity. In the real world, the number of bounces is infinite, and that is the reason why we can see dark areas that don't have light shining directly at them. In computer software, we don't have infinite computing power at our disposal, so we have to use different tricks to simulate realistic lighting at runtime. The process that simulates indirect lighting, light bouncing, reflections, and color bleeding is known as Global Illumination (GI). Unity 5 is powered by one of the industry's leading technologies for handling indirect lighting (radiosity), called Enlighten by Geomerics. Games such as Battlefield 3 and 4, Medal of Honor: Warfighter, Need for Speed: The Run, and Dragon Age: Inquisition are excellent examples of what this technology is capable of, and now all of that power is at your fingertips completely for free! Now, it's only appropriate to learn how to tame this new beast.

(For more resources related to this topic, see here.)

Preparing the environment

Realistic realtime lighting is simply not feasible at our current level of computing power, which forces us to invent tricks that simulate it as closely as possible; but just like with any trick, certain conditions need to be met in order for it to work properly and keep the viewer's eyes from exposing our clever deception. To demonstrate how to work with these limitations, we are going to construct a simple light setup for a small interior scene and talk about solutions to the problems as we go. We will use the LightmappingInterior scene that can be found in the Chapter 7 folder in the Project window. It's a very simple interior and should take us no time to set up.

The first step is to place the lights. We need to create two lights: a Directional light to imitate the moonlight coming from the crack in the dome, and a Point light for the fire burning in the goblet on the ceiling.

Tune each light's Intensity, Range (in the Point light's case), and Color to your liking. So far so good! We can see the direct lighting coming from the moonlight, but there is no trace of indirect lighting. Why is this happening? Should GI be enabled somehow for it to work? As a matter of fact, it should, and here comes the first limitation of Global Illumination: it only works on GameObjects that are marked as Static.

Static versus dynamic objects

Unity objects fall into one of two categories: static or dynamic. The differentiation is very simple: static objects don't move; they stay exactly where they are at all times, they don't play any animations, and they don't engage in any kind of interactions. The rest of the objects are dynamic. By default, all objects in Unity are dynamic and can only be converted into static ones by checking the Static checkbox in the Inspector window.

See it for yourself. Try to mark an object as static in Unity and attempt to move it around in the Play mode. Does it work?
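If you would rather flip this flag from an editor script than click through the Inspector, a minimal sketch follows. The menu path is an arbitrary example, the file is assumed to live in an Editor folder, and setting isStatic this way is simply the scripted equivalent of ticking the Static checkbox.

```csharp
// Editor-only sketch: mark every selected GameObject as Static in one click.
using UnityEngine;
using UnityEditor;

public static class MarkSelectionStatic
{
    [MenuItem("Tools/Mark Selection As Static")] // example menu path
    private static void Mark()
    {
        foreach (GameObject go in Selection.gameObjects)
        {
            Undo.RecordObject(go, "Mark Static"); // keep the action undoable
            go.isStatic = true;                   // same flag as the Static checkbox
        }
    }
}
```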
Global Illumination will only work with static objects; this means that, before we go into the Play mode, we need to be 100 percent sure that the objects that will cast and receive indirect light will not stop doing so from their designated positions. But why is that, you may ask; isn't the whole purpose of Realtime GI to calculate indirect lighting at runtime? The answer to that would be yes, but only to an extent. The technology behind this is called Precomputed Realtime GI; according to Unity's developers, it precomputes all possible bounces that the light can make and encodes them to be used in realtime. It essentially takes a static object and a light and answers the question: "If this light travels around, how is it going to bounce off the affected surface of the static object from every possible angle?"

During runtime, lights use this encoded data as instructions on how the light should bounce, instead of calculating it every frame. Having static objects can be beneficial in many other ways, such as pathfinding, but that's a story for another time.

To test this theory, let's mark objects in the scene as Static, meaning they will not move (and can't be forced to move) by physics, code, or even the transformation tools (the latter is only true during the Play mode). To do that, simply select the Pillar, Dome, WaterProNighttime, and Goblet GameObjects in the Hierarchy window and check the Static checkbox at the top-right corner of the Inspector window. Doing that will cause Unity to recalculate the light and encode the bouncing information. Once the process has finished (it should take no time at all), you can hit the Play button and move the light around. Notice that the bounce lighting changes as well, without any performance overhead.

Fixing the light coming from the crack

The moonlight inside the dome should be coming from the crack on its surface; however, if you rotate the directional light around, you'll notice that it simply ignores the concrete walls and freely shines through. Naturally, that is incorrect behavior and we can't let it stay. We can clearly see through the dome ourselves from the outside as a result of one-sided normals. Earlier, the solution was to duplicate the faces and invert the normals; however, in this case, we actually don't mind seeing through the walls and only want to fix the lighting issue. To fix this, we need to go to the Mesh Renderer component of the Dome GameObject and select the Two Sided option from the drop-down menu of the Cast Shadows parameter.

This will ignore backface culling and allow us to cast shadows from both sides of the mesh, thus fixing the problem. In order to cast shadows, make sure that your directional light has its Shadow Type parameter set to either Hard Shadows or Soft Shadows.

Emission materials

Another way to light up the level is to utilize materials with Emission maps. Pillar_EmissionMaterial, applied to the Pillar GameObject, already has an Emission map assigned to it; all that is left is to crank up the parameter next to it to a number that gives a noticeable effect (let's say 3). Unfortunately, emissive materials are not lights, and precomputed GI will not be able to update the indirect light bounce created by the emissive material. As a result, changing the material in the Play mode will not cause an update. Changes made to materials in the Play mode will be preserved in the Editor.
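The adjustments from the last two sections can also be made from a script. The following is a minimal sketch, assuming the component sits on the Dome, the directional light and pillar renderer are assigned in the Inspector, and the pillar material is based on the Standard shader (which exposes the _EMISSION keyword and _EmissionColor property).

```csharp
// Minimal sketch of the same tweaks done from code.
using UnityEngine;
using UnityEngine.Rendering;

public class LightingFixes : MonoBehaviour
{
    public Light moonlight;          // the directional light
    public Renderer pillarRenderer;  // renderer using Pillar_EmissionMaterial

    private void Start()
    {
        // Cast shadows from both sides of the dome mesh (ignores backface culling).
        GetComponent<MeshRenderer>().shadowCastingMode = ShadowCastingMode.TwoSided;

        // The light must cast shadows for the two-sided setting to matter.
        moonlight.shadows = LightShadows.Soft;

        // Boost the emission color; note that precomputed GI won't update the
        // bounce light from this change at runtime.
        pillarRenderer.material.EnableKeyword("_EMISSION");
        pillarRenderer.material.SetColor("_EmissionColor", Color.white * 3f);
    }
}
```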
Shadows

An important byproduct of lighting is the shadows cast by affected objects. No surprises here! Unity allows both dynamic and static objects to cast shadows, with different results based on the render settings. By default, all lights in Unity have shadows disabled. In order to enable shadows for a particular light, we need to set the Shadow Type parameter to either Hard Shadows or Soft Shadows in the Inspector window.

Enabling shadows will grant you access to three parameters:

Strength: This is the darkness of the shadows, from 0 to 1.

Resolution: This controls the resolution of the shadows. This parameter can either utilize the value set in Use Quality Settings or be selected individually from the drop-down menu.

Bias and Normal Bias: This is the shadow offset. These parameters are used to prevent an artifact known as Shadow Acne (pixelated shadows in lit areas); however, setting them too high can cause another artifact known as Peter Panning (disconnected shadows). The default values usually help us avoid both issues.

Unity uses a technique known as Shadow Mapping, which determines the objects that will be lit by assuming the light's perspective: every object the light sees directly is lit; every object that isn't seen should be in shadow. After rendering the light's perspective, Unity stores the depth of each surface in a shadow map. In cases where the shadow map resolution is low, this can cause some pixels to appear shaded when they shouldn't be (Shadow Acne) or, if the offset is too high, to lack a shadow where there is supposed to be one (Peter Panning). Unity allows you to control which objects receive or cast shadows by changing the Cast Shadows and Receive Shadows parameters in the Mesh Renderer component of a GameObject.

Lightmapping

Every year, more and more games are released with real-time rendering solutions that allow for more realistic-looking environments at the price of the ever-growing computing power of modern PCs and consoles. However, due to the limited hardware capabilities of mobile platforms, it will still be a long time before we are ready to part ways with cheap and affordable techniques such as lightmapping. Lightmapping is a technology for precomputing the brightness of surfaces, also known as baking, and storing it in a separate texture, a lightmap. In order to see lighting in an area, we need to be able to calculate it at least 30 times per second (or more, based on fps requirements). This is not very cheap; however, with lightmapping we can calculate the lighting once and then apply it as a texture. This technology is suitable for static objects that artists know will never be moved; in a nutshell, the process involves creating a scene, setting up the lighting rig, and clicking Bake to get great lighting with minimal performance cost at runtime. To demonstrate the lightmapping process, we will take the scene and try to bake it using lightmapping.

Static versus dynamic lights

We've just talked about a way to guarantee that GameObjects will not move. But what about lights? Hitting the Static checkbox for lights will not achieve much (unless you simply want to completely avoid the possibility of accidentally moving them). The problem at hand is that a light, being a component of an object, has a separate set of controls that allow it to be manipulated even if its holder is set to static. For that purpose, each light has a parameter that allows us to specify the role of the individual light and its contribution to the baking process; this parameter is called Baking.
There are three options available for it:

Realtime: This option excludes the light from the baking process. It is totally fine to use real-time lighting; precomputed GI makes sure that modern computers and consoles are able to handle it quite smoothly. However, it might cause an issue if you are developing for mobile platforms, which require every bit of optimization to run at a stable frame rate. There are ways to fake real-time lighting with much cheaper options. The only thing you should consider is that the number of realtime lights should be kept to a minimum if you are going for maximum optimization. Realtime allows lights to affect both static and dynamic objects.

Baked: This option includes the light in the baking process. However, there is a catch: only static objects will receive light from it. This is self-explanatory: if we want dynamic objects to receive lighting, we need to calculate it every time the position of an object changes, which is what Realtime lighting does. Baked lights are cheap: they are calculated once, all the lighting information is stored on the hard drive, and it is used from there, so no further recalculation is required during runtime. Baked lights are mostly used for small, situational lights that won't have a significant effect on dynamic objects.

Mixed: This one is a combination of the previous two options. It bakes the light into the static objects and affects the dynamic objects as they pass by. Think of street lights: you want the passing cars to be affected; however, you have no need to calculate the lighting for the static environment in realtime. Naturally, we can't have dynamic objects move around the level unlit, no matter how much we'd like to save on computing power. Mixed allows us to have the benefit of baked lighting on the static objects while still affecting the dynamic objects at runtime.

The first step that we are going to take is changing the Baking parameter of our lights from Realtime to Baked and enabling Soft Shadows:

You shouldn't notice any significant difference, except for the extra shadows appearing. The final result isn't too different from the real-time lighting. Its performance is much better, but it lacks support for dynamic objects.

Dynamic shadows versus static shadows

One of the things that confuses people when they start working with shadows in Unity is how shadows are cast by static and dynamic objects under different Baking settings on the light source. This is one of those things that you simply need to memorize and keep in mind when planning the lighting in the scene. We are going to explore how the different Baking options affect shadow casting between different combinations of static and dynamic objects:

As you can see, real-time lighting handles everything pretty well; all the objects cast shadows onto each other and everything works as intended. There is even color bleeding happening between the two static objects on the right.

With Baked lighting the result isn't that inspiring. Let's break it down. Dynamic objects are not lit: if an object is subject to change at runtime, we can't preemptively bake it into the lightmap; therefore, lights that are set to Baked will simply ignore it. Shadows are only cast by static objects onto static objects. This correlates with the previous statement: if we aren't sure that an object won't change, we can't safely bake its shadows into the shadow map.
With Mixed we get a similar result to real-time lighting, except in one instance: dynamic objects do not cast shadows onto static objects, but the reverse does work; static objects cast shadows onto dynamic objects just fine. So what's the catch? Each object gets individual treatment from the Mixed light: objects that are static are treated as if they are lit by a Baked light, and dynamic objects are lit in realtime. In other words, when a shadow is cast onto a dynamic object, it is calculated in realtime, while a shadow cast onto a static object is baked, and we can't bake a shadow that is cast by an object that is subject to change. This was never the case with real-time lighting, since we calculated the shadows in realtime regardless of what they were cast by or onto. Again, this is just one of those scenarios that you need to memorize.

Lighting options

The Lighting window has three tabs: Object, Scene, and Lightmap. For now we will focus on the first one. The main content of the Object tab is information on the objects that are currently selected. This gives us quick access to a list of controls to better tweak the selected objects for lightmapping and GI. You can switch between object types with the help of the Scene Filter at the top; this is a shortcut for filtering objects in the Hierarchy window (it will not filter the selected GameObjects, but everything in the Hierarchy window).

All GameObjects need to be set to Static in order to be affected by the lightmapping process; this is why the Lightmap Static checkbox is the first in the list for Mesh Renderers. If you haven't set the object to static in the Inspector window, checking the Lightmap Static box will do just that.

The Scale in Lightmap parameter controls the lightmap resolution. The greater the value, the bigger the resolution given to the object's lightmap, resulting in better lighting effects and shadows. Setting the parameter to 0 will result in the object not being affected by lightmapping. Unless you are trying to fix lighting artifacts on the object, or going for maximum optimization, you shouldn't touch this parameter; there is a better way to adjust the lightmap resolution for all objects in the scene, since Scale in Lightmap scales in relation to a global value. The rest of the parameters are very situational and quite advanced; they deal with UVs, extend the effect of GI on the GameObject, and give detailed information on the lightmap.

For lights, we have a Baking parameter with three options: Realtime, Baked, or Mixed. Naturally, if you want a light for lightmapping, Realtime is not an option, so you should pick Baked or Mixed. Color and Intensity are referenced from the Inspector window and can be adjusted in either place. Baked Shadows allows us to choose the shadow type that will be baked (Hard, Soft, or Off).

Summary

Lighting is a difficult process that is deceptively easy to learn but hard to master. In Unity, lighting isn't without its issues. Attempting to apply real-world logic to 3D rendering results in a direct confrontation with the limitations posed by an imperfect simulation. In order to solve the issues that may arise, one must first understand what might be causing them, in order to isolate the problem and attempt to find a solution. Alas, there are still a lot of topics left uncovered that are outside the realm of an introduction.
If you wish to learn more about lighting, I would point you again to the official documentation and developer blogs, where you'll find a lot of useful information, tons of theory, and practical recommendations, as well as an in-depth look at all the light elements discussed.

Resources for Article:

Further resources on this subject: Learning NGUI for Unity [article] Saying Hello to Unity and Android [article] Components in Unity [article]
Blender 2.5: creating a UV texture
Packt
21 Oct 2010
4 min read
Before we can create a custom UV texture, we need to export our current UV map from Blender to a file that an image manipulation program, such as GIMP or Photoshop, can read.

Exporting our UV map

If we have GIMP downloaded, we can export our UV map from Blender to a format that GIMP can read. To do this, make sure we can view our UV map in the Image Editor. Then, go to UVs | Export UV Layout and save the file in a folder you can easily get to, naming it UV_layout or whatever you like.

Now it's time to open GIMP!

Downloading GIMP

Before we begin, we first need to get an image manipulation program. If you don't have one of the high-end programs, such as Photoshop, there is still hope. There's a wonderful free (and open source) program called GIMP, which parallels Photoshop in functionality. For the sake of creating our textures, we will be using GIMP, but feel free to use whatever you are personally most comfortable with. To download GIMP, visit the program's website at http://www.gimp.org and download the right version for your operating system.

Mac users will need to install X11 so that GIMP will run; consult your Mac OS installation guide for instructions on how to install it. Windows users will need to install the GTK+ Runtime Environment to run GIMP; the download installer should warn you about this during installation. To install GTK+, visit http://www.gtk.org.

Hello GIMP!

When we open GIMP for the first time, we should have a 3-window layout, similar to the following screen:

Create a new document by selecting File | New. You can also use the Ctrl+N keyboard shortcut. This should bring up a dialog box with a list of settings we can use to customize our new document. Because Blender exported our UV map as an SVG file, we can choose any size image we want, because we can scale the image to fit our document.

SVG stands for Scalable Vector Graphics. Vector graphics are images defined by mathematically calculated paths, allowing them to be scaled infinitely without the pixelation caused when raster images are enlarged beyond a certain point.

Change the Width and Height attributes to 2000 each. This will create a texture image 2000 pixels wide by 2000 pixels high. Click on OK to create our new document.

Getting reference images

Before we can create a UV texture for our wine bottle, which will primarily define the bottle's label, we need to know what is typically on a wine bottle's label. If you search the web for any wine bottle, you'll get a pretty good idea of what a wine bottle label looks like. However, for our purposes, we're going to use the following image:

Notice how there's typically the name of the wine company, the type of wine, and the year it was made. We're going to use all of these in our own wine bottle label.

Importing our UV map

A nice thing about GIMP is that we can import images as layers into our current file. We're going to do just this with our UV map. Go to File | Open as Layers... to bring up the file selection dialog box. Navigate to the UV map we saved earlier and open it. Another dialog box will pop up; we can use this to tell GIMP how we want our SVG to appear in our document. Change the Width and Height attributes to match our working document: 2000px by 2000px. Click on OK to confirm. Not every file type will bring up this dialog box; it's specific to SVG files. We should now see our UV map in the document as a new layer.

Before we continue, we should change the background color of our texture.
Our label is going to be white, so we need to distinguish our label from the rest of the wine bottle's material. With our background layer selected, fill the layer with a black color using the Fill tool. Next, we can create the background color of the label. Create a new layer by clicking on the New Layer button and name it label_background. Using the Marquee Selection tool, make a selection similar to the following image:

Fill it, using the Fill tool, with white. This will be the background for our label; everything else we add will be made in relation to this layer. Keep the UV map layer on top as often as possible. This will help us keep a clear view of where our graphics are in relation to our UV map at all times.