
How-To Tutorials - 3D Game Development

CryENGINE 3: Terrain Sculpting

Packt
21 Jun 2011
11 min read
CryENGINE 3 Cookbook: over 100 recipes written by Crytek developers for creating AAA games using the technology that created Crysis 2.

Creating a new level

Before we can do anything with the gameplay of the project you are creating, we first need the foundation of a new level for the player to stand on. This recipe covers how to create a new level from scratch.

Getting ready

Before we begin, you must have Sandbox 3 open.

How to do it...

At any point, with Sandbox open, you may create a new level by following these steps:

1. Click File (found in the top-left of the Sandbox main toolbar).
2. Click New. You will see a dialog screen that prompts you for information on how you want to set up your level. The most important aspect of a level is its name, as you cannot create a level without a proper name for the level's directory and its .cry file. You may name your level anything you wish, but for ease of instruction we shall refer to this level as My_Level.
3. In the Level Name dialog box, type in My_Level.
4. For the Terrain properties, use the following values:
   Use Custom Terrain Size: True
   Heightmap Resolution: 512x512
   Meters Per Unit: 1
5. Click OK.

Depending on your system specifications, creating a new level may take anywhere from a few seconds to a couple of minutes. Once finished, the Viewport should display a clear blue sky, with the console showing the following three lines:

Finished synchronous pre-cache of render meshes for 0 CGF's
Finished pre-caching camera position (1024,1024,100) in 0.0 sec
Spawn player for channel 1

This means that the new level was created successfully.

How it works...

Let's take a closer look at each of the options used while creating this new level.

Using the Terrain option: This option controls whether the level has any terrain to be manipulated by a heightmap. Terrain can be expensive, and if any of your future levels contain only interiors, or only placed objects for the player to navigate on, then setting this value to false is a good choice: it will save a tremendous amount of memory and aid the performance of the level later on.

Heightmap resolution: This drop-down controls the resolution of the heightmap and the base size of the play area. The settings range from the smallest resolution (128x128) up to the largest supported resolution (8192x8192).

Meters per unit: If the Heightmap Resolution is thought of in terms of pixel size, this setting can also be viewed as Meters Per Pixel: each pixel of the heightmap is represented by this many meters in the level. For example, if a heightmap has 4 Meters Per Unit (or Pixel), then each pixel on the generated heightmap measures four meters in length and width on the level. Although Meters Per Unit can be used to increase the size of your level, it decreases the fidelity of the heightmap; you will notice that smoothing out the terrain becomes difficult, because this value sets a wider minimum triangle size.

Terrain size: This is the resulting size of the level, given by (Heightmap Resolution) x (Meters Per Unit). Here are some examples (m = meters):

(128x128) x 4m = 512x512m
(512x512) x 16m = 8192x8192m
(1024x1024) x 2m = 2048x2048m

There's more...

If you need to change your unit size after creating the map, go to Terrain Editor | Modify | Set Unit Size. This allows you to change the original Meters Per Unit to the size you want.

Generating a procedural terrain

This recipe deals with the procedural generation of terrain. Although never good enough for a final product, because you will want to fine-tune the heightmap to your specifications, generated terrains are a great starting point for anyone new to creating levels, or for anyone who needs to set up a test level with the Sandbox. With different heightmap seeds and a couple of tweaks to the height of the level, you can quickly generate basic mountain ranges or islands that are instantly ready to use.

Getting ready

Have My_Level open inside of Sandbox.

How to do it...

At the top-middle of the Sandbox main toolbar, you will find a menu selection called Terrain. From there you will see a list of options; for now, click on Edit Terrain. This opens the Terrain Editor window. The Terrain Editor window has a multitude of options that can be used to manipulate the heightmap in your level, but first we want to set up a basic generated heightmap to build a simple map with.

Before we generate anything, we should set the maximum height of the map to something more manageable:

1. Click Modify.
2. Then click Set Max Height.
3. Set your Max Terrain Height to 256 (these units are in meters).

Now we can generate the terrain:

1. Click Tools.
2. Then click Generate Terrain.
3. Modify the Variation (Random Base) to the value of 15.
4. Click OK.

After generating, you should see a heightmap similar to the following screenshot. This is a good time to generate the surface texture (File | Generate surface texture | OK), which lets you see the heightmap with a basic texture in the Perspective View.

How it works...

The Maximum Height value is important, as it governs the maximum height to which you can raise your terrain. This does not mean it is the maximum height of your level entirely, as you can still place objects well above this value. It is also important to note that if you import a greyscale heightmap into CryENGINE, this value is used as the upper extreme of the heightmap (255,255,255 white), and the lower extreme is always 0 (0,0,0 black). The heightmap will therefore be generated between 0 m and the maximum height.

Problems such as the following are a common occurrence:

Tall spikes are everywhere on the map, or there are massive mountains and steep slopes. Solution: reduce the Maximum Height to a value better suited to the mountains and slopes you want.
The map is very flat and shows no hills or anything from the heightmap. Solution: increase the Maximum Height to a value suitable for making the hills you want.

There's more...

Here are some other settings you might choose to use while generating the terrain.

Terrain generation settings

The following are the settings used to generate a procedural terrain:

Feature Size: This value handles the general height manipulations within the seed and the size of each mound within the seed. As the size of the feature depends greatly on rounded numbers, it is easy to end up with a perfectly rounded island, so it is best to leave this value at 7.0.
Bumpiness / Noise (Fade): Basically a noise filter for the level; the greater the value, the more noise appears on the heightmap.
Detail (Passes): This value controls how detailed the slopes become. By default it is very high, so that individual bumps on the slopes give the impression of a rougher surface. Reducing it decreases the amount of detail/roughness seen in the slopes.
Variation: This controls the seed number used in the overall generation of the terrain heightmap. There are 33 seeds, ranging from 0 to 32, to choose from as a starting base for a basic heightmap.
Blurring (Blur Passes): A blur filter; the higher the amount, the smoother the slopes on your heightmap.
Set Water Level: From the Terrain Editor window, you can adjust the water level via Modify | Set Water Level. This value changes the base height of the ocean level (in meters).
Make Isle: This tool takes the heightmap of your level and automatically lowers the border areas around the map to create an island. From the Terrain Editor window, select Modify | Make Isle.

Navigating a level with the Sandbox Camera

The ability to intuitively navigate levels is a basic skill all developers should have. Thankfully, the interface is quite intuitive to anyone already familiar with the WASD control scheme popular in most first-person shooter games developed on the PC.

Getting ready

You should have already opened a level from the CryENGINE 3 Software Development Kit content and seen a perspective viewport displaying the level. The window where you can see the level is called the Perspective Viewport. It is the main window used to view and navigate your level; a large majority of your level will be created here, and common tasks such as object placement, terrain editing, and in-editor play testing are performed in it.

How to do it...

The first step in interacting with the loaded level is to practice moving in the Perspective Viewport. Sandbox is designed to be ergonomic for both left- and right-handed users. In this example we use the WASD control scheme, but the arrow keys are also supported for camera movement:

1. Press W to move forwards.
2. Press S to move backwards.
3. Press A to move or strafe left.
4. Press D to move or strafe right.

Now that you have learned to move the camera on its main axes, it's time to adjust the rotation of the camera. When the viewport is the active window:

1. Hold down the right mouse button and move the mouse pointer to turn the view.
2. Hold down the middle mouse button and move the mouse pointer to pan the view.
3. Roll the middle mouse wheel to move the view forward or backward.
4. Hold down Shift to double the speed of the viewport movements.

How it works...

The Viewport allows a huge diversity of views and layouts for viewing your level; the perspective view is just one of many. It is commonly used because it displays the output of the render engine, presenting a view of your level through a standard camera perspective and showing all level geometry, lighting, and effects. To experiment further with the viewport, note that it can also render subsystems and their toolsets, such as the flow graph or the character editor.

There's more...

You will likely want to adjust the movement speed and customize the viewport to your individual use. You can also split the viewport into multiple different views, which is discussed further below.

Viewport movement speed control

The Speed input is used to increase or decrease the speed of all movements you make in the main Perspective Viewport. The three buttons to the right of the Speed input are quick links to the 0.1, 1, and 10 speeds.

Under Views, you can adjust the viewport to show different aspects of your level. Top View, Front, and Left show their respective aspects of your level, consisting of bounding boxes and line-based helpers; note that geometry is not drawn. Map view shows an overhead map of your level with helper, terrain, and texture information pertaining to your level.

Splitting the main viewport into several subviewports

Individual users can customize the layout and set viewing options specific to their needs using the viewport menu, accessed by right-clicking the viewport's header. The Layout Configuration window can be opened from the viewport header under Configure Layout. Once selected, you can pick one of the preset configurations to arrange the windows of the Sandbox editor into multiple viewport configurations. Be aware that in multiple viewport configurations some rendering effects may be disabled or performance may be reduced.
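The terrain-size arithmetic from the first recipe is simple enough to sanity-check in code. The following sketch is plain Java for illustration only; the class and method names are ours, not part of the CryENGINE SDK.

```java
// Illustrative helper (not CryENGINE API): computes the final terrain
// edge length from heightmap resolution and meters per unit, i.e.
// (Heightmap Resolution) x (Meters Per Unit) = Final Terrain Dimensions.
public class TerrainSize {
    public static int edgeLengthMeters(int resolution, int metersPerUnit) {
        return resolution * metersPerUnit;
    }

    public static void main(String[] args) {
        System.out.println(edgeLengthMeters(128, 4));   // 512
        System.out.println(edgeLengthMeters(512, 16));  // 8192
        System.out.println(edgeLengthMeters(1024, 2));  // 2048
    }
}
```

Running this reproduces the three examples from the recipe: 512x512m, 8192x8192m, and 2048x2048m.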

CryENGINE 3: Fun Physics

Packt
12 Jul 2011
4 min read
CryENGINE 3 Cookbook: over 90 recipes written by Crytek developers for creating third-generation real-time games.

Low gravity

In this simple recipe, we will look at using the GravityBox to set up a low-gravity area within a level.

Getting ready

Have Sandbox open, then open My_Level.cry.

How to do it...

To start, we must first place down a GravityBox:

1. In the RollupBar, click on the Entities button.
2. Under the Physics section, select GravityBox.
3. Place the GravityBox on the ground.

Keeping the default dimensions (20, 20, 20 meters), the only property we want to change is gravity. The default settings make the entire area within the box a zero-gravity zone. To adjust the up/down gravity, we change the Z component of the gravity value. To mimic normal gravity, this value would be set to the acceleration value of -9.81. For lower gravity (something like the Moon's), change it to a negative value closer to zero, such as -1.62.

How it works...

The GravityBox is a simple bounding box that overrides the gravity defined in the code (-9.81) and applies its own gravity value within its bounds. Anything physicalized and activated to receive physics updates will obey these gravitational rules unless it falls outside the bounding box.

There's more...

Here are some useful tips about the gravity objects.

Uniform property: The Uniform property of the GravityBox defines whether it uses its own local orientation or the world's. If true, the GravityBox uses its own local rotation for the gravitational direction; if false, it uses the world's. This is useful when you wish to direct gravity sideways: set the value to True and then rotate the GravityBox onto its side.

Gravity sphere: The GravitySphere follows the same principles as the GravityBox, but over a radius instead of a bounding box. The only other difference is that, with Uniform set to false, any object within the sphere is attracted to or repelled from its center.

Hangman on a rope

In this recipe, we will look at how to use a rope to hang a dead body.

Getting ready

Open Sandbox, then open My_Level.cry.

How to do it...

Begin by drawing out a rope:

1. Open the RollupBar.
2. From the Misc button, select Rope.
3. With Grid Snap on and set to 1 meter, draw out a straight rope in one-meter increments (click once for each increment) up to four meters (double-click to finalize the rope).
4. Align the rope so that, end to end, it runs along the Z axis (up and down) and sits a few meters off the ground.

Next, we need something solid to hang the rope from:

1. Place down a solid measuring 1 x 1 x 1 meters.
2. Align the rope underneath the solid cube while keeping both off the ground. When aligning the rope, make sure the end constraint turns from red to green; this means it is attached to a physical surface.

Lastly, we will hang a body from this rope; not in the traditional manner, but by one of its feet:

1. In the RollupBar, click on the Entities button.
2. Under the Physics section, select DeadBody.
3. Rotate this body upside-down and align one of its feet to the bottom end of the rope.
4. Select the rope to make sure the bottom constraint turns green, signaling that it is attached.
5. Verify that the Hangman on a rope recipe works by going into game mode and punching the dead body.

How it works...

The rope is a complicated cylinder that can contain as many bending segments as defined, and it stretches and compresses depending on the values set. Tension and breaking strength can also be defined. Since ropes carry expensive physics properties, they should be used sparingly.
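To get a feel for what the GravityBox's Z value does, a back-of-the-envelope kinematics check helps: an object dropped from rest falls h = ½gt², so the time to fall a given height scales with 1/√g. The sketch below is plain Java, independent of CryENGINE; only the two acceleration constants (-9.81 and -1.62) come from the recipe.

```java
// Rough kinematics check (not CryENGINE code): time for an object dropped
// from rest to fall `height` meters under downward acceleration `gravity`.
// From h = 0.5 * g * t^2 we get t = sqrt(2h / g).
public class FallTime {
    public static double secondsToFall(double height, double gravity) {
        return Math.sqrt(2.0 * height / gravity);
    }

    public static void main(String[] args) {
        // Dropping an object from the top of the default 20 m GravityBox:
        double normal = secondsToFall(20.0, 9.81); // engine default gravity
        double moon   = secondsToFall(20.0, 1.62); // Moon-like GravityBox value
        System.out.printf("Normal: %.2f s, Moon-like: %.2f s%n", normal, moon);
    }
}
```

With the Moon-like value, the fall takes roughly two and a half times as long, which is the floaty behavior you see in game mode.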

Scaling friendly font rendering with distance fields

Packt
28 Oct 2014
8 min read
This article by David Saltares Márquez and Alberto Cejas Sánchez, the authors of Libgdx Cross-platform Game Development Cookbook, describes how to generate a distance field font and render it in Libgdx.

As a bitmap font is scaled up, it becomes blurry due to linear interpolation. It is possible to tell the underlying texture to use the nearest filter, but the result will be pixelated. Additionally, until now, if you wanted big and small pieces of text using the same font, you would have had to export it twice at different sizes; the output texture gets bigger rather quickly, and this is a memory problem. (For more resources related to this topic, see here.)

Distance field fonts are a technique that enables us to scale monochromatic textures without losing quality, which is pretty amazing. It was first published by Valve (Half-Life, Team Fortress...) in 2007. It involves an offline preprocessing step and a very simple fragment shader at render time, but the results are great and there is very little performance penalty. You also get to use smaller textures! In this article, we will cover the entire process of generating a distance field font and rendering it in Libgdx.

Getting ready

For this, we will load the data/fonts/oswald-distance.fnt and data/fonts/oswald.fnt files. To generate the fonts, Hiero is needed, so download the latest Libgdx package from http://libgdx.badlogicgames.com/releases and unzip it. Make sure the sample projects are in your workspace; visit https://github.com/siondream/libgdx-cookbook to download them.

How to do it...

First, we need to generate a distance field font with Hiero. Then, a special fragment shader is required to render scaling-friendly text in Libgdx.

Generating distance field fonts with Hiero

1. Open up Hiero from the command line (Linux and Mac users only need to replace the semicolons with colons and the backslashes with forward slashes):

java -cp gdx.jar;gdx-natives.jar;gdx-backend-lwjgl.jar;gdx-backend-lwjgl-natives.jar;extensions\gdx-tools\gdx-tools.jar com.badlogic.gdx.tools.hiero.Hiero

2. Select the font using either the System or File options. This time, you don't need a really big size; the point is to generate a small texture and still be able to render text at high resolutions while maintaining quality. We have chosen 32 this time.
3. Remove the Color effect, and add a white Distance field effect.
4. Set the Spread; the thicker the font, the bigger this value should be. For Oswald, 4.0 seems to be a sweet spot.
5. To cater for the spread, you need to set a matching padding. Since this makes the characters render further apart, counterbalance it by setting the X and Y values to twice the negative padding.
6. Finally, set the Scale to be the same as the font size. Hiero will struggle to render the charset, which is why we wait until the end to set this property.
7. Generate the font by going to File | Save BMFont files (text)....

The following is the Hiero UI showing a font texture with a Distance field effect applied to it:

Distance field fonts shader

We cannot use the distance field texture to render text directly, for obvious reasons: it is blurry! A special shader is needed to take the information from the distance field and transform it into the final, smoothed result. The vertex shader found in data/fonts/font.vert is simple; the magic takes place in the fragment shader, found in data/fonts/font.frag and explained later. First, we sample the alpha value from the texture for the current fragment and call it distance. Then, we use the smoothstep() function to obtain the actual fragment alpha. If distance is between 0.5-smoothing and 0.5+smoothing, Hermite interpolation is used; if the distance is greater than 0.5+smoothing, the function returns 1.0; and if the distance is smaller than 0.5-smoothing, it returns 0.0. The code is as follows:

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform sampler2D u_texture;

varying vec4 v_color;
varying vec2 v_texCoord;

const float smoothing = 1.0/128.0;

void main() {
    float distance = texture2D(u_texture, v_texCoord).a;
    float alpha = smoothstep(0.5 - smoothing, 0.5 + smoothing, distance);
    gl_FragColor = vec4(v_color.rgb, alpha * v_color.a);
}

The smoothing constant determines how hard or soft the edges of the font will be. Feel free to play around with the value and render fonts at different sizes to see the results. You could also make it a uniform and configure it from code.

Rendering distance field fonts in Libgdx

Let's move on to DistanceFieldFontSample.java, where we have two BitmapFont instances: normalFont (pointing to data/fonts/oswald.fnt) and distanceFont (pointing to data/fonts/oswald-distance.fnt). This will help us illustrate the difference between the two approaches. Additionally, we have a ShaderProgram instance for our previously defined shader.
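Before moving on, it is worth seeing what the smoothstep() call above actually computes. The following is a plain-Java re-creation of the GLSL built-in, for illustration only (GLSL provides this natively; it is not Libgdx API):

```java
// CPU re-implementation of GLSL's smoothstep() for illustration:
// clamp (x - edge0) / (edge1 - edge0) to [0, 1], then apply the
// Hermite polynomial 3t^2 - 2t^3.
public class SmoothStep {
    public static float smoothstep(float edge0, float edge1, float x) {
        float t = Math.max(0f, Math.min(1f, (x - edge0) / (edge1 - edge0)));
        return t * t * (3f - 2f * t);
    }

    public static void main(String[] args) {
        float smoothing = 1f / 128f;
        float lo = 0.5f - smoothing, hi = 0.5f + smoothing;
        System.out.println(smoothstep(lo, hi, 0.3f)); // below the band: 0.0 (transparent)
        System.out.println(smoothstep(lo, hi, 0.7f)); // above the band: 1.0 (opaque)
        System.out.println(smoothstep(lo, hi, 0.5f)); // exactly on the edge: 0.5
    }
}
```

Only samples inside the narrow band around 0.5 are interpolated; everything else snaps to fully transparent or fully opaque, which is what produces the crisp glyph edge.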
In the create() method, we instantiate both the fonts and the shader normally:

normalFont = new BitmapFont(Gdx.files.internal("data/fonts/oswald.fnt"));
normalFont.setColor(0.0f, 0.56f, 1.0f, 1.0f);
normalFont.setScale(4.5f);

distanceFont = new BitmapFont(Gdx.files.internal("data/fonts/oswald-distance.fnt"));
distanceFont.setColor(0.0f, 0.56f, 1.0f, 1.0f);
distanceFont.setScale(4.5f);

fontShader = new ShaderProgram(Gdx.files.internal("data/fonts/font.vert"), Gdx.files.internal("data/fonts/font.frag"));

if (!fontShader.isCompiled()) {
    Gdx.app.error(DistanceFieldFontSample.class.getSimpleName(), "Shader compilation failed:\n" + fontShader.getLog());
}

We need to make sure that the texture our distanceFont just loaded uses linear filtering:

distanceFont.getRegion().getTexture().setFilter(TextureFilter.Linear, TextureFilter.Linear);

Remember to free up resources in the dispose() method, and let's get on with render(). First, we render some text with the regular font using the default shader, and right after that, we do the same with the distance field font using our shader:

batch.begin();
batch.setShader(null);
normalFont.draw(batch, "Distance field fonts!", 20.0f, VIRTUAL_HEIGHT - 50.0f);

batch.setShader(fontShader);
distanceFont.draw(batch, "Distance field fonts!", 20.0f, VIRTUAL_HEIGHT - 250.0f);
batch.end();

The results are pretty obvious: a huge win in memory and quality for a very small price in GPU time. Try increasing the font size even more and be amazed at the results! You might have to slightly tweak the smoothing constant in the shader code, though.

How it works...

Let's explain the fundamentals behind this technique. For a thorough explanation, we recommend reading the original paper by Chris Green from Valve (http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf).

A distance field is a derived representation of a monochromatic texture. For each pixel in the output, the generator determines whether the corresponding pixel in the original is colored or not. It then examines its neighborhood to determine the 2D distance, in pixels, to a pixel of the opposite state. Once the distance is calculated, it is mapped to the [0, 1] range, with 0 being the maximum negative distance and 1 being the maximum positive distance; a value of 0.5 indicates the exact edge of the shape. The following figure illustrates this process.

Within Libgdx, the BitmapFont class uses SpriteBatch to render text normally, only this time it uses a texture with a Distance field effect applied to it. The fragment shader is responsible for performing a smoothing pass: if the alpha value for a fragment is higher than 0.5, it can be considered in; it is out in any other case. This produces a clean result.

There's more...

We have applied distance fields to text, but we have also mentioned that the technique works with monochromatic images; you simply need to generate a low-resolution distance field transform. Luckily enough, Libgdx comes with a tool that does just this. Open a command-line window, access your Libgdx package folder, and enter the following command:

java -cp gdx.jar;gdx-natives.jar;gdx-backend-lwjgl.jar;gdx-backend-lwjgl-natives.jar;extensions\gdx-tools\gdx-tools.jar com.badlogic.gdx.tools.distancefield.DistanceFieldGenerator

The distance field generator takes the following parameters:

--color: the color in hexadecimal RGB format; the default is ffffff
--downscale: the factor by which the original texture will be downscaled
--spread: the edge scan distance, expressed in terms of the input

Take a look at this example:

java [...] DistanceFieldGenerator --color ff0000 --downscale 32 --spread 128 texture.png texture-distance.png

Alternatively, you can use the gdx-smart-font library to handle scaling. It is a simpler but somewhat more limited solution (https://github.com/jrenner/gdx-smart-font).
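The "examine the neighborhood, find the distance to the nearest opposite-state pixel, map to [0, 1]" process described above can be sketched with a naive brute-force generator. This is an illustrative toy on a tiny boolean bitmap; the real com.badlogic.gdx.tools.distancefield.DistanceFieldGenerator also downscales and is far more efficient.

```java
// Toy brute-force distance field generator (illustration only).
// Output per pixel is in [0, 1]: 0.5 is the shape edge, values above
// 0.5 are inside the shape, values below 0.5 are outside.
public class ToyDistanceField {
    public static float[][] generate(boolean[][] in, float spread) {
        int h = in.length, w = in[0].length;
        float[][] out = new float[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                float best = spread; // distance to nearest opposite-state pixel, capped at spread
                for (int ny = 0; ny < h; ny++)
                    for (int nx = 0; nx < w; nx++)
                        if (in[ny][nx] != in[y][x]) {
                            float d = (float) Math.hypot(nx - x, ny - y);
                            if (d < best) best = d;
                        }
                float signed = in[y][x] ? best : -best;       // inside is positive
                out[y][x] = 0.5f + 0.5f * (signed / spread);  // map [-spread, spread] to [0, 1]
            }
        return out;
    }

    public static void main(String[] args) {
        boolean[][] dot = {
            {false, false, false},
            {false, true,  false},
            {false, false, false},
        };
        float[][] df = generate(dot, 2f);
        System.out.println(df[1][1] > 0.5f); // the colored center pixel maps above 0.5
        System.out.println(df[0][0] < 0.5f); // an outside corner pixel maps below 0.5
    }
}
```

The O(n⁴) scan is obviously impractical for real textures, but it shows exactly why 0.5 marks the glyph edge that the fragment shader then thresholds.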
Summary

In this article, we have covered the entire process of generating a distance field font and rendering it in Libgdx.

Further resources on this subject:

Cross-platform Development - Build Once, Deploy Anywhere
Getting into the Store
Adding Animations

Breaking Ground with Sandbox

Packt
12 Oct 2012
11 min read
What makes a game? We saw that majority of the games created on the CryENGINE SDK have historically been first-person shooters containing a mix of sandbox and directed gameplay. If you have gone so far as to purchase a book on the use of the CryENGINE 3 SDK, then I am certain that you have had some kind of idea for a game, or even improvements to existing games, that you might want to make. It has been my experience professionally that should you have any of these ideas and want to share or sell them, the ideas that are presented in a playable format, even in early prototype form, are far more effective and convincing than any PowerPoint presentation or 100-page design document. Reducing, reusing, recycling Good practice when creating prototypes and smaller scale games, especially if you lack the expertise in creating certain assets and code, is to reduce, reuse, and recycle. To break down what I mean: Reduce the amount of new assets and new code you need to make Reuse existing assets and code in new and unique ways Recycle the sample assets and code provided, and then convert them for your own uses Developing out of the box As mentioned earlier, the CryENGINE 3 SDK has a huge amount of out-of-the-box features for creating games. Let's begin by following a few simple steps to make our first game world. Before proceeding with this example, it's important to understand the features it is displaying; the level we will have created by the end of this article will not be a full, playable game, but rather a unique creation of yours, which will be constructed using the first major features we will need in our game. It will provide an environment in to which we can design gameplay. With the ultimate goal of this article being to create our own level with the core features immediately available to us, we must keep in mind that these examples are orientated to compliment a first-person shooter and not other genres. 
The first-person shooter genre is quite well defined as new games come out every year within this genre. So, it should be fairly easy for any developer to follow these examples. In my career, I have seen that you can indeed accomplish a good cross section of different games with the CryENGINE 3 SDK. However, the third- and first-person genres are significantly easier to create, immediately with the example content and features available right out of the box. For the designers:This article is truly a must-have for designers working with the engine. Though, I would highly recommend that all users of sandbox know how to use these features, as they are the principal features typically used within most levels of the different types of games in the CryENGINE. Time for action - creating a new level Let's follow a few simple steps to create our own level: Start the Editor.exe application. Select File | New. This will present you with a New Level dialog box that allows you to do the adjustments of some principal properties of your masterpiece to come. The following screenshot shows the properties available in New Level: Name this New Level, as Book_Example_1. The name that you choose here will identify this level for loading later as well as creating a folder and .cry file of the same name. In the Terrain section of the dialog box, set Heightmap Resolution to 1024x1024 , and Meters Per Unit to 1. Click on OK and your New Level will begin to load. This should occur relatively fast, but will depend on your computer's specifications. You will know the level has been loaded when you see Ready in the status bar. You will also see an ocean stretching out infinitely and some terrain slightly underneath the water. Maneuver your camera so that you have a good, overall view of the map you will create, as seen in the following screenshot: (Move the mouse over the image to enlarge.) What just happened? Congratulations! You now have an empty level to mold and modify at your will. 
Before moving on, let's talk a little about the properties that we just set, as they are fundamental properties of the levels within CryENGINE. It is important to understand these, as depending on the type of game you are creating, you may need bigger or smaller maps, or you may not even need terrain at all. Using the right Heightmap Resolution When we created the New Level, we chose a Heightmap Resolution of 1024x1024. To explain this further, each pixel on the heightmap has a certain grey level. This pixel then gets applied to the terrain polygons, and depending on the level of grey, will move the polygon on the terrain to a certain height. This is called displacement. Heightmaps always have varying values from full white to full black, where full white is maximum displacement and full black is minimum or no displacement. The higher the resolution of the heightmap, the more the pixels that are available to represent different features on said heightmap. You can thus achieve more definition and a more accurate geometrical representation of your heightmap using higher resolutions. The settings can range from the smallest resolution of 128x128, all the way to the largest supported resolution of 8192x8192 . The following screenshot shows the difference between high resolution and low resolution heightmaps:   Scaling your level with Meters Per Unit If the Heightmap Resolution parameter is examined in terms of pixel size, then this dialog box can be viewed also as the Meters Per Pixel parameter . This means that each pixel of the heightmap will be represented by so many meters. For example, if a heightmap's resolution has 4 Meters Per Unit, then each pixel on the generated heightmap will measure to be 4 meters in length and width on the level. Even though Meters Per Unit can be used to increase the size of your level, it will decrease the fidelity of the heightmap. 
You will notice that attempting to smoothen out the terrain may be difficult, since there will be a wider, minimum triangle size set by this value. Keep in mind that you can adjust the unit size even after the map has been created. This is done through the terrain editor, which we will discuss shortly. Calculating the real-world size of the terrain The expected size of the terrain can easily be calculated before making the map, because the equation is not so complicated. The real-world size of the terrain can be calculated as: (Heightmap Resolution) x Meters Per Unit = Final Terrain Dimensions. For example: (128x128) x 2m = 256x256m (512x512) x 8m = 4096x4096m (1024x1024) x 2m = 2048x2048m Using or not using terrain In most cases, levels in CryENGINE will use some amount of the terrain. The terrain itself is a highly optimized system that has levels of dynamic tessellation, which adjusts the density of polygons depending on the distance from the camera to the player. Dynamic tessellation is used to make the more defined areas of the terrain closer to the camera and the less defined ones further away, as the amount of terrain polygons on the screen will have a significant impact on the performance of the level. In some cases, however, the terrain can be expensive in terms of performance, and if the game is made in an environment like space or interior corridors and rooms, then it might make sense to disable the terrain. Disabling the terrain in these cases will save an immense amount of memory, and speed up level loading and runtime performance. In this particular example, we will use the terrain, but should you wish to disable it, simply go to the second tab in the RollupBar (usually called the environment tab) and set the ShowTerrainSurface parameter to false , as shown in the following screenshot:   Time for action - creating your own heightmap You must have created a new map to follow this example. 
Having sufficiently beaten the terrain system to death through explanation, let's get on with what we are most interested in: creating our own heightmap to use for our game.

As discussed in the previous example, you should now see a flat plane of terrain slightly submerged beneath the ocean. At the top of the Sandbox interface, in the main toolbar, you will find a menu selection called Terrain; open this. The following screenshot shows the options available in the Terrain menu. As we want to adjust the terrain, we will select the Edit Terrain option. This will open the Terrain Editor window, which is shown in the following screenshot:

You can zoom in and pan this window to further inspect areas within the map. Click-and-drag using the right mouse button to pan the view and use the mouse wheel to zoom in and out.

The Terrain Editor window has a multitude of options that can be used to manipulate the heightmap of your level. Before we start painting anything, we should first set the maximum height of the map to something more manageable:

Click on Modify.
Click on Set Max Height.
Set your Max Terrain Height to 256. Note that the terrain height is measured in meters.

Having now set the Max Height parameter, we are ready to paint!

Using a second monitor: This is a good time to take advantage of a second monitor should you have one, as you can leave the perspective view on your primary monitor and view the changes made in the Terrain Editor on your second monitor, in real time.

On the right-hand side of the Terrain Editor, you will see a rollout menu named Terrain Brush. We will first use this to flatten a section of the level. Change the Brush Settings to Flatten, and set the following values:

Outside Radius = 100
Inside Radius = 100
Hardness = 1
Height = 20

NOTE: You can sample the terrain height in the Terrain Editor or the viewport using the Control shortcut when the flatten brush is selected.

Now paint over the top half of the map.
This will flatten the entire upper half of the terrain to 20 meters in height. You will end up with the following screenshot, where the dark portion represents the terrain; since it is relatively low compared to our max height, it appears black:

Note that, by default, the water is set to a height of 16 meters. Since we flattened our terrain to a height of 20 meters, we have a 4-meter difference between the terrain and the water in the center of the map. In the perspective viewport, this will look like a steep cliff going into the water. Where the terrain meets the water, it would make sense to turn this into a beach, as it's the most natural way to combine terrain and water. To do this, we will smoothen the hard edge of the terrain along the water.

As this is to become our beach area, let's now use the smooth tools to make it passable by the player. Change the Type of brush to Smooth and set the following parameters:

Outside Radius = 50
Hardness = 1

I find it significantly easier to gauge the effects of the smooth brush in the perspective viewport, so I recommend using it while you paint the southern edge of the terrain, which will become our beach.

Now that we have what will be our beach, let's sculpt some background terrain. Select the Rise/Lower brush and set the following parameters:

Outside Radius = 75
Inside Radius = 50
Hardness = 0.8
Height = 1

Before painting, set the Noise Settings for the brush; to do so, set Enable Noise to true. Also set:

Scale = 5
Frequency = 25

Paint the outer edges of the terrain while keeping an eye on the perspective viewport to watch the actual height of the mountain-type structures this creates.
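The Inside Radius, Outside Radius, and Hardness values we keep setting control the brush's footprint and falloff. A plausible sketch of such a falloff in Python follows; this is a conceptual illustration using a simple linear falloff, and CryENGINE's real brush curve may well differ:

```python
def brush_weight(distance, inside_radius, outside_radius, hardness):
    """Stroke weight at a given distance from the brush centre.

    Full strength inside the inner radius, fading linearly to zero
    at the outer radius; Hardness scales the overall stroke strength.
    """
    if distance <= inside_radius:
        return hardness
    if distance >= outside_radius:
        return 0.0
    falloff = (outside_radius - distance) / (outside_radius - inside_radius)
    return hardness * falloff

# The Rise/Lower settings from the text: inside 50, outside 75, hardness 0.8
print(brush_weight(40.0, 50, 75, 0.8))  # full strength: 0.8
print(brush_weight(62.5, 50, 75, 0.8))  # halfway through the falloff: 0.4
print(brush_weight(80.0, 50, 75, 0.8))  # outside the brush: 0.0
```

This is why setting Inside Radius equal to Outside Radius (as we did for the flatten brush) gives a hard-edged stroke, while spreading them apart gives a soft rim.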
You can see the results in the Terrain Editor and the perspective view, as seen in the following screenshots:

This is a good time to use the shortcut for switching to the smooth brush while painting the terrain: while in the perspective view, hold the Shift shortcut. A good technique is to use the Rise/Lower brush and only click a few times, then use Shift to switch to the smooth brush and smooth the same area multiple times. This will give you some nice terrain variation, which will serve us nicely when we go to texture it.

Don't forget the player's perspective: Remember to switch to game mode periodically to inspect your terrain from the player's level. It is often the case that we get caught up in the appearance of a map from our own point of view while building it, rather than from the point of view of the player, which is paramount if our game is to be enjoyable to anyone playing it.

Save this map as Book_Example_1_no_color.cry.
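As a footnote to the sculpting we just did: conceptually, the Smooth brush replaces each height sample with an average of its neighbours. A one-dimensional Python sketch of that idea is below (illustrative only; the engine's smoothing is more sophisticated and works in two dimensions):

```python
def smooth_pass(heights):
    """One smoothing pass: average each sample with its two neighbours."""
    out = list(heights)
    for i in range(1, len(heights) - 1):
        out[i] = (heights[i - 1] + heights[i] + heights[i + 1]) / 3.0
    return out

# A hard 4 m step from the 20 m plateau down to the 16 m shoreline:
cliff = [20.0, 20.0, 20.0, 16.0, 16.0, 16.0]
print(smooth_pass(cliff))
```

Each pass eats a little more of the step, turning the cliff into the gentle slope we want for a walkable beach; repeated strokes with the Smooth brush do the same thing to the real heightmap.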
Lighting an Outdoor Scene in Blender

Packt
19 Oct 2010
7 min read
Blender 2.5 Lighting and Rendering
Bring your 3D world to life with lighting, compositing, and rendering:

Render spectacular scenes with realistic lighting in any 3D application using interior and exterior lighting techniques
Give an amazing look to 3D scenes by applying light rigs and shadow effects
Apply color effects to your scene by changing the World and Lamp color values
A step-by-step guide with practical examples that help add dimensionality to your scene

Getting the right files

Before we get started, we need a scene to work with. There are three scenes provided for our use: an outdoor scene, an indoor scene, and a hybrid scene that incorporates elements found both inside and outside. All these files can be downloaded from http://www.cgshark.com/lighting-and-rendering/

The file we are going to use for this scene is called exterior.blend. This scene contains a tricycle, which we will light as if it were a product being promoted for a company. To download the files for this tutorial, visit http://www.cgshark.com/lighting-and-rendering/ and select exterior.blend.

Blender render settings

In computer graphics, a two-dimensional image is created from three-dimensional data through a computational process known as rendering. It's important to understand how to customize Blender's internal renderer settings to produce a final result that's optimized for our project, be it a single image or a full-length film. With the settings Blender provides, we can set frame rates for animation, image quality, image resolution, and many other essential parts needed to produce that optimized final result.

The Scene menu

We can access these render settings through the Scene menu. Here, we can adjust a myriad of settings.
For the sake of these projects, we are only going to be concerned with:

Which window Blender will render our image in
How render layers are set up
Image dimensions
Output location and file type

Render settings

The first settings we see when we look at the Scene menu are the Render settings. Here, we can tell Blender to render the current frame or an animation using the render buttons. We can also choose what type of window we want Blender to render our image in using the Display options.

The first option (and the one chosen by default) is Full Screen. This renders our image in a window that overlaps the three-dimensional window in our scene. To restore the three-dimensional view, select the Back to Previous button at the top of the window.

The next option is the Image Editor that Blender uses both for rendering and for UV editing. This is especially useful when using the Compositor, as it allows us to see our result alongside our composite node setup. By default, Blender replaces the three-dimensional window with the Image Editor.

The last option is the one Blender has used, by default, since day one: New Window. This means that Blender will render the image in a newly created window, separate from the rest of the program's interface.

For the sake of these projects, we're going to keep this setting at the default: Full Screen.

Dimensions settings

These are some of the most important settings for optimizing our project output. We can set the image size, frame rate, frame range, and aspect ratio of our render. Luckily for us, Blender provides preset render settings common in the film industry:

HDTV 1080P
HDTV 720P
TV NTSC
TV PAL
TV PAL 16:9

Because we want to keep our render times relatively low for our projects, we're going to set our preset dimensions to TV NTSC, which results in an image 720 pixels wide by 480 pixels high.
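A quick way to see why a 720x480 preset can still display as 4:3 is to account for the pixel aspect ratio. The sketch below is plain Python; the 10:11 NTSC pixel aspect is an assumption drawn from the NTSC DV standard, not a value quoted in this article:

```python
def display_aspect(width_px, height_px, pixel_aspect_x=1.0, pixel_aspect_y=1.0):
    """Display aspect ratio of a render, accounting for non-square pixels."""
    return (width_px * pixel_aspect_x) / (height_px * pixel_aspect_y)

# Square-pixel HD is plain 16:9:
print(round(display_aspect(1920, 1080), 3))        # 1.778
# TV NTSC renders 720x480, but with 10:11 pixels it displays as roughly 4:3:
print(round(display_aspect(720, 480, 10, 11), 3))  # 1.364
```

The practical takeaway: when choosing a preset, it is the displayed aspect, not the raw pixel count, that determines how the final frame looks on a TV.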
If you're interested in learning more about how the other formats behave, feel free to visit http://en.wikipedia.org/wiki/Display_resolution.

Output settings

These settings are an important factor in determining how we want our final product to be viewed. Blender provides us with numerous image and video types to choose from. When rendering an animation or image sequence, it's always easier to manually set the folder we want Blender to save to. We can tell Blender where to save by establishing the path in the output settings. By default on Macintosh, Blender saves to the /tmp/ folder.

Now that we understand how Blender's renderer works, we can start working with our scene!

Establishing a workflow

The key to consistently producing high-quality work is to establish a well-tested and efficient workflow. Everybody's workflow is different, but we are going to follow this series of steps:

Evaluate what the scene we are lighting will require.
Plan how we want to lay out the lamps in our scene.
Set lamp positions, intensities, colors, and shadows, if applicable.
Add materials and textures.
Tweak until we're satisfied.

Evaluating our scene

Before we even approach a computer, we need to think about our scene from a conceptual perspective. This is important because knowing everything about our scene and the story taking place will help us produce a more realistic result. To kick-start this process, we can ask ourselves a series of questions that get us thinking about what's happening in our scene. These questions can pertain to an entire array of possibilities and conditions, including:

Weather

What is the weather like on this particular day? What was it like the day before or the day after? Is it cloudy, sunny, or overcast? Did it rain or snow?

Source of light

Where is the light coming from? Is it in front of, to the side of, or even behind the object?
Remember, light is reflected and refracted until all its energy is absorbed; this affects not only the color of the light, but its quality as well. Do we need to add additional light sources to simulate this effect?

Scale of light sources

What is the scale of our light sources in relation to our three-dimensional scene? Believe it or not, this factor carries a lot of weight when it comes to the quality of the final render. If any lights feel out of place, it could affect the believability of the final product.

The goal of these questions is to prove to ourselves that the scene we're lighting has the potential to exist in real life. It's much harder, if not impossible, to light a scene if we don't know how it could possibly act in the real world. Let's take a look at these questions.

What is the weather like?

In our case, we're not concerned with anything too challenging, weather-wise. The goal of this tutorial is to depict our tricycle in an environment that reflects a sunny, cloudless day. To achieve this, we are going to use lights with blue and yellow hues to simulate the effect the sun and sky have on our tricycle.

What are the sources of our light and where are they coming from in relation to our scene?

In a real situation, the sun would provide most of the light, so we'll need a key light that simulates how the sun works. In our case, we can use a Sun lamp. The key to positioning light sources within a three-dimensional scene is to find a compromise between achieving the desired mood of the image and effectively illuminating the object being presented.

What is the scale of our light sources?

The sun is rather large, but because of the nature of the Sun lamp in Blender, we don't have to worry about the scale of the lamp in our three-dimensional scene.
Sometimes—more commonly when working with indoor scenes, such as the scene we'll approach later—certain light sources need to be of certain sizes in relation to our scene, otherwise the final result will feel unnatural. Although we will be using a realistic approach to materials, textures, and lighting, we are going to present this scene as a product visualization. This means that we won't explicitly show a ground plane, allowing the viewer to focus on the product being presented, in this case, our tricycle.
Irrlicht: Creating a Basic Template Application

Packt
21 Nov 2011
3 min read
Creating a new empty project

Let's get started by creating a new project from scratch. Follow the steps given for the IDE and operating system of your choice.

Visual Studio

Open Visual Studio and select File | New | Project from the menu. Expand the Visual C++ item and select Win32 Console Application. Click on OK to continue. In the project wizard, click on Next to edit the Application Settings. Make sure Empty project is checked. Whether Windows application or Console application is selected will not matter. Click on Finish and your new project will be created. Let's add a main source file to the project. Right-click on Source Files and select Add | New Item.... Choose C++ File (.cpp) and call the file main.cpp.

CodeBlocks

Use the CodeBlocks project wizard to create a new project as described in the last chapter. Now double-click on main.cpp to open this file and delete its contents. Your main source file should now be blank.

Linux and the command line

Copy the make file of one of the examples from the Irrlicht examples folder to where you wish to create your new project. Open the make file with a text editor of your choice and change, in line 6, the target name to what you wish your project to be called. Additionally, change, in line 10, the variable IrrlichtHome to where you extracted your Irrlicht folder. Now create a new empty file called main.cpp.

Xcode

Open Xcode and select File | New Project. Select Command Line Tool from Application. Make sure the type of the application is set to C++ stdc++. When your new project is created, change Active Architecture to i386 if you are using an Intel Mac, or ppc if you are using a PowerPC Mac. Create a new target by right-clicking on Targets, select Application, and click on Next. Target Name represents the name of the compiled executable and application bundle. The target info window will show up.
Fill in the location of the include folder of your extracted Irrlicht package in the field Header Search Path, and make sure the field GCC_PREFIX_HEADER is empty. Right-click on the project file and add the following frameworks by selecting Add | Existing Frameworks...:

Cocoa.framework
Carbon.framework
IOKit.framework
OpenGL.framework

Now, we have to add the static library named libIrrlicht.a that we compiled in Chapter 1, Installing Irrlicht. Right-click on the project file and click on Add | Existing Frameworks.... Now click on the button Add Other... and select the static library. Delete the original compile target and delete the contents of main.cpp.

Time for action – creating the main entry point

Now that our main file is completely empty, we need a main entry point. We don't need any command-line parameters, so just go ahead and add an empty main() method as follows:

int main()
{
    return 0;
}

If you are using Visual Studio, you need to link against the Irrlicht library. You can link from code by adding the following line:

#pragma comment(lib, "Irrlicht.lib")

This line should be placed between your include statements and your main() function. If you are planning to use the same codebase for compiling on different platforms, you should wrap this line in a compiler-specific define, so that it is only active when compiling the application with Visual Studio:

#if defined(_MSC_VER)
#pragma comment(lib, "Irrlicht.lib")
#endif
Unreal Development Toolkit: Level Design HQ

Packt
23 Nov 2011
5 min read
So let's get on with it. We will first look at downloading the UDK and installing it on your PC.

Time for action – UDK download and installation

Download the latest version of UDK: log on to www.udk.com and download the latest version of the Unreal Development Kit beta. Once you download the UDK installer, go ahead and install the UDK. The default directory for installing UDK is C:\UDK\UDK-VersionRelease. Version Release will be the month and year of the UDK build you downloaded.

UDK folder structure

The UDK folder structure looks like the following screenshot. It consists of the following four folders:

Binaries: game/binary executables
Development: source code for UDK
Engine: engine files
UTGame: game files

For level design and environment creation, the important folder here is the content folder. The packaged environment assets, such as models, textures, materials, sounds, and so on, are stored here. For environment creation and level design, the most important folder is UTGame | Content | Environments. It contains all the files you need to create your map, as shown in the following screenshot:

UPK is the extension of a UDK package. This is how models and textures are stored in UDK. Think of UDK packages as folders: inside them are stored all the models, animations, textures, materials, and similar assets. You can browse the UDK files through the UDK editor. UDK is the map file extension.

Time for action – launching the editor

To launch the Unreal Editor, go to the Start Menu | Unreal Development Kit | UDK Version | Editor. Another way to launch the editor is to create a shortcut.
To do this, go to the installation folder, UDK\UDK-VersionRelease\Binaries, locate UDKLift.exe, right-click, and select Send To | Desktop (create shortcut), as shown in the following screenshot:

Once you have created the shortcut on your desktop, right-click the shortcut and select Properties. Then, in the Target box under the Shortcut tab, add editor at the end of the text. It should look something like the following screenshot:

Now double-click on the desktop icon and launch the UDK Editor.

Autosave

When you first launch the editor, you will have Autosave automatically enabled. This will save your map at a chosen timed interval. You can set how often it will automatically save by clicking the Left Mouse Button (LMB) on the arrow on the bottom-right of the Autosave Interval and choosing the time you want, as shown in the following screenshot:

You will find the Autosave feature at the bottom right of the editor. If you enable Autosave, there are a few options, such as Interval and Type. Save manually by going up to File | Save As.

Content browser

The content browser is where you will find all of the game's assets. Placing static meshes (models), textures, sounds, and game entities, such as player starts, weapons, and so on, can all be done through the content browser. You will be using the content browser very often. To open the content browser, click on the top menu bar, as shown in the following screenshot:

Packages are where you will find specific items contained within the UDK. Things such as static meshes are contained within a package. You can search for a package, or just find the package you want to use and select it, as shown in the following screenshot:

The top of the content browser contains a search box as well as a filter box. This is very useful: you can sort the content in the browser by animation sets, material instances, static meshes, sounds, and so on. This helps a lot when looking for items.
The next screenshot lists the full names of the items within a selected package. You can sort by clicking on the Name, Type, Tags, or Path fields, and it will re-arrange the content preview:

The content browser is one of the most commonly used tools in UDK. Get comfortable using it and spend some time navigating around it. UDK basics covers the most essential tools and functions you need to know to get started with UDK. You'll be able to quickly jump into UDK and begin feeling comfortable using the most commonly used functions.

What just happened?

So we know how to launch the editor, how to use the Autosave function, and where to find the content browser. We are now going to look at how to move and rotate around the editor.

Time for action – movement and rotation

Time to have a look at movement, rotation, and navigating around the editor.

Navigation

These are your primary buttons for navigating and rotating within the editor:

Left Mouse Button (LMB): pan right/left/forward/backward movements
Right Mouse Button (RMB): rotate, look around
LMB + RMB: up/down

WASD key navigation

The following is another form of primary keys for navigating and rotating around the editor: click and hold RMB and, as you hold it, use the WASD keyboard keys to move around as you would in a first-person shooter game. WASD movement is great if you are familiar with Hammer Source mapping.

Maya users

If you are familiar with Maya, the following will be your primary keys for navigating and rotating around the editor. Hold down the U key:

U + LMB: rotate, look around
U + RMB: forward/backward movements
U + MMB: right/left/up/down movements

What just happened?

Now that you have installed UDK and know what the content browser is, you are ready to begin. So let's get started.
Detailing Environments

Packt
17 Jul 2013
4 min read
Applying materials

As it stands, our current level looks rather... well, bland. I'd say it's missing something to really make it realistic... the walls are all the same! Thankfully, we can use textures to make the walls come to life in a very simple way, bringing us one step closer to that AAA quality we're going for! Applying materials to our walls in the Unreal Development Kit (UDK) is actually very simple once we know how to do it, which is what we're going to look at now:

First, go to the menu bar at the top and open the Content Browser window by navigating to View | Browser Windows | Content Browser.
Once in the Content Browser window, make sure that packages are sorted by folder by clicking on the left-hand side button.
Once this is done, click on the UDK Game folder in the Packages window. Then type floor master into the top search bar.
Click on the M_LT_Floors_BSP_Master material.
Close the Content Browser window and then left-click on the floor of our level; if you look closely, you should see it become selected.
With the floor selected, right-click and select Apply Material : M_LT_Floors_BSP_Master.
Now that we have given the floor a material, let's give it a platform as well. Select each of the faces by holding down Ctrl and left-clicking on them individually.
Once selected, right-click and select Apply Material : M_LT_Floors_BSP_Master. Another way to select all of the faces would be to right-click on the floor and navigate to Select Surfaces | Adjacent Floors.

Now our floor is placed; but if you play the game, you may notice the texture being repeated over and over again, and the texture on the platform being stretched strangely. One of the ways we can rectify this problem is by scaling the texture to fit our needs. With all of the floor and the pieces of the platform selected, navigate to View | Surface Properties.
From there, change the Simple field under Scaling to 2.0 and click on the Apply button to its right; this will double the size of our textures. After that, go to Alignment, select Box, and click on the Apply button placed below it to align our textures as if the faces we selected formed a box. This works very well for box-like objects (our brushes, for instance). Close the Surface Properties window and open up the Content Browser window. Now search for floors organic. Select M_LT_Floors_BSP_Organic15b and close the Content Browser window. Now select one of the floors on the edges with the default texture on it. Then right-click and go to Select Surfaces | Matching Texture. After that, right-click and select Apply Material : M_LT_Floors_BSP_Organic15b. We build our project by navigating to Build | Build All, save our game by going to the Save option within the File menu, and run our game by navigating to Play | In Editor. And with that, we now have a nicely textured world, and it is quite a good start towards getting our levels looking as refined as possible.

Summary

This article discusses the role of an environment artist doing a texture pass on the environment. After that, we will place meshes to make our level pop with added details. Finally, we will add a few more things to make the experience as nice looking as possible.
Creating Pseudo-3D Imagery with GIMP: Part 1

Packt
21 Oct 2009
9 min read
In the previous article I've written (Creating Convincing Images with Blender Internal Renderer - Part 1), I discussed creating convincing 3D still images through color manipulation, proper shadowing, minimal lighting, and a bit of post-processing, all using but one application – Blender. This time, the article you're about to read will give us some thoughts on how to mimic a 3D scene with the use of some basic 2D tools. Here again, I would stress that nothing beats a properly planned image; that applies to all genres you can think of. Some might think it's a waste of precious time to sit and plan without having a concrete output at the end of the thought process. But believe me, the ideas you planned will be far more powerful and beautiful than the ideas you merely had while messing around and playing with the tool directly.

In this article, I won't be teaching you how to paint, since I'm not good at it; rather, I'll be leading you through a series of steps on how to digitally sketch/draw your scenes, give them subtle color shifts, add fake lighting, and apply filter effects to further emulate how 3D does its job. Primarily, this all leads into a guide on how I create my digital drawings (though I admit they're not the best of their kind), but somehow I'm very proud I eventually gave life to them, from concept stage to digital art stage. It might be a bit daunting at first, but as you go along the series, you'll notice it gets simpler.

However, some might get confused as to how this applies to other applications, since we're focusing on The GIMP in this article. That's not a problem at all once you are familiar with your own tool; it will just be a matter of working around the tools and options. I have been using The GIMP for a long time already, and as far as I can remember, I haven't complained about its shortcomings, since those shortcomings are only bits of features which I wouldn't need at all.
So to those of you who have been and are using other image editing programs like Adobe Photoshop, Corel, etc., you're welcome to wander around and feel free to translate the GIMP tools discussed here to those of your own application. It's all the same after all, with just a tad of difference in the interface. Just like what Jeremy Birn said in one of his books: "Being an expert user of a 3D program, by itself, does not make the user into an artist any more than learning to run a word processor makes someone into a good writer." Additionally, one vital skill you have to develop is the skill of observation, which I myself am yet to master.

Methods Used

Basic Drawing
Selection Addition, Subtraction, Intersection
Gradient Coloring
Color Mixing
Layering
Layer Modes
Layer Management
Using Filters

Requirements

Latest version of The GIMP (download at http://www.gimp.org/downloads)
Basic knowledge of image editing programs with layering capabilities
Patience

Let's Get Started!

I would already assume you have the latest version of GIMP installed on your system and running properly; otherwise, fix the problem or ask for help on the forum (http://www.gimptalk.com). I'm also assuming you have all your previous tasks done before sitting down and going over this article (which I'm pretty much positive you have). And then lastly, be patient.

Sketch it out

The very first thing we're going to do is to sketch our ideas for the image, much like a single panel of a storyboard. It doesn't matter how well you draw it as long as you understand it yourself and you know what's going on in the drawing. By this time, you can already visualize and create a picture of your final output, and it's great if you can; if not, it's fine still. The important thing is that we have laid down our scene one way or another.
You can take your time sketching out your scenes and adding details to them, like how many objects are seen, how many are in focus, what colors they represent, your characters' facial expressions, the size of your image, and so on. So just in case we forget how it's going to look in the end, we have a reference to call upon, and that is your initial sketch. This way, you'll also be affected by the persistence of vision, where after hours and hours (yay!) of looking at your sketch, you somehow see an afterimage of what you are about to create, and that's a good thing! I'm not good at sketching, so please bear with my drawing:

After this, it's now time to open up The GIMP and begin the actual fun part!

First Run

After executing GIMP, this should (and most likely will) be the initial screen displayed:

The GIMP Initial Screen

We don't want Wilber (GIMP's mascot) to be glaring at us from a blank empty window all the time, do we? Right now we could go ahead and add a canvas onto which we'll be adding our aesthetic elements, but before that, you might want to inspect your application and tool preferences just to make sure you have set everything right. Activate the window with the menu bar at the top (since we currently have three windows to choose from), and then locate Edit > Preferences, as seen below:

Locating GIMP's Preferences

GIMP Preferences
By now, we should be having three windows, the main Toolbox Window (located on the left), the main Image Window (located on the middle), and the Layers Window (located on the right).  If, by any chance, the Layer Window is not there, go ahead and activate the Image Window and go to Windows > Layers or press CTRL + L to bring up the Layer Window. Showing the Layers Window Creating the Canvas Now that everything's set up, we'll go ahead and add a properly-sized canvas that we'll paint on, which will be the entire universe for our creation at a later stage. Let's go and create than now by going to File > New or by pressing CTRL+ N. A window will pop up asking you to edit and confirm the image settings for the canvas you're creating. You can choose from a variety of templates to use or you can manually input sizes (which we are going to do).  Before that, change the unit for coordinate display to inches just so we could have a better visual reference of how big our drawing canvass will be. Then on the Width input box, type 9 (for nine inches), and for the Height input box, type 6 (for six inches) respectively. This, however, is a very subjective portion, since you can just have any size you prefer, I just chose nine inches by six inches for the purposes of this article. Clicking the Advanced Options drop-down menu will reveal more options for you.  But right now, we'll never deal with that, just the width and height are sufficient for what we'll be need. When you're done setting up the dimensions and settings, click OK to confirm (is there a chance we could chance the OK buttons to “Alright” buttons, which sounds, uhmmm, better). Creating a New Image At this moment, we should be seeing a blank canvas with the dimensions that we've set awhile back. Then just at the right window (Layers Window), you'll notice there's already one layer present as compared to the default which is none. 
So every time we add a new layer (which is very vital), we'll see it in the Layers window. Since the introduction of the layering system in image editors, it has been a blast to organize elements of an image and apply special effects to them as necessary. We can imagine layers as transparent sheets overlaying each other to form one final image: one transparent sheet has a landscape drawn on it, another sheet contains trees and vegetation, and another sheet (above the tree sheet) holds our main character. Together, we see a character among trees on a landscape in one image. As far as traditional layering is concerned, digital layering is far superior in terms of flexibility and the number of modes we can experiment with.

New Image with Layer

This might be a good time to save our file natively; by that I mean save it in a format that is recognizable only by GIMP and that is lossless, so no matter how many times we save it, no image compression happens and the image quality is not compromised. However, the native format is only understood by GIMP and is not known elsewhere, so uploading such a file to your website will show no image at all, because it isn't recognized by the browser. To make it generally compatible, we export our image to known formats like JPEG, PNG, or GIF, depending on the need. Saving an image file in its native format preserves everything we have: selections, paths, layers, layer modes, palettes, and many more. This native format that GIMP uses is known as .XCF, which stands for "eXperimental Computing Facility". Throughout this article, we'll save our files mainly in .xcf format, and later on, when our tasks are done and we call our image finished, we export it to a readable and viewable format. Let's go ahead and save our file by going to File > Save, or by pressing CTRL + S.
This brings up a window where we can type our filename and browse to (or create) the location for our files. Type whatever filename you wish and append the ".xcf" file extension at the end, or choose "GIMP XCF image" from the list in the lower half of the window.

Saving an Image as XCF

Ogre 3D FAQs
Packt
14 Mar 2011
8 min read
OGRE 3D 1.7 Beginner's Guide: Create real time 3D applications using OGRE 3D from scratch

Q: What is Ogre 3D?
A: Creating 3D scenes and worlds is an interesting and challenging problem, but the results are hugely rewarding and the process of getting there can be a lot of fun. Ogre 3D helps you create your own scenes and worlds. Ogre 3D is one of the biggest open source 3D render engines and enables its users to create and interact freely with their scenes.

Q: What are the system requirements for Ogre 3D?
A: You need a compiler to compile the applications. Your computer should have a graphics card with 3D capabilities. It would be best if the graphics card supports DirectX 9.0.

Q: From where can I download the Ogre 3D software?
A: Ogre 3D is a cross-platform render engine, so there are a lot of different packages for the different platforms. The following are the steps to download and install the Ogre 3D SDK:

Go to http://www.ogre3d.org/download/sdk
Download the appropriate package.
Copy the installer to the directory you would like your OgreSDK to be placed in.
Double-click on the installer; this will start a self-extractor.
You should now have a new folder in your directory with a name similar to OgreSDK_vc9_v1-7-1.
Open this folder. It should look similar to the following screenshot:

Q: Which are the different versions of the Ogre 3D SDK?
A: Ogre supports many different platforms, and because of this, there are a lot of different packages we can download. Ogre 3D has several builds for Windows, one for Mac OS X, and one Ubuntu package. There is also a package for MinGW and one for the iPhone. If you like, you can download the source code and build Ogre 3D yourself. If you want to use another operating system, you can look at the Ogre 3D Wiki, which can be found at http://www.ogre3d.org/wiki.
The wiki contains detailed tutorials on how to set up your development environment for many different platforms.

Q: What do you mean by a scene graph?
A: A scene graph is one of the most used concepts in graphics programming. Simply put, it's a way to store information about a scene. A scene graph has a root and is organized like a tree. The important thing about a scene graph is that each node's transformation is relative to its parent. If we modify the orientation of the parent, the children will also be affected by the change.

Q: What are spotlights?
A: Spotlights are just like flashlights in their effect. They have a position and a direction in which they illuminate the scene. The direction was the first thing we set after creating the light; it simply defines where the spotlight is pointed. The next two parameters we set were the inner and outer angles of the spotlight. The inner part of the spotlight cone illuminates the area with the complete power of the light source's color. The outer part of the cone uses less power to light the illuminated objects. This is done to emulate the effects of a real flashlight.

Q: What is the difference between frame-based and time-based movement?
A: With frame-based movement, the entity is moved the same distance each frame; with time-based movement, the entity is moved the same distance each second, regardless of the frame rate.

Q: What is a window handle and how is it used by our application and the operating system?
A: A window handle is simply a number that is used as an identifier for a certain window. This number is created by the operating system, and each window has a unique handle. The input system needs this handle because without it, it couldn't get the input events. Ogre 3D creates a window for us, so to get the window handle, we need to ask Ogre 3D for it with the following line:

win->getCustomAttribute("WINDOW", &windowHnd);

Q: What does a scene manager do?
A: A scene manager does a lot of things, which will be obvious when we take a look at the documentation. There are lots of functions that start with create, destroy, get, set, and has. One important task the scene manager fulfills is the management of objects. These can be scene nodes, entities, lights, or a lot of other object types that Ogre 3D has. The scene manager acts as a factory for these objects and also destroys them. Ogre 3D works on the principle that whoever creates an object also destroys it. Every time we want an entity or scene node deleted, we must use the scene manager; otherwise, Ogre 3D might try to free the same memory later, which might result in an ugly application crash. Besides object management, it manages a scene, as its name suggests. This can include optimizing the scene and calculating the position of each object in the scene for rendering. It also implements efficient culling algorithms.

Q: Which three functions make up the FrameListener interface, and at which point is each of them called?
A: A FrameListener is based on the observer pattern. We can add a class instance which inherits from the Ogre::FrameListener interface to our Ogre 3D root instance using the addFrameListener() method of Ogre::Root. When this instance is added, our class gets notified when certain events happen. The three functions of the FrameListener interface are:

frameStarted, which gets called before the frame is rendered
frameRenderingQueued, which is called after the frame is rendered but before the buffers are swapped
frameEnded, which is called after the current frame has been rendered and displayed

Q: What is a particle system?
A: A particle system consists of two to three different constructs: an emitter, a particle, and an affector (optional). The most important of these three is the particle itself, as the name particle system suggests.
A particle displays a color or texture using a quad or the point-render capability of the graphics card. When the particle uses a quad, the quad is always rotated to face the camera. Each particle has a set of parameters, including a time to live, direction, and velocity. There are a lot of different parameters, but these three are the most important for the concept of particle systems. The time to live parameter controls the life and death of a particle. Normally, a particle doesn't live for more than a few seconds before it gets destroyed. This effect can be seen in the demo when we look up at the smoke cone: there is a point where the smoke vanishes. For these particles, the time to live counter reached zero and they were destroyed. An emitter creates a predefined number of particles per second and can be seen as the source of the particles. Affectors, on the other hand, don't create particles but change some of their parameters. An affector could change the direction, velocity, or color of the particles created by the emitter.

Q: Which add-ons are available for Ogre 3D? Where can I get them?
A: The following are some of the add-ons available for Ogre 3D:

Hydrax: Hydrax is an add-on that adds the capability of rendering pretty water scenes to Ogre 3D. With this add-on, water can be added to a scene, and a lot of different settings are available, such as setting the depth of the water, adding foam effects, underwater light rays, and so on. The add-on can be found at http://www.ogre3d.org/tikiwiki/Hydrax.

Caelum: Caelum is another add-on, which introduces sky rendering with day and night cycles to Ogre 3D. It renders the sun and moon correctly using a date and time. It also renders weather effects like snow or rain, and a complex cloud simulation to make the sky look as real as possible. The wiki site for this add-on is http://www.ogre3d.org/tikiwiki/Caelum.

Particle Universe: Particle Universe is a commercial add-on.
Particle Universe adds a new particle system to Ogre 3D, which allows many more effects than the normal Ogre 3D particle system does. It also comes with a Particle Editor, allowing artists to create particles in a separate application; the programmer can then load the created particle script later. This plugin can be found at http://www.ogre3d.org/tikiwiki/Particle+Universe+plugin.

Summary
In this article we took a look at some of the most frequently asked questions on Ogre 3D. The article Common Mistakes: Ogre Wiki would be helpful for further queries pertaining to Ogre 3D.

Further resources on this subject:
Starting Ogre 3D [Article]
Installation of Ogre 3D [Article]
Materials with Ogre 3D [Article]
The Ogre Scene Graph [Article]
OGRE 3D 1.7 Beginner's Guide [Book]

Introduction to Blender 2.5 Color Grading - A Sequel
Packt
18 Nov 2010
2 min read
Colorizing with hue adjustment
For a quick and dirty colorization of images, hue adjustment is your best friend. However, the danger with using hue adjustment is that you don't have as much control over your tones as you do with color curves. To add the hue adjustment node in Blender's Node Editor window, press SHIFT + A, then choose Color, and finally Hue Saturation Value. This adds the Hue Saturation Value node, which is used to adjust the image's tint, saturation (from grayscale to vibrant colors), and value (brightness). Later on in this article, you'll see just how useful this node will be, but for now, let's stick with just the hue adjustment aspect of it.

(Adding the Hue Saturation Value Node)
(Hue Saturation Value Node)

To colorize your images, simply slide the Hue slider. When using it, a good rule of thumb is to keep adjustments to a minimum, but for special purposes you can set them however you want. Below are some examples of different values of the hue adjustment.

(Hue at 0.0) (Hue at 0.209) (Hue at 0.333) (Hue at 0.431) (Hue at 0.671) (Hue at 0.853) (Hue at 1.0)

First Person Shooter Part 1 – Creating Exterior Environments
Packt
19 May 2016
13 min read
In this article by John P. Doran, the author of the book Unity 5.x Game Development Blueprints, we will be creating a first-person shooter; however, instead of shooting a gun to damage our enemies, we will be shooting pictures in a survival horror environment, similar to the Fatal Frame series of games and the recent indie title DreadOut. To get started on our project, we're first going to look at creating our level or, in this case, our environments, starting with the exterior. In the game industry, there are two main roles in level creation: the environment artist and the level designer. An environment artist is a person who builds the assets that go into the environment. He/she uses tools such as 3ds Max or Maya to create the models, and then uses other tools such as Photoshop to create textures and normal maps. The level designer is responsible for taking the assets that the environment artist created and assembling them into an environment for players to enjoy. He/she designs the gameplay elements, creates the scripted events, and tests the gameplay. Typically, a level designer will create environments through a combination of scripting and using a tool that may or may not be in development as the game is being made. In our case, that tool is Unity. One important thing to note is that most companies have their own definitions for different roles. In some companies, a level designer may need to create assets, and an environment artist may need to create a level layout. There are also some places that hire someone just to do lighting, or just to place meshes (called a mesher), because they're so good at it.

Project overview
In this article, we take on the role of an environment artist who has been tasked with creating an outdoor environment. We will use assets that I've placed in the example code, as well as assets already provided to us by Unity, for mesh placement.
In addition, you will also learn some beginner-level design.

Your objectives
This project will be split into a number of tasks. It will be a simple step-by-step process from beginning to end. Here is the outline of our tasks:

Creating the exterior environment – terrain
Beautifying the environment – adding water, trees, and grass
Building the atmosphere
Designing the level layout and background

Project setup
At this point, I assume that you have a fresh installation of Unity and have started it. You can perform the following steps:

With Unity started, navigate to File | New Project.
Select a project location of your choice somewhere on your hard drive and ensure that you have Setup defaults for set to 3D. Then, put in a Project name (I used First Person Shooter). Once completed, click on Create project.
Here, if you see the Welcome to Unity popup, feel free to close it, as we won't be using it.

Level design 101 – planning
Now, just because we are going to be diving straight into Unity, I feel that it's important to talk a little more about how level design is done in the game industry. Although you may think a level designer will just jump into the editor and start building, the truth is that you normally need to do a ton of planning before you even open up your tool. In general, a level begins with an idea. This can come from anything; maybe you saw a really cool building, or a photo on the Internet gave you a certain feeling; maybe you want to teach the player a new mechanic. Turning this idea into a level is what a level designer does. Taking all of these ideas, the level designer will create a level design document, which will outline exactly what you're trying to achieve with the entire level from start to end.
A level design document will describe everything inside the level, listing all of the possible encounters, puzzles, and so on, which the player will need to complete, as well as any side quests that the player will be able to achieve. To prepare for this, you should include as many references as you can, with maps, images, and movies similar to what you're trying to achieve. If you're working with a team, making this document available on a website or wiki will be a great asset, so that you know exactly what is being done in the level, what the team can use in their levels, and how difficult their encounters can be. In general, you'll also want a top-down layout of your level, done either on a computer or on graph paper, with a line showing the player's general route through the level, with encounters and missions planned out. Of course, you don't want to be too tied down to your design document; it will change as you playtest and work on the level, but the documentation process will help solidify your ideas and give you a firm basis to work from. For those of you interested in seeing some level design documents, feel free to check out Adam Reynolds (Level Designer on Homefront and Call of Duty: World at War) at http://wiki.modsrepository.com/index.php?title=Level_Design:_Level_Design_Document_Example. If you want to learn more about level design, I'm a big fan of Beginning Game Level Design by John Feil (previously my teacher) and Marc Scattergood, Cengage Learning PTR. For more of an introduction to game design from scratch, check out Level Up!: The Guide to Great Video Game Design by Scott Rogers, Wiley, and The Art of Game Design by Jesse Schell, CRC Press.
For some online resources, Scott has a neat GDC talk named Everything I Learned About Level Design I Learned from Disneyland, which can be found at http://mrbossdesign.blogspot.com/2009/03/everything-i-learned-about-game-design.html, and World of Level Design (http://worldofleveldesign.com/) is a good source for learning about level design, though it does not talk about Unity specifically.

Introduction to terrain
Terrain is basically used for non-manmade ground: things such as hills, deserts, and mountains. Unity's way of dealing with terrain is different from what most engines use, in that there are two ways to make terrains: one using a height map, and the other sculpting from scratch.

Height maps
Height maps are a common way for game engines to support terrains. Rather than providing tools to build a terrain within the level, they let you use a piece of graphics software to create an image, which is then translated into a terrain using its grayscale values, hence the name height map. The lighter an area of the image is, the higher the resulting terrain: black represents the terrain's lowest areas, whereas white represents the highest. The Terrain's Terrain Height property sets how high white actually is compared with black. In order to apply a height map to a terrain object, inside the object's Terrain component, click on the Settings button and scroll down to Import Raw…. For more information on Unity's height tools, check out http://docs.unity3d.com/Manual/terrain-Height.html. If you want to learn more about creating your own height maps using Photoshop, the following tutorial is for UDK, but the Photoshop part is the same: http://worldofleveldesign.com/categories/udk/udk-landscape-heightmaps-photoshop-clouds-filter.php. Others also use software such as Terragen to create height maps. More information on that is at http://planetside.co.uk/products/terragen3.
Exterior environment – terrain
When creating exterior environments, we cannot use straight floors for the most part, unless you're creating a highly urbanized area. Our game takes place in a haunted house in the middle of nowhere, so we're going to create a natural landscape. In Unity, the best tool for creating a natural landscape is the Terrain tool. Unity's Terrain system lets us add landscapes, complete with bushes, trees, and fading materials, to our game. To show how easy it is to use the terrain tool, let's get started. The first thing that we're going to do is create the terrain we'll be placing for the world. Let's first create a Terrain by selecting GameObject | 3D Object | Terrain. At this point, you should see the terrain on the screen. If for some reason you have problems seeing the terrain object, go to the Hierarchy tab and double-click on the Terrain object to focus your camera on it, and move in as needed. Right now, it's just a flat plane, but we'll be doing a lot to it to make it shine. If you look to the right with the Terrain object selected, you'll see the Terrain editing tools, which do the following (from left to right):

Raise/Lower Height: This will allow us to raise or lower the height of our terrain in a certain radius to create hills, rivers, and more.
Paint Height: If you already know exactly the height that a part of your terrain needs to be, this tool will allow you to paint a spot to that height.
Smooth Height: This averages out the area that it is in, and attempts to smooth out areas and reduce the appearance of abrupt changes.
Paint Texture: This allows us to add textures to the surface of our terrain. One of the nice features of this is the ability to lay multiple textures on top of each other.
Place Trees: This allows us to paint objects in our environment that will appear on the surface. Unity attempts to optimize these objects by billboarding distant trees, so we can have dense forests without a horrible frame rate.
By billboarding, I mean that the object is simplified, and its orientation changes constantly as the object and camera move, so that it always faces the camera.

Paint Details: In addition to trees, you can also have small things like rocks or grass covering the surface of your environment, using 2D images to represent individual clumps, with bits of randomization to make them appear more natural.
Terrain Settings: Settings that affect the overall properties of the particular Terrain; options such as the size of the terrain and wind can be found here.

By default, the entire Terrain is set to be at the bottom, but we want to have ground above us and below us so we can add in things like lakes. With the Terrain object selected, click on the second button from the left on the Terrain component (Paint Height mode). From there, set the Height value under Settings to 100 and then press the Flatten button. At this point, you should see the plane move up, so now everything is above it by default. Next, we are going to give our world some interesting shapes with some hills, by "painting" on the surface. With the Terrain object selected, click on the first button on the left of our Terrain component (the Raise/Lower Terrain mode). Once this is done, you should see a number of different brushes and shapes that you can select from. Our use of terrain is to create hills in the background of our scene, so it does not seem like the world is completely flat. Under the Settings, change the Brush Size and Opacity of your brush to 100, and left-click around the edges of the world to create some hills. You can increase the height of the current hills if you click on top of a previous hill. When creating hills, it's a good idea to look at them from multiple angles while you're building, so you can make sure that none are too high or too low. In general, you want to have taller hills as you go further back, or else you cannot see the smaller ones, since they're blocked.
In the Scene view, to move your camera around, you can use the toolbar at the top-right corner, or hold down the right mouse button and drag in the direction you want the camera to move, pressing the W, A, S, and D keys to move around. In addition, you can hold down the middle mouse button and drag to pan the camera. The mouse wheel can be scrolled to zoom in and out from where the camera is. Even though you should plan out the level ahead of time on something like a piece of graph paper to plan out encounters, you will want to avoid building the level entirely from a top-down view, as the player will (most likely) never see the game from a bird's-eye view. Referencing the map from the same perspective as your character will help ensure that the map looks great. To see many different angles at one time, you can use a layout with multiple views of the scene, such as the 4 Split. Once we have our land done, we now want to create some holes in the ground, which we will fill with water later. This will provide a natural barrier that players will know they cannot pass, so we will create a moat by first changing the Brush Size value to 50, then holding down the Shift key and left-clicking around the middle of our terrain. In this case, it's okay to use the Top view; remember that this will eventually be water to fill in lakes, rivers, and so on, as shown in the following screenshot:

To make this easier to see, you can click on the sun-looking light icon in the Scene tab to disable lighting for the time being. At this point, we have done what is referred to in the industry as "grayboxing": making the level in the engine in the simplest way possible, but without artwork (also known as "whiteboxing" or "orangeboxing", depending on the company you're working for).
At this point in a traditional studio, you'd spend time playtesting the level and iterating on it before an artist (or you) takes the time to make it look great. However, for our purposes, we want to create a finished project as soon as possible. When making your own games, be sure to play your level, and have others play it, before you polish it. For more information on grayboxing, check out http://www.worldofleveldesign.com/categories/level_design_tutorials/art_of_blocking_in_your_map.php. For an example with images from graybox to final level, PC Gamer has a nice article available at http://www.pcgamer.com/2014/03/18/building-crown-part-two-layout-design-textures-and-the-hammer-editor/.

Summary
With this, we now have a great-looking exterior level for our game! In addition, we covered a lot of features that exist in Unity for you to be able to use in your own future projects.

Further resources on this subject:
Learning NGUI for Unity [article]
Components in Unity [article]
Saying Hello to Unity and Android [article]

Starting Ogre 3D
Packt
25 Nov 2010
7 min read
OGRE 3D 1.7 Beginner's Guide: Create real time 3D applications using OGRE 3D from scratch

Introduction
Up until now, the ExampleApplication class has started and initialized Ogre 3D for us; now we are going to do it ourselves.

Time for action – starting Ogre 3D
This time we are working on a blank sheet. Start with an empty code file, include Ogre.h, and create an empty main function:

#include "Ogre.h"

int main(void)
{
    return 0;
}

Create an instance of the Ogre 3D Root class; this class needs the name of the plugin configuration file:

Ogre::Root* root = new Ogre::Root("plugins_d.cfg");

If the config dialog can't be shown or the user cancels it, close the application:

if(!root->showConfigDialog())
{
    return -1;
}

Create a render window:

Ogre::RenderWindow* window = root->initialise(true, "Ogre 3D Beginners Guide");

Next, create a new scene manager:

Ogre::SceneManager* sceneManager = root->createSceneManager(Ogre::ST_GENERIC);

Create a camera and name it Camera:

Ogre::Camera* camera = sceneManager->createCamera("Camera");
camera->setPosition(Ogre::Vector3(0, 0, 50));
camera->lookAt(Ogre::Vector3(0, 0, 0));
camera->setNearClipDistance(5);

With this camera, create a viewport and set the background color to black:

Ogre::Viewport* viewport = window->addViewport(camera);
viewport->setBackgroundColour(Ogre::ColourValue(0.0, 0.0, 0.0));

Now, use this viewport to set the aspect ratio of the camera:

camera->setAspectRatio(Ogre::Real(viewport->getActualWidth()) / Ogre::Real(viewport->getActualHeight()));

Finally, tell the root to start rendering:

root->startRendering();

Compile and run the
application; you should see the normal config dialog and then a black window. This window can't be closed by pressing Escape because we haven't added key handling yet. You can close the application by pressing CTRL + C in the console the application was started from.

What just happened?
We created our first Ogre 3D application without the help of the ExampleApplication. Because we aren't using the ExampleApplication any longer, we had to include Ogre.h, which was previously included by ExampleApplication.h. Before we can do anything with Ogre 3D, we need a root instance. The root class manages the higher levels of Ogre 3D, creates and saves the factories used for creating other objects, loads and unloads the needed plugins, and a lot more. We gave the root instance one parameter: the name of the file that defines which plugins to load. The following is the complete signature of the constructor:

Root(const String& pluginFileName = "plugins.cfg",
     const String& configFileName = "ogre.cfg",
     const String& logFileName = "Ogre.log")

Besides the name of the plugin configuration file, the function also takes the names of the Ogre configuration file and the log file. We needed to change the first file name because we are using the debug version of our application and therefore want to load the debug plugins. The default value is plugins.cfg, which is correct for the release folder of the Ogre 3D SDK, but our application is running in the debug folder, where the filename is plugins_d.cfg. ogre.cfg contains the settings for starting the Ogre application that we selected in the config dialog. This saves the user from making the same changes every time he/she starts our application; with this file, Ogre 3D can remember the choices and use them as defaults for the next start. This file is created if it doesn't exist, so we don't append an _d to the filename and can use the default; the same is true for the log file.
Using the root instance, we let Ogre 3D show the config dialog to the user in step 3. When the user cancels the dialog or anything goes wrong, we return -1, and with this the application closes. Otherwise, we created a new render window and a new scene manager in step 4. Using the scene manager, we created a camera, and with the camera we created the viewport; then, using the viewport, we calculated the aspect ratio for the camera. After creating all the requirements, we told the root instance to start rendering, so our result would be visible. Following is a diagram showing which object was needed to create which other object:

Adding resources
We have now created our first Ogre 3D application, which doesn't need the ExampleApplication. But one important thing is missing: we haven't loaded and rendered a model yet.

Time for action – loading the Sinbad mesh
We have our application; now let's add a model. After setting the aspect ratio and before starting the rendering, add the zip archive containing the Sinbad model to our resources:

Ogre::ResourceGroupManager::getSingleton().addResourceLocation("../../Media/packs/Sinbad.zip", "Zip");

We don't want to index more resources at the moment, so index all added resources now:

Ogre::ResourceGroupManager::getSingleton().initialiseAllResourceGroups();

Now create an instance of the Sinbad mesh and add it to the scene:

Ogre::Entity* ent = sceneManager->createEntity("Sinbad.mesh");
sceneManager->getRootSceneNode()->attachObject(ent);

Compile and run the application; you should see Sinbad in the middle of the screen:

What just happened?
We used the ResourceGroupManager to index the zip archive containing the Sinbad mesh and texture files, and after this was done, we told it to load the data with the createEntity() call in step 3.

Using resources.cfg
Adding a new line of code for each zip archive or folder we want to load is a tedious task, and we should try to avoid it.
The ExampleApplication used a configuration file called resources.cfg in which each folder or zip archive was listed, and all the content was loaded using this file. Let's replicate this behavior. Time for action – using resources.cfg to load our models Using our previous application, we are now going to parse resources.cfg. Replace the loading of the zip archive with an instance of a config file pointing at resources_d.cfg: Ogre::ConfigFile cf; cf.load("resources_d.cfg"); First get the iterator, which goes over each section of the config file: Ogre::ConfigFile::SectionIterator sectionIter = cf.getSectionIterator(); Define three strings to save the data we are going to extract from the config file and iterate over each section: Ogre::String sectionName, typeName, dataname; while (sectionIter.hasMoreElements()) { Get the name of the section: sectionName = sectionIter.peekNextKey(); Get the settings contained in the section and, at the same time, advance the section iterator; also create an iterator for the settings itself: Ogre::ConfigFile::SettingsMultiMap *settings = sectionIter.getNext(); Ogre::ConfigFile::SettingsMultiMap::iterator i; Iterate over each setting in the section: for (i = settings->begin(); i != settings->end(); ++i) { Use the iterator to get the name and the type of the resources: typeName = i->first; dataname = i->second; Use the resource name, type, and section name to add it to the resource index: Ogre::ResourceGroupManager::getSingleton().addResourceLocation(dataname, typeName, sectionName); Compile and run the application, and you should see the same scene as before.
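Stripped of Ogre's specifics, the steps above are just an INI-style walk: section headers in square brackets, and type=path pairs inside each section. Here is a minimal, engine-free C++ sketch of that parse (the function name and the "General" fallback group are illustrative, not Ogre API):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Parse INI-style text like resources.cfg: "[Section]" starts a new
// section; "Type=Path" adds a (type, path) entry to the current section.
std::map<std::string, std::vector<std::pair<std::string, std::string>>>
parseResourceConfig(const std::string& text)
{
    std::map<std::string, std::vector<std::pair<std::string, std::string>>> sections;
    std::string section = "General"; // fallback group for entries before any header
    std::istringstream stream(text);
    std::string line;
    while (std::getline(stream, line))
    {
        if (line.empty() || line[0] == '#') // skip blank lines and comments
            continue;
        if (line.front() == '[' && line.back() == ']')
        {
            section = line.substr(1, line.size() - 2); // new section name
        }
        else
        {
            std::size_t eq = line.find('=');
            if (eq != std::string::npos) // "Type=Path" pair
                sections[section].emplace_back(line.substr(0, eq),
                                               line.substr(eq + 1));
        }
    }
    return sections;
}
```

Each (type, path) pair recovered this way is exactly what gets handed to addResourceLocation(dataname, typeName, sectionName) in the final step.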
Packt
21 Oct 2013
7 min read

Adding Finesse to Your Game

Adding a background There is still a lot of black in the background, and as the game has a space theme, let's add some stars in there. The way we'll do this is to add a sphere that we can map the stars texture to, so click on Game Object | Create Other | Sphere, and position it at X: 0, Y: 0, Z: 0. We also need to set the size to X: 100, Y: 100, Z: 100. Drag the stars texture, located at Textures/stars, onto the new sphere that we created in our scene. That was simple, wasn't it? Unity has added the texture to a material that appears on the outside of our sphere, while we need it to show on the inside. To fix this, we are going to reverse the triangle order, flip the normals, and flip the UV map with C# code. Right-click on the Scripts folder, then click on Create and select C# Script. Once you click on it, a script will appear in the Scripts folder; it should already have focus and be asking you to type a name for the script; call it SkyDome. Double-click on the script in Unity and it will open in MonoDevelop.
Edit the Start method, as shown in the following code:

void Start () {
    // Get a reference to the mesh
    MeshFilter BaseMeshFilter = transform.GetComponent("MeshFilter") as MeshFilter;
    Mesh mesh = BaseMeshFilter.mesh;

    // Reverse triangle winding
    int[] triangles = mesh.triangles;
    int numpolies = triangles.Length / 3;
    for (int t = 0; t < numpolies; t++)
    {
        int tribuffer = triangles[t * 3];
        triangles[t * 3] = triangles[(t * 3) + 2];
        triangles[(t * 3) + 2] = tribuffer;
    }

    // Readjust uv map for inner sphere projection
    Vector2[] uvs = mesh.uv;
    for (int uvnum = 0; uvnum < uvs.Length; uvnum++)
    {
        uvs[uvnum] = new Vector2(1 - uvs[uvnum].x, uvs[uvnum].y);
    }

    // Readjust normals for inner sphere projection
    Vector3[] norms = mesh.normals;
    for (int normalsnum = 0; normalsnum < norms.Length; normalsnum++)
    {
        norms[normalsnum] = -norms[normalsnum];
    }

    // Copy local built-in arrays back to the mesh
    mesh.uv = uvs;
    mesh.triangles = triangles;
    mesh.normals = norms;
}

The breakdown of the code is as follows: Get the mesh of the sphere. Reverse the way the triangles are drawn. Each triangle has three indexes in the array; this script just swaps the first and last index of each triangle in the array. Adjust the X position of the UV map coordinates. Flip the normals of the sphere. Apply the new values of the reversed triangles, adjusted UV coordinates, and flipped normals to the sphere. Click and drag this script onto your sphere GameObject and test your scene. You should now see something like the following screenshot: Adding extra levels Now that the game is looking better, we can add some more content into it. Luckily, the jagged array we created earlier easily supports adding more levels. Levels can be any size, even with variable column heights per row. Double-click on the Sokoban script in the Project panel and switch over to MonoDevelop.
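The three fixes the script performs are plain index and vector arithmetic, independent of Unity's API. A minimal C++ sketch of the same operations (the Vec2/Vec3 structs and the function name are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Same three steps as the SkyDome script: swap the first and last index of
// every triangle (reverses the winding), mirror U across the UV map, and
// negate every normal so the surface faces inward.
void turnMeshInsideOut(std::vector<int>& triangles,
                       std::vector<Vec2>& uvs,
                       std::vector<Vec3>& normals)
{
    for (std::size_t t = 0; t + 2 < triangles.size(); t += 3)
        std::swap(triangles[t], triangles[t + 2]); // reverse triangle winding

    for (Vec2& uv : uvs)
        uv.x = 1.0f - uv.x; // mirror the UV map horizontally

    for (Vec3& n : normals)
    {
        // flip the normal
        n.x = -n.x;
        n.y = -n.y;
        n.z = -n.z;
    }
}
```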
Find the levels array and modify it to be as follows:

// Create the top array, this will store the level arrays
int[][][] levels =
{
    // Create the level array, this will store the row arrays
    new int [][] {
        // Create all the row arrays, these will store column data
        new int[] {1,1,1,1,1,1,1,1},
        new int[] {1,0,0,1,0,0,0,1},
        new int[] {1,0,3,3,0,3,0,1},
        new int[] {1,0,0,1,0,1,0,1},
        new int[] {1,0,0,1,3,1,0,1},
        new int[] {1,0,0,2,2,2,2,1},
        new int[] {1,0,0,1,0,4,1,1},
        new int[] {1,1,1,1,1,1,1,1}
    },
    // Create a new level
    new int [][] {
        new int[] {1,1,1,1,0,0,0,0},
        new int[] {1,0,0,1,1,1,1,1},
        new int[] {1,0,2,0,0,3,0,1},
        new int[] {1,0,3,0,0,2,4,1},
        new int[] {1,1,1,0,0,1,1,1},
        new int[] {0,0,1,1,1,1,0,0}
    },
    // Create a new level
    new int [][] {
        new int[] {1,1,1,1,1,1,1,1},
        new int[] {1,4,0,1,2,2,2,1},
        new int[] {1,0,0,3,3,0,0,1},
        new int[] {1,0,3,0,0,0,1,1},
        new int[] {1,0,0,1,1,1,1},
        new int[] {1,0,0,1},
        new int[] {1,1,1,1}
    }
};

The preceding code has given us two extra levels, bringing the total to three. The layout of the arrays is still very visual and you can easily see the level layout just by looking at the arrays. Our BuildLevel, CheckIfPlayerIsAttempingToMove, and MovePlayer methods only work on the first level at the moment; let's update them to always use the user's current level. We'll have to store which level the player is currently on and use that level at all times, incrementing the value when a level is finished. As we'll want this value to persist between plays, we'll be using the PlayerPrefs object that Unity provides for saving player data. Before we get the value, we need to check that it is actually set and exists; otherwise we could see some odd results. Start by declaring our variable for use at the top of the Sokoban script as follows:

int currentLevel;

Next, we'll need to get the value of the current level from the PlayerPrefs object and store it in the Awake method.
Add the following code to the top of your Awake method:

if (PlayerPrefs.HasKey("currentLevel")) {
    currentLevel = PlayerPrefs.GetInt("currentLevel");
} else {
    currentLevel = 0;
    PlayerPrefs.SetInt("currentLevel", currentLevel);
}

Here we are checking if we have a value already stored in the PlayerPrefs object; if we do, then use it; if we don't, then set currentLevel to 0 and save it to the PlayerPrefs object. To fix the methods mentioned earlier, click on Search | Replace. A new window will appear. Type levels[0] in the top box and levels[currentLevel] in the bottom one, and then click on All. Level complete detection It's all well and good having three levels, but without a mechanism to move between them they are useless. We are going to add a check to see if the player has finished a level; if they have, then increment the level counter and load the next level in the array. We only need to do the check at the end of every move; to do so every frame would be redundant. We'll write the following method first and then explain it:

// If this method returns true then we have finished the level
bool haveFinishedLevel () {
    // Initialise the counter for how many crates are on goal tiles
    int cratesOnGoalTiles = 0;
    // Loop through all the rows in the current level
    for (int i = 0; i < levels[currentLevel].Length; i++) {
        // Get the tile ID for the column and pass it to the switch statement
        for (int j = 0; j < levels[currentLevel][i].Length; j++) {
            switch (levels[currentLevel][i][j]) {
                case 5:
                    // Do we have a match for a crate on goal tile ID?
                    // If so, increment the counter
                    cratesOnGoalTiles++;
                    break;
                default:
                    break;
            }
        }
    }
    // Check if the cratesOnGoalTiles variable is the same as the
    // amountOfCrates we set when building the level
    if (amountOfCrates == cratesOnGoalTiles) {
        return true;
    } else {
        return false;
    }
}

In the BuildLevel method, whenever we instantiate a crate, we increment the amountOfCrates variable.
We can use this variable to check if the amount of crates on goal tiles is the same as the amountOfCrates variable, if it is then we know we have finished the current level. The for loops iterate through the current level's rows and columns, and we know that 5 in the array is a crate on a goal tile. The method returns a Boolean based on whether we have finished the level or not. Now let's add the call to the method. The logical place would be inside the MovePlayer method, so go ahead and add a call to the method just after the pCol += tCol; statement. As the method returns true or false, we're going to use it in an if statement, as shown in the following code: // Check if we have finished the levelif (haveFinishedLevel()) {Debug.Log("Finished");} The Debug.Log method will do for now, let's check if it's working. The solution for level one is on YouTube at http://www.youtube.com/watch?v=K5SMwAJrQM8&hd=1. Click on the play icon at the top-middle of the Unity screen and copy the sequence of moves in the video (or solve it yourself), when all the crates are on the goal tiles you'll see Finished in the Console panel. Summary The game now has some structure in the form of levels that you can complete and is easily expandable. If you wanted to take a break from the article, now would be a great time to create and add some levels to the game and maybe add some extra sound effects. All this hard work is for nothing if you can't make any money though, isn't it? Resources for Article: Further resources on this subject: Introduction to Game Development Using Unity 3D [Article] Flash Game Development: Making of Astro-PANIC! [Article] Unity Game Development: Interactions (Part 1) [Article]
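The completion check itself is engine-agnostic: walk the jagged level array, count tile ID 5 (a crate on a goal tile), and compare against the crate total. A minimal C++ sketch of that logic (types and names are illustrative stand-ins for the C# version):

```cpp
#include <cassert>
#include <vector>

// Same idea as haveFinishedLevel(): tile ID 5 marks a crate sitting on a
// goal tile; the level is solved when every crate is on a goal.
bool haveFinishedLevel(const std::vector<std::vector<int>>& level,
                       int amountOfCrates)
{
    int cratesOnGoalTiles = 0;
    for (const auto& row : level)      // each row of the jagged array
        for (int tile : row)           // each column in that row
            if (tile == 5)
                ++cratesOnGoalTiles;
    return cratesOnGoalTiles == amountOfCrates;
}
```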
Packt
16 Dec 2010
10 min read

Environmental Effects in 3D Graphics with XNA Game Studio 4.0

3D Graphics with XNA Game Studio 4.0 A step-by-step guide to adding the 3D graphics effects used by professionals to your XNA games. Improve the appearance of your games by implementing the same techniques used by professionals in the game industry Learn the fundamentals of 3D graphics, including common 3D math and the graphics pipeline Create an extensible system to draw 3D models and other effects, and learn the skills to create your own effects and animate them We will look at a technique called region growing to add plants and trees to the terrain's surface, and finish by combining the terrain with our sky box, water, and billboarding effects to create a mountain scene: Building a terrain from a heightmap A heightmap is a 2D image that stores, in each pixel, the height of the corresponding point on a grid of vertices. The pixel values range from 0 to 1, so in practice we will multiply them by the maximum height of the terrain to get the final height of each vertex. We build a terrain out of vertices and indices as a large rectangular grid with the same number of vertices as the number of pixels in the heightmap. Let's start by creating a new Terrain class. This class will keep track of everything needed to render our terrain: textures, the effect, vertex and index buffers, and so on. 
public class Terrain
{
    VertexPositionNormalTexture[] vertices; // Vertex array
    VertexBuffer vertexBuffer;              // Vertex buffer
    int[] indices;                          // Index array
    IndexBuffer indexBuffer;                // Index buffer
    float[,] heights;                       // Array of vertex heights
    float height;                           // Maximum height of terrain
    float cellSize;                         // Distance between vertices on x and z axes
    int width, length;                      // Number of vertices on x and z axes
    int nVertices, nIndices;                // Number of vertices and indices
    Effect effect;                          // Effect used for rendering
    GraphicsDevice GraphicsDevice;          // Graphics device to draw with
    Texture2D heightMap;                    // Heightmap texture
}

The constructor will initialize many of these values:

public Terrain(Texture2D HeightMap, float CellSize, float Height,
    GraphicsDevice GraphicsDevice, ContentManager Content)
{
    this.heightMap = HeightMap;
    this.width = HeightMap.Width;
    this.length = HeightMap.Height;
    this.cellSize = CellSize;
    this.height = Height;
    this.GraphicsDevice = GraphicsDevice;

    effect = Content.Load<Effect>("TerrainEffect");

    // 1 vertex per pixel
    nVertices = width * length;

    // (Width-1) * (Length-1) cells, 2 triangles per cell,
    // 3 indices per triangle
    nIndices = (width - 1) * (length - 1) * 6;

    vertexBuffer = new VertexBuffer(GraphicsDevice,
        typeof(VertexPositionNormalTexture), nVertices,
        BufferUsage.WriteOnly);

    indexBuffer = new IndexBuffer(GraphicsDevice,
        IndexElementSize.ThirtyTwoBits, nIndices,
        BufferUsage.WriteOnly);
}

Before we can generate any normals or indices, we need to know the dimensions of our grid. We know that the width and length are simply the width and height of our heightmap, but we need to extract the height values from the heightmap.
We do this with the getHeights() function:

private void getHeights()
{
    // Extract pixel data
    Color[] heightMapData = new Color[width * length];
    heightMap.GetData<Color>(heightMapData);

    // Create heights[,] array
    heights = new float[width, length];

    // For each pixel
    for (int y = 0; y < length; y++)
        for (int x = 0; x < width; x++)
        {
            // Get color value (0 - 255)
            float amt = heightMapData[y * width + x].R;

            // Scale to (0 - 1)
            amt /= 255.0f;

            // Multiply by max height to get final height
            heights[x, y] = amt * height;
        }
}

This will initialize the heights[,] array, which we can then use to build our vertices. When building vertices, we simply lay out a vertex for each pixel in the heightmap, spaced according to the cellSize variable. Note that this will create (width - 1) * (length - 1) "cells", each with two triangles:

The function that does this is as shown:

private void createVertices()
{
    vertices = new VertexPositionNormalTexture[nVertices];

    // Calculate the position offset that will center the terrain at (0, 0, 0)
    Vector3 offsetToCenter = -new Vector3(((float)width / 2.0f) * cellSize, 0,
        ((float)length / 2.0f) * cellSize);

    // For each pixel in the image
    for (int z = 0; z < length; z++)
        for (int x = 0; x < width; x++)
        {
            // Find position based on grid coordinates and height in heightmap
            Vector3 position = new Vector3(x * cellSize, heights[x, z],
                z * cellSize) + offsetToCenter;

            // UV coordinates range from (0, 0) at grid location (0, 0) to
            // (1, 1) at grid location (width, length)
            Vector2 uv = new Vector2((float)x / width, (float)z / length);

            // Create the vertex
            vertices[z * width + x] = new VertexPositionNormalTexture(
                position, Vector3.Zero, uv);
        }
}

When we create our terrain's index buffer, we need to lay out two triangles for each cell in the terrain. All we need to do is find the indices of the vertices at each corner of each cell, and create the triangles by specifying those indices in clockwise order for two triangles.
For example, to create the triangles for the first cell in the preceding screenshot, we would specify the triangles as [0, 1, 4] and [4, 1, 5].

private void createIndices()
{
    indices = new int[nIndices];
    int i = 0;

    // For each cell
    for (int x = 0; x < width - 1; x++)
        for (int z = 0; z < length - 1; z++)
        {
            // Find the indices of the corners
            int upperLeft = z * width + x;
            int upperRight = upperLeft + 1;
            int lowerLeft = upperLeft + width;
            int lowerRight = lowerLeft + 1;

            // Specify upper triangle
            indices[i++] = upperLeft;
            indices[i++] = upperRight;
            indices[i++] = lowerLeft;

            // Specify lower triangle
            indices[i++] = lowerLeft;
            indices[i++] = upperRight;
            indices[i++] = lowerRight;
        }
}

The last thing we need to calculate for each vertex is the normals. Because we are creating the terrain from scratch, we will need to calculate all of the normals based only on the height data that we are given. This is actually much easier than it sounds: to calculate the normals we simply calculate the normal of each triangle of the terrain and add that normal to each vertex involved in the triangle. Once we have done this for each triangle, we simply normalize again, averaging the influences of each triangle connected to each vertex.
private void genNormals()
{
    // For each triangle
    for (int i = 0; i < nIndices; i += 3)
    {
        // Find the position of each corner of the triangle
        Vector3 v1 = vertices[indices[i]].Position;
        Vector3 v2 = vertices[indices[i + 1]].Position;
        Vector3 v3 = vertices[indices[i + 2]].Position;

        // Cross the vectors between the corners to get the normal
        Vector3 normal = Vector3.Cross(v1 - v2, v1 - v3);
        normal.Normalize();

        // Add the influence of the normal to each vertex in the triangle
        vertices[indices[i]].Normal += normal;
        vertices[indices[i + 1]].Normal += normal;
        vertices[indices[i + 2]].Normal += normal;
    }

    // Average the influences of the triangles touching each vertex
    for (int i = 0; i < nVertices; i++)
        vertices[i].Normal.Normalize();
}

We'll finish off the constructor by calling these functions in order and then setting the vertices and indices that we created into their respective buffers:

createVertices();
createIndices();
genNormals();

vertexBuffer.SetData<VertexPositionNormalTexture>(vertices);
indexBuffer.SetData<int>(indices);

Now that we've created the framework for this class, let's create the TerrainEffect.fx effect. This effect will, for the moment, be responsible for some simple directional lighting and texture mapping. We'll need a few effect parameters:

float4x4 View;
float4x4 Projection;

float3 LightDirection = float3(1, -1, 0);

float TextureTiling = 1;

texture2D BaseTexture;
sampler2D BaseTextureSampler = sampler_state
{
    Texture = <BaseTexture>;
    AddressU = Wrap;
    AddressV = Wrap;
    MinFilter = Anisotropic;
    MagFilter = Anisotropic;
};

The TextureTiling parameter will determine how many times our texture is repeated across the terrain's surface; simply stretching it across the terrain would look bad because it would need to be stretched to a very large size. "Tiling" it across the terrain will look much better.
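The wrap behavior behind tiling is easy to check on the CPU: wrapping keeps only the fractional part of the scaled coordinate, which is why multiplying the UVs by TextureTiling repeats the texture. A small illustrative C++ sketch (not part of the shader; the function name is an assumption):

```cpp
#include <cassert>
#include <cmath>

// The "wrap" address mode keeps only the fractional part of the scaled
// coordinate, so multiplying a UV by the tiling factor repeats the texture
// that many times across the surface.
float wrapCoordinate(float uv, float tiling)
{
    float scaled = uv * tiling;
    return scaled - std::floor(scaled); // fractional part, always in [0, 1)
}
```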
We will need a very standard vertex shader:

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float3 Normal : TEXCOORD1;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = mul(input.Position, mul(View, Projection));
    output.Normal = input.Normal;
    output.UV = input.UV;
    return output;
}

The pixel shader is also very standard, except that we multiply the texture coordinates by the TextureTiling parameter. This works because the texture sampler's address mode is set to "wrap", and thus the sampler will simply wrap the texture coordinates past the edge of the texture, creating the tiling effect.

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float light = dot(normalize(input.Normal), normalize(LightDirection));
    light = clamp(light + 0.4f, 0, 1); // Simple ambient lighting

    float3 tex = tex2D(BaseTextureSampler, input.UV * TextureTiling);
    return float4(tex * light, 1);
}

The technique definition is the same as our other effects:

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

In order to use the effect with our terrain, we'll need to add a few more member variables to the Terrain class:

Texture2D baseTexture;
float textureTiling;
Vector3 lightDirection;

These values will be set from the constructor:

public Terrain(Texture2D HeightMap, float CellSize, float Height,
    Texture2D BaseTexture, float TextureTiling, Vector3 LightDirection,
    GraphicsDevice GraphicsDevice, ContentManager Content)
{
    this.baseTexture = BaseTexture;
    this.textureTiling = TextureTiling;
    this.lightDirection = LightDirection;
    // etc...
Finally, we can simply set these effect parameters along with the View and Projection parameters in the Draw() function: effect.Parameters["BaseTexture"].SetValue(baseTexture); effect.Parameters["TextureTiling"].SetValue(textureTiling); effect.Parameters["LightDirection"].SetValue(lightDirection); Let's now add the terrain to our game. We'll need a new member variable in the Game1 class: Terrain terrain; We'll need to initialize it in the LoadContent() method: terrain = new Terrain(Content.Load<Texture2D>("terrain"), 30, 4800, Content.Load<Texture2D>("grass"), 6, new Vector3(1, -1, 0), GraphicsDevice, Content); Finally, we can draw it in the Draw() function: terrain.Draw(camera.View, camera.Projection); Multitexturing Our terrain looks pretty good as it is, but to make it more believable the texture applied to it needs to vary—snow and rocks at the peaks, for example. To do this, we will use a technique called multitexturing, which uses the red, blue, and green channels of a texture as a guide as to where to draw textures that correspond to those channels. For example, sand may correspond to red, snow to blue, and rock to green. Adding snow would then be as simple as painting blue onto the areas of this "texture map" that correspond with peaks on the heightmap. We will also have one extra texture that fills in the area where no colors have been painted onto the texture map—grass, for example. To begin with, we will need to modify our texture parameters on our effect from one texture to five: the texture map, the base texture, and the three color channel mapped textures. 
texture RTexture; sampler RTextureSampler = sampler_state { texture = <RTexture>; AddressU = Wrap; AddressV = Wrap; MinFilter = Anisotropic; MagFilter = Anisotropic; }; texture GTexture; sampler GTextureSampler = sampler_state { texture = <GTexture>; AddressU = Wrap; AddressV = Wrap; MinFilter = Anisotropic; MagFilter = Anisotropic; }; texture BTexture; sampler BTextureSampler = sampler_state { texture = <BTexture>; AddressU = Wrap; AddressV = Wrap; MinFilter = Anisotropic; MagFilter = Anisotropic; }; texture BaseTexture; sampler BaseTextureSampler = sampler_state { texture = <BaseTexture>; AddressU = Wrap; AddressV = Wrap; MinFilter = Anisotropic; MagFilter = Anisotropic; }; texture WeightMap; sampler WeightMapSampler = sampler_state { texture = <WeightMap>; AddressU = Clamp; AddressV = Clamp; MinFilter = Linear; MagFilter = Linear; }; Second, we need to update our pixel shader to draw these textures onto the terrain: float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float light = dot(normalize(input.Normal), normalize( LightDirection)); light = clamp(light + 0.4f, 0, 1); float3 rTex = tex2D(RTextureSampler, input.UV * TextureTiling); float3 gTex = tex2D(GTextureSampler, input.UV * TextureTiling); float3 bTex = tex2D(BTextureSampler, input.UV * TextureTiling); float3 base = tex2D(BaseTextureSampler, input.UV * TextureTiling); float3 weightMap = tex2D(WeightMapSampler, input.UV); float3 output = clamp(1.0f - weightMap.r - weightMap.g - weightMap.b, 0, 1); output *= base; output += weightMap.r * rTex + weightMap.g * gTex + weightMap.b * bTex; return float4(output * light, 1); } We'll need to add a way to set these values to the Terrain class: public Texture2D RTexture, BTexture, GTexture, WeightMap; All we need to do now is set these values to the effect in the Draw() function: effect.Parameters["RTexture"].SetValue(RTexture); effect.Parameters["GTexture"].SetValue(GTexture); effect.Parameters["BTexture"].SetValue(BTexture); 
effect.Parameters["WeightMap"].SetValue(WeightMap); To use multitexturing in our game, we'll need to set these values in the Game1 class: terrain.WeightMap = Content.Load<Texture2D>("weightMap"); terrain.RTexture = Content.Load<Texture2D>("sand"); terrain.GTexture = Content.Load<Texture2D>("rock"); terrain.BTexture = Content.Load<Texture2D>("snow");
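The shader's blend is plain per-channel arithmetic, so it can be verified on the CPU. A minimal C++ sketch of the same math (the Color struct and function name are illustrative):

```cpp
#include <cassert>

struct Color { float r, g, b; };

// Mirrors the multitexturing pixel shader: the weight map's channels pick
// how much of each channel-mapped texture to use, and whatever weight is
// left over is filled with the base texture.
Color blendTerrain(Color base, Color rTex, Color gTex, Color bTex, Color weight)
{
    float baseAmount = 1.0f - weight.r - weight.g - weight.b;
    if (baseAmount < 0.0f)
        baseAmount = 0.0f; // the lower half of the shader's clamp(..., 0, 1)

    Color outColor;
    outColor.r = base.r * baseAmount + rTex.r * weight.r + gTex.r * weight.g + bTex.r * weight.b;
    outColor.g = base.g * baseAmount + rTex.g * weight.r + gTex.g * weight.g + bTex.g * weight.b;
    outColor.b = base.b * baseAmount + rTex.b * weight.r + gTex.b * weight.g + bTex.b * weight.b;
    return outColor;
}
```

With a weight of (1, 0, 0) the output is exactly the red-channel texture (sand in the setup above); with (0, 0, 0) it falls back entirely to the base texture.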