
How-To Tutorials - Game Development

370 Articles

Unity 3-0 Enter the Third Dimension

Packt
28 Dec 2011
10 min read
(For more resources on Unity 3D, see here.)

Getting to grips with 3D

Let's take a look at the crucial elements of 3D worlds, and how Unity lets you develop games in three dimensions.

Coordinates

If you have worked with any 3D application before, you'll likely be familiar with the concept of the Z-axis. The Z-axis, in addition to the existing X for horizontal and Y for vertical, represents depth. In 3D applications, you'll see information on objects laid out in X, Y, Z format—this is known as the Cartesian coordinate method. Dimensions, rotational values, and positions in the 3D world can all be described in this way. As in other 3D documentation, you'll see such information written in parentheses, as follows: (3, 5, 3). This is mostly for neatness, and also because in programming these values must be written this way. Regardless of their presentation, you can assume that any set of three values separated by commas will be in X, Y, Z order. In the following image, a cube is shown at location (3, 5, 3) in the 3D world, meaning it is 3 units from 0 in the X-axis, 5 up in the Y-axis, and 3 forward in the Z-axis:

Local space versus world space

A crucial concept to begin looking at is the difference between local space and world space. In any 3D package, the world you will work in is technically infinite, and it can be difficult to keep track of the location of objects within it. In every 3D world, there is a point of origin, often referred to as the 'origin' or 'world zero', as it is represented by the position (0,0,0). All world positions of objects in 3D are relative to world zero. However, to make things simpler, we also use local space (also known as object space) to define object positions in relation to one another. These relationships are known as parent-child relationships. In Unity, parent-child relationships can be established easily by dragging one object onto another in the Hierarchy. This causes the dragged object to become a child, and its coordinates from then on are read relative to the parent object. For example, if the child object is at exactly the same world position as the parent object, its position is said to be (0,0,0), even if the parent is not at world zero.

Local space assumes that every object has its own zero point, which is the point from which its axes emerge. This is usually the center of the object, and by creating relationships between objects, we can compare their positions in relation to one another. Such relationships, known as parent-child relationships, mean that we can calculate distances from other objects using local space, with the parent object's position becoming the new zero point for any of its child objects. This is especially important to bear in mind when working on art assets in 3D modeling tools, as you should always ensure that your models are created at (0,0,0) in the package that you are using. This ensures that when they are imported into Unity, their axes are read correctly.

We can illustrate this in 2D, as the same conventions apply to 3D. In the following example, the first diagram (i) shows two objects in world space: a large cube at coordinates (3,3) and a smaller one at coordinates (6,7). In the second diagram (ii), the smaller cube has been made a child object of the larger cube. As such, the smaller cube's coordinates are said to be (3,4), because its zero point is the world position of the parent.
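To see the distinction in code, here is a small C# sketch for Unity (not part of the original article; the class name is illustrative) that reads the same object's position in both world space and local space:

```csharp
using UnityEngine;

// Attach this to a child object in the Hierarchy to compare the two spaces.
public class SpaceComparison : MonoBehaviour
{
    void Start()
    {
        // World space: measured from world zero (0, 0, 0).
        Vector3 worldPosition = transform.position;

        // Local space: measured from the parent's zero point. With no
        // parent, this matches the world position.
        Vector3 localPosition = transform.localPosition;

        Debug.Log("World position: " + worldPosition);
        Debug.Log("Local position: " + localPosition);
    }
}
```

Assigning transform.parent in a script has the same effect as dragging one object onto another in the Hierarchy: localPosition is then read relative to the new parent.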
Vectors

You'll also see 3D vectors described in Cartesian coordinates. Like their 2D counterparts, 3D vectors are simply lines drawn in the 3D world that have a direction and a length. Vectors can be moved in world space, but remain unchanged themselves. Vectors are useful in a game engine context, as they allow us to calculate distances, relative angles between objects, and the direction of objects.

Cameras

Cameras are essential in the 3D world, as they act as the viewport for the screen. Cameras can be placed at any point in the world, animated, or attached to characters or objects as part of a game scenario. Many cameras can exist in a particular scene, but it is assumed that a single main camera will always render what the player sees. This is why Unity gives you a Main Camera object whenever you create a new scene.

Projection mode—3D versus 2D

The Projection mode of a camera states whether it renders in 3D (Perspective) or 2D (Orthographic). Ordinarily, cameras are set to Perspective Projection mode, and as such have a pyramid-shaped Field of View (FOV). A Perspective mode camera renders in 3D and is the default Projection mode for a camera in Unity. Cameras can also be set to Orthographic Projection mode in order to render in 2D—these have a rectangular field of view. This can be used on a main camera to create complete 2D games, or on a secondary camera used to render Heads Up Display (HUD) elements such as a map or health bar. In game engines, you'll notice that effects such as lighting and motion blur are applied to the camera to help simulate a person's eye view of the world—you can even add a few cinematic effects that the human eye will never experience, such as lens flares when looking at the sun!

Most modern 3D games utilize multiple cameras to show parts of the game world that the character camera is not currently looking at—like a 'cutaway' in cinematic terms. Unity does this with ease by allowing many cameras in a single scene, which can be scripted to act as the main camera at any point during runtime. Multiple cameras can also be used in a game to control the rendering of particular 2D and 3D elements separately as part of the optimization process. For example, objects may be grouped in layers, and cameras may be assigned to render objects in particular layers. This gives us more control over individual renders of certain elements in the game.
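As a rough companion to the vector and camera descriptions above, the following Unity C# sketch is an illustration only (the target field, the HUD camera reference, and the "HUD" layer name are assumptions, not from the article):

```csharp
using UnityEngine;

public class CameraAndVectorExample : MonoBehaviour
{
    public Transform target;  // assumed: assigned in the Inspector
    public Camera hudCamera;  // assumed: a secondary camera for HUD elements

    void Start()
    {
        // A vector between two objects carries both a length (distance)
        // and a direction, which is what makes vectors so useful.
        Vector3 toTarget = target.position - transform.position;
        Debug.Log("Distance: " + toTarget.magnitude);
        Debug.Log("Direction: " + toTarget.normalized);

        // A secondary camera set to Orthographic projection renders in 2D,
        // which suits HUD elements such as a map or health bar.
        hudCamera.orthographic = true;
        hudCamera.orthographicSize = 5f; // half the vertical view height in world units

        // Restrict the HUD camera to objects on an assumed "HUD" layer, so
        // 2D and 3D elements can be rendered and controlled separately.
        hudCamera.cullingMask = 1 << LayerMask.NameToLayer("HUD");
    }
}
```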
Polygons, edges, vertices, and meshes

In constructing 3D shapes, all objects are ultimately made up of interconnected 2D shapes known as polygons. On importing models from a modeling application, Unity converts all polygons to polygon triangles. By combining many linked polygons, 3D modeling applications allow us to build complex shapes, known as meshes. Polygon triangles (also referred to as faces) are in turn made up of three connected edges. The locations at which these edges meet are known as points or vertices. By knowing these locations, game engines are able to make calculations regarding points of impact, known as collisions, when using complex collision detection with Mesh Colliders, such as in shooting games, to detect the exact location at which a bullet has hit another object.

In addition to building 3D shapes that are rendered visibly, mesh data can have many other uses. For example, it can be used to specify a shape for collision that is less detailed than a visible object, but roughly the same shape. This can help save performance, as the physics engine needn't check a mesh in detail for collisions. This is seen in the following image from the Unity car tutorial, where the vehicle itself is more detailed than its collision mesh: In the second image, you can see that the amount of detail in the mesh used for the collider is far less than the visible mesh itself:

In game projects, it is crucial for the developer to understand the importance of the polygon count. The polygon count is the total number of polygons, often in reference to models, but also in reference to props or an entire game level (or in Unity terms, 'Scene'). The higher the number of polygons, the more work your computer must do to render the objects onscreen. This is why we've seen an increase in the level of detail from early 3D games to those of today. Simply compare the visual detail in a game such as id's Quake (1996) with that seen in Epic's Gears of War (2006), just a decade later. As a result of faster technology, game developers are now able to model 3D characters and worlds for games that contain a much higher polygon count and a resulting higher level of realism, and this trend will inevitably continue in the years to come. This said, as more platforms emerge, such as mobile and online, games previously seen on dedicated consoles can now be played in a web browser thanks to Unity. As such, hardware constraints are as important now as ever, as lower-powered devices such as mobile phones and tablets are able to run 3D games. For this reason, when modeling any object to add to your game, you should consider polygonal detail, and where it is most required.

Materials, textures, and shaders

Materials are a common concept in all 3D applications, as they provide the means to set the visual appearance of a 3D model. From basic colors to reflective image-based surfaces, materials handle everything. Let's start with a simple color and the option of using one or more images—known as textures. In a single material, the material works with the shader, which is a script in charge of the style of rendering. For example, in a reflective shader, the material will render reflections of surrounding objects, but maintain its color or the look of the image applied as its texture.

In Unity, the use of materials is easy. Any materials created in your 3D modeling package will be imported and recreated automatically by the engine as reusable assets. You can also create your own materials from scratch, assigning images as textures and selecting a shader from a large built-in library. You may also write your own shader scripts, or copy and paste those written by fellow developers in the Unity community, giving you more freedom for expansion beyond the included set.

When creating textures for a game in a graphics package such as Photoshop or GIMP, you must be aware of the resolution. Larger textures will give you the chance to add more detail to your textured models, but will be more intensive to render. Game textures imported into Unity will be scaled to a power-of-two resolution, for example: 64 x 64, 128 x 128, 256 x 256, 512 x 512, or 1024 x 1024 pixels. Creating textures of these sizes with content that matches at the edges means that they can be tiled successfully by Unity. You may also use textures scaled to values that are not powers of two, but these are mostly used for GUI elements.
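Tying the polygon count and material sections back to code, this hedged Unity C# sketch reports a mesh's vertex and triangle counts and builds a simple material; the "Diffuse" shader name and the texture field are assumptions for illustration:

```csharp
using UnityEngine;

public class MeshAndMaterialInfo : MonoBehaviour
{
    public Texture2D diffuseTexture; // assumed: a power-of-two texture assigned in the Inspector

    void Start()
    {
        // Unity stores imported geometry as triangles, so dividing the
        // triangle index array by three gives the face count.
        Mesh mesh = GetComponent<MeshFilter>().sharedMesh;
        Debug.Log("Vertices: " + mesh.vertexCount);
        Debug.Log("Triangles: " + (mesh.triangles.Length / 3));

        // A material pairs a shader (the style of rendering) with
        // properties such as a color and a texture.
        Material material = new Material(Shader.Find("Diffuse"));
        material.color = Color.white;
        material.mainTexture = diffuseTexture;
        GetComponent<Renderer>().material = material;
    }
}
```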


Using SpriteFonts in a Board-based Game with XNA 4.0

Packt
30 Sep 2010
10 min read
XNA 4.0 Game Development by Example: Beginner's Guide. Create your own exciting games with Microsoft XNA 4.0: dive headfirst into game creation with XNA; four different styles of games comprising a puzzler, a space shooter, a multi-axis shoot 'em up, and a jump-and-run platformer; games that gradually increase in complexity to cover a wide variety of game development techniques; focuses entirely on developing games with the free version of XNA; packed with many suggestions for expanding your finished game that will make you think critically, technically, and creatively; fresh writing filled with many fun examples that introduce you to game programming concepts and implementation with XNA 4.0; a practical beginner's guide with a fast-paced but friendly and engaging approach towards game development. Read more about this book.

(For more resources on XNA 4.0, see here.)

SpriteFonts

Unlike a Windows Forms application, XNA cannot use the TrueType fonts that are installed on your computer. In order to use a font, it must first be converted into a SpriteFont, a bitmap-based representation of the font in a particular size that can be drawn with the SpriteBatch.DrawString() command. Technically, any Windows font can be turned into a SpriteFont, but licensing restrictions on most fonts will prevent you from using them in your XNA games. The redistributable font package is provided by Microsoft to address this problem and give XNA developers a range of usable fonts that can be included in XNA games. Following are samples of each of the fonts included in the font package:

Time for action – add SpriteFonts to Game1

1. Right-click on the Fonts folder in the Content project in Solution Explorer and select Add | New Item.
2. From the Add New Item dialog, select Sprite Font. Name the font Pericles36.spritefont.
3. After adding the font, the spritefont file will open in the editor window.
4. In the spritefont file, change <FontName>Kootenay</FontName> to <FontName>Pericles</FontName>.
5. Change <Size>14</Size> to <Size>36</Size>.
6. Add the following declaration to the Game1 class: SpriteFont pericles36Font;
7. Update the LoadContent() method of the Game1 class to load the spritefont by adding: pericles36Font = Content.Load<SpriteFont>(@"Fonts\Pericles36");

What just happened?

Adding a SpriteFont to your game is very similar to adding a texture image. Since both are managed by the Content Pipeline, working with them is identical from a code standpoint. In fact, SpriteFonts are really just specialized sprite sheets, similar to what we used for our game pieces, and are drawn via the same SpriteBatch class we use to draw our sprites. The .spritefont file that gets added to your project is actually an XML document containing information that the Content Pipeline uses to create the .XNB file that holds the bitmap information for the font when you compile your code. The .spritefont file is copied from a template, so no matter what you call it, the XML will always default to 14 point Kootenay. In steps 4 and 5, we edit the XML to generate 36 point Pericles instead. Just as with a Texture2D, we declare a variable (this time a SpriteFont) to hold the Pericles 36 point font. The Load() method of the Content object is used to load the font.
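Pulling the fragments above into one place, a minimal sketch of how the SpriteFont fits into Game1 might look as follows (the drawn text and position here are placeholders, not values from the tutorial):

```csharp
// Inside the Game1 class (XNA 4.0)
SpriteFont pericles36Font;

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);

    // Loads the .xnb built from Fonts\Pericles36.spritefont by the Content Pipeline.
    pericles36Font = Content.Load<SpriteFont>(@"Fonts\Pericles36");
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    spriteBatch.Begin();
    // DrawString takes the font, the text, a position, and a color,
    // just like drawing a sprite with SpriteBatch.Draw().
    spriteBatch.DrawString(pericles36Font, "Hello, SpriteFont!",
        new Vector2(100, 100), Color.Black);
    spriteBatch.End();

    base.Draw(gameTime);
}
```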
SpriteFonts and extended characters

When a SpriteFont is built by the Content Processor, it actually generates bitmap images for each of the characters in the font. The range of characters generated is controlled by the <CharacterRegions> section in the SpriteFont's XML description. If you attempt to output a character not covered by this range, your game will crash. You can avoid this by removing the XML comment characters (<!-- and -->) from around the <DefaultCharacter> definition in the XML file. Whenever an unknown character is output, the character defined in <DefaultCharacter> will be used in its place.

Score display

Displaying the player's score with our new SpriteFont is simply a matter of calling the SpriteBatch.DrawString() method.

Time for action – drawing the score

1. Add a new Vector2 to the declarations area of Game1 to store the screen location where the score will be drawn: Vector2 scorePosition = new Vector2(605, 215);
2. In the Draw() method, remove "this.Window.Title = playerScore.ToString();" and replace the line with: spriteBatch.DrawString(pericles36Font, playerScore.ToString(), scorePosition, Color.Black);

What just happened?

Using named vectors to store things like text positions allows you to easily move them around later if you decide to modify the layout of your game screen. It also makes the code more readable, as we have the name scorePosition instead of a hard-coded vector value in the spriteBatch.DrawString() call. Since our window size is set to 800 by 600 pixels, the location we have defined above will place the score into the pre-defined score box on our background image texture. The DrawString() method accepts a font to draw with (pericles36Font), a string to output (playerScore.ToString()), a Vector2 specifying the upper-left corner of the location to begin drawing (scorePosition), and a color for the text to be drawn in (Color.Black).

ScoreZooms

Simply drawing the player's score is not very exciting, so let's add another use for our SpriteFont. In some puzzle games, when the player scores, the number of points earned is displayed in the center of the screen, rapidly growing larger and expanding until it flies off of the screen toward the player. We will implement this functionality with a class called ScoreZoom that will handle scaling the font.

Time for action – creating the ScoreZoom class

1. Add a new class file called ScoreZoom.cs to the project.
2. Add the following using directive to the top of the file: using Microsoft.Xna.Framework.Graphics;
3. Add the following declarations to the ScoreZoom class: public string Text; public Color DrawColor; private int displayCounter; private int maxDisplayCount = 30; private float scale = 0.4f; private float lastScaleAmount = 0.0f; private float scaleAmount = 0.4f;
4. Add the Scale read-only property to the ScoreZoom class: public float Scale { get { return scaleAmount * displayCounter; } }
5. Add a Boolean property to indicate when the ScoreZoom has finished displaying: public bool IsCompleted { get { return (displayCounter > maxDisplayCount); } }
6. Create a constructor for the ScoreZoom class: public ScoreZoom(string displayText, Color fontColor) { Text = displayText; DrawColor = fontColor; displayCounter = 0; }
7. Add an Update() method to the ScoreZoom class: public void Update() { scale += lastScaleAmount + scaleAmount; lastScaleAmount += scaleAmount; displayCounter++; }

What just happened?

The ScoreZoom class holds some basic information about a piece of text and how it will be displayed on the screen. The number of frames the text will be drawn for is determined by displayCounter and maxDisplayCount; the full class, assembled from these steps, is sketched below.
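For reference, the members listed in the steps above assemble into a class along these lines (a sketch of the tutorial's ScoreZoom, not a verbatim copy from the book):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

class ScoreZoom
{
    public string Text;
    public Color DrawColor;

    private int displayCounter;
    private int maxDisplayCount = 30;
    private float scale = 0.4f;
    private float lastScaleAmount = 0.0f;
    private float scaleAmount = 0.4f;

    // Scale used when drawing the text for the current frame.
    public float Scale
    {
        get { return scaleAmount * displayCounter; }
    }

    // True once the text has been shown for maxDisplayCount frames.
    public bool IsCompleted
    {
        get { return (displayCounter > maxDisplayCount); }
    }

    public ScoreZoom(string displayText, Color fontColor)
    {
        Text = displayText;
        DrawColor = fontColor;
        displayCounter = 0;
    }

    public void Update()
    {
        // The growth compounds each frame, producing the zoom effect
        // described in the next paragraph.
        scale += lastScaleAmount + scaleAmount;
        lastScaleAmount += scaleAmount;
        displayCounter++;
    }
}
```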
To manage the scale, three variables are used: scale contains the actual scale size that will be used when drawing the text, lastScaleAmount holds the amount the scale was increased by during the previous frame, and scaleAmount determines the growth in the scale factor during each frame. You can see how this is used in the Update() method. The current scale is increased by both the lastScaleAmount and scaleAmount. lastScaleAmount is then increased by the scale amount. This results in the scale growing in an exponential fashion instead of increasing linearly by a scaleAmount for each frame. This will give the text a zooming effect as it starts growing slowly and then speeds up rapidly to fill the screen. Time for action – updating and displaying ScoreZooms Add a Queue object to the Game1 class to hold active ScoreZooms: Queue<ScoreZoom> ScoreZooms = new Queue<ScoreZoom>(); Add a new helper method to the Game1 class to update the ScoreZooms queue: private void UpdateScoreZooms(){ int dequeueCounter = 0; foreach (ScoreZoom zoom in ScoreZooms) { zoom.Update(); if (zoom.IsCompleted) dequeueCounter++; } for (int d = 0; d < dequeueCounter; d++) ScoreZooms.Dequeue();} In the Update() method, inside the case section for GameState.Playing, add the call to update any active ScoreZooms. This can be placed right before the case's break; statement: UpdateScoreZooms(); Add the following to the CheckScoringChain() method to create a ScoreZoom when the player scores. Add this right after the playerScore is increased: ScoreZooms.Enqueue(new ScoreZoom("+" + DetermineScore(WaterChain.Count).ToString(), new Color(1.0f, 0.0f, 0.0f, 0.4f))); Modify the Draw() method of the Game1 class by adding the following right before the SpriteBatch.DrawString() call which draws the player's score: foreach (ScoreZoom zoom in ScoreZooms){ spriteBatch.DrawString(pericles36Font, zoom.Text, new Vector2(this.Window.ClientBounds.Width / 2, this.Window.ClientBounds.Height / 2), zoom.DrawColor, 0.0f, new Vector2(pericles36Font.MeasureString(zoom.Text).X / 2, pericles36Font.MeasureString(zoom.Text).Y / 2), zoom.Scale, SpriteEffects.None, 0.0f);} What just happened? Since all ScoreZoom objects "live" for the same amount of time, we can always be certain that the first one we create will finish before any created during a later loop. This allows us to use a simple Queue to hold ScoreZooms since a Queue works in a first-in-first-out manner. When UpdateScoreZooms() is executed, the dequeueCounter holds the number of ScoreZoom objects that have finished updating during this cycle. It starts at zero, and while the foreach loop runs, any ScoreZoom that has an IsCompleted property of true increments the counter. When the foreach has completed, ScoreZooms.Dequeue() is run a number of times equal to dequeueCounter. Adding new ScoreZoom objects is accomplished in step 4, with the Enqueue() method. The method is passed a new ScoreZoom object, which is constructed with a plus sign (+) and the score being added, followed by a red color with the alpha value set to 0.4f, making it a little more than halfway transparent. Just as the SpriteBatch.Draw() method has multiple overloads, so does the SpriteBatch.DrawString() method, and in fact, they follow much the same pattern. This form of the DrawString() method accepts the SpriteFont (pericles36Font), the text to display, a location vector, and a draw color just like the previous call. 
For the draw location in this case, we use this.Window.ClientBounds to retrieve the width and height of the game window. By dividing each by two, we get the coordinates of the center of the screen. The remaining parameters are the same as those of the extended Draw() call we used to draw rotated pieces. After the color value is rotation, which we have set to 0.0f, followed by the origin point for that rotation. We have used the MeasureString() method of the SpriteFont class to determine both the height and width of the text that will be displayed and divided the value by two to determine the center point of the text. Why do this when there is no rotation happening? Despite what the order of the parameters might indicate, this origin also impacts the next parameter: the scale. When the scale is applied, it sizes the text around the origin point. If we were to leave the origin at the default (0, 0), the upper left corner of the text would remain in the center of the screen and it would grow towards the bottom right corner. By setting the origin to the center of the text, the scale is applied evenly in all directions: Just as with the extended Draw() method earlier, we will use SpriteEffects.None for the spriteEffects parameter and 0.0f for the layer depth, indicating that the text should be drawn on top of whatever has been drawn already. Adding the GameOver game state Now that we can draw text, we can add a new game state in preparation for actually letting the game end when the facility floods.


Raspberry Pi Gaming Operating Systems

Packt
19 Feb 2015
3 min read
In this article by Shea Silverman, author of the book Raspberry Pi Gaming Second Edition, we will see how the Raspberry Pi, while a powerful little device, is nothing without software to run on it. Setting up emulators, games, and an operating system can be a daunting task for those who are new to using Linux. Luckily, there are distributions (operating system images) that handle all of this for us. In this article, we will demonstrate a distribution that has been specially made for gaming.

(For more resources related to this topic, see here.)

PiPlay

PiPlay is an open source premade distribution that combines numerous emulators, games, and a custom frontend that serves as the GUI for the Raspberry Pi. Created in 2012, PiPlay started as PiMAME. Originally, PiMAME was a version of Raspbian that included AdvanceMAME and the AdvanceMENU frontend. The distribution was set to autologin and start up AdvanceMENU at boot. This project was founded because of the numerous issues users were facing when trying to get MAME to compile and run on their own devices. As more and more emulators were released, PiMAME began to include them in the image, and changed its name to PiPlay, as it wasn't just for arcade emulation anymore. Currently, PiPlay contains the following emulators and games:

AdvanceMAME (Arcade)
MAME4ALL (Arcade)
Final Burn Alpha (Capcom and Neo Geo)
PCSX_ReARMed (PlayStation)
Dgen (Genesis)
SNES9x (Super Nintendo)
FCEUX (NES)
Gearboy (Gameboy)
GPSP (Gameboy Advance)
ScummVM (point-and-click games)
Stella (Atari 2600)
NXEngine (Cave Story)
VICE (Commodore 64)
Mednafen (Game Gear, Neo Geo Pocket Color, Sega Master System, Turbo Grafx 16/PC-Engine)

To download the latest version of PiPlay, go to http://piplay.org and click on the Download option. Now you need to burn the PiPlay image to your SD card. When this is completed, insert the SD card into your Raspberry Pi and turn it on. Within a few moments, you should see an image like this on your screen:

Once it's finished booting, you will be presented with the PiPlay menu screen:

Here, you will see all the different emulators and tools you have available. PiPlay includes an extensive controller setup tool. By pressing the Tab key or button 3 on your controller, you can bring up a popup window. Select Controller Setup and follow the onscreen guide to properly configure your controller:

At the moment, there isn't much to do because you haven't loaded any games for the emulators. The easiest way to load your game files into PiPlay is to use the web frontend. If you connect your Pi to your network, an IP address should appear at the top right of your screen. Another way to find out your IP address is by running the command ifconfig on the command line. Navigate your computer's web browser to this address, and the PiPlay frontend will appear:

Here, you can reboot, shut down, and upload numerous files to the Pi via a drag-and-drop interface. Simply select the emulator you want to upload files to, find your game file, and drag it onto the box. In a few moments, the file will be uploaded.

Summary

In this article, you have been introduced to the PiPlay Raspberry Pi distribution. While all Raspberry Pi distributions share a lot in common, each goes about implementing gaming in its own unique way. Try them all and use the one that fits your gaming style the best.


Getting Started with Mudbox 2013

Packt
18 Sep 2012
12 min read
(For more resources on Web Graphics and Videos, see here.)

Introduction

This article will help you get your preferences set up so that you can work in a way that is most intuitive and efficient for you. Whether you are a veteran or a newbie, it is always a good idea to establish a good workflow. It will speed up your production time, allowing you to get ideas out of your head before you forget them. This will also greatly aid you in meeting deadlines and producing more iterations of your work.

Installing Mudbox 2013 documentation

In addition to the recipes in this book, you may find yourself wanting to look through the Mudbox 2013 documentation for additional help. By default, when you navigate to Help through Mudbox 2013's interface, you will be sent to an online help page. If you have a slow Internet connection or lack a connection altogether, you may want to install a local copy of the documentation. After downloading and installing the local copy, it is a good idea to have Mudbox 2013 point you to the right location when you navigate to Help from the menus. This will eliminate the need to navigate through your files in order to find the documentation. The following recipe will guide you through this process.

How to do it...

The first thing you will want to do is download the documentation from Autodesk's website. You can find the documentation for this version as well as the previous versions at the following link: http://usa.autodesk.com/adsk/servlet/index?siteID=123112&id=17765502. Once you're on this page, you can scroll down and click on 2013 for the language and operating system that you are using. The following screenshot is what you should see:

Next, you will navigate to the location that you downloaded the file to, and run it. Now follow the prompts by clicking Next until the installation is complete. This file will install the documentation into your Autodesk Mudbox 2013 folder by default. You can change this location during the installation process if you like, but I recommend leaving it as the default location.

After the local version of the Help files is installed, we need to point Mudbox 2013's Help menu to the local copy of the documentation. To do this, open Mudbox 2013, click on Windows in the top menu bar, and click on Preferences. The following screenshot shows how it should look:

Next, click on the small arrow next to Help so that more options open up. You will notice that next to Help Location it says Autodesk Web Site. We are going to change that to Installed Local Help by clicking on the small arrow next to (or directly on the text) Autodesk Web Site and choosing Installed Local Help from the drop-down menu. Then click on OK. Take note that if you did install your documentation to a different directory, then you will need to choose Custom instead of Installed Local Help. You will then need to copy and paste the directory location into the Help Path textbox.

Setting up hotkeys

The first thing you will want to do when you start using a new piece of software is either set up your own hotkeys or familiarize yourself with the default hotkeys. This is very important for speeding up your workflow. If you do not use hotkeys, you will have to constantly go through menus and scroll through windows to find the tools that you need, which will undoubtedly slow you down.

How to do it...

First you will need to go into the Windows menu item on the top menu bar. Next, you will click on Hotkeys to bring up the hotkey window as shown in the next screenshot.
You will notice a drop-down menu that reads Use keyboard shortcuts from with a Restore Mudbox Defaults button next to it. Within this menu you can set your default hotkeys to resemble a 3D software that you are accustomed to using. This will help you transition smoothly into using Mudbox. If you are new to all 3D software, or use a software package that is not on this list, then using Mudbox hotkeys should suffice. The following screenshot shows the options available in Mudbox 2013: After choosing a default set of keys, you can now go in and change any hotkeys that you would like to customize. Let's say, I would like Eyedropper to activate when I press the E key and the left mouse button together. What you will do is change the current letter that is in the box next to Eyedropper to E and you will make sure there is a check in the box next to LMB (Left Mouse Button). It should look like the following screenshot: How it works... Once all your hotkeys are set up as desired, you will be able to use quick keystrokes to access a large number of tools without ever taking your eyes off your project. The more you get comfortable with your hotkeys, the faster you will get at switching between tools. There's more... When you first start using a particular software, you probably won't know exactly which tools you will be using most often. With that in mind, you will want to revisit your hotkey customization after getting a feel for your workflow and which tools you use the most. Another thing you want to think about, when setting up your hotkeys, is how easy it is to use the hotkey. For example, I tend to make hotkeys that relate to the tool in some way in order to make it easier to remember. For example, the Create Curve tool has a good hotkey already set for it, Ctrl+ C, for the reasons mentioned as follows: One reason it is a good hotkey is that the first letter of the tool is also the letter of the key being used for the hotkey. I can relate Cto curve. Another reason this could be a good hotkey is because if creating curves is something that I find myself doing often, then all I have to do is use my pinky finger on the Ctrl key and my pointer finger on the C key. You may think "Yeah? So what?" but if I were to set the hotkey to Ctrl+ Alt+ U it's a bit more of a stretch on my fingers and I would not want to do that frequently. The point is, key location and frequency of use are things you want to think about to speed up your workflow and stay comfortable while using your hotkeys. Increasing the resolution on your model Before you can get any fine details, or details that you would see while viewing from close up, into the surface of your model you will need to subdivide your mesh to increase its resolution. In the same way that a computer monitor displays more pixels when its resolution is increased, a model will have more points on its surface when the resolution is increased. How to do it... The hotkey for subdividing your surface is Shift + D or you can alternatively go into the menus as shown in the following screenshot: How it works... What this does is it adds more polygons which can be manipulated to add more detail. You will not want to subdivide your model too many times, otherwise, your computer will begin to slow down. The extent to which your computer will slow down is exponential. For example, if you have a six-sided cube and you subdivide it once, it will become 24-sided. If you subdivide it one more time, it will become 96-sided and so on. 
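To make the arithmetic concrete, each subdivision level replaces every quad with four smaller quads, so the face count grows by a factor of four per level. The following small C# sketch (just an illustration, not part of Mudbox) prints this progression for a cube:

```csharp
using System;

class SubdivisionGrowth
{
    static void Main()
    {
        int faces = 6; // a cube has six quads at level 0
        for (int level = 0; level <= 5; level++)
        {
            Console.WriteLine("Level {0}: {1} faces", level, faces);
            faces *= 4; // each subdivision splits every quad into four
        }
    }
}
```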
The following screenshot from Maya shows you what the wireframe looks like from one level to the next: The reason this image was created in Maya is because Mudbox will only show the proper wireframe when your model reaches 1000 polygons or more. The more powerful your computer, the more smoothly Mudbox 2013 will run. More specifically, it's the RAM and the video memory that are important. The following are some explanations on how RAM and video memory will affect your machines performance. RAM is the most important of all. The more RAM you have, the more polygons Mudbox will be able to handle, without taking a performance hit. The video memory increases the performance of your video card and allows high resolution, high speed, and color graphics. Basically, it allows the Graphical User Interface (GUI) to have better performance. So, now that you know RAM is important, how do you decide how much will be needed to run Mudbox 2013 smoothly? Well, one thing to consider is your operating system and the version of Mudbox 2013 you are running. If you have a 32-bit operating system and you are running the 32-bit Mudbox 2013, then the maximum RAM you can get is 4 GB. But, in reality you are only getting about 3 GB of RAM as the operating system needs to use around 1 GB of that memory. On the other hand, if you are using a 64-bit operating system and the 64-bit Mudbox 2013 version then you are capped at about 8 TB (yes, I said TB not GB). You will not need anywhere near that amount of RAM to run Mudbox 2013 smoothly. My recommendation is to have a minimum of 8 GB of RAM and 1 GB of video memory. With this amount of RAM and video memory you should be able to work with around 10 million triangles on the top level of your sculpt . There's more... Notice the little white box next to Add New Subdivision Level in the following screenshot: By clicking on this box, you will be given a few options for how Mudbox will handle the subdivision, as shown in the following screenshot: The options shown in the previous screenshot are explained as follows: Smooth Positions: This option will smooth out the edges by averaging out the vertices that are added. The following screenshot shows the progression from Level 0 to Level 2 on a cube: Subdivide UVs: If this option is unchecked when you create a new subdivision level, then you will lose your UVs on the object. To get your UVs back you will need to recreate the UVs for that level. If the Subdivide UVs option is turned on then it will just add subdivisions to your existing UVs. Smooth UVs: If this option is turned on, the UVs will be smoothed within the UV Borders as shown in the next screenshot: If you want your borders to smooth along with the interior parts of the shell, as shown in the next screenshot, then you will need to take a few extra steps to allow this: This is the method Mudbox used in the 2009 and earlier versions. In Mudbox 2010, they switched the way they handle this operation so that the borders do not smooth. Here is an excerpt from the Service Pack notes from 2010: "A new environment variable now exists to alter how the Smooth UVs property works when subdividing a model: MUDBOX2009_SUBDIVIDE_SMOOTH_UV. When this environment variable is set, the Smooth UVs property works as it did in Mudbox 2009. That is, the entire UV shell, including its UV borders, are smoothed when subdividing a model whenever the Smooth UVs property is turned on. If this environment variable is not set, the default Mudbox 2010 UV smoothing behavior occurs. 
That is, smoothing only occurs for the interior UVs in a UV shell, leaving the UV shell border edges unsmoothed. Which UV smoothing method you choose to use is entirely dependent on your individual rendering pipeline requirements and render application used." This has not changed since Mudbox 2010. So, basically what you need to do on a PC is add an environment variable MUDBOX2009_SUBDIVIDE_SMOOTH_UV that has a value of 1. To do this you will need to right-click on My Computer and click on Properties. Then, choose Advanced system settings and under the Advanced tab click on Environment Variables.... Under System Variables click on New.... In the blank where it says Variable Name enter MUDBOX2009_SUBDIVIDE_SMOOTH_UV and under Variable Value input a 1. Hit OK and it's all ready to go. Moving up and down subdivision levels Once you create subdivision levels using Shift + D, or through the menus, you can move up and down the levels you have created by using the Page Up key to move up in levels, or the Page Down key to move down in levels. But keep in mind, you will not be able to go any higher than the highest level you created using Add New Subdivision Level and you will never be able to go below Level 0. Another thing to take into account is which model you are subdividing. If you have multiple objects in your scene, you need to make sure the correct mesh is active when subdividing. The following are a couple of ways to make sure you are subdividing the correct mesh: One way is to select the object in the Object List before hitting Shift + D. Another way is to hover your mouse cursor over the mesh that you want to subdivide and then hit Shift + D. This will subdivide the mesh that is directly underneath your cursor.


Getting Started with Blender’s Particle System- A Sequel

Packt
19 Jun 2010
5 min read
Creating Fire Taking from the same setup we had for the creating the smoke, making a fire is almost a similar process except for a few changes: the halo shader settings and force field strengths. Let’s go ahead and start changing the halo shader such that we change the color, hardness, add, and to disable the texture option. Then, we change the Force Field from Texture to Force with Strength of -6.7. Halo Settings for Fire   Force Field Settings   Fire Test Render   Furthermore, we can achieve even more believable results when we plug these image renders over to our compositor for some contrast boosting and other cool 2d effects. Creating Bubbles Let’s start a new Blender file, delete the default Cube, and replace it with a Plane primitive. Then let’s position the camera such that our plane is just below our view. Preparing the View   Next, let’s add a new particle system to the Plane and name it “Bubble”. Check the screenshots below for the settings. Bubble Cache Settings   Bubble Emission Settings   Bubble Velocity Settings   Bubble Physics Settings   Bubble Display Settings Bubble Field Weights Settings Now that we’ve got those settings in (remember though to play around because your settings might be way better than mine), let’s add a UV Sphere with the default divisions to our scene and name it “Bubble”. Then place it somewhere that the camera view won’t see. Adding, Moving, and Renaming the UV Sphere   What we’ll be doing next is to “instance” the UV Sphere (“Bubble”) we just added into the emitter plane, thus obeying the Particle Settings that we’ve set awhile back. To do this, select the Emitter plane and edit the Render and Display settings under Particle Settings (as seen below). Emitter Render and Display Settings Now if we play the animation in our 3D Viewport, you’ll now notice that the UV Sphere is now instanced to where the particle points are before, replacing them with a mesh object. Often, the instanced Bubble object would look small in our view, if this happens, simply scale the Bubble object and it will propagate accordingly in our Particle System. Instanced Bubble Objects   And that’s about it! Coupled with some nice shaders and compositing effects, you can definitely achieve impressive and seamless results. Bubbles with Sample Shaders and Image Background   Bubble Particles Composited Over a Photograph Bubbles and Butterflies   Creating Rockslides Similar to the concept of creating bubbles via Particle Systems, let’s derive the steps and create something different. This time, we’ll take advantage of Blender’s physics systems to automate natural motion and collision interaction. We won’t be using the Blender Game Engine for this matter (which should do almost the same thing), but instead we’re still going to use the particle system that is present in Blender. Like how we started the other topics, this time again, we’ll start by refreshing Blender’s view and starting a new session. Delete the default cube and add a plane mesh or a grid and start modeling a mountain-like terrain. This will be our slope from which our rock particles will fall and slide later on. You can use whichever technique you have on your disposal. Fast forward into time, here’s closely what we should have: Terrain Model for Rock Sliding   Next step is to create the actual rocks that are going to be falling and sliding on our terrain mesh. It’s optimal to start with an Icosphere and model from there. 
Be sure to move the models out of the camera’s view since we don’t want to see the original meshes, only the instances that are going to be generated. Model five (5) variations of the rocks and create a group for them named “RockGroup”. Rock Group Add an emitter plane across the top of the mountain terrain, this will be our particle rock emitter. Rock Particle Emitter   Next, create a Particle System on the emitter mesh and call it “RockSystem”. And this time, we’ll use the default gravity settings to simulate falling rock. Check the screenshots below for the particle setup.           Additionally, we must set the terrain mesh as a collision object such that the particles react to it whenever they collide. Play around with the settings until you’re satisfied with the behavior of your particles. Press ALT+A or click the play button in the Timeline Window to preview the animation. Setting Terrain as Collision   Single Frame from the Animation   Single Frame Rendered  


Polygon Modeling of a Handgun using Blender 3D 2.49: Part 1

Packt
30 Nov 2009
3 min read
With the base model created, we will be able to analyze the shape of our model and evaluate the next steps of the project. We can even decide to make changes to the project because new ideas may appear when we see the object in 3D rather than in 2D. Starting with a background image The first step to start the modeling is to add the reference image as the background of the Blender 3D view. To do that, we can go to the View menu in the 3D view and choose Background Image. The background image in Blender appears only when we are at an orthogonal or Camera view. The background image is a simple black and white drawing of the weapon, but it will be a great reference for modeling. Before we go any further, it's important to point out a few things about the Background Image menu. We can make some adjustments to the image if it doesn't fit our Blender view: Use: With this button turned on, we will use the image as a background. If you want to hide the image, just turn it off and the image will disappear. Blend: The blend slider will control the transparency of the image. If you feel that the image is actually blocking your view of the whole model, making it a bit transparent may help. Size: As the name says, we can control the scale of the image. X and Y offset: With this option, we will be able to move the image in the X or Y axis to place it in a specific location. After clicking on the Use button, just hit the load button and choose the image to be used as a reference. Since you don't have the image used in this example, visit the Packt web site and download the project files from Support. If you've never used a reference image in Blender, it is important to note that the reference images appear only in 3D view when we are using the orthographic view or the camera view mode. It works only in the top, right, left, front, and other orthographic views. If you hit 5 and change the view to perspective, the image will disappear. By using the middle mouse button or the scroll to rotate the view, the image disappears. However, it's still there and we can see the image again by changing the view to an orthogonal or camera view. Make the image more transparent by using the Blend control. It will help in focusing on the model instead of the image. A value of 0.25 will be enough to help in the modeling without causing confusion.

Straight into Blender!

Packt
16 Sep 2015
18 min read
 In this article by Romain Caudron and Pierre-Armand Nicq, the authors of Blender 3D By Example, you will start getting familiar with Blender. (For more resources related to this topic, see here.) Here, navigation within the interface will be presented. Its approach is atypical in comparison to other 3D software, such as Autodesk Maya® or Autodesk 3DS Max®, but once you get used to this, it will be extremely effective. If you have had the opportunity to use Blender before, it is important to note that the interface went through changes during the evolution of the software (especially since version 2.5). We will give you an idea of the possibilities that this wonderful free and open source software gives by presenting different workflows. You will learn some vocabulary and key concepts of 3D creation so that you will not to get lost during your learning. Finally, you will have a brief introduction to the projects that we will carry out throughout this book. Let's dive into the third dimension! The following topics will be covered in this article: Learning some theory and vocabulary Navigating the 3D viewport How to set up preferences Using keyboard shortcuts to save time An overview of the 3D workflow Before learning how to navigate the Blender interface, we will give you a short introduction to the 3D workflow. An anatomy of a 3D scene To start learning about Blender, you need to understand some basic concepts. Don't worry, there is no need to have special knowledge in mathematics or programming to create beautiful 3D objects; it only requires curiosity. Some artistic notions are a plus. All 3D elements, which you will handle, will evolve in to a scene. There is a three-dimensional space with a coordinate system composed of three axes. In Blender, the x axis shows the width, y axis shows the depth, and the z axis shows the height. Some softwares use a different approach and reverses the y and z axes. These axes are color-coded, we advise you to remember them: the x axis in red, the y axis in green and the z axis in blue. A scene may have the scale you want and you can adjust it according to your needs. This looks like a film set for a movie. A scene can be populated by one or more cameras, lights, models, rigs, and many other elements. You will have the control of their placement and their setup. A 3D scene looks like a film set. A mesh is made of vertices, edges, and faces. The vertices are some points in the scene space that are placed at the end of the edges. They could be thought of as 3D points in space and the edges connect them. Connected together, the edges and the vertices form a face, also called a polygon. It is a geometric plane, which has several sides as its name suggests. In 3D software, a polygon is constituted of at least three sides. It is often essential to favor four-sided polygons during modeling for a better result. You will have an opportunity to see this in more detail later. Your actors and environments will be made of polygonal objects, or more commonly called as meshes. If you have played old 3D games, you've probably noticed the very angular outline of the characters; it was, in fact, due to a low count of polygons. We must clarify that the orientation of the faces is important for your polygon object to be illuminated. Each face has a normal. This is a perpendicular vector that indicates the direction of the polygon. In order for the surface to be seen, it is necessary that the normals point to the outside of the model. 
Except in special cases where the interior of a polygonal object is empty and invisible. You will be able to create your actors and environment as if you were handling virtual clay to give them the desired shape. Anatomy of a 3D Mesh To make your characters presentable, you will have to create their textures, which are 2D images that will be mapped to the 3D object. UV coordinates will be necessary in order to project the texture onto the mesh. Imagine an origami paper cube that you are going to unfold. This is roughly the same. These details are contained in a square space with the representation of the mesh laid flat. You can paint the texture of your model in your favorite software, even in Blender. This is the representation of the UV mapping process. The texture on the left is projected to the 3D model on the right. After this, you can give the illusion of life to your virtual actors by animating them. For this, you will need to place animation keys spaced on the timeline. If you change the state of the object between two keyframes, you will get the illusion of movement—animation. To move the characters, there is a very interesting process that uses a bone system, mimicking the mechanism of a real skeleton. Your polygon object will be then attached to the skeleton with a weight assigned to the vertices on each bone, so if you animate the bones, the mesh components will follow them. Once your characters, props, or environment are ready, you will be able to choose a focal length and an adequate framework for your camera. In order to light your scene, the choice of the render engine will be important for the kind of lamps to use, but usually there are three types of lamps as used in cinema productions. You will have to place them carefully. There are directional lights, which behave like the sun and produce hard shadows. There are omnidirectional lights, which will allow you to simulate diffuse light, illuminating everything around it and casting soft shadows. There are also spots that will simulate a conical shape. As in the film industry or other imaging creation fields, good lighting is a must-have in order to sell the final picture. Lighting is an expressive and narrative element that can magnify your models, or make them irrelevant. Once everything is in place, you are going to make a render. You will have a choice between a still image and an animated sequence. All the given parameters with the lights and materials will be calculated by the render engine. Some render engines offer an approach based on physics with rays that are launched from the camera. Cycles is a good example of this kind of engine and succeeds in producing very realistic renders. Others will have a much simpler approach, but none less technically based on visible elements from the camera. All of this is an overview of what you will be able to achieve while reading this book and following along with Blender. What can you do with Blender? In addition to being completely free and open source, Blender is a powerful tool that is stable and with an integral workflow that will allow you to understand your learning of 3D creation with ease. Software updates are very frequent; they fix bugs and, more importantly, add new features. You will not feel alone as Blender has an active and passionate community around it. There are many sites providing tutorials, and an official documentation detailing the features of Blender. 
You will be able to carry out everything you need in Blender, including things that are unusual for a 3D package such as concept art creation, sculpting, or digital postproduction, which we have not yet discussed, including compositing and video editing. This is particularly interesting in order to push the aesthetics of your future images and movies to another level. It is also possible to make video games. Also, note that the Blender game engine is still largely unknown and underestimated. Although this aspect of the software is not as developed as other specialized game engines, it is possible to make good quality games without switching to another software. You will realize that the possibilities are enormous, and you will be able to adjust your workflow to suit your needs and desires. Software of this type could scare you by its unusual handling and its complexity, but you'll realize that once you have learned its basics, it is really intuitive in many ways. Getting used to the navigation in Blender Now that you have been introduced to the 3D workflow, you will learn how to navigate the Blender interface, starting with the 3D viewport. An introduction to the navigation of the 3D Viewport It is time to learn how to navigate in the Blender viewport. The viewport represents the 3D space, in which you will spend most of your time. As we previously said, it is defined by three axes (x, y, and z). Its main goal is to display the 3D scene from a certain point of view while you're working on it. The Blender 3D Viewport When you are navigating through this, it will be as if you were a movie director but with special powers that allow you to film from any point of view. The navigation is defined by three main actions: pan, orbit, and zoom. The pan action means that you will move horizontally or vertically according to your current point of view. If we connect that to our cameraman metaphor, it's like if you were moving laterally to the left, or to the right, or moving up or down with a camera crane. By default, in Blender the shortcut to pan around is to press the Shift button and the Middle Mouse Button (MMB), and drag the mouse. The orbit action means that you will rotate around the point that you are focusing on. For instance, imagine that you are filming a romantic scene of two actors and that you rotate around them in a circular manner. In this case, the couple will be the main focus. In a 3D scene, your main focus would be a 3D character, a light, or any other 3D object. To orbit around in the Blender viewport, the default shortcut is to press the MMB and then drag the mouse. The last action that we mentioned is zoom. The zoom action is straightforward. It is the action of moving our point of view closer to an element or further away from an element. In Blender, you can zoom in by scrolling your mouse wheel up and zoom out by scrolling your mouse wheel down. To gain time and precision, Blender proposes some predefined points of view. For instance, you can quickly go in a top view by pressing the numpad 7, you can also go in a front view by pressing the numpad 1, you can go in a side view by pressing the numpad 3, and last but not least, the numpad 0 allows you to go in Camera view, which represents the final render point of the view of your scene. You can also press the numpad 5 in order to activate or deactivate the orthographic mode. The orthographic mode removes perspective. It is very useful if you want to be precise. It feels as if you were manipulating a blueprint of the 3D scene. 
The difference between Perspective (left) and Orthographic (right) If you are lost, you can always look at the top left corner of the viewport in order to see in which view you are, and whether the orthographic mode is on or off. Try to learn by heart all these shortcuts; you will use them a lot. With repetition, this will become a habit. What are editors? In Blender, the interface is divided into subpanels that we call editors; even the menu bar where you save your file is an editor. Each editor gives you access to tools categorized by their functionality. You have already used an editor, the 3D view. Now it's time to learn more about the editor's anatomy. In this picture, you can see how Blender is divided into editors The anatomy of an editor There are 17 different editors in Blender and they all have the same base. An editor is composed of a Header, which is a menu that groups different options related to the editor. The first button of the header is to switch between other editors. For instance, you can replace the 3D view by the UV Image Editor by clicking on it. You can easily change its place by right-clicking on it in an empty space and by choosing the Flip to Top/Bottom option. The header can be hidden by selecting its top edge and by pulling it down. If you want to bring it back, press the little plus sign at the far right. The header of the 3D viewport. The first button is for switching between editors, and also, we can choose between different options in the menu In some editors, you can get access to hidden panels that give you other options. For instance, in the 3D view you can press the T key or the N key to toggle them on or off. As in the header, if a sub panel of an editor is hidden, you can click on the little plus sign to display it again. Split, Join, and Detach Blender offers you the possibility of creating editors where you want. To do this, you need to right-click on the border of an editor and select Split Area in order to choose where to separate them. Right-click on the border of an editor to split it into two editors The current editor will then be split in two editors. Now you can switch to any other editor that you desire by clicking on the first button of the header bar. If you want to merge two editors into one, you can right-click on the border that separates them and select the Join Area button. You will then have to click on the editor that you want to erase by pointing the arrow on it. Use the Join Area option to join two editors together You then have to choose which editor you want to remove by pointing and clicking on it. We are going to see another method of splitting editors that is nice. You can drag the top right corner of an editor and another editor will magically appear! If you want to join back two editors together, you will have to drag the top right corner in the direction of the editor that you want to remove. The last manipulation can be tricky at first, but with a little bit of practice, you will also be able to do it with closed eyes! The top right corner of an editor If you have multiple monitors, it could be a great idea to detach some editors in a separated window. With this, you could gain space and won't be overwhelmed by a condensed interface. In order to do this, you will need to press the Shift key and drag the top right corner of the editor with the Left Mouse Button (LMB). Some useful layout presets Blender offers you many predefined layouts that depend on the context of your creation. 
For instance, you can select the Animation preset in order to have all the major animation tools, or you can use the UV Editing preset in order to prepare your texturing. To switch between the presets, go to the top of the interface (in the Info editor, near the Help menu) and click on the drop-down menu. If you want, you can add new presets by clicking on the plus sign or delete presets by clicking on the X button. If you want to rename a preset, simply enter a new name in the corresponding text field. The following screenshot shows the Layout presets drop-down menu: The layout presets drop-down menu Setting up your preferences When we start learning new software, it's good to know how to set up your preferences. Blender has a large number of options, but we will show you just the basic ones in order to change the default navigation style or to add new tools that we call add-ons in Blender. An introduction to the Preferences window The preferences window can be opened by navigating to the File menu and selecting the User Preferences option. If you want, you can use the Ctrl + Alt + U shortcut or the Cmd key and comma key on a Mac system. There are seven tabs in this window as shown here: The different tabs that compose the Preferences window A nice thing that Blender offers is the ability to change its default theme. For this, you can go to the Themes tab and choose between different presets or even change the aspect of each interface elements. Another useful setting to change is the number of undo that is 32 steps, by default. To change this number, go to the Editing tab and under the Undo label, slide the Steps to the desired value. Customizing the default navigation style We will now show you how to use a different style of navigation in the viewport. In many other 3D programs, such as Autodesk Maya®, you can use the Alt key in order to navigate in the 3D view. In order to activate this in Blender, navigate to the Input tab, and under the Mouse section, check the Emulate 3 Button Mouse option. Now if you want to use this navigation style in the viewport, you can press Alt and LMB to orbit around, Ctrl + Alt and the LMB to zoom, and Alt + Shift and the LMB to pan. Remember these shortcuts as they will be very useful when we enter the sculpting mode while using a pen tablet. The Emulate 3 Button Mouse checkbox is shown as follows: The Emulate 3 Button Mouse will be very useful when sculpting using a pen tablet Another useful setting is the Emulate Numpad. It allows you to use the numeric keys that are above the QWERTY keys in addition to the numpad keys. This is very useful for changing the views if you have a laptop without a numpad, or if you want to improve your workflow speed. The Emulate Numpad allows you to use the numeric keys above the QWERTY keys in order to switch views or toggle the perspective on or off Improving Blender with add-ons If you want even more tools, you can install what is called as add-ons on your copy of Blender. Add-ons, also called Plugins or Scripts, are Python files with the .py extension. By default, Blender comes with many disabled add-ons ordered by category. We will now activate two very useful add-ons that will improve our speed while modeling. First, go to the Add-ons tab, and click on the Mesh button in the category list at the left. Here, you will see all the default mesh add-ons available. Click on the check-boxes at the left of the Mesh: F2 and Mesh: LoopTools subpanels in order to activate these add-ons. 
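If you prefer to script these settings, the same preference changes and add-on activations can be driven from Blender's Python console. The sketch below targets the 2.7x-series bpy API; the two add-on module names are assumptions on our part, so check the Add-ons tab if they differ on your installation.

import bpy

# Navigation preferences discussed above.
preferences = bpy.context.user_preferences
preferences.inputs.use_mouse_emulate_3_button = True   # Alt-based navigation, useful with a pen tablet
preferences.inputs.use_emulate_numpad = True           # top-row number keys behave like the numpad

# Enable the two modelling add-ons (module names assumed to be mesh_f2 and mesh_looptools).
bpy.ops.wm.addon_enable(module="mesh_f2")
bpy.ops.wm.addon_enable(module="mesh_looptools")

# Persist the changes, the scripted equivalent of clicking Save User Settings.
bpy.ops.wm.save_userpref()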
If you know the name of the add-on you want to activate, you can try to find it by typing its name in the search bar. There are many websites where you can download free add-ons, starting from the official Blender website. If you want to install a script, you can click on the Install from File button and you will be asked to select the corresponding Python file. The official Blender Add-ons Catalog You can find it at http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts. The following screenshot shows the steps for activating the add-ons: Steps for Add-ons activation Where are the add-ons on the hard-disk? All the scripts are placed in the add-ons folder that is located wherever you have installed Blender on your hard disk. This folder will usually be at Your Installation PathBlender FoundationBlender2.VersionNumberscriptsaddons. If you find it easier, you can drop the Python files here instead of at the standard installation. Don't forget to click on the Save User Settings button in order to save all your changes! Summary In this article, you have learned the steps behind 3D creations. You know what a mesh is and what it is composed of. Then you have been introduced to navigation in Blender by manipulating the 3D viewport and going through the user preference menu. In the later sections, you configured some preferences and extended Blender by activating some add-ons. Resources for Article: Further resources on this subject: Editing the UV islands[article] Working with Blender[article] Designing Objects for 3D Printing [article]

ZBrush 4 Sculpting: Preparing the Creature for Games

Packt
17 Mar 2011
4 min read
ZBrush 4 Sculpting for Games: Beginner's Guide Sculpt machines, environments, and creatures for your game development projects         Read more about this book       (For more resources on the subject, see here.) You can download the finished creature by clicking here. Retopologizing for games Okay, we've got our high-poly mesh ready. But in order to bring it into a game engine, we need to build a low-polygon mesh from it. Because we used ZSketch in the beginning, we now create the low-poly mesh at the end of the process. This step is called retopologizing, or in most cases just retopo, because it creates a mesh with a new topology and projects the details from the high-poly onto it. In this way, we end up with a mesh with a clean and optimized topology and all of the high resolution details that we sculpted in earlier. This process can also be done in other major 3D applications such as Blender, 3ds Max, and so on. Honestly, I would prefer the retopologizing in Blender over that of ZBrush, as I find it a bit more robust. Nonetheless, the results are the same. So let's see how we can do this in ZBrush Before retopologizing, we should be clear about these points: Polycount: Think of how many polygons the character should have; this always seems tricky, but after some time you get used to it. Ask yourself how many polygons your platform and engine can handle for a character and how big will it be seen onscreen. Let's say our Brute will be used on a PC and console title, he's a few meters high and is visible fullscreen. So we're heading roughly for something around 5000 polygons. This is a very rough estimate and if we need less polygons for it, that's fine too. The fewer the better. Animation: How will the character be animated? If there's elaborate facial animation going on, the character should have enough polygons in his face, so the animator can move something. Ideally, the polygon-flow supports the animation. Quads: Always try to establish as many quads as possible. Sometimes, we just need to add triangles, but try to go for quads wherever you can. With these three points in mind, let's start retopologizing Time for action – creating an in-game mesh with retopologize First, lower the subdivision levels of all the subtools to get increased performance. Append a ZSphere as a subtool and make it the active one. Name it something meaningful like Retopo. Activate Transparency and place the ZSphere inside the mesh so that it is hidden, as shown in the next screenshot. Appending a ZSphere to the model gives us the advantage to easily hide subtools or adjust their subdivision level as we need it: Deactivate Transparency and go to Tool Topology| and click on Edit Topology. Activate Symmetry. Pick the SkinShade4 material and choose a darker color, so that we can see the orange lines of the retopologizing better. The SkinShade4 material offers very low contrast, which is what we need here. Begin drawing edges by left-clicking on the mesh. They will automatically snap to the underlying high-poly surface. Start out by laying in two circles around the eyes, as explained in the next image Retopologize commands When in retopologize mode, left-clicking adds a new vertex and an edge connecting it to the last selected vertex. If we want to start out from a different vertex, we can select a new starting point by Ctrl + left-clicking. Deleting points can be done by Alt + left-clicking. When you miss the point you wanted to delete, the last selected one gets deleted instead, so be careful about that.

Ogre 3D: Double Buffering

Packt
25 Nov 2010
5 min read
  OGRE 3D 1.7 Beginner's Guide Create real time 3D applications using OGRE 3D from scratch Easy-to-follow introduction to OGRE 3D Create exciting 3D applications using OGRE 3D Create your own scenes and monsters, play with the lights and shadows, and learn to use plugins Get challenged to be creative and make fun and addictive games on your own A hands-on do-it-yourself approach with over 100 examples Images         Read more about this book       (For more resources on this subject, see here.) Introduction When a scene is rendered, it isn't normally rendered directly to the buffer, which is displayed on the monitor. Normally, the scene is rendered to a second buffer and when the rendering is finished, the buffers are swapped. This is done to prevent some artifacts, which can be created if we render to the same buffer, which is displayed on the monitor. The FrameListener function, frameRenderingQueued, is called after the scene has been rendered to the back buffer, the buffer which isn't displayed at the moment. Before the buffers are swapped, the rendering result is already created but not yet displayed. Directly after the frameRenderingQueued function is called, the buffers get swapped and then the application gets the return value and closes itself. That's the reason why we see an image this time. Now, we will see what happens when frameRenderingQueued also returns true. Time for action – returning true in the frameRenderingQueued function Once again we modify the code to test the behavior of the Frame Listener Change frameRenderingQueued to return true: bool frameRenderingQueued (const Ogre::FrameEvent& evt){ std::cout << «Frame queued» << std::endl; return true;} Compile and run the application. You should see Sinbad for a short period of time before the application closes, and the following three lines should be in the console output: Frame started Frame queued Frame ended What just happened? Now that the frameRenderingQueued handler returns true, it will let Ogre 3D continue to render until the frameEnded handler returns false. Like in the last example, the render buffers were swapped, so we saw the scene for a short period of time. After the frame was rendered, the frameEnded function returned false, which closes the application and, in this case, doesn't change anything from our perspective. Time for action – returning true in the frameEnded function Now let's test the last of three possibilities. Change frameRenderingQueued to return true: bool frameEnded (const Ogre::FrameEvent& evt){ std::cout << «Frame ended» << std::endl; return true;} Compile and run the application. You should see the scene with Sinbad and an endless repetition of the following three lines: Frame started Frame queued Frame ended What just happened? Now, all event handlers returned true and, therefore, the application will never be closed; it would run forever as long as we aren't going to close the application ourselves. Adding input We have an application running forever and have to force it to close; that's not neat. Let's add input and the possibility to close the application by pressing Escape. Time for action – adding input Now that we know how the FrameListener works, let's add some input. 
We need to include the OIS header file to use OIS: #include "OISOIS.h" Remove all functions from the FrameListener and add two private members to store the InputManager and the Keyboard: OIS::InputManager* _InputManager;OIS::Keyboard* _Keyboard; The FrameListener needs a pointer to the RenderWindow to initialize OIS, so we need a constructor, which takes the window as a parameter: MyFrameListener(Ogre::RenderWindow* win){ OIS will be initialized using a list of parameters, we also need a window handle in string form for the parameter list; create the three needed variables to store the data: OIS::ParamList parameters;unsigned int windowHandle = 0;std::ostringstream windowHandleString; Get the handle of the RenderWindow and convert it into a string: win->getCustomAttribute("WINDOW", &windowHandle);windowHandleString << windowHandle; Add the string containing the window handle to the parameter list using the key "WINDOW": parameters.insert(std::make_pair("WINDOW", windowHandleString.str())); Use the parameter list to create the InputManager: _InputManager = OIS::InputManager::createInputSystem(parameters); With the manager create the keyboard: _Keyboard = static_cast<OIS::Keyboard*>(_InputManager->createInputObject( OIS::OISKeyboard, false )); What we created in the constructor, we need to destroy in the destructor: ~MyFrameListener(){ _InputManager->destroyInputObject(_Keyboard); OIS::InputManager::destroyInputSystem(_InputManager);} Create a new frameStarted function, which captures the current state of the keyboard, and if Escape is pressed, it returns false; otherwise, it returns true: bool frameStarted(const Ogre::FrameEvent& evt){ _Keyboard->capture(); if(_Keyboard->isKeyDown(OIS::KC_ESCAPE)) { return false; } return true;} The last thing to do is to change the instantiation of the FrameListener to use a pointer to the render window in the startup function: _listener = new MyFrameListener(window);_root->addFrameListener(_listener); Compile and run the application. You should see the scene and now be able to close it by pressing the Escape key. What just happened? We added input processing capabilities to our FrameListener but we didn't use any example classes, except our own versions.

Creating Man-made Materials in Blender 2.5

Packt
28 Jan 2011
10 min read
  Blender 2.5 Materials and Textures Cookbook Over 80 great recipes to create life-like Blender objects Master techniques to create believable natural surface materials Take your models to the next level of realism or artistic development by using the material and texture settings within Blender 2.5. Take the hassle out of material simulation by applying faster and more efficient material and texture strategies Part of Packt's Cookbook series: Each recipe is a logically organized according to the surface types with clear instructions and explanations on how these recipes can be applied across a range of materials including complex materials such as oceans, smoke, fire and explosions.        Creating a slate roof node material that repeats but with ultimate variety Man-made materials will often closely resemble their natural surface attributes. Slate is a natural material that is used in many building projects. Its tendency to shear into natural slices makes it an ideal candidate for roofing. However, in its man-made form it is much more regularly shaped and graded to give a nice repeating pattern on a roof surface. That doesn't mean that there is no irregularity in either the edges or surface of manufactured slate tiles. In fact, architects often use this irregularity to add character and drama to a roof. Repeat patterns in a 3D suite like Blender can be extremely difficult to control. If repeats become too obvious, particularly when surface and edges are supposed to be random, it can ruin the illusion. Fortunately, we will be employing Blender controls to add randomness to a repeated image representing the tiled pattern of the roof. Of course, slates like any building material need to be fixed to a roof. This is usually achieved with nails. After time, these metal joiners will age and either rust or channel water to add streaks and rust lines across the slate, often emphasizing the slope of the roof. All these secondary events will add character and dimension to a material simulation. However, such a material need not be complex to achieve a believable and stimulating result. Getting ready The preparation for this recipe could not be simpler. The modeling required to produce the mesh roof is no more than a simple plane created at the origin and rotated in its Y axis to 30°. The plane can be at the default size of one blender unit and should have no more than four vertices. That's just about the simplest model you can have in a 3D graphics program. Position the camera so that you have a reasonably close-up view as shown in the following image: The default lights will be fine for this simulation. But, you are welcome to place lights as you wish. Please bear in mind that a slate roof tends to be quite a dark material. So, if test renders appear too dark, raise the light energy until a reasonable render can be produced. You can also turn off Raytrace render and Ambient Occlusion, if it has been previously set, as they are not required for this material. This will save considerable time in rendering test images. Save your blendfile as slate-roof-01.blend. You will also need to either create or download a small tileable image to represent the pattern of the slate roof. Instructions are given on how to create it within the recipe but a downloadable version is available from the Packtpub website. How to do it... We need to create an image of the smallest repeatable pattern of our slates. This can act both as a bump map and also to mask and apply color variation to the material. 
The image is very simple and is based on the shape and dimension of a standard rectangular slate. You will see later how the shape can be changed to represent other slate patterns. This was created in GIMP, although any reasonable paint package could be used. Here are the steps to aid you in creating one yourself: Create a new image with size 260 x 420 pixels. I will show later how you can scale an image to give better proportions for more efficient use within Blender. Either place guides or create a grid to sub-divide the rectangle into four sections. In the top half of the rectangle, create a blend fill from black at the top to white at the middle. Do the same for the bottom half of the rectangle. Create a new layer and draw a black line, of three pixels' width, from the middle of the top rectangle section to divide the top rectangle into two. Draw black lines of the same thickness on each side of the whole rectangle. If you used a grid, you should find that one of these verticals is two pixels' width and the other one. Obviously, when this image is tiled, the black lines will all appear as equal in thickness. Finally, create another blend fill from the bottom of each rectangle from black to white upwards about ten pixels. Save your completed image as slate-tile.png to your Blender textures directory. If you want to skip these steps you can download a pre-created one from the Packtpub website. How it works... The image that you want to tile must be carefully designed to hide any seams that might appear when repeated. Most of the major paint packages, such as Photoshop and GIMP, have tools to aid you in that process. However, manual drawing, or editing of an image, will almost always be necessary to create accurate tileable images. Even tiny variations between seams will show up if repeated enough times across a surface. Fortunately, there are techniques available in Blender that will help mask these repeat image shortcomings. Using a tileable texture to add complexity to a surface We will use the tileable texture created in the previous recipe and apply it to a slate roof material in Blender. Reload the slate-roof-01.blend file saved earlier and select the roof mesh object. From the Materials panel, create a new material and name it slate-roof. In the Diffuse tab, set the color selector to R 0.250, G 0.260, and B 0.300. Under Specular tab, change the specular type to Wardiso, with Intensity to 0.400 and Slope to 0.300. The color should stay at the default white. That's set the general color and specularity for the first material that we will use to start a node material solution for our slate roof shader. Ensure you have a Node Editor window displayed. In the Node Editor, select the material node button and check the Use Nodes checkbox. A blank material node should be displayed connected to an output node. From the Browse ID Data button, on the Material node, select the material previously created named slate-roof. To confirm that the material is loaded into the node, re-select that node by left clicking it. Of course, at the moment, the material is no more than a single color with a soft specular shine. To start turning it into a proper representation of a slate roof, we have to add our tileable texture and set up some bump and color settings to make our simple plane look a little more like a slate roof with many tiles. With the Material node still selected, go to the Texture panel and in the first texture slot, create a new texture of type Image or Movie and name it slate-tile. 
From the Image tab, open the slate-tile.png image you saved earlier. Under Image Mapping/ Extension, select Repeat and set Repeat to X 9 and Y 6. That means the image will be repeated nine times in the X direction and six in the Y of the texture space. In the Influence tab, select Diffuse/Color and set to 0.831. Also, select Geometry/Normal and set to -5.000. Finally, set the Blend type to Darken. Save your work at this point, incrementing your filename number to slate-roof-02.blend. As you can see, a repeated pattern has been stamped on our flat surface with darker colors representing the slate tile separations and a darker top that currently looks like a shadow. This will be corrected in following recipes, along with the obvious clinical precision of each edge. How it works... The surface properties of slate produce a spread of specular highlight when the slate is dry. The best way of simulating that in Blender is to employ a specular shader that can easily generate this type of specular property. The Wardiso specular shader is ideal for this task as it allows a wide range of slope settings from very tight, below 0.100, to very widely spread, 0.400. This is different from the other specular shaders that use a hardness setting to vary the highlight spread. However, you will notice that other specular shader types produce a narrower range than the Wardiso shader. In our slate example, this particular shader provides the ideal solution. Man-made materials are often made from repeated patterns. This is often because it's easier to manufacture objects as components and bring them together when building thus producing patterns. Utilizing simple tileable images to represent those shapes is an extremely efficient way of creating a Blender material simulation. Blender provides some really useful tools to ease the process, using repeats within a material as well as techniques to add variety and drama to a material. Repeat is a really useful way of tiling an image any number of times across the object's texture space. In our example, we were applying the image texture to the object's generated texture space. That's basically the physical dimensions of the object. You can find out what the texture space looks like for any object by selecting the Object panel and choosing the Display tab and checking Texture Space. An orange dotted border, representing the texture space, will surround the object. The plane object used for this material simulation is a square rectangle. If you were to scale the plane disproportionately, the texture would distort accordingly. If we were using this material for a roof simulation, where the scale may not be square, we may need to alter the repeat settings in the texture to match the proportions of the roof rectangle. In our recipe, we started with a one blender unit square mesh then set the repeat pattern to X 9 and Y 6. The repeat settings have to be integer numbers so it may be necessary to calculate the nearest repeat numbers for the image you want to use. In our example, we didn't need to be absolutely accurate. Slates, after all, are often quite variable in size between buildings. If you want to be absolutely accurate, scale your original mesh in Object mode to match to the image proportions. So, in our example, we could have scaled the plane to 2.60 (or 0.26) blender units on its X axis and 4.20 (or 0.42) on its Y axis, and then designed our repeats from that point.
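If you ever need to reproduce this material setup programmatically, for example across many roof objects, the following sketch approximates the same values through Blender's Python API for the internal renderer. Treat it as an illustration only: the image path is an assumption, and the node-based part of the recipe is not recreated here.

import bpy

# Base material: diffuse color plus a Wardiso specular shader.
mat = bpy.data.materials.new("slate-roof")
mat.diffuse_color = (0.250, 0.260, 0.300)
mat.specular_shader = 'WARDISO'
mat.specular_intensity = 0.400
mat.specular_slope = 0.300

# Image texture tiled 9 x 6 across the texture space.
img = bpy.data.images.load("//textures/slate-tile.png")   # path is an assumption
tex = bpy.data.textures.new("slate-tile", type='IMAGE')
tex.image = img
tex.extension = 'REPEAT'
tex.repeat_x = 9
tex.repeat_y = 6

# First texture slot with the influence values used in the recipe.
slot = mat.texture_slots.add()
slot.texture = tex
slot.use_map_color_diffuse = True
slot.diffuse_color_factor = 0.831
slot.blend_type = 'DARKEN'
slot.use_map_normal = True
slot.normal_factor = -5.0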
3D Vector Drawing and Text with Papervision3D: Part 1

Packt
20 Oct 2009
4 min read
The main part of this article is dedicated to a library called VectorVision that was incorporated into Papervision3D. After discussing the classes of this library, we will take a look at the Lines3D class in the next part that also enables you to draw 3D lines. This class was already a part of Papervision3D before VectorVision was incorporated. VectorVision: 3D vector text and drawing VectorVision is a library written in ActionScript that allows you to render vector graphics in Papervision3D and add a 3D perspective to them. The project started as a separate library that you could download and use as an add-on. However, it was fully integrated in Papervision3D in June 2008. Being able to use vector shapes and text theoretically means that you could draw any kind of vector graphic and give it a 3D perspective. This article will focus on the features that are implemented in Papervision3D: Creating 3D vector text Drawing 3D vector shapes such as lines, circles, and rectangles Keep in mind that 3D letters can be seen as vector shapes too, just like lines, circles, and rectangles. The above distinction is made based on how VectorVision is implemented in Papervision3D. Some classes specifically deal with creating 3D text, whereas others enable you to create vector shapes. Creating a template class for the 3D text examples Because the 3D text examples we are about to see have a lot in common, we will use a template class that looks as follows: package{ import flash.events.Event; import org.papervision3d.materials.special.Letter3DMaterial; import org.papervision3d.typography.Font3D; import org.papervision3d.typography.Text3D; import org.papervision3d.typography.fonts.HelveticaBold; import org.papervision3d.view.BasicView; public class Text3DTemplate extends BasicView { private var material:Letter3DMaterial; private var font3D:Font3D; private var text3D:Text3D; private var easeOut:Number = 0.6; private var reachX:Number = 0.5 private var reachY:Number = 0.5 private var reachZ:Number = 0.5; public function Text3DTemplate() { stage.frameRate = 40; init(); startRendering(); } private function init():void { //code to be added } override protected function onRenderTick(event:Event = null):void { var xDist:Number = mouseX - stage.stageWidth * 0.5; var yDist:Number = mouseY - stage.stageHeight * 0.5; camera.x += (xDist - camera.x * reachX) * easeOut; camera.y += (yDist - camera.y * reachY) * easeOut; camera.z += (-mouseY * 2 - camera.z ) * reachZ; super.onRenderTick(); } }} We added some class properties that are used in the render method, where we added code to move the camera when the mouse moves. Also, we imported four classes and added three class properties that will enable us to create 3D text. How to create and add 3D text Let's see how we can create 3D vector text that looks crisp and clear. The general process of creating and displaying 3D text looks as follows: Create material with Letter3DMaterial. Create a Font3D instance. Create a Text3D instance, passing the text, font, and material to it, and add it to the scene or to another do3D. We will create an example that demonstrates several features of Text3D: Multiline Alignment Outlines All the following code should be added inside the init() method. Before we instantiate the classes that we need in order to display 3D text, we assign a text string to a local variable. var text:String = "Multiline 3D textnwith letter spacing,nline spacing,nand alignment ;-)"; Now, let's create a text material, font, and text. 
First, we instantiate Letter3DMaterial, which resides in the org.papervision3d.materials.special package: material = new Letter3DMaterial(0x000000); The constructor of this class takes two optional parameters.

SpriteKit Framework and Physics Simulation

Packt
24 Mar 2015
15 min read
In this article by Bhanu Birani, author of the book iOS Game Programming Cookbook, you will learn about the SpriteKit game framework and about the physics simulation. (For more resources related to this topic, see here.) Getting started with the SpriteKit game framework With the release of iOS 7.0, Apple has introduced its own native 2D game framework called SpriteKit. SpriteKit is a great 2D game engine, which has support for sprite, animations, filters, masking, and most important is the physics engine to provide a real-world simulation for the game. Apple provides a sample game to get started with the SpriteKit called Adventure Game. The download URL for this example project is http://bit.ly/Rqaeda. This sample project provides a glimpse of the capability of this framework. However, the project is complicated to understand and for learning you just want to make something simple. To have a deeper understanding of SpriteKit-based games, we will be building a bunch of mini games in this book. Getting ready To get started with iOS game development, you have the following prerequisites for SpriteKit: You will need the Xcode 5.x The targeted device family should be iOS 7.0+ You should be running OS X 10.8.X or later If all the above requisites are fulfilled, then you are ready to go with the iOS game development. So let's start with game development using iOS native game framework. How to do it... Let's start building the AntKilling game. Perform the following steps to create your new SpriteKit project: Start your Xcode. Navigate to File | New | Project.... Then from the prompt window, navigate to iOS | Application | SpriteKit Game and click on Next. Fill all the project details in the prompt window and provide AntKilling as the project name with your Organization Name, device as iPhone, and Class Prefix as AK. Click on Next. Select a location on the drive to save the project and click on Create. Then build the sample project to check the output of the sample project. Once you build and run the project with the play button, you should see the following on your device: How it works... The following are the observations of the starter project: As you have seen, the sample project of SpriteKit plays a label with a background color. SpriteKit works on the concept of scenes, which can be understood as the layers or the screens of the game. There can be multiple scenes working at the same time; for example, there can be a gameplay scene, hud scene, and the score scene running at the same time in the game. Now we can look into the project for more detail arrangements of the starter project. The following are the observations: In the main directory, you already have one scene created by default called AKMyScene. Now click on AKMyScene.m to explore the code to add the label on the screen. You should see something similar to the following screenshot: Now we will be updating this file with our code to create our AntKilling game in the next sections. We have to fulfill a few prerequisites to get started with the code, such as locking the orientation to landscape as we want a landscape orientation game. To change the orientation of the game, navigate to AntKilling project settings | TARGETS | General. You should see something similar to the following screenshot: Now in the General tab, uncheck Portrait from the device orientation so that the final settings should look similar to the following screenshot: Now build and run the project. You should be able to see the app running in landscape orientation. 
The bottom-right corner of the screen shows the number of nodes with the frame rate. Introduction to physics simulation We all like games that have realistic effects and actions. In this article we will learn about the ways to make our games more realistic. Have you ever wondered how to provide realistic effect to game objects? It is physics that provides a realistic effect to the games and their characters. In this article, we will learn how to use physics in our games. While developing the game using SpriteKit, you will need to change the world of your game frequently. The world is the main object in the game that holds all the other game objects and physics simulations. We can also update the gravity of the gaming world according to our need. The default world gravity is 9.8, which is also the earth's gravity, World gravity makes all bodies fall down to the ground as soon as they are created. More about SpriteKit can be explored using the following link: https://developer.apple.com/library/ios/documentation/GraphicsAnimation/Conceptual/SpriteKit_PG/Physics/Physics.html Getting ready The first task is to create the world and then add bodies to it, which can interact according to the principles of physics. You can create game objects in the form of sprites and associate physics bodies to them. You can also set various properties of the object to specify its behavior. How to do it... In this section, we will learn about the basic components that are used to develop games. We will also learn how to set game configurations, including the world settings such as gravity and boundary. The initial step is to apply the gravity to the scene. Every scene has a physics world associated with it. We can update the gravity of the physics world in our scene using the following line of code: self.physicsWorld.gravity = CGVectorMake(0.0f, 0.0f); Currently we have set the gravity of the scene to 0, which means the bodies will be in a state of free fall. They will not experience any force due to gravity in the world. In several games we also need to set a boundary to the games. Usually, the bounds of the view can serve as the bounds for our physics world. The following code will help us to set up the boundary for our game, which will be as per the bounds of our game scene: // 1 Create a physics body that borders the screenSKPhysicsBody* gameBorderBody = [SKPhysicsBody   bodyWithEdgeLoopFromRect:self.frame];// 2 Set physicsBody of scene to gameBorderBodyself.physicsBody = gameBorderBody;// 3 Set the friction of that physicsBody to 0self.physicsBody.friction = 0.0f; In the first line of code we are initializing a SKPhysicsBody object. This object is used to add the physics simulation to any SKSpriteNode. We have created the gameBorderBody as a rectangle with the dimensions equal to the current scene frame. Then we assign that physics object to the physicsBody of our current scene (every SKSpriteNode object has the physicsBody property through which we can associate physics bodies to any node). After this we update the physicsBody.friction. This line of code updates the friction property of our world. The friction property defines the friction value of one physics body with another physics body. Here we have set this to 0, in order to make the objects move freely, without slowing down. Every game object is inherited from the SKSpriteNode class, which allows the physics body to hold on to the node. 
Let us take an example and create a game object using the following code: // 1SKSpriteNode* gameObject = [SKSpriteNode   spriteNodeWithImageNamed: @"object.png"];gameObject.name = @"game_object";gameObject.position = CGPointMake(self.frame.size.width/3,   self.frame.size.height/3);[self addChild:gameObject]; // 2gameObject.physicsBody = [SKPhysicsBody   bodyWithCircleOfRadius:gameObject.frame.size.width/2];// 3gameObject.physicsBody.friction = 0.0f; We are already familiar with the first few lines of code wherein we are creating the sprite reference and then adding it to the scene. Now in the next line of code, we are associating a physics body with that sprite. We are initializing the circular physics body with radius and associating it with the sprite object. Then we can update various other properties of the physics body such as friction, restitution, linear damping, and so on. The physics body properties also allow you to apply force. To apply force you need to provide the direction where you want to apply force. [gameObject.physicsBody applyForce:CGVectorMake(10.0f,   -10.0f)]; In the code we are applying force in the bottom-right corner of the world. To provide the direction coordinates we have used CGVectorMake, which accepts the vector coordinates of the physics world. You can also apply impulse instead of force. Impulse can be defined as a force that acts for a specific interval of time and is equal to the change in linear momentum produced over that interval. [gameObject.physicsBody applyImpulse:CGVectorMake(10.0f,   -10.0f)]; While creating games, we frequently use static objects. To create a rectangular static object we can use the following code: SKSpriteNode* box = [[SKSpriteNode alloc]   initWithImageNamed: @"box.png"];box.name = @"box_object";box.position = CGPointMake(CGRectGetMidX(self.frame),   box.frame.size.height * 0.6f);[self addChild:box];box.physicsBody = [SKPhysicsBody   bodyWithRectangleOfSize:box.frame.size];box.physicsBody.friction = 0.4f;// make physicsBody staticbox.physicsBody.dynamic = NO; So all the code is the same except one special property, which is dynamic. By default this property is set to YES, which means that all the physics bodies will be dynamic by default and can be converted to static after setting this Boolean to NO. Static bodies do not react to any force or impulse. Simply put, dynamic physics bodies can move while the static physics bodies cannot . Integrating physics engine with games From this section onwards, we will develop a mini game that will have a dynamic moving body and a static body. The basic concept of the game will be to create an infinite bouncing ball with a moving paddle that will be used to give direction to the ball. Getting ready... To develop a mini game using the physics engine, start by creating a new project. Open Xcode and go to File | New | Project and then navigate to iOS | Application | SpriteKit Game. In the pop-up screen, provide the Product Name as PhysicsSimulation, navigate to Devices | iPhone and click on Next as shown in the following screenshot: Click on Next and save the project on your hard drive. Once the project is saved, you should be able to see something similar to the following screenshot: In the project settings page, just uncheck the Portrait from Device Orientation section as we are supporting only landscape mode for this game. Graphics and games cannot be separated for long; you will also need some graphics for this game. Download the graphics folder, drag it and import it into the project. 
Make sure that the Copy items into destination group's folder (if needed) is checked and then click on Finish button. It should be something similar to the following screenshot: How to do it... Now your project template is ready for a physics-based mini game. We need to update the game template project to get started with code game logic. Take the following steps to integrate the basic physics object in the game. Open the file GameScene.m .This class creates a scene that will be plugged into the games. Remove all code from this class and just add the following function: -(id)initWithSize:(CGSize)size { if (self = [super initWithSize:size]) { SKSpriteNode* background = [SKSpriteNode spriteNodeWithImageNamed:@"bg.png"]; background.position = CGPointMake(self.frame.size.width/2, self.frame.size.height/2); [self addChild:background]; } } This initWithSize method creates an blank scene of the specified size. The code written inside the init function allows you to add the background image at the center of the screen in your game. Now when you compile and run the code, you will observe that the background image is not placed correctly on the scene. To resolve this, open GameViewController.m. Remove all code from this file and add the following function; -(void)viewWillLayoutSubviews {   [super viewWillLayoutSubviews];     // Configure the view.   SKView * skView = (SKView *)self.view;   if (!skView.scene) {       skView.showsFPS = YES;       skView.showsNodeCount = YES;             // Create and configure the scene.       GameScene * scene = [GameScene sceneWithSize:skView.bounds.size];       scene.scaleMode = SKSceneScaleModeAspectFill;             // Present the scene.       [skView presentScene:scene];   }} To ensure that the view hierarchy is properly laid out, we have implemented the viewWillLayoutSubviews method. It does not work perfectly in viewDidLayoutSubviews method because the size of the scene is not known at that time. Now compile and run the app. You should be able to see the background image correctly. It will look something similar to the following screenshot: So now that we have the background image in place, let us add gravity to the world. Open GameScene.m and add the following line of code at the end of the initWithSize method: self.physicsWorld.gravity = CGVectorMake(0.0f, 0.0f); This line of code will set the gravity of the world to 0, which means there will be no gravity. Now as we have removed the gravity to make the object fall freely, it's important to create a boundary around the world, which will hold all the objects of the world and prevent them to go off the screen. Add the following line of code to add the invisible boundary around the screen to hold the physics objects: // 1 Create a physics body that borders the screenSKPhysicsBody* gameborderBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:self.frame];// 2 Set physicsBody of scene to borderBodyself.physicsBody = gameborderBody;// 3 Set the friction of that physicsBody to 0self.physicsBody.friction = 0.0f; In the first line, we are are creating an edge-based physics boundary object, with a screen size frame. This type of a physics body does not have any mass or volume and also remains unaffected by force and impulses. Then we associate the object with the physics body of the scene. In the last line we set the friction of the body to 0, for a seamless interaction between objects and the boundary surface. 
The final file should look something like the following screenshot: Now we have our surface ready to hold the physics world objects. Let us create a new physics world object using the following line of code: // 1SKSpriteNode* circlularObject = [SKSpriteNode spriteNodeWithImageNamed: @"ball.png"];circlularObject.name = ballCategoryName;circlularObject.position = CGPointMake(self.frame.size.width/3, self.frame.size.height/3);[self addChild:circlularObject]; // 2circlularObject.physicsBody = [SKPhysicsBody bodyWithCircleOfRadius:circlularObject.frame.size.width/2];// 3circlularObject.physicsBody.friction = 0.0f;// 4circlularObject.physicsBody.restitution = 1.0f;// 5circlularObject.physicsBody.linearDamping = 0.0f;// 6circlularObject.physicsBody.allowsRotation = NO; Here we have created the sprite and then we have added it to the scene. Then in the later steps we associate the circular physics body with the sprite object. Finally, we alter the properties of that physics body. Now compile and run the application; you should be able to see the circular ball on the screen as shown in screenshot below: The circular ball is added to the screen, but it does nothing. So it's time to add some action in the code. Add the following line of code at the end of the initWithSize method: [circlularObject.physicsBody applyImpulse:CGVectorMake(10.0f, -10.0f)]; This will apply the force on the physics body, which in turn will move the associated ball sprite as well. Now compile and run the project. You should be able to see the ball moving and then collide with the boundary and bounce back, as there is no friction between the boundary and the ball. So now we have the infinite bouncing ball in the game. How it works… There are several properties used while creating physics bodies to define their behavior in the physics world. The following is a detailed description of the properties used in the preceding code: Restitution property defines the bounciness of an object. Setting the restitution to 1.0f, means that the ball collision will be perfectly elastic with any object. This means that the ball will bounce back with a force equal to the impact. Linear Damping property allows the simulation of fluid or air friction. This is accomplished by reducing the linear velocity of the body. In our case, we do not want the ball to slow down while moving and hence we have set the restitution to 0.0f. There's more… You can read about all these properties in detail at Apple's developer documentation: https://developer.apple.com/library/IOs/documentation/SpriteKit/Reference/SKPhysicsBody_Ref/index.html. Summary In this article, you have learned about the SpriteKit game framework, how to create a simple game using SpriteKit framework, physics simulation, and also how to integrate physics engine with games. Resources for Article: Further resources on this subject: Code Sharing Between iOS and Android [article] Linking OpenCV to an iOS project [article] Interface Designing for Games in iOS [article]

Integrating Direct3D with XAML and Windows 8.1

Packt
16 Jan 2014
9 min read
(For more resources related to this topic, see here.) Preparing the swap chain for a Windows Store app Getting ready To target Windows 8.1, we need to use Visual Studio 2013. For the remainder of the article, we will assume that these can be located upon navigating to .ExternalBinDirectX11_2-Signed-winrt under the solution location. How to do it… We'll begin by creating a new class library and reusing a majority of the Common project used throughout the book so far, then we will create a new class D3DApplicationWinRT inheriting from D3DApplicationBase to be used as a starting point for our Windows Store app's render targets. Within Visual Studio, create a new Class Library (Windows Store apps) called Common.WinRT. New Project dialog to create a class library project for Windows Store apps Add references to the following SharpDX assemblies: SharpDX.dll, SharpDX.D3DCompiler.dll, SharpDX.Direct2D1.dll, SharpDX.Direct3D11.dll, and SharpDX.DXGI within .ExternalBinDirectX11_2-Signed-winrt. Right-click on the new project; navigate to Add | Existing item... ; and select the following files from the existing Common project: D3DApplicationBase.cs, DeviceManager.cs, Mesh.cs, RendererBase.cs, and HLSLFileIncludeHandlers.hlsl, and optionally, FpsRenderer.cs and TextRenderer.cs. Instead of duplicating the files, we can choose to Add As Link within the file selection dialog, as shown in the following screenshot: Files can be added as a link instead of a copy Any platform-specific code can be wrapped with a check for the NETFX_CORE definition, as shown in the following snippet: #if NETFX_CORE ...Windows Store app code #else ...Windows Desktop code #endif Add a new C# abstract class called D3DApplicationWinRT. // Implements support for swap chain description for // Windows Store apps public abstract class D3DApplicationWinRT : D3DApplicationBase { ... } In order to reduce the chances of our app being terminated to reclaim system resources, we will use the new SharpDX.DXGI.Device3.Trim function whenever our app is suspended (native equivalent is IDXGIDevice3::Trim). The following code shows how this is done: public D3DApplicationWinRT() : base() { // Register application suspending event Windows.ApplicationModel.Core .CoreApplication.Suspending += OnSuspending; } // When suspending hint that resources may be reclaimed private void OnSuspending(Object sender, Windows.ApplicationModel.SuspendingEventArgs e) { // Retrieve the DXGI Device3 interface from our // existing Direct3D device. using (SharpDX.DXGI.Device3 dxgiDevice = DeviceManager .Direct3DDevice.QueryInterface<SharpDX.DXGI.Device3>()) { dxgiDevice.Trim(); } } The existing D3DApplicationBase.CreateSwapChainDescription function is not compatible with Windows Store apps. Therefore, we will override this and create a SwapChainDescription1 instance that is compatible with Windows Store apps. 
The following code shows the changes necessary: protected override SharpDX.DXGI.SwapChainDescription1 CreateSwapChainDescription() { var desc = new SharpDX.DXGI.SwapChainDescription1() { Width = Width, Height = Height, Format = SharpDX.DXGI.Format.B8G8R8A8_UNorm, Stereo = false, SampleDescription.Count = 1, SampleDescription.Quality = 0, Usage = SharpDX.DXGI.Usage.BackBuffer | SharpDX.DXGI.Usage.RenderTargetOutput, Scaling = SharpDX.DXGI.Scaling.Stretch, BufferCount = 2, SwapEffect = SharpDX.DXGI.SwapEffect.FlipSequential, Flags = SharpDX.DXGI.SwapChainFlags.None }; return desc; } We will not be implementing the Direct3D render loop within a Run method for our Windows Store apps—this is because we will use the existing composition events where appropriate. Therefore, we will create a new abstract method Render and provide a default empty implementation of Run. public abstract void Render(); [Obsolete("Use the Render method for WinRT", true)] public override void Run() { } How it works… As of Windows 8.1 and DirectX Graphics Infrastructure (DXGI) 1.3, all Direct3D devices created by our Windows Store apps should call SharpDX.DXGI.Device3.Trim when suspending to reduce the memory consumed by the app and graphics driver. This reduces the chance that our app will be terminated to reclaim resources while it is suspended—although our application should consider destroying other resources as well. When resuming, drivers that support trimming will recreate the resources as required. We have used Windows.ApplicationModel.Core.CoreApplication rather than Windows.UI.Xaml.Application for the Suspending event, so that we can use the class for both an XAML-based Direct3D app as well as one that implements its own Windows.ApplicationModel.Core.IFrameworkView in order to render to CoreWindow directly. Windows Store apps only support a flip presentation model and therefore require that the swap chain is created using a SharpDX.DXGI.SwapEffect.FlipSequential swap effect; this in turn requires between two and 16 buffers specified in the SwapChainDescription1.BufferCount property. When using a flip model, it is also necessary to specify the SwapChainDescription1.SampleDescription property with Count=1 and Quality=0, as multisample anti-aliasing (MSAA) is not supported on the swap chain buffers themselves. A flip presentation model avoids unnecessarily copying the swap-chain buffer and increases the performance. By removing Windows 8.1 specific calls (such as the SharpDX.DXGI.Device3.Trim method), it is also possible to implement this recipe using Direct3D 11.1 for Windows Store apps that target Windows 8. See also The Rendering to a CoreWindow and Rendering to a SwapChainPanel recipes show how to create swap chains for non-XAML and XAML apps respectively NuGet Package Manager can be downloaded from http://visualstudiogallery.msdn.microsoft.com/4ec1526c-4a8c-4a84-b702-b21a8f5293ca You can find the flip presentation model on MSDN at http://msdn.microsoft.com/en-us/library/windows/desktop/hh706346(v=vs.85).aspx Rendering to a CoreWindow The XAML view provider found in the Windows Store app graphics framework cannot be modified. Therefore, when we want to implement the application's graphics completely within DirectX/Direct3D without XAML interoperation, it is necessary to create a basic view provider that allows us to connect our DirectX graphics device resources to the windowing infrastructure of our Windows Store app. 
In this recipe, we will implement a CoreWindow swap-chain target and look at how to hook Direct3D directly to the windowing infrastructure of a Windows Store app, which is exposed by the CoreApplication, IFrameworkViewSource, IFrameworkView, and CoreWindow .NET types. This recipe continues from where we left off with the Preparing the swap chain for Windows Store apps recipe. How to do it… We will first update the Common.WinRT project to support the creation of a swap chain for a Windows Store app's CoreWindow instance and then implement a simple Hello World application. Let's begin by creating a new abstract class within the Common.WinRT project, called D3DAppCoreWindowTarget and descending from the D3DApplicationWinRT class from our previous recipe. The default constructor accepts the CoreWindow instance and attaches a handler to its SizeChanged event. using Windows.UI.Core; using SharpDX; using SharpDX.DXGI; ... public abstract class D3DAppCoreWindowTarget : D3DApplicationWinRT { // The CoreWindow this instance renders to private CoreWindow _window; public CoreWindow Window { get { return _window; } } public D3DAppCoreWindowTarget(CoreWindow window) { _window = window; Window.SizeChanged += (sender, args) => { SizeChanged(); }; } ... } Within our new class, we will now override the CurrentBounds property and the CreateSwapChain function in order to return the correct size and create the swap chain for the associated CoreWindow. // Retrieve current bounds of CoreWindow public override SharpDX.Rectangle CurrentBounds { get { return new SharpDX.Rectangle( (int)_window.Bounds.X, (int)_window.Bounds.Y, (int)_window.Bounds.Width, (int)_window.Bounds.Height); } } // Create the swap chain protected override SharpDX.DXGI.SwapChain1 CreateSwapChain( SharpDX.DXGI.Factory2 factory, SharpDX.Direct3D11.Device1 device, SharpDX.DXGI.SwapChainDescription1 desc) { // Create the swap chain for the CoreWindow using (var coreWindow = new ComObject(_window)) return new SwapChain1(factory, device, coreWindow, ref desc); } This completes the changes to our Common.WinRT project. Next, we will create a Hello World Direct3D Windows Store app rendering directly to the application's CoreWindow instance. Visual Studio 2013 does not provide us with a suitable C# project template to create a non-XAML Windows Store app, so we will begin by creating a new C# Windows Store Blank App (XAML) project. Add references to the SharpDX assemblies: SharpDX.dll, SharpDX.Direct3D11.dll, SharpDX.D3DCompiler.dll, and SharpDX.DXGI.dll. Also, add a reference to the Common.WinRT project. Next, we remove the two XAML files from the project: App.xaml and MainPage.xaml. We will replace the previous application entry point, App.xaml, with a new static class called App. This will house the main entry point for our application where we start our Windows Store app using a custom view provider, as shown in the following snippet: using Windows.ApplicationModel.Core; using Windows.Graphics.Display; using Windows.UI.Core; ... internal static class App { [MTAThread] private static void Main() { var viewFactory = new D3DAppViewProviderFactory(); CoreApplication.Run(viewFactory); } // The custom view provider factory class D3DAppViewProviderFactory : IFrameworkViewSource { public IFrameworkView CreateView() { return new D3DAppViewProvider(); } } class D3DAppViewProvider : SharpDX.Component, IFrameworkView { ... 
    }
}

The implementation of the IFrameworkView members of D3DAppViewProvider allows us to initialize an instance of a concrete descendant of the D3DAppCoreWindowTarget class within SetWindow, and to implement the main application loop in the Run method.

Windows.UI.Core.CoreWindow window;
D3DApp d3dApp; // descends from D3DAppCoreWindowTarget

public void Initialize(CoreApplicationView applicationView) { }

public void Load(string entryPoint) { }

public void SetWindow(Windows.UI.Core.CoreWindow window)
{
    RemoveAndDispose(ref d3dApp);
    this.window = window;
    d3dApp = ToDispose(new D3DApp(window));
    d3dApp.Initialize();
}

public void Uninitialize() { }

public void Run()
{
    // Specify the cursor type as the standard arrow.
    window.PointerCursor = new CoreCursor(CoreCursorType.Arrow, 0);

    // Activate the application window, making it visible
    // and enabling it to receive events.
    window.Activate();

    // Set the DPI and handle changes
    d3dApp.DeviceManager.Dpi = Windows.Graphics.Display
        .DisplayInformation.GetForCurrentView().LogicalDpi;
    Windows.Graphics.Display.DisplayInformation
        .GetForCurrentView().DpiChanged += (sender, args) =>
    {
        d3dApp.DeviceManager.Dpi = Windows.Graphics.Display
            .DisplayInformation.GetForCurrentView().LogicalDpi;
    };

    // Enter the render loop. Note that Windows Store apps
    // should never exit here.
    while (true)
    {
        // Process events incoming to the window.
        window.Dispatcher.ProcessEvents(
            CoreProcessEventsOption.ProcessAllIfPresent);

        // Render frame
        d3dApp.Render();
    }
}

The D3DApp class follows the same structure as our previous recipes throughout the book. There are only a few minor differences, as highlighted in the following code snippet:

class D3DApp : Common.D3DAppCoreWindowTarget
{
    public D3DApp(Windows.UI.Core.CoreWindow window) : base(window)
    {
        this.VSync = true;
    }

    // Private member fields
    ...

    protected override void CreateDeviceDependentResources(
        Common.DeviceManager deviceManager)
    {
        ... create all device resources
        ... and create renderer instances here
    }

    // Render frame
    public override void Render()
    {
        var context = this.DeviceManager.Direct3DContext;

        // OutputMerger targets must be set every frame
        context.OutputMerger.SetTargets(
            this.DepthStencilView, this.RenderTargetView);

        // Clear depth/stencil and render target
        context.ClearDepthStencilView(
            this.DepthStencilView,
            SharpDX.Direct3D11.DepthStencilClearFlags.Depth |
            SharpDX.Direct3D11.DepthStencilClearFlags.Stencil, 1.0f, 0);
        context.ClearRenderTargetView(
            this.RenderTargetView, SharpDX.Color.LightBlue);

        ... setup context pipeline state
        ... perform rendering commands

        // Present the render target
        Present();
    }
}

The following screenshot shows an example of the output using CubeRenderer, with the 2D text overlaid by the TextRenderer class:

Output from the simple Hello World sample using the CoreWindow render target
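
One detail worth noting for this CoreWindow-based app: the suspend handling described in the previous recipe applies here as well. The following is a minimal sketch, not the book's base-class code, of how the Suspending event might be hooked to trim the DXGI device; the DeviceManager.Direct3DDevice property name is an assumption based on the device manager used throughout these recipes.

// Hedged sketch: register for suspension and trim the DXGI device (DXGI 1.3 / Windows 8.1).
// DeviceManager.Direct3DDevice is assumed to expose the SharpDX.Direct3D11.Device1 instance.
Windows.ApplicationModel.Core.CoreApplication.Suspending += (sender, args) =>
{
    // Query the DXGI interface of the Direct3D device and release temporary buffers
    using (var dxgiDevice = d3dApp.DeviceManager.Direct3DDevice
        .QueryInterface<SharpDX.DXGI.Device3>())
    {
        dxgiDevice.Trim();
    }
};
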
Applying Special Effects in 3D Game Development with Microsoft Silverlight 3: Part 2

Packt
18 Nov 2009
6 min read
Time for action – simulating fluids with movement

Your project manager is amazed by the shower of dozens of meteors in the background. However, he wants an even more realistic background. He shows you a water simulation sample using Farseer Physics Engine and asks you to use the wave simulation capabilities offered by this powerful physics simulator to create an asteroid belt.

First, we are going to create a new class to define a fluid model capable of setting the initial parameters and updating a wave controller provided by the physics simulator. We will use Farseer Physics Engine's wave controller to add real-time fluids with movement to our games. The following code is based on the Silverlight water sample offered with the physics simulator. However, in this case, we are not interested in collision detection capabilities because we are going to create an asteroid belt in the background.

Stay in the 3DInvadersSilverlight project. Create a new class—FluidModel. Replace the default using declarations with the following lines of code (we are going to use many classes and interfaces from Farseer Physics Engine):

using System;
using FarseerGames.FarseerPhysics;
using FarseerGames.FarseerPhysics.Controllers;
using FarseerGames.FarseerPhysics.Mathematics;

Add the following public property to hold the WaveController instance:

public WaveController WaveController { get; private set; }

Add the following public properties to define the wave generator parameters:

public float WaveGeneratorMax { get; set; }
public float WaveGeneratorMin { get; set; }
public float WaveGeneratorStep { get; set; }

Add the following constructor without parameters:

public FluidModel()
{
    // Assign the initial values for the wave generator parameters
    WaveGeneratorMax = 0.20f;
    WaveGeneratorMin = -0.15f;
    WaveGeneratorStep = 0.025f;
}

Add the Initialize method to create and configure the WaveController instance using the PhysicsSimulator instance received as a parameter:

public void Initialize(PhysicsSimulator physicsSimulator)
{
    // The wave controller controls how the waves move.
    // It defines how big and how fast the wave is.
    // The wave is represented as a set of points equally spaced
    // horizontally along its width.
    WaveController = new WaveController();
    WaveController.Position = ConvertUnits.ToSimUnits(-20, 5);
    WaveController.Width = ConvertUnits.ToSimUnits(30);
    WaveController.Height = ConvertUnits.ToSimUnits(3);

    // The number of vertices that make up the surface of the wave
    WaveController.NodeCount = 40;

    // Determines how quickly the wave will dissipate
    WaveController.DampingCoefficient = .95f;

    // Establishes how fast the wave algorithm runs (in seconds)
    WaveController.Frequency = .16f;

    // The wave generator parameters simply move an end point of the wave up and down
    WaveController.WaveGeneratorMax = WaveGeneratorMax;
    WaveController.WaveGeneratorMin = WaveGeneratorMin;
    WaveController.WaveGeneratorStep = WaveGeneratorStep;

    WaveController.Initialize();
}

Add the Update method to update the wave controller and refresh the points that draw the wave shapes:

public void Update(TimeSpan elapsedTime)
{
    WaveController.Update((float)elapsedTime.TotalSeconds);
}

What just happened?

We now have a FluidModel class that creates, configures, and updates a WaveController instance according to an associated physics simulator. As we are going to work with different gravitational forces, we will use another, independent physics simulator to work with the FluidModel instance in our game.
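
Before moving on, it may help to see how this class is meant to be used. The following is a minimal sketch (the variable names and the zero-gravity value are assumptions for illustration, not code from the project) showing a second, independent PhysicsSimulator being created for the belt and the FluidModel being driven from the game's update loop:

// Hedged sketch: a separate simulator for the asteroid belt, so that its settings
// do not interfere with the simulator used by the rest of the game.
var wavesSimulator = new PhysicsSimulator(new Vector2(0, 0)); // no gravity for the belt
var beltFluidModel = new FluidModel();
beltFluidModel.Initialize(wavesSimulator);

// Then, on every pass of the game's update loop (elapsedTime is the frame's TimeSpan):
beltFluidModel.Update(elapsedTime);
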
Simulating waves

The wave controller offers many parameters to represent a set of points equally spaced horizontally along the width of one or many waves. The waves can be:

Big or small
Fast or slow
Tall or short

The wave controller's parameters allow us to determine the number of vertices that make up the surface of the wave by assigning a value to its NodeCount property. In this case, we are going to create waves with 40 nodes, and each point is going to be represented by an asteroid:

WaveController.NodeCount = 40;

The Initialize method defines the position, width, height, and other parameters for the wave controller. We have to convert our position values to the simulator's units; thus, we use the ConvertUnits.ToSimUnits method. For example, this line defines the 2D vector for the wave's upper-left corner (X = -20 and Y = 5):

WaveController.Position = ConvertUnits.ToSimUnits(-20, 5);

The best way to understand each parameter is to change its values and run the example with the new values. Using a wave controller, we can create amazing fluids with movement.

Time for action – creating a subclass for a complex asteroid belt

Now, we are going to create a specialized subclass of Actor (Balder.Core.Runtime.Actor) to load, create, and update a fluid with waves. This class will enable us to encapsulate an independent asteroid belt and add it to the game. In this case, it is a 3D character composed of many models (many instances of Mesh).

Stay in the 3DInvadersSilverlight project. Create a new class, FluidWithWaves (a subclass of Actor), using the following declaration:

public class FluidWithWaves : Actor

Replace the default using declarations with the following lines of code (we are going to use many classes and interfaces from Balder, Farseer Physics Engine, and lists):

using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Shapes;
// BALDER
using Balder.Core;
using Balder.Core.Geometries;
using Balder.Core.Math;
using Balder.Core.Runtime;
// FARSEER PHYSICS
using FarseerGames.FarseerPhysics;
using FarseerGames.FarseerPhysics.Collisions;
using FarseerGames.FarseerPhysics.Dynamics;
using FarseerGames.FarseerPhysics.Factories;
using FarseerGames.FarseerPhysics.Mathematics;
// LISTS
using System.Collections.Generic;

Add the following protected variables to hold references to the RealTimeGame and the Scene instances:

protected RealTimeGame _game;
protected Scene _scene;

Add the following private variables to hold the associated FluidModel instance, the collection of points that define the wave, and the list of meshes (asteroids):

private FluidModel _fluidModel;
private PointCollection _points;
private List<Mesh> _meshList;

Add the following constructor with three parameters—the RealTimeGame, the Scene, and the PhysicsSimulator instances:

public FluidWithWaves(RealTimeGame game, Scene scene, PhysicsSimulator physicsSimulator)
{
    _game = game;
    _scene = scene;
    _fluidModel = new FluidModel();
    _fluidModel.Initialize(physicsSimulator);

    int count = _fluidModel.WaveController.NodeCount;
    _points = new PointCollection();
    for (int i = 0; i < count; i++)
    {
        _points.Add(new Point(
            ConvertUnits.ToDisplayUnits(_fluidModel.WaveController.XPosition[i]),
            ConvertUnits.ToDisplayUnits(_fluidModel.WaveController.CurrentWave[i])));
    }
}

Override the LoadContent method to load the meteors' meshes and set their initial positions according to the points that define the wave:

public override void LoadContent()
{
    base.LoadContent();
    _meshList = new List<Mesh>(_points.Count);
    for (int i = 0; i < _points.Count; i++)
    {
        Mesh mesh = _game.ContentManager.Load<Mesh>("meteor.ase");
        _meshList.Add(mesh);
        _scene.AddNode(mesh);
        mesh.Position.X = (float)_points[i].X;
        mesh.Position.Y = (float)_points[i].Y;
        mesh.Position.Z = 0;
    }
}

Override the Update method to update the fluid model and then change the meteors' positions, taking into account the points that define the wave according to the elapsed time:

public override void Update()
{
    base.Update();

    // Update the fluid model with the real-time game elapsed time
    _fluidModel.Update(_game.ElapsedTime);

    _points.Clear();
    for (int i = 0; i < _fluidModel.WaveController.NodeCount; i++)
    {
        Point p = new Point(
            ConvertUnits.ToDisplayUnits(_fluidModel.WaveController.XPosition[i]),
            ConvertUnits.ToDisplayUnits(_fluidModel.WaveController.CurrentWave[i]) +
            ConvertUnits.ToDisplayUnits(_fluidModel.WaveController.Position.Y));
        _points.Add(p);
    }

    // Update the positions for the meshes that define the wave's points
    for (int i = 0; i < _points.Count; i++)
    {
        _meshList[i].Position.X = (float)_points[i].X;
        _meshList[i].Position.Y = (float)_points[i].Y;
    }
}
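
As a rough, hypothetical illustration of how the new actor fits together (the field names here are assumptions, and in the real project Balder's runtime is responsible for calling LoadContent and Update on registered actors), creating the belt could look like this:

// Hedged sketch: constructing the asteroid belt with the game, the scene,
// and the independent waves simulator created earlier.
var asteroidBelt = new FluidWithWaves(this, _scene, wavesSimulator);

// Conceptually, Balder's runtime then drives the actor:
//   asteroidBelt.LoadContent();  // once: loads the 40 meteor meshes along the wave
//   asteroidBelt.Update();       // each frame: advances the wave and repositions the meteors
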

Unity Game Development: Interactions (Part 2)

Packt
18 Nov 2009
14 min read
Opening the outpost

In this section, we will look at two differing approaches for triggering the animation, giving you an overview of two techniques that will both become useful in many other game development situations. In the first approach, we'll use collision detection—a crucial concept to get to grips with as you begin to work on games in Unity. In the second approach, we'll implement a simple ray cast forward from the player.

Approach 1—Collision detection

To begin writing the script that will trigger the door-opening animation and thereby grant access to the outpost, we need to consider which object to write a script for. In game development, it is often more efficient to write a single script for an object that will interact with many other objects, rather than writing many individual scripts that check for a single object. With this in mind, when writing scripts for a game such as this, we will write a script to be applied to the player character in order to check for collisions with many objects in our environment, rather than a script made for each object the player may interact with, which checks for the player.

Creating new assets

Before we introduce any new kind of asset into our project, it is good practice to create a folder in which we will keep assets of that type. In the Project panel, click on the Create button, and choose Folder from the drop-down menu that appears. Rename this folder Scripts by selecting it and pressing Return (Mac) or by pressing F2 (PC).

Next, create a new JavaScript file within this folder simply by leaving the Scripts folder selected and clicking on the Project panel's Create button again, this time choosing JavaScript. By selecting the folder you want a newly created asset to be in before you create it, you will not have to create and then relocate your asset, as the new asset will be made within the selected folder. Rename the newly created script from the default—NewBehaviourScript—to PlayerCollisions.

JavaScript files have the file extension of .js, but the Unity Project panel hides file extensions, so there is no need to attempt to add it when renaming your assets. You can also spot the file type of a script by looking at its icon in the Project panel. JavaScript files have 'JS' written on them, C# files simply have 'C#', and Boo files have an image of a Pacman ghost, a nice little informative pun from the guys at Unity Technologies!

Scripting for character collision detection

To start editing the script, double-click on its icon in the Project panel to launch it in the script editor for your platform—Unitron on Mac, or Uniscite on PC.

Working with OnControllerColliderHit

By default, all new JavaScripts include the Update() function, and this is why you'll find it present when you open the script for the first time. Let's kick off by declaring variables we can utilise throughout the script. Our script begins with the definition of six variables: three private variables and three public member variables. Their purposes are as follows:

doorIsOpen: a private true/false (boolean) type variable acting as a switch for the script to check if the door is currently open.

doorTimer: a private floating-point (decimal-placed) number variable, which is used as a timer so that once our door is open, the script can count a defined amount of time before self-closing the door.

currentDoor: a private GameObject variable used to store the specific door that is currently open.
Should you wish to add more than one outpost to the game at a later date, this ensures that opening one of the doors does not open them all, because the script remembers the most recently hit door.

doorOpenTime: a floating-point (potentially decimal) numeric public member variable, which will be used to allow us to set the amount of time we wish the door to stay open in the Inspector.

doorOpenSound/doorShutSound: two public member variables of data type AudioClip, allowing sound clips to be assigned by drag-and-drop in the Inspector panel.

Define the variables above by writing the following at the top of the PlayerCollisions script you are editing:

private var doorIsOpen : boolean = false;
private var doorTimer : float = 0.0;
private var currentDoor : GameObject;
var doorOpenTime : float = 3.0;
var doorOpenSound : AudioClip;
var doorShutSound : AudioClip;

Next, we'll leave the Update() function briefly while we establish the collision detection function itself. Move down two lines from:

function Update(){
}

And write in the following function:

function OnControllerColliderHit(hit : ControllerColliderHit){
}

This establishes a new function called OnControllerColliderHit. This collision detection function is specifically for use with player characters such as ours, which use the CharacterController component. Its only parameter, hit, is a variable that stores information on any collision that occurs. By addressing the hit variable, we can query information on the collision, including—for starters—the specific game object our player has collided with.

We will do this by adding an if statement to our function. So within the function's braces, add the following if statement:

function OnControllerColliderHit(hit : ControllerColliderHit){
	if(hit.gameObject.tag == "outpostDoor" && doorIsOpen == false){
	}
}

In this if statement, we are checking two conditions: firstly, that the object we hit is tagged with the tag outpostDoor, and secondly, that the variable doorIsOpen is currently set to false. Remember here that two equals symbols (==) are used as a comparative, and the two ampersand symbols (&&) simply say 'and also'. The end result means that if we hit the door's collider that we have tagged, and if we have not already opened the door, then it may carry out a set of instructions.

We have utilized the dot syntax to address the object we are checking for collisions with by narrowing down from hit (our variable storing information on collisions) to gameObject (the object hit) to the tag on that object.

If this if statement is valid, then we need to carry out a set of instructions to open the door. This will involve playing a sound, playing one of the animation clips on the model, and setting our boolean variable doorIsOpen to true. As we are to call multiple instructions—and may need to call these instructions as a result of a different condition later when we implement the ray casting approach—we will place them into our own custom function called OpenDoor. We will write this function shortly, but first, we'll call the function in the if statement we have, by adding:

OpenDoor();

So your full collision function should now look like this:

function OnControllerColliderHit(hit : ControllerColliderHit){
	if(hit.gameObject.tag == "outpostDoor" && doorIsOpen == false){
		OpenDoor();
	}
}

Writing custom functions

Storing sets of instructions you may wish to call at any time should be done by writing your own functions.
Instead of having to write out a set of instructions or "commands" many times within a script, writing your own functions containing the instructions means that you can simply call that function at any time to run that set of instructions again. This also makes tracking mistakes in code—known as Debugging—a lot simpler, as there are fewer places to check for errors.

In our collision detection function, we have written a call to a function named OpenDoor. The brackets after OpenDoor are used to store parameters we may wish to send to the function—using a function's brackets, you may set additional behavior to pass to the instructions inside the function. We'll take a look at this in more depth later in this article under the heading Function Efficiency. Our brackets are empty here, as we do not wish to pass any behavior to the function yet.

Declaring the function

To write the function we need to call, we simply begin by writing:

function OpenDoor(){
}

In between the braces of the function, much in the same way as the instructions of an if statement, we place any instructions to be carried out when this function is called.

Playing audio

Our first instruction is to play the audio clip assigned to the variable called doorOpenSound. To do this, add the following line to your function by placing it within the curly braces, after "{" and before "}":

audio.PlayOneShot(doorOpenSound);

To be certain, it should look like this:

function OpenDoor(){
	audio.PlayOneShot(doorOpenSound);
}

Here we are addressing the Audio Source component attached to the game object this script is applied to (our player character object, First Person Controller), and as such, we'll need to ensure later that we have this component attached; otherwise, this command will cause an error.

Addressing the audio source using the term audio gives us access to four functions: Play(), Stop(), Pause(), and PlayOneShot(). We are using PlayOneShot because it is the best way to play a single instance of a sound, as opposed to playing a sound and then switching clips, which would be more appropriate for continuous music than sound effects. In the brackets of the PlayOneShot command, we pass the variable doorOpenSound, which will cause whatever sound file is assigned to that variable in the Inspector to play. We will download and assign this and the clip for closing the door after writing the script.

Checking door status

One condition of our if statement within our collision detection function was that our boolean variable doorIsOpen must be set to false. As a result, the second command inside our OpenDoor() function is to set this variable to true. This is because the player character may collide with the door several times when bumping into it, and without this boolean, they could potentially trigger the OpenDoor() function many times, causing sound and animation to recur and restart with each collision. By adding in a variable that, when false, allows the OpenDoor() function to run and then disallows it by setting the doorIsOpen variable to true immediately, any further collisions will not re-trigger the OpenDoor() function.

Add the line:

doorIsOpen = true;

to your OpenDoor() function now by placing it between the curly braces after the previous command you just added.

Playing animation

We have already imported the outpost asset package and looked at various settings on the asset before introducing it to the game in this article. One of the tasks performed in the import process was the setting up of animation clips using the Inspector.
By selecting the asset in the Project panel, we specified in the Inspector that it would feature three clips:

idle (a 'do nothing' state)
dooropen
doorshut

In our OpenDoor() function, we'll call upon a named clip using a String of text to refer to it. However, first we'll need to state which object in our scene contains the animation we wish to play. Because the script we are writing is to be attached to the player, we must refer to another object before referring to the animation component. We do this by stating the line:

var myOutpost : GameObject = GameObject.Find("outpost");

Here we are declaring a new variable called myOutpost by setting its type to be a GameObject and then selecting a game object with the name outpost by using GameObject.Find. The Find command selects an object in the current scene by its name in the Hierarchy and can be used as an alternative to using tags.

Now that we have a variable representing our outpost game object, we can use this variable with dot syntax to call animation attached to it by stating:

myOutpost.animation.Play("dooropen");

This simply finds the animation component attached to the outpost object and plays the animation called dooropen. The Play() command can be passed any string of text characters, but this will only work if the animation clips have been set up on the object in question.

Your finished OpenDoor() custom function should now look like this:

function OpenDoor(){
	audio.PlayOneShot(doorOpenSound);
	doorIsOpen = true;
	var myOutpost : GameObject = GameObject.Find("outpost");
	myOutpost.animation.Play("dooropen");
}

Reversing the procedure

Now that we have created a set of instructions that will open the door, how will we close it once it is open? To aid playability, we will not force the player to actively close the door, but instead establish some code that will cause it to shut after a defined time period. This is where our doorTimer variable comes into play. We will begin counting as soon as the door becomes open by adding a value of time to this variable, and then check when this variable has reached a particular value by using an if statement.

Because we will be dealing with time, we need to utilize a function that will constantly update, such as the Update() function we had awaiting us when we created the script earlier. Create some empty lines inside the Update() function by moving its closing curly brace } a few lines down.

Firstly, we should check if the door has been opened, as there is no point in incrementing our timer variable if the door is not currently open. Write in the following if statement to increment the timer variable with time if the doorIsOpen variable is set to true:

if(doorIsOpen){
	doorTimer += Time.deltaTime;
}

Here we check if the door is open — this is a variable that by default is set to false, and will only become true as a result of a collision between the player object and the door. If the doorIsOpen variable is true, then we add the value of Time.deltaTime to the doorTimer variable. Bear in mind that simply writing the variable name as we have done in our if statement's condition is the same as writing doorIsOpen == true.

Time.deltaTime is a value from the Time class that runs independently of the game's frame rate. This is important because your game may be run on varying hardware when deployed, and it would be odd if time slowed down on slower computers and ran faster on better ones.
As a result, when adding time, we can use Time.deltaTime to calculate the time taken to complete the last frame, and with this information we can automatically correct real-time counting.

Next, we need to check whether our timer variable, doorTimer, has reached a certain value, which means that a certain amount of time has passed. We will do this by nesting an if statement inside the one we just added—this will mean that the if statement we are about to add will only be checked if the doorIsOpen condition is valid. Add the following code below the time-incrementing line inside the existing if statement:

if(doorTimer > doorOpenTime){
	shutDoor();
	doorTimer = 0.0;
}

This addition to our code will be constantly checked as soon as the doorIsOpen variable becomes true, and waits until the value of doorTimer exceeds the value of the doorOpenTime variable, which, because we are using Time.deltaTime as an incremental value, will mean three real-time seconds have passed. This is of course unless you change the value of this variable from its default of 3 in the Inspector.

Once the doorTimer has exceeded the value of doorOpenTime, the shutDoor() function is called, and the doorTimer variable is reset to zero so that it can be used again the next time the door is triggered. If this were not included, then the doorTimer would get stuck above that value, and the door would close again as soon as it was opened.

Your completed Update() function should now look like this:

function Update(){
	if(doorIsOpen){
		doorTimer += Time.deltaTime;
		if(doorTimer > doorOpenTime){
			shutDoor();
			doorTimer = 0.0;
		}
	}
}

Now, add the following function called shutDoor() to the bottom of your script. Because it performs largely the same function as OpenDoor(), we will not discuss it in depth. Simply observe that a different animation is called on the outpost and that our doorIsOpen variable gets reset to false so that the entire procedure may start over:

function shutDoor(){
	audio.PlayOneShot(doorShutSound);
	doorIsOpen = false;
	var myOutpost : GameObject = GameObject.Find("outpost");
	myOutpost.animation.Play("doorshut");
}