
How-To Tutorials - Game Development

370 Articles

Papervision3D External Models: Part 1

Packt
18 Nov 2009
22 min read
This article covers the following:

- Modeling for Papervision3D
- Preparing for loading models
- Creating and loading models using Autodesk 3ds Max
- Loading an animation from Autodesk 3ds Max
- Creating and loading models using SketchUp
- Creating and loading models using Blender
- Controlling loaded materials

Let's start off by having a look at some general practices to keep in mind when modeling for Papervision3D.

Modeling for Papervision3D

In this section, we will discuss several techniques that relate to modeling for Papervision3D. As Papervision3D is commonly used for web-based projects, modeling requires a different mindset than modeling for an animated movie, visualization, or game. Most of the techniques discussed relate to improving performance. This section is especially useful for modelers who need to create models for Papervision3D.

Papervision3D Previewer

Papervision3D Previewer is a small program that should be part of every modeler's toolbox. It comes in handy for testing purposes: it allows a modeler to render an exported model in Papervision3D, and it displays statistics that show how the model performs. At the time of writing, this tool was not compatible with Papervision3D 2.1, which could result in small problems when loading external models. Papervision3D Previewer can be downloaded from http://code.google.com/p/mrdoob/wiki/pv3dpreviewer.

Keep your polygon count low

Papervision3D is a cutting-edge technology that brings 3D to the Flash Player, and it does so at an amazing speed relative to the capabilities of the Flash Player. However, the performance of Papervision3D is just a fraction of what can be achieved with hardware-accelerated engines such as those used by console games. Even hardware-accelerated games have a limit to the number of polygons that can be rendered, meaning there is always a compromise between detail and performance. This applies even more to Papervision3D, so always try to model using as few polygons as possible.

Papervision3D users often wonder what the maximum number of triangles is that the Flash Player can handle. There is no generic answer to this question, as performance depends on more factors than just the number of triangles. On average, the total triangle count should be no more than 3,000, which equals 1,500 polygons (remember that one polygon is made of two triangles). Unlike most 3D modeling programs, Papervision3D is triangle based and not polygon based.

Add polygons to resolve artifacts

Although this seems to contradict the previous suggestion to keep your polygon count low, sometimes you need more polygons to get rid of texture distortion or to reduce z-sorting artifacts. Z-sorting artifacts often occur in areas where objects intersect or nearly intersect each other. Subdividing polygons in those areas can make z-sorting more accurate. Often this is done by creating new polygons, of approximately the same size, for the intersecting triangles. There are several approaches to preventing z-sorting problems. Depending on the object you're using, it can be very time consuming to tweak and find the optimal amount and location of polygons. The number of polygons you add in order to solve the problem should still be kept as low as possible. Finding the optimal values for your model will often mean switching back and forth between Papervision3D and the 3D modeling program.

Keep your textures small

Textures used in the 3D modeling tool can be exported along with the model to a format that is readable by Papervision3D.
This is a valuable feature, as the texture will automatically be loaded by Papervision3D. However, the image defined in the 3D authoring tool will be used by Papervision3D exactly as provided. If you choose a 1024 by 1024 pixel image as the texture for, say, the wheels of a car, Papervision3D loads the entire image and draws it on a wheel that appears on screen at a size of perhaps 50 by 50 pixels. There are several problems related to this:

- It's a waste of bandwidth to load such a large image. Loading any image takes time, which should be kept as short as possible.
- It's a waste of processing capacity. Papervision3D needs to resize the image from 1024 by 1024 pixels down to the size it occupies on screen, for example 50 by 50 pixels at most.

Always choose texture dimensions that make sense for the application using them, and keep in mind that they have to be a power of two. This will enable mipmapping and smoothing, which come without extra performance costs.

Use textures that Flash can read

3D modeling programs usually read a variety of image sources. Some even support reading Adobe Photoshop's native file format, PSD. Flash can load only GIF, JPG, or PNG files at run time. Therefore, stick to these formats in your model so that you do not have to convert the textures when the model needs to be exported to Papervision3D.

Use UV maps

If your model is made up of several objects and textures, it's a good idea to use UV mapping, which is the process of unwrapping the model and defining all its textures in one single image. This way we can speed up the initial loading of an application by making one request from Flash to load this image instead of loading dozens of images. UV mapping can also be used to tile or reuse parts of the image. The more parts of the UV-mapped image you can reuse, the more bandwidth you'll save. Just as with normal textures, always try to keep your UV-mapped image as small as possible. In case you have a lot of objects sharing the same UV map and you need a large canvas to unwrap it, be aware that the maximum image size supported by Flash Player 9 is 2880 by 2880 pixels. With the benefits of power-of-two textures in mind, the practical maximum is 2048 by 2048 pixels.

Baking textures

Baking textures is the process of integrating shadows, lighting, reflection, or entire 3D objects into a single image. Most 3D modeling tools support this. It contradicts what has been said about tiling images in UV maps, as baking results in images that usually can be used only once because of the information baked into the texture. However, it can increase the level of realism of your application, just like shading does, but without the performance loss caused by calculating shading in real time. Never use baked textures in combination with a tiling image, as repeated shading, for instance, will result in unnatural-looking renders. Each baked texture therefore needs to be unique, which will cause longer loading times before you can show a scene.

Use recognizable names for objects and materials

It is always a good convention to use recognizable names for all your objects. This goes for the classes, methods, and properties in your code, and also for the names of the 3D objects in your modeling tool. Always think twice before renaming an object that is used by an application. The application might use the name of an object as the identifier to do something with it, for example, making it clickable.
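The power-of-two rule above is easy to automate when preparing textures. As a small language-neutral illustration (not from the original article, and with hypothetical helper names), here is how such a check might be sketched in C++:

```cpp
#include <cstdint>
#include <cstdio>

// A dimension is mipmapping-friendly if it is a power of two:
// exactly one bit set in its binary representation.
bool isPowerOfTwo(uint32_t n) {
    return n != 0 && (n & (n - 1)) == 0;
}

// Round up to the next power of two, e.g. 50 -> 64, 1000 -> 1024.
uint32_t nextPowerOfTwo(uint32_t n) {
    uint32_t p = 1;
    while (p < n) p <<= 1;
    return p;
}

int main() {
    printf("%d\n", isPowerOfTwo(1024)); // 1: valid texture size
    printf("%u\n", nextPowerOfTwo(50)); // 64: smallest valid size for a 50 px wheel
    return 0;
}
```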
When working in a team of modelers and programmers, you really need to make this clear to the modelers, as changing the name of an object can easily break your application.

Size and positioning

Maintaining the same relative size for your modeled objects as you would use for instantiating primitives in your scene is a good convention. Although you could always adjust the scale property of a loaded 3D model, it is very convenient when both Papervision3D and your modeling tool use the same scale. Remember that Papervision3D doesn't have a metric system defining units of a certain value such as meters, yards, or pixels; it just uses units.

Another convention is to position your object or objects at the origin of the 3D space in the modeling tool. Especially when exporting a single object from a 3D modeling tool, it is really helpful if it is located at a position of 0 on all axes. This way you can position the 3D object in Papervision3D using absolute values, without needing to take an offset into account. You can compare this with adding movie clips to your library in Flash: in most cases, it is pretty useful when the elements of a movie clip are centered on their registration point.

Finding the balance between quality and performance

For each project you should try to find the balance between lightweight modeling and quality. Because each project is different in requirements, scale, and quality, there is no rule that applies to all. Keep the tips mentioned in the previous sections in mind and try to be creative with them. If you see a way to optimize your model, then do not hesitate to use it. Before we have a look at how to create and export models for Papervision3D, we will create a basic application for this purpose.

Creating a template class to load models

In order to show an imported 3D model using Papervision3D, we will create a basic application. Based on the orbit example (code bundle, chapter 6; click the following link to download: http://www.packtpub.com/files/code/5722_Code.zip) we create the following class. Each time we load a new model, we just have to alter the init() method. First, have a look at the following base code for this example:

```actionscript
package {
    import flash.events.Event;

    import org.papervision3d.events.FileLoadEvent;
    import org.papervision3d.materials.WireframeMaterial;
    import org.papervision3d.materials.utils.MaterialsList;
    import org.papervision3d.objects.DisplayObject3D;
    import org.papervision3d.objects.primitives.Plane;
    import org.papervision3d.view.BasicView;

    public class ExternalModelsExample extends BasicView {
        private var model:DisplayObject3D;
        private var rotX:Number = 0.1;
        private var rotY:Number = 0.1;
        private var camPitch:Number = 90;
        private var camYaw:Number = 270;
        private var easeOut:Number = 0.1;

        public function ExternalModelsExample() {
            stage.frameRate = 40;
            init();
            startRendering();
        }

        private function init():void {
            model = new Plane(new WireframeMaterial());
            scene.addChild(model);
        }

        private function modelLoaded(e:FileLoadEvent):void {
            //To be added
        }

        override protected function onRenderTick(e:Event=null):void {
            var xDist:Number = mouseX - stage.stageWidth * 0.5;
            var yDist:Number = mouseY - stage.stageHeight * 0.5;
            camPitch += ((yDist * rotX) - camPitch + 90) * easeOut;
            camYaw += ((xDist * rotY) - camYaw + 270) * easeOut;
            camera.orbit(camPitch, camYaw);
            super.onRenderTick();
        }
    }
}
```

We have created a new plane using a wireframe as its material. The plane is assigned to a class property named model, which is of the DisplayObject3D type. In fact, any external model is a do3D.
No matter what type of model we load in the following examples, we can always assign it to the model property, as the classes that we'll use for loading 3D models all inherit from DisplayObject3D. Now that we have created a default application, we are ready to create our first model in 3ds Max, export it, and then import it into Papervision3D.

Creating models in Autodesk 3ds Max and loading them into Papervision3D

Autodesk 3ds Max (also known as 3D Studio Max or 3ds Max) is one of the most widely known commercial 3D modeling and animation programs. It is a good authoring tool to start with, as it can save to two of the file formats Papervision3D can handle. These are:

- COLLADA (extension *.dae): An open source 3D file type, which is supported by Papervision3D. This is the most advanced format and has been supported since Papervision3D's first release. It also supports animations, and it is actually just a plain-text XML file.
- 3D Studio (extension *.3ds): As the name suggests, this is one of the formats that 3ds Max natively supports. Generally speaking, it is also one of the most common formats to save 3D models in.

As of 3ds Max version 9, there is a built-in exporter plugin available that supports exporting to COLLADA. However, you should avoid using it, as at the time of writing the models it exports are not suitable for Papervision3D.

Don't have a 3ds Max license and want to follow along with the examples? Go to www.autodesk.com to download a 30-day trial.

Installing COLLADA Max

An exporter that does produce COLLADA files suitable for Papervision3D is called COLLADA Max. This is a free and open source exporter that works with all versions of 3ds Max 7 and higher. Installing this exporter is easy. Just follow these steps:

1. Make sure you have installed 3ds Max version 7 or higher.
2. Go to http://sourceforge.net/projects/colladamaya/.
3. Click on View all files and select the latest COLLADA Max version. (At the time of writing this is COLLADA Max NextGen 0.9.5, which is still in beta, but it is the only version that works with 3ds Max 2010.)
4. Save the download somewhere on your computer.
5. Run the installer.
6. Click Next until the installer confirms that the exporter is installed.
7. Start 3ds Max and double-check that we can export using the COLLADA or COLLADA NextGen file type, as shown in the following screenshot:

If the only COLLADA export option is Autodesk Collada, then something went wrong during the installation of COLLADA Max, as that is not the exporter that works with Papervision3D. Now that 3ds Max is configured correctly for exporting a file format that can be read by Papervision3D, we will have a look at how to create a basic textured model in 3ds Max and export it to Papervision3D.

Creating the Utah teapot and exporting it for Papervision3D

If you already know how to work with 3ds Max, this step is quite easy. All we need to do is create the Utah teapot, add UV mapping, add a material to it, and export it as COLLADA. However, if you are new to 3ds Max, the following steps need some clarification.

First, we start 3ds Max and create a new scene. The creation of a new scene happens by default on startup. The Utah teapot is one of the objects that comes as a standard primitive in 3ds Max. This means you can select it from the default primitives menu and draw it in one of the viewports. Draw it in the top viewport so that the teapot will not appear rotated over one of its axes.
Give it a Radius of 250 in the properties panel on the right, in order to make it match the units that we'll use in Papervision3D. Position the teapot at the origin of the scene. You can do this by selecting it and changing the x, y, and z properties at the bottom of your screen. You would expect that you need to set all axes to 0, but this is not the case. In this respect, the teapot differs from other primitives in 3ds Max, as its pivot point is located at the bottom of the teapot. Therefore, we need to define a different value for the teapot on the z-axis. Setting it to approximately -175 is a good value.

To map a material to the teapot, we need to define a UV map first. UV mapping is also known as UVW mapping, which is the term 3ds Max uses. Perform the following steps:

1. While the teapot is still selected, go to modify and then select UVW Mapping from the modifier list.
2. Select Shrink Wrap and click Fit in the Alignment section. This will create a UVW map for us.
3. Open the material editor using keyboard shortcut m. Here we define the materials that we use in 3ds Max.
4. Give the new material a name. Replace 01 - Default with a material name of your choice, for example teapotMaterial.
5. Provide a bitmap as the diffuse material. You can do this by clicking on the square button to the right of the Diffuse value within the Blinn Basic Parameters section. A new window called Material/Map Browser will open.
6. Double-click Bitmap to load an external image. Select an image of your choice. We will use teapotMaterial.jpg.
7. The material editor will now update and show the selected material on an illustrative sphere. This is your newly created material, which you need to drag onto the created teapot.
8. The teapot model can now be exported. Depending on the version of the installed COLLADA exporter, select COLLADA or COLLADA NextGen. Note that you should not export using Autodesk Collada, as this exporter doesn't work properly for Papervision3D.
9. Give it a filename of your choice, for example teapot, and hit Save. The exporter window will pop up. The default settings are fine for exporting to Papervision3D, so click OK to save the file.
10. Save the model in the default 3ds Max file format (.max) somewhere on your local disk, so we can use it later when discussing other ways to export this model to Papervision3D.

The model that we have created and exported is now ready to be imported by Papervision3D. Let's take a look at how this works.

Importing the Utah teapot into Papervision3D

To work with the exported Utah teapot, we will use the ExternalModelsExample project that we created previously in this article. Browse to the folder inside your project where you have saved your document class. Create a new folder called assets and copy the created COLLADA file to this folder, along with the image used as the material of the teapot. The class used to load an external COLLADA file is called DAE, so let's import it:

```actionscript
import org.papervision3d.objects.parsers.DAE;
```

This type of class is also known as a parser, as it parses the model from a loaded file. When you have a closer look at the source files of Papervision3D and its model parsers, you will probably find the Collada class. This might be a little confusing, as we use the DAE parser to load a COLLADA file and not the Collada parser. Although you could use either, this article uses the DAE parser exclusively, as it is the more recent class and supports more features, such as animation.
There is no feature that is supported by the Collada parser but not by the DAE parser. Replace all code inside the init() method with the following code, which loads a COLLADA file:

```actionscript
model = new DAE();
model.addEventListener(FileLoadEvent.LOAD_COMPLETE, modelLoaded);
DAE(model).load("assets/teapot.DAE");
```

Because model is defined as a DisplayObject3D class type, we need to cast it to DAE in order to call the load() method. An event listener is defined, waiting for the model to be completely loaded and parsed. Once it is loaded, the modelLoaded() method will be triggered. It is a good convention to add models to the scene only once they are completely loaded. Add the following line of code to the modelLoaded() method:

```actionscript
scene.addChild(model);
```

Publishing this code will result in the teapot with the texture as created in 3ds Max. In real-world applications it is good practice to keep your models in one folder and your textures in another. You might want to organize the files similar to the following structure:

- Models in /assets/models/
- Textures in /assets/textures/

By default, textures are loaded from the same folder the model is loaded from, or optionally from the location specified in the COLLADA file. To include the /assets/textures/ folder we can add a file search path, which tells the parser to look in the specified folder in case the file cannot be found on the default paths. This can be defined as follows:

```actionscript
DAE(model).addFileSearchPath("assets/textures");
```

You can call this method multiple times in order to define multiple folders. Internally, Papervision3D will loop through an array of file paths.

Exporting and importing the Utah teapot in 3ds format

Now that we have seen how to get an object from 3ds Max into a Papervision3D project, we will have a look at another format that is supported by both 3ds Max and Papervision3D. This format is called 3D Studio and uses the 3ds extension. It is one of the established 3D file formats and is supported by most 3D modeling tools. Exporting and importing are very similar to COLLADA. Let's first export the file to the 3D Studio format:

1. Open the Utah teapot, which we modeled earlier in this article.
2. Leave the model as it is, and go straight to export. This time we select 3D Studio (*.3DS) as the file type.
3. Save it into your project folder and name it teapot.
4. Click OK when asked whether to preserve Max's texture coordinates.

If your model uses teapotMaterial.jpg, or any image with more than eight characters in its filename, the exporter will output a warning. You can close this warning, but you need to be aware of the message. It says that the bitmap filename is a non-8.3 filename, that is, a filename of at most 8 characters plus a 3-character extension. The 3D Studio format is an old format, released at the time when there was a DOS version of 3ds Max. Back then it was an OS naming convention to use short filenames, known as 8.3 filenames. This convention still applies to the 3D Studio format for the sake of backward compatibility. Therefore, the reference to the bitmap has been renamed inside the exported 3D Studio file. Because the export changed only the internal reference to the bitmap filename and did not affect the file it refers to, we need to create a file matching this renamed reference. Otherwise, Papervision3D won't be able to find the image.
In this case we need to create a version of the image called teapotMa.jpg. Save this file in the same folder as the exported 3D Studio file. As you can see, it is very easy to export a model from 3ds Max to a format Papervision3D can read. Modeling the 3D object is definitely the hardest and most time-consuming part, simply because creating models takes a lot of time. Loading the model into Papervision3D is just as easy as exporting it. First, copy the 3D Studio file plus the renamed image to the assets folder of your project. We can then alter the document class in order to load the 3ds file. The class that is used to parse a 3D Studio file is called Max3DS and needs to be imported:

```actionscript
import org.papervision3d.objects.parsers.Max3DS;
```

In the init() method, replace or comment out the code that loads the COLLADA model from our previous example, and add the following:

```actionscript
model = new Max3DS();
model.addEventListener(FileLoadEvent.LOAD_COMPLETE, modelLoaded);
Max3DS(model).load("assets/teapot.3ds", null, "./assets/");
```

As the first parameter of the load method, we pass a file reference to the model we want to load. The second parameter defines a materials list, which we will not use for this example. The third and final parameter defines the texture folder. This folder is relative to the location of the published SWF. Note that this works slightly differently than the DAE parser, which loads referenced images from the path relative to the folder in which the COLLADA file is located, or from paths specified with the addFileSearchPath() method.

Publish the code and you'll see the same teapot; however, this time it's using the 3D Studio file format as its source.

Importing animated models

The teapot is a static model that we exported from a 3D program and loaded into Papervision3D. It is also possible to load animated models, which contain one or multiple animations. 3ds Max is one of the programs in which you can create an animation for use in Papervision3D. Animating doesn't require any additional steps: you can just create the animation and export it. This also goes for other modeling tools that support exporting animations to COLLADA. For the sake of simplicity, this example will make use of a model that is already animated in 3ds Max. The model contains two animations, which together make up one long animation on a shared timeline. We will export this model and its animation to COLLADA, load it into Papervision3D, and play the two animations.

Open animatedMill.max in 3ds Max. This file can be found in the zip file that can be downloaded from http://www.packtpub.com/files/code/5722_Code.zip. You can see the animation of the model directly in 3ds Max by clicking the play button in the menu at the bottom right corner, which will animate the blades of the mill. The first 180 frames animate the blades from left to right; frames 181 to 360 animate the blades from right to left. As the model is already animated, we can go ahead with exporting without making any changes to the model. Export it using the COLLADA file type and save it somewhere on your computer. When the COLLADA Max exporter settings window pops up, we need to check the Sample animation checkbox. By default, Start and End are set to the length of the timeline as defined in 3ds Max. In case you want to export just a part of it, you can define the start and end frames you want to export. For this example we leave them as they are: 0 and 360.
By completing these steps you have successfully exported an animation in the COLLADA format for Papervision3D. Now, let's have a look at how we can load the animated model into Papervision3D. First, you need to copy the exported COLLADA file and the applied materials (Blades.jpg, House.jpg, and Stand.jpg) to the assets folder of your project. To load an animated COLLADA, we can use the DAE class again. We only need to define some parameters at instantiation so that the animation will loop:

```actionscript
model = new DAE(true, null, true);
model.addEventListener(FileLoadEvent.LOAD_COMPLETE, modelLoaded);
DAE(model).load("assets/animatedMill.dae");
```

Take a look at what these parameters stand for.

ZBrush FAQs

Packt
20 Apr 2011
3 min read
ZBrush 4 Sculpting for Games: Beginner's Guide. Sculpt machines, environments, and creatures for your game development projects.

Q: Why do we use ZBrush, and why is it so widely used in the game and film industry?

A: ZBrush is very good for creating highly detailed models in a very short time. This may sound trivial, but it is very sought-after: if you have seen the amazing detail on some creatures in Avatar (film), The Lord of the Rings (film), or Gears of War (game), you'll know how much this adds to the experience. Without the possibilities of ZBrush, such an incredible level of detail, looking almost real, would not be achievable, like a detailed close-up of an arm. But apart from enabling hyper-realistic models in games or films, ZBrush also focuses on making model creation easier and more lifelike. To this end, it essentially tries to mimic working with real clay, which is easy to understand. So it's all about adding and removing digital clay, which is quite a fun and intuitive way of creating 3D models.

Q: Where can one get more information on ZBrush?

A: Now that you're digging into ZBrush, these websites are worth a visit:

- http://www.pixologic.com. As the developers of ZBrush, this site features many customer stories, tutorials, and, most interestingly, the turntable gallery, where you can rotate freely around ZBrush models from others.
- http://www.ZBrushcentral.com. The main forum with answers for all ZBrush-related questions and a nice "top-row gallery".
- http://www.ZBrush.info. A wiki, hosted by Pixologic, containing the online documentation for ZBrush.

Q: What are the most important hotkeys in ZBrush?

A: The following are some of the most important hotkeys in ZBrush:

- To Rotate your model, left-click anywhere on an unoccupied area of the canvas and drag the mouse.
- To Move your model, hold Alt while left-clicking anywhere on an unoccupied area of the canvas and drag the mouse.
- To Scale your model, press Alt while left-clicking anywhere on an unoccupied area of the canvas, which starts a move. Now release the Alt key while keeping the mouse button pressed, and drag.

Q: What is the difference between 2D, 2.5D, and 3D images in ZBrush?

A: 2D digital images are a flat representation of color, consisting of pixels. Each pixel holds color information. As opposed to that, 3D models, as the name says, can hold three-dimensional information. A 2.5D image stores color information like an image, but additionally knows how far away the pixels in the image are from the viewer and in which direction they are pointing. With this information you can, for example, change the lighting in your 2.5D image without having to repaint it, which can be a real time-saver. To make this even clearer, the next list shows some of the actions we can perform, depending on whether we're working in 2D, 2.5D, or 3D:

- 3D: Rotation, deformation, lighting
- 2.5D: Deformation, lighting, pixel-based effects
- 2D: Pixel-based effects

A pixel-based effect, for example, could be the contrast brush or the glow brush, which can't be applied to a 3D model.

Q: How can we switch between 2.5D and 3D mode?

A: We can switch between 2.5D and 3D mode by using the Edit button.
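To make the 2.5D idea more concrete, here is a small conceptual sketch in C++ (not from the book; the struct and field names are illustrative assumptions) of what one 2.5D canvas element might store, and why relighting without repainting becomes possible:

```cpp
#include <cstdint>

// Hypothetical layout of one 2.5D canvas element: color like a 2D pixel,
// plus the depth and surface direction that make relighting possible.
struct Pixel25D {
    uint8_t r, g, b;     // color, as in any 2D image
    float   depth;       // distance from the viewer
    float   nx, ny, nz;  // direction the surface points (unit normal)
};

// Relighting reuses the stored normal instead of repainting:
// simple Lambert shading against a light direction (lx, ly, lz).
float lambert(const Pixel25D& p, float lx, float ly, float lz) {
    float d = p.nx * lx + p.ny * ly + p.nz * lz; // dot(normal, light)
    return d > 0.0f ? d : 0.0f;                  // clamp backfacing to dark
}
```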

Actors and Pawns

Packt
23 Feb 2015
7 min read
In this article by William Sherif, author of the book Learning C++ by Creating Games with UE4, we will really delve into UE4 code. At first, it is going to look daunting. The UE4 class framework is massive, but don't worry: the framework is massive so that your code doesn't have to be. You will find that you can get a lot done, and a lot onto the screen, using relatively little code. This is because the UE4 engine code is so extensive and well programmed that almost any game-related task can be done easily. Just call the right functions, and voila, what you want to see will appear on the screen. The entire notion of a framework is that it is designed to let you get the gameplay you want without having to spend a lot of time sweating out the details. (For more resources related to this topic, see here.)

Actors versus pawns

A Pawn is an object that represents something that you or the computer's Artificial Intelligence (AI) can control on the screen. The Pawn class derives from the Actor class, with the additional ability to be controlled by the player directly or by an AI script. When a pawn or actor is controlled by a controller or AI, it is said to be possessed by that controller or AI. Think of the Actor class as a character in a play. Your game world is going to be composed of a bunch of actors, all acting together to make the gameplay work. The game characters, Non-player Characters (NPCs), and even treasure chests will be actors.

Creating a world to put your actors in

Here, we will start from scratch and create a basic level into which we can put our game characters. The UE4 team has already done a great job of presenting how the world editor can be used to create a world in UE4. I want you to take a moment to create your own world. First, create a new, blank UE4 project to get started. To do this, in the Unreal Launcher, click on the Launch button beside your most recent engine installation, as shown in the following screenshot.

That will launch the Unreal Editor. The Unreal Editor is used to visually edit your game world. You're going to spend a lot of time in the Unreal Editor, so please take your time to experiment and play around with it. I will only cover the basics of how to work with the UE4 editor. You will need to let your creative juices flow, however, and invest some time in order to become familiar with the editor. To learn more about the UE4 editor, take a look at the Getting Started: Introduction to the UE4 Editor playlist, which is available at https://www.youtube.com/playlist?list=PLZlv_N0_O1gasd4IcOe9Cx9wHoBB7rxFl.

Once you've launched the UE4 editor, you will be presented with the Projects dialog. The following screenshot shows the steps to be performed, with numbers corresponding to the order in which they need to be performed. Perform the following steps to create a project:

1. Select the New Project tab at the top of the screen.
2. Click on the C++ tab (the second subtab).
3. Then select Basic Code from the available projects listing.
4. Set the directory where your project is located (mine is Y:\Unreal Projects). Choose a hard disk location with a lot of space (the final project will be around 1.5 GB).
5. Name your project. I called mine GoldenEgg.
6. Click on Create Project to finalize project creation.

Once you've done this, the UE4 launcher will launch Visual Studio. There will only be a couple of source files in Visual Studio, but we're not going to touch those now.
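As a quick code-level orientation for the Actor/Pawn distinction described above, here is a minimal sketch of a custom Pawn header. It is an illustrative assumption, not part of the GoldenEgg project's generated code, though the UCLASS/GENERATED_BODY boilerplate follows the standard UE4 pattern:

```cpp
// MyPawn.h -- a minimal, player-possessable pawn (illustrative sketch).
#pragma once

#include "GameFramework/Pawn.h"
#include "MyPawn.generated.h"

UCLASS()
class GOLDENEGG_API AMyPawn : public APawn // APawn derives from AActor
{
    GENERATED_BODY()

public:
    // Called to bind functionality to input; being controllable via input
    // or AI is what distinguishes a Pawn from a plain Actor.
    virtual void SetupPlayerInputComponent(class UInputComponent* InputComponent) override;
};
```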
Make sure that Development Editor is selected from the Configuration Manager dropdown at the top of the screen, as shown in the following screenshot. Now launch your project by pressing Ctrl + F5 in Visual Studio. You will find yourself in the Unreal Engine 4 editor, as shown in the following screenshot.

The UE4 editor

We will explore the UE4 editor here. We'll start with the controls, since it is important to know how to navigate in Unreal.

Editor controls

If you've never used a 3D editor before, the controls can be quite hard to learn. These are the basic navigation controls while in edit mode:

- Use the arrow keys to move around in the scene
- Press Page Up or Page Down to go up and down vertically
- Left mouse click + drag left or right to change the direction you are facing
- Left mouse click + drag up or down to dolly (move the camera forward and backward, same as pressing the up/down arrow keys)
- Right mouse click + drag to change the direction you are facing
- Middle mouse click + drag to pan the view
- Right mouse click and the W, A, S, and D keys to move around the scene

Play mode controls

Click on the Play button in the bar at the top, as shown in the following screenshot. This will launch the play mode. Once you click on the Play button, the controls change. In play mode, the controls are as follows:

- The W, A, S, and D keys for movement
- The left or right arrow keys to look toward the left and right, respectively
- The mouse's motion to change the direction in which you look
- The Esc key to exit play mode and return to edit mode

What I suggest you do at this point is try to add a bunch of shapes and objects into the scene and try to color them with different materials.

Adding objects to the scene

Adding objects to the scene is as easy as dragging and dropping them in from the Content Browser tab. The Content Browser tab appears, by default, docked at the left-hand side of the window. If it isn't visible, simply select Window and navigate to Content Browser in order to make it appear. Next, select the Props folder on the left-hand side of the Content Browser, and drag and drop things from the Content Browser into your game world.

To resize an object, press R on your keyboard. The manipulators around the object will appear as boxes, which denotes resize mode. In order to change the material that is used to paint the object, simply drag and drop a new material onto it from the Materials folder inside the Content Browser window.

Materials are like paints. You can coat an object with any material you want by simply dragging and dropping the material you desire onto the object you want it to be coated on. Materials are only skin-deep: they don't change the other properties of an object (such as weight).

Starting from scratch

If you want to start creating a level from scratch, simply click on File and navigate to New Level..., as shown here. You can then select between Default and Empty Level. I think selecting Empty Level is a good idea, for the reasons that are mentioned later. The new level will be completely black in color to start with. Try dragging and dropping some objects from the Content Browser tab again.
This time, I added a resized Shapes/Box for the ground plane and textured it with moss, a couple of Props/SM_Rocks, a Particles/P_Fire, and, most importantly, a light source. Be sure to save your map. Here's a snapshot of my map (how does yours look?).

Summary

In this article, we reviewed how realistic environments are created with actors, which are part of the game, and also saw how various kinds of levels can be created from scratch.

Modeling a Steampunk Spacecraft using Blender 3D 2.49

Packt
29 Dec 2009
5 min read
Steampunk concept

Before we actually begin working on the model, let's make clear the difference between a regular spacecraft and a steampunk spacecraft. Although both of them are based on science fiction, the steampunk spacecraft has a few important characteristics that differ from a regular hi-tech spacecraft. Imagine a world where the advances of science and machinery were actually developed centuries ago. For example, imagine medieval knights using hi-tech armor and destroying castles with rockets. It may sound strange, as rockets have been made for the military only in the last century. What would a fighter jet look like in the Middle Ages? It would be a mix of steel, glass, and wood. The steampunk environment is made out of these kinds of things: modern objects and vehicles produced and developed in a parallel universe, where those discoveries were made long ago.

The secret of designing a good steampunk vehicle or object is to mix recent technology with the materials and methods available in past times, such as wood and bronze used to make a space suit. If you need some inspiration to design objects like these, watch some recent movies that use a steampunk environment to create interesting machines. But to really get to the source, I recommend that you read some books written by Jules Verne, who wrote about incredible environments and machines that dive deep into the ocean or travel to outer space. The first image is an example of a steampunk weapon (image credits: Halogen Gallery, licensed under Creative Commons); next is a steampunk historical character (image credits: Sparr0, licensed under Creative Commons).

Here are a few resources to find out more about steampunk:

- Steampunk at Wikipedia, with lots of resources: http://en.wikipedia.org/wiki/Steampunk
- Guide to drawing and creating steampunk machinery: http://www.crabfu.com/steamtoys/diy_steampunk/
- Showcase of steampunk technology: http://www.instructables.com/id/Steampunk/

Spacecraft concept

Now that we know how to design a good steampunk machine, let's discuss the concept of this spacecraft. For this project, we will design a machine that mixes some elements of steel, but not those fancy industrial plates and welded parts. Instead, our machine will have the look and feel of a machine built by a blacksmith. As it would be really strange to have wooden parts on a spacecraft, we will skip this material or use it only for the interior. Other aspects of the machine that will help give the impression of a steampunk spacecraft are as follows:

- Riveted sheets of metal
- Metal with the look of bronze
- Valves and pipes

With that in mind, we can start with this concept image to create our spacecraft. It's not a complete project, but we're off to a great start with Blender and our use of polygons to create the basis for this incredible machine.

Project workflow

This project will improve our modeling and creation skills with Blender to a great extent! So, to make the process more efficient, the workflow will be planned as it would be done by a professional studio. This is the best way to optimize the time and quality of the project. It will also guarantee that future projects can be finished in the shortest timeframe. The first step for all projects is to find some reference images or photos for the pre-visualization stage. At this point, we should make all important decisions about the project based only on our concept studies.
The biggest amount of time spent on this type of project goes into artistic decisions such as the framing of the camera, the type and color of materials, the shape of the object, and the environment setup. All of those decisions should be made before we open Blender and start modeling, because a simple detour from the main concept could result in a partial or total loss of all the work. When all of the decisions are made, the next step is to start modeling with the reference images we found on the Internet, or we can draw the references ourselves.

The modeling stage involves the spacecraft and the related environment, which of course will be outer space. For this environment, Blender will help us design a space with nebulas, star fields, and even a glazing star. Right after the environment is finished, we can begin working with some materials and textures. As the object has a complex set of parts, and in some cases an organic topology, we will have to pay extra attention to the UV mapping process to add textures. We'll use a few tips when working with those complex shapes and topology to simplify the process.

What would a spacecraft be without special effects? Special effects make the project more realistic. The addition of a particle system enables the spacecraft's engines to work and simulates the shooting of a plasma gun. With those two effects, we will be able to give dynamism to the scene, showing some working parts of the object. And, to finish things up, there is the light setup for the scene. A light setup for a space scene is rather easy to accomplish, because we will have only one strong light source for the scene, and not much bouncing of the light rays. The goal for this project is to end up with a great space scene. If you already know how to use Blender, get ready to put your knowledge to the test!

Cocos2d: Uses of Box2D Physics Engine

Packt
13 Dec 2011
7 min read
(For more resources on Cocos2d, see here.)

Box2D setup and debug drawing

In our first physics recipe, we will explore the basics of creating a Box2D project and setting up a Box2D world. The example creates a scene that allows the user to create realistic 2D blocks.

Getting ready

Please refer to the project RecipeCollection02 for the full working code of this recipe.

How to do it...

The first thing we need to do is create a Box2D project using the built-in Box2D project template:

1. Go to File | New Project.
2. Under User Templates, click on Cocos2d.
3. Now, right-click on Cocos2d Box2d Application.
4. Click Choose, name your project, and hit Save.

Now, execute the following code:

```objc
#import "Box2D.h"
#import "GLES-Render.h"

//32 pixels = 1 meter
#define PTM_RATIO 32

@implementation Ch4_BasicSetup

-(CCLayer*) runRecipe {
    [super runRecipe];

    /* Box2D Initialization */
    //Set gravity
    b2Vec2 gravity;
    gravity.Set(0.0f, -10.0f);

    //Initialize world
    bool doSleep = YES;
    world = new b2World(gravity, doSleep);
    world->SetContinuousPhysics(YES);

    //Initialize debug drawing
    m_debugDraw = new GLESDebugDraw( PTM_RATIO );
    world->SetDebugDraw(m_debugDraw);
    uint32 flags = 0;
    flags += b2DebugDraw::e_shapeBit;
    m_debugDraw->SetFlags(flags);

    //Create level boundaries
    [self addLevelBoundaries];

    //Add batch node for block creation
    CCSpriteBatchNode *batch = [CCSpriteBatchNode batchNodeWithFile:@"blocks.png" capacity:150];
    [self addChild:batch z:0 tag:0];

    //Add a new block
    CGSize screenSize = [CCDirector sharedDirector].winSize;
    [self addNewSpriteWithCoords:ccp(screenSize.width/2, screenSize.height/2)];

    //Schedule step method
    [self schedule:@selector(step:)];

    return self;
}

/* Adds a polygonal box around the screen */
-(void) addLevelBoundaries {
    CGSize screenSize = [CCDirector sharedDirector].winSize;

    //Create the body
    b2BodyDef groundBodyDef;
    groundBodyDef.position.Set(0, 0);
    b2Body *body = world->CreateBody(&groundBodyDef);

    //Create a polygon shape
    b2PolygonShape groundBox;

    //Add four fixtures each with a single edge
    groundBox.SetAsEdge(b2Vec2(0,0), b2Vec2(screenSize.width/PTM_RATIO,0));
    body->CreateFixture(&groundBox,0);
    groundBox.SetAsEdge(b2Vec2(0,screenSize.height/PTM_RATIO), b2Vec2(screenSize.width/PTM_RATIO,screenSize.height/PTM_RATIO));
    body->CreateFixture(&groundBox,0);
    groundBox.SetAsEdge(b2Vec2(0,screenSize.height/PTM_RATIO), b2Vec2(0,0));
    body->CreateFixture(&groundBox,0);
    groundBox.SetAsEdge(b2Vec2(screenSize.width/PTM_RATIO,screenSize.height/PTM_RATIO), b2Vec2(screenSize.width/PTM_RATIO,0));
    body->CreateFixture(&groundBox,0);
}

/* Adds a textured block */
-(void) addNewSpriteWithCoords:(CGPoint)p {
    CCSpriteBatchNode *batch = (CCSpriteBatchNode*)[self getChildByTag:0];

    //Add randomly textured block
    int idx = (CCRANDOM_0_1() > .5 ? 0:1);
    int idy = (CCRANDOM_0_1() > .5 ? 0:1);
    CCSprite *sprite = [CCSprite spriteWithBatchNode:batch rect:CGRectMake(32 * idx,32 * idy,32,32)];
    [batch addChild:sprite];
    sprite.position = ccp( p.x, p.y);

    //Define body definition and create body
    b2BodyDef bodyDef;
    bodyDef.type = b2_dynamicBody;
    bodyDef.position.Set(p.x/PTM_RATIO, p.y/PTM_RATIO);
    bodyDef.userData = sprite;
    b2Body *body = world->CreateBody(&bodyDef);

    //Define another box shape for our dynamic body.
    b2PolygonShape dynamicBox;
    dynamicBox.SetAsBox(.5f, .5f); //These are mid points for our 1m box

    //Define the dynamic body fixture.
    b2FixtureDef fixtureDef;
    fixtureDef.shape = &dynamicBox;
    fixtureDef.density = 1.0f;
    fixtureDef.friction = 0.3f;
    body->CreateFixture(&fixtureDef);
}

/* Draw debug data */
-(void) draw {
    //Disable textures
    glDisable(GL_TEXTURE_2D);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);

    //Draw debug data
    world->DrawDebugData();

    //Re-enable textures
    glEnable(GL_TEXTURE_2D);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
}

/* Update graphical positions using physical positions */
-(void) step: (ccTime) dt {
    //Set velocity and position iterations
    int32 velocityIterations = 8;
    int32 positionIterations = 3;

    //Step the Box2D world
    world->Step(dt, velocityIterations, positionIterations);

    //Update sprite position and rotation to fit physical bodies
    for (b2Body* b = world->GetBodyList(); b; b = b->GetNext()) {
        if (b->GetUserData() != NULL) {
            CCSprite *obj = (CCSprite*)b->GetUserData();
            obj.position = CGPointMake( b->GetPosition().x * PTM_RATIO, b->GetPosition().y * PTM_RATIO);
            obj.rotation = -1 * CC_RADIANS_TO_DEGREES(b->GetAngle());
        }
    }
}

/* Tap to add a block */
- (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    for( UITouch *touch in touches ) {
        CGPoint location = [touch locationInView: [touch view]];
        location = [[CCDirector sharedDirector] convertToGL: location];
        [self addNewSpriteWithCoords: location];
    }
}
@end
```

How it works...

The Box2D sample project is a simple way to understand what a physics system looks like.

Initialization: Upon initialization of the b2World object, we set a few things including gravity, object sleeping, and continuous physics. Sleeping allows bodies that are at rest to take up fewer system resources. Gravity is typically set to a negative number in the Y direction, but it can be reset at any time using the following method on b2World:

```objc
void SetGravity(const b2Vec2& gravity);
```

In addition to storing a pointer to the main b2World instance, we also usually store a pointer to an instance of GLESDebugDraw.

Debug drawing: Debug drawing is handled by the GLESDebugDraw class as defined in GLESRender.h. Debug drawing encompasses drawing five different elements onscreen: shapes, joint connections, AABBs (axis-aligned bounding boxes), broad-phase pairs, and a center-of-mass bit.

Visual to physical drawing ratio: We define the constant PTM_RATIO at 32 to allow consistent conversion between the physical world and the visual world. PTM stands for pixels to meters. Box2D measures bodies in meters and is built and optimized to work with bodies between the sizes of 0.1 and 10.0 meters. Setting this ratio to 32 is a common convention for optimal shapes to appear between 3.2 and 320 pixels on screen. Optimization aside, there is no upper or lower limit to Box2D body size.

Level boundaries: In this and many future examples, we add a level boundary roughly encompassing the entire screen. This is handled with the creation of a b2Body object with four fixtures. Each fixture has a b2PolygonShape that defines a single edge. Creating an edge typically involves the following:

```objc
b2BodyDef bodyDef;
bodyDef.position.Set(0, 0);
b2Body *body = world->CreateBody(&bodyDef);

b2PolygonShape poly;
poly.SetAsEdge(b2Vec2(0,0), b2Vec2(480/PTM_RATIO,0));
body->CreateFixture(&poly,0);
```

Because these edges have no corresponding visual components (they are invisible), we do not need to set the bodyDef.userData pointer.

Creating the blocks: Blocks are created much in the same way that the level boundaries are created.
Instead of calling SetAsEdge, we call SetAsBox to create a box-shaped polygon. We then set the density and friction attributes of the fixture. We also set bodyDef.userData to point to the CCSprite we created. This links the visual and the physical, and allows our step: method to reposition sprites as necessary.

Scheduling the world step: Finally, we schedule our step method. In this method, we run one discrete b2World step using the following code:

```objc
int32 velocityIterations = 8;
int32 positionIterations = 3;
world->Step(dt, velocityIterations, positionIterations);
```

The Box2D world Step method moves the physics engine forward one step. The Box2D constraint solver runs in two phases: the velocity phase and the position phase. These determine how fast the bodies move and where they are in the game world. Setting these variables higher results in a more accurate simulation at the cost of speed. Setting velocityIterations to 8 and positionIterations to 3 is the suggested baseline in the Box2D manual. Using the dt variable syncs the logical timing of the application with the physical timing. If a game step takes an inordinate amount of time, the physics system will move forward quickly to compensate. This is referred to as a variable time step. An alternative would be a fixed time step set to 1/60th of a second. In addition to the physical step, we also reposition and re-orient all CCSprites according to their respective b2Body positions and rotations:

```objc
for (b2Body* b = world->GetBodyList(); b; b = b->GetNext()) {
    if (b->GetUserData() != NULL) {
        CCSprite *obj = (CCSprite*)b->GetUserData();
        obj.position = CGPointMake(b->GetPosition().x * PTM_RATIO, b->GetPosition().y * PTM_RATIO);
        obj.rotation = -1 * CC_RADIANS_TO_DEGREES(b->GetAngle());
    }
}
```

Taken together, these pieces of code sync the physical world with the visual.
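If you prefer the fixed-time-step alternative mentioned above, a common pattern is an accumulator that feeds the world fixed 1/60 s slices. The sketch below is a generic C++ illustration, not part of the recipe; stepWorldFixed is a hypothetical helper name:

```cpp
#import "Box2D.h"

// Generic fixed-time-step loop: accumulate frame time and step the world
// in fixed 1/60 s slices so simulation results stay reproducible.
void stepWorldFixed(b2World& world, float frameTime) {
    static float accumulator = 0.0f;
    const float FIXED_STEP = 1.0f / 60.0f;
    const int velocityIterations = 8;  // Box2D manual baseline
    const int positionIterations = 3;

    accumulator += frameTime;
    while (accumulator >= FIXED_STEP) {
        world.Step(FIXED_STEP, velocityIterations, positionIterations);
        accumulator -= FIXED_STEP;
    }
    // Any remainder smaller than FIXED_STEP carries over to the next frame.
}
```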

Beating Back the Horde

Packt
18 Feb 2014
9 min read
(For more resources related to this topic, see here.)

What kind of game will we be making?

We are going to make one of the classics: a Tower Defense game (http://old.casualcollective.com/#games/FETD2). Our game won't be as polished as the example, but it gives you a solid base to work with and develop further.

Mission briefing

We will use the cloning tools again to create hordes of enemies to fight. We will also use these tools to create cannons and cannonballs. It's easy to re-use assets from other projects in Scratch 2.0; the new Backpack feature allows you to easily exchange assets between projects. How this works will be demonstrated in this article.

Why is it awesome?

This example is a lot more involved than the previous one. The final result will be a much more finished game, which still leaves plenty of room to adapt and continue building on. While making this game, you will learn how to draw a background and how to make and use different costumes for a single sprite. We will make full use of the cloning technique to create many copies of similar objects. We will also use more variables, and another type of variable called a list, to keep track of all the things going on in the game. You will also learn about a simple way to create movement patterns for computer-controlled objects.

Your Hotshot objectives

We will divide the article into the following tasks, based primarily on the game sprites and their behavior:

- Creating a background
- Creating enemies
- Creating cannons
- Fighting back

Mission checklist

Click on the Create button to start a new project. Remove the Scratch cat by right-clicking on it and selecting delete.

Creating a background

Because the placement and the route to walk are important in this kind of game, we will start with the creation of the background. To the left of the Sprites window, you will see a separate picture. Underneath is the word Stage and another word, the name of the picture that's being shown. This picture is white when you start a new project because nothing is drawn on it yet. The following is an example with our background image already drawn in:

Engage thrusters

We will draw a grassy field with a winding road running through it, seen from the top, by going through the following steps:

1. Click on the white image. Next, click on the Backdrops tab to get to the drawing tool. This is similar to the Costumes tab for sprites, but the size of the drawing canvas is clearly limited to the size of the stage.
2. Choose a green color and draw a rectangle from the top-left to the bottom-right of the canvas. Then click on the Fill tool and fill the rectangle with the same color to create a grassy background.
3. On top of the field, we will draw a path which the enemies will use to walk on. Switch the Fill tool to a brown color.
4. Draw rectangles to form a curving path, as shown in the following screenshot.

The background is now done. Let's save our work before moving on.

Objective complete – mini debriefing

The background is just a pretty picture with no direct functionality in the game. It tells the player what to expect: it is logical that enemies are going to follow the road that was drawn, and we will also use this road as a guideline when scripting the movement path of the enemies. The open spaces between the path make it obvious where the player could place the cannons.

Creating enemies

We will quickly create an enemy sprite to make use of the background we just drew. These enemies will follow the path drawn in the background.
Because the background image is fixed, we can determine exactly where the turns are. We will use a simple movement script that sends the enemies along the path from one end of the stage to the other. As with the targets in the previous project, we will use a base object that creates clones of itself; the clones are what actually show up on stage.

Prepare for lift off

We will first draw an enemy sprite. Let's keep this simple for now; we can always add to the visual design later. The steps to draw it are as follows:

1. Click on the paintbrush icon to create a new sprite.
2. Choose a red color and draw a circle. Make sure the circle is of proper size compared to the path in the background.
3. Fill the circle with the same color.
4. Name the new sprite enemy1.

That's all for now! We will add more to the appearance of the enemy sprite later. The enemy sprite appears as a red circle large enough to fit the path.

Engage thrusters

Let's make it functional first with a script. We will place the base enemy sprite at the start of the path and have it create clones. Then we will program the clones to follow the path, as shown in the following steps:

1. The script will start when the when <green flag> clicked block is clicked.
2. Place the sprite at the start of the path with a go to x: -240 y: 0 block.
3. Wait for three seconds by using the wait ... secs block to allow the player to get ready for the game.
4. Add a repeat ... block. Fill in 5 to create five clones per wave.
5. Insert a create clone of <myself> block.
6. Then wait for two seconds by using the wait ... secs block so the enemy clones won't be spawned too quickly.

Before we start moving the clones, we have to determine what path they will follow. The key information here is the points where the path bends in a new direction. We can move the enemies from one bend to another in an orderly manner. Be warned that it may take some time to complete this step. You will probably need to test and change the numbers you are going to use to move the sprites correctly. If you don't have the time to figure it all out, you can check and copy the image with the script blocks at the end of this step to get a quick result.

Do you remember how the xy-coordinate system of the stage worked from the last project? Get a piece of paper (or use the text editor on your computer) and get ready to take some notes. Examine the background you drew on the stage, and write down, in order, all the xy-coordinates that the path follows. These points will serve as waypoints. Look at the screenshot to see the coordinates that I came up with, but remember that the numbers for your game could be different if you drew the path differently.

To move the enemy sprites, we will use the glide ... secs to x: ... y: ... block. With this block, a sprite will move fluidly to the given point in the given amount of time, as shown in the following steps:

1. Start the clone script with a when I start as a clone block.
2. Beyond the starting point, there will be seven points to move to, so stack together seven glide ... blocks.
3. In the coordinate slots, fill in the coordinates you just wrote down, in the correct order. Double-check this, since filling in a wrong number will cause the enemies to leave the path.

Deciding how long a sprite should take to complete a segment depends on the length of that segment. This requires a bit of guesswork, since we didn't use an exact drawing method. Your most accurate information is the differences between the coordinates you used from point to point.
Between the starting point (-240,0) and the first waypoint (-190,0), the enemy sprite will have moved 50 pixels. Let's say we want to move 10 pixels per second. That means the sprite should reach its new position in 5 seconds. The difference between the first (-190,0) and the second (-190,125) waypoint is 125 pixels, so according to the same formula, the sprite should move along this segment of the path in 12.5 seconds.

Continue calculating the glide speeds like this for the other blocks. These are the numbers I came up with: 5, 12.5, 17, 26.5, 15.5, 14, and 10.5, but remember that yours may be different. You can use the formula (new position – old position) / 10 = glide time to figure out the numbers you need to use. To finish off, delete the clone when it reaches the end of the path.

Test your script and see the enemies moving along the path. You might notice they are very slow and bunched together because they don't travel enough distance between spawns. Let's fix that by adding a variable speed multiplier. Not only can we easily tweak the speed of the sprites, but we can also use this later to have other enemy sprites move at different speeds, as shown in the following steps:

1. Create a variable and make sure it is for this sprite only. Name it multiplier_R. The R stands for red, the color of this enemy.
2. Place set <multiplier_R> to … at the start of the <green flag> script. Fill in 0.3 as the number for the basic enemy.
3. Take the speed numbers you filled in previously and multiply them by the multiplier. Use a ...*... operator block. Place the multiplier_R variable in one slot. Type the correct number in the other slot.
4. Place the calculation in the glide block instead of the fixed number.

The completed scripts for enemy movement will look as follows:

Objective complete – mini debriefing

Test the game again and see how the enemies move much faster, about three times as fast if you have used 0.3 for the multiplier. You can play with the variable number a bit to see the effect. If you decrease the multiplier, the enemies will move even faster. If you increase the number, the enemies will become slower.
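If you would rather not do the segment arithmetic by hand, the short Python sketch below applies the same distance/speed formula to a list of waypoints. This is only a planning aid, not Scratch code, and the waypoints shown are assumptions based on the coordinates discussed above; substitute the ones you wrote down for your own path.

    # Compute glide durations for axis-aligned path segments.
    # Waypoints are (x, y) stage coordinates; the first one is the spawn point.
    waypoints = [(-240, 0), (-190, 0), (-190, 125)]  # ...add your remaining points

    SPEED = 10  # pixels per second, as used in the text

    for (x1, y1), (x2, y2) in zip(waypoints, waypoints[1:]):
        distance = abs(x2 - x1) + abs(y2 - y1)  # works because segments are axis-aligned
        print("glide", distance / SPEED, "secs to x:", x2, "y:", y2)

Running it with the example coordinates prints 5.0 and 12.5, matching the first two numbers worked out above. Remember to apply your multiplier_R value in the glide blocks as described in the steps.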
Major SDK components

Packt
24 Oct 2013
11 min read
Controller

The Leap::Controller class is the liaison between the controller and your code. Whenever you wish to do anything at all with the device, you must first go through your controller. From a controller instance we can interact with the device configuration, detected displays, current and past frames, and set up event handling with our listener subclass.

Config

An instance of the Config class can be obtained from a controller. It provides a key/value interface to modify the operation of the Leap device and driver behavior. Some of the options available are:

- Robust mode: Somewhat slower frame processing, but works better with less light.
- Low resource mode: Less accurate and responsive tracking, but uses less CPU and USB bandwidth.
- Tracking priority: Can prioritize either precision of tracking data or the rate at which data is sampled (resulting in approximately a 4x data frame-rate boost), or a balance between the two (approximately 2x faster than the precise mode).
- Flip tracking: Allows you to use the controller with the USB cable coming out of either side. This setting simply flips the positive and negative coordinates on the X axis.

Screen

A controller may have one or more calibratedScreens, which are computer displays in the field of view of the controller that have a known position and dimensions. Given a pointable direction and a screen, we can determine what the user is pointing at.

Math

Several math-related functions and types such as Leap::Vector, Leap::Matrix, and Leap::FloatArray are provided by LeapMath.h. All points in space, screen coordinates, directions, and normals are returned by the API as three-element vectors representing X, Y, and Z coordinates or unit vectors.

Frame

The real juicy information is stored inside each Frame. A Frame instance represents a point in time at which the driver was able to generate an updated view of its world and detect where screens, your hands, and pointables are.

Hand

At present, the only body parts you can use with the controller are your hands. Given a frame instance, we can inspect the number of hands in the frame, their position and rotation, normal vectors, and gestures. The hand motion API allows you to compare two frames and determine if the user has performed a translation, rotation, or scaling gesture with their hands in that time interval. The methods we can call to check for these interactions are:

- Leap::Hand::translation(sinceFrame): Translation (also known as movement), returned as a Leap::Vector including the direction of the movement of the hand and the distance travelled in millimeters.
- Leap::Hand::rotationMatrix(sinceFrame), ::rotationAxis(sinceFrame), ::rotationAngle(sinceFrame, axisVector): Hand rotation, described either as a rotation matrix, a vector around an axis, or a float angle around a vector between –π and π radians (that's -180° to 180° for those of you who are a little rusty with your trigonometry).
- Leap::Hand::scaleFactor(sinceFrame): Scaling represents the distance between two hands. If the hands are closer together in the current frame compared to sinceFrame, the return value will be less than 1.0 but greater than 0.0. If the hands are further apart, the return value will be greater than 1.0 to indicate the factor by which the distance has increased.

A short sketch of how these methods fit together follows.
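The snippet below is a minimal illustration, not code from this article: it assumes a valid Leap::Controller named controller and compares the current frame against the frame from ten updates earlier using the history argument of Controller::frame().

    // Compare the current frame with an earlier one from the frame history.
    const Leap::Frame frame = controller.frame();        // most recent frame
    const Leap::Frame sinceFrame = controller.frame(10); // 10 frames back

    if (!frame.hands().empty()) {
        const Leap::Hand hand = frame.hands()[0];

        // How far the hand has moved since the earlier frame, in millimeters
        const Leap::Vector movement = hand.translation(sinceFrame);

        // Rotation around the hand's rotation axis, in radians
        const float angle = hand.rotationAngle(sinceFrame);

        // > 1.0 means the hands moved apart; < 1.0 means they moved closer
        const float scale = hand.scaleFactor(sinceFrame);

        std::cout << "Moved " << movement.magnitude() << " mm, rotated "
                  << angle << " rad, scale factor " << scale << std::endl;
    }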
Pointable

A Hand can also contain information about Pointable objects that were recognized in the frame as being attached to the hand. A distinction is made between the two different subclasses of pointable objects: Tool, which can be any slender, long object such as a chopstick or a pencil, and Finger, whose meaning should be apparent. You can request either fingers or tools from a Hand, or a list of pointables to get both if you don't care.

Finger positioning

Suppose we want to know where a user's fingertips are in space. Here's a short snippet of code to output the spatial coordinates of the tips of the fingers on a hand that is being tracked by the controller:

    if (frame.hands().empty()) return;
    const Leap::Hand firstHand = frame.hands()[0];
    const Leap::FingerList fingers = firstHand.fingers();

Here we obtain a list of the fingers on the first hand of the frame. For an enjoyable diversion, let's output the locations of the fingertips on the hand, given in the Leap coordinate system:

    for (int i = 0; i < fingers.count(); i++) {
        const Leap::Finger finger = fingers[i];
        std::cout << "Detected finger " << i << " at position ("
                  << finger.tipPosition().x << ", "
                  << finger.tipPosition().y << ", "
                  << finger.tipPosition().z << ")" << std::endl;
    }

This demonstrates how to get the position of the fingertips of the first hand that is recognized in the current frame. If you hold three fingers out, the following dazzling output is printed:

    Detected finger 0 at position (-119.867, 213.155, -65.763)
    Detected finger 1 at position (-90.5347, 208.877, -61.1673)
    Detected finger 2 at position (-142.919, 211.565, -48.6942)

While this is clearly totally awesome, the exact meaning of these numbers may not be immediately apparent. For points in space returned by the SDK, the Leap coordinate system is used. Much like our forefathers believed the Earth to be the cornerstone of our solar system, your Leap device has similar notions of centricity. It measures locations by their distance from the Leap origin, a point centered on the top of the device. Negative X values represent a point in space to the left of the device; positive values are to the right. The Z coordinates work in much the same way, with positive values extending towards the user and negative values in the direction of the display. The Y coordinate is the distance from the top of the device, starting 25 millimeters above it and extending to about 600 millimeters (two feet) upwards. Note that the device cannot see below itself, so all Y coordinates will be positive.

An example of cursor control

By now we are feeling pretty saucy, having diligently run the sample code thus far and controlled our computer in a way never before possible. While there is certain utility and endless amusement afforded by printing out finger coordinates while waving your hands in the air and pretending to be a magician, there are even more exciting applications waiting to be written, so let's continue onwards and upwards.

Until computer-gesture interaction is commonplace, pretending to be a magician while you test the functionality of the Leap SDK is not recommended in public places such as coffee shops.

In some cultures it is considered impolite to point at people. Fortunately your computer doesn't have feelings and won't mind if we use a pointing gesture to move its cursor around (you can even use a customarily offensive finger if you so choose). In order to determine where to move the cursor, we must first locate the position on the display that the user is pointing at. To accomplish this we will make use of the screen calibration and detection API in the SDK.
If you happen to leave your controller near a computer monitor, it will do its best to determine the location and dimensions of the monitor by looking for a large, flat surface in its field of view. In addition, you can use the complementary Leap calibration functionality to improve its accuracy if you are willing to take a couple of minutes to point at various dots on your screen. Note that once you have calibrated your screen, you should ensure that the relative positions of the Leap and the screen do not change.

Once your controller has oriented itself within your surroundings, hands, and display, you can ask your trusty controller instance for a list of detected screens:

    // get list of detected screens
    const Leap::ScreenList screens = controller.calibratedScreens();

    // make sure we have a detected screen
    if (screens.empty()) return;
    const Leap::Screen screen = screens[0];

We now have a screen instance that we can use to find out the physical location in space of the screen as well as its boundaries and resolution. Who cares about all that though, when we can use the SDK to compute where we're pointing to with the intersect() method?

    // find the first finger or tool
    const Leap::Frame frame = controller.frame();
    const Leap::HandList hands = frame.hands();
    if (hands.empty()) return;
    const Leap::PointableList pointables = hands[0].pointables();
    if (pointables.empty()) return;
    const Leap::Pointable firstPointable = pointables[0];

    // get x, y coordinates on the first screen
    const Leap::Vector intersection = screen.intersect(
        firstPointable,
        true, // normalize
        1.0f  // clampRatio
    );

The vector intersection contains what we want to know here: the pixel pointed at by our pointable. If the pointable argument to intersect() is not actually pointing at the screen, then the return value will be (NaN, NaN, NaN). NaN stands for "not a number". We can easily check for the presence of non-finite values in a vector with the isValid() method:

    if (!intersection.isValid()) return;

    // print intersection coordinates
    std::cout << "You are pointing at ("
              << intersection.x << ", "
              << intersection.y << ", "
              << intersection.z << ")" << std::endl;

Prepare to be astounded when you point at the middle of your screen and the transfixing message You are pointing at (0.519522, 0.483496, 0) is revealed. Assuming your screen resolution is larger than one pixel on either side, this output may be somewhat unexpected, so let's talk about what screen.intersect(const Pointable &pointable, bool normalize, float clampRatio=1.0f) is returning.

The intersect() method draws an imaginary ray from the tip of pointable, extending in the same direction as your finger or tool, and returns a three-element vector containing the coordinates of the point of intersection between the ray and the screen. If the second parameter normalize is set to false, then intersect() will return the location in the Leap coordinate system. Since we have no interest in the real world, we have set normalize to true, which causes the coordinates of the returned intersection vector to be fractions of the screen width and height.

When intersect() returns normalized coordinates, (0, 0, 0) is considered the bottom-left pixel and (1, 1, 0) is the top-right pixel. It is worth noting that many computer graphics coordinate systems define the top-left pixel as (0, 0), so use caution when using these coordinates with other libraries.
There is one last (optional) parameter to the intersect() method, clampRatio, which is used to expand or contract the boundaries of the area at which the user can point, should you want to allow pointing beyond the edges of the screen.

Now that we have our normalized screen position, we can easily work out the pixel coordinate in the direction of the user's rude gesticulations:

    unsigned int x = screen.widthPixels() * intersection.x;
    // flip y coordinate to standard top-left origin
    unsigned int y = screen.heightPixels() * (1.0f - intersection.y);
    std::cout << "You are offending the pixel at ("
              << x << ", " << y << ")" << std::endl;

Since intersection.x and intersection.y are fractions of the screen dimensions, we simply multiply by the boundary sizes to get our intersection coordinates on the screen. We'll go ahead and leave out the Z coordinate since it's usually (OK, always) zero. Now for the coup de grâce: moving the cursor. Here's how to do it on Mac OS X:

    CGPoint destPoint = CGPointMake(x, y);
    CGDisplayMoveCursorToPoint(kCGDirectMainDisplay, destPoint);

You will need to #include <CoreGraphics/CoreGraphics.h> and link it (-framework CoreGraphics) to make use of CGDisplayMoveCursorToPoint(). Now all of our hard efforts are rewarded, and we can while away the rest of our days making the cursor zip around with nothing more than a twitch of the finger. At least until our arm gets tired. After a few seconds (or minutes, for the easily amused) it may become apparent that the utility of such an application is severely limited, as we can't actually click on anything. So maybe you shouldn't throw your mouse away just yet, but read on if you are ready to escape from the shackles of such an antiquated input device.

Summary

In this article, we went through the major components of the Leap SDK.
Applying Special Effects in 3D Game Development with Microsoft Silverlight 3: Part 1

Packt
18 Nov 2009
7 min read
A 3D game must be attractive. It has to offer amazing effects for the main characters and in the background. A spaceship has to fly through a meteor shower. An asteroid belt has to draw waves while a UFO pursues a spaceship. A missile should make a plane explode. The real world shows us things moving everywhere. Most of these scenes, however, aren't repetitive sequences. Hence, we have to combine great designs, artificial intelligence (AI), and advanced physics to create special effects.

Working with 3D characters in the background

So far, we have added physics, collision detection capabilities, life, and action to our 3D scenes. We were able to simulate real-life effects for the collision of two 3D characters by adding some artificial intelligence. However, we need to combine this action with additional effects to create a realistic 3D world. Players want to move the camera while playing so that they can watch amazing effects. They want to be part of each 3D scene as if it were a real-life situation.

How can we create complex and realistic backgrounds capable of adding realistic behavior to the game? We can do this by combining everything we have learned so far with a good object-oriented design. We have to create random situations combined with more advanced physics. We have to add more 3D characters with movement to the scenes. We must add complexity to the backgrounds. We can work with many independent physics engines to handle parallel worlds. In real life, there are concurrent and parallel worlds. We have to reproduce this behavior in our 3D scenes.

Time for action – adding a transition to start the game

Your project manager does not want the game to start immediately. He wants you to add a button in order to allow the player to start the game by clicking on it. As you are using Balder, adding a button is not as simple as expected. We are going to add a button to the main page, and we are going to change Balder's default game initialization:

1. Stay in the 3DInvadersSilverlight project. Expand App.xaml in the Solution Explorer and open App.xaml.cs––the C# code for App.xaml.
2. Comment the following line of code (we are not going to use Balder's services in this class):

    //using Balder.Silverlight.Services;

3. Comment the following line of code in the event handler for the Application_Startup event, after the line this.RootVisual = new MainPage();:

    //TargetDevice.Initialize<InvadersGame>();

4. Open the XAML code for MainPage.xaml and add the following lines of code after the line. (You will see a button with the title Start the game.):

    <!-- A button to start the game -->
    <Button x:Name="btnStartGame" Content="Start the game!"
            Canvas.Left="200" Canvas.Top="20"
            Width="200" Height="30"
            Click="btnStartGame_Click"></Button>

5. Now, expand MainPage.xaml in the Solution Explorer and open MainPage.xaml.cs––the C# code for MainPage.xaml.
6. Add the following line of code at the beginning (as we are going to use many of Balder's classes and interfaces):

    using Balder.Silverlight.Services;

7. Add the following lines of code to program the event handler for the button's Click event (this code will initialize the game using Balder's services):

    private void btnStartGame_Click(object sender, RoutedEventArgs e)
    {
        btnStartGame.Visibility = Visibility.Collapsed;
        TargetDevice.Initialize<InvadersGame>();
    }

8. Build and run the solution. Click on the Start the game! button and the UFOs will begin their chase game.
The button will make a transition to start the game, as shown in the following screenshots:

What just happened?

You could use a Start the game! button to start a game using Balder's services. Now, you will be able to offer the player more control over some parameters before starting the game. We commented out the code that started the game during the application start-up. Then, we added a button on the main page (MainPage). The code programmed in its Click event handler initializes the desired Balder.Core.Game subclass (InvadersGame) using just one line:

    TargetDevice.Initialize<InvadersGame>();

This initialization adds a new specific Canvas as another layout root's child, controlled by Balder to render the 3D scenes. Thus, we had to make some changes to add a simple button to control this initialization.

Time for action – creating a low polygon count meteor model

The 3D digital artists are creating models for many aliens. They do not have the time to create simple models. Hence, they teach you to use Blender and 3D Studio Max to create simple models with a low polygon count. Your project manager wants you to add dozens of meteors to the existing chase game. A gravitational force must attract these meteors, and they have to appear at random initial positions in the 3D world.

First, we are going to create a low polygon count meteor using 3D Studio Max. Then, we are going to add a texture based on a PNG image and export the 3D model to the ASE format, compatible with Balder. As previously explained, we have to do this in order to export the ASE format with a bitmap texture definition enveloping the meshes. We can also use Blender or any other 3D DCC tool to create this model. We have already learned how to export the ASE format from Blender, so this time we are going to learn the necessary steps to do it using 3D Studio Max:

1. Start 3D Studio Max and create a new scene.
2. Add a sphere with six segments.
3. Locate the sphere in the scene's center.
4. Use the Uniform Scale tool to resize the low polygon count sphere to 11.329 in all three axes, as shown in the following screenshot:
5. Click on the Material Editor button.
6. Click on the first material sphere, in the Material Editor window's upper-left corner.
7. Click on the small square at the right side of the Diffuse color rectangle, as shown in the following screenshot:
8. Select Bitmap from the list shown in the Material/Map Browser window that pops up and click on OK.
9. Select the PNG file to be used as a texture to envelope the sphere. You can use Bricks.PNG, previously downloaded from http://www.freefoto.com/. You just need to add a reference to a bitmap file. Then, click on Open. The Material Editor preview panel will show a small sphere thumbnail enveloped by the selected bitmap, as shown in the following screenshot:
10. Drag the new material and drop it on the sphere. If you are facing problems, remember that the 3D digital artist created a similar sphere a few days ago and left the meteor.max file in the following folder: C:\Silverlight3D\Invaders3D\3DModels\METEOR.
11. Save the file using the name meteor.max in the previously mentioned folder.
12. Now, you have to export the model to the ASE format with the reference to the texture. Therefore, select File | Export and choose ASCII Scene Export (*.ASE) in the Type combo box.
13. Select the aforementioned folder, enter the file name meteor.ase, and click on Save.
14. Check the following options in the ASCII Export dialog box.
(They are unchecked by default):

- Mesh Normals
- Mapping Coordinates
- Vertex Colors

The dialog box should be similar to the one shown in the following screenshot:

Click on OK. Now, the model is available as an ASE 3D model with a reference to the texture. You will have to change the absolute path for the bitmap that defines the texture in order to allow Balder to load the model in a Silverlight application.
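To see what needs editing, you can open meteor.ase in a text editor. The following is an illustrative excerpt, not taken from this article, of roughly what the material section looks like; the exact names, counts, and indices will differ in your file. The absolute path on the *BITMAP line is the value the article tells you to change:

    *MATERIAL_LIST {
        *MATERIAL_COUNT 1
        *MATERIAL 0 {
            *MATERIAL_NAME "Meteor"
            *MAP_DIFFUSE {
                *BITMAP "Bricks.PNG"
            }
        }
    }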
Installing Gideros

Packt
08 Nov 2013
8 min read
About Gideros

Gideros is a set of software packages created and managed by a company named Gideros Mobile. It provides developers with the ability to create 2D games for multiple platforms while reusing the same code. Games created with Gideros run as native applications, thus having all the benefits of high performance and utilization of the hardware power of a mobile device.

Gideros uses Lua as its programming language, a lightweight scripting language with an easy learning curve that is quite popular in the context of game development. A few of the greatest Gideros features are as follows:

- Rapid prototyping and fast development time, provided by single-click on-device testing that enables you to compile and run your game from your computer on a device in an instant
- A clean object-oriented approach that enables you to write clean and reusable code
- A plugin system through which Gideros can be extended beyond its provided API to offer virtually any native platform feature
- The ability to use all of this to create and even publish your game for free, if you don't mind a small Gideros splash screen being shown before your game starts

Installing Gideros

Currently, Gideros has no registration requirements for downloading its SDK, so you can easily navigate to the download page (http://giderosmobile.com/download) and download the version that is suitable for your operating system. As Gideros can be used on Linux only through the WINE emulator, even for Linux you have to download the Windows version of Gideros. So, to sum it up:

- Download the Windows version for Windows and Linux
- Download the Mac version for OS X

Gideros consists of multiple programs providing you with the basic package needed to develop your own mobile games. This software package includes the following:

- Gideros Studio: A lightweight IDE to manage Gideros projects
- Gideros Player: A fast and lightweight desktop player, plus iOS and Android players, to run your apps with one click when testing
- Gideros Texture Packer: Used to pack multiple textures into one texture for faster texture rendering
- Gideros Font Creator: Used to create bitmap fonts from different font formats for faster font rendering
- Gideros License Manager: Used to license your downloaded copy of Gideros before exporting a project (required even for free accounts)
- An offline copy of the Gideros documentation and API reference to get you started

Creating your first project

After you have downloaded and installed Gideros, you can try to create your first Gideros project. Although Gideros is IDE independent, and a lot of other IDEs such as Lua Glider, ZeroBrane, IntelliJ IDEA, and even Sublime can support Gideros, I would recommend that first-time users choose the provided Gideros Studio. That is what we will be using in this article.

Trying out Gideros Studio

You should note that I will be using the Windows version for screenshots and explanations, but Gideros Studio on other operating systems is quite similar, if not exactly the same. Therefore, it should not cause any confusion if you are using other versions of Gideros.

When you open Gideros Studio, you will see a lot of different sections, or what we will call panes.
The largest pane will be the Start Page, which provides you with the following options:

- Create New Project
- Access the offline Getting Started guide
- Access the offline Reference Manual
- Browse and try out the Gideros Example Projects

Go ahead and click on Create New Project; a New Project dialog will open. Now enter the name of your project, for example, New Project. Change the location of the project if you want to, or leave it set to the default value, and click on OK when you are ready.

Note that the Start Page is automatically closed and the space occupied by the Start Page is now free. This will be your coding pane, where all the code will be displayed. But first, let's draw our attention to the Project pane, where you can see your chosen project name. In this pane, you will manage all the files used by your app.

One important thing to note is that the file/folder structure in the Gideros Project pane is completely independent of your filesystem. This means that you will have to add files manually to the Gideros Studio Project pane; they won't show up automatically when you copy them into the project folder. Additionally, in your filesystem, files and folders may be organized completely differently than they are in Gideros Studio. This feature gives you the flexibility of managing multiple projects with the same code or asset base. When you, for example, want to include specific things in the iOS version of the game, which Android won't have, you can create two different projects in the same project directory, which reuse the same files and simultaneously have their own independent, platform-specific files.

So let's see how it actually works. Right-click on your project name inside the Project pane and select Add New File.... It will pop up the Add New File dialog. As in many Lua development environments, an application should start with a main.lua file, so name your file main.lua and click on OK. You will now see that main.lua was added to your Project pane. And if you check the directory of your project in your filesystem, you will see that it also contains the main.lua file.

Now double-click on main.lua inside the Project pane and it will open this file inside the code pane, where you can write code for it. So let's try it out. Write a simple line of code:

    print("Hello world!")

What this line will do is simply print out the provided string (Hello world!) in the output console. A slightly bigger taste of Gideros code follows below.
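As a preview of where this is heading, here is a minimal Lua sketch, not part of the original walkthrough, of how a graphical object is put on screen in Gideros. It assumes you have added an image file named crate.png to the project; displaying something on screen is picked up again at the end of this article, so feel free to just read it for now.

    -- Load an image as a texture, wrap it in a Bitmap display object,
    -- position it, and attach it to the global stage so it gets rendered.
    local texture = Texture.new("crate.png")
    local bitmap = Bitmap.new(texture)
    bitmap:setPosition(100, 100)
    stage:addChild(bitmap)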
Now save the project by either using the File menu or the diskette icon on the toolbar, and let's run this project on the local desktop player.

Using the Gideros desktop player

To run our app, we first need to launch Gideros Player by clicking on the small joystick icon on the toolbar. This will open up the Gideros desktop player. The default screen of Gideros Player shows the current version of Gideros used and the IP address the player is bound to. Additionally, the desktop player provides different customizations:

- You can make it appear on top of every window by navigating to View | Always on Top.
- You can change the zoom by navigating to View | Zoom. This is helpful when running the player in high resolutions, which might not fit the screen.
- You can select the orientation (portrait or landscape) of the player by navigating to Hardware | Orientation, to suit the needs of your app.
- You can provide the resolution you want to test your app in by navigating to Hardware | Resolution. It provides the most popular resolution templates to choose from.
- You can also set the frame rate of your app by navigating to Hardware | Frame Rate.

The resolution selected in the Gideros Player settings corresponds to the physical device you want to test your application on. All these options give you the flexibility to test your app across different device configurations from within one single desktop player.

Now, when the player is launched, you should see that the start and stop buttons of Gideros Studio are enabled. To run your project, all you need to do is click on the start button.

You might need to launch Gideros Player and Gideros Studio with proper permissions, and even add them to your antivirus or firewall's exceptions list, to allow them to connect.

The IP address and Gideros version of the player should disappear, and you should only see a white screen. That is because we did not actually display any graphical object, such as an image. What we did was print some information to the console, so let's check the Output pane in Gideros Studio.

As you can see in the Output pane, there are some information messages, such as the fact that main.lua was uploaded and that the uploading process to the Gideros Player finished successfully; but it also displays any text we pass to the Lua print command, which in our case was Hello world!. The Output pane is very handy for simple debugging by printing out information using the print command. It also provides error information if something is wrong with the project and it cannot be built. Now that we know what the Output pane is, let's actually display something on the player's screen.

Summary

In this article, you've learned a few features of Gideros Studio, such as installing Gideros on your machine, creating your first project, how to use the Gideros Player, and trying out your first project.
Tappy Defender – Building the home screen

Packt
12 Aug 2015
11 min read
In this article by John Horton, the author of Android Game Programming by Example, we will look at developing the home screen UI for our game.

Creating the project

Fire up Android Studio and create a new project by following these steps:

1. On the welcome page of Android Studio, click on Start a new Android Studio project.
2. In the Create New Project window shown next, we need to enter some basic information about our app. These bits of information will be used by Android Studio to determine the package name. In the following image, you can see the Edit link where you can customize the package name if required. If you will be copy/pasting the supplied code into your project, then use C1 Tappy Defender for the Application name field and gamecodeschool.com in the Company Domain field, as shown in the following screenshot:
3. Click on the Next button when you're ready. When asked to select the form factors your app will run on, we can accept the default settings (Phone and Tablet), so click on Next again.
4. On the Add an activity to mobile dialog, just click on Blank Activity followed by the Next button.
5. On the Choose options for your new file dialog, we can again accept the default settings because MainActivity seems like a good name for our main Activity. So click on the Finish button.

What we did

Android Studio has built the project and created a number of files, most of which you will see and edit during the course of building this game. As mentioned earlier, even if you are just copying and pasting the code, you need to go through this step because Android Studio is doing things behind the scenes to make your project work.

Building the home screen UI

The first and simplest part of your Tappy Defender game is the home screen. All you need is a neat picture with a scene about the game, a high score, and a button to start the game. The finished home screen will look a bit like this:

When you built the project, Android Studio opened up two files ready for you to edit. You can see them as tabs in the following Android Studio UI designer. The files (and tabs) are MainActivity.java and activity_main.xml:

The MainActivity.java file is the entry point to your game, and you will see this in more detail soon. The activity_main.xml file is the UI layout that your home screen will use. Now, you can go ahead and edit the activity_main.xml file so it actually looks like your home screen should.

First of all, your game will be played with the Android device in landscape mode. If you change your UI preview window to landscape, you will see your progress more accurately. Look for the button shown in the next image; it is just preceding the UI preview:

Click on the button shown in the preceding screenshot, and your UI preview will switch to landscape, like this:

Make sure activity_main.xml is open by clicking on its tab. Now, you will set a background image. You can use your own. Add your chosen image to the drawable folder of the project in Android Studio. In the Properties window of the UI designer, find and click on the background property, as shown in the next image:

Also, in the previous image, the button labelled ... is outlined. It is just to the right of the background property. Click on that ... button, then browse to and select the background image file that you will be using.

Next, you need a TextView widget that you will use to display the high score. Note that there is already a TextView widget on the layout. It says Hello World.
You will modify this and use it for your high score. Left-click on and drag the TextView to where you want it. You can copy me if you intend to use the supplied background, or put it where it looks best with your background. Next, in the Properties window, find and click on the id property. Enter textHighScore. You can also edit the text property to say High Score: 99999 or similar so that the TextView looks the part. However, this isn't necessary because your Java code will take care of it later.

Now, drag a button from the widget palette, as shown in the following screenshot:

Drag it to where it looks good on your background. You can copy me if using the supplied background, or put it where it looks best with your background.

What we did

You now have a cool background with neatly arranged widgets (a TextView and a Button) for your home screen. You can add functionality via Java code to the Button widget next; we will revisit the TextView for the player's high score later. The important point is that both widgets have been assigned a unique ID that you can use to reference and manipulate them in your Java code.

Coding the functionality

Now you have a simple layout for your game home screen, and you need to add the functionality that will allow the player to click on the Play button to start the game.

Click on the tab for the MainActivity.java file. The code that was automatically generated for us is not exactly what we need. Therefore, we will start again, as it is simpler and quicker than tinkering with what is already there.

Delete the entire contents of the MainActivity.java file except the package name, and enter the following code in it. Of course, your package name may be different.

    package com.gamecodeschool.c1tappydefender;

    import android.app.Activity;
    import android.os.Bundle;

    public class MainActivity extends Activity{

        // This is the entry point to our game
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            //Here we set our UI layout as the view
            setContentView(R.layout.activity_main);
        }
    }

The mentioned code is the current contents of your main MainActivity class and the entry point of your game, the onCreate method. The line of code that begins with setContentView... is the line that loads our UI layout from activity_main.xml to the player's screen. We can run the game now and see our home screen.

Now, let's handle the Play button on our home screen. Add the two highlighted lines of the following code into the onCreate method, just after the call to setContentView(). The first new line creates a new Button object and gets a reference to the Button in our UI layout. The second line is the code to listen for clicks on the button.

    //Here we set our UI layout as the view
    setContentView(R.layout.activity_main);

    // Get a reference to the button in our layout
    final Button buttonPlay =
        (Button)findViewById(R.id.buttonPlay);

    // Listen for clicks
    buttonPlay.setOnClickListener(this);

Note that you have a few errors in your code. You can resolve these errors by holding down the Alt key and then pressing Enter. This will add an import directive for the Button class.

You still have one error. You need to implement an interface so that your code listens to the button clicks. Modify the MainActivity class declaration as highlighted:

    public class MainActivity extends Activity
            implements View.OnClickListener{

When you implement the onClickListener interface, you must also implement the onClick method.
This is where you will handle what happens when a button is clicked. You can automatically generate the onClick method by right-clicking somewhere after the onCreate method, but within the MainActivity class, and navigating to Generate | Implement methods | onClick(v:View):void. You also need to have Android Studio add another import directive, this time for android.view.View. Use the Alt + Enter keyboard combination again.

You can now scroll to near the bottom of the MainActivity class and see that Android Studio has implemented an empty onClick method for you. You should have no errors in your code at this point. Here is the onClick method:

    @Override
    public void onClick(View v) {
        //Our code goes here
    }

As you only have one Button object and one listener, you can safely assume that any clicks on your home screen are the player pressing your Play button.

Android uses the Intent class to switch between activities. As you need to go to a new activity when the Play button is clicked, you will create a new Intent object and pass the name of your future Activity class, GameActivity, to its constructor. You can then use the Intent object to switch activities. Add the following code to the body of the onClick method:

    // must be the Play button.
    // Create a new Intent object
    Intent i = new Intent(this, GameActivity.class);

    // Start our GameActivity class via the Intent
    startActivity(i);

    // Now shut this activity down
    finish();

Once again, you have errors in your code because you need to generate a new import directive, this time for the Intent class, so use the Alt + Enter keyboard combination again. You still have one error in your code. This is because your GameActivity class does not exist yet. You will now solve this problem.

Creating GameActivity

You have seen that when the player clicks on the Play button, main activity will close and game activity will begin. Therefore, you need to create a new activity called GameActivity, which will be where your game actually executes.

1. From the main menu, navigate to File | New | Activity | Blank Activity.
2. In the Choose options for your new file dialog, change the Activity name field to GameActivity.
3. You can accept all the other default settings from this dialog, so click on Finish.
4. As you did with your MainActivity class, you will code this class from scratch. Therefore, delete the entire code content from GameActivity.java.

What we did

Android Studio has generated two more files for you and done some work behind the scenes that you will investigate soon. The new files are GameActivity.java and activity_game.xml. They are both automatically opened for you in two new tabs, in the same place as the other tabs above the UI designer.

You will never need activity_game.xml because you will build a dynamically generated game view, not a static UI. Feel free to close that now or just ignore it. You will come back to the GameActivity.java file when you start to code your game for real.

Configuring the AndroidManifest.xml file

We briefly mentioned that when we create a new project or a new activity, Android Studio does more than just create two files for us. This is why we create new projects/activities the way we do. One of the things going on behind the scenes is the creation and modification of the AndroidManifest.xml file in the manifests directory. This file is required for your app to work. Also, it needs to be edited to make your app work the way you want it to.
Android Studio has automatically configured the basics for you, but you will now do two more things to this file. By editing the AndroidManifest.xml file, you will force both of your activities to run full screen, and you will also lock them to a landscape layout. Let's make these changes here:

1. Open the manifests folder now, and double-click on the AndroidManifest.xml file to open it in the code editor.
2. In the AndroidManifest.xml file, find the following line of code:

    android:name=".MainActivity"

3. Immediately following it, type or copy and paste these two lines to make MainActivity run full screen and lock it in the landscape orientation:

    android:theme="@android:style/Theme.NoTitleBar.Fullscreen"
    android:screenOrientation="landscape"

4. In the AndroidManifest.xml file, find the following line of code:

    android:name=".GameActivity"

5. Immediately following it, type or copy and paste these two lines to make GameActivity run full screen and lock it in the landscape orientation:

    android:theme="@android:style/Theme.NoTitleBar.Fullscreen"
    android:screenOrientation="landscape"

When you are done, each activity entry should look something like the sketch shown next.
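For reference, here is roughly how the finished MainActivity declaration might read after these edits. This is a sketch rather than this article's exact file: attribute order can vary, and the label and intent-filter shown are the ones Android Studio typically generates for a new project.

    <activity
        android:name=".MainActivity"
        android:label="@string/app_name"
        android:theme="@android:style/Theme.NoTitleBar.Fullscreen"
        android:screenOrientation="landscape" >
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
    </activity>

The GameActivity entry gets the same two attributes, minus the launcher intent-filter.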
What you did

You have now configured both activities of your game to be full screen. This gives a much more pleasing appearance for your player. In addition, you have disabled the player's ability to affect your game by rotating their Android device.

Continue building on what you've learnt so far with Android Game Programming by Example! Learn to implement the game rules, game mechanics, and game objects such as guns, life, money; and of course, the enemy. You even get to control a spaceship!
Project Setup and Modeling a Residential Project

Packt
08 Jul 2015
20 min read
In this article by Scott H. MacKenzie and Adam Rendek, authors of the book ArchiCAD 19 – The Definitive Guide, we will see how our journey into ArchiCAD 19 begins with an introduction to the graphic user interface, also known as the GUI. As with any software program, there is a menu bar along the top that gives access to all the tools and features. There are also toolbars and tool palettes that can be docked anywhere you like. In addition to this, there are some special palettes that pop up only when you need them.

After your introduction to ArchiCAD's user interface, you can jump right in and start creating the walls and floors for your new house. Then you will learn how to create ceilings and stairs. Before too long you will have a 3D model to orbit around. It is really fun and probably easier than you would expect.

The ArchiCAD GUI

The first time you open ArchiCAD, you will find the toolbars along the top, just under the menu bar, and there will be palettes docked to the left and right of the drawing area. We will focus on the following three palettes to get started:

- The Toolbox palette: This contains all of your selection, modeling, and drafting tools. It will be located on the left-hand side by default.
- The Info Box palette: This is your context menu that changes according to whatever tool is currently in use. By default, this will be located directly under the toolbars at the top. It has a scrolling function; hover your cursor over the palette and spin the scroll wheel on your mouse to reveal everything on the palette.
- The Navigator palette: This is your project navigation window. This palette gives you access to all your views, sheets, and lists. It will be located on the right-hand side by default.

These three palettes can be seen in the following screenshot:

All of the mentioned palettes are dockable and can be arranged however you like on your screen. They can also be dragged away from the main ArchiCAD interface. For instance, you could have palettes on a second monitor.

Panning and Zooming

ArchiCAD has the same panning and zooming interface as most other CAD (computer-aided design) and BIM (Building Information Modeling) programs. Rolling the scroll wheel on your mouse will zoom in and out. Pressing down on the scroll wheel (or middle button) and moving your cursor will execute a pan. Each drawing view window has a row of zoom commands along the bottom. You should try each one to get familiar with its function.

View toggling

When you have multiple views open, you can toggle through them by pressing the Ctrl key and tapping on the Tab key. Or, you can pick any of the open views from the bottom of the Window pull-down menu. Pressing the F2 key will open a 2D floor plan view and pressing the F3 key will open the default 3D view. Pressing the F5 key will open a 3D view of selected items. In other words, if you want to isolate specific items in a 3D view, select those items and press F5.

The function keys are second nature to those who have been using ArchiCAD for a long time. If a feature has a function key shortcut, you should use it.

Project setup

ArchiCAD is available in multiple language versions. The exercises in this book use the USA version of ArchiCAD, which is in English. There is another version in English, referred to as the International (INT) version.
You can use the International version to do the exercises in the book; just be aware that there may be some subtle differences in the way that something is named or designed.

When you create a new project in ArchiCAD, you start by opening a project template. The template will have all the basic stuff you need to get started, including layers, line types, wall types, doors, windows, and more. The following lesson will take you through the first steps in creating a new ArchiCAD project:

1. Open ArchiCAD. The Start ArchiCAD dialog box will appear.
2. Select the Create a New Project radio button at the top.
3. Select the Use a Template radio button under Set up Project Settings.
4. Select ArchiCAD 19 Residential Template.tpl from the drop-down list. If you have the International version of ArchiCAD, the residential template may not be available; you can use ArchiCAD 19 Template.tpl instead.
5. Click on New. This will open a blank project file.

Project Settings

Now that you have opened your new project, we are going to create a house with 4 stories (which includes a story for the roof). We create a story for the roof in order to provide a workspace to model the elements on that level. The template we just opened only has 2 stories, so we will need to add 2 more. Then we need to look at some other settings.

Stories

The settings for the stories are as follows:

1. On the Navigator palette, select the Project Map icon.
2. Double-click on 1st FLOOR.
3. Right-click on Stories and select Create New Story.
4. You will be prompted to give the new story a name. Enter the name BASEMENT.
5. Click on the button next to Below.
6. Enter 9' into the Height box and click on the Create button.
7. Then double-click on 2. 2nd FLOOR.
8. Right-click on Stories and then select Create New Story.
9. You will be prompted to give the new story a name. Enter the name ROOF.
10. Click on the button next to Above.
11. Enter 9' into the Height box and click on the Create button.

Your list of stories should now look like this:

3. ROOF
2. 2nd Floor
1. 1st Floor
-1. BASEMENT

The International version of ArchiCAD (INT) will give the first floor the index number of 0, the second floor the index number of 1, and the roof will be 2.

Now we need to adjust the heights of the other stories:

1. Right-click on Stories (on the Navigator palette) and select Story Settings.
2. Change the number in the Height to Next box for 1st FLOOR to 9'.
3. Do the same for 2nd FLOOR.

Units

On the menu bar, go to Options | Project Preferences | Working Units and perform the following steps:

1. Ensure Model Units is set to feet & fractional inches.
2. Ensure that Fractions is set to 1/64.
3. Ensure that Layout Units is set to feet & fractional inches.
4. Ensure that Angle Unit is set to Decimal degrees.
5. Ensure that Decimals is set to 2.

You are now ready to begin modeling your house, but first let's save the project. To save the project, perform the following steps:

1. Navigate to the File menu and click on Save. If by chance you have saved it already, then click on Save As.
2. Name your file Colonial House.
3. Click on Save.

Renovation filters

The Renovation Filter feature allows you to differentiate how your drawing elements will appear in different construction phases. For renovation projects that have demolition and new work phases, you need to show the items to be demolished differently than the existing items that are to remain, or that are new. The projects we will work on in this book do not require this feature to manage phases because we will only be creating new construction.
However, it is essential that your renovation filter setting is set to New Construction. We will do this in the first modeling exercise.

Selection methods

Before you can do much in ArchiCAD, you need to be familiar with selecting elements. There are several ways to select something in ArchiCAD, which are as follows:

Single cursor click

Pick the Arrow tool from the toolbox, or hold the Shift key down on the keyboard, and click on what you want to select. As you click on the elements, hold the Shift key down to add them to your selection set. To remove elements from the selection set, just click on them again with the Shift key pressed.

There is a mode within this mode called Quick Selection. It is toggled on and off from the Info Box palette; the icon looks like a magnet. When it is on, it works like a magnet because it will stick to faces or surfaces, such as slabs or fill patterns. If this mode is not on, then you are required to find an edge, endpoint, or hotspot node to select an element with a single click. Hold the Space bar down to temporarily change the mode while selecting elements.

Window

Pick the Arrow tool from the toolbox, or hold the Shift key down, and draw your selection window. Click once for the window's starting corner and click a second time for the end corner. This works just as windowing does in AutoCAD, not as it does in Revit, where you need to hold the mouse button down while you draw your window. There are three different windowing methods, each one set from the Info Box palette:

- Partial Elements: Anything that is inside of or touching the window will be selected. AutoCAD users will know this as a crossing window.
- Entire Elements: Anything completely encapsulated by the window will be selected. If something is not completely inside the window, then it will not be selected.
- Direction Dependent: Click and window to the left, and the Partial Elements window will be used. Click and window to the right, and the Entire Elements window will be used.

Marquee

A marquee is a selection window that stays on the screen after you create it. If you are a MicroStation CAD program user, this will be similar to a selection window. It can be used for printing a specific area in a drawing view and for performing what AutoCAD users would refer to as a Stretch command. There are two types of marquees: single story (skinny) and multi story (fat). The single story marquee is used when you want to select elements on your current story view only. The multi-story marquee will select everything on your current story as well as the stories above and below your selections.

The Find & Select tool

This lets ArchiCAD select elements for you, based on the attribute criteria that you define, such as element type, layer, and pen number. When you have the criteria defined, click on the plus sign button on the palette, and all the elements matching that criteria inside your current view or marquee will be selected. The quickest way to open the Find & Select tool is with the Ctrl + F key combination.

Modification commands

As you draw, you will inevitably need to move, copy, stretch, or trim something. Select your items first, and then execute the modification command.
Here are the basic commands you will need to get things moving:

- Adjust (Extend): Press Ctrl + - or navigate to Edit | Reshape | Adjust
- Drag (Move): Press Ctrl + D or navigate to Edit | Move | Drag
- Drag a Copy (Copy): Press Ctrl + Shift + D or navigate to Edit | Move | Drag a Copy
- Intersect (Fillet): Click on the Intersect button on the Standard toolbar or navigate to Edit | Reshape | Intersect
- Resize (Scale): Press Ctrl + K or navigate to Edit | Reshape | Resize
- Rotate: Press Ctrl + E or navigate to Edit | Move | Rotate
- Stretch: Press Ctrl + H or navigate to Edit | Reshape | Stretch
- Trim: Press Ctrl or click on the Trim button on the Standard toolbar, or navigate to Edit | Reshape | Trim. Hold the Ctrl key down and click on the portion of wall or line that you want trimmed off. This is the fastest way to trim anything!

Memorizing the keyboard combinations above is a sure way to increase your productivity.

Modeling – part I

We will start with the wall tool to create the main exterior walls on the 1st floor of our house, and then create the floor with the slab tool. However, before we begin, let's make sure your Renovation Filter is set to New Construction.

Setting the Renovation Filter

The Renovation Filter is an active setting that controls how the elements you create are displayed. Everything we create in this project is for new construction, so we need the New Construction filter to be active. To do so, go to the Document menu, click on Renovation, and then click on 04 New Construction.

Using the Wall tool

The Wall tool has settings for height, width, composite, layer, pen weight, and more. We will learn about these things as we go along, a little bit more each time we progress into the project.

1. Double-click on 1. 1st Story in the Navigator palette to ensure we are working on story 1.
2. Select the Wall tool from the Toolbox palette or from the menu bar under Design | Design Tools | Wall. Notice that this will automatically change the contents of the Info Box palette.
3. Click on the wall icon inside Info Box. This will bring up the active properties of the wall tool in the form of the Wall Default Settings window. (This can also be achieved by double-clicking on the wall tool button in the Toolbox.)
4. Change the composite type to Siding 2x6 Wd. Stud. Click on the wall composite button to do this.

Creating the exterior walls of the 1st Story
If your Tracker palette does not appear, it may be toggled off. Go up to the Standard toolbar and click on the Tracker button to turn it on.

7. Start your second wall by clicking on the upper-left end corner of your first wall. Move your cursor down so that it snaps to the guide line, enter the number 28, and press the Enter key.
8. Draw your third wall by clicking on the bottom-left endpoint of your second wall; move your cursor to the right, snapped over the guide line, type in the number 24, and press Enter.
9. Draw your fourth wall by clicking on the bottom-right endpoint of your third wall and then on the starting point of your first wall.

You should now have four walls that measure 24'-0" x 28'-0", outside edge to outside edge.

Move your four walls to the center of the drawing view by performing the following steps:

1. Click on the Arrow tool at the top of the Toolbox.
2. Click outside one of the corners of the walls, and then click on the opposite side. All four walls should now be selected.
3. Use the Drag command to move the walls. The quickest way to activate the Drag command is by pressing Ctrl + D. The long way is from the menu bar, by navigating to Edit | Move | Drag.
4. Drag (move) the walls to the center of your drawing window.
5. Press the Esc key or click on a blank space in your drawing window to deselect the walls.

You can select all the walls in a view by activating the Wall tool and pressing Ctrl + A.

You are now ready to create a floor with the Slab tool. But first, let's have a little fun and see how it looks in 3D (press the F3 key):

1. From the Navigator palette, double click on Generic Axonometry under the 3D folder icon. This will open a 3D view window.
2. Hold your Shift key down, press down on your scroll wheel button, and slowly move your mouse around. You are now orbiting! Play around with it a little, then get back to work and go to the next step to create your first floor slab.
3. Press the F2 key to get back to a 2D view.

You can also perform a 3D orbit via the Orbit button at the bottom of any 3D view window.

Creating the first story's floor with the Slab tool

The Slab tool is used to create floors. It is also used to create ceilings. We will use it now to create the first floor for our house. Similar to the Wall tool, it has settings for layer, pen weight, and composite. To create the first story's floor using the Slab tool, perform the following steps:

1. Select the Slab tool from the Toolbox palette or from the menu bar under Design | Design Tools | Slab. This will change the contents of the Info Box palette.
2. Click on the Slab icon in Info Box. This will bring up the Slab Default Settings (active properties) window for the Slab tool. As with the Wall tool, you have a composite setting for the Slab tool. Set the composite type to FLR Wd Flr + 2x10. The layer should be set to A-FLOR. Click on OK.
3. You could draw the shape of the slab by tracing over the outside lines of your walls, but we are going to use the Magic Wand feature instead. Hover your cursor over the space inside your four walls and press the Space bar on your keyboard. This will automatically create the slab using the boundary created by the walls.
4. Open a 3D view and look at your floor.

Instead of using the tool icon inside the Info Box palette, you can double click on any tool icon inside the Toolbox palette to bring up the default settings window for that tool.
Creating the exterior walls and floor slabs for the basement and the second story

We could repeat all of the previous steps to create the floor and walls for the second story and the basement, but in this case it will be quicker to copy what we have already drawn on the first story using the Edit Elements by Stories tool. Perform the following steps to create the exterior walls and floor slabs for the basement and second story:

1. Go to the Navigator palette, right click over Stories, and select Edit Elements by Stories. The Edit Elements by Stories window will open.
2. Under Select Action, set it to Copy.
3. Under From Story, set it to 1. 1st FLOOR.
4. In the To Story section, check the boxes for 2nd FLOOR and -1. BASEMENT. Click on OK.
5. You should see a dialog box appear, stating that as a result of the last operation, elements have been created and/or have changed their position on currently unseen stories. Whenever you get this message, you should confirm that you have not created any unwanted elements. Click on the Continue button.

Now you should have walls and a floor on three stories: BASEMENT, 1st FLOOR, and 2nd FLOOR.

The quickest way to jump to the next story up or down is with the Ctrl + Arrow Up or Ctrl + Arrow Down key combination.

Basement element modification

The floor and the walls on the BASEMENT story need to be changed to a different composite type. Do this by performing the following steps:

1. Open the BASEMENT view and select the four walls by clicking on them one at a time while holding down the Shift key.
2. Right click over your selection and click on Wall Selection Settings.
3. Change the walls to the EIFS on 8" CMU composite type. Then, click on OK.
4. Move your cursor over the floor slab. The quick selection cursor should appear. This selection mode allows you to click on an object without needing to find an edge or endpoint. Click on the slab.
5. Open the Slab Selection Settings window, but this time do it by pressing the Ctrl + T key combination.
6. Change the floor slab composite to Conc. Slab: 4" on gravel. Click on OK.

The Ctrl + T key combination is the quickest way to bring up an element's selection settings window when an element is selected.

Open a 3D view (by pressing the F3 key) and orbit around your house. It should look similar to the following screenshot:

Adding the garage

We need to add the garage and the laundry room, which connects the garage to the house. Do this by performing the following steps:

1. Open the 1st FLOOR story from the Project Map.
2. Start the Wall tool. From the Info Box palette, set the wall composite setting to Siding 2x6 Wd. Stud.
3. Click on the upper-left corner of your house for your wall starting point. Move your cursor to the left, snap to the guide line, type 6'-10", and press Enter.
4. Change the Geometry Method setting on Info Box to Chained. Refer to the following screenshot:
5. Start your next wall by clicking on the endpoint of your last wall, move your cursor up, snap to the guide line, type 5', and press Enter.
6. Move your cursor to the left, snap to the guide line, type in 12'-6", and press Enter.
7. Move your cursor down, snap to the guide line, type in 22'-4", and press Enter.
8. Move your cursor to the right, snap to the guide line, and double click on the perpendicular west wall (double pressing your Enter key works the same as a double click).

Now we want to create the floor for this new set of walls. To do that, perform the following steps:

1. Start the Slab tool.
2. Change the composite to Conc. Slab: 4" on gravel.
3. Hover your cursor inside the new set of walls and press the Space bar to use the Magic Wand. This will create the floor slab for the garage and laundry room.

There is still one more wall to create, but this time we will use the Adjust command to, in effect, create a new wall:

1. Select the 5'-0" wall drawn in the previous exercise.
2. Go to the Edit menu, click on Reshape, and then click on Adjust.
3. Click on the bottom edge of the perpendicular wall down below. The wall should extend down. Refer to the following screenshot:

Then change to a 3D view (by pressing F3) and examine your work.

The 3D view

If you switch to a 3D view and your new modeling does not show, zoom in or out to refresh the view, or double click your scroll wheel (middle button). Your new work will appear.

Summary

In this article, you were introduced to the ArchiCAD graphical user interface (GUI) and project settings, and you learned how to select elements. You created all the major modeling for your house and got a primer on layers. You should now have a good understanding of the ArchiCAD way of creating architectural elements and how to control their parameters.

Guidelines for Setting Up the OUYA ODK

Packt
07 May 2014
Starting with the OUYA Development Kit

The OUYA Development Kit (OUYA ODK) is a toolset for creating games and applications for the OUYA console; its extensions and libraries are distributed in the .jar format. It is released under Apache License Version 2.0. The OUYA ODK contains the following folders:

Licenses: Games and applications built with the SDK depend on various open source libraries. This folder contains all the necessary licenses for successfully compiling and publishing a project to the OUYA console, or testing it in the emulator.
Samples: This folder contains some example scenes, which help show users how to use the Standard Development Kit.
Javadoc: This folder contains the documentation of the Java classes, methods, and libraries.
Libs: This folder contains the .jar files for the OUYA Java classes and their dependencies for the development of applications for the OUYA console.
OUYA Framework APK file: This file contains the core of the OUYA software environment, which allows a project based on the OUYA environment to be displayed.
OUYA Launcher APK file: This file contains the OUYA launcher that displays the generated .apk file.

The ODK plugin within Unity3D

Download the Unity3D plugin for OUYA. In the developer portal, you will find these resources at https://github.com/ouya/ouya-unity-plugin. After downloading the ODK plugin, unzip the file to the desktop and import the ODK plugin for Unity3D into the engine's Assets folder; you will find several folders in it, including the following ones:

Ouya: This folder contains the Examples, SDK, and StarterKit folders
LitJson: This folder contains libraries that are important for compilation
Plugins: This folder contains the Android folder, which is required for mobile projects, and "The Last Maya" example created with the .ngui extension

Importing the ODK plugin within Unity3D

The OUYA Unity plugin can be imported into the Unity IDE. Navigate to Assets | Import Package | Custom Package.... Find the Ouya Unity Plugin folder on the desktop and import all the files. The package is divided into Core and Examples. The Core folder contains the OUYA panel and all the code for building a project for the console. The Core and Examples folders can be used as individual packages and exported from the menu, as shown in the following screenshot:

Installing and configuring the ODK plugin

First, launch the Unity3D application, navigate to File | Open Project, and select the folder where you placed the OUYA Unity plugin. You can check whether the Ouya Unity Plugin has been imported successfully by looking at the window menu at the top of Unity3D, where the toolbars are located. In this manner, you can review the various components of the OUYA panel.

When the OUYA panel loads, a window will be displayed with the following sections and buttons:

Build Application: This button compiles and builds the project, creating an Android Application Package (APK) file
Build and Run Application: This button compiles the application, generates an APK, and then runs it on the emulator or publishes it directly to a device connected to the computer
Compile: This button compiles the entire solution

The lower section displays the paths of the different libraries.
Before using the OUYA plugin, you should edit the fields in the PlayerSettings window (specifically the Bundle Identifier field), set the Minimum API Level field to API level 16, and set the Default Orientation field to Landscape Left. Another mandatory button is the Bundle Identifier synchronizer, which synchronizes the Android manifest file (XML) with the identifiers of the Java packages. Remember that the package ID must be unique for each game and has to be edited to avoid synchronization problems. Also, the OuyaGameObject (shown in the following screenshot) is very important for in-app purchases:

The OUYA panel

The Unity tab in the OUYA panel shows the path of the Unity JAR file, which houses the file's JAR class. This file is important because it is the one that communicates with the Unity Web Player. This Unity tab is shown in the following screenshot:

The Java JDK tab shows the paths of the Java Runtime installation, with all its different components, needed to properly compile a project for Android and OUYA, as shown in the following screenshot:

The Android SDK tab displays the current version of the SDK and contains the paths of the different components of the SDK: the Android JAR path, and the ADB, APT, and SDK paths, as shown in the following screenshot. These paths must correspond to the PATH environment variable of the operating system.

Finally, the last tab of the OUYA panel, Android NDK, shows the installation path of the C++ toolchain for native builds, as shown in the following screenshot:

Installing and configuring the Java class

If at this point you want to perform native development using the NDK, or you have problems opening or compiling the OUYA project, you need to configure the Java files. To install and configure the Java class, perform the following steps:

1. Download and install JDK 1.6 and configure the Java Runtime path in the PATH environment variable.
2. Next, set a unique bundle identifier, such as com.yourcompany.gametitle, and hit the Sync button so your packages and manifest match.
3. Create a game in the developer portal that uses the bundle ID.
4. Download that signing key (key.der) and save it in Unity.
5. Compile the Java plugin and the Java application.
6. Input your developer UUID from the developer portal into the OuyaGameObject in the scene.

Basic Concepts

Packt
23 Oct 2013
Scene and Actors

You must have heard the quote written by William Shakespeare: "All the world's a stage, and all the men and women merely players: they have their exits and their entrances; and one man in his time plays many parts, his acts being seven ages."

As per my interpretation, he wanted to say that this world is like a stage, and human beings are like players or actors who perform their roles in it. Every actor may have his own discrete personality and influence, but there is only one stage, with a finite area, predefined props, and lighting conditions. In the same way, a world in PhysX is known as a scene, and the players performing their roles are known as actors.

A scene defines the properties of the world in which a simulation takes place, and its characteristics are shared by all of the actors created in the scene. A good example of a scene property is gravity, which affects all of the actors being simulated in a scene, although different actors can have different properties, independent of the scene. An instance of a scene can be created using the PxScene class.

An actor is an object that can be simulated in a PhysX scene. It can have properties such as shape, material, transform, and so on. An actor can be further classified as a static or a dynamic actor. If it is static, think of it as a prop or stationary object on a stage that is always in a fixed position, immovable by the simulation; if it is dynamic, think of it as a human or any other movable object on the stage whose position can be updated by the simulation. Dynamic actors can have properties like mass, momentum, velocity, or any other rigid body related property. An instance of a static actor can be created by calling the PxPhysics::createRigidStatic() function; similarly, an instance of a dynamic actor can be created by calling the PxPhysics::createRigidDynamic() function. Both functions require a single parameter of type PxTransform, which defines the position and orientation of the created actor.

Materials

In PhysX, a material is the property of a physical object that defines the friction and restitution properties of an actor, and it is used to resolve collisions with other objects. To create a material, call PxPhysics::createMaterial(), which requires three arguments of type PxReal; these represent static friction, dynamic friction, and restitution, respectively. A typical example of creating a PhysX material is as follows:

PxMaterial* mMaterial = gPhysicsSDK->createMaterial(0.5,0.5,0.5);

Static friction represents the friction exerted on a rigid body when it is in a resting position, and its value can vary from 0 to infinity. Dynamic friction, on the other hand, applies to a rigid body only when it is moving, and its value should always be between 0 and 1. Restitution defines the bounciness of a rigid body, and its value should also be between 0 and 1; the closer its value is to 1, the bouncier the body will be. All of these values can be tweaked to make an object behave as bumpy as a Ping-Pong ball or as slippery as ice when it interacts with other objects.

Shapes

When we create an actor in PhysX, there are some other properties, like its shape and material, that need to be defined and are used further as function parameters to create the actor. A shape in PhysX is a collision geometry that defines the collision boundaries of an actor. An actor can have more than one shape to define its collision boundary.
Shapes can be created by calling PxRigidActor::createShape(), which needs at least one parameter each of type PxGeometry and PxMaterial, respectively. A typical example of creating a PhysX shape for an actor is as follows:

PxMaterial* mMaterial = gPhysicsSDK->createMaterial(0.5,0.5,0.5);
PxRigidDynamic* sphere = gPhysicsSDK->createRigidDynamic(spherePos);
sphere->createShape(PxSphereGeometry(0.5f), *mMaterial);

An actor of type PxRigidStatic, which represents static actors, can have shapes such as a sphere, capsule, box, convex mesh, triangle mesh, plane, or height field. The permitted shapes for actors of the PxRigidDynamic type, which represents dynamic actors, depend on whether the actor is flagged as kinematic or not. If the actor is flagged as kinematic, it can have all of the shapes available to an actor of the PxRigidStatic type; otherwise, it can have shapes such as a sphere, capsule, box, or convex mesh, but not a triangle mesh, a plane, or a height field.

Creating the first PhysX 3 program

Now we have enough understanding to create our first PhysX program. In this program, we will initialize the PhysX SDK, create a scene, and then add two actors. The first actor will be a static plane that acts as the ground, and the second will be a dynamic cube positioned a few units above the plane. Once the simulation starts, the cube should fall onto the plane under the effect of gravity. Because this is our first PhysX code, to keep it simple, we will not draw any actor visually on the screen. We will just print the position of the falling cube on the console until it comes to rest.

We will start our code by including the required header files. PxPhysicsAPI.h is the main header file for PhysX and includes the entire PhysX API in a single header. Later on, you may want to selectively include only the header files that you need, which will help to reduce the application size. We also load the three most frequently used precompiled PhysX libraries for both the Debug and Release platform configurations of the VC++ 2010 Express compiler. In addition to the std namespace, which is a part of standard C++, we also need to add the physx namespace for PhysX, as follows:

#include <iostream>
#include <PxPhysicsAPI.h> //PhysX main header file

//-------Loading PhysX libraries----------
#ifdef _DEBUG
#pragma comment(lib, "PhysX3DEBUG_x86.lib")
#pragma comment(lib, "PhysX3CommonDEBUG_x86.lib")
#pragma comment(lib, "PhysX3ExtensionsDEBUG.lib")
#else
#pragma comment(lib, "PhysX3_x86.lib")
#pragma comment(lib, "PhysX3Common_x86.lib")
#pragma comment(lib, "PhysX3Extensions.lib")
#endif

using namespace std;
using namespace physx;

Initializing PhysX

To initialize the PhysX SDK, we first need to create an object of type PxFoundation by calling the PxCreateFoundation() function. This requires three parameters: the version ID, an allocator callback, and an error callback. The first parameter prevents a mismatch between the headers and the corresponding SDK DLL(s). The allocator callback and error callback are specific to an application, but the SDK also provides a default implementation, which is used in our program. The foundation object is needed to initialize the higher-level SDKs.
The code snippet for creating the foundation of the PhysX SDK is as follows:

static PxDefaultErrorCallback gDefaultErrorCallback;
static PxDefaultAllocator gDefaultAllocatorCallback;
static PxFoundation* gFoundation = NULL;

//Creating foundation for PhysX
gFoundation = PxCreateFoundation(PX_PHYSICS_VERSION, gDefaultAllocatorCallback, gDefaultErrorCallback);

After creating an instance of the foundation class, we finally create an instance of the PhysX SDK by calling the PxCreatePhysics() function. This requires three parameters: the version ID, a reference to the PxFoundation object we created earlier, and a PxTolerancesScale object. The PxTolerancesScale parameter makes it easier to author content at different scales and still have PhysX work as expected; however, to get started, we simply pass a default object of this type. We make sure that the PhysX device was created correctly by comparing it with NULL. If the object is not equal to NULL, the device was created successfully. The code snippet for creating an instance of the PhysX SDK is as follows:

static PxPhysics* gPhysicsSDK = NULL;

//Creating instance of PhysX SDK
gPhysicsSDK = PxCreatePhysics(PX_PHYSICS_VERSION, *gFoundation, PxTolerancesScale());

if(gPhysicsSDK == NULL)
{
    cerr<<"Error creating PhysX3 device, Exiting..."<<endl;
    exit(1);
}

Creating a scene

Once the PhysX device has been created, it's time to create a PhysX scene and then add actors to it. You can create a scene by calling PxPhysics::createScene(), which requires an instance of the PxSceneDesc class as a parameter. The PxSceneDesc object contains the description of the properties that are required to create a scene, such as gravity. The code snippet for creating a PhysX scene is as follows:

PxScene* gScene = NULL;

//Creating scene
PxSceneDesc sceneDesc(gPhysicsSDK->getTolerancesScale());
sceneDesc.gravity = PxVec3(0.0f, -9.8f, 0.0f);
sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(1);
sceneDesc.filterShader = PxDefaultSimulationFilterShader;
gScene = gPhysicsSDK->createScene(sceneDesc);

Then, an instance of PxMaterial is created, which will be used as a parameter when creating the actors:

//Creating material
//static friction, dynamic friction, restitution
PxMaterial* mMaterial = gPhysicsSDK->createMaterial(0.5,0.5,0.5);

Creating actors

Now it's time to create the actors. Our first actor is a plane that will act as the ground. When we create a plane in PhysX, its default orientation is vertical, like a wall, but we want it to act like a ground plane, so we have to rotate it by 90 degrees so that its normal faces upwards. This can be done using the PxTransform class, which positions and rotates an actor in 3D world space. Because we want to position the plane at the origin, we use PxVec3(0.0f, 0.0f, 0.0f) as the first parameter of PxTransform; this will position the plane at the origin. We also want to rotate the plane along the z axis by 90 degrees, so we use PxQuat(PxHalfPi, PxVec3(0.0f, 0.0f, 1.0f)) as the second parameter. Now we have created a rigid static actor, but we don't have any shape defined for it. We will do this by calling the createShape() function, passing PxPlaneGeometry() as the first parameter, which defines the plane shape, and a reference to the mMaterial that we created before as the second parameter.
Finally, we add the actor by calling PxScene::addActor() and passing the reference of the plane, as shown in the following code:

//1) Creating static plane
PxTransform planePos = PxTransform(PxVec3(0.0f, 0.0f, 0.0f), PxQuat(PxHalfPi, PxVec3(0.0f, 0.0f, 1.0f)));
PxRigidStatic* plane = gPhysicsSDK->createRigidStatic(planePos);
plane->createShape(PxPlaneGeometry(), *mMaterial);
gScene->addActor(*plane);

The next actor we want to create is a dynamic actor with box geometry, situated 10 units above our static plane. A rigid dynamic actor can be created by calling the PxCreateDynamic() function, which requires five parameters of type PxPhysics, PxTransform, PxGeometry, PxMaterial, and PxReal, respectively. Because we want to place it 10 units above the origin, the first parameter of PxTransform will be PxVec3(0.0f, 10.0f, 0.0f). Notice that the y component of the vector is 10, which places the box 10 units above the origin. Also, we want it at its default identity rotation, so we skip the second parameter of the PxTransform class. An instance of PxBoxGeometry also needs to be created, which requires a PxVec3 parameter describing the dimensions of the cube as half extents. We finally add the created actor to the PhysX scene by calling PxScene::addActor() and providing the reference of gBox as the function parameter:

//2) Create cube
PxRigidDynamic* gBox;
PxTransform boxPos(PxVec3(0.0f, 10.0f, 0.0f));
PxBoxGeometry boxGeometry(PxVec3(0.5f, 0.5f, 0.5f));
gBox = PxCreateDynamic(*gPhysicsSDK, boxPos, boxGeometry, *mMaterial, 1.0f);
gScene->addActor(*gBox);

Simulating PhysX

Simulating a PhysX program means calculating the new positions of all of the PhysX actors that are under the effect of Newton's laws for the next time frame. Simulation requires a time value, also known as the time step, which moves time forward in the PhysX world. We use the PxScene::simulate() method to advance the time in the PhysX world. Its simplest form requires one parameter of type PxReal, which represents the time in seconds; this should always be more than 0, or else the resulting behavior will be undefined. After this, you need to call PxScene::fetchResults(), which allows the simulation to finish and return the results. The method takes an optional Boolean parameter; setting this to true indicates that the simulation should wait until it is completed, so that on return the results are guaranteed to be available:

//Stepping PhysX
PxReal myTimestep = 1.0f/60.0f;
void StepPhysX()
{
    gScene->simulate(myTimestep);
    gScene->fetchResults(true);
}

We will simulate our PhysX program in a loop until the dynamic actor (the box) we created 10 units above the ground falls and comes to an idle state. The position of the box is printed on the console for each time step of the simulation. By observing the console output, you can see that the position of the box is initially (0, 10, 0), but the y component, which represents the vertical position of the box, decreases under the effect of gravity during the simulation. At the end of the loop, you can also observe that the position of the box is the same in each simulation step; this means the box has hit the ground and is now in an idle state.
//Simulate PhysX 300 times
for(int i=0; i<=300; i++)
{
    //Step PhysX simulation
    if(gScene)
        StepPhysX();

    //Get current position of actor (box) and print it
    PxVec3 boxPos = gBox->getGlobalPose().p;
    cout<<"Box current Position ("<<boxPos.x<<" "<<boxPos.y<<" "<<boxPos.z<<")\n";
}

Shutting down PhysX

Now that our PhysX simulation is done, we need to destroy the PhysX-related objects and release the memory. Calling the PxScene::release() method removes all actors, particle systems, and constraint shaders from the scene. Calling PxPhysics::release() shuts down the entire physics system. Afterwards, call PxFoundation::release() to release the foundation object, as follows:

void ShutdownPhysX()
{
    gScene->release();
    gPhysicsSDK->release();
    gFoundation->release();
}

Summary

We finally created our first PhysX program and walked through its steps from start to finish. To keep our first PhysX program short and simple, we just used the console to display the actor's position during the simulation, which is not very exciting, but it is the simplest way to start with PhysX.
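For reference, the snippets walked through in this article can be assembled into a single program. The following minimal sketch reuses only the PhysX calls already demonstrated above; the #pragma comment library setup from the beginning of the article is still assumed, and error handling is kept to a bare minimum:

#include <iostream>
#include <PxPhysicsAPI.h>

using namespace std;
using namespace physx;

//Default callbacks, as used earlier in this article
static PxDefaultErrorCallback gDefaultErrorCallback;
static PxDefaultAllocator gDefaultAllocatorCallback;

int main()
{
    //1) Foundation and SDK
    PxFoundation* gFoundation = PxCreateFoundation(PX_PHYSICS_VERSION,
        gDefaultAllocatorCallback, gDefaultErrorCallback);
    PxPhysics* gPhysicsSDK = PxCreatePhysics(PX_PHYSICS_VERSION,
        *gFoundation, PxTolerancesScale());
    if(gPhysicsSDK == NULL)
    {
        cerr<<"Error creating PhysX3 device, Exiting..."<<endl;
        return 1;
    }

    //2) Scene with gravity and default dispatcher/filter shader
    PxSceneDesc sceneDesc(gPhysicsSDK->getTolerancesScale());
    sceneDesc.gravity = PxVec3(0.0f, -9.8f, 0.0f);
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(1);
    sceneDesc.filterShader = PxDefaultSimulationFilterShader;
    PxScene* gScene = gPhysicsSDK->createScene(sceneDesc);

    //3) Material, static ground plane, and dynamic cube
    PxMaterial* mMaterial = gPhysicsSDK->createMaterial(0.5, 0.5, 0.5);

    PxTransform planePos = PxTransform(PxVec3(0.0f, 0.0f, 0.0f),
        PxQuat(PxHalfPi, PxVec3(0.0f, 0.0f, 1.0f)));
    PxRigidStatic* plane = gPhysicsSDK->createRigidStatic(planePos);
    plane->createShape(PxPlaneGeometry(), *mMaterial);
    gScene->addActor(*plane);

    PxRigidDynamic* gBox = PxCreateDynamic(*gPhysicsSDK,
        PxTransform(PxVec3(0.0f, 10.0f, 0.0f)),
        PxBoxGeometry(PxVec3(0.5f, 0.5f, 0.5f)), *mMaterial, 1.0f);
    gScene->addActor(*gBox);

    //4) Step the simulation 300 times and print the cube's position
    PxReal myTimestep = 1.0f/60.0f;
    for(int i=0; i<=300; i++)
    {
        gScene->simulate(myTimestep);
        gScene->fetchResults(true);
        PxVec3 boxPos = gBox->getGlobalPose().p;
        cout<<"Box current Position ("<<boxPos.x<<" "<<boxPos.y<<" "<<boxPos.z<<")\n";
    }

    //5) Shut down in reverse order of creation
    gScene->release();
    gPhysicsSDK->release();
    gFoundation->release();
    return 0;
}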

Mastering the Blender 2.5 Basics

Packt
17 Jun 2011
Blender 2.5 Character Animation Cookbook: 50 great recipes for giving soul to your characters by building high-quality rigs

Adjusting and tracking the timing

Timing, by itself, is a subject that goes well beyond the scope of a simple recipe. It is, in fact, the main subject of a number of animation-related books. Strictly speaking, timing in animation is how long it takes (in frames or seconds) to move between two extreme poses. You can have your character in great poses, but if the timing between them is not right, your shot may be ruined. It is perhaps a difficult thing to master because there are no definite rules for it: everyone is born with a particular sense of timing. Despite that, it's enormously important to look at video and real-life references to understand the timing of different actions.

Imagine a tennis ball falling to the ground and bouncing. Think of the time between its first and second contact with the ground. Now replace it with a bowling ball and think of the time required for this bounce. You know, from your life experience, that the tennis ball bounces slower than the bowling ball. The timing between these two balls is different. The timing here (along with spacing, the subject of the next recipe) is the main factor that makes us perceive the different nature and weight of each ball. The "rules" of timing can also be broken for comedic effect: something that purposely moves faster or slower than usual may get a laugh from the audience. We're going to see how different timings can change how we perceive a shot with the same poses.

How to do it...

Open the file 007-Timing.blend (go to the book's support page to get the code). It has our character Otto with three poses, making him look from one side to the other.

Press Alt + A to play the animation. You may think the timing is acceptable for this head turn, but this method of checking the timing is not ideal. When you tell Blender to play the animation through Alt + A, you're relying on your computer's power to process all the information in your scene in real time. You'd probably end up seeing something slower than what you'll actually get after rendering the frames.

When playing the animation inside the 3D view, you can see the actual playback frame rate in the top-left corner of the window. If it's slower than the scene frame rate (in this case, 24 fps), it means that the rendered animation will be faster than what you're seeing.

When adjusting the timing, we must be sure of the exact results of every keyframe set. Even a one-frame change can make a huge impact on the scene, but rendering a complex scene just to test the timing is out of the question, because it simply takes too long to see the results. We need a quick way to check the timing precisely. Fortunately, Blender allows us to make a quick "render" of our 3D view, with only the OpenGL information. This is also called a "playblast", and it is exactly what we need. Take a look at the header of our 3D view and find the button with a clapperboard icon, as seen in the next screenshot.

OpenGL stands for Open Graphics Library, and it is a free, cross-platform specification and API for writing 2D and 3D computer graphics. Not only are the objects inside Blender's 3D view drawn using this library, but the user interface, with all its buttons, icons, and text, is also drawn on the screen with OpenGL.
From OpenGL version 2.0, it's possible to use GLSL, a high-level shading language heavily used to create games and supported by Blender to enhance the way objects are displayed on the screen in real time. Since Blender 2.5, GLSL is the default real-time rendering method when the user selects the Textured viewport shading mode, but that option has to be supported by your graphics card.

Click on that clapperboard button, and the active 3D view will be used for a quick OpenGL render of your scene. This preview rendering shares the Render panel settings in the Properties window, so the picture size, frame rate, output folder, file format, duration, and stamp will be the same. If you can't see the button in your 3D view header (it is available only in the header), it may be an issue of lack of space; you can click with the middle button (or the scroll wheel) of your mouse over the header and drag it to the sides to find it.

After the OpenGL rendering is complete, press Esc to go back to your scene and press Ctrl + F11 to preview the animation at the correct frame rate to check the timing. Starting with the Blender 2.5 series, there's no built-in player in the program, so you have to specify one in the User Preferences window (Ctrl + Alt + U), on the File tab. This player can even be a previous version of Blender in the 2.4 series or any player you wish, such as DJV or MPlayer. With any of these options, you must tell Blender the file path where the player is installed.

Now that you can watch the animation at the correct frame rate, you'll notice that the head turns quite fast, since it takes only five frames to complete. At the scene's 24 fps, five frames is roughly 0.2 seconds. This fast timing makes the action look as if it happens after the character hears an abrupt, loud noise coming from his left, so he has to turn his head quickly to see what happened.

Let's suppose our character is watching a tennis match at Wimbledon, and his seat is in line with the net, at the middle of the court (yes, lucky guy). Watching the ball from the serve until it reaches the other side of the court should take longer than what we have just set up, so let's adjust our keyframes now. In the DopeSheet window, leave the first keyframe at frame 1. Select the last column of keyframes by holding Alt and right-clicking on any keyframe at frame 5. Move (G) the column to frame 15 (hold Ctrl for snapping to the frames), so our action takes three times longer than the original: 15 frames at 24 fps is 0.625 seconds instead of around 0.2.

Another way of selecting a column of keyframes is through the DopeSheet Summary option on the window header. It creates an extra line above all channels. If you select the diamond on this line, all keyframes in that column will be selected. You can even collapse all channels and use only the DopeSheet Summary to move the keys along the timeline to make timing adjustments easily.

Now, the Breakdown, or intermediate position between two extreme poses. It doesn't have to be at the exact middle of our action. Actually, it's good to avoid symmetry not only in our models and poses, but in our motions too. Move (G) the Breakdown to frame 6, and you'll have something similar to the next screenshot.

Now you can make another OpenGL render to preview the action with the new timing. You can choose to disable the layer where the armature is located (the second layer) by holding Shift and clicking over it, so you don't see the bones in the preview.
Of course, this is far from a finished shot: it's a good idea to make the character blink during the head turn, add some moving holds, animate the eyeballs, add some facial expressions, and so on. This rough example is only meant to show how drastically the timing can change the feel of an action. If you set the timing between the positions even higher, our character may seem to be looking at something slower (someone on a bike, maybe?) moving in front of him.

How it works...

Along with good posing, timing is crucial to make our actions vivid, believable, and imbued with a sense of weight. Timing is also very important in helping your audience understand what is happening in the scene, so it must be carefully adjusted. To get a precise view of how the timing is working in an action within Blender, it's best to use the OpenGL preview mode, since the usual Alt + A shortcut to preview the animation inside the 3D view can be misleading.

There's more...

Depending on the complexity of your scene, you may be able to achieve the correct frame rate within the 3D view with Alt + A. You can disable the visibility of irrelevant objects or some modifiers to help speed up this real-time processing, such as lowering (or disabling) the Subdivision Surface modifier and hiding the armature and background layers.

Spacing: favoring and easing poses

The previous recipe showed us how to adjust the timing of our character's actions, which is extremely important to make our audience not only understand what is happening on the screen, but also sense the weight and forces involved in the motion. Since timing is closely related to spacing, there is often confusion between the two concepts. Timing in animation is the number of frames between two extreme poses. Spacing is how the animated subject moves and shows variations of speed along these frames. Actions with the same timing and different spacing are perceived differently by the audience, and these principles combined are responsible for the feeling of weight in our actions. We're going to see how spacing works and how we can create eases and favoring to enhance movements.

How to do it...

Open the file 007-Spacing.blend. It has our character Otto turning his head from right to left, just like in the timing recipe. We don't have a Breakdown position defined yet, and this action has its timing set to 15 frames.

First, let's understand the most elementary type of spacing: linear, or even, spacing. This is when the calculated intermediate positions between two keyframes are the same distance apart, without any kind of acceleration. This isn't something we're used to seeing in nature, and thus it's not the default interpolation mode in Blender. To use it, select the desired keyframes in a DopeSheet or Graph Editor window, press Shift + T, and choose the Linear interpolation mode. The curves between the keyframes will turn into straight lines, as you can see in the next screenshot showing the channels for the Head bone.

If you preview the animation with Alt + A, you'll see that the movement is very mechanical and unappealing, something we don't see in nature. That's why this interpolation mode isn't the default one. Movements in nature all have some variation in speed, going from a resting state, accelerating to a peak velocity, and then slowing down into another resting state. These variations of speed are called eases, and they are represented by curved lines in the Graph Editor. When there is an increase in speed, we have an ease out.
When the movement slows down to a resting state, we have an ease in. This variation in speed is the default interpolation method in Blender, and you can enable it by selecting the desired keyframes in a DopeSheet or Graph Editor window, pressing Shift + T, and selecting the Bezier interpolation mode. The next screenshot shows the same keyframes with easing.

When we adjust the curve handles in the Graph Editor, we're actually defining the eases of that movement. When you insert keyframes in Blender, it automatically creates both eases, out and in, with the same speeds. Since not all movements have the same variation of speed at their beginning and end, it's a good idea to change the handles in the Graph Editor. This difference in speed between the start and end keyframes is called favoring. When the spacing between two poses has different eases, we say the movement "favors" one of the poses, notably the one with the bigger ease. In the next screenshot, the curves for the Head bone were adjusted so that the movement favors the second pose. Note that there is a softer curve near the second pose, while the lines near the first are sharper. This will make the head leave the first pose very quickly and slowly settle into the second one.

In order to create sharp angles with the handles in the Graph Editor window, you need to select the desired curve channels, press V, and choose the Free handle type.

Open the video file 007-Spacing.mov in a video player that enables navigating through the frames (such as DJV) to watch the three actions at the same time. Although the timing of the action is unchanged, you can clearly notice how the interpolation changes the motion. In the next screenshot, you can see that at frame 8, the Favoring version has the face closer to the second pose.

Now that you understand what spacing is, know the difference between the interpolation types, and can use eases to favor poses, let's add a Breakdown position. This action is pretty boring, since the head turn happens without any arcs. It's a good idea to tilt the head down a little during the turn, making an imaginary arc with the eyes. Especially during quick head turns, it's a good idea to make your character blink during the turn. Unless your character is following something with the eyes, such as in the tennis court of our example, a quick blink is useful to make a "scene cut" in our minds from one subject to the other.

In the DopeSheet window, in the Action Editor, select the Favoring action. Go to frame 6, where the character looks at the camera. Select and rotate (R) the Head and Neck bones forward on their local X axis, as seen in the next screenshot, and insert a keyframe (I) for their rotation.

Since Blender automatically creates symmetrical eases at each new keyframe, it's time to adjust the spacing for the Head and Neck bones in the Graph Editor window. If you play the animation with Alt + A, you'll notice that the motion looks very odd because of that automatic ease. The F-Curves on the X axis of each bone for this motion are not soft. Ideally, since this is a Breakdown position, the curves between it and its surrounding extreme poses should be smooth, regardless of the favoring. Select the curve handles on frames 1 and 6, and move (G) them in order to soften the curve peak at that Breakdown position. The next screenshot shows the curves before and after editing.
Notice how the peak curves at the Breakdown in the middle get smoother after editing. Now the action looks more natural, with a nice Breakdown and favoring created using the F-Curves. The file 007-Spacing-complete.blend has this finished example for your reference; you can play the animation with Alt + A to see the results.

How it works...

By understanding the principle of spacing, you can create eases and favoring in order to create snappy and interesting motions. Just like visible shapes, the pace of motion in nature is often asymmetrical. To make your motions not only more interesting but also more believable, with accents that reinforce the purpose behind the movements, you should master spacing. Be sure to check the interpolation curves in your animations: interesting movements normally have different eases between two extreme positions.
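If you prefer to see the difference between even spacing and easing as numbers rather than curves, the short, self-contained C++ sketch below samples a hypothetical 0-to-90-degree head turn over 15 frames. The smoothstep function used here is just a convenient stand-in for an ease-in/ease-out curve; it is not Blender's actual Bezier handle evaluation, but it demonstrates the same idea: equal steps in time, unequal steps in space.

#include <cstdio>

// Even spacing: the value advances by the same amount every frame.
float lerp(float a, float b, float t) { return a + (b - a) * t; }

// Smoothstep easing: slow out of the first pose, fast through the
// middle, slow into the second pose (zero speed at both ends).
float ease(float t) { return t * t * (3.0f - 2.0f * t); }

int main()
{
    const int frames = 15;
    for(int f = 0; f <= frames; ++f)
    {
        float t = (float)f / frames;
        std::printf("frame %2d  linear: %5.1f  eased: %5.1f\n",
                    f, lerp(0.0f, 90.0f, t), lerp(0.0f, 90.0f, ease(t)));
    }
    return 0;
}

Running this prints identical 6-degree increments in the linear column, while the eased column creeps away from 0, rushes through the middle frames, and settles gently toward 90, exactly the asymmetry of speed this recipe builds with the Graph Editor handles.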

Saying Hello to Unity and Android

Packt
17 Dec 2013
Understanding what makes Unity great

Perhaps the greatest feature of Unity is how open-ended it is. Nearly all game engines currently on the market are limited in what one can build. That makes perfect sense, but it can limit the capabilities of a team. The average game engine has been highly optimized for creating a specific game type. This is great if all you plan on making is the same game again and again. When you are struck with inspiration for the next great hit, only to find that the game engine can't handle it, and that everyone has to retrain in a new engine or double the development time to make it capable, it can be quite frustrating.

Unity does not suffer from this problem. The developers of Unity have worked very hard to optimize every aspect of the engine without limiting what types of games can be made. Everything ranging from simple 2D platformers to massive online role-playing games is possible in Unity. A development team that just finished an ultra-realistic first-person shooter can turn right around and make a 2D fighting game without having to learn an entirely new system.

Being so open-ended does, however, bring a drawback: there are no default tools optimized for building that perfect game. To combat this, Unity grants the ability to create any tool one can imagine, using the same scripting that creates the game. On top of that, there is a strong community of users who have supplied a wide selection of tools and pieces, both free and paid, that can be quickly plugged in and used. This results in a large selection of available content, ready to jump-start you on your way to the next great game.

When many prospective users look at Unity, they think that because it is so cheap, it is not as good as an expensive AAA game engine. This is simply not true. Throwing more money at a game engine is not going to make a game any better. Unity supports all of the fancy shaders, normal maps, and particle effects you could want. The best part is that nearly all of the fancy features you could want are included in the free version of Unity, and 90 percent of the time one does not even need to use the Pro-only features.

One of the greatest concerns when selecting a game engine, especially for the mobile market, is how much girth it will add to the final build size. Most engines are quite hefty. With Unity's code stripping, the footprint becomes quite small. Code stripping is the process by which Unity removes every extra little bit of code from the compiled libraries. A blank project compiled for Android that utilizes full code stripping ends up being around 7 megabytes.

Perhaps one of the coolest features of Unity is its multi-platform compatibility. With a single project, one can build for several different platforms. This includes the ability to simultaneously target mobile, PC, and consoles. This allows one to focus on real issues, such as handling inputs, resolution, and performance. In the past, if a company desired to deploy their product on more than one platform, they had to nearly double the development costs in order to essentially reprogram the game. Every platform did, and still does, run on its own logic and language. Thanks to Unity, game development has never been simpler. We can develop games using simple and fast scripting, letting Unity handle the complex translation to each platform.

There are, of course, several other options for game engines. Two major ones that come to mind are cocos2d and Unreal Engine.
While both are excellent choices, we can always find them to be a little lacking in certain respects.

The engine of Angry Birds, cocos2d, could be a great choice for your next mobile hit. However, as the name suggests, it is pretty much limited to 2D games. A game can look great in it, but if you ever want that third dimension, it can be tricky to add. A second major problem with cocos2d is how bare-bones it is. Any tool for building or importing assets needs to be created from scratch or found somewhere. Unless you have the time and experience, this can seriously slow down development.

Then there is the staple of major game development, Unreal Engine. This game engine has been used successfully by developers for many years, bringing great games to the world, Unreal Tournament and Gears of War not the least among them. These are both, however, console and computer games, which points to the fundamental problem with the engine. Unreal is a very large and powerful engine, and only so much optimization can be done for mobile platforms. It has always had the same problem: it adds a lot of girth to a project and its final build. The other major issue with Unreal is its rigidity as a first-person shooter engine. While it is technically possible to create other types of games in it, such tasks are long and complex. A strong working knowledge of the underlying system is a must before achieving such a feat.

All in all, Unity stands strong among the rest, and these are great reasons for choosing it for game development. Projects can look just as great as AAA titles. Overhead and girth in the final build is small, which is very important when working on mobile platforms. The system's potential is open enough to allow you to create any type of game you might want, where other engines tend to be limited to a single type of game. And should your needs change at any point in the project's life cycle, it is very easy to add, remove, or change your choice of target platforms.

Understanding what makes Android great

With over 30 million devices in the hands of users, why would you not choose the Android platform for your next mobile hit? Apple may have been first out of the gate with their iPhone sensation, but Android is definitely a step ahead when it comes to smartphone technology. One of its best features is how open it is, both physically and technically. One can swap out the battery and upgrade the micro SD card, should the need arise. Plugging the phone into a computer does not have to be a huge ordeal; it can simply function as removable storage media.

From the point of view of development costs, the Android market is superior as well. Other mobile app stores require an annual registration fee of about 100 dollars. Some also have a limit on the number of devices that can be registered for development at one time. The Google Play market has a one-time registration fee, and there is no concern about how many or what type of Android devices you are using for development.

One of the drawbacks of some of the other mobile development kits is that you have to pay an annual registration fee before you have access to the SDK. With some, registration and payment are required before you can even view their documentation. Android is much more open and accessible. Anybody can download the Android SDK for free. The documentation and forums are completely viewable without having to pay any fee.
This means development for Android can start earlier, with device testing being a part of it from the very beginning.

Understanding how Unity and Android work together

Because Unity handles projects and assets in a generic way, there is no need to create multiple projects for multiple target platforms. This means that you could easily start development with the free version of Unity and target personal computers. Then, at a later date, you can switch targets to the Android platform with the click of a button. Perhaps, shortly after your game is launched, it takes the market by storm and there is a great call to bring it to other mobile platforms. With just another click of the button, you can easily target iOS without changing anything in your project.

Most systems require a long and complex set of steps to get your project running on a device. For the first application, we will be going through that process, because it is important to know about it. However, once your device is set up and recognized by the Android SDK, a single button click will allow Unity to build your application, push it to a device, and start running it. There is nothing that has caused more headaches for some developers than trying to get an application onto a device. Unity makes it simple.

With the addition of a free Android application, Unity Remote, it is simple and easy to test mobile inputs without going through the whole build process. While developing, there is nothing more annoying than waiting 5 minutes for a build every time you need to test a minor tweak, especially in the controls and interface. After the first dozen little tweaks, the build time starts to add up. Unity Remote makes it simple and easy to test it all without ever having to hit the Build button.

These are the big three: generic projects, a one-click build process, and Unity Remote. We could, of course, come up with several more great ways in which Unity and Android work together, but these three are the major time and money savers. You could have the greatest game in the world but, if it takes 10 times as long to build and test, what is the point?

Differences between Pro and Basic

Unity comes with two licensing options, Pro and Basic, which can be found at https://store.unity3d.com. In order to follow along, Unity Basic is all that is required. If you are not quite ready to spend the $3,000 required to purchase a full Unity Pro license with the Android add-on, there are other options. Unity Basic is free and comes with a 30-day free trial of Unity Pro. This trial is full and complete, just as if one had purchased Unity Pro. It is also possible to upgrade your license at a later date. Where Unity Basic comes with mobile options for free, Unity Pro requires the purchase of Pro add-ons for each of the mobile platforms.

License comparison overview

License comparisons can be found at http://unity3d.com/unity/licenses. This section will cover the specific differences between Unity Android Pro and Unity Android Basic. We will explore what each feature is and how useful it is.

NavMeshes, Pathfinding, and Crowd Simulation: This feature is Unity's built-in pathfinding system. It allows characters to find their way from point to point around your game. Just bake your navigation data in the editor and let Unity take over at runtime. This feature is great if you don't have the ability or inclination to program a pathfinding system yourself. There is a whole slew of tutorials online about how to program pathfinding and do crowd simulation.
It is completely possible to do all of this in Unity Basic; you just need to provide the tools yourself.

LOD Support: LOD (level of detail) lets you control how complex a mesh is, based on its distance from the camera. When the camera is close to an object, a complex mesh with lots of detail is rendered. When the camera is far from that object, a simpler mesh is rendered, because all that detail is not going to be seen anyway. Unity Pro provides a built-in system to manage this. This is, however, another system that could be created in Unity Basic; a minimal sketch of the underlying idea appears at the end of this section. Whether you use Pro or not, this is an important feature for game efficiency. By rendering less complex meshes at a distance, everything can be rendered faster, leaving more room for awesome gameplay.

Audio Filter: Audio filters allow you to add effects to audio clips at runtime. Perhaps you created gravel footstep sounds for your character. Your character is running, and we can hear the footsteps just fine, when suddenly they enter a tunnel and a solar flare hits, causing a time warp and slowing everything down. Audio filters would allow us to warp the gravel footstep sounds so that they sound like they are coming from within a tunnel and are slowed by a time warp. Of course, you could also just have the audio person create a new set of tunnel gravel footsteps in the time warp sounds, but this might double the amount of audio in your game, and it limits how dynamic we can be with it at runtime: we either are or are not playing the time warp footsteps. Audio filters allow us to control how much the time warp is affecting our sounds.

Video Playback and Streaming: When dealing with complex or high-definition cut scenes, being able to play a video becomes very important. Including videos in a build, especially with a mobile target, can require a lot of space. This is where the streaming part of this feature comes in. This feature not only lets us play video, it also lets us stream that video from the internet. There is, however, a drawback to this feature. On mobile platforms, the video has to go through the device's built-in video-playing system. This means the video can only be played full-screen and cannot be used as a texture. Theoretically, you could break your video into individual pictures for each frame and flip through them at runtime, but this is not recommended, for build size and video quality reasons.

Fully Fledged Streaming with Asset Bundles: Asset bundles are a great feature provided by Unity Pro. They allow you to create extra content and stream it to the users without ever requiring an update to the game. You could add new characters, levels, or just about any other content you can think of. Their only drawback is that you cannot add more code. The functionality cannot change, but the content can. This is one of the best features of Unity Pro.

100,000 Dollar Turnover: This one isn't so much a feature as it is a guideline. According to Unity's End User License Agreement, the Basic version of Unity cannot be licensed by any group or individual that made $100,000 in the previous fiscal year. This basically means that if you make a bunch of money, you have to buy Unity Pro. Of course, if you are making that much money, you can probably afford it without issue. That is the view of Unity at least, and the reason why the clause is there.

Mecanim: IK Rigs: Unity's new animation system, Mecanim, supports many exciting new features, one of which is IK (inverse kinematics).
Mecanim: Sync Layers & Additional Curves: Sync layers, inside Mecanim, allow us to keep multiple sets of animation states in time with each other. Say you have a soldier that you want to animate differently based on how much health he has. When at full health, he walks around briskly. After a little damage, it becomes more of a trudge. If health is below half, a limp is introduced to his walk. And when almost dead, he crawls along the ground. With sync layers, we can create one animation state machine and duplicate it to multiple layers. By changing the animations and syncing the layers, we can easily transition between the different animations while maintaining the state machine. Additional curves are simply the ability to add curves to your animations. This means we can control various values with the animation. For example, in the game world, when a character picks up their feet for a jump, gravity will pull them down almost immediately. By adding an extra curve to that animation, in Unity, we can control how much gravity is affecting the character, allowing them to actually get in the air when jumping. This is a useful feature for controlling such values right alongside the animations, but one could just as easily create a script that holds and controls the curves. A small sketch of reading such a curve follows.
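To sketch how an additional curve might be read, assume the jump clip has a curve named GravityWeight authored on it and the Animator Controller has a float parameter of the same name; Mecanim then drives the parameter from the curve every frame. Both names here are hypothetical.

using UnityEngine;

// Reads a hypothetical "GravityWeight" curve, authored on the jump clip,
// through an Animator float parameter of the same name. The curve should
// sit at 1 on the ground and dip toward 0 at the top of the jump.
public class JumpGravity : MonoBehaviour
{
    public Vector3 scaledGravity; // consumed by the character's movement code

    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        // Mecanim updates the parameter from the clip's curve each frame.
        float gravityWeight = animator.GetFloat("GravityWeight");
        scaledGravity = Physics.gravity * gravityWeight;
    }
}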
Custom Splash Screen: Though pretty self-explanatory, it is perhaps not immediately evident why this feature is specified, unless you have worked with Unity before. When an application built in Unity initializes on any platform, it displays a splash screen. In Unity Basic, this will always be the Unity logo. By purchasing Unity Pro, you can substitute the Unity logo with any image you want.

Build Size Stripping: This is an important feature for mobile platforms. Build size stripping removes all of the excess from your final build. Unity does a very good job of only including the assets you have created that are actually used in the final build. With stripping, it also only includes the parts of the engine itself that are used in the game. This is of great use when you absolutely have to get under that limit for downloading over the cell towers. On the other hand, you could create something similar to the asset bundles: just let the users buy the framework and download the assets later.

Realtime Directional Shadows: Lights and shadows add a lot to the mood of a scene. This feature allows us to go beyond blob shadows and use realistic-looking shadows. This is all well and good if you have the processing power for it, which most mobile devices do not. This feature should also never be used for static scenery; instead, use static lightmaps, which is what they are for. But if you can find a good balance between simple needs and quality, this could be the feature that makes the difference between an alright and an awesome game.

HDR and Tone Mapping: HDR (high dynamic range) and tone mapping allow us to create more realistic lighting effects. Standard rendering uses values from zero to one to represent how much of each color in a pixel is on. This does not allow a full spectrum of lighting options to be explored. HDR lets the system use values beyond this range and process them using tone mapping to create better effects, such as a bright morning room or the bloom from a car window reflecting the sun. The downside of this feature is in the processor. The device can still only handle values between zero and one, so converting them takes time. Additionally, the more complex the effect, the more time it takes to render. It would be surprising to see this used well on handheld devices, even in a simple game. Maybe modern tablets could handle it.

Light Probes: Light probes are an interesting little feature. When placed in the world, light probes figure out how an object should be lit. Then, as a character walks around, they tell it how it should be shaded. The character is, of course, lit by the lights in the scene, but there are limits on how many lights can shade an object at once. Light probes do all the complex calculations beforehand, allowing for better shading at runtime. Again, however, there are concerns about the processing power. Too little and you won't get a good effect; too much and there will be no processing left for playing the game.

Lightmapping with Global Illumination and Area Lights: All versions of Unity support lightmaps, allowing for the baking of complex static shadows and lighting effects. With the addition of global illumination and area lights, you can add another touch of realism to your scenes. However, every version of Unity also lets you import your own lightmaps. This means you could use some other program to render the lightmaps and import them separately.

Static Batching: This feature speeds up the rendering process. Instead of spending time each frame grouping objects for faster rendering, this allows the system to save groups generated beforehand. Reducing the number of draw calls is a powerful step towards making a game run faster, and that is exactly what this feature does.

Render-to-Texture Effects: This is a fun feature, but of limited use. It simply allows you to redirect the rendering of the camera from going to the screen and instead go to a texture. This texture could then, in its most simple form, be put onto a mesh and act like a surveillance camera. You could also do some custom post-processing, such as removing the color from the world as the player loses their health. However, that option could become very processor-intensive.

Full-Screen Post-Processing Effects: This is another processor-intensive feature that probably will not make it into your mobile game, but you can add some very cool effects to your scene, such as motion blur when the player is moving really fast, or a vortex effect to warp the scene as the ship passes through a warped section of space. One of the best is using the bloom effect to give things a neon-like glow. A minimal sketch of how such an effect hooks into the camera follows.
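For a feel of how these effects plug in, here is a minimal sketch of a full-screen effect script, assuming Unity Pro. It sits on a camera and is given a hypothetical effectMaterial whose shader does the actual work, such as a grayscale or bloom pass.

using UnityEngine;

// Runs the camera's output through a material before it reaches the
// screen. Attach to a camera and assign a material whose shader
// implements the effect.
[RequireComponent(typeof(Camera))]
public class SimpleImageEffect : MonoBehaviour
{
    public Material effectMaterial; // hypothetical material, assigned in the Inspector

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (effectMaterial != null)
            Graphics.Blit(source, destination, effectMaterial); // apply the effect
        else
            Graphics.Blit(source, destination); // pass through unchanged
    }
}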
Occlusion Culling: This is another great optimization feature. The standard camera system renders everything within the camera's view frustum, the view space. Occlusion culling lets us set up volumes in the space our camera can enter. These volumes are used to calculate what the camera can actually see from those locations. If there is a wall in the way, what is the point of rendering everything behind it? Occlusion culling calculates this and stops the camera from rendering anything behind that wall.

Navmesh: Dynamic Obstacles and Priority: This feature works in conjunction with the pathfinding system. In scripts, we can dynamically set obstacles, and characters will find their way around them. Being able to set priorities means different types of characters can take different types of objects into consideration when finding their way around. A soldier must go around the barricades to reach his target. The tank, however, could just crash through, should it desire to.

.Net Socket Support: This feature is only useful if you plan on doing fancy things over a user's network. Multiplayer networking is already supported in every version of Unity. The multiplayer that is available, though, does require a master server. With the use of sockets, one could create connections to other devices locally.

Profiler and GPU Profiling: This is a very useful feature. The profiler provides tons of information about how much load your game puts on the processor. With this information, we can get right down into the nitty-gritty and determine exactly how long a script takes to process. Later on, though, we will also create a tool for determining how long specific parts of your code take to process.

Script Access to Asset Pipeline: This is an alright feature. With full access to the pipeline, there is a lot of custom processing that can be done on assets and builds. The full range of possibilities is beyond our scope, but think of it as being able to tint all of the imported textures slightly blue.

Dark Skin: This is entirely a cosmetic feature. Its point and purpose are questionable, but if a smooth, dark-skinned look is what you desire, this is the feature you want. There is an option in the editor to change it to the color scheme used in Unity Basic. For this feature, whatever floats your boat goes.

Setting up the development environment

Before we can create the next great game for Android, we need to install a few programs. In order to make the Android SDK work, we will first install the JDK. Then, we will install the Android SDK, followed by Unity. After that comes the optional installation of a code editor. To make sure everything is set up correctly, we will connect to our devices and look at some special strategies for tricky devices. Finally, we will install Unity Remote, a program that will become invaluable in your mobile development.