
How-To Tutorials - 3D Game Development

115 Articles

Packaging the Game

Packt
07 Jul 2016
13 min read
A game is not just art, code, and design packaged into an executable. You also have to deal with stores, publishers, ratings, console providers, and the assets and videos needed for storefronts and marketing, among the other smaller tasks required to fully ship a game. This article by Muhammad A. Moniem, author of the book Mastering Unreal Engine 4.X, covers the last steps you need to take within the Unreal environment to get that packaged executable built and running. Anything post-Unreal you will need to handle yourself, but from where I sit, you have already done the complex, hard, and long part; what comes next is a lot simpler! This article will help us understand and use Unreal's Project Launcher, patch a project, and create DLCs (downloadable content).

Project Launcher and DLCs

The first and most important thing to keep in mind is that the Project Launcher is still in development and the process of creating DLCs is not final yet; it might change in future engine releases. While writing this book I used Unreal 4.10 and tested everything within the Unreal 4.11 preview version, and the DLC process still remains experimental. So be advised that you might find it a little different in the future as the engine evolves.

While we previously packaged the game through the File menu using Packaging Project, there is another, more detailed, more professional way to do the same job: the Project Launcher. It comes in the form of a separate app shipped with Unreal (Unreal Frontend), but you also have the choice to run it directly from the editor. You can access the Project Launcher from the Windows menu by choosing Project Launcher, which launches it right away.

However, why would you go through these extra steps rather than just doing the packaging process in one click? Extensibility is the answer. The Project Launcher allows you to create several profiles, each with different build settings, and you can later fire each build whenever you need it. Not only that, but profiles can be made for different projects, which means you can keep a ready-made setup for all your projects with all their different build configurations. It comes in even handier when you have to cook the content of a game several times: rather than repeatedly going through the File menu, you can cook the content for all the different platforms at once. For example, if you have to change one texture in a game that ships on five platforms, you can make a profile that cooks the content for all of those platforms and arranges it for you in one go, while you spend that time doing something else. What if you have to cook the game content for different languages? Say the game supports 10 languages; do you have to cook them one by one? The answer is simple: the Project Launcher will do it for you. You can think of the Project Launcher as a batch process, a custom command-line tool, or even a form of build server. You set the configurations and requests, then leave it to do the whole thing for you while you get on with something else. It is all about productivity! Most importantly, the Project Launcher makes it very easy to create DLCs: by setting up a profile with a different set of options and settings, you can get a DLC or game mode built without any complications.

In a word, it is all about profiles, so let's discuss how to create profiles that serve different purposes. Sometimes the Project Launcher proposes a standard profile matching the platform you are using. That is good, but those profiles usually don't have everything we need, which is why it is recommended to always create new profiles to serve our goals. By default, the Project Launcher window is divided into two vertical sections; the upper part contains the default profiles, while the lower part contains the custom profiles. To create a new profile, all you have to do is hit the plus sign in the bottom part, titled Custom Launch Profiles. Pressing it takes you to a wizard (or, better described, a window) where you set up the new profile's options. Those options are drastic, and changing them leads to completely different results, so you have to be careful. In general, you will mostly be building either a project for release, or a DLC or patch for an already released project. You can also do other types of builds that serve different goals, such as a language package for an already released game, which is treated as a patch or DLC but has a different setup and options. Here we will cover the two main types of process developers usually deal with in the Project Launcher: a release and a patch.

Packaging a release

After the new Custom Launch Profile wizard window opens, you have to change the settings necessary to make a release build of the project. This includes:

General:
- Give the profile a name; this name is displayed in the Project Launcher main window.
- Give the profile a description to make its purpose clear for you in the future, or for anyone else who is going to use it.

Project:
- Select the project that needs to be built, or leave it at Any Project to build the currently active project.

Build:
- Check the Build checkbox to make a build and activate this section of options.
- From the Build Configuration dropdown, choose a build type, which is Shipping in this case.
- Finally, you can check the Build UAT (Unreal Automation Tool) option under Advanced Settings. The UAT can be considered a set of scripts forming automated processes, but in order to decide whether to run it, you have to understand what the UAT is:
  - Written in C# (it may be converted to C++ in the future)
  - Automates repetitive tasks through automation scripts
  - Builds, cooks, packages, deploys, and launches projects
  - Invokes UBT for compilation
  - Analyzes and fixes game content files
  - Performs code surgery when updating to new engine versions
  - Distributes compilation (XGE) and integrates with build systems
  - Generates code documentation
  - Automates testing of code and content
  - And many others; you can add your own scripts!
  Now you can decide whether you want to enable it or not.

Cook:
- In the Cook section, set the cooking mode to By the book. This means you define exactly what needs to be cooked and for which platforms; for now it is enough to set it to WindowsNoEditor and check the cultures you want from the list. I chose all of them (this is faster than picking one at a time) and then excluded the ones I don't want.
- Then check which maps should be cooked. If you can't see any maps, it is probably the first build; later you'll find them listed. In any case, you should keep all the maps listed in the Maps folder under the Content directory.
- Now, from the Release / DLC / Patching Settings section, check the option Create a release version of the game for distribution, as this version is going to be the distribution one, and give the build a version number. This creates some extra files that will be used later if we create patches or DLCs.
- You can expand the Advanced Settings section to set your own options. By default, Compress Content and Save Packages without versions are both checked, and both are good for the type of build we are making. You can also set Store all content in a single file (UnrealPak) to keep things tidy; one .pak file is better than lots of separate files. Finally, set Cooker Build Configuration to Shipping, since we set Build Configuration itself to Shipping.

Package:
- From this section's drop-down menu, choose Package & store locally, which saves the packages on the drive. There is nothing else to set here, unless you want to store the packaged project in a repository.

Deploy:
- The Deploy section is meant to build the game onto a device of your choice, which is not what we want here. If you want to put the game onto a device, you can simply use Launch from within the editor itself. So let's set this section to Do Not Deploy.

Launch:
- If you chose to deploy the game to a device, this section is available; otherwise, its options are disabled. The options here define the configuration of the deployed build, since it will run once it is deployed to the device: things like the language culture, the default startup map, command-line arguments, and so on. As we are not deploying now, this section is disabled.

Now that we have finished editing our profile, press the back arrow at the top of the wizard to return to the Project Launcher main window. You can find our profile in the bottom section; any other profiles you make in the future will be listed there too. One step remains to start the build: in the right corner of the profile there is a button that says Launch This Profile. Hitting it starts the process of building, cooking, and packaging this profile for the selected project, so hit it whenever you want the process to start. Keep in mind that whenever you need to change any of the settings, there is always an Edit button for each profile.

The Project Launcher will start processing the profile. It will take some time, depending on your choices, and you'll be able to watch all the steps as they happen. You can also watch a detailed log, save that log, or cancel the process at any time. Once everything is done, a new button appears at the bottom: Done. Hitting it takes you back to the Project Launcher main window. You can find the build in the Saved\StagedBuilds\WindowsNoEditor directory of your project, which in my case is: C:\Users\Muhammad\Desktop\Bellz\Saved\StagedBuilds\WindowsNoEditor.

The most important thing now: if you are planning to create patches or DLCs for this project, remember that setting a version number in the Cook section produced some files, which you can find in ProjectName\Releases\ReleaseVersion\Platform, in my case C:\Users\Muhammad\Desktop\Bellz\Releases\1.0\WindowsNoEditor. There are two files there; make sure you keep a backup of them on your drive for future use. Now you can ship the game and upload it to the distribution channel!

Packaging a patch or DLC

The good news is that there isn't much to do here. Or rather, there are lots of things to do, but it is a repetitive process. You'll create a new profile in the Project Launcher and set 90% of the options the same as in the previous release profile; the only difference is in the Cook options. The settings that remain the same are: Project, Build, Package, Deploy, and Launch. The only difference is that in the Release/DLC/Patching Settings section of the Cook section you have to:
- Disable Create a release version of the game for distribution.
- Set the version number of the base build (the release) as the release version this build is based on; this ensures the previous content is compared with the current one.
- Check Generate patch if the current build is a patch, not a DLC.
- Check Build DLC if the current build is a DLC, not a patch.

Now you can launch this profile and wait until it is done. The patching process creates a *.pak file in the directory ProjectName\Saved\StagedBuilds\PlatformName\ProjectName\Content\Paks. This .pak file is the patch that you'll be uploading to the distribution channel! The most common way to handle this type of patch is by creating an installer; in this case, you'll create an installer that copies the *.pak file into the player's directory, ProjectName\Releases\VersionNumber\PlatformName, which is where the original content *.pak file of the release version is. In my case I copy the *.pak file from C:\Users\Muhammad\Desktop\Bellz\Releases\1.0\WindowsNoEditor to C:\Users\Muhammad\Desktop\Bellz\Saved\StagedBuilds\WindowsNoEditor\Bellz\Content\Paks.

You now know the way to patch and add downloadable content, and regardless of the time you spent creating it, it will be faster in the future as you get more used to it, and Epic keeps working on making the process better and better.

Summary

The Project Launcher is a very powerful tool shipped with the Unreal ecosystem. Using it is not mandatory, but it can save a lot of time, and you have learned how and when to use it. Many games nowadays have downloadable content; it helps keep the game community growing and the game earning more revenue. Having DLCs is not essential, but it is useful, and as we discussed, it must be planned early. You have learned how to manage DLCs within Unreal Engine and how to make patches and DLCs using the Unreal Project Launcher.

Resources for Article:

Further resources on this subject:
- Development Tricks with Unreal Engine 4 [article]
- Bang Bang – Let's Make It Explode [article]
- Lighting basics [article]
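The article stops at copying the .pak file by hand, but the installer step it describes is simple enough to sketch. The following C# snippet is only an illustration of that copy step; the paths and the pak file name are hypothetical placeholders (not values from the article), and a real installer would locate the player's install directory rather than hard-coding it.

    // Minimal sketch of the installer step described above: copy the patch .pak
    // next to the game's original content .pak. All paths and names are invented examples.
    using System;
    using System.IO;

    class PatchInstaller
    {
        static void Main()
        {
            // Where our patch .pak was downloaded or produced (hypothetical path).
            string patchPak = @"C:\Downloads\BellzPatch\Bellz_Patch1.pak";

            // The Paks folder of the installed game (hypothetical path).
            string installedPaks = @"C:\Games\Bellz\Bellz\Content\Paks";

            Directory.CreateDirectory(installedPaks);
            string destination = Path.Combine(installedPaks, Path.GetFileName(patchPak));

            // Overwrite an older copy of the same patch if it exists.
            File.Copy(patchPak, destination, overwrite: true);
            Console.WriteLine("Patch installed to " + destination);
        }
    }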


Blender 2.5: Detailed Render of the Earth from Space

Packt
25 May 2011
10 min read
Blender 2.5 HOTSHOT: Challenging and fun projects that will push your Blender skills to the limit

Our purpose is to create a very detailed view of the earth from space. By detailed, we mean that it includes land, oceans, and clouds, and not only their color and specular reflection but also the roughness they seem to have when seen from space. For this project, we are going to do some work on textures and get them properly set up for our needs (and for Blender's way of working).

What Does It Do?

We will create a nice image of the earth resembling the beautiful pictures taken from orbit, showing the sun rising over the rim of the planet. For this, we will need to work carefully with some textures, set up a basic scene, and create a fairly complex node setup for compositing the final result. In the final image we will get some very nice effects, such as the volumetric look of the atmosphere around the rim, the strong highlight of the sun rising over the rim of the earth, and the very calm, bluish look of the dark side of the earth lit by the moon.

Why Is It Awesome?

With this project, we are going to understand how important it is to have good textures to work with. Having the right textures for the job saves a lot of time when producing a high-quality render. Not only are we going to work with some very good textures that are freely available on the Internet, but we are also going to do some hand tweaking to tune them exactly as we need them. This way we also learn how much time can be saved by simply preprocessing the textures into finalized maps that can be fed directly to the material, without resorting to complex tricks that would only cause us headaches. One of the nicest aspects of this project is seeing how far we can take a very simple scene by using Blender's compositor; we are definitely going to learn some useful compositing tricks.

Your Hotshot Objectives

This project will be tackled in five parts:
- Preprocessing the textures
- Object setup
- Lighting setup
- Compositing preparation
- Compositing

Mission Checklist

The key to the success of our project is getting the right set of quality images at a sufficiently high resolution. Go to www.archive.org and search for www.oera.net/How2.htm on the Wayback Machine. Choose the snapshot from the Apr 18, 2008 link, then click on the image titled Texture maps of the Earth and Planets. Once there, download these images:
- Earth texture natural colors
- Earth clouds
- Earth elevation/bump
- Earth water/land mask

Remember to save the high-resolution version of the images and put them in the tex folder inside the project's main folder. We will also need Gimp to preprocess the textures, so make sure it is installed; we'll be working with version 2.6.

Preprocessing the Textures

The textures we downloaded are quite good, both in resolution and in the way they cleanly separate each aspect of the earth's shading. There is a catch, though: using the clouds, elevation, and water/land textures as they are will cause us a lot of headaches inside Blender. So let's do some basic preprocessing to get finalized, separated maps for each channel of the shader we will create.

Engage Thrusters

For each of the textures we work on, make sure the previous one is closed to avoid mixing up the wrong textures.

Clouds Map

Drag the EarthClouds_2500x1250.jpg image from the tex folder into the empty Gimp window to load it. Now locate the Layers window, right-click on the thumbnail of the Background layer, and select the entry labeled Add Layer Mask... from the menu. In the dialog box, select the Grayscale copy of layer option. Once the mask is added to the layer, the black part of the texture should look transparent.

If we look at the image after adding the mask, we'll notice the clouds seem too transparent. To solve this, we will adjust the layer's mask directly. Go to the Layers window and click on the thumbnail of the mask (the one on the right-hand side) to make it active (its border should become white). Then go to the main window (the one containing the image) and open Colors | Curves.... In the Adjust Color Curves dialog, add two control points and shape the curve as shown in the next screenshot. The purpose of this curve is to make the light gray pixels of the mask lighter and the dark ones darker; the strong slope between the two control points makes the border of the mask sharper. Make sure the Value channel is selected and click on OK. Now look at the image and see how much stronger the contrast is and how well defined the clouds are.

Finally, go to Image | Mode | RGB to set the image's internal data format to a safe format (avoiding the risk of confusing Blender with it). Now we only need to go to File | Save A Copy... and save it as EarthClouds.png in the tex folder of the project. In the confirmation dialogs, make sure to tell Gimp to apply the layer mask (click on Export in the first dialog). For the PNG settings, the default values are fine. Close the current image in Gimp so the main window is empty before starting on the next texture.

Specular Map

Start by dragging the image named EarthMask_2500x1250.jpg onto Gimp's main window to open it. Then drag EarthClouds_2500x1250.jpg over the previous one so it is added as a separate layer. Now we need to make sure the images are correctly aligned. Go to View | Zoom | 4:1 (400%) so we can move the layer with pixel precision. Go to the bottom right-hand corner of the window and click-and-drag the four-arrows icon until one of the image's corners is visible in the viewport. Once we're looking at the right place, activate the Move tool in the Toolbox and drag the clouds layer so that its corner exactly matches the corner of the water/land image. Then switch to another zoom level via View | Zoom | 1:4 (25%).

Now go to the Layers window, select the EarthClouds layer, and set its blending mode to Multiply (the Mode drop-down above the layers list). Then, in the main window, go to Colors | Invert. Finally, switch the image to RGB mode via Image | Mode | RGB and we are done with the processing. Remember to save the image as EarthSpecMap.jpg in the tex folder of the project and close it in Gimp.

The purpose of creating this specular map is to correctly mix the specularity of the ocean (full) with that of the clouds above the ocean (none). This way we get correct specularity, both on the ocean and on the clouds. If we just used the water/land mask to control specularity, the clouds above the ocean would have specular reflection, which is wrong.

Bump Map

The bump map controls the roughness of the material; this one is very important, as it adds a lot of detail to the final render without creating actual geometry to represent it. First, drag EarthElevation_2500x1250.jpg into Gimp's main window to open it. Then drag EarthClouds_2500x1250.jpg over it so it loads as a layer above the first one. Zoom in via View | Zoom | 4:1 (400%), drag the image so you can see one of its corners, and use the Move tool to get the clouds layer exactly matching the elevation layer. Then switch back to a wider view via View | Zoom | 1:4 (25%).

Now it's time to add a mask to the clouds layer. Right-click on the clouds layer and select the Add Layer Mask... entry from the menu, then select the Grayscale copy of layer option in the dialog box and click Add. What we have so far is a map that defines how intense the roughness of the surface will be at each point. But there is a problem: the clouds are as bright as, or even brighter than, the Andes and the Himalayas, which means the render process will distort them quite a lot. Since the roughness of the clouds should be weaker, let's perform another step to correct the map accordingly. Select the left thumbnail of the clouds layer (the layer's color channel), then go to the main window and open the Levels tool via Colors | Levels.... In the Output Levels part of the dialog box, change the value 255 (on the right-hand side) to 66 and click on OK. Now we have a map that clearly gives a stronger value to the highest mountains on earth than to the clouds, which is exactly what we needed. Finally, change the image mode to RGB (Image | Mode | RGB) and save it as EarthBumpMap.jpg in the tex folder of the project.

Notice that we are mixing the bump maps of the clouds and the mountains. The reason is that working with separate bump maps would put us in a very tricky situation inside Blender; working with a single bump map is much easier than trying to mix two or more. Now we can close Gimp, since we will work exclusively within Blender from now on.

Objective Complete - Mini Debriefing

This part of the project was just a preparation of the textures. We must create these new textures for three reasons:
- To give the clouds texture a proper alpha channel; this will save us trouble when working with it in Blender.
- To control the specular map properly in the regions where there are clouds, as the clouds must not have specular reflection.
- To create a single, unified bump map for the whole planet, which will save us lots of trouble when controlling the Normal channel of the material in Blender.

Notice that we use the term "bump map" to refer to a texture that will control the "normal" channel of the material. The reason for not calling it a "normal map" is that a normal map is a special kind of texture that isn't coded in grayscale like our current texture.


The Unreal Engine

Packt
26 Aug 2013
8 min read
Sound cues versus sound wave data

There are two types of sound entries in UDK: sound cues and sound wave data. The simplest way to state the difference is that sound wave data is what we get when we import a sound file into the editor, while a sound cue takes one or more sound wave datas and manipulates or combines them using the fairly robust and powerful toolset UDK gives us in its Sound Cue Editor. In terms of use, sound wave datas mostly serve as parts of sound cues. However, when placing ambient sounds (sounds that are just always playing in the background), sound wave datas and sound cues each have situations where they are the right choice. Either way, they are both represented in the level as sound actors, of which there are several types, as shown in the following screenshot:

Types of sound actors

A key element of any well-designed level is ambient sound effects, which requires placing sound actors in the world. Some of these actors use sound wave data and others use sound cues. There are strengths, weaknesses, and specific use cases for all of them, so we'll touch on those presently.

Using sound cues

There are two distinct types of sound actors that call for the use of sound cues specifically. The strength of using sound cues for ambient sounds is that the sounds can be manipulated in a wider variety of ways. Generally this isn't necessary, as most ambient sounds are simply a looping sound used to add audio to things like torches, rippling streams, a subtle blowing wind, or other environmental elements. The two types of sound actors that use sound cues are Ambient Sounds and Ambient Sound Movables, as shown in the following screenshot:

Ambient sound

As the name suggests, this is a standard ambient sound. It stays exactly where you place it and cannot be moved. These ambient sound actors are generally used for stationary sounds that need some level of randomization or some other form of specific control over multiple sound wave datas.

Ambient sound movable

Functionally very similar to the regular ambient sound, this variation can, as the name suggests, be moved. This sort of ambient sound actor should be used wherever an ambient sound would be used but needs to be mobile.

The main weakness of the two ambient sound actors that use sound cues is that every one you place in a level is set identically to the exact settings within the sound cue. Conversely, ambient sound actors that use sound wave datas can be set up on an instance-by-instance basis. An example explains what this means. Let's say we have two fires in our game: one is a small torch, and the other is a roaring bonfire. If we decide to use the same sound for each, we can place two ambient sound actors that use sound wave datas and adjust some settings within each actor to make sure the bonfire is louder and/or lower pitched. If we wanted this type of variation using sound cues, we would have to make separate sound cues.

Using sound wave data

There are four types of ambient sound actors that use sound wave datas directly, as opposed to housed within sound cues. As previously mentioned, the purpose of using ambient sound actors that use sound wave data is to avoid creating multiple sound cues with only minimally different contents for simple ambient sounds. This is most readily displayed by the fact that the most commonly used ambient sound actors that use sound wave data are called AmbientSoundSimple and AmbientSoundSimpleToggleable, as shown in the following screenshot:

Ambient sound simple

Ambient sound simple is, as the name suggests, the simplest of the ambient sound actors. It is used when we just need one or more sound wave datas to repeat on a loop over and over again. Fortunately, most ambient sounds in a level fit this description; in most cases, if we were to go through a level and do an ambient sound pass, all we would need are ambient sound simples.

Ambient sound non loop

Ambient sound non loop actors are functionally much the same as ambient sound simples. The only difference is, as the name suggests, they don't loop. They play whatever sound wave data(s) are set in the actor, delay for a number of seconds that is also set within the actor, and then go through it again. This is useful when we want a sound to play somewhat intermittently rather than on a regular loop.

Ambient sound non looping toggleable

Ambient sound non looping toggleable actors are, for all intents and purposes, the same as regular ambient sound non loop actors, but they are toggleable. Put simply, they can be turned on and off at will using Kismet, which is useful if we need one of these intermittent sounds to play only when certain things have happened first.

Ambient sound simple toggleable

Ambient sound simple toggleable is basically the same as a plain, run-of-the-mill ambient sound simple, with the difference that, like the ambient sound non looping toggleable, it can be turned on and off using Kismet.

Playing sounds in Kismet

There are several ways to play different kinds of sounds using Kismet. Firstly, if we are using a toggleable ambient sound actor, we can simply use a Toggle sequence, found under New Action | Toggle. There is also a Play Sound sequence located in New Action | Sound | Play Sound. Both are relatively straightforward in terms of where to plug in the sound cue.

Playing sounds in Matinee

If we need a sound to play as part of a Matinee sequence, the Matinee tool lets us trigger the sound in question. If we have a Matinee sequence that contains a Director track, we simply right-click and select Add New Sound Track. From here, we just need to have the sound cue we want selected in the Content Browser, and then, with the Sound Track selected in the active Matinee window, place the time marker where we want the sound to play and press Enter. This places a keyframe that triggers our sound to play, easy as pie. The Matinee tool dialog will look like the following screenshot:

Matinee will only play one sound per sound track at a time, so if we place multiple sounds and they overlap, they won't play simultaneously. Fortunately, we can have as many separate sound tracks as we need, so if we find ourselves setting up a Matinee where two or more sounds overlap in a sound track, we can just add a second track and move some of the sounds into it. Now that we've gone over the different ways to directly play and use sound cues, let's look at how to make and manipulate those same sound cues using UDK's surprisingly robust Sound Cue Editor.

Summary

Now that we have a decent grasp of what kinds of sound control UDK offers us and how to manipulate sounds in the editor, we can set about bringing our game to audible life. A quick tip for placing ambient sounds: if something visually seems like it should be making a noise (a waterfall, a fire, a flickering light, or whatever else), then it probably should have an ambient sound of some sort placed right on it. And as always, what we've covered in this article is an overview of some of the bare-bones basics required to start exploring sounds and soundscapes in UDK. There are plenty of other actors, settings, and things that can be done, so again, I recommend playing around with anything you can find. Experiment with everything in UDK and you'll learn all sorts of new and interesting things.

Resources for Article:

Further resources on this subject:
- Unreal Development Toolkit: Level Design HQ [Article]
- Getting Started on UDK with iOS [Article]
- Configuration and Handy Tweaks for UDK [Article]


C# with NGUI

Packt
29 Dec 2014
23 min read
In this article by Charles Pearson, the author of Learning NGUI for Unity, we will talk about C# scripting with NGUI. We will learn how to handle events and interact with them through code. We'll use them to:
- Play tweens with effects through code
- Implement a localized tooltip system
- Localize labels through code
- Assign callback methods to events using both code and the Inspector view

We'll learn many more useful C# tips throughout the book. Right now, let's start with events and their associated methods.

Events

When scripting in C# with the NGUI plugin, some methods will often be used. For example, you will regularly need to know whether an object is currently hovered upon, pressed, or clicked. Of course, you could code your own system, but NGUI handles that very well, and it's important to use it to its full potential in order to save development time.

Available methods

When you create and attach a script to an object that has a collider on it (for example, a button or a 3D object), you can add the following methods to the script to catch events:

- OnHover(bool state): This method is called when the object is hovered or unhovered. The state bool gives the hover state; if state is true, the cursor just entered the object's collider. If state is false, the cursor has just left the collider's bounds.
- OnPress(bool state): This method works exactly like the previous OnHover() method, except it is called when the object is pressed. It also works for touch-enabled devices. If you need to know which mouse button was used to press the object, use the UICamera.currentTouchID variable: if this int is equal to -1, it's a left-click; -2 is a right-click; -3 is a middle-click.
- OnClick(): This method is similar to OnPress(), except that it is called only when the click is validated, meaning an OnPress(true) event followed by an OnPress(false) event. It works with mouse clicks and touch (tap). To handle double clicks, you can also use the OnDoubleClick() method, which works in the same way.
- OnDrag(Vector2 delta): This method is called each frame that the mouse or touch moves between the OnPress(true) and OnPress(false) events. The Vector2 delta argument gives you the object's movement since the last frame.
- OnDrop(GameObject droppedObj): This method is called when an object is dropped on the GameObject this script is attached to. The dropped GameObject is passed as the droppedObj parameter.
- OnSelect(bool state): This method is called when the user clicks on the object to select it; it is not called again until the object is deselected (another object is clicked on, or the user clicks on empty space), at which point it is called with state set to false.
- OnTooltip(bool state): This method is called when the cursor hovers over the object for longer than the duration defined by the Tooltip Delay inspector parameter of UICamera. If the Sticky Tooltip option of UICamera is checked, the tooltip remains visible until the cursor moves outside the collider; otherwise, it disappears as soon as the cursor moves.
- OnScroll(float delta): This method is called when the mouse's scroll wheel is moved while the object is hovered; the delta parameter gives you the amount and direction of the scroll.

If you attach your script to a 3D object to catch these events, make sure it is on a layer included in the Event Mask of UICamera. Now that we've seen the available event methods, let's see how they are used in a simple example.
Example

To illustrate when these events occur and how to catch them, you can create a new EventTester.cs script with the following code:

    void OnHover(bool state)
    {
        Debug.Log(this.name + " Hover: " + state);
    }

    void OnPress(bool state)
    {
        Debug.Log(this.name + " Pressed: " + state);
    }

    void OnClick()
    {
        Debug.Log(this.name + " Clicked");
    }

    void OnDrag(Vector2 delta)
    {
        Debug.Log(this.name + " Drag: " + delta);
    }

    void OnDrop(GameObject droppedObject)
    {
        Debug.Log(droppedObject.name + " dropped on " + this.name);
    }

    void OnSelect(bool state)
    {
        Debug.Log(this.name + " Selected: " + state);
    }

    void OnTooltip(bool state)
    {
        Debug.Log("Show " + this.name + "'s Tooltip: " + state);
    }

    void OnScroll(float delta)
    {
        Debug.Log("Scroll of " + delta + " on " + this.name);
    }

These are the event methods we discussed, each implemented with its respective arguments. Attach our Event Tester component to any GameObject with a collider, such as our Main | Buttons | Play button, and hit Unity's play button. From now on, events that occur on the object it's attached to are tracked in the Console output.

I recommend keeping the EventTester.cs script in a handy directory as a reminder of the available event methods. Indeed, for each event, you can simply replace the Debug.Log() lines with the instructions you need. Now that we know how to catch events through code, let's use them to display a tooltip!

Creating tooltips

Let's use the OnTooltip() event to show a tooltip for our buttons and different options, as shown in the following screenshot. The tooltip object we are going to create is composed of four elements:
- Tooltip: The tooltip container, with the Tooltip component attached.
- Background: The background sprite that wraps around Label.
- Border: A yellow border that wraps around Background.
- Label: The label that displays the tooltip's text.

We will also make sure the tooltip is localized using NGUI methods.

The tooltip object

In order to create the tooltip object, we'll first create its visual elements (widgets), and then we'll attach the Tooltip component to it in order to define it as NGUI's tooltip.

Widgets

First, we need to create the tooltip object's visual elements:
1. Select our UI Root GameObject in the Hierarchy view.
2. Hit Alt + Shift + N to create a new empty child GameObject.
3. Rename this new child from GameObject to Tooltip.
4. Add the NGUI Panel (UIPanel) component to it.
5. Set this new UIPanel's Depth to 10.

In the preceding steps, we've created the tooltip container. Its UIPanel has a Depth value of 10 to make sure our tooltip remains on top of other panels. Now, let's create the faintly transparent background sprite:
1. With Tooltip selected, hit Alt + Shift + S to create a new child sprite.
2. Rename this new child from Sprite to Background.
3. Select our new Tooltip | Background GameObject and configure its UISprite as follows:
   - Make sure Atlas is set to Wooden Atlas.
   - Set Sprite to the Window sprite.
   - Make sure Type is set to Sliced.
   - Change Color Tint to {R: 90, G: 70, B: 0, A: 180}.
   - Set Pivot to top-left (left arrow + up arrow).
   - Change Size to 500 x 85.
   - Reset its Transform position to {0, 0, 0}.

Ok, we can now easily add a fully opaque border with the following trick:
1. With Tooltip | Background selected, hit Ctrl + D to duplicate it.
2. Rename this new duplicate to Border.
3. Select Tooltip | Border and configure its attached UISprite as follows:
   - Disable the Fill Center option.
   - Change Color Tint to {R: 255, G: 220, B: 0, A: 255}.
   - Change the Depth value to 1.
   - Set Anchors Type to Unified.
   - Make sure the Execute parameter is set to OnUpdate.
   - Drag Tooltip | Background into the new Target field.

By not filling the center of the Border sprite, we now have a yellow border around our background. We used anchors to make sure this border always wraps the background, even at runtime, thanks to the Execute parameter set to OnUpdate. Right now, our Game and Hierarchy views should look like this:

Let's create the tooltip's label:
1. With Tooltip selected, hit Alt + Shift + L to create a new label.
2. For the new Label GameObject, set the following parameters on UILabel:
   - Set Font Type to NGUI, and Font to Arimo20 with a size of 40.
   - Change Text to [FFCC00]This[FFFFFF] is a tooltip.
   - Change Overflow to ResizeHeight.
   - Set Effect to Outline, with an X and Y of 1 and a black color.
   - Set Pivot to top-left (left arrow + up arrow).
   - Change X Size to 434. The height adjusts to the amount of text.
   - Set the Transform position to {33, -22, 0}.

Ok, good. We now have a label that can display our tooltip's text, and its height will adjust automatically as the text gets longer or shorter. Let's configure anchors to make sure the background always wraps around the label:
1. Select our Tooltip | Background GameObject.
2. Set Anchors Type to Unified.
3. Drag Tooltip | Label into the new Target field.
4. Set the Execute parameter to OnUpdate.

Great! Now, if you edit our tooltip's text label to a very long text, you'll see that it adjusts automatically, as shown in the following screenshot:

UITooltip

We can now add the UITooltip component to our tooltip object:
1. Select our UI Root | Tooltip GameObject.
2. Click the Add Component button in the Inspector view.
3. Type tooltip with your keyboard to search for components.
4. Select Tooltip and hit Enter, or click on it with your mouse.
5. Configure the newly attached UITooltip component as follows:
   - Drag UI Root | Tooltip | Label into the Text field.
   - Drag UI Root | Tooltip | Background into the Background field.

The tooltip object is ready! It is now defined as a tooltip for NGUI. Now, let's see how we can display it when needed using a few simple lines of code.

Displaying the tooltip

We must now show the tooltip when needed. In order to do that, we can use the OnTooltip() event, in which we request to display the tooltip with localized text:
1. Select our three Main | Buttons | Exit, Options, and Play buttons.
2. Click the Add Component button in the Inspector view.
3. Type ShowTooltip with your keyboard.
4. Hit Enter twice to create and attach the new ShowTooltip.cs script.
5. Open this new ShowTooltip.cs script.

First, we need to add this public key variable to define which text we want to display:

    // The localization key of the text to display
    public string key = "";

Ok, now add the following OnTooltip() method that retrieves the localized text and requests to show or hide the tooltip depending on the state bool:

    // When the OnTooltip event is triggered on this object
    void OnTooltip(bool state)
    {
        // Get the final localized text
        string finalText = Localization.Get(key);

        // If the tooltip must be removed...
        if (!state)
        {
            // ...Set the finalText to nothing
            finalText = "";
        }

        // Request the tooltip display
        UITooltip.ShowText(finalText);
    }

Save the script.
As you can see in the preceding code, the Localization.Get(string key) method returns the localized text for the key parameter that is passed; you can now use it to localize a label through code at any time. In order to hide the tooltip, we simply request UITooltip to show an empty tooltip. To use Localization.Get(string key), your label must not have a UILocalize component attached to it; otherwise, the value of UILocalize will overwrite anything you assign to UILabel.

Ok, we have added the code to show our tooltip with localized text. Now, open the Localization.txt file and add these localized strings:

    // Tooltips
    Play_Tooltip, "Launch the game!", "Lancer le jeu !"
    Options_Tooltip, "Change language, nickname, subtitles...", "Changer la langue, le pseudo, les sous-titres..."
    Exit_Tooltip, "Leaving us already?", "Vous nous quittez déjà ?"

Now that our localized strings are added, we could manually configure the key parameter of our three buttons' Show Tooltip components to display Play_Tooltip, Options_Tooltip, and Exit_Tooltip respectively. But that would be repetitive, and if we want to easily add localized tooltips to future and existing objects, we should implement the following system: if the key parameter is empty, we'll try to get a localized text based on the GameObject's name. Let's do this now. Open our ShowTooltip.cs script and add this Start() method:

    // At start
    void Start()
    {
        // If key parameter isn't defined in inspector...
        if (string.IsNullOrEmpty(key))
        {
            // ...Set it now based on the GameObject's name
            key = name + "_Tooltip";
        }
    }

Click on Unity's play button. That's it! When you leave your cursor on any of our three buttons, a localized tooltip appears. The tooltip wraps around the displayed text perfectly, and we didn't have to manually configure the Show Tooltip components' key parameters!

Actually, I have a feeling that the display delay is too long. Let's correct this:
1. Select our UI Root | Camera GameObject.
2. Set Tooltip Delay of UICamera to 0.3.

That's better; our localized tooltip now appears after 0.3 seconds of hovering.

Adding the remaining tooltips

We can now easily add tooltips for our Options page's elements. The tooltip works on any GameObject with a collider attached to it, so let's use a search by type to find them:
1. In the Hierarchy view's search bar, type t:boxcollider.
2. Select Checkbox, Confirm, Input, List (both), Music, and SFX.
3. Click on the Add Component button in the Inspector view.
4. Type show with your keyboard to search the components.
5. Hit Enter or click on the Show Tooltip component to attach it to them.

For the objects with generic names, such as Input and List, we need to set their key parameter manually, as follows:
- Select the Checkbox GameObject and set Key to Sound_Tooltip.
- Select the Input GameObject and set Key to Nickname_Tooltip.
- For the List for language selection, set Key to Language_Tooltip.
- For the List for subtitles selection, set Key to Subtitles_Tooltip.

To tell whether the selected list is the language or the subtitles list, look at the Options of its UIPopup List: if it has the None option, it's the subtitles selection.
Finally, we need to add these localization strings to the Localization.txt file:

    Sound_Tooltip, "Enable or disable game sound", "Activer ou désactiver le son du jeu"
    Nickname_Tooltip, "Name used during the game", "Pseudo utilisé lors du jeu"
    Language_Tooltip, "Game and user interface language", "Langue du jeu et de l'interface"
    Subtitles_Tooltip, "Subtitles language", "Langue des sous-titres"
    Confirm_Tooltip, "Confirm and return to main menu", "Confirmer et retourner au menu principal"
    Music_Tooltip, "Game music volume", "Volume de la musique"
    SFX_Tooltip, "Sound effects volume", "Volume des effets"

Hit Unity's play button. We now have localized tooltips for all our options! We now know how to easily use NGUI's tooltip system. It's time to talk about tween methods.

Tweens

The tweens we have used until now were components we added to GameObjects in the scene. It is also possible to easily add tweens to GameObjects through code. You can see all the available tweens by simply typing Tween inside any method in your favorite IDE; you will see a list of Tween classes thanks to auto-completion, as shown in the following screenshot:

The strong point of these classes is that they work in one line and don't have to be executed each frame; you just have to call their Begin() method! Here we will apply tweens to widgets, but keep in mind that it works in exactly the same way with other GameObjects, since NGUI widgets are GameObjects.

Tween Scale

Previously, we used the Tween Scale component to make our main window disappear when the Exit button is pressed. Let's do the same when the Play button is pressed, but this time we'll do it through code to understand how it's done.

DisappearOnClick script

We will first create a new DisappearOnClick.cs script that tweens a target's scale to {0.01, 0.01, 0.01} when the GameObject it's attached to is clicked on:
1. Select our UI Root | Main | Buttons | Play GameObject.
2. Click the Add Component button in the Inspector view.
3. Type DisappearOnClick with your keyboard.
4. Hit Enter twice to create and add the new DisappearOnClick.cs script.
5. Open this new DisappearOnClick.cs script.

First, we must add a public target GameObject to define which object will be affected by the tween, and a duration float to define the speed:

    // Declare the target we'll tween down to {0.01, 0.01, 0.01}
    public GameObject target;
    // Declare a float to configure the tween's duration
    public float duration = 0.3f;

Ok, now let's add the following OnClick() method, which creates a new tween towards {0.01, 0.01, 0.01} on our desired target using the duration variable:

    // When this object is clicked
    private void OnClick()
    {
        // Create a tween on the target
        TweenScale.Begin(target, duration, Vector3.one * 0.01f);
    }

In the preceding code, we scale the target down towards 0.01f over the desired duration. Save the script. Now we simply have to assign our variables in the Inspector view:
1. Go back to Unity and select our Play button GameObject.
2. Drag our UI Root | Main object into the DisappearOnClick Target field.

Great. Now, hit Unity's play button. When you click the menu's Play button, our main menu is scaled down to {0.01, 0.01, 0.01} with the single TweenScale.Begin() line! Now that we've seen how to make a basic tween, let's see how to add effects.

Tween effects

Right now, our tween is simple and linear. In order to add an effect to the tween, we first need to store it as a UITweener, which is its parent class.
Replace the line in our OnClick() method with these two lines to first store the tween and then set an effect:

    // Retrieve the new target's tween
    UITweener tween = TweenScale.Begin(target, duration, Vector3.one * 0.01f);
    // Set the new tween's effect method
    tween.method = UITweener.Method.EaseInOut;

That's it. Our tween now has an EaseInOut effect. You also have the following tween effect methods:
- BounceIn: Bouncing effect at the start of the tween
- BounceOut: Bouncing effect at the end of the tween
- EaseIn: Smooth acceleration effect at the start of the tween
- EaseInOut: Smooth acceleration and deceleration
- EaseOut: Smooth deceleration effect at the end of the tween
- Linear: Simple linear tween without any effects

Great. We now know how to add tween effects through code. You can also set the tween's ignoreTimeScale to true if you want it to always run at normal speed, even if your Time.timeScale variable is different from 1. Now, let's see how we can set event delegates through code.

Event delegates

Many NGUI components broadcast events for which you can set an event delegate (also known as a callback method) that is executed when the event is triggered. We did this through the Inspector view by assigning the Notify and Method fields when buttons were clicked. For any type of tween, you can set a specific event delegate for when the tween has finished. We'll see how to do this through code, but before we continue, we must create our callback first: a callback that loads a new scene.

The callback

Open our MenuManager.cs script and add this static LoadGameScene() callback method:

    public static void LoadGameScene()
    {
        //Load the Game scene now
        Application.LoadLevel("Game");
    }

Save the script. The preceding code requests to load the Game scene. To ensure Unity finds our scenes at runtime, we need to create the Game scene and add both the Menu and Game scenes to the build settings:
1. Navigate to File | Build Settings.
2. Click on the Add Current button (don't close the window yet).
3. In Unity, navigate to File | New Scene.
4. Navigate to File | Save Scene as... and save the scene as Game.unity.
5. Click on the Add Current button of the Build Settings window and close it.
6. Navigate to File | Open Scene and re-open our Menu.unity scene.

Ok, now that both scenes have been added to the build settings, we are ready to link our callback to our event.

Linking a callback to an event

Now that our LoadGameScene() callback method is written, we must link it to our event. We have two solutions: first, we'll see how to assign it using code exclusively, and then we'll create a more flexible system using NGUI's Notify and Method fields.

Code

In order to set a callback for a specific event, a generic solution exists for all NGUI events you might encounter: the EventDelegate.Set() method. You can also add multiple callbacks to an event using EventDelegate.Add(). Add this line at the end of the OnClick() method of DisappearOnClick.cs:

    // Set the tween's onFinished event to our LoadGameScene callback
    EventDelegate.Set(tween.onFinished, MenuManager.LoadGameScene);

Instead of the preceding line, we can also use the tween-specific SetOnFinished() convenience method to get exactly the same result with fewer words:

    // Another way to assign our method to the onFinished event
    tween.SetOnFinished(MenuManager.LoadGameScene);

Great. If you hit Unity's play button and click on our main menu's Play button, you'll see that our Game scene is loaded as soon as the tween has finished!
It is possible to remove the link between an existing event delegate and a callback by calling EventDelegate.Remove(eventDelegate, callback);. Now, let's see how to link an event delegate to a callback using the Inspector view.

Inspector

Now that we have seen how to set event delegates through code, let's see how we can create a variable that lets us choose which method to call within the Inspector view. This way, the method to call when the target disappears can be set at any time without editing the code. The On Disappear variable shown in the preceding screenshot is of the type EventDelegate. We can declare it right now as a global variable of our DisappearOnClick.cs script with the following line:

    // Declare an event delegate variable to be set in Inspector
    public EventDelegate onDisappear;

Now, let's change the OnClick() method's last line to make sure the tween's onFinished event calls the defined onDisappear callback:

    // Set the tween's onFinished event to the selected callback
    tween.SetOnFinished(onDisappear);

Ok, great. Save the script and go to Unity. Select our main menu's Play button: a new On Disappear field has appeared. Drag UI Root (which holds our MenuManager.cs script) into the Notify field. Now, try to select our MenuManager | LoadGameScene method. Surprisingly, it doesn't appear, and you can only select the script's Exit method. Why is that? Simply because our LoadGameScene() method is currently static. If we want it to be available in the Inspector view, we need to remove its static property:
1. Open our MenuManager.cs script.
2. Remove the static keyword from our LoadGameScene() method.
3. Save the script and return to Unity.

You can now select it in the drop-down list. Great! We have set our callback through the Inspector view; the Game scene will be loaded when the menu disappears. Now that we have learned how to assign event delegates to callback methods through code and the Inspector view, let's see how to assign keyboard keys to user interface elements.

Keyboard keys

In this section, we'll see how to add keyboard control to our UI. First, we'll see how to bind keys to buttons, and then we'll add a navigation system using the keyboard arrows.

UIKey binding

The UIKey Binding component assigns a specific key to the widget it's attached to. We'll use it now to assign the keyboard's Escape key to our menu's Exit button:
1. Select our UI Root | Main | Buttons | Exit GameObject.
2. Click the Add Component button in the Inspector view.
3. Type key with your keyboard to search for components.
4. Select Key Binding and hit Enter, or click on it with your mouse.

Let's see its available parameters.

Parameters

We've just added the UIKey Binding component to our Exit button. It has three parameters:
- Key Code: Which key would you like to bind to an action?
- Modifier: For a two-button combination. Select one of the four available modifiers: Shift, Control, Alt, or None.
- Action: Which action should we bind to this key? You can simulate a button click with PressAndClick, a selection with Select, or both with All.

Ok, now we'll configure it to see how it works.

Configuration

Simply set the Key Code field to Escape. Now hit Unity's play button. When you hit the Escape key on your keyboard, it reacts as if the Exit button was pressed!
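If you prefer setting this up from code rather than through the Inspector, the same component can be added at runtime. The following is only a minimal sketch; it assumes NGUI's UIKeyBinding exposes public keyCode and action fields matching the Inspector parameters above, so check UIKeyBinding.cs in your NGUI version if the field names differ:

    // Sketch only: bind Escape to this button's click from code instead of the Inspector.
    // Assumes UIKeyBinding has public 'keyCode' and 'action' fields mirroring its Inspector.
    using UnityEngine;

    public class BindEscapeKey : MonoBehaviour
    {
        void Start()
        {
            // Add the same Key Binding component we just configured by hand.
            UIKeyBinding binding = gameObject.AddComponent<UIKeyBinding>();
            binding.keyCode = KeyCode.Escape;                   // Key Code parameter
            binding.action = UIKeyBinding.Action.PressAndClick; // Action parameter
        }
    }

Attached to the Exit button, this would behave like the Inspector setup we just made.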
UIKey navigation The UIKey Navigation component helps us assign objects to select using the keyboard arrows or controller directional-pad. For most widgets, the automatic configuration is enough, but we'll need to use the override parameters in some cases to have the behavior we need. The nickname input field has neither the UIButton nor the UIButton Scale components attached to it. This means that there will be no feedback to show the user it's currently selected with the keyboard navigation, which is a problem. We can correct this right now. Select UI Root | Options | Nickname | Input, and then: Add the Button component (UIButton) to it. Add the Button Scale component (UIButton Scale) to it. Center Pivot of UISprite (middle bar + middle bar). Reset Center of Box Collider to {0, 0, 0}. The Nickname | Input GameObject should have an Inspector view like this:   Ok. We'll now add the Key Navigation component (UIKey Navigation) to most of the buttons in the scene. In order to do that, type t:uibutton in the Hierarchy view's search bar to display only GameObjects with the UIButton component attached to them:   Ok. With the preceding search filter, select the following GameObjects:   Now, with the preceding selection, follow these steps: Click the Add Component button in the Inspector view. Type key with your keyboard to search for components. Select Key Navigation and hit Enter or click on it with your mouse. We've added the UIKey Navigation component to our selection. Let's see its parameters. Parameters We've just added the following UIKey Navigation component to our objects:   The newly attached UIKey Navigation component has four parameter groups: Starts Selected: Is this widget selected by default at the start? Select on Click: Which widget should be selected when this widget is clicked on — or the Enter key/confirm button has been pressed? This option can be used to select a specific widget when a new page is displayed. Constraint: Use this to limit the navigation movement from this widget: None: The movement is free from this widget Vertical: From this widget, you can only go up or down Horizontal: From this widget, you can only move left or right Explicit: Only move to widgets specified in the Override Override: Use the Left, Right, Up, and Down fields to force the input to select the specified objects. If the Constraint parameter is set to Explicit, only widgets specified here can be selected. Otherwise, automatic configuration still works for fields left to None. Summary This article thus has given an introduction to how C# is used in Unity. Resources for Article: Further resources on this subject: Unity Networking – The Pong Game [article] Unit and Functional Tests [article] Components in Unity [article]

Creating Virtual Landscapes

Packt
06 Mar 2013
9 min read
(For more resources related to this topic, see here.) Describing a world in data Just like modern games, early games like Ant Attack required data that described in some meaningful way how the landscape was to appear. The eerie city landscape of "Antchester" (shown in the following screenshot) was constructed in memory as a 128 x 128 byte grid, the first 128 bytes defined the upper-left wall, and the 128 byte row below that, and so on. Each of these bytes described the vertical arrangement of blocks in lower six bits, for game logic purposes the upper two bits were used for game sprites. Heightmaps are common ground The arrangement of numbers in a grid pattern is still extensively used to represent terrain. We call these grids "maps" and they are popular by virtue of being simple to use and manipulate. A long way from "Antchester", maps can now be measured in megabytes or Gigabytes (around 20GB is needed for the whole earth at 30 meter resolution). Each value in the map represents the height of the terrain at that location. These kinds of maps are known as heightmaps. However, any information that can be represented in the grid pattern can use maps. Additional maps can be used by 3D engines to tell it how to mix many textures together; this is a common terrain painting technique known as "splatting". Splats describe the amount of blending between texture layers. Another kind of map might be used for lighting, adding light, or shadows to an area of the map. We also find in some engines something called visibility maps which hide parts of the terrain; for example we might want to add holes or caves into a landscape. Coverage maps might be used to represent objects such as grasses, different vegetation layers might have some kind of map the engine uses to draw 3D objects onto the terrain surface. GROME allows us to create and edit all of these kinds of maps and export them, with a little bit of manipulation we can port this information into most game engines. Whatever the technique used by an engine to paint the terrain, height-maps are fairly universal in how they are used to describe topography. The following is an example of a heightmap loaded into an image viewer. It appears as a gray scale image, the intensity of each pixel represents a height value at that location on the map. This map represents a 100 square kilometer area of north-west Afghanistan used in a flight simulation. GROME like many other terrain editing tools uses heightmaps to transport terrain information. Typically importing the heightmap as a gray scale image using common file formats such as TIFF, PNG, or BMP. When it's time to export the terrain project you have similar options to save. This commonality is the basis of using GROME as a tool for many different engines. There's nothing to stop you from making changes to an exported heightmap using image editing software. The GROME plugin system and SDK permit you to make your own custom exporter for any unsupported formats. So long as we can deal with the material and texture format requirements for our host 3D engine we can integrate GROME into the art pipeline. Texture sizes Using textures for heightmap information does have limitations. The largest "safe" size for a texture is considered 4096 x 4096 although some of the older 3D cards would have problems with anything higher than 2048 x 2048. Also, host 3D engines often require texture dimensions to be a power of 2. 
A table of recommended dimensions for images follows:

Safe texture dimensions
64 x 64
128 x 128
256 x 256
512 x 512
1024 x 1024
2048 x 2048
4096 x 4096

512 x 512 provides an efficient trade-off between resolution and performance and is the default value for GROME operations. If you're familiar with this already then great. You might see questions posted on forums about texture corruption or materials not looking correct; sometimes these problems are the result of not conforming to this arrangement. Also, you'll see these numbers crop up a few times in GROME's drop-down property boxes. To avoid any potential problems it is wise to ensure any textures you use in your projects conform to these specifications. One exception is the Unreal Development Kit (UDK), in which you'll see numbers such as 257 x 257 used. If you have a huge amount of terrain data that you need to import for a project you can use the texture formats mentioned earlier, but I recommend using RAW formats if possible. If your project is based on real-world topography, then importing DTED or GeoTIFF data will extract geographical information such as latitude, longitude, and the number of arc seconds represented by the terrain.

Digital Terrain Elevation Data (DTED) is a file format used by geographers and mappers to map the height of a terrain, often used to import real-world topography into flight simulations. Shuttle Radar Topography Mission (SRTM) data is easily obtained and converted.

The huge world problem

Huge landscapes may require a lot of memory, potentially more than a 3D card can handle. On game consoles memory is a scarce resource, and on mobile devices transferring and storing the app is a factor. Even on a cutting-edge PC, large datasets will eat into that onboard memory, especially when we get down to designing and building them using high-resolution data. Requesting actions that eat up your system memory may cause the application to fail. We can use GROME to create vast worlds without worrying too much about memory. This is done by taking advantage of how GROME manages data through a process of splitting terrain into "zones" and swapping it out to disk. This swapping is similar to how operating systems move memory to disk and reload it on demand. By default, whenever you import large DTED files GROME will break the region into multiple zones and hide them. Someone new to GROME might be confused by a lengthy file import operation only to be presented with a seemingly empty project space. When creating terrain for engines such as Unity, UDK, Ogre3D, and others you should keep in mind their own technical limitations on what they can reasonably import. Most of these engines are built for small-scale scenes. While GROME doesn't impose any specific unit of measure on your designs, one unit equals one meter is a good rule of thumb. Many third-party models are made to this scale. However, it's up to the artist to pick a unit of scale and, importantly, be consistent. Keep in mind that many 3D engines are limited by two factors:

Floating point math precision
Z-buffer (depth buffer) precision

Floating point precision

As a general rule, anything farther than 20,000 units away from the world origin in any direction is going to exhibit precision errors. This manifests as vertex jitter whenever vertices are rotated and transformed by large values. The effects are not something you can easily work around; changing the scale of the object simply shifts the error to another decimal place. One common mitigation, sketched below, is to keep coordinates small by periodically shifting the entire scene back toward the origin.
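The following is a minimal "floating origin" sketch in Unity-style C# that illustrates the idea; it is not something GROME itself provides. When a tracked transform drifts past a threshold, every registered scene root is shifted back so coordinates stay small. The field names and the 10,000-unit threshold are our own assumptions, and the sketch assumes the tracked transform is not parented under one of the shifted roots.

using UnityEngine;

public class FloatingOrigin : MonoBehaviour
{
    public Transform tracked;           // usually the camera or the player
    public float threshold = 10000f;    // comfortably below the ~20,000 unit danger zone
    public Transform[] shiftRoots;      // terrain tiles, props, and other scene roots

    void LateUpdate()
    {
        Vector3 offset = tracked.position;
        if (offset.magnitude < threshold)
            return;

        // Shift the whole scene so the tracked object ends up back at the origin
        foreach (Transform root in shiftRoots)
            root.position -= offset;

        tracked.position = Vector3.zero;
    }
}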
Normally, engines that specialize in rendering large worlds use either camera-relative rendering or some kind of paging system. Unity and UDK are not inherently capable of camera-relative rendering, but a method of paging is possible to employ.

Depth buffer precision

The other issue associated with large scene rendering is z-fighting. The depth buffer is a normally invisible part of a scene used to determine which part is hidden by another (depth-testing). Whenever a pixel is written to a scene buffer, the z component is saved in the depth buffer. Typically this buffer has 16 bits of precision, giving you 65,536 discrete depth values. This depth value is based on the 3D camera's view range (the difference between the camera's near and far distances). Z-fighting occurs when co-planar polygons are written into the z-buffer with similar depth values, causing them to "fight" for visibility. This flickering is an indicator that the scene and camera settings need to be rethought. Often the easy fix is to increase the z-buffer precision by increasing the camera's near distance. The downside is that this can clip very near objects.

GROME will let you create such large worlds; its own Graphite engine handles them well. Most 3D engines are designed for smaller first- and third-person games, which will have a practical limit of around 10 to 25 square kilometers (1 meter = 1 unit). GROME can mix levels of detail quite easily; different regions of the terrain have their own mesh density. If, for example, you have a map of an island, you will want lots of detail for the land and less in the sea region. However, game engines such as Unity, UDK, and Ogre3D are not easily adapted to deal with such variability in the terrain mesh, since they are optimized to render a large triangular grid of uniform size. Instead, we use techniques to fake extra detail and bake it into our terrain textures, dramatically reducing the triangle count in the process. Using a combination of Normal Maps and Mesh Layers in GROME we can create the illusion of more detail than there is at a distance.

Normal map

A Normal is a unit vector (a vector with a total length of one) perpendicular to a surface. When a texture is used as a Normal map, the red, green, and blue channels represent the vector (x, y, z). These are used to generate the illusion of more detail by creating a bumpy-looking surface. Also known as bump maps.

Summary

In this article we looked at heightmaps and how they allow us to import and export to other programs and engines. We touched upon world sizes and limitations commonly found in 3D engines.

Resources for Article: Further resources on this subject: Photo Manipulation with GIMP 2.6 [Article] Setting up a BizTalk Server Environment [Article] Creating and Warping 3D Text with Away3D 3.6 [Article]

Audio and Animation: Hand in Hand

Packt
16 Feb 2016
5 min read
In this article, we are going to learn techniques to match audio pitch to animation speed. This is crucial when creating animated content whose playback speed can change at runtime. (For more resources related to this topic, see here.)

Matching the audio pitch to the animation speed

Many things sound higher in pitch when accelerated and lower when slowed down: car engines, fan coolers, vinyl on a record player; the list goes on. If you want to simulate this kind of sound effect in an animated object that can have its speed changed dynamically, follow this article.

Getting ready

For this, you'll need an animated 3D object and an audio clip. Please use the files animatedRocket.fbx and engineSound.wav, available in the 1362_09_01 folder that you can find in the code bundle of the book Unity 5.x Cookbook at https://www.packtpub.com/game-development/unity-5x-cookbook.

How to do it...

To change the pitch of an audio clip according to the speed of an animated object, please follow these steps:

1. Import the animatedRocket.fbx file into your Project.
2. Select the animatedRocket.fbx file in the Project view. Then, from the Inspector view, check its Import Settings. Select Animations, then select the clip Take 001, and make sure to check the Loop Time option. Click on the Apply button to save the changes. The reason we didn't need to check the Loop Pose option is that our animation already loops in a seamless fashion. If it didn't, we could have checked that option to automatically create a seamless transition from the last to the first frame of the animation.
3. Add the animatedRocket GameObject to the scene by dragging it from the Project view into the Hierarchy view.
4. Import the engineSound.wav audio clip.
5. Select the animatedRocket GameObject. Then, drag engineSound from the Project view into the Inspector view, adding it as an Audio Source for that object.
6. In the Audio Source component of animatedRocket, check the box for the Loop option.
7. We need to create a Controller for our object. In the Project view, click on the Create button and select Animator Controller. Name it rocketController.
8. Double-click on the rocketController object to open the Animator window. Then, right-click on the gridded area and select the Create State | Empty option from the contextual menu. Name the new state spin and set Take 001 as its motion in the Motion field.
9. From the Hierarchy view, select animatedRocket. Then, in the Animator component (in the Inspector view), set rocketController as its Controller and make sure that the Apply Root Motion option is unchecked.
10. In the Project view, create a new C# Script and rename it to ChangePitch.
Open the script in your editor and replace everything with the following code:

using UnityEngine;

public class ChangePitch : MonoBehaviour
{
    public float accel = 0.05f;
    public float minSpeed = 0.0f;
    public float maxSpeed = 2.0f;
    public float animationSoundRatio = 1.0f;

    private float speed = 0.0f;
    private Animator animator;
    private AudioSource audioSource;

    void Start()
    {
        animator = GetComponent<Animator>();
        audioSource = GetComponent<AudioSource>();
        speed = animator.speed;

        // Calculate the initial pitch without changing the speed
        AccelRocket(0f);
    }

    void Update()
    {
        // Key 1 accelerates, key 2 decelerates
        if (Input.GetKey(KeyCode.Alpha1))
            AccelRocket(accel);
        if (Input.GetKey(KeyCode.Alpha2))
            AccelRocket(-accel);
    }

    public void AccelRocket(float accel)
    {
        speed += accel;
        speed = Mathf.Clamp(speed, minSpeed, maxSpeed);
        animator.speed = speed;

        // Keep the pitch positive, even when the animation is reversed
        float soundPitch = animator.speed * animationSoundRatio;
        audioSource.pitch = Mathf.Abs(soundPitch);
    }
}

Save your script and add it as a component to the animatedRocket GameObject. Play the scene and change the animation speed by pressing key 1 (accelerate) and key 2 (decelerate) on your alphanumeric keyboard. The audio pitch will change accordingly.

How it works...

In the Start() method, besides storing the Animator and AudioSource components in variables, we get the initial speed from the Animator and call the AccelRocket() function, passing 0 as an argument, only so that the function calculates the resulting pitch for the Audio Source. In the Update() function, the if (Input.GetKey(KeyCode.Alpha1)) and if (Input.GetKey(KeyCode.Alpha2)) lines detect whenever the 1 or 2 keys are being pressed on the alphanumeric keyboard and call the AccelRocket() function, passing the accel float value as an argument. The AccelRocket() function, in turn, increments speed by the received argument. It then uses the Mathf.Clamp() command to limit the new speed value between the minimum and maximum speeds set by the user. Then, it changes the Animator speed and Audio Source pitch according to the absolute value of the new speed (the reason for making it an absolute value is to keep the pitch a positive number, even when the animation is reversed by a negative speed value). Also, please note that setting the animation speed, and therefore the sound pitch, to 0 will cause the sound to stop, making it clear that stopping the object's animation also prevents the engine sound from playing.

There's more...

Here is some information on how to fine-tune and customize this recipe.

Changing the Animation/Sound Ratio

If you want the audio clip pitch to be more or less affected by the animation speed, change the value of the Animation/Sound Ratio parameter.

Accessing the function from other scripts

The AccelRocket() function was made public so that it can be accessed from other scripts. As an example, we have included the ExtChangePitch.cs script in the 1362_09_01 folder. Try attaching this script to the Main Camera object and use it to control the speed by clicking the left and right mouse buttons; a rough sketch of what such a controller might look like appears at the end of this article.

Summary

In this article we learned how to match audio pitch to animation speed and how to change the Animation/Sound Ratio. To learn more, please refer to the following books:

Learning Unity 2D Game Development by Example: https://www.packtpub.com/game-development/learning-unity-2d-game-development-example
Unity Game Development Blueprints: https://www.packtpub.com/game-development/unity-game-development-blueprints
Getting Started with Unity: https://www.packtpub.com/game-development/getting-started-unity
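As promised above, here is a rough, hypothetical sketch of what an external controller along the lines of ExtChangePitch.cs might look like. The field name rocket and the mouse-button mapping are our assumptions; check the actual script in the 1362_09_01 folder for the book's implementation. The sketch relies only on the public AccelRocket() method shown earlier.

using UnityEngine;

public class ExternalPitchControl : MonoBehaviour
{
    public ChangePitch rocket;       // drag the animatedRocket GameObject here in the Inspector
    public float accel = 0.05f;

    void Update()
    {
        // Left mouse button accelerates, right mouse button decelerates
        if (Input.GetMouseButton(0))
            rocket.AccelRocket(accel);
        if (Input.GetMouseButton(1))
            rocket.AccelRocket(-accel);
    }
}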
Resources for Article:   Further resources on this subject: The Vertex Functions [article] Lights and Effects [article] Virtual Machine Concepts [article]

2D Twin-stick Shooter

Packt
11 Nov 2014
21 min read
This article, written by John P. Doran, the author of Unity Game Development Blueprints, teaches us how to use Unity to build a well-formed game, and gives developers already experienced in the field a chance to pick up some solid techniques. (For more resources related to this topic, see here.)

The shoot 'em up genre is one of the earliest kinds of games. In shoot 'em ups, the player character is a single entity fighting a large number of enemies. They are typically played with a top-down perspective, which is perfect for 2D games. Shoot 'em up games also span many categories, based upon their design elements. Elements of a shoot 'em up were first seen in the 1961 Spacewar! game. However, the concept wasn't popularized until 1978 with Space Invaders. The genre was quite popular throughout the 1980s and 1990s and went in many different directions, including bullet hell games, such as the titles of the Touhou Project. The genre has gone through a resurgence in recent years with games such as Bizarre Creations' Geometry Wars: Retro Evolved, which is more famously known as a twin-stick shooter.

Project overview

Over the course of this article, we will be creating a 2D multidirectional shooter game similar to Geometry Wars. In this game, the player controls a ship. This ship can move around the screen using the keyboard and shoot projectiles in the direction that the mouse points at. Enemies and obstacles will spawn and move towards the player, and the player will avoid/shoot them. This article will also serve as a refresher on a lot of the concepts of working in Unity and give an overview of the recent addition of native 2D tools into Unity.

Your objectives

This project will be split into a number of tasks. It will be a simple step-by-step process from beginning to end. Here is the outline of our tasks:

Setting up the project
Creating our scene
Adding in player movement
Adding in shooting functionality
Creating enemies
Adding GameController to spawn enemy waves
Particle systems
Adding in audio
Adding in points, score, and wave numbers
Publishing the game

Prerequisites

Before we start, we will need to get the latest Unity version, which you can always get by going to http://unity3d.com/unity/download/ and downloading it there. At the time of writing this article, the version is 4.5.3, but this project should work in future versions with minimal changes.

Download the Chapter1.zip package and unzip it. Inside the Chapter1 folder, there are a number of things, including an Assets folder, which will have the art, sound, and font files you'll need for the project, as well as the Chapter_1_Completed.unitypackage (this is the complete article package that includes the entire project for you to work with). I've also added in the complete game exported (TwinstickShooter Exported) as well as the entire project zipped up in the TwinstickShooter Project.zip file.

Setting up the project

At this point, I have assumed that you have Unity freshly installed and have started it up. With Unity started, go to File | New Project. Select a Project Location of your choice somewhere on your hard drive, and ensure you have Setup defaults for set to 2D. Once completed, select Create. At this point, we will not need to import any packages, as we'll be making everything from scratch. From there, if you see the Welcome to Unity pop up, feel free to close it out as we won't be using it.
At this point, you will be brought to the general Unity layout, as follows: Again, I'm assuming you have some familiarity with Unity before reading this article; if you would like more information on the interface, please visit http://docs.unity3d.com/Documentation/Manual/LearningtheInterface.html. Keeping your Unity project organized is incredibly important. As your project moves from a small prototype to a full game, more and more files will be introduced to your project. If you don't start organizing from the beginning, you'll keep planning to tidy it up later on, but as deadlines keep coming, things may get quite out of hand. This organization becomes even more vital when you're working as part of a team, especially if your team is telecommuting. Differing project structures across different coders/artists/designers is an awful mess to find yourself in. Setting up a project structure at the start and sticking to it will save you countless minutes of time in the long run and only takes a few seconds, which is what we'll be doing now. Perform the following steps: Click on the Create drop-down menu below the Project tab in the bottom-left side of the screen. From there, click on Folder, and you'll notice that a new folder has been created inside your Assets folder. After the folder is created, you can type in the name for your folder. Once done, press Enter for the folder to be created. We need to create folders for the following directories:      Animations      Prefabs      Scenes      Scripts      Sprites If you happen to create a folder inside another folder, you can simply drag-and-drop it from the left-hand side toolbar. If you need to rename a folder, simply click on it once and wait, and you'll be able to edit it again. You can also use Ctrl + D to duplicate a folder if it is selected. Once you're done with the aforementioned steps, your project should look something like this: Creating our scene Now that we have our project set up, let's get started with creating our player: Double-click on the Sprites folder. Once inside, go to your operating system's browser window, open up the Chapter 1/Assets folder that we provided, and drag the playerShip.png file into the folder to move it into our project. Once added, confirm that the image is Sprite by clicking on it and confirming from the Inspector tab that Texture Type is Sprite. If it isn't, simply change it to that, and then click on the Apply button. Have a look at the following screenshot: If you do not want to drag-and-drop the files, you can also right-click within the folder in the Project Browser (bottom-left corner) and select Import New Asset to select a file from a folder to bring it in. The art assets used for this tutorial were provided by Kenney. To see more of their work, please check out www.kenney.nl. Next, drag-and-drop the ship into the scene (the center part that's currently dark gray). Once completed, set the position of the sprite to the center of the Screen (0, 0) by right-clicking on the Transform component and then selecting Reset Position. Have a look at the following screenshot: Now, with the player in the world, let's add in a background. Drag-and-drop the background.png file into your Sprites folder. After that, drag-and-drop a copy into the scene. 
If you put the background on top of the ship, you'll notice that currently the background is in front of the player (Unity puts newly added objects on top of previously created ones if their position on the Z axis is the same; this is commonly referred to as the z-order), so let's fix that. Objects on the same Z axis without sorting layer are considered to be equal in terms of draw order; so just because a scene looks a certain way this time, when you reload the level it may look different. In order to guarantee that an object is in front of another one in 2D space is by having different Z values or using sorting layers. Select your background object, and go to the Sprite Renderer component from the Inspector tab. Under Sorting Layer, select Add Sorting Layer. After that, click on the + icon for Sorting Layers, and then give Layer 1 a name, Background. Now, create a sorting layer for Foreground and GUI. Have a look at the following screenshot: Now, place the player ship on the foreground and the background by selecting the object once again and then setting the Sorting Layer property via the drop-down menu. Now, if you play the game, you'll see that the ship is in front of the background, as follows: At this point, we can just duplicate our background a number of times to create our full background by selecting the object in the Hierarchy, but that is tedious and time-consuming. Instead, we can create all of the duplicates by either using code or creating a tileable texture. For our purposes, we'll just create a texture. Delete the background sprite by left-clicking on the background object in the Hierarchy tab on the left-hand side and then pressing the Delete key. Then select the background sprite in the Project tab, change Texture Type in the Inspector tab to Texture, and click on Apply. Now let's create a 3D cube by selecting Game Object | Create Other | Cube from the top toolbar. Change the object's name from Cube to Background. In the Transform component, change the Position to (0, 0, 1) and the Scale to (100, 100, 1). If you are using Unity 4.6 you will need to go to Game Object | 3D Object | Cube to create the cube. Since our camera is at 0, 0, -10 and the player is at 0, 0, 0, putting the object at position 0, 0, 1 will put it behind all of our sprites. By creating a 3D object and scaling it, we are making it really large, much larger than the player's monitor. If we scaled a sprite, it would be one really large image with pixelation, which would look really bad. By using a 3D object, the texture that is applied to the faces of the 3D object is repeated, and since the image is tileable, it looks like one big continuous image. Remove Box Collider by right-clicking on it and selecting Remove Component. Next, we will need to create a material for our background to use. To do so, under the Project tab, select Create | Material, and name the material as BackgroundMaterial. Under the Shader property, click on the drop-down menu, and select Unlit | Texture. Click on the Texture box on the right-hand side, and select the background texture. Once completed, set the Tiling property's x and y to 25. Have a look at the following screenshot: In addition to just selecting from the menu, you can also drag-and-drop the background texture directly onto the Texture box, and it will set the property. Tiling tells Unity how many times the image should repeat in the x and y positions, respectively. Finally, go back to the Background object in Hierarchy. 
Under the Mesh Renderer component, open up Materials by left-clicking on the arrow, and change Element 0 to our BackgroundMaterial material. Consider the following screenshot: Now, when we play the game, you'll see that we now have a complete background that tiles properly. Scripting 101 In Unity, the behavior of game objects is controlled by the different components that are attached to them in a form of association called composition. These components are things that we can add and remove at any time to create much more complex objects. If you want to do anything that isn't already provided by Unity, you'll have to write it on your own through a process we call scripting. Scripting is an essential element in all but the simplest of video games. Unity allows you to code in either C#, Boo, or UnityScript, a language designed specifically for use with Unity and modelled after JavaScript. For this article, we will use C#. C# is an object-oriented programming language—an industry-standard language similar to Java or C++. The majority of plugins from Asset Store are written in C#, and code written in C# can port to other platforms, such as mobile, with very minimal code changes. C# is also a strongly-typed language, which means that if there is any issue with the code, it will be identified within Unity and will stop you from running the game until it's fixed. This may seem like a hindrance, but when working with code, I very much prefer to write correct code and solve problems before they escalate to something much worse. Implementing player movement Now, at this point, we have a great-looking game, but nothing at all happens. Let's change that now using our player. Perform the following steps: Right-click on the Scripts folder you created earlier, click on Create, and select the C# Script label. Once you click on it, a script will appear in the Scripts folder, and it should already have focus and should be asking you to type a name for the script—call it PlayerBehaviour. Double-click on the script in Unity, and it will open MonoDevelop, which is an open source integrated development environment (IDE) that is included with your Unity installation. After MonoDevelop has loaded, you will be presented with the C# stub code that was created automatically for you by Unity when you created the C# script. Let's break down what's currently there before we replace some of it with new code. At the top, you will see two lines: using UnityEngine;using System.Collections; The engine knows that if we refer to a class that isn't located inside this file, then it has to reference the files within these namespaces for the referenced class before giving an error. We are currently using two namespaces. The UnityEngine namespace contains interfaces and class definitions that let MonoDevelop know about all the addressable objects inside Unity. The System.Collections namespace contains interfaces and classes that define various collections of objects, such as lists, queues, bit arrays, hash tables, and dictionaries. We will be using a list, so we will change the line to the following: using System.Collections.Generic; The next line you'll see is: public class PlayerBehaviour : MonoBehaviour { You can think of a class as a kind of blueprint for creating a new component type that can be attached to GameObjects, the objects inside our scenes that start out with just a Transform and then have components added to them. 
When Unity created our C# stub code, it took care of that; we can see the result, as our file is called PlayerBehaviour and the class is also called PlayerBehaviour. Make sure that your .cs file and the name of the class match, as they must be the same to enable the script component to be attached to a game object.

Next up is the ": MonoBehaviour" part of the line. The : symbol signifies that we inherit from a particular class; in this case, we'll use MonoBehaviour. All behavior scripts must inherit from MonoBehaviour, directly or indirectly, by being derived from it. Inheritance is the idea of basing an object on another object or class, reusing the same implementation. With this in mind, all the functions and variables that exist inside the MonoBehaviour class will also exist in the PlayerBehaviour class, because PlayerBehaviour is a MonoBehaviour. For more information on the MonoBehaviour class and all the functions and properties it has, check out http://docs.unity3d.com/ScriptReference/MonoBehaviour.html.

Directly after this line, we will want to add some variables to help us with the project. Variables are pieces of data that we wish to hold on to for one reason or another, typically because they will change over the course of a program, and we will do different things based on their values. Add the following code under the class definition:

// Movement modifier applied to directional movement.
public float playerSpeed = 2.0f;

// What the current speed of our player is
private float currentSpeed = 0.0f;

/*
 * Allows us to have multiple inputs and supports keyboard,
 * joystick, etc.
 */
public List<KeyCode> upButton;
public List<KeyCode> downButton;
public List<KeyCode> leftButton;
public List<KeyCode> rightButton;

// The last movement that we've made
private Vector3 lastMovement = new Vector3();

Between the variable definitions, you will notice comments that explain what each variable is and how we'll use it. To write a comment, you can simply add // to the beginning of a line, and everything after that is commented out so that the compiler/interpreter won't see it. If you want to write something that is longer than one line, you can use /* to start a comment, and everything inside will be commented until you write */ to close it. It's always a good idea to do this in your own coding endeavors for anything that doesn't make sense at first glance.

For those of you working on your own projects in teams, there is an additional form of commenting that Unity supports, which may make your life much nicer: XML comments. They take up more space than the comments we are using, but also document your code for you. For a nice tutorial about that, check out http://unitypatterns.com/xml-comments/.

In our game, the player may want to move up using either the arrow keys or the W key. You may even want to use something else. Rather than restricting the player to just one button, we will store all the possible ways to go up, down, left, or right in their own container. To do this, we are going to use a list, which is a holder for multiple objects that we can add to or remove from while the game is being played. For more information on lists, check out http://msdn.microsoft.com/en-us/library/6sh2ey19(v=vs.110).aspx

One of the things you'll notice is the public and private keywords before the variable type. These are access modifiers that dictate who can and cannot use these variables.
The public keyword means that any other class can access that property, while private means that only this class will be able to access this variable. Here, currentSpeed is private because we want our current speed not to be modified or set anywhere else. But, you'll notice something interesting with the public variables that we've created. Go back into the Unity project and drag-and-drop the PlayerBehaviour script onto the playerShip object. Before going back to the Unity project though, make sure that you save your PlayerBehaviour script. Not saving is a very common mistake made by people working with MonoDevelop. Have a look at the following screenshot: You'll notice now that the public variables that we created are located inside Inspector for the component. This means that we can actually set those variables inside Inspector without having to modify the code, allowing us to tweak values in our code very easily, which is a godsend for many game designers. You may also notice that the names have changed to be more readable. This is because of the naming convention that we are using with each word starting with a capital letter. This convention is called CamelCase (more specifically headlessCamelCase). Now change the Size of each of the Button variables to 2, and fill in the Element 0 value with the appropriate arrow and Element 1 with W for up, A for left, S for down, and D for right. When this is done, it should look something like the following screenshot: Now that we have our variables set, go back to MonoDevelop for us to work on the script some more. The line after that is a function definition for a method called Start; it isn't a user method but one that belongs to MonoBehaviour. Where variables are data, functions are the things that modify and/or use that data. Functions are self-contained modules of code (enclosed within braces, { and }) that accomplish a certain task. The nice thing about using a function is that once a function is written, it can be used over and over again. Functions can be called from inside other functions: void Start () {} Start is only called once in the lifetime of the behavior when the game starts and is typically used to initialize data. If you're used to other programming languages, you may be surprised that initialization of an object is not done using a constructor function. This is because the construction of objects is handled by the editor and does not take place at the start of gameplay as you might expect. If you attempt to define a constructor for a script component, it will interfere with the normal operation of Unity and can cause major problems with the project. However, for this behavior, we will not need to use the Start function. Perform the following steps: Delete the Start function and its contents. The next function that we see included is the Update function. Also inherited from MonoBehaviour, this function is called for every frame that the component exists in and for each object that it's attached to. We want to update our player ship's rotation and movement every turn. Inside the Update function (between { and }), put the following lines of code: // Rotate player to face mouse Rotation(); // Move the player's body Movement(); Here, I called two functions, but these functions do not exist, because we haven't created them yet. Let's do that now! 
Below the Update function, and before the } that closes the class, put the following function:

// Will rotate the ship to face the mouse.
void Rotation()
{
    // We need to tell where the mouse is relative to the player
    Vector3 worldPos = Input.mousePosition;
    worldPos = Camera.main.ScreenToWorldPoint(worldPos);

    /*
     * Get the differences from each axis (stands for
     * deltaX and deltaY)
     */
    float dx = this.transform.position.x - worldPos.x;
    float dy = this.transform.position.y - worldPos.y;

    // Get the angle between the two objects
    float angle = Mathf.Atan2(dy, dx) * Mathf.Rad2Deg;

    /*
     * The transform's rotation property uses a Quaternion,
     * so we need to convert the angle into a Vector
     * (the Z axis is used for rotation in 2D).
     */
    Quaternion rot = Quaternion.Euler(new Vector3(0, 0, angle + 90));

    // Assign the ship's rotation
    this.transform.rotation = rot;
}

Now if you comment out the Movement line and run the game, you'll notice that the ship will rotate in the direction in which the mouse is pointing.

Below the Rotation function, we now need to add our Movement function with the following code:

// Will move the player based off of keys pressed
void Movement()
{
    // The movement that needs to occur this frame
    Vector3 movement = new Vector3();

    // Check for input
    movement += MoveIfPressed(upButton, Vector3.up);
    movement += MoveIfPressed(downButton, Vector3.down);
    movement += MoveIfPressed(leftButton, Vector3.left);
    movement += MoveIfPressed(rightButton, Vector3.right);

    /*
     * If we pressed multiple buttons, make sure we're only
     * moving the same length.
     */
    movement.Normalize();

    // Check if we pressed anything
    if (movement.magnitude > 0)
    {
        // If we did, move in that direction
        currentSpeed = playerSpeed;
        this.transform.Translate(movement * Time.deltaTime * playerSpeed, Space.World);
        lastMovement = movement;
    }
    else
    {
        // Otherwise, keep moving in the direction we were going
        this.transform.Translate(lastMovement * Time.deltaTime * currentSpeed, Space.World);

        // Slow down over time
        currentSpeed *= .9f;
    }
}

Inside this function I've used another function called MoveIfPressed, so we'll need to add that in as well. Below this function, add in the following function:

/*
 * Will return the movement if any of the keys are pressed,
 * otherwise it will return (0,0,0)
 */
Vector3 MoveIfPressed(List<KeyCode> keyList, Vector3 Movement)
{
    // Check each key in our list
    foreach (KeyCode element in keyList)
    {
        if (Input.GetKey(element))
        {
            /*
             * It was pressed, so we leave the function
             * with the movement applied.
             */
            return Movement;
        }
    }

    // None of the keys were pressed, so we don't need to move
    return Vector3.zero;
}

Now, save your file and move back into Unity. Save your current scene as Chapter_1.unity by going to File | Save Scene. Make sure to save the scene to the Scenes folder we created earlier. Run the game by pressing the play button. Now you'll see that we can move using the arrow keys or the W A S D keys, and our ship will rotate to face the mouse. Great!

Summary

This article walks through the beginnings of a 2D twin-stick shooter and helps you become familiar with Unity's 2D game development features.

Resources for Article: Further resources on this subject: Components in Unity [article] Customizing skin with GUISkin [article] What's Your Input? [article]

Learning NGUI for Unity

Packt
08 May 2015
2 min read
NGUI is a fantastic GUI toolkit for Unity 3D, allowing fast and simple GUI creation and powerful functionality, with practically zero code required. It is a robust, optimized UI system and an effective plugin for Unity, giving you the power to create beautiful and complex user interfaces while reducing performance costs. Compared to Unity's built-in GUI features, NGUI is much more powerful and optimized. GUI development in Unity requires users to create UI features by scripting lines that display labels, textures, and other UI elements on the screen. These lines have to be written inside a special function, OnGUI(), that is called for every frame. However, this is no longer necessary with NGUI, as it makes the GUI process much simpler.

This book by Charles Pearson, the author of Learning NGUI for Unity, will help you leverage the power of NGUI for Unity to create stunning mobile and PC games and user interfaces. The book covers the following topics:

Getting started with NGUI
Creating NGUI widgets
Enhancing your UI
C# with NGUI
Atlas and font customization
The in-game user interface
3D user interface
Going mobile
Screen sizes and aspect ratios
User experience and best practices

This book is a practical tutorial that will guide you through creating a fully functional and localized main menu along with 2D and 3D in-game user interfaces. The book starts by teaching you about NGUI's workflow and creating a basic UI, before gradually moving on to building widgets and enhancing your UI. You will then switch to the Android platform to take care of different issues mobile devices may encounter. By the end of this book, you will have the knowledge to create ergonomic user interfaces for your existing and future PC or mobile games and applications developed with Unity 3D and NGUI. The best part of this book is that it covers the user experience and also talks about the best practices to follow when using NGUI for Unity. If you are a Unity 3D developer who wants to create an effective and user-friendly GUI using NGUI for Unity, then this book is for you. Prior knowledge of C# scripting is expected; however, no knowledge of NGUI is required.

Resources for Article: Further resources on this subject: Unity Networking – The Pong Game [article] Unit and Functional Tests [article] Components in Unity [article]

Getting Started with GameSalad

Packt
20 Mar 2012
10 min read
Let's get to it shall we?   System requirements In order for you to run GameSalad and create amazingly awesome games, you must meet the minimum system requirements, which are as follows:   Intel Mac (Any Mac from 2006 and above) Mac OS X 10.6 or higher At least 1GB RAM A device running iOS (iPad, iPhone 3G and up, or iPod Touch) If your computer exceeds these requirements, perfect! If not, you will need to upgrade your computer. Keep in mind, these are the minimum requirements, having a computer with better specs is recommended.   Let's get into GameSalad Let's start by downloading GameSalad and registering for an account. Let's go to GameSalad's website, www.gamesalad.com. Click the "Download Free App – GameSalad Creator" button.   While you are waiting for GameSalad to download, you should sign up for a free account. At the top of the page click Sign Up, enter your email address and create a username and password. You have two options for GameSalad membership, you can keep the Basic Pricing, which is completely free or select Professional Pricing. The difference is when you publish your App, you will have a Created with GameSalad splash screen, not a big deal right? Especially, not when you can get this awesome program for free! The Professional Pricing, which is $499 (USD) per year gives you all the features of the free version of GameSalad, plus it allows you to use iAds, Game Center, Promotional Links, your own Custom Splash Screen, and Priority Technical Support.   This does not include your Apple developer cost, which is $99 a year Other tools that are recommended for game development:   Adobe Photoshop (http://www.adobe.com/products/photoshop.html) or a free equivalent, Inkscape, Gimp, and Pixelmator Drawing Tablet (Makes creating sprites much easier but not required) Getting familiar with GameSalad's interface Once you open GameSalad, you are presented with several options on the screen.   Following are the options:   Home: It shows you the latest GameSalad links (Success stories, their latest game release, and so on...). News: It is self-explanatory, this shows you the latest update notes, and what is new in the GS community. Start: The getting started screen, here you have video tutorials, Wiki Docs, Blog, and more. Profile: This shows you, your GameSalad's profile page, messages, followers, and likes. New: These are all your new projects, Blank templates, and various bare bone templates to get you started. Recent: This shows you all of your recently saved projects. Portfolio: This shows all your published Apps through GameSalad.   Back/Forward buttons: Used when navigating back and forth between windows Web Preview: Allows you to see what your game will look like within the browser (HTML5) Home: This takes you right back to the project's main window Publish: Brings up the Publish window, here you can chose to deploy your game to the web, iPhone, iPad, Mac, or Android Scenes: Gives you a drop-down menu of all your scenes Feedback: Have some thoughts about GameSalad? Click this to send them to the Creators! Preview: At the main menu, or while editing an actor this starts your game from the beginning. If you are in a level, it will preview the level Help: Brings up the GameSalad documentation, which lists many help topics. 
Target Platform and Orientation: This drop-down menu gives you, your device options, iPhone Landscape, iPhone Portrait, GameSalad.com, iPad Landscape, iPad Portrait, and 720p HD Enable Resolution Independence (only when iPhone and iPad device is set): Check this option if you are creating a game specifically for the iPhone 4, 4S, iPad, or Kindles and Nooks. This takes your high resolution images and converts them for iPhone 3GS, 3G, and iPhone (1st Gen) Scenes Tab: Switch to this to see all your wonderful levels! Actors Tab: Select this tab to see all your actors in the game project. From this tab, you can group different types of actors, such as platforms and enemies. This comes in handy when an actor has to collide with numerous other actors (enemies or platforms) + button: Adds a Level - button (when a level is selected): Deletes a level Inspector (with Game selected) Actors: Here, you will see all your in-game items (Players, platforms, collectables, and so on) Attributes: Here, you can edit all the attributes of the game such as the display size. Devices: Here, you can edit all the settings for the mouse, touch screen, accelerometer, screen, and audio. Inspector (with Scene selected) o Attributes: Here, you can edit all the attributes of the current level, such as the size of the level, screen wrap (X,Y), Gravity, background color, camera settings, and autorotate. o Layers: Here, you can create numerous layers with scrollable on or off. For example, a layer called UI with scrollable deselected will have all your user interface items, and they will stay on the screen. Library (with Behaviors selected) o Standard: These are all the standard GameSalad behaviors (Movements, change actor attributes, and more) o Custom: These are your own custom behaviors. Let's say, you needed the same behavior throughout numerous actors but you didn't want to keep re-adding and changing the behavior for each actor. Once, you create the Behavior, drag it into this box and you can use it as much as you want. o Pro: These are all the professional behaviors (only available when you have paid for the professional membership). These include Game Center Connect, iAd, and Open URL Library (with Images selected) Project: This shows all your imported images into this project. Purchased: This is a new feature that shows the images you have purchased through GameSalad's Marketplace. (When you click Purchase Images..., this will take you to the GameSalad Marketplace where you will have a plethora of Content packs and more to purchase and import into your game) When you click the "+" button, you can import images from your hard drive, alternately, you can drag them directly into the Library from the Finder Library (with Sounds selected): This shows you all of your sound effects and music that you have imported into your project. As with images, when you click the "+" button you can import sound effects or music from your hard drive, or drag them directly in from the Finder. Actor Mode: This involves normal mouse functions; it allows you to select actors within the level. Following is the screenshot of the icon: Camera Mode: It allows you to edit the camera, position, and scrolling hot spots for characters that control the camera. Following is the screenshot of the icon: Reset Scene: While previewing your level and if this button is pressed, everything will go back to its initial state. Following is the screenshot of the icon: Play: This will start a preview of the current level. 
This is different from the green Project Preview button, as this will only preview the current level, and not the whole project. When you complete the level, an alert will appear telling you the scene has ended, and you can either select to preview the next level or reset the current scene. Following is the screenshot of the icon:

Show Initial State: If you have run a preview and want to see the initial state without ending the preview, pause the preview and click on the following icon, and the initial state is shown. Following is the screenshot of the icon:

For now, let's click New | My Great Project. This is a fresh project; everything is empty. You can see that you have one level so far, but you can add more at a later time. See the Scenes and Actors tabs? Currently, Scenes is selected; this shows you all of your levels, but if you click the Actors tab, you will be able to see all your actors (game objects, characters, collectables, and so on) in the game. You can also arrange all of the actors into Actor Tags. To give you an idea of what these are useful for, take the example of having 30 different enemies: when you are setting up your collisions within behaviors, you won't have to set up 30 different collisions. Rather, when you put all the enemies within a tag named Enemies, you can do a single collision behavior for all actors of that tag! This saves a lot of setup time. We will get into more detail about actor tags when we get into creating some games later in the book.

If you double-click on the Initial Scene, you will be taken to the level editor. Before we do that, let's go through the buttons shown in the following screenshot: The descriptions of the buttons in the previous screenshot are as follows. Seems pretty easy, right? It is! GameSalad's user interface is simple. Even if you don't know what a certain button does, just hover your mouse over the button and a tooltip appears and tells you what it does. Even though it's a very simple user interface, it is very powerful. Take, for example, something as simple as the Enable Resolution Independence option. Selecting this removes the work of creating two sets of images: a high-resolution retina-friendly set and a lower-quality set for non-retina displays. With this option, all you have to do is create a high-resolution set, and GameSalad automatically creates a lower-quality set of images for non-retina devices. How great is that? Such a simple option, and yet it saves so much time and effort; and isn't simplicity what everyone wants?

Now let's get into the scene editor

Double-click our initial scene and you will see the Scene Editor. Yes, it may be a little daunting, but once you get used to the user interface, it is really quite simple. Let's break down all the buttons and see what they do: What do all these buttons mean? Following is a description of all the buttons and boxes: There we go! The GameSalad interface really is that easy to navigate!
What is so amazing about all of it is that there's no programming involved whatsoever! For those who don't have the programming skills to build a full game from code, this is exactly what they want: simple, quick, and super powerful.


Creating a Custom HUD

Packt
29 Apr 2013
5 min read
(For more resources related to this topic, see here.)

Mission Briefing

In this project we will be creating a HUD that can be used within a medieval RPG and that will fit nicely into the provided Epic Citadel map, making use of Scaleform and ActionScript 3.0 using Adobe Flash CS6. As usual, we will be following a simple step-by-step process from beginning to end to complete the project. Here is the outline of our tasks:

- Setting up Flash
- Creating our HUD
- Importing Flash files into UDK

Setting up Flash

Our first step will be setting up Flash in order for us to create our HUD. In order to do this, we must first install the Scaleform Launcher.

Prepare for Lift Off

At this point, I will assume that you have run Adobe Flash CS6 at least once beforehand. If not, you can skip this section and go to where we actually import the .swf file into UDK. Alternatively, you can try to use some other way to create a Flash animation, such as FlashDevelop, Flash Builder, or SlickEdit; but that will have to be done on your own.

Engage Thrusters

The first step will be to install the Scaleform Launcher. The launcher will make it very easy for us to test our Flash content using the GFx hardware-accelerated Flash Player, which is what UDK will use to play it. Let's get started.

Open up Adobe Flash CS6 Professional. Once the program starts up, open up Adobe Extension Manager by going to Help | Manage Extensions.... You may see the menu say Performing configuration tasks, please wait.... This is normal; just wait for it to bring up the menu as shown in the following screenshot:

Click on the Install option from the top menu on the right-hand side of the screen. In the file browser, locate the path of your UDK installation and then go into the Binaries\GFx\CLIK Tools folder. Once there, select the ScaleformExtensions.mxp file and then select OK.

When the agreement comes up, press the Accept button; then select whether you want the program to be installed for just you or for everyone on your computer. If Flash is currently running, you should get a window popping up telling you that the extension will not be ready until you restart the program. Close the manager and restart Flash.

With Flash reopened, start up the Scaleform Launcher by clicking on Window | Other Panels | Scaleform Launcher. At this point you should see the Scaleform Launcher panel come up as shown in the following screenshot:

At this point all of the options are grayed out, as the launcher doesn't yet know how to access the GFx player, so let's set that up now. Click on the + button to add a new profile. In the profile name section, type in GFXMediaPlayer. Next, we need to reference the GFx player. Click on the + button in the player EXE section. Go to your UDK directory, then Binaries\GFx, and select GFxMediaPlayerD3d9.exe. It will then ask you for the Player Name field with the value already filled in; just hit the OK button.

UDK by default uses DirectX 9 for rendering. However, since GDC 2011, it has been possible for users to use DirectX 11. If your project is using DirectX 11, feel free to check out http://udn.epicgames.com/Three/DirectX11Rendering.html and use DX11.

In order to test our game, we will need to hit the button that says Test with: GFxMediaPlayerD3d9, as shown in the following screenshot:

If you know the resolution in which you want your final game to be, you can set up multiple profiles to preview how your UI will look at a specific resolution.
For example, if you'd like to see something at a resolution of 960 x 720, you can do so by altering the command params field after %SWF PATH% to include the text -res 960:720.

Now that we have the player loaded, we need to install the CLIK library for our usage. Go to the Preferences menu by selecting Edit | Preferences. Click on the ActionScript tab and then click on the ActionScript 3.0 Settings... button. From there, add a new entry to our Source path section by clicking on the + button. After that, click on the folder icon to browse to the folder we want. Add an additional path to our CLIK directory in the file explorer by first going to your UDK installation directory and then going to Development\Flash\AS3\CLIK. Click on the OK button and drag-and-drop the newly created Scaleform Launcher to the bottom-right corner of the interface.

Objective Complete - Mini Debriefing

Alright, Flash is now set up for us to work with Scaleform within it, which for all intents and purposes is probably the hardest part about working with Scaleform. Now that we have taken care of it, let's get started on the HUD!

As long as you have administrator access to your computer, these settings should be set for whenever you are working with Flash. However, if you do not, you will have to run through all of these settings every time you want to work on Scaleform projects.

3D Modeling

Packt
05 Feb 2015
7 min read
In this article by Suryakumar Balakrishnan Nair and Andreas Oehlke, authors of Learning LibGDX Game Development, Second Edition, you will learn how to load a model and create a basic 3D scene. In a game, we need an actual model exported from Blender or any other 3D animation software. (For more resources related to this topic, see here.) Loading a model Copy these three files to the assets folder of the android project: car.g3dj: This is the model file to be used in our example tiretext.jpg and yellowtaxi.jpg: These are the materials for the model Replacing the ModelBuilder class in our ModelTest.java file, we add the following code: assets = new AssetManager(); assets.load("car.g3dj", Model.class); assets.finishLoading(); model = assets.get("car.g3dj", Model.class); instance = new ModelInstance(model); Additionally, a camera input controller is also added to inspect the model from various angles as follows: camController = new CameraInputController(cam); Gdx.input.setInputProcessor(camController); camController.update(); This camera input controller will be updated on each render() by calling camController.update(). The completed MyModelTest.java is as follows: public class MyModelTest extends ApplicationAdapter { public Environment environment; public PerspectiveCamera cam; public CameraInputController camController; public ModelBatch modelBatch; public Model model; public ModelInstance instance; public AssetManager assets ; @Override public void create() { environment = new Environment(); environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.4f, 0.4f, 0.4f, 1f)); environment.add(new DirectionalLight().set(0.8f, 0.8f, 0.8f, -1f, -0.8f, -0.2f)); modelBatch = new ModelBatch(); cam = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight()); cam.position.set(1,1,1); cam.lookAt(0, 0, 0); cam.near = 1f; cam.far = 300f; cam.update(); assets = new AssetManager(); assets.load("car.g3dj", Model.class); assets.finishLoading(); model = assets.get("car.g3dj", Model.class); instance = new ModelInstance(model); camController = new CameraInputController(cam); Gdx.input.setInputProcessor(camController); } @Override public void render() { camController.update(); Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight()); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT); modelBatch.begin(cam); modelBatch.render(instance, environment); modelBatch.end(); } @Override public void dispose() { modelBatch.dispose(); assets.dispose() ; } } The new additions are highlighted. The following is a screenshot of the render scene. Use the W , S , A , D keys and mouse to navigate through the scene. Model formats and the FBX converter LibGDX supports three model formats, namely Wavefront OBJ, G3DJ, and G3DB. Wavefront OBJ models are intended for testing purposes only because this format does not include enough information for complex models. You can export your 3D model as .obj from any 3D animation or modeling software, however LibGDX does not fully support .obj, hence, if you use your own .obj model, then it might not render correctly. The G3DJ is a JSON textual format supported by LibGDX and can be used for debugging, whereas the G3DB is a binary format and is faster to load. One of the most popular model formats supported by any modeling software is FBX. LibGDX provides a tool called FBX converter to convert formats such as .obj and .fbx into the LibGDX supported formats .g3dj and .g3db. 
To convert car.fbx to a .g3db format, open the command line and call fbx-conv-win32, as shown in the following screenshot: Make sure that the fbx-conv-win32.exe file is in the same folder as car.fbx. Otherwise, you will have to use the full path of the source file to convert. To find out more about FBX converter visit https://github.com/libgdx/fbx-conv and https://github.com/libgdx/libgdx/wiki/3D-animations-and-skinning. Also, you can download FBX converter from http://libgdx.badlogicgames.com/fbx-conv. Creating a basic 3D scene Create a simple scene with a ball and ground, as shown in the following screenshot: Add the following code to MyCollisionTest.java: package com.packtpub.libgdx.collisiontest; import com.badlogic.gdx.ApplicationAdapter; import com.badlogic.gdx.Gdx; ... import com.badlogic.gdx.utils.Array; public class MyCollisionTest extends ApplicationAdapter { PerspectiveCamera cam; ModelBatch modelBatch; Array<Model> models; ModelInstance groundInstance; ModelInstance sphereInstance; Environment environment; ModelBuilder modelbuilder; @Override public void create() { modelBatch = new ModelBatch(); environment = new Environment(); environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.4f, 0.4f, 0.4f, 1f)); environment.add(new DirectionalLight().set(0.8f, 0.8f, 0.8f, -1f, -0.8f, -0.2f)); cam = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight()); cam.position.set(0, 10, -20); cam.lookAt(0, 0, 0); cam.update(); models = new Array<Model>(); modelbuilder = new ModelBuilder(); // creating a ground model using box shape float groundWidth = 40; modelbuilder.begin(); MeshPartBuilder mpb = modelbuilder.part("parts", GL20.GL_TRIANGLES, Usage.Position | Usage.Normal | Usage.Color, new Material(ColorAttribute.createDiffuse(Color.WHITE))); mpb.setColor(1f, 1f, 1f, 1f); mpb.box(0, 0, 0, groundWidth, 1, groundWidth); Model model = modelbuilder.end(); models.add(model); groundInstance = new ModelInstance(model); // creating a sphere model float radius = 2f; final Model sphereModel = modelbuilder.createSphere(radius, radius, radius, 20, 20, new Material(ColorAttribute.createDiffuse(Color.RED), ColorAttribute.createSpecular(Color.GRAY), FloatAttribute.createShininess(64f)), Usage.Position | Usage.Normal); models.add(sphereModel); sphereInstance = new ModelInstance(sphereModel); sphereinstance.transform.trn(0, 10, 0); } public void render() { Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight()); Gdx.gl.glClearColor(0, 0, 0, 1); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT); modelBatch.begin(cam); modelBatch.render(groundInstance, environment); modelBatch.render(sphereInstance, environment); modelBatch.end(); } @Override public void dispose() { modelBatch.dispose(); for (Model model : models) model.dispose(); } } The ground is actually a thin box created using ModelBuilder just like the sphere. Now that we have created a simple 3D scene, let's add some physics using the following code: public class MyCollisionTest extends ApplicationAdapter { ... 
private btDefaultCollisionConfiguration collisionConfiguration; private btCollisionDispatcher dispatcher; private btDbvtBroadphase broadphase; private btSequentialImpulseConstraintSolver solver; private btDiscreteDynamicsWorld world; private Array<btCollisionShape> shapes = new Array<btCollisionShape>(); private Array<btRigidBodyConstructionInfo> bodyInfos = new Array<btRigidBody.btRigidBodyConstructionInfo>(); private Array<btRigidBody> bodies = new Array<btRigidBody>(); private btDefaultMotionState sphereMotionState; @Override public void create() { ... // Initiating Bullet Physics Bullet.init(); //setting up the world collisionConfiguration = new btDefaultCollisionConfiguration(); dispatcher = new btCollisionDispatcher(collisionConfiguration); broadphase = new btDbvtBroadphase(); solver = new btSequentialImpulseConstraintSolver(); world = new btDiscreteDynamicsWorld(dispatcher, broadphase, solver, collisionConfiguration); world.setGravity(new Vector3(0, -9.81f, 1f)); // creating ground body btCollisionShape groundshape = new btBoxShape(new Vector3(20, 1 / 2f, 20)); shapes.add(groundshape); btRigidBodyConstructionInfo bodyInfo = new btRigidBodyConstructionInfo(0, null, groundshape, Vector3.Zero); this.bodyInfos.add(bodyInfo); btRigidBody body = new btRigidBody(bodyInfo); bodies.add(body); world.addRigidBody(body); // creating sphere body sphereMotionState = new btDefaultMotionState(sphereInstance.transform); sphereMotionState.setWorldTransform(sphereInstance.transform); final btCollisionShape sphereShape = new btSphereShape(1f); shapes.add(sphereShape); bodyInfo = new btRigidBodyConstructionInfo(1, sphereMotionState, sphereShape, new Vector3(1, 1, 1)); this.bodyInfos.add(bodyInfo); body = new btRigidBody(bodyInfo); bodies.add(body); world.addRigidBody(body); } public void render() { Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight()); Gdx.gl.glClearColor(0, 0, 0, 1); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT); world.stepSimulation(Gdx.graphics.getDeltaTime(), 5); sphereMotionState.getWorldTransform(sphereInstance.transform); modelBatch.begin(cam); modelBatch.render(groundInstance, environment); modelBatch.render(sphereInstance, environment); modelBatch.end(); } @Override public void dispose() { modelBatch.dispose(); for (Model model : models) model.dispose(); for (btRigidBody body : bodies) { body.dispose(); } sphereMotionState.dispose(); for (btCollisionShape shape : shapes) shape.dispose(); for (btRigidBodyConstructionInfo info : bodyInfos) info.dispose(); world.dispose(); collisionConfiguration.dispose(); dispatcher.dispose(); broadphase.dispose(); solver.dispose(); Gdx.app.log(this.getClass().getName(), "Disposed"); } } The highlighted parts are the addition to our previous code. After execution, we see the ball falling and colliding with the ground. Summary In this article, you learned how to load a 3D model of a car and created a basic 3D scene. Resources for Article: Further resources on this subject: Getting Started with GameSalad [article] Sparrow iOS Game Framework - The Basics of Our Game [article] Making Money with Your Game [article]


Unity 3: Building a Rocket Launcher

Packt
07 Sep 2011
8 min read
(For more resources on this subject, see here.)

Mission briefing

We will create a character that carries a rocket launcher and is able to shoot it, as well as creating a camera view looking over the character's shoulder (a third-person camera view). Then, we will add the character controller script to control our character, and the player will have to hold the Aim button to be able to shoot the rocket, similar to the Resident Evil 4 or 5 style.

What does it do?

We will start by applying the built-in CharacterMotor, FPSInputController, and MouseLook scripts from the built-in FPS character controller. Then, we will add the character model and start creating a new script by adapting part of the code in the FPSInputController script. With it, we will be able to control the animation for our character to shoot, walk, run, and remain idle. Next, we will create a rocket prefab and the rocket launcher script to fire our rocket. We will use and adapt the built-in explosion and fire trail particles in Unity, and attach them to our rocket prefab. We will also create a new smoke particle, which will appear from the barrel of the rocket launcher when the player shoots. Then, we will create the scope target for aiming. We will also create the launcher and smoke GameObjects, which mark the start positions of the rocket and the smoke particle. Finally, we will add the rocket GUITexture object and a script to track the number of bullets we have left after each shot. We will also add the Reload button to refill our bullets when the character runs out.

Why Is It Awesome?

When we complete this article, we will be able to create the third-person shooter style camera view and controller, which is very popular in many games today. We will also be able to create a rocket launcher weapon and particles by using the prefab technique. Finally, we will be able to create outlined text with the GUITexture object for tracking the number of bullets left.

Your Hotshot Objectives

In this article, we will use a third-person controller script to control our character and combine it with the built-in first-person controller prefab style to create our third-person shooter script to fire a rocket from the rocket launcher. Here is what we will do:

- Setting up the character with the first-person controller prefab
- Creating the New3PSController and MouseLook_JS scripts
- Creating a rocket launcher and a scope target
- Creating the rockets and particles
- Creating the rocket bullet UI

Mission Checklist

First, we need the chapter 5 project package, which includes the character model with a gun from the Unity FPS tutorial website and all the necessary assets for this article. So, let's browse to http://www.packtpub.com/support?nid=8267 and download the Chapter5.zip package. Unzip it and we will see Chapter5.unitypackage, and we are ready.

Setting up the character with the first-person controller prefab

In the first section of this article, we will make all the necessary settings before we create our character in the scene. We will set up the imported assets and make sure that all the assets are imported properly and are ready to use, by using Import Package in the Project view inside Unity. Then, we will set up the light, level, and camera, and put our character in the scene with the first-person controller prefab. We will import the Chapter5.unitypackage package into Unity, which contains the Chapter5 folder. Inside this folder, we will see five subfolders: Fonts, Level, Robot Artwork, Rocket, and UI.
The Fonts folder contains the font file, which will be used by the GUI. The Level folder contains the simple level prefab, its textures, and materials. Robot Artwork is the folder that includes the character FBX model, materials, and textures, taken from the Unity FPS tutorial. The Rocket folder contains the rocket and rocket launcher FBX models, materials, and textures, also taken from the Unity FPS tutorial. Finally, the UI folder includes all the images that we will use to create the GUI.

Prepare for Lift Off

In this section, we will begin by importing the chapter 5 Unity package, checking all the assets, setting up the level, and adding the character to the scene with the FPS controller script.

First, let's create a new project and name it RocketLauncher. This time we will include the built-in Character Controller package and Particles package by checking the Character Controller.unityPackage and Particles.unityPackage checkboxes in the Project Wizard. Then, we will click on the Create Project button, as shown in the following screenshot:

Next, import the assets package by going to Assets | Import Package | Custom Package.... Choose Chapter5.unityPackage, which we just downloaded, and then click on the Import button in the pop-up window, as shown in the following screenshot:

Wait until it's done, and you will see the Chapter5 folder in the Window view. Make sure that all five folders (Fonts, Level, Robot Artwork, Rocket, and UI) are inside this folder. Now, let's create something.

Engage Thrusters

In this section, we will set up the scene and camera view, and place our character in the scene:

First, let's begin by creating the directional light. Go to GameObject | Create Other | Directional Light, and in its Inspector view set the rotation X to 30 and the position to (X: 0, Y: 0, Z: 0).

Then, add the level to our scene by clicking on the Chapter5 folder in the Project view. In the Level folder, you will see the Level prefab; drag it to the Hierarchy view and you will see the level in our scene.

Next, remove the Main Camera from the Hierarchy view, because we will use the camera from the built-in First Person Controller prefab. Right-click on the Main Camera in the Hierarchy view and choose Delete to remove it.

Then, add the built-in First Person Controller prefab to the Hierarchy view by going to the Standard Assets folder. Under the Character Controllers folder, you will see the First Person Controller prefab; drag it to the Hierarchy view. In the Hierarchy view, click on the arrow in front of the First Person Controller object to see its hierarchy, similar to the one shown in the following screenshot:

Then, we go back to the Project view. In the Chapter5 folder, inside Robot Artwork, drag the robot.fbx object (as shown in the following screenshot) on top of the Main Camera inside the First Person Controller object in the Hierarchy. The editor will show a window telling us that this action will break the prefab; just click on the Continue button to break it. This means that this game object will no longer be linked to the original prefab.

Next, remove the Graphics object above the Main Camera: right-click on it and choose Delete. Now we will see something similar to the following screenshot:

We have put the robot object as a child of the camera because we want our character to rotate with the camera.
This will make our character always appear in front of the camera view, which is similar to the third-person view. This setup is different from the original FPS prefab because, in the first-person view, we never see the character in the camera view, so there is no point in calculating the rotation of the character.

Now, click on the First Person Controller object in the Hierarchy view to bring up the Inspector view, and set the Transform Position to X: 0, Y: 1.16, Z: 0. Then, go to the Character Controller and set the values as follows:

Character Controller (Script)
- Height: 2.25
- Center: X: -0.8, Y: 0.75, Z: 1.4

Move down one step by clicking on Main Camera in the Hierarchy view, and go to its Inspector view to set the values of Transform and Mouse Look as follows:

Transform
- Position: X: 0, Y: 1.6, Z: 0

Mouse Look (Script)
- Sensitivity Y: 5
- Minimum Y: -15

We will leave all the other parameters at their default values. Then, we will go down one more step to set the Transform of the robot by clicking on it to bring up its Inspector view, and set the values as shown in the following screenshot.

Now, we are done with this step. In the next step, we will adjust and add some code to control the animation and movement of our character, adapting the FPSInputController script. If the user presses the Aim button (E), we want the character to stop moving and play the shooting animation to prepare the character to fire. We also set the maximum and minimum of the camera rotation on the Y-axis, which limits the camera to rotating up and down only. Then, we set motor.inputMoveDirection to Vector3.zero because we don't want our character to move while executing the shooting action. On the other hand, if the user doesn't press E, we check for the user input. If the user presses the right arrow or left arrow key, we change the speed to the run speed; if not, we set it to the walk speed. Then, we apply the movement speed to motor.movement.maxForwardSpeed, motor.movement.maxSidewaysSpeed, and motor.movement.maxBackwardsSpeed.
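To make that description more concrete, here is a rough C# sketch of the logic just outlined. This is only an illustration, not the project's actual script (the original New3PSController is adapted from the UnityScript FPSInputController): the aim key (E), the walk and run speeds, and the animation clip names (shoot, run, walk, idle) are assumptions, and it presumes the Standard Assets CharacterMotor component is accessible from C#.

using UnityEngine;

// Illustrative sketch of the aim/move logic described above; names are placeholders.
public class New3PSControllerSketch : MonoBehaviour
{
    public CharacterMotor motor;   // the Standard Assets CharacterMotor on this controller
    public float walkSpeed = 3.0f;
    public float runSpeed = 6.0f;

    void Update()
    {
        if (Input.GetKey(KeyCode.E))
        {
            // Aiming: freeze movement and play the shooting animation
            motor.inputMoveDirection = Vector3.zero;
            GetComponent<Animation>().CrossFade("shoot");
        }
        else
        {
            // Pick run or walk speed, then feed the motor with the input direction
            bool running = Input.GetKey(KeyCode.RightArrow) || Input.GetKey(KeyCode.LeftArrow);
            float speed = running ? runSpeed : walkSpeed;
            motor.movement.maxForwardSpeed = speed;
            motor.movement.maxSidewaysSpeed = speed;
            motor.movement.maxBackwardsSpeed = speed;

            Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0, Input.GetAxis("Vertical"));
            motor.inputMoveDirection = transform.rotation * input;

            string clip = input.sqrMagnitude > 0.01f ? (running ? "run" : "walk") : "idle";
            GetComponent<Animation>().CrossFade(clip);
        }
    }
}

The shape of the logic is what matters here: while aiming, the move direction handed to the motor is zeroed; otherwise a speed is chosen and the input direction is passed through to the motor.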


Customizing skin with GUISkin

Packt
22 Jul 2014
8 min read
(For more resources related to this topic, see here.)

Prepare for lift off

We will begin by creating a new project in Unity. Let's start our project by performing the following steps:

First, create a new project and name it MenuInRPG. Click on the Create new Project button, as shown in the following screenshot:

Next, import the assets package by going to Assets | Import Package | Custom Package...; choose Chapter2Package.unityPackage, which we just downloaded, and then click on the Import button in the pop-up window, as shown in the following screenshot:

Wait until it's done, and you will see the MenuInRPGGame and SimplePlatform folders in the Window view. Next, click on the arrow in front of the SimplePlatform folder to bring up the drop-down options, and you will see the Scenes folder and the SimplePlatform_C# and SimplePlatform_JS scenes, as shown in the following screenshot:

Next, double-click on the SimplePlatform_C# (for C# users) or SimplePlatform_JS (for Unity JavaScript users) scene, as shown in the preceding screenshot, to open the scene that we will work on in this project. When you double-click on either of the SimplePlatform scenes, Unity will display a pop-up window asking whether you want to save the current scene or not. As we want to use the SimplePlatform scene, just click on the Don't Save button to open up the SimplePlatform scene, as shown in the following screenshot:

Then, go to the MenuInRPGGame/Resources/UI folder and click on the first file to make sure that the Texture Type and Format fields are set correctly, as shown in the following screenshot:

Why do we set it up in this way? Because we want the UI graphics to look as close to the source images as possible. We set the Format field to Truecolor, which makes the images larger in size than Compressed does, but shows the correct colors of the UI graphics.

Next, we need to set up the Layers and Tags configurations; for this, go to Edit | Project Settings | Tags and set them as follows:

Tags
- Element 0: UI
- Element 1: Key
- Element 2: RestartButton
- Element 3: Floor
- Element 4: Wall
- Element 5: Background
- Element 6: Door

Layers
- User Layer: Background
- User Layer: Level
- User Layer: UI

Finally, we will save this scene in the MenuInRPGGame/Scenes folder and name it MenuInRPG, by going to File | Save Scene as... and then saving it.

Engage thrusters

Now we are ready to create a GUI skin; for this, perform the following steps:

Let's create a new GUISkin object by going to Assets | Create | GUISkin; we will see New GUISkin in our Project window. Name the GUISkin object MenuSkin. Then, click on MenuSkin and go to its Inspector window. We will see something similar to the following screenshot:

You will see many properties here, but don't be afraid, because this is the main key to creating custom graphics for our UI. Font is the base font for the GUI skin. From Box to Scroll View, each property is a GUIStyle, which is used for creating our custom UI. The Custom Styles property is an array of GUIStyles that we can set up to apply extra styles. Settings holds the setup for the entire GUI.

Next, we will set up the new font style for our menu UI; go to the Font line in the Inspector view, click the circle icon, and select the Federation Kalin font. Now you have set up the base font for the GUISkin. Next, click on the arrow in front of the Box line to bring up a drop-down list.
We will see all the properties, as shown in the following screenshot:

For more information and to learn more about these properties, visit http://unity3d.com/support/documentation/Components/class-GUISkin.html.

Name is basically the name of this style, which by default is box (the default style of GUI.Box). Next, we will assign our custom UI graphics to this GUISkin. Click on the arrow in front of Normal to bring up the drop-down list, and you will see two parameters: Background and Text Color. Click on the circle icon on the right-hand side of the Background line to bring up the Select Texture2D window and choose the boxNormal texture, or drag the boxNormal texture from the MenuInRPG/Resources/UI folder and drop it into the Background slot. We can also use the search bar in the Select Texture2D window or the Project view to find our texture by typing boxNormal in the search bar, as shown in the following screenshot:

Then, under the Text Color line, we leave the color at its default (we don't need any text to be shown in this style), and repeat the previous step for On Normal using the boxNormal texture. Next, click on the arrow in front of Active and, under Background, choose the boxActive texture; repeat this step for On Active. Then, go to each property in the Box style and set the following parameters:

- Border: Left: 14, Right: 14, Top: 14, Bottom: 14
- Padding: Left: 6, Right: 6, Top: 6, Bottom: 6

For the other properties of this style, we will leave them at their defaults. Next, we go to the following properties in the MenuSkin Inspector and set them as follows:

Label
- Normal | Text Color: R: 27, G: 95, B: 104, A: 255

Window
- Normal | Background: myWindow
- On Normal | Background: myWindow
- Border: Left: 27, Right: 27, Top: 55, Bottom: 96
- Padding: Left: 30, Right: 30, Top: 60, Bottom: 30

Horizontal Scrollbar
- Normal | Background: horScrollBar
- Border: Left: 4, Right: 4, Top: 4, Bottom: 4

Horizontal Scrollbar Thumb
- Normal | Background: horScrollBarThumbNormal
- Hover | Background: horScrollBarThumbHover
- Border: Left: 4, Right: 4, Top: 4, Bottom: 4

Horizontal Scrollbar Left Button
- Normal | Background: arrowLNormal
- Hover | Background: arrowLHover
- Fixed Width: 14
- Fixed Height: 15

Horizontal Scrollbar Right Button
- Normal | Background: arrowRNormal
- Hover | Background: arrowRHover
- Fixed Width: 14
- Fixed Height: 15

Vertical Scrollbar
- Normal | Background: verScrollBar
- Border: Left: 4, Right: 4, Top: 4, Bottom: 4
- Padding: Left: 0, Right: 0, Top: 0, Bottom: 0

Vertical Scrollbar Thumb
- Normal | Background: verScrollBarThumbNormal
- Hover | Background: verScrollBarThumbHover
- Border: Left: 4, Right: 4, Top: 4, Bottom: 4

Vertical Scrollbar Up Button
- Normal | Background: arrowUNormal
- Hover | Background: arrowUHover
- Fixed Width: 16
- Fixed Height: 14

Vertical Scrollbar Down Button
- Normal | Background: arrowDNormal
- Hover | Background: arrowDHover
- Fixed Width: 16
- Fixed Height: 14

We have finished setting up the default styles. Now we will go to the Custom Styles property and create our custom GUIStyles to use for this menu. Go to Custom Styles and, under Size, change the value to 6; we will then see Element 0 to Element 5. Next, we go to the first element, Element 0: under Name, type Tab Button, and we will see Element 0 change to Tab Button.
Set it as follows:

Tab Button (or Element 0)
- Name: Tab Button
- Normal | Background: tabButtonNormal, Text Color: R: 27, G: 62, B: 67, A: 255
- Hover | Background: tabButtonHover, Text Color: R: 211, G: 166, B: 9, A: 255
- Active | Background: tabButtonActive, Text Color: R: 27, G: 62, B: 67, A: 255
- On Normal | Background: tabButtonActive, Text Color: R: 27, G: 62, B: 67, A: 255
- Border: Left: 12, Right: 12, Top: 12, Bottom: 4
- Padding: Left: 6, Right: 6, Top: 6, Bottom: 4
- Font Size: 14
- Alignment: Middle Center
- Fixed Height: 31

The settings are shown in the following screenshot: For the Text Color value, we can also use the eyedropper tool next to the color box to copy the same color, as we can see in the following screenshot:

We have finished our first style, but we still have five styles left, so let's carry on with Element 1 with the following settings:

Exit Button (or Element 1)
- Name: Exit Button
- Normal | Background: buttonCloseNormal
- Hover | Background: buttonCloseHover
- Fixed Width: 26
- Fixed Height: 22

The settings for Exit Button are shown in the following screenshot:

The following settings create the style for Element 2:

Text Item (or Element 2)
- Name: Text Item
- Normal | Text Color: R: 27, G: 95, B: 104, A: 255
- Alignment: Middle Left
- Word Wrap: checked

The settings for Text Item are shown in the following screenshot:

To set up the style for Element 3, apply the following settings:

Text Amount (or Element 3)
- Name: Text Amount
- Normal | Text Color: R: 27, G: 95, B: 104, A: 255
- Alignment: Middle Right
- Word Wrap: checked

The settings for Text Amount are shown in the following screenshot:

To create the Selected Item style, apply the following settings:

Selected Item (or Element 4)
- Name: Selected Item
- Normal | Text Color: R: 27, G: 95, B: 104, A: 255
- Hover | Background: itemSelectHover, Text Color: R: 27, G: 95, B: 104, A: 255
- Active | Background: itemSelectHover, Text Color: R: 27, G: 95, B: 104, A: 255
- On Normal | Background: itemSelectActive, Text Color: R: 27, G: 95, B: 104, A: 255
- Border: Left: 6, Right: 6, Top: 6, Bottom: 6
- Margin: Left: 2, Right: 2, Top: 2, Bottom: 2
- Padding: Left: 4, Right: 4, Top: 4, Bottom: 4
- Alignment: Middle Center
- Word Wrap: checked

The settings are shown in the following screenshot:

To create the Disabled Click style, apply the following settings:

Disabled Click (or Element 5)
- Name: Disabled Click
- Normal | Background: itemSelectNormal, Text Color: R: 27, G: 95, B: 104, A: 255
- Border: Left: 6, Right: 6, Top: 6, Bottom: 6
- Margin: Left: 2, Right: 2, Top: 2, Bottom: 2
- Padding: Left: 4, Right: 4, Top: 4, Bottom: 4
- Alignment: Middle Center
- Word Wrap: checked

The settings for Disabled Click are shown in the following screenshot:

And now, we have finished this step.
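Before moving on, here is a minimal sketch of how a finished skin such as MenuSkin might be used from an OnGUI script. This is not part of the chapter's own code; the field name, rectangles, and labels are placeholders, and the only real requirement is that the style names passed to GetStyle match the Name fields entered above.

using UnityEngine;

// Minimal usage sketch: assign the MenuSkin asset to menuSkin in the Inspector.
public class MenuSkinExample : MonoBehaviour
{
    public GUISkin menuSkin;
    private int selectedTab = 0;

    void OnGUI()
    {
        // Every control drawn after this line uses our custom skin
        GUI.skin = menuSkin;

        // Default styles (Box, Label, the scrollbars) are picked up automatically
        GUI.Box(new Rect(20, 20, 300, 200), "");

        // Custom styles are looked up by the Name we typed into Custom Styles
        GUIStyle tabStyle = menuSkin.GetStyle("Tab Button");
        if (GUI.Button(new Rect(30, 30, 90, 31), "Items", tabStyle))
        {
            selectedTab = 0;
        }
        if (GUI.Button(new Rect(125, 30, 90, 31), "Status", tabStyle))
        {
            selectedTab = 1;
        }

        GUI.Label(new Rect(30, 80, 280, 30), "Selected tab: " + selectedTab);
    }
}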

Getting to Know LibGDX

Packt
25 Aug 2015
15 min read
In this article written by James Cook, author of the book LibGDX Game Development By Example, the author likes to state that, "Creating games is fun, and that is why I like to do it". The process of having an idea for a game to actually delivering it has changed over the years. Back in the 1980s, it was quite common that the top games around were created by either a single person or a very small team. However, anyone who is lucky enough (in my opinion) to see games grow from being quite a simplistic affair to the complex beast that the now AAA titles are, must have also seen the resources needed for these grow with them. The advent of mobile gaming reduced the barrier for entry; once again, the smaller teams could produce a game that could be a worldwide hit! Now, there are games of all genres and complexities available across major gaming platforms. Due to this explosion in the number of games being made, new general-purpose game-making tools appeared in the community. Previously, the in-house teams built and maintained very specific game engines for their games; however, this would have led to a lot of reinventing the wheel. I hate to think how much time I would have lost if for each of my games, I had to start from scratch. Now, instead of worrying about how to display a 2D image on the screen, I can focus on creating that fun player experience I have in my head. My tool of choice? LibGDX. (For more resources related to this topic, see here.) Before I dive into what LibGDX is, here is how LibGDX describes itself. From the LibGDX wiki—https://github.com/libgdx/libgdx/wiki/Introduction: LibGDX is a cross-platform game and visualization development framework. So what does that actually mean? What can LibGDX do for us game-makers that allows us to focus purely on the gameplay? To begin with, LibGDX is Java-based. This means you can reuse a lot, and I mean a lot, of tools that already exist in the Java world. I can imagine a few of you right now must be thinking, "But Java? For a game? I thought Java is supposed to be slow". To a certain extent, this can be true; after all, Java is still an interpreted language that runs in a virtual machine. However, to combat the need for the best possible performance, LibGDX takes advantage of the Java Native Interface (JNI) to implement native platform code and negate the performance disadvantage. One of the beauties of LibGDX is that it allows you to go as low-level as you would like. Direct access to filesystems, input devices, audio devices, and OpenGL (via OpenGL ES 2.0/3.0) is provided. However, the added edge LibGDX gives is that with the APIs that are built on top of these low-level facilities, displaying an image on the screen takes now a days only a few lines of code. A full list of the available features for LibGDX can be found here:http://libgdx.badlogicgames.com/features.html I am happy to wait here while you go and check it out. Impressive list of features, no? So, how cross-platform is this gaming platform? This is probably what you are thinking now. Well, as mentioned before, games are being delivered on many different platforms, be it consoles, PCs, or mobiles. LibGDX currently supports the following platforms: Windows Linux Mac OS X Android BlackBerry iOS HTML/WebGL That is a pretty comprehensive list. Being able to write your game once and have it delivered to all the preceding platforms is pretty powerful. At this point, I would like to mention that LibGDX is completely free and open source. 
You can go to https://github.com/libGDX/libGDX and check out all the code in all its glory. If the code does something and you would like to understand how, it is all possible; or, if you find a bug, you can make a fix and offer it back to the community. Along with the source code, there are plenty of tests and demos showcasing what LibGDX can do, and more importantly, how to do it. Check out the wiki for more information: https://github.com/libgdx/libgdx/wiki/Running-Demos https://github.com/libgdx/libgdx/wiki/Running-Tests "Who else uses LibGDX?" is quite a common query that comes up during a LibGDX discussion. Well it turns out just about everyone has used it. Google released a game called "Ingress" (https://play.google.com/store/apps/details?id=com.nianticproject.ingress&hl=en) on the play store in 2013, which uses LibGDX. Even Intel (https://software.intel.com/en-us/articles/getting-started-with-libgdx-a-cross-platform-game-development-framework) has shown an interest in LibGDX. Finally, I would like to end this section with another quote from the LibGDX website: LibGDX aims to be a framework rather than an engine, acknowledging that there is no one-size-fits-all solution. Instead we give you powerful abstractions that let you chose how you want to write your game or application. libGDX wiki—https://github.com/libgdx/libgdx/wiki/Introduction This means that you can use the available tools if you want to; if not, you can dive deeper into the framework and create your own! Setting up LibGDX We know by now that LibGDX is this awesome tool for creating games across many platforms with the ability to iterate on our code at superfast speeds. But how do we start using it? Thankfully, some helpful people have made the setup process quite easy. However, before we get to that part, we need to ensure that we have the prerequisites installed, which are as follows: Java Development Kit 7+ (at the time of writing, version 8 is available) Android SDK Not that big a list! Follow the given steps: First things first. Go to http://www.oracle.com/technetwork/java/javase/downloads/index.html. Download and install the latest JDK if you haven't already done so. Oracle developers are wonderful people and have provided a useful installation guide, which you can refer to if you are unsure on how to install the JDK, at http://docs.oracle.com/javase/8/docs/technotes/guides/install/install_overview.html. Once you have installed the JDK, open up the command line and run the following command: java -version If it is installed correctly, you should get an output similar to this: If you generate an error while doing this, consult the Oracle installation documentation and try again. One final touch would be to ensure that we have JAVA_HOME configured. On the command line, perform the following:    For Windows, set JAVA_HOME = C:PathToJDK    For Linux and Mac OSX, export JAVA_HOME = /Path/ToJDK/ Next, on to the Android SDK. At the time of writing, Android Studio has just been released. Android Studio is an IDE offered by Google that is built upon JetBrains IntelliJ IDEA Java IDE. If you feel comfortable using Android Studio as your IDE, and as a developer who has used IntelliJ for the last 5 years, I suggest that you at least give it a go. 
You can download Android Studio + Android SDK in a bundle from here: http://developer.android.com/sdk/index.html Alternatively, if you plan to use a different IDE (Eclipse or NetBeans, for example) you can just install the tools from the following URL: http://developer.android.com/sdk/index.html#Other You can find the installation instructions here: https://developer.android.com/sdk/installing/index.html?pkg=tools However, I would like to point out that the official IDE for Android is now Android Studio and no longer Eclipse with ADT. For the sake of simplicity, we will only focus on making games for desktops for the greater part of this article. We will look at exporting to Android and iOS later on. Once the Android SDK is installed, it would be well worth running the SDK manager application; so, finalize the set up. If you opt to use Android Studio, you can access this from the SDK Manager icon in the toolbar. Alternatively, you can also access it as follows: On Windows: Double-click on the SDK's Manager.exe file at the root of the Android SDK directory On Mac/Linux: Open a terminal and navigate to the tools/ directory in the location where the Android SDK is installed, then execute Android SDK. The following screen might appear: As a minimum configuration, select: Android SDK Tools Android SDK Platform-tools Android SDK Build-tools (latest available version) Latest version of SDK Platform Let them download and install the selected configuration. Then that's it! Well, not really. We just need to set the ANDROID_HOME environment variable. To do this, we can open up a command line and run the following command: On Windows: Set ANDROID_HOME=C:/Path/To/Your/Android/Sdk On Linux and Mac OS X: Export ANDROID_HOME=/Path/To/Your/Android/Sdk Phew! With that done, we can now move on to the best part—creating our first ever LibGDX game! Creating a project Follow the given steps to create your own project: As mentioned earlier, LibGDX comes with a really useful project setup tool. Download the application from here: http://libgdx.badlogicgames.com/download.html At the time of writing, it is the big red "Download Setup App" button in the middle of your screen. Once downloaded, open the command line and navigate to the location of the application. You will notice that it is a JAR file type. This means we need to use Java to run it. Running this will open the setup UI: Before we hit the Generate button, let's just take a look at what we are creating here: Name: This is the name of our game. Package: This is the Java package our game code will be developed in. Game class: This parameter sets the name of our game class, where the magic happens! Destination: This is the project's directory. You can change this to any location of your choice. Android SDK: This is the location of the SDK. If this isn't set correctly, we can change it here. Going forward, it might be worth setting the ANDROID_HOME environment variable. Next is the version of LibGDX we want to use. At time of writing, the version is 1.5.4. Now, let's move on to the subprojects. As we are only interested in desktops at the moment, let's deselect the others. Finally, we come to extensions. Feel free to uncheck any that are checked. We won't be needing any of them at this point in time. For more information on available extensions, check out the LibGDX wiki (https://github.com/libgdx/libgdx/wiki). Once all is set, let's hit the Generate button! There is a little window at the bottom of the UI that will now spring to life. 
Here, it will show you the setup progress as it downloads the necessary setup files. Once complete, open that command line, navigate to the directory, and run your preferred tree command (in Windows, it is just "tree").   Hopefully, you will have the same directory layout as the previous image shows. The astute among you will now ask, "What is this Gradle?" and quite rightly so. I haven't mentioned it yet, although it appears twice in our projects directory. What is Gradle? Well, Gradle is a very excellent build tool and LibGDX leverages its abilities to look after the dependencies, build process, and IDE integration. This is especially useful if you are going to be working in a team with a shared code base. Even if you are not, the dependency management aspect is worth it alone. Anyone who isn't familiar with dependency management may well be used to downloading Java JARs manually and placing them in a libs folder, but they might run into problems later when the JAR they just downloaded needs another JAR, and so on. The dependency management will take care of this for you and even better is that the LibGDX setup application takes care of this for you by already describing the dependencies that you need to run! Within LibGDX, there is something called the Gradle Wrapper. This is essentially the Gradle application embedded into the project. This allows portability of our project, as now if we want someone else to run it, they can. I guess this leads us to the question, how do we use Gradle to run our project? In the LibGDX wiki (https://github.com/libgdx/libgdx/wiki/Gradle-on-the-Commandline), you will find a comprehensive list of commands that can be used while developing your game. However, for now, we will only cover the desktop project. What you may not have noticed is that the setup application actually generates a very simple "Hello World" game for us. So, we have something we can run from the command line right away! Let's go for it! On our command line, let's run the following:    On Windows: gradlew desktop:run    On Linux and Mac OS X: ./gradlew desktop:run The following screen will appear once you execute the preceding command:   You will get an output similar to the preceding screenshot. Don't worry if it suddenly wants to start downloading the dependencies. This is our dependency management in action! All those JARs and native binaries are being downloaded and put on to classpaths. But, we don't care. We are here to create games! So, after the command prompt has finished downloading the files, it should then launch the "Hello World" game. Awesome! You have just launched your very first LibGDX game! Although, before we get too excited, you will notice that not much actually happens here. It is just a red screen with the Bad Logic Games logo. I think now is the time to look at the code! Importing a project So far, we have launched the "Hello World" game via the command line, and haven't seen a single line of code so far. Let's change that. To do this, I will use IntelliJ IDEA. If you are using Android Studio, the screenshots will look familiar. If you are using Eclipse, I am sure you will be able to see the common concepts. To begin with, we need to generate the appropriate IDE project files. Again, this is using Gradle to do the heavy lifting for us. 
Once again, on the command line, run the following (pick the one that applies): On Windows: gradlew idea or gradlew eclipse On Linux and Mac OS X: ./gradlew idea or ./gradlew eclipse Now, Gradle will have generated some project files. Open your IDE of choice and open the project. If you require more help, check out the following wiki pages: https://github.com/libgdx/libgdx/wiki/Gradle-and-Eclipse https://github.com/libgdx/libgdx/wiki/Gradle-and-Intellij-IDEA https://github.com/libgdx/libgdx/wiki/Gradle-and-NetBeans Once the project is open, have a poke around and look at some of the files. I think our first port of call should be the build.gradle file in the root of the project. Here, you will see that the layout of our project is defined and the dependencies we require are on display. It is a good time to mention that going forward, there will be new releases of LibGDX, and to update our project to the latest version, all we need to do is update the following property: gdxVersion = '1.6.4' Now, run your game and Gradle will kick in and download everything for you! Next, we should look for our game class, remember the one we specified in the setup application—MyGdxGame.java? Find it, open it, and be in awe of how simple it is to display that red screen and Bad Logic Games logo. In fact, I am going to paste the code here for you to see how simple it is: public class MyGdxGame extends ApplicationAdapter { SpriteBatch batch; Texture img; @Override public void create () { batch = new SpriteBatch(); img = new Texture("badlogic.jpg"); } @Override public void render () { Gdx.gl.glClearColor(1, 0, 0, 1); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); batch.begin(); batch.draw(img, 0, 0); batch.end(); } } Essentially, we can see that when the create() method is called, it sets up a SpriteBatch batch and creates a texture from a given JPEG file. Then, on the render() method, this is called on every iteration of the game loop; it covers the screen with the color red, then it draws the texture at the (0, 0) coordinate location. Finally, we will look at the DesktopLauncher class, which is responsible for running the game in the desktop environment. Let's take a look at the following code snippet: public class DesktopLauncher { public static void main (String[] arg) { LwjglApplicationConfiguration config = new LwjglApplicationConfiguration(); new LwjglApplication(new MyGdxGame(), config); } } The preceding code shows how simple it is. We have a configuration object that will define how our desktop application runs, setting things like screen resolution and framerate, amongst others. In fact, this is an excellent time to utilize the open source aspect of LibGDX. In your IDE, click through to the LwjglApplicationConfiguration class. You will see all the properties that can be tweaked and notes on what they mean. The instance of the LwjglApplicationConfiguration class is then passed to the constructor of another class LwjglApplication, along with an instance of our MyGdxGame class. Finally, those who have worked with Java a lot in the past will recognize that it is wrapped in a main method—a traditional entry point for a Java application. That is all that is needed to create and launch a desktop-only LibGDX game. Summary In this article, we looked at what LibGDX is about and how to go about creating a standard project, running it from the command line and importing it into your preferred IDE ready for development. 
Resources for Article: Further resources on this subject: 3D Modeling[article] Using Google's offerings[article] Animations in Cocos2d-x [article]


Unity Networking – The Pong Game

Packt
22 Nov 2013
15 min read
(For more resources related to this topic, see here.) Multiplayer is everywhere. It's a staple of AAA games and small-budget indie offerings alike. Multiplayer games tap into our most basic human desires. Whether it be teaming up with strangers to survive a zombie apocalypse, or showing off your skills in a round of "Capture the Flag" on your favorite map, no artificial intelligence in the world comes close to the feeling of playing with a living, breathing, and thinking human being. Unity3D has a sizable number of third-party networking middleware aimed at developing multiplayer games, and is arguably one of the easiest platforms to prototype multiplayer games. The first networking system most people encounter in Unity is the built-in Unity Networking API . This API simplifies a great many tasks in writing networked code by providing a framework for networked objects rather than just sending messages. This works by providing a NetworkView component, which can serialize object state and call functions across the network. Additionally, Unity provides a Master server, which essentially lets players search among all public servers to find a game to join, and can also help players in connecting to each other from behind private networks. In this article, we will cover: Introducing multiplayer Introducing UDP communication Setting up your own Master server for testing What a NetworkView is Serializing object state Calling RPCs Starting servers and connecting to them Using the Master server API to register servers and browse available hosts Setting up a dedicated server model Loading networked levels Creating a Pong clone using Unity networking Introducing multiplayer games Before we get started on the details of communication over the Internet, what exactly does multiplayer entail in a game? As far as most players are concerned, in a multiplayer game they are sharing the same experience with other players. It looks and feels like they are playing the same game. In reality, they aren't. Each player is playing a separate game, each with its own game state. Trying to ensure that all players are playing the exact same game is prohibitively expensive. Instead, games attempt to synchronize just enough information to give the illusion of a shared experience. Games are almost ubiquitously built around a client-server architecture, where each client connects to a single server. The server is the main hub of the game, ideally the machine for processing the game state, although at the very least it can serve as a simple "middleman" for messages between clients. Each client represents an instance of the game running on a computer. In some cases the server might also have a client, for instance some games allow you to host a game without starting up an external server program. While an MMO ( Massively Multiplayer Online ) might directly connect to one of these servers, many games do not have prior knowledge of the server IPs. For example, FPS games often let players host their own servers. In order to show the user a list of servers they can connect to, games usually employ another server, known as the "Master Server" or alternatively the "Lobby server". This server's sole purpose is to keep track of game servers which are currently running, and report a list of these to clients. Game servers connect to the Master server in order to announce their presence publicly, and game clients query the Master server to get an updated list of game servers currently running. 
Alternatively, this Master server sometimes does not keep track of servers at all. Sometimes games employ "matchmaking", where players connect to the Lobby server and list their criteria for a game. The server places this player in a "bucket" based on their criteria, and whenever a bucket is full enough to start a game, a host is chosen from these players and that client starts up a server in the background, which the other players connect to. This way, the player does not have to browse servers manually and can instead simply tell the game what they want to play. Introducing UDP communication The built-in Unity networking is built upon RakNet . RakNet uses UDP communication for efficiency. UDP ( User Datagram Protocols ) is a simple way to send messages to another computer. These messages are largely unchecked, beyond a simple checksum to ensure that the message has not been corrupted. Because of this, messages are not guaranteed to arrive, nor are they guaranteed to only arrive once (occasionally a single message can be delivered twice or more), or even in any particular order. TCP, on the other hand, guarantees each message to be received just once, and in the exact order they were sent, although this can result in increased latency (messages must be resent several times if they fail to reach the target, and messages must be buffered when received, in order to be processed in the exact order they were sent). To solve this, a reliability layer must be built on top of UDP. This is known as rUDP ( reliable UDP ). Messages can be sent unreliably (they may not arrive, or may arrive more than once), or reliably (they are guaranteed to arrive, only once per message, and in the correct order). If a reliable message was not received or was corrupt, the original sender has to resend the message. Additionally, messages will be stored rather than immediately processed if they are not in order. For example, if you receive messages 1, 2, and 4, your program will not be able to handle those messages until message 3 arrives. Allowing unreliable or reliable switching on a per-message basis affords better overall performance. Messages, such as player position, are better suited to unreliable messages (if one fails to arrive, another one will arrive soon anyway), whereas damage messages must be reliable (you never want to accidentally drop a damage message, and having them arrive in the same order they were sent reduces race conditions). In Unity, you can serialize the state of an object (for example, you might serialize the position and health of a unit) either reliably or unreliably (unreliable is usually preferred). All other messages are sent reliably. Setting up the Master Server Although Unity provide their own default Master Server and Facilitator (which is connected automatically if you do not specify your own), it is not recommended to use this for production. We'll be using our own Master Server, so you know how to connect to one you've hosted yourself. Firstly, go to the following page: http://unity3d.com/master-server/ We're going to download two of the listed server components: the Master Server and the Facilitator as shown in the following screenshot: The servers are provided in full source, zipped. If you are on Windows using Visual Studio Express, open up the Visual Studio .sln solution and compile in the Release mode. Navigate to the Release folder and run the EXE (MasterServer.exe or Facilitator.exe). 
If you are on a Mac, you can either use the included Xcode project, or simply run the Makefile (the Makefile works under both Linux and Mac OS X).

The Master Server, as previously mentioned, enables our game to show a server lobby to players. The Facilitator is used to help clients connect to each other by performing an operation known as NAT punch-through. NAT is used when multiple computers are part of the same network and all share the same public IP address. NAT essentially translates between public and private IPs, but in order for one machine to connect to another, NAT punch-through is necessary. You can read more about it here:

http://www.raknet.net/raknet/manual/natpunchthrough.html

The default port for the Master Server is 23466, and for the Facilitator it is 50005. You'll need these later in order to configure Unity to connect to the local Master Server and Facilitator instead of the default Unity-hosted servers.

Now that we've set up our own servers, let's take a look at the Unity Networking API itself.

NetworkViews and state serialization

In Unity, game objects that need to be networked have a NetworkView component. The NetworkView component handles communication over the network, and even helps make networked state serialization easier. It can automatically serialize the state of a Transform, Rigidbody, or Animation component, or you can write a custom serialization function in one of your own scripts.

When attached to a game object, the NetworkView is assigned a NetworkViewID. This ID serves to uniquely identify a NetworkView across the network. An object can be saved as part of a scene with a NetworkView attached (this can be used for game managers, chat boxes, and so on), or it can be saved in the project as a prefab and spawned later via Network.Instantiate (this is used to generate player objects, bullets, and so on). Network.Instantiate is the multiplayer equivalent of GameObject.Instantiate: it sends a message over the network to other clients so that all clients spawn the object. It also assigns a network ID to the object, which is used to identify the object across multiple clients (the same object will have the same network ID on every client).

A prefab is a template for a game object (such as the player object). You can use the Instantiate methods to create a copy of the template in the scene.

Spawned network game objects can also be destroyed via Network.Destroy. It is the multiplayer counterpart of GameObject.Destroy. It sends a message to all clients so that they all destroy the object, and it also deletes any RPC messages associated with that object.

A NetworkView has a single observed component that it will serialize. This can be a Transform, a Rigidbody, an Animation, or one of your own components that has an OnSerializeNetworkView function. Serialized values can either be sent with the ReliableDeltaCompressed option, where values are always sent reliably and compressed to include only the changes since the last update, or they can be sent with the Unreliable option, where values are not sent reliably and always include the full values (not just the change since the last update, since that would be impossible to predict over UDP). Each method has its own advantages and disadvantages. If data is constantly changing, such as player position in a first-person shooter, Unreliable is generally preferred to reduce latency. If data does not change often, use the ReliableDeltaCompressed option to reduce bandwidth (as only changes will be serialized).
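For reference, here is a minimal sketch of how these two modes might be chosen from code with the legacy networking API; the class name is hypothetical and exists only for illustration:

using UnityEngine;

// Hypothetical example component: configures how its NetworkView serializes state.
public class ExampleStateSyncSetup : MonoBehaviour
{
    void Awake()
    {
        // observe this object's Transform so its position/rotation are what gets serialized
        networkView.observed = transform;

        // position changes every frame, so Unreliable keeps latency low;
        // for state that rarely changes, ReliableDeltaCompressed saves bandwidth instead
        networkView.stateSynchronization = NetworkStateSynchronization.Unreliable;
    }
}

The same setting is also exposed in the Inspector on the NetworkView component as State Synchronization, alongside the Observed field.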
A NetworkView can also call methods across the network via Remote Procedure Calls (RPC). RPCs are always completely reliable in Unity Networking, although some networking libraries, such as uLink or TNet, allow you to send unreliable RPCs.

Writing a custom state serializer

While initially a game might simply serialize a Transform or Rigidbody for testing, eventually it is often necessary to write a custom serialization function. This is a surprisingly easy task. Here is a script that sends an object's position over the network:

using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkSerializePosition : MonoBehaviour
{
    public void OnSerializeNetworkView( BitStream stream, NetworkMessageInfo info )
    {
        // we are currently writing information to the network
        if( stream.isWriting )
        {
            // send the object's position
            Vector3 position = transform.position;
            stream.Serialize( ref position );
        }
        // we are currently reading information from the network
        else
        {
            // read the first Vector3 and store it in 'position'
            Vector3 position = Vector3.zero;
            stream.Serialize( ref position );

            // set the object's position to the value we were sent
            transform.position = position;
        }
    }
}

Most of the work is done with BitStream. This is used to check whether the NetworkView is currently writing the state, or reading it from the network. Depending on whether it is reading or writing, stream.Serialize behaves differently. If the NetworkView is writing, the value will be sent over the network. However, if the NetworkView is reading, the value will be read from the network and saved in the referenced variable (thus the ref keyword, which passes the Vector3 by reference rather than by value).

Using RPCs

RPCs are useful for single, self-contained messages that need to be sent, such as a character firing a gun, or a player saying something in chat. In Unity, RPCs are methods marked with the [RPC] attribute. They can be called by name via networkView.RPC( "methodName", ... ). For example, the following script prints to the console on all machines when the space key is pressed:

using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkCallRPC : MonoBehaviour
{
    void Update()
    {
        // important: make sure not to run if this networkView is not ours
        if( !networkView.isMine )
            return;

        // if the space key is pressed, call the RPC for everybody
        if( Input.GetKeyDown( KeyCode.Space ) )
            networkView.RPC( "testRPC", RPCMode.All );
    }

    [RPC]
    void testRPC( NetworkMessageInfo info )
    {
        // log the IP address of the machine that called this RPC
        Debug.Log( "Test RPC called from " + info.sender.ipAddress );
    }
}

Also note the use of NetworkView.isMine to determine ownership of an object. All scripts run 100 percent of the time regardless of whether your machine owns the object or not, so you have to be careful to avoid letting some logic run on remote machines; for example, player input code should only run on the machine that owns the object.

RPCs can either be sent to a number of players at once, or to a specific player. You can either pass an RPCMode to specify which group of players should receive the message, or a specific NetworkPlayer to send the message to. You can also specify any number of parameters to be passed to the RPC method (a short sketch follows the list below).

RPCMode includes the following entries:

All (the RPC is called for everyone)
AllBuffered (the RPC is called for everyone, and then buffered for when new players connect, until the object is destroyed)
Others (the RPC is called for everyone except the sender)
OthersBuffered (the RPC is called for everyone except the sender, and then buffered for when new players connect, until the object is destroyed)
Server (the RPC is sent to the host machine)
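To illustrate parameter passing, here is a minimal sketch of a chat-style RPC; the class and method names are hypothetical, chosen only for this example. Any arguments listed after the RPCMode (or NetworkPlayer) are forwarded to the RPC method, and a trailing NetworkMessageInfo parameter is filled in automatically by Unity rather than passed by the caller:

using UnityEngine;

// Hypothetical example: sends a chat line to every connected player via an RPC.
public class ExampleUnityNetworkRPCParameters : MonoBehaviour
{
    public void SendChatLine( string text )
    {
        // everything after the RPCMode is passed along as an argument to the RPC method
        networkView.RPC( "ReceiveChatLine", RPCMode.All, text );
    }

    [RPC]
    void ReceiveChatLine( string text, NetworkMessageInfo info )
    {
        // the NetworkMessageInfo parameter is supplied automatically by Unity
        Debug.Log( info.sender.ipAddress + " says: " + text );
    }
}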
Initializing a server

The first thing you will want to set up is hosting games and joining games. To initialize a server on the local machine, call Network.InitializeServer. This method takes three parameters: the number of allowed incoming connections, the port to listen on, and whether to use NAT punch-through. The following script initializes a server on port 25005 which allows 8 clients to connect:

using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkInitializeServer : MonoBehaviour
{
    void OnGUI()
    {
        if( GUILayout.Button( "Launch Server" ) )
        {
            LaunchServer();
        }
    }

    // launch the server
    void LaunchServer()
    {
        // Start a server that enables NAT punch-through,
        // listens on port 25005,
        // and allows 8 clients to connect
        Network.InitializeServer( 8, 25005, true );
    }

    // called when the server has been initialized
    void OnServerInitialized()
    {
        Debug.Log( "Server initialized" );
    }
}

You can also optionally require a password for incoming connections (useful for private games) by setting Network.incomingPassword to a password string of the host player's choice, and you can initialize a general-purpose security layer by calling Network.InitializeSecurity(). Both of these should be set up before actually initializing the server.

Connecting to a server

To connect to a server whose IP address you know, you can call Network.Connect. The following script allows the player to enter an IP, a port, and an optional password, and attempts to connect to the server:

using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkingConnectToServer : MonoBehaviour
{
    private string ip = "";
    private string port = "";
    private string password = "";

    void OnGUI()
    {
        GUILayout.Label( "IP Address" );
        ip = GUILayout.TextField( ip, GUILayout.Width( 200f ) );

        GUILayout.Label( "Port" );
        port = GUILayout.TextField( port, GUILayout.Width( 50f ) );

        GUILayout.Label( "Password (optional)" );
        password = GUILayout.PasswordField( password, '*', GUILayout.Width( 200f ) );

        if( GUILayout.Button( "Connect" ) )
        {
            int portNum = 25005;
            // failed to parse the port number; a better solution is to limit input to
            // numbers only, and a number of examples can be found on the Unity forums
            if( !int.TryParse( port, out portNum ) )
            {
                Debug.LogWarning( "Given port is not a number" );
            }
            // try to initiate a direct connection to the server
            else
            {
                Network.Connect( ip, portNum, password );
            }
        }
    }

    void OnConnectedToServer()
    {
        Debug.Log( "Connected to server!" );
    }

    void OnFailedToConnect( NetworkConnectionError error )
    {
        Debug.Log( "Failed to connect to server: " + error.ToString() );
    }
}

Connecting to the Master Server

While we could just allow the player to enter IP addresses to connect to servers (and many games do this, such as Minecraft), it's much more convenient to allow the player to browse a list of public servers. This is what the Master Server is for. Now that you can start up a server and connect to it, let's take a look at how to connect to the Master Server you downloaded earlier. First, make sure both the Master Server and the Facilitator are running.
I will assume you are running them on your local machine (IP 127.0.0.1), but of course you can run them on a different computer and use that machine's IP address. Keep in mind that if you want the Master Server to be publicly accessible, it must be installed on a machine with a public IP address (it cannot be inside a private network).

Let's configure Unity to use our Master Server rather than the Unity-hosted test server. The following script points the Master Server and Facilitator settings at a given IP (127.0.0.1 by default):

using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkingConnectToMasterServer : MonoBehaviour
{
    // Assuming the Master Server and Facilitator are on the same machine
    public string MasterServerIP = "127.0.0.1";

    void Awake()
    {
        // set the IP and port of the Master Server to connect to
        MasterServer.ipAddress = MasterServerIP;
        MasterServer.port = 23466;

        // set the IP and port of the Facilitator to connect to
        Network.natFacilitatorIP = MasterServerIP;
        Network.natFacilitatorPort = 50005;
    }
}
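Once these addresses are set, a game talks to the Master Server through the MasterServer API. The following is only a rough sketch of that flow under stated assumptions: the game type string "MyGameType", the class name, and the method names here are hypothetical and exist purely for illustration.

using UnityEngine;

// Hypothetical sketch: register a running server with the Master Server,
// and poll it for the list of available hosts.
public class ExampleMasterServerUsage : MonoBehaviour
{
    // call this on the host, for example after OnServerInitialized
    public void RegisterWithMasterServer()
    {
        MasterServer.RegisterHost( "MyGameType", "My Game Name", "An example comment" );
    }

    // call this on a client to ask the Master Server for the current host list
    public void RefreshHostList()
    {
        MasterServer.RequestHostList( "MyGameType" );
    }

    void Update()
    {
        // PollHostList returns whatever hosts have been received so far;
        // a real game would read this once after a refresh rather than every frame
        HostData[] hosts = MasterServer.PollHostList();
        foreach( HostData host in hosts )
        {
            Debug.Log( host.gameName + " (" + host.connectedPlayers + "/" + host.playerLimit + ")" );
        }
    }
}

A chosen entry from the polled list can then be joined by passing it to Network.Connect, which accepts a HostData overload.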