Mobile Game Design Best Practices

Packt
18 Feb 2016
13 min read
With so many different screen sizes, so little space to work with, and no real standard in the videogame industry, interface design is one of the toughest parts of creating a successful game. This article provides what you need to know to approach the task properly and come up with optimal solutions for your mobile games.

(For more resources related to this topic, see here.)

Best practices in UI design

In this article we provide a list of useful hints for creating a well-designed UI for your next game. The first golden rule is that the better a game interface is designed, the less it is noticed by players, because it lets them navigate the game in a way that feels natural and easy to grasp. To achieve that, always opt for the simplest solution possible: simplicity means that controls are clean and easy to learn, and that the necessary information is displayed clearly.

A little psychology helps when deciding where to position buttons in the interface. Human beings have typical cognitive biases and easily develop habits, and learning to exploit these psychological traits can make a real difference to the ergonomics of your game interface. We mentioned Gibson's Theory of Affordances, but there are other theories of visual perception that should be taken into consideration. A good starting point is the Gestalt Principles, which you will find at the following link: http://graphicdesign.spokanefalls.edu/tutorials/process/gestaltprinciples/gestaltprinc.htm

Finally, it may seem trivial, but never assume anything. Design your game interface so that the most prominent options on the main screen lead your players into the game.

Choose the target platform

When you begin designing the interface for a game, spend some time thinking about its main target platform, so that you have a reference resolution to begin working with.
The following table lists the screen resolutions of the most popular devices you are likely to work with:

Device model                   Screen resolution (pixels)
iPhone 3GS and equivalent      480x320
iPhone 4(S) and equivalent     960x640 at 326 ppi
iPhone 5                       1136x640 at 326 ppi
iPad 1 and 2                   1024x768
iPad 3 retina                  2048x1536 at 264 ppi
iPad Mini                      1024x768 at 163 ppi
Android devices                variable
Tablets                        variable

As you may notice, things are almost straightforward on iOS: with the exception of the latest iPhone 5, there are only two main aspect ratios, and retina displays simply double the number of pixels. By designing your interface separately for the iPhone/iPod touch and for the iPad at retina resolution, and scaling it down for older models, you cover almost all Apple customers.

For Android-based devices, on the other hand, things are more complicated, as there are tons of devices and they differ widely in screen size and resolution. The best thing to do in this case is to choose a high-end reference model with an HD display, such as the HTC One X+ or the Samsung Galaxy S4 (as we write), design the interface to match its resolution, and scale it as required to adapt to the others. This way the graphics won't be perfect on every device, but 90 percent of your players won't notice any difference.

The following sites offer useful information for dealing with the variety of Android screens:

http://developer.android.com/about/dashboards/index.html
http://anidea.com/technology/designer%E2%80%99s-guide-to-supporting-multiple-android-device-screens/
http://unitid.nl/2012/07/10-free-tips-to-create-perfect-visual-assets-for-ios-and-android-tablet-and-mobile/

Search for references

There is no need to reinvent the wheel every time you design a new game interface.
Games can easily be classified by genre, and different genres tend to adopt general interface solutions shared among titles of the same genre. Whenever you plan the interface for a new game, look at others' work first. Play games and study their UI, especially from a functional perspective. When studying other games' interfaces, always ask yourself:

What information does the player need to play this title?
What kind of functionality is needed to achieve the game goals?
Which important components need to stand out from the rest?
What is the purpose and context of each window?

By answering these questions, you can analyze the interfaces of other games in depth, compare them, and then choose the solutions that best fit your specific needs.

The screen flow

The options available to the players of your game will need to live in game screens of some sort. So the questions to ask yourself are: How many screens does my game need? Which options will be available to players? Where will these options be located?

Once you have a list of the required options and game screens, create a flow chart describing how the player navigates between the different screens and which options are available in each one. The resulting visual map will help you check that the screen flow is clear and intuitive, that game options are located where players expect to find them, and that there are no duplicates, which should be avoided. Be sure that each game screen provides a BACK button to return to the previous screen. It can also be useful to add hyperlinks to your screen mockups so that you can try navigating through them early on.

Functionality

It is now time to define how the interface you are designing will help users play your game. At this point, you should already have a clear idea of what the player will be doing in your game and of its mechanics.
With that information in mind, think about which actions are required and what information must be displayed to support them. For every piece of information you could deliver to the player, ask yourself whether it is really necessary and where it should be displayed for best effect. Be as conservative as you can when doing this; it is all too easy to lose your grip on the interface of your game if new options, buttons, and functions keep proliferating.

The following hints are useful to keep in mind when defining the functionality of your game interface:

Keep the number of buttons as low as possible
Stick to one primary purpose for each game screen
Refer to the screen flow to check the context of each game screen
Split complex information into small chunks and/or multiple screens to avoid overburdening your players

Wireframes

Now that the flow and basic contents of the game interface are set, it is time to start drawing with a graphics editor, such as Photoshop. Create a template for your game interface that can support the different resolutions you expect to develop for, and start defining the size and position of each button and every piece of information that must be available on screen. Try not to use colors yet, or use them only to highlight the most important buttons on each game screen.

This stage should involve at least three members of the team: the game designer, the artist, and the programmer. If you are a game designer, never plan the interface without conferring with your artist and programmer. The artist is responsible for creating the right assets for the job, so it is important that he/she understands the ideas behind your design choices. The programmer is responsible for implementing the solutions you designed, so it is good practice to ask for his/her opinion too, to avoid designing solutions that cannot be implemented in the end.
There are also many tools that web and app developers use to quickly create wireframes and prototypes for user interfaces. A good selection of the most appreciated tools can be found at the following link: http://www.dezinerfolio.com/2011/02/21/14-prototyping-and-wireframing-tools-for-designers

The button size

We suggest you pay extra attention to defining the proper size for your game buttons: there is no point in having buttons on the screen if the player can't press them. This is especially true of games that use virtual pads. As virtual buttons tend to obscure a considerable portion of a mobile device's screen, there is a tendency to make them as small as possible. If they are too small, however, the consequences can be catastrophic, as players won't even be able to play your game, let alone enjoy it. Street Fighter IV for the iPhone, for example, implements the biggest virtual buttons available on the App Store. Check them in the following figure:

When designing buttons for your game interface, take your time to run tests and find an optimal balance between the opposing needs of displaying buttons and saving as much screen space as possible for gameplay.

The main screen

The main goal of the first interactive screen of a title should be to make it easy to start playing. It is thus very important that the PLAY button is large and distinctive enough for players to easily find it on the main screen. The other options available on the main screen may vary with the characteristics of each specific game, although some are expected regardless of genre, such as OPTIONS, LEADERBOARDS, ACHIEVEMENTS, BUY, and SUPPORT. The following image represents the main screen of Angry Birds, a perfect example of a well-designed main screen. Notice, for example, that the optional buttons at the bottom of the screen are displayed as symbols that make the purpose of each one clear.
This is a smart way to reduce the issues involved in translating your game text into different languages.

Test and iterate

Once the previous steps are completed, start testing the game interface. Try every available option to check that the interface actually provides users with everything they need to play and enjoy your game. Then ask other people to try it and collect their feedback. As feedback comes in, list the most requested modifications, implement them, and repeat the cycle until you are happy with the configuration of your game interface.

Evergreen options

In the last section of this article, we offer some considerations about game options that should always be implemented in a well-designed mobile game UI, regardless of its genre or distinctive features.

Multiple save slots

Though extremely fit for gaming, today's smartphones are first of all phones and general multi-purpose devices, so it is very common to be forced to suddenly quit a game due to an incoming call or other everyday activities. All apps quit when there is an incoming call or when the player presses the home button, and mobile games offer an auto-saving feature for such events. What not all games do is keep separate save states for every mode the game offers, or for multiple users. Plants vs. Zombies, for example, offers such a feature: both the adventure and the quick play modes, in all their variations, are stored in separate save slots, so the player never loses progress, regardless of the mode he/she last played or the level he/she would like to challenge. The following is a screenshot taken from the main screen of the game:

A multiple save option is also much appreciated because it makes it safe for your friends to try your newly downloaded game without destroying your previous progress.
Screen rotation

The accelerometer included in most smartphones detects the rotation of the device in 3D space, and most software running on those devices rotates its interface accordingly, matching the portrait or landscape orientation in which the phone is held. With games, this feature is not as easy to handle as it is for an image viewer or a text editor. Some games are designed to exploit the vertical or horizontal dimension of the screen with a purpose, and rotating the phone may be an action the game cannot accommodate at all. Should the game allow rotating the device, it is then necessary to adapt the game interface to the orientation of the phone as well, which generally means designing an alternate version of the interface altogether. It is also an interesting (and little exploited) option to make rotating the device part of the actual gameplay and a core mechanic of a game.

Calibration and reconfiguration

It is always a good idea to let players calibrate and/or reconfigure the game controls in the options screen. For example, left-handed players will appreciate the ability to switch the orientation of the game controls. When the accelerometer is involved, being able to set the device's sensitivity to rotation can make a lot of difference to a player. Different models with different hardware and software detect rotation differently, and there is no single configuration that suits all smartphones, so let players calibrate their phones according to their personal taste and the capabilities of the device in their hands. Several games offer this option.

Challenges

As games become more and more social, several options have been introduced to let players display their scores on public leaderboards and compete against friends.
One game that does this pretty well is Super 7, an iPhone title that displays, at the very top of the screen, a rainbow bar that grows with the player's score. When the bar reaches its end at the right of the screen, some other player's score has been beaten. It is a nice example of a game feature that continually rewards the player and motivates him/her while playing.

Experiment

The touch screen is a relatively new control scheme for games, so feel free to try out new ideas. For example, would it be possible to design a first-person shooter that uses gestures instead of traditional virtual buttons and a D-pad? The trick is to keep the playfield as open as possible, since the majority of smart devices have relatively small screens.

Summary

To learn more about mobile game design, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Mobile First Design with HTML5 and CSS3 (https://www.packtpub.com/web-development/mobile-first-design-html5-and-css3)
Gideros Mobile Game Development (https://www.packtpub.com/game-development/gideros-mobile-game-development)

Resources for Article:

Further resources on this subject:

Elucidating the Game-changing Phenomenon of the Docker-inspired Containerization Paradigm [article]
Getting Started with Multiplayer Game Programming [article]
Introduction to Mobile Web ArcGIS Development [article]
Packaging the Game

Packt
07 Jul 2016
13 min read
A game is not just art, code, and a game design packaged into an executable. You have to deal with stores, publishers, ratings, console providers, and producing assets and videos for stores and marketing, among the other minor tasks required to fully ship a game. This article by Muhammad A. Moniem, author of the book Mastering Unreal Engine 4.X, takes care of the last steps you need to perform within the Unreal environment to get this packaged executable fine and running. Anything post-Unreal, you will need to work out yourself; but from my seat, I'm telling you: you have done the complex, hard, huge, and long part, and what comes next is a lot simpler! This article will help us understand and use Unreal's Project Launcher, patching a project, and creating DLCs (downloadable content).

(For more resources related to this topic, see here.)

Project Launcher and DLCs

The first and most important thing to keep in mind is that the Project Launcher is still in development, and the process of creating DLCs is not final yet; it might change in future engine releases. While writing this book I have been using Unreal 4.10 and testing everything I do and write in the Unreal 4.11 preview version, and still the DLC process remains experimental. So be advised that you might find it a little different in the future as the engine evolves.

While we previously packaged the game through the File menu using Packaging Project, there is another, more detailed, more professional way to do the same job: the Project Launcher. It comes as a separate app shipped with Unreal (Unreal Frontend), but you can also run it directly from the editor. You can access the Project Launcher from the Windows menu by choosing Project Launcher, which launches it right away. However, I have a question here: why go through these extra steps when you can do the packaging process in one click? Well, extensibility is the answer.
Using the Unreal Project Launcher, you can create several profiles, each with different build settings, and later fire off each build whenever you need it. Not only that, the profiles can be made for different projects, which means you can have a ready-made setup for all your projects with all their different build configurations. And even that is not everything; it comes in handier still when you have to cook the content of a game several times. Rather than keep doing it through the File menu, you can cook the content for all the different platforms at once. For example, if you have to change one texture in a game that is supported on five platforms, you can make a profile that cooks the content for all the platforms and arranges them for you in one go, and you can spend that time doing something else. The Project Launcher does the whole thing for you.

What if you have to cook the game content for different languages? Say the game supports 10 languages: do you have to do it one by one for each language? The answer is simple; the Project Launcher will do it for you. So you can think of the Project Launcher as a batch process, a custom command-line tool, or even a form of build server. You set the configurations and requests and leave it alone to do the whole thing while you save your time doing something else. It is all about productivity!

The most important part of the Project Launcher is that you can create DLCs very easily. Just by setting up a profile with a different set of options and settings, you can get a DLC or game mode done without any complications. In a word, it is all about profiles, so let's discuss how to create profiles that serve different purposes. Sometimes the Project Launcher proposes a standard profile matching the platform you are using.
That is good, but usually those profiles don't have everything we need, which is why it is recommended to always create new profiles to serve our goals. The Project Launcher window is by default divided into two sections: the upper part contains the default profiles, while the lower part contains the custom profiles. To create a new profile, all you have to do is hit the plus sign in the bottom part, titled Custom Launch Profiles.

Pressing it takes you to a wizard (better described as a window) where you can set up the new profile's options. These options are drastic, and changing them leads to completely different results, so be careful. In general, you will mostly be building either a project for release, or a DLC or patch for an already released project. You can even do other types of build that serve different goals, such as a language package for an already released game, which is treated as a patch or DLC but has a different setup and options. Anyway, we will take care of the two main types of process that developers usually deal with in the Project Launcher: a release and a patch.

Packaging a release

After the new Custom Launch Profile wizard window opens, you have to change the settings necessary to make a Release build of the project. This includes:

General: This has the following fields:

Give a name to the profile; this name will be displayed in the Project Launcher main window.
Give a description to the profile, to make its goals clear for you in the future, or for anyone else who is going to use it.

Project: This has the following sections:

Select the project that needs to be built.
Or you can leave this at Any Project, to build the current active project.

Build: This has the following sections:

You have to check the Build box to make a build and activate this section of options. From the Build Configuration dropdown, choose a build type, which is Shipping in this case. Finally, you can check the Build UAT (Unreal Automation Tool) option under Advanced Settings in this section. The UAT can be considered a set of scripts implementing automated processes, but in order to decide whether to run it or not, you have to understand what the UAT is:

Written in C# (may convert to C++ in the future)
Automates repetitive tasks through automation scripts
Builds, cooks, packages, deploys, and launches projects
Invokes UBT for compilation
Analyzes and fixes game content files
Performs code surgery when updating to new engine versions
Distributes compilation (XGE) and integrates with build systems
Generates code documentation
Automates testing of code and content
And many others; you can add your own scripts!

Now you will know whether you want to enable it or not.

Cook: This has the following settings:

In the Cook section, you need to set the cook mode to By the book. This means you define exactly what needs to be cooked, and for which platforms. For now it is enough to set it to WindowsNoEditor, and check the cultures you want from the list. I chose all of them (this is faster than picking one at a time) and then excluded the ones I didn't want.

Then you need to check which maps should be cooked. If you can't see any maps, it is probably the first build; later you'll find the maps listed. In any case, you must keep all the maps listed in the Maps folder under the Content directory.

Now, from the Release / DLC / Patching Settings section, you have to check the option Create a release version of the game for distribution, as this version is going to be the distribution one.
From the same section, give the build a version number. This creates some extra files that will be used in the future if we go on to create patches or DLCs.

You can expand the Advanced Settings section to set your own options. By default, Compress Content and Save Packages without versions are both checked, and both are good for the type of build we are making. You can also set Store all content in a single file (UnrealPak) to keep things tidy; one .pak file is better than lots of separate files. Finally, set Cooker Build Configuration to Shipping, since we set Build Configuration itself to Shipping.

Package: This has the following options:

From this section's drop-down menu, choose Package & store locally, which will save the packages on the drive. You can't set anything else here, unless you want to store the packaged game project in a repository.

Deploy: The Deploy section is meant to build the game onto a device of your choice, which is not the case here. If you want to put the game onto a device, you can do Launch directly from within the editor itself. So let's set this section to Do Not Deploy.

Launch: If you have chosen to deploy the game to a device, you'll be able to use this section; otherwise, its options will be disabled. The options here choose the configuration of the deployed build, which will run once it is deployed to the device. You can set things such as the language culture, the default startup map, command-line arguments, and so on. As we are not deploying now, this section will be disabled.

Now that we have finished editing our profile, you can find a back arrow at the top of this wizard. Pressing it takes you back to the Project Launcher main window, where you can find our profile in the bottom section. Any other profiles you make in the future will be listed there. Now there is one step left to finish the build.
In the right corner of the profile there is a button that says Launch This Profile. Hitting it starts the process of building, cooking, and packaging this profile for the selected project. Hit it right away if you want the process to start. And keep in mind that any time you need to change any of the previously set settings, there is always an Edit button for each profile.

The Project Launcher will start processing this profile. It will take some time, depending on your choices, and you'll be able to watch all the steps as they happen. Not only that, you can also watch a detailed log, save this log, or even cancel the process at any time.

Once everything is done, a new button appears at the bottom: Done. Hitting it takes you back to the Project Launcher main window. You can find the build in the Saved\StagedBuilds\WindowsNoEditor directory of your project, which in my case is: C:\Users\Muhammad\Desktop\Bellz\Saved\StagedBuilds\WindowsNoEditor.

The most important thing now is that, if you are planning to create patches or DLCs for this project, remember the version number you set in the Cook section. It produced some files that you can find in: ProjectName\Releases\ReleaseVersion\Platform, which in my case is: C:\Users\Muhammad\Desktop\Bellz\Releases\1.0\WindowsNoEditor. There are two files there; make sure you keep a backup of them on your drive for future use. Now you can ship the game and upload it to the distribution channel!

Packaging a patch or DLC

The good news is that there isn't much new to do here. In other words, you have to do lots of things, but it is a repetitive process. You'll create a new profile in the Project Launcher and set 90% of the options the same as in the previous release profile; the only difference is in the Cook options.
This means the settings that remain the same are: Project, Build, Package, Deploy, and Launch. The only difference is that in the Release/DLC/Patching Settings section of the Cook section you have to:

Disable Create a release version of the game for distribution.
Set the version number of the base build (the release) as the release version this build is based on; this makes sure the previous content is compared with the current one.
Check Generate patch, if the current build is a patch, not a DLC.
Check Build DLC, if the current build is a DLC, not a patch.

Now you can launch this profile and wait until it is done. The patching process creates a *.pak file in the directory: ProjectName\Saved\StagedBuilds\PlatformName\ProjectName\Content\Paks. This .pak file is the patch that you'll upload to the distribution channel! The most common way to handle this type of patch is by creating an installer; in this case, you'll create an installer that copies the *.pak file into the player's directory: ProjectName\Releases\VersionNumber\PlatformName, which is where the original content *.pak file of the release version is. In my case I copy the *.pak file from: C:\Users\Muhammad\Desktop\Bellz\Releases\1.0\WindowsNoEditor to: C:\Users\Muhammad\Desktop\Bellz\Saved\StagedBuilds\WindowsNoEditor\Bellz\Content\Paks.

Now you have found the way to patch and download content. And you should know that, regardless of the time you spent creating it, it will be faster in the future, because you'll get more used to it, and Epic is working on making the process better and better.

Summary

The Project Launcher is a very powerful tool shipped with the Unreal ecosystem. Using it is not mandatory, but sometimes it is needed to save time, and you have learned how and when to use this powerful tool. Many games nowadays have downloadable content; it helps to keep the game community growing and the game earning more revenue.
Having DLCs is not essential, but it is good, and they must be planned early, as we discussed. You have learned how to manage them within Unreal Engine, and how to make patches and DLCs using the Unreal Project Launcher.

Resources for Article:

Further resources on this subject:

Development Tricks with Unreal Engine 4 [article]
Bang Bang – Let's Make It Explode [article]
Lighting basics [article]
Creating Classes

Packt
11 Jul 2016
17 min read
In this article by William Sherif and Stephen Whittle, authors of the book Unreal Engine 4 Scripting with C++ Cookbook, we will discuss how to create C++ classes and structs that integrate well with the UE4 Blueprints Editor. These classes are graduated versions of regular C++ classes, and are called UCLASSes.

(For more resources related to this topic, see here.)

A UCLASS is just a C++ class with a whole lot of UE4 macro decoration on top. The macros generate additional C++ header code that enables integration with the UE4 Editor itself. Using UCLASS is a great practice. The UCLASS macro, if configured correctly, can make your UCLASS Blueprintable. The advantage of making your UCLASS Blueprintable is that your custom C++ objects can then have Blueprints visually-editable properties (UPROPERTY) with handy UI widgets such as text fields, sliders, and model selection boxes. You can also have functions (UFUNCTION) that are callable from within a Blueprints diagram. Both of these are shown in the following images: on the left, two UPROPERTY-decorated class members (a UTexture reference and an FColor) show up for editing in a C++ class's Blueprint; on the right, a C++ function GetName, marked as a BlueprintCallable UFUNCTION, shows up as callable from a Blueprints diagram.

Code generated by the UCLASS macro will be located in a ClassName.generated.h file, which must be the last #include in your UCLASS header file, ClassName.h.

The following are the topics that we will cover in this article:

Making a UCLASS – Deriving from UObject
Creating a user-editable UPROPERTY
Accessing a UPROPERTY from Blueprints
Specifying a UCLASS as the type of a UPROPERTY
Creating a Blueprint from your custom UCLASS

Making a UCLASS – Deriving from UObject

When coding in C++, you can have your own code that compiles and runs as native C++ code, with appropriate calls to new and delete to create and destroy your custom objects.
Native C++ code is perfectly acceptable in your UE4 project as long as your new and delete calls are appropriately paired so that no leaks are present in your C++ code. You can, however, also declare custom C++ classes that behave like UE4 classes, by declaring them as UCLASSes. UCLASSes use UE4's smart pointers and memory management routines for allocation and deallocation according to smart pointer rules, can be loaded and read by the UE4 Editor, and can optionally be accessed from Blueprints.

Note that when you use the UCLASS macro, your UCLASS object's creation and destruction must be completely managed by UE4: you must use ConstructObject to create an instance of your object (not the native C++ keyword new), and call UObject::ConditionalBeginDestroy() to destroy the object (not the native C++ keyword delete).

Getting ready

In this recipe, we will outline how to write a C++ class that uses the UCLASS macro to enable managed memory allocation and deallocation, as well as to permit access from the UE4 Editor and Blueprints. You need a UE4 project into which you can add new code to use this recipe.

How to do it...

From your running project, select File | Add C++ Class inside the UE4 Editor.

In the Add C++ Class dialog that appears, go to the upper-right side of the window, and tick the Show All Classes checkbox. We are creating a UCLASS by choosing to derive from the Object parent class. UObject is the root of the UE4 hierarchy, and you must tick the Show All Classes checkbox in the upper-right corner of this dialog for the Object class to appear in the list view.

Select Object (top of the hierarchy) as the parent class to inherit from, and then click on Next. Note that although Object will be written in the dialog box, in your C++ code the class you will be deriving from is actually UObject, with a leading uppercase U. This is the naming convention of UE4: a UCLASS deriving from UObject (on a branch other than Actor) must be named with a leading U.
A UCLASS deriving from Actor must be named with a leading A. C++ classes (that are not UCLASS) deriving from nothing do not have a naming convention, but can be named with a leading F (for example, FAssetData), if preferred. Direct derivatives of UObject will not be level placeable, even if they contain visual representation elements such as UStaticMeshes. If you want to place your object inside a UE4 level, you must at least derive from the Actor class or beneath it in the inheritance hierarchy. This article's example code will not be placeable in the level, but you can create and use Blueprints based on the C++ classes that we write in this article in the UE4 Editor. Name your new Object derivative something appropriate for the object type that you are creating. I call mine UserProfile. This comes out as UUserProfile in the naming of the class in the C++ file that UE4 generates, to ensure that the UE4 conventions are followed (a C++ UCLASS is preceded with a leading U). We will use the C++ object that we've created to store the Name and Email of a user that plays our game. Go to Visual Studio, and ensure your class file has the following form:

#pragma once

#include "Object.h" // For deriving from UObject
#include "UserProfile.generated.h" // Generated code

// UCLASS macro options set this C++ class to be
// Blueprintable within the UE4 Editor
UCLASS( Blueprintable )
class CHAPTER2_API UUserProfile : public UObject
{
GENERATED_BODY()
};

Compile and run your project. You can now use your custom UCLASS object inside Visual Studio, and inside the UE4 Editor. See the following recipes for more details on what you can do with it. How it works… UE4 generates and manages a significant amount of code for your custom UCLASS. This code is generated as a result of the use of the UE4 macros such as UPROPERTY, UFUNCTION, and the UCLASS macro itself. The generated code is put into UserProfile.generated.h.
You must #include the UCLASSNAME.generated.h file in the UCLASSNAME.h file for compilation to succeed; without it, compilation will fail. The UCLASSNAME.generated.h file must be included as the last #include in the list of #includes in UCLASSNAME.h.

Right:
#pragma once
#include "Object.h"
#include "Texture.h"
// CORRECT: .generated.h is the last file
#include "UserProfile.generated.h"

Wrong:
#pragma once
#include "Object.h"
#include "UserProfile.generated.h"
// WRONG: no includes are allowed after the .generated.h file
#include "Texture.h"

The error that occurs when a UCLASSNAME.generated.h file is not included last in a list of includes is as follows: >> #include found after .generated.h file - the .generated.h file should always be the last #include in a header. There's more… There are a bunch of keywords that we want to discuss here, which modify the way a UCLASS behaves. A UCLASS can be marked as follows: Blueprintable: This means that you want to be able to construct a Blueprint from the Class Viewer inside the UE4 Editor (when you right-click, Create Blueprint Class… becomes available). Without the Blueprintable keyword, the Create Blueprint Class… option will not be available for your UCLASS, even if you can find it from within the Class Viewer and right-click on it. The Create Blueprint Class… option is only available if you specify Blueprintable in your UCLASS macro definition. If you do not specify Blueprintable, then the resultant UCLASS will not be Blueprintable. BlueprintType: Using this keyword implies that the UCLASS is usable as a variable from another Blueprint. You can create Blueprint variables from the Variables group in the left-hand panel of any Blueprint's EventGraph. If NotBlueprintType is specified, then you cannot use this Blueprint variable type as a variable in a Blueprints diagram. Right-clicking the UCLASS name in the Class Viewer will not show Create Blueprint Class… in its context menu.
Any UCLASS that has BlueprintType specified can be added as a variable to your Blueprint class diagram's list of variables. You may be unsure whether to declare your C++ class as a UCLASS or not. It is really up to you. If you like smart pointers, you may find that UCLASSes not only make for safer code, but also make the entire code base more coherent and more consistent. See also To add additional programmable UPROPERTY to the Blueprints diagrams, see the section on Creating a user-editable UPROPERTY, further in the article. Creating a user-editable UPROPERTY Each UCLASS that you declare can have any number of UPROPERTY declared within it. Each UPROPERTY can be a visually editable field, or some Blueprints-accessible data member of the UCLASS. There are a number of qualifiers that we can add to each UPROPERTY, which change the way it behaves from within the UE4 Editor, such as EditAnywhere (screens from which the UPROPERTY can be changed), and BlueprintReadWrite (specifying that Blueprints can both read and write the variable at any time in addition to the C++ code being allowed to do so). Getting ready To use this recipe, you should have a C++ project into which you can add C++ code. In addition, you should have completed the preceding recipe, Making a UCLASS – Deriving from UObject. How to do it... Add members to your UCLASS declaration as follows:

UCLASS( Blueprintable )
class CHAPTER2_API UUserProfile : public UObject
{
GENERATED_BODY()
public:
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Stats)
float Armor;
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Stats)
float HpMax;
};

Create a Blueprint of your UObject class derivative, and open the Blueprint in the UE4 editor by double-clicking it from the object browser. You can now specify values in Blueprints for the default values of these new UPROPERTY fields.
Specify per-instance values by dragging and dropping a few instances of the Blueprint class into your level, and editing the values on the placed objects (by double-clicking on them). How it works… The parameters passed to the UPROPERTY() macro specify a couple of important pieces of information regarding the variable. In the preceding example, we specified the following: EditAnywhere: This means that the property can be edited either directly from the Blueprint, or on each instance of the UClass object as placed in the game level. Contrast this with the following: EditDefaultsOnly: The Blueprint's value is editable, but it is not editable on a per-instance basis. EditInstanceOnly: This would allow editing of the property in the game-level instances of the UClass object, and not on the base Blueprint itself. BlueprintReadWrite: This indicates that the property is both readable and writeable from Blueprints diagrams. A UPROPERTY() with BlueprintReadWrite must be a public member; otherwise, compilation will fail. Contrast this with the following: BlueprintReadOnly: The property must be set from C++ and cannot be changed from Blueprints. Category: You should always specify a Category for your UPROPERTY(). The Category determines which submenu the UPROPERTY() will appear under in the property editor. All UPROPERTY() specified under Category=Stats will appear in the same Stats area in the Blueprints editor. See also A complete UPROPERTY listing is located at https://docs.unrealengine.com/latest/INT/Programming/UnrealArchitecture/Reference/Properties/Specifiers/index.html. Accessing a UPROPERTY from Blueprints Accessing a UPROPERTY from Blueprints is fairly simple. The member variable that you want to access from your Blueprints diagram must be exposed as a UPROPERTY.
You must qualify the UPROPERTY in your macro declaration as being either BlueprintReadOnly or BlueprintReadWrite, to specify whether you want the variable to be readable (only) from Blueprints, or also writeable from Blueprints. You can also use the special value BlueprintDefaultsOnly to indicate that you only want the default value (before the game starts) to be editable from the Blueprints editor. BlueprintDefaultsOnly indicates that the data member cannot be edited from Blueprints at runtime. How to do it... Create some UObject-derivative class, specifying both Blueprintable and BlueprintType, such as the following:

UCLASS( Blueprintable, BlueprintType )
class CHAPTER2_API UUserProfile : public UObject
{
GENERATED_BODY()
public:
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Stats)
FString Name;
};

The BlueprintType declaration in the UCLASS macro is required to use the UCLASS as a type within a Blueprints diagram. Within the UE4 Editor, derive a Blueprint class from the C++ class, as shown in Creating a Blueprint from your custom UCLASS. Create an instance of your Blueprint-derived class in the UE4 Editor by dragging an instance from the Content Browser into the main game world area. It should appear as a round white sphere in the game world unless you've specified a model mesh for it. In a Blueprints diagram that allows function calls (such as the Level Blueprint, accessible via Blueprints | Open Level Blueprint), try printing the Name property of your placed instance, as seen in the following screenshot: Navigating Blueprints diagrams is easy. Right-Click + Drag to pan a Blueprints diagram; Alt + Right-Click + Drag to zoom. How it works… UPROPERTYs automatically have Get/Set methods written for them in UE4 classes. They must not be declared as private variables within the UCLASS, however.
If they are not declared as public or protected members, you will get a compiler error of the form: >> BlueprintReadWrite should not be used on private members. Specifying a UCLASS as the type of a UPROPERTY So, you've constructed some custom UCLASS intended for use inside of UE4. But how do you instantiate them? Objects in UE4 are reference-counted and memory-managed, so you should not allocate them directly using the C++ keyword new. Instead, you'll have to use a function called ConstructObject to instantiate your UObject derivative. ConstructObject doesn't just take the C++ class of the object you are creating, it also requires a Blueprint class derivative of the C++ class (a UClass* reference). A UClass* reference is just a pointer to a Blueprint. How do we instantiate an instance of a particular Blueprint from C++ code? C++ code does not, and should not, know concrete UCLASS names, since these names are created and edited in the UE4 Editor, which you can only access after compilation. We need a way to somehow hand back the Blueprint class name to instantiate to the C++ code. The way we do this is by having the UE4 programmer select the UClass that the C++ code is to use from a simple dropdown menu listing all the Blueprints available (derived from a particular C++ class) inside the UE4 editor. To do this, we simply have to provide a user-editable UPROPERTY with a TSubclassOf<C++ClassName>-typed variable. Alternatively, you can use FStringClassReference to achieve the same objective. This makes selecting the UCLASS in the C++ code exactly like selecting a Texture to use. UCLASSes should be considered resources to the C++ code, and their names should never be hard-coded into the code base. Getting ready In your UE4 code, you're often going to need to refer to different UCLASSes in the project. For example, say you need to know the UCLASS of the player object so that you can use SpawnObject in your code on it.
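The principle behind TSubclassOf (the code holds a class reference chosen elsewhere, rather than a hard-coded class name) can be sketched in plain Python. All names below are illustrative stand-ins, not UE4 API:

```python
# Illustrative stand-ins, not UE4 API.

class Pawn:
    """Stand-in for a C++ base class the code does know about."""

class WarriorBlueprint(Pawn):
    """Stand-in for a Blueprint subclass created later, in the editor."""

class MageBlueprint(Pawn):
    """Another editor-made subclass the C++ code never names."""

def spawn(uclass_of_player):
    # Like a TSubclassOf<Pawn> property: accept any subclass of the
    # known base, without hard-coding which one, and instantiate it.
    assert issubclass(uclass_of_player, Pawn)
    return uclass_of_player()

# The "dropdown selection" a designer would make in the editor:
chosen_class = WarriorBlueprint
player = spawn(chosen_class)
assert isinstance(player, WarriorBlueprint)
assert isinstance(spawn(MageBlueprint), Pawn)
```

The spawn function compiles, so to speak, without ever naming WarriorBlueprint; the concrete class arrives as data, which is exactly the role the dropdown-filled UPROPERTY plays in UE4.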
Specifying a UCLASS from C++ code is extremely awkward, because the C++ code is not supposed to know about the concrete instances of the derived UCLASSes that were created in the Blueprints editor at all. Just as we don't want to bake specific asset names into the C++ code, we don't want to hard-code derived Blueprint class names into the C++ code. So, we use a C++ variable (for example, UClassOfPlayer), and select its value from a Blueprints dialog in the UE4 editor. You can do so using a TSubclassOf member or an FStringClassReference member, as shown in the following screenshot: How to do it... Navigate to the C++ class that you'd like to add the UCLASS reference member to. For example, decking out a class derivative with the UCLASS of the player is fairly easy. From inside a UCLASS, use code of the following form to declare a UPROPERTY that allows selection of a UClass (Blueprint class) that derives from UObject in the hierarchy:

UCLASS()
class CHAPTER2_API UUserProfile : public UObject
{
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Unit)
TSubclassOf<UObject> UClassOfPlayer; // Displays any UClasses
// deriving from UObject in a dropdown menu in Blueprints

// Displays string names of UCLASSes that derive from
// the GameMode C++ base class
UPROPERTY( EditAnywhere, meta=(MetaClass="GameMode"), Category = Unit )
FStringClassReference UClassGameMode;
};

Blueprint the C++ class, and then open that Blueprint. Click on the drop-down menu beside your UClassOfPlayer menu. Select the appropriate UClassOfPlayer member from the drop-down menu of the listed UClasses. How it works… TSubclassOf The TSubclassOf< > member will allow you to specify a UClass name using a drop-down menu inside of the UE4 editor when editing any Blueprints that have TSubclassOf< > members. FStringClassReference The MetaClass tag refers to the base C++ class from which you expect the UClassName to derive. This limits the drop-down menu's contents to only the Blueprints derived from that C++ class.
You can leave the MetaClass tag out if you wish to display all the Blueprints in the project. Creating a Blueprint from your custom UCLASS Blueprinting is just the process of deriving a Blueprint class for your C++ object. Creating Blueprint-derived classes from your UE4 objects allows you to edit the custom UPROPERTY visually inside the editor. This avoids hardcoding any resources into your C++ code. In addition, in order for your C++ class to be placeable within the level, it must be Blueprinted first. But this is only possible if the C++ class underlying the Blueprint is an Actor class-derivative. There is a way to load resources (like textures) using FStringAssetReferences and StaticLoadObject. These pathways to loading resources (by hardcoding path strings into your C++ code) are generally discouraged, however. Providing an editable value in a UPROPERTY(), and loading from a proper concretely typed asset reference is a much better practice. Getting ready You need to have a constructed UCLASS that you'd like to derive a Blueprint class from (see the section on Making a UCLASS – Deriving from UObject given earlier in this article) in order to follow this recipe. You must have also marked your UCLASS as Blueprintable in the UCLASS macro for Blueprinting to be possible inside the engine. Any UObject-derived class with the meta keyword Blueprintable in the UCLASS macro declaration will be Blueprintable. How to do it… To Blueprint your UserProfile class, first ensure that UCLASS has the Blueprintable tag in the UCLASS macro. This should look as follows: UCLASS( Blueprintable ) class CHAPTER2_API UUserProfile : public UObject Compile and run your code. Find the UserProfile C++ class in the Class Viewer (Window | Developer Tools | Class Viewer). Since the previously created UCLASS does not derive from Actor, to find your custom UCLASS, you must turn off Filters | Actors Only in the Class Viewer (which is checked by default). 
Turn off the Actors Only check mark to display all the classes in the Class Viewer. If you don't do this, then your custom C++ class may not show up! Keep in mind that you can use the small search box inside the Class Viewer to easily find the UserProfile class by starting to type it in. Find your UserProfile class in the Class Viewer, right-click on it, and create a Blueprint from it by selecting Create Blueprint… Name your Blueprint. Some prefer to prefix the Blueprint class name with BP_. You may choose to follow this convention or not, just be sure to be consistent. Double-click on your new Blueprint as it appears in the Content Browser, and take a look at it. You will be able to edit the Name and Email fields for each UserProfile Blueprint instance you create. How it works… Any C++ class you create that has the Blueprintable tag in its UCLASS macro can be Blueprinted within the UE4 editor. A Blueprint allows you to customize properties on the C++ class in the visual GUI interface of UE4. Summary The UE4 code is, typically, very easy to write and manage once you know the patterns. The code we write to derive from another UCLASS, or to create a UPROPERTY or UFUNCTION, is very consistent. This article provided recipes for common UE4 coding tasks revolving around basic UCLASS derivation, property and reference declaration, construction, destruction, and general functionality.

Packt
25 May 2011
10 min read

Blender 2.5: Detailed Render of the Earth from Space

Blender 2.5 HOTSHOT Challenging and fun projects that will push your Blender skills to the limit Our purpose is to create a very detailed view of the earth from space. By detailed, we mean that it includes land, oceans, and clouds, and not only the color and specular reflection, but also the roughness they seem to have when seen from space. For this project, we are going to perform some work with textures and get them properly set up for our needs (and also for Blender's way of working). What Does It Do? We will create a nice image of the earth resembling the beautiful pictures that are taken from orbit, showing the sun rising over the rim of the planet. For this, we will need to work carefully with some textures, set up a basic scene, and create a fairly complex setup of nodes for compositing the final result. In our final image, we will get very nice effects, such as the volumetric effect of the atmosphere that we can see around its rim, the strong highlight of the sun when rising over the rim of the earth, and the very calm, bluish look of the dark part of the earth when lit by the moon. Why Is It Awesome? With this project, we are going to understand how important it is to have good textures to work with. Having the right textures for the job saves lots of time when producing a high-quality rendered image. Not only are we going to work with some very good textures that are freely available on the Internet, but we are also going to perform some hand tweaking to get them tuned exactly as we need them. This way we can also learn how much time can be saved by just doing some preprocessing on the textures to create finalized maps that will be fed directly to the material, without having to resort to complex tricks that would only cause us headaches. One of the nicest aspects of this project is that we are going to see how far we can take a very simple scene by using the compositor in Blender.
We are definitely going to learn some useful tricks for compositing. Your Hotshot Objectives This project will be tackled in five parts: Preprocessing the textures Object setup Lighting setup Compositing preparation Compositing Mission Checklist The very key for the success of our project is getting the right set of quality images at a sufficiently high resolution. Let's go to www.archive.org and search for www.oera.net/How2.htm on the 'wayback machine'. Choose the snapshot from the Apr 18, 2008 link. Click on the image titled Texture maps of the Earth and Planets. Once there, let's download these images: Earth texture natural colors Earth clouds Earth elevation/bump Earth water/land mask Remember to save the high-resolution version of the images, and put them in the tex folder, inside the project's main folder. We will also need to use Gimp to perform the preprocessing of the textures, so let's make sure to have it installed. We'll be working with version 2.6. Preprocessing the Textures The textures we downloaded are quite good, both in resolution and in the way they clearly separate each aspect of the shading of the earth. There is a catch though—using the clouds, elevation, and water/land textures as they are will cause us a lot of headache inside Blender. So let's perform some better basic preprocessing to get finalized and separated maps for each channel of the shader that will be created. Engage Thrusters For each one of the textures that we're going to work on, let's make sure to get the previous one closed to avoid mixing the wrong textures. Clouds Map Drag the EarthClouds_2500x1250.jpg image from the tex folder into the empty window of Gimp to get it loaded. Now locate the Layers window and right-click on the thumbnail of the Background layer, and select the entry labeled Add Layer Mask... from the menu. In the dialog box, select the Grayscale copy of layer option. Once the mask is added to the layer, the black part of the texture should look transparent. 
If we take a look at the image after adding the mask, we'll notice the clouds seem to have too much transparency. To solve this, we will perform some adjustment directly on the mask of the layer. Go to the Layers window and click on the thumbnail of the mask (the one to the right-hand side) to make it active (its border should become white). Then go to the main window (the one containing the image) and go to Colors | Curves.... In the Adjust Color Curves dialog, add two control points and get the curve shown in the next screenshot: The purpose of this curve is to get the light gray pixels of the mask to become lighter and the dark ones to get darker; the strong slope between the two control points will cause the border of the mask to be sharper. Make sure that the Value channel is selected and click on OK. Now let's take a look at the image and see how strong the contrast of the image is and how well defined the clouds are now. Finally, let's go to Image| Mode| RGB to set the internal data format for the image to a safe format (thus avoiding the risk of having Blender confused by it). Now we only need to go to File| Save A Copy... and save it as EarthClouds.png in the tex folder of the project. In the dialogs asking for confirmation, make sure to tell Gimp to apply the layer mask (click on Export in the first dialog). For the settings of the PNG file, we can use the default values. Let's close the current image in Gimp and get the main window empty in order to start working on the next texture. Specular Map Let's start by dragging the image named EarthMask_2500x1250.jpg onto the main window of Gimp to get it open. Then drag the image EarthClouds_2500x1250.jpg over the previous one to get it added as a separate layer in Gimp. Now, we need to make sure that the images are correctly aligned. To do this, let's go to View| Zoom| 4:1 (400%), to be able to move the layer with pixel precision easily. 
Now go to the bottom right-hand side corner of the window and click-and-drag over the four-arrows icon until one of the corners of the image is shown in the viewport. Once we are looking at the right place, let's go to the Toolbox and activate the Move tool. Finally, we just need to drag the clouds layer so that its corner exactly matches the corner of the water/land image. Then let's switch to another zoom level by going to View| Zoom| 1:4 (25%). Now let's go to the Layers window, select the EarthClouds layer, and set its blending mode to Multiply (Mode drop-down, above the layers list). Now we just need to go to the main window and go to Colors| Invert. Finally, let's switch the image to RGB mode by going to Image| Mode| RGB and we are done with the processing. Remember to save the image as EarthSpecMap.jpg in the tex folder of the project and close it in Gimp. The purpose of creating this specular map is to correctly mix the specularity of the ocean (full) with that of the clouds above the ocean (null). This way, we get correct specularity, both on the ocean and on the clouds. If we just used the water/land mask to control specularity, then the clouds above the ocean would have specular reflection, which is wrong. Bump Map The bump map controls the roughness of the material; this one is very important as it adds a lot of detail to the final render without having to create actual geometry to represent it. First, drag the EarthElevation_2500x1250.jpg to the main window of Gimp to get it open. Then let's drag the EarthClouds_2500x1250.jpg image over the previous one, so that it gets loaded as a layer above the first one. Now zoom in by going to View| Zoom| 4:1 (400%). Drag the image so that you are able to see one of its corners and use the move tool to get the clouds layer exactly matching the elevation layer. Then switch back to a wider view by going to View| Zoom| 1:4 (25%). Now it's time to add a mask to the clouds layer.
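The per-pixel arithmetic behind this multiply-then-invert step can be sketched as follows. This Python sketch rests on stated assumptions: 8-bit channels, and a water/land mask that is white (255) over water; the actual polarity of the downloaded mask should be checked in Gimp.

```python
# Sketch of Gimp's Multiply blend followed by Colors | Invert,
# assuming 8-bit channel values (0-255) and a mask that is white
# over water.

def multiply(base, top):
    # Gimp's Multiply mode for 8-bit values.
    return base * top // 255

def invert(value):
    return 255 - value

def spec_value(water_mask, clouds):
    # The processing applied in the recipe: multiply the clouds layer
    # over the mask, then invert the result.
    return invert(multiply(water_mask, clouds))

# Clear ocean: full specular reflection.
assert spec_value(255, 0) == 255
# Thick cloud over the ocean: no specular reflection.
assert spec_value(255, 255) == 0
```

This matches the goal stated above: the ocean keeps full specularity exactly where no cloud covers it.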
Right-click on the clouds layer and select the Add Layer Mask... entry from the menu. Then select the Grayscale copy of layer option in the dialog box and click Add. What we have thus far is a map that defines how intense the roughness of the surface at each point will be. But there's a problem: the clouds are as bright as or even brighter than the Andes and the Himalayas, which means the render process will distort them quite a lot. Since we know that the intensity of the roughness on the clouds must be less, let's perform another step to get the map corrected accordingly. Let's select the left thumbnail of the clouds layer (the color channel of the layer), then go to the main window and open the color levels using the Levels tool by going to Colors| Levels.... In the Output Levels part of the dialog box, let's change the value 255 (on the right-hand side) to 66 and then click on OK. Now we have a map that clearly gives a stronger value to the highest mounts on earth than to the clouds, which is exactly what we needed. Finally, we just need to change the image mode to RGB (Image| Mode| RGB) and save it as EarthBumpMap.jpg in the tex folder of the project. Notice that we are mixing the bump maps of the clouds and the mountains. The reason for this is that working with separate bump maps would get us into a very tricky situation when working inside Blender; definitely, working with a single bump map is way easier than trying to mix two or more. Now we can close Gimp, since we will work exclusively within Blender from now on. Objective Complete - Mini Debriefing This part of the project was just a preparation of the textures. We must create these new textures for three reasons: To get the clouds' texture to have a proper alpha channel; this will save us trouble when working with it in Blender. To control the spec map properly in the regions where there are clouds, as the clouds must not have specular reflection.
To create a single, unified bump map for the whole planet. This will save us lots of trouble when controlling the Normal channel of the material in Blender. Notice that we are using the term "bump map" to refer to a texture that will be used to control the "normal" channel of the material. The reason to not call it "normal map" is because a normal map is a special kind of texture that isn't coded in grayscale, like our current texture.
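The Output Levels adjustment used on the clouds layer can be expressed as simple arithmetic. The Python sketch below assumes Gimp remaps the 0-255 input range linearly onto the chosen 0-66 output range:

```python
# Assumed linear remapping performed by Gimp's Output Levels slider.

def output_levels(value, out_low=0, out_high=66):
    # Map an 8-bit input value onto the out_low..out_high range.
    return out_low + value * (out_high - out_low) // 255

assert output_levels(0) == 0      # no cloud: no bump
assert output_levels(255) == 66   # brightest cloud: capped at 66
# Clouds can therefore never out-bump the 255-valued mountain peaks.
assert output_levels(255) < 255
```

This is why, after the adjustment, the Himalayas (still at full value) clearly dominate the clouds in the combined bump map.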

Packt
25 Mar 2011
10 min read

Flash Game Development: Creation of a Complete Tetris Game

Tetris features shapes called tetrominoes, geometric shapes composed of four square blocks connected orthogonally, that fall from the top of the playing field. Once a tetromino touches the ground, it lands and cannot be moved anymore, becoming part of the ground itself, and a new tetromino falls from the top of the game field, usually a 10x20-tile vertical rectangle. The player can move the falling tetromino horizontally and rotate it by 90 degrees, trying to create a horizontal line of blocks. When a line is created, it disappears and any block above the deleted line falls down. If the stacked tetrominoes reach the top of the game field, it's game over. Defining game design This time I won't talk about the game design itself, since Tetris is a well known game and as you read this article you should be used to dealing with game design. By the way, there is something really important about this game you need to know before you start reading this article. You won't draw anything in the Flash IDE. That is, you won't manually draw tetrominoes, the game field, or any other graphic assets. Everything will be generated on the fly using AS3 drawing methods. Tetris is the best game for learning how to draw with AS3 as it only features blocks, blocks, and only blocks. Moreover, although the game won't include new programming features, its principles make Tetris the hardest game of the entire book. Survive Tetris and you will have the skills to create the next games focusing more on new features and techniques rather than on programming logic. Importing classes and declaring first variables The first thing we need to do, as usual, is set up the project and define the main class and function, as well as preparing the game field. Create a new file (File | New), then from the New Document window select ActionScript 3.0.
Set its properties as width to 400 px, height to 480 px, background color to #333333 (a dark gray), and frame rate to 30 (quite useless anyway since there aren't animations, but you can add an animated background on your own). Also, define the Document Class as Main and save the file as tetris.fla. Without closing tetris.fla, create a new file and from the New Document window select ActionScript 3.0 Class. Save this file as Main.as in the same path where you saved tetris.fla. Then write:

package {
import flash.display.Sprite;
import flash.utils.Timer;
import flash.events.TimerEvent;
import flash.events.KeyboardEvent;
public class Main extends Sprite {
private const TS:uint=24;
private var fieldArray:Array;
private var fieldSprite:Sprite;
public function Main() {
// tetris!!
}
}
}

We already know we have to interact with the keyboard to move, drop, and rotate tetrominoes and we have to deal with timers to manage falling delay, so I already imported all needed libraries. Then, there are some declarations to do: private const TS:uint=24; TS is the size, in pixels, of the tiles representing the game field. It's a constant as it won't change its value during the game, and its value is 24. With 20 rows of tiles, the height of the whole game field will be 24x20 = 480 pixels, as tall as the height of our movie. private var fieldArray:Array; fieldArray is the array that will numerically represent the game field. private var fieldSprite:Sprite; fieldSprite is the DisplayObject that will graphically render the game field. Let's use it to add some graphics. Drawing game field background Nobody wants to see an empty black field, so we are going to add some graphics. As said, during the making of this game we won't use any drawn Movie Clips, so every graphic asset will be generated by pure ActionScript. The idea: Draw a set of squares to represent the game field.
The development: Add this line to the Main function:

public function Main() {
generateField();
}

then write the generateField function this way:

private function generateField():void {
fieldArray = new Array();
fieldSprite=new Sprite();
addChild(fieldSprite);
fieldSprite.graphics.lineStyle(0,0x000000);
for (var i:uint=0; i<20; i++) {
fieldArray[i]=new Array();
for (var j:uint=0; j<10; j++) {
fieldArray[i][j]=0;
fieldSprite.graphics.beginFill(0x444444);
fieldSprite.graphics.drawRect(TS*j,TS*i,TS,TS);
fieldSprite.graphics.endFill();
}
}
}

Test the movie and you will see: The 20x10 game field has been rendered on the stage in a lighter gray. I could have used constants to define values like 20 and 10, but I am leaving it to you at the end of the article. Let's see what happened: fieldArray = new Array(); fieldSprite=new Sprite(); addChild(fieldSprite); These lines just construct the fieldArray array and the fieldSprite DisplayObject, then add it to the stage as you have already seen a million times. fieldSprite.graphics.lineStyle(0,0x000000); This line introduces a new world called the Graphics class. This class contains a set of methods that will allow you to draw vector shapes on Sprites. The lineStyle method sets a line style that you will use for your drawings. It accepts a big list of arguments, but at the moment we'll focus on the first two of them. The first argument is the thickness of the line, in points. I set it to 0 because I wanted it as thin as a hairline, but valid values are 0 to 255. The second argument is the hexadecimal color value of the line, in this case black. Hexadecimal uses sixteen distinct symbols to represent numbers from 0 to 15. Numbers from zero to nine are represented with 0-9 just like the decimal numeral system, while values from ten to fifteen are represented by the letters A-F. That's the way colors are represented in most common paint software and on the web. You can create hexadecimal numbers by preceding them with 0x.
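Since the same 0x notation exists in many languages, the relationship between the hex digits and the red, green, and blue channels can be checked with a quick Python sketch (the bit-shifting helpers here are for illustration only; they are not part of the game code):

```python
# 0x444444 packs three 8-bit channels: 0x44 red, 0x44 green, 0x44 blue.
color = 0x444444

red   = (color >> 16) & 0xFF   # top byte
green = (color >> 8) & 0xFF    # middle byte
blue  = color & 0xFF           # bottom byte

assert red == green == blue == 0x44 == 68  # an even, dark gray
assert 0xFF == 255   # F stands for fifteen, so 0xFF is 15*16 + 15
```

Equal values in all three channels are what make 0x444444 and 0x333333 shades of gray rather than colors.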
Also notice that the lineStyle method, like all Graphics class methods, isn't applied directly on the DisplayObject itself but as a method of its graphics property.

for (var i:uint=0; i<20; i++) {
    ...
}

The remaining lines are made by the classical couple of for loops initializing the fieldArray array in the same way you already initialized all other array-based games, and drawing the 200 (20x10) rectangles that will form the game field.

fieldSprite.graphics.beginFill(0x444444);

The beginFill method is similar to lineStyle as it sets the fill color that you will use for your drawings. It accepts two arguments, the color of the fill (a dark gray in this case) and the opacity (alpha). Since I did not specify the alpha, it takes the default value of 1 (full opacity).

fieldSprite.graphics.drawRect(TS*j,TS*i,TS,TS);

With a line and a fill style, we are ready to draw some squares with the drawRect method, which draws a rectangle. The four arguments represent respectively the x and y positions relative to the registration point of the parent DisplayObject (fieldSprite, which currently happens to be at 0,0 in this case), and the width and the height of the rectangle. All the values are to be intended in pixels.

fieldSprite.graphics.endFill();

The endFill method applies a fill to everything you drew after you called the beginFill method. This way we are drawing a square with a side of TS pixels at each for iteration. At the end of both loops, we'll have 200 squares on the stage, forming the game field.

Drawing a better game field background

Tetris background game fields are often represented as a checkerboard, so let's try to obtain the same result.

The idea: Once we have defined two different colors, we will paint even squares with one color, and odd squares with the other color.
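The alternating-color idea boils down to one modulo expression. Here is a language-agnostic sketch of it (in Python, on a smaller grid just for illustration), printing the same pattern the ActionScript version will paint:

```python
# (j%2 + i%2) % 2 alternates between 0 and 1 like a checkerboard:
# it flips on every column AND on every row.
rows, cols = 4, 8
for i in range(rows):
    line = ""
    for j in range(cols):
        line += "#" if (j % 2 + i % 2) % 2 else "."
    print(line)
# Prints:
# .#.#.#.#
# #.#.#.#.
# .#.#.#.#
# #.#.#.#.
```

The expression is equivalent to the perhaps more familiar (i + j) % 2; either one can index an array of two colors.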
The development: We have to modify the way the generateField function renders the background:

private function generateField():void {
    var colors:Array=new Array(0x444444,0x555555);
    fieldArray = new Array();
    fieldSprite=new Sprite();
    addChild(fieldSprite);
    fieldSprite.graphics.lineStyle(0,0x000000);
    for (var i:uint=0; i<20; i++) {
        fieldArray[i]=new Array();
        for (var j:uint=0; j<10; j++) {
            fieldArray[i][j]=0;
            fieldSprite.graphics.beginFill(colors[(j%2+i%2)%2]);
            fieldSprite.graphics.drawRect(TS*j,TS*i,TS,TS);
            fieldSprite.graphics.endFill();
        }
    }
}

We define an array of two colors and play with the modulo operator to fill the squares with alternating colors and make the game field look like a chessboard grid. The core of the script lies in this line:

fieldSprite.graphics.beginFill(colors[(j%2+i%2)%2]);

which plays with modulo to draw a checkerboard. Test the movie and you will see: Now the game field looks better.

Creating the tetrominoes

The concept behind the creation of representable tetrominoes is the hardest part of the making of this game. Unlike the previous games you made, such as Snake, which feature actors of the same width and height (in Snake the head is the same size as the tail), in Tetris every tetromino has its own width and height. Moreover, every tetromino but the square one is asymmetrical, so its size is going to change when the player rotates it. How can we manage a tile-based game with tiles of different widths and heights?

The idea: Since tetrominoes are made of four squares connected orthogonally (that is, forming right angles), we can split each tetromino into a set of tiles and include them in an array. The easiest way is to fit each tetromino into a 4x4 array; although most of them would fit in smaller arrays, it's good to have a standard size.
Something like this: Every tetromino has its own name based on the alphabet letter it resembles, and its own color, according to The Tetris Company (TTC), the company that currently owns the trademark of the game Tetris. Just for your information, TTC sues every Tetris clone whose name is somehow similar to "Tetris", so if you are going to create and market a Tetris clone, you should call it something like "Crazy Bricks" rather than "Tetriz". Anyway, following the previous picture, from left to right and from top to bottom, the "official" names and colors for the tetrominoes are:

I: cyan (0x00FFFF)
T: purple (0xAA00FF)
L: orange (0xFFA500)
J: blue (0x0000FF)
Z: red (0xFF0000)
S: green (0x00FF00)
O: yellow (0xFFFF00)

The development: First, add two new class level variables:

private const TS:uint=24;
private var fieldArray:Array;
private var fieldSprite:Sprite;
private var tetrominoes:Array = new Array();
private var colors:Array = new Array();

The tetrominoes array is the four-dimensional array containing all tetrominoes information, while the colors array will store their colors. Now add a new function call to the Main function:

public function Main() {
    generateField();
    initTetrominoes();
}

The initTetrominoes function will initialize the tetrominoes-related arrays.
private function initTetrominoes():void {
    // I
    tetrominoes[0]=[[[0,0,0,0],[1,1,1,1],[0,0,0,0],[0,0,0,0]],
        [[0,1,0,0],[0,1,0,0],[0,1,0,0],[0,1,0,0]]];
    colors[0]=0x00FFFF;
    // T
    tetrominoes[1]=[[[0,0,0,0],[1,1,1,0],[0,1,0,0],[0,0,0,0]],
        [[0,1,0,0],[1,1,0,0],[0,1,0,0],[0,0,0,0]],
        [[0,1,0,0],[1,1,1,0],[0,0,0,0],[0,0,0,0]],
        [[0,1,0,0],[0,1,1,0],[0,1,0,0],[0,0,0,0]]];
    colors[1]=0xAA00FF;
    // L
    tetrominoes[2]=[[[0,0,0,0],[1,1,1,0],[1,0,0,0],[0,0,0,0]],
        [[1,1,0,0],[0,1,0,0],[0,1,0,0],[0,0,0,0]],
        [[0,0,1,0],[1,1,1,0],[0,0,0,0],[0,0,0,0]],
        [[0,1,0,0],[0,1,0,0],[0,1,1,0],[0,0,0,0]]];
    colors[2]=0xFFA500;
    // J
    tetrominoes[3]=[[[1,0,0,0],[1,1,1,0],[0,0,0,0],[0,0,0,0]],
        [[0,1,1,0],[0,1,0,0],[0,1,0,0],[0,0,0,0]],
        [[0,0,0,0],[1,1,1,0],[0,0,1,0],[0,0,0,0]],
        [[0,1,0,0],[0,1,0,0],[1,1,0,0],[0,0,0,0]]];
    colors[3]=0x0000FF;
    // Z
    tetrominoes[4]=[[[0,0,0,0],[1,1,0,0],[0,1,1,0],[0,0,0,0]],
        [[0,0,1,0],[0,1,1,0],[0,1,0,0],[0,0,0,0]]];
    colors[4]=0xFF0000;
    // S
    tetrominoes[5]=[[[0,0,0,0],[0,1,1,0],[1,1,0,0],[0,0,0,0]],
        [[0,1,0,0],[0,1,1,0],[0,0,1,0],[0,0,0,0]]];
    colors[5]=0x00FF00;
    // O
    tetrominoes[6]=[[[0,1,1,0],[0,1,1,0],[0,0,0,0],[0,0,0,0]]];
    colors[6]=0xFFFF00;
}

The colors array is easy to understand: it's just an array with the hexadecimal value of each tetromino color. tetrominoes is a four-dimensional array. It's the first time you see such a complex array, but don't worry: it's no more difficult than the two-dimensional arrays you've been dealing with since the creation of Minesweeper. Tetrominoes are coded into the array this way:

tetrominoes[n] contains the arrays with all the information about the n-th tetromino. These arrays represent the various rotations, the four rows, and the four columns.
tetrominoes[n][m] contains the arrays with all the information about the n-th tetromino in the m-th rotation. These arrays represent the four rows and the four columns.
tetrominoes[n][m][o] contains the array with the four elements of the n-th tetromino in the m-th rotation in the o-th row.
tetrominoes[n][m][o][p] is the p-th element of the array representing the o-th row in the m-th rotation of the n-th tetromino. Such an element can be 0 if it's an empty space or 1 if it's part of the tetromino. There isn't much more to explain as it's just a series of data entries. Let's add our first tetromino to the field.
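The four levels of indexing are easier to grasp once you walk one piece through them. Here is a sketch of the same data layout in Python (only the I piece is included, just for illustration), with a small helper that prints one rotation as text:

```python
# tetrominoes[n][m][o][p] -> n-th piece, m-th rotation, o-th row, p-th cell.
tetrominoes = [
    # I piece: two rotations, each a 4x4 grid of 0/1 flags
    [
        [[0,0,0,0],[1,1,1,1],[0,0,0,0],[0,0,0,0]],   # horizontal
        [[0,1,0,0],[0,1,0,0],[0,1,0,0],[0,1,0,0]],   # vertical
    ],
]

def render(piece, rotation):
    """Print one rotation of a piece, '#' for filled tiles, '.' for empty."""
    for row in tetrominoes[piece][rotation]:
        print("".join("#" if cell else "." for cell in row))

render(0, 0)   # the horizontal I
render(0, 1)   # the vertical I
```

Whatever the rotation, each grid always contains exactly four 1s, because every tetromino is made of four tiles.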
The Unreal Engine
Packt
26 Aug 2013
8 min read
(For more resources related to this topic, see here.)

Sound cues versus sound wave data

There are two types of sound entries in UDK:

Sound cues
Sound wave data

The simplest difference between the two is that a Sound Wave Data is what we get when we import a sound file into the editor, while a Sound Cue takes one or more sound wave datas and manipulates or combines them using the fairly robust and powerful toolset that UDK gives us in its Sound Cue Editor. In terms of uses, sound wave datas are primarily only used as parts of sound cues. However, when placing ambient sounds, that is, sounds that are always playing in the background, sound wave datas and sound cues each cover different situations. Regardless, they both get represented in the level as Sound Actors, of which there are several types as shown in the following screenshot:

Types of sound actors

A key element of any well designed level is ambient sound effects. This requires placing sound actors into the world. Some of these actors use sound wave data and others use sound cues. There are strengths, weaknesses, and specific use cases for all of them, so we'll touch on those presently.

Using sound cues

There are two distinct types of sound actors that call for the use of sound cues specifically. The strength of using sound cues for ambient sounds is that the different sounds can be manipulated in a wider variety of ways. Generally, this isn't necessary as most ambient sounds are some looping sound used to add sound to things like torches, rippling streams, a subtle blowing wind, or other such environmental instances. The two types of sound actors that use sound cues are Ambient Sounds and Ambient Sound Movables as shown in the following screenshot:

Ambient sound

As the name suggests, this is a standard ambient sound. It stays exactly where you place it and cannot be moved.
These ambient sound actors are generally used for stationary sounds that need some level of randomization or some other form of specific control of multiple sound wave datas.

Ambient sound movable

Functionally very similar to the regular ambient sound, this variation can, as the name suggests, be moved. That means this sort of ambient sound actor should be used in a situation where an ambient sound would be used, but needs to be mobile. The main weakness of the two ambient sound actors that utilize sound cues is that each one you place in a level is identically set to the exact settings within the sound cue. Conversely, ambient sound actors utilizing sound wave datas can be set up on an instance by instance basis. What this means is explained with the help of an example. Let's say we have two fires in our game. One is a small torch, and the other is a roaring bonfire. If we feel that using the same sound for each is what we want to do, then we can place both ambient sound actors utilizing sound wave datas and adjust some settings within each actor to make sure that the bonfire is louder and/or lower pitched. If we wanted this type of variation using sound cues, we would have to make separate sound cues.

Using sound wave data

There are four types of ambient sound actors that utilize sound wave datas directly, as opposed to housed within sound cues. As previously mentioned, the purpose of using ambient sound actors that use sound wave data is to avoid having to create multiple sound cues with only minimally different contents for simple ambient sounds. This is most readily displayed by the fact that the most commonly used ambient sound actors that use sound wave data are called AmbientSoundSimple and AmbientSoundSimpleToggleable as shown in the following screenshot:

Ambient sound simple

Ambient sound simple is, as the name suggests, the simplest of ambient sound actors.
They are only used when we need one or more sound wave datas to just repeat on a loop over and over again. Fortunately, most ambient sounds in a level fit this description. In most cases, if we were to go through a level and do an ambient sound pass, all we would need to use are ambient sound simples.

Ambient sound non loop

Ambient sound non loop actors are pretty much the same, functionally, as ambient sound simples. The only difference is, as the name suggests, they don't loop. They will play whatever sound wave data(s) are set in the actor, then delay by a number of seconds that is also set within the actor, and then go through it again. This is useful when we want a sound to play somewhat intermittently, but not on a regular loop.

Ambient sound non looping toggleable

Ambient sound non looping toggleable actors are, for all intents and purposes, the same as the regular ambient sound non loop actors, but they are toggleable. This means, put simply, that they can be turned on and off at will using Kismet. This is obviously useful if we need one of these intermittent sounds to play only when certain things have happened first.

Ambient sound simple toggleable

Ambient sound simple toggleable actors are basically the same as a plain old, run of the mill ambient sound simple, with the difference being that, like the ambient sound non looping toggleable, they can be turned on and off using Kismet.

Playing sounds in Kismet

There are several different ways to play different kinds of sounds using Kismet. Firstly, if we are using a toggleable ambient sound actor, then we can simply use a toggle sequence, which can be found under New Action | Toggle. There is also a Play Sound sequence located in New Action | Sound | Play Sound. Both of these are relatively straightforward in terms of where to plug in the sound cue.
Playing sounds in Matinee If we need a sound to play as part of a Matinee sequence, the Matinee tool gives us the ability to trigger the sound in question. If we have a Matinee sequence that contains a Director track, then we need to simply right-click and select Add New Sound Track. From here, we just need to have the sound cue we want to use selected in the Content Browser, and then, with the Sound Track selected in the active Matinee window, we simply place the time marker where we want the sound to play and press Enter. This will place a keyframe that will trigger our sound to play, easy as pie. The Matinee tool dialog will look like the following screenshot: Matinee will only play one sound in a sound track at a time, so if we place multiple sounds and they overlap, they won't play simultaneously. Fortunately, we can have as many separate sound tracks as we need. So if we find ourselves setting up a Matinee and two or more sounds overlap in our sound track, we can just add a second one and move some of our sounds in to it. Now that we've gone over the different ways to directly play and use sound cues, let's look at how to make and manipulate the same sound cues using UDK's surprisingly robust Sound Cue Editor. Summary Now that we have a decent grasp of what kinds of sound control UDK offers us and how to manipulate sounds in the editor, we can set about bringing our game to audible life. A quick tip for placing ambient sounds: if you look at something that visually seems like it should be making a noise like a waterfall, a fire, a flickering light, or whatever else, then it probably should have an ambient sound of some sort placed right on it. And as always, what we've covered in this article is an overview of some of the bare bones basics required to get started exploring sounds and soundscapes in UDK. There are plenty of other actors, settings, and things that can be done. So, again, I recommend playing around with anything you can find. 
Experiment with everything in UDK and you'll learn all sorts of new and interesting things.
C# with NGUI
Packt
29 Dec 2014
23 min read
In this article by Charles Pearson, the author of Learning NGUI for Unity, we will talk about C# scripting with NGUI. We will learn how to handle events and interact with them through code. We'll use them to: Play tweens with effects through code Implement a localized tooltip system Localize labels through code Assign callback methods to events using both code and the Inspector view We'll learn many more useful C# tips throughout the book. Right now, let's start with events and their associated methods. (For more resources related to this topic, see here.) Events When scripting in C# with the NGUI plugin, some methods will often be used. For example, you will regularly need to know if an object is currently hovered upon, pressed, or clicked. Of course, you could code your own system—but NGUI handles that very well, and it's important to use it at its full potential in order to gain development time. Available methods When you create and attach a script to an object that has a collider on it (for example, a button or a 3D object), you can add the following useful methods within the script to catch events: OnHover(bool state): This method is called when the object is hovered or unhovered. The state bool gives the hover state; if state is true, the cursor just entered the object's collider. If state is false, the cursor has just left the collider's bounds. OnPress(bool state): This method works in the exact same way as the previous OnHover() method, except it is called when the object is pressed. It also works for touch-enabled devices. If you need to know which mouse button was used to press the object, use the UICamera.currentTouchID variable; if this int is equal to -1, it's a left-click. If it's equal to -2, it's a right-click. Finally, if it's equal to -3, it's a middle-click. 
OnClick(): This method is similar to OnPress(), except that it is exclusively called when the click is validated, meaning when an OnPress(true) event occurs followed by an OnPress(false) event. It works with mouse click and touch (tap). In order to handle double clicks, you can also use the OnDoubleClick() method, which works in the same way.

OnDrag(Vector2 delta): This method is called at each frame when the mouse or touch moves between the OnPress(true) and OnPress(false) events. The Vector2 delta argument gives you the object's movement since the last frame.

OnDrop(GameObject droppedObj): This method is called when an object is dropped on the GameObject on which this script is attached. The dropped GameObject is passed as the droppedObj parameter.

OnSelect(bool state): This method is called when the user clicks on the object; state is true when the object becomes selected and false when it is deselected (when another object or empty space is clicked on). It will not be called again until the selection state changes.

OnTooltip(bool state): This method is called when the cursor is over the object for more than the duration defined by the Tooltip Delay inspector parameter of UICamera. If the Sticky Tooltip option of UICamera is checked, the tooltip remains visible until the cursor moves outside the collider; otherwise, it disappears as soon as the cursor moves.

OnScroll(float delta): This method is called when the mouse's scroll wheel is moved while the object is hovered; the delta parameter gives you the amount and direction of the scroll.

If you attach your script on a 3D object to catch these events, make sure it is on a layer included in Event Mask of UICamera. Now that we've seen the available event methods, let's see how they are used in a simple example.
Example

To illustrate when these events occur and how to catch them, you can create a new EventTester.cs script with the following code:

void OnHover(bool state) {
    Debug.Log(this.name + " Hover: " + state);
}

void OnPress(bool state) {
    Debug.Log(this.name + " Pressed: " + state);
}

void OnClick() {
    Debug.Log(this.name + " Clicked");
}

void OnDrag(Vector2 delta) {
    Debug.Log(this.name + " Drag: " + delta);
}

void OnDrop(GameObject droppedObject) {
    Debug.Log(droppedObject.name + " dropped on " + this.name);
}

void OnSelect(bool state) {
    Debug.Log(this.name + " Selected: " + state);
}

void OnTooltip(bool state) {
    Debug.Log("Show " + this.name + "'s Tooltip: " + state);
}

void OnScroll(float delta) {
    Debug.Log("Scroll of " + delta + " on " + this.name);
}

The above highlighted lines are the event methods we discussed, implemented with their respective necessary arguments. Now attach our Event Tester component to any GameObject with a collider, like our Main | Buttons | Play button. Hit Unity's play button. From now on, events that occur on the object the script is attached to are tracked in the Console output:

I recommend that you keep the EventTester.cs script in a handy file directory as a reminder of the available event methods for the future. Indeed, for each event, you can simply replace the Debug.Log() lines with the instructions you need. Now we know how to catch events through code. Let's use them to display a tooltip!

Creating tooltips

Let's use the OnTooltip() event to show a tooltip for our buttons and different options, as shown in the following screenshot:

The tooltip object shown in the preceding screenshot, which we are going to create, is composed of four elements:

Tooltip: The tooltip container, with the Tooltip component attached.
Background: The background sprite that wraps around Label.
Border: A yellow border that wraps around Background.
Label: The label that displays the tooltip's text.
We will also make sure the tooltip is localized using NGUI methods. The tooltip object In order to create the tooltip object, we'll first create its visual elements (widgets), and then we'll attach the Tooltip component to it in order to define it as NGUI's tooltip. Widgets First, we need to create the tooltip object's visual elements: Select our UI Root GameObject in the Hierarchy view. Hit Alt + Shift + N to create a new empty child GameObject. Rename this new child from GameObject to Tooltip. Add the NGUI Panel (UIPanel) component to it. Set this new Depth of UIPanel to 10. In the preceding steps, we've created the tooltip container. It has UIPanel with a Depth value of 10 in order to make sure our tooltip will remain on top of other panels. Now, let's create the faintly transparent background sprite: With Tooltip selected, hit Alt + Shift + S to create a new child sprite. Rename this new child from Sprite to Background. Select our new Tooltip | Background GameObject, and configure UISprite, as follows: Perform the following steps: Make sure Atlas is set to Wooden Atlas. Set Sprite to the Window sprite. Make sure Type is set to Sliced. Change Color Tint to {R: 90, G: 70, B: 0, A: 180}. Set Pivot to top-left (left arrow + up arrow). Change Size to 500 x 85. Reset its Transform position to {0, 0, 0}. Ok, we can now easily add a fully opaque border with the following trick: With Tooltip | Background selected, hit Ctrl + D to duplicate it. Rename this new duplicate to Border. Select Tooltip | Border and configure its attached UI Sprite, as follows:   Perform the following steps: Disable the Fill Center option. Change Color Tint to {R: 255, G: 220, B: 0, A: 255}. Change the Depth value to 1. Set Anchors Type to Unified. Make sure the Execute parameter is set to OnUpdate. Drag Tooltip | Background in to the new Target field. By not filling the center of the Border sprite, we now have a yellow border around our background. 
We used anchors to make sure this border always wraps the background even during runtime—thanks to the Execute parameter set to OnUpdate. Right now, our Game and Hierarchy views should look like this:   Let's create the tooltip's label. With Tooltip selected, hit Alt + Shift + L to create a new label. For the new Label GameObject, set the following parameters for UILabel:   Set Font Type to NGUI, and Font to Arimo20 with a size of 40. Change Text to [FFCC00]This[FFFFFF] is a tooltip. Change Overflow to ResizeHeight. Set Effect to Outline, with an X and Y of 1 and black color. Set Pivot to top-left (left arrow + up arrow). Change X Size to 434. The height adjusts to the text amount. Set the Transform position to {33, -22, 0}. Ok, good. We now have a label that can display our tooltip's text. This label's height will adjust automatically as the text gets longer or shorter. Let's configure anchors to make sure the background always wraps around the label: Select our Tooltip | Background GameObject. Set Anchors Type to Unified. Drag Tooltip | Label in the new Target field. Set the Execute parameter to OnUpdate. Great! Now, if you edit our tooltip's text label to a very large text, you'll see that it adjusts automatically, as shown in the following screenshot:   UITooltip We can now add the UITooltip component to our tooltip object: Select our UI Root | Tooltip GameObject. Click the Add Component button in the Inspector view. Type tooltip with your keyboard to search for components. Select Tooltip and hit Enter or click on it with your mouse. Configure the newly attached UITooltip component, as follows: Drag UI Root | Tooltip | Label in the Text field. Drag UI Root | Tooltip | Background in the Background field. The tooltip object is ready! It is now defined as a tooltip for NGUI. Now, let's see how we can display it when needed using a few simple lines of code. Displaying the tooltip We must now show the tooltip when needed. 
In order to do that, we can use the OnTooltip() event, in which we request to display the tooltip with localized text: Select our three Main | Buttons | Exit, Options, and Play buttons. Click the Add Component button in the Inspector view. Type ShowTooltip with your keyboard. Hit Enter twice to create and attach the new ShowTooltip.cs script to them. Open this new ShowTooltip.cs script. First, we need to add this public key variable to define which text we want to display:

// The localization key of the text to display
public string key = "";

Ok, now add the following OnTooltip() method that retrieves the localized text and requests to show or hide the tooltip depending on the state bool:

// When the OnTooltip event is triggered on this object
void OnTooltip(bool state) {
    // Get the final localized text
    string finalText = Localization.Get(key);

    // If the tooltip must be removed...
    if(!state) {
        // ...Set the finalText to nothing
        finalText = "";
    }

    // Request the tooltip display
    UITooltip.ShowText(finalText);
}

Save the script. As you can see in the preceding code, the Localization.Get(string key) method returns the localized text of the corresponding key parameter that is passed. You can now use it to localize a label through code anytime! In order to hide the tooltip, we simply request UITooltip to show an empty tooltip. To use Localization.Get(string key), your label must not have a UILocalize component attached to it; otherwise, the value of UILocalize will overwrite anything you assign to UILabel. Ok, we have added the code to show our tooltip with localized text. Now, open the Localization.txt file, and add these localized strings:

// Tooltips
Play_Tooltip, "Launch the game!", "Lancer le jeu !"
Options_Tooltip, "Change language, nickname, subtitles...", "Changer la langue, le pseudo, les sous-titres..."
Exit_Tooltip, "Leaving us already?", "Vous nous quittez déjà ?"
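Conceptually, Localization.Get(key) is a table lookup keyed by the current language, with the columns of Localization.txt providing one string per language. Here is a rough sketch of that idea in Python; the parsing and dictionary layout are illustrative, not NGUI's actual implementation:

```python
import csv
import io

# One line of Localization.txt: key, English text, French text.
line = 'Play_Tooltip, "Launch the game!", "Lancer le jeu !"'
key, english, french = next(csv.reader(io.StringIO(line), skipinitialspace=True))

# The lookup Localization.Get(key) performs, conceptually:
# current language selects a column, the key selects a row.
table = {"English": {key: english}, "French": {key: french}}
current_language = "French"
print(table[current_language]["Play_Tooltip"])   # Lancer le jeu !
```

This also makes clear why a UILocalize component would conflict with code-driven lookups: both would end up writing the resolved string to the same UILabel.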
Now that our localized strings are added, we could manually configure the key parameter of our three buttons' Show Tooltip components to respectively display Play_Tooltip, Options_Tooltip, and Exit_Tooltip. But that would be a repetitive action, and if we want to add localized tooltips easily for future and existing objects, we should implement the following system: if the key parameter is empty, we'll try to get a localized text based on the GameObject's name. Let's do this now! Open our ShowTooltip.cs script, and add this Start() method:

// At start
void Start() {
    // If key parameter isn't defined in inspector...
    if(string.IsNullOrEmpty(key)) {
        // ...Set it now based on the GameObject's name
        key = name + "_Tooltip";
    }
}

Click on Unity's play button. That's it! When you leave your cursor on any of our three buttons, a localized tooltip appears: The preceding tooltip wraps around the displayed text perfectly, and we didn't have to manually configure the Show Tooltip components' key parameters! Actually, I have a feeling that the display delay is too long. Let's correct this: Select our UI Root | Camera GameObject. Set Tooltip Delay of UICamera to 0.3. That's better: our localized tooltip appears after 0.3 seconds of hovering.

Adding the remaining tooltips

We can now easily add tooltips for our Options page's elements. The tooltip works on any GameObject with a collider attached to it. Let's use a search by type to find them: In the Hierarchy view's search bar, type t:boxcollider Select Checkbox, Confirm, Input, List (both), Music, and SFX: Click on the Add Component button in the Inspector view. Type show with your keyboard to search the components. Hit Enter or click on the Show Tooltip component to attach it to them. For the objects with generic names, such as Input and List, we need to set their key parameter manually, as follows: Select the Checkbox GameObject, and set Key to Sound_Tooltip. Select the Input GameObject, and set Key to Nickname_Tooltip.
For the List for language selection, set Key to Language_Tooltip. For the List for subtitles selection, set Key to Subtitles_Tooltip. To know if the selected list is the language or subtitles list, look at Options of its UIPopup List: if it has the None option, then it's the subtitles selection. Finally, we need to add these localization strings in the Localization.txt file:

Sound_Tooltip, "Enable or disable game sound", "Activer ou désactiver le son du jeu"
Nickname_Tooltip, "Name used during the game", "Pseudo utilisé lors du jeu"
Language_Tooltip, "Game and user interface language", "Langue du jeu et de l'interface"
Subtitles_Tooltip, "Subtitles language", "Langue des sous-titres"
Confirm_Tooltip, "Confirm and return to main menu", "Confirmer et retourner au menu principal"
Music_Tooltip, "Game music volume", "Volume de la musique"
SFX_Tooltip, "Sound effects volume", "Volume des effets"

Hit Unity's play button. We now have localized tooltips for all our options! We now know how to easily use NGUI's tooltip system. It's time to talk about Tween methods.

Tweens

The tweens we have used until now were components we added to GameObjects in the scene. It is also possible to easily add tweens to GameObjects through code. You can see all available tweens by simply typing Tween inside any method in your favorite IDE. You will see a list of Tween classes thanks to auto-completion, as shown in the following screenshot:

The strong point of these classes is that they work in one line and don't have to be executed at each frame; you just have to call their Begin() method! Here, we will apply tweens on widgets, but keep in mind that they work in the exact same way with other GameObjects, since NGUI widgets are GameObjects.

Tween Scale

Previously, we've used the Tween Scale component to make our main window disappear when the Exit button is pressed. Let's do the same when the Play button is pressed, but this time we'll do it through code to understand how it's done.
DisappearOnClick Script

We will first create a new DisappearOnClick.cs script that will tween a target's scale to {0.01, 0.01, 0.01} when the GameObject it's attached to is clicked on: Select our UI Root | Main | Buttons | Play GameObject. Click the Add Component button in the Inspector view. Type DisappearOnClick with your keyboard. Hit Enter twice to create and add the new DisappearOnClick.cs script. Open this new DisappearOnClick.cs script. First, we must add this public target GameObject to define which object will be affected by the tween, and a duration float to define the speed:

// Declare the target we'll tween down to {0.01, 0.01, 0.01}
public GameObject target;
// Declare a float to configure the tween's duration
public float duration = 0.3f;

Ok, now, let's add the following OnClick() method, which creates a new tween towards {0.01, 0.01, 0.01} on our desired target using the duration variable:

// When this object is clicked
private void OnClick() {
    // Create a tween on the target
    TweenScale.Begin(target, duration, Vector3.one * 0.01f);
}

In the preceding code, we scale down the target towards 0.01f over the desired duration. Save the script. Good. Now, we simply have to assign our variables in the Inspector view: Go back to Unity and select our Play button GameObject. Drag our UI Root | Main object in the DisappearOnClick Target field. Great. Now, hit Unity's play button. When you click the menu's Play button, our main menu is scaled down to {0.01, 0.01, 0.01}, with the simple TweenScale.Begin() line! Now that we've seen how to make a basic tween, let's see how to add effects.

Tween effects

Right now, our tween is simple and linear. In order to add an effect to the tween, we first need to store it as UITweener, which is its parent class.
Replace the tween line of our OnClick() method with these lines to first store the tween and then set an effect: // Retrieve the new target's tween UITweener tween = TweenScale.Begin(target, duration, Vector3.one * 0.01f); // Set the new tween's effect method tween.method = UITweener.Method.EaseInOut; That's it. Our tween now has an EaseInOut effect. The following tween effect methods are also available: BounceIn: Bouncing effect at the start of the tween BounceOut: Bouncing effect at the end of the tween EaseIn: Smooth acceleration effect at the start of the tween EaseInOut: Smooth acceleration and deceleration EaseOut: Smooth deceleration effect at the end of the tween Linear: Simple linear tween without any effects Great. We now know how to add tween effects through code. Now, let's see how we can set event delegates through code. You can set the tween's ignoreTimeScale to true if you want it to always run at normal speed even if your Time.timeScale variable is different from 1. Event delegates Many NGUI components broadcast events, for which you can set an event delegate—also known as a callback method—executed when the event is triggered. We did it through the Inspector view by assigning the Notify and Method fields when buttons were clicked. For any type of tween, you can set a specific event delegate for when the tween is finished. We'll see how to do this through code. Before we continue, we must create our callback. Let's create a callback that loads a new scene. The callback Open our MenuManager.cs script, and add this static LoadGameScene() callback method: public static void LoadGameScene() { // Load the Game scene now Application.LoadLevel("Game"); } Save the script. The preceding code requests to load the Game scene. To ensure Unity finds our scenes at runtime, we'll need to create the Game scene and add both the Menu and Game scenes to the build settings: Navigate to File | Build Settings. Click on the Add Current button (don't close the window now). 
In Unity, navigate to File | New Scene. Navigate to File | Save Scene as… Save the scene as Game.unity. Click on the Add Current button of the Build Settings window and close it. Navigate to File | Open Scene and re-open our Menu.unity scene. Ok, now that both scenes have been added to the build settings, we are ready to link our callback to our event. Linking a callback to an event Now that our LoadGameScene() callback method is written, we must link it to our event. We have two solutions. First, we'll see how to assign it using code exclusively, and then we'll create a more flexible system using NGUI's Notify and Method fields. Code In order to set a callback for a specific event, a generic solution exists for all NGUI events you might encounter: the EventDelegate.Set() method. You can also add multiple callbacks to an event using EventDelegate.Add(). Add this line at the end of the OnClick() method of DisappearOnClick.cs: // Set the tween's onFinished event to our LoadGameScene callback EventDelegate.Set(tween.onFinished, MenuManager.LoadGameScene); Instead of the preceding line, we can also use the tween-specific SetOnFinished() convenience method to do this. We'll get the exact same result with fewer words: // Another way to assign our method to the onFinished event tween.SetOnFinished(MenuManager.LoadGameScene); Great. If you hit Unity's play button and click on our main menu's Play button, you'll see that our Game scene is loaded as soon as the tween has finished! It is possible to remove the link of an existing event delegate to a callback by calling EventDelegate.Remove(eventDelegate, callback);. Now, let's see how to link an event delegate to a callback using the Inspector view. 
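To round out the EventDelegate API mentioned above, here is a hedged sketch of how Set, Add, and Remove compare (the PlaySwooshSound method is hypothetical, used only for illustration; exact overloads may vary between NGUI versions):

```csharp
// Retrieve the tween as before
UITweener tween = TweenScale.Begin(target, duration, Vector3.one * 0.01f);

// Set() replaces any existing callbacks on the event with this one:
EventDelegate.Set(tween.onFinished, MenuManager.LoadGameScene);

// Add() stacks several callbacks on the same event instead:
EventDelegate.Add(tween.onFinished, MenuManager.LoadGameScene);
EventDelegate.Add(tween.onFinished, PlaySwooshSound); // hypothetical method

// A previously added callback can be detached again:
EventDelegate.Remove(tween.onFinished, PlaySwooshSound);
```

Use Set() when the tween should have exactly one outcome, and Add() when several independent scripts each need to react to the same onFinished event.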
Inspector Now that we have seen how to set event delegates through code, let's see how we can create a variable to let us choose which method to call within the Inspector view, like this:   The method to call when the target disappears can be set any time without editing the code The On Disappear variable shown in the preceding screenshot is of the type EventDelegate. We can declare it right now with the following line as a global variable for our DisappearOnClick.cs script: // Declare an event delegate variable to be set in Inspector public EventDelegate onDisappear; Now, let's change the OnClick() method's last line to make sure the tween's onFinished event calls the defined onDisappear callback: // Set the tween's onFinished event to the selected callback tween.SetOnFinished(onDisappear); Ok. Great. Save the script and go to Unity. Select our main menu's Play button: a new On Disappear field has appeared. Drag UI Root—which holds our MenuManager.cs script—in the Notify field. Now, try to select our MenuManager | LoadGameScene method. Surprisingly, it doesn't appear, and you can only select the script's Exit method… why is that? That is simply because our LoadGameScene() method is currently static. If we want it to be available in the Inspector view, we need to remove its static property: Open our MenuManager.cs script. Remove the static keyword from our LoadGameScene() method. Save the script and return to Unity. You can now select it in the drop-down list:   Great! We have set our callback through the Inspector view; the Game scene will be loaded when the menu disappears. Now that we have learned how to assign event delegates to callback methods through code and the Inspector view, let's see how to assign keyboard keys to user interface elements. Keyboard keys In this section, we'll see how to add keyboard control to our UI. First, we'll see how to bind keys to buttons, and then we'll add a navigation system using keyboard arrows. 
UIKey binding The UIKey Binding component assigns a specific key to the widget it's attached to. We'll use it now to assign the keyboard's Escape key to our menu's Exit button: Select our UI Root | Main | Buttons | Exit GameObject. Click the Add Component button in the Inspector view. Type key with your keyboard to search for components. Select Key Binding and hit Enter or click on it with your mouse. Let's see its available parameters. Parameters We've just added the following UIKey Binding component to our Exit button, as follows:   The newly attached UIKey Binding component has three parameters: Key Code: Which key would you like to bind to an action? Modifier: If you want a two-button combination. Select one of the four available modifiers: Shift, Control, Alt, or None. Action: Which action should we bind to this key? You can simulate a button click with PressAndClick, a selection with Select, or both with All. Ok, now, we'll configure it to see how it works. Configuration Simply set the Key Code field to Escape. Now, hit Unity's play button. When you hit the Escape key of your keyboard, it reacts as if the Exit button was pressed! We can now move on to see how to add keyboard and controller navigation to the UI. UIKey navigation The UIKey Navigation component helps us assign objects to select using the keyboard arrows or controller directional-pad. For most widgets, the automatic configuration is enough, but we'll need to use the override parameters in some cases to have the behavior we need. The nickname input field has neither the UIButton nor the UIButton Scale components attached to it. This means that there will be no feedback to show the user it's currently selected with the keyboard navigation, which is a problem. We can correct this right now. Select UI Root | Options | Nickname | Input, and then: Add the Button component (UIButton) to it. Add the Button Scale component (UIButton Scale) to it. Center the Pivot of its UISprite (middle bar + middle bar). 
Reset Center of Box Collider to {0, 0, 0}. The Nickname | Input GameObject should have an Inspector view like this:   Ok. We'll now add the Key Navigation component (UIKey Navigation) to most of the buttons in the scene. In order to do that, type t:uibutton in the Hierarchy view's search bar to display only GameObjects with the UIButton component attached to them:   Ok. With the preceding search filter, select the following GameObjects:   Now, with the preceding selection, follow these steps: Click the Add Component button in the Inspector view. Type key with your keyboard to search for components. Select Key Navigation and hit Enter or click on it with your mouse. We've added the UIKey Navigation component to our selection. Let's see its parameters. Parameters We've just added the following UIKey Navigation component to our objects:   The newly attached UIKey Navigation component has four parameter groups: Starts Selected: Is this widget selected by default at the start? Select on Click: Which widget should be selected when this widget is clicked on — or the Enter key/confirm button has been pressed? This option can be used to select a specific widget when a new page is displayed. Constraint: Use this to limit the navigation movement from this widget: None: The movement is free from this widget Vertical: From this widget, you can only go up or down Horizontal: From this widget, you can only move left or right Explicit: Only move to widgets specified in the Override Override: Use the Left, Right, Up, and Down fields to force the input to select the specified objects. If the Constraint parameter is set to Explicit, only widgets specified here can be selected. Otherwise, automatic configuration still works for fields left to None. Summary This article thus has given an introduction to how C# is used in Unity. 
Resources for Article: Further resources on this subject: Unity Networking – The Pong Game [article] Unit and Functional Tests [article] Components in Unity [article]
VR Build and Run

Packt
22 Feb 2016
25 min read
Yeah well, this is cool and everything, but where's my VR? I WANT MY VR! Hold on kid, we're getting there. In this article, we are going to set up a project that can be built and run with a virtual reality head-mounted display (HMD) and then talk more in depth about how the VR hardware technology really works. We will be discussing the following topics: The spectrum of the VR device integration software Installing and building a project for your VR device The details and defining terms for how the VR technology really works (For more resources related to this topic, see here.) VR device integration software Before jumping in, let's understand the possible ways to integrate our Unity project with virtual reality devices. In general, your Unity project must include a camera object that can render stereographic views, one for each eye on the VR headset. Software for the integration of applications with the VR hardware spans a spectrum, from built-in support and device-specific interfaces to the device-independent and platform-independent ones. Unity's built-in VR support Since Unity 5.1, support for the VR headsets is built right into Unity. At the time of writing this article, there is direct support for Oculus Rift and Samsung Gear VR (which is driven by the Oculus software). Support for other devices has been announced, including Sony PlayStation Morpheus. You can use a standard camera component, like the one attached to Main Camera and the standard character asset prefabs. When your project is built with Virtual Reality Supported enabled in Player Settings, it renders stereographic camera views and runs on an HMD. The device-specific SDK If a device is not directly supported in Unity, the device manufacturer will probably publish the Unity plugin package. An advantage of using the device-specific interface is that it can directly take advantage of the features of the underlying hardware. 
For example, Valve and Google have device-specific SDKs and Unity packages for the Vive and Cardboard respectively. If you're using one of these devices, you'll probably want to use such SDK and Unity packages. (At the time of writing this article, these devices are not a part of Unity's built-in VR support.) Even Oculus, supported directly in Unity 5.1, provides SDK utilities to augment that interface (see https://developer.oculus.com/documentation/game-engines/latest/concepts/unity-intro/). Device-specific software locks each build into the specific device. If that's a problem, you'll either need to do some clever coding, or take one of the following approaches instead. The OSVR project In January 2015, Razer Inc. led a group of industry leaders to announce the Open Source Virtual Reality (OSVR) platform (for more information on this, visit http://www.osvr.com/) with plans to develop open source hardware and software, including an SDK that works with multiple devices from multiple vendors. The open source middleware project provides device-independent SDKs (and Unity packages) so that you can write your code to a single interface without having to know which devices your users are using. With OSVR, you can build your Unity game for a specific operating system (such as Windows, Mac, or Linux) and then let the user configure the app (after they download it) for whatever hardware they're going to use. At the time of writing this article, the project is still in its early stage, is rapidly evolving, and is not yet ready to be covered in this article. However, I encourage you to follow its development. WebVR WebVR (for more information, visit http://webvr.info/) is a JavaScript API that is being built directly into major web browsers. It's like WebGL (the 2D and 3D graphics API for the web) with VR rendering and hardware support. Now that Unity 5 has introduced WebGL builds, I expect WebVR to surely follow, if not in Unity then from a third-party developer. 
As we know, browsers run on just about any platform. So, if you target your game to WebVR, you don't even need to know the user's operating system, let alone which VR hardware they're using! That's the idea anyway. New technologies, such as the upcoming WebAssembly, which is a new binary format for the Web, will help to squeeze the best performance out of your hardware and make web-based VR viable. For WebVR libraries, check out the following: WebVR boilerplate: https://github.com/borismus/webvr-boilerplate GLAM: http://tparisi.github.io/glam/ glTF: http://gltf.gl/ MozVR (the Mozilla Firefox Nightly builds with VR): http://mozvr.com/downloads/ WebAssembly: https://github.com/WebAssembly/design/blob/master/FAQ.md 3D worlds There are a number of third-party 3D world platforms that provide multiuser social experiences in shared virtual spaces. You can chat with other players, move between rooms through portals, and even build complex interactions and games without having to be an expert. For examples of 3D virtual worlds, check out the following: VRChat: http://vrchat.net/ JanusVR: http://janusvr.com/ AltspaceVR: http://altvr.com/ High Fidelity: https://highfidelity.com/ For example, VRChat lets you develop 3D spaces and avatars in Unity, export them using their SDK, and load them into VRChat for you and others to share over the Internet in a real-time social VR experience. Creating the MeMyselfEye prefab To begin, we will create an object that will be a proxy for the user in the virtual environment. Let's create the object using the following steps: Open Unity and the project from the last article. Then, open the Diorama scene by navigating to File | Open Scene (or double-click on the scene object in Project panel, under Assets). From the main menu bar, navigate to GameObject | Create Empty. Rename the object MeMyselfEye (hey, this is VR!). Set its position up close into the scene, at Position (0, 1.4, -1.5). 
In the Hierarchy panel, drag the Main Camera object into MeMyselfEye so that it's a child object. With the Main Camera object selected, reset its transform values (in the Transform panel, in the upper right section, click on the gear icon and select Reset). The Game view should show that we're inside the scene. If you recall the Ethan experiment that we did earlier, I picked a Y-position of 1.4 so that we'll be at about the eye level with Ethan. Now, let's save this as a reusable prefabricated object, or prefab, in the Project panel, under Assets: In Project panel, under Assets, select the top-level Assets folder, right-click and navigate to Create | Folder. Rename the folder Prefabs. Drag the MeMyselfEye prefab into the Project panel, under Assets/Prefabs folder to create a prefab. Now, let's configure the project for your specific VR headset. Build for the Oculus Rift If you have a Rift, you've probably already downloaded Oculus Runtime, demo apps, and tons of awesome games. To develop for the Rift, you'll want to be sure that the Rift runs fine on the same machine on which you're using Unity. Unity has built-in support for the Oculus Rift. You just need to configure your Build Settings..., as follows: From main menu bar, navigate to File | Build Settings.... If the current scene is not listed under Scenes In Build, click on Add Current. Choose PC, Mac, & Linux Standalone from the Platform list on the left and click on Switch Platform. Choose your Target Platform OS from the Select list on the right (for example, Windows). Then, click on Player Settings... and go to the Inspector panel. Under Other Settings, check off the Virtual Reality Supported checkbox and click on Apply if the Changing editor vr device dialog box pops up. To test it out, make sure that the Rift is properly connected and turned on. Click on the game Play button at the top of the application in the center. Put on the headset, and IT SHOULD BE AWESOME! 
Within the Rift, you can look all around—left, right, up, down, and behind you. You can lean over and lean in. Using the keyboard, you can make Ethan walk, run, and jump just like we did earlier. Now, you can build your game as a separate executable app using the following steps. Most likely, you've done this before, at least for non-VR apps. It's pretty much the same: From the main menu bar, navigate to File | Build Settings.... Click on Build and set its name. I like to keep my builds in a subdirectory named Builds; create one if you want to. Click on Save. An executable will be created in your Builds folder. If you're on Windows, there may also be a rift_Data folder with built data. Run Diorama as you would do for any executable application—double-click on it. Choose the Windowed checkbox option so that when you're ready to quit, close the window with the standard Close icon in the upper right of your screen. Build for Google Cardboard Read this section if you are targeting Google Cardboard on Android and/or iOS. A good starting point is the Google Cardboard for Unity, Get Started guide (for more information, visit https://developers.google.com/cardboard/unity/get-started). The Android setup If you've never built for Android, you'll first need to download and install the Android SDK. Take a look at Unity manual for Android SDK Setup (http://docs.unity3d.com/Manual/android-sdksetup.html). You'll need to install the Android Developer Studio (or at least, the smaller SDK Tools) and other related tools, such as Java (JVM) and the USB drivers. It might be a good idea to first build, install, and run another Unity project without the Cardboard SDK to ensure that you have all the pieces in place. (A scene with just a cube would be fine.) Make sure that you know how to install and run it on your Android phone. The iOS setup A good starting point is Unity manual, Getting Started with iOS Development guide (http://docs.unity3d.com/Manual/iphone-GettingStarted.html). 
You can only perform iOS development from a Mac. You must have an Apple Developer Account approved (and paid for the standard annual membership fee) and set up. Also, you'll need to download and install a copy of the Xcode development tools (via the Mac App Store). It might be a good idea to first build, install, and run another Unity project without the Cardboard SDK to ensure that you have all the pieces in place. (A scene with just a cube would be fine.) Make sure that you know how to install and run it on your iPhone. Installing the Cardboard Unity package To set up our project to run on Google Cardboard, download the SDK from https://developers.google.com/cardboard/unity/download. Within your Unity project, import the CardboardSDKForUnity.unitypackage asset package, as follows: From the Assets main menu bar, navigate to Import Package | Custom Package.... Find and select the CardboardSDKForUnity.unitypackage file. Ensure that all the assets are checked, and click on Import. Explore the imported assets. In the Project panel, the Assets/Cardboard folder includes a bunch of useful stuff, including the CardboardMain prefab (which, in turn, contains a copy of CardboardHead, which contains the camera). There is also a set of useful scripts in the Cardboard/Scripts/ folder. Go check them out. Adding the camera Now, we'll put the Cardboard camera into MeMyselfEye, as follows: In the Project panel, find CardboardMain in the Assets/Cardboard/Prefabs folder. Drag it onto the MeMyselfEye object in the Hierarchy panel so that it's a child object. With CardboardMain selected in Hierarchy, look at the Inspector panel and ensure the Tap Is Trigger checkbox is checked. Select the Main Camera in the Hierarchy panel (inside MeMyselfEye) and disable it by unchecking the Enable checkbox on the upper left of its Inspector panel. Finally, apply these changes back onto the prefab, as follows: In the Hierarchy panel, select the MeMyselfEye object. 
Then, in its Inspector panel, next to Prefab, click on the Apply button. Save the scene. We have now replaced the default Main Camera with the VR one. The build settings If you know how to build and install from Unity to your mobile phone, doing it for Cardboard is pretty much the same: From the main menu bar, navigate to File | Build Settings.... If the current scene is not listed under Scenes In Build, click on Add Current. Choose Android or iOS from the Platform list on the left and click on Switch Platform. Then, click on Player Settings… in the Inspector panel. For Android, ensure that Other Settings | Virtual Reality Supported is unchecked, as that would be for GearVR (via the Oculus drivers), not Cardboard Android! Navigate to Other Settings | Bundle Identifier (PlayerSettings.bundleIdentifier) and enter a valid string, such as com.YourName.VRisAwesome. Under Resolution and Presentation | Default Orientation, set Landscape Left. Play Mode To test it out, you do not need your phone connected. Just press the game's Play button at the top of the application in the center to enter Play Mode. You will see the split-screen stereographic views in the Game view panel. While in Play Mode, you can simulate the head movement as if you were viewing it with the Cardboard headset. Use Alt + mouse-move to pan and tilt forward or backwards. Use Ctrl + mouse-move to tilt your head from side to side. You can also simulate magnetic clicks (we'll talk more about user input in a later article) with mouse clicks. Note that since this emulates running on a phone, without a keyboard, the keyboard keys that we used to move Ethan do not work now. Building and running in Android To build your game as a separate executable app, perform the following steps: From the main menu bar, navigate to File | Build & Run. Set the name of the build. I like to keep my builds in a subdirectory named Build; you can create one if you want. Click on Save. 
This will generate an Android executable .apk file, and then install the app onto your phone. The following screenshot shows the Diorama scene running on an Android phone with Cardboard (and Unity development monitor in the background). Building and running in iOS To build your game and run it on the iPhone, perform the following steps: Plug your phone into the computer via a USB cable/port. From the main menu bar, navigate to File | Build & Run. This allows you to create an Xcode project, launch Xcode, build your app inside Xcode, and then install the app onto your phone. Antique Stereograph (source https://www.pinterest.com/pin/493073859173951630/) The device-independent clicker At the time of writing this article, VR input has not yet been settled across all platforms. Input devices may or may not fit under Unity's own Input Manager and APIs. In fact, input for VR is a huge topic and deserves its own book. So here, we will keep it simple. As a tribute to the late Steve Jobs and a throwback to the origins of Apple Macintosh, I am going to limit these projects to mostly one-click inputs! Let's write a script for it, which checks for any click on the keyboard, mouse, or other managed device: In the Project panel, select the top-level Assets folder. Right-click and navigate to Create | Folder. Name it Scripts. With the Scripts folder selected, right-click and navigate to Create | C# Script. Name it Clicker. Double-click on the Clicker.cs file in the Projects panel to open it in the MonoDevelop editor. Now, edit the Script file, as follows: using UnityEngine; using System.Collections; public class Clicker { public bool clicked() { return Input.anyKeyDown; } } Save the file. 
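To show how other scripts might consume this helper, here is a hedged sketch (the ClickLogger component name and its behavior are illustrative, not from the article) of a MonoBehaviour polling a Clicker instance every frame:

```csharp
using UnityEngine;

// Illustrative consumer of our Clicker helper class.
public class ClickLogger : MonoBehaviour
{
    private Clicker clicker = new Clicker();

    void Update()
    {
        // clicked() returns true on the frame any managed input goes down
        if (clicker.clicked())
        {
            Debug.Log("User clicked!");
        }
    }
}
```

Because the click definition lives only in Clicker.cs, swapping in a new input device later means editing that one file, and every consumer like this one picks up the change for free.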
If you are developing for Google Cardboard, you can add a check for the Cardboard's integrated trigger when building for mobile devices, as follows: using UnityEngine; using System.Collections; public class Clicker { public bool clicked() { #if (UNITY_ANDROID || UNITY_IPHONE) return Cardboard.SDK.CardboardTriggered; #else return Input.anyKeyDown; #endif } } Any scripts that we write that require user clicks will use this Clicker file. The idea is that we've isolated the definition of a user click to a single script, and if we change or refine it, we only need to change this file. How virtual reality really works So, with your headset on, you experienced the diorama! It appeared 3D, it felt 3D, and maybe you even had a sense of actually being there inside the synthetic scene. I suspect that this isn't the first time you've experienced VR, but now that we've done it together, let's take a few minutes to talk about how it works. The strikingly obvious thing is, VR looks and feels really cool! But why? Immersion and presence are the two words used to describe the quality of a VR experience. The Holy Grail is to increase both to the point where it seems so real, you forget you're in a virtual world. Immersion is the result of emulating the sensory inputs that your body receives (visual, auditory, motor, and so on). This can be explained technically. Presence is the visceral feeling that you get being transported there—a deep emotional or intuitive feeling. You can say that immersion is the science of VR, and presence is the art. And that, my friend, is cool. A number of different technologies and techniques come together to make the VR experience work, which can be separated into two basic areas: 3D viewing Head-pose tracking In other words, displays and sensors, like those built into today's mobile devices, are a big reason why VR is possible and affordable today. Suppose the VR system knows exactly where your head is positioned at any given moment in time. 
Suppose that it can immediately render and display the 3D scene for this precise viewpoint stereoscopically. Then, wherever and whenever you moved, you'd see the virtual scene exactly as you should. You would have a nearly perfect visual VR experience. That's basically it. Ta-dah! Well, not so fast. Literally. Stereoscopic 3D viewing Split-screen stereography was discovered not long after the invention of photography, like the popular stereograph viewer from 1876 shown in the following picture (B.W. Kilburn & Co, Littleton, New Hampshire; see http://en.wikipedia.org/wiki/Benjamin_W._Kilburn). A stereo photograph has separate views for the left and right eyes, which are slightly offset to create parallax. This fools the brain into thinking that it's a truly three-dimensional view. The device contains separate lenses for each eye, which let you easily focus on the photo close up. Similarly, rendering these side-by-side stereo views is the first job of the VR-enabled camera in Unity. Let's say that you're wearing a VR headset and you're holding your head very still so that the image looks frozen. It still appears better than a simple stereograph. Why? The old-fashioned stereograph has two relatively small, rectangularly bound images. When your eye is focused on the center of the view, the 3D effect is convincing, but you will see the boundaries of the view. Move your eyeballs around (even with the head still), and any remaining sense of immersion is totally lost. You're just an observer on the outside peering into a diorama. Now, consider what an Oculus Rift screen looks like without the headset (see the following screenshot): The first thing that you will notice is that each eye has a barrel-shaped view. Why is that? The headset lens is a very wide-angle lens. So, when you look through it, you have a nice wide field of view. In fact, it is so wide (and tall) that it distorts the image (pincushion effect). 
The graphics software (SDK) does an inverse of that distortion (barrel distortion) so that it looks correct to us through the lenses. This is referred to as an ocular distortion correction. The result is an apparent field of view (FOV), that is wide enough to include a lot more of your peripheral vision. For example, the Oculus Rift DK2 has a FOV of about 100 degrees. Also of course, the view angle from each eye is slightly offset, comparable to the distance between your eyes, or the Inter Pupillary Distance (IPD). IPD is used to calculate the parallax and can vary from one person to the next. (Oculus Configuration Utility comes with a utility to measure and configure your IPD. Alternatively, you can ask your eye doctor for an accurate measurement.) It might be less obvious, but if you look closer at the VR screen, you see color separations, like you'd get from a color printer whose print head is not aligned properly. This is intentional. Light passing through a lens is refracted at different angles based on the wavelength of the light. Again, the rendering software does an inverse of the color separation so that it looks correct to us. This is referred to as a chromatic aberration correction. It helps make the image look really crisp. Resolution of the screen is also important to get a convincing view. If it's too low-res, you'll see the pixels, or what some refer to as a screen door effect. The pixel width and height of the display is an oft-quoted specification when comparing the HMD's, but the pixels per inch (ppi) value may be more important. Other innovations in display technology such as pixel smearing and foveated rendering (showing a higher-resolution detail exactly where the eyeball is looking) will also help reduce the screen door effect. When experiencing a 3D scene in VR, you must also consider the frames per second (FPS). If FPS is too slow, the animation will look choppy. 
Things that affect FPS include the graphics processor (GPU) performance and complexity of the Unity scene (number of polygons and lighting calculations), among other factors. This is compounded in VR because you need to draw the scene twice, once for each eye. Technology innovations, such as GPUs optimized for VR, frame interpolation and other techniques, will improve the frame rates. For us developers, performance-tuning techniques in Unity, such as those used by mobile game developers, can be applied in VR. These techniques and optics help make the 3D scene appear realistic. Sound is also very important—more important than many people realize. VR should be experienced while wearing stereo headphones. In fact, when the audio is done well but the graphics are pretty crappy, you can still have a great experience. We see this a lot in TV and cinema. The same holds true in VR. Binaural audio gives each ear its own stereo view of a sound source in such a way that your brain imagines its location in 3D space. No special listening devices are needed. Regular headphones will work (speakers will not). For example, put on your headphones and visit the Virtual Barber Shop at https://www.youtube.com/watch?v=IUDTlvagjJA. True 3D audio, such as VisiSonics (licensed by Oculus), provides an even more realistic spatial audio rendering, where sounds bounce off nearby walls and can be occluded by obstacles in the scene to enhance the first-person experience and realism. Lastly, the VR headset should fit your head and face comfortably so that it's easy to forget that you're wearing it and should block out light from the real environment around you. Head tracking So, we have a nice 3D picture that is viewable in a comfortable VR headset with a wide field of view. If this was it and you moved your head, it'd feel like you have a diorama box stuck to your face. 
Move your head, and the box moves along with it, much like holding an antique stereograph device or the childhood View-Master. Fortunately, VR is so much better.

The VR headset has a motion sensor (IMU) inside that detects spatial acceleration and rotation rate on all three axes, providing what's called six degrees of freedom. This is the same technology that is commonly found in mobile phones and some console game controllers. Mounted on your headset, when you move your head, the current viewpoint is calculated and used when the next frame's image is drawn. This is referred to as motion detection.

Current motion sensors may be good enough if you wish to play mobile games on a phone, but for VR, they're not accurate enough. The inaccuracies (rounding errors) accumulate over time as the sensor is sampled thousands of times per second, and eventually the system loses track of where you are in the real world. This drift is a major shortfall of phone-based VR headsets such as Google Cardboard. They can sense your head motion, but they lose track of your head position.

High-end HMDs account for drift with a separate positional tracking mechanism. The Oculus Rift does this with an inside-out positional tracking, where an array of (invisible) infrared LEDs on the HMD is read by an external optical sensor (an infrared camera) to determine your position. You need to remain within the view of the camera for the head tracking to work. Alternatively, the Steam VR Vive Lighthouse technology does an outside-in positional tracking, where two or more dumb laser emitters are placed in the room (much like the lasers in a barcode reader at the grocery checkout), and an optical sensor on the headset reads the rays to determine your position. Either way, the primary purpose is to accurately find the position of your head (and other similarly equipped devices, such as handheld controllers).
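The interplay between drifting inertial data and an absolute positional reference can be sketched as a simple blend. This is a conceptual illustration (the class, the blend weight, and the method names are assumptions, not actual SDK code):

```csharp
using UnityEngine;

// Illustrative sensor-fusion sketch: pure gyro integration drifts, so the
// orientation is periodically nudged toward an absolute reference (here,
// a pose delivered by optical positional tracking).
public class FusionSketch
{
    Quaternion orientation = Quaternion.identity;
    const float correctionWeight = 0.02f; // small pull toward the reference

    public void Integrate(Vector3 gyroRadPerSec, float dt)
    {
        // Integrate angular velocity; rounding errors accumulate as drift.
        Vector3 deltaDeg = gyroRadPerSec * Mathf.Rad2Deg * dt;
        orientation = orientation * Quaternion.Euler(deltaDeg);
    }

    public void CorrectWith(Quaternion opticalReference)
    {
        // Blend a little of the drift-free reference back in each update.
        orientation = Quaternion.Slerp(orientation, opticalReference, correctionWeight);
    }
}
```

Real trackers use far more sophisticated filters, but the structure is the same: fast, noisy integration corrected by a slow, absolute measurement.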
Together, the position, tilt, and forward direction of your head—the head pose—is used by the graphics software to redraw the 3D scene from this vantage point. Graphics engines such as Unity are really good at this.

Now, let's say that the screen is getting updated at 90 FPS and you're moving your head. The software determines the head pose, renders the 3D view, and draws it on the HMD screen. However, you're still moving your head. So, by the time it's displayed, the image is a little out of date with respect to your current position. This is called latency, and it can make you feel nauseous. Motion sickness caused by latency in VR occurs because, when you're moving your head, your brain expects the world around you to change exactly in sync. Any perceptible delay can make you uncomfortable, to say the least.

Latency can be measured as the time from reading a motion sensor to rendering the corresponding image, or the sensor-to-pixel delay. According to Oculus' John Carmack:

"A total latency of 50 milliseconds will feel responsive, but still noticeably laggy. 20 milliseconds or less will provide the minimum level of latency deemed acceptable."

There are a number of very clever strategies that can be used to implement latency compensation. The details are outside the scope of this article and will inevitably change as device manufacturers improve on the technology. One of these strategies is what Oculus calls timewarp, which tries to guess where your head will be by the time the rendering is done and uses that future head pose instead of the actual, detected one. All of this is handled in the SDK, so as a Unity developer, you do not have to deal with it directly. Meanwhile, as VR developers, we need to be aware of latency as well as the other causes of motion sickness. Latency can be reduced by rendering each frame faster (keeping to the recommended FPS).
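The extrapolation idea behind timewarp can be sketched in a few lines. This is only the prediction step, under the simplifying assumption of constant angular velocity over the latency window; the real timewarp also re-projects the already-rendered image:

```csharp
using UnityEngine;

// Illustrative pose prediction: render with the pose the head is expected
// to have when the frame reaches the screen, not the pose just sampled.
public static class PredictionSketch
{
    public static Quaternion PredictOrientation(
        Quaternion current, Vector3 angularVelocityDegPerSec, float latencySeconds)
    {
        // Assume constant angular velocity over the (short) latency window,
        // e.g. 20 ms. The predicted rotation is current pose plus the
        // rotation expected to occur during that window.
        Vector3 delta = angularVelocityDegPerSec * latencySeconds;
        return current * Quaternion.Euler(delta);
    }
}
```

The shorter the latency window, the smaller (and safer) the extrapolation, which is another reason hitting the target FPS matters.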
It also helps to discourage users from moving their heads too quickly and to use other techniques to make them feel grounded and comfortable.

Another thing that the Rift does to improve head tracking and realism is use a skeletal representation of the neck, so that the rotations it receives are mapped more accurately to head motion. For example, looking down at your lap produces a small forward translation of the viewpoint, since it's impossible to rotate one's head downwards on the spot.

Besides head tracking, stereography, and 3D audio, virtual reality experiences can be enhanced with body tracking, hand tracking (and gesture recognition), locomotion tracking (for example, VR treadmills), and controllers with haptic feedback. The goal of all of this is to increase your sense of immersion and presence in the virtual world.

Summary

In this article, we discussed the different levels of device integration software and then installed the software that is appropriate for your target VR device. We also discussed what happens inside the hardware and software SDK that makes virtual reality work and how it matters to us VR developers.

For more information on VR development and Unity, refer to the following Packt books:

Unity UI Cookbook, by Francesco Sapio: https://www.packtpub.com/game-development/unity-ui-cookbook
Building a Game with Unity and Blender, by Lee Zhi Eng: https://www.packtpub.com/game-development/building-game-unity-and-blender
Augmented Reality with Kinect, by Rui Wang: https://www.packtpub.com/application-development/augmented-reality-kinect

Resources for Article: Further resources on this subject: Virtually Everything for Everyone [article] Unity Networking – The Pong Game [article] Getting Started with Mudbox 2013 [article]

Creating Virtual Landscapes

Packt
06 Mar 2013
9 min read
(For more resources related to this topic, see here.)

Describing a world in data

Just like modern games, early games like Ant Attack required data that described in some meaningful way how the landscape was to appear. The eerie city landscape of "Antchester" (shown in the following screenshot) was constructed in memory as a 128 x 128 byte grid: the first 128 bytes defined the upper-left wall, the 128-byte row below that came next, and so on. Each of these bytes described the vertical arrangement of blocks in its lower six bits; the upper two bits were reserved for game sprites used by the game logic.

Heightmaps are common ground

The arrangement of numbers in a grid pattern is still extensively used to represent terrain. We call these grids "maps", and they are popular by virtue of being simple to use and manipulate. A long way from "Antchester", maps can now be measured in megabytes or gigabytes (around 20 GB is needed for the whole earth at 30 meter resolution). Each value in the map represents the height of the terrain at that location. These kinds of maps are known as heightmaps.

However, any information that can be represented in the grid pattern can use maps. Additional maps can be used by 3D engines to tell them how to mix many textures together; this is a common terrain painting technique known as "splatting". Splats describe the amount of blending between texture layers. Another kind of map might be used for lighting, adding light or shadows to an area of the map. We also find in some engines something called visibility maps, which hide parts of the terrain; for example, we might want to add holes or caves into a landscape. Coverage maps might be used to represent objects such as grasses; different vegetation layers might have some kind of map the engine uses to draw 3D objects onto the terrain surface. GROME allows us to create and edit all of these kinds of maps and export them; with a little bit of manipulation, we can port this information into most game engines.
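Since a heightmap is just a grid of numbers, sampling it at a position between grid points is a matter of bilinear interpolation. The following is a minimal, engine-agnostic sketch (the class and method names are illustrative, not part of GROME or any engine API):

```csharp
// Minimal heightmap sketch: a 2D grid of height values, sampled with
// bilinear interpolation for positions between grid points.
public class HeightmapSketch
{
    readonly float[,] heights; // heights[row, col]

    public HeightmapSketch(float[,] data) { heights = data; }

    // Assumes 0 <= x < width and 0 <= y < height.
    public float Sample(float x, float y)
    {
        int x0 = (int)x, y0 = (int)y;
        int x1 = System.Math.Min(x0 + 1, heights.GetLength(1) - 1);
        int y1 = System.Math.Min(y0 + 1, heights.GetLength(0) - 1);
        float tx = x - x0, ty = y - y0;

        // Interpolate along x on the two bracketing rows, then along y.
        float top    = heights[y0, x0] * (1 - tx) + heights[y0, x1] * tx;
        float bottom = heights[y1, x0] * (1 - tx) + heights[y1, x1] * tx;
        return top * (1 - ty) + bottom * ty;
    }
}
```

This is exactly what an engine does when it drapes a render mesh, a physics collider, or a vehicle over heightmap terrain.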
Whatever the technique used by an engine to paint the terrain, heightmaps are fairly universal in how they are used to describe topography. The following is an example of a heightmap loaded into an image viewer. It appears as a grayscale image; the intensity of each pixel represents a height value at that location on the map. This map represents a 100 square kilometer area of north-west Afghanistan used in a flight simulation.

GROME, like many other terrain editing tools, uses heightmaps to transport terrain information, typically importing the heightmap as a grayscale image using common file formats such as TIFF, PNG, or BMP. When it's time to export the terrain project, you have similar options to save. This commonality is the basis of using GROME as a tool for many different engines. There's nothing to stop you from making changes to an exported heightmap using image editing software. The GROME plugin system and SDK permit you to make your own custom exporter for any unsupported formats. So long as we can deal with the material and texture format requirements of our host 3D engine, we can integrate GROME into the art pipeline.

Texture sizes

Using textures for heightmap information does have limitations. The largest "safe" size for a texture is considered to be 4096 x 4096, although some of the older 3D cards would have problems with anything higher than 2048 x 2048. Also, host 3D engines often require texture dimensions to be a power of 2. A table of recommended dimensions for images follows.

Safe texture dimensions:
64 x 64
128 x 128
256 x 256
512 x 512
1024 x 1024
2048 x 2048
4096 x 4096

512 x 512 provides an efficient trade-off between resolution and performance and is the default value for GROME operations. If you're familiar with this already then great; you might see questions posted on forums about texture corruption or materials not looking correct. Sometimes these problems are the result of not conforming to this arrangement.
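The power-of-two convention above is easy to validate before importing. A small helper might look like this (an illustrative utility, not a GROME API):

```csharp
// Quick validation of texture dimensions against the power-of-two
// convention discussed above.
public static class TextureCheckSketch
{
    public static bool IsPowerOfTwo(int n)
    {
        // A power of two has exactly one bit set, so n & (n - 1) == 0.
        return n > 0 && (n & (n - 1)) == 0;
    }

    public static bool IsSafeTextureSize(int width, int height)
    {
        return IsPowerOfTwo(width) && IsPowerOfTwo(height)
            && width <= 4096 && height <= 4096;
    }
}
```

Running such a check over your source art before export catches most of the "corrupted texture" forum mysteries early.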
Also, you'll see these numbers crop up a few times in GROME's drop-down property boxes. To avoid any potential problems, it is wise to ensure any textures you use in your projects conform to these specifications. One exception is the Unreal Development Kit (UDK), in which you'll see numbers such as 257 x 257 used.

If you have a huge amount of terrain data that you need to import for a project, you can use the texture formats mentioned earlier, but I recommend using RAW formats if possible. If your project is based on real-world topography, then importing DTED or GeoTIFF data will extract geographical information such as latitude, longitude, and the number of arc seconds represented by the terrain.

Digital Terrain Elevation Data (DTED): a file format used by geographers and mappers to map the height of a terrain, often used to import real-world topography into flight simulations. Shuttle Radar Topography Mission (SRTM) data is easily obtained and converted.

The huge world problem

Huge landscapes may require a lot of memory, potentially more than a 3D card can handle. In game consoles, memory is a scarce resource; on mobile devices, transferring and storing the app is a factor. Even on a cutting-edge PC, large datasets will eat into that onboard memory, especially when we get down to designing and building them using high-resolution data. Requesting actions that eat up your system memory may cause the application to fail.

We can use GROME to create vast worlds without worrying too much about memory. This is done by taking advantage of how GROME manages data through a process of splitting terrain into "zones" and swapping it out to disk. This swapping is similar to how operating systems move memory to disk and reload it on demand. By default, whenever you import large DTED files, GROME will break the region into multiple zones and hide them. Someone new to GROME might be confused by a lengthy file import operation only to be presented with a seemingly empty project space.
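The zone-splitting idea can be illustrated with a few lines of code. This mimics the concept only; GROME's actual implementation is internal and not exposed like this:

```csharp
// Conceptual sketch of zone-based paging: a large heightmap is split into
// fixed-size tiles that can be loaded and unloaded independently, so only
// the zones near the viewer need to live in memory.
public static class ZoneSplitterSketch
{
    // Assumes map dimensions are exact multiples of zoneSize.
    public static float[][,] Split(float[,] map, int zoneSize)
    {
        int rows = map.GetLength(0) / zoneSize;
        int cols = map.GetLength(1) / zoneSize;
        var zones = new float[rows * cols][,];

        for (int zr = 0; zr < rows; zr++)
        for (int zc = 0; zc < cols; zc++)
        {
            var zone = new float[zoneSize, zoneSize];
            for (int r = 0; r < zoneSize; r++)
                for (int c = 0; c < zoneSize; c++)
                    zone[r, c] = map[zr * zoneSize + r, zc * zoneSize + c];
            zones[zr * cols + zc] = zone;
        }
        return zones;
    }
}
```

Each tile can then be serialized to disk and reloaded on demand, which is essentially what an operating system's virtual memory does with pages.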
When creating terrain for engines such as Unity, UDK, Ogre3D, and others, you should keep in mind their own technical limitations on what they can reasonably import. Most of these engines are built for small-scale scenes. While GROME doesn't impose any specific unit of measure on your designs, one unit equals one meter is a good rule of thumb. Many third-party models are made to this scale. However, it's up to the artist to pick a unit of scale and, importantly, be consistent. Keep in mind that many 3D engines are limited by two factors:

Floating point math precision
Z-buffer (depth buffer) precision

Floating point precision

As a general rule, anything farther than 20,000 units away from the world origin in any direction is going to exhibit precision errors. This manifests as vertex jitter whenever vertices are rotated and transformed by large values. The effects are not something you can easily work around; changing the scale of the object merely shifts the error to another decimal place. Engines that specialize in rendering large worlds normally use either camera-relative rendering or some kind of paging system. Unity and UDK are not inherently capable of camera-relative rendering, but a method of paging is possible to employ.

Depth buffer precision

The other issue associated with large scene rendering is z-fighting. The depth buffer is a normally invisible part of a scene used to determine which part is hidden by another (depth-testing). Whenever a pixel is written to a scene buffer, the z component is saved in the depth buffer. Typically, this buffer has 16 bits of precision, giving a linear depth range of 0 to 65,535. This depth value is based on the 3D camera's view range (the difference between the camera's near and far distances). Z-fighting occurs when objects that appear to be co-planar polygons are written into the z-buffer with similar depth values, causing them to "fight" for visibility.
This flickering is an indicator that the scene and camera settings need to be rethought. Often the easy fix is to increase the z-buffer precision by increasing the camera's near distance. The downside is that this can clip very near objects.

GROME will let you create such large worlds; its own Graphite engine handles them well. Most 3D engines are designed for smaller first- and third-person games, which will have a practical limit of around 10 to 25 square kilometers (1 meter = 1 unit). GROME can mix levels of detail quite easily; different regions of the terrain have their own mesh density. If, for example, you have a map of an island, you will want lots of detail for the land and less in the sea region. However, game engines such as Unity, UDK, and Ogre3D are not easily adapted to deal with such variability in the terrain mesh, since they are optimized to render a large triangular grid of uniform size. Instead, we use techniques to fake extra detail and bake it into our terrain textures, dramatically reducing the triangle count in the process. Using a combination of Normal Maps and Mesh Layers in GROME, we can create the illusion of more detail than there is at a distance.

Normal map: a Normal is a unit vector (a vector with a total length of one) perpendicular to a surface. When a texture is used as a Normal map, the red, green, and blue channels represent the vector (x, y, z). These are used to generate the illusion of more detail by creating a bumpy-looking surface. Also known as bump-maps.

Summary

In this article we looked at heightmaps and how they allow us to import and export to other programs and engines. We touched upon world sizes and limitations commonly found in 3D engines.

Resources for Article: Further resources on this subject: Photo Manipulation with GIMP 2.6 [Article] Setting up a BizTalk Server Environment [Article] Creating and Warping 3D Text with Away3D 3.6 [Article]

Audio and Animation: Hand in Hand

Packt
16 Feb 2016
5 min read
In this article, we are going to learn techniques to match audio pitch to animation speed. This is crucial when editing videos and creating animated content.

(For more resources related to this topic, see here.)

Matching the audio pitch to the animation speed

Many artifacts sound higher in pitch when accelerated and lower when slowed down: car engines, fan coolers, a vinyl record player, the list goes on. If you want to simulate this kind of sound effect in an animated object that can have its speed changed dynamically, follow this recipe.

Getting ready

For this, you'll need an animated 3D object and an audio clip. Please use the files animatedRocket.fbx and engineSound.wav, available in the 1362_09_01 folder, which you can find in the code bundle of the book Unity 5.x Cookbook at https://www.packtpub.com/game-development/unity-5x-cookbook.

How to do it...

To change the pitch of an audio clip according to the speed of an animated object, please follow these steps:

Import the animatedRocket.fbx file into your Project. Select the animatedRocket.fbx file in the Project view. Then, from the Inspector view, check its Import Settings. Select Animations, then select the clip Take 001, and make sure to check the Loop Time option. Click on the Apply button, shown as follows, to save the changes:

The reason why we didn't need to check the Loop Pose option is that our animation already loops in a seamless fashion. If it didn't, we could have checked that option to automatically create a seamless transition from the last to the first frame of the animation.

Add the animatedRocket GameObject to the scene by dragging it from the Project view into the Hierarchy view. Import the engineSound.wav audio clip. Select the animatedRocket GameObject. Then, drag engineSound from the Project view into the Inspector view, adding it as an Audio Source for that object.
In the Audio Source component of animatedRocket, check the box for the Loop option, as shown in the following screenshot:

We need to create a Controller for our object. In the Project view, click on the Create button and select Animator Controller. Name it rocketController. Double-click on the rocketController object to open the Animator window, as shown. Then, right-click on the gridded area and select the Create State | Empty option from the contextual menu. Name the new state spin, and set Take 001 as its motion in the Motion field:

From the Hierarchy view, select animatedRocket. Then, in the Animator component (in the Inspector view), set rocketController as its Controller and make sure that the Apply Root Motion option is unchecked, as shown:

In the Project view, create a new C# Script and rename it ChangePitch. Open the script in your editor and replace everything with the following code:

using UnityEngine;

public class ChangePitch : MonoBehaviour {
    public float accel = 0.05f;
    public float minSpeed = 0.0f;
    public float maxSpeed = 2.0f;
    public float animationSoundRatio = 1.0f;

    private float speed = 0.0f;
    private Animator animator;
    private AudioSource audioSource;

    void Start() {
        animator = GetComponent<Animator>();
        audioSource = GetComponent<AudioSource>();
        speed = animator.speed;
        AccelRocket(0f);
    }

    void Update() {
        if (Input.GetKey(KeyCode.Alpha1))
            AccelRocket(accel);
        if (Input.GetKey(KeyCode.Alpha2))
            AccelRocket(-accel);
    }

    public void AccelRocket(float accel) {
        speed += accel;
        speed = Mathf.Clamp(speed, minSpeed, maxSpeed);
        animator.speed = speed;
        float soundPitch = animator.speed * animationSoundRatio;
        audioSource.pitch = Mathf.Abs(soundPitch);
    }
}

Save your script and add it as a component to the animatedRocket GameObject. Play the scene and change the animation speed by pressing key 1 (accelerate) and 2 (decelerate) on your alphanumeric keyboard. The audio pitch will change accordingly.

How it works...
In the Start() method, besides storing the Animator and AudioSource components in variables, we get the initial speed from the Animator, and we call the AccelRocket() function, passing 0 as an argument, only so that the function calculates the resulting pitch for the Audio Source.

In the Update() function, the if (Input.GetKey(KeyCode.Alpha1)) and if (Input.GetKey(KeyCode.Alpha2)) lines detect whenever the 1 or 2 keys are being pressed on the alphanumeric keyboard and call the AccelRocket() function, passing the accel float variable as an argument.

The AccelRocket() function, in its turn, increments speed with the received argument (the accel float variable). However, it uses the Mathf.Clamp() command to limit the new speed value between the minimum and maximum speeds as set by the user. Then, it changes the Animator speed and the Audio Source pitch according to the absolute value of the new speed (the reason for making it an absolute value is to keep the pitch a positive number, even when the animation is reversed by a negative speed value). Also, please note that setting the animation speed, and therefore the sound pitch, to 0 will cause the sound to stop, making it clear that stopping the object's animation also prevents the engine sound from playing.

There's more...

Here is some information on how to fine-tune and customize this recipe.

Changing the Animation/Sound Ratio

If you want the audio clip pitch to be more or less affected by the animation speed, change the value of the Animation/Sound Ratio parameter.

Accessing the function from other scripts

The AccelRocket() function was made public so that it can be accessed from other scripts. As an example, we have included the ExtChangePitch.cs script in the 1362_09_01 folder. Try attaching this script to the Main Camera object and use it to control the speed by clicking on the left and right mouse buttons.
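If you'd rather write such an external controller yourself, one possible shape is sketched below. This is a hypothetical illustration, not the bundled ExtChangePitch.cs; the class name and field layout are assumptions:

```csharp
using UnityEngine;

// Illustrative external controller: drives the public AccelRocket() method
// on another GameObject's ChangePitch component from mouse input.
public class MousePitchController : MonoBehaviour
{
    public ChangePitch target;  // assign the rocket's component in the Inspector
    public float accel = 0.05f;

    void Update()
    {
        if (Input.GetMouseButton(0)) target.AccelRocket(accel);   // left: speed up
        if (Input.GetMouseButton(1)) target.AccelRocket(-accel);  // right: slow down
    }
}
```

Because AccelRocket() clamps and applies the speed itself, any number of callers can safely drive it without duplicating the pitch logic.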
Summary

In this article we learned how to match audio pitch to the animation speed and how to change the Animation/Sound Ratio.

To learn more, please refer to the following books:

Learning Unity 2D Game Development by Example: https://www.packtpub.com/game-development/learning-unity-2d-game-development-example
Unity Game Development Blueprints: https://www.packtpub.com/game-development/unity-game-development-blueprints
Getting Started with Unity: https://www.packtpub.com/game-development/getting-started-unity

Resources for Article: Further resources on this subject: The Vertex Functions [article] Lights and Effects [article] Virtual Machine Concepts [article]
2D Twin-stick Shooter

Packt
11 Nov 2014
21 min read
This article, written by John P. Doran, the author of Unity Game Development Blueprints, teaches us how to use Unity to prepare a well-formed game. It also gives people experienced in this field a chance to prepare some great stuff.

(For more resources related to this topic, see here.)

The shoot 'em up genre is one of the earliest kinds of games. In shoot 'em ups, the player character is a single entity fighting a large number of enemies. They are typically played with a top-down perspective, which is perfect for 2D games. Shoot 'em up games also exist in many categories, based upon their design elements.

Elements of a shoot 'em up were first seen in the 1961 Spacewar! game. However, the concept wasn't popularized until 1978 with Space Invaders. The genre was quite popular throughout the 1980s and 1990s and went in many different directions, including bullet hell games, such as the titles of the Touhou Project. The genre has gone through a resurgence in recent years with games such as Bizarre Creations' Geometry Wars: Retro Evolved, which is more famously known as a twin-stick shooter.

Project overview

Over the course of this article, we will be creating a 2D multidirectional shooter game similar to Geometry Wars. In this game, the player controls a ship. This ship can move around the screen using the keyboard and shoot projectiles in the direction that the mouse points at. Enemies and obstacles will spawn towards the player, and the player will avoid/shoot them. This article will also serve as a refresher on a lot of the concepts of working in Unity and give an overview of the recent addition of native 2D tools to Unity.

Your objectives

This project will be split into a number of tasks. It will be a simple step-by-step process from beginning to end.
Here is the outline of our tasks:

Setting up the project
Creating our scene
Adding in player movement
Adding in shooting functionality
Creating enemies
Adding GameController to spawn enemy waves
Particle systems
Adding in audio
Adding in points, score, and wave numbers
Publishing the game

Prerequisites

Before we start, we will need to get the latest Unity version, which you can always get by going to http://unity3d.com/unity/download/ and downloading it there:

At the time of writing this article, the version is 4.5.3, but this project should work in future versions with minimal changes. Download the Chapter1.zip package and unzip it. Inside the Chapter1 folder, there are a number of things, including an Assets folder, which has the art, sound, and font files you'll need for the project, as well as Chapter_1_Completed.unitypackage (this is the complete article package that includes the entire project for you to work with). I've also added in the complete game exported (TwinstickShooter Exported) as well as the entire project zipped up in the TwinstickShooter Project.zip file.

Setting up the project

At this point, I have assumed that you have Unity freshly installed and have started it up. With Unity started, go to File | New Project. Select a Project Location of your choice somewhere on your hard drive, and ensure you have Set up defaults for set to 2D. Once completed, select Create. At this point, we will not need to import any packages, as we'll be making everything from scratch. It should look like the following screenshot:

From there, if you see the Welcome to Unity pop up, feel free to close it, as we won't be using it. At this point, you will be brought to the general Unity layout, as follows:

Again, I'm assuming you have some familiarity with Unity before reading this article; if you would like more information on the interface, please visit http://docs.unity3d.com/Documentation/Manual/LearningtheInterface.html.
Keeping your Unity project organized is incredibly important. As your project moves from a small prototype to a full game, more and more files will be introduced to your project. If you don't start organizing from the beginning, you'll keep planning to tidy it up later on, but as deadlines keep coming, things may get quite out of hand. This organization becomes even more vital when you're working as part of a team, especially if your team is telecommuting. Differing project structures across different coders/artists/designers is an awful mess to find yourself in. Setting up a project structure at the start and sticking to it will save you countless minutes of time in the long run and only takes a few seconds, which is what we'll be doing now. Perform the following steps:

Click on the Create drop-down menu below the Project tab in the bottom-left side of the screen. From there, click on Folder, and you'll notice that a new folder has been created inside your Assets folder. After the folder is created, you can type in the name for your folder. Once done, press Enter for the folder to be created. We need to create folders for the following directories:

Animations
Prefabs
Scenes
Scripts
Sprites

If you happen to create a folder inside another folder, you can simply drag-and-drop it from the left-hand side toolbar. If you need to rename a folder, simply click on it once and wait, and you'll be able to edit it again. You can also use Ctrl + D to duplicate a folder if it is selected. Once you're done with the aforementioned steps, your project should look something like this:

Creating our scene

Now that we have our project set up, let's get started with creating our player: Double-click on the Sprites folder. Once inside, go to your operating system's browser window, open up the Chapter 1/Assets folder that we provided, and drag the playerShip.png file into the folder to move it into our project.
Once added, confirm that the image is a Sprite by clicking on it and confirming from the Inspector tab that Texture Type is Sprite. If it isn't, simply change it to that, and then click on the Apply button. Have a look at the following screenshot:

If you do not want to drag-and-drop the files, you can also right-click within the folder in the Project Browser (bottom-left corner) and select Import New Asset to select a file from a folder to bring it in. The art assets used for this tutorial were provided by Kenney. To see more of their work, please check out www.kenney.nl.

Next, drag-and-drop the ship into the scene (the center part that's currently dark gray). Once completed, set the position of the sprite to the center of the screen (0, 0) by right-clicking on the Transform component and then selecting Reset Position. Have a look at the following screenshot:

Now, with the player in the world, let's add in a background. Drag-and-drop the background.png file into your Sprites folder. After that, drag-and-drop a copy into the scene. If you put the background on top of the ship, you'll notice that the background is currently in front of the player (Unity puts newly added objects on top of previously created ones if their position on the Z axis is the same; this is commonly referred to as the z-order), so let's fix that.

Objects on the same Z axis without a sorting layer are considered to be equal in terms of draw order; just because a scene looks a certain way this time doesn't mean it will look the same when you reload the level. The way to guarantee that an object is in front of another one in 2D space is by having different Z values or using sorting layers.

Select your background object, and go to the Sprite Renderer component in the Inspector tab. Under Sorting Layer, select Add Sorting Layer. After that, click on the + icon for Sorting Layers, and then give Layer 1 the name Background. Now, create a sorting layer for Foreground and GUI.
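Incidentally, sorting layers can also be assigned from code rather than through the Inspector. A minimal sketch, assuming the layer names created above already exist in the project's Sorting Layers list:

```csharp
using UnityEngine;

// Illustrative helper: assigns a sorting layer to this object's
// SpriteRenderer at startup. The layer name must already exist in
// Edit | Project Settings | Tags and Layers (or via the + icon used above).
public class AssignSortingLayer : MonoBehaviour
{
    void Start()
    {
        var spriteRenderer = GetComponent<SpriteRenderer>();
        spriteRenderer.sortingLayerName = "Background";
        spriteRenderer.sortingOrder = 0; // higher values draw on top within a layer
    }
}
```

This can be handy when objects are spawned at runtime and need to land on the correct layer without manual setup.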
Have a look at the following screenshot:

Now, place the player ship on the Foreground layer and the background on the Background layer by selecting each object once again and then setting the Sorting Layer property via the drop-down menu. Now, if you play the game, you'll see that the ship is in front of the background, as follows:

At this point, we could just duplicate our background a number of times to create our full background by selecting the object in the Hierarchy, but that is tedious and time-consuming. Instead, we can create all of the duplicates by either using code or creating a tileable texture. For our purposes, we'll just create a texture.

Delete the background sprite by left-clicking on the background object in the Hierarchy tab on the left-hand side and then pressing the Delete key. Then select the background sprite in the Project tab, change Texture Type in the Inspector tab to Texture, and click on Apply.

Now let's create a 3D cube by selecting Game Object | Create Other | Cube from the top toolbar. Change the object's name from Cube to Background. In the Transform component, change the Position to (0, 0, 1) and the Scale to (100, 100, 1). If you are using Unity 4.6, you will need to go to Game Object | 3D Object | Cube to create the cube.

Since our camera is at 0, 0, -10 and the player is at 0, 0, 0, putting the object at position 0, 0, 1 will put it behind all of our sprites. By creating a 3D object and scaling it, we are making it really large, much larger than the player's monitor. If we scaled a sprite, it would be one really large image with pixelation, which would look really bad. By using a 3D object, the texture that is applied to the faces of the 3D object is repeated, and since the image is tileable, it looks like one big continuous image.

Remove the Box Collider by right-clicking on it and selecting Remove Component. Next, we will need to create a material for our background to use.
To do so, under the Project tab, select Create | Material, and name the material BackgroundMaterial. Under the Shader property, click on the drop-down menu, and select Unlit | Texture. Click on the Texture box on the right-hand side, and select the background texture. Once completed, set the Tiling property's x and y to 25. Have a look at the following screenshot:

In addition to just selecting from the menu, you can also drag-and-drop the background texture directly onto the Texture box, and it will set the property. Tiling tells Unity how many times the image should repeat in the x and y directions, respectively.

Finally, go back to the Background object in the Hierarchy. Under the Mesh Renderer component, open up Materials by left-clicking on the arrow, and change Element 0 to our BackgroundMaterial material. Consider the following screenshot:

Now, when we play the game, you'll see that we have a complete background that tiles properly.

Scripting 101

In Unity, the behavior of game objects is controlled by the different components that are attached to them in a form of association called composition. These components are things that we can add and remove at any time to create much more complex objects. If you want to do anything that isn't already provided by Unity, you'll have to write it on your own through a process we call scripting. Scripting is an essential element in all but the simplest of video games.

Unity allows you to code in C#, Boo, or UnityScript, a language designed specifically for use with Unity and modelled after JavaScript. For this article, we will use C#. C# is an object-oriented programming language, an industry-standard language similar to Java or C++. The majority of plugins from the Asset Store are written in C#, and code written in C# can port to other platforms, such as mobile, with very minimal code changes.
C# is also a strongly-typed language, which means that if there is any issue with the code, it will be identified within Unity and will stop you from running the game until it's fixed. This may seem like a hindrance, but when working with code, I very much prefer to write correct code and solve problems early, before they escalate into something much worse.

Implementing player movement

Now, at this point, we have a great-looking game, but nothing at all happens. Let's change that now using our player. Perform the following steps:

Right-click on the Scripts folder you created earlier, click on Create, and select the C# Script label. Once you click on it, a script will appear in the Scripts folder; it should already have focus and should be asking you to type a name for the script. Call it PlayerBehaviour.

Double-click on the script in Unity, and it will open MonoDevelop, an open source integrated development environment (IDE) that is included with your Unity installation.

After MonoDevelop has loaded, you will be presented with the C# stub code that was created automatically for you by Unity when you created the C# script. Let's break down what's currently there before we replace some of it with new code. At the top, you will see two lines:

```csharp
using UnityEngine;
using System.Collections;
```

The engine knows that if we refer to a class that isn't located inside this file, then it has to reference the files within these namespaces for the referenced class before giving an error. We are currently using two namespaces. The UnityEngine namespace contains interfaces and class definitions that let MonoDevelop know about all the addressable objects inside Unity. The System.Collections namespace contains interfaces and classes that define various collections of objects, such as lists, queues, bit arrays, hash tables, and dictionaries.
We will be using a list, so we will change the second line to the following:

```csharp
using System.Collections.Generic;
```

The next line you'll see is:

```csharp
public class PlayerBehaviour : MonoBehaviour {
```

You can think of a class as a kind of blueprint for creating a new component type that can be attached to GameObjects, the objects inside our scenes that start out with just a Transform and then have components added to them. When Unity created our C# stub code, it took care of that; we can see the result, as our file is called PlayerBehaviour and the class is also called PlayerBehaviour. Make sure that your .cs file and the name of the class match, as they must be the same to enable the script component to be attached to a game object.

Next up is the MonoBehaviour section of the code. The : symbol signifies that we inherit from a particular class; in this case, MonoBehaviour. All behavior scripts must inherit from MonoBehaviour, directly or indirectly. Inheritance is the idea of having an object be based on another object or class, using the same implementation. With this in mind, all the functions and variables that exist inside the MonoBehaviour class will also exist in the PlayerBehaviour class, because PlayerBehaviour is a MonoBehaviour.

For more information on the MonoBehaviour class and all the functions and properties it has, check out http://docs.unity3d.com/ScriptReference/MonoBehaviour.html.

Directly after this line, we will want to add some variables to help us with the project. Variables are pieces of data that we wish to hold on to for one reason or another, typically because they will change over the course of a program, and we will do different things based on their values.
Add the following code under the class definition:

```csharp
// Movement modifier applied to directional movement.
public float playerSpeed = 2.0f;

// What the current speed of our player is
private float currentSpeed = 0.0f;

/*
 * Allows us to have multiple inputs and supports keyboard,
 * joystick, etc.
 */
public List<KeyCode> upButton;
public List<KeyCode> downButton;
public List<KeyCode> leftButton;
public List<KeyCode> rightButton;

// The last movement that we've made
private Vector3 lastMovement = new Vector3();
```

Between the variable definitions, you will notice comments explaining what each variable is and how we'll use it. To write a comment, you can simply add // to the beginning of a line, and everything after it is commented out so that the compiler/interpreter won't see it. If you want to write something longer than one line, you can use /* to start a comment; everything that follows is commented until you write */ to close it. It's always a good idea to do this in your own coding endeavors for anything that doesn't make sense at first glance.

For those of you working on your own projects in teams, there is an additional form of commenting that Unity supports, which may make your life much nicer: XML comments. They take up more space than the comments we are using, but they also document your code for you. For a nice tutorial about that, check out http://unitypatterns.com/xml-comments/.

In our game, the player may want to move up using either the arrow keys or the W key. You may even want to use something else. Rather than restricting the player to just one button, we will store all the possible ways to go up, down, left, or right in their own container. To do this, we are going to use a list, which is a holder for multiple objects that we can add or remove while the game is being played.
For more information on lists, check out http://msdn.microsoft.com/en-us/library/6sh2ey19(v=vs.110).aspx.

One of the things you'll notice is the public and private keywords before the variable types. These are access modifiers that dictate who can and cannot use these variables. The public keyword means that any other class can access that property, while private means that only this class will be able to access this variable. Here, currentSpeed is private because we don't want our current speed to be modified or set anywhere else.

But you'll notice something interesting with the public variables that we've created. Go back into the Unity project and drag-and-drop the PlayerBehaviour script onto the playerShip object. Before going back to the Unity project, though, make sure that you save your PlayerBehaviour script. Not saving is a very common mistake made by people working with MonoDevelop. Have a look at the following screenshot:

You'll notice now that the public variables we created are located inside the Inspector for the component. This means that we can actually set those variables inside the Inspector without having to modify the code, allowing us to tweak values in our code very easily, which is a godsend for many game designers. You may also notice that the names have changed to be more readable. This is because of the naming convention that we are using, with each word after the first starting with a capital letter. This convention is called camelCase (more specifically, headlessCamelCase).

Now change the Size of each of the button variables to 2, and fill in the Element 0 value with the appropriate arrow key and the Element 1 value with W for up, A for left, S for down, and D for right. When this is done, it should look something like the following screenshot:

Now that we have our variables set, go back to MonoDevelop to work on the script some more. The line after that is a function definition for a method called Start; it isn't a user method but one that belongs to MonoBehaviour.
Where variables are data, functions are the things that modify and/or use that data. Functions are self-contained modules of code (enclosed within braces, { and }) that accomplish a certain task. The nice thing about a function is that once it is written, it can be used over and over again. Functions can be called from inside other functions:

```csharp
void Start () {
}
```

Start is only called once in the lifetime of the behavior, when the game starts, and is typically used to initialize data.

If you're used to other programming languages, you may be surprised that initialization of an object is not done using a constructor function. This is because the construction of objects is handled by the editor and does not take place at the start of gameplay as you might expect. If you attempt to define a constructor for a script component, it will interfere with the normal operation of Unity and can cause major problems with the project.

However, for this behavior, we will not need the Start function. Perform the following steps:

Delete the Start function and its contents.

The next function that we see included is the Update function. Also inherited from MonoBehaviour, this function is called every frame for each object that the component is attached to. We want to update our player ship's rotation and movement every frame. Inside the Update function (between { and }), put the following lines of code:

```csharp
// Rotate player to face mouse
Rotation();

// Move the player's body
Movement();
```

Here, we call two functions, but these functions do not exist yet, because we haven't created them. Let's do that now!
Below the Update function and before the closing } of the class, put the following function:

```csharp
// Will rotate the ship to face the mouse.
void Rotation()
{
    // We need to tell where the mouse is relative to the
    // player
    Vector3 worldPos = Input.mousePosition;
    worldPos = Camera.main.ScreenToWorldPoint(worldPos);

    /*
     * Get the differences from each axis (stands for
     * deltaX and deltaY)
     */
    float dx = this.transform.position.x - worldPos.x;
    float dy = this.transform.position.y - worldPos.y;

    // Get the angle between the two objects
    float angle = Mathf.Atan2(dy, dx) * Mathf.Rad2Deg;

    /*
     * The transform's rotation property uses a Quaternion,
     * so we need to convert the angle into a Vector
     * (the Z axis is the rotation axis for 2D).
     */
    Quaternion rot = Quaternion.Euler(new Vector3(0, 0, angle + 90));

    // Assign the ship's rotation
    this.transform.rotation = rot;
}
```

Now, if you comment out the Movement line and run the game, you'll notice that the ship rotates to face the mouse. Have a look at the following screenshot:

Below the Rotation function, we now need to add our Movement function:

```csharp
// Will move the player based off of keys pressed
void Movement()
{
    // The movement that needs to occur this frame
    Vector3 movement = new Vector3();

    // Check for input
    movement += MoveIfPressed(upButton, Vector3.up);
    movement += MoveIfPressed(downButton, Vector3.down);
    movement += MoveIfPressed(leftButton, Vector3.left);
    movement += MoveIfPressed(rightButton, Vector3.right);

    /*
     * If we pressed multiple buttons, make sure we're only
     * moving the same length.
     */
    movement.Normalize();

    // Check if we pressed anything
    if (movement.magnitude > 0)
    {
        // If we did, move in that direction
        currentSpeed = playerSpeed;
        this.transform.Translate(movement * Time.deltaTime * playerSpeed,
                                 Space.World);
        lastMovement = movement;
    }
    else
    {
        // Otherwise, move in the direction we were going
        this.transform.Translate(lastMovement * Time.deltaTime * currentSpeed,
                                 Space.World);
        // Slow down over time
        currentSpeed *= .9f;
    }
}
```

Inside this function, we call another function, MoveIfPressed, so we'll need to add that in as well. Below the Movement function, add the following:

```csharp
/*
 * Will return the movement if any of the keys are pressed,
 * otherwise it will return (0,0,0)
 */
Vector3 MoveIfPressed(List<KeyCode> keyList, Vector3 Movement)
{
    // Check each key in our list
    foreach (KeyCode element in keyList)
    {
        if (Input.GetKey(element))
        {
            /*
             * It was pressed, so we leave the function
             * with the movement applied.
             */
            return Movement;
        }
    }

    // None of the keys were pressed, so we don't need to move
    return Vector3.zero;
}
```

Now, save your file and move back into Unity. Save your current scene as Chapter_1.unity by going to File | Save Scene, making sure to save it to the Scenes folder we created earlier. Run the game by pressing the play button. Have a look at the following screenshot:

Now you'll see that we can move using the arrow keys or the W A S D keys, and our ship rotates to face the mouse. Great!

Summary

This article walked through building a 2D twin-stick shooter to familiarize you with the game development features of Unity.
Looking Good – The Graphical Interface

Packt
16 Feb 2016
We will start by creating a simple Tic-tac-toe game, using the basic pieces of GUI that Unity provides. Following this, we will discuss how we can change the styles of our GUI controls to improve the look of our game. We will also explore some tips and tricks to handle the many different screen sizes of Android devices. Finally, we will learn about a much quicker way to put our games on the device. With all that said, let's jump in. In this article, we will cover the following topics:

- User preferences
- Buttons, text, and images
- Dynamic GUI positioning
- Build and run

In this article, we will be creating a new project in Unity. The first section here will walk you through its creation and setup.

Creating a Tic-tac-toe game

The project for this article is a simple Tic-tac-toe style game, similar to what any of us might play on paper. As with anything else, there are several ways in which you can make this game. We are going to use Unity's uGUI system in order to better understand how to create a GUI for any of our other games.

The game board

The basic Tic-tac-toe game involves two players and a 3 x 3 grid. The players take turns filling squares with Xs and Os. The player who first fills a line of three squares with their letter wins the game. If all squares are filled without a player achieving a line of three, the game is a tie.

Let's start with the following steps to create our game board:

The first thing to do is to create a project for this article. So, start up Unity and we will do just that. If you have been following along so far, Unity should boot up into the last project that was open. This isn't a bad feature, but it can become extremely annoying. Think of it like this: you have been working on a project for a while and it has grown large. Now you need to quickly open something else, but Unity defaults to your huge project.
If you wait for it to open before you can work on anything else, it can consume a lot of time. To change this feature, go to the top of the Unity window and click on Edit followed by Preferences. This is the same place where we changed our script editor's preferences. This time, though, we are going to change settings in the General tab. The following screenshot shows the options that are present under the General tab:

At this moment, our primary concern is the Load Previous Project on Startup option; however, we will still cover all of the options in turn. The options under the General tab are explained in detail as follows:

- Auto Refresh: This is one of the best features of Unity. When an asset is changed outside of Unity, this option lets Unity automatically detect the change and refresh the asset inside your project.
- Load Previous Project on Startup: This is a great option, and you should make sure that it is unchecked whenever installing Unity. When checked, Unity will immediately open the last project you worked on rather than the Project Wizard.
- Compress Assets on Import: This is the checkbox for automatically compressing your game assets when they are first imported to Unity.
- Editor Analytics: This checkbox is for Unity's anonymous usage statistics. Leave it checked and the Unity Editor will occasionally send information to the Unity team. It doesn't hurt anything to leave it on, and it helps the Unity team make the Unity Editor better; however, it comes down to personal preference.
- Show Asset Store search hits: This setting is only relevant if you plan to use the Asset Store, which can be a great source of assets and tools for any game; however, we are not going to use it here. The option does what the name suggests: when you search the Asset Store from within the Unity Editor, the number of results is displayed based on this checkbox.
- Verify Saving Assets: This is a good one to leave off.
If this is on, every time you click on Save in Unity, a dialog box will pop up so that you can make sure to save any and all of the assets that have changed since your last save. This option is not so much about your models and textures; it is concerned with Unity's internal files, materials, and prefabs. It's best to leave it off for now.

- Skin (Pro Only): This option only applies to Unity Pro users. It gives you the option to switch between the light and dark versions of the Unity Editor. It is purely cosmetic, so go with your gut on this one.

With your preferences set, now go to File and then select New Project. Click on the Browse... button to pick a location and name for the new project. We will not be using any of the included packages, so click on Create and we can get on with it.

By changing a few simple options, we can save ourselves a lot of trouble later. This may not seem like a big deal for the simple projects in this article, but for large and complex projects, not choosing the correct options can cause a lot of hassle, even if you just want to make a quick switch between projects.

Creating the board

With the new project created, we have a clean slate to create our game. Before we can create the core functionality, we need to set up some structure in our scene for our game to work and our players to interact with:

Once Unity finishes initializing the new project, we need to create a new canvas. We can do this by navigating to GameObject | UI | Canvas. The whole of Unity's uGUI system requires a canvas in order to draw anything on the screen. It has a few key components, as you can see in the following Inspector window, which allow it and everything else in your interface to work.

- Rect Transform: This is a special type of the normal transform component that you will find on nearly every other object that you will use in your games.
It keeps track of the object's position on screen, its size, its rotation, the pivot point around which it will rotate, and how it will behave when the screen size changes. By default, the Rect Transform for a canvas is locked to the whole screen's size.

- Canvas: This component controls how it and the interface elements it controls interact with the camera and your scene. You can change this by adjusting Render Mode. The default mode, Screen Space – Overlay, means that everything will be drawn on screen, on top of everything else in the scene. The Screen Space – Camera mode will draw everything a specific distance away from the camera. This allows your interface to be affected by the perspective nature of the camera, but any models that are closer to the camera will appear in front of it. The World Space mode ensures that the canvas and the elements it controls are drawn in the world just like any of the models in your scene.
- Graphics Raycaster: This is the component that lets you actually interact with and click on your various interface elements.

When you added the canvas, an extra object called EventSystem was also created. This is what allows our buttons and other interface elements to interact with our scripts. If you ever accidentally delete it, you can recreate it by going to the top of Unity and navigating to GameObject | UI | EventSystem.

Next, we need to adjust the way the Unity Editor displays our game so that we can easily make our game board. To do this, switch to the Game view by clicking on its tab at the top of the Scene view. Then, click on the button that says Free Aspect and select the option near the bottom: 3:2 Landscape (3:2). Most of the mobile devices your games will be played on use a screen that approximates this ratio, and the rest will not see much distortion in your game.

To allow our game to adjust to various resolutions, we need to add a new component to our canvas object.
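As an aside, everything the GameObject | UI menus have created for us so far, plus the scaler component we are about to add, can also be assembled from a script. The following is a hedged sketch rather than something the chapter requires; the object names are arbitrary, and the 960 x 640 base resolution matches the value we set in the next step:

```csharp
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.EventSystems;

// Sketch: reproducing the editor-made canvas setup in code.
public class CanvasBootstrap : MonoBehaviour
{
    void Awake()
    {
        // The canvas object, with the components described above.
        GameObject canvasObject = new GameObject("Canvas");
        Canvas canvas = canvasObject.AddComponent<Canvas>();
        canvas.renderMode = RenderMode.ScreenSpaceOverlay;

        // Scale the interface from a 960 x 640 base resolution.
        CanvasScaler scaler = canvasObject.AddComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(960, 640);

        // Required for clicks to reach our interface elements.
        canvasObject.AddComponent<GraphicRaycaster>();

        // The EventSystem that routes input to the UI.
        GameObject eventSystem = new GameObject("EventSystem");
        eventSystem.AddComponent<EventSystem>();
        eventSystem.AddComponent<StandaloneInputModule>();
    }
}
```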
With the canvas selected in the Hierarchy panel, click on Add Component in the Inspector panel and navigate to Layout | Canvas Scaler. This component lets us work from a base screen resolution, automatically scaling our GUI as the device changes. To select a base resolution, select Scale With Screen Size from the Ui Scale Mode drop-down list. Next, let's put 960 for X and 640 for Y. It is better to work from a larger resolution than a smaller one; if your base resolution is too small, all your GUI elements will look fuzzy when they are scaled up for high-resolution devices.

To keep things organized, we need to create three empty GameObjects. Go back to the top of Unity and select Create Empty three times under GameObject. In the Hierarchy tab, click and drag them to our canvas to make them the canvas's children. To make each of them usable for organizing our GUI elements, we need to add the Rect Transform component. Find it by navigating to Add Component | Layout | Rect Transform in the Inspector for each. To rename them, click on their name at the top of the Inspector and type in a new name. Name one Board, another Buttons, and the last one Squares. Next, make Buttons and Squares children of Board. The Buttons element will hold all of the pieces of our game board that are clickable, while Squares will hold the squares that have already been selected.

To keep the Board element in the same place as devices change, we need to change the way it anchors to its parent. Click on the box with a red cross and a yellow dot in the center at the top right of Rect Transform to expand the Anchor Presets menu:
Also, make sure that Rotation is all set to 0 and Scale is set to 1. Otherwise, our interface may be scaled oddly when we work or play on it. Next, we need to change the anchor point of the board. If Anchor is not expanded, click on the little triangle on the left-hand side to expand it. Either way, the Max X value needs to be set to 0.667 so that our board will be a square and cover the left two-thirds of our screen. This game board is the base around which the rest of our project will be created. Without it, the game won't be playable. The game squares use it to draw themselves on screen and anchor themselves to relevant places. Later, when we create menus, this is needed to make sure that a player only sees what we need them to be interacting with at that moment. Game squares Now that we have our base game board in place, we need the actual game squares. Without them, it is going to be kind of hard to play the game. We need to create nine buttons for the player to click on, nine images for the background of the selected squares, and nine texts to display which person controls the squares. To create them and set them up, perform these steps: Navigate to Game Object | UI just like we did for the canvas, but this time select Button, Image, and Text to create everything we need. Each of the image objects needs one of the text objects as a child. Then, all of the images must be children of the Squares object and the buttons must be children of the Buttons object. All of the buttons and images need a number in their name so that we can keep them organized. Name the buttons Button0 through Button8 and the images Square0 through Square8. The next step is to lay out our board so that we can keep things organized and in sync with our programming. We need to set each numbered set specifically. But first, pick the crossed arrows from the bottom-right corner of Anchor Presets for all of them and ensure that their Left, Top, Right, and Bottom values are set to 0. 
To set each of our buttons and squares at the right place, just match the numbers to the following table. The result will be that all the squares will be in order, starting at the top left and ending at the bottom right:

| Square | Min X | Min Y | Max X | Max Y |
|--------|-------|-------|-------|-------|
| 0      | 0     | 0.67  | 0.33  | 1     |
| 1      | 0.33  | 0.67  | 0.67  | 1     |
| 2      | 0.67  | 0.67  | 1     | 1     |
| 3      | 0     | 0.33  | 0.33  | 0.67  |
| 4      | 0.33  | 0.33  | 0.67  | 0.67  |
| 5      | 0.67  | 0.33  | 1     | 0.67  |
| 6      | 0     | 0     | 0.33  | 0.33  |
| 7      | 0.33  | 0     | 0.67  | 0.33  |
| 8      | 0.67  | 0     | 1     | 0.33  |

The last thing we need to add is an indicator to show whose turn it is. Create another Text object just like we did before and rename it Turn Indicator. After you make sure that the Left, Top, Right, and Bottom values are set to 0 again, set the Anchor Point Preset to the blue arrows again. Finally, set Min X under Anchor to 0.67.

We now have everything that we need to play the basic game of Tic-tac-toe. To check it out, select the Squares object and uncheck the box in the top-right corner of its Inspector to turn it off. When you hit play now, you should be able to see your whole game board and click on the buttons. You can even use Unity Remote to test it with the touch settings. If you have not already done so, it would be a good idea to save the scene before continuing.

The game squares are the last piece needed to set up our initial game. It almost looks like a playable game now. We just need to add a few scripts and we will be able to play all the games of Tic-tac-toe that we could ever desire.

Controlling the game

Having a game board is one of the most important parts of creating any game. However, it does us no good if we can't control what happens when its various buttons are pressed. Let's create some scripts and write some code to fix this now:

Create two new scripts in the Project panel. Name the new scripts TicTacToeControl and SquareState, then open them and clear out the default functions. The SquareState script will hold the possible states of each square of our game board.
To do this, clear absolutely everything out of the script, including the using UnityEngine line and the public class SquareState line, so that we can replace it with a simple enumeration. An enumeration is just a list of potential values. This one is concerned with the player who controls the square. It will allow us to keep track of whether X controls it, O controls it, or it is clear. The Clear state comes first and is therefore the default:

```csharp
public enum SquareState
{
    Clear,
    XControl,
    OControl
}
```

In our other script, TicTacToeControl, we need to start by adding an extra line at the very beginning, right under using UnityEngine. This line lets our code interact with the various GUI elements, most importantly allowing us to change the text of who controls a square and whose turn it is:

```csharp
using UnityEngine.UI;
```

Next, we need two variables that will largely control the flow of the game. They need to be added in place of the two default functions. The first defines our game board: an array of nine squares that keeps track of who owns what. The second keeps track of whose turn it is. When the Boolean is true, the X player gets a turn; when it is false, the O player gets a turn:

```csharp
public SquareState[] board = new SquareState[9];
public bool xTurn = true;
```

The next variable will let us change the text on screen for whose turn it is:

```csharp
public Text turnIndicatorLandscape;
```

The next three variables will give us access to all of the GUI objects that we set up in the last section, allowing us to change the image and text based on who owns each square. We can also turn the buttons and squares on and off as they are clicked.
All of them are marked with Landscape so that we will be able to keep them straight later, when we have a second board for the Portrait orientation of devices:

```csharp
public GameObject[] buttonsLandscape;
public Image[] squaresLandscape;
public Text[] squareTextsLandscape;
```

The last two variables for now will give us access to the images that we need for the backgrounds:

```csharp
public Sprite oImage;
public Sprite xImage;
```

Our first function for this script will be called every time a button is clicked. It receives the number of the button clicked, and the first thing it does is turn the button off and the square on:

```csharp
public void ButtonClick(int squareIndex)
{
    buttonsLandscape[squareIndex].SetActive(false);
    squaresLandscape[squareIndex].gameObject.SetActive(true);
```

Next, the function checks the Boolean that we created earlier to see whose turn it is. If it is the X player's turn, the square is set to use the appropriate image and text, indicating their control. It then marks on the script's internal board who controls the square before finally switching to the O player's turn:

```csharp
    if (xTurn)
    {
        squaresLandscape[squareIndex].sprite = xImage;
        squareTextsLandscape[squareIndex].text = "X";
        board[squareIndex] = SquareState.XControl;
        xTurn = false;
        turnIndicatorLandscape.text = "O's Turn";
    }
```

This next block of code does the same thing as the previous one, except that it marks control for the O player and changes the turn to the X player:

```csharp
    else
    {
        squaresLandscape[squareIndex].sprite = oImage;
        squareTextsLandscape[squareIndex].text = "O";
        board[squareIndex] = SquareState.OControl;
        xTurn = true;
        turnIndicatorLandscape.text = "X's Turn";
    }
}
```

That is it for the code right now. Next, we need to return to the Unity Editor and set up our new script in the scene.
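Before we wire things up in the editor, one aside: the board array already holds everything needed to detect a winner, which the code in this excerpt does not yet do. A hypothetical helper for TicTacToeControl (the name CheckWinner is our own, not from the chapter) might look like this:

```csharp
// Hypothetical helper, not part of the chapter's script:
// returns who owns a full line of three, or Clear if nobody does.
SquareState CheckWinner()
{
    int[,] lines =
    {
        {0, 1, 2}, {3, 4, 5}, {6, 7, 8}, // rows
        {0, 3, 6}, {1, 4, 7}, {2, 5, 8}, // columns
        {0, 4, 8}, {2, 4, 6}             // diagonals
    };

    for (int i = 0; i < lines.GetLength(0); i++)
    {
        SquareState first = board[lines[i, 0]];
        if (first != SquareState.Clear &&
            first == board[lines[i, 1]] &&
            first == board[lines[i, 2]])
        {
            return first;
        }
    }
    return SquareState.Clear;
}
```

Calling this at the end of ButtonClick and checking the result against SquareState.Clear would be one way to end the game when someone completes a line.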
We now need to attach all of the object references that our script needs in order to actually work. We don't need to touch the Board or XTurn slots in the Inspector panel, but the Turn Indicator object does need to be dragged from the Hierarchy tab to the Turn Indicator Landscape slot in the Inspector panel. Next, expand the Buttons Landscape, Squares Landscape, and Square Texts Landscape settings and set each Size slot to 9. To each of the new slots, we need to drag the relevant object from the Hierarchy tab. The Element 0 object under Buttons Landscape gets Button0, Element 1 gets Button1, and so on. Do this for all of the buttons, images, and texts. Ensure that you put them in the right order or else our script will appear confusing as it changes things when the player is playing. Next, we need a few images. If you have not already done so, import the starting assets for this article by going to the top of Unity, by navigating to Assets | Import New Asset and selecting the files to import them. You will need to navigate to and select each one at a time. We have Onormal and Xnormal for indicating control of the square. The ButtonNormal image is used when the button is just sitting there and ButtonActive is used when the player touches the button. The Title field is going to be used for our main menu a little bit later. In order to use any of these images in our game, we need to change their import settings. Select each of them in turn and find the Texture Type dropdown in the Inspector panel. We need to change them from Texture to Sprite (2D \ uGUI). We can leave the rest of the settings at their defaults. The Sprite Mode option is used if we have a sprite sheet with multiple elements in one image. The Packing Tag option is used for grouping and finding sprites in the sheet. The Pixels To Units option affects the size of the sprite when it is rendered in world space. The Pivot option simply changes the point around which the image will rotate. 
For the four square images, we can click on Sprite Editor to change how the border behaves when they are rendered. When clicked, a new window opens that shows our image with some green lines at the edges and some information about it in the lower right. We can drag these green lines to change the Border property. Anything outside the green lines will not be stretched with the image as it fills spaces larger than itself. A setting of around 13 for each side will keep our whole border from stretching. Once you make any changes, ensure that you hit the Apply button to commit them.

Next, select the GameControl object once more and drag the ONormal image to the OImage slot and the XNormal image to the XImage slot.

Each of the buttons needs to be connected to the script. To do this, select each of them from the Hierarchy in turn and click on the plus sign at the bottom-right corner of its Inspector:

We then need to click on the little circle to the left of No Function and select GameControl from the list in the new window. Now navigate to No Function | TicTacToeControl | ButtonClick (int) to connect the function in our code to the button. Finally, for each of the buttons, put the number of the button in the number slot to the right of the function list.

To keep everything organized, rename your Canvas object to GameBoard_Landscape. Before we can test it out, be sure that the Squares object is turned on by checking the box in the top-left corner of its Inspector. Also, uncheck the box of each of its image children.

This may not look like the best game in the world, but it is playable. We have buttons that call functions in our scripts. The turn indicator changes as we play. Also, each square indicates who controls it after it is selected. With a little more work, this game could look and work great.

Messing with fonts

Now that we have a basic working game, we need to make it look a little better.
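As an aside before we restyle anything: the button hookup we just did by hand, one button at a time, can also be done from a script, which scales better than clicking through nine buttons. This is a sketch under the assumption that the TicTacToeControl reference and the buttons array are assigned in the Inspector:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: wiring each button to ButtonClick(int) in code.
public class ButtonWiring : MonoBehaviour
{
    public TicTacToeControl control;
    public Button[] buttons; // Button0 through Button8, in order

    void Start()
    {
        for (int i = 0; i < buttons.Length; i++)
        {
            // Capture a copy of the index; capturing the loop variable
            // directly would make every button report the final value.
            int index = i;
            buttons[i].onClick.AddListener(() => control.ButtonClick(index));
        }
    }
}
```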
We are going to add our button images and pick some new font sizes and colors to make everything more readable: Let's start with the buttons. Select one of the Button elements and you will see in the Inspector that it is made of an Image (Script) component and a Button (Script) component. The first component controls how the GUI element will appear when it just sits there. The second controls how it changes when a player interacts with it and what bit of functionality this triggers.
   Source Image: This is the base image that is displayed when the element just sits there and is untouched by the player.
   Color: This controls the tinting and fading of the image that is being used.
   Material: This lets you use a texture or shader that might otherwise be used on 3D models.
   Image Type: This determines how the image will be stretched to fill the available space. Usually, it will be set to Sliced, which is for images that use a border and can optionally be filled with a color based on the Fill Center checkbox. Otherwise, it will often be set to Simple, for ordinary images, where checking the Preserve Aspect box prevents the image from being stretched by odd-sized Rect Transforms.
   Interactable: This simply toggles whether or not the player is able to click on the button and trigger functionality.
   Transition: This changes how the button will react as the player interacts with it. ColorTint causes the button to change color as it is interacted with. SpriteSwap will change the image when it is interacted with. Animation will let you define more complex animation sequences for the transitions between states.
   Target Graphic: This is a reference to the base image used for drawing the button on screen.
   The Normal, Highlighted, Pressed, and Disabled slots define the effects or images to use when the button is not being interacted with, is moused over, is clicked by the player, and is turned off, respectively. 
For each of our buttons, we need to drag our ButtonNormal image from our Project panel to the Source Image slot. Next, click on the white box to the right of the Color slot to open the color picker. To stop our buttons from being faded, we need to move the A slider all the way to the right or set the box to 255. We want to change images when our buttons are pressed, so change the Transition to SpriteSwap. Mobile devices have almost no way of hovering over GUI elements, so we do not need to worry about the Highlighted state. However, we do want to add our ButtonActive image to the Pressed Sprite slot so that it will switch when the player touches the button. The button squares should be blank until someone clicks on them, so we need to get rid of the text element. The easiest way to do this is to select the Text child under each button and delete it. Next, we need to change the Text child of each of the image elements. It is the Text (Script) component that allows us to control how text is drawn on screen.
   Text: This is the area where we can change the text that will be drawn on screen.
   Font: This allows us to pick any font file that is in our project to use for the text.
   Font Style: This will let you adjust the bold and italic nature of the text.
   Font Size: This is the size of the text, just like picking a font size in your favorite word processor.
   Line Spacing: This is the distance between each line of text.
   Rich Text: This will let you use a few special HTML-style tags to affect only part of the text with a color, italics, and so on.
   Alignment: This changes where the text will be positioned in the box. The first three buttons adjust the horizontal position; the second three change the vertical position.
   Horizontal Overflow / Vertical Overflow: These adjust whether the text can be drawn outside the box, wrapped to a new line, or clipped off. 
   Best Fit: This will automatically adjust the size of the text to fit a dynamically size-changing element, within a Min and Max value.
   Color/Material: These change the color and texture of the text as and when it is drawn.
   Shadow (Script): This component adds a drop shadow to the text, just like what you might add in Photoshop.
For each of our text elements, we need to use a Font Size of 120 and the Alignment should be centered. For the Turn Indicator text element, we also need to use a Font Size of 120 and it also needs to be centered. The last thing to do is to change the Color of the text elements to a dark gray so that we can easily see it against the color of our buttons. Now, our board works and looks good too. Try taking a stab at adding your own images for the buttons. You will need two images: one for when the button sits there and one for when the button is pressed. Also, the default Arial font is boring. Find a new font to use for your game; you can import it just like any other asset for your game. Rotating devices If you have been testing your game so far, you have probably noticed that the game only looks good when we hold the device in the landscape mode. When it is held in the portrait mode, everything becomes squished as the squares and turn indicator try to share the little amount of horizontal space that is available. As we have already set up our game board in one layout mode, it becomes a fairly simple matter to duplicate it for the other mode. However, it does require duplicating a good portion of our code to make it all work properly: To make a copy of our game board, right-click on it and select Duplicate from the menu. Rename the duplicate game board to GameBoard_Portrait. This will be the board used when our player's device is in the portrait mode. To see our changes while we are making them, turn off the landscape game board and select 3:2 Portrait (2:3) from the drop-down list at the top left of the Game window. 
Select the Board object that is a child of GameBoard_Portrait. In its Inspector panel, we need to change the anchors to use the top two-thirds of the screen rather than the left two-thirds. The values of 0 for Min X, 0.33 for Min Y, and 1 for both Max X and Max Y will make this happen. Next, Turn Indicator needs to be selected and moved to the bottom-third of the screen. Values of 0 for Min X and Min Y, 1 for Max X, and 0.33 for Max Y will work well here. Now that we have our second board set up, we need to make a place for it in our code. So, open the TicTacToeControl script and scroll to the top so that we can start with some new variables. The first variable that we are going to add will give us access to the turn indicator for the portrait mode of our screen: public Text turnIndicatorPortrait; The next three variables will keep track of the buttons, square images, and owner text information. These are just like the three lists that we created earlier to keep track of the board while it is in the landscape mode: public GameObject[] buttonsPortrait; public Image[] squaresPortrait; public Text[] squareTextsPortrait; The last two variables that we are going to add to the top of our script here are for keeping track of the two canvas objects that actually draw our game boards. We need these so that we can switch between them as the user turns their device around. public GameObject gameBoardGroupLandscape; public GameObject gameBoardGroupPortrait; Next, we need to update a few of our functions so that they make changes to both boards and not just the landscape board. These first two lines turn the portrait board's buttons off and the squares on when the player clicks on them. They need to go at the beginning of our ButtonClick function. 
Put them right after the two lines where we use SetActive on the buttons and squares for the landscape set: buttonsPortrait[squareIndex].SetActive(false); squaresPortrait[squareIndex].gameObject.SetActive(true); These two lines change the image and text for the controlling square in favor of the X player for the Portrait set. They go inside the if statement of our ButtonClick function, right after the two lines that do the same thing for the landscape set: squaresPortrait[squareIndex].sprite = xImage; squareTextsPortrait[squareIndex].text = "X"; This line goes at the end of that same if statement and changes the Portrait set's turn indicator text: turnIndicatorPortrait.text = "O's Turn"; The next two lines change image and text in favor of the O player. They go after the same lines for the Landscape set, inside of the else statement of our ButtonClick function: squaresPortrait[squareIndex].sprite = oImage; squareTextsPortrait[squareIndex].text = "O"; This is the last line that we need to add to our ButtonClick function; it needs to be put at the end of the else statement. It simply changes the text indicating whose turn it is: turnIndicatorPortrait.text = "X's Turn"; Next, we need to create a new function to control the changing of our game boards when the player changes the orientation of their device. We will start by defining the Update function. This is a special function called by Unity for every single frame. It will allow us to check for a change in orientation on every frame: public void Update() { The function begins with an if statement that uses Input.deviceOrientation to find out how the player's device is currently being held. It compares the result to the LandscapeLeft orientation to see whether the device is being held sideways, with the home button on the left side. 
If the result is true, the Portrait set of GUI elements is turned off while the Landscape set is turned on: if(Input.deviceOrientation == DeviceOrientation.LandscapeLeft) { gameBoardGroupPortrait.SetActive(false); gameBoardGroupLandscape.SetActive(true); } The next else if statement checks for the Portrait orientation, where the home button is at the bottom. If true, it turns the Portrait set on and the Landscape set off: else if(Input.deviceOrientation == DeviceOrientation.Portrait) { gameBoardGroupPortrait.SetActive(true); gameBoardGroupLandscape.SetActive(false); } This else if statement checks LandscapeRight, when the home button is on the right side: else if(Input.deviceOrientation == DeviceOrientation.LandscapeRight) { gameBoardGroupPortrait.SetActive(false); gameBoardGroupLandscape.SetActive(true); } Finally, we check the PortraitUpsideDown orientation, which is when the home button is at the top of the device. Don't forget the extra bracket to close off and end the function: else if(Input.deviceOrientation == DeviceOrientation.PortraitUpsideDown) { gameBoardGroupPortrait.SetActive(true); gameBoardGroupLandscape.SetActive(false); } } We now need to return to Unity and select our GameControl object so that we can set up our new Inspector properties. Drag and drop the various pieces from the portrait game board in Hierarchy to the relevant slot in Inspector, Turn Indicator to the Turn Indicator Portrait slot, the buttons to the Buttons Portrait list in order, the squares to Squares Portrait, and their text children to the Square Texts Portrait. Finally, drop the GameBoard_Portrait object in the Game Board Group Portrait slot. We should now be able to play our game and see the board switch when we change the orientation of our device. You will have to either build your project on your device or connect using Unity Remote, because the Editor on your computer, unlike your mobile device, has no device orientation to report. 
Be sure to set the display mode of your Game window to Remote in the top-left corner so that it will update along with your device while using Unity Remote. Menus and victory Our game is nearly complete. The last things that we need are as follows: An opening menu where players can start a new game A bit of code for checking whether anybody has won the game A game over menu for displaying who won the game Setting up the elements Our two new menus will be quite simple when compared to the game board. The opening menu will consist of our game's title graphic and a single button, while the game over menu will have a text element to display the victory message and a button to go back to the main menu. Let's perform the following steps to set up the elements: Let's start with the opening menu by creating a new Canvas, just like we did before, and rename it as OpeningMenu. This will allow us to keep it separate from the other screens that we have created. Next, the menu needs an Image element and a Button element as children. To make everything easier to work with, turn off the game boards with the checkbox at the top of their Inspector windows. For our image object, we can drag our Title image to the Source Image slot. For the image's Rect Transform, we need to set the Pos X and Pos Y values to 0. We also need to adjust the Width and Height. We are going to match the dimensions of the original image so that it will not be stretched. Put a value of 320 for Width and 160 for Height. To move the image to the top half of the screen, put a 0 in the Pivot Y slot. This changes where the position is based on for the image. For the button's Rect Transform, we again need the value of 0 for both Pos X and Pos Y. We again need a value of 320 for the Width, but this time we want a value of 100 for the Height. To move it to the bottom half of the screen, we need a value of 1 in the Pivot Y slot. Next up is to set the images for the button, just like we did earlier for the game board. 
Put the ButtonNormal image in the Source Image slot. Change Transition to SpriteSwap and put the ButtonActive image in the Pressed Sprite slot. Do not forget to change Color to have an A value of 255 in the color picker so that our button is not partially faded. Finally, to change the button text for this menu, expand Button in the Hierarchy and select the Text child object. Right underneath Text in the Inspector panel for this object is a text field where we can change the text displayed on the button. A value of New Game here will work well. Also, change Font Size to 45 so that we can actually read it. Next, we need to create the game over menu. So, turn off our opening menu and create a new canvas for our game over menu. Rename it as GameOverMenu so that we can continue to be organized. For this menu, we need a Text element and a Button element as its children. We will set this one up in an almost identical way to the previous one. Both the text and the button need values of 0 for the Pos X and Pos Y slots, with a value of 320 for Width. The text will use a Height of 160 and a Pivot Y of 0. We also need to set its Font Size as 80. You can change the default text, but it will be overwritten by our code anyway. To center our text in the menu, select the middle buttons from the two sets next to the Alignment property. The button will use a Height of 100 and a Pivot Y of 1. Also, be sure that you set the Source Image, Color, Transition, and Pressed Sprite to the proper images and settings. The last thing to set is the button's text child. Set the default text to Main Menu and give it a Font Size of 45. That is it for setting up our menus. We have all the screens that we need to allow the player to interact with our game. The only problem is that we don't have any of the functionality to make them actually do anything. Adding the code To make our game board buttons work, we had to create a function in our script that they could reference and call when they were touched. 
The main menu's button will start a new game, while the game over menu's button will change screens to the main menu. We will also need to create a little bit of code to clear out and reset the game board when a new game starts. If we don't, it would be impossible for the player to play more than one round without restarting the whole app. Open the TicTacToeControl script so that we can make some more changes to it. We will start with the addition of three variables at the top of the script. The first two will keep track of the two new menus, allowing us to turn them on and off as per our need. The third is for the text object in the game over screen that will let us display a message based on the result of the game: public GameObject mainMenuGroup; public GameObject gameOverGroup; public Text victorText; Next, we need to create a new function. The NewGame function will be called by the button in the main menu. Its purpose is to reset the board so that we can continue to play without having to reset the whole application. public void NewGame() { The function starts by setting the game to start on the X player's turn. It then creates a new array of SquareStates, which effectively wipes out the old game board. It then sets the turn indicators for both the Landscape and Portrait sets of controls: xTurn = true; board = new SquareState[9]; turnIndicatorLandscape.text = "X's Turn"; turnIndicatorPortrait.text = "X's Turn"; We next loop through the nine buttons and squares for both the Portrait and Landscape controls. All of the buttons are turned on and the squares are turned off using SetActive, which is the same as clicking on the little checkbox at the top-left corner of the Inspector panel: for(int i=0;i<9;i++) { buttonsPortrait[i].SetActive(true); squaresPortrait[i].gameObject.SetActive(false); buttonsLandscape[i].SetActive(true); squaresLandscape[i].gameObject.SetActive(false); } The last three lines of code control which screens are visible when we change over to the game board. 
By default, it chooses to turn on the Landscape board and makes sure that the Portrait board is turned off. It then turns off the main menu. Don't forget the last curly bracket to close off the function: gameBoardGroupPortrait.SetActive(false); gameBoardGroupLandscape.SetActive(true); mainMenuGroup.SetActive(false); } Next, we need to add a single line of code to the end of the ButtonClick function. It is a simple call to check whether anyone has won the game after the buttons and squares have been dealt with: CheckVictory(); The CheckVictory function runs through the possible combinations for victory in the game. If it finds a run of three matching squares, the SetWinner function will be called and the current game will end: public void CheckVictory() { A victory in this game is a run of three matching squares. We start by checking the column that is marked by our loop. If the first square is not Clear, compare it to the square below; if they match, check it against the square below that. Our board is stored as a list but drawn as a grid, so we have to add three to go down a square. The else if statement follows with checks of each row. By multiplying our loop value by three, we will skip down a row on each loop. We'll again compare the square to SquareState.Clear, and then to the two squares to its right. If either set of conditions is correct, we'll send the first square in the set to another function to change our game screen: for(int i=0;i<3;i++) { if(board[i] != SquareState.Clear && board[i] == board[i + 3] && board[i] == board[i + 6]) { SetWinner(board[i]); return; } else if(board[i * 3] != SquareState.Clear && board[i * 3] == board[(i * 3) + 1] && board[i * 3] == board[(i * 3) + 2]) { SetWinner(board[i * 3]); return; } } The following code snippet is largely the same as the if statements that we just saw. However, these lines of code check the diagonals. 
If the conditions are true, we again send the winning square out to the other function to change the game screen. You probably also noticed the returns after the function calls. If we have found a winner at any point, there is no need to check any more of the board. So, we'll exit the CheckVictory function early: if(board[0] != SquareState.Clear && board[0] == board[4] && board[0] == board[8]) { SetWinner(board[0]); return; } else if(board[2] != SquareState.Clear && board[2] == board[4] && board[2] == board[6]) { SetWinner(board[2]); return; } This is the last little bit for our CheckVictory function. If no one has won the game, as determined by the previous parts of this function, we have to check for a tie. This is done by checking all the squares of the game board. If any one of them is Clear, the game has yet to finish and we exit the function. But, if we make it through the entire loop without finding a Clear square, we set the winner by declaring a tie: for(int i=0;i<board.Length;i++) { if(board[i] == SquareState.Clear) return; } SetWinner(SquareState.Clear); } Next, we create the SetWinner function that is called at several points in our CheckVictory function. This function is passed the winner of the game; it first turns on the game over screen and turns off the game boards: public void SetWinner(SquareState toWin) { gameOverGroup.SetActive(true); gameBoardGroupPortrait.SetActive(false); gameBoardGroupLandscape.SetActive(false); The function then checks to see who won and picks an appropriate message for the victorText object: if(toWin == SquareState.Clear) { victorText.text = "Tie!"; } else if(toWin == SquareState.XControl) { victorText.text = "X Wins!"; } else { victorText.text = "O Wins!"; } } Finally, we have the BackToMainMenu function. 
This is short and sweet; it is simply called by the button on the game over screen to switch back to the main menu: public void BackToMainMenu() { gameOverGroup.SetActive(false); mainMenuGroup.SetActive(true); } That is all of the code in our game. We have all of the visual pieces that make up our game and now, we also have all of the functional pieces. The last step is to put them together and finish the game. Putting them together We have our code and our menus. Once we connect them together, our game will be complete. To put it all together perform the following steps: Go back to the Unity Editor and select the GameControl object from the Hierarchy panel. The three new properties in its Inspector window need to be filled in. Drag the OpeningMenu canvas to the Main Menu Group slot and GameOverMenu to the Game Over Group slot. Also, find the text object child of GameOverMenu and drag it to the Victor Text slot. Next, we need to connect the button functionality for each of our menus. Let's start by selecting the button object child of our OpeningMenu canvas. Click on the little plus sign at the bottom right of its Button (Script) component to add a new functionality slot. Click on the circle in the center of the new slot and select GameControl from the new pop-up window, just like we did for each of our game board buttons. The drop-down list that currently says No Function is our next target. Click on it and navigate to TicTacToeControl | NewGame (). Repeat these few steps to add the functionality to the Button child of GameOverMenu, except select BackToMainMenu() from the list. The very last thing to do is to turn off both the game boards and the game over menu, using the checkbox in the top left of the Inspector. Leave only the opening menu on so that our game will start there when we play it. Congratulations! This is our game. 
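The index arithmetic inside CheckVictory is easy to get wrong, so it can help to sanity-check it outside Unity. The following plain Python sketch (purely illustrative, not part of the Unity project) mirrors the same column, row, and diagonal checks over a nine-element board list:

```python
# Mirror of CheckVictory's index arithmetic on a 9-element board.
# Squares hold 'X', 'O', or None (None plays the role of SquareState.Clear).
def check_victory(board):
    """Return 'X' or 'O' for a winner, 'Tie' for a full board, None otherwise."""
    for i in range(3):
        # Column i: squares i, i+3, i+6 (adding 3 moves down one row).
        if board[i] is not None and board[i] == board[i + 3] == board[i + 6]:
            return board[i]
        # Row i: squares i*3, i*3+1, i*3+2 (multiplying by 3 skips down a row).
        if board[i * 3] is not None and board[i * 3] == board[i * 3 + 1] == board[i * 3 + 2]:
            return board[i * 3]
    # The two diagonals both pass through the center square, index 4.
    if board[0] is not None and board[0] == board[4] == board[8]:
        return board[0]
    if board[2] is not None and board[2] == board[4] == board[6]:
        return board[2]
    # No winner: it is a tie only if no square is still clear.
    return 'Tie' if None not in board else None

print(check_victory(['X', 'X', 'X',
                     'O', 'O', None,
                     None, None, None]))  # X
```

Feeding it a few boards by hand confirms that each winning line maps to the right trio of indices before you trust the same arithmetic in C#.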
All of our buttons are set, we have multiple menus, and we even created a game board that changes based on the orientation of the player's device. The last thing to do is to build it for our devices and go show it off. A better way to build to device Now for the part of the build process that everyone itches to learn. There is a quicker and easier way to have your game built and play it on your Android device. The long and complicated way is still very good to know. Should this shorter method fail, and it will at some point, it is helpful to know the long method so that you can debug any errors. Also, the short path is only good for building for a single device. If you have multiple devices and a large project, it will take significantly more time to load them all with the short build process. Follow these steps: Start by opening the Build Settings window. Remember, it can be found under File at the top of the Unity Editor. If you have not already done so, save your scene. The option to save your scene is also found under File at the top of the Unity Editor. Click on the Add Current button to add our current scene, also the only scene, to the Scenes In Build list. If this list is empty, there is no game. Be sure to change your Platform to Android, if you haven't already done so. Do not forget to set the Player Settings. Click on the Player Settings button to open them up in the Inspector window. At the top, set the Company Name and Product Name fields. Values of TomPacktAndroid and Ch2 TicTacToe respectively for these fields will match the included completed project. Remember, these fields will be seen by the people playing your game. The Bundle Identifier field under Other Settings needs to be set as well. The format is still com.CompanyName.ProductName, so com.TomPacktAndroid.Ch2.TicTacToe will work well. In order to see our cool dynamic GUI in action on a device, there is one other setting that should be changed. 
Click on Resolution and Presentation to expand the options. We are interested in Default Orientation. The default is Portrait, but this option means that the game will be fixed in the portrait display mode. Click on the drop-down menu and select Auto Rotation. This option tells Unity to automatically adjust the game to be upright irrespective of the orientation in which it is being held. The new set of options that popped up when Auto Rotation was selected allows you to limit the orientations that are supported. Perhaps you are making a game that needs to be wider and held in landscape orientation. By unchecking Portrait and Portrait Upside Down, Unity will still adjust (but only for the remaining orientations). On your Android device, the controls are along one of the shorter sides; these usually are the home, menu, and back or recent apps buttons. This side is generally recognized as the bottom of the device and it is the position of these buttons that dictates what each orientation is. The Portrait mode is when these buttons are down relative to the screen. The Landscape Right mode is when they are to the right. The pattern begins to become clear, does it not? For now, leave all of the orientation options checked and we will go back to Build Settings. The next step (and this is very important) is to connect your device to your computer and give it a moment to be recognized. If your device is not the only one connected to your computer, this shorter build path will fail. In the bottom-right corner of the Build Settings window, click on the Build And Run button. You will be asked to give the application file, the APK, a relevant name, and save it to an appropriate location. A name such as Ch2_TicTacToe.apk will be fine, and it is suitable enough to save the file to the desktop. Click on Save and sit back to watch the wonderful loading bar that is provided. After the application is built, there is a pushing to device step. 
This means that the build was successful and Unity is now putting the application on your device and installing it. Once this is done, the game will start on the device and the loading will be done. We just learned about the Build And Run button provided by the Build Settings window. This is quick, easy, and free from the pain of using the command prompt; isn't the short build path wonderful? However, if the build process fails for any reason including being unable to find the device, the application file will not be saved. You will have to go through the entire build process again, if you want to try installing again. This isn't so bad for our simple Tic-tac-toe game, but it might consume a lot of time for a larger project. Also, you can only have one Android device connected to your computer while building. Any more devices and the build process is a guaranteed failure. Unity also doesn't check for multiple devices until after it has gone through the rest of the potentially long build process. Other than these words of caution, the Build And Run option is really quite nice. Let Unity handle the hard part of getting the game to your device. This gives us much more time to focus on testing and making a great game. If you are up for a challenge, this is a tough one: creating a single player mode. You will have to start by adding an extra button to the opening screen for selecting the second game mode. Any logic for the computer player should go in the Update function. Also, take a look at Random.Range for randomly selecting a square to take control. Otherwise, you could do a little more work and make the computer search for a square where it can win or create a line of two matches. 
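As a starting point for the single-player challenge, the computer's simplest move is to collect the indices of the clear squares and pick one at random. In Unity you would use Random.Range over that list inside Update; the idea can be sketched in plain Python (a hypothetical helper, not code from the book):

```python
import random

# The simplest possible computer opponent: pick any clear square at random.
# In Unity C#, rng.choice would become Random.Range over the same list.
def pick_computer_move(board, rng=random):
    """Return the index of a randomly chosen clear (None) square, or None if the board is full."""
    open_squares = [i for i, square in enumerate(board) if square is None]
    if not open_squares:
        return None  # Board is full; CheckVictory will already have declared a tie.
    return rng.choice(open_squares)

board = ['X', None, 'O',
         None, 'X', None,
         'O', None, None]
print(pick_computer_move(board))  # one of 1, 3, 5, 7, or 8
```

A smarter opponent would first scan the open squares for an immediate win or block, using the same row, column, and diagonal checks as CheckVictory, and fall back to a random pick only when neither exists.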
Summary To learn more about Unity game development for Android, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Unity Android Game Development by Example Beginner's Guide Unity 5 for Android Essentials

Packt
08 May 2015
2 min read

Learning NGUI for Unity

NGUI is a fantastic GUI toolkit for Unity 3D, allowing fast and simple GUI creation and powerful functionality, with practically zero code required. NGUI is a robust UI system that is both powerful and optimized. It is an effective plugin for Unity, which gives you the power to create beautiful and complex user interfaces while reducing performance costs. Compared to Unity's GUI features, NGUI is much more powerful and optimized. GUI development in Unity requires users to create UI features by scripting lines that display labels, textures, and other UI elements on the screen. These lines have to be written inside a special function, OnGUI(), that is called for every frame. However, this is no longer necessary with NGUI, as it makes the GUI process much simpler. This book by Charles Pearson, the author of Learning NGUI for Unity, will help you leverage the power of NGUI for Unity to create stunning mobile and PC games and user interfaces. This book covers the following topics:
Getting started with NGUI
Creating NGUI widgets
Enhancing your UI
C# with NGUI
Atlas and font customization
The in-game user interface
3D user interface
Going mobile
Screen sizes and aspect ratios
User experience and best practices
This book is a practical tutorial that will guide you through creating a fully functional and localized main menu along with 2D and 3D in-game user interfaces. The book starts by teaching you about NGUI's workflow and creating a basic UI, before gradually moving on to building widgets and enhancing your UI. You will then switch to the Android platform to take care of different issues mobile devices may encounter. By the end of this book, you will have the knowledge to create ergonomic user interfaces for your existing and future PC or mobile games and applications developed with Unity 3D and NGUI. The best part of this book is that it covers the user experience and also talks about the best practices to follow when using NGUI for Unity. 
If you are a Unity 3D developer who wants to create an effective and user-friendly GUI using NGUI for Unity, then this book is for you. Prior knowledge of C# scripting is expected; however, no knowledge of NGUI is required.
Packt
20 Mar 2012
10 min read

Getting Started with GameSalad

Let's get to it, shall we? System requirements In order for you to run GameSalad and create amazingly awesome games, you must meet the minimum system requirements, which are as follows:
   Intel Mac (any Mac from 2006 and above)
   Mac OS X 10.6 or higher
   At least 1 GB RAM
   A device running iOS (iPad, iPhone 3G and up, or iPod Touch)
If your computer exceeds these requirements, perfect! If not, you will need to upgrade your computer. Keep in mind, these are the minimum requirements; having a computer with better specs is recommended. Let's get into GameSalad Let's start by downloading GameSalad and registering for an account. Go to GameSalad's website, www.gamesalad.com, and click the "Download Free App – GameSalad Creator" button. While you are waiting for GameSalad to download, you should sign up for a free account. At the top of the page, click Sign Up, enter your email address, and create a username and password. You have two options for GameSalad membership: you can keep the Basic Pricing, which is completely free, or select Professional Pricing. The difference is that when you publish your app, you will have a Created with GameSalad splash screen; not a big deal, right? Especially not when you can get this awesome program for free! The Professional Pricing, which is $499 (USD) per year, gives you all the features of the free version of GameSalad, plus it allows you to use iAds, Game Center, Promotional Links, your own Custom Splash Screen, and Priority Technical Support. This does not include your Apple developer cost, which is $99 a year. Other tools that are recommended for game development:
   Adobe Photoshop (http://www.adobe.com/products/photoshop.html) or a free equivalent, such as Inkscape, Gimp, or Pixelmator
   A drawing tablet (makes creating sprites much easier, but not required)
Getting familiar with GameSalad's interface Once you open GameSalad, you are presented with several options on the screen. 
Following are the options:

- Home: Shows you the latest GameSalad links (success stories, their latest game release, and so on).
- News: Self-explanatory; this shows you the latest update notes and what is new in the GS community.
- Start: The getting started screen; here you have video tutorials, Wiki docs, the blog, and more.
- Profile: Shows your GameSalad profile page, messages, followers, and likes.
- New: All your new projects, blank templates, and various bare-bones templates to get you started.
- Recent: Shows all of your recently saved projects.
- Portfolio: Shows all your apps published through GameSalad.
- Back/Forward buttons: Used when navigating back and forth between windows.
- Web Preview: Allows you to see what your game will look like within the browser (HTML5).
- Home: Takes you right back to the project's main window.
- Publish: Brings up the Publish window; here you can choose to deploy your game to the web, iPhone, iPad, Mac, or Android.
- Scenes: Gives you a drop-down menu of all your scenes.
- Feedback: Have some thoughts about GameSalad? Click this to send them to the creators!
- Preview: At the main menu, or while editing an actor, this starts your game from the beginning. If you are in a level, it will preview the level.
- Help: Brings up the GameSalad documentation, which lists many help topics.
- Target Platform and Orientation: This drop-down menu gives you your device options: iPhone Landscape, iPhone Portrait, GameSalad.com, iPad Landscape, iPad Portrait, and 720p HD.
- Enable Resolution Independence (only when an iPhone or iPad device is set): Check this option if you are creating a game specifically for the iPhone 4, 4S, iPad, or Kindles and Nooks. This takes your high-resolution images and converts them for the iPhone 3GS, 3G, and iPhone (1st gen).
- Scenes Tab: Switch to this to see all your wonderful levels!
- Actors Tab: Select this tab to see all the actors in your game project.
From this tab, you can group different types of actors, such as platforms and enemies. This comes in handy when an actor has to collide with numerous other actors (enemies or platforms).

- + button: Adds a level.
- - button (when a level is selected): Deletes a level.

Inspector (with Game selected):

- Actors: Here you will see all your in-game items (players, platforms, collectables, and so on).
- Attributes: Here you can edit all the attributes of the game, such as the display size.
- Devices: Here you can edit all the settings for the mouse, touch screen, accelerometer, screen, and audio.

Inspector (with Scene selected):

- Attributes: Here you can edit all the attributes of the current level, such as the size of the level, screen wrap (X, Y), gravity, background color, camera settings, and autorotate.
- Layers: Here you can create numerous layers with scrollable on or off. For example, a layer called UI with scrollable deselected will hold all your user interface items, and they will stay on the screen.

Library (with Behaviors selected):

- Standard: All the standard GameSalad behaviors (movements, change actor attributes, and more).
- Custom: Your own custom behaviors. Let's say you needed the same behavior throughout numerous actors, but you didn't want to keep re-adding and changing the behavior for each actor. Once you create the behavior, drag it into this box and you can reuse it as much as you want.
- Pro: All the professional behaviors (only available with a paid professional membership). These include Game Center Connect, iAd, and Open URL.

Library (with Images selected):

- Project: Shows all the images imported into this project.
- Purchased: A new feature that shows the images you have purchased through GameSalad's Marketplace.
(When you click Purchase Images..., you will be taken to the GameSalad Marketplace, where you will find a plethora of content packs and more to purchase and import into your game.) When you click the "+" button, you can import images from your hard drive; alternatively, you can drag them directly into the Library from the Finder.

- Library (with Sounds selected): This shows you all of the sound effects and music that you have imported into your project. As with images, when you click the "+" button you can import sound effects or music from your hard drive, or drag them directly in from the Finder.
- Actor Mode: This involves normal mouse functions; it allows you to select actors within the level. Following is the screenshot of the icon:
- Camera Mode: It allows you to edit the camera position and the scrolling hot spots for characters that control the camera. Following is the screenshot of the icon:
- Reset Scene: While previewing your level, if this button is pressed, everything will go back to its initial state. Following is the screenshot of the icon:
- Play: This will start a preview of the current level. This is different from the green Project Preview button, as this will only preview the current level, not the whole project. When you complete the level, an alert will appear telling you the scene has ended, and you can either choose to preview the next level or reset the current scene. Following is the screenshot of the icon:
- Show Initial State: If you have run a preview and want to see the initial state without ending the preview, pause the preview and click on the following icon to see the initial state. Following is the screenshot of the icon:

For now, let's click New | My Great Project. This is a fresh project; everything is empty. You can see that you have one level so far, but you can add more at a later time. See the Scenes and Actors tabs?
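GameSalad itself involves no scripting, but the idea behind Reset Scene and Show Initial State is a familiar one: the scene keeps a snapshot of its starting values and restores them on demand. A minimal Python sketch of that pattern (the attribute names are invented for illustration):

```python
import copy

# Sketch of the reset pattern behind "Reset Scene": keep a copy of the
# scene's initial state and restore it on demand.
class Scene:
    def __init__(self, attributes):
        self._initial = copy.deepcopy(attributes)  # snapshot taken at load time
        self.attributes = attributes

    def reset(self):
        """Reset Scene: everything goes back to its initial state."""
        self.attributes = copy.deepcopy(self._initial)

scene = Scene({"player_x": 0, "score": 0})
scene.attributes["player_x"] = 120   # the preview runs and changes things...
scene.attributes["score"] = 50
scene.reset()                        # ...and Reset Scene puts it all back
```

Show Initial State would simply display `_initial` without discarding the current preview values.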
Currently, Scenes is selected; this shows you all of your levels. If you click the Actors tab, you will see all your actors (game objects, characters, collectables, and so on) in the game. You can also arrange the actors into actor tags. To give you an idea of what these are useful for, take for example a game with 30 different enemies: when setting up your collisions within behaviors, you won't have to set up 30 different collision rules. Instead, if you put all the enemies in a tag named Enemies, you can create a single collision behavior for all actors with that tag! This saves a lot of time. We will get into more detail about actor tags when we create some games later in the book.

If you double-click on the Initial Scene, you will be taken to the level editor. Before we do that, let's go through the buttons shown in the following screenshot:

The descriptions of the buttons in the previous screenshot are as follows:

Seems pretty easy, right? It is! GameSalad's user interface is simple. Even if you don't know what a certain button does, just hover your mouse over it and a tooltip will tell you what it does. Even though it's a very simple user interface, it is very powerful. Take, for example, something as simple as the Enable Resolution Independence option. Normally you would have to create two sets of images: a high-resolution, retina-friendly set and a lower-quality set for non-retina displays. With this option, all you have to do is create the high-resolution set; choose it and GameSalad automatically creates the lower-quality set of images for non-retina devices. How great is that? Such a simple option, and yet it saves so much time and effort; and isn't simplicity what everyone wants?
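The time-saving idea behind actor tags can be sketched in code, even though GameSalad itself requires none. A hedged Python illustration; the tag name Enemies comes from the text above, while the actor names are made up:

```python
# Illustrative sketch only: GameSalad has no scripting, but this shows
# why one tag-based rule beats 30 per-actor collision rules.

# Each actor carries a set of tags (e.g. the "Enemies" tag from the text).
actors = {
    "Goblin":   {"Enemies"},
    "Troll":    {"Enemies"},
    "Dragon":   {"Enemies"},
    "Platform": {"Platforms"},
}

def collides_with_tag(actor_tags, tag):
    """One collision rule covers every actor carrying the tag."""
    return tag in actor_tags

# A single "collide with Enemies" behavior applies to all enemy actors:
hits = [name for name, tags in actors.items()
        if collides_with_tag(tags, "Enemies")]
```

Adding a 31st enemy then means adding it to the tag, not writing a new rule.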
Now let's get into the scene editor

Double-click our initial scene and you will see the Scene Editor. Yes, it may be a little daunting, but once you get used to the user interface, it is really quite simple. Let's break down all the buttons and see what they do:

What do all these buttons mean? Following is a description of all the buttons and boxes:

There we go! The GameSalad interface really is that easy to navigate! In this article, you set up an account with GameSalad, downloaded and installed it, and learned how to use the interface. GameSalad has a simple interface, but it is really powerful. As we saw earlier, an option as simple as Resolution Independence takes a single click, and yet it removes the work of creating a second set of images for development. This is what makes GameSalad so great: a simple user interface that is nonetheless very powerful. What is most amazing is that there's no programming involved whatsoever! For those who can't program a full game from scratch, this is exactly what's wanted: simple, quick, and super powerful.
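The work Resolution Independence automates can be sketched in Python. This is only an illustration of the idea, not GameSalad's actual implementation: the asset names are invented, and the exact halving rule is an assumption based on how retina and non-retina image sets typically relate:

```python
# Sketch of what "Enable Resolution Independence" saves you from doing
# by hand: from one high-resolution (retina) image set, derive the
# half-size dimensions a non-retina set would use. Asset names invented.

retina_assets = {
    "player.png":   (128, 128),
    "platform.png": (512, 64),
    "coin.png":     (64, 64),
}

def non_retina_size(size):
    """Halve each dimension for the lower-quality set (assumed rule)."""
    w, h = size
    return (w // 2, h // 2)

non_retina_assets = {name: non_retina_size(size)
                     for name, size in retina_assets.items()}
```

With the option enabled, you author only `retina_assets` and the tool derives the rest.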


Creating a Custom HUD

Packt
29 Apr 2013
5 min read
(For more resources related to this topic, see here.)

Mission Briefing

In this project we will create a HUD that can be used within a medieval RPG and that will fit nicely into the provided Epic Citadel map, making use of Scaleform and ActionScript 3.0 in Adobe Flash CS6. As usual, we will follow a simple step-by-step process from beginning to end to complete the project. Here is the outline of our tasks:

- Setting up Flash
- Creating our HUD
- Importing Flash files into UDK

Setting up Flash

Our first step will be setting up Flash so that we can create our HUD. To do this, we must first install the Scaleform Launcher.

Prepare for Lift Off

At this point, I will assume that you have run Adobe Flash CS6 at least once beforehand. If not, you can skip ahead to where we actually import the .swf file into UDK. Alternatively, you can try some other way to create a Flash animation, such as FlashDevelop, Flash Builder, or SlickEdit; but that will have to be done on your own.

Engage Thrusters

The first step will be to install the Scaleform Launcher. The launcher will make it very easy for us to test our Flash content using the GFx hardware-accelerated Flash Player, which is what UDK will use to play it. Let's get started.

Open up Adobe Flash CS6 Professional. Once the program starts up, open Adobe Extension Manager by going to Help | Manage Extensions.... You may see the menu say Performing configuration tasks, please wait.... This is normal; just wait for it to bring up the menu as shown in the following screenshot:

Click on the Install option from the top menu on the right-hand side of the screen. In the file browser, locate the path of your UDK installation and then go into the Binaries\GFx\CLIK Tools folder. Once there, select the ScaleformExtensions.mxp file and then select OK. When the agreement comes up, press the Accept button; then select whether you want the extension installed for just you or for everyone on your computer.
If Flash is currently running, you will get a window telling you that the extension will not be ready until you restart the program. Close the manager and restart Flash.

With your reopened version of Flash, start the Scaleform Launcher by clicking on Window | Other Panels | Scaleform Launcher. At this point you should see the Scaleform Launcher panel come up as shown in the following screenshot:

At this point all of the options are grayed out, as the launcher doesn't yet know how to access the GFx player, so let's set that up now. Click on the + button to add a new profile. In the profile name section, type in GFXMediaPlayer. Next, we need to reference the GFx player. Click on the + button in the player EXE section. Go to your UDK directory, then Binaries\GFx, and select GFxMediaPlayerD3d9.exe. It will then ask you to give a name for the Player Name field with the value already filled in; just hit the OK button.

UDK uses DirectX 9 for rendering by default. However, since GDC 2011 it has been possible to use DirectX 11. If your project is using DirectX 11, check out http://udn.epicgames.com/Three/DirectX11Rendering.html and use the DX11 player instead.

In order to test our game, we will need to hit the button that says Test with: GFxMediaPlayerD3d9, as shown in the following screenshot:

If you know the resolution at which you want your final game to run, you can set up multiple profiles to preview how your UI will look at a specific resolution. For example, to preview at a resolution of 960 x 720, alter the command params field after %SWF PATH% to include the text -res 960:720.

Now that we have the player loaded, we need to install the CLIK library for our use. Go to the Preferences menu by selecting Edit | Preferences. Click on the ActionScript tab and then click on the ActionScript 3.0 Settings... button. From there, add a new entry to the Source path section by clicking on the + button.
After that, click on the folder icon to browse to the folder we want. Add an additional path to our CLIK directory in the file explorer by first going to your UDK installation directory and then going to Development\Flash\AS3\CLIK. Click on the OK button and drag-and-drop the newly created Scaleform Launcher panel to the bottom-right corner of the interface.

Objective Complete - Mini Debriefing

Alright, Flash is now set up for us to work with Scaleform, which for all intents and purposes is probably the hardest part about working with Scaleform. Now that we have taken care of it, let's get started on the HUD!

As long as you have administrator access to your computer, these settings will persist whenever you work with Flash. However, if you do not, you will have to run through all of these settings every time you want to work on Scaleform projects.
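A launcher profile ultimately boils down to a command line: the player EXE, the .swf path (the %SWF PATH% placeholder), and any extra params. A hedged Python sketch of that assembly; the EXE path and SWF name are placeholders, and only the -res 960:720 flag comes from the text above:

```python
# Sketch of the command a Scaleform Launcher profile assembles.
# Paths below are hypothetical; -res 960:720 is the resolution param
# mentioned in the tutorial text.

def build_test_command(player_exe, swf_path, extra_params=""):
    """Mimic the launcher: player EXE, then %SWF PATH%, then params."""
    parts = [player_exe, swf_path]
    if extra_params:
        parts.extend(extra_params.split())
    return parts

cmd = build_test_command(
    r"C:\UDK\Binaries\GFx\GFxMediaPlayerD3d9.exe",  # placeholder install path
    "HUD.swf",                                       # placeholder movie name
    "-res 960:720",
)
```

Setting up one profile per target resolution is just a matter of varying that last argument.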